• DavidGarcia@feddit.nl · 2 days ago

    phi-4 is the only one I'm aware of that was deliberately trained to refuse instead of hallucinating. It's mind-blowing to me that that isn't standard; everyone is trying to maximize benchmarks at all costs.

    I wonder if diffusion LLMs will hallucinate less, since they inherently have error correction built into their inference process.
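    One way to picture that "error correction": a masked-diffusion decoder drafts every position in parallel, keeps only the tokens it is most confident about, and re-masks the rest so they can be re-predicted on later steps, meaning an early mistake can still be overwritten. Below is a minimal toy sketch of that loop; `toy_predict`, the vocabulary, and the confidence-based keep rule are made-up stand-ins for illustration, not any real diffusion LLM's API.

```python
import random

MASK = "<mask>"
VOCAB = ["the", "cat", "sat", "on", "the", "mat"]

def toy_predict(tokens):
    """Stand-in for a real denoising model: one (token, confidence) guess per position."""
    return [(random.choice(VOCAB), random.random()) for _ in tokens]

def diffusion_decode(length=8, steps=4):
    """Iteratively unmask: keep high-confidence tokens, re-mask and retry the rest."""
    tokens = [MASK] * length
    for step in range(1, steps + 1):
        proposals = toy_predict(tokens)
        # Keep a growing fraction of the most confident positions each step;
        # everything else goes back to MASK and gets re-predicted later, which
        # is the built-in chance to correct earlier errors.
        n_keep = (length * step) // steps
        ranked = sorted(range(length), key=lambda i: proposals[i][1], reverse=True)
        keep = set(ranked[:n_keep])
        tokens = [proposals[i][0] if i in keep else MASK for i in range(length)]
    return tokens

if __name__ == "__main__":
    print(" ".join(diffusion_decode()))
```

    Whether that iterative re-masking actually translates into fewer hallucinations in practice is an open question, not something this sketch demonstrates.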

    • MartianSands@sh.itjust.works · 2 days ago

      Even that won’t be truly effective. It’s all marketing, at this point.

      The problem of hallucination really is fundamental to the technology. If there's a way to prevent it, it won't be as simple as training the model differently.