• squaresinger@lemmy.world
    1 day ago

    They aren’t a technical bug, they’re a UX bug. Or would you claim that an LLM that outputs 100% non-factual hallucinations and no factual information at all is just as desirable as one that doesn’t?

    Btw, LLMs don’t contain any traditional code at all.