• CanadaPlus@lemmy.sdf.org · 22 hours ago

    I mean, nitpick, but if you blamed mathematics you actually would be. The observation that AI/LLMs are highly unreliable and don’t appear to be getting any better is empirical.

        • cassandrafatigue@lemmy.dbzer0.com · 21 hours ago

          No, but Turing was involved, and so was the guy who wrote ELIZA.

          The tech isn’t new. That’s all the effort I’m willing to put in for this trash.

          • CanadaPlus@lemmy.sdf.org · edited 9 hours ago

            And that paper’s name? Albert Einstein. I can’t find anything on Weizenbaum and Turing authoring together. Weizenbaum seems to have written mostly prose and code; he’s not really remembered for mathematical innovations, although math was obviously his original field.

            Back in the ’50s, people thought conventional algorithms, the kind everybody here has worked with, were going to reach human intelligence. They could play chess, and chess is smart-guy stuff, so obviously recognising a bird should be easy, right? Well, they figured out that wasn’t right, and so began the first AI winter.

            The tech of deep neural nets is in fact fairly new. Like, arguably it didn’t become a thing until the Cold War was ending, although there were a lot of precursors, and it kind of arrived gradually.

              • CanadaPlus@lemmy.sdf.org · edited 1 hour ago

                Any comments on how you attempted to lie to us all there? To win an internet argument?

                It is. It’s one that has hidden layers, as opposed to a shallow neural net, which does not. Shallow neural nets aren’t really a thing anymore, so the “deep” is usually dropped, but historically things like the perceptron go back further, and they’re conceptually simpler to update during training. They also can’t really deal with anything nonlinear, which the sketch below makes concrete.
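
                To illustrate: here’s a minimal toy sketch in plain NumPy (my own made-up example, not from any of the papers or people discussed) of why a net with no hidden layer can’t handle nonlinear problems. XOR isn’t linearly separable, so no single linear boundary classifies it, but one hidden layer is enough.

                ```python
                import numpy as np

                rng = np.random.default_rng(0)
                X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
                y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

                def sigmoid(z):
                    return 1.0 / (1.0 + np.exp(-z))

                # One hidden layer of 4 units, trained by plain gradient descent.
                W1 = rng.normal(size=(2, 4))
                b1 = np.zeros((1, 4))
                W2 = rng.normal(size=(4, 1))
                b2 = np.zeros((1, 1))

                lr = 1.0
                for _ in range(5000):
                    h = sigmoid(X @ W1 + b1)     # hidden activations
                    out = sigmoid(h @ W2 + b2)   # network output
                    # Backpropagation for a squared-error loss.
                    d_out = (out - y) * out * (1 - out)
                    d_h = (d_out @ W2.T) * h * (1 - h)
                    W2 -= lr * (h.T @ d_out)
                    b2 -= lr * d_out.sum(axis=0, keepdims=True)
                    W1 -= lr * (X.T @ d_h)
                    b1 -= lr * d_h.sum(axis=0, keepdims=True)

                print(out.round(2).ravel())  # should approach [0, 1, 1, 0]

                # The shallow version, out = sigmoid(X @ W + b) with the same
                # update rule, stalls near 0.5 on all four inputs: no linear
                # decision boundary separates XOR's classes.
                ```

                Dropping the hidden layer reduces the whole net to one linear map followed by a squashing function, which is why the perceptron era hit a wall on exactly this kind of problem.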