• technocrit@lemmy.dbzer0.com · 16 hours ago

    Why would they “prove” something that’s completely obvious?

    The burden of proof is on the grifters who have overwhelmingly been making false claims and distorting language for decades.

    • TheRealKuni@midwest.social · 14 hours ago

      Why would they “prove” something that’s completely obvious?

      I don’t want to be critical, but I think if you step back a bit and look at what you’re saying, you’re asking why we would bother to experiment and prove what we think we know.

      That’s a perfectly normal and reasonable scientific pursuit. Yes, in a rational society the burden of proof would be on the grifters, but that’s never how it actually works. It’s always the doctors disproving the cure-all, not the snake oil salesmen failing to prove their own product.

      There is value in this research, even if it fits what you already believe on the subject. I would think you would be thrilled to have your hypothesis confirmed.

        • Hoimo@ani.social · 9 hours ago

          I think if you look at child development research, you’ll see that kids can learn to do crazy shit with very little input, waaay less than you’d need to train a neural net to do the same. So either kids are the luckiest neural nets and always make the correct adjustment after failing, or they have some innate knowledge that isn’t pattern-based at all.

          There are even some examples in linguistics specifically, where children tend towards certain grammar rules despite all the evidence in their language pointing to another rule. Pure pattern-matching would find the real-world rule without first modelling a different (universally common) rule.
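          To make concrete what I mean by “pure pattern-matching”, here’s a rough sketch. The rule names and data are made up for illustration: a learner that just adopts whichever candidate rule its input supports most often lands on the real-world rule straight away, with no detour through some other, cross-linguistically common rule the way kids reportedly do.

          from collections import Counter

          def pattern_matching_learner(examples):
              # Adopt whichever candidate rule the training data supports most often.
              votes = Counter(rule for _, rule in examples)
              return votes.most_common(1)[0][0]

          # Hypothetical input: every sentence the learner hears is consistent
          # with "rule_B", the rule actually present in the ambient language.
          observed = [(f"sentence_{i}", "rule_B") for i in range(50)]

          print(pattern_matching_learner(observed))  # -> rule_B, immediately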

    • yeahiknow3@lemmings.world · 16 hours ago

      They’re just using the terminology that’s widespread in the field. In a sense, the paper’s purpose is to prove that this terminology is unsuitable.

      • technocrit@lemmy.dbzer0.com · 16 hours ago

        I understand that people in this “field” regularly use pseudo-scientific language (I actually deleted that part of my comment).

        But the terminology has never been suitable, so it shouldn’t be used in the first place. It presupposes the very hypothesis that they’re supposedly “disproving”. They’re feeding into the grift, because that’s what the field is. That’s how they all get paid the big bucks.

      • limelight79@lemmy.world · 8 hours ago

        Yep. I’m retired now, but before I retired a month or so ago, I was working on a project that had relied on several hundred people back in 2020. “Why can’t AI do it?”

        The people I worked with are continuing the research and putting the AI up against the human coders, but…there was definitely an element of “AI can do that, we won’t need people next time.” I sincerely hope management listens to reason. Because our decisions could potentially lead to people being fired, I think we were able to push back on “AI can make all of these decisions”…for now.

        The AI people were all in; they were ready to build an interface that told the human coders what the AI would recommend for each item. Errrm, no, that’s not how an independent test works. We had to reel them back in.