Milestone passed with the debut of Linux 6.14-rc1.

  • max@lemmy.blahaj.zone · 2 days ago
    llms have no abstract reasoning, so while they can write an okay-sounding bug report, it's often wrong meow.

    i do think the linux foundation hires security people, and almost certainly the big contributors do.

    • demesisx@infosec.pub · 2 days ago

      Doesn’t the new Chinese model that was just released actually do abstract reasoning?

      DeepSeek-R1 leverages a pure RL approach, enabling it to autonomously develop chain-of-thought (CoT) reasoning, self-verification, and reflection—capabilities critical for solving complex problems.

      To my untrained self, that sounds like reasoning.

      • kryptonidas@lemmings.world · 2 days ago

        With chain of thought, it basically asks itself to generate related sub-questions and then answers each of those sub-questions.

        Basically it’s the same thing, just recursive. So in the same way it looks like it can tell you things, it also just looks like reasoning.

        Now it may well be an improvement, but it’s still basically “I have this word, what is statistically most likely to be the next word?” over and over again (rough sketch of that loop below).
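
        To make that concrete, here is a minimal sketch of the sub-question loop described above. The generate() function is a hypothetical stand-in for any LLM completion call, not a real API; every step, including the “reasoning”, is just more next-word prediction over a longer prompt.

            # Minimal sketch, assuming a hypothetical generate(prompt) helper
            # that wraps whatever LLM completion API you have available.
            def generate(prompt: str) -> str:
                """Placeholder: one pass of next-word prediction over the prompt."""
                raise NotImplementedError("swap in a real model call here")

            def chain_of_thought(question: str, n_steps: int = 3) -> str:
                # 1. Ask the model to break the question into sub-questions.
                subs = generate(f"List {n_steps} sub-questions needed to answer: {question}")
                # 2. Answer each sub-question with the same next-word predictor.
                notes = [
                    generate(f"Answer briefly: {sub}")
                    for sub in subs.splitlines() if sub.strip()
                ]
                # 3. Ask it to combine those partial answers into a final reply.
                context = "\n".join(notes)
                return generate(f"Given these notes:\n{context}\nAnswer: {question}")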