Milestone passed with the debut of Linux 6.14 rc1.

    • refalo@programming.dev · 19 hours ago

      Depends on your perspective, I suppose. One good reason might be that it means more hardware is supported. A bad one might be that it increases the overall attack surface from a security point of view.

      • demesisx@infosec.pub · 19 hours ago

        I’d like to see them hire some formal methods people to at least formally verify crucial parts of it.

        It might also be good to analyze it with an LLM to identify any hidden problem areas.

        I’m interested to hear why my idea is probably foolish as well, though.

        • henfredemars@infosec.pub · 18 hours ago

          A great deal of work is going into this area. In fact, I believe there are quite a few parties using LLMs to look for security bugs, and the US Department of Defense ran a multimillion-dollar competition to motivate exactly that.

        • max@lemmy.blahaj.zone · 18 hours ago

          llms have no abstract reasoning, so while they can write an okay-sounding bug report, often it’s wrong meow.

          i do think the linux foundation hires security people, and almost certainly the big contributors do.

          • demesisx@infosec.pub · 18 hours ago

            Doesn’t the new Chinese model that was just released actually do abstract reasoning?

            DeepSeek-R1 leverages a pure RL approach, enabling it to autonomously develop chain-of-thought (CoT) reasoning, self-verification, and reflection—capabilities critical for solving complex problems.

            To my untrained self, that sounds like reasoning.

            • kryptonidas@lemmings.world · 16 hours ago

              With chain of thought, it basically asks itself to generate related sub-questions and then answers those sub-questions.

              It’s really just the same thing, applied recursively. So just as it only looks like it can tell you things, it also only looks like it’s reasoning.

              Now, it may well be an improvement, but it’s still “I have this word, what is statistically most likely to be the next word?” over and over again.
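
              To make that concrete, here is a toy sketch of the loop being described: instead of a neural network it just counts which word most often follows the current one, but the generation step, “pick the statistically most likely next word, append it, repeat,” has the same shape. Purely illustrative, not how any real model is implemented.

              ```python
              from collections import Counter, defaultdict

              corpus = "the kernel loads the driver and the driver talks to the hardware".split()

              # Count, for each word, how often every other word follows it.
              bigrams = defaultdict(Counter)
              for current, following in zip(corpus, corpus[1:]):
                  bigrams[current][following] += 1

              def next_word(word):
                  # "I have this word, what is statistically most likely to be the next word?"
                  candidates = bigrams.get(word)
                  return candidates.most_common(1)[0][0] if candidates else None

              # Generation is just that one step repeated over and over again.
              word, output = "the", ["the"]
              for _ in range(6):
                  word = next_word(word)
                  if word is None:
                      break
                  output.append(word)

              print(" ".join(output))
              ```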

    • henfredemars@infosec.pub · 19 hours ago

      My opinion is this isn’t a problem. There’s a lot of hardware out there, and the vast majority of that code isn’t going to be loaded into any one kernel installation.
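
      For a rough sense of scale on a running system, you can compare the modules actually loaded right now against the module files shipped for the running kernel. A small sketch, Linux-only, assuming the usual /proc/modules and /lib/modules layout; exact counts vary per distro and machine:

      ```python
      import pathlib
      import subprocess

      # Modules currently loaded into this kernel: one line per module in /proc/modules.
      loaded = len(pathlib.Path("/proc/modules").read_text().splitlines())

      # Module files shipped for the running kernel release (.ko, .ko.xz, .ko.zst, ...).
      release = subprocess.run(["uname", "-r"], capture_output=True, text=True).stdout.strip()
      shipped = sum(1 for _ in pathlib.Path("/lib/modules", release).rglob("*.ko*"))

      print(f"modules loaded: {loaded}, modules shipped with this kernel: {shipped}")
      ```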