I use Manjaro Linux with the Cinnamon desktop and sometimes run into system-level issues, but I have no idea how to properly debug them. It doesn’t feel as straightforward as debugging a normal program. What’s the best way or resource to learn system debugging on Linux?

  • DigitalDilemma@lemmy.ml · 2 days ago

    Sysadmin here; this is my usual flow across various distros:

    1. As /u/FigMcLargeHuge mentions, check the recent logfiles in /var/log. Notably /var/log/messages (EL) and /var/log/syslog (Debian), but anything that's recent.
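
    For example, something like this will surface whatever changed most recently (paths vary by distro, so treat these as illustrative):

        ls -lt /var/log | head -n 10    # newest logfiles first
        tail -n 50 /var/log/syslog      # last 50 lines (Debian-family; /var/log/messages on EL)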

    2. journalctl - more and more things are moving to binary logging. If you know the service, then journalctl -u unitname restricts output to just that unit. Also add -f to follow it for ongoing logs.
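
    A couple of illustrative invocations (the unit name here is just an example):

        journalctl -u NetworkManager --since "1 hour ago"   # one unit, recent entries only
        journalctl -u NetworkManager -f                      # follow it live
        journalctl -b -p err                                 # errors (and worse) from the current boot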

    3. dmesg -T - especially at the system level, this captures hardware and other low-level kernel messages. (-T reports human-readable timestamps rather than seconds since boot.)
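
    For instance (flags as in util-linux dmesg; older versions may lack some of them):

        dmesg -T | tail -n 50        # most recent kernel messages with readable timestamps
        dmesg -T --level=err,warn    # only warnings and errors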

    4. Once you have some logs that you think are related, but don't know WTF they actually mean, you have two options. The first is to google likely strings. This is… ineffective much of the time; accidental misinformation and outdated advice are increasingly common. The answer might be there, but it takes time and can be frustrating to weed out the cruft.

    The better way (IMO, and people downvote me for saying this) is to use AI. Get a few lines of logs with the errors, check them for confidential information, and simply paste the suspect lines into chatgpt, gemini, claude, co-pilot, whatever. No need for context; it'll figure that out. The LLM will, 4 times out of 5, identify the problem very quickly.
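
    As a rough sketch of that sanitising step (the patterns and placeholder names are purely illustrative, not a complete scrub):

        grep -i error /var/log/syslog | tail -n 20 \
          | sed -e "s/$(hostname)/HOSTNAME/g" \
                -e 's/[0-9]\{1,3\}\(\.[0-9]\{1,3\}\)\{3\}/REDACTED_IP/g'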

    Now, once it's identified the problem, it will offer to fix it for you. This is where you've got to be on your toes, as LLMs are really, really quick to give bad advice at this level. But that first triage is nearly always worth doing and helps shape your own thinking about what's going on. AI is still useful for fixing it, but do understand what it's telling you to do.

    • MangoCats@feddit.it · 23 hours ago

      use AI. Get a few lines of logs with the errors, check them for confidential information, and simply paste the suspect lines into chatgpt, gemini, claude, co-pilot, whatever

      Concur. I used to put small snippets of the logs into Google search, hoping to bring up pages from fellow sufferers of the same malaise; that usually worked, but AI is doing it better now.

    • BCsven@lemmy.ca · 2 days ago

      I have resorted to the AI step too, when Stract.com doesn't give me a good link, because if I paste a minidlna crash log Google responds with:

      • Mini Cooper on sale
      • Buy your DAC device here
      • want to sign up to streaming music
      • network and NAS comparisons

      Useless.

      At least the AI said: based on your error, it appears a file in your database has metadata tags it cannot parse properly. Sure enough, the tagger I used had applied a tag to a wmv file, and Minidlna couldn't deal with the tag 1 area vs the tag 2 areas used in other file formats.

    • ZeDoTelhado@lemmy.world · 2 days ago

      Did you try this workflow with local models? If so, in your experience, what are the better models for it?

      • DigitalDilemma@lemmy.ml · 1 day ago

        We did experiment with local models. They were okay, if a little slow with the resources we allocated for testing. Ultimately, though, we paid for Copilot. I'm still a little sceptical that it won't leak data, despite the assurances, so I do clean anything sensitive before pasting.
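
        If you want to try the same triage locally, a minimal sketch with ollama looks something like this (assuming ollama is installed and a model such as llama3 has been pulled; the model name is just an example):

            journalctl -b -p err | tail -n 20 \
              | ollama run llama3 "Explain what these log lines mean and what might be causing them"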

        As for best models: generally GPT-4 or 5 is my go-to, but the others have their uses. I tend to stick with one until it annoys me, then move on. Claude's pretty good for code help, imo, but there's not really a huge difference between them.

        What are your experiences?

        • ZeDoTelhado@lemmy.world · 13 hours ago

          I do not use online models in general, but my needs are also much smaller. The most I use my local model through ollama for is translations. I am always interested in seeing more focused models that we can use on lower-end hardware.