• Artwork@lemmy.world · edited · 4 days ago

    Please no. Absolutely not. An LLM is absolutely not “nice for dealing with confusion”; it is the very opposite.
    Please do consider people’s effort, articles, and attributions, and actually learning and organizing your knowledge. Please do train your mind, and your self-confidence.

    • some_kind_of_guy@lemmy.world · 3 days ago

      You can’t rely on LLMs to get actual answers for technical things, but they can help you avoid a huge amount of the wasted time and effort, back-and-forth, going in circles, and talking around or past the issues that you see in threads all over these kinds of niche expert communities. Besides, maybe my question has already been answered.

      Sometimes I don’t know the specific terms or framing, I’m missing context, or I’m trying to get from A to C without knowing that B even exists, never mind how (or whom) to ask about it. If I can accelerate the process of clearing that up, I can go to the right human expert or community with a much better handle on what I’m actually looking for and how to ask for it.

      • Artwork@lemmy.world · edited · 14 hours ago

        Thank you, but I do disagree. You cannot know whether the LLM’s “result” includes all the required context, and you won’t re-clarify it, since the output already omits what is relevant; in the end you miss the knowledge and waste the time, too.

        How can you be sure the output includes what is relevant? Will you ever re-submit the question to the algorithm without even knowing that a re-submission is needed, when there is no indication of it? That is, the LLM simply did not include what you needed, left out the important context surrounding it, and did not even tell you which authors to question further: no attribution, no accountability, no sense, sorry.

        • some_kind_of_guy@lemmy.world · 5 hours ago

          I’m not sure we disagree. I agree that LLMs are not a good source for raw knowledge, and it’s definitely foolish to use them as if they’re some sort of oracle. I already mentioned that they are not good at providing answers, especially in a technical context.

          What they are good at is gathering sources and recontextualizing your queries based on those sources, so that you can pose your query to human experts in a way that will make more sense to them.

          You’re of course entirely within your rights to avoid the tech altogether, as it comes with many pitfalls. Many of these models are damn good at gathering info from real human sources, though, if you can be concise with your prompts and resist the temptation to swallow their “analysis”.

      • sureshot0@discuss.online · 2 days ago

        You mention wasted time and effort, going in circles, and talking past and around issues/questions. I think a lot of people underestimate that this is why people go to AI in the first place: asking for help can be genuinely unbearable sometimes.