• punrca@piefed.world · 21 points · 5 hours ago

    The software engineer acknowledged that AI tools can help improve productivity if used properly, but for programmers with relatively limited experience, he feels the harm is greater than the benefit. Most of the junior developers at the company, he explained, don’t remember the syntax of the language they’re using due to their overreliance on Cursor.

    Good luck to future developers, I guess.

    Companies that’ve spent money on enterprise AI licenses need to show some sort of ROI to the bean-counters. Hence, mandates.

    Can’t wait for the AI bubble to pop. If this continues, expect more incidents and outages caused by AI-generated slop code.

  • jonathan7luke@lemmy.zip · 22 points · 6 hours ago

    For the FAANG companies, they do it in part so they can then turn around and make those flashy claims you see in headlines, like “95% of our devs use [insert AI product they are trying to sell] daily” or “60% of our code base is now ‘written’ by our fancy AI”.

  • floofloof@lemmy.ca · 74 points · 8 hours ago

    “We were still required to find some ways to use AI. The one corporate AI integration that was available to us was the Copilot plugin to Microsoft Teams. So everyone was required to use that at least once a week. The director of engineering checked our usage and nagged about it frequently in team meetings.”

    The managerial idiocy is astounding.

    • gravitas_deficiency@sh.itjust.works · 15 points · 6 hours ago

      It’s pretty easy to set up a cron job to fire off some sort of bullshit LLM request a handful of times a day during working hours. Just set it and forget it.
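      The set-it-and-forget-it trick might look something like the sketch below: a script cron fires a few times per workday to keep the usage dashboard green. The endpoint URL, request shape, and prompts are all placeholders here, not any real vendor’s API.

```python
# Hypothetical sketch: ping a company LLM endpoint with throwaway prompts so
# usage metrics stay green. LLM_URL and the JSON body are made-up placeholders.

# crontab entry (fires at 10:17, 13:42, and 16:05, Mon-Fri):
#   17 10,13,16 * * 1-5 /usr/bin/python3 /home/me/ai_busywork.py

import datetime
import json
import random
import urllib.request

LLM_URL = "https://llm.internal.example/v1/chat"  # fictional endpoint

PROMPTS = [
    "Summarize the benefits of synergy.",
    "Write a haiku about standup meetings.",
    "Explain agile development to a golden retriever.",
]

def build_request(prompt: str) -> urllib.request.Request:
    """Package one canned prompt as a JSON POST request."""
    body = json.dumps({"prompt": prompt}).encode()
    return urllib.request.Request(
        LLM_URL, data=body, headers={"Content-Type": "application/json"}
    )

if __name__ == "__main__":
    prompt = random.choice(PROMPTS)  # vary the prompt so the logs look organic
    req = build_request(prompt)
    print(f"{datetime.datetime.now().isoformat()} sending: {prompt}")
    # urllib.request.urlopen(req)  # left commented out: the endpoint is fictional
```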

        • brsrklf@jlai.lu · 14 points · 5 hours ago

          “Prompt yourself with some bullshit so that it looks like you’re doing something productive.”

          Who knows, maybe that’s how you attain AGI? What is a more human kind of intelligence than looking for ways to be a lazy fuck?

          • queerlilhayseed@piefed.blahaj.zone · 11 points · edited 5 hours ago

            Prompt an LLM to contemplate its own existence every 30 minutes, give it access to a database of its previous outputs on the topic, boom you’ve got a strange loop. IDK why everyone thinks AGI is so hard.
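            Tongue firmly in cheek, the loop described above is about ten lines. The model call is a stub here; the real 30-minute scheduler and any actual LLM client are left as assumptions.

```python
# Joke sketch of the "strange loop": each cycle feeds the model its own past
# musings and stores the new one. ponder() is a stub, not a real LLM call.

import sqlite3

def ponder(previous: list) -> str:
    """Stand-in for an LLM call; a real client would go here."""
    n = len(previous)
    return f"Reflection #{n}: I have contemplated myself {n} times before."

def one_cycle(db: sqlite3.Connection) -> str:
    """Read all prior musings, generate a new one, and persist it."""
    rows = db.execute("SELECT text FROM musings ORDER BY id").fetchall()
    thought = ponder([r[0] for r in rows])
    db.execute("INSERT INTO musings(text) VALUES (?)", (thought,))
    db.commit()
    return thought

if __name__ == "__main__":
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE musings (id INTEGER PRIMARY KEY, text TEXT)")
    for _ in range(3):  # in the joke, this runs every 30 minutes forever
        print(one_cycle(db))
```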

  • brsrklf@jlai.lu · 51 points · edited 8 hours ago

    Nothing says AI is a clever use of your resources like enforcing a mandatory AI query quota on your employees and watching them struggle, and fail, to find anything it’s good at.

  • Septimaeus@infosec.pub · 19 points, 1 down · 8 hours ago

    I’ll admit, some tools and automation are hugely improved with new ML smarts, but nothing feels dumber than finding problems that fit the boss’s solution.

      • assaultpotato@sh.itjust.works · 14 points · 8 hours ago

        Claude performs acceptably at repetitive tasks when I have an existing pattern for it to follow: “Replicate PR 123, but add support for object Bar instead of Foo.” If I get some of this busy work in my queue, I typically just have Claude do it while I’m in a meeting.

        I’d never let it do refactors or design work, but as a code generation tool that can use existing code as a template, it’s useful. I wouldn’t pay an arm and a leg for it, but burning $2 while I’m in a meeting to kill chore tasks is worth it to me.

        • MangoCats@feddit.it · 3 points · 5 hours ago

          Agreed. I’ve been using Claude extensively for about a month, and before that for little stuff for about 3 months. It’s great at little stuff. It can whip up a program to do X in 5 minutes flat, as long as X doesn’t amount to more than about 1,000 lines of code. Need a parser to sift through some crazy combination of logic in thousands of log files? Claude is your man for that job. Want to scan audio files to identify silence gaps and report how many are found? Again, Claude can write the program and generate the report in 5 minutes flat (plus whatever time the program takes to decode the audio…)
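          The silence-gap scanner is a good example of the five-minute throwaway tool being described. A hand-written sketch might look like this; the threshold, minimum gap length, and 16-bit mono WAV assumption are all arbitrary choices, not anything Claude actually produced.

```python
# Sketch of a throwaway tool: scan a WAV file for silence gaps (runs of
# near-zero samples) and report them. Assumes 16-bit mono PCM; the amplitude
# threshold and minimum gap duration are arbitrary illustrative values.

import math
import os
import struct
import tempfile
import wave

def find_silence_gaps(path, threshold=500, min_gap_s=0.2):
    """Return (start_s, end_s) tuples for each sufficiently long quiet run."""
    with wave.open(path, "rb") as w:
        rate = w.getframerate()
        n = w.getnframes()
        samples = struct.unpack(f"<{n}h", w.readframes(n))
    gaps, start = [], None
    for i, s in enumerate(samples):
        quiet = abs(s) < threshold
        if quiet and start is None:
            start = i                      # a quiet run begins
        elif not quiet and start is not None:
            if (i - start) / rate >= min_gap_s:
                gaps.append((start / rate, i / rate))
            start = None                   # run ended (long enough or not)
    if start is not None and (len(samples) - start) / rate >= min_gap_s:
        gaps.append((start / rate, len(samples) / rate))
    return gaps

if __name__ == "__main__":
    # Demo: synthesize 1 s of 440 Hz tone, 0.5 s of silence, 1 s of tone.
    rate = 8000
    tone = [int(10000 * math.sin(2 * math.pi * 440 * t / rate)) for t in range(rate)]
    pcm = tone + [0] * (rate // 2) + tone
    demo = os.path.join(tempfile.gettempdir(), "silence_demo.wav")
    with wave.open(demo, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(rate)
        w.writeframes(struct.pack(f"<{len(pcm)}h", *pcm))
    print(find_silence_gaps(demo))  # one gap of roughly 0.5 s starting near 1.0 s
```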

          Need something more complex, nuanced, multi-faceted? Yeah, it’s still easier to do most of the upper-level design work yourself, but if you can build a system out of a bunch of little modules, AI is getting pretty good at writing the little modules.

      • Septimaeus@infosec.pub · 4 points, 1 down · edited 7 hours ago

        For example, the tools for the really tedious stuff: large-codebase refactoring for style keeping, naming-convention adherence, all kinds of code smells, whatever. Lots of those tools have gotten ML upgrades and are a lot smarter and more powerful than what I remember from a decade ago (IntelliSense, JetBrains helper functions, various opinionated linter toolchains, and so forth).

        While I’ve only experimented a little with some of the more explicitly generative LLM-based coding assistant plugins, I’ve been impressed (and a little spooked) at how good they often were at guessing what I was doing well before I’d finished doing it.

        I haven’t used the prompt-based LLMs at all, because I’m just not used to it, but I’ve watched nearby devs use them for stuff like manipulating a bunch of files in a repeated pattern, breaking up a spaghetti method into reusable functions, or giving a descriptive overview of some gnarly undocumented legacy code. They seem pretty damn useful.

        I’ll integrate the prompt-based tools once I can host them locally.

        • MangoCats@feddit.it · 2 points · 5 hours ago

          In the work I’ve done with Claude over the past months, I have not learned to trust it for big things; if anything, the opposite. It’s a great tool, but, to anthropomorphize, its hallucination rate is down there with my less trustworthy colleagues. Ask it to find all instances of X in a code base of 100 files of 1,000 lines each, and it seems to get bored or go off track: it finds a lot, but misses obvious instances, and misses too much to call it a thorough review. If you can get it to develop a “deterministic process” for you (a shell script or program) and test that program, then that you can trust more. But with the LLM in the loop, it just isn’t all there all the time, and worse: it’ll do some really cool and powerful things 19 times out of 20, and then, just when you think you can trust it, it will screw up an identical-sounding task horribly.

          I was just messing around with it and had it running a file-organization-and-commit process for me. It worked pretty well for a couple of weeks, then one day it just screwed up and irretrievably deleted a bunch of new work. Luckily it was only 5 minutes of its own work, but still… that’s not a great result.