

Agree. I’ve been using Claude extensively for about a month, and before that for little stuff for about 3 months. It is great at little stuff: it can whip out a program to do X in 5 minutes flat, as long as X doesn’t amount to more than about 1000 lines of code. Need a parser to sift through some crazy combination of logic in thousands of log files? Claude is your man for that job. Want to scan audio files to identify silence gaps and report how many are found? Again, Claude can write the program and generate the report for you in 5 minutes flat (plus whatever time the program takes to decode the audio…).
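To give a sense of what “little stuff” means here, the silence-gap scanner is roughly this much code. This is a minimal sketch in Python, assuming mono 16-bit PCM WAV input; the threshold, minimum gap length, and file name are illustrative guesses, not anything Claude actually wrote:

```python
# A rough sketch of the silence-gap scanner, assuming mono 16-bit PCM WAV
# input. SILENCE_THRESHOLD and MIN_GAP_SECONDS are made-up numbers for
# illustration, not values from any real program.
import array
import wave

SILENCE_THRESHOLD = 500  # peak sample amplitude below this counts as silence
MIN_GAP_SECONDS = 0.5    # only report gaps at least this long

def find_silence_gaps(path):
    """Return (start, end) times in seconds for each silence gap found."""
    with wave.open(path, "rb") as wav:
        rate = wav.getframerate()
        samples = array.array("h", wav.readframes(wav.getnframes()))
    window = max(1, int(rate * 0.05))  # scan in 50 ms windows
    gaps, gap_start = [], None
    for i in range(0, len(samples), window):
        t = i / rate
        quiet = max(abs(s) for s in samples[i:i + window]) < SILENCE_THRESHOLD
        if quiet and gap_start is None:
            gap_start = t                # a gap begins
        elif not quiet and gap_start is not None:
            if t - gap_start >= MIN_GAP_SECONDS:
                gaps.append((gap_start, t))
            gap_start = None             # the gap ends
    end = len(samples) / rate
    if gap_start is not None and end - gap_start >= MIN_GAP_SECONDS:
        gaps.append((gap_start, end))    # file ended mid-gap
    return gaps

# "session.wav" is a placeholder path
for start, end in find_silence_gaps("session.wav"):
    print(f"silence {start:.2f}s - {end:.2f}s ({end - start:.2f}s)")
```

That is exactly the size of job where the 5-minute turnaround is believable.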
Need something more complex, nuanced, or multi-faceted? Yeah, it is still easier to do most of the upper-level design yourself, but if you can build a system out of a bunch of little modules, AI is getting pretty good at writing the little modules.

In the work I have done with Claude over the past months, I have not learned to trust it for big things - if anything, the opposite. It’s a great tool, but - to anthropomorphize - its “hallucination rate” is down there with my less trustworthy colleagues. Ask it to find all instances of X in a code base of 100 files of 1000 lines each, and it seems to get bored or off-track quite a bit: it misses obvious instances, and while it finds a lot, it misses too much to call the review thorough. If you can get it to develop a “deterministic process” for you (a shell script or program) and test that program, then that you can trust more. But when the LLM is in the loop, it just isn’t all there all the time, and worse: it’ll do some really cool and powerful things 19 times out of 20, then, right when you think you can trust it, it will screw up an identical-sounding task horribly.
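For concreteness, the kind of “deterministic process” I mean is just a dumb, repeatable scanner like the sketch below; the pattern and the *.py glob are placeholders I picked for illustration:

```python
# A deterministic "find all instances of X" pass: exhaustive and repeatable,
# unlike asking the LLM to eyeball the code base. PATTERN and the *.py glob
# are placeholder assumptions, not from any specific project.
import re
import sys
from pathlib import Path

PATTERN = re.compile(r"\bdeprecated_call\b")  # hypothetical target

def scan(root):
    hits = 0
    for path in sorted(Path(root).rglob("*.py")):
        for lineno, line in enumerate(path.read_text(errors="replace").splitlines(), 1):
            if PATTERN.search(line):
                print(f"{path}:{lineno}: {line.strip()}")
                hits += 1
    print(f"{hits} matches", file=sys.stderr)

if __name__ == "__main__":
    scan(sys.argv[1] if len(sys.argv) > 1 else ".")
```

Grep would do the same job; the point is that the search itself is exhaustive and repeatable, so the only thing left to verify is the pattern.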
I was just messing around with it and had it running a file-organization and commit process for me. It worked pretty well for a couple of weeks, then one day it just screwed up and irretrievably deleted a bunch of new work. Luckily it was only 5 minutes of its own work, but still… that’s not a great result.