• 0 Posts
  • 381 Comments
Joined 2 years ago
Cake day: July 3rd, 2023

  • I’ve worked with a few people who are just incomprehensible. One refuses to write commit messages of any detail. Just “work in progress”. Cast him into the pit.

    There was another guy that refused to name his tests. His code was like

    describe('', () => {
      it('', () => {
        expect(someFunc()).toEqual(0);
      });
      it('', () => {
        expect(someFunc(1)).toEqual(0);
      });
      it('', () => {
        expect(someFunc("")).toEqual(1);
      });
    });
    

    He was like, “Test names are like comments and they turn into lies! So I’m not going to do it.”

    I was like, a. what the fuck. b. do you also not name your files? projects? children?

    He was working at a very big company last I heard.

    edit: If you’re unfamiliar, the convention is to put a human-readable description where those empty strings are. It’s used in the test output: if a test fails, the runner will typically include the name in its report.
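    For comparison, here’s what the named version looks like. This is a sketch using Node’s built-in test runner (Node 18+) rather than Jest, with a hypothetical `someFunc` stub so it runs standalone; the point is just where the descriptions go.

    ```javascript
    const { describe, it } = require('node:test');
    const assert = require('node:assert');

    // Hypothetical stub so the snippet is self-contained.
    function someFunc(x) {
      return typeof x === 'string' ? 1 : 0;
    }

    describe('someFunc', () => {
      // These strings show up in the runner's output, so a failure
      // reads like "someFunc > returns 0 with no arguments".
      it('returns 0 with no arguments', () => {
        assert.strictEqual(someFunc(), 0);
      });
      it('returns 0 for a numeric argument', () => {
        assert.strictEqual(someFunc(1), 0);
      });
      it('returns 1 for a string argument', () => {
        assert.strictEqual(someFunc(''), 1);
      });
    });
    ```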

  • It is absolutely stupid, stupid to the tune of “you shouldn’t be a decision maker”, to think an LLM is a better tool for “getting a quick intro to an unfamiliar topic” than reading an actual intro to that topic. For most topics, Wikipedia is right there, complete with sources. For obscure things, an LLM is just going to lie to you.

    As for “looking up facts when you have trouble remembering it”, using the lie machine is a terrible idea. It’s going to say something plausible, and you tautologically are not in a position to verify it. And, as above, you’d be better off finding a reputable source. If I type in “how do i strip whitespace in python?” an LLM could very well say “it’s your_string.strip()”. That’s wrong. Just send me to the fucking official docs.
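    For context on why that answer can miss the mark: Python’s `str.strip()` only removes leading and trailing whitespace, not whitespace inside the string. A quick sketch of the difference (variable names are just illustrative):

    ```python
    s = "  hello   world  "

    # strip() trims only the ends; interior whitespace survives.
    print(repr(s.strip()))           # 'hello   world'

    # Removing ALL whitespace needs a different approach entirely.
    print(repr("".join(s.split())))  # 'helloworld'
    ```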

    There are probably edge or special cases, but for general search on the web? LLMs are worse than search.

  • Well, in this example, the information provided by the AI was simply wrong. If it had done the traditional search method of pointing to the organization’s website where they had the hours listed, it would have worked fine.

    This idea that “we’re all entitled to our opinion” is nonsense. That’s for when you’re a child and the topic is which flavor of jelly bean you like. It’s not for policy or things that matter. You can’t “it’s my opinion” your way through “this algorithm is O(n^2), but I like it better than the O(n) one, so I’m going to use it for my big website.” Or, more on topic, through “these results are wrong, but I like them better.”
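    To make the O(n^2) vs. O(n) contrast concrete, here’s a toy example (hypothetical helpers, not from the comment above): both functions answer the same question, but one compares every pair while the other makes a single pass.

    ```python
    def has_duplicates_quadratic(items):
        # O(n^2): compares every pair of elements.
        for i in range(len(items)):
            for j in range(i + 1, len(items)):
                if items[i] == items[j]:
                    return True
        return False

    def has_duplicates_linear(items):
        # O(n): one pass, remembering what we've seen in a set.
        seen = set()
        for x in items:
            if x in seen:
                return True
            seen.add(x)
        return False
    ```

    Preferring the first for a “big website” isn’t an opinion; at scale it’s measurably slower, full stop.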