
  • Honestly it’s fine. LSPs are nice but you don’t need them per se. A combination of vim, tmux, entr, a fast incremental compiler, grep, and proper documentation can get you a long way (there’s a rough sketch of that rebuild loop at the end of this comment).

    A lot of critically important code that’s running the servers we’re using to communicate was written this way. And, if capitalist decline continues long enough, we will all eventually be begging for vim while writing code with ed.

    Personally I use helix with an LSP, because it helps speed up development quite a bit. I even have a local LLM for writing repetitive boilerplate bullshit. But I also understand that those are ultimately just tools that speed the process up, they do not fundamentally change what I’m doing.
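
    By “rebuild loop” I mean the entr part of the above: watch some files, re-run the build whenever one of them changes. Here’s a toy Python stand-in for something like `ls src/*.c | entr -c make` - the glob and the `make` command are made-up placeholders, not my actual setup:

    ```python
    # Toy stand-in for `ls src/*.c | entr -c make`: poll the watched files'
    # modification times and re-run the build command whenever one changes.
    # WATCH_GLOB and BUILD_CMD are placeholders - adjust for your project.
    import glob
    import os
    import subprocess
    import time

    WATCH_GLOB = "src/**/*.c"   # hypothetical source layout
    BUILD_CMD = ["make"]        # hypothetical build command

    def snapshot():
        # Map each watched file to its last-modified time.
        return {p: os.path.getmtime(p) for p in glob.glob(WATCH_GLOB, recursive=True)}

    def main():
        last = snapshot()
        subprocess.run(BUILD_CMD)        # initial build
        while True:
            time.sleep(0.5)
            current = snapshot()
            if current != last:          # a file was added, removed or edited
                last = current
                subprocess.run(BUILD_CMD)

    if __name__ == "__main__":
        main()
    ```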


  • It’s nicer to develop anything on a beefy machine; I was rocking a 7950X until recently. The compile times are a huge boon, and for some modern bloated bullshit (looking at you, Android) you definitely need a beefy machine to build it in a realistic timeframe.

    However, we can totally solve a lot of real-world problems with old, cheap, crappy hardware; we just never wanted to, because it was “cheaper” for some poor soul in China to build a new PC every year than for a developer to spend an extra week thinking about efficiency. That appears to be changing now, especially if your code will be running on consumer hardware.

    My dad used to “write” software for basic aerodynamic modelling on punchcards, on a mainframe that had about as much computing power as some modern microcontrollers. You wouldn’t even consider it a potato by today’s standards. I’m sure if we use our wit and combine it with arcane knowledge of efficient algorithms, we can optimize our stacks to compile code on a friggin 3.5GHz 10-core CPU (which is 10 years old by now).


  • You can write code just fine on 20- or even 30-year-old hardware. Basically, if it runs Linux, chances are it can also run vim and compile code. If you spring for 10-15 year old hardware, you can even get an LSP + coc or helix, for error highlighting, goto-definition and code actions. And you definitely don’t need a beefy GPU for it (unless you’re doing something GPU-specific, of course).

    Editing 720p videos (which, if you encode them with a high enough bitrate, still look alright) can be done on 10-15 year old hardware.

    Research is where it gets complicated. It does indeed often require a lot of computing power to do modern computational research. But for some simpler stuff - especially outside STEM - you can sometimes get away with a LibreOffice spreadsheet on an old Dell or something.

    From the looks of it, we will have to get used to doing more with less when it comes to computers. And TBH I’m all for it. I just hope that either my job won’t require compiling a lot more stuff, or that they’ll provide me with a modern machine at their expense.






  • Dolphin (well, whatever KDE’s indexer is called) uses xattrs under the hood for tagging, so it will be compatible with other software (including {get,set}fattr); there’s a small sketch of reading and writing those tags at the end of this comment.

    The index has to be up-to-date, but then that would be true with any tag-based filesystem; it’s just happening on a different layer (and arguably a layer which is more suitable for this - not sure it’d be a good idea to enforce synchronous indexing during xattr writes).

    The most significant user-facing obstacle is the lack of software that supports this system, but I guess that shows there’s not much desire for it in reality.
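
    For anyone curious, the plumbing is simple enough that you can poke at it from any language, not just with setfattr/getfattr. A quick Python sketch - the `user.xdg.tags` attribute name is what I believe Baloo/Dolphin use (double-check with `getfattr -d` on a tagged file), and the path is a placeholder:

    ```python
    # Toy tag read/write via Linux extended attributes (the same data setfattr/getfattr see).
    # ATTR is what Baloo/Dolphin appear to use for tags; PATH is a placeholder.
    import os

    PATH = "some-file.txt"      # placeholder - any file on an xattr-capable filesystem
    ATTR = "user.xdg.tags"      # comma-separated tag list

    def read_tags(path):
        try:
            return [t for t in os.getxattr(path, ATTR).decode().split(",") if t]
        except OSError:         # attribute not set yet (or xattrs unsupported)
            return []

    def add_tag(path, tag):
        tags = read_tags(path)
        if tag not in tags:
            tags.append(tag)
        os.setxattr(path, ATTR, ",".join(tags).encode())

    if __name__ == "__main__":
        add_tag(PATH, "holiday")
        print(read_tags(PATH))
    ```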








  • Nah, honestly, I think stuffing an entire computer inside a monitor and relying on it to generate/show content is a bad idea no matter what software it runs. A dumb TV + a small computing dongle requires only a tiny fraction more labor to produce than a smart TV, but it’s so much easier to upgrade in the future if you decide you need faster boot times or wanna game on the TV, etc. And if the TV breaks before the dongle does, you can also buy a new TV and keep all your settings/media without transferring anything.





  • Abstraction, when used well, is actually a tool that produces simpler code in the long run. It separates different concerns into different pieces of code, makes code readable by extracting common logic and giving it a recognizable name, and reduces boilerplate (there’s a toy sketch of that at the end of this comment).

    That said, OOP-style inheritance-based abstractions, while useful in some cases, quite often lead people down the complete opposite path - mushing together unrelated logic and then making call sites difficult to understand with a lot of hidden state that has to be kept in mind.
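
    To make the “extract common logic and name it” point concrete, here’s a deliberately tiny, made-up Python example - the function and the commented-out call sites are invented for illustration, not from any real codebase:

    ```python
    # Made-up example: the same "retry a flaky call" dance, extracted once and named,
    # instead of being copy-pasted at every call site.
    import time

    def fetch_with_retries(fetch, attempts=3, delay=1.0):
        # Retry a callable a few times before giving up; the recognizable name
        # lets call sites say *what* they do instead of repeating the how.
        for attempt in range(attempts):
            try:
                return fetch()
            except OSError:
                if attempt == attempts - 1:
                    raise
                time.sleep(delay)

    # Hypothetical call sites - boilerplate-free and readable:
    # user = fetch_with_retries(lambda: api.get_user(42))
    # invoice = fetch_with_retries(lambda: api.get_invoice(7), attempts=5)
    ```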


  • So yeah, there’s no exact answer to “what happens to Linux after Torvalds”; it’s more a question of “who gets to add more maintainers to torvalds/linux.git if nobody merges things in there for 72 hours”. I suppose Linus is confident that the system of distributed maintainers is robust enough to survive his & gregkh’s incapacitation, and that the only remaining point of failure is access to the central repo itself. I think he is underestimating the governance upheaval that would happen if he were to disappear, so I hope he puts some more details about his views on future project governance in writing.