It’s not the 18th century anymore; there are no classes.
What do you mean “build”? It’s part of the development process.
Fury? I mean, the only slop here is lemmings.
It does respect robots.txt, but that doesn’t mean it won’t index content hidden behind robots.txt. That file is context-dependent. Here’s an example.
Site X has a link to sitemap.html on its front page, and that page is blocked in its robots.txt. When Google’s crawler visits site X, it first loads robots.txt, follows its instructions, and skips sitemap.html.
Now there’s site Y, which also links to sitemap.html on X. In that context the active robots.txt is Y’s, and it doesn’t block anything on X (it can’t), so the crawler has the green light to fetch sitemap.html.
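For concreteness, a minimal robots.txt on site X in this scenario might look like the sketch below; the sitemap.html path is just the example from above, not anything from a real site.

    User-agent: *
    Disallow: /sitemap.html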
This behaviour is intentional.
What kind of code are you writing that your CPU goes to sleep? If you follow good practices like TDD, atomic commits, etc., and your code base is larger than hello world, your PC will be running at its peak quite a lot.
Example: linting on every commit + TDD. You’ll be making loads of commits every day, and linting a decent code base will definitely push your CPU to 100% for a few seconds each time. Running tests, even with caches, will push it to 100% for a few minutes. Plus compilation for running the app; some apps take hours to compile.
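A rough sketch of what “linting on every commit” means in practice: a pre-commit hook that runs the linter and the test suite before each commit goes through. The npm commands below are placeholders, not anything from the thread; substitute whatever your project actually uses.

    #!/usr/bin/env python3
    # .git/hooks/pre-commit (hypothetical sketch): run lint and tests before
    # every commit. Both commands are placeholders for your real tooling.
    import subprocess
    import sys

    for cmd in (["npm", "run", "lint"], ["npm", "test"]):
        result = subprocess.run(cmd)
        if result.returncode != 0:
            sys.exit(result.returncode)  # non-zero exit aborts the commit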
In general, text editing is a small part of the developer workflow. Only junior devs spend a lot of time typing stuff.
AI being trained on the output of other AI is how most models get trained these days; distillation and synthetic data are standard practice. Man, people here are ridiculously ignorant…
Here’s the thing: it’s not useless.
AI won’t see the Markov-chain output; that trap site will be dropped at the crawling stage.
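For anyone unsure what those traps emit, it’s roughly word salad from a word-level Markov chain, something like the sketch below (the seed text is made up). The output is locally plausible but incoherent over longer spans, which is exactly the kind of thing crawl-stage quality filters are built to catch.

    # Hypothetical sketch of the kind of word-level Markov chain a trap site
    # might use to generate endless filler text. The corpus here is made up.
    import random
    from collections import defaultdict

    corpus = "the crawler visits the site and the site serves the crawler more links".split()

    # Build a first-order chain: each word maps to the words seen after it.
    chain = defaultdict(list)
    for a, b in zip(corpus, corpus[1:]):
        chain[a].append(b)

    def babble(start, n=20):
        word, out = start, [start]
        for _ in range(n):
            word = random.choice(chain[word]) if chain[word] else random.choice(corpus)
            out.append(word)
        return " ".join(out)

    print(babble("the"))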
Proletariat, lol. Are you living in the 18th century?
Making bad financial decisions is not poverty.
Oh no!
Anyways…
I guess someone’s stuck in kindergarten…
Nope