Homo Homini Lupus Est

  • 0 Posts
  • 598 Comments
Joined 2 years ago
Cake day: October 1st, 2023






  • After re-evaluation, you’re right. We can’t. We could only define the outer walls of what we can know; no matter how hard we think outside the box, we can’t measure the box itself. We could create such a simulation, but being more limited beings than our creators, our creations could only be even more limited. Like an LLM: it could assess everything there is to know and calculate a theory around it, yet it will be confined to OUR specifications and the data we let it consume.








  • I would bet my right nut that the real reason for all this is some AI billionaire aggressively pushing it with moneyz. Having every fart we make soon analyzed by AI would be the best “natural” training data there could be.

    The cherry on top is total surveillance for the state(s). AI will probably do a decent job (despite what the article says) at scanning for potential “threats” and letting actual people check them.

    But I can’t even comprehend the computing power that would be needed to actually scan every shit by every person every minute. No data center in the world has that oomph. So it has to be a simple keyword search (in all possible languages, even leetspeak and co.?) to forward to AI. And if the AI reported just 0.5% as “suspicious” for manual human review, the workload would be more supermassive than a black hole (see the rough sketch at the end of this comment). This is just not doable, and hence it defeats its fake reason: protecting the kids.

    So that kinda just leaves AI training and selective, easy surveillance without court orders. Which also won’t protect kids, as every criminal out there will find a loophole.
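
    A rough back-of-envelope sketch of that review-workload point. The message volume, review time, and working hours below are purely illustrative assumptions, not sourced figures:

    ```python
    # Back-of-envelope: human review load if an AI flags 0.5% of all messages.
    # All numbers are illustrative assumptions, not sourced statistics.

    messages_per_day = 100_000_000_000   # assumed global chat volume (~100 billion/day)
    flag_rate = 0.005                    # 0.5% of messages flagged as "suspicious"
    seconds_per_review = 30              # assumed time for one human to check one flag
    work_seconds_per_day = 8 * 3600      # one reviewer's 8-hour working day

    flags_per_day = messages_per_day * flag_rate
    reviewers_needed = flags_per_day * seconds_per_review / work_seconds_per_day

    print(f"Flags per day: {flags_per_day:,.0f}")                  # 500,000,000
    print(f"Full-time reviewers needed: {reviewers_needed:,.0f}")  # ~520,833
    ```

    Even with these conservative made-up numbers, the flagged volume alone would need hundreds of thousands of full-time reviewers.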






  • Guessing you either didn’t actually have a real HDR monitor, or just didn’t realise it shouldn’t look like what it did.

    Yeah, me being technologically challenged is probably it. Dude, I retired around 25 because of tech. Besides, if you’d ever toggled HDR while having 5 monitors attached, you’d think hard about whether it’s REALLY needed right now 😁

    On device or in cloud, you have privacy settings that control it.

    Sorry, I just don’t have that level of naiveté. A US company pumping billions into AI and not using our data for anything? Is that why they already gather so much, even without LLMs? Half of my firewall rules are for MS, that’s how little they phone home 😁

    But at least only one Windows machine will be problematic; the others run Windows Server, so they won’t get this crap. No worries for me anyway.