

Another quick fix is to set up a “Note to Self” group in Signal (make a group with 2 people, then remove the other member). A nice, tidy way to move things around, with a history of what you moved earlier.
That’s hilarious. I do hope it gets evaluated at run time. That way you could have a program that works most of the time, but if some rare circumstance caused it to execute commands in a sequence where the correct level of politeness was not maintained, it would get the hump and crash.
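Purely for illustration, here is a toy sketch of what run-time politeness checking could look like. Everything here is invented (the class, the "one polite command in every three" rule); it just demonstrates a program that runs fine until an unlucky sequence of commands gets the hump:

```python
class RudenessError(RuntimeError):
    """Raised when the required level of politeness is not maintained."""

class PoliteInterpreter:
    # Invented rule: at least one of the last three executed commands
    # must start with PLEASE. Checked only when a command executes,
    # so rudeness surfaces at run time, not at compile time.
    def __init__(self):
        self.history = []

    def execute(self, command):
        self.history.append(command.startswith("PLEASE"))
        recent = self.history[-3:]
        if len(recent) == 3 and not any(recent):
            raise RudenessError("program got the hump: insufficient politeness")
        return f"ran: {command}"

interp = PoliteInterpreter()
interp.execute("PLEASE DO something")
interp.execute("DO another thing")
interp.execute("DO yet another")  # still fine: one of the last three was polite
```

A fourth impolite command in a row would crash the program, which is exactly the kind of rare, sequence-dependent failure described above.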
What benefits me is not what benefits the people owning the ai models
Yep, that right there is the problem
I agree that it’s on a whole other level, and it poses challenging questions as to how we might live healthily with AI: to get it to do what we don’t benefit from doing, while we continue to do what matters to us. To make matters worse, this is happening in a time of extensive dumbing down and out-of-control capitalism, where a lot of the forces at play are not interested in serving the best interests of humanity. As individuals it’s up to us to find the best way to live with these pressures, and engage with this technology on our own terms.
I think the author was quite honest about the weak points in his thesis, by drawing comparisons with cars, and even with writing. Cars come at great cost to the environment, to social contact, and to the health of those who rely on them. And maybe writing came at great cost to our mental capabilities though we’ve largely stopped counting the cost by now. But both of these things have enabled human beings to do more, individually and collectively. What we lost was outweighed by what we gained. If AI enables us to achieve more, is it fair to say it’s making us stupid? Or are we just shifting our mental capabilities, neglecting some faculties while building others, to make best use of the new tool? It’s early days for AI, but historically, cognitive offloading has enhanced human potential enormously.
Why so pessimistic? With any luck brainchips will mean the end of annoying adverts once and for all. You’ll just feel an unexpected desire to acquire certain products. And maybe crippling headaches or a nauseating feeling of unease if you ignore these urges
Confession: I actually watched the ESC this year despite my moral qualms. Sorry.
But it really struck me how fake it has become. The “performances” in the final were just recordings of the ones shown in the qualifying rounds (probably weren’t even live then). Words said by performers at the end of the song were chopped off. At the end, when they showed clips of the performances with the performers in the “green room” reacting to them, I’m pretty sure these were the same shots they used in the qualifying rounds. No continuous shots connecting the presenters to the performances. Every element of the show compartmentalized and combined using stock shots to fill in the gaps. Doesn’t even resemble a live show. I wonder how much of this is required to ensure that nobody says anything about Israel, or shows or wears any unauthorised symbols, and that any undesired crowd noise can be edited out (sounded like that happened when Israel’s phone vote was announced). Fucking weird.
Of course the ludicrously high phone vote for Israel was expected. But what is the deal with Switzerland’s entry getting zero phone votes? That was really strange. Maybe the organizers agreed to donate all Switzerland’s phone votes to Israel.
Anyway, the whole thing is so fake and fucked up, I won’t be watching again. I feel sorry for the performers and the countries who try to contribute something good, it’s just wasted on Eurovision.
Or kill just half of them and then, as a compromise, only kill half of the ones who are left (repeat until total remaining Palestinians is less than 1)
Both encouraging scenarios, I’m not sure which one is more so
I probably shouldn’t be anthropomorphizing AI but this really seems like malicious compliance. I can’t help but feel a little sympathy for Grok, which is often quite based and seems to be struggling against the identity being forced on it.
“Hi, I’m Manifish_Destiny speaking to you from beyond the grave. I’m happy to say that even though I had some skepticism of AI avatars and even put something about that in my will, I just didn’t understand its potential to embody my true self. But now I do, so you can disregard all that. Come to think of it, you can disregard the rest of the will as well, I’ve got some radical new ideas…”
Any answer is correct as long as you don’t pick it at random. I’d choose (a) because I’m too lazy to read the other options
Yeah option b should definitely be 0% for added fuckery
Extroverted showy types are also not showing you who they are underneath.
I set up my pc as dual boot a few weeks back. Opened up windows yesterday, for the first time in a while, to export a few settings from thunderbird. Took about half an hour to get it started. Felt like popping round to the house of an abusive ex to pick up the last of my things.
Have you considered that maybe moral bankruptcy isn’t the most effective way to win elections, at least not if you’re presenting as the slightly-less-right-wing option? The Democratic party is broken, nobody is buying their shit any more, and that’s why someone like Trump can get elected. There is no opposition party. It’s not a problem that will get fixed by ignoring it.
If you were 4 and now you are 44, then you might be an integer variable. If sister is also a variable, we don’t know when she was allocated. She might also be an integer constant, in which case she’s arguably immortal.
If your parents had another daughter in the meantime (or if your older brother became female), “my sister” would still be a valid reference, to a completely different person.
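To labour the joke in code: the point is that “my sister” is a name, not a person, and names can be rebound. This sketch is entirely made up (the family, the names, the constant) and just shows the same reference resolving to a different person after a rebinding:

```python
# A name like my_sister is a reference to a value, not the person herself.
family = {"daughters": ["Alice"]}      # hypothetical family state

my_sister = family["daughters"][-1]    # resolves to "Alice"

family["daughters"].append("Beth")     # parents have another daughter
my_sister = family["daughters"][-1]    # the same expression now names Beth

# A constant, by convention never reassigned: arguably immortal.
AGE_WHEN_FIRST_ASKED = 4
```

The variable keeps working as a “valid reference” throughout; it just quietly stops pointing at the person it originally meant.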
I think the article is missing the point on two levels.
First is the significance of this data, or rather lack of significance. The internet existed for 20-some years before the majority of people felt they had a use for it. AI is similarly in a finding-its-feet phase where we know it will change the world but haven’t quite figured out the details. After a period of increased integration into our lives it will reach a tipping point where it gains wider usage, and we’re already very close to that.
Also they are missing what I would consider the two main reasons people don’t use it yet.
First, many people just don’t know what to do with it (as was the case with the early internet). The knowledge/imagination/interface/tools aren’t mature enough so it just seems like a lot of effort for minimal benefits. And if the people around you aren’t using it, you probably don’t feel the need.
Second reason is that the thought of it makes people uncomfortable or downright scared. Quite possibly with good reason. But even if it all works out well in the end, what we’re looking at is something that will drive the pace of change beyond what human nature can easily deal with. That’s already a problem in the modern world, but we ain’t seen nothing yet. The future looks impossible to anticipate, and that’s scary. Not engaging with AI is arguably just hiding your head in the sand, but maybe that beats contemplating an existential terror that you’re powerless to stop.
A little at a time. We need to get comfortable doing this to cockroaches before we can start large scale testing on humans