Seatbelts - Gotta Knock a Little Harder
An LLM?
Edit: Everything here is of far less significance relative to IRL relationships. The overriding goal of the ML analysis model with a subordinated LLM hasn’t been to create a space for the best mental masturbation, but rather to better focus subsequent human efforts in organizational recruitment for education and praxis.
He always takes the bait. I think it’s funny.
Ignorance is bliss.
What’s meritable often isn’t popular. By what metric should comments be rated?
Many will rate high. By what means can the set be further narrowed?
But, squid, you don’t have any friends on the .ml mod team else you’d have already called in a favor.
Undergrad. You’ve no degree and no work experience. Perhaps the most important thing you’re going to learn from this project is humility.
All his conclusions and logical fallacies align with the neoliberal flavor of the day. Why the fuck would you start a discussion, squid, while claiming this isn’t the place to discuss? Typical.
You left out a core argument against high-priced art: A large proportion of transactions have the underlying intent of money laundering, illegal kickbacks, and tax avoidance.
Don’t tell him his shallow neoliberalism is flawed under penalty of bot down votes and ban. He’s not a healthy person.
A lion sucks if measured as a bird.
Yet, it takes an enormous amount of processing power to produce a comment such as this one. How much would it take to reason why the experiment was structured as it was?
If their existence is a terroristic act, what do you call farmers who breed these creatures on purpose?
Capitalists.
Our mass media can incite fear of chickens, pigs, and cattle. Then their existence itself can be defined as a terrorist act. We’ll redefine vegan to mean only those that eat terrorists to save the other animals. Actual vegans can call themselves “vegetablers”. Nothing changes and everyone feels good because if they don’t feel good then they’re not human.
Objective: To evaluate the cognitive abilities of the leading large language models and identify their susceptibility to cognitive impairment, using the Montreal Cognitive Assessment (MoCA) and additional tests.
Results: ChatGPT 4o achieved the highest score on the MoCA test (26/30), followed by ChatGPT 4 and Claude (25/30), with Gemini 1.0 scoring lowest (16/30). All large language models showed poor performance in visuospatial/executive tasks. Gemini models failed at the delayed recall task. Only ChatGPT 4o succeeded in the incongruent stage of the Stroop test.
Conclusions: With the exception of ChatGPT 4o, almost all large language models subjected to the MoCA test showed signs of mild cognitive impairment. Moreover, as in humans, age is a key determinant of cognitive decline: “older” chatbots, like older patients, tend to perform worse on the MoCA test. These findings challenge the assumption that artificial intelligence will soon replace human doctors, as the cognitive impairment evident in leading chatbots may affect their reliability in medical diagnostics and undermine patients’ confidence.
lumpen
There’s application in responding to requests for information quickly, in a mesh network, perhaps in the presence of bad actors. For example, the medical records of injured US soldiers are stored in and delivered using a blockchain solution.
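The tamper-evidence property that makes a blockchain attractive for records in a hostile network can be sketched minimally: each block’s hash covers both its record and the previous block’s hash, so altering any stored record breaks the chain from that point on. This is an illustrative toy, not the actual military records system; all names here are hypothetical.

```python
import hashlib
import json


def block_hash(record: dict, prev_hash: str) -> str:
    # Hash the record together with the previous block's hash,
    # so each block commits to the entire chain before it.
    payload = json.dumps({"record": record, "prev": prev_hash}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()


class Chain:
    GENESIS = "0" * 64  # placeholder hash for the first block

    def __init__(self):
        self.blocks = []  # list of (record, hash) pairs

    def append(self, record: dict) -> None:
        prev = self.blocks[-1][1] if self.blocks else self.GENESIS
        self.blocks.append((record, block_hash(record, prev)))

    def verify(self) -> bool:
        # Recompute every hash; any edited record invalidates
        # its own block and, implicitly, everything after it.
        prev = self.GENESIS
        for record, h in self.blocks:
            if block_hash(record, prev) != h:
                return False
            prev = h
        return True
```

Any peer holding only the latest hash can detect a rewritten record, which is why the scheme tolerates untrusted intermediaries in a mesh.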
There’s application in a hypothetical currency free from the corruption of governance. For example, an orange President couldn’t print gobs of money during a pandemic, devaluing your currency, then hand that money to corporations.