• 0 Posts
  • 6 Comments
Joined 3 months ago
Cake day: March 14th, 2025

  • This is the power of the algos.

    I had two Reddit accounts, and one was used solely for gaming. It started giving me feeds completely different than my more left, progressive main account.

    On the one linked with gaming, I would get these heavy metal videos about Mexican rapists who were deported (a clear conservative take). They showed Canadians as commie / socialist evil statists. Like a dude’s face - deportation date - what he was convicted of - murder or rape. It was wild.

    And my more liberal main account showed a shit load of Canadian content that linked them with MAGA, and consistently highlighted their crimes against indigenous people, with taglines like “they want to talk about what we do - look at what they do”.

    So I’m not even sure if the blue line ticking up is “I hate Canadians” or “they are magats too”.

    To be clear - am American - hate Americans - hate social media (Lemmy is my last social hope / account left) - am team Canada / Europe all the way.



  • Yeah, so in any space where a caregiver or worker can get fined huge sums of money for not taking adequate action, it should just be illegal for AI to inherit that space, then?

    Because when I worked in the psych space, if I was told X, Y, and Z, I would need to act or, as an individual, face $30,000–100,000 in fines.

    If it’s left to the company, you will just see shell corps housing the AI client-facing hub, which will dissolve when legal critical mass forms and the costs outweigh the revenue wins.

    “We formed LLC psych screen services, who will help our hospital team with mental health call volume!”

    “Psych screen LLC is facing 27 lawsuits and is declaring bankruptcy!”

    “We formed LLC psych now using a different AI tool!”


  • So insurance companies use AI to screen claims.

    It denies a claim for a life-saving intervention - the person dies. Who is responsible for that? Historically it would be the insurance company - and the worker. Would it be them, or the AI company?

    Psych screening services were using it to pre-screen calls.

    AI tells the person to kill themselves - who is at fault if they do it? A psych screener would lose their job and their license. What and who is impacted if AI does it?

    A QA check on a car or product is passed by AI but should have failed.

    Thousands die before the recall. Who is at fault for it? The company leveraging AI, or the AI itself?