Schools and lawmakers are grappling with how to address a new form of peer-on-peer image-based sexual abuse that disproportionately targets girls.
I don’t fully understand how this technology works, but if people are using it to create sexual content of underage individuals, doesn’t that mean the LLM would need to have been trained on sexual content of underage individuals? Seems like going after the company and whatever its source material is would be the obvious choice here
You know how, when you look at a picture of someone and cover up the clothed bits, they look naked? Your brain fills in the gaps with what it knows of general human anatomy.
It’s like that.
I agree with the other comments, but wanted to add how deepfakes work to show how simple they are, and how much less information they need than LLMs.
Step 1: Basically you take a bunch of photos and videos of a specific person and blur out their face.
Step 2: This is the hardest step, but still totally feasible on a decent home computer. You train a neural network to un-blur all the faces for that person. Now you have a neural net that’s really good at turning blurry faces into that particular person’s face.
Step 3: Blur the faces in photos/videos of other people and apply your special neural network. It will turn all the blurry faces into the only face it knows, often with shockingly realistic results.
Cheers for the explanation, had no idea that’s how it works.
So it’s even worse than /u/danciestlobster@lemmy.zip thinks: the person creating the deepfake would then have to have access to CP if they want to deepfake it!
There are adults with bodies that resemble underage people that could be used to train models. Kitty Yung has a body that would qualify. You don’t necessarily need to use illegal material to train to get illegal output.
AI can generate images of things that don’t even exist. If it knows what porn looks like and what a child looks like, it can combine those concepts.
You can probably do it with adult material and swap the faces in. It will most likely work best with a model specifically trained on the person you selected.
People have also put dots on people’s clothing to trick the brain into thinking they are naked; you could probably fill those dots in with the correct body parts if you have a good enough model.
not necessarily. image generation models work on a more fine-grained scale than that. they can seamlessly combine related concepts, like “photograph”+“person”+“small”+“pose” and generate plausible material due to the fact that all of those concepts have features in common.
you can also use small add-on models trained on very little data (tens to hundreds of images, as compared to millions to billions for a full model) to “steer” the output of a model towards a particular style.
you can make even a fully legal model output illegal data.
all that being said, the base dataset that most of the stable diffusion family of models started out with in 2021 is a massive, loosely curated web scrape, so there could very well be bad shit in there. it’s like 12 billion images, so it’s hard to check, and even back with stable diffusion 1.0 there was less than a single bit of data in the final model per image in the dataset.
This is mostly about swapping faces. You take a video and a photo of someone’s face. Software can replace the face of someone in the video with that face. That’s been around for a decade or so. There are other ways of doing it.
When the face belongs to an underage individual, and the video is pornographic…
LLMs only do text.