While welcoming voluntary CSAM scanning, scientists warn that some aspects of the revised bill “still bring high risks to society without clear benefits for children.”
Is there any clarity about what the future with chat control will look like? As in what exactly apps will need to implement.
This part about self-evaluation confuses me:
I assume all chat apps would have to take measures, since generic data can be sent through them, including CSAM. Or could this quote be interpreted otherwise? I wonder what exactly is meant by voluntary then.
Does this “mitigating measure” in practice mean sending a hash of each image sent through the messenger to some service built by Google or Apple for comparison against known CSAM? Since building a database of hashes to compare with is only realistically possible for the largest corporations. Or would the actual image itself have to leave the device, since it could be argued that some remote AI could identify any CSAM, even if it is not yet in any database? Perhaps some locally running AI model could do a decent enough job, so that nothing has to leave the device during the evaluation stage.
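To make the hash-comparison idea concrete, here is a rough sketch of what client-side matching could look like, using the open-source `imagehash` library as a stand-in for proprietary systems like PhotoDNA (which use their own perceptual-hash algorithms). The hash list, file name, and distance threshold below are illustrative assumptions, not anything specified by the bill or any provider; in a real deployment the list of known hashes would come from a centrally maintained database, not be hard-coded on the device.

```python
# Rough sketch of client-side perceptual-hash matching. Perceptual hashes
# (unlike cryptographic ones) tolerate resizing and re-encoding, which is
# why they are used for this kind of matching.
from PIL import Image
import imagehash

# Placeholder values: a real deployment would sync these from an external
# database of hashes of known CSAM.
KNOWN_BAD_HASHES = [
    imagehash.hex_to_hash("8f373714acfcf4d0"),  # illustrative hash value
]
MAX_DISTANCE = 8  # assumed Hamming-distance threshold for a "match"

def outgoing_image_matches(path: str) -> bool:
    """Hash the outgoing image locally and compare it against the known list."""
    h = imagehash.phash(Image.open(path))
    return any(h - known <= MAX_DISTANCE for known in KNOWN_BAD_HASHES)

if outgoing_image_matches("photo_to_send.jpg"):
    print("Match: the image (or a report about it) would be escalated for review")
```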
But then again, there will always be false positives, where an innocent person’s image would be uploaded to… the service provider (like Signal) for review? So you could never be sure that your communication stays private, since the risk of false positives is always there. Regardless of what the solution is, the user has to give up fully owning their device, since this chat control service can always decide to upload your communication somewhere.
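To get a feel for why false positives matter at scale, here is a back-of-envelope sketch; the false positive rate and message volume are purely hypothetical numbers, not figures from the bill or from any provider.

```python
# Hypothetical numbers only: even a tiny false positive rate produces
# many wrongly flagged images at large-messenger scale.
false_positive_rate = 1e-5        # assumed: 1 in 100,000 images wrongly matched
images_per_day = 1_000_000_000    # assumed daily image volume for a large messenger

flagged_innocent_per_day = false_positive_rate * images_per_day
print(f"Innocent images flagged per day: {flagged_innocent_per_day:,.0f}")
# -> 10,000 per day, each potentially uploaded for human review
```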
Maybe you have a messenger within your company. Only employees get access. There’s no risk of minors being involved. Even if chats are private, account association would still be there if one of the two parties reports an issue or a crime. The threat model is low in this case, and beyond being able to handle reported situations, you don’t need to do anything.
Lemmy is free to sign up, for anyone, and has a messaging system. So you will have to think about and assess sign-up guardrails, verification requirements, and the risks that follow from them. Then you have to think about reporting and moderation, and how you can handle those. What kinds of situations and risks are involved? What does due diligence look like in terms of preparation? In terms of monitoring and responding?
When weighing privacy against risks you may conclude “client-side scanning of images” is not warranted. Or you may deem it worthwhile or necessary, or you may not think about it much and just do it to cover your bases, legally or publicity-wise. That’s the “voluntary” part.
You can use it, or you can decide not to. As long as you have assessed the risks and reasonably prepared for them.