While welcoming voluntary CSAM scanning, scientists warn that some aspects of the revised bill “still bring high risks to society without clear benefits for children.”

  • Kissaki@feddit.org · 2 points · edited · 5 hours ago

    I wonder how this will work. If they use third-party services, under the GDPR they’ll have to disclose that data may or will be shared. So at least in that case it should be obvious that end-to-end encryption is not actually in effect, only encryption of a single transfer channel.

    If they scan themselves, I would assume disclosure is also necessary under the GDPR, since they are working with and processing personal data.

    The Council also wants to make permanent a currently temporary measure that allows companies to – voluntarily – scan their services for child sexual abuse. At present, providers of messaging services, for instance, may voluntarily check content shared on their platforms for online child sexual abuse material, and report and remove it. This is allowed thanks to an exemption from certain rules specific to the electronic communications sector. Although this exemption is due to expire on 3 April 2026, according to the Council position, it will continue to apply. [src]

    So apparently we already had the exemption active. I just hope this exemption does not remove the requirement for transparency about systematic handling of data.

    On the basis of today’s agreement, the Council can start negotiations with the European Parliament with a view to agreeing on the final regulation. The European Parliament reached its position in November 2023.

    • verdi@feddit.org · 2 points · 6 hours ago

      You don’t get to have billionaires in a 6M ppl (and 25M pigs) country by playing fair, being honest and abiding by rules.

  • CubitOom@infosec.pub · 41 points · 1 day ago

    At the beginning of the month, the Danish Presidency decided to change its approach with a new compromise text that makes the chat scanning voluntary, instead

    I give it 2 years before it becomes mandatory.

    • DacoTaco@lemmy.world · 11 points · 24 hours ago

      It’s “voluntary”. You can choose not to do it, but at the same time you have to “reduce any risk on your platform” and “prevent it from being abused”. So you can choose not to scan texts, but you can still be fined under the law for not reducing the risk of your platform being used for criminal acts.

      The new chat control is a piece of shit, like it always was. It’s also interesting to read the politicians in the Netherlands talk about it; they were lucky to have somebody there who understood the risks (they were opposed because the law is extremely vague and contains paradoxes).

    • Kami@lemmy.dbzer0.com · 24 points · 1 day ago

      Even if it doesn’t, from now on we should assume that our privacy is always being violated.

      • ulterno@programming.dev · 10 points · 1 day ago

        Yes.
        This essentially gives a free pass to anyone wanting to do whatever they like with your data without your consent.

  • 🦄🦄🦄@feddit.org · 7 points · 23 hours ago

    Could e.g. Signal offer a text-only version of their app, thereby excluding the possibility of CSAM being shared?

  • simon@slrpnk.net · 2 points · 23 hours ago

    Is there any clarity about what the future with chat control will look like? As in, what exactly will apps need to implement?

    This part about self evaluation confuses me:

    Under the new rules, online service providers will be required to assess how their platforms could be misused and, based on the results, may need to “implement mitigating measures to counter that risk,” the Council notes.

    I assume all chat apps would have to take measures, since generic data can be sent through them, including CSAM. Or could this quote be interpreted otherwise? I wonder what exactly is meant by voluntary then.

    Does this “mitigating measure” in practice mean sending a hash of each image sent through the messenger to some service built by Google or Apple for comparison against known CSAM? After all, building a database of hashes to compare against is realistically possible only for the largest corporations. Or would the actual image itself have to leave the device, since it could be argued that some remote AI could identify any CSAM, even if it is not yet in any database? Perhaps some locally running AI model could do a decent enough job, so that nothing has to leave the device during the evaluation stage.
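
    The hash-comparison idea above can be sketched as follows. This is a minimal, hypothetical illustration: the hash list and function name are made up, and real deployments (PhotoDNA-style systems) use perceptual hashes that survive re-encoding and resizing, whereas a plain cryptographic hash like this only catches byte-identical copies.

```python
import hashlib

# Hypothetical set of SHA-256 digests of known material, as a provider
# might receive from a clearing house. For illustration only: the entry
# below is simply the digest of the bytes b"foo".
KNOWN_HASHES = {
    "2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e886266e7ae",
}

def matches_known_material(image_bytes: bytes) -> bool:
    """Return True if the image's SHA-256 digest is in the database."""
    return hashlib.sha256(image_bytes).hexdigest() in KNOWN_HASHES

print(matches_known_material(b"foo"))            # True
print(matches_known_material(b"anything else"))  # False
```

    Exact matching like this breaks as soon as an image is re-compressed, which is exactly why real systems rely on fuzzy perceptual hashes, and why false positives become possible at all.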

    But then again, there will always be false positives, where an innocent person’s image would be uploaded to… the service provider (like Signal) for review? So you could never be sure that your communication stays private, since the risk of false positives is always there. Regardless of what the solution is, the user will have to give up full ownership of their device, since this chat control service can always decide to upload your communication somewhere.

    • Kissaki@feddit.org · 1 point · 5 hours ago

      I assume all chat apps would have to take measures, since generic data can be sent through them, including CSAM. Or could this quote be interpreted otherwise? I wonder what exactly is meant by voluntary then.

      Maybe you have a messenger within your company. Only employees get access. There’s no risk of minors being involved. Even if chats are private, account association would still be there if one of the two communicating parties reports issues or crimes. The risk would be low in this case, and beyond being able to handle reported situations, you don’t need to do anything.

      Lemmy is free to sign up, for anyone, and has a messaging system. So you will have to think about and assess sign-up guardrails, requirements and verification, and consequential risks involved. Then you have to think about reporting and moderation, and how you can handle those. What kind of situations and risks are involved? What does due diligence in terms of preparation look like? In terms of monitoring and responding to it?

      When weighing privacy against risks, you may conclude that “client-side scanning of images” is not warranted. Or you may deem it worthwhile or necessary, or you may not think about it much and just do it to cover your bases, legally or in terms of publicity. That’s the “voluntary” part.

      You can use it, or you can decide not to. As long as you have assessed the risks and reasonably prepared for them.