
Matrix is software, not a provider. It's out of scope (see point 1 on page 37 for the scope).

see page 39 "5. Without prejudice to Article 10a, this Regulation shall not prohibit or make impossible end-to-end encryption, implemented by the relevant information society services or by the users"

https://cdn.netzpolitik.org/wp-upload/2024/05/2024-05-28_Cou...

see also page 46

"... measures shall be ... targeted and proportionate in relation to that risk, taking into account, in particular, the seriousness of the risk as well as the provider’s financial and technological capabilities and the number of users; ..."

also, it's a big framework without any tech requirements (see page 8 recital 17)



You only really answer question 2 of your parent, and they obviously meant someone operating a Matrix server with regard to their users. It's pretty well summarized on Patrick Breyer's summary page[0]:

> Only non-commercial services that are not ad-funded, such as many open source software, are out of scope

> How do you even ensure a client is actually self-reporting?

This is an interesting technical question, whether or not it's covered by the actual proposal. How do you, for Messenger for instance,

1. ensure the client is actually doing the reporting, and that someone isn't simply bypassing the app to keep sending E2EE chats without them being client-side scanned? That would most likely be against the ToS, and accounts might get banned for doing so.

2. prevent spam reporting, where someone could basically DoS the reporting service with false positives? (A rough sketch of both problems follows.)
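
For illustration only, here's a minimal sketch of why both points are hard (Python, all names hypothetical; the proposal doesn't specify any API like this): the scan runs on the client, so a patched client can simply skip it, and the reporting endpoint has no way to tell a genuine match from a fabricated one.

  import random
  from dataclasses import dataclass

  @dataclass
  class Report:
      sender_id: str
      content_hash: str  # e.g. a perceptual hash of the flagged media

  def scan(media: bytes) -> bool:
      """Stand-in for a client-side classifier / hash-database match."""
      return random.random() < 0.001  # placeholder match rate

  def reporting_service(report: Report) -> None:
      """Stand-in for the reporting endpoint (e.g. the EU Centre).
      It only ever sees what clients choose to send: it cannot verify that
      a report corresponds to real media, nor that unreported messages were
      ever scanned at all.
      """
      print(f"received report from {report.sender_id}: {report.content_hash}")

  def official_client_send(sender_id: str, media: bytes) -> None:
      # Point 1, compliant path: scan locally, report on a match, then send.
      if scan(media):
          reporting_service(Report(sender_id, content_hash=str(hash(media))))
      ciphertext = media  # E2EE step elided
      # ... deliver `ciphertext` to the recipient ...

  def modified_client_send(sender_id: str, media: bytes) -> None:
      # Point 1, bypass: a patched or third-party client just omits scan();
      # the server cannot tell "scanned, no match" from "never scanned".
      ciphertext = media  # E2EE step elided
      # ... deliver `ciphertext` to the recipient ...

  # Point 2, spam/DoS: fabricated reports look exactly like genuine ones.
  for i in range(3):
      reporting_service(Report(sender_id="troll", content_hash=f"bogus-{i}"))

The usual answer to point 1 is some form of device or client attestation, which comes with its own problems for third-party and open-source clients; for point 2, the endpoint is left guessing unless reports carry something it can verify.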

> If a photo is flagged, will it appear in a GDPR access request?

There are a bunch of provisions in the draft concerning personal data protection (Ctrl+F "personal data" to find the relevant articles). It also states pretty much everywhere that processing must be done in accordance with Regulation (EU) 2016/679, more commonly known as the GDPR.

[0] https://www.patrick-breyer.de/en/posts/chat-control/

What really bugs me though, is this:

> Having regard to the availability of technologies that can be used to meet the requirements of this Regulation whilst still allowing for end-to-end encryption, nothing in this Regulation should be interpreted as prohibiting, requiring to disable, or making end-to-end encryption impossible. Providers should remain free to offer services using end-to-end encryption and should not be obliged by this Regulation to decrypt data or create access to end-to-end encrypted data

I believe this was added at the request of France, which didn't want E2EE to be undermined by this proposal. However, the provider would need to "create access to end-to-end encrypted data" to report it to the EU Centre. The following article does state that E2EE can still be used if you don't send images, videos, or URLs, so I guess that's the compromise?


> However, the provider would need to "create access to end-to-end encrypted data" to report it to the EU Centre.

Sorry, I don't follow. Am I misreading something? To me the quoted text says the opposite.

"Providers should remain free to [...] and should not be obliged by this Regulation to [...] create access to end-to-end encrypted data"

> prevent spam reporting, where someone could basically DoS the reporting service with false positives

Yep, there's probably no way to do this. (Likely this whole thing will end up being a lot of money spent just to realize that.)


> Sorry, I don't follow. Am I misreading something? To me the quoted text says the opposite.

Yeah, me too. But how would the provider report CSAM content if they are not obliged to break encryption? I don't really follow the Regulation on that part.


It wouldn't.

It's a broad framework. Based on my cursory reading:

  - providers have to set up a counter-abuse team and fund it
  - authorities and industry cooperate on coming up with guidelines and tech
  - the counter-abuse team has to interpret the guidelines and do "due diligence"
  - the provider needs monitoring to at least have an idea of its abuse risks
  - if there are risks, it has to work on addressing them, if possible without breaking privacy

As far as I understand, the point is to have more services like "YouTube Kids", where you can give your kid an account and they can only see stuff tagged as kid-appropriate (and YT simply said they'd make sure there are no bad comments, so there's no comment section for those videos, which hurts engagement, which hurts profitability).

There's a section about penalties and fines, up to 6% of global revenue, if the provider doesn't take abuse seriously. And, again based on my understanding, this is exactly meant to prod big services into offering these "safer, but less profitable" options.



