MSC4228: Search Redirection #4228
Conversation
Implementation requirements:
- Client
- Server
This is an amazing idea! Is there any potential harm in implementing the 403 on MRS right now, without support from the major server and client apps?
For the federation endpoint specifically, the local user SHOULD have the remote server's error proxied straight through to them, however some implementations may prefer to replace the error before serving it to their users. This can help reduce the potential of remote Cross-Server Scripting (XSS) attacks.
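The proxy-or-replace choice described above can be sketched in a few lines. This is a hypothetical illustration, not the actual MSC or any homeserver's implementation; the function name and the replacement message are assumptions, but the `errcode`/`error` shape follows the standard Matrix error body.

```python
# Hypothetical sketch of sanitising a remote server's error body before
# serving it to a local user, per the quoted proposal text. The replacement
# message below is illustrative, not mandated by the MSC.
import html
import json


def sanitise_remote_error(raw_body: bytes, replace: bool = False) -> dict:
    """Parse a remote error body; optionally replace it wholesale."""
    try:
        error = json.loads(raw_body)
    except json.JSONDecodeError:
        error = {}
    if replace or "errcode" not in error:
        # Serve a locally-controlled message instead of the remote one,
        # so no attacker-controlled text reaches local clients.
        return {"errcode": "M_FORBIDDEN", "error": "Search is not permitted."}
    # Otherwise proxy the error through, escaping the free-text message
    # defensively to reduce the XSS surface mentioned above.
    return {
        "errcode": str(error.get("errcode", "M_UNKNOWN")),
        "error": html.escape(str(error.get("error", ""))),
    }
```

Whether to proxy or replace is left to the implementation; the quoted text only recommends proxying as the default.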
Is there any guidance on when a server may wish to replace it?
Specific error codes are a potential alternative, however due to the wide variety of illegal material and jurisdictions, this proposal has determined that a single, generic error code with a specific message more easily covers the use cases.
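For concreteness, the kind of response body the paragraph above describes is a generic `M_FORBIDDEN` errcode paired with a server-chosen human-readable message. The message wording below is an assumption for illustration, not text taken from the MSC.

```python
# Illustrative only: a generic M_FORBIDDEN error body with a specific,
# server-chosen message, as the quoted rationale describes.
import json

forbidden_response = {
    "errcode": "M_FORBIDDEN",
    "error": "Searching for this term is not permitted on this server.",
}

# Serialise as it would appear on the wire.
wire = json.dumps(forbidden_response)
```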
This made me think that it was at least adding an error message for illegal content, but I see it is a generic "FORBIDDEN" -- I guess the rationale is that this can be applied for many different cases.
This made me think that it was at least adding an error message for illegal content, but I see it is a generic "FORBIDDEN".[1]
@clokep, implementations like invent.kde.org/network/neochat/-/merge_requests/2023#note_1079171 at least use specific error messages.
Footnotes
1. I meant "error code", not "error message"; sorry for the confusion.
I have implemented the client side of this MSC in NeoChat: https://invent.kde.org/network/neochat/-/merge_requests/2023 (for now without support for MSC4176).
See matrix-org/matrix-spec-proposals#4228 for details. Since this is tricky to test without server-side support, I have added a basic implementation to the mock server in appiumtests/login-server.py:
1. Start appiumtests/login-server.py
2. Start neochat with the "--test --ignore-ssl-errors" options
3. Open "Explore Rooms"
4. Search for the exact string "forbidden"
5. See the new error message provided by the server
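The mock-server behaviour described in those steps can be sketched as follows. This is an assumed illustration, not the actual appiumtests/login-server.py: a handler for the public-rooms search endpoint that returns a 403 with an `M_FORBIDDEN` body when the search term is exactly "forbidden". The error message text and port are assumptions.

```python
# Minimal sketch (assumed, not the real login-server.py) of a mock
# homeserver that rejects the search term "forbidden" with a 403, so a
# client like NeoChat can surface the server-provided error message.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


def search_response(body: dict) -> tuple:
    """Return (HTTP status, JSON payload) for a publicRooms search request."""
    term = body.get("filter", {}).get("generic_search_term", "")
    if term == "forbidden":
        # Generic errcode plus a specific, server-chosen message.
        return 403, {"errcode": "M_FORBIDDEN",
                     "error": "This search term is not allowed."}
    # Empty, well-formed search result for everything else.
    return 200, {"chunk": [], "total_room_count_estimate": 0}


class MockSearchHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = json.loads(self.rfile.read(length) or b"{}")
        status, payload = search_response(body)
        data = json.dumps(payload).encode()
        self.send_response(status)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(data)))
        self.end_headers()
        self.wfile.write(data)


# To serve locally (blocks):
# HTTPServer(("127.0.0.1", 8008), MockSearchHandler).serve_forever()
```

Searching for any other string returns an empty result set, so only the exact "forbidden" query exercises the new error path, matching step 4 above.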
A common approach for tackling abuse is to prevent the content from being presented to users in any way, disincentizing the use of the platform for sharing that particular type of content. The common
- disincentizing
+ disincentivizing
Warning
Content Warning: This proposal discusses mechanisms to reduce searches for illegal or harmful content on a homeserver. This proposal links to research which discusses the impact of Child Sexual Abuse Material (CSAM).
Given the sensitive nature of the topic, comments, suggestions, and concerns may be sent directly to the author. It is important that all members of our community contribute to a safe and positive review atmosphere.
The author can be reached on Matrix at @travis:t2l.io or via email at [email protected]. If you prefer to contact the Trust & Safety (T&S) team instead, please email [email protected]. The author is a member of the T&S team, and will ensure a different member of the team reviews [email protected] emails.
Rendered
Disclosure: I am Director of Standards Development at The Matrix.org Foundation C.I.C., Matrix Spec Core Team (SCT) member, employed by Element, and operate the t2bot.io service. This proposal is written and published as a Trust & Safety team member allocated in full to the Foundation.