Happened upon the Stanford Cyberlaw Center, and notably a post on transparency laws. Here is an excerpt, which leads to more ... I had not taken a look there for some time. Note the mention and criticism of Facebook's rules. Worth covering.
The Center for Internet and Society at Stanford Law School is a leader in the study of the law and policy around the Internet and other emerging technologies.
State abuse of Transparency Laws and How to Stop it
By Daphne Keller on September 19, 2022 at 3:49 pm
Around the world, new laws are requiring improved transparency from major Internet platforms about their content moderation. This is generally a very positive development. But it also raises important questions about what kinds of disclosures we expect from platforms -- and what kinds of enforcement we expect from governments. This post uses five concrete examples to illustrate the complexity of these disclosures. The examples also illustrate what I think is a very real risk: that state enforcers may abuse transparency laws, using them to reshape platforms’ actual policies. That is a threat not only to platforms’ editorial and speech rights, but to the rights of all their users. I think it should be possible to mitigate this risk. But we can only do so if we recognize it.
I have written before about the complexity of counting content moderation actions for aggregate transparency reports. This post addresses the seemingly simpler task of describing speech policies or enforcement decisions. This is also complicated, in ways that should come as no surprise to lawyers, parents of young children, or anyone else who has tried to explain and apply rules to disputatious parties. The degree of detail that platforms could include in their explanations is vast -- perhaps not as infinite as the diversity of human misbehavior on the Internet, but the two are certainly correlated. Facebook’s rules, for example, run to over a hundred pages, but are widely criticized as insufficiently clear or detailed. ...