I hope you understand why nobody here takes the mods seriously. There will be no more condescending threads where rules that will not be enforced are handed down from on high.
Good night, and good luck.
EDIT: In fact this thread needs to be deleted:
Originally posted by wxyz
Time to break up Google, Twitter, Facebook?
No. Just regulate them. If they want to engage in content editing of any kind, then they must face the financial consequences of unfair practices.
If they stop censoring and curating content, then they no longer have to abide by publisher rules.
It's a catch-22 situation.
If they stop editing content, they don't lose the lawsuits and don't get regulated.
Since they cannot stop, they will get the regulation, lose the cases, and go bankrupt.
Originally posted by wxyz
What special protections do they have?
Section 230 of the Communications Decency Act grants tech companies immunity for content posted on their sites by third parties.
http://www.law.cornell.edu/uscode/text/47/230
Section 230: "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider".
Why do we need Section 230 immunity for content hosts like Facebook or Twitter? Because it would be prohibitively expensive for those providers to constantly curate their content to ensure that nothing published on their platforms exposes them to tort liability, such as libel.
But...
Here's the problem:
If a platform actually DOES actively censor its content, ostensibly to protect both the participants on its platform and the potential targets of that content from litigation, then it opens the door to being treated as a publisher, and the immunity becomes unnecessary. It is actively demonstrating that it wants to act as a publisher, and that brings it under the umbrella of publisher obligations governing what actions it can and cannot take. (For example, a newspaper can be sued over a defamatory letter to the editor that it chooses to print; a phone company cannot be sued over what its callers say.) And this is where the crux of the argument is.
If they demonstrate that they have taken "due care" actions toward the content, then they must apply that standard equally and fairly across their platform. The point of Section 230 is to spare them those due care steps, so that companies are not sued over user content that is not illegal but may still be tortious, such as defamation. In tort law, evidence of that kind of editorial control undercuts a defense against the lawsuit. If they take actions that show they do not need Section 230 protections, because they are doing exactly what Section 230 was created so they would not have to do, then they open themselves up to lawsuits over their censorship activities.
So people are pushing (content hosts are pushing for this too, because it would save them money and litigation) for a modification to Section 230 that would clarify that only the removal of illegal (criminal) content is required. This would actually move the regulation to a less restrictive state and clarify which "due care" activities an organization must undertake: removing illegal content, and nothing more.
There is a difference between the concept of Free Speech and the First Amendment, which contains a free speech clause.
The First Amendment constrains only the government. Free Speech, as a concept, is the unfettered right to speak in all situations. Some people think the First Amendment should be amended to fit the broader concept of Free Speech rather than apply only to the government.
I think reinterpreting Section 230 to require companies to remove only illegal content is the best approach. It will save companies the most money in the long term.
There are other nuances to this topic. One is the argument that monopolies on some of these platforms make censorship of any non-illegal content a true First Amendment violation (the complexity of that particular argument is pretty ridiculous, and it would take an hour or three just to explain; I do not like this argument, but I understand why some make it). Another is the inappropriate use of "hate speech" as an extremely broad category, which is clearly hotly debated.