The Brazilian Supreme Court is on the verge of deciding whether digital platforms can be held liable for third-party content even without a judicial order requiring its removal. A panel of eleven justices is examining two cases jointly, and one of them directly challenges whether Brazil’s internet intermediary liability regime for user-generated content aligns with the country’s Federal Constitution. The outcome of these cases could seriously undermine important free expression and privacy safeguards if it leads to general content monitoring obligations or broadly expands notice-and-takedown mandates.
The court’s examination revolves around Article 19 of Brazil’s Civil Rights Framework for the Internet (“Marco Civil da Internet”, Law n. 12.965/2014). The provision establishes that an internet application provider can only be held liable for third-party content if it fails to comply with a judicial order to remove the content. A notice-and-takedown exception to the provision applies in cases of copyright infringement, unauthorized disclosure of private images containing nudity or sexual activity, and content involving child sexual abuse. The first two exceptions are in Marco Civil, while the third one comes from a prior rule in the Brazilian child protection law.
The court’s decision will set a precedent for lower courts on two main questions: whether Marco Civil’s internet intermediary liability regime aligns with Brazil’s Constitution, and whether internet application providers have an obligation to monitor the online content they host and remove it when deemed offensive, without judicial intervention. Moreover, it could have a regional and cross-regional impact, as lawmakers and courts look across borders at platform regulation trends amid global coordination initiatives.
After a public hearing held last year, the court’s sessions on the cases started in late November and, so far, only Justice Dias Toffoli, who is in charge of the Marco Civil constitutionality case, has concluded the presentation of his vote. The justice declared Article 19 unconstitutional and established the notice-and-takedown regime set in Article 21 of Marco Civil, which relates to the unauthorized disclosure of private images, as the general rule for intermediary liability. According to his vote, the determination of liability must consider the activities the internet application provider has actually carried out and the degree of interference of those activities.
However, platforms could be held liable for certain content regardless of notification, which amounts to a monitoring duty. Examples include content constituting criminal offenses, such as crimes against the democratic state, human trafficking, terrorism, racism, and violence against children and women. The list also covers the publication of notoriously false or severely miscontextualized facts that lead to violence or have the potential to disrupt the electoral process. Where there’s reasonable doubt, the notice-and-takedown rule under Marco Civil’s Article 21 would be the applicable regime.
The court session resumes today, but it’s still uncertain whether all eleven justices will reach a judgment by year’s end.
Some Background About Marco Civil’s Intermediary Liability Regime
The legislative intent back in 2014 to establish Article 19 as the general rule for internet application providers' liability for user-generated content reflected civil society’s concerns over platform censorship. Faced with the risk of being held liable for user content, internet platforms generally prioritize their economic interests and security over preserving users’ protected expression, and over-remove content to avoid legal battles and regulatory scrutiny. The enforcement overreach of copyright rules online was already a problem when the legislative discussion of Marco Civil took place. Lawmakers chose to rely on courts to balance the different rights at stake in removing or keeping user content online. The approval of Marco Civil had wide societal support and was considered a win for advancing users’ rights online.
The provision was in line with the recommendations of the Special Rapporteurs for Freedom of Expression of the United Nations and the Inter-American Commission on Human Rights (IACHR). In that regard, the then IACHR Special Rapporteur had clearly remarked that a strict liability regime creates strong incentives for private censorship and would run against the State’s duty to favor an institutional framework that protects and guarantees free expression under the American Convention on Human Rights. Notice-and-takedown regimes as the general rule also raised concerns of over-removal and the weaponization of notification mechanisms to censor protected speech.
A lot has happened since 2014. Big Tech platforms have consolidated their dominance, the internet ecosystem has grown more centralized, and algorithmic mediation of content distribution online has intensified, increasingly relying on a corporate surveillance structure. Nonetheless, the concerns Marco Civil reflects remain relevant, just as the balance its intermediary liability rule struck persists as a proper way of tackling them. As for current challenges, the changes to the liability regime suggested in Justice Toffoli’s vote would likely reinforce, rather than reduce, corporate surveillance, Big Tech’s predominance, and digital platforms’ power over online speech.
The Cases Under Trial and The Reach of the Supreme Court’s Decision
The two individual cases under analysis by the Supreme Court are more than a decade old. Both relate to the right to honor. In the first one, the plaintiff, a high school teacher, sued Google Brasil Internet Ltda to remove an online community created by students to offend her on the now-defunct Orkut platform. She asked for the deletion of the community and compensation for moral damages, as the platform didn't remove the community after an extrajudicial notification. Google deleted the community following the lower court’s decision, but the judicial dispute over the compensation continued.
In the second case, the plaintiff sued Facebook after the company didn’t remove an offensive fake account impersonating her. The lawsuit sought the removal of the fake account, the identification of the account’s IP address, and compensation for moral damages. As Marco Civil had already passed, the judge denied the request for moral damages. Yet the appeals court found that Facebook could be held liable for not removing the fake account after an extrajudicial notification, deeming Marco Civil’s intermediary liability regime unconstitutional vis-à-vis Brazil’s constitutional protection of consumers.
Both cases made their way up to the Supreme Court in two separate extraordinary appeals, now examined jointly. For the Supreme Court to analyze extraordinary appeals, it must identify and approve a “general repercussion” issue that unfolds from the individual case. As such, what’s under analysis by the Brazilian Supreme Court in these appeals is not only the individual cases, but also the court’s understanding of the general repercussion issues involved. What the court stipulates in this regard will guide lower courts’ decisions in similar cases.
The two general repercussion issues under scrutiny are, then, the constitutionality of Marco Civil’s internet intermediary liability regime and whether internet application providers have the obligation to monitor published content and take it down when considered offensive, without judicial intervention.
There’s a lot at stake for users’ rights online in the outcomes of these cases.
The Many Perils and Pitfalls on the Way
Brazil’s platform regulation debate has heated up in the last few years. Concerns over the gigantic power of Big Tech platforms, the negative effects of their attention-driven business model, and revelations of plans and actions by the previous presidential administration to arbitrarily remain in power have inflamed discussions about regulating Big Tech. As its main vehicle, draft bill 2630 (PL 2630), didn’t move forward in the Brazilian Congress, the Supreme Court’s pending cases gained traction as the available alternative for introducing changes.
We’ve written about intermediary liability trends around the globe, how to move forward, and the risks that changes to safe harbor regimes end up reshaping intermediaries’ behavior in ways that ultimately harm freedom of expression and other rights of internet users.
One of these risks is relying on strict liability regimes to moderate user expression online. Holding internet application providers liable for user-generated content regardless of a notification means requiring them to put in place systems of content monitoring and filtering, with automated takedowns of potentially infringing content.
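To make that mechanism concrete, here is a minimal sketch, in Python, of what upload-time filtering under a strict liability regime tends to reduce to. Everything in it is hypothetical: the keyword "classifier" is a stand-in for whatever automated detection a platform might deploy, and the function names and threshold are ours, not any platform’s actual system.

```python
# Hypothetical sketch of upload-time filtering under strict liability.
# classify_risk() stands in for any automated content classifier;
# the names and threshold are illustrative, not a real platform's API.

def classify_risk(post_text: str) -> float:
    """Guess, as a score in [0, 1], how likely a post is to be 'infringing'."""
    # A real system would be an ML model; this placeholder just
    # counts watched keywords.
    watched = {"protest", "massacre", "police"}
    hits = sum(word in post_text.lower() for word in watched)
    return min(1.0, hits / 2)

def handle_upload(post_text: str, takedown_threshold: float = 0.5) -> str:
    # Under strict liability, the platform is exposed the moment the
    # content goes up, so the only safe move is to block at upload:
    # no notice, no judge, no author in the loop.
    if classify_risk(post_text) >= takedown_threshold:
        return "blocked"
    return "published"

# Documentation of abuses gets swept up with everything else:
print(handle_upload("Video: police repression of the protest"))  # blocked
```

The placeholder cannot tell a denunciation of violence from violent content itself, which is precisely how the real-world over-removals described below happen.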
While platforms like Facebook, Instagram, X (formerly Twitter), TikTok, and YouTube already use AI tools to moderate and curate the sheer volume of content they receive every minute, the resources they have for doing so are not available to other, smaller internet application providers that host users’ expression. Making automated content monitoring a general obligation would likely intensify the concentration of the online ecosystem in just a handful of large platforms. Strict liability regimes also inhibit or even endanger the existence of less-centralized content moderation models, contributing yet again to entrenching Big Tech’s dominance and business model.
But the fact that Big Tech platforms already use AI tools to moderate and restrict content doesn’t mean they do it well. Automated content monitoring is hard at scale, and platforms constantly fail at purging content that violates their rules without sweeping up protected content. In addition to historical issues with AI-based detection of copyright infringement that have deeply undermined fair use rules, automated systems often flag and censor crucial information that should stay online.
To give a few examples: during the wave of protests in Chile, internet platforms wrongfully restricted content reporting the police’s harsh repression of demonstrations, having deemed it violent content. In Brazil, we saw similar concerns when Instagram censored images of the Jacarezinho community’s massacre in 2021, which was the most lethal police operation in Rio de Janeiro’s history. In other geographies, the quest to restrict extremist content has removed videos documenting human rights violations in conflicts in countries like Syria and Ukraine.
These are all examples of content similar to what could fit into Justice Toffoli’s list of speech subject to a strict liability regime. And while this regime shouldn’t apply in cases of reasonable doubt, platform companies are unlikely to risk keeping such content up out of concern that a judge later decides it wasn’t a reasonable-doubt situation and orders them to pay damages. Digital platforms thus have a strong incentive to calibrate their AI systems to err on the side of censorship. And depending on how these systems operate, it means a strong incentive for conducting prior censorship that potentially affects protected expression, which defies Article 13 of the American Convention.
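The calibration incentive can be shown with a toy expected-cost calculation (all numbers invented for illustration): if a court can order damages for content left up, but wrongful removal of lawful speech costs the platform nothing, the "optimal" takedown threshold collapses toward zero.

```python
# Toy cost model of threshold calibration; every number is invented.
# cost_miss: damages if a judge later finds the platform should have acted.
# cost_wrongful: the platform's own cost for censoring lawful speech (~0).

def expected_cost(threshold: float, cost_miss: float = 100.0,
                  cost_wrongful: float = 0.0) -> float:
    # Crude stand-ins: raising the threshold misses more liable content,
    # lowering it removes more lawful posts.
    false_negative_rate = threshold
    false_positive_rate = 1.0 - threshold
    return (false_negative_rate * cost_miss
            + false_positive_rate * cost_wrongful)

# With damages on only one side of the ledger, the cheapest strategy
# is to remove everything the classifier is unsure about:
best = min((expected_cost(t / 10), t / 10) for t in range(11))
print(best)  # (0.0, 0.0) -> threshold zero, maximal over-removal
```

Any positive cost for wrongful removal, whether reputational, regulatory, or imposed through procedural guarantees, pushes the minimum away from zero; that asymmetry is the economic core of the over-removal problem.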
Setting the notice-and-takedown regime as the general rule for an intermediary’s liability also poses risks. While under this regime the provider has the chance to analyze a notice and decide whether to keep the content online, the incentive, again, is to err on the side of taking it down to avoid legal costs.
Brazil’s own experience in courts shows how tricky the issue can be. InternetLab’s research based on rulings involving free expression online indicated that Brazilian appeals courts denied content removal requests in more than 60% of cases. The Brazilian Association of Investigative Journalism (ABRAJI) has also highlighted data showing that, at some point in judicial proceedings, judges agreed with content removal requests in around half of the cases, and some of those decisions were later reversed. This is especially concerning in honor-related cases. The more influential or powerful the person involved, the higher the chances of arbitrary content removal, flipping the public-interest logic of preserving access to information. We should also not forget the companies that thrived by offering reputation management services built upon the use of takedown mechanisms to disappear critical content online.
It’s important to underline that this ruling comes in the absence of digital procedural justice guarantees. While Justice Toffoli’s vote asserts platforms’ duty to provide specific notification channels, preferably electronic, to receive complaints about infringing content, there are no further specifications to prevent the misuse of notification systems. Article 21 of Marco Civil sets out that notices must allow the specific identification of the contested content (generally understood as the URL) and include elements to verify that the complainant is the person offended. Beyond that, there is no further guidance on which details and justifications the notice should contain, or on whether the content’s author would have the opportunity, and a proper mechanism, to respond or appeal to the takedown request.
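For illustration, here is how a notice meeting Article 21’s two statutory elements could be represented, in our reading of the paragraph above; the field names are hypothetical, and there is no official schema. What’s notable is what the structure lacks: any required statement of grounds and any channel for the content’s author to respond.

```python
# Sketch of an Article 21-style takedown notice as described above.
# Field names are our own; the law defines no such schema.
from dataclasses import dataclass

@dataclass
class TakedownNotice:
    content_url: str           # "specific identification" of the contested content
    complainant_identity: str  # elements verifying the notifier is the person offended
    # Conspicuously absent: no required justification for the complaint,
    # and no field through which the content's author could respond to
    # or appeal the takedown.

def is_valid(notice: TakedownNotice) -> bool:
    # Minimal validation mirroring the two statutory requirements.
    return bool(notice.content_url) and bool(notice.complainant_identity)
```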
As we said before, we should not conflate platform accountability with reinforcing digital platforms as points of control over people’s online expression and actions. This is a dangerous path considering the power big platforms already hold and the increasing intermediation of digital technologies in everything we do. Unfortunately, the Supreme Court seems to be taking a direction that will emphasize such a role and dominant position, also creating additional hurdles for smaller platforms and decentralized models to compete with the current digital giants.