What if Europe banned ranking algorithms on social networks? This fictional scenario, less and less fictional, forces us to rethink in depth what it means to be «visible» on the Internet.
This prospective article imagines the consequences of such a scenario (for the moment fictitious). It may seem far removed from the topics we usually cover on the Badsender blog, and yet I think it's essential to sometimes think a little beyond email. Maybe, after all, the last 20 years have been anomalies, and decentralized, open networks could once again become the norm. A standard from which email has never really strayed... despite the attempts of some.
What would be the consequences of this radical change for the digital communications ecosystem? Does emailing have something to teach us? What emerging solutions and changes in practice are needed? Let's take a trip.
The weak signal that becomes a strong signal
In November 2025, the European Parliament adopted a non-legislative report calling for a ban on engagement-based recommendation algorithms for minors. Infinite scrolling, autoplay, pull-to-refresh, reward loops: MEPs voted 483 in favor, 92 against. A few weeks later, the Council of Europe Commissioner for Human Rights published a note calling for «regulation of platforms, not children», demanding algorithmic transparency, independent audits and structural accountability, rather than changes to the legal age of access to platforms.
In Spain, Prime Minister Pedro Sánchez announced his intention to make algorithmic manipulation a criminal offence. In Australia, a ban on social networking for under-16s came into force in December 2025. In Germany, the SPD proposed a «youth version» of platforms without addictive algorithms. In France, the National Assembly voted in favor of a bill banning access to social networks for under-15s.
This is not science fiction. It's the news of recent months.
Now let's look a little further ahead. Let's imagine that political, health and citizen pressure crosses a decisive threshold. Let's imagine that the European Union, armed with its global regulatory clout - the same clout that imposed the GDPR on the entire planet - decides to go all the way: a total ban on algorithms used to rank publications on social networks. Back to the pure chronological feed. Neutrality of information classification.
What would that change? For platforms? For brands? For email?
Spoiler: it would change absolutely everything.
Why it's plausible
The legislative path already exists. The Digital Services Act (DSA), applicable to very large platforms since August 2023 and fully in force since February 2024, requires them to assess the systemic risks of their recommendation systems and to take «reasonable, proportionate and effective» measures to mitigate them. The European Centre for Algorithmic Transparency (ECAT) provides the scientific and technical expertise needed to supervise these algorithms. The Digital Fairness Act, currently being drafted, goes even further by targeting addictive design practices and by reinforcing consumer protection in the very architecture of digital services.
The Lancet Psychiatry Commission launched in 2025 a global policy initiative on «addictive design», calling for regulatory frameworks comparable to those governing tobacco or gambling. A 2022 meta-analysis estimated that around 14% of the world's population showed signs of Internet addiction, a figure that is rising steadily and was exacerbated by the COVID-19 pandemic. These figures can no longer be ignored.
European regulations leave member states no room for maneuver: the same rule applies to everyone, and it's impossible to legislate separately. This is precisely why the switchover, if it happens, will have to be European to be effective. And if it is European, it will be global in its effects. Economic culture is American. Regulatory innovation is European.
Over the next 10-15 years, depending on the evolution of the political context and the «accidents» that are bound to occur (health scandals, democratic crises linked to algorithms, court cases), there's nothing absurd about this scenario. The GDPR also seemed «impossible» in 2008.
The fictitious framework: what would be forbidden, what would remain authorized?
Before going any further, let's define the contours of this fictional scenario. It's the imaginary framework within which all the thinking that follows takes place.
In this hypothetical world, European legislation would lay down a simple principle: the classification of publications on social networking platforms must be chronological and neutral. No algorithm can decide that one publication will be more visible than another on the basis of engagement signals, behavioral profiles or inferred interests.
What would be forbidden:
- Recommendation algorithms based on engagement (likes, viewing time, clicks, comments)
- News feed personalization based on user behavioral profiles
- Algorithmic amplification of certain content to the detriment of others
- Automatic sorting of publications by «relevance» calculated by the platform
- Organic boost systems linked to engagement metrics
What would still be allowed:
- Pay-per-click advertising clearly identified as such (distinct from organic flow)
- User-selected categorization and prioritization tools
- Anti-spam and anti-illegal content filtering (under strict conditions, see below)
- Algorithms for moderating illegal content (hate, child pornography, terrorism)
- Platform-internal search engines (it's still possible to search for specific content)
This fictitious framework is deliberately radical, with a total ban on algorithmic flow classification. It's this radical choice that makes the exercise intellectually interesting: it forces us to think through all the consequences. In reality, it could be more flexible, leaving the choice to the user... but that wouldn't be as much fun.
Phase 1: Platform resistance
The platforms' first reaction would be predictable: legal resistance, intense lobbying, communication about the «benefits» of personalization. We'd hear arguments about users' «freedom of choice», about «innovation» being sacrificed, about European «competitiveness» being threatened. Elon Musk would tweet something. Mark Zuckerberg would send lobbyists to Brussels.
This is not fiction: a survey by Corporate Europe Observatory documented in February 2026 how the digital giants are already banding together to torpedo the Digital Fairness Act, mobilizing aggressive lobbying and political allies.
But the argument that would be put forward with the most vigour, and the most bad faith, would be that of freedom of expression. Platforms would present this regulation as an attack on the free circulation of ideas, a disguised act of censorship, a way for states to control what citizens see and read.
This argument is fallacious, and we need to make it clear.
What the regulations would prohibit is not citizen expression. Each user would remain free to publish what he or she wishes (within the limits of existing law). What regulation would constrain is the ability of platforms to decide for themselves, in an opaque and self-interested way, which content is amplified to the detriment of others.
Today, content that generates outrage will be algorithmically boosted because it generates engagement. Content that is nuanced, complex and not conducive to epidermal reactions will be invisible.
To prohibit this is precisely to protect real freedom of expression, that of all content, from the most popular to the most confidential, in a feed where each appears according to its publication date and not according to its presumed advertising profitability.
Let's put it another way: today, freedom of expression on social networks is the freedom to write in a newspaper whose editorial direction is an algorithm optimized to sell advertising. To ban it is to return the newspaper to its authors and readers.
In short, platforms defend their business model, not your fundamental rights.
But let's be honest about what these platforms stand for. An Amnesty International report and the United Nations Fact-Finding Mission have documented how Facebook's algorithm promoted hatred against the Rohingya in Burma, directly contributing to the atrocities of 2017. AI Forensics revealed unmoderated political advertising on Meta. An academic study published at the end of 2025 in the journal Science showed that modifying publication rankings to reduce polarizing content significantly reduced partisan hostility between users. Even more disturbing: a study published in February 2026 in Nature demonstrated that X's «For You» algorithm permanently shifts users' political opinions, with no return after deactivation.
What the platforms are defending is their business model. The algorithm is not a service provided to the user. It's a tool for maximizing time spent on the platform to sell targeted advertising. Let's remember this every time we hear the term «personalized experience».
Phase 2: The business model collapses (for some)
Then comes the fundamental question: without an engagement algorithm, is it still financially worthwhile for platforms to invest in Europe?
The answer is not the same for all.
TikTok would be the big victim. The platform is, in its very essence, an algorithmic construction. Its recommendation algorithm is its product. It analyzes micro-behaviors (viewing times, stops, replays) to create an engagement loop. Without this algorithm, TikTok would simply be a chronological succession of videos by unknown people. The addictive aspect, which is the model, disappears with regulation. It's hard to imagine TikTok finding a viable subscription model with users whose engagement was precisely maintained by the algorithmic drug.
Instagram and Facebook (Meta) would have the resources to pivot, but at what cost? Part of their model still relies on real social relationships - family, friends - that would survive a timeline. But the influence economy, brand partnerships, boosted organic reach: all these are collapsing.
LinkedIn is an interesting case in point. The platform has a perceived professional value independent of the algorithm. Millions of users would probably pay a modest subscription (5 to 10 euros a month?) to retain access to a global professional network. LinkedIn could survive, and even find an opportunity to reposition itself on quality rather than volume.
The third phase, after resistance and attempts at adaptation, would for many platforms be renouncing the European market. Not necessarily a closure, but a minimal presence, a gradual disinvestment. Which would open up a considerable amount of space.
This withdrawal scenario is not hypothetical: as early as 2022, Meta had explicitly threatened in its annual report to shut down Facebook and Instagram in Europe if data transfer rules became too restrictive. The threat was not carried out, as the European market is too lucrative, but it reveals the logic of the platforms: blackmail as a political weapon in the face of regulation.
Phase 3: The chronological thread creates a new problem
Let's assume that the networks survive, in degraded or reformed mode. A new problem immediately emerges: the chronological thread is unmanageable on a large scale.
Today, you may be following 500, 1000, 2000 accounts. The algorithm makes a brutal selection, but it prevents you from drowning. Without it, your news feed becomes an uninterrupted torrent, where publications from your mother, a marketing influencer and a shoe brand follow one another without hierarchy.
Two behaviours would then naturally emerge:
Mass churn. A large proportion of followed accounts were only followed because the algorithm occasionally pushed them. Without this artificial highlighting, their existence in your network becomes visible, and useless. You'd be unsubscribing en masse from accounts you didn't really care about any more.
Manual categorization. Platforms would be forced to create their own prioritization system. Several levels are conceivable: Personal relationships (people I know in real life), Priority (I want to be notified of every publication), Important (I want to be able to find their contents), Standard feeds (I'll have a look if I have time).
This is very similar to the logic of a modern email inbox, except that in email the categorization is automatic rather than manual.
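To make this concrete, here is a minimal sketch of such a manually categorized chronological feed. Everything in it (the tier names, the `Post` structure, the `build_feed` helper) is invented for illustration; the point is that ordering depends only on the publication date and on the user's own tier choices, never on an engagement signal.

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical priority tiers, mirroring the levels sketched above
# (from most to least important).
TIERS = ["personal", "priority", "important", "standard"]

@dataclass
class Post:
    author: str
    published_at: datetime
    text: str

def build_feed(posts, tier_of, tier="standard"):
    """Return a purely chronological feed (newest first), limited to
    accounts the user has placed in `tier` or a more important tier.
    No engagement metric is consulted anywhere."""
    max_rank = TIERS.index(tier)
    selected = [
        p for p in posts
        if TIERS.index(tier_of.get(p.author, "standard")) <= max_rank
    ]
    return sorted(selected, key=lambda p: p.published_at, reverse=True)

# The user, not the platform, assigns the tiers:
tiers = {"mom": "personal", "brand": "standard"}
posts = [
    Post("brand", datetime(2026, 1, 2), "New sale!"),
    Post("mom", datetime(2026, 1, 1), "Dinner Sunday?"),
]
personal_only = build_feed(posts, tiers, tier="personal")
everything = build_feed(posts, tiers, tier="standard")
```

A filter like `tier="personal"` shows only real-life relationships; `tier="standard"` shows everything, strictly newest first.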
This logic of user-controlled curation has already been explored. Bluesky, the decentralized social network built on the AT Protocol, has already implemented what founder Jay Graber calls an «algorithm marketplace»: instead of a single algorithm imposed on all, users choose, install and exchange feeds created by third-party developers. It remains to be seen whether this use case would remain permissible in our fictional scenario.
The emergence of open alternatives. And this is where the scenario becomes structurally interesting. The divestment of commercial platforms would create a vacuum. A void that could be filled by alternatives that already comply with these regulations, because they are built without ranking algorithms. The Fediverse, with Mastodon in the lead, but also PeerTube for video, Pixelfed for photos and many specialized networks, operates by default on a chronological feed. They don't have an engagement algorithm to delete, because they never had one.
This shift in usage would not be without its difficulties. The vast majority of Mastodon instances are run by volunteers or associations, with very limited resources. A sudden surge of millions of European users fleeing commercial platforms would be a considerable governance and infrastructure challenge. But the regulatory and commercial vacuum created by the flight from the big platforms would be precisely the condition for a critical mass that the Fediverse has never managed to achieve on its own.
Does email have it all figured out before anyone else?
Google launched Gmail tabs (Primary, Social, Promotions) in 2013. Priority Inbox had existed since 2010. The «Manage subscriptions» feature, rolled out in 2025, is a direct response to the problem of content overload: it lets you view all your subscriptions at a glance and easily unsubscribe from them.
What Google has solved for email, social networks will be forced to solve for their feeds: not through an opaque algorithm that decides for you, but through manual management tools that put the user back in the driver's seat.
But let's be clear about something the email industry knows well: Gmail doesn't just «sort» emails neutrally. Algorithms decide whether your email lands in the main inbox, in the Promotions tab, in the Social Networks tab or as spam. It's not the user who makes this decision, it's Google, based on engagement signals, content analysis and sender reputation. Email deliverability is precisely the art of understanding and navigating these opaque algorithmic decisions.
And the situation has worsened. Since September 2025, Gmail's Promotions tab sorts marketing emails by «relevance» rather than by date. An email sent to 100,000 subscribers no longer appears in the same order for everyone; Gmail decides who sees what first, just like a social feed.
In the scenario of a ban on ranking algorithms, should these functions be affected?
The answer is yes, partially. Promotions/Primary sorting poses an obvious neutrality problem: a brand that has obtained your consent and complied with all deliverability rules should not see its visibility depend on an opaque decision by a Google algorithm. Anti-spam filtering, on the other hand, is a legitimate protection against unsolicited communications; it doesn't classify consented content, it blocks intrusions.
The border is there: algorithm for classifying consented content = neutrality problem. Algorithm to protect against non-consensual content = service to the user.
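That border can be reduced to a toy rule. The sketch below is purely illustrative (the field names, the consent list and the `spam_signals` predicate are all invented): consented mail is never scored or demoted, only non-consented traffic may be blocked.

```python
def classify(message, consented_senders, spam_signals):
    """Illustrative delivery rule, not a real filter.
    - Consented content is delivered neutrally: no relevance scoring,
      no reordering, no demotion to a secondary tab.
    - Only non-consented traffic is subject to protective filtering."""
    if message["sender"] in consented_senders:
        return "inbox"      # neutral: chronological delivery, period
    if spam_signals(message):
        return "blocked"    # protection against non-consented content
    return "inbox"

consent = {"news@brand.example"}
flag_all = lambda msg: True  # stand-in for an opaque anti-spam heuristic
verdict = classify({"sender": "news@brand.example"}, consent, flag_all)
```

Even with a paranoid heuristic that flags everything, a consented sender lands in the inbox; the filter's opacity only ever applies to senders the user never opted into.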
This anti-spam exception is not without its complexities, however. Legislation that bans ranking algorithms while maintaining an exception for anti-spam would have to define strict transparency criteria for the operation of these filters.
Yet such transparency would immediately be exploited by spammers to bypass filters. This is the fundamental paradox of deliverability: spam filters work partly because they are opaque. Any obligation of total transparency would be a gift to malicious actors. Legislation would therefore have to make a fine distinction between transparency of general criteria (acceptable) and disclosure of precise technical parameters (dangerous).
What this would mean for brands
Let's cut to the chase: much of today's social media strategy is based on algorithmic manipulation.
You know the tactics. «Write KIKOULOL in comments to receive my AI white paper.» LinkedIn carousels designed to maximize swipe rate. Short videos designed for the first 3 seconds of retention. Twitter/X threads built as cliffhangers.
All this is optimization for a system that decides for users what they want to see. It's not marketing. It's algorithm manipulation.
Without an algo, the number of followers, the level of artificial interaction and financial power would no longer be enough to be visible. Brands would be returned to a fundamental question they've been avoiding for years: does my content have intrinsic value for my audience?
Paid advertising would still be possible on the commercial platforms that would remain, which is different from the organic ranking algorithm. But on open networks (such as the Fediverse and Mastodon), there would be only three levers left: editorial quality, re-sharing by engaged accounts, and the hope of being «categorized» as a priority by enough users.
The sovereign trinity: Email, Fediverse, Matrix
In this post-algo world, a new brand communication architecture would emerge around three decentralized and open technologies:
Email, the strong subscription channel. Email has been a decentralized network for over 40 years. You own your list. No one can take it away from you. You can change your hosting. And (almost) no one decides on the visibility of your mailings without your consent (apart from anti-spam). In a world without network ranking algorithms, email would further consolidate its position. An evolution that has already been underway for several years, with more and more editorial newsletters: long, structured, expected content that arrives directly in the inbox of subscribers who have chosen to follow you.
The Fediverse, the sovereign social flow. Mastodon, but not only. The Fediverse is an ecosystem of interconnected servers that can host text, video, images, audio (podcasts) and blog posts. A brand can set up its own Mastodon server without paying royalties to anyone, and without being subject to algorithm changes by a third-party platform. Once a community has been set up on Fediverse, it truly belongs to the brand. The downside is that the Fediverse needs to reach a critical mass. But if Europe creates the regulatory conditions that weaken the big platforms, this critical mass could be reached... for want of a legally compliant alternative.
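Publishing from a brand-owned server is technically unglamorous: Mastodon-compatible servers expose a REST endpoint, `POST /api/v1/statuses`, authenticated with a bearer token. The sketch below only assembles such a request (the instance name and token are placeholders); sending it is one `requests.post` call.

```python
def build_status_request(instance: str, access_token: str, text: str):
    """Assemble the parts of a Mastodon `POST /api/v1/statuses` call.
    `instance` and `access_token` are placeholders: any server speaking
    the Mastodon API, including one you host yourself, accepts this."""
    url = f"https://{instance}/api/v1/statuses"
    headers = {"Authorization": f"Bearer {access_token}"}
    data = {"status": text}
    return url, headers, data

# Hypothetical brand-owned instance:
url, headers, data = build_status_request(
    "social.brand.example", "ACCESS_TOKEN", "Hello, Fediverse!"
)
# To actually publish, with an HTTP client such as requests:
# requests.post(url, headers=headers, data=data)
```

No platform fee, no ranking algorithm between you and your followers: the post simply lands, in chronological order, in the feeds of everyone who follows the account.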
Read this article by Isabelle Mathieu (Emarketinglicious) on the value of Fediverse for brands.
Matrix, the discussion community. Matrix is the decentralized protocol for instant messaging. For brands, it's the sovereign equivalent of WhatsApp groups or social selling: live discussions with prospects or customers, brand communities, support. Without paying a royalty to Meta. Without your conversation data belonging to an American third party. Here again, you need to reach a critical mass to get as many people as possible on board.
💡What email, Fediverse and Matrix have in common: your subscribers belong to you
These three technologies share a fundamental property that centralized social networks have gradually erased: subscribers belong to the brand, not the platform.
On Instagram, your 50,000 followers don't belong to you. Meta can close your account, change its terms, modify its algorithm, and you're back to zero. On an email list, on your Mastodon server, on your Matrix instance: your subscribers are yours. You can export your data. You can migrate to another service provider. You can even host your own infrastructure, without being technologically dependent on anyone else.
It's not just editorial independence, it's technological independence. Setting up your own Mastodon or Matrix server doesn't require authorization, contracts or royalties. The code is open source. The infrastructure is yours. And if your hosting provider closes, your data, subscribers and history can migrate elsewhere.
That's exactly what centralized social networks have taught you to stop demanding. It's time to remember that.
The fundamental change? Brands once again take responsibility for creating their communities, with no algorithmic safety net. No more technological dependence on a social network operator who can change the rules of the game overnight. The community built on these sovereign channels would be stable, lasting and owned by the brand.
The real losers: the platforms themselves
It would be tempting to feel sorry for influencers, social media agencies or content creators. But the big loser in this scenario is a much more powerful entity: the platforms themselves.
Today, Meta, TikTok, X and Google wield considerable double power. Firstly, over brands: they rent them access to their audiences, changing the pricing rules as they see fit. Secondly, over citizens: they shape public opinion, amplify certain content to the detriment of others, and contribute, as documented by AI Forensics in its investigations into the European elections, to the phenomena of large-scale political manipulation.
The disappearance of ranking algorithms means the end of this dual power. Brands would no longer be captive to them. And public opinion would no longer be staged by opaque systems optimized for emotional engagement.
For influencers and content creators, the picture is more nuanced. Those whose audience was based on the artificial amplification of a one-off buzz would indeed be weakened. But those who have built up a real editorial relationship with their community stand to gain: once their audience has been built up on sovereign channels, they will no longer be subject to algorithmic whims. Stability versus dependence.
Resetting the counters
The classic counter-argument: without algorithms, only established accounts would benefit from an audience, making it more difficult for new players to emerge.
This is partly true. But this is partly already the case today, with engagement algorithms massively favoring content that is already generating reactions, creating their own concentration effects.
In a world without algos, this would first of all mean resetting the counters. The best content, that which creates real value for its readers, builds the most engaged communities. Human re-sharing and manual categorization («I'm prioritizing this account») would be the new vectors of emergence. It would only take a few highly-followed accounts sincerely sharing a publication for a new player to emerge as quickly as today, but through curation, not algorithm manipulation.
It's a world where quality comes first. Not perfect. But honest.
Conclusion
This article does not say that a ban on ranking algorithms will happen. It says that it is less and less unlikely, that the legislative path exists, and that Europe has the political capacity to take it if circumstances so require.
It also says something more uncomfortable for the marketing industry: a large part of our practices would not survive in this world. Growth hacking, optimization for artificial engagement, reliance on platforms as the sole channel of visibility: all of this is based on sand.
Marketing professionals who have invested in editorial quality, in building engaged email lists, in direct relationships with their audiences rather than in the manipulation of algorithmic metrics: they're already ready.