Aristotle, Algorithms and AI: The case for regulating social media content
It’s 2007 and I’ve created my first Facebook account. Within days, MySpace is obsolete and I’ve transitioned my online identity entirely to Facebook. This is the era before our feeds became plagued with advertisements, and the posts I see are photos and life updates from friends that I would otherwise have missed. As I move interstate, Facebook allows me to keep in touch with those I’ve left behind. I connect with people I’ve never met; an ex-friend’s ex-wife becomes one of my best friends. To this day, I still haven’t met her in person. Fifteen years later, I reconnect with a childhood friend. As the platform grows, I join Facebook groups dedicated to niche topics, and I’m crowned a ‘Group Expert on Families and Relationships’ in an Australian group centred around women supporting women. Without Facebook, I would have lost these connections and the knowledge I’ve gained from engaging with others across the platform; knowledge that has led to significant growth in my personal, professional and academic life.
According to Aristotle, we are social creatures, and the growth of platforms like Facebook and Instagram can be attributed to a universal desire to connect with people beyond our immediate circles. Our survival and well-being have always been intrinsically linked to the quality, health and quantity of our social connections. At its heart, social media started as a human endeavour — one driven by a spark of curiosity to connect with people who possess knowledge and interests that either mirror or challenge our own — in short, a unified pursuit of connection and knowledge. With the birth of Facebook, humanity entered a new path of global connection and knowledge expansion. However, what started as a bright blaze of universal human connection in the early 2000s has now dimmed to a dying ember, smothered beneath an endless stream of advertisements and AI content driven by algorithms designed to sell or radicalise rather than connect.
Drawing upon Aristotle’s claim that humans are social creatures and his account of the good life, I will discuss how AI content, algorithms and bots have contributed to defeating the initial purpose of social media platforms by creating a ‘Dead Internet’ that fosters social harm through political polarisation and social division. I argue that the reduction and regulation of AI-created content and algorithmic information-sharing processes are required to return social media platforms to their initial objective of empowering human connection, and by extension — human flourishing.
I. The Dead Internet: The Source of Social Discontent
The ‘Dead Internet Theory’ posits a future in which bots and AI-generated content render human content — and, subsequently, human-led online spaces — obsolete. However, evidence suggests that we have been on the trajectory of a decaying internet since 2016, with bots accounting for almost half of all internet traffic and content by 2022, suggesting that the theory has evolved from mere hypothesis into a contemporary societal issue. The result is the rampant and unchecked dissemination of misinformation, bot-manufactured echo chambers, and algorithms designed to push social media users down online rabbit holes of extremist political views.
Artificial curation of the content consumed by humans is a problem precisely because social media platforms, Facebook in particular, have been pivotal in shaping vital political reform by enabling widespread mobilisation. For example, Facebook was used as a tool to organise the ‘when’, ‘where’ and ‘how’ of the demonstrations that occurred as part of the Arab Spring; and citizen journalists used the platform to share images of real-time events on the ground, turning citizens from passive observers into activists. On Monday 25th May 2020, Ms Darnella Frazier recorded two police officers restraining a distressed George Floyd. With the press of a button, the #BlackLivesMatter movement blazed across the United States and the globe, with rallies against racial violence spearheading social discourse at the time. These two cases demonstrate that when social media prioritises human connection and the exchange of information, it can operate as a launch pad for necessary progressive social change.
However, should AI content drown out human content, human posts become less likely to be picked up by algorithms that now favour whatever generates the most engagement. The problem is that much of this algorithm-promoted content is misinformation or ‘fake news’ created by AI, generating a cycle of artificial content and engagement seemingly without any human involvement or oversight. The result is that less exposure to human-created content means a significant decrease in human-to-human connection. Authentic, un-tampered interpersonal connection is pivotal in fostering and shaping social discourse, as it facilitates the exchange of information and allows those living at the forefront of marginalisation to voice their experiences and contribute to social change. When human testimony is overshadowed or lost in the sheer volume of artificial noise, and the widespread proliferation of misinformation is facilitated by bots, a testimonial and epistemic injustice occurs at a universal level.
II. Knowledge, truth and the pursuit of Aristotle’s good life
Aristotle said, “Humans are the rational animal, so living a good human life means seeking to know”. His notion of a ‘good human life’ centres on the concept of eudaimonia, a state of human flourishing that involves achieving ‘the highest good’. It is a state to be desired not only at the individual level; all goods, including social goods, ought to strive to bring it about. The building blocks of eudaimonia are the eudaimonic virtues — e.g. justice, courage, temperance, intelligence and truth. Three key elements can be drawn from Aristotle’s framework for human flourishing to inform an argument for more human-centred social media platforms. As mentioned above, the first is that humans are inherently social creatures. Social media platforms are specifically designed to profit from this intrinsic human trait. By capitalising on what could be considered the amalgamation of human virtues, it could be argued that these platforms have an obligation to consider other elements of humanity, such as rationality, when delivering a service with the potential to facilitate social cohesion or division. The second element is that humans are rational creatures — a trait directly tied to the pursuit and exchange of knowledge. Third, Aristotle emphasised the ability to reason well, with the highest form of rational activity grounded in truth. If we combine these traits — sociality as a human condition that ought to support the pursuit of knowledge and truth in order to achieve human flourishing — the following framework emerges:
1. Eudaimonia, which involves a state of human flourishing, is achieved through virtuous activity that encompasses fostering social connection and the acquisition of knowledge;
2. A crucial aspect of human rationality is the attainment of right knowledge, that is, knowledge grounded in truth;
3. Social media platforms are major sites for human social connection and knowledge exchange.
C: To achieve human flourishing and consequently eudaimonia, social media companies should ensure that the knowledge exchanged on their platforms is grounded in truth.
III. Profit Over Hate Speech: The Need for Regulation
In 2018, an internal report from Facebook alerted its board members that its algorithm, which exploited the human brain’s attraction to highly emotive content, created partisan divides and political polarisation. The report claimed that if left unchecked, the algorithm would continue to serve Facebook users ‘increasingly divisive content to gain user attention’. Facebook’s algorithm works by tapping into its users’ pre-existing political beliefs and then showing them content that provokes moral outrage by running counter to those beliefs. Since the launch of ChatGPT in November 2022 and the spread of deepfake technology, AI-generated hate content has exploded across platforms — sucked into the vortex of social media algorithms and spat out as if from legitimate news sources.
In short, the combination of unchecked algorithms and AI-generated content has created a toxic cycle of hostile political rhetoric served to both left-leaning and right-leaning users, centring content that cultivates political animosity and deepens social dissension. The reason for not implementing the recommendation to create a more neutral algorithm? Profit. If Facebook tweaked its algorithm to de-centre radical and fake content, it stands to lose the gains it reaps from being a machine of political division. It is worth highlighting that, as Facebook is positioned as the forefather of algorithm-controlled social media, it sets a strong precedent for all other social media platforms to follow. Consequently, this successive exposure to polarising content pushes users toward more extreme, antagonistic perspectives and opinions — effectively creating a social media environment that is ripe for hate speech.
While this narrative primarily concerns Facebook, it is evident that the other leading social media platforms — X (formerly Twitter), Instagram (now owned by Meta, formerly Facebook) and YouTube — have succumbed to the same algorithmic problems and have prioritised profit over the political neutrality that would cultivate a virtuous common good. With hate speech left unchecked, and private social media companies retaining the power to stand by and allow the global social divide to widen in the name of profit, there is a need for legislative measures to step in and control the content that social media platforms disseminate into society.
While international conventions protect freedom of expression, this right has often been limited in order to curb hate speech, with articles 4 and 5 of the Convention on the Elimination of All Forms of Racial Discrimination protecting people from racial discrimination. Although Australia placed a reservation on this treaty in 1975, which still stands today, rights and protections against hate speech in Australia have largely been left to state and territory jurisdictions rather than the Commonwealth. Protections against hate speech have primarily been confined to racial and disability discrimination. But with right-wing extremist views spilling over from unchecked social media platforms into society, legislators are taking steps to extend the consequences of hate speech to other marginalised groups. On 2nd April 2025, Victoria passed the Justice Legislation Amendment (Anti-vilification and Social Cohesion) Act 2025 (Vic) — effectively expanding the criminalisation of hate speech to cover gender, sex, sex characteristics, sexual orientation and disability. It is an effective move to combat the growing popularity of conservative discourse on gender.
Yet, while legislative change is a good first step in recognising hate speech as a criminal offence, a more robust framework is required to hold social media platforms accountable. Such a framework is already in development to combat scams and fraudulent activity facilitated on social media. The Scams Prevention Framework, a bill currently awaiting passage under the Albanese government, provides the legislative mechanisms to hold social media companies accountable for activity on their platforms by enabling fines and allowing consumers to report unlawful activity to a government body. A similar framework could be introduced to control the volume of AI-created hate speech on social media platforms — effectively disincentivising platforms from profiting from the extremist content disseminated by their algorithms. While hate speech is a crime for humans, AI that produces harmful content remains free of any legal repercussions. Therefore, social media platforms should be responsible for limiting the likelihood of social harm arising from AI-generated content consumed on their platforms. This leads us to our second framework:
1. Unchecked algorithms on social media platforms disseminate AI-created hate speech;
2. Criminal offences exist for humans who engage in hate speech, but as AI is not a legal person, it cannot be held to the same standards of lawful accountability;
3. There is an absence of mechanisms that operate to hold social media companies accountable for the social harms their algorithms create;
C: Legislative oversight and regulation are required to ensure social media companies are disincentivised from promoting AI-created hate speech and polarising content on their platforms.
IV. Conclusion
Social media emerged as a platform that enabled genuine human relationships while allowing users to exchange knowledge, build connections and create social transformation. The ‘Dead Internet Theory’ is no longer merely a theory; the evidence shows it is already creating real-world problems by disrupting the interpersonal connection and knowledge-sharing capabilities of social media platforms. The cyclical nature of AI-generated content and algorithmic manipulation has reshaped these platforms into spaces that now prioritise profit above a shared human good; misinformation above truth; and polarisation above social cohesion. The social harms arising from the very real risk of an AI- and bot-infested social media landscape require a legislative response and ethical regulation. In sum, social media can and should be re-established as a platform for truth, human connection and flourishing — but only when human voices take centre stage.
References
Abdelijalil Benniiche et al, ‘Society 5.0: Internet As If People Mattered’ (2022) 99 (1–8) IEEE Wireless Communications 160–168
Anja Karadeglija, ‘AI-powered hate content is on the rise, experts say’, The Canadian Press (26 May 2024) <https://www.cbc.ca/news/politics/ai-hate-content-1.7215369>
Aristotle, ‘Book 1, Chapter 2’ in C D C Reeve (trans), Politics (Hackett Publishing, 1998) xxv
‘Aristotle’s Ethics’, Stanford Encyclopedia of Philosophy (2 July 2022) <https://plato.stanford.edu/entries/aristotle-ethics/#:~:text=The%20good%20of%20a%20human,accordance%20with%20virtue%20or%20excellence>
Convention on the Elimination of All Forms of Racial Discrimination
Chelsea Litchfield and Larissa Bamberry, ‘Metanarratives and discourse: shaping inequality’ in Donna Bridges (ed) Gender, Feminist and Queer Studies: Power, Privilege and Inequality in a Time of Neoliberal Conservatism (Taylor & Francis, 2024) 33–107
Chengcheng Shao et al, ‘The spread of low-credibility content by social bots’ (2018) 9(4787) Nature Communications 1–9
Exposure Draft Explanatory Materials, Treasury Laws Amendment Bill 2024: Scams Prevention Framework 2024 (Cth)
Filimon Peonidis, ‘Fake news published during the pre-election period and free speech theory’ in Oscar de la Fuente (ed) Minorities, Free Speech and the Internet (Taylor & Francis Group, 2024) 108–119
Gustavo Ferreira Santos, ‘Misinformation and Hate Speech’ in Oscar de la Fuente (ed) Minorities, Free Speech and the Internet (Taylor & Francis Group, 2024) 123–136
Helen Norton, ‘Manipulation and the First Amendment’ in Oscar de la Fuente (ed) Minorities, Free Speech and the Internet (Taylor & Francis Group, 2024) 93–106
Howard Curzer, ‘Truthfulness and Integrity’ in Aristotle and the Virtues (Oxford University Press, 2012) 195–220
Jake Renzella and Vlada Rozova, ‘The ‘dead internet theory’ makes eerie claims about an AI-run web. The truth is more sinister’, University of New South Wales (online, 21 May 2024) <https://www.unsw.edu.au/newsroom/news/2024/05/-the-dead-internet-theory-makes-eerie-claims-about-an-ai-run-web-the-truth-is-more-sinister>.
James A Coan and Lane Beckes, ‘Social Baseline Theory: The Role of Social Proximity in Emotion and Economy of Action’ (2011) 5(12) Social and Personality Psychology Compass 976–988
Jon Alterman, ‘The Revolution Will Not Be Tweeted’ (2011) 34(4) The Washington Quarterly 103–116
José van Dijck, ‘Facebook as a Tool for Producing Sociality and Connectivity’ (2012) 13(2) Television & New Media 160–176
Joud Walid, ‘The Legacy of the “Facebook Revolution”: How did the Arab Spring shape citizen use of social media?’, King’s Think Tank (online, 30 January 2025)
Justice Legislation Amendment (Anti-vilification and Social Cohesion) Act 2025 (Vic)
Laura Fox, ‘Chatbots and Consequences: AI, Moral Agency and the Fallacy of Asimov’s Laws’, (Essay, PHIL3073, Australian National University, Semester 1 2025).
Laura Fox, ‘“I’ve lost everything”: An Overview of the Systemic Failure Experienced by Victims of Romance Fraud’ (Research Essay, LAWS4430, Australian National University, 2025)
Lawrence B Solum, ‘Legal Personhood for Artificial Intelligences’ in Wendell Wallach and Peter Asaro (eds), Machine Ethics and Robot Ethics (Taylor & Francis Group, 2016) 415–471
Marco Conti and Andrea Passarella, ‘The Internet of People: A human and data-centric paradigm for the Next Generation Internet’ (2018) 131(1) Computer Communications 51–65
Mark Alfano, ‘Technological Seduction and Self-Radicalization’ (2018) 4(3) Journal of the American Philosophical Association, 298–322
Minna Ruckenstein and Linda Lisa Maria Turunen, ‘Re-humanizing the platform: Content moderators and the logic of care’ (2020) 22(6) New Media & Society 1026–1042
Nick Statt, ‘Facebook reportedly ignored its own research showing algorithms divided users’, The Verge (online, 27 May 2020) <https://www.theverge.com/2020/5/26/21270659/facebook-division-news-feed-algorithms>
‘Right to freedom of opinion and expression’, Attorney-General’s Department (Public sector guidance sheet) <https://www.ag.gov.au/rights-and-protections/human-rights-and-anti-discrimination/human-rights-scrutiny/public-sector-guidance-sheets/right-freedom-opinion-and-expression>
Samuel Hughes, ‘Social Media Case Study: The Killing of George Floyd’, The Institute of Strategic Risk Management (online, 2020) <https://www.theisrm.org/social-media-case-study-the-killing-of-george-floyd/>
Shannon Vallor, ‘Virtue Ethics, Technology and Human Flourishing’ in Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting (Oxford University Press, 2016) 17–34
