The law of algorithmic society

Abstract

The presence of computational algorithms in various areas of everyday life has led us to propose the term ‘algorithmic society’. To investigate how algorithms are influencing society's laws, we searched the Internet for cases. The searches were conducted using the following terms in Spanish, English, and Portuguese: ‘algorithms and law’, ‘artificial intelligence and law’, ‘artificial intelligence and legal cases’, and ‘artificial intelligence and legal decisions’. The responses were compiled, organised in an Excel spreadsheet, and analysed from the perspective of Niklas Luhmann's theory of society as a communication system with meaning. In our survey we observed two extremes: on the one hand, those who regard the relationship between algorithms and law as a way to solve various problems in legal practice; on the other, those who regard this relationship as harmful, on the grounds that artificial intelligence algorithms will replace humans. Our research to date leads us to believe that both extremes are misleading, not only because futurism is a form of self-delusion, but also because responsibility for human decision-making will never cease to be human. After all, in all human interpretation (including legal interpretation), the attribution of value and the acceptance or rejection of an argument or piece of information are not a matter of the interpreter's will alone; they involve the participation and presence of various elements of the materiality, temporality, and sociality of meaning.
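
By way of illustration, the compilation step described in the abstract could be reproduced with a short script. This is only a minimal sketch under stated assumptions: the search terms are those listed above, while the column layout, the placeholder fields, and the output file name are our own additions, since the article does not describe the tooling used.

    # Illustrative sketch only: the abstract states that search responses were
    # compiled into an Excel spreadsheet, but does not describe the tooling.
    # Search terms are taken from the abstract; column names, placeholder
    # fields, and the output file name are assumptions made for this example.
    import pandas as pd

    search_terms = [
        "algorithms and law",
        "artificial intelligence and law",
        "artificial intelligence and legal cases",
        "artificial intelligence and legal decisions",
    ]
    languages = ["Spanish", "English", "Portuguese"]

    # Each compiled response becomes one row: which term and language produced
    # it, where it was found, and a short note for later qualitative analysis.
    records = [
        {
            "term": term,
            "language": lang,
            "source_url": "",   # filled in manually while reviewing results
            "summary": "",      # analyst's note on the reported case
        }
        for term in search_terms
        for lang in languages
    ]

    # Write the compiled records to a spreadsheet for qualitative analysis.
    pd.DataFrame(records).to_excel("algorithmic_society_cases.xlsx", index=False)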

Keywords:

systems theory, law, algorithms, society, algorithmic society

References

a-chacon. (2025). When Machines Talk: ChatGPT and DeepSeek. Blog a-chacon, 28/06/2025. https://a-chacon.com/en/ai/2025/06/28/chatgpt-and-deepseek-talking.html

Andrighetto, G., Grieco, D. & Tummolini, L. (2015). Perceived legitimacy of normative expectations motivates compliance with social norms when nobody is watching. Frontiers in Psychology, 6, 1413. https://doi.org/10.3389/fpsyg.2015.01413

Baum, T. (2022). Artificial intelligence spotted inventing its own creepy language. New York Post, 03/06/2022. https://nypost.com/2022/06/03/artificial-intelligence-spotted-inventing-its-own-creepy-language/

BPC [Bipartisan Policy Center] (2023). History of the Cambridge Analytica Controversy. Article. 16/03/2023. https://bipartisanpolicy.org/blog/cambridge-analytica-controversy/

Brüseke, F. J. (2014). Sociologia da inovação técnica: da crítica à técnica ao design sócio-técnico. Revista TOMO. https://doi.org/10.21669/tomo.v0i0.3437

Confessore, N. (2018). Cambridge Analytica and Facebook: The Scandal and the Fallout So Far. New York Times, 04/04/2018. https://www.nytimes.com/2018/04/04/us/politics/cambridge-analytica-scandal-fallout.html

Costa, A. C. (2025). Attributing meaning to algorithms. In: Becker, R. M., Costa, A. L. & Ventamiglia, A. (eds.). Global perspectives on animism and autonomous technologies. Switzerland: Springer, 117-143.

Esposito, E. (2022). Comunicação artificial? A produção de contingência por algoritmos. Revista Brasileira de Sociologia do Direito, 9(1): 4-41.

Glanville, R. (2008). Black Boxes. Cybernetics and Human Knowing, 16(1-2): 153-167.

Hinds, J., Williams, E. J. & Joinson, A. N. (2020). “It wouldn't happen to me”: Privacy concerns and perspectives following the Cambridge Analytica scandal. International Journal of Human-Computer Studies, 143, 102498. https://doi.org/10.1016/j.ijhcs.2020.102498

Luhmann, N. (1983). Fin y racionalidad en los sistemas. Madrid: Editora Nacional.

Luhmann, N. (1998). Sistemas sociales. Barcelona: Anthropos; México: Universidad Iberoamericana; Santa Fé de Bogotá: CEJA-Pontificia Universidad Católica de Javeriana.

Luhmann, N. (2005). El derecho de la sociedad. Ciudad de México: Herder/Universidad Iberoamericana.

Luhmann, N. (2007). La sociedad de la sociedad. Ciudad de México: Herder/Universidad Iberoamericana.

Luhmann, N. (2010). Organización y decisión. Ciudad de México: Herder/Universidad Iberoamericana.

Mascareño, A. (2009). Problemas de legitimación en la sociedad mundial. Revista da Faculdade de Direito da UFG, 33(2): 9-23.

Morozov, E. (2018). Big Tech: a ascensão dos dados e a morte da política. Translated by Cláudio Marcondes. São Paulo: Ubu.

Onlim (2024). The History of Chatbots – From ELIZA to ChatGPT. Blog Onlim, 15/02/2024. Last updated: 15/09/2025. https://onlim.com/en/the-history-of-chatbots/

Pignuoli Ocampo, S. (2024). Comunicação digital e participação dos dispositivos no mundo social. Revista Brasileira de Sociologia do Direito, 11(2): 4-24.

Russell, S. & Norvig, P. (2013). Inteligência artificial. São Paulo: Elsevier.

Serrano Gómez, E. (1994). Legitimación y racionalización. Weber y Habermas: la dimensión normativa de un orden secularizado. Ciudad de México: Anthropos/Universidad Autónoma Metropolitana.

Stamford da Silva, A. (2021). Decisão jurídica na comunicativação. São Paulo: Almedina.

Stamford da Silva, A. & Luckwu, M. (2022). Algoritmos de inteligência artificial e decisão jurídica: o caso da ELIS do Tribunal de Justiça de Pernambuco. Revista do Tribunal Regional Federal da 1ª Região, 34(3), 27-42.

Stamford da Silva, A., Pinheiro, A. F. & Massa, R. (2025). Legal decision-making in the Algorithms Society: observations on Techno-Animism and Communicativation. In: Becker, R. M., Costa, A. L. & Ventamiglia, A. (eds.). Global perspectives on animism and autonomous technologies. Switzerland: Springer, 145-177.

Tække, J. (2022). Algorithmic Differentiation of Society – a Luhmann Perspective on the Societal Impact of Digital Media. Journal of Sociocybernetics, 18(1), 2-23.

The Guardian. (2018). Revealed: 50 million Facebook profiles harvested for Cambridge Analytica in major data breach. 17/03/2018. https://www.theguardian.com/news/2018/mar/17/cambridge-analytica-facebook-influence-us-election