Circling Back to AI: A Look at Its Ethical Dimension in Educational Institutions

Social perceptions of artificial intelligence (AI) are deeply shaped by the media and its often apocalyptic narratives (Coeckelbergh, 2020), fostering the emergence of contrasting technophile and technophobe profiles. This informational distortion not only molds the collective imagination (Morozov, 2013): it also permeates educational contexts, where technology is frequently discussed through lenses of fear or fascination, but rarely through a critical, nuanced understanding of a phenomenon that carries both risks and promises.

AI is, without doubt, today's hot topic. In 2021, Eurídice Cabañes noted that the growing prominence of AI in public debate stemmed, on the one hand, from the catastrophic narratives promoted by the media and, on the other, from increasing social concern over data processing and privacy. Novelty, sensationalism, and disinformation (or malinformation, to use McIntyre's (2018) terminology) dominate the mental landscape of those who do not work professionally with technology. Even for those of us who study these processes from critical perspectives, it is difficult to cut through the discursive noise to pose relevant questions. Parallel to the alarmist visions of technological development, however, another perspective emerges, one that does not hold a comparable space in the media: an informed view, capable of distinguishing between technological possibilities, limitations, and actual transformations. In this context, our research is guided by a key emerging question: what are we really talking about when we talk about AI?

Ethics, institutions, and the mirage of consensus
In the educational field, public and academic conversation frequently appeals to the idea of the “ethical use” of AI by institutions. This approach appears in multiple international reference frameworks (UNESCO, 2021; OECD, 2023; European Commission, 2022) that outline the elements to be considered when discussing AI in education: responsibility, transparency, algorithmic and data governance, equity, accessibility, human oversight, and so on. In this regard, the specialized literature agrees that managing AI in a domain as diverse as education requires analysing a myriad of constantly changing sociotechnical factors.

Despite the abundance of ethical principles, institutional discourses are often vague, superficial, or overly aspirational. This conceptual ambiguity is sustained by an element rarely made explicit in institutional narratives: the imbalance between ethical aspirations and their feasibility, constrained by infrastructural availability. Educational institutions may articulate principles for the responsible use of AI and yet operate in contexts where systems, technical resources, and organizational capacities are profoundly unequal. Infrastructure (economic, technical, and organizational) is not an accessory requirement: it is the ground upon which ethical commitments are sustained. In a scenario defined by heterogeneous institutional resources, ethical promises risk not being accompanied by the necessary tools, generating a mismatch that may affect the entire educational ecosystem through spillover into teaching practices, decision-making processes, and opportunities for pedagogical use.

What kind of AI are we talking about when we talk about AI in education?
When we speak of “AI in education,” are we really referring to the entire field of AI, or only to the generative models dominating public debate? Recent literature shows a strong tendency to reduce the concept of educational AI exclusively to generative systems, thereby obscuring older, widespread technologies, such as recommendation algorithms, academic management platforms, learning analytics models, or institutional databases.

This conceptual confusion has profound ethical consequences. Should we apply the same ethical criteria to a generative model as to a performance-prediction system? How do risks, obligations, and responsibilities shift depending on the type of technology used? An ethical review of AI cannot be undertaken without attending to these nuances, or without recognizing that the role of teaching staff is reshaped by technologies that automate tasks, mediate decisions, or condition pedagogical practices. Moreover, educational contexts are tremendously heterogeneous: implementing AI in primary education differs from doing so in postgraduate settings, just as rural environments differ from metropolitan areas. Each context presents different technical, organizational, and cultural challenges (Cotrina-Aliaga et al., 2021; Giró & Sancho-Gil, 2022). Therefore, any ethical approach to AI in education must acknowledge this diversity and avoid universal or oversimplified solutions.

The political dimension of AI and the starting point of our research
AI is, inevitably, a subject of political discussion. Authors such as Winner (1980) and Crawford (2021) have noted how every technology is embedded in a network of decisions, interests, and power relations that give it meaning beyond its technical functioning. Theoretical approaches to technological phenomena concur that discussing AI necessarily involves analysing how its technical, institutional, and social dimensions are articulated, shaping and legitimizing its uses.

This socio-technical ecosystem constitutes the starting point of the research project developed in my doctoral thesis, whose aim is to understand the uses and perceptions associated with the ethical implications of AI systems in higher education. In the university context, where unequal infrastructures and deeply diverse methodological practices coexist, understanding how AI is perceived and how decisions about it are made requires listening to the plurality of voices involved in its deployment. For this reason, the research is structured around four analytically relevant groups (teaching staff, students, institutional leaders, and AI professionals) with the aim of bringing these perspectives into dialogue in ways that promote the design and development of future measures that are both realistic and helpful. Although extrapolating to other educational levels requires caution, this approach allows for the observation of tensions, expectations, and practices that are largely shared across different educational and institutional settings.

The debate on the ethical uses of AI in education is shaped by simplifying narratives, infrastructural and institutional deficits, and a frequent conflation between instrumental use and technological understanding. For this reason, any meaningful ethical approach must begin with an analysis of the implementation context, allowing us to recognize its inequalities and organizational constraints, but also the human potential for transformation. Only from this complex standpoint will we be able to formulate pertinent questions and build ethical frameworks that do not remain on the surface, but respond to the realities that define how, for whom, and for what purpose AI is implemented in educational contexts.

Research Group Website: https://globaleducation.unican.es/

References

Coeckelbergh, M. (2020). Ética de la inteligencia artificial. Cátedra.

Cotrina-Aliaga, J., Vera, M., Ortiz-Cotrina, W., & Sosa-Celi, P. (2021). Uso de la Inteligencia Artificial (IA) como estrategia en la educación superior. Revista Iberoamericana de la Educación. https://doi.org/10.31876/ie.vi.81

Crawford, K. (2021). Atlas of AI. Yale University Press.

European Commission: Directorate-General for Education, Youth, Sport and Culture. (2022). Ethical guidelines on the use of artificial intelligence (AI) and data in teaching and learning for educators. Publications Office of the European Union. https://data.europa.eu/doi/10.2766/153756

Giró, J., & Sancho-Gil, J. M. (2022). Inteligencia artificial y equidad educativa: retos, tensiones y oportunidades. RUSC. Universities and Knowledge Society Journal, 19(2), 1–14.

McIntyre, L. (2018). Posverdad. Cátedra.

Morozov, E. (2013). To save everything, click here: The folly of technological solutionism. PublicAffairs.

OECD. (2023). OECD Digital Education Outlook 2023: Towards an Effective Digital Education Ecosystem. OECD Publishing. https://doi.org/10.1787/c74f03de-en

The Conversation. (2021). Eurídice Cabañes: “Somos cíborgs, personas híbridas fundidas con la tecnología”. https://doi.org/10.64628/AAO.mxmacappj

UNESCO. (2021). Recommendation on the ethics of artificial intelligence. https://unesdoc.unesco.org/ark:/48223/pf0000381137

Winner, L. (1980). Do artifacts have politics? Daedalus, 109(1), 121–136.


Author:

Jaime Moreno Carpintero

Research Group: Global Education

University of Cantabria
