The introduction of AI in higher education brings new challenges for all students in terms of diversity, equity and inclusion (DEI). It also raises questions about accessibility for marginalized student populations.
Artificial intelligence (AI) in colleges and universities is best known for its pedagogical monitoring mechanisms: intelligent tutoring systems, adaptive and personalized systems targeting different student profiles, dashboards and success prediction tools (Collin & Marceau, 2021).
While automated AI systems are often perceived as neutral and objective, they rely on upstream choices made by design teams, and those choices are neither neutral nor necessarily fair and equitable (Université de Montréal & IVADO, 2021).
AI and the Admissions Process: An Example of Biased Data
The decision-making algorithms used in college and university admissions processes are based on data that can be biased.
As a result, AI systems can reproduce, and even amplify, certain biases and stereotypes already present in society (Cachat-Rosset, 2022), which in turn can lead to discriminatory decisions (Noiry, 2021) toward certain people who belong to marginalized groups.
The reproduction of biases in AI systems could lead to limited access to higher education for certain student populations.
What is a bias?
A bias is a deviation from a result that is supposed to be neutral (Bertail et al., 2019). There are two main categories of bias:
(1) Cognitive biases, which refer to incorrect reasoning or errors of judgment or perception that deviate from logical thinking. Often unconscious, they may be linked to emotions (e.g., fear, anger) or long-acquired habits of thought (Gauvreau, 2021).
One of the most widespread cognitive biases is confirmation bias, which consists of favoring information that supports our opinions, beliefs or values, while ignoring or discrediting information that contradicts them (ibid.).
The essentialist bias is associated with prejudice against members of certain social groups, whose characteristics are perceived as immutable (ibid.).
(2) Representativeness biases, which refer to a mismatch between:
- On the one hand, the data used to design a decision-support algorithm (e.g., data from people already enrolled in a study program);
- On the other hand, the target data on which the algorithm will be deployed (e.g., data from applicants to that program).
Indeed, AI works by “learning”: algorithms are created to extract data, analyze it, identify trends and make predictions. If certain student groups are underrepresented in the data, exclusion is reproduced (Ravanera & Kaplan, 2021).
Thus, the absence of people from a marginalized group in a dataset used to build a decision-support algorithm for admissions processes may constitute a representativeness bias (Bertail et al., 2019).
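To make this mechanism concrete, here is a minimal, purely illustrative sketch in Python. The data are synthetic, the feature names and thresholds are hypothetical, and the model (scikit-learn's LogisticRegression) is only a stand-in for whatever system an institution might use: when one group is underrepresented in historical admissions data that also encode a stricter bar for that group, the trained model reproduces the disadvantage for new, equally qualified applicants.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Historical (training) data: group B is heavily underrepresented (hypothetical numbers).
scores_a = rng.normal(loc=70, scale=10, size=950)   # entrance scores, group A
scores_b = rng.normal(loc=70, scale=10, size=50)    # entrance scores, group B

# Past human decisions applied a stricter bar to group B: this is the bias in the data.
admitted_a = (scores_a + rng.normal(0, 5, 950) > 72).astype(int)
admitted_b = (scores_b + rng.normal(0, 5, 50) > 78).astype(int)

# Features: [score, is_group_b]; the model "learns" the historical pattern.
X_train = np.column_stack([
    np.concatenate([scores_a, scores_b]),
    np.concatenate([np.zeros(950), np.ones(50)]),
])
y_train = np.concatenate([admitted_a, admitted_b])
model = LogisticRegression().fit(X_train, y_train)

# Deployment: equally qualified applicants from both groups.
new_scores = rng.normal(loc=75, scale=5, size=1000)
for flag, group in ((0.0, "A"), (1.0, "B")):
    X_new = np.column_stack([new_scores, np.full(1000, flag)])
    print(f"Predicted admission rate, group {group}: {model.predict(X_new).mean():.0%}")
# Typical output: group A is admitted far more often than group B at identical scores,
# because the algorithm reproduces the exclusion present in its training data.
```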
The representativeness biases present in AI system databases stem, among other things, from the composition of the design teams (Collin & Marceau, 2021; Gaudreau & Lemieux, 2020). This is a predominantly male environment, where the representation of women and sociodemographic minority groups (e.g., racialized people) is very low (Observatoire international sur les impacts sociétaux de l’IA et du numérique – OBVIA, n.d.).
Representation on AI teams:
- Women make up 24% of AI professionals in Canada and 22% worldwide (Ravanera & Kaplan, 2021);
- Women and men who reported belonging to a visible minority were less likely to work in science, technology, engineering and mathematics (STEM) than their counterparts who did not report belonging to a visible minority (Statistics Canada, 2019, in Ravanera & Kaplan, 2021);
- Less than 3% of full-time Google employees are Black; the figure rises to 4% at Microsoft (West et al., 2019, in Ravanera & Kaplan, 2021).
Algorithmic Discrimination
Such a lack of diversity within design teams can increase the presence of bias toward certain marginalized groups, who are also part of the student population in higher education institutions. These people run the risk of not having a voice, of being excluded from systems (Gaudreau & Lemieux, 2020) and of remaining invisible because they are not represented in the data or because their needs are not deemed a priority (Gentelet & Lambert, 2021).
A Bit of History
Attention to the reproduction of biases and discrimination is not new. In the 1970s and 1980s, a medical school in the United Kingdom used a computer program to select applicants. It rejected applicants who were women or who had non-European names, because the algorithm was based on data from previously accepted applications, in which these applicants were poorly represented (Ravanera & Kaplan, 2021).
More recently, still in the United Kingdom, the COVID-19 pandemic led to the cancellation of end-of-year exams in high schools. An alternative method, a grading algorithm, was designed to determine students’ A-level grades for the 2019–2020 school year.
The publication of the results was widely criticized, particularly because of the effect of the grading algorithm: the grades of students who attended public schools were lower, while the results of private school students improved. This imbalance disadvantaged students from lower socioeconomic backgrounds, thereby reducing their access to higher education (Poirier, 2020).
As a result, an AI decision-support system based on biased data can lead to a discriminatory decision (Université de Montréal & IVADO, 2021).
Toward Inclusive and Responsible AI
It is crucial that the integration of AI into higher education systems be made by considering issues related to diversity, equity and inclusion (DEI), including the biases and discrimination that can arise as a result (Observatoire international sur les impacts sociétaux de l’IA et du numérique – OBVIA, n.d.).
Questions that need to be answered
- Do design teams integrate expertise representative of diversity (e.g., ethnocultural, gender, social class) when developing AI technologies? (Collin & Marceau, 2021)
- Are design teams trained in the biases and potentially discriminatory decisions of AI systems? (Cachat-Rosset, 2022)
- What are the best practices for identifying and mitigating biases and discrimination in AI systems? (Université de Montréal & IVADO, 2021). Are they being implemented?
- Should information on how algorithms work be made accessible in public sectors where sensitive data is used (e.g., education, healthcare)? (Gentelet & Lambert, 2021)
Anchoring AI-related issues in a framework founded on DEI and justice (Collin & Marceau, 2021) makes it possible to consider technological development at the service of people — and not the other way around — and would promote inclusive and responsible AI (Castets-Renard, 2019).
To that end, we must resist the imperative of taking the digital turn at any price and at any speed, in order to avoid going off course. The only real urgency is to slow down and create the right conditions for citizen participation. In this way, it will become possible to find inclusive alternatives to digital issues that remain, above all, social in nature.
Gentelet & Lambert, 2021
It was in response to these concerns that, in 2017, the Université de Montréal drafted the Montréal Declaration for a Responsible Development of Artificial Intelligence to guide the development of AI in an inclusive manner. The Declaration includes ten principles: well-being, respect for autonomy, protection of privacy and intimacy, solidarity, democratic participation, equity, diversity inclusion, caution, responsibility and sustainable development.
Some of UNESCO’s (2019) AI recommendations below also point in this direction:
- Facilitate the development of standards and policies for improved openness and transparency in AI algorithms;
- Reduce digital divides in access to AI, particularly those related to gender, by establishing independent monitoring mechanisms;
- Strive for gender equality and ethnocultural diversity as well as for the inclusion of marginalized groups in multi-stakeholder dialogues on AI issues;
- Evaluate the algorithmic discrimination of historically marginalized populations.
Efforts are underway to consider solutions that mitigate the discrimination and invisibility of groups marginalized by biased AI systems (Gentelet, 2022).
One of the keys to identifying potential discrimination is to examine the outputs of an AI model while it is under development, in order to uncover the underlying discriminatory mechanisms (Université de Montréal & IVADO, 2021).
Best practices to mitigate biases when modeling AI include:
- Assessing the diversity of the design team’s composition right from the start of the project;
- Understanding the purpose, the stakeholders involved and the potential consequences of applying the AI model in development;
- Examining the provenance of datasets;
- Ensuring that the AI model developed is truly in line with responsible practices (ibid.).
This last step involves verifying whether the people targeted by an AI system’s decisions (e.g., students belonging to marginalized groups) are adversely affected.
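As a concrete illustration of that verification step, the following sketch compares admission rates across groups in a model’s decisions. The data and group labels are hypothetical, and the 80% threshold is a common rule of thumb rather than a requirement stated by the sources cited here; it simply shows how a group-level audit can flag a disparity worth investigating.

```python
from collections import defaultdict

def selection_rates(decisions):
    """Compute the admission rate per group from (group, admitted) pairs."""
    totals, admitted = defaultdict(int), defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        admitted[group] += outcome
    return {group: admitted[group] / totals[group] for group in totals}

# Hypothetical decisions produced by a model under development.
decisions = [("A", 1)] * 640 + [("A", 0)] * 360 + [("B", 1)] * 41 + [("B", 0)] * 59

rates = selection_rates(decisions)
ratio = min(rates.values()) / max(rates.values())  # 1.0 would mean parity
print({group: round(rate, 2) for group, rate in rates.items()})  # {'A': 0.64, 'B': 0.41}
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # rule-of-thumb threshold; an appropriate threshold should be set with stakeholders
    print("Warning: the model's decisions disadvantage at least one group.")
```

Such a disparity is a signal to revisit the training data and the design choices described above; it is not, by itself, proof of intent.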
The introduction of AI regulatory mechanisms should move in this direction: if a decision based on an AI system adversely affects a student, who would be held responsible (Gentelet & Lambert, 2021)? The mere fact that a higher education institution can be held accountable for a discriminatory decision justifies designing AI systems with equity, diversity and inclusion in mind from the very start.
References
Bertail, P., Bounie, D., Clémençon, S. & Waelbroeck, P. (2019). Algorithmes : biais, discrimination et équité. Institut Mines-Télécom de France.
Cachat-Rosset, G. (2022, November 4). Les enjeux de discrimination et d’inclusion en IA [oral presentation]. Département de mathématiques et de statistique de l’Université du Québec à Trois-Rivières.
Castets-Renard, C. (2019, October 10). Intelligence artificielle : combattre les biais des algorithmes. The Conversation.
Collin, S. & Marceau, E. (2021). L’intelligence artificielle en éducation : enjeux de justice. Formation et profession, 29(2), 1‑4.
Gaudreau, H. & Lemieux, M.-M. (2020). L’intelligence artificielle en éducation : un aperçu des possibilités et des enjeux. Le Conseil.
Gauvreau, C. (2021, January 11). Reconnaître les biais cognitifs. Actualités UQAM.
Gentelet, K. (2022, May 31). Repenser la justice sociale en contexte d’IA : défis majeurs [oral presentation]. Chaire Justice sociale et intelligence artificielle, Paris, France.
Gentelet, K. & Lambert, S. (2021, June 14). La justice sociale : l’angle mort de la révolution de l’intelligence artificielle. The Conversation.
Noiry, N. (2021, August 31). Des biais de représentativité en intelligence artificielle. Blog Binaire – Le Monde.
Observatoire international sur les impacts sociétaux de l’IA et du numérique – OBVIA. (n.d.). Équité, diversité et inclusion.
Poirier, A. (2020, August 17). Au Royaume-Uni, le scandale de l’algorithme qui a lésé les lycéens ne s’apaise pas. L’Express.
Ravanera, C. & Kaplan, S. (2021). Une perspective d’équité en matière d’intelligence artificielle. Institute for Gender and the Economy, Rotman School of Management, Université de Toronto.
Sorbonne Université (2021). Les discriminations algorithmiques | 2 minutes d’IA [video]. Sorbonne Center for Artificial Intelligence (SCAI).
Université de Montréal (2017). Déclaration de Montréal pour un développement responsable de l’intelligence artificielle.
Université de Montréal & IVADO (2021). Biais et discrimination en IA [MOOC].