Following up on our series about the dangers of the algorithm-based digital landscape, let’s talk about discrimination and unequal opportunities. Digital tools are often presented as neutral, objective systems, yet they can mirror the biases of the people and structures that create them. When algorithms misinterpret faces, flag disabilities as suspicious behaviour, or limit the visibility of certain voices, the result is a digital landscape where some children feel unseen or unfairly judged. Understanding how these systems can produce unequal outcomes is an essential step in helping young people navigate technology without internalising its flaws.
Algorithms are often assumed to rest on impartial science and to be immune to bias, yet in reality they carry the prejudices of the people who create them. This is most overt when software discriminates against certain ethnic groups or people with disabilities. For example, in 2022 anti-cheating software used by the Free University of Amsterdam was shown, with “sufficient evidence”, to have discriminated against a woman: it was unable to detect her face because of her dark skin. Similarly, the Center for Democracy & Technology reported in 2020 that AI-powered monitoring software was flagging disabled students as potential cheaters because of disability-specific movement, speech, and cognitive processing: students with attention-deficit disorder who may need to get up and pace around the room, students with Tourette syndrome who display motor tics, students with dyslexia who may read questions out loud, and many others.
Additionally, there are reasonable concerns about the discriminatory outcomes of social media practices such as “shadow bans”, where the reach of content that covers certain topics or comes from certain categories of creators is significantly limited. For example, in 2020 on some social media platforms, most posts using the hashtag “Black Lives Matter” would get zero views, reportedly due to a “glitch”. Educational content on issues that concern the LGBTQ community has often been age-restricted on platforms such as YouTube, despite containing nothing that could reasonably be interpreted as inappropriate had it treated the same subjects for cisgender heterosexual individuals.
The impact individual users can have on these issues is limited, and children affected by them may currently have no direct way to challenge these barriers. However, gradual positive change is being made, and it is the role of adults to reassure children that their negative experiences with such technologies will not define them. We can inspire the younger generation to combat these biases by speaking out about them, raising awareness, and encouraging them to later enter the field of technology and help steer progress in a far more egalitarian direction.
At ABI School, we want students to recognise that technology is not infallible — and that unequal treatment by digital systems is a reflection of design limitations, not personal shortcomings. By encouraging open conversations, raising awareness of bias in algorithms, and supporting students who feel marginalised by these tools, we empower them to imagine a more equitable future. And by inspiring young people to enter the fields that shape tomorrow’s technologies, we help them become the voices that drive progress toward fairness and inclusivity.
🔗 Explore more on our website: abischool.fr
🔗 Follow ABI School on LinkedIn, Facebook, and Instagram to read the full series on child digital safety and wellbeing.