Working on Result 2, the inclusion guide: the why, what, and first steps
The inclusion guide by FAIaS
Why is FAIaS creating an inclusion guide?
When a homogeneous group creates AI algorithms, the result is often algorithms that (sometimes unconsciously) discriminate unethically.
Racial discrimination
The first and most famous case, the COMPAS model, shows how even simple models can discriminate unethically on the basis of race.
On May 23, 2016, Julia Angwin, Jeff Larson, Surya Mattu and Lauren Kirchner of ProPublica published the article ‘Machine Bias’, which states that COMPAS (Correctional Offender Management Profiling for Alternative Sanctions), an artificial intelligence software used in courtrooms across the United States to predict future crimes, is biased against Black defendants. Further research has been done on the topic, including criticism of the original article.
Some extra references about the COMPAS model can be found here:
- An article published on Towards Data Science titled ‘COMPAS Case Study: Fairness of a Machine Learning Model’
- A Massive Science article titled ‘Can the criminal justice system’s artificial intelligence ever be truly fair? Computer programs used in 46 states incorrectly label Black defendants as “high-risk” at twice the rate of white defendants’
- A criticism of the original ProPublica article, titled ‘False Positives, False Negatives, and False Analyses: A Rejoinder to “Machine Bias: There’s Software Used Across the Country to Predict Future Criminals. And It’s Biased Against Blacks”’
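The core of ProPublica’s analysis was a comparison of error rates across groups: among defendants who did not go on to reoffend, Black defendants were labelled ‘high-risk’ far more often than white defendants. As a rough illustration of that idea (using invented records, not the COMPAS data or ProPublica’s code), the hypothetical sketch below computes a false positive rate per group:

```python
# Hypothetical sketch: comparing false positive rates across groups.
# The records below are made up for illustration; they are NOT COMPAS data.
# Each record: (group, predicted_high_risk, actually_reoffended)
records = [
    ("group_a", True,  False), ("group_a", True,  False), ("group_a", False, False),
    ("group_a", True,  True),  ("group_b", False, False), ("group_b", False, False),
    ("group_b", True,  False), ("group_b", True,  True),
]

def false_positive_rate(rows):
    """Share of people who did NOT reoffend but were still labelled high-risk."""
    non_reoffenders = [r for r in rows if not r[2]]
    if not non_reoffenders:
        return 0.0
    return sum(1 for r in non_reoffenders if r[1]) / len(non_reoffenders)

for group in ("group_a", "group_b"):
    rows = [r for r in records if r[0] == group]
    print(group, f"false positive rate: {false_positive_rate(rows):.2f}")

# If the rates differ substantially between groups, the model is treating
# non-reoffenders from one group worse than the other - the kind of
# disparity ProPublica reported for COMPAS.
```

When the two printed rates diverge, the model makes its costliest mistake (wrongly flagging someone as high-risk) unevenly across groups, even if its overall accuracy looks acceptable.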
Gender discrimination
Recruitment is an example of how discriminatory algorithms are affecting women today and will affect girls tomorrow. Candidate selection and recruitment processes are increasingly handled by digital systems with AI algorithms behind them. When these algorithms are biased, some groups are discriminated against and never even get the chance to reach an interview. Some references can be found here:
- A Reuters article explains how and why AI recruiting tools can be biased against women, using the case of Amazon’s automated recruitment system (the mechanism is illustrated in the sketch after this list).
- A Bloomberg article by Bass, D. and Huet, E. (2017) titled ‘Researchers Combat Gender and Racial Bias in Artificial Intelligence’.
- A Medium article from 2018 about ‘Racial bias and gender bias in AI systems’, with examples, which also includes the COMPAS use case.
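The Amazon example in the Reuters article boils down to a simple mechanism: a model trained on years of mostly male hires learns that words correlated with male candidates predict ‘good hire’. The toy sketch below is not Amazon’s system and uses an invented hiring history; it only illustrates how a naive word score trained on imbalanced past decisions ends up penalizing a proxy word:

```python
# Hypothetical sketch of how an imbalanced hiring history teaches a naive
# model to penalize a proxy word. The data is invented for illustration only.
from collections import Counter

# Past decisions: (words appearing in the CV, was the candidate hired?).
# Because most historical hires were men, "women's" (e.g. "women's chess
# club") rarely co-occurs with a hire in this made-up history.
history = [
    ({"python", "chess"}, True),
    ({"python", "leadership"}, True),
    ({"java", "chess"}, True),
    ({"python", "women's", "chess"}, False),
    ({"java", "women's", "leadership"}, False),
    ({"python"}, True),
]

hired = Counter()
rejected = Counter()
for words, was_hired in history:
    (hired if was_hired else rejected).update(words)

def word_score(word):
    """Naive score: how often a word appears with hires minus rejections."""
    return hired[word] - rejected[word]

for word in ("python", "chess", "women's"):
    print(word, word_score(word))

# "women's" gets a negative score purely because of who was hired in the
# past, so any CV containing it is ranked lower - the historical bias is
# automated and applied to every future candidate.
```

Nothing in this sketch refers to gender directly; the discrimination enters through the data the system learns from, which is exactly why diverse teams and conscious bias checks are needed.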
With the previous references and examples about bias in AI related to race and gender, we can clearly see that the bias challenge needs to be tackled to guarantee that our systems are not automating existing tendencies or discrimination. Bias should matter to everyone: it is not only about women’s jobs, but about many more aspects that may affect men too. For example: since proportionally more than 90% of murders in recent years were committed by men, and most teachers are female, AI systems designed with poor knowledge and awareness of bias will tend to automate discriminatory practices against men and readily classify them as ‘potential murderers’ or as ‘potentially unfit for teaching jobs’. Will we allow such discriminatory systems against our men? Gender and racial discrimination should matter to everyone, and the FAIaS project cares about it. For inspiration, check Michael Kimmel’s TED talk ‘Why gender equality is good for everyone - men included’.
What is the FAIaS inclusion guide?
The inclusion guide will take the shape of a handbook with interactive materials and diversity recommendations. With this guide, we want to contribute to the creation of a more inclusive educational system, one that also helps the workforce become more diverse and made up of more heterogeneous groups, and that tackles bias in other ways as well.
The inclusion guide, also known as FAIaS result 2 or PR2, is all about inclusion in Artificial Intelligence and Education. It will provide (non-formal) educators with tools and knowledge about bias in education and bias in AI algorithms, explain with examples and easy-to-use materials why diversity and gender/racial balance are so important in the AI and education fields, and include practical tips to tackle this bias challenge together.
And, how is the development of the inclusion guide going so far?
To achieve the end result, a number of initial and intermediate steps need to be taken, and several have already been completed.
- In June 2021, CollectiveUP, as PR2 coordinator, prepared the outline and plan for the development of this project result. The plan was presented to all partners in Madrid at the project meeting organized by FAIaS lead partner URJC.
- All partners (URJC, TCB, VUB and CU) brainstormed on the plan and agreed on the next steps: on the one hand, to work closely with (non-formal) educators on the topic of bias, interview them and understand their needs; on the other hand, to start developing materials to tackle the topic of unconscious bias and bias in AI.
- In relation to project result 1, a lesson plan was outlined by the AI Lab at VUB to be presented to teachers from Braga and tested out.
- Partner Theatro Circo de Braga carried out focus groups and interviews with teachers to gather feedback about the lesson plan on bias. The conclusions can be found in a previous post.
- As PR1 and PR2 are interrelated in a few aspects, the lesson plan on bias created for PR1 was presented to non-formal educators in Belgium to evaluate whether it was feasible for them to use it as is. These initial talks took place online in June, July, September and October 2021, and they were very helpful for gathering the initial needs of these organizations and educators and understanding how PR2 could be developed for them.
- In direct relation to PR2, CollectiveUP dedicated October and November 2021 to researching the topic of unconscious bias and preparing materials for a theoretical and practical workshop for educators.
- In November 2021, the materials and workshop were tested out with teachers, and feedback was received on how to integrate the bias topic into formal education lessons and non-formal activities.
- In December 2021 and January 2022, additional talks took place with (non-formal) educators across Europe who already use AI in the classroom, and we gathered inspiration from their own lessons!
Next steps:
- In-depth interviews with educators will take place in the coming months; we will record them, edit them, post them on our YouTube channel and publish the text on our blog as new posts.
- We will also create a new post summarizing the initial conclusions from the talks with educators that took place from June to October.
- We will publish on our blog the bias materials and workshop that have been developed so far, and the feedback from teachers.
- We will develop examples that use the LearningML tool to illustrate the concept of bias in a practical way, and incorporate them as extra materials to complement the workshop on unconscious bias already given in Braga.
Remember to follow us on social media and on our newsletter to keep up-to-date on FAIaS developments!