Margaret Mitchell is a computer scientist who works on algorithmic bias and fairness in machine learning. She is best known for her work on automatically removing undesired biases concerning demographic groups from machine learning models,[2] as well as on more transparent reporting of models' intended use.[3]
Her research centers on fairness in machine learning and methods for mitigating algorithmic bias. This includes introducing the concept of 'Model Cards' for more transparent model reporting[3] and developing methods for debiasing machine learning models with adversarial learning.[2] In the adversarial framework, the model is trained alongside a variable for the group of interest: a predictor learns the main task while an adversary attempts to recover that group membership from the predictor's output, and the predictor is penalized whenever the adversary succeeds.[5]
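The setup can be illustrated with a short sketch. The following is a minimal PyTorch illustration in the spirit of the Zhang, Lemoine, and Mitchell (2018) paper, not the authors' implementation: the toy data, network shapes, variable names, and the hyperparameter alpha are all assumptions made here for illustration. Following the paper's gradient update, the predictor's task gradient has its projection onto the adversary's gradient removed and is additionally pushed against the adversary's objective.

```python
# Illustrative sketch of adversarial debiasing (after Zhang, Lemoine, and
# Mitchell, 2018). All names, sizes, and hyperparameters below are
# assumptions for illustration, not the authors' reference code.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy data: features x, task label y, protected attribute z (e.g. a
# demographic group of interest). The label is deliberately correlated with z.
n, d = 256, 8
x = torch.randn(n, d)
z = (torch.rand(n, 1) > 0.5).float()
y = ((x[:, :1] + 2.0 * z + 0.1 * torch.randn(n, 1)) > 1.0).float()

predictor = nn.Sequential(nn.Linear(d, 16), nn.ReLU(), nn.Linear(16, 1))
adversary = nn.Sequential(nn.Linear(1, 8), nn.ReLU(), nn.Linear(8, 1))

pred_opt = torch.optim.Adam(predictor.parameters(), lr=1e-2)
adv_opt = torch.optim.Adam(adversary.parameters(), lr=1e-2)
bce = nn.BCEWithLogitsLoss()
alpha = 1.0  # weight on the adversarial term (assumed value)

for step in range(200):
    # 1) Update the adversary: predict z from the predictor's output alone.
    y_logit = predictor(x)
    adv_loss = bce(adversary(y_logit.detach()), z)
    adv_opt.zero_grad()
    adv_loss.backward()
    adv_opt.step()

    # 2) Update the predictor: minimize the task loss while (a) removing the
    #    component of its gradient that would help the adversary and
    #    (b) pushing against the adversary's objective.
    y_logit = predictor(x)
    task_loss = bce(y_logit, y)
    adv_loss = bce(adversary(y_logit), z)

    adv_grads = torch.autograd.grad(adv_loss, predictor.parameters(),
                                    retain_graph=True)
    task_grads = torch.autograd.grad(task_loss, predictor.parameters())

    pred_opt.zero_grad()
    for p, g_task, g_adv in zip(predictor.parameters(), task_grads, adv_grads):
        # Projection of the task gradient onto the adversary's gradient.
        unit = g_adv / (g_adv.norm() + 1e-8)
        proj = (g_task * unit).sum() * unit
        p.grad = g_task - proj - alpha * g_adv
    pred_opt.step()
```

In this formulation, the projection term keeps the predictor from moving in any direction that would aid the adversary, while the alpha term actively degrades the adversary's ability to recover the protected attribute from the predictor's output.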
In 2012, Mitchell joined the Human Language Technology Center of Excellence at Johns Hopkins University as a postdoctoral researcher, before taking up a position at Microsoft Research in 2013.[6] At Microsoft, she was the research lead of the Seeing AI project, an app that assists blind and low-vision users by narrating text and describing images.[7]
In November 2016, she became a senior research scientist at Google Research and Machine Intelligence. While at Google, she founded and co-led the Ethical Artificial Intelligence team with Timnit Gebru. In May 2018, she represented Google in the Partnership on AI.
In February 2018, she gave a TED talk on 'How we can build AI to help humans, not hurt us'.[8]
In January 2021, after Timnit Gebru's termination from Google, Mitchell reportedly used a script to search her corporate account and download emails that allegedly documented discriminatory incidents involving Gebru. An automated system subsequently locked Mitchell's account. In response to media attention, Google claimed that she "exfiltrated thousands of files and shared them with multiple external accounts".[9][10][11] After a five-week investigation, Mitchell was fired.[12][13][14] Prior to her dismissal, Mitchell had been a vocal advocate for diversity at Google and had voiced concerns about research censorship at the company.[15][9]
Zhang, Brian Hu; Lemoine, Blake; Mitchell, Margaret (2018-12-01). "Mitigating Unwanted Biases with Adversarial Learning". Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society. pp. 220–229. arXiv:1801.07593. doi:10.1145/3278721.3278779.
Mitchell, Margaret; Wu, Simone; Zaldivar, Andrew; Barnes, Parker; Vasserman, Lucy; Hutchinson, Ben; Spitzer, Elena; Raji, Inioluwa Deborah; Gebru, Timnit (2019-01-29). "Model Cards for Model Reporting". Proceedings of the Conference on Fairness, Accountability, and Transparency. arXiv:1810.03993. doi:10.1145/3287560.3287596.