On 7 March, I attended the Avoiding Bias in AI Symposium hosted by the Center for AI in Medicine at the University of Bern. In recognition of International Women’s Day, I am devoting this blog to highlighting the gender biases embedded in society and in the vast quantities of data used to train advanced AI models. These biases are not limited to women; they affect all gender identities. Bias in data has always been a concern for models and data analytics, but the widespread use of today’s AI models heightens its impact. Many biases are subconscious, yet they can become glaringly evident in the output of AI models, which learn the intrinsic biases of the data on which they are trained.

GenAI has the potential to be a formidable force for positive change, but harnessing that potential requires, among other things, recognizing the biases in the data that shape AI algorithm outputs. One cannot blame the algorithms themselves: they are simply doing as instructed. A judicious evaluation of the results and an inspection of the source data are therefore indispensable.

GenAI offers us not only an unprecedented opportunity for advancement but also a picture of our societal norms that exposes our assumptions and prejudices. It is up to us to confront and rectify these biases so that AI can help us create a more equitable and responsible future.

Date: March 7, 2024
Event: Avoiding Bias in AI Symposium, Center for AI in Medicine, University of Bern
Location: Bern, Switzerland