Quantifying Bias In AI: Detecting Bias Along the Machine-Learning Pipeline

The growth of face recognition (FR) technology has been accompanied by consistent assertions that demographic dependencies can lead to accuracy variations and potential bias. For over a decade, the tech industry has anticipated the integration of computer vision into the human-machine interface to enhance automation. As it turned out, early traction for FR came from security and surveillance applications, rightfully prompting dystopian fears. While engineers have benefited from AI and machine-learning tools that let them train models to ever higher accuracy, the fairness and ethics of their algorithms have often been an afterthought. This project aims to develop a tool set to measure and reduce bias in AI training sets, test sets, and resulting models. Its goals also include practical tech-policy recommendations for the industry regarding bias in AI models.
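One way to make the measurement goal concrete is a per-group disparity metric: compute model accuracy separately for each demographic group in a labeled test set and report the gap. The sketch below is illustrative only; the function names, the choice of accuracy as the metric, and the group annotations are assumptions, not the project's actual tool set.

```python
from collections import defaultdict

def per_group_accuracy(y_true, y_pred, groups):
    """Accuracy computed separately for each demographic group.

    y_true, y_pred: ground-truth and predicted labels (e.g. match / no-match).
    groups: a demographic group tag for each test sample (hypothetical annotation).
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for t, p, g in zip(y_true, y_pred, groups):
        total[g] += 1
        correct[g] += int(t == p)
    return {g: correct[g] / total[g] for g in total}

def accuracy_gap(y_true, y_pred, groups):
    """Max minus min per-group accuracy; 0.0 means no measured disparity."""
    acc = per_group_accuracy(y_true, y_pred, groups)
    return max(acc.values()) - min(acc.values())
```

A model that scores well on aggregate accuracy can still show a large gap here, which is exactly the kind of disparity an FR bias audit would flag.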

Faculty Adviser

Payman Arabshahi, Associate Professor, UW Electrical & Computer Engineering

Arindam Das, Affiliate Associate Professor, Electrical & Computer Engineering


Claudia Valenta
Karlee Wong
Rakesh Pavan
Rhea Bhutani