Research Assistant · University of Toronto
Investigated algorithmic bias in the COMPAS recidivism risk assessment tool, analyzing 10k+ case records in Python to evaluate fairness and predictive accuracy. Built reproducible Jupyter experiments, fairness-aware modeling pipelines, and visual reports that improved model accuracy to 77.2% while reducing group disparity (statistical parity difference −0.012, disparate impact 0.981); see the sketch below for how such metrics can be computed.
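
A minimal sketch of how the reported fairness metrics could be computed with pandas. The column names (race, pred), group labels, and the convention that a positive prediction means "flagged high risk" are illustrative assumptions, not the project's actual schema or pipeline.

```python
import pandas as pd

def fairness_metrics(df: pd.DataFrame, group_col: str, pred_col: str,
                     privileged: str, unprivileged: str) -> dict:
    """Compute statistical parity difference (SPD) and disparate impact (DI)
    from binary predictions, comparing an unprivileged group to a privileged one."""
    # Rate of positive (high-risk = 1) predictions within each group
    p_priv = df.loc[df[group_col] == privileged, pred_col].mean()
    p_unpriv = df.loc[df[group_col] == unprivileged, pred_col].mean()
    return {
        "SPD": p_unpriv - p_priv,  # 0.0 means parity between groups
        "DI": p_unpriv / p_priv,   # 1.0 means parity; the "80% rule" asks for >= 0.8
    }

# Hypothetical example: one row per case with the model's binary high-risk prediction.
if __name__ == "__main__":
    records = pd.DataFrame({
        "race": ["African-American", "Caucasian", "African-American", "Caucasian"],
        "pred": [1, 0, 0, 1],
    })
    print(fairness_metrics(records, "race", "pred",
                           privileged="Caucasian", unprivileged="African-American"))
```

Reporting both metrics together is a deliberate choice: SPD captures the absolute gap in positive-prediction rates, while DI captures the ratio, so near-zero SPD and near-1.0 DI jointly indicate the groups are flagged at comparable rates.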