TODO: Link to Blog Post Workshop
- IBM's AI Fairness 360: This extensible open source toolkit can help you examine, report, and mitigate discrimination and bias in machine learning models throughout the AI application lifecycle.
- Google's What-If Tool: Using WIT, you can test performance in hypothetical situations, analyze the importance of different data features, and visualize model behavior across multiple models, subsets of input data, and different ML fairness metrics.
- Georgia Institute of Technology's CS 6603: AI, Ethics, and Society.
- UC Berkeley's Algorithmic Fairness & Opacity Lecture Series.
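The toolkits above report group fairness metrics over model outcomes. As a rough sketch of what two of the most common metrics measure (the function names and toy data here are illustrative assumptions, not the API of AI Fairness 360 or WIT):

```python
def selection_rate(labels):
    """Fraction of favorable (positive) outcomes in a group."""
    return sum(labels) / len(labels)

def statistical_parity_difference(unpriv, priv):
    # P(Y=1 | unprivileged) - P(Y=1 | privileged); 0.0 means parity.
    return selection_rate(unpriv) - selection_rate(priv)

def disparate_impact(unpriv, priv):
    # Ratio of selection rates; values below 0.8 are often flagged
    # under the "four-fifths rule".
    return selection_rate(unpriv) / selection_rate(priv)

# Hypothetical binary outcomes (1 = favorable) for two groups.
privileged = [1, 1, 1, 0, 1, 1, 0, 1]    # selection rate: 0.75
unprivileged = [1, 0, 0, 1, 0, 1, 0, 0]  # selection rate: 0.375

print(statistical_parity_difference(unprivileged, privileged))  # -0.375
print(disparate_impact(unprivileged, privileged))               # 0.5
```

Libraries like AI Fairness 360 compute these same quantities (along with many others) over labeled datasets with declared protected attributes, and additionally offer mitigation algorithms to reduce the disparities they surface.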