Term: Fall 2022 - Full Term (08/29/2022 - 12/12/2022)
Grade Mode: Letter Grading
|Start Date|End Date|Days|Time|Location|
|---|---|---|---|---|
|8/29/2022|12/12/2022|TR|5:10pm - 6:30pm|KING N233|
Machine learning is increasingly all around us, determining who gets loans, parole, educational and career opportunities, medical care, and so on. With this increasing ubiquity, and with the increasing power and complexity of modern machine learning, have come concerns about the fairness, accountability, and transparency of these models and the systems that rely on them. ML fairness seeks to detect and mitigate situations where models learn to unethically (and often illegally) rely on protected attributes like race, gender, and sexual orientation in making high-stakes decisions like those in justice and finance.

ML accountability seeks to deal properly with model mistakes. Who is at fault when a model screws up? How do we "fire" a model? How do we ensure that a given mistake doesn't happen again? Finally, ML transparency seeks to expose the internal logic of these complicated, nonlinear models in order to help humans use them in more effective, ethical, and accountable ways. These three concerns are heavily intertwined, and have given rise to the Fairness, Accountability, and Transparency (FAccT) movement in machine learning. This movement has become a very popular sub-area of AI, bringing together researchers, businesses, and policymakers to think about the implications of an AI-reliant society.
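To give a flavor of the kind of fairness check explored in this area, here is a minimal illustrative sketch (not an official course example) of demographic parity difference, one common group-fairness metric: it compares a model's positive-prediction rate across groups defined by a protected attribute. The data below is hypothetical.

```python
# Illustrative sketch of demographic parity difference, a simple
# group-fairness metric. A classifier satisfies demographic parity
# when its positive-prediction rate is equal across protected groups;
# the difference below quantifies how far it is from that ideal.

def positive_rate(preds):
    """Fraction of predictions that are positive (1)."""
    return sum(preds) / len(preds)

def demographic_parity_difference(preds, groups):
    """Largest gap in positive-prediction rate across groups.

    0.0 means all groups receive positive predictions at the same
    rate; larger values indicate greater disparity.
    """
    by_group = {}
    for pred, group in zip(preds, groups):
        by_group.setdefault(group, []).append(pred)
    rates = [positive_rate(p) for p in by_group.values()]
    return max(rates) - min(rates)

# Hypothetical loan-approval predictions for two groups, "A" and "B":
# group A is approved at rate 3/4, group B at rate 1/4.
preds = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(preds, groups))  # 0.5
```

Libraries such as Fairlearn provide production-grade versions of metrics like this, along with mitigation algorithms; the hand-rolled version above is only meant to show the underlying idea.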
This course will be a seminar consisting of roughly 50% reading assignments and 50% coding assignments, including a final project exploring some aspect of FAccT ML. The coding parts of the course will be taught in Python, and CS750/CS850: Machine Learning (or equivalent) is a prerequisite.