CIS 482/582: Trustworthy Artificial Intelligence
University of Michigan, Dearborn
Course Description: This course introduces students to the broad and emerging notion of trustworthy artificial intelligence (AI). Beginning with a hands-on introduction to the basics of Deep Neural Networks (DNNs) and modeling, it covers three broad areas of trustworthiness in AI. In the first area, robustness, the course introduces students to the AI threat landscape, focusing on training data poisoning, model evasion, privacy-sensitive data inference, model stealing/extraction, and threats to the safe deployment of AI. In the second area, transparency, students are introduced to frameworks used to interpret/explain AI models' decisions. In the third area, accountability, students learn methods and tools for reducing bias and ethical pitfalls when AI models are deployed in high-stakes application domains. The course concludes with a broader take on AI trustworthiness by studying the dynamics among these three trustworthiness desirables. The course is taught in a predominantly project-based setting to allow students to gain hands-on experience beyond conceptual understanding.
On Prerequisites: While prior knowledge of machine learning (ML) is not required, it will be a plus. To level the ground for everyone, the course will kick off with an ML crash course covering just enough to understand the subsequent material. Students are expected to have proficiency in at least one programming language (e.g., Python, C/C++, Java). Knowledge of data structures such as trees and graphs would be a plus.
Reference Materials: This course does not have a dedicated textbook. However, we will use the following three books as our main references. In addition to these books, the course will rely heavily on influential papers for each topic discussed.
On Scope: While this course is about AI/ML, it does not cover the formalisms or technical details of ML or Deep Neural Networks. Deep learning fundamentals are introduced at the beginning of the course, just enough to grasp subsequent topics. The course is intentionally broad so as to reason about ML trustworthiness beyond ML in the presence of adversaries. It is organized in a manner that expands the focus beyond ML security and privacy to the safety, transparency, fairness, and ethical implications of AI/ML deployed in high-stakes application domains. Given this natural focus on breadth over depth, the emphasis is on representative trustworthiness risks/pitfalls and remedies/best practices, and the dynamics thereof. The AI/ML trustworthiness field is a work in progress with respect to techniques, tools, and regulatory provisions. In light of this ongoing evolution, I plan to update the material to keep up with the collective progress made by academia, industry, government, and public interest technology/policy initiatives.
Take the schedule below as tentative; it will be updated as the semester advances, depending on progress.
© Birhanu Eshete 2024