Computer Science 227 - Neural Safety Net
M/W | 10:05 AM - 11:20 AM
AI systems are increasingly deployed in high-stakes settings, from healthcare to autonomous vehicles, raising critical questions about their safety and reliability. This course explores what it means for an AI system to be safe, examining different definitions of safety and how they can be formally specified. Students will learn to express safety properties using precise mathematical language and explore techniques from machine learning and automated reasoning to verify and enforce these properties. By engaging with real-world examples and theoretical frameworks, students will develop a deeper understanding of the challenges in building trustworthy AI and the tools available to address them.
Requisite: COSC-111. Fall semester. Professor Wu.
How to handle overenrollment: Priority will be given to majors.
Students who enroll in this course will likely encounter and be expected to engage in the following intellectual skills, modes of learning, and assessment: quantitative work and group projects.