"We are made wise not by the recollection of our past, but by the responsibility for our future."
- George Bernard Shaw
The course revolves around the theoretical and technical challenges of creating artificial intelligence (AI) systems that are safe to deploy and aligned with human values. This includes a series of questions at the intersection of philosophy and AI, especially concerning how highly capable AI could shape the future of humanity. We will study the conceptual framework relevant to AI risk, with questions about intelligence and the pursuit of goals, as well as the ethics of creating powerful AI and our moral responsibility toward our collective future. We will examine recent developments in Large Language Models (LLMs) and consider possible near-term and long-term scenarios, focusing on the possibility and likelihood of an intelligence explosion (the Singularity) and the potential of Artificial General Intelligence (AGI) and Superintelligence.
No prerequisites. Familiarity with Intro to Philosophy material is recommended.
Asynchronous, online.
Please email me at [email protected] if you wish to schedule a Zoom meeting.
All resources are available in the "Content" section of Brightspace. You don't need to purchase any books for this course, but if you wish to do so, I recommend "Human Compatible" by S. Russell or "Ethics of Artificial Intelligence" edited by S. M. Liao. Several of the readings are chapters from these two books.
| Assignments | Due | Grading |
|---|---|---|
| Self-Reviews | Every Friday | 20% |
| Mid-Term | October 16th | 40% |
| Final | December 18th | 40% |
Make copies of this document and use them for your weekly self-reviews. The aim is to track your progress and organize your notes for each week of the course. Submit your self-review every Friday.