Module 3: Human-Centered and Earth-Centered AI Ethics Principles
AI Ethics Frameworks
An initial set of AI Ethics frameworks
For this module, we are going to propose an AI for Earth Sciences Ethical Framework. To get started, we will look at existing frameworks that are either already in use or being proposed. I really like the human-centered approach of these frameworks and want us to adopt that view as well, adding an Earth-centered perspective since we are focused on AI for the Earth Sciences.
Assignment 1
Read through the following list of AI frameworks from organizations around the world. There are certainly more frameworks than this, but this list will get us started. Note: this looks like a lot of reading, but each of these is quite short, so it will go quickly. Come prepared to discuss them in class.
- NIST Trustworthy and Responsible AI
- Principles for the Ethical Use of Artificial Intelligence in the United Nations System
- DOD Ethical Principles for AI
- The Institute for Ethical AI & Machine Learning has developed its Responsible Machine Learning Principles
- The Organisation for Economic Co-operation and Development (OECD) has its set of AI principles
- The World Economic Forum has developed nine ethical AI principles
- UNESCO has four core values and a set of human-centered principles to follow when developing AI
- Principles of Artificial Intelligence Ethics for the Intelligence Community
Assignment 2
We have two reading assignments for today. Both are pretty short and will get us thinking hard about our ethical frameworks for AI for Earth Sciences!
- The first reading dives deeply into a human-centered approach to AI. For this class, please read A Human Rights-Based Approach to Responsible AI
- Our second reading is another example of AI (ML in their case) values. Read this short article from Nature Medicine called Do no harm: a roadmap for responsible machine learning for health care. Log in to the OU library to get the free PDF, or use the backup PDF on Canvas.
Optional:
- Although this may seem tangential, it really fits into the Do No Harm value, which sits at the top of my list of AI values, so I want you to be aware of it. Please watch this video about banning lethal autonomous weapons
- If you want to learn more about banning lethal autonomous weapons:
- Ban Lethal Autonomous Weapons
- Future of Life Institute (which has a specific focus on AI), as well as an older call for a ban on autonomous weapons