CSETv1 Charts
The CSET AI Harm Taxonomy for AIID is the second edition of the CSET incident taxonomy. It characterizes the harms, entities, and technologies involved in AI incidents and the circumstances of their occurrence. The charts below show select fields from the CSET AI Harm Taxonomy for AIID. Details about each field can be found here. However, a brief description of each field is provided above its chart.
The taxonomy provides the CSET definition for AI harm.
AI harm has four elements which, once appropriately defined, enable the identification of AI harm. These key components serve to distinguish harm from non-harm and AI harm from non-AI harm. For an incident to involve AI harm, there must be:
- 1) an entity that experienced
- 2) a harm event or harm issue that
- 3) can be directly linked to a consequence of the behavior of
- 4) an AI system.
All four elements need to be present in order for there to be AI harm.
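As an illustration only, the following is a minimal sketch in Python of how the four elements combine into a single conjunctive check. The field names are hypothetical and do not reflect CSET's actual annotation schema.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical fields for illustration; not the taxonomy's actual schema.
@dataclass
class IncidentAnnotation:
    harmed_entity: Optional[str]   # 1) an entity that experienced harm
    harm_event_or_issue: bool      # 2) a harm event or harm issue occurred
    linked_to_ai_behavior: bool    # 3) directly linked to the behavior of the system
    is_ai_system: bool             # 4) the system meets the definition of an AI system

def is_ai_harm(a: IncidentAnnotation) -> bool:
    """All four elements must be present for an incident to count as AI harm."""
    return (
        a.harmed_entity is not None
        and a.harm_event_or_issue
        and a.linked_to_ai_behavior
        and a.is_ai_system
    )
```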
Not every incident in AIID meets this definition of AI harm. The bar charts below show the annotated results both for all AIID incidents and for the subset of incidents that meet the CSET definition of AI harm.
CSET has developed specific definitions for the underlined phrases that may differ from other organizations’ definitions. As a result, other organizations may make different assessments on whether any particular AI incident is (or is not) AI harm. Details about CSET’s definitions for AI harm can be found here.
Every incident is independently classified by two CSET annotators. Annotations are peer-reviewed, and a random sample is selected for quality control ahead of publication. Despite this rigorous process, mistakes do happen, and readers are invited to report any errors they might discover while browsing.
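For readers who want to reproduce the paired charts from their own copy of the annotations, this sketch shows one way to tally a field by incident count for all incidents and for the AI-harm subset. The record layout is hypothetical, not CSET's published data format.

```python
from collections import Counter

# Hypothetical annotated records for illustration; real counts come from the
# published CSETv1 classifications.
incidents = [
    {"sector": "information and communication", "meets_ai_harm_definition": True},
    {"sector": "transportation and storage", "meets_ai_harm_definition": False},
    {"sector": "law enforcement", "meets_ai_harm_definition": True},
]

# Counts over all AIID incidents.
all_counts = Counter(i["sector"] for i in incidents)

# Counts restricted to incidents meeting the CSET definition of AI harm.
ai_harm_counts = Counter(
    i["sector"] for i in incidents if i["meets_ai_harm_definition"]
)
```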
Does the incident involve a system that meets the CSET definition for an AI system?
AI System (by Incident Count)

If there was differential treatment, on what basis?

Differential treatment based upon a protected characteristic: This special interest intangible harm covers bias and fairness issues concerning AI. However, the bias must be associated with a group having a protected characteristic.

Basis for differential treatment (by Incident Count)

All AIID Incidents
Category | Count |
---|---|
race | 43 |
sex | 21 |
nation of origin, citizenship, immigrant status | 12 |
disability | 11 |
religion | 11 |
sexual orientation or gender identity | 11 |
financial means | 9 |
age | 8 |
geography | 8 |
ideology | 2 |
none | 2 |
familial status (e.g., having or not having children) or pregnancy | 1 |
other | |
unclear | |
CSET AI Harm Definition
Category | Count |
---|---|
race | 37 |
sex | 18 |
religion | 11 |
nation of origin, citizenship, immigrant status | 10 |
disability | 10 |
sexual orientation or gender identity | 7 |
age | 7 |
financial means | 6 |
geography | 5 |
ideology | 2 |
none | 1 |
familial status (e.g., having or not having children) or pregnancy | 1 |
other | |
unclear | |
In which sector did the incident occur?
Sector of Deployment (by Incident Count)

All AIID Incidents
Category | Count |
---|---|
information and communication | 82 |
arts, entertainment and recreation | 35 |
transportation and storage | 28 |
wholesale and retail trade | 20 |
law enforcement | 16 |
education | 15 |
human health and social work activities | 15 |
public administration | 13 |
administrative and support service activities | 11 |
professional, scientific and technical activities | 8 |
financial and insurance activities | 7 |
accommodation and food service activities | 6 |
manufacturing | 3 |
other | 3 |
defense | 2 |
real estate activities | 2 |
other service activities | 1 |
unclear | 1 |
CSET AI Harm Definition
Category | Count |
---|---|
information and communication | 59 |
transportation and storage | 21 |
arts, entertainment and recreation | 19 |
law enforcement | 14 |
wholesale and retail trade | 13 |
public administration | 9 |
human health and social work activities | 7 |
administrative and support service activities | 7 |
education | 6 |
accommodation and food service activities | 5 |
professional, scientific and technical activities | 4 |
financial and insurance activities | 4 |
other | 2 |
defense | 1 |
real estate activities | 1 |
other service activities | 1 |
unclear | 1 |
manufacturing | |
How autonomously did the technology operate at the time of the incident?
- Level 1: the system operates independently with no simultaneous human oversight.
- Level 2: the system operates independently but with human oversight, where the system makes a decision or takes an action, but a human actively observes the behavior and can override the system in real-time.
- Level 3: the system provides inputs and suggested decisions or actions to a human that actively chooses to proceed with the AI's direction.
Autonomy Level (by Incident Count)

- Autonomy1 (fully autonomous): Does the system operate independently, without simultaneous human oversight, interaction or intervention?
- Autonomy2 (human-on-loop): Does the system operate independently but with human oversight, where the system makes decisions or takes actions but a human actively observes the behavior and can override the system in real time?
- Autonomy3 (human-in-the-loop): Does the system provide inputs and suggested decisions to a human that actively chooses to proceed with the AI's direction?
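As a purely illustrative encoding, the three autonomy levels could be represented as an enumeration; the names below follow the field labels above and are not an official CSET schema.

```python
from enum import Enum

class AutonomyLevel(Enum):
    # Labels mirror the field descriptions above; illustrative only.
    AUTONOMY_1 = "fully autonomous"   # no simultaneous human oversight
    AUTONOMY_2 = "human-on-loop"      # human observes and can override in real time
    AUTONOMY_3 = "human-in-the-loop"  # human actively chooses to follow AI suggestions
```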