
The AI Incident Database is a community-driven, open-source project dedicated to documenting harms caused by artificial intelligence systems.
The project began in early 2019, as indicated by the first entry in its GitHub repository. Since then it has recorded more than 1,000 AI incidents. The highest annual total, 265 incidents, came in 2024; only halfway through 2025, the count already stands at 163, the second-highest annual figure to date.
What are AI incidents?
Artificial intelligence, better known as AI, is the capability of machines to perform tasks once thought to require human intelligence, such as recognising patterns, understanding language and drawing inferences. AI incidents, then, are events in which an AI system is implicated in alleged harm, or near-harm, to people, property or the environment. For example, in March 2016, Microsoft released its chatbot Tay and removed it within 24 hours after it posted a number of racist and sexist tweets.
Deepfakes on the rise
Since 2014, up to its latest update, the database has recorded 190 incidents (17.2 per cent of the total) involving deepfakes, audio or visual content that superimposes one person's face or voice onto another's in a way that appears real. Since the start of 2023, deepfakes have accounted for almost a third of the AI incidents reported annually.
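As a rough sanity check on the figures above (a sketch using only the numbers reported in this article, not the database's own totals), the deepfake share implies an overall incident count consistent with the "more than 1,000" figure:

```python
# Numbers taken from the text above; the database's exact totals may differ.
deepfake_incidents = 190
deepfake_share = 17.2 / 100  # deepfakes as 17.2 per cent of all incidents

# Implied total number of incidents in the database
implied_total = deepfake_incidents / deepfake_share
print(round(implied_total))  # roughly 1,100, consistent with "more than 1,000"
```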
Here are some of this year's cases:
- February 15, 2025: A teenager in Palma de Mallorca, Spain, allegedly generated deepfake nudes of some of his classmates and reportedly shared the altered images with others.
- March to May 2025: AI-generated deepfake music videos were uploaded across a reported 127 YouTube channels, depicting global celebrities praising Burkina Faso President Ibrahim Traore and endorsing military rule.
- June 13, 2025: A deepfake video of Bulgarian tennis player Grigor Dimitrov allegedly circulated on social media, promoting a fraudulent stock and cryptocurrency trading programme.
Concerns over AI use
In the Global Risks Report 2025, the World Economic Forum asked respondents around the world to rate the severity of a list of potential risks over two-year and 10-year horizons. Adverse outcomes of AI technology ranked 31st worldwide over the two-year horizon. Over the 10-year horizon, however, it climbed to sixth place, and in the Middle East and North Africa region it ranked second, an indication of the concerns the region has about certain aspects of AI technology.

