10 Shocking AI Controversies That Broke the Internet!

The last few years have seen a rapid improvement in various types of artificial intelligence, many of which are now widely used by both businesses and the general public. However, like most disruptive new technologies, the potential benefits of AI must be balanced with the risks it poses. Some are unsure whether this can be done. After all, not many new technologies come with the risk of becoming smarter than the people who created them.
Happily, this has not happened yet, but AI has been at the center of numerous controversies, crimes, and scandals to get to where it is today. There will likely be many more to come, but here are ten of the most interesting and surprising AI controversies so far.
10. The Wizard of Oz Technique
In the late 2010s, several tech firms were exposed for employing humans to do tasks that they claimed—or at least gave the impression—that their cutting-edge AI was doing. Some described this practice as “pseudo-AI,” but others named it the “Wizard of Oz technique” in reference to the moment in the classic film when the curtain is pulled back to reveal that the giant, fiery wizard was really just an old man operating a machine.
Companies caught using the technique included Facebook, whose "M" virtual assistant relied heavily on human workers behind the scenes, the expense management app Expensify, and the scheduling services X.ai and Clara, all of which employed people to do work presented as automated. As early as 2008, a voicemail-to-text conversion service called Spinvox was said to be hiring overseas workers to transcribe the audio instead of using its software.
9. AI Interrogation
The CIA was one of the earliest organizations to look into AI. Papers show that it was carrying out tests using basic AI as long ago as the early 1980s, such as in 1983, when it used a crude piece of software called "Analiza" to try to interrogate one of its agents.
The basic idea was that the program would remember the agent’s answers and then select a fitting reply, threat, or question from a bank it had saved. It was crude by today’s standards but probably more sophisticated than many would think for the time. Like a real interrogator, it would look for the agent’s vulnerabilities.
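The basic idea can be illustrated with a minimal sketch. The actual Analiza code is not public, so the keywords, replies, and matching logic below are purely hypothetical; the sketch only shows the general approach the description suggests: match words in the subject's answer against a saved bank of replies, remember which vulnerabilities have already been probed, and fall back to a generic prompt otherwise.

```python
# Hypothetical 1980s-style reply selection, NOT the real Analiza program.
# The keyword-reply bank and matching rules are invented for illustration.

REPLY_BANK = {
    "money": "So finances are a concern for you. Who else knows about that?",
    "family": "Tell me more about your family. Where are they now?",
    "afraid": "What exactly are you afraid will happen?",
}
FALLBACK = "Go on. I have all the time in the world."

def interrogate(answer, probed=None):
    """Return (reply, probed_topics) for one exchange with the subject."""
    probed = set() if probed is None else probed
    for keyword, reply in REPLY_BANK.items():
        # Probe each suspected vulnerability only once, like a real
        # interrogator working through a subject's weak points.
        if keyword in answer.lower() and keyword not in probed:
            probed.add(keyword)
            return reply, probed
    return FALLBACK, probed

reply, probed = interrogate("I'm worried about money.")
```

Even this toy version shows why the approach felt surprisingly lifelike for its era: by carrying state between exchanges, the program appears to "remember" the conversation rather than reacting to each answer in isolation.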
8. North Korean Job Applications
One country that has been using AI for nefarious ends is North Korea. Its intelligence services are believed to have been using AI to generate thousands of applications for remote jobs in the U.S.
AI automation tools help operatives send out hundreds of job applications under different identities, and some operatives actually land and perform one or more of the jobs. The income is then funneled back to the regime. U.S. officials have said that some of these North Korean workers are making as much as $300,000 a year, which translates to hundreds of millions of dollars for the North Korean regime.
7. Deepfake Scams
One of the most concerning developments in AI is how realistic deepfakes have become. These are videos in which the face, and often the voice and body, of somebody else is digitally superimposed onto the person actually being filmed.
In early 2024, a finance worker at Arup unknowingly sent $25 million to scammers using deepfake technology. He received an email claiming to be from the CFO, followed by a video call with fake AI-generated colleagues. Believing it was legitimate, he transferred the money.
6. The Hollywood “Double Strike”
In 2023, new TV and film releases were put on hold because both writers and actors went on strike, in large part to protect their careers against the existential threat posed by AI. Writers feared a future in which entire scripts would be generated by large language models like ChatGPT.
Actors, meanwhile, feared that studios could scan their likenesses once and reuse them forever. The deal eventually secured by their union required studios to obtain actors' consent before using AI recreations of their images.
5. Copyright Theft
AI models require training materials in the form of words, images, or sounds. However, some AI firms have been accused of disregarding copyright laws by using content without permission.
Many argue that AI training is “fair use.” Microsoft AI head Mustafa Suleyman stated that anything published online should be freely available for AI to use, sparking a major debate about intellectual property rights.
4. Hallucinations
One major AI problem is hallucinations—convincing but false information. AI systems trained on incorrect or biased data sometimes generate entirely fictitious results.
A Canadian lawyer experienced this firsthand in early 2024 when she used ChatGPT to find legal cases, only to realize later that two of them did not exist. The court ruled it was an innocent mistake but warned of AI’s potential to spread misinformation.
3. Hiring and Firing
Many fear that AI will take jobs, but so far, it has primarily been used to replace some workers with others. Amazon has been accused of using AI to track employee productivity and even fire workers automatically if they fall below set targets.
Employees have reported feeling like they are being monitored by a machine rather than a human manager, leading to high stress levels and concerns over fairness in the workplace.
2. Racial Disparity
Facial recognition technology has rapidly improved, but studies show that its accuracy varies by race. The error rate for black faces can be as much as ten times higher than for white faces.
Companies like Amazon, Microsoft, and IBM have faced criticism over biased AI models. In response, cities such as San Francisco have banned government use of facial recognition software.
1. Bad for Well-Being
Although AI is designed to make life easier, studies suggest that it negatively affects mental well-being. A 2024 study showed that people with higher AI exposure reported worse health and happiness levels.
Researchers believe factors like job insecurity and reduced autonomy may contribute to AI’s negative impact. However, they also hope that, like other technologies, AI could become beneficial as people learn to integrate it into their lives more effectively.
Final Thoughts
AI is still in its early stages, yet it has already sparked intense debates, lawsuits, and even international scandals. As technology continues to evolve, it is crucial to balance its benefits with its risks. Stronger regulations, ethical guidelines, and transparency will be key to ensuring that AI works for humanity rather than against it.