Artificial intelligence (AI) is a powerful technology that can bring many benefits to society. However, AI also poses significant risks and challenges that need to be addressed with caution and responsibility. In this article, we explore two questions: “What are the dangers of artificial intelligence?” and “Does regulation offer a solution?”
The possible dangers of artificial intelligence have been making headlines lately. First, Elon Musk and several experts called for a pause in the development of AI. Given how much progress has been made recently, they were concerned that we could lose control over AI and that it could pose a genuine risk to society. A second group of experts replied that Musk and his co-signatories were severely overstating the risks involved and labelled them “needlessly alarmist”. But then a third group, which included people like Geoffrey Hinton, often called the godfather of AI, again warned of the dangers of artificial intelligence. They even explicitly stated that AI could lead to the extinction of humankind.
Since those three groups stated their views, many articles have been written about the dangers of AI, and the calls to regulate AI have become louder than ever before. We published an article on initiatives to regulate AI in October 2022, and several countries have since launched initiatives of their own.
What are the dangers of artificial intelligence?
So, what are the dangers of artificial intelligence? As with any powerful technology, AI can be put to nefarious use: it can be weaponized and exploited for criminal purposes. But even the proper use of AI carries inherent risks and can lead to unwanted consequences. Let us have a closer look.
A lot of attention has already been paid in the media to the errors, misinformation, and hallucinations of artificial intelligence. Tools like ChatGPT are programmed to sound convincing, not to be accurate. They get their information from the Internet, which contains a lot of information that is not correct, and their answers reflect this. Worse, because such tools are designed to produce an answer whenever they can, they sometimes simply make things up. Such instances have been called hallucinations. In a US lawsuit, for example, a lawyer had to admit that the precedents he had quoted did not exist and had been fabricated by ChatGPT. In a previous article on ChatGPT, we warned that any legal feedback it gives must be double-checked.
As soon as ChatGPT became available, cybercriminals started using it to their advantage. A second set of dangers therefore has to do with cybercrime and cybersecurity threats: AI can be exploited by malicious actors to launch sophisticated cyberattacks. This includes using AI algorithms to automate and enhance hacking techniques, identify vulnerabilities, and breach security systems. Phishing attacks have also become more sophisticated and harder to detect.
AI can also be used for cyber espionage and surveillance: AI can be employed for sophisticated cyber espionage activities, including intelligence gathering, surveillance, and intrusion into critical systems. Related to this is the risk of invasion of privacy and data manipulation. AI can collect and analyse massive amounts of personal data from various sources, such as social media, cameras, sensors, and biometrics. This can enable AI to infer sensitive information about people’s identities, preferences, behaviours, and emotions. AI can also use this data to track and monitor people’s movements, activities, and interactions. This can pose threats to human rights, such as freedom of expression, association, and assembly.
Increased use of AI will also lead to the loss of jobs through automation. AI can perform many tasks faster and cheaper than humans, which can lead to unemployment and inequality. An article on ZDNet estimates that AI could automate 300 million jobs, putting approximately 28% of current jobs at risk.
There is also a risk of loss of control. As AI systems become more powerful, there is a risk that we lose control over them, which could lead to AI systems making decisions that are harmful to humans, such as launching a nuclear attack or starting a war. This loss of control is also a major concern when it comes to the weaponization of AI. As the technology advances, there is a worry that it could be weaponized by state or non-state actors. Autonomous weapon systems equipped with AI could potentially make lethal decisions without human intervention, raising significant ethical and humanitarian concerns.
We already mentioned errors, misinformation, and hallucinations. Those are involuntary side-effects of AI. A related danger of AI is the deliberate manipulation and misinformation of society through algorithms. AI can generate realistic and persuasive content, such as deepfakes, fake news, and propaganda, that can influence people’s opinions and behaviours. AI can also exploit people’s psychological biases and preferences to manipulate their choices and actions, such as online shopping, voting, and dating.
Generative AI tends to use existing data as the basis for creating new content, which can raise issues of intellectual property infringement. (We briefly discussed this in our article on generative AI.)
Because AI learns from large datasets, another inherent risk is bias and discrimination. If the training data contains biases, AI can amplify and perpetuate them. This poses a significant danger in areas such as hiring practices, lending decisions, and law enforcement, where biased AI systems can lead to unfair outcomes. And if AI technologies are not accessible or affordable for all, they could exacerbate existing social and economic inequalities.
Related to this are ethical implications. As AI systems become more sophisticated, they may face ethical dilemmas, such as decisions involving human life or the prioritization of certain values. Think, e.g., of self-driving vehicles when an accident cannot be avoided: do you sacrifice the driver if it means saving more lives? It is crucial to establish ethical frameworks and guidelines for the development and deployment of AI technologies. Encouraging interdisciplinary collaboration among experts in technology, ethics, and philosophy can help navigate these complex ethical challenges.
At present, there is insufficient regulation regarding the accountability and transparency of AI. As AI becomes increasingly autonomous, accountability and transparency become essential to address its potential unintended consequences. In a previous article on robot law, we asked who is accountable when, for example, a robot causes an accident. Is it the manufacturer, the owner, or – as AI becomes more and more autonomous – could it be the robot itself? Similarly, when ChatGPT provides false information, who is liable? In the US, Georgia radio host Mark Walters found that ChatGPT was spreading false information about him, accusing him of embezzling money. He is now suing OpenAI, the creators of ChatGPT.
As the abovementioned example of the lawyer quoting non-existent precedents illustrates, there is also a risk of dependence and overreliance: relying too heavily on AI systems without proper understanding or human oversight can lead to errors, system failures, or the loss of critical skills and knowledge.
Finally, there is the matter of superintelligence that several experts warn about. They claim that the development of highly autonomous AI systems with superintelligence surpassing human capabilities poses a potential existential risk. The ability of such systems to rapidly self-improve and make decisions beyond human comprehension raises concerns about control and ethical implications. Managing this risk requires ongoing interdisciplinary research, collaboration, and open dialogue among experts, policymakers, and society at large. On the other hand, one expert argued that it is baseless to assume that superintelligent AI will automatically become destructive just because it could. Still, the EU initiative includes the requirement to build in a compulsory kill switch that makes it possible to switch the AI off at any given moment.
Does regulation offer a solution?
In recent weeks, several countries have announced initiatives to regulate AI. The EU already had its own initiative. At the end of May, its tech chief Margrethe Vestager said she believed a draft voluntary code of conduct for generative AI could be drawn up “within the next weeks”, with a final proposal for industry to sign up “very, very soon”. The US, Australia, and Singapore have also submitted proposals to regulate AI.
Several of the abovementioned dangers can be addressed through regulation. Let us go over some examples.
Regulations for cybercrime and cybersecurity should emphasize strong cybersecurity measures, encryption standards, and continuous monitoring for AI-driven threats.
To counter cyber espionage and surveillance risks, we need robust cybersecurity practices, advanced threat detection tech, and global cooperation to share intelligence and establish norms against cyber espionage.
Privacy and data protection regulations should enforce strict standards, incentivize secure protocols, and impose severe penalties for breaches, safeguarding individuals and businesses from AI-enabled cybercrime.
To prevent the loss of jobs, societies need to invest in education and training for workers to adapt to the changing labour market and create new opportunities for human-AI collaboration.
Addressing AI weaponization requires international cooperation, open discussions, and establishing norms, treaties, or agreements to prevent uncontrolled development and use of AI in military applications.
To combat deepfakes and propaganda, we must develop ethical standards and regulations for AI content creation and dissemination. Additionally, educating people on critical evaluation and information verification is essential.
Addressing bias and discrimination involves ensuring diverse and representative training data, rigorous bias testing, and transparent processes for auditing and correcting AI systems. Ethical guidelines and regulations should promote fairness, accountability, and inclusivity.
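To make the idea of “rigorous bias testing” slightly more concrete, below is a minimal sketch of one common audit check, the disparate impact ratio (the so-called four-fifths rule), which compares selection rates between groups in a model’s decisions. The dataset, group names, and 0.8 threshold used here are hypothetical illustrations, not requirements taken from any of the regulatory initiatives discussed in this article.

```python
from collections import defaultdict

# Hypothetical model decisions: (group, selected) pairs, e.g. from a hiring model.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def selection_rates(decisions):
    """Compute the share of positive decisions per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, selected in decisions:
        totals[group] += 1
        if selected:
            positives[group] += 1
    return {group: positives[group] / totals[group] for group in totals}

rates = selection_rates(decisions)
# Disparate impact ratio: lowest selection rate divided by the highest.
ratio = min(rates.values()) / max(rates.values())
print(rates)            # {'group_a': 0.75, 'group_b': 0.25}
print(round(ratio, 2))  # 0.33 -- well below the 0.8 "four-fifths" rule of thumb
```

In practice, such a ratio is only one signal among many: auditors typically combine several fairness metrics with a review of the training data and of the decision process itself.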
When it comes to accountability and transparency, regulatory frameworks can demand that developers and organizations provide clear explanations of how AI systems make decisions. This enables better understanding, identification of potential biases or errors, and the ability to rectify any unintended consequences.
At the same time, regulation also has its limitations. While it is important to regulate things like cybercrime or the weaponization of AI, it is also clear that regulation will not put an end to these practices: cybercriminals, by definition, do not care about regulations. And although several types of weapons of mass destruction have been outlawed, they are clearly still being produced and used by various actors. But regulation does help to hold transgressors accountable.
It is also difficult to assess how disruptive the impact of AI will be on society. Depending on how disruptive it is, additional measures may be needed.
Conclusion
We have reached a stage where AI has become so advanced that it will change the world and the way we live. This is already creating issues that need to be addressed. And as with any powerful technology, it can be abused. Those risks, too, need to be addressed. But while we must acknowledge these issues, it should also be clear that the benefits outweigh the risks, as long as we don’t get ahead of ourselves. At present, humans abusing AI are a greater danger than AI itself.
Sources:
- https://www.reuters.com/technology/musk-experts-urge-pause-training-ai-systems-that-can-outperform-gpt-4-2023-03-29/
- https://www.inverse.com/article/34343-a-i-scientists-react-to-elon-musk-ai-comments
- https://www.bbc.com/news/world-us-canada-65452940
- https://www.bbc.com/news/uk-65746524
- https://builtin.com/artificial-intelligence/risks-of-artificial-intelligence
- https://bernardmarr.com/is-artificial-intelligence-dangerous-6-ai-risks-everyone-should-know-about/
- https://www.standaard.be/cnt/dmf20230529_96807942 (“Tien grote gevaren van AI”)
- https://www.standaard.be/cnt/dmf20230421_95596711 (“Advocaat in nauwe schoentjes vanwege ‘precedenten’ gehallucineerd door ChatGPT”)
- https://www.zdnet.com/article/chatgpts-hallucination-just-got-openai-sued-heres-what-happened/
- https://www.zdnet.com/article/ai-could-automate-25-of-all-jobs-heres-which-are-most-and-least-at-risk/
- https://www.zdnet.com/article/singapore-identifies-six-generative-ai-risks-sets-up-foundation-to-guide-adoption/
- https://www.zdnet.com/article/australia-launches-consultative-review-of-artificial-intelligence/
- https://www.reuters.com/technology/eus-vestager-sees-draft-code-conduct-ai-within-weeks-2023-05-31/
- https://www.zdnet.com/article/three-bills-to-regulate-ai-are-swirling-around-congress-heres-what-we-know/