Artificial intelligence (AI) continues to move to the forefront of technological innovation, offering countries the promise of highly optimized industries, booming economies, and overall improved quality of life.
However, with great power comes great responsibility. AI has proven capable across many fields, from healthcare and administration to art and entertainment. Yet these same innovations raise serious concerns about job displacement, privacy leaks, and even art theft.
Following this surge of concerns, governments around the world are racing to regulate AI in an attempt to balance its benefits against its risks.
For example, the European Union (EU), one of the key players in AI research and development, has taken a layered approach to AI regulation. The EU's AI Act, widely described as the world's first comprehensive legal framework for AI, requires that AI systems be analyzed and categorized according to the level of risk they pose.
Rules such as these establish obligations for providers and users that scale with the potential risk, aiming to ensure that AI systems do not threaten public safety while also protecting citizens against abuse of the technology by authorities. Even AI systems that pose only minimal risk must still be assessed.
According to the European Parliament, risks that are deemed unacceptable are “cognitive behavioural manipulation of people or vulnerable groups, classifying people based on behavior or economic status, and biometric identification and categorization.”
On the other hand, while the EU focuses on protecting human rights and ensuring corporate transparency, Britain in particular has raised alarms with its plans to develop AI-driven robotic weaponry: in other words, so-called killer robots that require almost no human control.
In response, the UK government released a statement saying that it does not currently possess any fully autonomous weapon systems (FAWS) and has no intention of developing them. However, many of its current projects involve technology with a high potential to be repurposed for FAWS.
“Honestly, I didn’t even know something like that was in the works,” said senior Emily Shanks. “But that kind of makes it even more terrifying to learn about now.”
Additionally, the UK has not supported United Nations proposals to ban these types of weapons.
According to the United Nations Association, Lord Clement-Jones, a member of the UK ‘Stop Killer Robots’ campaign, believes that the UK government’s refusal to rule out the possibility of lethal AI weaponry deployment places them “at odds with almost 70 countries, thousands of scientists […] and the UN secretary-general.”
Various other tech leaders have expressed their own wariness over AI.
Tesla CEO Elon Musk has publicly called for a pause on large-scale AI development, while Geoffrey Hinton, known as one of the "godfathers of AI," has said that AI may pose a bigger threat to humanity than climate change.
“I think it’s so telling that so many tech experts are giving out warnings about AI,” said Shanks. “I mean, really. Would Elon Musk pass up on something that would probably make him a lot of money without reason?”
Nations around the globe have been scrambling to tackle AI's risks, even as U.S. lawmakers have openly admitted that they barely understand how the technology works. In response, President Joe Biden issued an executive order establishing a government-wide approach to the safe, secure, and trustworthy development of AI.
“I think that the possibilities of AI innovation are scary to think about,” said senior Giovanni Clemente. “I didn’t really believe it could ever get to this point, and it’s mind-blowing seeing it happen real time.”
As the global competition to seize AI's benefits intensifies, calls for regulation have grown louder. Many are pleading with regulators to ensure that AI is trustworthy, ethical, and legal, but even lawmakers are struggling to keep pace with AI's rapid evolution.
“It’s insane seeing AI go so wrong, so quick,” said Clemente. “Especially with how easily accessible creating deepfake images is now. No one is safe. Not even celebrities. And the recent scandal around AI-generated deepfakes of Taylor Swift proves that. If even billionaires are having trouble protecting themselves against it, how will regular civilians do against this kind of abuse?”
Nevertheless, staying educated about technologies such as AI remains important, given that they will more than likely become even more widespread in the future.