Regulating AI Tech Is No Longer an Option: It’s a Must!

Author: Niel Harper, CDPSE, CRISC, CISA, CISSP, ISACA Board Director, and Technology and Risk Executive
Date Published: 7 April 2023

The late Stephen Hawking maintained that the development of artificial intelligence could spell the end of the human race.

While my position is not that extreme (nor am I saying Professor Hawking is wrong), I do believe a plethora of social, political and economic costs can manifest if AI development progresses along its present trajectory without strong regulation. These harmful possibilities have been discussed widely in academic and technical communities for years, but it is important to broaden this discourse to promote better awareness and understanding, precisely because AI’s potential is so promising and wide-reaching that the damage could become impossible to reverse. More importantly, it is also time for policymakers and lawmakers to act.

Society as we know it is really good at jumping on bandwagons, especially where high revenue opportunities, novelty and entertainment are involved. AI ticks all of these boxes, hence we have seen the likes of OpenAI and its ChatGPT dominate news cycles in recent months. Simultaneously, investment in AI-based technologies has skyrocketed, with Microsoft, Google, IBM, Intel Capital, SoftBank Group and numerous venture capitalists liberally opening their checkbooks. Before that, blockchain and the metaverse were all the rage, garnering similar attention and investment but delivering limited demonstrated social benefit.

The Risks and Dangers of AI

Most recently, the Italian data protection authority banned the use of the advanced chatbot ChatGPT. The regulator said that the platform had been compromised and that user conversations and payment information were subject to unauthorized access. It also strongly argued that there was no legal basis to support "the mass collection and storage of personal data for the purpose of 'training' the algorithms underlying the operation of the platform."

A couple of days prior, several industry leaders, including Elon Musk, called for work on these types of AI systems to be suspended, expressing fears that current development was out of control.

Questions and discussions around who is developing AI, for what purposes, and what risks and dangers are involved are critical to protecting society against the harms of “bad AI” and to engender digital trust. Some of the key issues are as follows:

  • There is no legal basis for the large-scale processing of personal data in AI platforms.
  • AI systems are being increasingly leveraged in propaganda and fake news.
  • Bad actors are weaponizing AI systems in more sophisticated cyberattacks (e.g., advanced malware, stealthier ransomware, more effective phishing techniques and deepfakes).
  • Thus far, no independent privacy and security audits of AI systems are available to corporate or individual users.
  • The use of autonomous weapons systems powered by AI has become a reality.
  • AI systems are prone to bias and discrimination (garbage in, garbage out), putting minority communities at further risk.
  • The risk of intellectual property abuse and misuse is extremely high.
  • Legislation and regulation are always behind technology advancement (law lag). Currently, there are no rules or general recourse to protect against negative outcomes of AI—especially around liability.
  • There are threats to democracy in terms of amplifying the “echo chambers” and social manipulation already prevalent in many social platforms.
  • AI systems can be used in foreign or corporate espionage.
  • Algorithmic (AI-based) trading can result in future financial crises.
  • There is potential for widespread loss of jobs due to AI automation.

Driving Toward Responsible and Ethical Use of AI

There is no doubt that AI can and will deliver numerous benefits. That being said, the promise of this technology will not be sufficiently realized without a robust framework of laws and regulations. Several countries have been focusing on AI regulation, and the United States and the European Union have seemingly aligned on which measures are most critical to control the unmitigated proliferation of artificial intelligence. While this suggests that some AI technologies may be banned, it does not prevent researchers and corporations from continuing their work in the field. At the same time, AI regulation cannot be knee-jerk and alarmist; it must be pro-innovation and flexible in deciding where AI is beneficial and where it is problematic.

Responsible, ethical use of AI is the key. From a corporate perspective, business leaders need to articulate why they are planning to use AI and how it will benefit individuals. Companies should develop policies and standards for monitoring algorithms, strengthen data governance, and be transparent about the results their AI algorithms produce. Corporate leadership should establish and define company values and AI guidelines, creating frameworks for determining acceptable uses of AI technologies.
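To make the idea of monitoring algorithms slightly more concrete, below is a minimal sketch (in Python) of one check a company might build into such a standard: the widely used four-fifths (disparate impact) heuristic, which compares favorable outcome rates across groups. The group names, sample data and 0.8 threshold are illustrative assumptions for this sketch, not a prescribed standard or any regulator’s specific methodology.

    # Illustrative bias-monitoring check: the "four-fifths rule" disparate impact ratio.
    # Group names, sample data and the 0.8 threshold are assumptions for this sketch,
    # not a prescribed regulatory standard.

    def disparate_impact_ratio(outcomes: dict) -> float:
        """outcomes maps a group name to a list of binary decisions (1 = favorable)."""
        rates = {group: sum(decisions) / len(decisions)
                 for group, decisions in outcomes.items() if decisions}
        return min(rates.values()) / max(rates.values())

    sample = {
        "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 75% favorable outcomes
        "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # 37.5% favorable outcomes
    }

    ratio = disparate_impact_ratio(sample)
    print(f"Disparate impact ratio: {ratio:.2f}")
    if ratio < 0.8:  # common four-fifths heuristic for flagging potential adverse impact
        print("Potential adverse impact detected -- review training data and model.")

Even a simple ratio like this, computed routinely against production decisions, gives audit and risk teams a concrete, trackable signal rather than a vague commitment to fairness.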

Achieving the delicate balance between innovation and human-centered design is the optimal approach for developing responsible technology and ensuring that AI delivers on its promise for this and future generations. Discussions of the risks and harms of artificial intelligence should always be front and center, so that leaders can find solutions that deliver the technology with human, social and economic benefits as core underlying principles.