Artificial Intelligence is big, and it’s only going to get bigger. No doubt about that.
The worrying part, though, is that if AI gets out of hand, the consequences could be tragic. That sentiment is shared by many people, both inside and outside the AI industry.
One of the notable voices calling for checks on the ugly side of Artificial Intelligence is Sam Altman, the CEO of OpenAI, the company behind ChatGPT, the chatbot built on its powerful large language models.
Appearing before US lawmakers on 17th May 2023, Altman said there is a need to enact regulations around AI. He particularly singled out large models, pointing to their extraordinary capabilities.
The ability of these models to manipulate, to persuade, and to provide a sort of one-on-one interactive communication is a worry, Altman said.
But he also said that people adapt quickly over time, just as they have with previous technologies that raised concerns in their early days. He gave the example of Photoshop: when it was introduced, many people were concerned about its ability to manipulate digital media. Over time, though, people learned to spot the differences and single out photoshopped media such as images.
This was Altman’s first time testifying before the US Congress. Unlike most of his fellow Silicon Valley techies, who have previously appeared over tense topics like privacy, Altman’s appearance was more diplomatic.
The New York Times reported that Altman also shared dinner with a few House members and held a private session with a select group of lawmakers before proceeding to the testimony.
Many companies, both established corporates and startups, are racing to pioneer Artificial Intelligence products and services. While the likes of Google and Microsoft are leading the way on the big stage, many mid- and lower-tier players are also aggressively jumping on the Artificial Intelligence bandwagon.
There is every sign that every industry will have its moment with AI. From healthcare to finance and everything in between, it’s only a matter of time before AI is woven into our daily lives.
What’s most likely to happen is disruption both at the highest levels of technology and across industries and the niches within them. The best way to view AI is as a technology that will permeate almost everything we do. Once AI is incorporated into everyday tasks, the repercussions of leaving the technology unchecked will become clear.
This is why it’s important to get legislation in place early enough to prevent unforeseen catastrophes. Left unregulated, AI can be exploited by bad actors to cause chaos in society.
According to Mr. Altman, the impact will be huge, and mitigating it will require a partnership between government and industry.
He gave a few pointers on where the government can start. One is putting in place licensing and testing requirements for AI models above a certain threshold of capability.
What is clear now is that while AI has many positives, it also has the potential to cause harm, and it is this negative side of Artificial Intelligence that needs to be checked through regulation.