OpenAI has called on the world to start thinking about the governance of superintelligence.
Superintelligence here refers to AI systems that are dramatically more capable than even artificial general intelligence.
In a blog post released on May 22, 2023, the company behind the now-popular AI chatbot ChatGPT said it is projecting a future (within approximately the next ten years) where “AI systems will exceed expert skill level in most domains, and carry out as much productive activity as one of today’s largest corporations”.
The post goes on to say that superintelligence will be more powerful than any technology human civilization has ever contended with.
Risk management should be the key focus right now, while ensuring that the benefits artificial intelligence can offer are not stifled by unnecessary regulation.
According to OpenAI, it is important to be proactive rather than merely reactive, as waiting to react could squander the good that artificial intelligence makes possible.
What should be done to ensure effective governance?
These are the key areas that OpenAI suggests in the blog post as a good starting point:
Coordination among leading developers
This refers to coordination among leading development companies and individuals, to ensure that systems with the potential to eventually lead to superintelligence are developed safely and integrated smoothly into society.
To get this done in an orderly manner, OpenAI proposes coordination at the government level, where major governments come together to set up a core project that all development efforts geared towards superintelligence become part of. Alternatively, the leading development initiatives could collectively agree on an annual limit to capability growth that should not be surpassed.
International Agency
Under this, OpenAI suggests an international artificial intelligence agency similar to the International Atomic Energy Agency, which would oversee all global development efforts geared towards superintelligence.
Such an agency would ensure that any AI project whose capability exceeds a certain threshold is subjected to checks and balances, so that it is developed, operated, and maintained within a regulated framework that guarantees safety.
However, OpenAI observes that the mandate of such an agency should be restricted to a framework that checks and prevents the existential risks of superintelligence, not smaller matters that can be dealt with at the country or company level. This matters because while we want to make sure AI does not pose existential risks, we also do not want to stifle progress that could benefit the world. Imagine, for example, if the internet had been policed so heavily that the technologies and platforms the world enjoys today had been stifled.
Technical capabilities
This, OpenAI says, is essential to making superintelligence safe. The big picture here is to have a technical framework upon which safe AI projects can be built.
It is all about aligning artificial intelligence with human values. OpenAI has already set up a dedicated alignment research effort on this front.
If left unaligned, AGI is likely to pose significant dangers to humanity.
Even as OpenAI calls for governance at the highest level, the company has also been keen to point out that it is important to be careful about what to regulate.
For example, it would not make sense to burden startups, smaller companies, and individual developers with requirements such as licenses and audits. Doing so risks losing sight of the bigger picture of artificial intelligence, which is to develop systems that bring value to society.
Let’s not forget that AI has a lot of good to offer. Imagine if AI could diagnose diseases more accurately, help manufacture better drugs, or improve farming and agricultural yields. Stifling such progress would be a huge setback for a technology that already promises so much good.
Public participation is also important; after all, what is at stake is the impact of AI on the public and society in general.
To this end, OpenAI observes that individual users should have a say in, and control over, how the specific AI systems they use behave.
At Teqnamo, we believe the ultimate drive of any AI system should be to solve existing challenges and improve society in general. For example, how can AI be deployed to end the thorny issue of world hunger? It remains ironic that the world produces plenty of food, yet many people go to bed hungry in some parts of the world while significant amounts of food go to waste in others. Many people around the world cannot afford decent accommodation and are condemned to live in squalid environments. How can AI help solve such challenges? The same question applies to issues such as human-wildlife conflict, water and air pollution, and climate-change-induced drought: how can AI be deployed there? And so much more.