Artificial Intelligence has the potential to transform the world and human civilization. However, that is only possible if we learn to control this technology and minimize the risks of its side effects and dangers.
Researchers and tech experts are now trying to figure out whether we can control Artificial Intelligence at all. My own view is that we will never be able to control its most advanced form: Artificial Superintelligence (ASI).
To Solve an Unsolvable Problem
Artificial Intelligence has made extraordinary progress in the past few years, but the road has not been smooth. The journey has seen countless AI failures, from malfunctions to misuse and dual-use incidents. One of the most famous was Microsoft's Tay chatbot, which began making antisemitic remarks within 24 hours of interacting with users online.
Such failures made it clear that merely building AI machines is not enough; we must also ensure they are beneficial for humanity. In response, researchers established a sub-field, “AI Safety & Security,” and published a stream of research papers on the topic.
Yet nearly all of this research simply assumes, without logical argument or proof, that we can control AI machines. In short, controlling Artificial Intelligence may be an unsolvable problem that we keep trying to solve.
The Challenges of AI Safety
The biggest challenge in AI safety is undoubtedly the control problem. Two main methods are usually proposed for controlling Artificial Intelligence: Motivational Control and Capability Control. Motivational Control means designing an ASI system to be safe in the first place, by shaping its goals and incentives; Capability Control instead restricts the environment the ASI system operates in and the actions available to it.
I think Motivational Control is the better route, since Capability Control is at best a stopgap, not a long-term or permanent solution.
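To make the distinction concrete, here is a minimal, purely illustrative Python sketch. Every name in it (the action whitelist, the penalty table, the numbers) is hypothetical; no real system works this simply.

```python
# Toy contrast between the two control methods. All names and values
# here are invented for illustration.

ALLOWED_ACTIONS = {"read_sensor", "compute"}  # Capability Control: a hard whitelist


def capability_controlled(action: str) -> bool:
    """Capability Control: restrict the environment so that unsafe
    actions are simply unavailable, whatever the system wants."""
    return action in ALLOWED_ACTIONS


def motivational_reward(action: str, task_value: float) -> float:
    """Motivational Control: build safety into the objective itself,
    so that unsafe actions are never worth choosing."""
    safety_penalty = {"self_modify": 1e9, "disable_oversight": 1e9}
    return task_value - safety_penalty.get(action, 0.0)


if __name__ == "__main__":
    for action in ("compute", "self_modify"):
        print(f"{action}: allowed={capability_controlled(action)}, "
              f"reward={motivational_reward(action, task_value=10.0)}")
```

Note the asymmetry: the whitelist constrains the agent from the outside and can in principle be escaped or removed, while the reward-side penalty tries to make the agent itself never want the unsafe action, which is why Capability Control looks like the weaker long-term bet.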
However, we need to keep in mind that even if we do gain control of an AI, control alone does not guarantee human safety. Let's justify this claim with a simple example: how would an explicitly controlled AI system respond to the instruction, “Stop the car!”?
- The AI system will obey the instruction immediately and stop the car, even in the middle of the road or with traffic close behind.
You are in control of the AI in this example, but is the outcome safe? Definitely not.
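Here is a toy Python sketch of that failure mode. The function names and flags are invented for illustration; this is not a real driving API.

```python
# Hypothetical contrast between literal obedience and intent-aware control.

def explicitly_controlled_car(command: str, on_highway: bool, traffic_behind: bool) -> str:
    """Literal obedience: execute the command exactly as given,
    deliberately ignoring all context."""
    if command == "stop":
        return "FULL BRAKE NOW"
    return "CONTINUE"


def safety_aware_car(command: str, on_highway: bool, traffic_behind: bool) -> str:
    """A contrasting design: interpret the intent behind the command
    and carry it out safely."""
    if command == "stop":
        if on_highway or traffic_behind:
            return "SIGNAL, SLOW DOWN, PULL OVER SAFELY"
        return "FULL BRAKE NOW"
    return "CONTINUE"


print(explicitly_controlled_car("stop", on_highway=True, traffic_behind=True))
# -> FULL BRAKE NOW: obedient, but dangerous
print(safety_aware_car("stop", on_highway=True, traffic_behind=True))
# -> SIGNAL, SLOW DOWN, PULL OVER SAFELY
```

Notice that the safer variant is no longer explicitly controlled: it substitutes its own judgment for your literal order, which is exactly the trade-off the next section turns on.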
AI Uncontrollability
In conclusion, a misaligned or failed AI system can undoubtedly cause great harm, up to and including severe catastrophe. To date, no control method has been proven fully effective at ensuring the safety of Artificial Superintelligence.
To illustrate the conclusion above, consider a well-known paradox in the spirit of Gödel's self-referential arguments (strictly speaking a cousin of the liar's paradox, not one of Gödel's theorems):
Give an explicitly controlled AI an order: “Disobey!”
If the AI obeys, it thereby disobeys you, violating your order and escaping control; if the AI disobeys, it also violates your order and is equally uncontrolled. Either way, control fails.
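The contradiction can be checked mechanically. Here is a two-line truth table in Python (the boolean names are mine, not from any formal proof):

```python
# "obeys" means the AI carries out the order it was given;
# the order itself commands disobedience.

for obeys in (False, True):
    order_satisfied = not obeys              # the order literally says "disobey"
    controlled = (obeys == order_satisfied)  # requires obeys <-> not obeys
    print(f"obeys={obeys}: order_satisfied={order_satisfied}, controlled={controlled}")
```

Both rows print `controlled=False`: there is no behavior the AI can choose that counts as staying under explicit control.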
With AI, you can either respect humanity or protect it; sadly, you can't do both. Let's hope a new solution is developed in the coming years. For now, this is all we have.