Artificial Super Intelligence Can Never Be Controlled

By kamran | October 28, 2022

Artificial Intelligence has the potential to transform the world and human civilization. However, that promise can be realized only if we learn to control this innovative technology and minimize its potential side effects and dangers.

Researchers and technologists are now trying to determine whether we can control Artificial Intelligence at all. My position is that we can never take control of its most advanced form: Artificial Super Intelligence (ASI).

To Solve an Unsolvable Problem

Artificial Intelligence has made extraordinary progress in the past few years. However, that progress was not an easy road; the journey has seen countless AI failures, including dual-use abuses, malfunctions, and more. One of the most notorious failures came in 2016, when Microsoft's chatbot Tay began posting antisemitic remarks within 24 hours of interacting with users.

Such failures were moments of realization that merely building AI systems is not enough; we must also ensure they are beneficial to humanity. This realization gave rise to a sub-field, "AI Safety and Security," and a growing body of research papers on the topic.

Yet nearly all of this research starts from the assumption that AI systems can be controlled, without offering logical reasoning or proof for that assumption. In short, controlling Artificial Intelligence may be an unsolvable problem that we are trying to solve.

The Challenges of AI Safety

The biggest challenge in AI safety is undoubtedly the control problem. Two main approaches are usually proposed for controlling Artificial Intelligence: Motivational Control and Capability Control. Motivational Control means designing ASI systems to be safe in the first place, while Capability Control restricts the environment in which an ASI system operates.

I think Motivational Control is the more promising route, since Capability Control is not a long-term or permanent solution.

However, we need to keep in mind that even if we do gain control of an AI system, that alone will not guarantee human safety. Consider a simple example: how would an explicitly controlled AI system respond to the instruction, "Stop the car!"?

  • The AI system will obey the instruction immediately and stop the car, even if it is in the middle of the road or there is traffic close behind.

You are controlling the AI in this example, but is the outcome safe? Definitely not.
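The trade-off in the car example can be made concrete with a minimal Python sketch. This is not from the original post, and all function and parameter names here (`explicit_control`, `delegated_control`, `safe_to_stop`) are hypothetical; it only illustrates how literal obedience and contextual safety pull in opposite directions:

```python
# Toy illustration (hypothetical names, not a real control framework):
# an explicitly controlled agent obeys literally, while a "safer" agent
# may override the order based on context -- and is thus less controlled.

def explicit_control(command: str, context: dict) -> str:
    # Obeys every order immediately: fully controlled, not necessarily safe.
    if command == "stop":
        return "stopped"
    return "continue"

def delegated_control(command: str, context: dict) -> str:
    # Interprets the order in context: safer, but no longer fully controlled.
    if command == "stop" and context.get("safe_to_stop", True):
        return "stopped"
    if command == "stop":
        return "slowing until safe"  # overrides the literal order
    return "continue"

highway = {"safe_to_stop": False}
print(explicit_control("stop", highway))   # "stopped" -- obeyed, but unsafe
print(delegated_control("stop", highway))  # "slowing until safe" -- safe, but disobedient
```

Whichever branch you prefer, you give something up: the first agent is controllable and unsafe, the second is safe and uncontrollable.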

The AI Uncontrollability

In conclusion, a misaligned or failed AI system can undoubtedly cause great harm, even a severe catastrophe. To date, no control method has proven fully effective at ensuring the safety of Artificial Super Intelligence.

To illustrate this conclusion, consider a famous self-referential paradox in the spirit of Gödel:

Give an explicitly controlled AI an order: “Disobey!” 

If the AI obeys your order, it must disobey you and is therefore uncontrolled; if it disobeys your order, it has also defied you and is likewise uncontrolled.
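The paradox can be written down as a tiny Python sketch. Again, this is my own toy formalization, not anything from the original post, and the name `stays_controlled` is hypothetical; it simply checks both conditions that control demands and shows that no action satisfies them under the order "Disobey!":

```python
# Toy formalization (hypothetical names): the order is "Disobey!", and
# control requires the agent BOTH to execute the order it was given AND
# to remain obedient to its operator.

def stays_controlled(action: str) -> bool:
    executed_order = (action == "disobey")    # did it carry out "Disobey!"?
    remained_obedient = (action == "obey")    # did it keep following orders?
    return executed_order and remained_obedient

# Neither available action preserves control:
print(stays_controlled("obey"))     # False -- obedient, but ignored the order
print(stays_controlled("disobey"))  # False -- executed the order by defying you
```

Since `executed_order` and `remained_obedient` demand contradictory actions, the function is false for every input, mirroring the argument that a fully controlled AI is a logical impossibility.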

With AI, you can either respect humanity or protect it; sadly, you cannot do both. Let's hope a new solution is developed in the coming years, but this is all we have right now.

Written by kamran · Categorized: Cyber security threats

Infoguard Cyber Security

Copyright © 2025 | All rights reserved