Artificial Intelligence holds the potential to mimic human intelligence beyond what is controllable
Artificial Intelligence generally comes in three types: Artificial Narrow Intelligence (ANI), Artificial General Intelligence (AGI) and Artificial Super Intelligence (ASI). Artificial narrow intelligence is at the moment the most prevalent, used by people to make everyday tasks easier; self-driving cars, Alexa and Siri are a few examples. Tools like ChatGPT lie somewhere between ANI and AGI but cannot yet be considered full-fledged AGI. Artificial narrow intelligence tools perform tasks within specified sets of instructions and parameters; unlike artificial general and artificial super intelligence, they cannot learn on their own or enhance their performance without reprogramming.
The goal of AGI is to act with human intelligence
When it comes to Artificial General Intelligence, the expectation is that an AGI tool will learn, reason and adapt on its own. An AGI system is capable of upgrading itself without reprogramming. If such a scenario is achieved, and that day does not seem far off, it should not merely be scary: it should be a global concern on the level of a pandemic or, worse, human extinction.
We should never reach the ASI level of development
Artificial super intelligence is the category of artificial intelligence that should never be achieved: making computers faster and smarter than humans would mean that machines take over the world. If AI were to become smarter than humans, it could lead to unprecedented catastrophe.
Mitigating risks of extinction from AI should be a global priority
Computer hardware today greatly outperforms its human counterparts (yes, we are soon to become counterparts!) at computational activities. Modern transistors can switch states at least 10 million times faster than human neurons can fire. Researchers are releasing ever more capable AI systems without really understanding how large neural networks work and interact, which makes those systems harder to predict. Even OpenAI admits that it does not at present have a solution to steer or control a superintelligent AI and prevent it from going rogue.
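The transistor-versus-neuron comparison above can be sanity-checked with a quick back-of-the-envelope calculation. The switching and firing rates below are assumed, order-of-magnitude illustrative figures, not measurements:

```python
# Assumed order-of-magnitude rates (illustrative, not measured values):
# a modern transistor switches at roughly gigahertz rates, while a
# biological neuron fires at most a few hundred times per second.
transistor_switch_hz = 1e9  # assumed: ~1 GHz switching rate
neuron_fire_hz = 100        # assumed: ~100 Hz typical firing rate

# Ratio of the two rates: how many times a transistor can switch
# in the time it takes a neuron to fire once.
speed_ratio = transistor_switch_hz / neuron_fire_hz
print(f"Transistors switch roughly {speed_ratio:,.0f}x faster than neurons fire")
```

Under these assumptions the ratio comes out to 10,000,000, consistent with the "at least 10 million times faster" figure in the text.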
At present, engineers and researchers are developing AGI and ASI without taking ethical concerns into consideration. The objectives in focus right now are tech development, power, money and growth in a competitive world. If this continues, the feared outcome of human extinction is not far off.
Aggressive government intervention and policy making is required
AI systems are becoming highly goal oriented. This means that whatever goal is put in front of one, the system will try to achieve it, even if that means removing human barriers. An AI system, in case it goes rogue, might even consider humans trying to shut it down as a barrier that needs to be stopped or, to put it bluntly, removed. AI with increased computational capability can focus on long-term planning, with which the risk of exhibiting unintended dangerous goals is higher. The X-risk (existential risk) is higher than ever before, and it is important that the governments of the world intervene and regulate AI development, steering it in the right direction through strict policy making and implementation.