Tuesday, April 18, 2023

Outpacing Ourselves: Will We Recognize AGI Amidst Our Own Biases?

(Photo shared on Twitter by @hanne_van_briel)

As a fervent observer of the rapid advancements in artificial intelligence (AI), I find myself both thrilled and apprehensive about what the future holds. The exponential growth of AI, especially the prospect of Artificial General Intelligence (AGI), raises concerns about humanity's ability to adapt to and understand these powerful technologies. With the pace of change outstripping our capacity to fully grasp its implications, I can't help but wonder how we, as a society, can narrow the gap between cutting-edge AI and the general public's comprehension.

The incredible speed at which AI is progressing has the potential to leave many people behind, struggling to keep up with the latest developments. This is evident in the widespread skepticism and outdated arguments that persist, even in the face of groundbreaking technologies like GPT-4. This lag in accepting and understanding new AI advancements is a pressing issue that needs to be addressed.

A significant issue lies in the moving goalposts set by AI researchers and experts; the Turing Test is a prime example. Benchmarks once hailed as definitive now seem inadequate. The Turing Test goalpost has been moved repeatedly, to the point where even humans might fail the test. This raises the question: could AGI emerge, only to be denied by the very people we rely on to detect it, because of their confirmation biases?

Once we observe that the Turing Test goalpost has been moved over and over, and once we accept that some humans would fail some versions of the test (since those versions are designed to probe specific flaws of large language models), it becomes clear that our criteria for AGI may drift in the same way. Consider chess: humanity only accepted that "computers can play chess" after a computer beat the best chess player in the world. Waiting for that kind of undeniable defeat is extremely risky with AGI, since an AGI as smart as the smartest humans could deceive us until it is too late.

To counteract our tendency as a species to dismiss new technology and keep moving the goalposts, I believe we need an unmoving goalpost for detecting AGI. A set of tests should be created and agreed upon by multiple parties in the near future. Once agreed upon, the test should be made public, and once it is passed, it should be publicly accepted that AGI is here. This would likely involve the AI accomplishing something completely new, something humans have not done before or even thought of, but that can be proven correct.
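To make "unmoving" concrete, here is a minimal sketch of one way such a commitment could be enforced. This is my own illustration, not an established proposal, and the test entries and function names are hypothetical. The idea: the parties serialize the agreed test suite in a canonical form and publish its cryptographic hash, so any later change to the tests is publicly detectable.

import hashlib
import json

# Hypothetical examples of "something completely new, but provable".
AGREED_TESTS = [
    {"id": "novel-theorem", "task": "Prove a theorem absent from any known corpus."},
    {"id": "novel-design", "task": "Design a working device with no human precedent."},
]

def commitment(tests):
    # Canonical serialization (sorted keys, no whitespace) so the same
    # suite always hashes to the same value.
    canonical = json.dumps(tests, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Published once by all parties, e.g. in a joint statement.
PUBLISHED_HASH = commitment(AGREED_TESTS)

def verify(tests, published_hash):
    # True only if the suite being run is exactly the one agreed upon.
    return commitment(tests) == published_hash

assert verify(AGREED_TESTS, PUBLISHED_HASH)  # the goalpost has not moved

Note that the hash commitment says nothing about what the tests should contain; it only makes moving the goalpost detectable after the fact, which is precisely the property the Turing Test has lacked.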

It should also be accepted that AGI might emerge before (or very shortly after) the Turing Test is passed, precisely because people keep moving the goalpost. If an AI has perfect memory, it fails the Turing Test; if it handles emotions differently, it fails the Turing Test. In the end, the Turing Test has become a game of mimicry, which is largely independent of AGI. Yes, an AGI could fake its way through the Turing Test, but that is exactly my point: we may get AGI before an AI passes the Turing Test.

If we believe there are dangers related to AGI, then we must be acutely aware of our own limitations. The rapid pace of technological advancement is hard for society to keep up with, but it is a challenge we must face head-on. By acknowledging our biases and establishing an unmoving goalpost for AGI detection, we can better prepare ourselves for the arrival of AGI and its potential impact on our world.

If you enjoyed reading, please like and comment so that I know to continue. If there is interest, I have many more technology-related ideas that I would like to share and discuss.

Article also on LinkedIn (https://www.linkedin.com/feed/update/urn:li:linkedInArticle:7054030333574291457/)
