Monday, April 24, 2023

Debunking the "Artificial" in AI: Towards a More Nuanced Understanding of Intelligence


(Image by @Folkopti)

As a passionate explorer of the ever-expanding world of artificial intelligence, I've been pondering the bias we humans seem to hold towards the word "artificial", particularly in "Artificial Intelligence." The term "artificial" is often associated with "fake," as in "artificial leather," which suggests that non-biological forms of intelligence are inherently inauthentic. I believe this creates a bias in which AI is perceived as "not real intelligence." I think it's time to challenge these biases and appreciate intelligence in a more nuanced manner.

When it comes to evaluating intelligence, whether human or artificial, we're often tempted to use a single metric or number, like an IQ score or SAT result. While this approach works reasonably well for humans, machines are not humans, and intelligence is a complex construct that defies easy quantification. Currently, GPT-4 reportedly scores at the equivalent of about a 140 IQ on the SATs. However, it falls short on some aspects of human intelligence where [almost] any human would pass (example). Recognizing this, I believe we need to propose a more comprehensive, multi-dimensional approach to assessing AI systems. There are dimensions along which we don't typically measure humans, simply because they're not that relevant (unless someone is completely socially awkward or a sociopath), but AI is not human, and it will fall short in unexpected places while excelling in others.

What I'm saying is that we need a measurement that is applicable to both human and non-human intelligence, and that captures more than what we currently measure.

To illustrate, let's think of intelligence as analogous to a car's horsepower. Horsepower is a common metric used to describe a car's capabilities, but it doesn't capture all the nuances of a car's performance, such as acceleration, braking, or aerodynamics. Similarly, a multi-dimensional approach can offer a more complete picture of AI's abilities, moving beyond the limitations of a single number.

Before reducing AI's capabilities to a single value, let's consider five dimensions of intelligence:

  • Language understanding
  • Logical reasoning
  • Creativity and innovation
  • Memory and learning
  • Decision-making

We can evaluate each of these dimensions independently, normalize them, and eventually combine them into a single metric using the geometric average. This method includes areas not frequently considered in human intelligence assessments but that should be compared against human baselines when evaluating AI (or AGI).

The geometric average involves multiplying the values and taking the nth root, where 'n' is the number of values. This method is more sensitive to disparities between the values, providing a more intuitive and meaningful result when the values are significantly different from one another.

For example, consider two dimensions with scores of 250 and 0. The arithmetic average would be (250 + 0) / 2 = 125, while the geometric average would be √(250 × 0) = 0. As shown here, the geometric average offers a more accurate representation in this case. An AI with very high Language understanding and Logical reasoning but no Memory would therefore receive a lower combined score than a standard arithmetic average would suggest.
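To make this concrete, here is a minimal sketch in Python (the dimension names and scores below are illustrative assumptions, not measurements of any real system) comparing the arithmetic and geometric averages:

```python
import math

def arithmetic_mean(scores):
    """Simple average: sum of the scores divided by their count."""
    return sum(scores) / len(scores)

def geometric_mean(scores):
    """Geometric average: nth root of the product of n scores."""
    return math.prod(scores) ** (1 / len(scores))

# The two-dimension example from the text: one very strong score, one missing entirely.
print(arithmetic_mean([250, 0]))  # 125.0
print(geometric_mean([250, 0]))   # 0.0 -- a single zero collapses the whole score

# A hypothetical five-dimension profile on an IQ-like normalized scale.
dimensions = {
    "language_understanding": 140,
    "logical_reasoning": 130,
    "creativity_and_innovation": 120,
    "memory_and_learning": 60,
    "decision_making": 110,
}
scores = list(dimensions.values())
print(round(arithmetic_mean(scores)))  # 112
print(round(geometric_mean(scores)))   # ~108, dragged down by the weak memory score
```

The geometric average rewards balanced profiles: a system cannot fully compensate for a missing dimension by excelling elsewhere, which is exactly the property we want when a single headline number has to summarize very different capabilities.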

These are just some ideas, but I see the bias attached to the word "artificial" as a limiting factor in embracing the wave of AI.

As we progress further into the age of AI, it is crucial to establish accurate evaluation methods that consider various aspects of intelligence. By refining our understanding of AI and moving away from the "artificial" label, we can foster a more inclusive and comprehensive view of what constitutes intelligence. This shift in perspective will not only enable us to develop better AI systems but also to integrate them more effectively into our society, and to detect unwanted outliers.

Moreover, this approach will encourage researchers and developers to focus on creating AI systems that are well-rounded and beneficial to humanity as a whole. By highlighting the importance of social and emotional alignment, we can ensure that AI systems are designed with empathy, ethics, and human values at their core.

In the long run, embracing a more nuanced understanding of intelligence and adopting a multi-dimensional approach to AI evaluation will help us maximize the potential benefits of AI, while minimizing its risks. 

Now, let's address social and emotional alignment/intelligence separately. I've intentionally left it out of our list of dimensions because it deserves its own spotlight. This aspect is what separates good from bad AGI and should be handled carefully. Emphasizing this ensures that AI systems are developed (aligned) with ethical, empathetic, and human-centric considerations in mind.

This preliminary work/article suggests that as AI systems become more prevalent, we should seriously consider establishing a benchmark for AI, which could be renamed to Authentic Intelligence. For instance, in the future, we may say, "This system has an AI of 120, or an Authentic Intelligence [quotient] of 120, with an Emotional Alignment of 90." For the general public, 120 would suffice, but a more careful mind would consider the 90 as well.
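As a rough sketch of how such a report might be represented (the field names and numbers here are assumptions for illustration, not a proposed standard):

```python
from dataclasses import dataclass

@dataclass
class IntelligenceReport:
    authentic_intelligence: float  # combined geometric average across the five dimensions
    emotional_alignment: float     # kept separate, never folded into the headline number

report = IntelligenceReport(authentic_intelligence=120, emotional_alignment=90)
print(f"Authentic Intelligence of {report.authentic_intelligence:.0f}, "
      f"Emotional Alignment of {report.emotional_alignment:.0f}")
```

Keeping the alignment score as its own field, rather than averaging it in, is what lets the more careful mind inspect it explicitly.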

In conclusion, by adopting a multi-dimensional approach and using the geometric average, we can overcome the bias associated with the term "artificial" and foster a more accurate understanding of AI's capabilities and limitations. By doing so, we can more safely embrace the remarkable potential that AI holds for our future and develop AI systems that are both powerful and well-attuned to human values.


Tuesday, April 18, 2023

Outpacing Ourselves: Will We Recognize AGI Amidst Our Own Biases?

(Photo shared on Twitter by @hanne_van_briel)

As a fervent observer of the rapid advancements in artificial intelligence (AI), I find myself both thrilled and apprehensive about what the future holds. The exponential growth of AI, especially with the emergence of Artificial General Intelligence (AGI), has raised concerns about humanity's ability to adapt and understand these powerful technologies. With the pace of change outstripping our capacity to fully grasp its implications, I can't help but wonder how we, as a society, can narrow the gap between cutting-edge AI and the general public's comprehension.

The incredible speed at which AI is progressing has the potential to leave many people behind, struggling to keep up with the latest developments. This is evident in the widespread skepticism and outdated arguments that persist, even in the face of groundbreaking technologies like GPT-4. The latency in accepting and understanding new AI advancements is a pressing issue that needs to be addressed.

A significant issue lies in the moving goalposts set by AI researchers and experts, such as the Turing Test. These benchmarks, once hailed as definitive, now seem inadequate. The Turing Test's goalposts have been moved repeatedly, to the point where even humans might fail the test. This raises the question: could AGI emerge, only to be denied by the very people we rely on to detect it, due to their confirmation biases?

Once we observe that the Turing Test goalpost has been moved over and over again, and once we agree that some humans will fail some versions of the Turing Test, since those versions are designed to detect specific flaws of large language models, it becomes clear that our goals for AGI might suffer the same tendencies. For example, humanity only accepted that "computers can play chess" after a computer beat the best chess player in the world. This approach is extremely risky with AGI, since an AGI as smart as the smartest of humans could deceive us until it is too late.

To counteract our tendency as a species to undermine technology and keep moving the goalposts, I believe that we need an unmoving goal-post for detecting AGI. A set of tests should be created and agreed upon by multiple parties in the near future. Once the test has been agreed upon, it should be made public, and once it is passed, it should be publicly accepted that AGI is here. This would likely involve the AI accomplishing something completely new, something humans have not done before or even thought of, but can prove.

It should also be accepted that AGI might emerge before (or very close after) the Turing Test is passed. This is due to people continuously moving the goalpost. If an AI has perfect memory, it will fail the Turing Test; if it handles emotions differently, it fails the Turing Test. So, in the end, the Turing Test has become a game of mimicry, which is rather independent of AGI. Yes, AGI can fake the Turing Test, but that is exactly my point—we may get AGI before AI passes the Turing Test.

If we believe there are dangers related to AGI, then we must be acutely aware of our own limitations. The rapid pace of technological advancements presents a challenge for society to keep up with, but it is a challenge we must face head-on. By acknowledging our biases and establishing an unmoving goalpost for AGI detection, we can better prepare ourselves for the arrival of AGI and its potential impact on our world.

If you enjoy reading, please like and comment, so that I know whether to continue. If there is interest, I have many more technology-related ideas that I would like to share and discuss.

Article also on LinkedIn (https://www.linkedin.com/feed/update/urn:li:linkedInArticle:7054030333574291457/)

Wednesday, April 5, 2023

The Double-Edged Sword of AGI: Balancing Progress and Responsibility in the Face of Global Tensions



As I reflect on the recent developments surrounding artificial intelligence (AI), I can't help but feel a sense of unease. I have long been fascinated by the potential of AI and its promise to revolutionize every aspect of human life. However, I also acknowledge that there is no certainty that the singularity—the point at which artificial general intelligence (AGI) surpasses human intelligence—is avoidable. Nor can we predict the consequences of such an emergence.


Recent headlines have made it clear that various countries are working tirelessly to develop AI technology, and it seems increasingly likely that someone will find a way to create AGI. While we may not be able to control the destiny of AGI, we can at least attempt to influence who achieves it first.


The current geopolitical landscape gives me pause. As tensions between Russia and Ukraine continue to escalate, it's clear that the power dynamics between countries are shifting. The EU has condemned China for its support of Russia in the ongoing conflict, and China's increasingly close relationship with Russia raises concerns about the potential impact of AGI under their control.


I believe that our only option, therefore, is to work towards the development of a safe AGI. However, we cannot afford to follow a slow path to AGI, as doing so would risk losing the race to another, potentially less ethical entity. Russia's actions in Ukraine have shown the lengths they are willing to go to assert their power. The potential consequences of AGI—or guided super-intelligence—under their control would pose an even greater risk to global democracy.


At the moment, it seems as though we might be on a path towards an existential risk. We must proceed carefully, but we cannot afford to slow down. The stakes are too high.


Recent events in Italy, where OpenAI's ChatGPT was temporarily banned due to privacy concerns, serve as a reminder that AI development is not without its challenges. As we forge ahead in pursuit of AGI, we must be mindful of the potential risks and ensure that proper safeguards are in place.


But while the path we are currently on may lead us towards existential risk, we cannot afford to slow down. Instead, we must continue to develop AI technologies while simultaneously working to mitigate the potential dangers they pose. By doing so, we stand a chance of ensuring that AGI remains a force for good, rather than an instrument of destruction.


While AI research is inherently risky, organizations like OpenAI represent a responsible approach to this rapidly evolving field.


In conclusion, I must admit that I fear AI. I fear the unknown consequences of AGI, and I fear the potential for bad actors to wield this powerful technology for their own nefarious purposes. But I do not fear OpenAI. I fear AGI under bad influences. It is our responsibility to actively engage in the development and regulation of AI, ensuring that its power is harnessed for the greater good and not used as a tool of oppression.


As we navigate this uncertain future, let us work together to promote ethical AI development, taking into consideration the lessons of history and the current geopolitical landscape. We must move forward with caution, but also with a sense of urgency, as the race for AGI is already underway. Our future depends on it.

Tuesday, April 4, 2023

The Singularity: A Personal Journey Through Hope and the Possibility of an Accelerated Timeline




When my father was diagnosed with cancer, I found solace in the pages of Ray Kurzweil's "The Singularity is Near." It provided a sense of consolation that one day, humanity might overcome such suffering. As I delved into Kurzweil's predictions about the exponential growth of information technologies, I felt a connection with his own experiences and desires surrounding the loss of his father.

During my master's thesis, I experienced firsthand how software performance could double monthly. This tangible evidence of exponential growth lent credibility to Kurzweil's ideas, despite some critics arguing that they are based on faith rather than hard evidence. Although his thesis seemed vague and the connection between exponential growth in various technologies felt loose, I couldn't ignore the potential validity of his claims.

In the 2010s, Kurzweil struggled to justify why self-driving cars hadn't become as pervasive as he initially predicted. This made me question whether his predictions, rooted in his desire to resurrect his father, might be too pessimistic or overly optimistic.



However, exponential growth appears to be a constant. Kurzweil pointed out that transistors followed an exponential curve until the microprocessor era. He predicted that the next stage would involve self-assembling nanostructures, but I believe he may have underestimated the impact of AI.

Recent developments in AI have led me to reconsider the possibility that Kurzweil's timeline might be overly pessimistic. Two main points warrant consideration:

  1. AGI might not need to fully mimic the human brain, potentially allowing for faster development. This challenges one of Kurzweil's assumptions, as AGI could be achieved through alternative means, much like how cars surpassed horses in speed without mimicking their structure.
  2. Superintelligence could be the lowest hanging fruit of AGI, suggesting that even a small improvement might result in capabilities far beyond human intelligence. Just as the difference in DNA between humans and gorillas is only a few percent, an AI only a few percent more advanced than AGI could already be superintelligent. This leap might happen extremely fast, making AI as superior to us as we are to gorillas.

If these points hold true, it's conceivable that the Singularity could happen before the end of this decade.


An accelerated timeline for the Singularity offers hope and challenges. It could lead to advancements in medicine, a reduction in human suffering, and extraordinary technological progress. However, it also underscores the urgency of addressing ethical, safety, and societal concerns associated with advanced AI.


My journey through hope, skepticism, and renewed optimism about the Singularity has shown me that while Kurzweil's predictions may be influenced by his personal motivations, there is still merit in considering the possibility of an accelerated timeline. As we move forward, it's crucial to focus on responsible AI research, collaboration, and addressing the implications of an AI-driven future.