New Survey Shows Cautious Optimism for AGI


The AI community outlines where we are in achieving artificial general intelligence (AGI) and what’s important on the journey.

Machines reaching humanlike sentience might be the stuff of nightmares in sci-fi, but in the real world, achieving that goal could solve many pressing problems. Self-driving cars might make roads safer. Machine-driven cybersecurity could anticipate attacks or prevent them outright. Medical diagnoses might become faster and more accurate, and customer service could be available whenever and wherever people need it.

A recent survey spearheaded by Francesca Rossi, AI researcher and president of the Association for the Advancement of Artificial Intelligence (AAAI), casts a critical light on the current trajectory of AI development. Conducted among hundreds of professionals within the field, the findings reveal a prevailing skepticism. A significant majority doubts that the current technology of large language models, powered by expansive neural networks, will culminate in Artificial General Intelligence (AGI). Unveiled in Philadelphia at the annual AAAI meeting, these insights challenge the efficacy of scaling current technologies and prompt a broader introspection about the ultimate goals of AI research.

Is reaching human-level intelligence the right goal?

The foundation of contemporary AI systems is the neural network, a technological framework designed to mimic the human brain’s ability to learn. Over the past decade, the pursuit of enhanced AI capabilities has primarily focused on scaling these networks. By increasing the volume of training data and the complexity of model parameters, researchers have significantly advanced the performance of generative AI systems, including sophisticated chatbots and advanced image generators that seem to reason.

However, simply “scaling up” has recently come under scrutiny. The allure of larger and more complex models has led to impressive feats in AI, from mastering strategic games like Go to generating eerily accurate humanlike text. Yet, this strategy has its limitations.

These systems lack the nuanced understanding and adaptable reasoning that characterize human intelligence, even if they sometimes seem to be “thinking.” The survey presents a stark statistic: 84% of AI professionals believe that relying solely on neural networks is insufficient for achieving AGI. It is a pointed reminder of the chasm between even the most advanced AI systems and the multifaceted capabilities of the human mind.

The fundamental challenges, such as understanding abstract concepts and performing diverse cognitive tasks with flexibility, remain unmet.

See also: Top 5 Challenges When Integrating Generative AI

Survey says… it’s complicated

The survey revealed that more than three-quarters of respondents believe that merely enlarging the scale of existing AI systems will not suffice for achieving AGI. This broad consensus suggests that a change in direction may be necessary to overcome the existing barriers.

Furthermore, the survey sheds light on a significant inclination within the AI community towards integrating diverse AI methodologies. More than 60% of those surveyed support the idea that human-level reasoning requires a blend of neural network-based systems and symbolic AI. This approach encodes logical rules directly into AI systems, which contrasts sharply with the statistical learning models that dominate current practices. Proponents argue that Symbolic AI could bring a level of deductive reasoning and rule-based problem-solving that neural networks alone cannot provide.
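To make the contrast concrete, here is a minimal, purely illustrative Python sketch of the hybrid idea described above. The function names and the toy "can this animal fly?" task are hypothetical, not from the survey: a statistical scorer stands in for a learned neural model, and a symbolic layer applies hard logical rules that override its guesses.

```python
def statistical_scorer(animal: str) -> float:
    """Stand-in for a learned model: returns a confidence (0-1) that
    the animal can fly, based only on patterns seen in training data."""
    learned_confidence = {"sparrow": 0.9, "penguin": 0.7, "bat": 0.8}
    return learned_confidence.get(animal, 0.5)  # 0.5 = no evidence

# Symbolic layer: explicit, hand-encoded rules. Each rule pairs a
# condition with a conclusion that overrides the statistical guess.
RULES = [
    # Penguins are birds, but a hard exception says they cannot fly,
    # no matter how confident the learned model is.
    (lambda a: a == "penguin", ("can_fly", False)),
]

def can_fly(animal: str) -> bool:
    # Deductive rules are checked first and win over statistics.
    for condition, (prop, value) in RULES:
        if condition(animal) and prop == "can_fly":
            return value
    # Otherwise, fall back to the statistical model's confidence.
    return statistical_scorer(animal) > 0.5
```

The design point is the one proponents make: the statistical component generalizes from examples (and can be confidently wrong, as with the penguin), while the symbolic component contributes guaranteed, rule-based conclusions that a purely learned system cannot promise.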

This isn’t just a question of methodology; it is also about priorities within the AI research community. While some researchers push for the continued pursuit of AGI, a sizable group argues that developing AI systems with an acceptable risk-benefit profile should take precedence. This group advocates for a more cautious approach and emphasizes the importance of safety and ethical considerations in AI development.

It’s also worth noting that a smaller, yet notable proportion of the community even suggests halting AGI research until methods for fully controlling these systems are established. This stance reflects deep concerns over the potential repercussions of unchecked AI advancement. Though the majority opinion either doesn’t support a halt or believes one would be unenforceable, the balance between technological advancement and ethical considerations in pursuing a truly humanlike AI is an intricate one.

AI and Cognitive Science: Bridging the Gap with AGI

An interesting aspect of AI development, as noted in the survey, is the potential for significant advancements through increased collaboration with cognitive science. Cognitive science, which encompasses a range of disciplines, including psychology, linguistics, and neuroscience, has historically shared a close but complex relationship with AI. Initially, AI’s exploration into computational models provided a new lens through which to understand human cognition. However, the paths of AI and cognitive science have diverged over time due to differing focuses and methodologies.

But cognitive architectures, which are frameworks for simulating the human mind’s structure and functioning, provide insights into how AI can integrate real-time perception, cognition, and action. These systems could be instrumental in exploring high-level cognitive functions, such as reasoning and learning, that are more aligned with human capabilities.

Furthermore, the survey suggests that expanding AI’s scope to include more of cognitive science’s methodologies could enhance not only AI’s functionality but also its societal integration. For instance, understanding how to build AI that can engage in social contexts and learn through interaction, much like humans, could transform AI into more effective, adaptive collaborators rather than mere tools.

In pursuit of artificial general intelligence (AGI)

Ultimately, most respondents prioritized pursuing AI systems with an acceptable risk-benefit profile over a race to AGI. But that doesn’t answer the question of whether AGI is even feasible. And once again, while most respondents didn’t support halting the pursuit of this ultimate goal, they also didn’t believe current approaches would actually get us there. It’s a cautious approach to progress with a strong eye toward fair, ethical, and safe AI.

About Elizabeth Wallace

Elizabeth Wallace is a Nashville-based freelance writer with a soft spot for data science and AI and a background in linguistics. She spent 13 years teaching language in higher ed and now helps startups and other organizations explain clearly what it is they do.
