
2000+ Researchers Predict the Future of AI

by Adrien Book, April 17th, 2024

Too Long; Didn't Read

A recent survey by AI Impacts, in collaboration with universities, highlights expert predictions on AI's future milestones and potential risks. Experts foresee significant AI advancements by 2028 but express concerns about AI interpretability, societal impacts, and existential risks. Calls for increased AI safety research, global regulations, and responsible AI development are emphasized to navigate AI's opportunities and challenges for a better future.


The pace of Artificial Intelligence development has reached a crescendo. Tools such as GPT-4, Gemini, and Claude, and their makers, all claim they will soon change every facet of society, from healthcare and education to finance and entertainment. This rapid evolution raises ever more critical questions about AI’s trajectory: the technology’s benefits, yes, but also (mostly!) the potential risks it poses to us all.


Under these circumstances, listening to, understanding, and heeding experts’ perspectives becomes crucial. A recent survey titled “Thousands of AI Authors on the Future of AI” represents the most extensive effort to gauge the opinions of such specialists on AI’s potential. Conducted by Katja Grace and her team at AI Impacts, in collaboration with researchers from the University of Bonn and the University of Oxford, the study surveyed 2,778 researchers, seeking their predictions on AI progress and its social impacts. All those contacted had previously published peer-reviewed papers in top-tier AI venues.


Key takeaways from the study on the future of AI

In short, the survey highlights the sheer complexity and breadth of expectations and concerns among AI researchers regarding the technology’s future… and its societal impacts.


  • Experts predict that AI will achieve significant milestones as early as 2028, “such as coding an entire payment processing site from scratch and writing new songs indistinguishable from real ones by hit artists such as Taylor Swift”.


  • A significant majority of participants also believe that the best AI systems will likely achieve very notable capabilities within the next two decades. This includes finding “unexpected ways to achieve goals” (82.3%), talking “like a human expert on most topics” (81.4%), and frequently behaving “in ways surprising to humans” (69.1%)​​.


  • Furthermore, the consensus suggests a 50% chance of AI “outperforming” humans in all tasks by 2047, a projection that has moved forward by 13 years compared to forecasts made a year earlier.


  • The chance of all human occupations becoming “fully automatable” is now forecast to reach 10% by 2037, and 50% by 2116 (compared to 2164 in the 2022 survey).


When will AI be able to “do” the following tasks?


  • The survey indicates skepticism about the interpretability of AI decisions, with only 20% of respondents considering it likely that users will be able to “understand the true reasons behind AI systems’ choices” by 2028​​. AI is (infamously) a black box, and this concern reflects real, ongoing challenges in AI transparency. This is particularly relevant in critical applications (finance, healthcare…) where understanding AI decision-making is crucial for trust and accountability.


  • The study also highlights “significant” worries regarding the potential negative impacts of AI. The spread of false information, manipulation of public opinion, and authoritarian uses of AI create, unsurprisingly, substantial concern​​. Calls for proactive measures to mitigate these dangers remain few and far between today, and that’s a problem.


Amount of concern potential scenarios deserve, organized from most to least extreme concern


  • There’s a diverse range of opinions on the long-term impacts of high-level machine intelligence, with a notable portion of respondents attributing non-zero probabilities to both extremely good and extremely bad outcomes, including human extinction​​​​. That’s scientist-speak for “we don’t f*cking know what’s going to happen next”. But… between 38% and 51% of respondents gave at least a 10% chance to advanced AI leading to outcomes as bad as human extinction, which seems like something we should keep an eye on.


  • Finally, there is disagreement about whether faster or slower AI progress would be better for the future of humanity. However, a majority of respondents advocate for prioritizing AI safety research more than it currently is, reflecting a growing consensus on the importance of addressing AI’s existential risks and ensuring its safe development and deployment​​.


What do we do with that information?

The way forward is pretty clear: governments the world over need to increase funding for AI safety research and develop robust mechanisms for ensuring AI systems align with current and future human values and interests.


The UK government recently announced £50M+ in funding for a range of artificial intelligence-related projects, including £30 million for the creation of a new responsible AI ecosystem. The idea is to build tools and programs that ensure responsible and trustworthy applications of AI capabilities.


Meanwhile, the Biden-Harris Administration announced in early 2024 the formation of the U.S. AI Safety Institute Consortium (AISIC), bringing together over 200 AI stakeholders, including industry leaders, academia, and civil society. This consortium aims to support the development and deployment of safe and trustworthy AI by developing guidelines for red-teaming, capability evaluations, risk management, and other critical safety measures​.


These are a start, but they remain all too national.


Governments can’t just look at their own backyard today. We also need to implement INTERNATIONAL regulations to guide the ethical development and deployment of AI technologies, ensuring transparency and accountability. This includes fostering interdisciplinary and INTERNATIONAL collaborations among AI researchers, ethicists, and policymakers. I’ll feel safer in the world when I see the following being rolled out to strengthen and improve existing Human Rights frameworks:


  • Global AI safety frameworks
  • International AI safety summits
  • Global AI ethics and safety research funds


Too soon to draw conclusions

It’s maybe a little early to fall prey to doomerism. While the survey provides valuable insights, it has limitations, including potential biases from self-selection among participants and the (obvious!) challenge of accurately forecasting technological advancements. Further research should aim to expand the diversity of perspectives and explore the implications of AI development in specific sectors.


In the end, and regardless of the accuracy of the predictions made, we need more than words. AI is a source of both unprecedented opportunities and significant challenges. Through open dialogue among researchers, policymakers, and the public, we must create rules to safeguard us from AI’s dangers and steer us towards a better future for all.


The world is very big, and we are very small. Good luck out there.