
A fascinating project spanning two decades is now exploring how artificial intelligence could disrupt the future.
From 2004 through 2023, hundreds of students and faculty at Elon University — along with staff members from the Pew Research Center — surveyed experts to collect tens of thousands of predictions on “the challenges and opportunities of digital evolution…”
But this year their “Imagining the Digital Future” research center announced an “expanded research agenda,” collecting predictions for the impact of AI by the year 2040 — from over 300 technology experts. “We think the spread of AI systems has profound implications for individuals and institutions,” said the research center’s new director, Lee Rainie, in an email interview with The New Stack.
Rainie joins the center after 24 years directing Pew Research Center’s Internet and Technology branch — so this year’s report was also augmented by polling of 1,021 Americans (conducted Oct. 20-22). “So we think we’ve asked a variety of questions that are new,” Rainie said in a recent interview with Elon University president Connie Book, “but we’ve also done it in a new way.”
The resulting report took nine months to prepare, according to that interview, and the effort represents an attempt to bring serious academic focus to the sudden technological change confronting the world.
But it also represents an attempt not just to understand artificial intelligence, but to anticipate its disruptive impacts — both good and bad.
Upheavals and Benefits
Both experts and poll respondents see “enormous upheavals” ahead, while many experts also “embrace the idea that important benefits will also result from the spread of AI,” according to the center’s news release.
The enormous upheavals took many forms. “The global experts predicted that as these tools advance we will have to rethink what it means to be human,” the center said in an announcement, adding that experts also predicted “we must reinvent or replace major institutions in order to achieve the best possible future.”
Both experts and poll respondents expressed concerns about wealth inequalities, politics and elections, and the level of civility in society and personal relationships with others, according to a public statement from Rainie. “At the same time, there are more hopeful views about AI making life easier, more efficient and safer in some important respects.” For example, even the polling found more Americans envisioning a positive impact from AI by 2040 in healthcare systems and the quality of medical treatment (36%) — and in their day-to-day work tasks and activities (31%).
But the public opinion poll also found many Americans worrying that AI will have a negative impact by 2040 in numerous ways.
- The further erosion of personal privacy (66%)
- Their opportunities for employment (55%)
- How these systems might change their relationships with others (46%)
- AI applications’ potential impact on basic human rights (41%)
In his February interview, Rainie said Americans seemed to be “all for” AI-powered healthcare diagnostics and public data analysis tools “that are much faster and sometimes much more accurate than humans… So there’s a really mixed picture, which is part of the really interesting story that we’re telling. It’s not an ‘all good’ or ‘all bad’ situation, with public opinion. It’s very nuanced, sometimes scary — but a lot of ways that people are showing some signs of hope, too.”
“There is no prevailing viewpoint about the overall impact of AI…” according to the center’s announcement. “On a broad question about AI ethics, 31% say it is possible for AI programs to be designed that can consistently make decisions in people’s best interest in complex situations, while the exact same share say that is not possible. Some 38% say they are not sure.”
Even the report’s title seemed to capture this “undecided” assessment. (“A New Age of Enlightenment? A New Threat to Humanity?: The Impact of Artificial Intelligence by 2040…”)
Vint Cerf and Esther Dyson
Contributions came from a wide variety of experts — from investor/founder Esther Dyson to “father of the internet” Vint Cerf (now a vice president at Google).
In a longer interview, Cerf made some specific recommendations, touching on the need for transparency and visibility into the data used to train AI. He also raised related issues, like data intentionally falsified with AI and the growing need to disclose when AI-generated output is being used.
Esther Dyson even introduced the concept of the “information supply chain” — which includes knowing who chose to produce the content (so readers have a better idea of their motivations) — and argued that AI could have a role in identifying those sources.
In a longer interview, Dyson noted “This is a war that will never be won,” and sounded the theme of humanity’s role. “I just wrote a piece called, essentially, ‘Don’t fuss about training your AIs. Train your babies.’ Because we need to train people to be skeptical, but not cynical — to be self-aware, and also to be aware of others’ motives…”
Digital Warlords
Perhaps the most dire warning came from William L. Schrader (an Internet Hall of Fame inductee who co-founded one of the world’s first internet service providers).
“Wake up and smell the gunfire,” Schrader wrote, warning that AI “adds greater velocity to the vector of humanity’s troubles. Fascists will dominate nearly all governments. AI will drive dangerous military activity and intelligence gathering.” Schrader’s dystopian view was surprisingly succinct. While also predicting worsening pandemics and global warming, Schrader warned that “AI will make the rich richer, the poor poorer, and the differential will be substantially greater by 2040.”
And in the same document, Chuck Cosson, T-Mobile’s director of privacy and data security, also predicted an increase in “misinformation and other forms of epistemic corruption,” with the end result that “how we know what we know will be challenged…”
Further down, Devin Fidler, founder of the consulting company Rethinkery, agreed that while we’re fretting about AI spinning out of control, there are much more pressing challenges — including the possibility of “nurturing the growth of new kinds of digital warlords.
“This is like worrying about an asteroid collision while your house is in the path of an oncoming wildfire.”
But not all the predictions were glum. Futurist author Jonathan Kolber envisioned AI eliminating the need for most work, with humans enjoying “unlimited material abundance” from asteroid mining (and clean energy). Kolber sees fully immersive VR offering all those experiences that used to require physical possessions, while benevolent self-aware AI protects humankind from nuclear war. (Kolber’s book A Celebration Society expands his vision in more detail.)
What to Do Now
How can we head off the dystopias? The problems might be at least partly organizational, argues Amy Sample Ward, CEO of the nonprofit NTEN, which offers training programs in equitable technology use. In her contribution to the report, Ward advised “redirecting” the current movement toward “condensing power in fewer and fewer systems, governments and individuals.”
Or, as Dyson said in her interview, “Where we’re heading is up to us.”
In his February interview, Rainie expressed a hope “that people will be people, in this age of artificial intelligence, and engage with others, ask questions, offer their expertise, and learn from the things that we are trying to pull together for them.”
And Rainie told The New Stack that he sees this report as a beginning. “The reasons we do research like this and ask questions like this are to help start public and policy conversations around the findings.” So while there’s no specific policy agenda driving their work, “We do hope new data on new topics pushes some of these issues into the public square…
“Our view is that the best conversations — and policies — are made when they are informed by data and input captured by public opinion surveys and diverse expert views.”
Rainie ultimately sees their research being used by the policy-making community, as well as the tech communities building the tools, the broader business community, and “scholars and other analysts who are trying to think about the appropriate use of AI in educational settings and how AIs can be deployed in classrooms.” And of course, he’s also hoping it sees use from “interested workers and citizens who will be affected by the deployment of AI systems in their jobs and in their communities.”
And the future that awaits may be stranger than we can imagine. In February’s interview, Rainie was asked what the center would explore next — and he said AI-powered augmented reality was already “on our mind.” But beyond that, Rainie said that “the merger of artificial intelligence with things in your body is absolutely going to happen — these experts don’t even think that’s a question worth wondering about because it’s so obvious that it’s going to happen.
“The other thing that’s going to happen is how we negotiate all of the ways in which this changes us as human beings.”
Vint Cerf, Esther Dyson, William L. Schrader and others share their worries and hopes about AI in a wide-ranging study from Elon University.