By Reid Hoffman and Greg Beato
Stop thinking about all that could possibly go wrong with AI, and start thinking about all that could possibly go right. This is the thesis Hoffman and Beato put forward in Superagency: What Could Possibly Go Right with Our AI Future.
About the authors:
Reid Hoffman is the co-founder of LinkedIn, cofounder of Inflection AI, and a partner at Greylock. He is the author of six best-selling books including Blitzscaling, The Start-up of You, The Alliance, Masters of Scale, Impromptu and most recently, Superagency: What Could Possibly Go Right with Our AI Future. He also hosts the podcasts Masters of Scale and Possible.
Greg Beato has been writing about technology and culture since the early days of the World Wide Web. His work has appeared in The New York Times, Wired, The Washington Post, The International Herald Tribune, Reason, Spin, Slate, Buzzfeed, The Guardian, and more than 100 other publications worldwide.
Superagency’s Four Principles
Designing for human agency is the key to producing broadly beneficial outcomes for individuals and societies alike.
When agency prevails, shared data and knowledge become catalysts for individual and democratic empowerment, not control and compliance.
Innovation and safety are not opposing forces, but rather synergistic ones: giving millions of people hands-on access to AI, through the process of iterative deployment, is both a productive and a safe way to make AI more capable and more inclusive.
Similar to what happened during the rapid adoption periods of the automobile and the smartphone, our collective use of AI will have compounding effects.
This fourth principle is what will lead to a new era of superagency.
Think about it . . .
Doom-mongers once cast the printing press, the telephone, the automobile, and increasing automation as the end of us. Time has proven them wrong.
Hoffman and Beato argue that we can expect a similar positive impact from AI. As much as some may want to point to possible “what if” scenarios (and they are possible), we must also consider the consequences of not adjusting to a new technology.
Case in point: the horseless carriage. As much as many thought the horseless carriage would be the bane of our existence, the alternative was 1.3 million pounds of manure left by horse-drawn carriages … EVERY DAY … just in Manhattan. We had to envision what could possibly go right: “Had we waited to make automobiles even nominally available to the public until there was certainty these new machines were safe beyond a reasonable doubt, pedestrians on Manhattan’s busiest streets would be wading through ankle-deep manure to get to their jobs each morning.” 230
If you are looking for references to ChatGPT, GPT-4, OpenAI, large language models (LLMs), and the like, you will find them, but the authors’ bigger point is, as the subtitle more than hints, that human agency does not have to lose and is, in fact, central to effective AI.
Doomers, Gloomers, Zoomers, Bloomers
Hoffman and Beato use these helpful descriptors not to pigeonhole or reduce complexity, but to summarize the basic positions that have fueled the AI debate and ongoing discussion:
Doomers: The “worst-case scenario” crowd that sees a future dominated by AI run wild: devoid of human agency and out to destroy us.
Gloomers: Gloomers are highly critical of AI and of the Doomers. They “favor a prohibitive, top-down approach, where development and deployment should be closely monitored and controlled by official regulation and oversight bodies.” 20-1
Zoomers: Zoomers want “open AI.” They don’t need or want government support or regulation; they believe productivity gains and innovation will far exceed any negative impacts.
Bloomers: Bloomers, like Zoomers, are fundamentally optimistic, but they are not as free-wheeling. As with a seed: plant it, monitor its growth, and intervene and adapt as it develops. Yield will grow over time.
Points to Ponder
Privacy vs. Agency: “When does the production and dissemination of new knowledge bolster individual agency and when does it diminish it? How much does an allegiance to privacy, to withholding information and keeping it close to the vest, reduce our opportunities to share knowledge, learn new things, grow?” 35
The growth of global knowledge is why we need AI: In 1980, the average US inhabitant was consuming only 0.004 of the information available each day. Today, in the time it takes you to read this sentence, the world produces enough data to fill 23 billion e-books.... And that’s precisely why AI is so crucial to our future, as individuals and collectively. In the same way we used to use steam power, and now use multiple sources of synthetic manpower,... AI tools... can convert big data into big knowledge to achieve a new light age of data-driven clarity and growth. 46
Utilizing AI to combat the rising mental health crisis: Many studies show that the largely self-guided nature of mental health apps leads to low levels of engagement and high attrition rates, with retention dropping to 3.9% after just 15 days in one study. AI provides opportunities to incorporate more conversational or social elements into apps and other interventions. 58 The authors are big proponents of AI technology as an aid to human practitioners, making care more “abundant and affordable.” 65
Innovation is safety: The authors make a generous assumption when, discussing AI regulation, they write, “By maintaining our [USA] development lead, we’re infusing AI technologies with democratic values, and integrating these technologies across society in ways that bolster our economic power, our national security, and our ability to broadly project our global influence.” 122 Hence their belief in “permissionless innovation.”
Large Language Models (LLMs): “Life as a human today means constantly upskilling--at work, yes, but everywhere else too. While ongoing digital innovations perpetuate this dynamic, they also help us manage it. To keep up with new informational demands, the 20th century brought us new products and services like e-mail, hyperlinks, search, and emojis. The 21st century has given us AI. As their name implies, large language models are, at heart, systems for analyzing, synthesizing, and mapping language flows. That's what informs the analogy to GPS navigation systems--LLMs are infinitely applicable and extensible maps that help you get from point A to point B with greater certainty and efficiency.” 149
In the Whoa! department:
Encyclopaedia Britannica sold 120,000 copies in the U.S. in 1990 (its best year). Every month Wikipedia records about 4 billion page views. 83
Speak something into ChatGPT-4o and it can reply in as little as 232 milliseconds. 95
Conclusion:
The authors note, “Our goal is simply to once again suggest that technologies that are often depicted by their critics as dehumanizing and constraining generally turn out to be humanizing and liberating.” 212 Anyone harboring thoughts of the transitory nature of AI is greatly misled: the technology is here to stay (and grow!). Matters of oversight, privacy, and protection must remain at the forefront. One would hope all parties will be as enthusiastic, progressive, and yet as careful as our authors appear to be.
