The hype surrounding artificial intelligence is everywhere, from get-rich-quick schemes to fears of sentient robots replacing humans. A quick Amazon search retrieves more than a thousand “books on ChatGPT.” At least three on the first results page include the word “millionaire” in the title. Others are entirely AI-written with bogus claims of legitimate authorship.

Yet AI offers much promise to merchants — content tools, productivity, search engine optimization, you name it.

Cover of “AI Snake Oil”

A new book, “AI Snake Oil: What AI Can Do, What It Can’t, and How to Tell the Difference,” coming September 24 from Princeton University Press, aims to help non-experts separate reality from hype. The authors are two of “Time” magazine’s “100 Most Influential People in AI.” Arvind Narayanan is a professor of computer science and director of Princeton’s Center for Information Technology Policy. Sayash Kapoor formerly engineered content-moderation software at Facebook and is now a PhD candidate in computer science at Princeton.

They explain what artificial intelligence is, how it works, what it can and can’t do presently, and its likely direction.

AI “snake oil,” per Narayanan and Kapoor, is “AI that does not and cannot work as advertised.”

The book focuses on three AI technologies — predictive, generative, and content moderation — and outlines the capabilities and shortcomings of each, with plenty of real-world examples.

Predictive AI, already popular in business, education, and criminal justice, deserves the “snake oil” label. The book discusses the unverifiable claims made by companies selling these products, problems with their use (such as implicit bias and users who game the system), and the inherent difficulty of forecasting.

The authors see more potential in generative AI, explaining when it is useful and addressing controversies such as academic cheating, copyright infringement, and the technology's likely impact on work.

The authors also detail why AI can’t completely replace human judgment in moderating content, giving examples of shocking failures and concluding that “whether or not a piece of content is objectionable often depends on the context. The inability to discern that context remains a major limitation of AI.” The book’s analysis of social media moderation is enlightening, especially for those of us who have had seemingly innocuous posts removed for no apparent reason.

A chapter titled “Is Advanced AI an Existential Threat?” evaluates “the dire view that AI threatens the future of humanity.” They concede that artificial general intelligence — AI that matches human capabilities — may someday be possible. But they contend “society already has the tools to address its risks calmly,” pointing out that “unlike chatbots, advanced AI can’t be trained on text from the internet and then let loose. That would be like expecting to read a book about biking and then get on a bike and ride.”

The final two chapters, “Why Do Myths about AI Persist?” and “Where Do We Go from Here?,” explore the aspects of AI that make it susceptible to hype, suggest regulations and practices for mitigating negative effects, and sketch best- and worst-case scenarios.

“AI Snake Oil” covers the technology’s key facets in just 285 pages. The explanations are easily understood without being oversimplified.

The authors admirably differentiate fact from opinion, draw from personal experience, give sensible reasons for their views (including copious references), and don’t hesitate to call for action. They also publish a newsletter to monitor developments.

If you’re curious about AI or deciding how to implement it, “AI Snake Oil” offers clear writing and level-headed thinking. The book’s straightforward analysis will help you reap AI’s benefits while staying alert to its drawbacks.
