Gary Marcus proposes generative AI boycott to push for regulation, tame Silicon Valley
'I am deeply concerned about how creative work is essentially being stolen at scale'
Interview Gary Marcus, professor emeritus at New York University and serial entrepreneur, is the author of several books, the latest of which takes the tech industry to task for irresponsibly developing generative AI and putting democracy at risk.
The Register recently corresponded with Marcus to learn more about his work and his concerns.
The Register: The title of your new book is Taming Silicon Valley: How We Can Ensure That AI Works for Us. Why does Silicon Valley need to be tamed?
Marcus: Because (a) they have lost the plot, and gone a very long way from "Don't be evil," and (b) they are becoming increasingly powerful, with almost no constraint.
The Register: Can Silicon Valley be tamed? Or does it have to exhaust itself by burning through venture capital? Even when there's clear, persistent evidence of harm – social media and public health, cryptocurrency/AI and the environment – public concern and regulatory pressure seem barely able to move the needle.
Marcus: That's why I wrote the book. We need MORE public pressure. That's what eventually greatly reduced smoking and smoking deaths. Without more public pressure, very little will be done in the US to protect citizens from increasingly invasive and problematic AI.
The Register: Your book cites a dozen concerns about what's referred to as artificial intelligence. What's the most troubling issue for you?
Marcus: The threat to democracy from automatically generated misinformation and deepfakes.
The Register: Many of the concerns about AI – impersonation, misinformation, fraud, bias, and so on – are serious as a result of the ease with which information can be disseminated without accountability or transparency. The New Yorker's famous 1993 cartoon, "On the internet, nobody knows you're a dog," has come to mean that nobody knows whether you're a paid shill, a cybercriminal, or an AI bot designed to push a political agenda. Should free, global, instantaneous content distribution have stronger safeguards that make it easier to hold bad actors accountable? Are there other distribution-focused options, like limiting the reach of individual accounts to no more than X people?
Marcus: I think these are hard questions. One thing I would add to your analysis, which is quite astute, is that we ought to treat mass-produced misinformation differently from individual citizen speech. It's one thing for my neighbor to have an opinion I disagree with, another for a foreign government or stock scammer to generate billions of lies a day in order to shift public opinion or pump a stock.
The Register: What do you make of the push to put generative AI in every tech platform product? Microsoft has its Copilot products. Apple is about to roll out Apple Intelligence. Google has its AI Overview in Search. Is there a market for that much summarization and suggestion?
Marcus: Summarization certainly has its uses, but (a) generative AI isn't all that reliable at it, often leaving important things out and sometimes making things up, and (b) it's become a commodity thing that many models from many companies can do, so there is a price war and therefore not a huge amount of money to be made.
More broadly, everyone is pushing GenAI to try to make back their immense investments, but it's not going that well. In 2023 there was nothing but hype; in 2024 I see a lot of disillusionment.
The Register: What do you think of Google using AI to provide search results?
Marcus: I assume you mean GenAI; they have used AI since the beginning. Its use of GenAI for search has been problematic thus far, presumably because of inherent limits on the reliability of LLMs.
The Register: Where is AI genuinely useful?
Marcus: Traditional web search, GPS navigation systems, recommendation engines, protein-folding. But none of those are GenAI (except protein-folding, which is a hybrid of GenAI and classical AI). GenAI itself has been somewhat useful for coding, albeit with some technical debt, and useful for brainstorming, concept art, and that kind of thing. But wildly overhyped.
The Register: An individual posting to Facebook under the name Joseph Browning recently observed: "The underlying purpose of AI is to allow wealth to access skill while removing from skill the ability to access wealth." Is that a fair formulation? And, if so, do you have thoughts on how we might address the challenge posed by the ability to capture someone else's digital labor and sell it on an ongoing basis, without compensating the source while simultaneously commoditizing that labor?
Marcus: That's only true for one strand of AI, not all. GPS navigation systems aren't displacing humans, but they are AI.
But yes, I am deeply concerned about how creative work is essentially being stolen at scale, and if we let the GenAI companies get away with that, they will eventually seek to do the same for every profession.
The Register: What's the most misunderstood aspect of AI? The conflation of prediction with intelligence?
Marcus: Far too many people think that because chatbots sound like humans, they think like humans, when really they are just mimicking (in a fairly sophisticated way) what they have been trained on. And to your point, predicting the next word is arguably an element of intelligence, but many other aspects of intelligence, such as reasoning and planning, are lacking in current systems.
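Marcus's mimicry point can be made concrete with a toy next-word predictor. The sketch below is purely illustrative (ours, not Marcus's): a bigram counter rather than a neural network, but built on the same predict-the-next-word principle. It emits fluent-looking fragments from nothing but training statistics, with no reasoning or planning anywhere in the loop.

```python
import random
from collections import defaultdict

# Count which word follows which in a tiny training corpus.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(prev: str) -> str:
    """Sample the next word in proportion to how often it followed `prev` in training."""
    followers = counts[prev]
    words = list(followers)
    return random.choices(words, weights=[followers[w] for w in words])[0]

# Generation is just repeated next-word prediction -- mimicry of the
# training data, however sophisticated the statistics behind it.
word, output = "the", ["the"]
for _ in range(8):
    word = next_word(word)
    output.append(word)
print(" ".join(output))  # e.g. "the dog sat on the mat . the cat"
```

An LLM replaces the bigram table with billions of learned parameters, but the generation loop is the same; nothing in it corresponds to the reasoning or planning Marcus says current systems lack.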
The Register: Much of the current focus of those offering AI as an API is LLM agents. Yet software automation, such as macros and scripts, has been around for years. Is there reason to believe declarative LLM-based automation and the chaining of tasks will be more broadly useful than the older imperative sort of programmed automation?
Marcus: I think people are in for disappointment. It would be fabulous to be able to commission agents to do what you want based on natural language requests, but current technology is too flaky to make that work well.
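To make the contrast in the question concrete, here is a minimal sketch, again purely illustrative: the first function is classic imperative automation, identical on every run; the second simulates a declarative LLM agent whose plan is inferred from a prompt, with a random branch standing in for the occasional misreading that makes current agents, in Marcus's words, too flaky. The names and the ten percent failure rate are assumptions for illustration, not measurements.

```python
import random

# Imperative automation: every step is spelled out, so the result is
# the same on every run -- the macro/script model.
def archive_script(files: list[str]) -> list[str]:
    return [f"archive/{name}" for name in sorted(files)]

# Declarative "agent" automation, simulated: the steps are inferred from
# a natural-language request. The random branch stands in for an LLM
# occasionally producing a plausible but wrong plan.
def archive_agent(request: str, files: list[str]) -> list[str]:
    if random.random() < 0.1:          # assumed failure rate, for illustration
        return [f"archive/{name}" for name in files[:-1]]  # silently drops a file
    return [f"archive/{name}" for name in sorted(files)]

files = ["q1.csv", "q2.csv", "q3.csv"]
print(archive_script(files))                       # deterministic
print(archive_agent("archive my reports", files))  # usually right, sometimes not
```

The unlucky runs are the problem: a macro that fails does so loudly and reproducibly, while an agent that misreads intent fails silently and differently each time.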
The Register: Should AI models that affect the public be required to disclose their training data?
Marcus: Yes. I discuss transparency at length in Taming Silicon Valley, and that includes transparency around training data, because everything about the models (e.g. bias) depends on those data, and the scientific community can't mitigate the harms if we don't understand what's inside.
The Register: What outcome do you expect from the various ongoing lawsuits over generative AI, from book authors, musicians, visual artists, and software developers? And what outcome do you believe would be desirable?
Marcus: I expect that the GenAI companies will be forced to license their raw materials, just like streaming services are, and that's a good outcome.
The Register: Is there reason to hope for meaningful AI regulation that's not watered down by lobbying and for meaningful enforcement of that regulation? Despite several decades of concern about online privacy, not much has changed in the US – we still have no national privacy law. And when large tech firms do get fined over privacy missteps, the amounts are trivial.
Marcus: I'm really not optimistic about the US; our citizens have far less protection around privacy than our European counterparts, and far less protection around AI as well. Unless our citizens speak up – much more loudly than they have – that's how it will remain, and we will be quite vulnerable. That's exactly why I wrote the book: to encourage citizens to take action, maybe even boycott GenAI until it gets its house in order. ®