On Tuesday of this week, neuroscientist, founder and author Gary Marcus sat between OpenAI CEO Sam Altman and Christina Montgomery, who is IBM’s chief privacy and trust officer, as all three testified before the Senate Judiciary Committee for over three hours. The senators were largely focused on Altman because he runs one of the most powerful companies on the planet at the moment, and because Altman has repeatedly asked them to help regulate his work. (Most CEOs beg Congress to leave their industry alone.)
Though Marcus has been known in academic circles for some time, his star has been on the rise lately thanks to his newsletter (“The Road to A.I. We Can Trust“), a podcast (“Humans vs. Machines“), and his relatable unease around the unchecked rise of AI. In addition to this week’s hearing, for example, he has this month appeared on Bloomberg television and been featured in the New York Times Sunday Magazine and Wired among other places.
Because this week’s hearing seemed truly historic in ways — Senator Josh Hawley characterized AI as “one of the most significant technological innovations in human history,” while Senator John Kennedy was so charmed by Altman that he asked Altman to pick his own regulators — we wanted to talk with Marcus, too, to discuss the experience and see what he knows about what happens next.
Are you still in Washington?
I am still in Washington. I’m meeting with lawmakers and their staff and various other interesting people and trying to see if we can turn the kinds of things that I talked about into reality.
You’ve taught at NYU. You’ve co-founded a couple of AI companies, including one with famed roboticist Rodney Brooks. I interviewed Brooks on stage back in 2017, and he said then that he didn’t think Elon Musk really understood AI and that he thought Musk was wrong that AI was an existential threat.
I think Rod and I share skepticism about whether current AI is anything like artificial general intelligence. There are several issues you have to take apart. One is whether we’re close to AGI, and the other is how dangerous the current AI we have is. I don’t think the current AI we have is an existential threat, but it is dangerous. In many ways, I think it’s a threat to democracy. That’s not a threat to humanity. It’s not going to annihilate all humans. But it’s a pretty serious risk.
Not so long ago, you were debating Yann LeCun, Meta’s chief AI scientist. I’m not sure what that flap was about – the true significance of deep learning neural networks?
So LeCun and I have actually debated many things for many years. We had a public debate that David Chalmers, the philosopher, moderated in 2017. I’ve been trying to get [LeCun] to have another real debate ever since and he won’t do it. He prefers to subtweet me on Twitter and stuff like that, which I don’t think is the most adult way of having conversations, but because he is an important figure, I do respond.
One thing that I think we disagree about [currently] is, LeCun thinks it’s fine to use these [large language models] and that there’s no possible harm here. I think he’s extremely wrong about that. There are potential threats to democracy, ranging from misinformation that is deliberately produced by bad actors, to accidental misinformation – like the law professor who was accused of sexual harassment even though he never committed it – [to the ability to] subtly shape people’s political beliefs based on training data that the public doesn’t even know anything about. It’s like social media, but even more insidious. You can also use these tools to manipulate other people and probably trick them into anything you want. You can scale them massively. There are definitely risks here.
You said something interesting about Sam Altman on Tuesday: you told the senators that he hadn’t told them what his worst fear is, which you called “germane,” and you redirected the question back to him. What he still didn’t say is anything having to do with autonomous weapons, which I talked with him about a few years ago as a top concern. I thought it was interesting that weapons didn’t come up.
We covered a bunch of ground, but there are lots of things we didn’t get to, including enforcement, which is really important, and national security and autonomous weapons and things like that. There will be several more of [these].
Was there any talk of open source versus closed systems?
It hardly came up. It’s obviously a really complicated and interesting question. It’s really not clear what the right answer is. You want people to do independent science. Maybe you want to have some kind of licensing around things that are going to be deployed at very large scale, since they carry particular risks, including security risks. It’s not clear that we want every bad actor to get access to arbitrarily powerful tools. So there are arguments for and there are arguments against, and probably the right answer is going to include allowing a fair degree of open source but also having some limitations on what can be done and how it can be deployed.
Any specific thoughts about Meta’s strategy of letting its language model out into the world for people to tinker with?
I don’t think it’s great that [Meta’s AI technology] LLaMA is out there, to be honest. I think that was a little bit careless. And, you know, that literally is one of the genies that is out of the bottle. There was no legal infrastructure in place; they didn’t consult anybody about what they were doing, as far as I know. Maybe they did, but the decision process with that or, say, Bing, is basically just: a company decides we’re going to do this.
But some of the things that companies decide might carry harm, whether in the near future or in the long term. So I think governments and scientists should increasingly have some role in deciding what goes out there [through a kind of] FDA for AI where, if you want to do widespread deployment, first you do a trial. You talk about the cost benefits. You do another trial. And eventually, if we’re confident that the benefits outweigh the risks, [you do the] release at large scale. But right now, any company at any time can decide to deploy something to 100 million customers and have that done without any kind of governmental or scientific supervision. You have to have some system where some impartial authorities can go in.
Where would these impartial authorities come from? Isn’t everyone who knows anything about how these things work already working for a company?
I’m not. [Canadian computer scientist] Yoshua Bengio is not. There are lots of scientists who aren’t working for these companies. It is a real worry, how to get enough of those auditors and how to give them incentive to do it. But there are 100,000 computer scientists with some facet of expertise here. Not all of them are working for Google or Microsoft on contract.
Would you want to play a role in this AI agency?
I’m interested. I feel that whatever we build should be global and neutral, presumably nonprofit, and I think I have a good, neutral voice here that I would like to share and try to get us to a good place.
What did it feel like sitting before the Senate Judiciary Committee? And do you think you’ll be invited back?
I wouldn’t be shocked if I was invited back but I have no idea. I was really profoundly moved by it and I was really profoundly moved to be in that room. It’s a little bit smaller than on television, I suppose. But it felt like everybody was there to try to do the best they could for the U.S. – for humanity. Everybody knew the weight of the moment and by all accounts, the senators brought their best game. We knew that we were there for a reason and we gave it our best shot.
“Gary Marcus is happy to help regulate AI for the U.S. government” by Connie Loizos, originally published on TechCrunch