The Guy Who Wrote the Viral AI Post Wasn’t Trying to Scare You
You probably don’t know Matt Shumer’s name, but there’s a pretty good chance you’re familiar with his thoughts about AI. On Tuesday, Shumer published an essay to X titled “Something Big Is Coming,” which almost immediately caught fire online. (According to X’s not-always-reliable metrics, it stands at 73 million views as of Thursday morning.) In it, Shumer, the founder of an AI company, warns that enormous advances in technology are poised to reshape society much more quickly than most people realize. He analogizes artificial intelligence’s rapid improvement in recent months to the beginning of the COVID pandemic — a looming, seismic societal change that only a small faction is really paying attention to. And he warns that the tech sector is the canary in the AI coal mine: “We’re not making predictions. We’re telling you what already occurred in our own jobs, and warning you that you’re next.”
As Shumer’s post ricocheted around the internet, it drew a predictably divided response. Some saw it as an incisive warning of things to come, while others dismissed it as another piece of disposable AI hype or a naked money grab. I caught up with Shumer on Thursday to discuss the overwhelming response to his essay, why he used AI to help write it, and whether all of our jobs are actually in immediate danger.
Your essay has been making the rounds in a way that few things do — it broke social-media containment. What has the reaction been like?

It’s insane, because I didn’t expect this. I didn’t expect anything close to this. I originally wrote it for my mom and dad because — I was home with them this weekend for the Super Bowl — I’m 26 and I was trying to explain to them what was going on. I felt that there was an inflection point when GPT-5.3-Codex came out. I tried it and was like, Oh my God, this is not just a step better, this is massively better, and it’s the first sign of something a little scary.
The way I view it, AI labs have focused on training models that are really good at writing code, and that’s really important because what we see in the engineering space is like a year ahead of what everybody else has access to. So a model today is, let’s say, at level 50 at writing code, but level 20 at law or something of the sort. The next model will probably be at level 100 on code and level 50 on law. It woke me up, and I felt that I had to share it. I was looking around and I was like, What can I give to my parents to help them understand this, so that they’re not just thinking their idiot son — that’s probably a terrible way of putting it, but you know what I mean — is saying “This is happening” and they have no way to know whether or not to believe it?
There are a lot of pieces of great writing in this space, but they’re all extremely technical, and I think that’s part of the reason people don’t understand what’s coming. They’re written for nerds by nerds. They almost take pride in sounding as smart as possible. So I figured it would probably be important to write something that they could understand. And as I wrote it, I realized it could actually help other people. I decided to post it and it quickly broke containment. I have friends who are very much outside of the tech bubble, and it’s being passed around their offices, and they’re texting me and it’s a surreal experience. But I’m glad it’s happening. I didn’t expect my article to be the thing that did it, but it needed to happen. People need to understand what’s coming. It may not affect them today, it may not affect them in a year, but at some point it will, and I’d rather people be aware and have the opportunity to prepare than just be blindsided by forces that they can’t really control.
Did you use AI to write any of it?

I did. I actually posted a little bit about this because I think it’s important for people to know. People are responding like, “Look, this is AI-written. You should ignore it.” And it’s not entirely AI-written. I used it to help edit, to sort of iterate on my ideas and the ways of phrasing things, and it was incredibly helpful, but that’s kind of the point. If this was helped by AI and got millions of views, it’s clearly good enough.
I didn’t say, “Go write this article.” What I did was feed it a bunch of articles that I have read over the years that I think articulate these points really well. I said, “Here’s what I agree with, here’s what I disagree with. Here’s my unique spin and take on this.” And then I said, “Interview me — ask me dozens of questions.” And I spent over an hour answering that first wave of questions. Then we repeated it, and basically I ended up building this huge dossier of everything I believe and everything I wanted to explain. And then for each thing, I asked, “How can I explain it in a way that’s actually useful and understandable by the average person?”
I ended up writing a first draft based on that. Once that was done, I passed my first draft into the AI and I said, “Hey, from an editorial perspective, can you critique this?” It gave me feedback and I adjusted it. So it was very much like having a co-writer, and it clearly worked pretty well. I wouldn’t say it’s the best-written thing ever. I didn’t expect it to do this, and if I had expected this sort of virality, I would’ve put some more work into a lot of parts.
One common critique of the article I’ve seen is that the AI revolution you describe in coding doesn’t neatly apply to other fields, since coding is such a discrete task. Here’s one post I saw: “Coders freaking out that it’s replacing them and extrapolating from their extremely weird domain (as in unusual among knowledge work) to all of work is going to be a major theme of 2026 and kind of embarrassing by 2027.” What’s your response to that?

I understand why people think that, and I think it’s very easy to feel like, Oh, the AI can’t do the thing I do because X, Y, and Z. There were arguments like that with coding for a very long time, but what we’ve seen time and again is if the AI labs are sufficiently funded and pick a goal to go after, whether it’s to make something that generates videos or that writes code, given enough money and enough time and sufficient incentive to do it from a financial perspective — which clearly there is — they solve it. It’s this very interesting technology where whatever data you have, if you train the model on it, it can learn it.
My dad’s a lawyer. Is AI going to stand in front of a courtroom? No, it’s not, but I worry that associates are going to have a harder time getting hired. I had dinner with my lawyer a couple nights ago and he was saying that they’re using one of these off-the-shelf programs and it’s already at the level of about a second-year associate. Do I know exactly what that means personally? No, but I’ve talked with enough people who are pushing this stuff in their industries who aren’t in the bubble, but are just like, “Hey, I want to see where this is going,” and the rate that they’re saying it’s improving at is pretty clear. When you actually break a job into its steps, anything that could be done on a computer can theoretically be done by these models.
But I don’t think — and I wanted to make this clear in the article, and if I had known how viral this was going to go, I probably would’ve spent more time trying to make this clearer — that just because the AI can do something doesn’t mean it’s going to immediately proliferate across the economy. There are so many structural things, whether it’s regulations, standards, or people’s comfort with this sort of stuff, and that means that for certain industries it’s going to take more time than others. Code is in this crazy place today where people are saying it’s solved and you can build anything, and that is true, but we’re still figuring out what it means for jobs. I don’t know. No one knows. I think each industry is going to have its own separate reckoning, and it’s going to look different for every industry. Ten years from now, I think almost everything will be extremely different and almost unrecognizable, but in the interim, everybody has to figure out what this means for them and their industry. But assuming that the AI just can’t do their thing and that their thing is special is not the right approach. Maybe that’s true, but if there’s even a 20 percent chance it’s not, it’s worth preparing.
Another related version of that critique is that for a lot of jobs, dealing with other people is a huge part of it. I’m sure AI could beat a law associate at document review, if not now, then soon. But then you have to actually deal with the client. Most people’s jobs have components like that.

A lot of what we’re seeing, and what people know as AI today, isn’t actually the state of the art of what exists. I’m assuming most people that use it are using the free version. The paid version is dramatically better, but there’s a whole level above that of more truly agentic systems, and that’s the sort of scary stuff right now.
When I go and I use AI to build an app, I am not saying, “Hey ChatGPT, build an app.” I have a specialized program that has access to everything my computer has access to, and it can use tools like a person and go off and do things. So I say, “Hey, can you get this on the internet and then see if you can find some early users on Reddit and communicate to them that they should try it?” That is actually possible today.
That is a little spooky.

It’s spooky as hell, and I’ve been one of those people that has been predicting this for years, but predicting it and seeing it is a whole different story.
Many people were suspicious of the fact that, at the end of your article, you advise people to pay for certain products and to follow you on X to keep up with AI news.

The following-me-on-Twitter thing — I agree. The AI products — I have no stake in Anthropic, I have no stake in OpenAI. They don’t pay me or anything. I can see why people might think that, but in fact, I have paid a lot of OpenAI bills over the years when I’ve tested this stuff. For example, one of the start-ups I invested in basically works with you to allow your AI to not just chat back and forth with you on ChatGPT, but to give it access to its own email inbox, where it can actually reach out and chat with other people and other AIs. I also oscillate back and forth between “This is interesting” and “This is terrifying,” and I don’t know which one is right. I think they’re both right.
It also strikes me that a hallmark of the AI industry from the beginning, as far as I can tell as a lay observer, is that people love to make sweeping predictions about what’s going to happen in a year or two. I’m pretty sure that in 2023 and 2024, I was hearing that by 2026, white-collar jobs would be totally endangered. Ten years ago, Geoffrey Hinton, an AI pioneer, famously predicted that radiologists would be obsolete by 2020. That did not happen, and it still hasn’t come close to happening. Do you find it a little uncomfortable to be making these somewhat apocalyptic forecasts?

Yes. The way that I think about it is, Do I know this to be 100 percent true? Do I know this to be absolutely, certainly going to happen? No, I don’t. However, given what I’ve experienced and the preview that I have into the industry, I think there’s a better-than-not chance it will.
I find the pandemic analogy you use a little off, because in January and February 2020, it’s true that a very small number of people were actually paying attention to what was happening in China, and you had to be following the right people on Twitter. In this case — to take an old-school barometer of success — the Time Person of the Year in 2025 was “the architects of AI.” These companies are widely used and in the news, and I’ve had a million conversations with people who are worried about the implications for their jobs. It’s not exactly under the radar.

It’s not. But people talk a lot about Terminator-style doom, and I don’t see many people talking about the impact on jobs. In theory, if this could happen two years from now to your job, if your job happens to be one of the more exposed ones, maybe you should just focus a little more on saving today. That’s the angle I wanted to take it from.
Tell me more about this agentic stuff, where AI interacts with the world by itself. Where do you think that could go next?

I think it’s just reliability. I’ve been doing this since 2019 — I dropped out of college after realizing what this was going to do, because I knew that if I didn’t put everything I had into it, I’d regret it for the rest of my life. The best rule of thumb I can give to anybody, and this has been the one thing that’s held true over the years, is that it’s not about a specific prediction. It’s not saying it’s going to do X, Y, Z at any given point. It’s just that if a model can kind of sort of do something today, even if it’s not good at it, even if it’s unreliable, then in a year or two you can bet that it’ll be near-perfect at that thing.
I can’t say the models today are reliable at using a computer, and I’ve actually been in this part of the space. My company actually built the first publicly available AI that could use a web browser and actually go in and order you a burrito. And was it useful at the time? No, because it got it wrong 50 percent of the time. But now we’re at 80, 90 percent. It can almost use a computer today, which means in a year or two, you can expect that these things will be nearly perfect, probably better than people at using a computer. So if there’s a task that can be done on a computer by a human and doesn’t require going somewhere in person, it’s very likely that AI will be able to do it reliably and well.
Getting to 90 percent reliability for something like that, or 95, or even 99, is great, but doesn’t it have to be 100 percent? Because you don’t want to entrust an AI to do something and have it screw up one percent of the time, and it could be a very consequential screwup.

I’ve thought about this a lot, and I go back and forth. You could take the argument that 99 percent reliability isn’t enough, but then I’ve hired and worked with a lot of people over the last six years or so, and I would say that the rate of success is far lower than 99 percent for most things. So it is very much a perception thing. There are also tricks to mitigate this, and I think this is one of the things labs should be focusing on that they’re not. If you just tell the AI, “Hey, I want you to do this thing,” there might be a one percent failure rate. But if you instead have a system with two AIs, you say, “AI One, do this thing,” and then when it’s done, you say, “AI Two, did they do it correctly or did they make a mistake?”
To check the work.

Exactly. You actually see that the failure rate goes way, way down. This has been documented since 2020, when AI wasn’t great. If it was writing a paragraph and most of the paragraphs were awful, you could say, “Hey, can you generate 20 different versions?” And then you have a different AI critique them and pick the best one. The results are far better, and I think a lot of those sorts of things haven’t been implemented yet in full. They’re starting to be.
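The generate-many-then-critique pattern he describes is simple enough to sketch in code. Below is a minimal, illustrative Python version in which deterministic stand-in functions replace the real model calls; the function names and the embedded quality scores are inventions for this sketch, not any real API.

```python
# Minimal sketch of the "generate n candidates, have a second AI pick the
# best one" pattern described above. The generator and critic here are
# deterministic stand-ins for model calls; nothing below is a real API.
import random


def generate_candidate(prompt: str, seed: int) -> str:
    """Stand-in for one model call that drafts a candidate answer."""
    rng = random.Random(seed)  # independent draft per seed
    quality = rng.random()     # pretend intrinsic quality of this draft
    return f"draft-{seed} (quality={quality:.3f})"


def critique(candidates: list[str]) -> str:
    """Stand-in for a second model that scores drafts and picks the best.
    Here it just reads back the quality number embedded in each draft."""
    def score(candidate: str) -> float:
        return float(candidate.split("quality=")[1].rstrip(")"))
    return max(candidates, key=score)


def best_of_n(prompt: str, n: int = 20) -> str:
    """Sample n independent drafts, then let the critic choose one.
    If each draft fails independently with probability p, the chance that
    all n fail is roughly p**n, which is why the failure rate drops fast."""
    drafts = [generate_candidate(prompt, seed) for seed in range(n)]
    return critique(drafts)


if __name__ == "__main__":
    print(best_of_n("Write a paragraph about AI", n=20))
```

The same shape applies to the two-AI check he mentions: one call does the work, and a separate call verifies it before anything is accepted.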
You have been doing this a long time. There’s been a widespread feeling among people who do this work — at least among coders — that they may be rendered obsolete, or already are being rendered obsolete, and it’s bittersweet. The new tools are both incredibly helpful and sobering. They bring up all kinds of questions about human beings’ value in the world. Are you feeling that in your own work right now?

This is probably the trickiest question you’ve asked, because there are so many facets, and I hope I hit on the ones that are important. A bifurcation is the best way I can describe what I’ve seen in my industry. People who really love what they do, who are already working insanely hard and are adopting this, are pulling away in an extremely strong way. Somebody who was a top-percentile engineer is now ten or 20 times as effective as they were before, and they can do the work of many more people. Then you have the other side of things, which is that folks who aren’t that determined and aren’t top-percentile already are not really getting the value out of it that others are, and it’s not making that big of a difference for them.
And I’m worried, because we kind of have this social contract where you go to college, you get a job, and you’ll be taken care of. But AI is so skewed, at least in engineering, toward the people who are already hard workers. Some people are just trying to get by, which is a totally fair thing to do; not everybody wants to be exceptional. And they’re struggling. I’m a hard worker, but that doesn’t mean everybody else should be screwed. And I don’t know if it’s going to be the same for every industry, but that’s what I’m seeing here.
That whole social contract — going to college and getting taken care of — has been getting harder in most industries anyway, and this could really magnify that. I think it’s a particularly American thing to try to be an exceptional striver. It’s easy to imagine other places, like Europe, resisting AI more than we do. Which might be a good thing.

There are people in my life that I love who, even without AI, are really struggling to get work right now.
Look, I didn’t put this out to scare people, although I understand that there are some elements of that. My goal is to help people see what they might be neglecting, what’s not in their circles yet, because they should be able to know and make their own decisions about how to prepare or not prepare. It feels unfair that so many people think AI is a nothingburger when it’s clearly not. Maybe it’s not everything I say it will be — I think it will be — but it’s not a nothingburger, and no matter what, people should be thinking at least a little bit about it. And my hope is that this just gets people talking and thinking.
So you’re basically aiming at a place like Bluesky, where the idea that AI can do anything useful at all gets immediately swatted down. It’s funny how people get into their factions about stuff like this. Everything takes on a political valence.

It shouldn’t. It should be about what you need as a person. People get too tribal about things. It’s different for everybody, and everybody should have a different response to this. For some people, it truly won’t matter. Even if everything I say comes to pass, my nurse in a hospital isn’t being replaced anytime soon. They shouldn’t worry — at least I don’t think so — but some people should, and I think it’s important that they know.
This interview has been edited for length and clarity.