My old boss Marc Andreessen liked to say that every failed idea from the Dotcom bubble would work now. It just took time - it took years to build out broadband, consumers had to buy PCs, retailers and big companies needed to build e-commerce infrastructure, a whole online ad business had to evolve and grow, and more fundamentally, consumers and businesses had to change their behaviour. The future can take a while - it took more than 20 years for 20% of US retail to move online.
People forget this now, but the iPhone took time as well. Apple sold just 5.4m units in the first 12 months, and it took until 2010 for sales to really work (the iPod took even longer). The same, of course, applies to the enterprise. If you work in tech, cloud is old and boring and done, but it’s still only a third or so of enterprise workflows 25 years after Marc Benioff tried to persuade people to do software in the browser.
ChatGPT happened a lot faster. It exploded into our consciousness in late 2022, and it took all the oxygen in tech almost immediately. If you’re building a startup today that isn’t focused on generative AI, all your friends will point at you and laugh, but much more importantly, ChatGPT got to 100m users in just two months. By this spring, unprecedented numbers of people had both heard of it and used it.
As with every observation about the acceleration of tech adoption, a lot of this is ‘standing on the shoulders of giants’ - OpenAI didn’t have to wait for people to buy devices or for telcos to build DSL or 3G. For consumers, ChatGPT is just a website or an app, and (to begin with) it could ride on all of the infrastructure we’ve built over the last 25 years. So a huge number of people went off to try it last year.
The problem is that most of them haven’t been back. If you ask what ‘used’ actually means, it turns out that most people have played with it once or twice, or go back only every couple of weeks.
This is a very glass half-empty / glass half-full kind of chart, as the caption points out. On one hand, getting a quarter to a third of the developed world’s population to try a new product in 18 months is very hard. But on the other, most people who tried it didn’t see how it was useful.
Of course, there’s a selection bias here: if you’ve bought a $650 smartphone, you’ve already decided that it’s useful, and you’re a lot less likely to abandon it than a website that you spent 5 minutes playing with. And you could also point out that the best versions of the models are often behind paywalls.
But if this is the amazing magical thing that will change everything, why do most people say, in effect, ‘very clever, but not for me’ and wander off, with a shrug? And why hasn’t there been much growth in the active users (as opposed to the vaguely curious) in the last 9-12 months, as shown in a bunch of similar surveys? The most revealing - possibly - is Google Trends, which must always be used with caution, but which seems to show a correlation with school holidays.
There are a couple of ways that you could answer this. It does take time to change your habits and ways of thinking around an entirely new kind of tool (remember when we printed out emails?). We can be certain that the models will get better at least to some extent - agents, voice and multimodal will expand the problems they can solve. But I’ve also argued (here last October, for example) that an LLM by itself is not a product - it’s a technology that can enable a tool or a feature, and it needs to be unbundled or rebundled into new framings, UX and tools to become useful. That takes even more time.
I think you can see all of the same issues in this data from Bain, surveying enterprise use of LLMs. Again, this is a glass half-empty / glass half-full chart: there’s a lot of interest, and quite a lot of deployment, but it depends where you look.
Unlike some surveys, which just ask, in effect, ‘is anyone at all anywhere in your organisation using this?’ (um, yes), Bain tried to split the pilots, experiments and trials from the deployment. Everyone has a bunch of tests, but far fewer people are trusting something in their business to this yet, and all of that varies a huge amount depending on your use cases. LLMs are already very useful for coding and marketing, but much less useful for lawyers or HR (though of course lawyers are notably slow adopters of any new tech).
Accenture, meanwhile, gave us a great illustration of the scale of that enterprise experimentation, but also how much it’s only experimentation for now - again, a glass half-full / half-empty illustration. Last summer it proudly announced that it had already done $300m of ‘generative AI’ work for clients… and that it had done 300 projects. Even an LLM can divide 300 by 300 - that’s a lot of pilots, not deployment. The number has gone up a lot since then, but what’s the mix? Indeed, with BCG saying that it expects 20% of its revenue this year will be helping big companies work out what to do about generative AI, the single biggest business from this in 2024 might be for consultants explaining what it is. (It’s the only thing that anyone wants to talk to me about.)
A lot of these charts are really about what happens when the utopian dreams of AI maximalism meet the messy reality of consumer behaviour and enterprise IT budgets - it takes longer than you think, and it’s complicated (this is also one reason why I think ‘doomers’ are naive). The typical enterprise IT sales cycle is longer than the time since ChatGPT was launched, and Morgan Stanley’s latest CIO survey says that 30% of big company CIOs don’t expect to deploy anything before 2026. They might be being too cautious, but the cloud adoption chart above (especially the expectation data) suggests the opposite. Remember, also, that the Bain ‘Production’ data only means that this is being used for something, somewhere, not that it’s taken over your workflows.
Stepping back, though, the very speed with which ChatGPT went from a science project to 100m users might have been a trap (a little as NLP was for Alexa). LLMs look like they work, and they look generalised, and they look like a product - the science of them delivers a chatbot and a chatbot looks like a product. You type something in and you get magic back! But the magic might not be useful, in that form, and it might be wrong. It looks like product, but it isn’t.
Microsoft’s failed and forgotten attempt to bolt this onto Bing and take on Google at the beginning of last year is a good microcosm of the problem. LLMs look like better databases, and they look like search, but, as we’ve seen since, they’re ‘wrong’ enough, and the ‘wrong’ is hard enough to manage, that you can’t just give the user a raw prompt and a raw output - you need to build a lot of dedicated product around that, and even then it’s not clear how useful this is. Firing LLM web search out of the gate was falling into that trap. Satya Nadella said he wanted to ‘make Google dance,’ but ironically the best way to compete with ‘Bing Copilot’ might have been to sit it out - to wait, watch, learn, and work this through before launching anything (if Wall Street had allowed that, of course).
The rush to bolt this into search came from competitive pressure, and stock market pressure, but more fundamentally from the sense that this is the next platform shift and you have to grab it with both hands. That’s much broader than Google. The urgency is accelerated by that ‘standing on the shoulders of giants’ moment - you don’t have time to wait for people to buy devices - and by the way these things look like finished products. And meanwhile, the firehose of cash that these companies produced in the last decade has collided with the enormous capital-intensity of cutting-edge LLMs like matter meeting anti-matter.
In other words - “These things are the future and will change everything, right now, and they need all this money, and we have all this money.”
As a lot of people have now pointed out, all of that adds up to a stupefyingly large amount of capex (and a lot of other investment too) being pulled forward for a technology that’s mostly still only in the experimental budgets.
That rush means we’ve skipped the slow painful period at the bottom of the S-Curve, where you try to work out what product-market fit looks like, while you build the actual product. The web, and e-commerce, and the iPhone had to go through a painful process of growing and learning to become useful. The App Store wasn’t part of the plan for the iPhone, and Tim Berners-Lee’s original web browser included an editor, because this looked like a better network drive (ask your parents), not a publishing platform. LLMs skipped that part, where you work out what this is and what it’s for, and went straight to ‘it’s for everything!’ before meeting an actual user.
That makes this chart particularly interesting. The straightforward skeptical interpretation is that this is a classic surge in investment that will inevitably turn into a bubble, if it isn’t already, just like the two above.
But you could also suggest that these startups are a collective Silicon Valley bet that LLMs are a technology, not a product, and that we need to go through the conventional process of customer discovery towards product-market fit. The thing that really drives a bubble in generative AI, at least arguably, is the idea that history is over and LLMs will be able to do everything, and in that case we wouldn’t need any of these companies. On that view, these companies are the anti-bubble (a neatly Panglossian idea).
Of course, the crazy dreams of the Dotcom bubble really did happen, and the AI maximalists might be right - it may be that LLMs can do the whole thing. LLMs may be able to swallow most or all of existing software, and they may be able to automate vast new classes of task that were never in software before, just by themselves and with whole new layers of product, company and enterprise sales built around them. This might be the first S-Curve in tech history that turns out to be a J-Curve. But not this year.