A note of threat has crept into the otherwise optimistic language of those who champion the adoption of artificial intelligence.
You can hear it in the rhetoric of UK technology secretary Peter Kyle, who told workers and businesses this month: “Act now, and you will thrive into the future. Don’t, and I think that some people will be left behind.” You can hear it in the marketing spiel used by software companies such as Salesforce: “Companies that fail to leverage AI in their business risk being left behind those that do.” And you can hear it in the way some chief executives are beginning to communicate with staff. “AI is coming for your jobs,” Fiverr chief executive Micha Kaufman wrote in a memo to staff last month. “Are we all doomed? Not all of us, but those who will not wake up and understand the new reality fast, are, unfortunately, doomed.”
But is it true that "act now" and "be left behind (and/or doomed)" are the only two options available?
Of course, there is such a thing as a first-mover advantage, and you can see why that might accrue to some companies that race to install new AI systems such as autonomous agents. If these investments lead to lower costs and higher productivity, a pioneer could offer a lower price or better service to customers and capture more market share. First-movers also have a better chance of working with tech vendors to shape new systems around their own needs. And for client-facing companies such as consultancies, having a reputation as a first-mover probably helps them to sell their AI expertise to clients.
But there is such a thing as a "second-mover advantage", too — even in sectors that depend on powerful network effects. Facebook overtook MySpace. Google overtook Ask Jeeves. And — as one senior business executive noted recently in a round-table discussion I attended — agentic AI might prove to be one instance where the second-mover advantage is rather strong.
Being a second-mover is particularly useful when the benefits of a new technology are uncertain and the risks are high, because what you lose in speed you gain in information. You can watch what the first-movers do, make note of their blind alleys and their mistakes, and then chart a more effective course. You are more likely to avoid becoming locked in to technology vendors that ossify your workflows but whose products prove to be inadequate. A recent report by McKinsey, which advocates for agentic AI, warns of a whole host of other risks too, such as “uncontrolled autonomy, fragmented system access, lack of observability and traceability, expanding surface of attack, and agent sprawl and duplication”.
My argument here is not against the idea that some companies should move quickly (as of course they will, and are) but simply that to be temporarily "left behind" can be a valid strategy too. It doesn't necessarily mean you will be a loser in the long run. One study of technology adoption by UK microbusinesses in the 2010s (albeit one covering an earlier, pre-generative generation of AI technologies) found that companies that adopted AI enjoyed higher subsequent innovation capacity, regardless of whether they were first-movers or second-movers.
Indeed, the message that you should “act now or miss out” has more in common with high-pressure sales techniques than business strategy. Online retailers use “urgency claims” such as countdown clocks because they work: the academic literature suggests consumers under time pressure use mental short-cuts to make decisions. Urgency claims can also instil a fear of loss, a sense of competition with others, and a “bandwagon effect” whereby people are given the impression that everyone else is already doing or buying something.
It should be no surprise that technology companies are using these well-known marketing strategies: they have something to sell, and an awful lot of capital investment on which they need to make a return. But there is no need for politicians and business leaders to echo them, especially not by extending the threat to staff. It is well established in neuroscience that when the brain's fear system is active, its ability to explore, innovate and take risks is diminished. If you want staff to experiment with how to implement AI effectively, making them fear for their jobs is not the best way to do it. Fearful staff are also less likely to be honest about emerging risks, lest they look like naysayers.
It is fair enough for tech companies with products to sell to repeatedly urge the world to “act now or lose out”. But the rest of us needn’t confuse a sales technique with anything more profound.