Google recently announced the debut of its large language model PaLM 2, which, it claims, surpasses GPT-4 in certain aspects and will serve people better.
The move challenges the open letter issued in March by the non-profit Future of Life Institute, which called for a six-month pause on all giant AI experiments so the time could be used to implement a set of shared safety protocols for advanced AI design and development. However, the sooner a company completes its research, the larger its market share will be. According to US-based Next Move Strategy Consulting, the $100 billion AI market in the US alone is likely to grow 20-fold by 2030, while the global market might even reach $1.85 trillion in the same period.
Any company executive knows, therefore, that no financier would risk halting work for that long. That's the biggest dilemma in the whole affair: technologists may have realized the risks of AI's rapid evolution, but those putting their money into the research are driven only by profit. It is therefore easy to persuade those behind academic research, but hard to persuade entrepreneurs whose job is to make profits. Even if some entrepreneurs are willing to suspend AI research for six months, their rivals may not be, and they would risk losing market share as a result.
The way to solve this problem lies not in persuading entrepreneurs to halt research but in drafting regulations that minimize their worries about the risks. Calling on lawmakers to enact such regulations might be a good idea, as it would ensure a level playing field for all players.
The Future of Life Institute, which called for "an AI pause", would do well to direct its call at those investing in the research rather than at the scientists, so that such rules are drafted sooner.