Big tech firms pushing AI regulation are not seeking the public interest


Large Language Models (LLMs) have taken the internet by storm thanks to programs like OpenAI’s ChatGPT and Microsoft’s new Bing chatbot. These tools are opening up countless possibilities for innovation across industries, possibilities that could greatly benefit human welfare. Some big tech firms, however, have begun pushing for artificial intelligence regulation, not for the public benefit but for their own self-interest.

Sen. Chuck Schumer (D-NY), among others, has expressed caution about the advancement of AI technology, citing risks such as privacy violations and the threat of jobs being replaced by AI. As is typical when new technologies emerge, some commentators and government officials have sought to regulate AI because of the dangers they perceive.

Tech companies in the AI space have also called for regulation. This appears strange at first glance. The president of Alphabet, Google’s parent company, published a statement laying out Google’s recommendations for AI regulation. Microsoft president Brad Smith proposed a five-point plan for regulating the industry. OpenAI CEO Sam Altman appeared before Congress and agreed with proposals for regulation including licensing for AI tools and a new federal regulatory agency.

Regulation raises these firms’ costs and puts up other obstacles. Why would the biggest companies in the AI industry advocate for new regulations on themselves?

Perhaps these companies are less interested in the public welfare than in their own self-interest. Regulation may raise costs for established firms, but it will also raise costs for smaller firms and startups. Large, established firms with more capital can absorb those costs; small firms in the AI industry often cannot, forcing them to go out of business or be acquired by one of the bigger players.

Additionally, new regulations would establish high barriers to entry, preventing future startups from entering the industry. By shutting out small firms and startups, regulation would shut out the competition that would otherwise threaten the market position of large tech firms. It is no wonder, then, that big tech firms are pushing for AI regulation.

One proposal big tech firms have supported is a licensing requirement for companies using powerful AI tools. Government licenses typically require a fee to obtain and are sometimes tied to further requirements that small firms may not be able to meet.

AI regulation would also harm consumers. Reduced competition would allow big tech firms to charge higher prices for lower-quality AI products. Regulation would stifle innovation by excluding new entrants with new ideas and by reducing the incentive for incumbents to attract new customers by offering innovative products.

With the benefits incumbents can gain from reducing competition, it is clear why big tech firms would push for AI regulation at the expense of smaller competitors and consumers. Instead of assuming that big tech firms are proposing these regulations in the public interest, we should recognize that these firms are as self-interested as anyone else and that their policy proposals may not serve the public welfare.