AI in the UK: Lords’ Report Makes Startups Less Competitive

The United Kingdom’s House of Lords Select Committee on Artificial Intelligence recently published the report “AI in the UK: Ready, Willing and Able?” Its recommendations on how the UK can become a global leader in artificial intelligence are off the mark. While the report contains numerous uncontroversial and welcome suggestions on topics such as increased use of AI in the National Health Service, more visas for talented technologists, and the need to make public sector data sets available to the private sector, many of its recommendations would hamper the development of AI domestically and antagonize foreign innovators.

The report calls on the government “to review proactively the use and potential monopolization of data by big technology companies operating in the UK.” It asserts that a few large companies, such as “Alibaba, Alphabet, Amazon, Apple, Facebook, Microsoft, and Tencent,” have access to unprecedented amounts of data, making it harder for UK startups to compete. This is not only a grave misuse of the term “monopolization” (the report itself names seven companies, each with access to a distinct data set), but it also misunderstands how data is used in AI.

Despite acknowledging that data can be used for multiple purposes simultaneously, and that it can be duplicated and shared without loss of quality, the Lords’ report still claims that these companies’ accumulation of ever-greater amounts of data harms startups. This claim rests on a confusion between network effects, an increasingly popular and misunderstood term, and learning by doing.

Network effects are often cited to argue that online firms tend to be winner-take-all, because a service’s value to each user derives from its other users. Google, it is claimed, will continue to dominate search because everyone currently uses it, giving it more data, which makes it an even better service for users. This is a historically naive claim: Google was not the first search engine, and it displaced earlier competitors that had more users and more data by providing a better product.

What powers AI is not raw data itself but the effective use of data. A company with a larger data set does not necessarily dominate through network effects, because it does not necessarily know how to use that data well. Google remains in routine use because it is constantly learning by doing: tweaking its algorithms to make better use of the data it has. This is not unfair to smaller companies; it is a benefit to consumers.

The Lords are right to worry about the UK failing to make use of its AI talent, but attempting to pry resources away from more innovative American and Chinese firms is not the answer. The UK has some of the world’s top AI researchers, though their research often fails to translate into successful companies.

The report acknowledges the need to make it easier for universities to form “spin-out companies,” which are effectively startups in which the university owns the intellectual property. Reform of the current spin-out procedure is necessary, but it addresses only a small part of the many regulatory barriers facing startups in the UK. It is not enough to focus on university research alone when the American companies the report criticizes for their size were not university spin-outs themselves.

Any removal of obstacles to commercializing AI research would be overshadowed by the new obstacles that would be put in place if the report’s recommendations were adopted. The report calls for increased transparency and “explainability” (helping those without technical backgrounds understand the decision-making process) of AI algorithms. By acknowledging that this is technically difficult for techniques such as the deep neural networks powering much of the field’s recent progress, the report effectively admits that its recommendations would delay adoption of the latest technologies.
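To make that difficulty concrete, the sketch below (an illustration only, not anything proposed in the report; it uses scikit-learn) contrasts a simple model whose reasoning can be read directly from its coefficients with a small neural network, for which practitioners must fall back on approximate, post-hoc explanations such as permutation importance.

```python
# Illustrative sketch: "explainability" is cheap for simple models and costly
# for neural networks. The data and model choices here are arbitrary.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=5, random_state=0)

# Interpretable by construction: each coefficient states how a feature
# pushes the decision up or down.
linear = LogisticRegression().fit(X, y)
print("linear coefficients:", linear.coef_.round(2))

# Opaque by construction: thousands of weights with no per-feature meaning.
# The best available is a post-hoc approximation, e.g. shuffling each feature
# and measuring how much the model's accuracy drops.
net = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0).fit(X, y)
result = permutation_importance(net, X, y, n_repeats=10, random_state=0)
print("post-hoc importances:", result.importances_mean.round(2))
```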

There are further suggestions for reducing bias in decision making caused by data sets that do not reflect the full diversity of society, bias that could lead AI systems to make decisions with negative social consequences. Industry has already devoted considerable attention to this problem, but the report calls for greater government funding to develop auditing tools and for increased oversight of data sets. Such oversight would fall hardest on smaller firms, whose relatively small data sets carry a greater risk of bias. Reducing bias is another opportunity for learning by doing, something that ex ante regulation makes more difficult while entrenching the very incumbents the Lords seek to regulate.
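For a rough sense of what such an “auditing tool” might check, the sketch below computes a demographic parity gap, the difference in positive-outcome rates between groups. This is an illustration of one common fairness metric, not a tool described in the report, and the column names and data are hypothetical. On small data sets such gaps are noisy, which is part of why oversight of this kind weighs more heavily on smaller firms.

```python
# Illustrative bias check: compare positive-outcome rates across groups
# (demographic parity). Column names and data are made up for the example.
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Absolute difference between the highest and lowest positive-outcome rates."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return float(rates.max() - rates.min())

# Toy decision log: group "a" is approved 75% of the time, group "b" 25%.
decisions = pd.DataFrame({
    "group":    ["a", "a", "a", "a", "b", "b", "b", "b"],
    "approved": [1,   1,   0,   1,   1,   0,   0,   0],
})
print(f"demographic parity gap: {demographic_parity_gap(decisions, 'group', 'approved'):.2f}")
```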

It is helpful that the UK’s Parliament is examining the opportunities that artificial intelligence creates. However, it would do better to focus on removing the barriers currently in place, rather than developing new ones.