How should AI be regulated?

The mistakes and successes of earlier tech revolutions should be pondered

As artificial intelligence becomes increasingly sophisticated, the time has come for humanity to choose. Should the nations of the world shut down or tightly regulate AI until it is clear a godlike artificial superintelligence will not gain consciousness and exterminate the human race? Or should governments not regulate AI at all, in the hope that it will cause an acceleration of technological progress that results in our colonization of the universe, our uploading as bodiless computer programs into the galaxy-wide web – or both?

Or how about a third option: AI regulation by AI-enabled industry? AI may turn out to be the latest in a series of “general purpose technologies” (GPTs) that transform the economy, politics and society. Previous GPTs include fire, agriculture, the wheel, writing and, more recently, the inventions that have been the basis of industrial civilization – power sources such as the steam engine, the internal combustion engine, the electric battery, nuclear and solar energy and modes of communication such as the telegraph, the telephone, radio, TV, the computer and the internet.

In deciding what to do – and what not to do – when it comes to regulating AI, we can learn from the history of the introduction of technologies such as electricity and the internet.

Those who seek comprehensive regulation of AI as a technology because of its alleged dangers argue that it should be subject to a single set of national or international regulations. But electricity was – and is – dangerous, and industrial countries in the late 19th and early 20th centuries did not see fit to impose comprehensive regulations on electricity as a technology.

To have done so would have been foolish and counter-productive. Imagine if a national government in 1900 had created a Department or Ministry of Electricity and charged it with making and enforcing a single comprehensive code of regulations for all uses of electric energy. The Department of Electricity would have promulgated rules for anything with a battery or an electric motor or electrical wiring – houses, airplanes, telephones, toys, toasters. The complexity of such a mission would have doomed any effort of the sort.

Does this mean libertarians are right, and that all regulations are bad and stifle innovation? Of course not. The uses of electricity as an input to a variety of machines and industries are tightly regulated, to prevent fires, electrocution and other hazards. Building codes contain rules for safe housing construction, to prevent fires caused by faulty wiring, and product safety rules govern the testing and sale of toasters to make sure they don’t electrocute their users. But there is no economy-wide electricity code. Instead, there are separate regulations for housing and toasters, which happen to include subcategories of regulations having to do with their electrical components.

The regulations are not perfect; electrical fires and accidental electrocutions still occur. But the regulation of electricity as an input by the industrial sector has worked better than the alternatives of no regulation or comprehensive regulation of electricity as a technology would have done.

The case of electricity can be contrasted with nuclear energy. In 1946, the US established the Atomic Energy Commission (AEC) to supervise and regulate all uses of atomic energy, whether for military or civilian purposes – the equivalent of the hypothetical Department of Electricity. Before the AEC was abolished in 1974, with its functions divided between the Nuclear Regulatory Commission (NRC) and the Energy Research and Development Administration (ERDA), its multiple missions included promoting the peaceful use of nuclear energy, overseeing the research and development of nuclear weapons and issuing and enforcing nuclear industry safety regulations.

Many of the peaceful uses of atomic energy touted by enthusiasts in the 1940s and 1950s never materialized. But nuclear power plants did become important contributors to the American energy supply. Nuclear energy accounts for less than 10 percent of total primary energy in the US, including transportation – but nuclear power plants contribute around a fifth of all electricity.

The transatlantic elite believed the fundamental nature of an industry changed if it had ‘cyber’ in front of it

Public attitudes toward civilian nuclear power plants became more negative, beginning in the 1970s. In part, this was the result of accidents such as those at Three Mile Island, Chernobyl and Fukushima. Another factor was the post-1960s counterculture and the neo-Luddite backlash against industrial modernity in favor of the small and organic, with nuclear energy stigmatized as unnatural compared to equally artificial solar panels and windmills. Fears of nuclear war and popular misconceptions about the harm done by radioactive materials were factors too. But many energy analysts blame the centralization of authority in the AEC and its successor, the NRC, and the culture of bureaucracy and litigiousness it spawned, for the slowdown in the licensing of construction of new US nuclear power plants.

The Trump administration has blamed the NRC for harming the US nuclear power industry: “Instead of efficiently promoting safe, abundant nuclear energy, the NRC has instead tried to insulate Americans from the most remote risks.” In early May, the President issued executive orders calling for the quadrupling of US nuclear power generation by 2050, the deployment of advanced nuclear technologies, the build-out of nuclear supply chains, an increase in American nuclear exports – and expediting the nuclear licensing process.

If the history of the AEC/NRC provides a warning against creating single agencies or treaties charged with regulating all AI, the American approach to regulating the early internet in the 1990s provides an example of the folly of the opposite extreme: too light a regulatory touch in the case of a new general purpose technology.

In 1998, Congress passed the Internet Tax Freedom Act, imposing a three-year moratorium on state and local taxation of goods sold via the then-new internet, and outlawing state laws that allegedly discriminated against online commerce. In 2015, the Trade Facilitation and Trade Enforcement Act made the moratorium on some internet-related taxes permanent.

Historians can decide how much these laws were influenced by rational deliberation about the public interest and how much by contributions from the rising tech industry to members of Congress. Whatever the actual motives of lawmakers, their public rationale was that internet-based businesses were so fragile they had to be treated with favoritism compared to brick-and-mortar businesses. In hindsight, this was wrong. For example, online retailers such as Amazon would almost certainly have flourished at the expense of traditional retailers without government favoritism. The moratoriums may just have enriched the shareholders of a few big tech firms.

For a while in the 1990s and 2000s, however, much of the transatlantic elite believed that the fundamental nature of an industry would be changed if there were an “e-” or “cyber-” in front of it: e-commerce, cyber-war. At the time, I thought this was as silly as using “e” as a prefix for everything powered by electricity. It was as though a teenager in the 1950s had suggested, “Hey, gang, let’s ride in my dad’s e-car, with its electronic ignition, to that cool high-tech McDonald’s with its electrical e-sign and have some milkshakes made by their far-out e-mixer.”

Some utopian techno-libertarians a generation ago argued that the internet somehow created a new realm – “cyberspace” – that was, or should be, free from regulation by mere territorial authorities. By the same logic, it might be argued that the printing press created a new dimension, Gutenberg Space, that territorial governments could not legitimately regulate. The next time you are in an international airport, try persuading the customs officer that you do not need a visa because you like to read books and thus are a citizen of Gutenberg Space.

The influence of this nonsense on public policy was manifested recently in many government policymakers’ toleration of Uber’s flouting of city and state taxi-licensing laws during the company’s early years. The fact that Uber and Lyft allowed customers to summon rides via the internet through their phones was supposed to make them “tech companies,” unlike traditional taxi companies with radio dispatchers. Did the adoption of two-way radio technology by taxi companies, beginning in the 1940s, turn old-fashioned taxi firms into “radio companies” or “electronic communications firms?”

In time, traditional taxi companies doubtless would have switched from radio to online communications, and state and municipal taxi licensing laws and regulations would have adapted. What has driven the traditional taxi business in the US and elsewhere into near extinction is not Uber’s technological brilliance but its old-fashioned evasion of labor laws. Uber has claimed that its drivers are self-employed, independent contractors, many of whom just happen to be driving exclusively for Uber and following its rules and regulations. Employees? What employees? By shifting costs such as insurance and health benefits onto its drivers, and by avoiding minimum-wage requirements by pretending those drivers are self-employed, Uber has managed to undercut traditional taxi companies that treat their workers better.

When it comes to regulating AI, the mistakes and successes of earlier tech revolutions should be pondered

Take away the sprinkling of Silicon Valley pixie dust, and “e-taxi” companies are just plain old taxi companies – but worse. (The labor law and pay issues may be rendered moot, eventually, by the spread of self-driving robot taxis.) When it comes to the regulation of AI, then, the mistakes as well as the successes of earlier technological revolutions should be pondered. Unless we give credence to apocalyptic fears about malevolent superintelligences that owe more to science fiction than to science, it would be a mistake to impose a moratorium on AI development. It would also be a mistake to create a single national or transnational agency to regulate AI in all its uses.

Another mistake would be to give firms with business models enabled by versions of AI special exemption from the rules that apply to other companies in a given industry, such as the “tech firms” Amazon and Uber, which have never been anything other than a tech-enabled retailer and a tech-enabled taxi company. If machine-to-machine (M2M) communications technology allows robot factories, warehouse robots and driverless bots to work together to manufacture and deliver products to consumers, then instead of creating a centralized, all-powerful Department of Machine-to-Machine Communication, governments should modernize their regulation of factories, warehouses and commercial delivery vehicles.

To be sure, if general M2M technologies permit all AI-enabled devices to communicate with each other and machines from trucks to toasters are captured by Skynet and mustered in its war against humanity, I will be proven wrong. In which case I, for one, will welcome our new AI overlords.

This article was originally published in The Spectator’s August 2025 World edition.
