Can AI Regulation Make Us Safe(r)?

At the end of the 19th century, the latest new invention was the automobile. In both Europe and the United States, regulations required a man to wave a red flag in front of the car to alert road users and passers-by to the car’s presence. This also ensured that the car could not travel faster than walking pace, even though the main known benefit of the horseless carriage powered by an internal combustion engine was that it could travel faster than the horses it was supposed to replace (the fact that it did not foul the streets, as the horse was known to do, creating a health and safety hazard, came a very close second in the benefit stakes).

This is a classic example of the Precautionary Principle in practice: when a new scientific or technological innovation whose effects are to some extent unknown is implemented, we should proceed with caution, at least until we know more about it (scientific effort, with its perpetual search for new knowledge, is aimed at ensuring that we do). This does not mean that the technology should be banned, especially if there are substantial social benefits to be gained. Rather, it should be implemented with some guardrails in place, to protect individuals and society from the known harms and the uncertainties inherent in its implementation. As long as the social benefits are great and the costs of potential harm are minimized by the precautions taken, the implementation can be considered acceptable to society.


However, the man with the flag illustrates that the nature of the precautions taken derives from what we already think we know about the harms and benefits of the technology, not from what we cannot (or do not yet) know about it. And even then, obvious benefits are sometimes forgone because society is not prepared to tolerate the costs of risks it presumes are already known. But humans are not infinitely rational, and they do not always strike the right balance. This is often because the risks that regulators seek to manage are those they already know well, rather than the uncertainties that innovation entails; managing the uncertainties themselves would mean that the innovation could never be allowed at all.

A well-known pattern in decision-making under uncertainty is for a decision-maker facing a complex, opaque situation to substitute a familiar situation for which tools are already available. Again, the man with the flag illustrates this. Contemporary regulators were well aware of the dangers to the public from runaway horses and carriages, and the probability of a runaway increased with the speed of travel. Limiting the speed of cars protected the public from the risk of harm caused by a runaway car, and warning bystanders of the oncoming vehicle prepared them to proceed with caution. Current knowledge was used to address a known, current risk: that of horse transport.

But ironically, the motor vehicle driver (once trained) exercised significantly greater control over the vehicle than the horse driver did, and the public had more to fear from the sometimes unpredictable behavior of stubborn horses than from an internal combustion engine with no will of its own. Regulations addressing horse-related fears delayed the gains of faster travel and created a false sense of security in the public: as long as the flagman warned them, they had no need to learn to manage their own behavior in the presence of motor vehicles travelling faster than horses, a much-needed skill for when the regulatory rules were eventually relaxed.

These lessons are instructive for regulators of 21st-century AI technologies and the public they protect. Both the European Union and United States regulatory regimes adopt a precautionary risk-management approach. But do the risks being managed relate specifically to the characteristics of the new technologies, or do regulators use proxies derived from experience with other technologies, because those are known to work in their original context? The risk-management tools used in AI regulation, by their very nature, presume known consequences and quantifiable risks for the new technologies. But is this really the case? We may not be making the environment safer from the real underlying risks posed by the new technology; instead we may simply be delaying the accrual of its benefits while giving the public a false sense of security and discouraging their adaptation to a new environment.

As a side note: in 2024, we now think we know that the existential risk of motor vehicles was the carbon emissions from their fuel (although, unexpectedly, the replacement of horses created a shortage of agricultural fertilizer, which required the innovative use of fossil fuel by-products to fill the gap).

#Regulation #Safer
Image Source: www.aei.org
