News items and articles about autonomous vehicles are appearing in rapid succession. Google dominated the press for a while, but Mercedes and BMW are now in on the act too. The technology is there, and the newest vehicles work even without a built-in 360-degree world map. Are autonomous vehicles and their promise of safer traffic and fewer traffic jams really about to become reality? Or are we all overlooking the complexity of the issue?
Article from NM-Magazine, February 2015 - also available in PDF (Dutch only)
Layer 1: natural system
In his magnum opus Logica van het gevoel (‘The Logic of Feeling’), the cultural philosopher and epistemologist Arnold Cornelis introduced the idea that cultures have layers of stability, and that emotions are embedded in these layers. Every human being has his own logic of feeling, according to Cornelis. Normally, things remain at the level of emotion, but when problems emerge, it is important to understand this hidden logic. It is embedded in three layers of stability, which represent the growth stages of individuals and societies. The first layer is the natural system, the hidden human being. Cornelis calls this human being hidden because he does not drive himself: he perceives, but does not learn, nor actively change himself or his environment. Human beings and societies at this first layer are looking for safety and security. If we project this layer onto the world of vehicles – you will now see what we are aiming at – it represents the physical-mechanical vehicle with a combustion engine and passive safety features.
Layer 2: social regulation system
The second layer of stability is that of the social regulation system, the silent human being. In this layer, human beings are able to learn, but what they learn is how to act. The structure for the way in which they act is determined by the norms of the social regulation system: the natural system of the first layer has now been incorporated in laws and regulations. The second layer still does not offer human beings the possibility of self-driving. Everything is regulated; human beings operate within laws and fixed norms. In terms of vehicles, this represents smart systems for driver assistance, such as lane departure warning or autonomous cruise control. And perhaps the first generation of autonomous vehicles.
Layer 3: communicative self-driving
The third layer of stability is that of communicative self-driving, the communicative, self-driving human being. This self-driving human being transforms his emotions into a driving logic, which in turn becomes the subject of communication. Communication opens a world of possibilities. The specialized human beings of the social regulation system discover that they need each other to truly learn and to arrive at new insights and solutions. Reality is too complex to be understood by any single person. In terms of vehicles, this represents the cooperative, autonomous vehicles that together achieve safer road traffic and better use of the road infrastructure.
Understanding complex situations
If we follow the logic of feeling, autonomous vehicles should be able to understand complex situations ‘intuitively’ and interact with each other to devise suitable solutions – this is the kind of vehicle that we are striving for. Or, to use the words of Maarten Sierhuis, director of the Nissan Research Center in Silicon Valley: “Car makers have to learn to design self-driving cars that not only understand how human beings drive cars in traffic, but that can also imitate these human beings.” The automotive industry has to undergo a paradigm shift from cars as physical-mechanical systems to autonomous, communicative systems that understand and can replicate the behavior of human beings.
Everyday traffic makes it evident that human beings are extremely versatile. When confronted with unexpected road or traffic situations, drivers warn each other with their headlights or hazard lights. People pick up these kinds of signals very quickly, but autonomous vehicles find them difficult to understand. At crossings, pedestrians and cyclists make eye contact with drivers to gauge each other’s intentions – cross or wait, yield or take precedence. How can an autonomous vehicle learn to understand the intentions of pedestrians and cyclists? Professor Berthold Färber of Munich University has argued that these kinds of informal communication methods play a significant role in daily traffic.
It turns out that drivers, and road users in general, can also recognize and remember patterns in the way types of people and vehicles act. They can recognize and anticipate differences in driving style between sports cars and sedans, and differences between the way children and senior citizens move. An autonomous vehicle will have to learn to recognize these patterns, but to be sufficiently predictable for other road users, it will also have to exhibit these patterns in its own driving behavior.
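To make this concrete, the idea of anticipating road-user types could be sketched as follows. This is a deliberately minimal illustration, not a real vehicle algorithm: the profile names, the numeric values, and the `safety_margin` helper are all hypothetical assumptions.

```python
# Minimal sketch: an autonomous vehicle widening its expectations per
# road-user type. All profiles and numbers are illustrative assumptions.

BEHAVIOR_PROFILES = {
    "sports_car":     {"unpredictability": 0.3},
    "sedan":          {"unpredictability": 0.1},
    "child":          {"unpredictability": 0.8},
    "senior_citizen": {"unpredictability": 0.4},
}

def safety_margin(road_user_type: str, base_margin_m: float = 2.0) -> float:
    """Widen the safety margin for less predictable road users."""
    profile = BEHAVIOR_PROFILES.get(road_user_type, {"unpredictability": 0.5})
    return base_margin_m * (1.0 + 2.0 * profile["unpredictability"])

print(safety_margin("child"))  # wider margin than for a sedan
print(safety_margin("sedan"))
```

The point of the sketch is symmetry: the same table that tells the vehicle how much leeway to give a child could also tell it how to behave so that other road users can predict it in turn.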
It is not only human factors that appear in a completely different light when these aspects are taken into account. The way vehicles are designed will also begin to change: software will take a much more prominent place in vehicle design, especially software that can imitate human behavior.
Autonomous vehicles demand an ethical framework
A correct judgment of the road or traffic situation can sometimes cause drivers to consciously break the law or the traffic code. If you spot that a trailer in front of you is losing its load, and the traffic situation permits it, you are likely to swerve immediately onto the oncoming traffic lane to avoid the obstacle. What would an autonomous vehicle do? What is it allowed to do? Will it be permitted, just like human beings, to temporarily break the law, or will it simply stop to avoid hitting the obstacle, with the risk that vehicles behind will run into it? Will it be allowed to complete its journey with a broken headlight, as human beings do, or will it have to park along the side of the road even if this reduces road safety? Formulated more generally, will we be able to trust autonomous vehicles on our roads simply because they have been programmed to obey the law and the traffic code strictly and to avoid collisions under all circumstances?
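The lost-load dilemma above can be reduced to a cost comparison: how should a vehicle weigh a deliberate rule violation against the risk of a collision? The sketch below is purely illustrative – the cost values, the threshold logic, and the function name `choose_maneuver` are assumptions for the sake of argument, not anything from a real control system.

```python
# Illustrative sketch of the dilemma: weighing a deliberate rule
# violation (crossing onto the oncoming lane) against collision risk.
# All costs here are hypothetical assumptions.

def choose_maneuver(collision_risk_if_stay: float,
                    oncoming_lane_clear: bool) -> str:
    """Return 'swerve' only when staying is risky and the lane is clear."""
    RULE_VIOLATION_COST = 1.0    # penalty for breaking the traffic code
    COLLISION_COST = 100.0       # penalty for hitting the lost load
    stay_cost = collision_risk_if_stay * COLLISION_COST
    swerve_cost = RULE_VIOLATION_COST if oncoming_lane_clear else float("inf")
    return "swerve" if swerve_cost < stay_cost else "brake_and_stay"

print(choose_maneuver(0.8, oncoming_lane_clear=True))   # swerve
print(choose_maneuver(0.8, oncoming_lane_clear=False))  # brake_and_stay
```

Even this toy version exposes the legal question in the text: the moment `RULE_VIOLATION_COST` is finite, the programmer has decided in advance that the vehicle may break the law.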
There are ethical issues that go even further than these legal questions. The point is not just that existing laws and regulations are not equipped to deal with autonomous vehicles. It will also be difficult, if not impossible, to capture every situation that may occur, with the required reaction to it, in laws and regulations. That is already true today: we tacitly assume that people – especially as their driving experience increases – are able to take the right decision in a split second in situations they have never encountered before. Human beings do this on the basis of their ethical sense, and if the action that flows from that decision results in an accident, they are personally responsible. How does this work with autonomous vehicles?
Let’s take an extreme example. A grandfather and his grandchild are walking on opposite sides of a road. They see and recognize each other, and the grandchild spontaneously crosses the road to say hello, forgetting to look out for traffic. The approaching autonomous vehicle is confronted with the choice of hitting the child, swerving onto the sidewalk and hitting the grandfather, or swerving onto the other sidewalk and hitting a couple walking there. A human driver takes such a decision in a split second and will later have to account for it to the police or possibly to a judge. If this situation has not been foreseen when programming the autonomous vehicle, the vehicle too will have to take an independent decision in a split second. What does it fall back on when making this decision? On an ethical sense and corresponding moral values that it has been programmed with? How do you program an ethical sense and moral values, and how universal are these values across countries and over time? And what about accountability for the results of this decision? Does it lie with the owner of the vehicle or with the programmer who should have foreseen the situation? Human beings can only be confronted with situations as they arise during a journey; programmers can be expected to think ahead about all possible situations.
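What "programming an ethical sense" might literally look like can be made uncomfortably concrete. The sketch below is a deliberately naive, hypothetical scoring of the three options above; every weight and every name in it is an assumption. The point it illustrates is exactly the question in the text: someone has to choose these numbers, and that choice is the ethics.

```python
# A deliberately naive sketch of "programmed moral values" for the
# grandfather/grandchild dilemma. All weights are hypothetical; with
# equal weights, the outcome is decided by an arbitrary tie-break.

HARM_PER_OPTION = {            # who would be hit by each option
    "continue":     ["child"],
    "swerve_left":  ["grandfather"],
    "swerve_right": ["pedestrian", "pedestrian"],  # the couple
}

VALUE_OF_LIFE = {"child": 1.0, "grandfather": 1.0, "pedestrian": 1.0}

def least_harm_option() -> str:
    """Pick the option whose summed harm weight is lowest."""
    return min(HARM_PER_OPTION,
               key=lambda o: sum(VALUE_OF_LIFE[p] for p in HARM_PER_OPTION[o]))

# With all lives weighted equally, 'continue' and 'swerve_left' tie at
# harm 1.0, and min() simply returns the first-listed tied option.
print(least_harm_option())  # → 'continue'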
Whenever ethics are being discussed, the value of human life is explicitly at issue. The ownership of a car is relevant to this. If the vehicle is a private car, it seems justified to make the protection of the life of the occupants an important priority for the autonomous car, possibly even at the expense of a higher risk to other road users. But what if it is an army or police vehicle? Should protecting the lives of other road users not be given higher priority when it is a soldier’s or police officer’s autonomous car? And what if the police officer is on his way to a fire or a hold-up?
A different issue altogether is the risk of hacking. So far, virtually every software system designed by humans has been hacked at some point, sometimes with pretty disastrous results. Who is responsible when a hacked vehicle causes an accident? The ethical issues that come with introducing autonomous vehicles may yet prove to be their most difficult stumbling block.
Risks and insurers
Even if we succeed in making legislation, regulation and ethical norms for autonomous vehicles watertight, insurers may still set additional boundaries based on their perception of risk. What happens when the cooperative autonomous vehicle becomes a frequent sight on our roads? What risks are involved if these vehicles begin to communicate with each other on a large scale? Stock markets have already furnished examples of strong, unexpected peaks and troughs that can apparently be ascribed in part to trading between automated systems. Will similar events take place on the road, leading to a higher risk of traffic accidents?
And then of course there is the question of what risks will arise if some cars on the road are autonomous and others aren’t. We’re not even talking about vintage cars that simply don’t have a lot of modern technology on board. There will also be people who are attached to the ultimate driving experience of driving a car themselves. Some car makers are consciously playing to this feeling. This kind of driver will not be easily persuaded to hand over the wheel to a robot.
How can insurers protect themselves against unknown risks that occur when autonomous vehicles participate in mixed traffic? And what restrictions will this bring for autonomous vehicles on the road? For the time being this will remain an open question.
Autonomous vehicles require debate
The debate on autonomous vehicles started after it became clear that the first experiences on public roads were successful. The appetite for new technology, safer traffic and better road use has fired our collective enthusiasm. As with so many new developments, integrating autonomous vehicles into society will prove to be a bigger challenge than developing the required technology. Whether you are for or against them, we all have an obligation to participate in the debate on autonomous vehicles and to contribute to the development of the required ethical framework and risk assessment. Autonomous vehicles must be made to the measure of man.
- Paul van Koningsbruggen
- Director Mobility