Human acceptance of algorithmically controlled systems
This page contains a quick overview of the proposed project on human acceptance of algorithmically controlled systems, in particular, self-driving cars.
Irrespective of whether you are for or against self-driving cars, it will be useful for you to understand what the roadblocks are.
Under Construction
This page is still under construction. In particular, nothing here is final while this sign still remains here.
A Request
I know I am biased in favor of references that appear in the computer science literature. If you think I am missing a relevant reference (outside or even within CS), please email it to me.
Mentors
Kenny Joseph and Atri.
Background
This proposed project can be stated for a generic algorithmically controlled system, but for the sake of concreteness, we will focus on self-driving cars.
The UB connection!
In case you have not heard already, UB has a group that works on self-driving vehicles!
Hearing about self-driving cars should immediately set off ethical/moral "alarms." One of the best-known experiments in this space is the moral machine experiment.
Go ahead: play the game!
Feel free to play the game. But then do read the accompanying paper for the many interesting findings that the authors made based on the data collected from the game.
Done playing the game?
OK, now let us go to the background more specific to the proposed project. It turns out that humans are reluctant to use self-driving cars. This appears to hold even if the self-driving cars are guaranteed to have a smaller chance of being in accidents. This is clearly a roadblock to self-driving cars being adopted (see e.g. the 2017 Nature Human Behaviour paper by Shariff, Bonnefon and Rahwan).
Proposed project
High level goal
The very high-level goal of this project is to gain more understanding of why humans are skeptical of self-driving cars.
Of course, this question is not new, but the "twist" here is that we want to do some mathematical modeling to gain more insight into this issue.
In particular, we want to explore two hypotheses for why humans might be wary of self-driving cars:
Two hypotheses
- The first hypothesis is that humans do not want to give up control. This could be because
- Humans are overconfident about their driving abilities (i.e. they think they are better drivers than they actually are). Humans being overconfident about their abilities has been observed in other domains as well (see e.g. the 2005 article by Sieck and Arkes); OR
- Humans might not like giving up control irrespective of whether they think they would be better drivers (consider, e.g., that a lot of people have a fear of flying even though most of them do not think they could fly airplanes better themselves).
- The second hypothesis is that self-driving cars have the potential of getting into catastrophic accidents simultaneously. The high-level idea is that if all self-driving cars are using the same software and the software has a bug, then all of the self-driving cars could fail at the same time, since the bug (could) affect all the cars uniformly. Human drivers, on the other hand, tend to behave "independently," and hence the chance of all human drivers failing at the same time is very small. Here is a tweet from someone who is not us, saying something similar in a different context (automated hiring):
Clearly, hiring is an area that is replete with all kinds of systematic biases, and much of it is about arbitrary signaling, but this sort of system essentially encodes one particular bias at scale, rather than having a distribution of people making independent decisions. (2/n)
— Dallas Card (@dallascard) October 23, 2019
- A related (and perhaps more pertinent) reason could be that humans tend to react to certain "large events" in a way that might not be very logical. For example, see this article on how people think of death from terrorism as scarier than the (many orders of magnitude more likely) death from gun violence.
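The correlated-failure intuition behind the second hypothesis can be made concrete with a toy probability calculation. All the numbers below are made up purely for illustration; they are not estimates of real accident rates:

```python
# Toy model (illustrative numbers only): compare the chance that *all*
# of n cars fail on a given day under two regimes.
#
# Independent (human-like) failures: each driver fails with probability
# p, independently, so P(all fail) = p**n, which vanishes as n grows.
#
# Common-mode (shared-software) failures: with probability q a latent
# bug is triggered, and when it is, every car running that software
# fails. Then P(all fail) >= q, no matter how large n is.

n = 1_000   # number of cars (hypothetical)
p = 1e-4    # per-driver independent failure probability (made up)
q = 1e-6    # probability the shared bug is triggered (made up)

p_all_independent = p ** n   # 10**(-4000): underflows to 0.0 in floats
p_all_common_mode = q        # lower bound: the bug alone fails every car

print(f"P(all {n} independent drivers fail) = {p_all_independent:.3e}")
print(f"P(all {n} shared-software cars fail) >= {p_all_common_mode:.3e}")
```

The point of the sketch is that the common-mode probability does not shrink as the fleet grows, which is one way to formalize why a single shared bug can be scarier than many independent (even individually worse) drivers.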
Think beyond the two hypotheses above!
If you do this project (or even if you do not), please do think beyond the above two hypotheses. The above are our initial thoughts, and they could be wrong!
Proposed project-specific deliverables
We first list all the steps that would need to be done to take this project to its logical conclusion:
- Read the relevant literature to make sure you do not "reinvent the wheel" (ask the mentors for pointers on where to look).
- Figure out if you want to model something beyond the two hypotheses from the previous section in this project.
- Come up with a mathematical model that somehow "encodes" the various hypotheses about why humans are uneasy about using self-driving cars.
- Use the model to come up with a "crossover point" at which humans would start getting comfortable with using self-driving cars. Ideally this should be a theorem that follows once the model is mathematically defined. However, if the mentors agree, this step can also be done via simulations.
- Think of "interventions" that would make the crossover point "easier" or "harder" to achieve.
- Test out the theoretical findings with some experimental results. Then go back to Step 1 if necessary and repeat!
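As a very rough illustration of the modeling steps above, here is one deliberately simplistic sketch of how a crossover point could fall out of a model. It encodes only the overconfidence part of the first hypothesis; the function names, parameters, and numbers are all our own hypothetical choices, not part of any established model:

```python
# A deliberately simplistic adoption model (all parameters hypothetical).
# A driver believes their own accident risk is their true risk r_human
# scaled down by an overconfidence factor c < 1 (hypothesis 1a). They
# adopt a self-driving car once the car's risk r_car falls below their
# *perceived* risk c * r_human. The "crossover point" is the largest car
# risk at which adoption begins.

def crossover_car_risk(r_human: float, overconfidence: float) -> float:
    """Largest car accident risk at which a driver would still adopt."""
    return r_human * overconfidence

def adopts(r_car: float, r_human: float, overconfidence: float) -> bool:
    """True once the car's risk drops below the driver's perceived risk."""
    return r_car < crossover_car_risk(r_human, overconfidence)

# Example: true human risk 1%, but the driver thinks they are 5x safer,
# so the car must be 5x safer than the human before adoption starts.
r_human = 0.01
c = 0.2
print(adopts(r_car=0.005, r_human=r_human, overconfidence=c))  # False
print(adopts(r_car=0.001, r_human=r_human, overconfidence=c))  # True
```

In this toy setup, an "intervention" (the fifth step above) would be anything that moves the overconfidence factor closer to 1, e.g. showing drivers calibrated feedback about their actual accident risk, which raises the crossover point and makes adoption easier.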
What is expected from you in this project
Among all the proposed projects, this is perhaps the most open-ended. This means, e.g., that we do not know the "right" way to solve this problem, and it might take some iterations before things can be fleshed out. 
To get full credit on this project you have to do Steps 1, 2, 3 and 4 at least once. Of course, doing the later steps as well would be awesome and is greatly encouraged!
References
- Jenny Anderson, The psychology of why 94 deaths from terrorism are scarier than 301,797 deaths from guns. Online article.
- Nathan Feiles, Why we fear flying: Part 1. Online article.
- Azim Shariff, Jean-François Bonnefon and Iyad Rahwan, Psychological roadblocks to the adoption of self-driving vehicles. In Nature Human Behaviour, 2017. [Read-only direct link via Rahwan]
- Winston R. Sieck and Hal R. Arkes, The recalcitrance of overconfidence and its contribution to decision aid neglect. In Journal of Behavioral Decision Making, 2005.