The White House recently held the first of several workshops on AI, at which experts discussed some of the emerging legal and ethical ramifications of machine learning. As machines become ever more deeply embedded in the systems and processes we use to make everyday decisions, how will we feel about being denied, flagged, or discriminated against by machines acting without direct human involvement, or by machines independently interacting with other machines?
The potential for social and legal conflict between individuals and parties arising from the actions taken and decisions made by machines is one example of the broader impact that machine autonomy will have on human conflict. When we think of the impact of machines (robots) on conflict, we naturally think first of things like drones and gun-wielding robots, i.e. machines playing a direct role in conflict. Yet some of the more profound effects may well come from the indirect influence of machines on everyday life.
So we can perhaps start to play with an emerging typology of autonomous conflict. At this point I would suggest two initial types of conflict (with more to follow):
- First order conflict: where machines are engaged directly in prosecuting conflict; where they are actors (e.g. alongside or against human soldiers)
- Second order conflict: conflict between humans that arises indirectly from the actions of machines; here machines aren’t immediately involved as actors, but their actions create new conflict between human parties
Having a typology like this might seem academic, but from a futures point of view it could be extremely helpful in guiding us to explore the future more systemically and to uncover more of the possible ways in which machine autonomy will have long-term impacts on conflict.