
Thursday, 21 September 2017

Issues Concerning Human-Centered Intelligent Systems: What's "human-centered" and what's the problem?

Introduction

In medicine, an expert used to be defined as "some specialist from the Mayo Clinic, lost in a railroad station with slides". I am not an expert and I don't have any slides, but I do have viewgraphs, and I hope I'm at the right place in the right city.
You could fairly ask why I'm on this program at all. My background is in aerospace medicine, not computer science. Some of your position papers terrify me. Furthermore, much of what I am going to say has already been expressed in some eloquent position papers by Emilie Roth, Jane Malin and others.
I think perhaps I'm here because I am a user - a consumer - of the concepts and products you have given much of your lives to developing. My domain, aviation, has done as much to stimulate advanced technology, by buying it and using it, as any endeavor since the dawn of the industrial revolution. We in the aviation community have been working with complex human-computer systems in a highly dynamic, distributed, real-time environment for over two decades - shortly after people in the computer business figured out how to make computers small enough so we could get them off the ground. These computers have helped us move from an aviation system in which operators never had enough information to one in which we can drown operators in information.
In the course of these two decades of constant exposure, we have learned some lessons about how computers and people can work together, regardless of where they are located, to accomplish difficult tasks under sometimes difficult conditions. Sadly, we have also failed to learn some lessons we should have learned about how to do exactly the same thing - and we have left some shards of steel and aluminum in various odd spots in the process. Those lessons - the ones we have failed to learn - are what I would like to share with you today, as you begin this workshop on Intelligent Human-Machine Systems. It is my real hope that you can avoid some of the mistakes we have made as we have conceptualized, constructed and operated high-technology devices in pursuit of social goals. Foremost among the mistakes I hope you will avoid is the mistake of conceptualizing human-centered systems, then designing and building technology-centered systems. Dave Woods has said that, "The road to technology-centered systems is paved with human-centered intentions". I shall try to point out that he was quite right.
What Does it Mean to be "Human-Centered"?

Investigators have been studying human-machine systems for as long as such systems have been around. The problems people have in interacting with such systems have long been recognized. Ever since World War II, investigators have tried to lay down principles by which such systems should be constructed. These principles have been variously called "user-centered", "use-centered", "user-friendly", "human-centered", and more recently, "practice-centered". What do these terms mean? What principles must be embodied in a human-machine system to warrant such appellations?
As a user, I am not going to become involved in which of these terms or constructs is the best to describe what we are trying to conceptualize. Instead, I am going to offer some more principles I believe are necessary in what I will continue to call "human-centered" systems, simply because I'm comfortable with that term. Though most of my experience has been in the aviation domain, and my illustrations will reflect that, I am convinced that these principles apply to many human-machine systems in a variety of domains, and that they are therefore deserving of careful attention by designers and operators of any intelligent system. I'm going to describe what I'll call some "first principles": principles that I believe are essential elements in any over-arching philosophy for such systems.
First Principles of Human-Centered Systems
Premise: Humans are responsible for outcomes in human-machine systems.

I shall proceed from a premise, which, stated in Human-Centered Intelligent Systems terms, is that human operators are entirely responsible for the outcomes of processes conducted by humans and machines.
Axiom: Humans must be in command of human-machine systems.

If one accepts that premise, I think it is axiomatic that humans must be in command of all components of the systems that undertake those processes. They must have full authority over the systems, which means that they must have the means to intervene constructively in the processes. I shall try to justify this axiom as we go along.
This axiom implies certain corollaries, which appear to be consistent with our experience with human-machine systems in aviation. Briefly stated, they are as follows.
Corollary: Humans must be actively involved in the processes undertaken by these systems.

Many human-machine systems distance the operator from ongoing processes, some by intention, others by default. Without continuing active involvement in a process, the human operator will be unable to understand the problem and reenter the performance loop in case of machine failure.
Corollary: Humans must be adequately informed of human-machine system processes.

Without good information concerning an ongoing process, a human operator cannot remain actively involved in that process. If this happens, the machine, not the human, is in control.
Corollary: Humans must be able to monitor the machine components of the system.

As machines have progressed from simple inner-loop control tasks to management of information, and more recently to management of entire processes, it has become harder to follow what they are doing. The human must therefore be informed that such machines are still functioning properly, not merely told when they have failed.
Corollary: The activities of the machines must therefore be predictable.

Unless a machine behaves predictably, a human cannot form an internal model of how it functions, and thus cannot remain involved in the ongoing process.
Corollary: The machines must also be able to monitor the performance of the humans.

Humans fail too. Machines know a good deal about human-machine processes, and this knowledge can permit machines to monitor human performance for errors, just as humans must be able to monitor machine performance for errors or failures.
Corollary: Each intelligent agent in a human-machine system must have knowledge of the intent of the other agents.

In order to understand what outcome is desired, any agent in a human-machine system must understand what the other components of the system are trying to accomplish. This requires that the intentions of each agent be known to all of the others.
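The corollaries above describe an architecture rather than an algorithm, but their interplay can be made concrete with a small sketch. The following toy Python example (not from the talk; every class and attribute name here is a hypothetical illustration, not a real avionics interface) shows a machine agent that behaves predictably and reports that it is still functioning, and a human agent who shares the machine's intent, monitors its status, and retains the authority to intervene:

```python
# Toy sketch of the corollaries above. All names are illustrative
# assumptions, not drawn from any real flight control system.

from dataclasses import dataclass


@dataclass
class Intent:
    goal: str            # what the agent is trying to accomplish
    target_altitude: int # feet


class Autopilot:
    """Machine agent: predictable behavior, positive status reporting."""

    def __init__(self, intent: Intent):
        self.intent = intent
        self.altitude = 0
        self.healthy = True

    def step(self):
        # Predictable: climb 1000 ft per step until the target is reached.
        if self.altitude < self.intent.target_altitude:
            self.altitude += 1000

    def status(self):
        # Report "still functioning properly", not only failures.
        return {"healthy": self.healthy,
                "altitude": self.altitude,
                "intent": self.intent}


class Pilot:
    """Human agent: stays informed, monitors the machine, can intervene."""

    def __init__(self, intent: Intent):
        self.intent = intent

    def monitor(self, status):
        # Cross-check the machine's declared intent against the pilot's own.
        if status["intent"].goal != self.intent.goal:
            return "intervene: intent mismatch"
        if not status["healthy"]:
            return "intervene: machine failure"
        return "ok"


intent = Intent(goal="cruise", target_altitude=3000)
autopilot = Autopilot(intent)
pilot = Pilot(intent)

for _ in range(3):
    autopilot.step()
    assessment = pilot.monitor(autopilot.status())

print(autopilot.altitude, assessment)  # prints: 3000 ok
```

The point of the sketch is the shape of the loop, not the numbers: the machine announces both its state and its intent on every cycle, so the human can detect a divergence before it becomes a failure, and command authority stays with the human.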
Why Was it Necessary to Construct Yet More Principles for HCIS?

My enunciation of these principles was motivated by serious and in some cases spectacular failures of human-machine systems in aviation. Consider the operating modes of the flight control system in a modern transport. Modern aircraft automation is very capable, very flexible - and sometimes very hard to understand. There are ten altitude-change modes on some modern airplanes, several of which interact with or are conditional on other modes.
Aircraft automation has become highly autonomous. A flight from New York to Tokyo may require very little active participation by the pilots once the machine has been programmed. Removing authority over aircraft control or management of systems from the human operator may require only a line or two of source code.
Yet the human always remains responsible for the outcomes. The "first principles" I have enumerated are an attempt to go back to basics: to state what the relationship between the human and machine components of the system must be if the human is to be able to remain in command of the system. Let me state more specifically what the problem is, in terms of hard data from the domain in which I work.
Since the mid-1970s, a number of incidents have come to light that were associated with, and in some cases were enabled by, complex machine systems. Table 1 shows a partial list of those with which I am familiar. I have taken a few liberties with this list of relevant factors in these incidents; not all were signed out this way by investigating authorities, though I am certain that the factors shown were critical to the outcomes.
