Wednesday, October 2, 2013

Barriers to Accountability in Health Information Technology

Let’s go over the four barriers to accountability identified by Nissenbaum (1996): “the problem of many hands,” “bugs,” “the computer as a scapegoat,” and “ownership without liability” (Nissenbaum, 1996).  Let us discuss each in detail, along with possible remedies.

Problem of many hands
Here is how this becomes a barrier.  According to Dennis Thompson (2011), when something goes wrong, we look for someone to hold responsible.  In large organizations (e.g., healthcare vendors) it can be very difficult to pinpoint one person, because many people are involved, and singling out one individual may be an unfair judgment if that person could not have prevented the event (Thompson, 2011).  On the other hand, if we hold the entire organization accountable, we may unfairly affect people who bear no responsibility at all (Thompson, 2011).  One way around this barrier, Thompson proposes, is “prospective design responsibility” (Thompson, 2011).  This means making an independent body responsible for designing the processes the organization must follow in order to monitor production and avert malfunctions.  That body would then be held responsible if it failed to fix broken processes or ignored warnings of failure (Thompson, 2011).

Bugs
Bugs, or coding errors (or, as a company I worked for called them, “undocumented features”), are a natural part of writing computer applications.  If one were to count the lines of code in any large, complex application, one would most likely count into the millions.  Inevitably, there will be errors in the code.  These errors can be simple mistakes, problems triggered by interactions with other features, or, on the more sinister side, errors that are known and ignored.  When they cause issues, they can become a barrier to accountability, because one can say, “it’s just a bug in the software that was overlooked.”  So how can there be accountability for bugs that were intentionally ignored, or even intentionally created?  Most of the outside sources I found cited Nissenbaum, so here is what she has to say.  We know that bugs are a natural part of coding, and therein lies the problem: this fact is often used as an excuse for sloppiness and incompleteness (Nissenbaum, 1996).  People therefore need to take more accountability in their coding and testing of software in order to keep bugs to a minimum.

The computer as the scapegoat
At times it may be easy to blame the computer for faults.  This can be a plausible explanation, since we know that bugs are inherent in any system.  Back in school I remember a saying popular with computer-minded individuals: “Garbage in, garbage out.”  It meant that in many cases, errors are attributable to the user, not the computer; the information or commands given to the machine determine the action it takes.  According to Friedman and Millett (1995), some people blame computers because of the perceived decision-making capabilities of these machines.  They go on to say that “designers should communicate through the system that a (human) who -- and not a (computer) what -- is responsible for the consequences of the computer use” (Friedman & Millett, 1995).

Ownership without liability
Nissenbaum discusses this barrier and states that “along with privileges and profits of ownership comes responsibility” (Nissenbaum, 1996).  To me this means that if you are benefiting, you should be held responsible for any errors or mishaps resulting from use.  This applies to both the vendor and the purchaser.  Unfortunately, it is becoming a trend in the software industry to deny accountability for the software produced while retaining “maximal property protection” (Nissenbaum, 1996).  The way around this barrier is through contracts and particular attention to end-user agreements.


References:
Friedman, B., & Millett, L.  (1995).  "It's the Computer's Fault" -- Reasoning About Computers as Moral Agents.  Retrieved from http://www.sigchi.org/chi95/proceedings/shortppr/bf2_bdy.htm
Nissenbaum, H.  (1996).  Accountability in a Computerized Society.  Science and Engineering Ethics, 2(1), 25-42.  doi:10.1007/BF02639315
Thompson, D.  (2011, January 28).  Designing Responsibility: The Problem of Many Hands in Complex Organizations.  Harvard University.  Retrieved from http://scholar.harvard.edu/files/dft/files/designing_responsibility_1-28-11.pdf