About algorithms being black boxes

For the first assignment of 36111 Philosophies of Data Science Practices, I am exploring the emerging practice of holding algorithms accountable.

Often, people refer to algorithms as black boxes.

Merriam-Webster gives three different definitions of a black box:


1. a usually complicated electronic device whose internal mechanism is usually hidden from or mysterious to the user; broadly, anything that has mysterious or unknown internal functions or mechanisms
2. a crashworthy device in aircraft for recording cockpit conversations and flight data
3. a device in an automobile that records information (such as speed, temperature, or gasoline efficiency) which can be used to monitor vehicle performance or determine a cause in the event of an accident

Usually, when people refer to algorithms as black boxes, they mean a type 1 black box. So what does that imply about how we interact with these black boxes? That they are something mysterious that munges inputs and turns them into instructions you blindly follow?

If you treat algorithms like this, you may end up opening up a type 2 black box.

Let me explain what I mean with an example, courtesy of the TV series Air Crash Investigation (see the episode, perhaps illegally uploaded to YouTube, here).

In 2002, two planes collided mid-air over Überlingen in Germany, tragically killing everyone on board, mostly children. Afterwards, the devastated air traffic controller was murdered in his front garden by a father, driven mad by grief, who had lost his entire family in the crash (Wikipedia). Absolutely awful.

One of the contributing factors to this disaster was confusion in the human/computer interaction surrounding the Traffic Alert and Collision Avoidance System (TCAS) (see Kuchar and Drumm for how it works). TCAS is basically a system of sensors and algorithms that alerts pilots to potential collisions and advises them what action to take to avoid them. In this incident, the instructions from TCAS conflicted with those from the air traffic controller. One pilot followed TCAS, the other followed air traffic control, so both planes descended into each other, ultimately ending in tragedy.

The TCAS software itself did not fail, but because there was no international rule on what to do in these circumstances, the overall system failed. The supporting infrastructure was not there. The human-computer interaction was not adequately considered, and training was insufficient. A previous incident in Japan (Wikipedia) had been reported to the International Civil Aviation Organization, but no action had been taken. (If that earlier crash had occurred, 677 people would have died, the largest death toll in aviation history.)

So my work is going to consider not just countering machine bias in the algorithm itself, but also the context in which the algorithm is used, and whether that context is appropriate.

At the end of the day, holding an algorithm accountable is actually a ludicrous concept. It can only be the humans who are accountable.

On countering machine bias

ProPublica have a whole section dedicated to this topic. So glad to see this, and it appears they have covered insurance companies charging higher premiums in minority neighbourhoods, which I always suspected was happening. Can't wait to read that!

This is a topic for another blog post!
