Why the Future Needs Us

Nov 25

Many science fiction writers have painted a picture of a future where humanity and technology merge and the lines between man and machine blur. Some scientists have gone so far as to argue that the future doesn’t need us, apparently lending credence to those science fiction stories. While this is fertile ground for the ethical analysis of choice, the argument rests on some false assumptions, which are discussed below. One characteristic human tendency is to deflect blame and avoid direct responsibility. For example, the phrase “The devil made me do it”, which some trace back to the Bible, suggests that we are not the masters of our own destiny. We mistakenly blame technology for the bad things that happen in our day while simultaneously celebrating it for the good it accomplishes. Such a bipolar perception of man vs. technology can confuse the ethical analysis of human choice whenever a technological component is involved.

At the root of the false assumptions mentioned above is the mistaken association between compute capacity and sentience. Compute capacity refers to a machine’s ability to carry out computations that mimic products of human thought, as a processor in a modern computer does. When a processor performs math, renders three-dimensional images, or simulates complex systems, it is doing what a human taught it to do. As a result, some people call the processor the ‘brain’ of a computer and attribute human characteristics to it, such as labeling a computer ‘smart’. Observation of nature is often the impetus for technological invention, which may make this type of association feel natural. However, no matter how much compute capacity grows with future technological advances, it lacks subjectivity by its nature. The human who writes the program remains the subjective party.

Real risk is unbounded trust

Trust is a result of repeated experiences where expectations agree with outcomes. In his article “Why the Future Doesn’t Need Us”, Bill Joy accurately identified that human trust in technology can lead to dependence:

“… the human race might easily permit itself to drift into a position of such dependence on the machines that it would have no practical choice but to accept all of the machines’ decisions”

In the context of the technological revolution, the idea of the human race voluntarily surrendering its independence to the machines’ better judgment sounds new and frightening. In reality, it is nothing new. The deference given to smart machines today is no different from that given to magicians, rulers, and natural phenomena throughout history. Powerful and intelligent individuals; natural occurrences such as volcanoes and earthquakes; and even the stars, so distant and unaffected by human happenings, have served the same purpose in previous eras that technology serves in ours. It is natural to doubt one’s own ability and trust in another’s.

People want to be told what to do. One notable example appears in David Brin’s book The Postman. After a nuclear holocaust, one group of people is held together by its hope in a supercomputer said to be devising a plan to rebuild humanity. The group’s unity and survival come to center on protecting this computer until it can complete that plan. What the people don’t know is that the computer was destroyed and that its shell was rigged to appear functional in order to give them hope. People would rather rely on someone or something other than themselves.

People who suffer are often willing to suspend their disbelief in pursuit of relief. One notable, though far from isolated, product in our history is snake oil. As a population we may believe we’re growing more sophisticated, but that doesn’t seem to diminish the lure of a cure-all. In our day, people and businesses still sell snake oil; they simply use the language of the day and claim to support it with research. Whether it’s omega-3 fatty acids or cortisone or vitamins or magnetic clothing inserts, humans tend to believe just about anything. I’m reminded of a story told to me by an ad executive whose father got his start in advertising by traveling with a man selling a “cure of all things, including old age”. As a boy, his father’s only job was to stand in front of the wagon holding the bottle while the respected man gave his speech about the product. Afterward, as people came up to ask whether the man was lying about his claims of being nearly 200 years old, the boy was instructed to say, “I really don’t know. I’ve only been with him for the last 80 years”.

It’s all been done before, and our technological advances aren’t any different. Just as with previous cures, they bring their own pitfalls. That doesn’t stop us from giving our trust to new technologies that make big promises, effectively surrendering our independence. The nature of ‘smart’ technology provides a better idea of what humans are really doing when they place their trust in it.

The real nature of computers

Computers are the ultimate instruction followers, and the same is true of machines controlled by computers. Very specific instructions are required to perform even very small tasks. These instructions specify particular types of inputs, algorithmic processes applied to those inputs, and an output of a predefined type. The rigid character of these inputs and outputs is what drives that specificity; any deviation typically results in failure.
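A minimal sketch can make this rigidity concrete. The function below is a hypothetical illustration, not anything from Joy’s article: the input type, the algorithm, and the output type are all fixed in advance, and the machine has no way to accommodate an input that deviates from what it was told to expect.

```python
def average(values: list[float]) -> float:
    """Apply a fixed algorithm to an input of a predefined type,
    producing an output of a predefined type (a float)."""
    if not values:
        raise ValueError("input must be a non-empty list of numbers")
    return sum(values) / len(values)

# The expected input type: the instructions are followed exactly.
print(average([2.0, 4.0, 6.0]))  # -> 4.0

# A deviation a human would handle effortlessly -- the same numbers,
# but written as text -- causes the machine to fail outright.
try:
    average("2, 4, 6")
except TypeError as exc:
    print("failure:", exc)
```

The machine does not “understand” that the string contains the same numbers; it only follows the instructions it was given.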

What about artificial intelligence (AI)? Increasingly sophisticated algorithms and advanced statistical analysis are employed in ways that mimic human reasoning and response. These mechanisms allow for less specific inputs and a broader range of outputs. Learning is accomplished by folding past outcomes into the current statistical analysis. Depending on the assessment methodology used, it may even appear that an AI program can reason. However, to suggest that these techniques produce subjectivity or feeling, both of which are required for sentience, is incorrect. At their finest they are masterful recreations of human thought, brought about by skillful technologists.
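To see how “learning” can be nothing more than statistics, consider a toy sketch (my own hypothetical example, far simpler than any real AI system): a predictor that records past outcomes and predicts whichever one has occurred most often. It folds each new outcome into its running statistics, and its output changes as a result, yet nothing resembling subjectivity or feeling is involved.

```python
from collections import Counter

class FrequencyPredictor:
    """A toy 'learner': it predicts whichever outcome it has seen
    most often. Learning here is nothing more than folding past
    outcomes into the current statistics."""

    def __init__(self) -> None:
        self.history: Counter = Counter()

    def learn(self, outcome: str) -> None:
        # Include the past outcome in the running statistics.
        self.history[outcome] += 1

    def predict(self) -> str:
        if not self.history:
            raise ValueError("no past outcomes to learn from")
        return self.history.most_common(1)[0][0]

predictor = FrequencyPredictor()
for outcome in ["rain", "sun", "rain", "rain"]:
    predictor.learn(outcome)
print(predictor.predict())  # -> rain
```

Real machine learning replaces this crude frequency count with far more sophisticated statistical machinery, but the principle is the same: past outcomes reshape future outputs, and the appearance of reasoning emerges from arithmetic.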

Guns and drugs are not ethically responsible for the damage done with them; the people who use them are ethically and legally responsible. In the same way, ethical and legal responsibility related to technology must trace back to the technologist responsible for creating and employing it.

Risk of decreasing awareness

Many technologies promise improved precision. Whether in GPS navigation, accounting, or robotic assembly in manufacturing facilities, the promise of improved accuracy and precision is appealing. It can be as simple as a digital Rolodex that stores contact information. Yet the pursuit of precision all too often results in a loss of intuition: the expected improvement in performance brings dependence on the technology rather than improved mental capacity.

Unintended outcomes

When trust in technology combines with decreasing awareness, unintended consequences can follow. The medical industry provides a number of examples. Before technology was used to process and dispense medications, doctors and nurses had to be intimately familiar with those drugs and their interactions. Doctors required deep knowledge of symptoms and their relationship to various environmental and physiological details in order to diagnose illness accurately. What happens when medical technology fails? Who is liable: the doctor, or the technology on which he relied?

The increasing complexity of modern technological systems inhibits manual correction. Professionals who rely on technological systems may be unprepared to make decisions independent of technology. Interestingly, that reliance also shields professionals from liability when they make mistakes based on faulty technology. Is it possible that liability concerns, in a world of increasing litigation, might inhibit necessary intervention by some professionals even in cases where it’s clear the technology has failed?

The Internet of Things

The Internet of Things refers to the growing number of devices that are connected to one another and communicate without humans initiating that communication. The privacy concerns related to an Internet of Things have become a hot topic in recent months. The interconnected devices range from objects as small as cell phones to machines as large as automobiles. While privacy is one concern, there are others who apply Bill Joy’s original argument to the Internet of Things, suggesting that in time these interconnected devices may replace humans.

It’s clear that interconnected devices are here to stay, but they aren’t self-aware and they never will be. There are commercial and even social reasons to portray technology as aware or intelligent, as with IBM’s Watson and Apple’s Siri. In spite of this appearance of intelligence, and for the same reasons mentioned above, the interconnected network of devices is not on a path to sentience. That doesn’t mean it lacks ethical implications. Rather, it means that the ethical duties and consequences related to these technologies must apply to those responsible for the technology, not to the technologies themselves.

Sustainability and the fragile future of technology

The idea of a future teeming with autonomous technology that operates independently of humans is quickly tempered when contrasted with the current state of available energy sources. Both renewable and finite forms of energy present availability and capacity problems. Current battery technology can power increasingly power-hungry devices for only short periods of time. Even if technology reached a level of sophistication that enabled human-free extraction of energy from the earth, those energy sources would eventually deplete, and mobility would always be limited.

One man can plant a seed, but no one man can create a new technological device. The fundamental building blocks of today’s technology have grown so complex that future devices cannot be built without significant collaboration. So many disciplines are involved in even the most basic circuits that without advanced knowledge of semiconductor device fabrication, materials science, chemistry, mathematics, and manufacturing, the development of new technologies would effectively halt.

The current state of technology is far more fragile and far less sustainable than the general population realizes. As long as a constant, growing supply of energy is maintained, our technology will serve us. When that supply runs dry or becomes too expensive, what will happen to our technology? For example, if the price of gasoline suddenly jumped to $20 per gallon, how many people would continue to commute the way they do today? There would likely be an immediate increase in the number of perfectly functional cars sitting idle in garages around the country.

The real drivers are profit and power, the ethics haven’t changed

Power and profit are still the primary decision drivers in society. Capital investment, which leads to technological advancement, is driven by both. Whether the objective is military dominance, commercial advantage, or social notoriety, the motivations behind the pursuit of technology remain largely constant.

This means that the ethics haven’t changed either, and they still apply to humans, not technology. Sorry Mr. Asimov.