The Artificial Impasse

Moore’s Law observes that the number of transistors that can be packed onto an integrated circuit doubles approximately every two years, yielding denser, cheaper, and faster electronics. This observation, first articulated by the law’s namesake, Intel co-founder Gordon Moore, promises exponential growth in computing capability. Thus far, technological development has done little but prove Moore right. From self-driving cars to all-encompassing smart home systems, the pace of technological advancement is something that excites us and dictates our futures.
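
To see how quickly that doubling compounds, here is a back-of-the-envelope sketch in Python; the starting point (the roughly 2,300 transistors of Intel’s 1971 4004 chip) is chosen purely for illustration, and real fabrication only approximates the clean doubling assumed here:

```python
# Back-of-the-envelope Moore's Law: a doubling every two years,
# anchored (for illustration only) to the ~2,300-transistor Intel 4004 of 1971.
def transistors(year, base_year=1971, base_count=2300):
    """Project transistor count, assuming a clean doubling every two years."""
    return base_count * 2 ** ((year - base_year) / 2)

for year in (1971, 1991, 2011):
    print(f"{year}: ~{transistors(year):,.0f} transistors")
```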

But with all the focus on the vision of a glimmering technological future, society has forgotten the ethical quandaries and barriers posed by the advancement of technology.

Most daunting of all these dilemmas are the ethical challenges that arise from the development of advanced artificial intelligence, or AI. In particular, the growing scope and complexity of AI threaten to upend the societal norms that underpin basic interaction. Writing for the Cambridge Handbook of Artificial Intelligence, Oxford philosophy professor and noted ethicist Nick Bostrom tackles the question of how to approach the ways that AI will soon intersect and interact with society. Bostrom poses a hypothetical scenario in which new and complex machine learning systems are eventually used to streamline and evaluate mortgage applications. He then asks what would happen if an applicant believed they had been rejected on the basis of their race. With AI developments such as “complicated neural network[s]” and “genetic algorithm[s] produced by directed evolution,” Bostrom warns that “it may prove nearly impossible to understand why, or even how, the algorithm is judging applicants based on their race.”
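
Bostrom’s worry can be reproduced in miniature. The hypothetical sketch below, on purely synthetic data, trains a small neural network that is never shown applicants’ race; a correlated proxy feature lets it reconstruct the biased historical pattern anyway, and nothing in the fitted weights explains why:

```python
# A miniature of Bostrom's mortgage hypothetical, on synthetic data.
# The model never sees the protected attribute, but a correlated proxy
# (a stand-in for something like ZIP code) reconstructs the pattern anyway.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 2, n)             # protected attribute (hidden from the model)
zip_proxy = group + rng.normal(0, 0.3, n) # feature correlated with the group
income = rng.normal(50, 10, n)            # legitimate feature
# Historical approvals were biased against group 1:
approved = (income + 15 * (1 - group) + rng.normal(0, 5, n)) > 55

X = np.column_stack([zip_proxy, income])  # note: 'group' is NOT a feature
model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000).fit(X, approved)

for g in (0, 1):
    rate = model.predict(X[group == g]).mean()
    print(f"approval rate for group {g}: {rate:.0%}")
```

The disparity survives even though race was deleted from the inputs, and the network’s weights offer no human-readable account of it; that is the position Bostrom’s hypothetical bank would find itself in.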

What then would the bank have to do? Hold the designer liable? More daunting still are the residual legal and ethical questions that this example and countless others pose for a legal system that will be tasked with evaluating the behavior not only of humans but of advanced AI systems. At what point does an AI system become liable for its actions? Ethicists agree, for now, that there isn’t a clear answer.

Do not, however, think that this means we are off the hook, or that AI has yet to arrive. Much of the high-level industrial, municipal, financial, and research and development work that goes on today is driven by, or assisted by, some form of machine-learning-based artificial intelligence. For years now, labs that need to conduct large-scale experiments or wade through oceans of data have designed their own high-level, yet highly specialized, AIs as research tools. But because these systems are not optimized for broad-scale learning, adaptation, and evolution, and are not designed or tasked with interacting with ordinary adults, much of the research into and discussion of AI’s ethical dilemmas has been stunted. Meanwhile, AI development itself has continued unhindered.

The wanton path traversed by America’s technological juggernaut has drawn both pleas for common sense and stark warnings from top scientists and innovators, including physicist Stephen Hawking and Microsoft founder Bill Gates.

Thankfully, at least for a few more years, the non-specialized advanced AI systems integrated into daily human life that Bostrom and others envision have yet to arrive.

In the meantime, developing theories and research on the role that AI can safely play in society is critical. Consider the development of new and innovative “smart” cars. Sharing a name with the ultra-compact Smart car brand, this class of vehicle is being designed by tech giants like Google, Apple, Amazon, and Tesla. Smart cars, designed and marketed around their technological integration, promise to put drivers on the cutting edge of the intersection of technology and automobiles.

The problem these new smart cars bring is their onboard artificial intelligence systems. Recently, that has meant features that give the AI control and oversight over parts of the vehicle in ways that remove operator responsibility. Features that started as automated parallel parking have evolved into self-driving cars. With each development, the walls of basic societal interaction tumble down, and unanswered, unregulated confusion follows.

Take, for example, the release of the latest onboard operating system for Tesla vehicles. Its features now include the ability to drive autonomously for stretches at a time: an intriguing and attractive capability, but one that poses many grey-area issues. Take liability. If the system is designed to transport people safely and without driver input, how can a driver be liable if the onboard computer causes a wreck? Along the same lines, in 2015 the MIT Technology Review posed the deceptively simple question, “how should the [smart] car be programmed to act in the event of an unavoidable accident?” In essence, who should the car choose to kill if someone is going to die, and why?
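
To make the grey area concrete, here is a deliberately crude, hypothetical sketch of what “programming the choice” could look like: a policy that scores candidate maneuvers by expected harm and picks the minimum. Every option and weight in it is an invented assumption, and that is precisely the problem; someone has to pick them.

```python
# A deliberately crude, hypothetical crash policy: score each candidate
# maneuver by expected harm and pick the minimum. All numbers are invented
# assumptions; the ethical problem is that *someone* must choose the weights.
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    p_harm_occupants: float    # estimated probability of harming occupants
    p_harm_pedestrians: float  # estimated probability of harming pedestrians

def expected_harm(m: Maneuver, w_occupant=1.0, w_pedestrian=1.0):
    """Weighted expected harm. Who sets these weights, and on what basis?"""
    return w_occupant * m.p_harm_occupants + w_pedestrian * m.p_harm_pedestrians

options = [
    Maneuver("brake straight ahead", 0.2, 0.6),
    Maneuver("swerve into barrier", 0.7, 0.0),
]
print("policy chooses:", min(options, key=expected_harm).name)
```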

Still unaffordable to the average American and thus not commonly seen, Tesla’s fleet of electric cars is a shining beacon of both the exciting potential for innovation in smart cars and the sticky ethical quandaries left in their wake.

However, this is not just a question of advanced scientific research or even expensive self-driving cars. The development of AI systems implicates every person’s fundamental privacy and anonymity. Massive tech conglomerates like Alphabet (the newly minted parent company of Google), Apple, Amazon, and Facebook are all invested in AI companies and systems. This has led to side projects like Amazon’s Alexa assistant and Google’s Go-playing AI, which beat an international Go master. While these advances may seem trivial and entertaining, individuals and consumers need to focus less on the results of AI development and more on how Google, Amazon, and the rest are integrating and developing their systems, and on what that means for the consumer.

The reality of AI carries a steep privacy cost for the consumer. Take Google’s popular voice assistant, Google Assistant (formerly Google Now). Google Assistant is an advanced AI system trained to learn your patterns and habits in order to aid in tasks like scheduling, emailing, or even listening to music. For Google, this means collecting individual data to improve the predictive algorithms that choose which ads, recommendations, and websites you will see. In a purely outcomes-dominated frame of reference, sacrificing our own metadata for the sake of helpful AI development and growth seems harmless. But that is a forced, and flawed, paradigm for evaluating research and development. Our role as consumers and members of society is to ensure that these projects and their developers truly meet the burden of advancing technology ethically, without costing us our rights and our dignity.
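
The mechanics behind “learning your patterns” are often mundane. The sketch below assumes nothing about Google’s actual systems; it simply illustrates how timestamped activity logs, pure metadata with no content at all, already yield a predictive model of a person’s habits:

```python
# Hypothetical sketch of habit learning from metadata alone: given only
# timestamped activity logs (no content), predict a user's next action
# from the hour of day. This assumes nothing about Google's real systems.
from collections import Counter, defaultdict

log = [  # (hour of day, action) -- invented sample data
    (7, "news"), (7, "news"), (8, "music"), (12, "email"),
    (7, "news"), (12, "email"), (18, "music"), (18, "music"),
]

by_hour = defaultdict(Counter)
for hour, action in log:
    by_hour[hour][action] += 1

def predict(hour):
    """Most frequent past action at this hour, if any history exists."""
    counts = by_hour.get(hour)
    return counts.most_common(1)[0][0] if counts else None

print(predict(7))  # -> 'news': the habit, recovered from metadata alone
```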

Far more critical is the relationship between AI development and students. Curricula need to be designed to temper excitement about technology with a heavier focus on the ethics of technology. Whatever a student’s major, the lessons posed by AI require that we place a larger general focus on the changing and developing ethics of humankind.
