Artificial Intelligence: The brave new world of moral issues

By Chris Brooks

Data is not information.

Information is not knowledge.

Knowledge is not understanding.

Understanding is not wisdom.

***

I often open my Artificial Intelligence classes with this epigram, an amalgam of quotes from Albert Einstein, Clifford Stoll, and Frank Zappa. Students are asked to think a lot about the ways in which computers can be made to act intelligently; to really grapple with this, we need to be clearer about the sorts of reasoning we want our machines to do.

Data is not information. We are awash in data — scores, articles, sensor readings, television shows, songs, photos, and more. It’s estimated we create roughly 1.14 trillion megabytes of data each day, a figure that keeps growing every year. But almost all of this data is worthless to us. It only becomes information when it tells us something new, helps us to make better decisions, or illuminates something we didn’t understand before. Numbers on a medical chart are data; they become information when we attach labels, units and times that allow a doctor or a computational assistant to give them meaning and context.
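
As a small, made-up illustration (the field names and values below are invented), the same three numbers mean very little on their own, but become something a clinician, or a program, can act on once labels, units and times are attached:

```python
# Three bare numbers: data, but not yet information.
raw_data = [142, 91, 37.8]

# The same numbers with labels, units and times attached (values are invented).
information = [
    {"measurement": "systolic blood pressure",  "value": 142,  "unit": "mmHg",      "time": "2024-03-01T09:15"},
    {"measurement": "diastolic blood pressure", "value": 91,   "unit": "mmHg",      "time": "2024-03-01T09:15"},
    {"measurement": "body temperature",         "value": 37.8, "unit": "degrees C", "time": "2024-03-01T09:15"},
]

print("data:", raw_data)
for reading in information:
    print(f'{reading["measurement"]}: {reading["value"]} {reading["unit"]} at {reading["time"]}')
```

Nothing about the numbers changed; only their context did.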

Information is not knowledge. Information is mathematically defined as a reduction in uncertainty. That is, the amount of information in a message is measured by how much it reduces our uncertainty about the world. More pragmatically, information tells us something we didn’t know: the patient’s blood pressure has increased, the fuel tank on the car is almost empty, a new episode of our favorite TV show is available. But to apply information in a complex setting, we need to be able to integrate it with other facts about the world; we call this web of relationships knowledge. Knowledge might tell us that the patient is a 53-year-old male with a history of heart conditions, or that there’s a recharging station 10 miles away. Knowledge is often implicit and informal; exposing it and making it actionable is a core challenge for modern AI systems.
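
For readers who want to see the math, Shannon’s classic formulation captures this idea precisely: the uncertainty in a quantity X (say, whether the fuel tank is nearly empty) is its entropy, and the information an observation Y (the gauge reading) carries about X is exactly how much that uncertainty shrinks.

$$
H(X) = -\sum_{x} p(x)\,\log_2 p(x),
\qquad
I(X;Y) = H(X) - H(X \mid Y)
$$

A message that leaves us exactly as uncertain as we were before carries no information at all, however much data it contains.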

Knowledge is not understanding. The map is not the territory. Having the salient facts about a situation is not the same as truly understanding it. The more complex the domain, the more that understanding it requires integrating disparate sources of knowledge, some of which may be difficult to articulate. A computational medical assistant can integrate knowledge about cancer drugs, prognoses, side effects and treatment regimens from different data sources to develop a treatment plan, but it may not be able to empathize in a way that allows it to effectively communicate this to an elderly patient with a fear of medical treatment. An autonomous vehicle can know the local traffic rules, see the road and know how to maneuver the car, which allows it to solve simple driving tasks, but properly transporting passengers through rush-hour traffic in an urban setting requires it to understand the system it operates in and the people it interacts with, and all their complicated, unwritten rules and objectives, at a much deeper level.

Understanding is not wisdom. Wisdom tells us how to proceed. Which action is truly the best? What values should we uphold? What are the fundamental objectives we ought to pursue? Computational agents are able to objectively consider outcomes and recommend those that are preferred — one treatment may produce better results than another, or one delivery route may be faster than another. But we have not yet figured out how they might engage with these questions of meaning and purpose — what is the best holistic treatment for a patient? What are the problems worth solving?

This idea of reasoning about reasoning, asking those deeper, more fundamental questions, is a hallmark of Jesuit education, and one that we try to develop in our students. As computing, artificial intelligence (AI) and machine learning (ML) become a greater part of our lives, more and more of these levels of reasoning are carried out on our behalf by computational entities, or agents. In some situations that is fine and works perfectly well; in others, the complexity and the level of risk make it unacceptable.

In general, we are comfortable giving over low-level data and information tasks to computers. We are happy to have an app help us sort our photos or recognize spam. But as the risk level or the knowledge complexity of the problem increases, we become less comfortable with giving this responsibility over to a computational agent. Most of us would not be comfortable with computers performing surgery unassisted or acting as police — these are high-risk activities that require a great deal of understanding and wisdom.

This highlights a very important point. In popular culture, the AI is usually a wise-cracking robot sidekick, or perhaps a super-intelligent, world-destroying supercomputer. These make great characters, but they imply that the primary goal of building AI is to create an other, a machine capable of thinking for itself, and that the core ethical challenges revolve around making sure this other behaves correctly and building the safeguards needed to prevent a takeover by a hostile computer.

But this is not what we are currently building. The term “artificial intelligence” is misleading here; “augmented intelligence” is a better fit. What we’re creating today is not a different kind of entity that stands apart from us with its own morals and desires. Rather, we’re creating systems that extend our own knowledge, understanding, capabilities and biases. Therefore, the question is not how we ensure that an artificial being can behave ethically, but the harder, ongoing question of how we as humans use these powerful new tools in ways that are ethically appropriate. This includes both constraints, which ensure that we prevent harm, and goals, which determine what problems we should be solving.

Machine learning is a rich and varied field, with many different approaches and emphases. In the last decade, it’s been transformed by the successes of a family of methods known as deep learning. Deep learning is particularly effective at a specific type of task: mapping a high-dimensional input (such as an image, sound, or sequence of words) into a high-dimensional output (such as a different sequence of words, or another image or sound). This has allowed for the development of a dizzying array of applications, including automated captioning of photos and videos, synthesis of convincing fake videos, and generation of seemingly intelligent text.
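
To make the shape of that task concrete, here is a toy, untrained sketch (the dimensions and weights are arbitrary, nothing like a production model): a 784-number input, standing in for a small image, is pushed through layers of weights to produce a 1,000-number output, standing in for scores over a vocabulary of possible caption words.

```python
# A toy illustration of the kind of mapping deep learning performs: a
# high-dimensional input is transformed, layer by layer, into a
# high-dimensional output. The weights here are random and untrained; in a
# real system they would be adjusted automatically from millions of examples.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0, x)

# Arbitrary, illustrative dimensions: 784 "pixels" in, 1,000 "word scores" out.
W1 = rng.normal(scale=0.05, size=(784, 256))
W2 = rng.normal(scale=0.05, size=(256, 1000))

def forward(image_pixels):
    """Map a high-dimensional input to a high-dimensional output."""
    hidden = relu(image_pixels @ W1)   # learned intermediate representation
    return hidden @ W2                 # one score per possible output word

fake_image = rng.random(784)           # stand-in for a flattened photo
print(forward(fake_image).shape)       # (1000,)
```

Real systems differ enormously in scale and architecture, with billions of weights rather than a few hundred thousand, but the underlying idea, a learned mapping from one long list of numbers to another, is the same.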

But it’s important to understand what is and is not happening in this learning. When humans learn, we typically think of the acquisition of concepts, which come together to create knowledge. We generalize, we make connections, and we abstract. Deep learning systems are not creating abstract symbolic concepts, or doing the sort of metareasoning we ask of our students (although these are sometimes components of other AI systems). Rather, they are using subsymbolic statistical mechanisms to make predictions. They are acting at the level of information; any knowledge or understanding is imposed by the humans creating, using or viewing the system. Because these learners exist within a larger human system, they are subject to the same flaws and biases that make up that system.

For example, one topic I teach all my students is hidden bias: a bias that exists in a system as a result of some underlying cause that is not immediately apparent. Often, these hidden biases are the result of structural prejudices or inequities.

A classic example of this is the COMPAS system. COMPAS was originally developed as a tool to reduce bias in the detention of criminal defendants awaiting trial. The goal was to use machine learning to replace both human judgment, which can clearly be biased, and cash bail, which is known to disadvantage low-income defendants.

Trained on a database of previous defendants, COMPAS would learn from past data to predict a risk score for each new defendant. Low-risk defendants could be freed, while high-risk defendants would remain in jail pending trial. Human bias would be left out, leading to a fairer system. Protected characteristics such as race and ethnicity were withheld from the model, so it was assumed that the system would be fair. However, the result was not what was expected.

It turns out that it’s not possible to design a system that both a) uses risk scores that mean the same thing for defendants of different races and b) makes its mistakes, wrongly jailing defendants who are never re-arrested, at the same rate for every race. COMPAS can either score defendants of different races differently, or else wrongly jail a disproportionate number of Black defendants.

The reason for this is strikingly simple. In the real world, Black and white defendants are re-arrested at different rates, often because of racist policing practices such as New York City’s famed “broken windows” policy. This means that Black defendants are overrepresented among the re-arrests in the dataset COMPAS learns from. Since COMPAS is trained on biased data, it learns and perpetuates those biases, even though race and ethnicity were withheld. The bias is systemic.
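
A small simulation makes the tension concrete. Everything below is invented for illustration, not the actual COMPAS model or its data: each group receives a perfectly calibrated risk score (the score equals each defendant’s true probability of recorded re-arrest), and both groups are held to the same detention cutoff. The group with the higher recorded re-arrest rate still ends up with far more of its never re-arrested members wrongly detained.

```python
# An illustrative sketch of the fairness dilemma, with made-up numbers.
import numpy as np

rng = np.random.default_rng(1)

def simulate_group(n, base_rate):
    """Each defendant has a true re-arrest probability; the risk score is
    perfectly calibrated because it equals that probability exactly."""
    a = 2.0
    b = a * (1 - base_rate) / base_rate   # Beta distribution with mean = base_rate
    risk = rng.beta(a, b, size=n)         # calibrated risk score in [0, 1]
    rearrested = rng.random(n) < risk     # recorded outcome (itself shaped by policing)
    return risk, rearrested

threshold = 0.5  # the same "detain" cutoff applied to both groups

groups = [("Group A (recorded re-arrest rate ~45%)", 0.45),
          ("Group B (recorded re-arrest rate ~25%)", 0.25)]

for name, rate in groups:
    risk, rearrested = simulate_group(200_000, rate)
    detained = risk > threshold
    # Among people who were never re-arrested, what share were wrongly detained?
    wrongly_detained = detained[~rearrested].mean()
    print(f"{name}: {wrongly_detained:.1%} of those never re-arrested were detained")
```

One could equalize those error rates by using different cutoffs for different groups, but that means treating identical risk scores differently by race, which is exactly the other horn of the dilemma.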

In this case, the fundamental AI dilemma is not the creation of an other that will take over or enslave us and needs to learn ethics. Rather, the issue is us — the risks and flaws in COMPAS come from developing systems that perpetuate our own biases, that allow us to take action without grasping the consequences, and that replicate the inequities and injustices that are part of our world. This is one of many examples of AI replicating existing biases. It has also been seen in face recognition, where systems were less able to recognize Black faces because of unrepresentative training data, and in hiring, where systems built to identify promising recruits replicated their company’s historical gender bias.

So we need not only to build better, smarter AI; we need to be better and smarter ourselves. We need to be more aware of what we are building and why we are building it. Where are we getting our data? What generated it, and what underlying biases are present? How do we really define “progress” and “success”? Is it quarterly profits, or “more eyeballs,” or is it something deeper, more impactful? How does our work have meaning?

This hard work, the work of thinking carefully and deeply about how to proceed, is the hallmark of Jesuit education. Ignatian pedagogy has at its core a) discernment, which teaches us to think carefully about what we choose to do and the values we use in making those choices; b) magis, which calls us not simply to “do more,” but to strive to be better, closer to God; and c) solidarity, which reminds us to know and connect with the people affected by this technology.

Within the AI community, there are efforts to do better: to direct our work toward problems such as climate change and sustainable development, and to develop frameworks for measuring and mitigating bias. Within the computing education community, more emphasis on these issues is needed. Discussions of computing’s role in society tend to focus on narrow topics of professional ethics, privacy, and intellectual property; these are important questions, but they focus on avoiding harms rather than on choosing to actively do good.

Jesuit universities are poised to take a leadership role in these fundamental questions of how AI and ML should be developed. We explicitly engage questions of understanding and wisdom. We train students to ask deep questions and to walk in accompaniment with the poor and oppressed, which allows us to better understand their true needs. And we form students who not only ask how they might avoid inadvertently harmful outcomes, but who clearly understand the promise, implications and limitations of these tools, and who possess the wisdom to use them to fashion a more humane and just world.

We have both the challenge and the unique skill set to help students connect this formational, experiential knowledge with their technical knowledge to create a deeper understanding. What we need is a concerted effort to integrate our Jesuit way of proceeding with the rapid development taking place in technical spaces.

As we stand at the precipice of this brave new world, we run the risk once again of deploying technologies whose implications we haven’t truly grasped. In order to use AI and ML effectively, it’s critical that we understand what they are and are not. AI isn’t going to replace us; rather, it’s going to extend our capabilities and flaws. If we focus our energies on preventing the next Skynet or Ultron, we’ll miss the mark; we don’t need to create rules for intelligent robots.

Rather, we need to do the hard work of improving ourselves.

Christopher Brooks is professor of computer science and engineering at the University of San Francisco and a past member of the National Seminar on Jesuit Higher Education.

Five Suggestions for Ethical Scientists

Ask: Why are you doing this? What is the fundamental problem you are addressing in your work? Who will it help?

It’s easy to get caught up in the hamster-wheel cycle of grant proposals, articles, and deadlines, and to forget to reflect on the larger purpose of our work. In this context, it’s helpful to recall that St. Ignatius reminds us to discern our vocation, to ask whether the choices we make are in service of God’s glory and reflective of our truest, deepest selves.

Ask: Who is included, and who is not?

Have you heard from all the people who might be involved or affected by your work? Have you considered and incorporated all the perspectives necessary to help you understand the implications of inclusion/exclusion in your work? We all have blind spots and places where our experience is lacking; it’s essential to reflect on and identify those gaps.

Ask: How can you connect your work to others? How might your work impact other areas?

Remember to think in systems and to see your contribution as one part of a larger whole. Science is a collaboration, not a competition.

Question authority, both in how you obtain resources and in how you align your values with an institution’s agenda.

Doing science can require significant resources, and gaining access to them means advancing the agenda of the entities that control them, such as research labs, corporations, or funding agencies. That agenda might be explicit and laudable, such as addressing something obviously harmful like climate change. More often, though, the agenda is more implicit and more morally complicated, such as manufacturing weapons or serving the goals of large tech companies. As you seek out the resources and opportunities to pursue scientific questions, look deeply at the agendas of the institutions you collaborate with, and ask whether they truly align with your values.

Be your own biggest skeptic.

Physicist Richard Feynman famously said, “The first principle is that you must not fool yourself — and you are the easiest person to fool.” Put differently: It’s easy to interpret your findings in a way that leads to the conclusion you were hoping for or to present your results in a way that discourages honest criticism. Be sure not to succumb to this natural temptation.
