

The hope that technology will serve human purposes has gotten a little more desperate
There’s no question that the topic of artificial intelligence (AI) has everyone’s attention. The world is not short of newsworthy front-page items, nor lacking in seemingly intractable problems to keep us occupied. We have Ukraine, Taiwan, Sudan, the banking crisis, the 2024 presidential election, and a host of other long-term political, economic, environmental, and moral challenges to keep even the best minds awake at night. But articles on AI keep coming.
The issues of “machine learning” are not new. In a recent discussion with one of my mathematics/computer science colleagues, he mentioned that this had been a core aspect of his graduate work in the early 2000s. For years we have watched the gradual advance of AI in everything from medical diagnostics, credit ratings, and assembly lines to home security and entertainment systems. But something has jarred us out of our apathetic slumber.
Was it the discovery that AI in such forms as ChatGPT could not only process information but generate text? Was it the Future of Life Institute’s open letter, signed by such luminaries as Elon Musk, calling for a six-month pause on AI research (assuming such a thing were even possible)? Was it the current global situation and the growing awareness of AI’s potential to exacerbate the power of misinformation hostile to the West and its friends? Whatever it was, we are awake now and trying to make sense of our uneasiness, wondering how to move from paralyzing fear to productive agency.
Given the media’s growing preoccupation with AI over the past two months, I have tried to draw on the tools of my historian’s training to sort out what has not changed and what is truly new here, seeking to identify what merits our society’s most earnest and imaginative efforts to ensure that AI continues to serve the common good rather than sabotage the future of human flourishing. Here is what I believe remains the same:
- We know that all technology has the potential to be used for great good or great ill. The most graphic example here might be nuclear power.
- We know that new technology has often threatened existing jobs—at least initially—only to create new employment opportunities in its wake. The classic example is how the power loom, invented in the late eighteenth century, put English handloom weavers out of work, provoking the Luddite riots of the early nineteenth century—the source of the label applied ever since to fear of new technology. More recently, we have the story of the displacement of human “computers” by mainframe machines made famous by the movie and book Hidden Figures.
- We know that moral, ethical, and theological reflection almost always trails the initial emergence of technology, driven in its early stages by the more visceral motives of curiosity and profit. We know this well from the history of Western exploration and expansion, the initial use of nuclear power during World War II, and the unregulated consumption of natural resources, to mention only a few of the most obvious examples.
- Whenever new technology expands access to information, it poses risks to existing power structures and provokes efforts to control that technology. The invention of the printing press enabled the unprecedented spread of the written word, threatening both religious and political hierarchies of sixteenth-century Europe. This disruptive impact of newly accessible printed material—especially in indigenous languages and notably when combined with efforts to expand literacy instruction—still challenges entrenched elites around the globe.
- We know that all new technology creates new elites with specialized knowledge (and often specialized equipment) that limits access for the majority of the public, and even for those in the position to provide some measure of regulation or oversight. Here, for starters, consider genetics research and digital communication technology.
- To move a bit deeper, we also have come to appreciate how the purported neutrality of technology turns out to be inextricably intertwined with the bias and politics of entrenched power structures. The Royal Society—founded in 1660 and chartered by Charles II of England to free society from the self-interested rivalries of religion and politics and to allow for the free operation of “objective” scientific research—was itself eventually exposed as riddled with the bias and blindness of elitism, which inhibited the emergence of new insights that did not fit established “orthodoxies.”
- Finally, we also know that just as freedom and reliability of information are always bound up with structures of power, so also are they bound up with structures of trust, context, and community. The structures of trust that allow for communication are the same structures of trust that allow for abuse of power, compromised critical thinking, and transmission of bias and prejudice. Therefore, getting information from individuals we know in person or in real time is no guarantee that we will not be misled.
Enough on the side of continuity. The point here is not to dismiss concerns or misgivings about AI that arise from this list of factors but rather to resist engaging in a misguided or simplistic romanticism about the past.
So then, what is the big deal about AI? What’s new?
While it is still too early to speak with authority about what exactly will prove revolutionary about AI, even in this moment there are some hints of what is to come. It is these elements that will require the greatest imagination and creativity. And it is these items that should be occupying our greatest collective attention and calibrating our appropriate level of unease.
- Perhaps most unnerving is the apparent consensus, even among those with technical expertise, that AI is not behaving as expected. No one is quite sure how it works. It does not seem to work like the human brain but, to speak anthropomorphically, has a “mind of its own.” This is not the first time scientists have encountered mysterious elements in their work. But given the rush to put AI to commercial use, there may not be time for humans to understand the technology well enough to stay ahead of the machines.
- The accelerated pace of change surrounding the expansion of AI usage not only leaves us less time to understand the technology; it also leaves us even further behind than usual in applying regulatory or ethical considerations. Further, the sheer scope of the challenge is mind-boggling. Given the volume of data, even a possible 99% accuracy rate in detecting mistakes or intentional “deepfakes” would still leave the public at great risk of misinformation, since at that scale even a 1% miss rate represents an enormous amount of undetected falsehood.
- It is difficult even to imagine sufficiently effective self-regulating communities or systems of accountability in the world of AI. In the academy, the international political arena, and elsewhere, certain structures have historically served (to some degree) to check bias, allow originality and excellence to rise, preserve relevant balances of power, and otherwise inhibit rogue players who threaten the common good; it is hard to conceive of their counterparts for AI. To start with, AI functions at the intersection of multiple semi-independent political, economic, and technical power structures. This reality raises questions about how we can navigate the complexities of the simultaneous concentration and dispersion of power in determining the impact of AI in any given situation. Furthermore, many of the standard categories of analysis that have guided moral reflection, professional ethics, and legislation do not easily transfer to the world of AI. How do we think about such notions as “originality,” “intellectual property,” “free speech,” “personal responsibility,” or “human agency” in the abstract, disembodied world of AI? I do take some comfort in knowing that the federal government is calling on such experienced international human rights theorists as Professor Paolo Carozza of the Notre Dame Law School to help sort through the ethical and legal questions raised by AI. But this only points up one more challenge to the emergence of effective self-regulating communities of accountability for AI: the need for honest and sustained dialogue between those with potentially relevant theoretical knowledge and those with the requisite technical expertise.
- Finally, I will mention the daunting challenge of preparing the public to be critical processors of the growing deluge of AI-generated communication that will impact their lives at every turn. Of course, it has always been complicated to prepare citizens to be critically reflective consumers of information. Bias in the media, political propaganda, and the ever-present need to “consider the source” are not new to human experience. What is new are the differences in technical savvy coupled with the differences in access to the available technology. Take the emerging generation, who have no memory of a pre-digital world: they tend to possess more technical knowledge, but they are also more trusting. Older folks with the questions and concerns often do not have the technical skills, the inclination, or even the vocabulary to engage in productive discussions about what it will mean to be source-critical.
This is not merely an issue of how we get our news about the world or the latest goings-on in Washington. The chasm in technical savvy also inhibits our efforts to navigate the ever-growing challenge of reaching a real human person, whether at a government agency or one’s credit card or cell phone company. What began as the mildly irritating experience of dealing with phone trees is rapidly becoming the risk of encountering questionable machine-generated information for which no real person feels quite responsible or in a position to correct.
As a recently retired academic schooled in the humanities and with absolutely no natural instincts in technology, I am tempted to leave the challenges of AI to someone else. At the same time, I believe it is incumbent on my generation—and on responsible individuals of any age—not to do that. In fact, as a generation that at least remembers the ideals of working for the common good and recalls the existence of a vibrant civil society—where voluntary non-profits and multi-faceted sources of authority enriched and complicated the space between the individual and the regulatory state, and fostered a less polarized and less reductionist world of public discourse—we have much to contribute. Whether as educators committed to the development of individual potential, as citizens of countries that have historically espoused the values of freedom, human rights, and human responsibility, or as parents or grandparents who care about families, we all have work to do in this moment. At the very least, we can educate ourselves. We can ask questions. We can actively promote the development of curricula in schools, professional organizations, and faith-based contexts that ensure that AI continues to serve distinctively human and humane purposes. We have a responsibility to invest in that hope!
Shirley A. Mullen (PhD) is President Emerita of Houghton College and longtime history professor.