Pixel illustration with two figures surrounded by more pixels

With AI, we need to … Teach students to probe ethical and societal questions for human good

October 21, 2025 | Story by: Lara Ehrlich

In a world where the possibilities for using artificial intelligence are endless, the questions are, too.

Consider just a few:

  • Should AI be used for reconnaissance in an occupied territory to quickly differentiate between hostile individuals and innocent civilians?
  • As AI takes on a more constant presence in daily life, what type of leaders are best suited to lead expanded uses—and develop guidelines and guardrails?
  • What happens when AI algorithms used to make policy or social-services decisions are based on data that are incomplete or disproportionately harm one demographic group or another?

At Case Western Reserve University, faculty were deeply engaged in AI issues long before ChatGPT and other popular platforms entered the scene. And they are bringing critical inquiry to these and other questions through interdisciplinary and cutting-edge collaborations. Their shared goal: to prevent harm and empower students to ask probing questions and play an active role in designing the future.

Seeking responsible uses

Black and white head-and-shoulders photographic portrait of Shannon E. French.

Shannon E. French, PhD, often spends her days discussing ethical issues and societal implications with students, including U.S. Army chaplains in the CWRU master’s degree program in military ethics she co-leads, and in interdisciplinary meetings on groundbreaking biomedical-engineering advances on campus.

“It is vital for those who are ethical leaders for the troops to understand how new technologies may affect decision-making, accountability, and the chance of moral injury,” said French, CWRU’s Inamori Professor in Ethics and director of the campus Inamori International Center for Ethics and Excellence.

“Similarly,” she said, “cutting-edge innovators, like those on our campus, can benefit from conversations about the ethical, legal and societal implications of their work at every stage of the design process. This adds fresh, interdisciplinary perspectives and protects developers from being surprised by problems later.”

An internationally known expert in military ethics, French also was part of discussions with a U.S. Department of Defense agency about whether it should adopt a particular AI-enabled surveillance system to use in occupied territory to scout for hostile combatants. The purpose was to increase safety for U.S. troops. But French said she and an international-law expert objected, concerned the system posed serious risks to civilians and wouldn’t reliably identify real threats. Ultimately, French said, the project was not implemented.

“Sometimes you have to recognize that AI is the wrong tool to use altogether for the problem you are trying to solve,” French said.

“Ethics isn’t just about what you shouldn’t do,” she added. “It’s about what you should do to make human lives better.”

“I like to remind my students that AI isn’t magical or all-knowing. They shouldn’t surrender their decision-making or critical thinking to AI. The human brain remains much more marvelous, but use it or lose it!” — Shannon E. French, Inamori Professor in Ethics

Melding knowledge

Black and white head-and-shoulders photographic portrait of Timothy Beal.

Years before AI became ubiquitous, Timothy Beal, PhD, believed its impact would be as profound as that of the printing press, which led to a revolution in scholarship and literacy, the spread of new knowledge—and made possible the modern university.

And that’s why Beal—a CWRU Distinguished University Professor and the Florence Harkness Professor of Religion—aims to help educate the next generation of leaders who have both technical skills and cultural, historical and ethical knowledge gained from an academic immersion in the humanities.

“Technological changes have the potential to change us more than we change them,” Beal said. “Discerning what we can and can’t control calls for leaders who have both.”

For several years, he’s been working to ensure more students have an integrated academic experience. “CWRU is uniquely suited for this because it naturally blends a STEM [science, technology, engineering and mathematics] institution with a traditional liberal arts college,” he said.

Last year, Beal launched a campus course, “Responsible AI: Cultivating a Just and Sustainable Socio-technical Future Through Data Citizenship,” created with a National Humanities Center grant and in partnership with Google.

He also co-developed CWRU’s new Humanity and Technology major, which debuted this fall and combines humanities coursework and STEM.

“Never forget local communities and particular experiences, perspectives, and voices. They are not simply data to feed hungry AI systems. They should be the reason we engage these and other technologies in the first place.” — Timothy Beal, Distinguished University Professor

Black and white head-and-shoulders photographic portrait of Francisca García-Cobián Richter.

Data science for social impact

After spending more than 20 years steeped in data to illuminate social-welfare issues and poverty, Francisca García-Cobián Richter, PhD, has seen the promise of statistics to help vulnerable populations—and the problems that result when those populations are left out of the data, or even harmed by it.

“People who are marginalized are often not represented accurately in the data,” said Richter, a research associate professor at the Jack, Joseph and Morton Mandel School of Applied Social Sciences and associate director of the university’s Center on Poverty and Community Development.

Richter said, for example, she’s seen cases in which people experiencing homelessness should qualify for priority housing, but don’t; and people should be flagged to receive opioid interventions, but aren’t. The reasons: Algorithms that identified people for homes or services contained faulty assumptions, unintended biases or data based on incomplete information.

To address problems she saw, Richter co-created both a framework for systematically identifying biases and missing data, and a program of social-welfare and engineering courses to give social workers, data scientists and others a shared knowledge base.

That program is CWRU’s Certificate in Data Sciences for Social Impact, and it was co-developed with Case School of Engineering faculty members Roger French, PhD, a Distinguished University Professor, and Laura Bruckman, PhD, the Climo Associate Professor.

“We want to bridge language barriers,” Richter said, “and make sure people from diverse backgrounds come together to improve how we develop and use data for the social good.” 

“AI can hinder or augment our capacity to learn and develop as professionals who are critical and ethical members of a society continuously transformed by AI. It is on all of us to leverage this technology for the public interest.” — Francisca García-Cobián Richter, research associate professor

Illustrations by Mathieu Lacrecque