Four ways artificial intelligence (AI) takes shape at CWRU—and across higher education
Across higher education, conversations around artificial intelligence (AI) have shifted rapidly in recent years. What began as debates over whether AI tools should be allowed in classrooms has evolved into a more nuanced question: how can universities use AI responsibly, ethically and effectively to enhance learning and research?
At Case Western Reserve University, Sumon Biswas, PhD, assistant professor in the Department of Computer and Data Sciences, noted how institutions nationwide are moving away from blanket restrictions and toward intentional integration. That shift, he said, increases the need for campuswide guidance on acceptable AI use and disclosure, practical AI literacy, and AI-enabled research workflows with stronger attention to verification and ethics.
“Many institutions are providing early or easy access to application programming interfaces (APIs), such as ChatGPT’s, and modern AI tools like GitHub Copilot, often offering free or pro versions through educational benefits,” he shared. “CWRU is well-positioned due to its strength in applied, interdisciplinary work, especially at the intersection of computing, health and engineering.”
Read on to learn four ways AI takes shape at CWRU and across higher education, according to Biswas. Then, discover how members of the CWRU community are putting AI to work for good.
Answers have been edited for clarity and length.
1. At CWRU, artificial intelligence is used as an accelerator, not an authority.
In Biswas’ lab, researchers focus on understanding and improving large language models to make them safer and more reliable through novel prompt-engineering strategies, fine-tuning methods and theoretical advances in model alignment. They also build on open-weight models rather than depending solely on proprietary black-box models. Additionally, members of Biswas’ lab use AI to brainstorm research ideas, prototype experiments and analyze results, with rigorous verification and reproducibility checks.
2. AI tools are changing how students engage with coursework.
Today, students interact with assignments in more iterative ways—often testing ideas and asking AI tools for quick feedback and revisions rather than submitting a single draft. As a result, there is an increased need to emphasize process and justification, including why a solution is correct, safe or appropriate. It also leads professors to rely more heavily on group activities and live demonstrations where students present their work directly to teaching assistants and instructors, mirroring real-world industry practices. These approaches, along with open-ended projects and code reviews, help students develop collaboration and communication skills while ensuring they can explain and defend their solutions in real time.
3. Setting clear expectations is becoming key to academic integrity and ethical use.
Rather than prohibiting AI tools outright, many instructors and higher education institutions now establish clear verification steps—testing, validation and documentation—treating AI as a co-creator rather than a substitute for individuals’ own reasoning. For example, ChatGPT-style assistants and AI-enabled coding copilots are commonly used for brainstorming, debugging and refining code under these guidelines. Learn about AI policies at CWRU.
4. AI literacy is increasingly supported through project-based learning and campuswide programs.
From computer and data science courses such as “CSDS 447: Responsible AI Engineering,” which focuses on building AI systems that are fair, robust, safe and interpretable, to campuswide events such as the AI in Action program—which offered participants practical insights to advance research, teaching and innovation—CWRU is building AI literacy through coursework and interdisciplinary collaboration. These efforts also open opportunities to expand offerings across disciplines, including health and software engineering, to develop potential AI-based solutions for critical healthcare needs.