The 2025–2026 cohort of Freedman Fellows includes three professors whose expertise spans different disciplines and colleges. Fellows will work with an assigned liaison from the Freedman Center and present their work at the annual forum in early fall.
Touch in Context: Building a Labeled Video Dataset of Human Social Touch for Human-Robot Interaction
Project Lead: Alexis Block
Affiliation(s)
Assistant Professor, Department of Electrical, Computer, and Systems Engineering; Assistant Professor, Department of Mechanical and Aerospace Engineering; Assistant Professor, Department of Computer and Data Sciences
Project Description
Despite the central role of touch in human relationships, there exists no publicly available dataset of real-world or acted video clips labeled by type of social touch. This absence limits the ability of researchers to train robots to interpret or replicate naturalistic physical interactions. This project proposes to create a curated, labeled dataset of short video clips (10–20 seconds) that depict diverse social touch interactions—from supportive hugs to celebratory high-fives—across a range of social, emotional, and cultural contexts. This dataset will serve as a foundational resource for developing emotionally intelligent robotic systems capable of social-physical human-robot interaction.
The goal is to ensure this dataset is legally compliant and technically sound. Once a diverse library of clips is curated, the dataset will be annotated through Amazon Mechanical Turk (MTurk), a platform that enables researchers to collect large-scale, crowd-sourced human judgment data. Participants will label each video with touch type (e.g., hug, handshake), intent (e.g., comforting, congratulating), emotional valence, and relationship context. Inter-rater reliability will be calculated, and consensus or majority labels will be retained to ensure high-quality data.
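The aggregation step lends itself to a short sketch. The following is a minimal illustration in Python, assuming five workers per clip and hypothetical touch-type labels; the Fleiss' kappa measure and the strict-majority rule are one plausible reading of "inter-rater reliability" and "consensus or majority labels," not the project's final protocol.

```python
from collections import Counter

import numpy as np

def fleiss_kappa(label_matrix):
    """Fleiss' kappa for an (n_items, n_raters) array of categorical labels."""
    n_items, n_raters = label_matrix.shape
    categories = np.unique(label_matrix)
    # counts[i, k]: how many raters assigned category k to item i
    counts = np.array([[np.sum(row == c) for c in categories]
                       for row in label_matrix])
    p_i = (np.sum(counts ** 2, axis=1) - n_raters) / (n_raters * (n_raters - 1))
    p_bar = p_i.mean()                                       # observed agreement
    p_e = np.sum((counts.sum(axis=0) / counts.sum()) ** 2)   # chance agreement
    return (p_bar - p_e) / (1 - p_e)

def majority_label(labels):
    """Modal label if it wins a strict majority, else None (clip re-reviewed)."""
    label, votes = Counter(labels).most_common(1)[0]
    return label if votes > len(labels) / 2 else None

# Hypothetical touch-type labels from five workers for two clips.
ratings = np.array([
    ["hug", "hug", "hug", "handshake", "hug"],
    ["high-five", "high-five", "high-five", "high-five", "fist-bump"],
])
print(f"Fleiss' kappa: {fleiss_kappa(ratings):.2f}")
print("retained labels:", [majority_label(row) for row in ratings])
```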
This project will result in an anonymized, labeled dataset of videos that can be used by researchers in human-robot interaction, affective computing, psychology, and communication. It will advance efforts to teach robots how to recognize, respond to, and replicate human-like touch in appropriate and context-sensitive ways. The resulting dataset can inform large-scale robotics foundation models such as NVIDIA’s GR00T, which currently focuses on task-based movements but lacks expressive or comforting social behaviors. This dataset will help pave the way for the inclusion of socially meaningful gestures in robot learning frameworks. I acknowledge that covering every culturally specific touch gesture in one project is impossible. Therefore, this work will serve as a proof of concept focused on widely accepted or near-universal social touch types, forming a base from which culturally specific datasets can grow.
Freedman Center Liaison: Jared Bendis
Enhancing KV Cache Compression in LLMs with Tucker Decomposition
Project Lead: Gourav Datta
Affiliation(s)
Assistant Professor, Department of Computer and Data Sciences
Project Description
Large Language Models (LLMs) have become foundational tools across academic research, creative writing, data analysis, and instruction. Yet, their widespread deployment remains restricted by substantial hardware demands—particularly the memory required during inference.
A major contributor to this challenge is the Key-Value (KV) cache used in attention mechanisms. This cache stores the attention states for every token across all layers and attention heads, growing linearly with sequence length. As a result, long-context applications or multi-session serving can quickly exceed on-device memory limits, making LLMs impractical for use on many university or public-facing computing resources.
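To make that linear growth concrete, here is a back-of-envelope sketch; the model dimensions are illustrative assumptions in the range of a 7B-parameter model, not measurements of any particular system.

```python
def kv_cache_bytes(n_layers, n_kv_heads, head_dim, seq_len,
                   batch_size=1, bytes_per_elem=2):
    """Approximate KV cache size: keys and values (the leading 2) for every
    layer, head, and token, stored in fp16/bf16 (2 bytes per element)."""
    return (2 * n_layers * n_kv_heads * head_dim
            * seq_len * batch_size * bytes_per_elem)

# Illustrative 7B-class dimensions (32 layers, 32 KV heads, head_dim 128)
# at a 32k-token context: the cache alone reaches 16 GiB.
print(kv_cache_bytes(32, 32, 128, 32_768) / 2**30, "GiB")
```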
This project aims to improve the scalability and accessibility of LLMs by developing a novel method for compressing the KV cache using Tucker decomposition, a form of multi-dimensional tensor factorization. While previous work has shown that low-rank approximations and pruning techniques can significantly reduce memory usage, these methods often struggle to balance compression with inference accuracy and latency. Tucker decomposition offers a principled approach to represent the key and value tensors using a compact core tensor and mode-specific projection matrices, capturing redundancies more efficiently across multiple dimensions of the attention tensors.
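As a concrete illustration of the idea, the sketch below compresses a synthetic cached-key tensor with the tensorly library. The (heads, tokens, head_dim) shape, the chosen ranks, and the synthetic low-rank structure are all assumptions for demonstration, not the project's design.

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import tucker

rng = np.random.default_rng(0)

# Synthetic cached keys for one layer, shaped (heads, tokens, head_dim),
# constructed with low multilinear rank to stand in for the redundancy
# that real attention caches exhibit.
core_true = rng.standard_normal((16, 256, 64))
factors_true = [rng.standard_normal(s)
                for s in [(32, 16), (1024, 256), (128, 64)]]
keys = tl.tucker_to_tensor((core_true, factors_true))

# Compress: a compact core tensor plus one projection matrix per mode.
core, factors = tucker(keys, rank=[16, 256, 64])

# Only `core` and `factors` would be cached; the full tensor is rebuilt
# on demand when attention scores are needed.
keys_hat = tl.tucker_to_tensor((core, factors))

stored = core.size + sum(f.size for f in factors)
print(f"compression: {keys.size / stored:.1f}x")
print(f"relative error: {float(tl.norm(keys - keys_hat) / tl.norm(keys)):.2e}")
```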
The project will build on recent advances in KV cache compression by integrating Tucker-based compression directly into the attention layers of a transformer model. This will involve designing an adaptive framework for factorizing the key and value projections during inference, caching only their compressed representations, and reconstructing them on-the-fly when computing attention scores. By fusing the reconstruction with the attention mechanism, we can reduce both storage and memory bandwidth costs without adding significant latency. We will evaluate the approach across various model sizes and tasks, aiming to demonstrate at least a 2× reduction in KV cache memory with negligible loss in task performance.
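The fusion idea can be shown in a few lines of numpy: contract the incoming query with the Tucker factors first, so attention scores come straight from the compressed representation without ever materializing the full key tensor. The shapes, ranks, and einsum formulation below are an illustrative sketch of the principle, not the project's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
H, T, D = 32, 1024, 128      # heads, cached tokens, head dim
RH, RT, RD = 16, 256, 64     # hypothetical Tucker ranks per mode

# Compressed cache: core tensor plus per-mode factor matrices.
G = rng.standard_normal((RH, RT, RD))
U_h, U_t, U_d = (rng.standard_normal(s) for s in [(H, RH), (T, RT), (D, RD)])

q = rng.standard_normal((H, D))   # one incoming query token, per head

# Naive path: reconstruct the full key tensor, then take dot products.
K = np.einsum("abc,ha,tb,dc->htd", G, U_h, U_t, U_d, optimize=True)
scores_naive = np.einsum("hd,htd->ht", q, K) / np.sqrt(D)

# Fused path: the query is projected through the factors, so the full
# (H, T, D) key tensor is never built.
scores_fused = np.einsum("hd,dc,abc,ha,tb->ht", q, U_d, G, U_h, U_t,
                         optimize=True) / np.sqrt(D)

assert np.allclose(scores_naive, scores_fused)
```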
Our working hypothesis is that Tucker decomposition provides a more flexible and expressive compression method than traditional low-rank factorization, particularly in capturing interactions across attention heads, tokens, and projection dimensions. If successful, this would open the door for broader deployment of LLMs in constrained environments such as campus compute clusters, lab workstations, and edge servers.
Freedman Center Liaison: R. David Beales
AI/ML Tools for Dataset Development: Policies to Mitigate Environmental Problems
Project Lead: Kelly McMann
Affiliation(s)
Lucy Adams Leffingwell Professor, Department of Political Science
Project Description
This project identifies and employs artificial intelligence (AI) and machine learning (ML) to create a higher-quality environmental policies dataset more efficiently than could be done manually. The future public release of this dataset will allow scholars and practitioners in numerous fields to answer pressing questions more effectively: for example, how do political ideologies affect the adoption of different environmental protection measures, and how do different environmental policies impact the economic performance of the industries most affected by them?
My research assistants and I will create a dataset that comprehensively documents the environmental policies that national governments have adopted (or not) in all countries of the world to address the following environmental problems: non-renewable energy, transport fuel/oil consumption, CO2 emissions, air pollution, water pollution, soil pollution, deforestation, and ecosystem degradation. For each policy indicator, there will be a datapoint for each country for each year data are available.
Policy indicators will measure the existence of a policy and, for some policies, their economic sector and geographic targets. Web scraping and AI/ML tools are expected to speed up this time-intensive process and allow us to create indicators not found in existing datasets.
We will dramatically improve upon existing datasets by:
- including policies to address all major environmental problems
- covering more countries of the world
- providing new indicators to better identify specific policies.
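As a rough illustration of the intended country-year panel, here is a minimal sketch; the indicator names, country codes, and values are hypothetical placeholders, not entries from the planned dataset.

```python
import pandas as pd

# Hypothetical rows: one datapoint per country per year for each indicator.
panel = pd.DataFrame(
    [
        ("BRA", 2019, "deforestation_restriction", 1, "forestry", "subnational"),
        ("BRA", 2020, "deforestation_restriction", 1, "forestry", "subnational"),
        ("DEU", 2020, "co2_emissions_tax", 1, "energy", "national"),
        ("IND", 2020, "co2_emissions_tax", 0, None, None),
    ],
    columns=["country", "year", "indicator",
             "policy_exists", "sector_target", "geo_target"],
)
print(panel.set_index(["indicator", "country", "year"]))
```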
Freedman Center Liaison: R. Benjamin Gorham