University of South Carolina researchers pose with Zand and ABii at a 2023 Computer Science and Engineering symposium. (Photos courtesy of iCAS Lab/Carolina News & Reporter)

A University of South Carolina research project is working to aid American Sign Language users through AI that recognizes facial expressions.

Assistant professor Ramtin Zand of the Molinaroli College of Engineering and Computing has led research on the topic with students and colleagues since 2022. The work has resulted in multiple published studies, most of which involve developing chips that run facial expression recognition AI on small handheld or wearable devices. Facial expressions carry much of the meaning in American Sign Language, a visual language built on hand signs.

The work received a $600,000 grant from the National Science Foundation in 2024.

“This recognition has brought a lot of financial support for us, which can help us maintain the research,” Zand said. “We have all the students who are working on these projects, the equipment that we … need to buy, the lab that we have to build. That recognition has been very useful for us to develop the team.”

Where did the idea come from? Students in Zand’s Edge and Neuromorphic Computing course wanted to pursue projects related to AI and sign language.

“I asked the students to come up with applications for one theme of a project that they have throughout the semester,” Zand said. “A couple of these students later joined my research lab. They were interested in doing facial expression recognition.”

Several current and former students involved include Heath Smith, Lareb Khan, Sara Hendrix, James Seekings, Mohammadreza Mohammadi, Mahsa Ardakani, Hasti Zanganeh, Arshia Eslami, and Peyton Chandarana. Chandarana was first introduced to research through Zand’s work. 

“Dr. Zand has a kind of unique mentality that research is research,” Chandarana said. “It doesn’t matter what title you have. There isn’t a huge disconnect between the classroom and the research lab.”

The project has a non-human contributor as well: ABii, a social robot developed by Van Robotics. The robot was designed to teach elementary school students but is now part of the sign language development work.

The research is ongoing with no immediate stopping point in mind, Zand said. But the AI’s effectiveness still has some way to go. ASL has many signs that look similar but carry dramatically different meanings (e.g., the hand signs for “beer” and “brown”), and facial expressions can also influence the meaning.

“We’re actually building a data set for ASL, because there are data sets out there, but they’re kind of limited in their complexity and some of them fail to recognize that exact use of the intricacies,” Chandarana said.

That concern is shared by those who study the language, including USC student Emily Whitaker. Whitaker said she isn’t a fan of AI because it isn’t always accurate.

“If you’re able to get a robot that knows how to properly communicate for them, to bridge that communication barrier, it definitely could be helpful,” she said. “But it would not be if it can’t do, like, the facial expressions or the proper movements for it.”

The project is also designed to avoid the privacy concerns that come with working with children. Laws prevent the researchers from keeping or sharing private data in the cloud.

“One of the reasons that we’re trying to bring AI to small devices and not rely on data centers to do processing of these AI algorithms and models is because we want to make sure that data never leaves the device,” Zand said. “So if someone is using, let’s say, these smart glasses to do real-time sign language translation, it’s important for us that information never leaves the device.”

Mahsa Ardakani demonstrates ABii to a child.

USC mascot Cocky observes researchers’ projects.