
AI Student Uses Radar and Language Models to Simulate Human Motion

Chengyi Liu, a student in the M.S. in Artificial Intelligence, is helping teach machines to recognize human activities in a way that's smarter, safer, and more private.

By Dave DeFusco

Chengyi Liu, a student in the M.S. in Artificial Intelligence, is helping teach machines to recognize human activities in a way that's smarter, safer, and more private. At the Katz School's Graduate Symposium on Science, Technology and Health, Liu presented his research on improving human activity recognition using millimeter wave (mmWave) radar and large language models, the AI engines behind tools like ChatGPT.

"Traditionally, activity recognition has relied on cameras or wearable sensors," said Liu. "Cameras raise privacy concerns and wearables aren't always practical, but mmWave offers a way to recognize motion without seeing or touching the person."

Think of mmWave like radar that bounces signals off a person's body to sense how they're moving. This technology can enable applications such as fall detection for seniors, fitness coaching and motion tracking in virtual reality without needing a camera in the room. But there's a catch.

"To make mmWave human activity recognition work well, you need a lot of labeled training data collected from real people in many different settings," said Dr. Yucheng Xie, Liu's advisor and an assistant professor in the Department of Graduate Computer Science and Engineering. "That's extremely time-consuming and expensive."

Liu's solution was to use AI to fake it, realistically. He and Dr. Xie created a framework that combines large language models with 3D motion simulation. The idea is to use LLMs to write descriptions of human activities, things like "a firefighter rapidly running forward" or "a child slowly turning in a circle," and then turn those words into biomechanically realistic digital motion.

"In our system, the LLM automatically generates 50 different versions of each activity, like walking or running, and varies the speed, direction or body orientation," said Liu. "Then we use a motion synthesis model to create 3D skeleton movements from those descriptions."
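In simplified form, that variation step can be sketched as a template-based sampler. The attribute lists, wording and function name below are illustrative assumptions, not the actual prompts or code used in Liu's system:

```python
import itertools
import random

# Illustrative attribute pools; the real system has the LLM invent these.
SPEEDS = ["slowly", "at a normal pace", "rapidly"]
DIRECTIONS = ["forward", "backward", "to the left", "to the right"]
ORIENTATIONS = ["facing the sensor", "turned 45 degrees away", "side-on"]

def describe_activity(activity: str, n: int = 50, seed: int = 0) -> list[str]:
    """Sample up to n distinct descriptions of an activity with varied
    speed, direction and body orientation."""
    rng = random.Random(seed)
    combos = list(itertools.product(SPEEDS, DIRECTIONS, ORIENTATIONS))
    rng.shuffle(combos)
    return [
        f"a person {speed} {activity} {direction}, {orientation}"
        for speed, direction, orientation in combos[:n]
    ]

descriptions = describe_activity("walking", n=5)
```

Each description then goes to a text-to-motion synthesis model, so broadening the attribute pools broadens the generated motion library without any new sensor recordings.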

To make sure the synthetic motions look real and follow human anatomy, Liu applies what's called an inverse kinematics filter. This step weeds out any impossible movements, like bending a knee the wrong way or twisting an arm unnaturally. The result: a library of realistic, diverse human movements generated from simple text, and because each movement is described in words first, the system can adjust to new scenarios just by updating the descriptions.
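One ingredient of such a filter is a joint-limit check. The sketch below tests whether a knee's flexion angle falls in an anatomically plausible range; the joint choice and the limit values are assumptions for illustration, not the limits Liu's system enforces:

```python
import math

# Illustrative anatomical range: 0 degrees = straight leg, ~150 = deep bend.
KNEE_FLEXION_RANGE = (0.0, 150.0)

def joint_angle(hip, knee, ankle):
    """Angle at the knee (degrees) between the knee-to-hip and
    knee-to-ankle vectors."""
    v1 = [h - k for h, k in zip(hip, knee)]
    v2 = [a - k for a, k in zip(ankle, knee)]
    dot = sum(x * y for x, y in zip(v1, v2))
    norm = math.dist(hip, knee) * math.dist(ankle, knee)
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

def plausible_knee(hip, knee, ankle):
    """Reject skeleton frames whose knee flexion leaves the plausible range."""
    flexion = 180.0 - joint_angle(hip, knee, ankle)
    lo, hi = KNEE_FLEXION_RANGE
    return lo <= flexion <= hi
```

A full inverse kinematics filter would apply checks like this across every joint and frame of a synthesized motion, discarding sequences that violate any limit.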

Once the movements are generated, Liu simulates how mmWave radar would see them. Using a digital human body model called SMPL and a technique known as ray tracing, the team builds a 3D representation of how radio waves would bounce around a room and off the person鈥檚 body.

"This is where the environment comes in," said Liu. "Walls, furniture and body shape all affect how mmWave signals behave. Our system takes those factors into account to generate more accurate radar data."
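To see why the environment matters, consider the simplest possible case: a wall adds a second, longer propagation path on top of the direct radar-to-body path, and the two arrive with different delays. This toy geometry sketch (using the standard mirror-image trick, not Liu's actual ray tracer) makes that concrete:

```python
import math

C = 3e8  # speed of light, m/s

def path_delays(radar, body_point, wall_x):
    """One-way propagation delays (seconds) for the direct path and a
    single bounce off a wall at x = wall_x, modeled by mirroring the
    radar across the wall plane."""
    direct = math.dist(radar, body_point)
    mirror = (2 * wall_x - radar[0], radar[1])
    bounced = math.dist(mirror, body_point)
    return direct / C, bounced / C

# Radar at the origin, person 2 m away, wall 4 m out.
direct_delay, bounced_delay = path_delays((0.0, 0.0), (2.0, 0.0), 4.0)
```

A realistic simulator traces many such paths off walls, furniture and the SMPL body surface, which is why the same motion produces different radar data in different rooms.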

By combining personalized 3D human meshes with environment-aware simulation, Liu can create synthetic mmWave data for a huge range of realistic scenarios without ever needing to record a person in a lab. With the synthetic mmWave data in hand, the final step is to teach the system to recognize what activity is being performed. Here, Liu again uses LLMs not just for writing motion descriptions, but for interpreting them.

"The language model helps match the mmWave signals to the activity descriptions it helped create," he said. "That way, the recognition system becomes more adaptable to different settings and individuals."
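The matching idea can be pictured as nearest-neighbor search in a shared embedding space: score the radar-derived feature vector against an embedding of each activity description and pick the best match. The vectors below are made up for illustration; a real system would use learned embeddings:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

def classify(signal_vec, description_vecs):
    """Return the activity whose description embedding is most similar
    to the radar signal's feature vector."""
    return max(description_vecs,
               key=lambda name: cosine(signal_vec, description_vecs[name]))

label = classify(
    signal_vec=(0.9, 0.1),
    description_vecs={"walking": (1.0, 0.0), "jumping": (0.0, 1.0)},
)
```

Because the labels live in text space, adding a new activity only requires embedding its description, not retraining from scratch, which is the adaptability Liu describes.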

This feedback loop, from activity name to description to simulation and back to recognition, makes Liu鈥檚 system flexible, efficient and capable of learning from synthetic data. Although the research is still in its early stages, the implications are big. By reducing the need for costly real-world data collection, Liu鈥檚 framework could speed up development of privacy-safe motion recognition tools for healthcare, sports, virtual reality, and more.

"This work shows how combining language models with physical simulation can unlock powerful new capabilities," said Dr. Xie. "It's a great example of what interdisciplinary AI research can do."
