Xiaofan (Fred) Jiang (Columbia University)
Shahriar Nirjon (University of North Carolina, Chapel Hill)
Yang Liu (Nokia Bell Labs)
Stephen Xia (Northwestern University)
Jingping Nie (University of North Carolina, Chapel Hill)
VP Nguyen (University of Massachusetts Amherst)
Local Time: Hong Kong Time (UTC+8)
| 9:15 AM – 9:30 AM | Opening |
|---|---|
| 9:30 AM – 10:30 AM | Keynote 1. Speaker: Prof. Qian Zhang, Hong Kong University of Science and Technology. Title: We Can Learn a Lot from Our Breathing Sound. Abstract: Breathing sounds provide valuable insights into the condition of the respiratory airways and pulmonary structures, and can be analyzed to assess respiratory health. Subtle differences among respiratory sounds can be differentiated more accurately using AI. In this talk, I will share some of our efforts in which we leveraged a deep understanding of sound to conduct lung function assessment, acute exacerbation detection for COPD, and sputum location detection. Short Bio: Qian Zhang is the head of the Division of Integrative Systems and Design (ISD), as well as Tencent Professor of Engineering and Chair Professor in the Department of Computer Science and Engineering (CSE) at the Hong Kong University of Science and Technology (HKUST). Before that, she was at Microsoft Research Asia, Beijing, from July 1999, where she was the research manager of the Wireless and Networking Group. Dr. Zhang has published more than 400 refereed papers in leading international journals and key conferences. She is the inventor of more than 50 granted and 20 pending international patents. Her current research interests include the Internet of Things, smart health, mobile computing and sensing, wireless networking, and cyber security. She is a Fellow of the IEEE and the Hong Kong Academy of Engineering (HKAE). |
| 10:30 AM – 11:00 AM | Break |
| 11:00 AM – 12:00 PM | Keynote 2. Speaker: Prof. Nirupam Roy, University of Maryland. Title: Physical Intelligence: Bridging the Physical and Semantic Worlds. Abstract: The next generation of intelligent systems must bridge the divide between the physical and the semantic—transforming raw sensor signals into meaningful representations of the world. This talk presents a vision for physical intelligence: the ability of machines to sense, interpret, and reason about their environments through multimodal cues such as sound, radio, motion, and touch. By combining physics-informed learning with data-driven semantic modeling, we explore how systems can move beyond perception toward understanding—integrating diverse signals, adapting to environmental changes, and maintaining coherence over time. The talk concludes with a roadmap toward foundational models of physical perception that unify sensing, representation, and meaning for embodied intelligence. Short Bio: Nirupam Roy is an Associate Professor of Computer Science at the University of Maryland, College Park, where he leads the iCoSMoS Lab. His research explores how machines can sense, interpret, and reason about the physical world by integrating acoustics, wireless signals, and embedded AI. His work bridges physical sensing and semantic understanding, with recognized contributions across intelligent acoustics, embedded AI, and multimodal perception. He is a recipient of the NSF CAREER Award, the Meta Research Award, and multiple best paper honors, and his innovations have inspired startups and societal impact through accessible, intelligent technologies. |
| 12:00 PM – 12:30 PM | Panel Discussion |
| 12:30 PM – 2:00 PM | Lunch |
| 2:00 PM – 4:00 PM | SESSION 1: Acoustic Intelligence and Industrial Applications • BReAD: Boosting Relational Knowledge Distillation with Large Language Model for Acoustic Industrial Anomaly Detection • Assembly Stethoscope: Detecting Assembly Errors through Frequency Sweeping – A Feasibility Study • RRAR: Robust Real-World Activity Recognition with Vibration by Scavenging Near-Surface Audio Online • Biometric Authentication Using Smartphone-Generated Acoustic Signals Modulated by Vascular Dynamics • Breathing and Semantic Pause Detection and Exertion-Level Classification in Post-Exercise Speech |
| 4:00 PM – 4:30 PM | Coffee Break |
| 4:30 PM – 5:15 PM | SESSION 2: Acoustic and Multimodal Sensing for Human Health • IMUSteth: On-Body Stethoscope Localization with Inertial Sensing for Home Self-Screening • EarFusion: Quality-Aware Fusion of In-Ear Audio and Photoplethysmography for Heart Rate Monitoring |
| 5:15 PM – 6:00 PM | Closing, Business Meeting, and Awards |

For each technical paper: 15 min talk + 5 min Q&A.
Columbia University
Prof. Xiaofan Jiang (Associate Professor of Electrical Engineering at Columbia University) has 18+ years of experience at the intersection of systems and data, with a focus on intelligent embedded systems and their applications in mobile and wearable computing, intelligent built environments, the Internet of Things, and connected health. His recent work on intelligent drones (which received a Best Demo Award at ACM SenSys), modular sensing platforms, and health is highly acclaimed in the SIGMOBILE community. Prof. Jiang’s research webpage is http://icsl.ee.columbia.edu/.
Email: jiang@ee.columbia.edu
University of North Carolina, Chapel Hill
Prof. Shahriar Nirjon (Associate Professor of Computer Science at UNC) has 15+ years of experience in multi-modal, sensor-enabled intelligent embedded systems. He has published over 50 research papers involving audio, RF, depth cameras, environmental and wearable sensors, and health trackers. Several of his recent works involve multi-modal data fusion and learning. Prof. Nirjon’s research webpage is https://www.cs.unc.edu/~nirjon/research.html.
Email: nirjon@cs.unc.edu
Nokia Bell Labs
Dr. Yang Liu (Research Scientist in the Device Forms team at Nokia Bell Labs) works in the Pervasive Systems Research Department in Cambridge, UK. He holds a PhD in Communication and Information Systems from the University of Chinese Academy of Sciences, China, and was a Marie Curie Research Fellow in Wireless Communications at the University of Sheffield, UK. His current research focuses on wearable devices, human-centric embedded systems, backscatter communications, and wireless sensing.
Email: yang.16.liu@nokia.com