Visualization and Human-Computer Interaction

Chairs

Shixia Liu

Tsinghua University

Lingyun Yu

Xi'an Jiaotong-Liverpool University

Siming Chen

Fudan University

Time & Venue

14:00-17:00, August 19, 2025 (Tuesday)

Yunhai Ballroom 1, 3rd Floor, Westin Hotel

Speakers

Tony Huang

University of Technology Sydney


Biography: Dr. Tony Huang is an Associate Professor from the University of Technology Sydney, Australia, specializing in AR/VR/XR-based human-computer interaction and visual perception of network visualizations. He has over 150 publications in these areas. He has served as conference chair, program committee chair, and organization chair for international events including ACM SUI, OzCHI, AusDM, and VINCI. He is an Associate Editor for Behaviour and Information Technology, and a co-chair of the IEEE SMC's Visual Analytics and Communication Technical Committee. He has also guest-edited a number of special issues for SCI-indexed journals.
Title: Supporting AR-based hand gestures for remote guidance
Abstract

Many real-world scenarios, such as remote machine maintenance, require a remote expert to guide a local user through physical tasks. Theories and systems have been developed to support this type of collaboration by augmenting the local user's workspace with the expert's hand gestures. In these systems, hand gestures are shared in different formats, such as raw hands, projected hands, and digital representations of gestures and sketches. However, the effects of combining these gesturing formats have not been fully explored or understood. We have therefore developed a series of systems that use emerging and wearable technologies to meet the needs of different real-world working scenarios. In this talk, I will introduce some of the innovative techniques and systems that we have designed, developed, and evaluated to support remote guidance through augmented reality-based sharing of hand gestures.

Christian Desrosiers

University of Quebec


Biography: Christian Desrosiers is a researcher and professor specializing in machine learning and computer vision. He is the co-director of the Laboratory on Imaging, Vision and AI (LIVIA) and a member of the International Laboratory on Learning Systems (ILLS). His research focuses on representation learning, deep neural networks, and domain adaptation, with applications ranging from medical imaging to autonomous systems. He has published extensively in top-tier conferences and journals and is recognized for his contributions to unsupervised and test-time learning methods. Christian is also actively involved in mentoring students and fostering collaborations between academia and industry.
Title: Beyond the Image: Test-Time Adaptation for Multimodal and Point Cloud Data
Abstract

Test-time adaptation (TTA) has emerged as a powerful approach for adapting pre-trained models to novel, unseen data distributions during inference, without requiring access to the original training data. Unlike traditional domain adaptation techniques that rely on source data, TTA operates in a source-free and unsupervised manner, updating the model on the fly using incoming test batches.
While TTA has been extensively studied in the context of natural images, its extension to other data modalities remains relatively unexplored. In this talk, we highlight recent advances in TTA for multimodal and 3D data. First, we demonstrate how a widely used Vision-Language Model (VLM), based on Contrastive Language-Image Pre-training (CLIP), can be adapted at test time to address a variety of domain shifts and corruptions in both image classification and segmentation tasks. We also introduce TTA techniques tailored to 3D point cloud data, improving robustness to modality-specific challenges such as occlusions, viewpoint variation, and background noise.
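
As a concrete illustration of the source-free setting described above, the sketch below shows one widely known TTA recipe: entropy minimization over a model's batch-normalization parameters, in the spirit of methods such as TENT. It is a generic, minimal example, not the specific approach presented in this talk.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def bn_affine_params(model: nn.Module):
    """Collect only the affine (scale/shift) parameters of BatchNorm layers;
    restricting adaptation to these keeps the update lightweight and stable."""
    params = []
    for m in model.modules():
        if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)):
            params.extend(p for p in (m.weight, m.bias) if p is not None)
    return params

def adapt_on_batch(model: nn.Module, x: torch.Tensor,
                   optimizer: torch.optim.Optimizer) -> torch.Tensor:
    """One source-free TTA step: minimize prediction entropy on a test batch."""
    logits = model(x)
    log_probs = F.log_softmax(logits, dim=1)
    entropy = -(log_probs.exp() * log_probs).sum(dim=1).mean()
    optimizer.zero_grad()
    entropy.backward()
    optimizer.step()
    return logits.detach()

# Usage (hypothetical): adapt a pre-trained classifier batch by batch.
# model.train()  # BatchNorm re-estimates statistics from each test batch
# optimizer = torch.optim.SGD(bn_affine_params(model), lr=1e-3)
# for x in test_loader:  # unlabeled test batches; no source data needed
#     preds = adapt_on_batch(model, x, optimizer).argmax(dim=1)
```

Restricting updates to the normalization parameters is a common design choice in this family of methods: it adapts the feature statistics to the new distribution while leaving the learned representation largely intact.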

Jun Tao

Sun Yat-sen University


Biography: Jun Tao is an Associate Professor at the School of Computer Science, Sun Yat-sen University, and the National Supercomputing Center in Guangzhou. He received his Ph.D. in Computer Science from Michigan Technological University in 2015 and worked as a postdoctoral researcher at the University of Notre Dame from 2015 to 2018. His research interests include the visualization of large-scale scientific simulation data, with a focus on the application of deep learning, information theory, optimization techniques, and interactive exploration methods in flow field visualization, as well as high-performance analysis methods for large-scale scientific data.
Title: Semantics-based Scientific Visualization
Abstract

Conventional interactive scientific visualization systems rely heavily on graphical user interfaces, which, while flexible, often present a steep learning curve, particularly when users need to express complex analytical intent. Semantics-based methods offer an alternative by allowing users to specify their goals and interests in natural language. The system interprets these queries, extracts the relevant features from scalar or vector field data, and generates appropriate visualizations. This approach lowers the barrier to entry and improves analytical efficiency. However, challenges remain, including aligning semantics with data, expressing and extracting complex features, and selecting suitable visualization parameters. This talk presents our ongoing work on natural language interfaces for semantics-driven scientific visualization.
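
To make the query-to-feature-to-visualization pipeline concrete, here is a deliberately simplified sketch. The regex-based interpreter and the helper names (interpret_query, extract_feature) are hypothetical stand-ins for the language-model interpretation and feature-extraction components an actual system would employ.

```python
import re
import numpy as np

def interpret_query(query: str) -> dict:
    """Toy 'semantic parser': map a natural-language request to a feature spec.
    A real system would use a language model here; this regex stand-in only
    illustrates the query -> feature -> visualization pipeline."""
    m = re.search(r"(\w+)\s+(?:above|exceeds|over)\s+([\d.]+)", query)
    if m is None:
        raise ValueError(f"query not understood: {query!r}")
    variable, threshold = m.groups()
    return {"variable": variable, "op": ">", "threshold": float(threshold)}

def extract_feature(field: np.ndarray, spec: dict) -> np.ndarray:
    """Select the region of a scalar field described by the feature spec."""
    return field > spec["threshold"]

# Usage with a synthetic scalar field standing in for simulation output:
field = 250.0 + 100.0 * np.random.rand(32, 32, 32)  # fake "temperature" volume
spec = interpret_query("show temperature above 300")
mask = extract_feature(field, spec)
print(f"{mask.mean():.1%} of the domain selected; render, e.g., as an isosurface")
```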

Le Liu

Northwestern Polytechnical University


Biography: Dr. Le Liu is an Associate Professor in the School of Computer Science at Northwestern Polytechnical University. He received his PhD from Clemson University. His current research lies at the intersection of visualization, visual perception, computer graphics, and artificial intelligence.
Title: Visualizing Multifaceted Forecast Uncertainty in Immersive Environments
Abstract

In fields such as weather forecasting, oceanographic research, and disaster early warning, ensemble data plays a crucial role. However, effectively communicating the various uncertainties present in ensemble data remains a significant challenge for visualization. Current ensemble data visualization relies mainly on two-dimensional displays, whose limited visual channels can lead to cognitive biases, visual clutter, and obscured information. This talk discusses stereoscopic visualization techniques for ensemble data that use depth cues to enhance the visual encoding and representation of multidimensional information, with the goal of improving the perception of, and reasoning about, uncertainty distributions.

Yuxin Ma

Southern University of Science and Technology


Biography: Yuxin Ma is a tenure-track Associate Professor in the Department of Computer Science and Engineering, Southern University of Science and Technology (SUSTech), China. He received his B.Eng. and Ph.D. degrees from Zhejiang University, China, supervised by Prof. Wei Chen of the State Key Lab of CAD&CG. Before joining SUSTech, he worked as a Postdoctoral Research Associate in the VADER Lab at SCAI, Arizona State University, USA. His primary research interests are in data visualization and human-AI collaborative data analytics, focusing on applications related to explainable artificial intelligence, high-dimensional data, spatiotemporal data, and interactive education support. His work has been published in top venues in visualization and human-computer interaction (IEEE TVCG, IEEE VIS, ACM CHI, etc.) and recognized with Honorable Mention Awards at ACM CHI (2022) and CVMJ (2018). He has served as a program committee member and reviewer for major conferences and journals in visualization, human-computer interaction, and artificial intelligence.
Title: Visual Analytics on Explainable AI: Case Studies on Analyzing Optimization Processes and Language Models
Abstract

In recent years, the widespread application of AI techniques has enabled the inspection, understanding, and prediction of the behaviors of individuals and groups within massive datasets, providing new insights into society and the world while optimizing social governance. However, the black-box nature of complex algorithms and models limits users' understanding of the models' mechanisms and predictions, thereby undermining trust in them. This talk addresses these challenges by exploring how visual analytics approaches can facilitate the comprehension of complex algorithms and models, with a particular emphasis on optimization processes and language models.

Agenda

Time | Title | Speaker
Moderator: Siming Chen
14:00 - 14:10 | Opening | Siming Chen
14:10 - 14:45 | Supporting AR-based hand gestures for remote guidance | Tony Huang
14:45 - 15:20 | Beyond the Image: Test-Time Adaptation for Multimodal and Point Cloud Data | Christian Desrosiers
15:20 - 15:40 | Coffee break
Moderator: Lingyun Yu
15:40 - 16:10 | Semantics-based Scientific Visualization | Jun Tao
16:10 - 16:40 | Visualizing Multifaceted Forecast Uncertainty in Immersive Environments | Le Liu
16:40 - 17:10 | Visual Analytics on Explainable AI: Case Studies on Analyzing Optimization Processes and Language Models | Yuxin Ma
17:10 - 17:15 | Closing | Lingyun Yu