China-Israel Symposium on Emerging Graphics Technologies

Workshop Chairs

Hui Huang
Shenzhen University

Mengyu Chu
Peking University

Time & Venue

13:30-17:30, August 20, 2025 (Wednesday)

Donglai Hall, 2nd Floor, Westin Hotel

Speakers

Daniel Cohen-Or
Tel Aviv University


Biography: Daniel Cohen-Or is a Professor in the Department of Computer Science at Tel Aviv University, an ACM Fellow, and The Isaias Nizri Chair in Visual Computing. His research focuses on computer graphics, visual computing, and geometric modeling, with recent interests in generative models. He is a recipient of the ACM SIGGRAPH Computer Graphics Achievement Award and the Eurographics Distinguished Career Award, among numerous other honors recognizing his impactful contributions to the field.
Title: Attention and Semantic Control in Generative Models
Abstract

Attention layers play a critical role in generative models. In this talk, I will show that these layers capture rich semantic information, and particularly semantic correspondences between elements within the image and across different images. Through several works, I will show that the rich representations learned by these layers can be leveraged for image manipulation, consistent image generation, and personalization. Additionally, I will discuss the challenges that arise, especially in scenarios involving complex prompts with multiple subjects. Specific issues, such as semantic leakage during the denoising process, can lead to inaccurate representations, resulting in poor generations.
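
For readers unfamiliar with the mechanism the talk builds on, the sketch below illustrates, in generic PyTorch with made-up shapes and projection weights rather than any model's actual code, how a cross-attention layer yields a per-token spatial map over image patches. This is the kind of semantic signal the abstract refers to.

```python
# Minimal sketch of cross-attention maps in a text-to-image model (illustrative only).
import torch

def cross_attention_maps(image_feats, text_feats, w_q, w_k):
    """image_feats: (N_patches, d_img), text_feats: (N_tokens, d_txt).
    Returns an (N_tokens, N_patches) map: how strongly each text token
    attends to each spatial location -- a signal usable for semantic control."""
    q = image_feats @ w_q                    # (N_patches, d)
    k = text_feats @ w_k                     # (N_tokens, d)
    scores = q @ k.T / q.shape[-1] ** 0.5    # (N_patches, N_tokens)
    attn = scores.softmax(dim=-1)            # per-patch distribution over tokens
    return attn.T                            # per-token spatial map

# Toy usage with random features and hypothetical dimensions.
img = torch.randn(64 * 64, 320)   # 64x64 latent patches
txt = torch.randn(8, 768)         # 8 prompt tokens
wq, wk = torch.randn(320, 64), torch.randn(768, 64)
maps = cross_attention_maps(img, txt, wq, wk)  # reshape a row to 64x64 to visualize
```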

Ariel Shamir
Reichman University


Biography: Ariel Shamir is a professor and former Dean at Reichman University, known for his influential work in computer graphics, image/video processing, and machine learning. He co-developed the seam carving algorithm for content-aware image resizing and has contributed to projects like Sketch-to-Photo, 3Sweep, CLIPasso (SIGGRAPH 2022 Best Paper), and Word-as-Image (SIGGRAPH 2023 Honorable Mention). He received the AsiaGraphics (2023) and EuroGraphics (2025) Outstanding Technical Contributions Awards and was inducted into the SIGGRAPH Academy in 2024.
Title: Style-Content Separation and Control in Images
Abstract

Creating stylized content can be done either by stylizing existing photographs or by generating stylized images from scratch. Many methods have been proposed for both tasks, but they often struggle to balance content fidelity and artistic style or, more generally, to separate style from content. In this talk, I will present two efforts to address these challenges by analyzing the sensitivity of networks and models to various aspects of generation or stylization. B-LoRA is a method that leverages LoRA (Low-Rank Adaptation) to implicitly separate the style and content components of a single image, and Conditional Balance allows fine-grained control over style and content in image generation.
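
As background for B-LoRA, the following is a minimal, self-contained sketch of the LoRA mechanism itself: a frozen linear layer plus a trainable low-rank update. It illustrates the general technique, not the B-LoRA implementation, whose contribution lies in identifying which adapted blocks encode style versus content.

```python
# Illustrative LoRA layer: only the low-rank factors A and B are trained.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 1.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():      # freeze the pretrained weights
            p.requires_grad_(False)
        d_out, d_in = base.weight.shape
        self.A = nn.Parameter(torch.randn(rank, d_in) * 0.01)  # down-projection
        self.B = nn.Parameter(torch.zeros(d_out, rank))        # up-projection, starts at zero
        self.scale = alpha / rank

    def forward(self, x):
        # Frozen path plus low-rank residual; gradients reach only A and B.
        return self.base(x) + self.scale * (x @ self.A.T) @ self.B.T

layer = LoRALinear(nn.Linear(768, 768), rank=4)
y = layer(torch.randn(2, 768))
```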

Gal Chechik
Bar-Ilan University, NVIDIA


Biography: Gal Chechik is a Professor of computer science at Bar-Ilan University and a senior director of AI research at NVIDIA. His current research focuses on learning for reasoning and perception. In 2018, Gal joined NVIDIA to found and head NVIDIA's research in Israel. Prior to that, he was a staff research scientist at Google Brain and Google Research, developing large-scale algorithms for machine perception used by millions daily, and a professor of Computational Neuroscience at Bar-Ilan University. Gal earned his PhD in 2004 from the Hebrew University and completed his postdoctoral training at the Stanford CS department. He has authored ~160 refereed publications and ~50 patents, including publications in Nature Biotechnology, Cell, and PNAS. His work has won awards at ICML and NeurIPS.
Title: A "System 2" in Visual Generative AI
Abstract

Between training and inference lies a growing class of AI problems that involve fast optimization of a pre-trained model for a specific inference task. Like System 2 in the "thinking, fast and slow" model of cognitive processing, these are not pure "feed-forward" inference problems applied to a pre-trained model, because they involve non-trivial inference-time optimization beyond what the model was trained for; neither are they training problems, because they focus on a specific input. These compute-heavy inference workflows raise new challenges in machine learning and open opportunities for new types of user experiences and use cases. In this talk, I describe flavors of these new workflows in the context of text-to-image generative models, including recent work on image editing and teaching models to count. I will also briefly discuss the generation of rare classes, and future directions.
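
To make the notion of inference-time optimization concrete, here is a minimal, generic sketch: a pretrained model stays frozen and only a per-input latent is optimized against a task-specific loss. The names (generator, task_loss) and the toy objective are illustrative placeholders, not the methods discussed in the talk.

```python
# Generic inference-time optimization loop: gradients flow to the latent, not the model.
import torch

def optimize_at_inference(generator, task_loss, latent_init, steps=50, lr=0.05):
    latent = latent_init.clone().requires_grad_(True)
    opt = torch.optim.Adam([latent], lr=lr)
    for _ in range(steps):
        out = generator(latent)        # feed-forward pass of the frozen model
        loss = task_loss(out)          # e.g. an editing or counting objective
        opt.zero_grad()
        loss.backward()                # updates only the per-input latent
        opt.step()
    return latent.detach()

# Toy usage: a frozen "generator" and a loss that pushes the output mean toward a target.
gen = torch.nn.Linear(16, 16).requires_grad_(False)
z0 = torch.randn(1, 16)
z_star = optimize_at_inference(gen, lambda y: (y.mean() - 1.0) ** 2, z0)
```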

Andrei Sharf
Ben-Gurion University


Biography: Andrei Sharf is an Associate Professor in the Department of Computer Science at Ben-Gurion University. His research focuses on computer graphics, with an emphasis on geometry processing, 3D modeling, and deep learning. He has worked extensively on 3D scanning and point cloud reconstruction. He received his Ph.D. from Tel Aviv University and completed a postdoctoral fellowship at the University of California, Davis. Prior to joining Ben-Gurion University, he was a Visiting Associate Professor at the Shenzhen Institute of Advanced Technology (SIAT), China.
Title: Learning Thin 3D Structure Reconstructions
Abstract

Thin structures such as vessels or pipelines pose challenges due to discontinuities, bifurcations, sparse 3D data, and low contrast. ThinGAT tackles fine-scale segmentation using a lightweight graph neural network with a modified attention mechanism and an edge smoothness loss, preserving continuity and achieving state-of-the-art accuracy on medical benchmarks with only 961K parameters. For reconstruction, we propose a sliding-box depth projection approach: local orthographic projections from multiple views enable precise recovery of thin geometry, and the local reconstructions are fused into coherent 3D models, demonstrated on pulmonary artery CT data and industrial pipeline scans. Together, these methods deliver compact, accurate, and generalizable solutions for thin structure segmentation and 3D reconstruction across medical and industrial domains.
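
As an illustration of the kind of regularizer the segmentation part relies on, the sketch below implements a generic edge-smoothness term that penalizes disagreement between predictions at the two ends of each graph edge. It is a simplified stand-in, not the ThinGAT architecture or its modified attention.

```python
# Generic edge-smoothness regularizer for graph-based segmentation (illustrative only).
import torch
import torch.nn.functional as F

def edge_smoothness_loss(node_logits, edge_index):
    """node_logits: (N, C) per-node class logits.
    edge_index: (2, E) long tensor of graph edges (i -> j)."""
    p = node_logits.softmax(dim=-1)
    pi, pj = p[edge_index[0]], p[edge_index[1]]
    # L1 gap between class distributions at the two ends of every edge
    # encourages continuity along thin, connected structures.
    return (pi - pj).abs().sum(dim=-1).mean()

# Toy usage on a 4-node path graph with a supervised term plus the smoothness term.
logits = torch.randn(4, 2, requires_grad=True)
edges = torch.tensor([[0, 1, 2], [1, 2, 3]])
labels = torch.tensor([1, 1, 1, 0])
loss = F.cross_entropy(logits, labels) + 0.1 * edge_smoothness_loss(logits, edges)
loss.backward()
```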

Haggai Maron
Technion - Israel Institute of Technology, NVIDIA


Biography: Haggai Maron is an Assistant Professor at the Technion’s Faculty of Electrical and Computer Engineering and a Senior Research Scientist at NVIDIA Research in Tel Aviv. His research focuses on deep learning for structured data, including sets, graphs, point clouds, and surfaces, with a particular interest in symmetry and equivariant architectures. He aims to bridge theoretical insights with practical applications in machine learning. Haggai completed his Ph.D. at the Weizmann Institute of Science under the supervision of Prof. Yaron Lipman.
Title: Learning in Deep Weight Spaces Through Symmetries
Abstract

With millions of pre-trained models now available online, including Implicit Neural Representations (INRs) and Neural Radiance Fields (NeRFs), neural network weights have emerged as a rich new data modality. This talk explores treating these weights as structured data objects with inherent symmetries that can be exploited for learning. We present architectures that process weight spaces while preserving these symmetries, including our equivariant architectures for multilayer perceptron weights (ICML 2023) and Graph Metanetworks (GMN) (ICLR 2024), which extend this approach to diverse network architectures. We also discuss recent work on learning with Low-Rank Adaptations (LoRA) and processing neural gradients. This research enables novel approaches for analyzing and modifying neural networks, with applications spanning from INR manipulation and generation to weight pruning and model editing.
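
The symmetry at the heart of this line of work can be checked in a few lines: permuting the hidden neurons of an MLP changes its weights but not its function, which is why weight-space architectures are designed to be equivariant to such permutations. The snippet below is a small numerical illustration of that invariance, not code from the cited papers.

```python
# Permuting hidden neurons (rows of W1/b1, columns of W2) leaves the MLP unchanged.
import torch

torch.manual_seed(0)
d_in, d_h, d_out = 3, 5, 2
W1, b1 = torch.randn(d_h, d_in), torch.randn(d_h)
W2, b2 = torch.randn(d_out, d_h), torch.randn(d_out)

def mlp(x, W1, b1, W2, b2):
    return torch.relu(x @ W1.T + b1) @ W2.T + b2

perm = torch.randperm(d_h)
x = torch.randn(10, d_in)
y_original = mlp(x, W1, b1, W2, b2)
y_permuted = mlp(x, W1[perm], b1[perm], W2[:, perm], b2)
print(torch.allclose(y_original, y_permuted))  # True: same function, different weights
```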

Chaoqi Chen
Shenzhen University


Biography: Chaoqi Chen is an Assistant Professor at the College of Computer Science and Software Engineering, Shenzhen University, supported by the “Hundred Talents Program.” His research focuses on advancing open-world visual language foundation models, with emphasis on adaptability, generalization, and robustness. Key areas include cross-domain inference, causality-inspired learning, and unsupervised domain adaptation. Chaoqi has published over 20 papers in top venues such as IEEE TPAMI, CVPR, ICCV, NeurIPS, and AAAI, and actively mentors students and researchers in cutting-edge AI technologies.
Title: Efficient Learning, Reasoning, and Adaptation for 3D Point Clouds
Abstract

The rapid development of 3D vision has called for more efficient and intelligent approaches to point cloud understanding. This talk presents three complementary advances that collectively address the challenges of learning, reasoning, and adaptation in 3D point cloud processing. First, we propose a parsimonious tri-vector representation for efficient and expressive 3D shape generation, reducing computational cost without sacrificing quality. Second, we introduce PointLLM-R, a chain-of-thought guided reasoning framework that enhances 3D point cloud inference via structured multi-step prompts. Third, we tackle distribution shifts in 4D point cloud segmentation with an active test-time adaptation strategy that improves robustness under unseen scenarios. Together, these methods offer a unified and forward-looking perspective on efficient 3D point cloud learning across diverse tasks and conditions.
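
For context on the third contribution, the sketch below shows a generic test-time adaptation step based on entropy minimization over unlabeled test batches; the active sample-selection strategy described in the talk is not reproduced here, and all names are illustrative.

```python
# Generic test-time adaptation step: minimize prediction entropy on unlabeled test data.
import torch

def tta_step(model, batch, optimizer):
    """One adaptation step on an unlabeled test batch."""
    logits = model(batch)                     # (B, C) per-sample (or per-point) logits
    probs = logits.softmax(dim=-1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=-1).mean()
    optimizer.zero_grad()
    entropy.backward()
    optimizer.step()
    return entropy.item()

# Toy usage: adapt only the final layer of a small classifier under distribution shift.
model = torch.nn.Sequential(torch.nn.Linear(32, 64), torch.nn.ReLU(), torch.nn.Linear(64, 4))
opt = torch.optim.SGD(model[-1].parameters(), lr=1e-3)
e = tta_step(model, torch.randn(8, 32), opt)
```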

Tianjia Shao
Zhejiang University


Biography: Tianjia Shao is a ZJU100 Young Professor at the State Key Laboratory of CAD&CG, Zhejiang University. His research focuses on 3D content acquisition, reconstruction, and generation, including real-time dense SLAM, high-quality offline 3D reconstruction, autonomous robot-based reconstruction, digital human creation, and 3D AI-generated content (AIGC). He earned his Ph.D. from Tsinghua University under Prof. Baining Guo and has held positions at the University of Leeds and Microsoft Research Asia. During his Ph.D., he was a visiting researcher at University College London with Prof. Niloy Mitra.
Title: When Gaussian Meets Surfel: Ultra-fast High-fidelity Radiance Field Rendering
Abstract

We introduce Gaussian-enhanced Surfels (GESs), a bi-scale representation for radiance field rendering, wherein a set of 2D opaque surfels with view-dependent colors represents the coarse-scale geometry and appearance of scenes, and a few 3D Gaussians surrounding the surfels supplement fine-scale appearance details. The entirely sorting-free rendering of GESs not only achieves very fast frame rates, but also produces view-consistent images, successfully avoiding popping artifacts under view changes. Experimental results show that GESs advance the state of the art as a compelling representation for ultra-fast high-fidelity radiance field rendering.
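
The "sorting-free" property rests on a classical fact: opaque primitives resolved with a depth test give the same image regardless of submission order, whereas alpha-blended semi-transparent primitives must be depth-sorted. The toy splatter below demonstrates that order independence; it is an illustration of the principle, not the GES renderer.

```python
# Z-buffer resolution of opaque fragments is independent of submission order.
import numpy as np

def splat_opaque(fragments, h, w):
    """fragments: list of (pixel_index, depth, color). The nearest fragment wins."""
    depth = np.full(h * w, np.inf)
    color = np.zeros((h * w, 3))
    for px, z, c in fragments:
        if z < depth[px]:          # depth test, valid in any order
            depth[px], color[px] = z, c
    return color.reshape(h, w, 3)

frags = [(0, 2.0, np.array([1.0, 0.0, 0.0])), (0, 1.0, np.array([0.0, 1.0, 0.0]))]
a = splat_opaque(frags, 1, 1)
b = splat_opaque(frags[::-1], 1, 1)
print(np.allclose(a, b))  # True: result independent of primitive order
```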

Taijiang Mu
Tsinghua University


Biography: Taijiang Mu is currently a Research Associate Professor in the Department of Computer Science and Technology, Tsinghua University. He received his B.S. and Ph.D. degrees from Tsinghua University in 2011 and 2016, respectively. His research interests are in computer graphics and computer vision, with a particular focus on 3D reconstruction and generation. He has published over 50 papers in important international conferences and journals, three of which have been selected as ESI hot papers. He won the first "Zu Chongzhi" Award. He currently serves as an editorial board member of The Visual Computer and VCIBA.
Title: Controllable 3D Generation and Editing
Abstract

Recent breakthroughs in 3D generation have transformed content creation, yet key challenges remain in achieving precise control over scene composition and local geometry editing. In this talk, I will present two innovative solutions addressing these limitations. First, DIScene introduces a structured scene graph approach that distills 2D diffusion knowledge into 3D generation, enabling style-consistent object modeling with explicit interaction handling through node-edge representations. Second, RELATE3D tackles local editing challenges by decomposing 3D latent spaces into semantically meaningful components, facilitated by a novel Refocusing Adapter that enables part-level modifications through multimodal alignment. Together, these methods establish a comprehensive framework for controllable 3D content creation.
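
As a rough illustration of the node-edge scene representation DIScene builds on, the sketch below defines a toy scene graph with per-object prompts and pairwise relations; the field names and structure are assumptions for illustration, not the paper's actual schema.

```python
# Toy scene-graph structure: objects as nodes, interactions as edges (illustrative only).
from dataclasses import dataclass, field

@dataclass
class SceneNode:
    name: str                             # object identity, e.g. "mug"
    prompt: str                           # per-object text description guiding generation
    transform: tuple = (0.0, 0.0, 0.0)    # placement in the scene

@dataclass
class SceneEdge:
    src: str
    dst: str
    relation: str                         # e.g. "on top of", "holding"

@dataclass
class SceneGraph:
    nodes: dict = field(default_factory=dict)
    edges: list = field(default_factory=list)

    def add_node(self, node: SceneNode):
        self.nodes[node.name] = node

    def add_edge(self, edge: SceneEdge):
        self.edges.append(edge)

g = SceneGraph()
g.add_node(SceneNode("table", "a wooden table"))
g.add_node(SceneNode("mug", "a ceramic mug, cartoon style", transform=(0.0, 0.9, 0.0)))
g.add_edge(SceneEdge("mug", "table", "on top of"))
```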

Ye Pan
Shanghai Jiao Tong University


Biography: Ye Pan is an Associate Professor at the John Hopcroft Center, Shanghai Jiao Tong University. Her research focuses on AR/VR, avatars and characters, 3D animation, human-computer interaction, and computer graphics. Previously, she was an Associate Research Scientist at Disney Research Los Angeles. She earned her B.Sc. in Communication and Information Engineering from Purdue/UESTC in 2010 and her Ph.D. in Computer Graphics from University College London in 2015. Ye has served as Associate Editor for the International Journal of Human Computer Studies and as a regular member of IEEE Virtual Reality program committees.
Title: Stylized and Emotional Character Animation and Interaction
Abstract

Producing 3D digital avatars is typically time-consuming and costly. However, as AIGC technology matures, using it to accelerate avatar production is becoming increasingly feasible. This project aims to develop a stylized 3D chat avatar system and explore its application in the production of game release materials. By combining advanced AIGC and computer graphics techniques, users will be able to easily create personalized, uniquely styled 3D talking avatars. We will delve into various techniques for creating stylized 3D avatars and achieving real-time animation.

Panel Discussion

Panel topic: Intelligence, Collaboration, and the Next Frontiers of Graphics

Panelists:

Prof. Hui Huang and the nine speakers

Moderator: Mengyu Chu, Peking University

Agenda:

Time | Title | Speaker
13:30 - 13:40 | Opening | Shi-Min Hu
13:40 - 14:00 | Attention and Semantic Control in Generative Models | Daniel Cohen-Or
14:00 - 14:20 | Efficient Learning, Reasoning, and Adaptation for 3D Point Clouds | Chaoqi Chen
14:20 - 14:40 | Style-Content Separation and Control in Images | Ariel Shamir
14:40 - 15:00 | When Gaussian Meets Surfel: Ultra-fast High-fidelity Radiance Field Rendering | Tianjia Shao
15:00 - 15:20 | A "System 2" in Visual Generative AI | Gal Chechik
15:20 - 15:40 | Coffee break |
15:40 - 16:00 | Controllable 3D Generation and Editing | Taijiang Mu
16:00 - 16:20 | Learning Thin 3D Structure Reconstructions | Andrei Sharf
16:20 - 16:40 | Stylized and Emotional Character Animation and Interaction | Ye Pan
16:40 - 17:00 | Learning in Deep Weight Spaces Through Symmetries | Haggai Maron
17:00 - 17:30 | Panel Discussion |