Showcasing Innovation: SFU Computing Science Leads at SIGGRAPH 2025
SFU Computing Science is poised to make a significant impact at SIGGRAPH 2025, the 52nd International Conference & Exhibition on Computer Graphics & Interactive Techniques. This prominent presence highlights SFU's leadership and commitment to cutting-edge research and innovation in visual computing.
SIGGRAPH stands as the premier conference and exhibition dedicated to computer graphics and interactive techniques. Held over five days, it offers an immersive experience for participants to explore, innovate, and imagine within their specific areas of interest. The conference covers a wide range of topics, including Production & Animation, Research & Education, Arts & Design, Gaming & Interactive, and New Technologies.
SIGGRAPH 2025 is scheduled to take place from August 10–14 at the Vancouver Convention Centre in Vancouver, BC, Canada. It is a global gathering where industry experts, researchers, and creative minds take the stage to share their work and innovations. Submissions for various programs, including Technical Papers, Art Papers, Talks, and Immersive Pavilion content, closed earlier this year.
SFU Computing Science's Strong Representation
The School will be highly visible at SIGGRAPH 2025 through both faculty leadership and an impressive portfolio of technical papers. Two of our faculty members, Richard Zhang and Ali Mahdavi-Amiri, will serve as the Technical Papers Chair and Session Chair, respectively.
The School is contributing seven technical papers to this year's conference. These papers highlight advanced research across various sub-fields of computer graphics and interactive techniques, demonstrating the depth and breadth of the School's expertise. The presented papers include:
- "": This paper presents a photograph relighting method that enables explicit control over light sources akin to CG pipelines. Researchers achieve this in a pipeline involving mid-level computer vision, physically-based rendering, and neural rendering. They introduce a self-supervised training methodology to train our neural renderer using real-world photograph collections.
´óÏó´«Ã½ Authors: , Yagiz Aksoy. - "": Researchers propose a geometry- and illumination-aware 2d-graphic compositing pipeline. They use meshes generated by off-the-shelf monocular depth estimation methods to warp the 2d-graphic according to the surface geometry. Using intrinsic decomposition, they composite the warped graphic onto the albedo and reconstruct the final result by combining all intrinsic components.
´óÏó´«Ã½ Authors: , , , , Yağız Aksoy. - "": The authors develop an object insertion pipeline and interface that enables iterative editing of illumination-aware composite images. The pipeline leverages off-the-shelf computer vision methods and differentiable rendering to reconstruct a 3D representation of a given scene. Users can add 3D objects and render them with physically accurate lighting effects.
´óÏó´«Ã½ Authors: , , , Yağız Aksoy. - "": Cora is a novel diffusion-based image editing method that achieves complex edits, such as object insertion, background changes, and non-rigid transformations, in only four diffusion steps. By leveraging pixel-wise semantic correspondences between source and target, it preserves key elements of the original image’s structure and appearance while introducing new content.
´óÏó´«Ã½ Authors: , , Andrea Tagliasacchi, Ali Mahdavi-Amiri. - "": pOps is a framework for learning semantic manipulations in CLIP’s image embedding space. Built on a Diffusion Prior model, it enables concept manipulation by training operators directly on image embeddings. This approach enhances semantic control and integrates easily with diffusion models for image generation.
´óÏó´«Ã½ Authors: Ali Mahdavi-Amiri. - "": PARC is a framework that enhances terrain traversal with machine learning and physics-based simulation. By iteratively training a kinematic motion generator and simulated motion tracker, PARC produces a character controller capable of traversing complex environments using highly agile motor skills, overcoming the challenges of limited motion capture data.
´óÏó´«Ã½ Authors: , , KangKang Yin, Xue Bin Peng. - "": 3D Gaussian Splatting (3DGS) enables fast 3D reconstruction and rendering but struggles with real-world captures due to transient elements and lighting changes. We introduce SpotLessSplats, which leverages semantic features from foundation models and robust optimization to remove transient effects, achieving state-of-the-art qualitative and quantitative reconstruction quality on casual scene captures.
´óÏó´«Ã½ Authors: Andrea Tagliasacchi.
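To make a few of these ideas concrete, the simplified sketches below are unofficial illustrations, not the authors' implementations. First, the explicit, CG-style light control described in the relighting paper can be pictured as a single Lambertian light applied to estimated albedo and normals; the real method uses a full physically based plus neural rendering pipeline, and every name here is a stand-in.

```python
import numpy as np

def lambertian_relight(albedo, normals, light_dir, light_color):
    """Toy relighting with one directional light (illustration only).

    albedo:      HxWx3 reflectance estimated from the photo
    normals:     HxWx3 unit surface normals from mid-level vision
    light_dir:   3-vector pointing toward the light
    light_color: RGB intensity of the light
    """
    l = np.asarray(light_dir, dtype=np.float32)
    l = l / np.linalg.norm(l)
    # Diffuse (Lambertian) term: clamp pixels facing away from the light to zero.
    ndotl = np.clip((normals * l).sum(axis=-1, keepdims=True), 0.0, 1.0)
    return albedo * ndotl * np.asarray(light_color, dtype=np.float32)
```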
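The compositing paper's core recipe, composite onto the albedo and then re-apply the scene's shading, can be sketched in a few lines. The `warp` callable stands in for the depth-mesh-based warp; all names are assumptions for illustration, not the authors' API.

```python
import numpy as np

def composite_on_albedo(albedo, shading, graphic_rgba, warp):
    """Illumination-aware 2D-graphic compositing, minimal sketch.

    albedo, shading: HxWx3 intrinsic layers of the target photo
    graphic_rgba:    HxWx4 graphic rasterized at photo resolution
    warp:            callable applying the geometry-aware warp
    """
    warped = warp(graphic_rgba)          # follow the surface geometry
    alpha = warped[..., 3:4]
    # Blend into the albedo layer so the scene's own shading re-lights the graphic.
    new_albedo = alpha * warped[..., :3] + (1.0 - alpha) * albedo
    return new_albedo * shading          # recombine the intrinsic components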
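Cora's pixel-wise semantic correspondences can be illustrated as a nearest-neighbour match over per-pixel features; the actual method operates inside a few-step diffusion process, so treat this purely as a sketch of the matching idea.

```python
import numpy as np

def match_pixels(src_feats, tgt_feats):
    """For each target pixel, find the most similar source pixel.

    src_feats: (M, C) and tgt_feats: (N, C), L2-normalized per-pixel features
    Returns an (N,) array of best-matching source indices.
    """
    similarity = tgt_feats @ src_feats.T   # cosine similarity, since normalized
    return similarity.argmax(axis=1)
```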
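PARC's alternation between a kinematic generator and a physics-based tracker follows an iterate-and-expand pattern. The sketch below takes the two training routines as caller-supplied callables, since the paper's components are learned models; every name is hypothetical.

```python
def parc_style_iterations(dataset, terrains, train_generator, train_tracker, rounds=3):
    """Iteratively grow a motion dataset with physically validated clips.

    train_generator(dataset) -> callable mapping a terrain to a candidate motion
    train_tracker(motions)   -> callable mapping a motion to a simulated rollout
    """
    tracker = None
    for _ in range(rounds):
        generator = train_generator(dataset)            # kinematic proposals
        proposals = [generator(t) for t in terrains]    # new motions on harder terrain
        tracker = train_tracker(proposals)              # physics-based tracking policy
        # Physically corrected rollouts become training data for the next round.
        dataset = dataset + [tracker(m) for m in proposals]
    return tracker
```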
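Finally, SpotLessSplats' robust optimization can be pictured as down-weighting pixels flagged as transient. The masking below uses a fixed threshold on a precomputed semantic-feature residual; the paper's actual inlier/outlier modelling is more sophisticated, so this is only a schematic.

```python
import numpy as np

def masked_photometric_loss(rendered, observed, feat_residual, thresh=0.5):
    """Reconstruction loss that ignores likely-transient pixels.

    rendered, observed: HxWx3 images (3DGS render vs. captured photo)
    feat_residual:      HxW semantic-feature residual (assumed precomputed)
    """
    inliers = (feat_residual < thresh).astype(np.float32)   # 1 = static pixel
    per_pixel = np.abs(rendered - observed).mean(axis=-1)   # L1 photometric error
    return (inliers * per_pixel).sum() / max(inliers.sum(), 1.0)
```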
SFU’s strong presence at SIGGRAPH showcases our cutting-edge computing science research and reaffirms our standing as Canada’s leader in visual computing according to .