Research
I am a Research Scientist at NVIDIA Research (my NVIDIA research page). I joined NVIDIA in 2011 after obtaining my Ph.D. from Grenoble University, with research conducted at INRIA in France (thesis document here). My research interests include real-time realistic rendering, global illumination, alternative geometric and material representations (voxel-based), ray tracing, anti-aliasing techniques, distributed rendering, and out-of-core data management. My main research direction is the use of pre-filtered geometric representations for efficient anti-aliased rendering of detailed scenes and complex objects, as well as for global illumination effects. My most impactful contributions are the GigaVoxels rendering pipeline and the GIVoxels/VXGI voxel-based indirect illumination technique, which influenced hardware features of the NVIDIA Maxwell architecture.
Publications
2013
CloudLight: A system for amortizing indirect lighting in real-time rendering
Crassin, Cyril; Luebke, David; Mara, Michael; McGuire, Morgan; Oster, Brent; Shirley, Peter; Sloan, Peter-Pike; Wyman, Chris
NVIDIA Technical Report, 2013.
Links: NVIDIA Research page (https://research.nvidia.com/publication/cloudlight-system-amortizing-indirect-lighting-real-time-rendering/) | Technical report (http://graphics.cs.williams.edu/papers/CloudLight13/Crassin13Cloud.pdf) | Siggraph'13 slides (http://graphics.cs.williams.edu/papers/CloudLight13/CloudLightSIGGRAPH13.pptx) | Video results (http://graphics.cs.williams.edu/papers/CloudLight13/CloudLightTechReport13.mp4)
Tags: Global Illumination, cloud rendering
Abstract: We introduce CloudLight, a system for computing indirect lighting in the Cloud to support real-time rendering for interactive 3D applications on a user's local device. CloudLight maps the traditional graphics pipeline onto a distributed system. That differs from a single-machine renderer in three fundamental ways. First, the mapping introduces potential asymmetry between computational resources available at the Cloud and local device sides of the pipeline. Second, compared to a hardware memory bus, the network introduces relatively large latency and low bandwidth between certain pipeline stages. Third, for multi-user virtual environments, a Cloud solution can amortize expensive global illumination costs across users. Our new CloudLight framework explores tradeoffs in different partitions of the global illumination workload between Cloud and local devices, with an eye to how available network and computational power influence design decisions and image quality. We describe the tradeoffs and characteristics of mapping three known lighting algorithms to our system and demonstrate scaling for up to 50 simultaneous CloudLight users.
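To make the amortization argument concrete, here is a rough back-of-the-envelope sketch (all numbers are hypothetical, not measurements from the technical report): the cloud-side indirect-lighting solve is computed once and shared by every connected user, so its per-user cost shrinks as the user count grows, while each user only adds a small fixed local cost.

```cpp
// Minimal back-of-the-envelope sketch (not from the paper) of why cloud-side
// indirect lighting amortizes: the shared global-illumination solve is paid
// once per frame for all users, while each user only adds local work.
#include <cstdio>

int main() {
    const double cloudGIMs = 200.0; // hypothetical cloud GI solve per frame (ms)
    const double localMs   = 8.0;   // hypothetical per-user local reconstruction (ms)
    for (int users : {1, 10, 50}) {
        double perUser = cloudGIMs / users + localMs; // GI cost shared across users
        std::printf("%2d users -> %.1f ms of compute per user per frame\n",
                    users, perUser);
    }
    return 0;
}
```

With these made-up numbers, per-user compute drops from 208 ms for a single user to 12 ms for 50 users, which is the scaling behavior the system is designed to exploit.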
2011
Interactive Indirect Illumination Using Voxel Cone Tracing
Crassin, Cyril; Neyret, Fabrice; Sainz, Miguel; Green, Simon; Eisemann, Elmar
Computer Graphics Forum (Proc. of Pacific Graphics 2011), 2011. (Article)
Links: NVIDIA publication webpage (http://research.nvidia.com/publication/interactive-indirect-illumination-using-voxel-cone-tracing) | INRIA publication webpage (http://maverick.inria.fr/Publications/2011/CNSGE11b/) | Paper, authors' version (http://www.icare3d.org/research/publications/CNSGE11b/GIVoxels-pg2011-authors.pdf) | Siggraph 2011 Talk (http://research.nvidia.com/sites/default/files/publications/GIVoxels_Siggraph2011_web.pptx)
Tags: Voxel, Global Illumination, Lighting, Real-Time
Abstract: Indirect illumination is an important element for realistic image synthesis, but its computation is expensive and highly dependent on the complexity of the scene and of the BRDF of the involved surfaces. While off-line computation and pre-baking can be acceptable for some cases, many applications (games, simulators, etc.) require real-time or interactive approaches to evaluate indirect illumination. We present a novel algorithm to compute indirect lighting in real-time that avoids costly precomputation steps and is not restricted to low-frequency illumination. It is based on a hierarchical voxel octree representation generated and updated on the fly from a regular scene mesh, coupled with an approximate voxel cone tracing that allows for a fast estimation of the visibility and incoming energy. Our approach can manage two light bounces for both Lambertian and glossy materials at interactive framerates (25-70 FPS). It exhibits an almost scene-independent performance and can handle complex scenes with dynamic content thanks to an interactive octree-voxelization scheme. In addition, we demonstrate that our voxel cone tracing can be used to efficiently estimate Ambient Occlusion.
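For readers unfamiliar with the technique, the sketch below illustrates the core cone-marching idea in simplified C++. This is my own minimal illustration rather than the paper's GPU implementation: sampleVoxelsLOD stands in for a filtered lookup into the pre-filtered voxel octree and is stubbed here as a homogeneous medium so the example is self-contained.

```cpp
// Minimal illustration of the voxel cone tracing idea (not the paper's code):
// march a cone front to back, sampling pre-filtered voxel data at a level of
// detail matching the cone footprint, and accumulate radiance and opacity.
#include <cstdio>

struct Vec3   { float x, y, z; };
struct Sample { Vec3 radiance; float opacity; };

// Hypothetical stand-in for a filtered lookup into the voxel octree at the
// mip level whose voxel size matches 'footprint'. Here: a homogeneous,
// faintly emissive medium, so the sketch compiles and runs on its own.
static Sample sampleVoxelsLOD(const Vec3& /*pos*/, float footprint) {
    float opacity = 0.05f * footprint;            // larger footprint -> more occlusion
    if (opacity > 1.0f) opacity = 1.0f;
    return { {0.8f * opacity, 0.8f * opacity, 0.9f * opacity}, opacity };
}

static Vec3 coneTrace(Vec3 origin, Vec3 dir, float halfAngleTan, float maxDist) {
    Vec3  accum   = {0.0f, 0.0f, 0.0f};
    float opacity = 0.0f;
    float t       = 0.05f;                        // offset to avoid self-sampling
    while (t < maxDist && opacity < 0.99f) {
        float diameter = 2.0f * halfAngleTan * t; // cone footprint grows with distance
        Vec3 p = { origin.x + dir.x * t, origin.y + dir.y * t, origin.z + dir.z * t };
        Sample s = sampleVoxelsLOD(p, diameter);
        float w = (1.0f - opacity) * s.opacity;   // front-to-back compositing weight
        accum.x += w * s.radiance.x;
        accum.y += w * s.radiance.y;
        accum.z += w * s.radiance.z;
        opacity += w;
        t += 0.5f * diameter;                     // step size proportional to footprint
    }
    return accum;
}

int main() {
    Vec3 c = coneTrace({0, 0, 0}, {0, 0, 1}, 0.57f, 100.0f); // roughly a 60-degree cone
    std::printf("accumulated radiance: %.3f %.3f %.3f\n", c.x, c.y, c.z);
}
```

Because the footprint (and hence the sampled mip level and the step size) grows linearly with distance, a whole cone of directions is integrated in a few dozen samples, which is what makes the diffuse and glossy bounces cheap enough for real-time use.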
Interactive Indirect Illumination Using Voxel Cone Tracing: An Insight
Crassin, Cyril; Neyret, Fabrice; Sainz, Miguel; Green, Simon; Eisemann, Elmar
SIGGRAPH 2011: Technical Talk, ACM SIGGRAPH, 2011.
Links: Talk INRIA webpage (http://maverick.inria.fr/Publications/2011/CNSGE11a/) | Slides PDF (http://maverick.inria.fr/Publications/2011/CNSGE11a/GIVoxels_Siggraph_Talk.pdf) | Slides PPT (http://maverick.inria.fr/Publications/2011/CNSGE11a/GIVoxels_Siggraph2011.ppt)
Tags: Voxel, Global Illumination, Lighting, Real-Time, gpu
Abstract: Indirect illumination is an important element for realistic image synthesis, but its computation is expensive and highly dependent on the complexity of the scene and of the BRDF of the surfaces involved. While off-line computation and pre-baking can be acceptable for some cases, many applications (games, simulators, etc.) require real-time or interactive approaches to evaluate indirect illumination. We present a novel algorithm to compute indirect lighting in real-time that avoids costly precomputation steps and is not restricted to low-frequency illumination. It is based on a hierarchical voxel octree representation generated and updated on the fly from a regular scene mesh, coupled with an approximate voxel cone tracing that allows a fast estimation of the visibility and incoming energy. Our approach can manage two light bounces for both Lambertian and glossy materials at interactive framerates (25-70 FPS). It exhibits an almost scene-independent performance and allows for fully dynamic content thanks to an interactive octree voxelization scheme. In addition, we demonstrate that our voxel cone tracing can be used to efficiently estimate Ambient Occlusion.
GigaVoxels: A Voxel-Based Rendering Pipeline For Efficient Exploration Of Large And Detailed Scenes
Crassin, Cyril
Ph.D. thesis, Grenoble University, 2011.
Links: Thesis (http://maverick.inria.fr/Membres/Cyril.Crassin/thesis/CCrassinThesis_EN_Web.pdf) | INRIA Publication Page (http://maverick.inria.fr/Publications/2011/Cra11/)
Tags: Voxel, Global Illumination, Real-Time, rendering, out-of-core, gpu, ray-tracing, cone-tracing, octree
Abstract: In this thesis, we present a new approach to efficiently render large scenes and detailed objects in real time. Our approach is based on a new volumetric pre-filtered geometry representation and an associated voxel-based approximate cone tracing that allows accurate and high-performance rendering with high-quality filtering of highly detailed geometry. In order to establish this voxel representation as a standard real-time rendering primitive, we propose a new GPU-based approach designed to scale to the rendering of very large volumetric datasets. Our system achieves real-time rendering performance for several billion voxels. Our data structure exploits the fact that, in CG scenes, details are often concentrated on the interface between free space and clusters of density, and it shows that volumetric models might become a valuable alternative as a rendering primitive for real-time applications. In this spirit, we allow a quality/performance trade-off and exploit temporal coherence. Our solution is based on an adaptive hierarchical data representation that depends on the current view and occlusion information, coupled with an efficient ray-casting rendering algorithm. We introduce a new GPU cache mechanism that provides very efficient paging of data in video memory, implemented as a data-parallel process. This cache is coupled with a data production pipeline able to dynamically load or produce voxel data directly on the GPU. One key element of our method is to guide data production and caching in video memory directly from the data requests and usage information emitted during rendering. We demonstrate our approach with several applications. We also show how our pre-filtered geometry model and approximate cone tracing can be used to very efficiently achieve blurry effects and real-time indirect lighting.
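As a rough illustration of the request-driven caching idea described above (a simplified CPU-side sketch of my own, not the thesis implementation), the rendering pass never blocks on missing data: it marks the bricks it uses, emits requests for the ones it misses, and a separate production pass serves those requests while evicting the least recently used entries.

```cpp
// Simplified CPU-side illustration (assumptions mine, not the thesis code) of
// ray-guided caching: rendering records brick requests and usage instead of
// blocking, and a separate pass loads requested bricks, evicting LRU entries.
#include <cstdint>
#include <cstdio>
#include <unordered_map>
#include <unordered_set>

using NodeId = uint64_t;

struct BrickCache {
    std::unordered_map<NodeId, uint64_t> lastUse;  // resident bricks -> last frame used
    std::unordered_set<NodeId>           requests; // misses emitted during rendering
    size_t                               capacity = 4;
    uint64_t                             frame    = 0;

    // Called from the (conceptual) ray-casting pass for every node a ray touches.
    bool touch(NodeId id) {
        auto it = lastUse.find(id);
        if (it != lastUse.end()) { it->second = frame; return true; } // hit: mark usage
        requests.insert(id);                                          // miss: request data
        return false;                                                 // ray falls back to a coarser level
    }

    // Data-production pass: serve requests, evicting the least recently used bricks.
    void serveRequests() {
        for (NodeId id : requests) {
            while (lastUse.size() >= capacity) {
                auto lru = lastUse.begin();
                for (auto it = lastUse.begin(); it != lastUse.end(); ++it)
                    if (it->second < lru->second) lru = it;
                std::printf("evict brick %llu\n", (unsigned long long)lru->first);
                lastUse.erase(lru);
            }
            std::printf("load/produce brick %llu\n", (unsigned long long)id);
            lastUse[id] = frame;
        }
        requests.clear();
        ++frame;
    }
};

int main() {
    BrickCache cache;
    for (NodeId id : {1ull, 2ull, 3ull, 4ull, 5ull}) cache.touch(id); // frame 0: all miss
    cache.serveRequests();
    cache.touch(2); cache.touch(6);                                   // frame 1: hit or miss, plus a new miss
    cache.serveRequests();
}
```

The point of this organization is that only bricks actually reached by rays are ever produced or kept resident, which is what lets the working set stay far smaller than the full multi-billion-voxel dataset.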
Interactive Indirect Illumination Using Voxel Cone Tracing: A Preview
Crassin, Cyril; Neyret, Fabrice; Sainz, Miguel; Green, Simon; Eisemann, Elmar
Poster, ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games (I3D), 2011. Best Poster Award.
Links: INRIA Publication Webpage (http://artis.imag.fr/Publications/2011/CNSGE11) | Poster PDF (http://maverick.inria.fr/Publications/2011/CNSGE11/I3D2011_Poster_web.pdf)
Tags: Voxel, Global Illumination, Lighting, Real-Time, gpu
2008
Interactive Multiple Anisotropic Scattering In Clouds
Bouthors, Antoine; Neyret, Fabrice; Max, Nelson; Bruneton, Eric; Crassin, Cyril
ACM Symposium on Interactive 3D Graphics and Games (I3D), 2008.
Links: INRIA Publication Page (http://www-evasion.imag.fr/Publications/2008/BNMBC08) | Paper, authors' version (http://maverick.inria.fr/Publications/2008/BNMBC08/cloudsFINAL.pdf)
Tags: Voxel, Global Illumination, Lighting, real-time rendering, ray-casting, clouds rendering
Abstract: We propose an algorithm for the real-time realistic simulation of multiple anisotropic scattering of light in a volume. Contrary to previous real-time methods, we account for all kinds of light paths through the medium and preserve their anisotropic behavior. Our approach consists of estimating the energy transport from the illuminated cloud surface to the rendered cloud pixel for each separate order of multiple scattering. We represent the distribution of light paths reaching a given viewed cloud pixel with the mean and standard deviation of their entry points on the lit surface, which we call the collector area. At rendering time, for each pixel we determine the collector area on the lit cloud surface for different sets of scattering orders, then we infer the associated light transport. The fast computation of the collector area and light transport is possible thanks to a preliminary analysis of multiple scattering in plane-parallel slabs and does not require slicing or marching through the volume. Rendering is done efficiently in a shader on the GPU, relying on a cloud surface mesh augmented with a Hypertexture to enrich the shape and silhouette. We demonstrate our model with the interactive rendering of detailed animated cumulus and a cloudy sky at 2-10 frames per second.
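As general background for the anisotropy discussed above (an illustrative aside, not code or data from the paper): cloud droplets scatter light in a strongly forward-peaked way, and the Henyey-Greenstein phase function is a standard analytic model of such anisotropic scattering. The paper itself relies on realistic droplet phase functions and a precomputed plane-parallel slab analysis, which this sketch does not reproduce.

```cpp
// Illustrative aside, not from the paper: the Henyey-Greenstein phase function,
// a standard analytic model of anisotropic (forward-peaked) scattering.
#include <cmath>
#include <cstdio>

// Probability density of scattering by angle theta, for asymmetry g in (-1, 1).
// g around 0.85-0.99 gives the strong forward peak typical of cloud droplets.
static double henyeyGreenstein(double cosTheta, double g) {
    const double pi = 3.14159265358979323846;
    double denom = 1.0 + g * g - 2.0 * g * cosTheta;
    return (1.0 - g * g) / (4.0 * pi * std::pow(denom, 1.5));
}

int main() {
    const double pi = 3.14159265358979323846;
    for (double deg : {0.0, 10.0, 90.0, 180.0}) {
        double c = std::cos(deg * pi / 180.0);
        std::printf("theta = %5.1f deg -> phase = %g\n", deg, henyeyGreenstein(c, 0.9));
    }
}
```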
2007
Rendu Interactif De Nuages Realistes (Interactive Rendering of Realistic Clouds)
Bouthors, Antoine; Neyret, Fabrice; Max, Nelson; Bruneton, Eric; Crassin, Cyril
AFIG '07 (Actes des 20èmes journées de l'AFIG), pages 183-195, Marne-la-Vallée, France, AFIG, 2007.
Links: INRIA Webpage (http://artis.imag.fr/Publications/2007/BNMBC07) | Paper (http://maverick.inria.fr/Publications/2007/BNMBC07/clouds.pdf)
Tags: Voxel, Global Illumination, gpu, real-time rendering, ray-casting, clouds rendering