Research
I am a Research Scientist at NVIDIA Research (my NVIDIA research page). I joined NVIDIA in 2011 after obtaining my Ph.D. from Grenoble University, where I worked at INRIA in France (thesis document here). My research interests include real-time realistic rendering, global illumination, alternative (voxel-based) geometric and material representations, ray tracing, anti-aliasing techniques, distributed rendering, and out-of-core data management. My main research direction is the use of pre-filtered geometric representations for efficient anti-aliased rendering of detailed scenes and complex objects, as well as for global illumination effects. My most impactful contributions are the GigaVoxels rendering pipeline and the GIVoxels/VXGI voxel-based indirect illumination technique, which had hardware implications for the NVIDIA Maxwell architecture.
Publications
2011
GigaVoxels: A Voxel-Based Rendering Pipeline for Efficient Exploration of Large and Detailed Scenes
Cyril Crassin. PhD thesis, Grenoble University, July 2011.
Links: thesis PDF (http://maverick.inria.fr/Membres/Cyril.Crassin/thesis/CCrassinThesis_EN_Web.pdf) | INRIA publication page (http://maverick.inria.fr/Publications/2011/Cra11/)
Keywords: voxel, global illumination, real-time rendering, out-of-core, GPU, ray-tracing, cone-tracing, octree
Abstract: In this thesis, we present a new approach to efficiently render large scenes and detailed objects in real time. Our approach is based on a new volumetric pre-filtered geometry representation and an associated voxel-based approximate cone tracing that allow accurate, high-performance rendering with high-quality filtering of highly detailed geometry. In order to establish this voxel representation as a standard real-time rendering primitive, we propose a new GPU-based approach designed to scale to the rendering of very large volumetric datasets. Our system achieves real-time rendering performance for several billion voxels. Our data structure exploits the fact that in CG scenes, details are often concentrated at the interface between free space and clusters of density, and it shows that volumetric models may become a valuable alternative rendering primitive for real-time applications. In this spirit, we allow a quality/performance trade-off and exploit temporal coherence. Our solution is based on an adaptive hierarchical data representation, dependent on the current view and occlusion information, coupled with an efficient ray-casting rendering algorithm. We introduce a new GPU cache mechanism that provides very efficient paging of data in video memory and is implemented as a highly efficient data-parallel process. This cache is coupled with a data-production pipeline able to dynamically load or produce voxel data directly on the GPU. One key element of our method is to guide data production and caching in video memory directly from the data requests and usage information emitted during rendering. We demonstrate our approach with several applications. We also show how our pre-filtered geometry model and approximate cone tracing can be used to very efficiently achieve blurry effects and real-time indirect lighting.
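The approximate cone tracing described in this abstract boils down to marching along a cone axis, fetching pre-filtered voxel data at a mip level that matches the cone footprint, and compositing the samples front-to-back. The following is a minimal CPU sketch of that accumulation loop under my reading of the abstract, not the actual GigaVoxels implementation; the `traceCone` signature, the `lookup` callback, and the uniform-fog stub are assumptions made purely for illustration.

```cpp
// Minimal sketch of cone tracing over a pre-filtered voxel mipmap (illustrative only).
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <functional>

struct Vec3 { float x, y, z; };
struct Sample { Vec3 rgb; float a; };               // pre-multiplied color + opacity

// `lookup` stands in for a trilinear + mipmap fetch into the pre-filtered voxel
// hierarchy; it is an assumption of this sketch, not part of the published pipeline.
Sample traceCone(const Vec3& origin, const Vec3& dir, float halfAngle,
                 float voxelSize, float maxDist,
                 const std::function<Sample(Vec3, float)>& lookup)
{
    Sample accum{{0.0f, 0.0f, 0.0f}, 0.0f};
    float t = voxelSize;                            // start one voxel away from the origin
    while (t < maxDist && accum.a < 0.99f) {
        float diameter = std::max(voxelSize, 2.0f * t * std::tan(halfAngle));
        float lod = std::log2(diameter / voxelSize); // wider footprint -> coarser mip level
        Vec3 p{origin.x + t * dir.x, origin.y + t * dir.y, origin.z + t * dir.z};
        Sample s = lookup(p, lod);
        float w = 1.0f - accum.a;                   // front-to-back compositing
        accum.rgb.x += w * s.rgb.x;
        accum.rgb.y += w * s.rgb.y;
        accum.rgb.z += w * s.rgb.z;
        accum.a     += w * s.a;
        t += 0.5f * diameter;                       // step proportional to the cone footprint
    }
    return accum;
}

int main() {
    // Placeholder volume: uniform gray fog, so the example is self-contained.
    auto fog = [](Vec3, float) { return Sample{{0.05f, 0.05f, 0.05f}, 0.05f}; };
    Sample r = traceCone({0, 0, 0}, {0, 0, 1}, 0.3f, 0.01f, 2.0f, fog);
    std::printf("accumulated opacity: %.3f\n", r.a);
    return 0;
}
```

Because the step size and the mip level both grow with the cone footprint, wide cones (soft shadows, blurry reflections, indirect lighting) need only a handful of cheap, coarse samples.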
2009
GigaVoxels: Voxels Come Into Play
Cyril Crassin. Crytek conference talk, Crytek GmbH, Frankfurt, Germany, November 2009.
Links: INRIA webpage (http://artis.imag.fr/Publications/2009/Cra09) | talk slides PPT (http://maverick.inria.fr/Publications/2009/Cra09/GigaVoxels_Crytek_web.ppt)
Keywords: GPU, voxels, ray-tracing, GigaVoxels, cache
Beyond Triangles: GigaVoxels Effects in Video Games
Cyril Crassin, Fabrice Neyret, Sylvain Lefebvre, Miguel Sainz, Elmar Eisemann. SIGGRAPH 2009 technical talk + poster (best poster award finalist), ACM SIGGRAPH, August 2009.
Links: INRIA webpage (http://artis.imag.fr/Publications/2009/CNLSE09) | slides PDF (http://maverick.inria.fr/Publications/2009/CNLSE09/GigaVoxels_Siggraph09_Slides.pdf) | sketch PDF (http://maverick.inria.fr/Publications/2009/CNLSE09/gigavoxels_siggraph09_talk.pdf)
Keywords: out-of-core, GPU, voxels, sparse, ray-tracing, depth-of-field, soft shadows
Building with Bricks: CUDA-Based GigaVoxel Rendering
Cyril Crassin, Fabrice Neyret, Elmar Eisemann. Intel Visual Computing Research Conference, March 2009.
Links: INRIA webpage (http://artis.imag.fr/Publications/2009/CNE09) | paper, authors' version (/research/publications/CNE09/IntelConf_Final.pdf)
Keywords: voxel, out-of-core, filtering, voxelization, GPU, ray-tracing, depth-of-field, real-time rendering, ray-casting, octree, GigaVoxels, cache
Abstract: For a long time, triangles have been considered the state-of-the-art primitive for fast interactive applications. Only recently, with the dawn of programmability of graphics cards, have different representations emerged. Especially for complex entities, triangles have difficulty representing convincing details, and faithful approximations quickly become costly. In this work we investigate voxels. Voxels can represent very rich and detailed objects and are of crucial importance in medical contexts. Nonetheless, one major downside is their significant memory consumption. Here, we propose an out-of-core method to deal with large volumes in real time. Only little CPU interaction is needed, which shifts the workload towards the GPU. This makes the use of large voxel data sets even easier than the usually complicated triangle-based LOD mechanisms that often rely on the CPU. This simplicity might even foreshadow the use of volume data in game contexts. We underline the latter by presenting very efficient algorithms to approximate standard effects such as soft shadows or depth of field.
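The out-of-core scheme summarized above rests on the octree-of-bricks structure used throughout the GigaVoxels publications: each node references both its children and a brick of voxel data, so only visited, resident regions occupy GPU memory. Below is a small, hypothetical CPU stand-in for such a node pool and its descent; the names (`Node`, `locate`) and the flat-array layout are illustrative assumptions, not the paper's actual GPU data layout.

```cpp
// Toy CPU stand-in for a sparse-octree node pool with per-node brick references.
#include <cstdint>
#include <cstdio>
#include <vector>

// One node of the (hypothetical) node pool. The published system keeps an analogous
// structure GPU-side; this plain struct is only a CPU illustration.
struct Node {
    uint32_t childIndex;  // index of the first of 8 children, 0 = leaf / not subdivided
    uint32_t brickIndex;  // index of the voxel brick in a brick pool, 0 = no data resident
};

// Descend to the deepest subdivided node containing point (px, py, pz) in [0,1)^3,
// stopping at `maxDepth` or when no finer data exists.
uint32_t locate(const std::vector<Node>& nodes, float px, float py, float pz, int maxDepth)
{
    uint32_t idx = 0;                        // root
    float ox = 0, oy = 0, oz = 0, size = 1;  // current node's box
    for (int depth = 0; depth < maxDepth && nodes[idx].childIndex != 0; ++depth) {
        size *= 0.5f;
        int cx = px >= ox + size, cy = py >= oy + size, cz = pz >= oz + size;
        ox += cx * size; oy += cy * size; oz += cz * size;
        idx = nodes[idx].childIndex + (cx | (cy << 1) | (cz << 2));
    }
    return idx;
}

int main() {
    // Tiny hand-built tree: a root with 8 children, of which only child (0,0,0)
    // is subdivided further and has a resident brick.
    std::vector<Node> nodes(1 + 8 + 8);
    nodes[0] = {1, 0};
    nodes[1] = {9, 1};
    uint32_t hit = locate(nodes, 0.1f, 0.2f, 0.3f, 4);
    std::printf("deepest resident node index: %u\n", (unsigned)hit);
    return 0;
}
```

The point of the brick indirection is that the renderer can fall back to a coarser, already-resident node when fine data is not loaded yet, which is what keeps memory use proportional to what is actually visible.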
2008
Interactive GigaVoxels
Cyril Crassin, Fabrice Neyret, Sylvain Lefebvre. INRIA Technical Report, 2008.
Links: INRIA webpage (http://maverick.inria.fr/Publications/2008/CNL08) | technical report PDF (http://maverick.inria.fr/Publications/2008/CNL08/RR-6567.pdf) | presentation PPT (http://maverick.inria.fr/Publications/2008/CNL08/PresGigaVoxels.ppt)
Keywords: GPU, voxels, sparse, ray-tracing, real-time rendering, volumes, hypertextures, visibility, ray-casting, octree
Abstract: We propose a new approach for the interactive rendering of large, highly detailed scenes. It is based on a new representation and algorithm for large and detailed volume data, especially well suited to cases where detail is concentrated at the interface between free space and clusters of density. This is, for instance, the case with cloudy skies and landscapes, as well as data currently represented as hypertextures or volumetric textures. Existing approaches do not efficiently store, manage, and render such data, especially at high resolution and over large extents. Our method is based on a dynamic generalized octree with MIP-mapped 3D texture bricks in its leaves. Data is stored only for the regions visible from the current viewpoint, at the appropriate resolution. Since our target scenes contain many sparse opaque clusters, this keeps memory and bandwidth consumption low during exploration. Ray-marching stops quickly when reaching opaque regions, and we efficiently skip areas of constant density. A key originality of our algorithm is that it relies directly on the ray-marcher to detect missing data: the march along every ray in every pixel may be interrupted while data is generated or loaded. It hence achieves interactive performance on very large volume data sets. Both our data structure and algorithm are well fitted to modern GPUs.
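The report's key idea, letting the ray-marcher itself detect and request missing data, can be illustrated with a toy loop: march through a virtual grid of bricks, composite what is resident, and, when a brick is missing, append a request and interrupt the march. This is only a hedged sketch under that reading of the abstract; `marchAndRequest`, the residency set, and the request buffer are illustrative stand-ins for the GPU-side cache state and request mechanism, not the report's implementation.

```cpp
// Illustrative sketch: the ray-marcher emits requests for bricks it finds missing.
#include <cstdint>
#include <cstdio>
#include <unordered_set>
#include <vector>

struct Ray { float ox, oy, oz, dx, dy, dz; };

// March `ray` through a virtual grid of res^3 bricks over [0,1)^3. Resident bricks
// would be sampled and composited; missing ones are only *requested*, and the march
// is interrupted there so it can resume once the data has been produced or loaded.
void marchAndRequest(const Ray& ray, int res,
                     const std::unordered_set<uint32_t>& resident,
                     std::vector<uint32_t>& requests)
{
    const float step = 1.0f / res;
    for (float t = 0.0f; t < 1.74f; t += step) {     // 1.74 > diagonal of the unit cube
        float x = ray.ox + t * ray.dx, y = ray.oy + t * ray.dy, z = ray.oz + t * ray.dz;
        if (x < 0 || x >= 1 || y < 0 || y >= 1 || z < 0 || z >= 1) continue;
        uint32_t id = (uint32_t)(x * res) + res * ((uint32_t)(y * res) + res * (uint32_t)(z * res));
        if (!resident.count(id)) {
            requests.push_back(id);   // ask the data-production pipeline for this brick
            return;                   // interrupt the march; refined on a later frame
        }
        // ... sample the resident brick and composite front-to-back here ...
    }
}

int main() {
    std::unordered_set<uint32_t> resident;            // nothing loaded yet
    std::vector<uint32_t> requests;
    marchAndRequest({0.5f, 0.5f, 0.0f, 0.0f, 0.0f, 1.0f}, 8, resident, requests);
    std::printf("bricks requested: %zu\n", requests.size());
    return 0;
}
```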