Research
I am a Research Scientist at NVIDIA Research (my NVIDIA Research page). I joined NVIDIA in 2011 after obtaining my Ph.D. from Grenoble University, where I worked at INRIA in France (thesis document here). My research interests include real-time realistic rendering, global illumination, alternative geometric and material representations (voxel-based), ray-tracing, anti-aliasing techniques, distributed rendering, and out-of-core data management. My main research direction is the use of pre-filtered geometric representations for the efficient anti-aliased rendering of detailed scenes and complex objects, as well as for global illumination effects. My most impactful contributions are the GigaVoxels rendering pipeline and the GIVoxels/VXGI voxel-based indirect illumination technique, which had several hardware implications in the NVIDIA Maxwell architecture.
Publications
2009
Building with Bricks: CUDA-based GigaVoxel Rendering
Crassin, Cyril; Neyret, Fabrice; Eisemann, Elmar
Intel Visual Computing Research Conference, 2009. (Inproceedings)
Links: INRIA webpage: http://artis.imag.fr/Publications/2009/CNE09 | Paper (authors' version): /research/publications/CNE09/IntelConf_Final.pdf
Tags: voxel, out-of-core, filtering, voxelization, GPU, ray-tracing, depth-of-field, real-time rendering, ray-casting, octree, GigaVoxels, cache
Abstract: For a long time, triangles have been considered the state-of-the-art primitive for fast interactive applications. Only recently, with the advent of programmable graphics cards, did different representations emerge. Especially for complex entities, triangles have difficulty representing convincing details, and faithful approximations quickly become costly. In this work we investigate voxels. Voxels can represent very rich and detailed objects and are of crucial importance in medical contexts. Nonetheless, one major downside is their significant memory consumption. Here, we propose an out-of-core method to deal with large volumes in real time. Only little CPU interaction is needed, which shifts the workload towards the GPU. This makes the use of large voxel data sets even easier than the usually complicated triangle-based LOD mechanisms that often rely on the CPU. This simplicity might even foreshadow the use of volume data in game contexts. We underline the latter by presenting very efficient algorithms to approximate standard effects such as soft shadows or depth of field.
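To give a flavor of how pre-filtered (mip-mapped) voxel data can approximate effects such as depth of field, here is a minimal C++ sketch of the common cone-footprint-to-mip-level mapping. This is illustrative only and not the paper's implementation; all names and parameter values are assumptions.

```cpp
// Hedged sketch: choose a voxel mip level from a cone footprint, the basic idea
// behind approximating blur-like effects with pre-filtered (mip-mapped) voxel data.
// Names and values are illustrative, not taken from the paper's code.
#include <algorithm>
#include <cmath>
#include <cstdio>

// Diameter of a cone at distance t from its apex, for a given aperture angle.
float coneDiameter(float t, float apertureRadians) {
    return 2.0f * t * std::tan(apertureRadians * 0.5f);
}

// Pick the mipmap level whose voxel size best matches the cone footprint:
// level 0 has voxels of size voxelSize0; each level doubles the voxel size.
float mipLevelForFootprint(float footprint, float voxelSize0, float maxLevel) {
    float level = std::log2(std::max(footprint / voxelSize0, 1.0f));
    return std::min(level, maxLevel);
}

int main() {
    const float voxelSize0 = 0.01f;  // finest voxel size in scene units (assumed)
    const float aperture   = 0.02f;  // cone aperture in radians (assumed)
    for (float t = 1.0f; t <= 64.0f; t *= 4.0f) {
        float d = coneDiameter(t, aperture);
        printf("t = %5.1f  footprint = %.4f  mip level = %.2f\n",
               t, d, mipLevelForFootprint(d, voxelSize0, 10.0f));
    }
    return 0;
}
```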
GigaVoxels: Ray-Guided Streaming for Efficient and Detailed Voxel Rendering
Crassin, Cyril; Neyret, Fabrice; Lefebvre, Sylvain; Eisemann, Elmar
ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games (I3D), ACM, 2009. (Inproceedings)
Links: INRIA paper page: http://artis.imag.fr/Publications/2009/CNLE09 | Paper (authors' version): http://maverick.inria.fr/Publications/2009/CNLE09/CNLE09.pdf
Tags: voxel, rendering, out-of-core, filtering, GPU, real-time rendering, ray-casting, octree, cache
Abstract: We propose a new approach to efficiently render large volumetric data sets. The system achieves interactive to real-time rendering performance for several billion voxels. Our solution is based on an adaptive data representation depending on the current view and occlusion information, coupled with an efficient ray-casting rendering algorithm. One key element of our method is to guide data production and streaming directly from information extracted during rendering. Our data structure exploits the fact that in CG scenes details are often concentrated on the interface between free space and clusters of density, and shows that volumetric models might become a valuable alternative as a rendering primitive for real-time applications. In this spirit, we allow a quality/performance trade-off and exploit temporal coherence. We also introduce a mipmapping-like process that allows for an increased display rate and better quality through high-quality filtering. To further enrich the data set, we create additional details through a variety of procedural methods. We demonstrate our approach in several scenarios, such as the exploration of a 3D scan (8192³ resolution), of hypertextured meshes (16384³ virtual resolution), and of a fractal (theoretically infinite resolution). All examples are rendered on current-generation hardware at 20-90 fps while respecting a limited GPU memory budget.
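The abstract's key point is that rays themselves detect missing data and drive streaming. The minimal C++ sketch below illustrates that control flow only: a ray that reaches a non-resident node records a load request and terminates for the frame. The data layout and names are illustrative assumptions, not GigaVoxels' actual GPU code.

```cpp
// Hedged sketch of ray-guided streaming: during marching, a ray that reaches a
// node whose brick is not resident does not stall; it records a request so the
// streamer can load that data for a later frame. Layout and names are assumed.
#include <cstdint>
#include <cstdio>
#include <vector>

struct Node {
    bool     resident;  // is the voxel brick for this node in GPU memory?
    bool     empty;     // constant empty region: skip it entirely
    uint32_t brickId;   // index of the brick when resident
};

// March one ray through the (flattened, illustrative) list of nodes it crosses.
// Returns true if all needed data was resident; otherwise appends the first
// missing node index to 'requests' and interrupts the march.
bool marchRay(const std::vector<Node>& nodesAlongRay,
              std::vector<uint32_t>& requests) {
    for (uint32_t i = 0; i < nodesAlongRay.size(); ++i) {
        const Node& n = nodesAlongRay[i];
        if (n.empty) continue;        // skip constant/empty space
        if (!n.resident) {            // missing data detected by the ray itself
            requests.push_back(i);    // ask the producer/streamer for this node
            return false;             // interrupt the march for this frame
        }
        // ... accumulate color/opacity from brick n.brickId, early-exit when opaque ...
    }
    return true;
}

int main() {
    std::vector<Node> ray = {{true, true, 0}, {true, false, 7}, {false, false, 0}};
    std::vector<uint32_t> requests;
    bool complete = marchRay(ray, requests);
    printf("complete = %d, pending requests = %zu\n", complete ? 1 : 0, requests.size());
    return 0;
}
```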
2008
Interactive Multiple Anisotropic Scattering in Clouds
Bouthors, Antoine; Neyret, Fabrice; Max, Nelson; Bruneton, Eric; Crassin, Cyril
ACM Symposium on Interactive 3D Graphics and Games (I3D), 2008. (Inproceedings)
Links: INRIA publication page: http://www-evasion.imag.fr/Publications/2008/BNMBC08 | Paper (authors' version): http://maverick.inria.fr/Publications/2008/BNMBC08/cloudsFINAL.pdf
Tags: voxel, global illumination, lighting, real-time rendering, ray-casting, clouds rendering
Abstract: We propose an algorithm for the real-time realistic simulation of multiple anisotropic scattering of light in a volume. Contrary to previous real-time methods, we account for all kinds of light paths through the medium and preserve their anisotropic behavior. Our approach consists of estimating the energy transport from the illuminated cloud surface to the rendered cloud pixel for each separate order of multiple scattering. We represent the distribution of light paths reaching a given viewed cloud pixel with the mean and standard deviation of their entry points on the lit surface, which we call the collector area. At rendering time, for each pixel we determine the collector area on the lit cloud surface for different sets of scattering orders, then infer the associated light transport. The fast computation of the collector area and light transport is possible thanks to a preliminary analysis of multiple scattering in plane-parallel slabs and does not require slicing or marching through the volume. Rendering is done efficiently in a shader on the GPU, relying on a cloud surface mesh augmented with a hypertexture to enrich the shape and silhouette. We demonstrate our model with the interactive rendering of detailed animated cumulus clouds and a cloudy sky at 2-10 frames per second.
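The "collector area" is simply the mean and standard deviation of the entry points of the light paths reaching a pixel. The short C++ sketch below only shows what that statistic is; in the paper it is obtained from a precomputed plane-parallel-slab analysis rather than from runtime sampling, and the points and parameterization here are illustrative assumptions.

```cpp
// Hedged sketch of the "collector area" statistic: summarize light-path entry
// points on the lit cloud surface by their mean and standard deviation.
// Sampling and shading are out of scope; the sample points are placeholders.
#include <cmath>
#include <cstdio>
#include <vector>

struct Vec2 { float x, y; };            // entry point parameterized on the lit surface

struct Collector { Vec2 mean; Vec2 stddev; };

Collector collectorArea(const std::vector<Vec2>& entryPoints) {
    Collector c{{0, 0}, {0, 0}};
    const float n = static_cast<float>(entryPoints.size());
    for (const Vec2& p : entryPoints) { c.mean.x += p.x / n; c.mean.y += p.y / n; }
    for (const Vec2& p : entryPoints) {
        c.stddev.x += (p.x - c.mean.x) * (p.x - c.mean.x) / n;
        c.stddev.y += (p.y - c.mean.y) * (p.y - c.mean.y) / n;
    }
    c.stddev.x = std::sqrt(c.stddev.x);
    c.stddev.y = std::sqrt(c.stddev.y);
    return c;
}

int main() {
    std::vector<Vec2> samples = {{0.1f, 0.2f}, {0.3f, 0.1f}, {0.2f, 0.4f}};
    Collector c = collectorArea(samples);
    printf("mean = (%.3f, %.3f), stddev = (%.3f, %.3f)\n",
           c.mean.x, c.mean.y, c.stddev.x, c.stddev.y);
    return 0;
}
```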
Interactive GigaVoxels
Crassin, Cyril; Neyret, Fabrice; Lefebvre, Sylvain
INRIA Technical Report, 2008. (Techreport)
Links: INRIA webpage: http://maverick.inria.fr/Publications/2008/CNL08 | Technical report: http://maverick.inria.fr/Publications/2008/CNL08/RR-6567.pdf | Presentation (PPT): http://maverick.inria.fr/Publications/2008/CNL08/PresGigaVoxels.ppt
Tags: GPU, voxels, sparse, ray-tracing, real-time rendering, volumes, hypertextures, visibility, ray-casting, octree
Abstract: We propose a new approach for the interactive rendering of large, highly detailed scenes. It is based on a new representation and algorithm for large and detailed volume data, especially well suited to cases where detail is concentrated at the interface between free space and clusters of density. This is for instance the case with cloudy skies and landscapes, as well as data currently represented as hypertextures or volumetric textures. Existing approaches do not efficiently store, manage and render such data, especially at high resolution and over large extents. Our method is based on a dynamic generalized octree with MIP-mapped 3D texture bricks in its leaves. Data is stored only for regions visible from the current viewpoint, at the appropriate resolution. Since our target scenes contain many sparse opaque clusters, this keeps memory and bandwidth consumption low during exploration. Ray-marching stops quickly when reaching opaque regions, and we efficiently skip areas of constant density. A key originality of our algorithm is that it relies directly on the ray-marcher to detect missing data: the march along every ray in every pixel may be interrupted while data is generated or loaded. It hence achieves interactive performance on very large volume data sets. Both our data structure and algorithm are well suited to modern GPUs.
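As a rough illustration of the data structure described in this report (a generalized octree with MIP-mapped 3D texture bricks in its leaves and flagged constant regions), here is a minimal C++ sketch. The field names, encodings, and brick size are assumptions for illustration, not the report's actual GPU layout.

```cpp
// Hedged sketch of a generalized octree whose leaves reference MIP-mapped voxel
// bricks, with constant (often empty) regions flagged so the ray-marcher can skip
// them. All names, sizes, and encodings are illustrative assumptions.
#include <array>
#include <cstdint>
#include <vector>

struct BrickRef {
    uint16_t x, y, z;           // brick coordinates inside a large 3D texture pool
};

struct OctreeNode {
    // A node is in one of three states: constant region, interior node, or leaf brick.
    bool     isConstant;        // homogeneous region (e.g. empty space): no brick needed
    float    constantDensity;   // density used when isConstant is true
    bool     hasChildren;       // interior node: subtree is subdivided further
    uint32_t firstChild;        // index of the first of 8 children in the node pool
    BrickRef brick;             // leaf: MIP-mapped voxel brick holding the actual data
};

// Node pool plus brick pool; bricks are streamed in and out on demand, so the
// pools act as caches over the full (possibly enormous) virtual volume.
struct GigaVolumeCache {
    std::vector<OctreeNode> nodePool;
    std::vector<std::array<float, 32 * 32 * 32>> brickPool;  // 32^3 bricks (assumed size)
};

int main() { GigaVolumeCache cache; (void)cache; return 0; }
```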
2007
Rendu Interactif de Nuages Réalistes (Interactive Rendering of Realistic Clouds)
Bouthors, Antoine; Neyret, Fabrice; Max, Nelson; Bruneton, Eric; Crassin, Cyril
AFIG '07 (Actes des 20èmes journées de l'AFIG), pages 183-195, Marne-la-Vallée, France, AFIG, 2007. (Inproceedings)
Links: INRIA webpage: http://artis.imag.fr/Publications/2007/BNMBC07 | Paper: http://maverick.inria.fr/Publications/2007/BNMBC07/clouds.pdf
Tags: voxel, global illumination, GPU, real-time rendering, ray-casting, clouds rendering
Représentation et Algorithmes pour l'Exploration Interactive de Volumes Procéduraux Étendus et Détaillés (Representation and Algorithms for the Interactive Exploration of Extended and Detailed Procedural Volumes)
Crassin, Cyril
M2 Recherche UJF/INPG, INRIA, 2007. (Master's thesis)
Links: INRIA publication page: http://maverick.inria.fr/Publications/2007/CN07/ | Thesis document: http://maverick.inria.fr/Publications/2007/CN07/RapportINRIA.pdf
Tags: voxel, GPU, real-time rendering, visibility, ray-casting, octree
Abstract: Natural scenes are often both very rich in detail and spatially vast. In this project, we are particularly interested in volumetric data such as clouds, avalanches, and foam. The special-effects industry relies on software solutions for rendering large volumes of voxels, which have produced beautiful results but at a very high cost in computation time and memory. Conversely, the power of programmable graphics cards (GPUs) is driving a convergence between real-time and realistic rendering; however, the limited memory of these cards means that the volume data representable in real time remains small (512³ is a maximum). The challenge this project seeks to address is to propose GPU-suited structures and algorithms that truly scale, and thus to handle interactively what currently requires hours of computation.
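A quick back-of-the-envelope calculation, in C++ for consistency with the sketches above, shows why 512³ was a practical ceiling for dense volumes on the GPUs of that era; the bytes-per-voxel value and mipmap overhead factor are assumptions for illustration.

```cpp
// Hedged arithmetic for the 512^3 figure above: a dense, uncompressed volume at an
// assumed 4 bytes per voxel (e.g. RGBA8) already fills the memory of a typical
// 2007-era graphics card, before even counting a full mip chain (~1/7 extra in 3D).
#include <cstdio>

int main() {
    const double voxels        = 512.0 * 512.0 * 512.0;   // dense 512^3 grid
    const double bytesPerVoxel = 4.0;                      // RGBA8, assumed
    const double base          = voxels * bytesPerVoxel;   // finest level only
    const double withMips      = base * 8.0 / 7.0;         // + full 3D mip chain
    printf("512^3 dense volume: %.0f MiB, with mipmaps: %.0f MiB\n",
           base / (1024.0 * 1024.0), withMips / (1024.0 * 1024.0));
    return 0;
}
```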