3D Rendering - Techniques And Challenges

Transcription

Vishal Verma et al. / International Journal of Engineering and Technology Vol. 2(2), 2010, 29-33

3D Rendering - Techniques and Challenges

Vishal Verma#1, Ekta Walia*2
#Department of Computer Science, M L N College, Yamuna Nagar, Haryana, INDIA
1me_vishaal@hotmail.com
*IT Department, Maharishi Markandeshwar University, Mullana, Ambala, INDIA
2wekta@yahoo.com

Abstract—Computer generated images and animations are getting more and more common. They are used in many different contexts such as visualization and CAD. Advanced ways of describing surface and light source properties are important to ensure that artists are able to create realistic and stylish-looking images. Even when using advanced rendering algorithms such as ray tracing, the time required for shading may contribute a large part of the image creation time. Therefore both performance and flexibility are important in a rendering system. This paper gives a comparative study of various 3D rendering techniques and their challenges in a complete and systematic manner.

I. INTRODUCTION

In the real world, light sources emit photons that normally travel in straight lines until they interact with a surface or a volume. When a photon encounters a surface, it may be absorbed, reflected, or transmitted. Some of these photons may hit the retina of an observer, where they are converted into a signal that is then processed by the brain, thus forming an image. Similarly, photons may be caught by the sensor of a camera. In either case, the image is a 2D representation of the environment.

The formation of an image as a result of photons interacting with a 3D environment may be simulated on the computer. The environment is then replaced by a 3D geometric model, and the interaction of light with this model is simulated with one of a large number of available algorithms. The process of image synthesis by simulating light behavior is called rendering.

II. GEOMETRY BASED RENDERING ALGORITHMS

In geometry based rendering, the illumination of a scene has to be simulated by applying a shading model. As hardware systems provided more and more computing power, those models became more sophisticated.

Gouraud shading [1] is a very simple technique that linearly interpolates the color intensities calculated at the vertices of a rendered polygon across the interior of the polygon. Phong introduced a more accurate model [2] that is able to simulate specular highlights. He also proposed interpolating normals instead of intensities across rendered polygons, thus enabling more accurate evaluation of the actual shading model. Many fast methods [3] [4] have also been proposed that approximate the quality of Phong shading. All of these models are local in the sense that they fail to model global illumination effects such as reflection. A comparative study of local illumination methods in terms of speed and visual quality has been done by Walia and Singh [5]. A small sketch of this style of local shading follows.
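The sketch below evaluates the Phong reflection model [2] at one surface point using a per-pixel interpolated normal, which is what separates Phong shading from Gouraud shading. It assumes a single point light and scalar material coefficients; the function names and the values in the example call are illustrative, not taken from the paper.

import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def phong_shade(point, normal, eye, light_pos, light_color,
                ka=0.1, kd=0.7, ks=0.6, shininess=32.0):
    # `normal` stands for an interpolated per-pixel normal; Gouraud
    # shading would instead interpolate colors computed at vertices.
    n = normalize(normal)
    l = normalize(light_pos - point)           # direction to the light
    v = normalize(eye - point)                 # direction to the viewer
    r = normalize(2.0 * np.dot(n, l) * n - l)  # mirror reflection of l about n
    ambient = ka
    diffuse = kd * max(np.dot(n, l), 0.0)
    specular = ks * max(np.dot(r, v), 0.0) ** shininess
    return (ambient + diffuse + specular) * light_color

# Illustrative call: one surface point facing the viewer.
color = phong_shade(point=np.array([0.0, 0.0, 0.0]),
                    normal=np.array([0.0, 0.0, 1.0]),
                    eye=np.array([0.0, 0.0, 5.0]),
                    light_pos=np.array([2.0, 2.0, 4.0]),
                    light_color=np.array([1.0, 1.0, 1.0]))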
There is a second class of illumination models that can be applied to polygonal scenes, the so-called global illumination models. Unlike the local methods, these methods are able to simulate the inter-reflections between surfaces. Diffuse inter-reflections can be simulated by the radiosity method [6], and specular reflections are handled by recursive ray tracing techniques [7]; a minimal ray tracing sketch is given after the list below. Many more advanced global illumination models [8] are also available. However, they are computationally too complex to be used for real time image synthesis on available hardware.

The major problems with Geometry Based Rendering are:
- There is no guarantee of the correctness of the models.
- A lot of computation time is needed.
- Rendering algorithms are complex and therefore call for special hardware if interactive speeds are needed.
- Even if special hardware is used, the performance of the system is hard to measure, since the rendering time is highly dependent on the scene complexity.
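The following sketch shows the recursive structure of ray tracing in the spirit of [7]: each hit spawns a mirror-reflection ray whose contribution is added to a local diffuse term. The sphere-only scene, the fixed light direction, and the 0.8 / 0.2 mixing weights are illustrative assumptions, not the paper's method.

import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def hit_sphere(origin, direction, center, radius):
    # Return the nearest positive ray parameter t, or None on a miss.
    # `direction` is assumed to be unit length, so the quadratic has a = 1.
    oc = origin - center
    b = 2.0 * np.dot(direction, oc)
    c = np.dot(oc, oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None
    t = (-b - np.sqrt(disc)) / 2.0
    return t if t > 1e-4 else None  # small epsilon avoids self-intersection

def trace(origin, direction, spheres, depth=0, max_depth=3):
    # Recursively follow a ray, adding a mirror reflection at each hit.
    if depth >= max_depth:
        return np.zeros(3)
    best = None
    for center, radius, color in spheres:   # closest sphere along the ray
        t = hit_sphere(origin, direction, center, radius)
        if t is not None and (best is None or t < best[0]):
            best = (t, center, color)
    if best is None:
        return np.array([0.2, 0.2, 0.3])    # background color
    t, center, color = best
    point = origin + t * direction
    n = normalize(point - center)
    refl = direction - 2.0 * np.dot(direction, n) * n
    # Local diffuse term with a fixed light direction, plus an
    # attenuated recursive reflection term.
    local = color * max(np.dot(n, normalize(np.array([1.0, 1.0, 1.0]))), 0.0)
    return 0.8 * local + 0.2 * trace(point, normalize(refl), spheres, depth + 1)

spheres = [(np.array([0.0, 0.0, -3.0]), 1.0, np.array([1.0, 0.0, 0.0]))]
pixel = trace(np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, -1.0]), spheres)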

III. IMAGE BASED RENDERING ALGORITHMS

Traditionally, a description of the 3D scene being rendered is provided by a detailed and complex model of the scene. To avoid the expense of modeling a complicated scene, it is sometimes more convenient to photograph the scene from different viewpoints. To create images for novel viewpoints that were not photographed, an interpolation scheme may be applied. Rendering that uses images as a modeling primitive is called image-based rendering (IBR).

Computer graphics researchers have recently turned to image-based rendering for the following reasons:
- It comes close to photo realism.
- Rendering time is decoupled from scene complexity.
- Images are used as input.
- It exploits coherence.
- Scene data / images can be pre-calculated.

Instead of constructing a scene with millions of polygons, in Image Based Rendering the scene is represented by a collection of photographs along with a greatly simplified geometric model. This simple representation allows traditional light transport simulations to be replaced with basic image processing routines that combine multiple images together to produce never-before-seen images from new vantage points.

Many IBR representations have been invented in the literature. They fall into three basic categories [9]:
- Rendering with no geometry
- Rendering with implicit geometry
- Rendering with explicit geometry

A. Rendering with no geometry

We start with representative techniques for rendering with unknown scene geometry. These techniques typically rely on many input images and on the characterization of the 7D plenoptic function [10]. Common approaches under this class are:
- Light field [11]
- Lumigraph [12]
- Concentric mosaics [13]

The lightfield is the radiance density function describing the flow of energy along all rays in 3D space. Since the description of a ray's position and orientation requires four parameters (e.g., two-dimensional positional information and two-dimensional angular information), the radiance is a 4D function. An image, on the other hand, is only two dimensional, and lightfield imagery must therefore be captured and represented in 2D form. A variety of techniques have been developed to transform and capture the 4D radiance in a manner compatible with 2D [11] [12].

In Light Field Rendering [11], the light fields are created from large arrays of both rendered and digitized images. The latter are acquired using a video camera mounted on a computer-controlled gantry. Once a light field has been created, new views may be constructed in real time by extracting 2D slices from the 4D light field of the scene in appropriate directions; a sketch of this step follows the Challenges note below. The Lumigraph [12] is similar to light field rendering [11]; in addition, it allows any geometric knowledge we may capture to be included to improve rendering performance. Unlike the light field and Lumigraph, where cameras are placed on a two-dimensional grid, the 3D Concentric Mosaics [13] representation reduces the amount of data by capturing a sequence of images along a circular path.

Challenges: Because such rendering techniques do not rely on any geometric impostors, they tend to rely on oversampling to counter undesirable aliasing effects in the output display. Oversampling means more intensive data acquisition, more storage, and higher redundancy.
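A minimal sketch of synthesizing a novel view from a light field stored as a two-plane camera grid, in the spirit of [11]. Blending only the four nearest reference cameras, rather than resampling each output ray in (u, v) as well, is a simplification; the array shapes and values are illustrative assumptions.

import numpy as np

def render_novel_view(lightfield, s, t):
    # `lightfield` has shape (S, T, U, V, 3): an S x T grid of
    # reference images, each U x V RGB.  A novel view is a 2D slice
    # of this 4D function, approximated here by bilinearly blending
    # the four captured views nearest to the fractional camera
    # position (s, t).
    S, T = lightfield.shape[:2]
    s0, t0 = int(np.floor(s)), int(np.floor(t))
    s1, t1 = min(s0 + 1, S - 1), min(t0 + 1, T - 1)
    ds, dt = s - s0, t - t0
    return ((1 - ds) * (1 - dt) * lightfield[s0, t0] +
            (1 - ds) * dt       * lightfield[s0, t1] +
            ds       * (1 - dt) * lightfield[s1, t0] +
            ds       * dt       * lightfield[s1, t1])

# Illustrative: a 4 x 4 grid of 64 x 64 random "photographs".
lf = np.random.rand(4, 4, 64, 64, 3)
view = render_novel_view(lf, s=1.3, t=2.6)   # viewpoint between four cameras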
B. Rendering with implicit geometry

These rendering techniques rely on positional correspondences (typically across a small number of images) to render new views. The class is called implicit because geometry is not directly available. Common approaches under this class are:
- View interpolation [14]
- View morphing [15]
- Joint view interpolation [16]

View interpolation [14] uses optical flow (i.e., relative transforms between cameras) to directly generate intermediate views. The problem with this method is that the intermediate view may not necessarily be geometrically correct. View morphing [15] is a specialized version of view interpolation, except that the interpolated views are always geometrically correct. The geometric correctness is ensured by the linear camera motion. Computer vision techniques are usually used to generate such correspondences. M. Lhuillier et al. proposed a new method [16] which automatically interpolates two images and tackles the two most difficult problems of morphing caused by the lack of depth information: pixel matching and visibility handling. A sketch of the shared interpolation step is given after the Challenges note below.

Challenges: Representations that rely on implicit geometry require accurate image registration for high-quality view synthesis.
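A minimal sketch of the interpolation step shared by these methods, assuming the two views are already rectified (the prewarp and postwarp steps of view morphing [15], which reduce general camera pairs to this case, are omitted) and that point correspondences are given. All values in the example are illustrative.

import numpy as np

def interpolate_views(pts0, pts1, colors0, colors1, alpha):
    # `pts0` and `pts1` are (N, 2) arrays of matching pixel positions
    # in the two reference images.  For parallel (rectified) views,
    # linearly interpolating positions yields a geometrically valid
    # in-between view; colors are blended with the same weight.
    pts = (1.0 - alpha) * pts0 + alpha * pts1
    colors = (1.0 - alpha) * colors0 + alpha * colors1
    return pts, colors

# Illustrative: three correspondences, halfway between the views.
pts0 = np.array([[10.0, 20.0], [50.0, 22.0], [30.0, 40.0]])
pts1 = np.array([[14.0, 20.0], [57.0, 22.0], [36.0, 40.0]])
c0 = np.array([[1.0, 0.0, 0.0]] * 3)
c1 = np.array([[0.8, 0.1, 0.1]] * 3)
mid_pts, mid_colors = interpolate_views(pts0, pts1, c0, c1, alpha=0.5)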

C. Rendering with explicit geometry

Representations that do not rely on geometry typically require a lot of images for rendering, and representations that rely on implicit geometry require accurate image registration for high-quality view synthesis. IBR representations that use explicit geometry instead carry source descriptions. Such descriptions can be the scene geometry, the texture maps, the surface reflection model, etc.

1) Scene Geometry as Depth Maps

These approaches use depth maps as the scene representation. Depth maps record the per-pixel depth values of the reference views. Such a depth map is easily available for synthetic scenes, and can be obtained for real scenes via a range finder. Common approaches under this class are:
- 3D warping [17]
- Relief texture [18]
- Layered Depth Images (LDI) [19]
- LDI tree [20]

When depth information is available for every point in one or more images, 3D warping [17] techniques can be used to render nearby viewpoints. To improve the rendering speed of 3D warping, the warping process can be factored into a relatively simple pre-warping step and a traditional texture mapping step. The texture mapping step can be performed by standard graphics hardware. This is the idea behind relief texture [18]. A similar factoring algorithm was used for the LDI [19], where the depth map is first warped to the output image with a visibility check, and colors are pasted afterwards. An LDI stores a view of the scene from a single input camera view, but with multiple pixels along each line of sight. Though the LDI has the simplicity of warping a single image, it does not consider the issue of sampling rate. Chang et al. [20] proposed LDI trees so that the sampling rates of the reference images are preserved by adaptively selecting an LDI in the LDI tree for each pixel. A sketch of the basic 3D warp follows.
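The sketch below performs the core 3D warp: every reference pixel is unprojected using its depth value and reprojected into a novel camera. Hole filling and the z-buffer visibility check of a full implementation are omitted, and the intrinsics, depth values, and camera motion in the example are illustrative assumptions.

import numpy as np

def warp_to_novel_view(depth, K, R, t):
    # `depth` is the reference view's per-pixel depth map, `K` the 3x3
    # intrinsic matrix (assumed shared by both cameras), and (R, t) the
    # rigid motion from the reference camera to the novel camera.
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3).T
    rays = np.linalg.inv(K) @ pix              # unproject to viewing rays
    points = rays * depth.reshape(1, -1)       # 3D points, reference frame
    cam2 = R @ points + t.reshape(3, 1)        # transform to novel camera
    proj = K @ cam2                            # project back to pixels
    proj = proj[:2] / proj[2:3]
    return proj.T.reshape(h, w, 2)             # (x, y) per reference pixel

# Illustrative: a flat scene 5 units away, novel camera shifted right.
K = np.array([[100.0, 0.0, 32.0],
              [0.0, 100.0, 32.0],
              [0.0,   0.0,  1.0]])
depth = np.full((64, 64), 5.0)
coords = warp_to_novel_view(depth, K, np.eye(3), np.array([0.1, 0.0, 0.0]))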
2) Scene Geometry as Mesh Model

The mesh model is the most widely used component in model-based rendering. Despite the difficulty of obtaining such a model, if one is available in image-based rendering, we should make use of it to improve the rendering quality. Common approaches that use mesh models as the scene representation are:
- Unstructured Lumigraph [21]
- Spatial-temporal view interpolation [22] [23]

Buehler et al. proposed unstructured Lumigraph rendering [21], where weighted light ray interpolation is used to obtain the light rays in the novel view. One concern about the mesh model is that it has a finite resolution. To remove the granular effects in the rendered image due to finite resolution, a model smoothing algorithm was applied during rendering, which greatly improved the resultant image quality [22] [23].

3) Scene Geometry with Texture Maps

As texture maps are often obtained from real objects, a geometric model with texture mapping can produce very realistic scenes. Common approaches that use texture maps with scene geometry as the scene representation are:
- View dependent texture map [24] [25]
- Image-based visual hull [26]

Debevec et al. proposed view dependent texture mapping (VDTM) [24], in which novel views are generated from the texture maps of the reference views through a weighting scheme. The weights are determined by the angular deviation from the reference views to the virtual view to be rendered; a sketch of such a weighting follows. Later, a more efficient implementation of VDTM was proposed in [25], where the per-pixel weight calculation is replaced by a per-polygon search in a pre-computed lookup table. The image-based visual hull (IBVH) algorithm [26] can be considered another example of VDTM. In IBVH, the scene geometry is reconstructed through an image space visual hull [27] algorithm. Note that VDTM is in fact a special case of the later proposed unstructured Lumigraph rendering [21].
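The sketch below computes one plausible form of these angular-deviation weights: each reference view is weighted by the inverse of the angle between its viewing direction and the virtual viewing direction, and the weights are then normalized so that nearby reference views dominate the blend. The inverse-angle falloff is an illustrative choice, not the exact scheme of [24].

import numpy as np

def vdtm_weights(virtual_dir, reference_dirs, eps=1e-6):
    # Blend weights for view dependent texture mapping: the smaller the
    # angle between a reference view's direction and the virtual view's
    # direction, the larger that reference view's weight.
    v = virtual_dir / np.linalg.norm(virtual_dir)
    weights = []
    for d in reference_dirs:
        d = d / np.linalg.norm(d)
        angle = np.arccos(np.clip(np.dot(v, d), -1.0, 1.0))
        weights.append(1.0 / (angle + eps))   # eps guards a zero angle
    w = np.array(weights)
    return w / w.sum()                        # normalize to sum to 1

# Illustrative: virtual view close to the first of three references.
refs = [np.array([0.0, 0.0, 1.0]),
        np.array([1.0, 0.0, 1.0]),
        np.array([-1.0, 0.0, 1.0])]
w = vdtm_weights(np.array([0.05, 0.0, 1.0]), refs)  # first weight largest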
4) Scene Geometry with Reflection Model

Other than the texture map, the appearance of an object is also determined by the interaction of the light sources in the environment with the surface reflection model. Common approaches that use a reflection model with scene geometry as the scene representation are:
- Reflection space IBR [28]
- Surface light field [29]

In [28], Cabral et al. proposed reflection space image-based rendering, which records the total reflected radiance for each possible surface direction. This method assumes that if two surface points share the same surface direction, they have the same reflection pattern, which might not hold for several reasons, such as inter-reflections. Wood et al. proposed the improved surface light field [29], which also accounts for inter-reflections.

Challenges: Obtaining source descriptions from real images is hard even with state-of-the-art vision algorithms.

D. Sampling and Compression

Once the IBR representation of the scene has been determined, one may further reduce the data size through sampling and compression [9] [30]. A sampling analysis can tell the minimum number of images or light rays necessary to render the scene at a satisfactory quality. Compression, on the other hand, can further remove the redundancy within and between the captured images. Due to the high redundancy in many IBR representations, an efficient IBR compression algorithm can easily reduce the data size by tens or hundreds of times; the arithmetic below illustrates the scale involved.
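A back-of-the-envelope illustration of why compression matters for IBR data sets; the camera grid size, image resolution, and 100:1 ratio are illustrative numbers, not figures from the paper.

# Raw size of an uncompressed two-plane light field:
# a 32 x 32 camera grid of 256 x 256 RGB images, 1 byte per channel.
raw_bytes = 32 * 32 * 256 * 256 * 3
print(raw_bytes / 2**20)         # 192.0 MiB uncompressed
# An IBR codec achieving 100:1 would bring this below 2 MiB.
print(raw_bytes / 100 / 2**20)   # ~1.9 MiB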
IV. IMPORTANCE OF IBR

Traditionally, virtual reality environments have been generated by rendering a geometrical model of the environment using techniques such as polygon rendering or ray-tracing.
