Plane Segmentation of 3D Time-of-Flight Camera's Range Data by Normalized Cuts for Navigating a Tracked Robot


INTERNATIONAL JOURNAL OF INTELLIGENT CONTROL AND SYSTEMS, VOL. 20, NO. 1, MARCH 2015, 9-15

Plane Segmentation of 3D Time-of-Flight Camera's Range Data by Normalized Cuts for Navigating a Tracked Robot

Cang Ye and GuruPrasad M. Hegde

Abstract—This paper describes a method for extracting planar surfaces from the range data captured by a 3D Time-of-Flight camera, the SwissRanger SR-3000, for navigation of a tracked robot. The focus of the method is to segment vertical and horizontal planes from the range data of indoor scenes. The method first enhances a range image by using the surface normals of the 3D data points. It then partitions the normal-enhanced range image into a number of segments by using the normalized cuts algorithm. A Least-Square Plane (LSP) is fit to each segment, and the fitting error is used to determine whether the segment is a plane. From the resulting planar segments, each vertical/horizontal segment is labeled based on the normal of its LSP. A pair of vertical or horizontal segments is merged if they are neighbors. Through this region growing process, the vertical and horizontal planes are extracted from the range data. The proposed method can be used to navigate a tracked robot in indoor environments.

Keywords: range data segmentation, plane segmentation, range image enhancement, normalized cuts, time-of-flight camera.

I. INTRODUCTION

Tracked robots have gained a wide range of applications in military and law enforcement operations, such as reconnaissance [1] and explosive ordnance disposal [2, 3], as well as urban search and rescue in disaster situations [4]. For efficient operation in an indoor environment, it is essential for a tracked robot to recognize typical indoor structures, such as stairways, doors and hallways, and to apply appropriate actions accordingly.
For instance, detection of a stairway causes the robot to approach the stair and traverse it by invoking stair-traversing control [5], while detection of a door causes the robot to move towards the door and open it by applying a door-opening action. In either case, the robot uses the detected structure as a navigational waypoint to get to the next floor or to enter/exit a room. In the stairway case, detection of an upward stair and a downward stair may call for two different stair-traversing controls. In addition to the above use, detected structures may also serve the following purposes: (1) the robot may use the structures as landmarks to localize itself; (2) the robot may build a symbolic map of the environment by using the structural information to reduce storage needs and speed up map retrieval.

____________
This work was supported in part by NASA under award NNX13AD32A and the Arkansas Space Grant Consortium under award UALR18800, by a NASA EPSCoR RID Award, and a matching fund from the Arkansas Science and Technology Authority.
C. Ye is with the Department of Systems Engineering, University of Arkansas at Little Rock, Little Rock, AR 72204 USA (phone: 501-683-7284; fax: 501-569-8698; e-mail: cxye@ualr.edu).
G. Hegde was with the Department of Systems Engineering, University of Arkansas at Little Rock, Little Rock, AR 72204 USA. He is now with Land Rover, Ltd., Whitley, CV3 4LF, U.K. (e-mail: gmhegde@ualr.edu).
____________

Since these common indoor structures are often constituted by planar surfaces, efficient plane extraction becomes a needed capability for the robot. After plane extraction, pattern recognition algorithms may be used to group the planar surfaces into structures and identify them based on their geometric constituents and geometric contexts (i.e., inter-plane relationships). Taking a stairway as an example, the structure may be characterized as repeated tread patterns (Fig. 1a) or alternating tread and riser patterns.

Fig. 1. Features of a stairway: (a) Tread pattern. (b) Riser pattern.

A tread pattern is a pair of parallel planes with a certain size, orientation and height. There is no overlap if one plane is projected onto the other, and the distance between the planes is d. The size, orientation and height are the local attributes of each plane, while d is the geometric context (inter-plane distance). The riser pattern is defined similarly (Fig. 1b), with plane orientation orthogonal to the tread pattern. Using the defined patterns, a plane may be classified as a tread/riser by a classification method using the local attributes and the geometric context. Then, the classified planes may be grouped into a stairway using the neighborhood relationships.

More generally, plane segmentation is an important problem in range data processing and has a wider range of applications in computer vision. For example, the information about where two planes intersect in 3D space can be used to extract prominent linear features such as corners in a room. These features are important for registering multiple scans of range data.

Researchers have addressed the problem of plane segmentation either in the 3D point cloud domain [6, 7, 8, 9] or by representing the range data as an image [10, 11, 12]. Venable and Uijt de Haag [8] propose a so-called histogramming method for extracting planes from the range data of a 3D Time-Of-Flight (TOF) camera, the SwissRanger SR-3000. The method first divides a range image into a number of equal sub-images. It then fits a Least-Square Plane (LSP) to the set of 3D data points belonging to each sub-image, and the plane with the smallest fitting error is chosen as a candidate planar feature. A histogram of the distances d from the rest of the data points to the candidate plane is computed. Data points that are closely located around d ≈ 0 and d ≈ D in the histogram are classified as points in the candidate plane and points in a parallel plane, respectively. The advantage of the method is its real-time performance.
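As a concrete illustration of the histogramming idea, the sketch below builds the distance histogram for a candidate plane and classifies points near d ≈ 0 and d ≈ D. This is a minimal NumPy sketch on synthetic data, not the implementation of [8]; the function name and thresholds are illustrative.

```python
import numpy as np

def plane_distance_histogram(points, normal, d0, bin_width=0.02):
    """Signed distances from 3D points to the plane n.p + d0 = 0, plus a histogram."""
    n = normal / np.linalg.norm(normal)
    dists = points @ n + d0
    counts, edges = np.histogram(
        dists, bins=np.arange(dists.min(), dists.max() + bin_width, bin_width))
    return dists, counts, edges

# Synthetic stand-in data: two parallel planes z = 0 and z = 0.5 (so D = 0.5 m)
rng = np.random.default_rng(0)
p1 = np.column_stack([rng.uniform(0, 1, 500), rng.uniform(0, 1, 500),
                      rng.normal(0, 0.005, 500)])
p2 = p1 + np.array([0.0, 0.0, 0.5])
points = np.vstack([p1, p2])

# Candidate plane z = 0 (as from the LSP fit); histogram peaks near d = 0 and d = D
dists, counts, edges = plane_distance_histogram(points, np.array([0.0, 0.0, 1.0]), 0.0)
in_candidate = np.abs(dists) < 0.05        # points classified into the candidate plane
in_parallel = np.abs(dists - 0.5) < 0.05   # points classified into the parallel plane
```

Note that this single-orientation classification is exactly what limits the method when planes of multiple orientations are present.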

The limitation is that it cannot be applied to a scenario where planes have multiple orientations (e.g., perpendicular planes). In addition, the set of data points in a sub-image with the minimum fitting error does not necessarily form a planar surface. Stamos and Allen's method [9] identifies planar structures in the 3D range data of a precision laser scanner by dividing the data into k × k patches and merging the planar patches based on plane-fitting statistics. A patch is classified as a locally planar patch if the plane-fitting error is below a threshold. Two locally planar patches are considered to be in the same planar surface if they have similar orientations and are close in 3D space. The plane-fitting based classification method is sensitive to the threshold value. In addition, it is not easy to determine an appropriate patch size, which is a trade-off between computational cost and the granularity of data segmentation.

In this work, we investigate the use of a SwissRanger SR-3000 [13, 14] for the navigation of a tracked robot in indoor environments. The tracked robot is depicted in Fig. 2. The robot is equipped with a flipper that may lift the robot body to allow the robot to traverse a step/stairway. To enable this capability, a 3D imaging sensor is required to perceive the environment, and a segmentation method is needed to extract planes from the sensor's range data for stairway detection. Considering the limited platform size of the tracked robot, an SR-3000 camera instead of a 3D LIDAR (LIght Detection and Ranging) is used. The 3D camera is installed on a pan-tilt unit that is fabricated by using two Dynamixel servos (RX-64). The pan-tilt unit is used to adjust the camera's yaw and pitch angles as needed during navigation and to scan the environment to obtain a large view of the scene.

Fig. 2. Tracked robot with SR-4000 3D camera for perception: (a) Front view. (b) Rear view.

The SR-3000 camera is advantageous over a 3D LIDAR for robot navigation. It has a much higher data throughput, 25344 (176 × 144) points per frame (up to 50 frames per second), and a much smaller dimension (50 × 48 × 65 mm³). The SR-3000 camera is also advantageous over a stereovision system in terms of range data completeness: the camera works well in a featureless environment. However, the SR-3000's range data has a larger measurement error (compared with a LIDAR [15]) due to random noise (e.g., thermal noise and photon shot noise) and environmental factors (e.g., surface reflectivity). Previous research [16, 17] has demonstrated that a proper calibration process may reduce the errors of the SR-3000's range data to a certain extent. However, it cannot eliminate the errors induced by random noise. In [18], the authors of this paper developed a Singular Value Decomposition (SVD) filter to deal with the noise in the Normal-Enhanced Range Image (NERI) of the SR-3000. The SVD filter demonstrates some success in smoothing the surface. However, there is still a certain amount of corruption in the NERI. In such a case, a pixel-by-pixel region growing method cannot perform segmentation very well, as it is susceptible to disturbance in local features. It is necessary to use a global criterion to segment the NERI. The criterion must take into account both the dissimilarity between the segments and the total similarity within the segments (i.e., among image pixels).

This paper presents a new range image segmentation method based on the Normalized Cuts (NC) method [19]. The NC method was originally proposed for segmentation of an intensity image. It uses the total dissimilarity between groups and the total similarity within the groups to partition an image. It may result in inappropriate grouping of pixels in case an object does not have distinctive dissimilarity from the background. This problem may be alleviated in segmenting a range image because additional metrics, such as the normal of the LSP to a planar segment's 3D data points and the residual of the fit, may be used to assess the correctness of segmentation.

The remainder of this paper is organized as follows: In the following section, we briefly describe the NC method. In Section III, we explain the proposed method for extracting planes from range data. In Section IV, we present our experimental results, followed by Section V, where we discuss a recursive method to extract planar surface(s) from a misclassified cluster. The paper is concluded in Section VI, where we discuss some directions for our future work.

II. IMAGE SEGMENTATION USING NORMALIZED CUTS

A. Image segmentation as a graph partitioning problem

Image segmentation can be modeled as a graph partitioning problem. An image is represented as a weighted undirected graph G = (V, E), where each pixel is considered as a node and an edge is formed between each pair of nodes. The weight of each edge is recorded in a Pixel Similarity Matrix (PSM), calculated as a function of the similarity between each pair of nodes. In partitioning an image into disjoint sets of pixels (segments), the goal is to maximize the similarity of nodes within a subset V_i and to minimize the similarity across different sets V_j. For the NC algorithm, the optimal bipartition of a graph into two sub-graphs A and B is the one that minimizes the Ncut value given by:

  Ncut(A, B) = cut(A, B)/assoc(A, V) + cut(A, B)/assoc(B, V),    (1)

where cut(A, B) = Σ_{u∈A, v∈B} w(u, v) is the dissimilarity between A and B, and w(i, j) is the weight calculated as a function of the similarity between nodes i and j. assoc(A, V) is the total connection from nodes in A to all nodes in V; assoc(B, V) is defined similarly. From (1), we can see that high similarity among nodes in A and low similarity across the different sets A and B can be maintained by the minimization process. Given a partition that separates the nodes of a graph V into two sets A and B, let x be an N = |V| dimensional indicator vector, where x_i = 1 if node i is in A and x_i = −1 otherwise.
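Eq. (1) can be sanity-checked on a toy graph. The sketch below (NumPy assumed; `ncut_value` and the 4-node weight matrix are illustrative, not part of the paper) shows that cutting a weak edge yields a much smaller Ncut value than isolating a single node:

```python
import numpy as np

def ncut_value(W, in_A):
    """Ncut(A, B) of Eq. (1); in_A[i] is True if node i belongs to A."""
    A, B = in_A, ~in_A
    cut = W[A][:, B].sum()      # cut(A, B): total weight crossing the partition
    assoc_A = W[A].sum()        # assoc(A, V): connections from A to all nodes
    assoc_B = W[B].sum()        # assoc(B, V)
    return cut / assoc_A + cut / assoc_B

# Toy 4-node graph: two tightly coupled pairs joined by one weak edge (0-2)
W = np.array([[0.0, 1.0, 0.1, 0.0],
              [1.0, 0.0, 0.0, 0.0],
              [0.1, 0.0, 0.0, 1.0],
              [0.0, 0.0, 1.0, 0.0]])

good = ncut_value(W, np.array([True, True, False, False]))   # cut the weak edge
bad = ncut_value(W, np.array([True, False, False, False]))   # isolate one node
```

The normalization by assoc(·, V) is what penalizes the second partition: the isolated node has a small association, so even a moderate cut produces a large Ncut value.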

Let d_i = Σ_j w(i, j) be the total connection from node i to all other nodes. With the above definitions, Ncut(A, B) in (1) can be calculated. According to [19], an approximate discrete solution that minimizes Ncut(A, B) can be obtained by solving the following equation:

  min_x Ncut(x) = min_y [y^T (D − W) y] / [y^T D y],    (2)

where D = diag(d_1, d_2, ..., d_n), W = [w_ij], and y = (1 + x) − b(1 − x) with b = (Σ_{x_i>0} d_i) / (Σ_{x_i<0} d_i). If y is relaxed to take real values, then (2) can be minimized by solving the following generalized eigenvalue system:

  (D − W) y = λ D y.    (3)

B. Grouping Algorithm

The grouping of pixels in an image I consists of the following steps:
a) Consider image I as an undirected graph G = (V, E) and construct a PSM. As stated before, each element of the PSM is the weight of edge w(i, j), calculated by
  w(i, j) = exp(−‖F(i) − F(j)‖₂² / σ_I²) · exp(−‖X(i) − X(j)‖₂² / σ_X²)
if ‖X(i) − X(j)‖₂ < r, and w(i, j) = 0 otherwise. Here, X(i) is the spatial location of node i and F(i) = I(i) is the brightness value of pixel i. It is noted that w(i, j) = 0 for any pair of nodes i and j whose distance is greater than r pixels. The reason for calculating w(i, j) in such a manner is that two pixels with similar brightness values that are spatially near each other are more likely to belong to the same object than two pixels with different brightness values that are distant from each other.
b) Solve (3) for the eigenvectors with the smallest eigenvalues.
c) Use the eigenvector with the second smallest eigenvalue to bipartition the image by finding the splitting point such that the Ncut value is minimized.
d) Recursively re-partition the segments (go to step a).
e) Exit if the Ncut value for every segment is over some specified threshold.

III. THE PROPOSED METHOD

In this work, we adopt the method in [20] for range image enhancement. In our previous work [18], we demonstrated that the use of surface normals of the SR-3000's range data makes the surfaces and edges of an object more distinct. Following the same approach, we first construct a tri-band color image where each pixel's RGB values represent the x and y components of its surface normal and its depth information, respectively. The tri-band image is then converted to a gray image, called a Normal-Enhanced Range Image (NERI). The proposed segmentation method is divided into three steps. First, the NC algorithm is applied to the NERI and partitions the NERI into a number of segments. Second, the LSP to each segment's data points is computed and the plane-fitting statistics are used to label each segment as planar or non-planar. Third, adjacent planar surfaces with the same orientation are merged.

To simplify the description, we only consider the extraction of vertical and horizontal planes in 3D space. As shown in Fig. 3, a vertical plane is defined as one whose normal direction is along the −Y axis (Fig. 3a) and a horizontal plane is one whose normal direction is along the Z axis (Fig. 3b).

Fig. 3. Diagram of vertical and horizontal planes

For the N segments resulting from the NC method, we need to identify those that best describe either vertical or horizontal planes. To do this, we perform an LSP fit to the data points associated with each of the N segments and calculate the normal direction and the fitting error. Let the normal to the LSP be denoted by N = (n_x, n_y, n_z) and let the residual of the fit, also known as the Plane Fitting Error (PFE), be computed by ε = (Σ_{k=1}^{P} d_k²) / P, where P denotes the number of pixels in the segment and d_k is the distance between the kth data point (x_k, y_k, z_k) and the LSP. The LSP is found by minimizing Σ_{k=1}^{P} d_k². The minimization can be obtained by the Singular Value Decomposition (SVD) method. First, the following matrix is constructed by using the data points of the segment:

      | x_1 − x_0   y_1 − y_0   z_1 − z_0 |
  M = | x_2 − x_0   y_2 − y_0   z_2 − z_0 |
      |    ...         ...         ...    |
      | x_P − x_0   y_P − y_0   z_P − z_0 |

where (x_0, y_0, z_0) is the centroid of the data points. Then the eigenvalues λ_1, λ_2 and λ_3 of M^T M and their corresponding eigenvectors are computed. It can be proved that N equals the eigenvector corresponding to the minimum eigenvalue λ_min = min(λ_1, λ_2, λ_3), and that the minimum PFE is ε_min = λ_min / P. The deviations of the normal direction from the −Y and Z axes are computed by θ_y = cos⁻¹(−n_y / ‖N‖) and θ_z = cos⁻¹(n_z / ‖N‖), respectively. The value of the PFE determines whether the segment forms a planar surface, while the values of θ_y and θ_z determine whether the plane is vertical or horizontal. To be specific, the data points of a segment whose PFE is sufficiently small form a planar surface; the planar segment is vertical (horizontal) if θ_y (θ_z) is sufficiently close to 0°.
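The LSP fit, PFE and angle test can be sketched as follows. This is a minimal NumPy illustration; `fit_lsp` and `label_plane` are hypothetical names, and the angle test is made sign-agnostic because the SVD normal is defined only up to sign:

```python
import numpy as np

def fit_lsp(points):
    """LSP fit by SVD: unit normal N and PFE = (sum of d_k^2) / P = lambda_min / P."""
    centroid = points.mean(axis=0)
    M = points - centroid                    # the matrix M built from the segment
    _, s, vt = np.linalg.svd(M, full_matrices=False)
    normal = vt[-1]                          # eigenvector of M^T M for lambda_min
    pfe = (s[-1] ** 2) / len(points)         # lambda_min / P
    return normal, pfe

def label_plane(normal, angle_tol_deg=10.0):
    """Label by deviation of the normal from the -Y and Z axes (sign-agnostic)."""
    n = normal / np.linalg.norm(normal)
    theta_y = np.degrees(np.arccos(np.clip(abs(n[1]), 0.0, 1.0)))
    theta_z = np.degrees(np.arccos(np.clip(abs(n[2]), 0.0, 1.0)))
    if theta_y < angle_tol_deg:
        return "vertical"
    if theta_z < angle_tol_deg:
        return "horizontal"
    return "other"

# Noisy horizontal patch: the normal should come out along Z with a small PFE
rng = np.random.default_rng(1)
pts = np.column_stack([rng.uniform(0, 1, 200), rng.uniform(0, 1, 200),
                       rng.normal(0, 0.003, 200)])
n, pfe = fit_lsp(pts)
```

The smallest singular value squared equals λ_min of M^T M, so the PFE falls out of the same decomposition that produces the normal.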

Extracting an entire plane from the scene involves merging two or more planar segments. In our work, merging is performed only if two segments are neighboring. Fig. 4 shows three typical configurations of two close-by segments; the common boundary between them is highlighted in green. Two segments are considered neighbors if there exist at least two consecutive common points (i.e., points that belong to both segments and are continuous in space) on their boundaries. Thus the segments in Fig. 4(a) do not qualify as neighbors since they have a single common point, whereas the two segments in Figs. 4(b) and 4(c) are considered neighbors.

Fig. 4. Definition of neighboring segments (for simplicity the segments are drawn as rectangles): (a) Two non-neighboring segments. (b)-(c) Two neighboring segments.

The proposed method for extracting vertical and horizontal planes from a range image is as follows:
1) Construct the NERI and apply the NC algorithm to partition the NERI into N segments c_i for i = 1, ..., N.
2) The segments c_i that satisfy ε_i < τ are selected to form a set of planar segments S = {s_1, s_2, s_3, ..., s_n}; i.e., a segment with a PFE smaller than τ is taken as a planar surface. Here n ≤ N and τ is a suitable threshold.
3) Each s_j ∈ S is then labeled as vertical or horizontal based on the normal direction of its LSP.
4) A pair of vertical or horizontal segments is merged if they are neighbors.
5) Terminate the process when all the neighbors are merged.

IV. EXPERIMENTS AND RESULTS

We have validated our planar surface extraction method through experiments in various indoor environments that contain the most commonly occurring structures. Our current method requires a pre-specified number of clusters N. In all our experiments, we use N = 100, which is bigger than the actual number of planar segments each NERI contains. The reason for choosing such a large N is to ensure a correct segmentation. This can be demonstrated by the example in Fig. 5. The test scene is shown in Fig. 5(a) and its NERI is depicted in Fig. 5(c). Apparently the NERI has 5 segments, which are hand-labeled and shown in Fig. 5(c). We now apply the NC algorithm to Fig. 5(c) using the actual number of segments (N = 5). The result is shown in Fig. 5(d). We can observe that cluster 5 is a misclassification, as it contains two regions (with different brightness), each of which represents a planar surface; the two surfaces are perpendicular to each other in 3D space. To avoid such a misclassification, we need to assign N a number that is much bigger than the actual segment number of a scene.

Fig. 5. Misclassification of the NC method using the exact segment number of a scene: (a) Actual scene. (b) Raw range image of (a). (c) NERI with labeled segments. (d) Segmentation result of the NC over the NERI.

In all our segmentation results hereafter, we label a vertical and a horizontal plane in blue and green, respectively. In the first experiment, we consider an indoor scene with a stairway, as depicted in Fig. 6(a). The raw range image of the SR-3000 camera is shown in Fig. 6(b) and the NERI representation of the range data in Fig. 6(c). The NC algorithm partitions the NERI into 100 segments, as shown in Fig. 6(d). Fig. 6(e) displays the selected planar segments. A token in blue indicates that the corresponding segment belongs to a vertical plane, and a token in green means that the segment lies on a horizontal plane. Finally, the extracted planes are shown in Fig. 6(f). We can see that the majority of the planar surfaces are correctly extracted. We can also observe in Fig. 6(f) that region P (marked in red) is under-extracted because its adjacent regions (circled in yellow) are misclassified. As we can see from Fig. 6(e), the pixels inside region A or B are classified as belonging to the same segment by the NC algorithm. However, each of A and B contains data points on different planar surfaces that are perpendicular to one another in 3D space. They are determined as non-planar segments due to their large PFEs. It should be noted that this misclassification does not have a big impact on recognizing the stairway and guiding the robot, because enough treads and risers are identified and the misclassification occurs at a location far away from the robot. We can also see that there are minor misclassifications at the right or left ends of some extracted planar segments. This suggests future research effort that goes beyond the scope of this paper. Fig. 6(g) renders the segmentation result in a 3D point cloud, where the unclassified points are represented in black.

The 2nd experiment shows plane extraction of a hallway scene. The result is depicted in Fig. 7. We can see that the floor, door and wall regions have been properly extracted. Fig. 8 displays the result of our experiment in the lobby of the ETAS building at the University of Arkansas at Little Rock. The result demonstrates satisfactory plane extraction by the proposed method.
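The neighbor test used in the merging step (two consecutive common boundary points, Section III) can be approximated on a segment-label image as follows. This is an illustrative sketch, not the authors' implementation: it approximates "common boundary points" by 4-adjacency between pixels of the two segments.

```python
import numpy as np

def are_neighbors(labels, a, b):
    """True if segments a and b share at least two consecutive common boundary
    points. A 'common point' is approximated as a pixel of segment a that is
    4-adjacent to a pixel of segment b; two common points are 'consecutive'
    if they are themselves 4-adjacent."""
    A = labels == a
    B = labels == b
    adj = np.zeros_like(B)        # pixels 4-adjacent to segment b
    adj[1:, :] |= B[:-1, :]
    adj[:-1, :] |= B[1:, :]
    adj[:, 1:] |= B[:, :-1]
    adj[:, :-1] |= B[:, 1:]
    contact = A & adj             # boundary pixels of a that touch b
    horiz = contact[:, :-1] & contact[:, 1:]   # two contacts side by side
    vert = contact[:-1, :] & contact[1:, :]    # two contacts stacked
    return bool(horiz.any() or vert.any())

# Segments 1 and 2 share a full edge; segments 1 and 3 touch only diagonally
labels = np.array([[1, 1, 2, 2],
                   [1, 1, 2, 2],
                   [0, 0, 3, 3]])
```

Under this approximation, the corner-contact case of Fig. 4(a) produces no 4-adjacent contact pixels and is correctly rejected.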

Fig. 6. Segmentation of a scene with a stairway: (a) Actual scene. (b) Raw range image. (c) NERI of (b). (d) Initial clustering result of the NC. (e) Labeling of vertical and horizontal planar segments. (f) Segmentation result after merging the homogeneous segments in (e). (g) Segmented data shown as a 3D point cloud.

Fig. 7. Segmentation of a hallway scene: (a) Actual scene. (b) Range image from SR-3000. (c) NERI of (b). (d) Initial clustering result of the NC. (e) Segmentation result after merging clusters. (f) Segmented data shown as a 3D point cloud.

Fig. 8. Segmentation results of a lobby scene: (a) Actual scene. (b) Range image from SR-3000. (c) NERI of (b). (d) Initial clustering result of the NC. (e) Segmentation result after merging clusters. (f) Segmented data shown as a 3D point cloud.

V. DISCUSSION

We have seen in the previous section that some segments (mainly in the boundary regions) fail to qualify as planar ones due to misclassification in the initial clustering phase. In this section, we put forward a recursive approach that may identify planar pixels within a non-planar segment and hence extract a plane in its entirety.

Consider Fig. 9(a), which is a magnified view of region A in Fig. 6(f). Our objective is to extract the part of A that belongs to the adjacent vertical plane P (as shown in Fig. 6(f)). To achieve this, we apply our proposed method recursively as follows:
a) The NC algorithm is applied to A with N = 2. This breaks A into two sub-segments.
b) For each sub-segment, compute the PFE and the normal of its LSP.
c) A sub-segment is then merged with P or discarded based on the criterion we set forth in Section IV.
d) If none of the sub-segments is merged with P, the NC is again applied to A with N = N + 1; i.e., the process is repeated with an incremented N.
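This retry loop can be sketched as follows, with `fake_nc` standing in for the NC clustering step and `is_horizontal_planar` standing in for the planarity/orientation criterion (both hypothetical stand-ins; NumPy assumed):

```python
import numpy as np

def is_horizontal_planar(pts, pfe_tol=1e-4, angle_tol_deg=10.0):
    """Hypothetical criterion: small PFE and normal close to the Z axis."""
    c = pts.mean(axis=0)
    _, s, vt = np.linalg.svd(pts - c, full_matrices=False)
    n = vt[-1]
    pfe = (s[-1] ** 2) / len(pts)
    theta_z = np.degrees(np.arccos(np.clip(abs(n[2]), 0.0, 1.0)))
    return pfe < pfe_tol and theta_z < angle_tol_deg

def fake_nc(pixels, n):
    """Stand-in for the NC clustering step: just splits indices into n chunks."""
    return np.array_split(pixels, n)

def refine_segment(points_by_pixel, cluster_fn, criterion, n_max=6):
    """Steps a)-d): split the misclassified segment into n sub-segments, test
    each against the planarity criterion, and retry with n + 1 until one can
    merge. Returns the pixel indices to merge into plane P, or None."""
    pixels = np.arange(len(points_by_pixel))
    for n in range(2, n_max + 1):                # step d: retry with N = N + 1
        for sub in cluster_fn(pixels, n):        # step a: NC with N sub-segments
            if criterion(points_by_pixel[sub]):  # steps b-c: PFE + normal test
                return sub
    return None

# Toy segment: 30 points on a horizontal plane followed by 60 on a vertical one,
# so n = 2 mixes the planes and only n = 3 isolates the mergeable part.
rng = np.random.default_rng(2)
good = np.column_stack([rng.uniform(0, 1, 30), rng.uniform(0, 1, 30),
                        rng.normal(0, 0.002, 30)])
bad = np.column_stack([np.zeros(60), rng.uniform(0, 1, 60), rng.uniform(0, 1, 60)])
merged = refine_segment(np.vstack([good, bad]), fake_nc, is_horizontal_planar)
```

Note that a pure but wrongly oriented sub-segment passes the PFE test yet fails the angle test, so it is discarded rather than merged, which mirrors step c.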

In this example, we are unable to make a merger with N = 2. But with N = 3, we obtain three sub-segments (C, D and E in Fig. 9(a)). Through steps b and c, a larger segment (Fig. 9(b)) is formed with C joining P (see Fig. 6(f)). From this, we can see that a finer segmentation can be achieved by further splitting the non-planar segments that are adjacent to the extracted plane segments.

Fig. 9. Refinement of the segmentation result: (a) Enlarged view of a misclassified cluster. (b) Segmentation result after extracting the planar region from (a).

The proposed plane segmentation method is currently implemented in Matlab by using the Normalized Cuts library [21]. As a consequence, it is not real-time. We expect that a C/C++ implementation of the method will significantly reduce the runtime.

VI. CONCLUSION AND FUTURE WORK

We have presented a method that may reliably extract the vertical and horizontal planes from a range image captured by the SwissRanger SR-3000 camera. We use a split-and-merge approach to achieve this. In the proposed method, surface normal information is used to convert the range image into an NERI for range image enhancement. To deal with noise-induced surface normal errors, we apply the NC algorithm over the NERI to obtain homogeneous segments. These are then merged to form larger segments (horizontal and vertical planes) based on the LSP fitting statistics. Our method works efficiently without the need for a priori knowledge of the range image. We have also demonstrated that under-extraction due to a misclassified non-planar segment can be resolved by using the proposed method to split the segment and merge the resulting planar sub-segment(s) with the neighboring planar segment(s). We have validated the method's efficacy by real experiments in various indoor environments. Although we have only described the segmentation of vertical and horizontal planes, the proposed method works for segmentation of planes with arbitrary orientations.

It is noted that we use a gray image for the NERI in order to use the Normalized Cuts library. The mapping of a tri-band image pixel to an NERI pixel is multiple-to-one. If two non-parallel planes happen to have such orientations that their pixels have the same brightness in their NERI representations, the distinctness of the planes in the NERI will be small. This may potentially add difficulty to the N-Cuts method. Fortunately, in this case the intersection of the planes (a straight line segment) has a normal direction different from that of both planes. This will likely help the N-Cuts method locate the correct planar segments. In case the N-Cuts method fails, we will have to use the tri-band color image to represent an NERI and develop a new method to segment the color NERI. We will carry out more experiments to test this idea in our future work.

REFERENCES

[1] N. Pezeshkian, et al., "Automatic payload deployment system," in Proc. SPIE 7692: Unmanned Systems Technology XII, 2010.
[2] B. M. Yamauchi, "PackBot: a versatile platform for military robotics," in Proc. SPIE 5422: Unmanned Ground Vehicle Technology VI, 2004.
[3] P. Rudakevych, et al., "Integration of the Fido explosives detector onto the PackBot EOD UGV," in Proc. SPIE 6561: Unmanned Systems Technology IX, 2007.
[4] R. R. Murphy, "Human-robot interaction in rescue robotics," IEEE Transactions on Systems, Man and Cybernetics-Part C: Applications and Reviews, vol. 34, pp. 138-153, 2004.
[5] A. I. Mourikis, et al., "Autonomous stair climbing for tracked vehicles," International Journal of Robotics Research, vol. 26, no. 7, pp. 737-758, 2007.
[6] R. Unnikrishnan and M. Hebert, "Robust extraction of multiple structures from non-uniformly sampled data," in Proc. IEEE/RSJ International Conference on Intelligent Robots and Systems, 2003, pp. 1322-1329.
[7] R. Triebel, W. Burgard and F. Dellaert, "Using hierarchical EM to extract planes from 3d range scans," in Proc. IEEE International Conference on Robotics and Automation, 2005, pp. 4437-4442.
[8] D. Venable and M. Uijt de Haag, "Near real-time extraction of planar features from 3d flash-ladar video frames," in Proc. SPIE 6977: Optical Pattern Recognition XIX, 69770N, 2008.
[9] I. Stamos and P. E. Allen, "3-D model construction using range and image data," in Proc. IEEE International Conference on Computer Vision and Pattern Recognition, 2000, pp. 531-536.
[10] A. Bab-Hadiashar and N. Gheissari, "Range image segmentation using surface selection criterion," IEEE Transactions on Image Processing, vol. 15, no. 7, pp. 2006-2018, 2006.
[11] A. D. Sappa, "Automatic extraction of planar projections from panoramic range images," in Proc. International Symposium on 3D Data Processing, Visualization and Transmission, 2004, pp. 231-234.
[12] G. M. Hegde and C. Ye, "Extraction of planar features from SwissRanger SR-3000 range images by a clustering method using normalized cuts," in Proc. IEEE/RSJ International Conference on Intelligent Robots and Systems, 2009, pp. 4034-4039.
[13] T. Oggier, et al., "An all-solid-state optical range camera for 3D real-time imaging with sub-centimeter depth resolution," in Proc. SPIE 5249: Optical Design and Engineering, 2003, pp. 534-545.
[14] T. Oggier, B. Büttgen and F. Lustenberger, "SwissRanger SR3000 and first experiences based on miniaturized 3D-TOF cameras," Swiss Center for Electronics and Microtechnology Technical Report, 2005.
[15] C. Ye and J. Borenstein, "Characterization of a 2-D laser scanner for mobile robot obstacle negotiation," in Proc. IEEE International Conference on Robotics and Automation, 2002, pp. 2512-2518.
[16] S. A. Guomundsson, H. Aanaes and R. Larsen, "Environmental effects on measurement uncertainties of time-of-flight cameras," in Proc. International Symposium on Signals, Circuits and Systems, 2007, pp. 1-4.
[17] Y. M. Kim, et al., "Design and calibration of a multi-view TOF sensor fusion system," in Proc. IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, 2008, pp. 1-7.
[18] G. M. Hegde and C. Ye, "SwissRanger SR-3000 range images enhancement by a singular value decomposition filter," in Proc. IEEE International Conference on Information and Automation, 2008, pp. 1626-1631.
[19] J. Shi and J. Malik, "Normalized cuts and image segmentation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 8, pp. 888-905, 2000.
[20] K. Pulli, "Vision methods for an autonomous machine based on range imaging," Master's Thesis, University of Oulu.
[21] Normalized Cuts software, http://www.cis.upenn.edu/~jshi/software/.

Cang Ye received the B.E. and M.E. degrees from the University of Science and Technology of China, Hefei, Anhui, in 1988 and 1991, respectively, and the Ph.D. degree from the University of Hong Kong, Hong Kong, in 1999. He was a Research Fellow with the School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore, from
