Title:
DECODING METHOD, ENCODING METHOD, DECODER AND ENCODER
Document Type and Number:
WIPO Patent Application WO/2024/086123
Kind Code:
A1
Abstract:
An encoding method, an encoder, a decoding method and a decoder are provided. During attribute coding, obtaining original three-dimensional coordinates; converting the original three-dimensional coordinates to converted three-dimensional coordinates; adjusting the converted three-dimensional coordinates to non-negative three-dimensional coordinates; and converting the non-negative three-dimensional coordinates to one-dimensional array.

Inventors:
YU YUE (US)
YU HAOPING (US)
Application Number:
PCT/US2023/035252
Publication Date:
April 25, 2024
Filing Date:
October 16, 2023
Assignee:
INNOPEAK TECH INC (US)
International Classes:
H04N19/13; G06T15/10; G06T17/00; H04N19/54; H04N19/61; G06T9/00
Attorney, Agent or Firm:
LEE, Belinda (US)
Claims:
WHAT IS CLAIMED IS:

1. An encoding method, comprising: during attribute coding, obtaining original three-dimensional coordinates; converting the original three-dimensional coordinates to converted three-dimensional coordinates; adjusting the converted three-dimensional coordinates to non-negative three-dimensional coordinates; and converting the non-negative three-dimensional coordinates to a one-dimensional array.

2. The encoding method according to claim 1, wherein the step of adjusting the converted three-dimensional coordinates to the non-negative three-dimensional coordinates comprises: determining whether each value of the converted three-dimensional coordinates is a negative value; and in response to at least one value of the converted three-dimensional coordinates being a negative value, clipping the at least one value of the converted three-dimensional coordinates to zero to obtain the non-negative three-dimensional coordinates.

3. The encoding method according to claim 1, wherein the step of adjusting the converted three-dimensional coordinates to the non-negative three-dimensional coordinates is a scaling operation.

4. The encoding method according to claim 3, wherein the converted three-dimensional coordinates comprise a first intermediate value, a second intermediate value and a third intermediate value, and wherein the scaling operation comprises: multiplying the first intermediate value by a scaling value and adding a first offset value to obtain a first value of the non-negative three-dimensional coordinates; multiplying the second intermediate value by the scaling value and adding a second offset value to obtain a second value of the non-negative three-dimensional coordinates; and multiplying the third intermediate value by the scaling value and adding a third offset value to obtain a third value of the non-negative three-dimensional coordinates.

5. The encoding method according to claim 3, wherein the converted three-dimensional coordinates are spherical coordinates, and the scaling operation is specified by the following formula: scalePoint_{k,i} = ((point_{k,i} − min_k) ∗ S_k + 128) ≫ 8, where k is an axis index (k in {0, 1, 2}), i is a point index in decoding order, point_{k,i} represents the spherical coordinates, scalePoint_{k,i} represents the scaled spherical coordinates, S_k is an integer representing a fixed-point scaling factor with 8-bit decimal precision, and min_k is a scaling offset value.

6. The encoding method according to claim 5, wherein, in response to the spherical coordinates being used for attribute coding and a scaling offset flag in attribute parameters being true, the scaling offset value is inherited from a corresponding syntax value in the attribute parameters.

7. The encoding method according to claim 1, wherein the one-dimensional array is a Morton array.

8. The encoding method according to claim 1, wherein the one-dimensional array is a Hilbert array.
9. An encoder, comprising: a communication interface, configured to receive data of a point cloud; a storage device, configured to store a bitstream; and a processor, electrically connected to the communication interface and the storage device, and configured to encode the data of the point cloud to generate the bitstream, wherein during attribute coding, the processor is configured to obtain original three-dimensional coordinates, the processor is configured to convert the original three-dimensional coordinates to converted three-dimensional coordinates, the processor is configured to adjust the converted three-dimensional coordinates to non-negative three-dimensional coordinates, and the processor is configured to convert the non-negative three-dimensional coordinates to a one-dimensional array.

10. The encoder according to claim 9, wherein the processor is configured to determine whether each value of the converted three-dimensional coordinates is a negative value, and in response to at least one value of the converted three-dimensional coordinates being a negative value, the processor is configured to clip the at least one value of the converted three-dimensional coordinates to zero to obtain the non-negative three-dimensional coordinates.

11. The encoder according to claim 9, wherein the processor is configured to execute a scaling operation to adjust the converted three-dimensional coordinates to the non-negative three-dimensional coordinates.

12. The encoder according to claim 11, wherein the converted three-dimensional coordinates comprise a first intermediate value, a second intermediate value and a third intermediate value, wherein the processor is configured to multiply the first intermediate value by a scaling value and add a first offset value to obtain a first value of the non-negative three-dimensional coordinates, wherein the processor is configured to multiply the second intermediate value by the scaling value and add a second offset value to obtain a second value of the non-negative three-dimensional coordinates, and wherein the processor is configured to multiply the third intermediate value by the scaling value and add a third offset value to obtain a third value of the non-negative three-dimensional coordinates.

13. The encoder according to claim 11, wherein the converted three-dimensional coordinates are spherical coordinates, and the scaling operation is specified by the following formula: scalePoint_{k,i} = ((point_{k,i} − min_k) ∗ S_k + 128) ≫ 8, where k is an axis index (k in {0, 1, 2}), i is a point index in decoding order, point_{k,i} represents the spherical coordinates, scalePoint_{k,i} represents the scaled spherical coordinates, S_k is an integer representing a fixed-point scaling factor with 8-bit decimal precision, and min_k is a scaling offset value.

14. The encoder according to claim 13, wherein, in response to the spherical coordinates being used for attribute coding and a scaling offset flag in attribute parameters being true, the scaling offset value is inherited from a corresponding syntax value in the attribute parameters.

15. The encoder according to claim 9, wherein the one-dimensional array is a Morton array.

16. The encoder according to claim 9, wherein the one-dimensional array is a Hilbert array.
17. A decoding method, comprising: obtaining original three-dimensional coordinates; converting the original three-dimensional coordinates to converted three-dimensional coordinates; adjusting the converted three-dimensional coordinates to non-negative three-dimensional coordinates; and converting the non-negative three-dimensional coordinates to a one-dimensional array.

18. The decoding method according to claim 17, wherein the step of adjusting the converted three-dimensional coordinates to the non-negative three-dimensional coordinates comprises: determining whether each value of the converted three-dimensional coordinates is a negative value; and in response to at least one value of the converted three-dimensional coordinates being a negative value, clipping the at least one value of the converted three-dimensional coordinates to zero to obtain the non-negative three-dimensional coordinates.

19. The decoding method according to claim 17, wherein the step of adjusting the converted three-dimensional coordinates to the non-negative three-dimensional coordinates is a scaling operation.

20. The decoding method according to claim 19, wherein the converted three-dimensional coordinates comprise a first intermediate value, a second intermediate value and a third intermediate value, and wherein the scaling operation comprises: multiplying the first intermediate value by a scaling value and adding a first offset value to obtain a first value of the non-negative three-dimensional coordinates; multiplying the second intermediate value by the scaling value and adding a second offset value to obtain a second value of the non-negative three-dimensional coordinates; and multiplying the third intermediate value by the scaling value and adding a third offset value to obtain a third value of the non-negative three-dimensional coordinates.

21. The decoding method according to claim 20, wherein the converted three-dimensional coordinates are spherical coordinates, and the scaling operation is specified by the following formula: scalePoint_{k,i} = ((point_{k,i} − min_k) ∗ S_k + 128) ≫ 8, where k is an axis index (k in {0, 1, 2}), i is a point index in decoding order, point_{k,i} represents the spherical coordinates, scalePoint_{k,i} represents the scaled spherical coordinates, S_k is an integer representing a fixed-point scaling factor with 8-bit decimal precision, and min_k is a scaling offset value.

22. The decoding method according to claim 21, wherein, in response to the spherical coordinates being used for attribute coding and a scaling offset flag in attribute parameters being true, the scaling offset value is inherited from a corresponding syntax value in the attribute parameters.

23. The decoding method according to claim 17, wherein the one-dimensional array is a Morton array.

24. The decoding method according to claim 17, wherein the one-dimensional array is a Hilbert array.
25. A decoder, comprising: a communication interface, configured to receive a bitstream; a storage device, configured to store the bitstream; and a processor, electrically connected to the communication interface and the storage device, and configured to decode the bitstream to obtain data of a point cloud, wherein during attribute coding, the processor is configured to obtain original three-dimensional coordinates, the processor is configured to convert the original three-dimensional coordinates to converted three-dimensional coordinates, the processor is configured to adjust the converted three-dimensional coordinates to non-negative three-dimensional coordinates, and the processor is configured to convert the non-negative three-dimensional coordinates to a one-dimensional array.

26. The decoder according to claim 25, wherein the processor is configured to determine whether each value of the converted three-dimensional coordinates is a negative value, and in response to at least one value of the converted three-dimensional coordinates being a negative value, the processor is configured to clip the at least one value of the converted three-dimensional coordinates to zero to obtain the non-negative three-dimensional coordinates.

27. The decoder according to claim 25, wherein the processor is configured to execute a scaling operation to adjust the converted three-dimensional coordinates to the non-negative three-dimensional coordinates.

28. The decoder according to claim 27, wherein the converted three-dimensional coordinates comprise a first intermediate value, a second intermediate value and a third intermediate value, wherein the processor is configured to multiply the first intermediate value by a scaling value and add a first offset value to obtain a first value of the non-negative three-dimensional coordinates, wherein the processor is configured to multiply the second intermediate value by the scaling value and add a second offset value to obtain a second value of the non-negative three-dimensional coordinates, and wherein the processor is configured to multiply the third intermediate value by the scaling value and add a third offset value to obtain a third value of the non-negative three-dimensional coordinates.

29. The decoder according to claim 27, wherein the converted three-dimensional coordinates are spherical coordinates, and the scaling operation is specified by the following formula: scalePoint_{k,i} = ((point_{k,i} − min_k) ∗ S_k + 128) ≫ 8, where k is an axis index (k in {0, 1, 2}), i is a point index in decoding order, point_{k,i} represents the spherical coordinates, scalePoint_{k,i} represents the scaled spherical coordinates, S_k is an integer representing a fixed-point scaling factor with 8-bit decimal precision, and min_k is a scaling offset value.

30. The decoder according to claim 29, wherein, in response to the spherical coordinates being used for attribute coding and a scaling offset flag in attribute parameters being true, the scaling offset value is inherited from a corresponding syntax value in the attribute parameters.

31. The decoder according to claim 25, wherein the one-dimensional array is a Morton array.

32. The decoder according to claim 25, wherein the one-dimensional array is a Hilbert array.
Description:
DECODING METHOD, ENCODING METHOD, DECODER AND ENCODER

CROSS-REFERENCE TO RELATED APPLICATION

This application claims the priority benefit of U.S. provisional application serial no. 63/379,872, filed on October 17, 2022. The entirety of the above-mentioned patent application is hereby incorporated by reference herein and made a part of this specification.

BACKGROUND

Technical Field

[0001] This invention proposes several improvements to coordinate conversion for Geometry Point Cloud Coding (G-PCC). The proposed encoder, decoder, encoding method and decoding method may be used in future G-PCC coding standards, in particular MPEG G-PCC and AVS G-PCC. With the implementation of the proposed method, modifications to the bitstream structure, syntax, constraints, and mapping for the generation of the decoded point cloud are considered for standardization.

Description of Related Art

[0002] In traditional point cloud coding, a three-dimensional coordinate (x, y, z) in the original point cloud may be represented using Cartesian coordinates. However, for some G-PCC applications, different coordinate representations may be more efficient for coding purposes. For example, planar and azimuthal geometry coding modes for geometry coding may be used for the light detection and ranging (LiDAR) acquisition system. The LiDAR acquisition system may capture the point cloud by sampling the reflectance of lights emitted by the vertically arranged lasers in the rotating header. The captured point cloud may be in the shape of a cylinder, and the sampling distance may not be uniform, which may cause inefficiency in attribute coding. In this regard, the existing G-PCC cannot guarantee that all values of the coordinates used for conversion to the one-dimensional array are non-negative, and therefore cannot work well for a wide range of G-PCC input data with arbitrary three-dimensional representations.

SUMMARY

[Technical Problem]

[0003] A novel image processing method for efficiently encoding point cloud data and decoding the bitstream is desirable.

[0004]

[Solution to Problem]

[0005] The encoding method of the invention includes the following steps: during attribute coding, obtaining original three-dimensional coordinates; converting the original three-dimensional coordinates to converted three-dimensional coordinates; adjusting the converted three-dimensional coordinates to non-negative three-dimensional coordinates; and converting the non-negative three-dimensional coordinates to a one-dimensional array.

[0006] In an embodiment of the invention, the step of adjusting the converted three-dimensional coordinates to the non-negative three-dimensional coordinates includes: determining whether each value of the converted three-dimensional coordinates is a negative value; and in response to at least one value of the converted three-dimensional coordinates being a negative value, clipping the at least one value of the converted three-dimensional coordinates to zero to obtain the non-negative three-dimensional coordinates.

[0007] In an embodiment of the invention, the step of adjusting the converted three-dimensional coordinates to the non-negative three-dimensional coordinates is a scaling operation.
[0008] In an embodiment of the invention, the converted three-dimensional coordinates comprise a first intermediate value, a second intermediate value and a third intermediate value, and the scaling operation includes: multiplying the first intermediate value by a scaling value and adding a first offset value to obtain a first value of the non-negative three-dimensional coordinates; multiplying the second intermediate value by the scaling value and adding a second offset value to obtain a second value of the non-negative three-dimensional coordinates; and multiplying the third intermediate value by the scaling value and adding a third offset value to obtain a third value of the non-negative three-dimensional coordinates.

[0009] In an embodiment of the invention, the converted three-dimensional coordinates are spherical coordinates, and the scaling operation is specified by the following formula:

scalePoint_{k,i} = ((point_{k,i} − min_k) ∗ S_k + 128) ≫ 8

[0010] where k is an axis index (k in {0, 1, 2}), i is a point index in decoding order, point_{k,i} represents the spherical coordinates, scalePoint_{k,i} represents the scaled spherical coordinates, S_k is an integer representing a fixed-point scaling factor with 8-bit decimal precision, and min_k is a scaling offset value.

[0011] In an embodiment of the invention, in response to the spherical coordinates being used for attribute coding and a scaling offset flag in attribute parameters being true, the scaling offset value is inherited from a corresponding syntax value in the attribute parameters.

[0012] In an embodiment of the invention, the one-dimensional array is a Morton array.

[0013] In an embodiment of the invention, the one-dimensional array is a Hilbert array.

[0014] The encoder of the invention includes a communication interface, a storage device and a processor. The communication interface is configured to receive data of a point cloud. The storage device is configured to store a bitstream. The processor is electrically connected to the communication interface and the storage device, and configured to encode the data of the point cloud to generate the bitstream. During attribute coding, the processor is configured to obtain original three-dimensional coordinates, and the processor is configured to convert the original three-dimensional coordinates to converted three-dimensional coordinates. The processor is configured to adjust the converted three-dimensional coordinates to non-negative three-dimensional coordinates, and the processor is configured to convert the non-negative three-dimensional coordinates to a one-dimensional array.

[0015] The decoding method of the invention includes the following steps: obtaining original three-dimensional coordinates; converting the original three-dimensional coordinates to converted three-dimensional coordinates; adjusting the converted three-dimensional coordinates to non-negative three-dimensional coordinates; and converting the non-negative three-dimensional coordinates to a one-dimensional array.

[0016] The decoder of the invention includes a communication interface, a storage device and a processor. The communication interface is configured to receive a bitstream. The storage device is configured to store the bitstream. The processor is electrically connected to the communication interface and the storage device, and configured to decode the bitstream to obtain data of a point cloud.
During attribute coding, the processor is configured to obtain original three-dimensional coordinates, and the processor is configured to convert the original three-dimensional coordinates to converted three-dimensional coordinates. The processor is configured to adjust the converted three-dimensional coordinates to non-negative three-dimensional coordinates, and the processor is configured to convert the non-negative three-dimensional coordinates to a one-dimensional array.

[0017]

[Effects of Invention]

[0018] Based on the above, the decoding method, the encoding method, the decoder and the encoder of the invention may achieve good coding performance through the special coordinate conversion.

[0019] To make the aforementioned more comprehensible, several embodiments accompanied with drawings are described in detail as follows.

BRIEF DESCRIPTION OF THE DRAWINGS

[0020] FIG. 1 is a flow diagram of G-PCC encoding according to an embodiment of the invention.

[0021] FIG. 2A and FIG. 2B illustrate octree structures of G-PCC according to an embodiment of the invention.

[0022] FIG. 2C illustrates a corresponding digital representation of the octree structure according to an embodiment of the invention.

[0023] FIG. 3 illustrates the structure of the cubes according to an embodiment of the invention.

[0024] FIG. 4 is a flow diagram of G-PCC decoding according to an embodiment of the invention.

[0025] FIG. 5 is a schematic diagram of a hardware structure of an encoder according to an embodiment of the invention.

[0026] FIG. 6 is a schematic diagram of an example of a point cloud sampling method according to an embodiment of the invention.

[0027] FIG. 7 is a schematic diagram of another example of a point cloud sampling method according to an embodiment of the invention.

[0028] FIG. 8 is a flowchart of a coding method according to an embodiment of the invention.

[0029] FIG. 9 is a schematic diagram of a hardware structure of a decoder according to an embodiment of the invention.

DESCRIPTION OF THE EMBODIMENTS

[0030] In order to have a more detailed understanding of the characteristics and technical content of the embodiments of the present application, the implementation of the embodiments of the present application will be described in detail below with reference to the accompanying drawings. The attached drawings are for reference and explanation purposes only, and are not used to limit the embodiments of the present application.

[0031] FIG. 1 is a flow diagram of G-PCC encoding according to an embodiment of the invention. Referring to FIG. 1, the flow shown in FIG. 1 is applied to a point cloud encoder. The point cloud data to be encoded is divided into multiple slices through slice division. In each slice, the positions (i.e. geometric information) of the point cloud and the attributes corresponding to each point are coded separately.

[0032] In the geometric encoding process, the positions are coordinate-converted so that the point cloud is contained in a bounding box in step S102, and then quantized in step S104. The quantization mainly plays the role of scaling. Due to the quantization rounding, some positions of the point cloud become the same, so it is further determined whether to remove the duplicate points based on the parameters in step S104. The process of quantizing and removing the duplicate points is also called the voxelization process.
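As an illustration of the voxelization described above, the following minimal Python sketch (illustrative only; the uniform scale factor and the deduplication policy are assumptions, not the normative process) quantizes point positions to integer voxel coordinates and removes duplicate points:

# Minimal voxelization sketch (illustrative only): quantize positions with a
# uniform scale, round to integer voxel coordinates, and drop duplicate points.
def voxelize(points, scale=1.0):
    seen = set()
    voxels = []
    for (x, y, z) in points:
        v = (int(round(x * scale)), int(round(y * scale)), int(round(z * scale)))
        if v not in seen:  # duplicate points produced by rounding are removed
            seen.add(v)
            voxels.append(v)
    return voxels

# Example: two nearby points collapse into the same voxel at this scale.
print(voxelize([(0.24, 1.10, 2.00), (0.20, 1.12, 2.01)], scale=2.0))  # [(0, 2, 4)]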
[0033] Then, the bounding box is divided into an octree for octree analysis in step S106. In the octree-based geometric information encoding process, the bounding box is divided into eight sub-cubes, and the non-empty sub-cubes (those containing points of the point cloud) continue to be divided into eight equal parts until leaf nodes are obtained. When the leaf node is a 1×1×1 unit cube, the division is stopped, and the points in the leaf nodes are arithmetic coded in step S108 to generate a binary geometric bitstream, that is, a geometric code stream.

[0034] FIG. 2A and FIG. 2B illustrate octree structures of G-PCC according to an embodiment of the invention, and FIG. 2C illustrates a corresponding digital representation of the octree structure according to an embodiment of the invention. Referring to FIG. 2A, a cubical axis-aligned bounding box B is defined by the two extreme points (0, 0, 0) and (2^d, 2^d, 2^d), where d is the maximum size of a given point cloud along the x, y or z direction. A point of the point cloud will be noted as "point" below. All points are included in this defined cube B.

[0035] Referring to FIG. 2B, the cube B is divided into eight sub-cubes B1 to B8, which creates an octree structure allowing one parent cube B to have 8 child cubes B1 to B8. The 7 sibling cubes B2 to B8 of a given cube B1 are cubes of the same size and share at least one face/edge/point with the given cube. The volume of each of the cubes B1 to B8 is 1/8 of the volume of its parent cube B. Each of the cubes B1 to B8 may contain more than one point, and the number of points in a cube depends on the size and location of the cube. The size of the smallest cube is pre-defined for a given point cloud. For a given point, the parent cube of the given point is defined as the minimum size cube which contains this given point. Sibling points of a given point are defined as those points which have the same parent cube as this given point.

[0036] Referring to FIG. 2C, an octree is a recursive data structure that is often used to describe three-dimensional space, in which each internal cube has exactly eight children. The space is recursively subdivided into eight octants to the point where the resolution of the child cube is equal to the size of the point – the smallest element that has no further subdivisions. To represent a cube, an 8-bit binary code that follows a space-filling curve pattern (Hilbert, Morton) is used: each child is assigned a "1" or "0" value to indicate whether the space in the child cube has any points associated with it or the child cube is empty. Only the occupied child cubes are further subdivided. The process of parent cube subdivision is terminated when the size of the child cube becomes equal to the size of the indivisible element, i.e., the spatial resolution of the point cloud, or simply the size of the point.

[0037] FIG. 3 illustrates the structure of the cubes according to an embodiment of the invention. Referring to FIG. 3, depending on the location of the current cube, one cube may have up to six same-size cubes that share one face with it. In addition, the current cube may also have some neighbouring cubes which share edges or points with the current cube.

[0038] Similarly, the parent cube of the current cube also has up to six neighbouring cubes of the same size as the parent cube that share one face with the parent cube. The parent cube of the current cube also has up to twelve neighbouring cubes of the same size as the parent cube that share an edge with the parent cube.
The parent cube of the current cube also has up to eight neighbouring cubes of the same size as the parent cube that share a point with the parent cube.

[0039] Referring back to FIG. 1, based on the surface formed by the distribution of the point cloud in each block, the intersections between the surface and the twelve edges of the block are analyzed in step S110. At most twelve vertices (intersection points) are generated on the edges, and arithmetic coding is performed on the vertices (surface fitting based on the intersection points) in step S108 to generate the binary geometric bitstream, that is, a geometric code stream. The vertices are also used in the geometric reconstruction process in step S112, and the reconstructed geometric information is used when encoding the attributes of the point cloud.

[0040] In the attribute coding process, after the geometric coding is completed and the geometric information is reconstructed in step S112, color transformation is performed in step S114, in which the color information (that is, the attribute information) is transformed from the RGB color space to the YUV color space. Then, the reconstructed geometric information is used to recolor the point cloud, so that the uncoded attribute information corresponds to the reconstructed geometric information in step S116. Attribute coding is mainly for color information.

[0041] In the process of color information coding, there are mainly two transformation methods. One is the distance-based lifting transformation that relies on Level of Detail (LOD) division in step S118 and lifting in step S120, and the other is the Region Adaptive Hierarchical Transform (RAHT) in step S122. These two methods transform the color information from the spatial domain to the frequency domain so as to obtain high-frequency coefficients and low-frequency coefficients, and finally the coefficients are quantized (i.e., quantized coefficients) in step S124.

[0042] Finally, the geometry-coded data obtained after octree partitioning and surface fitting and the attribute-coded data obtained from the quantized coefficients are slice-synthesized, and then the vertex coordinates of each block are sequentially coded (that is, arithmetic coding) in step S126 to generate a binary attribute bitstream, that is, the attribute code stream.
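To make the 8-bit occupancy representation of FIG. 2C concrete, the following minimal sketch (illustrative only; the child-ordering convention, i.e. which bit corresponds to which octant, is an assumption) builds the occupancy code of one cube from the points it contains:

# Illustrative sketch of the 8-bit child-occupancy code described for FIG. 2C.
# The child ordering (bit k <-> octant k) is an assumed convention for this example.
def occupancy_code(points, origin, size):
    half = size // 2
    code = 0
    for (x, y, z) in points:
        # Octant index from which half of the cube the point falls into along each axis.
        k = (((x - origin[0]) >= half) << 2) | (((y - origin[1]) >= half) << 1) | ((z - origin[2]) >= half)
        code |= 1 << k       # mark the child cube containing this point as occupied
    return code

# A cube of size 8 at the origin with points in two different octants -> two bits set.
print(bin(occupancy_code([(1, 1, 1), (5, 6, 7)], (0, 0, 0), 8)))  # 0b10000001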
[0043] FIG. 4 is a flow diagram of G-PCC decoding according to an embodiment of the invention. The flow in FIG. 4 is applied to the point cloud decoder. For the obtained binary code stream, the geometric bitstream and the attribute bitstream in the binary code stream are first decoded in steps S402 and S404, respectively. When decoding the geometric bitstream, through arithmetic decoding in step S402, octree synthesis in step S406, surface fitting in step S408, geometry reconstruction in step S410, and inverse coordinate transformation in step S412, the positions (i.e. geometric information) of the point cloud are obtained.

[0044] When decoding the attribute bitstream, through arithmetic decoding in step S404, inverse quantization in step S414, LOD-based lifting inverse transformation in steps S416 and S418, or RAHT-based inverse transformation in step S420, and inverse color conversion in step S422, the attributes of the point cloud are obtained, and the three-dimensional image model of the point cloud data to be encoded is restored based on the positions and the attributes.

[0045] The octree-based geometry information may be coded with context-based arithmetic coding. There may also be some corresponding attribute information for point clouds, including colour, reflectance, etc., that needs to be compressed. Because the neighbouring points in a point cloud may have a strong correlation, prediction-based coding methods have been developed and used to compress and code point cloud attributes. More specifically, a prediction is formed from neighbouring coded attributes. Then, the difference between the current attribute and the prediction is coded.

[0046] In the invention, coding is assumed to mean encoding and decoding methods and systems.

[0047] [Attribute Coding in the G-PCC]

[0048] Based on a G-PCC standard (e.g. AVS G-PCC or MPEG G-PCC), after the geometry information is coded, a Morton or Hilbert code/order may be used to convert a point cloud cube into a one-dimensional array. Each position in the cube will have a corresponding Morton or Hilbert code, but some positions may not have any corresponding point cloud attribute. In other words, some positions may be empty. The attribute coding will follow the pre-defined Morton or Hilbert order. A predictor may be generated from the previously coded points in Morton or Hilbert order. The attribute difference between the current point and its predictor is coded into the bitstream.

[0049] To reduce the memory usage, a pre-defined number has been specified to limit the number of neighboring points that can be used in generating the prediction. For example, only M data points among the previous N consecutively coded points may be used for coding the current attribute. In the previous G-PCC software, M and N are set to fixed values of 3 and 128, respectively.

[0050] If more than 128 points have already been coded before the current point, only 3 out of the 128 previously coded neighbouring points could be used to form the attribute predictor according to a pre-defined order. If there are fewer than 128 coded points before the current point, all such coded points will be used as candidates to establish the attribute predictor. More specifically, the previous K points, e.g., K = 6, before the current point are selected according to a pre-defined Morton or Hilbert order. Then, new Morton or Hilbert codes for these N points will be re-calculated by adding a fixed shift, e.g., 1, to the coordinates (x, y, z) of these N data points. Assuming that the new Morton or Hilbert code for the current position is X, a P-point set before and a Q-point set after the current position according to the new Morton or Hilbert code order will be selected. Among these pre-defined K, P and Q point sets, the M points with the closest "distance" to the current point are selected. The distance d1 as one example is defined as follows; other distance metrics can also be used.

[0051] d1 = |x1 − x2| + |y1 − y2| + |z1 − z2| (1)

[0052] where (x1, y1, z1) and (x2, y2, z2) are the coordinates of the current point and a pre-selected point, respectively.

[0053] More recently, a full search method based on the Hilbert code has been applied in the G-PCC attribute coding. In the current software, the search range is set to 128 and the number of previous points used to form the predictor is set to M. If more than 128 points have already been coded before the current point, only M out of the 128 previously coded neighbouring points can be used to form the attribute predictor according to the Hilbert order.
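As a concrete illustration of the Morton ordering and the distance metric of formula (1), the following sketch (illustrative only; the 10-bit coordinate depth and the axis-interleaving order are assumptions) computes a Morton code by bit interleaving and ranks candidate neighbours by d1:

# Sketch of the Morton (Z-order) code used to map (x, y, z) to a one-dimensional
# order, plus the Manhattan-style distance d1 of formula (1) used to rank
# previously coded neighbour candidates. The 10-bit depth is an assumption.
def morton3d(x, y, z, bits=10):
    code = 0
    for b in range(bits):
        code |= ((x >> b) & 1) << (3 * b + 2)
        code |= ((y >> b) & 1) << (3 * b + 1)
        code |= ((z >> b) & 1) << (3 * b)
    return code

def d1(p, q):
    return abs(p[0] - q[0]) + abs(p[1] - q[1]) + abs(p[2] - q[2])

# Neighbour candidates are sorted by d1 to pick the M closest coded points.
current = (5, 3, 7)
candidates = [(4, 3, 7), (9, 1, 0), (5, 2, 7)]
print(sorted(candidates, key=lambda p: d1(current, p))[:2])  # the two closest points
print(hex(morton3d(*current)))                               # 0x15f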
[0054] If there are fewer than 128 coded points before the current one, all such coded points will be used as candidates to form the attribute predictor. Among the up to 128 previously coded points, the M points with the closest "distance" to the current point are selected. The distance d2 as one example is defined as follows; other distance metrics can also be used.

[0055] d2 = |x1 − x3| + |y1 − y3| + |z1 − z3| (2)

[0056] where (x1, y1, z1) and (x3, y3, z3) are the coordinates of the current point and the pre-selected point along the Hilbert order, respectively. Once the M closest points have been selected, a weighted average of the attributes from these M points is formed as the predictor to code the attribute of the current point.

[0057] It is known that the points which share the same face/edge/point with the current point are close to the current point. Another technique is to consider these points as candidates for serving as a predictor.

[0058] The residual is defined as the difference of attribute values between the current point and its predictor. Depending on the application, PCC can be either lossless or lossy. Hence, the residual may or may not be quantized by using the predefined quantization process. In the invention, the residual, with or without quantization, is called a level. The level can be a signed integer and will be coded into the bitstream.

[0059] [Color Level Coding]

[0060] There are three color attributes for each point, which come from the three color components. If the levels for all three color components are zero, this point is called a zero level point. Otherwise, if there is at least one non-zero level for one color component of the point, this point is called a non-zero level point. In the current G-PCC, the number of consecutive zero level points is called the zero-run length. The zero-run length values and the levels for non-zero level points are coded into the bitstream.

[0061] More specifically, on the encoding side, before coding the first point, the zero-run length value is set to zero. Starting from the first point along the predefined coding order, the residuals between the three color predictors and their corresponding color attributes for the current point can be obtained. Then, the corresponding levels for the three components of the current point can also be obtained. If the current point is a zero level point, the zero-run length value is increased by one and the process proceeds to the next point. If the current point is a non-zero level point, the zero-run length value is coded first and then the three color levels for this non-zero level point are coded right after. After the level coding of a non-zero level point, the zero-run length value is reset to zero and the process proceeds to the next point until all points are finished.

[0062] On the decoding side, the zero-run length value is first decoded and the three color levels corresponding to the number of zero-run length points are set to zero. Then, the levels for the non-zero level point are decoded and then the next zero-run length value is decoded. This process continues until all points are decoded.

[0063] For a non-zero level point, there is at least one non-zero level among the three components. Several one-bit flags plus the remainder of the absolute level may be coded to represent the residual levels of the color components. The absolute level, or the absolute level of the color residual minus one, may be coded and is named the coded level in the remainder of the invention.
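The zero-run length scheme described above can be sketched as follows (illustrative only; the handling of a trailing zero run and the tuple layout are assumptions and do not model the actual bitstream syntax):

# Illustrative sketch of the zero-run length coding of colour levels described
# above: count consecutive zero-level points, then emit (run, levels) for each
# non-zero-level point. The real symbol layout in the bitstream is not modelled.
def encode_levels(points_levels):
    out, run = [], 0
    for levels in points_levels:            # levels = (L0, L1, L2) per point
        if all(v == 0 for v in levels):      # zero level point
            run += 1
        else:                                # non-zero level point
            out.append((run, levels))        # zero-run length, then the three levels
            run = 0
    if run:
        out.append((run, None))              # trailing zero run (assumed handling)
    return out

print(encode_levels([(0, 0, 0), (0, 0, 0), (1, 0, -2), (0, 0, 0)]))
# -> [(2, (1, 0, -2)), (1, None)]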
[0064] [Encoder]

[0065] FIG. 5 is a schematic diagram of a hardware structure of an encoder according to an embodiment of the invention. Referring to FIG. 5, the encoder 500 includes a processor 510, a storage device 520, a communication interface 530, and a data bus 540. The processor 510 is electrically connected to the storage device 520 and the communication interface 530 through the data bus 540. In the embodiment of the invention, the storage device 520 may store relevant instructions, and may further store algorithms of relevant image encoders. The processor 510 may receive the data of the point cloud from the communication interface 530. The processor 510 may execute the relevant image encoders and/or the relevant instructions to implement the encoding methods of the invention. In the embodiment of the invention, the encoder 500 may be implemented by one or more personal computers (PCs), one or more server computers, one or more workstation computers, or may be composed of multiple computing devices, but the invention is not limited thereto. In one embodiment of the invention, the encoder 500 may include more processors for executing the relevant image encoders and/or the relevant instructions to implement the image data processing method of the invention. In the embodiment of the invention, the encoder 500 may implement the G-PCC encoding method of FIG. 1.

[0066] In the embodiment of the invention, the processor 510 may include, for example, a central processing unit (CPU), a graphics processing unit (GPU), or another programmable general-purpose or special-purpose microprocessor, digital signal processor (DSP), application specific integrated circuit (ASIC), programmable logic device (PLD), other similar processing circuits or a combination of these devices. In the embodiment of the invention, the storage device 520 may be a non-transitory computer-readable recording medium, such as a read-only memory (ROM), an erasable programmable read-only memory (EPROM), an electrically-erasable programmable read-only memory (EEPROM) or a non-volatile memory (NVM), but the present invention is not limited thereto. In one embodiment of the invention, the relevant image encoders and/or the relevant instructions may also be stored in the non-transitory computer-readable recording medium of one apparatus and executed by the processor of another apparatus. The communication interface 530 is, for example, a network card that supports wired network connections such as Ethernet, a wireless network card that supports wireless communication standards such as Institute of Electrical and Electronics Engineers (IEEE) 802.11n/b/g/ac/ax/be, or any other network connecting device, but the embodiment is not limited thereto. The communication interface 530 is configured to retrieve data of a point cloud. In the embodiment of the invention, the processor 510 may encode the data of the point cloud to generate a corresponding bitstream.

[0067] FIG. 6 is a schematic diagram of an example of a point cloud sampling method according to an embodiment of the invention. FIG. 7 is a schematic diagram of another example of a point cloud sampling method according to an embodiment of the invention. Referring to FIG. 6, the encoder 500 of FIG. 5 may be applied to code the data of the point cloud sampled by, for example, the light detection and ranging (LiDAR) acquisition system 600, but the invention is not limited thereto.
The LiDAR acquisition system 600 may include a rotating header 610, and the rotating header 610 may include a plurality of vertically arranged lasers. As shown in FIG. 6, the LiDAR acquisition system 600 may sample the reflectance of lights emitted by the vertically arranged lasers in the rotating header 610 to obtain the data of the point cloud. The captured point cloud is in the shape of a cylinder as shown in FIG. 6, where the points can be represented in the cylindrical coordinate system with radial, azimuthal, and elevation angular directional sampling with a limited elevation angle range. Referring to FIG. 7, in addition to the above cylindrical sampling, the points sampled at an azimuth and an elevation angle are distributed in the direction of the light ray. As shown in FIG. 7, the laser light (at point 701) may sample a plurality of points 702. Since the points 702 are in the same azimuth and elevation direction, the Euclidean distance between adjacent points in each direction increases as the radial distance increases. That is, the sampling distance may not be uniform, but the invention is not limited thereto.

[0068] FIG. 8 is a flowchart of a coding method according to an embodiment of the invention. Referring to FIG. 5 and FIG. 8, the encoder 500 may code the geometry information and the attribute information in the data of the point cloud as in the above examples of FIG. 6 and FIG. 7. After the geometry information is coded, a Morton or Hilbert or any other code/order may be used to convert a three-dimensional point cloud cube into a one-dimensional array. In other words, a three-dimensional coordinate (x, y, z) for a point will be converted into a one-dimensional array. Each position in the cube will have a corresponding Morton or Hilbert code, but some positions may not have any corresponding point cloud attribute. Some positions may be empty. The attribute coding will follow the pre-defined Morton or Hilbert order. A predictor may be generated from the previously coded points in Morton or Hilbert order. The attribute difference between the current point and its predictor is coded into the bitstream. In this regard, since all values of the coordinates used for conversion to the one-dimensional array must be non-negative to complete the coordinate conversion, the encoder 500 may execute the following steps S810 to S840.

[0069] During attribute coding, the encoder 500 may execute the following steps S810 to S840. In step S810, the processor 510 may obtain original three-dimensional coordinates (e.g. (x_org, y_org, z_org)). In the embodiment of the invention, the original three-dimensional coordinates may be Cartesian coordinates, and may be used for describing one point in the point cloud. The original three-dimensional coordinates may include three non-negative integers in Cartesian coordinates to represent the current point. In step S820, the processor 510 may convert the original three-dimensional coordinates to converted three-dimensional coordinates. In the embodiment of the invention, the coordinate conversion may be performed by a predefined function (e.g. func(x_org, y_org, z_org, paras)), where the input variables of the coordinate conversion are the Cartesian coordinates (x_org, y_org and z_org) and given parameters (paras), so that the output of the coordinate conversion may be the converted three-dimensional coordinates (e.g. (x_con, y_con, z_con)).
It should be noted that the converted three-dimensional coordinates (e.g. (x_con, y_con, z_con)) may include one or more negative values, but the invention is not limited thereto. In one embodiment of the invention, the predefined function (e.g. func(x_org, y_org, z_org, paras)) may be used to perform a coordinate system transform as in the following formula (1). In the following formula (1), the original three-dimensional coordinates may be (x, y, z) (i.e. (x_org, y_org, z_org)), and the converted three-dimensional coordinates may be (u, v, w) (i.e. (x_con, y_con, z_con)), where arctan(y/x) is the inverse tangent function with input (y/x), and arctan(z/u) is the inverse tangent function with input (z/u).

u = √(x² + y²)
v = arctan(y/x) ………………formula (1)
w = arctan(z/u)

[0070] In step S830, the processor 510 may adjust the converted three-dimensional coordinates (e.g. (x_con, y_con, z_con)) to non-negative three-dimensional coordinates (e.g. (x_con’, y_con’, z_con’)). In step S840, the processor 510 may convert the non-negative three-dimensional coordinates (e.g. (x_con’, y_con’, z_con’)) to a one-dimensional array. In the embodiment of the invention, the one-dimensional array is a Morton array or a Hilbert array, but the invention is not limited thereto. Therefore, the encoder 500 may efficiently convert the non-negative three-dimensional coordinates (e.g. (x_con’, y_con’, z_con’)) into the one-dimensional array for subsequent encoding.

[0071] In one embodiment of the invention, the processor 510 may determine the non-negative three-dimensional coordinates (e.g. (x_con’, y_con’, z_con’)) according to the following formula (2).

x_con’ = (x_con < 0) ? 0 : x_con
y_con’ = (y_con < 0) ? 0 : y_con …………..formula (2)
z_con’ = (z_con < 0) ? 0 : z_con

[0072] Specifically, according to the above formula (2), the processor 510 may determine whether each value of the converted three-dimensional coordinates (e.g. (x_con, y_con, z_con)) is a negative value. In response to at least one value of the converted three-dimensional coordinates (e.g. (x_con, y_con, z_con)) being a negative value, the processor 510 may clip the at least one value of the converted three-dimensional coordinates (e.g. (x_con, y_con, z_con)) to zero to obtain the non-negative three-dimensional coordinates (e.g. (x_con’, y_con’, z_con’)).
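A minimal sketch of the conversion of formula (1) and the clipping of formula (2) is given below (illustrative only; floating-point arithmetic and atan2 are used here for clarity, whereas an actual codec would use a fixed-point integer approximation):

import math

# Sketch of the Cartesian-to-(u, v, w) conversion of formula (1) and the
# clipping of formula (2). atan2 is used as a convenient stand-in for the
# arctan(y/x) and arctan(z/u) terms.
def convert(x_org, y_org, z_org):
    u = math.sqrt(x_org * x_org + y_org * y_org)   # radial distance
    v = math.atan2(y_org, x_org)                   # azimuth, arctan(y/x)
    w = math.atan2(z_org, u)                       # elevation, arctan(z/u)
    return u, v, w

def clip_non_negative(coords):
    # Formula (2): any negative component is clipped to zero.
    return tuple(c if c >= 0 else 0 for c in coords)

x_con, y_con, z_con = convert(3, -4, 2)
print(clip_non_negative((x_con, y_con, z_con)))    # the negative azimuth becomes 0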
[0073] In another embodiment of the invention, the processor 510 may convert the original three-dimensional coordinates (x_org, y_org and z_org) to the converted three-dimensional coordinates by the predefined function (e.g. func(x_org, y_org, z_org, paras)). The converted three-dimensional coordinates may include a first intermediate value (e.g. x_intermediate), a second intermediate value (e.g. y_intermediate) and a third intermediate value (e.g. z_intermediate). The first intermediate value (e.g. x_intermediate), the second intermediate value (e.g. y_intermediate) and the third intermediate value (e.g. z_intermediate) may be the intermediate results given by the predefined function (e.g. func(x_org, y_org, z_org, paras)) as mentioned above. The processor 510 may execute a scaling operation to adjust the converted three-dimensional coordinates (e.g. (x_intermediate, y_intermediate, z_intermediate)) to the non-negative three-dimensional coordinates (e.g. (x_con’, y_con’, z_con’)) according to the following formula (3).

x_con’ = x_intermediate ∗ SC + offset_x
y_con’ = y_intermediate ∗ SC + offset_y …………..formula (3)
z_con’ = z_intermediate ∗ SC + offset_z

[0074] Specifically, according to the above formula (3), the processor 510 may multiply the first intermediate value (e.g. x_intermediate) by a scaling value (e.g. SC) and add a first offset value (e.g. offset_x) to obtain a first value (e.g. x_con’) of the non-negative three-dimensional coordinates. The processor 510 may multiply the second intermediate value (e.g. y_intermediate) by the scaling value (e.g. SC) and add a second offset value (e.g. offset_y) to obtain a second value (e.g. y_con’) of the non-negative three-dimensional coordinates. The processor 510 may multiply the third intermediate value (e.g. z_intermediate) by the scaling value (e.g. SC) and add a third offset value (e.g. offset_z) to obtain a third value (e.g. z_con’) of the non-negative three-dimensional coordinates. It should be noted that the scaling value (e.g. SC), the first offset value (e.g. offset_x), the second offset value (e.g. offset_y) and the third offset value (e.g. offset_z) are given parameters according to the original three-dimensional coordinates (x_org, y_org and z_org). The scaling value (e.g. SC), the first offset value (e.g. offset_x), the second offset value (e.g. offset_y) and the third offset value (e.g. offset_z) are selected to guarantee that there is no negative value in the non-negative three-dimensional coordinates (x_con’, y_con’, z_con’).

[0075] In another embodiment of the invention, spherical coordinates may be employed to code attributes for a point cloud acquired by, for example, the LiDAR acquisition system, for both the predicting and lifting transforms (e.g. indicated with spherical_coord_flag in the bitstream) in MPEG G-PCC. The decoded spherical coordinates are first scaled according to scaling factors specified in the attribute parameters (e.g. indicated with attr_coord_scale in the bitstream). Therefore, the converted three-dimensional coordinates may be spherical coordinates (e.g. represented by point_{k,i}). The scaling operation may be specified as the following formula (4).

scalePoint_{k,i} = ((point_{k,i} − min_k) ∗ S_k + 128) ≫ 8 …………formula (4)

[0076] According to formula (4), k is the axis index (k in {0, 1, 2}), i is the point index in decoding order, point_{k,i} represents the spherical coordinates, scalePoint_{k,i} represents the scaled spherical coordinates, S_k are integers representing fixed-point scaling factors with 8-bit decimal precision, and min_k are scaling offsets. In response to the spherical coordinates being used for attribute coding and a scaling offset flag in the attribute parameters being true, the scaling offset value is inherited from a corresponding syntax value in the attribute parameters. The processor 510 may subtract the scaling offsets from the spherical coordinates (point_{k,i}) to get the difference value (point_{k,i} − min_k), multiply the difference value (point_{k,i} − min_k) by the fixed-point scaling factors (S_k), and add 128 to get the result ((point_{k,i} − min_k) ∗ S_k + 128). Then the processor 510 may perform a bitwise right shift operation on the result ((point_{k,i} − min_k) ∗ S_k + 128) by 8 bits, which is equivalent to dividing by 256, to get the scaled spherical coordinates (scalePoint_{k,i}). Therefore, the processor 510 may convert the scaled spherical coordinates (scalePoint_{k,i}) into the one-dimensional array for attribute coding, where all values of the scaled spherical coordinates (scalePoint_{k,i}) are non-negative.
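The fixed-point scaling of formula (4) can be sketched as follows (illustrative only; the per-axis values of S_k and min_k below are arbitrary example numbers, not values taken from any bitstream):

# Sketch of the fixed-point scaling of formula (4): subtract the per-axis
# scaling offset min_k, multiply by the 8-bit fixed-point factor S_k, add 128
# for rounding, and shift right by 8 (divide by 256).
def scale_point(point, mins, S):
    return tuple(((point[k] - mins[k]) * S[k] + 128) >> 8 for k in range(3))

# Example with assumed values: S_k = 256 corresponds to a scale factor of 1.0.
print(scale_point((100, -20, 7), mins=(-50, -50, 0), S=(256, 256, 512)))
# -> (150, 30, 14); the offsets make every scaled component non-negative.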
[0077] Moreover, the processor 510 may further guarantee that all values of the scaled spherical coordinates (e.g. (r_sca, θ_sca, ψ_sca)) are non-negative. The processor 510 may further adjust the scaled spherical coordinates (e.g. (r_sca, θ_sca, ψ_sca)) to non-negative three-dimensional coordinates (e.g. (r_sca’, θ_sca’, ψ_sca’)). The processor 510 may determine the non-negative three-dimensional coordinates (e.g. (r_sca’, θ_sca’, ψ_sca’)) according to the following formula (5).

r_sca’ = (r_sca < 0) ? 0 : r_sca
θ_sca’ = (θ_sca < 0) ? 0 : θ_sca …………..formula (5)
ψ_sca’ = (ψ_sca < 0) ? 0 : ψ_sca

[0078] Specifically, according to the above formula (5), the processor 510 may determine whether each value of the scaled spherical coordinates (e.g. (r_sca, θ_sca, ψ_sca)) is a negative value. In response to at least one value of the scaled spherical coordinates (e.g. (r_sca, θ_sca, ψ_sca)) being a negative value, the processor 510 may clip the at least one value of the scaled spherical coordinates (e.g. (r_sca, θ_sca, ψ_sca)) to zero to obtain the non-negative three-dimensional coordinates (e.g. (r_sca’, θ_sca’, ψ_sca’)).

[0079] [Decoder]

[0080] FIG. 9 is a schematic diagram of a hardware structure of a decoder according to an embodiment of the invention. Referring to FIG. 9, the decoder 900 includes a processor 910, a storage device 920, a communication interface 930, and a data bus 940. The processor 910 is electrically connected to the storage device 920 and the communication interface 930 through the data bus 940. In the embodiment of the invention, the storage device 920 may store relevant instructions, and may further store algorithms of relevant image decoders. The processor 910 may receive the bitstream from the communication interface 930. The processor 910 may execute the relevant image decoders and/or the relevant instructions to implement the decoding methods of the invention. In the embodiment of the invention, the decoder 900 may be implemented by one or more personal computers (PCs), one or more server computers, one or more workstation computers, or may be composed of multiple computing devices, but the invention is not limited thereto. In one embodiment of the invention, the decoder 900 may include more processors for executing the relevant image decoders and/or the relevant instructions to implement the image data processing method of the invention. In the embodiment of the invention, the decoder 900 may implement the G-PCC decoding method of FIG. 4.

[0081] In the embodiment of the invention, the processor 910 may include, for example, a central processing unit (CPU), a graphics processing unit (GPU), or another programmable general-purpose or special-purpose microprocessor, digital signal processor (DSP), application specific integrated circuit (ASIC), programmable logic device (PLD), other similar processing circuits or a combination of these devices. In the embodiment of the invention, the storage device 920 may be a non-transitory computer-readable recording medium, such as a read-only memory (ROM), an erasable programmable read-only memory (EPROM), an electrically-erasable programmable read-only memory (EEPROM) or a non-volatile memory (NVM), but the present invention is not limited thereto.
In one embodiment of the invention, the relevant image decoders and/or the relevant instructions may also be stored in the non-transitory computer-readable recording medium of one apparatus and executed by the processor of another apparatus. The communication interface 930 is, for example, a network card that supports wired network connections such as Ethernet, a wireless network card that supports wireless communication standards such as Institute of Electrical and Electronics Engineers (IEEE) 802.11n/b/g/ac/ax/be, or any other network connecting device, but the embodiment is not limited thereto. The communication interface 930 is configured to retrieve the bitstream. In the embodiment of the invention, the bitstream may include encoded values of the geometry bitstream and the attribute bitstream. The attribute bitstream may further include encoded values of the color level, the reflectance level and/or the zero-run length. The processor 910 may be configured to decode the bitstream to obtain the corresponding data of a point cloud.

[0082] Referring to FIG. 8 and FIG. 9, during bitstream decoding, the decoder 900 may also execute the following steps S810 to S840 to perform the coordinate conversion. In step S810, the processor 910 may obtain original three-dimensional coordinates. In step S820, the processor 910 may convert the original three-dimensional coordinates to converted three-dimensional coordinates. In step S830, the processor 910 may adjust the converted three-dimensional coordinates to non-negative three-dimensional coordinates. In step S840, the processor 910 may convert the non-negative three-dimensional coordinates to a one-dimensional array. In the embodiment of the invention, the one-dimensional array is the Morton array or the Hilbert array, but the invention is not limited thereto. Therefore, the decoder 900 may efficiently convert the non-negative three-dimensional coordinates into the one-dimensional array for subsequent decoding. In addition, in the embodiment of the invention, for relevant calculation details and judgment methods, reference may be made to the description of the above embodiments of FIG. 5 to FIG. 8, which will not be repeated here.

[0083] In summary, the G-PCC system applying the above-mentioned improved encoding method and the above-mentioned improved decoding method may achieve good encoding and decoding performance for cylindrical sampling, for example, and may guarantee that all values used for conversion to the one-dimensional array are non-negative.

[0084] It will be apparent to those skilled in the art that various modifications and variations can be made to the disclosed embodiments without departing from the scope or spirit of the disclosure. In view of the foregoing, it is intended that the disclosure covers modifications and variations provided that they fall within the scope of the following claims and their equivalents.

Reference Signs List

[0085]
500: Encoder
510, 910: Processor
520, 920: Storage device
530, 930: Communication interface
540, 940: Data bus
600: LiDAR acquisition system
610: Rotating header
701, 702: Point
900: Decoder
B1~B8: Cube
S102, S104, S106, S108, S110, S112, S114, S116, S118, S120, S122, S124, S126, S402, S404, S406, S408, S410, S412, S414, S416, S418, S420, S422, S810, S820, S830, S840: Step