Title:
IMPROVED LOCAL ILLUMINATION COMPENSATION FOR INTER PREDICTION
Document Type and Number:
WIPO Patent Application WO/2023/147243
Kind Code:
A1
Abstract:
A method for video encoding comprises determining a mode of Local Illumination Compensation (LIC) for the video encoding to be enabled; calculating LIC parameters for the mode of LIC with a limited number of reference sample pairs, wherein a reference sample pair refers to a luma reference sample and a chroma reference sample; enabling the mode of LIC for the video encoding with the calculated LIC parameters to perform LIC for inter prediction to generate a prediction residual; and forming and outputting a bit-stream encoded with the prediction residual and prediction mode information indicating the mode of LIC.

Inventors:
WANG XIANGLIN (US)
YAN NING (CN)
JHU HONG-JHENG (CN)
CHEN YIWEN (CN)
XIU XIAOYU (CN)
CHEN WEI (CN)
KUO CHEWEI (CN)
GAO HAN (CN)
YU BING (CN)
Application Number:
PCT/US2023/060868
Publication Date:
August 03, 2023
Filing Date:
January 19, 2023
Assignee:
BEIJING DAJIA INTERNET INFORMATION TECH CO LTD (CN)
WANG XIANGLIN (US)
International Classes:
H04N19/587; H04N19/105; H04N19/132; H04N19/157; H04N19/186
Domestic Patent References:
WO2020192717A12020-10-01
Foreign References:
US20190215522A12019-07-11
US20210409683A12021-12-30
US20210377539A12021-12-02
Other References:
LI Junru, WANG Meng, ZHANG Li, WANG Shiqi, ZHANG Kai, WANG Shanshe, MA Siwei, GAO Wen: "Sub-sampled Cross-component Prediction for Emerging Video Coding Standards", arXiv:2012.15067v1, Cornell University, Ithaca, 30 December 2020, pages 1-11, XP093081936, [retrieved on 2023-09-14], DOI: 10.1109/TIP.2021.3104191
Attorney, Agent or Firm:
SU, Hefeng (US)
Claims:
We Claim:

1. A method for video encoding, comprising: determining a mode of Local Illumination Compensation (LIC) for the video encoding to be enabled; calculating LIC parameters for the mode of LIC with a limited number of reference sample pairs, wherein a reference sample pair refers to a luma reference sample and a chroma reference sample; enabling the mode of LIC for the video encoding with the calculated LIC parameters to perform LIC for inter prediction to generate a prediction residual; and forming and outputting a bit-stream encoded with the prediction residual and prediction mode information indicating the mode of LIC.

2. The method of claim 1, wherein the maximum number of reference sample pairs for calculating the LIC parameters is limited to a pre-determined value based on a size and shape of a chroma block.

3. The method of claim 1, further comprising: selecting the limited number of reference sample pairs by taking one out of every two neighboring reference sample pairs.

4. The method of claim 1, wherein the LIC parameters are calculated using up to 8 reference sample pairs.

5. The method of claim 1, further comprising: adding one or more Multi-model LIC (MMLIC) modes to the mode of LIC, wherein in each MMLIC mode, the reference sample pairs are classified into a number of groups and the LIC parameters are calculated for each group.

6. The method of claim 1, further comprising: replacing the mode of LIC with an MMLIC mode.

7. The method of claim 5, further comprising: classifying the reference sample pairs into the number of groups with block-based pixel classification where a classification decision is applied to all pixels in a block.

8. The method of claim 5, further comprising: classifying the reference sample pairs into three groups.

9. The method of claim 5, further comprising: classifying the reference sample pairs into the number of groups by a threshold which is calculated by the average value of the neighbouring reconstructed luma samples or the maximum and minimum value of the neighbouring reconstructed luma samples.

10. The method of claim 9, further comprising: calculating the threshold with reconstructed luma samples in the current block.

11. The method of claim 5, further comprising: performing LIC with the calculated LIC parameters for each group of the number of groups to generate a number of predicted samples; and weighting the number of predicted samples to generate a final prediction.

12. The method of claim 1, further comprising: determining the mode of Local Illumination Compensation (LIC) adaptively by a condition check based on block size.

13. An apparatus for video encoding, comprising: a memory to store an instruction; and a processor to execute the instruction such that the processor is configured to: determine a mode of Local Illumination Compensation (LIC) for the video encoding to be enabled; calculate LIC parameters for the mode of LIC with a limited number of reference sample pairs, wherein a reference sample pair refers to a luma reference sample and a chroma reference sample; enable the mode of LIC for the video encoding with the calculated LIC parameters to perform LIC for inter prediction to generate a prediction residual; and form and output a bit-stream encoded with the prediction residual and prediction mode information indicating the mode of LIC.

14. The apparatus of claim 13, wherein the maximum number of reference sample pairs for calculating the LIC parameters is limited to a pre-determined value based on a size and shape of a chroma block.

15. The apparatus of claim 13, the processor further configured to: select the limited number of reference sample pairs by taking one out of every two neighboring reference sample pairs.

16. The apparatus of claim 13, wherein the processor is configured to: calculate the LIC parameters using up to 8 reference sample pairs.

17. The apparatus of claim 13, the processor further configured to: add one or more Multi-model LIC (MMLIC) modes to the mode of LIC, wherein in each MMLIC mode, the reference sample pairs are classified into a number of groups and the LIC parameters are calculated for each group.

18. The apparatus of claim 13, the processor further configured to: replace the mode of LIC with an MMLIC mode.

19. The apparatus of claim 17, the processor further configured to: classify the reference sample pairs into the number of groups with block-based pixel classification where a classification decision is applied to all pixels in a block.

20. The apparatus of claim 17, the processor further configured to: classify the reference sample pairs into three groups.

21. The apparatus of claim 17, the processor further configured to: classify the reference sample pairs into the number of groups by a threshold which is calculated by the average value of the neighbouring reconstructed luma samples or the maximum and minimum value of the neighbouring reconstructed luma samples.

22. The apparatus of claim 21, the processor further configured to: calculate the threshold with reconstructed luma samples in the current block.

23. The apparatus of claim 17, the processor further configured to: perform LIC with the calculated LIC parameters for each group of the number of groups to generate a number of predicted samples; and weight the number of predicted samples to generate a final prediction.

24. The apparatus of claim 13, the processor further configured to: determine the mode of Local Illumination Compensation (LIC) adaptively by a condition check based on block size.

25. A method for video decoding, comprising: receiving and decoding a bit-stream to obtain a prediction residual and prediction mode information indicating a mode of Local Illumination Compensation (LIC) to be enabled; forming a residual block from the prediction residual and forming a prediction block from the prediction mode information; and reconstructing a reconstructed block from the residual block and prediction block; wherein LIC parameters for the mode of LIC are calculated with a limited number of reference sample pairs, and wherein a reference sample pair refers to a luma reference sample and a chroma reference sample.

26. The method of claim 25, wherein the maximum number of reference sample pairs for calculating the LIC parameters is limited to a pre-determined value based on a size and shape of a chroma block.

27. The method of claim 25, wherein the limited number of reference sample pairs are selected by taking one out of every two neighboring reference sample pairs.

28. The method of claim 25, wherein the LIC parameters are calculated using up to 8 reference sample pairs.

29. The method of claim 25, wherein one or more Multi-model LIC (MMLIC) modes are added to the mode of LIC, wherein in each MMLIC mode, the reference sample pairs are classified into a number of groups and the LIC parameters are calculated for each group.

30. The method of claim 25, wherein the mode of LIC is replaced with an MMLIC mode.

31. The method of claim 29, wherein the reference sample pairs are classified into the number of groups with block-based pixel classification where a classification decision is applied to all pixels in a block.

32. The method of claim 29, wherein the reference sample pairs are classified into three groups.

33. The method of claim 29, wherein the reference sample pairs are classified into the number of groups by a threshold which is calculated by the average value of the neighbouring reconstructed luma samples or the maximum and minimum value of the neighbouring reconstructed luma samples.

34. The method of claim 33, wherein the threshold is calculated with reconstructed luma samples in the current block.

35. The method of claim 29, wherein LIC is performed with the calculated LIC parameters for each group of the number of groups to generate a number of predicted samples; and wherein the number of predicted samples are weighted to generate a final prediction.

36. The method of claim 25, wherein the mode of Local Illumination Compensation (LIC) is determined adaptively by a condition check based on block size.

37. An apparatus for video decoding, comprising: a memory to store an instruction; and a processor to execute the instruction such that the processor is configured to: receive and decode a bit-stream to obtain a prediction residual and prediction mode information indicating a mode of Local Illumination Compensation (LIC) to be enabled; form a residual block from the prediction residual and form a prediction block from the prediction mode information; and reconstruct a reconstructed block from the residual block and prediction block; wherein LIC parameters for the mode of LIC are calculated with a limited number of reference sample pairs, and wherein a reference sample pair refers to a luma reference sample and a chroma reference sample.

38. The apparatus of claim 37, wherein the maximum number of reference sample pairs for calculating the LIC parameters is limited to a pre-determined value based on a size and shape of a chroma block.

39. The apparatus of claim 37, wherein the limited number of reference sample pairs are selected by taking one out of every two neighboring reference sample pairs.

40. The apparatus of claim 37, wherein the LIC parameters are calculated using up to 8 reference sample pairs.

41. The apparatus of claim 37, wherein one or more Multi-model LIC (MMLIC) modes are added to the mode of LIC, wherein in each MMLIC mode, the reference sample pairs are classified into a number of groups and the LIC parameters are calculated for each group.

42. The apparatus of claim 37, wherein the mode of LIC is replaced with an MMLIC mode.

43. The apparatus of claim 41, wherein the reference sample pairs are classified into the number of groups with block-based pixel classification where a classification decision is applied to all pixels in a block.

44. The apparatus of claim 41, wherein the reference sample pairs are classified into three groups.

45. The apparatus of claim 41, wherein the reference sample pairs are classified into the number of groups by a threshold which is calculated by the average value of the neighbouring reconstructed luma samples or the maximum and minimum value of the neighbouring reconstructed luma samples.

46. The apparatus of claim 45, wherein the threshold is calculated with reconstructed luma samples in the current block.

47. The apparatus of claim 41, wherein LIC is performed with the calculated LIC parameters for each group of the number of groups to generate a number of predicted samples; and wherein the number of predicted samples are weighted to generate a final prediction.

48. The apparatus of claim 37, wherein the mode of Local Illumination Compensation (LIC) is determined adaptively by a condition check based on block size.

49. A computer readable storage medium having stored thereon instructions that when executed cause a computing device to perform the method according to any one of claims 1 through 12 or 25 through 36.

50. A computer readable storage medium having stored therein a bitstream for execution by an encoding device having one or more processors, wherein the bitstream, when executed by the one or more processors, causes the encoding device to perform the method for video encoding according to any of claims 1 through 12.

51. A computer readable storage medium having stored therein a bitstream for execution by a decoding device having one or more processors, wherein the bitstream, when executed by the one or more processors, causes the decoding device to perform the method for video decoding according to any of claims 25 through 36.

Description:
Improved Local Illumination Compensation for Inter Prediction

CROSS-REFERENCE TO RELATED APPLICATION

[0001] This application is based upon and claims priority to U.S. Provisional Application No. 63/302,919 filed on January 25, 2022, the entire content of which is incorporated herein by reference.

TECHNICAL FIELD

[0002] This application is related to video coding and compression. More specifically, this application relates to methods and apparatus on improving the coding efficiency and simplifying the complexity of local illumination compensation (LIC).

BACKGROUND

[0003] Various video coding techniques may be used to compress video data. Video coding is performed according to one or more video coding standards. For example, video coding standards include versatile video coding (VVC), joint exploration test model (JEM), high-efficiency video coding (H.265/HEVC), advanced video coding (H.264/AVC), moving picture experts group (MPEG) coding, or the like. Video coding generally utilizes prediction methods (e.g., inter-prediction, intra-prediction, or the like) that take advantage of redundancy present in video images or sequences. An important goal of video coding techniques is to compress video data into a form that uses a lower bit rate, while avoiding or minimizing degradations to video quality.

SUMMARY

[0004] Embodiments of the present disclosure are proposed to further improve the LIC coding efficiency or simplify the existing LIC design to facilitate hardware implementations. It is noted that the invented methods could be applied independently or jointly.

[0005] In an aspect, there is proposed a method for video encoding, comprising: determining a mode of Local Illumination Compensation (LIC) for the video encoding to be enabled; calculating LIC parameters for the mode of LIC with a limited number of reference sample pairs, wherein a reference sample pair refers to a luma reference sample and a chroma reference sample; enabling the mode of LIC for the video encoding with the calculated LIC parameters to perform LIC for inter prediction to generate a prediction residual; and forming and outputting a bit-stream encoded with the prediction residual and prediction mode information indicating the mode of LIC.

[0006] In an aspect, there is proposed an apparatus for video encoding, comprising: a memory to store an instruction; and a processor to execute the instruction such that the processor is configured to: determine a mode of Local Illumination Compensation (LIC) for the video encoding to be enabled; calculate LIC parameters for the mode of LIC with a limited number of reference sample pairs, wherein a reference sample pair refers to a luma reference sample and a chroma reference sample; enable the mode of LIC for the video encoding with the calculated LIC parameters to perform LIC for inter prediction to generate a prediction residual; form and output a bit-stream encoded with the prediction residual and prediction mode information indicating the mode of LIC.

[0007] In an aspect, there is proposed a method for video decoding, comprising: receiving and decoding a bit-stream to obtain a prediction residual and prediction mode information indicating a mode of Local Illumination Compensation (LIC) to be enabled; forming a residual block from the prediction residual and forming a prediction block from the prediction mode information; and reconstructing a reconstructed block from the residual block and prediction block; wherein LIC parameters for the mode of LIC are calculated with a limited number of reference sample pairs, and wherein a reference sample pair refers to a luma reference sample and a chroma reference sample.

[0008] In an aspect, there is proposed an apparatus for video decoding, comprising: a memory to store an instruction; and a processor to execute the instruction such that the processor is configured to: receive and decode a bit-stream to obtain a prediction residual and prediction mode information indicating a mode of Local Illumination Compensation (LIC) to be enabled; form a residual block from the prediction residual and form a prediction block from the prediction mode information; and reconstruct a reconstructed block from the residual block and prediction block; wherein LIC parameters for the mode of LIC are calculated with a limited number of reference sample pairs, and wherein a reference sample pair refers to a luma reference sample and a chroma reference sample.

[0009] In an aspect, there is proposed a computer readable medium having stored thereon instructions that when executed cause a computing device to perform the above methods.

[0010] In an aspect, there is proposed a computer readable storage medium having stored therein a bitstream for execution by an encoding device having one or more processors, wherein the bitstream, when executed by the one or more processors, causes the encoding device to perform the above method for video encoding.

[0011] In an aspect, there is proposed a computer readable storage medium having stored therein a bitstream for execution by a decoding device having one or more processors, wherein the bitstream, when executed by the one or more processors, causes the decoding device to perform the above method for video decoding.

[0012] It is to be understood that both the foregoing general description and the following detailed description are examples only and are not restrictive of the present disclosure.

BRIEF DESCRIPTION OF THE DRAWINGS

[0013] The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate examples consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure.

[0014] FIG. 1 illustrates a block diagram of a generic block-based hybrid video encoding system;

[0015] FIG. 2 illustrates a diagram of block partitions in the multi-type tree structure: (a) quaternary partition; (b) vertical binary partition; (c) horizontal binary partition; (d) vertical ternary partition; (e) horizontal ternary partition;

[0016] FIG. 3 illustrates a general block diagram of a block-based video decoder;

[0017] FIG. 4A illustrates a diagram of the straight-line derivation of α and β using the min-Max method;

[0018] FIG. 4B illustrates a diagram of the locations of the samples used for the derivation of α and β;

[0019] FIG. 5 illustrates a diagram of an example of classifying the neighboring samples into two groups based on the value Threshold;

[0020] FIG. 6 illustrates a diagram of an example of classifying the neighboring samples into two groups based on the knee point, T, indicated by an arrow;

[0021] FIG. 7 illustrates neighboring samples used for deriving IC parameters;

[0022] FIG. 8 illustrates a flow diagram of a method for video encoding according to an embodiment of the present disclosure;

[0023] FIG. 9 illustrates a flow diagram of a method for video decoding according to an embodiment of the present disclosure;

[0024] FIG. 10 illustrates a diagram of an example location of reference sample in method 1;

[0025] FIG. 11 illustrates a diagram of an example location of reference sample in method 2;

[0026] FIG. 12 illustrates a diagram of an example location of reference sample in method 3;

[0027] FIG. 13 illustrates a diagram of an example location of reference sample in method 4;

[0028] FIG. 14 illustrates the predicted sample weighting; and

[0029] FIG. 15 illustrates a block diagram of a computing device for practicing an embodiment of the LIC for the video coding according to an embodiment of the present disclosure.

DETAILED DESCRIPTION

[0030] The first version of the HEVC standard was finalized in October 2013, offering approximately 50% bit-rate saving for equivalent perceptual quality compared to the prior generation video coding standard H.264/MPEG AVC. Although the HEVC standard provides significant coding improvements over its predecessor, there is evidence that superior coding efficiency can be achieved with additional coding tools over HEVC. Based on that, both VCEG and MPEG started the exploration work of new coding technologies for future video coding standardization. A Joint Video Exploration Team (JVET) was formed in October 2015 by ITU-T VCEG and ISO/IEC MPEG to begin significant study of advanced technologies that could enable substantial enhancement of coding efficiency. A reference software called the joint exploration model (JEM) was maintained by the JVET by integrating several additional coding tools on top of the HEVC test model (HM).

[0031] In October 2017, the joint call for proposals (CfP) on video compression with capability beyond HEVC was issued by ITU-T and ISO/IEC. In April 2018, 23 CfP responses were received and evaluated at the 10th JVET meeting, demonstrating compression efficiency gains of around 40% over HEVC. Based on such evaluation results, the JVET launched a new project to develop the new-generation video coding standard, named Versatile Video Coding (VVC). In the same month, a reference software codebase, called the VVC test model (VTM), was established for demonstrating a reference implementation of the VVC standard.

[0032] Like HEVC, the VVC is built upon the block-based hybrid video coding framework. FIG. 1 gives a block diagram of a generic block-based hybrid video encoding system. An input video signal is processed block by block (called coding units (CUs)). In VTM-1.0, a CU can be up to 128x128 pixels. However, different from the HEVC which partitions blocks only based on quad-trees, in the VVC a coding tree unit (CTU) is split into CUs to adapt to varying local characteristics based on quad/binary/ternary-tree. Additionally, the concept of multiple partition unit type in the HEVC is removed, i.e., the separation of CU, prediction unit (PU) and transform unit (TU) does not exist in the VVC anymore; instead, each CU is always used as the basic unit for both prediction and transform without further partitions. In the multi-type tree structure, a CTU is firstly partitioned by a quad-tree structure. Then, each quad-tree leaf node can be further partitioned by a binary and ternary tree structure. As shown in FIG. 2, there are five splitting types: quaternary partitioning, horizontal binary partitioning, vertical binary partitioning, horizontal ternary partitioning, and vertical ternary partitioning. In FIG. 1, spatial prediction and/or temporal prediction may be performed. Spatial prediction (or "intra prediction") uses pixels from the samples of already coded neighboring blocks (which are called reference samples) in the same video picture/slice to predict the current video block. Spatial prediction reduces spatial redundancy inherent in the video signal. Temporal prediction (also referred to as "inter prediction" or "motion compensated prediction") uses reconstructed pixels from already coded video pictures to predict the current video block. Temporal prediction reduces temporal redundancy inherent in the video signal. The temporal prediction signal for a given CU is usually signaled by one or more motion vectors (MVs) which indicate the amount and the direction of motion between the current CU and its temporal reference. Also, if multiple reference pictures are supported, a reference picture index is additionally sent, which is used to identify from which reference picture in the reference picture store the temporal prediction signal comes. After spatial and/or temporal prediction, the mode decision block in the encoder chooses the best prediction mode, for example based on the rate-distortion optimization method. The prediction block is then subtracted from the current video block, and the prediction residual is de-correlated using transform and quantized. The quantized residual coefficients are inverse quantized and inverse transformed to form the reconstructed residual, which is then added back to the prediction block to form the reconstructed signal of the CU. Further in-loop filtering, such as deblocking filter, sample adaptive offset (SAO) and adaptive in-loop filter (ALF), may be applied on the reconstructed CU before it is put in the reference picture store and used to code future video blocks. To form the output video bit-stream, coding mode (inter or intra), prediction mode information, motion information, and quantized residual coefficients are all sent to the entropy coding unit to be further compressed and packed to form the bit-stream.

[0033] FIG. 3 gives a general block diagram of a block-based video decoder. The video bitstream is first entropy decoded at the entropy decoding unit. The coding mode and prediction information are sent to either the spatial prediction unit (if intra coded) or the temporal prediction unit (if inter coded) to form the prediction block. The residual transform coefficients are sent to the inverse quantization unit and inverse transform unit to reconstruct the residual block. The prediction block and the residual block are then added together. The reconstructed block may further go through in-loop filtering before it is stored in the reference picture store. The reconstructed video in the reference picture store is then sent out to drive a display device, as well as used to predict future video blocks.

[0034] In general, the basic intra prediction scheme applied in the VVC is kept the same as that of the HEVC, except that several modules are further extended and/or improved, e.g., intra sub-partition (ISP) coding mode, extended intra prediction with wide-angle intra directions, position-dependent intra prediction combination (PDPC) and 4-tap intra interpolation. The main focus of this disclosure is to further improve the coding efficiency of the existing LIC mode. Additionally, some methods are also proposed to reduce the LIC computational complexity and make it more friendly for practical hardware implementations. To facilitate the following description, the related background knowledge is elaborated in the following sections.

[0035] Cross-component linear model prediction

[0036] To reduce the cross-component redundancy, a cross-component linear model (CCLM) prediction mode is used in VVC, for which the chroma samples are predicted based on the reconstructed luma samples of the same CU by using a linear model as follows:

$$\mathrm{pred}_C(i,j) = \alpha \cdot \mathrm{rec}_L'(i,j) + \beta \qquad (1)$$

[0037] where pred_C(i,j) represents the predicted chroma samples in a CU and rec_L'(i,j) represents the downsampled reconstructed luma samples of the same CU. Linear model parameters α and β are derived from the straight-line relationship between luma values and chroma values of two samples, which are the minimum luma sample A (X_A, Y_A) and the maximum luma sample B (X_B, Y_B) inside the set of neighboring luma samples, as exemplified in FIG. 4A. Here X_A, Y_A are the x-coordinate (i.e. luma value) and y-coordinate (i.e. chroma value) of sample A, and X_B, Y_B are the x-coordinate and y-coordinate of sample B. The linear model parameters α and β are obtained according to the following equations.

$$\alpha = \frac{Y_B - Y_A}{X_B - X_A}, \qquad \beta = Y_A - \alpha \cdot X_A \qquad (2)$$

[0038] Such a method is also called min-Max method. The division in the equation above could be avoided and replaced by a multiplication and a shift.
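To make the min-Max derivation concrete, the following Python sketch fits α and β through the minimum/maximum luma sample pair and applies equation (1). It is illustrative only: the function names are hypothetical, and floating-point division is used where the standard would use a multiplication and a shift.

```python
def derive_min_max_model(luma_neighbors, chroma_neighbors):
    """Fit (alpha, beta) through the min/max luma pair, per equation (2)."""
    pairs = list(zip(luma_neighbors, chroma_neighbors))
    x_a, y_a = min(pairs, key=lambda p: p[0])   # sample A: minimum luma
    x_b, y_b = max(pairs, key=lambda p: p[0])   # sample B: maximum luma
    if x_b == x_a:                              # degenerate flat template
        return 0.0, sum(chroma_neighbors) / len(chroma_neighbors)
    alpha = (y_b - y_a) / (x_b - x_a)           # slope
    beta = y_a - alpha * x_a                    # offset
    return alpha, beta

def predict_chroma(rec_luma_row, alpha, beta):
    """Apply equation (1): pred_C = alpha * rec_L' + beta, per sample."""
    return [alpha * v + beta for v in rec_luma_row]

# Example: neighbors whose luma spans 30..80 map chroma 12..40 linearly.
alpha, beta = derive_min_max_model([30, 55, 80], [12, 25, 40])
print(predict_chroma([40, 60], alpha, beta))
```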

[0039] For a coding block with a square shape, the above two equations are applied directly. For a non-square coding block, the neighboring samples of the longer boundary are first subsampled to have the same number of samples as for the shorter boundary. FIG. 4B shows the location of the left and above samples and the sample of the current block involved in the CCLM mode.

[0040] Besides the scenario wherein the above template and the left template are used to calculate the linear model coefficients, the two templates also can be used alternatively in the other two LM modes, called LM_A and LM_L modes.

[0041] In LM_A mode, only pixel samples in the above template are used to calculate the linear model coefficients. To get more samples, the above template is extended to the size of (W+W). In LM_L mode, only pixel samples in the left template are used to calculate the linear model coefficients. To get more samples, the left template is extended to the size of (H+H).

[0042] Note that when the upper reference line is at the CTU boundary, only one luma row (which is stored in line buffer for intra prediction) is used to make the down-sampled luma samples.

[0043] For chroma intra mode coding, a total of 8 intra modes are allowed for chroma intra mode coding. Those modes include five traditional intra modes and three cross-component linear model modes (CCLM, LM_A, and LM_L). Chroma mode signaling and derivation process are shown in Table 1. Chroma mode coding directly depends on the intra prediction mode of the corresponding luma block. Since separate block partitioning structure for luma and chroma components is enabled in I slices, one chroma block may correspond to multiple luma blocks. Therefore, for Chroma DM mode, the intra prediction mode of the corresponding luma block covering the center position of the current chroma block is directly inherited.

Table 1 Derivation of chroma prediction mode from luma mode when CCLM is enabled

[0044] Multi-model linear model prediction

[0045] To reduce the cross-component redundancy, a multi-model LM (MMLM) prediction mode is proposed, for which the chroma samples are predicted based on the reconstructed luma samples of the same CU by using two linear models as follows:

$$\mathrm{pred}_C(i,j) = \begin{cases} \alpha_1 \cdot \mathrm{rec}_L'(i,j) + \beta_1, & \mathrm{rec}_L'(i,j) \le \mathit{Threshold} \\ \alpha_2 \cdot \mathrm{rec}_L'(i,j) + \beta_2, & \mathrm{rec}_L'(i,j) > \mathit{Threshold} \end{cases}$$

[0046] where pred_C(i,j) represents the predicted chroma samples in a CU and rec_L'(i,j) represents the downsampled reconstructed luma samples of the same CU. Threshold is calculated as the average value of the neighboring reconstructed luma samples. FIG. 5 shows an example of classifying the neighboring samples into two groups based on the value Threshold. For each group, parameters αᵢ and βᵢ, with i equal to 1 and 2 respectively, are derived from the straight-line relationship between luma values and chroma values of two samples, which are the minimum luma sample A (X_A, Y_A) and the maximum luma sample B (X_B, Y_B) inside the group. Here X_A, Y_A are the x-coordinate (i.e. luma value) and y-coordinate (i.e. chroma value) of sample A, and X_B, Y_B are the x-coordinate and y-coordinate of sample B. The linear model parameters αᵢ and βᵢ are obtained in the same manner as in equation (2), applied within each group.

[0047] Such a method is also called the min-Max method. The division in the equation above could be avoided and replaced by a multiplication and a shift.
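The two-model classification above can be sketched as follows in Python, with a min-Max fit reused inside each group. Names are illustrative, and the fallback for a one-sided classification is an added assumption, not part of the described design.

```python
def fit_group(pairs):
    """min-Max fit of (alpha, beta) inside one group of (luma, chroma) pairs."""
    x_a, y_a = min(pairs, key=lambda p: p[0])
    x_b, y_b = max(pairs, key=lambda p: p[0])
    alpha = (y_b - y_a) / (x_b - x_a) if x_b != x_a else 0.0
    return alpha, y_a - alpha * x_a

def derive_mmlm_models(luma_neighbors, chroma_neighbors):
    """Split neighbors at the average luma (Threshold) and fit each group."""
    threshold = sum(luma_neighbors) / len(luma_neighbors)
    pairs = list(zip(luma_neighbors, chroma_neighbors))
    group1 = [p for p in pairs if p[0] <= threshold]
    group2 = [p for p in pairs if p[0] > threshold]
    if not group1 or not group2:        # assumed fallback: single model
        return threshold, fit_group(pairs), fit_group(pairs)
    return threshold, fit_group(group1), fit_group(group2)

def predict_mmlm(rec_luma, threshold, model1, model2):
    """Route each reconstructed luma sample to its group's linear model."""
    alpha, beta = model1 if rec_luma <= threshold else model2
    return alpha * rec_luma + beta
```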

[0048] For a coding block with a square shape, the above two equations are applied directly. For a non-square coding block, the neighboring samples of the longer boundary are first subsampled to have the same number of samples as for the shorter boundary.

[0049] Besides the scenario wherein the above template and the left template are used together to calculate the linear model coefficients, the two templates also can be used alternatively in the other two MMLM modes, called MMLM_A, and MMLM_L modes.

[0050] In MMLM_A mode, only pixel samples in the above template are used to calculate the linear model coefficients. To get more samples, the above template is extended to the size of (W+W). In MMLM_L mode, only pixel samples in the left template are used to calculate the linear model coefficients. To get more samples, the left template is extended to the size of (H+H).

[0051] Note that when the upper reference line is at the CTU boundary, only one luma row (which is stored in line buffer for intra prediction) is used to make the down-sampled luma samples.

[0052] For chroma intra mode coding, a total of 11 intra modes are allowed for chroma intra mode coding. Those modes include five traditional intra modes and six cross-component linear model modes (CCLM, LM_A, LM_L, MMLM, MMLM_A and MMLM_L). Chroma mode signaling and derivation process are shown in Table 2. Chroma mode coding directly depends on the intra prediction mode of the corresponding luma block. Since separate block partitioning structure for luma and chroma components is enabled in I slices, one chroma block may correspond to multiple luma blocks. Therefore, for Chroma DM mode, the intra prediction mode of the corresponding luma block covering the center position of the current chroma block is directly inherited.

Table 2 Derivation of chroma prediction mode from luma mode when MMLM is enabled

[0053] Adaptive enabling of LM and MMLM for prediction

[0054] MMLM and LM modes may also be used together in an adaptive manner. For MMLM, the two linear models are as follows:

$$\mathrm{pred}_C(i,j) = \begin{cases} \alpha_1 \cdot \mathrm{rec}_L'(i,j) + \beta_1, & \mathrm{rec}_L'(i,j) \le \mathit{Threshold} \\ \alpha_2 \cdot \mathrm{rec}_L'(i,j) + \beta_2, & \mathrm{rec}_L'(i,j) > \mathit{Threshold} \end{cases}$$

[0055] where pred_C(i,j) represents the predicted chroma samples in a CU and rec_L'(i,j) represents the downsampled reconstructed luma samples of the same CU. Threshold can be simply determined based on the luma and chroma average values together with their minimum and maximum values. FIG. 6 shows an example of classifying the neighboring samples into two groups based on the knee point, T, indicated by an arrow. Linear model parameters α₁ and β₁ are derived from the straight-line relationship between luma values and chroma values of two samples, which are the minimum luma sample A (X_A, Y_A) and the Threshold (X_T, Y_T). Linear model parameters α₂ and β₂ are derived from the straight-line relationship between luma values and chroma values of two samples, which are the maximum luma sample B (X_B, Y_B) and the Threshold (X_T, Y_T). Here X_A, Y_A are the x-coordinate (i.e. luma value) and y-coordinate (i.e. chroma value) of sample A, and X_B, Y_B are the x-coordinate and y-coordinate of sample B. The linear model parameters αᵢ and βᵢ for each group, with i equal to 1 and 2 respectively, are obtained according to the following equations:

$$\alpha_1 = \frac{Y_T - Y_A}{X_T - X_A}, \quad \beta_1 = Y_A - \alpha_1 X_A, \qquad \alpha_2 = \frac{Y_B - Y_T}{X_B - X_T}, \quad \beta_2 = Y_T - \alpha_2 X_T$$

[0056] For a coding block with a square shape, the above equations are applied directly. For a non-square coding block, the neighboring samples of the longer boundary are first subsampled to have the same number of samples as for the shorter boundary.

[0057] Besides the scenario wherein the above template and the left template are used together to determine the linear model coefficients, the two templates also can be used alternatively in the other two MMLM modes, called MMLM_A and MMLM_L modes respectively.

[0058] In MMLM_A mode, only pixel samples in the above template are used to calculate the linear model coefficients. To get more samples, the above template is extended to the size of (W+W). In MMLM_L mode, only pixel samples in the left template are used to calculate the linear model coefficients. To get more samples, the left template is extended to the size of (H+H).

[0059] Note that when the upper reference line is at the CTU boundary, only one luma row (which is stored in line buffer for intra prediction) is used to make the down-sampled luma samples.

[0060] For chroma intra mode coding, there is a condition check used to select LM modes (CCLM, LM_A, and LM_L) or multi-model LM modes (MMLM, MMLM_A, and MMLM_L). The condition check is as follows:

$$\begin{cases} \text{LM modes} & \text{if } \big((Y_T - Y_A) < d \;\|\; (Y_B - Y_T) < d\big) \;\&\&\; (\text{block area} > \mathit{BlkSizeThres}_{LM}) \\ \text{MMLM modes} & \text{if } \big((Y_T - Y_A) > d \;\&\&\; (Y_B - Y_T) > d\big) \;\&\&\; (\text{block area} > \mathit{BlkSizeThres}_{MM}) \end{cases} \qquad (7)$$

[0061] where BlkSizeThres_LM represents the smallest block size of LM modes and BlkSizeThres_MM represents the smallest block size of MMLM modes. The symbol d represents a pre-determined threshold value. In an example, d may take a value of 0. In another example, d may take a value of 8.
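A small Python sketch of this condition check follows. The comparison logic mirrors equation (7); the numeric values of BlkSizeThres_LM and BlkSizeThres_MM below are placeholders, not values taken from the text (which only gives d = 0 or d = 8 as examples).

```python
D = 8                      # pre-determined threshold d (0 or 8 in the text)
BLK_SIZE_THRES_LM = 16     # smallest block area for LM modes (placeholder)
BLK_SIZE_THRES_MM = 32     # smallest block area for MMLM modes (placeholder)

def select_model_mode(y_a, y_t, y_b, block_area):
    """Return 'LM', 'MMLM', or None per the condition check in equation (7)."""
    if ((y_t - y_a) < D or (y_b - y_t) < D) and block_area > BLK_SIZE_THRES_LM:
        return "LM"
    if (y_t - y_a) > D and (y_b - y_t) > D and block_area > BLK_SIZE_THRES_MM:
        return "MMLM"
    return None
```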

[0062] For chroma intra mode coding, a total of 8 intra modes are allowed for chroma intra mode coding. Those modes include five traditional intra modes and three cross-component linear model modes. Chroma mode signaling and derivation process are shown in Table 3. It is worth noting that for a given CU, if it is coded under linear model mode, whether it is a conventional single model LM mode or a MMLM mode is determined based on the condition check above. Unlike the case shown in Table 2, there are no separate MMLM modes to be signaled. Chroma mode coding directly depends on the intra prediction mode of the corresponding luma block. Since separate block partitioning structure for luma and chroma components is enabled in I slices, one chroma block may correspond to multiple luma blocks. Therefore, for Chroma DM mode, the intra prediction mode of the corresponding luma block covering the center position of the current chroma block is directly inherited.

Table 3 Derivation of chroma prediction mode from luma mode when CCLM is enabled

[0063] Local illumination compensation

[0064] Local Illumination Compensation (LIC) is an inter prediction technique to model local illumination variation between the current block and its prediction block as a function of that between the current block template and the reference block template. The parameters of the function can be denoted by a scale α and an offset β, which form a linear equation, that is, α·p[x] + β, to compensate illumination changes, where p[x] is a reference sample pointed to by MV at a location x in the reference picture. Since α and β can be derived based on the current block template and the reference block template, no signaling overhead is required for them, except that an LIC flag is signaled for AMVP mode to indicate the use of LIC.

[0065] When LIC applies for a CU, a least square error method is employed to derive the parameters α and β by using the neighboring samples of the current CU and their corresponding reference samples. More specifically, as illustrated in FIG. 7, the subsampled (2:1 subsampling) neighboring samples of the CU and the corresponding samples (identified by motion information of the current CU or sub-CU) in the reference picture are used. The IC parameters are derived and applied for each prediction direction separately.
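The least-square derivation admits a closed form. The sketch below, assuming already-collected template sample lists, fits y ≈ α·x + β between the reference-block template x and the current-block template y; the offset-only fallback for a flat template is an added robustness assumption, not from the text.

```python
def derive_lic_params(x, y):
    """Closed-form least-squares fit of y ~ alpha * x + beta."""
    n = len(x)
    sum_x, sum_y = sum(x), sum(y)
    sum_xx = sum(v * v for v in x)
    sum_xy = sum(a * b for a, b in zip(x, y))
    denom = n * sum_xx - sum_x * sum_x
    if denom == 0:                        # flat template: offset-only model
        return 1.0, (sum_y - sum_x) / n
    alpha = (n * sum_xy - sum_x * sum_y) / denom
    beta = (sum_y - alpha * sum_x) / n
    return alpha, beta

def apply_lic(pred_samples, alpha, beta):
    """Compensate the inter prediction: alpha * p[x] + beta per sample."""
    return [alpha * p + beta for p in pred_samples]
```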

[0066] When LIC is enabled for a picture, an additional CU-level RD check is needed to determine whether LIC is applied or not for a CU. When LIC is enabled for a CU, mean-removed sum of absolute difference (MR-SAD) and mean-removed sum of absolute Hadamard-transformed difference (MR-SATD) are used, instead of SAD and SATD, for integer pel motion search and fractional pel motion search, respectively.

[0067] To reduce the encoding complexity, the following encoding scheme is applied in the JEM.

[0068] LIC is disabled for the entire picture when there is no obvious illumination change between a current picture and its reference pictures. To identify this situation, histograms of a current picture and every reference picture of the current picture are calculated at the encoder. If the histogram difference between the current picture and every reference picture of the current picture is smaller than a given threshold, LIC is disabled for the current picture; otherwise, LIC is enabled for the current picture.
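A minimal sketch of this picture-level decision, assuming 8-bit samples flattened into lists; the SAD-style histogram distance and the threshold value are illustrative assumptions (the text does not specify either).

```python
def histogram(samples, bins=256):
    """Count sample values (256 bins assumes 8-bit video)."""
    h = [0] * bins
    for v in samples:
        h[v] += 1
    return h

def lic_enabled_for_picture(current, references, threshold=1000):
    """Enable LIC only if some reference differs enough from the current picture."""
    h_cur = histogram(current)
    for ref in references:
        diff = sum(abs(a - b) for a, b in zip(h_cur, histogram(ref)))
        if diff >= threshold:
            return True     # obvious illumination change: keep LIC enabled
    return False            # every reference is similar: disable LIC
```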

[0069] Although the existing LIC can efficiently model local illumination variation, its performance can still be improved. On the other hand, the current LIC design also introduces significant complexity to both encoder and decoder design. The tradeoff between its implementation complexity and its coding efficiency benefit needs to be further improved.

[0070] In this disclosure, several methods are proposed to further improve the LIC coding efficiency or simplify the existing LIC design to facilitate hardware implementations. It is noted that the invented methods could be applied independently or jointly.

[0071] In general, the main aspects of the proposed technologies in the disclosure can be summarized as follows:

[0072] 1. To simplify the computational complexity of the LIC, it is proposed to generate model parameters for LIC with a more limited number of reference samples to reduce the calculation needed.

[0073] 2. To improve the coding efficiency, an adaptive LIC scheme is proposed. Compared to the existing method where LIC is fixedly applied with one linear model, the proposed algorithm adaptively adjusts the number of linear models.

[0074] Illumination compensation (IC) parameter calculation with a more limited number of reference samples

[0075] In this disclosure, it is proposed to generate model parameters for LIC with a more limited number of reference samples to reduce the calculation needed. FIG. 8 illustrates a flow diagram of a method for video encoding according to an embodiment of the present disclosure. In the first embodiment of this disclosure, as illustrated with reference to FIG. 8, at 802, it determines a mode of LIC for video encoding to be enabled. As described previously, in an example, if the histogram difference between the current picture and every reference picture of the current picture is smaller than a given threshold, the mode of LIC is disabled for the current picture; otherwise, the mode of LIC is enabled for the current picture. Additionally, in an example, the mode of LIC can be determined adaptively by a condition check based on block size. At 804, it calculates LIC parameters for the mode of LIC with a limited number of reference sample pairs. A reference sample pair refers to a luma reference sample and its corresponding chroma reference sample. In an example, only half of the reference sample pairs currently used in determining the LIC parameters are used. For example, those reference sample pairs can be selected in a spatially further down-sampled manner by taking one out of every two neighboring reference sample pairs into consideration in deriving the LIC parameters, as sketched below. At 806, it enables the mode of LIC for the video encoding with the calculated LIC parameters to perform LIC for inter prediction to generate a prediction residual. As described previously, the prediction residual is formed by subtracting a prediction block from the current block, which reflects local illumination variation between the current block and the prediction block. The prediction residual will then be added back to the prediction block to form a reconstructed block in decoding, as described with reference to FIG. 9 below. At 808, it forms and outputs a bit-stream encoded with the prediction residual and prediction mode information indicating the mode of LIC. The bit-stream will be sent to a decoder for video decoding.
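The spatial 2:1 further down-sampling mentioned above amounts to keeping alternate pairs. A short Python sketch (the pair ordering along the block boundary is assumed):

```python
def subsample_pairs(reference_pairs):
    """Keep one out of every two neighboring (luma, chroma) reference pairs."""
    return reference_pairs[::2]

# Example: 8 boundary pairs reduce to 4 before LIC parameter derivation.
pairs = [(30, 12), (34, 14), (40, 18), (44, 20),
         (50, 24), (55, 26), (60, 30), (66, 33)]
print(subsample_pairs(pairs))
```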

[0076] FIG. 9 illustrates a flow diagram of a method for video decoding according to an embodiment of the present disclosure. At 902, it receives and decodes a bit-stream to obtain a prediction residual and prediction mode information indicating a mode of Local Illumination Compensation (LIC) to be enabled. In an example, the bit-stream is encoded as described with reference to FIG. 8. As described above, for example, LIC parameters for the mode of LIC are calculated with a limited number of reference sample pairs, and wherein a reference sample pair refers to a luma reference sample and a chroma reference sample. The prediction residual is formed by subtracting a prediction block from the current block during the video encoding, which reflects local illumination variation between the current block and prediction block. At 904, it forms a residual block from the prediction residual and forms a prediction block from the prediction mode information. At 906, it reconstructs a reconstructed block from the residual block and prediction block. As described above, for example, the residual block and prediction block are added together to form the reconstructed block.

[0077] In the second embodiment of this disclosure, the maximum number of reference sample pairs used in calculating the LIC parameters is limited to a pre-determined value based on the size and shape of corresponding chroma blocks. Four different examples (labelled as Method 1, 2, 3, and 4) are provided in Table 4, where the pre-determined value can be 2, 4 and/or 8 depending on the size and shape of the chroma block of the current CU.

Table 4 Number of sample pairs for LIC parameter calculation in JEM and proposed method

[0078] FIG. 10 shows an example location of reference sample in method 1. FIG. 11 shows an example location of reference sample in method 2. FIG. 12 shows an example location of reference sample in method 3. FIG. 13 shows an example location of reference sample in method 4.
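A hypothetical Python sketch of the cap described in the second embodiment. Table 4's exact size/shape mapping is not reproduced above, so the rule below (2, 4 or 8 pairs by chroma block area and squareness) is an assumed stand-in for illustration only.

```python
def max_pairs_for_block(width, height):
    """Assumed mapping from chroma block size/shape to the pair cap (2/4/8)."""
    area = width * height
    if area <= 16:
        return 2               # smallest chroma blocks (assumed)
    if area <= 64 or width != height:
        return 4               # small or non-square blocks (assumed)
    return 8                   # larger square blocks (assumed)

def cap_pairs(reference_pairs, width, height):
    """Limit the reference sample pairs used for LIC parameter calculation."""
    return reference_pairs[:max_pairs_for_block(width, height)]
```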

[0079] In the third embodiment of this disclosure, only blocks with a block size equal to or larger than a certain threshold may be used in forming the inter prediction of the LIC. In an example, the maximum number of reference sample pairs is limited to 8 and the minimum block size is limited to 8 or 16.

[0080] Multi-model Local illumination compensation

[0081] In the fourth embodiment of this disclosure, it is proposed to add one or more Multi-model LIC (MMLIC) modes. In each MMLIC mode, the reference sample pairs are classified into a number of groups and the LIC parameters are calculated for each group. In an example, the reconstructed neighboring samples as the reference sample pairs are classified into two classes using a threshold which is the average of the neighboring reconstructed luma samples. The linear model of each class is derived using the Least-Mean-Square (LMS) method.

[0082] For example, the samples are predicted based on the reference samples pointed to by MV at a location x on the reference picture by using two linear models as follows:

$$\mathrm{pred}(i,j) = \begin{cases} \alpha_1 \cdot \mathrm{rec}'(i,j) + \beta_1, & \mathrm{rec}'(i,j) \le \mathit{Threshold} \\ \alpha_2 \cdot \mathrm{rec}'(i,j) + \beta_2, & \mathrm{rec}'(i,j) > \mathit{Threshold} \end{cases}$$

[0083] where pred(i,j) represents the predicted samples and rec'(i,j) represents the reference sample pointed to by MV at a location x on the reference picture. Threshold is calculated as the average value of the reconstructed neighbouring samples. FIG. 5 shows an example of classifying the neighbouring samples into two groups based on the value Threshold. For each group, parameters αᵢ and βᵢ, with i equal to 1 and 2 respectively, are derived from the straight-line relationship between luma values and chroma values of two samples, which are the minimum luma sample A (X_A, Y_A) and the maximum luma sample B (X_B, Y_B) inside the group. Here X_A, Y_A are the x-coordinate (i.e. luma value) and y-coordinate (i.e. chroma value) of sample A, and X_B, Y_B are the x-coordinate and y-coordinate of sample B. The linear model parameters αᵢ and βᵢ are obtained in the same manner as in equation (2), applied within each group.

[0084] For a coding block with a square shape, the above two equations are applied directly. For a non-square coding block, the neighboring samples of the longer boundary are first subsampled to have the same number of samples as for the shorter boundary.

[0085] Since α and β can be derived based on the current block template and the reference block template, no signaling overhead is required for them, except that an MMLIC flag is signaled to indicate the use of MMLIC.

[0086] Besides the scenario wherein the above template and the left template are used together to calculate the linear model coefficients, the two templates also can be used alternatively in the other two MMLIC modes, called MMLIC_A and MMLIC_L modes.

[0087] In MMLIC_A mode, only pixel samples in the above template are used to calculate the linear model coefficients. To get more samples, the above template is extended to the size of (W+W). In MMLIC_L mode, only pixel samples in the left template are used to calculate the linear model coefficients. To get more samples, the left template is extended to the size of (H+H).

[0088] Only use MMLIC for the parameter derivation in the conventional LIC mode

[0089] In the fifth embodiment of this disclosure, it is proposed to only allow one or more MMLIC modes and disable the conventional LIC modes that are based on a single model, such that the LIC modes are replaced with the MMLIC modes. In this case, the condition check used to select LIC modes or multi-model LIC modes in the manner described in the section of "Adaptive enabling of LM and MMLM for prediction" is no longer needed and multi-model LIC modes are always used. In particular, the condition check described in equation (7) in that section can be applied for adaptively selecting the LIC modes and MMLIC modes.

[0090] Block based pixel classification for model selection in MMLIC mode

[0091] In the sixth embodiment of this disclosure, it is proposed to use block-based pixel classification to select different models in MMLIC mode. Currently, such classification is pixel based, i.e., each reconstructed luma sample is checked against a classification threshold and, based on the comparison result, a corresponding LIC model is selected for that pixel. According to this embodiment of the disclosure, such classification is done on a block level, with the classification decision applied to all pixels in the block. In an example, the block size may be NxM, wherein N and M are positive numbers such as 2 or 4. Taking N and M both equal to 2 as an example, the classification in this case is done on a 2x2 block level. As a result, the same linear model would be selected for all four pixels in the block.

[0092] According to the disclosure, classification may be performed using different methods, involving all or just some of the samples in the block. For example, the average of all samples in each NxM block may be used to decide which linear model to use for the block. In another example, for simplification, a classification may be made by simply checking one sample from each block to determine which linear model to use for the block. The one sample may be the top-left sample of each NxM block.
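Both block-level variants can be sketched briefly; the helper below classifies one N x M block (any 2-D list) either by its sample average or by its top-left sample, returning a single model index applied to every pixel of the block. Names are illustrative.

```python
def classify_block(block, threshold, use_average=True):
    """One classification decision per block, applied to all its pixels."""
    if use_average:
        total = sum(sum(row) for row in block)
        count = sum(len(row) for row in block)
        key = total / count        # average of all samples in the block
    else:
        key = block[0][0]          # top-left sample only
    return 0 if key <= threshold else 1   # model index for the whole block

# Example: a 2x2 block classified once, so all four pixels share one model.
print(classify_block([[40, 42], [44, 46]], threshold=50))
```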

[0093] Three-model based local illumination compensation

[0094] In the seventh embodiment of the disclosure, it is proposed to use three parameter sets in local illumination compensation mode to compensate illumination changes. In particular, the reference sample pairs for calculating LIC parameters for the local illumination compensation mode are classified into three groups. In an embodiment, the parameters of the function can be denoted by a scale α and an offset β, which form a linear equation, that is, α·p[x]+β, and the samples are predicted by using three linear models as follows:

$$\mathrm{pred}(i,j) = \begin{cases} \alpha_1 \cdot \mathrm{rec}_L'(i,j) + \beta_1, & \mathrm{rec}_L'(i,j) \le \mathit{Threshold}_1 \\ \alpha_2 \cdot \mathrm{rec}_L'(i,j) + \beta_2, & \mathit{Threshold}_1 < \mathrm{rec}_L'(i,j) \le \mathit{Threshold}_2 \\ \alpha_3 \cdot \mathrm{rec}_L'(i,j) + \beta_3, & \mathrm{rec}_L'(i,j) > \mathit{Threshold}_2 \end{cases} \qquad (10)$$

[0095] where pred(i,j) represents the predicted luma samples and rec_L'(i,j) represents the reference sample pointed to by MV at a location (i,j) on the reference picture. In an embodiment, Threshold₁ and Threshold₂ can be calculated from the maximum and minimum values of the neighbouring reconstructed luma samples (denoted as Lmax and Lmin respectively in the following). In an example, Threshold₁ and Threshold₂ can be calculated as follows:

$$\mathit{Threshold}_1 = \tfrac{1}{3}\,\mathit{Lmax} + \tfrac{2}{3}\,\mathit{Lmin}, \qquad \mathit{Threshold}_2 = \tfrac{2}{3}\,\mathit{Lmax} + \tfrac{1}{3}\,\mathit{Lmin} \qquad (11)$$

[0096] In the eighth embodiment of the disclosure, Threshold₁ and Threshold₂ can be calculated from the average value of the neighbouring reconstructed luma samples. In an example, all neighbouring reconstructed luma samples are separated into two groups based on the average value of the neighbouring reconstructed luma samples. Luma samples with values smaller than the average value belong to one group, and those with values not smaller than the average value belong to another group. Threshold₁ and Threshold₂ can then be calculated as the average value of each group. With the values of Threshold₁ and Threshold₂ determined, the neighbouring reconstructed luma samples can be separated into three groups depending on the relationship between the luma value and the values of Threshold₁ and Threshold₂. For example, the first group contains the reconstructed luma samples with values ranging from the minimum luma sample value to Threshold₁. The second group contains the reconstructed luma samples with values ranging from Threshold₁ to Threshold₂. The third group contains the remaining reconstructed luma samples.
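The thresholds of equation (11) and the resulting three-way routing can be sketched as follows; function names are illustrative and the boundary handling (<= versus <) is an assumption.

```python
def three_model_thresholds(l_max, l_min):
    """Equation (11): thresholds from the min/max neighboring luma values."""
    t1 = (1 * l_max + 2 * l_min) / 3    # Threshold1
    t2 = (2 * l_max + 1 * l_min) / 3    # Threshold2
    return t1, t2

def classify_three(sample, t1, t2):
    """Route a luma sample to one of the three linear models of equation (10)."""
    if sample <= t1:
        return 0    # first group: (alpha1, beta1)
    if sample <= t2:
        return 1    # second group: (alpha2, beta2)
    return 2        # third group: (alpha3, beta3)

# Example: with Lmin=20 and Lmax=80, the thresholds are 40 and 60.
t1, t2 = three_model_thresholds(80, 20)
print(t1, t2, classify_three(50, t1, t2))
```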

[0097] With samples divided into three groups, linear model parameters may be derived for each group respectively. In an example, parameters αᵢ and βᵢ are separately derived from the straight-line relationship between luma values and chroma values of two samples, which are the minimum value luma sample and the maximum value luma sample inside each of the three groups. In another example, linear model parameters α₁ and β₁ are derived from the straight-line relationship between luma values and chroma values of two samples, which are the minimum value luma sample and Threshold₁. Linear model parameters α₂ and β₂ are derived from the straight-line relationship between luma values and chroma values of two samples, which are Threshold₁ and Threshold₂. Linear model parameters α₃ and β₃ are derived from the straight-line relationship between luma values and chroma values of two samples, which are the maximum luma sample and Threshold₂.

[0098] Model classification threshold based on reconstructed luma samples inside the current CU

[0099] In the ninth embodiment of this disclosure, it is proposed to use the reconstructed luma samples inside a current CU to calculate the model classification threshold in the cross-component linear model. In an embodiment, the threshold is calculated as the average value of the reconstructed luma samples inside a CU. In another embodiment, the threshold is calculated as the average value of the reconstructed luma samples inside the CU and the reconstructed luma samples neighboring to the CU.

[00100] Model classification threshold based on minimum and maximum luma sample

[00101] In the tenth embodiment of this disclosure, it is proposed to use the minimum and maximum samples to derive the model classification threshold. In an embodiment, the threshold is calculated as (Max + min)/N, where Max is the sample value of the maximum sample, min is the sample value of the minimum sample, and N is any value (e.g., 2).

[00102] Position-based sample classification

[00103] In the eleventh embodiment, the samples in the template and reference template are divided into two parts directly: the left/above templates and the left/above reference templates, which can be illustrated in FIG. 14. The left template and left reference template are used to derive linear model 1, denoted as f₁, and the above template and above reference template are used to derive linear model 2, denoted as f₂. When LIC is performed for the current sample, linear model 1 and linear model 2 are applied to the reference sample to generate two predicted samples, pred₁(i,j) = f₁(ref(i,j)) and pred₂(i,j) = f₂(ref(i,j)).

[00104] The two predicted samples are then weighted to generate the final prediction for the current sample; the weighting factors are derived based on the distances to the above and left templates. The final prediction sample can be derived as follows:

$$\mathrm{pred}(i,j) = w_1 \cdot \mathrm{pred}_1(i,j) + w_2 \cdot \mathrm{pred}_2(i,j)$$

where w₁ and w₂ are the weighting factors derived from the distances of the current sample to the left and above templates respectively.
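A hedged Python sketch of this position-based blending: the text derives the weights from the distances to the left and above templates but does not give the exact formula, so the inverse-distance weights below are an assumption for illustration.

```python
def weighted_prediction(pred1, pred2, i, j):
    """Blend the left-model (f1) and above-model (f2) predictions for the
    sample at row i, column j; closer boundaries get larger weights."""
    w_left = 1.0 / (j + 1)     # near the left boundary -> favor f1
    w_above = 1.0 / (i + 1)    # near the above boundary -> favor f2
    return (w_left * pred1 + w_above * pred2) / (w_left + w_above)

# Example: a sample adjacent to the left boundary leans on the left model.
print(weighted_prediction(pred1=100.0, pred2=120.0, i=3, j=0))
```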

[00105] It should be noted that the above-mentioned embodiments focus on the motion compensation process of LIC. In addition, two methods are proposed when applying this embodiment into motion compensation.

• In the first method, the proposed MMLIC is used to replace the current LIC.

• In the second method, the proposed MMLIC is used as an alternative to the current LIC. In other words, for each block the proposed MMLIC or the LIC is selected and a flag is signaled to indicate whether LIC or MMLIC is used for the block.

[00106] FIG. 15 illustrates a block diagram of a computing device 20 for practicing an embodiment of the LIC for the video coding according to an embodiment of the present disclosure. The computing device 20 includes one or more processors 22, volatile memory 24 (e.g., random access memory (RAM)), non-volatile memory 30, user interface (UI) 38, one or more communications interfaces 26, and a communications bus 48.

[00107] The non-volatile memory 30 may include: one or more hard disk drives (HDDs) or other magnetic or optical storage media; one or more solid state drives (SSDs), such as a flash drive or other solid-state storage media; one or more hybrid magnetic and solid-state drives; and/or one or more virtual storage volumes, such as a cloud storage, or a combination of such physical storage volumes and virtual storage volumes or arrays thereof.

[00108] The user interface 38 may include a graphical user interface (GUI) 40 (e.g., a touchscreen, a display, etc.) and one or more input/output (I/O) devices 42 (e.g., a mouse, a keyboard, a microphone, one or more speakers, one or more cameras, one or more biometric scanners, one or more environmental sensors, and one or more accelerometers, etc.).

[00109] The non-volatile memory 30 stores an operating system 32, one or more applications 34, and data 36 such that, for example, computer instructions of the operating system 32 and/or the applications 34 are executed by processor(s) 22 out of the volatile memory 24. In some embodiments, the volatile memory 24 may include one or more types of RAM and/or a cache memory that may offer a faster response time than a main memory. Data may be entered using an input device of the GUI 40 or received from the I/O device(s) 42. Various elements of the computing device 20 may communicate via the communications bus 48.

[00110] The processor(s) 22 may be implemented by one or more programmable processors to execute one or more executable instructions, such as a computer program, to perform the functions of the system. As used herein, the term “processor” describes circuitry that performs a function, an operation, or a sequence of operations. The function, operation, or sequence of operations may be hard coded into the circuitry or soft coded by way of instructions held in a memory device and executed by the circuitry. A processor may perform the function, operation, or sequence of operations using digital values and/or using analog signals.

[00111] In some embodiments, the processor can be embodied in one or more application specific integrated circuits (ASICs), microprocessors, digital signal processors (DSPs), graphics processing units (GPUs), microcontrollers, field programmable gate arrays (FPGAs), programmable logic arrays (PLAs), multi-core processors, or general-purpose computers with associated memory.

[00112] The processor 22 may be analog, digital or mixed-signal. In some embodiments, the processor 22 may be one or more physical processors, or one or more virtual (e.g., remotely located or cloud) processors. A processor including multiple processor cores and/or multiple processors may provide functionality for parallel, simultaneous execution of instructions or for parallel, simultaneous execution of one instruction on more than one piece of data.

[00113] The communications interfaces 26 may include one or more interfaces to enable the computing device 20 to access a computer network such as a Local Area Network (LAN), a Wide Area Network (WAN), a Personal Area Network (PAN), or the Internet through a variety of wired and/or wireless connections, including cellular connections.

[00114] The description of the present disclosure has been presented for purposes of illustration and is not intended to be exhaustive or limited to the present disclosure. Many modifications, variations, and alternative implementations will be apparent to those of ordinary skill in the art having the benefit of the teachings presented in the foregoing descriptions and the associated drawings.

[00115] Unless specifically stated otherwise, an order of steps of the method according to the present disclosure is only intended to be illustrative, and the steps of the method according to the present disclosure are not limited to the order specifically described above, but may be changed according to practical conditions. In addition, at least one of the steps of the method according to the present disclosure may be adjusted, combined or deleted according to practical requirements.

[00116] The examples were chosen and described in order to explain the principles of the disclosure and to enable others skilled in the art to understand the disclosure for various implementations and to best utilize the underlying principles and various implementations with various modifications as are suited to the particular use contemplated. Therefore, it is to be understood that the scope of the disclosure is not to be limited to the specific examples of the implementations disclosed and that modifications and other implementations are intended to be included within the scope of the present disclosure.