


Title:
LOW-COMPLEXITY LOSSLESS COMPRESSION OF ASYNCHRONOUS EVENT SEQUENCES
Document Type and Number:
WIPO Patent Application WO/2024/088515
Kind Code:
A1
Abstract:
This disclosure relates to event sensors and compressing an asynchronous event sequence of such an event sensor. The compression of the event sequence is lossless and of low complexity. A device for compressing an asynchronous event sequence is configured to obtain the event sequence, which comprises a plurality of events collected by the event sensor over a time period. Each event comprises spatial information on the event sensor, a polarity information, and a timestamp within the time period. The device generates a plurality of subsequences from the asynchronous event sequence, wherein each subsequence is related to none, one, or more events having the same timestamp. Each subsequence comprises the spatial information and the polarity information of the related events. Then, the device encodes individually each subsequence into a bitstream, and collects the plurality of bitstreams in a file.

Inventors:
SCHIOPU IONUT (SE)
BILCU RADU CIPRIAN (SE)
Application Number:
PCT/EP2022/079737
Publication Date:
May 02, 2024
Filing Date:
October 25, 2022
Assignee:
HUAWEI TECH CO LTD (CN)
SCHIOPU IONUT (SE)
International Classes:
H03M7/30; G06T7/136
Other References:
KHAN NABEEL ET AL: "Time-Aggregation-Based Lossless Video Encoding for Neuromorphic Vision Sensor Data", IEEE INTERNET OF THINGS JOURNAL, IEEE, USA, vol. 8, no. 1, 8 July 2020 (2020-07-08), pages 596 - 609, XP011827528, DOI: 10.1109/JIOT.2020.3007866
SCHIOPU IONUT ET AL: "Low-Complexity Lossless Coding of Asynchronous Event Sequences for Low-Power Chip Integration", SENSORS, vol. 22, no. 24, 19 December 2022 (2022-12-19), pages 10014, XP093020959, DOI: 10.3390/s222410014
ANONYMOUS: "python - Grouping timeseries with same time stamp - Stack Overflow", 1 June 2020 (2020-06-01), pages 1 - 3, XP093049772, Retrieved from the Internet [retrieved on 20230525]
ANONYMOUS: "sql - MySQL - All entries with same timestamp in one row - Stack Overflow", 1 June 2015 (2015-06-01), pages 1 - 2, XP093049762, Retrieved from the Internet [retrieved on 20230525]
ANONYMOUS: "python - Joining time series data from multiple sources, subsetted by the least comprehensive dataset - Stack Overflow", 1 November 2019 (2019-11-01), pages 1 - 3, XP093049924, Retrieved from the Internet [retrieved on 20230526]
SCHIOPU IONUT ET AL: "Lossless Compression of Event Camera Frames", IEEE SIGNAL PROCESSING LETTERS, vol. 29, 1 January 2022 (2022-01-01), USA, pages 1779 - 1783, XP055970700, ISSN: 1070-9908, DOI: 10.1109/LSP.2022.3196599
KHAN NABEEL ET AL: "Lossless Compression of Data From Static and Mobile Dynamic Vision Sensors-Performance and Trade-Offs", IEEE ACCESS, IEEE, USA, vol. 8, 22 May 2020 (2020-05-22), pages 103149 - 103163, XP011792194, DOI: 10.1109/ACCESS.2020.2996661
Attorney, Agent or Firm:
HUAWEI EUROPEAN IPR (DE)
Claims:
CLAIMS

1. A device (100) for compressing an asynchronous event sequence (101) of an event sensor (102), the device (100) being configured to obtain the asynchronous event sequence (101), which comprises a plurality of events collected by the event sensor (102) over a time period, wherein each event comprises spatial information on the event sensor (102), a polarity information, and a timestamp within the time period; generate a plurality of subsequences (103) from the asynchronous event sequence (101), wherein each subsequence (103) is related to none, one, or more events having the same timestamp, each subsequence (103) comprising the spatial information and the polarity information of the related events; encode individually each subsequence (103) into a bitstream (104); and collect the plurality of bitstreams (104) in a file (105).

2. The device (100) according to claim 1, wherein to encode a subsequence (103) into a bitstream (104), the device (100) is configured to perform a threshold-based range partitioning algorithm on the subsequence (103).

3. The device (100) according to claim 2, wherein, for each related event of the subsequence (103), the threshold-based range partitioning algorithm is configured to individually encode the spatial information in separate threshold-based range partitioning steps, and write the polarity information of the subsequence (103).

4. The device (100) according to claim 3, wherein the spatial information of each event comprises a first coordinate and a second coordinate, and each threshold-based range partitioning step is configured to encode individually the first coordinate and the second coordinate of the spatial information in a separate threshold-based range partitioning step.

5. The device (100) according to one of the claims 2 to 4, wherein the threshold-based range partitioning algorithm is further configured to encode a number of events in each subsequence (103) based on a separate threshold-based range partitioning step.

6. The device (100) according to one of the claims 3 to 5, wherein, for each subsequence (103), the threshold-based range partitioning algorithm is configured to determine a prediction of the number of events related to the subsequence (103), and to encode individually the number of events based on the respective prediction; determine a prediction of the spatial information of each event, and to encode individually the spatial information based on the respective prediction of the event; and write the polarity information of the event.

7. The device (100) according to one of the claims 3 to 6, wherein the threshold-based range partitioning algorithm is a triple-threshold-based range partitioning algorithm configured with three thresholds.

8. The device (100) according to one of the claims 1 to 7, wherein each spatial information comprises a first coordinate and a second coordinate, and the device (100) is further configured to arrange the spatial and polarity information of the related events of each subsequence (103) in an ascending order in the subsequence (103) with respect to the first coordinate.

9. The device (100) according to claim 8, further configured to arrange the spatial and polarity information of the related events of each subsequence (103) in ascending order in the subsequence (103) with respect to the second coordinate, if the first coordinate is equal.

10. The device (100) according to one of the claims 1 to 9, wherein when collecting the plurality of bitstreams (104) in the file (105), the device (100) is further configured to form one or more bitstream packages (201) of bitstreams (104), wherein each bitstream package (201) is related to a respective time-window and includes all bitstreams (104) that are related to timestamps within that respective time-window.

11. The device (100) according to claim 10, further configured to generate a header bitstream (202) and add it to the file (105); wherein the header bitstream (202) comprises respective lengths of the bitstream packages (201).

12. The device (100) according to claim 3 and claim 11, further configured to determine a prediction of the respective lengths of the bitstream packages (201) and to encode the respective lengths of the bitstream packages (201) in a separate threshold-based range partitioning step, before adding them to the header bitstream (202).

13. A method (1000) for compressing an asynchronous event sequence (101) of an event sensor (102), the method comprising receiving (1001) the asynchronous event sequence (101), which comprises a plurality of events collected over a time period, wherein each event comprises spatial information on the event sensor (102), a polarity information, and a timestamp within the time period; generating (1002) a plurality of subsequences (103) from the asynchronous event sequence (101), wherein each subsequence (103) is related to none, one, or more of the events having the same timestamp, each subsequence (103) comprising the spatial information and the polarity information of the related events; encoding (1003) individually each subsequence (103) into a bitstream (104); and collecting (1004) the plurality of bitstreams (104) in a file (105).

14. A computer program comprising instructions which, when the program is executed by a computer, cause the computer to perform the method (1000) according to claim 13.

Description:
LOW-COMPLEXITY LOSSLESS COMPRESSION OF ASYNCHRONOUS EVENT SEQUENCES

TECHNICAL FIELD

The present disclosure relates to event sensors. The disclosure provides a device and method for compressing an asynchronous event sequence of an event sensor. The compression of the event sequence is lossless and of low complexity.

BACKGROUND

Research breakthroughs in the neuromorphic engineering domain have enabled the development of a new type of sensor, called an event sensor. The event sensor is bio-inspired by the human brain, as each sensor pixel works as a separate neuron. In contrast to a conventional sensor, in which all pixels capture the incoming light intensity at the same time, the event sensor reports only incoming light intensity changes, at any timestamp and at any pixel position, by triggering asynchronous events. Since each pixel independently detects and reports only the change in brightness, the event sensor introduces a new paradigm for capturing visual data.

The event sensor provides a high temporal resolution, as asynchronous events can be triggered at a timestamp distance of only 1 µs. This is made possible by a remarkable event sensor feature, namely capturing all dynamic information without unnecessary static information (e.g., background), which is extremely useful for capturing high-speed motion scenes.

Two types of event sensors are available: the dynamic vision sensor (DVS) and the dynamic and active-pixel vision sensor (DAVIS), which comprises a DVS and an active pixel sensor (APS). These event sensors are now widely used in the computer vision domain, in which RGB and event-based solutions already provide an improved performance for applications such as deblurring, optical flow estimation, and many others. However, at high frame rates, the asynchronous event sequences captured by an event sensor reach high bitrate levels when they are stored using the raw sensor representation of 8 bytes (B) per event.
For better pre-processing of event sensor data, for example on low-power event-processing chips, low-complexity and efficient event coding and compression solutions are needed, which store the event information in a lossless manner.

A first exemplary solution, an asynchronous event sequence compression designed for system-on-chip (SoC) integration, exploits spatial and temporal characteristics of spike location information for compression. However, this solution employs rather complex coding techniques. A second exemplary solution, a (synchronous) event frame compression designed for SoC integration, processes DVS data by aggregating events at fixed time intervals. However, the performance of this solution depends on the performance of the selected video codec. A third exemplary solution, an event frame compression designed for SoC integration, introduces an efficient context-based lossless image codec for encoding event camera frames. However, this solution uses a performance-oriented lossless compression codec, which is complex and designed to encode accumulated event frames.

SUMMARY

In view of the above, an objective of this disclosure is to provide a lossless compression method for encoding an asynchronous event sequence acquired by an event sensor. Another objective is to employ only low-complexity coding techniques for the compression method, in order to allow integration into low-power event-processing chips. That is, a low-complexity lossless compression method is desired for a memory-efficient representation of asynchronous event sequences.

These and other objectives are achieved by this disclosure as described in the independent claims. Advantageous implementations are further defined in the dependent claims.
A first aspect of this disclosure provides a device for compressing an asynchronous event sequence of an event sensor, the device being configured to obtain the asynchronous event sequence, which comprises a plurality of events collected by the event sensor over a time period, wherein each event comprises spatial information on the event sensor, a polarity information, and a timestamp within the time period; generate a plurality of subsequences from the asynchronous event sequence, wherein each subsequence is related to none, one, or more events having the same timestamp, each subsequence comprising the spatial information and the polarity information of the related events; encode individually each subsequence into a bitstream; and collect the plurality of bitstreams in a file.

The device of the first aspect is able to perform a lossless compression to encode the asynchronous event sequence provided by the event sensor. The device of the first aspect may thereby use only low-complexity coding techniques, and may thus be integrated into low-power event-processing chips, or the like.

In an implementation form of the first aspect, to encode a subsequence into a bitstream, the device is configured to perform a threshold-based range partitioning algorithm on the subsequence. The threshold-based range partitioning algorithm enables a low-complexity lossless compression.

In an implementation form of the first aspect, for each related event of the subsequence, the threshold-based range partitioning algorithm is configured to individually encode the spatial information in separate threshold-based range partitioning steps, and write the polarity information of the subsequence.
In an implementation form of the first aspect, the spatial information of each event comprises a first coordinate and a second coordinate, and each threshold-based range partitioning step is configured to encode individually the first coordinate and the second coordinate of the spatial information in a separate threshold-based range partitioning step.

In an implementation form of the first aspect, the threshold-based range partitioning algorithm is further configured to encode a number of events in each subsequence based on a separate threshold-based range partitioning step.

In an implementation form of the first aspect, for each subsequence, the threshold-based range partitioning algorithm is configured to determine a prediction of the number of events related to the subsequence, and to encode individually the number of events based on the respective prediction; determine a prediction of the spatial information of each event, and to encode individually the spatial information based on the respective prediction of the event; and write the polarity information of the event.

In an implementation form of the first aspect, the threshold-based range partitioning algorithm is a triple-threshold-based range partitioning algorithm configured with three thresholds. An example algorithm with three thresholds has proven to be a very good compromise between compression complexity and efficiency.

In an implementation form of the first aspect, each spatial information comprises a first coordinate and a second coordinate, and the device is further configured to arrange the spatial and polarity information of the related events of each subsequence in an ascending order in the subsequence with respect to the first coordinate.
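The prediction-based encoding of the per-subsequence event counts described above can be illustrated with a minimal sketch. The previous-count predictor used here is an assumption for illustration only; the disclosure states merely that a prediction is determined, not how:

```python
# Illustrative sketch (assumed predictor) of prediction-based coding of
# per-subsequence event counts: predict each count from the previous one
# and keep only the residuals, which a TTP-style step can then represent
# compactly; the decoder rebuilds the counts from the residuals.
def count_residuals(counts):
    """Predict each count by the previous count (initial prediction 0)
    and return the residuals to be encoded."""
    residuals, prediction = [], 0
    for n in counts:
        residuals.append(n - prediction)
        prediction = n
    return residuals

def counts_from_residuals(residuals):
    """Decoder side: rebuild the counts from the residuals."""
    counts, prediction = [], 0
    for r in residuals:
        prediction = prediction + r
        counts.append(prediction)
    return counts

counts = [5, 6, 6, 2, 0, 3]
res = count_residuals(counts)            # [5, 1, 0, -4, -2, 3]
assert counts_from_residuals(res) == counts
```

Residuals cluster near zero when consecutive timestamps carry similar numbers of events, which is what makes the threshold-based range partitioning of the residual effective.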
In an implementation form of the first aspect, the device is further configured to arrange the spatial and polarity information of the related events of each subsequence in ascending order in the subsequence with respect to the second coordinate, if the first coordinate is equal.

In an implementation form of the first aspect, when collecting the plurality of bitstreams in the file, the device is further configured to form one or more bitstream packages of bitstreams, wherein each bitstream package is related to a respective time-window and includes all bitstreams that are related to timestamps within that respective time-window.

In an implementation form of the first aspect, the device is further configured to generate a header bitstream and add it to the file; wherein the header bitstream comprises respective lengths of the bitstream packages.

In an implementation form of the first aspect, the device is further configured to determine a prediction of the respective lengths of the bitstream packages and to encode the respective lengths of the bitstream packages in a separate threshold-based range partitioning step, before adding them to the header bitstream.

A second aspect of this disclosure provides a method for compressing an asynchronous event sequence of an event sensor, the method comprising receiving the asynchronous event sequence, which comprises a plurality of events collected over a time period, wherein each event comprises spatial information on the event sensor, a polarity information, and a timestamp within the time period; generating a plurality of subsequences from the asynchronous event sequence, wherein each subsequence is related to none, one, or more of the events having the same timestamp, each subsequence comprising the spatial information and the polarity information of the related events; encoding individually each subsequence into a bitstream; and collecting the plurality of bitstreams in a file.
In an implementation form of the second aspect, to encode a subsequence into a bitstream, the method comprises performing a threshold-based range partitioning algorithm on the subsequence.

In an implementation form of the second aspect, for each related event of the subsequence, the threshold-based range partitioning algorithm individually encodes the spatial information in separate threshold-based range partitioning steps, and writes the polarity information of the subsequence.

In an implementation form of the second aspect, the spatial information of each event comprises a first coordinate and a second coordinate, and each threshold-based range partitioning step encodes individually the first coordinate and the second coordinate of the spatial information in a separate threshold-based range partitioning step.

In an implementation form of the second aspect, the threshold-based range partitioning algorithm further encodes a number of events in each subsequence based on a separate threshold-based range partitioning step.

In an implementation form of the second aspect, for each subsequence, the threshold-based range partitioning algorithm: determines a prediction of the number of events related to the subsequence, and encodes individually the number of events based on the respective prediction; determines a prediction of the spatial information of each event, and encodes individually the spatial information based on the respective prediction of the event; and writes the polarity information of the event.

In an implementation form of the second aspect, the threshold-based range partitioning algorithm is a triple-threshold-based range partitioning algorithm configured with three thresholds.
In an implementation form of the second aspect, each spatial information comprises a first coordinate and a second coordinate, and the method further comprises arranging the spatial and polarity information of the related events of each subsequence in an ascending order in the subsequence with respect to the first coordinate.

In an implementation form of the second aspect, the method further comprises arranging the spatial and polarity information of the related events of each subsequence in ascending order in the subsequence with respect to the second coordinate, if the first coordinate is equal.

In an implementation form of the second aspect, when collecting the plurality of bitstreams in the file, the method further comprises forming one or more bitstream packages of bitstreams, wherein each bitstream package is related to a respective time-window and includes all bitstreams that are related to timestamps within that respective time-window.

In an implementation form of the second aspect, the method further comprises generating a header bitstream and adding it to the file; wherein the header bitstream comprises respective lengths of the bitstream packages.

In an implementation form of the second aspect, the method further comprises determining a prediction of the respective lengths of the bitstream packages and encoding the respective lengths of the bitstream packages in a separate threshold-based range partitioning step, before adding them to the header bitstream.

The method of the second aspect and its implementation forms achieve the same advantages as described above for the device of the first aspect and its respective implementation forms.

A third aspect of this disclosure provides a computer program comprising instructions which, when the program is executed by a computer, cause the computer to perform the method of the second aspect or any of its implementation forms.
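The time-window packaging with a header of package lengths described in the implementation forms above can be sketched as follows. This is a minimal illustration, not the disclosed file format: the function name `pack_file`, the byte-level concatenation, and the fixed-size window are all assumptions:

```python
# Hedged sketch of collecting per-subsequence bitstreams into time-window
# packages with a header of package lengths. The header lets a decoder
# sum lengths to find a window's byte offset and seek to it directly
# (random access), without decoding the preceding packages.
def pack_file(bitstreams, window):
    """bitstreams: list of bytes objects, one per timestamp, in order.
    Returns (header, packages), where header holds each package's
    byte length and packages concatenate `window` bitstreams each."""
    packages = []
    for start in range(0, len(bitstreams), window):
        packages.append(b"".join(bitstreams[start:start + window]))
    header = [len(p) for p in packages]
    return header, packages

header, packages = pack_file([b"a", b"bb", b"", b"ccc"], window=2)
# header == [3, 3]; the empty bitstream stands for a timestamp with no events
```

To reach the k-th window, a decoder skips sum(header[:k]) bytes of package data, which is why only the lengths need to be stored in the header bitstream.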
A fourth aspect of this disclosure provides a non-transitory storage medium storing executable program code which, when executed by a processor, causes the method according to the second aspect or any of its implementation forms to be performed.

In summary, this disclosure proposes a novel lossless compression method for encoding asynchronous event sequences acquired by event sensors. For example, a contribution of this disclosure is a low-complexity coding scheme, which uses a decision tree to reduce the representation range of the residual error. The decision tree may, as an example, be formed using a triplet threshold. This triplet threshold may be used to divide the input data range into several coding ranges, which are arranged at concentric distances from an initial prediction. This approach is referred to, in this disclosure, as triple threshold-based range partitioning (TTP). Another contribution of this disclosure is dividing the input sequence into same-timestamp subsequences, wherein each subsequence collects the events of the same timestamp, for example, in ascending order of the largest coordinate of the event spatial information. The proposed method and device of this disclosure can also provide random access to any time-window using additional header information.

It has to be noted that all devices, elements, units and means described in the present application could be implemented in software or hardware elements, or any kind of combination thereof. All steps which are performed by the various entities described in the present application, as well as the functionalities described to be performed by the various entities, are intended to mean that the respective entity is adapted to or configured to perform the respective steps and functionalities.
Even if, in the following description of specific embodiments, a specific functionality or step to be performed by external entities is not reflected in the description of a specific detailed element of that entity which performs that specific step or functionality, it should be clear for a skilled person that these methods and functionalities can be implemented in respective software or hardware elements, or any kind of combination thereof.

BRIEF DESCRIPTION OF DRAWINGS

The above-described aspects and implementation forms will be explained in the following description of specific embodiments in relation to the enclosed drawings, in which:

FIG. 1 shows a device for compressing an asynchronous event sequence of an event sensor, according to this disclosure.
FIG. 2 shows a framework proposed by this disclosure.
FIG. 3 shows a representation using same-timestamp subsequences proposed by this disclosure.
FIG. 4 shows a TTP of a low-complexity coding scheme of this disclosure.
FIG. 5 shows (a) a TTP decision tree, (b) a TTP_y partitioning, (c) a TTP_y decision tree, and (d) a TTP_e range partitioning, of the low-complexity coding scheme of this disclosure.
FIG. 6 shows (a) a TTP_e decision tree, (b) a TTP_L range partitioning, and (c) a TTP_L range partitioning, of the low-complexity coding scheme of this disclosure.
FIG. 7 shows deterministic cases of the low-complexity coding scheme of this disclosure.
FIG. 8 shows an algorithm (Alg. 1) proposed by this disclosure.
FIG. 9 shows results of a device and method according to this disclosure.
FIG. 10 shows a method according to this disclosure, for compressing an asynchronous event sequence of an event sensor.

DETAILED DESCRIPTION OF EMBODIMENTS

FIG. 1 shows a device 100 according to this disclosure. The device 100 is configured to compress an asynchronous event sequence 101 of an event sensor 102. The device 100 may, for example, be a processor, a controller, or a computer.
The device 100 may be included in an event camera together with the event sensor 102. However, the device 100 may also be a separate entity used to post-process event sequences 101, which are provided by an event camera including the event sensor 102. The device 100 is configured to obtain the asynchronous event sequence 101, for instance, by receiving it from the event sensor 102, or by receiving it as input for post-processing.

The event sequence 101 comprises a plurality of events, which are collected by the event sensor 102 over a time period. That is, the event sensor 102 is configured to collect one or more events, and to form the event sequence 101 using the collected events. Each event comprises spatial information on the event sensor 102 – for example, pixel information or coordinates on the event sensor’s sensing surface – where the event is detected. Each event further comprises a polarity information, for instance, indicating whether a light intensity or brightness has increased or decreased at the spatial information (e.g., at the corresponding pixel of the event sensor 102). Each event further comprises a timestamp within the time period, wherein the timestamp indicates at which time the event was detected or recorded by the event sensor 102.

The device 100 is further configured to generate a plurality of subsequences 103 from the asynchronous event sequence 101. Each subsequence 103 is related to none, one, or more events having the same timestamp. Each subsequence 103 comprises the spatial information and the polarity information of the related events (of the same timestamp). That is, each subsequence 103 may be referred to as a same-timestamp subsequence, and comprises the information of all events that are detected or recorded simultaneously. The device 100 is further configured to encode individually each subsequence 103 into a bitstream 104, and to collect the plurality of bitstreams 104 in a file 105.
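The grouping of events into same-timestamp subsequences described above can be sketched in a few lines. This is an illustrative sketch only, with assumed names (`make_subsequences`) and an assumed tuple layout (x, y, p, t); the disclosure does not prescribe a data structure:

```python
# Minimal sketch of forming same-timestamp subsequences from an
# asynchronous event sequence. Each event is a tuple (x, y, p, t);
# each subsequence keeps only (x, y, p), since the timestamp is implied
# by the subsequence's position in the time period.
from collections import defaultdict

def make_subsequences(events, t_start, t_end):
    """Map every timestamp in [t_start, t_end] to its (possibly empty)
    list of (x, y, p) triples; an empty list models the 'none events'
    case of the claims."""
    groups = defaultdict(list)
    for x, y, p, t in events:
        groups[t].append((x, y, p))
    return {t: groups.get(t, []) for t in range(t_start, t_end + 1)}

events = [(3, 1, 1, 0), (5, 2, -1, 0), (4, 7, 1, 2)]
subs = make_subsequences(events, 0, 2)
# subs[0] holds two events, subs[1] holds none, subs[2] holds one event
```

Because every timestamp in the period gets a subsequence (even an empty one), the decoder can recover the timestamps from the subsequence order alone, which is what makes the per-event timestamp field redundant in the compressed representation.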
In this way, the events in the event sequence 101 may be stored, in a lossless compressed form, into the file 105.

The device 100 may comprise processing circuitry (not shown) configured to perform, conduct or initiate the various operations of the device 100 described herein. The processing circuitry may comprise hardware and/or the processing circuitry may be controlled by software. The hardware may comprise analog circuitry or digital circuitry, or both analog and digital circuitry. The digital circuitry may comprise components such as application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), digital signal processors (DSPs), or multi-purpose processors. The device 100 may further comprise memory circuitry, which stores one or more instruction(s) that can be executed by the processor or by the processing circuitry, in particular under control of the software. For instance, the memory circuitry may comprise a non-transitory storage medium storing executable software code which, when executed by the processor or the processing circuitry, causes the various operations of the device 100 to be performed. In one embodiment, the processing circuitry comprises one or more processors and a non-transitory memory connected to the one or more processors. The non-transitory memory may carry executable program code which, when executed by the one or more processors, causes the device 100 to perform, conduct or initiate the operations or methods described herein.

The following detailed description of the solutions of this disclosure, which is based on the solution shown in FIG. 1, is organized as follows: first, section A describes the proposed framework of this disclosure; then, section B presents the experimental evaluation; finally, section C draws conclusions and highlights some benefits of the solution of this disclosure.

A. Framework

An event sensor 102 may be considered, which has a W × H resolution.
A change of an incoming light intensity triggers an asynchronous event, e_i = (x_i, y_i, p_i, t_i), which may be stored in 8 B of memory using: spatial information (x_i, y_i), ∀ x_i ∈ [1, W], y_i ∈ [1, H]; polarity p_i ∈ {−1, 1}, wherein −1 indicates a decrease and 1 indicates an increase in the light intensity; and timestamp t_i. An asynchronous event sequence 101, S = {e_i}_{i=1,2,…,Ne}, collects Ne events triggered over a time period of T µs.

FIG. 2 depicts the proposed framework for encoding an asynchronous event sequence 101. The sequence representation foreseen by this disclosure groups, as described above, same-timestamp events in the subsequences 103, wherein the events may be reordered. That is, the plurality of subsequences 103 are generated from the asynchronous event sequence 101, wherein the spatial and polarity information of the related events of each subsequence 103 may be arranged in an ascending order in the subsequence 103 with respect to the first coordinate. Each same-timestamp subsequence 103 is then encoded, in turn, by a proposed method (including an algorithm). That is, each subsequence 103 is encoded individually into a bitstream 104. The algorithm used for encoding subsequences 103 into bitstreams 104 may be a TTP algorithm, as described later in detail. For example, the TTP algorithm may individually encode the spatial information in separate threshold-based range partitioning steps, and may then write the polarity information of the subsequence 103. If the spatial information has two coordinates, the TTP algorithm may encode individually the first coordinate and the second coordinate of the spatial information in a separate threshold-based range partitioning step. The TTP algorithm may also encode a number of events in each subsequence 103 based on a separate threshold-based range partitioning step. The method (including the algorithm) may be referred to as a low-complexity lossless compression of asynchronous event sequences (LLC-ARES).
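The reordering within a subsequence described above (ascending in the first coordinate, then in the second where the first is equal) can be sketched with a simple sort. The function name `st_order` and the tuple layout are assumptions for illustration:

```python
# Hedged sketch of the ordering inside a same-timestamp (ST) subsequence:
# sort the (x, y, p) triples ascending by the first coordinate, breaking
# ties with the second coordinate, as the claims describe.
def st_order(subsequence):
    """Sort (x, y, p) triples lexicographically by (x, y)."""
    return sorted(subsequence, key=lambda e: (e[0], e[1]))

sub = [(4, 9, 1), (2, 5, -1), (4, 3, 1)]
ordered = st_order(sub)
# ordered == [(2, 5, -1), (4, 3, 1), (4, 9, 1)]
```

The sort is lossless for coding purposes: within one timestamp the event order carries no information, while the resulting monotone coordinates yield small, sign-predictable differences for the range partitioning steps.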
The LLC-ARES method is accordingly built, as described, on the TTP coding scheme. As further shown in FIG. 2, the plurality of bitstreams 104 are collected in a file 105, e.g., a compressed file 105 without random access (RA). The plurality of bitstreams 104 may also be collected in a compressed file with RA. In this case, one or more bitstream packages 201 of bitstreams 104 may be formed, wherein each bitstream package 201 is related to a respective time-window and includes all bitstreams 104 that are related to timestamps within that respective time-window. In addition, a header bitstream 202 may be generated and may be added to the file 105. The header bitstream 202 may comprise respective lengths of the bitstream packages 201. The length of a bitstream package 201 may depend on the number of events in the subsequences 103, which are related to the bitstreams 104 in the bitstream package 201.

In the following, subsection A.1 describes the proposed sequence representation, subsection A.2 presents the proposed low-complexity coding scheme, and subsection A.3 presents the proposed algorithm.

A.1. Sequence representation

The input sequence S is arranged as a set of same-timestamp subsequences 103, {S_t}_{t=0,1,…,T−1}, wherein each S_t collects all events e_i having the same timestamp t. Note that the timestamp information is decoded using {S_t}_{t=0,1,…,T−1}. Each S_t may be ordered in ascending order of the largest spatial information dimension, e.g., y_i < y_{i+1}; if equal, then S_t is further ordered in ascending order of the remaining dimension, x_i < x_{i+1}. That is, in general, when the spatial information comprises a first coordinate and a second coordinate, the spatial and polarity information of the related events of each subsequence 103 may be arranged in an ascending order in the subsequence 103 with respect to the first coordinate, and may be arranged in ascending order in the subsequence 103 with respect to the second coordinate, if the first coordinate is equal. FIG.
3 depicts the proposed representation and the difference between the sensor's event-by-event (EE) order and the same-timestamp (ST) order.
A.2 Triple Threshold-based Range Partition (TTP)
To integrate the proposed method into, for example, a low-power chip, a low-complexity coding scheme is proposed by this disclosure. Thereby, the binary representation range of the residual error is partitioned into smaller intervals selected using a short-depth decision tree designed based on a triple threshold, δ = (δ1, δ2, δ3). Hence, the input range is partitioned into several smaller coding ranges, which are arranged at concentric distances from an initial prediction. For example, consider the case of encoding x ∈ [1, N] (finite range) – wherein x may be a first coordinate of the spatial information – using a prediction x̂ by writing the binary representation of the residual error ε = x − x̂ on b_N bits. Since b_N is unknown at the decoder side, δ is used to create a decision tree, which partitions [1, N] into five types of coding ranges using the three thresholds δ1, δ2, and δ3, as shown in FIG. 4. Let us denote b_δk = ⌈log2 δk⌉, b_L = ⌈log2 x̂⌉, and b_R = ⌈log2(N − x̂ − 1)⌉. The first range, R1, shown in FIG. 4 may be defined using δ1 as (x̂ − δ1, x̂ + δ1) to represent |ε| < δ1 on b_δ1 bits plus an additional bit for sign(ε). The second range, R2, may be defined using δ2 to represent |ε| − δ1 < δ2 on b_δ2 bits plus a sign bit for ε ≥ 0. Similarly, the third range, R3, may be defined using δ3 to represent |ε| − δ1 − δ2 < δ3 on b_δ3 bits plus a sign bit. The fourth range, R4, and the fifth range, R5, may be defined for |ε| ≥ Δ, where Δ = δ1 + δ2 + δ3, and used to represent x − 1 on b_L bits and N − x on b_R bits, respectively. FIG. 5a depicts the corresponding decision tree, defined by checking the following four constraints: (c1) b_1 is set by checking |ε| < Δ; if true then b_1 = 0, else b_1 = 1.
(c2) If b_1 = 0, then b_2 is set by checking |ε| < δ1; if true then b_2 = 0 and R1 is employed to represent ε on n_1 = b_δ1 + 1 bits, else b_2 = 1. (c3) If b_2 = 1, then b_3 is set by checking |ε| < δ1 + δ2; if true then b_3 = 0 and R2 is employed to represent ε on n_2 = b_δ2 + 1 bits, else b_3 = 1 and R3 is used to represent ε on n_3 = b_δ3 + 1 bits. (c4) Finally, if b_1 = 1, then b_4 is set by checking x ≤ x̂; if true then b_4 = 0 and R4 is employed to represent x − 1 on b_L bits, else b_4 = 1 and R5 is employed to represent N − x on b_R bits.
Sometimes, information can be determined, i.e., is deterministic. For example, if x̂ − Δ or x̂ + Δ is outside the finite range (see FIG. 7a), then R4 or R5 may not exist, and the decision tree can be built without checking constraint (c4) listed above. Moreover, since x̂ and N − x̂ + 1 are not power-of-2 numbers, the most significant bit of the R4 (respectively R5) codeword, b_MSB, is 0, thanks to the constraints 1 ≤ x ≤ x̂ and x̂ ≤ x ≤ N, respectively. FIG. 7b shows that if x ∈ (x̂ − 2^b_L, 2^b_L] and b_MSB were set to 1, then x > x̂ and the constraint would be violated.
The basic algorithm implementation may be modified for encoding different types of data. Let us denote Δx_k = x_k − x̂_k and Δy_k = y_k − ŷ_k. {x_k} is encoded using the version TTP_x, where b_4 is used to detect another deterministic case: if Δx_k = 0, then y_k ≥ y_{k−1} and the sign bit is saved (see FIG. 3; ST order). {y_k}_{k=1,2,…,N_j} having Δy_k ≥ 0 (thanks to the ST order) is encoded using a version TTP_y, which is designed to encode y in a range [a, b]. FIG. 5b and FIG. 5c show the TTP_y range partitioning and decision tree, respectively. Some data types have a very large or infinite support range. {N_j}_{j=1,2,…,NT} may be encoded using the version TTP_N. Note that N_j ∈ [0, WH]; however, there is a low probability of having a majority of pixels triggered at the same time.
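The range selection performed by the decision tree can be sketched as follows. This is a hypothetical Python illustration under the definitions above (the function name and return convention are assumptions, the tree bits b_1…b_4 themselves are omitted, and the prediction is assumed to lie in the interior of [1, N]):

```python
import math

def ttp_choose_range(x, x_hat, n, deltas):
    """Select the TTP coding range for x in [1, n] given prediction x_hat.

    Returns (range_name, payload_bits): which range the decision tree
    lands in and how many bits the residual payload takes (the decision
    tree bits themselves are not counted here).
    """
    d1, d2, d3 = deltas
    b = [math.ceil(math.log2(d)) for d in (d1, d2, d3)]
    eps = x - x_hat
    if abs(eps) < d1:                  # (c1)=0, (c2)=0 -> R1
        return "R1", b[0] + 1          # |eps| magnitude plus a sign bit
    if abs(eps) < d1 + d2:             # (c3)=0 -> R2
        return "R2", b[1] + 1
    if abs(eps) < d1 + d2 + d3:        # (c3)=1 -> R3
        return "R3", b[2] + 1
    if x <= x_hat:                     # (c4)=0 -> R4: x - 1 on b_L bits
        return "R4", math.ceil(math.log2(x_hat))
    return "R5", math.ceil(math.log2(n - x_hat - 1))  # N - x on b_R bits

# e.g., x = 100, x_hat = 98, deltas = (4, 8, 16): |eps| = 2 < 4 lands in R1.
```

Small residuals near the prediction thus cost only a few bits, while the worst case falls back to a plain binary representation of x − 1 or N − x.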
Since N_j is usually very small, TTP_N is designed to use the doublet threshold (δ1, δ2), as experiments show that the triplet threshold does not improve the performance. FIG. 5d shows the TTP_N range partitioning, where only the residual errors 0, 1, …, δ2 − 2 are encoded by R2, as the last value in the range, δ2 − 1 (i.e., b_δ2 bits of 1), signals the use of R6 to encode |ε| − Δ − 2 by using a simple coding technique, Elias-Gamma Coding (EGC). FIG. 6a shows the corresponding decision tree, where N_j = 0 (i.e., S_j = ∅) is encoded by the first bit. Finally, TTP_B may be designed to encode the bitstream length, denoted L_ℓ, by defining partition intervals using two triple thresholds: δ_S = (δ1_S, δ2_S, δ3_S) for encoding small errors using R1S, R2S, and R3S; and δ_L = (δ1_L, δ2_L, δ3_L) for encoding large errors using R1L, R2L, and R3L. Similar to TTP_N, R6 is signalled in R3L using the value δ3_L − 1, and |ε| − Δ_S − Δ_L − 2 is encoded using EGC (see FIG. 6b and FIG. 6c).
A.3 Algorithm
The input sequence S_e is arranged as a set of same-timestamp subsequences 103, {S_j}_{j=1,2,…,NT}, where each S_j collects all events having the same timestamp t_j. Note that the timestamp information may be decoded using {N_j}_{j=1,2,…,NT}. Each S_j may be ordered in ascending order of the largest spatial information dimension, e.g., x_k < x_{k+1}. If x_k = x_{k+1}, then S_j may be further ordered in ascending order of the remaining dimension, y_k < y_{k+1}. FIG. 3 depicts the proposed representation and the difference between the sensor's event-by-event (EE) order of the event sequence 101 and the same-timestamp (ST) order of the subsequences 103. The proposed method, LLC-ARES, employs the proposed representation to generate {S_j}_{j=1,2,…,NT}. Each S_j is encoded as a bitstream using an algorithm "Alg. 1", as shown in FIG. 8, which is built using the proposed coding scheme TTP.
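Elias-Gamma Coding, used above as the escape code for rare large residuals, is a standard universal code: a unary prefix of zeros announces the length of the binary part, so no upper bound on the value is needed. A minimal textbook encoder, for illustration only:

```python
def elias_gamma(n):
    """Elias-Gamma code of a positive integer n: (len-1) zeros followed
    by the binary representation of n itself (which starts with a 1)."""
    if n < 1:
        raise ValueError("EGC is defined for positive integers")
    binary = bin(n)[2:]          # e.g., 9 -> "1001"
    return "0" * (len(binary) - 1) + binary

# 9 = 0b1001 is emitted as "0001001": three zeros announce a 4-bit part.
```

The decoder counts leading zeros to learn the codeword length, which is why EGC suits the open-ended support ranges of N_j and L_ℓ.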
The compressed file 105 collects these bitstreams 104. Alg. 1 encodes the following data: (i) N_j, by employing TTP_N using the prediction N̂_j, computed by (1), and δ_N; (ii) the first event e_1, as follows: (ii.1) x_1, by employing TTP_x using x̂_1 computed by (2), the range [1, W], and δ_xy; (ii.2) y_1, by employing TTP_x using ŷ_1 computed by (2), the range [1, H], and δ_xy; and (ii.3) p_1, using binarization; (iii) the remaining events, as follows: (iii.1) x_k, by employing TTP_x using x̂_k = x_{k−1}, the range [x_{k−1}, W], and δ_x; (iii.2) y_k, by employing TTP_y using ŷ_k computed by (3), δ_y, and the range [1, H]; (iii.3) p_k, using binarization. The predictions N̂_j, x̂_1, ŷ_1, ŷ_k, and L̂_ℓ are computed by equations (1)–(4); e.g., equation (4) sets L̂_ℓ = 2^⌈…⌉ if ℓ = 1, and L̂_ℓ = L_{ℓ−1} otherwise. In equation (2), the prediction for the spatial information of e_1 is set as the sensor's centre (W/2, H/2), while for the rest it depends on e_1 of the previous non-empty S_j. In equation (3), if Δx_k is small, ŷ_k is set as the median of a small prediction window of size w_s, else of a larger prediction window of size w_l. In an example, one may select the parameter values 10, 2^… + 2^…, 3, 5, and 15, respectively. The triple threshold parameters may be selected as power-of-2 numbers, and may be set, e.g., as δ_N = (2^…, 2^…), δ_xy = (2^…, 2^…, 2^…), and δ_B = (2^…, 2^…, 2^…).
The alternative method LLC-ARES-RA is an LLC-ARES version which provides RA to any time-window of size Δ_RA. S_e may in this case be divided into packages of Δ_RA time-length. LLC-ARES encodes each package ℓ as a bitstream set {B_j^ℓ}, which is collected as the package-ℓ bitstream, B^ℓ = [B_1^ℓ B_2^ℓ ⋯], of length L_ℓ. TTP_B encodes L_ℓ using L̂_ℓ, computed using (4), δ_S, and δ_L, and generates the header bitstream (see bitstream packages 201 in FIG. 2).
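The packaging step for random access can be sketched as follows. This is a hypothetical Python illustration: in the disclosure the package lengths are themselves TTP_B-coded into the header bitstream 202, whereas this sketch simply records them as integers:

```python
def package_bitstreams(bitstreams, window):
    """Group per-timestamp bitstreams into packages of `window` ticks and
    record each package's length for the header.

    bitstreams: one bytes object per timestamp tick (possibly empty).
    Returns (header_lengths, packages).
    """
    packages, header = [], []
    for start in range(0, len(bitstreams), window):
        # One package per time-window: concatenate its tick bitstreams.
        package = b"".join(bitstreams[start:start + window])
        packages.append(package)
        header.append(len(package))
    return header, packages

header, packages = package_bitstreams([b"ab", b"", b"c", b"de", b"f"], 2)
# Seeking to package k needs only the byte offset sum(header[:k]).
```

The header thus lets a decoder jump directly to any time-window without decoding the packages before it.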
Hence, the bitstreams 104 of the lengths {L_ℓ}_{ℓ=1,2,…} are collected in a header bitstream 202, B_H = [B_{L1} B_{L2} ⋯], while all bitstream packages 201 are collected by the sequence bitstream, B_S = [B^1 B^2 ⋯]. The compressed file 105 with RA collects B_H and B_S, in this order.
B. Experimental Evaluation
Experiments were carried out on the stereo event camera dataset for driving scenarios called DSEC. It contains 82 training sequences captured using the Prophesee Gen3.1 event sensor placed on top of a moving car, having a W × H = 640 × 480 resolution. The DSEC sequences are sorted in ascending order of their event acquisition density. Owing to driving at different speeds and in different outdoor scenarios, the DSEC sequences have a highly variable density of events; see FIG. 9a. To limit the runtime of the state-of-the-art codecs, in each sequence, only the first T = 10^8 µs (100 s) of event data are encoded. DSEC is publicly available online. The proposed method, LLC-ARES, may be implemented in C. The LLC-ARES-RA version was tested using time-windows of Δ_RA = 10^2, 10^3, and 10^4 µs, and T = 10^8 µs. The raw data size is computed using the sensor specification of 8 B per event. The compression results are compared using the following metrics: (c1) Compression Ratio (CR), defined as the ratio between the raw data size and the compressed size; (c2) Relative Compression (RC), defined as the ratio between the compressed size of a target codec and the compressed size of LLC-ARES; (c3) Bitrate (BR), defined as the ratio between the compressed size in bits and the number of events in the sequence, and measured in bits per event (bpev); e.g., raw data has 64 bpev.
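The CR and BR metrics follow directly from the 8 B-per-event raw format. A minimal helper, for illustration (the function name is an assumption):

```python
def compression_metrics(num_events, compressed_bytes):
    """Compute the evaluation metrics: each raw event takes 8 B (64 bits),
    so CR = raw size / compressed size and BR is in bits per event."""
    raw_bytes = 8 * num_events
    cr = raw_bytes / compressed_bytes
    br = 8 * compressed_bytes / num_events
    return cr, br

cr, br = compression_metrics(1_000_000, 1_000_000)  # 8 MB raw -> 1 MB
# cr == 8.0 and br == 8.0 bpev; uncompressed data sits at 64 bpev.
```

Note that CR × BR = 64 always holds, so the two metrics carry the same information in different units.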
The runtime results are compared using the following metrics: (t1) Event Density (ρ_e), defined as the ratio between the number of events in the sequence and the encoding/acquisition time, and measured in millions of events per second (Meps); (t2) Time Ratio (TR), defined as the ratio between the acquisition time and the encoding time. The LLC-ARES performance is compared with the following state-of-the-art traditional data compression codecs: (a) the dictionary-based codec Zeta Library (ZLIB) (version 1.2.3, available online); (b) the Lempel–Ziv–Markov chain algorithm (LZMA), an advanced dictionary-based codec developed for lossless data compression and first used in the 7-Zip open-source codec; and (c) the Bzip2 algorithm, based on the Burrows–Wheeler transform (version 1.0.5, available online). FIG. 9b shows the CR results and FIG. 9c the BR results over DSEC. One can note that, for the state-of-the-art codecs, the proposed ST order provides an improved performance of up to 96% compared with the sensor's EE order. LLC-ARES (designed for low-power chip integration) provides an improved performance compared with all state-of-the-art codecs (designed for SoC integration) over the sequences having a small and medium event density, and a close performance over the sequences having a high event density, as more complex coding techniques are employed by the data compression codecs. Table I shows the average CR and BR results over DSEC. One can note that, compared with Bzip2, LZMA, and ZLIB, LLC-ARES provides: (i) an average CR improvement of 5.49%, 11.45%, and 35.57%, respectively; (ii) an average BR improvement of 7.37%, 13.40%, and 37.12%, respectively; and (iii) average bit savings of 1.09 bpev, 1.99 bpev, and 5.50 bpev, respectively. FIG. 9d shows the event density results and FIG. 9e the TR results over DSEC.
Compared with the runtime performance of the state-of-the-art codecs, LLC-ARES provides a performance much closer to real-time for all sequences, and an outstanding performance for the sequences having a high event density. Table I shows the average event density and TR results over DSEC. Compared with Bzip2, LZMA, and ZLIB, LLC-ARES provides: (i) an average event density improvement of 234×, 412×, and 2086×, respectively; and (ii) an average TR improvement of 216×, 401×, and 1969×, respectively. FIG. 9f shows the RC results over DSEC. One can note that the RC results are quite similar, as the size of the header bitstream is negligible compared with the time-window sequence bitstream. When providing RA to the smallest time-window of Δ_RA = 100 µs, compared with LLC-ARES, the LLC-ARES-RA performance decreases by less than 0.19% when the encoded header information is stored in memory, and by less than 0.35% when the decoded header information is stored in memory, denoted here as memory usage (MU) results.
FIG. 10 shows a method 1000 for compressing an asynchronous event sequence 101 of an event sensor 102. The method 1000 can be performed by the device 100 or by an event camera's processor. The method 1000 comprises a step 1001 of receiving the asynchronous event sequence, which comprises a plurality of events collected over a time period, wherein each event comprises spatial information on the event sensor, a polarity information, and a timestamp within the time period. The method 1000 further comprises a step 1002 of generating a plurality of subsequences 103 from the asynchronous event sequence 101, wherein each subsequence 103 is related to none, one, or more of the events having the same timestamp, each subsequence 103 comprising the spatial information and the polarity information of the related events. Further, the method 1000 comprises a step 1003 of encoding each subsequence 103 individually into a bitstream 104.
Then, the method 1000 comprises a step 1004 of collecting the plurality of bitstreams 104 in a file 105.
C. Conclusions
This disclosure proposes a novel lossless coding method for asynchronous event sequences 101. LLC-ARES employs low-complexity coding techniques and is designed for low-power chip integration. A novel coding scheme may create decision trees for reducing the residual error representation range. A novel representation may employ the ST order, wherein the same-timestamp events may be grouped into ordered subsequences 103. RA to any time-window may be provided using additional header information. The experimental evaluation demonstrates an improved coding performance, and a closer-to-real-time runtime performance, compared with state-of-the-art codecs. The present disclosure has been described in conjunction with various embodiments, as examples, as well as implementations. However, other variations can be understood and effected by those persons skilled in the art when practicing the claimed matter, from the study of the drawings, this disclosure, and the independent claims. In the claims as well as in the description, the word "comprising" does not exclude other elements or steps, and the indefinite article "a" or "an" does not exclude a plurality. A single element or other unit may fulfill the functions of several entities or items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used in an advantageous implementation.