Title:
LIDAR BACKGROUND NOISE DETECTION AND COMPENSATION
Document Type and Number:
WIPO Patent Application WO/2024/081230
Kind Code:
A1
Abstract:
This disclosure relates to detecting and calibrating out noise from lidar data, including noise caused by solar radiation and/or other lidar background noise sources. A lidar system may determine a background noise level for a location by sampling using an analog-to-digital converter (ADC) during a time window associated with a lidar pulse. The ADC sample may be compared to additional ADC samples performed during additional time periods between lidar pulses. The ADC samplings may be analyzed to determine the lidar background noise level of the environment, and the lidar data may be modified to calibrate out the background noise. In some examples, the lidar system itself may be reconfigured based on the lidar background noise, including modifying the laser transmit power, aperture size, optical gain, and/or other features of the lidar system to calibrate out the background noise and/or improve the signal-to-noise ratio (SNR) of the lidar data.

Inventors:
SUBASINGHA SHAMINDA (US)
TING SAMANTHA (US)
MCMICHAEL RYAN (US)
ABDELMAKSOUD NOOR (US)
ZHOU KAI (US)
PIRACHA MOHAMMAD (US)
Application Number:
PCT/US2023/034809
Publication Date:
April 18, 2024
Filing Date:
October 10, 2023
Assignee:
ZOOX INC (US)
International Classes:
G01S7/497; G01S7/483; G01S17/931; G04F10/00; G05D1/20; H03M1/12
Domestic Patent References:
WO2021026241A1 (2021-02-11)
Foreign References:
US20200174120A1 (2020-06-04)
US20200158835A1 (2020-05-21)
US20180188104A1 (2018-07-05)
CN111983586A (2020-11-24)
Attorney, Agent or Firm:
BRISNEHAN, Brian, J. et al. (US)
Claims:
CLAIMS

WHAT IS CLAIMED IS:

1. A lidar system comprising: at least one laser; at least one photodetector; one or more processors; and one or more non-transitory computer-readable media storing computer-executable instructions that, when executed, cause the one or more processors to perform operations comprising: determining a pulse transmission time associated with a lidar pulse transmitted by the lidar system into an environment; determining, based at least in part on the pulse transmission time, a sampling time window; receiving light data from the environment during the sampling time window; determining, based at least in part on the light data, a lidar background noise level associated with the environment; and determining reflectivity data associated with the lidar pulse, based at least in part on the lidar background noise level.

2. The lidar system as claim 1 recites, wherein receiving the light data during the sampling time window comprises: sampling, using an analog-to-digital converter (ADC) of the lidar system, light received by a photodetector during the sampling time window; and accumulating, using an integrator of the lidar system, the sampled light received during the sampling time window.

3. The lidar system as either claim 1 or claim 2 recites, the operations further comprising: determining, based at least in part on the pulse transmission time, a second sampling time window different from the sampling time window, wherein the sampling time window is within a dwell time associated with the lidar pulse, and wherein the second sampling time window is outside of the dwell time associated with the lidar pulse; and receiving second light data from the environment during the second sampling time window, wherein determining the lidar background noise level is based at least in part on the light data and the second light data.

4. The lidar system as any one of claims 1-3 recites, wherein determining the reflectivity data associated with the lidar pulse comprises: determining a peak power level associated with the lidar pulse; and subtracting the lidar background noise level from the peak power level.

5. The lidar system as claim 4 recites, wherein determining the peak power level associated with the lidar pulse comprises: determining, using a time-to-digital converter (TDC) of the lidar system, a peak time associated with the lidar pulse; and determining, using an analog-to-digital converter (ADC) of the lidar system, and based at least in part on the peak time, a magnitude associated with the peak power level.

6. The lidar system as any one of claims 1-5 recites, wherein determining the sampling time window is based at least in part on at least one of: a transmission power associated with the lidar pulse; a range associated with the lidar system; or a transmission frequency associated with the lidar system.

7. The lidar system as any one of claims 1-6 recites, the operations further comprising: determining a second sampling time window associated with a second lidar pulse, wherein the lidar pulse has a first transmission power and the second lidar pulse has a second transmission power different from the first transmission power; and receiving second light data from the environment during the second sampling time window, wherein determining the lidar background noise level associated with the environment is based at least in part on the first transmission power, the light data, the second transmission power, and the second light data.

8. A method comprising: determining a pulse transmission time associated with a lidar pulse transmitted by a lidar system into an environment; determining, based at least in part on the pulse transmission time, a sampling time window; receiving light data from the environment during the sampling time window; determining, based at least in part on the light data, a lidar background noise level associated with the environment; and determining reflectivity data associated with the lidar pulse, based at least in part on the lidar background noise level.

9. The method as claim 8 recites, wherein receiving the light data during the sampling time window comprises: sampling, using an analog-to-digital converter (ADC) of the lidar system, light received by a photodetector during the sampling time window; and accumulating, using an integrator of the lidar system, the sampled light received during the sampling time window.

10. The method as either claim 8 or claim 9 recites, further comprising: determining, based at least in part on the pulse transmission time, a second sampling time window different from the sampling time window, wherein the sampling time window is within a dwell time associated with the lidar pulse, and wherein the second sampling time window is outside of the dwell time associated with the lidar pulse; and receiving second light data from the environment during the second sampling time window, wherein determining the lidar background noise level is based at least in part on the light data and the second light data.

11. The method as any one of claims 8-10 recites, wherein determining the reflectivity data associated with the lidar pulse comprises: determining a peak power level associated with the lidar pulse; and subtracting the lidar background noise level from the peak power level.

12. The method as any one of claims 8-11 recites, wherein determining the peak power level associated with the lidar pulse comprises: determining, using a time-to-digital converter (TDC) of the lidar system, a peak time associated with the lidar pulse; and determining, using an analog-to-digital converter (ADC) of the lidar system, and based at least in part on the peak time, a magnitude associated with the peak power level.

13. The method as any one of claims 8-12 recites, wherein determining the sampling time window is based at least in part on at least one of: a transmission power associated with the lidar pulse; a range associated with the lidar system; or a transmission frequency associated with the lidar system.

14. The method as any one of claims 8-13 recites, further comprising: determining a second sampling time window associated with a second lidar pulse, wherein the lidar pulse has a first transmission power and the second lidar pulse has a second transmission power different from the first transmission power; and receiving second light data from the environment during the second sampling time window, wherein determining the lidar background noise level associated with the environment is based at least in part on the first transmission power, the light data, the second transmission power, and the second light data.

15. One or more non-transitory computer-readable media comprising instructions that, when executed by one or more processors, cause the one or more processors to perform a method as recited in any one of claims 8-14.

Description:
LIDAR BACKGROUND NOISE DETECTION AND COMPENSATION

CROSS REFERENCE TO RELATED APPLICATIONS

[0001] This application claims priority to U.S. Provisional Patent Application No. 63/415,177, filed October 11, 2022, and entitled “LIDAR BACKGROUND NOISE DETECTION AND COMPENSATION,” the entire contents of which are incorporated herein by reference.

BACKGROUND

[0002] Various systems, including autonomous vehicles, utilize lidar systems that use lasers to emit pulses into an environment and sensors to detect pulses that are reflected back from the surfaces of objects in the environment. Such reflected pulses may, in turn, be used to perform detection of objects, such as vehicles, pedestrians, and bicycles, in an environment. Lidar sensors generally measure the distance from a lidar device to the surface of an object by transmitting a light pulse and receiving a reflection of the light pulse from the surface of the object, which may be read by a sensor of the lidar device. The sensor may generate a signal based on light pulses incident on the sensor. Lidar return signals may be attributable to reflections of objects, but portions of lidar signals also may be attributable to noise and/or other interfering signals (e.g., from the lidar device itself or from an external source). Within the context of autonomous vehicles, lidar systems may be used to detect objects in driving environments, analyze the objects, and/or determine routes for the vehicle to navigate through the environment safely and efficiently. However, lidar noise and interference may cause errors in the analysis of lidar data, such as false-positive object detections. Such lidar data analysis errors can present challenges to safely and efficiently navigating environments.

BRIEF DESCRIPTION OF THE DRAWINGS

[0003] The detailed description is described with reference to the accompanying figures. In the figures, the left-most digit(s) of a reference number identifies the figure in which the reference number first appears. The use of the same reference numbers in different figures indicates similar or identical components or features.

[0004] FIG. 1 is a pictorial flow diagram illustrating an example technique of determining and calibrating for lidar background noise, in accordance with one or more examples of the disclosure.

[0005] FIGS. 2 and 3 illustrate example environments including lidar background noise caused by solar radiation, and the corresponding effect on the lidar reflectivity data, in accordance with one or more examples of the disclosure.

[0006] FIG. 4 depicts an example graph representing the return signals associated with a number of lidar pulses emitted by a lidar system, in accordance with one or more examples of the disclosure.

[0007] FIGS. 5A and 5B depict example graphs representing a technique for determining lidar background noise associated with a lidar pulse, in accordance with one or more examples of the disclosure.

[0008] FIG. 6 depicts another example graph representing a technique for determining lidar background noise associated with a lidar pulse, in accordance with one or more examples of the disclosure.

[0009] FIG. 7 depicts an example graph representing techniques for determining power levels and/or corresponding times associated with a lidar pulse, in accordance with one or more examples of the disclosure.

[0010] FIGS. 8A and 8B depict example graphs representing a technique for determining background noise based on comparing ADC samples representing accumulated energy from different return lidar pulses having different transmit powers, in accordance with one or more examples of the disclosure.

[0011] FIG. 9 depicts an example environment including lidar background noise caused by solar radiation, and the corresponding calibrated lidar reflectivity data, in accordance with one or more examples of the disclosure.

[0012] FIG. 10 depicts an example environment including an example autonomous vehicle operating in the environment, in accordance with examples of the disclosure.

[0013] FIG. 11 is a block diagram of an example system for implementing various techniques described herein.

DETAILED DESCRIPTION

[0014] Various techniques (e.g., processes, systems, non-transitory computer-readable media storing instructions, etc.) are described herein for determining and calibrating out background noise from lidar reflectivity data. Background noise within lidar return signals may be caused by solar radiation, warm ground or road surfaces, or various other anomalies that may emit energy capable of being detected by a lidar system. In various examples described herein, a lidar system may determine lidar background noise by performing a first energy data sampling (e.g., which may be performed using an analog-to-digital converter (ADC)). In at least some examples, such sampling may be performed during a time window associated with a particular lidar pulse. For instance, the first ADC sampling may be performed over an estimated round-trip travel time associated with a laser pulse emitted by the lidar system. The first ADC sample may be compared to a second ADC sample associated with a different time period (or multiple time periods), such as during a “dwell time” between lidar pulses. Various techniques, described below in more detail, can be used to analyze and compare the data from the ADC samples, to determine a lidar background noise level associated with the environment. The lidar reflectivity data output by the lidar system, including determinations and/or attributes of lidar points, may be modified to calibrate the data based on the background noise in the environment. Additionally or alternatively, the lidar system itself may be reconfigured based on the background noise in the environment. For instance, the laser transmit power, aperture size, optical gain, and/or other features of the lidar system may be modified to calibrate out the background noise and/or improve the signal-to-noise ratio (SNR) of the lidar data.

[0015] During the operation of a lidar system, the lidar system may emit periodic laser pulses (or lidar pulses) into the surrounding environment. Lidar pulses emitted by the lidar system may include various properties, such as intensity, power, polarization, phase, coherence, spectral content, modulation, spatial shape, temporal shape, and other lidar pulse properties, some or all of which may be controlled by the lidar system. The lidar system also may include lidar sensors (e.g., photodetectors) configured to detect returning lidar pulses that were reflected off of one or more surfaces and/or objects in the environment. In various examples, the systems and techniques described herein for determining and calibrating out lidar background noise can be implemented within autonomous vehicles. In such examples, an autonomous vehicle may use the lidar system to detect objects proximate to the vehicle in the environment (e.g., other vehicles, pedestrians, bicycles, road debris, traffic signs, etc.). However, the techniques described herein can be applied to a variety of systems and/or platforms including lidar capabilities (e.g., sensor systems, robotic platforms, inspection systems, remote security systems, machine vision platforms, etc.) and are not limited to autonomous vehicles.

[0016] Although the techniques described herein may apply to various types and configurations of lidar systems, certain techniques may provide particular advantages for lidar systems with high-sensitivity detectors and/or low-speed analog-to-digital converters (ADCs) to receive and process reflectivity data. For example, photodetectors such as silicon photomultiplier (SiPM) detectors using single-photon avalanche diodes (SPAD), as well as other high-sensitivity photodetectors including photodiodes and/or avalanche photodiodes (APDs) may be highly sensitive to lidar background noise (e.g., light and/or energy reflections or emissions) within the environment. For instance, in outdoor environments, the solar radiation emitted from sunlit surfaces may have significantly greater intensities than comparable shaded areas, causing lidar background noise within the sunlit surfaces. The lidar background noise from solar radiation and/or other sources may present technical challenges for the lidar system in attempting to distinguish the background noise from the lidar pulse data reflected by surfaces and objects in the environment. These technical challenges are made more difficult when the lidar system uses a lower transmit power, allowing the background noise to obscure the lidar reflection data and decreasing the signal-to-noise ratio (SNR).

[0017] Additionally, lidar systems that include a relatively low-speed ADC can present additional technical challenges for detecting lidar background noise. Within a lidar system, an ADC may be used to generate the lidar sensor data output, based on the data received from the optical sensors. For example, photodetectors, photodiodes, and/or other optical sensors may generate analog signals that can be provided to the ADC to convert into a digital output signal for additional processing. ADCs may use integrators and other similar techniques to accumulate the amount of light received via the optical sensors over a period of time. For relatively low-speed ADCs, the light data may be calculated (e.g., accumulated and/or integrated) over relatively longer time ranges, which can obscure the lidar background noise and cause difficulties in determining the precise peak power level and/or the peak time for a lidar return pulse signal. For example, when a low-speed ADC accumulates light data over a time period that is longer than the round-trip time of the lidar pulse, the lidar system may be unable to distinguish how much of the accumulated light data is the result of the reflected lidar pulse and how much is the result of background noise. Such low-speed ADCs may nonetheless be advantageous over high-speed ADCs, such as with respect to cost, energy consumption, etc.
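
As a minimal illustration of this effect (a Python sketch with arbitrary, assumed values, not part of the disclosure), a single accumulated reading over a long window cannot be separated into its pulse and noise contributions:

    # Sketch (assumed values): a low-speed ADC that integrates over a window
    # longer than the pulse round trip reports one reading that mixes pulse
    # energy with background noise.
    dt_ns = 1.0                     # resolution of the underlying analog trace
    noise_level = 0.2               # constant background power (arbitrary units)
    trace = [noise_level] * 1000    # a 1,000 ns integration window
    for i in range(100, 150):       # the reflected pulse occupies only ~50 ns
        trace[i] += 1.0

    accumulated = sum(trace) * dt_ns            # the single reading the ADC outputs
    pulse_energy = 50 * 1.0 * dt_ns             # true pulse contribution (known here,
    noise_energy = accumulated - pulse_energy   # but not recoverable from one reading)
    print(accumulated, pulse_energy, noise_energy)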

[0018] In various examples, the lidar systems described herein may perform ADC sampling operations (e.g., accumulating and/or integrating the light data received from the optical sensors) over multiple time periods relative to the emission of a lidar pulse. In some instances, the lidar system may determine an estimated round-trip time for a lidar pulse (e.g., based on the range of the lidar system, the lidar pulse characteristics, the environment characteristics, etc.), and may determine a first time window during which the reflected light from the lidar pulse is likely to be received by the optical sensors. The lidar system also may determine one or more additional time windows during the “dwell time” (e.g., the time between lidar pulses) during which reflected light from the lidar pulse is less likely to be received. The lidar system may perform ADC sampling(s) during both time windows, and may compare the output from the first time window associated with the lidar pulse (e.g., which may include reflected light from the lidar pulse and lidar background noise) with the additional time window(s) during the dwell time (e.g., which may include primarily lidar background noise). By comparing the light energy accumulated during the different ADC samplings, the lidar system may determine the power level of the lidar background noise received from the environment, and may calibrate out the lidar background noise to more accurately determine the power level of the light reflected from the lidar pulse.

[0019] To determine the lidar background noise level, the ADC samples may be analyzed and/or compared using various different techniques. In some examples, the lidar system may use the ADC sample performed during the dwell time between lidar pulses to determine the background noise level at a particular location in the environment. As a non-limiting example of determining which samples to associate with the dwell time, a rising edge meeting or surpassing a threshold may be used to begin flagging ADC samples as being associated with a pulse, whereas a falling edge meeting the threshold may end the flagging of samples associated with the pulse. In other examples, estimated transit time and firing time may be used (as will be discussed herein) to determine those samples associated with the lidar pulse and those associated with the dwell time. For instance, the output from the ADC sample may represent the accumulated energy received by the optical sensors during the sampling time window. The lidar system may use the accumulated energy output and the duration of the time window to determine the background noise level during the dwell time (e.g., by estimating an average irradiance or, otherwise, an amount of energy received per unit time). In some instances, the lidar system may perform multiple ADC samplings during the dwell time associated with a lidar pulse (e.g., just before or just after the end of the estimated round-trip time of the lidar pulse), and may average the output readings from the multiple ADC samples to estimate the background noise level. After determining an estimated background noise level, the lidar system may subtract the background noise level from the ADC sample associated with the lidar pulse (e.g., the accumulated light received during the estimated round-trip time), to determine the peak power level and/or overall energy returned from the lidar return pulse signal.
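
The subtraction described above can be sketched as follows (illustrative Python; the function names, window durations, and readings are assumptions, not taken from the disclosure):

    def background_rate(dwell_samples, dwell_window_ns):
        # Estimated background power: energy per unit time over the dwell windows.
        return sum(dwell_samples) / (len(dwell_samples) * dwell_window_ns)

    def calibrated_pulse_energy(pulse_sample, pulse_window_ns,
                                dwell_samples, dwell_window_ns):
        # Subtract the background contribution expected within the pulse window.
        rate = background_rate(dwell_samples, dwell_window_ns)
        return pulse_sample - rate * pulse_window_ns

    # Example: one 334 ns pulse window and three 500 ns dwell windows.
    energy = calibrated_pulse_energy(pulse_sample=12.4, pulse_window_ns=334,
                                     dwell_samples=[3.1, 2.9, 3.0],
                                     dwell_window_ns=500)
    print(round(energy, 2))   # -> 10.4 after removing the background estimate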

[0020] The lidar system may perform ADC samplings over uniform time intervals and/or time intervals having different durations. When ADC samples are associated with different time durations, the output readings of the ADC samples may be adjusted (e.g., scaled) based on their time durations to provide consistent readings. In some cases, both the time duration associated with the ADC samples and the number of ADC samples that may be generated during the dwell time may be constrained by the characteristics of the ADC and the lidar system. As an example, a lidar system may use an ADC with a sampling rate of 2 MHz, which is capable of outputting a maximum of one sample every 500 ns. In this example, if the lidar system fires 355,000 laser pulses per second, one pulse will be fired approximately every 2,812 ns. Assuming a range of 50 m for the lidar system, the estimated maximum round-trip time for each pulse would be 334 ns. Therefore, in this example lidar system, there is 2,478 ns (2,812 ns - 334 ns) of dwell time (e.g., the time between the lidar return pulse signal and the firing of the next pulse) associated with each lidar pulse. During each dwell time, the ADC in this example is capable of outputting five samples. In some examples, the laser or detector may be turned off except for a shorter period of time, and the integrated value of the ADC may be averaged with respect to the amount of time the detector is on (as opposed to the length of the ADC window). In some examples in which the lidar operates in a bistatic mode, time periods may be sampled immediately before a subsequent pulse to ensure another return isn't contributing to the estimated background.
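
The timing arithmetic in this example can be checked with a short calculation (a sketch; note that the paragraph above rounds the firing interval slightly):

    C_M_PER_S = 299_792_458                       # speed of light
    sample_period_ns = 1e9 / 2e6                  # 2 MHz ADC -> 500 ns per sample
    firing_interval_ns = 1e9 / 355_000            # ~2,817 ns (text rounds to 2,812)
    round_trip_ns = 2 * 50 / C_M_PER_S * 1e9      # ~334 ns for a 50 m range
    dwell_ns = firing_interval_ns - round_trip_ns # ~2,483 ns (text: 2,478 ns)
    samples_per_dwell = dwell_ns / sample_period_ns  # ~4.97, i.e. about five samples
    print(firing_interval_ns, round_trip_ns, dwell_ns, samples_per_dwell)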

[0021] In some cases, the lidar system may use the lidar system capabilities and configuration data to predetermine the start time and duration of the ADC sampling window associated with a lidar pulse. For instance, when the firing time of a lidar pulse and the estimated range associated with that lidar pulse are known, the lidar system may compute the estimated (maximum) round-trip time for the lidar pulse and determine the ADC sampling window for the lidar pulse as the time window between the firing time and the estimated round-trip time. In other cases, the lidar system need not predetermine which of the ADC samplings is associated with a lidar pulse and which are associated with the dwell time between lidar pulses. For instance, the lidar system may perform a number of ADC samplings close together in time near a lidar pulse, and may compare the sizes of the output readings of the ADC samples to determine which are associated with the lidar pulse and which are associated with the dwell time.
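
A minimal sketch of the latter approach, using an assumed magnitude heuristic (the factor-of-two threshold is illustrative, not from the disclosure):

    def split_pulse_and_dwell(samples, factor=2.0):
        # Label readings well above the minimum as pulse-related; the rest
        # are treated as dwell-time (background) readings.
        floor = min(samples)
        pulse = [s for s in samples if s > factor * floor]
        dwell = [s for s in samples if s <= factor * floor]
        return pulse, dwell

    pulse, dwell = split_pulse_and_dwell([3.0, 11.8, 3.2, 2.9, 3.1])
    print(pulse, dwell)   # -> [11.8] and the four dwell-time readings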

[0022] Some lidar systems may use time-to-digital converters (TDCs) instead of, or in addition to, ADCs to evaluate the data received from the optical sensors and output digital sensor data. For example, lidar systems may use TDCs to implement power level thresholds based on the analog light data received from the optical sensors. When a power level threshold is triggered, the TDC may output the corresponding timing information. As described below in more detail, the lidar system may use a combination of TDC thresholds to determine the start and end times of a lidar return pulse signal, and/or the peak power level or peak time associated with a lidar pulse. In some examples, after using TDC thresholds to determine the start time, peak time, and/or end time of a lidar return pulse signal, the lidar system may initiate ADC samplings based on the determined times to determine the peak power level of the return signal. Additional TDC thresholds may be used to determine the dwell time associated with the lidar pulse, and the lidar system may initiate ADC samplings within the determined dwell time to determine the background noise power level.

[0023] After using the various techniques described herein to determine the lidar background noise level of a particular location in the environment, the lidar system may use the background noise level to modify/calibrate the lidar reflectivity data associated with the location. For example, the lidar system may use the reflectivity data associated with a lidar pulse to determine whether or not to output a lidar point based on the pulse, and to determine the attributes of the lidar point. By calibrating the reflectivity data to account for the background noise at that location in the environment, the lidar system may reduce false positive and false negative lidar point determinations, and may provide more accurate power level data associated with the determined lidar points. The calibration of the lidar reflectivity data based on the background noise also may improve the signal-to-noise ratio (SNR), by reducing some or all of the background noise in the reflectivity data.

[0024] In some examples, the lidar system also may be reconfigured based on the determined levels of background noise in the environment. For instance, the laser transmit power, aperture size, optical gain, and/or other features of the lidar system may be modified to reduce background noise and/or improve the signal-to-noise ratio (SNR) of the lidar data. Any of these characteristics of the lidar system may be modified, individually or in any combination, in response to the level of background noise in the environment. As a non-limiting example, in regions of the environment with high levels of lidar background noise, the lidar system may increase transmit power to better distinguish reflectivity lidar data from background noise. Additionally or alternatively, the lidar system may reduce the aperture size in high background noise regions. In contrast, in regions with lower levels of lidar background noise, the lidar system may decrease transmit power and/or increase the aperture size to save energy and/or improve the SNR of the lidar data.
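
One hypothetical reconfiguration policy along these lines, with invented thresholds and step sizes (a sketch only, not the disclosed control logic):

    def adjust_config(config, noise_level, high=5.0, low=1.0):
        # Raise transmit power / shrink aperture in noisy regions, and the
        # reverse in quiet regions; all thresholds and steps are assumptions.
        if noise_level > high:
            config["transmit_power"] *= 1.2   # separate signal from background
            config["aperture"] *= 0.9         # admit less ambient light
        elif noise_level < low:
            config["transmit_power"] *= 0.9   # save energy where noise is low
            config["aperture"] *= 1.1
        return config

    print(adjust_config({"transmit_power": 1.0, "aperture": 1.0}, noise_level=6.2))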

[0025] As noted above, different locations in the same environment may have different levels of background noise. For instance, shaded surfaces on the ground, road, buildings, or other objects may emit/reflect less solar radiation and may have lower levels of lidar background noise, whereas sunlit surfaces in the same environment may emit/reflect more solar radiation and may have higher levels of lidar background noise. Therefore, the techniques described herein for determining and calibrating out lidar background noise may be applied for a single laser pulse emitted to a particular location in the environment. In such cases, the lidar system may compute a different background noise level for each different laser and/or each laser pulse emitted by the system.

[0026] In other cases, the lidar system may use these techniques over multiple laser pulses emitted toward the same location to determine the average background noise of the location. For instance, when the lidar system is stationary (e.g., mounted to a fixed location in a security system), it may determine the average background noise of a location based on multiple laser pulses emitted from the same laser and/or in the same (or a substantially similar) direction. When the lidar system itself is not stationary (e.g., a lidar on an autonomous vehicle), it may determine the estimated distance between the target locations of different lidar pulses (e.g., based on the movement of the autonomous vehicle between the laser emissions, the lidar range, etc.). The lidar system then may use a distance threshold, time threshold, and/or threshold number of pulses to determine when multiple pulses are targeted to the same region and are close enough to be used to determine the average background noise of the region. In at least some examples, lidar returns may be selected (or otherwise determined) which are associated with a same or similar target region despite movement of the sensor (e.g., rotations) and/or system to which it is mounted (e.g., the vehicle).
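
A minimal sketch of such region-based grouping, assuming each return carries an estimated target position and using an illustrative distance threshold:

    import math

    def average_region_noise(returns, center, max_dist_m=0.5):
        # Average the noise estimates of returns aimed near a target location;
        # returns beyond the distance threshold belong to other regions.
        near = [r["noise"] for r in returns
                if math.dist(r["target_xy"], center) <= max_dist_m]
        return sum(near) / len(near) if near else None

    returns = [{"target_xy": (10.0, 2.0), "noise": 3.1},
               {"target_xy": (10.2, 2.1), "noise": 2.8},
               {"target_xy": (25.0, 7.0), "noise": 0.4}]   # different region
    print(average_region_noise(returns, center=(10.1, 2.0)))  # -> 2.95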

[0027] The various systems and techniques described herein may be directed to determining and calibrating out lidar background noise to improve the quality of lidar data used to generate surface detection data, object detection data, and/or other data that may be used by a vehicle, such as an autonomous vehicle, to more accurately identify objects in an environment. Using this improved data, such a vehicle may generate safer and more efficient trajectories for use in navigating through an environment. In examples, the systems and techniques described herein may also, or instead, enable a vehicle, such as an autonomous vehicle, to more accurately predict trajectories of other vehicles and/or mobile objects in an environment and therefore operate more safely in the environment using such predictions. In particular examples, the systems and techniques described herein can utilize data structures containing surface detection data and/or object detection data based on the disclosed improved analysis of returned lidar pulses to more accurately and efficiently determine the locations of objects in an environment and the proximity of an autonomous vehicle to such objects. By using the lidar pulse analysis techniques described herein to more accurately determine and calibrate out lidar background noise, the examples described herein may result in increased certainty and accuracy of object detections, thereby allowing an autonomous vehicle to generate more accurate and/or safer trajectories for the autonomous vehicle to traverse in the environment.

[0028] For example, techniques described herein may increase the reliability of the determination of locations, dimensions, and/or other physical parameters of objects in the environment, reducing the likelihood of failing to detect or inaccurately detecting an object. That is, techniques described herein provide a technological improvement over existing object detection, classification, tracking, and/or navigation technology. In addition to improving the accuracy of object detections and determinations of the size, shape, and location of such objects, the systems and techniques described herein can provide a smoother ride and improve safety outcomes by, for example, more accurately providing safe passage to an intended destination through an environment that is also occupied by one or more objects.

[0029] The techniques described herein may also improve the operation of computing systems and increase resource utilization efficiency. For example, vehicle computing systems may perform object detection more efficiently using the techniques described herein, because the disclosed examples for calibrating out background noise may improve the signal-to-noise ratio (SNR) of the lidar data. As a result, these techniques may permit the lidar system to use lower transmit powers for lidar pulses, and/or may require the processing of fewer returned lidar pulses and/or associated data than would be required using conventional techniques.

[0030] The systems and techniques described herein can be implemented in several ways. Example implementations are provided below with reference to the following figures. Although discussed in the context of an autonomous vehicle, the techniques described herein can be applied to a variety of systems (e.g., a sensor system or a robotic platform) and are not limited to autonomous vehicles. For example, the techniques described herein may be applied to semi-autonomous and/or manually operated vehicles. In another example, the techniques can be utilized in an aviation or nautical context, or in any system involving objects or entities having dimensions and/or other physical parameters that may not be known to the system. Further, although discussed in the context of pulses originating as lidar emissions, detection using lidar sensors, and processing using lidar sensor data, other types of sensors and emitters are contemplated, as well as other types of sensor data. Furthermore, the disclosed systems and techniques may include processing using various types of components and various types of data and data structures, including, but not limited to, various types of image data or sensor data (e.g., stereo cameras, time-of-flight data, radar data, sonar data, and the like). Additionally, the techniques described herein can be used with real data (e.g., captured using sensor(s)), simulated data (e.g., generated by a simulator), or any combination of the two.

[0031] FIG. 1 is a pictorial flow diagram of an example process 100 for determining and calibrating out background noise within reflected lidar pulses in an environment. As shown in this example, one or more operations of the process 100 may be implemented by a lidar system 102. In some examples, the lidar system 102 may be implemented within a vehicle computing system, such as by using one or more of the components and systems illustrated in FIG. 10 and described below. For example, one or more components and systems can include those associated with the one or more sensor systems 1010, the perception component 1022, and/or the planning component illustrated in FIG. 10. In other examples, one or more components and systems can include those associated with the one or more sensor systems 1106 and/or the perception component 1122 of the vehicle 1102 illustrated in FIG. 11. In some examples, the one or more operations of the process 100 may also, or instead, be performed by a remote system in communication with a vehicle, such as the perception component 1140 of the computing device(s) 1134 illustrated in FIG. 11. Such processes may also, in turn, be performed by the device itself (e.g., using onboard electronics) such that a standalone device may produce such signals without the need for additional computational resources. In still other examples, the one or more operations of the process 100 may be performed by a combination of a remote system and a vehicle computing system. However, the process 100 is not limited to being performed by such components and systems, and the components and systems of FIGS. 10 and 11 are not limited to performing the process 100.

[0032] At operation 104, a lidar system 102, which may be associated with an autonomous vehicle or may be implemented in a different system/environment, may emit a lidar pulse into an environment. The lidar system 102 may include one or more lidar emitters and one or more lidar sensors (e.g., photodetectors). The environment into which the lidar pulse is emitted may include one or more objects that a vehicle computing system configured at the autonomous vehicle may detect. For example, the lidar system 102 may be implemented within a vehicle computing system of an autonomous vehicle that includes one or more sensors configured to detect stationary objects (e.g., buildings, road markings, signs) and moving objects (e.g., people, bicycles, other vehicles) in the environment. As shown in box 106, the lidar system 102 may be implemented within an autonomous vehicle traversing a driving environment. In this example, the lidar system 102 emits periodic lidar pulses in different directions from the vehicle, receives return signals from the lidar pulses, and analyzes the return signals to detect various objects 108-112 in the environment. Although the lidar system 102 may emit any number of lidar pulses (e.g., based on the number of lasers and firing rates) at various angles/directions into the environment, the operations below may refer to receiving and analyzing the return signal from a single lidar pulse.

[0033] At operation 114, the lidar system 102 performs a first ADC sampling during a first time window associated with a lidar pulse. In some examples, the lidar system 102 may determine the first time window as the period of time during which a significant amount of the reflected light from the laser pulse will be received by the lidar sensors. As shown in box 116, a first sampling time window 118 corresponds to the time during which most (or all) of the return signal from a lidar pulse is captured by the lidar sensors. In some examples, the lidar system 102 may determine an estimated maximum round-trip time associated with the lidar pulse emitted in operation 104. Different lidar pulses emitted by the same lidar system (and/or same laser) may have different estimated maximum round-trip times, based on the characteristics of the laser pulse (e.g., intensity, frequency, etc.) and the characteristics of the environment (e.g., distance to a surface). For instance, if the likely maximum range of the emitted pulse is 50 meters, then the estimated maximum round-trip time for the pulse would be 334 ns (e.g., 50 m * 2 / c, where c equals the speed of light). To determine the first time window in operation 114, the lidar system 102 may determine a start time of the time window corresponding to the start time of the lidar pulse (or shortly thereafter) and the end time of the time window corresponding to the start time plus the estimated maximum round-trip time for the pulse.
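
Written out as a short calculation (a sketch using the 50 m example from the paragraph above; c is the speed of light):

    C_M_PER_S = 299_792_458   # speed of light

    def pulse_sampling_window(fire_time_ns, max_range_m):
        # First ADC window: from the firing time to the firing time plus the
        # estimated maximum round-trip time (2 * range / c).
        round_trip_ns = 2 * max_range_m / C_M_PER_S * 1e9
        return fire_time_ns, fire_time_ns + round_trip_ns

    start_ns, end_ns = pulse_sampling_window(fire_time_ns=0.0, max_range_m=50)
    print(f"{start_ns:.0f}-{end_ns:.0f} ns")   # -> 0-334 ns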

[0034] As described above, the lidar system 102 may perform an ADC sampling by computing the accumulated light received by the lidar sensors (e.g., photodetectors) associated with the lidar pulse, over the sampling time window. For instance, in operation 114, the ADC may use an integrator to determine the accumulated light received during the first time window, and may output the accumulated light data (e.g., as a power or energy reading) as the ADC sample for the first time window.

[0035] At operation 120, the lidar system 102 performs a second ADC sampling during a second time window that is different from the first time window. In various examples, the lidar system 102 may determine the second time window as the period of time during which little or none of the reflected light from the laser pulse will be received by the lidar sensors. As shown in box 122, a second sampling time window 124 corresponds to the time after the first time window during which little (or none) of the return signal from the lidar pulse is captured by the lidar sensors. In some examples, the lidar system 102 may determine the second time window as a period of time after the first time window, during the “dwell time” between two lidar pulses. The dwell time associated with a lidar pulse may be the time period between the return signal of the lidar pulse (e.g., after the estimated maximum round-trip time) and the firing of the next lidar pulse. In various examples, the second time window in operation 120 may be any period of time during the dwell time associated with a lidar pulse. In some cases, the second time window may be the same length as the first time window, which may simplify the comparison of the ADC samplings. In other cases, the second time window may be longer or shorter than the first time window. In some examples, the lidar system 102 may perform multiple additional ADC samplings during the dwell time of a lidar pulse in operation 120, and may average the output readings from the multiple ADC samples to estimate the background noise level.

[0036] To perform the ADC sampling in operation 120, the lidar system 102 may use similar or identical techniques to those used to perform the ADC sampling in operation 114. For example, the ADC within the lidar system 102 may use an integrator to accumulate the light received by the lidar sensors (e.g., photodetectors) over the second time window, and may output the accumulated light data as the power or energy reading for the second time window.

[0037] At operation 126, the lidar system 102 may determine the level of lidar background noise based on the first ADC sampling performed in operation 114 and the second ADC sampling performed in operation 120. The lidar system 102 then may calibrate out the determined background noise from the lidar reflectivity data. For example, based on the second ADC sample (and/or additional ADC samples captured during the dwell time associated with a lidar pulse), the lidar system 102 may determine an average background noise level. The lidar system 102 then may subtract the average background noise level (e.g., an energy or power value) from the first ADC sample to determine the reflectivity data associated with the lidar pulse. By subtracting the determined background noise from the lidar return signal, the resulting lidar reflectivity data may provide improved and more accurate signal characteristics, and an increased signal-to-noise ratio (SNR) for the lidar data. Box 128 in this example represents the improved and more accurate lidar data for an environment, after calibrating out the lidar background noise.

[0038] As discussed above, the operations described in FIG. 1 may be performed for a single lidar pulse, to determine the level of lidar background noise at the location where the pulse was emitted. In some examples, groups of multiple lidar pulses emitted to the same location/region of the environment (e.g., within a distance threshold) may be analyzed together to determine an average background noise level for the region. However, different locations/regions in the environment may have different levels of background noise (e.g., shaded versus sunlit regions, regions having different surface materials, etc.). Additionally, the level of the background noise for a particular location/region may change in a relatively short period of time, for instance, when a sunlit region becomes shaded or vice versa. As a result, the operations described in FIG. 1 may be performed separately for each different location in the environment, and/or may be performed periodically in the same region to determine updated background noise levels.

[0039] Although this example depicts performing the “pulse time” ADC sampling during the first sampling window 118 before the “dwell time” ADC sampling during the second sampling window 124, in other examples the dwell time ADC sampling can be performed prior to the pulse time ADC sampling. As described below, there may be advantages in some cases of performing dwell time ADC sampling during sampling windows that are just prior to the beginning of a lidar pulse being fired, so that during the sampling window the lidar sensors may receive none (or a minimal amount) of reflected light from any previous lidar pulses. Further, in some examples, techniques described herein may use one or more dwell time ADC sampling windows to measure lidar background noise, without necessarily performing any pulse time ADC sampling.

[0040] FIGS. 2 and 3 depict example driving environments including lidar background noise caused by solar radiation, and the corresponding effect on the lidar reflectivity data. As described below, the lidar data depicted in FIGS. 2 and 3 may represent lidar data generated without using the various techniques described herein for determining and calibrating out lidar background noise. As a result, the lidar data shown in FIGS. 2 and 3 may include a number of inaccuracies (e.g., false positives and/or false negative lidar detections), caused by the regions of solar radiation in the environment.

[0041] FIG. 2 depicts an example driving environment 200 associated with an autonomous vehicle (or other sensor system). In this example, image 202 represents a visual image captured by a camera associated with the autonomous vehicle while traversing the driving environment. Region 204 within the image 202 represents a shaded region caused by the shading from one or more buildings or trees in the environment. Lidar data 206 shows a top-down rendering of the lidar data generated by a lidar system based on the driving environment 200. Region 208 within the lidar data 206 corresponds to region 204 within the image 202. In this example, within the lidar data 206, the colors purple and dark blue indicate lower reflectivity, while red and yellow indicate higher reflectivity. As shown in this example, due to the background noise caused by solar radiation, the lidar data region 208 shows a lower level of reflectivity than other similar road surfaces in the environment 200. This example illustrates that the shaded areas (e.g., region 204) degrade the measured reflectivity of the road surface.

[0042] FIG. 3 depicts another example driving environment 300 associated with an autonomous vehicle (or other sensor system). In this example, image 302 represents a visual image captured by a camera associated with the autonomous vehicle while traversing the driving environment 300. Region 304 within the image 302 represents a patch of sunlight passing between two buildings within an otherwise shaded area of the street. Lidar data 306 shows a top-down rendering of the lidar data generated by a lidar system based on the driving environment 300. Region 308 within the lidar data 306 corresponds to region 304 within the image 302. As shown in this example, due to the background noise caused by solar radiation, the patch of sunlight in region 304 causes a difference in reflectivity compared to the surrounding shaded areas.

[0043] Both the examples in FIG. 2 and FIG. 3 illustrate that, when the background noise due to solar radiation is not calibrated out, the presence of various shaded and sunlit areas in the environment may cause differences in reflectivity which can degrade or distort the measured reflectivity of the surfaces in the environment. As shown in these examples, background noise from solar radiation may result in false positive lidar point detections (e.g., region 308 of FIG. 3) and/or false negatives where lidar points based on reflectivity are not detected (e.g., in region 208 of FIG. 2).

[0044] FIG. 4 depicts an example graph 400 representing the return signals (or lidar return pulses) associated with two lidar pulses (which also may be referred to as laser firings) emitted by a lidar system 102. In this example, graph 400 depicts a first lidar pulse 402 and a second lidar pulse 404 that have been fired in a lidar system 102, as well as the subsequent first return signal 406 associated with the first lidar pulse and the second return signal 408 associated with the second lidar pulse. The first and second lidar pulses in this example may represent laser pulses emitted from two different lasers of the lidar system 102, or may represent consecutive pulses emitted by the same laser at different times.

[0045] As shown in this example, after the lidar system 102 fires the first laser pulse 402, the lidar sensors (e.g., photodetectors) may receive a lidar return pulse 406 associated with the first laser firing. Although it may be difficult or impossible to determine the exact starting time and/or ending time of a laser return signal, a laser return signal (which also may be referred to as the lidar return pulse) may generally correspond to the period of time during which most (or all) of the reflected light from the lidar pulse is received by the lidar sensors. In some examples, the laser return signal may correspond to a period of time starting at the firing of the laser and ending after the estimated maximum round-trip time of the lidar pulse. As described above, the lidar system 102 may estimate the maximum round-trip time of a lidar pulse based on a number of factors, such as the transmit power of the lidar pulse, the wavelength of the laser, and/or the characteristics of the location to which the laser is directed.

[0046] As depicted in FIG. 4, after receiving the return signal 406 associated with the first lidar pulse 402, the lidar sensors may experience a period of “dwell time” before the next lidar pulse 404 is fired. As described above, during the dwell time associated with a lidar pulse little (or none) of the reflected light from the lidar pulse may be received by the lidar sensors. However, as shown in this example, even during the dwell time associated with a lidar pulse, the lidar sensors may receive light from various background noise light sources (e.g., reflected sunlight, object surfaces emitting heat, etc.). In this example, the dwell time between the lidar pulses indicates a relatively consistent level of background noise (e.g., a power level greater than zero) which may be caused by solar radiation at the location where the laser is directed.

[0047] FIGS. 5A and 5B depict two example graphs 500 and 502 illustrating a technique for determining the lidar background noise level associated with a lidar pulse 504. As shown in FIG. 5A, the lidar system 102 may determine a first time window 508 during the dwell time in between lidar pulses, and may perform a first ADC sampling during the first time window 508. In particular, the time window 508 in this example may be selected as a time window just prior to the firing of the lidar pulse 504. As described above, during an ADC sampling, the ADC of the lidar system 102 may determine the accumulated amount of light (e.g., using an integrator) received by the analog lidar sensor elements (e.g., photodetectors) over the sampling time window. Graph 500 depicts the first ADC sampling window 508, including shading indicating the accumulated light data received during the first sampling window. As shown in this example, because the first ADC sampling window 508 is just prior to the beginning of the lidar pulse 504, the lidar sensors may receive none of the reflected light from the lidar pulse 504 (and none or a minimal amount of reflected light from any pulses previous to lidar pulse 504).

[0048] As shown in FIG. 5B, lidar system 102 may perform an additional ADC sampling during a second ADC sampling window 510. Unlike the first ADC sampling window 508, the second ADC sampling window 510 is not associated with the dwell time before lidar pulse 504. Rather, the second ADC sampling window 510 may begin simultaneously with or shortly after the beginning of the lidar pulse 504. As noted above, the lidar system 102 may determine the end of the second ADC sampling time window 510 based on the estimated maximum round-trip time for the first lidar pulse 504. For instance, if the range of the lidar pulse is approximately 30 meters, then the duration of the sampling time window 510 may be set to approximately 200 nanoseconds based on determining that most or all of the reflected light from the lidar pulse should return to the lidar sensor within that time duration. For longer range lidar pulses (e.g., 50 meter range), the lidar system 102 may use longer sampling time windows (e.g., 300 nanosecond duration), and so on.

[0049] As noted above, the lidar system 102 may determine “pulse time” ADC sampling time windows to correspond to the time duration when most or all of the return signal from the lidar pulse is likely to be received by the lidar sensors, and separate “dwell time” ADC sampling time windows to correspond to periods between lidar pulses, during which the lidar sensors may receive relatively little (or none) of the reflected light from the lidar pulse. For instance, in addition to the first ADC sampling time window 508, the lidar system 102 may determine additional “dwell time” sampling time windows using any time period between the end of the second ADC sampling time window 510 and the start of the next lidar pulse 506. As shown in this example, in some cases it may be advantageous to use a “dwell time” ADC sampling window just before a lidar pulse in order to minimize the amount of reflected light received from the previous pulse. However, in other examples, any ADC sampling window between the end of the second ADC sampling time window 510 and the start of the next lidar pulse 506 may be used as a “dwell time” sampling window, during which the lidar sensors may receive light from background noise sources (e.g., solar radiation) but little or no light from previous reflected lidar pulses.

[0050] As described above, the lidar system 102 may use the accumulated light data from the ADC samples of the first time window 508 and the second time window 510 to compute the background noise level associated with the lidar pulse 504. For example, lidar system 102 may use the duration of the first ADC sampling time window 508, and the accumulated light received during the first ADC sample (e.g., the integral of the shaded region in graph 502), to determine an average background noise level. The lidar system 102 may then subtract the average background noise level determined from the first ADC sampling from the second ADC sampling shown in FIG. 5B to improve the accuracy of the reflectivity data for the lidar return pulse.

[0051] As shown in this example, the total dwell time between lidar pulses 504 and 506 may be longer than the duration of a “dwell time” sampling window (e.g., the first ADC sampling time window 508). As a result, in some cases the lidar system 102 may determine a duration for dwell time ADC sampling windows that is longer than the duration of “pulse time” ADC sampling windows. In such cases, the ADC may integrate to determine the accumulated amount of light over the longer time period, and then divide by the longer time duration to determine the average background noise level. In other cases, the “dwell time” ADC sampling windows may be shorter in duration than the “pulse time” ADC sampling windows, in which case the lidar system may integrate to determine the accumulated amount of light over the shorter dwell time period, and then divide by the shorter time duration to determine the average background noise level.
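
A minimal sketch of this duration scaling (illustrative values; the 334 ns pulse window is the example used earlier in this description):

    def average_rate(accumulated_energy, window_ns):
        # Energy per nanosecond, so windows of different lengths are comparable.
        return accumulated_energy / window_ns

    dwell_rate = average_rate(accumulated_energy=7.4, window_ns=2000)  # long window
    background_in_pulse_window = dwell_rate * 334   # scale to a 334 ns pulse window
    print(round(background_in_pulse_window, 3))     # -> 1.236 (to be subtracted)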

[0052] Additionally or alternatively, the lidar system 102 may perform multiple ADC samplings during the dwell time associated with the lidar pulse 504. The multiple ADC samplings may be averaged (or otherwise combined) to determine the background noise level to be calibrated out of the lidar return pulse.

[0053] For example, referring now to FIG. 6, an example graph 600 is shown illustrating another technique for determining the background noise associated with a lidar pulse. The example lidar pulse(s) shown in FIG. 6 may be similar or identical to the lidar pulse(s) shown in FIGS. 5A and 5B. In this example, the lidar system 102 has determined, based on the amount of dwell time between the end of the lidar return pulse and the next laser firing, that five ADC samples may be captured and evaluated during the dwell time. In this example, the ADC sample captured during the lidar return pulse (e.g., ADC Sample 1) and each of the ADC samples captured during the dwell time (e.g., ADC Sample 1a through ADC Sample 1e) have the same duration. However, as described above, in other examples the ADC samplings performed during the return pulse and the dwell time may have different time window durations.

[0054] When multiple ADC samplings are performed during the dwell time associated with a lidar pulse, the accumulated light data from the samples may be averaged to determine the average background noise level associated with the lidar pulse. In other examples, the lidar system 102 may determine the background noise based on the median of ADC Sample 1a through ADC Sample 1e.
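As a hypothetical illustration of combining multiple dwell-time samples, the sketch below computes both the mean and the median of five accumulated-light values; the numeric values are invented for illustration. The median can be more robust when a stray late return contaminates one of the dwell windows.

    import statistics

    # Hypothetical accumulated-light values for ADC Sample 1a through ADC Sample 1e.
    dwell_samples = [0.42, 0.40, 0.45, 0.39, 0.88]  # 0.88 could be a stray late return

    mean_noise = statistics.mean(dwell_samples)      # 0.508, skewed by the outlier
    median_noise = statistics.median(dwell_samples)  # 0.42, robust to the outlier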

[0055] FIG. 7 depicts an example graph 700 illustrating various techniques for determining particular power levels and/or corresponding particular times associated with a lidar pulse. As described above, certain lidar systems may include time-to-digital converters (TDCs) configured to evaluate the power levels associated with lidar return pulses. For instance, a lidar system 102 may include TDCs instead of, or in addition to, ADCs. TDCs, like ADCs, may receive data from the optical sensors of the lidar system 102, and output digital sensor data representing the amount of light data received by the optical sensors.

[0056] As shown in FIG. 7, TDCs may be used to implement power level thresholds based on the amount of analog light data received via the optical sensors. For example, graph 700 depicts two separate TDC power thresholds 702 and 704. In this example, when the amount of light received by the lidar sensor crosses either power threshold, in either direction, the TDC may output an indication of which TDC power threshold was crossed and the time at which the threshold was crossed. Graph 700 depicts a return signal (a return lidar pulse) caused by a lidar pulse, followed by a dwell time associated with the lidar pulse. In this example, based on the power level thresholds 702 and 704 implemented by the TDC, the TDC may output a first measurement indicating time t1 in response to the lidar return pulse signal crossing the first TDC threshold 702, a second measurement indicating time t2 in response to the lidar return pulse signal crossing the second TDC threshold 704, a third measurement indicating time t3 in response to the lidar return pulse signal crossing the second TDC threshold 704 again, and a fourth measurement indicating time t4 in response to the lidar return pulse signal crossing the first TDC threshold 702 again.
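The threshold comparisons of FIG. 7 would be performed in TDC hardware, but the following software sketch models the behavior for a sampled waveform; the function name is hypothetical, and linear interpolation between samples is an assumption made for illustration rather than a feature of the disclosed system.

    def threshold_crossings(times: list[float], power: list[float],
                            threshold: float) -> list[tuple[float, str]]:
        """Return (time, direction) pairs where the power signal crosses a threshold."""
        crossings = []
        for i in range(len(times) - 1):
            p0, p1 = power[i], power[i + 1]
            if (p0 - threshold) * (p1 - threshold) < 0:  # sign change: a crossing occurred
                frac = (threshold - p0) / (p1 - p0)       # interpolate the crossing time
                t = times[i] + frac * (times[i + 1] - times[i])
                crossings.append((t, "rising" if p1 > p0 else "falling"))
        return crossings

Applied to the return pulse of graph 700, the lower threshold 702 would yield the rising/falling pair (t1, t4), and the higher threshold 704 the pair (t2, t3).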

[0057] In some examples, the lidar system 102 may use TDC power thresholds to determine any or all of the ADC sampling time windows described herein. For instance, in response to the lidar return pulse signal exceeding the first threshold 702 at time t1, and/or in response to the lidar return pulse signal exceeding the second threshold 704 at time t2, the lidar system 102 may initiate one or more ADC sampling windows to measure the reflectivity associated with the lidar return pulse. Additionally or alternatively, in response to the lidar return pulse signal falling below the second threshold 704 at time t3 and/or falling below the first threshold 702 at time t4, the lidar system 102 may end ADC sampling time windows associated with the lidar return pulse. Similarly, the lidar system 102 may use the TDC power thresholds to determine when the dwell time associated with a lidar pulse has begun (e.g., in response to the lidar return pulse signal falling below the first threshold 702 at time t4), and may use the outputs of the TDC to trigger an ADC sampling to measure the light received during a time window within the dwell time.

[0058] Further, in some examples, TDC power thresholds may be used to determine a maximum power level (e.g., peak 706) associated with a return lidar pulse. In such examples, after determining a background noise level associated with a lidar pulse (using any of the various techniques described herein), the lidar system 102 may subtract the background noise level from peak 706 of the lidar pulse to determine a calibrated peak value that represents the peak reflectivity associated with the lidar pulse (e.g., excluding the background noise).
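As a one-line worked example of that subtraction (values invented purely for illustration):

    peak_706 = 3.2          # measured peak power at 706 (arbitrary units)
    background_level = 0.4  # estimated via any of the techniques above
    calibrated_peak = peak_706 - background_level  # 2.8: calibrated peak reflectivity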

[0059] FIGS. 8A and 8B show two example graphs 800 and 802 illustrating additional techniques for determining the level of background noise associated with one or more lidar pulses. In this example, FIG. 8A shows a first graph 800 depicting a first lidar pulse 804 emitted by a laser in a lidar system 102, as well as the lidar return pulse signal associated with the first lidar pulse. FIG. 8B shows a second graph 802 depicting a second lidar pulse 806 emitted by a different laser (or the same laser) at a different time, and the lidar return pulse signal associated with the second lidar pulse.

[0060] In this example, the lidar system 102 also may perform an ADC sampling associated with each lidar pulse. In contrast to the smaller ADC sampling time windows described in the above examples, in this example, the lidar system 102 may use larger time windows for the ADC samplings, corresponding to the entire time period between consecutive lidar pulses. In graph 800, shaded area 810 may represent the accumulated amount of light (e.g., determined using an integrator) detected by the optical sensors of the lidar system 102 between time t1 (the firing of the first lidar pulse 804) and time t2 (the firing of the second lidar pulse 806). Similarly, in graph 802, the shaded area 812 may represent the accumulated amount of light detected by the optical sensors of the lidar system 102 between time t2 (the firing of the second lidar pulse 806) and time t3 (the firing of the third lidar pulse 808).

[0061] In this example, the first lidar pulse 804 and second lidar pulse 806 may be directed to the same location or a nearby location within the same region of the environment. Although they are depicted as consecutive pulses in this example, in other examples they may be non-consecutive. To determine the level of background noise associated with the location/region where the lidar pulses were directed, the lidar system 102 may vary the transmit power between the first lidar pulse 804 and the second lidar pulse 806, and then analyze and compare the ADC samples associated with the lidar pulses to determine the background noise level. As shown in this example, the transmit power of the first lidar pulse in FIG. 8A is lower than the transmit power of the second lidar pulse in FIG. 8B. Due to the difference in laser transmit powers, the size of the ADC sample (e.g., shaded region 810) associated with the first lidar pulse 804 may be smaller than the size of the ADC sample (e.g., shaded region 812) associated with the second lidar pulse 806. Because the lidar pulses are directed to the same location/region at nearly the same time, the lidar system 102 may assume the same background noise level associated with the ADC samples 810 and 812. Additionally, the lidar system 102 may assume that the difference in the reflectivity of the lidar pulses 804 and 806 may be proportional to (or otherwise correlated with) the difference in the transmit power. Based on these assumptions, the lidar system 102 can compute the level of background noise associated with the lidar pulses, based on the ADC samples 810 and 812.
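Under the stated assumptions (equal sampling windows, equal background, and return energy proportional to transmit power), the two samples form a pair of linear equations, S1 = B + k * P1 and S2 = B + k * P2, which can be solved for the background term B. The Python sketch below is a hypothetical illustration of that algebra, not the claimed implementation.

    def background_from_two_powers(s1: float, p1: float,
                                   s2: float, p2: float) -> float:
        """Solve S_i = B + k * P_i for the accumulated background B.

        s1, s2: accumulated ADC samples (e.g., shaded regions 810 and 812)
        p1, p2: the differing transmit powers of the two lidar pulses
        """
        k = (s2 - s1) / (p2 - p1)  # reflectivity scale factor shared by both pulses
        return s1 - k * p1         # background accumulated over the sampling window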

[0062] Though not illustrated in FIG. 8, an additional or alternative technique may be used to derive the background irradiance based on similar location data. As a non-limiting example, pulses may be chosen in which no surface reflection is expected (e.g., as may be derived from using map data, a current field of view of the lidar sensor, and/or the position/orientation of the sensor in the environment). In such examples, lidar channels associated with no return may be selected which are contemporaneous (or substantially so, e.g., within some threshold amount of time) and/or proximate to those that do have returns. The relative difference (or other combination) of integrated received data may be used for calibration in those instances. In at least some examples, the laser may randomly (or with some pattern) refrain from firing. In such examples, the associated detector will be associated with background flux. In at least some examples, the background may be stored and subtracted from those subsequent lidar points pointing at a same target area (e.g., as would have been observed by the original laser if it had been fired), as sketched following paragraph [0064] below. In various examples, a dedicated sensor may be used to measure the background noise in addition to the sensor associated with the lidar system. In some such examples, one or more such additional sensors may be directed to the same or similar target areas as those associated with the lidar.

[0063] FIG. 9 depicts another example driving environment 900 associated with an autonomous vehicle or other sensor system. In this example, image 902 shows a visual image captured of the environment. Regions 904 within the image 902 represent shaded regions on the road surface, while regions 906 represent unshaded regions on the same road surface. As described above, shaded and unshaded areas on the same surface may have different levels of lidar background noise. Using the techniques described herein, the lidar system 102 may determine the background noise levels associated with the shaded regions 904, and may separately determine the background noise levels of the unshaded regions 906 on the road surface (as well as determining the lidar background noise levels for any other regions in the environment).

[0064] After determining the background noise levels for the shaded areas and unshaded areas in the environment, the lidar system 102 may calibrate the lidar data to exclude the differences in background noise between areas on the same surface. Lidar data 908 represents a rendering of the lidar data points generated by a lidar system 102 based on the example driving environment 900. As shown in this example, the differences in lidar background noise between the shaded regions 904 and unshaded regions 906 of the road surface have been calibrated out (e.g., by subtracting the respective background noise levels from the reflectivity data for each region) to provide calibrated lidar data output. In contrast to the examples shown above in FIGS. 2 and 3, the calibrated lidar data 908 does not include regions of degraded measured reflectivity (as shown in lidar data 206) or differences in reflectivity of the same surface based on solar radiation (as shown in lidar data 306).
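Returning to the skipped-firing technique of paragraph [0062], the following minimal sketch shows the bookkeeping it implies, with hypothetical names: when a laser refrains from firing, the detector reading for that target area is pure background flux, which can be stored and subtracted from later lidar points aimed at the same area.

    # Hypothetical store of background readings keyed by target-area identifier.
    background_by_area: dict[int, float] = {}

    def record_skipped_firing(area_id: int, detector_reading: float) -> None:
        # No pulse was fired, so the reading is background flux for this area.
        background_by_area[area_id] = detector_reading

    def calibrate_point(area_id: int, raw_intensity: float) -> float:
        # Subtract the stored background if available; otherwise leave the point as-is.
        return raw_intensity - background_by_area.get(area_id, 0.0)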

[0065] FIG. 10 illustrates a perspective view 1000 of an environment in which a vehicle 1002 may be traveling. In this example, the environment contains a number of objects, such as another vehicle 1004, a traffic signal 1006, and a tree 1008. To safely navigate through this environment, a vehicle computing system operating the vehicle 1002 may use one or more sensor systems 1010 to emit signals, receive reflected signals, and process such reflected signals to generate data that may be used in object detection operations.

[0066] For example, the vehicle 1002 may be configured with one or more lidar sensor systems 1012. The vehicle computing system may operate the lidar system 1012 to emit lidar pulses into the environment. These lidar pulses may be reflected, directly or indirectly, from the surfaces of the various objects back to the lidar system 1012 as return signals (or return pulses), which may be measured in the various ways described herein to determine and calibrate out the background lidar noise within the environment.

[0067] For example, the processor(s) 1014 and memory 1016 of the sensor system 1010 may process return signals (e.g., return lidar pulses and/or background noise) as described herein to determine background lidar noise levels and/or to calibrate the lidar data (e.g., lidar points and attributes thereof) based on the determined background noise. Based on the calibrated lidar reflectivity data, the processor 1014 may generate data that may be provided to the perception component 1022 of the vehicle control system 1020 performing object detection operations. The perception component 1022 may then provide surface detection data and/or object detection data to the planning component 1024 for trajectory and route planning for the vehicle 1002.

[0068] FIG. 11 depicts a block diagram of an example system 1100 for implementing the techniques described herein. In at least one example, the system 1100 can include a vehicle 1102. The vehicle 1102 can include a vehicle computing device 1104 that may function as and/or perform the functions of a vehicle controller for the vehicle 1102. The vehicle 1102 can also include one or more sensor systems 1106, one or more emitters 1108, one or more communication connections 1110, at least one direct connection 1112, and one or more drive systems 1114.

[0069] The vehicle computing device 1104 can include one or more processors 1116 and memory 1118 communicatively coupled with the one or more processors 1116. In the illustrated example, the vehicle 1102 is an autonomous vehicle; however, the vehicle 1102 could be any other type of vehicle. In the illustrated example, the memory 1118 of the vehicle computing device 1104 stores a localization component 1120, a perception component 1122, a planning component 1124, one or more system controllers 1126, one or more maps 1128, and a prediction component 1130. Though depicted in FIG. 11 as residing in memory 1118 for illustrative purposes, it is contemplated that any one or more of the localization component 1120, the perception component 1122, the planning component 1124, the one or more system controllers 1126, the one or more maps 1128, and the prediction component 1130 can additionally, or alternatively, be accessible to the vehicle 1102 (e.g., stored remotely).

[0070] In at least one example, the localization component 1120 can include functionality to receive data from the sensor system(s) 1106 to determine a position and/or orientation of the vehicle 1102 (e.g., one or more of an x-, y-, z-position, roll, pitch, or yaw). For example, the localization component 1120 can include and/or request/receive a map of an environment and can continuously determine a location and/or orientation of the autonomous vehicle within the map. In some instances, the localization component 1120 can utilize SLAM (simultaneous localization and mapping), CLAMS (calibration, localization and mapping, simultaneously), relative SLAM, bundle adjustment, non-linear least squares optimization, or the like to receive image data, LIDAR data, radar data, IMU data, GPS data, wheel encoder data, and the like to accurately determine a location of the autonomous vehicle. In some instances, the localization component 1120 can provide data to various components of the vehicle 1102 to determine an initial position of an autonomous vehicle for generating a trajectory and/or for generating map data, as discussed herein.

[0071] In some instances, the perception component 1122 can include functionality to perform object detection, segmentation, and/or classification. For example, the perception component 1122 may include functionality to analyze pulse data to determine whether return pulses are likely to be multipath return pulses or single-reflection return pulses, as described herein. In some examples, the perception component 1122 can provide processed sensor data that indicates a presence of an entity that is proximate to the vehicle 1102 and/or a classification of the entity as an entity type (e.g., car, pedestrian, cyclist, animal, building, tree, road surface, curb, sidewalk, traffic signal, traffic light, car light, brake light, unknown). In additional or alternative examples, the perception component 1122 can provide processed sensor data that indicates one or more characteristics associated with a detected entity (e.g., a tracked object) and/or the environment in which the entity is positioned. The perception component 1122 may use the multichannel data structures as described herein, such as the multichannel data structures generated by the described deconvolution process, to generate processed sensor data. In some examples, characteristics associated with an entity or object can include, but are not limited to, an x-position (global and/or local position), a y-position (global and/or local position), a z-position (global and/or local position), an orientation (e.g., a roll, pitch, yaw), an entity type (e.g., a classification), a velocity of the entity, an acceleration of the entity, an extent of the entity (size), etc. Such entity characteristics may be represented in a multichannel data structure as described herein (e.g., a multichannel data structure generated as output of one or more deconvolution layers (e.g., learned deconvolutional upsampling decoding layer(s)) using a learned upsampling transformation). Characteristics associated with the environment can include, but are not limited to, a presence of another entity in the environment, a state of another entity in the environment, a time of day, a day of a week, a season, a weather condition, an indication of darkness/light, etc. In some examples, the perception component 1122 can provide processed return pulse data as described herein.

[0072] In general, the planning component 1124 can determine a path for the vehicle 1102 to follow to traverse through an environment. In some examples, the planning component 1124 can determine various routes and trajectories at various levels of detail. For example, the planning component 1124 can determine a route (e.g., planned route) to travel from a first location (e.g., a current location) to a second location (e.g., a target location). For the purpose of this discussion, a route can be a sequence of waypoints for travelling between two locations. As non-limiting examples, waypoints include streets, intersections, global positioning system (GPS) coordinates, etc. Further, the planning component 1124 can generate an instruction for guiding the autonomous vehicle along at least a portion of the route from the first location to the second location. In at least one example, the planning component 1124 can determine how to guide the autonomous vehicle from a first waypoint in the sequence of waypoints to a second waypoint in the sequence of waypoints. In some examples, the instruction can be a trajectory, or a portion of a trajectory. In some examples, multiple trajectories can be substantially simultaneously generated (e.g., within technical tolerances) in accordance with a receding horizon technique, wherein one of the multiple trajectories is selected for the vehicle 1102 to navigate.

[0073] In at least one example, the vehicle computing device 1104 can include one or more system controllers 1126, which can be configured to control steering, propulsion, braking, safety, emitters, communication, and other systems of the vehicle 1102. These system controller(s) 1126 can communicate with and/or control corresponding systems of the drive system(s) 1114 and/or other components of the vehicle 1102.

[0074] The memory 1118 can further include one or more maps 1128 that can be used by the vehicle 1102 to navigate within the environment. For the purpose of this discussion, a map can be any number of data structures modeled in two dimensions, three dimensions, or N-dimensions that are capable of providing information about an environment, such as, but not limited to, topologies (such as intersections), streets, mountain ranges, roads, terrain, and the environment in general. In some instances, a map can include, but is not limited to: texture information (e.g., color information (e.g., RGB color information, Lab color information, HSV/HSL color information), non-visible light information (near-infrared light information, infrared light information, and the like), intensity information (e.g., lidar information, radar information, near-infrared light intensity information, infrared light intensity information, and the like)); spatial information (e.g., image data projected onto a mesh, individual “surfels” (e.g., polygons associated with individual color and/or intensity)); and reflectivity information (e.g., specularity information, retroreflectivity information, BRDF information, BSSRDF information, and the like). In an example, a map can include a three-dimensional mesh of the environment. In some instances, the map can be stored in a tiled format, such that individual tiles of the map represent a discrete portion of an environment, and can be loaded into working memory as needed, as discussed herein. In at least one example, the one or more maps 1128 can include at least one map (e.g., images and/or a mesh). In some examples, the vehicle 1102 can be controlled based at least in part on the maps 1128. That is, the maps 1128 can be used in connection with the localization component 1120, the perception component 1122, and/or the planning component 1124 to determine a location of the vehicle 1102, identify objects in an environment, and/or generate routes and/or trajectories to navigate within an environment.

[0075] In some examples, the one or more maps 1128 can be stored on a remote computing device(s) (such as the computing device(s) 1134) accessible via network(s) 1132. In some examples, multiple maps 1128 can be stored based on, for example, a characteristic (e.g., type of entity, time of day, day of week, season of the year). Storing multiple maps 1128 can have similar memory requirements but increase the speed at which data in a map can be accessed.

[0076] In general, the prediction component 1130 can generate predicted trajectories of objects in an environment. For example, the prediction component 1130 can generate one or more predicted trajectories for vehicles, pedestrians, animals, and the like within a threshold distance from the vehicle 1102. In some instances, the prediction component 1130 can measure a trace of an object and generate a trajectory for the object based on observed and predicted behavior. In some examples, the prediction component 1130 can use data and/or data structures based on return pulses as described herein to generate one or more predicted trajectories for various mobile objects in an environment. In some examples, the prediction component 1130 may be a sub-component of the perception component 1122.

[0077] In some instances, aspects of some or all of the components discussed herein can include any models, algorithms, and/or machine learning algorithms. For example, in some instances, the components in the memory 1118 (and the memory 1138, discussed below) can be implemented as a neural network. For instance, the memory 1118 may include a deep tracking network that may be configured with a convolutional neural network (CNN) that may include one or more convolution/deconvolution layers.

[0078] An example neural network is an algorithm that passes input data through a series of connected layers to produce an output. Individual layers in a neural network can also comprise another neural network or can comprise any number of layers, and such individual layers may be convolutional, deconvolutional, and/or another type of layer. As can be understood in the context of this disclosure, a neural network can utilize machine learning, which can refer to a broad class of such algorithms in which an output is generated based on learned parameters.

[0079] Although discussed in the context of neural networks, any type of machine learning can be used consistent with this disclosure, for example, to determine a learned upsampling transformation. For example, machine learning algorithms can include, but are not limited to, regression algorithms (e.g., ordinary least squares regression (OLSR), linear regression, logistic regression, stepwise regression, multivariate adaptive regression splines (MARS), locally estimated scatterplot smoothing (LOESS)), instance-based algorithms (e.g., ridge regression, least absolute shrinkage and selection operator (LASSO), elastic net, least-angle regression (LARS)), decision tree algorithms (e.g., classification and regression tree (CART), iterative dichotomiser 3 (ID3), Chi-squared automatic interaction detection (CHAID), decision stump, conditional decision trees), Bayesian algorithms (e.g., naive Bayes, Gaussian naive Bayes, multinomial naive Bayes, average one-dependence estimators (AODE), Bayesian belief network (BNN), Bayesian networks), clustering algorithms (e.g., k-means, k-medians, expectation maximization (EM), hierarchical clustering), artificial neural network algorithms (e.g., perceptron, back-propagation, hopfield network, Radial Basis Function Network (RBFN)), deep learning algorithms (e.g., Deep Boltzmann Machine (DBM), Deep Belief Networks (DBN), Convolutional Neural Network (CNN), Stacked Auto-Encoders), Dimensionality Reduction Algorithms (e.g., Principal Component Analysis (PCA), Principal Component Regression (PCR), Partial Least Squares Regression (PLSR), Sammon Mapping, Multidimensional Scaling (MDS), Projection Pursuit, Linear Discriminant Analysis (LDA), Mixture Discriminant Analysis (MDA), Quadratic Discriminant Analysis (QDA), Flexible Discriminant Analysis (FDA)), Ensemble Algorithms (e.g., Boosting, Bootstrapped Aggregation (Bagging), AdaBoost, Stacked Generalization (blending), Gradient Boosting Machines (GBM), Gradient Boosted Regression Trees (GBRT), Random Forest), SVM (support vector machine), supervised learning, unsupervised learning, semi-supervised learning, etc. Additional examples of architectures include neural networks such as ResNet50, ResNet101, VGG, DenseNet, PointNet, and the like.

[0080] In at least one example, the sensor system(s) 1106 can include radar sensors, ultrasonic transducers, sonar sensors, location sensors (e.g., GPS, compass), inertial sensors (e.g., inertial measurement units (IMUs), accelerometers, magnetometers, gyroscopes), cameras (e.g., RGB, IR, intensity, depth), time of flight sensors, microphones, wheel encoders, environment sensors (e.g., temperature sensors, humidity sensors, light sensors, pressure sensors), etc. The sensor system(s) 1106 can include multiple instances of one or more of these or other types of sensors. For instance, the camera sensors can include multiple cameras disposed at various locations about the exterior and/or interior of the vehicle 1102. The sensor system(s) 1106 can provide input to the vehicle computing device 1104. Additionally, or alternatively, the sensor system(s) 1106 can send sensor data, via the one or more networks 1132, to the one or more computing device(s) at a particular frequency, after a lapse of a predetermined period of time, in near real-time, etc.

[0081] In some examples, the sensor system(s) 1106 can include one or more lidar systems, such as one or more monostatic lidar systems, bistatic lidar systems, rotational lidar systems, solid state lidar systems, and/or flash lidar systems. In some examples, the sensor system(s) 1106 may also, or instead, include functionality to analyze the return signals of lidar pulses to determine and calibrate out background noise from the lidar reflectivity data, as described herein. In particular examples, a lidar system of the sensor system(s) 1106 may perform one or more of the operations described herein to perform ADC sampling during different time windows associated with a lidar return pulse and within the dwell time between return pulses, and to determine background noise levels based on a comparison/analysis of the ADC samples. Alternatively, or in addition, any one or more other components of the vehicle computing device 1104 may perform one or more of the operations described herein to analyze return pulses and to determine background noise levels.

[0082] The vehicle 1102 can also include one or more emitters 1108 for emitting light (visible and/or non-visible) and/or sound. The emitter(s) 1108 in an example include interior audio and visual emitters to communicate with passengers of the vehicle 1102. By way of example and not limitation, interior emitters can include speakers, lights, signs, display screens, touch screens, haptic emitters (e.g., vibration and/or force feedback), mechanical actuators (e.g., seatbelt tensioners, seat positioners, headrest positioners), and the like. The emitter(s) 1108 in this example may also include exterior emitters. By way of example and not limitation, the exterior emitters in this example include lights to signal a direction of travel or other indicator of vehicle action (e.g., indicator lights, signs, light arrays), and one or more audio emitters (e.g., speakers, speaker arrays, horns) to audibly communicate with pedestrians or other nearby vehicles, one or more of which may comprise acoustic beam steering technology. The exterior emitters in this example may also, or instead, include non-visible light emitters such as infrared emitters, near-infrared emitters, and/or lidar emitters.

[0083] The vehicle 1102 can also include one or more communication connection(s) 1110 that enable communication between the vehicle 1102 and one or more other local or remote computing device(s). For instance, the communication connection(s) 1110 can facilitate communication with other local computing device(s) on the vehicle 1102 and/or the drive system(s) 1114. Also, the communication connection(s) 1110 can allow the vehicle to communicate with other nearby computing device(s) (e.g., other nearby vehicles, traffic signals). The communications connection(s) 1110 also enable the vehicle 1102 to communicate with a remote teleoperations computing device or other remote services.

[0084] The communications connection(s) 1110 can include physical and/or logical interfaces for connecting the vehicle computing device 1104 to another computing device or a network, such as network(s) 1132. For example, the communications connection(s) 1110 can enable Wi-Fi-based communication such as via frequencies defined by the IEEE 802.11 standards, short range wireless frequencies such as Bluetooth, cellular communication (e.g., 2G, 3G, 4G, 4G LTE, 5G), or any suitable wired or wireless communications protocol that enables the respective computing device to interface with the other computing device(s).

[0085] In at least one example, the vehicle 1102 can include one or more drive systems 1114. In some examples, the vehicle 1102 can have a single drive system 1114. In at least one example, if the vehicle 1102 has multiple drive systems 1114, individual drive systems 1114 can be positioned on opposite ends of the vehicle 1102 (e.g., the front and the rear). In at least one example, the drive system(s) 1114 can include one or more sensor systems to detect conditions of the drive system(s) 1114 and/or the surroundings of the vehicle 1102. By way of example and not limitation, the sensor system(s) 1106 can include one or more wheel encoders (e.g., rotary encoders) to sense rotation of the wheels of the drive systems, inertial sensors (e.g., inertial measurement units, accelerometers, gyroscopes, magnetometers) to measure orientation and acceleration of the drive system, cameras or other image sensors, ultrasonic sensors to acoustically detect objects in the surroundings of the drive system, lidar sensors, radar sensors, etc. Some sensors, such as the wheel encoders, can be unique to the drive system(s) 1114. In some cases, the sensor system(s) on the drive system(s) 1114 can overlap or supplement corresponding systems of the vehicle 1102 (e.g., sensor system(s) 1106).

[0086] The drive system(s) 1114 can include many of the vehicle systems, including a high voltage battery, a motor to propel the vehicle, an inverter to convert direct current from the battery into alternating current for use by other vehicle systems, a steering system including a steering motor and steering rack (which can be electric), a braking system including hydraulic or electric actuators, a suspension system including hydraulic and/or pneumatic components, a stability control system for distributing brake forces to mitigate loss of traction and maintain control, an HVAC system, lighting (e.g., lighting such as head/tail lights to illuminate an exterior surrounding of the vehicle), and one or more other systems (e.g., cooling system, safety systems, onboard charging system, other electrical components such as a DC/DC converter, a high voltage junction, a high voltage cable, charging system, charge port). Additionally, the drive system(s) 1114 can include a drive system controller which can receive and preprocess data from the sensor system(s) and control operation of the various vehicle systems. In some examples, the drive system controller can include one or more processors and memory communicatively coupled with the one or more processors. The memory can store one or more components to perform various functionalities of the drive system(s) 1114. Furthermore, the drive system(s) 1114 may also include one or more communication connection(s) that enable communication by the respective drive system with one or more other local or remote computing device(s).

[0087] In at least one example, the direct connection 1112 can provide a physical interface to couple the one or more drive system(s) 1114 with the body of the vehicle 1102. For example, the direct connection 1112 can allow the transfer of energy, fluids, air, data, etc. between the drive system(s) 1114 and the vehicle. In some instances, the direct connection 1112 can further releasably secure the drive system(s) 1114 to the body of the vehicle 1102.

[0088] In some examples, the vehicle 1102 can send sensor data to one or more computing device(s) 1134 via the network(s) 1132. In some examples, the vehicle 1102 can send raw sensor data to the computing device(s) 1134. In other examples, the vehicle 1102 can send processed sensor data and/or representations of sensor data (e.g., data representing return pulses) to the computing device(s) 1134. In some examples, the vehicle 1102 can send sensor data to the computing device(s) 1134 at a particular frequency, after a lapse of a predetermined period of time, in near real-time, etc. In some cases, the vehicle 1102 can send sensor data (raw or processed) to the computing device(s) 1134 as one or more log files.

[0089] The computing device(s) 1134 can include processor(s) 1136 and a memory 1138 storing a planning component 1142 and/or a perception component 1140. In some instances, the perception component 1140 can substantially correspond to the perception component 1122 and can include substantially similar functionality. In some instances, the planning component 1142 can substantially correspond to the planning component 1124 and can include substantially similar functionality.

[0090] The processor(s) 1116 of the vehicle 1102 and the processor(s) 1136 of the computing device(s) 1134 can be any suitable processor capable of executing instructions to process data and perform operations as described herein. By way of example and not limitation, the processor(s) 1116 and 1136 can comprise one or more Central Processing Units (CPUs), Graphics Processing Units (GPUs), and/or any other device or portion of a device that processes electronic data to transform that electronic data into other electronic data that can be stored in registers and/or memory. In some examples, integrated circuits (e.g., ASICs), gate arrays (e.g., FPGAs), and other hardware devices can also be considered processors in so far as they are configured to implement encoded instructions.

[0091] Memory 1118 and 1138 are examples of non-transitory computer-readable media. The memory 1118 and 1138 can store an operating system and one or more software applications, instructions, programs, and/or data to implement the methods described herein and the functions attributed to the various systems. In various implementations, the memory can be implemented using any suitable memory technology, such as static random-access memory (SRAM), synchronous dynamic RAM (SDRAM), nonvolatile/Flash-type memory, or any other type of memory capable of storing information. The architectures, systems, and individual elements described herein can include many other logical, programmatic, and physical components, of which those shown in the accompanying figures are merely examples that are related to the discussion herein.

[0092] It should be noted that while FIG. 11 is illustrated as a distributed system, in alternative examples, components of the vehicle 1102 can be associated with the computing device(s) 1134 and/or components of the computing device(s) 1134 can be associated with the vehicle 1102. That is, the vehicle 1102 can perform one or more of the functions associated with the computing device(s) 1134, and vice versa.

[0093] In any of the above examples, any one or more parameters may be adjusted based at least in part on the observed background including, but not limited to, transmit power, pulse duration, receiver integration time, receiver photosensitivity, lidar receiver aperture size, or any other parameter which may impact SNR, link budget, or performance.
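As a hypothetical sketch of one such adjustment (the policy, names, and numbers are all illustrative assumptions rather than the claimed method), transmit power might be scaled up when the observed background pushes the SNR below a target:

    def adjust_transmit_power(current_power: float, signal: float,
                              background: float, target_snr: float = 10.0,
                              max_power: float = 1.0) -> float:
        """Scale transmit power toward a target SNR, capped at a hardware limit."""
        if background <= 0.0:
            return current_power  # no observed background to compensate for
        snr = signal / background
        if snr < target_snr:
            # Assuming the return signal scales linearly with transmit power.
            return min(max_power, current_power * target_snr / snr)
        return current_power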

EXAMPLE CLAUSES

[0094] A. A lidar system comprising: at least one laser; at least one photodetector; one or more processors; and one or more non-transitory computer-readable media storing computer-executable instructions that, when executed, cause the one or more processors to perform operations comprising: transmitting, via the laser, a lidar pulse into an environment, at a pulse transmission time; determining, based at least in part on the pulse transmission time, a first sampling time window associated with a return signal of the lidar pulse; receiving first light data from the environment during the first sampling time window; determining a second sampling time window within a dwell time between the first sampling time window and a second lidar pulse; receiving second light data from the environment during the second sampling time window; determining, based at least in part on the second light data, a lidar background noise level associated with the environment; and determining reflectivity data associated with the lidar pulse, based at least in part on the first light data and the lidar background noise level.

[0095] B. The lidar system of paragraph A, wherein receiving the first light data during the first sampling time window comprises: sampling, using an analog-to-digital converter (ADC) of the lidar system, light received by the at least one photodetector during the first sampling time window; and accumulating, using an integrator of the lidar system, the sampled light received during the first sampling time window.

[0096] C. The lidar system of paragraph A, wherein determining the reflectivity data associated with the lidar pulse comprises: determining a peak power level associated with the lidar pulse; and subtracting the lidar background noise level from the peak power level.

[0097] D. The lidar system of paragraph C, wherein determining the peak power level associated with the lidar pulse comprises: determining, using a time-to-digital converter (TDC) of the lidar system, a peak time associated with the lidar pulse; and determining, using an analog-to-digital converter (ADC) of the lidar system, and based at least in part on the peak time, a magnitude associated with the peak power level.

[0098] E. The lidar system of paragraph A, wherein determining the second sampling time window is based at least in part on at least one of: a transmission power associated with the lidar pulse; a range associated with the lidar system; or a transmission frequency associated with the lidar system.

[0099] F. A method comprising: determining a pulse transmission time associated with a lidar pulse transmitted by a lidar system into an environment; determining, based at least in part on the pulse transmission time, a sampling time window; receiving light data from the environment during the sampling time window; determining, based at least in part on the light data, a lidar background noise level associated with the environment; and determining reflectivity data associated with the lidar pulse, based at least in part on the lidar background noise level.

[00100] G. The method of paragraph F, wherein receiving the light data during the sampling time window comprises: sampling, using an analog-to-digital converter (ADC) of the lidar system, light received by a photodetector during the sampling time window; and accumulating, using an integrator of the lidar system, the sampled light received during the sampling time window.

[00101] H. The method of paragraph F, further comprising: determining, based at least in part on the pulse transmission time, a second sampling time window different from the sampling time window, wherein the sampling time window is within a dwell time associated with the lidar pulse, and wherein the second sampling time window is outside of the dwell time associated with the lidar pulse; and receiving second light data from the environment during the second sampling time window, wherein determining the lidar background noise level is based at least in part on the light data and the second light data.

[00102] I. The method of paragraph F, wherein determining the reflectivity data associated with the lidar pulse comprises: determining a peak power level associated with the lidar pulse; and subtracting the lidar background noise level from the peak power level.

[00103] J. The method of paragraph I, wherein determining the peak power level associated with the lidar pulse comprises: determining, using a time-to-digital converter (TDC) of the lidar system, a peak time associated with the lidar pulse; and determining, using an analog-to-digital converter (ADC) of the lidar system, and based at least in part on the peak time, a magnitude associated with the peak power level.

[00104] K. The method of paragraph F, wherein determining the sampling time window is based at least in part on at least one of: a transmission power associated with the lidar pulse; a range associated with the lidar system; or a transmission frequency associated with the lidar system.

[00105] L. The method of paragraph F, further comprising: determining a second sampling time window associated with a second lidar pulse, wherein the lidar pulse has a first transmission power and the second lidar pulse has a second transmission power different from the first transmission power; and receiving second light data from the environment during the second sampling time window, wherein determining the lidar background noise level associated with the environment is based at least in part on the first transmission power, the light data, the second transmission power, and the second light data.

[00106] M. The method of paragraph F, further comprising: reconfiguring the lidar system based at least in part on the reflectivity data associated with the lidar pulse, wherein reconfiguring the lidar system includes at least one of: modifying an aperture size of a light detector of the lidar system; modifying an optical gain of the light detector; modifying a laser transmit power of the lidar system; modifying a lidar pulse duration of the lidar system; modifying a receiver integration time of the lidar system; modifying a receiver gain of the lidar system; modifying a receiver photosensitivity of the lidar system; or modifying a receiver aperture size of the lidar system.

[00107] N. One or more non-transitory computer-readable media storing instructions executable by one or more processors, wherein the instructions, when executed, cause the one or more processors to perform operations comprising: determining a pulse transmission time associated with a lidar pulse transmitted by a lidar system into an environment; determining, based at least in part on the pulse transmission time, a sampling time window; receiving light data from the environment during the sampling time window; determining, based at least in part on the light data, a lidar background noise level associated with the environment; and determining reflectivity data associated with the lidar pulse, based at least in part on the lidar background noise level.

[00108] O. The one or more non-transitory computer-readable media of paragraph N, wherein receiving the light data during the sampling time window comprises: sampling, using an analog-to-digital converter (ADC) of the lidar system, light received by a photodetector during the sampling time window; and accumulating, using an integrator of the lidar system, the sampled light received during the sampling time window.

[00109] P. The one or more non-transitory computer-readable media of paragraph N, the operations further comprising: determining, based at least in part on the pulse transmission time, a second sampling time window different from the sampling time window, wherein the sampling time window is within a dwell time associated with the lidar pulse, and wherein the second sampling time window is outside of the dwell time associated with the lidar pulse; and receiving second light data from the environment during the second sampling time window, wherein determining the lidar background noise level is based at least in part on the light data and the second light data.

[00110] Q. The one or more non-transitory computer-readable media of paragraph N, wherein determining the reflectivity data associated with the lidar pulse comprises: determining a peak power level associated with the lidar pulse; and subtracting the lidar background noise level from the peak power level.

[00111] R. The one or more non-transitory computer-readable media of paragraph Q, wherein determining the peak power level associated with the lidar pulse comprises: determining, using a time-to-digital converter (TDC) of the lidar system, a peak time associated with the lidar pulse; and determining, using an analog-to-digital converter (ADC) of the lidar system, and based at least in part on the peak time, a magnitude associated with the peak power level.

[00112] S. The one or more non-transitory computer-readable media of paragraph N, wherein determining the sampling time window is based at least in part on at least one of: a transmission power associated with the lidar pulse; a range associated with the lidar system; or a transmission frequency associated with the lidar system.

[00113] T. The one or more non-transitory computer-readable media of paragraph N, the operations further comprising: determining a second sampling time window associated with a second lidar pulse, wherein the lidar pulse has a first transmission power and the second lidar pulse has a second transmission power different from the first transmission power; and receiving second light data from the environment during the second sampling time window, wherein determining the lidar background noise level associated with the environment is based at least in part on the first transmission power, the light data, the second transmission power, and the second light data.

[00114] While the example clauses described above are described with respect to particular implementations, it should be understood that, in the context of this document, the content of the example clauses can be implemented via a method, device, system, a computer-readable medium, and/or another implementation. Additionally, any of the examples A-T may be implemented alone or in combination with any other one or more of the examples A-T.

CONCLUSION

[00115] While one or more examples of the techniques described herein have been described, various alterations, additions, permutations and equivalents thereof are included within the scope of the techniques described herein.

[00116] In the description of examples, reference is made to the accompanying drawings that form a part hereof, which show by way of illustration specific examples of the claimed subject matter. It is to be understood that other examples may be used and that changes or alterations, such as structural changes, may be made. Such examples, changes or alterations are not necessarily departures from the scope with respect to the intended claimed subject matter. While the steps herein may be presented in a certain order, in some cases the ordering may be changed so that certain inputs are provided at different times or in a different order without changing the function of the systems and methods described. The disclosed procedures could also be executed in different orders. Additionally, various computations that are described herein need not be performed in the order disclosed, and other examples using alternative orderings of the computations could be readily implemented. In addition to being reordered, the computations could also be decomposed into sub-computations with the same results.

[00117] Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described. Rather, the specific features and acts are disclosed as example forms of implementing the claims.

[00118] The components described herein represent instructions that may be stored in any type of computer-readable medium and may be implemented in software and/or hardware. All of the methods and processes described above may be embodied in, and fully automated via, software code modules and/or computer-executable instructions executed by one or more computers or processors, hardware, or some combination thereof. Some or all of the methods may alternatively be embodied in specialized computer hardware.

[00119] Conditional language such as, among others, “may,” “could,” or “might,” unless specifically stated otherwise, is understood within the context to present that certain examples include, while other examples do not include, certain features, elements and/or steps. Thus, such conditional language is not generally intended to imply that certain features, elements and/or steps are in any way required for one or more examples or that one or more examples necessarily include logic for deciding, with or without user input or prompting, whether certain features, elements and/or steps are included or are to be performed in any particular example.

[00120] Conjunctive language such as the phrase “at least one of X, Y or Z,” unless specifically stated otherwise, is to be understood to present that an item, term, etc. may be either X, Y, or Z, or any combination thereof, including multiples of each element. Unless explicitly described as singular, “a” means singular and plural.

[00121] Any routine descriptions, elements or blocks in the flow diagrams described herein and/or depicted in the attached figures should be understood as potentially representing modules, segments, or portions of code that include one or more computer-executable instructions for implementing specific logical functions or elements in the routine. Alternate implementations are included within the scope of the examples described herein in which elements or functions may be deleted, or executed out of order from that shown or discussed, including substantially synchronously, in reverse order, with additional operations, or omitting operations, depending on the functionality involved as would be understood by those skilled in the art.

[00122] Many variations and modifications may be made to the above-described examples, the elements of which are to be understood as being among other acceptable examples. All such modifications and variations are intended to be included herein within the scope of this disclosure and protected by the following claims.