


Title:
ELECTRONIC DEVICE, METHOD AND COMPUTER PROGRAM
Document Type and Number:
WIPO Patent Application WO/2022/157090
Kind Code:
A1
Abstract:
An electronic device having circuitry configured to perform hand owner identification based on image analysis of an image captured by an imaging system (200) to obtain a hand owner status.

Inventors:
ARORA VARUN (DE)
EYNARD DAMIEN (DE)
TOCINO DIAZ JUAN CARLOS (DE)
DAL ZOT DAVID (DE)
Application Number:
PCT/EP2022/050823
Publication Date:
July 28, 2022
Filing Date:
January 17, 2022
Assignee:
SONY SEMICONDUCTOR SOLUTIONS CORP (JP)
SONY DEPTHSENSING SOLUTIONS SA/NV (BE)
International Classes:
G06V20/59; B60K37/06; G06V40/10
Domestic Patent References:
WO2019134888A1, 2019-07-11
WO2015104257A1, 2015-07-16
Foreign References:
US20150131857A1, 2015-05-14
GB2568508A, 2019-05-22
US20140062858A1, 2014-03-06
Other References:
FRITZSCHE M ET AL: "Vehicle occupancy monitoring with optical range-sensors", INTELLIGENT VEHICLES SYMPOSIUM, 2004 IEEE PARMA, ITALY JUNE 14-17, 2004, PISCATAWAY, NJ, USA,IEEE, 14 June 2004 (2004-06-14), pages 90 - 94, XP010727448, ISBN: 978-0-7803-8310-4, DOI: 10.1109/IVS.2004.1336361
HERRMANN ENRICO ET AL: "Hand-movement-based in-vehicle driver/front-seat passenger discrimination for centre console controls", ALGORITHMS AND TECHNOLOGIES FOR MULTISPECTRAL, HYPERSPECTRAL, AND ULTRASPECTRAL IMAGERY XIX - PROCEEDINGS OF SPIE, vol. 7532, 26 January 2010 (2010-01-26), US, pages 75320U.1 - 75320U.9, XP055914834, ISSN: 0277-786X, ISBN: 978-1-5106-4548-6, DOI: 10.1117/12.838918
MAKRUSHIN ANDREY ET AL: "User discrimination in automotive systems", IMAGE PROCESSING: ALGORITHMS AND SYSTEMS IX, SPIE, 1000 20TH ST. BELLINGHAM WA 98225-6705 USA, vol. 7870, no. 1, 10 February 2011 (2011-02-10), pages 1 - 9, XP060004679, DOI: 10.1117/12.872453
SHURAN SONG; JIANXIONG XIAO: "Sliding Shapes for 3D Object Detection in Depth Images", PROCEEDINGS OF THE 13TH EUROPEAN CONFERENCE ON COMPUTER VISION (ECCV 2014)
Attorney, Agent or Firm:
MFG PATENTANWÄLTE (DE)
Claims:
CLAIMS

1. An electronic device comprising circuitry configured to perform hand owner identification (1706) based on image analysis (701) of an image (700) captured by an imaging system (200) to obtain a hand owner status (1710, 1711, 1712).

2. The electronic device of claim 1, wherein the circuitry is configured to define a driver wheel zone (300) as a Region of Interest in the captured image (700), and to perform hand owner identification (1706) based on the defined driver wheel zone (300).

3. The electronic device of claim 1, wherein the circuitry is configured to detect an active hand (303) in the captured image (700) capturing a field-of-view (201) of the imaging system (200) being a ToF imaging system, and to perform hand owner identification (1706) based on the detected active hand (303).

4. The electronic device of claim 2, wherein the circuitry is configured to define a minimum number (m) of frames in which an active hand (303) should be detected in the driver wheel zone (300).

5. The electronic device of claim 4, wherein the circuitry is configured to count a number (n) of frames in which the active hand (303) is detected in the driver wheel zone (300), and to perform hand owner identification (1706) by comparing the minimum number (m) of frames with the counted number (n) of frames.

6. The electronic device of claim 5, wherein the circuitry is configured to, when the minimum number (m) of frames is smaller than the counted number (n) of frames, obtain a hand owner status (1710, 1711, 1712) which indicates that hand owner is a driver.

7. The electronic device of claim 1, wherein the circuitry is configured to perform image analysis (701) based on the captured image (700) to obtain tip positions (702), a palm position (703) and an arm position (704) indicating a bottom arm position.

8. The electronic device of claim 7, wherein the circuitry is configured to perform arm angle determination (800) based on the palm position (703) and the bottom arm position (704) to obtain an arm angle (801).

9. The electronic device of claim 7, wherein the circuitry is configured to perform fingertips analysis (1000) based on the tip positions (702) to obtain a tip score (tip_i).

10. The electronic device of claim 9, wherein the circuitry is configured to perform arm analysis (1100) based on the palm position (703), the bottom arm position (704) and the arm angle (801) to obtain a palm score (palm_i), a bottom arm score (bottom_i) and an arm angle score (angle_i).

11. The electronic device of claim 10, wherein the circuitry is configured to perform arm voting (1200) based on the palm position (703), the bottom arm position (704) and the arm angle (801) to obtain an arm vote (1201).

12. The electronic device of claim 11, wherein the circuitry is configured to perform score determination (1400) based on the arm vote (1201), the tip score (tip_i), the palm score (palm_i), the bottom arm score (bottom_i) and the arm angle score (angle_i) to obtain a driver’s score (score_D) and a passenger’s score (score_P).

13. The electronic device of claim 12, wherein the circuitry is configured to, when the driver’s score (score_D) is higher than the passenger’s score (score_P), obtain a hand owner status (1710, 1711, 1712) which indicates that hand owner is a driver.

14. The electronic device of claim 12, wherein the circuitry is configured to, when the driver’s score (score_D) is lower than the passenger’s score (score_P), obtain a hand owner status (1710, 1711, 1712) which indicates that hand owner is a passenger.

15. The electronic device of claim 12, wherein the circuitry is configured to, when an absolute difference of the driver’s score (score_D) and the passenger’s score (score_P) is greater than a threshold (ε), obtain a hand owner status (1710, 1711, 1712) which indicates that hand owner is unknown.

16. The electronic device of claim 1, wherein the circuitry is configured to, when the captured image (700) is a depth image, perform seat occupancy detection based on the depth image to obtain a seat occupancy detection status.

17. The electronic device of claim 1, wherein the circuitry is configured to perform hand owner identification (1706) based on a Left Hand Drive (LHD) configuration or a Right Hand Drive (RHD) configuration.

18. A method comprising performing hand owner identification (1706) based on image analysis (701) of an image (700) captured by an imaging system (200) to obtain a hand owner status (1710, 1711, 1712).

19. A computer program comprising instructions which, when the program is executed by a computer, cause the computer to carry out the method of claim 18.

Description:
ELECTRONIC DEVICE, METHOD AND COMPUTER PROGRAM

TECHNICAL FIELD

The present disclosure generally pertains to the field of automotive user interfaces, and in particular, to devices, methods and computer programs for automotive user interfaces.

TECHNICAL BACKGROUND

Automotive user interfaces for vehicle systems concern the control of vehicle electronics, driving functionality, comfort functions (e.g., navigation, communication, entertainment) and driver assistance (e.g., distance checking).

Recent cars integrate interactive screens (touchscreens) which progressively replace a classical cockpit. Usually, buttons or interactions are directly operated by a user of the car system and the car system outputs feedback as a predefined behavior.

Next generation in-car user interfaces also rely on gesture recognition technology. Gesture recognition determines whether recognizable hand or finger gestures are performed without contacting a touchscreen.

Although automotive user interfaces relying on touchscreen technology and gesture recognition technology are known, it is generally desirable to provide better techniques for controlling the functionality of a vehicle.

SUMMARY

According to a first aspect the disclosure provides an electronic device comprising circuitry configured to perform hand owner identification based on image analysis of an image captured by an imaging system to obtain a hand owner status.

According to a second aspect the disclosure provides a method comprising performing hand owner identification based on image analysis of an image captured by an imaging system to obtain a hand owner status.

According to a third aspect the disclosure provides a computer program comprising instructions which, when the program is executed by a computer, cause the computer to perform hand owner identification based on image analysis of an image captured by an imaging system to obtain a hand owner status.

Further aspects are set forth in the dependent claims, the following description and the drawings.

BRIEF DESCRIPTION OF THE DRAWINGS

Embodiments are explained by way of example with respect to the accompanying drawings, in which:

Fig. 1 schematically shows an embodiment of an interactive car feedback system, which is used to recognize a user’s gesture and perform a respective action based on the recognized gesture;

Fig. 2 schematically shows an embodiment of an in-vehicle imaging system comprising a ToF imaging system used for hand owner identification in an in-vehicle scenario;

Fig. 3 schematically shows an embodiment of a process for adapting an output behavior of the car system based on the operation performed by a user and based on the user;

Fig. 4 schematically shows an embodiment of a process for a car working mode selection based on the hand owner status;

Fig. 5 schematically shows an embodiment of an iToF imaging system in an in-vehicle scenario, wherein images captured by the iToF imaging system are used for hand owner identification;

Fig. 6a illustrates in more detail an example of a depth image obtained by the in-vehicle ToF imaging system used for car seat occupancy detection, wherein the depth image shows that the passenger’s seat is occupied;

Fig. 6b illustrates in more detail an example of a depth image obtained by the in-vehicle ToF imaging system used for car seat occupancy detection, wherein the depth image shows that the driver’s seat is occupied;

Fig. 7 illustrates a depth image generated by the ToF imaging system capturing a scene in an in-vehicle scenario, wherein in the depth image an active hand is detected;

Fig. 8 schematically describes an embodiment of a process of hand owner identification;

Fig. 9a schematically shows an embodiment of an image analysis process performed on an image captured by an in-vehicle ToF imaging system;

Fig. 9b illustrates a bottom arm analysis result, wherein the position of the bottom of the arm is determined;

Fig. 10 schematically shows an embodiment of a process of an arm angle determination performed to obtain an arm angle;

Fig. 11a schematically shows in more detail an embodiment of a hand owner status determination, where the hand owner status indicates that a detected hand belongs to the driver;

Fig. 11b schematically shows in more detail an embodiment of a hand owner status determination, where the hand owner status indicates that a detected hand belongs to the front-seat passenger;

Fig. 12a schematically shows an embodiment of a fingertip analysis process based on a tip criterion, to obtain a tip score;

Fig. 12b schematically illustrates an embodiment of a finger pose detection result, which is obtained based on the detected tip positions and palm position;

Fig. 13 schematically shows an embodiment of an arm analysis process to obtain hand parameters;

Fig. 14a schematically shows an embodiment of an arm voting process performed based on hand parameters to obtain an arm vote;

Fig. 14b schematically shows an embodiment of the arm voting performed in Fig. 14a;

Fig. 14c schematically shows another embodiment of the arm voting performed in Fig. 14a;

Fig. 15a shows in more detail an embodiment of the arm voting process described with regard to Fig. 14a, wherein the arm vote is attributed to the driver;

Fig. 15b shows in more detail an embodiment of the arm voting process described with regard to Fig. 14a, wherein the arm vote is attributed to the passenger;

Fig. 16 schematically shows an embodiment of a score determination process, wherein a driver’s score and a passenger’s score are computed;

Fig. 17 shows a flow diagram visualizing a method for determining a hand owner status of an identified active hand, wherein the computed driver’s score and passenger’s score are compared;

Fig. 18 shows a flow diagram visualizing a method for generating a hand owner status for an identified active hand in a captured image;

Fig. 19 shows a flow diagram visualizing a method for hand owner status identification, wherein hand owner’s historical statistics are computed and an arm vote and a Right-Hand Drive (RHD) swapping is performed;

Fig. 20 shows a flow diagram visualizing an embodiment of a method for hand owner status identification;

Fig. 21 shows a block diagram depicting an example of schematic configuration of a vehicle control system;

Fig. 22 schematically shows an embodiment of a hand owner detection process performed to adapt a car system behavior based on an input user;

Fig. 23 shows in more detail an embodiment of a separation line defined in a captured image; and

Fig. 24 schematically shows a hand owner detection result, wherein the hand owner status is set as driver, while the hand owner interacts with an in-vehicle infotainment system.

DETAILED DESCRIPTION OF EMBODIMENTS

Before a detailed description of the embodiments is given under reference of Fig. 1 to Fig. 24, some general explanations are made.

Car systems are becoming smarter and smarter. In the embodiments described below, information from the user’s hand that interacts with an entertainment system or a car driving system may be used to adapt the cockpit content to the given user.

The embodiments disclose an electronic device comprising circuitry configured to perform hand owner identification based on image analysis of an image captured by an imaging system to obtain a hand owner status.

The hand owner identification may be performed in a vehicle’s cabin in an in-vehicle scenario, or the like.

The circuitry of the electronic device may include a processor, which may for example be a CPU, a memory (RAM, ROM or the like) and/or storage, interfaces, etc. Circuitry may comprise or may be connected with input means (mouse, keyboard, camera, etc.), output means (display (e.g. liquid crystal, (organic) light emitting diode, etc.)), a (wireless) interface, etc., as it is generally known for electronic devices (computers, smartphones, etc.). Moreover, circuitry may comprise or may be connected with sensors for sensing still images or video image data (image sensor, camera sensor, video sensor, etc.), etc. In particular, the circuitry of the electronic device may comprise a ToF imaging system (iToF camera).

In an in-vehicle scenario, a ToF imaging system may illuminate its field-of-view and the objects within it, such as a driver’s hand, a passenger’s hand, a passenger’s leg, a driver’s leg, a console, an infotainment system and the like. In a hand owner identification detection process, the ToF imaging system which includes the ToF sensor may detect interactions of a driver and/or a passenger with the car’s infotainment system, or the like. Still further, in such a hand owner identification detection process, a driver and a front-seat passenger may be identified independently.

The user’s hand is typically detected as an active hand that interacts with e.g. the entertainment system or the car driving system in the cabin of the vehicle.

In an in-vehicle scenario, the circuitry may detect occupant input actions and acquire occupant information, which may include hand owner information, based on which a hand owner status may be generated. The hand owner status may be any status information indicating e.g. that the detected hand belongs to the driver, to a (front-seat) passenger, or that it is unknown to whom the hand belongs, and the like. The hand owner status may be used by the car system to adapt an output of the car’s cockpit or to allow or disallow certain functionality. The hand owner status may be used by the car system to allow, for example, a passenger to interact with the car system including the infotainment system, when the driver is not allowed to, and to allow the driver to tune configurations of the car to which the passenger may not have access, and the like.

The image captured by an imaging system may be a depth image, a confidence image, or the like. The imaging system may be any imaging system comprising at least one camera, wherein the camera may be a depth camera system, a Red-Green-Blue (RGB) camera, a Time-of-Flight (ToF) camera, a combination of them, or the like. The in-cabin monitoring depth camera system may be fixed on the ceiling of the car and it may be orientated in the cabin with a downward facing field of view. Preferably, the field of view is configured to be wide enough to comprise the driver, the central console area, and the passenger.

The hand owner identification process may for example combine several criteria computed by software, like a camera orientation criterion, a palm position criterion, a palm trajectory criterion, an arm angle and arm position criterion, a hand-tips analysis criterion, and the like. Hand actions and owner history may also be monitored. In the hand owner identification, a score computation and a vote computation may be performed, which may be used to identify the hand’s owner and to generate the hand owner status by reducing false detections. The hand owner identification may be performed in daylight, in low light and in night conditions.

The circuitry may be configured to define a driver wheel zone as a Region of Interest (ROI) in the captured image, and to perform hand owner identification based on the defined driver wheel zone. The driver wheel zone may be a region within which at least a part of the steering wheel of the vehicle is included. The driver wheel zone corresponds to the Region of Interest (ROI) in the captured image as it maps to the driver wheel zone in real space.

The circuitry may be configured to detect an active hand in the captured image capturing a field-of-view of the imaging system being a ToF imaging system, and to perform hand owner identification based on the detected active hand. The active hand may be a driver’s hand or a passenger’s hand. The active hand may be a hand that interacts with the car system comprising an infotainment system. The active hand may be segmented and tracked using a dedicated pipeline. The active hand may be segmented and tracked by defining a bounding box in the captured image, a ROI in the captured image, detecting a two-dimensional (2D) / three-dimensional (3D) position of the active hand in the captured image, or the like.

The circuitry may be configured to define a minimum number of frames in which an active hand should be detected in the driver wheel zone. The minimum number of frames may be a predefined number. The minimum number of frames may be any integer number suitably chosen by the skilled person. The active hand may be at least partially detected in the driver wheel zone.

The circuitry may be configured to count a number of frames in which the active hand is detected in the driver wheel zone, and to perform hand owner identification by comparing the minimum number of frames with the counted number of frames. The number of frames in which the active hand is detected in the driver wheel zone may be any integer number. The frames in which the active hand is detected in the driver wheel zone may be consecutive frames, or not. The active hand may be at least partially detected in the driver wheel zone.

The circuitry may be configured to, when the minimum number of frames is smaller than the counted number of frames, obtain a hand owner status which indicates that hand owner is a driver.

The circuitry may be configured to perform an image analysis based on the captured image to obtain tip positions, a palm position and an arm position indicating a bottom arm position. The image analysis performed on the captured image may include pixel segmentation (either 2D or 3D) to extract, for example, fingertip positions; a fingers direction, which may be obtained by applying a principal component analysis on a 3D point cloud; a palm position, which may be estimated by computing the center of gravity of the 2D palm; a palm orientation, which may be obtained by applying principal component analysis on the segmented palm; an arm orientation, which may be computed from the fingers direction; a bottom area; a bottom arm; a fingers pose; and the like.

The circuitry may be configured to perform arm angle determination based on the palm position and the bottom arm position to obtain an arm angle. The arm angle may include information regarding the arm orientation or the like.

The circuitry may be configured to perform fingertips analysis based on the tip positions to obtain a tip score. The fingertips analysis may include detecting a one finger (1F) pose or a two fingers (2F) pose by localizing the positions of detected tips relative to the palm center. This may give information about the hand owner. Based on tip and palm position, a tip-palm direction may be determined. A specific range of tip-palm direction may be predefined for each of a passenger’s hand and the driver’s hand. The tip score may be a score that indicates whether the detected tip is the driver’s fingertip or the passenger’s fingertip.

The circuitry may be configured to perform arm analysis based on the palm position, the bottom arm position and the arm angle to obtain a palm score, a bottom arm score and an arm angle score. The palm position may be determined using a confidence image or a 2D image. The arm analysis scores may be used to distinguish the driver’s arm from the passenger’s arm. The bottom arm position may be used to detect where the arm enters the field of view, e.g. the bottom arm position. The arm angle may be an angle defined between a separation line that separates the captured image in two parts and the detected hand.

The circuitry may be configured to perform arm voting based on the palm position, the bottom arm position and the arm angle to obtain an arm vote. The arm vote may be represented by a Boolean value. The arm vote may influence the hand owner score that defines the hand owner status. In particular, a false positive hand status may be avoided by the arm voting.

The circuitry may be configured to perform score determination based on the arm vote, the tip score, the palm score, the bottom arm score and the arm angle score to obtain a driver’s score and a passenger’s score.

The circuitry may be configured to, when the driver’s score is higher than the passenger’s score, obtain a hand owner status which indicates that hand owner is a driver.

The circuitry may be configured to, when the driver’s score is lower than the passenger’s score, obtain a hand owner status which indicates that hand owner is a passenger.

The circuitry may be configured to, when an absolute difference of the driver’s score and the passenger’s score is greater than a threshold, obtain a hand owner status which indicates that hand owner is unknown. The threshold may be any value suitably chosen by the skilled person.

According to an embodiment, the circuitry may be configured to, when the captured image is a depth image, perform seat occupancy detection based on the depth image to obtain a seat occupancy detection status. The seat occupancy detection may be performed with any seat occupancy method known to the skilled person.

According to an embodiment, the circuitry may be configured to perform hand owner identification based on a Left Hand Drive (LHD) configuration or a Right Hand Drive (RHD) configuration.

The embodiments also disclose a method comprising performing hand owner identification based on image analysis of an image captured by an imaging system to obtain a hand owner status.

The embodiments also disclose a computer program comprising instructions which, when the program is executed by a computer, cause the computer to perform hand owner identification based on image analysis of an image captured by an imaging system to obtain a hand owner status.

Embodiments are now described by reference to the drawings.

Interactive car feedback system

Fig. 1 schematically shows an embodiment of an interactive car feedback system, which is used to recognize a user’s gesture and perform a respective action based on the recognized gesture.

In an in-vehicle scenario, a gesture recognition 100 recognizes a gesture performed by a driver or a passenger of the vehicle. This process may be performed by an interactive car feedback system, such as car system 101. Detected gestures may typically include pressing buttons being part of interactive screens or performing direct interactions from the driver or the passenger to the car system 101. Based on the recognized gesture, the car system 101 performs a process of output action 102. For example, the car system 101 performs a predefined output action, such as a predefined behavior. In a case where the recognized gesture is pressing a button, a signal from the pressed button may be used to determine the output action, and the recognized gesture may be used to perform hand owner status determination.

The car system 101, for example, detects an operation performed on the car’s infotainment system, such as a multimedia player operation, a navigation system operation, a car configuration tuning operation, a warning flasher activation operation and the like, and/or an operation performed on the car’s console, such as a hand brake operation or the like.

In-vehicle ToF imaging system

Fig. 2 schematically shows an embodiment of an in-vehicle imaging system comprising a ToF imaging system used for hand owner identification in an in-vehicle scenario.

A ToF imaging system 200 actively illuminates with light pulses its field of view 201 in an in-vehicle scenario. The ToF imaging system 200 analyses the time of flight of the emitted light to obtain images of the field-of-view 201, such as for example a depth image and a confidence image. Based on the obtained images, a processor 202 performs hand owner identification to obtain a hand owner status. Based on the hand owner status determined by the processor 202, an infotainment system 203 of the vehicle performs a predefined action. The processor 202 may be implemented as the microcomputer 7610 of Fig. 21 below.

In the embodiment of Fig. 2, the ToF imaging system 200 may be an indirect ToF imaging system (iToF) which emits light pulses of infrared light inside its field-of-view 201. The objects included in the field-of-view 201 of the ToF imaging system 200 reflect the emitted light back to the ToF imaging system 200. The ToF imaging system 200 may capture a confidence image and a depth map (e.g. depth image) of the field-of-view 201 inside the vehicle, by analysing the time of flight of the emitted infrared light. The objects included in the field-of-view 201 of the iToF sensor of the ToF imaging system 200 may be a dashboard of the vehicle, a console of the vehicle, a driver’s hand, a passenger’s hand, and the like. Alternatively, the ToF imaging system 200 may be a direct ToF imaging system (dToF imaging system), an imaging system comprising an RGB camera together with a ToF sensor, any 2D/RGB vision system known to the skilled person, or the like.

Fig. 3 schematically shows an embodiment of a process for adapting an output behavior of the car system based on the operation performed by a user and based on the user, e.g. the hand owner status, wherein the user is a driver or a passenger of the vehicle.

At 204, an operation performed by the user is detected. At 205, if the operation is “hand brake”, the process proceeds at 206. If the operation is not “hand brake”, the process proceeds at 209. At 206, if the hand owner status is set to “driver”, the car system allows the operation to be performed at 208. If the hand owner status is not set to “driver”, at 206, the car system disallows the operation to be performed at 207. At 209, if the operation is “multimedia player”, the process proceeds at 210. If the operation is not “multimedia player”, the process proceeds at 213. At 210, if the hand owner status is set to “driver” and the car is stopped or if the hand owner status is set to “passenger”, the car system allows the operation to be performed at 212. If the hand owner status is not set to “driver” and the car is not stopped or if the hand owner status is not set to “passenger”, the car system disallows the operation to be performed at 211. At 213, if the operation is “navigation system”, the process proceeds at 214. If the operation is not “navigation system”, the process proceeds at 217. At 214, if the hand owner status is set to “driver” and the car is stopped or if the hand owner status is set to “passenger”, the car system allows the operation to be performed at 216. If the hand owner status is not set to “driver” and the car is not stopped or if the hand owner status is not set to “passenger”, the car system disallows the operation to be performed at 215. At 217, if the operation is “car configuration tuning”, the process proceeds at 218. If the operation is not “car configuration tuning”, the process proceeds at 221. At 218, if the hand owner status is set to “passenger”, the car system, at 220, disallows the operation to be performed. If the hand owner status is not set to “passenger”, at 218, the car system, at 219, allows the operation to be performed. At 221, if the operation is “warning flasher”, the process proceeds at 222. At 222, if the hand owner status is set to “driver”, the car system allows, at 224, the operation to be performed. If the hand owner status is not set to “driver”, at 222, the car system disallows, at 223, the operation to be performed.
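
For illustration only, the decision flow of Fig. 3 can be summarized as a small rule table. The following Python sketch is a hypothetical reading of the steps described above; the operation strings and the car_stopped flag are assumptions introduced here, not part of the disclosed implementation.

```python
def is_operation_allowed(operation: str, hand_owner: str, car_stopped: bool) -> bool:
    """Minimal sketch of the Fig. 3 gating logic (hypothetical helper)."""
    if operation == "hand brake":
        return hand_owner == "driver"
    if operation in ("multimedia player", "navigation system"):
        # Allowed for the passenger, or for the driver only while the car is stopped.
        return hand_owner == "passenger" or (hand_owner == "driver" and car_stopped)
    if operation == "car configuration tuning":
        # Disallowed only for the passenger.
        return hand_owner != "passenger"
    if operation == "warning flasher":
        return hand_owner == "driver"
    return True  # operations not listed above are not restricted here

# Example: a driver trying to use the multimedia player while driving is blocked.
print(is_operation_allowed("multimedia player", "driver", car_stopped=False))  # False
```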

In the embodiment of Fig. 2, the in-vehicle imaging system acquires information regarding an active hand, such as a driver’s hand or a passenger’s hand, which interacts with the entertainment system 203. The entertainment system 203 may allow the passenger to perform interactions that the driver is not able to do. Additionally, the entertainment system 203 may for example allow the driver to perform interactions, such as tuning the configuration of the car, that the passenger shouldn’t access.

Fig. 4 schematically shows an embodiment of a process for a car working mode selection based on the hand owner status. Based on this information of the active hand, the car system may include three working modes, namely a driver focused interactions mode, a passenger focused interactions mode, and a traditional interactions mode, which is independent of the hand owner’s action.

At 225, a hand owner status is detected. At 226, if the hand owner status is set to “driver”, the working mode is set to driver focused interactions mode at 227. If the hand owner status is not set to “driver”, the process proceeds at 228. At 228, if the hand owner status is set to “passenger”, the working mode is set to passenger focused interactions mode at 229. If the hand owner status is not set to “passenger”, the process proceeds at 230. At 230, the working mode is set to traditional interactions mode.
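
A minimal sketch of the Fig. 4 working mode selection, with illustrative mode names taken from the description above:

```python
def select_working_mode(hand_owner_status: str) -> str:
    """Sketch of the Fig. 4 working mode selection (mode names are illustrative)."""
    if hand_owner_status == "driver":
        return "driver focused interactions mode"      # 226 -> 227
    if hand_owner_status == "passenger":
        return "passenger focused interactions mode"   # 228 -> 229
    return "traditional interactions mode"              # 230

print(select_working_mode("unknown"))  # traditional interactions mode
```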

The hand owner identification performed by processor 202 and the hand owner status determination are performed based on computation of hand parameters and history, as described in Figs. 8 to 20 below. The hand owner identification may combine single frame analysis for hand analysis, arm analysis, and rule-based analysis as well as frame history.

Fig. 5 schematically shows an embodiment of a ToF imaging system in an in-vehicle scenario, wherein images captured by the ToF imaging system are used for hand owner identification.

A ToF imaging system 200 (see Fig. 2), which is fixed, for example, on the ceiling of a vehicle, comprises an iToF sensor that captures an in-vehicle scene by actively illuminating with light pulses its field of view 201 inside the vehicle. The ToF imaging system 200 captures a confidence image and a depth map (e.g. depth image) of the cabin inside the vehicle, by analysing the time of flight of the emitted infrared light. For example, the ToF imaging system 200 captures, within its field-of-view 201, a Human Machine Interface (HMI) 301 of the vehicle, which relates to the vehicle’s infotainment system (203 in Fig. 2) above. In addition, the ToF imaging system 200 captures, within its field-of-view 201, a hand of a front-seat passenger, a leg of the front-seat passenger, a hand of a driver, such as the active hand 303, a leg of the driver, a steering wheel, such as the steering wheel 302 of the vehicle, and the like.

Based on the captured image of the ToF imaging system 200, an owner of a detected active hand, such as the active hand 303, is determined. In order for the ToF imaging system 200 to detect the owner of the active hand 303, the iToF sensor depth image and/or the iToF sensor confidence image are analysed, for example by defining a Region Of Interest (ROI), such as the driver wheel zone 300, in the field-of-view 201 of the iToF sensor. The driver wheel zone 300 corresponds to the same region, i.e. ROI, in the captured image.

The iToF sensor of the ToF imaging system (see 200 in Figs. 2 and 5) obtains a depth image (see Figs. 6a and 6b) by capturing its field of view (see 201 in Figs. 2 and 5). The depth image is an image or an image channel that contains information relating to the true distance of the surfaces of objects in a scene from a viewpoint, i.e. from the iToF sensor. The depth (true distance) can be measured by the phase delay of the return signal. Thus, the depth image can be determined directly from a phase image, which is the collection of all phase delays determined in the pixels of the iToF sensor.
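
As background, a common way of converting an iToF phase delay into a distance is d = c·Δφ/(4π·f_mod). The sketch below applies this relation element-wise to a phase image; the 20 MHz modulation frequency is an arbitrary assumption for the example and is not taken from the disclosure.

```python
import numpy as np

C = 299_792_458.0  # speed of light in m/s

def phase_to_depth(phase_image: np.ndarray, f_mod_hz: float) -> np.ndarray:
    """Convert per-pixel phase delays (radians) into depth in metres.

    Uses the common iToF relation d = c * phi / (4 * pi * f_mod); the actual
    sensor pipeline of the disclosure is not specified here.
    """
    return C * phase_image / (4.0 * np.pi * f_mod_hz)

# Example with an assumed 20 MHz modulation frequency.
phase = np.array([[np.pi / 2, np.pi]])
print(phase_to_depth(phase, 20e6))  # approx. [[1.87, 3.75]] metres
```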

Occupancy detection

Fig. 6a illustrates in more detail an embodiment of a depth image obtained by the in-vehicle ToF imaging system used for car seat occupancy detection, wherein the depth image shows that the passenger’s seat is occupied. Here, the depth image obtained by capturing the cabin of the vehicle depicts the console of the vehicle and a leg 400 of a passenger, located on the right of the console. In the embodiment of Fig. 6a, only one person is detected, thus the seat occupant is the passenger.

Fig. 6b illustrates in more detail an embodiment of a depth image obtained by the in-vehicle ToF imaging system used for car seat occupancy detection, wherein the depth image shows that the driver’s seat is occupied. Here, the depth image depicts the console of the vehicle and a leg 401 of the driver, located on the left of the console. In the embodiment of Fig. 6b, only one person is detected, thus the seat occupant is the driver.

The depth images in Figs. 6a and 6b obtained by the ToF imaging system 200 are analysed to detect a driver’s car seat occupancy and/or a passenger’s car seat occupancy. The analysis may for example be performed by removing the background in the depth image using a reference image that was made earlier. From the background image, wherein only the static part of the field of view remains, a blob for each driver and passenger area is computed. The blob corresponds to the surface of an object within a depth range that is static. In a case where the blob size satisfies a threshold, the presence of a driver and/or of a passenger is decided. The analysis detects if there is any occupant on a car seat.

In case of only one person being detected in the car, the final decision is obvious, i.e. the car seat is occupied by either the driver or the passenger. In the case where only one person, i.e. driver or passenger, is onboard, it is not necessary to perform any further driver/passenger detection. Additionally, false positives and false negatives in the hand owner status detection may be prevented and a filtering for the final decision, regarding the hand owner status, may be prepared.

In the embodiments of Figs. 6a and 6b, car seat occupancy detection is performed based on the depth image. Alternatively, car seat occupancy detection may be performed using seat pressure sensors embedded inside each seat of the vehicle, or the like. Still alternatively, the car seat occupancy detection is optional, and the skilled person may not perform occupancy detection of the vehicle’s seats.
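
The blob-based occupancy check described above could, for example, look as follows. This is a sketch under the assumption that a static background (reference) depth image is available; the thresholds and the left/right split of the frame into driver and passenger areas (LHD configuration) are illustrative assumptions.

```python
import numpy as np

def detect_seat_occupancy(depth: np.ndarray,
                          background: np.ndarray,
                          diff_threshold: float = 0.15,
                          min_blob_pixels: int = 2000) -> dict:
    """Return occupancy flags for the driver (left) and passenger (right) areas.

    Pixels differing sufficiently from the static background form a foreground
    mask; a seat is considered occupied when its half of the image contains a
    large enough blob of foreground pixels (LHD configuration assumed).
    """
    foreground = np.abs(depth - background) > diff_threshold
    mid = foreground.shape[1] // 2
    driver_blob = int(foreground[:, :mid].sum())
    passenger_blob = int(foreground[:, mid:].sum())
    return {
        "driver": driver_blob >= min_blob_pixels,
        "passenger": passenger_blob >= min_blob_pixels,
    }
```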

Hand detection

Fig. 7 illustrates a depth image generated by the ToF imaging system capturing a scene in an in-vehicle scenario, wherein in the depth image an active hand is detected. The captured scene comprises the right hand 501 of the vehicle’s driver and the right leg 502 of the driver. An object/hand recognition method is performed on the depth image to track an active hand, such as the hand 501. In a case where a hand is detected, an active bounding box 500 relating to the detected hand 501 in the depth image is determined and provided by the object/hand detection process.

Fig. 7 shows only a subsection of the depth image captured by the in-vehicle ToF imaging system. Object detection, such as hand detection, is performed by the hand owner identification process.

The object detection may be performed based on any object detection methods known to the skilled person. An exemplary object detection method is described by Shuran Song and Jianxiong Xiao in the published paper “Sliding Shapes for 3D Object Detection in Depth Images”, Proceedings of the 13th European Conference on Computer Vision (ECCV 2014).

Hand owner identification

Fig. 8 schematically describes an embodiment of a process of hand owner identification. The ToF imaging system (see 200 in Figs. 2 and 5) illuminates an in-vehicle scene, within its field-of-view (see 201 in Figs. 2 and 5) and captures an image such as for example a depth image. A Region of Interest (ROI) is defined in the depth image, such as the driver wheel zone (see 300 in Fig. 5) in the field of view of the ToF imaging system.

At 600, a predefined driver wheel zone is obtained. The predefined driver wheel zone corresponds to the same region, i.e. the predefined ROI, in the captured image. This predefined ROI may for example be set in advance (at time of manufacture, system setup, etc.) as a predefined parameter of the process. At 601, a predefined minimum number m of frames in which an active hand should be identified in the driver wheel zone is obtained. The minimum number m of frames is set such that if the identified active hand is, at least partially, inside the driver wheel zone, the identified active hand is considered to be the driver’s hand. At 602, a number n of frames in which an active hand is identified, at least partially, in the driver wheel zone is counted. The n frames may be consecutive frames, without limiting the present embodiment in that regard. At 603, if the number n of frames in which an active hand is identified, at least partially, in the driver wheel zone, obtained at 602, is higher than the predefined minimum number m of frames in which an active hand should be identified in the driver wheel zone, obtained at 601, the method proceeds at 604. At 604, a hand owner status is determined which indicates that the hand owner of the active hand is the driver.
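
A minimal sketch of the Fig. 8 rule, assuming the driver wheel zone and the active hand are given as axis-aligned bounding boxes; the (x, y, w, h) ROI format and the overlap test are assumptions, not the disclosed implementation.

```python
from typing import Optional, Tuple

class DriverWheelZoneCounter:
    """Once an active hand has been seen inside the driver wheel zone ROI in
    more than m frames, the hand owner status is set to driver (Fig. 8)."""

    def __init__(self, wheel_zone_roi: Tuple[int, int, int, int], min_frames_m: int = 5):
        self.roi = wheel_zone_roi          # predefined driver wheel zone (600)
        self.min_frames_m = min_frames_m   # predefined minimum number m of frames (601)
        self.count_n = 0                   # counted number n of frames (602)

    def update(self, hand_bbox: Optional[Tuple[int, int, int, int]]) -> Optional[str]:
        """Feed one frame's active-hand bounding box; returns "driver" once n > m (603, 604)."""
        if hand_bbox is not None and self._overlaps(hand_bbox):
            self.count_n += 1
        return "driver" if self.count_n > self.min_frames_m else None

    def _overlaps(self, bbox) -> bool:
        # The hand only needs to be at least partially inside the zone.
        x, y, w, h = bbox
        rx, ry, rw, rh = self.roi
        return not (x + w < rx or rx + rw < x or y + h < ry or ry + rh < y)
```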

Image analysis

Fig. 9a schematically shows an embodiment of an image analysis process performed on an image captured by an in-vehicle ToF imaging system.

An image captured by the in-vehicle ToF imaging system, such as the captured image 700, is subjected to image analysis 701 to obtain tip positions 702, a palm position 703 and an arm position 704 of a detected active hand. The arm position 704 includes information about a bottom arm position (see Fig. 9b). The image analysis 701 may include a process of image segmentation to detect a hand and an arm in the captured image.

The palm position 703 may for example be estimated by the image analysis 701 by computing the center of gravity of a two-dimensional (2D) palm detected in a depth image generated by the ToF imaging system, without limiting the present embodiment in that regard. Alternatively, or in addition, the palm position may be determined using the confidence image generated by the ToF imaging system (see 200 in Figs. 2 and 5). A palm orientation can also be obtained by applying principal component analysis on the palm detected by image segmentation and analysis.

The arm position 704 may for example be detected in combination with the identified active hand where it enters the field of view (see 201 in Figs. 2 and 5). The position where the identified active hand enters the field of view is denoted here as the bottom arm position (see Fig. 9b).

A seat occupant detection may be performed as described with regard to Figs. 6a and 6b. The process of image segmentation may be performed as described with regard to Fig. 7.

The skilled person may extract by the image analysis process any desirable information for performing hand owner identification. For example, the image analysis 701 used to obtain the fingertip positions may be any image analysis method known to the skilled person. An exemplary image analysis method is described in the patent literature WO 2019/134888 A1 (SONY CORP.) 11 July 2019 (11.07.2019), wherein an example gesture recognition algorithm is used to extract feature points such as fingertips and the like, being detected in the captured images.

Another exemplary image analysis method used to obtain hand parameters such as fingertip positions, palm position, arm position, hand and finger pose and the like is described in the patent literature WO 2015/104257 A1 (SONY CORP.) 16 July 2015 (16.07.2015), wherein Points Of Interest (POI) are determined in a detected hand of a user, by selecting at least one of a palm center, a hand tip, a fingertip, or the like.

In the embodiment of Fig. 9a, the segmentation process performed in the captured image may be a pixel segmentation performed on either a two-dimensional (2D) image or a three-dimensional (3D) image to extract information such as tip positions 702, palm position 703 and arm position 704 useful for generating a hand owner status. Other information may also be obtained by the image analysis, such as fingertip positions and orientation, palm position and orientation, arm position and orientation, hand and finger pose, hand’s bounding box, and the like, without limiting the present embodiment in that regard.

Fig. 9b illustrates a bottom arm analysis result, wherein the position of the bottom of the arm is determined. An active hand 706 is identified, and the position of the arm coupled to the identified hand 706 is detected where the active hand 706 enters the field of view of the ToF sensor. In the embodiment of Fig. 9b, the bottom arm position is determined by computing the mass center, i.e. mean center, of the arm contour in a bottom arm area 705 of the field of view. The bottom arm area 705 is the edge of the captured image closest to the back of the vehicle. The mean center can be computed from the hand’s contours and the hand’s contours can be estimated from hand segmentation. The contour of the arm, i.e. the hand’s contours, can be computed on a height of 14 pixels and a width same as the width of the captured image, without limiting the present embodiment in that regard. Alternatively, the contour of the arm can be considered within the 14 pixels height, without taking into account the width, without limiting the present embodiment in that regard. Still alternatively, the height and the width may be any suitable height and width being chosen by the skilled person.
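
A minimal sketch of the bottom arm position computation, assuming a binary hand/arm segmentation mask and a 14-pixel band at the image edge closest to the back of the vehicle (taken here as the top rows, which is an assumption about the camera orientation):

```python
import numpy as np

def bottom_arm_position(hand_mask: np.ndarray, band_height: int = 14):
    """Estimate the bottom arm position as the mean centre of the arm pixels
    inside a band at the image edge closest to the back of the vehicle.

    `hand_mask` is a boolean segmentation of the active hand/arm; the band is
    taken at the top `band_height` rows of the image.
    """
    band = hand_mask[:band_height, :]
    ys, xs = np.nonzero(band)
    if xs.size == 0:
        return None  # the arm does not enter the field of view in this band
    # (x, y) mass centre, with y relative to the top edge of the image.
    return float(xs.mean()), float(ys.mean())
```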

Fig. 10 schematically shows an embodiment of a process of an arm angle determination performed to obtain an arm angle.

Based on a palm position 703 and the arm position 704, acquired by the image analysis (see 701 in Fig. 9a), an arm angle determination 800 is performed to obtain an angle of the detected arm, such as the arm angle 801.

The arm angle determination 800 includes detecting the arm angle when considering a vertical line, i.e. a separation line, splitting the captured image in two parts as 0° angle. The arm angle is determined from the separation line (see 900 in Figs. 11a and 11b), by considering its slope, i.e. arm angle (see 901 in Figs. 11a and 11b). The arm angle can be determined in a captured 2D image, such as a confidence image and/or an RGB image. In this case the arm vector in the 2D image directly defines the arm angle. The arm angle can also be determined in a captured 3D depth image. In this case, an arm direction, i.e. arm orientation (see 902 in Figs. 11a and 11b), can be determined in 3D (vector) from a depth image generated by the ToF imaging system. Then the direction of the arm is projected in 2D on the confidence image being generated by the ToF imaging system and the arm angle is determined in this 2D image.

In the present embodiment the arm angle 801 is obtained based on the palm position 703 and the arm position 704, without limiting the present embodiment in that regard. Alternatively, the arm angle may be computed from the fingers’ direction and relatively to the separation line between the driver/passenger zones. In such a case, the fingers’ direction may be computed by applying principal component analysis on a three-dimensional (3D) point cloud, or the like. Still alternatively, the arm angle may be computed from the fingertip-palm’s direction and the arm position, wherein the fingertip-palm’s direction may be computed based on the fingertips position and the palm position, or the like.
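
A minimal sketch of the arm angle determination from the bottom arm position and the palm position, with the vertical separation line taken as 0°; the coordinate and sign conventions are illustrative assumptions:

```python
import math

def arm_angle_deg(palm_xy, bottom_arm_xy) -> float:
    """Signed angle between the arm direction (bottom arm position -> palm
    position) and the vertical separation line (0 degrees). Coordinates are
    (x, y) image pixels with y growing downwards; the sign convention is an
    illustrative assumption."""
    dx = palm_xy[0] - bottom_arm_xy[0]
    dy = palm_xy[1] - bottom_arm_xy[1]
    return math.degrees(math.atan2(dx, dy))

# Example: a palm 40 px to the right of and 70 px below the bottom arm position.
print(round(arm_angle_deg((160, 110), (120, 40)), 1))  # ~29.7 degrees
```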

Fig. 11a schematically shows in more detail an embodiment of a hand owner status determination, where the hand owner status indicates that a detected hand belongs to the driver, and Fig. 11b schematically shows in more detail an embodiment of a hand owner status determination, where the hand owner status indicates that a detected hand belongs to the front-seat passenger. Both determinations are based on hand parameters such as, for example, the fingertips position 702, the palm position 703, the arm position 704 and the arm angle 801, acquired by the image analysis 701 and the arm angle determination 800, as described with regard to Figs. 9a and 10 respectively.

An in-vehicle ToF imaging system (see 200 in Figs. 2 and 5) captures a scene within its field-of-view 201 to obtain a captured image. The scene within its field-of-view 201 includes an HMI 301, a part of the steering wheel 302, and an active hand, here the right hand of the driver. In the captured image a driver wheel zone 300 is defined, which corresponds to the same region in the scene. A fingertip position 702, a palm position 703, and a bottom arm position 704 of the detected active hand are acquired by image analysis (701 in Fig. 9a). The arm angle 801 is acquired by the arm angle determination process 800 (see Fig. 10) based on the fingertips position 702, the palm position 703, and the bottom arm position 704 of the detected active hand. The arm angle 801 comprises the arm angle 901 and the arm orientation 902. The arm orientation 902 is the orientation of the detected active hand (see 303 in Fig. 5), which is determined based on the bottom arm position 704 and the palm position 703. Here the arm orientation 902 is represented by a dashed line. The arm angle 901 is the angle formed between the arm orientation 902 and a separation line 900, that divides the captured image in two parts and thus the captured scene. Here the arm angle 901 is represented by a double arrow. The bottom arm position 704 is the position of the bottom arm within a predefined area, such as the bottom arm area 903, which is a predefined threshold. The bottom arm area 903 is defined as the top edge area of the captured image, which corresponds to the edge area closest to the back of the captured scene and thus of the vehicle. The predefined threshold may be a threshold of 16 pixels or 5% of the height of the image, or the like, without limiting the present embodiment in that regard.

In the embodiment of Fig. 11a, the arm angle 901 is positive with regard to the separation line 900 and thus the arm orientation 902 points from left to right in Fig. 11a, which, from the perspective of the ToF sensor, is from the left part of the scene captured by the ToF imaging system (see 200 in Figs. 2 and 5) to the right part. Thus, the hand owner status is identified as the driver.

Accordingly, in the embodiment of Fig. 11b, the arm angle 901 is negative with regard to the separation line 900 and thus the arm orientation 902 points from right to left in Fig. 11b, which, from the perspective of the ToF sensor, is from the right part of the scene captured by the ToF imaging system (see 200 in Figs. 2 and 5) to the left part. Thus, the hand owner status is identified as the passenger.

In the embodiments of Figs. 11a and 11b, the separation line 900 is a vertical line, without limiting the present embodiment in that regard. Alternatively, the separation line may be an oblique separation line, such as the separation line 2200 described with regard to Fig. 24. The separation line 900 may correspond to an angle of 0°. The arm angle 901 may be an angle of 30° (left part of the scene), the arm angle 901 may be an angle of (−)30° (right part of the scene), or the like, without limiting the present embodiment in that regard.

Fingertips analysis

Fig. 12a schematically shows an embodiment of a fingertip analysis process based on a tip criterion, to obtain a tip score. Based on the tip positions 702, a fingertips analysis 1000 is performed to obtain a score 1001 for the tip, such as the tip_i, wherein i is the hand owner status, e.g. i = D for the driver and i = P for the passenger. The tip score tip_i is the score computed from the fingertips analysis 1000 (tip criterion) and is used for status score computation as described in more detail with regard to Fig. 16 below.

A tip-palm direction is determined based on the tip positions 702 and the palm position 703, both acquired in the image analysis process described with regard to Fig. 9a. The tip-palm direction, which is a fingers direction, is obtained by applying principal component analysis on a 3D point cloud, or the like.

Fig. 12b schematically illustrates an embodiment of a finger pose detection result, which is obtained based on the detected tip positions 702 and palm position 703. The finger pose detection result is a result of a one finger (1F) or two fingers (2F) pose detection. The detector localizes the position of detected tips, i.e. the tip positions 702, relative to the palm center, i.e. the palm position 703. This provides first information about the owner.

The fingers pose is further analyzed during a frame lapse, for example, 20 frames by default, and reset if no other 1F/2F pose is detected again. Based on tip and palm position, a tip-palm direction is determined, wherein a specific range of tip-palm direction exists for each of the passenger’s and driver’s hands. As described in the embodiment of Fig. 9a above, the palm position 703 is estimated by computing the center of gravity of the 2D palm, and the palm orientation can be obtained by applying principal component analysis on the segmented palm.
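
A minimal sketch of the tip criterion, assuming hypothetical angular ranges of the tip-palm direction for the driver's and the passenger's hand; the range values and the normalisation are assumptions, not the disclosed values:

```python
import math

def tip_palm_direction_deg(tip_xy, palm_xy) -> float:
    """Direction from the palm centre to a detected fingertip, in degrees
    relative to the vertical separation line (illustrative convention)."""
    dx = tip_xy[0] - palm_xy[0]
    dy = tip_xy[1] - palm_xy[1]
    return math.degrees(math.atan2(dx, dy))

def tip_scores(tip_positions, palm_xy,
               driver_range=(-90.0, -10.0), passenger_range=(10.0, 90.0)):
    """Count fingertips whose tip-palm direction falls into a (hypothetical)
    angular range associated with the driver or the passenger, and normalise
    to a per-owner tip score in [0, 1]."""
    counts = {"D": 0, "P": 0}
    for tip in tip_positions:
        angle = tip_palm_direction_deg(tip, palm_xy)
        if driver_range[0] <= angle <= driver_range[1]:
            counts["D"] += 1
        elif passenger_range[0] <= angle <= passenger_range[1]:
            counts["P"] += 1
    n = max(len(tip_positions), 1)
    return {owner: count / n for owner, count in counts.items()}
```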

Arm analysis

Fig. 13 schematically shows an embodiment of an arm analysis process to obtain hand parameters.

Based on the palm position 703, the (bottom) arm position 704 and the arm angle 801 of the detected active hand, an arm analysis 1100 of the detected active hand is performed, to obtain hand parameters. The hand parameters include a score 1101 for the palm, such as the palm_i, a score 1102 for the arm bottom, such as the bottom_i, and a score 1103 for the angle, such as the angle_i, wherein i is the hand owner status, e.g. i = D for the driver and i = P for the passenger. The arm analysis 1100 includes a palm position criterion, a bottom arm criterion and an arm angle criterion, such that the palm_i is computed based on the palm position criterion, the bottom_i is computed based on the bottom arm criterion, and the angle_i is computed based on the arm angle criterion.

The palm position criterion is a criterion that aims to distinguish a driver’s arm from the one of a passenger. This is performed by determining the score 1101 for the palm, i.e. the palm_i.

The bottom arm criterion is a criterion that aims to distinguish a driver's bottom arm from a passenger’s bottom arm. This is performed by determining the score 1102 for the bottom arm, i.e. the bottom_i.

The arm angle criterion is a criterion that aims to distinguish a driver’s arm angle from a passenger’s arm angle. This is performed by determining the score 1103 for the arm angle, i.e. the angle_i. For example, the arm angle is determined when considering a separation line splitting the captured image in two parts as 0° angle. The sine of the angle contributes to a final score, which is the score to determine the hand owner, and thus to the hand owner status of the identified active hand. For example, when detecting a positive arm angle, the owner is on the right part of the captured image, in a Left-Hand Drive (LHD) case, which gives more weight to the passenger, i.e. angle_P. When detecting a negative arm angle, the owner is on the left part of the captured image, in an LHD case, which gives more weight to the driver, i.e. angle_D.
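
A minimal sketch of the arm angle criterion for an LHD configuration, using the sine of the arm angle as described above; the clamping and the exact contribution to angle_D and angle_P are assumptions:

```python
import math

def arm_angle_scores(arm_angle_deg: float):
    """The sine of the arm angle contributes to the owner scores: a positive
    angle weighs towards the passenger (angle_P), a negative angle towards the
    driver (angle_D). The exact weighting of the disclosure is not reproduced."""
    s = math.sin(math.radians(arm_angle_deg))
    return {"angle_D": max(-s, 0.0),   # owner on the left part of the image
            "angle_P": max(s, 0.0)}    # owner on the right part of the image

print(arm_angle_scores(30.0))   # weight goes to the passenger
print(arm_angle_scores(-30.0))  # weight goes to the driver
```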

Arm vote

Fig. 14a schematically shows an embodiment of an arm voting process performed based on hand parameters to obtain an arm vote.

Based on the palm position 703, the (bottom) arm position 704 and the arm angle 801 of the detected active hand, an arm voting 1200 is performed to obtain an arm vote 1201. The arm vote 1201 is a true or false value, i.e. a Boolean. The arm voting 1200 is implemented to avoid a false positive hand status by analyzing the arm criteria, i.e. the palm position criterion, the bottom arm criterion and the arm angle criterion, described in Fig. 13 above. The output of the arm voting 1200 are Booleans that influence the computed status score, which is obtained as described with regard to Fig. 16 below, and thus the determination of the hand owner status, as described with regard to Figs. 17 to 19 below.

Fig. 14b schematically shows an embodiment of the arm voting performed in Fig. 14a above. At 1202, an obtained bottom arm position (see 704 in Fig. 14a) is compared to a threshold. If the bottom arm position is less than the threshold, at 1202, the value is set as true, and thus, the vote 1203 is attributed to the driver. If the bottom arm position is more than the threshold, at 1202, the value is set as false, and thus, the vote 1204 is attributed to the passenger.

Fig. 14c schematically shows another embodiment of the arm voting performed in Fig. 14a above. At 1205, based on the obtained arm angle (see 801 in Fig. 10) and the obtained palm position (see 703 in Fig. 9a), the arm vote is attributed to the driver 1206, to the passenger 1207 or to unknown 1208.

The arm voting 1200 is implemented to avoid a false positive hand owner status by analyzing the arm criteria. The arm voting 1200 requires a separation line (see 900 in Figs. 11a and 11b) that is defined on the captured image and separates the captured image in two parts.

The embodiments of Figs. 15a and 15b show in more detail how the votes are attributed in the arm voting process described with regard to Fig. 14a, based on the angle formed between the arm of the detected hand and the separation line 900. The separation line 900 separates the captured image in a left part and a right part. In an LHD configuration, the area located on the left part of the captured image is the driver’s area and the area located on the right part of the captured image is the passenger’s area. The angle formed between the separation line 900 and a black bold line 1300 is the tolerance angle in which the arm vote is attributed to unknown. The tolerance angle may be an angle of 5°, without limiting the present embodiment in that regard. The tolerance angle may be 0 degrees (0°), or the tolerance angle may be any suitable angle being chosen by the skilled person.

In the embodiment of Fig. 15a, the palm position is located in the left part of the captured image, and thus, the vote is attributed to the driver. The arm angle 901 is positive, i.e. arm angle > 0°, and therefore, the vote is attributed to the passenger. The bottom arm area 903 is located in the right part of the captured image, and therefore, the vote is attributed to the passenger. The separation line 900 may correspond to an angle of 0°. The arm angle 901 may be an angle of 30°, without limiting the present embodiment in that regard.

In the embodiment of Fig. 15b, the palm position is located in the right part of the captured image, and thus the vote is attributed to the passenger. The arm angle 901 is negative, i.e. arm angle < 0°, and therefore the vote is attributed to the driver. The bottom arm area 903 is located in the left part of the captured image, and therefore the vote is attributed to the driver. The arm angle 901 may be an angle of -30° (right part of the scene), or the like, without limiting the present embodiment in that regard.

In an LHD configuration, for example, if a palm position is detected in the left part of the captured image and also a positive angle is detected, it is considered that the arm is the passenger’s arm by voting for the angle, e.g. arm_right.has_angle_vote = true. Otherwise it is considered that the arm position comes from the driver, by voting for the position, e.g. arm_left.has_palm_position_vote = true. Then, using the bottom arm criterion, if the arm position, which is the position in which the arm enters the captured image, is detected in the left part of the image, it is considered as belonging to the driver. Thus, the vote is attributed to the position, e.g. arm_left.has_palm_position_vote = true.
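
A minimal sketch of this voting logic, assuming an LHD configuration, a separation line characterized by a horizontal coordinate separation_x, and a tolerance band around 0°; the ArmVotes container and the parameter names are hypothetical and only mirror the flags arm_right.has_angle_vote and arm_left.has_palm_position_vote mentioned above.

    from dataclasses import dataclass

    @dataclass
    class ArmVotes:
        has_angle_vote: bool = False
        has_palm_position_vote: bool = False

    def arm_voting_lhd(palm_x: float, bottom_arm_x: float,
                       arm_angle_deg: float, separation_x: float,
                       tolerance_deg: float = 5.0):
        """Attribute angle and position votes for an LHD configuration."""
        arm_left, arm_right = ArmVotes(), ArmVotes()

        # Angle vote: ignore angles inside the tolerance band around 0 deg.
        if arm_angle_deg > tolerance_deg:
            angle_side = "passenger"      # positive angle, right part (LHD)
        elif arm_angle_deg < -tolerance_deg:
            angle_side = "driver"         # negative angle, left part (LHD)
        else:
            angle_side = "unknown"

        # Palm in the driver (left) part but angle pointing to the passenger:
        # the angle vote wins; otherwise the palm position votes for the driver.
        if palm_x < separation_x and angle_side == "passenger":
            arm_right.has_angle_vote = True
        else:
            arm_left.has_palm_position_vote = True

        # Bottom arm criterion: an arm entering on the left belongs to the driver.
        if bottom_arm_x < separation_x:
            arm_left.has_palm_position_vote = True

        return arm_left, arm_right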

Status score computation / Score determination

Fig. 16 schematically shows an embodiment of a score determination process, wherein two status scores are computed, namely a driver’s score and a passenger’s score, based on the results of the previously computed criteria.

Based on the arm votes 1201, the hand parameters palm_i, bottom_i and angle_i, and the tip parameter tip_i, two status scores 1401 and 1402 are computed, namely the driver’s score score_D and the passenger’s score score_P. A status score score_i is a score which is computed and used to identify the hand owner status.

In the embodiment of Fig. 16, the status score score_i is a weighted combination of the criterion scores and the owner history, where h_i is the computed history mean dominant owner, l_i is the history last state, tip_i is the score computed from the hand tip criterion, palm_i is the score from the palm position criterion, bottom_i is the score from the bottom arm criterion, angle_i is the score from the arm angle criterion, w_h is the history weight, w_tip is the tip weight, w_palm is the palm position weight, and w_angle is the arm angle weight.

As can be taken from the above, a weight is applied to each component, and all the weights are normalized.

The history mean dominant owner h_i is computed taking into account a Global History Owner value. The Global History Owner is the mean of the number of driver/passenger detections during (previous) valid frames in history.

The Global History Owner value counts for 50% of the history owner score.

The history last state l_i adds the last state into the final score; for example, if the last state was set to unknown owner, then l_i = 0.

For computing the history weight w_h, a first score is computed here in two parts using the owner history. Then, the last owner is taken into account for 50% of the history owner score.

The palm position weight w_palm and the arm angle weight w_angle are computed from the arm voting process, as described with regard to Figs. 14a to 15b above.
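
Since the exact formula of the embodiment is not reproduced here, the following sketch only illustrates a normalized weighted sum that is consistent with the description above, with a history term built from the global history owner mean h_i and the last state l_i at 50% each; the helper status_score and the weight names are assumptions.

    def status_score(h_i, l_i, tip_i, palm_i, bottom_i, angle_i,
                     w_h, w_tip, w_palm, w_bottom, w_angle):
        """Illustrative normalized weighted sum for one owner i
        (driver or passenger)."""
        history_i = 0.5 * h_i + 0.5 * l_i          # global history + last state
        total_w = w_h + w_tip + w_palm + w_bottom + w_angle
        weighted = (w_h * history_i + w_tip * tip_i + w_palm * palm_i
                    + w_bottom * bottom_i + w_angle * angle_i)
        return weighted / total_w if total_w else 0.0

    # score_D and score_P would then be obtained by evaluating status_score
    # with the driver-side and passenger-side criterion scores respectively.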

Fig. 17 shows a flow diagram visualizing a method for determining a hand owner status of an identified active hand, wherein the computed driver’s score and passenger’s score are compared.

At 1500, a comparison operator is used to determine whether the driver’s score score_D (see Fig. 16) and the passenger’s score score_P (see Fig. 16) are equal. If score_D is equal to score_P, at 1500, the method proceeds at 1501, wherein the hand owner status is set to unknown, at 1501. If score_D is not equal to score_P, at 1500, the method proceeds at 1502. At 1502, if the difference |score_D - score_P| is higher than a threshold value ε, wherein for example ε = 0.1, the method proceeds at 1504. If the difference |score_D - score_P| is lower than the threshold value ε, wherein for example ε = 0.1, at 1502, the method proceeds at 1503. At 1503, the hand owner status is set to last known. At 1504, if an identified active hand crosses the driver’s area, such as the driver wheel zone (see 300 in Figs. 5, 11a and 11b), the method proceeds at 1505. If an identified active hand does not cross the driver’s area, such as the driver wheel zone (see 300 in Figs. 5, 11a and 11b), at 1504, the method proceeds at 1506. At 1506, if the driver’s score score_D is higher than the passenger’s score score_P, the method proceeds at 1505. If the driver’s score score_D is lower than the passenger’s score score_P, at 1506, the method proceeds at 1507. At 1505, the hand owner status is set to driver. At 1507, the hand owner status is set to passenger. After 1506, the process returns to 1502 and is repeatedly performed in a case where the difference between the scores is too low.
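
A compact sketch of the judging process of Fig. 17, assuming the scores and the driver-wheel-zone crossing flag are already available; the step numbers are given as comments and the function name judge_owner is illustrative.

    def judge_owner(score_d: float, score_p: float,
                    crosses_driver_zone: bool,
                    last_known: str = "unknown",
                    eps: float = 0.1) -> str:
        """Return the hand owner status: driver, passenger, unknown or last known."""
        if score_d == score_p:                       # 1500 -> 1501
            return "unknown"
        if abs(score_d - score_p) < eps:             # 1502 -> 1503
            return last_known
        if crosses_driver_zone:                      # 1504 -> 1505
            return "driver"
        return "driver" if score_d > score_p else "passenger"   # 1506 -> 1505/1507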

Generation of hand owner status

Fig. 18 shows a flow diagram visualizing a method for generating a hand owner status for an identified active hand in a captured image, as described with regard to Figs. 5 to 17 above.

At 1600, a driver wheel zone (see 300 in Figs. 5, 11a and 11b) in the captured image is obtained. At 1601, if an active hand is detected in the driver wheel zone for at least m frames (see Fig. 8), the method proceeds at 1602. At 1602, the hand owner is identified, and the hand owner status is set to driver. If an active hand is not detected in the driver wheel zone for at least m frames (see Fig. 8), at 1601, the method proceeds at 1603. At 1603, the tip position (see 702 in Figs. 9a and 12a), the palm position (see 703 in Figs. 9a and 14a), and the arm position (see 704 in Figs. 9a and 14a) are analyzed based on a tip criterion, a palm criterion and an arm criterion respectively, to obtain the scores tip_i, palm_i, bottom_i and angle_i, used to compute the driver’s score score_D and the passenger’s score score_P. At 1604, the driver’s score score_D and the passenger’s score score_P are computed (see Fig. 16). At 1605, if the difference |score_D - score_P| is higher than a threshold value ε, wherein for example ε = 0.1, the method proceeds at 1607. If the difference |score_D - score_P| is lower than the threshold value ε, wherein for example ε = 0.1, at 1605, the method proceeds at 1606. At 1606, the hand owner is identified, and the hand owner status is set to unknown. At 1607, if the driver’s score score_D is higher than the passenger’s score score_P, the method proceeds at 1608. If the driver’s score score_D is lower than the passenger’s score score_P, at 1607, the method proceeds at 1609. At 1608, the hand owner is identified, and the hand owner status is set to driver. At 1609, the hand owner is identified, and the hand owner status is set to passenger.
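
The overall generation flow of Fig. 18 could be sketched as follows, assuming the per-criterion scores have already been turned into score_D and score_P (for example with the status_score sketch above); the function name and parameters are again illustrative only.

    def generate_hand_owner_status(frames_in_wheel_zone: int, m: int,
                                   score_d: float, score_p: float,
                                   eps: float = 0.1) -> str:
        """Wheel-zone shortcut (1600-1602) followed by the score
        comparison (1604-1609)."""
        if frames_in_wheel_zone >= m:                # 1601 -> 1602
            return "driver"
        if abs(score_d - score_p) < eps:             # 1605 -> 1606
            return "unknown"
        return "driver" if score_d > score_p else "passenger"   # 1607 -> 1608/1609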

Fig. 19 shows a flow diagram visualizing a method for hand owner status identification, wherein the hand owner’s historical statistics are computed and an arm vote and a Right-Hand Drive (RHD) swapping are performed.

At 1700, a 2D image and/or a confidence image is obtained. At 1701, if a value indicating that “hand is on wheel”, in a continuous mode, is higher than a threshold, the method proceeds at 1708. If the value indicating that “hand is on wheel”, in a continuous mode, is lower than the threshold, for example, 20 frames, the method proceeds at 1703. At 1702, the value of the “hand is on wheel” variable is incremented by one; this value increment is used at 1601 in Fig. 18 above. At 1703, a dedicated owner detection pipeline is used, wherein the dedicated owner detection pipeline includes the steps 1704 to 1707. At 1704, tip and hand parameters are computed based on the tip criterion (see Fig. 12a), the arm criterion (see Figs. 14a, b, c) and the arm vote (see Fig. 12a) to obtain the scores tip_i, palm_i, bottom_i and angle_i, used to compute the driver’s score score_D and the passenger’s score score_P. At 1705, the driver’s score score_D and the passenger’s score score_P are computed (see Fig. 16). At 1708, if a hand is identified on the wheel, the method proceeds at 1712. If a hand is not identified on the wheel, at 1708, the method proceeds at 1709. At 1709, historical statistics are computed, which are used at 1706, and the method proceeds at 1706. At 1706, a judging process (see Fig. 17) is performed based on the computed scores score_D, score_P, to obtain a hand owner status, e.g. driver, passenger, or unknown. At 1707, the result of the judging process can be inverted, if needed, depending on the driving configuration, e.g. LHD or RHD. That is, the hand owner status driver becomes passenger and passenger becomes driver. The result of the judging process, with or without the LHD/RHD swapping, is the hand owner status, namely driver, passenger, or unknown. In the embodiment of Fig. 19, the RHD swapping at 1707 is optional. The continuous mode is active when detecting that an active hand is touching the steering wheel for a number of frames, for example, for 20 frames. The historical statistics computed at 1709 are the scores l_i used to compute the driver’s and passenger’s scores described with regard to Fig. 16 above.
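
The optional LHD/RHD swapping at 1707 and the continuous-mode condition could be sketched as follows; the names swap_for_rhd, continuous_mode_active and hand_on_wheel_frames are assumptions for illustration.

    def swap_for_rhd(status: str, rhd: bool) -> str:
        """Exchange driver and passenger when the vehicle is an RHD
        configuration; unknown is left unchanged."""
        if not rhd:
            return status
        return {"driver": "passenger", "passenger": "driver"}.get(status, status)

    def continuous_mode_active(hand_on_wheel_frames: int,
                               threshold_frames: int = 20) -> bool:
        """The continuous mode is active once the active hand has been
        touching the steering wheel for the given number of frames."""
        return hand_on_wheel_frames >= threshold_frames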

Fig. 20 shows a flow diagram visualizing an embodiment of a method for hand owner status identification. At 1800, an image is acquired by a ToF sensor of a ToF imaging system (see 200 in Figs. 2 and 5) that captures a scene within the ToF imaging system’s field-of-view (see 201 in Figs. 2 and 5), for example, in an in-vehicle scenario. At 1801, identification of an active hand in the image is performed. At 1802, a hand owner status for the identified hand is generated based on the active hand detected and identified in the captured image at 1801. The hand owner status may be e.g. driver, passenger, unknown or last known, as described in Figs. 17, 18 and 19 above.

Implementation

Fig. 21 shows a block diagram depicting an example of a schematic configuration of a vehicle control system 7000 as an example of a mobile body control system to which the technology according to an embodiment of the present disclosure can be applied. The vehicle control system 7000 includes a plurality of electronic control units connected to each other via a communication network 7010. In the example depicted in Fig. 21, the vehicle control system 7000 includes a driving system control unit 7100, a body system control unit 7200, a battery control unit 7300, an outside-vehicle information detecting unit 7400, an in-vehicle information detecting unit 7500, and an integrated control unit 7600. The communication network 7010 connecting the plurality of control units to each other may, for example, be a vehicle-mounted communication network compliant with an arbitrary standard such as controller area network (CAN), local interconnect network (LIN), local area network (LAN), FlexRay (registered trademark), or the like.

Each of the control units includes: a microcomputer that performs arithmetic processing according to various kinds of programs; a storage section that stores the programs executed by the microcomputer, parameters used for various kinds of operations, or the like; and a driving circuit that drives various kinds of control target devices. Each of the control units further includes: a network interface (I/F) for performing communication with other control units via the communication network 7010; and a communication I/F for performing communication with a device, a sensor, or the like within and without the vehicle by wire communication or radio communication. A functional configuration of the integrated control unit 7600 illustrated in Fig. 21 includes a microcomputer 7610, a general-purpose communication I/F 7620, a dedicated communication I/F 7630, a positioning section 7640, a beacon receiving section 7650, an in-vehicle device I/F 7660, a sound/image output section 7670, a vehicle-mounted network I/F 7680, and a storage section 7690. The other control units similarly include a microcomputer, a communication I/F, a storage section, and the like.

The driving system control unit 7100 controls the operation of devices related to the driving system of the vehicle in accordance with various kinds of programs. The driving system control unit 7100 may have a function as a control device of an antilock brake system (ABS), electronic stability control (ESC), or the like.

The driving system control unit 7100 is connected with a vehicle state detecting section 7110. The driving system control unit 7100 performs arithmetic processing using a signal input from the vehicle state detecting section 7110, and controls the internal combustion engine, the driving motor, an electric power steering device, the brake device, and the like.

The body system control unit 7200 controls the operation of various kinds of devices provided to the vehicle body in accordance with various kinds of programs. For example, the body system control unit 7200 functions as a control device for a keyless entry system, a smart key system, a power window device, or various kinds of lamps such as a headlamp, a backup lamp, a brake lamp, a turn signal, a fog lamp, or the like.

The battery control unit 7300 controls a secondary battery 7310, which is a power supply source for the driving motor, in accordance with various kinds of programs.

The outside-vehicle information detecting unit 7400 detects information about the outside of the vehicle including the vehicle control system 7000. For example, the outside-vehicle information detecting unit 7400 is connected with at least one of an imaging section 7410 and an outside-vehicle information detecting section 7420. The imaging section 7410 includes at least one of a time-of-flight (ToF) camera, a stereo camera, a monocular camera, an infrared camera, and other cameras. The outside-vehicle information detecting section 7420, for example, includes at least one of an environmental sensor for detecting current atmospheric conditions or weather conditions and a peripheral information detecting sensor for detecting another vehicle, an obstacle, a pedestrian, or the like on the periphery of the vehicle including the vehicle control system 7000.

The in-vehicle information detecting unit 7500 detects information about the inside of the vehicle. The in-vehicle information detecting unit 7500 may collect any information related to a situation related to the vehicle. The in-vehicle information detecting unit 7500 is, for example, connected with a driver and/or passenger state detecting section 7510 that detects the state of a driver and/or passengers. The driver state detecting section 7510 may include a camera that images the driver, a biosensor that detects biological information of the driver, a microphone that collects sound within the interior of the vehicle, or the like. The biosensor is, for example, disposed in a seat surface, the steering wheel, or the like, and detects biological information of an occupant sitting in a seat or the driver holding the steering wheel.

The integrated control unit 7600 controls general operation within the vehicle control system 7000 in accordance with various kinds of programs. The integrated control unit 7600 is connected with an input section 7800. The input section 7800 is implemented by a device capable of input operation by an occupant, such, for example, as a touch panel, a button, a microphone, a switch, a lever, or the like. The integrated control unit 7600 may be supplied with data obtained by voice recognition of voice input through the microphone. The input section 7800 may, for example, be a remote control device using infrared rays or other radio waves, or an external connecting device such as a mobile telephone, a personal digital assistant (PDA), or the like that supports operation of the vehicle control system 7000. The input section 7800 may be, for example, a camera. In that case, an occupant can input information by gesture. Alternatively, data may be input which is obtained by detecting the movement of a wearable device that an occupant wears. Further, the input section 7800 may, for example, include an input control circuit or the like that generates an input signal on the basis of information input by an occupant or the like using the above-described input section 7800, and which outputs the generated input signal to the integrated control unit 7600. An occupant or the like inputs various kinds of data or gives an instruction for processing operation to the vehicle control system 7000 by operating the input section 7800.

The storage section 7690 may include a read only memory (ROM) that stores various kinds of programs executed by the microcomputer and a random access memory (RAM) that stores various kinds of parameters, operation results, sensor values, or the like. In addition, the storage section 7690 may be implemented by a magnetic storage device such as a hard disc drive (HDD) or the like, a semiconductor storage device, an optical storage device, a magneto-optical storage device, or the like.

The general-purpose communication I/F 7620 is a communication I/F used widely, which communication I/F mediates communication with various apparatuses present in an external environment 7750. The general-purpose communication I/F 7620 may implement a cellular communication protocol such as global system for mobile communications (GSM (registered trademark)), worldwide interoperability for microwave access (WiMAX (registered trademark)), long term evolution (LTE (registered trademark)), LTE-advanced (LTE-A), or the like, or another wireless communication protocol such as wireless LAN (referred to also as wireless fidelity (Wi-Fi (registered trademark))), Bluetooth (registered trademark), or the like. The general-purpose communication I/F 7620 may, for example, connect to an apparatus (for example, an application server or a control server) present on an external network (for example, the Internet, a cloud network, or a company-specific network) via a base station or an access point. In addition, the general-purpose communication I/F 7620 may connect to a terminal present in the vicinity of the vehicle (which terminal is, for example, a terminal of the driver, a pedestrian, or a store, or a machine type communication (MTC) terminal) using a peer-to-peer (P2P) technology, for example.

The dedicated communication I/F 7630 is a communication I/F that supports a communication protocol developed for use in vehicles. The dedicated communication I/F 7630 may implement a standard protocol such, for example, as wireless access in vehicle environment (WAVE), which is a combination of institute of electrical and electronic engineers (IEEE) 802.11p as a lower layer and IEEE 1609 as a higher layer, dedicated short range communications (DSRC), or a cellular communication protocol. The dedicated communication I/F 7630 typically carries out V2X communication as a concept including one or more of communication between a vehicle and a vehicle (Vehicle to Vehicle), communication between a road and a vehicle (Vehicle to Infrastructure), communication between a vehicle and a home (Vehicle to Home), and communication between a pedestrian and a vehicle (Vehicle to Pedestrian).

The positioning section 7640, for example, performs positioning by receiving a global navigation satellite system (GNSS) signal from a GNSS satellite (for example, a GPS signal from a global positioning system (GPS) satellite), and generates positional information including the latitude, longitude, and altitude of the vehicle. Incidentally, the positioning section 7640 may identify a current position by exchanging signals with a wireless access point or may obtain the positional information from a terminal such as a mobile telephone, a personal handy-phone system (PHS), or a smart phone that has a positioning function.

The beacon receiving section 7650, for example, receives a radio wave or an electromagnetic wave transmitted from a radio station installed on a road or the like, and thereby obtains information about the current position, congestion, a closed road, a necessary time, or the like. Incidentally, the function of the beacon receiving section 7650 may be included in the dedicated communication I/F 7630 described above.

The in-vehicle device I/F 7660 is a communication interface that mediates connection between the microcomputer 7610 and various in-vehicle devices 7760 present within the vehicle. The in-vehicle device I/F 7660 may establish wireless connection using a wireless communication protocol such as wireless LAN, Bluetooth (registered trademark), near field communication (NFC), or wireless universal serial bus (WUSB). In addition, the in-vehicle device I/F 7660 may establish wired connection by universal serial bus (USB), high-definition multimedia interface (HDMI (registered trademark)), mobile high-definition link (MHL), or the like via a connection terminal (and a cable if necessary) not depicted in the figures. The in-vehicle devices 7760 may, for example, include at least one of a mobile device and a wearable device possessed by an occupant and an information device carried into or attached to the vehicle. The in-vehicle devices 7760 may also include a navigation device that searches for a path to an arbitrary destination. The in-vehicle device I/F 7660 exchanges control signals or data signals with these in-vehicle devices 7760.

The vehicle-mounted network I/F 7680 is an interface that mediates communication between the microcomputer 7610 and the communication network 7010. The vehicle-mounted network I/F 7680 transmits and receives signals or the like in conformity with a predetermined protocol supported by the communication network 7010.

The microcomputer 7610 of the integrated control unit 7600 controls the vehicle control system 7000 in accordance with various kinds of programs on the basis of information obtained via at least one of the general-purpose communication I/F 7620, the dedicated communication I/F 7630, the positioning section 7640, the beacon receiving section 7650, the in-vehicle device I/F 7660, and the vehicle-mounted network I/F 7680. The microcomputer 7610 may be the processor 202 of Fig. 2, and the microcomputer 7610 may implement the functionality described in Figs. 9a, 9b, 11a, 12a, 13 and Fig. 16 in more detail. For example, the microcomputer 7610 may calculate a control target value for the driving force generating device, the steering mechanism, or the braking device on the basis of the obtained information about the inside and outside of the vehicle, and output a control command to the driving system control unit 7100. For example, the microcomputer 7610 may perform cooperative control intended to implement functions of an advanced driver assistance system (ADAS), which functions include collision avoidance or shock mitigation for the vehicle, following driving based on a following distance, vehicle speed maintaining driving, a warning of collision of the vehicle, a warning of deviation of the vehicle from a lane, or the like. In addition, the microcomputer 7610 may perform cooperative control intended for automatic driving, which makes the vehicle travel autonomously without depending on the operation of the driver, or the like, by controlling the driving force generating device, the steering mechanism, the braking device, or the like on the basis of the obtained information about the surroundings of the vehicle.

The microcomputer 7610 may generate three-dimensional distance information between the vehicle and an object such as a surrounding structure, a person, or the like, and generate local map information including information about the surroundings of the current position of the vehicle, on the basis of information obtained via at least one of the general-purpose communication I/F 7620, the dedicated communication I/F 7630, the positioning section 7640, the beacon receiving section 7650, the in-vehicle device I/F 7660, and the vehicle-mounted network I/F 7680. In addition, the microcomputer 7610 may predict danger such as collision of the vehicle, approaching of a pedestrian or the like, an entry to a closed road, or the like on the basis of the obtained information, and generate a warning signal. The warning signal may, for example, be a signal for producing a warning sound or lighting a warning lamp.

The sound/image output section 7670 transmits an output signal, e.g. a modified audio signal, of at least one of a sound and an image to an output device capable of visually or auditorily notifying information to an occupant of the vehicle or the outside of the vehicle. In the example of Fig. 21, an audio speaker 7710, a display section 7720, and an instrument panel 7730 are illustrated as the output device. The display section 7720 may, for example, include at least one of an on-board display and a head-up display. The display section 7720 may have an augmented reality (AR) display function. The output device may be other than these devices, and may be another device such as headphones, a wearable device such as an eyeglass type display worn by an occupant or the like, a projector, a lamp, or the like. In a case where the output device is a display device, the display device visually displays results obtained by various kinds of processing performed by the microcomputer 7610 or information received from another control unit in various forms such as text, an image, a table, a graph, or the like. In addition, in a case where the output device is an audio output device, the audio output device converts an audio signal composed of reproduced audio data, acoustic data, or the like into an analog signal, and auditorily outputs the analog signal.

Incidentally, at least two control units connected to each other via the communication network 7010 in the example depicted in Fig. 21 may be integrated into one control unit. Alternatively, each individual control unit may include a plurality of control units. Further, the vehicle control system 7000 may include another control unit not depicted in the figures. In addition, part or the whole of the functions performed by one of the control units in the above description may be assigned to another control unit. That is, predetermined arithmetic processing may be performed by any of the control units as long as information is transmitted and received via the communication network 7010. Similarly, a sensor or a device connected to one of the control units may be connected to another control unit, and a plurality of control units may mutually transmit and receive detection information via the communication network 7010.

Incidentally, a computer program for realizing the functions of the electronic device according to the present embodiment described with reference to Figs. 2 and 5 can be implemented in one of the control units or the like. In addition, a computer readable recording medium storing such a computer program can also be provided. The recording medium is, for example, a magnetic disk, an optical disk, a magneto-optical disk, a flash memory, or the like. In addition, the above-described computer program may be distributed via a network, for example, without the recording medium being used.

It should be noted that the description above is only an example configuration. Alternative configurations may be implemented with additional or other sensors, storage devices, interfaces, or the like.

Fig. 22 schematically shows an embodiment of a hand owner detection process performed to adapt a car system behaviour based on an input user.

A vehicle, such as the car 2100, comprises a car system set-up 2101, a car safety system 2102 and a car system display 2103. A user 2104, who is a driver and/or a passenger of the car 2100, is able to see what is displayed on the car system display 2103. The car system display 2103 is operated by a user’s hand, e.g. an active hand, and a hand owner detector 2105 detects the active hand and identifies the hand owner. The hand owner detector 2105 detects the active hand of the user 2104 based on a hand detection 2106, a palm analysis 2107, a tips analysis 2108, a seat occupants detection 2109, and a predefined wheel zone of a steering wheel 2110. The results of the process performed by the hand owner detector 2105 are acquired by the system of the car 2100, such that a car system behavior is adapted based on an input user.

In the embodiment of Fig. 22, the car system display 2103 may be comprised, for example, in the infotainment system 203 described with regard to Fig. 2 above. The hand owner detector 2105 may be implemented by the processor 202 described with regard to Fig. 2 above. The hand detection 2106 may be performed as described in Fig. 7 above. The palm analysis 2107 may be performed as described in Figs. 9a, 9b, 10 and 13 above. The tips analysis 2108 may be the fingertips analysis 1000 as described in Fig. 12a above. The seat occupant detection 2109 may be performed as described in Figs. 6a and 6b above. The steering wheel 2110 may be the steering wheel 302 described in Figs. 5, 11a and 11b above.

Fig. 23 shows in more detail an embodiment of a separation line defined in a captured image. A separation line 2200 is a line that splits the image captured by the in-vehicle ToF imaging system into two parts. In this implementation, the separation line 2200 is an oblique black line defined in the captured image. Based on the separation line 2200, an angle of an identified active hand may be determined. The position of the separation line can be modified to adapt the sensitivity of the method as a function of the car configuration and/or the driver and passenger morphology.
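
For illustration, deciding on which part of an oblique separation line a point (for example a palm or bottom arm position) falls could be done with a 2D cross product as sketched below; the function name and the mapping of the sign to the driver or passenger side depend on the image coordinate convention and the line orientation, and are assumptions.

    def side_of_separation_line(px: float, py: float,
                                ax: float, ay: float,
                                bx: float, by: float) -> str:
        """Classify point (px, py) relative to the oblique separation line
        through (ax, ay) -> (bx, by) using the sign of the 2D cross product.
        Which sign corresponds to the driver or passenger side depends on
        the chosen image coordinate convention and line direction."""
        cross = (bx - ax) * (py - ay) - (by - ay) * (px - ax)
        if cross == 0.0:
            return "on_line"
        return "side_a" if cross > 0 else "side_b"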

Fig. 24 schematically shows a hand owner detection result, wherein the hand owner status is set to driver while the hand owner interacts with an in-vehicle infotainment system. An active hand 2300 is captured by the in-vehicle ToF imaging system (see 200 in Fig. 2) while interacting with the infotainment system (see 203 in Fig. 2) of a vehicle. The active hand 2300 is detected by the hand owner detector (see Fig. 7), and based on the embodiments described with regard to Figs. 2 to 19 above, a hand owner is identified and a hand owner status is generated; here the hand owner status is set to driver.

It should be recognized that the embodiments describe methods with an exemplary ordering of method steps. The specific ordering of method steps is, however, given for illustrative purposes only and should not be construed as binding.

It should also be noted that the division of the electronic device of Fig. 21 into units is only made for illustration purposes and that the present disclosure is not limited to any specific division of functions in specific units. For instance, at least parts of the circuitry could be implemented by a respectively programmed processor, field programmable gate array (FPGA), dedicated circuits, and the like.

All units and entities described in this specification and claimed in the appended claims can, if not stated otherwise, be implemented as integrated circuit logic, for example, on a chip, and functionality provided by such units and entities can, if not stated otherwise, be implemented by software.

In so far as the embodiments of the disclosure described above are implemented, at least in part, using software-controlled data processing apparatus, it will be appreciated that a computer program providing such software control and a transmission, storage or other medium by which such a computer program is provided are envisaged as aspects of the present disclosure.

Note that the present technology can also be configured as described below.

(1) An electronic device comprising circuitry configured to perform hand owner identification (1706) based on image analysis (701) of an image (700) captured by an imaging system (200) to obtain a hand owner status (1710, 1711, 1712).

(2) The electronic device of (1), wherein the circuitry is configured to define a driver wheel zone (300) as a Region of Interest in the captured image (700), and to perform hand owner identification (1706) based on the defined driver wheel zone (300).

(3) The electronic device of (1) or (2), wherein the circuitry is configured to detect an active hand (303) in the captured image (700) capturing a field-of-view (201) of the imaging system (200) being a ToF imaging system, and to perform hand owner identification (1706) based on the detected active hand (303).

(4) The electronic device of (2) or (3), wherein the circuitry is configured to define a minimum number (m) of frames in which an active hand (303) should be detected in the driver wheel zone (300).

(5) The electronic device of (4), wherein the circuitry is configured to count a number (n) of frames in which the active hand (303) is detected in the driver wheel zone (300), and to perform hand owner identification (1706) by comparing the minimum number (m) of frames with the counted number (n) of frames.

(6) The electronic device of (5), wherein the circuitry is configured to, when the minimum number (m) of frames is smaller than the counted number (n) of frames, obtain a hand owner status (1710, 1711, 1712) which indicates that hand owner is a driver.

(7) The electronic device of any one of (1) to (6), wherein the circuitry is configured to perform image analysis (701) based on the captured image (700) to obtain tip positions (702), a palm position (703) and an arm position (704) indicating a bottom arm position.

(8) The electronic device of (7), wherein the circuitry is configured to perform arm angle determination (800) based on the palm position (703) and the bottom arm position (704) to obtain an arm angle (801).

(9) The electronic device of (7), wherein the circuitry is configured to perform fingertips analysis (1000) based on the tip positions (702) to obtain a tip score (tip_i).

(10) The electronic device of (8) or (9), wherein the circuitry is configured to perform arm analysis (1100) based on the palm position (703), the bottom arm position (704) and the arm angle (801) to obtain a palm score (palm_i), a bottom arm score (bottom_i) and an arm angle score (angle_i).

(11) The electronic device of (8) or (10), wherein the circuitry is configured to perform arm voting (1200) based on the palm position (703), the bottom arm position (704) and the arm angle (801) to obtain an arm vote (1201).

(12) The electronic device of (11), wherein the circuitry is configured to perform score determination (1400) based on the arm vote (1201), the tip score (tip_i), the palm score (palm_i), the bottom arm score (bottom_i) and the arm angle score (angle_i) to obtain a driver’s score (score_D) and a passenger’s score (score_P).

(13) The electronic device of (12), wherein the circuitry is configured to, when the driver’s score (score_D) is higher than the passenger’s score (score_P), obtain a hand owner status (1710, 1711, 1712) which indicates that hand owner is a driver.

(14) The electronic device of (12), wherein the circuitry is configured to, when the driver’s score (score_D) is lower than the passenger’s score (score_P), obtain a hand owner status (1710, 1711, 1712) which indicates that hand owner is a passenger.

(15) The electronic device of (12), wherein the circuitry is configured to, when an absolute difference of the driver’s score (score_D) and the passenger’s score (score_P) is greater than a threshold (ε), obtain a hand owner status (1710, 1711, 1712) which indicates that hand owner is unknown.

(16) The electronic device of any one of (1) to (15), wherein the circuitry is configured to, when the captured image (700) is a depth image, perform seat occupancy detection based on the depth image to obtain a seat occupancy detection status.

(17) The electronic device of any one of (1) to (16), wherein the circuitry is configured to perform hand owner identification (1706) based on a Left Hand Drive (LHD) configuration or a Right Hand Drive (RHD) configuration.

(18) A method comprising performing hand owner identification (1706) based on image analysis (701) of an image (700) captured by an imaging system (200) to obtain a hand owner status (1710, 1711, 1712).

(19) A computer program comprising instructions which, when the program is executed by a computer, cause the computer to carry out the method of (18).

(20) A non-transitory computer-readable recording medium that stores therein a computer program product, which, when executed by a computer, causes the computer to carry out the method of (18).