

Title:
APPARATUS AND METHOD FOR SAFETY WARNING FOR A DESIGNATED ZONE
Document Type and Number:
WIPO Patent Application WO/2024/030068
Kind Code:
A1
Abstract:
An apparatus and method for safety warning for a designated zone, wherein the apparatus comprises: a detector for monitoring a designated zone and detecting one or more intruders partially or fully entering the designated zone; an image sensor for capturing images in real time, wherein the apparatus is configured to: send a signal to alert that the one or more intruders are detected, wherein the designated zone extends in a direction away from the detector, and the image sensor is mounted to the detector such that the direction of extension of the designated zone is aligned at a predetermined angle with respect to a direction of view of the image sensor, wherein the apparatus is operable to: control the motor to move both the image sensor and detector to align the direction of extension of the designated zone with a point of interest in the captured images.

Inventors:
LOW JUN KIAT ELAINE (SG)
LOW BOON KIAT MARTIN (SG)
Application Number:
PCT/SG2022/050544
Publication Date:
February 08, 2024
Filing Date:
August 01, 2022
Assignee:
S S IND SUPPLY & TECHNICAL SERVICES (SG)
International Classes:
G08G1/16; B60Q1/50; B60Q5/00; E01F9/00
Foreign References:
CN107798918A2018-03-13
US20120306664A12012-12-06
CN111127836A2020-05-08
CN210181808U2020-03-24
Attorney, Agent or Firm:
CHANG, Jian Ming (SG)
Claims:
Claims

1. An apparatus for safety warning for a designated zone, wherein the apparatus comprises: a detector configured to monitor a designated zone and detect one or more intruders partially or fully entering the designated zone; an image sensor for capturing images in real time; a motor for moving the image sensor and the detector; and a processing unit configured to execute instructions to operate the apparatus to: receive, from the detector, input relating to the one or more intruders detected in the designated zone; send a signal to alert that the one or more intruders are detected in the designated zone, wherein the designated zone extends in a direction away from the detector, and the image sensor is mounted to the detector such that the direction of extension of the designated zone is aligned at a predetermined angle with respect to a direction of view of the image sensor, wherein the apparatus is operable to: control the motor to move both the image sensor and detector to align the direction of extension of the designated zone with a point of interest in the captured images.

2. The apparatus of claim 1, wherein the apparatus further comprises: a display configured to display the images captured by the image sensor; and one or more user control mechanisms for controlling movements of the motor to align the direction of extension of the designated zone with the point of interest.

3. The apparatus of claim 2, wherein the display comprises a physical visual marker to guide a user to align the direction of extension of the designated zone with the point of interest.

4. The apparatus of claim 2 or 3, wherein the display is configured to display a graphical representation to guide a user to align the direction of extension of the designated zone with the point of interest.

5. The apparatus of claim 3 or 4, wherein the physical visual marker or the graphical representation is provided in a form of a vertical line located at a center of the display.

6. The apparatus of claim 3, 4 or 5, wherein the physical visual marker or the graphical representation comprises one or more curved lines indicative of a line in reality that has undergone barrel distortion in the images captured by the image sensor.

7. The apparatus of any one of claims 2 to 6, wherein the display is configured to graphically display the one or more user control mechanisms.

8. The apparatus of any one of the preceding claims, wherein the point of interest is a center of a road or lane in a road, a center of a road or lane in a road at a predetermined distance away from the detector, a center of a plurality of lanes in a road, or a center of a vehicle or object in the captured images.

9. The apparatus of any one of the preceding claims, wherein the direction of extension of the designated zone is aligned with the point of interest when a longitudinal axis of symmetry of the designated zone is substantially parallel with a lane or a road.

10. The apparatus of any one of the preceding claims, wherein the direction of extension of the designated zone is aligned with the point of interest when an area of the designated zone is aligned with a region of interest including a plurality of said point of interest.

11. The apparatus of any one of the preceding claims, wherein the apparatus is operable to: count every detection of an intruder in the designated zone, wherein images of every detection captured by the image sensor are stored in a database so as to enable an accident or near miss to be tracked.

12. The apparatus of any one of the preceding claims, wherein data to control the motor to move the image sensor and the detector to align the direction of extension of the designated zone with the point of interest is determined based on an output of a neural network.

13. The apparatus of any one of the preceding claims, wherein data to control the motor to move the image sensor and the detector to align the direction of extension of the designated zone with the point of interest is determined based on location data of the point of interest obtained from processing the captured images to identify features for calculation of the location of the point of interest.

14. The apparatus of claim 13, wherein the features identified include a road kerb, a road divider, a road marking, a vehicle or object, and/or a lane marking.

15. The apparatus of any one of the preceding claims, wherein the apparatus is operable to: send different signals to alert one or more intruders detected in different areas of the designated zone, wherein each signal for each area in the designated zone results in an alert that is different from an alert of a signal sent for another area in the designated zone.

16. The apparatus of claim 15, wherein the alerts in the different areas of the designated zone are different in level of urgency and the levels of urgency correspond to different speeds of the intruders and/or distances of the intruders from the detector.

17. The apparatus of claim 16, wherein the alerts have different tempo for different levels of urgency.

18. The apparatus of claim 16 or 17, wherein the alerts have different volume for different levels of urgency.

19. The apparatus of any one of the preceding claims, wherein the apparatus is operable to: send signals wirelessly to a device residing at each of the one or more intruders to activate an alert.

20. The apparatus of any one of the preceding claims, wherein the one or more intruders are alerted through sound and lighting effects.

21. The apparatus of any one of the preceding claims, wherein the detector is a radio detection and ranging (RADAR) sensor, or a light detection and ranging (LIDAR) sensor, or a combination of both.

22. The apparatus of any one of the preceding claims, wherein the apparatus comprises a mounting bracket for mounting the image sensor, the detector, and the motor to a surface of a vehicle.

23. The apparatus of any one of the preceding claims, wherein the apparatus is operable to: send a signal to alert a user of a need to re-align the direction of extension of the designated zone with the point of interest in the case that alignment is off.

24. The apparatus of any one of the preceding claims, wherein a field of view of the image sensor is wider than a width of the designated zone.

25. The apparatus of any one of the preceding claims, wherein the designated zone has a width in a range of, a width of a lane or road, to, a width that is 50% to 100% of a width of a vehicle mounted with the image sensor and the detector.

26. The apparatus of any one of the preceding claims, wherein, in a top view of the apparatus, the designated zone has an elongate rectangle shape.

27. The apparatus of any one of claims 1 to 25, wherein, in a top view of the apparatus, the designated zone has an elongate rectangle shape that is bent according to a road or lane bend identified in the captured images.

28. A method for safety warning for a designated zone, wherein the method comprises: monitoring a designated zone and detecting one or more intruders partially or fully entering the designated zone using a detector; capturing images in real time using an image sensor; receiving, from the detector, input relating to the one or more intruders detected in the designated zone; and sending a signal to alert that the one or more intruders are detected in the designated zone, wherein the designated zone extends in a direction away from the detector, and the image sensor is mounted to the detector such that the direction of extension of the designated zone is aligned at a predetermined angle with respect to a direction of view of the image sensor, wherein the method comprises: controlling a motor to move both the image sensor and detector to align the direction of extension of the designated zone with a point of interest in the captured images.

Description:
Apparatus and Method for Safety Warning for a Designated Zone

Field of Invention

The invention relates to an apparatus and a method for safety warning for a designated zone, in particular, for monitoring and warning a user in a moving vehicle entering the designated zone and/or a user located in or near the designated zone.

Background

There are many instances in which individuals, such as construction personnel, public service/outdoor facilities maintenance personnel (e.g. plant pruning, road-sweeping, paramedics etc.), work on roads or in an area with high vehicle traffic. Consequently, these individuals are exposed to a high risk of traffic accidents.

To prevent accidents, signs may be put up to warn drivers to avoid road lanes with road works or public services/maintenance being carried out. However, even when signs are put up, there are still cases in which a vehicle driver becomes distracted and accidentally drives into a road lane with road works or public services being carried out. For instance, the driver may be distracted because he or she is communicating on a mobile phone, or having a conversation with passengers.

An existing solution to ensure the safety of the aforementioned individuals is to require them to wear or carry a device. A radar is set up at a suitable location to monitor incoming vehicles, and the individuals are alerted via the device to avoid an accident if the radar detects that a vehicle is approaching them in the area that they are working in. However, the radar requires a long time to set up and align, and only a trained person is able to do it. Furthermore, there is a high cost involved to implement such a system, which must send alerting signals to many users carrying devices upon radar detection of danger.

Summary of Invention

The present invention is defined in the independent claims. Optional features of the invention are defined in the dependent claims.

Brief Description of Drawings

Embodiments of the invention will be better understood and readily apparent to one skilled in the art from the following written description, by way of example only, and in conjunction with the drawings, in which:

Figure 1 is a schematic diagram of an apparatus for alerting a vehicle in a designated zone according to an example of the present disclosure.

Figures 2A and 2B show a top view of an apparatus having an example of a designated zone of a detector according to one example of the present disclosure.

Figure 3 shows an example of an apparatus for safety warning for a designated zone being mounted on a vehicle according to one example of the present disclosure.

Figure 4A shows a front view of a monitoring unit having a specific mounting arrangement between an image sensor and a detector according to one example of the present disclosure. Figure 4B shows a side view of the monitoring unit in Figure 4A.

Figure 5 shows an example of a vehicle mounted with the apparatus of Figure 4A that is parked in an undesirable orientation.

Figure 6 shows a trigonometric relationship between a vehicle mounted with the apparatus of Figure 4A and a vehicle detected in the designated zone according to one example of the present disclosure.

Figure 7 shows an image sensor and its field of view according to one example of the present disclosure.

Figure 8 shows an example of a graphical user interface used for alignment according to one example of the present disclosure.

Figure 9 shows a designated detection zone of the apparatus of one example of the present disclosure before and after alignment.

Figure 10 illustrates the system architecture of a user device that provides the graphical user interface according to one example of the present disclosure.

Figure 11A illustrates a top view of a mounting arrangement between an image sensor and a detector according to one example of the present disclosure.

Figure 11B illustrates a top view of a mounting arrangement between an image sensor and a detector according to another example of the present disclosure.

Figure 12 shows an example of a vehicle mounted with the apparatus of Figure 4A, where a center of a lane located at a distance away from the detector is used as a point of interest for alignment purposes.

Detailed Description

Conventional detection apparatuses for work zone vehicle intrusion detection often require some form of alignment to ensure accurate detection. Setting up such detection apparatuses and aligning the detection zone is typically time consuming, difficult, and requires a trained person to perform. For example, in the case that a radio detection and ranging (RADAR or radar) sensor is used, during set up, a user may first have to obtain location data via the Global Positioning System (GPS) and/or from a satellite system, map out and align a user defined designated zone based on the obtained data, and configure the radar to monitor the designated zone. This is a problem for the deployment of such apparatuses in a temporary work zone, such as one that may be shifted frequently or only set up for a short duration of time (e.g. a work zone for plant pruning, road works on a highway, etc.). Hence, such detection apparatuses are typically installed at a permanent location and are impractical for short duration use, let alone for installation on a vehicle that is constantly on the move (e.g. road sweeper, garbage truck, etc.). Furthermore, during the time of setting up such detection apparatuses, there is a traffic accident risk to personnel present in the work zone. In addition, the alignment of the desired detection zone may go off unintentionally due to unforeseen circumstances or forces of nature, and detection accuracy is affected. In this case, realignment may delay the work schedule. Examples of an apparatus, a system and a method of the present disclosure that aim to address the aforementioned issues are described as follows.

Figure 1 illustrates one example of the present disclosure. There is provided an apparatus 100 for alerting one or more intruders, which can be a user (i.e. driver) of a vehicle, a person/animal, or an object, partially or fully entering a designated zone. The apparatus 100 comprises an image sensor 106 (or camera) for capturing images in real time. The image sensor 106 can be comprised in a camera, for instance, a video camera. Examples of the image sensor 106 include CMOS and CCD image sensors.

The term “vehicle” in the present disclosure refers to a manned or unmanned vehicle of any type that can travel on land, over or in water, or in the air. For example, a land vehicle such as a car, truck, tank, electric scooter, bicycle, electric bicycle etc., a vehicle for travelling over water such as a ship, a boat, a hovercraft, a jetski, etc., or a vehicle for travelling in the air such as an aeroplane, a helicopter, a drone etc. In the case of a manned vehicle, the alert can be in the form of sound and/or lighting effects. In the case of an unmanned vehicle, the alert can be a data signal to instruct or notify the unmanned vehicle to take safety measures, such as moving in a manner to avoid a collision or to get around the designated zone.

With reference to Figure 1, the apparatus 100 further comprises a detector 150 configured to monitor the designated zone and detect one or more intruders (including any person, object and/or vehicle) partially or fully entering and residing within the designated zone. The detector 150 is configured such that the designated zone extends or projects in a direction away from the detector 150 to reach a predetermined distance away from the detector 150. As such, the designated zone will appear elongate in shape and will be useful, in the case of road safety warning, for detection along a length of a road or a road lane, which is typically long and has the width of the road or road lane. The detector 150 may be a light detection and ranging (LiDAR) sensor, a radio detection and ranging (RADAR or radar) sensor, an ultrasonic sensor or the like. The detector 150 can also be made up of different types of sensors or a plurality of sensors, such as a combination of a LiDAR sensor and a radar sensor, or a plurality of radar sensors (short range and long range). The detector 150 may have an antenna, a transmitter, a receiver and a digital signal processing (DSP) module. In the present example, there is provided a controller (or processing unit, or processor) 110 configured to execute instructions stored in a memory (e.g. RAM, ROM) to operate the apparatus 100 to receive, from the detector 150, data (or input) relating to one or more intruders detected in the designated zone, and to send a signal to alert the users (or drivers) of the one or more intruders detected in the designated zone. The controller 110 comprises a Digital Signal Processing (DSP) module. Optionally, a signal may be sent to devices carried or worn by individuals residing close to or in the vicinity of the designated zone. The signal may also be sent to devices residing in vehicles.
Such signals may be sent wirelessly to the devices residing with the individuals and/or vehicles, for instance, via a telecommunication network, a satellite communication network, etc. The data relating to the one or more intruders detected in the designated zone may indicate, but is not limited to, one or more of: speed, distance from the detector 150, position in the designated zone, shape and/or type of the intruder (e.g. shape of a person, car, truck, motorbike etc.), orientation of the intruder (e.g. a reversing vehicle, a forward moving vehicle etc.), and object or vehicle trajectory (e.g. a vehicle moving along a straight or curved path, etc.) with respect to the apparatus 100.

If predetermined conditions for sending an alert to the one or more intruders detected in the designated zone are met, the controller 110 is configured to cause a warning sound to be played by a sound emitting device 105b and/or a warning light to be displayed by a light emitting device 105a at the one or more vehicles detected in the designated zone. For instance, if at least one vehicle is detected in the designated zone, the light emitting device 105a and/or sound emitting device 105b are activated to warn the driver of the vehicle that he/she has intruded into a work zone and that there is a risk of an accident if the driver takes no action to lower the vehicle's speed or change its steering direction. An example of the sound emitting device 105b is a long-range acoustic device (LRAD) or any device capable of generating sound greater than 120 decibels, and an example of the light emitting device 105a is a strobe warning light. Similarly, the sound emitting device 105b and/or the warning light of the light emitting device 105a can be used to alert one or more people, or animals, intruding into the designated zone.
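As a purely illustrative sketch (not part of the disclosed apparatus), the alert-triggering behaviour described above could be expressed as follows. The class name, device identifiers, and the monitored-range threshold are assumptions for illustration only:

```python
# Hypothetical sketch of the controller's alert decision: when the
# detector reports an intruder within the monitored range, activate the
# light emitting device 105a and the sound emitting device 105b.
from dataclasses import dataclass


@dataclass
class Detection:
    distance_m: float   # distance of the intruder from the detector
    speed_mps: float    # approach speed of the intruder


def should_alert(detections: list[Detection],
                 max_range_m: float = 300.0) -> bool:
    """True if any intruder inside the monitored range warrants a warning."""
    return any(d.distance_m <= max_range_m for d in detections)


def on_radar_input(detections: list[Detection]) -> list[str]:
    """Return the device actions to take for the current detector input."""
    actions = []
    if should_alert(detections):
        actions.append("strobe_on")   # light emitting device 105a
        actions.append("lrad_on")     # sound emitting device 105b
    return actions
```

In practice the predetermined conditions would combine more of the detection data (speed, trajectory, position in the zone) than this single range check.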

The designated zone extends in a direction away from the detector 150, and the image sensor 106 is mounted to the detector 150 such that the direction of extension of the designated zone is aligned at a predetermined angle with respect to a direction of view of the image sensor 106. When this predetermined angle is zero degrees, which is preferred, mounting the image sensor 106 to the detector 150 in the manner described will align the direction of extension of the designated zone and the direction of view of the image sensor 106 to face or focus in the same direction. In this arrangement, if the image sensor 106 and the detector 150 are moved, they will move together to face or focus in the same direction. In the case of road safety warning, this same direction that they face or focus can be along the length of a road or road lane, or in the direction pointing at a center of an intruder (e.g. vehicle or object) captured in the images captured by the image sensor 106.

The apparatus 100 comprises a motor 107 for moving both the image sensor 106 and the detector 150 to align the direction of extension of the designated zone with a point of interest in the captured images of the image sensor 106. For road safety warning application, the point of interest can be a point in a road or lane, such as a center of a road lane, a center of a plurality of road lanes (e.g. in the case that a few road lanes are to be monitored), a point in the road or road lane that is along an axis parallel with lane and/or road markings, or a center of a vehicle or object captured in the images of the image sensor 106.
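One possible way to derive a motor command from the captured images, sketched here under the assumption of a simple pinhole camera model (the field-of-view value, image resolution, and function name are illustrative, not taken from the disclosure):

```python
# Hypothetical sketch: convert the horizontal pixel position of a point
# of interest (e.g. a lane center) into a pan angle for the motor 107,
# so that the direction of extension of the designated zone can be
# turned toward that point. Assumes a pinhole camera model.
import math


def pan_angle_to_point(px_x: float, image_width: int,
                       horizontal_fov_deg: float) -> float:
    """Angle (degrees) from the camera's central axis to pixel column px_x.

    Positive means the point of interest lies to the right of center,
    so the motor should pan right by this amount; negative means left.
    """
    half_width = image_width / 2.0
    # Focal length in pixels, derived from the horizontal field of view.
    f_px = half_width / math.tan(math.radians(horizontal_fov_deg / 2.0))
    return math.degrees(math.atan((px_x - half_width) / f_px))
```

For example, with a 1920-pixel-wide image and a 90 degree horizontal field of view, a point at the image center yields a zero pan angle, and a point at the right edge yields 45 degrees.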

The apparatus 100 may optionally comprise a display 200, which can be an LCD monitor, an LED monitor, and the like, and optionally the display 200 may be a touchscreen. This display 200 can be for displaying a graphical user interface for a user to control and/or manipulate the settings of the motor 107, the detector 150 and/or the image sensor 106. The display 200 can be configured to display the images captured by the image sensor 106.

The apparatus 100 may also comprise one or more user control mechanisms (not shown in Figure 1 ) for controlling and/or manipulating the settings of the motor 107, the detector 150 and/or the image sensor 106. The one or more user control mechanisms may be used to control movements of the motor to align the direction of extension of the designated zone with the point of interest. In the case that the display 200 is a touchscreen, the one or more user control mechanisms may be graphically displayed on a screen of the display 200 to enable a user to control movements of the motor 107 via the touchscreen.

Further discussion on the direction of extension of the designated zone, the direction of view of the image sensor 106, the predetermined angle, and the point of interest will be provided below. With reference to Figure 7, an example of the camera view 700 (pyramidal shape) of the image sensor 106 of Figure 1 is shown. The image sensor 106 can have a field of view defined as a maximum area of an image that the image sensor 106 can capture. In a camera, the field of view is typically in the form of an image plane 702 that is quadrilateral in shape with 4 right angle corners (e.g. square or rectangle). The image plane 702 may have an aspect ratio of, for instance, 1:1, 2:1, 3:2, 4:3, 16:9 etc. Figure 7 shows the image plane 702 having an aspect ratio of about 16:9. The image plane 702 has boundaries defined by a horizontal FOV 704 and a vertical FOV 706. A central horizontal axis 710 of the FOV that is orthogonal to the vertical FOV 706, a central vertical axis 712 of the FOV that is orthogonal to the horizontal FOV 704, and a central axis 708 (which typically corresponds with an optical axis of a lens mounted with the image sensor 106 in a camera) are orthogonal to one another and correspond to the x, y, and z axes respectively. There is an intersection point 714 (also called the center of the FOV 714 of the image sensor 106) between the central horizontal axis 710, the central vertical axis 712, and the central axis 708. In the case that the predetermined angle between the direction of view of the image sensor 106 and the direction of extension of the designated zone is set as zero degrees, their alignment can be performed such that the intersection point 714, the central axis 708 and/or the central vertical axis 712 intersect an axis (or line) representing the direction of extension of the designated zone.
In one example, the direction of view of the image sensor 106 is along the central axis 708 and can be represented by the central axis 708.
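The alignment condition above, where the point of interest should fall on the central vertical axis 712 of the image plane, can be sketched as a simple pixel test. The tolerance value and function name are assumptions for illustration:

```python
# Hypothetical sketch: the apparatus is aligned when the point of
# interest's pixel column coincides (within a tolerance) with the
# central vertical axis 712, i.e. the vertical center line of the image.
def is_aligned(px_x: float, image_width: int, tol_px: float = 5.0) -> bool:
    """True when the point of interest sits on the center line of the image."""
    return abs(px_x - image_width / 2.0) <= tol_px
```

This corresponds to the vertical line at the center of the display described for claims 3 to 5: when the point of interest lies on that line, no further motor movement is needed.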

With reference to Figures 2A and 2B, in one example, the detector 150 is a radar and has a radar Field of View (FOV) 154. The radar FOV 154 is represented by the shape of an ellipse with a center A in a top view of the apparatus (100 in Figure 1). The main lobe or main beam of the detector 150 defines the radar FOV 154. The maximum width and length of the elliptical shape of the radar FOV 154 are shown in Figures 2A and 2B. In the present example, there is one designated zone 156 located within the radar FOV 154 and extending in a direction away from the detector 150. The designated zone 156 is substantially rectangular in the top view of the apparatus 100 and has a substantially longer length compared to its width. The size of the radar FOV 154 is much larger than the size of the designated zone 156. The detector 150 is configured to detect and track one or more intruders (including objects and/or vehicles; objects include people and/or animals) in the designated zone 156. The designated zone 156 is elongate and is useful for detection of one or more intruders (including objects and/or vehicles; objects include people and/or animals) in a road or road lane. As an example, for road safety warning application, the length of the designated zone 156 may be set between 50 to 300 metres. The width of the designated zone 156 may be the width of the vehicle on which the detector 150 is mounted, or smaller than or equal to the width of a road or road lane. In the present example, the longitudinal axis of symmetry 153 of the designated zone 156 extends along the direction of extension of the designated zone 156. Hence, the portion of the axis 153 in front of the detector 150 is representative of the direction of extension of the designated zone 156 and can be used in the calculation to align with the point of interest.
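A minimal sketch of the rectangular zone geometry, assuming detector-centered coordinates (x across the road, z along the direction of extension); the particular length and width values are examples consistent with the ranges stated above, not values fixed by the disclosure:

```python
# Hypothetical sketch: test whether a radar return at (x, z) falls
# inside the elongate rectangular designated zone 156, which extends
# from the detector along the +z direction.
def in_designated_zone(x_m: float, z_m: float,
                       zone_length_m: float = 150.0,
                       zone_width_m: float = 3.0) -> bool:
    """True if the point lies within the rectangle extending from the detector."""
    return 0.0 <= z_m <= zone_length_m and abs(x_m) <= zone_width_m / 2.0
```

A detection well inside the much larger elliptical radar FOV 154 but outside this rectangle would therefore be ignored for warning purposes.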

In the case that the detector 150 is a LiDAR sensor, the LiDAR sensor can be configured such that the designated zone is a region within the field of view of the LiDAR sensor. The field of view of the LiDAR sensor resembles that of the image sensor 106 that is described earlier with reference to Figure 7. The LiDAR sensor can be configured such that the direction of extension of the designated zone can be represented by the central axis 708 and/or the center of the FOV 714 in Figure 7. The central axis 708, the center of the FOV 714 and/or the central vertical axis 712 of the LiDAR can be used in the calculation required to align with the point of interest.

Furthermore, when mounting the image sensor 106 to the detector 150, the direction of extension of the designated zone and the direction of view of the image sensor 106 may be aligned such that the central axes 708 of the LiDAR sensor and the image sensor 106 are parallel or aligned at a predetermined angle relative to each other, and/or the central axis 708 of the image sensor 106 may be parallel or aligned at a predetermined angle with respect to the axis 153 of the designated zone 156. With reference to the example that the central horizontal axis 710, the central vertical axis 712 and the central axis 708 correspond to x, y, and z axes respectively, the predetermined angle can be an angle viewable in a top view i.e. the x-z plane of the image sensor 106 and the detector 150, and/or an angle viewable in a side view i.e. the y-z plane of the image sensor 106 and the detector 150. As mentioned earlier, the predetermined angle is preferably zero degrees in both the x-z plane and the y-z plane.

Figures 11A and 11B show two examples of how the image sensor 106 can be mounted on top of or below the detector 150. The drawings shown in Figures 11A and 11B correspond to views in the x-z plane described above. The reference numerals in earlier figures are reused for the same elements in Figures 11A and 11B. In both Figures 11A and 11B, the detector 150 is configured to monitor a designated zone 156 (in the present example, rectangular in shape) extending from the detector 150 and overlapping with the camera view 700 of the image sensor 106. In Figure 11A, the image sensor 106 is mounted to the detector 150 such that the longitudinal axis of symmetry 153 of the designated zone 156 is parallel to the central axis 708 of the image sensor 106 and the predetermined angle in the x-z plane is zero degrees. Hence, in the example of Figure 11A, the predetermined angle will always remain zero degrees when the image sensor 106 and the detector 150 are moved together by the motor 107. In Figure 11B, the longitudinal axis of symmetry 153 of the designated zone 156 is at a predetermined angle W with respect to the central axis 708 of the image sensor 106. In the example of Figure 11B, the predetermined angle W will always remain at the same degree when the image sensor 106 and the detector 150 are moved together by the motor 107.

The designated zone described earlier, including the designated zone 156, or the designated zone of the above LiDAR example, can be divided or segmented to have different detection areas. For example, with reference to Figures 2A and 2B, the designated zone 156 comprises two elongate rectangular detection areas 151 and 152. In road warning applications, the designated zone 156 may be configured to have a width in a range from a width equal to or smaller than the width of a road lane or a road, to a width that is 50% to 100% of the width of the vehicle mounted with the detector 150. However, the designated zone should preferably have a width that is smaller than the width of the road lane or road. For instance, preferably, the designated zone 156 may have a width that is within 50% to 100% of the width of the vehicle mounted with the detector 150. More preferably, the designated zone 156 may have a width that is within 75% to 100% of the width of the vehicle mounted with the detector 150. The parameters for monitoring and detection of one or more intruders (including objects and/or vehicles; objects include people and/or animals) in these areas 151 and 152 can be different. For example, the parameters can be the sizes of these detection areas 151 and 152, the speed of an object or vehicle that triggers sending of a signal to alert a user in the areas, the distances of these detection areas or of the object or vehicle away from the apparatus 100, both the speed of the object or vehicle and the distance of the object or vehicle away from the apparatus 100, etc. Furthermore, the alerts sent to users can be different between the detection areas 151 and 152. Different alerts can indicate different levels of urgency and different levels of traffic accident risk to a user (or driver) of a vehicle.
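The segmentation of the zone into the two detection areas could be sketched as below; the boundary distance and overall zone length are assumed values for illustration only:

```python
# Hypothetical sketch: classify a detection by its distance from the
# detector into the near detection area 151 or the far detection area
# 152, so each area can carry its own alert parameters.
from typing import Optional


def detection_area(distance_m: float,
                   near_far_boundary_m: float = 100.0,
                   zone_length_m: float = 300.0) -> Optional[str]:
    """Return '151' (near area), '152' (far area), or None if outside the zone."""
    if 0.0 <= distance_m < near_far_boundary_m:
        return "151"
    if near_far_boundary_m <= distance_m <= zone_length_m:
        return "152"
    return None
```

The per-area alert parameters (tempo, volume, flashing frequency) described next would then be looked up from the returned area identifier.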

For example, if the alerts are sound alerts, an alert sent to a user of a vehicle detected in the detection area 152 that is further away from the apparatus 100 may have a lower tempo than an alert sent to a user of a vehicle detected in the detection area 151 that is closer to the apparatus 100. A sound alert sent to a user of a vehicle detected in the detection area 152 that is further away from the apparatus 100 may have a higher volume than a sound alert sent to a user of a vehicle detected in the detection area 151 that is closer to the apparatus 100, so that the user further away can hear the sound alert. A light alert sent to a user of a vehicle detected in the detection area 152 that is further away from the apparatus 100 may have a lower flashing frequency and/or be less bright than a light alert sent to a user of a vehicle detected in the detection area 151 that is closer to the apparatus 100. Lower frequency sounds tend to reach further distances better; hence, in some examples, the sound alert can be configured to project sound with lower frequencies. Lower tempo and/or volume suggests a lower level of urgency and traffic risk, and would also cause less disturbance to a user (or driver) of a vehicle. On the other hand, higher tempo and/or volume suggest a higher level of urgency and traffic risk. In another example, the speed of the object or vehicle may be considered, alone or in combination with the distance of the detection areas or of the object or vehicle from the apparatus 100, to differentiate the alerts to be sent. For instance, the higher the speed and the closer the distance to the apparatus 100, the higher the level of urgency and traffic risk; the lower the speed and the further the distance from the apparatus 100, the lower the level of urgency and traffic risk. When the speed and distance are considered together, it should be noted that, generally, the faster a vehicle, the longer its stopping distance.
Hence, if the speed of a vehicle is high enough, an alert of a higher level of urgency may be sent even though the distance to the apparatus 100 is long, because the stopping distance of the vehicle is expected to be long.
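The combined speed-and-distance rule above can be sketched by comparing an estimated stopping distance against the vehicle's current distance from the apparatus. The stopping-distance model (reaction distance plus braking distance) and the margin thresholds below are illustrative assumptions, not part of this disclosure.

```python
def urgency_level(speed_ms: float, distance_m: float,
                  reaction_s: float = 1.5, decel_ms2: float = 5.0) -> str:
    """Classify alert urgency from an approaching vehicle's speed and distance.

    Assumed model: stopping distance = reaction distance + braking distance,
    i.e. v * t_reaction + v^2 / (2 * deceleration). The thresholds below are
    hypothetical tuning values.
    """
    stopping_m = speed_ms * reaction_s + speed_ms ** 2 / (2.0 * decel_ms2)
    margin = distance_m - stopping_m  # room left after the vehicle could stop
    if margin < 0:
        return "high"      # vehicle cannot stop in time: most urgent alert
    if margin < 30:
        return "medium"
    return "low"

# A fast vehicle far away can still warrant a high-urgency alert,
# because its stopping distance exceeds its distance to the apparatus.
level = urgency_level(speed_ms=30.0, distance_m=100.0)
```

This reproduces the observation in the text: at 30 m/s the assumed stopping distance (135 m) exceeds the 100 m gap, so the higher-urgency alert is sent despite the long distance.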

In one example, the apparatus 100 can be configured to allow a user to adjust the length and width (i.e. size) of the designated zone 156 and/or the sizes of the detection areas 151 and/or 152.

Figures 4A and 4B illustrate an example of a monitoring unit 400 (or monitoring mechanism/device) of the apparatus 100 of Figure 1. The following discussion with reference to Figures 4A and 4B will also refer to elements in Figures 1, 7, 2A and 2B. The monitoring unit 400 comprises the image sensor 106, the detector 150 and the motor 107 described earlier. Figure 4A shows a front view of the apparatus 100 and Figure 4B shows a side view of the monitoring unit 400. They show how the image sensor 106 of Figure 1 is mounted to the detector 150 and how both the image sensor 106 and the detector 150 are mounted to the motor 107. The detector 150 has the radar sensor described earlier with reference to Figures 2A and 2B, and, in this example, the image sensor 106 is part of a camera that is capable of taking video. Furthermore, in this example, the image sensor 106 is mounted on top of the detector 150, and the portions of the central axis 708 and the central vertical axis 712 in front of the image sensor 106 are representative of the direction of view of the image sensor 106. They are aligned with the axis 153 of the designated zone 156, which is representative of the direction of extension of the designated zone 156. In this example, the predetermined angle between the central axis 708 and the axis 153 is set at zero degrees in both the x-z plane and the y-z plane described earlier. The image sensor 106 is mounted such that it is fixed in position relative to the detector 150. In the present example, the central axis 708 of the image sensor 106 and the axis 153 of the designated zone 156 are parallel to each other, and the image sensor 106 faces the direction of extension (or projection) of the designated zone 156 from the radar sensor. The detector 150 is mounted to the motor 107 via a mounting plate or bracket 115 so that when the mounting plate or bracket 115 is moved, the image sensor 106 and the detector 150 move together.
The motor 107 comprises a shaft 109 with one end extending from the motor 107 and an opposite end coupled to the mounting plate or bracket 115. The motor 107 is controllable through the controller 110 to rotate the shaft 109 about a vertical axis 402 to move the detector 150 and the image sensor 106 together to align the direction of extension of the designated zone 156 with the point of interest 155 in the real time images captured by the image sensor 106.

Figure 3 shows how the monitoring unit 400 of Figures 4A and 4B can be mounted to a vehicle 120. In the example of Figure 3, the vehicle 120 is a truck and the rear view of the truck is shown. The monitoring unit 400 is mounted on top of the vehicle 120. Specifically, the monitoring unit 400 is mounted on top of the roof, at the highest point of the vehicle 120, and at a central location on top of the vehicle. The monitoring unit 400 in Figure 3 is orientated such that the direction of view of the image sensor 106 and the direction of extension of the designated zone by the detector 150 are at the rear of the vehicle 120. A center line 123 is drawn in Figure 3 to illustrate the central location on top of the vehicle at which the monitoring unit 400 is mounted. The image sensor 106 and the detector 150 can be configured to be rotatable 360 degrees (or limited to a range of rotation angles) by the motor 107. They can be rotated such that the direction of view of the image sensor 106 and the direction of extension of the designated zone 156 in Figures 2A and 2B face or focus in a predetermined direction. As the direction of extension of the designated zone 156 can be adjusted, there is no need to adjust the positioning of the vehicle 120 during the setting up of the apparatus 100 for operation.

Figure 5 shows the top view of the vehicle 120 of Figure 3 parked in a road lane 500 with boundaries marked out by lane markings M1 and M2. The monitoring unit 400 of Figures 4A and 4B is mounted at a central location on the top of the vehicle 120 and a sound and/or lighting alerting device 105 is mounted at a rear side of the vehicle 120. In another example, the device 105 can be mounted to the monitoring unit 400. The device 105 is in data communication with the controller 110 of the apparatus 100. The controller 110 is configured to send signals to the device 105 to sound and/or flash an alert to one or more objects (e.g. humans/animals), and/or one or more users in the one or more vehicles detected in a designated zone 156 monitored by the detector 150. The designated zone 156 extends in a direction away from the detector 150. A road lane width 162 is marked out in Figure 5 and a central longitudinal axis 157 of the road lane 500 is shown. The vehicle 120 is not parked perfectly parallel to the lane markings M1 and M2, which are dashed lines marked out in a straight line. Hence, the monitoring unit 400 has to be adjusted to move the image sensor 106 and the detector 150 together to align a direction of extension of the designated zone 156 with a point of interest 155. The point of interest 155 is a point captured in one or more images of the image sensor 106. In this case, the point of interest 155 happens to be a point midway across the width of the lane 500, at the rear of the vehicle 120, and along the longitudinal axis 157 that is captured in the field of view of the image sensor 106. In this example, the point of interest 155 is selected either manually by a user through a graphical user interface or automatically computed by a computer, processor and/or processing unit.
The point of interest 155 should be selected to ensure that the designated zone 156 covers as much as possible of the portion of the lane 500 that is furthest away from the detector 150. In another example, more than one point of interest 155 can be selected or identified in the one or more images of the image sensor 106 to be aligned with the direction of extension of the designated zone 156. For instance, at least two points of interest 155 can be selected to plot a line, to which the axis 153 is to be made parallel.

Figure 5 shows the orientation of the image sensor 106 and the detector 150 before being adjusted by the motor 107. The detector 150 is configured to monitor and detect within the designated zone 156, and the axis 153 of the designated zone 156 is misaligned with the point of interest 155. If the orientation of the image sensor 106 and the detector 150 is not adjusted, the designated zone 156 will extend in a direction that does not cover, or covers very little of, the portion of the lane 500 that is furthest away from the detector 150, and the detection range in the lane 500 will be undesirably short. The longer the detection coverage of the lane 500, the earlier an alert can be sent; for a detected vehicle, the driver then has more reaction time and there is a longer available stopping distance to avoid collision. Furthermore, in some instances, if not adjusted, the detection range may extend into an area that should not be monitored, like an adjacent lane, which may cause false alarms. Hence, the orientation of the image sensor 106 and the detector 150 has to be adjusted to align the axis 153 of the designated zone 156 with the point of interest 155. Specifically, in this case, the motor 107 should be controlled to tilt or rotate the image sensor 106 and the detector 150 by substantially an angle β formed between the major axis 153 and the central longitudinal axis 157 of the lane 500, such that the axis 153 becomes parallel with the central longitudinal axis 157.

Generally, for a straight lane like the lane 500, regardless of whether the vehicle 120 is parked off center in the lane 500, the point of interest 155 should be selected such that the axis 153 is parallel to the longitudinal axis 157 of the lane 500. This will ensure that the designated zone 156 covers as much of the lane 500 behind the vehicle 120 as possible, and as far along the lane 500 behind the vehicle 120 as possible. For a bending lane (or road), regardless of whether the vehicle 120 is parked off center in the lane 500, the point of interest 155 should be selected such that all or most of the designated zone 156 covers the lane shown in the image or images captured by the image sensor 106, and such that the axis 153 is centralized as much as possible within the width of the lane 500.

As mentioned above, the process of controlling the motor 107 to align the direction of extension of the designated zone 156 with the point of interest 155 captured by the image sensor 106 can be performed manually by a user and/or be automated.

Figure 6 shows the vehicle 120 in Figure 5 mounted with the apparatus 100 of Figure 1, and a vehicle 125 at the rear of the vehicle 120. The reference numerals used in earlier figures are re-used in Figure 6 to label the same elements. The vehicle 120 is at a distance b away from the vehicle 125. The vehicle 120 may be stationary or moving, and the vehicle 125 may be approaching the vehicle 120. In this example, the vehicle 120 is a vehicle that has stopped in a lane along the highway to perform some road works. The point of interest in this example is the center of the front of the vehicle 125, which can be captured by the image sensor 106 but is not covered by the designated zone 156. In this example, the field of view (or angle of view) of the image sensor 106 is wider than the width of the designated zone 156. The vehicle 120 is parked such that the direction of extension of the designated zone 156 does not cover enough of the lane. The designated zone 156 is rectangular in shape (in a top view of the apparatus) and its longitudinal axis of symmetry 153 is at an angle α relative to a longitudinal axis of the approaching vehicle 125. This angle α can be regarded as an angle of misalignment. It is noted that in another example, it is possible that the designated zone 156 has a non-rectangular shape in a top view of the apparatus 100, such as an elongate shape (e.g. an S, U, W and/or C shape and the like) that is bent according to road or lane bends identified (e.g. via computer vision techniques and/or via a machine trained by machine learning techniques) in the captured images of the image sensor 106. The shape of the designated zone 156 may also be oval, triangular, circular or any other shape desirable to the user for monitoring a specific zone.

In Figure 6, the shortest distance between the front of the vehicle 125 and a detection point of the designated zone 156 that is furthest from the vehicle 120 is marked as a. It should be appreciated that for the same angle of misalignment α, the longer the distance b, the longer the distance a will be. This observation is illustrated in the three examples below.

1) In the case of angle α = 5 degrees and distance b = 50 meters: a = tan 5° × 50 meters = 4.374 meters

2) In the case of angle α = 5 degrees and distance b = 100 meters: a = tan 5° × 100 meters = 8.749 meters

3) In the case of angle α = 5 degrees and distance b = 150 meters: a = tan 5° × 150 meters = 13.123 meters
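The three worked examples follow directly from the relation a = tan(α) × b, as a short sketch confirms:

```python
import math

def lateral_offset_m(misalign_deg: float, distance_m: float) -> float:
    """Gap a between the point of interest and the far end of the
    misaligned designated zone: a = tan(alpha) * b."""
    return math.tan(math.radians(misalign_deg)) * distance_m

# Reproduces the three worked examples for a 5-degree misalignment.
for b in (50, 100, 150):
    print(f"b = {b:3d} m -> a = {lateral_offset_m(5.0, b):.3f} m")
```

The linear growth of a with b is why even a small misalignment angle matters over the long detection ranges involved.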

Furthermore, in some cases, the shorter the distance a, the less the intrusion into the adjacent lane compared to having a longer distance a.

In the case of automated alignment, the apparatus 100 is configured to automatically detect a point of interest present in the images captured by the image sensor 106, which in the above example is the center of the approaching vehicle 125. In another example, the point of interest can also be, for instance, a point in a lane or road which, when aligned with the axis 153, would make the axis 153 parallel to the lane or road in the case of a straight lane or road, or would make the designated zone 156 cover as much of the lane or road as possible and be centralised as much as possible relative to the width of the lane or road. The image sensor 106 should be configured to have a field of view (or angle of view) wider than the width of the designated zone 156 (e.g. by using a wide angle lens) so as to facilitate calculation of the values needed to compensate for the misalignment angle α (or the misalignment angle β in the example of Figure 5). Having such a wider field of view for the camera relative to the size of the designated zone 156 is also preferred in other examples of the present disclosure. In particular, in the case of a road safety warning to be implemented for just one lane of a road with a plurality of lanes, the image sensor 106 should be configured to capture a field of view wider than both the lane and the designated zone 156.

Location data of the point of interest can be determined based on image processing techniques conducted on the images captured by the image sensor 106 and/or via use of a neural network to predict as output the location of the point of interest from the images captured by the image sensor 106, wherein extracted parameters and/or features relating to the captured images are inputted to the neural network.

In the case of the use of a neural network, its prediction model can be trained using data from many images, and there can be a system in which the controller 110 (or processing unit or processor) of the apparatus 100 is able to provide data of images captured by the image sensor 106 during operation for the continuous training of the neural network. Deep learning may be involved. For example, in the case of a road safety warning application, the training images can include images showing different working conditions and/or environments (e.g. different weather conditions, lighting conditions, terrain conditions etc.), images showing different road layouts, etc. Images captured by the apparatus 100 during operation may be stored and used as training images to continuously improve the prediction model of the neural network.

In the case of the use of image processing techniques, and in the case of a road safety warning application, location data of the point of interest can be obtained by processing the captured images of the image sensor 106 to identify features for calculation of the location of the point of interest. For example, lane markings, road pavement, road dividers, road curbs, road markings, signboards, one or more vehicles present in a road or a lane of the road, etc. can be features used to calculate the location of the point of interest. In one example, if the point of interest is the center of a lane or road, its location can be determined by identifying the road or lane markings that are on the left and right boundaries of the road or lane. Once the left and right boundaries of the road or lane are identified, the location data (pixel location) of the center of the road or lane can be obtained. Other boundary features that can be identified include a road divider and/or a road curb. In another example, if an approaching vehicle is captured in the images of the image sensor, the center of the front of the vehicle can be the point of interest to be aligned with the direction of extension of the designated zone.
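Once the left and right boundaries are identified, obtaining the pixel location of the lane center reduces to a midpoint calculation. The sketch below assumes the boundary x-positions are already extracted (e.g. by edge detection and a line-fitting step, which is outside its scope); the function name is illustrative.

```python
def lane_center_x(left_boundary_px: int, right_boundary_px: int) -> int:
    """Pixel x-coordinate of the lane center, given the x-positions of the
    left and right lane markings detected in one image row.

    A real pipeline would first detect the markings (e.g. edge detection
    plus a Hough transform); here the boundaries are assumed as inputs.
    """
    return (left_boundary_px + right_boundary_px) // 2

# Hypothetical example: markings detected at x = 220 and x = 1060
# in a 1280-pixel-wide frame; the point of interest column is their midpoint.
center = lane_center_x(220, 1060)
```

Repeating this per image row yields a set of center points along the lane, from which one or more points of interest can be chosen.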

In another example, the controller 110 of the apparatus 100 may consider readings or input relating to the orientation of the vehicle 120 obtained from sensors other than the image sensor 106 and the detector 150, such as a vehicle steering-angle sensor, to calculate the values needed to compensate for the examples of misalignment described above.

An example illustrating manual alignment by a user with help of a displayed graphical user interface is discussed below with reference to Figure 8. It should be appreciated that manual alignment can be implemented together with automated alignment or they may be implemented alone. If they are implemented together, manual alignment can be used if automated alignment is not sufficiently accurate, and vice versa.

Figure 8 shows the display 200 of Figure 1. The reference numerals in earlier figures are reused in Figure 8 for the same elements. The display 200 has a screen showing images captured by the image sensor 106. In Figure 8, the screen shows a road lane 802 in the images captured by the image sensor 106 in real time. The road lane 802 has lane markings 206 marking out the left and right boundaries of the road lane 802. In this example, the point of interest happens to be the center of the road lane 802. Optionally, the display 200 can comprise a physical visual marker 804 to guide a user to align the direction of extension of the designated zone with the point of interest. The physical visual marker 804 can be a sticker or a permanent mark provided on the display 200. In Figure 8, the physical visual marker 804 is a triangle located at the top of the screen of the display 200 and an apex of the triangle points to the center of the display 200. The physical visual marker 804 indicates to the user that he/she has to control the motor 107 to move both the image sensor 106 and the detector 150 so that the point of interest (i.e. the center of the road lane 802) that the user can see on the screen aligns with the center of the display 200. Once this is done, the direction of extension of the designated zone will be aligned with the point of interest. The physical visual marker 804 can also be in the form of a vertical straight line provided at the center of the display 200.

Optionally, the display 200 may be configured to display a graphical representation to guide a user to align the direction of extension of the designated zone with the point of interest. For example, in Figure 8, a graphical representation in the form of a vertical line (or vertical axis) 207 is located at the center of the display. Similar to the physical visual marker 804, this line 207 indicates to the user that he/she has to control the motor 107 to move both the image sensor 106 and the detector 150 so that the point of interest (i.e. the center of the road lane 802) that the user can see on the screen aligns with the center of the display 200. Once this is done, the direction of extension of the designated zone will be aligned with the point of interest.

In another example, a graphical representation of lane markings laid over the lane markings 206 may be provided to guide a user in making the alignment. This can be provided in addition to the vertical line 207 or can replace the vertical line 207.

The physical visual marker 804 or the graphical representation may comprise one or more lines indicative of a boundary or a portion of the boundary of the designated zone. In the example of Figure 8, the designated zone can be the designated zone 156 of Figures 2A and 2B, and there is provided a curved line 806 representative of a line that has undergone barrel distortion by the optical lens of a camera having the image sensor 106. Barrel distortion can be especially prominent at the boundaries of the field of view of the image sensor 106. This curved line 806 may or may not be joined to the vertical line 207. This curved line 806 can be used during the manufacturing of the monitoring unit 400 to help a production worker or machine to align the image sensor 106 when the camera having the image sensor 106 is mounted to the detector 150. As mentioned earlier, the image sensor 106 is to be mounted to the detector 150 such that the direction of extension of the designated zone 156 is aligned at a predetermined angle with respect to a direction of view of the image sensor 106. In the example of Figure 8, the predetermined angle is zero degrees and the image sensor 106 is to be mounted on top of the detector 150. The detector 150 in this example has a box-like casing or housing and one of its edges will be captured in the images of the image sensor 106 at the bottom of the screen. The edge captured in the images is a straight edge resembling a line in reality, but in the captured images this straight edge will appear curved due to the barrel distortion of the lens. Hence, during the alignment of the image sensor 106 with the detector 150 at the time of manufacturing, the production worker or machine can make use of the curved line 806 to align with the curved representation of the straight edge shown in the captured images, to ensure that the image sensor 106 and the detector 150 are aligned as intended.

Another use of the curved line 806 is that in the case of automated alignment with the point or points of interest 155, the curved line 806 can be an image feature used to compute barrel distortion compensation when the images of the image sensor 106 are processed to obtain one or more points of interest 155 in the images to be aligned with the direction of extension of the designated zone 156.

The display 200 can be configured to graphically display one or more user control mechanisms to enable a user to control the movements of the motor 107 manually. For example, in Figure 8, a left arrow 210a and a right arrow 210b are provided for a user. They are clickable to control the motor 107 to tilt or rotate the direction of view of the image sensor 106 left or right respectively. If the display 200 is a touchscreen, the arrows 210a and 210b can be clicked (tapped) via the touchscreen.

In one example, the controller 110 of Figure 1 and the display 200 may be configured to enable a user to make use of a graphical user interface displayed on the display 200 to adjust the size (length and width), dimensions, shape, number of divided detection areas with different alerts, and detection range of the designated zone. The type of alert to activate may also be adjusted. For example, the size of the designated zone may be adjusted to cover a larger and/or wider range, or a shorter and/or narrower range. For example, the width of the designated zone may be adjusted to cover more than the width of one road lane, as there could be instances in which road works are to be performed on more than one lane of a road.

The one or more user control mechanisms for controlling the motor 107 for alignment, the displaying of the images of the image sensor 106 on the screen of the display 200, and the one or more markers or graphical representations provided to guide a user to perform the alignment help a user (or operator) to align the designated zone with the point of interest quickly and accurately. These features provide a user-friendly solution compared to conventional systems, which have a time-consuming and difficult set-up process that requires a trained person. These features allow a safety detection zone, for example at the rear of a vehicle or at a location installed with the monitoring unit 400 of Figures 4A and 4B, to be set up very quickly.

Figure 9 illustrates an example of a process to align the direction of extension of the designated zone 156 with more than one point of interest present in the images captured by the image sensor 106, i.e. points 155, 902 and 904 in Figure 9. The reference numerals in earlier figures are re-used in Figure 9 for the same elements. Figure 9 shows the vehicle 120 moving or being parked imperfectly within a straight road lane 500 having lane markings M1 and M2. The vehicle 120 is mounted with the monitoring unit 400. Figure 9 shows the positioning of the misaligned designated zone 156 being adjusted to assume the positioning of the desired designated zone (denoted by reference numeral 225). The motor 107 is controlled to move the detector 150 and the image sensor 106 together, which in turn shifts the direction of extension of the designated zone 156.

In the case of manual alignment using the display 200 of Figure 8, an operator can find the point of interest 904 through visual inspection of the left and right lane markings M1 and M2 and the lane width 162 in the images captured by the image sensor 106 that are displayed on the display 200. Once found, the operator can align the physical visual marker 804 or the vertical line 207 with the point of interest 904 (the center of the lane 500) by clicking the left and right arrows 210a and 210b to rotate the image sensor 106 and the detector 150. Once aligned, the designated zone 156 will be adjusted into the position of the desired designated zone 225, wherein the axis 153 of the designated zone is substantially parallel to the left and right lane markings M1 and M2.

In the case of automated alignment using image processing, in one example, the controller 110 of the apparatus 100 can process the captured images of the image sensor 106 to identify the left and right lane markings M1 and M2. In this example, the lane markings M1 and M2 are straight and parallel, and they are checked to determine whether they are bending or straight before the processing for a straight lane is performed. After it is determined that the lane 500 is straight, two points of interest 155 and 902 located on, for instance, M1 can be plotted in the captured images of the image sensor 106 (in another example, two points of interest on M2 may be selected). A straight line can be overlaid to link up the two points of interest 155 and 902. This straight line will be parallel to M1 and M2. Thereafter, image processing is conducted to determine the compensation required to make the longitudinal axis of symmetry 153 of the designated zone parallel with this straight line. The vertical line 207 described in Figure 8, which represents the axis 153, can be used. It can take the form of a generated image feature or a virtual line with pixel locations that can be used to calculate the compensation. Firstly, a distance d1 from the vertical line 207 to point 902 can be determined from the pixel locations of the vertical line 207 and point 902, and a distance d2 from the vertical line 207 to point 155 can be determined from the pixel locations of the vertical line 207 and point 155. With the points 902 and 155 and the straight line that links them up remaining static, the vertical line 207 should be adjusted graphically via image processing until the values of d1 and d2 are equal, because when they are equal, the vertical line 207, i.e. the axis 153, will be parallel to the straight line.
An angle of compensation (or angle of correction) 165 can be determined from the image adjustment made to make d1 and d2 equal, and the motor 107 can be moved according to this angle of compensation 165 to align the direction of extension of the designated zone 156 so that it is parallel to the straight line that joins the points 902 and 155. In this manner, the direction of extension of the designated zone 156 is aligned with the points of interest 902 and 155. This example illustrates that alignment with a point of interest does not necessarily mean that the axis 153 has to intersect the point of interest. Positional alignment of the direction of extension of the designated zone 156 relative to the points of interest, such as to achieve the parallel arrangement in this example, also constitutes alignment.
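In the image plane, the compensation described above amounts to finding the angle between the vertical line (representing the axis 153) and the line through the two points of interest. The sketch below computes that angle directly from pixel coordinates; treating an image-plane angle as the motor rotation angle is a simplifying assumption, since a real system would also account for camera projection and lens distortion, and the pixel values are hypothetical.

```python
import math

def compensation_angle_deg(p_near: tuple, p_far: tuple) -> float:
    """Angle between a vertical line in the image and the line joining two
    points of interest plotted on a lane marking.

    Pixel coordinates are (x, y) with y increasing downwards; the nearer
    point sits lower in the image (larger y) than the farther point.
    """
    dx = p_far[0] - p_near[0]
    dy = p_near[1] - p_far[1]  # far point is higher in the image (smaller y)
    return math.degrees(math.atan2(dx, dy))

# Hypothetical pixel locations of points 155 (near) and 902 (far) on M1.
angle = compensation_angle_deg((400, 700), (450, 300))
```

When the computed angle is zero, d1 and d2 in the description above are equal and the vertical line is already parallel to the lane marking line.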

Referring back to Figure 1, the controller 110 of the apparatus 100 may be configured to save the aforementioned angle of compensation corresponding to each or selected frames of the images captured by the image sensor 106, along with each of those frames, in a database or data vault (e.g. local storage space or cloud data storage) as a log or record, or for machine learning purposes. A neural network may be trained with such images and be used for automated alignment or be used in conjunction with the abovementioned image processing method. The log or record can also be used to handle exceptions in which the automated alignment has errors or has failed. For instance, in the case that anomalies are discovered in the log or record, a notification may appear on the display 200 to advise an operator of the need to manually re-align the direction of extension of the designated zone 156 with the one or more points of interest.

In another example, in the case that a neural network is used for automated alignment or used in conjunction with the abovementioned image processing method, the neural network may be configured to predict the one or more points of interest from the captured images of the image sensor 106 so that the computation to align the direction of extension of the designated zone 156 with the one or more points of interest can be done. The controller 110 of the apparatus 100 as described with reference to Figure 1 can be, for instance, a desktop computer or a mobile device such as a mobile smartphone, laptop, notebook, and the like. The display 200 can be the display available in these devices. The controller 110 of the apparatus 100 may control the motor 107, via wired means (i.e. electrical wires and circuitry connect the controller 110 to the motor 107) or wirelessly, to move both the image sensor 106 and the detector 150. In the case of a wireless implementation, the controller 110 may receive data relating to the detection of one or more intruders (including objects and/or vehicles; objects include people and/or animals) in the designated zone wirelessly from the detector, and/or receive images captured by the image sensor 106 wirelessly. In this case, there may be another processing unit, processor or controller electrically connected to the image sensor 106 and/or the detector 150, and a wireless transceiver, to manage the wireless data communication with the controller 110.

The controller 110 may be configured to count every detection of an intruder in the designated zone 156. The video footage (i.e. images) of every detection captured by the image sensor 106 can be stored in a database. The footage can show whether an accident has occurred or has been prevented. The stored footage can be filtered to identify clips of accident occurrences and clips of near misses (i.e. where an accident almost occurred but did not). The number of accidents can be determined by adding up the counts of the clips of accident occurrences. Likewise, the number of near misses can be determined by adding up the counts of the clips of near misses. Such data on the number of accidents and near misses would be useful to the user of the apparatus 100 for assessing safety measures.
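The counting step above is a simple tally over labelled clips. In the sketch below, the label names are illustrative, and the classification of each clip (accident, near miss, or neither) is assumed to have been done already, whether by human review or a trained model.

```python
from collections import Counter

def summarize_detections(labels):
    """Tally stored detection footage after it has been filtered and labelled.

    `labels` holds one assumed label per stored clip: "accident",
    "near_miss", or "none" (detection with no incident).
    """
    counts = Counter(labels)
    return {
        "detections": len(labels),         # every intruder detection is counted
        "accidents": counts["accident"],
        "near_misses": counts["near_miss"],
    }

# Hypothetical labelled clips from five detections.
summary = summarize_detections(
    ["none", "near_miss", "accident", "near_miss", "none"])
```

The resulting totals are the accident and near-miss figures the user would review when assessing safety measures.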

Figure 12 shows another example of automated alignment. The reference numerals of the elements used in earlier figures are reused in Figure 12 for the same elements. There is a vehicle 120 with a width of 2.2 meters parked on the extreme left of a lane 1202 with a width of 3.7 meters. The monitoring unit 400 is mounted on top of the vehicle 120. The distance of the longitudinal axis of symmetry 153 of the rectangular shaped designated zone 156 to the center of the lane 1204 is 0.75 meters. In this example, the lane 1202 is also a straight lane, and the lane 1202 is checked to determine whether it is bending or straight before the processing for a straight lane is performed. The point of interest is determined as the center of the lane 1202 from the images captured by the image sensor 106, but this point of interest is selected at a distance of 100 meters or further from the detector 150. As mentioned earlier, to ensure that the designated zone 156 covers as far along the lane 1202 as possible, the axis 153 of the designated zone 156 should be aligned to be parallel to the straight lane markings of the lane 1202. If the point of interest is the center of the lane 1202 selected at a distance of 100 meters or further from the detector 150, intersecting the axis 153 with this point of interest will make the axis 153 substantially parallel with the straight lane markings of the lane 1202.

Figure 12 shows a right-angled triangle with base length a1, side length b1 and longest side (hypotenuse) length c1 to illustrate that the axis 153 is substantially parallel with the straight lane markings of the lane 1202. The sides with lengths c1 and a1 are at an angle β1, the sides with lengths c1 and b1 are at an angle α1, and the sides with lengths a1 and b1 are at an angle of 90 degrees. The intersection of c1 and a1 is where the monitoring unit 400 (comprising both the detector 150 and the image sensor 106) is located, and the intersection of c1 and b1 is the selected point of interest, which in this example is the center of lane 1206 at a distance of 100 meters away. The values of a1 and b1 are as follows:

a1 = 0.75 meters (distance of the axis 153 to the center of the lane 1204)
b1 = 100 meters

To find angle β1: β1 = arctan(b1/a1) = arctan(100/0.75) ≈ 89.57 degrees

To find angle α1: α1 = 180° − 90° − 89.57° = 0.43 degrees

As angle β1 is 89.57 degrees, this shows that the axis 153 is almost parallel to the lane 1202. Furthermore, in this example, the detection zones 151 and 152 are present within the designated zone 156. After alignment, the designated zone 156 and the detection zones 151 and 152 can be said to be almost parallel to and within the lane 1202, covering as far along the lane 1202 as possible. It can be appreciated that the further the distance b1 is from the monitoring unit 400, the closer the angle β1 will be to 90 degrees, and the nearer the distance b1 is to the monitoring unit 400, the further the angle β1 will be from 90 degrees.

Angle β1 when distance b1 is 150 meters: β1 = arctan(150/0.75) ≈ 89.71 degrees

Angle β1 when distance b1 is 50 meters: β1 = arctan(50/0.75) ≈ 89.14 degrees

Hence, the above demonstrates that as long as the center of lane 1206 at a distance of 100 meters or greater is selected as the point of interest to align and intersect with the axis 153 of the designated zone 156, the designated zone 156 will be set up accurately regardless of where the vehicle 120 is positioned relative to the width of the straight lane 1202.
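The angle calculations above can be reproduced with a short numerical check. This is only an illustrative sketch; the function name `beta1_degrees` is chosen here for convenience and is not part of the disclosure.

```python
import math

def beta1_degrees(offset_m, distance_m):
    """Angle beta1 between the hypotenuse c1 and the base a1 of the
    right-angled triangle: beta1 = arctan(b1 / a1)."""
    return math.degrees(math.atan2(distance_m, offset_m))

a1 = 0.75  # meters, distance of the axis 153 to the center of the lane 1204
for b1 in (50.0, 100.0, 150.0):
    # yields approximately 89.14, 89.57 and 89.71 degrees respectively
    print(f"b1 = {b1:>5.0f} m -> beta1 = {beta1_degrees(a1, b1):.2f} degrees")
```

As the printed values confirm, β1 approaches 90 degrees as b1 grows, matching the observation that a farther point of interest makes the axis 153 more nearly parallel to the lane 1202.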

In the case of a bending lane or road, one example of automated alignment via image processing may be done by first identifying whether a lane or road is bent or straight from the captured images of the image sensor 106. This can be determined by checking whether the lane and/or road markings in the images show that the lane or road is straight. Barrel distortion and other known imaging defects will be taken into consideration in this determination. If it is confirmed that the lane or road is bent, the area (specifically, the pixel locations) of the lane or road within the lane and/or road markings in the images captured by the image sensor 106 will be determined. These steps also apply to other examples directed to a straight lane or road.

Assumptions have to be made that the vehicle 120 mounted with the apparatus 100 is parked or moving within the lane or road that the apparatus 100 has to monitor, and that the vehicle 120 is parked along or moving along the lane or road in a desired manner with the best effort of the driver. In the real world, it is difficult for a driver to park or move the vehicle 120 along the lane or road so perfectly that the detector 150 and the image sensor 106 both face the desired direction and the vehicle is perfectly centralized within the lane or road. That is why alignment still needs to be done. The best effort of the driver is required because automated alignment will not be possible if, for instance, the driver willfully parks or moves the vehicle 120 such that the image sensor 106 does not capture any part of the lane or road to be monitored. These assumptions also apply to all the other manual and automated alignment examples in the present disclosure.

Once the area of the lane or road captured in the images of the image sensor 106 is determined, computation or processing will be done to virtually overlay the area of the designated zone 156 over the lane or road area in a predetermined manner. The virtual area of the designated zone 156 is known to the manufacturer and can be preprogrammed or predetermined for the purpose of overlaying onto the images captured by the image sensor 106. This virtual area of the designated zone 156 may or may not be graphically produced for displaying on a screen in the case of automated alignment. However, in the case of manual alignment, it can be graphically displayed (as a graphical representation) on a screen of a physical display to provide guidance to a user performing alignment manually. In the present example, the overlaying is done such that the area of the designated zone 156 resides within the lane or road area identified from the captured images, and is centralized within the boundaries of the lane and/or road markings identified from the captured images. The rules for centralization can be predetermined according to the extent of bend of the lane or road, and/or based on other factors. Once the overlaid virtual area of the designated zone 156 is centralized in this manner, the longitudinal axis of symmetry 153 of the overlaid virtual area of the designated zone 156 will be in position in the captured images to help determine the angle of compensation from a current position of the designated zone 156. The current position of the designated zone 156 can be determined because it can be mounted on the vehicle 120 to face a specific direction by default and/or the apparatus 100 can be configured to store information of the current position with respect to previous movement from a default position.
The determined angle of compensation from the current known positioning of the designated zone can be used to determine how much the motor 107 has to move the monitoring unit 400 (comprising both the image sensor 106 and the detector 150) to align the direction of extension of the designated zone 156 with the lane or road. In this example, the area of the lane or road captured in the images constitutes a plurality of points of interest, or in other words, a region of interest. The objective is to align the direction of extension of the designated zone 156 (represented by the axis 153) with this region of interest.
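The step from a determined angle of compensation to a motor movement can be sketched as follows. Both function names and the step resolution are hypothetical assumptions, since the disclosure leaves the motor interface unspecified.

```python
def compensation_angle(current_axis_deg, target_axis_deg):
    """Signed angle in degrees, normalized to (-180, 180], that the
    monitoring unit 400 must turn so the zone axis lines up with the
    overlaid target axis."""
    delta = (target_axis_deg - current_axis_deg) % 360.0
    return delta - 360.0 if delta > 180.0 else delta

def motor_steps(angle_deg, steps_per_degree=10):
    """Translate the compensation angle into motor steps (assumed resolution)."""
    return round(angle_deg * steps_per_degree)
```

Normalizing to (−180, 180] ensures the motor always takes the shorter rotation, e.g. turning −20 degrees rather than +340.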

In other examples, certain sensor or sensors may be used alone or in conjunction with the image processing and/or neural network examples described in the present disclosure. For example, a gyrometer may be used to help determine the angle of compensation. A sensor used in a vehicle steering wheel may also be used to track the angle of compensation.

Furthermore, in some examples, with regard to the alerts for intruders, the apparatus 100 of Figure 1 may be configured to communicate with a system via a telecommunication network, a satellite network, a wireless local area network (WLAN), Bluetooth, and the like, to send wireless signals to activate the alerts on a device in a vehicle, residing with a user, or residing in an object detected as an intruder. For example, such a device can be a smartphone, a vehicle built-in dashboard display with a controller, or a similar device, running an application programmed to receive such alerts. In this manner, alerts can be activated through the sound, tactile, and/or display capabilities of the device.

Figure 10 shows in more detail an example of the controller 110 of Figure 1 in the apparatus 100 and the display 200. The controller may comprise a processing unit 1002 for processing software including one or more computer programs for running one or more computer/server applications to enable a backend logic flow or the method or methods for carrying out the steps as described with reference to the earlier Figures.

Furthermore, the processing unit 1002 may include user input modules such as a computer mouse 1036, keyboard/keypad 1004, and/or a plurality of output devices such as a display device 1008. The display device 1008 can be the display 200 of Figure 1. The display of the display device 1008 may be a touch screen capable of receiving user input as well.

The processing unit 1002 may be connected to a computer network 1012 via a suitable transceiver device 1014 (i.e. a network interface), to enable access to e.g. the Internet or other network systems such as a wired Local Area Network (LAN) or Wide Area Network (WAN). The processing unit 1002 may also be connected to one or more external wireless communication enabled devices 1034 (for example, another apparatus 100 or another user interface 200) via a suitable wireless transceiver device 1032, e.g. a WiFi transceiver, Bluetooth module, or mobile telecommunication transceiver suitable for Global System for Mobile Communication (GSM), 3G, 3.5G, 4G telecommunication systems, and the like. Through the computer network 1012, the processing unit 1002 can gain access to one or more storages, i.e. data storages, databases, data servers and the like, connectable to the computer network 1012 to retrieve and/or store data in the one or more storages.

The processing unit 1002 may include a processor 1018, a Random Access Memory (RAM) 1020 and a Read Only Memory (ROM) 1022. The processing unit 1002 may also include a number of Input/Output (I/O) interfaces, for example I/O interface 1038 to the computer mouse 1036, a memory card slot 1016, I/O interface 1024 to the display device 1008, and I/O interface 1026 to the keyboard/keypad 1004. The I/O interfaces of the processing unit 1002 are connected to the detector 150, the motor 107, the sound and/or lighting device 105, and the image sensor 106, as described in other Figures.

The components of the processing unit 1002 typically communicate via an interconnected bus 1028 and in a manner known to the person skilled in the relevant art.

The computer programs may be supplied to the user of the processing unit 1002, or the processor (not shown) of one of the one or more external wireless communication enabled devices 1034, encoded on a data storage medium such as a CD-ROM, a flash memory carrier or a Hard Disk Drive, and are to be read using a corresponding data storage medium drive of a data storage device 1030. Such computer or application programs may also be downloaded from the computer network 1012. The application programs are read and controlled in their execution by the processor 1018. Intermediate storage of program data may be accomplished using RAM 1020.

In more detail, one or more of the computer or application programs may be stored on any non-transitory machine- or computer-readable medium. The machine- or computer-readable medium may include storage devices such as magnetic or optical disks, memory chips, or other storage devices suitable for interfacing with a general purpose computer. The machine- or computer-readable medium may also include a hard-wired medium such as that exemplified in the Internet system, or a wireless medium such as that exemplified in the Wireless LAN (WLAN) system and the like. The computer program when loaded and executed on such a general-purpose computer effectively results in an apparatus that implements the steps of the computing methods in the examples herein described.

In summary, examples of the present disclosure may include the following features.

An apparatus (e.g. 100) for safety warning for a designated zone (e.g. 156), wherein the apparatus comprises: a detector (e.g. 150) configured to monitor a designated zone and detect one or more intruders partially or fully entering the designated zone; an image sensor (e.g. 106) for capturing images in real time; a motor (e.g. 107) for moving the image sensor and the detector; and a processing unit (e.g. 110, 1002) configured to execute instructions to operate the apparatus to: receive, from the detector, input relating to the one or more intruders detected in the designated zone; send a signal to alert that the one or more intruders are detected in the designated zone, wherein the designated zone extends in a direction away from the detector, and the image sensor is mounted to the detector such that the direction of extension of the designated zone is aligned at a predetermined angle (e.g. W) with respect to a direction of view of the image sensor, wherein the apparatus is operable to: control the motor to move both the image sensor and detector to align the direction of extension of the designated zone with a point of interest (e.g. 155, 902, 904, 1206) in the captured images. The predetermined angle is preferably zero degrees.

The apparatus may further comprise: a display (e.g. 200) configured to display the images captured by the image sensor; and one or more user control mechanisms (e.g. 210a, 210b) for controlling movements of the motor to align the direction of extension of the designated zone with the point of interest.

The display may comprise a physical visual marker (e.g. 804) to guide a user to align the direction of extension of the designated zone with the point of interest.

The display may be configured to display a graphical representation (e.g. 207) to guide a user to align the direction of extension of the designated zone with the point of interest.

The physical visual marker or the graphical representation may be provided in a form of a vertical line (e.g. 207) located at a center of the display.

The physical visual marker or the graphical representation may comprise one or more curved lines (e.g. 806) indicative of a line in reality that has undergone barrel distortion in the images captured by the image sensor.

The display may be configured to graphically display the one or more user control mechanisms.

The point of interest may be a center of a road or lane in a road (e.g. 500, 1202), a center of a road or lane in a road at a predetermined distance away from the detector, a center of a plurality of lanes in a road, or a center of a vehicle (e.g. 125) or object in the captured images.

The direction of extension of the designated zone may be aligned with the point of interest when a longitudinal axis of symmetry of the designated zone (e.g. 153) is substantially parallel with a lane or a road.

The direction of extension of the designated zone may be aligned with the point of interest when an area of the designated zone is aligned with a region of interest including a plurality of said point of interest.

The apparatus may be operable to: count every detection of an intruder in the designated zone, wherein images of every detection captured by the image sensor are stored in a database so as to enable an accident or near miss to be tracked.

The data to control the motor to move the image sensor and the detector to align the direction of extension of the designated zone with the point of interest may be determined based on an output of a neural network.

The data to control the motor to move the image sensor and the detector to align the direction of extension of the designated zone with the point of interest may be determined based on location data of the point of interest obtained from processing the captured images to identify features for calculation of the location of the point of interest.

The features identified may include a road kerb, a road divider, a road marking, a vehicle or object, and/or a lane marking (e.g. M1 , M2).

The apparatus may be operable to: send different signals to alert one or more intruders detected in different areas (e.g. 151 , 152) of the designated zone, wherein each signal for each area in the designated zone results in an alert that is different from an alert of a signal sent for another area in the designated zone.

The alerts in the different areas of the designated zone may be different in level of urgency, and the levels of urgency correspond to different speeds of the intruders and/or distances of the intruders from the detector.

The alerts may have different tempo for different level of urgency.

The alerts may have different volume for different level of urgency.

The apparatus may be operable to: send signals wirelessly to a device residing at each of the one or more intruders to activate an alert.

The one or more intruders may be alerted through sound and lighting effects.

The detector may be a radio detection and ranging (RADAR) sensor, or a light detection and ranging (LIDAR) sensor, or a combination of both.

The apparatus may comprise a mounting bracket (e.g. 115) for mounting the image sensor, the detector, and the motor to a surface of a vehicle (e.g. 120).

The apparatus may be operable to: send a signal to alert a user of a need to re-align the direction of extension of the designated zone with the point of interest in the case that alignment is off.

The field of view of the image sensor may be wider than a width of the designated zone.

The designated zone may have a width in a range from a width of a lane or road to a width that is 50% to 100% of a width of a vehicle mounted with the image sensor and the detector. Preferably, the designated zone may have a width in a range of 50% to 100% of a width of a vehicle mounted with the image sensor and the detector. More preferably, the designated zone may have a width in a range of 75% to 100% of a width of a vehicle mounted with the image sensor and the detector.

In a top view of the apparatus, the designated zone may have an elongate rectangle shape.

In a top view of the apparatus, the designated zone may have an elongate rectangle shape that is bent according to a road or lane bend identified in the captured images.

A method for safety warning for a designated zone (e.g. 156), wherein the method comprises: monitoring a designated zone (e.g. 156) and detecting one or more intruders partially or fully entering the designated zone using a detector (e.g. 150); capturing images in real time using an image sensor (e.g. 106); receiving, from the detector, input relating to the one or more intruders detected in the designated zone; and sending a signal to alert that the one or more intruders are detected in the designated zone, wherein the designated zone extends in a direction away from the detector, and the image sensor is mounted to the detector such that the direction of extension of the designated zone is aligned at a predetermined angle (e.g. W) with respect to a direction of view of the image sensor, wherein the method comprises: controlling the motor to move both the image sensor and detector to align the direction of extension of the designated zone with a point of interest (e.g. 155, 904, 902, 1206) in the captured images.

Throughout the present disclosure, similar elements described in the description are, in most if not all instances, given the same reference numerals.

In the specification and claims, unless the context clearly indicates otherwise, the term “comprising” has the non-exclusive meaning of the word, in the sense of “including at least” rather than the exclusive meaning in the sense of “consisting only of”. The same applies with corresponding grammatical changes to other forms of the word such as “comprise”, “comprises” and so on.

While the invention has been described in the present disclosure in connection with a number of embodiments and implementations, the invention is not so limited but covers various obvious modifications and equivalent arrangements, which fall within the purview of the appended claims. Although features of the invention are expressed in certain combinations among the claims, it is contemplated that these features can be arranged in any combination and order.