Title:
AUDIOVISUAL SYSTEM WITH POWER MODE SWITCHING
Document Type and Number:
WIPO Patent Application WO/2024/086943
Kind Code:
A1
Abstract:
An audiovisual system, including: an off-grid power supply for powering the audiovisual system; an audiovisual capture device operatively coupled to the off-grid power supply and operable to capture images or audio in accordance with audiovisual capture parameters associated with an operation mode of the system selectable from at least a first and a second mode, one being a low power mode (which can produce low grade video with high latency) and the other being a high power mode (producing high grade video in real time); and a processing entity operatively coupled to the capture device and to the off-grid power supply, the processing entity configured for detecting a change to the system or an event and forcing the system to operate in the second mode in response to the detected change or event.

Inventors:
CASSANI PABLO (CA)
Application Number:
PCT/CA2023/051432
Publication Date:
May 02, 2024
Filing Date:
October 26, 2023
Assignee:
GENETEC INC (CA)
International Classes:
H04N23/65; G06V20/62; G08G1/017
Domestic Patent References:
WO2018142136A1 (2018-08-09)
WO2022011986A1 (2022-01-20)
WO2021182420A1 (2021-09-16)
Attorney, Agent or Firm:
SMART & BIGGAR LP (CA)
Claims:
CLAIMS

1. An audiovisual system, comprising: an off-grid power supply for powering an audiovisual device; an audiovisual capture device operatively coupled to the off-grid power supply and operable to capture at least one of images or audio, in accordance with audiovisual capture parameters associated with an operation mode of the audiovisual system selectable from at least a first operation mode and a second operation mode, one of the first and second operation modes being a low power operation mode and the other of the first and second operation modes being a high power operation mode; and a processing entity operatively coupled to the audiovisual capture device and to the off-grid power supply, the processing entity configured for detecting a change to the audiovisual system and forcing the audiovisual system to operate in the second operation mode in response to the detected change.

2. The system defined in claim 1, wherein detecting the change to the audiovisual system comprises detecting movement of the audiovisual system, and wherein the second operation mode is the high power operation mode.

3. The system defined in claim 2, wherein the processing entity is configured for receiving a relocation schedule for the audiovisual system and determining if a current time corresponds to a time period when the audiovisual system is scheduled for relocation by comparing a current time against the relocation schedule.

4. The system defined in claim 3, wherein the determining comprises determining that the current time corresponds to a time period when the audiovisual system is scheduled for relocation, and wherein the second operation mode is the low power operation mode.

5. The system defined in claim 3, wherein the determining comprises determining that the current time corresponds to a time period when the audiovisual system is not scheduled for relocation, and wherein the second operation mode is the high power operation mode.

6. The system defined in claim 1, wherein the processing entity is further configured for recording the operation mode of the audiovisual capture device prior to the forcing, determining that a condition is met after the forcing, and causing the audiovisual capture device to switch back to the first operation mode if the condition is met and if the recorded operation mode is the first operation mode.

7. The system defined in claim 6, wherein the audiovisual system is communicatively coupled to a server over a data network, and wherein determining that a condition is met comprises determining that a relinquish signal has been received from the server.

8. The system defined in claim 6, wherein the processing entity is further configured for carrying out threat assessment processing on the at least one of captured images or audio and wherein determining that a condition is met comprises determining that the threat assessment processing concludes that there is no threat.

9. The system defined in claim 1, wherein the processing entity is further configured for recording the operation mode of the audiovisual capture device prior to the forcing, determining that a condition is met after the forcing, and causing the audiovisual capture device to continue operating in the second operation mode if the condition is met and if the recorded operation mode is the second operation mode.

10. The system defined in claim 1, wherein detecting the change to the audiovisual system comprises detecting that a charge level of the off-grid power supply has dropped below a threshold level, and wherein the second operation mode is the low power operation mode.

11. The system defined in claim 1, wherein detecting the change to the audiovisual system comprises detecting that a charge level of the off-grid power supply has risen above a threshold level, and wherein the second operation mode is the high power operation mode.

12. The system defined in claim 1, wherein the off-grid power supply comprises a solar panel and a battery, and wherein detecting the change to the audiovisual system comprises detecting that the solar panel has become disconnected from the battery.

13. The system defined in claim 1, further comprising an acceleration sensor or a position sensor or a temperature sensor or a motion sensor or a tamper detection sensor, wherein detecting the change to the audiovisual system comprises further detecting movement of the audiovisual system from readings of the acceleration sensor, displacement of the audiovisual system from readings of the position sensor or a change of environment of the audiovisual system from readings of the temperature sensor or attempted tampering of the audiovisual system from readings of the motion sensor or the tamper detection sensor.

14. The system defined in claim 1, wherein the second operation mode is the high power operation mode and wherein when in the second mode, the audiovisual system is configured to record full motion video.

15. The system defined in claim 14, further comprising a wireless network interface, wherein in the second operation mode, the audiovisual system is configured to send the full motion video to a server via the wireless network interface.

16. The system defined in claim 15, wherein the positioning system is configured to record a position of the audiovisual system over time and to send the position of the audiovisual system via the wireless network interface.

17. The system defined in claim 1, wherein when the audiovisual system is in the low power operation mode, the audiovisual capture device is configured to capture images at a first frame rate and wherein when the audiovisual system is in the high power operation mode, the audiovisual capture device is configured to capture images at a second frame rate greater than the first frame rate.

18. The system defined in claim 1, wherein when the audiovisual system is in the low power operation mode, the audiovisual capture device captures images at a first resolution and wherein when the audiovisual system is in the high power operation mode, the audiovisual capture device captures images at a second resolution greater than the first resolution.

19. The system defined in claim 1, wherein when the audiovisual system is in the low power operation mode, the audiovisual capture device captures audio at a first sampling rate and wherein when the audiovisual system is in the high power operation mode, the audiovisual capture device captures audio at a second sampling rate greater than the first sampling rate.

20. The system defined in claim 1, further comprising a wireless network interface, wherein when the audiovisual system is in the low power operation mode, the at least one of the captured images or audio are buffered in memory and asynchronously uploaded to a server via the wireless network interface and wherein when the audiovisual system is in the high power operation mode, the at least one of the captured images or audio are streamed in real-time or near-real-time to the server via the wireless network interface.

21. The system defined in claim 1, further comprising a wireless network interface, wherein the processing entity is configured for carrying out first-level processing of the at least one of the captured images or audio to create a result, wherein when the audiovisual system is in the high power operation mode, the processing entity is configured for performing second-level processing on the result of the first-level processing and for sending a result of the second-level processing to a server via the wireless network interface, and wherein when the audiovisual system is in the low power operation mode, the processing entity is configured for sending the result of the first-level processing to the server via the wireless network interface without performing the second-level processing on the result of the first-level processing.

22. The system defined in claim 21, wherein the first-level processing comprises selection of a subset of images or audio from the at least one of the captured images or audio and wherein the second-level processing comprises detection and/or recognition of objects in the subset of images or audio.

23. The system defined in claim 21, wherein the first-level processing comprises object detection and wherein the second-level processing comprises object recognition.

24. The system defined in claim 23, wherein the object recognition comprises character recognition of a license plate.

25. The system defined in claim 1, wherein the operation mode is selectable from at least the first operation mode, the second operation mode and a third operation mode that is a super-low power mode of operation.

26. The system defined in claim 25, wherein when the audiovisual system is in the low power operation mode or in the high power operation mode, the audiovisual capture device is configured to capture the at least one of the images or audio, and wherein when the audiovisual system is in the super-low power mode of operation, the audiovisual capture device is configured to not capture any images or audio.

27. The system defined in claim 26, wherein when the audiovisual system is in the high power operation mode, the audiovisual system consumes more power from the off-grid power supply than during the low power mode of operation.

28. The system defined in claim 25, wherein the processing entity is configured for responding to detection of a sleep command to force the audiovisual system to operate in the super-low power mode of operation.

29. The system defined in claim 25, wherein the processing entity is configured for responding to contents of the captured images or audio to switch from the high power operation mode to the low power operation mode.

30. The system defined in claim 1, wherein the operation mode is selectable from at least the first operation mode, the second operation mode and a third operation mode that is a covert mode of operation.

31. A computer-implemented method for execution by a processing system operatively coupled to an audiovisual system powered by an off-grid power supply, the method comprising: detecting a change to the audiovisual system; and forcing the audiovisual system to operate in a second operation mode in response to the detected change, wherein an audiovisual capture device of the audiovisual system is operable to capture at least one of images or audio in accordance with audiovisual capture parameters associated with an operation mode selectable from at least a first operation mode and the second operation mode, one of the first and second operation modes being a low power mode of operation and the other of the first and second operation modes being a high power mode of operation.

32. A computer readable storage medium having stored therein instructions, which when executed by a processing entity of an audiovisual system powered by an off-grid power supply, cause the audiovisual system to: detect a change to the audiovisual system; and force the audiovisual system to operate in a second operation mode in response to the detected change, wherein an audiovisual capture device of the audiovisual system is operable to capture at least one of images or audio in accordance with audiovisual capture parameters associated with an operation mode selectable from at least a first operation mode and the second operation mode, one of the first and second operation modes being a low power mode of operation and the other of the first and second operation modes being a high power mode of operation.

33. An audiovisual system, comprising: an off-grid power supply for powering the audiovisual system; and an audiovisual generation system comprising: an audiovisual capture device operatively coupled to the off-grid power supply and operable to capture at least one of images or audio in accordance with audiovisual capture parameters associated with an operation mode of the audiovisual system selectable from at least a first operation mode and a second operation mode, one of the first and second operation modes being a low power operation mode and the other of the first and second operation modes being a high power operation mode; and a processing entity operatively coupled to the image capture device and to the off-grid power supply, the processing entity configured for detecting an event based on at least one of (i) image processing of the captured images and (ii) audio processing of the captured audio and forcing the audiovisual system to operate in the second operation mode in response to the detected event.

Description:
AUDIOVISUAL SYSTEM WITH POWER MODE SWITCHING

FIELD

The present disclosure relates generally to power management and, more particularly, to a processor-implemented method for managing power consumption of an off-grid security camera under certain conditions.

BACKGROUND

Off-grid, battery-operated wireless security cameras are becoming increasingly utilized for capturing video or images of events occurring throughout a geographical area. Events that are of particular interest to law enforcement include traffic violations, formation of crowds, and crimes of various types such as shootings, robberies, muggings, riots, and assaults.

As off-grid security cameras proliferate, they are increasingly found in areas that may be dangerous, crime-ridden or remote, making them vulnerable to damage, theft, reorientation and other malicious acts. Due to their limited battery capacity, the cameras are usually configured not to continuously stream video in real-time, but rather to apply operating parameters (e.g., frame rate, resolution, bandwidth, etc.) that are at sufficiently low values to allow the battery to last a relatively long time. Consequently, it becomes difficult for cameras programmed in this way to capture events indicative of vandalism or theft or other events with sufficient precision or in sufficient time to take preventative or remedial action.

In view of the foregoing, an improvement in off-grid security cameras would be welcomed by the law enforcement community and others interested in monitoring dynamically unfolding events in a certain geographic area, in some cases where cameras are left vulnerable to malicious actors.

SUMMARY

The present disclosure provides a method and system for operating an off-grid camera that has the ability to operate in several power modes, including a low power mode and a high power mode. A processing entity is configured to detect a trigger (e.g., a change to the camera or an event) and to force the camera to operate in a chosen mode of operation in response to the detected trigger. The change to the camera may be a physical change as detected from sensor output. The event may be detected by processing images, audio or video captured by the camera.

According to a first example aspect, there is provided an audiovisual system. The audiovisual system comprises an off-grid power supply for powering an audiovisual device; an audiovisual capture device operatively coupled to the off-grid power supply and operable to capture at least one of images or audio, in accordance with audiovisual capture parameters associated with an operation mode of the audiovisual system selectable from at least a first operation mode and a second operation mode, one of the first and second operation modes being a low power operation mode and the other of the first and second operation modes being a high power operation mode; and a processing entity operatively coupled to the audiovisual capture device and to the off-grid power supply, the processing entity configured for detecting a change to the audiovisual system and forcing the audiovisual system to operate in the second operation mode in response to the detected change.
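
By way of a non-limiting illustration, the following Python sketch shows one way the trigger-to-mode-switch behaviour described above could be organized; the names OperationMode, Trigger and AudiovisualController are hypothetical and do not appear in this application.

```python
# Illustrative sketch only: a minimal mode-switching controller. All names and
# the trigger vocabulary are assumptions made for this example.
from dataclasses import dataclass
from enum import Enum, auto


class OperationMode(Enum):
    LOW_POWER = auto()    # e.g., low frame rate, buffered uploads
    HIGH_POWER = auto()   # e.g., full-motion video streamed in real time


@dataclass
class Trigger:
    """A detected change to the system (sensor-based) or event (content-based)."""
    kind: str                      # e.g., "movement", "low_battery", "gunshot_detected"
    target_mode: OperationMode     # the mode this trigger forces


class AudiovisualController:
    def __init__(self, initial_mode: OperationMode = OperationMode.LOW_POWER):
        self.mode = initial_mode

    def on_trigger(self, trigger: Trigger) -> None:
        """Force the system into the mode associated with the detected trigger."""
        if trigger.target_mode is not self.mode:
            self.mode = trigger.target_mode
            self.apply_capture_parameters()

    def apply_capture_parameters(self) -> None:
        # In a real device this would reconfigure the capture pipeline;
        # here it is only a placeholder.
        print(f"Switched to {self.mode.name}")


if __name__ == "__main__":
    controller = AudiovisualController()
    # Movement detected: force the high power operation mode.
    controller.on_trigger(Trigger(kind="movement", target_mode=OperationMode.HIGH_POWER))
```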

In accordance with any of the preceding aspects, detecting the change to the audiovisual system comprises detecting movement of the audiovisual system, and wherein the second operation mode is the high power operation mode.

In accordance with any of the preceding aspects, the processing entity is configured for receiving a relocation schedule for the audiovisual system and determining if a current time corresponds to a time period when the audiovisual system is scheduled for relocation by comparing a current time against the relocation schedule.

In accordance with any of the preceding aspects, the determining comprises determining that the current time corresponds to a time period when the audiovisual system is scheduled for relocation, and wherein the second operation mode is the low power operation mode.

In accordance with any of the preceding aspects, the determining comprises determining that the current time corresponds to a time period when the audiovisual system is not scheduled for relocation, and wherein the second operation mode is the high power operation mode.
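
By way of a non-limiting illustration, the relocation-schedule check described in the preceding aspects could be sketched as follows; the RelocationWindow type and the is_scheduled_for_relocation helper are hypothetical names, and the example dates are arbitrary.

```python
# Illustrative sketch only: decide whether detected movement coincides with a
# scheduled relocation of the audiovisual system.
from dataclasses import dataclass
from datetime import datetime
from typing import Iterable


@dataclass
class RelocationWindow:
    start: datetime
    end: datetime


def is_scheduled_for_relocation(now: datetime,
                                schedule: Iterable[RelocationWindow]) -> bool:
    """Return True if `now` falls inside any window of the relocation schedule."""
    return any(window.start <= now <= window.end for window in schedule)


if __name__ == "__main__":
    schedule = [RelocationWindow(datetime(2024, 5, 2, 8), datetime(2024, 5, 2, 10))]
    now = datetime(2024, 5, 2, 9, 15)
    if is_scheduled_for_relocation(now, schedule):
        # Scheduled move: the motion is expected, so the low power mode may be kept.
        print("Relocation scheduled: remain in (or switch to) low power mode")
    else:
        # Unscheduled move: treat the motion as suspicious and force high power mode.
        print("Unscheduled movement: force high power mode")
```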

In accordance with any of the preceding aspects, the processing entity is further configured for recording the operation mode of the audiovisual capture device prior to the forcing, determining that a condition is met after the forcing, and causing the audiovisual capture device to switch back to the first operation mode if the condition is met and if the recorded operation mode is the first operation mode.

In accordance with any of the preceding aspects, the audiovisual system is communicatively coupled to a server over a data network, and wherein determining that a condition is met comprises determining that a relinquish signal has been received from the server.

In accordance with any of the preceding aspects, the processing entity is further configured for carrying out threat assessment processing on the at least one of captured images or audio and wherein determining that a condition is met comprises determining that the threat assessment processing concludes that there is no threat.

In accordance with any of the preceding aspects, the processing entity is further configured for recording the operation mode of the audiovisual capture device prior to the forcing, determining that a condition is met after the forcing, and causing the audiovisual capture device to continue operating in the second operation mode if the condition is met and if the recorded operation mode is the second operation mode.
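
By way of a non-limiting illustration, the record-then-revert behaviour of the preceding aspects could be sketched as follows; ModeManager and its methods are hypothetical names, and the example condition (a relinquish signal or "no threat" assessment) is only indicated in comments.

```python
# Illustrative sketch only: remember the mode active before a forced switch and
# decide whether to revert once a condition is met.
from enum import Enum, auto
from typing import Optional


class Mode(Enum):
    LOW_POWER = auto()
    HIGH_POWER = auto()


class ModeManager:
    def __init__(self, mode: Mode = Mode.LOW_POWER):
        self.mode = mode
        self.recorded_mode: Optional[Mode] = None

    def force(self, new_mode: Mode) -> None:
        # Record the pre-forcing mode so it can be restored later.
        self.recorded_mode = self.mode
        self.mode = new_mode

    def on_condition_met(self) -> None:
        # Revert only if the recorded mode differs from the forced mode;
        # otherwise keep operating in the current (second) mode.
        if self.recorded_mode is not None and self.recorded_mode is not self.mode:
            self.mode = self.recorded_mode
        self.recorded_mode = None


if __name__ == "__main__":
    manager = ModeManager(Mode.LOW_POWER)
    manager.force(Mode.HIGH_POWER)   # e.g., movement detected
    manager.on_condition_met()       # e.g., relinquish signal received from the server
    assert manager.mode is Mode.LOW_POWER
```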

In accordance with any of the preceding aspects, detecting the change to the audiovisual system comprises detecting that a charge level of the off-grid power supply has dropped below a threshold level, and wherein the second operation mode is the low power operation mode.

In accordance with any of the preceding aspects, detecting the change to the audiovisual system comprises detecting that a charge level of the off-grid power supply has risen above a threshold level, and wherein the second operation mode is the high power operation mode.

In accordance with any of the preceding aspects, the off-grid power supply comprises a solar panel and a battery, and wherein detecting the change to the audiovisual system comprises detecting that the solar panel has become disconnected from the battery.
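
By way of a non-limiting illustration, the power-supply-driven switching of the three preceding aspects could be sketched as follows; the threshold values and the choice of target mode on solar-panel disconnection are assumptions made for the example.

```python
# Illustrative sketch only: map power-supply changes (battery charge crossing a
# threshold, or the solar panel becoming disconnected) to a target operation mode.
from dataclasses import dataclass
from typing import Optional

LOW_CHARGE_THRESHOLD = 0.20   # assumed: 20% state of charge
HIGH_CHARGE_THRESHOLD = 0.50  # assumed: 50% state of charge


@dataclass
class PowerSupplyStatus:
    charge_level: float          # 0.0 .. 1.0
    solar_panel_connected: bool


def select_mode(status: PowerSupplyStatus) -> Optional[str]:
    """Return a target mode name when a power-related change warrants a switch."""
    if not status.solar_panel_connected:
        # Disconnection may indicate a fault or tampering; this example conserves
        # energy, though another embodiment could instead escalate to high power.
        return "LOW_POWER"
    if status.charge_level < LOW_CHARGE_THRESHOLD:
        return "LOW_POWER"
    if status.charge_level > HIGH_CHARGE_THRESHOLD:
        return "HIGH_POWER"
    return None  # no change required


if __name__ == "__main__":
    print(select_mode(PowerSupplyStatus(charge_level=0.15, solar_panel_connected=True)))
    print(select_mode(PowerSupplyStatus(charge_level=0.80, solar_panel_connected=True)))
```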

In accordance with any of the preceding aspects, the audiovisual system further comprising an acceleration sensor or a position sensor or a temperature sensor or a motion sensor or a tamper detection sensor, wherein detecting the change to the audiovisual system comprises further detecting movement of the audiovisual system from readings of the acceleration sensor, displacement of the audiovisual system from readings of the position sensor or a change of environment of the audiovisual system from readings of the temperature sensor or attempted tampering of the audiovisual system from readings of the motion sensor or the tamper detection sensor.

In accordance with any of the preceding aspects, the second operation mode is the high power operation mode and wherein when in the second mode, the audiovisual system is configured to record full motion video.

In accordance with any of the preceding aspects, the audiovisual system further comprising a wireless network interface, wherein in the second operation mode, the audiovisual system is configured to send the full motion video to a server via the wireless network interface.

In accordance with any of the preceding aspects, the positioning system is configured to record a position of the audiovisual system over time and to send the position of the audiovisual system via the wireless network interface.

In accordance with any of the preceding aspects, when the audiovisual system is in the low power operation mode, the audiovisual capture device is configured to capture images at a first frame rate and wherein when the audiovisual system is in the high power operation mode, the audiovisual capture device is configured to capture images at a second frame rate greater than the first frame rate.

In accordance with any of the preceding aspects, when the audiovisual system is in the low power operation mode, the audiovisual capture device captures images at a first resolution and wherein when the audiovisual system is in the high power operation mode, the audiovisual capture device captures images at a second resolution greater than the first resolution.

In accordance with any of the preceding aspects, when the audiovisual system is in the low power operation mode, the audiovisual capture device captures audio at a first sampling rate and wherein when the audiovisual system is in the high power operation mode, the audiovisual capture device captures audio at a second sampling rate greater than the first sampling rate.

In accordance with any of the preceding aspects, the audiovisual system further comprising a wireless network interface, wherein when the audiovisual system is in the low power operation mode, the at least one of the captured images or audio are buffered in memory and asynchronously uploaded to a server via the wireless network interface and wherein when the audiovisual system is in the high power operation mode, the at least one of the captured images or audio are streamed in real-time or near-real-time to the server via the wireless network interface.
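
By way of a non-limiting illustration, per-mode capture parameters and the buffered-versus-streamed upload behaviour described in the preceding aspects could be represented as follows; the specific frame rates, resolutions and sampling rates are arbitrary values chosen for the example.

```python
# Illustrative sketch only: per-mode capture parameters and upload behaviour.
from dataclasses import dataclass


@dataclass(frozen=True)
class CaptureProfile:
    frame_rate_fps: float
    resolution: tuple          # (width, height)
    audio_sample_rate_hz: int
    stream_in_real_time: bool  # False => buffer in memory and upload asynchronously


PROFILES = {
    "LOW_POWER": CaptureProfile(
        frame_rate_fps=1.0, resolution=(640, 480),
        audio_sample_rate_hz=8_000, stream_in_real_time=False),
    "HIGH_POWER": CaptureProfile(
        frame_rate_fps=30.0, resolution=(1920, 1080),
        audio_sample_rate_hz=44_100, stream_in_real_time=True),
}


def handle_capture(mode: str, frame: bytes, buffer: list, send) -> None:
    """Buffer or stream a captured frame depending on the active profile."""
    profile = PROFILES[mode]
    if profile.stream_in_real_time:
        send(frame)            # stream to the server in real time or near-real time
    else:
        buffer.append(frame)   # uploaded asynchronously later to conserve power


if __name__ == "__main__":
    outgoing = []
    handle_capture("LOW_POWER", b"frame-0", outgoing, send=lambda f: print("sent", f))
    print("buffered frames:", len(outgoing))
```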

In accordance with any of the preceding aspects, the audiovisual system further comprising a wireless network interface, wherein the processing entity is configured for carrying out first-level processing of the at least one of the captured images or audio to create a result, wherein when the audiovisual system is in the high power operation mode, the processing entity is configured for performing second-level processing on the result of the first-level processing and for sending a result of the second-level processing to a server via the wireless network interface, and wherein when the audiovisual system is in the low power operation mode, the processing entity is configured for sending the result of the first-level processing to the server via the wireless network interface without performing the second-level processing on the result of the first-level processing.

In accordance with any of the preceding aspects, the first-level processing comprises selection of a subset of images or audio from the at least one of the captured images or audio and wherein the second-level processing comprises detection and/or recognition of objects in the subset of images or audio.

In accordance with any of the preceding aspects, the first-level processing comprises object detection and wherein the second-level processing comprises object recognition.

In accordance with any of the preceding aspects, the object recognition comprises character recognition of a license plate.
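
By way of a non-limiting illustration, the split between first-level and second-level processing described in the preceding aspects could be sketched as follows; detect_objects and recognize_plate are stub functions standing in for real analytics, and the routing logic is an assumption made for the example.

```python
# Illustrative sketch only: cheap first-level processing always runs on-device;
# expensive second-level processing (e.g., licence plate recognition) runs only
# in the high power operation mode.
from typing import List, Optional


def detect_objects(frame: bytes) -> List[str]:
    """First-level processing: inexpensive detection (stub for illustration)."""
    return ["vehicle"] if frame else []


def recognize_plate(frame: bytes) -> Optional[str]:
    """Second-level processing: expensive recognition (stub for illustration)."""
    return "ABC 123" if frame else None


def process_frame(frame: bytes, high_power: bool, send) -> None:
    detections = detect_objects(frame)
    if high_power and "vehicle" in detections:
        # Second-level processing is performed on-device and its result is sent.
        send({"plate": recognize_plate(frame)})
    else:
        # In the low power mode, only the first-level result is sent.
        send({"detections": detections})


if __name__ == "__main__":
    process_frame(b"jpeg-bytes", high_power=True, send=print)
    process_frame(b"jpeg-bytes", high_power=False, send=print)
```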

In accordance with any of the preceding aspects, the operation mode is selectable from at least the first operation mode, the second operation mode and a third operation mode that is a super-low power mode of operation.

In accordance with any of the preceding aspects, when the audiovisual system is in the low power operation mode or in the high power operation mode, the audiovisual capture device is configured to capture the at least one of the images or audio, and wherein when the audiovisual system is in the super-low power mode of operation, the audiovisual capture device is configured to not capture any images or audio.

In accordance with any of the preceding aspects, when the audiovisual system is in the high power operation mode, the audiovisual system consumes more power from the off-grid power supply than during the low power mode of operation.

In accordance with any of the preceding aspects, the processing entity is configured for responding to detection of a sleep command to force the audiovisual system to operate in the super-low power mode of operation.

In accordance with any of the preceding aspects, the processing entity is configured for responding to contents of the captured images or audio to switch from the high power operation mode to the low power operation mode.

In accordance with any of the preceding aspects, the operation mode is selectable from at least the first operation mode, the second operation mode and a third operation mode that is a covert mode of operation.
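
By way of a non-limiting illustration, a mode set extended with a super-low power mode and a covert mode, together with handling of a sleep command, could be sketched as follows; the mode names, command strings and mapping are assumptions made for the example.

```python
# Illustrative sketch only: an extended mode set and a simple command handler.
from enum import Enum, auto


class Mode(Enum):
    SUPER_LOW_POWER = auto()  # capture disabled entirely
    LOW_POWER = auto()
    HIGH_POWER = auto()
    COVERT = auto()           # e.g., capture while keeping outward activity minimal


def handle_command(command: str, current: Mode) -> Mode:
    """Map an external command or content-based decision to a target mode."""
    if command == "sleep":
        return Mode.SUPER_LOW_POWER        # a sleep command forces the super-low power mode
    if command == "no_activity_in_scene":
        # Content of captured images/audio indicates nothing of interest.
        return Mode.LOW_POWER if current is Mode.HIGH_POWER else current
    return current


if __name__ == "__main__":
    print(handle_command("sleep", Mode.HIGH_POWER))
    print(handle_command("no_activity_in_scene", Mode.HIGH_POWER))
```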

According to a second example aspect, there is provided a computer-implemented method for execution by a processing system operatively coupled to an audiovisual system powered by an off-grid power supply. The method comprises: detecting a change to the audiovisual system; and forcing the audiovisual system to operate in a second operation mode in response to the detected change, wherein an audiovisual capture device of the audiovisual system is operable to capture at least one of images or audio in accordance with audiovisual capture parameters associated with an operation mode selectable from at least a first operation mode and the second operation mode, one of the first and second operation modes being a low power mode of operation and the other of the first and second operation modes being a high power mode of operation.

In accordance with any of the preceding aspects, detecting the change to the audiovisual system comprises detecting movement of the audiovisual system, and wherein the second operation mode is the high power operation mode.

In accordance with any of the preceding aspects, the method further comprises receiving a relocation schedule for the audiovisual system and determining if a current time corresponds to a time period when the audiovisual system is scheduled for relocation by comparing a current time against the relocation schedule.

In accordance with any of the preceding aspects, the determining comprises determining that the current time corresponds to a time period when the audiovisual system is scheduled for relocation, and wherein the second operation mode is the low power operation mode.

In accordance with any of the preceding aspects, the determining comprises determining that the current time corresponds to a time period when the audiovisual system is not scheduled for relocation, and wherein the second operation mode is the high power operation mode.

In accordance with any of the preceding aspects, the method further comprises recording the operation mode of the audiovisual capture device prior to the forcing, determining that a condition is met after the forcing, and causing the audiovisual capture device to switch back to the first operation mode if the condition is met and if the recorded operation mode is the first operation mode.

In accordance with any of the preceding aspects, the audiovisual system is communicatively coupled to a server over a data network, and wherein determining that a condition is met comprises determining that a relinquish signal has been received from the server.

In accordance with any of the preceding aspects, the method further comprises carrying out threat assessment processing on the at least one of captured images or audio and wherein determining that a condition is met comprises determining that the threat assessment processing concludes that there is no threat.

In accordance with any of the preceding aspects, the method further comprises recording the operation mode of the audiovisual capture device prior to the forcing, determining that a condition is met after the forcing, and causing the audiovisual capture device to continue operating in the second operation mode if the condition is met and if the recorded operation mode is the second operation mode.

In accordance with any of the preceding aspects, detecting the change to the audiovisual system comprises detecting that a charge level of the off-grid power supply has dropped below a threshold level, and wherein the second operation mode is the low power operation mode.

In accordance with any of the preceding aspects, detecting the change to the audiovisual system comprises detecting that a charge level of the off-grid power supply has risen above a threshold level, and wherein the second operation mode is the high power operation mode.

In accordance with any of the preceding aspects, the off-grid power supply comprises a solar panel and a battery, and wherein detecting the change to the audiovisual system comprises detecting that the solar panel has become disconnected from the battery.

In accordance with any of the preceding aspects, the audiovisual system further comprising an acceleration sensor or a position sensor or a temperature sensor or a motion sensor or a tamper detection sensor, wherein detecting the change to the audiovisual system comprises further detecting movement of the audiovisual system from readings of the acceleration sensor, displacement of the audiovisual system from readings of the position sensor or a change of environment of the audiovisual system from readings of the temperature sensor or attempted tampering of the audiovisual system from readings of the motion sensor or the tamper detection sensor.

In accordance with any of the preceding aspects, the second operation mode is the high power operation mode and wherein when in the second mode, the audiovisual system is configured to record full motion video.

In accordance with any of the preceding aspects, the audiovisual system further comprising a wireless network interface, wherein in the second operation mode, the method further comprises sending full motion video to a server via the wireless network interface.

In accordance with any of the preceding aspects, the positioning system is configured to record a position of the audiovisual system over time and to send the position of the audiovisual system via the wireless network interface.

In accordance with any of the preceding aspects, when the audiovisual system is in the low power operation mode, the audiovisual capture device is configured to capture images at a first frame rate and wherein when the audiovisual system is in the high power operation mode, the audiovisual capture device is configured to capture images at a second frame rate greater than the first frame rate.

In accordance with any of the preceding aspects, when the audiovisual system is in the low power operation mode, the audiovisual capture device captures images at a first resolution and wherein when the audiovisual system is in the high power operation mode, the audiovisual capture device captures images at a second resolution greater than the first resolution.

In accordance with any of the preceding aspects, when the audiovisual system is in the low power operation mode, the audiovisual capture device captures audio at a first sampling rate and wherein when the audiovisual system is in the high power operation mode, the audiovisual capture device captures audio at a second sampling rate greater than the first sampling rate.

In accordance with any of the preceding aspects, the audiovisual system further comprising a wireless network interface, wherein when the audiovisual system is in the low power operation mode, the at least one of the captured images or audio are buffered in memory and asynchronously uploaded to a server via the wireless network interface and wherein when the audiovisual system is in the high power operation mode, the at least one of the captured images or audio are streamed in real-time or near-real-time to the server via the wireless network interface.

In accordance with any of the preceding aspects, the audiovisual system further comprising a wireless network interface, wherein the processing entity is configured for carrying out first-level processing of the at least one of the captured images or audio to create a result, wherein when the audiovisual system is in the high power operation mode, the processing entity is configured for performing second-level processing on the result of the first-level processing and for sending a result of the second-level processing to a server via the wireless network interface, and wherein when the audiovisual system is in the low power operation mode, the processing entity is configured for sending the result of the first-level processing to the server via the wireless network interface without performing the second-level processing on the result of the first-level processing.

In accordance with any of the preceding aspects, the first-level processing comprises selection of a subset of images or audio from the at least one of the captured images or audio and wherein the second-level processing comprises detection and/or recognition of objects in the subset of images or audio.

In accordance with any of the preceding aspects, the first-level processing comprises object detection and wherein the second-level processing comprises object recognition.

In accordance with any of the preceding aspects, the object recognition comprises character recognition of a license plate.

In accordance with any of the preceding aspects, the operation mode is selectable from at least the first operation mode, the second operation mode and a third operation mode that is a super-low power mode of operation.

In accordance with any of the preceding aspects, when the audiovisual system is in the low power operation mode or in the high power operation mode, the audiovisual capture device is configured to capture the at least one of the images or audio, and wherein when the audiovisual system is in the super-low power mode of operation, the audiovisual capture device is configured to not capture any images or audio.

In accordance with any of the preceding aspects, when the audiovisual system is in the high power operation mode, the audiovisual system consumes more power from the off-grid power supply than during the low power mode of operation.

In accordance with any of the preceding aspects, the method further comprises responding to detection of a sleep command to force the audiovisual system to operate in the super-low power mode of operation.

In accordance with any of the preceding aspects, the method further comprises responding to contents of the captured images or audio to switch from the high power operation mode to the low power operation mode.

In accordance with any of the preceding aspects, the operation mode is selectable from at least the first operation mode, the second operation mode and a third operation mode that is a covert mode of operation.

According to a third example aspect, there is a computer readable storage medium having stored therein instructions, which when executed by a processing entity of an audiovisual system powered by an off-grid power supply, cause the audiovisual system to: detect a change to the audiovisual system; and force the audiovisual system to operate in a second operation mode in response to the detected change, wherein an audiovisual capture device of the audiovisual system is operable to capture at least one of images or audio in accordance with audiovisual capture parameters associated with an operation mode selectable from at least a first operation mode and the second operation mode, one of the first and second operation modes being a low power mode of operation and the other of the first and second operation modes being a high power mode of operation.

In accordance with any of the preceding aspects, detecting the change to the audiovisual system comprises detecting movement of the audiovisual system, and wherein the second operation mode is the high power operation mode.

In accordance with any of the preceding aspects, the processing entity is caused to: receive a relocation schedule for the audiovisual system and determine if a current time corresponds to a time period when the audiovisual system is scheduled for relocation by comparing a current time against the relocation schedule.

In accordance with any of the preceding aspects, to determine comprises determining that the current time corresponds to a time period when the audiovisual system is scheduled for relocation, and wherein the second operation mode is the low power operation mode.

In accordance with any of the preceding aspects, to determine comprises determining that the current time corresponds to a time period when the audiovisual system is not scheduled for relocation, and wherein the second operation mode is the high power operation mode.

In accordance with any of the preceding aspects, the processing entity is caused to: record the operation mode of the audiovisual capture device prior to the forcing, determine that a condition is met after the forcing, and cause the audiovisual capture device to switch back to the first operation mode if the condition is met and if the recorded operation mode is the first operation mode.

In accordance with any of the preceding aspects, the audiovisual system is communicatively coupled to a server over a data network, and wherein to determine that a condition is met comprises determining that a relinquish signal has been received from the server.

In accordance with any of the preceding aspects, the processing entity is caused to: carry out threat assessment processing on the at least one of captured images or audio and wherein to determine that a condition is met comprises determining that the threat assessment processing concludes that there is no threat.

In accordance with any of the preceding aspects, the processing entity is caused to: record the operation mode of the audiovisual capture device prior to the forcing, determine that a condition is met after the forcing, and cause the audiovisual capture device to continue operating in the second operation mode if the condition is met and if the recorded operation mode is the second operation mode.

In accordance with any of the preceding aspects, to detect the change to the audiovisual system comprises detecting that a charge level of the off-grid power supply has dropped below a threshold level, and wherein the second operation mode is the low power operation mode.

In accordance with any of the preceding aspects, to detect the change to the audiovisual system comprises detecting that a charge level of the off-grid power supply has risen above a threshold level, and wherein the second operation mode is the high power operation mode.

In accordance with any of the preceding aspects, the off-grid power supply comprises a solar panel and a battery, and wherein to detect the change to the audiovisual system comprises detecting that the solar panel has become disconnected from the battery.

In accordance with any of the preceding aspects, the audiovisual system further comprising an acceleration sensor or a position sensor or a temperature sensor or a motion sensor or a tamper detection sensor, wherein to detect the change to the audiovisual system comprises further detecting movement of the audiovisual system from readings of the acceleration sensor, displacement of the audiovisual system from readings of the position sensor or a change of environment of the audiovisual system from readings of the temperature sensor or attempted tampering of the audiovisual system from readings of the motion sensor or the tamper detection sensor.

In accordance with any of the preceding aspects, the second operation mode is the high power operation mode and wherein when in the second mode, the audiovisual system is configured to record full motion video.

In accordance with any of the preceding aspects, the audiovisual system further comprising a wireless network interface, wherein in the second operation mode, the audiovisual system is configured to send the full motion video to a server via the wireless network interface.

In accordance with any of the preceding aspects, the positioning system is configured to record a position of the audiovisual system over time and to send the position of the audiovisual system via the wireless network interface.

In accordance with any of the preceding aspects, when the audiovisual system is in the low power operation mode, the audiovisual capture device is configured to capture images at a first frame rate and wherein when the audiovisual system is in the high power operation mode, the audiovisual capture device is configured to capture images at a second frame rate greater than the first frame rate.

In accordance with any of the preceding aspects, when the audiovisual system is in the low power operation mode, the audiovisual capture device captures images at a first resolution and wherein when the audiovisual system is in the high power operation mode, the audiovisual capture device captures images at a second resolution greater than the first resolution.

In accordance with any of the preceding aspects, when the audiovisual system is in the low power operation mode, the audiovisual capture device captures audio at a first sampling rate and wherein when the audiovisual system is in the high power operation mode, the audiovisual capture device captures audio at a second sampling rate greater than the first sampling rate.

In accordance with any of the preceding aspects, the audiovisual system further comprising a wireless network interface, wherein when the audiovisual system is in the low power operation mode, the at least one of the captured images or audio are buffered in memory and asynchronously uploaded to a server via the wireless network interface and wherein when the audiovisual system is in the high power operation mode, the at least one of the captured images or audio are streamed in real-time or near-real-time to the server via the wireless network interface.

In accordance with any of the preceding aspects, the audiovisual system further comprising a wireless network interface, wherein the processing entity is configured for carrying out first-level processing of the at least one of the captured images or audio to create a result, wherein when the audiovisual system is in the high power operation mode, the processing entity is configured for performing second-level processing on the result of the first-level processing and for sending a result of the second-level processing to a server via the wireless network interface, and wherein when the audiovisual system is in the low power operation mode, the processing entity is configured for sending the result of the first-level processing to the server via the wireless network interface without performing the second-level processing on the result of the first-level processing.

In accordance with any of the preceding aspects, the first-level processing comprises selection of a subset of images or audio from the at least one of the captured images or audio and wherein the second-level processing comprises detection and/or recognition of objects in the subset of images or audio.

In accordance with any of the preceding aspects, the first-level processing comprises object detection and wherein the second-level processing comprises object recognition.

In accordance with any of the preceding aspects, the object recognition comprises character recognition of a license plate.

In accordance with any of the preceding aspects, the operation mode is selectable from at least the first operation mode, the second operation mode and a third operation mode that is a super-low power mode of operation.

In accordance with any of the preceding aspects, when the audiovisual system is in the low power operation mode or in the high power operation mode, the audiovisual capture device is configured to capture the at least one of the images or audio, and wherein when the audiovisual system is in the super-low power mode of operation, the audiovisual capture device is configured to not capture any images or audio.

In accordance with any of the preceding aspects, when the audiovisual system is in the high power operation mode, the audiovisual system consumes more power from the off-grid power supply than during the low power mode of operation.

In accordance with any of the preceding aspects, the processing entity is caused to: respond to detection of a sleep command to force the audiovisual system to operate in the super-low power mode of operation.

In accordance with any of the preceding aspects, the processing entity is caused to: respond to contents of the captured images or audio to switch from the high power operation mode to the low power operation mode.

In accordance with any of the preceding aspects, the operation mode is selectable from at least the first operation mode, the second operation mode and a third operation mode that is a covert mode of operation.

According to a fourth example aspect, there is provided an audiovisual system. The audiovisual system comprises an off-grid power supply for powering the audiovisual system; and an audiovisual generation system. The audiovisual generation system comprises an audiovisual capture device operatively coupled to the off-grid power supply and operable to capture at least one of images or audio in accordance with audiovisual capture parameters associated with an operation mode of the audiovisual system selectable from at least a first operation mode and a second operation mode, one of the first and second operation modes being a low power operation mode and the other of the first and second operation modes being a high power operation mode; and a processing entity operatively coupled to the image capture device and to the off-grid power supply, the processing entity configured for detecting an event based on at least one of (i) image processing of the captured images and (ii) audio processing of the captured audio and forcing the audiovisual system to operate in the second operation mode in response to the detected event.

In accordance with any of the preceding aspects, detecting the event comprises detection and/or recognition of objects or audio.

In accordance with any of the preceding aspects, detection and/or recognition of objects comprises one or more of detection and/or recognition of a face, a person, a vehicle, a building, motion, a gesture, an action or lack thereof.

In accordance with any of the preceding aspects, detection and/or recognition of objects comprises character recognition of a license plate.

In accordance with any of the preceding aspects, detection and/or recognition of audio comprises one or more of speech, sound, or a gunshot.
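
By way of a non-limiting illustration, content-based event detection of the kind described in this aspect could be sketched as follows; classify_image and classify_audio are stub functions standing in for real detectors and recognizers, and the label set is an assumption made for the example.

```python
# Illustrative sketch only: force the high power mode when something of interest
# appears in the captured images or audio.
from typing import List


def classify_image(frame: bytes) -> List[str]:
    """Stub: returns labels such as 'person', 'vehicle' or 'face' for illustration."""
    return ["person"] if frame else []


def classify_audio(samples: bytes) -> List[str]:
    """Stub: returns labels such as 'speech' or 'gunshot' for illustration."""
    return ["gunshot"] if samples else []


EVENTS_OF_INTEREST = {"person", "vehicle", "face", "gunshot", "speech"}


def detect_event(frame: bytes, samples: bytes) -> bool:
    labels = set(classify_image(frame)) | set(classify_audio(samples))
    return bool(labels & EVENTS_OF_INTEREST)


if __name__ == "__main__":
    if detect_event(b"jpeg-bytes", b"pcm-bytes"):
        print("Event detected: force high power operation mode")
```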

In accordance with any of the preceding aspects, detecting the event comprises detection of manipulation of the audiovisual system.

In accordance with any of the preceding aspects, the processing entity is configured for receiving a relocation schedule for the audiovisual system and determining if a current time corresponds to a time period when the audiovisual system is scheduled for relocation by comparing a current time against the relocation schedule.

In accordance with any of the preceding aspects, the determining comprises determining that the current time corresponds to a time period when the audiovisual system is scheduled for relocation, and wherein the second operation mode is the low power operation mode.

In accordance with any of the preceding aspects, the determining comprises determining that the current time corresponds to a time period when the audiovisual system is not scheduled for relocation, and wherein the second operation mode is the high power operation mode.

In accordance with any of the preceding aspects, the processing entity is further configured for recording the operation mode of the audiovisual capture device prior to the forcing, determining that a condition is met after the forcing, and causing the audiovisual capture device to switch back to the first operation mode if the condition is met and if the recorded operation mode is the first operation mode.

In accordance with any of the preceding aspects, the audiovisual system is communicatively coupled to a server over a data network, and wherein determining that a condition is met comprises determining that a relinquish signal has been received from the server.

In accordance with any of the preceding aspects, the processing entity is further configured for carrying out threat assessment processing on the at least one of captured images or audio and wherein determining that a condition is met comprises determining that the threat assessment processing concludes that there is no threat.

In accordance with any of the preceding aspects, the processing entity is further configured for recording the operation mode of the audiovisual capture device prior to the forcing, determining that a condition is met after the forcing, and causing the audiovisual capture device to continue operating in the second operation mode if the condition is met and if the recorded operation mode is the second operation mode.

In accordance with any of the preceding aspects, detecting the event comprises detecting that a charge level of the off-grid power supply has dropped below a threshold level, and wherein the second operation mode is the low power operation mode.

In accordance with any of the preceding aspects, detecting the event comprises detecting that a charge level of the off-grid power supply has risen above a threshold level, and wherein the second operation mode is the high power operation mode.

In accordance with any of the preceding aspects, the off-grid power supply comprises a solar panel and a battery, and wherein detecting the event comprises detecting that the solar panel has become disconnected from the battery.

In accordance with any of the preceding aspects, the audiovisual system further comprising an acceleration sensor or a position sensor or a temperature sensor or a motion sensor or a tamper detection sensor, wherein detecting the event comprises further detecting the event from readings of the acceleration sensor, readings of the position sensor, readings of the temperature sensor, readings of the motion sensor or readings of the tamper detection sensor.

In accordance with any of the preceding aspects, the second operation mode is the high power operation mode and wherein when in the second mode, the audiovisual system is configured to record full motion video.

In accordance with any of the preceding aspects, the audiovisual system further comprising a wireless network interface, wherein in the second operation mode, the audiovisual system is configured to send the full motion video to a server via the wireless network interface.

In accordance with any of the preceding aspects, the positioning system is configured to record a position of the audiovisual system over time and to send the position of the audiovisual system via the wireless network interface.

In accordance with any of the preceding aspects, when the audiovisual system is in the low power operation mode, the audiovisual capture device is configured to capture images at a first frame rate and wherein when the audiovisual system is in the high power operation mode, the audiovisual capture device is configured to capture images at a second frame rate greater than the first frame rate.

In accordance with any of the preceding aspects, when the audiovisual system is in the low power operation mode, the audiovisual capture device captures images at a first resolution and wherein when the audiovisual system is in the high power operation mode, the audiovisual capture device captures images at a second resolution greater than the first resolution.

In accordance with any of the preceding aspects, when the audiovisual system is in the low power operation mode, the audiovisual capture device captures audio at a first sampling rate and wherein when the audiovisual system is in the high power operation mode, the audiovisual capture device captures audio at a second sampling rate greater than the first sampling rate.

In accordance with any of the preceding aspects, the audiovisual system further comprising a wireless network interface, wherein when the audiovisual system is in the low power operation mode, the at least one of the captured images or audio are buffered in memory and asynchronously uploaded to a server via the wireless network interface and wherein when the audiovisual system is in the high power operation mode, the at least one of the captured images or audio are streamed in real-time or near-real- time to the server via the wireless network interface.

In accordance with any of the preceding aspects, the audiovisual system further comprising a wireless network interface, wherein the processing entity is configured for carrying out first-level processing of the at least one of the captured images or audio to create a result, wherein when the audiovisual system is in the high power operation mode, the processing entity is configured for performing second-level processing on the result of the first-level processing and for sending a result of the second-level processing to a server via the wireless network interface, and wherein when the audiovisual system is in the low power operation mode, the processing entity is configured for sending the result of the first-level processing to the server via the wireless network interface without performing the second-level processing on the result of the first-level processing.

In accordance with any of the preceding aspects, the first-level processing comprises selection of a subset of images or audio from the at least one of the captured images or audio and wherein the second-level processing comprises detection and/or recognition of objects in the subset of images or audio.

In accordance with any of the preceding aspects, the first-level processing comprises object detection and wherein the second-level processing comprises object recognition.

In accordance with any of the preceding aspects, the object recognition comprises character recognition of a license plate.

In accordance with any of the preceding aspects, the operation mode is selectable from at least the first operation mode, the second operation mode and a third operation mode that is a super-low power mode of operation.

In accordance with any of the preceding aspects, when the audiovisual system is in the low power operation mode or in the high power operation mode, the audiovisual capture device is configured to capture the at least one of the images or audio, and wherein when the audiovisual system is in the super-low power mode of operation, the audiovisual capture device is configured to not capture any images or audio.

In accordance with any of the preceding aspects, when the audiovisual system is in the high power operation mode, the audiovisual system consumes more power from the off-grid power supply than during the low power mode of operation.

In accordance with any of the preceding aspects, the processing entity is configured for responding to detection of a sleep command to force the audiovisual system to operate in the super-low power mode of operation.

In accordance with any of the preceding aspects, the processing entity is configured for responding to contents of the captured images or audio to switch from the high power operation mode to the low power operation mode.

In accordance with any of the preceding aspects, the operation mode is selectable from at least the first operation mode, the second operation mode and a third operation mode that is a covert mode of operation.

In accordance with a sixth example aspect, there is provided a computer-implemented method for execution by a processing system operatively coupled to an audiovisual system powered by an off-grid power supply. The method comprises: detecting an event based on at least one of (i) image processing of images captured by the audiovisual system and (ii) audio processing of audio captured by the audiovisual system, and forcing the audiovisual system to operate in a second operation mode in response to the detected event, wherein an audiovisual capture device of the audiovisual system is operable to capture at least one of images or audio in accordance with audiovisual capture parameters associated with an operation mode selectable from at least a first operation mode and the second operation mode, one of the first and second operation modes being a low power mode of operation and the other of the first and second operation modes being a high power mode of operation.

In accordance with any of the preceding aspects, detecting the event comprises detection and/or recognition of objects or audio.

In accordance with any of the preceding aspects, detection and/or recognition of objects comprises one or more of detection and/or recognition of a face, a person, a vehicle, a building, motion, a gesture, an action or lack thereof.

In accordance with any of the preceding aspects, detection and/or recognition of objects comprises character recognition of a license plate.

In accordance with any of the preceding aspects, detection and/or recognition of audio comprises one or more of speech, sound, or a gunshot.

In accordance with any of the preceding aspects, detecting the event comprises detection of manipulation of the audiovisual system.

In accordance with any of the preceding aspects, the method further comprises receiving a relocation schedule for the audiovisual system and determining if a current time corresponds to a time period when the audiovisual system is scheduled for relocation by comparing a current time against the relocation schedule.

In accordance with any of the preceding aspects, the determining comprises determining that the current time corresponds to a time period when the audiovisual system is scheduled for relocation, and wherein the second operation mode is the low power operation mode.

In accordance with any of the preceding aspects, the determining comprises determining that the current time corresponds to a time period when the audiovisual system is not scheduled for relocation, and wherein the second operation mode is the high power operation mode.

In accordance with any of the preceding aspects, the method further comprises recording the operation mode of the audiovisual capture device prior to the forcing, determining that a condition is met after the forcing, and causing the audiovisual capture device to switch back to the first operation mode if the condition is met and if the recorded operation mode is the first operation mode.

In accordance with any of the preceding aspects, the audiovisual system is communicatively coupled to a server over a data network, and wherein determining that a condition is met comprises determining that a relinquish signal has been received from the server.

In accordance with any of the preceding aspects, the method further comprises carrying out threat assessment processing on the at least one of captured images or audio and wherein determining that a condition is met comprises determining that the threat assessment processing concludes that there is no threat.

In accordance with any of the preceding aspects, the method further comprises recording the operation mode of the audiovisual capture device prior to the forcing, determining that a condition is met after the forcing, and causing the audiovisual capture device to continue operating in the second operation mode if the condition is met and if the recorded operation mode is the second operation mode.

In accordance with any of the preceding aspects, detecting the event comprises detecting that a charge level of the off-grid power supply has dropped below a threshold level, and wherein the second operation mode is the low power operation mode.

In accordance with any of the preceding aspects, detecting the event comprises detecting that a charge level of the off-grid power supply has risen above a threshold level, and wherein the second operation mode is the high power operation mode.

In accordance with any of the preceding aspects, the off-grid power supply comprises a solar panel and a battery, and wherein detecting the event comprises detecting that the solar panel has become disconnected from the battery.

In accordance with any of the preceding aspects, the audiovisual system further comprising an acceleration sensor or a position sensor or a temperature sensor or a motion sensor or a tamper detection sensor, wherein detecting the event comprises further detecting the event from readings of the acceleration sensor, readings of the position sensor, readings of the temperature sensor, readings of the motion sensor or readings of the tamper detection sensor.

In accordance with any of the preceding aspects, the second operation mode is the high power operation mode and wherein when in the second mode, the audiovisual system is configured to record full motion video.

In accordance with any of the preceding aspects, the audiovisual system further comprising a wireless network interface, wherein in the second operation mode, the audiovisual system is configured to send the full motion video to a server via the wireless network interface.

In accordance with any of the preceding aspects, the positioning system is configured to record a position of the audiovisual system over time and to send the position of the audiovisual system via the wireless network interface.

In accordance with any of the preceding aspects, when the audiovisual system is in the low power operation mode, the audiovisual capture device is configured to capture images at a first frame rate and wherein when the audiovisual system is in the high power operation mode, the audiovisual capture device is configured to capture images at a second frame rate greater than the first frame rate.

In accordance with any of the preceding aspects, when the audiovisual system is in the low power operation mode, the audiovisual capture device captures images at a first resolution and wherein when the audiovisual system is in the high power operation mode, the audiovisual capture device captures images at a second resolution greater than the first resolution.

In accordance with any of the preceding aspects, when the audiovisual system is in the low power operation mode, the audiovisual capture device captures audio at a first sampling rate and wherein when the audiovisual system is in the high power operation mode, the audiovisual capture device captures audio at a second sampling rate greater than the first sampling rate.

In accordance with any of the preceding aspects, the audiovisual system further comprising a wireless network interface, wherein when the audiovisual system is in the low power operation mode, the at least one of the captured images or audio are buffered in memory and asynchronously uploaded to a server via the wireless network interface and wherein when the audiovisual system is in the high power operation mode, the at least one of the captured images or audio are streamed in real-time or near-real-time to the server via the wireless network interface.

In accordance with any of the preceding aspects, the audiovisual system further comprising a wireless network interface, wherein the processing entity is configured for carrying out first-level processing of the at least one of the captured images or audio to create a result, wherein when the audiovisual system is in the high power operation mode, the processing entity is configured for performing second-level processing on the result of the first-level processing and for sending a result of the second-level processing to a server via the wireless network interface, and wherein when the audiovisual system is in the low power operation mode, the processing entity is configured for sending the result of the first-level processing to the server via the wireless network interface without performing the second-level processing on the result of the first-level processing.

In accordance with any of the preceding aspects, the first-level processing comprises selection of a subset of images or audio from the at least one of the captured images or audio and wherein the second-level processing comprises detection and/or recognition of objects in the subset of images or audio.

In accordance with any of the preceding aspects, the first-level processing comprises object detection and wherein the second-level processing comprises object recognition.

In accordance with any of the preceding aspects, the object recognition comprises character recognition of a license plate.

In accordance with any of the preceding aspects, the operation mode is selectable from at least the first operation mode, the second operation mode and a third operation mode that is a super-low power mode of operation.

In accordance with any of the preceding aspects, when the audiovisual system is in the low power operation mode or in the high power operation mode, the audiovisual capture device is configured to capture the at least one of the images or audio, and wherein when the audiovisual system is in the super-low power mode of operation, the audiovisual capture device is configured to not capture any images or audio.

In accordance with any of the preceding aspects, when the audiovisual system is in the high power operation mode, the audiovisual system consumes more power from the off-grid power supply than during the low power mode of operation.

In accordance with any of the preceding aspects, the method further comprises responding to detection of a sleep command to force the audiovisual system to operate in the super-low power mode of operation.

In accordance with any of the preceding aspects, the method further comprises responding to contents of the captured images or audio to switch from the high power operation mode to the low power operation mode.

In accordance with any of the preceding aspects, the operation mode is selectable from at least the first operation mode, the second operation mode and a third operation mode that is a covert mode of operation.

In accordance with a seventh example aspect, there is provided a computer readable storage medium having stored therein instructions, which when executed by a processing entity of an audiovisual system powered by an off-grid power supply, cause the audiovisual system to: detect an event based on at least one of (i) image processing of images captured by the audiovisual system and (ii) audio processing of audio captured by the audiovisual system, and force the audiovisual system to operate in a second operation mode in response to the detected event, wherein an audiovisual capture device of the audiovisual system is operable to capture at least one of images or audio in accordance with audiovisual capture parameters associated with an operation mode selectable from at least a first operation mode and the second operation mode, one of the first and second operation modes being a low power mode of operation and the other of the first and second operation modes being a high power mode of operation.

In accordance with any of the preceding aspects, detecting the event comprises detection and/or recognition of objects or audio.

In accordance with any of the preceding aspects, detection and/or recognition of objects comprises one or more of detection and/or recognition of a face, a person, a vehicle, a building, motion, a gesture, an action or lack thereof.

In accordance with any of the preceding aspects, detection and/or recognition of objects comprises character recognition of a license plate.

In accordance with any of the preceding aspects, detection and/or recognition of audio comprises one or more of speech, sound, or a gunshot.

In accordance with any of the preceding aspects, detecting the event comprises detection of manipulation of the audiovisual system.

In accordance with any of the preceding aspects, the processing entity is caused to: receive a relocation schedule for the audiovisual system and determine if a current time corresponds to a time period when the audiovisual system is scheduled for relocation by comparing a current time against the relocation schedule.

In accordance with any of the preceding aspects, the determining comprises determining that the current time corresponds to a time period when the audiovisual system is scheduled for relocation, and wherein the second operation mode is the low power operation mode.

In accordance with any of the preceding aspects, the determining comprises determining that the current time corresponds to a time period when the audiovisual system is not scheduled for relocation, and wherein the second operation mode is the high power operation mode.

In accordance with any of the preceding aspects, the processing entity is further caused to: record the operation mode of the audiovisual capture device prior to the forcing, determine that a condition is met after the forcing, and cause the audiovisual capture device to switch back to the first operation mode if the condition is met and if the recorded operation mode is the first operation mode.

In accordance with any of the preceding aspects, the audiovisual system is communicatively coupled to a server over a data network, and wherein determining that a condition is met comprises determining that a relinquish signal has been received from the server.

In accordance with any of the preceding aspects, the processing entity is further caused to: carry out threat assessment processing on the at least one of captured images or audio and wherein determining that a condition is met comprises determining that the threat assessment processing concludes that there is no threat.

In accordance with any of the preceding aspects, the processing entity is further caused to: record the operation mode of the audiovisual capture device prior to the forcing, determine that a condition is met after the forcing, and cause the audiovisual capture device to continue operating in the second operation mode if the condition is met and if the recorded operation mode is the second operation mode.

In accordance with any of the preceding aspects, detecting the event comprises detecting that a charge level of the off-grid power supply has dropped below a threshold level, and wherein the second operation mode is the low power operation mode.

In accordance with any of the preceding aspects, detecting the event comprises detecting that a charge level of the off-grid power supply has risen above a threshold level, and wherein the second operation mode is the high power operation mode.

In accordance with any of the preceding aspects, the off-grid power supply comprises a solar panel and a battery, and wherein detecting the event comprises detecting that the solar panel has become disconnected from the battery.

In accordance with any of the preceding aspects, the audiovisual system further comprising an acceleration sensor or a position sensor or a temperature sensor or a motion sensor or a tamper detection sensor, wherein detecting the event comprises further detecting the event from readings of the acceleration sensor, readings of the position sensor, readings of the temperature sensor, readings of the motion sensor or readings of the tamper detection sensor.

In accordance with any of the preceding aspects, the second operation mode is the high power operation mode and wherein when in the second mode, the audiovisual system is configured to record full motion video.

In accordance with any of the preceding aspects, the audiovisual system further comprising a wireless network interface, wherein in the second operation mode, the audiovisual system is configured to send the full motion video to a server via the wireless network interface.

In accordance with any of the preceding aspects, the positioning system is configured to record a position of the audiovisual system over time and to send the position of the audiovisual system via the wireless network interface.

In accordance with any of the preceding aspects, when the audiovisual system is in the low power operation mode, the audiovisual capture device is configured to capture images at a first frame rate and wherein when the audiovisual system is in the high power operation mode, the audiovisual capture device is configured to capture images at a second frame rate greater than the first frame rate.

In accordance with any of the preceding aspects, when the audiovisual system is in the low power operation mode, the audiovisual capture device captures images at a first resolution and wherein when the audiovisual system is in the high power operation mode, the audiovisual capture device captures images at a second resolution greater than the first resolution.

In accordance with any of the preceding aspects, when the audiovisual system is in the low power operation mode, the audiovisual capture device captures audio at a first sampling rate and wherein when the audiovisual system is in the high power operation mode, the audiovisual capture device captures audio at a second sampling rate greater than the first sampling rate.

In accordance with any of the preceding aspects, the audiovisual system further comprising a wireless network interface, wherein when the audiovisual system is in the low power operation mode, the at least one of the captured images or audio are buffered in memory and asynchronously uploaded to a server via the wireless network interface and wherein when the audiovisual system is in the high power operation mode, the at least one of the captured images or audio are streamed in real-time or near-real-time to the server via the wireless network interface.

In accordance with any of the preceding aspects, the audiovisual system further comprising a wireless network interface, wherein the processing entity is further caused to: carry out first-level processing of the at least one of the captured images or audio to create a result, wherein when the audiovisual system is in the high power operation mode, the processing entity is caused to: perform second-level processing on the result of the first-level processing and send a result of the second-level processing to a server via the wireless network interface, and wherein when the audiovisual system is in the low power operation mode, the processing entity is caused to: send the result of the first-level processing to the server via the wireless network interface without performing the second-level processing on the result of the first-level processing.

In accordance with any of the preceding aspects, the first-level processing comprises selection of a subset of images or audio from the at least one of the captured images or audio and wherein the second-level processing comprises detection and/or recognition of objects in the subset of images or audio.

In accordance with any of the preceding aspects, the first-level processing comprises object detection and wherein the second-level processing comprises object recognition.

In accordance with any of the preceding aspects, the object recognition comprises character recognition of a license plate.

In accordance with any of the preceding aspects, the operation mode is selectable from at least the first operation mode, the second operation mode and a third operation mode that is a super-low power mode of operation.

In accordance with any of the preceding aspects, when the audiovisual system is in the low power operation mode or in the high power operation mode, the audiovisual capture device is configured to capture the at least one of the images or audio, and wherein when the audiovisual system is in the super-low power mode of operation, the audiovisual capture device is configured to not capture any images or audio.

In accordance with any of the preceding aspects, when the audiovisual system is in the high power operation mode, the audiovisual system consumes more power from the off-grid power supply than during the low power mode of operation.

In accordance with any of the preceding aspects, the processing entity is caused to: respond to detection of a sleep command to force the audiovisual system to operate in the super-low power mode of operation.

In accordance with any of the preceding aspects, the processing entity is caused to: respond to contents of the captured images or audio to switch from the high power operation mode to the low power operation mode.

In accordance with any of the preceding aspects, the operation mode is selectable from at least the first operation mode, the second operation mode and a third operation mode that is a covert mode of operation.

BRIEF DESCRIPTION OF THE DRAWINGS

Reference will now be made, by way of example, to the accompanying drawings which show example embodiments of the present application, and in which:

Fig. 1 is a schematic diagram of an example communication system in accordance with a non-limiting embodiment, involving a camera and a server.

Figs. 2A and 2B are block diagrams illustrating example components of a camera in the system of Fig. 1.

Fig. 2C is a block diagram illustrating an example audiovisual generation system of the camera of Fig. 1.

Figs. 3A to 3D are flowcharts of various non-limiting embodiments of a power management method that may be carried out by a camera control function of the camera.

Figs. 4A and 4B are flowcharts of various non-limiting embodiments of alternative power management methods that may be carried out by a camera control function of the camera.

Fig. 5 is a state diagram illustrating transitions among three (3) power modes, in accordance with a non-limiting embodiment.

Figs. 6A and 6B are flowcharts of various non-limiting embodiments of other alternative power management methods that may be carried out by a camera control function of the camera.

Fig. 7 is a block diagram illustrating example components of the server in the system of Fig. 1.

Figs. 8A and 8B are conceptual diagrams illustrating where different levels of processing can be carried out, depending on an operation mode of the camera.

In the drawings, embodiments are illustrated by way of example. It is to be expressly understood that the description and drawings are only for purposes of illustrating certain embodiments and are an aid for understanding. They are not intended to be a definition of the limits of the invention.

DESCRIPTION OF EXAMPLE EMBODIMENTS

Fig. 1 is a schematic diagram illustrating an example communication system 2 comprising an example battery-powered electronic device 12 (also referred to herein as an audiovisual system, an off-grid audiovisual system, an off-grid security camera or simply a camera). The camera 12 communicates over a wireless link 4 with a wireless access point 6, which is connected to a data network 8. A server 10 is also connected to the data network 8 and may run one or more services, including an image (which may be frames of video) and/or audio processing service (for processing images, videos and/or audio captured by and received from the camera 12) and an information providing service (such as the provision of a relocation schedule to the camera 12).

The data network 8 has an infrastructure that supports a data communication protocol, such as a datagram exchange protocol (e.g., UDP or TCP/IP). In an example embodiment, the data network 8 could be the Internet.

In some embodiments, the wireless access point 6 may be part of a radio access network (RAN) such as a cellular network. In other embodiments, the wireless access point 6 may be part of a wireless local area network (WLAN). The WLAN may comprise a wireless network which conforms to IEEE 802.11x standards (sometimes referred to as Wi-Fi®). Other configurations of the wireless access point 6 are possible in other embodiments.

The camera 12 is an audiovisual device capable of capturing and recording images, videos and/or audio 2081 and communicating with the server 10. It is to be understood that, as used herein, "images" may refer to image frames of video and that "video" may include audio.

Fig. 2A is a block diagram of example components of the camera 12. Where Fig. 2A shows a single instance of a given component, there may be multiple instances of such component in the camera 12.

The camera 12 includes a set of input devices 220. One of the input devices 220 may be an image capture device 230 configured to capture images or videos (which may include audio) in accordance with specific audiovisual capture parameters (e.g., image capture parameters). In some examples, the audiovisual capture parameters may include at least frame rate, image resolution, and number of images captured over a given time period. An example of the image capture device 230 is a charge coupled device (CCD). A flash system (not shown) may also be provided and its operation may be coordinated with operation of the image capture device 230. As such, audiovisual capture parameters may also include activation of flash and brightness of flash. It is to be understood that, in other cases, the activation of the flash may be controlled independently from the image capture device 230 such that the audiovisual capture parameters do not include activation of flash.

For example, the at least one audiovisual capture parameter may include frame rate, image resolution, number of images captured over a given time period, and activation of flash. The frame rate refers to a frequency at which consecutive images are captured. For example, a frame rate for the camera 12 may be 24 frames per second (fps). The image resolution refers to a size of an image that the camera 12 produces. The image resolution indicates image pixels of the produced image. More resolution can mean better quality. For example, if the camera 12 is a 2.0-megapixel camera, an image produced by the camera 12 may include 1600 X 1200 pixels. Thus, the image resolution of the camera 12 would be 1600 X 1200 pixels. The number of images captured over a given time period represents a total number of images captured during the given time period, which would correspond to the frame rate times the length of the given time period if the frame rate is constant, but otherwise is an independent variable if the frame rate is not constant. Activation of flash may include a status of a flash of the image capture device 230 and/or a brightness of such flash. The status of the flash indicates whether the flash is turned on or off. The brightness of the flash represents a value of brightness of light that the flash generates, ranging from a low percentage value to a maximum value (100%).
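By way of illustration only, the following Python sketch models the image capture parameters discussed above (frame rate, image resolution, flash settings) and the relationship, under a constant frame rate, between the frame rate and the number of images captured over a given time period. The class and field names are assumptions introduced for this sketch and do not form part of the described embodiments.

```python
from dataclasses import dataclass

# Minimal sketch of the audiovisual capture parameters discussed above.
# The class and field names are illustrative assumptions, not part of the patent.
@dataclass
class CaptureParameters:
    frame_rate_fps: float = 24.0          # e.g., 24 frames per second
    resolution: tuple = (1600, 1200)      # e.g., a 2.0-megapixel sensor
    flash_enabled: bool = False
    flash_brightness_pct: int = 0         # 0..100 (% of maximum brightness)

    def images_over_period(self, seconds: float) -> int:
        """Number of images captured over a period, assuming a constant frame rate."""
        return int(self.frame_rate_fps * seconds)

params = CaptureParameters()
print(params.images_over_period(10))  # 240 images in 10 seconds at 24 fps
```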

In some cases, the camera 12 may be configured such that image capture is enabled and in other cases, the camera 12 is configured such that image capture is disabled. An audiovisual activation parameter of the camera 12 is indicative of whether image capture is enabled or disabled. For example, an "image capture enabled" status of the audiovisual activation parameter is indicative of the image capture device 230 being configured such that capture of images or video (which may include audio) is enabled and an "image capture disabled" status of the audiovisual activation parameter is indicative of the image capture device 230 being configured such that capture of images or video (which may include audio) is disabled.

The input devices 220 may include an audio capture device 231, such as a microphone, as shown in Figs. 2A and 2B. The audio capture device 231 is configured to capture audio in accordance with specific audiovisual capture parameters (e.g., audio capture parameters). In some examples, the audiovisual capture parameters may include a sampling rate. As shown in Fig. 2A, the audio capture device 231 may be integrated as part of the image capture device 230 such that the image capture device 230 captures images, video, and/or audio 2081. Alternatively, as shown in Fig. 2B, the audio capture device 231 may be separate from the image capture device 230.

In some cases, the camera 12 may be configured such that audio capture is enabled and in other cases, the camera 12 is configured such that audio capture is disabled. An audiovisual activation parameter of the camera 12 is indicative of whether audio capture is enabled or disabled. For example, an "audio capture enabled" status of the audiovisual activation parameter is indicative of the audio capture device 231 (alone or as part of the image capture device 230) being configured such that capture of audio is enabled and an "audio capture disabled" status of the audiovisual activation parameter is indicative of the audio capture device 231 (alone or as part of the image capture device 230) being configured such that capture of audio is disabled.

In some cases, the camera 12 may be configured to capture images or video with audio and yet in other cases, the camera 12 may be configured to capture images or video without audio. Additionally, in some cases, the camera 12 may be configured to capture audio only (without images or video).

Other ones of the input devices 220 may be sensor systems such as an acceleration sensor system 214 (or acceleration sensing system), a position sensor system 216 (or position sensing system), a temperature sensor system 215 (or temperature sensing system), a motion sensor system 219 (or motion sensing system) and a tamper sensor system 211 (or tamper sensing system). The acceleration sensor system 214 may be configured to detect movement or motion (e.g., changes in velocity or direction) of the camera 12. Accordingly, the acceleration sensor system 214 may include an inertial measurement unit, gyroscope, accelerometer, etc. The position sensor system 216 may be configured to determine and track the position of the camera 12 in two-dimensional or three-dimensional space, including determining whether the camera 12 has been redirected (e.g., turned, tilted, etc.). Accordingly, the position sensor system 216 may include a GPS-based system or a wireless signal triangulation system. The temperature sensing system 215 may be configured to detect a temperature of the environment in which the camera 12 operates (e.g., detect a change in temperature of the environment in which the camera 12 operates). Accordingly, the temperature sensing system 215 may include a thermometer, an infrared sensor, etc. The motion sensing system 219 may be configured to detect movement or motion in an area surrounding the camera 12. Accordingly, the motion sensing system 219 may include an ultrasonic sensor, an infrared sensor, a passive infrared sensor, a microwave sensor, a photodetector, a tomographic sensor, an area reflective sensor, etc. The tamper sensing system 211 may be configured to detect tampering or attempted tampering of the camera 12 (e.g., detecting whether there was an attempt to open the camera such as opening a casing of the camera 12, detecting whether there was an attempt to physically uninstall the camera 12, detecting whether a seal of the camera 12 was breached, for example, by detecting ambient light or a change in humidity within the casing of the camera 12, detecting an attempt to turn off the camera or otherwise intentionally manipulate the camera in a manner indicative of tampering, etc.). Accordingly, the tamper sensing system 211 may include a Hall-effect sensor, a magnetic sensor, an inductive sensor, a humidity sensor, an ambient light sensor, etc.

The camera 12 also comprises a wireless transceiver 218 for exchanging data communications over the wireless link 4. The wireless transceiver 218 could include a radio-frequency antenna. The wireless transceiver 218 could be configured for wireless communication such as cellular communication or Wi-Fi communication, depending on the type of wireless access point 6 with which the camera 12 communicates. The wireless transceiver 218 may also comprise a wireless personal area network (WPAN) transceiver, such as a short-range wireless or Bluetooth® transceiver, for communicating with a computer (not shown) or other Bluetooth® enabled devices such as a smartphone. The wireless transceiver 218 can also include a near field communication (NFC) transceiver.

The image capture device 230, the acceleration sensor system 214, the position sensor system 216, the temperature sensing system 215, the motion sensing system 219 and the tamper sensing system 211 are connected to a processing system 200 of the camera 12 via an I/O interface 204.

The wireless transceiver 218 is connected to the processing system 200 via a network interface 206.

The processing system 200 may include a processing device 202, such as a central processing unit (CPU), a graphics processing unit (GPU), a tensor processing unit (TPU), a neural processing unit (NPU), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), a dedicated logic circuitry, or combinations thereof.

The processing system 200 may also include a non-transitory memory 208, which may include one or more of a volatile or non-volatile memory (e.g., a flash memory, a random-access memory (RAM), a read-only memory (ROM), an erasable programmable ROM (EPROM), an electrically erasable programmable ROM (EEPROM), a flash memory and a CD-ROM, to name a few non-limiting possibilities). In some examples, the storage unit 208 may store images and/or videos and/or audio 2081 captured by the image capture device 230 (and/or the audio capture device 231). In some embodiments, the camera 12 may carry out audiovisual processing to process the captured images and/or videos and/or audio 2081, for example including image detection such as motion detection and object detection such as license plate detection, facial recognition, and audio detection such as gunshot detection, voice detection, speech detection, environmental sound detection and so on. It is to be understood that "detection" may also involve "recognition" of the detected elements. It is also to be understood that image detection and audio detection may be carried out alone or in conjunction with one another. Audiovisual processing of the captured images and/or videos and/or audio 2081 may be carried out in order to detect occurrence of an event and determine characteristics of the event, such as the event type, direction and speed of movement (for instance, detectable using Doppler if no images are available).

Audiovisual processing may be carried out by the processing device 202 of the processing system 200. The camera 12 may be configured to process the images and/or videos and/or audio 2081 in accordance with specific audiovisual processing parameters (which may include image processing parameters and audio processing parameters). In some examples, the audiovisual processing parameters may include an image detection rate which defines the rate at which frames of images are processed for detection (e.g., an image detector rate indicative of the rate at which images are passed through an image detector, which may in some cases correspond to the frame rate), an audio detection rate which defines the rate at which audio is processed for detection (e.g., an audio detector rate indicative of the rate at which recorded audio is passed through an audio detector such as a gunshot detector), a threshold limit of acceptable false positive detections such as a threshold confidence score, a compression ratio, and so on. The compression ratio is a measurement of the relative reduction in size of a data file produced by a data compression algorithm. Thus, the size of the image files, video files, and/or the audio files associated with the images, videos, and/or audio 2081 generated by the camera 12 may be reduced in accordance with a compression ratio.
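By way of illustration only, the following Python sketch groups the audiovisual processing parameters described above and shows how a compression ratio, taken here in the common uncompressed-to-compressed sense, reduces the size of a captured frame. All names and values are assumptions introduced for the sketch.

```python
from dataclasses import dataclass

# Illustrative sketch only; names and values are assumptions.
@dataclass
class ProcessingParameters:
    image_detection_rate_fps: float = 5.0   # frames passed to the image detector per second
    audio_detection_rate_hz: float = 1.0    # audio segments passed to the audio detector per second
    confidence_threshold: float = 0.8       # detections scoring below this are discarded
    compression_ratio: float = 10.0         # uncompressed size / compressed size (assumed convention)

def compressed_size(uncompressed_bytes: int, compression_ratio: float) -> int:
    # A ratio of 10 reduces a 1,000,000-byte frame to roughly 100,000 bytes.
    return int(uncompressed_bytes / compression_ratio)

print(compressed_size(1_000_000, ProcessingParameters().compression_ratio))  # 100000
```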

With reference to Fig. 2C, the image capture device 230 and the audio capture device 231 (or the image capture device 230 including the audio capture device 231) may be collectively referred to as an audiovisual capture device 3000 of the camera 12. With continued reference to Fig. 2C, the image capture device 230, the audio capture device 231 and the processing device 202 may be collectively referred to as an audiovisual (AV) generation system 2000 of the camera 12. The AV generation system 2000 may be said to "generate" images and/or videos and/or audio 2081, which includes the capture of the images and/or videos and/or audio 2081 by the audiovisual capture device 3000 and, in some cases, at least part of the processing of the images, videos or audio by the processing device 202. The AV generation system 2000 may be said to be configured to generate (capture, or capture and at least partly process) the images and/or videos and/or audio 2081 in accordance with audiovisual generation parameters. The audiovisual generation parameters include at least one of the aforementioned audiovisual capture parameters (which include image capture parameters and audio capture parameters) and the aforementioned audiovisual processing parameters (which include image processing parameters and audio processing parameters).

In some cases, the audiovisual processing of the images and/or videos and/or audio 2081 may be carried out in whole or in part on the camera 12, depending on operational requirements and constraints, including whether there is sufficient power. In other cases, the audiovisual processing of the images and/or videos and/or audio 2081 may be carried out in whole or in part on the server 10. In the case where these functions are to be performed in whole on the server 10, the detection rate of the AV generation system 2000 may be zero such that the AV generation system 2000 is configured to generate images and/or videos and/or audio 2081 by capturing images and/or videos and/or audio 2081 without processing the captured images and/or videos and/or audio 2081.

Part of the memory 208 may be reserved for storing computer-readable instructions 210 for execution by the processing device 202, such as to carry out a camera control function 290 as will be described herein below. The computer-readable instructions 210 may also include instructions which, when executed by the processing device 202, cause the processing device 202 to carry out an operating system and other applications/functions. The memory 208 may further store data for use by the camera control function 290, such as a relocation schedule 280 and a policy 285.

There may be a bus 217 providing communication among the processing device 202, the I/O interface 204, the network interface 206 and the memory 208. The bus 217 may be any suitable bus architecture including, for example, a memory bus, a peripheral bus or a video bus.

The camera 12 further includes a replenishable power supply such as a rechargeable battery 212 connected to a solar panel 224. The solar panel 224 is coupled to the battery 212 such that the battery 212 can be replenished / recharged using energy from the sun as converted into electricity by the solar panel 224. In a non-limiting example, the solar panel 224 may include a 10" x 6" panel producing 5V at 1A. Manufacturers of such solar panels include Viewzone, Eufy, Lorex, etc. The solar panel 224 may include an array of photoelectric cells.

The battery 212 provides power to the processing system 200, the I/O interface 204, the network interface 206, the input devices 220 and the output devices 222. In addition, the battery 212 provides a signal to the processing system 200 conveying the charge level of the battery 212, i.e., the battery charge level. The battery charge level may be expressed as a percentage, a number from 1 to 10, a binary value selected from "low" and "high", or in accordance with any other suitable scale.

Additional components may be provided. For example, the camera 12 may include an output device 222 such as a display and/or a visual or audible alarm, which may also be connected to and controlled by the processing system 200.

The camera 12 may additionally communicate with a computer or other user device over a physical link such as a data port (e.g., USB port), which can occur during device setup or diagnostics testing, for example.

In operation, the camera control function 290 causes images, videos, and/or audio 2081 to be captured by the image capture device 230 (according to certain specific image capture parameters) and sent in the form of datagrams over the wireless link 4 via the network interface 206 and the wireless transceiver 218. Proper addressing of the datagrams can allow them to be routed by the access point 6 and the network 8 to the server 10. Transmission of the images, videos, and/or audio 2081 can be carried out in accordance with certain specific transmission parameters. The transmission parameters may include at least one of a data transmission setting indicative of whether real-time data transmission to the server 10 is enabled, a data consumption setting indicative of whether wireless data consumption is enabled, a threshold limit for an amount of data consumed over a given time period, bandwidth, update rate (latency), duration of transmission (or transmission duty cycle), modulation scheme and data rate. When the camera 12 is initially set up, the transmission parameters may be set to respective default values.

In some cases, the camera 12 may be configured to transmit generated images or audio to the server 10 in near real-time or real time. In other cases, the camera 12 may be configured to store the generated images or audio in the memory 208. In such cases, the camera 12 may provide an indication to the server 10 that images or audio have been stored in the memory 208 such that the server 10 may retrieve the stored images or audio as needed. Thus, a data transmission setting may be indicative of whether real-time data transmission to the server 10 is enabled. In some cases, the camera 12 may be configured to transmit generated images or audio to the server 10 by consuming wireless data. For instance, the camera 12 may be configured to transmit generated images or audio to the server 10 by consuming wireless cellular data. In other cases, the camera 12 may be configured to store the generated images or audio in the memory 208. In such cases, the camera 12 may be precluded from transmitting generated images or audio to the server 10 by consuming wireless data (e.g., wireless cellular data). Thus, a data consumption setting may be indicative of whether wireless data consumption is enabled. A threshold limit for an amount of data consumed over a given time period may establish a limit for how much data the camera 12 may be permitted to use to transmit generated images or audio to the server 10. The bandwidth refers to a frequency range between a lowest and a highest attainable frequency, which defines a channel capacity of a wireless/wired communication path/link established between the camera 12 and the server 10. The latency refers to the amount of time it takes for a captured image to be sent to (or arrive at) the server 10 via the network 8. The transmission duty cycle refers to the percentage of time during which the camera 12 transmits captured images to the server 10. The modulation scheme determines how bits are mapped to the phase and amplitude of transmitted signals between the camera 12 and the server 10. The modulation scheme may include orthogonal frequency-division multiplexing (OFDM), filter bank multi-carrier (FBMC), universal filtered multi-carrier (UFMC), generalized frequency division multiplexing (GFDM), filtered OFDM (f-OFDM), and so on. OFDM is implemented in Long Term Evolution (LTE) wireless communication, while FBMC, UFMC, GFDM and f-OFDM are applied in fifth-generation (5G) wireless communication. The data rate defines a transmission rate between the camera 12 and the server 10.
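By way of illustration only, the following Python sketch shows one possible way the data transmission setting, data consumption setting and data threshold described above could gate real-time streaming versus buffering. The parameter names, default values and decision policy are assumptions introduced for the sketch.

```python
from dataclasses import dataclass

# Hedged sketch of how the transmission parameters described above might gate
# the sending of captured data; all names and values are assumptions.
@dataclass
class TransmissionParameters:
    realtime_enabled: bool = False        # data transmission setting
    wireless_data_enabled: bool = True    # data consumption setting
    data_cap_bytes_per_day: int = 500_000_000
    duty_cycle_pct: int = 10              # percentage of time the radio transmits

def should_stream_now(tx: TransmissionParameters, bytes_used_today: int) -> bool:
    """Stream to the server only if real-time transmission is enabled, wireless
    data consumption is allowed, and the daily data cap has not been reached;
    otherwise the capture would be buffered in memory for later retrieval."""
    if not tx.realtime_enabled or not tx.wireless_data_enabled:
        return False
    return bytes_used_today < tx.data_cap_bytes_per_day

print(should_stream_now(TransmissionParameters(realtime_enabled=True),
                        bytes_used_today=100_000_000))  # True
```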

In some applications, the storage unit 208 may store configurations (e.g., sets of values) of the audiovisual generation parameters and/or the transmission parameters. It should be appreciated that the aforementioned list of audiovisual generation parameters and transmission parameters is not intended to be exhaustive. Still other audiovisual generation parameters and transmission parameters exist and will be apparent to those of ordinary skill in the art.

In some examples, execution of the instructions 210 stored in the memory 208 results in the processing system 200 regulating at least one of the aforementioned audiovisual generation parameters (i.e., image capture parameters, audio capture parameters, image processing parameters, and/or audio processing parameters) and/or at least one of the aforementioned transmission parameters.

In some embodiments, the camera 12 may be configured to communicate its battery charge level and/or its operating parameters (audiovisual generation parameters and transmission parameters) to the server 10 via the data network 8. The camera 12 may also be configured to respond to messages from the server 10 to change its operating parameters.

Fig. 7 is a block diagram of an example simplified processing system 300, which may be used to implement the server 10. Although Fig. 7 may show a single instance of each component, there may be multiple instances of each component in the server 10.

The server 10 may also be referred to as a centralized device (or centralized server), which receives, stores, and/or processes images, photos, videos and/or audio 2081 from the camera 12 and other cameras. For example, the server 10 may run a security software platform that gathers images and video (which may include audio) received from various cameras 12 in a common neighborhood, processes them to identify (e.g., detect or detect and recognize) events or entities, and provides either a report or a graphical display for security personnel or law enforcement departments. The server 10 may also run a remote control operation that controls various functions of the camera 12 over the data network 8. The server 10 may be a cloud server running in a cloud computing environment.

With continued reference to Fig. 7, the server 10 comprises a network interface 506 for wired or wireless communication with the communication network 8. The server 10 also includes a processing device 502, such as a central processing unit (CPU), a graphics processing unit (GPU), a tensor processing unit (TPU), a neural processing unit (NPU), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), a dedicated logic circuitry, or combinations thereof. The server 10 further includes a storage unit 508, which may include a mass storage unit such as a solid state drive, a hard disk drive, a magnetic disk drive and/or an optical disk drive. In some examples, the storage unit 508 may store images or videos received from the camera 12 over the network 8.

One or more storage units 508 may store images or videos or audio 2081 received from one or more security cameras 12.

The storage unit 508 may include an instruction memory 510, which may include a volatile or non-volatile memory (e.g., a flash memory, a random-access memory (RAM), a read-only memory (ROM), an erasable programmable ROM (EPROM), an electrically erasable programmable ROM (EEPROM), a flash memory and a CD-ROM, to name a few non-limiting possibilities). The instruction memory 510 may store instructions for execution by the processing device 502, such as to carry out example methods described in the present disclosure. The instruction memory 510 may store other software (e.g., instructions for execution by the processing device(s) 502), such as an operating system and other applications/functions.

The storage unit 508 may also store various data for use by the methods and processes defined by the instructions stored in the instruction memory 510. Such data may include the relocation schedule 280, the policy 285 and a table 580 that may store gathered information such as the battery charge level and operation mode of various audiovisual systems including the camera 12.

The processing system 200 may also include a storage unit 208, which may include a mass storage unit such as a solid state drive, a hard disk drive, a magnetic disk drive and/or an optical disk drive. In some examples, the storage unit 208 may store images and/or videos and/or audio 2081 captured by the image capture device 230 (and/or the audio capture device 231).

The server 10 may also include a power supply 512; however, in contrast to the power supply 212 of the camera 12, the power supply 512 of the server 10 need not be an "off-grid" power supply. In particular, the power supply 512 can be connected to the utility grid.

There may be a bus 517 providing communication among components of the server, including the processing device 502, network interface 506 and storage unit 508. The bus 517 may be any suitable bus architecture including, for example, a memory bus, a peripheral bus and/or a video bus.

Additional components may be provided. For example, the server 10 may comprise a user input/output interface (not shown) for interfacing with a user via input and/or output devices, such as a display, keyboard, mouse, touchscreen and/or haptic module, for example.

In some examples, some data used by the methods disclosed herein may be stored at the server 10 and may be stored additionally or alternatively at the camera 12. For example, this may include the battery charge level of the camera 12, the audiovisual generation parameters and the transmission parameters, the camera position and the camera angle / field of view.

By virtue of executing the computer readable instructions in the memory 510, the server 10 may carry out various methods, such as a method of processing images, video and/or audio received from the camera 12, and a method of providing information (such as the relocation schedule 280 or the policy 285) to the camera 12. The server 10 may also carry out threat assessment processing based on images, video and/or audio received from the camera 12.

For example, the server 10 may be configured to process the images and/or videos and/or the audio received from the camera 12 in order to detect and/or recognize objects or events in the received images and/or audio, such as people, vehicles, buildings, etc. The server 10 may jointly apply its processing efforts to multiple image streams from multiple cameras to improve its understanding of a scene that is partly shared by the multiple cameras. The server 10 may be trained to look for certain objects, movements, gestures, actions or lack thereof in order to trigger higher-level processes, such as control of the camera 12, issuance of an audible or visual alarm, or sending a message to an emergency services server (e.g., over the data network 8 or over a separate connection). The server 10 may be trained to identify an event by detecting and/or recognizing an event and/or an object based on the captured images, video, and/or audio. In some embodiments, the server 10 may be or be part of a video management server (VMS).

The camera 12 is configured to operate in multiple operation modes. In one embodiment, these operation modes may include a "low power mode" and a "high power mode".

Low power mode refers to an operation mode in which the camera 12 consumes less power than in high power mode. In some cases, low power mode may be associated with power consumption of no more than X watts and high power mode may be associated with power consumption of no less than Y watts, with Y being greater than X. The number of watts may be measured instantaneously during operation in a given one of the modes or may be integrated over a period of time (e.g., 1 second, 30 seconds, 5 minutes) of continuous operation in the given one of the modes, in which case reference to power consumption may be taken to be the average power consumption over the relevant period of time. In still other cases, low power mode may be associated with peak power consumption of no more than X watts and high power mode may be associated with peak power consumption of no more than Y watts, with Y being greater than X.
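By way of illustration only, the following Python sketch classifies an operation mode from power consumption averaged over a period of continuous operation, as described above. The X and Y thresholds, the window and the sample values are assumptions introduced for the sketch.

```python
# Sketch of classifying the current operation mode from measured power draw,
# averaged over a window of continuous operation. X_WATTS, Y_WATTS and the
# sample values are illustrative assumptions.
X_WATTS = 0.5   # upper bound associated with low power mode
Y_WATTS = 2.0   # lower bound associated with high power mode (Y > X)

def average_power(samples_watts: list[float]) -> float:
    return sum(samples_watts) / len(samples_watts)

def classify_mode(samples_watts: list[float]) -> str:
    avg = average_power(samples_watts)
    if avg <= X_WATTS:
        return "low power mode"
    if avg >= Y_WATTS:
        return "high power mode"
    return "indeterminate"   # average falls between X and Y watts

print(classify_mode([0.3, 0.4, 0.35]))  # low power mode
print(classify_mode([2.5, 3.0, 2.8]))   # high power mode
```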

The consumption of power in a given mode of operation is the result of operational parameters being applied by the camera control function 290 to the various components of the camera 12, including, for example, the AV generation system 2000 and the network interface 206. Examples of operational parameters that can influence power consumption may include AV generation parameters (e.g., frame rate, resolution) and image transmission parameters (e.g., bandwidth, update rate (latency)), to name a few non-limiting possibilities.

For example, in low power mode, the AV generation system 2000 may be configured to capture images at a lower frame rate or resolution than in high power mode. In low power mode, the AV generation system 2000 may be configured to decrease the brightness of the flash, decrease the number of images captured in a given time period, decrease the number of activations of the flash in a given time period, decrease the sampling rate, decrease the detection rate, increase the threshold confidence score and/or increase the compression ratio, relative to high power mode. Additionally, in low power mode, image capture or audio capture by the camera 12 may be disabled. In another example, in low power mode, the network interface 206 may be configured to send captured images at a lower bandwidth or at a lower update rate (higher latency) than in high power mode. For instance, the network interface 206 may be configured to send to the server 10 a smaller percentage of the images or videos or audio generated by the AV generation system 2000 than in high power mode, thus requiring a higher bandwidth in high power mode than in low power mode. A shorter transmission duty cycle also causes the network interface 206 to consume less power. Regarding the data rate, less power will be consumed by the network interface 206 at lower data rates between the camera 12 and the server 10. Regarding wireless transmission, less power will be used or consumed by the network interface 206 when the images and/or videos and/or audio 2081 are not wirelessly transmitted via the network interface 206 (e.g., if the images and/or videos and/or audio 2081 are stored in memory). As such, to reduce the power consumed by the network interface 206, transmission of the images and/or videos and/or audio 2081 via the network interface 206 may be disabled (e.g., an option for sending wireless data (e.g., cellular data) may be disabled or turned off).
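By way of illustration only, the following Python sketch captures the kind of mode-specific parameter profiles described above that a camera control function could apply when switching between low power mode and high power mode. Every parameter name and value, and the set_parameter interface, are assumptions introduced for the sketch.

```python
# Illustrative parameter profiles a camera control function might apply when
# switching modes; every value here is an assumption for the sketch.
LOW_POWER_PROFILE = {
    "frame_rate_fps": 1,
    "resolution": (640, 480),
    "audio_sampling_rate_hz": 8000,
    "detection_rate_fps": 0.5,
    "confidence_threshold": 0.9,     # higher threshold -> less downstream work
    "compression_ratio": 20,
    "stream_realtime": False,        # buffer and upload asynchronously
    "transmit_duty_cycle_pct": 5,
}

HIGH_POWER_PROFILE = {
    "frame_rate_fps": 24,
    "resolution": (1600, 1200),
    "audio_sampling_rate_hz": 44100,
    "detection_rate_fps": 24,
    "confidence_threshold": 0.6,
    "compression_ratio": 5,
    "stream_realtime": True,         # stream in real time or near-real-time
    "transmit_duty_cycle_pct": 100,
}

def apply_mode(camera, mode: str) -> None:
    """Apply the profile for the requested mode via a hypothetical camera API."""
    profile = HIGH_POWER_PROFILE if mode == "high" else LOW_POWER_PROFILE
    for name, value in profile.items():
        camera.set_parameter(name, value)   # assumed setter on the camera object

class StubCamera:
    """Stand-in for the real camera control interface (assumption)."""
    def set_parameter(self, name, value):
        print(f"{name} -> {value}")

apply_mode(StubCamera(), "low")
```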

In a still further example, consider that image, video, and/or audio processing may be separated into a sequence of operations that can be executed at the camera 12 or at the server 10. In low power mode, a smaller number of such operations in the sequence are performed at the camera 12 than in high power mode, with the balance of operations (if any) being performed at the server 10. For example, if the sequence of operations associated with image, video and/or audio processing includes first-level processing 610 followed by second-level processing 620, the camera 12 operating in low power mode (see Fig. 8A) may perform only the first-level processing 610, with the second-level processing 620 being performed at the server 10, whereas in high power mode (see Fig. 8B), the camera 12 may perform both the first-level processing 610 and the second-level processing 620.

In a non-limiting embodiment, the first-level processing 610 may include object and/or event detection and the second-level processing 620 may include object and/or event recognition. In this context, the object may include a face, a car make/model or a license plate and the event may include a gunshot or speech, to name a few non-limiting possibilities. By way of the first-level processing, the object and/or event is detected; therefore, a result of the first-level processing could be a position of the suspected object, a bounding box containing the suspected object (e.g., person, license plate, vehicle) or detection of a source sound (without recognition of the sound). The second-level processing (involving object recognition, which could also include optical character recognition) is applied to the result of the first-level processing, leading to a result that could be the identity of a person, the characters of a license plate, the make and model of a vehicle, gunshot recognition, speech recognition, etc. Once the second-level processing 620 has taken place, an action may be triggered, which can include further investigation, summoning the authorities, issuing an alert, etc.
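By way of illustration only, the following Python sketch mirrors the split shown in Figs. 8A and 8B: first-level detection always runs on the camera, while second-level recognition runs on the camera only in high power mode and otherwise is left to the server. The detector, recognizer, upload function and their example outputs are trivial stand-ins introduced for the sketch.

```python
# Sketch of the first-level / second-level processing split of Figs. 8A and 8B.
# The functions below are placeholder stand-ins, not a real detector/recognizer.

def detect_objects(frame):
    """First-level processing: return bounding boxes of suspected objects."""
    return [{"bbox": (10, 20, 110, 60), "label": "license_plate"}]

def recognize_objects(frame, detections):
    """Second-level processing: e.g., character recognition on each detection."""
    return [{**d, "text": "ABC123"} for d in detections]   # placeholder result

def send_to_server(payload):
    print("uploading:", payload)

def process_capture(frame, mode: str) -> None:
    detections = detect_objects(frame)          # always carried out on the camera
    if mode == "high":
        # High power mode (Fig. 8B): second-level processing runs on the camera
        # and only its result is sent to the server.
        send_to_server(recognize_objects(frame, detections))
    else:
        # Low power mode (Fig. 8A): the first-level result is sent to the server,
        # which carries out the second-level processing itself.
        send_to_server(detections)

process_capture(frame=None, mode="low")
```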

In another embodiment, the first-level processing 610 applied to a stream of captured images, videos and/or audio 2081 could be the identification of a relevant subset of images (i.e., a reduction in the number of images) based on criteria such as contrast, motion, ambient light, etc. In this case, second-level processing 620 is performed only on the subset of images that result from the first-level processing. In such an embodiment, second-level processing 620 may include both object and/or event detection and object and/or event recognition.

In another embodiment, first-level processing 610 may include processing to detect a given condition (e.g., detection of a sound, detection of an object, etc.) by a first detector, and the results of the first-level processing 610 are transmitted to a second detector which carries out second-level processing 620. In some cases, second-level processing 620 is performed only if the results of the first-level processing meet a particular criterion.
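
By way of non-limiting illustration only, the following sketch shows one possible way of splitting the two-level processing between the camera 12 and the server 10 depending on the operation mode. The function names and detector stubs are hypothetical placeholders, not the actual first-level processing 610 or second-level processing 620.

    # Illustrative sketch only: detector stubs and the "send to server" call are
    # hypothetical stand-ins introduced for the example.

    def first_level_processing(frame):
        """Detection stage (stub): return bounding boxes of suspected objects."""
        return [{"bbox": (10, 10, 100, 100), "label": "suspected_plate"}]

    def second_level_processing(detections):
        """Recognition stage (stub): e.g., OCR applied to a detected plate."""
        return [dict(d, plate="ABC123") for d in detections]

    def send_to_server(payload):
        """Stand-in for transmitting intermediate results to the server for recognition."""
        print("uploading to server:", payload)

    def process_frame(frame, mode):
        detections = first_level_processing(frame)       # performed on the camera in both modes
        if mode == "high_power":
            return second_level_processing(detections)   # recognition also on the camera
        send_to_server(detections)                       # recognition deferred to the server
        return detections

    process_frame(frame=None, mode="low_power")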

Method 300

To switch between the aforementioned modes of operation under certain circumstances, and thereby dynamically adjust power consumption, the camera control function 290 implements a power management method. In response to detection of a trigger (e.g., a change to the camera or an event), the camera control function 290 may force the camera 12 to operate in a chosen mode of operation.

Fig. 3A is a flowchart showing steps in an example power management method 300.

At step 302 of the power management method 300, the camera control function 290 detects a change to the camera 12. In an embodiment, the change to the camera 12 is a physical change to the camera 12, and detection of the change does not require or utilize the images captured by the camera 12. That is to say, the change to the camera 12 is detected by sensors other than the camera 12. These sensors (examples of which include the acceleration sensor system 214, the position sensor system 216, the temperature sensing system 215, the motion sensing system 219 and the tamper sensing system 211) are coupled to the processing system 200, which processes the signals received from the sensors and determines whether the camera 12 has undergone a physical change. At step 304 of the power management method 300, the camera control function 290 forces the camera 12 to operate in a certain operation mode in response to detection of the change at step 302. It is noted that if the camera 12 was already operating in the certain operation mode, then step 304 of the power management method 300 causes the camera 12 to continue operating in the certain operation mode, whereas if the camera 12 was not operating in the certain operation mode, step 304 of the power management method 300 changes the mode of operation of the camera 12 and, in particular, switches the camera 12 into the certain operation mode.
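
By way of non-limiting illustration only, the following sketch outlines the general shape of steps 302 and 304, in which a physical change reported by the non-image sensors forces a target operation mode. The sensor field names and the choice of target mode are assumptions introduced solely for the example.

    # Illustrative sketch only: sensor field names and the target mode are hypothetical.

    def detect_physical_change(sensors: dict) -> bool:
        """Step 302: decide from non-image sensors whether the camera underwent a physical change."""
        return bool(sensors.get("moved") or sensors.get("solar_disconnected")
                    or sensors.get("temperature_jump") or sensors.get("tamper"))

    def force_mode(current_mode: str, target_mode: str) -> str:
        """Step 304: switch into the target mode, or stay there if already in it."""
        if current_mode != target_mode:
            print(f"switching from {current_mode} to {target_mode}")
        return target_mode

    mode = "low_power"
    sensor_snapshot = {"moved": True}    # e.g., reported by the acceleration sensor system
    if detect_physical_change(sensor_snapshot):
        # The target mode depends on the embodiment, the detected change and the policy.
        mode = force_mode(mode, "high_power")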

The certain operation mode may be the low power mode or the high power mode, depending on the embodiment and on the change detected at step 302 (and on the policy 285), as will now be described.

In one non-limiting embodiment of the power management method 300, and with reference to Fig. 3B, the change to the camera 12 detected at step 302 may be a physical change indicative of potential theft, vandalism, tampering or attempted tampering. Examples of such a change could include (i) movement of the camera 12; (ii) disconnection of the solar panel from the battery 212; (iii) a temperature change in the surrounding environment; or (iv) tampering or an attempt to tamper with the camera 12. Movement of the camera 12 can be detected by processing the output of the accelerometer and/or GPS, and may include linear movement and/or angular movement. In order to qualify as movement indicative of theft or vandalism, the movement may need to take place over a certain threshold linear or angular distance, possibly lasting over a certain threshold period of time, and/or be characterized by linear or angular acceleration above a certain threshold acceleration. In other embodiments, the monitored movement of the camera 12 (whether it be acceleration or position) can be run through a neural network trained to discriminate between movements associated with theft and vandalism, and movements not associated with theft and vandalism (e.g., such as from wind and/or animals). Disconnection of the solar panel 224 from the battery 212 can be detected by monitoring a connection signal from the solar panel 224 and/or from the battery 212. The mere fact that the solar panel 224 has been disconnected from the battery 212 can be considered indicative of theft or vandalism. A temperature change may be indicative of a change of environment (e.g., a sudden change of environment, for example, from outdoors to the interior of a vehicle or building, or other indoor environment). In order to qualify as a temperature change indicative of theft or vandalism, the temperature change may need to be greater than a threshold amount and/or may need to take place over a certain threshold period of time. In other embodiments, the temperature change can be run through a neural network trained to discriminate between a temperature change associated with a change of environment, and a temperature change not associated with a change of environment (e.g., such as an expected or typical temperature change for an outdoor or an indoor environment).
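
By way of non-limiting illustration only, the following sketch shows one possible set of threshold tests for deciding whether monitored movement qualifies as indicative of theft or vandalism. The threshold values and field names are invented for the example; the disclosure leaves the actual thresholds unspecified.

    # Illustrative sketch only: all thresholds are arbitrary example values.

    LINEAR_DISTANCE_M    = 0.5    # minimum displacement to count as movement
    ANGULAR_DISTANCE_DEG = 15.0   # minimum rotation to count as movement
    MIN_DURATION_S       = 3.0    # movement must persist at least this long
    ACCEL_THRESHOLD_MS2  = 2.0    # or exceed this acceleration

    def movement_indicates_theft(displacement_m, rotation_deg, duration_s, peak_accel_ms2):
        """Return True if monitored movement qualifies as potential theft or vandalism."""
        large_enough = displacement_m > LINEAR_DISTANCE_M or rotation_deg > ANGULAR_DISTANCE_DEG
        sustained = duration_s > MIN_DURATION_S
        abrupt = peak_accel_ms2 > ACCEL_THRESHOLD_MS2
        return (large_enough and sustained) or abrupt

    print(movement_indicates_theft(0.8, 0.0, 5.0, 0.5))   # True: sustained displacement
    print(movement_indicates_theft(0.1, 2.0, 1.0, 0.5))   # False: wind-like jitter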

The sensor output of the motion sensing system 219 and/or the tamper sensing system 211 may be analyzed for signs of tampering or risk of tampering. A breach of a casing of the camera 12 may be indicative of an attempt to tamper with the camera 12. The breach of the casing of the camera 12 may be detected based on analyzing sensor output of the tamper sensing system 211, for example, by detecting ambient light or a change in humidity within the casing of the camera 12. An attempt to remove the camera 12 from its installation place may be indicative of an attempt to tamper with the camera 12. Detection of motion within a threshold distance to the camera 12 may be indicative of an attempt to tamper with the camera 12. For instance, if it is detected that an object or individual is too close to the camera 12 (i.e., at a distance to the camera 12 that is less than a threshold distance), this may be indicative of an attempt to tamper with the camera 12 (e.g., an attempt to obstruct the field of view of the camera 12 or to manipulate the camera 12 (e.g., remove the camera 12, redirect the camera 12, etc.)). Alternatively, a sudden absence of motion may be indicative of a deliberate act associated with an attempt to tamper with the camera 12.

Assuming therefore that the detected change to the camera 12 is indicative of potential theft, vandalism, tampering or attempted tampering, it may be desirable to record video and send it back rapidly to the server 10 for processing. As such, the certain operation mode may be the high power operation mode, which can involve sending "high grade" video (e.g., full-motion video at high resolution) in real-time or near real-time (at low latency and high bandwidth). This may be triggered irrespective of the battery charge level or current operation mode of the camera 12. That is to say, even if the battery charge level is low and the camera 12 is in low power mode, the fact that a change to the camera 12 indicative of potential vandalism, theft, tampering or attempted tampering has been detected warrants a response (namely sending high grade video in real-time or near real-time) that can maximize the ability of the server 10 to determine what is happening to the camera 12, even if this means more rapidly depleting the battery 212 due to having entered high power mode.

While the camera 12 is in high power mode, the camera control function 290 may also be configured to collect data from the acceleration sensor system 214, the position sensor system 216, the temperature sensing system 215, the motion sensing system 219 and the tamper sensing system 211 and feed this information to the server 10 for analysis. This could allow the server 10 to track the whereabouts of the camera 12 in case it continues to be moved and/or vandalized. The gathered motion data and position data can be sent in packets sent across the data network 8 via the network interface 206 and the antenna 218.

While the camera 12 is in high power mode, the camera control function 290 may also be configured to capture images, video, and/or audio 2081 and process this information by the processing device 202 of the camera 12 for analysis. This could allow the camera 12 to detect and/or recognize objects and/or events.

The camera 12 may subsequently remain in high power mode until the battery 212 is depleted. Alternatively, and as depicted in Fig. 3B, the camera 12 may wait for a condition to be met (step 305) and then decide what to do next (step 306). If the camera 12 had been in low power mode at the time of detecting the change at step 302 (which can be recorded in the memory 208 at step 303), then after the condition is met, the camera control function 290 may switch the camera 12 back into low power mode (the "YES" branch of step 306, followed by step 307).

In one embodiment, the condition verified at step 305 may be the passage of a certain amount of elapsed time. The certain amount of elapsed time may correspond to how much time is considered adequate to upload high-quality video that could provide the server 10 with a useful time window for assessing the situation with the camera 12 and, in particular, assessing whether the threat of theft, vandalism or tampering is confirmed. The certain amount of elapsed time may correspond to how much time is considered adequate to provide the processing device 202 with a useful time window for assessing the situation with the camera 12 and, in particular, assessing whether the threat of theft, vandalism, tampering or attempted tampering is confirmed. This could be on the order of 10 seconds, 30 seconds, 5 minutes or any other period of time, which could vary according to factors such as the battery charge level.

In another embodiment, the condition verified at step 305 may be detection that movement of the camera 12 has ceased (e.g., based on processing sensor data from the position sensor system 216 or the acceleration sensor system 214). For instance, the condition may be detection that movement of the camera 12 has ceased for a threshold amount of time. This could be on the order of 30 seconds, 45 seconds, 2 minutes or any other period of time. In one example of implementation, the camera 12 may be configured to switch back into its previous mode of operation, in this case low power mode. In one example, image or audio capture, which was previously enabled during high power mode, may now be disabled in low power mode. In another embodiment, the condition verified at step 305 may be receipt of a "relinquish" signal.

In some instances, the relinquish signal may be received from the server 10, which tells the camera 12 to switch back into its previous mode of operation, in this case low power mode. The relinquish signal may be generated by the server 10 after the server 10 (or a user thereof) is satisfied that the threat of theft, vandalism, tampering or attempted tampering is not substantiated. Specifically, the server 10 may carry out threat assessment processing of images, videos and/or audio 2081 received from the camera 12, whereby such threat assessment processing confirms the existence or absence of a threat (such as theft, vandalism, tampering or attempted tampering). If the threat assessment processing confirms that there is no threat (despite the detected change to the camera 12), the server 10 may generate the relinquish signal. However, if the threat assessment processing confirms that there is a threat (or is inconclusive), threat assessment processing may continue and/or an alert may be issued by the server 10 to another entity.

In other instances, the relinquish signal may be produced internally by the camera 12 based on processing of the images, videos and/or audio 2081 generated by the AV generation system 2000. Specifically, the camera control function 290 may itself perform threat assessment processing on the captured images and/or the captured audio and may confirm whether the detected change to the camera is truly to be considered theft, vandalism, tampering or attempted tampering. As such, although forcing the camera 12 to operate in high power mode may result from a physical change to the camera 12 and not the content of the images and/or the audio captured by the camera 12, toggling the camera 12 back into low power mode (if that was the mode in which it was operating at the time that the change to the camera 12 was detected) may occur based on content of the images, videos and/or audio 2081 generated by the AV generation system 2000 of the camera 12.
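
By way of non-limiting illustration only, the following sketch combines the conditions described above for step 305 (elapsed time, ceased movement, or receipt of a relinquish signal) with the check at step 306 against the operation mode recorded at step 303. The function names and default durations are assumptions introduced solely for the example.

    # Illustrative sketch only: durations and names are hypothetical.
    import time
    from typing import Optional

    def condition_met(entered_at: float, still_since: Optional[float],
                      relinquish_received: bool,
                      hold_s: float = 30.0, still_s: float = 45.0) -> bool:
        """Step 305: any of elapsed time, ceased movement, or a relinquish signal."""
        elapsed_ok = (time.time() - entered_at) >= hold_s
        still_ok = still_since is not None and (time.time() - still_since) >= still_s
        return elapsed_ok or still_ok or relinquish_received

    def maybe_revert(recorded_mode: str, current_mode: str,
                     entered_at: float, still_since: Optional[float],
                     relinquish_received: bool) -> str:
        """Steps 306/307: fall back to low power only if that was the recorded prior mode."""
        if condition_met(entered_at, still_since, relinquish_received) and recorded_mode == "low_power":
            return "low_power"
        return current_mode

    # Example: forced into high power 60 s ago, prior mode was low power,
    # movement ceased 50 s ago, no relinquish signal yet -> revert to low power.
    print(maybe_revert("low_power", "high_power",
                       entered_at=time.time() - 60, still_since=time.time() - 50,
                       relinquish_received=False))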

Of course, if the camera 12 had been in high power mode at the time of execution of step 302 (and step 303), then no particular action needs to be taken (the "NO" branch of step 306), and the camera 12 can remain in high power mode until possibly another process run by the camera control function 290 makes a change.

Thus, it should be apparent that detecting movement of the camera 12 at step 302 of the power management method 300 does not necessarily correlate with theft, vandalism, tampering or attempted tampering. Another situation where this may be the case, for example, is if the camera 12 is part of a planned relocation. As such, in some embodiments, and with additional reference to Fig. 3C, the camera control function 290 may detect movement at step 302 (and save the current operation mode at step 303) and then, at step 308, may consult a relocation schedule 280 stored in the memory 208. The relocation schedule 280 may have been previously downloaded from the server 10 over the data network 8. The relocation schedule 280 may indicate time periods when the camera 12 is scheduled for movement (e.g., relocation or installation). The current time (obtained from an internal clock) is then compared with the relocation schedule 280 at step 309.

If the current time is within a time period when the camera 12 is not scheduled for movement (the "NO" branch of step 309), this could mean that there is a higher probability that the camera 12 is indeed in the process of being stolen, vandalized or tampered with. Accordingly, at step 304, and as previously described, the camera control function 290 may force the camera 12 to operate in high power mode so as to relay video of the current environment of the camera 12 right after movement of the camera 12 (or even during movement if still ongoing). The same conditions for potentially returning to low power mode may apply as previously described with reference to steps 305, 306 and 307.

In contrast, if it is found that the current time falls within a time period when the camera 12 is scheduled for movement (the "YES" branch of step 309), then at step 310, the camera control function 290 may force the camera 12 to operate in low power mode so as to conserve power during the scheduled move. This could mean leaving the camera 12 in low power mode if it was already operating in low power mode, or switching it into low power mode if it had been operating in high power mode. Once the scheduled move is complete, at step 311, the camera control function 290 may relinquish its forced operation of the camera 12 in low power mode. This could mean a return to high power mode if that is the operating mode that was recorded at step 303.
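
By way of non-limiting illustration only, the following sketch shows one possible implementation of steps 308 to 311, in which the current time is compared against the relocation schedule 280 to decide between forced low power and forced high power operation. The schedule format and function names are assumptions introduced solely for the example.

    # Illustrative sketch only: the schedule format is an assumption.
    from datetime import datetime

    # Relocation schedule 280: time windows during which movement is expected.
    relocation_schedule = [
        (datetime(2024, 5, 2, 9, 0), datetime(2024, 5, 2, 12, 0)),
    ]

    def movement_is_scheduled(now: datetime) -> bool:
        """Step 309: compare the current time against the relocation schedule."""
        return any(start <= now <= end for start, end in relocation_schedule)

    def on_movement_detected(now: datetime) -> str:
        """Steps 308-311: conserve power during scheduled moves, escalate otherwise."""
        if movement_is_scheduled(now):
            return "low_power"     # step 310: forced low power during the scheduled move
        return "high_power"        # step 304: treat as potential theft or vandalism

    print(on_movement_detected(datetime(2024, 5, 2, 10, 30)))   # -> low_power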

In yet another non-limiting embodiment, and with reference to Fig. 3D, the change to the camera 12 detected at step 302 may consist of a drop in the battery charge level below a threshold. This condition may be detected based on continuous monitoring of the charge level of the battery 212. In that case, it may be desirable to minimize the power being consumed by the camera 12. As such, the certain operation mode is the low power operation mode. That is to say, if the camera 12 was in high power mode during execution of step 302, the camera control function 290 switches the camera 12 into low power mode, and if the camera 12 was in low power mode, the camera control function 290 keeps the camera 12 operating in low power mode.

It is noted that the situation described above with reference to Fig. 3D may be combined with the situation described with reference to Fig. 3B, thereby creating a scenario where there is seemingly forced operation in low power mode (due to the low battery charge level) at the same time as forced operation in high power mode (due to detected movement of the camera 12). This apparent contradiction can be resolved by the camera control function 290 consulting a policy 285 stored in the memory 208. The policy 285 outlines various possibilities and priorities for determining the exact conditions under which forced operation in high power mode will prevail versus those under which forced operation in low power mode will prevail.
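
By way of non-limiting illustration only, the following sketch shows one possible priority-based formulation of the policy 285 for resolving conflicting forced modes. The triggers and their relative priorities are assumptions introduced solely for the example; the actual rules are left to the stored policy 285.

    # Illustrative sketch only: priorities are hypothetical policy choices.

    # Higher number = higher priority when forced modes conflict.
    POLICY_PRIORITY = {
        "movement_detected":    ("high_power", 3),  # possible theft outranks battery saving
        "low_battery":          ("low_power",  2),
        "scheduled_relocation": ("low_power",  1),
    }

    def resolve_mode(active_triggers):
        """Pick the forced mode whose trigger has the highest policy priority."""
        ranked = [POLICY_PRIORITY[t] for t in active_triggers if t in POLICY_PRIORITY]
        if not ranked:
            return None
        return max(ranked, key=lambda entry: entry[1])[0]

    print(resolve_mode({"movement_detected", "low_battery"}))   # -> high_power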

In other non-limiting embodiments, the change to the camera 12 detected at step 302 may be a rise in the battery charge level above a threshold. In that case, it may be desirable to increase the power being consumed by the camera 12. In this scenario, therefore, the certain operation mode is the high power operation mode. That is to say, if the camera 12 was in low power mode, the camera control function switches it into high power mode, and if the camera 12 was in high power mode, the camera control function keeps it in high power mode.
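
By way of non-limiting illustration only, the following sketch shows one possible battery-threshold rule combining the low-charge case of Fig. 3D and the high-charge case described above. The threshold percentages are arbitrary example values.

    # Illustrative sketch only: thresholds are arbitrary example values.

    LOW_THRESHOLD_PCT  = 20.0   # drop below this -> force low power mode
    HIGH_THRESHOLD_PCT = 80.0   # rise above this -> force high power mode

    def mode_from_battery(charge_pct: float, current_mode: str) -> str:
        """Force a mode on threshold crossings; otherwise keep the current mode."""
        if charge_pct < LOW_THRESHOLD_PCT:
            return "low_power"
        if charge_pct > HIGH_THRESHOLD_PCT:
            return "high_power"
        return current_mode   # between thresholds: no forced change

    print(mode_from_battery(15.0, "high_power"))   # -> low_power
    print(mode_from_battery(50.0, "low_power"))    # -> low_power (unchanged)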

Method 400

By virtue of executing the computer readable instructions in the memory 208, the camera 12 may carry out various methods, such as a method of processing the captured images, videos, and/or audio. More specifically, the processing device 202 of the camera 12 may carry out various methods, such as a method of processing the captured images and/or the captured audio. The processing device 202 may also carry out threat assessment processing based on images, videos, and/or audio captured by the camera 12.

For example, the processing device 202 of the camera 12 may be configured to process the images and/or the audio captured by the camera 12 in order to detect and/or recognize objects or events in the images and/or audio 2081 generated by the camera 12, such as people, vehicles, buildings, etc. The camera 12 may be trained to look for certain objects, movements, gestures, actions or lack thereof in order to trigger higher-level processes, such as control of the camera 12, issuance of an audible or visual alarm, or sending a message to an emergency services server (e.g., over the data network 8 or over a separate connection). The processing device 202 of the camera 12 may be trained to identify an event by detecting and/or recognizing an event based on the captured images, videos and/or audio 2081. It is to be understood that detecting and/or recognizing an event may include detecting and/or recognizing an object.

Examples of identifying an event may include detecting and/or recognizing events such as faces, people, vehicles, buildings, objects, speech, sounds, motion, gestures, actions, etc. For instance, examples of identifying an event may include identifying a license plate read, a stolen car, an identity of an individual (e.g., a registered criminal, a missing person), a gunshot event associated with the picking up of a sound that is identified to be a gunshot, a traffic accident event, indicators of a fire (e.g., smoke, heat, a fire alarm, etc.), indicators of vandalism or theft, etc. Examples of identifying an event may also include detecting and/or recognizing the absence of an event and/or an object, for example, the absence of an object, a person, etc. Examples of identifying an event may also include identifying manipulation of the camera 12, such as redirection of the camera 12 or obstructing the field of view of the camera 12 (e.g., by covering or spray-painting a lens of the camera 12).

The camera 12 may be in communication with one or more data sources 112 that may be accessed by the processing device 202 of the camera 12 to detect an event. The data sources 112 may include various types of sources, including access control systems, video monitoring systems, building monitoring systems, analytics systems, automated license plate recognition (ALPR) systems, gunshot monitoring systems, and the like.

As previously discussed, the camera 12 is configured to operate in multiple operation modes. In one embodiment, these operation modes may include a "low power mode" and a "high power mode". In one example, consider that image and/or audio processing may be separated into a sequence of operations that can be executed at the camera 12 or at the server 10. In one non-limiting embodiment, even in low power mode, the camera 12 may perform both the first-level processing 610 and the second-level processing 620.

However, it is to be understood that, in some embodiments, a portion of the processing of the captured images, videos, and/or audio 2081 may also be carried out by the server 10.

In another embodiment, to switch between the aforementioned modes of operation under certain circumstances, and thereby dynamically adjust power consumption, the camera control function 290 implements another embodiment of a power management method. Fig. 4A is a flowchart showing steps in an example power management method 400.

At step 402 of the power management method 400, the camera control function 290 detects an event. In this embodiment, detecting an event is based on the images, video, and/or audio 2081 generated by the camera 12. That is to say, the event is detected by the camera 12, for example, by analysis of the captured images, videos and/or audio 2081 by the processing device 202 to detect and/or recognize an event (which may include detecting and/or recognizing an object). To detect and/or recognize an event, the processing system 200 may also process the signals generated from the sensors of the camera 12 (examples of which include the acceleration sensor system 214, the position sensor system 216, the temperature sensing system 215, the motion sensing system 219 and the tamper sensing system 211).

At step 404 of the power management method 400, the camera control function 290 forces the camera 12 to operate in a certain operation mode in response to detecting the event at step 402. It is noted that if the camera 12 was already operating in the certain operation mode, then step 404 of the power management method 400 causes the camera 12 to continue operating in the certain operation mode, whereas if the camera 12 was not operating in the certain operation mode, step 404 of the power management method 400 changes the mode of operation of the camera 12 and, in particular, switches the camera 12 into the certain operation mode.

The certain operation mode may be the low power mode or the high power mode, depending on the embodiment and on the event detected at step 402 (and on the policy 285), as will now be described.

In one non-limiting embodiment of the power management method 400, and with reference to Fig. 4A, detecting the event by the camera 12 at step 402 may include detecting and/or recognizing events (which may include objects) such as faces, people, vehicles, buildings, objects, speech, sounds, movements, gestures, actions, a license plate read, an identity of an individual (e.g., a registered criminal), a gunshot event associated with the picking up of a sound that is detected to be a gunshot, a traffic accident event, an event indicative of vandalism or theft, and so on. The event is detected based on processing of images, videos, and/or audio 2081 captured by the camera 12 and processed by the processing device 202.

For instance, detecting an event may include the processing device 202 of the camera 12 accessing the one or more data sources 112 (e.g., including access control systems, video monitoring systems, building monitoring systems, analytics systems, automated license plate recognition (ALPR) systems, gunshot monitoring systems, and the like). In other embodiments, the images, video, and/or audio 2081 captured by the camera 12 can be run through a neural network trained to discriminate between significant events and non-significant events (e.g., such as those caused by wind and/or animals).

Further to detecting the event, it may be desirable to record images, video, and/or audio 2081 for further processing (e.g., for further processing by the camera 12 and/or the server 10).

The further processing may involve event assessment processing to substantiate the detected event. For instance, the event assessment processing may involve processing to confirm whether the event detected by the camera 12 has been correctly detected. The event assessment processing may involve determining whether the detected event is a significant event. A significant event may be understood as an event associated with a threat and/or an event requiring further action.

The event assessment processing may involve processing to confirm that a license plate number that has been detected / recognized is indeed associated with a target vehicle that corresponds to a stolen vehicle, or to a vehicle that is owned by a criminal, etc.

The event assessment processing may involve processing to analyze the images captured to identify shell casings further to identifying a gunshot event via the audio capture device.

In other instances, the event assessment processing of the images and/or the audio captured from the camera 12 may be carried out by the server 10.

Assuming therefore that the event detected by the camera 12 is significant, it may be desirable to record video and send it back rapidly to the server 10 for processing. As such, the certain operation mode may be the high power operation mode, which can involve sending high grade video (e.g., full-motion video at high resolution) in real-time or near real-time (at low latency and high bandwidth). This may be triggered irrespective of the battery charge level or current operation mode of the camera 12. That is to say, even if the battery charge level is low and the camera 12 is in low power mode, the fact that an event has been detected by the camera 12 warrants a response (namely sending high grade video in real-time or near real-time) that can maximize the ability of the camera 12 and/or the server 10 to obtain further details regarding the event, even if this means more rapidly depleting the battery 212 due to having entered high power mode.

In one example of implementation, the camera 12 operating in low power mode prior to detecting an event may have been operating such that image or video capture was disabled and audio capture was enabled. In this example, the camera 12 may detect a sound (e.g., by detecting a change in amplitude). Image or video capture may be re-enabled based on detecting sound (e.g., detecting sound indicative of a car, etc.).

While the camera 12 is in high power mode, the camera control function 290 may also be configured to collect data from the acceleration sensor system 214, the position sensor system 216, the temperature sensing system 215, the motion sensing system 219 and the tamper sensing system 211 and feed this information to the processing device 202 for analysis. This information may also be sent to the server 10 to further assist in detecting the event. The gathered motion data and position data can be sent in packets sent across the data network 8 via the network interface 206 and the antenna 218.

The camera 12 may subsequently remain in high power mode until the battery 212 is depleted. Alternatively, and as depicted in Fig. 4B, the camera 12 may wait for a condition to be met (step 405) and then decide what to do next (step 406). If the camera 12 had been in low power mode at the time of detecting the event at step 402 (which can be recorded in the memory 208 at step 403), then after the condition is met, the camera control function 290 may switch the camera 12 back into low power mode (the "YES" branch of step 406, followed by step 407).

In one embodiment, the condition verified at step 405 may be the passage of a certain amount of elapsed time. The certain amount of elapsed time may correspond to how much time is considered adequate to process high-quality video that could provide the camera 12 and/or the server 10 with a useful time window for conducting event assessment processing (e.g., substantiating the event detected by the camera 12, verifying the nature of the detected event, and/or determining whether the detected event is a significant event, i.e., whether the detected event warrants further action or whether a threat associated with the detected event is substantiated). The certain amount of elapsed time may likewise correspond to how much time is considered adequate to provide the processing device 202 and/or the server 10 with a useful time window for conducting such event assessment processing. This could be on the order of 10 seconds, 30 seconds, 5 minutes or any other period of time, which could vary according to factors such as the battery charge level.

In another embodiment, the condition verified at step 405 may be receipt of a "relinquish" signal.

In some instances, the relinquish signal may be produced internally by the camera 12 based on processing of the images, video, and/or audio 2081 captured by the audiovisual capture device 3000 (i.e., by the image capture device 230 and/or the audio capture device 231). Specifically, the camera control function 290 may itself perform event assessment processing on the captured images and/or the captured audio to substantiate the event detected by the camera 12 (e.g., verifying the nature of the detected event, etc.) and/or to determine whether the detected event is a significant event (e.g., assessing whether the detected event warrants further action, determining that a threat associated with the detected event is substantiated, etc.).

If the event assessment processing confirms that no further action is required and/or that there is no threat (despite the event detected by the camera 12), the camera 12 may generate the relinquish signal. However, if the event assessment processing confirms that there is a threat or that further action is required (or is inconclusive), event assessment processing may continue and/or an alert may be issued by the camera 12 to another entity (e.g., generating a command for an alert that is transmitted to the relevant authorities (e.g., law enforcement, fire departments, emergency services, etc.)). The further action may be encoded as part of the policy 285.

In other instances, the relinquish signal may be received from the server 10, which tells the camera 12 to switch back into its previous mode of operation, in this case low power mode. The relinquish signal may be generated by the server 10 after the server 10 (or a user thereof) is satisfied that the detected event does not warrant further action or that a threat associated with the detected event is not substantiated. Specifically, the server 10 may carry out event assessment processing of images, video, and/or audio 2081 received from the camera 12. However, if the event assessment processing confirms that there is a threat or that further action is required (or is inconclusive), event assessment processing may continue and/or an alert may be issued by the server 10 to another entity (e.g., generating a command for an alert that is transmitted to the relevant authorities (e.g., law enforcement, fire departments, emergency services, etc.)). The further action may be encoded as part of the policy 285.

Thus, it should be apparent that the event detected by the camera 12 at step 402 of the power management method 400 may or may not be a significant event. That is, it should be apparent that the event detected by the camera 12 at step 402 of the power management method 400 may or may not be an event that correlates with a threat or an event requiring action.

If the event is deemed significant (the "YES" branch of step 409), at step 404, and as previously described, the camera control function 290 may force the camera 12 to operate in high power mode so as to relay video of the current environment of the camera 12 right after detection of the event (or even while the event is still ongoing). The same conditions for potentially returning to low power mode may apply as previously described with reference to steps 405, 406 and 407.

In contrast, if it is found that the event is not deemed a significant event (the "NO" branch of step 409), then at step 410, the camera control function 290 may force the camera 12 to operate in low power mode so as to conserve power during the insignificant event. This could mean leaving the camera 12 in low power mode if it was already operating in low power mode, or switching it into low power mode if it had been operating in high power mode.
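
By way of non-limiting illustration only, the following sketch outlines the branching of the power management method 400, in which a detected event is assessed for significance (step 409) before the camera 12 is forced into high power or low power mode. The detector stub and confidence test stand in for event detection and event assessment processing and are assumptions introduced solely for the example.

    # Illustrative sketch only: the detector and the significance test are stubs.

    def detect_event(frame, audio):
        """Step 402 (stub): event detection over captured images and/or audio."""
        return {"type": "gunshot", "confidence": 0.92}

    def is_significant(event, threshold=0.8):
        """Step 409 (stub): a simple confidence test standing in for event assessment."""
        return event["confidence"] >= threshold

    def choose_mode(event):
        """Step 404: escalate for significant events; otherwise force low power to conserve energy."""
        return "high_power" if is_significant(event) else "low_power"

    print(choose_mode(detect_event(frame=None, audio=None)))   # -> high_power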

Of course, if the camera 12 had been in high power mode at the time of execution of step 402 (and step 403), then no particular action needs to be taken (the "NO" branch of step 406), and the camera 12 can remain in high power mode until possibly another process run by the camera control function 290 makes a change.

It is noted that the situation described above with reference to Figs. 3A and 3D may be combined with the situation described with reference to Fig. 4A, thereby creating a scenario where there is seemingly forced operation in low power mode (e.g., due to the low battery charge level or due to a planned relocation) at the same time as forced operation in high power mode (due to an event having been detected by the camera 12). This apparent contradiction can be resolved by the camera control function 290 consulting a policy 285 stored in the memory 208. The policy 285 outlines various possibilities and priorities for determining the exact conditions under which forced operation in high power mode will prevail versus those under which forced operation in low power mode will prevail.

It is noted that the situation described above with reference to Fig. 3A may be combined with the situation described with reference to Fig. 4A, thereby creating a scenario where an event detected by the camera 12 based on images, video, and/or audio processing and indicative of theft, vandalism, tampering or attempted tampering may be confirmed by detecting a physical change to the camera 12 indicative of theft, vandalism, tampering or attempted tampering. For instance, the event assessment processing may involve detecting a physical change to the camera 12 indicative of theft, vandalism, tampering or attempted tampering based on the images, video and/or audio generated by the camera 12 as well as movement of the camera 12, disconnection of the solar panel from the battery 212 or a temperature change in the surrounding environment.

In some cases, steps 304, 404 may include changing the operation mode of the camera 12 from high power mode to low power mode. For instance, this may be due to a trigger detected in steps 302, 402 such as power input to the camera 12 (e.g., an increase of the battery charge level), physical disconnection of the camera 12, theft, vandalism, tampering or attempted tampering.

In a further embodiment, the camera 12 is configured to operate in one of three operation modes, which can be termed a super-low power mode, a low power mode and a high power mode. Super-low power mode defines an operation mode in which the camera 12 consumes less power than in low power mode, and low power mode defines an operation mode in which the camera 12 consumes less power than in high power mode. In some cases, super-low power mode may be associated with power consumption of no more than W watts, low power mode may be associated with power consumption of no less than X watts, with X being greater than W, and high power mode may be associated with power consumption of no less than Y watts, with Y being greater than X. In still other cases, super-low power mode may be associated with peak power consumption of no more than W watts, low power mode may be associated with peak power consumption of no more than X watts, with X being greater than W, and high power mode may be associated with peak power consumption of no more than Y watts, with Y being greater than X.
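
By way of non-limiting illustration only, the following sketch expresses the peak power budgets of the three operation modes in the latter formulation above. W, X and Y are symbolic in the disclosure; the numeric values below are arbitrary examples chosen only to satisfy W < X < Y.

    # Illustrative sketch only: W, X and Y values are arbitrary examples.

    W_WATTS = 0.2   # super-low power: peak consumption no more than W
    X_WATTS = 1.5   # low power: peak consumption no more than X (X > W)
    Y_WATTS = 6.0   # high power: peak consumption no more than Y (Y > X)

    PEAK_BUDGET = {
        "super_low_power": W_WATTS,
        "low_power":       X_WATTS,
        "high_power":      Y_WATTS,
    }

    def within_budget(mode: str, measured_watts: float) -> bool:
        """Check a measured draw against the peak budget of the given mode."""
        return measured_watts <= PEAK_BUDGET[mode]

    assert within_budget("low_power", 1.2)
    assert not within_budget("super_low_power", 1.2)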

In some cases, in the low and high power modes, the camera 12 is configured to capture images, video, and/or audio 2081 using the audiovisual capture device 3000, whereas in the super-low power mode of operation, the camera 12 is configured to not capture any images, video, and/or audio 2081.

Transitions between operation modes are now described with reference to the finite state machine diagram in Fig. 5, which may be controlled by the camera control function 290. Whether it is in low power mode or high power mode, the camera control function 290 may be configured for responding to detection of a sleep command to force the camera 12 to operate in super-low power mode. The sleep command may be obtained from the server 10 over the data network 8. The sleep command may also be self-generated by the camera control function 290 after X minutes of lack of detected movement of the camera 12.

Another situation where the camera control function 290 forces the camera 12 to operate in super-low power mode may be where the camera control function 290 detects movement of the camera 12 but concludes, by consulting the relocation schedule 280, that the movement of the camera 12 was scheduled. If the camera 12 was in low power or high power mode, forcing the camera 12 to operate in super-low power mode in this way can allow power savings by shutting down the image taking capability of the camera 12 and keeping it that way until the camera 12 exits super-low power mode. Of course, if the camera 12 was already in super-low power mode, detection of scheduled movement by the camera control function 290 simply keeps the camera 12 operating in super-low power mode.

In the present non-limiting embodiment, the camera 12 can exit super-low power mode in one of two ways. Firstly, if there is no more movement of the camera 12 (e.g., for a certain threshold minimum period of time, such as 5 minutes) and if the solar panel 224 is producing power (even at night, a solar panel in an urban landscape will be able to produce some power), the camera control function 290 can infer that the camera 12 is not in a box but is positioned in an area where useful images or video or audio could be generated by the AV generation system 2000. As such, the camera control function 290 places the camera 12 in low power mode, causes the camera 12 to capture images in accordance with certain audiovisual generation parameters (e.g., at a certain frame rate and resolution), and causes the images, video and audio to be wirelessly transmitted to the server 10 via the network interface 206 and the antenna 218.

A second way for the camera 12 to exit super-low power mode is to detect movement of the camera 12 that is unscheduled. In this case, and in fact regardless of whether the camera 12 was in super-low power mode, low power mode or high power mode, the detection of unscheduled movement by the camera control function 290 forces the camera 12 to operate in the high power mode of operation so that the AV generation system 2000 starts generating full motion video (and so that the network interface 206 starts sending it in real-time or near real-time) in an attempt to assist the server 10 to determine what is happening in the vicinity of the camera 12.

With the camera 12 now in high power mode, it may either enter super-low power mode as previously described, i.e., by the camera control function 290 receiving a sleep command from the server 10, or this may happen if the camera control function 290 detects movement that is scheduled (as this would give the camera control function 290 knowledge that movement of the camera 12 was not likely due to an attempt at theft, vandalism, tampering or attempted tampering). The camera 12 may also enter low power mode from high power mode if the camera 12 is stationary and the battery charge level drops below a certain threshold. Of course, other conditions for exiting one operation mode and entering another may be specified in the behavior of the camera control function 290.
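
By way of non-limiting illustration only, the following sketch approximates the finite state machine of Fig. 5 as a transition table. The event names are invented for the example, and the table reflects only the transitions described above.

    # Illustrative sketch only: event names are hypothetical labels for the
    # triggers described in the text.

    TRANSITIONS = {
        ("low_power",       "sleep_command"):          "super_low_power",
        ("high_power",      "sleep_command"):          "super_low_power",
        ("low_power",       "scheduled_movement"):     "super_low_power",
        ("high_power",      "scheduled_movement"):     "super_low_power",
        ("super_low_power", "still_and_solar_power"):  "low_power",
        ("super_low_power", "unscheduled_movement"):   "high_power",
        ("low_power",       "unscheduled_movement"):   "high_power",
        ("high_power",      "unscheduled_movement"):   "high_power",
        ("high_power",      "stationary_low_battery"): "low_power",
    }

    def next_mode(current: str, event: str) -> str:
        """Return the next operation mode, staying in the current mode if no transition matches."""
        return TRANSITIONS.get((current, event), current)

    mode = "super_low_power"
    for ev in ("unscheduled_movement", "sleep_command", "still_and_solar_power"):
        mode = next_mode(mode, ev)
        print(ev, "->", mode)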

Of course, in still further embodiments, the camera 12 may be configured to operate in one of three or more operation modes, each associated with a different level or range of power consumption.

For instance, the three operation modes can be termed a low power mode, a high power mode and a covert mode.

Covert mode defines an operation mode in which the camera 12 is configured to generate images, video, and/or audio 2081 while also being configured to appear to be turned off. For example, to appear turned off in covert mode, the display of the camera 12 may be turned off, and the visual or audible alarm may be disabled.

In one embodiment, with reference to Fig. 6A, the camera control function 290 may force the camera 12 to operate in covert mode further to detecting a physical change indicative of potential theft, vandalism, tampering or attempted tampering as described above with respect to step 302 of the power management method 300. In this case, in step 304 described above, the camera control function 290 forces the camera 12 to operate in a certain operation mode in response to detection of the change at step 302, the certain operation mode being a covert mode, depending on the change detected at step 302 (and on the policy 285).

In another embodiment, with reference to Fig. 6B, the camera control function 290 may force the camera 12 to operate in covert mode further to detecting an event as described above with respect to step 402 of the power management method 400. As such, at step 404 described above, the camera control function 290 forces the camera 12 to operate in a certain operation mode in response to detection of the event at step 402, the certain operation mode being a covert mode, depending on the event detected at step 402 (and on the policy 285).

In covert mode, the camera 12 may operate in a substantially similar or similar fashion as it would operate in high power mode. For example, in covert mode, the AV generation system 2000 may be configured to generate images at a higher frame rate or resolution than in low power mode. In another example, in covert mode, the network interface 206 may be configured to send generated images at a higher bandwidth or at a higher update rate (lower latency) than in low power mode. For instance, the network interface 206 may be configured to send to the server 10 a larger percentage of the images generated by the AV generation system 2000 than in low power mode, thus requiring a lower bandwidth in low power mode than in covert mode. In yet another example, the flash may be used less frequently in low power mode than in covert mode.
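
By way of non-limiting illustration only, the following sketch shows one possible way of deriving a covert mode profile from high power mode settings while suppressing visible indicators. The profile fields are hypothetical and are not part of the present disclosure.

    # Illustrative sketch only: the profile fields are hypothetical.

    HIGH_POWER = {"frame_rate_fps": 30, "resolution": (1920, 1080),
                  "display_on": True, "alarm_enabled": True}

    def covert_profile(base: dict) -> dict:
        """Capture like high power mode, but appear to be turned off."""
        profile = dict(base)
        profile["display_on"] = False      # display and indicators off
        profile["alarm_enabled"] = False   # no visual or audible alarm
        return profile

    print(covert_profile(HIGH_POWER))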

Assuming therefore that a detected change to the camera 12 is indicative of potential theft, vandalism, tampering or attempted tampering and/or a detected event is a significant event, it may be desirable to record images, video and/or audio 2081 and send it back rapidly to the server 10 for processing while the camera 12 appears to be turned off. As such, covert mode can involve sending high grade video (e.g., full-motion video at high resolution) in real-time or near real-time (at low latency and high bandwidth) while the camera 12 appears to be turned off. This may be triggered irrespective of the battery charge level or current operation mode of the camera 12. That is to say, even if the battery charge level is low and the camera 12 is in low power mode, the fact that a change to the camera 12 indicative of potential vandalism or theft has been detected and/or an event has been detected warrants a response (namely sending high grade video in real-time or near real-time) that can maximize the ability of the server 10 and/or the processing device 202 of the camera 12 to determine what is happening (e.g., what the nature of the event is, what is happening to the camera 12), even if this means more rapidly depleting the battery 212 due to having entered covert mode.

While the camera 12 is in covert mode, the camera control function 290 may also be configured to collect data from the acceleration sensor system 214, the position sensor system 216, the temperature sensing system 215, the motion sensing system 219 and the tamper sensing system 211 and feed this information to the server 10 and/or the processing device 202 of the camera 12 for analysis. This could allow the server 10 to track the whereabouts of the camera 12 in case it continues to be moved and/or vandalized. The gathered motion data and position data can be sent in packets sent across the data network 8 via the network interface 206 and the antenna 218.

While the camera 12 is in covert mode, the camera control function 290 may also be configured to generate images, video and/or audio 2081 and process this information by the processing device 202 of the camera 12 for analysis. This could allow the camera 12 to detect and/or recognize objects and/or events.

The camera 12 may be configured to exit covert mode in a similar fashion as was described above with respect to high power mode.

Of course, in still further embodiments, the camera 12 may be configured to operate in one of four or more operation modes, each associated with a different level or range of power consumption.

For instance, the four operation modes can be termed a super-low power mode, a low power mode, a high power mode and a covert mode.

Thus, it will be appreciated that the present disclosure describes a method of forcing an electronic device, such as a security camera, to operate in a certain operation mode in response to a detected change to the electronic device, for example including movement, solar power disconnection or battery charge level decrease or increase. Such adjusting of the operation mode of the security camera in response to the detected change to the electronic device may enable the security camera to capture disruptive actions, such as theft, reorienting or vandalism, while conserving power in other circumstances.

Those skilled in the art will appreciate that separate boxes or illustrated separation of functional elements or modules of illustrated systems and devices does not necessarily require physical separation of such functions or modules, as communication between such elements can occur by way of messaging, function calls, shared memory space, and so on, without any such physical separation. As such, functions or modules need not be implemented in physically or logically separated platforms, although they are illustrated separately for ease of explanation herein. Different devices can have different designs, such that while some devices implement some functions in fixed function hardware, other devices can implement such functions in a programmable processor with code obtained from a machine-readable medium.

Although the present disclosure describes methods and processes with steps in a certain order, one or more steps of the methods and processes may be omitted or altered as appropriate. One or more steps may take place in an order other than that in which they are described, as appropriate.

Furthermore, although the present disclosure is described, at least in part, in terms of methods, a person of ordinary skill in the art will understand that the present disclosure is also directed to the various components for performing at least some of the aspects and features of the described methods, be it by way of hardware components, systems, software or any combination thereof.

Accordingly, certain technical solutions of the present disclosure may be embodied in the form of a software product. A suitable software product may be stored in a pre-recorded storage device or other similar non-volatile or non-transitory computer readable medium, for example. The software product includes instructions tangibly stored thereon that enable a processing system or device (e.g., a microprocessor) to execute examples of the methods disclosed herein.

Additionally or alternatively, certain technical solutions of the present disclosure may be embodied in the form of a system (e.g., an audiovisual system). A suitable system includes one or more hardware components. In some cases, the system includes a processing system or device (e.g., a microprocessor) configured to execute examples of the methods disclosed herein. The processing device may be enabled to execute examples of the methods disclosed herein based on instructions which may be stored on a suitable hardware component such as an instruction memory. The present disclosure may be embodied in other specific forms without departing from the subject matter of the claims. The described example embodiments are to be considered in all respects as being only illustrative and not restrictive. Selected features from one or more of the above-described embodiments may be combined to create alternative embodiments not explicitly described, features suitable for such combinations being understood within the scope of this disclosure.

Although the systems, devices and processes disclosed and shown herein may comprise a specific number of elements/components, the systems, devices and assemblies could be modified to include additional or fewer of such elements/components. For example, although any of the elements/components disclosed may be referenced as being singular, the embodiments disclosed herein could be modified to include a plurality of such elements/components. The subject matter described herein intends to cover and embrace all suitable changes in technology.