Title:
DISPLAYING A VIRTUAL SENSOR VALUE IN AN AUGMENTED REALITY USER INTERFACE
Document Type and Number:
WIPO Patent Application WO/2024/083560
Kind Code:
A1
Abstract:
A system (1) for displaying sensor values in an augmented reality user interface is configured to receive sensor data of a plurality of sensors (33,34) located in a space, obtain sensor location information indicative of locations of the sensors, select a location (97) in a field of view (91) which is visible on or through a display (9), determine, based on the locations of the sensors and the location, one or more virtual sensor values (95) by interpolating the sensor data, and control the display to display the one or more virtual sensor values in relation to the location in the field of view.

Inventors:
MEERBEEK BERENT (NL)
VAN DE SLUIS BARTEL (NL)
WENDT MATTHIAS (NL)
Application Number:
PCT/EP2023/077962
Publication Date:
April 25, 2024
Filing Date:
October 10, 2023
Assignee:
SIGNIFY HOLDING BV (NL)
International Classes:
G06F3/01
Foreign References:
US20190035152A12019-01-31
KR101621175B12016-05-13
US20210383611A12021-12-09
US10880973B22020-12-29
US11335072B22022-05-17
Attorney, Agent or Firm:
MAES, Jérôme, Eduard et al. (NL)
Claims:
CLAIMS:

1. A system (1,21) for displaying sensor values in an augmented reality user interface, said system (1,21) comprising: at least one input interface (3,23); at least one output interface (4,24); and at least one processor (5,25) configured to:

- receive, via said at least one input interface (3,23), sensor data values of a plurality of sensors (31-34) located in a space,

- obtain, via said at least one input interface (3,23), sensor location information indicative of locations of said sensors (31-34),

- select a location in a field of view (81,91) which is visible on or through a display (9),

- determine, based on said locations of said sensors (31-34) and said location, one or more virtual sensor values (87,95) at said location by interpolating between said sensor data values at said locations of said sensors, and

- control, via said at least one output interface (6,24), said display (9) to display said one or more virtual sensor values (87,95) in relation to said location in said field of view (81,91).

2. A system (1,21) as claimed in claim 1, wherein said sensor data values comprise historical sensor data values and said one or more virtual sensor values (87) comprise a plurality of virtual sensor values determined by interpolating said historical sensor data values.

3. A system (1,21) as claimed in claim 2, wherein said at least one processor (5,25) is configured to enable a user to scroll through said plurality of virtual sensor values (87).

4. A system (1,21) as claimed in claim 2 or 3, wherein said at least one processor (5,25) is configured to detect one or more events in said plurality of virtual sensor values and control said display (9) to display a timeline which comprises markers highlighting said one or more events.

5. A system (1,21) as claimed in any one of the preceding claims, wherein said sensor data values comprise current sensor data values and said one or more virtual sensor values (95) comprise a virtual sensor value (95) determined by interpolating said current sensor data values.

6. A system (1,21) as claimed in claim 5, wherein said at least one processor (5,25) is configured to select one or more control actions for controlling one or more further devices based on said virtual sensor value and control said display (9) to indicate said one or more control actions for controlling said one or more further devices.

7. A system (1,21) as claimed in claim 6, wherein said at least one processor (5,25) is configured to allow a user to select a control action of said one or more control actions and execute said control action when said control action is selected.

8. A system (1,21) as claimed in any one of the preceding claims, wherein said at least one processor (5,25) is configured to select a center of said field of view (81,91) as said location.

9. A system (1,21) as claimed in any one of the preceding claims, wherein said at least one processor (5,25) is configured to receive user input indicative of a user-specified location in said field of view (81,91) and select said user-specified location as said location.

10. A system (1,21) as claimed in any one of the preceding claims, wherein said at least one processor (5,25) is configured to:

- allow a user to select one of said one or more virtual sensor values, and

- configure an automatic device control routine with said selected virtual sensor value as a threshold, a device being controlled in a configured manner when a current sensor value of said virtual sensor crosses said threshold.

11. A system (1,21) as claimed in any one of the preceding claims, wherein said at least one processor (5,25) is configured to:

- detect and select one or more sensors from said plurality of sensors (31-34) in said space, said one or more sensors being visible in said field of view,

- select relevant sensor data values from said sensor data values, said relevant sensor data values originating from said one or more sensors,

- determine, based on said locations of said one or more sensors and said location, said one or more virtual sensor values by interpolating said relevant sensor data values.

12. A system (1,21) as claimed in any one of the preceding claims, wherein said at least one processor (5,25) is configured to control said display (9) to display said one or more virtual sensor values (95) at said location in said field of view (91) as an overlay on said field of view (91).

13. A system (1,21) as claimed in any one of the preceding claims, wherein said at least one processor (5,25) is configured to control said display (9) to display a representation (83) of a virtual sensor at said location in said field of view (81) as an overlay on said field of view (83) and a link (84) between said one or more virtual sensor values (87) and said representation (83) of said virtual sensor, said link indicating that said one or more virtual sensor values (87) correspond to said virtual sensor.

14. A method of displaying sensor values in an augmented reality user interface, said method comprising:

- receiving (101) sensor data values of a plurality of sensors located in a space;

- obtaining (103) sensor location information indicative of locations of said sensors;

- selecting (105) a location in a field of view which is visible on or through a display;

- determining (107), based on said locations of said sensors and said location, one or more virtual sensor values at said location by interpolating between said sensor data values at said locations of said sensors; and

- controlling (109) said display to display said one or more determined virtual sensor values in relation to said location in said field of view.

15. A computer program product for a computing device, the computer program product comprising computer program code to perform the method of claim 14 when the computer program product is run on a processing unit of the computing device.

Description:
Displaying a virtual sensor value in an augmented reality user interface

FIELD OF THE INVENTION

The invention relates to a system for displaying sensor values in an augmented reality user interface.

The invention further relates to a method of displaying sensor values in an augmented reality user interface.

The invention also relates to a computer program product enabling a computer system to perform such a method.

BACKGROUND OF THE INVENTION

Connected lighting systems like Hue and Interact have an increasing variety of sensors connected to their gateways. These sensors report sensor readings to the gateway, often at regular intervals. Various sensors and sensor bundles are added to the lighting system for light control use cases and for use cases beyond illumination, e.g. temperature, humidity, light, acoustics, PIR, thermopile, radar, and camera.

In connected lighting systems, the sensor data is often used for direct light control use cases, e.g. to increase the light output level if the ambient light sensor reports lower values. In some systems, the sensor values are stored and visualized in aggregated form on a dashboard, for example in a web portal or smart phone application. One of the disadvantages of these dashboards is that the user reading the data cannot easily link the sensor readings to particular locations in a building, making it difficult to interpret the data and extract meaningful and actionable insights from it.

Another way of visualizing sensor data involves the use of virtual reality. For example, US 11,335,072 B2 discloses providing a realistic 3D virtual model of monitored items. The 3D virtual model may include virtual icons that represent a set of sensors. Data visualizations are provided in the 3D virtual model depicting relevant information about the monitored items and the set of sensors. In an embodiment, Augmented Reality (AR) is used to allow digital manipulation or physical manipulation of the monitored items and/or the set of sensors. A drawback of the method of US 11,335,072 B2 is that the displayed information is limited to the actual sensor values of the actual sensors.

SUMMARY OF THE INVENTION

It is a first object of the invention to provide a system which is able to visualize useful information derived from actual sensor values of the actual sensors such that the information is easy to interpret.

It is a second object of the invention to provide a method which can be used to visualize useful information derived from actual sensor values of the actual sensors such that the information is easy to interpret.

In a first aspect of the invention, a system for displaying sensor values in an augmented reality user interface comprises at least one input interface, at least one output interface, and at least one processor configured to receive, via said at least one input interface, sensor data values of a plurality of sensors located in a space, obtain, via said at least one input interface, sensor location information indicative of locations of said sensors, select a location in a field of view which is visible on or through a display, determine, based on said locations of said sensors and said location, one or more virtual sensor values at said location by interpolating between said sensor data values at said locations of said sensors, and control, via said at least one output interface, said display to display said one or more determined virtual sensor values in relation to said location in said field of view.

By determining one or more virtual sensor values for the selected location in the field of view and displaying these one or more virtual sensor values in relation to this location in the field of view, e.g. as an overlay at this location, the user can easily and quickly see a current sensor value estimate and/or historic sensor value estimates of a sensor as if that sensor were placed at the selected location. This can be used for various use cases such as space optimization (visualizing occupation/presence), environmental monitoring solutions (temperature, humidity, light level), and store analytics.

Augmented reality makes it easy to select the desired location in the field of view. For instance, if a user wants to see a temperature estimate at a location without a sensor, the user may be able to just point the camera towards this location. Interpolation of sensor data values of actual sensors may then be used to estimate the temperature at this location, which is then displayed in relation to this location in the field of view.

Said at least one processor may be configured to control said display to display said one or more virtual sensor values at said location in said field of view as an overlay on said field of view, for example. Alternatively, said at least one processor may be configured to control said display to display a representation of a virtual sensor at said location in said field of view as an overlay on said field of view and a link between said one or more virtual sensor values and said representation of said virtual sensor, for example. Said link may indicate that said one or more virtual sensor values correspond to said virtual sensor.

A sensor may be, for example, a sensor integrated in a lighting device or a lighting-associated sensor, e.g. a sensor integrated in a luminaire or lighting fixture or a separate sensor device that is functionally connected to a lighting control system. Example sensor types that may be supported include light level, temperature, humidity, microphone, passive infrared, RF sensor, thermopile, and camera. Sensors may be part of a sensor array (e.g. a microphone array) or a “sensor bundle device” which comprises a combination of different sensor modalities.

Said sensor data values may comprise historical sensor data values and said one or more virtual sensor values may comprise a plurality of virtual sensor values determined by interpolating said historical sensor data values. This may be used for space optimization (visualizing occupation/presence) and/or store analytics, for example.

Said at least one processor may be configured to enable a user to scroll through said plurality of virtual sensor values. This makes it possible for the user to go through the historical virtual sensor values without showing too much data simultaneously.

Said at least one processor may be configured to detect one or more events in said plurality of virtual sensor values and control said display to display a timeline which comprises markers highlighting said one or more events. This allows the user to focus on the most important part of the information and makes it even easier for the user to interpret the sensor data values.

Said sensor data values may comprise current sensor data values and said one or more virtual sensor values may comprise a virtual sensor value determined by interpolating said current sensor data values. This may be used for environmental monitoring solutions (temperature, humidity, light level), for example.

Said at least one processor may be configured to select one or more control actions for controlling one or more further devices based on said virtual sensor value and control said display to indicate said one or more control actions for controlling said one or more further devices. The user may then use the system or another system to execute the indicated control action(s). For example, said at least one processor may be configured to allow a user to select a control action of said one or more control actions and execute said control action when said control action is selected.

If the current temperature is lower than a configured temperature, a heating action may be recommended. If the current temperature is higher than a configured temperature, a cooling action may be recommended. If the current humidity is higher than a configured humidity, activation of an air-conditioning system may be recommended. If the currently measured light level is lower than required at a certain location, an increase in light output level may be suggested for a certain lighting device. If the currently measured light level is higher than needed at a certain location, a decrease in light output level may be suggested for a certain lighting device.

Said at least one processor may be configured to select a center of said field of view as said location. This allows the user to simply adjust the location and/or orientation of the camera, e.g. by moving the user’s mobile device, to change the location for which the one or more virtual sensor values are shown. Especially for augmented reality glasses, the center of the field of view is the most logical location to select.

Said at least one processor may be configured to receive user input indicative of a user-specified location in said field of view and select said user-specified location as said location. For example, the user may be able to touch an area of the mobile phone’s display or the tablet’s display or point at a location visible through the user’s AR glasses to select the location.

Said at least one processor may be configured to allow a user to select one of said one or more virtual sensor values, and configure an automatic device control routine with said selected virtual sensor value as a threshold, a device being controlled in a configured manner when a current sensor value of said virtual sensor crosses said threshold. Conventionally, a device is controlled in a configured manner when the current sensor value of a real sensor crosses a threshold. By allowing the user to configure an automatic device control routine with a virtual sensor value as a threshold, the user does not need to have as many sensors and/or does not need to place sensors at inconvenient locations. For example, the user may adjust the camera’s location and/or orientation until a suitable location (e.g. with a suitable virtual sensor value) is found and then select this location as the location of the virtual sensor and a virtual sensor value at this location as the threshold. The system or another system may be configured to repeatedly determine the one or more virtual sensor values at this location, even when this location is not in the field of view.

Said at least one processor may be configured to detect and select one or more sensors from said plurality of sensors in said space, said one or more sensors being visible in said field of view, select relevant sensor data values from said sensor data values, said relevant sensor data values originating from said one or more sensors, and determine, based on said locations of said one or more sensors and said location, said one or more virtual sensor values by interpolating said relevant sensor data values. This makes it clear to the user which sensors’ values are interpolated, especially if the sensor values of the real sensors in the field of view are also displayed. Alternatively, all sensors in the space may be selected and the relevant sensor data values may originate from these sensors, for example.

In a second aspect of the invention, a method of displaying sensor values in an augmented reality user interface comprises receiving sensor data values of a plurality of sensors located in a space, obtaining sensor location information indicative of locations of said sensors, selecting a location in a field of view which is visible on or through a display, determining, based on said locations of said sensors and said location, one or more virtual sensor values at said location by interpolating between said sensor data values at said locations of said sensors, and controlling said display to display said determined one or more virtual sensor values in relation to said location in said field of view. Said method may be performed by software running on a programmable device. This software may be provided as a computer program product.

Moreover, a computer program for carrying out the methods described herein, as well as a non-transitory computer readable storage-medium storing the computer program are provided. A computer program may, for example, be downloaded by or uploaded to an existing device or be stored upon manufacturing of these systems.

A non-transitory computer-readable storage medium stores at least one software code portion, the software code portion, when executed or processed by a computer, being configured to perform executable operations for displaying sensor values in an augmented reality user interface.

The executable operations comprise receiving sensor data values of a plurality of sensors located in a space, obtaining sensor location information indicative of locations of said sensors, selecting a location in a field of view which is visible on or through a display, determining, based on said locations of said sensors and said location, one or more virtual sensor values by interpolating said sensor data values, and controlling said display to display said one or more virtual sensor values in relation to said location in said field of view.

As will be appreciated by one skilled in the art, aspects of the present invention may be embodied as a device, a method or a computer program product. Accordingly, aspects of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, microcode, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a "circuit", "module" or "system." Functions described in this disclosure may be implemented as an algorithm executed by a processor/microprocessor of a computer. Furthermore, aspects of the present invention may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied, e.g., stored, thereon.

Any combination of one or more computer readable medium(s) may be utilized. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a computer readable storage medium may include, but are not limited to, the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of the present invention, a computer readable storage medium may be any tangible medium that can contain, or store, a program for use by or in connection with an instruction execution system, apparatus, or device.

A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.

Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber, cable, RF, etc., or any suitable combination of the foregoing. Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java(TM), Smalltalk, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).

Aspects of the present invention are described below with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the present invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor, in particular a microprocessor or a central processing unit (CPU), of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer, other programmable data processing apparatus, or other devices create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.

These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.

The computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.

The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of devices, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.

BRIEF DESCRIPTION OF THE DRAWINGS

These and other aspects of the invention are apparent from and will be further elucidated, by way of example, with reference to the drawings, in which:

Fig. 1 is a block diagram of a first embodiment of the system;

Fig. 2 is a block diagram of a second embodiment of the system;

Fig. 3 shows an example of a building in which the system may be used;

Fig. 4 is a flow diagram of a first embodiment of the method;

Fig. 5 is a flow diagram of a second embodiment of the method;

Fig. 6 is a flow diagram of a third embodiment of the method;

Fig. 7 shows a first example of an augmented reality user interface;

Fig. 8 is a flow diagram of a fourth embodiment of the method;

Fig. 9 shows a second example of an augmented reality user interface;

Fig. 10 is a flow diagram of a fifth embodiment of the method; and

Fig. 11 is a block diagram of an exemplary data processing system for performing the method of the invention.

Corresponding elements in the drawings are denoted by the same reference numeral.

DETAILED DESCRIPTION OF THE EMBODIMENTS

Fig. 1 shows a first embodiment of the system for displaying sensor values in an augmented reality user interface. In this first embodiment, the system is a mobile device 1. The mobile device 1 may be a mobile phone, a tablet, or augmented reality glasses, for example.

In the example of Fig. 1, sensors 31-34 are located in the same space, e.g. in the same room. Mobile device 1 is able to obtain sensor data values of sensors 31-34 via a controller/gateway 16 and optionally via an Internet server 13. The sensors 31-34 may be incorporated into lighting devices and the controller 16 may be a light controller 16, for example. Mobile device 1 may also be able to obtain sensor data values of other sensors in other spaces via controller 16 and optionally via Internet server 13. These other sensors are not shown in Fig. 1.

Controller 16 is connected to a wireless LAN access point 17, e.g. via Wi-Fi or Ethernet. The wireless LAN access point 17 is connected to the Internet 11. In an alternative embodiment, the sensors 31-34 can communicate with other devices without the use of a controller, e.g. directly via Bluetooth or via the Internet server 13. The sensors 31-34 may be capable of receiving and transmitting Wi-Fi signals, for example. The Internet server 13 is also connected to the Internet 11.

The mobile device 1 comprises a receiver 3, a transmitter 4, a processor 5, memory 7, a camera 8, a (e.g. touchscreen) display 9, and a display interface 6 between the processor 5 and the display 9. The processor 5 is configured to receive, via the receiver 3, sensor data values of at least the sensors 31-34, obtain, e.g. via the receiver 3 or the camera 8, sensor location information indicative of locations of at least the sensors 31-34, and select a location in a field of view which is visible on or through the display 9.

The processor 5 is configured to analyze images captured by the camera 8 to determine the field of view. If the mobile device 1 is a pair of augmented reality glasses, this field of view is visible through the glasses. If the mobile device 1 is a mobile phone or a tablet, the processor 5 is configured to display images captured by camera 8.

The processor 5 is further configured to determine, based on the locations of the sensors 31-34 and the location in the field of view, one or more virtual sensor values by interpolating the sensor data values, and control, via the display interface 6, the display 9 to display the one or more virtual sensor values in relation to the location in the field of view.

In the embodiment of the mobile device 1 shown in Fig. 1, the mobile device 1 comprises one processor 5. In an alternative embodiment, the mobile device 1 comprises multiple processors. The processor 5 of the mobile device 1 may be a general-purpose processor, e.g. from ARM or Qualcomm, or an application-specific processor. The processor 5 of the mobile device 1 may run an Android or iOS operating system for example. The camera 8 may comprise a CMOS or CCD sensor, for example. The display 9 may comprise an LCD or OLED display panel, for example. The memory 7 may comprise one or more memory units. The memory 7 may comprise solid state memory, for example.

The receiver 3 and the transmitter 4 may use one or more wireless communication technologies, e.g. Wi-Fi (IEEE 802.11) for communicating with the wireless LAN access point 17, for example. In an alternative embodiment, multiple receivers and/or multiple transmitters are used instead of a single receiver and a single transmitter. In the embodiment shown in Fig. 1, a separate receiver and a separate transmitter are used. In an alternative embodiment, the receiver 3 and the transmitter 4 are combined into a transceiver. The mobile device 1 may comprise other components typical for a mobile device such as a battery and a power connector. The invention may be implemented using a computer program running on one or more processors.

Fig. 2 shows a second embodiment of the system for displaying sensor values in an augmented reality user interface. In this second embodiment, the system is a computer 21. The computer 21 is connected to the Internet 11 and acts as a server. The computer 21 may be operated by a lighting company, for example. In the example of Fig. 2, the computer 21 controls the mobile device 41, e.g. a web browser or other app running on mobile device 41, to display the augmented reality user interface. The mobile device 41 may be a mobile phone, a tablet, or augmented reality glasses, for example. The computer 21 is able to obtain sensor data values of sensors 31-34 via the controller 16 and optionally via the Internet server 13.

The computer 21 comprises a receiver 23, a transmitter 24, a processor 25, and storage means 27. The processor 25 is configured to receive, via the receiver 23, sensor data values of at least the sensors 31-34, obtain, via the receiver 23, sensor location information indicative of locations of at least the sensors 31-34, and select a location in a field of view which is visible on or through a display of the mobile device 41.

The processor 25 is further configured to determine, based on the locations of the sensors 31-34 and the location in the field of view, one or more virtual sensor values by interpolating the sensor data values, and control, via the transmitter 24, the display of the mobile device 41 to display the one or more virtual sensor values in relation to the location in the field of view. The received sensor data values may be stored on the storage means 27, e.g. temporarily.

In the embodiment of the computer 21 shown in Fig. 2, the computer 21 comprises one processor 25. In an alternative embodiment, the computer 21 comprises multiple processors. The processor 25 of the computer 21 may be a general-purpose processor, e.g. from Intel or AMD, or an application-specific processor. The processor 25 of the computer 21 may run a Windows or Unix-based operating system for example. The storage means 27 may comprise one or more memory units. The storage means 27 may comprise one or more hard disks and/or solid-state memory, for example. The storage means 27 may be used to store an operating system, applications and application data, for example.

The receiver 23 and the transmitter 24 may use one or more wired and/or wireless communication technologies such as Ethernet and/or Wi-Fi (IEEE 802.11) to communicate with devices on the Internet 11, for example. In an alternative embodiment, multiple receivers and/or multiple transmitters are used instead of a single receiver and a single transmitter. In the embodiment shown in Fig. 2, a separate receiver and a separate transmitter are used. In an alternative embodiment, the receiver 23 and the transmitter 24 are combined into a transceiver. The computer 21 may comprise other components typical for a computer such as a power connector. The invention may be implemented using a computer program running on one or more processors.

In the embodiment of Fig. 2, the computer 21 is able to obtain sensor data values of sensors 31-34 via the (e.g. light) controller 16. In an alternative embodiment, the computer 21 is additionally or alternatively able to obtain sensor data values without the use of a (e.g. light) controller. For example, the sensors 31-34 may be connected directly to the wireless LAN access point 17 and may transmit their data to the Internet server 21 without passing a (e.g. light) controller, optionally via the Internet server 13.

Fig. 3 shows an example of a building in which the system of Fig. 1 or Fig. 2 may be used. In the example of Fig. 3, a user 69 uses mobile device 1 of Fig. 1. The building 61 comprises a hallway 63, a kitchen 64, and a living room 65. Wireless LAN access point 17 has been installed in the hallway 63. Sensors 31-34 and controller 16 have been installed in the living room 65. In the example of Fig. 3, the camera of the mobile device 1 captures an image of the sensors 33 and 34. In the example of Fig. 3, the mobile device 1 is a mobile phone and the image is displayed in the augmented reality user interface on the display of the mobile phone, e.g. as shown in Fig. 9. If augmented reality glasses were used instead of a mobile phone, the same field of view would be visible through the augmented reality glasses.

A first embodiment of the method of displaying sensor values in an augmented reality user interface is shown in Fig. 4. The method may be performed by the mobile device 1 of Fig. 1 or the cloud computer 21 of Fig. 2, for example.

A step 101 comprises receiving sensor data values of a plurality of sensors located in a space. For example, sensor device information may be retrieved by a controller/gateway (e.g. a Hue or Interact bridge) that polls the sensors for information about their configuration (occasionally) and the actually measured values (frequently). This sensor device information may be stored in a memory on the controller (e.g. controller 16 of Figs. 1 and 2) or in a cloud environment to which the controller is connected (e.g. cloud computer 21 of Fig. 2). Sensor device information may comprise information about the sensor name, modality, measurement unit, update frequency, field of view, monitored region, and calibration settings, for example. The sensor values typically comprise a timestamp and a measurement (e.g. light level in lux at time t).
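
By way of illustration only, such a sensor reading could be represented as a simple record. The following is a minimal sketch in Python; the class and field names (SensorReading, sensor_id, modality, unit, timestamp, value) and the example identifier are hypothetical and not prescribed by the embodiments.

from dataclasses import dataclass
from datetime import datetime

@dataclass
class SensorReading:
    """One measurement reported by a sensor to the controller/gateway."""
    sensor_id: str        # hypothetical identifier, e.g. "living-room-lux-1"
    modality: str         # e.g. "light_level", "temperature", "humidity"
    unit: str             # e.g. "lux", "degC", "%RH"
    timestamp: datetime   # when the measurement was taken
    value: float          # the measured value, e.g. light level in lux at time t

# Example: a light level measurement of 350 lux
reading = SensorReading("living-room-lux-1", "light_level", "lux",
                        datetime(2023, 10, 10, 9, 30), 350.0)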

The user might first need to perform an action before the sensor data values are retrieved, e.g. by starting an application or feature. The started application or feature might determine the specific type of sensor data that the user is interested in. For example, an “optimize light level” application might show the measured light level throughout the room, while an “acoustic comfort” application might visualize the measured sound levels in the space.

The user may be able, or may be asked, to select a certain period for which the sensor data values should be received, e.g. the previous minute, hour, day, week, month, or year. Only the sensor data values that are immediately needed may be received in step 101, or additional data may be received as well. This additional data may be displayed in the augmented reality user interface at the request of the user, for example. Sensors may offer prefiltering in order to reduce the amount of data. These filters may be under user control.

A step 103 comprises obtaining sensor location information indicative of locations of the sensors. Locations of sensor devices might be determined by analyzing images captured by the camera of an AR device (e.g. a mobile phone, a tablet, or augmented reality glasses). For instance, computer vision techniques may be used to detect (and possibly identify) sensor devices in the image, e.g. by detecting the specific shape of the sensor device. If the sensors are embedded in lighting devices, unique codes coded into the light emitted by the lighting devices may be used to determine the identities of the sensor devices in the camera images.

Alternatively or additionally, a floor plan, 3D room scan or Building Information Model (BIM) may be used. In this case, the locations of the sensor devices are indicated by a user or building owner. As a result, a sensor may be hidden or outside the camera’s field of view, while it is still possible to determine the sensor’s location based on the floor plan, the 3D room scan, or the BIM.

Alternatively or additionally, the sensor location information may be obtained from the sensors or from devices in which the sensors are embedded. In this case, steps 101 and 103 may be combined into a single step, for example. The sensor location information may also be input by a user.

A step 105 comprises selecting a location in a field of view which is visible on or through a display. For example, the center of the field of view may be selected automatically and/or the user may be allowed to select the location by touching an area of a mobile phone’s or tablet’s touchscreen display. The user may be able to fix the selected location such that the user can then adjust the location and/or orientation of the camera without changing the selected location.
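
One way to sketch the selection of step 105 is to assume that the AR framework supplies the camera position and a 3D ray through the chosen screen point (the center of the display or a touched pixel), and to intersect that ray with a horizontal plane. The function name and the flat-plane assumption below are illustrative only, not the prescribed method of the embodiments.

import numpy as np

def screen_point_to_location(cam_pos, ray_dir, plane_z=0.0):
    """Intersect the camera ray through the chosen screen point with a
    horizontal plane z = plane_z and return the 3D location, or None if the
    ray does not hit the plane in front of the camera. cam_pos and ray_dir
    are assumed to be provided by the AR framework."""
    cam_pos = np.asarray(cam_pos, dtype=float)
    ray_dir = np.asarray(ray_dir, dtype=float)
    if abs(ray_dir[2]) < 1e-9:
        return None                      # ray is parallel to the plane
    t = (plane_z - cam_pos[2]) / ray_dir[2]
    if t <= 0:
        return None                      # intersection lies behind the camera
    return cam_pos + t * ray_dir         # selected location in world coordinates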

One or more of steps 101, 103, and 105 may be performed partly or completely in parallel. Additionally or alternatively, one or more of steps 101, 103, and 105 may be performed in sequence. For example, step 101 may be performed first, step 103 may be performed next, and step 105 may be performed last.

A step 107 is performed after steps 101, 103, and 105 have been performed. Step 107 comprises determining, based on the locations of the sensors obtained in step 103 and the location selected in step 105, one or more virtual sensor values by interpolating the sensor data values received in step 101. For example, the most recent sensor data values, e.g. relating to the most recent sensor events, or the most important sensor data values, may be selected by default in step 107. When determining the most recent sensor data values or most important sensor data values, the most recent sensor events which are caused by the AR device or its user may be excluded (or visually marked). Alternatively, only data relevant to an application selected by the user (as described in relation to step 101) may be selected in step 107.

A step 109 comprises controlling the display to display the one or more virtual sensor values determined in step 107 in relation to the location in the field of view selected in step 105. Thus, for a specific location in the field of view, the one or more virtual sensor values are derived from the actual sensor values and the sensor location information. For example, the light level at a specific location might be estimated by taking the actual measured values from nearby light sensors and determining a weighted average of these actual measured values, wherein the weights are determined based on the distances between the location of the visualization and each sensor location. Optionally, the augmented reality user interface also shows the actual output values from the actual sensors (e.g. at the location of the sensors).
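
The weighted average described above could, for example, be implemented as inverse-distance weighting. The sketch below shows one possible implementation of the interpolation of step 107 under that assumption; the function name and the use of NumPy are illustrative, and other interpolation schemes may be used instead.

import numpy as np

def interpolate_virtual_value(target, sensor_locations, sensor_values, power=2.0):
    """Estimate a virtual sensor value at the selected location (step 107) as a
    weighted average of the actual sensor values, with weights based on the
    distances between the selected location and each sensor location."""
    target = np.asarray(target, dtype=float)
    locations = np.asarray(sensor_locations, dtype=float)
    values = np.asarray(sensor_values, dtype=float)
    distances = np.linalg.norm(locations - target, axis=1)
    if np.any(distances < 1e-6):
        # The selected location coincides with a real sensor: use its value.
        return float(values[np.argmin(distances)])
    weights = 1.0 / distances ** power   # nearer sensors contribute more
    return float(np.sum(weights * values) / np.sum(weights))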

The specific information visualization technique may be determined based on the type of sensor data values (discrete, continuous, numeric, categorical, etc.), the sensor modality (audio, visual, light level, etc.), and/or the selected period, for example. For instance, when the sensor data values represent light level in lux, the optimal visualization might be to darken/brighten the pixels of the camera image as a function of (historically) measured lux level at the various locations in the room. If the sensor data values represent sound level measured in decibels, the overlay might include a color coding where quiet areas are visualized in light green and noisy areas are visualized in dark red. In these visualizations, only some of the pixels are adjusted based on actually measured sensor values; most of the pixels are adjusted based on virtual sensor values.
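
The color coding mentioned for sound levels could be sketched as a simple linear blend between two colors. The decibel range and the RGB values below are illustrative assumptions, not values taken from the embodiments.

def sound_level_to_color(db, quiet_db=30.0, noisy_db=80.0):
    """Map a (virtual) sound level in decibels to an RGB overlay color:
    light green for quiet areas, dark red for noisy areas."""
    light_green, dark_red = (144, 238, 144), (139, 0, 0)
    t = min(max((db - quiet_db) / (noisy_db - quiet_db), 0.0), 1.0)
    return tuple(round((1 - t) * g + t * r) for g, r in zip(light_green, dark_red))

# Example: a moderately noisy location maps to a color between the two extremes
print(sound_level_to_color(60.0))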

The sensor value(s) may be visualized, for example, in an overlay over the AR camera image and continuously updated as the user moves the AR device around, such that the data visualization is represented at the right location on top of the camera image. In this way, the user can directly see the values and link them to real world locations.

After step 109 has been performed, step 105 is repeated and the method proceeds as shown in Fig. 4. Optionally, step 101 and/or step 103 may be repeated after step 109, e.g. to receive and display new sensor values and/or when locations of one or more sensors change. This is not shown in Fig. 4.

A second embodiment of the method of displaying sensor values in an augmented reality user interface is shown in Fig. 5. The method may be performed by the mobile device 1 of Fig. 1 or the cloud computer 21 of Fig. 2, for example. The second embodiment of Fig. 5 is an extension of the first embodiment of Fig. 4. In the embodiment of Fig. 5, steps 121 and 123 are performed after step 109 of Fig. 4.

Step 121 comprises allowing a user to select one of the one or more virtual sensor values. If the user selects one of the one or more virtual sensor values in step 121, then step 123 is performed. Otherwise, step 105 is repeated and the method proceeds as shown in Fig. 5. Step 123 comprises configuring an automatic device control routine with the virtual sensor value selected in step 121 as a threshold. As a result, a device is controlled in a configured manner when a current sensor value of the virtual sensor crosses the threshold. After step 123, step 105 is repeated and the method proceeds as shown in Fig. 5.

Conventionally, a device is controlled in a configured manner when the current sensor value of a real sensor crosses a threshold. By allowing the user to configure an automatic device control routine with a virtual sensor value as a threshold, the user does not need to have as many sensors and/or does not need to place sensors at inconvenient locations. For example, the user may adjust the camera’s location and/or orientation until a suitable location (e.g. with a suitable virtual sensor value) is found and then select this location as the location of the virtual sensor and a virtual sensor value at this location as the threshold. The system performing the method or another system may be configured to repeatedly determine the one or more virtual sensor values at this location, even when this location is not in the field of view.
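
Such an automatic device control routine could be sketched as follows, reusing the interpolate_virtual_value sketch given earlier in this description; the callbacks get_current_readings and control_device, the polling interval, and the rising/falling flag are hypothetical placeholders for the surrounding system, not part of the claimed embodiments.

import time

def run_virtual_sensor_routine(location, threshold, get_current_readings,
                               control_device, rising=True, poll_seconds=60):
    """Sketch of steps 121 and 123: after the user has selected a virtual sensor
    value as a threshold, repeatedly recompute the virtual value at the fixed
    location (even when it is no longer in the field of view) and control the
    device in the configured manner when the value crosses the threshold."""
    previous = None
    while True:
        locations, values = get_current_readings()    # current sensor data values
        current = interpolate_virtual_value(location, locations, values)
        if previous is not None:
            crossed_up = previous <= threshold < current
            crossed_down = previous >= threshold > current
            if (rising and crossed_up) or (not rising and crossed_down):
                control_device()                       # e.g. switch a light or heater
        previous = current
        time.sleep(poll_seconds)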

A third embodiment of the method of displaying sensor values in an augmented reality user interface is shown in Fig. 6. The method may be performed by the mobile device 1 of Fig. 1 or the cloud computer 21 of Fig. 2, for example. The third embodiment of Fig. 6 is an extension of the first embodiment of Fig. 4.

Step 101 comprises receiving sensor data values of a plurality of sensors located in a space. In the embodiment of Fig. 6, the sensor data values received in step 101 comprise historical sensor data values. Step 103 comprises obtaining sensor location information indicative of locations of the sensors. Step 105 comprises selecting a location in a field of view which is visible on or through a display.

In the embodiment of Fig. 6, step 105 is implemented by a step 141. Step 141 comprises selecting a center of the field of view as the location. This allows the user to simply adjust the location and/or orientation of the camera, e.g. by moving the user’s mobile device, to change the location for which the one or more virtual sensor values are shown.

Step 107 is performed after steps 101, 103, and 105 have been performed. Step 107 comprises determining, based on the locations of the sensors obtained in step 103 and the location selected in step 105, one or more virtual sensor values by interpolating the sensor data values received in step 101. In the embodiment of Fig. 6, step 107 comprises determining the one or more virtual sensor values by interpolating the historical sensor data values received in step 101.

A step 143 is performed after step 107. Step 143 comprises detecting one or more events in the plurality of virtual sensor values determined in step 107. For example, step 143 may comprise detecting outliers and/or big changes. A step 145 is performed after step 143. Step 145 comprises displaying the augmented reality user interface. Step 145 comprises step 109 of Fig. 4 and steps 147 and 149. Step 149 comprises controlling the display to display a timeline which comprises markers highlighting the one or more events detected in step 143.
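
The event detection of step 143 could, for instance, flag large jumps between consecutive virtual sensor values and statistical outliers. The thresholds in this sketch are illustrative assumptions.

import statistics

def detect_events(timestamps, values, jump_threshold=None, z_threshold=3.0):
    """Sketch of step 143: return (timestamp, value) pairs where the virtual
    sensor value makes a big change between consecutive samples or is an
    outlier relative to the whole series."""
    event_indices = set()
    if jump_threshold is not None:
        for i in range(1, len(values)):
            if abs(values[i] - values[i - 1]) >= jump_threshold:
                event_indices.add(i)                  # big change
    if len(values) > 1:
        mean = statistics.fmean(values)
        stdev = statistics.pstdev(values)
        if stdev > 0:
            event_indices.update(i for i, v in enumerate(values)
                                 if abs(v - mean) / stdev >= z_threshold)  # outlier
    return [(timestamps[i], values[i]) for i in sorted(event_indices)]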

Step 109 comprises controlling the display to display the one or more virtual sensor values determined in step 107 in relation to the location in the field of view selected in step 105. In the embodiment of Fig. 6, the sensor values are not displayed at the selected location but e.g. somewhere else in the field of view as an overlay or adjacent to the field of view. Step 147 comprises controlling the display to display a representation of a virtual sensor at the location in the field of view as an overlay on the field of view and a link between the one or more virtual sensor values displayed in step 109 and this representation of the virtual sensor.

Next, a step 151 is performed. Step 151 comprises enabling a user to scroll through the plurality of virtual sensor values displayed in step 109. If the user scrolls through the virtual sensor values, step 145 is repeated and the augmented reality user interface is thereby updated. Alternatively, step 105 may be repeated, e.g. when the user adjusts the location of the user’s mobile device.

Fig. 7 shows an example of an augmented reality user interface displayed in step 145 of the method of Fig. 6. Fig. 7 shows a field of view 81 of a camera of mobile device 1 displayed on display 9 of mobile device 1. Two sensors 31 and 32 are visible in the field of view 81. A representation 83 of a virtual sensor is displayed in the center of the field of view as an overlay on the field of view. Furthermore, virtual sensor values are displayed in a window 85 as a timeline/graph 87 to provide users with time-based navigation through the sensor data set.

The virtual sensor values comprise at least virtual sensor values determined based on historical sensor data values and may further comprise a virtual sensor value determined based on current sensor data values. In the embodiment of Fig. 6 and the example of Fig. 7, the timeline 87 comprises markers indicating significant events in the sensor data values, e.g. outliers and/or big changes. In the example of Fig. 7, two markers are displayed, including a marker 86. In an alternative embodiment, a timeline is displayed without markers.

The markers may be used to mark moments at which a control threshold is crossed. For example, when a movement threshold is exceeded, this may be marked on timeline 87. The markers may indicate moments at which the then-applicable control threshold was crossed in the past or moments at which the current control threshold would have been crossed when the current control threshold had been applied in the past. In the latter case, a user may be able to change this control threshold and the markers may then be updated accordingly. For instance, a user may change the threshold for a movement detection rule when the user is interested to check whether movements in the past would have been sufficient to trigger that adjusted rule.
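
The threshold-crossing markers described above could be recomputed whenever the user changes the control threshold, for example with a sketch such as the following; the function and variable names are illustrative.

def threshold_crossing_markers(timestamps, values, threshold):
    """Return the moments at which the historical (virtual) sensor values cross
    the given control threshold, so the markers on the timeline can be updated
    when the user adjusts the threshold."""
    markers = []
    for i in range(1, len(values)):
        rose = values[i - 1] < threshold <= values[i]
        fell = values[i - 1] > threshold >= values[i]
        if rose or fell:
            markers.append(timestamps[i])
    return markers

# If the user raises a movement threshold, the markers on timeline 87 can simply
# be recomputed with the new value:
# markers = threshold_crossing_markers(timestamps, historical_values, new_threshold)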

Additionally, a link 84 between window 85 and the representation 83 is displayed. Window 85 also includes a left arrow 88 and a right arrow 89 to allow the user to scroll through the virtual sensor values/timeline 87. When the user presses the left arrow 88, a new part of the timeline 87 with previously hidden (e.g. older) virtual sensor values is displayed on the left side of the timeline 87. When the user presses the right arrow 89, a new part of the timeline 87 with previously hidden (e.g. newer) virtual sensor values is displayed on the right side of the timeline 87.

A fourth embodiment of the method of displaying sensor values in an augmented reality user interface is shown in Fig. 8. The method may be performed by the mobile device 1 of Fig. 1 or the cloud computer 21 of Fig. 2, for example. The fourth embodiment of Fig. 8 is an extension of the first embodiment of Fig. 4.

Step 101 comprises receiving sensor data values of a plurality of sensors located in a space. In the embodiment of Fig. 8, the sensor data values received in step 101 comprise current sensor data values. Step 103 comprises obtaining sensor location information indicative of locations of the sensors.

A step 171 comprises receiving user input indicative of a user-specified location in the field of view. Step 105 is performed after step 171. Step 105 comprises selecting a location in a field of view which is visible on or through a display. In the embodiment of Fig. 8, step 105 is implemented by a step 173. Step 173 comprises selecting the user-specified location as the location. For example, the user may be able to touch an area of the touchscreen display of the user’s mobile phone or tablet or point at a location visible through the user’s AR glasses to select the location.

Step 107 is performed after steps 101, 103, and 105 have been performed. Step 107 comprises determining, based on the locations of the sensors obtained in step 103 and the location selected in step 105, one or more virtual sensor values by interpolating the sensor data values received in step 101. In the embodiment of Fig. 8, step 107 comprises determining a virtual sensor value by interpolating the current sensor data values received in step 101. A step 177 is performed after step 107. Step 177 comprises selecting one or more control actions based on the virtual sensor value determined in step 107. As a first example, if the current temperature is lower than a configured temperature, a heating action may be recommended. As a second example, if the current temperature is higher than a configured temperature, a cooling action may be recommended.

As a third example, if the current humidity is higher than a configured humidity, activation of an air-conditioning system may be recommended. As a fourth example, if the currently measured light level is lower than required at a certain location, an increase in light output level may be suggested for a certain lighting device. As a fifth example, if the currently measured light level is higher than needed at a certain location, a decrease in light output level may be suggested for a certain lighting device.
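
The selection of control actions in step 177 could be sketched as a simple mapping from the virtual sensor value and configured setpoints to recommended actions. The action strings and the setpoints dictionary below are illustrative assumptions.

def select_control_actions(modality, virtual_value, setpoints):
    """Sketch of step 177: recommend control actions based on the virtual sensor
    value and configured setpoints."""
    actions = []
    if modality == "temperature":
        if virtual_value < setpoints["target_temperature"]:
            actions.append("activate heating")
        elif virtual_value > setpoints["target_temperature"]:
            actions.append("activate cooling")
    elif modality == "humidity":
        if virtual_value > setpoints["max_humidity"]:
            actions.append("activate air conditioning")
    elif modality == "light_level":
        if virtual_value < setpoints["required_lux"]:
            actions.append("increase light output of a nearby lighting device")
        elif virtual_value > setpoints["required_lux"]:
            actions.append("decrease light output of a nearby lighting device")
    return actions

# Example: a virtual temperature of 19 degrees Celsius below a configured
# setpoint of 20 degrees Celsius yields ['activate heating']
print(select_control_actions("temperature", 19.0, {"target_temperature": 20.0}))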

Step 145 is performed after step 177. Step 145 comprises displaying the augmented reality user interface. In the embodiment of Fig. 8, step 145 comprises step 109 and a step 181. Step 181 comprises controlling the display to indicate the one or more control actions selected in step 177. Step 109 comprises controlling the display to display the one or more virtual sensor values determined in step 107 in relation to the location in the field of view selected in step 105. In the embodiment of Fig. 8, step 109 is implemented by a step 179. Step 179 comprises controlling the display to display the one or more virtual sensor values at the location in the field of view as an overlay on the field of view.

Next, a step 183 is performed. Step 183 comprises allowing a user to select a control action of the one or more control actions. If the user selects a control action of the one or more control actions in step 183, then a step 185 is performed. Otherwise, step 171 is repeated and the method proceeds as shown in Fig. 8. Step 185 comprises executing the control action selected in step 183. After step 185, step 171 is repeated and the method proceeds as shown in Fig. 8.

Fig. 9 shows an example of an augmented reality user interface displayed in step 145 of the method of Fig. 8. Fig. 9 shows a field of view 91 of a camera of mobile device 1 displayed on display 9 of mobile device 1. Two sensors 33 and 34 are visible in the field of view 91. Current sensor values 93 and 94 of sensors 33 and 34, respectively, are displayed at the locations of the sensors 33 and 34 in the field of view 91. A virtual sensor value 95 is displayed at a user-specified location 97 in the field of view 91 as an overlay on the field of view.

In the example of Fig. 9, sensor 33 has a current sensor value of 20 degrees Celsius and sensor 34 has a current sensor value of 18 degrees Celsius. A temperature of 19 degrees Celsius is determined by interpolation as virtual sensor value 95. This is a simple example of interpolation. Using knowledge of the type of sensor value (e.g. temperature), it may be possible to determine the virtual sensor value 95 relatively accurately. Often, the more real sensors are accessible, the more accurate the determined virtual sensor value becomes.
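
Using the interpolation sketch given earlier (an assumption, not the prescribed method of the embodiments), the example of Fig. 9 works out as follows: if the user-specified location 97 is equidistant from sensors 33 and 34, their weights are equal and the virtual temperature is simply the average of 20 and 18 degrees Celsius. The sensor positions below are assumed for illustration.

# Assumed sensor positions in metres; location 97 lies halfway between them.
sensor_locations = [(0.0, 0.0, 2.5), (4.0, 0.0, 2.5)]   # sensors 33 and 34
sensor_values = [20.0, 18.0]                            # current readings (deg C)
virtual_value = interpolate_virtual_value((2.0, 0.0, 2.5),
                                          sensor_locations, sensor_values)
print(virtual_value)   # 19.0, displayed as virtual sensor value 95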

A button 98 is also displayed as an overlay over the field of view 91. The button 98 indicates a control action “activate heating”. This control action has been selected because the virtual sensor value 95 (19 degrees Celsius) is below a configured value (e.g. 20 degrees Celsius). If the user presses the button 98, the heating system is activated. In the example of Fig. 9, the heating system was inactive. If the heating system was already active, the control action might have been “increase heating”.

Instead of a button, another user input element related to the visualized sensor data values may be displayed in the overlay. For example, if the light level at a certain location is low, a control slider which the user can use to dim up the light may be shown. The slider may be mapped to all the light sources that influence the light level at that location. Such a user input element could also be included in window 85 of Fig. 7.

In the example of Fig. 9, only current sensor values (real and virtual) are displayed. Alternatively or additionally, historical sensor values may be displayed. To be able to display historical sensor values of multiple sensors (e.g. of a virtual sensor and one or more real sensors or of multiple virtual sensors), the historical sensor values may be represented graphically. For instance, the historical sensor values may be represented as a (small) graph or as a time-ordered sequence of colored stripes in which the color of the stripe indicates the sensor value (e.g. red for high audio volume and green for low audio volume).
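As a non-limiting illustration of the colored-stripe representation, the sketch below maps a time-ordered series of historical values to color labels; the thresholds and color names are assumptions introduced here:

```python
# Illustrative sketch: encoding historical sensor values as colored stripes
# (e.g. red for high audio volume, green for low). Thresholds are assumptions.

def value_to_color(value, low_threshold, high_threshold):
    if value >= high_threshold:
        return "red"
    if value <= low_threshold:
        return "green"
    return "yellow"


def stripes_for_history(values, low_threshold=40.0, high_threshold=70.0):
    """Return one color per historical sample, in time order."""
    return [value_to_color(v, low_threshold, high_threshold) for v in values]


history = [35.0, 42.0, 71.0, 68.0, 30.0]   # e.g. audio volume in dB
print(stripes_for_history(history))        # ['green', 'yellow', 'red', 'yellow', 'green']
```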

A fifth embodiment of the method of displaying sensor values in an augmented reality user interface is shown in Fig. 10. The method may be performed by the mobile device 1 of Fig. 1 or the cloud computer 21 of Fig. 2, for example. The fifth embodiment of Fig. 10 is an extension of the first embodiment of Fig. 4. In the embodiment of Fig. 10, steps 201 and 203 are performed between steps 101-103 and step 107, and step 107 is implemented by a step 205.

Step 201 is performed after step 103. Step 201 comprises detecting and selecting, from the plurality of sensors in the space, one or more sensors which are visible in the field of view. The sensors may be detected by analyzing one or more images captured by a camera comprised in the augmented reality device, or may be determined to be present in the field of view based on information about the sensors' locations relative to the augmented reality device (e.g. obtained from an indoor positioning system). In an alternative embodiment, all sensors in the space are selected in step 201. In the embodiment of Fig. 10, the selection in step 201 is made based on the sensor location information obtained in step 103.
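Purely as an illustration of the location-based variant of step 201, the sketch below tests whether each sensor falls inside an assumed camera cone derived from the device position and heading; the flat 2D model, field-of-view angle, and range limit are assumptions introduced here:

```python
import math

def sensors_in_field_of_view(sensors, device_pos, device_heading_deg,
                             horizontal_fov_deg=60.0, max_range=10.0):
    """Return the sensors whose location falls inside the assumed camera cone.

    sensors: list of (sensor_id, (x, y)) tuples.
    device_pos: (x, y) of the AR device; device_heading_deg: viewing direction.
    """
    visible = []
    for sensor_id, (x, y) in sensors:
        dx, dy = x - device_pos[0], y - device_pos[1]
        distance = math.hypot(dx, dy)
        if distance == 0.0 or distance > max_range:
            continue
        bearing = math.degrees(math.atan2(dy, dx))
        # Smallest signed angle between the sensor bearing and the device heading.
        offset = (bearing - device_heading_deg + 180.0) % 360.0 - 180.0
        if abs(offset) <= horizontal_fov_deg / 2.0:
            visible.append(sensor_id)
    return visible


sensors = [("sensor33", (3.0, 0.5)), ("sensor34", (3.0, -0.5)), ("sensor31", (-2.0, 0.0))]
print(sensors_in_field_of_view(sensors, device_pos=(0.0, 0.0), device_heading_deg=0.0))
# ['sensor33', 'sensor34'] -- sensor31 is behind the device
```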

Step 203 is performed after steps 101 and 201 have been performed. Step 203 comprises selecting relevant sensor data values from the sensor data values received in step 101. The relevant sensor data values are the sensor data values which originate from the one or more sensors selected in step 201. Step 205 is performed after steps 203 and 105 have been performed. Step 205 comprises determining, based on the locations of the sensors selected in step 201, as obtained in step 103, and the location selected in step 105, the one or more virtual sensor values by interpolating the relevant sensor data values selected in step 203.
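As a further illustration only, the sketch below combines a step-203-style filtering of the received readings with a simple distance-weighted interpolation over the filtered values, analogous to step 205; the data shapes and identifiers are assumptions introduced here:

```python
# Illustrative sketch: filter the received readings down to the selected
# sensors, then interpolate only over those relevant readings.

def select_relevant_readings(all_readings, selected_sensor_ids):
    """Keep only the readings that originate from the selected sensors."""
    return {sid: value for sid, value in all_readings.items() if sid in selected_sensor_ids}


def interpolate_relevant(relevant_readings, sensor_locations, target):
    """Distance-weighted interpolation over the relevant readings only."""
    weighted_sum, weight_total = 0.0, 0.0
    for sid, value in relevant_readings.items():
        x, y = sensor_locations[sid]
        distance = ((target[0] - x) ** 2 + (target[1] - y) ** 2) ** 0.5
        if distance == 0.0:
            return value
        weight = 1.0 / distance
        weighted_sum += weight * value
        weight_total += weight
    return weighted_sum / weight_total


readings = {"sensor31": 17.0, "sensor33": 20.0, "sensor34": 18.0}
locations = {"sensor31": (-5.0, 0.0), "sensor33": (1.0, 0.0), "sensor34": (3.0, 0.0)}
relevant = select_relevant_readings(readings, {"sensor33", "sensor34"})
print(interpolate_relevant(relevant, locations, target=(2.0, 0.0)))   # 19.0
```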

The embodiments of Figs. 4 to 6, 8, and 10 differ from each other in multiple aspects, i.e. multiple steps have been added or replaced. In variations on these embodiments, only a subset of these steps is added or replaced and/or one or more steps is omitted. As a first example, step 141 may be omitted from the embodiment of Fig. 6 and/or added to the embodiment of Fig. 8. As a second example, steps 171 and 173 may be omitted from the embodiment of Fig. 8 and/or added to the embodiment of Fig. 6.

As a third example, step 147 may be omitted from the embodiment of Fig. 6 and/or added to the embodiment of Fig. 8. As a fourth example, step 179 may be omitted from the embodiment of Fig. 8 and/or added to the embodiment of Fig. 6. One or more of the embodiments may be combined. For example, the embodiment of Fig. 5 may be combined with the embodiment of Fig. 6 or Fig. 8 and/or with the embodiment of Fig. 10.

Fig. 11 depicts a block diagram illustrating an exemplary data processing system that may perform the method as described with reference to Figs. 4 to 6, 8, and 10.

As shown in Fig. 11, the data processing system 300 may include at least one processor 302 coupled to memory elements 304 through a system bus 306. As such, the data processing system may store program code within memory elements 304. Further, the processor 302 may execute the program code accessed from the memory elements 304 via the system bus 306. In one aspect, the data processing system may be implemented as a computer that is suitable for storing and/or executing program code. It should be appreciated, however, that the data processing system 300 may be implemented in the form of any system including a processor and a memory that is capable of performing the functions described within this specification. The data processing system may be an Internet/cloud server, for example. The memory elements 304 may include one or more physical memory devices such as, for example, local memory 308 and one or more bulk storage devices 310. The local memory may refer to random access memory or other non-persistent memory device(s) generally used during actual execution of the program code. A bulk storage device may be implemented as a hard drive or other persistent data storage device. The processing system 300 may also include one or more cache memories (not shown) that provide temporary storage of at least some program code in order to reduce the number of times program code must be retrieved from the bulk storage device 310 during execution. The processing system 300 may also be able to use memory elements of another processing system, e.g. if the processing system 300 is part of a cloud-computing platform.

Input/output (I/O) devices depicted as an input device 312 and an output device 314 optionally can be coupled to the data processing system. Examples of input devices may include, but are not limited to, a keyboard, a pointing device such as a mouse, a microphone (e.g. for voice and/or speech recognition), or the like. Examples of output devices may include, but are not limited to, a monitor or a display, speakers, or the like. Input and/or output devices may be coupled to the data processing system either directly or through intervening I/O controllers.

In an embodiment, the input and the output devices may be implemented as a combined input/output device (illustrated in Fig. 11 with a dashed line surrounding the input device 312 and the output device 314). An example of such a combined device is a touch sensitive display, also sometimes referred to as a “touch screen display” or simply “touch screen”. In such an embodiment, input to the device may be provided by a movement of a physical object, such as e.g. a stylus or a finger of a user, on or near the touch screen display.

A network adapter 316 may also be coupled to the data processing system to enable it to become coupled to other systems, computer systems, remote network devices, and/or remote storage devices through intervening private or public networks. The network adapter may comprise a data receiver for receiving data that is transmitted by said systems, devices and/or networks to the data processing system 300, and a data transmitter for transmitting data from the data processing system 300 to said systems, devices and/or networks. Modems, cable modems, and Ethernet cards are examples of different types of network adapter that may be used with the data processing system 300.

As pictured in Fig. 11, the memory elements 304 may store an application 318. In various embodiments, the application 318 may be stored in the local memory 308, the one or more bulk storage devices 310, or separate from the local memory and the bulk storage devices. It should be appreciated that the data processing system 300 may further execute an operating system (not shown in Fig. 11) that can facilitate execution of the application 318. The application 318, being implemented in the form of executable program code, can be executed by the data processing system 300, e.g., by the processor 302. Responsive to executing the application, the data processing system 300 may be configured to perform one or more operations or method steps described herein.

Various embodiments of the invention may be implemented as a program product for use with a computer system, where the program(s) of the program product define functions of the embodiments (including the methods described herein). In one embodiment, the program(s) can be contained on a variety of non-transitory computer-readable storage media, where, as used herein, the expression “non-transitory computer readable storage media” comprises all computer-readable media, with the sole exception being a transitory, propagating signal. In another embodiment, the program(s) can be contained on a variety of transitory computer-readable storage media. Illustrative computer-readable storage media include, but are not limited to: (i) non-writable storage media (e.g., read-only memory devices within a computer such as CD-ROM disks readable by a CD-ROM drive, ROM chips or any type of solid-state non-volatile semiconductor memory) on which information is permanently stored; and (ii) writable storage media (e.g., flash memory, floppy disks within a diskette drive or hard-disk drive or any type of solid-state random-access semiconductor memory) on which alterable information is stored. The computer program may be run on the processor 302 described herein.

The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.

The corresponding structures, materials, acts, and equivalents of all means or step plus function elements in the claims below are intended to include any structure, material, or act for performing the function in combination with other claimed elements as specifically claimed. The description of embodiments of the present invention has been presented for purposes of illustration, but is not intended to be exhaustive or limited to the implementations in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the present invention. The embodiments were chosen and described in order to best explain the principles and some practical applications of the present invention, and to enable others of ordinary skill in the art to understand the present invention for various embodiments with various modifications as are suited to the particular use contemplated.