

Title:
NAVIGATION GUIDANCE METHODS AND NAVIGATION GUIDANCE DEVICES
Document Type and Number:
WIPO Patent Application WO/2022/250605
Kind Code:
A1
Abstract:
According to various embodiments, there is provided a navigation guidance method. The navigation guidance method may include: determining pixel coordinates of a target in an input image; converting the determined pixel coordinates in the input image to template coordinates in a conversion template, based on comparing the input image to the conversion template; converting the template coordinates to guidance coordinates in a guidance image, based on comparing the guidance image to the conversion template; and providing the guidance coordinates to a guiding device configured to present visual instruction based on the guidance coordinates.

Inventors:
HIRAYAMA JUNICHI (SG)
KITAJIMA YUSUKE (SG)
ISHII TOSHIKI (SG)
Application Number:
PCT/SG2021/050285
Publication Date:
December 01, 2022
Filing Date:
May 24, 2021
Assignee:
HITACHI LTD (JP)
International Classes:
G01C21/20; G01C11/06; G06T7/73
Foreign References:
US20190286124A12019-09-19
CN110440810A2019-11-12
CN110017841A2019-07-16
US20180247122A12018-08-30
US20210104064A12021-04-08
CN105225240A2016-01-06
CN112465907A2021-03-09
Attorney, Agent or Firm:
VIERING, JENTSCHURA & PARTNER LLP (SG)
Claims:
CLAIMS

1. A navigation guidance method comprising: determining pixel coordinates of a target in an input image; converting the determined pixel coordinates in the input image to template coordinates in a conversion template, based on comparing the input image to the conversion template; converting the template coordinates to guidance coordinates in a guidance image, based on comparing the guidance image to the conversion template; and providing the guidance coordinates to a guiding device configured to present visual instruction based on the guidance coordinates.

2. The navigation guidance method of claim 1, wherein the guidance image is a first-person-view image, and wherein the guidance coordinates are pixel coordinates of the target in the first-person-view image.

3. The navigation guidance method of claim 2, further comprising: providing visual instruction using the guiding device by marking the target in the first-person-view image based on the pixel coordinates of the target in the first-person-view image.

4. The navigation guidance method of claim 1, wherein the guidance image is a plan view image, and wherein the guidance coordinates are pixel coordinates of the target in the plan view image.

5. The navigation guidance method of claim 4, further comprising: providing visual instruction using the guiding device by marking the target in the plan view image based on the pixel coordinates of the target in the plan view image.

6. The navigation guidance method of claim 1, wherein the guiding device comprises a plurality of light emitters, wherein the guidance image is a map indicating respective locations of the plurality of light emitters, and wherein the guidance coordinates identify a light emitter of the plurality of light emitters that is closest to a location of the target.

7. The navigation guidance method of claim 6, further comprising: providing visual instruction using the guiding device by switching on the light emitter identified by the guidance coordinates.

8. The navigation guidance method of any one of claims 1 to 7, wherein the conversion template comprises a three-dimensional model of an indoor environment that the target is located in.

9. The navigation guidance method of claim 8, wherein the template coordinates indicate a three-dimensional position within the three-dimensional model.

10. The navigation guidance method of claim 8, wherein converting the determined pixel coordinates in the input image to template coordinates in the conversion template comprises: extracting feature points in the input image; determining feature points in the three-dimensional model; selecting a view within the three-dimensional model having feature points that match the extracted feature points; mapping the pixel coordinates in the input image to a position in the selected view; and determining the three-dimensional position based on the mapped position in the selected view.

11. The navigation guidance method of claim 8, wherein converting the template coordinates to the guidance coordinates in the guidance image comprises: extracting feature points in the guidance image; selecting a view within the three-dimensional model having feature points that match the extracted feature points; mapping the template coordinates to a position in the selected view; and determining the guidance coordinates based on the mapped position in the selected view.

12. The navigation guidance method of any one of claims 1 to 7, wherein the conversion template comprises a plurality of template images of an indoor environment that the target is located in, wherein the plurality of template images are taken from different positions in the indoor environment.

13. The navigation guidance method of claim 12, wherein the template coordinates include pixel coordinates of the target in at least one of the plurality of template images.

14. The navigation guidance method of claim 12, wherein converting the determined pixel coordinates in the input image to template coordinates in the conversion template comprises: extracting feature points in the input image; determining feature points in each template image of the plurality of template images; computing a respective coordinate transform matrix between the input image and each template image of the plurality of template images based on the extracted feature points in the input image and the feature points of each template image; and transforming the pixel coordinates to a set of template coordinates for each template image of the plurality of template images, using the respective coordinate transform matrix.

15. The navigation guidance method of claim 12, wherein converting the template coordinates to the guidance coordinates in the guidance image comprises: extracting feature points in the guidance image; computing a respective coordinate transform matrix between the guidance image and each template image of the plurality of template images based on the extracted feature points in the guidance image and the feature points of each template image; selecting a template image of the plurality of template images that is closest to the guidance image, based on the computed coordinate transform matrices; and transforming the pixel coordinates to the template coordinates using the coordinate transform matrix computed for the selected template image.

16. The navigation guidance method of any one of claims 1 to 15, further comprising: receiving the guidance image from the guiding device.

17. The navigation guidance method of any one of claims 1 to 15, further comprising: receiving information on a position of the guiding device; and updating the guidance image based on the received information.

18. The navigation guidance method of any one of claims 1 to 15, wherein the input image is a surveillance image captured from an elevated angle.

19. A navigation guidance device comprising: a memory; and at least one processor communicatively coupled to the memory and configured to: determine pixel coordinates of a target in an input image; convert the determined pixel coordinates in the input image to template coordinates in a conversion template, based on comparing the input image to the conversion template; convert the template coordinates to guidance coordinates in a guidance image, based on comparing the guidance image to the conversion template; and provide the guidance coordinates to a guiding device configured to present visual instruction based on the guidance coordinates.

20. A non-transitory computer-readable storage medium, comprising instructions executable by at least one processor, to perform a method of navigation guidance comprising: determining pixel coordinates of a target in an input image; converting the determined pixel coordinates in the input image to template coordinates in a conversion template, based on comparing the input image to the conversion template; converting the template coordinates to guidance coordinates in a guidance image, based on comparing the guidance image to the conversion template; and providing the guidance coordinates to a guiding device configured to present visual instruction based on the guidance coordinates.

Description:
NAVIGATION GUIDANCE METHODS AND NAVIGATION GUIDANCE DEVICES

TECHNICAL FIELD

[0001] Various embodiments relate to navigation guidance methods and navigation guidance devices.

BACKGROUND

[0002] Dispatch systems may be used to automatically deploy staff to respond to events within an indoor facility. For example, security officers may be deployed to a target location to interview a suspicious person, cleaning staff may be deployed to areas of frequent human contact to perform cleaning, or building staff may be deployed to pick up an item. A guidance method may be required to guide the staff to the target location. In existing dispatch systems, the guidance method may be lacking, or unsuitable for indoor applications. For example, existing guidance methods generally rely on the global positioning system, which cannot accurately guide the staff in an indoor facility. Other guidance methods, such as LiDAR, are also undesirable due to the high equipment cost. Other guidance methods may include providing the user with a stationary third-person view image as a navigational reference. However, such a navigation reference does not provide the staff with information about the target location in the context of the building, and hence, may be of limited utility.

SUMMARY

[0003] According to various embodiments, there may be provided a navigation guidance method. The navigation guidance method may include: determining pixel coordinates of a target in an input image; converting the determined pixel coordinates in the input image to template coordinates in a conversion template, based on comparing the input image to the conversion template; converting the template coordinates to guidance coordinates in a guidance image, based on comparing the guidance image to the conversion template; and providing the guidance coordinates to a guiding device configured to present visual instruction based on the guidance coordinates.

[0004] According to various embodiments, there may be provided a navigation guidance device. The navigation guidance device may include a memory and at least one processor communicatively coupled to the memory. The at least one processor may be configured to: determine pixel coordinates of a target in an input image; convert the determined pixel coordinates in the input image to template coordinates in a conversion template, based on comparing the input image to the conversion template; convert the template coordinates to guidance coordinates in a guidance image, based on comparing the guidance image to the conversion template; and provide the guidance coordinates to a guiding device configured to present visual instruction based on the guidance coordinates.

[0005] According to various embodiments, there may be provided a non-transitory computer-readable storage medium including instructions executable by at least one processor, to perform a method of navigation guidance. The method of navigation guidance may include: determining pixel coordinates of a target in an input image; converting the determined pixel coordinates in the input image to template coordinates in a conversion template, based on comparing the input image to the conversion template; converting the template coordinates to guidance coordinates in a guidance image, based on comparing the guidance image to the conversion template; and providing the guidance coordinates to a guiding device configured to present visual instruction based on the guidance coordinates.

[0006] Additional features for advantageous embodiments are provided in the dependent claims.

BRIEF DESCRIPTION OF THE DRAWINGS

[0007] In the drawings, like reference characters generally refer to the same parts throughout the different views. The drawings are not necessarily to scale, emphasis instead generally being placed upon illustrating the principles of the invention. In the following description, various embodiments are described with reference to the following drawings, in which:

[0008] FIG. 1 shows a schematic diagram of a dispatch system according to various embodiments.

[0009] FIG. 2 is a block diagram providing an overview of a navigation guidance method according to various embodiments.

[0010] FIG. 3 shows a block diagram showing operation of a navigation guidance device according to various embodiments.

[0011] FIG. 4 shows a schematic diagram of the hardware of the navigation guidance device of FIG. 3 and its operating environment according to various embodiments.

[0012] FIG. 5 shows a flow diagram of a navigation guidance method according to various embodiments.

[0013] FIG. 6 illustrates the navigation guidance method described with respect to FIG. 5, with sample images of each process.

[0014] FIG. 7 shows a flow diagram of a navigation guidance method according to various embodiments.

[0015] FIG. 8 illustrates the navigation guidance method described with respect to FIG. 7, with sample images of each process.

[0016] FIG. 9 shows a flow diagram of a navigation guidance method according to various embodiments.

[0017] FIG. 10 illustrates the navigation guidance method described with respect to FIG. 9, with sample images of each process.

[0018] FIG. 11 shows a flow diagram of a navigation guidance method according to various embodiments.

[0019] FIG. 12 illustrates the navigation guidance method described with respect to FIG. 11, with sample images of each process.

[0020] FIG. 13 shows a flow diagram of a navigation guidance method according to various embodiments.

[0021] FIG. 14 illustrates the navigation guidance method described with respect to FIG. 13, with sample images of each process.

[0022] FIG. 15 shows a flow diagram of a navigation guidance method according to various embodiments.

[0023] FIG. 16 shows a block diagram of a navigation guidance device according to various embodiments.

DESCRIPTION

[0024] Embodiments described below in context of the devices are analogously valid for the respective methods, and vice versa. Furthermore, it will be understood that the embodiments described below may be combined, for example, a part of one embodiment may be combined with a part of another embodiment.

[0025] It will be understood that any property described herein for a specific device may also hold for any device described herein. It will be understood that any property described herein for a specific method may also hold for any method described herein. Furthermore, it will be understood that for any device or method described herein, not necessarily all the components or steps described must be enclosed in the device or method, but only some (but not all) components or steps may be enclosed.

[0026] In this context, the navigation guidance device as described in this description may include a memory which is for example used in the processing carried out in the navigation guidance device. A memory used in the embodiments may be a volatile memory, for example a DRAM (Dynamic Random Access Memory) or a non-volatile memory, for example a PROM (Programmable Read Only Memory), an EPROM (Erasable PROM), EEPROM (Electrically Erasable PROM), or a flash memory, e.g., a floating gate memory, a charge trapping memory, an MRAM (Magnetoresistive Random Access Memory) or a PCRAM (Phase Change Random Access Memory).

[0027] The term “coupled” (or “connected”) herein may be understood as electrically coupled or as mechanically coupled, for example attached or fixed, or just in contact without any fixation, and it will be understood that both direct coupling or indirect coupling (in other words: coupling without direct contact) may be provided.

[0028] In order that the invention may be readily understood and put into practical effect, various embodiments will now be described by way of examples and not limitations, and with reference to the figures.

[0029] FIG. 1 shows a schematic diagram of a dispatch system 100 according to various embodiments. The dispatch system 100 may include a processor 108. The dispatch system 100 may further include a sensor unit 102. The sensor unit 102 may include, for example, surveillance cameras. The sensor unit 102 may be configured to detect events and/or objects using image processing algorithms. For example, the sensor unit 102 may be configured to detect lost items, suspicious persons, surfaces or objects that are frequently touched etc., inside a building. The processor 108 may receive information about the detections, from the sensor unit 102. In alternative embodiments, the detection information may be manually provided to the processor 108, for example, in scenarios where the event or object is reported by a staff. The processor 108 may determine the location of the event or object based on images captured by the sensor unit 102, for example, based on coordinates of the event or object in the images. The processor 108 may select a staff 110 to be deployed to the event or to fetch the object, based on an optimization algorithm. The processor 108 may send a dispatch request to a mobile terminal 104 carried by the staff. The dispatch request may contain information about the event or object, including its location. The deployed staff 110 may respond to the event, for example, move to the object location to retrieve the object. The dispatch system 100 may include a guiding device 112. The guiding device 112 may guide the staff 110 to the object location, for example, in the form of a map or visualization aid, that is updated in real-time according to the movement of the staff 110. The guiding device 112 and the mobile terminal 104 may be provided as a single device, for example, a mobile device that runs a navigation guidance software. The processor 108 may provide navigation guidance information to the guiding device 112. The navigation guidance information may include data relating to visual instructions. The visual instructions may be displayed on the guiding device 112. The processor 108 may be connected to electrical appliances in the building where the event or object is located, and may additionally or alternatively, provide the visual instructions through operations of the electrical appliances. The staff 110 may report updates to the processor 108 through his mobile terminal 104. The updates may include a completion status of the task, or the status of the event or object.

[0030] Without the navigation guidance information, the staff 110 may spend time to search for the target location, or even get lost in the building. By providing the staff 110 with navigation guidance information, the dispatch system 100 may improve the work productivity of the staff 110 as the staff 110 may be able to respond to multiple events within a shorter time. The navigation guidance information may be provided in an intuitive, easy-to-understand format, so that the staff 110 may proceed directly to the location without having to spend additional time to comprehend the information.

[0031] FIG. 2 is a block diagram 200 providing an overview of a navigation guidance method according to various embodiments. The navigation guidance method may include providing visual instructions that lead a person to a target location in a building, or any other indoor facility. The target location 210 (represented by a pointer in the figure) may be visible in a third-person-view (TPV) image 202. The TPV image 202 may also be referred herein as an input image. The TPV image may be a surveillance image captured from an elevated angle. The TPV image 202 may be a perspective view taken by a camera mounted on a wall or ceiling, for example, by a surveillance camera that may be part of the sensor unit 102. The target location 210 may be represented by TPV coordinates 250 which indicate the position of the target location 210 in the TPV image 202. The TPV coordinates 250 may be pixel coordinates that correspond to the position of the target location in the TPV image 202, and hence, are also referred herein as “pixel coordinates”. The TPV coordinates 250 may be inputs to a viewpoint conversion process 260. The output of the viewpoint conversion process 260 may include guidance coordinates, also referred herein as guidance view (GV) coordinates. The guidance coordinates may be used to provide visual instructions based on a guidance image, also referred herein as a GV image. The guidance image may include at least one of a first-person-view (FPV) image 204, a plan view (PV) image 206, and a map indicating respective locations of lighting devices 208 in the building. The visual instructions may also include visual effects generated by the lighting devices 208 in the building.

[0032] According to various embodiments, the navigation guidance method may include converting the TPV coordinates 250 to FPV coordinates 252 of an FPV image 204. The FPV coordinates 252 may be an example of the guidance view coordinates. The FPV image 204 may be an image taken at a person’s eye-level. The FPV image 204 may also have a similar field-of-view as a person’s eyes. The FPV image 204 may emulate human vision. The FPV image 204 may also include images taken by a wearable camera worn on a staff, or images taken by a staff using a mobile phone. The FPV coordinates 252 may be pixel coordinates that correspond to the position of the target location 210 in the FPV image 204.

[0033] According to various embodiments, the navigation guidance method may include converting the TPV coordinates 250 to PV coordinates 254 of a PV image 206. The PV coordinates 254 may be an example of the guidance view coordinates. The PV image 206 may show a top view of indoor areas in the building. The PV image 206 may include indications of walls and may further include indications of furniture. The PV image 206 may include a map drawing, such as a floor plan diagram. The PV image 206 may include a top view photo captured by a camera mounted on a ceiling in the building. The PV coordinates 254 may be pixel coordinates that correspond to the position of the target location 210 in the PV image 206.

[0034] According to various embodiments, the navigation guidance method may include operating a lighting device 208 based on the guidance coordinates. The lighting device 208 may include a lighting control system and a plurality of light emitters such as ceiling lights or wall lights. The guidance coordinates may include an identifier (referred herein as lighting ID 256) of a light emitter that is closest to the target location 210. The lighting ID 256 may be determined based on the PV coordinates 254. The lighting device 208 may be configured to switch on a light emitter identified by the lighting ID 256 to provide illumination in the vicinity of the target location 210. The illumination may be the visual effect that guides the staff 110 to the target location 210.
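
As an illustration only (not part of the described system), the nearest-emitter lookup in the preceding paragraph could be sketched as follows. The lighting IDs and the dictionary of emitter positions are hypothetical placeholders for the correspondence data the lighting control system would hold.

```python
import math

# Hypothetical correspondence table: lighting ID -> (x, y) position in plan-view pixels.
LIGHT_EMITTER_POSITIONS = {
    "L-01": (120, 80),
    "L-02": (340, 80),
    "L-03": (120, 260),
    "L-04": (340, 260),
}

def nearest_lighting_id(pv_xy, emitter_positions=LIGHT_EMITTER_POSITIONS):
    """Return the ID of the light emitter closest to the target's plan-view coordinates."""
    tx, ty = pv_xy
    return min(emitter_positions,
               key=lambda lid: math.hypot(emitter_positions[lid][0] - tx,
                                          emitter_positions[lid][1] - ty))

# A target at plan-view pixel (300, 100) would map to emitter "L-02".
print(nearest_lighting_id((300, 100)))
```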

[0035] FIG. 3 shows a block diagram showing operation of a navigation guidance device 300 according to various embodiments. The navigation guidance device 300 may include a target detection unit 314, a TPV-template mapping unit 316, and a template-GV mapping unit 320. The navigation guidance device 300 may be configured to execute the viewpoint conversion process 260. The navigation guidance device 300 may be configured to receive a TPV image from an image monitoring unit 310. The image monitoring unit 310 may include the sensor unit 102 described with respect to FIG. 1. The target detection unit 314 may determine the TPV coordinates 250 of a target (for example, an object or an event) in the TPV image 202. In various embodiments, the target detection unit 314 may be further configured to detect the target by applying image processing algorithms on the TPV image 202. The TPV-template mapping unit 316 may be configured to receive the TPV coordinates 250 from the target detection unit 314. The TPV-template mapping unit 316 may be further configured to map the TPV coordinates 250 to template coordinates 350 based on a conversion template 318. The conversion template 318 may be stored in a template database. The navigation guidance device 300 may include the template database, or may be communicatively connected to the template database. The TPV-template mapping unit 316 may convert the TPV coordinates 250 to the template coordinates 350 by comparing the TPV image 202 to the conversion template 318. The template coordinates 350 may indicate a position of the target in the conversion template 318. For example, the conversion template 318 may include a three-dimensional (3D) model of an indoor environment that the target is located in, e.g. the building. The template coordinates 350 may point to a position within the 3D model.

[0036] The template-GV mapping unit 320 may be configured to map the template coordinates 350 to GV coordinates 360. The GV coordinates 360 may indicate a position of the target in, or relative to a GV image 362. The GV image 362 may be any one of an FPV image, a PV image, or a lighting emitter map. The template-GV mapping unit 320 may convert the template coordinates 350 to the GV coordinates 360 by comparing the GV image 362 to the conversion template 318. The template-GV mapping unit 320 may provide the GV coordinates 360 to a guiding device 112.

[0037] The deployed staff 110 may carry the guiding device 112 with him as he navigates through the building to find the target. The guiding device 112 may include a camera configured to capture live images of the indoor environment that the staff 110 is walking through. The live images may include the GV image 362, for example, the FPV image 204, in various embodiments. The guiding device 112 may transmit these live images to the template-GV mapping unit 320 so that the visual instructions may be indicated, for example, overlaid, on the GV image 362. The template-GV mapping unit 320 may provide the visual instructions to the guiding device 112. The template-GV mapping unit 320 may also be configured to determine the deployed staff's real-time position based on the received live images.

[0038] The guiding device 112 may also include a positioning sensor, such as a global positioning system (GPS) locator or an inertial measurement unit (IMU), that may determine a position of the guiding device 112, and hence the position of the deployed staff. The guiding device 112 may also transmit the determined position to the template-GV mapping unit 320. The template-GV mapping unit 320 may update its visual instructions sent to the guiding device 112 based on the determined position. The processes of transmitting the TPV image to the viewpoint conversion module 302, determining image coordinates, and determining the template coordinates may be executed repeatedly so that any changes in the target location may be updated. The processes of determining the GV coordinates, sending the GV coordinates to the guiding device 112 and providing feedback in the form of GV or determined position of the deployed staff, may be executed repeatedly so that any changes in the deployed staff location may be updated and the visual instructions may be updated accordingly.

[0039] FIG. 4 shows a schematic diagram of the hardware of the navigation guidance device 300 and its operating environment according to various embodiments. The navigation guidance device 300 may be an application that runs on a processor. The processor may be part of a server 404. A database 402 may store the conversion template 318. A sensor unit 102 may include surveillance cameras, and may be configured to monitor a target area 410. The target area 410 may be an indoor environment. The target location 210 may be located within the target area 410. Each of the server 404, the database 402, the sensor unit 102 and the guiding device 112 may be connected to a network 406. The server 404 may communicate with at least one of the database 402, the sensor unit 102 and the guiding device 112 through the network 406. The network 406 may include, for example, a mobile network, a local area network, a wide area network or the Internet. The network 406 may include wireless communication means, for example, WiFi, telecommunications, mobile data, Bluetooth, near field communication, etc. The network 406 may further include wired communication means such as Ethernet. The network 406 may include more than one type of communication means. In other words, the network 406 may include a combination of network types, and may connect to each device via a different communication means. For example, the network 406 may include a wired communication link connected to the sensor unit and may include a wireless communication link to the guiding device 112.

[0040] FIG. 5 shows a flow diagram of a navigation guidance method 500 according to various embodiments. In the navigation guidance method 500, the conversion template 318 may be a 3D model of the target area 410, hereinafter referred to simply as “3D model 518”. Accordingly, the template coordinates 350 may be 3D model coordinates 550. The GV may be an FPV image 204. Accordingly, the GV coordinates may be FPV coordinates 560, which may be pixel coordinates of the target in the FPV image 204.

[0041] As described with respect to FIG. 3, the navigation guidance device 300 may include a target detection unit 314, a TPV-template mapping unit 316 and a template-GV mapping unit 320. The navigation guidance device 300 may further include, or may be coupled to, a database storing the 3D model 518 of the target area. The target detection unit 314 may receive a TPV image 202 from an image monitoring unit 310. The image monitoring unit 310 may capture the TPV image 202, in 552. The target detection unit 314 may detect a target in the TPV image 202, and may further determine the TPV coordinates 250. The TPV-template mapping unit 316 may extract feature points of the TPV image 202, in 556. The TPV-template mapping unit 316 may determine a position and angle of the 3D model where the feature points of the 3D model 518 match the feature points of the TPV image 202, in 558. The process 558 may include rotating the 3D model 518 until the position and angle with the matching feature points are found. Feature points may be identified in the 3D model 518 prior to the process 558. The TPV-template mapping unit 316 may convert the TPV coordinates 250 into 3D model coordinates 550 based on the determined position and angle, in 560. The 3D model coordinates 550 may indicate a three-dimensional position within the 3D model that matches the target location.
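
By way of illustration only, the view-selection step in 558 could be approximated as follows, assuming that a set of candidate views has been rendered offline from the 3D model 518 at known positions and angles (an assumption of this sketch; the described process rotates the model directly). ORB features with Lowe's ratio test are used here as one common matching choice.

```python
import cv2

def select_best_view(tpv_image_gray, candidate_views):
    """Select the rendered 3D-model view whose feature points best match the TPV image.

    candidate_views: list of (view_id, grayscale image) pairs rendered offline from the
    3D model at known positions and angles; this stands in for rotating the model in 558.
    """
    orb = cv2.ORB_create(nfeatures=1000)
    kp_tpv, des_tpv = orb.detectAndCompute(tpv_image_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)

    best_view_id, best_score = None, 0
    for view_id, view_gray in candidate_views:
        kp_v, des_v = orb.detectAndCompute(view_gray, None)
        if des_tpv is None or des_v is None:
            continue
        pairs = matcher.knnMatch(des_tpv, des_v, k=2)
        # Lowe's ratio test keeps only distinctive correspondences.
        good = [p[0] for p in pairs
                if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
        if len(good) > best_score:
            best_view_id, best_score = view_id, len(good)
    return best_view_id, best_score
```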

[0042] The TPV-template mapping unit 316 may transmit the 3D model coordinates 550 to the template-GV mapping unit 320. The template-GV mapping unit 320 may receive an FPV image 204 from a guiding device 112. The template-GV mapping unit 320 may extract feature points of the FPV image 204, in 562. The template-GV mapping unit 320 may determine a position and angle in the 3D model 518 where the feature points of the 3D model 518 match the extracted feature points of the FPV image 204, in 564. The process 564 may include rotating the 3D model 518 until the position and angle with the matching feature points are found. The template-GV mapping unit 320 may convert the 3D model coordinates 550 into FPV coordinates 560 based on the determined position and angle, in 566. The template-GV mapping unit 320 may transmit the FPV coordinates 560 to the guiding device 112, for presenting visual instructions on the guiding device 112. The guiding device 112 may present the visual instructions on the FPV image 204, based on the FPV coordinates 560, in 568.
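
As a minimal sketch of the conversion in 566, a 3D model coordinate can be projected into FPV pixel coordinates once the FPV camera's position and angle (expressed here as an OpenCV rotation/translation pair) and its intrinsics are known; the pose recovery itself is assumed to have been done in 564, and the numeric values below are purely illustrative.

```python
import numpy as np
import cv2

def project_to_fpv(point_3d, rvec, tvec, camera_matrix):
    """Project a 3D model coordinate into FPV pixel coordinates using the camera
    position and angle determined in 564 (rvec, tvec) and the FPV camera intrinsics."""
    pts, _ = cv2.projectPoints(np.float32([point_3d]), rvec, tvec, camera_matrix, None)
    u, v = pts[0, 0]
    return int(round(u)), int(round(v))

# Illustrative intrinsics for a 1280x720 FPV camera and an identity pose.
K = np.array([[800.0, 0.0, 640.0],
              [0.0, 800.0, 360.0],
              [0.0, 0.0, 1.0]])
print(project_to_fpv((0.5, 0.2, 3.0), np.zeros(3), np.zeros(3), K))
```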

[0043] FIG. 6 illustrates the navigation guidance method 500 described with respect to FIG. 5, with sample images of each process. In 552, the image monitoring unit 310 may obtain a TPV image 202 and may transmit it to the navigation guidance device 300. In 554, the target detection unit 314 may determine the TPV coordinates 250, which are represented by a pointer 662. The TPV coordinates 250 may indicate a position of the target in the TPV image 202. The TPV coordinates 250 may include two-axis coordinates that represent the pixel in the TPV image 202 that corresponds to the target location. In 556, the TPV-template mapping unit 316 may extract feature points in the TPV image 202. The process of extracting the feature points may include detecting edges and/or corners in the TPV image 202 using computer vision algorithms, for example, by identifying points of discontinuity in the image. In 558, the TPV-template mapping unit 316 may rotate the 3D model 518, to select a view within the 3D model that has feature points that match the feature points extracted in 556. In 560, the TPV-template mapping unit 316 may map the TPV coordinates 250 to a position in the selected view from 558, and may determine the 3D model coordinates 550 based on the mapped position in the selected view. In 570, the guiding device 112 may obtain and transmit an FPV image 204 to the navigation guidance device 300. For example, a deployed staff may use the guiding device 112 to snap a photo of the target area from his point of view, which may be used as the FPV image 204. In 562, the template-GV mapping unit 320 may extract feature points from the FPV image 204, in a similar manner as in 556. In 564, the template-GV mapping unit 320 may rotate the 3D model 518 to select a view in the 3D model 518 where the feature points in that view match the feature points extracted in 562. In 566, the template-GV mapping unit 320 may map the 3D model coordinates 550 to a position in the selected view, and may determine the FPV coordinates 560 based on the mapped position in the selected view. In 568, the guiding device 112 may provide visual instruction by marking the target in the FPV image 204 based on the FPV coordinates 560. For example, the guiding device 112 may overlay a pointer 664 over the FPV image 204. Consequently, the deployed staff may easily identify the target location by looking at the overlaid image.
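
The following sketch illustrates two of the steps above with OpenCV: a corner-based feature extraction (one possible realisation of 556 and 562) and the overlay of a pointer on the FPV image in 568. The function names and the marker styling are illustrative and not taken from the described system.

```python
import cv2

def extract_corner_features(image_gray, max_corners=500):
    """Detect corner feature points (Shi-Tomasi) as one concrete choice for the
    edge/corner extraction in 556 and 562."""
    corners = cv2.goodFeaturesToTrack(image_gray, maxCorners=max_corners,
                                      qualityLevel=0.01, minDistance=7)
    return [] if corners is None else corners.reshape(-1, 2)

def mark_target(fpv_image_bgr, fpv_xy):
    """Overlay a pointer on the FPV image at the converted target coordinates (568)."""
    marked = fpv_image_bgr.copy()
    cv2.drawMarker(marked, (int(fpv_xy[0]), int(fpv_xy[1])), color=(0, 0, 255),
                   markerType=cv2.MARKER_TRIANGLE_DOWN, markerSize=40, thickness=3)
    return marked
```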

[0044] FIG. 7 shows a flow diagram of a navigation guidance method 700 according to various embodiments. The navigation guidance method 700 may differ from the navigation guidance method 500, in that the GV may be a PV image 206. The PV image 206 may be a map. Accordingly, the GV coordinates are PV coordinates 760 which may be pixel coordinates of the target in the PV image 206. For brevity, common processes between the navigation guidance method 500 and 700 are not described again.

[0045] The template-GV mapping unit 320 may convert the 3D model coordinates 550 into PV coordinates 760, in 766. The conversion of the 3D model coordinates 550 into PV coordinates 760 may involve projecting the 3D model coordinates onto a 2D plane from an elevated viewpoint. The template-GV mapping unit 320 may transmit the PV coordinates 760 to the guiding device 112, for presenting visual instructions on the guiding device 112. The guiding device 112 may present the visual instructions on the PV image 206, based on the PV coordinates 760, in 768.
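
A minimal sketch of the projection described above, assuming the floor plane of the 3D model is axis-aligned with the plan view and the map scale and origin are known (the values used below are illustrative):

```python
def model_to_pv_coordinates(model_xyz, pixels_per_metre, map_origin_px):
    """Project a 3D model coordinate onto the plan-view image.

    The height axis is dropped and the remaining floor-plane coordinates are scaled
    and offset into plan-view pixels; the scale and origin depend on how the floor
    plan was drawn and are assumptions of this sketch.
    """
    x, y, _z = model_xyz                      # assume z is the vertical axis
    ox, oy = map_origin_px
    return (int(round(ox + x * pixels_per_metre)),
            int(round(oy + y * pixels_per_metre)))

# A target 4.2 m east and 7.5 m north of the plan-view origin, at 50 px per metre.
print(model_to_pv_coordinates((4.2, 7.5, 1.0), 50, (30, 20)))
```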

[0046] FIG. 8 illustrates the navigation guidance method 700 described with respect to FIG. 7, with sample images of each process. For brevity, common processes between the navigation guidance method 500 and 700 are not described again.

[0047] In 766, the template-GV mapping unit 320 may determine the PV coordinates 760. Determining the PV coordinates 760 may include rotating the 3D model 518 to view the target area from an aerial perspective, i.e. a top-down view, that matches the PV image 206. Alternatively, the distances between the 3D model coordinates 550 and the walls of the indoor venue may be determined, and the PV coordinates 760 may be determined based on these distances. In 768, the guiding device 112 may provide visual instruction by marking the target in the PV image 206 based on the PV coordinates 760. For example, the guiding device 112 may overlay a pointer 864 over the PV image 206. Consequently, the deployed staff may easily identify the target location by looking at the overlaid image.

[0048] FIG. 9 shows a flow diagram of a navigation guidance method 900 according to various embodiments. The navigation guidance method 900 may differ from the navigation guidance methods 500 and 700, in that the guiding device 112 may be a lighting device 208 that includes a lighting control system and a plurality of light emitters such as ceiling lights or wall lights. The GV image may be a map indicating respective positions of the light emitters. The GV coordinates may identify a light emitter of the lighting control system 912 that is closest to the target location. The navigation guidance method 900 may provide visual instruction using the lighting device, by switching on the light emitter identified by the GV coordinates. For brevity, processes already described with respect to the navigation guidance methods 500 and 700 are not described again.

[0049] The template-GV mapping unit 320 may convert the 3D model coordinates 550 into a lighting ID 960, based on data from a correspondence database 990, in 966. The correspondence database 990 may store information on the locations of the light emitters of the lighting device 208. The template-GV mapping unit 320 may identify the lighting ID 960 that corresponds to the 3D model coordinates 550. The lighting ID 960 may identify a light emitter of the lighting device 208 that is closest to the target location. The template-GV mapping unit 320 may transmit the lighting ID 960 to the lighting device 208. The lighting control system of the lighting device 208 may switch on the light emitter identified by the lighting ID 960, in 968. The light emitted by the light emitter that is switched on, may guide the deployed staff to the target location.

[0050] FIG. 10 illustrates the navigation guidance method 900 described with respect to FIG. 9, with sample images of each process. For brevity, processes already described with respect to the navigation guidance methods 500 and 700 are not described again.

[0051] In 966, the template-GV mapping unit 320 may refer to the correspondence database 990, to identify the light emitter 1008 that is nearest to the 3D model coordinates 550, and its lighting ID 960. In 968, the lighting device 208 may switch on the identified light emitter 1008, so as to provide visual instruction to the deployed staff.

[0052] According to various embodiments, the lighting device 208 may further switch on a plurality of light emitters in a sequence that guides the deployed staff from his existing location to the target location. The mobile terminal 104 may transmit a real-time location of the staff to the template-GV mapping unit 320, which may identify the lighting ID of a first light emitter in between the staff's real-time location and the target location. The lighting device 208 may switch on the first light emitter. As the staff approaches the first light emitter, the mobile terminal 104 may transmit the new real-time location of the staff to the template-GV mapping unit 320, which may identify the lighting ID of a second light emitter in between the staff's new real-time location and the target location. The lighting device 208 may switch on the second light emitter. These processes of updating the template-GV mapping unit 320 with the real-time location of the staff, identifying another light emitter between the staff and the target location, and switching on the light emitter, may repeat until the staff reaches the target location.
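
As an illustration only, the selection of the next light emitter along the staff's path could be sketched as follows; the simple rule used here (the emitter nearest to the staff that is still closer to the target than the staff is) is an assumption of this sketch rather than a rule stated in the description.

```python
import math

def next_lighting_id(staff_xy, target_xy, emitter_positions):
    """Pick the light emitter to switch on next, given the staff's real-time position.

    emitter_positions: hypothetical {lighting_id: (x, y)} table in the same plan-view
    coordinate frame as staff_xy and target_xy.
    """
    dist_to_target = math.dist(staff_xy, target_xy)
    # Keep only emitters that lie closer to the target than the staff currently is.
    candidates = {lid: pos for lid, pos in emitter_positions.items()
                  if math.dist(pos, target_xy) < dist_to_target}
    if not candidates:
        return None  # the staff is already at or beyond the last emitter
    return min(candidates, key=lambda lid: math.dist(candidates[lid], staff_xy))
```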

[0053] FIG. 11 shows a flow diagram of a navigation guidance method 1100 according to various embodiments. The navigation guidance method 1100 may differ from the navigation guidance method 500, in that the conversion template 318 may include a plurality of template images of an indoor environment that the target is located in, instead of the 3D model 518. The plurality of template images 1118, also referred herein as “surrounding images”, may be taken from different positions in the indoor environment. Accordingly, the template coordinates 350 may include template image coordinates 1150 which include pixel coordinates of the target in at least one of the plurality of template images. The GV may be an FPV image 204. Accordingly, the GV coordinates are FPV coordinates 560. For brevity, processes already described with respect to the navigation guidance method 500 are not described again.

[0054] In 1158, the TPV-template mapping unit 316 may calculate the coordinate transform matrices between the TPV image 202 and each template image of the template images 1118. The process may include running a scale-invariant feature transform (SIFT) algorithm to detect features in each of the TPV image 202 and the template images 1118. Each coordinate transform matrix may be calculated by matching feature points in the TPV image 202 to the feature points of the respective template image 1118. In 1158, coordinate transform matrices may be calculated for the plurality of template images 1118. The coordinate transform matrices may be calculated sequentially for the plurality of template images 1118, in order of their proximity, i.e. level of similarity, to the TPV image 202, for improved accuracy. For example, in using SIFT to match feature points, the transform matrix calculation may become more accurate as the images’ similarity increases. In 1160, the TPV-template mapping unit 316 may convert the TPV coordinates 250 to template image coordinates 1150 on at least one of, or all of, the template images 1118.
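
A minimal sketch of 1158 and 1160 using OpenCV, assuming a planar homography as the concrete form of the coordinate transform matrix (the description does not fix the matrix type) and SIFT with Lowe's ratio test for the feature matching:

```python
import cv2
import numpy as np

def coordinate_transform_matrix(src_gray, dst_gray):
    """Compute a transform matrix (homography) mapping pixels of src_gray to dst_gray,
    using SIFT feature matching as in process 1158."""
    sift = cv2.SIFT_create()
    kp_src, des_src = sift.detectAndCompute(src_gray, None)
    kp_dst, des_dst = sift.detectAndCompute(dst_gray, None)
    if des_src is None or des_dst is None:
        return None
    pairs = cv2.BFMatcher().knnMatch(des_src, des_dst, k=2)
    good = [p[0] for p in pairs
            if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
    if len(good) < 4:
        return None  # not enough correspondences for a homography
    src_pts = np.float32([kp_src[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst_pts = np.float32([kp_dst[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _mask = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 5.0)
    return H

def transform_pixel(pixel_xy, H):
    """Map TPV pixel coordinates into a template image using the transform matrix (1160)."""
    x, y = cv2.perspectiveTransform(np.float32([[pixel_xy]]), H)[0, 0]
    return float(x), float(y)
```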

[0055] In 1164, the template-GV mapping unit 320 may calculate the coordinate transform matrix between the FPV image 204 and a closest template image 1118 based on matching feature points in the FPV image 204 to feature points in the closest template image 1118. In 1164, the closest template image to the FPV image 204 may be selected based on feature point matching between the FPV image 204 and each of the template images 1118.

[0056] In 1166, the template-GV mapping unit 320 may convert the template image coordinates 1150 for the closest template image into FPV coordinates 560 based on the coordinate transform matrix of the closest template image.
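
A sketch of the selection and conversion in 1164 and 1166, assuming that the per-template homographies to the FPV image and a match-quality score have already been computed (for example, with the SIFT-based routine sketched above); treating the template with the highest score as the closest one is an assumption of this sketch.

```python
import numpy as np
import cv2

def fpv_coordinates_from_templates(template_coords_by_image, transforms_to_fpv):
    """Convert template coordinates into FPV coordinates (processes 1164 and 1166).

    template_coords_by_image: {template_id: (x, y)} pixel coordinates of the target
        in each template image (output of process 1160).
    transforms_to_fpv: {template_id: (H, num_inlier_matches)} homography from each
        template image to the FPV image plus a match-quality score.
    """
    # The template image with the most good matches is treated as the closest one.
    closest = max(transforms_to_fpv, key=lambda tid: transforms_to_fpv[tid][1])
    H, _score = transforms_to_fpv[closest]
    x, y = template_coords_by_image[closest]
    fx, fy = cv2.perspectiveTransform(np.float32([[[x, y]]]), H)[0, 0]
    return closest, (float(fx), float(fy))
```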

[0057] In an alternative embodiment, 1164 may include calculating, for each template image 1118 of the plurality of template images 1118, the coordinate transform matrix between the template image 1118 and the FPV image 204. The coordinate transform matrices may be computed sequentially for the plurality of template images 1118, in order of proximity, i.e. level of similarity, to the FPV image 204. In 1166, the template-GV mapping unit 320 may select a template image that is the closest, or the most similar, to the FPV image 204, and may convert the template image coordinates 1150 for the closest template image into FPV coordinates 560 based on the coordinate transform matrix of the closest template image. In 1166, selecting the template image that is the closest to the FPV image 204 may include comparing the transform matrices of each template image relative to the FPV image 204.

[0058] FIG. 12 illustrates the navigation guidance method 1100 described with respect to FIG. 11, with sample images of each process. For brevity, processes already described with respect to the navigation guidance method 500 are not described again. In 1158, the TPV-template mapping unit 316 may determine the coordinate transform matrices between the TPV image 202 and each template image of the template images 1118. Determining the coordinate transform matrix between the TPV image 202 and any template image 1118 may include detecting features, also referred herein as “feature points”, of both images, and then determining the transform matrix based on a relationship between the coordinates of the same feature points in the TPV image 202 and in the template image 1118. Similarly, in 1164, the template-GV mapping unit 320 may calculate respective coordinate transform matrices between the FPV image 204 and each template image 1118, by detecting feature points of both images, and then determining the transform matrix based on a relationship between the coordinates of the same feature points in the FPV image 204 and in the template image 1118. In 1160, the TPV-template mapping unit 316 may convert the TPV coordinates 250 to template image coordinates 1150 on all of the template images 1118, in order of proximity to the TPV image 202. In 1166, the template-GV mapping unit 320 may select a template image that is the closest to the FPV image 204, and may convert the template image coordinates 1150 for the closest template image into FPV coordinates 560 based on the coordinate transform matrix of the closest template image.

[0059] FIG. 13 shows a flow diagram of a navigation guidance method 1300 according to various embodiments. The navigation guidance method 1300 may differ from the navigation guidance method 1100, in that the GV may be a PV image 206, like in the navigation guidance method 700. Accordingly, the GV coordinates are PV coordinates 760. Alternatively, the guiding device 112 may be a lighting device 208 that includes a lighting control system and a plurality of light emitters, like in the navigation guidance method 900. For brevity, processes described earlier are not described again.

[0060] In 1364, the template-GV mapping unit 320 may calculate a coordinate transform matrix between the PV image 206 and each template image 1118. The process 1364 may include calculating the coordinate transform matrices sequentially based on matching feature points in the PV image 206 to feature points in each of the template images 1118. In 1366, the template-GV mapping unit 320 may select the closest template image, i.e. the template image 1118 that is the most similar to the PV image 206, and convert the template image coordinates 1150 of the selected template image 1118 to PV coordinates 760, based on the coordinate transform matrix determined for that template image. In 1368, the guiding device 112 may identify a light emitter of the lighting device 208 that is closest to the PV coordinates 760 determined in the process 1366. The PV map may include a map that identifies the location of each light emitter. Alternatively, the template-GV mapping unit 320 may determine the light emitter that is closest to the PV coordinates based on the correspondence database 990, and may send the light emitter ID to the guiding device 112. The lighting device may then turn on the light emitter identified by the light emitter ID.

[0061] FIG. 14 illustrates the navigation guidance method 1300 described with respect to FIG. 13, with sample images of each process. For brevity, processes described earlier are not described again. In 1364, the template-GV mapping unit 320 may determine a coordinate transform matrix between the PV image 206 and each template image 1118. The process 1364 may include running a feature point matching algorithm, for example, SIFT, on the PV image 206 and each template image 1118. The feature point matching process may be repeated for each template image 1118 in order of its proximity to the viewpoint in the PV image 206. In 1366, the template-GV mapping unit 320 may convert the template image coordinates 1150 of the template image that is the most similar to the PV image 206, to PV coordinates 760, based on the coordinate transform matrix determined for that template image. In 1368, the guiding device 112 may identify a light emitter of the lighting device 208 that is closest to the PV coordinates 760 determined in the process 1366, based on the correspondence database 990.

[0062] FIG. 15 shows a flow diagram of a navigation guidance method 1500 according to various embodiments. The navigation guidance method 1500 may include any one of the navigation guidance methods 500, 700, 900, 1100, and 1300. The navigation guidance method 1500 may include, in 1502, determining pixel coordinates of a target in an input image. The input image may include the TPV image 202. The pixel coordinates of the target in the input image may include TPV coordinates 250. The navigation guidance method 1500 may include, in 1504, converting the determined pixel coordinates in the input image to template coordinates in a conversion template, based on comparing the input image to the conversion template. The conversion template may include any one of the 3D model 518 and the plurality of template images 1118. If the conversion template includes the 3D model 518, the template coordinates may include 3D model coordinates 550. If the conversion template includes the plurality of template images 1118, the template coordinates may include template image coordinates 1150. The navigation guidance method 1500 may include, in 1506, converting the template coordinates to guidance coordinates in a guidance image, based on comparing the guidance image to the conversion template. The guidance image may include any one of FPV image 204, PV image 206 and a map indicating respective positions of a plurality of light emitters. Accordingly, the guidance coordinates may include any one of FPV coordinates 560, PV coordinates 760 and lighting ID 960. The navigation guidance method 1500 may include, in 1508, providing the guidance coordinates to a guiding device configured to present visual instruction based on the guidance coordinates. The guiding device may include the guiding device 112.
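
Purely as an illustration of the data flow in 1502 to 1508, the method could be organised as a small pipeline in which the detection and conversion steps are supplied as callables; all of the names below are placeholders for the concrete steps described above, not parts of the described device.

```python
def navigation_guidance(input_image, conversion_template, guidance_image,
                        detect_target, to_template_coords, to_guidance_coords,
                        present_on_guiding_device):
    """Sketch of processes 1502-1508; the four callables stand in for the concrete
    detection, conversion, and presentation steps and are assumptions of this sketch."""
    pixel_coords = detect_target(input_image)                          # 1502
    template_coords = to_template_coords(pixel_coords, input_image,
                                         conversion_template)          # 1504
    guidance_coords = to_guidance_coords(template_coords, guidance_image,
                                         conversion_template)          # 1506
    present_on_guiding_device(guidance_coords)                         # 1508
    return guidance_coords
```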

[0063] According to various embodiments, the navigation guidance method 1500 may further include receiving the guidance image from the guiding device. For example, the guidance image may be an FPV image 204 captured using the guiding device or a mobile terminal 104.

[0064] According to various embodiments, the navigation guidance method 1500 may further include receiving information on a position of the guiding device and updating the guidance image based on the received information. For example, the information may be provided by a global positioning system sensor in the guiding device. For example, the guidance image may be a PV image 206 and a different PV image 206 may be shown as the staff walks around in the target area.

[0065] According to various embodiments, in the navigation guidance method 1500, converting the determined pixel coordinates in the input image to template coordinates in the conversion template may include extracting feature points in the input image, determining feature points in each template image of the plurality of template images, and computing a respective coordinate transform matrix between the input image and each template image of the plurality of template images based on the extracted feature points in the input image and the feature points of each template image, like in 1158. Converting the determined pixel coordinates in the input image to the template coordinates may further include transforming the pixel coordinates to a set of template coordinates for each template image of the plurality of template images, using the respective coordinate transform matrix, like in 1160.

[0066] According to various embodiments, in the navigation guidance method 1500, converting the template coordinates to the guidance coordinates in the guidance image may include extracting feature points in the guidance image, and computing a respective coordinate transform matrix between the guidance image and each template image of the plurality of template images based on the extracted feature points in the guidance image and the feature points of each template image, like in 1164 or 1364. Converting the template coordinates to the guidance coordinates in the guidance image may further include selecting a template image of the plurality of template images that is closest to the guidance image, based on the computed coordinate transform matrices, and transforming the pixel coordinates to the template coordinates using the coordinate transform matrix computed for the selected template image, like in 1166 or 1366.

[0067] FIG. 16 shows a block diagram of a navigation guidance device 1600 according to various embodiments. The navigation guidance device 1600 may include the navigation guidance device 300. The navigation guidance device 1600 may include a memory 1602, and at least one processor 1604. The at least one processor 1604 may be communicatively coupled to the memory 1602 like indicated by the line 1606, and may be configured to execute the method 1500.

[0068] It will be appreciated to a person skilled in the art that the terminology used herein is for the purpose of describing various embodiments only and is not intended to be limiting of the present invention. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.

[0069] It is understood that the specific order or hierarchy of blocks in the processes / flowcharts disclosed is an illustration of exemplary approaches. Based upon design preferences, it is understood that the specific order or hierarchy of blocks in the processes / flowcharts may be rearranged. Further, some blocks may be combined or omitted. The accompanying method claims present elements of the various blocks in a sample order, and are not meant to be limited to the specific order or hierarchy presented.

[0070] The previous description is provided to enable any person skilled in the art to practice the various aspects described herein. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects. Thus, the claims are not intended to be limited to the aspects shown herein, but is to be accorded the full scope consistent with the language claims, wherein reference to an element in the singular is not intended to mean “one and only one” unless specifically so stated, but rather “one or more.” The word “exemplary” is used herein to mean “serving as an example, instance, or illustration.” Any aspect described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other aspects. Unless specifically stated otherwise, the term “some” refers to one or more. Combinations such as “at least one of A, B, or C,” “one or more of A, B, or C,” “at least one of A, B, and C,” “one or more of A, B, and C,” and “A, B, C, or any combination thereof’ include any combination of A, B, and/or C, and may include multiples of A, multiples of B, or multiples of C. Specifically, combinations such as “at least one of A, B, or C,” “one or more of A, B, or C,” “at least one of A, B, and C,” “one or more of A, B, and C,” and “A, B, C, or any combination thereof’ may be A only, B only, C only, A and B, A and C, B and C, or A and B and C, where any such combinations may contain one or more member or members of A, B, or C. All structural and functional equivalents to the elements of the various aspects described throughout this disclosure that are known or later come to be known to those of ordinary skill in the art are expressly incorporated herein by reference and are intended to be encompassed by the claims. Moreover, nothing disclosed herein is intended to be dedicated to the public regardless of whether such disclosure is explicitly recited in the claims. The words “module,” “mechanism,” “element,” “device,” and the like may not be a substitute for the word “means.” As such, no claim element is to be construed as a means plus function unless the element is expressly recited using the phrase “means for.”