Title:
METHOD AND MOBILE DEVICE FOR DETERMINING A VISUAL POINT OF A PERSON
Document Type and Number:
WIPO Patent Application WO/2023/152372
Kind Code:
A1
Abstract:
Methods using mobile devices and mobile devices for determining a visual point of a person are provided. The method comprises determining a head position and a pupil position of the person while the person looks at a target, and determining the visual point based on an avatar of at least an eye portion of the head of the person, which avatar is set to the determined head and pupil position. The method further comprises generating the avatar by the mobile device.

Inventors:
WAHL SIEGFRIED (DE)
KRATZER TIMO (DE)
KALTENBACHER AXEL (DE)
LEUBE ALEXANDER (DE)
WEINREICH MANUELA (DE)
SINNOTT DAVID (IE)
BREHER KATHARINA (DE)
Application Number:
PCT/EP2023/053504
Publication Date:
August 17, 2023
Filing Date:
February 13, 2023
Assignee:
ZEISS CARL VISION INT GMBH (DE)
CARL ZEISS VISION IRELAND LTD (IE)
International Classes:
G02C7/02; A61B3/00; A61B3/113; G02C13/00
Domestic Patent References:
WO2015124574A12015-08-27
WO2019164502A12019-08-29
Foreign References:
US20210181538A12021-06-17
EP3413122B12020-03-04
US20200349754A12020-11-05
US20210142566A12021-05-13
US9645413B22017-05-09
US9841615B22017-12-12
US10330958B22019-06-25
EP3145386B12018-07-18
EP1815289B92016-05-11
EP1747750A12007-01-31
EP2342599B12016-09-07
EP1960826B12012-02-15
EP3413122A12018-12-12
EP3542211A12019-09-25
US20030123026A12003-07-03
US20020105530A12002-08-08
US20160327811A12016-11-10
EP20202529A2020-10-19
Other References:
WU, Y., HASSNER, T., KIM, K., MEDIONI, G., NATARAJAN, P.: "Facial landmark detection with tweaked convolutional neural networks", IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, vol. 40, no. 12, 2017, pages 3067 - 3074, XP011698767, DOI: 10.1109/TPAMI.2017.2787130
PERAKIS, P., PASSALIS, G., THEOHARIS, T., KAKADIARIS, I. A.: "3D facial landmark detection under large yaw and expression variations", IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, vol. 35, no. 7, 2012, pages 1552 - 1564, XP011510377, DOI: 10.1109/TPAMI.2012.247
WU, Y., JI, Q.: "Facial landmark detection: A literature survey", INTERNATIONAL JOURNAL OF COMPUTER VISION, vol. 127, no. 2, 2019, pages 115 - 142, XP037697793, DOI: 10.1007/s11263-018-1097-z
RAU, J-Y., YEH, P-C.: "A Semi-Automatic Image-Based Close Range 3D Modeling Pipeline Using a Multi-Camera Configuration", SENSORS (BASEL, SWITZERLAND), vol. 12, no. 8, 2012, pages 11271 - 11293
M. NIESSNER, M. ZOLLHÖFER, S. IZADI, M. STAMMINGER: "Real-time 3D reconstruction at scale using voxel hashing", ACM TRANS. GRAPH., vol. 32, no. 6, November 2013 (2013-11-01)
BESL, PAUL J., MCKAY, NEIL D.: "Sensor fusion IV: control paradigms and data structures", vol. 1611, 1992, INTERNATIONAL SOCIETY FOR OPTICS AND PHOTONICS, article "Method for registration of 3D shapes"
FISCHLER, MARTIN A., BOLLES, ROBERT C.: "Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography", COMMUNICATIONS OF THE ACM, vol. 24, no. 6, 1981, pages 381 - 395
PARK, JAESIK, ZHOU, QIAN-YI, KOLTUN, VLADLEN: "Colored point cloud registration revisited", PROCEEDINGS OF THE IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION, 2017
RUSINKIEWICZ, SZYMON, LEVOY, MARC: "Proceedings of the third international conference on 3D digital imaging and modeling", 2001, IEEE, article "Efficient variants of the ICP algorithm"
KAZHDAN, MICHAEL, BOLITHO, MATTHEW, HOPPE, HUGUES: "Poisson surface reconstruction", PROCEEDINGS OF THE FOURTH EUROGRAPHICS SYMPOSIUM ON GEOMETRY PROCESSING, vol. 7, 2006
WAECHTER, MICHAEL, MOEHRLE, NILS, GOESELE, MICHAEL: "European conference on computer vision", 2014, SPRINGER, article "Let there be color! Large-scale texturing of 3D reconstructions"
SHEEDY, J. E.: "Progressive addition lenses - matching the specific lens to patient needs", OPTOMETRY-JOURNAL OF THE AMERICAN OPTOMETRIC ASSOCIATION, vol. 75, no. 2, 2004, pages 83 - 102, XP022634198, DOI: 10.1016/S1529-1839(04)70021-4
SHEEDY, J., HARDY, R. F., HAYES, J. R.: "Progressive addition lenses - measurements and ratings", OPTOMETRY-JOURNAL OF THE AMERICAN OPTOMETRIC ASSOCIATION, vol. 77, no. 1, 2006, pages 23 - 39, XP028082014, DOI: 10.1016/j.optm.2005.10.019
Attorney, Agent or Firm:
STICHT, Andreas (DE)
Claims:
CLAIMS

1. A computer-implemented method for determining a visual point (55A, 55B; 65A, 65B; 70) of a person by a mobile device (10) including at least one sensor (11, 12), the at least one sensor comprising an image sensor (11), wherein the visual point (55A, 55B; 65A, 65B; 70) is an intersection of a line of sight of the person with a plane approximating the position of a spectacle lens, comprising: determining a head position and a pupil position of the person while the person looks at a target using the at least one sensor (11, 12), and determining the visual point (55A, 55B; 65A, 65B; 70) based on an avatar (50) which is a 3D model of at least an eye portion of the head of the person, which is set to the determined head and pupil position, based on the set head and pupil positions of the avatar (50) and on the position of the plane approximating the position of the spectacle lens, wherein either:

- the method further comprises virtually fitting a spectacle frame to the avatar (50), and determining the visual point (55A, 55B; 65A, 65B; 70) based on the virtually fitted spectacle frame; or

- the person wears a spectacle frame, and the method further comprises: identifying the spectacle frame worn by the person, wherein determining the visual point (55A, 55B; 65A, 65B; 70) is further based on the identified spectacle frame, characterized in that determining the head position comprises determining parts or facial landmarks of the head of the person, and in that the method further comprises moving the target, wherein determining the visual point (55A, 55B; 65A, 65B; 70) is performed for a plurality of target positions.

2. The method of claim 1, characterized in that determining a head position comprises determining at least one selected from a group consisting of the head position and the facial landmarks relative to the target.

3. The method of claim 1 or 2, characterized in that the at least one sensor further comprises a depth sensor (12).

4. The method of claim 3, characterized in that determining the head position and the pupil position is performed by using the depth sensor.

5. The method of claim 4, characterized in that the target is the mobile device.

6. The method of claim 5, characterized by determining the respective poses of the mobile device relative to the head at the plurality of target positions using the depth sensor, and determining the respective visual point for the respective target position based on the respective pose.

7. The method of any one of claims 1 to 6, characterized by generating the avatar (50) with the mobile device.

8. The method of claim 7, characterized in that generating the avatar (50) comprises measuring the head of the person in a plurality of positions by the at least one sensor, and calculating the avatar (50) based on the measurement.

9. The method of claim 8, characterized in that measuring the head of the person in a plurality of positions comprises measuring the head of the person in a plurality of upright positions of the head of the person rotated about an axis of the head.

10. The method of any one of claims 1 to 9, characterized in that the target is provided in the near field.

11. A mobile device (10), comprising: at least one sensor (11, 12), comprising an image sensor (11) and configured to determine a head position and a pupil position of a person while the person looks at a target, and a processor configured to determine the visual point (55A, 55B; 65A, 65B; 70) of the person based on an avatar (50) which is a 3D model of at least an eye portion of the person and set to the determined head and pupil position based on the set head and pupil positions of the avatar (50) and on a position of a plane approximating the position of a spectacle lens, wherein the visual point (55A, 55B; 65A, 65B; 70) is an intersection of a line of sight of the person with a plane approximating the position of the spectacle lens, wherein either: - the processor is further configured to virtually fit a spectacle frame to the avatar (50), and determine the visual point (55A, 55B; 65A, 65B; 70) based on the virtually fitted spectacle frame; or

- the person wears a spectacle frame, and the processor is further configured to identify the spectacle frame worn by the person, wherein determining the visual point (55A, 55B; 65A, 65B; 70) is further based on the identified spectacle frame, characterized in that determining the head position comprises determining parts or facial landmarks of the head of the person, and in that the mobile device (10) is further configured to determine the visual point of the person for a plurality of positions while the target is moving through a field of view of the person.

12. The mobile device (10) of claim 11, characterized in that the mobile device is configured to perform the method of any one of claims 1 to 10.

13. A computer program for a mobile device including at least one sensor (11, 12) and a processor, characterized in that the computer program, when executed on the processor, causes execution of the method of any one of claims 1 to 10.

14. A method for producing a spectacle lens, characterized by: determining the visual point (55A, 55B; 65A, 65B; 70) of a person according to the method of any one of claims 1 to 10, and producing the spectacle lens on the basis of the determined visual point (55A, 55B; 65A, 65B; 70).

15. A computer-implemented method for determining a visual point (55A, 55B; 65A, 65B; 70) of a person by a mobile device (10) including at least one sensor (11, 12), the at least one sensor comprising an image sensor (11) and a depth sensor (12), wherein the visual point (55A, 55B; 65A, 65B; 70) is an intersection of a line of sight of the person with a plane approximating the position of a spectacle lens, comprising: determining a head position and a pupil position of the person while the person looks at a target, wherein the target is the mobile device (10), using the at least one sensor (11, 12), and determining the visual point (55A, 55B; 65A, 65B; 70) based on an avatar (50) which is a 3D model of at least an eye portion of the head of the person, which is set to the determined head and pupil position, based on the set head and pupil positions of the avatar (50) and on the position of the plane approximating the position of the spectacle lens, wherein either:

- the method further comprises virtually fitting a spectacle frame to the avatar (50), and determining the visual point (55A, 55B; 65A, 65B; 70) based on the virtually fitted spectacle frame; or

- the person wears a spectacle frame, and the method further comprises: identifying the spectacle frame worn by the person, wherein determining the visual point (55A, 55B; 65A, 65B; 70) is further based on the identified spectacle frame, characterized in that determining the head position comprises determining parts or facial landmarks of the head of the person, in that the method further comprises moving the target, wherein determining the visual point (55A, 55B; 65A, 65B; 70) is performed for a plurality of target positions, in that determining a head position and a pupil position is performed by using the depth sensor, and in that the method further comprises determining the respective poses of the mobile device relative to the head at the plurality of target positions using the depth sensor, and determining the respective visual point for the respective target position based on the respective pose.

16. The method of claim 15, characterized in that determining a head position comprises determining at least one selected from a group consisting of the head position and the facial landmarks relative to the target.

17. The method of claim 15 or 16, characterized by generating the avatar (50) with the mobile device.

18. The method of claim 17, characterized in that generating the avatar (50) comprises measuring the head of the person in a plurality of positions by the at least one sensor, and calculating the avatar (50) based on the measurement.

19. The method of claim 18, characterized in that measuring the head of the person in a plurality of positions comprises measuring the head of the person in a plurality of upright positions of the head of the person rotated about an axis of the head.

20. The method of any one of claims 15 to 19, characterized in that the target is provided in the near field.

21. A mobile device (10), comprising: at least one sensor (11, 12), comprising an image sensor (11) and a depth sensor (12) and configured to determine a head position and a pupil position of a person while the person looks at a target, wherein the target is the mobile device (10), and a processor configured to determine the visual point (55A, 55B; 65A, 65B; 70) of the person based on an avatar (50) which is a 3D model of at least an eye portion of the person and set to the determined head and pupil position based on the set head and pupil positions of the avatar (50) and on a position of a plane approximating the position of a spectacle lens, wherein the visual point (55A, 55B; 65A, 65B; 70) is an intersection of a line of sight of the person with a plane approximating the position of the spectacle lens, wherein either:

- the processor is further configured to virtually fit a spectacle frame to the avatar (50), and determine the visual point (55A, 55B; 65A, 65B; 70) based on the virtually fitted spectacle frame; or

- the person wears a spectacle frame, and the processor is further configured to identify the spectacle frame worn by the person, wherein determining the visual point (55A, 55B; 65A, 65B; 70) is further based on the identified spectacle frame, characterized in that determining the head position comprises determining parts or facial landmarks of the head of the person, in that the mobile device (10) is further configured to determine the visual point of the person for a plurality of positions while the target is moving through a field of view of the person, in that determining a head position and a pupil position is performed by using the depth sensor, and in that the processor is further configured to determine the respective poses of the mobile device relative to the head at the plurality of target positions using the depth sensor, and determine the respective visual point for the respective target position based on the respective pose.

22. The mobile device (10) of claim 21, characterized in that the mobile device is configured to perform the method of any one of claims 15 to 20.

23. A computer program for a mobile device including at least one sensor (11, 12) and a processor, characterized in that the computer program, when executed on the processor, causes execution of the method of any one of claims 15 to 20.

24. A method for producing a spectacle lens, characterized by: determining the visual point (55A, 55B; 65A, 65B; 70) of a person according to the method of any one of claims 15 to 20, and producing the spectacle lens on the basis of the determined visual point (55A, 55B; 65A, 65B; 70).

Description:
Description

Method and mobile device for determining a visual point of a person

The present application relates to methods and mobile devices for determining a visual point of a person, as well as to corresponding computer programs and methods for manufacturing spectacle lenses based on the determined visual point.

For adapting spectacle lenses to a particular spectacle frame for a particular person, various parameters are conventionally determined, which are referred to as centration parameters. Centration parameters are parameters that are needed to correctly arrange, that is to say center, spectacle lenses in a spectacle frame such that the spectacle lenses are worn in a correct position relative to the person's eyes.

Examples of such centration parameters comprise the interpupillary distance, the vertex distance, the face form angle, the y coordinates of the left and right centration points (also designated as the fitting point height), the distance visual point, the pantoscopic angle, and further parameters defined in section 5 of DIN EN ISO 13666:2012 and, among other things, the inclination of the frame.

For further designing and manufacturing spectacle lenses adapted to a particular person, the individual eye behavior for seeing at different distances is important. This eye behavior depends on the viewing distance itself, the scene, and the convergence of the eyes. The eye behavior determines where the person wearing the spectacles looks through the spectacle lens. According to DIN EN ISO 13666:2012, 5.11, the intersection of the line of sight of the person, when looking at a particular target, with the back surface of the spectacle lens is referred to as the visual point, which corresponds to the aperture in the spectacle plane. In the context of the present application, sometimes the thickness of the spectacle lens and/or a curvature of the spectacle lens may be neglected, such that the visual point is taken as the intersection of the line of sight with a plane approximating the position of the spectacle lens.
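
Geometrically, this amounts to a simple line-plane intersection. The following Python/NumPy snippet is a minimal sketch under the assumption that the eye's rotation center, the pupil position, and the lens plane (a point on it and its normal) are already known in one common coordinate system; the function name and the example numbers are illustrative and not taken from the application.

```python
import numpy as np

def visual_point(eye_center, pupil, plane_point, plane_normal):
    """Intersect the line of sight (ray from the eye's rotation center
    through the pupil) with a plane approximating the spectacle lens.

    All arguments are 3D points/vectors in the same coordinate system.
    Returns the visual point, or None if the line of sight is parallel
    to the plane.
    """
    direction = pupil - eye_center                    # line-of-sight direction
    denom = np.dot(plane_normal, direction)
    if abs(denom) < 1e-9:                             # parallel to the plane
        return None
    t = np.dot(plane_normal, plane_point - eye_center) / denom
    return eye_center + t * direction

# Example: lens plane 25 mm in front of the eye's rotation center, gaze slightly downward.
eye = np.array([0.0, 0.0, 0.0])
pupil = np.array([0.0, -2.0, 13.0])                   # mm, looking down and forward
p0 = np.array([0.0, 0.0, 25.0])                       # point on the lens plane
n = np.array([0.0, 0.0, 1.0])                         # plane normal
print(visual_point(eye, pupil, p0, n))                # -> [0., -3.846..., 25.]
```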

Knowing the visual point for different distances, for example the distance visual point according to 5.16 of DIN EN ISO 13666:2012 or the near visual point according to 5.17 of DIN EN ISO 13666:2012, is particularly important for the design and centration of progressive addition lenses (PALs), where different parts of the lens provide optical corrections for different distances, for example for near vision or far vision. The distance visual point is the assumed position of the visual point on a lens used for distance vision under given conditions, and the near visual point is the corresponding position used for near vision. The near vision part of the lens should, for example, be at or around the near visual point, whereas the distance vision part of the lens should be located at or around the distance visual point. Also for single vision lens (SVL) design and centration, knowing the visual point(s) may be helpful.

To illustrate, FIG. 8 shows an example distribution of visual points over the field of view of a lens, for a single vision lens, in the form of a heat map representing where the visual points are during a typical office task including writing, reading and switching between two screens, monitored over a certain time. In other words, the visual points, i.e., the intersections between the line of sight of the person and the spectacle lens, were determined and plotted in the diagram of FIG. 8 over time. Here, when the person looks at different targets, the person may either move the eye, leading to different visual points, or move the head, such that the visual point stays constant while the head moves, or combinations thereof (head movement in combination with a slight eye movement, for example). In the example shown, there is a preferred visual point area between about 20 and 40 mm horizontal and -20 and -35 mm vertical, with visual points being detected in some other areas over the field of view. This means that the particular person wearing this particular spectacle lens mostly looks through the preferred visual point area, but may sometimes move the eye to look through other parts of the lens.

FIG. 9 shows an example distribution of the visual points for a progressive addition lens for the same task as in FIG. 8. Here, two main visual point areas may be seen, one for far sight and the other for near sight. The areas for far sight and near sight in progressive addition lenses are generally limited compared to the whole area of a single vision lens, such that, apart from changing between far field and near field, which is usually done by eye movement, more head movement is required. Therefore, it has been found in studies that generally persons wearing single vision lenses use more eye movement, i.e., are more "eye movers", than persons wearing progressive addition lenses, who are more "head movers". The location and distribution of the visual points depend not only on the type of the lens (PAL or SVL) but also on the person wearing the lens. For example, a person may be more of an eye mover or more of a head mover. Therefore, knowing the visual point or visual point distribution for specific tasks (like reading and writing) of a person is helpful for producing a lens, like a progressive addition lens, specifically adapted to the person wearing the spectacles.

Various systems are known for measuring individual vision parameters including the visual point or visual point distribution of a person and designing lenses accordingly, for example Essilor Visioffice 2 (https://www.pointsdevue.com/article/effect-multifocal-lenses-eye-and-head-movementspresbyopic-vdu-users-neck-and-shoulder), Essilor Visioffice X (https://www.pointsdevue.com/white-paper/varilux-x-seriestm-lenses-near-vision-behaviorpersonalization) or Essilor Visiostaff (https://www.essilorpro.de/Produkte/Messen-und-Demonstrieren/visioffice/Documents/Visiostaff_Broschuere.pdf).

Some techniques used in Visioffice 2 are described in EP 3145386 B1. This document generally describes a method for determining at least one parameter of visual behavior of a person. The center of rotation of at least one eye is determined in a first reference frame associated with the head of the person. Then, in a real-life situation like reading, an image of the person which includes a body part of the person is captured, and the position and orientation of the body part is determined in a second reference frame. Ultimately, the center of rotation of the eye is then determined in the second reference frame. Based on knowing the center of rotation, eye parameters are determined. By mounting an eye tracker to a pair of spectacles used by the individual, a visual point may be determined. This approach requires an eye tracker on the spectacle frame.

A somewhat similar system is disclosed in WO 2015/124574 A1, which also uses an eye tracker mounted on a spectacle frame. With this spectacle-frame-mounted eye tracker, eye movement of a person may be observed over a longer time. However, this requires the person to wear the corresponding spectacle frame with the eye tracker.

Therefore, the above solutions require an eye tracker mounted to the spectacle frame, i.e., specific hardware.

EP 1815289 B9 discloses a method for designing a spectacle lens, considering a movement of head and eye of a person. For determining the visual point, a commercially available head and eye movement measurement device is modified, such that here, too, specific hardware is needed.

EP 1747750 A1 discloses a further method and device for determining the visual behavior of a person, for example the visual point, for customization of a spectacle lens. The person has to wear a specific hairband with LEDs for tracking the head movement, which may be cumbersome to the person.

EP 2342599 B1 and EP 1960826 B1 each disclose a determination of the visual behavior of a person based on manual measurements, in the former via convergence data and in the latter via a measured head inclination. Therefore, the various solutions discussed above require specific equipment. In some cases the eye and head tracking devices used can be used reliably only in laboratory setups, or manual measurements are required which are time consuming.

US 2020/0349754 A1 discloses a method of generating a 3D model of a head using a smartphone, where a depth map is used.

US 2021/0142566 A1 discloses a virtual try-on of spectacle frames.

US 9645413 B2 and US 9841615 B2 discuss approaches for determining visual points.

EP 3413122 A1 discloses a method and a device for determining a near field visual point, where in some implementations a device like a tablet PC may be used for measuring the near visual point. The device may be used in connection with a centration device. The position and/or orientation of a near target like the above-mentioned tablet is determined, and an image of the person is captured while the person looks at the target. The near visual point is then determined based on the image, position and/or orientation. This method essentially uses a static measurement for a single position. In some implementations, an avatar, i.e., a model, of the head may be used to which a virtual spectacle frame is fitted, and the visual point is determined using this avatar fitted with the virtual spectacle frame. For generating the avatar, a stationary device with a plurality of cameras provided approximately in a semicircle is used. Therefore, while a mobile device like a tablet is involved, for making use of an avatar a specific stationary apparatus still needs to be provided. Furthermore, only a single near visual point is determined. Additionally, a far visual point may be determined. US 2021/0181538 A1 discloses a further method where an avatar is used to determine a visual point, with only a single visual point being detected.

Starting for example from EP 3413122 A1, it is an object of the present invention to provide a flexible approach to determining the visual point, which may use virtual spectacle frames, but does not require specific hardware, and enables measurement of a visual point distribution. Compared to the other references cited above, no specific hardware and no manual measurements requiring a trained person like an optician are necessary.

According to a first aspect of the invention, a computer-implemented method for a mobile device including at least one sensor, which comprises an image sensor and optionally also a depth sensor, to determine a visual point of a person is provided, comprising: determining a head position and a pupil position of the person while the person looks at a target using the at least one sensor, and determining the visual point based on an avatar of at least an eye portion of the head of the person, which is set to the determined head and pupil position.

The method is characterized in that the target is moved through a certain field of view of the person, and the visual point is determined for a plurality of positions of the target. In this way, the visual point for a plurality of target positions may be determined. From the determined visual points, it may also be detected whether the person is a head mover or an eye mover as explained initially, i.e., whether the person rather moves the head to follow the target when it moves or only moves the eyes. The moving of the target may be performed by the person or by another person, for example upon instructions output by the mobile device.

A mobile device, as used herein, refers to a device that is designed to be carried around easily. Typically, such a mobile device has a weight of less than 2 kg, usually less than 1 kg or even less. Examples for such mobile devices include smartphones or tablet PCs. Nowadays, smartphones or tablet PCs, besides a processor and memory, include a plurality of additional components, in particular sensors, which are used to implement the method. For example, nowadays smartphones or tablet PCs essentially always include cameras and, in many cases, also include depth sensors like time-of-flight sensors, which may be used for determining the head position and pupil position and to generate the avatar. Further, such depth sensors enable smartphone software to detect facial features and/or landmarks in an image and output 3D coordinates of these landmarks. For example, the head position may be determined using a depth sensor measuring the distance of different parts and/or facial landmarks of the head from the mobile device, and the pupil position may be determined using a camera of the mobile device, for example as described in the above cited EP 3413122 A1, or may also be detected as facial landmarks. Detection of facial landmarks may for example be performed as described in Wu, Y., Hassner, T., Kim, K., Medioni, G., & Natarajan, P. (2017), Facial landmark detection with tweaked convolutional neural networks, IEEE transactions on pattern analysis and machine intelligence, 40(12), 3067-3074; Perakis, P., Passalis, G., Theoharis, T., & Kakadiaris, I. A. (2012), "3D facial landmark detection under large yaw and expression variations," IEEE transactions on pattern analysis and machine intelligence, 35(7), 1552-1564; or Wu, Y., & Ji, Q. (2019), Facial landmark detection: A literature survey, International Journal of Computer Vision, 127(2), 115-142, using computer vision or machine learning techniques.

Alternatively, only an image sensor may be provided, and the landmarks may be detected in an image. The head position and pupil position may then be detected based on the images and the landmarks detected therein, for which various mobile device operating systems offer built-in solutions, for example ARKit for iOS devices and ARCore for Android devices. These solutions use trained machine learning algorithms which are able to determine 3D coordinates from 2D contours of faces in images. While the use of a depth sensor is preferred and may result in more precise measurements, for mobile devices without a depth sensor this is an alternative solution.

Therefore, the method may be implemented by programming an off-the-shelf mobile device accordingly (with a so-called "app"), and requires no specific hardware. After the avatar is generated using at least one camera and/or at least one depth sensor, the relative change in position of the head, the pupils and the facial landmarks is used to calculate a movement of the user with respect to the mobile device.

An avatar, as generally understood in the field of computing, is a graphical representation of a person. In the context of the present application, the term avatar is to be understood as a 3D avatar, in other words a three-dimensional model representation of a person or part thereof. Such a 3D avatar is also referred to as a 3D model. A model, in particular a 3D model, should be understood to mean a representation, in the case of a 3D model a three-dimensional representation, of real objects which are present as a data set in a storage medium, for example a memory of a computer or a data carrier. By way of example, such a three-dimensional representation can be a 3D mesh, consisting of a set of 3D points, which are also referred to as vertices, and connections between the points, which connections are also referred to as edges. In the simplest case, these connections form a triangle mesh. Such a representation as a 3D mesh only describes the surface of an object and not the volume. The mesh need not necessarily be closed. Thus, if a head, for example, is described in the form of a mesh, it appears like a mask. Details in respect of such 3D models are found in Rau, J.-Y., Yeh, P.-C., "A Semi-Automatic Image-Based Close Range 3D Modeling Pipeline Using a Multi-Camera Configuration," Sensors (Basel, Switzerland), 2012;12(8):11271-11293, doi:10.3390/s120811271; in particular page 11289, FIG. 16.
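
The following is a tiny, purely illustrative sketch of the vertex/triangle layout described above; a real head mesh would contain many thousands of vertices, but the structure is the same.

```python
import numpy as np

# Four vertices (3D points) and two triangles forming a small square patch.
# Each triangle row stores the indices of its three corner vertices; shared
# indices implicitly define the edges between connected triangles.
vertices = np.array([[0.0, 0.0, 0.0],
                     [1.0, 0.0, 0.0],
                     [1.0, 1.0, 0.0],
                     [0.0, 1.0, 0.0]])
triangles = np.array([[0, 1, 2],
                      [0, 2, 3]])
```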

A voxel grid, which is a volume-type representation, is a further option for representing a 3D model. Here, the space is divided into small cubes or cuboids, which are referred to as voxels. In the simplest case, the presence or absence of the object to be represented is stored in the form of a binary value (1 or 0) for each voxel. In the case of an edge length of the voxels of 1 mm and a volume of 300 mm x 300 mm x 300 mm, which represents a typical volume for a head, a total of 27 million voxels are consequently obtained. Such voxel grids are described in, e.g., M. Nießner, M. Zollhöfer, S. Izadi, and M. Stamminger, "Real-time 3D reconstruction at scale using voxel hashing," ACM Trans. Graph. 32, 6, Article 169 (November 2013), available at doi.org/10.1145/2508363.2508374.
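
As a quick plausibility check of the numbers above, such a binary voxel grid can be represented directly as a Boolean array; the snippet below is only an illustration, not part of the described method.

```python
import numpy as np

# 300 mm x 300 mm x 300 mm volume at 1 mm voxel edge length:
# 300 * 300 * 300 = 27,000,000 voxels, each holding presence (True) or absence (False).
voxels = np.zeros((300, 300, 300), dtype=bool)
print(voxels.size)        # 27000000

# Mark a single voxel as occupied, e.g. the cube at (x=150 mm, y=120 mm, z=90 mm):
voxels[150, 120, 90] = True
```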

An avatar of at least the eye portion of the head means that at least the eyes of the person and their surroundings are included in the avatar. In some embodiments, the avatar may for example be an avatar of the complete face of the person or of the complete head of the person, and may include more parts of the person as long as the eye portion is included.

Using the avatar, only a few landmarks of the head have to be determined during the determination of the head position, such that the avatar can be set based on the determined head position and/or facial landmarks. Similarly, as the avatar includes the eyes, determining the pupil position may be easier. Finally, by using an avatar, the method may be implemented without the person having to wear specific devices, and without the use of specific eye tracking equipment for example in a frame.

In one alternative, the person may wear a spectacle frame. Here, pursuant to DIN EN ISO 7998 and DIN EN ISO 8624, a spectacle frame should be understood to mean a frame or a holder by means of which spectacle lenses can be worn on the head. In particular, the term as used herein also includes rimless spectacle frames. In this case, the method may include detecting the spectacle frame and determining the visual point based on the detected spectacle frame. Detecting a spectacle frame is for example described in EP 3 542 211 A1. In another alternative, the avatar may be provided with a virtual spectacle frame in a so-called virtual try-on, referred to as virtual fitting. Such a virtual try-on is for example described in US 2003/0123026 A1, US 2002/0105530 A1 or US 2016/0327811 A1. In this case, the method may comprise determining the visual point based on the corresponding virtual spectacle frame fitted to the avatar. Using such a virtual spectacle frame has the advantage that the visual point may be determined for a plurality of spectacle frames without the person actually having to wear the spectacle frame.

Determining the head position and the pupil position of the person may be performed relative to the target. In this way, based on the avatar set to the determined head and pupil position relative to the target, a line of sight of the person may be determined, and an intersection of the line of sight with a spectacle lens plane may be taken as the visual point.

The avatar set to the determined head and pupil position means that the avatar is set to a position where the head and pupil positions of the avatar match the determined head and pupil positions.

The method preferably may be further characterized by generating the avatar by the mobile device.

Therefore, unlike the approach of EP 3413122 A1, the mobile device is used for generating the avatar. This has the advantage that no specific stationary apparatus is needed. Also, compared to other approaches discussed above, no specific hardware is needed. In some embodiments, for providing the avatar, the person may rotate the head about the longitudinal axis (up-down direction) of the head while the mobile device measures the head with a depth sensor and optionally also with an image sensor of a camera.

Combined depth and 2D camera images captured in this way will simply be referred to as combined images hereinafter; in the case of color camera images, they are also referred to as RGB-D images. In other embodiments, additionally or alternatively, the head may be rotated about another axis like a horizontal axis. For determining the avatar, the head may be held in an upright position and the gaze may be directed horizontally far away (normal gaze direction). In this way, an individual avatar of the person may be determined without the need for additional equipment.

One main issue for generating an avatar using a smartphone is that, in contrast to using a stationary apparatus like in EP 3413122 A1, the camera poses relative to the head when capturing the head with the depth sensor and camera are not known, while in a stationary arrangement the poses of the cameras are known by design of the apparatus. Generally, the term pose refers to the combination of position and orientation, as for example defined in DIN EN ISO 8373: 2012-03.

As the way a 3D object like a head is imaged onto an image sensor or captured by a depth sensor is determined by the characteristics of the device like focal length of a lens or sensor resolution, which are known by design, the problem of determining the camera poses is equivalent to the problem of registering the above-mentioned combined images.

Generally, a combined image refers to a 2D image and a depth image taken from the respective position essentially simultaneously. The 2D image may be a color image like an RGB image (red, green, blue) or may also be a grayscale image. A depth image provides a map of distances of the camera to the object, in this case, the head. For capturing the 2D part of the combined image, any conventional image sensor, combined with corresponding camera optics, may be used. To capture depth images, also any conventional depth sensor like a time-of-flight sensor may be used. The combined image may include two separate files or other data entities, where in one data entity for each 2D coordinate, e.g. pixel, a grayscale value or color value is given, and in another data entity for each 2D coordinate a depth value is given. The combined image may also include only a single data entity, where for each 2D coordinate both grayscale/color information and depth information is given. In other words, the way the information is stored in data entities like files is not important as long as for the scene captured in the image both grayscale/color information and depth information is available. A camera adapted for capturing combined images, in this case color images, is also referred to as an RGBD camera (red, green, blue, depth). Some modern smartphones or other mobile devices are equipped with such RGBD cameras. In other cases, also an RGBD camera or a depth sensor (which is then used together with a built-in camera of the smartphone) may be attached to a smartphone. It should be noted that the depth image need not have the same resolution as the 2D image. In such a case, a scaling operation may be performed (downscaling or upscaling) to adapt the resolution of the 2D and depth images to each other. The result is essentially a point cloud where each point has a 3D coordinate based on the 2D coordinates in the image and a depth coordinate from the depth sensor, as well as a pixel value (color or grayscale value).
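
To make the relation between a combined image and the resulting point cloud concrete, the following is an illustrative back-projection using an ideal pinhole camera model; the function name and the assumption that the depth and color images already share the same resolution (after any rescaling) are assumptions for this sketch, not details from the application.

```python
import numpy as np

def backproject(depth, color, fx, fy, cx, cy):
    """Turn a combined image (per-pixel depth + color) into a colored point cloud.

    depth: (H, W) array of distances along the optical axis (e.g. in meters),
    color: (H, W, 3) array of RGB values, fx, fy, cx, cy: pinhole intrinsics.
    Returns an (N, 6) array of [X, Y, Z, R, G, B] rows for pixels with valid depth.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel column/row indices
    z = depth
    x = (u - cx) * z / fx                            # pinhole back-projection
    y = (v - cy) * z / fy
    valid = z > 0                                    # ignore pixels without depth
    points = np.stack([x[valid], y[valid], z[valid]], axis=1)
    colors = color[valid].reshape(-1, 3)
    return np.hstack([points, colors])
```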

Image registration generally relates to the process of finding a transformation which transforms one of the combined images to another one of the combined images. Such a transformation may include a rotation component, a translation component and a magnification component (magnification greater or smaller than one) and may be written in matrix form. As mentioned above, performing the registration by determining the above-mentioned transformation is essentially equivalent to determining the (relative) camera poses (positions and orientations) from which the combined images were captured.

In some embodiments, such a registration may include a pairwise coarse registration based on 3D landmark points obtained based on the combined images and a fine registration based on full point clouds represented by the combined images.

A landmark point is a predefined point on the head. Such landmark points may for example include the tip of the nose, points on the nose bridge, corners of the mouth or of the eyes, points describing the eyebrows and the like. Such landmark points in the combined 2D and depth images may be determined by various conventional means. For example, a trained machine learning logic like a neural network may be used to determine the landmark points. In this case, for training, a number of combined 2D and depth images from different positions and for a plurality of different heads are used as training data, where the landmark points may be manually annotated. After training, the trained machine learning logic determines the landmark points. Details may be found for example in Wu, Y., Hassner, T., Kim, K., Medioni, G., & Natarajan, P. (2017), Facial landmark detection with tweaked convolutional neural networks, IEEE transactions on pattern analysis and machine intelligence, 40(12), 3067-3074; Perakis, P., Passalis, G., Theoharis, T., & Kakadiaris, I. A. (2012), 3D facial landmark detection under large yaw and expression variations, IEEE transactions on pattern analysis and machine intelligence, 35(7), 1552-1564; or Wu, Y., & Ji, Q. (2019), Facial landmark detection: A literature survey, International Journal of Computer Vision, 127(2), 115-142.

The pairwise coarse registration provides a coarse alignment between the 3D landmarks of the pairs. In embodiments, this pairwise coarse registration estimates a transformation matrix between the landmarks of the two combined images that aligns the landmarks in a least squares sense, i.e., such that the error e = Σ_i || l_i^(j=1) - T l_i^(j=2) ||^2 is minimized, where l_i^j is the i-th landmark of the j-th image and T is a transformation matrix from the second image of the respective pair (j=2) to the first image of the respective pair (j=1). This coarse registration may be performed by a method called point-to-point ICP ("Iterative Closest Point"), which is for example described in Besl, Paul J. and McKay, Neil D., "Method for registration of 3D shapes", Sensor fusion IV: control paradigms and data structures, Vol. 1611, International Society for Optics and Photonics, 1992. Preferably, to eliminate potential outliers which may be generated in the landmark determining step, a random sample consensus procedure may be used, as described in Fischler, Martin A., and Bolles, Robert C., "Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography", Communications of the ACM 24.6 (1981): 381-395. The above and other formulas presented herein use so-called homogeneous coordinates, as is frequently the case in computer vision applications. This means that transformations T are represented as 4x4 matrices [R t; 0 0 0 1], with a 3x3 rotation matrix R, a translation vector t and the last row being 0 0 0 1. 3D points (x, y, z) are augmented with a homogeneous component w, i.e., (x, y, z, w), where usually w=1. This makes it possible to include translation and rotation in a single matrix multiplication, i.e., instead of x2 = R x1 + t, with x2 and x1 vectors in Cartesian coordinates, one can write x2w = T x1w, where x1w and x2w are the corresponding vectors in homogeneous coordinates. Nevertheless, this is merely a matter of notation, and the same calculations may also be performed in Cartesian or other coordinates.
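
For illustration, the closed-form least-squares step on which such a landmark-based coarse registration relies can be written down compactly. The sketch below (plain NumPy, illustrative names, no outlier handling) computes the 4x4 homogeneous transformation T that best aligns two sets of corresponding 3D landmarks in the least-squares sense.

```python
import numpy as np

def rigid_transform(landmarks_src, landmarks_dst):
    """Least-squares rigid transform T (4x4, homogeneous) mapping landmarks_src
    onto landmarks_dst, i.e. minimizing sum_i ||dst_i - T src_i||^2.

    Both inputs are (N, 3) arrays of corresponding 3D landmark points. This is
    the closed-form (SVD-based) solution used as the inner step of point-to-point
    ICP; with known landmark correspondences a single solve suffices.
    """
    c_src = landmarks_src.mean(axis=0)
    c_dst = landmarks_dst.mean(axis=0)
    H = (landmarks_src - c_src).T @ (landmarks_dst - c_dst)   # 3x3 covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                                   # avoid reflections
        Vt[2, :] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T
```

In a robust variant following the Fischler and Bolles reference, this solve would be repeated on random landmark subsets and the transformation supported by the largest inlier set would be kept.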

The fine registration refines the above-mentioned transformations, i.e., makes the transformations more precise. For this fine registration, full point clouds represented by the combined 2D and depth images (i.e., the full RGBD images or point clouds derived therefrom) may be used. In particular, also color information may be used. As a coarse registration already has been performed, the fine registration may be performed more efficiently than in cases where only the point cloud is used for registration, or a corresponding mesh is used.

Different approaches may be used for fine registration. In some embodiments, the approach selected may depend on the error remaining after the coarse registration, for example the above-mentioned error e or another error quantity like an angle or position difference between the landmark points of the respective pairs of combined 2D and depth images based on the transformation determined after coarse registration. If this deviation is small (for example error e below a threshold value, angle difference below a threshold angle like 5°, or position difference below a threshold distance like 5 cm, for example as an average over the landmark points), RGBD odometry may be used for fine registration, where not only the depth coordinate but also the color of the points of the point cloud is considered. RGBD odometry is for example described in Park, Jaesik, Zhou, Qian-Yi and Koltun, Vladlen, "Colored point cloud registration revisited", Proceedings of the IEEE International Conference on Computer Vision, 2017. For larger differences, a point-to-plane ICP on the point clouds may be used to register the images, as described for example in Rusinkiewicz, Szymon, and Levoy, Marc, "Efficient variants of the ICP algorithm", Proceedings of the third international conference on 3D digital imaging and modeling, IEEE, 2001.
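
A rough sketch of this case distinction is given below, assuming the Open3D library for the two registration back-ends (its colored registration implements the Park et al. method cited above); the point clouds, the coarse transform, the residual measure and all thresholds are illustrative assumptions, not values from the application.

```python
import open3d as o3d

def fine_registration(source, target, T_coarse, coarse_error,
                      error_threshold=0.05, max_corr_dist=0.01):
    """Refine a coarse 4x4 transform between two colored point clouds.

    source/target: open3d.geometry.PointCloud derived from two combined images,
    coarse_error: residual after coarse registration (e.g. mean landmark distance, m).
    """
    source.estimate_normals()
    target.estimate_normals()
    if coarse_error < error_threshold:
        # Small remaining deviation: colored registration (Park et al. 2017),
        # which uses color in addition to geometry.
        result = o3d.pipelines.registration.registration_colored_icp(
            source, target, max_corr_dist, T_coarse)
    else:
        # Larger deviation: plain point-to-plane ICP on the geometry only.
        result = o3d.pipelines.registration.registration_icp(
            source, target, max_corr_dist, T_coarse,
            o3d.pipelines.registration.TransformationEstimationPointToPlane())
    return result.transformation
```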

For both alternatives, preferably the registration is performed twice, once for estimating a transformation from a first combined image of the respective pair to a second combined image of the respective pair and once for estimating a transformation from the second combined image to the first combined image, with slightly different start values for the algorithm. This may help to increase overall accuracy. In other words, two transformations T1 and T2 are determined. The error between the two registrations, T_e = T1 T2, is determined, which, if the registration is stable, should be close to the identity I4 (i.e., a 4x4 diagonal matrix with values of one on the diagonal). If the error, i.e., the deviation from identity, is below a certain threshold, the respective transformation may be added to a so-called pose graph as an edge between the respective combined images. In some embodiments, covariances of the transformations may also be determined.
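
This consistency test amounts to measuring how far T1 T2 is from the identity; a small illustrative helper (NumPy, with made-up tolerance values) is shown below.

```python
import numpy as np

def registration_consistent(T1, T2, rot_tol_deg=2.0, trans_tol=0.005):
    """Check whether two opposite-direction registrations agree.

    T1 maps image 1 to image 2 and T2 maps image 2 back to image 1 (both 4x4,
    homogeneous), so T_e = T1 @ T2 should be close to the identity I4.
    Tolerances are illustrative (degrees / meters).
    """
    T_e = T1 @ T2
    R_e = T_e[:3, :3]
    # Rotation angle of the residual rotation, from the trace of R_e.
    cos_angle = np.clip((np.trace(R_e) - 1.0) / 2.0, -1.0, 1.0)
    angle_deg = np.degrees(np.arccos(cos_angle))
    trans_err = np.linalg.norm(T_e[:3, 3])
    return angle_deg < rot_tol_deg and trans_err < trans_tol
```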

Preferably, for both coarse and fine registration, not all possible pairs of combined 2D and depth images are used, but the pairs to be registered may be determined based on a classification of the combined 2D and depth images with respect to a direction relative to the head from which they are captured, for example based on a so-called putative matching graph which indicates for which pairs a registration may be performed. This may be done using approximate pose data or other approximate information to determine combined images for which the poses from which they are captured are similar enough such that a registration is reasonably possible. For example, if one combined image is taken from the left side of the head and another combined image is captured from the right side of the head, there are hardly any common landmark points to be obtained from both images (for example the left eye only visible from the left side, the right eye only visible from the right side, and the same for the left mouth corner and the right mouth corner). Therefore, the classification may sort the combined images into categories like left, right, up and down, starting from a frontal image. Inside each category, the pairwise coarse and fine registration as described above is performed.

The categorization in some embodiments may be based on metric data from the image recording device itself, for example from ARKit tools in the case of iOS-based devices or ARCore tools in the case of Android-based devices used for capturing the combined images. In other cases, the putative matching graph may be derived from the above landmarks via 2D/3D correspondences and a perspective-n-point solver, as for example described in Urban, Steffen, Leitloff, Jens and Hinz, Stefan, "MLPnP - a real-time maximum likelihood solution to the perspective-n-point problem," arXiv preprint arXiv:1607.08112 (2016).

In this way, the method may avoid attempting to register combined images for which such a registration is difficult or impossible due to a lack of common landmark points, which improves robustness.

Based on the registration, i.e., the transformations above, poses may then be estimated for each combined image in a global reference system. The poses may be poses of the head represented by the combined images or camera poses. As explained above, the poses of the camera when capturing the combined images are directly linked to the registration of the images and therefore to the poses of the head in the combined images, such that if the camera pose is known, the head pose can be determined and vice versa. The resulting pose graph preferably is then optimized. A possible method for pose graph optimization, including generating poses Mj for each of the combined images based on the registration, is described in Choi, Sungjoon, Zhou, Qian-Yi and Koltun, Vladlen, "Robust reconstruction of indoor scenes", Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015.

Based on these poses Mj, a pose graph P = {M, E} is provided, consisting of nodes M, i.e., the poses Mj, and edges E, which are the transformations T and possibly their covariances, if these are determined.
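
As a purely illustrative data layout (not the implementation used in the application), such a pose graph can be held as two lists, one for the absolute poses and one for the relative-transformation edges:

```python
from dataclasses import dataclass, field
from typing import List, Optional
import numpy as np

@dataclass
class PoseGraphEdge:
    i: int                                     # index of the first combined image
    j: int                                     # index of the second combined image
    T: np.ndarray                              # 4x4 relative transformation between them
    covariance: Optional[np.ndarray] = None    # optional uncertainty of T

@dataclass
class PoseGraph:
    nodes: List[np.ndarray] = field(default_factory=list)      # absolute poses M_j (4x4)
    edges: List[PoseGraphEdge] = field(default_factory=list)   # relative transformations
```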

Based on this pose graph, for further optimization a so-called edge pruning may then be performed. This serves to remove wrong estimates of T, i.e., to determine whether the poses were estimated from valid edges, that is, valid transformations obtained in the registration. For this, one of the poses is taken as a reference. Then, edges of the unoptimized pose graph are concatenated along a shortest path to another node. This yields a further pose estimate for the node to be tested. This further estimate is then compared to the pose from the optimized pose graph. In case of a high deviation in this comparison, i.e., a deviation above a threshold, the corresponding edges may be identified as erroneous. These edges may then be removed. Following this, the pose graph optimization mentioned above is repeated without the removed edges, until no more erroneous edges remain in the graph.
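
The comparison step can be pictured as follows (illustrative NumPy sketch; the shortest-path search itself and the threshold values are left out and would be assumptions anyway):

```python
import numpy as np

def pose_from_path(edges_along_path):
    """Concatenate relative transformations along a path in the unoptimized
    pose graph (list of 4x4 matrices, ordered from the reference node to the
    node under test) into a single pose estimate."""
    M = np.eye(4)
    for T in edges_along_path:
        M = M @ T
    return M

def deviation(M_path, M_optimized):
    """Rotation (degrees) and translation deviation between the chained estimate
    and the pose from the optimized pose graph; large deviations flag the
    involved edges as candidates for removal."""
    D = np.linalg.inv(M_optimized) @ M_path
    cos_angle = np.clip((np.trace(D[:3, :3]) - 1.0) / 2.0, -1.0, 1.0)
    return np.degrees(np.arccos(cos_angle)), np.linalg.norm(D[:3, 3])
```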

Based on the registration and the thus-determined camera poses, then the avatar may be created based on the images essentially in a similar manner as for stationary camera arrangements. This may include fusing the point clouds derived from the individual combined images based on the registration, generating a mesh based on the fused point cloud for example using a Poisson reconstruction as disclosed in Kazhdan, Michael, Bolitho, Matthew and Hoppe, Hugues “Poisson surface reconstruction", Proceedings of the fourth Eurographics symposium on Geometry processing, Vol. 7, 2006 and texturing the mesh based on the images for example as described in Waechter, Michael, Moehrle, Nils and Goesele, Michael, “Let there be color! Large-scale texturing of 3D reconstructions”, European conference on computer vision, Springer, Cham, 2014.
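
A condensed sketch of this last stage is shown below, assuming Open3D for point cloud fusion and Poisson meshing; texturing in the sense of the Waechter et al. reference is not covered, and the function and variable names are illustrative assumptions.

```python
import copy
import open3d as o3d

def fuse_and_mesh(point_clouds, poses, poisson_depth=9):
    """Fuse per-image point clouds into one cloud using the optimized poses M_j
    and turn the fused cloud into a surface mesh via Poisson reconstruction."""
    fused = o3d.geometry.PointCloud()
    for pcd, M in zip(point_clouds, poses):
        transformed = copy.deepcopy(pcd)
        transformed.transform(M)           # bring the cloud into the global frame
        fused += transformed
    fused.estimate_normals()               # Poisson reconstruction needs normals
    mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
        fused, depth=poisson_depth)
    return mesh
```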

Another approach for generating a 3D model of a head using a smartphone is disclosed in WO 2019 / 164502 A1.

The target may be in the near field. In this way, the near visual point may be determined. The near field is an area where the person uses near vision, which includes typical reading distances or distances for similar tasks, for example a distance up to or at about 40 cm from the eyes, as given under 5.28.1 in DIN EN ISO 13666:2012.

The target may be the mobile device, e.g., a certain feature of the mobile device like a camera or a target displayed on the mobile device. The determination of the relative position and orientation, i.e., pose, of the mobile device (target) with respect to the head when moving the target through the field of view may then be performed using sensors of the mobile device, for example the 2D camera and depth sensor as mentioned above. For example, facial landmarks may be detected for each position as described above, and the poses may be determined based on the facial landmarks, corresponding to the coarse registration process described above. Optionally, to refine the poses, a fine registration as described above, and further optionally the pose graph optimization as described above, may be used. In this way, the poses of the mobile device may be accurately determined.

The target may also be or include a 3D landscape.

The plurality of visual points may be provided to a database and displayed as a map, for example on a user interface on the screen of a mobile device or a computer screen, as part of a platform which also provides other information regarding the person's vision like refraction, may enable a virtual try-on of spectacle frames, etc. Furthermore, such a platform may provide information on a recommended optical design of ophthalmic lenses based on the individual visual points and provide explanations on the optical design to an eye care professional or the glasses wearer. The plurality of visual points recorded at different points in time or using different ophthalmic lens designs and/or spectacles are stored in the database. The platform may comprise a functionality to directly compare the displayed maps from different points in time or using different ophthalmic lens designs and/or spectacles from the database.

This information can be used to tailor spectacle lenses specifically to the person. Examples for such a tailoring are described in Sheedy, J. E. (2004), "Progressive addition lenses - matching the specific lens to patient needs", Optometry-Journal of the American Optometric Association, 75(2), 83-102, or Sheedy, J., Hardy, R. F., & Hayes, J. R. (2006), "Progressive addition lenses - measurements and ratings", Optometry-Journal of the American Optometric Association, 77(1), 23-39. Therefore, according to a further aspect, a method for manufacturing a spectacle lens is provided, comprising: determining a visual point as discussed above, and manufacturing the spectacle lens, for example a PAL, according to the determined visual point.

In this way, a spectacle lens may be better tailored to the person.

As mentioned, the method may be implemented using an off-the-shelf mobile device like a smartphone or a tablet PC equipped with corresponding sensors. In this case, a computer program for such a mobile device may be provided to implement any of the methods above. In the context of mobile devices, such a computer program is often referred to as an application or "app".

According to another aspect, a mobile device is provided, including: a sensor configured to determine a head position and a pupil position of a person while the person looks at a target, and a processor configured to determine the visual point of the person based on an avatar provided at least for an eye portion of the person, which is set to the determined head and pupil position, characterized in that the mobile device is further configured to determine the visual point of the person for a plurality of positions when the target is moved through a field of view of the person.

The mobile device may be configured, for example programmed, for any of the methods discussed above.

A processor, in this respect, refers to any entity that is capable of performing corresponding calculations, for example a microprocessor programmed accordingly, or a microcontroller. As mentioned previously, the mobile device may be an off-the-shelf mobile device like a smartphone or a tablet, where corresponding sensors or processors are usually provided. The sensor may for example include a depth sensor like a time-of-flight sensor, a camera, or both.

The above concepts will be further explained using specific embodiments, wherein: FIG. 1 is a block diagram of a mobile device usable for implementation of embodiments,

FIG. 2 is a flowchart of a method according to an embodiment,

FIGs. 3 to 7 are diagrams for explaining the method of FIG. 2, and

FIGs. 8 and 9 are heat maps showing example visual point distributions.

FIG. 1 illustrates a mobile device 10 according to an embodiment. Mobile device 10 is a smartphone or a tablet PC. FIG. 1 shows some components of mobile device 10 that may be used to implement methods as described herein. Mobile device 10 may include further components conventionally used in mobile devices like smartphones, which are not needed for the explanations of embodiments and therefore not shown in FIG. 1.

As sensors, mobile device 10 includes a camera 11 and a depth sensor 12. In many cases, mobile devices include more than one camera, for example a so-called front side camera and a so-called backside camera. Both types of cameras may be used in embodiments. Depth sensor 12 may for example be a time-of-flight based depth sensor, as often provided in mobile devices.

For inputting and outputting information, mobile device 10 includes a speaker 13 for outputting audio signals, a microphone 14 for receiving audio signals and a touchscreen 19. By outputting signals on speaker 13 or displaying information on touchscreen 19, mobile device 10 may output instructions to a person to perform the methods described herein. Via microphone 14 and touchscreen 19, mobile device 10 may receive feedback from a person.

Furthermore, mobile device 10 includes a processor 16 which controls the various components of mobile device 10 and executes programs stored in a storage 15. Storage 15 in mobile devices like smartphones or tablet PCs is typically a flash memory or the like. Storage 15 stores computer programs that are executed by processor 16 such that the method described further below with respect to FIG. 2 is executed.

Mobile device 10 further comprises an acceleration sensor 17 and an orientation sensor 110. With acceleration sensor 17, an acceleration of mobile device 10 may be measured and, by integrating the acceleration twice, a position of mobile device 10 relative to a start position may be determined. Via orientation sensor 110, an orientation (for example an inclination angle) of mobile device 10 may be determined. The combination of position and orientation is also referred to as pose, see DIN EN ISO 8373: 2012-03.
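
Purely as an illustration of this double integration, the following Python sketch estimates a displacement from accelerometer samples using a cumulative trapezoidal rule. It is not part of the claimed method; function and variable names are chosen freely here, and a practical implementation would additionally compensate for gravity and sensor drift.

```python
import numpy as np

def integrate_position(acceleration, dt):
    """Integrate measured device acceleration twice (trapezoidal rule) to
    estimate the displacement relative to the start position."""
    acceleration = np.asarray(acceleration, dtype=float)   # shape (N, 3)
    # First integration: acceleration -> velocity
    velocity = np.cumsum((acceleration[1:] + acceleration[:-1]) / 2.0 * dt, axis=0)
    velocity = np.vstack([np.zeros(3), velocity])
    # Second integration: velocity -> position
    position = np.cumsum((velocity[1:] + velocity[:-1]) / 2.0 * dt, axis=0)
    return np.vstack([np.zeros(3), position])

# Example: constant acceleration of 0.1 m/s^2 along x for 1 s, sampled at 100 Hz
samples = np.tile([0.1, 0.0, 0.0], (101, 1))
print(integrate_position(samples, dt=0.01)[-1])   # approximately [0.05, 0, 0]
```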

FIG. 2 illustrates a method according to an embodiment, which may be implemented by providing a corresponding computer program in storage 15 of mobile device 10 of FIG. 1 to be executed by processor 16.

In step 20, the method of FIG. 2 comprises determining an avatar of at least an eye portion of the head of a person to be examined. This is further illustrated in FIG. 3. Here, a head 30 of the person with an eye 31 is schematically shown. The person (or another person) holds mobile device 10, in this case for example a smartphone, in front of head 30 while the person looks straight at some target far away, as illustrated by a line 32. The person then rotates the head about the vertical axis 33 while mobile device 10 measures the head using depth sensor 12 and camera 11 for a plurality of rotational positions. From these measurements, an avatar of at least an eye portion of head 30, i.e., a portion including the eye 31 as shown and the other eye (not shown), and preferably of the complete head, is then calculated by processor 16. Alternatively, mobile device 10 may be moved around head 30 and may measure head 30 at different positions of mobile device 10. Mobile device 10 may determine these positions based on acceleration sensor 17.
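
As a highly simplified sketch of how measurements from a plurality of rotational positions could be combined into a point cloud for the avatar, consider the following Python fragment. It assumes that the head rotation angle for each capture is known and that each scan is already given as 3D points; the registration techniques cited in the references (e.g. ICP or voxel hashing) would refine such an alignment in practice.

```python
import numpy as np

def merge_scans(scans, yaw_angles_deg):
    """Merge partial point clouds of the head, each captured at a known
    rotation about the vertical (y) axis, into one combined point cloud.
    The sign convention of the rotation is an assumption; a real pipeline
    would refine the alignment, e.g. with ICP."""
    merged = []
    for points, yaw in zip(scans, yaw_angles_deg):
        a = np.radians(yaw)
        # Rotation about the vertical axis compensating the head rotation
        rot = np.array([[np.cos(a), 0.0, np.sin(a)],
                        [0.0,       1.0, 0.0],
                        [-np.sin(a), 0.0, np.cos(a)]])
        merged.append(np.asarray(points, dtype=float) @ rot.T)
    return np.vstack(merged)

# Example with two tiny synthetic scans captured at 0 and 30 degrees
scan_a = np.array([[0.0, 0.0, 0.1], [0.02, 0.0, 0.1]])
scan_b = np.array([[0.0, 0.0, 0.1]])
print(merge_scans([scan_a, scan_b], [0.0, 30.0]).shape)   # (3, 3)
```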

Returning to FIG. 2, at 21 the method comprises measuring the head and pupil position while the person looks at a target. This is illustrated in FIG. 4. In FIG. 4, mobile device 10 serves as the target, for example by displaying some target the person should look at on touchscreen 19. For example, to give instructions to the person at the same time, mobile device 10 may display a text like "now look here" on touchscreen 19. Mobile device 10 is for example held in a reading position or in another typical near field position. To look at the target, the person inclines head 30 by an angle between line 32 and a line 41. Moreover, the person may rotate eye 31, resulting in a corresponding line of sight. Using depth sensor 12 and camera 11, mobile device 10 detects the pupil position and the head position. Additionally, in some embodiments the person may wear a spectacle frame 40, in which case the position of the spectacle frame is also detected by mobile device 10 as discussed above. In other embodiments, as already explained above, the avatar of the person determined at step 20 may be fitted with a virtual spectacle frame. While in FIG. 4 the target is the mobile device, the person may also look at other targets or at a 3D landscape or scene, and mobile device 10 then serves merely for the measurements.

At 22, the method comprises setting the avatar to the head and pupil position measured at 21. In other words, the head of the avatar is set to a pose (position and inclination) as measured at 21, and the eyes of the avatar are rotated such that they have the pupil position measured at 21.
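
Setting the avatar to the measured head pose can be pictured as applying a rigid transformation to the avatar vertices. The following Python sketch is only illustrative; the Euler angle convention and the names used are assumptions, not taken from the description.

```python
import numpy as np

def euler_to_matrix(alpha, beta, gamma):
    """Rotation matrix from Euler angles in radians (x-y-z convention assumed)."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    cg, sg = np.cos(gamma), np.sin(gamma)
    rx = np.array([[1, 0, 0], [0, ca, -sa], [0, sa, ca]])
    ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])
    rz = np.array([[cg, -sg, 0], [sg, cg, 0], [0, 0, 1]])
    return rz @ ry @ rx

def pose_avatar(vertices, head_position, head_angles):
    """Place avatar vertices at the measured head pose (rotation, then translation)."""
    rot = euler_to_matrix(*head_angles)
    return np.asarray(vertices, dtype=float) @ rot.T + np.asarray(head_position, dtype=float)

# Example: one avatar vertex, head inclined by -15 degrees about the x-axis
vertex = np.array([[0.0, 0.0, 0.1]])
print(pose_avatar(vertex, [0.0, 0.0, 0.4], [np.radians(-15.0), 0.0, 0.0]))
```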

Then, at 23, the method of FIG. 2 includes determining the visual point based on the avatar in the measured head and pupil position. Therefore, unlike some conventional methods, the final determination does not use direct measurements, but uses the avatar set to the corresponding position as a basis. As mentioned above, for this either a real spectacle frame worn by the person or a virtually fitted spectacle frame may be used. This determination will be explained referring to FIGs. 5 to 7.

As indicated by box 24, steps 21 to 23 are repeated for different target positions, for example by moving the mobile device through the field of view of the person as indicated by arrows 43 in FIG. 4. By this repetition, a map of visual points similar to the ones shown in FIGs. 8 and 9 may be obtained using a smartphone or a similar mobile device. This map may be displayed or transmitted to a lens manufacturer for manufacturing lenses which are then adapted to the visual point map of the respective person.
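
A minimal sketch of how repeated visual point determinations could be accumulated into such a map is given below; the binning into a 2D histogram is an assumption for illustration and not prescribed by the method.

```python
import numpy as np

def visual_point_map(visual_points, bins=20):
    """Accumulate visual points (e.g. x/y coordinates on the lens) measured for
    many target positions into a 2D histogram, i.e. a heat map."""
    pts = np.asarray(visual_points, dtype=float)
    heat, x_edges, y_edges = np.histogram2d(pts[:, 0], pts[:, 1], bins=bins)
    return heat, x_edges, y_edges

# Example with synthetic visual points (coordinates in mm, purely illustrative)
rng = np.random.default_rng(0)
points = rng.normal(loc=[0.0, -4.0], scale=2.0, size=(500, 2))
heat, _, _ = visual_point_map(points)
print(heat.shape)   # (20, 20)
```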

For the determination of the visual point at 23, mobile device 10, an avatar of head 30, an avatar of eye 31 and spectacle frame 40 (as a virtual frame or as a detected real frame) may each be represented by a six-dimensional vector including a three-dimensional position (x, y, z coordinates) and three angles (α, β, γ). That is, the mobile device may be represented by a position {x_mobile, y_mobile, z_mobile} and an orientation {α_mobile, β_mobile, γ_mobile}, the head and/or facial features may be represented by a position {x_head, y_head, z_head} and an orientation {α_head, β_head, γ_head}, and the at least one eye may be represented by a position {x_eye, y_eye, z_eye} and an orientation {α_eye, β_eye, γ_eye}. Herein, the position coordinates may be represented in Cartesian, polar or similar coordinates. The orientations may be represented as Euler angles, rotation matrices, quaternions or the like. The orientation of mobile device 10 is measured by orientation sensor 110, and the position of mobile device 10 may be set to 0, such that the other positions explained below are determined relative to mobile device 10.
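
For illustration only, such six-dimensional vectors could be held in a small data structure like the following Python sketch; the field names mirror the notation above but are otherwise arbitrary, and the numeric values are made up.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Pose:
    """Six-dimensional pose: three position coordinates and three angles,
    following the {x, y, z} / {alpha, beta, gamma} notation used above."""
    x: float
    y: float
    z: float
    alpha: float
    beta: float
    gamma: float

    def position(self):
        return np.array([self.x, self.y, self.z])

    def angles(self):
        return np.array([self.alpha, self.beta, self.gamma])

# One pose per entity; the mobile device is used as the origin of the
# coordinate system, so its position is set to zero.
mobile = Pose(0.0, 0.0, 0.0, alpha=0.1, beta=0.0, gamma=0.0)
head = Pose(0.0, 0.05, 0.35, alpha=-0.2, beta=0.0, gamma=0.0)
print(head.position() - mobile.position())
```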

By measuring for example a few landmark points of head 30, the position {x_head, y_head, z_head} of the head of the person and the corresponding orientation {α_head, β_head, γ_head} may be determined. Likewise, the position of the eye within the avatar corresponding to eye 31 may be determined as {x_eye, y_eye, z_eye} and its orientation as {α_eye, β_eye, γ_eye}. The avatar is then set to the corresponding position and orientation.
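
One common way to obtain a position and orientation from a few landmark points is a least-squares rigid fit between the landmarks of the avatar and the measured landmarks, for example with the Kabsch algorithm. The description does not prescribe a particular algorithm, so the following Python sketch is merely one possible choice.

```python
import numpy as np

def rigid_fit(model_landmarks, measured_landmarks):
    """Estimate the rotation matrix and translation vector that map avatar
    (model) landmarks onto the measured landmarks (Kabsch algorithm)."""
    p = np.asarray(model_landmarks, dtype=float)
    q = np.asarray(measured_landmarks, dtype=float)
    p_mean, q_mean = p.mean(axis=0), q.mean(axis=0)
    h = (p - p_mean).T @ (q - q_mean)          # cross-covariance of centered landmarks
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))     # guard against reflections
    rot = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    trans = q_mean - rot @ p_mean
    return rot, trans   # orientation and position of the head relative to the device
```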

Using mobile device 10, also the center of rotation of the eye may be determined, as described for example in European Patent Application EP 20202529.2, and based on the pupil position and the center of rotation the position and orientation of the eye may be determined. Furthermore, also for the spectacle frame a corresponding position {x_frame, y_frame, z_frame} and orientation {α_frame, β_frame, γ_frame} may be determined as explained above.
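
As a simplified illustration of how the eye orientation could follow from the pupil position and the center of rotation, the direction of the line of sight may be approximated by the unit vector from the center of rotation through the pupil. The sketch below uses assumed names and neglects the angle kappa between the optical and visual axis.

```python
import numpy as np

def gaze_direction(center_of_rotation, pupil_position):
    """Approximate line-of-sight direction as the unit vector from the eye's
    center of rotation through the measured pupil position."""
    direction = np.asarray(pupil_position, dtype=float) - np.asarray(center_of_rotation, dtype=float)
    return direction / np.linalg.norm(direction)

# Example: pupil roughly 13 mm in front of the center of rotation (illustrative, in m)
print(gaze_direction([0.0, 0.0, 0.0], [0.0, -0.002, 0.013]))
```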

In FIGs. 5 to 7, reference numeral 50 denotes the avatar of head 30 set to the position and orientation measured in step 21, numerals 53A and 53B designate the left and right eye of the avatar, respectively, and reference numeral 51 designates the position of mobile device 10, i.e., essentially the measurement plane. The position 51 is set to a vertical position for the calculation. This means that the orientations for eye, head and spectacles are transformed, based on the measured orientation of mobile device 10, to a coordinate system where mobile device 10 lies in an x-z-plane. In FIG. 5, 54A and 54B denote the lines of sight of eyes 53A and 53B, respectively, and in FIG. 6 lines 64A, 64B denote the respective lines of sight. As the relative change of position and orientation of the avatar is registered using at least one camera and/or at least one depth sensor, no additional device is necessary to determine the visual point.
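
The transformation into a coordinate system in which mobile device 10 lies in the x-z-plane can be sketched as applying the inverse of the measured device rotation to all positions (and, analogously, to the orientation matrices). The following Python fragment is a minimal sketch under these assumptions, with freely chosen names and numbers.

```python
import numpy as np

def to_device_frame(device_rotation, points):
    """Express measured 3D points in a coordinate system aligned with the
    mobile device by applying the inverse of the measured device rotation.
    Orientation matrices would be transformed analogously by left-multiplying
    them with the inverse rotation."""
    r = np.asarray(device_rotation, dtype=float)
    # For row-stacked points, p @ r equals applying r.T (= the inverse rotation) to each point.
    return np.asarray(points, dtype=float) @ r

# Example: device tilted by 20 degrees about the x-axis (illustrative)
t = np.radians(20.0)
device_rot = np.array([[1.0, 0.0, 0.0],
                       [0.0, np.cos(t), -np.sin(t)],
                       [0.0, np.sin(t), np.cos(t)]])
print(to_device_frame(device_rot, [[0.0, 0.1, 0.3]]))
```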

FIGs. 5 to 7 illustrate a rotation of the head about the x-axis of FIGs. 3 and 4, or, in other words, illustrate the y-z-plane. Similar calculations may be performed for other planes.

In FIG. 5, the eyes essentially look straight ahead, such that the angles α_eye,OS and α_eye,OD corresponding to the left and right eye, respectively, as seen at 51, correspond to the head rotation angle α_head. In FIG. 6, the gaze of the two eyes converges at a single point, such that the angles α_eye,OS and α_eye,OD differ from the angle α_head by the respective relative eye rotation angles α_eye of the left and right eyes with respect to the head. Δα in FIG. 6 designates the difference between the eye rotation and the inclination α_frame of the spectacle frame at the respective eye.

In FIG. 5, reference numerals 55A, 55B designate the visual point for the left and right eye, respectively. In FIG. 6, 65A and 65B designate the visual points for the left and right eye. The construction of the visual point based on the angles discussed is shown again in FIG. 7 for a right eye. Essentially, the difference between the spectacle frame rotation angle and the eye rotation is given by Δα, and the visual point 70 is then calculated based on the position of spectacle frame 52 (given for example by the coordinates with index "frame" mentioned above).
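
Geometrically, this construction amounts to intersecting the line of sight with the plane of the spectacle lens. The Python sketch below shows such a ray-plane intersection with freely chosen example numbers (an eye roughly 27 mm behind a lens plane tilted by 8 degrees); it is an illustration of the geometry, not an excerpt of the claimed method.

```python
import numpy as np

def visual_point(eye_center, gaze_dir, frame_point, frame_normal):
    """Intersect the line of sight (ray from the eye's center of rotation along
    the gaze direction) with the spectacle lens plane, which is given by a point
    on the frame and a plane normal derived from the frame inclination."""
    d = np.asarray(gaze_dir, dtype=float)
    n = np.asarray(frame_normal, dtype=float)
    denom = d @ n
    if abs(denom) < 1e-9:
        raise ValueError("line of sight is parallel to the lens plane")
    t = ((np.asarray(frame_point, dtype=float) - np.asarray(eye_center, dtype=float)) @ n) / denom
    return np.asarray(eye_center, dtype=float) + t * d

# Example: eye center 27 mm behind a lens plane tilted by 8 degrees, gaze slightly downward
tilt = np.radians(8.0)
normal = np.array([0.0, np.sin(tilt), np.cos(tilt)])
print(visual_point([0.0, 0.0, 0.0], [0.0, -0.2, 1.0], [0.0, 0.0, 0.027], normal))
```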