

Title:
METHOD AND SYSTEM FOR OPTIMUM POSITIONING OF CAMERAS FOR ACCURATE RENDERING OF A VIRTUAL SCENE
Document Type and Number:
WIPO Patent Application WO/2024/013137
Kind Code:
A1
Abstract:
A method for positioning cameras on an object that enables accurate, real-time rendering of the scene around the object on a dome. The method involves providing a 3D model of the object having a surface, and selecting locations on the surface where the cameras are to be placed to provide a camera rig. The locations are chosen such that every camera has a field of view that overlaps with that of at least one other camera. Each camera is also associated with a blind volume. Each location is also associated with an importance weight that may be user defined or may be estimated based on a number of predefined factors. Subsequently, based on all the locations, an objective function is estimated. This step is repeated for different sets of locations until a minimum of the objective function is arrived at.

Inventors:
KOVACEVIC SRDJAN (HR)
PETRINEC ANA (HR)
Application Number:
PCT/EP2023/069127
Publication Date:
January 18, 2024
Filing Date:
July 11, 2023
Assignee:
ORQA HOLDING LTD (IE)
International Classes:
H04N23/661; G03B37/00; G03B37/04; H04N5/74; H04N7/18; H04N23/698; H04N23/90
Foreign References:
US 6141034 A (2000-10-31)
Attorney, Agent or Firm:
FRKELLY (IE)
Claims:
What is claimed is:

1. A method of providing a camera rig by positioning of cameras on an object, the method comprising:
providing a 3D model of the object, wherein the 3D model comprises a surface;
designating each of the cameras as a Direct View Camera or a Secondary View Camera;
positioning the Direct View Camera and the Secondary View Camera at selected locations on the surface of the 3D model of the object; and
specifying a direction, a rotation along a longitudinal axis and a field of view for each of the cameras.

2. The method of claim 1 further comprising predefining the number of the Direct View Cameras and the Secondary View Cameras to be placed.

3. The method of claim 1 wherein the Direct View Camera is positioned such that there is an overlap between the field of view of the Direct View Camera and the field of view of at least one other of the cameras.

4. The method of claim 1 wherein the selected locations are selected to align to a specific geometry.

5. The method of claim 1 wherein the object is a vehicle.

6. A method of rendering a virtual scene, the method comprising:
providing a camera rig with a plurality of cameras, comprising: providing a 3D model of the object, wherein the 3D model comprises a surface; designating each camera of the plurality of cameras as a Direct View Camera or a Secondary View Camera; specifying a direction, a rotation along a longitudinal axis and a field of view for each camera of the plurality of cameras; and positioning the Direct View Camera and the Secondary View Camera at selected locations on the surface of the 3D model of the object;
providing a projection rig that comprises a plurality of projectors within a hollow half-sphere, wherein the hollow half-sphere comprises an inner surface and an outer surface;
designating each projector of the plurality of projectors as a Direct View Projector or a Secondary View Projector;
positioning each of the Direct View Projectors such that each Direct View Projector corresponds to a Direct View Camera, having the identical spatial position, direction, rotation along a longitudinal axis and field of view of the corresponding Direct View Camera, and is capable of receiving images from the corresponding Direct View Camera;
positioning each of the Secondary View Projectors arbitrarily, such that each Secondary View Projector corresponds to a Secondary View Camera on the camera rig and is capable of receiving images from the corresponding Secondary View Camera; and
projecting, by all the projectors, the images obtained from each corresponding camera onto the inner surface of the hollow half-sphere.

7. The method of claim 6 wherein the spatial position is defined by Cartesian coordinates.

8. The method of claim 6 wherein the direction is defined by pitch and yaw.

9. The method of claim 6 wherein the rotation along the longitudinal axis is defined by roll.

10. A system for virtual rendering of space around an object to a user, the system comprising:
a 3D model of an object comprising a surface;
a camera rig that comprises a plurality of cameras positioned at selected locations on the surface of the 3D model of the object, wherein each camera of the plurality of cameras is designated as a Direct View Camera or a Secondary View Camera, and wherein each camera of the plurality of cameras is associated with a direction, a rotation along a longitudinal axis and a field of view; and
a projection rig that comprises a plurality of projectors within a hollow half-sphere, wherein the hollow half-sphere comprises an inner surface and an outer surface, wherein each projector of the plurality of projectors is designated as a Direct View Projector or a Secondary View Projector, wherein the plurality of projectors are positioned such that each Direct View Projector corresponds to a Direct View Camera, having the identical spatial position, direction, rotation along a longitudinal axis and field of view of the corresponding Direct View Camera, and is capable of receiving images from the corresponding Direct View Camera, each of the Secondary View Projectors is positioned arbitrarily, such that each Secondary View Projector corresponds to a Secondary View Camera on the camera rig and is capable of receiving images from the corresponding Secondary View Camera, and all the projectors of the plurality of projectors are configured to project the images obtained from the corresponding cameras onto the inner surface of the hollow half-sphere.

11. The system of claim 10 wherein the spatial position is defined by Cartesian coordinates.

12. The system of claim 10 wherein the direction is defined by pitch and yaw.

13. The system of claim 10 wherein the rotation along the longitudinal axis is defined by roll.

Description:
METHOD AND SYSTEM FOR OPTIMUM POSITIONING OF CAMERAS FOR ACCURATE RENDERING OF A VIRTUAL SCENE

CROSS REFERENCE TO RELATED APPLICATIONS

[0001] This application claims priority to U.S. Nonprovisional Application Number 17/864,070, filed on July 13, 2022, and entitled METHOD AND SYSTEM FOR OPTIMUM POSITIONING OF CAMERAS FOR ACCURATE RENDERING OF A VIRTUAL SCENE, the entire disclosure of which is incorporated herein by reference.

TECHNICAL FIELD

[0002] The present disclosure relates generally to providing a virtual rendering of a scene, and more specifically to a method for positioning cameras on an object that allows for accurate rendering of a virtual scene in real time.

BACKGROUND

[0003] Virtual reality scenes are generally created by capturing videos or images using special techniques and then rendered using special algorithms on a screen to allow users to get a fully immersive experience.

[0004] A 360° video is a special kind of video media, recorded by capturing the entire scene around the camera rather than just the portion of the scene in front of the camera that is limited by the camera’s field of view (FoV). The typical 360° video setup enables an immersive viewing experience by placing the viewer in the center of the scene, allowing them to view the entire scene by looking around. Typically, such 360° videos are viewed using a video headset, such as a virtual reality (VR) headset.

[0005] Video headsets are typically equipped with sensors, which determine the head pose in 3 or 6 degrees of freedom (DoF). In a 3DoF setup, the headset determines the head pose described with three parameters corresponding to three angles of rotation around the three axes (x, y, and z); the three angles are typically called roll, pitch (or sometimes tilt), and yaw (or sometimes pan), respectively. Further, in a 3DoF setup the headset will track the user’s head pose by repeatedly updating the three angles, which are then used to adjust the video image that is displayed to the user at any given moment, so the user’s current perspective at every moment corresponds to the head pose currently registered by the headset.

[0006] In reality, a user’s head movements are not strictly limited to three angles of rotation; there are also linear movements along the three axes (x, y, and z). However, 3DoF systems are not able to capture those linear movements, so the user’s visual experience is limited to rotational movements, as if the vantage point were always fixed at a single point positioned at the origin of the x, y, and z axes. 6DoF systems are able to capture these linear movements too, so the user has the experience of full three-dimensional linear movement in addition to the rotational movement.

[0007] Typical camera rigs for capturing immersive 360° video require multiple cameras arranged, ideally, in a perfect circle or a perfect sphere with the cameras facing outward and with identical overlap between the cameras’ fields of view, so the images can be algorithmically recombined into an immersive virtual scene. Special algorithms must be applied to deform and stitch the separate camera feeds into 360° video frames, which can then be viewed by the user via an immersive video headset.

[0008] Camera rigs typically used for 360° video capture are often cumbersome, and if they are to be used for teleoperation, they often cannot be integrated on the vehicle in a manner that is practical, inconspicuous and/or robust. Furthermore, such setups provide only one vantage point for the user: the one at the center of the circle or sphere along which the cameras are arranged on the rig.

[0009] It would therefore be advantageous to provide a solution that would overcome the challenges noted above.

SUMMARY

[0010] A summary of several example embodiments of the disclosure follows. This summary is provided for the convenience of the reader to provide a basic understanding of such embodiments and does not wholly define the breadth of the disclosure. This summary is not an extensive overview of all contemplated embodiments, and is intended to neither identify key or critical elements of all embodiments nor to delineate the scope of any or all aspects. Its sole purpose is to present some concepts of one or more embodiments in a simplified form as a prelude to the more detailed description that is presented later. For convenience, the term “some embodiments” or “certain embodiments” may be used herein to refer to a single embodiment or multiple embodiments of the disclosure.

[0011] Certain embodiments disclosed herein include a method for providing a camera rig by positioning cameras on a 3D model of an object having a surface. Each camera is designated either as a Direct View Camera (DVC) or a Secondary View Camera (SVC). The method then comprises specifying, for each camera, a direction, a rotation along a longitudinal axis and a field of view.

[0012] Certain embodiments disclosed herein also include a method for rendering a virtual scene. The method includes providing the camera rig as described herein. The method then includes providing a projection rig that includes a plurality of projectors within a hollow half-sphere, wherein the hollow half-sphere includes an inner surface and an outer surface. Each of the plurality of projectors is designated as a Direct View Projector (DVP) or a Secondary View Projector (SVP). Each DVP corresponds to a DVC such that each DVP has the identical spatial position, direction, rotation along a longitudinal axis and field of view of the corresponding DVC. Each DVP is also capable of receiving images from the corresponding DVC. The method further involves arbitrary (free) positioning of the SVPs, wherein each SVP corresponds to one SVC on the camera rig and is capable of receiving images from the corresponding SVC. The method then involves projecting, by each projector, the images obtained from the corresponding camera onto the inner surface of the hollow half-sphere.

[0013] In another aspect, the invention provides a system for virtual rendering of space around an object to a user. The system includes a camera rig on an object and a projection rig as described herein. The system also includes a projection dome onto which the DVPs and SVPs project the images received from the corresponding cameras. Thus, the space around an object is rendered to a user in a way that DVPs provide a view of the space equivalent to the experience of looking through a window, whereas SVPs provide a view of the space equivalent to the experience of looking at secondary viewing devices in a vehicle (e.g., rear-view mirrors or in-cockpit displays showing a feed from a rear-view or side-view camera). The arbitrary positioning of the SVPs within the hollow half-sphere determines the positioning of the views from these virtual secondary viewing devices within the user’s virtual environment, and it can be adapted to a given application.

BRIEF DESCRIPTION OF THE DRAWINGS

[0014] The subject matter disclosed herein is particularly pointed out and distinctly claimed in the claims at the conclusion of the specification. The foregoing and other objects, features, and advantages of the disclosed embodiments will be apparent from the following detailed description taken in conjunction with the accompanying drawings.

[0015] FIG. 1 is a flowchart representation of the steps involved in the method for positioning cameras on a camera rig, according to an embodiment.

[0016] FIG. 2 is a block diagrammatic representation of the components of the system for rendering a virtual scene to a user, according to an embodiment.

[0017] FIG. 3 is a schematic representation of the camera rig on a vehicle and a projection rig in the Dome, according to an embodiment.

[0018] FIG. 4A is an exemplary scene to be rendered virtually, wherein the object is a vehicle in which the user is present, according to an embodiment.

[0019] FIG. 4B is an exemplary 3D rendering of the scene shown in FIG. 4A from the perspective of the user sitting in a vehicle, according to an embodiment.

DETAILED DESCRIPTION

[0020] It is important to note that the embodiments disclosed herein are only examples of the many advantageous uses of the innovative teachings herein. In general, statements made in the specification of the present application do not necessarily limit any of the various claimed embodiments. Moreover, some statements may apply to some inventive features but not to others. In general, unless otherwise indicated, singular elements may be in plural and vice versa with no loss of generality. In the drawings, like numerals refer to like parts throughout the several views.

[0021] The definitions provided herein are to facilitate understanding of certain terms used frequently herein and are not meant to limit the scope of the present disclosure.

[0022] As used in this specification and the appended claims, the singular forms "a", "an", and "the" encompass embodiments having plural referents, unless the content clearly dictates otherwise.

[0023] Unless otherwise indicated, all numbers expressing feature sizes, amounts, and physical properties used in the specification and claims are to be understood as being modified in all instances by the term "about." Accordingly, unless indicated to the contrary, the numerical parameters set forth in the foregoing specification and attached claims are approximations that can vary depending upon the desired properties sought to be obtained by those skilled in the art utilizing the teachings disclosed herein.

[0024] As used in this specification and the appended claims, the term "or" is generally employed in its sense including "and/or" unless the content clearly dictates otherwise.

[0025] As used herein, “Camera Rig” is used to mean a plurality of cameras arranged in a particular configuration and mounted on an object.

[0026] As noted herein, a method for providing a camera rig is provided. Fig. 1 is a flowchart 10 of a method of positioning at least two cameras, according to an embodiment. At S12, a 3D model of the object is provided, wherein the 3D model includes a surface. Then, at S14, each camera is designated as a Direct View Camera (DVC) or a Secondary View Camera (SVC). Next, at S16, the DVC- and SVC-designated cameras are placed on the surface of the 3D model. Finally, at S18, a direction, a rotation along a longitudinal axis and a field of view are specified for each camera.

[0027] The locations for placing the DVCs are chosen in a way that ensures that the field of view of each individual camera sufficiently overlaps the fields of view of its neighboring cameras, in order to create the degree of visibility that will satisfy a particular application or use case while minimizing the total number of cameras used. The locations for placing cameras on the surface of the 3D model may be aligned to fit a certain geometric pattern, or may be arbitrary as long as the criterion of overlapping fields of view is met.
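The Abstract frames this placement choice as minimizing an objective function over candidate locations, with an importance weight attached to each region to be covered. The following is a minimal sketch of that idea in Python, assuming a simplified conical field-of-view model and sample points on a shell around the object; the function and helper names are illustrative, not the patent's actual formulation.

```python
import numpy as np

def look_dir(pitch: float, yaw: float) -> np.ndarray:
    """Unit forward vector for a camera pointed at (pitch, yaw), in radians."""
    cp = np.cos(pitch)
    return np.array([cp * np.cos(yaw), cp * np.sin(yaw), np.sin(pitch)])

def sees(cam_pos: np.ndarray, cam_dir: np.ndarray, half_fov: float,
         point: np.ndarray) -> bool:
    """True if `point` lies inside the camera's (conical) field of view."""
    to_pt = point - cam_pos
    to_pt = to_pt / np.linalg.norm(to_pt)
    return float(np.dot(cam_dir, to_pt)) >= np.cos(half_fov)

def coverage_objective(cameras, samples, weights) -> float:
    """Weighted penalty for sample points seen by fewer than two cameras.

    `cameras` is a list of (position, direction, half_fov) tuples;
    `samples` are points on a shell around the object; `weights` are the
    per-location importance weights the Abstract mentions.  Requiring at
    least two viewers per sample encodes the overlapping-FoV constraint.
    """
    penalty = 0.0
    for pt, w in zip(samples, weights):
        viewers = sum(sees(pos, direction, hf, pt)
                      for pos, direction, hf in cameras)
        if viewers < 2:
            penalty += w
    return penalty
```

A placement search would then evaluate this objective for each candidate set of locations and keep the set that minimizes it, repeating with different sets of locations as the Abstract describes.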

[0028] The locations for placing SVCs are chosen in a way that ensures that the field of view of each individual camera covers a certain ‘blind spot’ that is not covered by the collective field of view of the DVCs but is nevertheless relevant for a particular application or use case.

[0029] Placement of each individual camera in a given arrangement of a camera rig is generally described by three Cartesian coordinates (x, y, z) determining the position of the camera, and three angles (pitch, yaw, roll): pitch and yaw determine the direction the camera is pointed, and roll determines the angle of its rotation around its longitudinal axis.
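In code, the six placement parameters from this paragraph, together with the field of view and the DVC/SVC designation used throughout, map naturally onto a small record type. A sketch, with illustrative field names:

```python
from dataclasses import dataclass

@dataclass
class CameraPlacement:
    # Cartesian position of the camera on the 3D model's surface
    x: float
    y: float
    z: float
    # pitch and yaw give the direction the camera points;
    # roll is its rotation about the longitudinal (viewing) axis
    pitch_deg: float
    yaw_deg: float
    roll_deg: float
    # horizontal field of view
    fov_deg: float
    # "DVC" (Direct View Camera) or "SVC" (Secondary View Camera)
    kind: str = "DVC"
```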

Camera positioning in a Camera Rig

[0030] One of the advantages of the method as disclosed is that the camera arrangement does not necessarily need to be ‘regular’ (like, e.g., a perfect circle, or perfect sphere), and can be adapted to particular circumstances of each implementation (e.g., the camera arrangement can be specifically adapted to a specific geometry, or a specific operational context of each different type of vehicle).

[0031] The methods as disclosed are particularly useful for rendering, in real time, a virtual reality scene of events captured by the cameras on the camera rig. The methods are particularly useful for rendering scenes occurring outside a vehicle to the driver of the vehicle.

[0032] Thus, as already noted, the embodiments provide a method for rendering a virtual scene. Fig. 2 is a flowchart of a method for rendering a virtual scene to a user, shown generally by numeral 20. At S22, a camera rig is provided on an object based on a selection of locations on a 3D model of the object. Next, at S24, a DVP projection rig is provided that includes a plurality of DVPs within a hollow half-sphere. Here, each DVP corresponds to a DVC on the camera rig such that each DVP has the identical spatial position, direction, rotation along the longitudinal axis, and field of view of the corresponding DVC. Thereafter, at S26, an SVP projection rig that includes a plurality of SVPs within the hollow half-sphere is provided. Here, the spatial position, direction, rotation along the longitudinal axis, and field of view of each SVP are freely adjusted so that its projection is positioned on the inner surface of the hollow half-sphere in a way that satisfies a particular application or use case (e.g., the projection may be positioned to provide the experience and function of a rear-view mirror or an in-cockpit display showing a feed from a rear-view or side-view camera). Spatial position is typically defined by Cartesian coordinates, direction by pitch and yaw, and rotation along the longitudinal axis by roll. It is noted that both DVPs and SVPs are configured to receive the camera feed from the camera rig and project it onto the inner surface of the hollow half-sphere.
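Since each DVP must replicate its camera's full pose and field of view while each SVP is freely placed, deriving the projection rig from the camera rig is largely mechanical. A sketch, assuming each rig entry is a plain dictionary and using arbitrary example values for the free SVP pose:

```python
def build_projection_rig(camera_rig):
    """Derive projector placements from camera placements (sketch).

    Each entry in `camera_rig` is a dict with keys x, y, z, pitch, yaw,
    roll, fov and kind ("DVC" or "SVC").
    """
    projectors = []
    for cam in camera_rig:
        proj = dict(cam)
        if cam["kind"] == "DVC":
            # DVP: identical spatial position, direction, roll and FoV
            proj["kind"] = "DVP"
        else:
            # SVP: pose and FoV are free; re-point it so its projection
            # lands where, e.g., a virtual rear-view mirror should appear
            proj["kind"] = "SVP"
            proj.update(pitch=0.0, yaw=180.0, roll=0.0)  # illustrative values
        projectors.append(proj)
    return projectors
```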

[0033] Next, at S28, images are obtained by the respective projection rigs from the corresponding cameras. Here, the camera feed typically includes video streams that are generally accompanied by audio, but it may be only a video feed, and sometimes it may be a series of images. The hollow half-sphere, also sometimes referred to as a dome, serves as a virtual projection screen for the projection rigs. Further, the field of view of each projector on the DVP projection rig is exactly the same as that of the camera at the corresponding location on the camera rig, while the field of view of each projector on the SVP projection rig can be adjusted to satisfy a particular application or use case.

[0034] The video streams obtained from the cameras are used to render a virtual scene, which can be displayed to the user via a VR headset as a realistic and spatially accurate representation of the portion of the real world being captured by the camera rig, and through which the user can virtually move. One skilled in the art will understand that any virtual projector can be implemented using an existing class of virtual objects typically available in any 3D engine (e.g., the Projector class of objects in the Unity 3D Engine, https://docs.unity3d.com/560/Documentation/Manual/class-Projector.html, or the Projector Interactable Blueprint class of objects in the Unreal 3D Engine, https://www.unrealengine.com/marketplace/en-US/product/projector).

[0035] The real-world environment within which the camera rig is placed can be reconstructed in the virtual world if each virtual projector on the projection rig receives a live video feed from its corresponding camera on the camera rig and projects this video feed onto the inner surface of the dome. If a user is equipped with a VR headset, and the user’s virtual vantage point is placed within the dome, the user will experience a highly accurate and immersive visual representation of the real scene.

[0036] Thus, in yet another aspect, the present disclosure provides a system for virtual rendering of a space around an object to a user using the methods described herein. Fig. 3 is a block diagram of the components of a system 32 of the embodiments. The system 32 includes a camera rig 34 provided on an object, and a projection rig 36 provided within a hollow half-sphere or dome 37. The positions of the cameras on the camera rig 34 are chosen as described herein, and the projection rig 36 is configured such that every DVP is situated at the corresponding virtual rendering of the location of its camera, with the same field of view, and every SVP is situated freely, so that its field of view and the arrangement of its projection satisfy a particular application or use case.

[0037] Fig. 4A is a top-down view of the camera rig in field operation, according to an embodiment. Here, a camera rig including three DVC-designated cameras, namely DVC(1) 38a, DVC(2) 38b, and DVC(3) 38c, and two SVC-designated cameras, namely SVC(1) 38d and SVC(2) 38e, with different fields of view, is shown on a vehicle 40. Fig. 4B is a view of the projection rig in field operation, according to an embodiment. Here, the projection rig including projectors 43a through 43e on the 3D model of the vehicle 41 projects the projections 42a, 42d, and 42e onto a dome 37, where the projectors DVP(1) 43a, DVP(2) 43b, and DVP(3) 43c each project a live image from cameras DVC(1) 38a, DVC(2) 38b, and DVC(3) 38c, respectively, and their respective projections are merged into the projection 42a, while the projector SVP(1) 43d projects the live image from SVC(1) 38d, creating the projection 42d, and the projector SVP(2) 43e projects the live image from SVC(2) 38e, creating the projection 42e.

Applications in vehicle operation

[0038] The embodiments are described with respect to vehicle operation, but other use-case scenarios are also within the scope of the embodiments. The described method can be applied to situations where the operator is on board the vehicle, or to remote vehicle operation. In both cases, the proposed method provides the operator with superior situational awareness.

[0039] In a remote vehicle operation use case scenario, applying the proposed method can provide the operator with an immersive visual experience and the sense of presence in the remote real-world environment in which a remote vehicle equipped with a camera rig is operating.

[0040] In a local (direct) vehicle operation, where the operator is on board the vehicle, applying the proposed method can significantly improve the operator’s situational awareness, especially in vehicles where visibility from the operator’s position is reduced (e.g., in armored vehicles).

[0041] Each of the two applications requires a slightly different variant of implementation of the method in order to improve the user experience and functionality.

Applying the Method to remote vehicle operation

[0042] In this use-case scenario, the described method is used to implement a vision system for the operator of a remote vehicle to which a camera rig is attached. In such a setup, the vehicle equipped with a camera rig is moving, the projection rig is stationary in relation to the dome, and the operator is stationary on a fixed platform, independent of the vehicle.

[0043] The rotational movement of the vehicle (and thus the camera rig) in relation to the natural horizon (namely: pitching and rolling of the vehicle and camera rig) will cause the projection of the horizon on the dome (the “Virtual Horizon”) to rotate (pitch and roll) accordingly.

[0044] Given that the operator is physically positioned on a firm platform that is fixed in relation to the operator’s real horizon, this rotation of the virtual horizon may cause discomfort (sometimes also referred to as kinetosis, better known as motion sickness or travel sickness), as the operator’s visual sense is implying movement that her inner ear does not sense (as the operator is standing on a fixed, motionless platform). The intensity of discomfort will depend on individual physiological predispositions of each operator but can range from mild disorientation to nausea.

[0045] To reduce discomfort, a degree of projection stabilization can be implemented in order to reduce the intensity of the movement of the virtual horizon, or to fix it entirely in relation to the operator’s real (actual) horizon. This can be done by equipping the vehicle with an Inertial Measurement Unit (the “IMU”), and/or by using the live video feed from the DVCs and SVCs to perform Visual Inertial Odometry (“VIO”) in order to track the movement of the vehicle, and then using the data about the pitching and rolling of the vehicle to rotate the projection rig within the dome in the opposite direction. By moving the projection rig in the opposite direction relative to the remote vehicle (and thus the camera rig), the movement of the virtual horizon is compensated, and the virtual horizon remains fixed in relation to the dome (and thus to the remote operator’s actual horizon).
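As a sketch of this compensation, using SciPy's rotation utilities and assuming the IMU (or VIO pipeline) reports the vehicle's roll and pitch in degrees: the inverse of the vehicle's roll/pitch rotation is applied to the projection rig, which holds the virtual horizon fixed relative to the dome.

```python
from scipy.spatial.transform import Rotation

def rig_counter_rotation(vehicle_roll_deg: float,
                         vehicle_pitch_deg: float) -> Rotation:
    """Counter-rotation for the projection rig (sketch).

    Only roll (about x) and pitch (about y) are compensated; yaw is
    deliberately left alone, so turns of the vehicle remain visible to
    the operator.
    """
    vehicle = Rotation.from_euler(
        "xy", [vehicle_roll_deg, vehicle_pitch_deg], degrees=True)
    return vehicle.inv()  # apply this to the projection rig each frame
```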

[0046] This fixing of the virtual horizon will, however, eliminate the information about actual pitching and rolling of the vehicle, which may be relevant and useful in performing the remote operation function. This information can be conveyed to the operator by rendering a virtual model of the vehicle within the Dome and using the IMU and/or VIO data to replicate its pitch and roll movements in real time.

Applying the Method to on-board vehicle operation

[0047] In this use-case scenario, the described method is used to implement a vision system for an operator on board the vehicle. This may be beneficial in situations where the visibility from the operator’s seat does not provide an adequate degree of situational awareness due to blind spots and/or operational limitations. A good example is armored vehicles, in which the visibility from the operator’s position is typically severely limited. In such a setup, the operator is positioned within the vehicle, moves with the vehicle (and so does the camera rig mounted on it), and is thus exposed to the same inertial forces the vehicle is exposed to.

[0048] Here too the pitching and rolling of the vehicle (and the camera rig) in relation to the natural horizon will cause a proportional pitching and rolling of the virtual horizon in relation to the dome, but because the operator actually moves together with the vehicle and the camera rig, the user’s visual perception of the movement will be congruent with the inertial forces that the user’s inner ear will actually sense, so there should be no discomfort.

[0049] However, while this setup provides for congruence between human visual and inertial perception, it confuses the computer hardware. VR headsets typically rely on a combination of visual sensors (cameras) and inertial sensors in order to track the head pose in 6DoF, which causes a problem in this particular setup. In a typical setup, both the visual and inertial sensors on the VR headset take measurements from effectively the same frame of reference, as the environment from which the visual sensors take cues (e.g., a room) is typically fixed in relation to the inertial frame of reference (Earth’s gravity), so effectively they form one single frame of reference.

[0050] In this particular setup, however, there are two inertial frames of reference: Earth’s gravity as the global frame of reference, onto which the inertial frame of reference of the moving vehicle is superimposed. So, when the visual sensors take visual cues from the headset’s movement, they register only the component of the movement relative to the inside of the vehicle, while the component of the movement in relation to the global frame of reference (Earth) is not registered by the visual sensors. The inertial sensors, on the other hand, sense the movement in relation to the global frame of reference.

[0051] Therefore, the methods typically used to determine the head pose in 6DoF by combining the measurements from the inertial and visual sensors would be ‘confused’: the inertial sensor would pick up movements both of the headset in relation to the vehicle and of the vehicle in relation to the global frame of reference, while the visual sensors would pick up only the former component, resulting in an erratic movement of the operator’s viewpoint within the Dome. This would cause the user’s overall visual experience to imply a movement that is inconsistent with the movement the user senses through inertia.

[0052] Given that the dome is effectively fixed to the vehicle’s frame of reference, by virtue of the camera rig being fixed to the vehicle and the projection rig being fixed to the dome, movements of the operator’s viewpoint in the Dome should be driven exclusively by the movement of the headset relative to the vehicle, while the movement of the vehicle relative to the global frame of reference should be ignored. This can be done either by relying exclusively on visual sensors for 6DoF head tracking, or by subtracting the movement of the vehicle from the movement of the headset by combining the measurement data from the inertial sensors in the headset with that from an inertial sensor affixed to the vehicle.
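A minimal sketch of the subtraction approach for the rotational part of the pose, assuming both the headset and the vehicle-mounted IMU report their orientation in the same global (Earth) frame:

```python
from scipy.spatial.transform import Rotation

def head_pose_relative_to_vehicle(headset_world: Rotation,
                                  vehicle_world: Rotation) -> Rotation:
    """Headset orientation expressed in the vehicle's frame (sketch).

    Composing with the inverse vehicle rotation removes the vehicle's
    share of the motion, leaving only the headset-relative-to-vehicle
    component that should drive the operator's viewpoint in the dome.
    (The translational part of a 6DoF pose would be handled analogously,
    by subtracting the vehicle's position and un-rotating the offset.)
    """
    return vehicle_world.inv() * headset_world
```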

Advantages of the method

[0053] The method as disclosed does not require cameras to be arranged in a circular arrangement, thus enabling the camera arrangement to be adjusted to the vehicle geometry, optimizing for function, convenience, inconspicuousness, robustness, or a combination thereof. The virtual scene is rendered with the aim of creating a realistic and spatially accurate virtual representation of the real environment being captured by the camera rig.

[0054] Further, to facilitate both the teleoperation and direct operation use-case scenarios, the method allows the geometric extent of the vehicle to be rendered within the virtual scene recreated for the user, in a spatially accurate way, so that the user gets a realistic perception of the vehicle within its environment; this enables precise navigation in complex environments replete with obstacles. It also gives the user a more accurate representation of the spatial relationship of the vehicle to the real-world environment.

[0055] The method also enables placing the user at a specific place within the virtual representation of the vehicle, which keeps the user’s spatial perception within the virtual environment congruent with what the user’s spatial perception would be in the real world. This is useful in cases where the user is on board the vehicle and it is necessary to preserve the user’s capability to operate the vehicle without the system of the invention. If the system gave the user a virtual vantage point different from her actual vantage point, switching from one to the other would cause a temporary disorientation, which would reduce the operator’s effectiveness in case the operator needs to switch between the two modes of operation.

[0056] In the case of a teleoperation application, this method helps the user’s experience more accurately mimic the actual experience the user would have if the user were physically present in the remote location that is being virtually reconstructed.

[0057] While only certain features of the embodiments have been illustrated and described herein, many modifications and changes will occur to those skilled in the art. It is, therefore, to be understood that the appended claims are intended to cover all such modifications and changes as fall within the true spirit of the embodiments.