

Title:
HUMAN BODY IMITATION AND VIRTUAL BODY TRANSFER USING HOLOGRAM-GUIDED ROBOTICS TECHNIQUES
Document Type and Number:
WIPO Patent Application WO/2024/038476
Kind Code:
A1
Abstract:
There is provided a system and a method for human body imitation and virtual body transfer communication using hologram-guided robotics, the system may comprise a first end having one or more people, a first plurality of hologram sources may be configured to simulate a second end's surroundings, a display unit may be configured to project the second end's surroundings; a first set of image acquisition units; and a first voice recognition unit; a second end having a robot body, a second plurality of hologram sources may be configured to simulate the first end's surroundings, a second set of image acquisition units, and a second voice recognition unit; a controlling and processing unit may be configured to control the robot body, and generate spatial mapping using data acquired by the first and second image acquisition units, wherein the controlling and processing unit may be located in the robot body; and a network.

Inventors:
MAGABLEH AMER (JO)
Application Number:
PCT/JO2022/050013
Publication Date:
February 22, 2024
Filing Date:
August 17, 2022
Assignee:
JORDAN UNIV OF SCIENCE AND TECHNOLOGY (JO)
International Classes:
G03H1/00; G03H1/22
Foreign References:
US20200142354A12020-05-07
US20170308904A12017-10-26
US20200218767A12020-07-09
US20200368616A12020-11-26
Other References:
BACHMANN DANIEL, WEICHERT FRANK, RINKENAUER GERHARD: "Review of Three-Dimensional Human-Computer Interaction with Focus on the Leap Motion Controller", SENSORS, MDPI, CH, vol. 18, no. 7, page 2194, XP093142574, ISSN: 1424-8220, DOI: 10.3390/s18072194
MEL SLATER, SPANLANG BERNHARD, SANCHEZ VIVES MARIA, BLANKE OLAF, WILLIAMS MARK: "First Person Experience of Body Transfer in Virtual Reality", PLOS ONE, PUBLIC LIBRARY OF SCIENCE, vol. 5, no. 5, 12 May 2010 (2010-05-12), pages e10564, XP055236839, DOI: 10.1371/journal.pone.0010564
Attorney, Agent or Firm:
THE INTELLECTUAL PROPERTY COMMERCIALIZATION OFFICE/ ROYAL SCIENTIFIC SOCIETY (JO)
Claims:
CLAIMS

What is claimed is:

1. A system for human body imitation and virtual body transfer communication using hologram-guided robotics, the system comprises:

A first end having one or more people, a first plurality of hologram sources configured to simulate a second end’s surroundings, a display unit configured to project the second end’s surroundings, a first set of image acquisition units, and a first voice recognition unit;

A second end having a robot body, a second plurality of hologram sources configured to simulate the first end’s surroundings, a second set of image acquisition units, and a second voice recognition unit;

A controlling and processing unit configured to control the robot body, and generate spatial mapping using data acquired by the first and second image acquisition units, wherein the controlling and processing unit is located in the robot body; and

A network.

2. The system of claim 1, wherein the first end and the second end are transceiving ends.

3. The system of claim 1, wherein the first and second voice recognition units comprise speakers and microphones.

4. The system of claim 1, wherein the robot body is a humanoid robot body.

5. The system of claim 1, wherein the robot body is configured to have pipes with flexible exterior, expandable body parts, a plurality of actuators, at least one built-in camera, and a group of sensors mounted thereon.

6. The system of claims 1 or 5, wherein the pipes are distributed along the robot body.

7. The system of claim 1, wherein the group of sensors comprises gyroscopes, accelerometers, and proximity sensors configured to detect the robot body’s interactions and movements.

8. The system of claim 1, wherein the robot body is configured to mimic the one or more people.

9. The system of claims 1 or 5, wherein the robot body is configured to have a formation material transferred through the pipes.

10. The system of claim 9, wherein the formation material is a flesh-like formation material.

11. The system of claim 10, wherein the flesh-like formation material is a foaming material.

12. The system of claim 11, wherein the foaming material is carbon nanotubes.

13. The system of claim 12, wherein the carbon nanotubes are straight carbon nanotubes, waved carbon nanotubes, bent carbon nanotubes, or a combination thereof.

14. The system of claims 1 or 9, wherein the formation material is configured to provide hard biometrics for the robot body.

15. The system of claim 1, wherein the second plurality of hologram sources is configured to provide soft biometrics to the robot body.

16. A method for operating the system of claim 1, the method comprises an initialization stage, a spatial mapping stage, a shaping and mimicking stage, a tracking stage, and an operation stage, wherein the initialization stage comprises the steps of:

Entering dimensions of human body characteristics and first metrics for the one or more people using hard and soft biometric methods to be processed by the controlling and processing unit;

Shaping, by the formation material, the robot body;

Gathering information by the second set of image acquisition units and taking measurements of the second end;

Categorizing the information, by the controlling and processing unit, into static objects and dynamic objects;

Scanning, by the second set of image acquisition units and the at least one built-in camera, the second end’s surroundings to generate a virtual layout of a scene to be mapped into the first end;

Providing, by the first set of image acquisition units, a 3D image for the one or more people; and

Controlling the robot body according to the human body characteristics and dimensions collected for the one or more people.

17. The method of claim 16, wherein the spatial mapping stage comprises the steps of:

Identifying the static and dynamic objects in the first end;

Assigning hologram numbers to identified static and dynamic objects, wherein hologram projections are assigned to the robot body and nearby objects;

Constructing, by reflecting the data collected by the first set of image acquisition units, holographic representations of the identified static and dynamic objects;

Refreshing data collected continuously; and

Using 3D holographic projection to provide a highly accurate measurement of the first end.

18. The method of claim 17, wherein the spatial mapping stage further comprises a recursive measurement for the second end.

19. The method of claim 18, wherein the recursive measurement is achieved by 3D holographic projection.

20. The method of claim 16, wherein the shaping and mimicking stage comprises the steps of:

Expanding the robot body based on the dimensions and first metrics entered in the initialization stage;

Allocating the group of sensors on the robot body for better interaction, wherein the placement of the group of sensors depends on an event setup, the level of movement needed, and function;

Analyzing the robot body shape by validating the dimensions entered;

Obtaining second metrics for the robot body;

Comparing the second metrics of the robot body with the first metrics;

Recalculating the second metrics;

Comparing the second metrics of the robot body with the first metrics of the one or more people obtained in the initialization stage; and

Formulating the robot body shape based on the recalculated second metrics until achieving a certain accuracy.

21. The method of claim 16, wherein the tracking stage comprises the steps of:

Pre-processing, by the controlling and processing unit, the hologram images to reduce the noise in the hologram; and

Determining an object center localization.

22. The method of claim 16, wherein the operation stage comprises the steps of:

Receiving instructions from the first end;

Sending, by the controlling and processing unit, instructions to the second end to imitate the one or more people by the robot body; and

Imitating, by the robot body, the one or more people.

23. The method of claim 16, wherein the categorization of the static and dynamic objects is by using a Smooth Flow Vector Estimation, Motion Flow Identification, or a combination thereof.

24. The method of claim 16, wherein the hard and soft biometric methods used to obtain the human body characteristics are Heat Kernel Signature method, Wave Kernel Signature, or a combination thereof.

25. The method of claim 16, wherein the body shape analysis is obtained by using spectral geometry method, non-rigid shape analysis method, or a combination thereof.

26. The method of claim 16, wherein further-in-distance objects are projected using the display unit.

27. The method of claim 20, wherein recalculating and comparing the second metrics continue until the required level of accuracy is achieved.

AMENDED CLAIMS received by the International Bureau on 15 December 2022 (15.12.2022)

[Claim 1] A system for human body imitation and virtual body transfer communication using hologram-guided robotics, the system comprises: A first end having one or more people, a first plurality of hologram sources configured to simulate a second end’s surroundings, a display unit configured to project the second end’s surroundings, a first set of image acquisition units, and a first voice recognition unit;

A second end having a robot body, a second plurality of hologram sources configured to simulate the first end’s surroundings, a second set of image acquisition units, and a second voice recognition unit;

A controlling and processing unit configured to control the robot body, and generate spatial mapping using data acquired by the first and second image acquisition units, wherein the controlling and processing unit is located in the robot body; and

A network.

[Claim 2] The system of claim 1, wherein the first end and the second end are transceiving ends.

[Claim 3] The system of claim 1, wherein the first and second voice recognition units comprise speakers and microphones.

[Claim 4] The system of claim 1, wherein the robot body is a humanoid robot body.

[Claim 5] The system of claim 1, wherein the robot body is configured to have pipes with flexible exterior, expandable body parts, a plurality of actuators, at least one built-in camera, and a group of sensors mounted thereon.

[Claim 6] The system of claims 1 or 5, wherein the pipes are distributed along the robot body.

[Claim 7] The system of claim 5, wherein the group of sensors comprises gyroscopes, accelerometers, and proximity sensors configured to detect robot body’s interactions and movements.

[Claim 8] The system of claim 1, wherein the robot body is configured to mimic the one or more people.

[Claim 9] The system of claims 1 or 5, wherein the robot body is configured to have a formation material transferred through the pipes.

[Claim 10] The system of claim 9, wherein the formation material is a flesh-like formation material.

[Claim 11] The system of claim 10, wherein the flesh-like formation material is a foaming material.

[Claim 12] The system of claim 11, wherein the foaming material is carbon nanotubes.

[Claim 13] The system of claim 12, wherein the carbon nanotubes are straight carbon nanotubes, waved carbon nanotubes, bent carbon nanotubes or a combination thereof.

[Claim 14] The system of claim 9, wherein the formation material is configured to provide hard biometrics for the robot body.

[Claim 15] The system of claim 1, wherein the second plurality of hologram sources is configured to provide soft biometrics to the robot body.

[Claim 16] A method for operating the system of claim 1, the method comprises an initialization stage, a spatial mapping stage, a shaping and mimicking stage, a tracking stage, and an operation stage, wherein the initialization stage comprises the steps of:

Entering dimensions of human body characteristics and first metrics for the one or more people using hard and soft biometric methods to be processed by the controlling and processing unit;

Shaping, by the formation material, the robot body;

Gathering information by the second set of image acquisition units and taking measurements of the second end;

Categorizing the information, by the controlling and processing unit, into static objects and dynamic objects;

Scanning, by the second set of image acquisition units and the at least one built-in camera, the second end’s surroundings to generate a virtual layout of a scene to be mapped into the first end;

Providing, by the first set of image acquisition units, a 3D image for the one or more people; and

Controlling the robot body according to the human body characteristics and dimensions collected for the one or more people.

[Claim 17] The method of claim 16, wherein the spatial mapping stage comprises the steps of:

Identifying the static and dynamic objects in the first end;

Assigning hologram numbers to identified static and dynamic objects, wherein hologram projections are assigned to the robot body and nearby objects;

Constructing, by reflecting the data collected by the first set of image acquisition units, holographic representations of the identified static and dynamic objects;

Refreshing data collected continuously; and

Using 3D holographic projection to provide a highly accurate measurement of the first end.

[Claim 18] The method of claim 17, wherein the spatial mapping stage further comprises a recursive measurement for the second end.

[Claim 19] The method of claim 18, wherein the recursive measurement is achieved by 3D Holographic Projection.

[Claim 20] The method of claim 16, wherein the shaping and mimicking stage comprises the steps of:

Expanding the robot body based on the dimensions and first metrics entered in the initialization stage;

Allocating the group of sensors on the robot body for better interaction, wherein the placement of the group of sensors depends on an event setup, the level of movement needed, and function;

Analyzing the robot body shape by validating the dimensions entered;

Obtaining second metrics for the robot body;

Comparing the second metrics of the robot body with the first metrics;

Recalculating the second metrics;

Comparing the second metrics of the robot body with the first metrics of the one or more people obtained in the initialization stage; and

Formulating the robot body shape based on the recalculated second metrics until achieving a certain accuracy.

[Claim 21] The method of claim 16, wherein the tracking stage comprises the steps of:

Pre-processing, by the controlling and processing unit, the hologram images to reduce the noise in the hologram; and

Determining an object center localization.

[Claim 22] The method of claim 16, wherein the operation stage comprises the steps of:

Receiving instructions from the first end;

Sending, by the controlling and processing unit, instructions to the second end to imitate the one or more people by the robot body; and

Imitating, by the robot body, the one or more people.

[Claim 23] The method of claim 16, wherein the categorization of the static and dynamic objects is by using a Smooth Flow Vector Estimation, Motion Flow Identification or a combination thereof.

[Claim 24] The method of claim 16, wherein the hard and soft biometric methods used to obtain the human body characteristics are Heat Kernel Signature method, Wave Kernel Signature, or a combination thereof.

[Claim 25] The method of claim 16, wherein the body shape analysis is obtained by using spectral geometry method, non-rigid shape analysis method, or a combination thereof.

[Claim 26] The method of claim 16, wherein further-in-distance objects are projected using the display unit.

[Claim 27] The method of claim 20, wherein recalculating and comparing the second metrics continue until the required level of accuracy is achieved.


Description:
HUMAN BODY IMITATION AND VIRTUAL BODY TRANSFER USING HOLOGRAM-

GUIDED ROBOTICS TECHNIQUES

TECHNICAL FIELD

[01] The present disclosure relates to communication systems and Artificial Intelligence applications; in particular, to a system utilizing robotics and holographic telepresence in communication.

BACKGROUND

[02] Huge developments are witnessed every day across a variety of technology fields, the most prominent of which are communication and robotics.

[03] The collaboration between different technologies to leverage certain aspects that allow further advancements and more realistic experiences in communications is disclosed in the prior art, such as the collaboration between virtual reality, robots, and holograms. For instance, the Korean patent number KR102146375 discloses a robot simulator platform system based on mixed reality, which enables realistic robot experiential learning through a robot simulator. The robot simulator platform system is built in a computer of a mixed reality device to provide robot simulator experiential learning, and comprises: a robot learning unit allowing a user to interactively learn knowledge for robot assembly by outputting a robot learning hologram based on mixed reality; a robot assembly simulator allowing a user to manufacture a robot by assembling components of a robot by outputting a robot assembly hologram based on the mixed reality; a database registering robot learning information and robot component information based on the mixed reality; and a robot simulator platform including a robot simulator physical engine for calculating robot components and kinetic energy of a robot based on the robot component information registered in the database and providing the same to the robot learning unit and the robot assembly simulator so as to implement a motion similar to the kinetic energy of an actual robot.

[04] The US patent number US9661272 discloses an apparatus, system and method of projecting holographic images for a video conference, wherein a user may use the apparatus for telecommunication transmissions. The apparatus also includes biometric verification means for verifying the identity of the user using his/her biometric authentication credentials.
Upon verification of the user's biometric sample, a keypad door is opened to reveal a concealed keypad and a camera door is also opened to reveal and release a telescopic camera in the retracted position, enabled for capturing an image (video including picture and audio) of teleconferencing attendees that are within the apparatus' field of view, where the captured image may be transmitted to a receiving holographic video conferencing apparatus, where the captured video image is projected as a hologram, with audio output provided via the speaker/microphone system.

[05] The international application publication number WO2020244861 discloses a method for operating a videoconference system, in which a camera device in a first room captures image data and a screen device, coupled to the camera device via a communication connection of a communication device, in a second room displays an image reproduction of the first room on the basis of the image data. The invention provides for an alignment device to capture a physical position of at least one reference object in the first room and to describe said physical position by means of position data and to take the position data as a basis for orienting the at least one reference object in the image reproduction with respect to at least one stipulated object arranged in the second room by defining a physical orientation of the camera device and/or of an image processing of the image data.

[06] The US patent application publication number US20130054028 discloses a method for controlling a robot using a computing device, in which 3D images of an operator are captured in real time. Different portions of the operator are determined in one of the 3D images according to moveable joints of the robot, and each of the determined portions is correlated with one of the moveable joints. Motion data of each of the determined portions is obtained from the 3D images. A control command is sent to the robot according to the motion data of each of the determined portions, to control each moveable joint of the robot to implement a motion of a determined portion that is correlated with the moveable joint.

[07] The US patent number US7489427 discloses a hologram recording device configured to perform multiple recording of data in a same recording region by varying an angle of incidence of a reference beam on a hologram recording medium at a time of recording an interference fringe of the reference beam and a signal beam on the hologram recording medium.

[08] Although holographic images provide accurate representations of humans and other objects, they have many limitations that hinder their ability to serve in certain environments and under different conditions, such as visual quality and color saturation that depend on the light source, the inability to move objects, and possible disturbance of the electromagnetic signals.

[09] The prior art documents provide a wide range of telecommunication methods and systems that employ hologram techniques alongside other technologies, yet they all lack characteristics of realism which, if supported, can make the communication experience more engaging.

SUMMARY

[010] Therefore, it is an object of the present disclosure to provide a more realistic communication system and method that enable the imitation of the human body through unique incorporation of robotics with holographic telepresence.

[011] It is another object of the present disclosure to provide a communication system that enables hosting people from diverse and distant places to facilitate conducting events, meetings, conferences or the like within more realistic exposure to surroundings.

[012] Aspects of the present disclosure provide a system for human body imitation and virtual body transfer communication using hologram-guided robotics, the system may comprise a first end having one or more people, a first plurality of hologram sources that may be configured to simulate a second end’s surroundings, a display unit that may be configured to project the second end’s surroundings, a first set of image acquisition units, and a first voice recognition unit; a second end having a robot body, a second plurality of hologram sources may be configured to simulate the first end’s surroundings, a second set of image acquisition units, and a second voice recognition unit; a controlling and processing unit may be configured to control the robot body, and generate spatial mapping using data acquired by the first and second image acquisition units, wherein the controlling and processing unit may be located in the robot body; and a network.

[013] In aspects of the present disclosure, the first end and the second end may be transceiving ends.

[014] In some aspects of the disclosure, the first and second voice recognition units may comprise speakers and microphones.

[015] In some aspects of the disclosure, the robot body may be a humanoid robot body.

[016] In some aspects of the disclosure, the robot body may be configured to have pipes with flexible exterior, expandable body parts, a plurality of actuators, at least one built-in camera, and a group of sensors mounted thereon.

[017] In some aspects of the disclosure, the pipes may be distributed along the robot body.

[018] In some aspects of the disclosure, the group of sensors may comprise gyroscopes, accelerometers, and proximity sensors configured to detect the robot body’s interactions and movements.

[019] In some aspects of the disclosure, the robot body may be configured to mimic the one or more people.

[020] In some aspects of the disclosure, the robot body may be configured to have a formation material transferred through the pipes.

[021] In some aspects of the disclosure, the formation material may be a flesh-like formation material.

[022] In some aspects of the disclosure, the flesh-like formation material may be a foaming material.

[023] In some aspects of the disclosure, the foaming material may be carbon nanotubes.

[024] In some aspects of the disclosure, the carbon nanotubes may be straight carbon nanotubes, waved carbon nanotubes, bent carbon nanotubes, or a combination thereof.

[025] In some aspects of the disclosure, the formation material may be configured to provide hard biometrics for the robot body.

[026] In some aspects of the disclosure, the second plurality of hologram sources may be configured to provide soft biometrics to the robot body.

[027] Other aspects provide a method for operating the system of the present disclosure, wherein the method may include an initialization stage, a spatial mapping stage, a shaping and mimicking stage, a tracking stage, and an operation stage.

[028] In other aspects of the disclosure, the initialization stage may include the steps of:

Entering dimensions of human body characteristics and first metrics for the one or more people using hard and soft biometric methods;

Shaping, by the foaming material, the robot body;

Gathering information by the second set of image acquisition units and taking measurements of the second end;

Categorizing the information, by the controlling and processing unit, into static objects and dynamic objects;

Scanning, by the second set of image acquisition units and the at least one built-in camera, the second end’s surroundings to generate a virtual layout of a scene to be mapped into the first end;

Providing, by the first set of image acquisition units, a 3D image for the one or more people; and

Controlling the robot body according to the human body characteristics and dimensions collected for the one or more people.
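The initialization steps above can be sketched in code. This is a minimal illustration only: the metric field names and the plausibility ranges are assumptions, not part of the disclosure.

```python
from dataclasses import dataclass

@dataclass
class BodyMetrics:
    """First metrics entered for a person during the initialization stage.
    Field names are illustrative; the disclosure does not fix a schema."""
    height_cm: float
    shoulder_width_cm: float
    arm_length_cm: float

def validate_metrics(m: BodyMetrics) -> bool:
    """Reject dimensions outside plausible human ranges before the
    controlling and processing unit shapes the robot body."""
    return (120 <= m.height_cm <= 220
            and 30 <= m.shoulder_width_cm <= 60
            and 50 <= m.arm_length_cm <= 90)
```

In practice such a check would run before the formation material is dispatched, so the robot body is never shaped from implausible input.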

[029] In other aspects of the disclosure, the spatial mapping stage may include the steps of:

Identifying the static and dynamic objects in the first end;

Assigning hologram numbers to the static and dynamic objects, wherein hologram projections are assigned to the robot body and nearby objects;

Constructing, by reflecting the data collected by the first set of image acquisition units, holographic representations of the static and dynamic objects;

Refreshing the data collected continuously; and

Using 3D holographic projection to provide a highly accurate measurement of the first end.
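The hologram-number assignment in the spatial mapping stage might look like the following sketch; the detection tuple layout and the sequential numbering scheme are assumptions made for illustration.

```python
def build_spatial_map(detections):
    """detections: iterable of (object_id, kind, position) tuples,
    with kind in {'static', 'dynamic'} (the two categories named in
    the disclosure; the tuple layout itself is an assumption).
    Returns a scene map assigning a hologram number to every object."""
    scene = {"static": {}, "dynamic": {}}
    for hologram_no, (obj, kind, pos) in enumerate(detections, start=1):
        scene[kind][obj] = {"hologram": hologram_no, "position": pos}
    return scene
```

Refreshing the data continuously then amounts to rebuilding this map from each new batch of detections.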

[030] In other aspects of the disclosure, the shaping and mimicking stage may comprise the steps of:

Expanding the robot body based on the dimensions and measurements collected in the initialization stage;

Allocating the group of sensors on the robot body for better interaction, wherein the placement of the group of sensors depends on an event setup, the level of movement needed, and function;

Analyzing the robot body shape by validating the dimensions collected;

Obtaining second metrics for the robot body;

Comparing the second metrics of the robot body with the first metrics;

Recalculating the second metrics;

Comparing the second metrics of the robot body with the first metrics of the one or more people obtained in the initialization stage; and

Formulating the robot body shape based on the recalculated second metrics until achieving a certain accuracy.
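The compare-recalculate-formulate loop of the shaping and mimicking stage can be illustrated as follows. The proportional update rule and the tolerance value are assumptions; the disclosure only requires iterating until the required accuracy is achieved.

```python
def shape_until_accurate(first, second, tolerance=0.01, max_iters=100):
    """Iteratively reformulate the robot body's second metrics toward
    the person's first metrics until every metric is within `tolerance`
    relative error. `first` and `second` are {metric_name: value} dicts."""
    second = dict(second)
    for _ in range(max_iters):
        errors = {k: abs(second[k] - first[k]) / first[k] for k in first}
        if all(e <= tolerance for e in errors.values()):
            return second          # required accuracy achieved
        # recalculate: move each second metric halfway toward its target
        for k in first:
            second[k] += 0.5 * (first[k] - second[k])
    return second
```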

[031] In other aspects of the disclosure, the tracking stage may comprise the steps of:

Pre-processing the hologram images, by the controlling and processing unit, to reduce the noise in the hologram; and

Determining an object center localization.
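A minimal sketch of the tracking stage, assuming a simple intensity threshold for denoising and an intensity-weighted centroid for object center localization; the disclosure names neither concrete filter.

```python
def object_center(image):
    """image: 2D list of intensities (a stand-in for a hologram frame).
    Suppresses low-level noise by thresholding, then localizes the
    object's center as the intensity-weighted centroid (cx, cy)."""
    total = cx = cy = 0.0
    for y, row in enumerate(image):
        for x, v in enumerate(row):
            if v < 0.1:            # suppress low-level hologram noise
                continue
            total += v
            cx += v * x
            cy += v * y
    if total == 0:
        return None                # nothing above the noise floor
    return (cx / total, cy / total)
```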

[032] In other aspects of the disclosure, the operation stage may comprise the steps of:

Receiving instructions from the first end;

Sending, by the controlling and processing unit, instructions to the second end to imitate the one or more people by the robot body; and

Imitating, by the robot body, the one or more people.
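The receive-send-imitate flow of the operation stage might be sketched as below. The instruction queue and the robot's `imitate` method are placeholders for the actual network transport and actuator interface, which the disclosure does not specify.

```python
def relay_instructions(first_end_queue, robot):
    """Drains instructions received from the first end and forwards
    each one to the robot body at the second end. `robot` is any
    object exposing an imitate(instruction) method (a placeholder
    for the real actuator interface)."""
    executed = []
    while first_end_queue:
        instruction = first_end_queue.pop(0)   # receive from first end
        robot.imitate(instruction)             # forward to second end
        executed.append(instruction)
    return executed
```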

[033] In yet other aspects, the categorization of the static and dynamic objects is by using a Smooth Flow Vector Estimation, Motion Flow Identification or a combination thereof.
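A crude stand-in for the flow-based categorization: an object whose mean inter-frame displacement exceeds a threshold is classed as dynamic, otherwise static. The disclosure names Smooth Flow Vector Estimation and Motion Flow Identification without detailing them, so the motion measure and threshold here are assumptions.

```python
def categorize_objects(tracks, motion_threshold=0.5):
    """tracks: {object_name: [(x, y), ...]} positions across frames.
    Returns (static, dynamic) lists based on mean inter-frame motion."""
    static, dynamic = [], []
    for obj, pts in tracks.items():
        steps = [((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
                 for (x1, y1), (x2, y2) in zip(pts, pts[1:])]
        mean_flow = sum(steps) / len(steps) if steps else 0.0
        (dynamic if mean_flow > motion_threshold else static).append(obj)
    return static, dynamic
```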

[034] In yet other aspects, the hard and soft biometric methods used to obtain the human body characteristics are Heat Kernel Signature method, Wave Kernel Signature, or a combination thereof.
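The Heat Kernel Signature mentioned here has a standard definition, HKS(x, t) = Σᵢ exp(−λᵢ t) φᵢ(x)², evaluated per vertex from the eigenpairs (λᵢ, φᵢ) of a shape's Laplacian. A direct transcription, with a toy eigenbasis in the test standing in for a real mesh Laplacian:

```python
import math

def heat_kernel_signature(eigenvalues, eigenvectors, t):
    """HKS(x, t) = sum_i exp(-lambda_i * t) * phi_i(x)^2 per vertex x.
    eigenvalues: [lambda_i]; eigenvectors: [[phi_i(x) for each vertex]].
    Computing the eigenpairs themselves is out of scope here."""
    n_vertices = len(eigenvectors[0])
    return [sum(math.exp(-lam * t) * phi[x] ** 2
                for lam, phi in zip(eigenvalues, eigenvectors))
            for x in range(n_vertices)]
```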

[035] In yet other aspects, the body shape analysis may be obtained by using spectral geometry method, non-rigid shape analysis method, or a combination thereof.

[036] In yet other aspects, further-in-distance objects are projected using the display unit.

[037] In yet other aspects, recalculating and comparing the second metrics keep occurring until the required accuracy is achieved.

[038] In yet other aspects, the spatial mapping stage may further comprise a recursive measurement for the second end, wherein the recursive measurement may be achieved by 3D Holographic Projection.

BRIEF DESCRIPTION OF THE DRAWINGS

[039] The disclosure will now be described with reference to the accompanying drawings, without however limiting the scope of the disclosure thereto, and in which:

[040] FIG. 1 illustrates a flow chart of a method for human body imitation and virtual body transfer communication using hologram-guided robotics, the method being configured in accordance with embodiments of the present disclosure.

[041] FIG. 2 illustrates a flowchart of an initialization stage of a method for human body imitation and virtual body transfer communication using hologram-guided robotics configured in accordance with embodiments of the present disclosure.

[042] FIG. 3 illustrates a flowchart of a spatial mapping stage of a method for human body imitation and virtual body transfer communication using hologram-guided robotics configured in accordance with embodiments of the present disclosure.

[043] FIG. 4 illustrates a flowchart of a shaping and mimicking stage of a method for human body imitation and virtual body transfer communication using hologram-guided robotics configured in accordance with embodiments of the present disclosure.

[044] FIG. 5 illustrates a flowchart of a tracking stage of a method for human body imitation and virtual body transfer communication using hologram-guided robotics configured in accordance with embodiments of the present disclosure.

[045] FIG. 6 illustrates a flowchart of an operation stage of a method for human body imitation and virtual body transfer communication using hologram-guided robotics configured in accordance with embodiments of the present disclosure.

[046] FIG. 7 illustrates a block diagram for the system being configured in accordance with embodiments of the present disclosure.

DETAILED DESCRIPTION

[047] As used herein, the term “hologram-guided robotics” refers to a three-dimensional projection which can be seen without using any special equipment such as cameras or glasses, whereby such projection is manifested on a robot body.

[048] Embodiments of the present disclosure provide an interactive communication system and a method that enable holding hybrid events, i.e. mixed physical and virtual events, while imitating appearance, movements, and interactions of remote individuals as well as surroundings through an incorporation of robotics and hologram technologies.

[049] FIGS. 1-6 illustrate a method for human body imitation and virtual body transfer communication using hologram-guided robotics configured in accordance with embodiments of the present disclosure, and FIG. 7 illustrates a system for human body imitation and virtual body transfer communication using hologram-guided robotics configured in accordance with embodiments of the present disclosure. Referring now to FIG. 7, embodiments of the present disclosure provide a system for human body imitation and virtual body transfer communication using hologram-guided robotics, the system may include a first end 1, a second end 2, a controlling and processing unit 3, and a network 4. The first end 1 may have one or more people 101, a first plurality of hologram sources 102 that may be configured to simulate a second end’s 2 surroundings, a display unit 105 that may be configured to project the second end’s 2 surroundings, a first set of image acquisition units 103, and a first voice recognition unit 104. The second end 2 may have a robot body 201, a second plurality of hologram sources 202 that may be configured to simulate the first end’s 1 surroundings, a second set of image acquisition units 203, and a second voice recognition unit 204. In embodiments of the present disclosure, the controlling and processing unit 3 may be configured to control the robot body 201, and generate spatial mapping using data acquired by the first and second image acquisition units 103, 203. In some embodiments, such controlling and processing unit 3 may be located in the robot body 201.

[050] In embodiments of the present disclosure, the first end 1 and the second end 2 may be transceiving ends, each being able to send and receive data over the network 4 to the other end.

[051] In some embodiments of the disclosure, the first and second voice recognition units 104, 204 may comprise speakers and microphones.

[052] In some embodiments of the disclosure, the robot body 201 may be a humanoid robot body.

[053] In some embodiments of the disclosure, the robot body 201 may be configured to have pipes 20 with a flexible exterior, expandable body parts 30, a plurality of actuators 40, at least one built-in camera 60, and a group of sensors 80 mounted thereon.

[054] In some embodiments of the disclosure, the pipes 20 may be distributed along the robot body 201.

[055] In some embodiments of the disclosure, the group of sensors 80 may comprise gyroscopes, accelerometers, and proximity sensors configured to detect the robot body’s 201 interactions and movements.

[056] In some embodiments of the disclosure, the robot body 201 may be configured to mimic the one or more people 101.

[057] In some embodiments of the disclosure, the robot body 201 may be configured to have a formation material 90 that is transferrable through the pipes 20 to a skin of the robot body 201.

[058] In some embodiments of the disclosure, the formation material 90 may be a flesh-like formation material.

[059] In some embodiments of the disclosure, the flesh-like formation material may be a foaming material.

[060] In some embodiments of the disclosure, the foaming material may be carbon nanotubes.

[061] In some embodiments of the disclosure, the carbon nanotubes may be straight carbon nanotubes, waved carbon nanotubes, bent carbon nanotubes, or a combination thereof.

[062] In some embodiments of the disclosure, the formation material 90 may be configured to provide hard biometrics for the robot body 201.

[063] In some embodiments of the disclosure, the second plurality of hologram sources 202 may be configured to provide soft biometrics to the robot body 201.

[064] In some embodiments of the disclosure, the group of sensors 80 may be fixed and/or movable sensors distributed along the robot body 201.

[065] Reference now is being made to FIG. 1, which illustrates a flowchart for a method for human body imitation and virtual body transfer communication using the system described in the present disclosure. The method may include five stages: an initialization stage (process block 1-1), a spatial mapping stage (process block 1-2), a shaping and mimicking stage (process block 1-3), a tracking stage (process block 1-4), and an operation stage (process block 1-5).

[066] Reference is now being made to FIG. 2 with continued reference to FIG. 7. FIG. 2 illustrates a flowchart for an initialization stage according to embodiments of the present disclosure, which may include the steps of: entering actual dimensions of human body characteristics and first metrics for the one or more people 101 using hard and soft biometric methods to be processed by the controlling and processing unit 3 (process block 2-1); shaping, by the formation material 90, the robot body to reflect the actual dimensions of the one or more people (process block 2-2); gathering information, such as the numbers and sizes of static and dynamic objects as well as the distances between the objects, by the second set of image acquisition units 203, and taking measurements of the second end 2 (process block 2-3); categorizing, by the controlling and processing unit 3, the information into the static objects and dynamic objects (process block 2-4); scanning, by the second set of image acquisition units 203 and the at least one built-in camera 60, the second end’s surroundings to generate a virtual layout of a scene to be mapped into the first end 1 (process block 2-5); providing, by the first set of image acquisition units 103, a three-dimensional (“3D”) image of the one or more people 101 (process block 2-6); and controlling the robot body 201 according to the human body characteristics and dimensions collected for the one or more people 101 (process block 2-7).

[067] In embodiments of the present disclosure, measuring the dimensions of the second end 2 may be performed by the second plurality of hologram sources 202 through the use of a laser, interference, reflection, and/or light intensity recording systems along with a suitable illumination method.

[068] Referring now to FIG. 3 with continued reference to FIG. 7. FIG. 3 illustrates a flowchart for a spatial mapping stage, which may include the steps of: identifying the static and dynamic objects in the first end 1 by the controlling and processing unit 3 (process block 3-1); assigning hologram numbers to the static and dynamic objects, wherein hologram projections are assigned to the robot body 201 and nearby objects (process block 3-2); constructing, by reflecting the data collected by the first set of image acquisition units 103, holographic representations of the static and dynamic objects (process block 3-3); refreshing the data collected continuously (process block 3-4); and using 3D holographic projection by the first plurality of hologram sources 102 to provide a highly accurate measurement of the first end 1 (process block 3-5).

[069] In some embodiments of the present disclosure, the static and dynamic objects can be determined using distant view images by applying the following Newton’s image formula:

siz′ = (f / dis) × siz

wherein “dis” is the distance between the object and the focal plane in the object space, “f” is the focal length in the object space, and “siz” is the size of the object; the size of the object in the captured two-dimensional (“2D”) image, “siz′”, can then be obtained from Newton’s image formula.
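The Newton’s image formula above can be sketched in a few lines of code; the function and variable names below are illustrative and not part of the disclosure:

```python
def projected_size(siz: float, f: float, dis: float) -> float:
    """Newton's image formula, siz' = (f / dis) * siz: the size of an
    object in the captured 2D image, given its real size `siz`, the
    focal length `f` in object space, and its distance `dis` from the
    focal plane (all in the same units)."""
    return (f / dis) * siz

# A 1.8 m tall person viewed with a 50 mm lens from 6 m projects to
# (0.05 / 6.0) * 1.8 = 0.015 m (15 mm) on the image plane.
print(projected_size(1.8, 0.05, 6.0))
```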

[070] In other embodiments, the spatial mapping stage may further comprise a recursive measurement for the second end 2, wherein the recursive measurement may be achieved by 3D Holographic Projection.

[071] Referring now to FIG. 4 and FIG. 7. FIG. 4 illustrates a flowchart for the shaping and mimicking stage, which may include the steps of: expanding the robot body 201 based on the dimensions and first metrics entered in the initialization stage of FIG. 2 (process block 4-1); allocating the group of sensors 80 on the robot body 201 for better interaction, wherein the placement of the group of sensors 80 depends on the event setup, the level of movement needed, and the function (process block 4-2); analyzing the robot body shape by validating the dimensions and first metrics entered (process block 4-3); obtaining second metrics for the robot body 201 (process block 4-4); comparing the second metrics of the robot body 201 with the first metrics (process block 4-5); recalculating the second metrics (process block 4-6); comparing the recalculated second metrics of the robot body 201 with the first metrics of the one or more people 101 entered in the initialization stage (process block 4-7); and formulating the robot body 201 shape based on the recalculated second metrics until a certain acceptable and pre-identified accuracy level is achieved (process block 4-8).
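The compare-and-recalculate loop of process blocks 4-4 through 4-8 can be sketched as follows. The `measure` and `adjust` callables are hypothetical stand-ins for the robot’s metric-acquisition and actuation interfaces, which the disclosure does not specify:

```python
def shape_robot(first_metrics, measure, adjust, tol=0.01, max_iters=100):
    """Reshape the robot body until its measured second metrics match
    the first (human) metrics within the pre-identified accuracy `tol`.
    `measure()` returns the current second metrics; `adjust(target,
    current)` drives the actuators toward the targets.  Both callables
    are hypothetical placeholders for the hardware interfaces."""
    for _ in range(max_iters):
        second = measure()                       # process block 4-4
        error = max(abs(s - t) for s, t in zip(second, first_metrics))
        if error <= tol:                         # process blocks 4-5 / 4-7
            return second                        # accuracy level achieved
        adjust(first_metrics, second)            # process blocks 4-6 / 4-8
    raise RuntimeError("acceptable accuracy level not reached")
```

In this sketch, the loop terminates either when every metric is within the tolerance or when an iteration budget is exhausted, mirroring the “until achieving a certain acceptable and pre-identified accuracy level” condition of process block 4-8.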

[072] Continued reference to FIG. 7 is made now with a new reference to FIG. 5, which illustrates a flowchart for the tracking stage, which may include the steps of: pre-processing, by the controlling and processing unit 3, the hologram images to reduce the noise in the hologram (process block 5-1); and determining an object center localization (process block 5-2).
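One common way to realise object center localization (process block 5-2) is an intensity-weighted centroid; the following is a minimal sketch under that assumption, not necessarily the method of the disclosure:

```python
import numpy as np

def object_center(img: np.ndarray) -> tuple:
    """Localize an object's center as the intensity-weighted centroid
    (row, column) of a pre-processed hologram image."""
    ys, xs = np.indices(img.shape)
    total = img.sum()
    return float((ys * img).sum() / total), float((xs * img).sum() / total)

img = np.zeros((5, 5))
img[2, 3] = 1.0            # a single bright spot
print(object_center(img))  # -> (2.0, 3.0)
```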

[073] In some embodiments, pre-processing the hologram images may require adding about 1% of the pixel intensity to avoid division by zero.
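A minimal sketch of that pre-processing step follows; taking the 1% floor relative to the maximum pixel intensity is an assumption about the exact reference level, which the disclosure leaves open:

```python
import numpy as np

def add_intensity_floor(hologram: np.ndarray) -> np.ndarray:
    """Add about 1% of the maximum pixel intensity to every pixel so
    that later ratio/normalization steps never divide by zero."""
    img = hologram.astype(np.float64)
    return img + 0.01 * img.max()

raw = np.array([[0.0, 200.0], [50.0, 0.0]])
safe = add_intensity_floor(raw)
print(safe.min())  # -> 2.0  (no zero-valued pixels remain)
```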

[074] In some embodiments, the object tracking can be done using Single-particle Tracking, Object Tracking by Reconstruction, Kernel-Based Object Tracking, contour tracking to extract the images’ boundaries, or a combination thereof.

[075] Referring now to FIG. 6 with continued reference to FIG. 7. FIG. 6 illustrates a flowchart for the operation stage, which may include the steps of: receiving instructions, from the first end 1 (process block 6-1); sending, by the controlling and processing unit 3, instructions to the second end 2 to imitate the one or more people 101 by the robot body 201 (process block 6-2); and imitating, by the robot body 201, the one or more people 101 (process block 6-3).

[076] In yet other embodiments, the categorization of the static and dynamic objects may be performed using Smooth Flow Vector Estimation, Motion Flow Identification, or a combination thereof.

[077] In embodiments of the disclosure, the Smooth Flow Vector is estimated by subtracting corresponding points (pixel values) of consecutive frames; if the result is null (zero), the object is static, otherwise it is dynamic.

[078] In embodiments of the disclosure, in Motion Flow Identification, the frames of static objects coherently overlap (convolution results are present), while those of dynamic objects do not (the convolution result is low).
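The frame-subtraction test of paragraph [077] can be sketched with two consecutive frames; the `tol` parameter is an assumption added here to absorb sensor noise, since real pixel differences are rarely exactly zero:

```python
import numpy as np

def static_mask(frame_a: np.ndarray, frame_b: np.ndarray,
                tol: float = 0.0) -> np.ndarray:
    """Subtract corresponding pixels of consecutive frames; a (near-)
    zero difference marks a static region, anything else a dynamic one.
    Returns a boolean mask that is True where the scene is static."""
    diff = np.abs(frame_a.astype(np.float64) - frame_b.astype(np.float64))
    return diff <= tol

a = np.array([[10, 10], [10, 50]], dtype=float)
b = np.array([[10, 10], [10, 90]], dtype=float)
mask = static_mask(a, b)
print(int(mask.sum()))  # -> 3 static pixels; the bottom-right pixel moved
```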

[079] In yet other embodiments, the hard and soft biometric methods used to obtain the human body characteristics may comprise the Heat Kernel Signature method, the Wave Kernel Signature method, or a combination thereof.

[080] In yet other embodiments, the body shape analysis may be obtained by using a spectral geometry method, a non-rigid shape analysis method, or a combination thereof.

[081] In yet other embodiments, further-in-distance objects are projected using the display unit 105.

[082] In yet other embodiments, the recalculation and comparison of the second metrics continue until the required accuracy is achieved.

[083] The singular forms “a,” “an,” and “the” include plural referents unless the context clearly dictates otherwise.

[084] While embodiments of the present disclosure have been described in detail and with reference to specific embodiments thereof, it will be apparent to one skilled in the art that various additions, omissions, and modifications can be made without departing from the spirit and scope thereof.