

Title:
IMAGE PROCESSING FOR MEDICAL CONDITION DIAGNOSIS
Document Type and Number:
WIPO Patent Application WO/2024/076683
Kind Code:
A1
Abstract:
Embodiments of the present application disclose a method and a related system for detecting medical conditions. In the method, a mobile computing device collects data (visual, sensor, etc.) from a user. A feature extraction circuit preprocesses the collected data to extract a feature. Based on the extracted feature, a prediction circuit determines a probability of the presence of a medical condition in the user. The embodiments provide a cost-effective and convenient approach to diagnosing certain health conditions that are visibly identifiable.

Inventors:
LEBIDEV ANTON (US)
SEMIANOV KONSTANTIN (US)
SEMYANOV ARTEM (US)
Application Number:
PCT/US2023/034553
Publication Date:
April 11, 2024
Filing Date:
October 05, 2023
Assignee:
NEATSY INC (US)
International Classes:
G16H10/60; G06T7/00; G06T15/08; G06T19/20; G16H30/20; G16H50/20; G06N3/08; G06N20/10
Foreign References:
US20160147959A12016-05-26
US20190139641A12019-05-09
US20200219272A12020-07-09
US20200303074A12020-09-24
Attorney, Agent or Firm:
QIN, Letao (US)
Claims:
CLAIMS

What is claimed is:

1. A system for detecting medical conditions, comprising: a mobile computing device configured to collect data from a user; a feature extraction circuit configured to preprocess the collected data to extract a feature; and a prediction circuit configured to predict a probability of a medical condition of the user based on the extracted feature.

2. The system of claim 1, further comprising a remote computing device communicatively coupled to the mobile computing device, wherein the feature extraction circuit is comprised in the remote computing device.

3. The system of claim 2, wherein the prediction circuit is comprised in the remote computing device.

4. The system of claims 1, 2, or 3, wherein: the mobile computing device comprises a red-green-blue (RGB) camera; and the collected data includes at least one of RGB images or RGB videos captured by the RGB camera.

5. The system of claims 1, 2, or 3, wherein: the mobile computing device has a red-green-blue depth (RGBD) camera; and the collected data includes at least one of RGBD images or RGBD videos captured by the RGBD camera.

6. The system of claim 1, wherein: the collected data includes at least one of images or videos captured by the mobile computing device; the feature extraction circuit comprises a preprocessing algorithm module; and the preprocessing algorithm module is configured to preprocess at least one of the images or the videos by using pre-trained neural networks.

7. The system of claim 1, wherein: the collected data includes at least one of images or videos captured by the mobile computing device; the feature extraction circuit comprises a preprocessing algorithm module; and the preprocessing algorithm module is configured to preprocess at least one of the images or the videos by using computer vision algorithms or non-trainable algorithms based on the computer vision algorithms.

8. The system of claim 1, wherein the mobile computing device is further configured to collect data input by the user via a questionnaire.

9. The system as in any of claims 6-8, wherein the feature extraction circuit is configured to transform the collected data into point cloud data.

10. The system as in any of claims 6-8, wherein the feature extraction circuit is configured to transform the collected data into neural network embeddings.

11. The system of claim 1, wherein: the prediction circuit comprises a prediction algorithm module; and the prediction algorithm module is trained using at least one from the following: synthetic data or data collected from a plurality of users.

12. The system of claim 1, wherein: the prediction circuit comprises a first prediction algorithm module and a second prediction algorithm module; the first prediction algorithm module is trained using synthetic data and the second prediction algorithm module is trained using data collected by a plurality of users; a first output of the first prediction algorithm module and a second output of the second prediction algorithm module are input into an aggregation algorithm module of the prediction circuit; and the probability of the medical condition of the user is determined based on an output of the aggregation algorithm module.

13. The system of claim 1, wherein: the prediction circuit comprises a prediction algorithm module; and the prediction algorithm module is trained using features extracted by the feature extraction circuit.

14. A method for detecting medical conditions, comprising: collecting, by a mobile computing device, data from a user; preprocessing, by a feature extraction circuit, the collected data to extract a feature; and predicting, by a prediction circuit, a probability of a medical condition of the user based on the extracted feature.

15. The method of claim 14, further comprising sending, by the mobile computing device, the extracted feature to a remote computing device comprising the prediction circuit.

16. The method of claim 14, further comprising sending, by the mobile computing device, the collected data to a remote computing device comprising the feature extraction circuit.

17. The method of claim 14, wherein the collecting, by the mobile computing device, the data from the user comprises: capturing, by a red-green-blue (RGB) camera of the mobile computing device, at least one of RGB images or RGB videos.

18. The method of claim 14, wherein the collecting, by the mobile computing device, the data from the user comprises: capturing, by a red-green-blue depth (RGBD) camera of the mobile computing device, at least one of RGBD images or RGBD videos.

19. The method of claim 14, wherein the collecting, by the mobile computing device, the data from the user comprises: capturing, by a camera of the mobile computing device, at least one of images or videos; wherein the preprocessing, by the feature extraction circuit, the collected data to extract the feature comprises: preprocessing, by a preprocessing algorithm of the feature extraction circuit, the at least one of the images or the videos by using pre-trained neural networks.

20. The method of claim 14, wherein the collecting, by the mobile computing device, the data from the user comprises: capturing, by a camera of the mobile computing device, at least one of images or videos; wherein the preprocessing, by the feature extraction circuit, the collected data to extract the feature comprises: preprocessing, by a preprocessing algorithm of the feature extraction circuit, the at least one of the images or the videos by using computer vision algorithms or non-trainable algorithms based on the computer vision algorithms.

21. The method of claim 14, further comprising: collecting, by the mobile computing device, input data by the user via a questionnaire.

22. The method as in any of claims 14-21, further comprising: transforming, by the feature extraction circuit, the collected data into point cloud data.

23. The method as in any of claims 14-21, further comprising: transforming, by the feature extraction circuit, the collected data into neural network embeddings.

24. The method as in any of claims 14-21, further comprising: training a prediction algorithm of the prediction circuit using synthetic data.

25. The method as in any of claims 14-21, further comprising: training a prediction algorithm of the prediction circuit using data collected from a plurality of users.

26. The method as in any of claims 14-21, further comprising: training a prediction algorithm of the prediction circuit using synthetic data and data collected by a plurality of users.

27. The method as in any of claims 14-21, further comprising: training a first prediction algorithm of the prediction circuit using synthetic data; training a second prediction algorithm of the prediction circuit using data collected from a plurality of users; and aggregating, by an aggregation algorithm module, a first output of the first prediction algorithm and a second output of the second prediction algorithm to predict the probability of the medical condition of the user.

28. The method as in any of claims 14-21, further comprising: training a prediction algorithm of the prediction circuit using features extracted by the feature extraction circuit.

Description:
IMAGE PROCESSING FOR MEDICAL CONDITION DIAGNOSIS

RELATED APPLICATIONS

[0001] This application claims priority to US Provisional Application 63/413,575, filed on October 5, 2022, the entire content of which is incorporated herein by reference.

FIELD OF THE TECHNOLOGY

[0002] The present disclosure generally relates to medical diagnosis, and more specifically to systems and methods for detecting medical conditions that have visible symptoms.

BACKGROUND

[0003] As medical sciences continue to advance, the number of identifiable health conditions also continues to grow. Some health conditions are diagnosed by specific tests that require particular and expensive equipment. Others are diagnosed after a patient completes a questionnaire and a medical professional performs a visual analysis. In some cases, a visual analysis must be performed before conducting a particular test. An example of such an ailment is flat feet. The presence, or at least a suspicion, of the condition can be established via a visual analysis by the medical professional before conducting a deeper analysis using expensive equipment, such as magnetic resonance imaging (MRI) or an X-ray.

[0004] With advanced technologies, mobile devices are now equipped with various types of sensors. Some examples of such sensors include cameras, microphones, depth cameras, light detection and ranging sensors (lidars), and so on. As medical costs become prohibitive and, at times, access to medical devices becomes restricted for many people, there is a need for a more cost-effective and convenient approach to diagnosing some visually identifiable health conditions.

SUMMARY

[0005] Embodiments of this disclosure provide for a system and method that allow users to calculate a probability of the presence of certain health conditions by utilizing a mobile computing device (e.g., smartphones, tablets, laptops). These mobile computing devices may include, but are not limited to, a red-green-blue (RGB) camera (i.e., a conventional camera found in most mobile computing devices), a depth camera, and/or one or more sensors, which may include, for example, a lidar sensor. Embodiments of the systems and methods disclosed herein utilize the data collected by a mobile computing device to process features and predict a probability of the presence of certain health conditions based on the processed features.

[0006] The mobile computing device is configured to collect data from users and, in some embodiments, perform feature processing via a feature processing module (e.g., feature processing circuit) and/or probability prediction via a prediction module (e.g., prediction circuit). An objective of the feature processing module is to generate features from the collected data for the prediction module to perform a prediction. An objective of the prediction module is to convert the features processed by the feature processing module into a probability of the presence of certain health conditions. In some embodiments, the prediction module and/or the feature processing module utilize machine learning algorithms, non-trainable algorithms based on prior knowledge, or a combination of both types of algorithms.
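For illustration only, the three-module flow described above (data collection, feature processing, prediction) can be sketched as follows. The mock sensor values, the arch-ratio feature, and the toy logistic model are hypothetical assumptions for demonstration, not the disclosed implementation.

```python
# Hypothetical sketch of the three-module pipeline: collection -> features -> prediction.
import math

def collect_data():
    """Stand-in for the data collection module: returns mock sensor readings."""
    # e.g., foot-arch landmark height and foot length in millimetres
    return {"arch_height_mm": 9.0, "foot_length_mm": 255.0}

def extract_features(data):
    """Stand-in for the feature processing module: derive a normalized feature."""
    # arch height relative to foot length; lower values suggest a flatter arch
    return [data["arch_height_mm"] / data["foot_length_mm"]]

def predict_probability(features, weights=(-120.0,), bias=4.0):
    """Stand-in for the prediction module: toy logistic model -> probability."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

probability = predict_probability(extract_features(collect_data()))
```

In the deployed variants described later, these three functions may live on the mobile device, on a remote server, or split between the two.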

[0007] In some embodiments, when the feature processing module and/or the prediction module utilize machine learning algorithms, the machine learning models are developed using training data. The data for training the machine learning algorithms may be obtained from real data collected from users together with labels associated with that data. These labels may include, but are not limited to, features that correlate with a particular health condition. These labels may be determined by a health professional or may be self-diagnosed by the user. Examples of such labels include binary classification labels of condition presence, such as hypo-lordosis presence, hyper-lordosis presence, scoliosis presence, flat feet presence, hallux valgus presence, cavus foot presence, or varicose veins presence. The labels can also be non-binary, such as the severity of a condition. Examples of features that correlate with a diagnosis include the Hallux Valgus angle, the Meary angle, the first intermetatarsal angle, the lordotic angle, and bone joint coordinates. Other examples of features include the presence of condition symptoms, such as pain, deformed veins, traumas, or skin pigment changes, and the activity level of the patient.
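As a hedged illustration of this training setup, the following sketch fits a simple logistic classifier to a fabricated set of (angle, label) pairs standing in for professionally determined or self-diagnosed presence labels. The data values and the learned decision boundary are invented for demonstration only.

```python
# Illustrative only: fit a one-feature logistic classifier on a hypothetical
# anatomical angle (degrees) against binary condition-presence labels.
import math

# (angle_degrees, label) pairs; label 1 = condition present (fabricated data)
samples = [(8.0, 0), (10.0, 0), (12.0, 0), (18.0, 1), (22.0, 1), (30.0, 1)]

w, b = 0.0, 0.0                      # logistic-regression parameters
lr = 0.01                            # learning rate
for _ in range(5000):                # plain batch gradient descent
    gw = gb = 0.0
    for x, y in samples:
        p = 1.0 / (1.0 + math.exp(-(w * x + b)))
        gw += (p - y) * x            # gradient of log-loss w.r.t. w
        gb += (p - y)                # gradient of log-loss w.r.t. b
    w -= lr * gw / len(samples)
    b -= lr * gb / len(samples)

def presence_probability(angle):
    """Probability that the condition is present for a given angle."""
    return 1.0 / (1.0 + math.exp(-(w * angle + b)))
```

A production system would use richer feature vectors and established ML libraries; this sketch only shows the label-to-probability relationship the paragraph describes.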

[0008] On its own or in combination with user data, another method for training the machine learning algorithms includes generating synthetic data. For example, synthetic data may be generated by creating a three-dimensional (3D) model of a human foot and then rendering the 3D model. The rendered 3D model is synthetic data and can be used to train the machine learning algorithms in the same way as the data collected from users. In this example, the labels, such as classification labels of condition absence/presence (e.g., hypo-lordosis, hyper-lordosis, scoliosis, flat feet, hallux-valgus, or cavus foot absence/presence), the severity of the conditions, and even the anatomic features related to the condition, such as the Hallux Valgus angle, the Meary angle, the first intermetatarsal angle, the lordotic angle, and bone joint coordinates, may be generated automatically based on the parameters of the model. In some embodiments, the training data is a combination of collected and synthetic data.

[0009] According to a first aspect, a system for detecting medical conditions is disclosed. The system includes a mobile computing device configured to collect data from a user. The system further includes a feature extraction circuit configured to preprocess the collected data to extract a feature. The system also includes a prediction circuit configured to predict a probability of a medical condition of the user based on the extracted feature.
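The synthetic-data idea of paragraph [0008] could be sketched, under heavy simplification, as sampling the parameters that would drive a parametric 3D foot model and deriving the labels automatically from those parameters. The parameter names and the flat-foot criterion below are assumptions, and no actual 3D rendering is performed in this sketch.

```python
# Hedged sketch of automatic label generation from synthetic model parameters.
import random

def generate_synthetic_sample(rng):
    """Sample parameters for a hypothetical parametric 3D foot model."""
    params = {
        "arch_height_mm": rng.uniform(2.0, 20.0),    # would drive the 3D mesh
        "foot_length_mm": rng.uniform(220.0, 290.0),
        "hallux_valgus_deg": rng.uniform(0.0, 40.0),
    }
    # Labels come "for free" from the generating parameters, no annotator needed:
    labels = {
        "flat_feet": params["arch_height_mm"] / params["foot_length_mm"] < 0.04,
        "hallux_valgus": params["hallux_valgus_deg"] > 15.0,
    }
    return params, labels

rng = random.Random(0)               # fixed seed for reproducibility
dataset = [generate_synthetic_sample(rng) for _ in range(100)]
```

In the full approach, each parameter set would additionally be rendered into images so that the trained model sees the same kind of input as real user photographs.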

[0010] In some embodiments, the system further includes a remote computing device communicatively coupled to the mobile computing device. The feature extraction circuit is disposed in the remote computing device.

[0011] In some embodiments, the system further includes a remote computing device communicatively coupled to the mobile computing device. The prediction circuit is disposed in the remote computing device.

[0012] In some embodiments, the mobile computing device includes a red-green- blue (RGB) camera. The collected data includes at least one of RGB images or RGB videos captured by the RGB camera.

[0013] In some embodiments, the mobile computing device includes a red-green- blue depth (RGBD) camera for collecting data. The collected data includes at least one of RGBD images or RGBD videos captured by the RGBD camera.

[0014] In some embodiments, the collected data includes at least one of images or videos captured by the mobile computing device. The feature extraction circuit includes a preprocessing algorithm module. The preprocessing algorithm module is configured to preprocess at least one of the images or the videos by using pre-trained neural networks.

[0015] In some embodiments, the collected data includes at least one of images or videos captured by the mobile computing device. The feature extraction circuit includes a preprocessing algorithm module. The preprocessing algorithm module is configured to preprocess at least one of the images or the videos by using computer vision algorithms or non-trainable algorithms based on the computer vision algorithms.

[0016] In some embodiments, the mobile computing device is further configured to collect data input by the user via a questionnaire. In one embodiment, the feature extraction circuit is configured to transform the collected data into point cloud data. In one embodiment, the feature extraction circuit is configured to transform the collected data into neural network embeddings.

[0017] In some embodiments, the prediction circuit includes a prediction algorithm module. The prediction algorithm module is trained using at least one from the following: synthetic data or data collected from a plurality of users.

[0018] In some embodiments, the prediction circuit includes a first prediction algorithm module and a second prediction algorithm module. The first prediction algorithm module is trained using synthetic data and the second prediction algorithm module is trained using data collected by a plurality of users. A first output of the first prediction algorithm module and a second output of the second prediction algorithm module are input into an aggregation algorithm module of the prediction circuit. The probability of the medical condition of the user is determined based on an output of the aggregation algorithm module.
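A minimal sketch of this two-model embodiment follows, assuming a fixed weighted average as the aggregation algorithm; the weights, the stand-in model outputs, and the function names are illustrative assumptions (the aggregation module could equally be a trained model).

```python
# Hedged sketch: aggregate outputs of a synthetic-data model and a user-data model.
def synthetic_model(features):       # stand-in for the first prediction module
    return 0.62                      # mock probability from the synthetic-data model

def user_data_model(features):       # stand-in for the second prediction module
    return 0.48                      # mock probability from the user-data model

def aggregate(p_synthetic, p_user, w_synthetic=0.4, w_user=0.6):
    """Weighted-average aggregation of the two module outputs."""
    return w_synthetic * p_synthetic + w_user * p_user

features = None  # placeholder for the extracted features
probability = aggregate(synthetic_model(features), user_data_model(features))
```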

[0019] In some embodiments, the prediction circuit includes a prediction algorithm module. The prediction algorithm module is trained using features extracted by the feature extraction module.

[0020] According to a second aspect, a method for detecting medical conditions is disclosed. The method includes collecting, by a mobile computing device, data from a user. The method further includes preprocessing, by a feature extraction circuit, the collected data to extract a feature, and predicting, by a prediction circuit, a probability of a medical condition of the user based on the extracted feature.

[0021] In one embodiment, the method further includes sending, by the mobile computing device, the extracted feature to a remote computing device comprising the prediction circuit. In one embodiment, the method further includes sending, by the mobile computing device, the collected data to a remote computing device comprising the feature extraction circuit.

[0022] In some embodiments, the collecting of the data from the user by the mobile computing device includes capturing, by a red-green-blue (RGB) camera of the mobile computing device, at least one of RGB images or RGB videos, or capturing, by a red-green-blue depth (RGBD) camera of the mobile computing device, at least one of RGBD images or RGBD videos.

[0023] In some embodiments, the collecting, by the mobile computing device, the data from the user includes capturing, by a camera of the mobile computing device, at least one of images or videos. In one embodiment, the preprocessing, by the feature extraction circuit, the collected data to extract the feature includes preprocessing, by a preprocessing algorithm of the feature extraction circuit, the at least one of the images or the videos by using pre-trained neural networks. In one embodiment, the preprocessing, by the feature extraction circuit, the collected data to extract the feature includes preprocessing, by a preprocessing algorithm of the feature extraction circuit, the at least one of the images or the videos by using computer vision algorithms or non-trainable algorithms based on the computer vision algorithms.

[0024] In some embodiments, the method further includes collecting, by the mobile computing device, data input by the user via a questionnaire.

[0025] In some embodiments, the method further includes transforming, by the feature extraction circuit, the collected data into point cloud data and/or neural network embeddings.
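One plausible form of the point-cloud transformation is back-projecting an RGBD depth map through a pinhole camera model; the intrinsic parameters and the tiny depth map below are illustrative assumptions, not values from the disclosure.

```python
# Hedged sketch: depth map (metres) -> list of (x, y, z) camera-frame points.
def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth map through an assumed pinhole camera model."""
    points = []
    for v, row in enumerate(depth):          # v: pixel row index
        for u, z in enumerate(row):          # u: pixel column index
            if z <= 0:                       # skip invalid depth readings
                continue
            x = (u - cx) * z / fx            # pinhole back-projection
            y = (v - cy) * z / fy
            points.append((x, y, z))
    return points

# 2x2 toy depth map; one pixel has no valid depth reading
depth = [[0.5, 0.6],
         [0.0, 0.7]]
cloud = depth_to_point_cloud(depth, fx=500.0, fy=500.0, cx=0.5, cy=0.5)
```

A real RGBD capture would supply device-specific intrinsics and a much larger map; the resulting cloud is what a downstream model (or a non-trainable geometric algorithm) would consume.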

[0026] In some embodiments, the method further includes training a prediction algorithm of the prediction circuit using synthetic data, using data collected from a plurality of users, or using both.

[0027] In some embodiments, the method further includes training a first prediction algorithm of the prediction circuit using synthetic data. The method further includes training a second prediction algorithm of the prediction circuit using data collected from a plurality of users. The method further includes aggregating, by an aggregation algorithm module, a first output of the first prediction algorithm and a second output of the second prediction algorithm to predict the probability of the medical condition of the user.

[0028] In some embodiments, the method further includes training a prediction algorithm of the prediction circuit using features extracted by the feature extraction module.

BRIEF DESCRIPTION OF THE DRAWINGS

[0029] These and other features of the present disclosure will become readily apparent upon further review of the following specification and drawings. In the drawings, like reference numerals designate corresponding parts throughout the views. Moreover, components in the drawings are not necessarily drawn to scale, the emphasis instead being placed upon clearly illustrating the principles of the present disclosure.

[0030] Aspects of the present disclosure are best understood from the following detailed description when read with the accompanying figures. It is noted that, in accordance with standard practice in the industry, various features are not drawn to scale. In fact, the dimensions of the various features may be arbitrarily increased or reduced for clarity of discussion.

[0031] FIG. 1 is an example of a system diagram of a medical condition detecting system having a mobile computing device, according to some embodiments of the present application.

[0032] FIG. 2 is an example of a system diagram of a medical condition detecting system having a mobile computing device and an external computing system, according to some embodiments of the present application.

[0033] FIG. 3 is another example of a system diagram of a medical condition detecting system having a mobile computing device and an external computing system, according to some embodiments of the present application.

[0034] FIG. 4 is yet another example of a system diagram of a medical condition detecting system having a mobile computing device and an external computing system, according to some embodiments of the present application.

[0035] FIG. 5 is an example block diagram of a data collection module in a medical condition detection system, according to some embodiments of the present application.

[0036] FIG. 6 is an example block diagram of a feature processing module in a medical condition detection system, according to some embodiments of the present application.

[0037] FIG. 7 is an example block diagram of a prediction module in a medical condition detection system, according to some embodiments of the present application.

[0038] FIG. 8 is a flowchart diagram of a process for detecting a medical condition, according to some embodiments of the present application.

[0039] FIGs. 9a-9c are flowchart diagrams of a process for detecting one or more medical conditions, according to some embodiments of the present application.

[0040] FIGs. 10a-10b are examples of point clouds generated during diagnosis, according to some embodiments of the present application.

DETAILED DESCRIPTION

[0041] Embodiments of the disclosure are described more fully hereinafter with reference to the accompanying drawings, in which preferred embodiments of the disclosure are shown. The various embodiments of the disclosure may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.

[0042] The proposed approach is an advanced system and method that allow users to calculate a probability of the presence of certain health conditions by utilizing a mobile computing device (e.g., smartphones, tablets, laptops). The medical conditions that can be detected using the disclosed systems and methods include those having symptoms that may be observed with a modern mobile device without the use of specialized sensors and/or equipment. Non-limiting examples of such medical conditions include flat feet, foot over/under-pronation, hallux valgus, nerd neck (i.e., forward head posture), and scoliosis. As visual analysis is a major component of the diagnosis of such conditions, they are suitable for diagnosis using a mobile computing device. Although a final diagnosis of such conditions may require the use of specialized equipment (e.g., X-ray scans), a preliminary step in the assessment is conducted through visual analysis. Only after performing the preliminary step does the medical professional decide whether additional testing (e.g., X-rays) should be performed.

[0043] In the following disclosure, numerous embodiments are set forth in order to provide a more thorough description of the proposed approach. It will be apparent, however, to one skilled in the art, that the disclosure extends beyond the specific embodiments and may include techniques and/or features that are well-known by those skilled in the art. In some instances, these well-known techniques and/or features have not been described in full detail so as not to obscure the teachings of this disclosure.

[0044] In the present disclosure, the term “module” refers to a component of an apparatus, which may be implemented as hardware (e.g., chips, circuits, processors, etc.), software (e.g., applications, API calls, function library, embedded code, etc.), or a combination of hardware and software.

[0045] FIGs. 1, 2, 3, and 4 illustrate various examples of an architecture of systems 100, 200, 300, 400 for determining a probability of a medical condition for a user. As shown in FIGs. 1, 2, 3, and 4, the systems 100, 200, 300, 400 include a mobile computing device 110, and optionally include an external computing system 210 (external as to the mobile computing device 110). The mobile computing device 110 may be, but is not limited to, a smartphone, a tablet, a laptop, a virtual reality (VR)/augmented reality (AR) headset, or any other suitable mobile communication device. The external computing device 210 may be a remote system or server that is communicatively coupled to the mobile computing device 110.

[0046] For example, as illustrated in FIG. 1, the system 100 includes the mobile computing device 110. The mobile computing device 110 includes a data collection module 112 (e.g., data collection circuit), a feature processing module 114 (e.g., feature extraction module, feature extraction circuit), and a prediction module 116 (e.g., prediction circuit). The mobile computing device 110 receives one or more types of collected data 120 using one or more sensors of the mobile computing device 110, e.g., Data Source 1, Data Source 2, and Data Source 3. The one or more types of collected data 120 are input and stored by the data collection module 112. The data collection module 112 outputs the collected data to the feature processing module 114. The feature processing module 114 preprocesses the data obtained by the data collection module 112 to extract one or more features via one or more preprocessing algorithms of the feature processing module 114. After extracting the one or more features, the feature processing module 114 outputs the extracted features to the prediction module 116.

[0047] The prediction module 116 receives the extracted features from the feature processing module 114 and calculates a probability (e.g., prediction) of the presence of a health condition via one or more prediction algorithms of the prediction module 116. An output 122 indicative of a probability of the health condition is output by the mobile computing device 110.

[0048] As illustrated in FIG. 2, the system 200 includes the mobile computing device 110 and the external computing device 210. The mobile computing device 110 and the external computing device 210 include communication components 118 and 218, respectively. The communication components 118 and 218 communicatively couple the mobile computing device 110 and the external computing device 210. Accordingly, the mobile computing device 110 and the external computing device 210 may send and/or receive data from each other via the communication components 118 and 218.

[0049] In FIG. 2, the mobile computing device 110 includes the data collection module 112, which may be structurally or operationally the same as, or similar to, the data collection module described in FIG. 1. However, the system 200 differs from the system 100 in that a feature processing module 214 and a prediction module 216 are disposed in the external computing device 210. In these embodiments, the collected data is transmitted from the mobile computing device 110 to the external computing device 210 via the communication components 118 and 218. In these embodiments, the feature processing module 214 outputs the extracted features to the prediction module 216 within the external computing device 210. The prediction module 216 then outputs a probability of the presence of a health condition of a user to the mobile computing device 110 via the communication components 118 and 218. The mobile computing device 110 may then output 122 the probability of the health condition to the user of the mobile computing device 110.

[0050] Referring to FIG. 3, the system 300 includes a mobile computing device 110 and an external computing system 210. The mobile computing device 110 includes a data collection module 112, a feature processing module 114, and a communication component 118. The external computing system 210 includes a prediction module 216 and a communication component 218. In the system 300, the feature processing module 114 is disposed in the mobile computing device 110, not in the external computing system 210 as in the system 200. In these embodiments, the feature processing module 114 provides the extracted features to the external computing device, via the communication components 118 and 218, which in turn outputs the extracted features to the prediction module 216.

[0051] In FIG. 4, the system 400 includes a mobile computing device 110 and an external computing system 210. The mobile computing device 110 includes a data collection module 112, a prediction module 116, and a communication component 118. The external computing system 210 includes a feature processing module 214 and a communication component 218. In the system 400, the feature processing module 214 is disposed in the external computing system 210, not in the mobile computing device 110 as in the system 300. In these embodiments, the feature processing module 214 provides the extracted features to the mobile computing device 110, via the communication components 118 and 218, which in turn outputs the extracted features to the prediction module 116 disposed in the mobile computing device 110.

[0052] The mobile computing device 110 and the external computing system 210 can each include communication components 118 and 218, respectively, that facilitate communication for each of the mobile computing device 110 and the external computing system 210 shown in FIGs. 1, 2, 3, and 4, for example, to communicate with each other over a communication network. Some examples of communication networks include, but are not limited to, the Internet, intranets, wide area networks (WANs), local area networks (LANs), wireless networks, Bluetooth, Wi-Fi, and other similar mobile communication networks. The connections of the network and the communication protocols are well known to those of skill in the art. The communication components typically embody computer-readable instructions, data structures, program modules, or other data in a modulated data signal. By way of example, and not limitation, communication components include wired media, such as a wired network or a direct-wired connection, and wireless media, such as acoustic, radio frequency (RF), and infrared. In an alternate embodiment, where all processing is performed by the mobile computing device 110 (as illustrated in FIG. 1), the mobile computing device 110 may not include a communication component 118 for communicating with an external server (e.g., outside of the conventional usage of a mobile computing device 110).

[0053] The mobile computing device 110 further includes a data collection module 112 (e.g., data collection circuit). In the system 100, the mobile computing device 110 further includes a feature extraction module 114 (e.g., feature extraction circuit) and a prediction module 116 (e.g., prediction circuit). In some embodiments, the feature extraction module and/or the prediction module are not included in the mobile computing device 110 and, instead, are included in the external computing system 210.
In these embodiments, the feature extraction module 214 and the prediction module 216 included in the external computing system 210 may be identical or similar to the feature extraction module 114 and the prediction module 116 in the mobile computing device 110. As shown in FIG. 5, the data collection module 512 (e.g., data collection circuit) includes storages and sensors, and is configured to collect data from a user. In some scenarios, the user performs self-diagnosis or participates in a telecommunication visit (e.g., a telemedicine or telehealth session). In some scenarios, a medical professional operates the mobile computing device 110 for a patient. The data collection module 512 may be similar or identical to the data collection module 112 in the systems 100, 200, 300, and 400. In some embodiments, only one type of data is collected from the user (e.g., a patient). In some embodiments, various types of data are collected from the user. The types of collected data include, but are not limited to, red-green-blue (RGB) images that capture anteroposterior, lateral, medial, and coronal views of a human foot, RGB images of a human's back and neck, etc. One or more sensors 514 equipped in the mobile computing device 110 can be utilized by the data collection module 512 to collect the RGB images. In these examples, one of the one or more sensors 514 may be a native camera of the mobile computing device 110 (e.g., a standard camera included with the mobile computing device 110). In addition to collecting images, the mobile computing device 110 may be configured to prompt the user for additional information, for example, via a questionnaire, to assess the health or medical condition of the user. The one or more sensors 514 of the mobile computing device 110 may also capture RGB depth (RGBD) photographs, record audio (via a microphone of the mobile computing device 110) or video recordings (via the native camera of the mobile computing device 110), and the like.
In this way, it is not necessary to include any specialized biometric sensors such as a heart rate sensor, a weight sensor, a blood sugar sensor, or other sensors, in order for the aforementioned approach to work. This is advantageous for applications such as remote medicine (telemedicine) during self-diagnosis or otherwise, or when access to specialized equipment is limited.

[0054] In some embodiments, the one or more sensors 514 include motion sensors, which may be native to the mobile computing device 110, to improve accuracy for detecting certain medical conditions such as limping. For each type of collected data, the data collection module 512 may be configured to provide instructions and feedback to the user. The instructions and feedback may be provided in any manner that is well known in the art, including one or more of visual cues (e.g., via graphical user interfaces (GUIs) or prompts), auditory cues (e.g., via a speaker of the mobile computing device 110), or tactile forms. The instructions and feedback may be preprogrammed or provided in real time by a medical professional in a telemedicine environment.

[0055] Along with providing instructions and feedback, the data collection module 512 collects data from the sensors 514 and stores the collected data. In some embodiments, the collected data may be stored internally in one or more storages 516 of the data collection module 512. In some embodiments, the collected data is transmitted by the mobile computing device 110 to the external computing device 210 via the communication components 118 and 218, respectively. The data collection module 512 transmits the output data 522 to the feature processing module 614 directly or via the communication components 118 and 218.

[0056] As illustrated in FIG. 6, the feature processing module 614 is configured to preprocess the output data 522 obtained by the data collection module 512. The data obtained by the data collection module 512 may be organized by various data types 620. After the collected data is organized by various data types 620, different preprocessing algorithms 622 may be utilized with respect to a data type 620. For example, if the data type is RGB images of a user's foot, a particular preprocessing algorithm directed to RGB images of a user's foot may be utilized to identify features of the user's foot. In some embodiments, the feature processing module 614 is disposed in the mobile computing device 110. In these embodiments, as the preprocessing algorithm 622 is performed internally in the mobile computing device 110, additional systems (e.g., an external server) are not required, which improves the speed of the diagnosis. Localized information processing also makes health information less vulnerable to cyberattacks, as transmission to and/or storage by an auxiliary component are not required.
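The per-type routing described above can be sketched as follows. This is a minimal illustration only: the type names and the stand-in transformations are hypothetical placeholders, not the actual preprocessing algorithms 622 of the disclosure.

```python
def preprocess(sample):
    """Route a collected sample to the preprocessing algorithm registered
    for its data type. The registered transformations here are trivial
    stand-ins; real pipelines would produce embeddings, spectrograms, etc."""
    pipelines = {
        # Hypothetical type names with placeholder transformations.
        "rgb_foot_image": lambda d: {"embedding": [sum(d) / len(d)]},
        "audio": lambda d: {"spectrogram": [abs(x) for x in d]},
    }
    # Unknown types pass through unchanged (the trivial "raw data" case).
    algorithm = pipelines.get(sample["type"], lambda d: {"raw": d})
    return algorithm(sample["data"])

print(preprocess({"type": "audio", "data": [-1.0, 2.0]}))
# {'spectrogram': [1.0, 2.0]}
```

A dispatch table of this kind keeps each data type 620 paired with exactly one preprocessing algorithm 622, mirroring the organization described above.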

[0057] In some embodiments, the preprocessing algorithm 622 is performed by the external computing system 210. In these embodiments, processing overhead on the mobile computing device 110 is reduced. Further, the external computing system 210 may comprise superior computing capabilities in comparison to the mobile computing device 110, thereby improving the speed of processing and/or the capacity to process the collected data.

[0058] In some embodiments, the preprocessing algorithm 622 may be omitted. In one embodiment, the preprocessing algorithm 622 is equivalent to a simple algorithm that passes through the raw data received from the data collection module 512. In some embodiments, the feature processing module 614 transforms the input data into its latent representation 624. Examples of transformations include, but are not limited to, transforming an RGB photo to a neural network embedding, transforming video to a set of key RGB photos with an optional transformation to neural network embeddings, transforming RGBD photos to neural network embeddings, transforming sound to a spectrogram (via a short-time Fourier transform or a wavelet transformation, for example), and transforming RGBD video to a point cloud. The feature processing module may include a pipeline of algorithms. An example of such a pipeline is transformation of the input data into neural network embeddings with subsequent dimension reduction. Another example is point cloud calculation with subsequent points-of-interest estimation and/or statistics calculation. As stated above, in some embodiments, the data collection module 512 provides multiple types of data 620. In these embodiments, the feature processing module 614 separately processes each data type and outputs the latent representation 624 for each of the data types.
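As one concrete instance of these transformations, a sound-to-spectrogram conversion via a short-time Fourier transform can be sketched with a naive, dependency-free DFT. The frame length, hop size, and test tone are arbitrary illustrative choices, not values from the disclosure.

```python
import math

def stft_magnitude(signal, frame_len=64, hop=32):
    """Naive short-time Fourier transform magnitude (spectrogram).
    Each frame is Hann-windowed; each output row holds the magnitudes
    of the non-redundant DFT bins for one frame."""
    window = [0.5 - 0.5 * math.cos(2 * math.pi * n / (frame_len - 1))
              for n in range(frame_len)]
    frames = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = [signal[start + n] * window[n] for n in range(frame_len)]
        spectrum = []
        for k in range(frame_len // 2 + 1):  # real input: keep half + DC/Nyquist
            re = sum(frame[n] * math.cos(2 * math.pi * k * n / frame_len)
                     for n in range(frame_len))
            im = -sum(frame[n] * math.sin(2 * math.pi * k * n / frame_len)
                      for n in range(frame_len))
            spectrum.append(math.hypot(re, im))
        frames.append(spectrum)
    return frames

# A pure tone of exactly 8 cycles per frame concentrates energy in bin 8.
tone = [math.sin(2 * math.pi * 8 * n / 64) for n in range(256)]
spec = stft_magnitude(tone)
peak_bin = max(range(len(spec[0])), key=lambda k: spec[0][k])
print(peak_bin)  # 8
```

In practice a fast FFT-based routine would replace the O(N²) inner loop; the sketch only illustrates the sound-to-spectrogram latent representation named above.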

[0059] As illustrated in FIG. 7, the prediction module 716 is responsible for predicting a probability of a medical condition based on the features 720 provided from the feature processing module 614. Non-limiting examples of the features include neural network embeddings, spectrograms, point clouds, hand-crafted features calculated on raw data, etc. A feature 720 refers to extracted or transformed data obtained from the collected data after some processing by a preprocessing algorithm or a combination of different preprocessing algorithms. In some embodiments, the feature processing module 614 does not make any computations; it supplies the raw input as the output to the prediction module 716, which takes the raw collected data as an input. An example of such embodiments is a classification neural network being used as the prediction module and a camera that collects images being used as the data collection module. The prediction module 716 may be further configured to output the features 720 to the user of the mobile computing device 110 in real time. In some embodiments, the data collection module 512 collects various types of data 620 and the feature processing module 614 outputs one or more latent representations 624. In some embodiments, the prediction module 716 may be a multimodal system. In a multimodal system, for each data source a separate prediction algorithm 722 may be utilized to calculate a probability prediction. The outputs of the prediction algorithms 722 are then aggregated by an aggregation algorithm 724. Examples of the aggregation algorithm 724 include, but are not limited to, bootstrapping or boosting. In these examples, the prediction algorithms 722 are one or more of the following: deep learning algorithms, linear regression algorithms, decision trees, ensemble methods, or non-trainable algorithms based on prior knowledge.
The aggregated output 122 of the aggregation algorithm 724 is indicative of a probability of the health condition and is presented to the user of the mobile computing device 110.
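The multimodal aggregation step can be sketched with a weighted average, which serves only as a simple stand-in for the bootstrapping or boosting aggregation algorithm 724; the modality names, probabilities, and weights below are illustrative assumptions.

```python
def aggregate_predictions(probs, weights=None):
    """Aggregate per-modality probability predictions into one probability.
    A weighted average is a minimal stand-in for the bootstrapping or
    boosting aggregation described above; equal weights by default."""
    if weights is None:
        weights = [1.0] * len(probs)
    return sum(p * w for p, w in zip(probs, weights)) / sum(weights)

# Hypothetical per-modality predictors: foot images, gait video, questionnaire.
modality_probs = [0.82, 0.64, 0.70]
print(round(aggregate_predictions(modality_probs), 2))  # 0.72
```

The weights would let a trained aggregator trust more informative modalities (e.g., imaging over a questionnaire) without retraining the per-modality prediction algorithms 722.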

[0060] In some embodiments, the feature processing module 614 and/or the prediction module 716 include machine learning algorithms. Such algorithms require training in order to attain accuracy. The training of such algorithms requires training data and labels. In some embodiments, the training data is data collected from real people by using the data collection module 512. Labeling of the collected data may be performed by a medical professional who determines a probability of an ailment for each user based on the input data. The labels should include the classification labels for the predicted conditions or the probability of such conditions. In addition, the labels may include other features related to the condition, such as severity, anatomic features, anamnesis, and even the subjective confidence in the diagnosis from the clinician who performed the labeling. This information can be taken into account in the training of the model by assigning weights for the loss function or by changing the sampling balance for training procedures based on mini-batch training. As an example, for data with severe conditions, the loss weight can be larger to make the model pay more attention to severe cases. In other embodiments, the data and the labels are synthesized by another algorithm. Some examples of algorithms that can be used to synthesize training data include generative neural networks, three-dimensional (3D) modeling, or a Markov process. In such cases, labeling of the training data can be performed by medical professionals or by other algorithms based on generative parameters. An example of such parameters is a Meary's angle on a rigged 3D foot model in a flat feet prediction. In other embodiments, the training data consists of a combination of data collected from real people and synthetic data that is generated by machines.
In this case, the mixing strategy for such data during training may consist in assigning weights for the loss or changing the sampling probability for training procedures based on mini-batch training.
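The severity-weighted loss described above can be sketched as a per-sample weight applied to a binary cross-entropy term; the specific weight values are illustrative, not prescribed by the disclosure.

```python
import math

def weighted_bce(prob, label, weight=1.0):
    """Binary cross-entropy with a per-sample weight.
    prob is clipped away from 0 and 1 for numerical stability."""
    eps = 1e-7
    p = min(max(prob, eps), 1 - eps)
    return -weight * (label * math.log(p) + (1 - label) * math.log(1 - p))

# The same prediction error costs more on a severe case (weight 3.0)
# than on a mild one (weight 1.0), steering training toward severe cases.
mild = weighted_bce(0.6, 1, weight=1.0)
severe = weighted_bce(0.6, 1, weight=3.0)
print(severe > mild)  # True
```

The same weights could instead drive the sampling probability of each example in mini-batch construction, which is the alternative mixing strategy mentioned above.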

[0061] In other embodiments, the feature processing module 614 and/or the prediction module 716 consist of algorithms based on prior knowledge. Examples of types of prior knowledge include knowledge pertaining to flat feet and hallux-valgus estimations and applying that knowledge to a 3D foot scan. The following paragraphs will describe these particular examples in detail.

[0062] FIGs. 10a and 10b illustrate examples of point clouds generated by the feature processing module 614, according to some embodiments of the application. In some embodiments, the point cloud is a data type utilized in the medical diagnosis processes disclosed herein. A point cloud is a mandatory component in a majority of 3D models. For those 3D models that do not include a point cloud, a point cloud can be created by a sampling procedure with a sufficient resolution. For example, a 3D model can be created from a series of RGB(D) photos of a foot via any Structure from Motion (SfM) algorithm. In this case, the data collection module 512 is an application that collects the RGB(D) photos of the foot, the feature processing module 614 is the SfM algorithm, and the prediction module 716 is described in detail below. For clinical purposes, a medical condition is diagnosed based on either an X-ray or a visual analysis made by a skilled medical professional. For both flat feet and hallux-valgus conditions, the diagnosis is based on a location of the bones within the foot. Accordingly, a location of the bones may be determined by a hand-crafted (e.g., customized) non-trainable algorithm. FIG. 10a illustrates an example of a point cloud generated for diagnosing a hallux-valgus condition. For the hallux-valgus condition (also known as bunions), the diagnosis requires finding a joint 1002 connecting a big toe 1006 to the rest of the foot 1004 (also known as the first metatarsophalangeal (MTP) joint). This joint 1002 may be found as an extreme point in the 3D point cloud in the front part of the medial view. An extreme point estimation can be performed by comparing points along the length axis and their corresponding values along the width axis. A point that is farther in the width direction than its local neighborhood is the extremum point by definition.
When the location of the joint is known, a surrogate hallux-valgus angle can be defined as an angle 1008 in the dorsal view projection between the line connecting the big toe 1006 and the joint point 1002 and the line connecting the heel location 1004 and the joint point 1002. It should be noted that, in this diagnosis example, the big toe 1006 and the heel point locations 1004 can also be found as extrema in an anterior and a posterior view, respectively. In some embodiments, the angle 1008 is calculated by the feature processing module 614 using an SfM algorithm. In other embodiments, it is an initial part of the processing in the prediction module 716. The angle 1008 is a highly descriptive feature in certain medical condition predictions, as high angle values often indicate a high probability of hallux-valgus deformity. For example, the simplest prediction model can linearly map the surrogate angle to a probability of the presence of a medical condition with fixed coefficients determined by research. In this example, the prediction module 716 can take the angle 1008 as the output of the feature processing module 614 and apply the linear model that maps the angle 1008 into a probability of a medical condition, such as hallux-valgus. In other embodiments, the prediction module 716 is a pipeline of algorithms that takes a point cloud as an input from the feature processing module 614, calculates the angle 1008, and applies the mapping model to the angle 1008 to arrive at a probability of the medical condition.
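The surrogate-angle computation and the simple linear mapping can be sketched as follows, using dorsal-view 2D coordinates. The landmark coordinates and the mapping coefficient are illustrative assumptions; real coefficients would be determined by research, as noted above.

```python
import math

def surrogate_angle(toe, joint, heel):
    """Angle 1008 at the MTP joint between the joint->toe and joint->heel
    lines, in degrees, in the dorsal-view projection (2D coordinates)."""
    v1 = (toe[0] - joint[0], toe[1] - joint[1])
    v2 = (heel[0] - joint[0], heel[1] - joint[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    cos_a = dot / (math.hypot(*v1) * math.hypot(*v2))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_a))))

def hallux_valgus_probability(angle_deg, scale=0.02):
    """Hypothetical linear map from the deviation of the toe-joint-heel
    axis from a straight line (180 degrees) to a clipped probability."""
    return max(0.0, min(1.0, (180.0 - angle_deg) * scale))

straight = surrogate_angle((0, 10), (0, 0), (0, -10))  # collinear landmarks
deviated = surrogate_angle((2, 10), (0, 0), (0, -10))  # toe deviates laterally
print(round(straight), hallux_valgus_probability(straight))  # 180 0.0
print(hallux_valgus_probability(deviated) > 0.1)             # True
```

A pipeline version of the prediction module 716 would first locate the toe, joint, and heel extrema in the point cloud, then feed them through these two functions.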

[0063] FIG. 10b illustrates an example of diagnosing a flat feet condition. Referring to FIG. 10b, the point of interest in the point cloud may be the highest point 1050 of a longitudinal arch of the foot 1054, which can be found in the point cloud as the highest point 1050 of the bottom surface 1056 of the foot 1054. The bottom surface 1056 can be calculated by splitting the point cloud into disjoint sets of points defined by a 2D grid of a floor plate. For each set of points, the point with a minimal height coordinate is computed. This set of minimal points is defined as the bottom surface 1056. This highest point 1050 will not have any neighboring points that are located lower (relative to the foot) and will be closer to the camera's image plane in a medial view than any neighboring points. When the highest point 1050 is determined, the diagnosis can be performed by using an angle 1058 in the dorsal view projection between the line 1052 connecting the arch point 1050 and the joint point and the line 1060 connecting the arch point 1050 and the heel location in the same manner as in the diagnosis of hallux valgus.
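The grid-based bottom-surface split and arch-point search can be sketched as follows; the synthetic sole below, with an arch peaking mid-foot, is an illustrative stand-in for a real point cloud.

```python
def bottom_surface(points, cell=1.0):
    """Split the cloud into disjoint sets via a 2D grid on the floor plane
    and keep the point with minimal height (z) in each cell -- this set of
    minimal points is the bottom surface 1056."""
    lowest = {}
    for x, y, z in points:
        key = (int(x // cell), int(y // cell))
        if key not in lowest or z < lowest[key][2]:
            lowest[key] = (x, y, z)
    return list(lowest.values())

def arch_point(points, cell=1.0):
    """The arch point 1050 is the highest point of the bottom surface."""
    return max(bottom_surface(points, cell), key=lambda p: p[2])

# Synthetic sole: bottom height peaks mid-foot (the arch) at x = 5, with an
# upper foot surface at z = 10 that the bottom-surface split must ignore.
cloud = [(float(x), 0.0, 3.0 - 0.5 * abs(x - 5)) for x in range(10)]
cloud += [(float(x), 0.0, 10.0) for x in range(10)]
print(arch_point(cloud))  # (5.0, 0.0, 3.0)
```

The grid cell size trades off resolution against noise robustness: a coarser grid smooths out spurious low points, a finer grid localizes the arch more precisely.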

[0064] Another method for predicting a probability of flat feet, without training an algorithm, is by utilizing an RGBD camera for a medial view. In the feature processing module 614, each pixel can be transformed into a point in a point cloud, where these points are filtered by the measured distance to the camera (e.g., one of the one or more sensors 514 is a distance sensor), so only the points representing the foot and the floor will remain. The floor can be found and then filtered out by a random sample consensus (RANSAC) algorithm. At this stage, the foot point cloud for the medial view is determined and can be utilized to estimate the same points for the 3D model as in the previous example, with the same angle calculated on those points.
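The RANSAC floor removal can be sketched with a basic plane fit over random point triples; the synthetic scene (a flat floor plus a few foot points) and all thresholds are illustrative assumptions.

```python
import random

def ransac_floor_filter(points, iters=200, tol=0.05, seed=0):
    """Fit the dominant plane (assumed to be the floor) with a basic
    RANSAC plane fit and return the points NOT on it (the foot)."""
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(iters):
        p1, p2, p3 = rng.sample(points, 3)
        # Plane normal from two edge vectors (cross product).
        u = [p2[i] - p1[i] for i in range(3)]
        v = [p3[i] - p1[i] for i in range(3)]
        n = [u[1] * v[2] - u[2] * v[1],
             u[2] * v[0] - u[0] * v[2],
             u[0] * v[1] - u[1] * v[0]]
        norm = sum(c * c for c in n) ** 0.5
        if norm < 1e-9:          # degenerate (collinear) sample, skip
            continue
        n = [c / norm for c in n]
        d = sum(n[i] * p1[i] for i in range(3))
        inliers = [p for p in points
                   if abs(sum(n[i] * p[i] for i in range(3)) - d) < tol]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    floor = set(best_inliers)
    return [p for p in points if p not in floor]

# Synthetic scene: a flat floor at z = 0 plus three foot points above it.
floor_pts = [(x * 0.1, y * 0.1, 0.0) for x in range(10) for y in range(10)]
foot_pts = [(0.5, 0.5, 0.6), (0.5, 0.6, 0.8), (0.6, 0.5, 0.7)]
remaining = ransac_floor_filter(floor_pts + foot_pts)
print(len(remaining))  # 3: only the foot points survive
```

The inlier tolerance would be chosen from the depth sensor's noise level; a production pipeline would also refit the plane to all inliers before the final split.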

[0065] Another example is through the use of only an RGB camera. This method requires capturing an image using a light source above the foot, so the arch area is covered by shadow. The foot area can be segmented using various computer vision techniques, including pretrained neural networks or simple color segmentation. The arch point can be found as the upper point on the edge line between the lighted foot and the shadow. This edge line can be determined by classic computer vision algorithms such as a Canny algorithm, a Sobel operator, a Laplacian of Gaussian (LoG) algorithm, or any other suitable algorithm. As with the hallux-valgus prediction, either the feature processing module 614 or the prediction module 716 can include the points-of-interest position estimation.
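A minimal column-scan sketch of the shadow-boundary approach follows; the per-column intensity scan is a deliberate simplification standing in for a full edge detector (Canny, Sobel, LoG), and the tiny synthetic image is an illustrative assumption.

```python
def arch_from_shadow(image, thresh=128):
    """Find the arch point as the upper-most pixel on the boundary between
    the lighted foot and the shadow. image: rows of grayscale values with
    row 0 at the top. A per-column light/shadow scan stands in for a full
    edge detector such as Canny, Sobel, or LoG."""
    edge = []
    for col in range(len(image[0])):
        for row in range(len(image) - 1):
            a, b = image[row][col], image[row + 1][col]
            if (a >= thresh) != (b >= thresh):  # light/shadow transition
                edge.append((row, col))
                break                           # first transition per column
    return min(edge, key=lambda rc: rc[0])      # upper-most boundary pixel

# Synthetic 5x5 image: the shadow boundary rises toward the middle column,
# as it would under the arch (255 = lit, 0 = shadow).
boundaries = [3, 2, 1, 2, 3]
image = [[255 if r < boundaries[c] else 0 for c in range(5)]
         for r in range(5)]
print(arch_from_shadow(image))  # (0, 2)
```

In a real image, a gradient-based detector would replace the hard threshold, but the arch still falls out as the highest point on the recovered edge line.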

[0066] Referring now to FIG. 8, a method 800 for detecting medical conditions is illustrated, according to some embodiments. The method 800 may be executed by any of the systems 100, 200, 300, or 400. The method 800 includes steps 802-806. In step 802, a mobile computing device, such as the mobile computing device 110, collects data from a user. The step 802 may be executed according to any of the manners described above with respect to collecting data from a user using the mobile computing device 110.

[0067] In step 804, the collected data from step 802 is preprocessed to extract a feature. A feature may be a neural network embedding, a spectrogram, a list of key points, or a point cloud. A feature may also be a region of interest in an image or a measurement, for example, the length of a point cloud. The feature may be extracted according to any of the manners described above with respect to preprocessing the collected data. The step 804 may be executed by the mobile computing device 110 or by a remote computing device, such as the external computing device 210.

[0068] In step 806, the extracted feature from step 804 is used to predict a probability of a medical condition. The probability may be predicted according to any of the manners described above with respect to predicting the probability. The step 806 may be executed by the mobile computing device 110 or by the remote computing device such as the external computing device 210.

[0069] Referring to FIGs. 9a, 9b, and 9c, various embodiments for detecting medical conditions (such as the method 800) are disclosed. In FIG. 9a, a method 902 is illustrated. In step 904, a mobile computing device, such as the mobile computing device 110, collects data from a user. In step 906, the mobile computing device sends the collected data to a remote computing device, such as the external computing device 210. In step 908, the remote computing device preprocesses the collected data to extract a feature. In step 910, the remote computing device sends the extracted feature to the mobile computing device. In step 912, the mobile computing device predicts a probability of a medical condition of the user based on the extracted feature. In step 914, the mobile computing device then outputs the probability of the medical condition to the user.

[0070] In FIG. 9b, a method 920 is illustrated. A mobile computing device, such as the mobile computing device 110, collects data from a user (step 922). The mobile computing device preprocesses the collected data to extract a feature (step 924), then sends the extracted feature to a remote computing device (step 926), such as the external computing device 210. The remote computing device predicts a probability of a medical condition of the user based on the extracted feature (step 928). In step 930, the external computing device sends the probability of the medical condition to the mobile computing device. In step 932, the mobile computing device then outputs the probability of the medical condition.

[0071] In FIG. 9c, a method 940 is illustrated. In step 942, a mobile computing device, such as the mobile computing device 110, collects data from a user. In step 944, the mobile computing device sends the collected data to a remote computing device, such as the external computing device 210. In step 946, the remote computing device preprocesses the collected data to extract a feature. In step 948, the remote computing device predicts a probability of a medical condition of the user based on the extracted feature. The remote computing device then sends the probability of the medical condition to the mobile computing device (step 950), which outputs the probability of the medical condition (step 952).

[0072] Although the disclosure is illustrated and described herein with reference to specific embodiments, the disclosure is not intended to be limited to the details shown. Rather, various modifications may be made in the details within the scope and range of equivalents of the claims and without departing from the disclosure.