Title:
ULTRASOUND IMAGE-BASED PATIENT-SPECIFIC REGION OF INTEREST IDENTIFICATION, AND ASSOCIATED DEVICES, SYSTEMS, AND METHODS
Document Type and Number:
WIPO Patent Application WO/2022/069208
Kind Code:
A1
Abstract:
A medical imaging system includes a processor configured for communication with a medical imaging device (e.g., an ultrasound probe). The processor receives a user input related to a region of interest within a first set of medical images (e.g., ultrasound images). The processor trains a neural network on the first set of medical images using the user input, thereby generating a patient-specific neural network. The processor obtains a second set of medical images from the first patient. The processor applies the patient-specific neural network to the second set of medical images to identify the region of interest. The processor provides, based on the application, a graphical representation related to the region of interest in the second set of medical images.

Inventors:
CANFIELD EARL (NL)
TRAHMS ROBERT (NL)
Application Number:
PCT/EP2021/075162
Publication Date:
April 07, 2022
Filing Date:
September 14, 2021
Assignee:
KONINKLIJKE PHILIPS NV (NL)
International Classes:
G06T7/11; G06K9/00; G06T7/194
Domestic Patent References:
WO2019219387A1 (2019-11-21)
Attorney, Agent or Firm:
PHILIPS INTELLECTUAL PROPERTY & STANDARDS (NL)
Claims:
CLAIMS

What is claimed is:

1. An ultrasound imaging system, comprising: a processor configured for communication with an ultrasound probe, the processor configured to: receive a first ultrasound image frame representative of a first patient during a first ultrasound examination of the first patient; receive first neural network parameters, wherein the first neural network parameters are associated only with the first patient; identify, using a neural network implemented with the first neural network parameters, a first region of interest within the first ultrasound image frame; and output, to a display in communication with the processor, the first ultrasound image frame and a first graphical representation of the first region of interest.

2. The system of claim 1, wherein the processor is configured to: receive a second ultrasound image frame representative of a second patient during a second ultrasound examination of the second patient; receive second neural network parameters, wherein the second neural network parameters are associated only with the second patient; identify, using the neural network implemented with the second neural network parameters, a second region of interest within the second ultrasound image frame; and output, to the display in communication with the processor, the second ultrasound image frame and a second graphical representation of the second region of interest.

3. The system of claim 1, wherein the first neural network parameters are associated with only a first anatomy of the first patient, and wherein the first region of interest comprises the first anatomy of the first patient.

4. The system of claim 3, wherein the processor is configured to: receive a second ultrasound image frame representative of the first patient; receive second neural network parameters, wherein the second neural network parameters are associated only with a second anatomy of the first patient; identify, using the neural network implemented with the second neural network parameters, a second region of interest within the second ultrasound image frame, wherein the second region of interest comprises the second anatomy of the first patient; and output, to the display in communication with the processor, the second ultrasound image frame and a second graphical representation of the second region of interest.

5. The system of claim 1, wherein the first neural network parameters are determined based on training during a previous ultrasound examination of the first patient.

6. The system of claim 5, wherein the first neural network parameters are intentionally overfit to the first patient.

7. The system of claim 1, wherein the first neural network parameters are determined based on training in a point-of-care setting.

8. The system of claim 1, wherein the processor is configured to retrieve a first neural network parameter file comprising the first neural network parameters from a memory in communication with the processor, wherein the first neural network parameter file is associated with only the first patient.

9. The system of claim 8, wherein the processor is configured to retrieve the first neural network parameter file when the processor retrieves patient data associated with only the first patient to initiate the first ultrasound examination.

10. The system of claim 1, wherein the processor is configured to modify the first neural network parameters during a training based on the first ultrasound examination.

11. The system of claim 1, wherein the processor is configured to: determine a confidence score representative of the processor identifying the first region of interest; and output the confidence score to the display.

12. The system of claim 1, wherein the graphical representation of the first region of interest comprises a graphical overlay on the first ultrasound image.

13. The system of claim 12, wherein the processor is configured to: identify the first region of interest in a plurality of ultrasound image frames; and output the plurality of ultrasound image frames to the display, wherein the graphical representation of the first region of interest moves to track the first region of interest within the plurality of ultrasound image frames.

14. The system of claim 1, wherein the processor is configured to: identify the first region of interest, using the neural network implemented with the first neural network parameters, during a plurality of ultrasound examinations of the first patient; store, in a memory in communication with the processor, a respective ultrasound image frame comprising the first region of interest for each of the plurality of ultrasound examinations; and output, to the display, a screen display simultaneously displaying each of the respective ultrasound image frames.

15. The system of claim 1, wherein the neural network comprises a convolutional neural network (CNN) and the first neural network parameters comprise CNN parameters.

16. The system of claim 1, wherein the processor comprises a graphics processing unit (GPU).

17. An ultrasound imaging method, comprising:

receiving, at a processor in communication with an ultrasound probe, an ultrasound image frame representative of a patient during an ultrasound examination of the patient; receiving neural network parameters at the processor, wherein the neural network parameters are associated only with the patient; identifying, by the processor, a region of interest within the ultrasound image frame using a neural network implemented with the neural network parameters; and outputting, to a display in communication with the processor, the ultrasound image frame and a graphical representation of the region of interest.

18. A medical imaging system, comprising: a processor configured for communication with a medical imaging device, the processor configured to: receive a user input related to a region of interest within a first set of medical images; train a neural network on the first set of medical images using the user input, thereby generating a patient-specific neural network; obtain a second set of medical images from the first patient; apply the patient-specific neural network to the second set of medical images to identify the region of interest; and provide, based on the application, a graphical representation related to the region of interest in the second set of medical images.

Description:
ULTRASOUND IMAGE-BASED PATIENT-SPECIFIC REGION OF INTEREST IDENTIFICATION, AND ASSOCIATED DEVICES, SYSTEMS, AND METHODS

TECHNICAL FIELD

[0001] The present disclosure relates generally to systems for ultrasound imaging. In particular, patient-specific deep learning networks can be trained and implemented in point-of-care settings to identify regions of interest within a patient’s anatomy and display the regions of interest to a user during ultrasound imaging examinations.

BACKGROUND

[0002] Ultrasound imaging systems are widely used for medical imaging. For example, a medical ultrasound system may include an ultrasound transducer probe coupled to a processing system and one or more display devices. The ultrasound transducer probe may include an array of ultrasound transducer elements that transmit acoustic waves into a patient’s body and record acoustic waves reflected from anatomical structures within the patient’s body, which may include tissues, blood vessels, internal organs, tumors, cysts, or other anatomical features. The transmission of the acoustic waves and the reception of reflected acoustic waves or echo responses can be performed by the same set of ultrasound transducer elements or by different sets of ultrasound transducer elements. The processing system can apply beamforming, signal processing, and/or image processing to the received echo responses to create an image of the patient’s internal anatomical structures. The image may then be presented to a user for analysis.

[0003] Ultrasound imaging is a safe, useful, and, in some applications, non-invasive tool for diagnostic examination, interventions, or treatment. Ultrasound imaging can provide insights into an anatomy before a surgery or other major procedure is performed, as well as monitor and track changes to a particular anatomical feature over time. The rapid growth of point-of-care ultrasound has made ultrasound available in many point-of-care settings, such as during an emergency, in a critical care unit, or in another specialized care facility. However, the use of point-of-care ultrasound can be challenging for users, particularly novice users, in terms of identifying structures, rapidly assessing a patient’s condition, and tracking differences in anatomical structures or features of the same patient across ultrasound examinations.

SUMMARY

[0004] Embodiments of the present disclosure are directed to systems, devices, and methods for deep learning networks used in ultrasound applications. The ultrasound imaging system described herein identifies a region of interest in a patient’s anatomy during a first ultrasound examination, labels the region of interest, and trains a deep learning network to identify the same region of interest during subsequent ultrasound examinations. In subsequent examinations, the same region of interest may be identified, labelled, and displayed to a user in real time in a point-of-care setting. Regions of interest may include anatomical features such as tumors, cysts, blood clots, blockages, or other features. The system may utilize convolutional neural networks to learn and identify regions of interest. The system may intentionally overfit to the identified regions of interest and use anatomical features surrounding a region of interest to identify a patient’s specific anatomy and specific regions of interest within that anatomy. The system may utilize high-speed processors to capture ultrasound image frames at a high frame rate and process ultrasound image frames for display to a user in real time. The system may additionally calculate a number of metrics associated with an identified region of interest, including the volume of the region of interest, blood flow, or any number of other metrics. The system may assist a user in tracking changes to a region of interest over several ultrasound examinations by displaying image frames or ultrasound videos from different examinations simultaneously. The system may generate and store several anatomy-specific deep learning network parameters associated with one patient and may store patient-specific deep learning network parameters corresponding to many patients. The ultrasound imaging system described herein increases a user’s ability to diagnose, monitor, and treat various medical conditions by more easily and reliably identifying, measuring, and comparing important anatomical features in a patient’s anatomy.

[0005] In an exemplary aspect of the present disclosure, an ultrasound imaging system is provided. The ultrasound imaging system includes a processor configured for communication with an ultrasound probe, the processor configured to: receive a first ultrasound image frame representative of a first patient during a first ultrasound examination of the first patient; receive first neural network parameters, wherein the first neural network parameters are associated only with the first patient; identify, using a neural network implemented with the first neural network parameters, a first region of interest within the first ultrasound image frame; and output, to a display in communication with the processor, the first ultrasound image frame and a first graphical representation of the first region of interest.

[0006] In some aspects, the processor is configured to: receive a second ultrasound image frame representative of a second patient during a second ultrasound examination of the second patient; receive second neural network parameters, wherein the second neural network parameters are associated only with the second patient; identify, using the neural network implemented with the second neural network parameters, a second region of interest within the second ultrasound image frame; and output, to the display in communication with the processor, the second ultrasound image frame and a second graphical representation of the second region of interest. In some aspects, the first neural network parameters are associated with only a first anatomy of the first patient, and the first region of interest comprises the first anatomy of the first patient. In some aspects, the processor is configured to: receive a second ultrasound image frame representative of the first patient; receive second neural network parameters, wherein the second neural network parameters are associated only with a second anatomy of the first patient; identify, using the neural network implemented with the second neural network parameters, a second region of interest within the second ultrasound image frame, wherein the second region of interest comprises the second anatomy of the first patient; and output, to the display in communication with the processor, the second ultrasound image frame and a second graphical representation of the second region of interest. In some aspects, the first neural network parameters are determined based on training during a previous ultrasound examination of the first patient. In some aspects, the first neural network parameters are intentionally overfit to the first patient. In some aspects, the first neural network parameters are determined based on training in a point-of-care setting. In some aspects, the processor is configured to retrieve a first neural network parameter file comprising the first neural network parameters from a memory in communication with the processor, and the first neural network parameter file is associated with only the first patient. In some aspects, the processor is configured to retrieve the first neural network parameter file when the processor retrieves patient data associated with only the first patient to initiate the first ultrasound examination. In some aspects, the processor is configured to modify the first neural parameters during a training based on the first ultrasound examination. In some aspects, the processor is configured to: determine a confidence score representative of the processor identifying the first region of interest; and output the confidence score to the display. In some aspects, the graphical representation of the first region of interest comprises a graphical overlay on the first ultrasound image. In some aspects, the processor is configured to: identify the first region of interest in a plurality of ultrasound image frames; and output the plurality of ultrasound image frames to the display, wherein the graphical representation of the first region of interest moves to track the first region of interest within the plurality of ultrasound image frames. 
In some aspects, the processor is configured to: identify the first region of interest, using the neural network implemented with the first neural network parameters, during a plurality of ultrasound examinations of the first patient; store, in a memory in communication with the processor, a respective ultrasound image frame comprising the first region of interest for each of the plurality of ultrasound examinations; and output, to the display, a screen display simultaneously displaying each of the respective ultrasound image frames. In some aspects, the neural network comprises a convolutional neural network (CNN) and the first neural network parameters comprise CNN parameters. In some aspects, the processor comprises a graphics processing unit (GPU).

[0007] In an exemplary aspect of the present disclosure, an ultrasound imaging method is provided. The ultrasound imaging method includes receiving, at a processor in communication with an ultrasound probe, an ultrasound image frame representative of a patient during an ultrasound examination of the patient; receiving neural network parameters at the processor, wherein the neural network parameters are associated only with the patient; identifying, by the processor, a region of interest within the ultrasound image frame using a neural network implemented with the neural network parameters; and outputting, to a display in communication with the processor, the ultrasound image frame and a graphical representation of the region of interest.

[0008] In an exemplary aspect of the present disclosure, a medical imaging system is provided. The medical imaging system includes: a processor configured for communication with a medical imaging device, the processor configured to: receive a user input related to a region of interest within a first set of medical images; train a neural network on the first set of medical images using the user input, thereby generating a patient-specific neural network; obtain a second set of medical images from the first patient; apply the patient-specific neural network to the second set of medical images to identify the region of interest in the second set of medical images; and provide, based on the application, a graphical representation related to the region of interest in the second set of medical images.

[0009] Additional aspects, features, and advantages of the present disclosure will become apparent from the following detailed description.

BRIEF DESCRIPTION OF THE DRAWINGS

[0010] Illustrative embodiments of the present disclosure will be described with reference to the accompanying drawings, of which:

[0011] Fig. 1 is a schematic diagram of an ultrasound imaging system, according to aspects of the present disclosure.

[0012] Fig. 2 is a schematic diagram of a plurality of patient-specific ultrasound image frames, video clips, and deep learning networks stored in a memory, according to aspects of the present disclosure.

[0013] Fig. 3 is a flow diagram of an ultrasound imaging method of a patient’s ultrasound imaging examination, according to aspects of the present disclosure.

[0014] Fig. 4 is a diagrammatic view of a graphical user interface for an ultrasound imaging system identifying a region of interest, according to aspects of the present disclosure.

[0015] Fig. 5 is a flow diagram of a method of training a patient-specific deep learning network to identify a predetermined region of interest, according to aspects of the present disclosure.

[0016] Fig. 6 is a schematic diagram of a method of training a patient-specific deep learning network to identify a predetermined region of interest, according to aspects of the present disclosure.

[0017] Fig. 7 is a flow diagram of a method of identifying a region of interest with a previously trained patient-specific deep learning network, according to aspects of the present disclosure.

[0018] Fig. 8 is a schematic diagram of a method of identifying and displaying to a user a region of interest with a previously trained patient-specific deep learning network, according to aspects of the present disclosure.

[0019] Fig. 9 is a diagrammatic view of a graphical user interface for an ultrasound imaging system identifying a region of interest, according to aspects of the present disclosure.

[0020] Fig. 10 is a diagrammatic view of a graphical user interface for an ultrasound imaging system displaying to a user a plurality of video clips of a region of interest, according to aspects of the present disclosure.

[0021] Fig. 11 is a schematic diagram of a processor circuit, according to aspects of the present disclosure.

DETAILED DESCRIPTION

[0022] For the purposes of promoting an understanding of the principles of the present disclosure, reference will now be made to the embodiments illustrated in the drawings, and specific language will be used to describe the same. It is nevertheless understood that no limitation to the scope of the disclosure is intended. Any alterations and further modifications to the described devices, systems, and methods, and any further application of the principles of the present disclosure are fully contemplated and included within the present disclosure as would normally occur to one skilled in the art to which the disclosure relates. For example, while the focusing system is described in terms of cardiovascular imaging, it is understood that it is not intended to be limited to this application. The system is equally well suited to any application requiring imaging within a confined cavity. In particular, it is fully contemplated that the features, components, and/or steps described with respect to one embodiment may be combined with the features, components, and/or steps described with respect to other embodiments of the present disclosure. For the sake of brevity, however, the numerous iterations of these combinations will not be described separately.

[0023] The patient-specific and/or anatomy-specific deep learning network described herein can be advantageously utilized for a given patient and/or a particular anatomy for a given patient. Typically, a deep learning network that is overfit to a specific patient or the patient’s specific anatomy is considered a poor deep learning network because it has no predictive use outside of that patient. Thus, overfitting is typically avoided, and deep learning networks are trained and implemented with many different patients. Training of deep learning networks is also typically done outside of the point-of-care environment (e.g., during hardware and/or software development by a manufacturer). However, the present disclosure intentionally and advantageously overfits the deep learning network to the given patient and/or a particular anatomy for a given patient in a point-of-care environment. In this way, the deep learning network may only recognize and identify images for which it is trained. This allows a clinician to quickly generate a deep learning network that is unique to the patient and/or part of the patient’s anatomy. This deep learning network can be saved at a relatively small file size (e.g., 30 MB), saved as part of the patient’s file, and used in a longitudinal evaluation of that patient (e.g., multiple different ultrasound examinations over time).

[0024] Fig. 1 is a schematic diagram of an ultrasound imaging system 100, according to aspects of the present disclosure. The system 100 is used for scanning an area or volume of a patient’s body. The system 100 includes an ultrasound imaging probe 110 in communication with a host 130 over a communication interface or link 120. The probe 110 may include a transducer array 112, a beamformer 114, a processor circuit 116, and a communication interface 118. The host 130 may include a display 132, a processor circuit 134, a communication interface 136, and a memory 138 storing patient files 140.
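The intentional overfitting described in paragraph [0023] can be pictured with a short sketch. The following is a minimal, illustrative example only and is not part of the disclosure; it assumes PyTorch, a handful of labeled frames from a single patient, a small segmentation-style CNN, and a hypothetical file name. It trains with no validation split or early stopping, since overfitting to this one patient is the intent, and then saves only the learned weights as a compact per-patient parameter file.

```python
# Illustrative sketch only: intentionally overfit a small CNN to one patient's
# labeled frames and save the weights as a compact per-patient parameter file.
# The network, tensor shapes, and file name are hypothetical assumptions.
import torch
import torch.nn as nn

def build_roi_network() -> nn.Module:
    # Tiny fixed architecture; a real system would likely use a deeper CNN.
    return nn.Sequential(
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 1, 1),  # per-pixel region-of-interest logit
    )

# A few frames and ROI masks from a single patient (random stand-ins here).
frames = torch.rand(8, 1, 128, 128)                 # 8 grayscale ultrasound frames
roi_masks = (torch.rand(8, 1, 128, 128) > 0.9).float()

net = build_roi_network()
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

# No validation set and no early stopping: overfitting to this patient is the goal.
for epoch in range(200):
    optimizer.zero_grad()
    loss = loss_fn(net(frames), roi_masks)
    loss.backward()
    optimizer.step()

# Save only the parameters/weights; the file stays small (on the order of megabytes).
torch.save(net.state_dict(), "patient_1234_liver_roi.pt")
```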

[0025] In some embodiments, the probe 110 is an external ultrasound imaging device including a housing configured for handheld operation by a user. The transducer array 112 can be configured to obtain ultrasound data while the user grasps the housing of the probe 110 such that the transducer array 112 is positioned adjacent to or in contact with a patient’s skin. The probe 110 is configured to obtain ultrasound data of anatomy within the patient’s body while the probe 110 is positioned outside of the patient’s body. In some embodiments, the probe 110 can be an external ultrasound probe and/or a transthoracic echocardiography (TTE) probe.

[0026] In other embodiments, the probe 110 can be an internal ultrasound imaging device and may comprise a housing configured to be positioned within a lumen of a patient’s body, including the patient’s coronary vasculature, peripheral vasculature, esophagus, heart chamber, or other body lumen or body cavity. In some embodiments, the probe 110 may be an intravascular ultrasound (IVUS) imaging catheter or an intracardiac echocardiography (ICE) catheter. In other embodiments, probe 110 may be a transesophageal echocardiography (TEE) probe. Probe 110 may be of any suitable form for any suitable ultrasound imaging application including both external and internal ultrasound imaging.

[0027] In some embodiments, aspects of the present disclosure can be implemented with medical images of patients obtained using any suitable medical imaging device and/or modality. Examples of medical images and medical imaging devices include x-ray images (angiographic image, fluoroscopic images, images with or without contrast) obtained by an x-ray imaging device, computed tomography (CT) images obtained by a CT imaging device, positron emission tomography-computed tomography (PET-CT) images obtained by a PET-CT imaging device, magnetic resonance images (MRI) obtained by an MRI device, single-photon emission computed tomography (SPECT) images obtained by a SPECT imaging device, optical coherence tomography (OCT) images obtained by an OCT imaging device, and intravascular photoacoustic (IVPA) images obtained by an IVPA imaging device. The medical imaging device can obtain the medical images while positioned outside the patient body, spaced from the patient body, adjacent to the patient body, in contact with the patient body, and/or inside the patient body.

[0028] For an ultrasound imaging device, the transducer array 112 emits ultrasound signals towards an anatomical object 105 of a patient and receives echo signals reflected from the object 105 back to the transducer array 112. The ultrasound transducer array 112 can include any suitable number of acoustic elements, including one or more acoustic elements and/or a plurality of acoustic elements. In some instances, the transducer array 112 includes a single acoustic element. In some instances, the transducer array 112 may include an array of acoustic elements with any number of acoustic elements in any suitable configuration. For example, the transducer array 112 can include between 1 acoustic element and 10000 acoustic elements, including values such as 2 acoustic elements, 4 acoustic elements, 36 acoustic elements, 64 acoustic elements, 128 acoustic elements, 500 acoustic elements, 812 acoustic elements, 1000 acoustic elements, 3000 acoustic elements, 8000 acoustic elements, and/or other values both larger and smaller. In some instances, the transducer array 112 may include an array of acoustic elements with any number of acoustic elements in any suitable configuration, such as a linear array, a planar array, a curved array, a curvilinear array, a circumferential array, an annular array, a phased array, a matrix array, a one-dimensional (1D) array, a 1.x dimensional array (e.g., a 1.5D array), or a two-dimensional (2D) array. The array of acoustic elements (e.g., one or more rows, one or more columns, and/or one or more orientations) can be uniformly or independently controlled and activated. The transducer array 112 can be configured to obtain one-dimensional, two-dimensional, and/or three-dimensional images of a patient’s anatomy. In some embodiments, the transducer array 112 may include a piezoelectric micromachined ultrasound transducer (PMUT), capacitive micromachined ultrasonic transducer (CMUT), single crystal, lead zirconate titanate (PZT), PZT composite, other suitable transducer types, and/or combinations thereof.

[0029] The object 105 may include any anatomy or anatomical feature, such as blood vessels, nerve fibers, airways, mitral leaflets, cardiac structure, abdominal tissue structure, appendix, large intestine (or colon), small intestine, kidney, liver, and/or any other anatomy of a patient. In some aspects, the object 105 may include at least a portion of a patient’s large intestine, small intestine, cecum pouch, appendix, terminal ileum, liver, epigastrium, and/or psoas muscle. The present disclosure can be implemented in the context of any number of anatomical locations and tissue types, including without limitation, organs including the liver, heart, kidneys, gall bladder, pancreas, lungs; ducts; intestines; nervous system structures including the brain, dural sac, spinal cord and peripheral nerves; the urinary tract; as well as valves within the blood vessels, blood, chambers or other parts of the heart, abdominal organs, and/or other systems of the body. In some embodiments, the object 105 may include malignancies such as tumors, cysts, lesions, hemorrhages, or blood pools within any part of human anatomy. The anatomy may be a blood vessel, such as an artery or a vein of a patient’s vascular system, including cardiac vasculature, peripheral vasculature, neural vasculature, renal vasculature, and/or any other suitable lumen inside the body. In addition to natural structures, the present disclosure can be implemented in the context of man-made structures such as, but without limitation, heart valves, stents, shunts, filters, implants and other devices.

[0030] The beamformer 114 is coupled to the transducer array 112. The beamformer 114 controls the transducer array 112, for example, for transmission of the ultrasound signals and reception of the ultrasound echo signals. In some embodiments, the beamformer 114 may apply a time delay to signals sent to individual acoustic transducer elements within the transducer array 112 such that an acoustic signal is steered in any suitable direction propagating away from the probe 110. The beamformer 114 may further provide image signals to the processor circuit 116 based on the response of the received ultrasound echo signals. The beamformer 114 may include multiple stages of beamforming. The beamforming can reduce the number of signal lines for coupling to the processor circuit 116. In some embodiments, the transducer array 112 in combination with the beamformer 114 may be referred to as an ultrasound imaging component.

[0031] The processor 116 is coupled to the beamformer 114. The processor 116 may also be described as a processor circuit, which can include other components in communication with the processor 116, such as a memory, beamformer 114, communication interface 118, and/or other suitable components. The processor 116 may include a central processing unit (CPU), a graphics processing unit (GPU), a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a controller, a field programmable gate array (FPGA) device, another hardware device, a firmware device, or any combination thereof configured to perform the operations described herein. The processor 116 may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. The processor 116 is configured to process the beamformed image signals. For example, the processor 116 may perform filtering and/or quadrature demodulation to condition the image signals. The processor 116 and/or 134 can be configured to control the array 112 to obtain ultrasound data associated with the object 105.
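As one concrete illustration of the time-delay steering performed by the beamformer 114 in paragraph [0030], the per-element transmit delays for a linear array can be computed from the element pitch, the desired steering angle, and the speed of sound. The snippet below is a simplified sketch under assumed example values (element count, pitch, and sound speed are not taken from the disclosure).

```python
# Simplified sketch of per-element steering delays for a linear transducer array.
# All numerical values are illustrative assumptions.
import numpy as np

num_elements = 64          # acoustic elements in the array
pitch_m = 0.3e-3           # element spacing (meters)
steer_angle_deg = 15.0     # desired steering angle from the array normal
c_m_s = 1540.0             # assumed speed of sound in soft tissue (m/s)

# Element positions centered on the array, then the classic x*sin(theta)/c delay law.
element_positions = (np.arange(num_elements) - (num_elements - 1) / 2) * pitch_m
delays_s = element_positions * np.sin(np.radians(steer_angle_deg)) / c_m_s
delays_s -= delays_s.min()   # shift so the earliest-firing element has zero delay

print(delays_s[:4])          # per-element firing delays in seconds
```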

[0032] The communication interface 118 is coupled to the processor 116. The communication interface 118 may include one or more transmitters, one or more receivers, one or more transceivers, and/or circuitry for transmitting and/or receiving communication signals. The communication interface 118 can include hardware components and/or software components implementing a particular communication protocol suitable for transporting signals over the communication link 120 to the host 130. The communication interface 118 can be referred to as a communication device or a communication interface module.

[0033] The communication link 120 may be any suitable communication link. For example, the communication link 120 may be a wired link, such as a universal serial bus (USB) link or an Ethernet link. Alternatively, the communication link 120 may be a wireless link, such as an ultra-wideband (UWB) link, an Institute of Electrical and Electronics Engineers (IEEE) 802.11 WiFi link, or a Bluetooth link.

[0034] At the host 130, the communication interface 136 may receive the image signals. The communication interface 136 may be substantially similar to the communication interface 118. The host 130 may be any suitable computing and display device, such as a workstation, a personal computer (PC), a laptop, a tablet, or a mobile phone.

[0035] The processor 134 is coupled to the communication interface 136. The processor 134 may also be described as a processor circuit, which can include other components in communication with the processor 134, such as the memory 138, the communication interface 136, and/or other suitable components. The processor 134 may be implemented as a combination of software components and hardware components. The processor 134 may include a central processing unit (CPU), a graphics processing unit (GPU), a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a controller, an FPGA device, another hardware device, a firmware device, or any combination thereof configured to perform the operations described herein. The processor 134 may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. The processor 134 can be configured to generate image data from the image signals received from the probe 110. The processor 134 can apply advanced signal processing and/or image processing techniques to the image signals. In some embodiments, the processor 134 can form a three-dimensional (3D) volume image from the image data. In some embodiments, the processor 134 can perform real-time processing on the image data to provide a streaming video of ultrasound images of the object 105.

[0036] The memory 138 is coupled to the processor 134. The memory 138 may be any suitable storage device, such as a cache memory (e.g., a cache memory of the processor 134), random access memory (RAM), magnetoresistive RAM (MRAM), read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read only memory (EPROM), electrically erasable programmable read only memory (EEPROM), flash memory, solid state memory device, hard disk drives, solid state drives, other forms of volatile and nonvolatile memory, or a combination of different types of memory.

[0037] The memory 138 can be configured to store the patient files 140, which relate to a patient’s medical history, history of procedures performed, anatomical or biological features, characteristics, or medical conditions associated with the patient, as well as computer readable instructions (such as code, software, or other applications) and any other suitable information or data. The patient files 140 may include other forms of medical history, such as but not limited to ultrasound images, ultrasound videos, and/or any imaging information relating to the patient’s anatomy. The memory 138 can also be configured to store patient files 140 relating to the training and implementation of patient-specific deep learning networks (e.g., neural networks). Mechanisms for training and implementing the patient-specific deep learning networks (e.g., patient-specific neural networks) are described in greater detail herein.
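One way to picture the organization of a patient file described above is as a simple data structure that groups image frames and video clips by examination alongside one or more per-anatomy network parameter files. The sketch below is purely illustrative; the field names and file paths are hypothetical and not defined by the disclosure.

```python
# Illustrative data layout for a patient file: frames and clips grouped by exam,
# plus per-anatomy deep learning parameter files. Field names are hypothetical.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class PatientFile:
    patient_id: str
    # exam identifier (e.g., "Exam 1") -> list of stored image frame paths
    image_frames: Dict[str, List[str]] = field(default_factory=dict)
    # exam identifier -> list of stored video clip paths
    video_clips: Dict[str, List[str]] = field(default_factory=dict)
    # anatomical region (e.g., "liver") -> deep learning parameter file path
    network_files: Dict[str, str] = field(default_factory=dict)

record = PatientFile(patient_id="1234")
record.image_frames["Exam 1"] = ["exam1/frame_0001.dcm", "exam1/frame_0002.dcm"]
record.video_clips["Exam 1"] = ["exam1/clip_liver.mp4"]
record.network_files["liver"] = "patient_1234_liver_roi.pt"
```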

[0038] The display 132 is coupled to the processor circuit 134. The display 132 may be a monitor or any suitable display. The display 132 is configured to display the ultrasound images, image videos, and/or any imaging information of the object 105.

[0039] The system 100 may be used to assist a sonographer in performing an ultrasound scan at a point-of-care setting. For instance, the host 130 may be a mobile device, such as a tablet, a mobile phone, or portable computer. During an imaging procedure, the ultrasound system can implement a patient-specific deep learning network to automatically label or flag the region of interest and place a bounding box around the region of interest. In some embodiments, upon locating a region of interest within the anatomy of a patient, the sonographer may direct the ultrasound imaging system 100 to label or flag the region of interest and place a bounding box around the region of interest to assist the sonographer in locating, comparing, and displaying the region of interest in the same or subsequent ultrasound imaging examinations in a point-of-care setting.
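To make the automatic labeling concrete, the sketch below runs a (here untrained) patient-specific network on a single frame, thresholds its per-pixel output into a region-of-interest mask, and draws a bounding box around that region. This is an illustrative pipeline only; the network, the 0.5 threshold, and the drawing choices are assumptions rather than details from the disclosure.

```python
# Illustrative inference pipeline: predict an ROI mask for one frame and draw a
# bounding-box overlay. The network and threshold are placeholder assumptions.
import cv2
import numpy as np
import torch
import torch.nn as nn

net = nn.Sequential(                      # same small fixed architecture as above
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 1),
)
net.eval()

frame = (np.random.rand(128, 128) * 255).astype(np.uint8)   # stand-in B-mode frame

with torch.no_grad():
    logits = net(torch.from_numpy(frame).float()[None, None] / 255.0)
    mask = (torch.sigmoid(logits)[0, 0] > 0.5).numpy()

ys, xs = np.nonzero(mask)
overlay = cv2.cvtColor(frame, cv2.COLOR_GRAY2BGR)
if xs.size:  # draw a bounding box around the predicted region of interest
    cv2.rectangle(overlay, (int(xs.min()), int(ys.min())),
                  (int(xs.max()), int(ys.max())), (0, 255, 0), 2)
# 'overlay' would be sent to the display together with a label for the ROI.
```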

[0040] In some aspects, the processor 134 may train one or more new deep learning-based prediction networks to identify the region of interest selected by the sonographer within the anatomy of the patient based on input ultrasound images. The training of the one or more new deep learning-based networks may include receiving a video clip comprising ultrasound image frames acquired by the probe 110, and using them to train a patient-specific deep learning network to identify a region of interest. The training may further comprise using ultrasound imaging frames acquired by the probe 110 to test the deep learning network to ensure that it correctly identifies the region of interest labeled by the sonographer or other user. In some embodiments, the same ultrasound system 100 is used for training and implementation of the patient-specific deep learning networks. In other embodiments, different ultrasound systems are used for training and implementation of the patient-specific deep learning networks. For example, a patient-specific deep learning network configuration file (e.g., storing network parameters and/or weights) may be generated by one ultrasound imaging system, transferred to a different, second ultrasound imaging system (e.g., via a local network or the internet, or using physical storage media), and implemented on the second ultrasound imaging system. The host 130 of the ultrasound imaging system 100 may further include a deep learning network API and GPU hardware.
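Paragraph [0040] describes testing the newly trained network against frames labeled by the sonographer and then moving the resulting configuration file to another system. A minimal sketch of that verification-and-export step is shown below; the intersection-over-union check, the acceptance threshold, and the file name are illustrative assumptions only.

```python
# Illustrative verification-and-export step: check a trained patient-specific
# network against labeled test frames, then export its parameter file so a
# second system can load it. The IoU metric, threshold, and path are assumptions.
import torch
import torch.nn as nn

def build_roi_network() -> nn.Module:
    return nn.Sequential(
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 1, 1),
    )

def iou(pred_mask: torch.Tensor, true_mask: torch.Tensor) -> float:
    inter = (pred_mask & true_mask).sum().item()
    union = (pred_mask | true_mask).sum().item()
    return inter / union if union else 1.0

net = build_roi_network()                          # in practice: the freshly trained network
test_frames = torch.rand(4, 1, 128, 128)           # held-out frames from the exam
test_masks = torch.rand(4, 1, 128, 128) > 0.9      # sonographer-labeled ROI masks

with torch.no_grad():
    pred = torch.sigmoid(net(test_frames)) > 0.5

score = iou(pred, test_masks)
if score > 0.7:                                    # assumed acceptance threshold
    # Export only the parameters; another ultrasound system providing the same
    # architecture can load this file to reproduce the patient-specific network.
    torch.save(net.state_dict(), "patient_1234_liver_roi.pt")
else:
    print(f"Network did not reproduce the labeled ROI well enough (IoU={score:.2f})")
```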

[0041] In some aspects, the ultrasound imaging system 100 may further perform various calculations relating to a region of interest during an imaging procedure with a patient. These calculations may then be displayed to the sonographer or other user via the display 132.

[0042] Fig. 2 is a schematic diagram of a plurality of patient-specific ultrasound image frames 210, video clips 220, and deep learning networks 230 stored in the memory 138, according to aspects of the present disclosure. As shown in Fig. 2, at a first examination, a sonographer may conduct an ultrasound imaging examination of the anatomy of a patient. During the examination, multiple ultrasound image frames 210 may be captured by the probe 110. In some embodiments, every image frame 210 captured during an ultrasound imaging examination may be stored in the memory 138 within a particular patient’s file 205. In other embodiments, only a portion of captured image frames 210 may be stored in the memory 138 in the patient’s file 205 (e.g., those image frames 210 that are selected by the user for storage). The ultrasound image frames 210 may be of any suitable image format or extension, including but not limited to IMG files, high dynamic range (HDR) files, NII files associated with the NIfTI-1 data format, MNC files, DCM files, digital imaging and communications in medicine (DICOM) files, or other image file formats or extensions. In addition, the ultrasound image frames 210 may include either vector image files or raster image files. For example, the ultrasound image frames 210 may be in the form of joint photographic experts group (JPEG) files, portable network graphics (PNG) files, tagged image file format (TIFF) files, portable document format (PDF) files, encapsulated postscript (EPS) files, raw image formats (RAW), or other file types. Further, the ultrasound image frames 210 may be captured and stored at any suitable bit depth, depending on the particular application of ultrasound imaging, characteristics of the region of interest, storage space within the memory 138, the number of frames in a given set of the ultrasound image frames 210, or other constraints. The ultrasound image frames 210 may be stored or captured at a bit depth of 8 bits, 16 bits, 24 bits, 32 bits, 64 bits, 128 bits, 256 bits, or more, or at any suitable bit depth therebetween.

[0043] In some embodiments, the user of the ultrasound imaging system 100 may, in a point- of-care setting during an examination or at a later time, identify and label individual image frames 210 which may be of particular interest. The user may annotate, enlarge, crop, compress, or otherwise modify an individual ultrasound image frame 210 in any suitable manner.

[0044] In some embodiments, the ultrasound image frames 210 may be captured at multiple patient examinations at different times. For example, a set of the ultrasound image frames 210 from a first examination may be captured and stored in the memory 138 within the patient’s file 205. At a later date and/or time, a second examination may be conducted and the same region may be examined using the ultrasound imaging system 100. At this second examination, an additional or second set of ultrasound image frames 210 may be captured and also stored in the memory 138 within the patient’s file 205. After this second examination, the patient’s file 205 may then include two sets of ultrasound image frames 210: one from a first examination (“Exam 1”) and one from a second examination (“Exam 2”), as shown in Fig. 2. The anatomy of the patient may then be examined at a third examination (or any subsequent examination) using the ultrasound imaging system 100 and a third set (or any subsequent set) of ultrasound image frames 210 may also be stored. Subsequent examinations may be conducted and corresponding sets of ultrasound image frames 210 may be captured and stored in the memory 138 within the patient’s file 205 and organized according to the date and time the ultrasound image frames 210 were captured corresponding to an examination. In some embodiments, the ultrasound imaging system 100 may store, in the memory 138 in communication with the processor 134, a respective ultrasound image frame or ultrasound video clip depicting the same region of interest for each of several ultrasound examinations.

[0045] Ultrasound video clips 220 may also be stored in the memory 138 within a patient’s specific file 205. In some embodiments, multiple ultrasound image frames from a set of ultrasound image frames 210 captured from any given patient examination may be compiled to form the ultrasound video clip 220. The ultrasound video clips 220 may be sorted or organized based on the date and time the ultrasound image frames 210 used to generate an ultrasound video clip 220 were captured. The ultrasound video clips 220 may be of any suitable file format or extension. For example, the ultrasound video clips 220 may be captured, created, and/or stored in the form of an audio video interleave (AVI) file, flash video (FLV or F4V) file, Windows® media video (WMV) file, QuickTime® movie (MOV) file, motion picture experts group (MPEG) file, MP4 file, interplay multimedia (MVL) file, Volusion® 4D ultrasound scan (.V00) file, or any other suitable video file. In addition, ultrasound video clips 220 may be stored or captured at a bit depth of 8 bits, 16 bits, 24 bits, 32 bits, 64 bits, 128 bits, 256 bits, or more, or any suitable bit depth therebetween. In some embodiments, video clips 220 may additionally or alternatively be video loops. For example, the system may be configured to automatically loop the video when the user plays the video.
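The compilation of stored frames into a clip can be illustrated with a short sketch. The example below uses OpenCV's video writer with an assumed codec, frame rate, and file name (none of which are specified by the disclosure) and random frames as stand-ins for acquired ultrasound images.

```python
# Illustrative sketch: compile stored ultrasound image frames into a short video
# clip. The codec, frame rate, resolution, and file name are assumed examples.
import cv2
import numpy as np

frames = [(np.random.rand(480, 640) * 255).astype(np.uint8) for _ in range(30)]

fourcc = cv2.VideoWriter_fourcc(*"MJPG")
writer = cv2.VideoWriter("exam1_clip_liver.avi", fourcc, 30.0, (640, 480))
for frame in frames:
    writer.write(cv2.cvtColor(frame, cv2.COLOR_GRAY2BGR))  # writer expects BGR frames
writer.release()
```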

[0046] In some embodiments, the user of the ultrasound imaging system 100 may, either in a point-of-care setting during an examination or at a later time, identify and label an ultrasound video clip 220 which may be of particular interest. The user may annotate, enlarge, compress, or otherwise modify an individual ultrasound video clip 220 in any suitable manner.

[0047] Similar to the ultrasound image frames 210 being captured at different times corresponding to different patient examinations of the same patient, the ultrasound video clips 220 may be captured, created, or stored at each patient examination. For example, a set of ultrasound video clips 220 from a first exam may be captured and stored in the memory 138 within the patient’s file 205. At a second examination (or any subsequent examination), a second set (or any subsequent set) of ultrasound video clips 220 may be captured and stored in the memory 138 within the particular patient’s file 205 and so on at any additional examinations. The ultrasound video clips 220 may be stored within the memory 138, and organized according to the date and time of the patient examination in which the ultrasound video clip 220 was captured.

[0048] One or more training ultrasound video clips 222 may be captured by the ultrasound imaging system 100 and used to create and/or train a patient-specific deep learning network 230 corresponding to a region of interest identified by a user. For example, a training ultrasound video clip 222 may comprise a plurality of ultrasound image frames 210 which depict a region of interest selected, labeled, and/or flagged by a user of ultrasound imaging system 100. Additional details regarding the training of patient-specific deep learning networks 230 corresponding to regions of interest will be discussed in more detail hereafter, and particularly with reference to Figs. 4-7.

[0049] The ultrasound video clips 220 may be of any suitable length of time. For example, in some embodiments, the ultrasound imaging system 100 may store a video clip 220 comprising every ultrasound image frame 210 captured during a given patient examination, such that the ultrasound video clip 220 is the same length as the entire patient examination. In other embodiments, the ultrasound imaging system 100 may create and store such a full-length video clip 220 within a patient’s file 205 for each patient examination with or without direction from a user of ultrasound imaging system 100. In other embodiments, a full-length video clip 220 is not stored in the memory 138. For example, an ultrasound video clip 220 may be a fraction of a second. An ultrasound video clip 220 may comprise only two ultrasound image frames or may be of a duration of only 1-10 milliseconds. The ultrasound video clips 220 may be of a duration of just 1 millisecond, 1 second, 10 seconds, 20 seconds, 50 seconds, 1 minute, 10 minutes, an hour, or of any suitable duration therebetween. In some embodiments, the ultrasound imaging system 100 may determine the length of any or all of the ultrasound video clips 220. In other embodiments, a user may dictate the length of any or all of the ultrasound video clips 220. Any temporal constraints may, in some embodiments, be imposed by the storage capacity of the memory 138, among other constraints.

[0050] As previously mentioned, the ultrasound video clips 220 may be captured and stored at any suitable frame rate. In some embodiments in which processor 116 and/or processor 134 includes GPU(s) in addition to or in lieu of CPU(s), frame rates of acquisition of ultrasound image frames 210 may be up to ten times those achieved with CPU-based processor circuits. For example, ultrasound imaging system 100 may capture ultrasound image frames 210 at 100 frames per second, 120 frames per second, 180 frames per second, or 200 frames per second, or at any suitable frame rate therebetween.

[0051] Due to the high frame rate achieved by the use of GPU(s) implemented as part of the processors 116 and 134, and the ability to process frames and train and implement patient-specific deep learning networks 230 at a high rate, it is possible for the ultrasound imaging system 100 to train a new patient-specific deep learning network 230 in a point-of-care setting while an ultrasound examination is being conducted. In addition, the ultrasound imaging system 100 is able to implement previously trained patient-specific deep learning networks 230 and display to a user previously identified regions of interest in real time in a point-of-care setting. This allows sonographers to immediately recognize regions of interest while an examination is being performed and easily compare them to images or videos from past examinations to track changes in the anatomy. The patient-specific deep learning networks 230 may be capable of recognizing and distinguishing individual regions of interest where multiple regions of interest exist.
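Whether a given processor can keep up with real-time display can be estimated by timing single-frame inference against the acquisition frame rate. The sketch below is illustrative only; the toy network and the 100 frames-per-second target are assumptions, not performance figures from the disclosure.

```python
# Illustrative timing check: compare per-frame inference latency with the frame
# period needed for a target acquisition rate. All numbers are assumptions.
import time
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 1),
)
net.eval()
use_gpu = torch.cuda.is_available()       # use the GPU when one is present
if use_gpu:
    net = net.cuda()

frame = torch.rand(1, 1, 256, 256)
if use_gpu:
    frame = frame.cuda()

with torch.no_grad():
    for _ in range(10):                   # warm-up iterations
        net(frame)
    if use_gpu:
        torch.cuda.synchronize()          # flush queued GPU work before timing
    start = time.perf_counter()
    for _ in range(100):
        net(frame)
    if use_gpu:
        torch.cuda.synchronize()
    per_frame_s = (time.perf_counter() - start) / 100

target_fps = 100.0                        # assumed acquisition rate
print(f"{per_frame_s * 1e3:.2f} ms per frame; "
      f"real-time at {target_fps:.0f} fps: {per_frame_s < 1.0 / target_fps}")
```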

[0052] It is further noted that, although the ultrasound image frames 210 and the ultrasound video clips 220 may pertain to two-dimensional data or depictions, both the ultrasound image frames 210 and the ultrasound video clips 220 may be captured, stored, saved, or displayed to a user in either two-dimensional or three-dimensional formats. Data corresponding to three-dimensional ultrasound image frames 210 and ultrasound video clips 220 may be stored and organized in the memory 138 in a substantially similar way to two-dimensional data. In some embodiments, data relating to three-dimensional ultrasound image frames 210 and ultrasound video clips 220 may be of a larger file size.

[0053] As further shown in Fig. 2, a patient file 205 may comprise a plurality of deep learning network files 230. A deep learning network file 230 stores parameters and/or weights of the deep learning networks. That is, the deep learning network file 230 stores the patient-specific and/or anatomy-specific data needed to implement the patient-specific and/or anatomy-specific deep learning network. In some embodiments, the file does not include the deep learning architecture (e.g., the various layers of a convolutional neural network). Rather, in these embodiments, the file only stores the parameters and/or weights used in the layers of the CNN. This advantageously minimizes the file size of the deep learning network file 230. The ultrasound imaging system that is implementing the deep learning network can have software and/or hardware to implement the deep learning architecture. Accordingly, the same deep learning architecture may be implemented by the ultrasound imaging system, but it is patient-specific and/or anatomy-specific when it uses the deep learning network file 230. In other embodiments, the deep learning network file 230 also stores the deep learning architecture. In some embodiments, a deep learning network file 230 may correspond to one anatomical feature within a patient’s anatomy, including an organ, tumor, lesion, diseased region, or other previously listed features. Examples are depicted within Fig. 2, including a deep learning network 230 relating to a cardiac region, a breast region, a liver region, and a kidney region. These regions are merely exemplary. In some embodiments, a deep learning network file 230 may correspond to more than one region of interest 250. For example, within a deep learning network file 230 for a cardiac region, a first region of interest 250 identified and labeled by a user could correspond to a lesion within a lumen and a second region of interest 250 could correspond to an impedance or blockage of the same lumen or a different lumen within the anatomical region. Additional regions of interest 250 may also be included in the same deep learning network file 230. A deep learning network file 230 may correspond to one, two, three, four, five, ten, 15, 20, or more regions of interest 250. The region of interest 250 depicted in Fig. 2 may, in some embodiments, be a deep learning network parameter file stored within a patient file 205.
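The split described in paragraph [0053], in which the imaging system provides the network architecture and the per-patient file supplies only the parameters/weights, might be expressed as in the hedged sketch below. The architecture, file name, and PyTorch usage are illustrative assumptions rather than details from the disclosure.

```python
# Illustrative sketch of paragraph [0053]'s split: the system defines the fixed
# network architecture; the per-patient file supplies only parameters/weights.
import torch
import torch.nn as nn

def build_roi_network() -> nn.Module:
    # Fixed architecture shipped with the imaging system's software.
    return nn.Sequential(
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 1, 1),
    )

net = build_roi_network()
# Loading a patient- and anatomy-specific parameter file makes the generic
# architecture patient-specific. The file name is hypothetical.
state = torch.load("patient_1234_liver_roi.pt", map_location="cpu")
net.load_state_dict(state)
net.eval()
```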

[0054] As shown in Fig. 2, a plurality of patient files 205 may be stored in the memory 138. For example, two patient files 205 associated with two different patients are depicted in Fig. 2, each containing ultrasound image frames 210 from multiple patient examinations, ultrasound video clips 220 from multiple patient examinations including training ultrasound video clips 222, and files corresponding to patient-specific deep learning networks 230. Each saved deep learning network 230 corresponding to a particular anatomical region may include one or more regions of interest, as shown in Fig. 2. As further shown in Fig. 2, patient files 140 may include many more patient files 205 than the two depicted. Patient files 140 may include one, two, five, ten, 100, 1000, and/or any suitable number of patient files 205.

[0055] A patient file 205 may contain any suitable types of data or files corresponding to a patient’s health, history, or anatomy. For example, a patient file 205 may contain multiple deep learning network files 230, or may additionally include other medical/health records of the patient, such as electronic health records or electronic medical records. When a patient undergoes an ultrasound imaging procedure, the ultrasound imaging system 100 may access the patient’s deep learning network file 230 corresponding to a particular region or anatomy as well as any other patient data stored within the patient’s file 205.

[0056] As previously mentioned, the memory 138 may be any suitable storage device, or a combination of different types of memory. For example, a first set of patient files 205 may be stored on one storage device, including any type of storage device previously listed, and a second set of patient files 205 may be stored on a separate storage device. The first storage device may be in communication with the second storage device, and the two may subsequently be in communication with the processor 134 of Fig. 1. Similarly, the total number of patient files 205 may be stored on any number of storage devices in communication with one another and with the processor 134. In addition, all or some of the patient files 205 may be stored on a server, or cloud server, and accessed remotely by the host 130. All or some of the patient files 205 may further be copied such that a second copy or back-up of all or some of the patient files 205 are stored on a separate storage device, server, or cloud based server.

[0057] Fig. 3 is a flow diagram of an ultrasound imaging method 300 of a patient’s ultrasound imaging examination, according to aspects of the present disclosure. In some embodiments, this ultrasound imaging examination may be a first or otherwise initial examination (e.g., before subsequent examinations). One or more steps of the method 300 can be performed by a processor circuit of the ultrasound imaging system 100, including, e.g., the processor 134 (Fig. 1). As illustrated, method 300 includes a number of enumerated steps, but embodiments of method 300 may include additional steps before, after, or in between the enumerated steps. In some embodiments, one or more of the enumerated steps may be omitted, performed in a different order, or performed concurrently.

[0058] One or more steps of the method 300 will be described with reference to Fig. 4, which is a schematic diagram of a graphical user interface (GUI) or screen display for the ultrasound imaging system 100 identifying a region of interest 420, according to aspects of the present disclosure. When the anatomy of a patient is examined using the ultrasound imaging system 100, a region of interest 420 is generally identified by the user for clinical diagnostic and/or treatment purposes. This region of interest 420 may include an anatomical feature 450. The anatomical feature 450 may be a specific organ, including any of the anatomical features or regions listed in the description of the object 105 of Fig. 1 above.

[0059] At step 305, method 300 includes receiving ultrasound image frames 210 from a patient examination. The ultrasound image frames 210 may be generated based on signals sent and received via the probe 110 and may be displayed via the display 132 (Fig. 1). In addition, the ultrasound image frames 210 may be stored in the patient's file 205 (Fig. 2) in the memory 138. In some embodiments, step 305 of method 300 may not include storing the ultrasound image frames 210.

[0060] At step 310, method 300 includes receiving a user input designating the location of an anatomical feature 450 within a region of interest 420 (Fig. 4). The region of interest 420 may be substantially similar to any of the previously depicted and described regions of interest 250 in Fig. 2. Once a region of interest 420 has been designated using the ultrasound imaging system 100, a user may designate the location of an anatomical feature 450 within the region of interest 420 through a user interface. It is also noted that the region of interest 420 may comprise a portion of the anatomy of the patient, such as an anatomical feature 450, or in some embodiments may comprise an entire anatomy of the patient. Designating the location of an anatomical feature 450 may be completed by the user selecting (e.g., with any suitable input device that is in communication with the processor 134 and/or part of the host 130 in Fig. 1) a location on an ultrasound image frame 210 displayed via the display 132. In some embodiments, the display 132 may comprise a touch screen, and designating the location of an anatomical feature 450 may comprise touching a location within an ultrasound image frame 210. Other components, such as a computer keyboard, mouse, touchpad, various hard or soft buttons on an ultrasound console, or any components configured to receive user inputs, may be used to designate an anatomical feature 450 within a region of interest 420. In some embodiments, after an anatomical feature 450 within a region of interest 420 is selected by a user, the user may adjust the selected location of the region of interest 420 or anatomical feature 450 by moving the selected location in any two-dimensional or three-dimensional direction via a computer mouse, keyboard, or any suitable input device, and/or by stepping forward or backward through the captured ultrasound image frames 210. In some embodiments, the anatomical feature 450 and/or the region of interest 420 within the body of a given patient is identified and designated during the ultrasound imaging examination in a point-of-care setting. In other embodiments, the anatomical feature 450 and/or the region of interest 420 may be identified and designated at some point after an examination has taken place. In some embodiments, the user designates only the anatomical feature 450 or only the region of interest 420 (and not both). In other embodiments, the user designates both the anatomical feature 450 and the region of interest 420.

[0061] Step 310 of method 300 may further include identifying more than one anatomical feature 450 and associated region of interest 420. For example, one, two, three, four, five, ten, 15, 20, or more anatomical features 450 may be identified during one patient ultrasound imaging examination within a region of interest 420. In embodiments involving more than one anatomical feature 450 per patient examination, the remaining steps of method 300 may be completed concurrently or simultaneously, or may be completed at different times, including completing any remaining steps with regard to one anatomical feature 450 and then completing the same steps immediately thereafter with regard to any additional anatomical features 450.

[0062] At step 315, method 300 includes labeling the previously identified anatomical feature 450 with a graphical element 470 within any ultrasound image frames 210 which depict the region of interest 420. The ultrasound imaging system 100 may label the anatomical feature 450 by any suitable method or with any suitable signifier. For example, the ultrasound imaging system 100 may label the anatomical feature 450 by placing a graphical element 470 at and/or around the user-selected location. The graphical element 470 may also be referred to as a flag, label, bookmark, bookmark label, indicator, or any other suitable term and may be a graphical representation of the region of interest 420. The graphical element 470 may be overlaid over the ultrasound image frame or may be a graphical overlay. The graphical element 470 may be of any suitable shape, color, size, or orientation. The graphical element may include a two-dimensional element and/or a three-dimensional element. For example, the graphical element 470 may be the shape of a flag or triangle as depicted in Fig. 4, or may be a circle, square, rectangle, triangle, any other polygon, or any other geometric or non-geometric shape. Graphical element 470 may also include text of any length and font, numerals, alpha-numeric characters, or any other symbols. In some embodiments, the shape of the graphical element 470 may symbolize any number of appropriate characteristics pertaining to the anatomical feature 450 or region of interest 420. For example, the shape of the graphical element 470 could represent the order in which the anatomical feature 450 was identified and designated with respect to other identified anatomical features 450, the relative size of the anatomical feature 450, the level of severity of the medical condition associated with the anatomical feature 450, the urgency with which the condition depicted by the region of interest 420 is to be treated, or any other relevant characteristic. Similarly, the color, shading, pattern, or any other feature of the graphical element 470 could also be used to symbolize or convey these same characteristics to a user, as well as any other suitable characteristic. The size, shape, color, pattern, or any other feature of the graphical element 470 used to identify an anatomical feature 450 may be selected by a user or by the ultrasound imaging system 100. Any characteristic of a graphical element 470 may be fully customizable by a user of the ultrasound imaging system 100, and additional characteristics or features of a graphical element 470 used to label an anatomical feature 450 may be added by the user.

[0063] As a user moves the probe 110 along the surface of a patient’s body, or within a lumen of a patient’s body, the graphical element 470 used to label an anatomical feature 450 within a region of interest 420 may continue to be displayed at or around the anatomical feature 450. For example, the anatomical feature 450 may move from one location on the display 132 to another location as a user moves the probe 110, and the graphical element 470 used to label the anatomical feature 450 may move with the anatomical feature 450 on the display 132.

[0064] At step 320, method 300 includes creating a bounding box 460 around the selected region of interest 420 within any ultrasound image frames 210 which depict the region of interest 420. When an anatomical feature 450 is identified and a graphical element 470 is generated within an ultrasound image frame 210, a bounding box 460 may be created and displayed surrounding the graphical element 470 and anatomical feature 450 and specifying the region of interest 420. Bounding box 460 may be a graphical representation of the region of interest 420. The bounding box 460 may be two-dimensional as depicted in Fig. 4, or may be a three-dimensional box. The bounding box 460 may be overlaid over the ultrasound image frame or may be a graphical overlay. Bounding box 460 may be automatically created and displayed to a user by the ultrasound imaging system 100 based on the location of the anatomical features 450 or other characteristics, or may be created based on user input. In some embodiments, the bounding box 460 is created automatically by the ultrasound imaging system 100 and a prompt may be displayed to a user of the ultrasound imaging system 100 allowing the user to modify the dimensions and location of bounding box 460. One primary purpose of bounding box 460 may be to specify to the ultrasound imaging system 100 which portions of an ultrasound image frame 210 should be used to train the patient-specific deep learning network 230. In other embodiments, the bounding box 460 may serve additional purposes, such as but not limited to identifying regions of interest 420 to a user more clearly, or conveying other characteristics, measurements, or metrics to a user. In some embodiments, both the bounding box 460 and the graphical element 470 are generated and/or displayed. In some embodiments, only one of the bounding box 460 or the graphical element 470 is generated and/or displayed.
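For illustration only, the following Python sketch shows one way the user-designated feature location of step 310, the graphical element of step 315, and the automatically generated bounding box of step 320 could be represented in software. The class and function names (RoiAnnotation, make_bounding_box) and the fixed pixel margin are assumptions introduced for this sketch and are not part of the disclosure.

```python
from dataclasses import dataclass

@dataclass
class RoiAnnotation:
    """Hypothetical record of one user-designated anatomical feature (step 310)."""
    frame_index: int      # ultrasound image frame in which the feature was selected
    x: int                # selected pixel column of the anatomical feature
    y: int                # selected pixel row of the anatomical feature
    label: str = "ROI-1"  # text shown with the graphical element (step 315)
    marker: str = "flag"  # shape of the graphical element (flag, circle, square, ...)

def make_bounding_box(ann: RoiAnnotation, frame_w: int, frame_h: int, margin: int = 64):
    """Create a 2-D bounding box centered on the selected location (step 320).

    The box is clamped to the frame so it never extends past the image border.
    The margin (in pixels) is an assumed default; the system or user may adjust it.
    """
    x0 = max(ann.x - margin, 0)
    y0 = max(ann.y - margin, 0)
    x1 = min(ann.x + margin, frame_w - 1)
    y1 = min(ann.y + margin, frame_h - 1)
    return (x0, y0, x1, y1)

# Example: a feature selected at pixel (300, 210) in a 512x512 frame.
box = make_bounding_box(RoiAnnotation(frame_index=42, x=300, y=210), 512, 512)
```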

[0065] At step 325, method 300 includes saving a plurality of the ultrasound image frames 210 which depict the region of interest 420 as an ultrasound video clip 222. The image frames 210 and/or the video clip 222 can be saved in a memory. The image frames 210 and/or the ultrasound video clip 222 may be used to train patient-specific deep learning networks 230.

[0066] Fig. 5 is a flow diagram of a method 500 of training a patient-specific deep learning network to identify a predetermined region of interest 420, according to aspects of the present disclosure. One or more steps of the method 500 can be performed by a processor circuit of the ultrasound imaging system 100, including, e.g., the processor 134 (Fig. 1). One or more steps of the method 500 will be described with reference to Fig. 6, which is a schematic diagram of a method of training a patient-specific deep learning network to identify a region of interest, according to aspects of the present disclosure. As illustrated, method 500 includes a number of enumerated steps, but embodiments of method 500 may include additional steps before, after, or in between the enumerated steps. In some embodiments, one or more of the enumerated steps may be omitted, performed in a different order, or performed concurrently. In some embodiments, the training of a deep learning network may be initiated immediately after an ultrasound video clip is completed and may be completed as a background process as the ultrasound imaging system 100 performs other functions. It is additionally noted that the training of any deep learning networks by the ultrasound imaging system 100, as well as any other suitable step or method disclosed in the present application, may be performed in a point-of-care setting. A point-of-care setting may be any instance in which a patient receives an examination, counsel, or any medical care or assistance from a medical provider, including but not limited to a patient's visit to a hospital, emergency room, clinic, doctor's office, or other patient's room, as well as any other appropriate setting in which a patient may receive medical care or attention.

[0067] At step 505, method 500 includes receiving a plurality of ultrasound image frames and/or an ultrasound video clip. The ultrasound image frames that form the ultrasound video clip are identified in Fig. 6 as ultrasound image frames 622. The ultrasound image frames 622 may each depict the region of interest (e.g., the region of interest 420 of Fig. 4). The ultrasound image frames 622 may be selected from a set 610 of the ultrasound image frames 210 captured via the ultrasound imaging system 100 during an ultrasound examination. Although 10 ultrasound image frames 622 are depicted in Fig. 6, there may be any suitable number of ultrasound image frames 622. For example, the ultrasound image frames 622 corresponding to the ultrasound video clip 222 may include 10, 50, 100, 200, 240, 300, 1000 or more frames, or any suitable number of frames therebetween. In some embodiments, the number of ultrasound image frames 622 may be determined by the ultrasound imaging system 100. In other embodiments, the number of ultrasound image frames 622 may be user defined. In some embodiments, a patient-specific deep learning network may be trained on relatively few ultrasound image frames 622. For example, a patient-specific deep learning network may be satisfactorily trained on about 100 ultrasound image frames 622.

[0068] At step 510, method 500 includes training a patient-specific deep learning network. Step 510 further includes a number of sub-steps. As depicted in Figs. 5-6, a set of ultrasound image frames 622 corresponding to the ultrasound video clip 222 may be extracted from the broader set 610 of ultrasound image frames 210 and used to train a patient-specific deep learning network corresponding to the region of interest 420. In some embodiments, the deep learning network is anatomy-specific for a specific patient.

[0069] At sub-step 515 of step 510, method 500 includes identifying a subset 630 (shown in Fig. 6) of ultrasound image frames 622. The ultrasound image frames forming subset 630 are labelled as ultrasound image frames 632 in Fig. 6. Each ultrasound image frame 632 may depict the region of interest 420 and may be included in ultrasound video clip 222. In some embodiments, the ultrasound image frames 632 may constitute 80% of the ultrasound image frames 622 of the ultrasound video clip 222. In other embodiments, the ultrasound image frames 632 may constitute different percentages of ultrasound image frames 622, such as 10%, 20%, 40%, 60%, 90%, or higher, or any suitable percentage therebetween.
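A minimal sketch of the split described in sub-step 515, assuming the frames of the ultrasound video clip 222 are available as an indexable sequence; the 80% default and the fixed random seed are illustrative choices rather than requirements.

```python
import random

def split_training_and_testing(frames, train_fraction=0.8, seed=0):
    """Partition the clip's frames into a training subset (630) and a testing subset (640)."""
    indices = list(range(len(frames)))
    random.Random(seed).shuffle(indices)        # deterministic shuffle for repeatability
    cut = int(round(train_fraction * len(indices)))
    train = [frames[i] for i in indices[:cut]]  # e.g., ~80% of frames 622 -> training frames 632
    test = [frames[i] for i in indices[cut:]]   # remaining ~20% -> testing frames 642
    return train, test
```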

[0070] At sub-step 520 of step 510, method 500 includes using the subset 630 of ultrasound image frames 632 to train a patient-specific deep learning network 230 to identify the region of interest 420. As shown in Fig. 6, the ultrasound image frames 632 are used by a training component 670 to train the deep learning network, while the remaining frames 642 of the ultrasound image frames 622 may be used for testing. In some embodiments, the deep learning network trained in method 500 may be a neural network, such as a convolutional neural network (CNN). In other embodiments, the neural network may be a deep convolutional network (DCN), a deconvolutional network (DN), a deep convolutional inverse graphics network (DCIGN), a generative adversarial network (GAN), a deep residual network (DRN), an extreme learning machine (ELM), or any other application of machine learning or deep learning algorithms suitable for the purposes of the present application. In embodiments with a CNN, training the deep learning network 230 at sub-step 520 may include a set of convolutional layers followed by a set of fully connected layers. Each convolutional layer may include a set of filters configured to extract features from an input, such as the ultrasound image frames 632. The number of convolutional and fully connected layers as well as the size of the associated filters may vary depending on the embodiments. In some instances, the convolutional layers and the fully connected layers may utilize a leaky rectified linear unit (ReLU) activation function and/or batch normalization. The fully connected layers may be non-linear and may gradually shrink the high-dimensional output to a dimension of the prediction result (e.g., the classification output). Thus, the fully connected layers may also be referred to as a classifier. The training component 670 and/or the testing component 680 may be any suitable software and/or hardware implemented in or by a processor circuit of an ultrasound imaging system 100, e.g., the host 130 of Fig. 1.
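By way of example only, a small convolutional network of the kind described above (convolutional layers with batch normalization and leaky ReLU activations, followed by fully connected layers acting as a classifier) might be expressed in PyTorch as follows. The layer counts, filter sizes, input resolution, and number of regions of interest are assumptions chosen for brevity, not a definitive implementation of the disclosed network.

```python
import torch
import torch.nn as nn

class PatientSpecificRoiNet(nn.Module):
    """Illustrative CNN: convolutional feature extractor plus fully connected classifier."""
    def __init__(self, num_rois: int = 1):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # filters extracting low-level features
            nn.BatchNorm2d(16),
            nn.LeakyReLU(0.1),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.BatchNorm2d(32),
            nn.LeakyReLU(0.1),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(                 # fully connected layers shrink the
            nn.Flatten(),                                # high-dimensional feature map to one
            nn.Linear(32 * 32 * 32, 64),                 # confidence logit per region of interest
            nn.LeakyReLU(0.1),
            nn.Linear(64, num_rois),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))        # raw logits; apply sigmoid for scores

# Example: a single-channel 128x128 B-mode frame yields one confidence logit per ROI.
logits = PatientSpecificRoiNet(num_rois=2)(torch.randn(1, 1, 128, 128))
```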

[0071] To train the deep learning network to identify the region of interest 420, a subset 630 of ultrasound image frames 632 that depict the region of interest 420 can be captured or extracted from a set 610 of ultrasound image frames 210. The subset 630 of ultrasound image frames 632 may include annotated B-mode images. A user may annotate the B-mode images by selecting the area with the anatomical objects and/or imaging artifacts, e.g., with a bounding box and/or label. For example, a processor and/or a processor circuit can receive a user input or user feedback related to the region of interest within the ultrasound image frames. Training of the deep learning network may consider these annotated B-mode images as the ground truth. The B-mode images in the subset 630 may include annotations corresponding to the region of interest 420.

[0072] A processor and/or processor circuit trains a neural network on the ultrasound image frames based on the user input, thereby generating a patient-specific neural network file. The deep learning network 230 can be applied to each ultrasound image frame 632 in the subset 630, for example, using forward propagation, to obtain an output for each input ultrasound image frame 632. The training component 670 may adjust the coefficients of the filters in the convolutional layers and weightings in the fully connected layers, for example, by using backward propagation to minimize a prediction error (e.g., a difference between the ground truth and the prediction result). The prediction result may include regions of interest identified from the input ultrasound image frames 632. In some instances, the training component 670 adjusts the coefficients of the filters in the convolutional layers and weightings in the fully connected layers for each input ultrasound image frame 632. In some other instances, the training component 670 applies a batch-training process to adjust the coefficients of the filters in the convolutional layers and weightings in the fully connected layers based on a prediction error obtained from a set of input images.
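A minimal sketch of the training pass of sub-step 520, assuming the hypothetical PatientSpecificRoiNet above and a training set of (frame, label) pairs in which each label is a per-region 0/1 vector derived from the annotations; the optimizer, learning rate, and loss function are illustrative choices.

```python
import torch
import torch.nn as nn

def train_patient_network(model, train_pairs, epochs=5, lr=1e-4):
    """Forward-propagate each training frame 632, then back-propagate to reduce the
    prediction error between the network output and the annotated ground truth."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()            # prediction error on per-ROI confidences
    model.train()
    for _ in range(epochs):
        for frame, target in train_pairs:        # frame: (1, H, W) tensor; target: (num_rois,) float
            optimizer.zero_grad()
            output = model(frame.unsqueeze(0))   # forward propagation
            loss = loss_fn(output, target.unsqueeze(0))
            loss.backward()                      # backward propagation
            optimizer.step()                     # adjust filter coefficients and layer weights
    return model
```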

[0073] In some aspects, instead of including bounding boxes 460 or annotations in a training image, the subset 630 may store image-class pairs. For instance, each ultrasound image frame 632 may be associated with the region of interest 420. The deep learning network may be fed with the image-class pairs from the subset 630 and the training component 670 can apply similar mechanisms to adjust the weightings in the convolutional layers and/or the fully-connected layers to minimize the prediction error between the ground truth (e.g., the specific region of interest 420 in the image-class pair) and the prediction output.
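If image-class pairs are used instead of annotated boxes, the training data might be organized as a simple dataset of (frame, class index) pairs, as sketched below; the indexing scheme is an assumption for this sketch.

```python
import torch
from torch.utils.data import Dataset

class ImageClassPairs(Dataset):
    """Hypothetical dataset of (ultrasound image frame, region-of-interest class) pairs."""
    def __init__(self, frames, roi_indices):
        self.frames = frames            # list of (1, H, W) float tensors
        self.roi_indices = roi_indices  # integer class per frame, e.g. 0 for region of interest 420

    def __len__(self):
        return len(self.frames)

    def __getitem__(self, i):
        return self.frames[i], torch.tensor(self.roi_indices[i])
```

When trained on such pairs, the per-region loss in the earlier sketch would typically be replaced by a multi-class loss such as torch.nn.CrossEntropyLoss.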

[0074] At sub-step 525 of step 510, method 500 includes identifying a subset 640 of ultrasound image frames 642 for testing. This testing subset 640 may include a number of ultrasound image frames 642 that also depict the region of interest 420. The ultrasound image frames 642 used for testing may also be referred to as testing frames 642 and may be included in the ultrasound video clip 222. In some embodiments, the testing frames 642 may comprise about 20% of the ultrasound image frames 622 corresponding to the ultrasound video clip 222. In other embodiments, the ultrasound image frames 642 may comprise different percentages of the ultrasound image frames 622, such as 10%, 20%, 40%, 60%, 90%, or higher, or any suitable percentage therebetween.

[0075] At sub-step 530 of step 510, method 500 includes testing the deep learning network trained during sub-step 520 as previously described. As shown in Fig. 6, the testing frames 642 are used by the testing component 680 of the deep learning network to verify that the coefficients of the filters of the convolutional layers and the weightings in the fully connected layers are accurate. At sub-step 530, a single testing frame 642 may be presented to the deep learning network and, based on the determined coefficients and weights, a confidence score output may be generated for each region of interest 420. In some embodiments, a user may determine a threshold value associated with the confidence score output of the deep learning network. This threshold may be specific to one of the regions of interest 420, or generally applied to all the regions of interest 420 if multiple regions of interest have been identified. In other embodiments, the threshold confidence score may be determined and input into the ultrasound imaging system 100 before or at the point-of-care setting, by a user, or alternatively by a manufacturer of one or more components of the ultrasound imaging system 100. In other embodiments, the ultrasound imaging system 100 may determine the threshold confidence score based on characteristics of the region of interest 420, trends or other collected data from previous examinations for the same patient or from other patients' examinations, or any other relevant criteria.

[0076] At sub-step 535 of step 510, method 500 includes determining whether the region of interest 420 was correctly identified by the deep learning network 230. For example, this determination could be based on the confidence score output for each region of interest 420. For example, if a confidence score associated with one region of interest exceeds a predetermined threshold, the deep learning network may indicate that that region of interest is depicted in the ultrasound testing frame 642. If another confidence score output associated with an additional region of interest does not exceed a predetermined threshold, the deep learning network may indicate that that additional region of interest is not depicted in the ultrasound testing frame 642. For each testing frame 642, the testing component 680 of the deep learning network may produce a prediction error (e.g., a difference between the ground truth and the confidence score). If the prediction error is below a certain threshold level, the testing component 680 of the deep learning network may determine that the region of interest 420 was correctly identified and will proceed to sub-step 530 again and an additional testing frame 642 may be presented. If, however, the prediction error is calculated to be above a certain threshold level, the testing component 680 of the deep learning network may determine that the region of interest 420 was not correctly identified and will proceed to sub-step 540 of step 510.
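For illustration, sub-steps 530-535 could be reduced to comparing each region's confidence score against a threshold and checking the resulting prediction error, as in the sketch below; the 0.5 score threshold, the 0/1 ground-truth encoding, and the error threshold are assumed values.

```python
import torch

def evaluate_testing_frame(model, frame, ground_truth, score_threshold=0.5, error_threshold=0.25):
    """Return (correct, prediction_error) for one testing frame 642.

    ground_truth is assumed to be a float tensor of 0/1 flags, one per region of interest."""
    model.eval()
    with torch.no_grad():
        scores = torch.sigmoid(model(frame.unsqueeze(0))).squeeze(0)  # confidence per ROI
    predicted = (scores > score_threshold).float()   # ROI deemed present if score exceeds threshold
    prediction_error = torch.mean(torch.abs(scores - ground_truth)).item()
    correct = bool(torch.equal(predicted, ground_truth)) and prediction_error < error_threshold
    return correct, prediction_error
```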

[0077] At sub-step 540 of step 510, method 500 includes adjusting the deep learning network parameters. These parameters may include the coefficients of the filters of the convolutional layers of the deep learning network as well as the weights of the fully connected layers in a convolutional neural network application. The parameters of the deep learning network may include additional coefficients, weights, or other values depending on the particular type of deep learning network used and its intended application. As previously described, the deep learning network may be applied using forward propagation to obtain an output 650 for each input ultrasound image frame 632. In some embodiments, at sub-step 540, the testing component 680 may adjust the coefficients of the filters in the convolutional layers and weightings in the fully connected layers by using backward propagation to minimize the prediction error. In some instances, the testing component 680 adjusts the coefficients of the filters in the convolutional layers and weightings in the fully connected layers for each input ultrasound image frame 642. In some other instances, the testing component 680 applies a batch-training process to adjust the coefficients of the filters in the convolutional layers and weightings in the fully connected layers based on a prediction error obtained from a set of input images. After sub-step 540 of step 510 is completed, the deep learning network then returns to sub-step 530 and an additional testing frame 642 is presented.

[0078] Testing component 680 may iteratively present each testing frame 642 of subset 640 and may adjust the coefficients and weights of the neural network's convolutional and fully connected layers until multiple or all of the testing frames 642 have been presented. In some instances, after multiple or all testing frames 642 have been presented and tested, the ultrasound imaging system 100 may present an indicator to the user indicating the success of the training and testing processes. For example, if the prediction error for each subsequent testing frame 642 consistently decreased, such that a convergence was observed, an indication of success may be displayed to a user via the display 132. In other instances, if the prediction error for each subsequent testing frame 642 did not decrease, such that a divergence of prediction error was observed, an indication of failure may be displayed via the display 132. In some embodiments, this indication may be accompanied by a directive to the user to redo the examination, adjust the size of the bounding box 460, reposition the graphical element 470, or perform another remedial action. In some embodiments, the ultrasound imaging system 100 may train a new patient-specific deep learning network based on as few as 100 ultrasound image frames 622 in very little time. For example, a deep learning network may be trained in as little as a few minutes. In other embodiments, a deep learning network may be trained in even less time, depending primarily on the processing speeds of components of the ultrasound imaging system 100. Because this patient-specific deep learning network is intentionally overfitted to the anatomy of the patient, it is customized for that patient's anatomy.
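One possible heuristic for the convergence/divergence check described above compares the mean prediction error over the most recent testing frames with the mean over the preceding frames; the window size and tolerance below are assumptions.

```python
def training_converged(prediction_errors, window=5, tolerance=1e-3):
    """Report convergence when the prediction error over the most recent window of
    testing frames is meaningfully lower than over the preceding window."""
    if len(prediction_errors) < 2 * window:
        return False  # not enough testing frames presented yet
    recent = sum(prediction_errors[-window:]) / window
    earlier = sum(prediction_errors[-2 * window:-window]) / window
    return recent < earlier - tolerance  # errors still trending downward -> convergence
```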

[0079] At step 545, method 500 includes saving the deep learning network parameters as a deep learning network file 230. If the testing component 680 observed a convergence of the prediction error, the coefficients and weights of the neural network corresponding to all regions of interest 420 within the trained neural network may be saved as a deep learning network file 230.

[0080] At step 550, method 500 includes storing the deep learning network file 230 in a patient's file 205 in a memory accessible by the ultrasound imaging system (e.g., the memory 138 of Fig. 1). The deep learning network file 230 that is saved is therefore a patient-specific and/or anatomy-specific deep learning network with coefficients and weights trained or calculated to identify the associated patient's specific regions of interest 420 within that patient's anatomy. This deep learning network file 230 may be loaded to the ultrasound imaging system 100 during subsequent examinations to assist a user in locating and identifying the same regions of interest 420 within a patient's anatomy and comparing differences over time in any measurable or observable characteristics of the region of interest 420, as will be discussed in more detail hereafter. It is noted that any or all of the steps of method 500 may be performed by the ultrasound imaging system 100 either during a patient examination or after a patient examination. In addition, the steps of training a deep learning network as outlined above and/or according to other methods presented may be performed concurrently while the ultrasound imaging system 100 acquires ultrasound image frames during an examination, or may be performed at a later time.
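A sketch of steps 545-550 under the PyTorch assumption used above: the trained coefficients and weights are written to a per-patient, per-anatomy file inside the patient's directory and reloaded at a later examination (step 705). The directory layout and file naming are purely illustrative.

```python
from pathlib import Path
import torch

def save_patient_network(model, patient_id: str, anatomy: str, root: str = "patient_files"):
    """Store the trained parameters as a patient- and anatomy-specific network file (230)."""
    path = Path(root) / patient_id / f"{anatomy}_roi_network.pt"
    path.parent.mkdir(parents=True, exist_ok=True)
    torch.save(model.state_dict(), path)  # coefficients and weights only
    return path

def load_patient_network(model, patient_id: str, anatomy: str, root: str = "patient_files"):
    """Recall the saved parameters during a subsequent examination (step 705)."""
    path = Path(root) / patient_id / f"{anatomy}_roi_network.pt"
    model.load_state_dict(torch.load(path))
    model.eval()
    return model
```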

[0081] Fig. 7 is a flow diagram of a method 700 of identifying a region of interest 420 with a previously trained patient-specific and/or anatomy-specific deep learning network, according to aspects of the present disclosure. One or more steps of the method 700 can be performed by a processor circuit of the ultrasound imaging system 100, including, e.g., the processor 134 (Fig. 1). One or more steps of the method 700 will be described with reference to Fig. 8, which is a schematic diagram of a method of identifying and displaying to a user a region of interest 420. As illustrated, method 700 includes a number of enumerated steps, but embodiments of method 700 may include additional steps before, after, or in between the enumerated steps. In some embodiments, one or more of the enumerated steps may be omitted, performed in a different order, or performed concurrently.

[0082] At step 705, method 700 includes loading a previously saved patient-specific deep learning network file 230. The deep learning network file 230 may include several parameters, such as coefficients of filters of convolutional layers and weights of fully connected layers configured to recognize one or more regions of interest 420 within a patient's anatomy.

[0083] At step 710, method 700 includes implementing the patient-specific deep learning network trained to recognize regions of interest 420 during a subsequent ultrasound examination for that patient. In some embodiments, the patient-specific deep learning network may be loaded and implemented by a user of the ultrasound imaging system 100 in a point-of-care setting, such that the system 100 may receive and analyze ultrasound image frames 210 in real time. Step 710 may be divided into several sub-steps.

[0084] At sub-step 715 of step 710, method 700 includes receiving an ultrasound image frame 812 captured by the probe 110. Sub-step 715 may be initiated at a subsequent patient examination, in which one or more regions of interest 420 are to be examined on a second, third, fourth, or subsequent occasion. The ultrasound image frame 812 may or may not depict one of the regions of interest 420 within the patient's anatomy.

[0085] At sub-step 720 of step 710, method 700 includes determining whether the ultrasound image frame 812 received from the probe 110 depicts one of the regions of interest 420. A processor and/or processor circuit can apply the patient-specific neural network to the ultrasound image frames to identify the region of interest within the ultrasound image frames. The region of interest can be identified automatically (e.g., without user input required to identify the region of interest). To determine whether the ultrasound image frame 812 received from the probe 110 depicts one of the regions of interest 420, the ultrasound imaging system 100 may retrieve the deep learning network file for that patient, which may contain multiple deep learning parameter files as shown by the regions of interest 250 in Fig. 2. In some embodiments, whether the ultrasound image frame 812 depicts one of the regions of interest 420 may be determined based on the confidence score output for that region of interest 420. If the ultrasound imaging system 100 determines that no region of interest 420 is depicted in the ultrasound image frame 812, the system reverts back to sub-step 715, and another ultrasound image frame 812 is received. If, however, the ultrasound imaging system 100 does determine that a region of interest 420 is depicted in an ultrasound image frame 812, the system proceeds to sub-step 725 of step 710.

[0086] At sub-step 725 of step 710, method 700 includes labelling the anatomical feature 450 identified in a previous ultrasound examination with a graphical element 870 within the ultrasound image frame 812. Graphical element 870 may be substantially similar to graphical element 470 previously mentioned and described with reference to Fig. 4. At sub-step 725, the ultrasound imaging system 100 may place the graphical element 870 in the same location in relation to the anatomical feature 450 and/or other anatomical features surrounding the region of interest 420 as the user placed graphical element 470 in relation to the same elements during a patient's first examination. In some embodiments, however, the graphical element 870 may be placed at a different location. This different location could correspond to a movement or shifting of the anatomical feature 850 or denote any other suitable characteristic of anatomical feature 450. In addition, in some embodiments, the graphical element 870 may be substantially different from the graphical element 470. The graphical element 870 may be of any suitable shape, color, size, or orientation, including any of the features previously described in relation to the graphical element 470 of Fig. 4. In some embodiments, changes between graphical element 470 and graphical element 870 may symbolize to a user any number of appropriate characteristics pertaining to the region of interest 420. For example, changes could reflect changes in the size of the anatomical feature 450 within the region of interest 420, the level of urgency with which the condition depicted by the region of interest 420 is to be treated, or any other relevant characteristic.

[0087] At sub-step 730 of step 710, method 700 includes calculating one or more metrics associated with the anatomical feature 450 or region of interest 420.
Metrics associated with the region of interest 420 may include, but are not limited to, blood flow through a lumen or body cavity; the volume of a particular region of interest 420, such as a body cavity, a tumor, a cyst, or any other suitable region of interest 420; and other dimensions of an anatomical feature 450, including length, width, depth, circumference, diameter, area, and other metrics.
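For illustration, simple geometric metrics of the kind listed above could be approximated from the detected two-dimensional bounding box and the physical pixel spacing of the frame; the assumption that the region of interest roughly fills the box, and the pixel spacing value, are simplifications for this sketch.

```python
import math

def roi_metrics(box, pixel_spacing_mm=0.2):
    """Approximate size metrics (sub-step 730) from a 2-D bounding box (x0, y0, x1, y1).

    pixel_spacing_mm is the physical size of one pixel; an assumed value here."""
    x0, y0, x1, y1 = box
    width_mm = (x1 - x0) * pixel_spacing_mm
    depth_mm = (y1 - y0) * pixel_spacing_mm
    area_mm2 = width_mm * depth_mm
    equivalent_diameter_mm = 2.0 * math.sqrt(area_mm2 / math.pi)  # diameter of a circle of equal area
    return {"width_mm": width_mm, "depth_mm": depth_mm,
            "area_mm2": area_mm2, "equivalent_diameter_mm": equivalent_diameter_mm}
```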

[0088] At sub-step 735 of step 710, method 700 includes creating a bounding box 860 around the region of interest 420 in an ultrasound image frame 812. The bounding box 860 may be substantially similar to the bounding box 460 described with reference to Fig. 4. The bounding box 860 may be two-dimensional as depicted in Fig. 8, or may be a three-dimensional box. One purpose of the bounding box 860 may be to specify the boundaries of the region of interest 420 and therefore which features should be used by the ultrasound imaging system 100 to further train the deep learning network. The size or other features of the bounding box 860 may be determined in a manner similar to that in which the bounding box 460 is generated or modified, as discussed previously. In some embodiments, the ultrasound imaging system 100 may prompt a user to decrease or increase the size of the bounding box 860 depending on the prediction error calculated by the ultrasound imaging system 100 and whether or not the prediction error converges or diverges based on the data used to train and test the deep learning network. In some embodiments, both the bounding box 860 and the graphical element 870 are generated and/or displayed. In some embodiments, only one of the bounding box 860 or the graphical element 870 is generated and/or displayed.
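A minimal sketch of the per-frame decision of sub-step 720, reusing the hypothetical network and thresholding introduced earlier: the loaded patient-specific network is applied to each incoming frame, and a region of interest is reported only when its confidence score exceeds the threshold. The function name and fixed threshold are assumptions.

```python
import torch

def detect_regions_of_interest(model, frame, roi_names, score_threshold=0.5):
    """Apply the patient-specific network to one incoming frame 812 (sub-step 720).

    Returns the names and confidence scores of the regions of interest whose score
    exceeds the threshold, or an empty list when no region of interest is depicted."""
    model.eval()
    with torch.no_grad():
        scores = torch.sigmoid(model(frame.unsqueeze(0))).squeeze(0)
    return [(name, float(score))
            for name, score in zip(roi_names, scores)
            if float(score) > score_threshold]
```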

[0089] At sub-step 740 of step 710, method 700 includes outputting the ultrasound image frame 812 showing the graphical element 870 added in sub-step 725, the bounding box 860 added in sub-step 735, and/or any calculated metrics added in sub-step 730 to a user via the display 132. The processor and/or processor circuit can provide (e.g., to a display device in communication therewith) a graphical representation related to the region of interest in the ultrasound image frames. A user may then see an image similar to that shown in Fig. 8, showing an ultrasound image frame 812 depicting an anatomical feature 850 within a region of interest 420 with a graphical element 870 and bounding box 860 positioned nearby. As previously mentioned, these elements may be displayed in real time in a point-of-care setting such that, as a user moves the probe 110 on or within a patient's anatomy, the graphical element 870 and the bounding box 860 may move along the display 132 together with the region of interest 420. As illustrated in Fig. 7, after sub-step 740 is complete and a single ultrasound image frame 812 is displayed to a user via the display 132, the ultrasound imaging system 100 returns to sub-step 715, in which an additional ultrasound image frame 812 is received from the probe 110 and the ultrasound image frame 812 is again analyzed according to the same process and displayed to the user. Due to the increased processing speeds available from GPUs used for the processor circuit 116 and/or the processor circuit 134, this process may be completed in real time in a point-of-care setting. Specifically, the ultrasound image frames 812 may be analyzed to determine if a region of interest 420 is depicted, the graphical element 870, bounding box 860, and/or any calculated metrics may be added, and the frame 812 may be displayed in real time, or at substantially the same frame rate at which the ultrasound image frames 812 are captured by the probe 110. The ability of the ultrasound imaging system 100 to display to a user in real time the locations of the regions of interest 420 identified in previous examinations, along with metrics relating to the regions of interest 420, provides the user with valuable insight as to the current state of any depicted anatomical features 450 or regions of interest 420 within a patient's anatomy. In addition, it allows a user to more easily track changes to the regions of interest 420 over time and determine the best method of diagnosing and/or remedying medical issues relating to any of the regions of interest 420.
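The real-time loop of sub-steps 715-740 could then be sketched as below. The acquire_frame and show callables stand in for the probe interface and the display 132 and are hypothetical, detect_regions_of_interest is the sketch above, and OpenCV is used only to draw the bounding box, label, and metrics. For simplicity a fixed bounding box is reused, whereas the disclosed system would localize the box for each frame.

```python
import cv2
import numpy as np
import torch

def realtime_roi_display(model, acquire_frame, show, roi_names, box, metrics_text=""):
    """Per-frame loop: detect the region of interest, overlay graphics, and display (sub-step 740).

    acquire_frame() -> 2-D uint8 numpy array (B-mode frame) or None when the exam ends.
    show(image) renders the annotated frame; both are placeholders for system components."""
    while True:
        frame = acquire_frame()
        if frame is None:
            break
        tensor = torch.from_numpy(frame.astype(np.float32) / 255.0).unsqueeze(0)  # (1, H, W)
        detections = detect_regions_of_interest(model, tensor, roi_names)
        display = cv2.cvtColor(frame, cv2.COLOR_GRAY2BGR)
        for name, score in detections:
            x0, y0, x1, y1 = box                                           # bounding box 860
            cv2.rectangle(display, (x0, y0), (x1, y1), (0, 255, 0), 2)
            cv2.putText(display, f"{name} {score:.2f}", (x0, max(y0 - 5, 0)),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)     # graphical element 870
        if metrics_text:
            cv2.putText(display, metrics_text, (5, 15),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 255, 255), 1)
        show(display)
```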

[0090] At step 745, method 700 includes saving a set of ultrasound image frames 812 as an additional video clip 820. The ultrasound imaging system may save the frames as a video clip in response to a user input to record the particular frames. Step 745 may be completed simultaneously with step 710 while a user moves the probe 110 along or within a patient’s body, or it may be completed after the probe 110 has completed capturing the frames 812 during a patient examination. Additionally, step 745 may occur during a point-of-care setting or may occur afterwards. An additional video clip 820 may include ultrasound image frames 822 which depict the region(s) of interest 420 within a patient’s anatomy. The additional video clip 820 may be of any suitable length of time, similar to the video clip 222 previously discussed. The additional ultrasound video clip 820 may be stored within a patient’s file 205 along with the other ultrasound video clips 220 previously stored. The additional ultrasound video clip 820 may be organized according to the date on which the procedure was completed, its use in training the deep learning network, or by any other suitable characteristic. The ultrasound imaging system 100 may generate and store any number of additional ultrasound video clips 820 during subsequent patient examinations.

[0091] At step 750, method 700 includes initiating the patient-specific deep learning network training step 510 of method 500 using the additional video clip 820 rather than the ultrasound video clip 222 from a patient’s initial ultrasound imaging procedure. In some embodiments, image frames from the additional video clip 820 may be combined with image frames from the initial ultrasound video clip 222 to further train the deep learning network. In such an embodiment, the deep learning network may more effectively identify and monitor changes in regions of interest. Step 510 has been previously discussed with reference to Fig. 5 and Fig. 6 and the process may be substantially similar using the additional video clip 820. For example, the frames 822 of the additional video clip 820 may be divided into two sets. A set 830 of ultrasound image frames 832 selected from the ultrasound image frames 822 may be used to further train the deep learning network. This process may include adjusting the coefficients of filters of convolutional layers and weights of fully connected layers such that the prediction error may further decrease and the deep learning network is better able to identify the regions of interest 420. An additional set 840 of ultrasound image frames 842 may be additionally selected by the ultrasound imaging system 100 to test the deep learning network. This process may include presenting an ultrasound image frame 842 to the deep learning network and testing if the network correctly identifies the region(s) of interest 420. If the network identifies one of the regions of interest 420, an additional frame 842 is presented. If the network incorrectly identifies or incorrectly does not identify a region of interest 420, the deep learning network parameters may be further adjusted to reduce prediction error. After all frames 842 are presented to the testing component of the deep learning network, the new parameters, if any, may be saved as the deep learning network file 230 within the patient’s file 205 to be recalled at a later ultrasound examination.
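Step 750 might be sketched as a short fine-tuning pass that combines frames from the initial clip 222 and the additional clip 820, reuses the earlier hypothetical training, evaluation, convergence, and saving sketches, and writes the updated parameters back into the patient's file; all function names are the assumed ones introduced above.

```python
def retrain_with_new_clip(model, initial_pairs, additional_pairs, patient_id, anatomy):
    """Further train the patient-specific network on combined examination data (step 750)."""
    combined = list(initial_pairs) + list(additional_pairs)
    train_pairs, test_pairs = split_training_and_testing(combined)
    model = train_patient_network(model, train_pairs, epochs=2)   # short fine-tuning pass
    errors = [evaluate_testing_frame(model, f, t)[1] for f, t in test_pairs]
    if training_converged(errors):
        save_patient_network(model, patient_id, anatomy)          # overwrite network file 230
    return model
```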

[0092] Fig. 9 is a diagrammatic view of a graphical user interface (GUI) or screen display for the ultrasound imaging system 100 identifying a region of interest 420, according to aspects of the present disclosure. Fig. 9 may represent an exemplary display 132 which a user of the ultrasound imaging system 100 may see and interact with during a point-of-care ultrasound examination. As shown in Fig. 9, an ultrasound image frame 822 is displayed to a user. A region of interest 420 is depicted within the frame 822 and identified by the ultrasound imaging system 100. In some embodiments, it is not necessary for the graphical element 870 to be depicted within a frame which depicts a region of interest 420. A bounding box 860 may adequately identify to a user the region of interest 420 or an anatomical feature 450 within a region of interest 420. As shown in Fig. 9, a bounding box 860 is placed around the region of interest 420 by the ultrasound imaging system 100 so as to identify an anatomical feature 450 within the region of interest 420 and specify to the system 100 which features within the anatomy of the patient are to be used to further train the patient-specific deep learning network. Additionally depicted in Fig. 9 is a confidence score 910. The classification output may indicate the confidence score 910 for each region of interest 420 based on the input ultrasound image frame 822. When a deep learning network is trained for a specific patient, who may be examined, for example, for the presence of a plurality of tumors, one region of interest 420 may be assigned to one tumor, another region of interest 420 may be assigned to another tumor, and so on. The deep learning network may then output a confidence score for each region of interest 420 within the deep learning network file 230. A high confidence score 910 for a region of interest 420 indicates that the input ultrasound image frame 812 is likely to include that region of interest 420. Conversely, a low confidence score 910 for a region of interest 420 indicates that the input ultrasound image frame 812 is unlikely to include that region of interest 420. A confidence score 910 may be concurrently displayed for each region of interest 420, or only the confidence scores of the regions of interest 420 depicted within a frame 812 on the display 132 may be displayed to the user. Confidence scores 910 may be positioned in any suitable location relative to their corresponding regions of interest. For example, confidence scores 910 may be positioned proximate and/or adjacent to the bounding box 860, overlaying the ultrasound image frame 822, beside the ultrasound image frame 822, or in any other suitable position. In addition, any calculated metrics 920 may be displayed to a user. The metrics 920 may be displayed in any suitable position within the display 132. For example, the metrics 920 may be displayed to the right of, left of, above, below, or overlaid on top of the ultrasound image frame 812. Any number of other suitable indicators may also be included on the display 132 relating to, for example, the processing speed of the probe 110, the processing speed of the host 130, display qualities, characteristics, or settings associated with the display 132, battery life, position of the probe 110 relative to the regions of interest 420 or other notable landmarks within a patient's anatomy, or any other suitable indicator or image.

[0093] Fig. 10 is a diagrammatic view of a graphical user interface (GUI) or screen display for the ultrasound imaging system 100 displaying to a user a plurality of video clips 220 depicting the region of interest 420, according to aspects of the present disclosure. Fig. 10 may be representative of a longitudinal evaluation of the patient (e.g., the same region of interest of the same patient over time). In addition to enabling a user to identify the region of interest 420 in real time, the present disclosure also enables a user to view the same region of interest 420 in video clips 220 from different examinations to compare and track changes. As shown in Fig. 10, in some embodiments, the ultrasound imaging system 100 may display to a user a plurality of video clips 220 simultaneously within the display 132. A user of the ultrasound imaging system 100 may select which ultrasound video clips 220 to display and may determine the placement of each video clip 220 within the display 132. In some embodiments, the ultrasound imaging system 100 may determine the order and position of the ultrasound video clips 220 for display. The ultrasound imaging system 100 may be capable of displaying all video clips 220 saved to a patient's file 205 simultaneously. In other embodiments, a user may select to view and compare a plurality of the ultrasound image frames 210 rather than video clips 220. Within the ultrasound image frames 210 or video clips 220 displayed, the graphical element 870, bounding box 860, and/or region of interest 420 may be displayed. The anatomical feature 450 may appear substantially different or may not appear at all in different image frames 210 or video clips 220. The anatomical feature 450 may be of a different volume or size. For example, an anatomical feature (e.g., a tumor) that is shrinking in size may be indicative that a treatment for that patient is efficacious. A label 1010 may be included with each displayed ultrasound image frame 210 or video clip 220. The label 1010 may comprise the date and time 1020 of a patient's examination as well as other metrics 1030 associated with the anatomical feature 450. In addition, it is fully contemplated that a wide variety of graphical user interfaces and display designs may be implemented in accordance with the presently disclosed application.

[0094] FIG. 11 is a schematic diagram of a processor circuit 1100, according to embodiments of the present disclosure. The processor circuit 1100 may be implemented in the probe 110 and/or the host 130 of FIG. 1. In an example, the processor circuit 1100 may be in communication with the transducer array 112 in the probe 110. One or more processor circuits 1100 are configured to execute the operations described herein. As shown, the processor circuit 1100 may include a processor 1160, a memory 1164, and a communication module 1168. These elements may be in direct or indirect communication with each other, for example via one or more buses.

[0095] The processor 1160 may include a CPU, a GPU, a DSP, an application-specific integrated circuit (ASIC), a controller, an FPGA, another hardware device, a firmware device, or any combination thereof configured to perform the operations described herein. The processor 1160 may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.

[0096] The memory 1164 may include a cache memory (e.g., a cache memory of the processor 1160), random access memory (RAM), magnetoresistive RAM (MRAM), read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read only memory (EPROM), electrically erasable programmable read only memory (EEPROM), flash memory, solid state memory device, hard disk drives, other forms of volatile and non-volatile memory, or a combination of different types of memory. In an embodiment, the memory 1164 includes a non-transitory computer-readable medium. The memory 1164 may store instructions 1166. The instructions 1166 may include instructions that, when executed by the processor 1160, cause the processor 1160 to perform the operations described herein with reference to the probe 110 and/or the host 130 (FIG. 1). Instructions 1166 may also be referred to as code. The terms "instructions" and "code" should be interpreted broadly to include any type of computer-readable statement(s). For example, the terms "instructions" and "code" may refer to one or more programs, routines, sub-routines, functions, procedures, etc. "Instructions" and "code" may include a single computer-readable statement or many computer-readable statements.

[0097] The communication module 1168 can include any electronic circuitry and/or logic circuitry to facilitate direct or indirect communication of data between the processor circuit 1100, the probe 110, and/or the display 132. In that regard, the communication module 1168 can be an input/output (I/O) device. In some instances, the communication module 1168 facilitates direct or indirect communication between various elements of the processor circuit 1100 and/or the probe 110 (FIG. 1) and/or the host 130 (FIG. 1).

[0098] Persons skilled in the art will recognize that the apparatus, systems, and methods described above can be modified in various ways. Accordingly, persons of ordinary skill in the art will appreciate that the embodiments encompassed by the present disclosure are not limited to the particular exemplary embodiments described above. In that regard, although illustrative embodiments have been shown and described, a wide range of modification, change, and substitution is contemplated in the foregoing disclosure. It is understood that such variations may be made to the foregoing without departing from the scope of the present disclosure. Accordingly, it is appropriate that the appended claims be construed broadly and in a manner consistent with the present disclosure.