

Title:
A COMPUTER PROGRAM, APPARATUS, AND COMPUTER-IMPLEMENTED METHOD OF PRE-OPERATIVE PLANNING FOR PATIENT SURGERY
Document Type and Number:
WIPO Patent Application WO/2024/059902
Kind Code:
A1
Abstract:
Embodiments include a computer-implemented method of pre-operative planning for patient surgery, the method including: generating a personalised pre-operative software model based on received pre-operative medical imagery and pre-operative patient movement analysis data related to movement of the patient anatomical structure of the individual patient; receiving by the at least one computer processor a software definition of a surgical procedure to be performed in relation to the personalised pre-operative software model of the patient anatomical structure; generating by the at least one computer processor a modified personalised post-operative software model of the patient anatomical structure; simulating by the at least one computer processor movement of the patient anatomical structure according to the modified personalised post-operative software model to generate simulation output; allowing by the at least one computer processor adjustment of the personalised post-operative software model based on the simulation output; generating by the at least one computer processor a personalised surgical cutting guide model; and generating by the at least one computer processor at least one of: 3D printing instructions based on the personalised surgical cutting guide model or personalised surgical operation instructions to transmit to a robotic surgical system for use in performing the surgical procedure.

Inventors:
BARZAN MARTINA (AU)
CARTY CHRIS (AU)
SMITH DEREK (AU)
LLOYD DAVID (AU)
BADE DAVID (AU)
Application Number:
PCT/AU2023/050905
Publication Date:
March 28, 2024
Filing Date:
September 20, 2023
Assignee:
UNIV GRIFFITH (AU)
International Classes:
A61B17/56; A61B17/00; A61B17/15; A61B17/80; A61B34/10; A61F2/30
Domestic Patent References:
WO2020163358A1 (2020-08-13)
WO2020231656A2 (2020-11-19)
WO2019245848A1 (2019-12-26)
WO2010099142A1 (2010-09-02)
Foreign References:
US20220354511A1 (2022-11-10)
Attorney, Agent or Firm:
FB RICE (AU)
Claims:
CLAIMS:

1. A computer-implemented method of pre-operative planning for patient surgery, the method including: generating by at least one computer processor a personalised pre-operative software model of patient anatomical structure of an individual patient based on received pre-operative medical imagery of the patient anatomical structure of the individual patient and pre-operative patient movement analysis data related to movement of the patient anatomical structure of the individual patient, the patient movement analysis data being generated at least in part by a movement sensor system observing movement of the patient anatomical structure of the individual patient over a period of time, wherein the personalised pre-operative software model of the patient includes bone position and dimension data of bones in the patient anatomical structure of the individual patient, muscle position and dimension data of muscles in the patient anatomical structure of the individual patient, and relationship definition data defining relationships between bones and muscles in the patient anatomical structure of the individual patient; receiving by the at least one computer processor a software definition of a surgical procedure to be performed in relation to the personalised pre-operative software model of the patient anatomical structure of the individual patient; generating by the at least one computer processor a modified personalised post-operative software model of the patient anatomical structure of the individual patient based on the personalised pre-operative software model and the software definition of the surgical procedure; simulating by the at least one computer processor movement of the bones and muscles of the patient anatomical structure of the individual patient according to the modified personalised post-operative software model to generate simulation output; allowing by the at least one computer processor adjustment of the personalised post-operative software model based on the simulation output; generating by the at least one computer processor a personalised surgical cutting guide model to facilitate the surgical procedure based on the personalised post-operative software model or the adjusted personalised post-operative software model; and generating by the at least one computer processor at least one of:

3D printing instructions based on the personalised surgical cutting guide model to transmit to a 3D printer to form a surgical cutting guide; or personalised surgical operation instructions to transmit to a robotic surgical system for use in performing the surgical procedure.

2. The computer-implemented method of claim 1, wherein the surgery is one from among: an osteotomy; a femoral osteotomy; a distal tibial osteotomy; a proximal tibial osteotomy; a distal femoral osteotomy; a proximal femoral osteotomy; a tibial osteotomy; a high tibial osteotomy.

3. The computer-implemented method of claim 1 or claim 2, wherein the surgical procedure includes one or more osteotomies, and the simulation output comprises the surgical cutting guide model, the surgical cutting guide model defining one or more osteotomy planes in which to cut a bone or bones of the patient anatomical structure to facilitate reconfiguration of the patient anatomical structure in accordance with the modified software model or the adjusted modified software model.

4. The computer-implemented method of any one of claims 1 to 3, wherein the surgical procedure includes one or more osteotomies, and the simulation output comprises an implant configured to secure a first portion of a bone cut by the one or more osteotomies to a second, separate, portion of the same bone cut by the one or more osteotomies, in a configuration determined in accordance with the modified software model or the adjusted modified software model.

5. The computer-implemented method of any one of claims 1 to 4, wherein the software definition of the surgical procedure defines a value range for each of the following parameters: a number of osteotomies in each of one or more specified bones; for each osteotomy, a specific bone from the patient anatomical structure to be cut by the osteotomy; for each osteotomy, a position and orientation of an osteotomy plane; for each osteotomy, a relative position and orientation of the two or more post-osteotomy distinct bone portions; wherein the simulation iteratively constrains the software definition of the surgical procedure to include a patient-specific osteotomy plan by determining a specific value from within the value range for each of the parameters, and wherein the surgical cutting guide model and the implant configuration implement the specific values for the parameters.

6. The computer-implemented method of claim 5, wherein the software definition of the surgical procedure defines a value range for each of the following parameters: a repositioning of one or more specified bones within an allowable range; a range of available implants with corresponding chisels; wherein the simulation iteratively constrains the software definition of the surgical procedure to include a patient-specific implant plan by determining one or more repositionings and for each determined repositioning a selected implant and chisel from the available range.

7. The computer-implemented method of claim 5 or claim 6, wherein the surgical cutting guide model implements the specific values for one or more osteotomies from the patient-specific osteotomy plan, wherein the surgical cutting guide model is generated by, for each osteotomy: defining a mask portion of the surgical cutting guide model configured to conform to one or more curves or other geometric features of the bone position and dimension data of bones in the patient anatomical structure of the individual patient.

8. The computer-implemented method of claim 7, wherein the surgical cutting guide model is further generated by, for each osteotomy: defining an osteotomy slot portion of the surgical cutting guide model being an aperture in the mask portion positioned and orientated according to the specific values for the osteotomy; defining an osteotomy saw blade insertion profile and extruding the surgical cutting guide model according to the saw blade insertion profile to a predefined distance proximally and distally of the defined slot portion.

9. The computer-implemented method of claim 4 and any one of claims 7 to 8, wherein the surgical cutting guide model is further generated by, for each osteotomy: defining an osteotomy chisel insertion as a location on the defined mask portion and a direction relative to the mask, extruding the surgical cutting guide model according to the location and direction by a predefined distance distal from the osteotomy chisel insertion location.

10. The computer-implemented method of claim 4 and any one of claims 7 to 9, wherein the surgical cutting guide model is further generated by, for each osteotomy: defining one or more implant fixation slots as a location on the defined mask portion based on a shaft surface of the implant, extruding the surgical cutting guide model at the one or more defined implant fixation slots by a predefined distance distally.

11. The computer-implemented method of claim 10, wherein the defined osteotomy chisel insertion further comprises a hole in the surgical cutting guide model configured to removably receive a guide wire and a guide wire seat, and wherein the surgical cutting guide model is extruded distally around the hole to define the guide wire seat, the guide wire seat being configured for insertion into the hole at one end and to longitudinally receive the guide wire.

12. The computer-implemented method of any one of claims 5 to 11, wherein the surgical cutting guide model is converted to 3D printing instructions, and the method further comprises 3D printing a surgical cutting guide from the 3D printing instructions.

13. The computer-implemented method of any one of claims 1 to 12, wherein the received pre-operative medical imagery of the patient anatomical structure of the individual patient is obtained by one or more from among: anthropometric data acquisition; attachment of MRI-compatible markers to the individual patient and MRI-scanning thereof, and analysis of the MRI-scanning to obtain MRI scans of the patient anatomical structure of the individual patient; placement of electromyography (EMG) units on the skin of the individual patient and measurement and analysis of EMG signals generated by the EMG units.

14. The computer-implemented method of any one of claims 1 to 13, wherein generating the personalised pre-operative software model of the patient includes: generating at least one MRI-reconstructed portion by one or more from among: obtaining the bone position and dimension data of bones in the patient anatomical structure of the individual patient by executing a segmentation process on the MRI scans; obtaining the muscle position and dimension data of muscles in the patient anatomical structure of the individual patient by executing a segmentation process on the MRI scans; obtaining position and dimension data of a growth plate in one or more bones, by executing a segmentation process on the MRI scans.

15. The computer-implemented method of any one of claims 1 to 14, wherein generating the personalised pre-operative software model of the patient includes: generating at least one CT-reconstructed portion by imaging the patient anatomical structure of the individual patient by a computerized tomography scan to obtain at least one CT scan, and obtaining the bone position and dimension data of bones in the patient anatomical structure of the individual patient by executing a segmentation process on the at least one CT scan.

16. The computer-implemented method of claim 14 and claim 15, further comprising registering common landmarks in the bone position and dimension data of bones in the CT-reconstructed portion and the MRI-reconstructed portion.

17. The computer-implemented method of any one of claims 13 to 16, wherein generating the personalised pre-operative software model of the patient includes a 3D anatomical analysis of the bone position and dimension data, including defining one or more axes and planes in the bone position and dimension data, and measuring one or more 3D angles between the defined one or more axes and planes.

18. The computer-implemented method of claim 6 and claim 17, wherein generating the post-operative personalised software model of the patient anatomical structure of the individual patient includes: combining a CAD model of the selected implant and chisel with the bone position and dimension data to verify that adequate bone thickness remains in place once the implant is implanted into a bone orifice created by the chisel.

19. The computer-implemented method of claim 6 and claim 17, wherein generating the post-operative personalised software model of the patient anatomical structure of the individual patient includes: combining a CAD model of the selected implant and chisel with the growth plate position and dimension data to verify that adequate growth plate volume remains once the implant is implanted into a bone orifice created by the chisel.

20. The computer-implemented method of any one of claims 1 to 19, wherein the patient movement analysis data is measured motion data of the individual patient.

21. The computer-implemented method of claim 20, wherein the patient movement analysis data is measured motion data of the individual patient obtained by motion capture while the individual patient is walking or performing another natural body movement, and the measured motion data is of motion about the hip joint or the knee joint.

22. The computer-implemented method of any one of claims 1 to 21, wherein the personalised pre-operative software model of patient anatomical structure of the individual patient based on received pre-operative medical imagery of the patient anatomical structure of the individual patient and pre-operative patient movement analysis data related to movement of the patient anatomical structure of the individual patient, is a 4D personalised functional model, generated by one or more steps from among: identifying surfaces and landmarks in the received pre-operative medical imagery, and fitting the body parts to which the identified surfaces and landmarks belong to an anatomical structure reference system by mapping the identified surfaces and landmarks to equivalents in the anatomical structure reference system; adding body parts fitted to the anatomical reference system including one or more from among: bones; joints; muscles; to the 4D personalised functional model.

23. The computer-implemented method of claim 22, wherein the post-operative personalised software model of the patient anatomical structure of the individual patient based on the personalised pre-operative software model and the software definition of the surgical procedure, is a 4D personalised functional model, generated by: identifying surfaces and landmarks in the pre-operative software model of patient anatomical structure as modified by the software definition of the surgical procedure, optionally as iteratively constrained according to claim 5 and claim 6, and fitting the body parts to which the identified surfaces and landmarks belong to an anatomical structure reference system by mapping the identified surfaces and landmarks to equivalents in the anatomical structure reference system; adding body parts fitted to the anatomical reference system including one or more from among: bones; joints; muscles; to the 4D personalised functional model.

24. The computer-implemented method of claim 22 and claim 23, further including simulating by the at least one computer processor movement of the bones and muscles of the patient anatomical structure of the individual patient according to the post-operative personalised software model to generate simulation output, wherein the simulation comprises comparing the 4D personalised functional model of the pre-operative software model of patient anatomical structure with the 4D personalised functional model of the post-operative personalised software model of patient anatomical structure.

25. The computer-implemented method of claim 24, wherein the software definition of the surgical procedure is iteratively constrained to a defined solution surgical procedure within the software definition of the surgical procedure by a machine learning model or another solving algorithm seeking to achieve a defined optimum outcome in the comparison of the 4D personalised functional model of the pre-operative software model of patient anatomical structure with the 4D personalised functional model of the post-operative personalised software model of patient anatomical structure.

26. The computer-implemented method according to any of claims 1 to 25, wherein the personalised surgical cutting guide model comprises two osteotomy planes for cutting a bone, which two osteotomy planes intersect at a line to define a personalised post-operative wedge geometry between the two planes terminating at the line, the wedge geometry corresponding to a region between portions of the bone post-operation; and the method further comprises generating 3D printing instructions based on the personalised post-operative wedge geometry to transmit to a 3D printer to form a wedge-shaped surgical tool.

27. The computer-implemented method according to claim 26, further comprising: at a 3D printer, 3D printing the wedge-shaped surgical tool according to the 3D printing instructions based on the personalised post-operative wedge geometry.

28. The steps, features, integers, compositions and/or compounds disclosed herein or indicated in the specification of this application individually or collectively, and any and all combinations of two or more of said steps or features.

29. A computer program which, when executed by a computing apparatus comprising processor hardware and memory hardware, causes the processor hardware to perform the computer-implemented method according to any of claims 1 to 27.

30. A computer-readable medium storing the computer program according to claim 29.

31. A non-transitory computer-readable medium storing the computer program according to claim 29.

32. An apparatus comprising a processor and a memory, the processor being configured to execute processing instructions stored by the memory, and by executing the processing instructions to perform a computer-implemented method of pre-operative planning for patient surgery, the method including: generating by at least one computer processor a personalised pre-operative software model of patient anatomical structure of an individual patient based on received pre-operative medical imagery of the patient anatomical structure of the individual patient and pre-operative patient movement analysis data related to movement of the patient anatomical structure of the individual patient, the patient movement analysis data being generated at least in part by a movement sensor system observing movement of the patient anatomical structure of the individual patient over a period of time, wherein the personalised pre-operative software model of the patient includes bone position and dimension data of bones in the patient anatomical structure of the individual patient, muscle position and dimension data of muscles in the patient anatomical structure of the individual patient, and relationship definition data defining relationships between bones and muscles in the patient anatomical structure of the individual patient; receiving by the at least one computer processor a software definition of a surgical procedure to be performed in relation to the personalised pre-operative software model of the patient anatomical structure of the individual patient; generating by the at least one computer processor a modified personalised post-operative software model of the patient anatomical structure of the individual patient based on the personalised pre-operative software model and the software definition of the surgical procedure; simulating by the at least one computer processor movement of the bones and muscles of the patient anatomical structure of the individual patient according to the modified personalised post-operative software model to generate simulation output; allowing by the at least one computer processor adjustment of the personalised post-operative software model based on the simulation output; generating by the at least one computer processor a personalised surgical cutting guide model to facilitate the surgical procedure based on the personalised post-operative software model or the adjusted personalised post-operative software model; and generating by the at least one computer processor at least one of:

3D printing instructions based on the personalised surgical cutting guide model to transmit to a 3D printer to form a surgical cutting guide; or personalised surgical operation instructions to transmit to a robotic surgical system for use in performing the surgical procedure.

33. An apparatus comprising a processor and a memory, the processor being configured to execute processing instructions stored by the memory, and by executing the processing instructions to perform a computer-implemented method according to any of claims 1 to 27.

Description:
A computer program, apparatus, and computer-implemented method of pre-operative planning for patient surgery

Technical Field

[0001] Embodiments generally relate to methods, systems, and devices for modelling and simulation of orthopaedic surgery. In particular, embodiments relate to methods, systems, and devices for patient physiology modelling and surgical simulation for orthopaedic surgeries.

Background

[0002] Orthopaedic surgeons are increasingly relying on virtual surgery planning technologies to aid their clinical decision making. However, existing technologies are based on fitting three-dimensional (3D) images of a deformed bone to the 3D image of an idealised bone, with no consideration of functional consequences to a patient, i.e., muscular action, and local and whole-body movement. As a result, the patient’s functional capacity might not improve after undergoing corrective surgery.

[0003] It is desired to address or ameliorate one or more shortcomings or disadvantages of prior virtual surgery planning technologies, or to at least provide a useful alternative thereto.

[0004] Any discussion of documents, acts, materials, devices, articles or the like which has been included in the present specification is not to be taken as an admission that any or all of these matters form part of the prior art base or were common general knowledge in the field relevant to the present disclosure as it existed before the priority date of each claim of this application.

[0005] Throughout this specification the word "comprise", or variations such as "comprises" or "comprising", will be understood to imply the inclusion of a stated element, integer or step, or group of elements, integers or steps, but not the exclusion of any other element, integer or step, or group of elements, integers or steps.

Summary

[0006] Embodiments include a computer-implemented method of pre-operative planning for patient surgery, the method including: generating by at least one computer processor a personalised pre-operative software model of patient anatomical structure of an individual patient based on received pre-operative medical imagery of the patient anatomical structure of the individual patient and pre-operative patient movement analysis data related to movement of the patient anatomical structure of the individual patient, the patient movement analysis data being generated at least in part by a movement sensor system observing movement of the patient anatomical structure of the individual patient over a period of time, wherein the personalised pre-operative software model of the patient includes bone position and dimension data of bones in the patient anatomical structure of the individual patient, muscle position and dimension data of muscles in the patient anatomical structure of the individual patient, and relationship definition data defining relationships between bones and muscles in the patient anatomical structure of the individual patient; receiving by the at least one computer processor a software definition of a surgical procedure to be performed in relation to the personalised pre-operative software model of the patient anatomical structure of the individual patient; generating by the at least one computer processor a modified personalised post-operative software model of the patient anatomical structure of the individual patient based on the personalised pre-operative software model and the software definition of the surgical procedure; simulating by the at least one computer processor movement of the bones and muscles of the patient anatomical structure of the individual patient according to the modified personalised post-operative software model to generate simulation output; allowing by the at least one computer processor adjustment of the personalised post-operative software model based on the simulation output; generating by the at least one computer processor a personalised surgical cutting guide model to facilitate the surgical procedure based on the personalised post-operative software model or the adjusted personalised post-operative software model; and generating by the at least one computer processor at least one of: 3D printing instructions based on the personalised surgical cutting guide model to transmit to a 3D printer to form a surgical cutting guide; or personalised surgical operation instructions to transmit to a robotic surgical system for use in performing the surgical procedure.

[0007] Optionally, the surgery is one or more from among: an osteotomy; a femoral osteotomy; a proximal femoral osteotomy; a tibial osteotomy; a distal tibial osteotomy; a proximal tibial osteotomy; a distal femoral osteotomy; a high tibial osteotomy.

[0008] Optionally, the surgical procedure includes one or more osteotomies, and the simulation output comprises the surgical cutting guide model, the surgical cutting guide model defining one or more osteotomy planes in which to cut a bone or bones of the patient anatomical structure to facilitate reconfiguration of the patient anatomical structure in accordance with the modified software model or the adjusted modified software model.

[0009] Optionally, the surgical procedure includes one or more osteotomies, and the simulation output comprises an implant configured to secure a first portion of a bone cut by the one or more osteotomies to a second, separate, portion of the same bone cut by the one or more osteotomies, in a configuration determined in accordance with the modified software model or the adjusted modified software model.

[0010] Optionally, the software definition of the surgical procedure defines a value range for each of the following parameters: a number of osteotomies in each of one or more specified bones; for each osteotomy, a specific bone from the patient anatomical structure to be cut by the osteotomy; for each osteotomy, a position and orientation of an osteotomy plane; for each osteotomy, a relative position and orientation of the two or more post-osteotomy distinct bone portions; wherein the simulation iteratively constrains the software definition of the surgical procedure to include a patient-specific osteotomy plan by determining a specific value from within the value range for each of the parameters, and wherein the surgical cutting guide model and the implant configuration implement the specific values for the parameters.

[0011] Optionally, the software definition of the surgical procedure defines a value range for each of the following parameters: a repositioning of one or more specified bones within an allowable range; a range of available implants with corresponding chisels; wherein the simulation iteratively constrains the software definition of the surgical procedure to include a patient-specific implant plan by determining one or more repositionings and, for each determined repositioning, a selected implant and chisel from the available range.
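By way of illustration only, the value ranges described in paragraphs [0010] and [0011] could be held in simple data structures such as the following Python sketch. All class, field, and function names (OsteotomyRange, ImplantRange, constrain_to_plan, and so on) are hypothetical and are not part of the disclosed system; the sketch merely shows one way a software definition of a surgical procedure could record ranges that a simulation later narrows to specific values.

    from dataclasses import dataclass
    from typing import List, Tuple

    # Hypothetical sketch of a software definition of a surgical procedure.
    # Each field stores a value range; the simulation narrows each range to a
    # single patient-specific value (see constrain_to_plan below).

    @dataclass
    class OsteotomyRange:
        bone: str                                   # bone to be cut, e.g. "femur"
        plane_position_mm: Tuple[float, float]      # allowable position of the osteotomy plane
        plane_angle_deg: Tuple[float, float]        # allowable plane orientation
        fragment_rotation_deg: Tuple[float, float]  # allowable relative re-orientation of bone portions

    @dataclass
    class ImplantRange:
        allowed_repositioning_deg: Tuple[float, float]  # allowable bone repositioning
        available_implants: List[str]                   # implants with corresponding chisels

    def constrain_to_plan(osteotomy: OsteotomyRange, implant: ImplantRange):
        """Toy 'iterative constraint': here we simply take range midpoints and the
        first available implant. A real system would instead drive these choices
        from the movement simulation output."""
        midpoint = lambda r: 0.5 * (r[0] + r[1])
        return {
            "bone": osteotomy.bone,
            "plane_position_mm": midpoint(osteotomy.plane_position_mm),
            "plane_angle_deg": midpoint(osteotomy.plane_angle_deg),
            "fragment_rotation_deg": midpoint(osteotomy.fragment_rotation_deg),
            "repositioning_deg": midpoint(implant.allowed_repositioning_deg),
            "implant": implant.available_implants[0],
        }

    # Example usage with made-up numbers:
    plan = constrain_to_plan(
        OsteotomyRange("femur", (40.0, 60.0), (-15.0, 15.0), (0.0, 30.0)),
        ImplantRange((0.0, 25.0), ["blade-plate A", "blade-plate B"]),
    )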

[0012] Optionally, the surgical cutting guide model implements the specific values for one or more osteotomies from the patient-specific osteotomy plan, wherein the surgical cutting guide model is generated by, for each osteotomy: defining a mask portion of the surgical cutting guide model configured to conform to one or more curves or other geometric features of the bone position and dimension data of bones in the patient anatomical structure of the individual patient.

[0013] Optionally, the surgical cutting guide model is further generated by, for each osteotomy: defining an osteotomy slot portion of the surgical cutting guide model being an aperture in the mask portion positioned and orientated according to the specific values for the osteotomy; defining an osteotomy saw blade insertion profile and extruding the surgical cutting guide model according to the saw blade insertion profile to a predefined distance proximally and distally of the defined slot portion.
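As a purely illustrative geometric sketch (not the disclosed guide-generation algorithm), the following Python function computes the eight corner points of a rectangular slot volume from the specific osteotomy values named in paragraph [0013]: a slot centre on the mask, the osteotomy plane normal, an in-plane cutting direction, a slot length, a blade clearance, and a predefined proximal/distal extrusion distance. All parameter names are assumptions introduced for illustration.

    import numpy as np

    def slot_corner_points(centre, plane_normal, cut_direction,
                           slot_length_mm, blade_clearance_mm, extrusion_mm):
        """Return the 8 corners of a rectangular osteotomy slot volume.

        The slot is slot_length_mm long along the (projected) cutting direction,
        blade_clearance_mm wide along the osteotomy plane normal, and extends
        extrusion_mm either side along the third axis, mimicking the saw-blade
        insertion profile extrusion described above."""
        n = np.asarray(plane_normal, float)
        n /= np.linalg.norm(n)
        u = np.asarray(cut_direction, float)
        u -= np.dot(u, n) * n          # project the cut direction into the osteotomy plane
        u /= np.linalg.norm(u)
        v = np.cross(n, u)             # in-plane axis for the proximal/distal extrusion
        c = np.asarray(centre, float)
        corners = []
        for su in (-0.5, 0.5):
            for sn in (-0.5, 0.5):
                for sv in (-1.0, 1.0):
                    corners.append(c + su * slot_length_mm * u
                                     + sn * blade_clearance_mm * n
                                     + sv * extrusion_mm * v)
        return np.array(corners)

    # Example with made-up values: a 40 mm slot, 1.5 mm blade clearance,
    # extruded 10 mm proximally and distally of the defined slot portion.
    pts = slot_corner_points([0, 0, 0], [0, 0, 1], [1, 0, 0], 40.0, 1.5, 10.0)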

[0014] Optionally, the surgical cutting guide model is further generated by, for each osteotomy: defining an osteotomy chisel insertion as a location on the defined mask portion and a direction relative to the mask, extruding the surgical cutting guide model according to the location and direction by a predefined distance distal from the osteotomy chisel insertion location.

[0015] Optionally, the surgical cutting guide model is further generated by, for each osteotomy: defining one or more implant fixation slots as a location on the defined mask portion based on a shaft surface of the implant, extruding the surgical cutting guide model at the one or more defined implant fixation slots by a predefined distance distally.

[0016] Optionally, the defined osteotomy chisel insertion further comprises a hole in the surgical cutting guide model configured to removably receive a guide wire and a guide wire seat, and wherein the surgical cutting guide model is extruded distally around the hole to define the guide wire seat, the guide wire seat being configured for insertion into the hole at one end and to longitudinally receive the guide wire.

[0017] Optionally, the surgical cutting guide model is converted to 3D printing instructions, and the method further comprises 3D printing a surgical cutting guide from the 3D printing instructions.

[0018] Optionally, the received pre-operative medical imagery of the patient anatomical structure of the individual patient is obtained by one or more from among: anthropometric data acquisition; attachment of MRI-compatible markers to the individual patient and MRI-scanning thereof, and analysis of the MRI-scanning to obtain MRI scans of the patient anatomical structure of the individual patient; placement of electromyography (EMG) units on the skin of the individual patient and measurement and analysis of EMG signals generated by the EMG units.

[0019] Optionally, generating the personalised pre-operative software model of the patient includes: generating at least one MRI-reconstructed portion by one or more from among: obtaining the bone position and dimension data of bones in the patient anatomical structure of the individual patient by executing a segmentation process on the MRI scans; obtaining the muscle position and dimension data of muscles in the patient anatomical structure of the individual patient by executing a segmentation process on the MRI scans; obtaining position and dimension data of a growth plate in one or more bones, by executing a segmentation process on the MRI scans.

[0020] Optionally, generating the personalised pre-operative software model of the patient includes: generating at least one CT-reconstructed portion by imaging the patient anatomical structure of the individual patient by a computerized tomography scan to obtain at least one CT scan, and obtaining the bone position and dimension data of bones in the patient anatomical structure of the individual patient by executing a segmentation process on the at least one CT scan.

[0021] Optionally, the method further comprises registering common landmarks in the bone position and dimension data of bones in the CT-reconstructed portion and the MRI-reconstructed portion.

[0022] Optionally, generating the personalised pre-operative software model of the patient includes a 3D anatomical analysis of the bone position and dimension data, including defining one or more axes and planes in the bone position and dimension data, and measuring one or more 3D angles between the defined one or more axes and planes.

[0023] Optionally, generating the post-operative personalised software model of the patient anatomical structure of the individual patient includes: combining a CAD model of the selected implant and chisel with the bone position and dimension data to verify that adequate bone thickness remains in place once the implant is implanted into a bone orifice created by the chisel.
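The 3D anatomical analysis of paragraph [0022] reduces, numerically, to measuring angles between defined axes and planes. The following minimal numpy sketch shows how such 3D angles could be computed from direction vectors and plane normals; the example axis names (a femoral neck axis versus a shaft axis) are illustrative assumptions only.

    import numpy as np

    def angle_between_axes(a, b):
        """3D angle (degrees) between two anatomical axes given as direction vectors."""
        a = np.asarray(a, float) / np.linalg.norm(a)
        b = np.asarray(b, float) / np.linalg.norm(b)
        return np.degrees(np.arccos(np.clip(np.dot(a, b), -1.0, 1.0)))

    def angle_axis_to_plane(axis, plane_normal):
        """3D angle (degrees) between an axis and a plane defined by its normal."""
        return 90.0 - angle_between_axes(axis, plane_normal)

    # Illustrative example: angle between a (made-up) femoral neck axis and shaft
    # axis, and the neck axis' inclination relative to a transverse plane.
    neck_axis = [0.7, 0.1, 0.7]
    shaft_axis = [0.0, 0.0, 1.0]
    transverse_plane_normal = [0.0, 0.0, 1.0]
    print(angle_between_axes(neck_axis, shaft_axis))
    print(angle_axis_to_plane(neck_axis, transverse_plane_normal))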

[0024] Optionally, generating the post-operative personalised software model of the patient anatomical structure of the individual patient includes: combining a CAD model of the selected implant and chisel with the growth plate position and dimension data to verify that adequate growth plate volume remains once the implant is implanted into a bone orifice created by the chisel.

[0025] Optionally, the patient movement analysis data is measured motion data of the individual patient.

[0026] Optionally, the patient movement analysis data is measured motion data of the individual patient obtained by motion capture while the individual patient is walking or performing another natural body movement. The measured motion data may be of motion about one or more of the hip joint, the knee joint, and the ankle joint.

[0027] Optionally, the personalised pre-operative software model of patient anatomical structure of the individual patient based on received pre-operative medical imagery of the patient anatomical structure of the individual patient and pre-operative patient movement analysis data related to movement of the patient anatomical structure of the individual patient, is a 4D personalised functional model, generated by one or more steps from among: identifying surfaces and landmarks in the received pre-operative medical imagery, and fitting the body parts to which the identified surfaces and landmarks belong to an anatomical structure reference system by mapping the identified surfaces and landmarks to equivalents in the anatomical structure reference system; adding body parts fitted to the anatomical reference system including one or more from among: bones; joints; muscles; to the 4D personalised functional model.
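Mapping identified landmarks to their equivalents in an anatomical structure reference system, as described in paragraph [0027], is at its core a point-set registration problem. The sketch below shows one standard way such a fit could be computed (a rigid Kabsch/SVD fit); it is offered only as an illustrative assumption, not as the fitting method actually used by the embodiments.

    import numpy as np

    def fit_landmarks(patient_landmarks, reference_landmarks):
        """Rigid (rotation + translation) fit of patient landmarks onto matching
        reference-system landmarks using the Kabsch algorithm.

        Both inputs are (N, 3) arrays with corresponding rows."""
        P = np.asarray(patient_landmarks, float)
        Q = np.asarray(reference_landmarks, float)
        p_mean, q_mean = P.mean(axis=0), Q.mean(axis=0)
        H = (P - p_mean).T @ (Q - q_mean)
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = q_mean - R @ p_mean
        return R, t                                    # q is approximately R @ p + t

    # Example: three corresponding (made-up) landmarks; the patient set is the
    # reference set rotated 90 degrees about the z axis.
    patient = [[10, 0, 0], [0, 10, 0], [0, 0, 10]]
    reference = [[0, 10, 0], [-10, 0, 0], [0, 0, 10]]
    R, t = fit_landmarks(patient, reference)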

[0028] Optionally, the post-operative personalised software model of the patient anatomical structure of the individual patient based on the personalised pre-operative software model and the software definition of the surgical procedure, is a 4D personalised functional model, generated by: identifying surfaces and landmarks in the pre-operative software model of patient anatomical structure as modified by the software definition of the surgical procedure, optionally as iteratively constrained (see above), and fitting the body parts to which the identified surfaces and landmarks belong to an anatomical structure reference system by mapping the identified surfaces and landmarks to equivalents in the anatomical structure reference system; adding body parts fitted to the anatomical reference system including one or more from among: bones; joints; muscles; to the 4D personalised functional model.

[0029] Optionally, the method further includes simulating by the at least one computer processor movement of the bones and muscles of the patient anatomical structure of the individual patient according to the post-operative personalised software model to generate simulation output, wherein the simulation comprises comparing the 4D personalised functional model of the preoperative software model of patient anatomical structure with the 4D personalised functional model of the post-operative personalised software model of patient anatomical structure.

[0030] Optionally, the software definition of the surgical procedure is iteratively constrained to a defined solution surgical procedure within the software definition of the surgical procedure by a machine learning model or another solving algorithm seeking to achieve a defined optimum outcome in the comparison of the 4D personalised functional model of the pre-operative software model of patient anatomical structure with the 4D personalised functional model of the postoperative personalised software model of patient anatomical structure.
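Paragraph [0030] describes searching within the software definition of the surgical procedure for a solution that optimises the comparison between the pre-operative and post-operative 4D functional models. The hedged sketch below illustrates the idea with a deliberately simplified "solving algorithm" (a grid search over a single correction angle) and a toy stand-in for the movement simulation; the function names, parameters, and cost are assumptions introduced for illustration only and do not represent the machine learning model or the full musculoskeletal simulation.

    import numpy as np

    # Toy stand-in for the movement simulation: predicts a hip-rotation curve over
    # a gait cycle for a given derotation correction (degrees). A real system
    # would run the full musculoskeletal simulation of the post-operative model.
    def simulate_hip_rotation(correction_deg, pre_op_curve):
        return pre_op_curve - correction_deg

    def solve_correction(pre_op_curve, target_curve, candidate_corrections):
        """Pick the candidate correction whose simulated post-operative curve is
        closest (in a least-squares sense) to the defined optimum outcome."""
        costs = []
        for c in candidate_corrections:
            post_op_curve = simulate_hip_rotation(c, pre_op_curve)
            costs.append(np.mean((post_op_curve - target_curve) ** 2))
        return candidate_corrections[int(np.argmin(costs))]

    # Example: a pre-operative curve with roughly 25 degrees of excess internal
    # rotation, a normative target curve, and candidate corrections of 0-40 degrees.
    gait_cycle = np.linspace(0.0, 1.0, 101)
    target = 5.0 * np.sin(2 * np.pi * gait_cycle)
    pre_op = target + 25.0
    best = solve_correction(pre_op, target, np.arange(0.0, 41.0, 1.0))   # -> 25.0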

[0031] Optionally, the personalised surgical cutting guide model comprises two osteotomy planes for cutting a bone, which two osteotomy planes intersect at a line to define a personalised post-operative wedge geometry between the two planes terminating at the line, the wedge geometry corresponding to a region between portions of the bone post-operation, and the method includes generating 3D printing instructions based on the personalised post-operative wedge geometry to transmit to a 3D printer to form a wedge-shaped surgical tool.
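As a small geometric aside (an illustrative sketch, not the disclosed wedge-generation step), the opening angle of the wedge defined by the two intersecting osteotomy planes of paragraph [0031] is the dihedral angle between the planes, and the line at which the wedge terminates lies along the cross product of the plane normals:

    import numpy as np

    def wedge_geometry(normal_a, normal_b):
        """Return (wedge_angle_deg, intersection_line_direction) for two
        osteotomy planes given by their normals."""
        a = np.asarray(normal_a, float) / np.linalg.norm(normal_a)
        b = np.asarray(normal_b, float) / np.linalg.norm(normal_b)
        angle = np.degrees(np.arccos(np.clip(abs(np.dot(a, b)), -1.0, 1.0)))
        line_direction = np.cross(a, b)
        line_direction /= np.linalg.norm(line_direction)
        return angle, line_direction

    # Example: two planes whose normals are 20 degrees apart define a 20-degree
    # wedge geometry terminating along the y axis.
    angle, line = wedge_geometry([0, 0, 1],
                                 [np.sin(np.radians(20)), 0, np.cos(np.radians(20))])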

[0032] Furthermore, the method may include, at a 3D printer, 3D printing the wedge-shaped surgical tool according to the 3D printing instructions based on the personalised post-operative wedge geometry.

[0033] Embodiments may include a computer program which, when executed by a computing apparatus comprising processor hardware and memory hardware, causes the processor hardware to perform a computer-implemented method of an embodiment.

[0034] The computer program may be stored on a computer-readable medium.

[0035] The computer-readable medium storing the computer program may be non-transitory.

[0036] Embodiments may include an apparatus comprising a processor and a memory, the processor being configured to execute processing instructions stored by the memory, and by executing the processing instructions to perform a computer-implemented method of pre-operative planning for patient surgery, the method including: generating by at least one computer processor a personalised pre-operative software model of patient anatomical structure of an individual patient based on received pre-operative medical imagery of the patient anatomical structure of the individual patient and pre-operative patient movement analysis data related to movement of the patient anatomical structure of the individual patient, the patient movement analysis data being generated at least in part by a movement sensor system observing movement of the patient anatomical structure of the individual patient over a period of time, wherein the personalised pre-operative software model of the patient includes bone position and dimension data of bones in the patient anatomical structure of the individual patient, muscle position and dimension data of muscles in the patient anatomical structure of the individual patient, and relationship definition data defining relationships between bones and muscles in the patient anatomical structure of the individual patient; receiving by the at least one computer processor a software definition of a surgical procedure to be performed in relation to the personalised pre-operative software model of the patient anatomical structure of the individual patient; generating by the at least one computer processor a modified personalised post-operative software model of the patient anatomical structure of the individual patient based on the personalised pre-operative software model and the software definition of the surgical procedure; simulating by the at least one computer processor movement of the bones and muscles of the patient anatomical structure of the individual patient according to the modified personalised post-operative software model to generate simulation output; allowing by the at least one computer processor adjustment of the personalised post-operative software model based on the simulation output; generating by the at least one computer processor a personalised surgical cutting guide model to facilitate the surgical procedure based on the personalised post-operative software model or the adjusted personalised post-operative software model; and generating by the at least one computer processor at least one of: 3D printing instructions based on the personalised surgical cutting guide model to transmit to a 3D printer to form a surgical cutting guide; or personalised surgical operation instructions to transmit to a robotic surgical system for use in performing the surgical procedure.

[0037] Embodiments may include a computer-implemented method of pre-operative planning for patient surgery, the method including: generating a personalised pre-operative software model based on received pre-operative medical imagery and pre-operative patient movement analysis data related to movement of the patient anatomical structure of the individual patient; receiving by the at least one computer processor a software definition of a surgical procedure to be performed in relation to the personalised pre-operative software model of the patient anatomical structure; generating by the at least one computer processor a modified personalised post-operative software model of the patient anatomical structure; simulating by the at least one computer processor movement of the patient anatomical structure according to the modified personalised post-operative software model to generate simulation output; allowing by the at least one computer processor adjustment of the personalised post-operative software model based on the simulation output; generating by the at least one computer processor a personalised surgical cutting guide model; and generating by the at least one computer processor at least one of: 3D printing instructions based on the personalised surgical cutting guide model or personalised surgical operation instructions to transmit to a robotic surgical system for use in performing the surgical procedure.

Brief Description of Drawings

[0038] Embodiments are described in further detail below, by way of example and with reference to the accompanying drawings, in which:

[0039] Figure 1 shows a schematic diagram of an example orthopaedic surgery planning device, according to some embodiments;

[0040] Figure 2 shows a flowchart of a method of generation of surgical operation instructions and three-dimensional model printing instructions for an orthopaedic surgery, according to some embodiments;

[0041] Figure 3 shows a flowchart of a method of performing a medical imagery segmentation process, according to some embodiments;

[0042] Figures 4A to 4F illustrate patient muscular skeletal systems according to some embodiments;

[0043] Figures 5A to 5C illustrate planned rotational corrections for a proximal femoral osteotomy according to some embodiments;

[0044] Figures 6A to 6D illustrate constraints on software definition of surgical procedure according to some embodiments;

[0045] Figure 7 illustrates muscle segmentation according to an embodiment;

[0046] Figure 8 illustrates movement analysis data according to some embodiments;

[0047] Figure 9 illustrates surgical cutting guide design for a proximal femoral osteotomy according to some embodiments;

[0048] Figure 10 illustrates a surgical cutting guide manufacture for a proximal femoral osteotomy according to some embodiments;

[0049] Figures 11A and 11B illustrate osteotomy planes for a distal femoral osteotomy according to some embodiments;

[0050] Figures 11C and 11D illustrate recommended implant selection, configuration, and position for a distal femoral osteotomy according to some embodiments;

[0051] Figure 11E illustrates surgical guide design, configuration, and position for a distal femoral osteotomy according to some embodiments;

[0052] Figures 12A, 12B, and 12C illustrate osteotomy planes for a distal tibial osteotomy according to some embodiments;

[0053] Figures 12D and 12E illustrate recommended implant selection, configuration, and position for a distal tibial osteotomy according to some embodiments;

[0054] Figure 12F illustrates surgical guide design, configuration, and position for a distal tibial osteotomy according to some embodiments;

[0055] Figure 13A illustrates an osteotomy plane for a proximal tibial osteotomy according to some embodiments;

[0056] Figures 13B and 13C illustrate cutting guide design, configuration, and position for a proximal tibial osteotomy according to some embodiments; and

[0057] Figures 13D and 13E illustrate a surgical instrument (i.e., wedge) design, configuration, and position for a proximal tibial osteotomy according to some embodiments.

Description of Embodiments

[0058] Embodiments generally relate to methods, systems, and devices for modelling and simulation of orthopaedic surgery. Particular embodiments relate to methods, systems, and devices for patient physiology modelling and surgical simulation for orthopaedic surgeries, specifically osteotomy surgery. Embodiments also include computer programs, processing instructions, and/or other forms of software, for performing the method and processing steps (other than those specified as being performed by a human expert). Such software may be stored on a computer-readable medium such as a non-transitory computer-readable medium.

[0059] Referring to the drawings, Figure 1 shows a schematic illustration of an orthopaedic surgery planning device 100 (planning device 100) for generating orthopaedic surgery instructions and three-dimensional (3D) model printing instructions for an orthopaedic surgery, according to some embodiments. Corresponding worked examples are illustrated in Figures 4A to 13E. The worked examples are presented in the context of a proximal femoral osteotomy, a distal femoral osteotomy, a distal tibial osteotomy, and a proximal tibial osteotomy, so that the concepts disclosed herein can be understood in a consistent manner in particular anatomical implementation examples. However, the methods, systems, and devices of embodiments may also be applied to other osteotomies.

[0060] In some embodiments, planning device 100 comprises device processor circuitry 111 (described herein as a processor 111 for convenient reference) and a memory 114 accessible to device processor circuitry 111. Processor 111 may be configured to access data stored in memory 114, to execute instructions stored in memory 114, and to read and write data to and from memory 114. Device processor circuitry 111 (i.e. processor 111) may comprise one or more microprocessors, microcontrollers, central processing units (CPUs), application specific instruction set processors (ASIPs), or other processors capable of reading and executing instruction code.

[0061] Memory 114 may comprise one or more volatile or non-volatile memory types, such as RAM, ROM, EEPROM, or flash, for example. Memory 114 may be configured to store executable applications for execution by processor 111. For example, memory 114 may store at least one 3D anatomical modelling module 126 configured to allow a user to generate at least one 3D anatomical model for use in orthopaedic surgery planning. Memory 114 may also store pre-operative medical imagery 116, pre-operative movement analysis data 118, personalised surgical operation instructions 120, 3D model printing instructions 122, surgery planning module 124, and/or software definition module 128, for example.

[0062] Exemplary use cases are illustrated in Figures 4A to 4C, showing hip deformities in juvenile patients. Figure 4A shows an image of a hip with Perthes’ disease, Figure 4B shows an image of a hip with deformity subsequent to prior management of a slipped capital femoral epiphysis, and Figure 4C shows a hip joint of a patient with a neuromuscular disease such as cerebral palsy. Each case is a candidate for proximal femoral osteotomy: a complex corrective surgical procedure. In some embodiments, pre-operative medical imagery 116 may include a plurality of medical imaging scans of an individual patient, such as magnetic resonance imaging (MRI) scans and/or computed tomography (CT) scans. For example, pre-operative medical imagery 116 may include at least one of: a full lower limb MRI scan, a full length femur MRI scan, an affected hip MRI scan, and/or a full length femur CT scan. In some embodiments, a plurality of MRI-compatible markers may be used in obtaining the MRI scans of the pre-operative medical imagery 116. MRI and CT scans may be an input to a step of anatomical modelling. Figures 4D and 4E illustrate images collected as pre-operative medical imagery 116: a pelvis and full-length femur MRI in Figure 4D (obtained at 1.5T with a 3D PD SPACE sequence, slice thickness 1.1mm, voxel size 0.83x0.83x1.0mm) and a right hip CT scan in Figure 4E (slice thickness 1.1mm). It is noted that the hip imaged in Figures 4D and 4E has deformity which developed following previous stabilisation management of a right slipped capital femoral epiphysis fracture with a cannulated screw. The pre-operative medical imagery may be, for example, segmented by an AI algorithm to generate a personalised pre-operative software model of the patient. It is noted that, in the alternative anatomical contexts of tibial osteotomies (see Figures 12A to 13E), the scanning would be of a tibia and may also include the knee joint, the ankle joint, and optionally the fibula. Furthermore, it is noted that the segmented femurs imaged in Figure 4F may be pre-operative medical imagery for a distal femoral osteotomy, such as illustrated in Figures 11A to 11E.

[0063] For example, a segmentation algorithm may execute on the collected pre-operative medical imagery, by processing either the MRI scans alone or the MRI scans combined with the CT scans, to: obtain the bone position and dimension data of bones in the patient anatomical structure of the individual patient; obtain the muscle position and dimension data of muscles in the patient anatomical structure of the individual patient; and/or obtain position and dimension data of a growth plate in one or more bones. Figure 4F illustrates, in two dimensions, the 3D segmented femur MRI-reconstructed portions obtained by executing a segmentation algorithm on the obtained pre-operative medical imagery, in the context of an osteotomy to be performed on a femur.
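In practice the segmentation of the pre-operative imagery would be performed by a trained (AI) segmentation model, as noted above. Purely to illustrate the shape of the data involved, the hedged sketch below segments a small synthetic 3D volume by intensity thresholding and connected-component labelling; it is a stand-in, not the segmentation algorithm of the embodiments.

    import numpy as np
    from scipy import ndimage

    def toy_segment(volume, threshold):
        """Binary-threshold a 3D image volume and label connected components,
        returning a label volume and the voxel count of each component."""
        binary = volume > threshold
        labels, n_components = ndimage.label(binary)
        counts = ndimage.sum(binary, labels, index=range(1, n_components + 1))
        return labels, counts

    # Synthetic 'scan': a dark background with two bright blobs standing in for
    # bone cross-sections.
    volume = np.zeros((40, 40, 40))
    volume[5:15, 5:15, 5:15] = 1.0
    volume[25:35, 25:35, 25:35] = 1.0
    labels, counts = toy_segment(volume, threshold=0.5)   # two components of 1000 voxels each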

[0064] In some embodiments, the full lower limb MRI scan may be acquired with the individual patient lying supine with their legs extended. The full lower limb MRI scan may be conducted from the most superior point of the ilium to the feet. The full lower limb MRI scan may have a ‘3D PD SPACE’ sequence, for example. The full lower limb MRI scan may have a slice thickness of about 1.1mm, for example. The full lower limb MRI scan may have a voxel size of about 0.83x0.83x1.0mm, for example.

[0065] In some embodiments, the full length femur MRI scan may be acquired with the individual patient lying supine with their legs extended. The full length femur MRI scan may be conducted from the most superior point of the ilium to below the femoral condyles. The full length femur MRI scan may have a ‘3D DIXON’ sequence, for example. The full length femur MRI scan may have a slice thickness of about 0.9mm, for example. The full length femur MRI scan may have a voxel size of about 0.88x0.88x0.88mm, for example.

[0066] In some embodiments, the affected hip MRI scan may be acquired with the individual patient lying supine with their legs extended. The affected hip MRI scan may be conducted from the anterior superior iliac spine of the affected hip to the lesser trochanter. The affected hip MRI scan may have a ‘3D T2’ SPACE sequence, for example. The affected hip MRI scan may have a slice thickness of about 0.7mm, for example. The affected hip MRI scan may have a voxel size of about 0.43x0.43x0.43mm, for example.

[0067] In some embodiments, the full length femur CT scan may be acquired with the individual patient lying supine with their legs extended. The full length femur CT scan may be conducted from the anterior superior iliac spine of the affected hip to below the femoral condyles. The full length femur CT scan may have a slice thickness of less than about 1.0mm, for example.
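The acquisition parameters listed in paragraphs [0064] to [0067] could be captured in a small configuration structure such as the following sketch. The field and class names (ScanProtocol and so on) are illustrative assumptions, not part of the disclosed system; the numeric values simply restate the examples given above.

    from dataclasses import dataclass
    from typing import Optional, Tuple

    @dataclass
    class ScanProtocol:
        """Illustrative container for the acquisition parameters in [0064]-[0067]."""
        modality: str                                    # "MRI" or "CT"
        region: str                                      # anatomical coverage of the scan
        sequence: Optional[str]                          # MRI sequence, if applicable
        slice_thickness_mm: float
        voxel_size_mm: Optional[Tuple[float, float, float]]

    PROTOCOLS = [
        ScanProtocol("MRI", "full lower limb (ilium to feet)", "3D PD SPACE", 1.1, (0.83, 0.83, 1.0)),
        ScanProtocol("MRI", "full length femur", "3D DIXON", 0.9, (0.88, 0.88, 0.88)),
        ScanProtocol("MRI", "affected hip", "3D T2", 0.7, (0.43, 0.43, 0.43)),
        ScanProtocol("CT", "full length femur", None, 1.0, None),   # slice thickness below about 1.0 mm
    ]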

[0068] In some embodiments, whether the anatomical context is a femoral osteotomy procedure or a tibial osteotomy procedure, pre-operative movement analysis data 118 may include at least one of: anthropometric data, electromyography (EMG) data, and/or 3D gait data. Anthropometric data may include at least one of the individual patient’s: height, mass, leg length, frontal plane knee alignment, knee width, and/or ankle width. EMG data may be acquired via a motion capture system, such as a ‘Vicon system’, for example. EMG data may be acquired via the use of a plurality of EMG units attached to the individual patient’s body. That is, a plurality of EMG units may be attached to the skin of the individual patient, and the EMG signals generated by the EMG units measured and analysed to obtain EMG data. In some embodiments, the pre-operative movement analysis data 118 may be measured motion data of the individual patient. In some embodiments, 3D gait data may be acquired via a motion capture system, such as a ‘Vicon system’, for example. 3D gait data may be acquired via a standing calibration trial and/or at least 10 walking trials, for example.
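One possible in-memory representation of pre-operative movement analysis data 118 is sketched below. The class and field names are hypothetical assumptions for illustration; they simply group the anthropometric, EMG, and 3D gait components described in the preceding paragraph.

    from dataclasses import dataclass, field
    from typing import Dict, List
    import numpy as np

    @dataclass
    class MovementAnalysisData:
        """Illustrative container for pre-operative movement analysis data 118."""
        # Anthropometric data
        height_m: float
        mass_kg: float
        leg_length_mm: float
        knee_width_mm: float
        ankle_width_mm: float
        # EMG data: one signal per muscle, sampled over the trial
        emg_signals: Dict[str, np.ndarray] = field(default_factory=dict)
        # 3D gait data: marker trajectories per walking trial, shape (frames, markers, 3)
        gait_trials: List[np.ndarray] = field(default_factory=list)

    # Example: one (synthetic) walking trial and one EMG channel with made-up values.
    data = MovementAnalysisData(
        height_m=1.52, mass_kg=45.0, leg_length_mm=780.0,
        knee_width_mm=95.0, ankle_width_mm=68.0,
        emg_signals={"rectus_femoris": np.zeros(2000)},
        gait_trials=[np.zeros((200, 39, 3))],
    )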

[0069] The pre-operative movement analysis data 118 may be data representing movement about the hip joint. Alternatively, pre-operative movement analysis data 118 may be data representing movement about the knee joint, as explained in Barzan M, Modenese L, Carty CP, Maine S, Stockton CA, Sancisi N, Lewis A, Grant J, Lloyd DG, Brito da Luz S. Development and validation of subject-specific pediatric multibody knee kinematic models with ligamentous constraints. J Biomech. 2019 Aug 27;93:194-203. doi: 10.1016/j.jbiomech.2019.07.001. Epub 2019 Jul 8. PMID: 31331662.

[0070] Planning device 100 further comprises an electronic interface 113 to allow communication between planning device 100 and a user. Electronic interface 113 may comprise one or more of a camera, a speaker, a mouse, a keyboard, a touchpad, buttons, sliders, and LEDs, for example. In some embodiments, electronic interface 113 may be used to alert the user of a particular event, such as the device being ready for use, for example.

[0071] To facilitate communication with external and/or remote devices, planning device 100 further comprises a communications module 112. Communications module 112 may allow for wired and/or wireless communication between planning device 100 and external computing devices and components. Communications module 112 may facilitate communication via Bluetooth, USB, Wi-Fi, Ethernet, or via a telecommunications network, for example. According to some embodiments, communication module 112 may facilitate communication with external devices and systems via a network 140. The external devices may include a computer server (or server system), a user device, such as a handheld computing device or other form of computing device, and a doctor device, which may also be a handheld computing device or other form of computing device. In some embodiments, processor 111, memory 114, and communications module 112 may be in the form of a microcontroller, such as an Arduino, for example.

[0072] Network 140 may comprise one or more local area networks or wide area networks that facilitate communication between planning device 100 and other devices, such as servers or computers, connected to network 140. For example, according to some embodiments, network 140 may be the internet. However, network 140 may comprise at least a portion of any one or more networks having one or more nodes that transmit, receive, forward, generate, buffer, store, route, switch, process, or a combination thereof, etc. one or more messages, packets, signals, some combination thereof, or so forth. Network 140 may include, for example, one or more of: a wireless network, a wired network, an internet, an intranet, a public network, a packet-switched network, a circuit-switched network, an ad hoc network, an infrastructure network, a public- switched telephone network (PSTN), a cable network, a cellular network, a satellite network, a fibre-optic network, or some combination thereof.

[0073] Figure 2 shows a flowchart of a method 200 for pre-operative planning for patient surgery, according to some embodiments. The pre-operative planning for patient surgery may include personalised surgical operation instructions 120 and 3D model printing instructions 122 for an orthopaedic corrective surgery. Processor 111 begins method 200 on execution of surgery planning module 124. At step S202, processor 111 generates a personalised pre-operative software model of the individual patient’s anatomical structure (pre-operative software model). In some embodiments, the personalised pre-operative software model is based on at least one of: the pre-operative medical imagery 116 and the pre-operative movement analysis data 118. In some embodiments, the pre-operative software model may include bone position and dimension data of bones in the anatomical structure of the individual patient. In some embodiments, the preoperative software model may include muscle position and dimension data of muscles in the anatomical structure of the individual patient. In some embodiments, the pre-operative software model may include relationship definition data defining relationships between bones and muscles in the anatomical structure of the individual patient. Figure 4F represents 3D segmented femurs in a personalised pre-operative software model of an individual patient’s anatomical structure. In the alternative anatomical context of a tibial osteotomy, a tibia may be segmented from surrounding muscles, bones and joints.

[0074] A segmented tibia is illustrated in Figures 12A to 12F, which illustrate various aspects of a distal tibial osteotomy, and Figures 13A to 13E, which illustrate various aspects of a proximal tibial osteotomy. In each case, the bone illustrated is an image of a tibia obtained by segmentation of pre-operative medical imagery, such as CT-scanning or MRI-scanning.

[0075] In some embodiments, generation of the personalised pre-operative software model of the patient anatomical structure of an individual patient includes generation of at least one MRI-reconstructed portion. In some embodiments, generation of the personalised pre-operative software model of the patient anatomical structure of an individual patient includes generation of at least one CT-reconstructed portion. To generate the at least one MRI-reconstructed portion and/or the at least one CT-reconstructed portion, processor 111 performs the steps of method 300 (Figure 3). That is, method 300 may form a part of step S202 of method 200, for example. In some embodiments, the steps of method 300 may be performed by processor 111 on execution of 3D anatomical modelling module 126. In some embodiments, the steps of method 300 may be performed by processor 111 during execution of the surgery planning module 124. In some embodiments, the 3D anatomical modelling module 126 is a commercially available solution, such as ‘Mimics’ and/or ‘3-matic’, for example.

[0076] Figure 3 shows a flowchart of method 300 for segmenting pre-operative medical imagery 116 of the individual patient’s anatomical structure, according to some embodiments. In some embodiments, performing method 300 generates at least one MRI-reconstructed portion by performing a segmentation process on the MRI scans included in pre-operative medical imagery 116. In some embodiments, performing method 300 generates at least one CT-reconstructed portion by performing a segmentation process on the CT scan included in pre-operative medical imagery 116. In some embodiments, processor 111 may perform the steps of method 300 multiple times to generate a plurality of reconstructed portions. That is, processor 111 may perform the steps of method 300 a first time to generate MRI-reconstructed bone portions, for example. Processor 111 may perform the steps of method 300 a second time to generate MRI-reconstructed muscle portions, for example. Processor 111 may perform the steps of method 300 a third time to generate an MRI-reconstructed growth plate portion, for example. Processor 111 may perform the steps of method 300 a fourth time to generate CT-reconstructed bone portions, for example. Figures 4D and 4E illustrate inputs to the segmentation process: MRI and CT scans; Figure 4F illustrates an output: a segmented bone image. Further, it is noted that a segmented bone image is used as the bone element of Figures 11A to 11E, which illustrate various aspects of a distal femoral osteotomy, of Figures 12A to 12F, which illustrate various aspects of a distal tibial osteotomy, and of Figures 13A to 13E, which illustrate various aspects of a proximal tibial osteotomy.

[0077] In some embodiments, the MRI-reconstructed bone portions are generated based on the full lower limb MRI scan and the affected hip MRI scan included in the pre-operative medical imagery 116. In the alternative anatomical context of the tibial osteotomy, such as illustrated in Figures 12A to 12F (distal tibial osteotomy) and in Figures 13A to 13E (proximal tibial osteotomy), it may be one or both of an affected knee and an affected ankle MRI scan included in the pre-operative medical imagery. Returning to the context of a proximal femoral osteotomy, the MRI-reconstructed bone portions may be of the individual patient’s lower limb bones, affected hip, and/or pelvis, for example.

[0078] The term ‘relevant bone portions’ is used to refer to the bone portions relevant to the particular surgery being performed. It is noted that any two instances of the same surgery (for example, two proximal femoral osteotomies) do not necessarily require the same relevant bone portions, and that the specific selection is dependent upon the circumstances of the patient, the scope of the surgery, and the affected joint. In the context of a femoral osteotomy the relevant bone portions comprise the femur and may also include one or more other lower limb bones, the hip joint, the knee joint, the tibia, and the fibula. It is noted that for a distal femoral osteotomy the knee joint may be included and the hip joint excluded, whereas for a proximal femoral osteotomy the reverse may be true. In the context of a tibial osteotomy, the relevant bone portions comprise the tibia and may also include one or more other lower limb bones, the hip joint, the knee joint, the femur, and the fibula. It is noted that for a distal tibial osteotomy the ankle joint may be included and the knee joint excluded, whereas for a proximal tibial osteotomy the reverse may be true. Further, it is noted that a bone portion may be a part of a bone or may be the entire bone.

[0079] In some embodiments, the MRI-reconstructed muscle portions are generated based on the full length femur MRI scan included in the pre-operative medical imagery 116, or based on a full length tibia MRI scan included in the pre-operative medical imagery 116. The MRI-reconstructed muscle portions may be of the individual patient’s glutei muscles, for example, or of the individual patient’s hamstring, quadricep, or gastrocnemius in the context of a distal femoral osteotomy. In the context of a tibial osteotomy, the reconstructed muscle portions may be of the individual patient’s gastrocnemius muscle and/or soleus muscle, as a particular muscle part of interest. In the proximal femoral osteotomy context, the MRI-reconstructed growth plate portion is generated based on the full length femur MRI scan included in the pre-operative medical imagery 116, and similarly in the context of the distal femoral osteotomy. In the context of tibial osteotomies, the MRI-reconstructed growth plate portion is generated based on the full length tibia MRI scan included in the pre-operative medical imagery 116. The MRI-reconstructed growth plate portion may be of the individual patient’s growth plate, for example. In some embodiments, the CT-reconstructed bone portions are generated based on the full length femur CT scan included in the pre-operative medical imagery 116.

[0080] The CT-reconstructed relevant bone portions may be of the individual patient’s affected femur and pelvis, for example. In some embodiments, the CT-reconstructed bone portions are generated based on the full length tibia CT scan included in the pre-operative medical imagery 116. The CT-reconstructed bone portions may be of the individual patient’s affected tibia and may also include one or more elements of the fibula, the ankle (in the case of a distal tibial osteotomy) and knee (in the case of a proximal tibial osteotomy), for example.

[0081] To generate the MRI-reconstructed relevant bone portions, processor 111, at step S302, creates a mask for each of the individual patient’s lower limb bones, affected hip, and pelvis (noting that the selection of bones and joints is peculiar to the surgery being performed and in the case of a tibial osteotomy may include the knee and/or ankle and exclude the hip and/or pelvis). That is, processor 111, receiving scans such as the full lower limb MRI scan and the affected hip MRI scan as input, isolates, or segments, each relevant bone portion, such as each lower limb bone, the affected hip, and the pelvis, in the full lower limb MRI scan and the affected hip MRI scan into a plurality of lower limb bone masks, a hip mask, and a pelvis mask, for example. In some embodiments, the created bone masks are for both legs of the individual patient. Processor 111, at step S304, then generates a plurality of 3D relevant bone portion images, such as lower limb bone, hip, and pelvis parts, one for each of the individual patient’s relevant bone portions, based on the corresponding bone masks. In the case of a tibial osteotomy, the relevant bone portions generated at S304 may include tibia, femur, fibula, knee, and ankle parts, based on corresponding masks, respectively. In the case of a femoral osteotomy, the parts generated at S304 may be lower limb bone, hip, and pelvis parts.

[0082] Processor 111, at step S306, wraps each of the plurality of 3D generated relevant bone portion digital models from S304; in the case of a femoral osteotomy the wrapped parts may be lower limb bone, hip, and pelvis parts, resulting in a plurality of wrapped 3D lower limb bone, hip, and pelvis part digital models. In the case of the tibial osteotomies, the wrapped parts may be tibia, fibula, femur, knee, and ankle parts. In some embodiments, a smallest detail setting is configured to be equal to 0.3 mm. In some embodiments, a gap closing distance setting is configured to be equal to 0.5 mm. Again, it is noted that the generation of MRI-reconstructed bone portions follows an equivalent procedure in the context of a tibial osteotomy, save for the identity of the bones and affected joints. The bones include at least the tibia and may also include the fibula and may also include the femur. The affected joints may include one or both of the knee joint and the ankle joint. In some embodiments, even in the context of a tibial osteotomy, the affected joints may include the hip joint.

[0083] Optionally, a smoothing step may be included after the wrapping S306. The smoothing step may comprise smoothing each of the wrapped 3D relevant bone portion digital models from S306, such as lower limb, hip, and pelvis parts, to generate a plurality of final lower limb, hip, and pelvis parts. However, the smoothing step may be omitted. For example, omitting the smoothing step may be preferable in order that the 3D generated part digital models remain true to the original anatomy. Thus, the plurality of final lower limb, hip, and pelvis parts may be the output of the wrapping step S306, or where smoothing is included, the smoothing step. In the context of the tibial osteotomy the smoothed parts may include one or more from among the tibia, fibula, femur, knee, and ankle parts.

[0084] After wrapping (and optionally smoothing) each of the plurality of relevant bone portions, such as lower limb bone, hip, and pelvis parts, processor 111, at step S310, compares each of the plurality of final 3D relevant bone portion digital models to the plurality of medical imaging scans of the pre-operative medical imagery 116 (noting that the selection of relevant bone portions comprises bones and joints peculiar to the surgery being performed and in the case of a tibial osteotomy may include knee and/or ankle and exclude hip and/or pelvis). That is, processor 111 compares the contours of each of the plurality of final 3D relevant bone portion digital models (such as lower limb bone, hip, and pelvis parts) to the plurality of medical imaging scans of the pre-operative medical imagery 116 to determine part generation accuracy. In some embodiments, a user (e.g. a medical expert or a medical imaging expert) may interact with the electronic interface 113 to confirm the accuracy of each of the plurality of final 3D relevant bone portion digital models (such as lower limb bone, hip, and pelvis parts). In some embodiments, processor 111, upon determining that the plurality of generated final 3D relevant bone portion digital models (such as lower limb bone, hip, and pelvis parts) are inaccurate, may return to step S302 and repeat the previously described steps, S302, S304, S306, and S310 in relation to generating the MRI-reconstructed bone portions.
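
By way of non-limiting illustration, the contour comparison of step S310 might be approximated as a nearest-neighbour surface distance check, as in the following Python sketch; the tolerance value and function names are illustrative assumptions only.

```python
import numpy as np
from scipy.spatial import cKDTree

def part_generation_accuracy(part_vertices: np.ndarray,
                             scan_contour_points: np.ndarray,
                             tolerance_mm: float = 1.0) -> bool:
    """Return True if the reconstructed part lies within tolerance of the scan contours.

    part_vertices: (N, 3) vertices of a final 3D part digital model (mm).
    scan_contour_points: (M, 3) contour points extracted from the medical imaging scans (mm).
    tolerance_mm: illustrative acceptance threshold on the mean surface distance.
    """
    tree = cKDTree(scan_contour_points)
    distances, _ = tree.query(part_vertices)   # nearest scan contour point per model vertex
    return float(np.mean(distances)) <= tolerance_mm
```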

[0085] In some embodiments, processor 111, upon determining that the plurality of generated final 3D relevant bone portion digital models (such as lower limb bone, hip, and pelvis parts) are accurate, moves to step S312. At S312, processor 111 performs remeshing on each of the plurality of final 3D relevant bone portion digital models (such as lower limb bone, hip, and pelvis parts) to generate the MRI-reconstructed bone portions. Remeshing rebuilds the geometry of each of the plurality of final 3D relevant bone portion digital models (such as lower limb bone, hip, and pelvis parts) with an increased uniform topology. In some embodiments, remeshing may change the number, shape, size, or arrangement of polygons forming a surface mesh of each of the plurality of final 3D relevant bone portion digital models (such as lower limb bone, hip, and pelvis parts). In some embodiments, remeshing may result in a simpler and less computationally difficult surface mesh for each of the MRI-reconstructed bone portions.

[0086] In some embodiments, processor 111, upon generating the MRI-reconstructed bone portions may perform method 300 to generate the MRI-reconstructed muscle portions. Figure 7 illustrates the function of muscle segmentation in generating the personalised pre-operative software model of patient anatomical structure by reference to a two-dimensional image of a hip joint in which the gluteus medius and gluteus minimus muscles are segmented (again, noting that in the context of a tibial osteotomy the muscles of interest may be gastrocnemius and/or soleus). That is, processor 111 may perform method 300 again to generate the MRI-reconstructed muscle portions, for example.

[0087] The term ‘relevant muscle portions’ is used to refer to the muscle portions relevant to the particular surgery being performed. It is noted that any two instances of the same surgery (for example, two proximal femoral osteotomies) do not necessarily require the same relevant muscle portions, and that the specific selection is dependent upon the circumstances of the patient, the scope of the surgery, and the affected joint(s). In the context of a proximal femoral osteotomy the relevant muscle portions comprise the gluteus minimus and the gluteus medius. For a distal femoral osteotomy the relevant muscle portions may comprise the hamstring, the quadricep, and the gastrocnemius. In the context of a tibial osteotomy, the relevant muscle portions may comprise the gastrocnemius and the soleus. In the context of a proximal tibial osteotomy, the relevant muscle portions may also comprise the hamstring and/or quadricep.

[0088] This discussion is in the specific context of the gluteus minimus and gluteus medius for a proximal femoral osteotomy, noting the process is adaptable according to the relevant muscle portions of the individual surgery. To generate the MRI-reconstructed muscle portions, processor 111, at step S302, creates a mask for the individual patient’s gluteus minimus and gluteus medius of the affected limb. That is, processor 111, receiving the full length femur MRI scan as input, isolates, or segments, the gluteus minimus and the gluteus medius of the affected limb in the full length femur MRI scan into a plurality of muscle masks, for example. Processor 111, at step S304, then generates a plurality of 3D muscle part digital models, one for each of the gluteus minimus and the gluteus medius, based on the corresponding muscle masks. Processor 111, at step S306, wraps each of the plurality of 3D generated muscle part digital models, resulting in a plurality of wrapped 3D muscle part digital models. Wrapping provides a continuous surface around the mesh. In some embodiments, a smallest detail setting is configured to be equal to 0.3 mm. In some embodiments, a gap closing distance setting is configured to be equal to 0.5 mm. In the context of a tibial osteotomy, the relevant muscle portion may be the gastrocnemius in particular, and may also include the soleus.

[0089] In the context of planning a proximal tibial osteotomy, in order to enhance estimation of post-operative knee range of motion, segmentation processing may be applied to the knee ligaments and knee cartilage in addition to the bones and muscles.

[0090] Optionally, a smoothing step may be included after the wrapping S306. The smoothing step may comprise smoothing each of the wrapped 3D muscle part digital models to generate a plurality of final 3D muscle part digital models. However, the smoothing step may be omitted. For example, omitting the smoothing step may be preferable in order that the 3D generated muscle part digital models remain true to the original anatomy. Thus, the plurality of final 3D muscle part digital models may be the output of the wrapping step S306, or where smoothing is included, the smoothing step.

[0091] At step S310, processor 111 compares each of the plurality of final 3D muscle part digital models to the plurality of medical imaging scans of the pre-operative medical imagery 116. That is, processor 111 compares the contours of each of the plurality of final 3D muscle part digital models to the plurality of medical imaging scans of the pre-operative medical imagery 116 to determine part generation accuracy. In some embodiments, a user (e.g. a medical expert or a medical imaging expert) may interact with the electronic interface 113 to confirm the accuracy of each of the plurality of final 3D muscle part digital models. In some embodiments, processor 111, upon determining that the plurality of generated final 3D muscle part digital models are inaccurate, may return to step S302 and repeat the previously described steps, S302, S304, S306, and S310 in relation to generating the MRI-reconstructed muscle portions.

[0092] In some embodiments, processor 111, upon determining that the plurality of generated final 3D muscle part digital models are accurate, moves to step S312. At S312, processor 111 performs remeshing on each of the plurality of final 3D muscle part digital models to generate the MRI-reconstructed muscle portions. Remeshing rebuilds the geometry of each of the plurality of final 3D muscle part digital models with an increased uniform topology. In some embodiments, remeshing may change the number, shape, size, or arrangement of polygons forming a surface mesh of each of the plurality of final 3D muscle part digital models. In some embodiments, remeshing may result in a simpler and less computationally difficult surface mesh for each of the MRI-reconstructed muscle portions.

[0093] In some embodiments, processor 111, upon generating the MRI-reconstructed muscle portions may perform method 300 to generate the MRI-reconstructed growth plate portion. That is, processor 111 may perform method 300 again to generate the MRI-reconstructed growth plate portion, for example. To generate the MRI-reconstructed growth plate portion, processor 111, at step S302, creates a mask for the individual patient’s femoral physis of the affected limb. That is, processor 111, receiving the full length femur MRI scan as input, isolates, or segments, the femoral physis of the affected limb in the full length femur MRI scan into a growth plate mask, for example. Processor 111, at step S304, then generates a 3D growth plate part for the femoral physis based on the corresponding growth plate mask. Processor 111, at step S306, wraps the 3D generated growth plate part digital model, resulting in a wrapped 3D growth plate part digital model. Wrapping provides a continuous surface around the mesh. In some embodiments, a smallest detail setting is configured to be equal to 0.3 mm. In some embodiments, a gap closing distance setting is configured to be equal to 0.5 mm. In the context of a tibial osteotomy the same process is applied to the tibia rather than the femur.

[0094] Optionally, a smoothing step may be included after the wrapping S306. The smoothing step may comprise smoothing the wrapped 3D growth plate part digital model to generate a final 3D growth plate part digital model. However, the smoothing step may be omitted. For example, omitting the smoothing step may be preferable in order that the 3D generated part digital model remains true to the original anatomy. Thus, the final growth plate part may be the output of the wrapping step S306, or where smoothing is included, the smoothing step.

[0095] At step S310, processor 111 compares the final 3D growth plate part digital model to the plurality of medical imaging scans of the pre-operative medical imagery 116. That is, processor 111 compares the contours of the final 3D growth plate part digital model to the plurality of medical imaging scans of the pre-operative medical imagery 116 to determine part generation accuracy. In some embodiments, a user may interact with the electronic interface 113 to confirm the accuracy of the final 3D growth plate part digital model. In some embodiments, processor 111, upon determining that the generated final 3D growth plate part digital model is inaccurate, may return to step S302 and repeat the previously described steps, S302, S304, S306, and S310 in relation to generating the MRI-reconstructed growth plate portion.

[0096] In some embodiments, processor 111, upon determining that the final 3D growth plate part digital model is accurate, moves to step S312. At S312, processor 111 performs remeshing on the final 3D growth plate part digital model to generate the MRI-reconstructed growth plate portion. Remeshing rebuilds the geometry of the final 3D growth plate part digital model with an increased uniform topology. In some embodiments, remeshing may change the number, shape, size, or arrangement of polygons forming a surface mesh of the final 3D growth plate part digital model. In some embodiments, remeshing may result in a simpler and less computationally difficult surface mesh for the MRI-reconstructed growth plate portion.

[0097] In some embodiments, processor 111, upon generating the MRI-reconstructed growth plate portion digital model may perform method 300 to generate the CT-reconstructed bone portion digital models. That is, processor 111 may perform method 300 again to generate the CT- reconstructed bone portion digital models, for example. To generate the CT-reconstructed relevant bone portion digital models, processor 111, at step S302, creates a mask for each of the individual patient’s relevant bone portions, such as an affected femur and pelvis. That is, processor 111, receiving scans such as the full length femur CT scan as input, isolates, or segments, each relevant bone portion, such as the affected femur and pelvis in the full length femur CT scan into a plurality of bone masks, for example. In some embodiments, processor 111 may eliminate unwanted pixels and fill part cavities in the plurality of bone masks prior to moving to step S304.
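
By way of non-limiting illustration, eliminating unwanted pixels and filling part cavities in a bone mask might be performed with standard image-processing operations, as in the following Python sketch; keeping only the largest connected component is an assumed interpretation of ‘unwanted pixels’.

```python
import numpy as np
from scipy import ndimage

def clean_bone_mask(mask: np.ndarray) -> np.ndarray:
    """Remove stray voxels and fill internal cavities in a binary bone mask.

    mask: 3D boolean array segmented from the CT volume.
    """
    # Keep only the largest connected component to eliminate isolated voxels
    labels, n = ndimage.label(mask)
    if n == 0:
        return mask
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    largest = labels == (int(np.argmax(sizes)) + 1)
    # Fill internal cavities of the remaining component
    return ndimage.binary_fill_holes(largest)
```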

[0098] Processor 111, at step S304, then generates a plurality of 3D relevant bone portion digital models, one for each of the individual patient’s relevant bone portions, such as the affected femur and pelvis, based on the corresponding bone masks (or for tibia, and/or knee, and/or ankle, in the context of a tibial osteotomy).

[0099] Processor 111, at step S306, wraps each of the plurality of 3D generated relevant bone portion digital models from S304, resulting in a plurality of wrapped 3D bone part digital models. Wrapping provides a continuous surface around the mesh. In some embodiments, a smallest detail setting is configured to be equal to 0.15 mm. In some embodiments, a gap closing distance setting is configured to be equal to 0.3 mm. Optionally, a smoothing step may be included after the wrapping S306. The smoothing step may comprise smoothing each of the wrapped 3D bone part digital models to generate a plurality of final bone parts. However, the smoothing step may be omitted. For example, omitting the smoothing step may be preferable in order that the 3D generated bone part digital models remain true to the original anatomy. Thus, the plurality of final bone part digital models may be the output of the wrapping step S306, or where smoothing is included, the smoothing step.

[0100] At S310, processor 111 compares each of the plurality of final 3D bone part digital models of the relevant portions to the plurality of medical imaging scans of the pre-operative medical imagery 116. That is, processor 111 compares the contours of each of the plurality of final 3D bone part digital models to the plurality of medical imaging scans of the pre-operative medical imagery 116 to determine part generation accuracy. In some embodiments, a user (e.g. a medical expert or a medical imaging expert) may interact with the electronic interface 113 to confirm the accuracy of each of the plurality of final 3D bone part digital models of the relevant bone portions. In some embodiments, processor 111, upon determining that any of the plurality of generated final 3D bone part digital models are inaccurate, may return to step S302 and repeat the previously described steps, S302, S304, S306 (and the optional smoothing step where included), and S310 in relation to generating the CT-reconstructed bone portion digital models.

[0101] In some embodiments, processor 111, upon determining that the plurality of generated final 3D bone part digital models are accurate, moves to step S312. At S312, processor 111 performs remeshing on each of the plurality of final 3D bone part digital models to generate the CT-reconstructed bone portion digital models of the relevant bone portions. Remeshing rebuilds the geometry of each of the plurality of final 3D bone part digital models with an increased uniform topology. In some embodiments, remeshing may change the number, shape, size, or arrangement of polygons forming a surface mesh of each of the plurality of final 3D bone part digital models. In some embodiments, remeshing may result in a simpler and less computationally difficult surface mesh for each of the CT-reconstructed bone portions.

[0102] In some embodiments, processor 111, upon completing step S312 of method 300, may move to step S314. At step S314, processor 111 assigns labels to each of the reconstructed portions with the individual patient’s full name, unit record number, date of birth, and body side. In some embodiments, step S314 may be performed after generating each of the reconstructed portions. That is, step S314 may be performed each time method 300 is performed by processor 111. In some embodiments, step S314 may be performed after generating all of the reconstructed portions. That is, step S314 may be performed after generating the MRI-reconstructed bone portion digital models, the MRI-reconstructed muscle portion digital models, the MRI-reconstructed growth plate portion digital models, and the CT-reconstructed bone portion digital models, for example.

[0103] Processor 111, after generating the MRI-reconstructed bone portion digital models, and/or the MRI-reconstructed muscle portion digital models, and/or the MRI-reconstructed growth plate portion digital model, and/or the CT-reconstructed bone portion digital models, and performing step S314, continues to generate the personalised pre-operative software model of the patient’s anatomical structures. In some embodiments, processor 111, continuing step S202, performs an alignment of the related MRI-reconstructed portion digital models and the CT-reconstructed portion digital models. Processor 111 matches common landmarks on the femur and pelvis MRI-reconstructed portion digital models and the femur and pelvis CT-reconstructed portion digital models. Processor 111 then aligns the common landmarks of the femur and pelvis MRI-reconstructed portion digital models and the femur and pelvis CT-reconstructed portion digital models. In some embodiments, alignment of the femur and pelvis MRI-reconstructed portion digital models and the femur and pelvis CT-reconstructed portion digital models is performed by processor 111 until an average distance error of less than 0.1mm between the corresponding portions is obtained, for example. In some embodiments, alignment of the femur and pelvis MRI-reconstructed portion digital models and the femur and pelvis CT-reconstructed portion digital models generates a surfaced 3D model of the patient’s pelvis, hip, and lower limb bones. That is, processor 111 may generate a simulated model of the patient’s pre-operative bone structure (simulated bone structure model), for example. It is noted that the equivalent process is performed for the tibia with the knee joint and/or the ankle joint in the context of a tibial osteotomy procedure.
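
By way of non-limiting illustration, the average distance error used to accept or reject an MRI-to-CT alignment might be evaluated as in the following Python sketch; the rigid registration step itself is not shown, and the function names are illustrative assumptions only.

```python
import numpy as np
from scipy.spatial import cKDTree

def average_distance_error(mri_vertices: np.ndarray, ct_vertices: np.ndarray) -> float:
    """Mean nearest-neighbour distance (mm) from the MRI-reconstructed surface
    to the CT-reconstructed surface after a candidate alignment."""
    tree = cKDTree(ct_vertices)
    distances, _ = tree.query(mri_vertices)
    return float(np.mean(distances))

def alignment_accepted(mri_vertices: np.ndarray, ct_vertices: np.ndarray,
                       threshold_mm: float = 0.1) -> bool:
    """Acceptance test for the alignment loop: refine the rigid alignment until
    the average distance error falls below 0.1 mm (threshold per the description)."""
    return average_distance_error(mri_vertices, ct_vertices) < threshold_mm
```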

[0104] In some embodiments, generating the personalised pre-operative software model of the patient’s anatomical structures includes a 3D anatomical analysis. That is, a 3D anatomical analysis of the bone position and dimension data, for example. In some embodiments, the 3D anatomical analysis may include defining one or more axes and planes in the bone position and dimension data. In some embodiments, the 3D anatomical analysis may include measuring one or more 3D angles between the defined one or more axes and planes. Processor 111, continuing step S202 of method 200, may perform the 3D anatomical analysis on the previously generated MRI-reconstructed bone portion digital models and/or the CT-reconstructed bone portion digital models, or the simulated bone structure model. In some embodiments, any one of the MRI-reconstructed bone portion digital models or the CT-reconstructed bone portion digital models of the relevant bone portions may contain a plurality of surfaced bone models, such as a femur bone model and a tibial bone model, for example.

[0105] Processor 111 may process different portions of the femur bone model to define the one or more axes and planes in the bone position and dimension data. Processor 111, in performing the 3D anatomical analysis, may define an axis of the femoral neck of the femur bone model. To determine the axis of the femoral neck, processor 111 performs slicing of the femoral neck at set distance intervals to produce a plurality of slices. Processor 111 then computes a centroid for each of the plurality of slices. The axis of the femoral neck is then defined as the line of best fit through the centroids of the plurality of slices. In some embodiments, the set distance intervals are about 1mm, for example. In the context of a tibial osteotomy, the one or more axes and planes may include an axis or plane of the medial condyle, the lateral condyle, the intercondylar eminence, and/or the medial malleolus, and may include an indication of the length of each by defining end points of the respective axes.
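
By way of non-limiting illustration, the slice-centroid construction of the femoral neck axis might be implemented as in the following Python sketch; the slicing direction is assumed to be supplied, and the line of best fit is obtained by a principal component (SVD) fit through the slice centroids.

```python
import numpy as np

def femoral_neck_axis(neck_points: np.ndarray, slicing_direction: np.ndarray,
                      interval_mm: float = 1.0):
    """Estimate the femoral neck axis as the best-fit line through slice centroids.

    neck_points: (N, 3) surface points of the femoral neck region (mm).
    slicing_direction: vector along which the neck is sliced (an assumed input).
    Returns (point_on_axis, axis_direction).
    """
    d = slicing_direction / np.linalg.norm(slicing_direction)
    heights = neck_points @ d
    bins = np.arange(heights.min(), heights.max() + interval_mm, interval_mm)
    centroids = []
    for lo, hi in zip(bins[:-1], bins[1:]):
        in_slice = (heights >= lo) & (heights < hi)
        if np.any(in_slice):
            centroids.append(neck_points[in_slice].mean(axis=0))
    centroids = np.asarray(centroids)
    # Line of best fit through the centroids via a principal component fit
    mean = centroids.mean(axis=0)
    _, _, vt = np.linalg.svd(centroids - mean)
    return mean, vt[0]
```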

[0106] Processor 111, in performing the 3D anatomical analysis, may define a posterior condylar axis of the femur bone model. To determine the posterior condylar axis, processor 111 first identifies the medial femoral condyle of the femur bone model and the lateral femoral condyle of the femur bone model. Processor 111 then computes a first most posterior point on the identified medial femoral condyle of the femur bone model. Processor 111 then computes a second most posterior point on the identified lateral femoral condyle of the femur bone model. The posterior condylar axis is then defined as the line that connects the first most posterior point and the second most posterior point. An equivalent process may be performed for the condyles, eminences, and malleolus of the tibia.
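
By way of non-limiting illustration, the posterior condylar axis might be computed as in the following Python sketch; the posterior direction is assumed to be known in the model’s reference frame.

```python
import numpy as np

def posterior_condylar_axis(medial_condyle_pts: np.ndarray,
                            lateral_condyle_pts: np.ndarray,
                            posterior_direction: np.ndarray) -> np.ndarray:
    """Posterior condylar axis: line joining the most posterior point on each condyle.

    posterior_direction: unit vector pointing posteriorly (an assumed input).
    Returns a unit vector along the axis.
    """
    p_medial = medial_condyle_pts[np.argmax(medial_condyle_pts @ posterior_direction)]
    p_lateral = lateral_condyle_pts[np.argmax(lateral_condyle_pts @ posterior_direction)]
    axis = p_lateral - p_medial
    return axis / np.linalg.norm(axis)
```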

[0107] Processor 111, in performing the 3D anatomical analysis, may define a long axis of the femur bone model, or the tibial bone model, depending on anatomical context. In this particular example, determination of the long axis of the femur bone model is presented. To determine the long axis of the femur, processor 111 first identifies the femoral condyles and the lesser trochanter. Processor 111 then computes the centroid of the widest cross-sectional area of the femoral condyles. Processor 111 then identifies a proximal femoral slice positioned just below the lesser trochanter. Processor 111 then computes the centroid of the proximal femoral slice. The long axis of the femur is then defined as the line that connects the computed centroid of the widest cross-sectional area of the femoral condyles and the computed centroid of the proximal femoral slice.

[0108] Processor 111, in performing the 3D anatomical analysis, may define a plane across the base of the epiphysis of the femur and/or tibia bone model. To determine the plane across the base of the epiphysis, processor 111 first identifies the epiphysis. Processor 111 then computes a plane that best fits the base of the epiphysis. The plane across the base of the epiphysis is then defined as the computed plane that best fits the base of the epiphysis.

[0109] Processor 111, in performing the 3D anatomical analysis, may measure a femoral anteversion angle. Processor 111, in measuring the femoral anteversion angle, first defines a plane perpendicular to the defined long axis of the femur. Processor 111 then computes the angle between the defined axis of the femoral neck and the defined posterior condylar axis. The computed angle is the femoral anteversion angle. In some embodiments, the femoral anteversion angle is measured as the angle between the defined axis of the femoral neck and the defined posterior condylar axis when both are projected onto the defined plane perpendicular to the defined long axis of the femur. An equivalent process may be performed to calculate the tibial torsion angle. An equivalent process may be performed for the long axis of the tibia. For example, an angle may be measured between the posterior tibial axis at the proximal tibia and a bimalleolar axis at the distal tibia, the measured angle being the tibial torsion angle.
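
By way of non-limiting illustration, the projection-based measurement of the femoral anteversion angle might be implemented as in the following Python sketch; sign conventions are not handled, and the function names are illustrative assumptions only.

```python
import numpy as np

def femoral_anteversion_angle(neck_axis: np.ndarray,
                              posterior_condylar_axis: np.ndarray,
                              long_axis: np.ndarray) -> float:
    """Anteversion angle (degrees): angle between the femoral neck axis and the posterior
    condylar axis after projecting both onto the plane perpendicular to the long axis."""
    n = long_axis / np.linalg.norm(long_axis)

    def project(v: np.ndarray) -> np.ndarray:
        v_proj = v - (v @ n) * n          # remove the component along the long axis
        return v_proj / np.linalg.norm(v_proj)

    a, b = project(neck_axis), project(posterior_condylar_axis)
    cos_angle = np.clip(abs(a @ b), -1.0, 1.0)   # axis directions treated as sign-free
    return float(np.degrees(np.arccos(cos_angle)))
```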

[0110] Processor 111, in performing the 3D anatomical analysis, may measure a femoral neck shaft angle. Processor 111, in measuring the femoral neck shaft angle, computes the 3D angle between the axis of the femoral neck and the long axis of the femur. The computed angle is the femoral neck shaft angle. Processor 111, in performing the 3D anatomical analysis, may measure an inferior slip angle. Processor 111, in measuring the inferior slip angle, first defines a line perpendicular to the plane across the base of the epiphysis. Processor 111 then computes the 3D angle between the defined line perpendicular to the plane across the base of the epiphysis and the long axis of the femur. The computed angle is the inferior slip angle. Processor 111 may also measure the anatomic lateral distal femoral angle and/or the mechanical lateral distal femoral angle. In the example of tibial osteotomies, the medial proximal tibial angle may be computed by the processor 111 as the angle between the tibial high pelvic line and the tibial epicondylar axis. Further, in the example of tibial osteotomies, the processor 111 may be configured to compute the medial lateral tibial angle as the angle between the ankle line and the tibia.

[0111] Processor 111, in performing the 3D anatomical analysis, may measure a posterior slip angle. Processor 111, in measuring the posterior slip angle, first manipulates the previously generated 3D surfaced model of the patient’s pre-operative bone structure such that the knee is flexed at a 30° angle, and the hip is abducted at a 45° angle. That is, processor 111 manipulates the 3D surfaced model of the patient’s pre-operative bone structure into a paediatric hip view (frog leg lateral view), for example. Processor 111 then defines a line perpendicular to the plane across the base of the epiphysis. Processor 111 then computes the 3D angle between the defined line perpendicular to the plane across the base of the epiphysis and the long axis of the femur. The computed angle is the posterior slip angle.

[0112] Processor 111, in performing the 3D anatomical analysis, may measure a length of the femur or tibia. Processor 111, in measuring the length of the femur or tibia, first computes a most proximal point on the bone. Processor 111 then computes a most distal point on the bone. Processor 111 then computes the length of the bone to be the distance between the most proximal point on the bone and the most distal point on the bone. In some embodiments, a user may interact with the electronic interface 113 to manually determine, or measure, any of the aforementioned measurements. Processor 111, upon completion of the 3D anatomical analysis, compiles the aforementioned generated data to finalise the personalised pre-operative software model of the individual patient’s anatomical structure. Processor 111, on finalising the pre-operative software model, moves on to step S204.
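
By way of non-limiting illustration, the bone length measurement might be implemented as in the following Python sketch; identifying the most proximal and most distal points as the extreme points along the bone’s long axis is an assumption of the sketch.

```python
import numpy as np

def bone_length(bone_points: np.ndarray, long_axis: np.ndarray) -> float:
    """Length of the femur or tibia (mm): distance between the most proximal and most
    distal points, here taken as the extreme points along the bone's long axis."""
    d = long_axis / np.linalg.norm(long_axis)
    heights = bone_points @ d
    most_proximal = bone_points[np.argmax(heights)]
    most_distal = bone_points[np.argmin(heights)]
    return float(np.linalg.norm(most_proximal - most_distal))
```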

[0113] At S202, in addition to generating a personalised pre-operative software model of patient anatomical structure of an individual patient based on received pre-operative medical imagery of the patient anatomical structure, patient movement analysis data is used to generate the personalised pre-operative software model of patient anatomical structure. The patient movement analysis data may be generated by virtually mobilising the relevant joint in an anatomical simulator such as OpenSim. For example, an estimation of a muscle moment arm may be made in this manner and stored or otherwise used as all or part of the patient movement analysis data.

[0114] Alternatively or additionally, the patient movement analysis data may be generated by a movement sensor system observing movement of the patient anatomical structure of the individual patient over a period of time. The period of time may be, for example, the time taken to perform a prescribed movement one or more times, or a set of prescribed movements. The sensor system may include accelerometers attached to the skin of the patient, and may include cameras and other imaging technology. Figure 8 illustrates patient movement analysis data of a hip joint of a sample patient, wherein the pre-operative information is the relevant data in generating the personalised pre-operative software model of patient anatomical structure. Muscle moments of the glutei muscles at a range of hip flexion angles are calculated as patient movement analysis data and included in the personalised pre-operative software model of patient anatomical structure. In the context of a tibial osteotomy, equivalent analysis of movement around a knee joint and/or an ankle joint may be performed. Analysis of movement around a knee joint is explained further in Barzan M, Modenese L, Carty CP, Maine S, Stockton CA, Sancisi N, Lewis A, Grant J, Lloyd DG, Brito da Luz S. Development and validation of subject-specific pediatric multibody knee kinematic models with ligamentous constraints. J Biomech. 2019 Aug 27;93:194-203. doi: 10.1016/j.jbiomech.2019.07.001. Epub 2019 Jul 8. PMID: 31331662.

[0115] At step S204, processor 111 receives a software definition of a surgical procedure to be performed in relation to the pre-operative software model. In some embodiments, processor 111 may receive the software definition from software definition module 128. In some embodiments, the software definition may be received by processor 111 as input from a user via electronic interface 113. In some embodiments, the orthopaedic surgery planning device 100 may receive the software definition from an external device via communications module 112 and network 140.

[0116] In embodiments where the software definition is received via user input or an external device, the software definition may define a value range for each of: at least one osteotomy in at least one specified bone; for each osteotomy, a specified bone from the patient’s anatomical structure to be cut by the osteotomy; for each osteotomy, a position and orientation of an osteotomy plane; and, for each osteotomy, a relative position and orientation of at least two post-osteotomy distinct bone portions. Processor 111, upon receiving the software definition, may iteratively constrain the software definition of the surgical procedure to include a patient-specific osteotomy plan via simulation. To constrain the received software definition, processor 111 may determine a specific value from within each of the plurality of defined value ranges for the surgical procedure. At least one software definition may be available for each of: proximal femoral osteotomy, distal femoral osteotomy, proximal tibial osteotomy, and distal tibial osteotomy.
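
By way of non-limiting illustration, the software definition with its value ranges, and its constrained patient-specific counterpart, might be represented in software as in the following Python sketch; the structure and field names are illustrative assumptions only.

```python
from dataclasses import dataclass
from typing import List, Tuple

Range = Tuple[float, float]   # (minimum, maximum) of an allowed value range

@dataclass
class OsteotomyDefinition:
    bone: str                        # specified bone to be cut, e.g. "femur" or "tibia"
    plane_position_range_mm: Range   # allowed position of the osteotomy plane along the bone
    plane_angle_range_deg: Range     # allowed orientation of the osteotomy plane
    fragment_angle_range_deg: Range  # allowed relative orientation of the post-osteotomy portions

@dataclass
class SoftwareDefinition:
    """Value ranges defining the surgical procedure (one entry per osteotomy)."""
    osteotomies: List[OsteotomyDefinition]

@dataclass
class ConstrainedOsteotomy:
    """Patient-specific plan: one specific value chosen from within each range."""
    bone: str
    plane_position_mm: float
    plane_angle_deg: float
    fragment_angle_deg: float
```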

[0117] Exemplary positions and orientations of osteotomy planes are illustrated in Figure 6A for a proximal femoral osteotomy, in Figure 11A for a distal femoral osteotomy, in Figures 12A & 12B for a distal tibial osteotomy, and in Figure 13A for a proximal tibial osteotomy. In each case, the illustrated bone portions are individualised to the patient receiving surgery, such as may be obtained by the bone portion imaging and modelling techniques described above and with reference to the method 300. The specific orientation and position of each plane is configurable by the processor 111 within predefined position and angular ranges.

[0118] In embodiments where the software definition is received via user input or an external device, the software definition may further define a value range for each of: a repositioning of at least one specified bone within an allowable range; and a range of available implants, optionally also with corresponding surgical tools such as chisels. Processor 111, upon receiving the software definition, may iteratively constrain the software definition of the surgical procedure to include a patient-specific implant plan via simulation. To constrain the software definition, processor 111 may determine at least one repositioning, and for each of the at least one determined repositionings, processor 111 determines a selected implant and corresponding tool (such as a chisel) from the available range. In some embodiments, the software definition includes a 3D model of the selected implant (implant model) and corresponding chisel (chisel model). The implant, and where required, the tool, are configurable according to the individualised anatomy of the individual patient and the planes and positions of the osteotomies determined by the processor 111. Exemplary implants (in recommended positions and configurations as determined by processor 111) are illustrated in Figures 11C and 11D in the context of a distal femoral osteotomy, in Figures 12D and 12E in the context of a distal tibial osteotomy, and in Figures 13D and 13E in the context of a wedge implant for a proximal tibial osteotomy.

[0119] In embodiments where the software definition is received via software definition module 128, processor 111 executes the software definition module 128 to generate the receivable software definition. Processor 111, in executing the software definition module 128, defines the plurality of values for the specific surgery (i.e. for the particular patient) from among predefined value ranges for the surgical procedure. Processor 111 establishes the at least one osteotomy in the bone position and dimension data. In some embodiments, a user may interact with the electronic interface 113 to manually or semi-manually define the at least one osteotomy. Processor 111, for each defined osteotomy, determines the specific bone from the patient’s anatomical structure to be cut by the osteotomy. Processor 111, for each defined osteotomy, determines the position and orientation of an osteotomy plane. Processor 111, for each defined osteotomy, determines the relative position and orientation of at least two post-osteotomy distinct portions. In some embodiments, processor 111 may iteratively constrain the software definition of the surgical procedure, received via the software definition module 128, to include the patient-specific osteotomy plan via simulation. To constrain the software definition, processor 111 may determine a specific value from within each of the plurality of defined value ranges for the surgical procedure.

[0120] In some embodiments, processor 111, in executing the software definition module 128, may further define the value from among a predefined range for each of: the repositioning of at least one specified bone within an allowable range; and the range of available implants with corresponding chisels. The allowable range is a range wherein a range of motion of the at least one repositioned specified bone is within feasible kinematic movement constraints of the related joint. Processor 111, in defining the value of the at least one specified bone, manipulates the simulated bone structure model to reposition the proximal femur, the distal femur, the proximal tibia, or the distal tibia. Processor 111 repositions the proximal femur, the distal femur, the proximal tibia, or the distal tibia until the desired corrections are achieved. In some embodiments, the desired corrections include increased range of motion in at least one of: hip flexion, hip adduction, and/or hip rotation, knee flexion and extension, ankle flexion and extension, ankle adduction, and ankle rotation, for example. In some embodiments, a user may interact with the electronic interface 113 to manually reposition the proximal femur, the distal femur, the proximal tibia, or the distal tibia to achieve the desired corrections, for example, based on a recommended repositioning generated by the processor 111.

[0121] Processor 111, upon achieving the desired corrections, determines a desired implant from the range of available implants in order to implement the post-operative patient anatomical structure in the surgical procedure. Processor 111 may choose the desired implant such that it satisfies a combination of: the largest implant possible, the longest implant possible, and the most stable implant possible. In some embodiments, processor 111 may choose a plurality of implants that satisfy the above criteria, presenting a user with a plurality of options. Furthermore, to the extent that the chosen implant has one or more configurable elements, processor 111 may determine an optimum configuration. In some embodiments, processor 111 may perform a plurality of simulations of the simulated bone structure model to determine said implant and optimum configuration. In some embodiments, a user may interact with the electronic interface 113 to manually select an implant from the range of available implants. Processor 111, upon determining the implant from the range of available implants, determines the corresponding chisel, or chisels, respective of the selected implant. In some embodiments, processor 111 may iteratively constrain the software definition, received via the software definition module 128, to include a patient-specific implant plan via simulation. To constrain the software definition, processor 111 may determine at least one repositioning, and for each of the at least one determined repositionings, processor 111 determines a selected implant and corresponding chisel from the available range. Exemplary implants (in recommended positions and configurations as determined by processor 111) are illustrated in Figures 11C and 11D in the context of a distal femoral osteotomy, in Figures 12D and 12E in the context of a distal tibial osteotomy, and in Figures 13D and 13E in the context of a wedge implant for a proximal tibial osteotomy.
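
By way of non-limiting illustration, choosing an implant that balances ‘largest, longest, and most stable possible’ might be expressed as a weighted ranking, as in the following Python sketch; the weights and the stability metric are illustrative assumptions only, and the ranked list could be presented to the user as a plurality of options.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ImplantCandidate:
    name: str
    size_mm: float           # overall size of the implant
    length_mm: float         # length of the implant
    stability_score: float   # assumed metric in 0..1, e.g. from simulated fixation

def rank_implants(candidates: List[ImplantCandidate],
                  w_size: float = 1.0, w_length: float = 1.0,
                  w_stability: float = 1.0) -> List[ImplantCandidate]:
    """Rank feasible implants so the largest, longest, and most stable options score highest.
    Weights are illustrative; ties between criteria are resolved by the weighted sum."""
    def score(c: ImplantCandidate) -> float:
        return w_size * c.size_mm + w_length * c.length_mm + w_stability * c.stability_score
    return sorted(candidates, key=score, reverse=True)
```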

[0122] Processor 111, upon constraining the software definition, continues at step S206. At step S206, processor 111 generates a personalised post-operative software model of the patient’s anatomical structure (post-operative software model). Processor 111 generates the post-operative software model based on the personalised pre-operative software model and the received software definition of the surgical procedure. In generating the post-operative software model, processor 111 integrates the received 3D model of the selected implant into the bone of interest of the simulated bone structure model, for example the femur or tibia. Processor 111 performs the integration of the implant model such that the implant model is in proximity to the centre of the femoral head, or the lateral or medial femoral condyle, or the medial epicondyle, or the tibial plateau, or the medial malleolus.

[0123] Processor 111 further integrates the implant model such that the implant model is within predefined absolute distances of one or more predefined edges, surfaces, or regions of the bone being cut. In the specific example of the proximal femoral osteotomy, the predefined distances may be: at least 2.5mm from the cortical bone; at least 3mm from the cortical bone; or at least 5mm from the cortical bone. Optionally, processor 111 may initially seek to satisfy a relatively greater minimum cortical bone separation, for example 5mm, and if that is unachievable, may then (for example, after notifying an operator and optionally seeking approval), relax the minimum cortical bone separation requirement to a relatively smaller minimum cortical bone separation such as 3mm or 2.5mm. Such relaxation of minimum cortical bone separation may be particularly helpful in abnormal anatomy implementations.

[0124] Processor 111 further integrates the implant model such that the implant model has adequate structural fixation within the bone. The result of the integration of the implant model with the femur of the simulated bone structure model is a corrected femur model. The result of the integration of the implant model with the tibia of the simulated bone structure model is a corrected tibia model.

[0125] Processor 111 then integrates the received chisel model into the corrected femur model. Processor 111 integrates the chisel model by overlapping it with the already integrated implant model.

[0126] Upon integration of the implant model and the chisel model with the simulated bone structure model, processor 111 continues step S206 and performs a bone thickness analysis. The bone thickness analysis verifies that adequate bone thickness remains in place once the chisel or implant is inserted into the bone. To perform the bone thickness analysis, processor 111 generates a post-operative software model of, for example, the proximal femur by subtracting the model of the chisel or implant from the pre-operative software model of the proximal femur. Processor 111 then analyses the generated post-operative software model of the proximal femur to ensure that the thickness of the bone is greater than 2.5mm, greater than 3mm, or greater than 5mm. Optionally, processor 111 may initially seek to satisfy a relatively greater minimum bone thickness, for example 5mm, and if that is unachievable, may then (for example, after notifying an operator and optionally seeking approval), relax the minimum bone thickness requirement to a relatively smaller minimum bone thickness such as 3mm or 2.5mm. Such relaxation of minimum bone thickness may be particularly helpful in abnormal anatomy implementations. If the thickness of the bone is less than the minimum bone thickness, processor 111 may return to S204 and require receipt of an alternate software definition. The same bone thickness thresholds may be applied in the case of distal femoral, proximal tibial, and distal tibial osteotomies, or bespoke bone thickness thresholds may be defined for each.
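
By way of non-limiting illustration, the staged relaxation of the minimum bone thickness requirement (from 5mm to 3mm to 2.5mm, with operator notification and approval) might be expressed as in the following Python sketch; how the minimum remaining thickness is measured on the post-operative model is abstracted behind a parameter, and the approval callback is illustrative.

```python
from typing import Callable, Optional, Sequence

def bone_thickness_acceptable(min_remaining_thickness_mm: float,
                              thresholds_mm: Sequence[float] = (5.0, 3.0, 2.5),
                              operator_approves_relaxation: Callable[[float], bool] = lambda t: False
                              ) -> Optional[float]:
    """Verify adequate remaining bone thickness after subtracting the chisel/implant model.

    Tries the strictest threshold first; each relaxation to a smaller threshold is only
    applied if the operator approves it. Returns the threshold that was satisfied, or
    None if the plan should be revised (i.e. return to S204 for an alternate definition).
    """
    strictest = thresholds_mm[0]
    if min_remaining_thickness_mm > strictest:
        return strictest
    for threshold in thresholds_mm[1:]:
        if operator_approves_relaxation(threshold) and min_remaining_thickness_mm > threshold:
            return threshold
    return None  # insufficient thickness under all approved thresholds
```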

[0127] Processor 111, upon completing the bone thickness analysis, performs a growth plate violation analysis. The growth plate violation analysis verifies that adequate growth plate volume remains once the implant is implanted into a bone orifice created by the chisel. To perform the growth plate violation analysis, processor 111 generates a post-operative software model of the growth plate by subtracting the model of the implant from the pre-operative software model of the growth plate. Processor 111 then analyses the pre-operative software model of the growth plate to compute a volume of the pre-operative software model of the growth plate. Processor 111 then analyses the generated post-operative software model of the growth plate to compute a volume of the post-operative software model of the growth plate. Processor 111 then compares the volume of the post-operative software model of the growth plate to the volume of the preoperative software model of the growth plate.
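
By way of non-limiting illustration, the growth plate volume comparison might be implemented as in the following Python sketch, with mesh volumes computed from closed triangulated surfaces via the divergence theorem; the acceptable violated-volume fraction is an illustrative assumption only.

```python
import numpy as np

def mesh_volume(vertices: np.ndarray, faces: np.ndarray) -> float:
    """Volume of a closed triangulated surface via the divergence theorem.
    vertices: (N, 3) points (mm); faces: (M, 3) vertex indices with consistent orientation."""
    v0, v1, v2 = vertices[faces[:, 0]], vertices[faces[:, 1]], vertices[faces[:, 2]]
    signed_six_volumes = np.einsum('ij,ij->i', v0, np.cross(v1, v2))
    return float(abs(signed_six_volumes.sum()) / 6.0)

def growth_plate_violation_acceptable(pre_vertices, pre_faces, post_vertices, post_faces,
                                      max_violated_fraction: float = 0.05) -> bool:
    """Compare post-operative growth plate volume to the pre-operative volume; return True
    if the fraction of growth plate volume removed by the implant is within the limit."""
    pre_volume = mesh_volume(pre_vertices, pre_faces)
    post_volume = mesh_volume(post_vertices, post_faces)
    violated_fraction = (pre_volume - post_volume) / pre_volume
    return violated_fraction <= max_violated_fraction
```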

[0128] Processor 111, upon generating the personalised post-operative software model of the patient’s anatomical structure, continues at step S208. At step S208, processor 111 simulates movement of the bones and muscles of the patient’s anatomical structure to generate a simulation output. To simulate movement of the bones and muscles of the patient’s anatomical structure, processor 111 first generates a pre-operative 4D personalised functional model of the patient’s anatomical structure. Processor 111 then generates a post-operative 4D personalised functional model of the patient’s anatomical structure. In some embodiments, a user may manually generate the pre-operative 4D personalised functional model via the electronic interface 113. In some embodiments, a user may manually generate the post-operative 4D personalised functional model via the electronic interface 113.

[0129] Processor 111, further to generating the personalised post-operative software model of the patient anatomical structure at S206, simulates movement of the bones and muscles of the patient’s anatomical structure to generate simulation output at S208. The simulation of movement predicts the range of movement and moment arms at different angles that will be achievable post-operation. Furthermore, a machine learning module may be trained to predict range of movement and/or moment arms at a range of angles based on training data comprising input data encoding patient anatomical structure (for example, two-dimensional or three-dimensional images of hip joints including the hip socket, femur, and glutei muscles; or two-dimensional or three-dimensional images of knee joints or ankle joints including the gastrocnemius and/or soleus), labelled with ground truths being one or more parameters representing range of movement and muscle moment arms at one angle or a range of angles, and to illustrate the prediction in the post-operative 4D personalised functional model, for example. The machine learning module is thereby trained to predict the parameters representing range of movement and muscle moment arms based on arbitrary input patient anatomical structure (such as the personalised post-operative software model of patient anatomical structure at S206). In a particular example, an optimisation algorithm running different constraints on the software definition of the surgical procedure iteratively (for example via simulated annealing, backward error propagation, or another optimisation technique) optimises the predicted parameters including range of movement and moment arm at one or more angles. In this way, desired post-surgical outcomes influence the values of the surgical parameters calculated by the processor 111, such as positions and planes of osteotomies.
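
By way of non-limiting illustration, the iterative optimisation of constrained surgical parameters against a predicted outcome (for example, from a trained machine learning module) might be expressed as a simulated annealing loop, as in the following Python sketch; the outcome predictor and perturbation function are supplied as placeholders and are not part of the specification.

```python
import math
import random

def optimise_surgical_parameters(initial_params, predict_outcome, perturb,
                                 iterations: int = 1000,
                                 start_temp: float = 1.0, end_temp: float = 0.01):
    """Simulated annealing sketch: iteratively perturb the constrained surgical parameters
    (e.g. osteotomy positions and plane orientations) and keep changes that improve the
    predicted post-operative outcome.

    predict_outcome(params) -> float: e.g. a trained regressor scoring predicted range of
        movement and moment arms (higher is better).
    perturb(params) -> new candidate parameters within the allowed value ranges.
    """
    current = best = initial_params
    current_score = best_score = predict_outcome(initial_params)
    for i in range(iterations):
        # Geometric cooling schedule from start_temp down to end_temp
        temp = start_temp * (end_temp / start_temp) ** (i / max(iterations - 1, 1))
        candidate = perturb(current)
        score = predict_outcome(candidate)
        accept_worse = random.random() < math.exp((score - current_score) / temp)
        if score > current_score or accept_worse:
            current, current_score = candidate, score
            if score > best_score:
                best, best_score = candidate, score
    return best, best_score
```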

[0130] In generating the pre-operative 4D personalised functional model, processor 111 identifies surfaces and landmarks of the plurality of bones in the simulated bone structure model. Processor 111 utilises the identified surfaces and landmarks to create body and joint reference systems. The body reference systems relate to, for example, the pelvis and femurs. The joint reference system may relate to the hip, or to the knee or ankle, depending on the surgical procedure. Processor 111 also identifies centroids of the glutei muscle attachment areas on the plurality of bones in the simulated bone structure model. The glutei muscle attachment areas may include a muscle origin and insertion point for each muscle. In some embodiments, a user may manually identify the surfaces and landmarks of the plurality of bones in the simulated bone structure model. In some embodiments, a user may manually identify the centroids of the glutei muscle attachment areas on the plurality of bones in the simulated bone structure model. In the context of the tibial osteotomy an equivalent procedure is performed for the centroids of the gastrocnemius and/or the soleus.

[0131] In some embodiments, processor 111 creates the body reference system for the pelvis using a stereophotogrammetric system and anatomical landmark calibration. Methods of stereophotogrammetry and anatomical landmark calibration can be found in “Cappozzo, A., Catani, F., Croce, U. D., & Leardini, A. (1995). Position and orientation in space of bones during movement: anatomical frame definition and determination. Clinical biomechanics (Bristol, Avon), 10(4), 171-178. https://doi.org/10.1016/0268-0033(95)91394-t”, for example.

[0132] In some embodiments, processor 111 creates the body reference system for the femur or tibia using a navigation system and anatomical reference frame definition methods. Such navigation systems and anatomical reference frame definition methods can be found in “Belvedere, C., Ensini, A., Leardini, A., Bianchi, L., Catani, F. and Giannini, S. (2007), Alignment of resection planes in total knee replacement obtained with the conventional technique, as assessed by a modern computer-based navigation system. Int. J. Med. Robotics Comput. Assist. Surg., 3: 117-124. https://doi.org/10.1002/rcs.131”, for example.

[0133] Processor 111 then creates a parent reference system and a child reference system for each hip. Processor 111 centres the parent and child reference systems on the hip joint centre for each hip. Processor 111 then pairs the appropriate bones and hip joints together to generate a 4D functional bone structure. Processor 111 then integrates the glutei muscles with the 4D functional bone structure by defining lines of action of the glutei muscles between each identified muscle origin and insertion point. The 4D functional bone structure integrated with the glutei muscles results in the pre-operative 4D personalised functional model.
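
Purely as a hedged illustration of the geometric step of integrating muscles with the bone structure, the following Python sketch computes attachment-area centroids and a muscle line of action between them; the coordinate values and function names are hypothetical and are not taken from the embodiments.

```python
# Sketch: centroid of each attachment area and the line of action between them.
import numpy as np

def centroid(points: np.ndarray) -> np.ndarray:
    """Centroid of a muscle attachment area sampled as an (N, 3) vertex array."""
    return points.mean(axis=0)

def line_of_action(origin_area: np.ndarray, insertion_area: np.ndarray):
    """Origin centroid and unit direction of the muscle line of action."""
    o, i = centroid(origin_area), centroid(insertion_area)
    direction = i - o
    return o, direction / np.linalg.norm(direction)

# Hypothetical attachment-area vertices (coordinates in mm).
glutei_origin = np.array([[10.0, 40.0, 5.0], [12.0, 41.0, 6.0], [11.0, 39.0, 4.0]])
glutei_insertion = np.array([[30.0, -5.0, 2.0], [31.0, -6.0, 3.0], [29.0, -4.0, 1.0]])
origin_point, action_direction = line_of_action(glutei_origin, glutei_insertion)
```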

[0134] An equivalent process may be performed for knee or ankle joints, the processor 111 creating a parent reference system and a child reference system for each joint and centring the reference systems on a defined central point for each joint. The appropriate bones and joints are paired by the processor to generate a 4D functional bone structure by defining lines of action of the relevant muscles, such as gastrocnemius and/or soleus in the case of a tibial osteotomy. The 4D functional bone structure integrated with the digital models of the muscles results in the pre-operative 4D personalised functional model.

[0135] Upon generating the pre-operative 4D personalised functional model, processor 111 generates the post-operative 4D personalised functional model. Processor 111 identifies surfaces and landmarks of the plurality of bones in the simulated bone structure model and the post-operative software model. Specifically in the context of a femoral osteotomy, processor 111 utilises the corrected femur model in generating the post-operative 4D personalised functional model. In the context of a tibial osteotomy, processor 111 utilises the corrected tibia model in generating the post-operative 4D personalised functional model. Processor 111 utilises the identified surfaces and landmarks to create corrected body and corrected joint reference systems. The corrected body reference systems relate to, for example, the pelvis and corrected femur, the knee and corrected femur, the knee and corrected tibia, or the ankle and corrected tibia. The corrected joint reference system relates to the hip, knee, or ankle, as appropriate. Processor 111 also identifies centroids of the muscle attachment areas, such as glutei muscle attachment areas, gastrocnemius muscle attachment areas, and/or soleus muscle attachment areas, on the plurality of bones in the simulated bone structure model and the post-operative software model. The muscle attachment areas may include a muscle origin and insertion point for each muscle. In some embodiments, a user may manually identify the surfaces and landmarks of the plurality of bones in the simulated bone structure model and the post-operative software model. In some embodiments, a user, or an object recognition algorithm, may identify the centroids of the glutei muscle attachment areas on the plurality of bones in the simulated bone structure model, noting that an equivalent procedure may be performed in the context of other bones and muscles such as the tibia, gastrocnemius, and soleus.

[0136] The processor 111 may iteratively repeat the process of constraining the software definition and generating the 4D personalised functional model. For example, an algorithm may calculate one or more parameter values or metrics from the 4D personalised functional model, such as range of movement, gait symmetry (i.e. comparison of stepping with the left foot down to stepping with the right foot down), power, among others, to represent, analyse, or otherwise assess the 4D personalised functional model. Based on a computational solver technique such as backward error propagation or simulated annealing, for example, the processor 111 may be configured to modify the constraints applied to the software definition in order to optimise the calculated parameter value or metric to achieve a maximum/minimum, a local maximum/minimum, or a predefined target. The software definition is effectively a generic definition of a surgical procedure with one or more degrees of freedom, wherein each degree of freedom is selectable within a defined range. The processor 111 may be configured, by the iterative process outlined herein, to calculate, select, or otherwise determine the values within the defined range that provide an optimised output, that is, to generate a specific version of the generic definition of the surgical procedure. Optimised outputs may be absolute or may be based on comparison of the pre-operative and post-operative functional models of the patient. As an example, an absolute range of movement about the hip, knee, or ankle may be sought as an optimum, or an improvement of X degrees or Y percent may be sought.
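
As a non-limiting sketch of the iterative constraint optimisation described in this paragraph, the following Python code uses simulated annealing (via SciPy's dual_annealing) over hypothetical degrees of freedom; the parameter names, bounds, and the placeholder simulation function are illustrative assumptions rather than the actual 4D functional simulation.

```python
# Sketch: optimise surgical degrees of freedom against a simulated-movement metric.
import numpy as np
from scipy.optimize import dual_annealing

TARGET_ROM_DEG = 120.0   # illustrative predefined target range of movement

def simulate_metric(params: np.ndarray) -> float:
    """Placeholder standing in for the 4D functional simulation: returns a
    simulated range of movement for [rotation_deg, plane_tilt_deg, level_mm]."""
    rotation, tilt, level = params
    return 90.0 + 0.8 * rotation - 0.1 * tilt ** 2 + 0.05 * level

def cost(params: np.ndarray) -> float:
    # Minimise the gap between the simulated metric and the target.
    return abs(simulate_metric(params) - TARGET_ROM_DEG)

# Each degree of freedom is selectable within a defined range, as in the
# software definition of the surgical procedure.
bounds = [(0.0, 45.0), (-15.0, 15.0), (0.0, 50.0)]
result = dual_annealing(cost, bounds=bounds, seed=0, maxiter=200)
print(result.x, result.fun)   # optimised parameter values and residual cost
```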

[0137] The processor 111 may, based on the post-operative 4D personalised functional model, determine one or more rotational corrections that will improve patient range of motion, power, or gait, and then determine an osteotomy plane or planes and implant selection and/or positions to achieve those rotational corrections. Figures 5A to 5C illustrate planned rotational corrections in the specific example of the patient imaged in Figures 4D to 4F. The femoral neck shaft angle is to be rotated by 17.5 degrees in a first rotational direction. The femoral anteversion is to be rotated 38.8 degrees in a second rotational direction opposing the first rotational direction. A femoral head flexion angle is to be rotated 32.7 degrees. The determined osteotomy plane or planes and implant selection and/or positions are exemplary of constraints on the software definition of the surgical procedure.
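
For illustration only, applying a planned rotational correction to a bone model can be expressed as a rotation of the model's vertices about an anatomical axis; in the following Python sketch the axes, vertex coordinates, and the sign convention of the opposing second rotation are assumptions rather than the embodiment's definitions.

```python
# Sketch: compose two planned rotational corrections and apply them to vertices.
import numpy as np
from scipy.spatial.transform import Rotation as R

vertices = np.array([[10.0, 2.0, 0.0], [12.0, 3.0, 1.0]])   # bone mesh vertices (mm)

neck_shaft_axis = np.array([0.0, 0.0, 1.0])    # assumed axis for the 17.5 degree correction
anteversion_axis = np.array([0.0, 1.0, 0.0])   # assumed axis for the opposing 38.8 degree correction

correction = (R.from_rotvec(np.deg2rad(17.5) * neck_shaft_axis)
              * R.from_rotvec(np.deg2rad(-38.8) * anteversion_axis))
corrected_vertices = correction.apply(vertices)
```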

[0138] For example, the rotational corrections resulting from osteotomies may be calculated deterministically by an algorithm. A machine learning algorithm may be trained, via ground truth training data comprising pre-osteotomy bone (and/or muscle) images, osteotomy planes and positions, and resultant rotational corrections, to predict a rotational correction resulting from osteotomies at particular planes and positions on a particular pre-osteotomy bone. Then, by a solving algorithm such as backward error propagation or simulated annealing, the machine learning algorithm determines, based on input images of a bone and desired rotational corrections, osteotomy planes and positions to achieve the desired rotational corrections or to optimise correlation between the desired rotational corrections and the predicted rotational corrections. Similarly, the machine learning algorithm is trained, given the determined osteotomy planes and positions and the desired rotational corrections, to select an implant (from a predefined list forming part of the software definition of the surgical procedure) and implant position to best correlate the predicted rotational correction (as modified by the implant selection and position) with the desired rotational correction.
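
The inverse step described above can be sketched, without limitation, as a numerical search over osteotomy parameters against a forward predictor; in the Python code below the forward predictor is a simple placeholder standing in for a trained machine learning model, and the parameter names and bounds are assumptions.

```python
# Sketch: solve for osteotomy parameters whose predicted correction best
# matches a desired rotational correction.
import numpy as np
from scipy.optimize import minimize

desired_correction_deg = np.array([17.5, -38.8])   # e.g. [neck-shaft, anteversion]

def predict_correction(params: np.ndarray) -> np.ndarray:
    """Placeholder forward model: [plane_tilt_deg, level_mm] -> predicted corrections."""
    tilt, level = params
    return np.array([0.5 * tilt + 0.1 * level, -1.2 * tilt + 0.05 * level])

def mismatch(params: np.ndarray) -> float:
    # Squared error between predicted and desired rotational corrections.
    return float(np.sum((predict_correction(params) - desired_correction_deg) ** 2))

result = minimize(mismatch, x0=np.array([10.0, 20.0]),
                  bounds=[(0.0, 45.0), (0.0, 60.0)], method="L-BFGS-B")
print(result.x)   # osteotomy parameters best matching the desired correction
```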

[0139] Figure 6A illustrates a bone portion from the personalised pre-operative model of the patient anatomical structure, onto which candidate osteotomy planes and positions are drawn for illustrative purposes. A predicted rotational correction resulting from the osteotomies of Figure 6A is illustrated in Figure 6B, representative of a personalised post-operative model of the patient anatomical structure (e.g. predicted outcome of surgical procedure).

[0140] Figures 11A and 11B illustrate (from different angles) osteotomy planes calculated by the processor 111 to achieve a desired correction in the case of a distal femoral osteotomy.

[0141] Figures 12A and 12B illustrate (from different angles) osteotomy planes calculated by the processor 111 to achieve a desired correction in the case of a distal tibial osteotomy. Figure 12C highlights a wedge of 1.6cm thickness defined between the planes of the two osteotomies and the bone surface.

[0142] Figure 13A illustrates a plane of a single osteotomy in a proximal tibial osteotomy surgical procedure.

[0143] Figure 6C illustrates a selection and position of an implant to best achieve the desired rotational correction in the proximal femoral osteotomy example (following on from the osteotomies in Figures 6A and 6B), and Figure 6D illustrates an initial position of the implant.

[0145] Figures 11C and 11D illustrate selection and position of an implant to best achieve a desired rotational and positional correction in the distal femoral osteotomy example (following on from the osteotomies represented in Figures 11A and 11B).

[0145] Figures 12D and 12E illustrate selection and position of an implant to best achieve a desired rotational and positional correction in the distal tibial osteotomy example (following on from the osteotomies represented in Figures 12A to 12C).

[0146] Figures 13D and 13E illustrate selection and position of a wedge surgical instrument to best achieve a desired rotational and positional correction in the proximal tibial osteotomy example (following on from the osteotomy represented in Figure 13A). The wedge is configured specifically to conform to the physiology of the patient and in particular to place the two bone portions remaining post-osteotomy into the spatial relation to one another determined by the processor 111 to be the desired post-operative patient anatomical structure, thereby achieving, or coming as close as is feasible to achieving, a desired rotational correction. The wedge is manufactured, for example, by 3D printing in nylon 12. The wedge, in surgery, maintains the two portions of the tibia in the correct position to achieve the desired correction. The wedge is a surgical tool rather than an implant. The wedge is only within the patient’s body during the surgery, and does not remain in the patient’s body post-surgery. The wedge may be used to shape an allograft (i.e. bone tissue obtained from a human donor). The allograft is then inserted into the gap vacated by the wedge once the wedge is removed from the tibia. The wedge geometry is computed by the processor 111 based on the two osteotomy planes for the surgery, which intersect at a line, so as to define a personalised post-operative wedge geometry between the two planes terminating at the line, the wedge geometry corresponding to a region between portions of the bone post-operation.

[0147] The machine learning algorithm may be trained by multivariate analysis to predict how variables including osteotomy plane, osteotomy position, implant selection, and implant position will influence the positional and rotational correction of a bone (represented by the personalised pre-operative model of the patient anatomical structure). Therefore, in use, the trained machine learning algorithm can, based on the personalised pre-operative model of the patient anatomical structure, determine one or more parameters from among osteotomy plane, osteotomy position, implant selection, and implant position, to best correlate a predicted rotational correction with a desired rotational correction, bearing in mind that those parameters are selectable within ranges or other borders or limits defined in the software definition of the surgical procedure.
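
The computation of a wedge geometry from two osteotomy planes that intersect at a line can be illustrated, under assumed plane definitions, by the following Python sketch; the plane normals, hinge distance, and the simple opening formula are illustrative assumptions rather than the embodiment's computation.

```python
# Sketch: hinge line of two osteotomy planes and the wedge angle/opening they define.
import numpy as np

def plane_intersection_line(n1, p1, n2, p2):
    """Point on, and unit direction of, the line where two planes intersect.
    Each plane is given by a unit normal n and a point p lying on the plane."""
    d = np.cross(n1, n2)                         # direction of the hinge line
    A = np.vstack([n1, n2, d])                   # solve for a point on both planes
    b = np.array([np.dot(n1, p1), np.dot(n2, p2), 0.0])
    point = np.linalg.solve(A, b)
    return point, d / np.linalg.norm(d)

# Hypothetical osteotomy planes opened by 10 degrees about a shared hinge.
n1, p1 = np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.0, 0.0])
n2 = np.array([0.0, np.sin(np.deg2rad(10.0)), np.cos(np.deg2rad(10.0))])
p2 = np.array([0.0, 0.0, 0.0])

apex_point, hinge_direction = plane_intersection_line(n1, p1, n2, p2)
wedge_angle_deg = np.degrees(np.arccos(np.clip(np.dot(n1, n2), -1.0, 1.0)))
# Wedge opening measured perpendicular to one plane, e.g. 90 mm from the hinge line.
opening_mm = 90.0 * np.sin(np.deg2rad(wedge_angle_deg))
```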

[0148] In some embodiments, processor 111 creates the corrected body reference system for the pelvis using the stereophotogrammetric system and anatomical landmark calibration. In some embodiments, processor 111 creates the corrected body reference system for the corrected femur or corrected tibia using the navigation system and anatomical reference frame definition methods.

[0149] Processor 111 then creates a corrected parent reference system and a corrected child reference system for each hip. Processor 111 centres the corrected parent and corrected child reference systems on the hip joint centre for each hip. Processor 111 then pairs the appropriate bones, corrected femur, and hip joints together to generate a 4D corrected functional bone structure. Processor 111 then integrates the glutei muscles with the 4D corrected functional bone structure by defining lines of action of the glutei muscles between each identified muscle origin and insertion point. The 4D corrected functional bone structure integrated with the glutei muscles results in the post-operative 4D personalised functional model, or the corrected-anatomy model.

[0150] An equivalent process may be performed for knee or ankle joints, the processor 111 creating a corrected parent reference system and a corrected child reference system for each corrected joint and centring the corrected reference systems on a defined central point for each corrected joint. The appropriate corrected bones and joints are paired by the processor to generate a 4D corrected functional bone structure by defining lines of action of the relevant muscles, such as gastrocnemius and/or soleus in the case of a tibial osteotomy. The 4D corrected functional bone structure integrated with the digital models of the muscles results in the post-operative 4D personalised functional model, or the corrected anatomy model.

[0151] Processor 111, upon generating the pre-operative 4D personalised functional model and the post-operative 4D personalised functional model, simulates movement of the bones and muscles of the patient’s anatomical structure to generate a simulation output. Processor 111 generates the simulation output by determining hip range of motion limits and estimating glutei muscle moment arms, knee range of motion limits and estimating hamstring or gastrocnemius muscle moment arms, or ankle range of motion limits and estimating gastrocnemius or soleus muscle moment arms. To determine, for example, the hip range of motion limits, processor 111 mobilises the hip through its passive range of motion and the range of motion in each degree of freedom. That is, processor 111 mobilises the hip such that it undergoes hip flexion, hip extension, internal rotation, external rotation, hip abduction, and hip adduction, for example. The range of motion limits are then defined by processor 111 when there is bone-on-bone contact during mobilisation. Processor 111 then estimates the glutei muscle moment arms by performing a muscle analysis. Muscle moment arms are the perpendicular distance of the muscle line of action from a joint axis. In the alternative context of a tibial osteotomy, an equivalent process may be performed in the context of a knee joint, noting that rotation may be excluded in that context, or of an ankle joint, in which case rotation may be included.
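
Since a muscle moment arm is defined above as the perpendicular distance of the muscle line of action from a joint axis, it can be sketched as the common-normal distance between two 3D lines; in the Python code below the coordinates of the muscle line and joint axis are hypothetical.

```python
# Sketch: moment arm as the perpendicular distance between two 3D lines.
import numpy as np

def moment_arm(muscle_point, muscle_dir, joint_point, joint_axis):
    """Common-normal distance between the muscle line of action and the joint axis."""
    u = muscle_dir / np.linalg.norm(muscle_dir)
    v = joint_axis / np.linalg.norm(joint_axis)
    n = np.cross(u, v)
    w = muscle_point - joint_point
    if np.allclose(n, 0.0):                          # lines are parallel
        return float(np.linalg.norm(np.cross(w, u)))
    return float(abs(np.dot(w, n)) / np.linalg.norm(n))

# Hypothetical glutei line of action and hip flexion/extension axis (mm).
arm_mm = moment_arm(np.array([15.0, 40.0, 5.0]), np.array([1.0, -2.0, 0.1]),
                    np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]))
```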

[0152] Upon generating the simulation output, processor 111 moves to step S210. At step S210, processor 111 may adjust the personalised post-operative software model based on the simulation output. That is, processor 111 may alter the positioning of the implant or the positioning of the corrected body and corrected joint reference systems to improve the range of motion limits and estimated muscle moment arms (such as glutei muscle moment arms, quadriceps muscle moment arms, hamstring muscle moment arms, gastrocnemius muscle moment arms, soleus muscle moment arms), for example. In some embodiments, processor 111 may utilise artificial intelligence and/or machine learning to adjust the post-operative software model based on the simulation output. An artificial intelligence and/or machine learning code module for performing the adjustment of the post-operative software model based on the simulation output may be implemented within the surgery planning module 124. The machine learning algorithm may be trained using desired, or ideal, 4D functional software models, for example.

[0153] Processor 111, upon completing adjustment of the personalised post-operative software model based on the simulation output (if required), continues executing method 200 and proceeds to step S212. At step S212, processor 111 generates a personalised surgical cutting guide model (cutting guide model) based on the personalised post-operative software model or the adjusted personalised post-operative software model. Surgical guide design is illustrated in Figure 9 for the example of the proximal femoral osteotomy, in Figure 11E for the example of the distal femoral osteotomy, in Figure 12F for the example of the distal tibial osteotomy, and in Figures 13B and 13C for the example of the proximal tibial osteotomy. In generating the cutting guide model, processor 111 defines a mask portion 910 of the surgical cutting guide model. The mask portion is configured to conform to the at least one curve or other geometric feature of the bone position and dimension data of bones in the patient’s anatomical structure. Processor 111 defines each mask portion such that it has a minimum span of at least 160° over the contour of respective bone 905. On defining each mask portion, processor 111 then uniformly externally offsets each mask portion. In some embodiments, the uniform external offset is about 3.2mm. Processor 111 then smooths the contour of each mask portion. In some embodiments, processor 111 performs smoothing of each mask portion with an influence distance of 0.5mm. Processor 111 then finalises each mask portion by ensuring that the mesh of each respective bone has no errors. If the mesh of the respective bone includes at least one error, processor 111 may automatically fix the at least one error.
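
The uniform external offset and smoothing of a mask portion can be sketched, in highly simplified form, as operations on raw vertex and normal arrays; the following Python code assumes pre-computed vertex normals and neighbour lists, and is not a substitute for the mesh-processing tooling referred to elsewhere in this disclosure.

```python
# Sketch: offset a mask-portion surface outward (e.g. by about 3.2 mm) and smooth it.
import numpy as np

def offset_surface(vertices: np.ndarray, normals: np.ndarray, offset_mm: float):
    """Translate each vertex along its unit outward normal by offset_mm."""
    unit = normals / np.linalg.norm(normals, axis=1, keepdims=True)
    return vertices + offset_mm * unit

def laplacian_smooth(vertices: np.ndarray, neighbours, rounds: int = 1):
    """Simple Laplacian smoothing: move each vertex toward the mean of its neighbours."""
    v = vertices.copy()
    for _ in range(rounds):
        v = np.array([v[nbrs].mean(axis=0) if nbrs else v[i]
                      for i, nbrs in enumerate(neighbours)])
    return v

verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
norms = np.array([[0.0, 0.0, 1.0]] * 3)
offset_verts = offset_surface(verts, norms, 3.2)
smoothed_verts = laplacian_smooth(offset_verts, [[1, 2], [0, 2], [0, 1]])
```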

[0154] In some embodiments, after defining each mask portion, processor 111 may further generate the cutting guide model by defining an osteotomy slot portion 920 of the surgical cutting guide model for each osteotomy. Each osteotomy slot portion may be an aperture in the respective mask portion, positioned and orientated according to the specific values for the respective osteotomy, such as illustrated in corresponding Figures 6A, 11A, 11B, 12A, 12B, 12C, and 13A. In defining each osteotomy slot portion, processor 111 generates a sketch that best fits the respective osteotomy on the pre-operative software model. Processor 111 then generates a profile for each osteotomy slot portion 920. Each profile is generated by externally (distally) offsetting the respective bone contour of the respective osteotomy. In some embodiments, the external offset is about at least 15mm. Processor 111 then extrudes each profile in both the proximal and distal directions. In some embodiments, each profile is extruded by about 1.6mm in both the proximal and distal directions. The configuration of the surgical cutting guide model by the processor 111 is to transform the skeleton of the patient from the pre-operative patient anatomical structure imaged and digitalised at S202 to a physical realisation of the post-operative software model of the patient anatomical structure generated at S206 as informed by S208 and S210.

[0155] After defining the osteotomy slot portion, processor 111 then defines an osteotomy saw blade insertion profile for each osteotomy slot portion. Processor 111 generates each saw blade profile for the respective osteotomy slot portion with respect to the respective bone contour. Processor 111 then extrudes each saw blade profile in both the proximal and distal directions. In some embodiments, each saw blade profile is extruded by about 0.4mm in both the proximal and distal directions. Processor 111 then removes the saw blade insertion profile from the osteotomy slot portion. Processor 111 then finalises the osteotomy slot portion 920 by ensuring that the mesh of each respective bone has no errors. If the mesh of the respective bone includes at least one error, processor 111 may automatically fix the at least one error.

[0156] In some embodiments, after defining the mask portion 910 for each osteotomy, processor 111 may further generate the cutting guide model by defining one or more from among a saw blade orientation and an osteotomy chisel insertion slot 930 for each osteotomy. Each osteotomy chisel insertion slot 930 may be defined as a location on the respective defined mask portion. Each osteotomy chisel insertion slot 930 may be defined in a direction relative to the respective mask portion 910. Processor 111 generates a sketch of the chisel insertion slot. In some embodiments, the sketch of the chisel insertion slot is perpendicular to the respective saw blade insertion profile for each osteotomy on the pre-operative software model. Processor 111 then inserts the respective chisel model received at step S204 into the sketch of the chisel insertion slot for each osteotomy. Processor 111 then externally (distally) offsets each inserted chisel model. In some embodiments, the external (distal) offset is about 2.5mm. Processor 111 then externally extrudes each inserted chisel model. In some embodiments, each chisel model is extruded by a distance of about at least 15mm. Processor 111 then finalises each chisel insertion slot by ensuring that the mesh of each respective bone has no errors. If the mesh of the respective bone includes at least one error, processor 111 may automatically fix the at least one error.

[0157] In some embodiments, the defined chisel insertion slot of each osteotomy may further comprise a hole to removably receive a guide wire 1010 and a guide wire seat as illustrated in the manufactured surgical guide of Figure 10. That is, processor 111 may further define the surgical cutting guide model by defining a chisel insertion hole 1020. In embodiments that include a chisel insertion hole, the chisel model includes a hole for the guide wire insertion. Processor 111, after finalising the chisel insertion slot utilising the chisel model that includes the hole for the guide wire insertion, generates a removable guide wire seat. In some embodiments, the removable guide wire seat may be L-shaped. In some embodiments, the guide wire seat may be configured for insertion into the hole at one end. In some embodiments, the guide wire seat may be configured to longitudinally receive the guide wire. Processor 111 then merges the chisel insertion slot with the generated removable guide wire seat. Processor 111 then finalises each chisel insertion slot with the chisel insertion hole by ensuring that the mesh of each respective bone has no errors. If the mesh of the respective bone includes at least one error, processor 111 may automatically fix the at least one error.

[0158] Guide wire seats 940 for guide wires may also be included in the absence of chisel insertion slots, as illustrated in Figures 11E, 12F, 13B, and 13C.

[0159] In some embodiments, after defining the mask portion for each osteotomy, processor 111 may further generate the cutting guide model by defining at least one implant fixation slot. The at least one implant fixation slot may be defined as a location on the respective defined mask portion based on a shaft surface of the implant. Processor 111 generates a sketch of the implant fixation slot for each osteotomy on the pre-operative software model by best fitting the implant shaft surface. Processor 111 then inserts the respective implant model received at step S204 into the sketch of the implant fixation slot for each osteotomy. In some embodiments, the implant model may include at least one implant fixation screw. Processor 111 then externally offsets each inserted implant model. In some embodiments, the external (distal) offset is about 2mm. Processor 111 then externally extrudes each inserted implant model. In some embodiments, each implant model is extruded by a distance of about at least 15mm. Processor 111 then finalises each implant model by ensuring that the mesh of each respective bone has no errors. If the mesh of the respective bone includes at least one error, processor 111 may automatically fix the at least one error.

[0160] In some embodiments, processor 111, after defining at least one of: the saw blade insertion portion; the chisel insertion slot; and/or the at least one implant fixation screw; may remove at least one of: the saw blade insertion portion; the chisel insertion slot; or the at least one implant fixation screw; from the defined mask portion of the respective osteotomy. That is, processor 111 may remove the saw blade insertion portion, the chisel insertion slot, or the at least one implant fixation screw from the mask portion of the respective osteotomy, for example.

[0161] In some embodiments, processor 111, after removing at least one of: the saw blade insertion portion; the chisel insertion slot; and/or the at least one implant fixation screw; from the defined mask portion of the respective osteotomy, may combine, for each osteotomy, the mask portion and at least one of: the osteotomy slot portion; the chisel insertion slot; and/or the implant fixation slot. That is, processor 111 may combine the mask portion with at least one of the previously defined slots or portions to further generate the surgical cutting guide model. In some embodiments, processor 111 may then assign data points (labels) and/or patient identifiers to the generated surgical cutting guide model. Processor 111 then finalises the surgical cutting guide model by ensuring that the mesh of each respective bone has no errors. If the mesh of the respective bone includes at least one error, processor 111 may automatically fix the at least one error or raise an alert to a user to review and fix the error.

[0162] Upon finalising the surgical cutting guide model, processor 111 moves to step S214. At step S214, processor 111 generates the personalised surgical operation instructions 120. The personalised surgical operation instructions 120 may include the personalised pre-operative software model, the personalised post-operative software model, the selected implant and corresponding chisel, and/or the personalised cutting guide model.

[0163] Processor 111 upon completion of step S214 moves to step S216. In some embodiments, processor 111 may perform step S216 prior to step S214. At step S216, processor 111 generates 3D model printing instructions 122 of at least one of: the personalised pre-operative software model, the personalised post-operative software model, and/or the personalised cutting guide model. Upon completion of both steps S214 and S216, processor 111 stops executing method 200 of surgery planning module 124. 3D printing may be with a biocompatible PA2200 polyamide powder.

[0164] In some embodiments, at least part of method 200 may be performed by a user via electronic interface 113. The user may utilise any one of, or a combination of, commercially available software solutions such as ‘Mimics’, ‘3-matic’, ‘MATLAB’, ‘Materialise Magics’, and/or ‘EOS Print’, for example.
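
As one purely illustrative form the 3D model printing instructions 122 of step S216 could take, the following Python sketch writes a triangulated model to an ASCII STL file; the mesh data, file name, and the choice of ASCII STL are assumptions made for illustration and do not describe the instructions actually generated by processor 111.

```python
# Sketch: export a triangulated guide model as an ASCII STL file.
import numpy as np

def write_ascii_stl(path: str, vertices: np.ndarray, faces: np.ndarray, name: str = "guide"):
    """Write triangles (faces indexing into vertices) to an ASCII STL file."""
    with open(path, "w") as f:
        f.write(f"solid {name}\n")
        for tri in faces:
            a, b, c = vertices[tri]
            n = np.cross(b - a, c - a)
            n = n / (np.linalg.norm(n) or 1.0)       # avoid division by zero
            f.write(f"  facet normal {n[0]:.6e} {n[1]:.6e} {n[2]:.6e}\n")
            f.write("    outer loop\n")
            for v in (a, b, c):
                f.write(f"      vertex {v[0]:.6e} {v[1]:.6e} {v[2]:.6e}\n")
            f.write("    endloop\n  endfacet\n")
        f.write(f"endsolid {name}\n")

# Hypothetical single-triangle mesh (mm) standing in for a cutting guide model.
verts = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [0.0, 10.0, 0.0]])
tris = np.array([[0, 1, 2]])
write_ascii_stl("cutting_guide.stl", verts, tris)
```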

[0165] It will be appreciated by persons skilled in the art that numerous variations and/or modifications may be made to the above-described embodiments, without departing from the broad general scope of the present disclosure. The present embodiments are, therefore, to be considered in all respects as illustrative and not restrictive.