Title:
TERAHERTZ RADIATION BASED METHOD AND SYSTEM FOR IDENTIFICATION OF OBJECTS AND PREDICTION OF OBJECT CLASS
Document Type and Number:
WIPO Patent Application WO/2023/143702
Kind Code:
A1
Abstract:
Aspects concern a method and system for classifying an object. The method includes receiving a power intensity heatmap image associated with the object; identifying at least one region of interest (ROI) on the power intensity heatmap image; extracting, from the ROI, features used to differentiate between objects; and classifying the object based on the extracted features. Deep machine learning and/or artificial intelligence modules may be utilized for the identification of the ROI, the extraction of features, and/or the classification.

Inventors:
YAN WAI (SG)
KANUPRIYA MALHOTRA
NGO CHI TRUNG (SG)
TAN TIAN
MOOKHERJEE DEBOSMIT
MIYAJIMA MASAFUMI
ANDALAM SIDHARTA (SG)
JEON JIN HAN (SG)
Application Number:
PCT/EP2022/051677
Publication Date:
August 03, 2023
Filing Date:
January 26, 2022
Assignee:
BOSCH GMBH ROBERT (DE)
International Classes:
B07C5/342; G06V10/70
Domestic Patent References:
WO2017140729A12017-08-24
Foreign References:
US20150144537A12015-05-28
US20140367316A12014-12-18
Other References:
DIRK NUESSLER ET AL: "THz Imaging for Recycling of Black Plastics", MICROWAVE CONFERENCE (GEMIC), 24 March 2014 (2014-03-24), Germany, pages 1 - 4, XP055362814, ISBN: 978-3-8007-3585-3
JINSONG ZHANG ET AL: "Terahertz Image Detection with the Improved Faster Region-Based Convolutional Neural Network", SENSORS, vol. 18, no. 7, 18 July 2018 (2018-07-18), pages 2327, XP055662357, DOI: 10.3390/s18072327
"Gaussian blur", GAUSSIAN BLUR - AN OVERVIEW I SCIENCEDIRECT TOPICS, Retrieved from the Internet
"A computational approach to edge detection", IEEE XPLORE, Retrieved from the Internet
O. RONNEBERGER, P. FISCHER, T. BROX: "U-Net: Convolutional networks for biomedical image segmentation", ARXIV.ORG, 18 May 2015 (2015-05-18), Retrieved from the Internet
ABREHERET: "Abreheret/Pixelannotationtool: Annotate quickly images", GITHUB, Retrieved from the Internet
D. P. KINGMA, J. BA: "ADAM: A METHOD FOR STOCHASTIC OPTIMIZATION", 2014, Retrieved from the Internet
Claims:
CLAIMS

1. A method (100) for classifying an object (201) comprising the steps of:

(a) receiving a power intensity heatmap image (204) associated with the object, the power intensity heatmap image (204) generated by an electromagnetic radiation-based image capturing device operating in a Terahertz (THz) frequency range;

(b) identifying at least one region of interest (ROI) (320) on the power intensity heatmap image (204);

(c) extracting a first feature (402, 404) and second feature (410) from the ROI (320), the first feature associated with a first area of the object and the second feature associated with a second area of the object; and

(d) classifying the object based on the extracted first feature (440) and the extracted second feature (460).

2. The method of claim 1, wherein the first feature is a key differentiator to distinguish between two or more classes of object, and the second feature is associated with an area of the object where a power intensity signal associated with a class of the object can be captured regardless of one or more of the following: shapes, edges, noises associated with the object.

3. The method of claim 1 or 2, wherein the object to be classified is a plastic bottle belonging to a class of either High Density Poly Ethylene (HDPE) or Polyethylene terephthalate (PET).

4. The method of claim 3, wherein the first feature is a moulding structure of the plastic bottle base, and the second feature is a center area of the plastic bottle.

5. The method of claim 4, wherein the moulding structure is either an extrusion blow moulding resultant structure or an injection blow moulding resultant structure.

6. The method of claim 4 or 5, wherein the step of extracting the first feature and the second feature include segmenting the first feature and the second feature from the ROI, and combining the segmented first feature and the segmented second feature as an integrated input (480) for classification.

7. The method of any one of the preceding claims, wherein at least one of the steps of: identifying at least one region of interest (ROI) on the power intensity heatmap image, extracting a first and second feature from the ROI, the first feature associated with a first area of the object and the second feature associated with a second area of the object, or classifying the object based on the first feature and the second feature, includes a step of passing at least one of the following image data through a deep learning module: power intensity heatmap image data, ROI image data, or the extracted first feature and extracted second feature data.

8. A system (200) for classifying an object (201) comprising an electromagnetic radiation source (202) and scanner (206) operating in the Terahertz (THz) frequency range and configured to capture a power intensity heatmap image (204) associated with the object (201); a processor (210) comprising a localization module (212) to identify at least one region of interest (ROI) on the power intensity heatmap image; a feature extractor (214) to extract a first and second feature from the ROI, the first feature associated with a first area of the object and the second feature associated with a second area of the object; and a classifier (216) to classify the object based on the extracted first feature and the extracted second feature.

9. The system of claim 8, wherein the first feature is a key differentiator to distinguish between two or more classes of object, and the second feature is associated with an area of the object where a power intensity signal associated with a class of the object can be captured regardless of one or more of the following: shapes, edges, noises of the object.

10. The system of claim 8 or 9, wherein the object to be classified is a plastic bottle belonging to a class of either High Density Poly Ethylene (HDPE) or Polyethylene terephthalate (PET).

11. The system of claim 10, wherein the first feature is a moulding structure of the plastic bottle base, and the second feature is a center area of the plastic bottle.

12. The system of claim 11, wherein the moulding structure is either an extrusion blow moulding resultant structure or an injection blow moulding resultant structure.

13. The system of claim 11 or 12, wherein the feature extractor is operable to segment the first feature and the second feature from the ROI, and combine the segmented first feature and the segmented second feature as input for classification.

14. The system of any one of claims 9 to 13, wherein at least one of the feature extractor and the classifier comprises a deep learning module.

15. The system of claim 14, wherein the deep learning module comprises a deep convolutional neural network (DCNN).

16. The system of any one of claims 8 to 15, wherein the localization module comprises a contour-based object localization algorithm.

17. A non-transitory computer readable medium storing instructions that, when executed by a processor, cause the processor to perform the method of claim 1.

Description:
TERAHERTZ RADIATION BASED METHOD AND SYSTEM FOR IDENTIFICATION OF OBJECTS AND PREDICTION OF OBJECT CLASS

TECHNICAL FIELD

[0001] The disclosure relates to a system and method for identification of objects and prediction of object class. The system and method relate, but are not limited to, a terahertz radiation-based method and system for identification of plastic objects and classification of such plastic objects.

BACKGROUND

[0002] Plastic objects such as plastic bottles are widely consumed. Unfortunately, excessive consumption and a relative lack of recycling efforts have led to pollution. Sorting of plastics is a crucial step in the plastic recycling industry. One existing solution is based on NIR (near-infrared) hyperspectral optical sensor systems. However, such NIR optical sorters are not able to sort plastic bottles with labels, black plastics, or multi-layer plastics. Current sorting methods also require complex pre-sorting and a long waste-stream chain to reach a certain level of accuracy, which increases costs and makes them unsuitable for small-scale plastics recycling facilities.

[0003] RGB cameras may be used to capture images to facilitate sorting. However, the quality of captured RGB images may be susceptible to poor lighting conditions, visual ambiguity from overlapping items, and/or fully covered labels.

[0004] Terahertz radiation has been used for the identification and analysis of industrial plastics. However, methods and devices such as those disclosed in US patent publication US20140367316A1 and PCT publication WO2017140729A1 may be relatively expensive and slow. In addition, these disclosures are suited to industrial-scale classification rather than small-scale materials recovery facilities (MRFs).

[0005] There exists a need to provide a solution to alleviate at least one of the aforementioned problems.

SUMMARY

[0006] The disclosure uses electromagnetic radiation operating in the Terahertz (THz) range/spectrum to capture the power penetration intensity (in the form of image data) of PET and HDPE bottles. It furthermore adopts Artificial Intelligence (AI) or deep learning algorithms such as a Deep Convolutional Neural Network (DCNN) to extract key features, and uses image classification techniques to automatically differentiate between HDPE and PET bottles, eliminating the need for sorters to judge the type of plastic solely based on visual and tactile factors.

[0007] Electromagnetic radiation operating in the Terahertz range (0.1 THz or higher) can penetrate not only visually transparent objects but also opaque objects, and provides a THz power intensity heatmap image based on the transmittance of the material to THz radiation. This disclosure aims not only to minimise sorting based purely on human judgment using visual and tactile features, but also to overcome the limitations of video/web camera-based image information.

[0008] According to the present disclosure, a method for predicting a class of an object as claimed in claim 1 is provided. A system for predicting a class of an object according to the disclosure is defined in claim 8.

[0009] The dependent claims define some examples associated with the method and system, respectively.

BRIEF DESCRIPTION OF THE DRAWINGS

[0010] The disclosure will be better understood with reference to the detailed description when considered in conjunction with the non-limiting examples and the accompanying drawings, in which:

- FIG. 1 is a flow chart of a method for identifying an object and predicting a class of the object;

- FIG. 2A is a system setup for identification of objects and prediction of object class; FIG. 2B shows an example of the image captured by an electromagnetic radiation-based image capturing device operating in a Terahertz frequency range, compared with an RGB image; FIG. 2C shows an example of various image processing modules that may be used to facilitate classification of objects;

- FIGS. 3A and 3B show the process flow for identifying or obtaining at least one region of interest from a heatmap image according to some embodiments;

- FIGS. 4A to 4F illustrate various feature extraction processes associated with a first feature and a second feature according to some embodiments;

- FIG. 5 shows results demonstrating the efficacy of the classification of two different types of objects - PET objects and HDPE objects; and

- FIG. 6 shows a schematic illustration of a processor 210 for identification of objects and/or prediction of object class according to some embodiments.

DETAILED DESCRIPTION

[0011] The following detailed description refers to the accompanying drawings that show, by way of illustration, specific details and embodiments in which the disclosure may be practiced. These embodiments are described in sufficient detail to enable those skilled in the art to practice the disclosure. Other embodiments may be utilized, and structural and logical changes may be made, without departing from the scope of the disclosure. The various embodiments are not necessarily mutually exclusive, as some embodiments can be combined with one or more other embodiments to form new embodiments.

[0012] Embodiments described in the context of one of the systems or methods are analogously valid for the other systems or methods.

[0013] Features that are described in the context of an embodiment may correspondingly be applicable to the same or similar features in the other embodiments. Features that are described in the context of an embodiment may correspondingly be applicable to the other embodiments, even if not explicitly described in these other embodiments. Furthermore, additions and/or combinations and/or alternatives as described for a feature in the context of an embodiment may correspondingly be applicable to the same or similar feature in the other embodiments.

[0014] In the context of some embodiments, the articles “a”, “an” and “the” as used with regard to a feature or element include a reference to one or more of the features or elements.

[0015] As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items.

[0016] As used herein, the term “object” includes any object, particularly a recyclable or reusable object that may be sorted according to type and/or class. As a non-limiting example, plastic objects may be sorted according to whether they belong to the class of High-Density Poly Ethylene (HDPE) or Polyethylene terephthalate (PET). The objects may include bottles, jars, containers, plates, bowls, etc. of various shapes, forms (flattened, compressed, distorted, deformed, etc.) and sizes.

[0017] As used herein, the terms “associate”, “associated”, and “associating” indicate a defined relationship (or cross-reference) between at least two items. For instance, an image associated with an object may include a defined region of interest which focuses on the object for further processing via object detection and/or tracking algorithm(s).

[0018] As used herein, the term “module” refers to, forms part of, or includes an Application Specific Integrated Circuit (ASIC); an electronic circuit; a combinational logic circuit; a field programmable gate array (FPGA); a processor (shared, dedicated, or group) that executes code; other suitable hardware components that provide the described functionality; or a combination of some or all of the above, such as in a system-on-chip. The term “module” may include memory (shared, dedicated, or group) that stores code executed by the processor.

[0019] The disclosure utilizes electromagnetic radiation operating in a Terahertz (THz) frequency range to capture image signals or data of an object. The THz frequency range may be operable from 0.1 THz upwards. The image data or signal may include the penetration intensity of the electromagnetic radiation, and the term “power intensity” throughout the description is therefore construed broadly to include power penetration intensity. It follows that image data or signals based on the “power intensity” may include heatmaps showing or representing the power penetration distribution. The disclosure further includes automated object identification, feature extraction and classification to extract key features, and uses image classification techniques to automatically differentiate between at least two classes of objects, such as between an HDPE and a PET bottle.

[0020] FIG. 1 shows a flow chart of a method 100 for classifying an object comprising the steps of: receiving a power intensity heatmap image associated with the object, the power intensity heatmap image generated by an electromagnetic radiation-based image capturing device operating in a Terahertz (THz) frequency range (step S102); identifying at least one region of interest (ROI) on the power intensity heatmap image (step S104); extracting a first and a second feature from the ROI, the first feature associated with a first area of the object and the second feature associated with a second area of the object (step S106); and classifying the object based on the first feature and the second feature (step S108).

[0021] FIG. 2A shows a setup of a system 200 for predicting classes of objects according to the method 100. The system 200 for predicting a class (e.g. HDPE plastic class, PET plastic class) of an object 201, such as a plastic bottle, comprises an electromagnetic radiation-based image capturing device. In some embodiments the electromagnetic radiation-based image capturing device may include an electromagnetic radiation source 202 and a scanner/receiver 206 operating in a Terahertz (THz) frequency range or near-THz frequency range, configured to capture a power intensity heatmap image 204 associated with the object 201. The object 201 may be positioned on a platform 205, such as a conveyor belt. The system 200 comprises a processor 210 arranged in data or signal communication with the electromagnetic radiation source 202 and scanner 206 to receive or acquire the power intensity heatmap image 204. The electromagnetic wave(s) (in the THz frequency range) may penetrate the object 201, and the penetrated electromagnetic wave(s) may be picked up by the scanner 206 to produce a heatmap representative of the power penetration intensity distribution of the object 201. It is contemplated that multiple power intensity heatmap images 204 of the same object may be captured at different time stamps in the form of multiple frames. It is envisaged that the object 201 may be in various forms, sizes and shapes.

[0022] The electromagnetic radiation source 202 and scanner 206 may operate in a Terahertz (THz) frequency range. In some embodiments, the electromagnetic radiation source 202 generates a continuous-wave THz signal with an output power of up to 1.0 Watt (W) in a sub-THz region in the frequency range from 0.1 to 0.7 THz. The scanner 206 is configured to capture power intensity heatmap images of the object 201 at different positions on the platform 205. In some embodiments, the radiation source 202 and/or scanner 206 may be remotely operated to capture images of the object simultaneously or concurrently at predetermined intervals as defined by a user. In some embodiments, the radiation source 202 and scanner 206 may be configured to capture or record the respective images continuously. One or more heatmaps of the object 201 may be acquired while the object 201 is positioned on a moving platform such as a conveyor belt, which moves at a predetermined speed ranging from 0.4 to 1.5 meters per second.

[0023] The Terahertz (THz) based electromagnetic radiation source 202 may be arranged with a linear imaging scanner 206 located at a suitable position proximate the platform 205 (e.g. below a conveyor belt of the platform 205) to receive electromagnetic radiation from the source 202 to facilitate the acquisition of 2-dimensional heatmap images associated with the object(s) on the platform 205. Upon completion of data acquisition, each image may be saved in a database and labelled with its corresponding plastic type.

[0024] FIG. 2B shows a graphical example of a heatmap image 204 (also referred to as a THz output image or THz image). In some embodiments, the captured heatmaps at a specific THz spectrum may be stored in an associated database of the electromagnetic radiation source 202 and converted into one or more heatmap images 204. For comparison, a corresponding RGB image 204A is also shown in FIG. 2B. The heatmap images shown in FIG. 2B were acquired from a top-view perspective of the objects, in the form of PET and/or HDPE bottles passing on the conveyor belt 205, at a fixed distance of, for example, 450 millimeters (mm) from the THz source 202.

[0025] The captured/acquired images 204 may be sent to the processor 210 for further processing.

[0026] In some embodiments as shown in FIG. 2C, the processor 210 comprises an object localization and detection module 212, a feature selection and extraction module 214, and a classification module 216. The modules 212, 214, and 216 implement the method 100, and are described separately for the purpose of clarity and ease of understanding. It is contemplated that one or more of the aforementioned modules 212, 214, 216 may be combined. It is further contemplated that all three modules 212, 214, 216 may be combined to form a single module.

[0027] The localization and detection module 212 is operable to receive and/or parse the image data of the heatmap image 204 for analysis so as to obtain or identify at least one region of interest (ROI) associated with each object on the heatmap image 204.

[0028] In some embodiments, the localization and detection module 212 comprises a contour-based object detection algorithm to identify one or more ROIs 320 from the heatmap image 204, each ROI 320 corresponding to an object 201 to be classified. In some embodiments, the contour-based object detection algorithm may be a pixel contour-based object localization algorithm that extracts the ROI 320 of each individual plastic bottle 201 on the conveyor belt 205, filtering out noise from the irrelevant background. The contour-based detection algorithm includes a curve-fitting or extrapolation technique joining all continuous points having the same colour or intensity, which is used to identify or define a boundary of objects in the heatmap image 204.

[0029] In some embodiments, one or more pre-processing steps may be applied to minimise noise and improve the accuracy of contour detection/localisation. One pre-processing technique applied to the object image is Gaussian blurring [1] with a user-defined filter size, e.g. a filter size of (21, 21), to reduce irrelevant (e.g. background) noise in the picture. Another pre-processing technique is the application of Canny edge detection [2] with a standard deviation of 10 and a lower threshold of 80, to detect the significant edges of the object(s) present in the object image. In order to increase the detection accuracy of the contours and make the edges clearer, the edges may be dilated before contour detection. Contour detection is applied on the thresholded image to obtain the respective regions of interest (ROIs) of the objects, before feature extraction. FIG. 3A shows an embodiment of the method for localization and extraction 300 so as to obtain one or more ROIs. The method 300 comprises the steps of receiving the heatmap image (step S302), applying a Gaussian blur filter (step S304), applying an edge threshold (step S306), applying an edge dilation/magnification (step S308), and detecting the contours (step S310) to obtain at least one ROI. It is appreciable that should there be multiple objects within the object image, multiple corresponding ROIs, each associated with one object, will be output. FIG. 3B illustrates the application of method 300 on an image with two objects, resulting in two ROIs.
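A minimal OpenCV sketch of this localization pipeline (steps S302 to S310) is given below. The (21, 21) filter size and the lower edge threshold of 80 come from the description above; because OpenCV's Canny implementation takes two hysteresis thresholds rather than a standard deviation, the upper threshold of 160, the 5x5 dilation kernel and the minimum-area filter are illustrative assumptions.

import cv2
import numpy as np

def localize_rois(heatmap: np.ndarray) -> list:
    """Return ROI crops of objects in a THz heatmap image (steps S302-S310)."""
    gray = cv2.cvtColor(heatmap, cv2.COLOR_BGR2GRAY) if heatmap.ndim == 3 else heatmap
    # Step S304: Gaussian blur with a (21, 21) filter to suppress background noise.
    blurred = cv2.GaussianBlur(gray, (21, 21), 0)
    # Step S306: Canny edge detection; 80 is the lower threshold from the text,
    # 160 is an assumed upper hysteresis threshold (OpenCV takes no sigma).
    edges = cv2.Canny(blurred, 80, 160)
    # Step S308: dilate the edges to close small gaps before contour detection.
    edges = cv2.dilate(edges, np.ones((5, 5), np.uint8), iterations=1)
    # Step S310: contour detection; each sufficiently large contour yields one ROI.
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    rois = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w * h > 500:  # assumed minimum area to filter residual noise
            rois.append(heatmap[y:y + h, x:x + w])
    return rois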

[0030] The feature selection and extraction module 214 is used to extract a first and a second feature from the ROI. The first feature is associated with a first area of the object and the second feature with a second area of the object. In some embodiments, the first feature corresponds to an end or edge of the object, in the form of a bottle as shown in FIG. 4A. The edge or end may correspond to a bottom area 402 for a PET plastic bottle or a bottom area 404 for an HDPE plastic bottle, wherein the difference in moulding structure arising from different moulding methods can be used as a basis for differentiating between the PET plastic bottle and the HDPE plastic bottle. In particular, the uneven thickness of the bottom of PET bottles (due to injection blow moulding) can cause variation in THz radiation absorption, resulting in a dome-shaped dark region 406 at the base of the PET bottles in the THz heatmap 204, as depicted in FIG. 4A. In contrast, HDPE bottles are produced by a process known as extrusion blow moulding, which results in a relatively uniform thickness 408 at the base of the bottle compared to PET bottles. FIG. 4A also shows two examples of bases 402a, 402b of PET bottles, and two examples of bases 404a, 404b of HDPE bottles.

[0031] The second feature associated with the second area of the object may be a central region of the ROI of the object. The central region (also referred to as the center patch) may correspond to an area of the ROI containing image data that is relatively less affected by noise, edges and shape. The center patch may be dynamically defined for each ROI 320 to have a width of 1/3 of the total ROI width and a height of 1/5 of the total ROI height. In summary, the feature selection and extraction module 214 selects a first feature that contains a key differentiator to distinguish between two or more classes of object, and a second feature that is able to display a relatively unbiased power intensity heatmap of the object even if the object is contaminated by foreign material (e.g. oil, labels, grease, etc.) and/or distorted from its true form (e.g. torn, compressed, etc.). In other words, when a power intensity heatmap image is generated, the second feature is able to show a THz signal intensity heatmap associated with the class of the object regardless of the shapes, edges, or noise (e.g. labels, partial contamination) of the object. FIG. 4B illustrates the process of identifying central patches 410 from different types of objects 201, and extracting such central patches 410 from the obtained ROIs 320 of the heatmap images 204 for further analysis.
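A minimal NumPy sketch of this dynamic center-patch definition follows. The 1/3-width and 1/5-height proportions come from the description above; anchoring the patch at the center of the ROI is an assumption.

import numpy as np

def extract_center_patch(roi: np.ndarray) -> np.ndarray:
    """Crop the center patch: 1/3 of the ROI width and 1/5 of its height."""
    h, w = roi.shape[:2]
    ph, pw = h // 5, w // 3                 # patch size per the description
    y0, x0 = (h - ph) // 2, (w - pw) // 2   # assumed: patch centered in the ROI
    return roi[y0:y0 + ph, x0:x0 + pw]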

[0032] FIG. 4C shows a process of extracting the first feature for analysis. The first feature may be extracted using a convolutional neural network-based segmentation model, such as the U-net segmentation model 430 [3]. The U-net segmentation model 430 is used to extract features at the relevant end/edge portions of the object(s) defined within the ROI, e.g. the bottom areas 402, 404 of PET and HDPE plastic bottles. Once the relevant first feature(s) are extracted from the ROI, a first mask 432, typically white in colour, will be applied or overlaid onto the extracted first feature(s). A background mask 434, typically black in colour (zero pixels), will be applied or overlaid on the rest of the ROI 320. The first mask 432 may eventually be removed or filtered to expose the extracted first feature(s) to form an extracted first feature 440.

[0033] As shown in FIG. 4D, the second feature may be extracted/detected using an image processing method to dynamically define the center patch 410 as described. The remaining part of the ROI is then overlaid with a background mask 434 (zero pixels) to form an extracted second feature 460.

[0034] After the first feature and the second feature are extracted, the extracted first feature 440 and extracted second feature 460 may be overlaid on each other as shown in FIG. 4E to form an integrated input 480 for classification. Some examples of input images in the form of HDPE plastic or PET plastic are shown in FIG. 4F.
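A minimal NumPy sketch of this masking-and-overlay step is shown below. It assumes the segmentation output has been thresholded to a binary mask of the bottle base and that the center patch has been located as in paragraph [0031]; pixels outside both features are zeroed, corresponding to the background mask 434.

import numpy as np

def build_integrated_input(roi, base_mask, patch_box):
    """Overlay the segmented bottle base and the center patch on a zero background.

    roi       : ROI image, shape (H, W) or (H, W, C)
    base_mask : binary segmentation output, shape (H, W), 1 inside the bottle base
    patch_box : (y0, x0, ph, pw) of the center patch within the ROI
    """
    # Extracted first feature 440: keep only the segmented base pixels.
    first = roi * (base_mask[..., None] if roi.ndim == 3 else base_mask)
    # Extracted second feature 460: keep only the center patch pixels.
    second = np.zeros_like(roi)
    y0, x0, ph, pw = patch_box
    second[y0:y0 + ph, x0:x0 + pw] = roi[y0:y0 + ph, x0:x0 + pw]
    # Integrated input 480: merge; everything else stays zero (background mask 434).
    return np.maximum(first, second)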

[0035] The classification module 216 is operable to receive the integrated input 480 for classification between PET and HDPE plastic bottles. The classification module 216 may include a deep convolutional neural network (DCNN) algorithm/model comprising four convolutional layers, four max pooling layers and one fully connected layer. Of the four convolutional layers, the first two may utilize 32 filters, while the other two may utilize 64 filters. The DCNN may operate in tandem with an optimizer and/or a regularization algorithm to reduce overfitting or overtraining. In some embodiments, the optimizer comprises an error minimization algorithm to minimize any errors in classification; an example of such an algorithm is stochastic gradient descent [5]. In some embodiments, the resulting data following the convolutions and max pooling are flattened and sent to the fully connected layers, which comprise a regularization algorithm (which may have a regularization parameter of 0.04) used to reduce overfitting, also known as dropout. In some embodiments, the first dense layer comprises 128 neurons with a 3% dropout of neurons, which allows the model to generalize the features better. The second dense layer has 2 neurons and is used to predict the output (HDPE or PET).
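As a concrete illustration, a minimal Keras sketch of such a classifier follows. The layer counts, filter widths (32, 32, 64, 64), the 128-neuron dense layer, the 3% dropout and the 2-neuron output come from the description above; the 3x3 kernels, the ReLU activations, and reading the 0.04 regularization parameter as an L2 penalty on the dense layer are assumptions, as the disclosure does not specify them.

import tensorflow as tf
from tensorflow.keras import layers, regularizers

def build_dcnn(input_shape=(300, 300, 3)) -> tf.keras.Model:
    """Sketch of the 4-conv / 4-pool / 1-FC classifier described in [0035]."""
    return tf.keras.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"),   # first two conv layers: 32 filters
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),   # last two conv layers: 64 filters
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        # Dense layer: 128 neurons; 0.04 read as an L2 penalty (assumption).
        layers.Dense(128, activation="relu",
                     kernel_regularizer=regularizers.l2(0.04)),
        layers.Dropout(0.03),                      # 3% dropout of neurons
        layers.Dense(2, activation="softmax"),     # output: HDPE vs. PET
    ])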

[0036] In some embodiments, the integrated input 480 may undergo pre-processing before classification. The pre-processing may include resizing each integrated input image to 300x300 pixels.

[0037] In some embodiments, the U-net segmentation model 430 in FIG. 4C may be trained by pairing the generated heatmap images 204 with their corresponding applied masks 432, 434. The background mask 434 and the white mask 432 may be applied using an image annotation tool, a non-limiting example of which is the PixelAnnotationTool [4] software, which is operable to mask the bottom area and segment it from the background. In some embodiments, 160 manually annotated masks and images were prepared for training, and the number may be increased to approximately 500 for each PET and HDPE class after applying augmentation (new images for training generated via flipping or rotating), as sketched below. Each paired heatmap image 204 and masked image is utilized in training and building a supervised learning-based segmentation model.
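The flip/rotate augmentation mentioned above might be sketched as follows; restricting rotations to multiples of 90 degrees is an assumption, chosen to avoid resampling. For the segmentation pairs, the same transform must be applied to both the heatmap image and its mask.

import numpy as np

def augment(image: np.ndarray) -> list:
    """Generate flipped and rotated variants of one annotated heatmap image."""
    variants = [np.fliplr(image), np.flipud(image)]       # horizontal/vertical flips
    variants += [np.rot90(image, k) for k in (1, 2, 3)]   # 90/180/270 degree rotations
    return variants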

[0038] In some embodiments, a binary cross-entropy loss function was utilized together with a method for stochastic optimization known as the Adam optimizer [6] for training. The training process may include iterative forward propagation, backward propagation and network parameter updating until a minimum error value is achieved in relation to segmenting the bottom area of HDPE and PET plastic bottles from the background.
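In Keras terms, this training configuration (binary cross-entropy loss with the Adam optimizer [6]) might be sketched as follows. The one-layer build_unet stand-in and the random arrays are placeholders so the snippet runs end to end; a real implementation would use a full U-net per Ronneberger et al. [3] with the annotated heatmap/mask pairs, and the epoch count is an assumption.

import numpy as np
import tensorflow as tf

def build_unet(shape=(128, 128, 1)) -> tf.keras.Model:
    """Trivial stand-in for a U-net segmentation model [3]."""
    inp = tf.keras.Input(shape=shape)
    out = tf.keras.layers.Conv2D(1, 3, padding="same", activation="sigmoid")(inp)
    return tf.keras.Model(inp, out)

unet = build_unet()
unet.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Dummy stand-ins for the ~500 augmented heatmap/mask pairs per class.
images = np.random.rand(8, 128, 128, 1).astype("float32")
masks = (np.random.rand(8, 128, 128, 1) > 0.5).astype("float32")
# Iterative forward/backward propagation with parameter updates; in practice
# training continues until the segmentation error reaches a minimum.
unet.fit(images, masks, epochs=1)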

[0039] In some embodiments, the optimizer operating in tandem with the DCNN network for classification may use a learning rate of 0.01. The loss measured at every step may be a sparse categorical cross-entropy. The model may be trained for a predetermined number of iterations, such as 100 epochs, and statistical values associated with the training accuracy, training loss, validation accuracy, and validation loss may be measured at each iteration to keep track of the learning.
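Continuing the classifier sketch from paragraph [0035], this training configuration might look as follows; build_dcnn refers to the earlier sketch, and the random arrays stand in for the resized integrated inputs 480 and their HDPE/PET labels.

import numpy as np
import tensorflow as tf

model = build_dcnn()  # classifier sketch from [0035] above
model.compile(
    optimizer=tf.keras.optimizers.SGD(learning_rate=0.01),  # learning rate 0.01
    loss="sparse_categorical_crossentropy",                 # measured at every step
    metrics=["accuracy"],
)
x = np.random.rand(16, 300, 300, 3).astype("float32")  # placeholder inputs
y = np.random.randint(0, 2, size=16)                   # placeholder labels (0/1)
# 100 epochs; Keras records training/validation accuracy and loss per epoch.
history = model.fit(x, y, validation_split=0.2, epochs=100)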

[0040] In some embodiments where the integrated input 480 undergoes resizing, e.g. to a 300x300 pixels image, a training and a testing dataset are generated by applying augmentation (new images for training/testing generated via manipulation of existing images through image processing techniques such as flipping or rotating). The training set may comprise 80% (267 images) of the data and the testing set the remaining 20% (65 images).
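Such a split could be produced with scikit-learn's train_test_split; the random arrays below stand in for the 332 augmented integrated-input images (267 + 65), and the stratification and random seed are assumptions.

import numpy as np
from sklearn.model_selection import train_test_split

# Placeholders for the 332 augmented 300x300 integrated inputs and their labels.
images = np.random.rand(332, 300, 300, 3).astype("float32")
labels = np.random.randint(0, 2, size=332)
# 65 test images and 267 training images, matching the 80/20 split above.
x_train, x_test, y_train, y_test = train_test_split(
    images, labels, test_size=65, random_state=42, stratify=labels)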

[0041] The efficacy of the DCNN model in the classification of objects, based on a mix of HDPE and PET objects, is shown in FIG. 5. In FIG. 5, the darker shaded lines correspond to the training accuracy and loss, whereas the lighter lines correspond to the validation accuracy and loss. As shown in the graph on the right, the training loss and the validation loss curves overlap each other. Since the training and validation loss decrease at the same rate, the model is not overfitting and is therefore able to generalise well. The validation accuracy at the end of 100 epochs was 93%, as can be seen from the validation accuracy curve.

[0042] It is contemplated that the machine learning and/or artificial intelligence-based algorithms deployed in the various modules may undergo training and testing before being operationally deployed. Such training may include the formation of testing and training datasets, and supervised learning, semi-supervised learning, unsupervised learning, or reinforcement learning.

[0043] It is contemplated that other computer vision/image processing techniques known to a skilled person may be combined to form further embodiments that supplement and/or replace the ML/AI algorithms.

[0044] In some embodiments, the processor 210 may include hardware components such as server computer(s) arranged in a distributed or non-distributed configuration to implement the modules 212, 214, 216. The hardware components may be supplemented by a database management system configured to compile one or more industry-specific characteristic databases. In some embodiments, the industry-specific characteristic databases may include analysis modules to correlate one or more datasets with an industry. Such analysis modules may include an expert rule database, a fuzzy logic system, or any other artificial intelligence module.

[0045] FIG. 6 shows a server computer system 600 according to an embodiment. The server computer system 600 includes a communication interface 602 (e.g. configured to receive the captured heatmap images 204). The server computer 600 further includes a processing unit 604 and a memory 606. The memory 606 may be used by the processing unit 604 to store, for example, data to be processed, such as data associated with the captured images, intermediate results output from the modules 212, 214, and/or final results output from the module 216. The server computer is configured to perform the method of FIG. 1. It should be noted that the server computer system 600 can be a distributed system including a plurality of computers.

[0046] In some embodiments, a computer-readable medium is provided including program instructions, which, when executed by one or more processors, cause the one or more processors to perform the methods according to the embodiments described above. The computer-readable medium may include a non-transitory computer-readable medium.

[0047] In some embodiments, the ML/AI algorithms may include algorithms such as neural networks, fuzzy logic, evolutionary algorithms etc.

[0048] Although the embodiments refer to the classification of HDPE and PET bottles, it is appreciable that the disclosure may be utilised for the classification of other objects, such as, but not limited to, plastic objects of other types, including Polypropylene (PP), Polystyrene (PS), Low-density polyethylene (LDPE), Polyvinyl chloride (PVC), etc., utilizing the same principle of extracting at least two features: a first feature based on a key differentiator to distinguish between two or more classes of object, and a second feature that shows a THz signal intensity associated with the class of the object regardless of the shapes, edges, or noise (e.g. labels, partial contamination) of the objects.

[0049] While the disclosure has been particularly shown and described with reference to specific embodiments, it should be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the disclosure as defined by the appended claims. The scope of the disclosure is thus indicated by the appended claims and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced.

References

[1] “Gaussian blur,” Gaussian Blur - an overview | ScienceDirect Topics. [Online]. Available: https://www.sciencedirect.com/topics/engineering/gaussian-blur.

[2] “A computational approach to edge detection,” IEEE Xplore. [Online]. Available: https://ieeexplore.ieee.org/document/4767851.

[3] O. Ronneberger, P. Fischer, and T. Brox, “U-Net: Convolutional networks for biomedical image segmentation,” arXiv.org, 18- May-2015. [Online]. Available: https://arxiv.org/abs/1505.04597.

[4] Abreheret, “Abreheret/Pixelannotationtool: Annotate quickly images,” GitHub. [Online]. Available: https://github.com/abreheret/PixelAnnotationTool.

[5] “1.5. Stochastic Gradient Descent,” scikit-learn documentation. [Online]. Available: https://scikit-learn.org/stable/modules/sgd.html.

[6] D. P. Kingma and J. Ba, “Adam: A Method for Stochastic Optimization,” arXiv.org, 2014. [Online]. Available: https://arxiv.org/abs/1412.6980.