
Title:
COMPUTER-ASSISTED SYSTEM AND METHOD FOR ILLUMINATING IDENTIFIED OBJECTS
Document Type and Number:
WIPO Patent Application WO/2024/051935
Kind Code:
A1
Abstract:
A computer-assisted system for illuminating identified objects on a moving platform (106) comprising an input module configured to receive an identified object profile associated with an object on the moving platform; an illumination control module (204) arranged in signal or data communication with the input module to receive the identified object profile, the illumination control module configured to (i.) calculate a switch on time and a switch off time of at least one illumination array comprising a plurality of illumination sources; (ii.) determine on the at least one illumination array, at least one illumination source associated with the calculated switch on time and switch off time; (iii.) output the switch on time, the switch off time, and an identifier of the at least one illumination source; and (iv.) activate the at least one illumination source associated with the identifier, the switch on time, and the switch off time; wherein the identified object profile comprises an image of the object and a set of location data associated with a relative location of the identified object on the moving platform.

Inventors:
BUI KHAC PHUONG UYEN (VN)
TRAN TA TUAN (VN)
JEON JIN HAN (SG)
ANDALAM SIDHARTA (SG)
NGO CHI TRUNG (SG)
YAN WAI (SG)
LE TUAN KIET (VN)
NGUYEN VIET LAM (VN)
Application Number:
PCT/EP2022/074828
Publication Date:
March 14, 2024
Filing Date:
September 07, 2022
Assignee:
BOSCH GMBH ROBERT (DE)
International Classes:
H05B47/125; B07C7/00; H05B47/155; H05B47/16; H05B47/175
Foreign References:
US20190193118A12019-06-27
US20180243800A12018-08-30
US20220042814A12022-02-10
US20160078678A12016-03-17
US20130249943A12013-09-26
Claims:
CLAIMS

1. A computer-assisted system (100) for illuminating an identified object (102) on a moving platform (106) comprising an input module configured to receive an identified object profile (110) associated with the identified object (102) on the moving platform (106); an illumination control module (204) arranged in signal or data communication with the input module to receive the identified object profile (110), the illumination control module configured to

(i.) calculate a switch on time and a switch off time of at least one illumination array (112) comprising a plurality of illumination sources (114);

(ii.) determine on the at least one illumination array (112), at least one illumination source (114) associated with the calculated switch on time and switch off time;

(iii.) output the switch on time, the switch off time, and an identifier of the at least one illumination source (114); and

(iv.) activate the at least one illumination source (114) associated with the identifier, the switch on time, and the switch off time; wherein the identified object profile (110) comprises an image of the object and a set of location data associated with a relative location of the identified object (102) on the moving platform (106).

2. The system of claim 1, wherein the identified object profile (110) comprises a bounding box of the identified object profile (110), a pair of x-y coordinates of a center of the bounding box with respect to the moving platform, and a depth data measured from an image sensor (104) to the object.

3. The system of claims 1 or 2, wherein the identified object profile (110) is generated by an artificial intelligence module (202).

4. The system of any one of the preceding claims, wherein the at least one illumination array (112) comprises a plurality of laser sources, a velocity or speed sensor (208), and an angle sensor (210).

5. The system of any one of the preceding claims, wherein the illumination control module (204) comprises a controller (312) and a scheduler (314), wherein the controller (312) is configured to send the switch on time, the switch off time, and the identifier of the at least one illumination source (114) to the scheduler (314).

6. The system of claim 2, wherein the switch on time and the switch off time are calculated based on: a distance measure between the center of the bounding box and a reference illumination line, a speed of the moving platform (106), a time stamp when the input module received the identified object profile (110), and a height coefficient (ch) of the bounding box.

7. The system of claim 6, wherein the at least one illumination source (114) is determined based on: a distance (d_laser) between the center of the at least one illumination source and an adjacent illumination source, an x-coordinate of the center of the bounding box, and a width coefficient (cw) of the bounding box.

8. The system of claim 3, further comprising a gesture recognition module (320) configured to receive the identified object profile (110) and determine if an associated object (102a, 102b) is retrieved by a user.

9. The system of claim 8, further comprising a notification module (316), wherein if the associated object (102a, 102b) is not retrieved by the user or if a wrong object is retrieved by the user, the notification module (316) is configured to send a notification to the user.

10. Recycling system with a moving platform (106) and a computer-assisted system (100) for illuminating an identified object (102) on the moving platform (106) according to any one of the preceding claims.

11. A computer-assisted method (800) for illuminating identified objects on a moving platform, comprising the steps of

(a.) receiving an identified object profile (110) associated with an object on the moving platform (802);

(b.) calculating a switch on time and a switch off time of at least one illumination array comprising a plurality of illumination sources (804);

(c.) determining on the at least one illumination array, at least one illumination source associated with the calculated switch on time and switch off time (806);

(d.) outputting the switch on time, the switch off time, and an identifier of the at least one illumination source (808); and

(e.) activating the at least one illumination source associated with the identifier, the switch on time, and the switch off time (810); wherein the identified object profile (110) comprises an image of the object and a set of location data associated with a relative location of the identified object on the moving platform.

12. The method of claim 11, wherein the identified object profile (110) comprises a bounding box of the identified object profile (110), a pair of x-y coordinates of a center of the bounding box with respect to the moving platform, and a depth data measured from an image sensor (104) to the object.

13. The method of claims 11 or 12, wherein the identified object profile (110) is generated by an artificial intelligence module (202).

14. The method of any one of claims 11 to 13, wherein the at least one illumination array (112) comprises a plurality of laser sources, a velocity or speed sensor (208), and an angle sensor (210).

15. The method of claim 12, wherein the switch on time and the switch off time are calculated based on: a distance measure between the center of the bounding box and a reference illumination line, a speed of the moving platform (106), a time stamp when the input module received the identified object profile (110), and a height coefficient (ch) of the bounding box.

16. The method of claim 15, wherein the at least one illumination source (114) is determined based on: a distance (d_laser) between the center of the at least one illumination source and an adjacent illumination source, an x-coordinate of the center of the bounding box, and a width coefficient (cw) of the bounding box.

17. A computer program comprising instructions which, when executed by a computer, cause the computer to carry out the steps of

(a.) receiving an identified object profile (110) associated with an object on a moving platform (106);

(b.) calculating a switch on time and a switch off time of at least one illumination array comprising a plurality of illumination sources (114);

(c.) determining on the at least one illumination array (112), at least one illumination source (114) associated with the calculated switch on time and switch off time;

(d.) outputting the switch on time, the switch off time, and an identifier of the at least one illumination source (114); and

(e.) activating the at least one illumination source (114) associated with the identifier, the switch on time, and the switch off time; wherein the identified object profile (110) comprises an image of the object and a set of location data associated with a relative location of the identified object (102) on the moving platform (106).

Description:
COMPUTER-ASSISTED SYSTEM AND METHOD FOR ILLUMINATING IDENTIFIED OBJECTS

TECHNICAL FIELD

[0001] This disclosure relates to a computer-assisted method and a computer-assisted system for illuminating identified objects.

BACKGROUND

[0002] Waste management is increasingly important as global waste continues to increase at an exponential rate. Waste sorting of recyclable or reusable used objects, e.g. plastic objects, plays a crucial role in waste management.

[0003] One solution for sorting used objects is based on manual labor in the form of human workers or sorters. The workers are trained to recognize different plastic types. At a sorting area that may include a conveyor belt, each sorter along the conveyor belt is assigned to pick one particular type of plastic waste and place it into a respective container or receptacle for sorting.

[0004] However, training human workers may incur training costs for sorting facilities.

[0005] Another solution for sorting used objects is fully automated sorting. In this process, the sorting and picking activities are performed automatically with the assistance of different plastic identification technologies like computer vision or Near Infrared (NIR) and/or robotics.

[0006] Although automatic sorting with plastic sensing or identification technologies and robotics can significantly improve the sorting speed, efficiency and accuracy compared to manual sorters, it may be relatively more expensive for mid- or small-size sorting facilities to afford.

[0007] Accordingly, there exists a need to provide a relatively low-cost computer-assisted object identification solution.

SUMMARY

[0008] This disclosure was conceptualized to aid a manual sorting process and is particularly suited for assisting human sorters to pick up identified objects for sorting.

[0009] A technical solution is provided in the form of a computer-assisted system and method for illuminating identified objects assigned to one or more manual sorters. In some embodiments, the illumination module may be a retrofit solution adaptable for existing sorting facilities. The illumination of identified objects assists manual sorters to pick up the correct objects for sorting, thereby improving productivity and reducing training time and cost.

[0010] According to the present disclosure, a computer-assisted system as claimed in claim 1 is provided. A computer-assisted method according to the invention is defined in claim 11. A computer program comprising instructions to execute the computer-assisted method is defined in claim 17.

[0011] The dependent claims define some examples associated with the system and method, respectively.

BRIEF DESCRIPTION OF THE DRAWINGS

[0012] The invention will be better understood with reference to the detailed description when considered in conjunction with the non-limiting examples and the accompanying drawings, in which:

- FIG. 1 shows a schematic diagram of a system for illuminating identified objects as part of a sorting system, according to some embodiments;

- FIG. 2A shows a schematic illustration of an AI-based object classification module and an illumination control module for receiving the output from the AI-based object classification module according to some embodiments;

- FIG. 2B shows a schematic diagram of an illumination array according to some embodiments;

- FIG. 3 is a schematic block diagram of another embodiment of a system for identification and illumination of identified objects including a controller and a scheduler forming the illumination control module, and a gesture recognition module;

- FIG. 4 is a schematic block diagram illustrating a possible implementation of the system with the controller implemented as a MQTT broker acting as a server, and the scheduler acting as a subscriber;

- FIG. 5 is a flow diagram illustrating the flow of information between the various components shown in FIG. 4;

- FIG. 6A illustrates a calculation of a switch on time and switch off time associated with the control of the illumination array;

- FIG. 6B illustrates a selection of one or more illumination sources to be activated for illuminating an object;

- FIG. 7 shows a flow chart associated with an embodiment for controlling one or more illumination sources on an illumination array;

- FIG. 8 shows a generic flow chart of a computer-assisted method for illuminating identified objects on a moving platform; and

- FIG. 9 shows a schematic illustration of a processor for illuminating identified objects according to some embodiments.

DETAILED DESCRIPTION

[0013] The following detailed description refers to the accompanying drawings that show, by way of illustration, specific details and embodiments in which the disclosure may be practiced. These embodiments are described in sufficient detail to enable those skilled in the art to practice the disclosure. Other embodiments may be utilized and structural, and logical changes may be made without departing from the scope of the disclosure. The various embodiments are not necessarily mutually exclusive, as some embodiments can be combined with one or more other embodiments to form new embodiments.

[0014] Embodiments described in the context of one of the systems or methods are analogously valid for the other systems or methods.

[0015] Features that are described in the context of an embodiment may correspondingly be applicable to the same or similar features in the other embodiments. Features that are described in the context of an embodiment may correspondingly be applicable to the other embodiments, even if not explicitly described in these other embodiments. Furthermore, additions and/or combinations and/or alternatives as described for a feature in the context of an embodiment may correspondingly be applicable to the same or similar feature in the other embodiments.

[0016] In the context of various embodiments, the articles “a”, “an” and “the” as used with regard to a feature or element include a reference to one or more of the features or elements.

[0017] As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items.

[0018] As used herein, the term “image sensor(s)” broadly refers to any device that facilitates sensing or detection of one or more objects. The term can refer to hardware sensors, software-based sensors, and/or combinations of hardware and software-based sensors. The term can also refer to active or passive sensors. An image sensor may operate continuously upon activation until deactivated, or may operate for a predetermined period of time. Examples of image sensors may include a camera or video recorder, and devices emitting electromagnetic radiation (e.g. X-rays, electromagnetic radiation in the Terahertz (THz) range) for detection by a linear imaging scanner to generate images of the one or more objects.

[0019] As used herein, the term “module” refers to, forms part of, or includes an Application Specific Integrated Circuit (ASIC); an electronic circuit; a combinational logic circuit; a field programmable gate array (FPGA); a processor (shared, dedicated, or group) that executes code; other suitable hardware components that provide the described functionality; or a combination of some or all of the above, such as in a system-on-chip. The term module may include memory (shared, dedicated, or group) that stores code executed by the processor.

[0020] As used herein, the term “artificial intelligence module” broadly includes any machine learning or deep learning module, which may be trained using supervised learning, unsupervised learning, semi-supervised learning, reinforcement learning, and/or deep learning methods. In some embodiments, the ML/AI algorithms may include algorithms such as neural networks, fuzzy logic, evolutionary algorithms, and combinations of the aforementioned algorithms, etc.

[0021] As used herein, the term “object” includes any object, particularly recyclable or reusable object that may be identified according to type, class or categories. For example, plastic objects may be identified according to whether they are High Density Poly Ethylene (HDPE), Polyethylene terephthalate (PET), Polypropylene (PP), Polystyrene (PS), Low-density polyethylene (LDPE), Polyvinyl chloride (PVC) plastic objects. Such objects may include bottles, jars, containers, plates, bowls etc. of various shapes, forms (distorted, flattened) and sizes.

[0022] As used herein, the term “data” may be understood to include information in any suitable analog or digital form, for example, provided as a file, a portion of a file, a set of files, a signal or stream, a portion of a signal or stream, a set of signals or streams, and the like. The term data, however, is not limited to the aforementioned examples and may take various forms and represent any information as understood in the art.

[0023] According to an aspect of the disclosure there is a computer-assisted system for illuminating identified objects on a (moveable) moving platform comprising an input module configured to receive an identified object profile associated with an object on the moving platform; an illumination control module arranged in signal or data communication with the input module to receive the identified object profile, the illumination control module configured to calculate a switch on time and a switch off time of at least one illumination array comprising a plurality of illumination sources; determine on the at least one illumination array, at least one illumination source associated with the calculated switch on time and switch off time; output the switch on time, the switch off time, and an identifier of the at least one illumination source; and activate the at least one illumination source associated with the identifier, the switch on time, and the switch off time; wherein the identified object profile comprises an image of the object and a set of location data associated with a relative location of the identified object on the moving platform. The image of the object may be obtained or received from an image sensor, as will be elaborated.

[0024] An embodiment of the disclosure is shown in FIG. 1, which illustrates the illumination control module used in conjunction with a system 100 for sorting objects, such as plastic objects, for a variety of purposes. The system 100 comprises one or more image sensors 104 positioned or arranged to detect objects 102 on a moving platform 106; and a processor 108 arranged in signal or data communication with the image sensor 104 to receive object data 110. The image sensor 104 may be used to detect the presence of objects 102 and may also be used to track the movement of the objects 102 along the moving platform 106 moving in a direction as indicated by the label ‘A’. The tracked object data may then be sent to the processor 108, which may include various image processing modules, such as one or more gesture recognition modules as will be subsequently elaborated with reference to FIG. 3. The moving platform 106 may form part of a conveyor belt system, or may be the conveyor belt system.

[0025] The processor 108 is configured to receive object data 110 via one or more input modules. From another perspective, the image sensor 104 sends object data 110 to the processor 108. The processor 108 may include an artificial intelligence (AI) module configured to classify, based on the object data 110, the object into one or more predetermined object classes/types, for example HDPE or PET plastic types. The output of the (AI) module comprises (i.) border parameters (for example length and width) around the image of each object. The border parameters may be or form part of a bounding box, which is an imaginary perimeter, typically a rectangle serving as reference for object detection, (ii.) the respective coordinates of the object on the moving platform 106, and (iii.) the depth between the image sensor 104 and the object. The respective coordinates of the object on the platform may be determined based on a pre-calibrated x-axis and y-axis on the moving platform 106.

[0026] The processor 108 is operable to determine, based on the classification of the object, whether the object is one intended or associated object 102a to be retrieved from the moving platform 106, for example retrieved by a person 120, who may be personnel assigned S1 to retrieve the object 102 (e.g. object 102a) based on a specific object type (for example PET plastic), or another intended or associated object 102b to be retrieved from the moving platform 106 by personnel assigned S2 to retrieve the object 102b based on the specific object type HDPE plastic. The retrieval may be performed for the purpose of further tasks such as sorting. If the object 102 is intended to be retrieved from the moving platform 106, the processor 108 operates to determine an object location of the object 102 on the moving platform 106. To facilitate the determination of the object location, a coordinate system, such as a cartesian coordinate system, may be associated with the moving platform 106.

[0027] Upon determining the object location of the identified object 102a on the moving platform 106, the processor 108 may include an illumination control module and operate to activate an illumination array 112 to illuminate the object 102 on the moving platform 106. The illumination array 112 may comprise a plurality of light sources 114, each light source 114 positioned or arranged to illuminate a part of the moving platform 106 corresponding to the object location or position on the moving platform 106. In some embodiments, the light source 114 may comprise one or more laser light sources emitting visible light of a particular wavelength. In some embodiments, the illumination array 112 may comprise different laser light sources 114 emitting at least two different wavelengths, each wavelength intended to illuminate one object type. The illumination control module may generate an illumination signal and transmit the illumination signal to activate at least one light emitter 114 to illuminate the part of the moving platform 106 that corresponds to the object location of the intended object 102a intended to be retrieved.

[0028] FIG. 2A shows an embodiment of the processor 108 comprising an artificial intelligence (AI) module 202 and an illumination control module 204. The AI module 202 may function as an AI-based plastic classification module 202 configured to output identified object profile data, including the bounding box profile of a detected or identified object, to the illumination control module 204. In the embodiment shown in FIG. 2A, the following data are sent to the illumination control module 204: the x-coordinate (px) of the center of the bounding box on the moving platform, the y-coordinate (py) of the center of the bounding box on the moving platform, the depth from the camera to the object (pz), and the width (w) and height (h) of the bounding box. A database 240 may be arranged in data or signal communication with the artificial intelligence (AI) module 202 to store image data associated with objects on the moving platform 106 captured by the one or more image sensors 104.
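By way of a non-limiting illustration, the identified object profile exchanged between the AI module 202 and the illumination control module 204 may be modelled as in the following Python sketch. The class name, the field names and the inclusion of a timestamp field are assumptions made for illustration only; the disclosure specifies the content (bounding-box centre coordinates, depth, bounding-box size and material type) but not a particular data format.

```python
from dataclasses import dataclass


@dataclass
class IdentifiedObjectProfile:
    """One detection produced by the AI-based classification module (202).

    Field names are illustrative assumptions; the disclosure only lists the
    content: bounding-box centre, depth, bounding-box size and material type.
    """
    px: float        # x-coordinate of the bounding-box centre on the moving platform
    py: float        # y-coordinate of the bounding-box centre on the moving platform
    pz: float        # depth from the image sensor (104) to the object
    w: float         # bounding-box width (across the platform)
    h: float         # bounding-box height (along the direction of movement)
    material: str    # classified object type, e.g. "PET" or "HDPE"
    t_cam: float     # timestamp at which the profile was produced (assumed field)
```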

[0029] FIG. 2B shows an embodiment of the illumination array 112 arranged to receive control signal(s) from the illumination control module 204. The illumination array 112 may be in the form of a reference frame to which various components may be mounted. The reference frame 112 may comprise a plurality of laser sources 206, a speed sensor 208 and an angle sensor 210. The speed sensor 208 is used to measure the speed of the conveyor belt. The angle sensor 210 is used to measure the angle of projection α of each laser light source 206 pivotably mounted to the reference frame 112, which may in turn be mounted vertically (i.e. at about 90 degrees) with respect to the movable surface of the conveyor belt 106. Various parameters, such as a height parameter h_laser between the laser sources 206 and the platform 106, a width parameter w_belt of the movable platform, a distance parameter d_laser between each laser source 206 and an adjacent laser source, and a total number of laser sources 206, may be inter-dependent and configurable. For example, the total number of laser sources 206 may be derived by dividing the width w_belt by the distance d_laser. It is appreciable that the angle sensor 210 may not be necessary, but it may be used in case the laser sources 206 are configured to pivot from the reference frame 112 for tracking the moving objects on the conveyor belt.
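As a rough illustration of the inter-dependence of the frame parameters described above, the sketch below derives the number of laser sources from the belt width and the laser spacing. The rounding choice (rounding up so that the full belt width is covered) is an assumption; the disclosure only states that the count is derived by dividing w_belt by d_laser.

```python
import math


def number_of_lasers(w_belt: float, d_laser: float) -> int:
    """Approximate laser count needed to span the belt width.

    w_belt  : width of the moving platform (same length unit as d_laser)
    d_laser : centre-to-centre distance between adjacent laser sources
    Rounding up is an assumed choice so the whole belt width is covered.
    """
    return math.ceil(w_belt / d_laser)


# Example: a 1.2 m wide belt with lasers spaced every 0.05 m
print(number_of_lasers(1.2, 0.05))  # -> 24
```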

[0030] FIG. 3 illustrates a schematic block diagram of another embodiment of a system 300 for identification and illumination of identified objects, wherein the illumination control module 204 may be modelled as a controller 312 and a scheduler 314. In addition to sending the identified object profile to the illumination control module 204, the AI module 202 may also be configured to send the identified object profile to a gesture recognition module 320 and a notification module 316. In this embodiment, the controller 312 is configured to process the received object profile, calculate the switch on time and switch off time of one or more illumination sources (e.g. lasers), and determine the respective lasers to be selected based on identifiers. These are sent to the scheduler 314 as a new scheduling task to illuminate an identified object. A sorter can easily pick up the identified object illuminated by the lights.

[0031] The gesture monitoring module 320 operates to receive the identified object profile and may further receive data from one or more image sensors, such as cameras, to detect movement(s) of the person(s) 120. Based on the received sensor inputs, if a part (e.g. hand) of the person 120 associated with retrieving the object 102a is determined to be in a grasped state and an object identifier (ID) is absent, the gesture monitoring module 320 may conclude that the intended object 102a has been retrieved by the person 120. If it is determined that the object ID of object 102a is still present after a predetermined time or across multiple captured image frames (the number of image frames may be defined by a user), OR if it is determined that a wrong object has been retrieved, the gesture monitoring module 320 is programmed or configured to send a notification via the notification module 316 to the sorter/person 120 via a device (e.g. a smart phone, smart watch, or a buzzer) allocated to the person 120. It is contemplated that the notification may be in the form of an alert, such as a sound alert or a vibration alert, or may be in the form of a text message, an email message, and/or combinations of the aforementioned.
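The retrieval-check logic of the gesture monitoring module 320 may, purely by way of illustration, be summarised as in the following sketch. The function name, parameter names, the frame-count threshold and the `notify` callable standing in for the notification module 316 are all assumptions, not a prescribed implementation.

```python
def check_retrieval(hand_grasped: bool,
                    object_id_present: bool,
                    frames_since_illumination: int,
                    wrong_object_retrieved: bool,
                    max_frames: int,
                    notify) -> None:
    """Illustrative decision logic for the gesture monitoring module (320).

    `notify` stands in for the notification module (316); `max_frames` is the
    user-defined number of captured frames after which a still-present object
    is treated as not retrieved. All names here are assumptions.
    """
    if hand_grasped and not object_id_present:
        # Hand in a grasped state and object ID absent: object was retrieved.
        notify("correct object picked")          # optional confirmation message
    elif frames_since_illumination > max_frames or wrong_object_retrieved:
        # Object still on the belt for too long, or the wrong item was taken.
        notify("please pick the illuminated object")
```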

[0032] In some embodiments, upon successful retrieval of the object 102a by the person 120, a different notification is sent to the person 120 confirming the correct object 102a has been picked.

[0033] FIG. 4 depicts an example of the data or signal communication between the AI module 202 and the illumination control module 204. A lightweight, publish-subscribe, machine-to-machine network protocol MQTT may be utilized to facilitate the data and signal communication between the AI module 202 and the illumination control module 204. The AI module 202 or part thereof may be generalized to a computer vision system (publisher) 402 arranged to communicate with the illumination control module 204, which is modelled as an MQTT Broker acting as a controller server 404 to receive the identified object profile from the computer vision system 402. The server 404 may be set up on the controller of the lighting control system. Through TCP/IP communication, the computer vision system 402 publishes object profiles to the server 404. Whenever the server 404 receives a newly published object profile, it sends the data to a scheduler 406 forming at least part of a laser control program. If more than one object profile is published within a rather short period of time, the server 404 may add the later object profiles to a data queue or data buffer (based on a first-in-first-out queue management process), awaiting processing by the laser control program.
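The publish-subscribe exchange described above may, for example, be realised as in the following Python sketch, here assuming the widely used paho-mqtt client. The broker address, topic name, quality-of-service setting and JSON payload encoding are assumptions for illustration; the disclosure only specifies MQTT over TCP/IP and a first-in-first-out buffer for bursts of object profiles.

```python
import json
import queue
import paho.mqtt.client as mqtt  # assumed client library; any MQTT client would work

BROKER_HOST = "192.168.0.10"        # assumed address of the controller server (404)
TOPIC = "sorting/object_profiles"   # assumed topic name

# Publisher side: the computer vision system (402) publishes one profile per object.
publisher = mqtt.Client()  # on paho-mqtt >= 2.0, pass mqtt.CallbackAPIVersion.VERSION1
publisher.connect(BROKER_HOST, 1883)
profile = {"px": 0.42, "py": 0.87, "pz": 1.10, "w": 0.08, "h": 0.15,
           "material": "PET", "t_cam": 1694072345.12}
publisher.publish(TOPIC, json.dumps(profile), qos=1)

# Subscriber side: the scheduler (406) buffers incoming profiles first-in-first-out.
pending = queue.Queue()


def on_message(client, userdata, msg):
    pending.put(json.loads(msg.payload))  # enqueue for the laser control program


subscriber = mqtt.Client()
subscriber.on_message = on_message
subscriber.connect(BROKER_HOST, 1883)
subscriber.subscribe(TOPIC)
subscriber.loop_start()  # process incoming messages on a background thread
```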

[0034] FIG. 5 shows an example of the software algorithm implementing the control logic of the controller 404 and the scheduler 406 depicted in FIG. 4 as a laser control program. The laser control program comprises three processes 510, 520, 530 running sequentially.

[0035] The first process 510 may be implemented as a data acquisition service for acquiring data from the AI module 202 via a data queue or database and adding the identified object profile to an object list, and may comprise a data receiver 512 configured to receive data from the data queue. When a new message is received 514, the new message is then processed as an input object 516 for input to the second process 520.

[0036] The second process 520 may be implemented as an object processing service. The second process 520 is configured to receive an input object from the list 522 and calculate the time to turn on and off one or more illumination sources, as well as determine the illumination sources’ identifiers or indices 524. The set of processed information for each object may then be inserted into a task scheduler 534 from the third process 530 - the laser control process.

[0037] The third process 530 may be implemented as a laser control service and implements a task scheduler for illuminating one or more objects on the moving platform depending on the location of the same. The task scheduler may be initialized 532 before the parameters of the processed object 534 are inserted into the task scheduler. A separate thread to control the on-off operations of the one or more illumination sources according to the calculated time may then be created 536.

[0038] In some embodiments, a laser control thread is created for each associated object to monitor the illuminating status of the one or more illumination sources related to that object, until the object completely passes over the illuminated laser light line on the moving platform conveyor belt. At this point, the corresponding lasers are turned off and the thread may automatically be terminated.

[0039] In the various described embodiments, the calculation of a switch on time and a switch off time of the one or more illumination sources, and the selection of one or more illumination sources from an illumination array, may be performed with reference to FIG. 6A.

[0040] With reference to FIG. 6A, it is assumed that there is a time synchronization between the AI module 202 and the illumination control module 204, and the sending time between the two modules via TCP/IP is negligible. In other words, the data transfer or transmission between the AI module 202 and the illumination control module 204 may be performed in real time or near real time. As shown in FIG. 6A, the lighting control system receives the object profile (the x- and y-coordinates corresponding to the center point of the bounding box, the width w and height h of the bounding box (h may be the dimension along the direction of movement of the moving platform, and w may be the dimension along a direction perpendicular to the direction of movement of the moving platform), and the material type) at a time t_cam. Ignoring any jam, slip or rolling of the object on the conveyor belt and assuming the object moves with the conveyor belt speed v_conveyor, the on-off time offsets dt_on, dt_off based on the lighting control system time may be calculated according to the mathematical expressions as follows:

Laser ON time: dt_on

Laser OFF time: dt_off

ON time point: tp_on = t_cam + dt_on

OFF time point: tp_off = tp_on + dt_off

[0041] wherein c_h is the height coefficient of the bounding box. This parameter is used to ensure that the laser lights only illuminate the surface of the targeted object and not its neighbors when the height of the bounding box is over a certain threshold.

[0042] With reference to FIG. 6B, a lighting control coordinate system (LCCS) is located at the laser frame. Using the distance between the laser centers and the x-coordinate of the center of the bounding box forming a perimeter around the image of the object, the relevant illumination sources (lasers) selected for illuminating an object are determined as follows:

[0043] where c_w is the width coefficient of the bounding box. The c_w parameter is used to ensure that the laser lights only illuminate the surface of the targeted object and not its neighbors when the width of the bounding box is over a certain threshold.
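The exact expressions for dt_on, dt_off and the laser selection are not reproduced in this extract. The following minimal Python sketch therefore only combines the inputs that the claims list (distance to the reference illumination line, belt speed, profile timestamp, laser spacing, and the height and width coefficients) under one plausible interpretation; the specific formulas inside the function are assumptions, not the formulas of this disclosure.

```python
def compute_laser_schedule(px, h, d_line, v_conveyor, t_cam, c_h,
                           w, d_laser, c_w):
    """One plausible reading of the timing and laser-selection rules.

    px, w, h    : bounding-box centre x-coordinate, width and height
    d_line      : distance from the bounding-box centre to the reference
                  illumination line
    v_conveyor  : speed of the moving platform
    t_cam       : timestamp at which the object profile was received
    c_h, c_w    : height and width coefficients of the bounding box
    d_laser     : centre-to-centre spacing of adjacent laser sources
    """
    # Assumed interpretation: switch on when the leading edge of the
    # height-scaled bounding box reaches the illumination line ...
    dt_on = (d_line - c_h * h / 2.0) / v_conveyor
    # ... and keep the lasers on while the scaled box passes the line.
    dt_off = (c_h * h) / v_conveyor
    tp_on = t_cam + dt_on          # ON time point, as in paragraph [0040]
    tp_off = tp_on + dt_off        # OFF time point, as in paragraph [0040]

    # Assumed laser selection: activate every laser whose centre lies within
    # the width-scaled bounding box around px.
    half_w = c_w * w / 2.0
    first = int((px - half_w) // d_laser)
    last = int((px + half_w) // d_laser)
    laser_ids = list(range(max(first, 0), last + 1))

    return tp_on, tp_off, laser_ids
```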

[0044] FIG. 7 shows a flow chart of a possible embodiment of a thread creation and control 536. As illustrated in FIG. 5, once the object processing service 520 outputs an object, the laser control service 530 creates a separate thread to handle the laser control for each specific processed object. After the thread completes its task, it is terminated automatically. At initialization, the thread waits until a Laser_On_Time 702 based on the calculation depicted in FIG. 6A, which is when the identified object reaches the illuminated-light line as shown in FIG. 6A. Then, it turns on the required lasers 704. After that, the thread enters a loop 706. Inside the loop, the thread constantly checks if any of the required lasers is not active based on the identifier(s) of each laser source 708, and turns it on if needed 710. This checking function is used to prevent conflicts between on-off operations from other threads, which may happen when multiple objects overlap. In other words, the loop 706 ensures that the required lasers are active until the object completely passes the illuminated-light line. The loop ends when the system time clock reaches the Laser_Off_Time, which corresponds to the object having completely passed the illuminated-light line. All of the corresponding lasers are turned off 712 and the thread is terminated and disposed 714. It is contemplated that one thread is associated with one identified object.
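A minimal sketch of such a per-object control thread, following the flow of FIG. 7, is shown below. The `lasers` object with `turn_on`, `turn_off` and `is_on` methods, as well as the polling interval, are assumptions standing in for the actual laser hardware interface, which is not specified in this extract.

```python
import threading
import time


def laser_control_thread(tp_on, tp_off, laser_ids, lasers):
    """Illustrative per-object control thread following FIG. 7.

    `lasers` is assumed to expose turn_on(i) / turn_off(i) / is_on(i);
    the real hardware interface is not specified in this extract.
    """
    # Wait until the object reaches the illuminated-light line (Laser_On_Time).
    time.sleep(max(0.0, tp_on - time.time()))
    for i in laser_ids:
        lasers.turn_on(i)

    # Keep re-asserting the ON state until Laser_Off_Time, so that another
    # thread handling an overlapping object cannot leave a required laser off.
    while time.time() < tp_off:
        for i in laser_ids:
            if not lasers.is_on(i):
                lasers.turn_on(i)
        time.sleep(0.01)  # assumed polling interval

    for i in laser_ids:
        lasers.turn_off(i)
    # Returning ends the thread; one thread is associated with one object.


# One thread per processed object, as described above:
# threading.Thread(target=laser_control_thread,
#                  args=(tp_on, tp_off, laser_ids, lasers), daemon=True).start()
```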

[0045] In some embodiments, the system 100, 300 and the associated modules of processor 108 may be applied to a sorting system having multiple sorting stations. In such a sorting system, each sorting station may be configured to implement the system 100, 300 preprogrammed or pre-determined to identify one type of object for sorting by a sorter 120. For example, the sorting system may comprise a first sorting station implementing a system 100 pre-configured to identify HDPE plastic objects for sorting and a second sorting station implementing the system 100 pre-configured to identify PET plastic objects for sorting. It is appreciable that the terms ‘first’ and ‘second’ are used for purposes of clarity and do not imply order or precedence.

[0046] The AI module(s) 202 as described may be trained and tested before deployment. In some embodiments, testing and training datasets may be generated to augment the object data 110 received. Such augmentation may include generating additional object data based on image processing functions such as flip and/or rotate. In some embodiments, the additional object data may include supplementing the original object data with additional location data on the moving platform 106.

[0047] In some embodiments, the artificial intelligence modules may include one or more neural networks. Such neural networks may be single-layered or multi-layered. The weights associated with each neuron of the layers may be trained and adjusted using an optimization algorithm modelled to minimize errors. In such an arrangement, the output parameter or result predicted by the neural network after each iteration of training is compared with a reference parameter and fed back to the neural network for weight modification/adjustment. Any input training data may comprise object data including image data, and the desired output of the localization module 202 may be to identify regions of interest (ROI) around each object to be classified. The output of the training is an identified object dataset.
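As a simple, non-limiting illustration of the flip/rotate augmentation mentioned in paragraph [0046], the sketch below produces flipped and rotated variants of an object image represented as a NumPy array; the actual augmentation pipeline used for training is not specified in this extract.

```python
import numpy as np


def augment(image: np.ndarray) -> list[np.ndarray]:
    """Produce flipped and rotated variants of an object image (H x W x C array).

    A minimal sketch of flip/rotate augmentation; the real training pipeline
    may apply additional or different transformations.
    """
    return [
        np.fliplr(image),        # horizontal flip
        np.flipud(image),        # vertical flip
        np.rot90(image, k=1),    # rotate 90 degrees
        np.rot90(image, k=2),    # rotate 180 degrees
    ]
```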

[0048] FIG. 8 is a flow chart 800 depicting a computer-assisted method for illuminating identified objects on a moving platform, comprising the steps of

[0049] 802- receiving an identified object profile associated with an object on the moving platform;

[0050] 804- calculating a switch on time and a switch off time of at least one illumination array comprising a plurality of illumination sources;

[0051] 806- determining on the at least one illumination array, at least one illumination source associated with the calculated switch on time and switch off time;

[0052] 808- outputting the switch on time, the switch off time, and an identifier of the at least one illumination source; and

[0053] 810- activating the at least one illumination source associated with the identifier, the switch on time, and the switch off time;

[0054] wherein the identified object profile comprises an image of the object and a set of location data associated with a relative location of the identified object on the moving platform.

[0055] FIG. 9 shows a server computer system 900 according to an embodiment. The server computer system 900 includes a communication interface 902 (e.g. configured to receive input data from the image sensors 104). The server computer system 900 further includes a processing unit 904 and a memory 906. The memory 906 may be used by the processing unit 904 to store, for example, data to be processed, such as data associated with the input data and results output from the modules 202, 204. The server computer system 900 is configured to perform the methods of FIGS. 5, 6A, 6B, 7 and 8. It should be noted that the server computer system 900 can be a distributed system including a plurality of computers.

[0056] It is contemplated that the modules 202, 204 may be realized (e.g., compiled together) as one executable software program (e.g., software application or simply referred to as an “app”), which for example may be stored in the memory 906 and executable by the at least one processor 108 to perform the functions/operations as described herein according to various embodiments.

[0057] Some portions of the present disclosure are explicitly or implicitly presented in terms of algorithms and functional or symbolic representations of operations on data within a computer memory. These algorithmic descriptions and functional or symbolic representations are the means used by those skilled in the data processing arts to convey most effectively the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of steps leading to a desired result. The steps are those requiring physical manipulations of physical quantities, such as electrical, magnetic or optical signals capable of being stored, transferred, combined, compared, and otherwise manipulated.

[0058] Furthermore, one or more of the steps of a computer program/module or method described herein may be performed in parallel rather than sequentially. Such a computer program may be stored on any computer readable medium. The computer readable medium may include storage devices such as magnetic or optical disks, memory chips, or other storage devices suitable for interfacing with a general-purpose computer. The computer program when loaded and executed on such a general-purpose computer effectively results in an apparatus that implements the steps of the methods described herein.

[0059] While the disclosure has been particularly shown and described with reference to specific embodiments, it should be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the invention as defined by the appended claims. The scope of the invention is thus indicated by the appended claims and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced.