


Title:
METHOD, SYSTEM, AND COMPUTER PROGRAM PRODUCT FOR SYNTHETIC OVERSAMPLING FOR BOOSTING SUPERVISED ANOMALY DETECTION
Document Type and Number:
WIPO Patent Application WO/2024/033771
Kind Code:
A1
Abstract:
Methods, systems, and computer program products may formulate an iterative data mixup problem as a Markov decision process (MDP) with a tailored reward signal to guide the learning process. To solve the MDP, a deep deterministic actor-critic framework may be modified to handle a discrete-continuous decision space for training a data augmentation policy.

Inventors:
LAI KWEI-HERNG (US)
WANG LAN (US)
CHEN HUIYUAN (US)
BENDRE MANGESH (US)
DAS MAHASHWETA (US)
YANG HAO (US)
Application Number:
PCT/IB2023/057912
Publication Date:
February 15, 2024
Filing Date:
August 04, 2023
Assignee:
Visa International Service Association (US)
International Classes:
G06N3/08; G06F7/00; G06F11/07; G06F11/30; G06N3/00; G06N20/00
Foreign References:
US 2019/0227536 A1 (2019-07-25)
US 2020/0314127 A1 (2020-10-01)
US 2016/0110584 A1 (2016-04-21)
US 2019/0259041 A1 (2019-08-22)
Attorney, Agent or Firm:
PREPELKA, Nathan, J. et al. (US)
Claims:
Attorney Docket No. 08223-2304281 (6575WO01)

WHAT IS CLAIMED IS:

1. A method, comprising: obtaining, with at least one processor, a training dataset X_train including a plurality of source samples including a plurality of labeled normal samples and a plurality of labeled anomaly samples; and executing, with the at least one processor, a training episode by: (i) initializing a timestamp t; (ii) receiving, from an actor network μ of an actor-critic framework including the actor network μ and a critic network Q, an action vector a_t for the timestamp t, wherein the actor network μ is configured to generate the action vector a_t based on a state s_t, wherein the state s_t is determined based on a current pair of source samples of the plurality of source samples, and wherein the action vector a_t includes a size of a nearest neighborhood k, a composition ratio λ, a number of oversampling n, and a termination probability p; (iii) combining the current pair of source samples according to the composition ratio λ and the number of oversampling n to generate a labeled synthetic sample x_syn associated with a label y_syn; (iv) training, using the labeled synthetic sample x_syn and the label y_syn, a machine learning classifier f; (v) obtaining, based on the size of a nearest neighborhood k, source samples in the k-nearest neighborhood of the labeled synthetic sample x_syn; (vi) generating, with the machine learning classifier f, for the source samples in the k-nearest neighborhood of the labeled synthetic sample x_syn and a subset of the plurality of source samples of the training dataset X_train in a validation dataset X_val, a plurality of classifier outputs; (vii) selecting, from the source samples in the k-nearest neighborhood of the labeled synthetic sample x_syn, a next pair of source samples; (viii) storing, in a memory buffer, the state s_t, the action vector a_t, a next state s_{t+1}, and a reward r_t, wherein the next state s_{t+1} is determined based on the next pair of source samples, and wherein the reward r_t is determined based on the plurality of classifier outputs; (ix) determining whether the termination probability p satisfies a termination threshold; (x) in response to determining that the termination probability p fails to satisfy the termination threshold, incrementing the timestamp t, for a number of training steps S: training the critic network Q according to a critic loss function that depends on the state s_t, the action vector a_t, and the reward r_t; and training the actor network μ according to an actor loss function that depends on an output of the critic network, and after training the actor network μ and the critic network Q for the number of training steps S, returning to step (ii) with the next pair of source samples as the current pair of source samples; (xi) in response to determining that the termination probability p satisfies the termination threshold, determining whether the number of training episodes executed satisfies a threshold number of training episodes; (xii) in response to determining that the number of training episodes executed fails to satisfy the threshold number of training episodes, returning to step (i) to execute a next training episode; and (xiii) in response to determining that the number of training episodes executed satisfies the threshold number of training episodes, providing the machine learning classifier f.

2. The method of claim 1, wherein the current pair of source samples are combined according to the composition ratio λ to generate the labeled synthetic sample x_syn according to the following Equations:

x_syn = λ·x_0 + (1 − λ)·x_1

y_syn = round(λ·y_0 + (1 − λ)·y_1)

where x_0 is a first sample of the current pair of samples, x_1 is a second sample of the current pair of samples, y_syn is a hard label for the labeled synthetic sample x_syn, y_0 is a first hard label value, and y_1 is a second hard label value.

3.
The method of claim 1, wherein the reward r_t is determined according to the following Equations:

r_t = ΔM(f_t) + C(f_t | s_t, a_t)

ΔM(f_t) = M(f_t(X_val), y_val) − b̄_t, with b̄_t = (1/m) · Σ_{j=t−m}^{t−1} M(f_j(X_val), y_val)

C(f_t | s_t, a_t) = (1/k) · Σ_{i=1}^{k} P(f_t(x_i), y_i)

where M is an evaluation metric, ΔM(f_t) measures a performance improvement of the trained classifier f_t, X_val is the validation dataset, y_val is a label set for the validation dataset X_val, b̄_t is a baseline for the timestamp t, m is a hyperparameter to define a buffer size for forming the baseline, C(f_t | s_t, a_t) evaluates a model confidence of the trained classifier f_t, P is a model exploration function, k is the size of the nearest neighborhood specified by the action vector a_t, x_i is a sample in the k-nearest neighborhood of the labeled synthetic sample x_syn in timestamp t, and y_i is a label for x_i.

4. The method of claim 1, wherein the actor loss function is defined according to the following Equation:

L_actor = −(1/N) · Σ_{i=1}^{N} Q(s_i, μ(s_i | θ_1) | θ_2)

where N is a number of transitions, μ(s_i | θ_1) is a projected action for a state s_i, and Q(s_i, μ(s_i | θ_1) | θ_2) is an output of the critic network for the projected action μ(s_i | θ_1) and state s_i, and wherein the critic loss function is defined according to the following Equation:

L_critic = (1/N) · Σ_{i=1}^{N} (q_i − Q(s_i, a_i | θ_2))^2

where q_i = R(s_i, a_i) + γ·Q(s_{i+1}, μ(s_{i+1} | θ_1) | θ_2), μ(s_{i+1} | θ_1) is an action specified by the actor network, θ_1 and θ_2 are the parameters of the actor network and the critic network, respectively, and γ is a decay factor.

5. The method of claim 1, wherein the plurality of source samples is associated with a plurality of transactions in a transaction processing network, wherein the plurality of labeled normal samples is associated with a plurality of non-fraudulent transactions of the plurality of transactions, and wherein the plurality of labeled anomaly samples is associated with a plurality of fraudulent transactions of the plurality of transactions.

6.
The method of claim 5, further comprising: receiving, with the at least one processor, transaction data associated with a transaction currently being processed in the transaction processing network; processing, with the at least one processor, using the trained machine learning classifier, the transaction data to classify the transaction as a fraudulent or non-fraudulent transaction; and in response to classifying the transaction as a fraudulent transaction, denying, with the at least one processor, authorization of the transaction in the transaction processing network.

7. The method of claim 1, further comprising: before executing the training episode: training, with the at least one processor, using the training dataset X_train, the machine learning classifier f; and pre-computing, with the at least one processor, each k-nearest neighborhood for each source sample of the plurality of source samples in the training dataset X_train.

8. A system, comprising: at least one processor programmed and/or configured to: obtain a training dataset X_train including a plurality of source samples including a plurality of labeled normal samples and a plurality of labeled anomaly samples; and execute a training episode by: (i) initializing a timestamp t; (ii) receiving, from an actor network μ of an actor-critic framework including the actor network μ and a critic network Q, an action vector a_t for the timestamp t, wherein the actor network μ is configured to generate the action vector a_t based on a state s_t, wherein the state s_t is determined based on a current pair of source samples of the plurality of source samples, and wherein the action vector a_t includes a size of a nearest neighborhood k, a composition ratio λ, a number of oversampling n, and a termination probability p; (iii) combining the current pair of source samples according to the composition ratio λ and the number of oversampling n to generate a labeled synthetic sample x_syn associated with a label y_syn; (iv) training, using the labeled synthetic sample x_syn and the label y_syn, a machine learning classifier f; (v) obtaining, based on the size of a nearest neighborhood k, source samples in the k-nearest neighborhood of the labeled synthetic sample x_syn; (vi) generating, with the machine learning classifier f, for the source samples in the k-nearest neighborhood of the labeled synthetic sample x_syn and a subset of the plurality of source samples of the training dataset X_train in a validation dataset X_val, a plurality of classifier outputs; (vii) selecting, from the source samples in the k-nearest neighborhood of the labeled synthetic sample x_syn, a next pair of source samples; (viii) storing, in a memory buffer, the state s_t, the action vector a_t, a next state s_{t+1}, and a reward r_t, wherein the next state s_{t+1} is determined based on the next pair of source samples, and wherein the reward r_t is determined based on the plurality of classifier outputs; (ix) determining whether the termination probability p satisfies a termination threshold; (x) in response to determining that the termination probability p fails to satisfy the termination threshold, incrementing the timestamp t, for a number of training steps S: training the critic network Q according to a critic loss function that depends on the state s_t, the action vector a_t, and the reward r_t; and training the actor network μ according to an actor loss function that depends on an output of the critic network, and after training the actor network μ and the critic network Q for the number of training steps S, returning to step (ii) with the next pair of source samples as the current pair of source samples; (xi) in response to determining that the termination probability p satisfies the termination threshold, determining whether the number of training episodes executed satisfies a threshold number of training episodes; (xii) in response to determining that the number of training episodes executed fails to satisfy the threshold number of training episodes, returning to step (i) to execute a next training episode; and (xiii) in response to determining that the number of training episodes executed satisfies the threshold number of training episodes, providing the machine learning classifier f.

9. The system of claim 8, wherein the current pair of source samples are combined according to the composition ratio λ to generate the labeled synthetic sample x_syn according to the following Equations:

x_syn = λ·x_0 + (1 − λ)·x_1

y_syn = round(λ·y_0 + (1 − λ)·y_1)

where x_0 is a first sample of the current pair of samples, x_1 is a second sample of the current pair of samples, y_syn is a hard label for the labeled synthetic sample x_syn, y_0 is a first hard label value, and y_1 is a second hard label value.

10. The system of claim 8, wherein the reward r_t is determined according to the following Equations:

r_t = ΔM(f_t) + C(f_t | s_t, a_t)

ΔM(f_t) = M(f_t(X_val), y_val) − b̄_t, with b̄_t = (1/m) · Σ_{j=t−m}^{t−1} M(f_j(X_val), y_val)

C(f_t | s_t, a_t) = (1/k) · Σ_{i=1}^{k} P(f_t(x_i), y_i)

where M is an evaluation metric, ΔM(f_t) measures a performance improvement of the trained classifier f_t, X_val is the validation dataset, y_val is a label set for the validation dataset X_val, b̄_t is a baseline for the timestamp t, m is a hyperparameter to define a buffer size for forming the baseline, C(f_t | s_t, a_t) evaluates a model confidence of the trained classifier f_t, P is a model exploration function, k is the size of the nearest neighborhood specified by the action vector a_t, x_i is a sample in the k-nearest neighborhood of the labeled synthetic sample x_syn in timestamp t, and y_i is a label for x_i.

11.
The system of claim 8, wherein the actor loss function is defined according to the following Equation:

L_actor = −(1/N) · Σ_{i=1}^{N} Q(s_i, μ(s_i | θ_1) | θ_2)

where N is a number of transitions, μ(s_i | θ_1) is a projected action for a state s_i, and Q(s_i, μ(s_i | θ_1) | θ_2) is an output of the critic network for the projected action μ(s_i | θ_1) and state s_i, and wherein the critic loss function is defined according to the following Equation:

L_critic = (1/N) · Σ_{i=1}^{N} (q_i − Q(s_i, a_i | θ_2))^2

where q_i = R(s_i, a_i) + γ·Q(s_{i+1}, μ(s_{i+1} | θ_1) | θ_2), μ(s_{i+1} | θ_1) is an action specified by the actor network, θ_1 and θ_2 are the parameters of the actor network and the critic network, respectively, and γ is a decay factor.

12. The system of claim 8, wherein the plurality of source samples is associated with a plurality of transactions in a transaction processing network, wherein the plurality of labeled normal samples is associated with a plurality of non-fraudulent transactions of the plurality of transactions, and wherein the plurality of labeled anomaly samples is associated with a plurality of fraudulent transactions of the plurality of transactions.

13. The system of claim 12, wherein the at least one processor is further programmed and/or configured to: receive transaction data associated with a transaction currently being processed in the transaction processing network; process, using the trained machine learning classifier, the transaction data to classify the transaction as a fraudulent or non-fraudulent transaction; and in response to classifying the transaction as a fraudulent transaction, deny authorization of the transaction in the transaction processing network.

14. The system of claim 8, wherein the at least one processor is further programmed and/or configured to: before executing the training episode: train, using the training dataset X_train, the machine learning classifier f; and pre-compute each k-nearest neighborhood for each source sample of the plurality of source samples in the training dataset X_train.

15.
A computer program product including a non-transitory computer readable medium including program instructions which, when executed by at least one processor, cause the at least one processor to: obtain a training dataset X_train including a plurality of source samples including a plurality of labeled normal samples and a plurality of labeled anomaly samples; and execute a training episode by: (i) initializing a timestamp t; (ii) receiving, from an actor network μ of an actor-critic framework including the actor network μ and a critic network Q, an action vector a_t for the timestamp t, wherein the actor network μ is configured to generate the action vector a_t based on a state s_t, wherein the state s_t is determined based on a current pair of source samples of the plurality of source samples, and wherein the action vector a_t includes a size of a nearest neighborhood k, a composition ratio λ, a number of oversampling n, and a termination probability p; (iii) combining the current pair of source samples according to the composition ratio λ and the number of oversampling n to generate a labeled synthetic sample x_syn associated with a label y_syn; (iv) training, using the labeled synthetic sample x_syn and the label y_syn, a machine learning classifier f; (v) obtaining, based on the size of a nearest neighborhood k, source samples in the k-nearest neighborhood of the labeled synthetic sample x_syn; (vi) generating, with the machine learning classifier f, for the source samples in the k-nearest neighborhood of the labeled synthetic sample x_syn and a subset of the plurality of source samples of the training dataset X_train in a validation dataset X_val, a plurality of classifier outputs; (vii) selecting, from the source samples in the k-nearest neighborhood of the labeled synthetic sample x_syn, a next pair of source samples; (viii) storing, in a memory buffer, the state s_t, the action vector a_t, a next state s_{t+1}, and a reward r_t, wherein the next state s_{t+1} is determined based on the next pair of source samples, and wherein the reward r_t is determined based on the plurality of classifier outputs; (ix) determining whether the termination probability p satisfies a termination threshold; (x) in response to determining that the termination probability p fails to satisfy the termination threshold, incrementing the timestamp t, for a number of training steps S: training the critic network Q according to a critic loss function that depends on the state s_t, the action vector a_t, and the reward r_t; and training the actor network μ according to an actor loss function that depends on an output of the critic network, and after training the actor network μ and the critic network Q for the number of training steps S, returning to step (ii) with the next pair of source samples as the current pair of source samples; (xi) in response to determining that the termination probability p satisfies the termination threshold, determining whether the number of training episodes executed satisfies a threshold number of training episodes; (xii) in response to determining that the number of training episodes executed fails to satisfy the threshold number of training episodes, returning to step (i) to execute a next training episode; and (xiii) in response to determining that the number of training episodes executed satisfies the threshold number of training episodes, providing the machine learning classifier f.

16.
The computer program product of claim 15, wherein the current pair of source samples are combined according to the composition ratio λ to generate the labeled synthetic sample x_syn according to the following Equations:

x_syn = λ·x_0 + (1 − λ)·x_1

y_syn = round(λ·y_0 + (1 − λ)·y_1)

where x_0 is a first sample of the current pair of samples, x_1 is a second sample of the current pair of samples, y_syn is a hard label for the labeled synthetic sample x_syn, y_0 is a first hard label value, and y_1 is a second hard label value.

17. The computer program product of claim 15, wherein the reward r_t is determined according to the following Equations:

r_t = ΔM(f_t) + C(f_t | s_t, a_t)

ΔM(f_t) = M(f_t(X_val), y_val) − b̄_t, with b̄_t = (1/m) · Σ_{j=t−m}^{t−1} M(f_j(X_val), y_val)

C(f_t | s_t, a_t) = (1/k) · Σ_{i=1}^{k} P(f_t(x_i), y_i)

where M is an evaluation metric, ΔM(f_t) measures a performance improvement of the trained classifier f_t, X_val is the validation dataset, y_val is a label set for the validation dataset X_val, b̄_t is a baseline for the timestamp t, m is a hyperparameter to define a buffer size for forming the baseline, C(f_t | s_t, a_t) evaluates a model confidence of the trained classifier f_t, P is a model exploration function, k is the size of the nearest neighborhood specified by the action vector a_t, x_i is a sample in the k-nearest neighborhood of the labeled synthetic sample x_syn in timestamp t, and y_i is a label for x_i.

18. The computer program product of claim 15, wherein the actor loss function is defined according to the following Equation:

L_actor = −(1/N) · Σ_{i=1}^{N} Q(s_i, μ(s_i | θ_1) | θ_2)

where N is a number of transitions, μ(s_i | θ_1) is a projected action for a state s_i, and Q(s_i, μ(s_i | θ_1) | θ_2) is an output of the critic network for the projected action μ(s_i | θ_1) and state s_i, and wherein the critic loss function is defined according to the following Equation:

L_critic = (1/N) · Σ_{i=1}^{N} (q_i − Q(s_i, a_i | θ_2))^2

where q_i = R(s_i, a_i) + γ·Q(s_{i+1}, μ(s_{i+1} | θ_1) | θ_2), μ(s_{i+1} | θ_1) is an action specified by the actor network, θ_1 and θ_2 are the parameters of the actor network and the critic network, respectively, and γ is a decay factor.

19.
The computer program product of claim 15, wherein the plurality of source samples is associated with a plurality of transactions in a transaction processing network, wherein the plurality of labeled normal samples is associated with a plurality of non-fraudulent transactions of the plurality of transactions, wherein the plurality of labeled anomaly samples is associated with a plurality of fraudulent transactions of the plurality of transactions, and wherein the program instructions, when executed by at least one processor, further cause the at least one processor to: receive transaction data associated with a transaction currently being processed in the transaction processing network; process, using the trained machine learning classifier, the transaction data to classify the transaction as a fraudulent or non-fraudulent transaction; and in response to classifying the transaction as a fraudulent transaction, deny authorization of the transaction in the transaction processing network.

20. The computer program product of claim 15, wherein the program instructions, when executed by at least one processor, further cause the at least one processor to: before executing the training episode: train, using the training dataset X_train, the machine learning classifier f; and pre-compute each k-nearest neighborhood for each source sample of the plurality of source samples in the training dataset X_train.
Description:
METHOD, SYSTEM, AND COMPUTER PROGRAM PRODUCT FOR SYNTHETIC OVERSAMPLING FOR BOOSTING SUPERVISED ANOMALY DETECTION

CROSS-REFERENCE TO RELATED APPLICATION

[0001] This application claims the benefit of U.S. Provisional Application No. 63/397,719, filed August 12, 2022, the disclosure of which is hereby incorporated by reference in its entirety.

BACKGROUND

1. Field

[0002] This disclosure relates to synthetic oversampling and, in some non-limiting embodiments or aspects, to methods, systems, and computer program products for synthetic oversampling for boosting supervised anomaly detection.

2. Technical Considerations

[0003] Training an anomaly detector may be challenging due to label sparsity and/or a diverse distribution of known anomalies. Existing approaches typically use unsupervised or semi-supervised learning in an attempt to alleviate these issues. However, semi-supervised learning may directly adopt the limited label information, which may lead to a model that overfits on existing anomalies, while unsupervised learning may ignore the label information, which may lead to low precision.

SUMMARY

[0004] Accordingly, provided are improved systems, devices, products, apparatus, and/or methods for synthetic oversampling for boosting supervised anomaly detection.
[0005] According to some non-limiting embodiments or aspects, provided is a method, comprising: obtaining, with at least one processor, a training dataset X_train including a plurality of source samples including a plurality of labeled normal samples and a plurality of labeled anomaly samples; and executing, with the at least one processor, a training episode by: (i) initializing a timestamp t; (ii) receiving, from an actor network μ of an actor-critic framework including the actor network μ and a critic network Q, an action vector a_t for the timestamp t, wherein the actor network μ is configured to generate the action vector a_t based on a state s_t, wherein the state s_t is determined based on a current pair of source samples of the plurality of source samples, and wherein the action vector a_t includes a size of a nearest neighborhood k, a composition ratio λ, a number of oversampling n, and a termination probability p; (iii) combining the current pair of source samples according to the composition ratio λ and the number of oversampling n to generate a labeled synthetic sample x_syn associated with a label y_syn; (iv) training, using the labeled synthetic sample x_syn and the label y_syn, a machine learning classifier f; (v) obtaining, based on the size of a nearest neighborhood k, source samples in the k-nearest neighborhood of the labeled synthetic sample x_syn; (vi) generating, with the machine learning classifier f, for the source samples in the k-nearest neighborhood of the labeled synthetic sample x_syn and a subset of the plurality of source samples of the training dataset X_train in a validation dataset X_val, a plurality of classifier outputs; (vii) selecting, from the source samples in the k-nearest neighborhood of the labeled synthetic sample x_syn, a next pair of source samples; (viii) storing, in a memory buffer, the state s_t, the action vector a_t, a next state s_{t+1}, and a reward r_t, wherein the next state s_{t+1} is determined based on the next pair of source samples, and wherein the reward r_t is determined based on the plurality of classifier outputs; (ix) determining whether the termination probability p satisfies a termination threshold; (x) in response to determining that the termination probability p fails to satisfy the termination threshold, incrementing the timestamp t, for a number of training steps S: training the critic network Q according to a critic loss function that depends on the state s_t, the action vector a_t, and the reward r_t; and training the actor network μ according to an actor loss function that depends on an output of the critic network, and after training the actor network μ and the critic network Q for the number of training steps S, returning to step (ii) with the next pair of source samples as the current pair of source samples; (xi) in response to determining that the termination probability p satisfies the termination threshold, determining whether the number of training episodes executed satisfies a threshold number of training episodes; (xii) in response to determining that the number of training episodes executed fails to satisfy the threshold number of training episodes, returning to step (i) to execute a next training episode; and (xiii) in response to determining that the number of training episodes executed satisfies the threshold number of training episodes, providing the machine learning classifier f.
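The episode loop described above can be sketched in a few lines of Python. This is an illustrative sketch only, not the claimed implementation: the actor, the classifier training/scoring step, and the neighborhood-based pair selection are stand-in callables, and the discretization ranges in `decode_action` (which maps the actor's continuous output onto the mixed discrete-continuous action vector) are assumptions rather than values from the disclosure.

```python
def decode_action(raw):
    """Map a raw actor output in [0, 1]^4 onto the discrete-continuous
    action vector: neighborhood size k and oversampling count n are
    discretized; the composition ratio lam and termination probability p
    stay continuous. The ranges for k and n are illustrative."""
    k = 1 + int(raw[0] * 9)    # k in {1, ..., 10}
    lam = raw[1]               # composition ratio in [0, 1]
    n = 1 + int(raw[2] * 4)    # oversampling count in {1, ..., 5}
    p = raw[3]                 # termination probability
    return k, lam, n, p


def run_episode(actor, train_and_score, select_next_pair,
                pair, max_steps=50, term_threshold=0.5):
    """One training episode, roughly steps (i)-(x): act, synthesize n
    samples from the current pair, train/score the classifier (stub),
    store the transition, and stop when the termination probability
    crosses the threshold."""
    buffer = []
    for _t in range(max_steps):
        state = pair
        k, lam, n, p = decode_action(actor(state))
        x0, x1 = pair
        # (iii) combine the pair n times at ratio lam
        synth = [tuple(lam * a + (1 - lam) * b for a, b in zip(x0, x1))
                 for _ in range(n)]
        reward = train_and_score(synth)            # (iv)+(vi), stub reward
        next_pair = select_next_pair(synth[0], k)  # (v)+(vii)
        buffer.append((state, (k, lam, n, p), next_pair, reward))  # (viii)
        if p >= term_threshold:                    # (ix)/(xi): terminate
            break
        pair = next_pair                           # (x): continue episode
    return buffer
```

An outer loop over episodes (steps (xi)-(xiii)) would simply call `run_episode` until the episode budget is exhausted.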
[0006] In some non-limiting embodiments or aspects, the current pair of source samples are combined according to the composition ratio λ to generate the labeled synthetic sample x_syn according to the following Equations:

x_syn = λ·x_0 + (1 − λ)·x_1

y_syn = round(λ·y_0 + (1 − λ)·y_1)

where x_0 is a first sample of the current pair of samples, x_1 is a second sample of the current pair of samples, y_syn is a hard label for the labeled synthetic sample x_syn, y_0 is a first hard label value, and y_1 is a second hard label value.

[0007] In some non-limiting embodiments or aspects, the reward r_t is determined according to the following Equations:

r_t = ΔM(f_t) + C(f_t | s_t, a_t)

ΔM(f_t) = M(f_t(X_val), y_val) − b̄_t, with b̄_t = (1/m) · Σ_{j=t−m}^{t−1} M(f_j(X_val), y_val)

C(f_t | s_t, a_t) = (1/k) · Σ_{i=1}^{k} P(f_t(x_i), y_i)

where M is an evaluation metric, ΔM(f_t) measures a performance improvement of the trained classifier f_t, X_val is the validation dataset, y_val is a label set for the validation dataset X_val, b̄_t is a baseline for the timestamp t, m is a hyperparameter to define a buffer size for forming the baseline, C(f_t | s_t, a_t) evaluates a model confidence of the trained classifier f_t, P is a model exploration function, k is the size of the nearest neighborhood specified by the action vector a_t, x_i is a sample in the k-nearest neighborhood of the labeled synthetic sample x_syn in timestamp t, and y_i is a label for x_i.

[0008] In some non-limiting embodiments or aspects, the actor loss function is defined according to the following Equation:

L_actor = −(1/N) · Σ_{i=1}^{N} Q(s_i, μ(s_i | θ_1) | θ_2)

where N is a number of transitions, μ(s_i | θ_1) is a projected action for a state s_i, and Q(s_i, μ(s_i | θ_1) | θ_2) is an output of the critic network for the projected action μ(s_i | θ_1) and state s_i, and wherein the critic loss function is defined according to the following Equation:

L_critic = (1/N) · Σ_{i=1}^{N} (q_i − Q(s_i, a_i | θ_2))^2

where q_i = R(s_i, a_i) + γ·Q(s_{i+1}, μ(s_{i+1} | θ_1) | θ_2), μ(s_{i+1} | θ_1) is an action specified by the actor network, θ_1 and θ_2 are the parameters of the actor network and the critic network, respectively, and γ is a decay factor.
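The sample-combination and reward computations above can be illustrated with a small sketch, assuming a mixup-style interpolation with a rounded hard label and a moving-average baseline over the last m metric values; the names `synthesize` and `RewardTracker` are hypothetical, and the confidence term is passed in rather than computed, since the disclosure's exploration function is not specified in code here.

```python
from collections import deque


def synthesize(x0, x1, y0, y1, lam):
    """Mixup-style combination with a hard label: interpolate the
    features at ratio lam and round the interpolated label."""
    x_syn = tuple(lam * a + (1 - lam) * b for a, b in zip(x0, x1))
    y_syn = round(lam * y0 + (1 - lam) * y1)
    return x_syn, y_syn


class RewardTracker:
    """Improvement reward: current metric value minus the mean of the
    last m metric values (the moving baseline), plus an optional
    model-confidence term supplied by the caller."""

    def __init__(self, m):
        self.history = deque(maxlen=m)

    def reward(self, metric_value, confidence=0.0):
        baseline = (sum(self.history) / len(self.history)
                    if self.history else 0.0)
        self.history.append(metric_value)
        return (metric_value - baseline) + confidence
```

With `y0, y1` in {0, 1}, rounding the interpolated label assigns the synthetic sample the label of whichever parent dominates the mix.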
[0009] In some non-limiting embodiments or aspects, the plurality of source samples is associated with a plurality of transactions in a transaction processing network, wherein the plurality of labeled normal samples is associated with a plurality of non-fraudulent transactions of the plurality of transactions, and wherein the plurality of labeled anomaly samples is associated with a plurality of fraudulent transactions of the plurality of transactions.

[0010] In some non-limiting embodiments or aspects, the method further comprises: receiving, with the at least one processor, transaction data associated with a transaction currently being processed in the transaction processing network; processing, with the at least one processor, using the trained machine learning classifier, the transaction data to classify the transaction as a fraudulent or non-fraudulent transaction; and in response to classifying the transaction as a fraudulent transaction, denying, with the at least one processor, authorization of the transaction in the transaction processing network.

[0011] In some non-limiting embodiments or aspects, the method further comprises: before executing the training episode: training, with the at least one processor, using the training dataset X_train, the machine learning classifier; and pre-computing, with the at least one processor, each k-nearest neighborhood for each source sample of the plurality of source samples in the training dataset X_train.
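The pre-computation and transaction-screening steps described above can be sketched as follows. This is a brute-force illustration (the disclosure does not prescribe a particular nearest-neighbor search), and `precompute_neighborhoods` and `authorize` are hypothetical names.

```python
import math


def precompute_neighborhoods(samples, k):
    """Brute-force k-nearest neighborhoods for every source sample,
    computed once before any training episode runs."""
    out = {}
    for i, x in enumerate(samples):
        # rank every other sample by Euclidean distance to x
        ranked = sorted((j for j in range(len(samples)) if j != i),
                        key=lambda j: math.dist(x, samples[j]))
        out[i] = ranked[:k]
    return out


def authorize(classifier, transaction_features):
    """Deny authorization when the trained classifier labels the
    transaction as fraudulent (label 1); otherwise approve."""
    return "deny" if classifier(transaction_features) == 1 else "approve"
```

In practice a spatial index (e.g. a k-d tree) would replace the O(n^2) scan, but the precompute-once structure is the same.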
[0012] According to some non-limiting embodiments or aspects, provided is a system, comprising: at least one processor programmed and/or configured to: obtain a training dataset X_train including a plurality of source samples including a plurality of labeled normal samples and a plurality of labeled anomaly samples; and execute a training episode by: (i) initializing a timestamp t; (ii) receiving, from an actor network μ of an actor-critic framework including the actor network μ and a critic network Q, an action vector a_t for the timestamp t, wherein the actor network μ is configured to generate the action vector a_t based on a state s_t, wherein the state s_t is determined based on a current pair of source samples of the plurality of source samples, and wherein the action vector a_t includes a size of a nearest neighborhood k, a composition ratio λ, a number of oversampling n, and a termination probability p; (iii) combining the current pair of source samples according to the composition ratio λ and the number of oversampling n to generate a labeled synthetic sample x_syn associated with a label y_syn; (iv) training, using the labeled synthetic sample x_syn and the label y_syn, a machine learning classifier f; (v) obtaining, based on the size of a nearest neighborhood k, source samples in the k-nearest neighborhood of the labeled synthetic sample x_syn; (vi) generating, with the machine learning classifier f, for the source samples in the k-nearest neighborhood of the labeled synthetic sample x_syn and a subset of the plurality of source samples of the training dataset X_train in a validation dataset X_val, a plurality of classifier outputs; (vii) selecting, from the source samples in the k-nearest neighborhood of the labeled synthetic sample x_syn, a next pair of source samples; (viii) storing, in a memory buffer, the state s_t, the action vector a_t, a next state s_{t+1}, and a reward r_t, wherein the next state s_{t+1} is determined based on the next pair of source samples, and wherein the reward r_t is determined based on the plurality of classifier outputs; (ix) determining whether the termination probability p satisfies a termination threshold; (x) in response to determining that the termination probability p fails to satisfy the termination threshold, incrementing the timestamp t, for a number of training steps S: training the critic network Q according to a critic loss function that depends on the state s_t, the action vector a_t, and the reward r_t; and training the actor network μ according to an actor loss function that depends on an output of the critic network, and after training the actor network μ and the critic network Q for the number of training steps S, returning to step (ii) with the next pair of source samples as the current pair of source samples; (xi) in response to determining that the termination probability p satisfies the termination threshold, determining whether the number of training episodes executed satisfies a threshold number of training episodes; (xii) in response to determining that the number of training episodes executed fails to satisfy the threshold number of training episodes, returning to step (i) to execute a next training episode; and (xiii) in response to determining that the number of training episodes executed satisfies the threshold number of training episodes, providing the machine learning classifier f.

[0013] In some non-limiting embodiments or aspects, the current pair of source samples are combined according to the composition ratio λ to generate the labeled synthetic sample x_syn according to the following Equations:

x_syn = λ·x_0 + (1 − λ)·x_1

y_syn = round(λ·y_0 + (1 − λ)·y_1)

where x_0 is a first sample of the current pair of samples, x_1 is a second sample of the current pair of samples, y_syn is a hard label for the labeled synthetic sample x_syn, y_0 is a first hard label value, and y_1 is a second hard label value.
[0014] In some non-limiting embodiments or aspects, the reward r t is determined according to the following Equations: where M is an evaluation metric, ΔM(^^) measures a performance improvement of the trained classifier ^^, X val is the validation data set, y val is a label set for the training data set, is a baseline for the timestamp t, m is a hyperparameter to define a buffer size for forming the baseline, C(^^|^^, ^^) evaluates a model confidence of the trained classifier ^^, P is a model exploration function, k is the size of the nearest neighborhood specified by the action vector ^^, xi is a k-nearest neighborhood of the labeled synthetic sample ^syn in timestamp t, and yi is a label for xi. [0015] In some non-limiting embodiments or aspects, the actor loss function is defined according to the following Equation: 5MM1325.DOCX Page 6 of 71 Attorney Docket No.08223-2304281 (6575WO01) where N is a number of transitions, ^(^i|^2) is a projected action for a state s i , and ^(^^, ^(^^)|^2) is an output of the critic network for the projected action ^(^i|^2) and state s i , and wherein the critic loss function is defined according to the following Equation: where ^^ = R(^^, ^^) + ^^(^^+1, ^(^^+1 |^1) |^2), ^(^^+1|^1) is an action specified by the actor network, and ^ is a decade factor. [0016] In some non-limiting embodiments or aspects, the plurality of source samples is associated with a plurality of transactions in a transaction processing network, wherein the plurality of labeled normal samples is associated with a plurality of non-fraudulent transactions of the plurality of transactions, and wherein the plurality of labeled anomaly samples is associated with a plurality of fraudulent transactions of the plurality of transactions. 
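The two loss functions above follow the usual deterministic actor-critic pattern: the critic regresses toward a bootstrapped target, and the actor is pushed toward actions the critic values highly. The sketch below computes both losses with toy linear networks standing in for the actor and critic; all shapes, the linear parameterization, and the random data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
gamma = 0.99  # decay (discount) factor

# Toy linear actor mu(s) and critic Q(s, a), standing in for the
# neural networks; dimensions are illustrative assumptions.
state_dim, action_dim, N = 3, 4, 8
W_a = rng.normal(size=(state_dim, action_dim))
w_c = rng.normal(size=(state_dim + action_dim,))

def mu(s):   # actor: projected action for each state
    return s @ W_a

def Q(s, a):  # critic: scalar value per (state, action) pair
    return np.concatenate([s, a], axis=1) @ w_c

# A batch of N stored transitions (s_t, a_t, r_t, s_{t+1}).
s = rng.normal(size=(N, state_dim))
a = rng.normal(size=(N, action_dim))
r = rng.normal(size=N)
s_next = rng.normal(size=(N, state_dim))

# Critic loss: mean squared error against the bootstrapped target
# y_t = R(s_t, a_t) + gamma * Q(s_{t+1}, mu(s_{t+1})).
y = r + gamma * Q(s_next, mu(s_next))
critic_loss = np.mean((y - Q(s, a)) ** 2)

# Actor loss: negative mean critic value of the projected actions,
# so minimizing it steers the actor toward high-value actions.
actor_loss = -np.mean(Q(s, mu(s)))
```

In practice both networks would be trained by gradient descent on these quantities for the S steps per transition described above.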
[0017] In some non-limiting embodiments or aspects, the at least one processor is further programmed and/or configured to: receive transaction data associated with a transaction currently being processed in the transaction processing network; process, using the trained machine learning classifier, the transaction data to classify the transaction as a fraudulent or non-fraudulent transaction; and in response to classifying the transaction as a fraudulent transaction, deny authorization of the transaction in the transaction processing network. [0018] In some non-limiting embodiments or aspects, the at least one processor is further programmed and/or configured to: before executing the training episode: train, using the training dataset X_train, the machine learning classifier f; and pre-compute each k-nearest neighborhood for each source sample of the plurality of source samples in the training dataset X_train. [0019] According to some non-limiting embodiments or aspects, provided is a computer program product including a non-transitory computer readable medium including program instructions which, when executed by at least one processor, cause the at least one processor to: obtain a training dataset X_train including a plurality of source samples including a plurality of labeled normal samples and a plurality of labeled anomaly samples; and execute a training episode by: (i) initializing a timestamp t; (ii) receiving, from an actor network μ of an actor critic framework including the actor network μ and a critic network Q, an action vector a_t for the timestamp t, wherein the actor network μ is configured to generate the action vector a_t based on a state s_t, wherein the state s_t is determined based on a current pair of source samples of the plurality of source samples, and wherein the action vector a_t includes a size of a nearest neighborhood k, a composition ratio λ, a number of oversampling o,
and a termination probability p; (iii) combining the current pair of source samples according to the composition ratio λ and the number of oversampling o to generate a labeled synthetic sample x_syn associated with a label y_syn; (iv) training, using the labeled synthetic sample x_syn and the label y_syn, a machine learning classifier f; (v) obtaining, based on the size of the nearest neighborhood k, source samples in the k-nearest neighborhood of the labeled synthetic sample x_syn; (vi) generating, with the machine learning classifier f, for the source samples in the k-nearest neighborhood of the labeled synthetic sample x_syn and a subset of the plurality of source samples of the training dataset X_train in a validation dataset X_val, a plurality of classifier outputs; (vii) selecting, from the source samples in the k-nearest neighborhood of the labeled synthetic sample x_syn, a next pair of source samples; (viii) storing, in a memory buffer, the state s_t, the action vector a_t, a next state s_t+1, and a reward r_t, wherein the next state s_t+1 is determined based on the next pair of source samples, and wherein the reward r_t is determined based on the plurality of classifier outputs; (ix) determining whether the termination probability p satisfies a termination threshold; (x) in response to determining that the termination probability p fails to satisfy the termination threshold, incrementing the timestamp t and, for a number of training steps S: training the critic network Q according to a critic loss function that depends on the state s_t, the action vector a_t, and the reward r_t; and training the actor network μ according to an actor loss function that depends on an output of the critic network, and after training the actor network μ and the critic network Q for the number of training steps S, returning to step (ii) with the next pair of source samples as the current pair of source samples; (xi) in response to determining that the termination probability p satisfies the termination
threshold, determining whether the number of training episodes executed satisfies a threshold number of training episodes; (xii) in response to determining that the number of training episodes executed fails to satisfy the threshold number of training episodes, returning to step (i) to execute a next training episode; and (xiii) in response to determining that the number of training episodes executed satisfies the threshold number of training episodes, providing the machine learning classifier f. [0020] In some non-limiting embodiments or aspects, the current pair of source samples are combined according to the composition ratio λ to generate the labeled synthetic sample x_syn according to the following Equations:

x_syn = λ·x0 + (1 − λ)·x1

y_syn = y0 if λ ≥ 0.5, otherwise y1

where x0 is a first sample of the current pair of samples, x1 is a second sample of the current pair of samples, y_syn is a hard label for the labeled synthetic sample x_syn, y0 is a first hard label value, and y1 is a second hard label value. [0021] In some non-limiting embodiments or aspects, the reward r_t is determined according to the following Equations:

ΔM(f_t) = M(f_t, X_val, y_val) − b_t

b_t = (1/m) · Σ_{j=t−m}^{t−1} M(f_j, X_val, y_val)

C(f_t | x_i, y_i) = (1/k) · Σ_{i=1}^{k} P(y_i | x_i)

r_t = ΔM(f_t) + C(f_t | x_i, y_i)

where M is an evaluation metric, ΔM(f_t) measures a performance improvement of the trained classifier f_t, X_val is the validation data set, y_val is a label set for the validation data set, b_t is a baseline for the timestamp t, m is a hyperparameter to define a buffer size for forming the baseline, C(f_t | x_i, y_i) evaluates a model confidence of the trained classifier f_t, P is a model exploration function, k is the size of the nearest neighborhood specified by the action vector a_t, x_i is a sample in the k-nearest neighborhood of the labeled synthetic sample x_syn in timestamp t, and y_i is a label for x_i.
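The reward described above combines two signals: the change in a validation metric against a moving baseline formed over the last m steps, and the classifier's confidence on the true labels of the synthetic sample's k nearest neighbors. A minimal sketch, assuming the metric is supplied as a scalar and the confidences as per-neighbor probabilities (both interfaces are assumptions):

```python
import numpy as np
from collections import deque

class RewardTracker:
    """Sketch of r_t = (metric improvement over baseline) + (model confidence)."""

    def __init__(self, m=5):
        # Buffer of size m whose mean forms the baseline b_t.
        self.history = deque(maxlen=m)

    def reward(self, val_metric, neighbor_true_probs):
        # Delta-M term: current validation metric minus the moving baseline.
        baseline = float(np.mean(self.history)) if self.history else 0.0
        delta_m = val_metric - baseline
        self.history.append(val_metric)
        # Confidence term: mean probability the classifier assigns to the
        # true labels of the k nearest neighbors of the synthetic sample.
        confidence = float(np.mean(neighbor_true_probs))
        return delta_m + confidence
```

The first call uses a zero baseline since no history exists yet; whether the very first reward should instead be withheld is a design choice not fixed by the text.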
[0022] In some non-limiting embodiments or aspects, the actor loss function is defined according to the following Equation:

L_actor = −(1/N) · Σ_{i=1}^{N} Q(s_i, μ(s_i | θ1) | θ2)

where N is a number of transitions, μ(s_i | θ1) is a projected action for a state s_i, and Q(s_i, μ(s_i | θ1) | θ2) is an output of the critic network for the projected action μ(s_i | θ1) and state s_i, and wherein the critic loss function is defined according to the following Equation:

L_critic = (1/N) · Σ_{t=1}^{N} (y_t − Q(s_t, a_t | θ2))^2

where y_t = R(s_t, a_t) + γ·Q(s_{t+1}, μ(s_{t+1} | θ1) | θ2), μ(s_{t+1} | θ1) is an action specified by the actor network, and γ is a decay factor. [0023] In some non-limiting embodiments or aspects, the plurality of source samples is associated with a plurality of transactions in a transaction processing network, wherein the plurality of labeled normal samples is associated with a plurality of non-fraudulent transactions of the plurality of transactions, and wherein the plurality of labeled anomaly samples is associated with a plurality of fraudulent transactions of the plurality of transactions. [0024] In some non-limiting embodiments or aspects, the program instructions, when executed by at least one processor, further cause the at least one processor to: receive transaction data associated with a transaction currently being processed in the transaction processing network; process, using the trained machine learning classifier, the transaction data to classify the transaction as a fraudulent or non-fraudulent transaction; and in response to classifying the transaction as a fraudulent transaction, deny authorization of the transaction in the transaction processing network.
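The fraud-screening flow in the preceding paragraph (score an in-flight transaction, deny authorization if it is classified fraudulent) can be sketched as below. The `predict_proba`-style interface, the 0.5 decision threshold, and the returned field names are illustrative assumptions, not part of the disclosure:

```python
def authorize(transaction_features, classifier, threshold=0.5):
    """Score a transaction currently being processed and decide authorization.

    classifier is any trained model exposing predict_proba-style scores
    where column 1 is the fraud (anomaly) class probability.
    """
    fraud_score = classifier.predict_proba([transaction_features])[0][1]
    if fraud_score >= threshold:
        # Classified fraudulent: deny authorization in the network.
        return {"authorized": False, "reason": "classified_fraudulent",
                "score": fraud_score}
    return {"authorized": True, "score": fraud_score}
```

A production decision would typically feed this verdict back into the authorization response message rather than returning a dict.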
[0025] In some non-limiting embodiments or aspects, the program instructions, when executed by at least one processor, further cause the at least one processor to: before executing the training episode: train, using the training dataset X_train, the machine learning classifier f; and pre-compute each k-nearest neighborhood for each source sample of the plurality of source samples in the training dataset X_train. [0026] Further embodiments or aspects are set forth in the following numbered clauses: [0027] Clause 1: A method, comprising: obtaining, with at least one processor, a training dataset X_train including a plurality of source samples including a plurality of labeled normal samples and a plurality of labeled anomaly samples; and executing, with the at least one processor, a training episode by: (i) initializing a timestamp t; (ii) receiving, from an actor network μ of an actor critic framework including the actor network μ and a critic network Q, an action vector a_t for the timestamp t, wherein the actor network μ is configured to generate the action vector a_t based on a state s_t, wherein the state s_t is determined based on a current pair of source samples of the plurality of source samples, and wherein the action vector a_t includes a size of a nearest neighborhood k, a composition ratio λ, a number of oversampling o, and a termination probability p; (iii) combining the current pair of source samples according to the composition ratio λ and the number of oversampling o to generate a labeled synthetic sample x_syn associated with a label y_syn; (iv) training, using the labeled synthetic sample x_syn and the label y_syn, a machine learning classifier f; (v) obtaining, based on the size of the nearest neighborhood k, source samples in the k-nearest neighborhood of the labeled synthetic sample x_syn; (vi) generating, with the machine learning classifier f, for the source samples in the k-nearest
neighborhood of the labeled synthetic sample x_syn and a subset of the plurality of source samples of the training dataset X_train in a validation dataset X_val, a plurality of classifier outputs; (vii) selecting, from the source samples in the k-nearest neighborhood of the labeled synthetic sample x_syn, a next pair of source samples; (viii) storing, in a memory buffer, the state s_t, the action vector a_t, a next state s_t+1, and a reward r_t, wherein the next state s_t+1 is determined based on the next pair of source samples, and wherein the reward r_t is determined based on the plurality of classifier outputs; (ix) determining whether the termination probability p satisfies a termination threshold; (x) in response to determining that the termination probability p fails to satisfy the termination threshold, incrementing the timestamp t and, for a number of training steps S: training the critic network Q according to a critic loss function that depends on the state s_t, the action vector a_t, and the reward r_t; and training the actor network μ according to an actor loss function that depends on an output of the critic network, and after training the actor network μ and the critic network Q for the number of training steps S, returning to step (ii) with the next pair of source samples as the current pair of source samples; (xi) in response to determining that the termination probability p satisfies the termination threshold, determining whether the number of training episodes executed satisfies a threshold number of training episodes; (xii) in response to determining that the number of training episodes executed fails to satisfy the threshold number of training episodes, returning to step (i) to execute a next training episode; and (xiii) in response to determining that the number of training episodes executed satisfies the threshold number of training episodes, providing the machine learning classifier f.
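Steps (i)–(xiii) above amount to an episodic loop: roll out mix-up actions until the termination probability crosses its threshold, updating the actor-critic for S steps per transition, and repeat for a budget of episodes. The driver below is a structural sketch only; the `env`/`agent` interfaces are hypothetical stand-ins for the components described in the clauses:

```python
def run_training(env, agent, max_episodes=50, term_threshold=0.5, steps_S=4):
    """Driver loop mirroring steps (i)-(xiii) of the claimed training episode."""
    for episode in range(max_episodes):              # (xi)-(xiii) episode budget
        state = env.reset()                          # (i) start a fresh episode
        while True:
            action = agent.act(state)                # (ii) k, lambda, o, p
            # (iii)-(vii): synthesize, retrain, score neighbors, pick next pair.
            next_state, reward, term_prob = env.step(action)
            agent.remember(state, action, reward, next_state)  # (viii) buffer
            if term_prob >= term_threshold:          # (ix)/(xi) end the episode
                break
            for _ in range(steps_S):                 # (x) critic, then actor
                agent.update()
            state = next_state                       # continue from next pair
    return agent                                     # (xiii) trained components
```

The environment here bundles the mix-up, classifier refit, and reward computation into a single `step`, which is one reasonable decomposition but not the only one.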
[0028] Clause 2: The method of clause 1, wherein the current pair of source samples are combined according to the composition ratio λ to generate the labeled synthetic sample x_syn according to the following Equations:

x_syn = λ·x0 + (1 − λ)·x1

y_syn = y0 if λ ≥ 0.5, otherwise y1

where x0 is a first sample of the current pair of samples, x1 is a second sample of the current pair of samples, y_syn is a hard label for the labeled synthetic sample x_syn, y0 is a first hard label value, and y1 is a second hard label value. [0029] Clause 3: The method of clauses 1 or 2, wherein the reward r_t is determined according to the following Equations:

ΔM(f_t) = M(f_t, X_val, y_val) − b_t

b_t = (1/m) · Σ_{j=t−m}^{t−1} M(f_j, X_val, y_val)

C(f_t | x_i, y_i) = (1/k) · Σ_{i=1}^{k} P(y_i | x_i)

r_t = ΔM(f_t) + C(f_t | x_i, y_i)

where M is an evaluation metric, ΔM(f_t) measures a performance improvement of the trained classifier f_t, X_val is the validation data set, y_val is a label set for the validation data set, b_t is a baseline for the timestamp t, m is a hyperparameter to define a buffer size for forming the baseline, C(f_t | x_i, y_i) evaluates a model confidence of the trained classifier f_t, P is a model exploration function, k is the size of the nearest neighborhood specified by the action vector a_t, x_i is a sample in the k-nearest neighborhood of the labeled synthetic sample x_syn in timestamp t, and y_i is a label for x_i. [0030] Clause 4: The method of any of clauses 1–3, wherein the actor loss function is defined according to the following Equation:

L_actor = −(1/N) · Σ_{i=1}^{N} Q(s_i, μ(s_i | θ1) | θ2)

where N is a number of transitions, μ(s_i | θ1) is a projected action for a state s_i, and Q(s_i, μ(s_i | θ1) | θ2) is an output of the critic network for the projected action μ(s_i | θ1) and state s_i, and wherein the critic loss function is defined according to the following Equation:

L_critic = (1/N) · Σ_{t=1}^{N} (y_t − Q(s_t, a_t | θ2))^2

where y_t = R(s_t, a_t) + γ·Q(s_{t+1}, μ(s_{t+1} | θ1) | θ2), μ(s_{t+1} | θ1) is an action specified by the actor network, and γ is a decay factor.
[0031] Clause 5: The method of any of clauses 1–4, wherein the plurality of source samples is associated with a plurality of transactions in a transaction processing network, wherein the plurality of labeled normal samples is associated with a plurality of non-fraudulent transactions of the plurality of transactions, and wherein the plurality of labeled anomaly samples is associated with a plurality of fraudulent transactions of the plurality of transactions. [0032] Clause 6: The method of any of clauses 1–5, further comprising: receiving, with the at least one processor, transaction data associated with a transaction currently being processed in the transaction processing network; processing, with the at least one processor, using the trained machine learning classifier, the transaction data to classify the transaction as a fraudulent or non-fraudulent transaction; and in response to classifying the transaction as a fraudulent transaction, denying, with the at least one processor, authorization of the transaction in the transaction processing network. [0033] Clause 7: The method of any of clauses 1–6, further comprising: before executing the training episode: training, with the at least one processor, using the training dataset X_train, the machine learning classifier f; and pre-computing, with the at least one processor, each k-nearest neighborhood for each source sample of the plurality of source samples in the training dataset X_train.
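The pre-computation in Clause 7 can be done once before any episode runs, so that each episode only looks up neighbor indices. A brute-force Euclidean sketch (at scale one would typically use a library such as scikit-learn's NearestNeighbors instead; `max_k` is an assumed cap on the neighborhood sizes the actor may choose):

```python
import numpy as np

def precompute_neighborhoods(X_train, max_k=10):
    """Pre-compute the max_k nearest neighbors of every source sample.

    Returns an (n, max_k) array of neighbor indices; episodes can then
    read off any k <= max_k without repeating the search.
    """
    X = np.asarray(X_train, dtype=float)
    # Pairwise squared distances via ||a - b||^2 = a^2 - 2ab + b^2.
    sq = (X ** 2).sum(axis=1)
    d2 = sq[:, None] - 2.0 * X @ X.T + sq[None, :]
    np.fill_diagonal(d2, np.inf)        # exclude each sample itself
    # Sort each row by distance and keep the max_k closest indices.
    return np.argsort(d2, axis=1)[:, :max_k]
```

Brute force is O(n^2) in memory and time, which is why a tree- or graph-based index is the usual choice for large training sets.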
[0034] Clause 8: A system, comprising: at least one processor programmed and/or configured to: obtain a training dataset X_train including a plurality of source samples including a plurality of labeled normal samples and a plurality of labeled anomaly samples; and execute a training episode by: (i) initializing a timestamp t; (ii) receiving, from an actor network μ of an actor critic framework including the actor network μ and a critic network Q, an action vector a_t for the timestamp t, wherein the actor network μ is configured to generate the action vector a_t based on a state s_t, wherein the state s_t is determined based on a current pair of source samples of the plurality of source samples, and wherein the action vector a_t includes a size of a nearest neighborhood k, a composition ratio λ, a number of oversampling o, and a termination probability p; (iii) combining the current pair of source samples according to the composition ratio λ and the number of oversampling o to generate a labeled synthetic sample x_syn associated with a label y_syn; (iv) training, using the labeled synthetic sample x_syn and the label y_syn, a machine learning classifier f; (v) obtaining, based on the size of the nearest neighborhood k, source samples in the k-nearest neighborhood of the labeled synthetic sample x_syn; (vi) generating, with the machine learning classifier f, for the source samples in the k-nearest neighborhood of the labeled synthetic sample x_syn and a subset of the plurality of source samples of the training dataset X_train in a validation dataset X_val, a plurality of classifier outputs; (vii) selecting, from the source samples in the k-nearest neighborhood of the labeled synthetic sample x_syn, a next pair of source samples; (viii) storing, in a memory buffer, the state s_t, the action vector a_t, a next state s_t+1, and a reward r_t, wherein the next state s_t+1 is determined based on the next pair of
source samples, and wherein the reward r_t is determined based on the plurality of classifier outputs; (ix) determining whether the termination probability p satisfies a termination threshold; (x) in response to determining that the termination probability p fails to satisfy the termination threshold, incrementing the timestamp t and, for a number of training steps S: training the critic network Q according to a critic loss function that depends on the state s_t, the action vector a_t, and the reward r_t; and training the actor network μ according to an actor loss function that depends on an output of the critic network, and after training the actor network μ and the critic network Q for the number of training steps S, returning to step (ii) with the next pair of source samples as the current pair of source samples; (xi) in response to determining that the termination probability p satisfies the termination threshold, determining whether the number of training episodes executed satisfies a threshold number of training episodes; (xii) in response to determining that the number of training episodes executed fails to satisfy the threshold number of training episodes, returning to step (i) to execute a next training episode; and (xiii) in response to determining that the number of training episodes executed satisfies the threshold number of training episodes, providing the machine learning classifier f. [0035] Clause 9: The system of clause 8, wherein the current pair of source samples are combined according to the composition ratio λ to generate the labeled synthetic sample x_syn according to the following Equations:

x_syn = λ·x0 + (1 − λ)·x1

y_syn = y0 if λ ≥ 0.5, otherwise y1

where x0 is a first sample of the current pair of samples, x1 is a second sample of the current pair of samples, y_syn is a hard label for the labeled synthetic sample x_syn, y0 is a first hard label value, and y1 is a second hard label value.
[0036] Clause 10: The system of clauses 8 or 9, wherein the reward r_t is determined according to the following Equations:

ΔM(f_t) = M(f_t, X_val, y_val) − b_t

b_t = (1/m) · Σ_{j=t−m}^{t−1} M(f_j, X_val, y_val)

C(f_t | x_i, y_i) = (1/k) · Σ_{i=1}^{k} P(y_i | x_i)

r_t = ΔM(f_t) + C(f_t | x_i, y_i)

where M is an evaluation metric, ΔM(f_t) measures a performance improvement of the trained classifier f_t, X_val is the validation data set, y_val is a label set for the validation data set, b_t is a baseline for the timestamp t, m is a hyperparameter to define a buffer size for forming the baseline, C(f_t | x_i, y_i) evaluates a model confidence of the trained classifier f_t, P is a model exploration function, k is the size of the nearest neighborhood specified by the action vector a_t, x_i is a sample in the k-nearest neighborhood of the labeled synthetic sample x_syn in timestamp t, and y_i is a label for x_i. [0037] Clause 11: The system of any of clauses 8–10, wherein the actor loss function is defined according to the following Equation:

L_actor = −(1/N) · Σ_{i=1}^{N} Q(s_i, μ(s_i | θ1) | θ2)

where N is a number of transitions, μ(s_i | θ1) is a projected action for a state s_i, and Q(s_i, μ(s_i | θ1) | θ2) is an output of the critic network for the projected action μ(s_i | θ1) and state s_i, and wherein the critic loss function is defined according to the following Equation:

L_critic = (1/N) · Σ_{t=1}^{N} (y_t − Q(s_t, a_t | θ2))^2

where y_t = R(s_t, a_t) + γ·Q(s_{t+1}, μ(s_{t+1} | θ1) | θ2), μ(s_{t+1} | θ1) is an action specified by the actor network, and γ is a decay factor. [0038] Clause 12: The system of any of clauses 8–11, wherein the plurality of source samples is associated with a plurality of transactions in a transaction processing network, wherein the plurality of labeled normal samples is associated with a plurality of non-fraudulent transactions of the plurality of transactions, and wherein the plurality of labeled anomaly samples is associated with a plurality of fraudulent transactions of the plurality of transactions.
[0039] Clause 13: The system of any of clauses 8–12, wherein the at least one processor is further programmed and/or configured to: receive transaction data associated with a transaction currently being processed in the transaction processing network; process, using the trained machine learning classifier, the transaction data to classify the transaction as a fraudulent or non-fraudulent transaction; and in response to classifying the transaction as a fraudulent transaction, deny authorization of the transaction in the transaction processing network. [0040] Clause 14: The system of any of clauses 8–13, wherein the at least one processor is further programmed and/or configured to: before executing the training episode: train, using the training dataset X_train, the machine learning classifier f; and pre-compute each k-nearest neighborhood for each source sample of the plurality of source samples in the training dataset X_train. [0041] Clause 15: A computer program product including a non-transitory computer readable medium including program instructions which, when executed by at least one processor, cause the at least one processor to: obtain a training dataset X_train including a plurality of source samples including a plurality of labeled normal samples and a plurality of labeled anomaly samples; and execute a training episode by: (i) initializing a timestamp t; (ii) receiving, from an actor network μ of an actor critic framework including the actor network μ and a critic network Q, an action vector a_t for the timestamp t, wherein the actor network μ is configured to generate the action vector a_t based on a state s_t, wherein the state s_t is determined based on a current pair of source samples of the plurality of source samples, and wherein the action vector a_t includes a size of a nearest neighborhood k, a composition ratio λ, a number of oversampling o, and a termination probability p;
(iii) combining the current pair of source samples according to the composition ratio λ and the number of oversampling o to generate a labeled synthetic sample x_syn associated with a label y_syn; (iv) training, using the labeled synthetic sample x_syn and the label y_syn, a machine learning classifier f; (v) obtaining, based on the size of the nearest neighborhood k, source samples in the k-nearest neighborhood of the labeled synthetic sample x_syn; (vi) generating, with the machine learning classifier f, for the source samples in the k-nearest neighborhood of the labeled synthetic sample x_syn and a subset of the plurality of source samples of the training dataset X_train in a validation dataset X_val, a plurality of classifier outputs; (vii) selecting, from the source samples in the k-nearest neighborhood of the labeled synthetic sample x_syn, a next pair of source samples; (viii) storing, in a memory buffer, the state s_t, the action vector a_t, a next state s_t+1, and a reward r_t, wherein the next state s_t+1 is determined based on the next pair of source samples, and wherein the reward r_t is determined based on the plurality of classifier outputs; (ix) determining whether the termination probability p satisfies a termination threshold; (x) in response to determining that the termination probability p fails to satisfy the termination threshold, incrementing the timestamp t and, for a number of training steps S: training the critic network Q according to a critic loss function that depends on the state s_t, the action vector a_t, and the reward r_t; and training the actor network μ according to an actor loss function that depends on an output of the critic network, and after training the actor network μ and the critic network Q for the number of training steps S, returning to step (ii) with the next pair of source samples as the current pair of source samples; (xi) in response to determining that the termination probability p satisfies the termination threshold, determining
whether the number of training episodes executed satisfies a threshold number of training episodes; (xii) in response to determining that the number of training episodes executed fails to satisfy the threshold number of training episodes, returning to step (i) to execute a next training episode; and (xiii) in response to determining that the number of training episodes executed satisfies the threshold number of training episodes, providing the machine learning classifier f. [0042] Clause 16: The computer program product of clause 15, wherein the current pair of source samples are combined according to the composition ratio λ to generate the labeled synthetic sample x_syn according to the following Equations:

x_syn = λ·x0 + (1 − λ)·x1

y_syn = y0 if λ ≥ 0.5, otherwise y1

where x0 is a first sample of the current pair of samples, x1 is a second sample of the current pair of samples, y_syn is a hard label for the labeled synthetic sample x_syn, y0 is a first hard label value, and y1 is a second hard label value. [0043] Clause 17: The computer program product of clauses 15 or 16, wherein the reward r_t is determined according to the following Equations:

ΔM(f_t) = M(f_t, X_val, y_val) − b_t

b_t = (1/m) · Σ_{j=t−m}^{t−1} M(f_j, X_val, y_val)

C(f_t | x_i, y_i) = (1/k) · Σ_{i=1}^{k} P(y_i | x_i)

r_t = ΔM(f_t) + C(f_t | x_i, y_i)

where M is an evaluation metric, ΔM(f_t) measures a performance improvement of the trained classifier f_t, X_val is the validation data set, y_val is a label set for the validation data set, b_t is a baseline for the timestamp t, m is a hyperparameter to define a buffer size for forming the baseline, C(f_t | x_i, y_i) evaluates a model confidence of the trained classifier f_t, P is a model exploration function, k is the size of the nearest neighborhood specified by the action vector a_t, x_i is a sample in the k-nearest neighborhood of the labeled synthetic sample x_syn in timestamp t, and y_i is a label for x_i.
[0044] Clause 18: The computer program product of any of clauses 15–17, wherein the actor loss function is defined according to the following Equation:

L_actor = −(1/N) · Σ_{i=1}^{N} Q(s_i, μ(s_i | θ1) | θ2)

where N is a number of transitions, μ(s_i | θ1) is a projected action for a state s_i, and Q(s_i, μ(s_i | θ1) | θ2) is an output of the critic network for the projected action μ(s_i | θ1) and state s_i, and wherein the critic loss function is defined according to the following Equation:

L_critic = (1/N) · Σ_{t=1}^{N} (y_t − Q(s_t, a_t | θ2))^2

where y_t = R(s_t, a_t) + γ·Q(s_{t+1}, μ(s_{t+1} | θ1) | θ2), μ(s_{t+1} | θ1) is an action specified by the actor network, and γ is a decay factor. [0045] Clause 19: The computer program product of any of clauses 15–18, wherein the plurality of source samples is associated with a plurality of transactions in a transaction processing network, wherein the plurality of labeled normal samples is associated with a plurality of non-fraudulent transactions of the plurality of transactions, and wherein the plurality of labeled anomaly samples is associated with a plurality of fraudulent transactions of the plurality of transactions. [0046] Clause 20: The computer program product of any of clauses 15–19, wherein the program instructions, when executed by at least one processor, further cause the at least one processor to: receive transaction data associated with a transaction currently being processed in the transaction processing network; process, using the trained machine learning classifier, the transaction data to classify the transaction as a fraudulent or non-fraudulent transaction; and in response to classifying the transaction as a fraudulent transaction, deny authorization of the transaction in the transaction processing network.
[0047] Clause 21: The computer program product of any of clauses 15–20, wherein the program instructions, when executed by at least one processor, further cause the at least one processor to: before executing the training episode: train, using the training dataset X_train, the machine learning classifier f; and pre-compute each k-nearest neighborhood for each source sample of the plurality of source samples in the training dataset X_train. [0048] These and other features and characteristics of the present disclosure, as well as the methods of operation and functions of the related elements of structures and the combination of parts and economies of manufacture, will become more apparent upon consideration of the following description and the appended claims with reference to the accompanying drawings, all of which form a part of this specification, wherein like reference numerals designate corresponding parts in the various figures. It is to be expressly understood, however, that the drawings are for the purpose of illustration and description only and are not intended as a definition of limits. As used in the specification and the claims, the singular form of "a," "an," and "the" include plural referents unless the context clearly dictates otherwise. BRIEF DESCRIPTION OF THE DRAWINGS AND APPENDICES [0049] Additional advantages and details are explained in greater detail below with reference to the exemplary embodiments that are illustrated in the accompanying schematic figures, in which: [0050] FIG. 1 is a diagram of non-limiting embodiments or aspects of an environment in which systems, devices, products, apparatus, and/or methods, described herein, may be implemented; [0051] FIG. 2 is a diagram of non-limiting embodiments or aspects of components of one or more devices and/or one or more systems of FIG. 1; [0052] FIGS.
3A-3C are a flowchart of non-limiting embodiments or aspects of a process for synthetic oversampling for boosting supervised anomaly detection; [0053] FIG. 4 is a pair of graphs providing a comparison between randomly mixed-up anomalies (left graph) and anomalies randomly mixed up with normal samples (right graph); [0054] FIG. 5 illustrates non-limiting embodiments or aspects of an iterative mix-up process; [0055] FIG. 6 illustrates non-limiting embodiments or aspects of a network architecture for synthetic oversampling for boosting supervised anomaly detection; [0056] FIG. 7 illustrates an implementation of non-limiting embodiments or aspects of a training procedure of a process for synthetic oversampling for boosting supervised anomaly detection; [0057] FIG. 8 is a table summarizing statistics of datasets; [0058] FIG. 9 is a table summarizing a horizontal comparison between a framework according to non-limiting embodiments or aspects of the present disclosure and representative data augmentation methods; [0059] FIG. 10 is a table summarizing a vertical comparison between a framework according to non-limiting embodiments or aspects of the present disclosure and label-informed detection algorithms; and [0060] FIG. 11 is a table of macro-averaged scores of a framework according to non-limiting embodiments or aspects of the present disclosure. DESCRIPTION [0061] It is to be understood that the present disclosure may assume various alternative variations and step sequences, except where expressly specified to the contrary. It is also to be understood that the specific devices and processes illustrated in the attached drawings, and described in the following specification, are simply exemplary and non-limiting embodiments or aspects. Hence, specific dimensions and other physical characteristics related to the embodiments or aspects disclosed herein are not to be considered as limiting.
[0062] No aspect, component, element, structure, act, step, function, instruction, and/or the like used herein should be construed as critical or essential unless explicitly described as such. Also, as used herein, the articles “a” and “an” are intended to include one or more items, and may be used interchangeably with “one or more” and “at least one.” Furthermore, as used herein, the term “set” is intended to include one or more items (e.g., related items, unrelated items, a combination of related and unrelated items, etc.) and may be used interchangeably with “one or more” or “at least one.” Where only one item is intended, the term “one” or similar language is used. Also, as used herein, the terms “has,” “have,” “having,” or the like are intended to be open-ended terms. Further, the phrase “based on” is intended to mean “based at least partially on” unless explicitly stated otherwise. [0063] As used herein, the term “communication” may refer to the reception, receipt, transmission, transfer, provision, and/or the like, of data (e.g., information, signals, messages, instructions, commands, and/or the like). For one unit (e.g., a device, a system, a component of a device or system, combinations thereof, and/or the like) to be in communication with another unit means that the one unit is able to directly or indirectly receive information from and/or transmit information to the other unit. This may refer to a direct or indirect connection (e.g., a direct communication connection, an indirect communication connection, and/or the like) that is wired and/or wireless in nature. Additionally, two units may be in communication with each other even though the information transmitted may be modified, processed, relayed, and/or routed between the first and second unit.
For example, a first unit may be in communication with a second unit even though the first unit passively receives information and does not actively transmit information to the second unit. As another example, a first unit may be in communication with a second unit if at least one intermediary unit processes information received from the first unit and communicates the processed information to the second unit. [0064] It will be apparent that systems and/or methods, described herein, can be implemented in different forms of hardware, software, or a combination of hardware and software. The actual specialized control hardware or software code used to implement these systems and/or methods is not limiting of the implementations. Thus, the operation and behavior of the systems and/or methods are described herein without reference to specific software code, it being understood that software and hardware can be designed to implement the systems and/or methods based on the description herein. [0065] Some non-limiting embodiments or aspects are described herein in connection with thresholds. As used herein, satisfying a threshold may refer to a value being greater than the threshold, more than the threshold, higher than the threshold, greater than or equal to the threshold, less than the threshold, fewer than the threshold, lower than the threshold, less than or equal to the threshold, equal to the threshold, etc. [0066] As used herein, the term “transaction service provider” may refer to an entity that receives transaction authorization requests from merchants or other entities and provides guarantees of payment, in some cases through an agreement between the transaction service provider and an issuer institution. For example, a transaction service provider may include a payment network such as Visa® or any other entity that processes transactions. 
The term “transaction processing system” may refer to one or more computing devices operated by or on behalf of a transaction service provider, such as a transaction processing server executing one or more software applications. A transaction processing system may include one or more processors and, in some non-limiting embodiments, may be operated by or on behalf of a transaction service provider. [0067] As used herein, the term “account identifier” may include one or more primary account numbers (PANs), tokens, or other identifiers associated with a customer account. The term “token” may refer to an identifier that is used as a substitute or replacement identifier for an original account identifier, such as a PAN. Account identifiers may be alphanumeric or any combination of characters and/or symbols. Tokens may be associated with a PAN or other original account identifier in one or more data structures (e.g., one or more databases and/or the like) such that they may be used to conduct a transaction without directly using the original account identifier. In some examples, an original account identifier, such as a PAN, may be associated with a plurality of tokens for different individuals or purposes. [0068] As used herein, the terms “issuer institution,” “portable financial device issuer,” “issuer,” or “issuer bank” may refer to one or more entities that provide one or more accounts to a user (e.g., a customer, a consumer, an entity, an organization, and/or the like) for conducting transactions (e.g., payment transactions), such as initiating credit card payment transactions and/or debit card payment transactions. For example, an issuer institution may provide an account identifier, such as a PAN, to a user that uniquely identifies one or more accounts associated with that user.
The account identifier may be embodied on a portable financial device, such as a physical financial instrument (e.g., a payment card), and/or may be electronic and used for electronic payments. In some non-limiting embodiments or aspects, an issuer institution may be associated with a bank identification number (BIN) that uniquely identifies the issuer institution. As used herein, the term “issuer institution system” may refer to one or more computer systems operated by or on behalf of an issuer institution, such as a server computer executing one or more software applications. For example, an issuer institution system may include one or more authorization servers for authorizing a payment transaction. [0069] As used herein, the term “merchant” may refer to an individual or entity that provides goods and/or services, or access to goods and/or services, to users (e.g., customers) based on a transaction (e.g., a payment transaction). As used herein, the terms “merchant” or “merchant system” may also refer to one or more computer systems, computing devices, and/or software applications operated by or on behalf of a merchant, such as a server computer executing one or more software applications. A “point-of-sale (POS) system,” as used herein, may refer to one or more computers and/or peripheral devices used by a merchant to engage in payment transactions with users, including one or more card readers, near-field communication (NFC) receivers, radio frequency identification (RFID) receivers, and/or other contactless transceivers or receivers, contact-based receivers, payment terminals, computers, servers, input devices, and/or other like devices that can be used to initiate a payment transaction. A POS system may be part of a merchant system. A merchant system may also include a merchant plug-in for facilitating online, Internet-based transactions through a merchant webpage or software application.
A merchant plug-in may include software that runs on a merchant server or is hosted by a third-party for facilitating such online transactions. [0070] As used herein, the term “mobile device” may refer to one or more portable electronic devices configured to communicate with one or more networks. As an example, a mobile device may include a cellular phone (e.g., a smartphone or standard cellular phone), a portable computer (e.g., a tablet computer, a laptop computer, etc.), a wearable device (e.g., a watch, pair of glasses, lens, clothing, and/or the like), a personal digital assistant (PDA), and/or other like devices. The terms “client device” and “user device,” as used herein, refer to any electronic device that is configured to communicate with one or more servers or remote devices and/or systems. A client device or user device may include a mobile device, a network-enabled appliance (e.g., a network-enabled television, refrigerator, thermostat, and/or the like), a computer, a POS system, and/or any other device or system capable of communicating with a network. [0071] As used herein, the term “computing device” may refer to one or more electronic devices configured to process data. A computing device may, in some examples, include the necessary components to receive, process, and output data, such as a processor, a display, a memory, an input device, a network interface, and/or the like. A computing device may be a mobile device. As an example, a mobile device may include a cellular phone (e.g., a smartphone or standard cellular phone), a portable computer, a wearable device (e.g., watches, glasses, lenses, clothing, and/or the like), a PDA, and/or other like devices. A computing device may also be a desktop computer or other form of non-mobile computer.
[0072] As used herein, the term “payment device” may refer to a portable financial device, an electronic payment device, a payment card (e.g., a credit or debit card), a gift card, a smartcard, smart media, a payroll card, a healthcare card, a wristband, a machine-readable medium containing account information, a keychain device or fob, an RFID transponder, a retailer discount or loyalty card, a cellular phone, an electronic wallet mobile application, a PDA, a pager, a security card, a computer, an access card, a wireless terminal, a transponder, and/or the like. In some non-limiting embodiments or aspects, the payment device may include volatile or nonvolatile memory to store information (e.g., an account identifier, a name of the account holder, and/or the like). [0073] As used herein, the term “server” and/or “processor” may refer to or include one or more computing devices that are operated by or facilitate communication and processing for multiple parties in a network environment, such as the Internet, although it will be appreciated that communication may be facilitated over one or more public or private network environments and that various other arrangements are possible. Further, multiple computing devices (e.g., servers, POS devices, mobile devices, etc.) directly or indirectly communicating in the network environment may constitute a “system.” Reference to “a server” or “a processor,” as used herein, may refer to a previously-recited server and/or processor that is recited as performing a previous step or function, a different server and/or processor, and/or a combination of servers and/or processors. For example, as used in the specification and the claims, a first server and/or a first processor that is recited as performing a first step or function may refer to the same or different server and/or a processor recited as performing a second step or function.
[0074] As used herein, the term “acquirer” may refer to an entity licensed by the transaction service provider and/or approved by the transaction service provider to originate transactions using a portable financial device of the transaction service provider. Acquirer may also refer to one or more computer systems operated by or on behalf of an acquirer, such as a server computer executing one or more software applications (e.g., “acquirer server”). An “acquirer” may be a merchant bank, or in some cases, the merchant system may be the acquirer. The transactions may include original credit transactions (OCTs) and account funding transactions (AFTs). The acquirer may be authorized by the transaction service provider to sign merchants or service providers to originate transactions using a portable financial device of the transaction service provider. The acquirer may contract with payment facilitators to enable the facilitators to sponsor merchants. The acquirer may monitor compliance of the payment facilitators in accordance with regulations of the transaction service provider. The acquirer may conduct due diligence of payment facilitators and ensure that proper due diligence occurs before signing a sponsored merchant. Acquirers may be liable for all transaction service provider programs that they operate or sponsor. Acquirers may be responsible for the acts of their payment facilitators and the merchants they or their payment facilitators sponsor.
[0075] As used herein, the term “payment gateway” may refer to an entity and/or a payment processing system operated by or on behalf of such an entity (e.g., a merchant service provider, a payment service provider, a payment facilitator, a payment facilitator that contracts with an acquirer, a payment aggregator, and/or the like), which provides payment services (e.g., transaction service provider payment services, payment processing services, and/or the like) to one or more merchants. The payment services may be associated with the use of portable financial devices managed by a transaction service provider. As used herein, the term “payment gateway system” may refer to one or more computer systems, computer devices, servers, groups of servers, and/or the like operated by or on behalf of a payment gateway. [0076] As used herein, the terms “authenticating system” and “authentication system” may refer to one or more computing devices that authenticate a user and/or an account, such as but not limited to a transaction processing system, merchant system, issuer system, payment gateway, a third-party authenticating service, and/or the like. [0077] As used herein, the terms “request,” “response,” “request message,” and “response message” may refer to one or more messages, data packets, signals, and/or data structures used to communicate data between two or more components or units. [0078] As used herein, the term “application programming interface” (API) may refer to computer code that allows communication between different systems or (hardware and/or software) components of systems. For example, an API may include function calls, functions, subroutines, communication protocols, fields, and/or the like usable and/or accessible by other systems or other (hardware and/or software) components of systems. 
[0079] As used herein, the term “user interface” or “graphical user interface” refers to a generated display, such as one or more graphical user interfaces (GUIs) with which a user may interact, either directly or indirectly (e.g., through a keyboard, mouse, touchscreen, etc.). [0080] Anomaly detection is widely adopted in a broad range of domains, such as intrusion detection in cybersecurity, fault detection in manufacturing, and fraud detection in finance. Anomalies in data often come from diverse factors, resulting in diverse behaviors of anomalies with distinctly dissimilar features. For example, different fraudulent transactions can embody entirely dissimilar behaviors. By definition, anomalies often occur rarely, and unpredictably, in a dataset. Therefore, it is difficult, if not impossible, to obtain well-labeled training data. For example, given a set of transactions, distinguishing fraudulent transactions from normal transactions costs much less effort than categorizing the fraudulent transactions, especially when there is no clear definition of the behavior categories. This often results in diverse semantic meanings for a limited amount of single-type label information and is thus unsuitable for supervised training of an anomaly detector. [0081] Still, label information plays a significant role in enhancing detection performance. To better exploit sparse label information, existing efforts focus on weakly/semi-supervised learning and data augmentation to overcome the label sparsity issue. Weakly/semi-supervised learning methods seek to extract extra information from the given labels with tailored loss functions or scoring functions.
Though weakly/semi-supervised learning is capable of capturing label information, such methods focus on learning from the limited anomaly samples and therefore ignore the supervisory signals of possible anomalies in the unlabeled data. To overcome this limitation, some data augmentation approaches focus on synthetic oversampling to synthesize minority classes (e.g., anomalies) to create a balanced dataset for training supervised classifiers. However, the behaviors of anomalies are diverse, and synthesizing anomalies based on two arbitrary anomalies may introduce noise into the training dataset. [0082] For example, there are mainly two existing strategies to exploit limited label information for anomaly detection problems: label-informed anomaly detection and data augmentation. [0083] Weakly supervised and semi-supervised anomaly detection are the two main existing strategies for tackling the problem under the scenario in which either labeled normal samples or labeled anomaly samples are accessible. To leverage the large number of labeled normal samples, SAnDCat selects top-K representative samples from the dataset as a reference for evaluating anomaly scores based on a model learned from pairwise distances between labeled normal instances. On the other hand, to exploit a limited number of labeled anomalies, DevNet enforces the anomaly scores of individual data instances to fit a one-sided Gaussian distribution for leveraging prior knowledge of labeled normal samples and anomalies. PRO introduces a two-stream ordinal-regression network to learn the pair-wise relations between two data samples, an approach that is free of assumptions on the probability distribution of the anomaly scores. [0084] Recently, several endeavors further generalize the label-informed anomaly detection problem into a semi-supervised classification setting in which limited numbers of both normal and anomaly samples are accessible.
The main underlying assumption is that similar points are likely to be of the same class and are therefore densely distributed within the same high-density region of a low-dimensional feature space. XGBOD extracts feature representations based on the anomaly scores of multiple unsupervised anomaly detectors for training a supervised gradient boosting tree classifier. DeepSAD points out that the semi-supervision assumptions only hold for normal samples and further develops a one-class classification framework to cluster labeled normal samples while maximizing the distance between the labeled anomalies and the cluster in the high-dimensional space. However, weakly/semi-supervised learning methods focus on modeling the given label information without considering the relations between two labeled instances. Therefore, it is infeasible to generalize the label information when anomaly behaviors are diverse. By considering correlations between labeled samples and generating beneficial training data correspondingly, non-limiting embodiments or aspects of the present disclosure may be able to generalize the label information for training arbitrary classifiers. [0085] Data augmentation has been extensively studied for a wide range of data types to enlarge training data size and generalize model decision boundaries for improving performance and robustness. To tackle the imbalanced classification problem, there are mainly two different categories: algorithm-wise and data-wise methods. Algorithm-wise approaches directly tailor the loss function of classification models to better fit the data distribution. However, modifying the loss function only facilitates fitting the label information well and may suffer from generalizing label information when the behaviors of the minority class are diverse. Data-wise approaches generate new samples into the datasets for minority classes or remove existing samples from the datasets for majority classes.
Synthetic Minority Oversampling Technique (SMOTE) generates new minority samples by linearly combining a minority sample with its k-nearest minority instances with a manually selected neighborhood size and number of synthetic instances. A series of advancements on SMOTE further introduce density estimation and data distribution-aware sampling to tackle the class imbalance problem without manual selection of the neighborhood size and number of synthetic instances. [0086] Instead of conducting synthetic data sampling on a single class, Mixup achieved significant improvements in the image domain by synthesizing data points through linearly combining two random samples from different classes with a given combination ratio and creating soft labels for training the neural networks. Because Mixup assumes that all the classes are uniformly distributed for the image classification task, it is not applicable when the class distribution is skewed. To tackle this limitation, Mix-Boost introduces a skewed probability distribution to sample the combination ratio for linearly combining two heterogeneous samples. However, the imbalanced classification problem assumes that minority samples are clustered within the feature space, which may not be true when the minority class consists of anomalies. To this end, non-limiting embodiments or aspects of the present disclosure may consider the attributes of a pair of normal and anomaly samples for jointly identifying the best k-nearest neighborhood and the combination ratio. Non-limiting embodiments or aspects of the present disclosure may generate the synthetic samples with the combination ratio and identify the next pair of samples within the k-nearest neighborhood. In this way, non-limiting embodiments or aspects of the present disclosure may be capable of exploiting the label information while exploring the diversely distributed anomalies.
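The SMOTE-style and Mixup-style combinations summarized above can be sketched as follows. This is a minimal illustration, not the claimed method: the function names and the pure-Python per-dimension arithmetic are assumptions, and details of the cited algorithms (neighbor search, ratio distributions) are omitted.

```python
# Hedged sketch of the two synthetic-sampling styles discussed above.
import random

def smote_sample(x, minority_neighbors, rng=random):
    """SMOTE-style oversampling: move a minority sample a random fraction
    of the way toward one of its k-nearest minority neighbors."""
    nbr = rng.choice(minority_neighbors)
    lam = rng.random()
    return [xi + lam * (ni - xi) for xi, ni in zip(x, nbr)]

def mixup_sample(x_a, y_a, x_b, y_b, lam):
    """Mixup-style synthesis: linearly combine two samples from different
    classes at ratio lam and build a soft label with the same ratio."""
    x_syn = [lam * a + (1.0 - lam) * b for a, b in zip(x_a, x_b)]
    y_syn = lam * y_a + (1.0 - lam) * y_b
    return x_syn, y_syn
```

For example, mixing an anomaly `[2.0, 2.0]` (label 1) with a normal sample `[0.0, 0.0]` (label 0) at ratio 0.5 yields the midpoint with soft label 0.5, which illustrates why an uncontrolled ratio can place anomaly-labeled points in normal regions.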
[0087] Motivated by the recent success of domain-agnostic data mix up techniques in the image domain and imbalanced classification problems, a preliminary study comparing the random mix up of anomalies with the random mix up of anomalies and normal samples was conducted on a toy dataset. The dataset simulates the diverse behaviors of anomalies. As shown in FIG. 4, which provides a comparison between random mix up of anomalies (left graph) and random mix up of anomalies with normal samples (right graph), in which data points in black are synthetic samples and the edge colors are their assigned labels (red as anomaly and blue as normal sample), randomly mixing up anomalies creates synthetic samples that appear no different from normal samples yet are labeled as anomalies (degrading the F1-score from 0.73 to 0.66). In contrast, with proper control of the composition ratio of anomalies and normal samples, synthetic samples generated by mixing up anomalies with normal samples are apparently less noisy and therefore beneficial to the underlying classifier (improving the F1-score from 0.73 to 0.86). Nevertheless, it is non-trivial to select the proper composition ratio and, therefore, different pairs of anomalies and normal samples may require different mix-up strategies toward quality synthetic samples. [0088] To address the issue above, it may be necessary to develop an integrated framework to generalize the knowledge of labeled anomalies for arbitrary classifiers with the goal of advancing supervised anomaly detection. Specifically, given a set of labeled samples, a goal may be to identify a data augmentation strategy to mix up labeled normal samples with anomalies. In this way, the prior knowledge of label information can be generalized and the resulting synthetic samples can be adopted for training the classifiers toward maximal performance improvements.
To achieve the goal, non-limiting embodiments or aspects of the present disclosure may learn a sample-wise policy which maps the feature attributes of each data sample into a data augmentation strategy. Meanwhile, the status of model training can be used as a reference to guide the data augmentation. [0089] However, it may be very challenging to develop such a framework for the following reasons. First, as existing data augmentation techniques create synthetic samples only according to feature distribution, there is no existing technique to simultaneously consider feature distribution and model status for synthesizing new samples. Second, even though the model status can be considered to create synthetic samples, the model may not necessarily have converged when synthesizing samples. In this way, the generated synthetic samples may not be beneficial when the model has not converged yet. Third, the augmentation strategy may be composed of discrete and continuous values, and learning such a mapping may be challenging. For example, the composition ratio is a continuous number, whereas the number of oversampling is a discrete number.
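One common way to adapt a continuous-control actor to such a mixed discrete-continuous action space is to have the actor emit every component in [0, 1] and discretize the components that must be integers. The decoding rule and the bounds `k_max`/`o_max` below are illustrative assumptions, not the mechanism claimed in this disclosure.

```python
def decode_action(raw, k_max=20, o_max=10):
    """Map a raw actor output (four values squashed into [0, 1]) onto a
    mixed action: neighborhood size k and oversampling count o are
    discretized by rounding, while the composition ratio and termination
    probability stay continuous. Bounds are illustrative assumptions."""
    k = max(1, round(raw[0] * k_max))    # discrete neighborhood size
    ratio = raw[1]                       # continuous composition ratio
    o = max(1, round(raw[2] * o_max))    # discrete number of oversampling
    term_p = raw[3]                      # continuous termination probability
    return k, ratio, o, term_p
```

A design note: rounding keeps the actor network fully differentiable on its own outputs, pushing the non-differentiable discretization outside the policy, which is one standard workaround for discrete-continuous spaces.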
[0090] Non-limiting embodiments or aspects of the present disclosure may obtain a training dataset Xtrain including a plurality of source samples including a plurality of labeled normal samples and a plurality of labeled anomaly samples; and execute a training episode by: (i) initializing a timestamp t; (ii) receiving, from an actor network of an actor critic framework including the actor network and a critic network, an action vector at for the timestamp t, wherein the actor network is configured to generate the action vector at based on a state st, wherein the state st is determined based on a current pair of source samples of the plurality of source samples, and wherein the action vector at includes a size of a nearest neighborhood k, a composition ratio, a number of oversampling, and a termination probability ε; (iii) combining the current pair of source samples according to the composition ratio and the number of oversampling to generate a labeled synthetic sample xsyn associated with a label ysyn; (iv) training, using the labeled synthetic sample xsyn and the label ysyn, a machine learning classifier; (v) obtaining, based on the size of a nearest neighborhood k, source samples in the k-nearest neighborhood of the labeled synthetic sample xsyn; (vi) generating, with the machine learning classifier, for the source samples in the k-nearest neighborhood of the labeled synthetic sample xsyn and a subset of the plurality of source samples of the training dataset Xtrain in a validation dataset Xval, a plurality of classifier outputs; (vii) selecting, from the source samples in the k-nearest neighborhood of the labeled synthetic sample xsyn, a next pair of source samples; (viii) storing, in a memory buffer, the state st, the action vector at, a next state st+1, and a reward rt, wherein the next state st+1 is determined based on the next pair of source samples, and wherein
the reward rt is determined based on the plurality of classifier outputs; (ix) determining whether the termination probability ε satisfies a termination threshold; (x) in response to determining that the termination probability ε fails to satisfy the termination threshold, incrementing the timestamp t and, for a number of training steps S: training the critic network according to a critic loss function that depends on the state st, the action vector at, and the reward rt; and training the actor network according to an actor loss function that depends on an output of the critic network, and after training the actor network and the critic network for the number of training steps S, returning to step (ii) with the next pair of source samples as the current pair of source samples; (xi) in response to determining that the termination probability ε satisfies the termination threshold, determining whether the number of training episodes executed satisfies a threshold number of training episodes; (xii) in response to determining that the number of training episodes executed fails to satisfy the threshold number of training episodes, returning to step (i) to execute a next training episode; and (xiii) in response to determining that the number of training episodes executed satisfies the threshold number of training episodes, providing the machine learning classifier. [0091] In this way, non-limiting embodiments or aspects of the present disclosure may formulate a synthetic oversampling procedure into a Markov decision process and/or tailor an exploratory reward function for learning a data augmenter through exploring an uncertainty of an underlying supervised classifier.
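The control flow of the training episode enumerated in paragraph [0090] can be sketched structurally as follows. All callables (`actor`, `classifier_update`, `reward_fn`, `neighbors_fn`) are placeholders standing in for the actor network, classifier training, reward computation, and k-nearest-neighbor lookup; the termination rule and the single-sample simplification are assumptions for illustration, not the claimed implementation.

```python
def run_episode(x_norm, x_anom, actor, classifier_update, reward_fn,
                neighbors_fn, term_threshold=0.5, max_steps=50):
    """Structural sketch of one training episode as enumerated above."""
    buffer = []                              # (viii) memory buffer
    state = (x_norm, x_anom)                 # state from the current pair
    for t in range(max_steps):               # (i) timestamp t
        k, ratio, o, term_p = actor(state)   # (ii) action vector
        # (iii) combine the pair at the composition ratio; for brevity one
        # synthetic sample stands in for the `o` oversampled copies
        x_syn = [ratio * a + (1.0 - ratio) * n
                 for n, a in zip(state[0], state[1])]
        classifier_update(x_syn)             # (iv) train the classifier
        nbrs = neighbors_fn(x_syn, k)        # (v) k-nearest neighborhood
        next_state = (nbrs[0], nbrs[-1])     # (vii) next pair of samples
        r = reward_fn()                      # reward from classifier outputs
        buffer.append((state, (k, ratio, o, term_p), next_state, r))  # (viii)
        if term_p >= term_threshold:         # (ix)/(xi) termination check
            break
        state = next_state                   # continue from the next pair
    return buffer
```

The actor/critic updates of step (x) would run between iterations on mini-batches drawn from `buffer`; they are omitted here to keep the skeleton focused on the episode loop itself.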
For example, by traversing through the feature space of the original dataset with the guidance of model performance and model uncertainty, the generated synthetic samples may follow the original data distribution and/or contain information that is not reflected in the original dataset but may be beneficial to improve the model performance. For example, non-limiting embodiments or aspects of the present disclosure may train a “Data Mixer” that generalizes label information into synthetic data points for training the classifier. As an example, in each step, a pair of data samples with different labels may be sampled as the input of the data mixer and/or a “mix up” or composition ratio may be output to create synthetic samples for the training data, and/or an output k of the data mixer may be leveraged to decide a next pair of samples from a k-nearest neighborhood of the created synthetic sample. In the meantime, an ε output by the data mixer may be leveraged to draw a probability to stop the synthetic oversampling process to inhibit or prevent the model from overfitting to the synthetic data samples. In each step, a combinatorial reward signal, which aims at improving classification performance on a validation dataset while exploring the uncertainty of the underlying classifier, may be used. [0092] Accordingly, non-limiting embodiments or aspects of the present disclosure may formulate a feature space traversal into a Markov decision process to solve the problem as a sequential decision-making problem with a deep reinforcement learning algorithm. In this way, instead of having a unified strategy to create synthetic data samples, non-limiting embodiments or aspects of the present disclosure may customize a synthetic strategy to individual data points and different underlying classifiers to create fine-grained synthetic samples that provide beneficial information that boosts the performance of anomaly detection.
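The combinatorial reward signal described above rewards improving validation performance while exploring classifier uncertainty. The excerpt does not give its exact form, so the additive combination and the weight `beta` below are assumptions chosen only to illustrate the idea of rewarding the improvement between steps rather than the raw performance.

```python
def combinatorial_reward(val_perf_t, val_perf_prev, uncertainty, beta=0.1):
    """Illustrative reward: the change in validation performance between
    consecutive steps (not the performance itself), plus a weighted bonus
    for classifier uncertainty. Form and weight are assumptions."""
    improvement = val_perf_t - val_perf_prev
    return improvement + beta * uncertainty
```

Rewarding the improvement keeps the feedback meaningful even while the classifier has not yet converged: a step that moves validation performance from 0.70 to 0.80 earns a positive reward regardless of how far either value is from the final score.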
Further, non-limiting embodiments of the present disclosure may provide a reward function that focuses on an improvement of the classification performance, rather than the performance itself. In this way, even though the underlying classifier may not have converged during the training procedure, the classifier may still provide meaningful feedback for training the data mixer. Still further, the reward function according to non-limiting embodiments or aspects of the present disclosure may explore the model uncertainty, which may enable the data mixer to identify potentially beneficial information that was missing in the original dataset for further creating synthetic samples. [0093] Referring now to FIG. 1, FIG. 1 is a diagram of an example environment 100 in which devices, systems, methods, and/or products described herein, may be implemented. As shown in FIG. 1, environment 100 includes transaction processing network 101, which may include merchant system 102, payment gateway system 104, acquirer system 106, transaction service provider system 108, issuer system 110, user device 112, and/or communication network 116. Transaction processing network 101, merchant system 102, payment gateway system 104, acquirer system 106, transaction service provider system 108, issuer system 110, and/or user device 112, may interconnect (e.g., establish a connection to communicate, etc.) via wired connections, wireless connections, or a combination of wired and wireless connections. [0094] Merchant system 102 may include one or more devices capable of receiving information and/or data from payment gateway system 104, acquirer system 106, transaction service provider system 108, issuer system 110, and/or user device 112 (e.g., via communication network 116, etc.)
and/or communicating information and/or data to payment gateway system 104, acquirer system 106, transaction service provider system 108, issuer system 110, and/or user device 112 (e.g., via communication network 116, etc.). Merchant system 102 may include a device capable of receiving information and/or data from user device 112 via a communication connection (e.g., an NFC communication connection, an RFID communication connection, a Bluetooth® communication connection, etc.) with user device 112 and/or communicating information and/or data to user device 112 via the communication connection. For example, merchant system 102 may include a computing device, such as a server, a group of servers, a client device, a group of client devices, and/or other like devices. In some non-limiting embodiments or aspects, merchant system 102 may be associated with a merchant as described herein. In some non-limiting embodiments or aspects, merchant system 102 may include one or more devices, such as computers, computer systems, and/or peripheral devices capable of being used by a merchant to conduct a payment transaction with a user. For example, merchant system 102 may include a POS device and/or a POS system. [0095] Payment gateway system 104 may include one or more devices capable of receiving information and/or data from merchant system 102, acquirer system 106, transaction service provider system 108, issuer system 110, and/or user device 112 (e.g., via communication network 116, etc.) and/or communicating information and/or data to merchant system 102, acquirer system 106, transaction service provider system 108, issuer system 110, and/or user device 112 (e.g., via communication network 116, etc.). For example, payment gateway system 104 may include a computing device, such as a server, a group of servers, and/or other like devices.
In some non-limiting embodiments or aspects, payment gateway system 104 is associated with a payment gateway as described herein. [0096] Acquirer system 106 may include one or more devices capable of receiving information and/or data from merchant system 102, payment gateway system 104, transaction service provider system 108, issuer system 110, and/or user device 112 (e.g., via communication network 116, etc.) and/or communicating information and/or data to merchant system 102, payment gateway system 104, transaction service provider system 108, issuer system 110, and/or user device 112 (e.g., via communication network 116, etc.). For example, acquirer system 106 may include a computing device, such as a server, a group of servers, and/or other like devices. In some non-limiting embodiments or aspects, acquirer system 106 may be associated with an acquirer as described herein. [0097] Transaction service provider system 108 may include one or more devices capable of receiving information and/or data from merchant system 102, payment gateway system 104, acquirer system 106, issuer system 110, and/or user device 112 (e.g., via communication network 116, etc.) and/or communicating information and/or data to merchant system 102, payment gateway system 104, acquirer system 106, issuer system 110, and/or user device 112 (e.g., via communication network 116, etc.). For example, transaction service provider system 108 may include a computing device, such as a server (e.g., a transaction processing server, etc.), a group of servers, and/or other like devices. In some non-limiting embodiments or aspects, transaction service provider system 108 may be associated with a transaction service provider as described herein. In some non-limiting embodiments or aspects, transaction service provider system 108 may include and/or access one or more internal and/or external databases including transaction data.
[0098] Issuer system 110 may include one or more devices capable of receiving information and/or data from merchant system 102, payment gateway system 104, acquirer system 106, transaction service provider system 108, and/or user device 112 (e.g., via communication network 116, etc.) and/or communicating information and/or data to merchant system 102, payment gateway system 104, acquirer system 106, transaction service provider system 108, and/or user device 112 (e.g., via communication network 116, etc.). For example, issuer system 110 may include a computing device, such as a server, a group of servers, and/or other like devices. In some non-limiting embodiments or aspects, issuer system 110 may be associated with an issuer institution as described herein. For example, issuer system 110 may be associated with an issuer institution that issued a payment account or instrument (e.g., a credit account, a debit account, a credit card, a debit card, etc.) to a user (e.g., a user associated with user device 112, etc.). [0099] In some non-limiting embodiments or aspects, transaction processing network 101 includes a plurality of systems in a communication path for processing a transaction. For example, transaction processing network 101 can include merchant system 102, payment gateway system 104, acquirer system 106, transaction service provider system 108, and/or issuer system 110 in a communication path (e.g., a communication path, a communication channel, a communication network, etc.) for processing an electronic payment transaction. As an example, transaction processing network 101 can process (e.g., initiate, conduct, authorize, etc.) an electronic payment transaction via the communication path between merchant system 102, payment gateway system 104, acquirer system 106, transaction service provider system 108, and/or issuer system 110.
[00100] User device 112 may include one or more devices capable of receiving information and/or data from merchant system 102, payment gateway system 104, acquirer system 106, transaction service provider system 108, and/or issuer system 110 (e.g., via communication network 116, etc.) and/or communicating information and/or data to merchant system 102, payment gateway system 104, acquirer system 106, transaction service provider system 108, and/or issuer system 110 (e.g., via communication network 116, etc.). For example, user device 112 may include a client device and/or the like. In some non-limiting embodiments or aspects, user device 112 may be capable of receiving information (e.g., from merchant system 102, etc.) via a short range wireless communication connection (e.g., an NFC communication connection, an RFID communication connection, a Bluetooth® communication connection, and/or the like), and/or communicating information (e.g., to merchant system 102, etc.) via a short range wireless communication connection. In some non-limiting embodiments or aspects, user device 112 may include an application associated with user device 112, such as an application stored on user device 112, a mobile application (e.g., a mobile device application, a native application for a mobile device, a mobile cloud application for a mobile device, an electronic wallet application, an issuer bank application, and/or the like) stored and/or executed on user device 112. In some non-limiting embodiments or aspects, user device 112 may be associated with a sender account and/or a receiving account in a payment network for one or more transactions in the payment network. [00101] Communication network 116 may include one or more wired and/or wireless networks.
For example, communication network 116 may include a cellular network (e.g., a long-term evolution (LTE) network, a third generation (3G) network, a fourth generation (4G) network, a fifth generation (5G) network, a code division multiple access (CDMA) network, etc.), a public land mobile network (PLMN), a local area network (LAN), a wide area network (WAN), a metropolitan area network (MAN), a telephone network (e.g., the public switched telephone network (PSTN)), a private network, an ad hoc network, an intranet, the Internet, a fiber optic-based network, a cloud computing network, and/or the like, and/or a combination of these or other types of networks. [00102] The number and arrangement of devices and systems shown in FIG. 1 is provided as an example. There may be additional devices and/or systems, fewer devices and/or systems, different devices and/or systems, or differently arranged devices and/or systems than those shown in FIG. 1. Furthermore, two or more devices and/or systems shown in FIG. 1 may be implemented within a single device and/or system, or a single device and/or system shown in FIG. 1 may be implemented as multiple, distributed devices and/or systems. Additionally or alternatively, a set of devices and/or systems (e.g., one or more devices or systems) of environment 100 may perform one or more functions described as being performed by another set of devices and/or systems of environment 100. [00103] Referring now to FIG. 2, FIG. 2 is a diagram of example components of a device 200. Device 200 may correspond to one or more devices of merchant system 102, one or more devices of payment gateway system 104, one or more devices of acquirer system 106, one or more devices of transaction service provider system 108, one or more devices of issuer system 110, and/or user device 112 (e.g., one or more devices of a system of user device 112, etc.).
In some non-limiting embodiments or aspects, one or more devices of merchant system 102, one or more devices of payment gateway system 104, one or more devices of acquirer system 106, one or more devices of transaction service provider system 108, one or more devices of issuer system 110, and/or user device 112 (e.g., one or more devices of a system of user device 112, etc.) may include at least one device 200 and/or at least one component of device 200. As shown in FIG. 2, device 200 may include bus 202, processor 204, memory 206, storage component 208, input component 210, output component 212, and communication interface 214. [00104] Bus 202 may include a component that permits communication among the components of device 200. In some non-limiting embodiments or aspects, processor 204 may be implemented in hardware, software, or a combination of hardware and software. For example, processor 204 may include a processor (e.g., a central processing unit (CPU), a graphics processing unit (GPU), an accelerated processing unit (APU), etc.), a microprocessor, a digital signal processor (DSP), and/or any processing component (e.g., a field-programmable gate array (FPGA), an application-specific integrated circuit (ASIC), etc.) that can be programmed to perform a function. Memory 206 may include random access memory (RAM), read-only memory (ROM), and/or another type of dynamic or static storage device (e.g., flash memory, magnetic memory, optical memory, etc.) that stores information and/or instructions for use by processor 204. [00105] Storage component 208 may store information and/or software related to the operation and use of device 200. For example, storage component 208 may include a hard disk (e.g., a magnetic disk, an optical disk, a magneto-optic disk, a solid state disk, etc.), a compact disc (CD), a digital versatile disc (DVD), a floppy disk, a cartridge, a magnetic tape, and/or another type of computer-readable medium, along with a corresponding drive.
[00106] Input component 210 may include a component that permits device 200 to receive information, such as via user input (e.g., a touch screen display, a keyboard, a keypad, a mouse, a button, a switch, a microphone, etc.). Additionally or alternatively, input component 210 may include a sensor for sensing information (e.g., a global positioning system (GPS) component, an accelerometer, a gyroscope, an actuator, etc.). Output component 212 may include a component that provides output information from device 200 (e.g., a display, a speaker, one or more light-emitting diodes (LEDs), etc.). [00107] Communication interface 214 may include a transceiver-like component (e.g., a transceiver, a separate receiver and transmitter, etc.) that enables device 200 to communicate with other devices, such as via a wired connection, a wireless connection, or a combination of wired and wireless connections. Communication interface 214 may permit device 200 to receive information from another device and/or provide information to another device. For example, communication interface 214 may include an Ethernet interface, an optical interface, a coaxial interface, an infrared interface, a radio frequency (RF) interface, a universal serial bus (USB) interface, a Wi-Fi® interface, a cellular network interface, and/or the like. [00108] Device 200 may perform one or more processes described herein. Device 200 may perform these processes based on processor 204 (e.g., a central processing unit (CPU), a graphics processing unit (GPU), etc.) executing software instructions stored by a computer-readable medium, such as memory 206 and/or storage component 208. A computer-readable medium (e.g., a non-transitory computer-readable medium) is defined herein as a non-transitory memory device.
A non-transitory memory device includes memory space located inside of a single physical storage device or memory space spread across multiple physical storage devices. [00109] Software instructions may be read into memory 206 and/or storage component 208 from another computer-readable medium or from another device via communication interface 214. When executed, software instructions stored in memory 206 and/or storage component 208 may cause processor 204 to perform one or more processes described herein. Additionally or alternatively, hardwired circuitry may be used in place of or in combination with software instructions to perform one or more processes described herein. Thus, embodiments or aspects described herein are not limited to any specific combination of hardware circuitry and software. [00110] Memory 206 and/or storage component 208 may include data storage or one or more data structures (e.g., a database, etc.). Device 200 may be capable of receiving information from, storing information in, communicating information to, or searching information stored in the data storage or one or more data structures in memory 206 and/or storage component 208. [00111] The number and arrangement of components shown in FIG. 2 are provided as an example. In some non-limiting embodiments or aspects, device 200 may include additional components, fewer components, different components, or differently arranged components than those shown in FIG. 2. Additionally or alternatively, a set of components (e.g., one or more components) of device 200 may perform one or more functions described as being performed by another set of components of device 200. [00112] Referring now to FIGS. 3A-3C, FIGS. 3A-3C are a flowchart of non-limiting embodiments or aspects of a process 300 for synthetic oversampling for boosting supervised anomaly detection.
In some non-limiting embodiments or aspects, one or more of the steps of process 300 may be performed (e.g., completely, partially, etc.) by transaction service provider system 108 (e.g., one or more devices of transaction service provider system 108, etc.). In some non-limiting embodiments or aspects, one or more of the steps of process 300 may be performed (e.g., completely, partially, etc.) by another device or a group of devices separate from or including transaction service provider system 108, such as merchant system 102 (e.g., one or more devices of merchant system 102), payment gateway system 104 (e.g., one or more devices of payment gateway system 104), acquirer system 106 (e.g., one or more devices of acquirer system 106), issuer system 110 (e.g., one or more devices of issuer system 110), and/or user device 112 (e.g., one or more devices of a system of user device 112). [00113] As shown in FIG. 3A, at step 302, process 300 includes obtaining a training dataset including a plurality of source samples including a plurality of labeled normal samples and a plurality of labeled anomaly samples. For example, transaction service provider system 108 may obtain a training dataset X_train including a plurality of source samples including a plurality of labeled normal samples and a plurality of labeled anomaly samples. As an example, following a setting of semi-supervised anomaly detection, a training dataset of supervised anomaly detection X_train = {x_1, x_2, ..., x_(n+m)} may be composed of a set of labeled non-anomalies or normal samples N = {x_1, x_2, ..., x_n} and a set of labeled anomalies A = {x_(n+1), x_(n+2), ..., x_(n+m)}, where n ≫ m.
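The composition of such a training dataset (n labeled normal samples and m labeled anomaly samples, with n ≫ m) can be sketched with hypothetical toy data; the array names, class counts, and distributions below are illustrative only, not from the disclosure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: n labeled normal samples (N) and m labeled
# anomaly samples (A), with n >> m, stacked into the training dataset.
n, m, d = 1000, 20, 8
N = rng.normal(loc=0.0, scale=1.0, size=(n, d))   # labeled normal samples
A = rng.normal(loc=4.0, scale=1.0, size=(m, d))   # labeled anomaly samples

X_train = np.vstack([N, A])                          # X_train = {x_1, ..., x_(n+m)}
y_train = np.concatenate([np.zeros(n), np.ones(m)])  # 1 marks an anomaly
```

The severe class imbalance (here 1000 : 20) is what motivates oversampling the anomaly class in the first place.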
A goal of supervised anomaly detection may be to learn a classifier f : X → ℝ which evaluates the probability of individual data points being anomalies in the given dataset X via exploiting the prior knowledge of the labeled normal samples N and anomalies A, so that f(x_i) > f(x_j) when x_i ∈ A and x_j ∈ N. In some non-limiting embodiments or aspects, the plurality of source samples may be associated with a plurality of transactions in transaction processing network 101, the plurality of labeled normal samples may be associated with a plurality of non-fraudulent transactions of the plurality of transactions, and/or the plurality of labeled anomaly samples may be associated with a plurality of fraudulent transactions of the plurality of transactions. [00114] To better generalize the knowledge from the label information, a problem of strategic data augmentation may be defined as follows: Given a dataset X_train = {N, A} and a supervised classifier f, non-limiting embodiments or aspects of the present disclosure may have a target or objective of augmenting the dataset X_train with a synthetic dataset X_syn according to f, where the synthetic dataset X_syn is generated via mixing up samples from N with samples from A.
For example, an objective of non-limiting embodiments or aspects of the present disclosure may be to properly sample pairs of data instances from N and A with a corresponding mix up ratio λ to create synthetic instances x_syn such that the performance of f can be improved or maximized by being trained on X_train ∪ X_syn. [00115] To leverage label information from two different classes, Mixup, which is disclosed in the paper titled “Mixup: Beyond empirical risk minimization” by Hongyi Zhang, Moustapha Cisse, Yann N Dauphin, and David Lopez-Paz, 2017 (arXiv preprint arXiv:1710.09412), the disclosure of which is hereby incorporated by reference in its entirety, performs synthetic data generation over two samples from different classes, which has been extensively studied to augment image and textual data. An idea of Mixup is to linearly combine two samples according to the following Equation (1):

x_syn = λ · x_0 + (1 − λ) · x_1 (1)

where λ ∈ [0.0, 1.0] controls the composition of x_syn. Although existing works generate a soft label of x_syn in the same fashion for the imbalance classification problem, the diverse behaviors of the anomalies lead to similar labels with high granularity on diverse synthetic samples, which may prompt the model to over-fit on noisy synthetic labels. To this end, instead of generating soft labels, non-limiting embodiments or aspects of the present disclosure may synthesize hard labels for x_syn according to the following Equation (2):

y_syn = y_0 if λ ≥ 0.5, otherwise y_syn = y_1 (2)

[00116] Due to the diverse behavior of anomalies, arbitrarily mixing up two random source samples from the dataset X_train may lead to noisy samples. To tackle this problem, non-limiting embodiments or aspects of the present disclosure may seek to identify a meaningful pair of samples for synthesizing new samples.
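Equations (1) and (2) can be sketched as follows. Note that the hard-label rule is reconstructed here as "take the label of the sample that dominates the mix"; the exact threshold and tie-breaking in the disclosure are an assumption:

```python
import numpy as np

def mixup_hard(x0, y0, x1, y1, lam):
    """Mixup of two samples (Eq. (1)) with a hard label (Eq. (2), as
    reconstructed): the synthetic point takes the label of whichever
    source sample carries the larger mixing weight."""
    x_syn = lam * x0 + (1.0 - lam) * x1   # Eq. (1): linear combination
    y_syn = y0 if lam >= 0.5 else y1      # Eq. (2): hard label, no soft mixing
    return x_syn, y_syn

x0, y0 = np.array([0.0, 0.0]), 0   # a normal sample
x1, y1 = np.array([4.0, 4.0]), 1   # an anomaly
x_syn, y_syn = mixup_hard(x0, y0, x1, y1, lam=0.25)
# lam = 0.25 weights x1 more heavily, so the hard label is y1 (anomaly)
```

Assigning a single hard label avoids the noisy soft labels the text describes, at the cost of a sharp decision at λ = 0.5.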
As normal samples are often concentrated in the latent space, and borderline samples are more informative, non-limiting embodiments or aspects of the present disclosure may traverse the feature space of X_train with the guidance of the decision boundary of the model f for synthetic oversampling. For example, given a pair of arbitrary source samples, non-limiting embodiments or aspects of the present disclosure may consider the attributes of the two samples to identify a corresponding composition ratio λ and number of oversampling δ for generating a set of x_syn ∈ X_syn. Meanwhile, an optimal range for uniformly sampling the next pair of source samples may be identified according to the model status for iterating to the next round of the mix-up process. An intuition behind the uniform sampling is to consider the relationship between the attributes of the source samples and their entire neighborhood information instead of focusing on a certain sample in the neighborhood. Referring also to FIG. 5, FIG. 5 illustrates non-limiting embodiments or aspects of an iterative mix-up process. In FIG. 5, the background colors indicate the model decision boundary. The attributes of the source samples (purple circled) are considered to specify the composition ratio λ for oversampling the synthetic sample (black points) with corresponding labels (outer circle of black points) δ times. A k-nearest neighborhood is identified for randomly sampling the next pair of source samples. The process iterates to the next round with the new pair of source samples. For example, as shown in FIG. 5, based on the attributes of source samples x_0 and x_1, the composition ratio λ = 0.4 is specified and the number of oversampling δ = 1 for creating synthetic sample x_syn. A k-nearest neighborhood is identified to randomly sample the next pair of source samples x_2 and x_3 for the next round of mix-up.
[00117] An iterative mix-up process according to non-limiting embodiments or aspects of the present disclosure may have several desirable properties. The iterative mix-up process can make personalized decisions. As an example, more samples may be generated for some instances and fewer samples may be generated for other instances. The iterative mix-up process can incorporate various information to guide the mix-up process. As an example, data attributes and model status can be considered and serve as guidance for generating samples. The iterative mix-up process, by simultaneously considering the model status with the feature distribution, may directly generate information that is missing in the original dataset but beneficial for model training. [00118] Still referring to FIG. 5, and referring also to FIG. 6, which illustrates non-limiting embodiments or aspects of a network architecture for synthetic oversampling for boosting supervised anomaly detection, an iterative mix-up process according to non-limiting embodiments or aspects of the present disclosure can be formulated as a Markov decision process (MDP) with a quintuple (S, A, T, R, γ), where S is a finite set of states, A is a finite set of actions, T : S × A → S is the state transition function that maps the current state s and action a to the next state s′, R : S × A → ℝ is the immediate reward function that reflects the quality of action a for the state s, and γ is a decay factor to gradually consider the future transitions. To sample-wise tailor the mix-up strategy, a data mixer that maps the attributes of source data samples into an augmentation strategy for creating synthetic samples while exploring model decision boundaries may be learned. The MDP may be defined as follows: [00119] State Space (S): At each timestamp t, state s_t ∈ S may be defined as s_t = (x_t^0, x_t^1), where s_t ∈ ℝ^(2d) is a concatenation of the two d-dimensional feature vectors of the two source samples.
Therefore, the state space may be defined as S = ℝ^(2d). [00120] Action Space (A): At each timestamp t, the action a_t ∈ A, where a_t = (k, λ, δ, ε), may be a vector composed of the size of the neighborhood k, the composition ratio λ, the number of oversampling δ, and the termination probability of the iterative mix-up process ε. Therefore, the action space may be defined as a discrete-continuous space A = {(k, λ, δ, ε) | k, δ ∈ ℕ, λ, ε ∈ [0, 1]}. [00121] Transition Function (T): Given a state s_t = (x_t^0, x_t^1) and an action a_t = (k, λ, δ, ε), the transition function may adopt λ using Equations (1) and (2) to oversample x_syn for δ times. The resulting synthetic samples X_syn may be adopted for training the classifier and lead to a classifier f_t in timestamp t. The transition function may shift to the next state s_(t+1) = (x_(t+1)^0, x_(t+1)^1), where x_(t+1)^0 is randomly sampled from the k-nearest neighborhood of x_syn and x_(t+1)^1 is identified as the nearest data point with a different label from x_(t+1)^0. [00122] Reward Function (R): The reward signal r_t for each timestamp t may be designed to encourage performance improvement while exploring the decision boundaries of the classifier f. Therefore, the reward function of Equation (3) may combine a performance improvement term with a model confidence term, where α is a hyperparameter to define the strength of the reward signal, ℳ is an evaluation metric, and Δℳ(f_t) measures the performance improvement of f_t. The confidence term, evaluated for the state-action pair (s_t, a_t), encourages exploring the decision space of f_t. In this way, the reward signal may drive the data mixer to explore the classifier while achieving maximum improvement with the newly synthesized data samples.
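The transition function above, which picks the next source pair from the k-nearest neighborhood of the synthetic point, can be sketched as follows; the function signature and a brute-force distance computation are illustrative assumptions, not the disclosed implementation:

```python
import numpy as np

def transition(X, y, x_syn, k, rng):
    """One state transition (sketch): the next x0 is drawn uniformly from the
    k-nearest neighborhood of x_syn; the next x1 is the nearest point to x0
    that carries a different label."""
    dist = np.linalg.norm(X - x_syn, axis=1)
    knn_idx = np.argsort(dist)[:k]            # k-nearest neighborhood of x_syn
    i0 = rng.choice(knn_idx)                  # next x0: uniform over the neighborhood
    diff = np.flatnonzero(y != y[i0])         # candidates with a different label
    i1 = diff[np.argmin(np.linalg.norm(X[diff] - X[i0], axis=1))]
    return X[i0], X[i1]                       # next state s_(t+1) = (x0, x1)

X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [5.0, 5.0]])
y = np.array([0, 0, 0, 1])
x0_next, x1_next = transition(X, y, np.array([0.2, 0.2]), k=3,
                              rng=np.random.default_rng(1))
```

With k = 3, the neighborhood of (0.2, 0.2) contains only label-0 points, so the paired x1 is forced to the lone anomaly at (5, 5).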
[00123] To solve the MDP, a parameterized policy π_θ may be defined as the data mixer to maximize the reward signal of the MDP, where an ultimate goal is to learn an optimal policy π_θ that maximizes the cumulative reward E[Σ_t γ^t · r_t]. However, the action space of the iterative mix-up process is a discrete-continuous vector, and the reward signals generated from an under-fitted f_t may be unstable. To this end, non-limiting embodiments or aspects of the present disclosure may employ the deep deterministic policy gradient (DDPG) as disclosed in the paper titled “Continuous control with deep reinforcement learning” by Timothy P Lillicrap, Jonathan J Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra, 2015 (arXiv preprint arXiv:1509.02971), the disclosure of which is hereby incorporated by reference in its entirety, which is an actor-critic framework equipped with two separate networks: an actor and a critic. The critic network Q(s_t, a_t | θ^Q) approximates the reward signal for a state-action pair from the MDP, while the actor network π(s_t | θ^π) aims to learn the policy for a given state s_t based on the critic network. Additionally, or alternatively, an advanced actor-critic framework such as soft actor-critic as disclosed in the paper titled “Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor” by Tuomas Haarnoja, Aurick Zhou, Pieter Abbeel, and Sergey Levine, 2018, in International Conference on Machine Learning, PMLR, 1861–1870, the disclosure of which is hereby incorporated by reference in its entirety, may be adopted to learn the policy π_θ. [00124] To perform a continuous action, the DDPG learns an actor network π(s_t | θ^π) that deterministically maps a given state s_t to an action vector a_t and trains the network by maximizing the approximated cumulative reward generated by the critic network Q(· | θ^Q).
For example, given B transitions, a projected action π(s_t | θ^π) may be generated as the input of the critic to minimize a loss function defined according to the following Equation (4), reconstructed here as the standard DDPG actor loss:

L_actor = −(1/B) Σ_t Q(s_t, π(s_t | θ^π) | θ^Q) (4)

where the action π(s_t) is a 4-dimensional real-valued vector. To fully leverage the expressive power of the deep neural network during the training while outputting a discrete-continuous vector for the MDP, the continuous action vector π(s_t) may be transformed with a sigmoid function to yield the action vector a_t = c ∘ σ(π(s_t | θ^π)), where c specifies the value constraints of individual entries. For example, if the maxima for k and δ are 10 and 5, then c = [10, 1, 5, 1], since λ and ε are expected to range from 0 to 1. The outcomes for k and δ may be rounded to the nearest integer. [00125] To tackle the unstable reward signal issue, the DDPG approximates the reward signal with the critic network Q(· | θ^Q) and trains the networks in an off-policy fashion. It introduces a replay buffer to store historical transitions and to randomly sample transitions, which minimizes the temporal correlation between two transitions for learning across a set of uncorrelated transitions. For example, the critic network Q(· | θ^Q) may map a state-action pair into a real value via minimizing a loss function defined according to the following Equation (5):

L_critic = (1/B) Σ_t (y_t − Q(s_t, a_t | θ^Q))² (5)

where y_t = r(s_t, a_t) + γ · Q(s_(t+1), π(s_(t+1) | θ^π) | θ^Q) is a signal derived from the Bellman equation, which considers the recursive relation between the current real and the future approximated reward signals for maximizing cumulative reward, where π(s_(t+1) | θ^π) is an action specified by the actor network and γ is the decay factor. [00126] As shown in FIG. 3A, at step 304, process 300 includes initializing a timestamp. For example, transaction service provider system 108 may initialize a timestamp t. As an example, and referring also to FIG.
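The constraint-vector transform a_t = c ∘ σ(π(s_t)) described above can be sketched as follows; the raw actor output here is a stand-in array rather than a trained network, and the function name is illustrative:

```python
import numpy as np

# Constraint vector from the example in the text: max k = 10, max delta = 5,
# while lambda and epsilon must lie in [0, 1].
C = np.array([10.0, 1.0, 5.0, 1.0])

def to_action(raw, c=C):
    """Map the actor's raw 4-dim output to a_t = c * sigmoid(raw), rounding the
    discrete entries (k, delta) to the nearest integer."""
    a = c * (1.0 / (1.0 + np.exp(-raw)))   # element-wise sigmoid, then scale by c
    k, lam, delta, eps = a
    return int(round(k)), float(lam), int(round(delta)), float(eps)

k, lam, delta, eps = to_action(np.array([0.0, 0.0, 0.0, 0.0]))
# sigmoid(0) = 0.5, so lam = eps = 0.5 and k rounds from 5.0 to 5
```

The sigmoid keeps the network's output unbounded during training while the element-wise product with c enforces the per-entry ranges of the discrete-continuous action space.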
7, which illustrates an implementation of non-limiting embodiments or aspects of a training procedure of a process for synthetic oversampling for boosting supervised anomaly detection, transaction service provider system 108 may initialize the timestamp t when executing a training episode e of a number of training episodes E to be executed. In such an example, transaction service provider system 108 may initialize the timestamp t in response to executing each training episode e of the number of training episodes E to be executed (e.g., in response to executing a new or next training episode, etc.). [00127] As shown in FIG. 3A, at step 306, process 300 includes receiving, from an actor network of an actor critic framework including the actor network and a critic network, an action vector for the timestamp. For example, and referring again to FIGS. 6 and 7, transaction service provider system 108 may receive, from an actor network π of an actor critic framework including the actor network π and a critic network Q, an action vector a_t for the timestamp t. The actor network π may be configured to generate the action vector a_t based on a state s_t. The state s_t may be determined based on a current pair of source samples of the plurality of source samples. The action vector a_t may include a size of a nearest neighborhood k, a composition ratio λ, a number of oversampling δ, and a termination probability ε. [00128] As shown in FIG. 3A, at step 308, process 300 includes combining the current pair of source samples according to the composition ratio and the number of oversampling to generate a labeled synthetic sample associated with a label. For example, transaction service provider system 108 may combine the current pair of source samples according to the composition ratio λ and the number of oversampling δ to generate a labeled synthetic sample x_syn associated with a label y_syn.
[00129] In some non-limiting embodiments or aspects, the current pair of source samples may be combined according to the composition ratio λ to generate the labeled synthetic sample x_syn according to Equations (1) and (2), where x_0 is a first sample of the current pair of samples, x_1 is a second sample of the current pair of samples, y_syn is a hard label for the labeled synthetic sample x_syn, y_0 is a first hard label value, and y_1 is a second hard label value. [00130] As shown in FIG. 3A, at step 310, process 300 includes training, using the labeled synthetic sample and the label, a machine learning classifier. For example, transaction service provider system 108 may train, using the labeled synthetic sample x_syn and the label y_syn, a machine learning classifier f. As an example, transaction service provider system 108 may provide, as input to the machine learning classifier f, the labeled synthetic sample x_syn, and modify one or more parameters of the machine learning classifier f according to an objective function that depends on the output of the machine learning classifier f for the labeled synthetic sample x_syn and the label y_syn. For example, and referring again to FIG. 6, in each step, a pair of normal samples and anomalies may be input to the data mixer, the corresponding action may be generated to create synthetic samples, and the synthetic samples are adopted to train the classifier and yield the reward signal for updating the data mixer. [00131] As shown in FIG. 3A, at step 312, process 300 includes obtaining, based on the size of a nearest neighborhood, source samples in the k-nearest neighborhood of the labeled synthetic sample. For example, transaction service provider system 108 may obtain, based on the size of a nearest neighborhood k, source samples in the k-nearest neighborhood of the labeled synthetic sample x_syn.
As an example, transaction service provider system 108 may compute, using a k-nearest neighbors algorithm, source samples in the k-nearest neighborhood of the labeled synthetic sample x_syn. [00132] In some non-limiting embodiments or aspects, transaction service provider system 108 may, before executing the training episode (e.g., before executing any training episode, etc.), train, using the training dataset X_train, the machine learning classifier f; and pre-compute each k-nearest neighborhood for each source sample of the plurality of source samples in the training dataset X_train. Transaction service provider system 108 may store each pre-computed k-nearest neighborhood for each source sample of the plurality of source samples in the training dataset X_train for use during the training procedure. In this way, non-limiting embodiments or aspects of the present disclosure may reduce a computational cost during the training procedure. [00133] As shown in FIG. 3A, at step 314, process 300 includes generating, with the machine learning classifier, for the source samples in the k-nearest neighborhood of the labeled synthetic sample and a subset of the plurality of source samples of the training dataset in a validation dataset, a plurality of classifier outputs. For example, transaction service provider system 108 may generate, with the machine learning classifier f, for the source samples in the k-nearest neighborhood of the labeled synthetic sample x_syn and a subset of the plurality of source samples of the training dataset X_train in a validation dataset X_val, a plurality of classifier outputs.
As an example, transaction service provider system 108 may provide, as input to the machine learning classifier f, the source samples in the k-nearest neighborhood of the labeled synthetic sample x_syn and a subset of the plurality of source samples of the training dataset X_train in a validation dataset X_val, and receive, as output from the machine learning classifier f, a plurality of classifier outputs. [00134] As shown in FIG. 3B, at step 316, process 300 includes selecting, from the source samples in the k-nearest neighborhood of the labeled synthetic sample, a next pair of source samples. For example, transaction service provider system 108 may select (e.g., randomly select, etc.), from the source samples in the k-nearest neighborhood of the labeled synthetic sample x_syn, a next pair of source samples. [00135] As shown in FIG. 3B, at step 318, process 300 includes storing, in a memory buffer, the state, the action vector, a next state, and a reward. For example, transaction service provider system 108 may store, in a memory buffer, the state s_t, the action vector a_t, a next state s_t+1, and a reward r_t. The next state s_t+1 may be determined based on the next pair of source samples. The reward r_t may be determined based on the plurality of classifier outputs.
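The memory buffer of step 318 may be realized, for example, as a fixed-capacity replay buffer of (state, action, next state, reward) transitions. The deque-based implementation and uniform mini-batch sampling below are generic assumptions; the patent does not specify the buffer's internals.

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-size memory buffer for (state, action, next state, reward)
    transitions, as stored in step 318."""

    def __init__(self, capacity):
        # Oldest transitions are evicted automatically once full.
        self.buffer = deque(maxlen=capacity)

    def store(self, state, action, next_state, reward):
        self.buffer.append((state, action, next_state, reward))

    def sample(self, batch_size):
        # Uniformly sample a mini-batch of stored transitions.
        return random.sample(self.buffer, batch_size)

    def __len__(self):
        return len(self.buffer)

buf = ReplayBuffer(capacity=100)
buf.store([0.0], [0.5], [0.1], 1.0)
buf.store([0.1], [0.4], [0.2], 0.5)
```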
[00136] In some non-limiting embodiments or aspects, the reward r_t may be determined according to Equations (6) and (7), where M is an evaluation metric, ΔM(f_t) measures a performance improvement of the trained classifier f_t, X_val is the validation dataset, y_val is a label set for the validation dataset, b_t is a baseline for the timestamp t, m is a hyperparameter that defines a buffer size for forming the baseline, C(f_t|s_t, a_t) evaluates a model confidence of the trained classifier f_t, P is a model exploration function, k is the size of the nearest neighborhood specified by the action vector a_t, x_i is a sample in the k-nearest neighborhood of the labeled synthetic sample x_syn at timestamp t, and y_i is a label for x_i. [00137] For example, the reward signal may include an improvement stimulation component ΔM(f_t) and a model exploration component C(f_t|s_t, a_t). To learn an optimal policy for the target tasks, existing solutions directly adopt the performance on a validation dataset as a reward signal. However, as the convergence of the underlying classifier is not guaranteed, directly learning a policy with the performance on a validation set may lead to noisy reward signals. As a result, rather than using the current model’s performance M(f_t), non-limiting embodiments or aspects of the present disclosure provide an improvement stimulation to pursue the maximum model improvement on the validation set over a baseline performance according to Equation (6). In such an example, synthetic samples may be created by mixing normal samples and anomalies while iteratively training the classifier f, while exploring model decision boundaries to create beneficial samples and reduce or prevent generating noisy samples. Accordingly, a model exploration signal to quantify the instance-wise prediction uncertainty may be defined according to Equation (7) to encourage the data mixer to explore the uncertain area in the feature space.
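The two-part reward just described can be sketched as follows. The moving-average baseline over the last m metric values, the per-sample uncertainty measure `1 - |2p - 1|` (maximal at p = 0.5), and the weighting coefficient `alpha` are illustrative assumptions standing in for the patent's exact Equations (6) and (7).

```python
import numpy as np

def reward(metric_history, current_metric, neighbor_probs, m=25, alpha=1.0):
    """Improvement-stimulation term (validation metric minus a moving
    baseline) plus alpha times a model-exploration term (mean prediction
    uncertainty over the k-nearest neighbors of the synthetic sample)."""
    window = metric_history[-m:] if metric_history else [current_metric]
    baseline = float(np.mean(window))
    improvement = current_metric - baseline           # Delta-M-style term
    p = np.asarray(neighbor_probs, dtype=float)
    uncertainty = float(np.mean(1.0 - np.abs(2.0 * p - 1.0)))
    return improvement + alpha * uncertainty

r = reward(metric_history=[0.5, 0.6], current_metric=0.7,
           neighbor_probs=[0.5, 0.9], m=25, alpha=1.0)
```

Rewarding uncertainty near p = 0.5 pushes the data mixer toward the classifier's decision boundary, matching the model-exploration intent described above.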
[00138] As shown in FIG. 3B, at step 320, process 300 includes determining whether the termination probability satisfies a termination threshold. For example, transaction service provider system 108 may determine whether the termination probability p satisfies a termination threshold (e.g., a predetermined termination threshold, a hyperparameter set by a user, etc.). As an example, and referring again to FIG. 7, transaction service provider system 108 may iteratively train the classifier f until the termination probability p satisfies the termination threshold. [00139] As shown in FIG. 3B, at step 322, process 300 includes, in response to determining that the termination probability fails to satisfy the termination threshold in step 320, incrementing the timestamp. For example, transaction service provider system 108 may, in response to determining that the termination probability p fails to satisfy the termination threshold in step 320, increment the timestamp t. [00140] As shown in FIG. 3B, at step 324, process 300 includes training the critic network according to a critic loss function that depends on the state, the action vector, and the reward. For example, transaction service provider system 108 may train the critic network Q according to a critic loss function that depends on the state s_t, the action vector a_t, and the reward r_t. As an example, the critic loss function may be defined according to Equation (5), where y_t = R(s_t, a_t) + γQ(s_t+1, μ(s_t+1|θ_1)|θ_2), μ(s_t+1|θ_1) is an action specified by the actor network, and γ is a decay factor. [00141] As shown in FIG. 3B, at step 326, process 300 includes training the actor network according to an actor loss function that depends on an output of the critic network. For example, transaction service provider system 108 may train the actor network μ according to an actor loss function that depends on an output of the critic network.
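The bootstrapped target used by the critic loss of Equation (5) can be sketched as follows, with toy callables standing in for the actor and critic networks; the linear stand-ins and `gamma = 0.9` are purely illustrative.

```python
import numpy as np

def critic_target(reward, next_state, actor, critic, gamma=0.99):
    """Target for the critic loss: immediate reward plus the discounted
    critic value of the actor's action in the next state (the decay
    factor gamma discounts the bootstrapped term)."""
    next_action = actor(next_state)
    return reward + gamma * critic(next_state, next_action)

# Toy linear stand-ins for the actor and critic networks.
actor = lambda s: 0.5 * np.asarray(s)        # action from state
critic = lambda s, a: float(np.dot(s, a))    # scalar value of (state, action)

y = critic_target(reward=1.0, next_state=np.array([1.0, 2.0]),
                  actor=actor, critic=critic, gamma=0.9)
```

The critic is then regressed toward this target, while the actor is updated to maximize the critic's output for its projected actions.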
As an example, the actor loss function may be defined according to Equation (4), where N is a number of transitions, μ(s_i|θ_1) is a projected action for a state s_i, and Q(s_i, μ(s_i)|θ_2) is an output of the critic network for the projected action μ(s_i|θ_1) and the state s_i. [00142] As shown in FIG. 3B, at step 328, process 300 includes determining whether the actor network and the critic network have been trained for a number of training steps. For example, transaction service provider system 108 may determine whether the actor network μ and the critic network Q have been trained for a number of training steps S. [00143] In response to determining that the actor network and the critic network have not been trained for the number of training steps in step 328, processing may return to step 324 to train the actor network and the critic network in the next training step. For example, transaction service provider system 108 may, in response to determining that the actor network μ and the critic network Q have not been trained for the number of training steps S in step 328, return processing to step 324 to train the actor network μ and the critic network Q in the next training step. [00144] In response to determining that the actor network and the critic network have been trained for the number of training steps in step 328, processing may return to step 306 with the next pair of source samples as the current pair of source samples. For example, transaction service provider system 108 may, in response to determining that the actor network μ and the critic network Q have been trained for the number of training steps S in step 328, return processing to step 306 with the next pair of source samples as the current pair of source samples. [00145] As shown in FIG.
3B, at step 330, process 300 includes, in response to determining that the termination probability satisfies the termination threshold in step 320, determining whether the number of training episodes executed satisfies a threshold number of training episodes. For example, transaction service provider system 108 may, in response to determining that the termination probability p satisfies the termination threshold in step 320, determine whether the number of training episodes executed satisfies a threshold number of training episodes. [00146] In response to determining that the number of training episodes executed fails to satisfy the threshold number of training episodes in step 330, processing may return to step 304 to execute a next training episode. For example, transaction service provider system 108 may, in response to determining that the number of training episodes executed fails to satisfy the threshold number of training episodes, return processing to step 304 to execute a next training episode. [00147] As shown in FIG. 3B, at step 332, process 300 includes, in response to determining that the number of training episodes executed satisfies the threshold number of training episodes, providing the machine learning classifier. For example, transaction service provider system 108 may, in response to determining that the number of training episodes executed satisfies the threshold number of training episodes, provide the machine learning classifier f (e.g., provide the trained machine learning classifier, etc.). [00148] As shown in FIG. 3C, at step 334, process 300 includes receiving transaction data associated with a transaction currently being processed in the transaction processing network. For example, transaction service provider system 108 may receive transaction data associated with a transaction currently being processed in transaction processing network 101.
[00149] In some non-limiting embodiments or aspects, transaction data may include parameters associated with a transaction, such as an account identifier (e.g., a PAN, etc.), a transaction amount, a transaction date and time, a type of products and/or services associated with the transaction, a conversion rate of currency, a type of currency, a merchant type, a merchant name, a merchant location, a transaction approval (and/or decline) rate, and/or the like. [00150] As shown in FIG. 3C, at step 336, process 300 includes processing, using the trained machine learning classifier, the transaction data to classify the transaction as a fraudulent or non-fraudulent transaction. For example, transaction service provider system 108 may process, using the trained machine learning classifier, the transaction data to classify the transaction as a fraudulent or non-fraudulent transaction. As an example, transaction service provider system 108 may provide, as input to the trained machine learning classifier, one or more parameters of the transaction data and receive, as output from the trained machine learning classifier, a prediction (e.g., a probability, a likelihood, a yes/no, etc.) that the transaction is a fraudulent or non-fraudulent transaction. [00151] As shown in FIG. 3C, at step 338, process 300 includes authorizing or denying, based on the classification, the transaction in the transaction processing network. For example, transaction service provider system 108 may authorize or deny, based on the classification, the transaction in the transaction processing network. As an example, transaction service provider system 108 may, in response to classifying the transaction as a fraudulent transaction, deny authorization of the transaction in transaction processing network 101. As an example, transaction service provider system 108 may, in response to classifying the transaction as a non-fraudulent transaction, authorize the transaction in transaction processing network 101.
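Steps 336 and 338 can be sketched as a simple score-and-decide routine. The 0.5 probability threshold and the toy classifier stand-in are illustrative assumptions; in practice the trained machine learning classifier and a policy-defined threshold would be used.

```python
def authorize(classifier, features, threshold=0.5):
    """Score a transaction's features with a trained classifier and
    authorize it only when the fraud probability is below threshold."""
    fraud_prob = classifier(features)
    return "deny" if fraud_prob >= threshold else "authorize"

# Toy stand-in classifier: flags only very large transaction amounts.
clf = lambda feats: 0.9 if feats["amount"] > 10_000 else 0.1

decision = authorize(clf, {"amount": 25_000})
```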
[00152] Experiment [00153] Discussed below are experiments in which a universal data mixer with supervised anomaly detection according to non-limiting embodiments or aspects of the present disclosure (e.g., referred to as “AnoMix” in FIGS. 9-11) is evaluated against state-of-the-art label-informed anomaly detection algorithms and numerous data augmentation approaches. The experiments aimed to answer the following questions: RQ1: How does the proposed framework AnoMix compare against existing data augmentation methods?; RQ2: How does the proposed framework AnoMix compare against existing label-informed anomaly detection methods?; and RQ3: How does each component contribute to the performance of the proposed framework AnoMix? [00154] Horizontal analysis was conducted to compare a framework according to non-limiting embodiments or aspects of the present disclosure (e.g., “AnoMix”) with data augmentation methods on three different classifiers. Vertical analysis that compares the proposed framework with label-informed anomaly detectors was also conducted. FIG. 8 is a table summarizing statistics of datasets used for the experiment. As shown in FIG. 8, the following five benchmark datasets from different domains were used: a Japanese Vowels dataset, an Annthyroid dataset, a Mammography dataset, a Satellite dataset, and an SMTP dataset. [00155] The Japanese Vowels dataset contains utterances of /ae/ that were recorded from nine speakers with 12 LPC cepstrum coefficients. The goal is to identify the outlier speaker. [00156] The Annthyroid dataset is a set of clinical records that record 21 physical attributes of over 7,200 patients. The goal is to identify the patients that are potentially suffering from hypothyroidism. [00157] The Mammography dataset is composed of 6 features extracted from the images, including shape, margin, density, etc.
The goal is to identify malignant cases that could potentially lead to breast cancer. [00158] The Satellite dataset contains the remote sensing data of 6,435 regions, where each region is segmented into a 3x3 square neighborhood region and is monitored by 4 different light wavelengths captured from the satellite images of the Earth. The goal is to identify regions with abnormal soil status. [00159] The SMTP dataset has 95,156 server connections with 41 connection attributes including duration, src_byte, dst_byte, and so on. The task is to identify malicious attacks from the connection log. [00160] Each of the datasets is publicly available in OpenML. The widely adopted protocol proposed by the ODDS Library was used to process the data. [00161] FIG. 9 is a table summarizing a horizontal comparison between a framework according to non-limiting embodiments or aspects of the present disclosure (e.g., “AnoMix”) and the following representative data augmentation methods conducted on 3 different classifiers (i.e., KNN, XGBoost, MLP). Random is a basic baseline which generates synthetic anomalies by randomly averaging two existing anomalies. SMOTE linearly combines existing anomalies with their K-nearest anomalies through randomly sampled combination ratios. BorderlineSMOTE identifies a borderline between anomalies and normal samples with the K-nearest neighborhood of each anomaly. Then, SMOTE is performed on the anomalies in the borderline area. SVMSMOTE introduces a support vector classifier to identify a borderline between anomalies and normal samples and performs SMOTE on anomalies near the borderline. ADASYN calculates the number of synthetic samples generated from individual anomalies using the estimated local distribution and then performs SMOTE on them. MixBoost uses a randomly sampled combination ratio to mix anomalies and normal samples.
The number of oversamples for individual anomalies is sampled with weights based on the entropy of the underlying classifier. [00162] FIG. 10 is a table summarizing a vertical comparison between a framework according to non-limiting embodiments or aspects of the present disclosure (e.g., “AnoMix”) and label-informed detection algorithms to study the effectiveness of synthetic oversampling. XGBOD exploits the anomalous scores of individual data points generated from multiple unsupervised algorithms as the input feature for the underlying XGBoost classifier. DeepSAD develops a semi-supervised loss function which classifies the known normal samples and unlabeled samples into a unified cluster and deviates the known anomalies from the cluster. DevNet uses label information with a prior probability to enforce significant deviations in anomaly scores from the majority of normal samples. [00163] Macro-averaged precision, recall, and F1-score, which compute the scores separately for each class and average the scores, were adopted as an evaluation protocol. The intuition behind this is to equalize the significance of anomaly detection and normal sample classification, since minimizing false alarms is also a critical evaluation criterion. 5-fold cross validation was conducted with 80% of the data for training and 20% for testing. In addition, 40% of the training data is further split into a validation set for our framework to generate reward signals or for baseline methods to perform model tuning. The average performance on the testing set is reported. [00164] For the horizontal analysis, a KNN classifier with k = 15, an XGBoost classifier with the linear kernel, and the Adam optimizer with ReLU activation function for a 128-64 multi-layer perceptron classifier were used. For the vertical analysis, XGBOD from PyOD and publicly available implementations of DevNet and DeepSAD were adopted.
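The macro-averaged F1-score used in the evaluation protocol (and as the reward signal) is the unweighted mean of the per-class F1-scores, so the rare anomaly class counts as much as the normal class. A minimal reference implementation:

```python
def macro_f1(y_true, y_pred, classes=(0, 1)):
    """Macro-averaged F1: compute F1 separately per class, then average."""
    scores = []
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        scores.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(scores) / len(scores)

score = macro_f1([0, 0, 1, 1], [0, 0, 1, 0])
```

This matches scikit-learn's `f1_score(..., average="macro")` for binary labels.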
Since the outputs of DevNet and DeepSAD are anomaly scores, the thresholds for the two methods are searched for from {0.5×, 1.0×, 1.5×, 2.0×} of the anomaly ratio to perform classification, and the best result is reported. For the framework according to non-limiting embodiments or aspects of the present disclosure, a maximum neighborhood size K = 15, a reward coefficient α = 10.0, and a window size for the baseline m = 25 were used, and the macro-averaged F1-score for the reward signal was adopted. [00165] Referring again to FIG. 9, to answer RQ1, a framework according to non-limiting embodiments or aspects of the present disclosure (e.g., “AnoMix”) is compared to cutting-edge synthetic oversampling methods for anomaly detection. As shown in FIG. 9, the macro-averaged precision, recall, and F1-score of each data augmentation method across 3 different classifiers are tabulated. The performances without data augmentation on the original classifiers are also reported to show insights into how different classifiers impact the performances. In general, the framework according to non-limiting embodiments or aspects of the present disclosure is able to significantly outperform all of the baseline augmentation methods and achieve at least 8.5% and at most 29.4% improvements on the F1-score. Based on FIG. 9, the following observations can be made. [00166] First, by comparing the baseline augmentation methods with the classifiers without data augmentation, it can be observed that the performance of the baseline augmentation methods is generally inferior to that of the classifiers trained without data augmentation. Specifically, the average F1-scores of the baseline augmentation methods are consistently lower than those of the vanilla classifiers on the five datasets. The only exception is MixBoost with the KNN classifier, which is due to its decent performance on the SMTP dataset.
Further investigation into this phenomenon suggests that randomly mixing up normal samples with anomalies when anomalies are extremely sparse is able to create beneficial synthetic normal samples that consolidate the decision boundary. This supports the claim that the existing data augmentation methods are not capable of handling the diverse behavior of anomalies and may lead to noisy synthetic samples, but that generalizing label anomaly information by mixing up normal samples with anomalies could alleviate the problem. [00167] Second, by comparing a framework according to non-limiting embodiments or aspects of the present disclosure (e.g., “AnoMix”) with the vanilla classifiers without data augmentation, it can be observed that a framework according to non-limiting embodiments or aspects of the present disclosure consistently outperforms all of the vanilla classifiers. On the five datasets, a framework according to non-limiting embodiments or aspects of the present disclosure improves the F1-score of the KNN, XGBoost, and MLP classifiers by 17.1%, 16.7%, and 12.8%, respectively. This phenomenon suggests that a framework according to non-limiting embodiments or aspects of the present disclosure is able to adaptively create synthetic samples for different classifiers toward performance improvements. In addition, it is also observed that the KNN classifier has the maximum improvement, which suggests that the nearest-neighbor exploration of the transition function favors a classifier with similar attributes. Another interesting observation is that the more complex the model, the smaller the improvement. A possible explanation is that complex models tend to be overconfident in their predictions, which may lead to a noisy prediction uncertainty reward and therefore mislead the learning procedure of the augmentation strategy.
[00168] Third, by comparing a framework according to non-limiting embodiments or aspects of the present disclosure (e.g., “AnoMix”) with all other data augmentation methods, it is observed that a framework according to non-limiting embodiments or aspects of the present disclosure outperforms all baselines with the three classifiers on Macro-F1 scores. Specifically, a framework according to non-limiting embodiments or aspects of the present disclosure (e.g., “AnoMix”) outperforms, on average, the F1-score of the second best augmentation method with the KNN, XGBoost, and MLP classifiers on the five datasets by 8.5%, 18.2%, and 18.9%, respectively. Because MixBoost is similar to AnoMix with a random mix-up policy, this implies that the proposed framework can learn tailored mix-up policies for different classifiers and data samples. It can also be observed that, although a framework according to non-limiting embodiments or aspects of the present disclosure (e.g., “AnoMix”) may not always be superior to all other baselines on precision and recall, its F1-scores are always the best. This phenomenon suggests that a framework according to non-limiting embodiments or aspects of the present disclosure is able to balance the trade-off between precision and recall, and therefore leads to superior F1-scores in all settings. A reason behind this is that a framework according to non-limiting embodiments or aspects of the present disclosure may adopt the Macro-F1 to form a reward signal. A user may also tailor their own metrics (e.g., precision, recall, tailored metrics, etc.) to obtain an anomaly detector that meets their requirements.
[00169] Fourth, by taking a detailed comparison between a framework according to non-limiting embodiments or aspects of the present disclosure (e.g., “AnoMix”) and SVMSMOTE and BorderlineSMOTE, it can be observed that the F1-score of a framework according to non-limiting embodiments or aspects of the present disclosure is superior to the two baselines by at least 17.1%, 18.2%, and 18.9% with the three different classifiers. This phenomenon suggests that the instance-wise prediction uncertainty in the reward function is a better approach to generate tailored beneficial synthetic samples for different classifiers. The rationale behind this is that the two baselines identify the class boundary in the label space and the hyperspace of the SVM, whereas a framework according to non-limiting embodiments or aspects of the present disclosure (e.g., “AnoMix”) identifies the boundary that is directly defined by the underlying classifier. By encouraging the policy to generate samples on the boundary defined by the classifier, it is more likely to create beneficial information that cannot be observed from the original feature space or the hyperspace of another classifier. [00170] Referring again to FIG. 10, to answer RQ2, a framework according to non-limiting embodiments or aspects of the present disclosure (e.g., “AnoMix”) is compared to the most advanced label-informed anomaly detection algorithms and baseline augmentation methods. FIG. 10 presents the macro-averaged precision, recall, and F1-score of each method on the 5 datasets. "Best baseline" refers to the horizontal baseline with the best F1-score on individual datasets. The following two observations can be made from FIG. 10. [00171] First, data augmentation methods generally outperform label-informed anomaly detectors. Comparing the best data augmentation baseline to the three label-informed approaches, the best data augmentation baseline outperforms the best label-informed algorithm by 6.2%.
This suggests that data augmentation may be a more effective way to generalize label information when incorporated with a proper strategy. Additionally, a framework according to non-limiting embodiments or aspects of the present disclosure (e.g., “AnoMix”) with a properly learned strategy achieves superior performance, which further validates the suggestion above. [00172] Second, label-informed methods achieve better precision and lower recall. By cross-comparison with FIG. 9, it can be observed that the average precision of label-informed approaches is generally superior to that of data augmentation methods on all three classifiers, whereas the average recall is generally inferior to those baselines. A possible explanation is that label-informed approaches can only exploit the labels themselves, while data augmentation methods are capable of exploring potentially beneficial information from the limited label information. This suggests that label-informed approaches tend to over-fit existing labels and are therefore suboptimal. [00173] Referring now to FIG. 11, which is a table of macro-averaged scores of a framework according to non-limiting embodiments or aspects of the present disclosure (e.g., “AnoMix”) with the KNN classifier and the ablations on the Japanese Vowels dataset, an ablation study on the Japanese Vowels dataset with the KNN classifier was conducted to answer the question RQ3. As the reward signal may be the most critical guidance toward optimal augmentation strategies, the reward function is ablated to study the contribution of individual components. Note that the random reward ablation generates a reward signal with a random floating-point number from 0 to 1, which potentially leads to a random augmentation strategy. The following observations can be made from FIG. 11. [00174] First, the proposed MDP is solvable, and the tailored RL agent is capable of learning an optimal strategy.
By comparing the random reward baseline with a framework according to non-limiting embodiments or aspects of the present disclosure (e.g., “AnoMix”), it can be observed that there are significant improvements on all three scores. As the two ablations on Equations (6) and (7) are both significantly better than the random reward baseline, this suggests that the tailored RL agent is capable of addressing the MDP toward an optimal augmentation strategy. [00175] Second, the two components in the proposed reward signal play significant roles in an optimal augmentation strategy. Specifically, both components are capable of increasing the exploitation of the label information and therefore lead to significant improvements in precision. On one hand, as the classifier may suffer from underfitting during the training procedure, learning the augmentation strategy without Equation (6) may lead to a significant performance drop. On the other hand, without considering the model status via Equation (7), it is less likely to identify potentially beneficial information for the underlying classifier, which therefore leads to lower recall. [00176] Accordingly, non-limiting embodiments or aspects of the present disclosure may provide a universal data mixer that is capable of incorporating different classifiers to exploit and explore potentially beneficial information from label information for supervised anomaly detection by using an iterative mix-up process to consider feature distribution and model status at the same time, by formulating the iterative mix-up into a Markov decision process (MDP), and/or by providing a reward function to guide the policy learning procedure while the classifier is under-fitting. To solve the MDP, non-limiting embodiments or aspects of the present disclosure provide a deep actor-critic framework to optimize on a discrete-continuous action space.
In this way, non-limiting embodiments or aspects of the present disclosure may generalize label information by simultaneously traversing the feature space while considering the model status, formulate the iterative mix-up into a Markov decision process and design a combinatorial reward signal to guide the mix-up process, and/or tailor a deep reinforcement learning algorithm to address the discrete-continuous action space for learning an optimal mix-up policy. [00177] Although embodiments or aspects have been described in detail for the purpose of illustration and description, it is to be understood that such detail is solely for that purpose and that embodiments or aspects are not limited to the disclosed embodiments or aspects, but, on the contrary, are intended to cover modifications and equivalent arrangements that are within the spirit and scope of the appended claims. For example, it is to be understood that the present disclosure contemplates that, to the extent possible, one or more features of any embodiment or aspect can be combined with one or more features of any other embodiment or aspect. In fact, any of these features can be combined in ways not specifically recited in the claims and/or disclosed in the specification. Although each dependent claim listed below may directly depend on only one claim, the disclosure of possible implementations includes each dependent claim in combination with every other claim in the claim set.