


Title:
TEXT TO SPEECH SYNTHESIS WITHOUT USING PARALLEL TEXT-AUDIO DATA
Document Type and Number:
WIPO Patent Application WO/2024/072481
Kind Code:
A1
Abstract:
An unsupervised text-to-speech system utilizes a lexicon to map input text to a phoneme sequence, which is expanded to a frame-level forced alignment with a speaker-dependent duration model. An alignment mapping module converts the forced alignment to an unsupervised alignment (UA). A Conditional Disentangled Sequential Variational Auto-encoder (C-DSVAE), serving as the self-supervised TTS acoustic model, then takes the predicted UA and a target speaker embedding to generate a mel spectrogram, which is ultimately converted to a waveform with a neural vocoder.

Inventors:
ZHANG CHUNLEI (US)
LIAN JIACHEN (US)
YU DONG (US)
Application Number:
PCT/US2023/016025
Publication Date:
April 04, 2024
Filing Date:
March 23, 2023
Assignee:
TENCENT AMERICA LLC (US)
International Classes:
G10L13/10; G06N3/088; G10L13/02; G10L15/14; G10L15/16; G10L15/18; G10L15/187
Foreign References:
US 2020/0342849 A1 (2020-10-29)
US 2009/0248395 A1 (2009-10-01)
US 2020/0357381 A1 (2020-11-12)
US 2016/0093304 A1 (2016-03-31)
US 2018/0336197 A1 (2018-11-22)
US 2021/0049999 A1 (2021-02-18)
US 2022/0108680 A1 (2022-04-07)
US 2021/0304769 A1 (2021-09-30)
Other References:
SERGIOS KARAGIANNAKOS: "Speech Synthesis: A review of the best text to speech architectures with Deep Learning", AI SUMMER, 13 May 2021 (2021-05-13), XP093158719, Retrieved from the Internet
Attorney, Agent or Firm:
RABENA, John F. et al. (US)
Claims:
WHAT IS CLAIMED IS:

1. An unsupervised text to speech method performed by at least one processor and comprising: receiving an input text; generating an acoustic model comprising: breaking the input text into at least one composite sound of a target language via a lexicon; predicting a duration of speech generated from the input text; aligning the at least one composite sound to regularize the input text to follow the sounds of the target language as an aligned output; auto-encoding the aligned output and the duration of speech generated from the target input text to an output waveform; and outputting a sound from the output waveform.

2. The unsupervised text to speech method of claim 1, wherein predicting the duration of speech comprises: sampling a speaker pool containing at least one voice; and calculating the duration of speech by mapping the lexicon sounds with a length of an input text and the speaker pool.

3. The unsupervised text to speech method of claim 1, wherein the lexicon contains at least one phoneme sequence.

4. The unsupervised text to speech method of claim 1, wherein the auto-encoding the aligned output further comprises: predicting an unsupervised alignment which aligns the sounds of the target language with the duration of speech; encoding the input text; encoding a prior content with the output of the predicted unsupervised alignment; encoding a posterior content with the encoded input text; decoding the prior content and posterior content; generating a mel spectrogram from the decoded prior content and posterior content; and processing the mel spectrogram through a neural vocoder to generate a waveform.

5. The unsupervised text to speech method of claim 1, wherein the target input text is selected from a group consisting of: a book, a text message, an email, a newspaper, a printed paper, and a logo.

6. The unsupervised text to speech method of claim 1, wherein the aligning further comprises: mapping the text as a forced alignment; and converting the forced alignment to an unsupervised alignment.

7. The unsupervised text to speech method of claim 2, wherein the predicted duration of speech is calculated in at least one logarithmic domain.

8. An unsupervised text to speech device comprising: at least one memory configured to store computer program code; at least one processor configured to operate as instructed by the computer program code, the computer program code including: acoustic modeling code configured to cause the at least one processor to generate an acoustic model having at least one lexicon including sounds of a target language, the acoustic modeling code further including: duration predictor code configured to cause the at least one processor to predict a duration of speech generated from a target input text; alignment code configured to cause the at least one processor to regularize the input text to follow the sounds of the target language as an aligned output; and auto-encoder code configured to cause the at least one processor to transform the aligned output and the duration of speech generated from the target input text to an output waveform.

9. The unsupervised text to speech device of claim 8, wherein the duration predictor code further includes duration calculator code configured to cause the at least one processor to calculate the duration of the speech by mapping the lexicon sounds with a length of an input text, wherein the duration predictor code further causes the processor to predict the duration of speech based on speaker pool data containing at least one sampled voice.

10. The unsupervised text to speech device of claim 8, wherein the lexicon comprises at least one phoneme sequence.

11. The unsupervised text to speech device of claim 8, wherein the auto-encoder code further comprises: predicted unsupervised alignment code configured to cause the at least one processor to align the sounds of the target language with the duration of speech; shared encoder code configured to cause the at least one processor to encode the input text; prior content encoder code configured to cause the at least one processor to encode the output of the predicted unsupervised alignment; posterior content encoder code configured to cause the at least one processor to encode the output of the shared encoder; decoder code configured to cause the at least one processor to combine the output of the prior content encoder and the posterior content encoder and generate a mel spectrogram; and a neural vocoder which generates a waveform from the mel spectrogram.

12. The unsupervised text to speech device of claim 8, wherein the target input text is selected from a group consisting of: a book, a text message, an email, a newspaper, a printed paper, and a logo.

13. The unsupervised text to speech device of claim 8, wherein the alignment code is configured to cause the at least one processor to: map the text as a forced alignment; and convert the forced alignment to an unsupervised alignment.

14. The unsupervised text to speech device of claim 9, wherein the duration predictor code is configured to cause the processor to predict the duration of speech in at least one logarithmic domain.

15. A non-transitory computer readable medium having instructions stored therein, which when executed by a processor cause the processor to: receive an input text; generate an acoustic model comprising: break the input text into at least one composite sound of a target language via a lexicon; predict a duration of speech generated from the input text; align the at least one composite sound to regularize the input text to follow the sounds of the target language as an aligned output; auto-encode the aligned output and the duration of speech generated from the input text as an output waveform; and output a sound from the output waveform.

16. The non-transitory computer readable medium according to claim 15, wherein predicting the duration comprises: sampling a speaker pool containing at least one voice; and calculating the duration of speech by mapping the lexicon sounds with a length of an input text and the speaker pool.

17. The non-transitory computer readable medium according to claim 15, wherein the lexicon comprises at least one phoneme sequence.

18. The non-transitory computer readable medium according to claim 15, wherein the instructions are configured to further cause the processor to: predict an unsupervised alignment which aligns the sounds of the target language with the duration of speech; encode the input text; encode a prior content with the output of the predicted unsupervised alignment; encode a posterior content with the encoded input text; decode the prior content and posterior content; generate a mel spectrogram from the decoded prior content and posterior content; and process the mel spectrogram through a neural vocoder to generate a waveform.

19. The non-transitory computer readable medium according to claim 15, wherein the target input text is selected from a group consisting of: a book, a text message, an email, a newspaper, a printed paper, and a logo.

20. The non-transitory computer readable medium according to claim 15, wherein the instructions further cause the processor to: map the text as a forced alignment; and convert the forced alignment to an unsupervised alignment.

Description:
TEXT TO SPEECH SYNTHESIS WITHOUT USING PARALLEL TEXT-AUDIO DATA

CROSS-REFERENCE TO RELATED APPLICATION

This application is based on and claims priority to U.S. Patent Application No. 17/953,851, filed on September 27, 2022, the disclosure of which is incorporated by reference herein in its entirety.

TECHNICAL FIELD

[0001] The present disclosure relates generally to text to speech, and more particularly to methods and apparatuses for converting text to sound.

BACKGROUND OF THE INVENTION

[0002] Text-to-speech (TTS) synthesis plays an important role in human-computer interaction. With the continuous development of neural-based TTS systems (e.g., Tacotron, DurIAN, FastSpeech, or more recently the Glow-TTS series), high-fidelity synthetic speech has reduced the gap between machine-generated speech and human speech. This is especially true for languages with rich resources (e.g., languages with sizeable high-quality parallel speech and textual data). Usually, a supervised TTS system requires dozens of hours of single-speaker high-quality data to generate a quality performance. However, collecting and labeling such data is a non-trivial task, time-consuming, and expensive. Therefore, current supervised solutions still have limitations with respect to the demanding needs of ubiquitous deployment of customized speech synthesizers for AI assistants, gaming, or entertainment industries. Natural, flexible, and controllable TTS pathways become more essential when facing these diverse needs.

SUMMARY OF THE INVENTION

[0003] According to embodiments, systems and methods are provided for an unsupervised text to speech method performed by at least one processor and comprising receiving an input text; generating an acoustic model comprising breaking the input text into at least one composite sound of a target language via a lexicon; predicting a duration of speech generated from the input text; aligning the at least one composite sound to regularize the input text to follow the sounds of the target language as an aligned output; auto-encoding the aligned output and the duration of speech generated from the target input text to an output waveform; and outputting a sound from the output waveform.

[0004] In some embodiments, predicting the duration of speech comprises sampling a speaker pool containing at least one voice; and calculating the duration of speech by mapping the lexicon sounds with a length of an input text and the speaker pool.

[0005] According to some embodiments, the lexicon contains at least one phoneme sequence.

[0006] According to some embodiments, the auto-encoding further comprises predicting an unsupervised alignment which aligns the sounds of the target language with the duration of speech; encoding the input text; encoding a prior content with the output of the predicted unsupervised alignment; encoding a posterior content with the encoded input text; decoding the prior content and posterior content; generating a mel spectrogram from the decoded prior content and posterior content; and processing the mel spectrogram through a neural vocoder to generate a waveform.

[0007] According to some embodiments, the target input text is selected from a group consisting of: a book, a text message, an email, a newspaper, a printed paper, and a logo.

[0008] According to some embodiments, the aligning comprises mapping the text as a forced alignment; and converting the forced alignment to an unsupervised alignment.

[0009] According to some embodiments, the predicted duration is calculated in at least one logarithmic domain.

BRIEF DESCRIPTION OF THE DRAWINGS

[0010] Fig. 1 is a system overview of an embodiment of the unsupervised text to speech system.

[0011] Fig. 2 is a block diagram of an embodiment of the process of the unsupervised text to speech system.

[0012] Fig. 3 is an embodiment of the C-DSVAE system and training.

[0013] Fig. 4 is an embodiment of voice conversion of the unsupervised text to speech system.

[0014] Fig. 5 is an embodiment of the alignment driven voice generation.

[0015] Fig. 6 is a block diagram of an embodiment of the unsupervised text to speech system.

DETAILED DESCRIPTION OF THE INVENTION

[0001] The following detailed description of example embodiments refers to the accompanying drawings. The same reference numbers in different drawings may identify the same or similar elements.

[0002] The foregoing disclosure provides illustration and description, but is not intended to be exhaustive or to limit the implementations to the precise form disclosed. Modifications and variations are possible in light of the above disclosure or may be acquired from practice of the implementations. Further, one or more features or components of one embodiment may be incorporated into or combined with another embodiment (or one or more features of another embodiment). Additionally, in the flowcharts and descriptions of operations provided below, it is understood that one or more operations may be omitted, one or more operations may be added, one or more operations may be performed simultaneously (at least in part), and the order of one or more operations may be switched.

[0003] It will be apparent that systems and/or methods, described herein, may be implemented in different forms of hardware, firmware, or a combination of hardware and software. The actual specialized control hardware or software code used to implement these systems and/or methods is not limiting of the implementations. Thus, the operation and behavior of the systems and/or methods were described herein without reference to specific software code. It is understood that software and hardware may be designed to implement the systems and/or methods based on the description herein.

[0004] Even though particular combinations of features are recited in the claims and/or disclosed in the specification, these combinations are not intended to limit the disclosure of possible implementations. In fact, many of these features may be combined in ways not specifically recited in the claims and/or disclosed in the specification. Although each dependent claim listed below may directly depend on only one claim, the disclosure of possible implementations includes each dependent claim in combination with every other claim in the claim set.

[0005] No element, act, or instruction used herein should be construed as critical or essential unless explicitly described as such. Also, as used herein, the articles “a” and “an” are intended to include one or more items, and may be used interchangeably with “one or more.” Where only one item is intended, the term “one” or similar language is used. Also, as used herein, the terms “has,” “have,” “having,” “include,” “including,” or the like are intended to be open-ended terms. Further, the phrase “based on” is intended to mean “based, at least in part, on” unless explicitly stated otherwise. Furthermore, expressions such as “at least one of [A] and [B]” or “at least one of [A] or [B]” are to be understood as including only A, only B, or both A and B.

[0016] Embodiments of the present disclosure are directed to an unsupervised text to speech system developed to overcome the problems discussed above. Embodiments of the present disclosure include an unsupervised text-to-speech (UTTS) framework, which does not require text-audio pairs for the TTS acoustic modeling (AM). In some embodiments, this UTTS may be a multi-speaker speech synthesizer developed from the perspective of disentangled speech representation learning. The framework offers a flexible choice of a speaker's duration model, timbre feature (identity), and content for TTS inference. The unsupervised text to speech system may leverage recent advancements in self-supervised speech representation learning as well as speech synthesis front-end techniques for the system development.

[0017] Specifically, in some embodiments, the unsupervised text to speech system may utilize a lexicon to map input text to the phoneme sequence, which is expanded to the frame-level forced alignment (FA) with a speaker-dependent duration model. The input text may be any type of text, such as a book, a text message, an email, a newspaper, a printed paper, a logo, or any other alphabetic or word-representative pictogram. Next, an alignment mapping module may convert the FA to the unsupervised alignment (UA). Finally, a Conditional Disentangled Sequential Variational Auto-encoder (C-DSVAE), serving as the self-supervised TTS AM, may take the predicted UA and a target speaker embedding to generate the mel spectrogram, which is ultimately converted to a waveform with a neural vocoder. Unsupervised text-to-speech does not require parallel speech and textual data for training the TTS acoustic models (AM). Thus, the unsupervised text to speech system enables speech synthesis without using a paired TTS corpus.
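To make the data flow just described concrete, the following is a minimal inference sketch. The module names and interfaces (lexicon, duration_model, fa2ua, c_dsvae, vocoder) are hypothetical placeholders used only to illustrate the ordering of the stages, not the actual implementation.

```python
# Hypothetical end-to-end UTTS inference sketch (module interfaces assumed).
def utts_synthesize(text, speaker_embedding,
                    lexicon, duration_model, fa2ua, c_dsvae, vocoder):
    # 1. Lexicon maps input text to a phoneme sequence.
    phonemes = lexicon.text_to_phonemes(text)
    # 2. Speaker-dependent duration model expands phonemes to the frame-level
    #    forced alignment (FA): each phoneme repeated for its predicted frames.
    durations = duration_model.predict(phonemes, speaker_embedding)
    forced_alignment = [p for p, d in zip(phonemes, durations) for _ in range(d)]
    # 3. Alignment mapping module converts FA to the unsupervised alignment (UA).
    unsupervised_alignment = fa2ua.convert(forced_alignment)
    # 4. C-DSVAE acoustic model generates a mel spectrogram from UA + speaker embedding.
    mel = c_dsvae.generate(unsupervised_alignment, speaker_embedding)
    # 5. Neural vocoder converts the mel spectrogram to a waveform.
    return vocoder(mel)
```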

[0018] Fig. 1 illustrates an exemplary system 100 of an embodiment for using the unsupervised text to speech. The exemplary system 100 may be one of a variety of systems such as a personal computer, a mobile device, a cluster of computers, a server, an embedded device, an ASIC, a microcontroller, or any other device capable of running code. Bus 110 connects the exemplary system 100 together such that all the components may communicate with one another. The bus 110 connects the processor 120, the memory 130, the storage component 140, the input component 150, the output component 160, and the communication interface 170.

[0019] The processor 120 may be a single processor, a processor with multiple processors inside, a cluster of more than one processor, and/or distributed processing. The processor carries out the instructions stored in both the memory 130 and the storage component 140. The processor 120 operates as the computational device, carrying out operations for the unsupervised text to speech process. Memory 130 provides fast storage and retrieval; fast access to any of the memory devices can be enabled through the use of cache memory, which can be closely associated with one or more CPUs. Storage component 140 may be any longer-term storage such as an HDD, an SSD, magnetic tape, or any other long-term storage format.

[0020] Input component 150 may be any file type or signal from a user interface component such as a camera or text-capturing equipment. Output component 160 outputs the processed information to the communication interface 170. The communication interface may be a speaker or other communication device which may present information to a user or another observer, such as another computing system.

[0021] Fig. 2 details process steps of an exemplary embodiment of an unsupervised text-to-speech process. The process may start at step S110, where text is inputted into the unsupervised text-to-speech (UTTS) system. The input text may be any text in any language for which there are spoken words. The process proceeds to step S120, where the text is broken down into phonemes, which may correspond to distinct sounds of the language and of each word. After the phonemes are determined, the process determines the length of the speech to be produced. In the case of lengthy text or words with a large number of sounds and/or syllables, the process may determine that the length of speech will be longer. In the case of a short amount of text or few sounds and/or syllables, the process may determine that the length of speech will be shorter.

[0022] The process proceeds from step S130 to step S140, where an unsupervised alignment is performed. As an example, forced alignment (FA) may refer to the process by which orthographic transcriptions are aligned to audio recordings to automatically generate phone-level segmentation. Unsupervised alignment (UA) may refer to a process to condition and regularize the output of a text to speech system to follow the phonetic structure. Next, the mapping step S150 converts the FA to the unsupervised alignment. Finally, a mel spectrogram is generated at step S160, which may be played out of a speaker, finishing the text to speech process.

[0023] Fig. 3 is an embodiment of the Conditional Disentangled Sequential Variational Auto-encoder (C-DSVAE) system and training. The backbone of this encoder may be the DSVAE architecture, which consists of a shared encoder 305, a posterior speaker encoder 320, a posterior content encoder 325, a prior speaker encoder 315, a prior content encoder 330, a decoder 345, and finally the synthesized mel speech 350. The mel spectrogram 300 may be passed into the shared encoder 305, followed by the posterior speaker encoder 320 and the posterior content encoder 325, which encode the speaker posterior distribution q(z_s|X) and the content posterior distribution q(z_c|X), respectively. After the distributions are generated, the distributions pass respectively to the speaker embedding 335 and the content embedding 340. Next, the data acted upon by the speaker embedding 335 and the content embedding 340 are decoded by the decoder 345, which results in synthesized mel speech 350.

[0024] For the prior modeling, the prior speaker encoder 315 may encode the speaker prior p(z_s) and the prior content encoder 330 may encode the content prior p(z_c). During the decoding/generation stage, the speaker embedding 335 (z_s) and content embedding 340 (z_c) are sampled from either the posteriors q(z_s|X) and q(z_c|X), or the priors p(z_s) and p(z_c), and their concatenation is passed into the decoder D to generate the synthesized speech.
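As an illustration of this decoding/generation stage, the sketch below samples the speaker and content embeddings and concatenates them before decoding. The function and module names are hypothetical and indicate only the data flow, not the actual implementation; the reparameterized sampling and the time-broadcasting of z_s are assumptions.

```python
import torch

def generate_mel(shared_enc, spk_post_enc, cnt_post_enc, decoder, mel,
                 spk_prior=None, cnt_prior=None, use_posterior=True):
    """Hypothetical C-DSVAE generation: sample z_s and z_c, concatenate, decode."""
    if use_posterior:
        h = shared_enc(mel)                          # shared encoder features, (B, T, H)
        zs_mean, zs_std = spk_post_enc(h)            # speaker posterior q(z_s | X), (B, Ds)
        zc_mean, zc_std = cnt_post_enc(h)            # content posterior q(z_c | X), (B, T, Dc)
        z_s = zs_mean + zs_std * torch.randn_like(zs_std)   # reparameterized samples
        z_c = zc_mean + zc_std * torch.randn_like(zc_std)
    else:
        z_s = spk_prior.sample()                     # speaker prior p(z_s)
        z_c = cnt_prior.sample()                     # content prior p(z_c | A_X)
    z_s = z_s.unsqueeze(1).expand(-1, z_c.size(1), -1)       # broadcast z_s over time
    z = torch.cat([z_s, z_c], dim=-1)                # frame-wise concatenation
    return decoder(z)                                # synthesized mel spectrogram
```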

[0025] In some embodiments, in order to generate phonetically meaningful and continuous speech with stable vocalizations, the unsupervised text to speech system may use the acoustic alignment 310 (A_X) as the condition for the content prior distribution. In some embodiments, the unsupervised text to speech system may use two types of acoustic alignment: forced alignment (FA) and unsupervised alignment (UA). In the present embodiment, A_X^FA may represent the forced alignment of the utterance X. The forced alignment may be a Montreal Forced Aligner (MFA) alignment, extracted from a given audio-text pair. The unsupervised text to speech system adopts the WavLM-Base model to extract the acoustic features. In some embodiments, in order to capture the robust and long-range temporal relations over acoustic units and to generate more continuous speech, the unsupervised text to speech system adopts Masked Unit Prediction (MUP) when training the prior content encoder 330 (E_cp).

[0026] Computationally, the variable M(A_X) ⊆ [T] may denote the collection of masked indices among the T frames for a specific condition A_X, where the masking configuration is consistent. The variable Ã_X may represent a corrupted version of A_X, in which A_X,t will be masked out if t ∈ M(A_X). The variable z_cp may represent a sample of the output of E_cp (i.e., z_cp ~ E_cp(A_X)). The negative log-likelihood (NLL) loss L_MUP-C for condition modeling is defined in Eq. 1, where p(z_cp,t | Ã_X,t) is the softmax categorical distribution and E_{A_X} denotes the expectation over all A_X. An embodiment of the masked prediction loss may be formulated as follows:
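The equation itself does not survive in this text. A plausible reconstruction of Eq. 1 from the definitions in paragraph [0026], offered as an assumption rather than a verbatim reproduction, is:

```latex
% Hedged reconstruction of Eq. 1: masked-unit-prediction NLL over the masked
% positions of the corrupted condition, averaged over all alignments A_X.
\mathcal{L}_{\mathrm{MUP\text{-}C}}
  = \mathbb{E}_{A_X}\Big[ -\sum_{t \in M(A_X)} \log p\big(z_{cp,t} \,\big|\, \tilde{A}_{X,t}\big) \Big]
```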

[0028] In some embodiments, the C-DSVAE loss objective may be formulated as follows:
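The corresponding equation likewise does not survive in this text. Assuming the loss follows the usual conditional DSVAE form, one plausible sketch combines a reconstruction term, KL terms for the speaker and content posteriors (the latter conditioned on the alignment A_X), and the masked prediction loss above; the weights α and β are assumed hyperparameters, not values taken from the disclosure:

```latex
% Hedged sketch of a C-DSVAE objective (assumed form, not a verbatim reproduction).
\mathcal{L}_{\mathrm{C\text{-}DSVAE}}
  = \mathbb{E}_{q(z_s \mid X)\, q(z_c \mid X)}\big[-\log p(X \mid z_s, z_c)\big]
  + \alpha\, \mathrm{KL}\big(q(z_s \mid X)\,\|\,p(z_s)\big)
  + \beta\, \mathrm{KL}\big(q(z_c \mid X)\,\|\,p(z_c \mid A_X)\big)
  + \mathcal{L}_{\mathrm{MUP\text{-}C}}
```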

[0031] Fig. 4 is an embodiment of voice conversion of the unsupervised text to speech system. The voice conversion includes a target mel spectrogram 400 and a source mel spectrogram 405, which are fed into a shared encoder 410/415 (differing depending on the separate paths). After being processed by the shared encoder 410, the target mel spectrogram 400 is passed to the posterior speaker encoder 420 and then the speaker embedding 430. The source mel spectrogram 405 is passed through the posterior content encoder 425 and then the content embedding 435. The two branches are then fed together into the decoder 440, which produces the synthesized mel speech 445.

[0032] Fig. 5 is an embodiment of the alignment-driven voice generation. The generation includes a target mel spectrogram 500 and an acoustic alignment 505, which are fed into a shared encoder 510/515 (differing depending on the separate paths). After being processed by the shared encoder 510, the target mel spectrogram 500 is passed to the posterior speaker encoder 520 and then the speaker embedding 530. The acoustic alignment 505 is passed through the posterior content encoder 525 and then the content embedding 535. The two branches are then fed together into the decoder 540, which produces the synthesized mel speech 545.

[0033] Fig. 6 is a block diagram of an embodiment of the unsupervised text to speech system. First, an input text 600 is fed into the system. The input text 600 is broken down to obtain the phoneme sequence of the text transcription with the lexicon 605. The lexicon 605 may be one of the Librispeech lexicon, CMUdict, Amazon Polly, or other defined lexicons. The phoneme sequence is then converted to a list of token indices.
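For illustration, a minimal sketch of the lexicon lookup and token-index conversion follows; the toy lexicon entries and the token table are hypothetical placeholders, not entries from any of the lexicons named above.

```python
# Hypothetical lexicon lookup: text -> phoneme sequence -> token indices.
LEXICON = {"hello": ["HH", "AH", "L", "OW"], "world": ["W", "ER", "L", "D"]}  # toy entries
PHONEME_TO_ID = {p: i for i, p in enumerate(sorted({p for ps in LEXICON.values() for p in ps}))}

def text_to_token_ids(text):
    phonemes = []
    for word in text.lower().split():
        phonemes.extend(LEXICON[word])            # lexicon maps word -> phonemes
    return [PHONEME_TO_ID[p] for p in phonemes]   # phonemes -> token indices

print(text_to_token_ids("hello world"))
```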

[0034] At the same time, the duration predictor 620 takes the phoneme sequence (the lexicon 605 output) as well as sampled 615 information from the speaker pool 610 as input to predict the speaker-aware duration for each phoneme. Specifically, the phoneme sequence is first passed into a trainable look-up table to obtain the phoneme embeddings. Afterwards, a four-layer multi-head attention (MHA) module is applied to extract the latent phoneme representation. A two-layer conv-1D module then takes the summation of the latent phoneme representation and the speaker embedding sampled from the speaker pool. A linear layer is finally applied to generate the predicted duration in the logarithmic domain. The predicted duration is output to the Speaker-Aware Duration Prediction (SADP) 625 as the predicted length of the speech.

[0035] The phoneme sequence together with a random speaker embedding is passed into the Speaker-Aware Duration Prediction (SADP) 625, which delivers the predicted forced alignment (FA). The forced alignment to unsupervised alignment (FA2UA) module 630 takes the predicted FA as input and predicts the unsupervised alignment 635 (UA). The UA 635 along with an input utterance 640 is fed into the Conditional Disentangled Sequential Variational Auto-encoder (C-DSVAE) 670 to generate the mel spectrogram. The predicted unsupervised alignment 635 is fed to the prior content encoder 650 and then to the decoder 660. At the same time, the input utterance 640 is fed to the shared encoder 645, then to the posterior speaker encoder 655, and finally meets the data of the prior content encoder 650 in the decoder 660 to generate a mel spectrogram. A neural vocoder 665 is then applied to convert the mel spectrogram to a waveform. It is observable that the proposed UTTS system performs zero-shot voice cloning for the target utterance. The modules, including the C-DSVAE 670, are trained separately. The detailed model architectures are presented in Table 1.
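Before turning to Table 1, a minimal sketch of the duration predictor described in paragraph [0034] is shown below. The layer dimensions are assumed for illustration and are not taken from Table 1, and standard Transformer encoder layers stand in for the four-layer MHA module.

```python
import torch
import torch.nn as nn

class DurationPredictor(nn.Module):
    """Sketch of the speaker-aware duration predictor (dimensions assumed)."""
    def __init__(self, n_phonemes, d_model=256, n_heads=4):
        super().__init__()
        self.embed = nn.Embedding(n_phonemes, d_model)            # trainable look-up table
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.mha = nn.TransformerEncoder(layer, num_layers=4)     # 4-layer multi-head attention
        self.convs = nn.Sequential(                               # 2-layer conv-1D module
            nn.Conv1d(d_model, d_model, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(d_model, d_model, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.proj = nn.Linear(d_model, 1)                         # predicted log-duration

    def forward(self, phoneme_ids, speaker_embedding):
        x = self.mha(self.embed(phoneme_ids))                     # (B, L, d_model)
        x = x + speaker_embedding.unsqueeze(1)                    # sum with speaker embedding
        x = self.convs(x.transpose(1, 2)).transpose(1, 2)
        return self.proj(x).squeeze(-1)                           # log-domain durations (B, L)
```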

Table 1

[0036] Table 1 illustrates the UTTS system in detail. For Conv1D, the configuration is (filter size, kernel size, padding, and stride). For Multi-Head-Attention (MHA), the configuration may be model dimension, key dimension, value dimension, and number of heads. For LSTM/BiLSTM/RNN, the configuration may be hidden dimension and number of layers. For a Dense layer, the configuration may be the output dimension.

[0037] The UTTS system in Table 1 breaks down the architecture into further component parts: the C-DSVAE 670, the duration predictor, and the FA2UA. The C-DSVAE 670 comprises a shared encoder 645, a prior speaker encoder, a posterior speaker encoder 655, a posterior content encoder, a prior content encoder 650, a prior content decoder, and a posterior content decoder. The shared encoder 645 comprises a Conv1D, an InstanceNorm2D, and a ReLU. The posterior speaker encoder comprises a BiLSTM, an average pooling, a Dense(64) to the mean, and a Dense(64) to the standard deviation. The prior speaker encoder consists of an identity mapping. The posterior content encoder comprises a BiLSTM, an RNN, a Dense(64) to the mean, and a Dense(64) to the standard deviation. The prior content encoder comprises a BiLSTM, a Dense(64) to the mean, a Dense(64) to the standard deviation, and a linear classifier. The prior content decoder comprises an LSTM (512, 1), an LSTM (1024, 2), and a Dense(80). The posterior content decoder comprises a Conv1D, a tanh, and an InstanceNorm2D. Further, the duration predictor comprises nn.Embedding, MHA, a Conv1D, and a Dense(1). Finally, the FA2UA comprises nn.Embedding, a BiLSTM, and a linear classifier.
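To illustrate part of the layer stack just described, the following is a sketch of the shared encoder and the posterior speaker encoder. Channel counts, the use of InstanceNorm1d in place of the table's InstanceNorm2D, and the interpretation of the second Dense(64) output as a log-standard-deviation are assumptions made for a runnable example.

```python
import torch
import torch.nn as nn

class SharedEncoder(nn.Module):
    """Sketch: Conv1D -> InstanceNorm -> ReLU over mel frames (channel sizes assumed)."""
    def __init__(self, n_mels=80, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_mels, hidden, kernel_size=5, padding=2),
            nn.InstanceNorm1d(hidden),
            nn.ReLU(),
        )

    def forward(self, mel):                 # mel: (B, T, n_mels)
        return self.net(mel.transpose(1, 2)).transpose(1, 2)   # (B, T, hidden)

class PosteriorSpeakerEncoder(nn.Module):
    """Sketch: BiLSTM -> average pooling -> Dense(64) mean and Dense(64) std."""
    def __init__(self, hidden=256, z_dim=64):
        super().__init__()
        self.bilstm = nn.LSTM(hidden, hidden, batch_first=True, bidirectional=True)
        self.to_mean = nn.Linear(2 * hidden, z_dim)
        self.to_log_std = nn.Linear(2 * hidden, z_dim)

    def forward(self, h):                   # h: (B, T, hidden)
        out, _ = self.bilstm(h)
        pooled = out.mean(dim=1)            # average pooling over time
        return self.to_mean(pooled), torch.exp(self.to_log_std(pooled))  # mean, std of q(z_s|X)
```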

[0038] During training, the MSE (mean squared error) may be adopted between the predicted duration from the duration predictor 620 and the target duration. The target duration for the text is obtained from the forced alignment extracted by the Montreal Forced Aligner (MFA) or other forced alignment methods. In some embodiments, the target duration may be in the logarithmic domain. During inference, the duration may be rounded up.

[0039] The FA2UA module 630 takes the forced alignment (FA) as input and predicts the corresponding unsupervised alignment (UA). Specifically, the FA is first passed into a learnable look-up table to obtain the FA embeddings. Subsequently, a 3-layer Bi-LSTM module may be employed to predict the UA embeddings given the FA embeddings. During training, a masked prediction training strategy is adopted to train the FA2UA module, as masked prediction is expected to be good at capturing the long-range time dependency across tokens and to encode more contextualized information for each token.
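A minimal sketch of the FA2UA module described in paragraph [0039] follows; the vocabulary sizes and hidden dimensions are assumed for illustration and are not taken from the disclosure.

```python
import torch.nn as nn

class FA2UA(nn.Module):
    """Sketch: FA token embedding -> 3-layer Bi-LSTM -> linear classifier over UA tokens."""
    def __init__(self, n_fa_tokens, n_ua_tokens, d_embed=256, d_hidden=256):
        super().__init__()
        self.embed = nn.Embedding(n_fa_tokens, d_embed)            # learnable look-up table
        self.bilstm = nn.LSTM(d_embed, d_hidden, num_layers=3,
                              batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * d_hidden, n_ua_tokens)     # softmax over UA units

    def forward(self, fa_ids):                                      # (B, T) frame-level FA tokens
        h, _ = self.bilstm(self.embed(fa_ids))
        return self.classifier(h)                                   # (B, T, n_ua_tokens) logits

# At inference, the UA token with the maximum probability is chosen at each frame:
# ua_ids = FA2UA(...)(fa_ids).argmax(dim=-1)
```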

[0040] Computationally, the unsupervised text to speech system denotes M(FA) ⊆ [T] as the collection of masked indices among the T frames for a specific forced alignment FA. The variable ~FA may be a corrupted version of FA, in which FA_i may be masked out if i ∈ M(FA). The variable UA_i may correspond to the i-th frame of FA. The negative log-likelihood (NLL) loss L_FA2UA for masked prediction training may be defined as follows, where p(UA_i | ~FA_i) is the softmax categorical distribution and E_(FA,UA) denotes the expectation over all (FA, UA) pairs. During inference, the token with the maximum probability, p(UA_i | FA_i), is chosen at each time step i to form the predicted UA sequence. Thus, the FA2UA 630 loss may be defined as follows:
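The formula itself is missing from this text. A plausible reconstruction from the surrounding definitions, offered as an assumption rather than a verbatim reproduction, is:

```latex
% Hedged reconstruction of the FA2UA masked-prediction loss from paragraph [0040]:
\mathcal{L}_{\mathrm{FA2UA}}
  = \mathbb{E}_{(FA,\,UA)}\Big[ -\sum_{i \in M(FA)} \log p\big(UA_i \,\big|\, \widetilde{FA}_i\big) \Big]
```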

[0042] The foregoing disclosure provides illustration and description, but is not intended to be exhaustive or to limit the implementations to the precise form disclosed. Modifications and variations are possible in light of the above disclosure or may be acquired from practice of the implementations.

[0043] Some embodiments may relate to a system, a method, and/or a computer readable medium at any possible technical detail level of integration. Further, one or more of the components described above may be implemented as instructions stored on a computer readable medium and executable by at least one processor (and/or may include at least one processor). The computer readable medium may include a computer-readable non-transitory storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out operations.

[0044] The computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire.

[0045] Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device.

[0046] Computer readable program code/instructions for carrying out operations may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, configuration data for integrated circuitry, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++, or the like, and procedural programming languages, such as the "C" programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects or operations.

[0047] These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, implement the operations specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to operate in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the operations specified in the flowchart and/or block diagram block or blocks.

[0048] The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the operations specified in the flowchart and/or block diagram block or blocks.

[0049] The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer readable media according to various embodiments. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical operation(s). The method, computer system, and computer readable medium may include additional blocks, fewer blocks, different blocks, or differently arranged blocks than those depicted in the Figures. In some alternative implementations, the operations noted in the blocks may occur out of the order noted in the Figures. For example, two blocks shown in succession may, in fact, be executed concurrently or substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified operations or acts or carry out combinations of special purpose hardware and computer instructions.

[0050] It will be apparent that systems and/or methods, described herein, may be implemented in different forms of hardware, firmware, or a combination of hardware and software. The actual specialized control hardware or software code used to implement these systems and/or methods is not limiting of the implementations. Thus, the operation and behavior of the systems and/or methods were described herein without reference to specific software code — it being understood that software and hardware may be designed to implement the systems and/or methods based on the description herein.