Title:
CONVERSATIONAL INTERFACE FOR CONTENT CREATION AND EDITING USING LARGE LANGUAGE MODELS
Document Type and Number:
WIPO Patent Application WO/2024/086216
Kind Code:
A1
Abstract:
Example embodiments of the present disclosure provide for an example method that includes obtaining, via a conversational campaign assistant interface, by a custom language model, natural language input. The method includes generating, by the custom language model, an output comprising a predicted user intent. The method includes determining actions to perform and determining a natural language response. The method includes transmitting, to an action component, the action data structure comprising executable instructions that cause the action component to automatically perform operations associated with completing the action. The method includes transmitting, to the conversational campaign assistant interface, the response data structure comprising the natural language response to be provided for display to a user via the conversational campaign assistant interface. The method includes obtaining user input indicative of a validation of the action data structure or the response data structure and updating the custom language model based on the user input.

Inventors:
BENT III (US)
KOC, Mehmet Levent (US)
LUO, Wei (US)
ZHOU, Xiaolan (US)
Application Number:
PCT/US2023/035397
Publication Date:
April 25, 2024
Filing Date:
October 18, 2023
Assignee:
GOOGLE LLC (US)
International Classes:
G06Q30/0273; G06N3/0455; G06Q30/0241; G06Q30/0242
Foreign References:
US 2021/0327413 A1, published 21 October 2021
US 18/322,543, filed 23 May 2023
US 17/968,472, filed 18 October 2022
Other References:
ZHANG, Xueying et al., "Automatic Product Copywriting for E-Commerce", arXiv.org, Cornell University Library, 15 December 2021, XP091118377
YOUNGMANN, Brit et al., "Algorithmic copywriting: automated generation of health-related advertisements to improve their performance", Information Retrieval, vol. 24, no. 3, 13 April 2021, pages 205-239, ISSN 1386-4564, DOI: 10.1007/s10791-021-09392-6, XP037459464
HALEEM, Abid et al., "Artificial intelligence (AI) applications for marketing: A literature-based study", International Journal of Intelligent Networks, vol. 3, 5 September 2022, pages 119-132, ISSN 2666-6030, DOI: 10.1016/j.ijin.2022.08.005, XP093107317
NIM, Alexander, "Knowledge Distillation of Language Models", 31 October 2021, XP093107553
Attorney, Agent or Firm:
FANT, Lillian G. et al. (US)
Claims:
WHAT IS CLAIMED IS:

1. A computer-implemented method, comprising: obtaining, via a conversational campaign assistant interface, by a custom language model, natural language input; generating, by the custom language model, an output comprising a predicted user intent; determining one or more actions to perform by: parsing the output generated by the custom language model; determining an action associated with the output; generating an action data structure comprising executable instructions that cause a processor to perform an operation associated with completing the action; determining a natural language response by: parsing the output generated by the custom language model; generating a response data structure comprising a natural language response to the obtained natural language input; transmitting, to an action component, the action data structure comprising executable instructions that cause the action component to automatically perform operations associated with completing the action; transmitting, to the conversational campaign assistant interface, the response data structure comprising the natural language response to be provided for display to a user via the conversational campaign assistant interface; obtaining, subsequent to transmitting the action data structure and response data structure, user input indicative of a validation of the action data structure or the response data structure; and updating the custom language model based on the user input.

2. The computer-implemented method of claim 1, wherein the action data structure comprises data to be populated within one or more fields of a content creation structured interface.

3. The computer-implemented method of any of claim 1 or claim 2, wherein the natural language input comprises a uniform resource locator (URL), and wherein the action component comprises a generative model that generates a summary of information parsed from a page associated with the URL.

4. The computer-implemented method of any of claim 1 to claim 3, wherein the action component comprises a content campaign performance model.

5. The computer-implemented method of any of claim 1 to claim 4, wherein the action component comprises a content campaign analysis model.

6. The computer-implemented method of any of claim 1 to claim 5, wherein the action component comprises a bidding strategy model.

7. The computer-implemented method of any of claim 1 to claim 6, wherein the action component comprises a generative model, wherein the generative model obtains the action data structure as an input prompt and generates an output comprising a creative asset.

8. A computer-implemented method comprising: obtaining, via a conversational campaign assistant interface, by a language model, natural language input; generating, by a custom language model, an output comprising a predicted user intent; determining one or more actions to perform by: parsing the output generated by the custom language model; determining an action associated with the output; generating an action data structure comprising executable instructions that cause a processor to perform an operation associated with completing the action; determining a natural language response by: parsing the output generated by the custom language model; generating a response data structure comprising a natural language response to the obtained natural language input; transmitting, to an action component, the action data structure comprising executable instructions that cause the action component to automatically perform operations associated with completing the action; transmitting, to the conversational campaign assistant interface, the response data structure comprising the natural language response to be provided for display to a user via the conversational campaign assistant interface; and obtaining, subsequent to transmitting the action data structure and response data structure, user input indicative of a validation of the action data structure or the response data structure.

9. The computer-implemented method of claim 8, comprising: updating the custom language model based on the user input.

10. A computer-implemented method, comprising: obtaining, via a conversational campaign assistant interface, by a language model, natural language input; generating, by a custom language model, an output comprising a predicted user intent; determining one or more actions to perform by: parsing the output generated by the custom language model; determining an action associated with the output; generating an action data structure comprising executable instructions that cause a processor to perform an operation associated with completing the action; determining a natural language response by: parsing the output generated by the custom language model; generating a response data structure comprising a natural language response to the obtained natural language input; transmitting, to an action component, the action data structure comprising executable instructions that cause the action component to automatically perform operations associated with completing the action; and transmitting, to the conversational campaign assistant interface, the response data structure comprising the natural language response to be provided for display to a user via the conversational campaign assistant interface.

11. The computer-implemented method of claim 10, wherein the action component comprises a generative model, wherein the generative model obtains the action data structure as an input prompt and generates an output comprising a creative asset.

12. The computer-implemented method of claim 11, wherein the generative model comprises a machine learning model.

13. The computer-implemented method of claim 12, wherein the generative model is trained using a knowledge distillation technique.

14. A computer-implemented method, comprising: providing input data to a first machine learned model; obtaining output data indicative of one or more suggested content item components; evaluating the first machine learned model based at least in part on the output data by: obtaining data indicative of a quality score associated with the output data of the first machine learned model; comparing the quality score associated with the output data of the first machine learned model to a threshold quality score; determining that the first machine learned model has a quality score above the threshold quality score; and in response to determining that the first machine learned model has a quality score above the threshold quality score, implementing the first machine learned model in a content creation flow.

15. The computer-implemented method of claim 14, comprising: generating an initial user interface comprising a content assistant component; obtaining data indicative of user input; processing, by the first machine learned model, the data indicative of user input; obtaining output data, from the first machine learned model, indicative of one or more content item components; transmitting data which causes the one or more content item components to be provided for display on a user interface; obtaining data indicative of user selection of approval of the one or more content item components; and generating, in response to obtaining the data indicative of the user selection of the approval of the one or more content item components, one or more content items comprising a plurality of the content item components.

16. The computer-implemented method of claim 15, wherein the content assistant component comprises one or more input fields.

17. The computer-implemented method of claim 14 or claim 15, wherein the first machine learned model has been trained using a knowledge distillation training method.

18. The computer-implemented method of any of claim 14 to claim 17, wherein the first machine learned model has been trained based at least in part on output from a pre-trained second machine learned model.

19. The computer-implemented method of claim 18, wherein the pre-trained second machine learned model is a large language model.

20. The computer-implemented method of claim 19, wherein the pre-trained second machine learned model is tuned using one or more prompts.

21. The computer-implemented method of any of claim 14 to claim 20, wherein the input data comprises at least one of free-form input or landing page content.

22. The computer-implemented method of claim 21, wherein the input data comprises natural language input.

23. The computer-implemented method of claim 14 or claim 15, wherein the one or more content item components comprises at least one of a headline or description.

24. A computer-implemented method comprising: generating an initial user interface comprising a content assistant component; obtaining data indicative of an input received from a user; processing, by a machine learned model interfacing with the content assistant component, the data indicative of the input received from the user; obtaining output data, from the machine learned model interfacing with the content assistant component, indicative of one or more content item components; transmitting data which causes the one or more content item components to be provided for display via an updated user interface; obtaining data indicative of user selection of approval of the one or more content item components; and generating, in response to obtaining the data indicative of the user selection of the approval of the one or more content item components, one or more content items.

25. The computer-implemented method of claim 24, comprising training the machine learned model using a knowledge distillation training method.

26. The computer-implemented method of claim 24 or 25, comprising training the machine learned model by: inputting labeled data into a first machine learned model; obtaining output data from the first machine learned model; comparing the output data from the first machine learned model to output data from a second machine learned model; and adjusting the first machine learned model based on comparing the output data of the first and second machine learned models.

27. The computer-implemented method of claim 26, wherein the labeled data comprises data that is output by a second machine learned model, wherein the second machine learned model is a pre-trained model.

28. The computer-implemented method of claim 27, wherein the output by the second machine learned model comprises annotated data.

29. The computer-implemented method of any of claim 26 to claim 28, wherein the labeled data comprises at least one of (i) business and products descriptions, (ii) proxy data from websites associated with a content creator, (iii) human curated data, or (iv) free-form inputs.

30. The computer-implemented method of any of claim 24 to claim 29, wherein training the machine learned model comprises: inputting unlabeled data into the machine learned model and a pre-trained second machine learned model; obtaining output data from the machine learned model and the pre-trained second machine learned model; comparing the output data from the machine learned model to the output data from the pre-trained second machine learned model; and adjusting the machine learned model based on comparing the output data of the machine learned model and the output of the pre-trained second machine learned model.

31. The computer-implemented method of any of claim 24 to claim 30, wherein the initial user interface comprises a graphical user interface.

32. A computer-implemented method, comprising: providing input data to a first machine learned model; obtaining output data indicative of one or more suggested content item components; evaluating the first machine learned model based at least in part on the output data by: obtaining data indicative of a quality score associated with the output data of the first machine learned model; comparing the quality score associated with the output data of the first machine learned model to a threshold quality score; determining that the first machine learned model has a quality score above the threshold quality score; in response to determining that the first machine learned model has a quality score above the threshold quality score, implementing the first machine learned model in a content creation flow.

33. The computer-implemented method of claim 32, comprising: generating an initial user interface comprising a content assistant component; obtaining data indicative of an input received from a user; processing, by the first machine learned model, the data indicative of the input received from the user; obtaining output data, from the first machine learned model, indicative of one or more content item components; transmitting data which causes the one or more content item components to be provided for display on a user interface; obtaining data indicative of user selection of approval of the one or more content item components; and generating, in response to obtaining the data indicative of the user selection of the approval of the one or more content item components, one or more content items.

34. A computing system, comprising: one or more processors; and one or more non-transitory computer-readable media storing instructions that are executable to cause the one or more processors to perform operations, the operations comprising the method of any of claims 1-33.

35. One or more transitory or non-transitory computer readable media storing instructions that are executable by one or more processors to perform operations comprising the method of any of claims 1-33.

Description:
CONVERSATIONAL INTERFACE FOR CONTENT CREATION AND EDITING USING LARGE LANGUAGE MODELS

PRIORITY

[0001] The present application claims the benefit of priority to U.S. Non-Provisional Patent Application No. 18/322,543, filed May 23, 2023, which claims priority to U.S. Non-Provisional Patent Application No. 17/968,472, filed October 18, 2022, both of which are incorporated by reference herein in their entirety.

FIELD

[0002] The present disclosure relates generally to systems and methods for providing a conversational interface for content creation and editing using large language models. More particularly, the present disclosure relates to training and implementing one or more language models to facilitate a conversational interface for content creation and editing.

BACKGROUND

[0003] Computing devices can perform many tasks and provide interactive interfaces for content generation. Interactive interfaces for content generation can include a plurality of interactive components including input fields.

SUMMARY

[0004] Aspects and advantages of embodiments of the present disclosure will be set forth in part in the following description, or can be learned from the description, or can be learned through practice of the embodiments.

[0005] In one example aspect, the present disclosure provides for an example system including one or more processors and one or more memory devices storing instructions that are executable to cause the one or more processors to perform operations. In the example system, the operations include obtaining, via a conversational campaign assistant interface, by a custom language model, natural language input. In the example system, the operations include generating, by the custom language model, an output comprising a predicted user intent. In the example system, the operations include determining one or more actions to perform by parsing the output generated by the custom language model, determining an action associated with the output, and generating an action data structure comprising executable instructions that cause a processor to perform an operation associated with completing the action. In the example system, the operations include determining a natural language response by parsing the output generated by the custom language model and generating a response data structure comprising a natural language response to the obtained natural language input. In the example system, the operations include transmitting, to an action component, the action data structure comprising executable instructions that cause the action component to automatically perform operations associated with completing the action. In the example system, the operations include transmitting, to the conversational campaign assistant interface, the response data structure comprising the natural language response to be provided for display to a user via the conversational campaign assistant interface. In the example system, the operations include obtaining, subsequent to transmitting the action data structure and response data structure, user input indicative of a validation of the action data structure or the response data structure. In the example system, the operations include updating the custom language model based on the user input.

[0006] In an example aspect, the present disclosure provides for a computer-implemented method. The method includes obtaining, via a conversational campaign assistant interface, by a custom language model, natural language input. The example method includes generating, by the custom language model, an output comprising a predicted user intent. The example method includes determining one or more actions to perform by parsing the output generated by the custom language model, determining an action associated with the output, and generating an action data structure comprising executable instructions that cause a processor to perform an operation associated with completing the action. The example method includes determining a natural language response by parsing the output generated by the custom language model and generating a response data structure comprising a natural language response to the obtained natural language input. The example method includes transmitting, to an action component, the action data structure comprising executable instructions that cause the action component to automatically perform operations associated with completing the action. The example method includes transmitting, to the conversational campaign assistant interface, the response data structure comprising the natural language response to be provided for display to a user via the conversational campaign assistant interface. The example method includes obtaining, subsequent to transmitting the action data structure and response data structure, user input indicative of a validation of the action data structure or the response data structure. The example method includes updating the custom language model based on the user input.

[0007] In an example aspect, the present disclosure provides for an example non-transitory computer readable medium storing instructions that, when executed by a processor, cause the processor to perform operations. In the example non-transitory computer readable medium, the operations include generating an initial user interface comprising a content assistant component. In the example non-transitory computer readable medium, the operations include obtaining, via a conversational campaign assistant interface, by a custom language model, natural language input. In the example non-transitory computer readable medium, the operations include generating, by the custom language model, an output comprising a predicted user intent. In the example non-transitory computer readable medium, the operations include determining one or more actions to perform by parsing the output generated by the custom language model, determining an action associated with the output, and generating an action data structure comprising executable instructions that cause a processor to perform an operation associated with completing the action. In the example non-transitory computer readable medium, the operations include determining a natural language response by parsing the output generated by the custom language model and generating a response data structure comprising a natural language response to the obtained natural language input. In the example non-transitory computer readable medium, the operations include transmitting, to an action component, the action data structure comprising executable instructions that cause the action component to automatically perform operations associated with completing the action. In the example non-transitory computer readable medium, the operations include transmitting, to the conversational campaign assistant interface, the response data structure comprising the natural language response to be provided for display to a user via the conversational campaign assistant interface.
In the example non-transitory computer readable medium, the operations include obtaining, subsequent to transmitting the action data structure and response data structure, user input indicative of a validation of the action data structure or the response data structure. In the example non-transitory computer readable medium, the operations include updating the custom language model based on the user input.

[0008] In one example aspect, the present disclosure provides for an example system including one or more processors and one or more memory devices storing instructions that are executable to cause the one or more processors to perform operations. In the example system, the operations include providing input data to a first machine learned model. In the example system, the operations include obtaining output data indicative of one or more suggested content item components. In the example system, the operations include evaluating the first machine learned model based at least in part on the output data by obtaining data indicative of a quality score associated with the output data of the first machine learned model. In the example system, the operations include evaluating the first machine learned model based at least in part on the output data by comparing the quality score associated with the output data of the first machine learned model to a threshold quality score. In the example system, the operations include determining that the first machine learned model has a quality score above the threshold quality score. In the example system, the operations include, in response to determining that the first machine learned model has a quality score above the threshold quality score, implementing the first machine learned model in a content creation flow.

[0009] In some embodiments of the example system, the operations include generating an initial user interface comprising a content assistant component. In some embodiments of the example system, the operations include obtaining data indicative of user input. In some embodiments of the example system, the operations include processing, by the first machine learned model, the data indicative of user input. In some embodiments of the example system, the operations include obtaining output data, from the first machine learned model, indicative of one or more content item components. In some embodiments of the example system, the operations include transmitting data which causes the one or more content item components to be provided for display on a user interface. In some embodiments of the example system, the operations include obtaining data indicative of user selection of approval of the one or more content item components. In some embodiments of the example system, the operations include generating, in response to obtaining the data indicative of the user selection of the approval of the one or more content item components, one or more content items comprising a plurality of the content item components.

[0010] In some embodiments of the example system, the content assistant component comprises one or more input fields.

[0011] In some embodiments of the example system, the first machine learned model has been trained using a knowledge distillation training method.

[0012] In some embodiments of the example system, the first machine learned model has been trained based at least in part on output from a pre-trained second machine learned model.

[0013] In some embodiments of the example system, the pre-trained second machine learned model is a large language model.

[0014] In some embodiments of the example system, the pre-trained second machine learned model is tuned using one or more prompts.

[0015] In some embodiments of the example system, the input data comprises at least one of free-form input or landing page content.

[0016] In some embodiments of the example system, the input data comprises natural language input.

[0017] In some embodiments of the example system, the one or more content item components comprises at least one of a headline or description.

[0018] In an example aspect, the present disclosure provides for a computer-implemented method, the method includes generating an initial user interface comprising a content assistant component. The example method includes obtaining data indicative of an input received from a user. The example method includes processing, by a machine learned model interfacing with the content assistant component, the data indicative of the input received from the user. The example method includes obtaining output data, from the machine learned model interfacing with the content assistant component, indicative of one or more content item components. The example method includes transmitting data which causes the one or more content item components to be provided for display via an updated user interface. The example method includes obtaining data indicative of user selection of approval of the one or more content item components. The example method includes generating, in response to obtaining the data indicative of the user selection of the approval of the one or more content item components, one or more content items.

[0019] In some embodiments of the example method, the method can include training the machine learned model using a knowledge distillation training method.

[0020] In some embodiments of the example method, the method can include training the machine learned model by inputting labeled data into a first machine learned model. The example method can include training the machine learned model by obtaining output data from the first machine learned model. The example method can include training the machine learned model by comparing the output data from the first machine learned model to output data from a second machine learned model. The example method can include training the machine learned model by adjusting the first machine learned model based on comparing the output data of the first and second machine learned models.

[0021] In some embodiments of the example method, the labeled data comprises data that is output by a second machine learned model, wherein the second machine learned model is a pre-trained model.

[0022] In some embodiments of the example method, the output by the second machine learned model comprises annotated data.

[0023] In some embodiments of the example method, the labeled data comprises at least one of (i) business and products descriptions, (ii) proxy data from websites associated with a content creator, (iii) human curated data, or (iv) free-form inputs.

[0024] In some embodiments of the example method, the method can include training the machine learned model by inputting unlabeled data into the machine learned model and a pre-trained second machine learned model. In some embodiments of the example method, the method can include training the machine learned model by obtaining output data from the machine learned model and the pre-trained second machine learned model. In some embodiments of the example method, the method can include training the machine learned model by comparing the output data from the machine learned model to the output data from the pre-trained second machine learned model. In some embodiments of the example method, the method can include training the machine learned model by adjusting the machine learned model based on comparing the output data of the machine learned model and the output of the pre-trained second machine learned model. In some embodiments of the example method, the initial user interface comprises a graphical user interface.

[0025] In an example aspect, the present disclosure provides for an example non-transitory computer readable medium embodied in a computer-readable storage device and storing instructions that, when executed by a processor, cause the processor to perform operations. In the example non-transitory computer readable medium, the operations include providing input data to a first machine learned model. In the example non-transitory computer readable medium, the operations include obtaining output data indicative of one or more suggested content item components. In the example non-transitory computer readable medium, the operations include evaluating the first machine learned model based at least in part on the output data by obtaining data indicative of a quality score associated with the output data of the first machine learned model. In the example non-transitory computer readable medium, the operations include evaluating the first machine learned model based at least in part on the output data by comparing the quality score associated with the output data of the first machine learned model to a threshold quality score. In the example non-transitory computer readable medium, the operations include determining that the first machine learned model has a quality score above the threshold quality score. In the example non-transitory computer readable medium, the operations include, in response to determining that the first machine learned model has a quality score above the threshold quality score, implementing the first machine learned model in a content creation flow.

[0026] In the example non-transitory computer readable medium, the operations include generating an initial user interface comprising a content assistant component. In the example non-transitory computer readable medium, the operations include obtaining data indicative of an input received from a user. In the example non-transitory computer readable medium, the operations include processing, by the first machine learned model, the data indicative of the input received from the user. In the example non-transitory computer readable medium, the operations include obtaining output data, from the first machine learned model, indicative of one or more content item components. In the example non-transitory computer readable medium, the operations include transmitting data which causes the one or more content item components to be provided for display on a user interface. In the example non-transitory computer readable medium, the operations include obtaining data indicative of user selection of approval of the one or more content item components. In the example non-transitory computer readable medium, the operations include generating, in response to obtaining the data indicative of the user selection of the approval of the one or more content item components, one or more content items.

BRIEF DESCRIPTION OF THE DRAWINGS

[0027] Detailed discussion of embodiments directed to one of ordinary skill in the art is set forth in the specification, which makes reference to the appended figures, in which:

[0028] Figure 1 depicts a block diagram of an example system for providing a conversational interface for content creation and editing using large language models according to example embodiments of the present disclosure.

[0029] Figure 2 depicts a block diagram associated with machine-learned model(s) according to example embodiments of the present disclosure.

[0030] Figure 3 depicts a flowchart of an example method for training machine-learned model(s) according to example embodiments of the present disclosure.

[0031] Figure 4 depicts a flowchart of an example process flow according to example embodiments of the present disclosure.

[0032] Figure 5 depicts a flowchart of an example process flow according to example embodiments of the present disclosure.

[0033] Figures 6A-6C depict a flowchart of an example method according to example embodiments of the present disclosure.

[0034] Figure 7 depicts a flowchart of an example method according to example embodiments of the present disclosure.

[0035] Figures 8-10 depict example user interfaces according to example embodiments of the present disclosure.

[0036] Figure 11 depicts a block diagram associated with machine-learned model(s) according to example embodiments of the present disclosure.

[0037] Figure 12 depicts a flowchart of an example process flow according to example embodiments of the present disclosure.

[0038] Figure 13 depicts a flowchart of an example process flow according to example embodiments of the present disclosure.

[0039] Figures 14A-14C depict a flowchart of an example method according to example embodiments of the present disclosure.

[0040] Figures 15-20 depict example user interfaces according to example embodiments of the present disclosure.

DETAILED DESCRIPTION

[0041] Generally, the present disclosure is directed to systems and methods for generating and tuning large language models that obtain natural language user input and generate content item components as output. The computing systems and methods can include providing for display an interface for the generation and editing of content items in response to obtaining and processing data indicative of free-form user input. For example, the computing system can initiate the display of a content assistant component. The system can obtain, via the content assistant component, user input including free-form speech and generate advertisements based on analyzing the speech, extracting information from a website related to the user, and generating headlines and/or other content items (e.g., advertisement components) that a user would normally need to manually input into a plurality of fields associated with a structured interface. The structured interface can be associated with a content creation flow. The content creation flow can be used to generate content items (e.g., advertisements, advertisement campaigns). The traditional structured interface requires manual entry into a plurality of input fields (e.g., sometimes 40 or more) associated with rigid requirements.

[0042] Implementations of the present disclosure can provide for a more efficient and user-friendly generation of content items using complex machine learned models. In some implementations, analyzing the user input can be performed by machine learned models (e.g., natural language processing models). In some implementations, the systems and methods can include training and/or tuning machine learned models. By way of example, the computing system can obtain input for the model from a plurality of sources. Sources can include, for example, business and product descriptions, proxy data from websites associated with a content creator, human curated data, free-form inputs, etc. Outputs of the model can include content item components (e.g., headlines, descriptions, graphics, videos, color schemes). The computing system can train the machine learned model(s) to be implemented as part of a content item generation process.
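
As a non-limiting illustration of the flow described above, the following Python sketch shows how a predicted user intent might be turned into an action data structure and a paired natural language response, mirroring the obtain/generate/determine/transmit steps of the disclosed method. Every name, and the assumed model-output format, is a hypothetical placeholder rather than a detail taken from the disclosure.

    from dataclasses import dataclass, field
    from typing import Callable, Dict, Tuple

    @dataclass
    class ActionDataStructure:
        # Executable instructions modeled as an operation name plus arguments.
        operation: str
        arguments: Dict[str, str] = field(default_factory=dict)

    @dataclass
    class ResponseDataStructure:
        natural_language_response: str

    # Hypothetical mapping from predicted intents to campaign actions.
    INTENT_TO_ACTION = {
        "create_headline": "populate_headline_fields",
        "summarize_url": "fetch_and_summarize_landing_page",
    }

    def handle_user_turn(
        natural_language_input: str,
        custom_language_model: Callable[[str], Dict],
    ) -> Tuple[ActionDataStructure, ResponseDataStructure]:
        """Predict intent, then build the action and response data structures."""
        # Assumed output format: {"intent": ..., "slots": {...}, "reply": ...}.
        output = custom_language_model(natural_language_input)
        action = ActionDataStructure(
            operation=INTENT_TO_ACTION.get(output["intent"], "no_op"),
            arguments=output.get("slots", {}),
        )
        response = ResponseDataStructure(natural_language_response=output["reply"])
        # The action structure would be transmitted to an action component, and
        # the response structure back to the conversational assistant interface.
        return action, response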

[0043] In some implementations, the training process can include knowledge distillation using one or more teacher and student models. For example, a teacher model can be an existing large language model. In some examples, a student model can be a customized model developed to generate customized content items (e.g., headlines, descriptions, advertisements, videos, images, sound bites). For example, a machine learned model can provide output including content item components (e.g., headlines, descriptions, advertisements, videos, images, sound bites). The output of the student model can be evaluated to determine a quality score associated with the output. For example, the quality score can be an automatically generated score, a measure of user satisfaction with the content item components, a human-implemented grade, etc.
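
One plausible realization of such a teacher/student setup is response-based knowledge distillation, sketched below with PyTorch. The temperature, loss form, and model calling conventions are illustrative assumptions, not the disclosure's specific training recipe.

    import torch
    import torch.nn.functional as F

    def distillation_step(student, teacher, batch, optimizer, temperature=2.0):
        """One distillation update: the student mimics the teacher's soft outputs."""
        with torch.no_grad():
            teacher_logits = teacher(batch)      # soft targets from the large model
        student_logits = student(batch)
        # KL divergence between temperature-softened output distributions.
        loss = F.kl_div(
            F.log_softmax(student_logits / temperature, dim=-1),
            F.softmax(teacher_logits / temperature, dim=-1),
            reduction="batchmean",
        ) * (temperature ** 2)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()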

[0044] In some implementations, the computing system can collect a plurality of datasets, adopt large language models to generate and annotate data examples, and/or train high-quality servable models for production. The computing system can evaluate the machine learned model(s). For example, evaluation can include offline and/or online evaluation. The computing system can also determine if the machine learned model is faithful.
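
The evaluation gate described here, and in the corresponding claims, reduces to a threshold comparison. A minimal sketch follows; the scoring function and threshold value are assumptions for illustration.

    def should_deploy(model_outputs, score_fn, threshold=0.8):
        """Implement the model in the content creation flow only if its
        average quality score exceeds the threshold quality score."""
        scores = [score_fn(output) for output in model_outputs]
        average_quality = sum(scores) / len(scores)
        return average_quality > threshold

    # score_fn might wrap an automatic metric, a user-satisfaction signal,
    # or human grading, per the evaluation options discussed above.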

[0045] The present disclosure provides for numerous technical effects and benefits. For example, the present disclosure can reduce processing by reducing the number of manual inputs provided by a user. Additionally, the computing system can provide for a reduction in processing by reducing the number of interface screens which must be obtained, loaded, interacted with, and updated. For example, a user can provide data indicative of input (e.g., audio input, text input, etc.). The computing system can intelligently process the data indicative of input and extract relevant information from the input. In some implementations, the computing system can provide follow-up questions to guide a user to provide input related to a desired content item to be generated. The computing system can provide for a reduction in user error due to less manual user input and due to pre-processing of the data indicative of user input to populate the plurality of fields that traditionally would be filled manually by a user via a structured user interface.

[0046] With reference now to the Figures, example embodiments of the present disclosure will be discussed in further detail.

[0047] Figure 1 depicts a block diagram of an example computing system 100 that generates, trains, and/or uses complex language models to interact with a user interface for facilitating generation of content items according to example embodiments of the present disclosure. The computing system 100 includes a client computing system 102, a server computing system 104, and a training computing system 106 that are communicatively coupled over a network 108.

[0048] The client computing system 102 can be any type of computing device, such as, for example, a personal computing device (e.g., laptop or desktop), a mobile computing device (e.g., smartphone or tablet), a gaming console or controller, a wearable computing device, an embedded computing device, or any other type of computing device.

[0049] The client computing system 102 includes one or more processors 112 and a memory 114. The one or more processors 112 can be any suitable processing device (e.g., a processor core, a microprocessor, an ASIC, an FPGA, a controller, a microcontroller, etc.) and can be one processor or a plurality of processors that are operatively connected. The memory 114 can include one or more computer-readable storage media which may be non-transitory, such as RAM, ROM, EEPROM, EPROM, flash memory devices, magnetic disks, etc., and combinations thereof. The memory 114 can store data 116 and instructions 118 which are executed by the processor 112 to cause the client computing system 102 to perform operations.

[0050] In some implementations, the client computing system 102 can store or include one or more machine-learned models 120. For example, the machine-learned models 120 can be or can otherwise include various machine-learned models such as neural networks (e.g., deep neural networks) or other types of machine-learned models, including non-linear models and/or linear models. Neural networks can include feed-forward neural networks, recurrent neural networks (e.g., long short-term memory recurrent neural networks), convolutional neural networks or other forms of neural networks. Some example machine-learned models can leverage an attention mechanism such as self-attention. For example, some example machine-learned models can include multi-headed self-attention models (e.g., transformer models). Example machine-learned models 120 are discussed with reference to Figures 2 and 3.
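
For reference, the scaled dot-product self-attention operation mentioned here can be written in a few lines. This is a generic sketch of the standard mechanism, not the disclosure's particular model architecture.

    import torch
    import torch.nn.functional as F

    def self_attention(x, w_q, w_k, w_v):
        # x: (sequence_length, model_dim); w_q/w_k/w_v: learned projection matrices.
        q, k, v = x @ w_q, x @ w_k, x @ w_v
        scores = (q @ k.transpose(-2, -1)) / (k.shape[-1] ** 0.5)  # scaled dot products
        return F.softmax(scores, dim=-1) @ v  # attention-weighted combination of values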

[0051] In some implementations, the one or more machine-learned models 120 can be received from the server computing system 104 over network 108, stored in the user computing device memory 114, and then used or otherwise implemented by the one or more processors 112. In some implementations, the client computing system 102 can implement multiple parallel instances of a single machine-learned model 120 (e.g., to perform parallel retrieval and composition of modular application spaces across multiple instances of user data obtained via a user interface associated with a user device).

[0052] More particularly, the machine learned model can obtain data indicative of user input (e.g., user session data 124A). The user input data can be associated with a current user session and/or include historical user data. For example, data associated with a current user session can be data obtained in real-time via a user input component 122. Historical user data can include data associated with a user account, user characteristics, etc. Historical user data can include data associated with a user device (e.g., a device identifier). In addition, or alternatively, historical user data can include data associated with a user identifier. In some embodiments, historical user data can include aggregated data associated with a plurality of user identifiers. In some embodiments, the training data 166 can include session data (e.g., of one or more input sessions) associated with one or more input devices, such as session data indexed over a type of input interface or device (e.g., mobile device with touchscreen, mobile device with keyboard, large touchscreen, small touchscreen, voice inputs, or combinations thereof, etc.). In some embodiments, the training data 166 can include session data not associated with user identifiers. Using the machine-learned model, the computing system can generate, train, and maintain one or more student models 164 (e.g., customized language models). The computing system can use the student models 164 to facilitate a user interface for generating customized content items (e.g., headlines, descriptions, advertisements) based on natural language input obtained from a user. The computing system can retrieve, suggest, and/or generate one or more content items in response to obtaining a user input (e.g., natural language input).
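
Purely for illustration, the session-level and historical user data described in this section might be organized as simple typed records; all field names below are hypothetical, not drawn from the disclosure.

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class UserSessionData:  # cf. user session data 124A
        inputs: List[str] = field(default_factory=list)  # e.g., search terms this session
        input_device: Optional[str] = None               # e.g., "mobile_touchscreen"

    @dataclass
    class UserContextData:  # cf. user context data 124B
        location: Optional[str] = None
        prior_campaigns: List[str] = field(default_factory=list)

    @dataclass
    class UserAccountData:  # cf. user account data 124C
        user_identifier: str = ""
        device_identifier: str = ""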

[0053] Additionally, or alternatively, one or more machine-learned models 126 can be included in or otherwise stored and implemented by the server computing system 104 that communicates with the client computing system 102 according to a client-server relationship. For example, the machine-learned models 126 can be implemented by the server computing system 104 as a portion of a web service (e.g., a content development service, a campaign management service, a content strategy management service). Thus, one or more machine-learned models 120 can be stored and implemented at the client computing system 102 and/or one or more models 126 can be stored and implemented at the server computing system 104.

[0054] The client computing system 102 can also include one or more user input components 122 that can receive user input. For example, the user input component 122 can be a touch-sensitive component (e.g., a touch-sensitive display screen or a touch pad) that is sensitive to the touch of a user input object (e.g., a finger or a stylus). The touch-sensitive component can serve to implement a virtual keyboard. Other example user input components include a microphone, a traditional keyboard, or other means by which a user can provide user input.

[0055] The client computing system can include a user data database 124. User data database 124 can include user session data 124A, user context data 124B, and/or user account data 124C. User session data 124A can include data obtained via user input component 122 indicative of a current user session. For example, user session data 124A can include a current search term and/or other user input received within a threshold time of the current session. For example, a user can submit a first search and five minutes later submit a second search. The proximity of the first search and second search in time can be user context data 124B. The computing system can use user context data 124B when processing the user query to determine relevant content items and predicted performance increases (e.g., predicted performance metrics) to provide in response to the data indicative of user input to the structured user interface. The data indicative of user input can be used by client computing system 102 to send a request to server computing system 104 for one or more suggested content item components (e.g., suggested headlines, generated user interface elements, suggested labels). The computing system can cause the one or more suggested content items to be retrieved, generated, and/or presented to a user via a user interface of a device (e.g., user device). User context data 124B can include context of prior sessions and/or historical session data. User context data can include a location, a time, prior campaigns, prior content items used, device used, type of construction workflow used, etc. User account data 124C can include data associated with a user account (e.g., login, device identifier, user identifier).

[0056] The server computing system 104 includes one or more processors 132 and a memory 134. The one or more processors 132 can be any suitable processing device (e.g., a processor core, a microprocessor, an ASIC, an FPGA, a controller, a microcontroller, etc.) and can be one processor or a plurality of processors that are operatively connected. The memory 134 can include one or more computer-readable storage media which may be non-transitory, such as RAM, ROM, EEPROM, EPROM, flash memory devices, magnetic disks, etc., and combinations thereof. The memory 134 can store data 136 and instructions 138 which are executed by the processor 132 to cause the server computing system 104 to perform operations.

[0057] In some implementations, the server computing system 104 includes or is otherwise implemented by one or more server computing devices. In instances in which the server computing system 104 includes plural server computing devices, such server computing devices can operate according to sequential computing architectures, parallel computing architectures, or some combination thereof.

[0058] As described above, the server computing system 104 can store or otherwise include one or more machine-learned models 126. For example, the machine-learned models 126 can be or can otherwise include various machine-learned models. Example machine-learned models include neural networks or other multi-layer non-linear models. Example neural networks include feed forward neural networks, deep neural networks, recurrent neural networks, and convolutional neural networks. Some example machine-learned models can leverage an attention mechanism such as self-attention. For example, some example machine-learned models can include multi-headed self-attention models (e.g., transformer models). Example machine-learned models 126 are discussed with reference to Figures 2-3.

[0059] The client computing system 102 and/or the server computing system 104 can train the machine-learned models 120 and/or 126 via interaction with the training computing system 106 that is communicatively coupled over the network 108. The training computing system 106 can be separate from the server computing system 104 or can be a portion of the server computing system 104.

[0060] The training computing system 106 includes one or more processors 152 and a memory 154. The one or more processors 152 can be any suitable processing device (e.g., a processor core, a microprocessor, an ASIC, an FPGA, a controller, a microcontroller, etc.) and can be one processor or a plurality of processors that are operatively connected. The memory 154 can include one or more computer-readable storage media which may be non-transitory, such as RAM, ROM, EEPROM, EPROM, flash memory devices, magnetic disks, etc., and combinations thereof. The memory 154 can store data 156 and instructions 158 which are executed by the processor 152 to cause the training computing system 106 to perform operations. In some implementations, the training computing system 106 includes or is otherwise implemented by one or more server computing devices (e.g., the server computing system 104).

[0061] The training computing system 106 can include a model trainer 160 that trains the machine-learned models 120 and/or 126 stored at the client computing system 102 and/or the server computing system 104 using various training or learning techniques, such as, for example, knowledge distillation. For example, model trainer 160 can include one or more teacher models 162, one or more student models 164, and/or training data 166. For example, teacher models 162 can include one or more pre-trained large language models. The large language models can include deep neural networks that require extensive resources and time to train. The large language models can be equipped to determine meaning from natural language input.

[0062] In some implementations, the training data can include labeled data. For example, the labeled data can be input to the teacher model and/or the student model. In some examples, the training data 166 can be data that is obtained as output from the teacher model 162. The computing system can obtain output from the teacher model 162 and label the data. This labeled data can be used to train the student model 164. The use of training data 166 to train the student model 164 will be described further with regard to Figures 2-3.
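
A minimal sketch of that labeling-and-training loop might look as follows; the function names are hypothetical placeholders rather than elements of the disclosure.

    def label_with_teacher(teacher_model, examples):
        """Use the pre-trained teacher to annotate examples (cf. training data 166)."""
        return [(example, teacher_model(example)) for example in examples]

    def train_student(student_model, update_fn, labeled_pairs):
        """Compare student output to the teacher-provided label and adjust."""
        for example, teacher_label in labeled_pairs:
            student_output = student_model(example)
            update_fn(student_output, teacher_label)  # e.g., a gradient step on a loss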

[0063] In some implementations, the various training or learning techniques can include, for example, backwards propagation of errors. For example, a loss function can be back propagated through the model(s) to update one or more parameters of the model(s) (e.g., based on a gradient of the loss function). Various loss functions can be used such as mean squared error, likelihood loss, cross entropy loss, hinge loss, and/or various other loss functions. Gradient descent techniques can be used to iteratively update the parameters over a number of training iterations.
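
As a toy example of the loss computation, error backpropagation, and iterative gradient-descent updates described here (all values chosen arbitrarily for illustration):

    import torch

    w = torch.zeros(3, requires_grad=True)      # model parameters
    x = torch.tensor([1.0, 2.0, 3.0])           # one training input
    target = torch.tensor(2.0)                  # desired output

    for step in range(100):                     # iterative training updates
        prediction = (w * x).sum()
        loss = (prediction - target) ** 2       # squared-error loss
        loss.backward()                         # backpropagate the error
        with torch.no_grad():
            w -= 0.01 * w.grad                  # gradient-descent parameter update
            w.grad.zero_()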

[0064] In some implementations, performing backwards propagation of errors can include performing truncated backpropagation through time. The model trainer 160 can perform a number of generalization techniques (e.g., weight decays, dropouts, etc.) to improve the generalization capability of the models being trained.

[0065] In particular, the model trainer 160 can train the machine-learned models 120 and/or 126 based on a set of training data 166. The training data 166 can include, for example, past performance metrics (e.g., predicted performance increase(s)). In some implementations, the one or more student models 164 can be machine-learned models 120 and/or 126.

[0066] In some implementations, if the user has provided consent, the training examples can be provided by the client computing system 102. Thus, in such implementations, the machine-learned model 120 provided to the client computing system 102 can be trained by the training computing system 106 on user-specific data received from the client computing system 102. In some instances, this process can be referred to as personalizing the model.

[0067] The model trainer 160 includes computer logic utilized to provide desired functionality. The model trainer 160 can be implemented in hardware, firmware, and/or software controlling a general-purpose processor. For example, in some implementations, the model trainer 160 includes program files stored on a storage device, loaded into a memory and executed by one or more processors. In other implementations, the model trainer 160 includes one or more sets of computer-executable instructions that are stored in a tangible computer-readable storage medium such as RAM, hard disk, or optical or magnetic media.

[0068] The network 108 can be any type of communications network, such as a local area network (e.g., intranet), wide area network (e.g., Internet), or some combination thereof and can include any number of wired or wireless links. In general, communication over the network 108 can be carried via any type of wired and/or wireless connection, using a wide variety of communication protocols (e.g., TCP/IP, HTTP, SMTP, FTP), encodings or formats (e.g., HTML, XML), and/or protection schemes (e.g., VPN, secure HTTP, SSL).

[0069] The machine-learned models described in this specification may be used in a variety of tasks, applications, and/or use cases.

[0070] In some implementations, the input to the machine-learned model(s) of the present disclosure can be text or natural language data. The machine-learned model(s) can process the text or natural language data to generate an output. As an example, the machine-learned model(s) can process the natural language data to generate a language encoding output. As another example, the machine-learned model(s) can process the text or natural language data to generate a latent text embedding output. As another example, the machine-learned model(s) can process the text or natural language data to generate a translation output. As another example, the machine-learned model(s) can process the text or natural language data to generate a classification output. As another example, the machine-learned model(s) can process the text or natural language data to generate a textual segmentation output. As another example, the machine-learned model(s) can process the text or natural language data to generate a semantic intent output. The semantic intent output may comprise at least one word or phrase determined from the text or natural language data. As another example, the machine-learned model(s) can process the text or natural language data to generate an upscaled text or natural language output (e.g., text or natural language data that is higher quality than the input text or natural language, etc.). As another example, the machine-learned model(s) can process the text or natural language data to generate a prediction output.

[0071] In some implementations, the input to the machine-learned model(s) of the present disclosure can be speech data. The machine-learned model(s) can process the speech data to generate an output. As an example, the machine-learned model(s) can process the speech data to generate a speech recognition output. As another example, the machine-learned model(s) can process the speech data to generate a speech translation output. As another example, the machine-learned model(s) can process the speech data to generate a latent embedding output. As another example, the machine-learned model(s) can process the speech data to generate an encoded speech output (e.g., an encoded and/or compressed representation of the speech data, etc.). As another example, the machine-learned model(s) can process the speech data to generate an upscaled speech output (e.g., speech data that is higher quality than the input speech data, etc.). As another example, the machine-learned model(s) can process the speech data to generate a textual representation output (e.g., a textual representation of the input speech data, etc.). As another example, the machine-learned model(s) can process the speech data to generate a prediction output.

[0072] In some implementations, the input to the machine-learned model(s) of the present disclosure can be latent encoding data (e.g., a latent space representation of an input, etc.). The machine-learned model(s) can process the latent encoding data to generate an output. As an example, the machine-learned model(s) can process the latent encoding data to generate a recognition output. As another example, the machine-learned model(s) can process the latent encoding data to generate a reconstruction output. As another example, the machine-learned model(s) can process the latent encoding data to generate a search output. As another example, the machine-learned model(s) can process the latent encoding data to generate a reclustering output. As another example, the machine-learned model(s) can process the latent encoding data to generate a prediction output.

[0073] In some implementations, the input to the machine-learned model(s) of the present disclosure can be statistical data. Statistical data can be, represent, or otherwise include data computed and/or calculated from some other data source. The machine-learned model(s) can process the statistical data to generate an output. As an example, the machine-learned model(s) can process the statistical data to generate a recognition output. As another example, the machine-learned model(s) can process the statistical data to generate a prediction output. As another example, the machine-learned model(s) can process the statistical data to generate a classification output. As another example, the machine-learned model(s) can process the statistical data to generate a segmentation output. As another example, the machine-learned model(s) can process the statistical data to generate a visualization output. As another example, the machine-learned model(s) can process the statistical data to generate a diagnostic output.

[0074] In some cases, the machine-learned model(s) can be configured to perform a task that includes encoding input data for reliable and/or efficient transmission or storage (and/or corresponding decoding). For example, the task may be an audio compression task. The input may include audio data and the output may comprise compressed audio data. In another example, the input includes visual data (e.g., one or more images or videos), the output comprises compressed visual data, and the task is a visual data compression task. In another example, the task may comprise generating an embedding for input data (e.g., input audio or visual data).

[0075] In some cases, the input includes audio data representing a spoken utterance and the task is a speech recognition task. The output may comprise a text output which is mapped to the spoken utterance. In some cases, the task comprises encrypting or decrypting input data. In some cases, the task comprises a microprocessor performance task, such as branch prediction or memory address translation.

[0076] In some implementations, the machine-learned models can be deployed on-device. For example, one or more components of a predictive machine-learned model or pipeline can be deployed on-device to avoid the upload of potentially sensitive information relating to the types of input, the types of device(s), or the contents of the inputs (e.g., relating to disabilities, contact information, address, etc.) to a server. For example, the server computing system can send a form with a learned context vector describing one or more input fields associated with a component (e.g., portion of an application associated with performance of a processing task). An onboard client model associated with the client computing system 102 can input local client characteristics (e.g., obtained via the user input component 122) and a context vector to generate a composed modular application. This on-device processing can increase data privacy for a user. In some embodiments, this can also reduce the amount of data transmitted off-device, thereby reducing bandwidth usage.

[0077] Figure 2 depicts one example system 200 for generating predicted performance increases associated with suggested content items and providing an updated user interface including the suggested content items and the respective predicted performance increase according to example embodiments of the present disclosure. The example system 200 contains a computing system 202. The computing system 202 can be any type of system of one or more computing devices (e.g., client computing system 102, server computing system 104, etc.). A computing device can be, for example, a personal computing device (e.g., laptop or desktop), a mobile computing device (e.g., smartphone or tablet), a gaming console or controller, a wearable computing device, an embedded computing device, a server computing device, a node of a distributed computing device, a virtual instance hosted on a shared server, or any other type of computing device. In some embodiments, the computing system 202 includes a plurality of computing devices interconnected via a network or otherwise distributed in an interoperable manner. For example, the computing system 202 can include a server for serving content over a network (e.g., network 108). For instance, the computing system 202 can include a web server for hosting web content, for collecting data regarding web content (e.g., for receiving, monitoring, generating, or otherwise processing data regarding web content, such as the use, download of, and/or interaction with web content).

[0078] The computing system 202 can contain processor(s) 212 and memory 214. The one or more processors 212 can be any suitable processing device (e.g., a processor core, a microprocessor, an ASIC, an FPGA, a controller, a microcontroller, etc.) and can be one processor or a plurality of processors that are operatively connected. The memory 214 can include one or more computer-readable storage media which may be non-transitory, such as RAM, ROM, EEPROM, EPROM, flash memory devices, magnetic disks, etc., and combinations thereof. The memory 214 can store data 216 and instructions 218 which are executed by the processor 212 to cause the computing system 202 to perform operations.

[0079] In some implementations, the computing system 202 can store or otherwise implement one or more machine-learned models of a machine-learned model framework. In some embodiments, the content assistant component 220 comprises a two-model machine-learned model framework. In some embodiments, the content assistant component comprises no machine-learned model framework and/or any combination of custom language model 222 and/or large language model 226. The machine-learned model framework can include a machine-learned custom language model 222 (e.g., with learnable weights 224) and/or a machine-learned large language model 226 (e.g., with learnable weights 228). In some embodiments, the content assistant component 220 can implement a single model that implements custom language model 222 for content suggestion, content ranking, content generating, and/or any combination (e.g., by combining one or more aspects of the custom language model 222 and/or large language model 226, by training a single model to obtain the desired outputs directly, etc.). One or more of the machine-learned models can be or can otherwise include various machine-learned models such as neural networks (e.g., deep neural networks) or other types of machine-learned models, including non-linear models and/or linear models. Neural networks can include feed-forward neural networks, recurrent neural networks (e.g., long short-term memory recurrent neural networks), convolutional neural networks or other forms of neural networks. Some example machine-learned models can leverage an attention mechanism such as self-attention. For example, some example machine-learned models can include multi-headed self-attention models (e.g., transformer models).

[0080] Embodiments of the example system 200 may be configured, as shown in Figure 2, to process data 230. Responsive to processing the data 230, the computing system 202 can provide output 250. Output 250 can include generated suggested content item components 255. Suggested content item components 255 can be provided for display via user interface 270 of a client device associated with client computing system 260. Client computing system 260 can contain processor(s) 262 and memory 264. The one or more processors 262 can be any suitable processing device (e.g., a processor core, a microprocessor, an ASIC, an FPGA, a controller, a microcontroller, etc.) and can be one processor or a plurality of processors that are operatively connected. The memory 264 can include one or more computer-readable storage media which may be non-transitory, such as RAM, ROM, EEPROM, EPROM, flash memory devices, magnetic disks, etc., and combinations thereof. The memory 264 can store data 266 and instructions 268 which are executed by the processor 262 to cause the client computing system 260 to perform operations.
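By way of a hedged sketch (an assumption about one possible arrangement, not the specification's design), the two-model framework of paragraph [0079] could be a thin wrapper that routes requests between the custom language model 222 and the large language model 226; the class, the generate interfaces, and the confidence-based fallback are all invented for illustration:

    class ContentAssistant:
        """Hypothetical wrapper combining a custom LM and a large LM."""

        def __init__(self, custom_lm, large_lm):
            self.custom_lm = custom_lm  # e.g., distilled custom model 222
            self.large_lm = large_lm    # e.g., large language model 226

        def suggest(self, user_input: str) -> str:
            # The lightweight custom model handles routine suggestion
            # requests; in this sketch, the large model serves as a
            # fallback when the custom model reports low confidence.
            text, confidence = self.custom_lm.generate(user_input)
            if confidence < 0.5:  # threshold value is an assumption
                text = self.large_lm.generate(user_input)
            return text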

[0081] The output 250 can include suggested content item components 255. Suggested content item components 255 can include, for example, suggested headlines, descriptions, follow-up communications, videos, images, sound bites, etc. For example, the custom language model 222 can obtain user input and provide follow-up communications to gather additional input to generate further suggested content item components 255.

[0082] In some embodiments, the data 230 is obtained via an input interface of a computing system (e.g., computing system 202 and/or client computing system 260). For example, in some embodiments, the content assistant component 220 can be configured to process data 230 as input to the computing system 202. In some embodiments, the data 230 can be implicit in the structure and/or configuration of content assistant component 220.

[0083] In some implementations, the content suggestion generator model is trained to receive a set of input data (e.g., data 230) descriptive of user input and, in response, provide output data (e.g., suggested content item components 255) indicative of one or more suggested content items to be rendered via a user interface. For instance, Figure 2 illustrates instances of evaluation.

[0084] For instance, evaluation component 240 can perform offline evaluation and/or online evaluation (e.g., real traffic evaluation). Offline evaluation can include using a human evaluation platform. The human evaluation platform can include a template and/or rubric for standardizing evaluation. Online evaluation can include pilot and/or live traffic experiments to capture online metrics. In some implementations, evaluation component 240 can include automated evaluation and/or manual evaluation. In some implementations, evaluation component 240 can include tagging and/or labeling data as training data. The system can determine a quality score for the model and compare it to a threshold quality score. If the quality score of the model is greater than the threshold quality score, the system can determine the model is ready to be used. If the quality score for the model is below the threshold quality score, the system can determine the model needs additional training and/or tuning before being used.
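As a minimal sketch of the gating logic in paragraph [0084], assuming a numeric quality score and an illustrative threshold value (the specification does not fix one):

    QUALITY_THRESHOLD = 0.8  # assumed value for illustration

    def gate_model(quality_score: float,
                   threshold: float = QUALITY_THRESHOLD) -> str:
        """Decide whether a model is ready based on its quality score."""
        if quality_score > threshold:
            return "ready for use"
        return "needs additional training and/or tuning"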

[0085] In some implementations, the input data can include one or more features that are associated with an instance or an example. In some implementations, the one or more features associated with the instance or example can be organized into a feature vector. In some implementations, the output data can include one or more predictions. Predictions can also be referred to as inferences. Thus, given features associated with a particular instance, the machine-learned model can output a prediction for such instance based on the features.

[0086] The machine-learned model can be or include one or more of various different types of machine-learned models. In particular, in some implementations, the machine-learned model can perform classification, regression, clustering, anomaly detection, recommendation generation, and/or other tasks.

[0087] In some implementations, the data 230 includes aggregated data 232, proxy data 234, and/or user input data 238. In some embodiments, data 230 can include instances of substantially any kind or type of data that may be descriptive of various phenomena. In general, an instance refers to a set of one or more data values grouped together to describe a particular subject or subject matter. For example, an instance can be a feature vector. An instance can be associated with image data (e.g., a feature vector for an image, a hashed image, etc.). An instance can be associated with a measurement or other data collection event (e.g., at a particular time, or of a particular subject, or using a particular device, or from a particular perspective, etc.). An instance can be associated with a network session, such as a set of interactions with a web server. In some embodiments, an instance can be associated with a user's interaction with web content (e.g., anonymous or identified).

[0088] In some embodiments, the custom language model 222 can be trained using knowledge distillation. For example, the custom language model 222 can be a student model and the large language model 226 can be a teacher model. A knowledge distillation training process can be used to train the smaller custom language model 222 based on the pre-trained large language model 226. For example, the large language model 226 can be a model that has been generated and trained using a large amount of computing resources. In order to conserve resources, knowledge distillation can be used to train the custom language model 222 to generate similar results to the pre-trained large language model 226.

[0089] In some implementations, the models of content assistant component 220 can be tuned and/or trained using prompting. For example, the computing system can obtain prompts indicative of sample inputs and outputs. The sample inputs and outputs can be used to tune the models to be familiar with the kinds of inputs and outputs that are desired for a specific implementation. For example, a particular implementation can include generation of an advertisement. Generation of advertisements requires understanding of nuances of businesses, business goals, products, etc. In this way, by prompting the one or more models (e.g., custom language model 222 and/or large language model 226), the computing system can tune the models to provide higher quality output (e.g., output 250 which can be evaluated using evaluation component 240). In some instances, after evaluation component 240 has performed evaluation, the data can be aggregated and used as data 230 for further model training.
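A hedged sketch of this prompt-based tuning: sample input/output pairs are prepended to the user's input so the model sees the kind of output desired. The exemplar pairs and wrapper text below are invented for illustration:

    # Invented exemplars pairing sample inputs with desired outputs.
    EXEMPLARS = [
        ("Family-run bakery, fresh sourdough daily",
         "Headline: Fresh Sourdough Daily | Family-Run Bakery"),
        ("24/7 emergency plumbing, licensed and insured",
         "Headline: Licensed 24/7 Emergency Plumbers"),
    ]

    def build_prompt(user_input: str) -> str:
        """Prepend sample inputs and outputs to form a few-shot prompt."""
        shots = "\n\n".join(f"Business: {i}\nAd: {o}" for i, o in EXEMPLARS)
        return f"{shots}\n\nBusiness: {user_input}\nAd:"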

[0090] In some implementations, the content assistant component 220 can be a machine-learned model. In some implementations, training and tuning can include using data 230. Aggregated data 232 can include data obtained from one or more user sessions that have been aggregated by the system (e.g., computing system 202). Proxy data 234 can include, for example, output data obtained from content assistant component 220. In some implementations, proxy data 234 can be tagged and/or labeled as representative of sample inputs and/or outputs. User input data 238 can include user-generated input comprising inputs and/or outputs associated with content item desires and generated content items.

[0091] The trained model can be used to generate one or more suggested content items and/or update a structured user interface and/or construction workflow. In some embodiments, a simulated user model can be trained using the training data to generate simulated inputs according to patterns learned from the training data. The simulated user model can, in some instances, provide a simulated environment in which a reinforcement learning agent (e.g., the notification element subsystem) can be trained.

[0092] Figure 3 depicts a flow chart diagram of an example method 300 for training student models 310 using knowledge distillation. Training student model 310 can include utilizing teacher model 315, training data 305, and/or distillation component 320. For example, the computing system can define a large language model as the teacher model 315. The computing system can define a custom language model as the student model 310. In some implementations, the teacher model 315 can include a large language model with millions and/or billions of parameters. In some implementations, the student model 310 can include a custom language model with a few thousand parameters.

[0093] The teacher model 315 can be fully trained. For example, the teacher model can be trained as part of method 300 and/or before convergence with the one or more student models 310. Any training method can be used to train the teacher model 315. For example, the teacher model 315 can be trained until full convergence. The loss function can be any loss function suited to the problem statement.

[0094] The computing system can train the student model 310 intelligently using the teacher model 315. In some implementations, for example, student model 310 and teacher model 315 can comprise neural networks. For example, student model 310 can be trained in coordination with the fully trained teacher model 315. In some implementations, training data 305 is input into teacher model 315. Output data 330 of teacher model 315 can be obtained. The computing system can annotate training data 305 by running training data 305 through teacher model 315. The computing system can obtain annotated training data 325 as output data 330 from teacher model 315. Student model 310 can obtain annotated training data 325 as input. The student model 310 can be trained using the annotated training data 325.
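A minimal sketch of this annotate-then-train loop, assuming the teacher exposes a predict method and the student exposes a per-example update step (both interfaces are hypothetical):

    def annotate_with_teacher(teacher, training_data):
        """Run raw examples through the teacher to produce annotated pairs."""
        return [(x, teacher.predict(x)) for x in training_data]

    def train_student(student, annotated_data, epochs=3):
        """Fit the student to the teacher's outputs (a silver label set)."""
        for _ in range(epochs):
            for x, teacher_output in annotated_data:
                student.fit_step(x, teacher_output)  # hypothetical update
        return student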

[0095] For example, teacher model 315 can include a large language model that can be trained on training data 305. Training data 305 can include, for example, a dataset comprising over 1.56 trillion words of multilingual cleaned public web documents, code, and conversations from the internet. In some implementations, training data 305 can include a seed dataset collected from humans. In some implementations, the computing system can obtain data to augment the training dataset. In some implementations, the training data 305 can include proxy data from content providers' (e.g., advertisers') websites. For example, a content provider can have a website that includes "about us", "our story", "blog", "services", etc. pages. These pages can include business description content. The one or more teacher and/or student models can obtain the proxy data as input and produce an output comprising one or more content items (e.g., headlines, descriptions). In some implementations, training data 305 can include human curated data. Human curated data can be data that is obtained via user input. For example, a user can provide a written business description. The human curated data can be used as seed training data. In some implementations, the computing system can use existing large language models to generate data to be used as training data 305. For example, the large language models can generate descriptions and creative assets. In some implementations, the descriptions and creative assets can include free-form inputs and/or headlines.

[0096] Additionally, or alternatively, the teacher model 315 can include an encoder-decoder network. In some implementations, the encoder-decoder network can be a general-purpose, multimodal language model with multitask learning capabilities. In some implementations, the encoder-decoder network can be pre-trained on text from billions of high-quality web documents and can be highly tailored for particular subject matter areas (e.g., search results, advertisements, particular businesses).

[0097] In some instances, the computing system can tune student model 310. For example, the computing system can input subject matter specific datasets into the student model 310. By providing subject matter specific datasets to the student model 310 during training, the computing system can achieve significant improvements in subject matter specific applications (e.g., uses of the student model). Having the student model 310 tuned and/or trained to a specific subject matter can allow for more nuanced understanding of input obtained from users. The more nuanced understanding of input obtained from users can result in better output. The better output can reduce the processing the computing system would otherwise perform on repetitive inputs, and can reduce the touch input required from a user who would otherwise need to manually enter data into a plurality of fields with rigid requirements. Additionally, providing for better output can increase user experience and satisfaction.

[0098] While teacher model 315 is described as a singular model for purposes of illustration, teacher model 315 can include one or more models. While student model 310 is described as a singular model for purposes of illustration, student model 310 can include one or more models.

[0099] In some implementations, the teacher and student setup can include additional and/or alternative means for knowledge distillation. For example, knowledge distillation can include use of distillation component 320 comprising a distillation loss function 335 and a student loss function 345. Distillation loss function 335 can be used for forward propagation of teacher model 315 and student model 310. Distillation can include backpropagation of student model 310, for example, by converging the losses of the outputs of student model 310 and teacher model 315.
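One common realization of a distillation loss function and a student loss function (a standard formulation, offered as an assumption rather than the exact form of distillation loss function 335 and student loss function 345) combines a temperature-softened KL term against the teacher's logits with a cross-entropy term against the labels:

    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, labels,
                          temperature=2.0, alpha=0.5):
        # Distillation loss: match the teacher's softened distribution.
        soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
        soft_student = F.log_softmax(student_logits / temperature, dim=-1)
        distill = F.kl_div(soft_student, soft_teacher,
                           reduction="batchmean") * temperature ** 2
        # Student loss: ordinary cross-entropy against the true labels.
        student = F.cross_entropy(student_logits, labels)
        # Converge the two losses with a weighting factor alpha.
        return alpha * distill + (1 - alpha) * student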

[0100] Algorithms used for knowledge distillation of teacher model 315 and student model 310 can include, for example, adversarial distillation, multi-teacher distillation, cross-modal distillation, graph-based distillation, attention-based distillation, data-free distillation, quantized distillation, lifelong distillation, and/or neural architecture search-based distillation. Training student model 310 can include, for example, offline distillation, online distillation, and/or self-distillation. The distilled knowledge can include response-based knowledge, feature-based knowledge, and/or relation-based knowledge. Knowledge distillation can be implemented in a variety of applications. Applications can include, for example, vision, natural language processing, and/or speech.

[0101] Vision applications can include, for example, image classification, face recognition, image segmentation, action recognition, video captioning, image retrieval, text-to-image synthesis, and/or video classification. NLP applications can include, for example, text generation, text recognition, neural machine translation, question answering, and/or document retrieval. Speech applications can include, for example, speech recognition, language identification, audio classification, speaker recognition, speech synthesis, speech enhancement, and/or acoustic event detection.

[0102] Knowledge distillation can provide for various technical benefits and technical effects. For example, by utilizing a pre-trained model to tune a content specific model, the computing system can conserve the computing resources and processing power that would otherwise be needed to train a model from scratch to perform as well as the pre-trained model. Additionally, in some implementations the training datasets can include one or more prompts used to tune the teacher models 315 and/or student models 310. By giving input alongside a prompt, the computing system can tune the models to produce better output than traditional model training methods would.

[0103] Figure 4 depicts a flow chart diagram of a process flow 400 for facilitating content generation via a responsive content construction process at 420. By way of example, the computing system can include a content item table 405. At 410, the computing system can obtain user input indicative of a user selecting (e.g., clicking) an option to generate a new content item (e.g., advertisement, advertisement campaign). At 415, the computing system can obtain user input indicative of a user selecting a content item group. In response to obtaining user input indicative of selecting a content item group, the computing system can initiate responsive content construction at 420.

[0104] At 420, the computing system initiating responsive content construction can include providing for display, via a user interface, a takeover promotion for content assistant. For example, at 425, the takeover promotion for content assistant can be provided for display.

[0105] At 430, the computing system can provide for display a selectable interface element comprising a message. For example, a message can include a "use content assistant?" message. In some implementations, messages can be displayed with selectable user interface elements labeled "yes" and "no."

[0106] By way of example, the computing system can obtain data indicative of a user selecting "no." In response, at 435, the computing system can cause the takeover promotion for content assistant to disappear. At 440, the computing system can provide for display a selectable interface comprising a message. The message can include the following: "does the user have a final URL?". In some implementations, the message can be displayed with selectable user interface elements labeled "yes" and "no." The computing system can obtain data indicative of a user selecting "yes." In response, the computing system can prefill one or more input fields at 445A associated with the user interface for content generation. The computing system can obtain data indicative of a user selecting "no." In response, the computing system can provide for no prefills at 445B for one or more input fields associated with the user interface for content generation.

[0107] By way of example, in response to display of a message at 430, the computing system can obtain data indicative of user selection of “yes.” In response, the computing system can update the user interface to provide for display content assistant 447. At 450, the computing system can provide for display a content assistant component via the user interface. The content assistant component can be displayed in a variety of formats. For example, formats can include a set number of free-form text input fields (e.g., as depicted in Figure 8), one or more progressive disclosure fields (e.g., as depicted in Figure 9), and/or a conversational interface (e.g., as depicted in Figure 10). The computing system can perform a content assistant flow 455. The content assistant flow 455 is discussed in more detail with regard to Figure 5.

[0108] Turning to Figure 5, the content assistant flow 455 includes providing an interface for a user to interact with (e.g., a content assistant component of a user interface). As described above, the interface can be an audio interface, visual interface, etc. For purposes of explanation, content assistant flow 455 will be described as a visual interface (e.g., a graphical user interface). At 505, the computing system can provide a message for display: "tell me about your business?". The message can include any message that prompts a user to provide input data in a natural language and/or free-form format. At 510, the computing system can obtain user input responsive to display of the message. For example, the message displayed can state "Could you please describe the product or service you want to advertise and how it helps your potential customers?". For example, a user can provide information about their business. A user could provide the following as input: "We are a small puppy supply company in Northern Massachusetts. We are called Puppy Things. We sell dog kibbles, snacks, toys, and anything for puppies. Great value, high quality products. We open 9-5 and offer free refunds and ship to all 50 states."

[0109] At 515, the computing system can extract information from the user input. For example, extracted information can include a business name and/or descriptive terms and phrases. For the example depicted in Figure 10, the system can determine that the business name is Puppy Things. The system can extract other relevant terms such as puppy, kibbles, snacks, toys, high quality, open 9-5, free refunds, ship to all 50 states.
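One way to implement this extraction step (an assumption, not the specification's method) is to prompt a language model for structured output; the prompt text and the llm.generate interface are hypothetical:

    import json

    EXTRACTION_PROMPT = (
        "Extract the business name and key descriptive phrases from the "
        "text below. Respond as JSON with keys 'name' and 'terms'.\n\n"
        "Text: {text}"
    )

    def extract_info(llm, user_text: str) -> dict:
        """Hypothetical extraction: the model returns structured JSON."""
        raw = llm.generate(EXTRACTION_PROMPT.format(text=user_text))
        return json.loads(raw)

    # For the Puppy Things input above, a well-behaved model might return:
    # {"name": "Puppy Things",
    #  "terms": ["puppy", "kibbles", "snacks", "toys", "high quality",
    #            "open 9-5", "free refunds", "ship to all 50 states"]}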

[0110] In some implementations, the computing system can determine that additional information is needed. In response, at 520, the computing system can provide a message for display indicating the need for additional information from the user. For example, the message can include a follow-up question, a clarifying question, a question for information not included in the initial user input, etc. For example, as depicted in Figure 10, the system can provide a follow-up question stating, "Do you have a website that people should go to after they click the ad?".

[0111] At 522, the computing system can obtain user input responsive to the message for additional information. For example, as depicted in Figure 10, the user can provide the following as input: "www.puppythings.com." The computing system can extract information from the user input at 515 as previously described. This process can be repeated as many times as necessary to gather relevant information.

[0112] At 525, the computing system can generate suggested content. For example, suggested content can include the population of one or more input fields. In some implementations, content can include an entire generated advertisement and/or campaign. For example, the generated advertisement can include a visual display of a proposed advertisement. By way of example, the computing system could provide for display a summary of a strategy for an advertisement campaign comprising one or more generated advertisements, requirements for display, times for display, expected performance, etc.

[0113] Turning to Figure 10 for example, the computing system can generate the following message: "We created the ad based on your information: 10 keywords, 5 headlines, 2 descriptions. Please let me know if you want to modify it." For example, the computing system can generate the following as suggested headlines: anything your puppy needs; kibbles, snacks and toys for your puppy; high quality products for puppy; open 9-5 and free refunds; and ship to all states. The computing system can generate the following as suggested descriptions: "puppy supply, great value, high quality, free shipping. Shop today at puppythings.com" and/or "welcome your new puppy with everything they need. Kibbles, snacks, toys, & more. All at puppythings.com." The computing system can determine the following targeting keywords: puppy supply, dog kibble, dog treats, etc.

[0114] At 530, the computing system can obtain user input indicative of a desire for modification of the suggested content. For example, a user can specify that they meant that they offer free refunds and that separately they ship to all 50 states. They can indicate that they do not provide free shipping. The suggested headlines and/or descriptions can be modified based on the user input. For example, the computing system can obtain user input in a natural language format. For example, a user could reply “Please modify the first headline to remove free shipping. Thank you.”

[0115] At 535, the computing system can modify the content responsive to the user input. In some implementations, the computing system can repeat steps 530 and 535 until the computing system obtains input indicating user satisfaction with the suggested content.

[0116] At 540, the computing system can obtain user input indicative of satisfaction. For example, the computing system can obtain data indicative of a user selecting “continue,” “accept,” etc.

[0117] At 545, in response to obtaining user input indicative of satisfaction, the computing system can end the content assistant flow 455. In some implementations, the computing system will continue with the process described in Figure 4.

[0118] Turning back to Figure 4, following the content assistant flow 455, the computing system can obtain user input indicative of selecting an option to “apply suggestions” at 460. In response, the computing system can populate one or more input fields (e.g., associated with generating a content item, advertisement, etc.). The user can review the populated input fields to determine if there are any errors or problems. At 465, the computing system can provide for display via the user interface a message. The message can include the following: “are there any errors or problems?” In some implementations, the message can be displayed with selectable user interface elements labeled “yes” and “no.” The computing system can obtain data indicative of a user selecting “yes.” In response, at 470, the computing system can initiate the display of an error message and/or a troubleshooting flow. The computing system can obtain data indicative of a user selecting “no.” In response, at 475, the computing system can determine if this is the first time the user has used the assistant.

[0119] In some implementations, the computing system can determine that this is not the user’s first time using the assistant. In response, at 480, the computing system can cause the content assistant to disappear and provide for display input fields that the content assistant entered. In some implementations, the computing system can determine that this is the user’s first time using the assistant. In response, at 485, the computing system can provide for display a feature promotion sequence comprising an alert. The alert can include highlighting one or more input fields that the content assistant populated and/or a notification describing where to locate a content item component and/or content item that was generated by the computing system via responsive content construction at 420.

[0120] Figures 6A-C depict a flow chart diagram of an example method 600 to perform according to example embodiments of the present disclosure. Although Figures 6A-C depict steps performed in a particular order for purposes of illustration and discussion, the methods of the present disclosure are not limited to the particularly illustrated order or arrangement. The various steps of method 600 can be omitted, rearranged, combined, and/or adapted in various ways without deviating from the scope of the present disclosure.

[0121] At (602), method 600 can include providing input data to a first machine learned model. For instance, a computing system can provide input data to a first machine learned model. As described herein, input data can include at least one of free-form input and/or landing page content. The input data can include natural language input.

[0122] In some implementations, the first machine learned model can be trained using a knowledge distillation training method. For example, the knowledge distillation training method can include training the first machine learned model (e.g., student model) using a pre-trained larger teacher model (e.g., complex large language model). In some embodiments, the first machine learned model can be trained based at least in part on output from a pre-trained second machine learned model.

[0123] In some implementations, the pre-trained second machine learned model can be a large language model. The pre-trained second machine learned model can be tuned using one or more prompts. For example, the system can tune the model to generate suggested assets from free-form inputs with few-shot learning. For example, inputs can include prompts. Prompts can include headlines and/or descriptions. For example, headlines can include "a new tycoon game," "try our dog friendly cafe," "build your dog paradise," or "the coolest app for dogs in the US." Descriptions can include "serving amazing drinks in the friendliest dog cafe" and "dog cafe tycoon: feed the hungry customer and grow your dog cafe empire."
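For illustration only, a few-shot prompt assembled from the sample headlines and descriptions above might look like the following; the surrounding instruction text is invented:

    # Few-shot prompt built from the exemplar assets listed in [0123].
    PROMPT = """Generate ad assets from a free-form business description.

    Example headlines:
    - a new tycoon game
    - try our dog friendly cafe
    - build your dog paradise
    - the coolest app for dogs in the US

    Example descriptions:
    - serving amazing drinks in the friendliest dog cafe
    - dog cafe tycoon: feed the hungry customer and grow your dog cafe empire

    Business description: {description}
    Suggested headlines and descriptions:"""

    def make_prompt(description: str) -> str:
        return PROMPT.format(description=description)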

[0124] At (604), method 600 can include obtaining output data indicative of one or more suggested content item components. For instance, a computing system can obtain output data indicative of one or more suggested content item components. By way of example, content item components can include at least one of a headline or description. By way of example, the output of the model can include a generated text output. For example, the text could include "Dog Cafe Tycoon is a Simulation app. Here is a brief introduction of the app: A new tycoon game to build your dog paradise. It serves drinks in the dog cafe and helps to grow your dog cafe empire. Key topic keywords for this application are dog cafe. The target country is the US." Tuning the large language model can increase the ability of the model to produce more nuanced content item suggestions in the future. For example, generating an advertisement can include the need to understand nuances associated with a business, goals of a particular user, etc. This language processing can be different from other language processing applications.

[0125] At (606), method 600 can include evaluating the first machine learned model based at least in part on the output data. For instance, a computing system can evaluate the first machine learned model based at least in part on the output data. As described herein, the evaluation of the first machine learned model can be performed based at least in part on determining quality scores associated with the output data of the first machine learned model.

[0126] At (606A), method 600 can include obtaining data indicative of a quality score associated with the output data of the first machine learned model. For instance, a computing system can obtain data indicative of a quality score associated with the output data of the first machine learned model. By way of example, the quality score can be automatically generated by the system and/or manually entered by a user.

[0127] At (606B), method 600 can include comparing the quality score associated with the output data of the first machine learned model to a threshold quality score. For instance, a computing system can compare the quality score associated with the output data of the first machine learned model to a threshold quality score. By way of example, the computing system can continuously evaluate the models. The computing system can determine that the quality of the output data is acceptable. For example, in a training implementation, the output data can be obtained for input data that has been previously tagged and/or processed by a pre-trained model. The system can compare the actual output data to expected output data.

[0128] At (608), method 600 can include determining that the first machine learned model has a quality score above the threshold quality score. For instance, a computing system can determine that the first machine learned model has a quality score above the threshold quality score. As described herein, a quality score can be associated with a quality of output data.

[0129] At (610), method 600 can include, in response to determining that the first machine learned model has a quality score above the threshold quality score, implementing the first machine learned model in a content creation flow. For instance, a computing system can, in response to determining that the first machine learned model has a quality score above the threshold quality score, implement the first machine learned model in a content creation flow. A content creation flow can be associated with a third party that provides a platform for content creators to generate customized content items (e.g., search results for display that link to a website, an advertisement, generated content items). A content creation flow can include a user interface for generating a content item. The content creation flow can include providing for display a content assistant component of a user interface.

[0130] At (612), method 600 can include generating an initial user interface comprising a content assistant component. For instance, a computing system can generate an initial user interface comprising a content assistant component. As described herein, the initial user interface can include a content assistant component. The content assistant component can be configured to obtain user input and, in response, generate suggested content (e.g., headlines, descriptions, advertisements, videos, images, sound bites, etc.). In some implementations, the content assistant component can include one or more input fields. For example, the content assistant component can include a set number of predetermined input fields. In some implementations, the content assistant component can include a plurality of progressively displayed input fields. In some implementations, the content assistant component can be configured to provide a conversational interface.

[0131] In some implementations, user input can include text snippets, documents, images, handwriting, audio, etc. In some implementations, a user may not have a strong landing page associated with their website. In response, the computing system can initiate an input funnel to obtain user input. Additionally, or alternatively, user input can include fine-grained controls on a content item strategy. For example, the content item can be an advertisement, and the strategy can correlate to a campaign. The system can obtain user input indicative of the user selecting one or more fine-grained controls, including requirements expressed through interactions with the system and adaptations for the campaigns.

[0132] At (614), method 600 can include obtaining data indicative of user input. For instance, a computing system can obtain data indicative of user input. User input can include input obtained from a user through any means. The means can include touch input (e.g., via a touch sensor keyboard), voice input, etc.

[0133] At (616), method 600 can include processing, by the first machine learned model, the data indicative of user input. For instance, a computing system can process, by the first machine learned model, the data indicative of user input. Processing the data indicative of user input can include transforming the data from a natural language input into a feature vector or some other data structure to be ingested by the first machine learned model.
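A toy sketch of such a transformation, assuming a simple word-level vocabulary (a real system would typically use a learned tokenizer and embedding layer):

    def text_to_features(text, vocab, max_len=32):
        """Map words to vocabulary ids, then pad or truncate to max_len."""
        ids = [vocab.get(tok, 0) for tok in text.lower().split()]  # 0 = <unk>
        ids = ids[:max_len] + [0] * max(0, max_len - len(ids))
        return ids

    vocab = {"puppy": 1, "supply": 2, "toys": 3}  # illustrative vocabulary
    features = text_to_features("Puppy supply and toys", vocab)
    # features -> [1, 2, 0, 3, 0, 0, ...] padded to length 32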

[0134] At (618), method 600 can include obtaining output data, from the first machine learned model, indicative of one or more content item components. For instance, a computing system can obtain output data, from the first machine learned model, indicative of one or more content item components. By way of example, the one or more content item components can include a headline, description, video, image, tagline, sound bite, etc. The content items can be generated and/or retrieved by the content assistant component.

[0135] At (620), method 600 can include transmitting data which causes the one or more content item components to be provided for display via a user interface. For instance, a computing system can transmit data which causes the one or more content item components to be provided for display via a user interface. For example, the content item components can be headlines, descriptions, etc. that are populated in one or more input fields associated with a content creation flow form.

[0136] At (622), method 600 can include obtaining data indicative of user selection of approval of the one or more content item components. For instance, a computing system can obtain data indicative of user selection of approval of the one or more content item components. For example, the computing system can obtain data indicative of user selection of an "accept," "confirm," "create advertisement," or some other notification indicating approval of the content item components.

[0137] At (624), method 600 can include generating, in response to obtaining the data indicative of the user selection of the approval of the one or more content item components, one or more content items. For instance, a computing system can generate, in response to obtaining the data indicative of the user selection of the approval of the one or more content item components, one or more content items. For example, a content item can be an advertisement comprising a plurality of content item components. In some implementations, a content item can include a plurality of headlines, descriptions, videos, images, taglines, etc. Based on the processed user input and/or other user data, the system can generate one or more content items using the one or more content item components.

[0138] Figure 7 depicts a flow chart diagram of an example method 700 to perform according to example embodiments of the present disclosure. Although Figure 7 depicts steps performed in a particular order for purposes of illustration and discussion, the methods of the present disclosure are not limited to the particularly illustrated order or arrangement. The various steps of method 700 can be omitted, rearranged, combined, and/or adapted in various ways without deviating from the scope of the present disclosure.

[0139] At (702), method 700 can include obtaining data indicative of an input received from a user. For instance, a computing system can obtain data indicative of an input received from a user. Input from a user can include free-form input and/or landing page input. For example, free-form input can include a user describing themselves and/or an associated business using natural language. The user input can include a uniform resource locator (URL). In some instances, the URL can be associated with a website and/or landing page of a website associated with the user. In some implementations, the system can obtain data from the landing page to generate one or more content item components and/or content items to suggest to the user.

[0140] At (704), method 700 can include processing, by the machine learned model, the data indicative of the input received from the user. For instance, a computing system can process, by the machine learned model, the data indicative of the input received from the user.

[0141] In some implementations, method 700 can include training one or more machine learned models for implementation in a content creation flow. For instance, a computing system can train one or more machine learned models for implementation in a content creation flow. By way of example, the one or more machine learned models can be trained in any manner. For example, the one or more machine learned models can be trained using a knowledge distillation training method. In some implementations, training the one or more machine learned models can include inputting labeled data into a first machine learned model. In some implementations, the system can compare output data from the first machine learned model to output data from an expected machine learned model. The system can adjust the first machine learned model based on comparing the output data of the first and expected machine learned models.

[0142] In some implementations, the labeled data can include data that is output by a second machine learned model, wherein the second machine learned model is a pre-trained model. The output obtained by the expected machine learned model can include annotated data. For example, the system can annotate input data with the expected outcome based on the output when the same input data was processed by the expected machine learned model. In some implementations, the annotated data can be a silver training set. By way of example, the labeled data can include at least one of business and product descriptions, proxy data from websites associated with a content creator, human curated data, or free-form inputs.

[0143] In some implementations, training the one or more machine learned models can include inputting unlabeled data into a first machine learned model and a pre-trained second machine learned model. The training can include obtaining output data from the first machine learned model and the second machine learned model. The training can include adjusting the first machine learned model based on comparing the output data of the first machine learned model and the output of the second machine learned model. For example, the system can perform a loss regression on the outputs of the first and second machine learned models. In some instances, the system can continuously update and re-process input until the difference between the outputs of the first and second machine learned models is within an acceptable threshold.
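A hedged sketch of this compare-and-adjust loop, assuming both models expose a forward pass on the same batch and the output difference is measured with mean squared error (the specification does not fix the loss):

    import torch.nn.functional as F

    def distill_until_converged(student, teacher, unlabeled_batch,
                                optimizer, threshold=1e-3, max_steps=1000):
        """Adjust the student until its output matches the teacher's."""
        teacher_out = teacher(unlabeled_batch).detach()  # fixed target
        for _ in range(max_steps):
            student_out = student(unlabeled_batch)
            loss = F.mse_loss(student_out, teacher_out)  # output difference
            if loss.item() < threshold:  # acceptable threshold reached
                break
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        return student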

[0144] In some implementations, the method can include implementing the machine learned model in a content creation flow. For instance, a computing system can implement the machine learned model in a content creation flow. A content creation flow can be associated with a third party that provides a platform for content creators to generate customized content items (e.g., search results for display that link to a website, an advertisement, generated constructed content items).

[0145] In some implementations, the method can include generating an initial user interface comprising a content assistant component. For instance, a computing system can generate an initial user interface comprising a content assistant component. In some implementations, the initial user interface can be a graphical user interface.

[0146] At (706), method 700 can include obtaining output data, from the machine learned model, indicative of one or more content item components. For instance, a computing system can obtain output data, from the machine learned model, indicative of one or more content item components. For example, content item components can include one or more suggestions for content. For example, the content can include suggested headlines, descriptions, images, videos, taglines, etc. In some implementations, the content can include a generated advertisement.

[0147] At (708), method 700 can include transmitting data which causes the one or more content item components to be provided for display via a user interface. For instance, a computing system can transmit data which causes the one or more content item components to be provided for display via a user interface. For example, the system can populate existing user input fields with content generated by the model. In some implementations, the system can generate entire suggested content items (e.g., advertisements).

[0148] At (710), method 700 can include obtaining data indicative of user selection of approval of the one or more content item components. For instance, a computing system can obtain data indicative of user selection of approval of the one or more content item components. By way of example, the system can obtain data indicative of a selection of a user interface element indicative of user selection of an “accept”, “confirm”, “create advertisement”, or some other notification indicating approval of the content item components.

[0149] At (712), method 700 can include generating, in response to obtaining the data indicative of the user selection of the approval of the one or more content item components, one or more content items. For instance, a computing system can generate, in response to obtaining the data indicative of the user selection of the approval of the one or more content item components, one or more content items.

[0150] Figures 8-10 depict example content assistant components of user interfaces according to example embodiments of the present disclosure. The computing system can provide for display a content assistant component via the user interface. The content assistant component can be displayed in a variety of formats. For example, formats can include a set number of free-form text input fields (e.g., as depicted in Figure 8), one or more progressive disclosure fields (e.g., as depicted in Figure 9), and/or a conversational interface (e.g., as depicted in Figure 10).

[0151] Figure 8 depicts an example content assistant component 800 comprising a set number of free-form text input fields 805, 810, 815. For example, a first free-form text input field 805 can include a prompt for a user to "describe the product or service you want to advertise." The computing system can obtain data indicative of user input into the free-form text input field 805. The free-form text input can be natural language input by the user. A second free-form text input field 810 can include a prompt for a user to "explain what makes your product or service unique." The computing system can obtain data indicative of user input into the free-form text input field 810. A third free-form text input field 815 can include a prompt asking the user "what URL should this ad link to?" In some implementations, the content generated can be an advertisement. For example, a user can provide a uniform resource locator (URL) that directs a user to a website associated with the user. For example, the user can be associated with a business called Puppy Things. The URL associated with the business can be "http://www.puppythings.com".

[0152] Figure 9 depicts an example content assistant component 900 comprising one or more progressive disclosure fields. For instance, content assistant component 900 can include a first progressive disclosure field 905. The first progressive disclosure field 905 can be presented when the content assistant component 900 is first initiated. As a user provides input into the first progressive disclosure field 905, the system can provide for display second progressive disclosure field 910. After the system determines a user has input data into second progressive disclosure field 910, the system can provide for display third progressive disclosure field 915.

[0153] In some implementations, the progressive disclosure fields 905, 910, and/or 915 can be predetermined fields. Additionally, or alternatively, the progressive disclosure fields may be determined based on processing of information obtained as user input to a previously disclosed progressive disclosure field. For example, a first progressive disclosure field 905 could indicate "tell me about your business." If a user provides a short, generic one- or two-sentence answer, the second progressive disclosure field 910 could indicate a more pointed prompt than if the user provides a multiple-paragraph answer in first progressive disclosure field 905. When a user has completed providing input into the one or more progressive disclosure fields, the system can obtain data indicative of a user selecting a submit element 920. In response, the system can process the user input provided via the progressive disclosure fields and provide content suggestions via the content creation flow.
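The adaptive behavior described above could be implemented with a simple rule on the previous answer; the prompt wording and the length cutoff below are assumptions for illustration:

    def next_disclosure_prompt(previous_answer: str) -> str:
        """Choose the next progressive disclosure prompt from the prior answer."""
        sentences = [s for s in previous_answer.split(".") if s.strip()]
        if len(sentences) <= 2:
            # Short, generic answer: ask a more pointed follow-up.
            return ("What specific products or services do you sell, "
                    "and who are your customers?")
        # Detailed answer: a broader prompt suffices.
        return "What makes your business unique?"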

[0154] Figure 10 depicts an example content assistant component 1000 comprising a conversational user interface. In some implementations, the content assistant component 1000 can include a back-and-forth communication session between a user and the content assistant. For example, the content assistant can provide an initial prompt 1005 to a user. The user can provide a reply 1010. The computing system can facilitate the transmission of multiple communications between a user and the content assistant. As described with regard to Figure 5, the computing system can provide follow-up questions, process the data indicative of the user input associated with the user's responses, and generate suggested content based on the processing of the user input (e.g., via a machine learned natural language processing model).

[0155] Figure 11 depicts one example system 1100 for generating predicted performance increases associated with suggested content items and providing an updated user interface including the suggested content items and the respective predicted performance increase according to example embodiments of the present disclosure. The example system 1100 contains a computing system 1102. The computing system 1102 can be any type of system of one or more computing devices (e.g., client computing system 102, server computing system 104, etc.). A computing device can be, for example, a personal computing device (e.g., laptop or desktop), a mobile computing device (e.g., smartphone or tablet), a gaming console or controller, a wearable computing device, an embedded computing device, a server computing device, a node of a distributed computing device, a virtual instance hosted on a shared server, or any other type of computing device. In some embodiments, the computing system 1102 includes a plurality of computing devices interconnected via a network or otherwise distributed in an interoperable manner. For example, the computing system 1102 can include a server for serving content over a network (e.g., network 108). For instance, the computing system 1102 can include a web server for hosting web content, for collecting data regarding web content (e.g., for receiving, monitoring, generating, or otherwise processing data regarding web content, such as the use, download of, and/or interaction with web content).

[0156] The computing system 1102 can contain processor(s) 1112 and memory 1114. The one or more processors 1112 can be any suitable processing device (e.g., a processor core, a microprocessor, an ASIC, an FPGA, a controller, a microcontroller, etc.) and can be one processor or a plurality of processors that are operatively connected. The memory 1114 can include one or more computer-readable storage media which may be non-transitory, such as RAM, ROM, EEPROM, EPROM, flash memory devices, magnetic disks, etc., and combinations thereof. The memory 1114 can store data 1116 and instructions 1118 which are executed by the processor 1112 to cause the computing system 1102 to perform operations.

[0157] In some implementations, the computing system 1102 can store or otherwise implement one or more machine-learned models of a machine-learned model framework. In some embodiments, the content assistant component 1120 comprises a multi-model machine-learned model framework. In some embodiments, the content assistant component 1120 comprises no machine-learned model framework and/or any combination of custom language models 1122, language models 1126, or models 1125. The machine-learned model framework can include a machine-learned custom language model 1122 (e.g., with learnable weights 1124), a machine-learned language model 1126 (e.g., with learnable weights 1128), or models 1125. Models 1125 can include generative models 1129 or action models 1127. Generative models 1129 can generate images, audio, text, audiovisual, or any other content. Action models 1127 can be capable of interfacing with one or more other models or components to perform an action (e.g., make a recommendation, interact with another website). The models 1125 can also have associated learnable weights (not depicted in Figure 11).

[0158] In some embodiments, the content assistant component 1120 can implement a single model that implements custom language model 1122 for content suggestion, content ranking, content generating, general advertising content or recommendations, general analytics content or recommendations, and/or any combination (e.g., by combining one or more aspects of the custom language model 1122 and/or language model 1126, by training a single model to obtain the desired outputs directly, etc.). One or more of the machine-learned models can be or can otherwise include various machine-learned models such as neural networks (e.g., deep neural networks) or other types of machine-learned models, including non-linear models and/or linear models. Neural networks can include feed-forward neural networks, recurrent neural networks (e.g., long short-term memory recurrent neural networks), convolutional neural networks, or other forms of neural networks. Some example machine-learned models can leverage an attention mechanism such as self-attention. For example, some example machine-learned models can include multi-headed self-attention models (e.g., transformer models). In some instances, models 1125 can include multimodal models that can understand image, text, or other input. In some instances, models 1125 can include a plurality of models that can use a language model (e.g., a large language model) to better understand an intent of obtained data (e.g., user input data 1138).

[0159] Embodiments of the example system 1100 may be configured, as shown in Figure 11, to process data 1130. Responsive to processing the data 1130, the computing system 1102 can provide output 1150. Output 1150 can include generated suggested content item components 1155, natural language responses 1157, or content campaign recommendations 1159. Suggested content item components 1155 can include a plurality of components or elements used to generate a content item. For instance, the suggested content item components can include text (e.g., headlines or descriptions), images (e.g., obtained from a database or generated in real time), audiovisual, interactive, or other kinds of content. Natural language responses 1157 can include text generated by a language model based on determining an intent of an obtained user input and providing a natural language response providing an answer, recommendation, or other response as output 1150. Content campaign recommendations 1159 can include recommendations for analyzing historical content performance data (e.g., advertisement performance, cost per click, return on advertisement spend), recommendations for adjusting content campaign parameters based on analyzed data (e.g., adjusting spend across different media channels, adjusting maximum or minimum bidding parameters), or any other recommendation associated with a content campaign.
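For illustration only, the three output types described above could be represented with data structures along the following lines; every class and field name here is a hypothetical stand-in rather than the disclosure's own schema:

```python
# Hypothetical shapes for output 1150; names and fields are assumptions.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class SuggestedContentItemComponents:   # cf. components 1155
    headlines: List[str] = field(default_factory=list)
    descriptions: List[str] = field(default_factory=list)
    image_urls: List[str] = field(default_factory=list)

@dataclass
class NaturalLanguageResponse:          # cf. responses 1157
    text: str

@dataclass
class ContentCampaignRecommendation:    # cf. recommendations 1159
    summary: str
    parameter_changes: Dict[str, float] = field(default_factory=dict)

@dataclass
class Output:                           # cf. output 1150
    components: SuggestedContentItemComponents
    response: NaturalLanguageResponse
    recommendations: List[ContentCampaignRecommendation]
```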

[0160] Suggested content item components 1155, natural language responses 1157, or content campaign recommendations 1159 can be provided for display via user interface 1170 of a client device associated with client computing system 1160. Client computing system 1160 can contain processor(s) 1162 and memory 1164. The one or more processors 1162 can be any suitable processing device (e.g., a processor core, a microprocessor, an ASIC, an FPGA, a controller, a microcontroller, etc.) and can be one processor or a plurality of processors that are operatively connected. The memory 1164 can include one or more computer-readable storage media which may be non-transitory, such as RAM, ROM, EEPROM, EPROM, flash memory devices, magnetic disks, etc., and combinations thereof. The memory 1164 can store data 1166 and instructions 1168 which are executed by the processor 1162 to cause the client computing system 1160 to perform operations.

[0161] The output 1150 can include suggested content item components 1155. Suggested content item components 1155 can include, for example, suggested headlines, descriptions, follow-up communications, videos, images, audio, sound bites, etc. In some instances, the suggested content item components 1155 can be creative assets in a form that can be input into a free-form input field or headline field. For example, the custom language model 1122 can obtain user input and provide follow-up communications to gather additional user input to generate further suggested content item components 1155. By way of example, the custom models can be trained using distillation techniques and the previously trained language models (e.g., previously trained large language models).

[0162] In some embodiments, the data 1130 is obtained via an input interface of a computing system (e.g., computing system 1102 and/or client computing system 1160). For example, in some embodiments, the content assistant component 1120 can be configured to process data 1130 as input to the computing system 1102. In some embodiments, the data 1130 can be implicit in the structure and/or configuration of the content assistant component 1120.

[0163] In some implementations, the content suggestion generator model is trained to receive a set of input data (e.g., data 1130) descriptive of user input and, in response, provide output data (e.g., suggested content item components 1155) indicative of one or more suggested content items to be rendered via a user interface. For instance, Figure 11 illustrates instances of evaluation.

[0164] For instance, evaluation component 1140 can perform offline evaluation and/or online evaluation (e.g., real-traffic evaluation). Offline evaluation can include using a human evaluation platform. The human evaluation platform can include a template and/or rubric for standardizing evaluation. Online evaluation can include pilot and/or live-traffic experiments to capture online metrics. In some implementations, evaluation component 1140 can include automated evaluation and/or manual evaluation. In some implementations, evaluation component 1140 can include tagging and/or labeling data as training data. The system can determine a quality score for the model. The system can compare the quality score for the model to a threshold quality score. If the quality score of the model is greater than the threshold quality score, the system can determine the model is ready to be used. If the quality score for the model is below the threshold quality score, the system can determine the model needs additional training and/or tuning before being used.
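A minimal sketch of the gating step just described follows; the 0.8 threshold and the return labels are assumptions, and the scoring itself is abstracted away:

```python
# Sketch of the quality-score gate; the 0.8 threshold is an assumption.

def model_ready(quality_score: float, threshold: float = 0.8) -> bool:
    """Return True when the evaluated model meets the quality bar."""
    return quality_score > threshold

def evaluate_and_gate(quality_score: float) -> str:
    if model_ready(quality_score):
        return "deploy"             # model is ready to be used
    return "continue training"      # needs additional training and/or tuning

print(evaluate_and_gate(0.85))  # deploy
print(evaluate_and_gate(0.55))  # continue training
```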

[0165] In some implementations, the input data can include one or more features that are associated with an instance or an example. In some implementations, the one or more features associated with the instance or example can be organized into a feature vector. In some implementations, the output data can include one or more predictions. Predictions can also be referred to as inferences. Thus, given features associated with a particular instance, the machine-learned model can output a prediction for such instance based on the features.

[0166] The machine-learned model can be or include one or more of various different types of machine-learned models. In particular, in some implementations, the machine-learned model can perform classification, regression, clustering, anomaly detection, recommendation generation, and/or other tasks.

[0167] In some implementations, the data 1130 includes aggregated data 1132, proxy data 1134, and/or user input data 1138. In some embodiments, data 1130 can include instances of substantially any kind or type of data that may be descriptive of various phenomena. In general, an instance refers to a set of one or more data values grouped together to describe a particular subject or subject matter. For example, an instance can be a feature vector. An instance can be associated with image data (e.g., a feature vector for an image, a hashed image, etc.). An instance can be associated with a measurement or other data collection event (e.g., at a particular time, or of a particular subject, or using a particular device, or from a particular perspective, etc.). An instance can be associated with a network session, such as a set of interactions with a web server. In some embodiments, an instance can be associated with a user's interaction with web content (e.g., anonymous or identified).

[0168] In some embodiments, the custom language model 1122 can be trained using knowledge distillation. For example, the custom language model 1122 can be a student model and the large language model 1126 can be a teacher model. A knowledge distillation training process can occur to train the custom language model 1122 based on the pre-trained large language model 1126. For example, the large language model can be a large model that has been generated and trained using a large amount of computing resources. For instance, a large language model can be a language model consisting of a neural network with many parameters (e.g., millions, billions, or trillions of weights). The language models can be trained on large quantities of unlabeled text by self-supervised learning or semi-supervised learning. In order to conserve resources, knowledge distillation can be used to train the custom language model 1122 to generate similar results to the pre-trained large language model 1126.
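A minimal knowledge-distillation training step might look as follows. This sketch assumes PyTorch (the disclosure names no framework), uses a toy linear layer as the student, and random logits standing in for the frozen teacher:

```python
# Sketch only: PyTorch is an assumption, as are the toy dimensions.
# The teacher stands in for pre-trained large language model 1126;
# the student stands in for custom language model 1122.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between softened teacher and student distributions."""
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    # Scale by T^2 so gradients stay comparable across temperature choices.
    return F.kl_div(log_probs, soft_targets, reduction="batchmean") * temperature ** 2

batch, hidden, vocab = 4, 16, 100
teacher_logits = torch.randn(batch, vocab)   # frozen teacher output (mocked)
student = torch.nn.Linear(hidden, vocab)     # toy stand-in for the student
inputs = torch.randn(batch, hidden)
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

opt.zero_grad()
loss = distillation_loss(student(inputs), teacher_logits)
loss.backward()
opt.step()
print(f"distillation loss: {loss.item():.4f}")
```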

[0169] In some implementations, the models of content assistant component 1120 can be tuned and/or trained using prompting. For example, the computing system can obtain prompts indicative of sample inputs and outputs. The sample inputs and outputs can be used to tune the models to be familiar with the kinds of inputs and outputs that are desired for a specific implementation. For example, a particular implementation can include generation of an advertisement. Generation of advertisements requires understanding of nuances of businesses, business goals, products, and the like. In this way, by prompting the one or more models (e.g., custom language model 1122 and/or large language model 1126), the computing system can tune the models to provide higher quality output (e.g., output 1150 which can be evaluated using evaluation component 1140). In some instances, after evaluation component 1140 has performed evaluation, the data can be aggregated and used as data 1130 for further model training.
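One common way to realize this kind of prompting is to prepend sample input/output pairs to the new request ("few-shot" prompting). The sketch below is illustrative only; the example pairs and the template are invented for demonstration:

```python
# Sketch of prompting with sample inputs/outputs ("few-shot" examples).
# The example pairs and prompt template are illustrative assumptions.

EXAMPLES = [
    ("Handmade leather wallets, free shipping",
     "Carry Craftsmanship Everywhere - Free Shipping on Leather Wallets"),
    ("24/7 emergency plumbing in Austin",
     "Burst Pipe at 2 AM? Austin Plumbers On Call 24/7"),
]

def build_prompt(business_description: str) -> str:
    """Assemble sample input/output pairs ahead of the new request."""
    lines = ["Write an ad headline for the business described."]
    for desc, headline in EXAMPLES:
        lines.append(f"Business: {desc}\nHeadline: {headline}")
    lines.append(f"Business: {business_description}\nHeadline:")
    return "\n\n".join(lines)

print(build_prompt("Organic dog treats baked locally"))
```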

[0170] In some implementations, the content assistant component 1120 can be a machine-learned model. In some implementations, training and tuning can include using data 1130. Aggregated data 1132 can include data obtained from one or more user sessions that have been aggregated by the system (e.g., computing system 202). Proxy data 1134 can include, for example, output data obtained from content assistant component 1120. In some implementations, proxy data 1134 can be tagged and/or labeled as representative of sample inputs and/or outputs. User input data 1138 can include user-generated input comprising inputs and/or outputs associated with desired content items and generated content items.

[0171] The trained model can be used to generate one or more suggested content items and/or update a structured user interface and/or construction workflow. In some embodiments, a simulated user model can be trained using the training data to generate simulated inputs according to patterns learned from the training data. The simulated user model can, in some instances, provide a simulated environment in which the reinforcement learning agent (e.g., the notification element subsystem) can be trained.

[0172] Figure 12 depicts an example content assistant flow 1200 for determining when to provide a prompt for display to a user to obtain additional user input via the user interface. At 1202, an initial prompt can be provided for display via the user interface. At 1205, the system can determine if user input has been obtained. If user input has been obtained, the system can proceed to 1210. At 1210, the content assistant can obtain one or more keyword recommendations. At 1215, the content assistant can generate suggested content. Following 1215, the system can proceed to 1220. If no user input is obtained at 1205, the computing system can proceed to 1220.

[0173] At 1220, the computing system can determine if a prompt for more information should be provided for display via the user interface (e.g., via a content assistant component). For instance, at 1225 the computing system can determine a content score and compare the content score to a threshold score value. For instance, the score can be a numerical score or some other form of rating. The score can be determined based on a number of headlines, descriptions, images, or other content elements selected or filled in within the content campaign interface. At 1230, the computing system can determine if the content score is above a threshold. If the content score is above the threshold, the computing system can progress to step 1255. At 1255, the content assistant component can periodically poll the client device to determine if updated user input has been obtained.

[0174] If the content score is not above the threshold, the system can progress to 1235. At 1235, the system can determine if at least X minutes have passed since the last prompt was provided for display via the user interface. X can be any number of minutes (or can be a number of seconds) and can be automatically updated. Additionally, or alternatively, X can be manually selected or input. If X minutes have not passed, the computing system can progress to 1255. At 1255, the content assistant component can periodically poll the client device to determine if updated user input has been obtained.

[0175] If X minutes have passed, the computing system can progress to 1240. At 1240, the computing system can determine if question Y has been provided for display via the user interface. For instance, the computing system can keep a log of a current communication session. The computing system can determine the meaning of question Y and compare it to the log of the current communication session. If the computing system determines that question Y has been provided for display (e.g., asked to the user), the computing system can progress to 1255. At 1255, the content assistant component can periodically poll the client device to determine if updated user input has been obtained.

[0176] If the computing system determines that question Y has not been provided for display, the computing system can progress to 1245. At 1245, the computing system can determine if user input indicative of typing has been obtained within Z seconds. Z can be any amount of time (e.g., seconds, minutes). For instance, Z can be automatically determined by the computing system based on a determined average amount of time between received inputs. Additionally, or alternatively, Z can be set (e.g., by a user providing manual input). If the computing system determines that user input indicative of typing has been obtained within Z seconds, the computing system can proceed to 1255. At 1255, the content assistant component can periodically poll the client device to determine if updated user input has been obtained.

[0177] If the computing system determines that user input indicative of typing has not been obtained within Z seconds, the computing system can proceed to 1250. At 1250, the computing system can provide a prompt for display via the user interface. The prompt can include a message prompting a user to provide additional user input. The additional user input can be free-form input, selecting a selectable screen element, or any other user input.

[0178] As discussed herein, at 1255, the content assistant can periodically poll the client device for obtained user input. The computing system can proceed to 1260. At 1260, the content assistant component can determine that user input has been obtained. In response, the content assistant can determine an intent of the user input. For instance, the computing system can determine a semantic intent through natural language processing (e.g., via a large language model).
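The decision flow from 1220 through 1250 can be condensed into a single predicate. The sketch below is illustrative only; the session field names and the default values standing in for X and Z are assumptions:

```python
# Condensed sketch of the gating checks at 1230, 1235, 1240, and 1245.
# Session field names and default values are assumptions.
import time

def should_prompt(session: dict, question: str,
                  score_threshold: int = 3,
                  min_gap_s: float = 300.0,      # "X minutes"
                  typing_window_s: float = 10.0  # "Z seconds"
                  ) -> bool:
    now = time.time()
    if session["content_score"] > score_threshold:
        return False  # content complete enough: keep polling (1255)
    if now - session["last_prompt_at"] < min_gap_s:
        return False  # X minutes have not passed (1235)
    if question in session["asked_questions"]:
        return False  # question Y already provided for display (1240)
    if now - session["last_keystroke_at"] < typing_window_s:
        return False  # user appears to still be typing (1245)
    return True       # provide the prompt for display (1250)

session = {"content_score": 1, "last_prompt_at": time.time() - 600,
           "asked_questions": set(), "last_keystroke_at": time.time() - 60}
print(should_prompt(session, "What is your business name?"))  # True
```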

[0179] The computing system can determine that the content assistant is not capable of providing a recommendation or performing an action relating to the intent. In response, the computing system can proceed to 1265. At 1265, the content assistant can provide a message for display comprising an indication of an inability to assist with the determined intent. The computing system can proceed to step 1255. At 1255, the content assistant component can periodically poll the client device to determine if updated user input has been obtained.

[0180] The computing system can determine that the content assistant is capable of providing a recommendation or performing an action relating to the intent. In response, the computing system can proceed to 1215 and the content assistant can generate suggested content as discussed herein.

[0181] Figure 13 depicts an example content assistant flow loop 1300 for providing recommendations to update a content creation interface until a predicted score of the generated content exceeds a threshold. At 1305, the computing system can evaluate obtained user input to determine if a prompt to obtain user input should be provided for display. For instance, the computing system can determine the intent of obtained user input (e.g., desire to generate a new content campaign, an inquiry about the performance of various settings for content campaigns) and evaluate whether a response should be provided for display via the user device.

[0182] At 1310, the computing system can determine if a prompt to obtain additional user input was provided for display. If a prompt to obtain additional user input was not provided for display, the computing system can proceed to 1315. At 1315, the content assistant can periodically poll the client device for updated user input. If a prompt to obtain additional user input was provided for display, the computing system can proceed to 1320. At 1320, the computing system can determine if a requisite model competence exists for the subject matter of the obtained user input. For instance, the computing system can determine if an answer or response can be generated responsive to the user’s input. If a requisite model competence does not exist, the computing system can proceed to step 1340. At 1340, the computing system can provide for display general questions for more information to be obtained via a user input component of the user interface.

[0183] If a requisite model competence exists for the subject matter, the computing system can proceed to 1325. At 1325, the computing system can determine if one or more salient properties are missing. If the computing system determines that there are salient properties missing, the computing system can progress to 1330. At 1330, the computing system can provide for display a directed question relating to a missing property to be obtained via a user input component of the user interface. For instance, a missing salient property can include a headline, a business name, a location, or other pertinent information relating to the subject matter of the content item to be generated. The computing system can proceed to 1345 to obtain additional user input responsive to a question provided for display via the user interface.
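The branch at 1320 through 1340 can be sketched as a simple question selector; the property list and the question wording below are assumptions for illustration:

```python
# Sketch of the branch at 1320-1340: ask a directed question for a
# missing salient property, else fall back to general questions.
# Property names and question text are assumptions.

SALIENT_PROPERTIES = ["headline", "business_name", "location"]

def choose_question(known: dict, competent: bool) -> str:
    if not competent:
        return "Can you tell me more about what you're trying to do?"  # 1340
    for prop in SALIENT_PROPERTIES:                                    # 1325
        if not known.get(prop):
            return f"What is your {prop.replace('_', ' ')}?"           # 1330
    return "Is there anything else I should know about your business?" # 1340

print(choose_question({"business_name": "Acme"}, competent=True))
```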

[0184] If the computing system determines that there are no missing salient properties, the computing system can proceed to 1340. At 1340, the computing system can provide for display general questions for more information to be obtained via a user input component of the user interface. Following 1340, the computing system can proceed to 1345.

[0185] Figure 14A, Figure 14B, and Figure 14C depict flow chart diagrams of an example method 1400 to perform according to example embodiments of the present disclosure. Although Figure 14A to Figure 14C depict steps performed in a particular order for purposes of illustration and discussion, the methods of the present disclosure are not limited to the particularly illustrated order or arrangement. The various steps of method 1400 can be omitted, rearranged, combined, and/or adapted in various ways without deviating from the scope of the present disclosure.

[0186] At (1402), method 1400 can include obtaining, via a conversational campaign assistant interface, by a language model, natural language input. For instance, a computing system can obtain, via a conversational campaign assistant interface, by a language model, natural language input. As described herein, the natural language input can include a URL, a description, a question, or other natural language input.

[0187] At (1404), method 1400 can include generating, by a custom language model, an output comprising a predicted user intent. For instance, a computing system can generate, by a custom language model, an output comprising a predicted user intent. As described herein, the custom language model can be a model trained using a distillation learning technique. For instance, the custom language model can be trained based on a previously trained large language model.

[0188] In some instances, the custom language model can be trained to be an expert on a particular topic. This allows the custom language model to better determine a predicted user intent based on both the current context of the session and the data that the custom model was trained on.

[0189] At (1406), method 1400 can include determining one or more actions to perform. For instance, a computing system can determine one or more actions to perform. By way of example, an action can include populating an input field, updating the user interface to open a new window, adjusting settings within the user interface, providing a recommendation, providing a visual indication of an update to an input field, or another action.

[0190] As depicted in Figure 14B, at step (1406), method 1400 can include steps (1418), (1420), and (1422).

[0191] For example, at (1418), step (1406) can include parsing the output generated by the custom language model. For instance, a computing system can parse the output generated by the custom language model. For instance, the output generated by the custom language model can include a user intent.

[0192] For example, at (1420), step (1406) can include determining an action associated with the output. For instance, a computing system can determine an action associated with the output. By way of example, the user intent can include a specific action to be performed, a question, a request for a recommendation, or another action.

[0193] For instance, a user can ask for a recommendation of settings for bidding strategies to optimize spending or return on content spending (e.g., return on advertising spending). The settings can be tailored to the context of the user session. For instance, the recommendations can be based on prior input received in the conversation, information gathered from a website associated with the user session, or other contextual information. In some instances, the user can request information about how to understand analytic data, request that data be exported in a particular format, or make any other request. For instance, a user can indicate the purpose for which the data needs to be exported. A request for data to be exported for manual analysis in a spreadsheet may result in a different type of file generated than data being exported for analysis via some other application. An application can be associated with a user device, a third-party service, and the like.

[0194] A content campaign can include a plurality of parameters. The parameters can include, for example, a desired audience (e.g., targeting), an amount to pay for content being displayed and measures of performance (e.g., bidding), or the content that can be displayed (e.g., creatives). Content campaigns associated with an organization can be large and managed by a plurality of individuals. There can be thousands of individual campaigns within an organization, which can include thousands of keywords, landing pages, business use cases, business units, and the like. Business use cases can include customer acquisition, competitor targeting, new product launches, increasing brand exposure, and the like. Business units can include products and services, countries, languages, or geographic areas. The management of these campaigns can be disjointed and result in duplicative processing and display of content due to lack of coordination among campaigns.

[0195] The content assistant component can be trained to provide recommendations to improve computing resource allocation. For instance, the content assistant component can be trained to adjust or provide recommendations for adjusting various parameters associated with content campaigns to decrease redundant data transmission. Additionally, improvements in performance can occur. For instance, the content assistant component can obtain information on performance metrics, determine an attribution of a change in the metrics to the change in parameters or an external cause, and provide recommendations or automatically adjust parameters across an organization’s content campaigns.

[0196] For instance, the content assistant component can monitor one or more performance metrics. The content assistant component can provide a notification for display responsive to determining a statistically significant change in the performance metrics. The content assistant component can determine an explanation for the change in the performance metrics. The content assistant component can automatically update the user interface to display the updated information. For instance, the updated user interface can include providing recommendations for updating parameters associated with the content campaigns, providing a message indicating a reason for the change in performance metrics, or automatically adjusting parameters and providing a notification summarizing the changes made. The explanation for the performance metrics change can be determined based on obtaining a plurality of signals. The signals can include, for example, changes in query volumes, changes to content elements (e.g., keywords, descriptions, creative assets), or changes to the display of the organization’s content items compared to content items provided for display associated with other organizations.
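As one hedged illustration of the monitoring step, a two-sample z-test over recent versus baseline metric windows could flag a statistically significant change; the test choice, the 1.96 threshold, and the sample values below are assumptions, since the disclosure only requires that significance be determined:

```python
# Illustrative significance check on daily click-through rates; the
# two-sample z-test and the 1.96 threshold are assumptions.
from math import sqrt
from statistics import mean, stdev

def significant_change(baseline, recent, z_threshold=1.96):
    """Flag a statistically significant shift between two metric windows."""
    se = sqrt(stdev(baseline) ** 2 / len(baseline)
              + stdev(recent) ** 2 / len(recent))
    z = (mean(recent) - mean(baseline)) / se if se else 0.0
    return abs(z) > z_threshold

baseline_ctr = [0.031, 0.029, 0.030, 0.032, 0.028, 0.030, 0.031]
recent_ctr = [0.022, 0.021, 0.023, 0.020, 0.022, 0.021, 0.023]
if significant_change(baseline_ctr, recent_ctr):
    print("Notify: click-through rate changed significantly")
```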

[0197] Adjustments to parameters can include adding or removing content elements (e.g., descriptions, keywords, images, audio, video), adjusting bidding strategy, or other adjustments. As discussed herein, the adjustments can be made automatically and can be continually or periodically updated. The content assistant component can continually review the content campaign performance metrics and provide recommendations and updates in real time. In some implementations, changes can include reducing a number of keywords used, determining an intent of one or more campaigns and consolidating the campaigns, determining a campaign is heterogeneous and should be split into multiple campaigns, or recommending a different campaign type.

[0198] For example, at (1422), step (1406) can include generating an action data structure comprising executable instructions that cause a processor to perform an operation associated with completing the action. For instance, a computing system can generate an action data structure comprising executable instructions that cause a processor to perform an operation associated with completing the action. The action can include performing operations within a content creation structured interface or can include performing operations with an unrelated application.

[0199] Generating the action data structure can include transforming the data produced by the custom language model into a format that can be input into the content creation structured interface. This can include determining which input fields to populate with what data, what data is to be provided for display to a user, what data should be kept invisible to the user (e.g., metadata), and the like. This can allow the output generated by the custom language model to be utilized by both the content creation structured interface and the content assistant component of the conversational campaign interface.
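For illustration, such a transformation might produce a structure like the following; the field names and JSON shape are hypothetical, not the disclosure's actual format:

```python
# Hypothetical action data structure showing which fields to populate,
# what to display, and what stays invisible metadata. All names are
# illustrative assumptions.
import json

def build_action(model_output: dict) -> dict:
    return {
        "action": "populate_fields",
        "fields": {                     # input fields to fill
            "headline": model_output.get("headline", ""),
            "description": model_output.get("description", ""),
        },
        "display": model_output.get("summary", ""),   # shown to the user
        "metadata": {                   # kept invisible to the user
            "model_version": model_output.get("model_version"),
            "confidence": model_output.get("confidence"),
        },
    }

print(json.dumps(build_action(
    {"headline": "Fresh Flowers Daily", "confidence": 0.91}), indent=2))
```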

[0200] At (1408), method 1400 can include determining a natural language response. For instance, a computing system can determine a natural language response. By way of example, the natural language response can include an indication of fields that have been updated in the content creation structured interface, a plurality of recommended content elements to be added to the content creation structured interface, a generated summary, and the like.

[0201] As depicted in Figure 14C, at step (1408), method 1400 can include steps (1424) and (1426).

[0202] For example, at (1424), step (1408) can include parsing the output generated by the custom language model. For instance, a computing system can parse the output generated by the custom language model. The output generated by the custom language model can include data extracted from a website associated with a URL provided by a user. The extracted data can be generated in a form that would not be readily accessible or understandable if displayed in a raw format. However, the computing system can parse the generated output to obtain relevant portions of the output to use in generating response data.

[0203] For example, at (1426), step (1408) can include generating a response data structure comprising a natural language response to the obtained natural language input. For instance, a computing system can generate a response data structure comprising a natural language response to the obtained natural language input. For instance, the computing system can take the data extracted at step (1424) and transform the data into a natural language output that can be understood by a user.

[0204] At (1410), method 1400 can include transmitting, to an action component, the action data structure comprising executable instructions that cause the action component to automatically perform operations associated with completing the action. For instance, a computing system can transmit, to an action component, the action data structure comprising executable instructions that cause the action component to automatically perform operations associated with completing the action.
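A correspondingly minimal sketch of steps (1424) and (1426) described above, turning parsed model output into a response data structure, might look as follows; the structure and wording are assumptions:

```python
# Sketch of turning parsed model output (step 1424) into the response
# data structure of step 1426. Field names and phrasing are assumptions.

def build_response(parsed: dict) -> dict:
    name = parsed.get("business_name", "your business")
    services = ", ".join(parsed.get("services", [])) or "your services"
    return {
        "type": "natural_language_response",
        "text": (f"It looks like {name} offers {services}. "
                 "Does this summary look right?"),
    }

print(build_response({"business_name": "Acme Florist",
                      "services": ["bouquets", "same-day delivery"]})["text"])
```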

[0205] The action data structure can include data to be populated within one or more fields of a content creation structured interface.

[0206] The action component can include a content campaign performance model. The content campaign performance model can be configured to ingest data about content campaigns. The content campaign performance model can be a generic model that provides information on performance of content campaigns for a plurality of users or campaign types. For instance, the content campaign performance model can provide general advice and strategies for improving content performance.

[0207] The action component can include a content campaign analysis model. The content campaign analysis model can be configured to provide insights into the performance of content campaigns. For example, the data can include conversion data, view data, click data, spend data, or any other data indicative of performance of a content campaign. For instance, the natural language input can be a question about how many views a particular content item had, or which content items performed better than others. The content campaign analysis model can provide customized recommendations for particular content campaigns or sub-campaigns being managed via the user interface.

[0208] The action component can include a bidding strategy model. The bidding strategy model can provide personalized recommendations for allocation of spend based on previous performance. The previous performance can be associated with a specific return user and based on historical data. Additionally, or alternatively, the bidding strategy model can obtain additional user input to learn more about the user’s associated business, customer base, or other baseline knowledge to be used in generating recommendations.

[0209] The action component can include a generative model. The generative model can obtain the action data structure as an input prompt and generate an output including a creative asset. For instance, the natural language input can include a uniform resource locator (URL). The action component can include a generative model that generates a summary of information parsed from a page (e.g., a website) associated with the URL. The generative model can be a machine learning model. The generative model can be trained using a knowledge distillation technique. Additionally, or alternatively, the creative asset can include an image, a sound, a video, or other forms of content.

[0210] At (1412), method 1400 can include transmitting, to the conversation campaign assistant interface, the response data structure comprising the natural language response to be provided for display to a user via the conversational campaign assistant interface. For instance, a computing system can transmit to the conversation campaign assistant interface, the response data structure comprising the natural language response to be provided for display to a user via the conversational campaign assistant interface.

[0211] At (1414), method 1400 can include obtaining, subsequent to transmitting the action data structure and response data structure, user input indicative of a validation of the action data structure or the response data structure. For instance, a computing system can obtain, subsequent to transmitting the action data structure and response data structure, user input indicative of a validation of the action data structure or the response data structure. In some implementations, validation can occur automatically. Additionally, or alternatively, validation can be performed manually. Example approaches for validation are discussed, for example, with regard to Figure 2, Figure 3, and Figure 11.

[0212] At (1416), method 1400 can include updating the custom language model based on the user input. For instance, a computing system can update the custom language model based on the user input. As described herein, updating the custom language model can include automatically adjusting the model (e.g., weights, parameters) to alter the output produced by the model. The model can be automatically updated in real-time. Additionally, or alternatively, the model can be updated (e.g., trained) offline.

[0213] Figure 15 to Figure 21 depict example user interfaces including content assistant components according to example embodiments of the present disclosure. The computing system can provide for display a content assistant component alongside a content creation structured interface via the user interface. The content assistant component and structured interface can include a plurality of interactive selectable elements. The elements can include clickable buttons, input fields, or other similar elements. A user can provide input via the content assistant component. The computing system can process the input to determine a natural language response to provide for display via the content assistant component, alongside determining one or more actions to complete (e.g., forms to fill or buttons to select) in the content creation structured interface. In some instances, the one or more actions can additionally or alternatively include additional actions outside of the content creation interface. Each of the following figures will discuss features and functionalities of example user interfaces as described herein.

[0214] Figure 15 depicts an example user interface 1500 including a content assistant component 1505 and a content creation structured interface 1510. Content assistant component 1505 can obtain user input 1515 including a URL for a website associated with the user (or a user’s business). In response to obtaining the user input 1515 including the URL for the website, the content assistant component can generate a natural language response 1520. The natural language response 1520 can include a summary of the contents of the website associated with the URL that is generated by the content assistant component. For instance, the content assistant component can parse the website associated with the URL to generate the natural language response 1520. In response to the user providing the URL, the computing system can generate a summary of the contents of the website. The content assistant component 1505 can provide the summary to the user for display and request that the user provide input indicative of the summary being accurate. For instance, a user can select a button that says accept, type or talk-to-text a message, or provide other input. The system can obtain the input and update the user interface 1500 accordingly.

[0215] The content assistant component can be displayed in a variety of formats. For example, formats can include a set number of free-form text input fields (e.g., as depicted in Figure 8), one or more progressive disclosure fields (e.g., as depicted in Figure 9), and/or a conversational interface (e.g., as depicted in Figure 10).

[0216] Figure 16 depicts an example user interface 1600 including a content assistant component 1605 and a content creation structured interface 1610. Content assistant component 1605 can obtain user input and provide for display a response generated as discussed herein. Content creation structured interface 1610 can include one or more interactive components. For example, the components can include free-form input or selectable elements. For instance, user input indicative of a selection of one of the fields can result in an expansion of the selected field. In some instances, the fields can evolve based on direct user input or input obtained via the content assistant component 1605 of the user interface 1600. For instance, the content assistant component can obtain user input comprising a description of the products or services offered by a company associated with the user. The content assistant component 1605 can process the input and generate an action or a natural language response. For instance, the natural language response can be provided for display via the content assistant component 1605, and the user interface 1600 can be automatically updated to be populated with one or more recommended content elements.

[0217] As depicted in Figure 17, the user interface 1700 can be automatically updated to include populated interactive user interface elements. For instance, user interface 1700 can include a content creation structured interface 1710. The content creation structured interface can include a plurality of keyword suggestions. The keyword suggestions can be prepopulated based on the content assistant component 1705 processing the obtained input. The content assistant component 1705 can input the obtained user input into a language model to determine the intent associated with the input. This can include determining subject matter of the input, extracting keywords, parsing the input to determine and extract the most relevant portions, parsing a website provided by a user to determine the meaning of additional input in context of the contents of the website, or other relevant determinations. Content assistant component 1705 can be automatically updated to provide an updated prompt via the user interface. For instance, the updated prompt can be an answer to a question, an indication of what fields within user interface 1700 have been populated, a request for additional information, targeted questions, general questions, or any other message or inquiry.

[0218] Figure 18 depicts an example user interface 1800. The user interface 1800 can include a content assistant component 1805 and a content creation structured interface 1810. The content assistant component 1805 can be automatically updated to provide for display additional messages or selectable elements. For instance, as depicted in Figure 18, the selectable elements can include a “review images” button that can be selected or clicked to cause the user interface 1800 to update to display one or more images that can be selected to be added to a content item. Content creation structured interface 1810 can include multiple sections. For instance, the sections can include a middle section including a content score and preview of the content item. The content score can be generated based on a number of completed input fields within the content creation structured interface or a quality of the input within the completed input fields. The completed input fields can include a plurality of headlines that can be generated by the content assistant component.

[0219] Figure 19 depicts an example user interface 1900. The user interface 1900 can include a content assistant component 1905 and a content creation structured interface 1910. The content creation structured interface 1910 can include multiple sections. For instance, the sections can include a middle section including a content score and preview of the content item. The content score can be generated based on a number of completed input fields within the content creation structured interface or a quality of the input within the completed input fields. The completed input fields can include a plurality of descriptions that can be generated by the content assistant component. The input fields can include a plurality of images selected to be utilized to generate content items.

[0220] The content assistant component 1905 can provide a plurality of selectable content elements. For instance, the selectable elements can include a plurality of headlines, descriptions, images, or other content elements. The computing system can obtain user input via user interface 1900. For instance, the user input can be indicative of a selection of one or more content elements to be added to the content creation structured interface 1910. In response to receiving user input via user interface 1900, the computing system can automatically update the user interface to incorporate the selected content elements from content assistant component 1905 into content creation structured interface 1910.

[0221] Figure 20 depicts an example updated user interface 2000. User interface 2000 can include a content assistant component 2005 and a content creation structured interface 2010. Content assistant component 2005 can be updated to include an indication of the selected content elements. For instance, a visual indicator, such as a check mark, can replace an addition element once the content element is selected. Additionally, or alternatively, the content creation structured interface 2010 can be updated to include the plurality of selected content elements. Thus, the interaction with the content assistant component 2005 can directly impact the content creation structured interface. This can include an adjustment to a content strength score, the preview, or the plurality of input fields.

[0222] The technology discussed herein makes reference to servers, databases, software applications, and other computer-based systems, as well as actions taken and information sent to and from such systems. The inherent flexibility of computer-based systems allows for a great variety of possible configurations, combinations, and divisions of tasks and functionality between and among components. For instance, processes discussed herein can be implemented using a single device or component or multiple devices or components working in combination. Databases and applications can be implemented on a single system or distributed across multiple systems. Distributed components can operate sequentially or in parallel.

[0223] While the present subject matter has been described in detail with respect to various specific example embodiments thereof, each example is provided by way of explanation, not limitation of the disclosure. Those skilled in the art, upon attaining an understanding of the foregoing, can readily produce alterations to, variations of, and equivalents to such embodiments. Accordingly, the subject disclosure does not preclude inclusion of such modifications, variations, and/or additions to the present subject matter as would be readily apparent to one of ordinary skill in the art. For instance, features illustrated or described as part of one embodiment can be used with another embodiment to yield a still further embodiment. Thus, it is intended that the present disclosure covers such alterations, variations, and equivalents.

[0224] The depicted and/or described steps are merely illustrative and can be omitted, combined, and/or performed in an order other than that depicted and/or described; the numbering of depicted steps is merely for ease of reference and does not imply any particular ordering is necessary or preferred.

[0225] The functions and/or steps described herein can be embodied in computer-usable data and/or computer-executable instructions, executed by one or more computers and/or other devices to perform one or more functions described herein. Generally, such data and/or instructions include routines, programs, objects, components, data structures, or the like that perform particular tasks and/or implement particular data types when executed by one or more processors in a computer and/or other data-processing device. The computer-executable instructions can be stored on a computer-readable medium such as a hard disk, optical disk, removable storage media, solid-state memory, read-only memory (ROM), random-access memory (RAM), or the like. As will be appreciated, the functionality of such instructions can be combined and/or distributed as desired. In addition, the functionality can be embodied in whole or in part in firmware and/or hardware equivalents, such as integrated circuits, application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), or the like. Particular data structures can be used to implement one or more aspects of the disclosure more effectively, and such data structures are contemplated to be within the scope of computer-executable instructions and/or computer-usable data described herein.

[0226] Although not required, one of ordinary skill in the art will appreciate that various aspects described herein can be embodied as a method, system, apparatus, and/or one or more computer-readable media storing computer-executable instructions. Accordingly, aspects can take the form of an entirely hardware embodiment, an entirely software embodiment, an entirely firmware embodiment, and/or an embodiment combining software, hardware, and/or firmware aspects in any combination.

[0227] As described herein, the various methods and acts can be operative across one or more computing devices and/or networks. The functionality can be distributed in any manner or can be located in a single computing device (e.g., server, client computer, user device, or the like).

[0228] Aspects of the disclosure have been described in terms of illustrative embodiments thereof. Numerous other embodiments, modifications, and/or variations within the scope and spirit of the appended claims can occur to persons of ordinary skill in the art from a review of this disclosure. For example, one of ordinary skill in the art can appreciate that the steps depicted and/or described can be performed in other than the recited order and/or that one or more illustrated steps can be optional and/or combined. Any and all features in the following claims can be combined and/or rearranged in any way possible.
