
Generative AI - the next frontier


Historical perspective

Predictive AI

Perhaps the most popular and widely used branch of AI is the so-called “predictive AI”.

Predictive AI is used to make predictions about the future or to classify data based on input data. Typically this involves training a model on historical data and then applying it to new data. The new data is unseen by the model and may have a future context, i.e. refer to future dates. The training data depends on the task the model will be used for and is unique and specific to it. To train a predictive model, the data must be labelled with an output label or value.
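The workflow above can be sketched in a few lines: a model is fitted on labelled historical data, then applied to a new, unseen input. The toy data and the 1-nearest-neighbour rule are illustrative assumptions, not a production approach.

```python
def predict(train_X, train_y, x):
    """Classify x by the label of its nearest training point (1-NN)."""
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    # Find the labelled historical example closest to the new input.
    nearest = min(range(len(train_X)), key=lambda i: dist(train_X[i], x))
    return train_y[nearest]

# Historical data: each input carries an output label, as training requires.
train_X = [(1.0, 1.0), (1.2, 0.8), (8.0, 9.0), (9.0, 8.5)]
train_y = ["low", "low", "high", "high"]

# A new, unseen data point: the model predicts its class.
print(predict(train_X, train_y, (8.5, 9.2)))  # -> high
```

The key property is the split: the label comes only from the labelled training set, never from the new input itself.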

Generative AI

Generative AI is the branch of AI used to generate new data based on patterns learned from training data. Typical use cases involve image and text generation, where the model creates new images or text similar to the training data. The training data is generic rather than task-specific, and the model can generate data it has never seen. For this, the model requires unlabelled data, such as images, text or audio, from which it learns patterns. Typically, a model is “pre-trained” to learn generic features from a large dataset before being “fine-tuned” on a smaller, more specific dataset.
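The generative idea can be illustrated at toy scale: learn patterns (here, word-bigram statistics) from unlabelled training text, then sample new text that resembles it. Real generative models use pre-trained neural networks at vastly larger scale; this Markov-chain sketch only shows the principle of generating from learned patterns.

```python
import random
from collections import defaultdict

def train(text):
    """Learn which words follow which in the (unlabelled) training text."""
    model = defaultdict(list)
    words = text.split()
    for current, nxt in zip(words, words[1:]):
        model[current].append(nxt)
    return model

def generate(model, start, length=8, seed=0):
    """Sample a new word sequence from the learned statistics."""
    rng = random.Random(seed)  # seeded for reproducibility
    out = [start]
    for _ in range(length - 1):
        followers = model.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = "the model learns patterns and the model generates new text"
model = train(corpus)
print(generate(model, "the"))
```

Every word the sampler emits was seen in training, but the sequence itself can be new, which is the essence of generating data "similar to" the training data.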

Generative AI powered use cases

As with any science or practice applied in a business environment, Generative AI must prove that it can deliver operational and cost efficiencies and create new business value.

The main tasks where Generative AI seems to bring value are summarisation, classification, question answering and content creation.

Summarisation and relevant content retrieval

This can be around:

- Summarising presentations into a digestible and graphical format

- Recommending products or content from a catalogue by finding the most relevant article or product

- Searching documents for relevant text and providing summaries of it
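A rough sketch of extractive summarisation, the simplest form of the use cases above: score each sentence by the average frequency of its words in the document and keep the top one. The scoring heuristic and the example document are simplifying assumptions; production systems use generative models rather than word counts.

```python
from collections import Counter

def summarise(text, n_sentences=1):
    """Rank sentences by average word frequency; keep the top n."""
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    # Document-wide word frequencies, with punctuation and case normalised.
    freq = Counter(w.strip(".").lower() for w in text.split())
    def score(sentence):
        words = [w.lower() for w in sentence.split()]
        return sum(freq[w] for w in words) / len(words)
    return ". ".join(sorted(sentences, key=score, reverse=True)[:n_sentences]) + "."

doc = ("The quarterly report shows strong growth. "
       "Growth was driven by the new product line. "
       "An unrelated note mentions office furniture.")
print(summarise(doc))  # -> The quarterly report shows strong growth.
```

Sentences built from the document's most common words rank highest, so off-topic material (the furniture note) is dropped from the summary.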

Dialogues and conversations

This can be around:

- Question answering driven by an application using internal sources

- Automated answering of customer service questions about services and products

- Effective information retrieval from a website via conversational queries
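A minimal sketch of the retrieval step behind these conversational use cases: return the internal snippet sharing the most words with the user's question. The snippets are invented examples, and real systems pair retrieval with a generative model that phrases the final answer.

```python
# Hypothetical internal knowledge snippets (invented for illustration).
snippets = [
    "Our premium plan costs 20 euros per month and includes support.",
    "Orders are delivered within three working days.",
    "Refunds are processed within 14 days of the return request.",
]

def answer(question):
    """Return the snippet with the greatest word overlap with the question."""
    q_words = set(question.lower().strip("?").split())
    def overlap(snippet):
        s_words = set(snippet.lower().strip(".").split())
        return len(q_words & s_words)
    return max(snippets, key=overlap)

print(answer("How fast are orders delivered?"))
```

Even this crude overlap score routes a conversational query to the relevant source; modern systems replace it with semantic (embedding-based) similarity.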

Content creation

Perhaps the most artistic and exciting aspect. This can involve:

- Video summary based on extensive video footage

- Generation of new documents and articles based on summaries of source documents

- Generation of new images, based on descriptive text inputs

- Creation of new marketing campaigns and content used in marketing

- Code generation from text input

In terms of candidate users of Generative AI, these can be grouped into the following categories:

- Analysts, who combine structured and unstructured data (text, images, audio) and can reduce time-to-value by producing insights that were not possible before.

- Customer service teams, whose online interactions can be transformed to become more natural and personal, improving customer experience by serving customers at a higher rate.

- Creative groups, who can produce new content far more easily. Text, images, audio and software code can be created more easily and at scale, reducing the time needed to create new content.

- AI scientists, who can customise large models and integrate them into internal business models and processes. Software products can now specialise in a domain and provide bespoke value to it; for example, marketing tools can generate custom content for a campaign.

Evolution of Language Models

The evolution and comparison of models over time is mainly measured by the number of parameters used to train a model. Since the invention of the encoder/decoder Transformer architecture in 2017, there has been an exponential rise in model parameter counts. The Generative Pre-trained Transformer-1 (GPT-1, 2018) used roughly 10^8 (117 million) parameters, while GPT-3, the model originally behind ChatGPT, used around 1.75 × 10^11 (175 billion). Some of these models are offered as cloud services by major vendors in the form of APIs.

The case for foundation models

The challenges of training large models became apparent with the rising computational cost over time, as well as the need to adapt them for businesses. Foundation models serve as an intermediate layer: they are trained on unstructured data and can then be adapted to serve a wide range of tasks, for example extracting information from text, recognising objects, or performing question answering. Without foundation models, each model is specific to one task, such as retrieving information from text. A single foundation model, however, can perform multiple tasks: summarising a thousand-word text, providing instructions on how to perform a task, or suggesting ideas on how to spend the weekend. This versatility comes with less accurate results, which raises the issue of assessing the risk of potential inaccuracies.

Examples of foundation models are GPT-4, PaLM, DALL-E 2 and Stable Diffusion.

Foundation models

What makes foundation models more generic than common deep learning neural networks is their ability to perform multiple functions, owing to the broad training data they have ingested and learned from. A foundation model trained on a variety of company data could potentially be used both to answer customer questions about a product and to help in the development of that same product.

The case for high quality training data

The quality of the training data seems to be the differentiating factor when training models. This is because increasing the number of model parameters is directly linked to increasing computational cost and the cost of serving the model's output. This can be partially offset by using high-quality training data, which can improve the model further.

A laconic summary
