Creatures vs. Bots Part 1: Introduction to AI for Business

By: Shane Richmond, Lenovo Pro Community Resident Expert

We all know them. Those little things that disrupt your day and make business harder than it needs to be. In our Creature Discomforts campaign, we visualize these little struggles using our animated creatures, and show how Lenovo Pro helps you overcome them.

In this 6-part series, we look at the overwhelming feeling that comes along with the emergence of a new business disruption trend, in this case: AI. The AI Revolution is coming, and the most successful businesses will be those who learn AI and understand how it can help them achieve unimagined levels of productivity and efficiency. This written series provides an entry point into learning about the current AI landscape and shows how you can use bots to overcome those Creature Discomforts.

Generative artificial intelligence (AI) has scarcely been out of the news for almost 18 months. Tools like ChatGPT have been celebrated as potential workplace saviors one moment and dismissed as marketing hot air the next. The truth, as is often the case, lies somewhere in between.

Generative AI is growing more capable every month, and even as it stands today it can be enormously helpful for many office tasks. If you’re running a multinational corporation, the aggregate benefits could be sizeable. For the typical small or medium enterprise, generative AI is likely to help your staff perform many tasks more quickly. Think of it as a smart intern whose work should be checked carefully.

This is the first of a series of articles exploring how to get the most out of generative AI. We’ll begin by considering what generative AI is and how it works, and then take an overview of some of the things it could do in your business.

Understanding generative AI

Generative AI is a subset of artificial intelligence that produces content - words, pictures, videos and more. Different ‘models’ are used to create different output. Image-generating AI, for example, often uses a Generative Adversarial Network (GAN) model. These are given a huge database of real images and then programmed to analyze it for statistical patterns – a process known as ‘training’. A ‘generator’ then produces new images that a ‘discriminator’ evaluates - a process described as adversarial because one model corrects the other.

We’re mostly focusing on tools that produce words, most of which use large language models (LLMs). These are trained on huge amounts of words - billions and billions in fact, from articles, books, internet messages and more. A trained LLM can take a user’s instruction, or ‘prompt’, discern its underlying intent, and formulate a response based on statistical patterns found in its database.

Generative AI can feel magical. It has convinced some very smart people that they are talking to an intelligent being. In 2022, Blake Lemoine, a Google employee, was put on administrative leave after claiming that the company’s LaMDA AI was sentient, possessed a soul, and deserved the legal rights of a person. What he was experiencing is known as the ELIZA effect: our tendency to project human traits onto computers.

In reality, LaMDA and AI models like it are just word-generating machines. Given a prompt, they predict an appropriate response but nothing that we would describe as ‘intelligence’ is involved. To get a sense of how they work, try this. Open a messaging app on your phone, type the word “I’m” and then look at the suggested words presented by the app. Choose the middle one and repeat that until you have a sentence. I got the following: “I’m going on the boat tomorrow, so I have lots of stuff to get ready.” The system is intelligent enough to create a grammatical sentence that could be true but is actually meaningless.
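The messaging-app experiment above can be sketched in a few lines of code. This is a toy illustration, not how a real keyboard app is built: it learns from a tiny made-up corpus which word most often follows each word (a so-called bigram model), then repeatedly picks the most common follower, just as you did by tapping the suggestion.

```python
from collections import Counter, defaultdict

# A tiny made-up corpus standing in for the app's history of typed messages.
corpus = (
    "i'm going on the boat tomorrow so i have lots of stuff to get ready "
    "i'm going to the shop i'm on the bus i have to get going"
).split()

# Count which word most often follows each word (a 'bigram' model).
followers = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current][nxt] += 1

# Starting from "i'm", repeatedly take the most common next word.
word, sentence = "i'm", ["i'm"]
for _ in range(6):
    if not followers[word]:
        break
    word = followers[word].most_common(1)[0][0]
    sentence.append(word)

print(" ".join(sentence))  # prints: i'm going on the boat tomorrow so
```

The output is grammatical and plausible, but the program has no idea what a boat is. Each word is chosen purely because it was the statistically likeliest follower.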

How generative AI understands

Generative AI is doing something similar, but its ability to work with complex, detailed prompts can produce better results than your messaging app. Its responses might be useful and seem intelligent, but the AI is not self-aware and does not understand the concepts it is discussing. One reason it is so convincing is something known as the transformer model - a key piece of technology powering the LLM.

When you give the AI a prompt, it turns that into ‘tokens’ - whole words or parts of words that it can analyze. It searches its database for tokens that are statistically likely to appear close to the ones in your prompt, as well as ones that seldom appear nearby. ‘Bat’ often appears with ‘ball’ and ‘cave’, for example, but seldom with ‘hedge’ and ‘dog’. The result is a list of values for each word. The AI follows the same process for every word in the sentence, addressing each in sequence.
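The ‘bat’ example can be made concrete with a small sketch. This uses an invented six-sentence corpus rather than the billions of words a real model trains on, and a simple word-window rather than real tokenization, but the idea is the same: count how often words appear near ‘bat’.

```python
from collections import Counter

# A toy corpus; a real model is trained on billions of words.
sentences = [
    "he hit the ball with a bat",
    "the bat flew from its cave",
    "a bat and a ball cost a dollar",
    "the bat sleeps in the cave",
    "the dog ran through the hedge",
    "the dog slept under the hedge",
]

# Count words appearing within 4 positions of 'bat' in any sentence.
window = 4
near_bat = Counter()
for sentence in sentences:
    words = sentence.split()
    for i, w in enumerate(words):
        if w == "bat":
            for j in range(max(0, i - window), min(len(words), i + window + 1)):
                if j != i:
                    near_bat[words[j]] += 1

print(near_bat["ball"], near_bat["cave"])   # often near 'bat': 2 2
print(near_bat["hedge"], near_bat["dog"])   # never near 'bat': 0 0
```

These counts are the raw material for the statistical values the AI assigns to each word: ‘ball’ and ‘cave’ score highly next to ‘bat’, while ‘hedge’ and ‘dog’ score zero.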

The arrival of the transformer, developed by Google engineers in 2017, supercharged the process by allowing the AI to analyze all the words in the sentence at once. That helps it understand context. The whole sentence might make clear that the bat you are talking about is a creature, for example, and the AI can therefore give less statistical weight to the other kinds of bat in its training data.

Today’s LLMs can also use words elsewhere in a piece of text to help determine context. Ask it to summarize a long article about bat species, for example, and it will find words that increase the statistical likelihood you are talking about animals and not sports equipment. Statistical likelihood is key. Where your messaging app predicted only the next word, LLMs understand context well enough to predict the rest of the sentence, paragraph and more.
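A crude sketch of that disambiguation: compare the surrounding text against hypothetical indicator words for each sense of ‘bat’ and pick the sense with more overlap. Real models weigh these signals with learned attention rather than hand-picked word lists, but the statistical intuition is the same.

```python
# Hypothetical indicator words for the two senses of 'bat' (hand-picked
# here; a real model learns such associations from its training data).
animal_words = {"cave", "wings", "nocturnal", "species", "echolocation"}
sports_words = {"ball", "pitch", "innings", "swing", "cricket"}

text = ("several bat species use echolocation to hunt at night, "
        "leaving their cave at dusk on silent wings")

# Score each sense by how many of its indicator words appear in the text.
words = set(text.replace(",", "").split())
animal_score = len(words & animal_words)
sports_score = len(words & sports_words)
sense = "animal" if animal_score > sports_score else "sports equipment"

print(sense)  # prints: animal
```

Words like ‘species’, ‘echolocation’ and ‘cave’ tip the statistical balance toward the animal, so the model gives less weight to the sports-equipment meaning.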

What it can do for your business

My explanation of the transformer model - grossly oversimplified though it is - demonstrates one of the strengths of LLMs: summarizing documents. Give them a report and ask for every section that refers to, say, marketing plans, and they will find the relevant content, even if the word “marketing” is not used specifically. That’s something you can’t do with a manual keyword search. And, crucially, they can remember a conversation, so you can ask follow-ups about the report without sharing it again before every query.

Of course, if you’ve heard anything about these tools then you will know that they can write. Their writing can be a little bland but ask them to write an article and they will give you a solid first draft you can polish and personalize. They are also great for answering questions - often quicker than Google and with the added advantage that you can ask follow-ups if you don’t understand the first answer.

There are downsides, however. Generative AI tools tend to ‘hallucinate’, which is a computer scientist’s way of saying they make things up. This is sometimes described as a side effect of the way they work but it’s more accurate to say that it just is how they work. Everything they produce is hallucination; it’s just that some hallucinations happen to be useful. The idea of information being true or false doesn’t mean anything to a generative AI. A good rule of thumb is to double-check any facts that generative AI gives you, unless you already know them to be true.

Another concern is that they sometimes reproduce content from their training data wholesale, which is plagiarism. If this comes from a copyrighted source and you use it in something generative AI produces for you, then it’s you that will get in trouble. Finally, depending on the country and sector in which your business operates, it’s important to be aware of any regulations restricting your use of AI. If you are in financial services, for example, then its use might be prohibited for certain tasks.

A wide array of AI tools

Nevertheless, generative AI presents a huge opportunity for most businesses - speeding up some tasks and taking over others entirely. The big LLMs, such as OpenAI’s ChatGPT, Anthropic’s Claude and Google’s Gemini, are all similarly capable. The companies behind them constantly update their models and training data to leapfrog rivals, so at any given time one might be slightly better than another, but there isn’t much in it. At the time of writing, Claude is considered slightly better at summarizing documents, for example.

Other services, meanwhile, are adding generative AI capabilities to their tools. Perplexity is an AI-powered search engine that answers your questions with links to its sources and will respond to your follow-up queries. Otter uses AI to transcribe meetings but also has AI chat built in so you can ask it to find, for example, revenue totals for the last quarter from your latest earnings call. Meanwhile, Notion is an immensely flexible service that can be used as a notebook, database, project management tool and much more. The addition of AI means it can find key information in your notes and help you write new ones more quickly.

And generative AI doesn’t just produce words. As mentioned above, there are systems that produce images, such as DALL-E, from OpenAI. Adobe, the maker of Photoshop, has added AI-created video to its Adobe Stock service, as well as AI-driven photo editing in Photoshop. Descript lets you edit a video by editing the script: delete a line and it will vanish from your clip. Then there’s HeyGen, which can change the language of a video and make it look like the same speaker is delivering the words.

There hasn’t even been space in this article to discuss generative AI for coding, data analytics, customer service or brainstorming. As the technology advances, which it is doing rapidly, we will see new use cases emerge and existing ones get even better. This is just the beginning.

Let us know your thoughts on this article in the comments below.

Stay tuned for our next part in the “Creatures vs Bots: How AI Can Help You Overcome Creature Discomforts” series: A Deep Dive into ChatGPT.