We all know them. Those little things that disrupt your day and make business harder than it needs to be. In our Creature Discomforts campaign, we visualize these little struggles using our animated creatures and show how Lenovo Pro helps you overcome them.
In this 6-part series, we look at the overwhelming feeling that comes with the emergence of a new business disruption trend - in this case, AI. The AI revolution is coming, and the most successful businesses will be those that learn AI and understand how it can help them achieve unimagined levels of productivity and efficiency. This written series provides an entry point into learning about the current AI landscape and shows how you can use bots to overcome those Creature Discomforts.
By: Shane Richmond, Lenovo Pro Community Resident Expert
---
Perhaps the most significant thing about generative artificial intelligence (AI) is the chat interface. There were AI tools before ChatGPT became publicly available in November 2022, but working with them required programming skills or a more limited user interface. By allowing any user to enter text in natural language, ChatGPT made AI accessible to anyone with a computer and an internet connection. Other tools soon unveiled similar interfaces.
These tools are known as Large Language Models (LLMs) because they are trained to analyse and find patterns in masses of written data. They use that data to produce new text and understand user queries, known as "prompts". Although LLMs will do their best with any input, better prompts get better results.
When I was a kid all the local dads - and it was always the dads - spent some of their weekend working underneath their car or under the hood. Early 80s cars weren't that reliable, so everyone needed a little mechanical knowledge to keep their vehicle on the road. Today's generative AI reminds me of that. Like the early 80s home mechanics, we need to know a little of what's called "prompt engineering".
It can get pretty strange. For example, users are experimenting with whether the AI delivers better results if they promise a cash tip or even bully the AI. Ask the AI to write its own prompts and things get even weirder. A recent study asked an LLM to create prompts to solve 50 math problems. The most successful one began: "Command, we need you to plot a course through this turbulence and locate the source of the anomaly. Use all available data and your expertise to guide us through this challenging situation."
That hints at prompt engineering's future - where you tell the AI what you want to accomplish, and the machine writes its own prompt - weird though it may be. Increasingly, LLMs are also adding pre-written prompts for users to select. But those solutions are still developing. To get the best from generative AI now, it's worth knowing some prompt engineering. Here's a five-step guide to better prompts:
1. Provide context
As mentioned, LLMs can draw on a massive database, so if you submit a general question, such as "what should I do on a trip to Japan?", you will get the most general answers. Every generative AI response is shaped by probability: the likelihood that the words generated will satisfy your prompt. Essentially, giving the AI more detail tells it to narrow its database search. For example, telling it you are going to Tokyo will reduce the probability of the AI suggesting you visit Miyajima, 500 miles away. You can give further nudges by telling the AI what time of year you will be visiting or what kind of things you like to do. It will be very unlikely to suggest winter activities for a summer visit, for example, and very likely to suggest family activities if you tell it that you are taking children.
Another common trick is to have the AI assume a persona. You will get different results from telling it "you are a professor", "you are an editor", "you are Steve Jobs" or "you are Shakespeare". You can probably imagine how: each persona will have a slightly different style - or very different in Shakespeare's case. Each will draw on different expertise and emphasize different points. The professor might be more inclined to rely on detailed evidence, while the editor might prioritize grabbing the reader's attention. There's no right or wrong here - it depends on the problem you are trying to solve.
If you don't want to invent a role for the AI to play, you can get a similar effect by specifying an audience for your content. Consider the different results you would expect if you asked, "explain AI to me like I'm eight years old", rather than "explain AI to someone with a doctorate in computer science". Again, these differences give the AI clues as to which part of its database is likely to contain a good answer. In the first case, it is more likely to search its training data for articles written for children and try to emulate their style and tone.
As you can see, your prompt could include all kinds of context, but the last one we will consider is output. The simplest interaction with a tool like ChatGPT is to ask a question, which the AI recognizes as requiring a response in the form of an answer. Other kinds of response must be specified, or the AI will just guess. You could ask for "things to do in Tokyo in July" and get a response that is a single piece of writing, or bullet points, or a table. You could ask for the information organized by cost, by neighborhood, by opening times, and so on. Taking a moment to consider the most useful form of response is worth doing before you submit your prompt.
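If you find yourself reusing the same kinds of context - persona, audience, output format, extra detail - it can help to assemble them systematically. Here's a minimal Python sketch of that idea; the function name and the example values are illustrative, not part of any particular AI tool:

```python
def build_prompt(task, persona=None, audience=None, output_format=None, details=None):
    """Assemble a prompt from the context elements discussed above:
    a persona, the task itself, extra detail, an audience, and an output format."""
    parts = []
    if persona:
        parts.append(f"You are {persona}.")
    parts.append(task)
    if details:
        parts.append("Additional context: " + "; ".join(details))
    if audience:
        parts.append(f"Write the answer for {audience}.")
    if output_format:
        parts.append(f"Format the response as {output_format}.")
    return "\n".join(parts)

prompt = build_prompt(
    "Suggest things to do in Tokyo in July.",
    persona="an experienced travel writer",
    audience="a family travelling with young children",
    output_format="a bullet-point list organized by neighborhood",
    details=["visiting for five days", "mid-range budget"],
)
print(prompt)
```

The point isn't the code itself - it's that each optional argument maps to one of the nudges described above, so you can see at a glance which context you have and haven't supplied.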
2. Give examples
Earlier in this series of articles I wrote that the AI does not understand the concepts it is discussing. It doesn't know your business, how you work or what you like. It just generates words based on a very sophisticated analysis of its training data. Those words are most likely to be useful if you provide context, as above, but you can help the AI further by giving examples. In technical terms, asking the AI to perform a task without examples is known as "zero-shot" prompting, while providing several examples is "few-shot" prompting.
Examples could include uploading a report that you want the AI to emulate - most of the big LLMs will allow you to attach files to a prompt. Or it could mean saying you want the AI to avoid jargon and then providing three or four examples of what you consider to be jargon.
At this point, you are probably thinking these prompts are going to be long, which is true. Indeed, some of the self-styled prompt engineering gurus talk about "mega prompts", which often include everything mentioned so far, with each part - style, audience, persona, etc. - labelled. For the average user, mega prompts are probably overkill. However, one thing you should take from them is the practice of labelling examples. An easy way to do that is literally to write "here are the examples", then list them as bullet points.
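That labelling practice is mechanical enough to sketch in a few lines of Python. This is just an illustration of the structure - the function name and sample content are made up for this example:

```python
def few_shot_prompt(instruction, examples, query):
    """Build a few-shot prompt: an instruction, a labelled list of
    examples, then the actual query."""
    lines = [instruction, "", "Here are the examples:"]
    lines += [f"- {example}" for example in examples]
    lines += ["", query]
    return "\n".join(lines)

jargon_prompt = few_shot_prompt(
    "Rewrite the following paragraph without jargon.",
    ["synergize", "leverage (as a verb)", "circle back"],
    "Paragraph: We should circle back and leverage our synergies.",
)
print(jargon_prompt)
```

Here the examples are the jargon terms to avoid, as in the scenario above; the same shape works for any few-shot prompt.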
3. Ask for an explanation
One problem with mega prompts is that LLMs can get confused partway through, or simply ignore parts of the prompt and focus on others. For example, one mega prompt I found online contained the following section: "VOICE AND STYLE: clear, engaging, data-driven, support insights with examples and research, conversational tone to make complex ideas easy to understand, figurative, challenge common beliefs, encourage readers to question assumptions." Can the AI adhere to all those requirements? Will it try to apply them to the whole piece or to every sentence? If it can't apply them, which will it prioritize? I don't know how the AI will handle such complexity, so my inclination is to keep prompts as simple as possible.
That said, one way to manage longer prompts is to use a technique called Chain of Thought (CoT) prompting. LLMs sometimes get confused during multi-step tasks but simply telling them to approach the task step-by-step significantly improves performance. At its simplest, this just involves ending your prompt: "Let's think step-by-step." If your task is particularly complicated, then providing examples of the step-by-step process the AI should use can help even more.
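The CoT idea above boils down to appending a suffix, optionally with numbered example steps. A minimal sketch, with an invented function name and a made-up task for illustration:

```python
def with_chain_of_thought(prompt, steps=None):
    """Append a Chain of Thought instruction to a prompt, optionally
    listing numbered example steps for the AI to follow."""
    suffix = "Let's think step-by-step."
    if steps:
        numbered = "\n".join(f"{i}. {step}" for i, step in enumerate(steps, 1))
        suffix += "\nFor example:\n" + numbered
    return f"{prompt}\n\n{suffix}"

cot = with_chain_of_thought(
    "A ticket costs 12 dollars and I buy 3 tickets with a 10% discount. "
    "What do I pay?",
    steps=[
        "Find the undiscounted total",
        "Apply the discount",
        "State the final amount",
    ],
)
print(cot)
```

For simple tasks, the bare "Let's think step-by-step." suffix (no `steps` argument) is usually enough.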
4. Check the response
There are two brief notes to add to the above. First, don't assume that because you've put so much thought into your prompt, you will get a perfect response. The AI is still trying to offer the most probable solution to your request and is just as prone to "hallucinations" as ever. Check any facts and data in its output to be sure that it hasn't invented anything.
5. Experiment!
Second, the element of probability in LLM responses means that prompt engineering is far from an exact science. Running the same prompt twice will get different responses, for example. So, you should experiment and see what works for you. Save useful prompts somewhere so you can use them again - but don't forget to try new approaches regularly. You might find something that works even better.
In time, I suspect that our interactions with LLMs will involve choosing from pre-selected prompts such as "I want to edit the following article" or describing the task so the AI can write its own prompt. For now, though, we are all tinkering under the hood of these machines, trying to get them to work efficiently. It's frustrating at times, confusing at others, but it can also be engaging and productive. Roll up your sleeves and get started!
Let us know your thoughts on this article in the comments below and stay tuned for our next part in the "Creatures vs Bots: How AI Can Help You Overcome Creature Discomforts" series: a deep dive into using generative AI for image and media.