Prompt Design
Learn the ins and outs of writing great prompts to get the most out of Muse.
The use of ambiguous, disrespectful, racist, or otherwise inappropriate vocabulary can lead to inappropriate generations. Please use common sense when generating text. LightOn is not responsible for improper use of the Muse API.
Whether you want to write an article, answer questions, or classify customer reviews, it all starts with a prompt, i.e. the input text that is submitted to the model and conditions the outputs returned. The prompt primes the model to follow given instructions or perform given tasks, and is crucial to obtaining good results.
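To make the role of the prompt concrete, here is a minimal sketch of assembling a request payload around it. The endpoint URL and parameter names below are assumptions for illustration only, not the documented Muse API; check the API reference for the real values.

```python
import json

# Assumed endpoint -- not taken from the official documentation.
API_URL = "https://api.lighton.ai/muse/v1/create"

def build_request(prompt: str, n_tokens: int = 64) -> dict:
    """Build the JSON payload: the prompt is the input text that
    conditions everything the model generates."""
    return {
        "text": prompt,                    # the prompt itself
        "params": {"n_tokens": n_tokens},  # assumed length parameter
    }

payload = build_request("Write an Instagram ad for a beach resort.")
print(json.dumps(payload, indent=2))
```

Everything in the rest of this guide is about crafting the `text` field: the better the prompt, the better the completion.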
In this guide, we review the different types of prompts that can be used with Muse, using lyra-en as an example. If you're interested in designing prompts in French using lyra-fr, check our Construction de Prompts guide: most prompt design advice transfers across languages, but a few specificities can help you extract every last percent of performance!
Give instructions
Write a description, in natural language, of the task you want Muse to perform.
For example, we can ask Muse to generate an Instagram ad for a resort.
Write as detailed a task description as possible. Muse will use the details to perform the task at its best.
Bad
Better
In the first prompt, we offer little detail on the resort that we are trying to advertise. As a result, the advertisement is generic: Muse talks about the Mediterranean sea, which might not be the location of the resort, and repeats the word atmosphere, as that is the only item Muse could pick up from the prompt. Compare this with the result from the second prompt, where we provided details on the location and the main selling points of the resort: the generated advertisement talks about the Philippines, scuba diving, and luxurious villas. The overall quality of the generation is improved.
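The two prompts discussed above can be sketched as follows; the exact wording of the original examples is not reproduced here, so these strings are illustrative reconstructions based on the details mentioned in the text.

```python
# Vague prompt: "atmosphere" is the only detail Muse can pick up on.
bad_prompt = "Write an Instagram ad for a resort with a great atmosphere."

# Detailed prompt: location and selling points are spelled out.
better_prompt = (
    "Write an Instagram ad for a luxury resort in the Philippines. "
    "Highlight its luxurious private villas, the white-sand beach, and "
    "the scuba-diving excursions on the nearby coral reef."
)

# The detailed prompt hands Muse concrete items (location, selling
# points) to reuse, instead of a single vague word.
for detail in ("Philippines", "villas", "scuba"):
    assert detail in better_prompt
```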
French models differ slightly from English models: they prefer full sentences over schematic queries. For more information, see Construction de Prompts.
Add examples
Adding examples to the prompt, especially if the task is complex, improves the generation. Muse will understand and mirror the demonstrated behavior. For example, we can use existing answers to reviews to improve our automated customer service.
Provide examples in your prompt on top of the task description. Adding one or more examples of how you would like the task to be carried out will help Muse perform better. You can separate examples with sequences like *** or ###.
Bad
Better
The prompt on the left offers no example of how to reply; as a result, Muse is confused and generates a similar review instead of a reply. In the prompt on the right, we offer one example of answering a review. Muse picks up on the key elements: thank the client for the review, and ask them to come back soon.
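A one-shot prompt of this kind can be assembled as below, using *** as the separator between the example and the new input. The review texts are invented for illustration.

```python
# Separator between examples; *** and ### both work well.
SEP = "\n***\n"

# One worked example: a review followed by the reply we want mirrored.
example = (
    "Review: Wonderful stay, the staff was lovely!\n"
    "Reply: Thank you so much for your kind review! "
    "We hope to welcome you back very soon."
)

new_review = "Review: Great breakfast and a very comfortable room."

# End with "Reply:" so Muse completes the answer, not another review.
prompt = example + SEP + new_review + "\nReply:"
print(prompt)
```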
Add more examples with more variety
In the previous example, the answer remained fairly generic: it could be used to reply to any positive review. What happens if we provide a negative review? Muse might default to the generic reply for positive reviews, and that is clearly something we want to avoid. To improve the generation quality, we can add more examples with more variety to the prompt.
Provide varied examples. For example, if your task requires responding positively or negatively, include an example of both situations in your prompt.
Bad
Better
In the prompt on the left, Muse defaults to a positive reply even though the review is negative. In the prompt on the right, we offer an additional example of a reply to a negative review, and Muse is now able to reply by picking up on the item that the customer did not like.
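A varied few-shot prompt along these lines can be built as follows, with one positive and one negative example so Muse does not default to the upbeat reply. The review texts are invented for illustration.

```python
SEP = "\n###\n"  # example separator

# One example of each kind of reply: positive and negative.
examples = [
    "Review: Lovely hotel, the pool was amazing!\n"
    "Reply: Thank you for your kind review! We hope to see you again soon.",
    "Review: The room was noisy and the wifi kept dropping.\n"
    "Reply: We are sorry to hear about the noise and the wifi issues. "
    "We are working on fixing both, and we hope you will give us "
    "another chance.",
]

new_review = "Review: The food was cold every single evening."

prompt = SEP.join(examples) + SEP + new_review + "\nReply:"
print(prompt)
```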
Calibrate and order the examples
Imagine that we now want to use Muse to classify reviews as positive or negative. What happens when we add six examples with five positive classifications and only one negative? Or when the split is 50/50, but we put the three negative examples first and the three positive ones last? In both cases, Muse will be biased toward the positive category. Muse picks up on the so-called frequency and recency biases: if a class is more frequent in the prompt, or appears toward the end of it, the model will be biased to classify that way. To correct this, make sure to spread the different kinds of examples evenly throughout the prompt.
Provide balanced examples in your prompts, in a neutral order. Avoid grouping similar examples together or placing them toward the end of the prompt, for example five positive classifications and only a single negative one.
Bad
Better
In the left prompt, five of the six examples are positive reviews, and the only negative example is at the beginning of the prompt. As a result, Muse misclassifies the last review as positive. In the box on the right, we improve the prompt by using three negative and three positive examples, alternating them regularly. Muse now classifies our review correctly!
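A balanced, alternating classification prompt of this kind can be sketched as below: three positive and three negative examples interleaved, so neither class dominates by frequency or recency. The review texts are invented for illustration.

```python
positive = [
    "Gorgeous views and a spotless room.",
    "The staff went out of their way to help us.",
    "Best breakfast buffet we have ever had.",
]
negative = [
    "The shower was broken for our whole stay.",
    "Rude reception and endless check-in queues.",
    "Our room smelled of smoke despite being non-smoking.",
]

# Alternate the two classes to avoid frequency and recency bias.
lines = []
for neg, pos in zip(negative, positive):
    lines.append(f"Review: {neg}\nSentiment: Negative")
    lines.append(f"Review: {pos}\nSentiment: Positive")

# The review we actually want classified goes last.
lines.append("Review: The elevator was out of order every day.\nSentiment:")
prompt = "\n###\n".join(lines)
print(prompt)
```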
Spelling, grammar, formatting
Muse works by completing an input text as if it were a document to continue, and documents rarely change writing style abruptly. If your prompt contains spelling or grammar errors, or inconsistent formatting, completions will also have these problems: it is unlikely that a person who writes with bad grammar will suddenly start writing with correct grammar. Grammar is especially important if your generations require reasoning, as well-reasoned documents that are poorly written are rare.
Avoid spelling errors, grammar errors, and inconsistent formatting if you do not want these issues to appear in Muse completions. Poorly written documents are rarely well-reasoned.
Bad
Better
The quality of the output depends strongly on the quality of your prompt. In particular, the length, vocabulary, and grammar of your prompt will have a crucial influence on your output. Keep this in mind!
Adding explanations
By now you have the basics of good prompt design. If you want to go even further, adding step-by-step explanations to your examples also improves performance. Instead of simply providing examples of inputs and outputs of a task, you also provide the reasoning behind the output in each example. Muse will use this information to provide more accurate completions.
Provide step-by-step explanations in your examples to improve performance. As with task descriptions, the final result will be better if you provide more details.
Bad
Better
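A prompt with step-by-step explanations can be sketched as below: each example pairs the input with the reasoning that leads to the output, and the prompt ends at the reasoning step so Muse explains before it answers. The texts are invented for illustration.

```python
prompt = (
    # Worked example: review, then reasoning, then the final label.
    "Review: The beach was dirty and the staff ignored our requests.\n"
    "Reasoning: The customer complains about cleanliness and about the "
    "service, with no positive remark, so the review is negative.\n"
    "Sentiment: Negative\n"
    "###\n"
    # New input: stop at "Reasoning:" so Muse reasons step by step
    # before producing the label.
    "Review: Stunning sunsets and the chef prepared amazing seafood.\n"
    "Reasoning:"
)
print(prompt)
```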
Experiment with your prompts
As you can see in the examples above, a good prompt is the key to achieving quality results. The more detail you include in the description of a task, and the more examples you provide, the better the generation. Make sure to try out various prompts to see what produces the best results for the task at hand, and check out our examples for inspiration.