With large language model (LLM) products such as ChatGPT and Gemini taking over the world, we need to adjust our skills to follow the trend. One skill we need in the modern era is prompt engineering.
Prompt engineering is the strategy of designing effective prompts that optimize the performance and output of LLMs. By structuring the input, prompt engineering aims to provide relevant and high-quality results.
There are many standard prompt engineering techniques, such as zero-shot, few-shot, and chain-of-thought, but this article will explore various advanced techniques that you might not have heard of previously.
With that in mind, let’s get into it.
1. Meta Prompting
Meta prompting is a prompt engineering technique in which an LLM generates and refines prompts for other LLMs, including itself. We write a high-level prompt, and the prompt-refining LLM produces a much more specific and effective input for us.
The technique works by treating the prompt itself as the output. Its defining feature is that it lets the LLM generate a prompt, interpret it, and adapt it based on feedback, effectively enabling the model to self-reflect.
Meta prompting has the advantages of automating the production of effective prompts, adapting easily to complex tasks, and improving the quality of the LLM's output.
It also presents specific challenges. While it is a good technique for guiding a structured response, it relies on the LLM's inherent knowledge base to produce the prompt. For specialized tasks about which the LLM has little innate knowledge, performance may suffer.
An example of meta prompting is shown below. Here is the initial prompt:
Generate a detailed prompt that can help a language model write a persuasive essay. The generated prompt should include instructions on structure, tone, and essential elements to be covered in the essay.
The generated meta prompt is shown below:
Write a persuasive essay arguing why renewable energy should replace fossil fuels as the primary energy source. Structure the essay as follows:
Introduction: Start with a compelling hook and provide a clear thesis statement.
Body Paragraph 1: Discuss environmental benefits, using evidence and examples.
Body Paragraph 2: Address economic advantages, such as job creation and cost savings.
Body Paragraph 3: Counter potential opposing arguments, such as feasibility or cost concerns, and refute them logically.
Conclusion: Summarize key points and end with a powerful call to action. Use a confident and persuasive tone throughout.
With the power of LLMs trained to produce better prompts, meta prompting can definitely help you improve the output from LLMs.
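To make this concrete, here is a minimal sketch of a two-step meta prompting loop in Python using the OpenAI SDK. The ask() helper and the model name are assumptions for illustration, not part of the original example; any chat-capable model works the same way.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

def ask(prompt: str) -> str:
    # The model name is an assumption; any capable chat model works.
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Step 1: a high-level meta prompt that asks the model to write a prompt.
meta_prompt = (
    "Generate a detailed prompt that can help a language model write a "
    "persuasive essay. Include instructions on structure, tone, and "
    "essential elements to cover."
)
generated_prompt = ask(meta_prompt)

# Step 2: feed the generated prompt back in to produce the final essay.
print(ask(generated_prompt))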
2. Least-to-Most Prompting
Least-to-most prompting (LtM) is a prompt engineering technique that improves an LLM's ability to handle complex problems by decomposing them into smaller sub-problems. The method lets the LLM generate more accurate and comprehensive output because the prompt addresses the problem sequentially, guiding the model to the final answer step by step.
LtM is a great method to use if you already have a solution in mind and want the model to follow those steps. It also keeps the model on track so it does not wander into explanations that are irrelevant to the problem, which is especially useful for complex problems in a specific domain.
The catch is that you need to understand the problem you want to solve and state it correctly in the prompt. You also need to decompose the problem properly, because errors compound if the steps are wrong from the outset.
An example of LtM prompting is shown below:
Question: How many unique words are in the sentence “The quick brown fox jumps over the lazy dog”? Let’s break down this problem:
1. Identify all the words in the sentence.
2. Determine which words are unique.
3. Count the number of unique words.
The model then follows the steps above and produces the final output:
Answer: 8 unique words
You might want to try this technique if you already know what you expect and want the LLM to work through it efficiently.
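If you want to script this pattern, the sketch below walks the model through the sub-problems in order, feeding each answer into the next step. It reuses the ask() helper defined in the meta prompting sketch above; the exact wording of the step instructions is just one reasonable choice.

# Reuses the ask() helper from the meta prompting sketch above.
question = (
    'How many unique words are in the sentence '
    '"The quick brown fox jumps over the lazy dog"?'
)
steps = [
    "Identify all the words in the sentence.",
    "Determine which words are unique.",
    "Count the number of unique words.",
]

context = f"Question: {question}"
for step in steps:
    # Each sub-problem sees the question plus all previous answers.
    reply = ask(f"{context}\n\nNext step: {step}\nAnswer this step only.")
    context += f"\nStep: {step}\nAnswer: {reply}"

print(context)  # the final step's answer resolves the original question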
3. Multi-Task Prompting
Multi-task prompting is a prompt engineering technique in which one prompt is designed to perform multiple tasks simultaneously. The prompt allows the LLM to handle interconnected actions within a single run.
A multi-task prompt is designed to integrate all the tasks and build the output from each of them efficiently. Clearly defining the prompt and separating the tasks is the key to success with this technique.
The main advantage of multi-task prompting is efficiency: the prompt addresses related tasks together, so context is maintained across them. It also helps the model manage complex scenarios, especially those involving several requirements at once.
Multi-task prompting does require an LLM that can handle higher-complexity input, since the prompt carries multiple tasks. Also, the more tasks you assign to one prompt, the higher the chance that output accuracy decreases.
An example of multi-task prompting is shown below:
Analyze the sentiment of the following customer review and summarize its main points: ‘I recently purchased the new smartphone model, and while I appreciate its sleek design, the battery life is disappointing, and the camera quality is subpar.’
The output of the prompt above looks like this:
Sentiment Analysis:
The sentiment of the review is mixed, leaning slightly negative. The reviewer acknowledges a positive aspect (sleek design) but expresses dissatisfaction with two critical features (battery life and camera quality).
Summary of Main Points:
Positive: The design of the smartphone is sleek and appreciated.
Negative:
The battery life is disappointing.
The camera quality does not meet expectations.
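If you are building such prompts programmatically, one simple convention is to number and label each task so the model addresses them explicitly. A minimal sketch of that idea, reusing the ask() helper from the meta prompting example, is shown below.

# Reuses the ask() helper from the meta prompting sketch above.
review = (
    "I recently purchased the new smartphone model, and while I appreciate "
    "its sleek design, the battery life is disappointing, and the camera "
    "quality is subpar."
)
tasks = [
    "Analyze the sentiment of the review.",
    "Summarize its main points as positives and negatives.",
]

# Numbering and separating the tasks helps the model address each explicitly.
prompt = f"Customer review: '{review}'\n\n" + "\n".join(
    f"Task {i}: {task}" for i, task in enumerate(tasks, start=1)
)
print(ask(prompt))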
4. Role Prompting
Role prompting is a prompt engineering technique in which we ask an LLM to adopt a specific role or persona, such as a teacher, mechanic, or scientist, to guide its output.
By assigning the role, we can expect a certain style and focus in the model's response. This allows the model to keep its answers relevant and draw on the domain-specific knowledge the role implies.
It's a good technique when we need more relevant answers that fit a specific domain. However, we need to be careful: the model can perpetuate stereotypes associated with certain roles, and a role is only effective if the model was trained on diverse enough data to emulate it well.
Here is an example of role prompting.
As a historian, provide an overview of the causes and consequences of the Industrial Revolution.
The short version of the output is shown below.
The Industrial Revolution was a transformative period in history that began in the late 18th century, primarily in Britain, before spreading to other parts of the world. It marked the shift from agrarian and handcrafted economies to industrialized and mechanized systems of production.
The output can be much longer, but what matters is that the model adopts a role that guides its response.
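In a chat API, the persona is usually set in a system message rather than inline in the user prompt, so it persists across turns. Here is a minimal sketch of that pattern; the model name is an assumption.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # model choice is an assumption
    messages=[
        # The system message assigns the role for the whole conversation.
        {"role": "system", "content": "You are a historian."},
        {
            "role": "user",
            "content": "Provide an overview of the causes and consequences "
                       "of the Industrial Revolution.",
        },
    ],
)
print(response.choices[0].message.content)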
5. Task-Specific Prompting
Task-specific prompting is a prompt engineering technique that tailors the prompt to a particular type of task. The prompt design includes specific instructions and context that follow the task's requirements.
This technique results in a prompt that reflects a deep understanding of the intended task while still leveraging the LLM's creativity. It's a good choice when you already know what you want done and, provided you give a clear enough explanation and output format, do not want the model to deviate from it.
The technique is great for precise output and efficient when you have a definite objective. However, you need to craft the prompt clearly for the model to perform the task well.
An example prompt looks like the following:
Task: Code Debugging
Analyze the following Python code snippet and identify any bugs or inefficiencies.
Your analysis should:
1. Clearly describe what the code is intended to do.
2. Point out any issues or errors in the code, explaining why they occur.
3. Suggest solutions or improvements to fix the issues or optimize the code.
4. Provide a corrected or improved version of the code, if applicable.

Code:
def calculate_average(numbers):
    total = 0
    for i in range(len(numbers)):
        total += numbers[i]
    return total / len(numbers)
Analysis:
The result will follow the task as outlined in the prompt above.
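For reference, one plausible corrected version the model might return looks like this: the manual index loop is replaced with the built-in sum(), and the empty-list case, which would otherwise raise a ZeroDivisionError, is handled explicitly. This is an illustration of the expected output, not the only valid answer.

def calculate_average(numbers):
    """Return the arithmetic mean of a list of numbers."""
    if not numbers:
        # Guard against division by zero on an empty list.
        raise ValueError("numbers must not be empty")
    # sum() is clearer and faster than indexing in a manual loop.
    return sum(numbers) / len(numbers)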
6. Program-Aided Language Models (PAL)
Program-aided language models (PAL) is a prompt engineering technique that integrates an external programming environment to solve the intended task. Unlike standard chain-of-thought prompting, the intermediate reasoning is expressed not as free-form text but as code executed in a runtime environment such as Python.
This method lets the LLM translate the input into structured program steps and rely on an external interpreter to solve the problem. It's a great technique when programming capability is needed to solve the problem.
The only requirement is that the model can access an external programming tool. You can either use a product with one built in, like ChatGPT, or wire it up yourself.
An example prompt for the PAL is shown below.
# Q: Sarah has $150 in her bank account. She deposits 200 more, and then she withdraws 50. How much money does she have now?
# Sarah started with 150 in her bank account.
initial_balance = 150
# She deposits 200.
deposit = 200
# She withdraws 50.
withdrawal = 50
# The answer is the final balance in her account.
answer = initial_balance + deposit - withdrawal
print(answer)

# Q: A train travels 120 miles in 3 hours. What is its average speed in miles per hour?
# The train travels 120 miles in 3 hours.
distance = 120
time = 3
# The average speed is distance divided by time.
average_speed = distance / time
print(average_speed)

# Q: Alice bought 12 packs of pencils. Each pack contains 8 pencils. She gave away 35 pencils to her classmates. How many pencils does she have left?
The output will be similar to this:
To calculate how many pencils Alice has left, we need to follow these steps:
1. Calculate the total number of pencils Alice bought.
2. Subtract the number of pencils she gave away.
Here’s the solution in code:
# Alice bought 12 packs of pencils. Each pack contains 8 pencils.
packs = 12
pencils_per_pack = 8

# Calculate the total number of pencils.
total_pencils = packs * pencils_per_pack

# She gave away 35 pencils.
pencils_given_away = 35

# Calculate how many pencils she has left.
pencils_left = total_pencils - pencils_given_away
print(pencils_left)
It's a good way to give the model direction and get output structured as executable code.
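The part PAL adds on top of the prompt is the external interpreter. A deliberately naive sketch of that loop is below, reusing the ask() helper from the meta prompting example: the model writes the code and Python executes it. Note that running model output through exec() is unsafe outside a sandbox; this is for illustration only.

# Reuses the ask() helper from the meta prompting sketch above.
question = (
    "Alice bought 12 packs of pencils. Each pack contains 8 pencils. "
    "She gave away 35 pencils to her classmates. "
    "How many pencils does she have left?"
)
code = ask(
    "Solve the following problem by writing Python code only, assigning "
    f"the final result to a variable named answer:\n{question}"
)

# Models often wrap code in markdown fences; strip them before executing.
code = code.strip()
if code.startswith("```"):
    code = code.strip("`").removeprefix("python").strip()

# WARNING: exec() on untrusted model output is unsafe; sandbox it in practice.
namespace = {}
exec(code, namespace)
print(namespace["answer"])  # expected: 12 * 8 - 35 = 61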
7. Chain-of-Verification (CoVe) Prompting
Chain-of-verification (CoVe) is a prompt engineering technique that improves LLM accuracy by systematically verifying the output and then refining it. It's a method designed specifically to address hallucination issues.
It works via four different steps, as outlined below:
1. The LLM produces initial answers from the input.
For example:
Three famous inventions by Nikola Tesla are the Tesla Coil, AC (Alternating Current) system, and the radio.
2. From the response above, the model generates specific questions to verify the accuracy of its answer.
The model generates verification questions for each claim:
“Was the Tesla Coil invented by Nikola Tesla?”
“Did Nikola Tesla invent the AC system?”
“Did Nikola Tesla invent the radio?”
3. The model then answers these questions independently to verify its original claims.
“Yes, the Tesla Coil was invented by Nikola Tesla in 1891. It is a resonant transformer circuit used to produce high-voltage, low-current electricity.”
“Nikola Tesla significantly contributed to the development and popularization of the AC system, but he did not single-handedly invent it. AC systems were the result of collaborative efforts.”
“Although Tesla made significant contributions to radio technology, the invention of the radio is primarily credited to Guglielmo Marconi.”
4. The model then integrates the verification answers to refine its original output.
Three famous inventions by Nikola Tesla are the Tesla Coil, his contributions to the AC system, and advancements in radio technology (though the radio’s invention is credited to Guglielmo Marconi).
Using this technique, the model's answer is verified end to end and refined, improving output accuracy.
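Wired end to end, the four steps map onto four model calls. The sketch below reuses the ask() helper from the meta prompting example and passes each stage along as plain text; the prompt wording at each stage is an assumption, not a fixed recipe.

# Reuses the ask() helper from the meta prompting sketch above.
question = "Name three famous inventions by Nikola Tesla."

# Step 1: draft an initial answer.
draft = ask(question)

# Step 2: generate one verification question per claim in the draft.
checks = ask(f"Write one factual verification question per claim in:\n{draft}")

# Step 3: answer the verification questions independently of the draft.
findings = ask(f"Answer each question accurately:\n{checks}")

# Step 4: revise the draft so it is consistent with the verified findings.
final = ask(
    f"Question: {question}\nDraft answer: {draft}\n"
    f"Verified findings: {findings}\n"
    "Rewrite the draft so every claim matches the findings."
)
print(final)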
Conclusion
Refining prompts to improve LLM output is the point of prompt engineering. By doing this, we can improve the model's accuracy and produce more relevant output. In this article, we explored seven advanced prompt engineering techniques that you can use right away.
I hope this has helped!