Introduction to Prompt Engineering

A useful guide on prompt engineering and its different types.
By Boris Delovski • Updated on Mar 27, 2024

AI has long been a remarkable field of innovation, yet for most of that time it went largely unnoticed by the average person. While other technologies quickly integrated into our daily lives, AI's impact appeared limited to industrial or niche applications. For many, AI was more of a fascinating concept encountered in articles or online discussions than a practical tool. This began to shift dramatically with the emergence of advanced generative AI models.

Generative models are not a novel idea within the tech community and have long sparked interest among AI enthusiasts. However, their complexity and the need for programming skills kept them out of reach for the layperson. This landscape transformed with the debut of Large Language Models (LLMs) such as ChatGPT, which marked the first time such powerful AI tools became accessible to a broader audience.

These models, with ChatGPT chief among them, revolutionized the way we interact with AI by simplifying the user interface to just typing text. Users without any coding background could now harness the capabilities of generative AI through simple text prompts, integrating these technologies into the daily lives of many. As these models gained popularity, it became evident that the effectiveness of an interaction relied heavily on how the prompts were crafted.

Some prompts elicited better responses than others, highlighting the significance of prompt design. This concept, referred to as prompt engineering, entails crafting prompts in a way that optimizes the AI's understanding and response. In this article, we will explore the fundamentals of prompt engineering, aiming to guide you on how to communicate with generative AI models effectively.

What Is Prompt Engineering

Prompts are brief inputs of text that act as instructions or questions directed toward an AI model. Their role is to convey the user's goals and expectations to the model. Well-written prompts enable smooth interaction between a user and an AI model, mimicking a natural conversation between two individuals, and they are crucial for eliciting specific, relevant responses. Moreover, prompts play an important role in a wide range of areas, from creative writing assistance to complex problem-solving.

Incorporating the right keywords and crafting prompts with clarity and precision can substantially improve the quality of AI-generated responses. This practice, known as prompt engineering, goes beyond merely communicating with AI. It involves effectively communicating to achieve optimal results while simultaneously minimizing computational costs, thereby reducing the overall cost of interaction for the user.

Prompt engineering is often portrayed as an exact science; in reality, it is both an art and a science. It requires users to understand both the limitations and the capabilities of the particular AI model they are working with. Effective prompts are designed not only to convey a user's needs but also to navigate around potential pitfalls, such as ambiguities, overly broad requests, or misinterpretations. This skill becomes particularly important as AI models grow more sophisticated and capable of processing and generating complex information.

Creating good prompts is often challenging because it involves more than writing the initial command. The process also encompasses an entire chain of follow-up questions shaped by the AI model's responses. By refining prompts based on those responses, users can guide the model toward more accurate and useful results. This iterative process resembles refining a conversation to reach a deeper level of comprehension or clarity on a topic. In essence, it demands that users iteratively adjust their queries and precisely define the context when issuing commands.

This is especially true for prompts that are domain-specific or demand a sophisticated understanding. For example, educators can employ prompts to create quizzes and study guides, or to simulate complex problem-solving situations. To do this effectively, however, they must specify the intended use of the model's output. When the model is informed about the purpose of the output, for example, a quiz for high school freshmen, it will adjust its behavior and outputs accordingly to meet the user's specifications.

What Are the Different Types of Prompt Engineering Techniques

When instructing a model to produce an output, we can address the problem we are tackling from several different angles. There are several types of prompt engineering techniques, each with its own strategies and objectives. The most commonly used prompt engineering strategies are the following:

  • Direct instruction prompts
  • Zero-shot prompts
  • Few-shot prompts
  • Chain-of-thought prompts
  • Exploratory prompts
  • Contextual prompts

Each type serves a different purpose and is chosen based on the output we want from the AI model. In practice, we often combine several of these techniques when creating prompts for our models.
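
For readers who prefer to experiment with these techniques programmatically rather than through a chat interface, the sketch below shows how a prompt can be sent to a model from Python. It is a minimal sketch assuming the OpenAI Python client (pip install openai) and an API key in the OPENAI_API_KEY environment variable; the model name and the example prompt are illustrative placeholders, not part of the techniques themselves.

# A minimal sketch of sending a prompt to a chat model programmatically.
# Assumes the OpenAI Python client and an API key in the OPENAI_API_KEY
# environment variable; the model name is an illustrative placeholder.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    """Send a single user prompt to a chat model and return its text reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder: any chat-capable model works here
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(ask("List three benefits of creating a budget for a small business."))

Every prompt type discussed below can be sent this way; what changes is the content, and occasionally the structure, of the messages.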

What Are Direct Instruction Prompts

Direct instruction prompts are clear, concise commands or questions that tell the AI exactly what to do or produce. The effectiveness of these prompts lies in their clarity and specificity, guiding the AI to provide a precise answer or action without ambiguity. An example of a high-quality direct instruction prompt is:

"Please outline a detailed step-by-step guide for creating a comprehensive budget for a small business. The guide should cover the initial assessment of financial resources, estimation of expenses and income, allocation of funds towards various business needs, and strategies for ongoing budget management and adjustment. Include practical tips for accurately forecasting revenues and managing unforeseen costs. Additionally, highlight the importance of contingency planning and provide examples of common budgeting pitfalls to avoid."

The prompt above provides clear instructions and even specifies the desired depth of information. This increases the chances of yielding a useful, actionable guide for small business owners navigating the budget creation process.

What Are Zero-shot Prompts

These prompts require the model to perform a task without having been given any examples. Therefore, they need to be self-explanatory and designed in a way that the task can be understood and executed based solely on the instructions within the prompt. An example of a high-quality zero-shot prompt is: 

"Utilizing your linguistic capabilities, please translate the English sentence 'The quick brown fox jumps over the lazy dog' into Spanish, ensuring that the translated sentence maintains the grammatical structure, idiomatic expressions, and cultural nuances of the original."

The prompt above emphasizes the need for translation accuracy and cultural and idiomatic sensitivity. However, as a zero-shot prompt, it does not include any examples that the model can use as a reference.

What Are Few-shot Prompts

Unlike zero-shot prompts, which do not provide the model with any examples, few-shot prompts give the model a few examples that illustrate the task before asking it to complete a similar one. This approach helps the AI understand the pattern or format expected in the response. An example of a high-quality few-shot prompt is:

" For each arithmetic question provided below, calculate the sum and present your answer in the following format: 'Question: [Your Question Here] Answer: [Your Answer Here]'. Aim for accuracy and clarity in your response.

Examples:
Question: What is 2 + 2?
Answer: The sum of 2 + 2 is 4.
Question: What is 3 + 3?
Answer: The sum of 3 + 3 is 6.

Given the format and examples above, please answer the following arithmetic question:
Question: What is 8 + 2?"

By clearly specifying the desired format for the model's response, we minimize ambiguity and ensure consistency across responses. Additionally, providing a few examples for the model to reference makes it far more likely to generate the precise output we seek.
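
When working through an API rather than a chat window, the examples in a few-shot prompt can either stay inside a single block of text, as in the prompt above, or be split into alternating user and assistant messages. The sketch below shows the second approach; it assumes the OpenAI Python client, and the model name is an illustrative placeholder.

# A sketch of the few-shot prompt above expressed as alternating chat messages.
# Assumes the OpenAI Python client; the model name is an illustrative placeholder.
from openai import OpenAI

client = OpenAI()

messages = [
    {"role": "system", "content": "Answer each arithmetic question in the format: "
                                  "'The sum of X + Y is Z.'"},
    # Worked examples the model can imitate.
    {"role": "user", "content": "What is 2 + 2?"},
    {"role": "assistant", "content": "The sum of 2 + 2 is 4."},
    {"role": "user", "content": "What is 3 + 3?"},
    {"role": "assistant", "content": "The sum of 3 + 3 is 6."},
    # The new question we actually want answered.
    {"role": "user", "content": "What is 8 + 2?"},
]

response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(response.choices[0].message.content)  # expected: "The sum of 8 + 2 is 10."

Splitting the examples into separate messages tends to make the expected format even more explicit, since the model sees what its own previous answers are supposed to look like.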

What Are Chain-of-thought Prompts

Chain-of-thought prompts encourage the model to articulate its reasoning process step by step before concluding. This method is particularly useful for complex problems, improving transparency and understanding of the model's thought process. By making the model thoroughly explain its reasoning process, we also increase the likelihood that the model will produce a good result. An example of a high-quality chain-of-thought prompt is:

"Imagine you are a detective trying to solve a mystery. The case involves determining the sequence of events that led to the disappearance of a valuable painting from a museum. Use the chain-of-thought method to lay out your reasoning process, step by step.

1. Identify the last time the painting was seen and by whom. Consider the security measures in place at that time, such as cameras or guards.

2. Analyze the security footage from the day of the disappearance, focusing on the time frame when the painting was last seen. Look for any suspicious activities or unrecognized individuals near the painting.

3. Interview witnesses who were near the painting's location on the day of the disappearance. Note any inconsistencies or valuable insights they provide that could point to a suspect or method of theft.

4. Review the museum's layout and any possible exit routes that could have been used to smuggle out the painting unnoticed. Consider if any special events or distractions occurred that day that could have facilitated the theft.

5. Examine the security protocols for potential vulnerabilities, such as unguarded entrances or blind spots in camera coverage. Determine if the theft required insider knowledge of the museum's operations.

6. Compile a list of suspects based on the evidence gathered, including any museum staff, visitors with suspicious behavior, or known art thieves who have been active in the area.

7. Deduce the most likely scenario for the theft based on the chain of thought developed through steps 1-6. Consider motives, opportunities, and methods available to each suspect.

8. Propose the next steps in the investigation, such as searching the suspects' homes, checking for the painting in the black market, or increasing surveillance in case the thief attempts to return.

Finally, use the evidence and reasoning outlined in the previous steps to formulate a comprehensive hypothesis about how the painting was stolen and who might be responsible."

This intricate prompt guides the model through a logical, step-by-step investigation, prompting it to analyze the various facets of the problem and how they interconnect. It mirrors the cognitive process of a detective, from gathering initial evidence to formulating a hypothesis, thereby demonstrating an effective use of the chain-of-thought method in problem-solving.
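
In code, a lighter-weight version of the same technique is simply to instruct the model to reason step by step before stating its final answer. The sketch below assumes the OpenAI Python client; the model name and the sample question are illustrative placeholders.

# A sketch of a simple chain-of-thought prompt: the model is asked to lay out
# its reasoning before giving the final answer. Assumes the OpenAI Python
# client; the model name and the question are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

question = (
    "A train leaves a station at 9:00 traveling at 60 km/h. A second train "
    "leaves the same station at 10:00 traveling at 90 km/h on a parallel track. "
    "At what time does the second train catch up with the first?"
)

prompt = (
    question
    + "\n\nWork through the problem step by step, numbering each step, "
    "and only then state your final answer."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)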

What Are Exploratory Prompts

These prompts are designed to elicit broad, creative, and innovative responses from the model. They are open-ended, encouraging the model to think beyond simple answers and explore a variety of possibilities or perspectives. An example of a high-quality exploratory prompt is:

"Imagine the landscape of energy storage a decade from now: Identify emerging technologies on the horizon that hold the potential to transform how we store energy. Describe their mechanisms, speculate on their evolution, and analyze the multifaceted impacts they could have on society, the environment, and the global economy." 

The prompt above encourages a deeper dive into future technologies. It does so by asking not only for a description of potential innovations, but also for an analysis of their mechanisms, anticipated development, and the broad implications they may hold. It requires the model to take many different factors into consideration when formulating its answer, which in turn makes the answer more creative.
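
When exploratory prompts are sent through an API, they are often paired with a higher sampling temperature, which makes the model's output more varied and less deterministic. The sketch below assumes the OpenAI Python client; the model name and the temperature value are illustrative choices, and the prompt is an abridged version of the one above.

# A sketch of an exploratory prompt combined with a higher sampling temperature
# for more varied output. Assumes the OpenAI Python client; the model name and
# the temperature value are illustrative choices.
from openai import OpenAI

client = OpenAI()

prompt = (
    "Imagine the landscape of energy storage a decade from now. Identify "
    "emerging technologies that could transform how we store energy, describe "
    "their mechanisms, speculate on their evolution, and analyze their impact "
    "on society, the environment, and the global economy."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
    temperature=1.2,  # values above the default make the sampling more diverse
)
print(response.choices[0].message.content)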

What Are Contextual Prompts

Contextual prompts utilize detailed background information to guide the AI's output more accurately. By embedding specific context, these prompts shape the AI's responses to align with a given narrative, scenario, or set of considerations, thereby refining the relevance and depth of the answers. Additionally, we can further enhance these types of prompts by instructing the model to respond from a specific viewpoint or role. An example of a high-quality contextual prompt is:

"Assuming the role of a seasoned healthcare ethicist deeply versed in AI technology, analyze the crucial ethical considerations for incorporating AI into healthcare practices, with a special emphasis on protecting patient privacy and securing data. Drawing from the latest advancements in AI ethics and regulatory measures, discuss how these factors should shape the design, deployment, and governance of AI systems in clinical settings to uphold ethical standards." 

By adopting the persona of a healthcare ethicist, the AI is prompted to synthesize considerations with an expert's insight, reflecting deep understanding and concern for ethical implications. This approach prompts the model to consider the intricacies of patient privacy, data security, and regulatory compliance from the perspective of a seasoned professional. Consequently, it yields a response that is both informed and empathetic.
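
When contextual prompts are sent through an API, the background and persona are often placed in a dedicated system message, which the model treats as standing instructions for the whole conversation. The sketch below is a minimal example assuming the OpenAI Python client; the model name is an illustrative placeholder, and the messages are abridged versions of the prompt above.

# A sketch of a contextual prompt in which the persona and background live in a
# system message and the actual request is the user message. Assumes the OpenAI
# Python client; the model name is an illustrative placeholder.
from openai import OpenAI

client = OpenAI()

messages = [
    {
        "role": "system",
        "content": (
            "You are a seasoned healthcare ethicist deeply versed in AI technology. "
            "Ground your answers in current AI ethics guidance and regulatory "
            "measures, and emphasize patient privacy and data security."
        ),
    },
    {
        "role": "user",
        "content": (
            "Analyze the crucial ethical considerations for incorporating AI into "
            "healthcare practices, and discuss how they should shape the design, "
            "deployment, and governance of AI systems in clinical settings."
        ),
    },
]

response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(response.choices[0].message.content)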

What Is Prompt Engineering for Image Generation Models

When prompt engineering is mentioned, most people immediately associate it with the different techniques we can use to improve the quality of the results we get from Large Language Models, such as ChatGPT. In practice, prompt engineering is not limited to such models; it can also greatly impact the results we get from other types of models. The second most popular application of prompt engineering is in the field of image generation. Here, crafting high-quality prompts for models such as Stable Diffusion, DALL-E, and Midjourney plays a significant role in determining the quality of the generated output. This influence can be even more pronounced than in the case of Large Language Models.

However, we will not get into the details of prompt engineering for image generation models here. There are several crucial differences between how we create prompts for those models and how we create them for Large Language Models. Because of these differences, prompt engineering techniques for image generation models merit an article of their own, and we will focus on them in the future.

In this article, we introduced prompt engineering, a significant topic in the AI community today. There are several types of prompt engineering techniques, each of which is illustrated above. Improving your proficiency in prompt engineering enhances the quality of the results you obtain from the models you are working with. Moreover, it helps make your work with AI models as efficient as possible while minimizing the use of computational resources. In future articles, we will delve deeper into prompt engineering and into creating custom GPTs. In addition, we will explain how prompt engineering works in image generation models.

Boris Delovski

Data Science Trainer

Boris is a data science trainer and consultant who is passionate about sharing his knowledge with others.

Before Edlitera, Boris applied his skills in several industries, including neuroimaging and metallurgy, using data science and deep learning to analyze images.