Basic Prompts

You can achieve a great deal with simple prompts, but the quality of the results depends on how much information you provide and how well it is structured. A prompt can include the "instructions" or "questions" you convey to the model, as well as other details such as "context," "inputs," or "examples." You can use these elements to better guide the model and achieve better results.

Consider the simple example below:

Prompt

Plaintext

The sky is

Output Result

Plaintext

blue.

When using chat models like OpenAI's gpt-4 or gpt-3.5-turbo, you can construct prompts using three roles: system, user, and assistant. The system message is optional but helps define the assistant's overall behavior, enabling the model to understand user needs and respond appropriately. The example above contains only a user message, which can serve directly as the prompt. For simplicity, all examples in this guide (unless stated otherwise) will use only user messages as prompts for the gpt-3.5-turbo model. The assistant message in the example is the model's response. You can also define assistant messages to provide examples of desired behavior. Learn more about using chat models here.
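To make the roles concrete, here is a minimal sketch of sending the example prompt through the OpenAI Python SDK (v1.x style assumed; the system message text is illustrative, and an OPENAI_API_KEY must be set in your environment):

Python

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        # Optional system message: defines the assistant's overall behavior.
        {"role": "system", "content": "You are a helpful assistant."},
        # The user message carries the actual prompt.
        {"role": "user", "content": "The sky is"},
    ],
)

# The assistant message holds the model's completion, e.g. "blue."
print(response.choices[0].message.content)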

As the prompt example shows, language models can continue text from the context they are given (e.g., "The sky is"). The output may be unexpected or stray from the task we have in mind, but we can refine the prompt to improve the results.

Let’s try an improved version:

Prompt

Plaintext

Complete the following sentence: The sky is

Output Result

Plaintext

blue during the day and dark at night.

Better, right? Here we explicitly instructed the model to complete the sentence, so the output follows the instruction much more closely. Prompt engineering is the practice of designing effective prompts like this to guide language models to complete tasks efficiently.

These examples illustrate the core capabilities of modern large language models, which can perform advanced tasks like text summarization, mathematical reasoning, and code generation.

Prompt Format

The prompts used so far have been very simple. A standard prompt takes one of the following forms:

Plaintext

<Question>?

or

Plaintext

<Instruction>

These can be formatted as standard Q&A:

Plaintext

Q: <Question>?
A:

This approach is called zero-shot prompting, where users provide no task-specific examples and directly ask the model for answers. Some large language models support zero-shot prompting, though effectiveness depends on task complexity and the model’s knowledge base.

A zero-shot prompting example:

Prompt

Plaintext

Q: What is prompt engineering?

For newer models, you can omit the Q: prefix and enter the question directly, as the model is trained to recognize question-answering tasks. The prompt then simplifies to:

Prompt

Plaintext

What is prompt engineering?

Beyond zero-shot, the industry widely uses few-shot prompting, where users provide a small number of task examples (e.g., problem-answer pairs). The typical format is:

Plaintext

<Question>?
<Answer>
<Question>?
<Answer>
<Question>?
<Answer>
<Question>?

In Q&A form:

Plaintext

Q: <Question>?
A: <Answer>
Q: <Question>?
A: <Answer>
Q: <Question>?
A: <Answer>
Q: <Question>?
A:
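With chat models, you can also encode these demonstrations as alternating user and assistant messages rather than one long string, using the assistant role to show the desired answers (as noted earlier). A minimal sketch, again assuming the OpenAI Python SDK; the Q&A pairs are purely illustrative:

Python

from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        # Each user/assistant pair is one demonstration (a "shot").
        {"role": "user", "content": "What is the capital of France?"},
        {"role": "assistant", "content": "Paris"},
        {"role": "user", "content": "What is the capital of Japan?"},
        {"role": "assistant", "content": "Tokyo"},
        # The final user message is the question we actually want answered.
        {"role": "user", "content": "What is the capital of Italy?"},
    ],
)

print(response.choices[0].message.content)  # expected: "Rome"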

Q&A formatting is optional; adjust the structure to fit the task. For example, here is a simple classification task with labeled examples:

Prompt

Plaintext

This is awesome! // Positive
This is bad! // Negative
Wow that movie was rad! // Positive
What a horrible show! //

Output Result

Plaintext

Negative
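The few-shot prompt above can also be sent verbatim as a single user message. A minimal sketch, assuming the OpenAI Python SDK:

Python

from openai import OpenAI

# The demonstrations and the unlabeled item, exactly as in the prompt above.
few_shot_prompt = (
    "This is awesome! // Positive\n"
    "This is bad! // Negative\n"
    "Wow that movie was rad! // Positive\n"
    "What a horrible show! //"
)

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": few_shot_prompt}],
)

print(response.choices[0].message.content)  # e.g. "Negative"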

Language models can learn a task from just a few demonstrations provided in the prompt, a capability known as in-context learning, which few-shot prompting takes advantage of. We’ll explore zero-shot and few-shot prompting in depth in subsequent chapters.
