10 - Prompt Engineering – Part 3: Types of Prompts
In the previous post, we explored the components and characteristics of good prompts — how elements like instruction, context, tone, and format come together to shape the quality of AI responses.
Today, we’ll dive deeper into three foundational prompt strategies: Zero-Shot, One-Shot, and Few-Shot Prompting — and understand when, why, and how to use each of them effectively.
But before we do that, let’s understand a key concept that powers all three strategies: In-Context Learning.
When working with large language models (LLMs), there are typically two major approaches to make them perform well on specific tasks: fine-tuning and in-context learning.
- Fine-tuning involves modifying the internal weights of a pre-trained model using task-specific data. This process requires large datasets, considerable computing power, and time. It’s a more permanent and resource-intensive way to adapt a model for a new domain or behavior.
- In-context learning, on the other hand, adapts the model’s behavior without changing its internal parameters. Instead of retraining the model, you simply provide examples or additional information directly in the prompt, and the model uses that context to generate relevant and accurate outputs.
What Exactly is In-Context Learning?
In-context learning is a technique where an LLM learns from the examples you give it in the prompt itself — not from retraining or updating its model weights. Each of these examples is referred to as a “shot”, and depending on how many you include, this strategy is categorized into:
- Zero-Shot Prompting – no examples provided.
- One-Shot Prompting – one example provided.
- Few-Shot Prompting – a few examples provided.
Why Does This Matter?
In-context learning is especially powerful because:
- It’s faster and cheaper than fine-tuning.
- It doesn’t require hosting or retraining models.
- It allows flexible, task-specific behavior in real time.
However, it also comes with limitations:
- The number of examples is limited by the model’s token limit.
- It’s sensitive to how examples are phrased and structured.
- It can be inconsistent in following complex patterns over long sessions.
As models get more powerful, they often need fewer examples — sometimes none at all — to perform well. But understanding in-context learning is still crucial for crafting effective prompts and building reliable AI workflows.
Let’s now explore each of these strategies in detail, with real-world use cases from the banking and credit risk domain.
1. Zero-Shot Prompting
Definition:
Zero-shot prompting is when you ask the model to perform a task without providing any examples. The model relies purely on its general knowledge and language patterns to generate a response.
Analogy:
Someone on the street asks you out of the blue:
“How do I get to Balagandharva Kaladalan from Balewadi?”
You respond based on your understanding: “Take a bus or cab to Shivajinagar, then walk to Balagandharva. It’s about 10–11 km from Balewadi.”
No help, no hint. You just do your best to answer.
When to Use:
- The task is simple, familiar, or well understood by the model.
- You want a quick answer or don’t have examples to provide.
- You’re testing the model’s baseline capabilities.
Benefits:
- Fast and lightweight.
- No need to prepare training examples.
- Works well for common or general-knowledge tasks.
Limitations:
- Less control over format or accuracy.
- May misinterpret intent if the task is ambiguous.
Examples:
Prompt: "List the top three financial risks faced by banks in unsecured personal lending."
Output:
Prompt: “Explain Loss Given Default (LGD) to a junior credit risk analyst.”
Output:
“Loss Given Default (LGD) is the amount of money a bank loses when a borrower defaults, expressed as a percentage of the total exposure. It helps assess potential credit losses.”
Prompt:
“Explain what AML means in banking.”
Output:
“Anti-Money Laundering (AML) refers to regulations and procedures aimed at preventing criminals from disguising illegally obtained funds as legitimate income.”
No examples were needed, but results depend on how clear the prompt is.
2. One-Shot Prompting
Definition:
One-shot prompting is when you provide a single example in the prompt to show the model the pattern, format, or style you expect before asking it to perform the task.
Analogy:
Before asking, the person says: “I asked someone how to get to FC Road from Balewadi and they said: ‘Take a PMPML bus to Shivajinagar, FC Road is a 5-minute walk.’
Now, can you tell me how to reach Balagandharva Kaladalan?”
You now mirror that style:
“Take a PMPML bus or cab to Shivajinagar. From there, walk towards Jangli Maharaj Road. Balagandharva is just next to Sambhaji Park.”
That one example helps you frame the answer more clearly.
When to Use:
- The model needs to match your structure or style.
- You’re introducing a slightly specialized or contextual task.
- You want to give the AI a gentle nudge in the right direction.
Benefits:
- Introduces structure or format without much overhead.
- Helps with moderately complex tasks.
Limitations:
- One example may not be enough to define a consistent pattern.
- Output still varies if the prompt isn’t clear.
Example:
Prompt: "Provide risk descriptions using the format shown below.
Example:
Risk: Credit Risk
Description: The possibility of loss due to borrower default.
Now continue with another risk."
Output:
Prompt: “Use this format to explain risk terms to a junior analyst:
Term: Probability of Default (PD)
Explanation: This is the likelihood that a borrower will not repay the loan. It is expressed as a percentage and is used to estimate credit risk.
Now explain Loss Given Default (LGD).”
Output:
Term: Loss Given Default (LGD)
Explanation: This is the percentage of exposure a bank expects to lose if a borrower defaults. It helps banks understand the severity of losses in default scenarios.
You can see how the single example helps guide the AI’s structure and tone.
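In chat-style LLM APIs, the same one-shot idea is usually expressed as a message list: the worked PD example goes in as a prior user/assistant exchange, and the real question follows. A sketch of how the prompt above might be laid out (this only builds the list under the common role/content message convention; it calls no API):

```python
# One-shot prompt expressed as chat messages: the worked PD example
# becomes a prior user/assistant turn, then the real LGD question follows.
one_shot_messages = [
    {"role": "system",
     "content": "Explain risk terms to a junior analyst using the Term/Explanation format."},
    {"role": "user",
     "content": "Explain Probability of Default (PD)."},
    {"role": "assistant",
     "content": ("Term: Probability of Default (PD)\n"
                 "Explanation: The likelihood that a borrower will not repay the loan, "
                 "expressed as a percentage.")},
    {"role": "user",
     "content": "Now explain Loss Given Default (LGD)."},
]
```

Because the example sits in the conversation as if the model had already answered that way once, the model tends to mirror its structure and tone in the next turn.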
3. Few-Shot Prompting
Definition:
Few-shot prompting is a technique where you provide multiple examples (typically 2 to 5, but sometimes more) within your prompt to demonstrate how a task should be done. These examples act as guides or patterns that help the model generalize and generate responses for similar inputs.
Analogy:
Before asking you, the person says:
• To go to JM Road from Baner: ‘Auto till Shivajinagar, then walk towards Garware Bridge.’
• To reach Deccan Gymkhana from Aundh: ‘Bus to Goodluck Chowk, then walk back towards Deccan Bus Stop.’
• To go to FC Road from Balewadi: ‘Take Bus No. 298 to Shivajinagar, then walk 5 mins towards FC Road.’
Now tell me how to get to Balagandharva Kaladalan from Balewadi.
You instinctively follow the pattern and respond: “Take a bus or cab to Shivajinagar Bus Stop. From there, walk towards Sambhaji Park — Balagandharva is opposite it, near JM Road.”
Multiple examples train you to give a helpful, structured, and relevant answer.
When to Use:
- The task is domain-specific or requires context.
- You need consistency in formatting or phrasing.
- The model is struggling with zero- or one-shot results.
Benefits:
- Significantly improves quality and consistency.
- Allows the AI to learn and replicate patterns accurately.
- Great for report-style, analytical, or rule-based tasks.
Limitations:
- Longer prompts may hit token limits.
- Requires well-crafted and relevant examples.
Example:
Prompt: "You are a banking risk analyst. Use the following format to list common risks.
Example 1:
Risk: Credit Risk
Description: Loss due to borrower not repaying loans.
Example 2:
Risk: Market Risk
Description: Risk of financial loss due to changes in market variables.
Now provide one more example."
Expected Output (Few-Shot):
Here, the model clearly follows both the format and tone of the examples provided.
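The few-shot prompt above can also be assembled programmatically, so every example follows the exact Risk/Description template no matter how many you add. A small sketch (the `few_shot_risk_prompt` helper is my own naming, not a library function):

```python
def few_shot_risk_prompt(examples: list[tuple[str, str]]) -> str:
    """Build a few-shot prompt from (risk, description) pairs."""
    lines = ["You are a banking risk analyst. "
             "Use the following format to list common risks.", ""]
    for i, (risk, desc) in enumerate(examples, start=1):
        # Each example repeats the same Risk/Description template.
        lines += [f"Example {i}:", f"Risk: {risk}", f"Description: {desc}", ""]
    lines.append("Now provide one more example.")
    return "\n".join(lines)

prompt = few_shot_risk_prompt([
    ("Credit Risk", "Loss due to borrower not repaying loans."),
    ("Market Risk", "Risk of financial loss due to changes in market variables."),
])
```

Generating the examples this way keeps the pattern perfectly consistent, which is exactly what few-shot prompting relies on.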
Summary Table: Comparing Prompt Types
| Prompt Type | When to Use | Pros | Cons |
|---|---|---|---|
| Zero-Shot | Simple or general tasks | Fast, easy, no prep | Can be vague or inconsistent |
| One-Shot | When format matters slightly | Adds structure and tone | One example may not be enough |
| Few-Shot | Complex, contextual, or structured tasks | More accurate, more control | Needs well-written examples |
Why This Works for AI (and Humans)
Just like people, AI learns from context and examples. When you:
- Say nothing extra → the AI guesses (Zero-Shot).
- Give one clue → the AI follows the pattern (One-Shot).
- Give a few examples → the AI really gets the hang of it (Few-Shot).
Whether you’re guiding a tourist or training an AI, the quality of instruction = the quality of response.
Final Thoughts
Prompting isn’t about being fancy — it’s about being effective. Whether you're analyzing risks in lending, building dashboards, or writing executive summaries, knowing which type of prompt to use can drastically improve both speed and accuracy.
In the next post, we’ll look at advanced prompting strategies like Chain-of-Thought, Role-Based, and Meta Prompting — perfect for more sophisticated risk modeling and communication tasks.
Until then, experiment with these types in your own domain. Try building your own examples — and notice how each small change in prompting leads to a big difference in outcome.
✍️ Author’s Note
This blog reflects the author’s personal point of view — shaped by 22+ years of industry experience, along with a deep passion for continuous learning and teaching.
The content has been phrased and structured using Generative AI tools, with the intent to make it engaging, accessible, and insightful for a broader audience.