10 - Prompt Engineering – Part 3: Types of Prompts

Exploring Zero-Shot, One-Shot, and Few-Shot Prompting Strategies with Real-World Banking Risk Examples


In the previous post, we explored the components and characteristics of good prompts — how elements like instruction, context, tone, and format come together to shape the quality of AI responses.

Today, we’ll dive deeper into three foundational prompt strategies: Zero-Shot, One-Shot, and Few-Shot Prompting, and understand when, why, and how to use each of them effectively.

But before we do that, let’s understand a key concept that powers all three strategies: In-Context Learning.

When working with large language models (LLMs), there are typically two major approaches to make them perform well on specific tasks: fine-tuning and in-context learning.

  • Fine-tuning involves modifying the internal weights of a pre-trained model using task-specific data. This process requires large datasets, considerable computing power, and time. It’s a more permanent and resource-intensive way to adapt a model for a new domain or behavior.

  • In-context learning, on the other hand, adapts the model’s behavior without changing its internal parameters. Instead of retraining the model, you simply provide examples or additional information directly in the prompt, and the model uses that context to generate relevant and accurate outputs.

What Exactly is In-Context Learning?

In-context learning is a technique where an LLM learns from the examples you give it in the prompt itself — not from retraining or updating its model weights. Each of these examples is referred to as a “shot”, and depending on how many you include, this strategy is categorized into:



  • Zero-Shot Prompting – no examples provided.

  • One-Shot Prompting – one example provided.

  • Few-Shot Prompting – a few examples provided.




For instance, if you’re building a tool to evaluate loan applications, you don’t need to retrain the model on your company’s risk policy. Instead, you can give it a few examples of approved and rejected applications in your prompt — and the model can follow those patterns on new inputs.
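
The loan-application idea above can be sketched in a few lines of Python. This only assembles the prompt text; the helper name `build_icl_prompt` and the example applications are illustrative, not drawn from any real risk policy.

```python
# In-context learning in a nutshell: the "training" lives inside the prompt.
# We embed labeled examples directly in the text, so no model weights change.

def build_icl_prompt(examples, new_application):
    """Embed labeled decisions in the prompt so the model can follow
    the pattern on a new input without any retraining."""
    lines = ["Decide whether each loan application is Approved or Rejected."]
    for app, decision in examples:
        lines.append(f"Application: {app}\nDecision: {decision}")
    # The final item is left open for the model to complete.
    lines.append(f"Application: {new_application}\nDecision:")
    return "\n\n".join(lines)

# Hypothetical applications, purely for illustration:
examples = [
    ("Salaried, credit score 780, DTI 20%", "Approved"),
    ("Unemployed, credit score 540, DTI 65%", "Rejected"),
]
prompt = build_icl_prompt(examples, "Self-employed, credit score 710, DTI 30%")
```

The resulting string is what you would send to the model; the examples themselves act as the "shots."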

Why Does This Matter?

In-context learning is especially powerful because:

  • It’s faster and cheaper than fine-tuning.

  • It doesn’t require hosting or retraining models.

  • It allows flexible, task-specific behavior in real-time.

However, it also comes with limitations:

  • The number of examples is limited by the model's token limit.

  • It’s sensitive to how examples are phrased and structured.

  • It can be inconsistent in following complex patterns over long sessions.

As models get more powerful, they often need fewer examples — sometimes none at all — to perform well. But understanding in-context learning is still crucial for crafting effective prompts and building reliable AI workflows.

Let’s now explore each of these strategies in detail, with real-world use cases from the banking and credit risk domain.


1. Zero-Shot Prompting

Definition:
Zero-shot prompting is when you ask the model to perform a task without providing any examples. The model relies purely on its general knowledge and language patterns to generate a response.

This approach assumes that the model has already encountered similar tasks during training and can generalize from that prior knowledge. You're essentially asking the model to figure out the task just from your instruction.

Analogy:

Someone on the street asks you out of the blue:

“How do I get to Balagandharva Kaladalan from Balewadi?”

You respond based on your understanding:

“Take a bus or cab to Shivajinagar, then walk to Balagandharva. It’s about 10–11 km from Balewadi.”

No help, no hint. Just do your best to answer.

When to Use:

  • The task is simple, familiar, or well-understood by the model.

  • You want a quick answer or don’t have examples to provide.

  • You're testing the model’s baseline capabilities.

Benefits:

  • Fast and lightweight.

  • No need to prepare training examples.

  • Works well for common or general knowledge tasks.

Limitations:

  • Less control over format or accuracy.

  • May misinterpret intent if the task is ambiguous.

Examples:

Prompt:

"List the top three financial risks faced by banks in unsecured personal lending."

Output:

1. Credit Risk – Risk of borrower default.
2. Operational Risk – Failures in internal processes or systems.
3. Liquidity Risk – Inability to meet short-term obligations.

Prompt:

“Explain Loss Given Default (LGD) to a junior credit risk analyst.”

Output:

“Loss Given Default (LGD) is the amount of money a bank loses when a borrower defaults, expressed as a percentage of the total exposure. It helps assess potential credit losses.”

Prompt:

“Explain what AML means in banking.”

Output:
“Anti-Money Laundering (AML) refers to regulations and procedures aimed at preventing criminals from disguising illegally obtained funds as legitimate income.”


No examples were needed, but results depend on how clear the prompt is.
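
In a chat-style API, a zero-shot request is simply one user message carrying the instruction and nothing else. A minimal sketch is below; the call shown in the comment follows the OpenAI Python SDK's shape, and the model name is a placeholder for whatever your provider offers.

```python
# Zero-shot: the payload contains only the instruction -- no worked examples.
messages = [
    {
        "role": "user",
        "content": "List the top three financial risks faced by banks "
                   "in unsecured personal lending.",
    }
]

# A real request would look roughly like (client setup omitted):
#   response = client.chat.completions.create(model="gpt-4o", messages=messages)
```

Because there is no example to anchor the format, the clearer the instruction, the more predictable the output.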


2. One-Shot Prompting

Definition:

One-shot prompting is a prompt engineering strategy where you give a single example of the task you want the model to perform, followed by a new, similar input. The idea is that the single example is enough to set the pattern or show the structure of the desired response.

Analogy:

Before asking, the person says:
“I asked someone how to get to FC Road from Balewadi and they said: ‘Take a PMPML bus to Shivajinagar, FC Road is a 5-minute walk.’
Now, can you tell me how to reach Balagandharva Kaladalan?”

You now mirror that style:
“Take a PMPML bus or cab to Shivajinagar. From there, walk towards Jangli Maharaj Road. Balagandharva is just next to Sambhaji Park.”

That one example helps you frame the answer more clearly.


When to Use:

  • The model needs to match your structure or style.

  • You’re introducing a slightly specialized or contextual task.

  • You want to give the AI a gentle nudge in the right direction.

Benefits:

  • Introduces structure or format without much overhead.

  • Helps in moderately complex tasks.

Limitations:

  • One example may not be enough to define a consistent pattern.

  • Output still varies if the prompt isn't clear.

Example:

Prompt:
"Provide risk descriptions using the format shown below.
Example:
Risk: Credit Risk
Description: The possibility of loss due to borrower default.

Now continue with another risk."

Output:

Risk: Operational Risk
Description: Risk arising from internal process failures, system breakdowns, or human error.

Prompt:
“Use this format to explain risk terms to a junior analyst:
Term: Probability of Default (PD)
Explanation: This is the likelihood that a borrower will not repay the loan. It is expressed as a percentage and is used to estimate credit risk.

Now explain Loss Given Default (LGD).”


Output:

Term: Loss Given Default (LGD)

Explanation: This is the percentage of exposure a bank expects to lose if a borrower defaults. It helps banks understand the severity of losses in default scenarios.

You can see how the single example helps guide the AI’s structure and tone.
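
The pattern above can be expressed as a tiny helper that prepends exactly one worked example to the new request. The function name and wiring here are illustrative; the text is taken from the example prompt above.

```python
# One-shot: a single worked example precedes the new input, so the model
# can mirror its structure and tone.

def one_shot_prompt(example_in, example_out, new_in):
    """Build a prompt with exactly one worked example before the task."""
    return f"{example_in}\n{example_out}\n\n{new_in}"

prompt = one_shot_prompt(
    "Provide risk descriptions using the format shown below.\nExample:",
    "Risk: Credit Risk\nDescription: The possibility of loss due to borrower default.",
    "Now continue with another risk.",
)
```

Swapping in a different example pair changes the format the model imitates, without touching the instruction itself.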


3. Few-Shot Prompting

Definition:
Few-shot prompting is a technique where you provide multiple examples (typically 2 to 5, but sometimes more) within your prompt to demonstrate how a task should be done. These examples act as guides or patterns that help the model generalize and generate responses for similar inputs.

Analogy:

Before asking you, the person says:

To go to JM Road from Baner: ‘Auto till Shivajinagar, then walk towards Garware Bridge.’

To reach Deccan Gymkhana from Aundh: ‘Bus to Goodluck Chowk, then walk back towards Deccan Bus Stop.’

To go to FC Road from Balewadi: ‘Take Bus No. 298 to Shivajinagar, then walk 5 mins towards FC Road.’

Now tell me how to get to Balagandharva Kaladalan from Balewadi.

You instinctively follow the pattern and respond:

“Take a bus or cab to Shivajinagar Bus Stop. From there, walk towards Sambhaji Park — Balagandharva is opposite it, near JM Road.”

Multiple examples train you to give a helpful, structured, and relevant answer.


When to Use:

  • The task is domain-specific or requires context.

  • You need consistency in formatting or phrasing.

  • The model is struggling with zero- or one-shot results.

Benefits:

  • Significantly improves quality and consistency.

  • Allows the AI to learn and replicate patterns accurately.

  • Great for report-style, analytical, or rule-based tasks.

Limitations:

  • Longer prompts may hit token limits.

  • Requires well-crafted and relevant examples.

Example:

Prompt:
"You are a banking risk analyst. Use the following format to list common risks.
Example 1:
Risk: Credit Risk
Description: Loss due to borrower not repaying loans.
Example 2:
Risk: Market Risk
Description: Risk of financial loss due to changes in market variables.

Now provide one more example."


Expected Output (Few-Shot):

Risk: Compliance Risk
Description: Risk of legal penalties due to non-adherence to regulatory requirements.

Prompt:

“You are building a training guide for junior credit risk analysts. Follow this format for key Basel III terms:

Term: Probability of Default (PD)

Explanation: Likelihood of a borrower defaulting. A key component of credit risk assessment.

Term: Exposure at Default (EAD)

Explanation: The total value a bank is exposed to at the time of borrower default.

Term: Loss Given Default (LGD)

Explanation: [Model continues…]”

Output:


Term: Loss Given Default (LGD)

Explanation: The percentage of the total exposure a bank expects to lose if a borrower

defaults, considering recovery costs and collateral value.

Here, the model clearly follows both the format and tone of the examples provided.
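
Few-shot is the same idea generalized to a list of examples. The sketch below (names are illustrative) assembles the Basel III prompt shown above and adds a rough character-based token estimate, since the number of examples is bounded by the model's context window.

```python
# Few-shot: several worked examples, then an unfinished item for the
# model to complete in the same format.

def few_shot_prompt(instruction, examples, query):
    """Assemble instruction + worked examples + one open-ended item."""
    shots = "\n\n".join(f"Term: {t}\nExplanation: {e}" for t, e in examples)
    return f"{instruction}\n\n{shots}\n\nTerm: {query}\nExplanation:"

examples = [
    ("Probability of Default (PD)",
     "Likelihood of a borrower defaulting. A key component of credit risk assessment."),
    ("Exposure at Default (EAD)",
     "The total value a bank is exposed to at the time of borrower default."),
]
prompt = few_shot_prompt(
    "You are building a training guide for junior credit risk analysts. "
    "Follow this format for key Basel III terms:",
    examples,
    "Loss Given Default (LGD)",
)

# Crude budget check -- roughly 4 characters per token is a common rule of thumb:
approx_tokens = len(prompt) // 4
```

Keeping an eye on the estimated token count tells you how many examples you can afford before hitting the limit.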


Summary Table: Comparing Prompt Types


| Prompt Type | When to Use | Pros | Cons |
| --- | --- | --- | --- |
| Zero-Shot | Simple or general tasks | Fast, easy, no prep | Can be vague or inconsistent |
| One-Shot | When format matters slightly | Adds structure and tone | One example may not be enough |
| Few-Shot | Complex, contextual, or structured tasks | More accurate, more control | Needs well-written examples |

Why This Works for AI (and Humans)

Just like people, AI learns from context and examples. When you:

  • Say nothing extra → AI guesses (Zero-Shot)

  • Give one clue → AI follows the pattern (One-Shot)

  • Give a few examples → AI really gets the hang of it (Few-Shot)

Whether you’re guiding a tourist or training an AI, the quality of instruction = the quality of response.


Final Thoughts

Prompting isn’t about being fancy — it’s about being effective. Whether you're analyzing risks in lending, building dashboards, or writing executive summaries, knowing which type of prompt to use can drastically improve both speed and accuracy.

In the next post, we’ll look at advanced prompting strategies like Chain-of-Thought, Role-Based, and Meta Prompting — perfect for more sophisticated risk modeling and communication tasks.

Until then, experiment with these types in your own domain. Try building your own examples — and notice how each small change in prompting leads to a big difference in outcome.


✍️ Author’s Note

This blog reflects the author’s personal point of view — shaped by 22+ years of industry experience, along with a deep passion for continuous learning and teaching.
The content has been phrased and structured using Generative AI tools, with the intent to make it engaging, accessible, and insightful for a broader audience.
