11 - Prompt Engineering – Part 4: Advanced Prompting Strategies

Recap from Previous Posts

In our earlier posts, we covered the foundations of prompt engineering, including the components of a prompt, their characteristics, and three key prompting strategies — Zero-Shot, One-Shot, and Few-Shot.

We also introduced the idea of in-context learning, which enables large language models (LLMs) to learn from examples provided within the prompt, without retraining.

Today, let’s take the next step and explore some powerful advanced techniques that go beyond basic input-output interactions. These help unlock more accurate, context-aware, and insightful responses from LLMs, particularly useful in complex, high-stakes domains like banking and risk management.



Chain-of-Thought Prompting

Definition:

Chain-of-Thought (CoT) prompting is a technique that encourages the model to break down its response into a series of logical steps, mimicking how humans approach complex problems. Instead of giving a direct answer, the model walks through the intermediate steps — like performing mental math out loud — before concluding.


Why It Matters:

While large language models (LLMs) are fundamentally trained for language generation, they’ve demonstrated surprising competence in math, programming, and structured decision-making. Why? Because these tasks involve reasoning — and reasoning is a strength that can be unlocked through proper prompting.

Reasoning refers to the sequence of logical steps followed to arrive at a conclusion. These steps should be consistent, reproducible, and should lead to the same answer regardless of who (or what) performs them — human or machine. CoT prompting essentially activates and guides this internal reasoning process.

Although advanced models often perform well with simple instructions, smaller models or domain-specific fine-tuned models may struggle to deliver the same depth of analysis. In such cases, CoT prompting provides a clear path for the model to reason through problems in a structured manner, leading to more reliable and explainable outputs.
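As a minimal sketch, the simplest form of CoT is appending a reasoning cue to the task. The helper function and its exact wording below are illustrative, not from any library:

```python
def with_cot(task: str) -> str:
    """Append a chain-of-thought cue so the model reasons step by step
    before answering (illustrative helper, not a library function)."""
    return task.strip() + "\n\nLet's think step by step, then state the final answer."

prompt = with_cot(
    "Is an applicant with income ₹60,000 and EMI ₹30,000 within a 36% DTI limit?"
)
```

The same cue can be reused across tasks, which is why it is often factored into a helper like this.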


CoT prompting itself can be enhanced using examples — this is where One-Shot and Few-Shot Chain-of-Thought strategies come into play. These methods provide the model with structured reasoning examples to help it mimic similar logical patterns in new tasks.


One-Shot Chain-of-Thought Prompting

Definition:
One-shot CoT provides a single example where the task is solved step-by-step. This shows the model how to think before asking it to solve a similar task. It’s useful when you want to guide the model’s reasoning behavior with minimal data.

Use Case:
Helpful when the task is moderately complex and you want to establish a reasoning pattern without overloading the context window.

Example – Banking Risk (Loan Approval)
Prompt:

Example:
Applicant A:
Income: ₹75,000
EMI: ₹25,000
CIBIL Score: 700
Job: PSU Employee
Step-by-step reasoning:
EMI/Income = 25,000 / 75,000 = 33% — acceptable
CIBIL Score = 700 — borderline, but okay
Job security = strong → low risk
Eligible
Now evaluate Applicant B:
Income: ₹60,000
EMI: ₹30,000
CIBIL Score: 680
Job: Private Sector
Explain step-by-step.

Response of the Gemma 3 4B model:


 Okay, let’s evaluate Applicant B, following the same step-by-step reasoning process.



Applicant B: Evaluation

1. Initial Data Assessment:

Income: ₹60,000 – This is a moderate income level.
EMI: ₹30,000 – A significant portion of their income is allocated to debt.
CIBIL Score: 680 – This is a borderline score. While it's not excellent, it’s generally considered “Fair” and indicates some past credit activity.

Job: Private Sector – This job type carries a moderate level of risk compared to PSU or government jobs.

2. Debt-to-Income (DTI) Calculation & Analysis:

DTI Calculation: (EMI / Income) * 100 = (₹30,000 / ₹60,000) * 100 = 50%

Interpretation: A DTI of 50% is a significant concern. Lenders generally prefer DTIs below 36%, and 50% pushes the applicant into a higher-risk category. This high DTI means a larger percentage of their income is dedicated to debt repayment, leaving less flexibility for unexpected expenses or savings.

3. CIBIL Score Analysis & Risk Adjustment:

CIBIL Score Context: A score of 680 is considered "Fair." While it shows responsible credit use in the past, it’s not as strong as a score of 700 or higher. The lower score suggests potential for higher interest rates or difficulty securing favorable loan terms.

Risk Adjustment: The lower CIBIL score necessitates a more cautious approach.

4. Job Security Considerations:

Job Sector: Private Sector employment is inherently less stable than a PSU (Public Sector Undertaking) job. While many private sector jobs are secure, there’s a greater risk of layoffs or reduced income compared to government employment.

5. Overall Risk Assessment & Recommendation:

Overall Risk: Based on this analysis, Applicant B presents a higher level of risk compared to Applicant A. The high DTI (50%), combined with the borderline CIBIL score and the inherent risk associated with a private sector job, warrants careful scrutiny.

Recommendation: I would classify Applicant B as potentially eligible, but only with significant caveats. Further investigation and potentially stricter loan terms would be absolutely necessary. Specifically:

Detailed Financial Interview: A thorough discussion to understand their spending habits, savings, and overall financial strategy.

Higher Interest Rate: A loan offered to Applicant B would likely require a higher interest rate to compensate for the increased risk.
Smaller Loan Amount: The loan amount might be limited to a smaller sum to reduce the overall repayment burden.
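The one-shot prompt above can be assembled programmatically. This is a hedged sketch: the function name, section labels, and abbreviated reasoning lines are illustrative choices, not a fixed template:

```python
def one_shot_cot(worked_example: str, new_case: str) -> str:
    """Prefix one fully worked example, then ask the model to apply
    the same step-by-step reasoning to a new case (illustrative helper)."""
    return (
        "Example:\n" + worked_example.strip()
        + "\n\nNow evaluate:\n" + new_case.strip()
        + "\nExplain step-by-step."
    )

worked_example = (
    "Applicant A:\n"
    "Income: ₹75,000\nEMI: ₹25,000\nCIBIL Score: 700\nJob: PSU Employee\n"
    "Step-by-step reasoning:\n"
    "EMI/Income = 25,000 / 75,000 = 33% (acceptable)\n"
    "CIBIL Score = 700 (borderline, but okay)\n"
    "Job security = strong, so low risk\n"
    "Eligible"
)

new_case = (
    "Applicant B:\n"
    "Income: ₹60,000\nEMI: ₹30,000\nCIBIL Score: 680\nJob: Private Sector"
)

prompt = one_shot_cot(worked_example, new_case)
```

Keeping the worked example and the new case as separate arguments makes it easy to swap in different applicants without rewriting the prompt scaffolding.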

Few-Shot Chain-of-Thought Prompting

Definition:
Few-shot CoT provides multiple examples (usually 2–5) with step-by-step reasoning. This allows the model to learn a reasoning pattern across varied contexts, improving accuracy and generalization, especially for complex tasks.

Use Case:
Best when the model struggles with Zero-Shot reasoning and you have space to include more examples within token limits.

Example – Banking Risk (Loan Eligibility Across Multiple Applicants)
Prompt:

 
Applicant A:

Income: ₹75,000
EMI: ₹25,000
CIBIL Score: 700
Job: PSU Employee
Step-by-step reasoning:
EMI/Income = 25,000 / 75,000 = 33% — acceptable
CIBIL Score = 700 — borderline, but okay
Job security = strong → low risk
→ Eligible


Now evaluate Applicant B:
Income: ₹60,000
EMI: ₹30,000
CIBIL Score: 680
Job: Private Sector
Explain step-by-step.
EMI/Income = 30,000 / 60,000 = 50% — borderline; higher than Applicant A (lower is better)
CIBIL Score = 680 — borderline; lower than Applicant A (higher is better)
Job security = moderate → a government job is more stable than a private-sector one
→ Borderline eligible; may need additional collateral
Now evaluate Applicant C:
Income: ₹120,000
EMI: ₹20,000
CIBIL Score: 710

Job: Private Sector

Step-by-step reasoning:

Response of the Gemma 3 4B model:


Applicant C: Income: ₹120,000 EMI: ₹20,000 CIBIL Score: 710 Job: Private Sector

Step-by-step reasoning:
EMI/Income Calculation: EMI/Income = ₹20,000 / ₹120,000 = 16.67% – This is an excellent ratio. Ideally, we want the EMI to be a smaller percentage of the income. This demonstrates a very strong ability to manage debt.

CIBIL Score: CIBIL Score = 710 – This is a very good score. A score of 710 indicates a strong credit history and responsible borrowing behavior.

Job Security: Job: Private Sector – While private sector jobs can be less stable than Public Sector Units (PSUs), many positions offer good benefits and long-term career prospects. We need to investigate the stability of this specific role within the company.

Overall Assessment: Considering all factors, Applicant C demonstrates a strong ability to manage debt (low EMI/Income ratio), has a good credit history (high CIBIL score), and works in a stable job.

Conclusion: Applicant C is Highly Eligible for this loan. However, to ensure complete risk mitigation and adherence to ethical lending practices, a deeper dive into the specific job role within the private sector is recommended – to fully understand the associated risks. We need to confirm long-term employment stability.  

These strategies allow you to blend the power of CoT with in-context learning, helping models that might otherwise fall short on complex, high-risk, or ambiguous decision-making tasks.
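Extending the same idea, a few-shot CoT prompt is just several worked examples joined ahead of the new case. As before, this is an illustrative sketch; the joining format and the condensed example strings are assumptions, not a standard:

```python
def few_shot_cot(worked_examples, new_case: str) -> str:
    """Join several worked examples (typically 2-5) before the new case,
    so the model can generalise the reasoning pattern (illustrative helper)."""
    body = "\n\n".join(ex.strip() for ex in worked_examples)
    return body + "\n\nNow evaluate:\n" + new_case.strip() + "\nStep-by-step reasoning:"

examples = [
    "Applicant A: Income ₹75,000, EMI ₹25,000, CIBIL 700, PSU job.\n"
    "EMI/Income = 33%, acceptable score, stable job. -> Eligible",
    "Applicant B: Income ₹60,000, EMI ₹30,000, CIBIL 680, private sector.\n"
    "EMI/Income = 50%, borderline score, less stable job. -> Borderline eligible",
]

prompt = few_shot_cot(
    examples,
    "Applicant C: Income ₹120,000, EMI ₹20,000, CIBIL 710, private sector.",
)
```

Because each example is a separate list element, you can add or drop examples to fit the model's context window.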




Role-Based Prompting: Solving Problems Through a Persona

Definition:
Role-based prompting means instructing the model to assume a specific role or persona — such as a doctor, engineer, lawyer, or teacher — while solving a task. This influences the style, structure, and focus of the response based on the expertise and mindset of that role.

These prompts often start with phrases like “You are a…” or “I want you to be…” — this sets the context for the model.

In more advanced LLM applications, this role-setting is typically defined as part of the System Prompt — the instruction that tells the model who it is before the conversation even starts.
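In chat-style APIs, the role is typically set as a system message. A minimal sketch, using the widely adopted OpenAI-compatible message format (field names may differ for other providers):

```python
# Chat-style message list: the system message fixes the persona before the
# conversation starts. The persona text here is an illustrative example.
messages = [
    {
        "role": "system",
        "content": (
            "You are a credit risk analyst at a retail bank. "
            "Answer concisely and justify every risk rating you assign."
        ),
    },
    {
        "role": "user",
        "content": "How can we reduce default risk in our personal loan portfolio?",
    },
]
```

The system message persists across turns, so the persona shapes every subsequent answer without being repeated in each user message.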

Why It Matters

Different people solve the same problem in very different ways depending on their training, experience, and responsibilities. By telling the AI who it is, you're shaping its perspective and priorities — just like a real professional would.

Let’s explore a universal question — “How do you make a crying child laugh?” — through three different roles:

Engineer’s Perspective

As an engineer:

I would identify patterns in why the child is crying. Then I'd design a distraction mechanism — perhaps a musical toy with motion sensors — that responds when the child cries, using feedback loops to trigger laughter.


Doctor’s Perspective

As a doctor:

First, I would rule out any physical discomfort — hunger, pain, illness, or fatigue. If those are clear, I would suggest gentle sensory engagement — soft sounds, colors, or touch — to soothe the child and elicit laughter naturally.


Mother’s Perspective

As a mother:

I’d pick the child up, hold them close, and sing their favorite song while making silly faces. Laughter often comes from connection, so I'd focus on warmth, familiarity, and affection to ease their distress.


Here is another example

Prompt: “How can we reduce default risk in our personal loan portfolio?”

As a Credit Risk Analyst:

I would segment the portfolio using credit scores, income levels, and repayment history. Then, I’d build logistic regression or machine learning models to identify high-risk segments and adjust underwriting policies accordingly. Early warning indicators can also be set using behavioral data.


As a Compliance Officer:

I'd ensure that any strategy to reduce default risk complies with regulatory frameworks like Basel III and fair lending laws. Emphasis would be on ensuring transparent risk disclosures and validating that risk-based pricing doesn't introduce bias.


 As a Data Scientist:

I'd extract historical data, engineer relevant features (e.g., credit utilization, past delinquencies), and use supervised learning techniques to build a predictive model for probability of default (PD). I’d also validate model fairness and explainability using SHAP or LIME.

What We Learn from This

Each role looks at the same question through entirely different lenses — logical design, clinical diagnosis, and emotional bonding. These differences aren't just stylistic — they affect the actual solution.

So when you're working with LLMs:

  • Clearly define the role or persona in your prompt.

  • Be intentional about how the problem should be approached.

  • Expect different answers based on professional context.


When designing prompts, who you ask the model to be matters as much as what you ask it to do.


Whether you're simulating a banking analyst, a compliance officer, or a customer support agent, defining the role helps tailor the response realistically and responsibly.




Meta Prompting (Prompting the Prompt)

Definition:
Meta prompting is a prompt engineering technique where you use one large language model (LLM) to generate, refine, or optimize prompts for another model — or even for itself. Rather than hand-crafting every instruction from scratch, you guide the LLM to assist in building prompts dynamically.

In essence: Meta prompting is a prompt engineering method that leverages large language models (LLMs) to create, adapt, or evolve prompts — based on your intent, feedback, or the complexity of the task at hand.

Traditional prompt engineering relies on manual iteration — you write a prompt, test it, refine it, repeat. But meta prompting flips the script:

You use prompts to write other prompts.

It’s a layered process, where you instruct a higher-capability LLM (e.g., GPT-4, Claude, Gemini) to help you design, optimize, or adapt instructions that can then be used with another LLM — often a lighter or more specialized model.


How It's Different from Traditional Prompting

In traditional prompt engineering, you often:

  • Write a prompt manually

  • Test the result

  • Refine it by hand

  • Hope the model “gets it” on the next try

In meta prompting, instead:

  • You start with a base prompt

  • Then ask the LLM to improve, simplify, expand, or tailor that prompt

  • You guide it with feedback — and it adjusts the instructions automatically

This iterative process helps with:

  • Complex, multi-layered tasks

  • Domain-specific instructions

  • Dynamic business workflows

  • Scalable prompt creation for complex workflows


Layman’s Example:

Imagine you’re writing an exam paper. Instead of writing each question yourself, you ask a very smart assistant:

“Write me 5 questions that will test a student’s understanding of financial regulation.”

That’s what meta prompting is — using the AI’s intelligence to build high-quality instructions for other tasks, systems, or models.

Example: Using Meta Prompting in Banking Risk

Let’s say you're building an AI assistant to help credit analysts evaluate loan applications.

Step 1 – User Prompt:

“Help me write a prompt for assessing borrower creditworthiness using income, debt-to-income ratio, and credit history.”

Step 2 – LLM-Generated Prompt (Meta Prompting Result):

“You are a credit risk analyst. Given a borrower’s financial profile including income, DTI ratio, and credit history, assess their creditworthiness. Provide a risk category (low, medium, high) with reasoning.”

Step 3 – Refinement (via Meta Prompting):

“Now rewrite the prompt to include regulatory context from Basel III.”

Updated Prompt:
“You are a Basel III-compliant credit risk analyst. Evaluate a borrower’s creditworthiness based on income, debt-to-income ratio, and credit history. Ensure your assessment aligns with Basel III credit risk guidelines and assign a risk tier.”
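The three steps above can be sketched as a small loop: one model drafts the prompt, then refines it on feedback. The function name and instruction wording are illustrative assumptions; `llm` stands for any callable that maps a prompt string to a completion string:

```python
def meta_prompt(llm, task_description: str, refinement: str = "") -> str:
    """Use one model to draft a prompt, then optionally refine it.
    `llm` is any callable mapping a prompt string to a completion string;
    the instruction wording below is illustrative, not prescriptive."""
    draft = llm("Write a clear, reusable prompt for this task:\n" + task_description)
    if refinement:
        draft = llm("Rewrite the prompt below. " + refinement + "\n\nPrompt:\n" + draft)
    return draft
```

In practice, the drafting call would go to a higher-capability model, and the resulting prompt would then be used with a lighter or more specialized one.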


Final Thoughts

These advanced prompting techniques aren’t just clever hacks — they represent a more collaborative way of working with LLMs, especially as tasks become more nuanced.

In high-stakes fields like banking, where reasoning, compliance, and communication precision are critical, leveraging Chain-of-Thought, Role-Based, and Meta Prompting can drastically improve decision-making quality and reduce the risk of error.


✍️ Author’s Note

This blog reflects the author’s personal point of view — shaped by 22+ years of industry experience, along with a deep passion for continuous learning and teaching.
The content has been phrased and structured using Generative AI tools, with the intent to make it engaging, accessible, and insightful for a broader audience.
