08 - Prompt Engineering – Part 1: Introduction to Prompt Engineering – A New Series

A Note of Thanks

First off, a heartfelt thank you to everyone who has been following my blog over the past few weeks. I'm thrilled to share that within just 7 weeks, we've crossed 1,000 readers, and the thoughtful comments I’ve received have been incredibly motivating. Your support truly fuels this journey.

This blog has unexpectedly become a learning accelerator for me. Before writing each post, I dive deep—reading not just to solve, but to truly learn. In many ways, it feels like being back in college again, where learning was the goal, not just the means.

A special mention to my daughter Veena, who creates the beautiful line art illustrations you've seen across these posts. She’s in 12th grade and an art enthusiast with her own Instagram channel. While AI isn’t her subject, our weekly 30-minute collaboration sessions have become a precious bonding ritual. It’s a reminder that when diverse minds collaborate — one rooted in technology, the other in art — the outcome is not only more creative; it also becomes a lesson in commitment, patience, and long-term thinking for both of us.


As I’ve mentioned before, I approach this space as a practitioner and engineer first. I don’t claim to have all the answers, but I’m committed to sharing what I learn along the way. I experiment with real tools, try out new techniques, and aim to deliver insights that are practical and easy to apply.

Over time, one topic has consistently come up in my conversations — whether with students, industry peers, or even friends and family — and that’s Prompt Engineering.

Initially, I hesitated to write about it. There are already some great resources available, and in many ways, learning to prompt feels more like learning to communicate than following a technical manual. Just as we learn to interact with people through experience, I believe prompt crafting evolves the same way — through trial, observation, and iteration.

But with the growing interest in AI-powered roles — often labeled as "high-paying" — many of you reached out asking for a dedicated, practical series on Prompt Engineering. And so, here we are.


A Practical Start to GenAI

In the last few posts, we began with practical introductions to Generative AI — focusing on how you can try hands-on experiments using free tools. The goal is to help you not just read about GenAI, but actually start using it in a meaningful way.

To put AI engineering in context, let’s look at three major differences between AI engineering and traditional ML engineering:

  1. Model Ownership: In ML engineering, you typically build and train your own models. In contrast, AI engineering uses large pre-trained models built by others — like OpenAI’s GPT, Google’s Gemma, or Meta’s LLaMA.

  2. Resource Usage: Foundation models are far larger than typical ML models, demanding significantly more memory, compute, and time to use and deploy effectively.

  3. Output Format: ML engineering often produces close-ended outputs (e.g., classifying spam emails), whereas AI engineering deals with open-ended outputs (e.g., generating responses, writing reports, or summarizing content).

Bottom Line: AI engineering is more about adapting and evaluating pre-trained models — not building them from scratch.

How Do We Adapt Models?

In practice, adaptation techniques fall into two broad categories based on whether they require modifying the model’s internal weights:

Prompt-based Techniques
These do not require updating the model weights. Instead, you shape the model’s output by giving carefully crafted input prompts.

  • Easier to start with
  • Requires minimal or no labeled data
  • No need for GPU-heavy training

Finetuning Techniques
These involve updating the internal weights of the model to specialize it for a domain or task.

  • Requires large datasets
  • More complex to implement
  • High resource and time cost

Because prompt engineering is simpler, faster, and ideal for experimentation, it’s where we’re starting this series.
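
To make the distinction concrete, here is a minimal sketch of prompt-based adaptation in Python, assuming the Hugging Face transformers library and an instruction-tuned Gemma checkpoint (the model ID in the code is an assumption; substitute any model you have access to):

```python
# Prompt-based adaptation: the model's weights stay frozen; we specialize
# its behavior purely through the input text. Assumes the Hugging Face
# `transformers` library; the model ID below is an assumption, not a
# requirement -- use any instruction-tuned checkpoint you can download.
from transformers import pipeline

generator = pipeline("text-generation", model="google/gemma-3-1b-it")

# The same frozen model handles two different tasks, steered by prompt alone.
prompts = [
    "Summarize in one sentence: prompt engineering shapes a model's output "
    "without retraining it.",
    "Classify the sentiment (positive or negative): 'This series is "
    "wonderfully practical.'",
]

for prompt in prompts:
    result = generator(prompt, max_new_tokens=60)
    print(result[0]["generated_text"])
```

Finetuning, by contrast, would wrap a dataset and a training loop around the same model to change its weights, which is exactly why it carries the data, compute, and engineering costs listed above.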

In the coming posts, we’ll explore prompting techniques in depth — from zero-shot to few-shot strategies. Later, we’ll return with a dedicated series on finetuning and adaptation at scale.


Why Prompts Matter More Than Ever

As powerful as large language models (LLMs) are, they still can’t read your mind. They don’t “understand” your intentions the way a human might. Instead, they rely entirely on how you communicate with them — and that’s exactly where Prompt Engineering comes into play.

We’re living in a time where LLMs like GPT, Claude, Gemini, and Gemma are evolving at lightning speed — becoming faster, cheaper, and more capable with each iteration. However, the one thing that hasn't changed is this:

The quality of your output still depends on the quality of your input.

Think of prompts like instructions to a highly skilled but literal-minded assistant. If you're vague, unclear, or overly broad, the response will reflect that. But if you give structure, context, and clarity, you'll unlock the full power of the model.

Here’s why this matters:

  • LLMs don’t assume anything — if you don’t tell them what tone, format, or context to use, they’ll guess. Sometimes they guess well. Often, they don’t.

  • Your prompt is the interface. It’s your API to the model. Just as a bad UI confuses users, a poor prompt confuses the model.

  • It’s not magic, it’s communication. Many people expect LLMs to behave like sci-fi AI — intuitive, seamless, and emotionally intelligent. In reality, they need well-formed, intentional cues to perform at their best.

  • Prompt Engineering puts you in control. It transforms you from a passive user into an active designer of the conversation.

Think of it like this:

If you ask an LLM,

“Tell me about risk,”
you might get something vague or generic.

But if you instead say,

“You are a senior banking risk analyst. Explain credit risk under Basel III norms in 5 bullet points for a non-technical stakeholder,”
you’ll get a response that is specific, structured, and directly useful.
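
If you want to reproduce this contrast in code rather than a chat window, a minimal sketch follows, assuming the Hugging Face transformers library (the model ID in the code is an illustrative assumption):

```python
# Send a vague prompt and a structured one to the same model and compare.
# Assumes the `transformers` library; the model ID is an illustrative
# choice rather than a requirement.
from transformers import pipeline

chat = pipeline("text-generation", model="google/gemma-3-1b-it")

vague = "Tell me about risk."
structured = (
    "You are a senior banking risk analyst. Explain credit risk under "
    "Basel III norms in 5 bullet points for a non-technical stakeholder."
)

for prompt in (vague, structured):
    messages = [{"role": "user", "content": prompt}]
    # The pipeline returns the whole conversation; the last message is
    # the model's reply.
    reply = chat(messages, max_new_tokens=300)[0]["generated_text"][-1]
    print(f"Prompt: {prompt}\nResponse: {reply['content']}\n")
```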

In short, Prompt Engineering is the key to unlocking precision, clarity, and creativity from these powerful models. As we move forward in this series, we’ll learn how to write better prompts — not just to get good answers, but to create systems, assistants, and tools powered by AI that are actually usable in real-world workflows.

Whether you’re a developer, analyst, educator, or business user — learning to prompt well is becoming an essential skill.




Let’s look at a quick example using the Gemma 1B model:


Prompt:
What is an instrument?
Gemma 1B Response:
“An instrument is a device or tool designed to produce sound, create movement, or measure something. It’s often used to achieve a specific goal. These can range from musical instruments to scientific measuring devices. They serve as a means of action or observation. Essentially, it’s something that does something.”

Okay, not bad — but what if we meant financial instrument? Let’s try again:

Prompt:
I was asking about financial instruments. Can you provide details in 5 lines?
Gemma 1B Response:

  1. Definition: Financial instruments are contracts that represent a financial claim on an underlying asset.

  2. Types: They encompass a wide range, including stocks, bonds, mutual funds, derivatives, and more.

  3. Purpose: They are used for investment, hedging risk, or generating income.

  4. Risk & Return: Each instrument carries varying levels of risk and potential reward – stocks can fluctuate significantly, while bonds are generally less volatile.

  5. Regulation: Financial instruments are heavily regulated to protect investors and maintain market stability.


This is exactly why prompts matter:

The first answer was totally valid — but not what we intended. Once we added context (“financial instrument”), the response became specific, relevant, and insightful.

So, yes — Prompt Engineering = Context + Clarity + Creativity.

And it can make the difference between a generic reply and a goldmine of insight.
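
Under the hood, the follow-up worked because the chat history travels with every request: the phrase “financial instruments” arrived together with the earlier exchange. Here is a hedged sketch of the same two-turn conversation, under the same library and model-ID assumptions as before:

```python
# Why the follow-up worked: chat interfaces resend the running history with
# every request, so the second question arrives with the first exchange as
# context. Sketch under the same `transformers` and model-ID assumptions.
from transformers import pipeline

chat = pipeline("text-generation", model="google/gemma-3-1b-it")

messages = [{"role": "user", "content": "What is an instrument?"}]
first_reply = chat(messages, max_new_tokens=150)[0]["generated_text"][-1]
messages.append(first_reply)  # keep the model's first answer in the history

messages.append({
    "role": "user",
    "content": "I was asking about financial instruments. "
               "Can you provide details in 5 lines?",
})
second_reply = chat(messages, max_new_tokens=200)[0]["generated_text"][-1]
print(second_reply["content"])
```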



What is Prompt Engineering? (Gemma 1B Responds)

Here’s how the Gemma 1B model defines Prompt Engineering in 10 lines:

Prompt Engineering is the art and science of crafting effective instructions — “prompts” — for AI models like LLMs to get desired results.
It’s not just about asking a question; it’s about structuring the input to guide the AI.
Key techniques include: providing context, specifying format (e.g., poem, code), and using clear, unambiguous language.
Iterative refinement is essential — you test, tweak, and improve prompts for better results.
Use role-playing to simulate personas (e.g., “Act like a software architect…”).
Leverage few-shot learning by showing examples.
Try chain-of-thought prompting to guide step-by-step reasoning.
Practice zero-shot prompting when no examples are available.
Enhance contextual awareness by giving relevant background info.
It’s a core skill for unlocking the full power of AI tools.
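
Several of the techniques Gemma names, few-shot learning in particular, come down to how you assemble the prompt string itself. As a small preview of later posts, here is a hedged sketch of few-shot prompting; the tickets and labels are made up purely for illustration:

```python
# Few-shot prompting: show the model worked examples of the task inside the
# prompt itself, then pose the real query. The tickets and labels here are
# invented for illustration; same `transformers` and model-ID assumptions.
from transformers import pipeline

few_shot_prompt = """Classify each support ticket as BILLING, TECHNICAL, or OTHER.

Ticket: "I was charged twice this month."
Label: BILLING

Ticket: "The app crashes when I upload a file."
Label: TECHNICAL

Ticket: "My invoice shows the wrong company name."
Label:"""

chat = pipeline("text-generation", model="google/gemma-3-1b-it")
# Prints the prompt plus the model's continuation, which the examples steer
# toward a single label (here, BILLING).
print(chat(few_shot_prompt, max_new_tokens=5)[0]["generated_text"])
```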


How a Good Prompt Changes Everything – In 20 Points

— as answered by Gemma 1B

To demonstrate how impactful prompt engineering can be, I asked Gemma 1B the following question:

Prompt:
How can a good prompt change the outcome? Explain in 20 lines.

Here’s what Gemma 1B had to say:

  1. Initial Response is Key: The initial prompt sets the stage – the “temperature” of the AI’s creativity.

  2. Precision Matters: A vague prompt yields vague results. Specific language is crucial.

  3. Contextualization Drives Quality: Providing background information dramatically improves relevance.

  4. Role-Playing Enhances Accuracy: Assigning a persona – “as a chef…” – improves output quality.

  5. Constraint Definition: Setting limits – “don’t use clichés” – steers the AI away from undesirable outputs.

  6. Few-Shot Learning Amplifies: Providing a few examples dramatically increases accuracy.

  7. Chain-of-Thought Improves Reasoning: Asking the AI to explain its logic improves the final answer.

  8. Iterative Refinement = Better Results: Small tweaks to the prompt, repeated, yield progressively better results.

  9. Temperature Control (Creative Output): Lower temperatures produce more predictable, focused results; higher temperatures invite more creative variation.

  10. Top-P Filtering (Focused Output): Restricts sampling to the most probable tokens, keeping output focused and reducing ambiguity.

  11. Negative Constraints: Explicitly telling the AI what not to do (e.g., "don’t use jargon") reduces errors.

  12. Semantic Clarity: Using natural language – avoiding overly technical jargon – boosts comprehension.

  13. Prompt Length – Less is often Better: Concise prompts often yield better results.

  14. Testing Different Phrasing: Slight word changes can transform output quality.

  15. Understanding AI Bias: Prompting can influence the AI’s inherent biases.

  16. Meta-Prompting: Asking the AI to think about its thinking – improves output quality.

  17. Feedback Loops – Continuous Improvement: Analyzing responses and adjusting prompts is vital.

  18. Multi-Turn Dialogue Enhances Learning: Iterative interaction helps refine responses.

  19. Task-Specific Language Helps: Tailoring language to the task improves alignment.

  20. Prompt Engineering = Unlocking Potential: The right prompt brings out the best from the model.


This example from Gemma 1B reinforces the idea: a good prompt isn’t just helpful — it’s essential.
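
One clarification on points 9 and 10: temperature and top-p are not wording choices inside the prompt but sampling parameters you set on the request. A minimal sketch of how they are passed, under the same assumptions as the earlier snippets:

```python
# Temperature and top-p are decoding parameters set on the request, not
# phrases inside the prompt. Sketch under the same `transformers` and
# model-ID assumptions as the earlier examples.
from transformers import pipeline

chat = pipeline("text-generation", model="google/gemma-3-1b-it")
prompt = "Suggest a name for a personal finance app."

# Low temperature: sharper token distribution, more predictable output.
print(chat(prompt, do_sample=True, temperature=0.2, top_p=0.9,
           max_new_tokens=30)[0]["generated_text"])

# High temperature: flatter distribution, more varied (creative) output.
print(chat(prompt, do_sample=True, temperature=1.2, top_p=0.9,
           max_new_tokens=30)[0]["generated_text"])
```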



What’s Next in This Series?

Some of you may be wondering — “Is this blog just going to be AI-generated now?” Don’t worry! I’ll be mixing things up.

In the upcoming posts, I’ll go deeper into:

  • Different prompting strategies

  • Real-world examples and case studies

  • Industry-grade prompt templates

  • Tools and resources for Prompt Engineers

  • Use cases in data analysis, creative writing, business workflows, and more

I’m not in a rush. I want to take time to build a solid foundation — and help you do the same.

So stay tuned. More posts are coming — practical, experimental, and always with the spirit of learning and sharing. 



✍️ Author’s Note

This blog reflects the author’s personal point of view — shaped by 22+ years of industry experience, along with a deep passion for continuous learning and teaching.
The content has been phrased and structured using Generative AI tools, with the intent to make it engaging, accessible, and insightful for a broader audience.
