Mastering Prompt Templates: System, User, and Assistant Roles with LangChain

Abstract Algorithms · 6 min read

TL;DR

A prompt isn't just a single string of text. Modern chat models (like GPT-4) expect a structured list of messages: the System message sets the behavior, the User message provides the input, and Assistant messages store the history. Tools like LangChain help manage this structure cleanly.


1. The Anatomy of a Prompt (The "No-Jargon" Explanation)

Imagine a Theater Play.

  1. System Message (The Director's Note): Before the play starts, the director tells the actor: "You are a grumpy pirate. You hate technology. Speak only in rhymes." The audience never hears this, but it controls the actor's entire personality.
  2. User Message (The Audience): Someone from the audience shouts: "How do I fix my iPhone?"
  3. Assistant Message (The Actor): The actor responds based on the Director's note: "A glowing brick of glass and wire? Toss it in the fire, sire!"

In the old days (GPT-3), we just pasted everything into one big text block. Now (Chat Models), we explicitly separate these roles.
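The theater analogy maps directly onto the message list a chat API accepts. Here is the pirate example as raw message dictionaries, following the common OpenAI-style `role`/`content` convention:

```python
# The theater analogy as a raw chat message list.
# Each message is a dict with a "role" and "content" key
# (the OpenAI-style convention most chat APIs follow).
messages = [
    # Director's note: the audience never sees it, but it shapes every reply
    {"role": "system", "content": "You are a grumpy pirate. You hate technology. Speak only in rhymes."},
    # The audience member's question
    {"role": "user", "content": "How do I fix my iPhone?"},
    # The actor's in-character reply (kept so the model remembers it next turn)
    {"role": "assistant", "content": "A glowing brick of glass and wire? Toss it in the fire, sire!"},
]
```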


2. The Three Roles Explained

A. System Message

  • Purpose: Defines the Persona, Constraints, and Tone.
  • Persistence: It is usually sent with every request so the model doesn't forget who it is.
  • Example: "You are a helpful SQL assistant. Output ONLY valid SQL code. Do not explain."

B. User Message

  • Purpose: The dynamic input from the human.
  • Example: "Show me all users who signed up yesterday."

C. Assistant Message (AI)

  • Purpose: The model's previous replies. Used for Context/Memory.
  • Example: "SELECT * FROM users WHERE signup_date = '2023-10-27';"
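Because the model itself is stateless, "memory" is just the assistant's earlier replies being re-sent on the next request. A sketch using the SQL assistant above (the appended replies are illustrative):

```python
# Turn 1: system persona + first user question
history = [
    {"role": "system", "content": "You are a helpful SQL assistant. Output ONLY valid SQL code. Do not explain."},
    {"role": "user", "content": "Show me all users who signed up yesterday."},
]

# The model's reply gets appended as an assistant message...
history.append({"role": "assistant", "content": "SELECT * FROM users WHERE signup_date = '2023-10-27';"})

# ...so a follow-up question can refer back to it.
history.append({"role": "user", "content": "Now only show their emails."})

# The whole list is re-sent on every request -- that IS the memory.
```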

3. Deep Dive: Implementing with LangChain

Hardcoding strings is messy. LangChain provides a structured way to build these prompts using ChatPromptTemplate.

Toy Scenario: A History Tutor. We want a bot that teaches history but speaks like a Gen-Z teenager.

The Structure:

  1. System: "You are a history teacher. Speak using Gen-Z slang (no cap, bet, sus)."
  2. User: "Tell me about Napoleon."

The Code (Python + LangChain):

from langchain_core.prompts import ChatPromptTemplate

# 1. Define the Template
template = ChatPromptTemplate.from_messages([
    ("system", "You are a history teacher. Speak using Gen-Z slang."),
    ("user", "{topic}")
])

# 2. Format the Prompt (Fill in the blanks)
prompt_value = template.invoke({"topic": "Tell me about Napoleon"})

# 3. What the LLM actually sees (The "Black Box" Revealed)
# [
#   SystemMessage(content="You are a history teacher. Speak using Gen-Z slang."),
#   HumanMessage(content="Tell me about Napoleon")
# ]

Why use Templates?

  • Reusability: You can swap {topic} for anything.
  • Safety: Variables are filled in as plain text at runtime, so special characters in user input can't break the template structure.
  • Chaining: You can easily pipe this into an LLM chain.
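Conceptually, a chat prompt template is just a list of (role, template-string) pairs whose placeholders get filled at invoke time. A stdlib-only sketch of that idea (not LangChain's actual implementation):

```python
def format_chat_prompt(message_templates, **variables):
    """Fill {placeholders} in each (role, template) pair.

    A stdlib-only sketch of what ChatPromptTemplate.invoke does
    conceptually -- not LangChain's real implementation.
    """
    return [
        {"role": role, "content": template.format(**variables)}
        for role, template in message_templates
    ]

template = [
    ("system", "You are a history teacher. Speak using Gen-Z slang."),
    ("user", "{topic}"),
]

# Same {topic} slot, swappable input -- that's the reusability win.
prompt = format_chat_prompt(template, topic="Tell me about Napoleon")
```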

4. Advanced Techniques: Hallucination, Logits, CoT, and RAG

A. Inducing Hallucination (For Creativity)

Sometimes you want the model to lie (e.g., writing a fantasy story).

  • Technique: Increase Temperature (> 1.0) and remove constraints. (See our LLM Hyperparameters Guide for more on Temperature).
  • Prompt Example:
    [
      {"role": "system", "content": "You are a surrealist poet. Ignore physics and logic."},
      {"role": "user", "content": "Describe the taste of the color blue."}
    ]
    
  • Result: "It tastes like cold electricity and forgotten Tuesdays." (Pure hallucination).
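In an OpenAI-style request, temperature rides alongside the messages in the request body. A sketch of that payload (the model name and value 1.3 are illustrative):

```python
# Sketch of an OpenAI-style request body for "creative" output.
# temperature > 1.0 flattens the token distribution, making
# low-probability (surreal) continuations more likely.
request_body = {
    "model": "gpt-4",
    "temperature": 1.3,  # > 1.0: more random, more "hallucinatory"
    "messages": [
        {"role": "system", "content": "You are a surrealist poet. Ignore physics and logic."},
        {"role": "user", "content": "Describe the taste of the color blue."},
    ],
}
```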

B. Deterministic Output (Logit Bias)

If you need the model to output exactly "Yes" or "No" and nothing else.

  • Technique: Use logit_bias, which adds a bias (from -100 to 100) to the logits of specific token IDs. A bias of +100 makes a token (like the one for "Yes") virtually guaranteed to be selectable; -100 effectively bans it.
  • Prompt Example:
    [
      {"role": "system", "content": "You are a binary classifier. Output ONLY 'Yes' or 'No'."},
      {"role": "user", "content": "Is the sky blue?"}
    ]
    
  • Usage: Classification tasks where you parse the output programmatically.
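In the OpenAI API, logit_bias is a request parameter mapping token IDs to bias values. The token IDs below are placeholders, not real IDs (look up the real ones with a tokenizer such as tiktoken for your model):

```python
# Placeholder token IDs -- real IDs depend on the model's tokenizer.
YES_TOKEN_ID = 9891  # hypothetical
NO_TOKEN_ID = 2360   # hypothetical

request_body = {
    "model": "gpt-4",
    "messages": [
        {"role": "system", "content": "You are a binary classifier. Output ONLY 'Yes' or 'No'."},
        {"role": "user", "content": "Is the sky blue?"},
    ],
    # +100 bias makes these two tokens overwhelmingly likely;
    # max_tokens=1 cuts the reply off after a single token.
    "logit_bias": {YES_TOKEN_ID: 100, NO_TOKEN_ID: 100},
    "max_tokens": 1,
}
```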

C. Chain of Thought (CoT)

Forcing the model to "show its work" improves accuracy on math/logic. (Learn more in our Prompt Engineering Guide).

  • Standard Prompt:
    [{"role": "user", "content": "If I have 3 apples and eat 1, how many do I have?"}]
    
    Output: "2" (Might fail on complex math).
  • CoT Prompt:
    [
      {"role": "system", "content": "You are a math tutor. Always think step-by-step before answering."},
      {"role": "user", "content": "If I have 3 apples and eat 1, how many do I have?"}
    ]
    
    Output: "Step 1: You start with 3 apples. Step 2: You eat 1 apple. Step 3: 3 - 1 = 2. Answer: 2."
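Once the model shows its work, the reasoning has to be stripped back off before the answer can be used programmatically. A simple regex extractor, assuming the reply ends with the "Answer: ..." convention shown above (a convention you instruct, not something the model guarantees):

```python
import re

def extract_final_answer(cot_output):
    """Pull the value after 'Answer:' from a chain-of-thought reply.

    Assumes the prompt told the model to end with 'Answer: ...' --
    a convention, not a guarantee.
    """
    match = re.search(r"Answer:\s*(.+?)\s*$", cot_output)
    return match.group(1) if match else None

reply = "Step 1: You start with 3 apples. Step 2: You eat 1 apple. Step 3: 3 - 1 = 2. Answer: 2"
print(extract_final_answer(reply))  # -> 2
```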

D. Injecting Context (RAG)

When you need the model to answer based on your data (not its training data), you inject retrieved text into the prompt. (Check out our full guide on RAG).

  • Technique: Retrieve relevant documents from a database and insert them into the System or User message.
  • Prompt Example:
    [
      {
        "role": "system", 
        "content": "You are a helpful assistant. Answer the user's question using ONLY the following context: === CONTEXT START === - Our refund policy allows returns within 30 days. - Shipping is free for orders over $50. === CONTEXT END ==="
      },
      {
        "role": "user", 
        "content": "Can I return my shirt after 45 days?"
      }
    ]
    
  • Result: "No, our policy only allows returns within 30 days." (Grounded answer).
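The injection step itself is plain string formatting: fetch the relevant snippets, then splice them into the system message. Here the retriever is a hardcoded stand-in for a real vector-store lookup:

```python
# Stand-in for a real vector-store lookup -- in production this would
# be a similarity search over your document embeddings.
def retrieve(query):
    return [
        "Our refund policy allows returns within 30 days.",
        "Shipping is free for orders over $50.",
    ]

SYSTEM_TEMPLATE = (
    "You are a helpful assistant. Answer the user's question using ONLY "
    "the following context: === CONTEXT START === {context} === CONTEXT END ==="
)

question = "Can I return my shirt after 45 days?"
docs = retrieve(question)
messages = [
    {"role": "system", "content": SYSTEM_TEMPLATE.format(context=" - ".join(docs))},
    {"role": "user", "content": question},
]
```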

5. Advanced Technique: Few-Shot Prompting with Templates

You can combine roles to provide examples (Few-Shot) inside the chat history. This "tricks" the model into thinking it has already answered similar questions correctly.

The Logic:

  1. System: "Translate to French."
  2. User (Example 1): "Hello"
  3. Assistant (Example 1): "Bonjour"
  4. User (Real Input): "Goodbye"

LangChain Implementation:

from langchain_core.prompts import (
    ChatPromptTemplate,
    SystemMessagePromptTemplate,
    HumanMessagePromptTemplate,
    AIMessagePromptTemplate
)

template = ChatPromptTemplate.from_messages([
    SystemMessagePromptTemplate.from_template("You are a translator."),
    HumanMessagePromptTemplate.from_template("Hello"),
    AIMessagePromptTemplate.from_template("Bonjour"),
    HumanMessagePromptTemplate.from_template("{text}"),
])

# Result: The model sees the pattern and is highly likely to output "Au revoir"

6. Real-World Application: Customer Support Bot

  • Goal: A bot that answers shipping questions but never promises a refund (only a human can do that).
  • System Prompt: "You are a support agent. Answer shipping queries based on the FAQ. IMPORTANT: If a user asks for a refund, politely decline and say 'I cannot process refunds, please contact billing@example.com'. Do not hallucinate policies."
  • User Prompt: "My package is late! I want my money back!"

The Final Prompt (What the LLM sees):

[
  {
    "role": "system",
    "content": "You are a support agent. Answer shipping queries based on the FAQ. IMPORTANT: If a user asks for a refund, politely decline and say 'I cannot process refunds, please contact billing@example.com'. Do not hallucinate policies."
  },
  {
    "role": "user",
    "content": "My package is late! I want my money back!"
  }
]

The Output:

"I apologize for the delay in your package. However, I cannot process refunds directly. Please contact billing@example.com for assistance with your refund request."
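A system prompt reduces but doesn't eliminate the risk of the bot promising a refund anyway, so production bots often add a post-check on the model's output. A minimal keyword-based guard (a naive sketch; real systems often use a classifier):

```python
# Phrases that would indicate the bot promised a refund.
# A naive keyword check -- real guards often use a classifier.
FORBIDDEN_PHRASES = [
    "refund has been processed",
    "i have refunded",
    "your refund is approved",
]

FALLBACK = "I cannot process refunds, please contact billing@example.com."

def guard_reply(reply):
    """Replace any reply that promises a refund with the safe fallback."""
    lowered = reply.lower()
    if any(phrase in lowered for phrase in FORBIDDEN_PHRASES):
        return FALLBACK
    return reply
```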


Summary & Key Takeaways

  • System Message: The "God Mode" instruction that sets behavior.
  • User/Assistant Messages: The conversation history.
  • LangChain: Use ChatPromptTemplate to manage dynamic variables and structure.
  • Few-Shot: Fake a history of correct answers to guide the model.
  • CoT: Use "Think step by step" for logic.
  • RAG: Inject external data into the prompt context.

Practice Quiz: Test Your Skills

  1. Scenario: You want your chatbot to always reply in JSON format. Where is the best place to put this instruction?

    • A) In the User Message every time.
    • B) In the System Message.
    • C) In the Assistant Message.
  2. Scenario: In LangChain, what is the purpose of {variable} inside a prompt string?

    • A) It is a comment.
    • B) It is a placeholder that gets replaced with real data at runtime.
    • C) It tells the LLM to ignore that word.
  3. Scenario: Why do we include AIMessage (Assistant) examples in a Few-Shot prompt?

    • A) To show the model the expected output format for a given input.
    • B) To confuse the model.
    • C) To save tokens.

(Answers: 1-B, 2-B, 3-A)

Written by Abstract Algorithms (@abstractalgorithms)