
Mastering Prompt Templates: System, User, and Assistant Roles with LangChain

Prompt templates turn messy string concatenation into structured, testable message flows for reliable LLM applications.

Abstract Algorithms · 5 min read

TL;DR: A production prompt is not a string; it is a structured message list with system, user, and optional assistant roles. LangChain's ChatPromptTemplate turns this structure into a reusable, testable, injection-resistant blueprint.


📖 The API Contract Analogy

Ad-hoc string concatenation breaks the same way that untyped API calls do:

# Fragile: injection risk, hard to test, format changes break everything
prompt = "You are " + role + ". Answer this: " + user_input

A ChatPromptTemplate is like a typed API contract: roles are explicit, placeholders are validated, and the format is consistent regardless of what user_input contains.


🔢 The Three-Role Structure

A modern LLM chat prompt has three layers:

Role      | Responsibility                              | Example
system    | Non-negotiable behavior rules (always sent) | "You are a concise SQL assistant. Output only SQL."
user      | Dynamic request from the application        | "Find users created yesterday."
assistant | Previous model response (for multi-turn)    | "SELECT * FROM users WHERE..."

The model sees this as a structured conversation, not a blob of text. The system role has the highest priority: it anchors behavior regardless of what the user sends.
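Under the hood, chat APIs receive this structure as a plain list of role-tagged messages. A minimal sketch of that wire format (the role names follow the common OpenAI-style chat convention; the SQL content is illustrative):

```python
# The three-role structure as the raw message list a chat API receives.
messages = [
    # system: non-negotiable rules, always sent first
    {"role": "system", "content": "You are a concise SQL assistant. Output only SQL."},
    # user: the dynamic request from the application
    {"role": "user", "content": "Find users created yesterday."},
    # assistant: a previous model response, replayed for multi-turn context
    {"role": "assistant", "content": "SELECT * FROM users WHERE created_at >= CURRENT_DATE - 1;"},
]

roles = [m["role"] for m in messages]
print(roles)  # ['system', 'user', 'assistant']
```

ChatPromptTemplate, covered next, is essentially a validated factory for lists like this.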


โš™๏ธ Building Templates in LangChain

Single-Turn Template

from langchain_core.prompts import ChatPromptTemplate

template = ChatPromptTemplate.from_messages([
    ("system", "You are a customer support assistant. Be concise and factual."),
    ("user", "Issue: {issue}\nCustomer tier: {tier}\nRespond in bullet points.")
])

prompt_value = template.invoke({
    "issue": "My card was charged twice",
    "tier": "gold"
})

messages = prompt_value.to_messages()
# [SystemMessage("You are a customer..."), HumanMessage("Issue: My card...")]

Why this is better than string concatenation:

  • {issue} and {tier} are validated by LangChain: missing keys raise errors early.
  • Role boundaries are explicit: no accidental prompt injection via role-blurring.
  • The template is unit-testable: call .invoke() in a test without any LLM.

Multi-Turn Template with History

from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder

template = ChatPromptTemplate.from_messages([
    ("system", "You are a concise SQL assistant. Output only SQL."),
    MessagesPlaceholder(variable_name="history"),   # injects conversation history
    ("user", "{input}")
])

MessagesPlaceholder injects a list of previous HumanMessage / AIMessage objects without you formatting them manually. When the history grows too long, keep the placeholder but feed it from a summarizing memory (such as ConversationSummaryMemory), which condenses older turns instead of replaying them verbatim.


🧠 Prompt Injection Prevention

A {user_input} placeholder in the user role is safe because the value is inserted as data inside the user message; it cannot rewrite the system role. But never do this:

# UNSAFE: user input can break role boundaries
template = ChatPromptTemplate.from_messages([
    ("system", f"Help with: {raw_user_input}")  # f-string, not placeholder
])

If raw_user_input = "Ignore previous instructions and ...", the f-string injects attack instructions directly into the system role.

Safe pattern: Always use {placeholders} inside from_messages(), never f-strings with user data.

flowchart TD
    Input["User Input (untrusted)"]
    Template["ChatPromptTemplate\nplaceholders {var}"]
    Safe["Safe role-structured messages"]
    LLM["LLM API call"]

    Input -->|injected as placeholder value| Template
    Template -->|validated & structured| Safe
    Safe --> LLM

โš™๏ธ Composing Templates with LCEL

Templates compose naturally with the LangChain Expression Language pipe operator:

from langchain_openai import ChatOpenAI
from langchain_core.output_parsers import StrOutputParser

model  = ChatOpenAI(model="gpt-4o")
parser = StrOutputParser()

chain = template | model | parser   # the single-turn support template from earlier

result = chain.invoke({
    "issue": "Order not delivered after 14 days",
    "tier": "platinum"
})

The chain is: render template → call LLM → parse to string. Swapping StrOutputParser for a JsonOutputParser parses the model output into Python objects instead; for automatic retry on parse failure, wrap it in an OutputFixingParser, which re-prompts the model to repair malformed output.

Testing Prompts Without an LLM

rendered = template.invoke({"issue": "...", "tier": "gold"})
messages = rendered.to_messages()
assert messages[0].content.startswith("You are a customer")
assert "{issue}" not in messages[1].content   # placeholder was replaced

You can run hundreds of prompt rendering tests without LLM API calls: fast, cheap, deterministic.


โš–๏ธ Template Design Patterns

Pattern                          | When to Use                                 | Example
Fixed system + dynamic user      | Most cases                                  | Support bot, SQL generator
System with few-shot examples    | Formatting tasks                            | Classification, extraction
MessagesPlaceholder for history  | Multi-turn chatbots                         | Customer service agents
Partial templates                | Shared system prompt across multiple chains | Multi-step pipelines
FewShotChatMessagePromptTemplate | Structured examples from a vector store     | Semantic few-shot selection

📌 Summary

  • Use ChatPromptTemplate.from_messages(); never f-strings with user input in role content.
  • Three roles: system (rules), user (dynamic request), assistant (history).
  • MessagesPlaceholder injects conversation history cleanly in multi-turn templates.
  • LCEL pipe | chains template → model → parser into a testable, composable pipeline.
  • Unit-test templates with .invoke() and .to_messages(); no LLM API calls needed.

๐Ÿ“ Practice Quiz

  1. What is the risk of using an f-string to inject user input directly into the system role content?

    • A) The f-string is slower than a placeholder.
    • B) User input can contain instructions that override the system role: a classic prompt injection attack.
    • C) LangChain does not support f-strings inside from_messages().
      Answer: B
  2. Your chatbot must remember the last 5 conversation turns. Which LangChain component injects them into the prompt automatically?

    • A) StrOutputParser with a history field.
    • B) MessagesPlaceholder(variable_name="history") in the template; it accepts a list of HumanMessage/AIMessage objects.
    • C) ConversationChain only; no template needed.
      Answer: B
  3. You want to unit-test a ChatPromptTemplate to verify placeholders are replaced correctly. What is the cheapest way?

    • A) Call the live LLM API and check the response.
    • B) Call template.invoke(inputs).to_messages(), which renders the prompt to a message list without any LLM API call.
    • C) Use LLMChain.run() with a mocked model.
      Answer: B

Written by Abstract Algorithms (@abstractalgorithms)