Mastering Prompt Templates: System, User, and Assistant Roles with LangChain
Prompt templates turn messy string concatenation into structured, testable message flows for reliable LLM applications.
TLDR: A production prompt is not a string; it is a structured message list with system, user, and optional assistant roles. LangChain's ChatPromptTemplate turns this structure into a reusable, testable, injection-safe blueprint.
The API Contract Analogy
Ad-hoc string concatenation breaks the same way that untyped API calls do:
# Fragile: injection risk, hard to test, format changes break everything
prompt = "You are " + role + ". Answer this: " + user_input
A ChatPromptTemplate is like a typed API contract: roles are explicit, placeholders are validated, and the format is consistent regardless of what user_input contains.
๐ข The Three-Role Structure
A modern LLM chat prompt has three layers:
| Role | Responsibility | Example |
| --- | --- | --- |
| `system` | Non-negotiable behavior rules (always sent) | "You are a concise SQL assistant. Output only SQL." |
| `user` | Dynamic request from the application | "Find users created yesterday." |
| `assistant` | Previous model response (for multi-turn) | "SELECT * FROM users WHERE..." |
The model sees this as a structured conversation, not a blob of text. The system role has the highest priority: it anchors behavior regardless of what the user sends.
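Concretely, the three layers above arrive at the model as a role-tagged message list. Here is an illustrative sketch in the OpenAI-style dict format (the exact wire format varies by provider):

```python
# Illustrative only: the role-tagged message list a chat model receives.
messages = [
    {"role": "system", "content": "You are a concise SQL assistant. Output only SQL."},
    {"role": "user", "content": "Find all users."},
    {"role": "assistant", "content": "SELECT * FROM users;"},
    {"role": "user", "content": "Now only the ones created yesterday."},
]

# The system message comes first and applies to every subsequent turn.
roles = [m["role"] for m in messages]
```

Note how the previous model reply is replayed under the assistant role, which is exactly what a multi-turn template automates for you.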
Building Templates in LangChain
Single-Turn Template
from langchain_core.prompts import ChatPromptTemplate
template = ChatPromptTemplate.from_messages([
("system", "You are a customer support assistant. Be concise and factual."),
("user", "Issue: {issue}\nCustomer tier: {tier}\nRespond in bullet points.")
])
prompt_value = template.invoke({
"issue": "My card was charged twice",
"tier": "gold"
})
messages = prompt_value.to_messages()
# [SystemMessage("You are a customer..."), HumanMessage("Issue: My card...")]
Why this is better than string concatenation:
- `{issue}` and `{tier}` are validated by LangChain: missing keys raise errors early.
- Role boundaries are explicit: no accidental prompt injection via role-blurring.
- The template is unit-testable: call `.invoke()` in a test without any LLM.
Multi-Turn Template with History
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
template = ChatPromptTemplate.from_messages([
("system", "You are a concise SQL assistant. Output only SQL."),
MessagesPlaceholder(variable_name="history"), # injects conversation history
("user", "{input}")
])
MessagesPlaceholder injects a list of previous HumanMessage / AIMessage objects without you manually formatting them. When the history grows too long, keep the MessagesPlaceholder but feed it condensed history, for example via a summarizing memory such as ConversationSummaryMemory, which summarizes older turns instead of replaying them verbatim.
Prompt Injection Prevention
A {user_input} placeholder in the user role is structurally safe: LangChain substitutes the value as data inside the user message, so it cannot rewrite the system role. But never do this:
# UNSAFE: user input can break role boundaries
template = ChatPromptTemplate.from_messages([
("system", f"Help with: {raw_user_input}") # f-string, not placeholder
])
If raw_user_input = "Ignore previous instructions and ...", the f-string injects attack instructions directly into the system role.
Safe pattern: Always use {placeholders} inside from_messages(), never f-strings with user data.
flowchart TD
Input["User Input (untrusted)"]
Template["ChatPromptTemplate\nplaceholders {var}"]
Safe["Safe role-structured messages"]
LLM["LLM API call"]
Input -->|injected as placeholder value| Template
Template -->|validated & structured| Safe
Safe --> LLM
Composing Templates with LCEL
Templates compose naturally with the LangChain Expression Language pipe operator:
from langchain_openai import ChatOpenAI
from langchain_core.output_parsers import StrOutputParser
model = ChatOpenAI(model="gpt-4o")
parser = StrOutputParser()
chain = template | model | parser
result = chain.invoke({
"issue": "Order not delivered after 14 days",
"tier": "platinum"
})
The chain is: render template → call LLM → parse to string. Swapping StrOutputParser for JsonOutputParser parses the output as structured JSON instead; note that a malformed response raises an OutputParserException rather than retrying automatically, so add a retry layer (for example, the chain's .with_retry()) if you need recovery from parse failures.
Testing Prompts Without an LLM
rendered = template.invoke({"issue": "...", "tier": "gold"})
messages = rendered.to_messages()
assert messages[0].content.startswith("You are a customer")
assert "{issue}" not in messages[1].content # placeholder was replaced
You can run hundreds of prompt rendering tests without LLM API calls: fast, cheap, deterministic.
Template Design Patterns
| Pattern | When to Use | Example |
| Fixed system + dynamic user | Most cases | Support bot, SQL generator |
| System with few-shot examples | Formatting tasks | Classification, extraction |
| MessagesPlaceholder for history | Multi-turn chatbots | Customer service agents |
| Partial templates | Shared system prompt across multiple chains | Multi-step pipelines |
| FewShotChatMessagePromptTemplate | Need structured examples from a vector store | Semantic few-shot selection |
Summary
- Use `ChatPromptTemplate.from_messages()`; never f-strings with user input in role content.
- Three roles: `system` (rules), `user` (dynamic request), `assistant` (history).
- `MessagesPlaceholder` injects conversation history cleanly in multi-turn templates.
- The LCEL pipe `|` chains template → model → parser into a testable, composable pipeline.
- Unit-test templates with `.invoke()` and `.to_messages()`: no LLM API calls needed.
Practice Quiz
What is the risk of using an f-string to inject user input directly into the `system` role content?

- A) The f-string is slower than a placeholder.
- B) User input can contain instructions that override the system role: a classic prompt injection attack.
- C) LangChain does not support f-strings inside `from_messages()`.

Answer: B

Your chatbot must remember the last 5 conversation turns. Which LangChain component injects them into the prompt automatically?

- A) `StrOutputParser` with a history field.
- B) `MessagesPlaceholder(variable_name="history")` in the template: it accepts a list of HumanMessage/AIMessage objects.
- C) `ConversationChain` only, no template needed.

Answer: B

You want to unit-test a `ChatPromptTemplate` to verify placeholders are replaced correctly. What is the cheapest way?

- A) Call the live LLM API and check the response.
- B) Call `template.invoke(inputs).to_messages()`: it renders the prompt to a message list without any LLM API call.
- C) Use `LLMChain.run()` with a mocked model.

Answer: B
Written by
Abstract Algorithms
@abstractalgorithms