# Exploring Different Types of Binary Trees
Not all binary trees are created equal. We explain Full, Complete, Perfect, and Balanced Binary Trees.
TLDR: A Binary Tree has at most 2 children per node, but the shape of the tree determines performance. A Full tree has 0 or 2 children per node. A Complete tree fills left-to-right. A Perfect tree is a symmetric triangle. A Degenerate tree becomes a linked list with $O(N)$ search.
## Why Tree Shape Changes Everything
Two binary trees with 7 nodes can have dramatically different heights:
```
Perfect tree (height = 2):     Degenerate tree (height = 6):

        1                      1
      /   \                     \
     2     3                     2
    / \   / \                     \
   4   5 6   7                     3
                                    \
                                    ...
                                      \
                                       7
```
Searching the perfect tree takes at most 3 comparisons. Searching the degenerate tree takes up to 7. The shape is everything.
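These worst-case comparison counts can be checked numerically. A minimal sketch (function names are illustrative): a balanced tree of $N$ nodes needs at most $\lfloor \log_2 N \rfloor + 1$ comparisons, while a degenerate chain needs up to $N$.

```python
import math

def max_comparisons_balanced(n):
    """Worst-case comparisons searching a perfect/complete BST of n nodes."""
    return math.floor(math.log2(n)) + 1

def max_comparisons_degenerate(n):
    """Worst-case comparisons searching a chain of n nodes."""
    return n

print(max_comparisons_balanced(7))    # 3
print(max_comparisons_degenerate(7))  # 7
```

At a million nodes the gap is 20 comparisons versus a million, which is why balance dominates everything else about a search tree.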
## The Four Types: Full, Complete, Perfect, and Degenerate
| Type | Rule | Example use case |
| --- | --- | --- |
| Full | Every node has 0 or 2 children (never 1) | Expression trees in compilers |
| Complete | All levels filled left-to-right; the last level may be partially filled | Binary heap (priority queue) |
| Perfect | Every internal node has exactly 2 children; all leaves at the same level | Theoretical analysis baseline |
| Degenerate (pathological) | Every node has at most 1 child, so the tree is essentially a linked list | Unbalanced BST after sorted insertions |
A perfect tree is always both full and complete, but the converse does not hold: a complete tree is not necessarily perfect.
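The three definitions translate directly into code. A minimal sketch (the `Node` class and checker names are illustrative, not from any particular library):

```python
from collections import deque

class Node:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def is_full(root):
    """Full: every node has either 0 or 2 children."""
    if root is None:
        return True
    if (root.left is None) != (root.right is None):
        return False  # exactly one child -> not full
    return is_full(root.left) and is_full(root.right)

def is_complete(root):
    """Complete: in level order, no node appears after the first gap."""
    q, seen_gap = deque([root]), False
    while q:
        node = q.popleft()
        if node is None:
            seen_gap = True
        else:
            if seen_gap:
                return False
            q.append(node.left)
            q.append(node.right)
    return True

def is_perfect(root):
    """Perfect: all leaves at the same depth, all internal nodes with 2 children."""
    def leftmost_depth(node):
        d = 0
        while node:
            d, node = d + 1, node.left
        return d

    def check(node, d, level=1):
        if node is None:
            return True
        if node.left is None and node.right is None:
            return level == d
        if node.left is None or node.right is None:
            return False
        return check(node.left, d, level + 1) and check(node.right, d, level + 1)

    return check(root, leftmost_depth(root))

perfect7 = Node(1, Node(2, Node(4), Node(5)), Node(3, Node(6), Node(7)))
complete6 = Node(1, Node(2, Node(4), Node(5)), Node(3, Node(6)))

print(is_full(perfect7), is_complete(perfect7), is_perfect(perfect7))     # True True True
print(is_full(complete6), is_complete(complete6), is_perfect(complete6))  # False True False
```

The 6-node tree illustrates the sentence above: it is complete (levels fill left-to-right) but not perfect, and not even full, because node 3 has exactly one child.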
## Height and Search Complexity: Why Balance Is Critical
For a tree with $N$ nodes:
| Tree type | Height | Binary search time |
| --- | --- | --- |
| Perfect / Complete | $\lfloor \log_2 N \rfloor$ | $O(\log N)$ |
| Full (unspecified shape) | Varies | $O(\log N)$ to $O(N)$ |
| Degenerate | $N - 1$ | $O(N)$ |
Why this matters for BSTs: if you insert elements in sorted order into a vanilla binary search tree, it degenerates. Inserting 1, 2, 3, 4, 5 produces a right-leaning chain, so every search is $O(N)$, worse than binary search on a sorted array.
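The degeneration is easy to reproduce. A minimal sketch of a plain (non-balancing) BST insert, measuring the height that sorted input produces (helper names are illustrative):

```python
class BSTNode:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def insert(root, key):
    """Plain BST insert with no rebalancing."""
    if root is None:
        return BSTNode(key)
    if key < root.key:
        root.left = insert(root.left, key)
    else:
        root.right = insert(root.right, key)
    return root

def height(root):
    """Height in edges; -1 for an empty tree."""
    if root is None:
        return -1
    return 1 + max(height(root.left), height(root.right))

root = None
for key in [1, 2, 3, 4, 5]:   # sorted insertions
    root = insert(root, key)

print(height(root))  # 4 -- a right-leaning chain of 5 nodes
```

Inserting the same keys in a shuffled order such as 3, 1, 4, 2, 5 yields height 2 instead: the data is identical, only the insertion order changed the shape.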
Self-balancing trees (AVL, Red-Black) maintain $O(\log N)$ height automatically by performing rotations on insert/delete.
## Where Each Type Appears in Real Code
```mermaid
graph TD
    CompleteTree[Complete Binary Tree] -->|Heap property added| PQ["Priority Queue<br/>BinaryHeap"]
    FullTree[Full Binary Tree] -->|Operator nodes| ExprTree["Expression Tree<br/>Compiler AST"]
    DegTree[Degenerate BST] -->|Rebalanced| AVL["AVL / Red-Black Tree<br/>std::map, TreeMap"]
    PerfectTree[Perfect Tree] -->|Theoretical| Analysis["Algorithm Analysis<br/>e.g., merge sort levels"]
```
| Data structure | Tree type underneath | Why |
| --- | --- | --- |
| `heapq` (Python), `PriorityQueue` (Java) | Complete binary tree | Heaps require level-order packing for array encoding |
| `std::map` / Java `TreeMap` | Red-Black (balanced) BST | Guarantees $O(\log N)$ for insert/search/delete |
| Huffman coding tree | Full binary tree | Every code node has 0 or 2 children |
| Segment tree | Nearly complete | Built top-down for range queries |
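The "level-order packing" in the table is why heaps must be complete trees: a complete tree maps onto a flat array with no gaps, so the node at index `i` has its children at `2*i + 1` and `2*i + 2` and its parent at `(i - 1) // 2`. A minimal sketch using Python's `heapq` (the `children` helper is illustrative):

```python
import heapq

# A complete binary tree packed into an array in level order.
heap = [1, 3, 2, 7, 4, 5]  # already satisfies the min-heap property

def children(a, i):
    """Values of the children of the node at index i in the array encoding."""
    return [a[j] for j in (2 * i + 1, 2 * i + 2) if j < len(a)]

# Heap property: every node is <= its children.
assert all(v <= c for i, v in enumerate(heap) for c in children(heap, i))

heapq.heappush(heap, 0)     # bubbles 0 up to the root
print(heap[0])              # 0 -- the minimum is always at index 0
print(heapq.heappop(heap))  # 0
```

A degenerate or merely full tree could not be stored this way without gaps in the array, which is why the heap literature insists on completeness.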
## Self-Balancing Trees: When Plain BSTs Are Not Enough
| Tree | Balance mechanism | Worst-case height | Use case |
| --- | --- | --- | --- |
| AVL tree | Height-based rotations | $\approx 1.44 \log_2 N$ | Read-heavy: faster lookup |
| Red-Black tree | Color-based rotations | $\le 2 \log_2 N$ | Write-heavy: faster insert/delete |
| B-Tree / B+ Tree | Multi-way branching | $O(\log_d N)$ | Disk storage, database indexes |
| Splay tree | Most recently accessed node promoted to root | Amortized $O(\log N)$ | Cache-like access patterns |
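All of the rotation-based trees above rely on the same local operation: a rotation rewires three pointers while preserving the in-order (BST) ordering. A minimal sketch of a single left rotation (the `Node` class and function name are illustrative, not a full AVL or Red-Black implementation):

```python
class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def rotate_left(x):
    """Left-rotate around x: x's right child y becomes the subtree root.
    The in-order sequence -- and hence the BST property -- is preserved."""
    y = x.right
    x.right = y.left  # y's left subtree moves under x
    y.left = x
    return y          # new subtree root

# A right-leaning chain 1 -> 2 -> 3, as produced by sorted insertions:
chain = Node(1, right=Node(2, right=Node(3)))
root = rotate_left(chain)
print(root.key, root.left.key, root.right.key)  # 2 1 3
```

One rotation turns the 3-node chain into a perfectly balanced subtree; AVL and Red-Black trees differ mainly in *when* they decide to apply such rotations.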
## Key Takeaways
- Tree shape determines performance: balanced = $O(\log N)$, degenerate = $O(N)$.
- Full: 0 or 2 children. Complete: level-ordered. Perfect: symmetric triangle. Degenerate: linked list.
- Inserting sorted data into a plain BST creates a degenerate tree.
- Self-balancing trees (AVL, Red-Black) maintain $O(\log N)$ automatically.
- Complete trees underpin heaps; balanced BSTs underpin `std::map` and Java's `TreeMap`.
## Test Your Understanding
- Can a complete tree that is not perfect still have $O(\log N)$ search?
- A developer inserts elements 1, 2, 3, 4, 5 into an unbalanced BST. Draw the resulting tree.
- Why do binary heaps use a complete tree and not a perfect tree?
- What is the height bound of an AVL tree with $N$ nodes?
Written by Abstract Algorithms (@abstractalgorithms)