Fast and Slow Pointer: Floyd's Cycle Detection Algorithm Explained
Detect cycles, find their starts, and locate list midpoints — all in O(n) time and O(1) space.
Abstract Algorithms
TLDR: Move a slow pointer one step and a fast pointer two steps through a linked structure. If they ever meet, a cycle exists. Then reset one pointer to the head and advance both one step at a time — where they meet next is the cycle's start node. This single algorithm, attributed to Robert Floyd in the 1960s, solves cycle detection, cycle start finding, linked list midpoint, and the Happy Number problem — all in O(n) time with O(1) space.
📖 The Infinite Loop Problem: Detecting It Without Extra Memory
Your production service is stuck. A background thread processes a linked list of tasks and never terminates. There's no log entry marking completion — it's just spinning. How do you detect that it's caught in a cycle, and which node caused the loop? More importantly, how do you detect it without allocating a hash set to track visited nodes?
This is not just a theoretical problem. Corrupt memory writes in embedded systems, malformed graph structures from database queries, and circular dependency chains in build systems all create cycles in linked structures at runtime. The naive detection approach — store every visited address in a HashSet, check on each step — works but allocates O(n) memory proportional to the list length.
Floyd's cycle detection algorithm answers the same question with O(1) space. No hash set. No visited array. Just two pointers moving at different speeds through the same structure.
The intuition is physical: imagine two runners on a circular track. The faster runner will eventually lap the slower one. If the track is finite and non-circular, the fast runner reaches the end first and you know there's no cycle. If the track loops, they'll meet somewhere on the loop. The speed ratio (2:1) makes the math clean and the guarantees strong.
🔍 Linked List Cycles and Why Array Indexing Won't Help
Before the algorithm, it helps to understand why the problem is harder for linked lists than for arrays. An array has index-based random access — you can ask "have I visited index 7?" in O(1) by checking a boolean array. A linked list has only pointer-based traversal. There is no natural integer key you can use to mark nodes without allocating separate storage.
A cycle in a linked list means at least one next pointer points backward to an earlier node in the sequence. Once a traversal enters the cycle, node.next will always return a node you've already visited, and the traversal will spin forever. The cycle doesn't have to be a full loop back to the head — it can be an internal loop starting midway through the list.
Non-cyclic list: 1 → 2 → 3 → 4 → 5 → null
Cyclic list: 1 → 2 → 3 → 4 → 5 → 3 (node 5's next points back to node 3)
In the cyclic case, node.next != null is always true for nodes inside the cycle. A simple while (node.next != null) loop never terminates. You need a fundamentally different stopping condition — and Floyd provides it.
| Detection method | Time | Space | Works on streams? |
| --- | --- | --- | --- |
| HashSet of visited nodes | O(n) | O(n) | No (memory grows without bound) |
| Tortoise and hare (Floyd) | O(n) | O(1) | Yes |
| Brent's algorithm | O(n) | O(1) | Yes |
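For contrast, the hash-set baseline from the table's first row takes only a few lines. This is a sketch using a minimal ListNode class of the same shape as the solutions later in this post:

```java
import java.util.HashSet;
import java.util.Set;

public class HashSetCycleCheck {
    static class ListNode {
        int val;
        ListNode next;
        ListNode(int val) { this.val = val; }
    }

    // O(n) time, but also O(n) extra space: one set entry per visited node.
    static boolean hasCycle(ListNode head) {
        Set<ListNode> visited = new HashSet<>();
        for (ListNode node = head; node != null; node = node.next) {
            if (!visited.add(node)) return true; // add() returns false on a revisit
        }
        return false;
    }

    public static void main(String[] args) {
        ListNode a = new ListNode(1), b = new ListNode(2), c = new ListNode(3);
        a.next = b; b.next = c;
        System.out.println(hasCycle(a)); // false: 1 → 2 → 3 → null
        c.next = a;                      // close the loop: 3 → 1
        System.out.println(hasCycle(a)); // true
    }
}
```

The rest of this post eliminates that `visited` set entirely.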
⚙️ Speed Differential: Why 2× Speed Guarantees a Meeting
The mechanism is simple to state: advance slow by one node per step and fast by two nodes per step. If there is a cycle of length C, once fast enters the cycle it gains one step on slow per iteration. Since the cycle has finite length C, fast will "lap" slow within C steps of both being inside the cycle. They are guaranteed to meet inside the loop.
Why 2× specifically? With slow moving one node and fast two, fast gains exactly one node on slow per iteration, so it can never jump over slow — a meeting inside the cycle is guaranteed. With a larger gap (say 3:1, gaining two nodes per iteration), fast can hop over slow and, for unlucky combinations of cycle length and starting gap, never land on it. The 2:1 ratio also keeps the Phase 2 math clean; higher ratios complicate it without any speed benefit.
Phase 1 — Detect the cycle:
ListNode slow = head;
ListNode fast = head;
while (fast != null && fast.next != null) {
slow = slow.next;
fast = fast.next.next;
if (slow == fast) {
// Cycle detected — slow and fast met inside the loop
break;
}
}
If fast reaches null, the list is acyclic. If slow == fast, a cycle exists and the meeting point is somewhere inside it.
Phase 2 — Find the cycle start:
Reset one pointer to head. Leave the other at the meeting point. Advance both by one step at a time. Where they next meet is the cycle start. (The math behind why this works is in the Deep Dive section below.)
// After Phase 1 confirms a cycle (slow == fast at meeting point)
ListNode pointer1 = head;
ListNode pointer2 = slow; // still at meeting point
while (pointer1 != pointer2) {
pointer1 = pointer1.next;
pointer2 = pointer2.next;
}
// pointer1 == pointer2 == cycle start node
🧠 Deep Dive: The Phase 2 Math and Why It Points to the Cycle Start
The Internals: Where Slow and Fast Meet, and Why It Matters
Let F = distance from head to the cycle start (number of nodes before the cycle begins).
Let C = length of the cycle.
Let M = distance from the cycle start to the meeting point (measured inside the cycle).
When slow and fast meet in Phase 1:
- Slow has traveled F + M steps.
- Fast has traveled F + M + k·C steps for some integer k ≥ 1 (fast has gone around the cycle at least once more than slow).
- Since fast moves 2× as fast as slow: 2(F + M) = F + M + k·C.
Solving: F + M = k·C, which means F = k·C - M.
Now interpret k·C - M geometrically: starting from the meeting point and walking k·C - M steps forward along the cycle brings you exactly to the cycle start. Walking a full k·C steps would be a whole number of laps, returning to the meeting point; stopping M steps short of that lands M steps before the meeting point — which is precisely the cycle start.
This is why Phase 2 works: pointer1 starts at head and walks F steps to the cycle start. Pointer2 starts at the meeting point and walks k·C - M = F steps to the same cycle start. They arrive at exactly the same time.
The elegance is that you don't need to know F, C, or M explicitly. The algorithm computes the cycle start by construction.
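To make the invariant concrete, plug in the example list traced later in this post (1 → 2 → 3 → 4 → 5 → 3): F = 2 nodes precede the cycle, C = 3, and the pointers meet at node 4, one step past the cycle start, so M = 1 and k = 1. A throwaway check of the arithmetic:

```java
public class InvariantCheck {
    public static void main(String[] args) {
        int F = 2; // nodes before the cycle start (1, 2)
        int C = 3; // cycle length (3 → 4 → 5 → back to 3)
        int M = 1; // meeting point (node 4) is one step past the cycle start
        int k = 1; // fast lapped slow exactly once
        System.out.println(F + M == k * C); // true: 3 == 3
        System.out.println(F == k * C - M); // true: 2 == 3 - 1
    }
}
```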
Performance Analysis: O(n) Time and the Space Guarantee
Phase 1 time complexity:
If the list has no cycle, fast reaches null in at most n/2 steps (it moves two nodes per step). If there is a cycle, fast enters the cycle after at most F steps, and then gains one position on slow per step. The cycle has length C ≤ n, so fast catches slow within C additional steps. Total Phase 1: O(F + C) = O(n).
Phase 2 time complexity:
Pointer1 walks F steps to the cycle start. Pointer2 walks F steps around the cycle to the same point. Phase 2: O(F) = O(n).
Total: O(n) time, O(1) space.
| Phase | Pointer movements | Time | Space |
| --- | --- | --- | --- |
| Phase 1 (detect) | At most F + C moves | O(n) | O(1) |
| Phase 2 (find start) | At most F moves | O(n) | O(1) |
| Middle-finding variant | At most n/2 moves | O(n) | O(1) |
No heap allocation. No stack growth beyond the two pointer variables. This is why Floyd's algorithm is used in memory-constrained embedded systems and is the reference implementation in most algorithm textbooks.
📊 Visualizing Slow and Fast Pointers Converging in a Cycle
flowchart TD
A([Start: slow=head, fast=head]) --> B{fast != null AND\nfast.next != null}
B -- No --> C([No cycle — list is acyclic])
B -- Yes --> D[slow = slow.next\nfast = fast.next.next]
D --> E{slow == fast?}
E -- No --> B
E -- Yes --> F([Cycle detected!\nMeeting point found])
F --> G[Reset pointer1 = head\npointer2 = meeting point]
G --> H{pointer1 == pointer2?}
H -- No --> I[pointer1 = pointer1.next\npointer2 = pointer2.next]
I --> H
H -- Yes --> J([Cycle start node found])
Phase 1 detects the cycle; Phase 2 navigates both pointers to the exact start node using the F = kC − M invariant.
Worked trace on 1 → 2 → 3 → 4 → 5 → 3 (cycle starts at node 3, index 2):
| Step | Slow | Fast | Met? |
| --- | --- | --- | --- |
| 0 | 1 | 1 | — |
| 1 | 2 | 3 | No |
| 2 | 3 | 5 | No |
| 3 | 4 | 4 | Yes |
Phase 2: pointer1 at node 1, pointer2 at node 4. Both advance: pointer1→2, pointer2→5. Then: pointer1→3, pointer2→3. Cycle start = node 3. ✓
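The whole trace can be reproduced with a small harness — a sketch combining both phases into one method (class and method names are illustrative):

```java
public class TraceHarness {
    static class ListNode {
        int val;
        ListNode next;
        ListNode(int val) { this.val = val; }
    }

    // Full Floyd: Phase 1 (detect) + Phase 2 (find start). Returns null if acyclic.
    static ListNode cycleStart(ListNode head) {
        ListNode slow = head, fast = head;
        while (fast != null && fast.next != null) {
            slow = slow.next;
            fast = fast.next.next;
            if (slow == fast) {                // Phase 1 meeting point
                ListNode p1 = head, p2 = slow; // Phase 2: advance both by one
                while (p1 != p2) { p1 = p1.next; p2 = p2.next; }
                return p1;
            }
        }
        return null;
    }

    public static void main(String[] args) {
        ListNode[] n = new ListNode[6];
        for (int i = 1; i <= 5; i++) n[i] = new ListNode(i);
        for (int i = 1; i <= 4; i++) n[i].next = n[i + 1];
        n[5].next = n[3];                         // 5 → 3 closes the cycle
        System.out.println(cycleStart(n[1]).val); // 3 — matches the trace above
    }
}
```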
🌍 Where Cycle Detection Shows Up in Real Systems
Floyd's algorithm sounds academic, but it solves real engineering problems every day:
Memory safety in embedded systems: Microcontrollers that traverse linked memory regions (free lists, task queues) use fast/slow pointer checks to detect corruption before it causes a hang. A corrupted next pointer that forms a loop is detected in O(n) without any heap allocation — critical when heap is unavailable.
Database cursor traversal: Some database engines internally represent result-set cursors as linked node chains. A corrupted cursor that loops back to an earlier record would spin indefinitely; the fast/slow check provides a cheap safety gate.
OS process scheduling: Certain operating system scheduler implementations maintain runnable threads as a circular linked list. Adding a duplicate entry by mistake creates a cycle within the circular structure. Detection code based on Floyd's algorithm can verify integrity in O(n) during debug builds.
Functional sequence convergence: In numerical analysis and cryptography, iterating a function f(f(f(x))) can cycle (Pollard's rho factorization algorithm deliberately exploits this with a Floyd-style cycle detector to find prime factors of large integers).
🧪 Four Classic Problems: Full Java Solutions
Problem 1: Linked List Cycle Detection (LeetCode 141)
public class LinkedListCycle {
static class ListNode {
int val;
ListNode next;
ListNode(int val) { this.val = val; }
}
/**
* LeetCode 141 — Linked List Cycle
* Returns true if the list contains a cycle.
* Time: O(n) Space: O(1)
*/
public boolean hasCycle(ListNode head) {
ListNode slow = head;
ListNode fast = head;
while (fast != null && fast.next != null) {
slow = slow.next;
fast = fast.next.next;
if (slow == fast) return true;
}
return false;
}
}
Problem 2: Find the Start of the Cycle (LeetCode 142)
public class LinkedListCycleII {
static class ListNode {
int val;
ListNode next;
ListNode(int val) { this.val = val; }
}
/**
* LeetCode 142 — Linked List Cycle II
* Returns the node where the cycle begins, or null if no cycle.
* Time: O(n) Space: O(1)
*
* Phase 1: detect meeting point.
* Phase 2: reset one pointer to head; advance both by 1 until they meet.
* Meeting point is the cycle start (proof: F = kC - M).
*/
public ListNode detectCycle(ListNode head) {
ListNode slow = head;
ListNode fast = head;
boolean cycleFound = false;
// Phase 1: find meeting point
while (fast != null && fast.next != null) {
slow = slow.next;
fast = fast.next.next;
if (slow == fast) {
cycleFound = true;
break;
}
}
if (!cycleFound) return null;
// Phase 2: find cycle start
ListNode pointer1 = head;
ListNode pointer2 = slow; // meeting point
while (pointer1 != pointer2) {
pointer1 = pointer1.next;
pointer2 = pointer2.next;
}
return pointer1; // cycle start
}
}
Problem 3: Find the Middle of a Linked List (LeetCode 876)
When fast reaches the end, slow is at the middle. For even-length lists, this returns the second middle node — exactly what LeetCode expects.
public class MiddleOfLinkedList {
static class ListNode {
int val;
ListNode next;
ListNode(int val) { this.val = val; }
}
/**
* LeetCode 876 — Middle of the Linked List
* Time: O(n) Space: O(1)
*
* When fast reaches null (even length) or fast.next reaches null (odd length),
* slow is at the middle node.
*/
public ListNode middleNode(ListNode head) {
ListNode slow = head;
ListNode fast = head;
while (fast != null && fast.next != null) {
slow = slow.next;
fast = fast.next.next;
}
return slow;
}
}
Trace on [1, 2, 3, 4, 5]:
- Step 1: slow=2, fast=3. Step 2: slow=3, fast=5. Loop check: fast.next == null → stop. Middle = node 3. ✓
Trace on [1, 2, 3, 4]:
- Step 1: slow=2, fast=3. Step 2: slow=3, fast=null → stop. Middle = node 3 (second middle). ✓
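If a problem wants the first middle instead (node 2 on [1, 2, 3, 4]), a common variant stops the loop one step earlier. A sketch:

```java
public class FirstMiddle {
    static class ListNode {
        int val;
        ListNode next;
        ListNode(int val) { this.val = val; }
    }

    // Returns the first of the two middles on even-length lists.
    // Assumes head != null (guard omitted for brevity).
    static ListNode firstMiddle(ListNode head) {
        ListNode slow = head, fast = head;
        while (fast.next != null && fast.next.next != null) {
            slow = slow.next;
            fast = fast.next.next;
        }
        return slow;
    }

    public static void main(String[] args) {
        ListNode a = new ListNode(1), b = new ListNode(2),
                 c = new ListNode(3), d = new ListNode(4);
        a.next = b; b.next = c; c.next = d;
        System.out.println(firstMiddle(a).val); // 2 — first middle of [1, 2, 3, 4]
    }
}
```

On odd-length lists both variants return the same (unique) middle node.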
Problem 4: Happy Number (LeetCode 202)
A happy number eventually reaches 1 when you repeatedly replace it with the sum of squares of its digits. An unhappy number cycles without ever reaching 1. The digit-squaring function acts like a next pointer — you can model the sequence as a linked list and detect the cycle with fast/slow.
public class HappyNumber {
/**
* LeetCode 202 — Happy Number
* Treat each number as a "node" where next = sumOfSquaredDigits(n).
* If the sequence cycles without hitting 1, the number is not happy.
* Time: O(log n) — each getNext call costs O(log n) digit operations, and after
* the first call the value quickly drops below 243, so the rest of the chain
* has bounded length. Space: O(1)
*/
public boolean isHappy(int n) {
int slow = n;
int fast = getNext(n);
while (fast != 1 && slow != fast) {
slow = getNext(slow);
fast = getNext(getNext(fast));
}
return fast == 1;
}
private int getNext(int n) {
int sum = 0;
while (n > 0) {
int digit = n % 10;
sum += digit * digit;
n /= 10;
}
return sum;
}
public static void main(String[] args) {
HappyNumber h = new HappyNumber();
System.out.println(h.isHappy(19)); // true: 19→82→68→100→1
System.out.println(h.isHappy(2)); // false: 2→4→16→37→58→89→145→42→20→4 (cycle)
}
}
The Happy Number problem demonstrates that fast/slow pointers apply wherever you have a deterministic next function — not just traditional linked list nodes. Any sequence that either terminates or cycles is fair game.
⚖️ When the Speed Differential Approach Breaks Down
Single-element cycles: If a node points to itself (node.next == node), slow and fast land on it together within the first iteration or two, and both phases terminate correctly without modification. Self-loops are still the easiest edge case to break in hand-rolled variants, so they deserve an explicit unit test.
Doubly linked lists: The fast.next.next step assumes a singly linked structure. In a doubly linked list with backward pointers, you'd need to be explicit about always following next (not prev). The algorithm still works but requires care.
Very large cycles (streaming data): Floyd's is optimal for in-memory structures. For streaming cycle detection (detecting revisited states in a long computation), you might prefer Brent's algorithm, which achieves the same O(n) bound with smaller constant factors: it moves only one pointer a step at a time and teleports the other, stationary pointer to catch up whenever the step count reaches a power of two, so the next function is evaluated fewer times overall than with Floyd's two moving pointers.
When you need the cycle length: Floyd's Phase 1 doesn't directly give you the cycle length. You can compute it after Phase 1 by keeping one pointer at the meeting point and counting steps until it returns, but this is an additional O(C) pass.
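That extra pass is short. A sketch, assuming meeting is any node already known to lie inside the cycle:

```java
public class CycleLength {
    static class ListNode {
        int val;
        ListNode next;
        ListNode(int val) { this.val = val; }
    }

    // Counts the nodes in the cycle by walking until we return to the meeting node.
    // Precondition: meeting is inside a cycle (e.g., the Phase 1 meeting point).
    static int cycleLength(ListNode meeting) {
        int length = 1;
        for (ListNode node = meeting.next; node != meeting; node = node.next) {
            length++;
        }
        return length;
    }

    public static void main(String[] args) {
        ListNode a = new ListNode(3), b = new ListNode(4), c = new ListNode(5);
        a.next = b; b.next = c; c.next = a;  // 3 → 4 → 5 → 3, so C = 3
        System.out.println(cycleLength(b));  // 3 — same answer from any cycle node
    }
}
```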
🧭 Deciding Between Fast/Slow and Hash-Set Cycle Detection
| Situation | Recommendation |
| --- | --- |
| Linked list, O(1) space required | Fast/slow pointers — the standard choice |
| Graph cycle detection | DFS with coloring (white/grey/black) — fast/slow doesn't generalize to graphs |
| Need cycle length immediately | Fast/slow + count loop, or Brent's algorithm |
| Streaming sequence, memory-constrained | Brent's algorithm for better constants |
| Finding cycle start node | Fast/slow Phase 2 — elegant O(n) solution |
| Debugging/instrumentation (correctness over performance) | HashSet — simpler to reason about |
The one hard rule: fast/slow pointers require a deterministic single next function. Any structure where each node has exactly one successor and the successor relationship is stable is a valid candidate.
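That rule can be expressed generically. Here is a sketch of Floyd's Phase 1 over an arbitrary deterministic next function, using Java's IntUnaryOperator; the function and method names are illustrative, in the spirit of Pollard's rho:

```java
import java.util.function.IntUnaryOperator;

public class FunctionCycle {
    // Returns some value inside the cycle of the sequence x, f(x), f(f(x)), ...
    // Assumes the sequence eventually cycles (true for any f over a finite domain).
    static int findCycleValue(int start, IntUnaryOperator f) {
        int slow = f.applyAsInt(start);
        int fast = f.applyAsInt(f.applyAsInt(start));
        while (slow != fast) {
            slow = f.applyAsInt(slow);                 // one application per step
            fast = f.applyAsInt(f.applyAsInt(fast));   // two applications per step
        }
        return slow;
    }

    public static void main(String[] args) {
        // f(x) = (x² + 1) mod 255 — a Pollard-rho-style iteration over a finite domain
        IntUnaryOperator f = x -> (x * x + 1) % 255;
        System.out.println(findCycleValue(2, f)); // prints some value on the cycle
    }
}
```

Happy Number is exactly this pattern with f = sumOfSquaredDigits and 1 as a fixed point.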
🛠️ How Java's Concurrent Utilities Use Cycle-Awareness
Java's java.util.concurrent package deals with cycles in a different but related way. ForkJoinPool's work-stealing deques and the LinkedBlockingDeque both perform internal consistency checks during debug builds to ensure that internal node chains don't form cycles — a condition that would cause the pool's worker threads to spin indefinitely.
The ReentrantLock implementation in java.util.concurrent.locks maintains a CLH (Craig, Landin, and Hagersten) queue of waiting threads as a linked chain of node objects. If a bug causes a node to point back to an earlier node in the queue, the lock's acquire() loop would never terminate — exactly the cycle problem Floyd's algorithm detects. Production JVM implementations guard against this with bounded loop counters rather than pointer-speed games, but the structural problem is identical.
For application-level cycle detection in Java dependency graphs (Spring's @Autowired circular dependency detection), Spring's BeanFactory uses a Set<String> of currently-in-creation bean names — the hash-set approach. It prioritizes debuggability (you get an error message naming the cycle) over memory minimalism.
// Simplified version of how you might detect a cycle in a dependency graph
// using the fast/slow idea adapted for a Map-based "next" relationship
public boolean hasCycleInDependencies(String start, Map<String, String> dependsOn) {
String slow = start;
String fast = dependsOn.get(start);
while (fast != null && dependsOn.get(fast) != null) {
slow = dependsOn.get(slow);
fast = dependsOn.get(dependsOn.get(fast));
if (slow.equals(fast)) return true;
}
return false;
}
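A quick way to exercise that helper, wrapped in a throwaway class (the module names are made up):

```java
import java.util.HashMap;
import java.util.Map;

public class DependencyCycleDemo {
    // Same fast/slow idea as above, with Map.get as the "next" pointer.
    static boolean hasCycleInDependencies(String start, Map<String, String> dependsOn) {
        String slow = start;
        String fast = dependsOn.get(start);
        while (fast != null && dependsOn.get(fast) != null) {
            slow = dependsOn.get(slow);
            fast = dependsOn.get(dependsOn.get(fast));
            if (slow.equals(fast)) return true;
        }
        return false;
    }

    public static void main(String[] args) {
        Map<String, String> deps = new HashMap<>();
        deps.put("web", "auth");   // hypothetical module names
        deps.put("auth", "db");
        deps.put("db", "web");     // db → web closes the cycle
        System.out.println(hasCycleInDependencies("web", deps)); // true

        deps.put("db", "logging"); // break the cycle: chain now ends at "logging"
        System.out.println(hasCycleInDependencies("web", deps)); // false
    }
}
```

Note this only follows single-successor chains; a bean with multiple dependencies is a graph, which is why Spring uses the set-based approach instead.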
📚 Lessons Learned: Off-by-One Traps and Null Pointer Landmines
1. Always check fast != null && fast.next != null. Checking only fast != null will crash on odd-length lists: fast lands on the last node, its next is null, and fast.next.next dereferences null. (On even-length lists fast itself becomes null and the loop exits safely.) The double guard is required and non-negotiable.
2. Phase 2 must start with pointer2 at the meeting point, not at head. A common bug resets both pointers to head for Phase 2. The algorithm requires pointer2 to stay at the meeting point inside the cycle. Only pointer1 goes back to head.
3. Happy Number's getNext must handle n=1 as a base case. If you call getNext(1) inside the loop body before checking fast == 1, you correctly get getNext(1) = 1 (since 1² = 1), but explicitly checking fast == 1 before the call avoids a redundant squaring step.
4. For middleNode, understand which "middle" you return. On [1, 2, 3, 4], the two middle nodes are 2 and 3. The standard fast/slow implementation returns the second middle (3), because when fast hits the boundary, slow has made one extra step. If a problem requires the first middle, stop the loop one step earlier: while (fast.next != null && fast.next.next != null).
5. Don't use reference equality (==) on Integer objects. In the Happy Number problem, if you cache slow and fast as Integer objects instead of int primitives, reference equality (==) will fail for values outside the JVM's cached range (-128 to 127). Always use int primitives or .equals().
📌 Summary and Key Takeaways
- Two-phase structure: Phase 1 detects the cycle (O(n)). Phase 2 finds the cycle start using the F = kC - M invariant (O(n)). Total: O(n) time, O(1) space.
- The speed ratio is 2:1. Fast advances two nodes, slow one. Any cycle of length ≥ 1 guarantees a meeting point within C additional steps of both pointers entering the cycle.
- Four problems, one pattern: cycle detection, cycle start, list midpoint, and Happy Number all reduce to the same two-pointer speed game.
- Guard conditions matter more than the algorithm itself. The fast != null && fast.next != null double check is the source of most fast/slow bugs in interviews.
- Phase 2 pointer placement: only pointer1 resets to head. Pointer2 stays at the meeting point.
- Generalizes beyond linked lists: any deterministic sequence with a next function — digit squaring, function iteration — is a valid target for Floyd's algorithm.
📝 Practice Quiz
In Floyd's Phase 1, the loop condition is fast != null && fast.next != null. What happens if you use only fast != null for a list with an odd number of nodes and no cycle?
- A) The algorithm correctly returns false (no cycle).
- B) A NullPointerException is thrown when fast.next.next is evaluated while fast.next is null.
- C) The slow and fast pointers both reach null simultaneously.
Correct Answer: B — On an odd-length list, fast lands exactly on the last node (not null). Its next is null. The code then tries fast.next.next, which is null.next, throwing a NullPointerException. (On an even-length list, fast itself becomes null and the loop exits safely.)
You run Floyd's Phase 2: you set pointer1 = head and pointer2 = meetingPoint, then advance both by one step at a time. What property of the algorithm guarantees they meet at the cycle start?
- A) The total path length from the meeting point to the cycle start equals the total path length from head to the cycle start.
- B) The cycle is always of length 1.
- C) Fast always meets slow exactly at the cycle start in Phase 1.
Correct Answer: A — The invariant F = kC - M means pointer1 (walking from head) and pointer2 (walking from the meeting point) both travel exactly F steps to reach the cycle start, arriving simultaneously.
The Happy Number problem doesn't use ListNode objects. How is the fast/slow pointer technique still applicable?
- A) It is not applicable; the problem requires a HashSet.
- B) Each integer value acts as a "node" and getNext(n) acts as the next pointer, creating a deterministic sequence that either reaches 1 or enters a cycle.
- C) The problem uses floating-point arithmetic where Floyd's convergence proof still holds.
Correct Answer: B — The fast/slow technique applies to any deterministic next function, not just physical pointer-linked nodes.
Open-ended challenge: Brent's cycle detection algorithm is an alternative to Floyd's with the same O(n) time and O(1) space but claims a smaller constant factor. How does Brent's algorithm differ in how it advances the fast pointer? Under what conditions would you choose Brent's over Floyd's for a production system, and what trade-off does it introduce in terms of code complexity vs. performance?
🔗 Related Posts
- Two Pointer Technique — the opposite-end variant of two pointers for pair sums and partition problems on sorted arrays.
- Sliding Window Technique — extends the two-pointer idea with a running aggregate window for subarray and substring problems.
- Backtracking Explained — a complementary search pattern for exhaustive exploration problems on trees and graphs.

Written by
Abstract Algorithms
@abstractalgorithms