Development

Imperative vs Reactive Programming: Understanding Thread Models and Performance

Feb 5, 2026
18 min read

The Thread Model Nobody Talks About

We spent six months wondering why our payment service maxed out at 200 req/s despite having plenty of CPU and memory. The answer wasn't in the code. It was in how threads wait.

The Problem

Each payment request hits three services:

  • User service: 50ms
  • Payment gateway: 800ms
  • Notification service: 200ms

With Spring MVC, one thread handles the entire flow. The thread blocks three times, for a total wait of 1050ms. With 200 threads in the pool, that caps you at 200 concurrent requests.
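To make the arithmetic concrete, here is a minimal sketch of the blocking flow in plain Java: one thread performs the three calls in sequence, with `Thread.sleep` standing in for each service's latency (the service names are illustrative, not real clients).

```java
import java.time.Duration;
import java.time.Instant;

// Sketch of the blocking flow: one thread makes all three calls in
// sequence. Thread.sleep simulates each downstream service's latency.
public class BlockingPaymentFlow {

    static String call(String service, long latencyMs) {
        try {
            Thread.sleep(latencyMs); // the thread is parked here, doing nothing
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return service + " done";
    }

    // Total wall-clock time is the sum of the three latencies: ~1050ms.
    public static long processPayment() {
        Instant start = Instant.now();
        call("user-service", 50);
        call("payment-gateway", 800);
        call("notification-service", 200);
        return Duration.between(start, Instant.now()).toMillis();
    }

    public static void main(String[] args) {
        System.out.println("Total: " + processPayment() + "ms");
    }
}
```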

The Insight

Threads weren't working—they were waiting. 80% of their time spent doing nothing.

Switched to Spring WebFlux with R2DBC. Event loop model. Same hardware, same database pool (20 connections).

Result: 2000+ req/s.

What Changed

Connection pooling became efficient. In blocking code, a thread holds a DB connection for 100ms even though the query finishes in 5ms. The other 95ms? Waiting for result transfer.

R2DBC: Acquire connection, submit query, release immediately. Same 20 connections now serve 1000+ concurrent operations.

The Reactive Tradeoff

Not simpler. Harder to debug. Steeper learning curve. No JPA lazy loading. But when you need it, nothing else scales the same way.

Two Rules

  1. If your requests make multiple external calls: consider reactive
  2. If you're mostly database CRUD: imperative is fine

Architecture should match requirements, not trends.

Thread Models: The Foundation

Thread-Per-Request (Imperative)

Each incoming request gets one dedicated thread that handles everything from start to finish.

```mermaid
sequenceDiagram
    participant Request
    participant Thread
    participant UserService
    participant PaymentAPI
    participant NotificationService
    Request->>Thread: Process payment
    Thread->>UserService: Get user (50ms)
    Note over Thread: BLOCKED 😴
    UserService-->>Thread: User data
    Thread->>PaymentAPI: Charge (800ms)
    Note over Thread: BLOCKED 😴
    PaymentAPI-->>Thread: Success
    Thread->>NotificationService: Notify (200ms)
    Note over Thread: BLOCKED 😴
    NotificationService-->>Thread: Sent
    Thread->>Request: Response
    Note over Thread: Total: 1050ms
```
Thread idle: ~1050ms

The Math

  • Thread pool: 200 threads
  • Request duration: 1050ms (with all service calls)
  • Max throughput: 200 ÷ 1.05 s ≈ 190 req/s

The Waste: Each thread reserves about 1MB of stack memory yet spends 80% of its time blocked, doing nothing.

Event Loop (Reactive)

A small pool of threads (8-16) handles all requests. Operations are non-blocking—threads register callbacks and move on.

```mermaid
sequenceDiagram
    participant Request
    participant EventLoop
    participant UserService
    participant PaymentAPI
    Request->>EventLoop: Process payment
    EventLoop->>UserService: Register callback
    Note over EventLoop: Thread FREED ✓
    EventLoop->>PaymentAPI: Register callback
    Note over EventLoop: Handles 100+ other requests
    UserService-->>EventLoop: User ready
    PaymentAPI-->>EventLoop: Payment complete
    EventLoop->>Request: Response
    Note over EventLoop: Thread never waits
```
Always working

The Math

  • Thread pool: 8 threads
  • Threads never block
  • Max throughput: Limited by I/O, not threads = 2000+ req/s
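A rough JDK-only sketch of the same composition style, using `CompletableFuture` as a stand-in for Reactor's `Mono` (service names and latencies are illustrative): callers register callbacks, and no thread parks while the simulated I/O is in flight.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Non-blocking composition sketch. A small scheduler plays the role of
// the event loop: it completes futures after a delay, and no caller
// thread ever blocks waiting for "I/O".
public class NonBlockingFlow {

    static final ScheduledExecutorService TIMER =
            Executors.newScheduledThreadPool(2, r -> {
                Thread t = new Thread(r);
                t.setDaemon(true); // don't keep the JVM alive for the demo
                return t;
            });

    // Simulated async call: completes after latencyMs without parking a thread.
    static CompletableFuture<String> callAsync(String service, long latencyMs) {
        CompletableFuture<String> f = new CompletableFuture<>();
        TIMER.schedule(() -> f.complete(service + " done"), latencyMs, TimeUnit.MILLISECONDS);
        return f;
    }

    // Composition expresses the dependency chain; each step runs when the
    // previous result arrives, on whichever thread is available.
    public static CompletableFuture<String> processPayment() {
        return callAsync("user-service", 50)
                .thenCompose(user -> callAsync("payment-gateway", 800))
                .thenCompose(charge -> callAsync("notification-service", 200));
    }

    public static void main(String[] args) {
        long start = System.nanoTime();
        String result = processPayment().join(); // join() only for the demo
        System.out.printf("%s in ~%dms%n", result, (System.nanoTime() - start) / 1_000_000);
    }
}
```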

Blocking vs Non-Blocking I/O

Blocking (JDBC)

Thread requests data and STOPS until data arrives.

```mermaid
gantt
    title Thread Activity During Database Query
    dateFormat X
    axisFormat %L ms
    section Blocking Thread
    Active         :done, 0, 5
    WAITING (Idle) :crit, 5, 95
    Active         :done, 95, 100
```

What Happens

  1. Thread acquires database connection
  2. Sends query to database
  3. Thread pauses execution—does NOTHING
  4. Database processes query
  5. Database sends results back
  6. Thread wakes up and continues

95% of the 100ms is the thread doing nothing.
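The six steps above can be observed directly. The following JDK-only sketch (with `Thread.sleep` standing in for the 100ms query) samples the worker mid-query and finds it parked in `TIMED_WAITING`: a whole thread consumed while no work happens.

```java
// Sketch: a "querying" thread sampled mid-query is parked, not running.
public class BlockedThreadDemo {

    static void sleepQuietly(long ms) {
        try { Thread.sleep(ms); } catch (InterruptedException ignored) {}
    }

    public static Thread.State stateDuringQuery() {
        Thread worker = new Thread(() -> sleepQuietly(100)); // stands in for the 100ms query
        worker.start();
        sleepQuietly(50);                     // sample halfway through the query
        Thread.State s = worker.getState();   // TIMED_WAITING: parked, doing nothing
        try { worker.join(); } catch (InterruptedException ignored) {}
        return s;
    }

    public static void main(String[] args) {
        System.out.println("state mid-query: " + stateDuringQuery());
    }
}
```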

Non-Blocking (R2DBC)

Thread submits request and immediately continues working.

```mermaid
gantt
    title Thread Activity During Database Query
    dateFormat X
    axisFormat %L ms
    section Non-Blocking Thread
    Submit Query   :done, 0, 5
    Handle Req 2   :active, 5, 15
    Handle Req 3   :active, 15, 25
    Handle Req 4   :active, 25, 35
    Handle Req 5   :active, 35, 45
    Handle Req 6   :active, 45, 55
    Handle Req 7   :active, 55, 65
    Handle Req 8   :active, 65, 75
    Handle Req 9   :active, 75, 85
    Handle Req 10  :active, 85, 95
    Process Result :done, 95, 100
```

What Happens

  1. Thread acquires database connection
  2. Submits query to database
  3. Releases both the connection and the thread immediately
  4. Thread handles other requests
  5. Database completes query and notifies system
  6. Any available thread processes the result

Thread is productive 100% of the time.

Connection Pooling: The Hidden Bottleneck

Traditional JDBC Pool (HikariCP)

Connection pool maintains fixed connections (typically 10-20).

Blocking Behavior

  • Thread acquires connection
  • Holds it during entire query execution
  • Including all the wait time
  • Returns connection when completely done
```mermaid
graph TB
    subgraph "10 Connection Pool - Blocking"
        T1[Thread 1] -.holds.-> C1[Connection 1]
        T2[Thread 2] -.holds.-> C2[Connection 2]
        T10[Thread 10] -.holds.-> C10[Connection 10]
        T11[Thread 11] --> Wait[Waiting...]
        T12[Thread 12] --> Wait
        T50[Thread 50] --> Wait
    end
    C1 & C2 & C10 --> DB[(Database)]
    style Wait fill:#ff6b6b,color:#fff
```

Result: 10 connections = maximum 10 concurrent database operations, even with 200 threads.
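This cap can be simulated with a `Semaphore` standing in for the connection pool: in the blocking style, each task holds a permit for the whole simulated query, so in-flight queries never exceed the pool size no matter how many threads compete. A hedged JDK-only sketch:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;
import java.util.concurrent.atomic.AtomicInteger;

// Simulation of a fixed pool under blocking access: permits (connections)
// are held for the entire "query", so concurrency is capped at pool size.
public class PoolCapDemo {

    public static int maxInFlight(int connections, int tasks) {
        Semaphore pool = new Semaphore(connections);
        AtomicInteger inFlight = new AtomicInteger();
        AtomicInteger peak = new AtomicInteger();
        ExecutorService threads = Executors.newFixedThreadPool(tasks);
        CountDownLatch done = new CountDownLatch(tasks);
        for (int i = 0; i < tasks; i++) {
            threads.submit(() -> {
                try {
                    pool.acquire();                                      // borrow a connection
                    peak.accumulateAndGet(inFlight.incrementAndGet(), Math::max);
                    Thread.sleep(20);                                    // held for the whole "query"
                    inFlight.decrementAndGet();
                    pool.release();                                      // returned only when done
                } catch (InterruptedException ignored) {
                } finally {
                    done.countDown();
                }
            });
        }
        try { done.await(); } catch (InterruptedException ignored) {}
        threads.shutdown();
        return peak.get();
    }

    public static void main(String[] args) {
        System.out.println("peak in-flight with 50 threads: " + maxInFlight(10, 50));
    }
}
```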

R2DBC Pool

Same pool size, different behavior.

Non-Blocking Behavior

  • Thread acquires connection
  • Submits query
  • Immediately releases connection
  • Connection available for next operation
  • Result arrives later via callback
```mermaid
graph TB
    subgraph "10 Connection Pool - Non-Blocking"
        EL[Event Loop<br/>8 Threads] -->|borrow| Pool[10 Connections]
        Pool -->|submit query| DB[(Database)]
        Pool -.released instantly.-> EL
        DB -.result ready.-> EL
    end
    style EL fill:#51cf66,color:#000
```

Result: Same 10 connections serve 1000+ concurrent operations because they're not held during wait time.

Publisher-Subscriber Pattern

The foundation of reactive programming. Components subscribe to data streams instead of requesting data.

```mermaid
graph LR
    P[Publisher<br/>Data Source] -->|emits items| O1[Operator<br/>Transform]
    O1 -->|filtered| O2[Operator<br/>Map]
    O2 -->|delivers| S[Subscriber<br/>Consumer]
    S -.requests N items.-> P
    style P fill:#4CAF50
    style S fill:#2196F3
```

The Flow

  1. Subscriber subscribes to Publisher
  2. Subscriber requests N items (backpressure control)
  3. Publisher emits items through Operators
  4. Operators transform/filter data
  5. Subscriber processes items
  6. Subscriber requests more items

Key Difference from Imperative

Imperative: "Give me all users" → loads everything

Reactive: "Give me 10 users" → processes 10 → "Give me 10 more"
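This request-N-at-a-time protocol can be sketched with the JDK's own `java.util.concurrent.Flow` interfaces, which define the same Reactive Streams contract that Reactor's `Flux` implements. The counts and batch size below are illustrative.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Flow;

// Minimal synchronous sketch of the Reactive Streams protocol: the
// subscriber requests items in small batches, and the publisher emits
// only what was requested.
public class FlowDemo {

    // A publisher over 0..count-1 that honors request(n).
    static Flow.Publisher<Integer> rangePublisher(int count) {
        return subscriber -> subscriber.onSubscribe(new Flow.Subscription() {
            int next = 0;
            boolean completed = false;
            public void request(long n) {
                for (long i = 0; i < n && next < count; i++) subscriber.onNext(next++);
                if (next == count && !completed) { completed = true; subscriber.onComplete(); }
            }
            public void cancel() {}
        });
    }

    // "Give me N" -> process N -> "give me N more", until the stream ends.
    public static List<Integer> consumeInBatches(int total, int batch) {
        List<Integer> seen = new ArrayList<>();
        rangePublisher(total).subscribe(new Flow.Subscriber<>() {
            Flow.Subscription sub;
            int pending;
            public void onSubscribe(Flow.Subscription s) { sub = s; pending = batch; s.request(batch); }
            public void onNext(Integer item) {
                seen.add(item);
                if (--pending == 0) { pending = batch; sub.request(batch); } // "give me N more"
            }
            public void onError(Throwable t) {}
            public void onComplete() {}
        });
        return seen;
    }

    public static void main(String[] args) {
        System.out.println(consumeInBatches(10, 2));
    }
}
```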

Backpressure: Flow Control

What happens when data arrives faster than you can process it?

Without Backpressure

```mermaid
graph LR
    A[Database<br/>1M records/sec] -->|floods| B[Email Service<br/>100 emails/sec]
    B --> C[Memory Buffer<br/>Fills up...]
    C --> D[OutOfMemory]
    style D fill:#ff6b6b,color:#fff
```

Records pile up in memory until crash.

With Backpressure

```mermaid
graph LR
    A[Database] -.request 100.-> B[Email Service]
    B -->|processes| C[Completes]
    C -.request 100 more.-> A
    A -->|adapts to pace| D[Stable]
    style D fill:#51cf66,color:#000
```

Subscriber Controls the Flow

  • Requests 100 items
  • Processes them
  • Requests 100 more
  • Producer matches consumer's pace

Backpressure Strategies

  • Buffer: Collect items temporarily (risk: the buffer itself can overflow)
  • Drop: Discard items the consumer can't keep up with (good for real-time data)
  • Latest: Keep only the most recent item (good for state updates)
  • Error: Fail fast when overwhelmed (forces proper capacity sizing)
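As a minimal illustration of the drop strategy, here is a bounded buffer that discards overflow instead of growing without limit (Reactor exposes this behavior as `onBackpressureDrop`; the item counts below are arbitrary).

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Drop strategy sketch: a fast producer feeding a bounded buffer. When
// the buffer is full, new items are discarded rather than accumulating
// in memory.
public class DropStrategy {

    // Returns {items kept in the buffer, items dropped}.
    public static int[] produceThenDrain(int produced, int capacity) {
        Deque<Integer> buffer = new ArrayDeque<>(capacity);
        int dropped = 0;
        for (int i = 0; i < produced; i++) {
            if (buffer.size() < capacity) buffer.addLast(i); // accept
            else dropped++;                                  // consumer too slow: drop
        }
        return new int[] { buffer.size(), dropped };
    }

    public static void main(String[] args) {
        int[] r = produceThenDrain(1000, 100);
        System.out.println("kept=" + r[0] + " dropped=" + r[1]);
    }
}
```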

Streaming vs Batch Processing

Batch Processing (Imperative)

Load everything into memory, process, return.

```mermaid
graph LR
    A[(Database<br/>1M records)] --> B[Load ALL<br/>into memory]
    B --> C[Process ALL]
    C --> D[Return complete]
    B -.2GB RAM.-> E[High Memory]
    style E fill:#ff6b6b,color:#fff
```

Memory: 1M records × 2KB = 2GB for one request.

Streaming (Reactive)

Load in chunks, process incrementally, send as available.

```mermaid
graph LR
    A[(Database<br/>1M records)] --> B[Load 50]
    B --> C[Send to client]
    C --> D[Load next 50]
    D --> E[Send to client]
    E --> F[Continue...]
    B -.100KB RAM.-> G[Low Memory]
    style G fill:#51cf66,color:#000
```

Memory: 50 records × 2KB = 100KB peak. 20,000x reduction.
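The memory contrast can be sketched as a chunked loop that tracks the peak number of records resident at once; the record counts mirror the article, and per-record sizes are not modeled.

```java
// Streaming memory profile sketch: process records in fixed-size chunks
// and track the peak number held at any moment.
public class ChunkedStreaming {

    // Peak number of records resident in memory at once.
    public static int peakResident(int totalRecords, int chunk) {
        int peak = 0;
        for (int offset = 0; offset < totalRecords; offset += chunk) {
            int loaded = Math.min(chunk, totalRecords - offset); // load one chunk
            peak = Math.max(peak, loaded);
            // ...chunk sent to the client here, then its memory is reclaimable...
        }
        return peak;
    }

    public static void main(String[] args) {
        System.out.println("batch peak:  " + peakResident(1_000_000, 1_000_000));
        System.out.println("stream peak: " + peakResident(1_000_000, 50));
    }
}
```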

Hot vs Cold Publishers

Cold Publisher

Produces data only when subscribed. Each subscriber gets independent stream.

Example: Database query

  • Subscriber A subscribes → Query runs from start
  • Subscriber B subscribes → Query runs again from start
  • Each gets complete, separate data

Hot Publisher

Always producing data. Subscribers join mid-stream and get data from that point forward.

Example: Stock price feed

  • Stream is live, emitting prices
  • Subscriber A joins at 10:00 → Gets prices from 10:00 onward
  • Subscriber B joins at 10:05 → Gets prices from 10:05 onward (missed earlier prices)
```mermaid
sequenceDiagram
    participant HP as Hot Publisher
    participant S1 as Subscriber 1
    participant S2 as Subscriber 2
    Note over HP: Already producing
    HP->>HP: Item 1
    HP->>HP: Item 2
    S1->>HP: Subscribe
    HP->>S1: Item 3
    HP->>S1: Item 4
    S2->>HP: Subscribe
    HP->>S1: Item 5
    HP->>S2: Item 5 (joins mid-stream)
```
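A timing-free sketch of the distinction in plain Java, with lists and callbacks standing in for real streams: the cold source replays everything for each subscriber, while the hot publisher delivers only what is emitted after subscription.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Cold vs hot, stripped to the essentials.
public class HotVsCold {

    // Cold: each subscription replays the full source from the start,
    // like re-running a database query per subscriber.
    public static List<Integer> coldSubscribe(List<Integer> source) {
        List<Integer> received = new ArrayList<>();
        source.forEach(received::add); // independent, complete replay
        return received;
    }

    // Hot: a live stream; subscribers only see items emitted after joining.
    static class HotPublisher {
        private final List<Consumer<Integer>> subscribers = new ArrayList<>();
        void subscribe(Consumer<Integer> s) { subscribers.add(s); }
        void emit(int item) { subscribers.forEach(s -> s.accept(item)); }
    }

    // Returns [what the early subscriber saw, what the late subscriber saw].
    public static List<List<Integer>> hotRun() {
        HotPublisher prices = new HotPublisher();
        List<Integer> early = new ArrayList<>(), late = new ArrayList<>();
        prices.emit(100);             // nobody listening yet: lost
        prices.subscribe(early::add);
        prices.emit(101);
        prices.subscribe(late::add);  // joins mid-stream
        prices.emit(102);
        return List.of(early, late);
    }

    public static void main(String[] args) {
        System.out.println("cold: " + coldSubscribe(List.of(1, 2, 3)));
        System.out.println("hot:  " + hotRun());
    }
}
```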

When to Use Each Paradigm

Use Imperative When

  • ✓ Simple CRUD operations
  • ✓ Low traffic (<100 concurrent users)
  • ✓ Internal tools and admin dashboards
  • ✓ Team unfamiliar with reactive
  • ✓ Quick prototypes, tight deadlines

Use Reactive When

  • ✓ High concurrency (1000+ users)
  • ✓ Multiple external API calls per request
  • ✓ Real-time streaming requirements
  • ✓ Microservices orchestration
  • ✓ Infrastructure cost matters

The Complete Stack Comparison

| Component | Imperative | Reactive |
|---|---|---|
| Web Framework | Spring MVC | Spring WebFlux |
| Thread Model | Thread-per-request | Event loop |
| HTTP Client | RestTemplate | WebClient |
| Database Driver | JDBC | R2DBC |
| Data Access | JPA/Hibernate | Spring Data R2DBC |
| Connection Pool | HikariCP | R2DBC Pool |
| Execution | Blocking | Non-blocking |

Real Performance Impact

Before (Imperative)

  • 200 thread pool
  • Each request: 1050ms total (mostly waiting)
  • Throughput: 190 req/s
  • 20 DB connections: bottleneck at 200 concurrent queries

After (Reactive)

  • 8 event loop threads
  • Parallel execution where possible
  • Throughput: 2000+ req/s
  • Same 20 DB connections: serve 1000+ operations

10x improvement, same hardware.

The Tradeoffs

What You Lose

  • Simple debugging (reactive stack traces are long and indirect)
  • JPA lazy loading
  • Familiar synchronous patterns
  • Easy onboarding for junior developers

What You Gain

  • 10-100x throughput on same infrastructure
  • Efficient resource utilization
  • Natural backpressure handling
  • Lower cloud costs

The Bottom Line

Reactive isn't "better"—it's a different tool for different problems. Use it when thread blocking is your bottleneck, not because it's trendy.

Architecture should match requirements, not resume keywords.

Example Projects

Imperative Project

  • Traditional e-commerce platform
  • Spring MVC + JDBC + JPA
  • Thread-per-request model
  • HikariCP connection pooling

Reactive Project

  • Real-time notification platform
  • Spring WebFlux + R2DBC
  • Event loop model
  • Server-Sent Events for live updates

Understanding thread models and when to apply reactive patterns is essential for building scalable modern applications. From blocking I/O to event loops, the field continues to evolve to meet growing performance demands.