What’s back-pressure?
Back-pressure happens when data comes in faster than your system can handle it. A slow database can’t keep up with incoming requests, so the HTTP layer starts timing out. One bottleneck breaks everything upstream.
Understanding back-pressure is undervalued. It’s not just about preventing failures: it’s about building mechanisms that let you control the rate of data flow across your entire system.
Back-pressure effects
When your database can’t keep up with write requests, queries start piling up. The backlog reaches the HTTP layer: requests time out, errors spike, users complain. The breakdown spreads, and one slow component takes down the whole system.
Not just a database issue
Databases are the usual suspect, but any component can cause back-pressure — a message bus, a cache, an external API. When something downstream slows down, everything upstream breaks.
Solutions
The obvious fix is to throw more resources at the database. But that’s reactive and expensive.
A better approach: add a buffer between the HTTP layer and the database. Queue incoming requests and drain them at a pace the database can handle. This decouples request handling from database state — requests don’t fail just because the database is temporarily slow.
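A minimal sketch of this idea, using a bounded in-memory queue (the names `slow_db_write` and `handle_request` are placeholders for your real database call and HTTP handler, and the queue size is an arbitrary example):

```python
import queue
import threading
import time

# Placeholder for a real database call; the sleep simulates a DB
# that can only sustain ~100 writes per second.
def slow_db_write(item):
    time.sleep(0.01)

# Bounded buffer: at most 1000 requests may wait. Beyond that we
# shed load explicitly instead of letting the backlog grow forever.
buffer = queue.Queue(maxsize=1000)

def handle_request(payload):
    """HTTP-layer side: enqueue and return immediately."""
    try:
        buffer.put_nowait(payload)
        return "202 Accepted"             # queued for later processing
    except queue.Full:
        return "503 Service Unavailable"  # explicit back-pressure signal

def drain():
    """Worker side: drain the queue at the pace the database can handle."""
    while True:
        item = buffer.get()
        slow_db_write(item)
        buffer.task_done()

threading.Thread(target=drain, daemon=True).start()

print(handle_request({"user": 1}))  # → 202 Accepted
```

Note the key design choice: when the buffer fills up, the handler fails fast with a 503 rather than blocking. The bound is what protects you; an unbounded queue just moves the failure from the database to your memory.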
This makes your system more reliable and cheaper. You’re not paying for capacity you only need during spikes.
Pull systems
Push systems send data as fast as they can. Pull systems let the consumer request data when it’s ready.
Pull is underrated. The consumer controls the pace, so it never gets overwhelmed. This adds reliability that push architectures often lack. It also means better resource utilization — you’re not wasting compute on data you can’t process yet.
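The contrast is easy to see with a Python generator, which is pull-based by construction (the `event_source` producer here is a hypothetical stand-in for any upstream feed):

```python
import itertools

# Hypothetical producer: yields events lazily, doing no work
# until the consumer asks for the next one.
def event_source():
    for i in itertools.count():
        yield {"event_id": i}

def consume(source, limit):
    """Pull loop: the consumer requests the next event only after it
    has finished the previous one, so it can never be overwhelmed."""
    processed = []
    for event in itertools.islice(source, limit):
        processed.append(event["event_id"])  # process at our own pace
    return processed

print(consume(event_source(), 5))  # → [0, 1, 2, 3, 4]
```

Nothing is produced until the consumer iterates, and nothing sits in a buffer waiting: the producer’s pace is exactly the consumer’s pace.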
Conclusion
Back-pressure breaks systems when data floods in faster than you can process it. The fix is simple: buffers or pull systems to control the flow. Prevents crashes. Saves money. Worth keeping in mind before you reach for the “scale up” button.