- Disabling full-page writes in PostgreSQL can boost write performance by up to 5x in write-intensive scenarios.
- Full-page writes are a core component of PostgreSQL’s crash recovery mechanism, enabled by default through the full_page_writes configuration parameter.
- Disabling full-page writes reduces I/O overhead by logging only changes (deltas) rather than entire 8KB pages.
- The trade-off between speed and safety has reignited debate over whether some workloads overpay for durability.
- The results suggest disabling full-page writes is suitable for specific use cases like caching layers, analytics pipelines, or ephemeral data processing.
Can you really boost PostgreSQL write performance by 5x—just by flipping a switch? That’s the question rippling through database engineering circles after a recent benchmark highlighted dramatic gains from disabling full-page writes (FPW). While PostgreSQL is renowned for its reliability and ACID compliance, this trade-off between speed and safety has reignited debate: are we overpaying for durability in certain workloads? The results suggest that for specific use cases like caching layers, analytics pipelines, or ephemeral data processing, the answer might be yes. But at what cost?
What Happens When Full-Page Writes Are Disabled?
Full-page writes (FPW) are a core component of PostgreSQL’s crash recovery mechanism, enabled by default through the full_page_writes configuration parameter. When active, FPW ensures that the first time a page is modified after a checkpoint, the entire 8KB page is written to the Write-Ahead Log (WAL), protecting against torn (partially written) pages caused by disk failures or power outages. Disabling FPW instructs PostgreSQL to log only the changes (deltas) for every modification, never the full page image, significantly reducing I/O overhead. In write-intensive scenarios, such as bulk inserts, real-time event ingestion, or time-series logging, this can slash WAL volume and free up disk bandwidth. The result? Benchmarks show up to a 5x increase in write throughput, particularly on systems constrained by disk I/O or using slower storage tiers.
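For reference, full_page_writes can be changed at runtime with a configuration reload; no server restart is required. A minimal sketch, assuming a local superuser psql connection:

```sh
# Check the current setting (the default is 'on')
psql -c "SHOW full_page_writes;"

# Disable full-page writes cluster-wide; the change takes effect
# on configuration reload, no restart needed
psql -c "ALTER SYSTEM SET full_page_writes = off;"
psql -c "SELECT pg_reload_conf();"
```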
Benchmark Data and Real-World Performance Gains
A widely discussed test on Hacker News demonstrated a 5x improvement in insert performance on a modest PostgreSQL instance when FPW was turned off. The test involved inserting millions of rows into a simple table under controlled load, comparing throughput with full_page_writes = on versus off. Independent verification by database engineers using tools like pgbench and sysbench confirmed similar gains, especially in OLTP-like workloads with high concurrency. According to the PostgreSQL documentation on postgresql.org, the trade-off is clear: while performance improves, the system becomes vulnerable to data corruption if a crash occurs mid-page-write. This risk is magnified on storage systems that cannot guarantee atomic 8KB writes and lack battery-backed write caches.
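The exact figures will vary with hardware, configuration, and workload shape, but a comparison in this spirit is straightforward to reproduce. A rough pgbench sketch (the database name, scale factor, and run parameters here are illustrative choices, not the original benchmark's setup):

```sh
# Create and populate a test database
# (scale factor 100 is roughly 1.5 GB of data; adjust to taste)
createdb fpw_test
pgbench -i -s 100 fpw_test

# Baseline run with the default full_page_writes = on
# (-N runs the built-in simple-update script, a write-heavy
# variant that skips the contended teller/branch updates)
pgbench -c 16 -j 4 -T 300 -N fpw_test

# Disable FPW and reload; a manual checkpoint flushes dirty
# buffers so both runs start from a comparable state
psql fpw_test -c "ALTER SYSTEM SET full_page_writes = off;"
psql fpw_test -c "SELECT pg_reload_conf();"
psql fpw_test -c "CHECKPOINT;"

# Comparison run with identical parameters
pgbench -c 16 -j 4 -T 300 -N fpw_test
```

Comparing the reported transactions per second between the two runs gives a first-order estimate of what FPW costs on your particular storage stack.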
Why Some Experts Warn Against Disabling FPW
Despite the allure of faster writes, many database professionals caution against disabling FPW outside of isolated, non-critical environments. The PostgreSQL community, including core contributors, emphasizes that FPW exists to uphold data integrity—a cornerstone of the database’s design. In mission-critical applications such as financial systems, user authentication, or healthcare records, losing even a single page to corruption could have cascading consequences. Skeptics also note that the 5x gain often appears in synthetic benchmarks that don’t reflect complex real-world queries, indexing overhead, or concurrent read-load interference. Furthermore, modern storage systems with NVMe SSDs and robust filesystems (like ZFS or XFS) have narrowed the performance gap, making FPW less of a bottleneck than in the past. As one senior DBA noted in the Hacker News discussion, “Speed means nothing if your data is unrecoverable.”
When Might Disabling FPW Make Sense in Practice?
There are legitimate scenarios where disabling FPW offers a calculated advantage. Temporary data processing pipelines—such as ETL jobs that rebuild daily or session-tracking systems where some data loss is acceptable—are prime candidates. Similarly, development and testing environments can safely leverage FPW-off configurations to accelerate test runs without risking production data. Some analytics platforms use PostgreSQL as a fast ingestion buffer before moving data to columnar stores, where durability is enforced downstream. In these cases, the performance boost enables faster iteration and reduces infrastructure costs. However, such decisions require rigorous risk assessment, monitoring, and often architectural safeguards like frequent backups or redundant ingestion paths to mitigate potential data loss.
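One practical step before (and after) making such a change is to measure how much of your WAL volume full-page images actually account for. On PostgreSQL 14 and later, the pg_stat_wal view exposes this directly; a quick check:

```sh
# wal_fpi counts full-page images written to WAL; comparing it
# against wal_records and wal_bytes shows how much FPW actually
# contributes to WAL volume on this workload
psql -c "SELECT wal_records, wal_fpi, wal_bytes FROM pg_stat_wal;"
```

If wal_fpi is a small fraction of wal_records, disabling FPW will buy little; if it dominates, gains on the order of those benchmarks become plausible for your workload.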
What This Means For You
If you manage PostgreSQL instances, especially in high-write environments, understanding FPW’s role is essential. While a 5x performance gain is tempting, it should not come at the expense of data integrity unless the use case explicitly allows it. Always evaluate your durability requirements, storage stack, and recovery objectives before tweaking low-level WAL settings. For most production systems, keeping FPW enabled remains the safest choice. But for specific, well-isolated workloads, it may be worth experimenting in staging environments with full rollback plans.
As hardware evolves and new storage engines emerge, will traditional durability mechanisms like FPW become obsolete? Or will the demand for speed push more systems toward eventual consistency models, even in relational databases? The balance between performance and reliability continues to shift—what role should database defaults play in guiding that balance?