In the realm of internet data transfer, “compression” is often the most overlooked yet most impactful piece. You may never consciously think about it, but whether web pages load instantly, API responses feel fluid, and bandwidth costs stay in check often hinges on this unseen process. For the past two decades, gzip has been the reliable choice in compression: simple, versatile, and broadly compatible. But in today’s era of real-time responsiveness, edge distribution, and global acceleration, it is increasingly inadequate: its compression efficiency is low and its latency high for dynamic content, its CPU overhead is significant under high concurrency, and its compression ratio is limited for complex formats such as JSON and Protobuf.

In short, gzip was once good enough, but it is no longer fast enough. This is precisely the motivation behind OpenResty Edge’s introduction of the Zstandard (zstd) compression algorithm. Zstandard, a modern, open-source, high-performance compression algorithm from Meta (Facebook), can significantly reduce data size at comparable CPU overhead. For edge architectures striving for maximum transmission efficiency, this translates into lower bandwidth consumption, faster responses, and a smoother global access experience. In this article, we explore how Zstandard redefines “efficient transmission” and guide you through quickly enabling this new feature in OpenResty Edge to dramatically boost compression performance.

The Evolution of Compression Algorithms

For the past two decades, gzip has been the “default standard” for internet data transmission. From web HTML to API responses, from log streams to configuration files, its presence is almost ubiquitous. It is reliable and offers robust compatibility, but as network architectures and user experience demands evolve, we must acknowledge: the era of gzip is slowly waning.

  1. The “Invisible Bottleneck” in Network Transmission

When users access content globally, every millisecond of page load speed is critical. Even with continuous optimization of caching and CDN layers, data volume remains the most direct factor impacting transmission efficiency.

  • For end-users: smaller data volume means faster loading and lower energy consumption;
  • For platform operators: a higher compression ratio leads to lower bandwidth costs and increased throughput for edge nodes.

gzip’s past success stemmed precisely from its “balance”: a sufficiently good compression ratio coupled with reasonable computational overhead. However, this very balance has now become a limitation. The demands of modern application scenarios are fundamentally different:

  • The proportion of dynamic responses is increasing (e.g., API calls, GraphQL queries, real-time log streams), necessitating frequent compression of short-lived data;
  • The proliferation of mobile and IoT devices leads to greater network fluctuations, demanding higher transmission efficiency and decompression performance;
  • Data formats are increasingly diverse (JSON, Protobuf, Parquet, etc.), and gzip’s compression efficiency is suboptimal for some of these formats.

These shifts compel us to re-evaluate the role of compression algorithms: they are no longer merely “tools for saving bandwidth,” but critical components in the performance optimization pipeline that directly influence latency and user experience.

  2. When Compression Algorithms Become a Performance Bottleneck

In high-concurrency scenarios, such as real-time response compression at the CDN edge, gzip’s CPU usage can become a hidden cost. The benchmark results below clearly show that zstd is significantly faster than Brotli and gzip.

[Benchmark screenshots: compression and decompression speed of zstd vs. Brotli and gzip]

For example, under the same latency budget:

  • gzip might only manage compression at level 3;
  • whereas next-generation algorithms like Zstandard can produce smaller output (i.e., a higher compression ratio) at faster speeds with the same CPU overhead.

This means that with the same hardware resources, the system can serve more requests, reduce network traffic costs, and lower average response latency. For enterprises, this represents a very direct financial gain.
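To make the trade-off concrete, here is a minimal, purely illustrative Python sketch comparing DEFLATE (the algorithm behind gzip) with zstd on a synthetic JSON payload. It assumes the third-party `zstandard` package and is not an OpenResty Edge benchmark; real numbers vary with data and hardware.

```python
# Rough, illustrative comparison of DEFLATE (gzip's algorithm) vs. zstd
# on a single synthetic payload. Assumes the third-party "zstandard"
# package (pip install zstandard).
import json
import time
import zlib

import zstandard as zstd

# Hypothetical payload: a repetitive JSON array, typical of API responses.
payload = json.dumps(
    [{"id": i, "status": "ok", "latency_ms": i % 50} for i in range(20_000)]
).encode()

def measure(label, compress):
    start = time.perf_counter()
    out = compress(payload)
    elapsed_ms = (time.perf_counter() - start) * 1000
    print(f"{label:>14}: {len(out):>8} bytes in {elapsed_ms:6.1f} ms")

measure("gzip level 6", lambda d: zlib.compress(d, 6))
measure("zstd level 3", lambda d: zstd.ZstdCompressor(level=3).compress(d))
```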

  3. Unique Challenges of Edge Computing Scenarios

In edge computing environments, each node processes traffic in close proximity to users: bandwidth resources are limited; CPU capacity is constrained; and the number of nodes is massive. In such an environment, every 1% optimization in compression performance can be amplified into significant overall benefits. This is precisely why more and more cloud vendors and infrastructure platforms are starting to introduce Zstandard (zstd) — a high-performance compression algorithm designed for modern computing architectures. It not only strikes a new balance between compression ratio and speed, but more importantly: it transforms “compression” into an active lever for performance optimization, rather than an unavoidable overhead.

What is Zstandard (zstd)

Zstandard, abbreviated as zstd, is a general-purpose lossless compression algorithm developed by Meta (formerly Facebook). Its design goal is straightforward: to significantly boost compression and decompression speeds while maintaining a high compression ratio. At the algorithmic level, zstd isn’t merely an incremental improvement upon gzip or Brotli; instead, it fundamentally re-evaluates the “compression performance triangle” – the relationship between compression ratio, speed, and resource consumption.

  1. A “Modern Architecture” Approach to Compression

Traditional gzip, designed in the 1990s, primarily considered environments with single-threaded CPUs, slower disk I/O, and limited memory. Zstd, however, addresses a completely different era:

  • Multi-core CPUs and SIMD instruction sets are now the norm;
  • Data formats are highly structured (JSON, Avro, Protobuf);
  • Network I/O has become a performance bottleneck.

In this context, zstd’s core optimization objective is to make compression an operation suitable for real-time use, rather than a background task. Its compression mode supports fine-grained adjustment across 22 levels:

  • Lower levels are ideal for real-time traffic (e.g., API response compression);
  • Higher levels can be utilized for archiving or log storage (offering compression ratios that can surpass Brotli).

This positions zstd as both a powerful tool for high-performance data transmission and an effective solution for high-density storage.
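As a rough illustration of that level trade-off, the sketch below compresses the same file at several zstd levels using the third-party `zstandard` package. The input file name is only a placeholder, and actual ratios and timings depend entirely on the data.

```python
# Sketch of zstd's level trade-off: low levels suit real-time traffic,
# high levels suit archival. Assumes the "zstandard" package and any
# sizeable, compressible local file (the name below is a placeholder).
import time

import zstandard as zstd

with open("access.log", "rb") as f:   # placeholder input file
    data = f.read()

for level in (1, 3, 9, 19):
    cctx = zstd.ZstdCompressor(level=level)
    start = time.perf_counter()
    compressed = cctx.compress(data)
    elapsed_ms = (time.perf_counter() - start) * 1000
    ratio = len(data) / len(compressed)
    print(f"level {level:>2}: ratio {ratio:5.2f}x, {elapsed_ms:7.1f} ms")
```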

  2. Key Technical Features

| Feature | Description |
| --- | --- |
| High Compression Ratio | Achieves 10-20% higher compression than gzip in most test scenarios, while also offering faster decompression speeds. |
| Extremely Fast Decompression Performance | zstd’s decompression speed significantly surpasses Brotli’s, making it ideal for high-concurrency online services. |
| Adjustable Compression Level (1-22) | Provides flexible control over the balance between CPU utilization and compression ratio, adapting to diverse business workloads. |
| Multi-threaded Optimization | Natively supports multi-core compression, enabling full utilization of parallel resources in edge computing environments. |
| Dictionary Mode | For small, structurally similar data (e.g., API payloads), it can achieve an additional 2-5x compression improvement (see the sketch below). |

These features enable a previously elusive balance between “transmission efficiency” and “computational overhead.”
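The dictionary mode mentioned in the table can be sketched as follows. This is a minimal illustration using the third-party `zstandard` package, with made-up JSON samples standing in for small, structurally similar API payloads.

```python
# Minimal sketch of zstd dictionary mode for many small, similar payloads.
# Assumes the "zstandard" package; the JSON samples are made up.
import json

import zstandard as zstd

# Training samples: small JSON bodies that share the same structure.
samples = [
    json.dumps({"user": f"u{i}", "status": "active", "score": i}).encode()
    for i in range(1000)
]

# Train a shared 16 KiB dictionary from the samples.
dictionary = zstd.train_dictionary(16 * 1024, samples)

plain = zstd.ZstdCompressor(level=3)
with_dict = zstd.ZstdCompressor(level=3, dict_data=dictionary)

msg = json.dumps({"user": "u42", "status": "active", "score": 42}).encode()
print("without dictionary:", len(plain.compress(msg)), "bytes")
print("with dictionary:   ", len(with_dict.compress(msg)), "bytes")
# Decompression needs the same dictionary:
#   zstd.ZstdDecompressor(dict_data=dictionary).decompress(frame)
```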

Why Zstandard is Particularly Well-Suited for OpenResty Edge

Zstandard is not merely a simple algorithm upgrade; it represents a performance evolution perfectly tailored to the demands of edge architecture. In an edge computing environment, systems must simultaneously contend with three critical pressures: high concurrent requests, constrained computing resources, and extreme latency sensitivity. Traditional gzip often struggles to strike the right balance between compression ratio and speed in such scenarios – achieving high compression typically slows down processing, while faster compression often leads to wasted bandwidth.

Zstandard’s advent effectively resolves this “performance trade-off.” At low compression levels, it outperforms gzip in speed, and at medium to high levels, it still maintains an excellent compression ratio and decompression performance. This makes it ideal for adaptively balancing latency and resource utilization on edge nodes based on real-time load. More importantly, Zstandard offers a distinct advantage for dynamic content. It can stream-compress API responses or log data, completing the process within a millisecond-level latency budget, without needing to cache entire data blocks before processing. When combined with OpenResty Edge’s origin bandwidth optimization, this efficiency gain often translates into tangible system-wide performance improvements.
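As a hedged illustration of that streaming behavior (not OpenResty Edge’s internal code), the Python sketch below feeds chunks to the compressor as they arrive, using the third-party `zstandard` package’s zlib-style incremental interface, rather than buffering the full response first.

```python
# Sketch of streaming zstd compression: each chunk is fed to the
# compressor as it arrives, instead of buffering the whole response.
# Assumes the "zstandard" package; the chunk generator is a stand-in
# for an upstream response or log stream.
import zstandard as zstd

def generate_chunks():
    # Stand-in for data arriving piece by piece.
    for i in range(5):
        yield (f'{{"event": {i}, "msg": "hello"}}\n' * 100).encode()

cctx = zstd.ZstdCompressor(level=3)
cobj = cctx.compressobj()           # zlib-style incremental interface

compressed_parts = []
total_in = 0
for chunk in generate_chunks():
    total_in += len(chunk)
    compressed_parts.append(cobj.compress(chunk))  # may return b"" early on
compressed_parts.append(cobj.flush())              # finish the zstd frame

compressed = b"".join(compressed_parts)
print(f"streamed {total_in} bytes in, {len(compressed)} bytes out")
```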

This means that when you enable Zstandard compression in OpenResty Edge:

  • Users experience faster responses
  • Bandwidth utilization is higher
  • CPU overhead is actually lower

In the realm of performance engineering, achieving such a “triple-win” optimization is exceptionally rare. If gzip addressed the fundamental question of “can it be compressed?”, then Zstandard tackles the challenge of “how fast and how intelligently can compression be performed?”

How to Enable Zstandard (Zstd) Compression in OpenResty Edge

In OpenResty Edge, Zstd compression is designed as a multi-level configurable feature. This allows you to enable it globally, or exercise fine-grained control for specific applications or paths.

Hierarchical Configuration Architecture: Balancing Flexibility and Control

OpenResty Edge’s configuration system allows you to define compression strategies at various levels. The system automatically applies the highest-priority configuration, ensuring both flexibility and consistency. The configuration levels, from highest to lowest priority, are as follows:

  1. Page Rule Configuration
Location: Applications > HTTP Applications > [Specific Application] > Page Rules > Actions > Enable Gateway Zstandard / Set Zstandard Type

This provides the most granular control, allowing you to enable Zstandard compression for specific URL paths or conditions. It’s ideal for tailored optimization of performance-sensitive APIs or high-traffic resources.

  2. Global Custom Action Configuration
Location: Global Configuration > Global Custom Actions > Actions > Enable Gateway Zstandard / Set Zstandard Type

Allows you to create reusable Zstandard compression actions that can be consistently applied across multiple applications. This is particularly beneficial for multi-team collaboration and large-scale operations.

  3. Application Settings Configuration
Location: Applications > HTTP Applications > [Specific Application] > Settings > Zstandard

Define compression strategies for individual HTTP applications, enabling quick activation or adjustment of Zstandard compression. This is ideal for application-level performance tuning and gradual rollout/experimentation scenarios.

  4. Global Rewrite Rule Configuration
Location: Global Configuration > Global Rewrite Rules > Actions > Enable Gateway Zstandard / Set Zstandard Type

Enables defining system-level compression logic for specific conditions (e.g., request headers, path patterns). This ensures compression policies are consistently applied across multiple environments.

  5. Global General Settings
Location: Global Settings > General Settings > Zstandard Configuration (Enabled by default)

This section defines the system’s fundamental compression parameters. Zstd compression is enabled by default for core MIME types (such as HTML, CSS, JSON, JS). It establishes a stable default baseline for all higher-level configurations.

When multiple zstd compression rules are defined across different levels, OpenResty Edge will automatically apply the highest priority configuration to avoid policy conflicts. The Global General Settings serve as a foundational layer, ensuring the system can automatically enable efficient compression even in the absence of specific configurations.
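For intuition only, here is a toy Python model of the “highest-priority level wins” behavior described above. It is not OpenResty Edge’s implementation, and the level names are just labels for this sketch.

```python
# Toy model of "highest-priority configuration wins" for the zstd switch.
# This is NOT OpenResty Edge's code; the level names are illustrative labels.
from typing import Dict, Optional

# Priority order from the list above, highest first.
PRIORITY = [
    "page_rule",
    "global_custom_action",
    "application_settings",
    "global_rewrite_rule",
    "global_general_settings",
]

def effective_zstd(configured: Dict[str, Optional[bool]]) -> bool:
    """Return the zstd on/off decision from the highest level that sets one."""
    for level in PRIORITY:
        value = configured.get(level)
        if value is not None:
            return value
    return True  # Global General Settings enable zstd by default

# Example: the application disables zstd, but a page rule re-enables it.
print(effective_zstd({"application_settings": False, "page_rule": True}))
# -> True, because page rules have the highest priority
```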

Interoperability with Other Compression Algorithms

The OpenResty Edge platform supports enabling multiple compression algorithms simultaneously (gzip, brotli, zstd). The system automatically selects the optimal algorithm based on client support:

Priority: Zstandard > Brotli > Gzip

This means that when clients such as Chrome or curl indicate support for zstd, OpenResty Edge will automatically use zstd compression to deliver content, ensuring the lowest latency and highest compression ratio. Clients that do not support zstd will automatically fall back to brotli or gzip, requiring no additional configuration.
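To verify the negotiation from the client side, a small Python sketch along these lines could help. The URL is a placeholder, and it assumes the third-party `zstandard` package for decoding a zstd-encoded body.

```python
# Client-side check of content negotiation: advertise zstd support and
# inspect which Content-Encoding the server actually chose.
# The URL is a placeholder; assumes the "zstandard" package for decoding.
import io
import urllib.request

import zstandard as zstd

req = urllib.request.Request(
    "https://example.com/",                          # placeholder endpoint
    headers={"Accept-Encoding": "zstd, br, gzip"},   # advertise zstd support
)
with urllib.request.urlopen(req) as resp:
    encoding = resp.headers.get("Content-Encoding", "identity")
    raw = resp.read()

print("server chose:", encoding)
if encoding == "zstd":
    body = zstd.ZstdDecompressor().stream_reader(io.BytesIO(raw)).read()
    print(f"decoded {len(body)} bytes from {len(raw)} compressed bytes")
```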

Summary: Next Steps in Performance Optimization

Throughout the history of internet performance optimization, compression algorithms have often been regarded as low-level details—silently residing at the end of the transmission chain, rarely brought to the forefront. But the reality is: compression efficiency is transmission efficiency. Every byte, every millisecond saved, is amplified into significant user experience and operational cost differences across tens of thousands of requests. The advent of Zstandard is more than just a leap in algorithmic performance. It embodies a new optimization philosophy: performance gains don’t necessarily stem from “more resources,” but rather from “smarter processing methods.”

By enabling Zstandard compression in OpenResty Edge, you gain not just a more efficient encoding algorithm, but an entire smoother, lighter content delivery pipeline:

  • Faster Response Times — shortening Time to First Byte (TTFB) and ensuring dynamic content arrives more promptly;
  • Lower Bandwidth Costs — achieving higher compression ratios across global nodes, optimizing overall transmission load;
  • Smarter Architectural Choices — adaptive algorithms and browser negotiation mechanisms ensure optimal experience for various devices.

Future performance competition will no longer be solely about “processing quickly,” but about “faster transmission, lower costs, and a more stable experience.” Zstandard is precisely the new “transmission acceleration engine” for this era. Enabling it in OpenResty Edge might only involve a few lines of configuration changes, but it represents your first step in shifting performance optimization from “passive enhancement” to “active design.”

What is OpenResty Edge

OpenResty Edge is our all-in-one gateway software for microservices and distributed traffic architectures. It combines traffic management, private CDN construction, API gateway, security, and more to help you easily build, manage, and protect modern applications. OpenResty Edge delivers industry-leading performance and scalability to meet the demanding needs of high-concurrency, high-load scenarios. It can schedule traffic for containerized applications such as those running on Kubernetes (K8s) and manage massive numbers of domains, making it easy to meet the needs of large websites and complex applications.

If you like this tutorial, please subscribe to this blog site and/or our YouTube channel. Thank you!

About The Author

Yichun Zhang (GitHub handle: agentzh) is the original creator of the OpenResty® open-source project and the CEO of OpenResty Inc.

Yichun is one of the earliest advocates and leaders of “open-source technology”. He has worked at internationally renowned tech companies such as Cloudflare and Yahoo!. He is a pioneer of “edge computing”, “dynamic tracing” and “machine coding”, with over 22 years of programming and 16 years of open-source experience. Yichun is well-known in the open-source space as the project leader of OpenResty®, which has been adopted by more than 40 million global website domains.

OpenResty Inc., the enterprise software start-up founded by Yichun in 2017, has customers from some of the biggest companies in the world. Its flagship product, OpenResty XRay, is a non-invasive profiling and troubleshooting tool that significantly enhances and utilizes dynamic tracing technology. Its OpenResty Edge product is powerful distributed traffic management and private CDN software.

As an avid open-source contributor, Yichun has contributed more than a million lines of code to numerous open-source projects, including Linux kernel, Nginx, LuaJIT, GDB, SystemTap, LLVM, Perl, etc. He has also authored more than 60 open-source software libraries.