Today I’ll demonstrate how to set up the simplest reverse proxy and load balancer in OpenResty Edge.
We manage all the gateway server nodes and their configurations in a central place, the Edge Admin web console.
Let’s go to the web console of OpenResty Edge. This is our sample deployment of the console. Every user has their own deployment.
Let’s log in with our user name and password.
Other authentication methods are also configurable.
Sign in now.
We’re now at the application list page. There are many existing applications we created previously. Each application is like a virtual host or virtual server in the same gateway.
Here we’ll create a new application.
We will just create one of the HTTP type, which is the default.
We assign a single domain,
test-edge.com, to this application.
We may add more domains, including wildcard domains.
We only care about port 80 in this example.
Let’s create this application!
Now we are inside this new application. It’s empty.
Let’s go to the Upstreams page.
Obviously, we currently have no upstreams defined.
Create a new upstream for our backend servers.
We give this upstream a name, say, my_backend.
For simplicity, we just use the HTTP protocol here.
For real production use, we would almost always want HTTPS instead.
Here we need the backend server’s IP address.
We’ve already prepared a sample backend server at this IP address.
It simply returns the default index page of the open source OpenResty server software.
It could be anything that speaks HTTP.
We can now fill out the host field for the backend server.
We keep port 80 as it is.
We may add more servers to this upstream in the future.
Now save this upstream.
We can see this
my_backend upstream is already there.
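OpenResty Edge generates and manages the actual gateway configuration for us, but conceptually this upstream corresponds to an upstream block in open-source nginx terms. A rough sketch (the backend IP below is a placeholder, not the real one used in this demo):

```nginx
# Conceptual open-source nginx equivalent of the my_backend upstream
# created in the console. Round-robin is the default balancing policy.
upstream my_backend {
    server 192.0.2.10:80;  # placeholder: replace with your backend server's IP
}
```

Adding more server lines to such a block is analogous to adding more servers to the upstream in the console.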
Now let’s create a new page rule to actually make use of this upstream.
We currently don’t have any page rules defined.
Create a new page rule.
For this page rule, we do not specify a condition. This way it will apply to all incoming requests.
We could, however, limit this proxy page rule to certain requests only.
We disable the condition again.
Let’s add a proxy target here.
Let’s choose an upstream.
Here we have our newly created upstream present.
We also have some pre-defined global upstreams. They can be reused by all the applications including this one.
We select our my_backend upstream.
Our upstream has only one server.
So the balancing policy does not really matter here.
We’ll just keep the default round-robin policy.
We may also want to enable caching of the responses. We’ll cover this topic in another video.
Finally, create this rule for real.
We can see the proxy page rule is already listed on the page rule list.
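Again, Edge generates and distributes the real configuration automatically; in rough open-source nginx terms, an unconditional proxy page rule for this application corresponds to something like:

```nginx
# Conceptual nginx equivalent of the proxy page rule: no condition,
# so every request to this virtual host is proxied to the upstream.
server {
    listen 80;
    server_name test-edge.com;

    location / {
        proxy_pass http://my_backend;
        proxy_set_header Host $host;  # preserve the original Host header
    }
}
```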
The last step is to make a new configuration release. It will push out our pending changes to all our gateway servers.
Let’s click on this link to make a new release.
We have a chance to review our changes before pushing them out.
This is our first change.
It is our addition of the my_backend upstream.
This is our second change.
This is indeed our proxy page rule.
Now we make a release to all our gateway servers.
We can watch the configuration synchronization progress in real time. It is pushed out to the whole gateway network.
Now it is fully synchronized. As we can see, this sample deployment has 13 servers in the gateway network.
We do incremental config synchronization across the whole network.
We live-update config on the request level. None of the application-level configuration changes require server reload, restart, or binary upgrade. So it is very scalable even when you have many different users making frequent releases.
We can also check all the gateway servers grouped by clusters.
This is just our sample deployment around the world.
Our users are free to deploy their gateway servers anywhere they like, even spanning different clouds and hosting services.
This column shows the configuration synchronization status for each gateway server.
We can test a gateway server near San Francisco here.
Its public IP address is this.
We copy this IP address to test this server directly.
On the terminal, we can use
curl to test this San Francisco gateway server.
curl -sS -H 'Host: test-edge.com' 'http://126.96.36.199/' | less
Note that we specify the
Host request header. This is because the same server is serving many different virtual hosts.
Send the request.
It works as expected! We got the default OpenResty index page just like accessing the backend server directly.
We can check the response header too via the
-I option of curl:
curl -I -H 'Host: test-edge.com' 'http://188.8.131.52/'
There are some headers created by the OpenResty Edge gateway software.
Alternatively, we could bind the IP address to the host name in this local
/etc/hosts file. Then we’ll be able to point a web browser to this domain directly.
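For example, to point test-edge.com at the San Francisco gateway server we tested above, we could add a line like this to /etc/hosts:

```
126.96.36.199 test-edge.com
```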
For the real setup, we should add the gateway server IP addresses to our DNS name servers.
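For instance, using the two gateway IP addresses from our tests above, the A records might look like this in standard zone-file syntax (a hypothetical sketch; Edge or a third-party DNS provider would manage the real records):

```
; Hypothetical A records pointing the domain at gateway servers
test-edge.com.  300  IN  A  126.96.36.199
test-edge.com.  300  IN  A  188.8.131.52
```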
Here we haven’t configured this domain’s DNS records yet. We’ll demonstrate it in another video.
OpenResty Edge can also work as an authoritative DNS server network at the same time.
This is optional, though. Users can still choose to use third-party DNS name servers. That’s all I’d like to cover today.
This article and its associated video are both generated automatically from a simple screenplay file.
Yichun Zhang is the creator of the OpenResty® open source project. He is also the founder and CEO of OpenResty Inc. He has contributed a dozen open source Nginx third-party modules, quite a few Nginx and LuaJIT core patches, and designed products like OpenResty XRay and OpenResty Edge.
We provide the Chinese translation for this article on blog.openresty.com.cn. We welcome interested readers to contribute translations in other natural languages as long as the full article is translated without any omissions. We thank them in advance.
We always welcome talented and enthusiastic engineers to join our team at OpenResty Inc. to explore the internals of various open source software and build powerful analyzers and visualizers for real-world applications built atop that software. If you are interested, please send your resume to email@example.com. Thank you!