Supercharging NGINX with Lua (Part 3)

In Supercharging NGINX with Lua (Part 2) we demonstrated how to run some basic Lua inline inside our NGINX config and how to provide a custom (but simple) Lua authorization handler. In this post I'll walk through some additional extensions you can use in your Lua integrations.

Note that a myriad of extensions already exists (browse them on the OpenResty GitHub page). For now I'll choose two to demonstrate, the idea being that, with this information, you can integrate any of the others based on your requirements.

As with the previous post in this series, I am assuming that you have an NGINX installation capable of handling Lua scripts, and a basic working NGINX configuration file.

Defining Lua Search Paths

It's a good idea, before starting, to define the location in which your Lua modules reside, so that NGINX can locate them (they may not always live directly inside your NGINX configuration folder). To do this we use the lua_package_path directive.

Directly from the OpenResty documentation: lua_package_path sets the Lua module search path used by scripts specified by set_by_lua, content_by_lua and others. The path string is in standard Lua path form, and ;; can be used to stand for the original search paths.

For this example, let us assume that all of our Lua modules are going to be placed in /etc/nginx/lua. So we should go ahead and add this directive to the http block in our existing NGINX config:

lua_package_path  "/etc/nginx/lua/?.lua;;";  
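To make the mapping concrete, here is roughly where the directive sits and how require resolves against it (the server block below is just a placeholder sketch, not part of your real config):

```nginx
http {
    # "?" is substituted with the argument passed to require(), so
    # require("foo") loads /etc/nginx/lua/foo.lua and
    # require("resty.bar") loads /etc/nginx/lua/resty/bar.lua.
    # The trailing ";;" keeps the default search paths as a fallback.
    lua_package_path  "/etc/nginx/lua/?.lua;;";

    server {
        listen 80;
        # location blocks using *_by_lua* directives go here
    }
}
```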

Example 1. LRU Cache Extension

Our first example is an implementation of a Least Recently Used cache (LRU cache). The source project and documentation can be found here.

Using this cache we can calculate and store arbitrary values for quick subsequent retrieval, relying on the LRU cache to automatically expire items based on our requirements.

Installation

Clone the repository and place the file lib/resty/lrucache.lua in /etc/nginx/lua/resty/lrucache.lua. Alternatively, you can create this file from the raw file contents.

Using the Extension

Let's create a simple Lua module, /etc/nginx/lua/time_cache.lua, that will expose a method go: this method returns the time from the cache if it exists, or the current time if it does not.

local _M = {}  
local lrucache = require("resty.lrucache")  
-- cache time in seconds
local cachettl = 300  
-- we need to initialize the cache on the lua module level so that
-- it can be shared by all the requests served by each worker process:
-- allow up to 20 items in the cache
local c, err = lrucache.new(20)  
if not c then  
    return error("failed to create the cache: " .. (err or "unknown"))
end

function _M.go()  
  local osclock = c:get("osclock")
  if osclock == nil then
    osclock = tostring(os.clock())
    c:set("osclock", osclock, cachettl)
  end
  return osclock
end

return _M  
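A caveat worth noting: because the cache is created at the module level, each NGINX worker process gets its own independent copy, and get returns nil once an item's TTL has lapsed. A minimal sketch of the semantics the module above relies on (the key names here are arbitrary):

```lua
local lrucache = require("resty.lrucache")
local c = assert(lrucache.new(20))  -- allow up to 20 items

c:set("greeting", "hello", 300)  -- cache for 300 seconds
local v = c:get("greeting")      -- "hello" while still fresh
-- once the TTL lapses, get() returns nil (recent versions also
-- return the stale value as a second result), so the caller
-- recomputes and re-sets the value, exactly as _M.go() does
c:delete("greeting")             -- explicit eviction is also possible
```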

Then, in our NGINX config, let's call this from a location block:

location /cached_time {  
  content_by_lua_block {
    local time_cache = require("time_cache")
    local time = time_cache.go()
    ngx.say(time)
  }
}

Example 2. Redis Data Extension

This extension allows us to query Redis for data from inside our NGINX request. Again, the circumstances in which you would want to do this depend entirely on your application design and constraints. One example requirement is a single source of potentially externally changing data, where the NGINX worker data-sharing constraints would otherwise affect you. In a case like this an external data store (such as Redis) can be helpful.

The source project and documentation can be found here.

Installation

Clone the repository and place the file lib/resty/redis.lua in /etc/nginx/lua/resty/redis.lua. Alternatively, you can create this file from the raw file contents.

Using the Extension

The provided sample is pretty concise, so I'll replicate it here! In our NGINX config we add the following location block (note that you will need a Redis instance up and running and accepting requests on 127.0.0.1:6379):

location /redis-request {  
  content_by_lua_block {
      local redis = require "resty.redis"
      local red = redis:new()

      -- 1 sec
      red:set_timeout(1000) 

      -- or connect to a unix domain socket file listened
      -- by a redis server:
      -- local ok, err = red:connect("unix:/path/to/redis.sock")
      local ok, err = red:connect("127.0.0.1", 6379)
      if not ok then
          ngx.say("failed to connect: ", err)
          return
      end

      ok, err = red:set("dog", "an animal")
      if not ok then
          ngx.say("failed to set dog: ", err)
          return
      end

      ngx.say("set result: ", ok)
      local res, err = red:get("dog")
      if not res then
          ngx.say("failed to get dog: ", err)
          return
      end

      if res == ngx.null then
          ngx.say("dog not found.")
          return
      end

      ngx.say("dog: ", res)
      red:init_pipeline()
      red:set("cat", "Marry")
      red:set("horse", "Bob")
      red:get("cat")
      red:get("horse")
      local results, err = red:commit_pipeline()
      if not results then
          ngx.say("failed to commit the pipelined requests: ", err)
          return
      end

      for i, res in ipairs(results) do
          if type(res) == "table" then
              if res[1] == false then
                  ngx.say("failed to run command ", i, ": ", res[2])
              else
                  -- process the table value
              end
          else
              -- process the scalar value
          end
      end

      -- put it into the connection pool of size 100,
      -- with 10 seconds max idle time
      local ok, err = red:set_keepalive(10000, 100)
      if not ok then
          ngx.say("failed to set keepalive: ", err)
          return
      end

      -- or just close the connection right away:
      -- local ok, err = red:close()
      -- if not ok then
      --     ngx.say("failed to close: ", err)
      --     return
      -- end
  }
}
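As with the LRU cache example, nothing stops us moving this logic out of the inline block and into its own module under /etc/nginx/lua. A sketch of what that might look like, reusing the calls from the sample above (the file name redis_demo.lua and the get wrapper are hypothetical, not part of the lua-resty-redis API):

```lua
-- /etc/nginx/lua/redis_demo.lua (hypothetical helper module)
local redis = require("resty.redis")

local _M = {}

function _M.get(key)
  local red = redis:new()
  red:set_timeout(1000)  -- 1 sec

  local ok, err = red:connect("127.0.0.1", 6379)
  if not ok then
    return nil, "failed to connect: " .. err
  end

  local res, err = red:get(key)
  if not res then
    return nil, "failed to get " .. key .. ": " .. err
  end

  -- return the connection to the pool of size 100,
  -- with 10 seconds max idle time
  red:set_keepalive(10000, 100)

  if res == ngx.null then
    return nil, key .. " not found"
  end
  return res
end

return _M
```

A location block can then stay as small as the /cached_time one earlier, requiring "redis_demo" and calling its get method.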

Conclusion

Hopefully this series has demonstrated how to use Lua to effectively extend your NGINX capabilities. Here at Cloud 66 we're always interested in new and interesting technologies and applications of those technologies, and would love to hear from you!


Part I: To learn about Lua, read the Part 1 article, "Supercharging NGINX with LUA (Part 1)".

Part II: For examples of NGINX Lua integration, check out "Supercharging NGINX with Lua (Part 2)".


Vic van Gool

Vic is the CTO of Cloud 66. He oversees development, infrastructure and architecture at Cloud 66.


Have feedback? Please get in touch @cloud66 on Twitter.
