# Ledge
[![Build Status](https://travis-ci.org/ledgetech/ledge.svg?branch=master)](https://travis-ci.org/ledgetech/ledge)
An RFC compliant and [ESI](https://www.w3.org/TR/esi-lang) capable HTTP cache for [Nginx](http://nginx.org) / [OpenResty](https://openresty.org), backed by [Redis](http://redis.io).
Ledge can be utilised as a fast, robust and scalable alternative to Squid / Varnish etc, either installed standalone or integrated into an existing Nginx server or load balancer.
Moreover, it is particularly suited to applications where the origin is expensive or distant, making it desirable to serve from cache as optimistically as possible.
## Table of Contents
* [Installation](#installation)
* [Philosophy and Nomenclature](#philosophy-and-nomenclature)
* [Cache keys](#cache-keys)
* [Streaming design](#streaming-design)
* [Collapsed forwarding](#collapsed-forwarding)
* [Advanced cache patterns](#advanced-cache-patterns)
* [Minimal configuration](#minimal-configuration)
* [Config systems](#config-systems)
* [Events system](#events-system)
* [Caching basics](#caching-basics)
* [Purging](#purging)
* [Serving stale](#serving-stale)
* [Edge Side Includes](#edge-side-includes)
* [API](#api)
* [ledge.configure](#ledgeconfigure)
* [ledge.set_handler_defaults](#ledgeset_handler_defaults)
* [ledge.create\_handler](#ledgecreate_handler)
* [ledge.create\_worker](#ledgecreate_worker)
* [ledge.bind](#ledgebind)
* [handler.bind](#handlerbind)
* [handler.run](#handlerrun)
* [worker.run](#workerrun)
* [Handler configuration options](#handler-configuration-options)
* [Events](#events)
* [Administration](#administration)
* [Managing Qless](#managing-qless)
* [Licence](#licence)
## Installation
[OpenResty](http://openresty.org/) is a superset of [Nginx](http://nginx.org), bundling [LuaJIT](http://luajit.org/) and the [lua-nginx-module](https://github.com/openresty/lua-nginx-module) as well as many other things. Whilst it is possible to build all of these things into Nginx yourself, we recommend using the latest OpenResty.
### 1. Download and install:
* [OpenResty](http://openresty.org/) >= 1.11.x
* [Redis](http://redis.io/download) >= 2.8.x
* [LuaRocks](https://luarocks.org/)
### 2. Install Ledge using LuaRocks:
```
luarocks install ledge
```
This will install the latest stable release and all of its Lua module dependencies. If you are installing manually without LuaRocks, the dependencies are:
* [lua-resty-http](https://github.com/pintsized/lua-resty-http)
* [lua-resty-redis-connector](https://github.com/pintsized/lua-resty-redis-connector)
* [lua-resty-qless](https://github.com/pintsized/lua-resty-qless)
* [lua-resty-cookie](https://github.com/cloudflare/lua-resty-cookie)
* [lua-ffi-zlib](https://github.com/hamishforbes/lua-ffi-zlib)
* [lua-resty-upstream](https://github.com/hamishforbes/lua-resty-upstream) *(optional, for load balancing / healthchecking upstreams)*
### 3. Review OpenResty documentation
If you are new to OpenResty, it's quite important to review the [lua-nginx-module](https://github.com/openresty/lua-nginx-module) documentation on how to run Lua code in Nginx, as the environment is unusual. Specifically, it's useful to understand the meaning of the different Nginx phase hooks such as `init_by_lua` and `content_by_lua`, as well as how the `lua-nginx-module` locates Lua modules with the [lua_package_path](https://github.com/openresty/lua-nginx-module#lua_package_path) directive.
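The [Minimal configuration](#minimal-configuration) section covers this properly; as an orientation sketch only, wiring Ledge into these phases looks roughly like the following (the module path and Redis URL are placeholder assumptions, and handler options such as the upstream are omitted for brevity):

```
http {
    # Make Ledge and its dependencies resolvable if they are not on the default path
    lua_package_path "/path/to/lua-modules/?.lua;;";

    init_by_lua_block {
        -- Global configuration, e.g. where cache metadata lives
        require("ledge").configure({
            redis_connector_params = {
                url = "redis://127.0.0.1:6379/0",
            },
        })
    }

    init_worker_by_lua_block {
        -- One long-lived worker per Nginx worker process, for background jobs
        require("ledge").create_worker():run()
    }

    server {
        listen 80;

        location / {
            content_by_lua_block {
                -- A short-lived handler per request
                require("ledge").create_handler():run()
            }
        }
    }
}
```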
[Back to TOC](#table-of-contents)
## Philosophy and Nomenclature
The central module is called `ledge`, and provides factory methods for creating `handler` instances (for handling a request) and `worker` instances (for running background tasks). The `ledge` module is also where global configuration is managed.
A `handler` is short lived. It is typically created at the beginning of the Nginx `content` phase for a request, and when its [run()](#handlerrun) method is called, takes responsibility for processing the current request and delivering a response. When [run()](#handlerrun) has completed, HTTP status, headers and body will have been delivered to the client.
A `worker` is long lived, and there is one per Nginx worker process. It is created when Nginx starts a worker process, and dies when the Nginx worker dies. The `worker` pops queued background jobs and processes them.
An `upstream` is the only thing which must be manually configured, and points to another HTTP host where actual content lives. Typically one would use DNS to resolve client connections to the Nginx server running Ledge, and tell Ledge where to fetch from with the `upstream` configuration. As such, Ledge isn't designed to work as a forwarding proxy.
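For illustration, a minimal sketch of pointing Ledge at an origin using the handler defaults API described below (the host and port are placeholders):

```
init_by_lua_block {
    require("ledge").set_handler_defaults({
        upstream_host = "origin.example.com",
        upstream_port = 80,
    })
}
```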
[Redis](http://redis.io) is used for much more than cache storage. We rely heavily on its data structures to maintain cache `metadata`, as well as embedded Lua scripts for atomic task management and so on. By default, all cache body data and `metadata` will be stored in the same Redis instance. The location of cache `metadata` is global, set when Nginx starts up.
Cache body data is handled by the `storage` system, and as mentioned, by default shares the same Redis instance as the `metadata`. However, `storage` is abstracted via a [driver system](#storage_driver), making it possible to store cache body data in a separate Redis instance, or in a group of horizontally scalable Redis instances via a [proxy](https://github.com/twitter/twemproxy), or to roll your own `storage` driver, for example targeting PostgreSQL or even simply a filesystem. It's worth remembering that by default all cache storage uses Redis, and as such is bound by system memory.
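For example, a sketch of directing body storage at a separate Redis instance might look like this (the option names are assumed from the [handler configuration options](#handler-configuration-options), and the address is a placeholder):

```
require("ledge").set_handler_defaults({
    storage_driver = "redis",        -- the default Redis storage driver
    storage_driver_config = {
        redis_connector_params = {
            -- a dedicated Redis instance (or proxy) for cache bodies only
            url = "redis://10.0.0.2:6379/0",
        },
    },
})
```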
[Back to TOC](#table-of-contents)
### Cache keys
A goal of any caching system is to safely maximise HIT potential. That is, to normalise away factors which would needlessly split the cache, so that as many requests as possible share the same cache entries.
This is tricky to generalise, so by default Ledge builds the cache key from sane defaults taken from the request URI, and provides a means to customise this by altering the [cache\_key\_spec](#cache_key_spec).
URI arguments are sorted alphabetically by default, so `http://example.com?a=1&b=2` would hit the same cache entry as `http://example.com?b=2&a=1`.
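As a hedged sketch of what a customised spec might look like (the exact format is defined under [cache\_key\_spec](#cache_key_spec); the fields below simply mirror the defaults described above):

```
require("ledge").set_handler_defaults({
    -- Illustrative only: build keys from the scheme, host, path and (sorted) args
    cache_key_spec = {
        "scheme",
        "host",
        "uri",
        "args",
    },
})
```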
[Back to TOC](#table-of-contents)
### Streaming design
HTTP response sizes can be wildly different, sometimes tiny and sometimes huge, and it's not always possible to know the total size up front.
To guarantee predictable memory usage regardless of response size, Ledge operates a streaming design, meaning it only ever operates on a single `buffer` per request at a time. This is equally true when fetching upstream, reading from cache, or serving to the client.
It's also true (mostly) when processing [ESI](#edge-side-includes) instructions, except where an instruction is found to span multiple buffers. In that case, we continue buffering until a complete instruction can be understood, up to a [configurable limit](#esi_max_size).
This streaming design also improves latency, since we start serving the first `buffer` to the client as soon as we're done with it, rather than fetching and saving an entire resource before serving. The `buffer` size can be [tuned](#buffer_size), even on a per-`location` basis.
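For example, a location serving large objects could be given a bigger buffer for that `location` only (the value below is arbitrary):

```
location /downloads {
    content_by_lua_block {
        require("ledge").create_handler({
            buffer_size = 2^17,  -- larger (128KB) buffers for this location only
        }):run()
    }
}
```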
[Back to TOC](#table-of-contents)
### Collapsed forwarding
Ledge can attempt to collapse concurrent origin requests for known (previously) cacheable resources into a single upstream request. That is, if an upstream request for a resource is in progress, subsequent concurrent requests for the same resource will not bother the upstream, and instead wait for the first request to finish.
This is particularly useful for reducing upstream load when a spike of traffic occurs for expired and expensive content (since the chance of concurrent requests is higher for slower content).
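Collapsed forwarding is enabled per handler; a minimal sketch, assuming the `enable_collapsed_forwarding` option listed in the [handler configuration options](#handler-configuration-options):

```
require("ledge").set_handler_defaults({
    enable_collapsed_forwarding = true,
})
```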
[Back to TOC](#table-of-contents)
### Advanced cache patterns