
Dynamic upstream Configuration in Nginx: A Practical Summary


1. Introduction

nginx, a high-performance web server and reverse proxy, plays a central role in modern internet architecture, and its upstream module is the core component behind nginx load balancing. Traditionally, changing an upstream means editing the configuration file and reloading nginx, which is too rigid for dynamic, cloud-native environments. This article walks through the ways nginx upstreams can be configured dynamically, from basic concepts to advanced practice.

2. upstream basics

2.1 What is an upstream

In nginx, the upstream module defines a group of backend servers to which nginx can proxy requests, balancing load across them.

http {
    upstream backend {
        server backend1.example.com weight=5;
        server backend2.example.com;
        server backup1.example.com backup;
    }

    server {
        location / {
            proxy_pass http://backend;
        }
    }
}

2.2 upstream load-balancing algorithms

nginx upstreams support several load-balancing algorithms:

- Round robin (the default): requests are distributed across servers in turn, honoring the weight parameter.
- ip_hash: requests from the same client IP always go to the same server, giving simple session affinity.
- least_conn: each request goes to the server with the fewest active connections.
- hash key [consistent]: distribution based on an arbitrary key (URI, header, variable), optionally with consistent hashing.
- random [two]: picks a server at random, optionally choosing the better of two random candidates.
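As a quick illustration, switching algorithms is a one-line change inside the upstream block (server addresses here are placeholders):

upstream backend_least_conn {
    least_conn;
    server 10.0.0.1:80;
    server 10.0.0.2:80;
}

upstream backend_ip_hash {
    ip_hash;
    server 10.0.0.1:80;
    server 10.0.0.2:80;
}

upstream backend_uri_hash {
    hash $request_uri consistent;
    server 10.0.0.1:80;
    server 10.0.0.2:80;
}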

2.3 upstream server parameters

Each server in an upstream can carry a number of parameters:

server address [parameters];

Commonly used parameters include:

- weight=N: relative weight for load balancing (default 1).
- max_fails=N and fail_timeout=T: after N failed attempts within T, the server is considered unavailable for T.
- backup: the server only receives traffic when the primary servers are down.
- down: marks the server as permanently unavailable.
- max_conns=N: limits the number of simultaneous active connections to the server.
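Combined on a single server line (addresses and values are illustrative):

upstream backend {
    server 10.0.0.1:80 weight=5 max_fails=3 fail_timeout=30s;
    server 10.0.0.2:80 max_conns=200;
    server 10.0.0.3:80 backup;
}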

3. Limitations of traditional upstream configuration

3.1 Problems with static configuration

The main problems with a traditional, static upstream configuration:

- Every change to the backend list means editing the configuration file and running nginx -s reload.
- Frequent reloads churn worker processes and can disturb long-lived connections.
- Manual edits are error-prone and hard to automate at scale.
- In auto-scaling or containerized environments, backend addresses change faster than configuration edits can keep up with.

3.2 The need for dynamic service discovery

In modern architectures, service discovery becomes a necessity:

- Instances register and deregister themselves (in Consul, etcd, Kubernetes, and similar systems) as they are created and destroyed.
- The proxy layer should pick up those changes automatically, without configuration edits or reloads.
- Health information from the registry should be reflected in traffic routing as quickly as possible.

4. Approaches to dynamic upstream configuration in nginx

4.1 NGINX Plus (commercial edition)

NGINX Plus provides an official dynamic-configuration API. The configuration below uses the older upstream_conf interface; newer releases (NGINX Plus R13 and later) replace it with the general-purpose api directive, shown after the curl examples:

http {
    upstream backend {
        zone backend 64k;
        server 10.0.0.1:80;
    }

    server {
        listen 80;
        server_name example.com;
        
        location / {
            proxy_pass http://backend;
        }
        
        # NGINX Plus API endpoint (legacy upstream_conf interface)
        location /upstream_conf {
            upstream_conf;
            allow 127.0.0.1;
            deny all;
        }
    }
}

Managing the upstream dynamically through the API:

# add a server
curl -X POST -d 'server=10.0.0.2:80' 'http://localhost/upstream_conf?upstream=backend'

# remove a server
curl -X DELETE 'http://localhost/upstream_conf?upstream=backend&id=0'

# check server status
curl 'http://localhost/upstream_conf?upstream=backend'
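On current NGINX Plus releases the same operations go through the api directive and its REST interface instead. A minimal sketch, assuming the API is exposed at /api and that the version prefix (here /9/) matches your release:

# inside the server block
location /api {
    api write=on;
    allow 127.0.0.1;
    deny all;
}

The upstream can then be managed over HTTP:

# add a server
curl -X POST -H 'Content-Type: application/json' \
     -d '{"server": "10.0.0.2:80"}' \
     http://localhost/api/9/http/upstreams/backend/servers

# list servers
curl http://localhost/api/9/http/upstreams/backend/servers

# remove the server with id 1
curl -X DELETE http://localhost/api/9/http/upstreams/backend/servers/1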

4.2 OpenResty

OpenResty, built on nginx and LuaJIT, provides powerful extension points:

http {
    lua_package_path "/path/to/lua/scripts/?.lua;;";
    
    upstream backend {
        server 0.0.0.1; # placeholder; unused once balancer_by_lua selects peers
        balancer_by_lua_block {
            local balancer = require "ngx.balancer"
            local upstream = require "upstream"
            
            local peer = upstream.get_peer()
            if peer then
                balancer.set_current_peer(peer.ip, peer.port)
            end
        }
    }
    
    init_worker_by_lua_block {
        local upstream = require "upstream"
        upstream.init()
    }
    
    server {
        listen 80;
        
        location / {
            proxy_pass http://backend;
        }
        
        location /upstream {
            content_by_lua_block {
                local upstream = require "upstream"
                
                if ngx.var.request_method == "GET" then
                    upstream.list_peers()
                elseif ngx.var.request_method == "POST" then
                    upstream.add_peer(ngx.var.arg_ip, ngx.var.arg_port)
                elseif ngx.var.request_method == "DELETE" then
                    upstream.remove_peer(ngx.var.arg_ip, ngx.var.arg_port)
                end
                end
            }
        }
    }
}

The corresponding Lua module:

-- upstream.lua
local _m = {}

local peers = {}
local current_index = 1

function _m.init()
    -- initialize from a configuration center or service-discovery backend
    peers = {
        {ip = "10.0.0.1", port = 80},
        {ip = "10.0.0.2", port = 80}
    }
end

function _m.get_peer()
    if #peers == 0 then
        return nil
    end
    
    local peer = peers[current_index]
    current_index = current_index % #peers + 1
    
    return peer
end

function _m.add_peer(ip, port)
    table.insert(peers, {ip = ip, port = port})
    ngx.say("peer added: " .. ip .. ":" .. port)
end

function _m.remove_peer(ip, port)
    for i, peer in ipairs(peers) do
        if peer.ip == ip and peer.port == port then
            table.remove(peers, i)
            ngx.say("peer removed: " .. ip .. ":" .. port)
            return
        end
    end
    ngx.say("peer not found: " .. ip .. ":" .. port)
end

function _m.list_peers()
    ngx.say("current peers:")
    for _, peer in ipairs(peers) do
        ngx.say(peer.ip .. ":" .. peer.port)
    end
end

return _m
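With the module in place, the management endpoint can be exercised with plain curl. A quick sketch of typical calls; note that because the peer list lives in each worker's Lua VM, a multi-worker deployment would normally keep this state in an ngx.shared dict or an external store instead:

# list current peers
curl http://localhost/upstream

# add a peer
curl -X POST "http://localhost/upstream?ip=10.0.0.3&port=80"

# remove a peer
curl -X DELETE "http://localhost/upstream?ip=10.0.0.3&port=80"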

4.3 Third-party module: nginx-upsync-module

nginx-upsync-module is a popular third-party module that syncs upstream configuration from service-discovery backends such as Consul and etcd.

Building and installing:

# download the nginx source
wget http://nginx.org/download/nginx-1.20.1.tar.gz
tar -zxvf nginx-1.20.1.tar.gz

# download nginx-upsync-module
git clone https://github.com/weibocom/nginx-upsync-module.git

# build and install
cd nginx-1.20.1
./configure --add-module=../nginx-upsync-module
make && make install

Configuration example:

http {
    upstream backend {
        upsync 127.0.0.1:8500/v1/kv/upstreams/backend upsync_timeout=6m upsync_interval=500ms 
                 upsync_type=consul strong_dependency=off;
        upsync_dump_path /usr/local/nginx/conf/servers/servers_backend.conf;
        
        include /usr/local/nginx/conf/servers/servers_backend.conf;
    }
    
    server {
        listen 80;
        
        location / {
            proxy_pass http://backend;
        }
        
        # upsync status page
        location /upstream_list {
            upstream_show;
        }
    }
}
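Backends are then managed through Consul's KV store under the prefix named in the upsync directive. A sketch based on the module's documented layout (the address 10.0.0.3:80 and the parameter values are illustrative):

# add a backend; upsync polls this prefix and updates the upstream in memory
curl -X PUT -d '{"weight":1, "max_fails":2, "fail_timeout":10}' \
     http://127.0.0.1:8500/v1/kv/upstreams/backend/10.0.0.3:80

# remove it again
curl -X DELETE http://127.0.0.1:8500/v1/kv/upstreams/backend/10.0.0.3:80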

4.4 DNS-based dynamic resolution

Backends can also be discovered through nginx's DNS resolution. Note that the resolve and service= parameters on an upstream server line, used below, are NGINX Plus features; a common open-source workaround follows the example.

http {
    resolver 10.0.0.2 valid=10s;
    
    upstream backend {
        zone backend 64k;
        server backend-service.namespace.svc.cluster.local service=http resolve;
    }
    
    server {
        listen 80;
        
        location / {
            proxy_pass http://backend;
        }
    }
}
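For open-source nginx, a common approximation is to put the hostname in a variable, which forces re-resolution at request time using the configured resolver (this bypasses the upstream block, so upstream-level load-balancing parameters do not apply):

http {
    resolver 10.0.0.2 valid=10s;

    server {
        listen 80;

        location / {
            set $backend backend-service.namespace.svc.cluster.local;
            proxy_pass http://$backend;
        }
    }
}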

5. Service-discovery integration with Consul

5.1 Registering services in Consul

First, register the services in Consul:

# register a service
curl -X PUT -d '{
  "id": "backend1",
  "name": "backend",
  "address": "10.0.0.1",
  "port": 80,
  "tags": ["v1", "primary"]
}' http://127.0.0.1:8500/v1/agent/service/register

# register a second instance
curl -X PUT -d '{
  "id": "backend2",
  "name": "backend",
  "address": "10.0.0.2",
  "port": 80,
  "tags": ["v1", "secondary"]
}' http://127.0.0.1:8500/v1/agent/service/register
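Registration can be verified through Consul's catalog API; this is also the endpoint the nginx-side integration below reads:

# list the registered instances of the "backend" service
curl http://127.0.0.1:8500/v1/catalog/service/backend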

5.2 Integrating on the nginx side

Consul can also be queried from nginx itself through the njs module (ngx_http_js_module). Keep in mind that njs is not Node.js: it cannot require npm packages, has no timers, and ordinary module variables do not persist between requests, so shared state must live in a shared dict. The sketch below assumes njs 0.8.0 or later (for ngx.fetch and js_shared_dict_zone) and selects a backend via js_set rather than a balancer hook, which njs does not provide:

load_module modules/ngx_http_js_module.so;

http {
    js_path "/etc/nginx/js/";
    js_import main from consul_upstream.js;

    # shared dict that caches the backend list across requests and workers
    js_shared_dict_zone zone=backends:1m;

    # evaluated per request; must be synchronous, so it only reads the cache
    js_set $backend_server main.resolve_backend;

    server {
        listen 80;

        location / {
            proxy_pass http://$backend_server;
        }

        # dynamic update endpoint: pulls the current list from Consul
        location /upstream/update {
            js_content main.update_upstream;
        }
    }
}

The njs module:

// consul_upstream.js
// njs cannot load Node.js modules such as 'consul' and has no setInterval,
// so Consul is queried directly over HTTP and the result is cached in the
// shared dict declared as js_shared_dict_zone zone=backends.

const CONSUL_URL = 'http://127.0.0.1:8500/v1/catalog/service/backend';

// js_set handler: must be synchronous, so it only reads the cached list
function resolve_backend(r) {
    const cached = ngx.shared.backends.get('servers');
    if (!cached) {
        return '127.0.0.1:11111'; // placeholder until the cache is filled
    }

    const servers = JSON.parse(cached);
    if (servers.length === 0) {
        return '127.0.0.1:11111';
    }

    // simple rotation based on the connection serial number
    const idx = Number(r.variables.connection) % servers.length;
    return servers[idx];
}

// js_content handler: may use the asynchronous ngx.fetch API
async function update_upstream(r) {
    try {
        const reply = await ngx.fetch(CONSUL_URL);
        const services = await reply.json();

        const servers = services.map(s => s.ServiceAddress + ':' + s.ServicePort);
        ngx.shared.backends.set('servers', JSON.stringify(servers));

        r.headersOut['Content-Type'] = 'application/json';
        r.return(200, JSON.stringify({status: 'updated', servers: servers}));
    } catch (e) {
        r.return(502, JSON.stringify({status: 'error', message: e.message}));
    }
}

export default { resolve_backend, update_upstream };
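The cache is refreshed by hitting the update endpoint, for example from a cron job, a CI hook, or a Consul watch handler:

curl http://localhost/upstream/update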

6. Dynamic upstreams in Kubernetes

6.1 Using the NGINX Ingress Controller

In Kubernetes, the NGINX Ingress Controller manages upstreams automatically:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: api-service
            port:
              number: 80
      - path: /web
        pathType: Prefix
        backend:
          service:
            name: web-service
            port:
              number: 80

6.2 Building a custom controller

A sketch of a custom upstream controller that watches Services and rewrites nginx configuration (it polls for simplicity; a production controller would use informers/watch and react to Endpoints changes):

package main

import (
    "context"
    "encoding/json"
    "fmt"
    "net/http"
    "os"
    "os/exec"
    "strings"
    "time"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/rest"
)

type upstreamManager struct {
    clientset       *kubernetes.Clientset
    nginxConfigPath string // directory that holds the generated upstream files
}

func newUpstreamManager() (*upstreamManager, error) {
    config, err := rest.InClusterConfig()
    if err != nil {
        return nil, err
    }

    clientset, err := kubernetes.NewForConfig(config)
    if err != nil {
        return nil, err
    }

    return &upstreamManager{
        clientset:       clientset,
        nginxConfigPath: "/etc/nginx/conf.d/upstreams",
    }, nil
}

func (um *upstreamManager) updateUpstream(serviceName, namespace string) error {
    endpoints, err := um.clientset.CoreV1().Endpoints(namespace).Get(
        context.TODO(), serviceName, metav1.GetOptions{})
    if err != nil {
        return err
    }

    var servers []string
    for _, subset := range endpoints.Subsets {
        for _, address := range subset.Addresses {
            for _, port := range subset.Ports {
                servers = append(servers,
                    fmt.Sprintf("server %s:%d;", address.IP, port.Port))
            }
        }
    }

    configContent := fmt.Sprintf("upstream %s {\n    %s\n}\n",
        serviceName, strings.Join(servers, "\n    "))

    err = os.WriteFile(fmt.Sprintf("%s/%s.conf", um.nginxConfigPath, serviceName),
        []byte(configContent), 0644)
    if err != nil {
        return err
    }

    // reload nginx so it picks up the regenerated upstream file
    cmd := exec.Command("nginx", "-s", "reload")
    return cmd.Run()
}

func (um *upstreamManager) watchServices() {
    for {
        services, err := um.clientset.CoreV1().Services("").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            fmt.Printf("error listing services: %v\n", err)
            time.Sleep(5 * time.Second)
            continue
        }

        for _, service := range services.Items {
            if service.Spec.Type == corev1.ServiceTypeClusterIP {
                if err := um.updateUpstream(service.Name, service.Namespace); err != nil {
                    fmt.Printf("error updating upstream for %s: %v\n", service.Name, err)
                }
            }
        }

        time.Sleep(30 * time.Second)
    }
}

func main() {
    manager, err := newUpstreamManager()
    if err != nil {
        panic(err)
    }

    go manager.watchServices()

    http.HandleFunc("/health", func(w http.ResponseWriter, r *http.Request) {
        w.WriteHeader(http.StatusOK)
        json.NewEncoder(w).Encode(map[string]string{"status": "healthy"})
    })

    http.ListenAndServe(":8080", nil)
}
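For a Service named api-service with two ready endpoints, the controller would write a file such as /etc/nginx/conf.d/upstreams/api-service.conf with the following shape (the pod IPs are illustrative), which the main nginx configuration then pulls in via an include:

upstream api-service {
    server 10.244.1.15:80;
    server 10.244.2.23:80;
}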

7. Advanced dynamic configuration strategies

7.1 Weight adjustment based on metrics

Adjust server weights dynamically based on backend performance metrics:

-- dynamic_weight.lua
local _m = {}

local metrics = {}
local weight_cache = {}

function _m.collect_metrics(ip, port)
    -- simulated metric collection; replace with real monitoring data
    local cpu_usage = math.random(10, 90)
    local memory_usage = math.random(20, 80)
    local active_connections = math.random(0, 1000)
    
    metrics[ip .. ":" .. port] = {
        cpu = cpu_usage,
        memory = memory_usage,
        connections = active_connections,
        timestamp = ngx.now()
    }
    
    return metrics[ip .. ":" .. port]
end

function _m.calculate_weight(ip, port)
    local metric = _m.collect_metrics(ip, port)
    
    -- derive a weight from the metrics
    local base_weight = 100
    
    -- higher CPU usage -> lower weight
    local cpu_factor = (100 - metric.cpu) / 100
    
    -- higher memory usage -> lower weight
    local memory_factor = (100 - metric.memory) / 100
    
    -- more active connections -> lower weight
    local conn_factor = math.max(0, 1 - metric.connections / 1000)
    
    local calculated_weight = math.floor(base_weight * cpu_factor * memory_factor * conn_factor)
    calculated_weight = math.max(1, math.min(calculated_weight, 100))
    
    weight_cache[ip .. ":" .. port] = calculated_weight
    return calculated_weight
end

function _m.get_weight(ip, port)
    if not weight_cache[ip .. ":" .. port] then
        return _m.calculate_weight(ip, port)
    end
    
    -- recalculate the weight every 30 seconds
    if ngx.now() - metrics[ip .. ":" .. port].timestamp > 30 then
        return _m.calculate_weight(ip, port)
    end
    
    return weight_cache[ip .. ":" .. port]
end

return _m
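The module above only computes weights; how they are consumed is up to the balancer. A sketch of a weighted random pick inside balancer_by_lua_block, assuming the module is on the Lua path as dynamic_weight and that the peer list comes from whatever discovery mechanism is in use:

-- inside: upstream backend { balancer_by_lua_block { ... } }
local balancer = require "ngx.balancer"
local dynamic_weight = require "dynamic_weight"

-- illustrative peer list; in practice this comes from service discovery
local peers = {
    {ip = "10.0.0.1", port = 80},
    {ip = "10.0.0.2", port = 80},
}

-- weighted random selection over the computed weights
local weights, total = {}, 0
for i, p in ipairs(peers) do
    weights[i] = dynamic_weight.get_weight(p.ip, p.port)
    total = total + weights[i]
end

local r = math.random() * total
for i, p in ipairs(peers) do
    r = r - weights[i]
    if r <= 0 then
        balancer.set_current_peer(p.ip, p.port)
        return
    end
end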

7.2 Health checks and circuit breaking

Implement smarter health checking and circuit breaking. Note that the check* directives below come from the third-party nginx_upstream_check_module (also shipped with Tengine), not from stock nginx:

http {
    upstream backend {
        server 10.0.0.1:80;
        server 10.0.0.2:80;
        
        # active health-check configuration (nginx_upstream_check_module)
        check interval=3000 rise=2 fall=3 timeout=1000 type=http;
        check_http_send "HEAD /health HTTP/1.0\r\n\r\n";
        check_http_expect_alive http_2xx http_3xx;
    }
    
    server {
        listen 80;
        
        location / {
            proxy_pass http://backend;
            
            # circuit breaking via retry limits
            proxy_next_upstream error timeout http_500 http_502 http_503;
            proxy_next_upstream_tries 3;
            proxy_next_upstream_timeout 10s;
        }
        
        # health-check status page
        location /status {
            check_status;
            access_log off;
        }
    }
}

Custom health-check logic (using the lua-resty-http library):

-- health_check.lua
local _m = {}

local health_status = {}
local check_interval = 5  -- seconds between checks
local failure_threshold = 3  -- consecutive failures before marking unhealthy

function _m.check_health(ip, port)
    local http = require "resty.http"
    local httpc = http.new()
    httpc:set_timeout(1000)  -- 1 second timeout
    
    local res, err = httpc:request_uri("http://" .. ip .. ":" .. port .. "/health", {
        method = "GET",
        keepalive_timeout = 60,
        keepalive_pool = 10
    })
    
    local key = ip .. ":" .. port
    
    if not health_status[key] then
        health_status[key] = {
            consecutive_failures = 0,
            last_check = ngx.now(),
            healthy = true
        }
    end
    
    if not res or res.status ~= 200 then
        health_status[key].consecutive_failures = health_status[key].consecutive_failures + 1
        
        if health_status[key].consecutive_failures >= failure_threshold then
            health_status[key].healthy = false
        end
    else
        health_status[key].consecutive_failures = 0
        health_status[key].healthy = true
    end
    
    health_status[key].last_check = ngx.now()
    
    return health_status[key].healthy
end

function _m.is_healthy(ip, port)
    local key = ip .. ":" .. port
    
    if not health_status[key] then
        return _m.check_health(ip, port)
    end
    
    -- re-check once the check interval has elapsed
    if ngx.now() - health_status[key].last_check > check_interval then
        return _m.check_health(ip, port)
    end
    
    return health_status[key].healthy
end

function _m.get_health_status()
    return health_status
end

return _m

8. Performance optimization and best practices

8.1 Connection-pool tuning

http {
    upstream backend {
        server 10.0.0.1:80;
        server 10.0.0.2:80;
        
        # connection-pool (keepalive) settings
        keepalive 32;
        keepalive_requests 100;
        keepalive_timeout 60s;
    }
    
    server {
        location / {
            proxy_pass http://backend;
            proxy_http_version 1.1;
            proxy_set_header Connection "";
            
            # buffer tuning
            proxy_buffering on;
            proxy_buffer_size 4k;
            proxy_buffers 8 4k;
            proxy_busy_buffers_size 8k;
            
            # timeouts
            proxy_connect_timeout 3s;
            proxy_send_timeout 10s;
            proxy_read_timeout 10s;
        }
    }
}

8.2 Caching and rate limiting

http {
    # rate-limiting zone
    limit_req_zone $binary_remote_addr zone=api:10m rate=10r/s;
    
    # proxy cache
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=my_cache:10m 
                     max_size=10g inactive=60m use_temp_path=off;
    
    upstream backend {
        server 10.0.0.1:80;
        server 10.0.0.2:80;
    }
    
    server {
        location /api/ {
            # rate limiting
            limit_req zone=api burst=20 nodelay;
            
            # caching
            proxy_cache my_cache;
            proxy_cache_valid 200 302 5m;
            proxy_cache_valid 404 1m;
            proxy_cache_use_stale error timeout updating http_500 http_502 http_503;
            
            proxy_pass http://backend;
        }
    }
}

9. Monitoring and logging

9.1 Detailed access logs

http {
    log_format upstream_log '[$time_local] $remote_addr - $remote_user '
                           '"$request" $status $body_bytes_sent '
                           '"$http_referer" "$http_user_agent" '
                           'upstream: $upstream_addr '
                           'upstream_status: $upstream_status '
                           'request_time: $request_time '
                           'upstream_response_time: $upstream_response_time '
                           'upstream_connect_time: $upstream_connect_time';
    
    upstream backend {
        server 10.0.0.1:80;
        server 10.0.0.2:80;
    }
    
    server {
        access_log /var/log/nginx/access.log upstream_log;
        
        location / {
            proxy_pass http://backend;
        }
    }
}

9.2 Status monitoring

server {
    listen 8080;
    
    # basic nginx status (stub_status)
    location /nginx_status {
        stub_status on;
        access_log off;
        allow 127.0.0.1;
        deny all;
    }
    
    # upstream status
    location /upstream_status {
        proxy_pass http://backend;
        access_log off;
    }
    
    # health-check endpoint
    location /health {
        access_log off;
        default_type text/plain;
        return 200 "healthy\n";
    }
}

10. Security considerations

10.1 Securing the configuration API

# securing the dynamic-configuration API
location /upstream_api {
    # IP allow-list
    allow 10.0.0.0/8;
    allow 172.16.0.0/12;
    allow 192.168.0.0/16;
    deny all;
    
    # basic authentication
    auth_basic "upstream api";
    auth_basic_user_file /etc/nginx/.htpasswd;
    
    # rate limiting (the api_admin zone must be declared with limit_req_zone in the http block)
    limit_req zone=api_admin burst=5 nodelay;
    
    # restrict HTTP methods
    if ($request_method !~ ^(GET|POST|DELETE)$) {
        return 405;
    }
    
    proxy_pass http://upstream_manager;
}

10.2 Input validation

-- input_validation.lua
local _m = {}

function _m.validate_ip(ip)
    if not ip or type(ip) ~= "string" then
        return false
    end
    
    local chunks = {ip:match("^(%d+)%.(%d+)%.(%d+)%.(%d+)$")}
    if #chunks ~= 4 then
        return false
    end
    
    for _, v in pairs(chunks) do
        if tonumber(v) > 255 then
            return false
        end
    end
    
    return true
end

function _m.validate_port(port)
    if not port then
        return false
    end
    
    local port_num = tonumber(port)
    if not port_num or port_num < 1 or port_num > 65535 then
        return false
    end
    
    return true
end

function _m.sanitize_input(input)
    if not input then
        return nil
    end
    
    -- strip potentially dangerous characters
    local sanitized = input:gsub("[<>%$%[%]%{%}]", "")
    return sanitized
end

return _m
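A sketch of wiring these validators into the earlier add_peer handler from the OpenResty example (the handler shape and the error response are assumptions):

-- inside the content_by_lua_block of the management endpoint
local validation = require "input_validation"
local upstream = require "upstream"

local ip = validation.sanitize_input(ngx.var.arg_ip)
local port = ngx.var.arg_port

if not validation.validate_ip(ip) or not validation.validate_port(port) then
    ngx.status = 400
    ngx.say("invalid ip or port")
    return
end

upstream.add_peer(ip, tonumber(port))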

11. Troubleshooting and debugging

11.1 Debug configuration

server {
    # debug-level error log
    error_log /var/log/nginx/debug.log debug;
    
    location / {
        # expose upstream details in response headers for debugging
        add_header x-upstream-addr $upstream_addr;
        add_header x-upstream-status $upstream_status;
        add_header x-request-id $request_id;
        
        proxy_pass http://backend;
        
        # also log subrequests
        log_subrequest on;
    }
}

11.2 Common problems

A few issues that come up repeatedly with dynamic upstreams:

- "no live upstreams while connecting to upstream" in the error log usually means every server in the group has been marked failed; check max_fails/fail_timeout and the backends' health endpoints.
- When proxy_pass points at a hostname written directly in the configuration, nginx resolves it only at startup; use a resolver plus a variable (section 4.4) if the DNS record changes.
- State held in Lua or njs module variables is per worker process; anything that must be consistent across workers belongs in a shared dictionary or an external store.
- 502/504 spikes during backend rollouts often indicate missing connection draining; combine health checks with a proxy_next_upstream retry policy so in-flight requests fail over to a healthy peer.

12. Summary

Dynamic upstream configuration is a key building block of modern microservice architectures. Based on the approaches covered above, choose whichever fits your situation:

- NGINX Plus offers an officially supported on-the-fly API, at a licensing cost.
- OpenResty/Lua gives full programmatic control over peer selection inside nginx itself.
- nginx-upsync-module syncs upstreams from Consul or etcd without reloads.
- DNS-based resolution is simple where a service registry already publishes DNS records.
- In Kubernetes, an ingress controller (or a small custom controller) keeps upstreams in step with Service endpoints.

Whichever approach you pick, performance, security, monitoring, and maintainability all need attention. Dynamic upstream configuration greatly improves a system's elasticity and maintainability and has become an indispensable part of cloud-native architecture.

For production environments, a few recommendations:

- Keep the configuration API behind authentication, IP allow-lists, and rate limits (section 10).
- Pair dynamic updates with active health checks and sensible proxy_next_upstream retry policies.
- Log the upstream address, status, and response time (section 9) so routing changes can be audited.
- Validate every address and port before it reaches the upstream, and keep a known-good static fallback configuration.
- Roll changes out gradually and watch error rates before shifting full traffic.

