Internal system
HARDWARE: 4 servers (Xeon E-2236, 32GB RAM, 1TB SSD each): 1 for load balancing, 2 for processing ("perform" servers), 1 for DB CRUD
SOFTWARE: CentOS 7, nginx 1.18, Node.js v12.22.1
When an external connection arrives, the load-balancing server forwards it to the reverse proxy (the perform server), which does the computation; when the computation finishes, the result is sent to the DB server to be recorded.
The workload is light, so CPU usage is always 0~2%, RAM usage 3~7%, and I/O wait 0%.
THE PROBLEM:
When an external request arrives at the load server, it is randomly delayed by exactly 1 minute before being forwarded to the reverse-proxy server. While a request is stuck in that 1-minute delay, restarting nginx on the load server (systemctl restart nginx) completes the request immediately, with no errors; it is handled fine.
Mysteriously, the delay is always almost exactly 1 minute (1 min 0.02 s ~ 1 min 0.1 s). Afterwards things look normal (responses in ~50 ms several times in a row), but requests from the same device hit the 1-minute delay again about once every 5 minutes.
However, when I send 5000 requests that exactly copy the external HTTP connection from the load server to itself with curl, load-server → perform-server and perform-server → db-server each take under 50 ms on average. I also checked every nginx-to-reverse-proxy port and its responses; those average under 50 ms as well.
The same goes for the nuxt server and the api servers, which run on the perform server at localhost:3000 and localhost:3001 ~ 3012.
load-nginx.conf:
#user nobody;
worker_processes auto;
error_log logs/error.log;

events {
    use epoll;
    worker_connections 4096;
    multi_accept off;
}

http {
    client_max_body_size 300M;
    include mime.types;
    default_type application/octet-stream;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;

    #keepalive_timeout 0;
    keepalive_timeout 35;
    reset_timedout_connection on;
    send_timeout 15;

    upstream nuxtserver-ssl {
        ip_hash;
        server 10.10.10.21:500;
        server 10.10.10.22:500;
    }

    upstream apiserver-ssl {
        server 10.10.10.21:465;
        server 10.10.10.22:465;
        server 10.10.10.21:466;
        server 10.10.10.22:466;
        server 10.10.10.21:467;
        server 10.10.10.22:467;
    }

    # HTTPS server
    #
    server {
        include /usr/local/nginx/conf/ipdeny.conf;

        listen 443 ssl;
        server_name subdomain.example.com;

        ssl_certificate /usr/local/nginx/ssl/__example_com.crt;
        ssl_certificate_key /usr/local/nginx/ssl/__example_com.key;
        ssl_session_cache shared:SSL:1m;
        ssl_session_timeout 5m;
        ssl_protocols TLSv1.2 TLSv1.3;
        ssl_ciphers HIGH:!aNULL:!MD5;
        ssl_prefer_server_ciphers on;

        access_log /usr/local/nginx/logs/ssl-access.log combined;
        error_log /usr/local/nginx/logs/ssl-error.log;

        location / {
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $remote_addr;
            proxy_set_header Host $http_host;
            proxy_set_header X-NginX-Proxy true;
            proxy_pass http://nuxtserver-ssl;
        }

        location /api/ {
            proxy_set_header Upgrade $http_upgrade;
            proxy_http_version 1.1;
            proxy_set_header Connection upgrade;
            proxy_set_header Accept-Encoding gzip;
            proxy_cache_bypass $http_upgrade;
            proxy_pass http://apiserver-ssl;
            proxy_connect_timeout 3;
            proxy_buffering off;
        }
    }
}
I don't know whether this was the OP's issue, but I ran into similar problems in my Docker Swarm, all of which were related to IPv6 support.
My approach was to ensure that I was always proxying to an IPv4 address, rather than leaving it up to chance.
Relating to localhost:
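A minimal sketch of what that looks like in an nginx location block (the port here is just a placeholder):

location / {
    # "localhost" may resolve to ::1 as well as 127.0.0.1, and the upstream
    # may not be listening on IPv6; using the literal IPv4 loopback removes
    # that ambiguity.
    proxy_pass http://127.0.0.1:3000;
}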
And relating to docker-internal DNS queries:
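A hedged sketch using Docker's embedded DNS at 127.0.0.11; the service name and port are made up. The ipv6=off flag tells nginx's resolver to ignore AAAA records, so the proxied hostname only ever resolves to an IPv4 address:

resolver 127.0.0.11 ipv6=off valid=10s;

location / {
    # Resolving via a variable forces a runtime DNS lookup through the
    # resolver above, which now returns only A (IPv4) records.
    set $backend http://my-service:8080;
    proxy_pass $backend;
}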