PHP-FPM 7.1 socket leak causes NGINX 504 Gateway Time-out


I use Laravel Forge to spin up my EC2 environments; it builds a LEMP stack for me. I recently started getting 504 timeouts on requests.

I'm no sysadmin (hence the Forge subscription), but I looked through the logs and narrowed the issue down to these two repeated entries:

in: /var/log/nginx/default-error.log

2017/09/15 09:32:17 [error] 2308#2308: *1 upstream timed out (110: Connection timed out) while sending request to upstream, client: x.x.x.x, server: xxxx.com, request: "POST /upload HTTP/2.0", upstream: "fastcgi://unix:/var/run/php/php7.1-fpm.sock", host: "xxxx.com", referrer: "https://xxxx.com/rest/of/the/path"

in: /var/log/php7.1-fpm.log

[15-Sep-2017 09:35:09] WARNING: [pool www] seems busy (you may need to increase pm.start_servers, or pm.min/max_spare_servers), spawning 8 children, there are 0 idle, and 14 total children

It seems like FPM opens connections that never die, and from my RDS monitoring I can see that RAM is constantly maxed out.

I've tried:

  • Rolling back to a known-stable version of my app (from 2 months ago)
  • Rebuilding my EC2 instance with PHP 5.6, 7.0, and 7.1 (with their respective FPM versions)
  • Doing all of the above on Ubuntu 14.04 and 16.04
  • Creating a bigger RDS instance

Right now the only thing that works is a beefy RDS instance (8 GB RAM) plus recycling FPM workers every 300 requests (the pm.max_requests = 300 setting below). But obviously throwing resources at the problem is not the solution.

Here is my config for /etc/php/7.1/fpm/pool.d/www.conf

user = forge
group = forge
listen = /run/php/php7.1-fpm.sock
listen.owner = www-data
listen.group = www-data
listen.mode = 0666
pm = dynamic
pm.max_children = 30
pm.start_servers = 7
pm.min_spare_servers = 6
pm.max_spare_servers = 10
pm.process_idle_timeout = 7s;
pm.max_requests = 300
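As a side note on sizing these values: a common rule of thumb is pm.max_children ≈ (RAM you can spare for PHP) / (average memory per FPM worker). The sketch below uses assumed numbers (a hypothetical 2 GB budget and ~60 MB per worker, which you would need to measure on your own box) and adds the real pm.status_path directive so you can watch idle/active children directly. Note also that pm.process_idle_timeout only takes effect when pm = ondemand, so the 7s value above is inert under pm = dynamic.

; Sketch with assumed numbers -- measure your own average worker size first.
; pm.max_children ~= RAM budget / avg worker size, e.g. 2048 MB / 60 MB ~= 34
pm = dynamic
pm.max_children = 34

; Real FPM directive: exposes a status page (active/idle children, listen
; queue) that you can route to via an nginx location for live monitoring.
pm.status_path = /status

With the status page exposed, the "spawning 8 children, there are 0 idle" warning becomes something you can observe in real time rather than only after the fact in the log.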

And here is my config for nginx.conf

listen 80;
listen [::]:80;
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name xxxx.com;
root /home/forge/xxxx.com/public;

# FORGE SSL (DO NOT REMOVE!)
ssl_certificate /etc/nginx/ssl/xxxx.com/111111/server.crt;
ssl_certificate_key /etc/nginx/ssl/xxxx.com/111111/server.key;

ssl_protocols xxxx;
ssl_ciphers ...;
ssl_prefer_server_ciphers on;
ssl_dhparam /etc/nginx/dhparams.pem;

add_header X-Frame-Options "SAMEORIGIN";
add_header X-XSS-Protection "1; mode=block";
add_header X-Content-Type-Options "nosniff";

index index.html index.htm index.php;

charset utf-8;

# FORGE CONFIG (DOT NOT REMOVE!)
include forge-conf/xxxx.com/server/*;

location / {
    try_files $uri $uri/ /index.php?$query_string;
}

location = /favicon.ico { access_log off; log_not_found off; }
location = /robots.txt  { access_log off; log_not_found off; }

access_log /var/log/nginx/xxxx.com-access.log;
error_log  /var/log/nginx/xxxx.com-error.log error;

error_page 404 /index.php;

location ~ \.php$ {
    fastcgi_split_path_info ^(.+\.php)(/.+)$;
    fastcgi_pass unix:/var/run/php/php7.1-fpm.sock;
    fastcgi_index index.php;
    fastcgi_read_timeout 60;
    include fastcgi_params;
}

location ~ /\.(?!well-known).* {
    deny all;
}

location ~* \.(?:ico|css|js|gif|jpe?g|png)$ {
    expires 30d;
    add_header Pragma public;
    add_header Cache-Control "public";
}
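One detail worth flagging in the error log above: it times out while sending the request to the upstream on POST /upload, while this config only sets fastcgi_read_timeout. If genuinely slow uploads are part of normal operation, a hedged tweak (the 120s values are illustrative, not prescriptive) is to cover the send path as well:

location ~ \.php$ {
    fastcgi_split_path_info ^(.+\.php)(/.+)$;
    fastcgi_pass unix:/var/run/php/php7.1-fpm.sock;
    fastcgi_index index.php;
    # "while sending request to upstream" implicates the send path,
    # so raise both timeouts (illustrative values):
    fastcgi_send_timeout 120;
    fastcgi_read_timeout 120;
    include fastcgi_params;
}

That said, this only buys headroom for slow requests; it does not fix the saturation described in the answer below.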

There is 1 answer:

Artur Grigio:

OK, after a LOT of debugging and testing, I narrowed it down to these causes.

  • Primary cause for me: the AWS RDS instance I was using for MySQL had 500 MB of memory. Looking back, all these issues started once the DB size surpassed 400 MB.

    • Solution: make sure you have at least 2x your DB size in RAM at all times. Otherwise the B+Tree indexes don't fit in memory, so MySQL constantly swaps pages to and from disk. This can push query times upwards of 15 seconds (see the SQL sketch after this list for a quick check).
  • Common cause for problems like these: unoptimized SQL queries.

    • Solution: on your local machine, keep a dataset similar in size to your production data, so slow queries surface during development rather than in production.
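To check whether you are in the same situation, you can compare your data-plus-index size against the InnoDB buffer pool (these are standard MySQL queries, runnable against an RDS instance):

-- Total data + index size per schema, in MB:
SELECT table_schema,
       ROUND(SUM(data_length + index_length) / 1024 / 1024, 1) AS size_mb
FROM information_schema.tables
GROUP BY table_schema;

-- How much memory InnoDB may use to cache pages:
SHOW VARIABLES LIKE 'innodb_buffer_pool_size';

If your schema's size_mb is well above the buffer pool size, InnoDB keeps evicting and re-reading pages from disk, which matches the 15-second query times described above.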