Dealing with Zombie PHP processes on shared hosting


I am using shared hosting and I see an ever-growing number of processes. Looking at ps aux, I can see that roughly two <defunct> lsphp processes are added every day. The total is now over 40, and this host has a hard cap of 100 processes in total, which worries me.

I noticed that the running TIME of these processes is 0:29 or 0:30, which makes me suspect they are the ones that hit the execution time limit. I set this limit manually near the start of my code with ini_set('max_execution_time', 30); the slowest scripts on my site take some 3-5 seconds, so 30 seconds seemed like a big enough margin to conclude that a process had hung and would not finish. However, this seems to have backfired.
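
(For context, a per-route variant I have been considering, so that ordinary pages get a much tighter limit and a hung request dies sooner - the /pdf/ path check here is made up for illustration:)

```php
<?php
// Included near the start of every request. Ordinary pages finish in 3-5 s at
// worst, so they get a short leash; only the slow DomPDF routes keep 30 seconds.
set_time_limit(10);

if (preg_match('#^/pdf/#', $_SERVER['REQUEST_URI'] ?? '')) {
    set_time_limit(30);   // same effect as ini_set('max_execution_time', 30)
}
```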

I have looked into the application logs and it seems the hung processes (at least the ones exceeding the time limit) are the DomPDF ones, which are the longest-running. My guess is that the user requests a PDF but closes the connection before the PDF is prepared and the response is sent, and this somehow leaves the process in an idle state... Or does DomPDF get itself into this state?
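
One way to confirm that guess would be to log, from a shutdown handler, which requests actually hit the limit and whether the client had already disconnected by then. A minimal sketch, assuming it is registered early in a shared include (the log path is made up):

```php
<?php
// Registered early (e.g. in a common include) so it still runs after a fatal
// "Maximum execution time ... exceeded" error terminates the script.
register_shutdown_function(function () {
    $err = error_get_last();
    if ($err !== null && strpos($err['message'], 'Maximum execution time') !== false) {
        error_log(sprintf(
            "[timeout] uri=%s client_aborted=%d\n",
            $_SERVER['REQUEST_URI'] ?? 'cli',
            connection_aborted()
        ), 3, __DIR__ . '/timeouts.log');
    }
});
```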

What could be the cause? What can I do to solve this?

Can I somehow (by modifying the PHP script) prevent these processes from becoming zombies? Do I have any way to kill them off? (I have no rights to reboot the machine or to kill the parent process.)
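
(The <defunct> entries themselves presumably cannot be killed - they are already dead and only disappear once the parent reaps them - but if some lsphp workers are merely hung rather than defunct, a small cleanup script run from the host's cron panel might be able to signal them. A rough sketch, assuming shell_exec() and the posix extension are not disabled:)

```php
<?php
// cleanup.php (hypothetical) - run periodically via the hosting panel's cron.
// Signals my own lsphp processes that have been alive far longer than any script
// should run. It cannot remove <defunct> entries; only their parent can reap those.
$user = posix_getpwuid(posix_geteuid())['name'];
$out  = shell_exec('ps -o pid=,etimes=,comm= -u ' . escapeshellarg($user));

foreach (explode("\n", trim((string) $out)) as $line) {
    if (!preg_match('/^\s*(\d+)\s+(\d+)\s+(\S+)/', $line, $m)) {
        continue;
    }
    [, $pid, $elapsed, $comm] = $m;
    // Anything called lsphp that has lived over 10 minutes is assumed to be hung.
    if ($comm === 'lsphp' && (int) $elapsed > 600 && (int) $pid !== getmypid()) {
        posix_kill((int) $pid, 15); // 15 = SIGTERM
    }
}
```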


1 Answer

symcbean

You've told us nothing about how the system is configured other than that it is "shared hosting". Being on shared hosting means you are unlikely to have the access needed to influence its behaviour much, and the first people you should be speaking to are the service provider - after all, you are paying them for support.

That you are seeing PHP processes at all suggests that PHP is configured as CGI or FastCGI (or, god forbid, suPHP). While there are many reasons you might have zombie processes, they may not even be counted towards your server limit (you didn't say how that limit is implemented/enforced). Likely causes are that:

  • you have some sort of fCGI process manager which is enforcing a time limit - which you probably don't have access to
  • you have ignore_user_abort(true) in some of your scripts (see the sketch below)
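
If it is the latter, the fix is simply not to enable it on the PDF endpoints. A minimal sketch of the pattern to avoid - the Dompdf calls are the usual loadHtml/render/stream flow, the rest is illustrative:

```php
<?php
// pdf.php (illustrative). With ignore_user_abort(true), the render keeps going
// after the browser disconnects and only stops when max_execution_time kills it.
ignore_user_abort(false);   // the default: abort the script once PHP notices the
                            // client is gone (it notices when it tries to send output)

require 'vendor/autoload.php';

use Dompdf\Dompdf;

$html = '<h1>Report</h1>';  // in reality assembled earlier in the request

$dompdf = new Dompdf();
$dompdf->loadHtml($html);
$dompdf->render();
$dompdf->stream('report.pdf');
```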

Go speak to your hosting provider.