ZSH keeps breaking with `zsh: fork failed:`


In the past few weeks (possibly since I upgraded to Sierra) I keep getting this weird issue in iTerm2 using ZSH.

Basically, at intermittent points during my regular workflow, commands will stop working properly with the error:

_run-with-bundler:5: fork failed: resource temporarily unavailable
zsh: fork failed: resource temporarily unavailable

Does anybody know why this is happening, and how I can fix it?


There are 2 answers

Salamit

I had this issue for almost a week and it was driving me nuts because I would have to restart my computer every. single. time.

In my case, it was a cron process. I had a job running every minute, which seemed to be using up a lot of memory. The steps that solved the problem in my case were:

  1. Run `crontab -e` and reduce the frequency of the cron job.
  2. Run Activity Monitor.
  3. If the Activity Monitor icon jumps up and down and doesn't open, close some programs first. In my case, I shut down Evernote and Slack. That freed up some memory, I think, and Activity Monitor opened.
  4. Click on the Process Name tab in Activity Monitor.
  5. Scroll down, and you may see a process that keeps repeating. (Here, I saw a lot of `cron` entries.)
  6. You want to stop all those processes. Select all the renegade processes and, in the top-left corner, click the x to quit them all.
  7. That solved my problem.
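For step 1, reducing the job's frequency is a one-line change in `crontab -e`. A sketch (the script path here is hypothetical):

```shell
# Before: the job runs every minute, piling up processes
# * * * * * /path/to/job.sh

# After: the job runs every 15 minutes instead
# */15 * * * * /path/to/job.sh
```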

I will watch over the next few days. If anything changes, I will update. Otherwise, that means it worked.

Good luck!

Mac Strelioff

This error might reflect a memory leak in your workflow. I had this issue recently with an automated script, and found that memory usage climbed to around 100% before my program failed with this message.

You can generally check for memory leaks by opening the Activity Monitor application on macOS and going to the Memory tab. There are also many ways to monitor memory from a zsh terminal: for example, you can count running processes with `ps -e | wc -l`, or check free memory with `vm_stat` (the `free -m` command is the Linux equivalent).
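A minimal sketch of those terminal checks, assuming the `vm_stat` branch runs on macOS and `free` covers Linux:

```shell
# Count running processes; a runaway fork loop shows up here first.
procs=$(ps -e | wc -l)
echo "running processes: $procs"

# Free-memory check: vm_stat exists on macOS, free(1) on Linux.
if command -v vm_stat >/dev/null 2>&1; then
    vm_stat | head -n 2
else
    free -m
fi
```

Watching these numbers while your workflow runs will show whether process count or memory is the thing that climbs before the `fork failed` error appears.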

If it is a memory issue, the best fix is to rewrite your workflow to be more memory efficient. Another fix could be to raise your computer's limit on the number of processes it can run, e.g. by adding the following to your /etc/profile file:

if [ "$USER" = "oracle" ]; then
    if [ "$SHELL" = "/bin/ksh" ]; then
        ulimit -p 16384   # ksh: max user processes
        ulimit -n 65536   # max open file descriptors
    else
        ulimit -u 16384 -n 65536
    fi
fi
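To verify the new limits took effect after opening a fresh shell, you can print them directly (the values shown will depend on your system):

```shell
# Print the current per-user limits; compare before and after
# sourcing the updated /etc/profile.
ulimit -u   # max user processes
ulimit -n   # max open file descriptors
```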
