I'm using AutoGluon, calling TabularPredictor.predict from multiple processes at the same time.
Each call to predict appears to use all the cores on my machine, and as a result each predict takes a LOT longer than if I just run it from a single process.
I would therefore like to limit TabularPredictor.predict to use just one CPU, but I can't see how to do that in the documentation. I also tried mocking os.cpu_count, but that did not help.
What else could I try?
To limit the number of CPUs used by AutoGluon's TabularPredictor.predict, you can try the set_num_cpus and set_num_gpus methods provided by the autogluon.utils.tabular.ml.models.abstract.abstract_model module (note that this module path comes from older AutoGluon releases; in recent versions the abstract model class lives under autogluon.core.models). These methods let you set the number of CPUs and GPUs to be used during training and inference.
Here's an example of how you can limit the number of CPUs used:
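A minimal sketch of what that might look like is below. This is not a verified public API: set_num_cpus on the loaded model objects, and reaching those models through the predictor's internal _trainer attribute, are assumptions, and the names may differ across AutoGluon versions; the load path and test_data are placeholders.

```python
from autogluon.tabular import TabularPredictor

# Load an already-trained predictor (path is hypothetical).
predictor = TabularPredictor.load("AutogluonModels/my_predictor/")

# Assumption: each trained model exposes set_num_cpus(), as described above.
# _trainer is an internal attribute and may change between versions.
for name in predictor.get_model_names():
    model = predictor._trainer.load_model(name)
    model.set_num_cpus(1)
    predictor._trainer.save_model(model)

predictions = predictor.predict(test_data)  # test_data: your pandas DataFrame
```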
By setting num_cpus to 1, you instruct Autogluon to use only one CPU during the prediction operation.
Note that set_num_cpus affects all subsequent calls to predict from the same process. If you are running TabularPredictor.predict from multiple processes, you would need to set num_cpus within each process, since the setting is not shared between processes.
If setting the number of CPUs using set_num_cpus does not have the desired effect, you can also control resource allocation at a lower level than AutoGluon itself.
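For example, one widely used technique (not specific to AutoGluon's own API) is to cap the native thread pools that the underlying models rely on. OpenMP, MKL, and OpenBLAS read these environment variables once at import time, so they must be set at the top of each worker process, before autogluon or numpy is imported; the commented load/predict lines are a hypothetical sketch:

```python
import os

# Cap the native thread pools (OpenMP, MKL, OpenBLAS, numexpr) used by
# numpy and by model backends such as LightGBM and XGBoost. This must run
# BEFORE importing autogluon or numpy in each worker process, because the
# libraries read these variables once, when they are first loaded.
for var in ("OMP_NUM_THREADS", "MKL_NUM_THREADS",
            "OPENBLAS_NUM_THREADS", "NUMEXPR_NUM_THREADS"):
    os.environ[var] = "1"

# Only now import AutoGluon and predict (paths/data are placeholders):
# from autogluon.tabular import TabularPredictor
# predictor = TabularPredictor.load("path/to/predictor")
# predictions = predictor.predict(test_data)
```

Because each process has its own environment, doing this at the start of every worker keeps the limits naturally isolated per process, which matches your multi-process setup.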