When I googled, there were some answers saying that in Kubernetes, 100m CPU means that you get 1/10 of the time of one CPU core, and 2300m CPU means that you get 2 cores fully plus 3/10 of the time of another CPU core. Is that correct?

I just wonder whether multiple threads can run in parallel on multiple cores at the same time when using a CPU request under 1000m in Kubernetes.


There is 1 answer

PjoterS

Regarding the first part: yes, it's true that you can use a fraction of a CPU to run tasks. In the Kubernetes documentation - Managing Resources for Containers - you can find information that you can specify the minimal resources required to run a pod (requests) and limits which cannot be exceeded.

It's well described in this article:

Requests and limits are the mechanisms Kubernetes uses to control resources such as CPU and memory. Requests are what the container is guaranteed to get. If a container requests a resource, Kubernetes will only schedule it on a node that can give it that resource. Limits, on the other hand, make sure a container never goes above a certain value. The container is only allowed to go up to the limit, and then it is restricted.

CPU Requests/Limits:

CPU resources are defined in millicores. If your container needs two full cores to run, you would put the value 2000m. If your container only needs ¼ of a core, you would put a value of 250m. One thing to keep in mind about CPU requests is that if you put in a value larger than the core count of your biggest node, your pod will never be scheduled.
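As a sketch of how this looks in a manifest (the pod name, container name, and image below are illustrative, not from the question), a container guaranteed a quarter of a core and throttled at half a core could be declared like this:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cpu-demo            # hypothetical pod name
spec:
  containers:
  - name: app               # hypothetical container name
    image: nginx            # any image works here
    resources:
      requests:
        cpu: "250m"         # guaranteed: 1/4 of a core; used by the scheduler
      limits:
        cpu: "500m"         # ceiling: throttled above 1/2 of a core
```

The scheduler only looks at the request when placing the pod on a node; the limit is enforced at runtime.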

Regarding the second part: yes, threads can run in parallel on multiple cores. A CPU limit is enforced as a quota over a scheduling period (CFS bandwidth control), not as pinning to a single core, so even a container with under 1000m can have threads running on several cores simultaneously; it is simply throttled once it uses up its quota for the period. A good example of running work in parallel in Kubernetes is a Job.

A simple case is to create one Job object in order to reliably run one Pod to completion. The Job object will start a new Pod if the first Pod fails or is deleted (for example due to a node hardware failure or a node reboot). You can also use a Job to run multiple Pods in parallel.

See especially the part about Parallel execution for Jobs.
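A minimal sketch of such a parallel Job (the Job name, container name, image, and command are illustrative): with `parallelism: 3`, up to three pods run at the same time until `completions` successful runs have finished.

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: parallel-demo       # hypothetical Job name
spec:
  completions: 6            # total successful pod runs required
  parallelism: 3            # up to 3 pods may run at once
  template:
    spec:
      containers:
      - name: worker        # hypothetical container name
        image: busybox
        command: ["sh", "-c", "echo working; sleep 5"]
      restartPolicy: Never  # Jobs require Never or OnFailure
```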

You can also check Parallel Processing using Expansions to run multiple Jobs based on a common template. You can use this approach to process batches of work in parallel. In that documentation you can find an example with a description of how it works.
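The expansion pattern described there boils down to generating one Job manifest per work item from a template. A rough sketch (the template contents, item names, and file paths are assumptions for illustration):

```shell
# Create a Job template containing a $ITEM placeholder; the quoted heredoc
# delimiter keeps the shell from expanding $ITEM here.
mkdir -p ./jobs
cat > ./job-tmpl.yaml <<'EOF'
apiVersion: batch/v1
kind: Job
metadata:
  name: process-item-$ITEM
spec:
  template:
    spec:
      containers:
      - name: worker
        image: busybox
        command: ["sh", "-c", "echo Processing item $ITEM"]
      restartPolicy: Never
EOF

# Expand the template once per work item; each generated manifest
# could then be applied with kubectl to run the Jobs in parallel.
for item in apple banana cherry; do
  sed "s/\$ITEM/$item/g" ./job-tmpl.yaml > "./jobs/job-$item.yaml"
done
```

Each resulting file is an independent Job, so all three can run at the same time once applied.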