I would like to know how to attach a persistent volume to a MongoDB Docker container hosted on Google Cloud Run, so that the data can be shared between instances when Cloud Run scales up/down with the number of requests to the database.
Second, can a Google Cloud Run container be private, so that it can be accessed by other Docker containers on Cloud Run or by Cloud Functions, but not publicly via its IP?
Third, what is the autoscaling best practice for concurrency, in order to achieve the best performance at the lowest cost?
Thanks a lot.
By definition, Cloud Run is stateless. That means you can't keep state from one instance to another, and thus can't store data locally. Mounting a volume on Cloud Run is not possible. You can access external services (databases and file storage), but on a Cloud Run instance only the `/tmp` directory is writable (and it is in-memory storage).

The concept of public and private goes beyond IP addresses. Firstly, Google repeatedly says: don't trust the network. Secondly, "private" means that a caller must be authenticated and authorized to access the service. There is no (old-school) DMZ between Cloud Run or Cloud Functions services: each one must be authenticated and authorized to call a private service. It's the zero-trust model. With Cloud Functions, and soon with Cloud Run, you can also filter on the network origin of the traffic (from a Google Cloud VPC or from the internet), and thereby reject traffic from outside your project.
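To make the private-service point concrete, here is a minimal sketch of how one Cloud Run service could call another, private one. The service URL is a placeholder; the sketch assumes the `google-auth` package is installed and that the caller's service account has been granted `roles/run.invoker` on the target service:

```python
import urllib.request

# Hypothetical URL of the private Cloud Run service (placeholder).
SERVICE_URL = "https://my-private-service-abc123-uc.a.run.app"

def auth_header(id_token: str) -> dict:
    """Authorization header that Cloud Run validates on private services."""
    return {"Authorization": f"Bearer {id_token}"}

def call_private_service(url: str) -> bytes:
    # Requires the google-auth package. fetch_id_token mints an identity
    # token for the target audience from the ambient service account
    # credentials (the metadata server, when running on Cloud Run).
    import google.auth.transport.requests
    import google.oauth2.id_token

    request = google.auth.transport.requests.Request()
    token = google.oauth2.id_token.fetch_id_token(request, url)
    req = urllib.request.Request(url, headers=auth_header(token))
    with urllib.request.urlopen(req) as response:
        return response.read()

# Shape of the header, shown with a dummy token:
print(auth_header("dummy-token"))
```

Without that header (or with a caller that lacks `roles/run.invoker`), a private service rejects the request with 401/403, regardless of whether the caller knows its URL.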
It depends! What is your workload? The processing time? The memory/CPU consumption per request? The cold start duration? There are many factors to consider before answering correctly.
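As a rough way to reason about the cost side, you can estimate how many instances Cloud Run would need from your expected traffic, your average request latency, and the concurrency setting. This is a back-of-envelope sketch using Little's law with made-up numbers, not anything Cloud Run exposes directly:

```python
import math

def estimated_instances(requests_per_second: float,
                        avg_latency_seconds: float,
                        concurrency: int) -> int:
    """Little's law: in-flight requests = arrival rate x latency.

    Cloud Run packs up to `concurrency` in-flight requests onto one
    instance, so the instance count is the in-flight total divided by
    the concurrency setting (at least one instance while serving).
    """
    in_flight = requests_per_second * avg_latency_seconds
    return max(1, math.ceil(in_flight / concurrency))

# 100 req/s at 500 ms per request = ~50 requests in flight.
print(estimated_instances(100, 0.5, concurrency=80))  # 1 instance
print(estimated_instances(100, 0.5, concurrency=1))   # 50 instances
```

Higher concurrency generally means fewer (cheaper) instances, but only as long as one instance's CPU and memory can actually sustain that many parallel requests; CPU-heavy workloads usually need a lower concurrency setting.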