I am attempting to run a simple daily batch script that may run for several hours, after which it sends the data it generated and shuts down the instance. To achieve that, I have put the following into the user-data:
users:
- name: cloudservice
  uid: 2000
runcmd:
- sudo HOME=/home/root docker-credential-gcr configure-docker
- |
  sudo HOME=/home/root docker run \
    --rm -u 2000 --name={service_name} {image_name} {command}
- shutdown
final_message: "machine took $UPTIME seconds to start"
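The helper that turns this template into the user-data string is referenced as build_cloud_config in the script below. It is not shown in the question; here is a minimal sketch, assuming it simply formats the template above with the image to run (the service name and command are fixed placeholders for illustration):

```python
# Hypothetical sketch of build_cloud_config: render the cloud-init
# template by substituting the image name. The {service_name} and
# {command} values are illustrative assumptions.
CLOUD_INIT_TEMPLATE = """\
users:
- name: cloudservice
  uid: 2000
runcmd:
- sudo HOME=/home/root docker-credential-gcr configure-docker
- |
  sudo HOME=/home/root docker run \\
    --rm -u 2000 --name={service_name} {image_name} {command}
- shutdown
final_message: "machine took $UPTIME seconds to start"
"""

def build_cloud_config(image: str) -> str:
    # Substitute the placeholders; only the image varies per run here.
    return CLOUD_INIT_TEMPLATE.format(
        service_name="batch", image_name=image, command="run-job"
    )
```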
I am creating the instance with a Python script that generates the configuration for the API, like so:
from typing import Dict


def build_machine_configuration(
    compute, name: str, project: str, zone: str, image: str
) -> Dict:
    image_response = (
        compute.images()
        .getFromFamily(project="cos-cloud", family="cos-stable")
        .execute()
    )
    source_disk_image = image_response["selfLink"]

    machine_type = f"zones/{zone}/machineTypes/n1-standard-1"

    # Returns the cloud-init config from above
    cloud_config = build_cloud_config(image)

    config = {
        "name": f"{name}",
        "machineType": machine_type,
        # Specify the boot disk and the image to use as a source.
        "disks": [
            {
                "type": "PERSISTENT",
                "boot": True,
                "autoDelete": True,
                "initializeParams": {"sourceImage": source_disk_image},
            }
        ],
        # Specify a network interface with NAT to access the public
        # internet.
        "networkInterfaces": [
            {
                "network": "global/networks/default",
                "accessConfigs": [
                    {"type": "ONE_TO_ONE_NAT", "name": "External NAT"}
                ],
            }
        ],
        # Allow the instance to access cloud storage and logging.
        "serviceAccounts": [
            {
                "email": "default",
                "scopes": [
                    "https://www.googleapis.com/auth/devstorage.read_write",
                    "https://www.googleapis.com/auth/logging.write",
                    "https://www.googleapis.com/auth/datastore",
                    "https://www.googleapis.com/auth/bigquery",
                ],
            }
        ],
        # Metadata is readable from the instance and allows you to
        # pass configuration from deployment scripts to instances.
        "metadata": {
            "items": [
                {
                    # The cloud-init user-data is picked up and
                    # executed by the instance upon startup.
                    "key": "user-data",
                    "value": cloud_config,
                },
                {"key": "google-monitoring-enabled", "value": True},
            ]
        },
    }
    return config
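For context, the returned config is what gets passed to the Compute Engine instances().insert call. Below is a minimal sketch of that wiring using a stand-in client so it runs without credentials; with googleapiclient installed, compute = discovery.build("compute", "v1") would provide the real object (the fake classes, project, and zone names here are illustrative only):

```python
# Stand-in for the googleapiclient request/resource objects, mimicking
# only the call chain compute.instances().insert(...).execute().
class FakeRequest:
    def __init__(self, body):
        self.body = body

    def execute(self):
        # A real request returns a zone operation resource.
        return {"status": "RUNNING", "targetLink": self.body["name"]}


class FakeInstances:
    def insert(self, project, zone, body):
        return FakeRequest(body)


class FakeCompute:
    def instances(self):
        return FakeInstances()


# In real code, `config` comes from build_machine_configuration(...).
config = {
    "name": "daily-batch",
    "machineType": "zones/us-central1-a/machineTypes/n1-standard-1",
}
operation = (
    FakeCompute()
    .instances()
    .insert(project="my-project", zone="us-central1-a", body=config)
    .execute()
)
```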
However, I am running out of disk space inside the Docker engine.
Any ideas on how to increase the size of the volume available to Docker services?
The Docker engine uses the disk of the instance, so if the container runs out of space, it is because the instance's disk is full.
The first thing you can try is to create an instance with a bigger disk. Per the Compute Engine documentation, you can increase the size by adding the diskSizeGb field to the disk's initializeParams in the deployment.

Another thing you can try is to run df -h on the instance to see whether the disk is full and which partition is full.
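In the Python configuration above, diskSizeGb goes inside the disk's initializeParams; a minimal sketch of the modified "disks" entry (the 50 GB value is an example, pick what your batch actually needs):

```python
# Boot disk entry with an explicit size; in the Compute Engine API,
# diskSizeGb is passed as a string.
disks = [
    {
        "type": "PERSISTENT",
        "boot": True,
        "autoDelete": True,
        "initializeParams": {
            "sourceImage": "projects/cos-cloud/global/images/family/cos-stable",
            "diskSizeGb": "50",  # example size in GB
        },
    }
]
```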
In the same way, you can run docker system df to see the disk usage of the Docker engine itself. If you want more information, add the -v flag (docker system df -v), which breaks the usage down per image, container, and volume.