I'm trying to bring up a Kubernetes cluster on OpenStack by following the steps described here: http://kubernetes.io/docs/getting-started-guides/openstack-heat/
The command KUBERNETES_PROVIDER=openstack-heat ./cluster/kube-up.sh fails with the following output:
... Starting cluster using provider: openstack-heat
... calling verify-prereqs
swift client installed
glance client installed
nova client installed
heat client installed
openstack client installed
... calling kube-up
kube-up for provider openstack-heat
[INFO] Execute commands to create Kubernetes cluster
[INFO] Uploading kubernetes-server-linux-amd64.tar.gz
kubernetes-server.tar.gz
[INFO] Uploading kubernetes-salt.tar.gz
kubernetes-salt.tar.gz
[INFO] Image CentOS7 already exists
[INFO] Key pair already exists
Stack not found: KubernetesStack
[INFO] Retrieve new image ID
[INFO] Image Id 44284b7f-4f83-4c5d-89a2-992fab6ddaa3
[INFO] Create stack KubernetesStack
b'#cloud-config\nmerge_how: dict(recurse_array)+list(append)\nbootcmd:\n - mkdir -p /etc/salt/minion.d\n - mkdir -p /srv/salt-overlay/pillar\nwrite_files:\n - path: /etc/salt/minion.d/log-level-debug.conf\n content: |\n log_level: warning\n log_level_logfile: warning\n - path: /etc/salt/minion.d/grains.conf\n content: |\n grains:\n node_ip: $MASTER_IP\n publicAddressOverride: $MASTER_IP\n network_mode: openvswitch\n networkInterfaceName: eth0\n api_servers: $MASTER_IP\n cloud: openstack\n cloud_config: /srv/kubernetes/openstack.conf\n roles:\n - $role\n runtime_config: ""\n docker_opts: ""\n master_extra_sans: "DNS:kubernetes,DNS:kubernetes.default,DNS:kubernetes.default.svc,DNS:kubernetes.default.svc.cluster.local,DNS:kubernetes-master"\n keep_host_etcd: true\n kube_user: $KUBE_USER\n - path: /srv/kubernetes/openstack.conf\n content: |\n [Global]\n auth-url=$OS_AUTH_URL\n username=$OS_USERNAME\n password=$OS_PASSWORD\n region=$OS_REGION_NAME\n tenant-id=$OS_TENANT_ID\n [LoadBalancer]\n lb-version=$LBAAS_VERSION\n subnet-id=$SUBNET_ID\n floating-network-id=$FLOATING_NETWORK_ID\n - path: /srv/salt-overlay/pillar/cluster-params.sls\n content: |\n service_cluster_ip_range: 10.246.0.0/16\n cert_ip: 10.246.0.1\n enable_cluster_monitoring: influxdb\n enable_cluster_logging: "true"\n enable_cluster_ui: "true"\n enable_node_logging: "true"\n logging_destination: elasticsearch\n elasticsearch_replicas: "1"\n enable_cluster_dns: "true"\n dns_server: 10.246.0.10\n dns_domain: cluster.local\n enable_dns_horizontal_autoscaler: "false"\n federations_domain_map: \'\'\n instance_prefix: kubernetes\n admission_control: NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,DefaultStorageClass,ResourceQuota\n enable_cpu_cfs_quota: "true"\n network_provider: none\n opencontrail_tag: R2.20\n opencontrail_kubernetes_tag: master\n opencontrail_public_subnet: 10.1.0.0/16\n e2e_storage_test_environment: "false"\n' is not JSON serializable
The content of the last line differs between runs of kube-up.sh. It corresponds to one of the yaml and sh fragment files referenced from ./cluster/openstack-heat/kubernetes-heat/kubecluster.yaml; in this run it is ./cluster/openstack-heat/kubernetes-heat/fragments/configure-salt.yaml. For some reason the contents of these files cannot be merged into kubecluster.yaml.
Any ideas?
Managed to solve this. It was caused by a bug in python-heatclient 1.1.0: https://bugs.launchpad.net/python-heatclient/+bug/1589519. Upgrading to version 1.3.0 solved the issue.
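For anyone hitting the same traceback: the `b'#cloud-config\n...' is not JSON serializable` message is the generic Python 3 error you get when bytes end up in a structure passed to json.dumps. A minimal sketch of the failure mode (this reproduces the symptom, not the heatclient code itself, and the fragment content here is just an illustrative snippet):

```python
import json

# Under Python 3, reading a template fragment in binary mode yields bytes.
# Passing bytes into json.dumps raises TypeError with a
# "... is not JSON serializable" message, matching the kube-up.sh failure.
fragment = b"#cloud-config\nbootcmd:\n - mkdir -p /etc/salt/minion.d\n"

try:
    json.dumps({"get_file": fragment})
except TypeError as exc:
    print(exc)  # bytes cannot be serialized to JSON

# Decoding the bytes to str first serializes cleanly, which is the kind of
# handling the fixed python-heatclient releases apply.
payload = json.dumps({"get_file": fragment.decode("utf-8")})
print(payload)
```

In practice the fix is simply upgrading the client, e.g. `pip install --upgrade python-heatclient` (1.3.0 or later), rather than patching anything locally.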