"OKD 3.11 The connection to the master server was refused - did you specify the right host or port?"


I just went through the exercise of deploying OKD 3.11 and was mostly successful up through the pre-check of the first Ansible playbook (the prerequisites). Upon running the second Ansible playbook to perform the OKD installation, I see a timeout on `oc get` against the master on port 8443. The port should not be blocked, as the firewalld service is not running. Insight, please!

TASK [openshift_control_plane : fail] 
**************************************************************************
skipping: [192.168.56.122]

TASK [openshift_control_plane : Wait for all control plane pods to come up and become ready] 
*******************
FAILED - RETRYING: Wait for all control plane pods to come up and become ready (72 retries left).
FAILED - RETRYING: Wait for all control plane pods to come up and become ready (71 retries left).
FAILED - RETRYING: Wait for all control plane pods to come up and become ready (70 retries left).

failed: [192.168.56.122] (item=etcd) => {"attempts": 72, "changed": false, "item": "etcd", "msg": 
{"cmd": "/usr/bin/oc get pod master-etcd-master.cccd-lab.local -o json -n kube-system", "results": 
[{}], "returncode": 1, "stderr": "The connection to the server master.cccd-lab.local:8443 was refused 
- did you specify the right host or port?\n", "stdout": ""}}

My inventory file is as follows:

[root@master opt]# cat inventory.ini
[OSEv3:children]
master
nodes
etcd

[OSEv3:vars]

ansible_ssh_user=root

ansible_become=true
openshift_master_default_subdomain=infra.cccd-lab.local
deployment_type=origin
#New addition

[nodes:vars]
openshift_disable_check=disk_availability,memory_availability,docker_storage
[masters:vars]
openshift_disable_check=disk_availability,memory_availability,docker_storage

openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}]

[masters]
192.168.56.122

[etcd]
192.168.56.122

[nodes]
192.168.56.120  openshift_node_group_name='node-config-compute'
192.168.56.121  openshift_node_group_name='node-config-infra'
192.168.56.122  openshift_node_group_name='node-config-master'
#compute openshift_ip=192.168.56.120 openshift_schedulable=true openshift_node_group_name='node-config-compute'
#infra openshift_ip=192.168.56.121 openshift_schedulable=true openshift_node_group_name='node-config-infra'
#master openshift_ip=192.168.56.122 openshift_schedulable=true openshift_node_group_name='node-config-master'

Investigating further, I noted the following:

oc get pod master-etcd-master.cccd-lab.local -o json -n kube-system", "results": [{}],

Which yields . . .

The connection to the server master.cccd-lab.local:8443 was refused - did you specify the right host or port?\n", "stdout": ""}}
[root@master opt]# netstat -tupln | grep LISTEN
tcp        0      0 10.0.2.15:53            0.0.0.0:*               LISTEN      19370/dnsmasq       
tcp        0      0 192.168.56.122:53       0.0.0.0:*               LISTEN      19370/dnsmasq       
tcp        0      0 172.17.0.1:53           0.0.0.0:*               LISTEN      19370/dnsmasq       
tcp        0      0 192.168.122.1:53        0.0.0.0:*               LISTEN      1753/dnsmasq        
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1354/sshd           
tcp        0      0 127.0.0.1:631           0.0.0.0:*               LISTEN      1357/cupsd          
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      1846/master         
tcp        0      0 127.0.0.1:43644         0.0.0.0:*               LISTEN      17379/hyperkube     
tcp        0      0 0.0.0.0:8444            0.0.0.0:*               LISTEN      14284/openshift     
tcp        0      0 10.0.2.15:2379          0.0.0.0:*               LISTEN      14349/etcd          
tcp        0      0 10.0.2.15:2380          0.0.0.0:*               LISTEN      14349/etcd          
tcp        0      0 0.0.0.0:111             0.0.0.0:*               LISTEN      760/rpcbind         
tcp6       0      0 fe80::5fe7:910c:c2de:53 :::*                    LISTEN      19370/dnsmasq       
tcp6       0      0 fe80::a00:27ff:fe5d::53 :::*                    LISTEN      19370/dnsmasq       
tcp6       0      0 :::22                   :::*                    LISTEN      1354/sshd           
tcp6       0      0 ::1:631                 :::*                    LISTEN      1357/cupsd          
tcp6       0      0 ::1:25                  :::*                    LISTEN      1846/master         
tcp6       0      0 :::10250                :::*                    LISTEN      17379/hyperkube     
tcp6       0      0 :::111                  :::*                    LISTEN      760/rpcbind         
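The listing above is itself the diagnosis: the master process is bound to 8444, but nothing is listening on 8443, so every `oc` call is refused. A minimal sketch of that check, using one line copied from the netstat output (on a live host you would capture `netstat -tupln` or `ss -tlnp` instead):

```shell
# One representative line from the netstat output above; on a live host:
#   listing="$(netstat -tupln | grep LISTEN)"
listing="tcp 0 0 0.0.0.0:8444 0.0.0.0:* LISTEN 14284/openshift"

# Succeeds only if some process in the listing is bound to the given port.
port_listening() {
  printf '%s\n' "$listing" | grep -q ":$1[^0-9]"
}

port_listening 8444 && echo "8444: listening (openshift)"
port_listening 8443 || echo "8443: nothing bound - the API server never came up"
```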

Not sure how to correct the issue.

Louis Preston Thornton III (BEST ANSWER)

There were a couple of changes I had to make to get this working. First, I decided to abandon my VirtualBox environment after some additional research traced the failure to a certificate error.

So, starting again with VMware Workstation 15 Pro, I made the following changes:

  1. Pick an IP address range I wanted to work with, then disable the DHCP server within the application.
  2. Set up your RHEL7/CentOS VMs with these attributes:
    (+) hostname (DNS: nip.io) - [master.|compute.|infra.]<IP Address>.nip.io
    (+) Memory - 4 GB RAM or more
    (+) Processor - Number of processors: 2, Total processor cores: 2
    (+) Add two separate hard disks
    (+) Ideally, set the NIC to a static IP address. Google for details.
  3. Starting on the master, attach either a Red Hat subscription (RHEL7) or the CentOS 7 repo and install the required packages using a "yum localinstall *rpm", followed by a "yum update".
    [Syntax] yum install --downloadonly --downloaddir=<directory> <package>

    # yum install --downloadonly --downloaddir=<directory of choice> -y wget git zile net-tools bind-utils iptables-services bridge-utils bash-completion kexec-tools sos psacct openssl-devel httpd-tools python-cryptography python2-pip python-devel python-passlib java-1.8.0-openjdk-headless "@Development Tools"
  4. Establish a shared filesystem and move the bits over to the other systems; rinse and repeat.
  5. Install docker 1.13.1:
# yum -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
# sed -i -e "s/^enabled=1/enabled=0/" /etc/yum.repos.d/epel.repo
# cd /<directory of choice>/<pkg>/docker/
# yum localinstall *rpm -y
# docker version

Configure the new disk added to the system earlier and reboot

# vim docker-storage-setup 
Edit the file so it contains just the following two lines:

DEVS=/dev/sdb
VG=docker-vgo

# docker-storage-setup 
# systemctl enable docker.service --now
# systemctl status docker.service
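docker-storage-setup reads its settings as plain KEY=VALUE shell assignments (conventionally from /etc/sysconfig/docker-storage-setup). Since the file is plain shell, sourcing a copy of it is a quick sanity check that DEVS and VG are set the way you intended before running the tool. A hedged sketch, using a sample path rather than the real config:

```shell
# Write a sample config matching the two lines above (the real file is
# conventionally /etc/sysconfig/docker-storage-setup).
cat > /tmp/docker-storage-setup.sample <<'EOF'
DEVS=/dev/sdb
VG=docker-vgo
EOF

# The file is plain shell, so sourcing it exposes the values for inspection.
. /tmp/docker-storage-setup.sample
echo "device=$DEVS volume-group=$VG"   # -> device=/dev/sdb volume-group=docker-vgo
```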

Install Ansible 2.7 - yes, 2.7!


Note 1: If the installed Ansible version is older than 2.4, or is 2.8 or newer, remove it (yum remove ansible) and install Ansible 2.7 from the package.
Note 2: If the Ansible 2.7 package is not locally available, retrieve it from the following location:
(# rpm -Uvh https://releases.ansible.com/ansible/rpm/release/epel-7-x86_64/ansible-2.7.10-1.el7.ans.noarch.rpm)

# cd /tmp/ansible/ansible2710
# yum localinstall *rpm -y
# ansible --version
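The openshift-ansible 3.11 playbooks are known to break on Ansible 2.8, which is why 2.7 is pinned above. A small guard of my own (not part of the installer; the exact supported bounds are an assumption, roughly the 2.6.x/2.7.x series) that rejects a bad version before you kick off the playbooks:

```shell
# Hypothetical helper: accept only the 2.6.x / 2.7.x series that the
# 3.11 playbooks were tested against (exact bounds are an assumption).
ansible_ok() {
  case "$1" in
    2.6.*|2.7.*) return 0 ;;
    *)           return 1 ;;
  esac
}

ver="2.7.10"   # on a real host: ver=$(ansible --version | awk 'NR==1{print $2}')
ansible_ok "$ver" && echo "ansible $ver: OK for openshift-ansible 3.11"
ansible_ok 2.8.0 || echo "ansible 2.8.0: remove it and install 2.7 instead"
```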

The key to avoiding the "connection to the master refused" error, for me, was switching to "nip.io" for DNS and setting up /etc/resolv.conf as follows:

search nip.io
nameserver 8.8.8.8
nameserver 8.8.4.4
nameserver 75.75.75.75

Pay attention to the "search" line.
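nip.io works because the IP address is embedded in the hostname itself: the wildcard DNS service answers any query of the form `<anything>.<IP>.nip.io` with that IP, so no local DNS records are needed. Recovering the address from the name is just a string operation, as this sketch shows:

```shell
# nip.io simply echoes back the IP embedded in the name, so
# master.192.168.196.140.nip.io resolves to 192.168.196.140.
host=master.192.168.196.140.nip.io

# Strip the leading label and the trailing ".nip.io" to recover the IP.
ip=$(printf '%s\n' "$host" | sed -E 's/^[^.]+\.//; s/\.nip\.io$//')
echo "$ip"   # -> 192.168.196.140
```

On a live host, `dig master.192.168.196.140.nip.io @8.8.8.8` should return that same address; the `search nip.io` line lets shorter names expand to the fully qualified nip.io form.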

All this yielded

PLAY RECAP *********************************************************************
192.168.196.140            : ok=724  changed=317  unreachable=0    failed=0   
192.168.196.141            : ok=136  changed=69   unreachable=0    failed=0   
192.168.196.142            : ok=137  changed=69   unreachable=0    failed=0   
localhost                  : ok=11   changed=0    unreachable=0    failed=0   


INSTALLER STATUS ***************************************************************
Health Check                 : Complete (0:01:06)
Node Bootstrap Preparation   : Complete (0:37:12)
etcd Install                 : Complete (0:04:55)
Master Install               : Complete (0:18:15)
Master Additional Install    : Complete (0:02:52)
Node Join                    : Complete (0:07:10)
Hosted Install               : Complete (0:03:11)
Cluster Monitoring Operator  : Complete (0:02:06)
Web Console Install          : Complete (0:02:33)
Console Install              : Complete (0:02:20)
Service Catalog Install      : Complete (0:08:09)