We have a single-node Kubernetes environment hosted on an on-prem server, and we are attempting to host Jitsi on it as a single pod. Jitsi web, jicofo, jvb and prosody all run in one pod rather than as separate pods (reference here).
So far we have managed to set it up by adding our ingress hostname as the PUBLIC_URL to all four containers within the pod. The service works fine when both users are on the same network.
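For reference, that PUBLIC_URL hostname is the one served by an Ingress roughly like the sketch below (the hostname and resource name are placeholders, not our real values):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: jitsi-web              # placeholder name
  namespace: default
spec:
  rules:
    - host: meet.example.com   # placeholder for the hidden PUBLIC_URL hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web      # the web Service shown further below
                port:
                  number: 80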
If a user on another network joins the call, there is no video or audio, and the following error appears in the jvb container:
JVB 2022-03-16 02:03:28.447 WARNING: [62] [confId=200d989e4b048ad3 gid=116159 stats_id=Durward-H4W [email protected] ufrag=4vfdk1fu8vfgn1 epId=eaff1488 local_ufrag=4vfdk1fu8vfgn1] ConnectivityCheckClient.startCheckForPair#374: Failed to send BINDING-REQUEST(0x1)[attrib.count=6 len=92 tranID=0xBFC4F7917F010AF9DA6E21D7]
java.lang.IllegalArgumentException: No socket found for 172.17.0.40:10000/udp->192.168.1.23:42292/udp
    at org.ice4j.stack.NetAccessManager.sendMessage(NetAccessManager.java:631)
    at org.ice4j.stack.NetAccessManager.sendMessage(NetAccessManager.java:581)
    at org.ice4j.stack.StunClientTransaction.sendRequest0(StunClientTransaction.java:267)
    at org.ice4j.stack.StunClientTransaction.sendRequest(StunClientTransaction.java:245)
    at org.ice4j.stack.StunStack.sendRequest(StunStack.java:680)
    at org.ice4j.ice.ConnectivityCheckClient.startCheckForPair(ConnectivityCheckClient.java:335)
    at org.ice4j.ice.ConnectivityCheckClient.startCheckForPair(ConnectivityCheckClient.java:231)
    at org.ice4j.ice.ConnectivityCheckClient$PaceMaker.run(ConnectivityCheckClient.java:938)
    at org.ice4j.util.PeriodicRunnable.executeRun(PeriodicRunnable.java:206)
    at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    at java.base/java.lang.Thread.run(Thread.java:829)
There are also related errors in the browser console.
EDIT
I have added the YAML file for the Jitsi deployment below:
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
k8s-app: jitsi
name: jitsi
namespace: default
spec:
replicas: 1
strategy:
type: Recreate
selector:
matchLabels:
k8s-app: jitsi
template:
metadata:
labels:
k8s-app: jitsi
spec:
containers:
- name: jicofo
image: jitsi/jicofo:stable-7001
volumeMounts:
- mountPath: /config
name: jicofo-config-volume
imagePullPolicy: IfNotPresent
env:
- name: XMPP_SERVER
value: localhost
- name: XMPP_DOMAIN
value: meet.jitsi
- name: XMPP_AUTH_DOMAIN
value: auth.meet.jitsi
- name: PUBLIC_URL
value: <hidden>
- name: XMPP_INTERNAL_MUC_DOMAIN
value: internal-muc.meet.jitsi
- name: JICOFO_COMPONENT_SECRET
valueFrom:
secretKeyRef:
name: jitsi-config
key: JICOFO_COMPONENT_SECRET
- name: JICOFO_AUTH_USER
value: focus
- name: JICOFO_AUTH_PASSWORD
valueFrom:
secretKeyRef:
name: jitsi-config
key: JICOFO_AUTH_PASSWORD
- name: TZ
value: America/Los_Angeles
- name: JVB_BREWERY_MUC
value: jvbbrewery
- name: prosody
image: jitsi/prosody:stable-7001
volumeMounts:
- mountPath: /config
name: prosody-config-volume
imagePullPolicy: IfNotPresent
env:
- name: XMPP_DOMAIN
value: meet.jitsi
- name: XMPP_AUTH_DOMAIN
value: auth.meet.jitsi
- name: XMPP_MUC_DOMAIN
value: muc.meet.jitsi
- name: PUBLIC_URL
value: <hidden>
- name: XMPP_INTERNAL_MUC_DOMAIN
value: internal-muc.meet.jitsi
- name: JICOFO_COMPONENT_SECRET
valueFrom:
secretKeyRef:
name: jitsi-config
key: JICOFO_COMPONENT_SECRET
- name: JVB_AUTH_USER
value: jvb
- name: JVB_AUTH_PASSWORD
valueFrom:
secretKeyRef:
name: jitsi-config
key: JVB_AUTH_PASSWORD
- name: JICOFO_AUTH_USER
value: focus
- name: JICOFO_AUTH_PASSWORD
valueFrom:
secretKeyRef:
name: jitsi-config
key: JICOFO_AUTH_PASSWORD
- name: TZ
value: America/Los_Angeles
- name: JVB_TCP_HARVESTER_DISABLED
value: "true"
- name: web
image: jitsi/web:stable-7001
imagePullPolicy: IfNotPresent
env:
- name: XMPP_SERVER
value: localhost
- name: JICOFO_AUTH_USER
value: focus
- name: PUBLIC_URL
value: <hidden>
- name: XMPP_DOMAIN
value: meet.jitsi
- name: XMPP_AUTH_DOMAIN
value: auth.meet.jitsi
- name: XMPP_INTERNAL_MUC_DOMAIN
value: internal-muc.meet.jitsi
- name: XMPP_BOSH_URL_BASE
value: http://127.0.0.1:5280
- name: XMPP_MUC_DOMAIN
value: muc.meet.jitsi
- name: TZ
value: America/Los_Angeles
- name: JVB_TCP_HARVESTER_DISABLED
value: "true"
- name: jvb
image: jitsi/jvb:stable-7001
volumeMounts:
- mountPath: /config
name: jvb-config-volume
imagePullPolicy: IfNotPresent
env:
- name: XMPP_SERVER
value: localhost
- name: DOCKER_HOST_ADDRESS
value: <hidden>
- name: XMPP_DOMAIN
value: meet.jitsi
- name: XMPP_AUTH_DOMAIN
value: auth.meet.jitsi
- name: XMPP_INTERNAL_MUC_DOMAIN
value: internal-muc.meet.jitsi
- name: PUBLIC_URL
value: <hidden>
# - name: JVB_STUN_SERVERS
# value: stun.l.google.com:19302,stun1.l.google.com:19302,stun2.l.google.com:19302
- name: JICOFO_AUTH_USER
value: focus
- name: JVB_TCP_HARVESTER_DISABLED
value: "true"
- name: JVB_AUTH_USER
value: jvb
- name: JVB_PORT
value: "10000"
- name: JVB_TCP_PORT
value: "4443"
- name: JVB_TCP_MAPPED_PORT
value: "4443"
# - name: JVB_ENABLE_APIS
# value: "rest,colibri"
- name: JVB_AUTH_PASSWORD
valueFrom:
secretKeyRef:
name: jitsi-config
key: JVB_AUTH_PASSWORD
- name: JICOFO_AUTH_PASSWORD
valueFrom:
secretKeyRef:
name: jitsi-config
key: JICOFO_AUTH_PASSWORD
- name: JVB_BREWERY_MUC
value: jvbbrewery
- name: TZ
value: America/Los_Angeles
volumes:
- name: jvb-config-volume
hostPath:
path: /home/jitsi-config/jvb
- name: jicofo-config-volume
hostPath:
path: /home/jitsi-config/jicofo
- name: prosody-config-volume
hostPath:
path: /home/jitsi-config/prosody
EDIT 2
Here are the Service definitions we are using:
apiVersion: v1
kind: Service
metadata:
labels:
service: web
name: web
namespace: default
spec:
ports:
- name: "http"
protocol: TCP
port: 80
targetPort: 80
nodePort: 31015
- name: "https"
protocol: TCP
port: 443
targetPort: 443
nodePort: 30443
- name: "prosody"
protocol: TCP
port: 5222
targetPort: 5222
- port: 30300
name: jvb-0
protocol: UDP
targetPort: 30300
nodePort: 30300
# - name: "jvbport"
# protocol: TCP
# port: 9090
# targetPort: 9090
- name: "udp"
protocol: UDP
port: 10000
targetPort: 10000
# - name: "udp-secondary"
# protocol: UDP
# port: 20000
# targetPort: 20000
- name: "test"
protocol: TCP
port: 4443
targetPort: 4443
selector:
k8s-app: jitsi
type: NodePort
---
# Services for the JVBs
# Create a Service for JVB UDP access on a Kubernetes NodePort starting at 31000.
# Make sure NodePorts 31000-31005 are available on your cluster.
# Update this if you need more than 6 JVBs.
# JVB-0
apiVersion: v1
kind: Service
metadata:
labels:
service: jvb-0
name: jvb-0
namespace: default
spec:
type: NodePort
externalTrafficPolicy: Cluster
ports:
- port: 31000
name: jvb-0
protocol: UDP
targetPort: 31000
nodePort: 31000
# - name: "udp"
# protocol: UDP
# port: 10000
# targetPort: 10000
# - name: "jvbport"
# protocol: TCP
# port: 9090
# targetPort: 9090
selector:
app: jvb
"statefulset.kubernetes.io/pod-name": jvb-0
---
Managed to fix it. Posting this for anyone who comes across the same issue.
First off, UDP port 10000 does not work here, because a Kubernetes NodePort can only expose ports in the default range 30000-32767. You therefore need to pick a port within that range and use it for the JVB_PORT configuration in the JVB container.
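For example (a sketch of the relevant part of the JVB container spec, using 31000 as the example port; in the Deployment above this replaces the JVB_PORT value of "10000"):

        - name: jvb
          image: jitsi/jvb:stable-7001
          env:
            - name: JVB_PORT
              value: "31000"   # any port inside the default NodePort range 30000-32767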
Secondly, expose that same port through the Service layer so it is reachable from outside the cluster.
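A matching Service could look roughly like this (the name jvb-udp is just an example; the selector matches the pod label from the Deployment above):

apiVersion: v1
kind: Service
metadata:
  name: jvb-udp
  namespace: default
spec:
  type: NodePort
  selector:
    k8s-app: jitsi              # pod label from the Deployment above
  ports:
    - name: jvb-udp
      protocol: UDP
      port: 31000
      targetPort: 31000         # the port configured as JVB_PORT
      nodePort: 31000           # keep all three the same so the advertised port matches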
Thirdly, regarding the firewall: if you are behind a company firewall, make sure both ingress and egress are allowed for your JVB_PORT.