WebSocket connection failed when using Kamal


After deploying my Rails app with Kamal, my frontend can no longer connect to the /cable endpoint. The browser console says: WebSocket connection to 'wss://domain.com/cable' failed: with nothing after the colon to explain why it failed. The Rails log is also empty for the /cable route. The only suspicious entries I can see in the Traefik logs are lines like this one:

2024-03-22T18:56:17.441214126Z time="2024-03-22T18:56:17Z" level=debug msg="'499 Client Closed Request' caused by: context canceled"

I'm not sure whether that's related, though: the app is already in production with decent traffic, so I can't tell if those entries correspond to the failing wss connections. The number of these log lines also doesn't match the number of failed wss connection retries, so I'd guess they're unrelated.
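
For context on what sits behind the /cable endpoint: Action Cable's transport is configured in config/cable.yml. Below is a minimal sketch of what that file typically looks like for a single-server production setup; it mirrors the stock Rails template rather than being a verbatim copy of my config, and the redis adapter and URL are assumptions:

# config/cable.yml (illustrative sketch; adapter and URL are assumptions, not copied verbatim)
development:
  adapter: async

test:
  adapter: test

production:
  adapter: redis
  url: <%= ENV.fetch("REDIS_URL") { "redis://localhost:6379/1" } %>
  channel_prefix: myapp_production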

My Kamal config looks as follows:

service: myapp

image: user/myapp

volumes:
  - "/home/app/myapp-cache:/app/tmp/cache"
  - "/home/app/myapp-shared:/app/shared"
  - "/home/app/myapp-storage:/app/storage"

servers:
  web:
    hosts:
      - x.x.x.x
    labels:
      traefik.http.routers.myapp.entrypoints: websecure
      traefik.http.routers.myapp.rule: Host(`domain.com`)
      traefik.http.routers.myapp.tls.certresolver: letsencrypt
    options:
      network: "private"
  job:
    hosts:
      - x.x.x.x
    cmd: bundle exec rake solid_queue:start
    options:
      network: "private"
  clock:
    hosts:
      - x.x.x.x
    cmd: bundle exec clockwork clock.rb
    options:
      network: "private"

registry:
  server: ghcr.io
  username: user

  password:
    - KAMAL_REGISTRY_PASSWORD

# Inject ENV variables into containers (secrets come from .env).
# Remember to run `kamal env push` after making changes!
env:
  clear:
    HOSTNAME: domain.com
    APP_DOMAIN: domain.com
    DB_HOST: x.x.x.x
    RAILS_SERVE_STATIC_FILES: true
    RAILS_LOG_TO_STDOUT: true
    ARTISTS_TAXONOMY_ID: 9
    CATEGORIES_TAXONOMY_ID: 8
    PATTERNS_TAXONOMY_ID: 10
    FLIPPER_PSTORE_PATH: shared/flipper.pstore
  secret:
    - POSTGRES_PASSWORD
    - RAILS_MASTER_KEY

ssh:
  user: app

builder:
  dockerfile: Dockerfile.production
  multiarch: false
  cache:
    type: registry

accessories:
  db:
    image: postgres:15
    host: x.x.x.x
    port: 5432
    env:
      clear:
        POSTGRES_USER: "myapp"
        POSTGRES_DB: 'myapp_production'
      secret:
        - POSTGRES_PASSWORD
    files:
      - config/init.sql:/docker-entrypoint-initdb.d/setup.sql
    directories:
      - data:/var/lib/postgresql/data
    options:
      network: "private"

traefik:
  options:
    network: "private"
    publish:
      - "443:443"
    volume:
      - "/letsencrypt/acme.json:/letsencrypt/acme.json"
  args:
    accesslog: true
    entryPoints.web.address: ":80"
    entryPoints.websecure.address: ":443"
    entryPoints.web.http.redirections.entryPoint.to: websecure # We want to force https
    entryPoints.web.http.redirections.entryPoint.scheme: https
    entryPoints.web.http.redirections.entryPoint.permanent: true
    certificatesResolvers.letsencrypt.acme.email: "[email protected]"
    certificatesResolvers.letsencrypt.acme.storage: "/letsencrypt/acme.json" # Must match the path in `volume`
    certificatesResolvers.letsencrypt.acme.httpchallenge: true
    certificatesResolvers.letsencrypt.acme.httpchallenge.entrypoint: web

healthcheck:
  path: /health/ready
  port: 4000
  max_attempts: 15

# Bridge fingerprinted assets, like JS and CSS, between versions to avoid
# hitting 404 on in-flight requests. Combines all files from new and old
# version inside the asset_path.
# asset_path: /rails/public/assets

# Configure rolling deploys by setting a wait time between batches of restarts.
# boot:
#   limit: 10 # Can also specify as a percentage of total hosts, such as "25%"
#   wait: 2

# Configure the role used to determine the primary_host. This host takes
# deploy locks, runs health checks during the deploy, and follows logs, etc.
#
# Caution: there's no support for role renaming yet, so be careful to clean up
#          the previous role on the deployed hosts.
# primary_role: web

# Controls if we abort when we see a role with no hosts. Disabling this may be
# useful for more complex deploy configurations.
#
# allow_empty_roles: false
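
One more note on ports, since the healthcheck above targets 4000: Traefik has to proxy both plain HTTP requests and the WebSocket upgrade for /cable to whatever port the app actually listens on inside the container. As a sketch, this is how that port could be pinned explicitly with a label under the web role; this label is not in my config today, and the value 4000 simply mirrors the healthcheck port, so treat it as an assumption about where the app listens:

labels:
  traefik.http.services.myapp.loadbalancer.server.port: "4000"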