Version: 24.1

Install on a Kubernetes cluster

This installation guide describes how to install Seqera Platform on a Kubernetes cluster.

Prerequisites

The following prerequisites are required to deploy Seqera on a Kubernetes cluster.

  • A Kubernetes cluster version 1.30.2 or newer
  • A local installation of kubectl CLI
  • Access to the cluster with the cluster-admin role

1. Create a namespace

Create a namespace to isolate Kubernetes resources used by Seqera Platform from the other resources on your cluster.

This guide uses seqera-platform as the installation namespace. Substitute a namespace that fits your cluster's naming conventions if needed.

  1. Create a namespace for the Seqera resources:

    kubectl create namespace seqera-platform
    namespace/seqera-platform created
  2. Switch to the namespace:

    kubectl config set-context --current --namespace=seqera-platform

2. Configure container registry credentials

Seqera Enterprise is distributed as a collection of Docker containers available through the Seqera container registry cr.seqera.io. Contact support to get your container access credentials. After you've received your credentials, grant your cluster access to the registry:

  1. Retrieve the name and secret values from the JSON file that you received from Seqera support.

  2. Create the image pull secret:

    kubectl create secret docker-registry cr.seqera.io \
    --docker-server=cr.seqera.io \
    --docker-username='<name>' \
    --docker-password='<secret>'

    The credential name contains a dollar sign ($). Wrap the name in single quotes to prevent the shell from interpreting part of the value as an environment variable.

    secret/cr.seqera.io created
  3. Confirm that the secret exists:

    kubectl get secrets cr.seqera.io
    NAME           TYPE                             DATA   AGE
    cr.seqera.io   kubernetes.io/dockerconfigjson   1      26s
  4. Confirm that you can pull containers from cr.seqera.io:

    1. Pull a container from the private repository:

      kubectl run pull-test --command --restart=Never --image-pull-policy=Always \
      --image cr.seqera.io/private/nf-tower-enterprise/backend:v24.1.4 \
      --overrides='{ "spec": { "imagePullSecrets": [ { "name": "cr.seqera.io" } ] } }' \
      --override-type=strategic -- /bin/true
      pod/pull-test created
    2. Confirm that the container was pulled:

      kubectl get pods/pull-test -o=custom-columns=NAME:.metadata.name,STATUS:.status.phase
      NAME        STATUS
      pull-test   Succeeded
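The quoting caveat in step 2 can be seen directly in the shell. The name user$123 below is a hypothetical example, not a real credential:

```shell
# Single quotes: the value is passed through verbatim.
quoted='user$123'
# Unquoted: the shell parses $1 as a positional parameter (empty in a
# plain script), silently corrupting the name to "user23".
unquoted=user$123
echo "$quoted"
echo "$unquoted"
```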

3. Configure Seqera Platform

Configure the following environment variables. For more information about Seqera configuration options, see Configuration overview.

The configmap.yml manifest includes both the tower.env and tower.yml files. These files are made available to the other containers through volume mounts.

  1. Create a file named configmap.yml with the following Kubernetes manifest:

    ---
    kind: ConfigMap
    apiVersion: v1
    metadata:
      name: tower-backend-cfg
      labels:
        app: backend-cfg
    data:
      TOWER_ENABLE_UNSAFE_MODE: "true"
      TOWER_ROOT_USERS: "<root_users>"
      TOWER_SERVER_URL: "http://localhost:8080"
      TOWER_CONTACT_EMAIL: "user@example.com"
      TOWER_REDIS_URL: "redis://redis:6379"
      TOWER_DB_URL: "jdbc:mysql://mysql:3306/tower?permitMysqlScheme=true"
      TOWER_DB_DRIVER: "org.mariadb.jdbc.Driver"
      TOWER_DB_USER: "tower"
      TOWER_DB_PASSWORD: "tower"
      TOWER_DB_DIALECT: "io.seqera.util.MySQL55DialectCollateBin"
      TOWER_SMTP_HOST: "mailcatcher"
      TOWER_SMTP_USER: ""
      TOWER_SMTP_PASSWORD: ""
      TOWER_JWT_SECRET: "<jwt_secret>"
      TOWER_CRYPTO_SECRETKEY: "<crypt_secret>"
      TOWER_LICENSE: "<license>"
      TOWER_ENABLE_PLATFORMS: "local-platform"
      FLYWAY_LOCATIONS: "classpath:db-schema/mysql"
    ---
    kind: ConfigMap
    apiVersion: v1
    metadata:
      name: tower-yml
      labels:
        app: backend-cfg
    data:
      tower.yml: |
        mail:
          smtp:
            auth: false
            starttls:
              enable: false
              required: false
  2. Generate two unique secrets with the following command: openssl rand -base64 32 | tr -d /=+ | cut -c -32

  3. Edit the configmap.yml file and set the following environment variables:

    • TOWER_ROOT_USERS: Specify your email address.
    • TOWER_JWT_SECRET: Specify a unique secret that is at least 35 alphanumeric characters.
    • TOWER_CRYPTO_SECRETKEY: Specify a unique secret.
    • TOWER_LICENSE: Specify your Seqera license key, if known. Otherwise, leave this empty.
  4. Apply the config map:

    kubectl apply -f configmap.yml
    configmap/tower-backend-cfg created
    configmap/tower-yml created
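Steps 2 and 3 can be scripted so both values land in shell variables before you paste them into configmap.yml. The gen_secret helper and variable names below are illustrative; note that this sketch lengthens the JWT secret to satisfy the 35-character minimum, which the plain command in step 2 does not guarantee:

```shell
# Illustrative helper: emit an alphanumeric secret of the requested length.
gen_secret() {
  openssl rand -base64 64 | tr -d '/=+\n' | cut -c "-$1"
}

# TOWER_JWT_SECRET must be at least 35 alphanumeric characters:
TOWER_JWT_SECRET=$(gen_secret 40)
TOWER_CRYPTO_SECRETKEY=$(gen_secret 32)

printf 'TOWER_JWT_SECRET:       %s\n' "$TOWER_JWT_SECRET"
printf 'TOWER_CRYPTO_SECRETKEY: %s\n' "$TOWER_CRYPTO_SECRETKEY"
```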

4. Deploy a Redis instance

Seqera Enterprise requires a Redis database for caching purposes. Configure Redis manually by deploying a manifest to your cluster.

  1. Create a file named redis.yml with the following Kubernetes manifest:

    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: redis-data
      labels:
        app: redis
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi
      storageClassName: <storage_class>
    ---
    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: redis
      labels:
        app: redis
    spec:
      selector:
        matchLabels:
          app: redis
      serviceName: redis
      template:
        metadata:
          labels:
            app: redis
        spec:
          containers:
            - image: cr.seqera.io/public/redis:6.0
              name: redis
              args:
                - --appendonly yes
              ports:
                - containerPort: 6379
              volumeMounts:
                - mountPath: "/data"
                  name: "vol-data"
          volumes:
            - name: vol-data
              persistentVolumeClaim:
                claimName: redis-data
          restartPolicy: Always
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: redis
      labels:
        app: redis
    spec:
      ports:
        - port: 6379
          targetPort: 6379
      selector:
        app: redis
  2. Set the spec.storageClassName field for the persistent volume claim:

    1. Obtain the default storage class name that your Kubernetes cluster provides:

      kubectl get storageclass -o=custom-columns=NAME:.metadata.name,PROVISIONER:.provisioner
      NAME       PROVISIONER
      hostpath   docker.io/hostpath
    2. Edit the redis.yml file and set spec.storageClassName to the name of the default storage class from the output from the previous step.

  3. Apply the manifest:

    kubectl apply -f redis.yml
    persistentvolumeclaim/redis-data created
    statefulset.apps/redis created
    service/redis created
  4. Confirm that Redis is available:

    kubectl get statefulsets/redis
    NAME    READY   AGE
    redis   1/1     3d5h

5. Deploy a MySQL instance

  1. Create a file named mysql.yml with the following Kubernetes manifest:

    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: mysql-pvc
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi
      storageClassName: hostpath
    ---
    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: mysql
    spec:
      serviceName: mysql
      replicas: 1
      selector:
        matchLabels:
          app: mysql
      template:
        metadata:
          labels:
            app: mysql
        spec:
          containers:
            - name: mysql
              image: mysql:8.0
              ports:
                - containerPort: 3306
              env:
                - name: MYSQL_ALLOW_EMPTY_PASSWORD
                  value: "yes"
                - name: MYSQL_USER
                  value: "tower"
                - name: MYSQL_PASSWORD
                  value: "tower"
                - name: MYSQL_DATABASE
                  value: "tower"
              volumeMounts:
                - name: mysql-storage
                  mountPath: /var/lib/mysql
              readinessProbe:
                exec:
                  command:
                    - mysqladmin
                    - ping
                    - -h
                    - localhost
                initialDelaySeconds: 30
                periodSeconds: 10
                timeoutSeconds: 20
                failureThreshold: 10
          volumes:
            - name: mysql-storage
              persistentVolumeClaim:
                claimName: mysql-pvc
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: mysql
      labels:
        app: mysql
    spec:
      clusterIP: None
      ports:
        - port: 3306
      selector:
        app: mysql
  2. Set the spec.storageClassName field for the persistent volume claim:

    1. Obtain the default storage class name that your Kubernetes cluster provides:

      kubectl get storageclass -o=custom-columns=NAME:.metadata.name,PROVISIONER:.provisioner
      NAME       PROVISIONER
      hostpath   docker.io/hostpath
    2. Edit the mysql.yml file and set spec.storageClassName to the name of the default storage class from the output from the previous step.

  3. Apply the manifest:

    kubectl apply -f mysql.yml
    persistentvolumeclaim/mysql-pvc created
    statefulset.apps/mysql created
    service/mysql created
  4. Confirm that MySQL is available:

    kubectl get statefulsets/mysql
    NAME    READY   AGE
    mysql   1/1     2d12h

6. Deploy Seqera

Seqera Platform consists of deployments for a cron service, a backend service, and a frontend service.

  1. Create the manifest files:
    • Create a file named tower-cron.yml with the following Kubernetes manifest:

      ---
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: cron
        labels:
          app: cron
      spec:
        selector:
          matchLabels:
            app: cron
        template:
          metadata:
            labels:
              app: cron
          spec:
            imagePullSecrets:
              - name: cr.seqera.io
            volumes:
              - name: config-volume
                configMap:
                  name: tower-yml
            initContainers:
              - name: migrate-db
                image: cr.seqera.io/private/nf-tower-enterprise/migrate-db:v24.1.3
                command: ["sh", "-c", "/migrate-db.sh"]
                envFrom:
                  - configMapRef:
                      name: tower-backend-cfg
                volumeMounts:
                  - name: config-volume
                    mountPath: /tower.yml
                    subPath: tower.yml
            containers:
              - name: backend
                image: cr.seqera.io/private/nf-tower-enterprise/backend:v24.1.4
                envFrom:
                  - configMapRef:
                      name: tower-backend-cfg
                volumeMounts:
                  - name: config-volume
                    mountPath: /tower.yml
                    subPath: tower.yml
                env:
                  - name: MICRONAUT_ENVIRONMENTS
                    value: "prod,redis,cron"
                ports:
                  - containerPort: 8080
                readinessProbe:
                  httpGet:
                    path: /health
                    port: 8080
                  initialDelaySeconds: 5
                  timeoutSeconds: 3
                livenessProbe:
                  httpGet:
                    path: /health
                    port: 8080
                  initialDelaySeconds: 5
                  timeoutSeconds: 3
                  failureThreshold: 10
    • Create a file named tower-svc.yml with the following Kubernetes manifest:

      ---
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: backend
        labels:
          app: backend
      spec:
        selector:
          matchLabels:
            app: backend
        strategy:
          rollingUpdate:
            maxUnavailable: 0
            maxSurge: 1
        template:
          metadata:
            labels:
              app: backend
          spec:
            imagePullSecrets:
              - name: cr.seqera.io
            volumes:
              - name: config-volume
                configMap:
                  name: tower-yml
            containers:
              - name: backend
                image: cr.seqera.io/private/nf-tower-enterprise/backend:v24.1.4
                envFrom:
                  - configMapRef:
                      name: tower-backend-cfg
                env:
                  - name: MICRONAUT_ENVIRONMENTS
                    value: "prod,redis,ha"
                ports:
                  - containerPort: 8080
                volumeMounts:
                  - name: config-volume
                    mountPath: /tower.yml
                    subPath: tower.yml
                resources:
                  requests:
                    cpu: "1"
                    memory: "1200Mi"
                  limits:
                    memory: "4200Mi"
                readinessProbe:
                  httpGet:
                    path: /health
                    port: 8080
                  initialDelaySeconds: 5
                  timeoutSeconds: 3
                livenessProbe:
                  httpGet:
                    path: /health
                    port: 8080
                  initialDelaySeconds: 5
                  timeoutSeconds: 3
                  failureThreshold: 10
      ---
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: frontend
        labels:
          app: frontend
      spec:
        replicas: 1
        selector:
          matchLabels:
            app: frontend
        template:
          metadata:
            labels:
              app: frontend
          spec:
            imagePullSecrets:
              - name: cr.seqera.io
            containers:
              - name: frontend
                image: cr.seqera.io/private/nf-tower-enterprise/frontend:v24.1.4-unprivileged
                env:
                  - name: NGINX_LISTEN_PORT # If not defined, defaults to 8000.
                    value: "8000"
                ports:
                  - containerPort: 8000
            restartPolicy: Always
      ---
      # Services
      apiVersion: v1
      kind: Service
      metadata:
        name: backend
        labels:
          app: backend
      spec:
        ports:
          - name: http
            port: 8080
            targetPort: 8080
        selector:
          app: backend
      ---
      apiVersion: v1
      kind: Service
      metadata:
        name: frontend
      spec:
        # type: LoadBalancer
        ports:
          - port: 80
            targetPort: 8000
        selector:
          app: frontend
  2. Deploy the cron service:

    This manifest includes an init container that creates the required database schema on first launch. This process can take a few minutes to complete and must finish before you instantiate the Seqera backend. Ensure this container is in the READY state before proceeding to the next step.

    1. Apply the tower-cron.yml manifest:

      kubectl apply -f tower-cron.yml
      deployment.apps/cron configured
    2. Confirm that the cron service deployed successfully:

      kubectl rollout status deployment/cron
      deployment "cron" successfully rolled out

  3. Deploy the backend and frontend services:

    1. Apply the tower-svc.yml manifest:

      kubectl apply -f tower-svc.yml
      deployment.apps/backend configured
      deployment.apps/frontend configured
      service/backend configured
      service/frontend configured
    2. Confirm that the backend and frontend services deployed successfully:

      kubectl get deployments
      NAME       READY   UP-TO-DATE   AVAILABLE   AGE
      backend    1/1     1            1           2d6h
      cron       1/1     1            1           2d6h
      frontend   1/1     1            1           2d6h

7. Create a new user account

Use the same email address that you specified for the TOWER_ROOT_USERS environment variable.

  1. Open a port forward to the frontend to access the Seqera UI:

    kubectl port-forward services/frontend 8080:80 &
    Forwarding from 127.0.0.1:8080 -> 8000
    Forwarding from [::1]:8080 -> 8000

  2. In a web browser, visit http://localhost:8080/. In the Sign in to Seqera Platform form, enter the email address that you specified in the TOWER_ROOT_USERS environment variable.

  3. Create a file named mailcatcher.yml with the following manifest:

    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: mailcatcher
      labels:
        app: mailcatcher
    spec:
      containers:
        - name: mailcatcher
          image: sj26/mailcatcher
          ports:
            - containerPort: 1025
            - containerPort: 1080
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: mailcatcher
    spec:
      selector:
        app: mailcatcher
      ports:
        - name: smtp
          protocol: TCP
          port: 587
          targetPort: 1025
        - name: http
          protocol: TCP
          port: 1080
          targetPort: 1080
      type: ClusterIP
  4. Deploy the MailCatcher application so that you can access the email that Seqera sends with your authentication token:

    kubectl apply -f mailcatcher.yml
  5. Open a port forward to the MailCatcher application to access its web UI:

    kubectl port-forward services/mailcatcher 1080 &
    Forwarding from 127.0.0.1:1080 -> 1080
    Forwarding from [::1]:1080 -> 1080
  6. In a web browser, visit http://localhost:1080/.

  7. Open the authentication email with the subject Complete your sign-in to Seqera and select Complete sign-in. Seqera redirects your browser to the Launchpad page.

  8. Optional: To stop the background port forward processes, run jobs in your shell to list them, then run kill -15 <pid> for each process. If you stop the port forward to the frontend, you can no longer access the Seqera UI. For persistent access to your installation, we recommend using a cloud provider's load balancer in conjunction with Kubernetes ingress.
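The cleanup in the last step can be rehearsed with a stand-in background process (a sleep here, in place of kubectl port-forward):

```shell
# Start a stand-in background process; substitute your kubectl port-forward.
sleep 300 &
PF_PID=$!

# List background jobs, as `jobs` would show your port forwards:
jobs -l

# Stop the process with SIGTERM (signal 15) and reap it:
kill -15 "$PF_PID"
wait "$PF_PID" 2>/dev/null || true
```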

Your installation of Seqera Platform is complete.

Next steps

  • Configure OpenID Connect (OIDC) for seamless integration with your identity provider
  • Configure ingress for your public cloud provider for load balancing and TLS termination
  • Configure access to your organization's email server
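As a starting point for the ingress item above, the following is a minimal sketch of an Ingress manifest routing to the frontend service. The hostname and ingressClassName are placeholder assumptions for an NGINX ingress controller; adapt them to your cloud provider's controller and TLS setup.

```yaml
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: frontend-ingress
spec:
  ingressClassName: nginx        # assumes an NGINX ingress controller
  rules:
    - host: seqera.example.com   # hypothetical hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend
                port:
                  number: 80
```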