Version: 24.1

Install on an Amazon EKS cluster

This installation guide describes how to install Seqera Platform Enterprise on Amazon Web Services (AWS) Elastic Kubernetes Service (EKS). When you complete the steps in this guide, you'll have an installation suitable for production use on EKS.

Prerequisites

The following prerequisites are required to complete this installation guide:

Additionally, the ingress assumes the presence of SSL certificates, DNS resolution, and ALB logging. If you've chosen not to use some or all of these features, you'll need to modify the manifest accordingly before applying it to the cluster.

Amazon Web Services (AWS) setup

Set up commonly-used AWS services for Seqera deployment.

Provision an EKS cluster

See the EKS documentation to provision your own Kubernetes cluster. Kubernetes version 1.19 or later is required.
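
For example, you can provision a small cluster with eksctl. This is a minimal sketch only; the cluster name, region, Kubernetes version, and node settings below are placeholders to adapt to your environment and capacity requirements:

eksctl create cluster \
  --name seqera-platform \
  --region eu-west-1 \
  --version 1.29 \
  --nodegroup-name default \
  --nodes 2 \
  --node-type m5.xlarge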

Amazon SES

Seqera Enterprise supports AWS Simple Email Service (SES) as an alternative to traditional SMTP servers for sending application emails.

If you use AWS SES in sandbox mode, both the sender and the receiver email addresses must be verified via AWS SES. Sandbox is not recommended for production use. See the AWS docs for instructions to move out of the sandbox.
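
If you use the SES SMTP interface, point the SMTP settings you configure later in the tower-backend-cfg ConfigMap at the regional SES endpoint. A sketch, with the region and the SES SMTP credentials as placeholders (SES SMTP credentials are distinct from IAM access keys):

TOWER_SMTP_HOST: "email-smtp.<region>.amazonaws.com"
TOWER_SMTP_USER: "<ses_smtp_username>"
TOWER_SMTP_PASSWORD: "<ses_smtp_password>"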

Managed Redis services

Seqera supports managed Redis services such as Amazon ElastiCache.

  • Use a single-node cluster, as multi-node clusters are not supported
  • Use an instance with at least 6GB capacity (cache.m4.large or greater)
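
For reference, a single-node cluster that meets these requirements can be created with the AWS CLI. This is a sketch only; the cluster ID and node type are placeholders, and subnet group and security group options are omitted:

aws elasticache create-cache-cluster \
  --cache-cluster-id seqera-redis \
  --engine redis \
  --cache-node-type cache.m4.large \
  --num-cache-nodes 1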

Amazon RDS

External databases for Seqera Enterprise deployments require:

  • A MySQL 8 Community DB instance
  • At least 2 vCPUs, 8 GB memory, and 30 GB SSD storage
  • Manual MySQL user and database schema creation (an example is shown at the end of this section). See Database configuration for more details.

Recommended instance class and storage requirements depend on the number of parallel pipelines you expect to run.

See Creating an Amazon RDS DB instance to guide you through the external database setup for your production deployment.

After your database is created:

  • Update the inbound rules for the instance's security group to allow MySQL connections on port 3306.
  • Update your Seqera configuration with the database hostname, username, and password.
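
As noted above, the MySQL user and database schema must be created manually. A minimal sketch using the mysql client, with placeholder admin credentials, user name, and password; the tower database name matches the JDBC URL used later in this guide, but see Database configuration for the authoritative statements:

mysql -h <database_host_name> -u <admin_username> -p -e "
  CREATE DATABASE tower;
  CREATE USER 'tower'@'%' IDENTIFIED BY '<db_password>';
  GRANT ALL PRIVILEGES ON tower.* TO 'tower'@'%';"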

Fetch Seqera config values from AWS Parameter Store

From version 23.1, you can optionally retrieve Seqera Enterprise configuration values remotely from the AWS Parameter Store. See AWS Parameter Store configuration for instructions.
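
For example, an individual setting can be stored as an encrypted SecureString parameter with the AWS CLI. The parameter path below is purely illustrative; the path structure Seqera expects is described in AWS Parameter Store configuration:

aws ssm put-parameter \
  --name "/<seqera_config_prefix>/<setting_name>" \
  --type SecureString \
  --value "<setting_value>"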

Installation

Complete the following sections to install Seqera Platform Enterprise on your EKS cluster.

1. Create a namespace

Create a namespace to isolate Kubernetes resources used by Seqera Platform from the other resources on your cluster.

This installation guide assumes the use of seqera-platform as the installation namespace. Consider using a different one that better fits your cluster naming convention.

  1. Create a namespace for the Seqera resources:

    kubectl create namespace seqera-platform
    View command output
    namespace/seqera-platform created
  2. Switch to the namespace:

    kubectl config set-context --current --namespace=seqera-platform

2. Configure container registry credentials

Seqera Enterprise is distributed as a collection of Docker containers available through the Seqera container registry cr.seqera.io. Contact support to get your container access credentials. After you've received your credentials, grant your cluster access to the registry:

  1. Retrieve the name and secret values from the JSON file that you received from Seqera support.

  2. Create a Kubernetes secret with your image pull credentials:

    kubectl create secret docker-registry cr.seqera.io \
    --docker-server=cr.seqera.io \
    --docker-username='<name>' \
    --docker-password='<secret>'

    The credential name contains a dollar sign ($) character. Wrap the name in single quotes to prevent the Linux shell from interpreting this value as an environment variable.

    View command output
    secret/cr.seqera.io created
  3. Confirm that the secret exists:

    kubectl get secrets cr.seqera.io
    View command output
    NAME           TYPE                             DATA   AGE
    cr.seqera.io   kubernetes.io/dockerconfigjson   1      26s
  4. Confirm that you can pull containers from cr.seqera.io:

    1. Pull a container from the private repository:

      kubectl run pull-test --command --restart=Never --image-pull-policy=Always \
      --image cr.seqera.io/private/nf-tower-enterprise/backend:v24.1.4 \
      --overrides='{ "spec": { "imagePullSecrets": [ { "name": "cr.seqera.io" } ] } }' \
      --override-type=strategic -- /bin/true
      View command output
      pod/pull-test created
    2. Confirm that the container was pulled:

      kubectl get pods/pull-test -o=custom-columns=NAME:.metadata.name,STATUS:.status.phase
      View command output
      NAME        STATUS
      pull-test   Succeeded
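  5. Optionally, delete the test pod once the check succeeds:

    kubectl delete pod pull-test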

3. Configure Seqera Platform

Configure the following environment variables. For more information about Seqera configuration options, see Configuration overview.

The configmap.yml manifest includes both the tower.env and tower.yml files. These files are made available to the other containers through volume mounts.

  1. Create a file named configmap.yml with the following Kubernetes manifest:

    Show configmap.yml file
    kind: ConfigMap
    apiVersion: v1
    metadata:
      name: tower-backend-cfg
      labels:
        app: backend-cfg
    data:
      TOWER_SERVER_URL: "https://<seqera_platform_hostname>"
      TOWER_CONTACT_EMAIL: "<system_email_address>"
      TOWER_REDIS_URL: "redis://<redis_host_name>:6379"
      TOWER_DB_URL: "jdbc:mysql://<database_host_name>:3306/tower?permitMysqlScheme=true"
      TOWER_DB_DRIVER: "org.mariadb.jdbc.Driver"
      TOWER_DB_DIALECT: "io.seqera.util.MySQL55DialectCollateBin"
      TOWER_DB_USER: "<db_username>"
      TOWER_DB_PASSWORD: "<db_password>"
      TOWER_SMTP_HOST: "<smtp_host_name>"
      TOWER_SMTP_USER: "<smtp_username>"
      TOWER_SMTP_PASSWORD: "<smtp_password>"
      TOWER_JWT_SECRET: "<jwt_secret>"
      TOWER_CRYPTO_SECRETKEY: "<crypt_secret>"
      TOWER_LICENSE: "<license>"
      TOWER_ENABLE_PLATFORMS: "awsbatch-platform,gls-platform,googlebatch-platform,azbatch-platform,uge-platform,slurm-platform"
      FLYWAY_LOCATIONS: "classpath:db-schema/mysql"
    ---
    kind: ConfigMap
    apiVersion: v1
    metadata:
      name: tower-yml
      labels:
        app: backend-cfg
    data:
      tower.yml: |-
  2. Generate two unique secrets, one each for TOWER_JWT_SECRET and TOWER_CRYPTO_SECRETKEY, with the following command:

    openssl rand -base64 32 | tr -d /=+ | cut -c -32

  3. Edit the configmap.yml file and set the following environment variables:

    • TOWER_CONTACT_EMAIL: Specify a contact email address for the Seqera administrator.
    • TOWER_SERVER_URL: Specify your fully qualified host name for Platform Enterprise, such as https://example.com:8000.
    • TOWER_REDIS_URL: Specify the host name for your Amazon ElastiCache Redis instance.
    • TOWER_DB_URL: Specify the Amazon RDS instance URI connection string, such as jdbc:mysql://<host_name>:3306/tower?permitMysqlScheme=true. Replace <host_name> with the RDS instance's host name.
    • TOWER_DB_USER: Specify the Amazon RDS instance user name.
    • TOWER_DB_PASSWORD: Specify the Amazon RDS instance password.
    • TOWER_SMTP_HOST: Specify the mail server host name.
    • TOWER_SMTP_USER: Specify the mail server user name.
    • TOWER_SMTP_PASSWORD: Specify the mail server password.
    • TOWER_JWT_SECRET: Specify a unique secret that is at least 35 alphanumeric characters.
    • TOWER_CRYPTO_SECRETKEY: Specify a unique secret.
    • TOWER_LICENSE: Specify your Seqera license key, if known. Otherwise, leave this empty.
  4. Apply the config map:

    kubectl apply -f configmap.yml
    View command output
    configmap/tower-backend-cfg created
    configmap/tower-yml created
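  5. Optionally, confirm that both ConfigMaps exist:

    kubectl get configmaps tower-backend-cfg tower-yml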

4. Deploy Seqera

Seqera Platform consists of deployments for a cron service, a backend service, and a frontend service.

  1. Create the manifest files:
    • Create a file named tower-cron.yml with the following Kubernetes manifest:

      Show tower-cron.yml file
      ---
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: cron
        labels:
          app: cron
      spec:
        selector:
          matchLabels:
            app: cron
        template:
          metadata:
            labels:
              app: cron
          spec:
            imagePullSecrets:
              - name: cr.seqera.io
            volumes:
              - name: config-volume
                configMap:
                  name: tower-yml
            initContainers:
              - name: migrate-db
                image: cr.seqera.io/private/nf-tower-enterprise/migrate-db:v24.1.3
                command: ["sh", "-c", "/migrate-db.sh"]
                envFrom:
                  - configMapRef:
                      name: tower-backend-cfg
                volumeMounts:
                  - name: config-volume
                    mountPath: /tower.yml
                    subPath: tower.yml
            containers:
              - name: backend
                image: cr.seqera.io/private/nf-tower-enterprise/backend:v24.1.4
                envFrom:
                  - configMapRef:
                      name: tower-backend-cfg
                volumeMounts:
                  - name: config-volume
                    mountPath: /tower.yml
                    subPath: tower.yml
                env:
                  - name: MICRONAUT_ENVIRONMENTS
                    value: "prod,redis,cron"
                ports:
                  - containerPort: 8080
                readinessProbe:
                  httpGet:
                    path: /health
                    port: 8080
                  initialDelaySeconds: 5
                  timeoutSeconds: 3
                livenessProbe:
                  httpGet:
                    path: /health
                    port: 8080
                  initialDelaySeconds: 5
                  timeoutSeconds: 3
                  failureThreshold: 10
    • Create a file named tower-svc.yml with the following Kubernetes manifest:

      Show tower-svc.yml file
      ---
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: backend
        labels:
          app: backend
      spec:
        selector:
          matchLabels:
            app: backend
        strategy:
          rollingUpdate:
            maxUnavailable: 0
            maxSurge: 1
        template:
          metadata:
            labels:
              app: backend
          spec:
            imagePullSecrets:
              - name: cr.seqera.io
            volumes:
              - name: config-volume
                configMap:
                  name: tower-yml
            containers:
              - name: backend
                image: cr.seqera.io/private/nf-tower-enterprise/backend:v24.1.4
                envFrom:
                  - configMapRef:
                      name: tower-backend-cfg
                env:
                  - name: MICRONAUT_ENVIRONMENTS
                    value: "prod,redis,ha"
                ports:
                  - containerPort: 8080
                volumeMounts:
                  - name: config-volume
                    mountPath: /tower.yml
                    subPath: tower.yml
                resources:
                  requests:
                    cpu: "1"
                    memory: "1200Mi"
                  limits:
                    memory: "4200Mi"
                readinessProbe:
                  httpGet:
                    path: /health
                    port: 8080
                  initialDelaySeconds: 5
                  timeoutSeconds: 3
                livenessProbe:
                  httpGet:
                    path: /health
                    port: 8080
                  initialDelaySeconds: 5
                  timeoutSeconds: 3
                  failureThreshold: 10
      ---
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: frontend
        labels:
          app: frontend
      spec:
        replicas: 1
        selector:
          matchLabels:
            app: frontend
        template:
          metadata:
            labels:
              app: frontend
          spec:
            imagePullSecrets:
              - name: cr.seqera.io
            containers:
              - name: frontend
                image: cr.seqera.io/private/nf-tower-enterprise/frontend:v24.1.4
                ports:
                  - containerPort: 80
            restartPolicy: Always
      ---
      apiVersion: v1
      kind: Service
      metadata:
        name: backend
        labels:
          app: backend
      spec:
        ports:
          - name: http
            port: 8080
            targetPort: 8080
        selector:
          app: backend
      ---
      apiVersion: v1
      kind: Service
      metadata:
        name: frontend
      spec:
        type: LoadBalancer
        ports:
          - port: 80
        selector:
          app: frontend
      ---
  2. Deploy the cron service:

    This manifest includes an init container that creates the required database schema the first time it runs. This process can take a few minutes to complete and must finish before you instantiate the Seqera backend. Ensure this container is in the READY state before proceeding to the next step.

    1. Apply the tower-cron.yml manifest:

      kubectl apply -f tower-cron.yml
      View command output
      deployment.apps/cron configured
    2. Confirm that the cron service deployed successfully:

      kubectl rollout status deployment/cron
      View command output
      deployment "cron" successfully rolled out
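
    If the rollout appears stuck, you can follow the schema migration logs from the migrate-db init container while it runs (this assumes a single cron pod; kubectl picks one pod from the deployment):

      kubectl logs deployment/cron -c migrate-db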

  3. Deploy the backend and frontend services:

    1. Apply the tower-svc.yml manifest:

      kubectl apply -f tower-svc.yml
      View command output
      deployment.apps/backend configured
      deployment.apps/frontend configured
      service/backend configured
      service/frontend configured
    2. Confirm that the backend and frontend services deployed successfully:

      kubectl get deployments
      View command output
      NAME       READY   UP-TO-DATE   AVAILABLE   AGE
      backend    1/1     1            1           2d6h
      cron       1/1     1            1           2d6h
      frontend   1/1     1            1           2d6h
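
    The frontend deployment is exposed through a LoadBalancer service. To retrieve its external address after AWS provisions the load balancer (the EXTERNAL-IP column can show <pending> for a few minutes), run:

      kubectl get service frontend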

5. Configure HTTPS traffic load balancer

The Kubernetes ingress resource is used to make Seqera Enterprise publicly accessible, load-balance traffic, terminate TLS, and offer name-based virtual hosting. The included ingress manifest will create an external IP address and forward HTTP traffic to the Seqera frontend.

  1. Create a file named ingress.yml with the following Kubernetes manifest:

    Show ingress.yml file
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: front-ingress
      annotations:
        kubernetes.io/ingress.class: alb
        alb.ingress.kubernetes.io/scheme: internet-facing
        alb.ingress.kubernetes.io/certificate-arn: <YOUR_CERTIFICATE_ARN>
        alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
        alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301" }}'
        alb.ingress.kubernetes.io/ssl-policy: ELBSecurityPolicy-TLS-1-2-Ext-2018-06
        alb.ingress.kubernetes.io/load-balancer-attributes: >
          idle_timeout.timeout_seconds=301,
          routing.http2.enabled=false,
          access_logs.s3.enabled=true,
          access_logs.s3.bucket=<YOUR_LOGS_S3_BUCKET>,
          access_logs.s3.prefix=<YOUR_LOGS_PREFIX>
    spec:
      rules:
        - host: <YOUR_SEQERA_HOST_NAME>
          http:
            paths:
              - path: /*
                pathType: ImplementationSpecific
                backend:
                  service:
                    name: ssl-redirect
                    port:
                      name: use-annotation
              - path: /*
                pathType: ImplementationSpecific
                backend:
                  service:
                    name: frontend
                    port:
                      number: 80
  2. To deploy the manifest to your cluster, run the following:

    kubectl apply -f ingress.yml
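
    After the AWS Load Balancer Controller reconciles the ingress, the ALB's DNS name appears in the ADDRESS column; it can take a minute or two to be assigned:

    kubectl get ingress front-ingress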

See Kubernetes ingress for more information. If you don't need to make Seqera externally accessible, use a service resource to expose a node port or a load balancer service to make it accessible within your intranet.

See the AWS Load Balancer Controller documentation for configuring an ingress service.

6. Check status

Check that all services are up and running:

kubectl get pods

7. Test the application

See Test deployment.

Optional: Configure database console

Use the dbconsole.yml manifest to deploy a simple web frontend to the Seqera database. Though not required, this can be useful for administrative purposes.

  1. Deploy the database console:

    kubectl apply -f dbconsole.yml
  2. Enable a port-forward for the database console to your local machine:

    kubectl port-forward deployment/dbconsole 8080:8080
  3. Access the database console in a web browser at http://localhost:8080.

Next steps