Tekton is a comprehensive Kubernetes-native framework for continuous integration and continuous deployment, which first appeared in early 2019.

The community that has formed around Tekton gathers contributors from Red Hat, Google, and most of the other major Kubernetes players. Tekton is now available on OpenShift 4, where it can replace BuildConfigs or Jenkins pipelines.

Taking advantage of the recently added ARM support (Tekton Pipelines v0.19.0, December 9, 2020; Tekton Triggers v0.10.0, October 26, 2020; Tekton Dashboard v0.12.2, December 15, 2020), and following up on our articles about deploying a Kubernetes cluster on Raspberry Pi and about configuring persistent volumes and a registry, we will now look at how to build our own images on Kubernetes.

Tekton Pipelines

Building Docker Images

At this point, we can test running a pipeline. Let us start by creating a Task describing how to build an image:

$ cat <<EOF | kubectl apply -f-
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: docker-build-kaniko
  namespace: ci
spec:
  params:
  - default: gcr.io/kaniko-project/executor:multi-arch-v1.3.0
    description: The location of the kaniko builder image.
    name: builderimage
    type: string
  - default: ""
    description: Optional password or token, passed to the build as SRC_CLONE_PASS
    name: clonepass
    type: string
  - default: ""
    description: Optional username, passed to the build as SRC_CLONE_USER
    name: cloneuser
    type: string
  - default: Dockerfile
    description: The name of the Dockerfile
    name: dockerfile
    type: string
  - default: .
    description: Parent directory for your Dockerfile.
    name: dockerroot
    type: string
  - default: ""
    description: Forces USER in Dockerfile - either a specific uid, or any string would translate to 1001
    name: forceuid
    type: string
  - default: ""
    description: Forces FROM in Dockerfile.
    name: fromimage
    type: string
  resources:
    inputs:
    - name: source
      type: git
    outputs:
    - name: image
      type: image
  steps:
  - command:
    - /bin/sh
    - -c
    - |
        if echo "$(inputs.params.fromimage)" | grep -E '/.*/' >/dev/null; then \
            sed -i "s|^[ ]*FROM[ ]*[^ ]*$|FROM $(inputs.params.fromimage)|" \
                 "$(inputs.params.dockerroot)/$(inputs.params.dockerfile)"; \
        elif test "$(inputs.params.fromimage)"; then \
            sed -i "s|^[ ]*FROM[ ]*[^ ]*$|FROM registry.registry.svc.cluster.local:5000/ci/$(inputs.params.fromimage)|" \
                 "$(inputs.params.dockerroot)/$(inputs.params.dockerfile)"; \
        fi; \
        if test "$(inputs.params.forceuid)" -a "$(inputs.params.forceuid)" -gt 0 >/dev/null 2>&1; then \
            if ! grep -E '^USER[ \t]*[0-9]' "$(inputs.params.dockerroot)/$(inputs.params.dockerfile)" >/dev/null; then \
                echo USER $(inputs.params.forceuid) \
                    >>"$(inputs.params.dockerroot)/$(inputs.params.dockerfile)"; \
            fi; \
        elif test "$(inputs.params.forceuid)"; then \
            if ! grep -E '^USER[ \t]*[0-9]' "$(inputs.params.dockerroot)/$(inputs.params.dockerfile)" >/dev/null; then \
                echo USER 1001 >>"$(inputs.params.dockerroot)/$(inputs.params.dockerfile)"; \
            fi; \
        fi
    image: docker.io/busybox:latest
    name: patch-from
    securityContext:
      privileged: true
    workingDir: /workspace/source
  - command:
    - /kaniko/executor
    - --skip-tls-verify-pull
    - --skip-tls-verify
    - "--build-arg=DO_UPGRADE=true"
    - "--build-arg=SRC_CLONE_PASS=$(inputs.params.clonepass)"
    - "--build-arg=SRC_CLONE_USER=$(inputs.params.cloneuser)"
    - --dockerfile=$(inputs.params.dockerroot)/$(inputs.params.dockerfile)
    - --context=dir:///workspace/source
    - --destination=$(outputs.resources.image.url)
    env:
    - name: DOCKER_CONFIG
      value: /builder/home/.docker
    image: $(inputs.params.builderimage)
    name: build-and-push
    securityContext:
      privileged: true
    workingDir: /workspace/source
EOF

Note that we chose to use Kaniko here, a project published by Google in 2018 that builds container images from within another container.
Other implementations exist as well, notably BuildKit, Img, or Buildah. The latter, provided by Red Hat, is just as capable as Kaniko, but has the drawback of not yet offering an arm64 image.

At this point, note that although Kaniko and Buildah claim to build images without any special privileges, this is not quite true on Kubernetes, where these containers still need to run as root. We will therefore want to create a ServiceAccount and grant it the necessary privileges.
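
Here is a minimal sketch of such an account (named tkn-ci, as referenced by the PipelineRuns further down), assuming a cluster where a dedicated ServiceAccount is enough to run privileged Pods; if PodSecurityPolicies are enforced, it would additionally need "use" rights on a sufficiently permissive policy (the "privileged" policy name below is hypothetical):

$ kubectl create serviceaccount tkn-ci -n ci
$ cat <<EOF | kubectl apply -f-
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: tkn-ci-privileged
  namespace: ci
rules:
- apiGroups: ["policy"]
  resources: ["podsecuritypolicies"]
  resourceNames: ["privileged"]  # hypothetical PSP name, adjust to your cluster
  verbs: ["use"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: tkn-ci-privileged
  namespace: ci
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: tkn-ci-privileged
subjects:
- kind: ServiceAccount
  name: tkn-ci
  namespace: ci
EOF

With that ServiceAccount in place, we can chain our Task into a Pipeline: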

$ cat <<EOF | kubectl apply -f-
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: docker-build
  namespace: ci
spec:
  params:
  - default: Dockerfile
    name: dockerfile
    type: string
  - default: .
    name: dockerroot
    type: string
  - default: ""
    description: Forces FROM in Dockerfile.
    name: fromimage
    type: string
  resources:
  - name: app-git
    type: git
  - name: app-image
    type: image
  tasks:
  - name: build
    params:
    - name: dockerfile
      value: $(params.dockerfile)
    - name: dockerroot
      value: $(params.dockerroot)
    - name: fromimage
      value: $(params.fromimage)
    resources:
      inputs:
      - name: source
        resource: app-git
      outputs:
      - name: image
        resource: app-image
    taskRef:
      kind: Task
      name: docker-build-kaniko
EOF

Pipelines and Tasks are meant to be reused. They are contextualized using resources such as git repositories and registry images, which we will define as follows:

$ cat <<EOF | kubectl apply -f-
---
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
  name: registry-exporter-git
  namespace: ci
spec:
  params:
  - name: url
    value: https://gitlab.com/Worteks/docker-registry-exporter
  - name: revision
    value: master
  type: git
---
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
  name: registry-exporter-image
  namespace: ci
spec:
  params:
  - name: url
    value: registry.registry.svc.cluster.local:5000/ci/registry-exporter:master
  type: image
EOF

Finally, running a job is triggered by creating a PipelineRun, such as the following:

$ cat <<EOF | kubectl apply -f-
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: registry-exporter-1
  namespace: ci
spec:
  pipelineRef:
    name: docker-build
  resources:
  - name: app-git
    resourceRef:
      name: registry-exporter-git
  - name: app-image
    resourceRef:
      name: registry-exporter-image
  serviceAccountName: tkn-ci
  timeout: 30m0s
EOF

We can then follow our pipeline execution:

$ kubectl  get pods -n ci
NAME                                        READY   STATUS            RESTARTS   AGE
registry-exporter-1-build-gc7k8-pod-sp6qh   0/5     PodInitializing   0          19s
$ kubectl logs -n ci registry-exporter-1-build-gc7k8-pod-sp6qh
error: a container name must be specified for pod registry-exporter-1-build-gc7k8-pod-sp6qh, choose one of: [step-create-dir-image-wql9r step-git-source-source-2jpbq step-patch-from step-build step-image-digest-exporter-kzr5b] or one of the init containers: [working-dir-initializer place-tools]
$ kubectl logs -n ci registry-exporter-1-build-gc7k8-pod-sp6qh -f -c step-git-source-source-2jpbq
Error from server (BadRequest): container "step-git-source-source-2jpbq" in pod "registry-exporter-1-build-gc7k8-pod-sp6qh" is waiting to start: PodInitializing
$ kubectl logs -n ci registry-exporter-1-build-gc7k8-pod-sp6qh -f -c step-create-dir-image-wql9r
$ kubectl logs -n ci registry-exporter-1-build-gc7k8-pod-sp6qh -f -c step-git-source-source-2jpbq
{"level":"info","ts":1610369828.8041682,"caller":"git/git.go:165","msg":"Successfully cloned https://github.com/Worteks/docker-registry-exporter @ 0388c04686d4aaaddd28c747a66dd3f362b35428 (grafted, HEAD, origin/fix-docker-build) in path /workspace/source"}
{"level":"info","ts":1610369828.899402,"caller":"git/git.go:203","msg":"Successfully initialized and updated submodules in path /workspace/source"}
$ kubectl logs -n ci registry-exporter-1-build-gc7k8-pod-sp6qh -f -c step-build
E0109 13:04:30.159776      14 aws_credentials.go:77] while getting AWS credentials NoCredentialProviders: no valid providers in chain. Deprecated.
For verbose messaging see aws.Config.CredentialsChainVerboseErrors
INFO[0007] Resolved base name python:3.7-alpine to base
INFO[0007] Retrieving image manifest python:3.7-alpine
INFO[0007] Retrieving image python:3.7-alpine
INFO[0009] Retrieving image manifest python:3.7-alpine
INFO[0009] Retrieving image python:3.7-alpine
INFO[0010] Retrieving image manifest python:3.7-alpine
INFO[0010] Retrieving image python:3.7-alpine
INFO[0011] Retrieving image manifest python:3.7-alpine
INFO[0011] Retrieving image python:3.7-alpine
INFO[0012] Built cross stage deps: map[0:[/install]]
INFO[0012] Retrieving image manifest python:3.7-alpine
INFO[0012] Retrieving image python:3.7-alpine
INFO[0014] Retrieving image manifest python:3.7-alpine
INFO[0014] Retrieving image python:3.7-alpine
INFO[0016] Executing 0 build triggers
INFO[0016] Unpacking rootfs as cmd RUN  apk add --no-cache --virtual=build-dependencies     autoconf     automake     g++     gcc     linux-headers     make     openssl-dev     zlib-dev requires it.
INFO[0019] RUN  apk add --no-cache --virtual=build-dependencies     autoconf     automake     g++     gcc     linux-headers     make     openssl-dev     zlib-dev
INFO[0019] Taking snapshot of full filesystem...
INFO[0022] cmd: /bin/sh
INFO[0022] args: [-c apk add --no-cache --virtual=build-dependencies     autoconf     automake     g++     gcc     linux-headers     make     openssl-dev     zlib-dev]
INFO[0022] Running: [/bin/sh -c apk add --no-cache --virtual=build-dependencies     autoconf     automake     g++     gcc     linux-headers     make     openssl-dev     zlib-dev]
fetch http://dl-cdn.alpinelinux.org/alpine/v3.12/main/aarch64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.12/community/aarch64/APKINDEX.tar.gz
(1/24) Installing m4 (1.4.18-r1)
(2/24) Installing perl (5.30.3-r0)
(3/24) Installing autoconf (2.69-r2)
(4/24) Installing automake (1.16.2-r0)
(5/24) Installing libgcc (9.3.0-r2)
(6/24) Installing libstdc++ (9.3.0-r2)
(7/24) Installing binutils (2.34-r1)
(8/24) Installing gmp (6.2.0-r0)
(9/24) Installing isl (0.18-r0)
(10/24) Installing libgomp (9.3.0-r2)
(11/24) Installing libatomic (9.3.0-r2)
(12/24) Installing libgphobos (9.3.0-r2)
(13/24) Installing mpfr4 (4.0.2-r4)
(14/24) Installing mpc1 (1.1.0-r1)
(15/24) Installing gcc (9.3.0-r2)
(16/24) Installing musl-dev (1.1.24-r10)
(17/24) Installing libc-dev (0.7.2-r3)
(18/24) Installing g++ (9.3.0-r2)
(19/24) Installing linux-headers (5.4.5-r1)
(20/24) Installing make (4.3-r0)
(21/24) Installing pkgconf (1.7.2-r0)
(22/24) Installing openssl-dev (1.1.1i-r0)
(23/24) Installing zlib-dev (1.2.11-r3)
(24/24) Installing build-dependencies (20210109.130446)
Executing busybox-1.31.1-r19.trigger
OK: 238 MiB in 59 packages
INFO[0029] Taking snapshot of full filesystem...
INFO[0057] RUN mkdir /install
INFO[0057] cmd: /bin/sh
INFO[0057] args: [-c mkdir /install]
INFO[0057] Running: [/bin/sh -c mkdir /install]
INFO[0057] Taking snapshot of full filesystem...
INFO[0065] WORKDIR /install
INFO[0065] cmd: workdir
INFO[0065] Changed working directory to /install
INFO[0065] No files changed in this command, skipping snapshotting.
INFO[0065] COPY requirements.txt /requirements.txt
INFO[0065] Taking snapshot of files...
INFO[0065] RUN pip install --prefix=/install -r /requirements.txt
INFO[0065] cmd: /bin/sh
INFO[0065] args: [-c pip install --prefix=/install -r /requirements.txt]
INFO[0065] Running: [/bin/sh -c pip install --prefix=/install -r /requirements.txt]
Collecting prometheus_client==0.4.2
  Downloading prometheus_client-0.4.2.tar.gz (29 kB)
Collecting Twisted==19.7.0
  Downloading Twisted-19.7.0.tar.bz2 (3.1 MB)
Collecting attrs>=17.4.0
  Downloading attrs-20.3.0-py2.py3-none-any.whl (49 kB)
Collecting Automat>=0.3.0
  Downloading Automat-20.2.0-py2.py3-none-any.whl (31 kB)
Collecting constantly>=15.1
  Downloading constantly-15.1.0-py2.py3-none-any.whl (7.9 kB)
Collecting hyperlink>=17.1.1
  Downloading hyperlink-21.0.0-py2.py3-none-any.whl (74 kB)
Collecting idna>=2.5
  Downloading idna-3.1-py3-none-any.whl (58 kB)
Collecting incremental>=16.10.1
  Using cached incremental-17.5.0-py2.py3-none-any.whl (16 kB)
Collecting PyHamcrest>=1.9.0
  Downloading PyHamcrest-2.0.2-py3-none-any.whl (52 kB)
Collecting zope.interface>=4.4.2
  Downloading zope.interface-5.2.0.tar.gz (227 kB)
Requirement already satisfied: setuptools in /usr/local/lib/python3.7/site-packages (from zope.interface>=4.4.2->Twisted==19.7.0->-r /requirements.txt (line 1)) (51.0.0)
Collecting six
  Downloading six-1.15.0-py2.py3-none-any.whl (10 kB)
Building wheels for collected packages: prometheus-client, Twisted, zope.interface
  Building wheel for prometheus-client (setup.py): started
  Building wheel for prometheus-client (setup.py): finished with status 'done'
  Created wheel for prometheus-client: filename=prometheus_client-0.4.2-py3-none-any.whl size=35285 sha256=f3149f489ce679d5ea7c70a06d2f14b35705b5d2b20ce36f4698eff88e0a1b2e
  Stored in directory: /root/.cache/pip/wheels/4e/be/ec/c91dde5309bc0e660bda4735162575e19e238983dd64c60bc9
  Building wheel for Twisted (setup.py): started
  Building wheel for Twisted (setup.py): finished with status 'done'
  Created wheel for Twisted: filename=Twisted-19.7.0-cp37-cp37m-linux_aarch64.whl size=3034359 sha256=fdacf8565c06b15cae0bb42d01102a0296feb678c4eb870c917c6aca3ae01025
  Stored in directory: /root/.cache/pip/wheels/bc/28/f9/e991f24acf0acd2ed39e7733413e85745ee998f843a311a8f6
  Building wheel for zope.interface (setup.py): started
  Building wheel for zope.interface (setup.py): finished with status 'done'
  Created wheel for zope.interface: filename=zope.interface-5.2.0-cp37-cp37m-linux_aarch64.whl size=195430 sha256=f22cc40da76c4591fd2e38874ba8d4278d809fd6be6ac4d43032c750b5106881
  Stored in directory: /root/.cache/pip/wheels/27/57/14/5b2a6cd8b6869470681f04a8ce56aeb6cba0cd534598730527
Successfully built prometheus-client Twisted zope.interface
Installing collected packages: six, idna, attrs, zope.interface, PyHamcrest, incremental, hyperlink, constantly, Automat, Twisted, prometheus-client
  WARNING: The script automat-visualize is installed in '/install/bin' which is not on PATH.
  Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
  WARNING: The scripts cftp, ckeygen, conch, mailmail, pyhtmlizer, tkconch, trial, twist and twistd are installed in '/install/bin' which is not on PATH.
  Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
Successfully installed Automat-20.2.0 PyHamcrest-2.0.2 Twisted-19.7.0 attrs-20.3.0 constantly-15.1.0 hyperlink-21.0.0 idna-3.1 incremental-17.5.0 prometheus-client-0.4.2 six-1.15.0 zope.interface-5.2.0
INFO[0113] Taking snapshot of full filesystem...
INFO[0120] Saving file install for later use
INFO[0121] Deleting filesystem...
INFO[0123] Retrieving image manifest python:3.7-alpine
INFO[0123] Retrieving image python:3.7-alpine
INFO[0124] Retrieving image manifest python:3.7-alpine
INFO[0124] Retrieving image python:3.7-alpine
INFO[0125] Executing 0 build triggers
INFO[0125] Unpacking rootfs as cmd COPY --from=base /install /usr/local requires it.
INFO[0129] COPY --from=base /install /usr/local
INFO[0130] Taking snapshot of files...
INFO[0133] COPY exporter /exporter
INFO[0133] Taking snapshot of files...
INFO[0133] WORKDIR /exporter
INFO[0133] cmd: workdir
INFO[0133] Changed working directory to /exporter
INFO[0133] No files changed in this command, skipping snapshotting.
INFO[0133] ENTRYPOINT ["python", "/exporter/exporter.py"]
INFO[0133] USER 1001
INFO[0133] cmd: USER

Building Packages

Tekton is not limited to building Docker images; we could use it for all sorts of scripts. Let us take the example of a Debian package, for OpenLDAP LTB. We need a Git resource, from which we will fetch the tree needed to generate our packages:

$ cat <<EOF | kubectl apply -f-
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
  name: openldap-ltb-deb
  namespace: ci
spec:
  params:
  - name: url
    value: https://github.com/ltb-project/openldap-deb
  - name: revision
    value: master
  type: git
EOF

We then need a Task describing how to build our packages from that repository. Once the packages are generated, they must be pushed to a repository: we will use a Nexus server, on which we have previously created a hosted APT repository. With a service account allowed to upload artifacts to that Nexus APT repository, we can create the following Task:

$ cat <<EOF | kubectl apply -f-
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: deb-build-ltb-openldap
  namespace: ci
spec:
  params:
  - default: https://nexus.example.com
    description: Nexus Repository Address
    name: nexusaddress
    type: string
  - default: admin
    description: Nexus Repository Username
    name: nexususer
    type: string
  - default: secret
    description: Nexus Repository Password
    name: nexussecret
    type: string
  - default: apt-ltb
    description: OpenLDAP LTB Nexus Repository Name
    name: nexusrepo
    type: string
  - default: 2.4.56
    description: OpenLDAP Version to Build
    name: ldapversion
    type: string
  resources:
    inputs:
    - name: source
      type: git
  steps:
  - command:
    - /bin/sh
    - -c
    - |
        set -x; \
        export DEBIAN_FRONTEND=noninteractive; \
        MYARCH=`uname -m`; \
        test "$MYARCH" = aarch64 && MYARCH=arm64; \
        apt-get update \
        && apt-get install -y wget tar build-essential autoconf automake autotools-dev debhelper dh-make \
            devscripts fakeroot file gnupg git lintian patch patchutils pbuilder libsodium23 libsodium-dev \
            libltdl7 libltdl-dev libsasl2-2 libsasl2-dev zlib1g zlib1g-dev openssl libssl-dev mime-support \
            mawk libcrack2-dev libwrap0-dev curl \
        && cd debian/paquet-berkeleydb-debian \
        && tar -xvzf db-4.6.21.NC.tar.gz \
        && cp -r db-4.6.21.NC/* berkeleydb-ltb-4.6.21.NC \
        && tar -czf berkeleydb-ltb_4.6.21.NC-4.orig.tar.gz berkeleydb-ltb-4.6.21.NC \
        && cd berkeleydb-ltb-4.6.21.NC/ \
        && dpkg-buildpackage -us -uc \
        && dpkg -i ../berkeleydb-ltb_4.6.21.NC-4-patch4_$MYARCH.deb \
        && cd ../../paquet-openldap-debian \
        && wget ftp://ftp.openldap.org/pub/OpenLDAP/openldap-release/openldap-$(inputs.params.ldapversion).tgz \
        && tar -xvzf openldap-$(inputs.params.ldapversion).tgz \
        && cp -r openldap-$(inputs.params.ldapversion)/* openldap-ltb-$(inputs.params.ldapversion)/ \
        && tar -czf openldap-ltb-$(inputs.params.ldapversion).orig.tar.gz \
            openldap-ltb-$(inputs.params.ldapversion) \
        && cd openldap-ltb-$(inputs.params.ldapversion) \
        && dpkg-buildpackage -us -uc \
        && dpkg -i ../openldap-ltb_$(inputs.params.ldapversion).1_$MYARCH.deb \
        && dpkg -i ../openldap-ltb-explockout_$(inputs.params.ldapversion).1_$MYARCH.deb \
        && dpkg -i ../openldap-ltb-mdb-utils_$(inputs.params.ldapversion).1_$MYARCH.deb \
        && dpkg -i ../openldap-ltb-ppm_$(inputs.params.ldapversion).1_$MYARCH.deb \
        && dpkg -i ../openldap-ltb-check-password_$(inputs.params.ldapversion).1_$MYARCH.deb \
        && dpkg -i ../openldap-ltb-dbg_$(inputs.params.ldapversion).1_$MYARCH.deb \
        && dpkg -i ../openldap-ltb-contrib-overlays_$(inputs.params.ldapversion).1_$MYARCH.deb \
        && cd ../../ \
        && ls -l paquet-* \
        && for f in paquet-berkeleydb-debian/berkeleydb-ltb_4.6.21.NC-4-patch4_$MYARCH.deb \
                paquet-openldap-debian/openldap-ltb_$(inputs.params.ldapversion).1_$MYARCH.deb \
                paquet-openldap-debian/openldap-ltb-explockout_$(inputs.params.ldapversion).1_$MYARCH.deb \
                paquet-openldap-debian/openldap-ltb-mdb-utils_$(inputs.params.ldapversion).1_$MYARCH.deb \
                paquet-openldap-debian/openldap-ltb-ppm_$(inputs.params.ldapversion).1_$MYARCH.deb \
                paquet-openldap-debian/openldap-ltb-check-password_$(inputs.params.ldapversion).1_$MYARCH.deb \
                paquet-openldap-debian/openldap-ltb-dbg_$(inputs.params.ldapversion).1_$MYARCH.deb \
                paquet-openldap-debian/openldap-ltb-contrib-overlays_$(inputs.params.ldapversion).1_$MYARCH.deb; \
        do \
            curl -u "$(inputs.params.nexususer):$(inputs.params.nexussecret)" \
                -k -X POST -H "Content-Type: multipart/form-data" --data-binary "@$f" \
                "$(inputs.params.nexusaddress)/repository/$(inputs.params.nexusrepo)/"; \
        done
    image: docker.io/debian:buster
    name: build
    securityContext:
      privileged: true
    workingDir: /workspace/source
EOF

Let us then declare a Pipeline, binding the Git resource to our Task:

$ cat <<EOF | kubectl apply -f-
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: build-ltb-openldap
  namespace: ci
spec:
  params:
  - default: 2.4.56
    description: OpenLDAP Version to Build
    name: ldapversion
    type: string
  resources:
  - name: app-git
    type: git
  tasks:
  - name: build
    params:
    - name: ldapversion
      value: $(params.ldapversion)
    resources:
      inputs:
      - name: source
        resource: app-git
    taskRef:
      kind: Task
      name: deb-build-ltb-openldap
EOF

And launch a job by creating the following PipelineRun:

$ cat <<EOF | kubectl apply -f-
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: ltb-openldap-1
  namespace: ci
spec:
  pipelineRef:
    name: build-ltb-openldap
  resources:
  - name: app-git
    resourceRef:
      name: openldap-ltb-deb
  serviceAccountName: tkn-ci
  timeout: 2h0m0s
EOF

We can follow the job execution: it builds a series of packages, installs them locally and, if everything went well, pushes the generated Debian archives to our Nexus repository.

$ kubectl logs -n ci -c step-build ltb-openldap-1-build-7npcv-pod-mp28x
+ export DEBIAN_FRONTEND=noninteractive
+ uname -m
+ MYARCH=aarch64
+ test aarch64 = aarch64
+ MYARCH=arm64
+ apt-get update
Get:1 http://security.debian.org/debian-security buster/updates InRelease [65.4 kB]
Get:2 http://deb.debian.org/debian buster InRelease [121 kB]
Get:3 http://deb.debian.org/debian buster-updates InRelease [51.9 kB]
Get:4 http://security.debian.org/debian-security buster/updates/main arm64 Packages [254 kB]
Get:5 http://deb.debian.org/debian buster/main arm64 Packages [7737 kB]
Get:6 http://deb.debian.org/debian buster-updates/main arm64 Packages [7848 B]
Fetched 8238 kB in 4s (2346 kB/s)
Reading package lists...
+ apt-get install -y wget tar
...
+ curl -u XXX:YYY -k -X POST -H Content-Type: multipart/form-data --data-binary @paquet-openldap-debian/openldap-ltb-dbg_2.4.56.1_arm64.deb https://nexus.example.com/repository/apt-ltb/
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 3978k    0     0  100 3978k      0   791k  0:00:05  0:00:05 --:--:--     0
+ curl -u XXX:YYY -k -X POST -H Content-Type: multipart/form-data --data-binary @paquet-openldap-debian/openldap-ltb-contrib-overlays_2.4.56.1_arm64.deb https://nexus.example.com/repository/apt-ltb/
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  333k    0     0  100  333k      0  61846  0:00:05  0:00:05 --:--:--     0

And from our Nexus repository, we can confirm that our packages are now available:

Tekton Nexus
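
Rather than the web UI, we could also query the Nexus REST API; for illustration, assuming Nexus 3 and the credentials and repository name passed as Task parameters above:

$ curl -sk -u "admin:secret" \
    "https://nexus.example.com/service/rest/v1/components?repository=apt-ltb"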

In the same way, we could run unit tests, scan our code or images, or deploy our latest builds to run integration tests against them. Automating the various stages of a CI/CD chain becomes child's play. You will find a collection of examples in the Tekton project's Catalog repository on GitHub.
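
For instance, a Task from that catalog can be applied directly to the cluster; a sketch with the git-clone Task (paths and versions evolve, so browse the repository for the current ones):

$ kubectl apply -n ci \
    -f https://raw.githubusercontent.com/tektoncd/catalog/main/task/git-clone/0.2/git-clone.yaml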

Tekton Triggers

So far, we have seen how to run jobs manually, but in the context of a CI/CD chain we now need to handle incoming events.

Tekton Triggers is an optional component that adds new object types to the cluster, in order to receive and process HTTP notifications, for instance to schedule a pipeline run after a change is pushed to a git repository.

We will deploy the latest release from GitHub:

$ kubectl apply -f https://storage.googleapis.com/tekton-releases/triggers/previous/v0.10.2/release.yaml
$ kubectl get pods -n tekton-pipelines
NAME                                           READY   STATUS    RESTARTS   AGE
tekton-pipelines-controller-5cdb46974f-fzfdh   1/1     Running   0          25m
tekton-pipelines-webhook-6479d769ff-wxbxr      1/1     Running   0          25m
tekton-triggers-controller-5994f6c94b-fw2dm    1/1     Running   0          1m
tekton-triggers-webhook-68c7866d8-zdttm        1/1     Running   0          1m

We can now configure a first EventListener.

The EventListener queries the Kubernetes cluster API, so we first need to create a ServiceAccount for it and delegate the necessary privileges:

$ cat <<EOF | kubectl apply -f-
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tkn-triggers-ci
  namespace: ci
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: tekton-triggers
  namespace: ci
rules:
- apiGroups: ["triggers.tekton.dev"]
  resources: ["eventlisteners", "triggerbindings", "triggertemplates", "triggers"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["configmaps", "secrets"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["tekton.dev"]
  resources: ["pipelineruns", "pipelineresources", "taskruns"]
  verbs: ["create"]
- apiGroups: [""]
  resources: ["serviceaccounts"]
  verbs: ["impersonate"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: tekton-triggers
  namespace: ci
subjects:
- kind: ServiceAccount
  name: tkn-triggers-ci
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: tekton-triggers
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: tekton-triggers
rules:
- apiGroups: ["triggers.tekton.dev"]
  resources: ["clustertriggerbindings"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tekton-triggers-ci
subjects:
- kind: ServiceAccount
  name: tkn-triggers-ci
  namespace: ci
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: tekton-triggers
EOF

Our EventListener must process notifications coming from GitHub. We will therefore create TriggerBindings, mapping fields from the JSON payload sent by GitHub to the variables we want to reuse when scheduling our job:

$ cat <<EOF | kubectl apply -f-
---
apiVersion: triggers.tekton.dev/v1alpha1
kind: TriggerBinding
metadata:
  name: github-pr-binding
  namespace: ci
spec:
  params:
  - name: gitrevision
    value: $(body.pull_request.head.sha)
  - name: gitrepositoryname
    value: $(body.repository.name)
  - name: gitrepositoryurl
    value: $(body.repository.clone_url)
---
apiVersion: triggers.tekton.dev/v1alpha1
kind: TriggerBinding
metadata:
  name: github-push-binding
  namespace: ci
spec:
  params:
  - name: gitrevision
    value: $(body.after)
  - name: gitrepositoryname
    value: $(body.repository.name)
  - name: gitrepositoryurl
    value: $(body.repository.clone_url)
EOF

We will then create a TriggerTemplate, describing what to do when a notification is received. In our case, based on the variables extracted by our TriggerBinding, we want to create a PipelineRun whose Git and Image resources are derived from the received notification:

$ cat <<EOF | kubectl apply -f-
apiVersion: triggers.tekton.dev/v1alpha1
kind: TriggerTemplate
metadata:
  name: github-pipelinerun-template
  namespace: ci
spec:
  params:
  - name: gitrevision
  - name: gitrepositoryname
  - name: gitrepositoryurl
  resourcetemplates:
  - apiVersion: tekton.dev/v1beta1
    kind: PipelineRun
    metadata:
      generateName: github-job-
    spec:
      pipelineRef:
        name: docker-build
      resources:
      - name: app-git
        resourceSpec:
          type: git
          params:
          - name: revision
            value: $(tt.params.gitrevision)
          - name: url
            value: $(tt.params.gitrepositoryurl)
      - name: app-image
        resourceSpec:
          type: image
          params:
          - name: url
            value: registry.registry.svc.cluster.local:5000/ci/$(tt.params.gitrepositoryname):$(tt.params.gitrevision)
      serviceAccountName: tkn-ci
      timeout: 1h0m0s
EOF

Finally, we will create a Secret holding a token that we will later configure in our GitHub webhook, an EventListener describing under which conditions to invoke our TriggerBindings, and an Ingress exposing the EventListener:

$ cat <<EOF | kubectl apply -f-
---
apiVersion: v1
kind: Secret
metadata:
  name: github-secret
  namespace: ci
stringData:
  secretToken: changeme
type: Opaque
---
apiVersion: triggers.tekton.dev/v1alpha1
kind: EventListener
metadata:
  name: github-listener
  namespace: ci
spec:
  serviceAccountName: tkn-triggers-ci
  triggers:
  - name: github-pr-listener
    interceptors:
      - github:
          secretRef:
            secretName: github-secret
            secretKey: secretToken
          eventTypes:
          - pull_request
    bindings:
    - ref: github-pr-binding
    template:
      ref: github-pipelinerun-template
  - name: github-push-listener
    interceptors:
      - github:
          secretRef:
            secretName: github-secret
            secretKey: secretToken
          eventTypes:
          - push
    bindings:
    - ref: github-push-binding
    template:
      ref: github-pipelinerun-template
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: github-eventlistener
  namespace: ci
spec:
  rules:
  - host: git-ci-el.router.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: el-github-listener
            port:
              number: 8080
EOF

Creating the EventListener starts a new Pod, in charge of receiving and processing notifications:

# kubectl get pods -n ci
NAME                                               READY   STATUS      RESTARTS   AGE
...
el-github-listener-657bd5b5d8-9rgwx                1/1     Running     0          2m

Confirm, from a browser, that the address https://git-ci-el.router.example.com defined in our Ingress is reachable from outside the cluster.
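
This can also be checked from the command line; any HTTP status code in the response (even an error) confirms that the Ingress routes traffic to the listener, while a connection failure points to an Ingress or DNS issue:

$ curl -sk -o /dev/null -w '%{http_code}\n' https://git-ci-el.router.example.com/

Then head over to GitHub, in the settings of the repository of your choice, Webhooks section: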

github webhook 2

Add a webhook:

Webhook config

The EventListener does not seem to support application/x-www-form-urlencoded payloads; choose the application/json option.

Fill in the Secret field with the secretToken value created earlier.

Check the event types for which we want to receive notifications, pushes and pull requests:

Github webhook

We can now push a commit to that repository to trigger a job.
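
An empty commit is enough; for illustration, assuming a local clone of a repository configured with this webhook, pushing to its default branch:

$ git commit --allow-empty -m "ci: trigger Tekton"
$ git push origin master

From the EventListener logs, confirm that the message was received, and check that a new Pod has started. We can then follow the job's execution logs: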

$ kubectl logs -n ci el-github-listener-657bd5b5d8-9rgwx
{"level":"error","ts":"2021-01-12T08:48:49.089Z","logger":"eventlistener","caller":"sink/sink.go:209","msg":"event type ping is not allowed","knative.dev/controller":"eventlistener","/triggers-eventid":"85m9z","/trigger":"github-push-listener","stacktrace":"github.com/tektoncd/triggers/pkg/sink.Sink.processTrigger\n\tgithub.com/tektoncd/triggers/pkg/sink/sink.go:209\ngithub.com/tektoncd/triggers/pkg/sink.Sink.HandleEvent.func1\n\tgithub.com/tektoncd/triggers/pkg/sink/sink.go:129"}
{"level":"error","ts":"2021-01-12T08:48:49.093Z","logger":"eventlistener","caller":"sink/sink.go:209","msg":"event type ping is not allowed","knative.dev/controller":"eventlistener","/triggers-eventid":"85m9z","/trigger":"github-pr-listener","stacktrace":"github.com/tektoncd/triggers/pkg/sink.Sink.processTrigger\n\tgithub.com/tektoncd/triggers/pkg/sink/sink.go:209\ngithub.com/tektoncd/triggers/pkg/sink.Sink.HandleEvent.func1\n\tgithub.com/tektoncd/triggers/pkg/sink/sink.go:129"}
{"level":"info","ts":"2021-01-12T09:08:45.303Z","logger":"eventlistener","caller":"sink/sink.go:238","msg":"ResolvedParams : [{Name:gitrevision Value:17bb6ac63b8d504dc9d635cd2b6be396bc9964b3} {Name:gitrepositoryname Value:docker-registry-exporter} {Name:gitrepositoryurl Value:https://github.com/faust64/docker-registry-exporter.git}]","knative.dev/controller":"eventlistener","/triggers-eventid":"25m7f","/trigger":"github-push-listener"}
{"level":"info","ts":"2021-01-12T09:08:45.307Z","logger":"eventlistener","caller":"resources/create.go:95","msg":"Generating resource: kind: &APIResource{Name:pipelineruns,Namespaced:true,Kind:PipelineRun,Verbs:[delete deletecollection get list patch create update watch],ShortNames:[pr prs],SingularName:pipelinerun,Categories:[tekton tekton-pipelines],Group:tekton.dev,Version:v1beta1,StorageVersionHash:RcAKAgPYYoo=,}, name: github-job-","knative.dev/controller":"eventlistener"}
{"level":"info","ts":"2021-01-12T09:08:45.307Z","logger":"eventlistener","caller":"resources/create.go:103","msg":"For event ID \"25m7f\" creating resource tekton.dev/v1beta1, Resource=pipelineruns","knative.dev/controller":"eventlistener"}
$ kubectl get pods -n ci
NAME                                               READY   STATUS      RESTARTS   AGE
...
el-github-listener-657bd5b5d8-9rgwx                1/1     Running     0          12m
github-job-92pjk-build-b66g6-pod-2nvtk             5/5     Running     0          15s
$ kubectl logs -n ci -c step-build -f github-job-92pjk-build-b66g6-pod-2nvtk
E0112 09:09:02.750462      13 aws_credentials.go:77] while getting AWS credentials NoCredentialProviders: no valid providers in chain. Deprecated.
For verbose messaging see aws.Config.CredentialsChainVerboseErrors
INFO[0007] Resolved base name python:3.7-alpine to base
INFO[0007] Retrieving image manifest python:3.7-alpine
...
INFO[0133] WORKDIR /exporter
INFO[0133] cmd: workdir
INFO[0133] Changed working directory to /exporter
INFO[0133] No files changed in this command, skipping snapshotting.
INFO[0133] ENTRYPOINT ["python", "/exporter/exporter.py"]
INFO[0133] USER 1001
INFO[0133] cmd: USER

Having received the notification from GitHub, the EventListener did create a PipelineRun:

root@pandore:~/wip-triggers# kubectl get pipelinerun -n ci
NAME                         SUCCEEDED   REASON      STARTTIME   COMPLETIONTIME
docker-registry-exporter-1   True        Succeeded   2d21h       2d21h
github-job-92pjk             True        Succeeded   16m         13m
ltb-openldap-1               True        Succeeded   13h         12h
root@pandore:~/wip-triggers# kubectl get pipelinerun -n ci -o yaml github-job-92pjk
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
...
spec:
  pipelineRef:
    name: docker-build
  resources:
  - name: app-git
    resourceSpec:
      params:
      - name: revision
        value: 17bb6ac63b8d504dc9d635cd2b6be396bc9964b3
      - name: url
        value: https://github.com/faust64/docker-registry-exporter.git
      type: git
  - name: app-image
    resourceSpec:
      params:
      - name: url
        value: registry.registry.svc.cluster.local:5000/ci/docker-registry-exporter:17bb6ac63b8d504dc9d635cd2b6be396bc9964b3
      type: image
  serviceAccountName: tkn-ci
  timeout: 1h0m0s

As expected, the job used the commit hash received in the notification to clone the source repository and tag the resulting image.

We could later extend our Pipeline: continuous integration, publishing a release when a commit lands on the master branch, and so on.
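
For instance, with the Triggers API used above, a CEL interceptor could restrict the push trigger to the master branch; a sketch of the corresponding EventListener trigger, the rest being unchanged:

  - name: github-push-listener
    interceptors:
      - github:
          secretRef:
            secretName: github-secret
            secretKey: secretToken
          eventTypes:
          - push
      - cel:
          filter: body.ref == 'refs/heads/master'
    bindings:
    - ref: github-push-binding
    template:
      ref: github-pipelinerun-template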

Tekton CLI

Note that Tekton provides its own client, tkn, which can be downloaded from its GitHub repository.
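
A possible installation on an arm64 node, as a sketch only: the version and asset name below are assumptions to check against the releases page:

$ TKN_VERSION=0.15.0
$ curl -fsSL -o tkn.tar.gz \
    "https://github.com/tektoncd/cli/releases/download/v${TKN_VERSION}/tkn_${TKN_VERSION}_Linux_arm64.tar.gz"
$ sudo tar -xzf tkn.tar.gz -C /usr/local/bin tkn
$ tkn version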

It has the advantage of simplifying interaction with your jobs. Nothing revolutionary, and its use is in no way mandatory: kubectl can do everything, just with fewer conveniences.

For example, it can start Tasks or Pipelines without having to write a TaskRun or PipelineRun by hand. tkn can also follow a job's logs, aggregating the output of all the containers involved, whereas kubectl only follows one container at a time.
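
As an illustration, reusing the resources and ServiceAccount defined earlier:

$ tkn pipeline start docker-build -n ci -s tkn-ci \
    -r app-git=registry-exporter-git \
    -r app-image=registry-exporter-image \
    --showlog
$ tkn pipelinerun list -n ci
$ tkn pipelinerun logs -f -n ci registry-exporter-1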

Tekton Dashboard

So far, we have worked exclusively from the command line, but the Tekton ecosystem of course also offers a Dashboard.

Again, we can install the latest release available on GitHub:

$ kubectl apply -f https://github.com/tektoncd/dashboard/releases/download/v0.12.0/tekton-dashboard-release.yaml
$ kubectl get pods -n tekton-pipelines
NAME                                           READY   STATUS    RESTARTS   AGE
tekton-dashboard-56c78f485b-fw4tn              1/1     Running   0          2m
tekton-pipelines-controller-5cdb46974f-fzfdh   1/1     Running   0          50m
tekton-pipelines-webhook-6479d769ff-wxbxr      1/1     Running   0          50m
tekton-triggers-controller-5994f6c94b-fw2dm    1/1     Running   0          26m
tekton-triggers-webhook-68c7866d8-zdttm        1/1     Running   0          26m

With the tekton-dashboard Pod started, we still need to create an Ingress exposing the Dashboard outside the Kubernetes SDN:

$ cat <<EOF | kubectl apply -f-
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tekton-dashboard
  namespace: tekton-pipelines
spec:
  rules:
  - host: tekton.router.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: tekton-dashboard
            port:
              number: 9097
EOF

We can now connect to it from a browser. There we find our PipelineRuns:

Pipeline run

The EventListeners:

Event listener

We can consult a job's logs:

Log job

Import Tasks or Pipelines from an external source:

Import Tasks or Pipelines

Or launch a Pipeline without going through the command line:

image create pipeline

Overall, this Dashboard does everything one can expect from it, but authenticating the users who connect to it remains our responsibility, and it is unlikely that we could easily restrict what a user can do once connected. We could also regret a few long-standing issues with Chrome, whereas Firefox and its derivatives work perfectly.
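
As a stopgap for authentication, the Ingress itself can require credentials; a minimal sketch, assuming the NGINX ingress controller (other controllers offer equivalent mechanisms, and this does nothing to restrict what a user may do once logged in):

$ htpasswd -c auth admin
$ kubectl create secret generic tekton-dashboard-auth -n tekton-pipelines --from-file=auth
$ kubectl annotate ingress tekton-dashboard -n tekton-pipelines \
    nginx.ingress.kubernetes.io/auth-type=basic \
    nginx.ingress.kubernetes.io/auth-secret=tekton-dashboard-auth \
    nginx.ingress.kubernetes.io/auth-realm="Tekton Dashboard"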

Conclusion

The Tekton suite simplifies implementing a CI/CD chain. While OpenShift used to hold a clear advantage in this area with its BuildConfig objects, Tekton offers a Kubernetes-native solution that is just as capable.

The project is still in its early years: the roadmap remains substantial and there is no shortage of features left to implement. Since December 2020, the latest releases of Tekton Pipelines, Tekton Triggers and Tekton Dashboard finally make it possible to deploy Tekton on ARM. The release cycle is fairly sustained, and the project is slowly maturing.

The essentials are there and reliable, despite being relatively complex to set up: OpenShift users will notice that running a job involves many different resources, whereas a BuildConfig usually remains much simpler. This richer architecture already allows for greater modularity.