FaaS and GitLab

I have been messing around with GitLab and FaaS for a while now. The aim is to get this working well enough to actually try things like bug bounties, and to have the ability to take on more workloads where possible.

So what is FaaS?

FaaS stands for Functions as a Service, an approach made popular by AWS Lambda. It is also sometimes called serverless, which isn't technically true (there are still servers underneath); essentially you have micro applications running inside Docker containers, each usually doing one small job and 'isolated' from the others.

Getting started

My first step was to install faasd and faas-cli. Both are actually quite straightforward. I followed the instructions, which can be found here: https://willschenk.com/howto/2021/installing_faasd/

But basically, it boils down to cloning the repo, changing into the hack directory inside it, and running ./install.sh

Once complete, test your faasd install to check everything is working.
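For reference, the steps described above look roughly like this (a sketch, assuming the upstream faasd repository is the one being cloned, and that faas-cli ends up on your PATH):

```shell
# Clone faasd and run the installer from the hack directory.
git clone https://github.com/openfaas/faasd
cd faasd/hack
sudo ./install.sh

# Sanity check: the gateway should answer and list deployed functions
# (the default local gateway address is assumed here).
faas-cli list --gateway http://127.0.0.1:8080
```

If the last command returns without an error (an empty function list at this point is fine), the install is working.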

Creating a test project

Once the server is all up and running, the next thing to do is run through a tutorial on writing your own module. I chose the hello-python one. The tutorial can be found here.
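The heart of the hello-python template is a single handler function. This is a minimal sketch rather than the tutorial's exact code (the greeting string is my own illustration):

```python
# handler.py — the OpenFaaS Python template calls handle() with the raw
# request body as a string and uses the return value as the response body.
def handle(req):
    """Respond with a greeting that echoes the request back."""
    return "Hello! You said: " + req


if __name__ == "__main__":
    # Local smoke test without a gateway.
    print(handle("world"))
```

The template's generated scaffolding takes care of the HTTP plumbing; you only edit the handler.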

Now import to Gitlab

This stage is as easy as creating the repo in GitLab and setting it up for the first time. GitLab creates a readme page with instructions for this, so I won't repeat them. Once complete, check your code is visible in GitLab. You will also need a GitLab runner set up. Once this is done, we have all the prerequisites in place to start work on the pipeline.
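Registering a runner looks roughly like this; the URL and token are placeholders, and the flags shown are the standard gitlab-runner register options, not my exact command:

```shell
# Hypothetical runner registration sketch — substitute your own
# instance URL and registration token.
sudo gitlab-runner register \
  --non-interactive \
  --url "https://gitlab.example.com/" \
  --registration-token "REDACTED" \
  --executor "docker" \
  --docker-image "docker:stable" \
  --docker-privileged
```

The --docker-privileged flag matters here, as the note below explains.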

Nb. The runner should be able to run the docker-in-docker image, and may require a few minor config tweaks to allow privileged containers. For reference, part of my runner's config.toml looked like this:

  executor = "docker"
    MaxUploadedArchiveSize = 0
    tls_verify = false
    image = "docker:stable"
    privileged = true
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    volumes = ["/var/run/docker.sock:/var/run/docker.sock", "/cache"]
    shm_size = 0

Setting up the CI/CD pipeline

So, now all we need to do is set up the pipeline and a few variables. The idea here is that we will build our module (in Docker) and push the resulting image to the project's container registry. After much trial and error, I found that I had to set my project to public, then switch almost all of the settings back to private, to get the repository to actually work. The settings that remained public were the ones connected to the repository.

In the end, my .gitlab-ci.yml file looked like this:

  #- template: Security/Container-Scanning.gitlab-ci.yml

  stages:
    - test
    - build

  variables:
    faas_key: ${faas_key}
    OPENFAAS_URL: "https://faasd.home.php-systems.com"

  cache:
    key: "${CI_COMMIT_REF_SLUG}"
    paths:
      - "./faas-cli"
      - "./template"

  build: # hypothetical job name
    stage: build
    image: docker:dind
    services:
      - name: docker:dind
        command: ["--tls=false"]
    variables:
      DOCKER_DRIVER: overlay
    before_script:
      - apk add --no-cache git
    script:
      - if [ -f "./faas-cli" ] ; then cp ./faas-cli /usr/local/bin/faas-cli || true ; fi
      - if [ ! -f "/usr/local/bin/faas-cli" ] ; then apk add --no-cache curl git && curl -sSL cli.openfaas.com | sh && chmod +x /usr/local/bin/faas-cli && /usr/local/bin/faas-cli template pull && cp /usr/local/bin/faas-cli ./faas-cli ; fi
      - echo ${faas_key} | /usr/local/bin/faas-cli login --username admin --password-stdin
      - /usr/local/bin/faas-cli registry-login --username $CI_REGISTRY_USER --password $CI_REGISTRY_PASSWORD --server $CI_REGISTRY -f *.yml
      - "/usr/local/bin/faas-cli build --tag=sha --parallel=2 -f *.yml"
      - /usr/local/bin/faas-cli push --tag=sha -f *.yml
      - /usr/local/bin/faas-cli deploy --tag=sha -f *.yml --env write_timeout=1s
      #- /usr/local/bin/faas-cli deploy --tag=sha --send-registry-auth


Revisiting this script later, I added loops around the faas-cli commands to step through each of the yml files in turn. This allowed separate containers to be built from the same code base.
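The loop idea can be sketched like this. It's a dry-run illustration rather than my actual script: it echoes the per-file commands instead of invoking faas-cli, and the stack file names are made up:

```shell
# Scratch directory with two pretend OpenFaaS stack files.
demo_dir=$(mktemp -d)
cd "$demo_dir"
touch fn-alpha.yml fn-beta.yml

# Step through each yml file, running the build/push/deploy sequence
# once per file so every stack gets its own container image.
for f in *.yml; do
  for cmd in build push deploy; do
    echo "faas-cli ${cmd} --tag=sha -f ${f}"
  done
done
```

In the real pipeline the echo lines become the faas-cli commands themselves, replacing the single `-f *.yml` invocations above.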
