Airflow on Kubernetes
This blog walks you through the steps to deploy Airflow on Kubernetes. If you want to jump to the code directly, here's the GitHub repo.
What is Airflow
Airflow is a platform created by the community to programmatically author, schedule, and monitor workflows.
Airflow lets you define workflows in the form of a directed acyclic graph (DAG) defined in a Python file. The most common use case of Airflow is data/machine learning engineers building data pipelines that perform transformations.
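For instance, a minimal DAG with two tasks might look like the sketch below (the DAG id, task ids, and commands are made up for illustration):

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash_operator import BashOperator  # Airflow 1.10.x import path

default_args = {
    "owner": "airflow",
    "start_date": datetime(2020, 1, 1),
}

# Two tasks wired into a tiny pipeline: extract runs first, then transform.
with DAG("example_pipeline", default_args=default_args,
         schedule_interval="@daily", catchup=False) as dag:
    extract = BashOperator(task_id="extract", bash_command="echo 'extracting data'")
    transform = BashOperator(task_id="transform", bash_command="echo 'transforming data'")
    extract >> transform  # defines the edge of the DAG
```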
Airflow with Kubernetes
There are a bunch of advantages to running Airflow on Kubernetes.
Airflow runs one worker pod per Airflow task, enabling Kubernetes to spin up and destroy pods depending on the load.
Kubernetes spins up worker pods only when there is a new job, whereas alternatives such as Celery always keep workers running to pick up tasks as they arrive.
To follow this setup, you will need:
- A Docker image registry to push your Docker images to
- A Kubernetes cluster on GCP/AWS
Airflow has 3 major components:
- Webserver - serves the fancy UI with the list of DAGs, logs, and tasks
- Scheduler - runs in the background, schedules tasks, and manages them
- Workers/Executors - the processes that execute the tasks. Worker processes are spun up by the scheduler and tracked through to completion
Apart from these, there are:
- DAG folders
- Log folders
There are different kinds of Executors one can use with Airflow.
- LocalExecutor - Used mostly for playing around on a local machine
- CeleryExecutor - Uses Celery workers to run the tasks
- KubernetesExecutor - Uses Kubernetes pods to run the worker tasks
The Kubernetes Executor
When a task is scheduled with the Airflow Kubernetes executor, the scheduler spins up a pod and runs the task in it. On completion of the task, the pod is killed. This ensures maximum utilization of resources, unlike Celery, which at any point must have a minimum number of workers running.
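The executor is selected in airflow.cfg, which this setup ships as a Kubernetes ConfigMap (described later in the post). A rough sketch of the relevant Airflow 1.10 settings, with placeholder image and volume claim names, looks like this:

```ini
[core]
executor = KubernetesExecutor

[kubernetes]
# image used for the worker pods -- typically the same image we build below
worker_container_repository = <image-repo-url>
worker_container_tag = <tag>
namespace = airflow-example
delete_worker_pods = True
# claims for the DAG and log volumes described later in this post
dags_volume_claim = airflow-dags
logs_volume_claim = airflow-logs
in_cluster = True
```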
Building the Docker Image
The core part of building the Docker image is doing a pip install.
```dockerfile
RUN pip install --upgrade pip
RUN pip install apache-airflow==1.10.10
RUN pip install 'apache-airflow[kubernetes]'
```
We also need a script that runs the webserver or the scheduler based on the argument passed to the container. We have a file called bootstrap.sh to do the same.
if [ "$1" = "webserver" ] then exec airflow webserver fi if [ "$1" = "scheduler" ] then exec airflow scheduler fi
Let's add it to the Dockerfile too.
```dockerfile
COPY bootstrap.sh /bootstrap.sh
RUN chmod +x /bootstrap.sh
ENTRYPOINT ["/bootstrap.sh"]
```
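Putting those pieces together, a minimal sketch of the full Dockerfile could look like this (the base image and AIRFLOW_HOME path are assumptions; check the repo for the exact file):

```dockerfile
# Assumed base image; the repo may use a different one
FROM python:3.7-slim

# Where Airflow keeps its config, DAGs, and logs inside the container (assumed path)
ENV AIRFLOW_HOME=/usr/local/airflow

RUN pip install --upgrade pip \
 && pip install apache-airflow==1.10.10 \
 && pip install 'apache-airflow[kubernetes]'

COPY bootstrap.sh /bootstrap.sh
RUN chmod +x /bootstrap.sh

ENTRYPOINT ["/bootstrap.sh"]
```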
Let's build and push the image
```bash
docker build -t <image-repo-url:tag> .
docker push <image-repo-url:tag>
```
This section explains the various parts of the Kubernetes setup:
- A Kubernetes Deployment that runs a pod containing both the webserver and scheduler containers
```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: airflow
  namespace: airflow-example
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: airflow
    spec:
      serviceAccountName: airflow
      containers:
        - name: webserver
          ...
        - name: scheduler
          ...
      volumes:
        ...
...
```
- A service whose external IP is mapped to Airflow's webserver
```yaml
apiVersion: v1
kind: Service
metadata:
  name: airflow
spec:
  type: LoadBalancer
  ports:
    - port: 8080
  selector:
    name: airflow
```
- A ServiceAccount with a Role that can create and delete pods. These provide the Airflow scheduler with the permissions it needs to spin up and tear down the worker pods.
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: airflow
  namespace: airflow-example
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: airflow-example
  name: airflow
rules:
  - apiGroups: [""] # "" indicates the core API group
    resources: ["pods"]
    verbs: ["get", "list", "watch", "create", "update", "delete"]
  - apiGroups: ["batch", "extensions"]
    resources: ["jobs"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
```
- Two persistent volumes for storing DAGs and logs
```yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: airflow-dags
spec:
  accessModes:
    - ReadOnlyMany
  capacity:
    storage: 2Gi
  hostPath:
    path: /airflow-dags/
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: airflow-dags
spec:
  accessModes:
    - ReadOnlyMany
  resources:
    requests:
      storage: 2Gi
```
- An Airflow config file (airflow.cfg) is created as a Kubernetes ConfigMap and attached to the pod. Check out the templates in the repo.
- The Postgres configuration is handled via a separate deployment
- Secrets like the Postgres password are created using Kubernetes Secrets (a minimal sketch follows this list)
- If you want to add additional environment variables, use Kubernetes ConfigMaps or Secrets
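For illustration, a Secret holding the Postgres password could be defined roughly like this (the secret name and key are placeholders, not necessarily what the repo uses):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: airflow-secrets
  namespace: airflow-example
type: Opaque
stringData:
  postgres-password: change-me   # consumed by the deployments via env/secretKeyRef
```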
You can deploy the Airflow pods in two modes:
- Use a persistent volume to store DAGs
- Use git to pull DAGs from a repository (a config sketch for this option follows this list)
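If you go with the git option, Airflow 1.10's [kubernetes] section has git-sync settings roughly along these lines (the repository URL, branch, and subpath below are placeholders):

```ini
[kubernetes]
git_repo = https://github.com/<your-org>/<your-dags-repo>.git
git_branch = master
git_subpath = dags
```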
To set up the pods, we need to run a deploy.sh script that does the following:
- Converts the templatized config under templates into Kube config files
- Deletes existing pods and deployments, if any, in the namespace
- Creates new pods, deployments, and other Kube resources
```bash
export IMAGE=<IMAGE REPOSITORY URL>
export TAG=<IMAGE_TAG>
cd airflow-kube-setup/scripts/kube
./deploy.sh -d persistent_mode
```
Testing the Setup
By default, this setup copies all the example DAGs into the DAGs folder; we can just run one of them and see if everything is working fine.
- Get the Airflow URL by running `kubectl get services`
- Log into Airflow using airflow. You can change this value in the deployment config.
- Pick one of the DAG files listed
- On your terminal, run `kubectl get pods --watch` to notice when worker pods are created
- Click on Trigger Dag to trigger one of the jobs
- On the graph view, you can see the tasks running, and on your terminal, new pods are created and shut down as they complete the tasks.
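If you prefer the command line, you can also trigger one of the example DAGs from inside the webserver container (the DAG id below is one of the stock Airflow examples; the pod name is a placeholder):

```bash
# exec into the webserver container and trigger a DAG by id
kubectl exec -it <POD_NAME> -n airflow-example --container webserver -- \
  airflow trigger_dag example_bash_operator
```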
Maintenance and modification
Once it is deployed, you don't have to run this script every time. You can use basic kubectl commands to delete or restart pods.
```bash
kubectl get pods --watch
kubectl logs <POD_NAME> <CONTAINER_NAME>
kubectl exec -it <POD_NAME> --container webserver -- /bin/bash
```
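For instance, a quick way to restart a component is to delete its pod and let the Deployment recreate it; and if your DAGs volume is writable from the pod, you can push an updated DAG file straight in with kubectl cp (the in-container DAGs path here is an assumption, adjust it to your image):

```bash
# delete the pod; the Deployment recreates it with the same config
kubectl delete pod <POD_NAME> -n airflow-example

# copy an updated DAG into the scheduler container (assumed DAGs path)
kubectl cp ./my_dag.py airflow-example/<POD_NAME>:/usr/local/airflow/dags/ -c scheduler
```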
Got a Question?