I want to execute Celery tasks sequentially (serially, one by one), unique by (task_id, args).
For example, we have a Dockerfile and a docker-compose.yml like these:
FROM python:3.8
WORKDIR /usr/src/app
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
RUN pip install celery Redis flower
COPY . .
version: '3.3'

services:

  handler:
    build:
      context: .
      network: host
    ports:
      - 5000:5000
    command: python main.py
    volumes:
      - .:/usr/src/app
    environment:
      - CELERY_BROKER_URL=redis://redis:6379/0
      - CELERY_RESULT_BACKEND=redis://redis:6379/0
    depends_on:
      - redis

  worker:
    build:
      context: .
      network: host
    command: celery --app main.app worker --uid=nobody --gid=nogroup --concurrency=1
    volumes:
      - .:/usr/src/app
    environment:
      - CELERY_BROKER_URL=redis://redis:6379/0
      - CELERY_RESULT_BACKEND=redis://redis:6379/0
    depends_on:
      - handler
      - redis

  redis:
    image: redis:6-alpine
and we have Python code like this:
import os
import time
from celery import Celery
app = Celery(__name__)
app.conf.broker_url = os.environ.get(
"CELERY_BROKER_URL", "redis://localhost:6379"
)
app.conf.result_backend = os.environ.get(
"CELERY_RESULT_BACKEND", "redis://localhost:6379"
)
@app.task()
def add(x):
    time.sleep(10)
    return x + 10

ret = add.apply_async(args=(1,), task_id="foo")  # no.1
ret = add.apply_async(args=(1,), task_id="foo")  # no.2: it should be queued until no.1 finishes
ret = add.apply_async(args=(2,), task_id="foo")  # no.3: it should be executed asap because args differs from the above
ret = add.apply_async(args=(2,), task_id="bar")  # no.4: it should be executed asap because task_id differs from each of them
Here is a task that returns x + 10 after sleeping 10 seconds.
I want to execute the tasks sequentially, unique by the pair (task_id, args).
Is it possible? Any opinions are welcome.
Also, I don't mind whether or not I use Celery, so if there is an alternative to Celery, I'd love to hear the idea.
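One idea I had is to wrap the task body in a distributed lock keyed on (task_id, args), so that calls sharing the same key wait for each other while differently keyed calls run freely. A rough sketch of that idea (the lock key format and the timeout values are just my guesses; it reuses the redis package already installed in the Dockerfile):

import json
import os
import time

import redis
from celery import Celery

app = Celery(__name__)
app.conf.broker_url = os.environ.get("CELERY_BROKER_URL", "redis://localhost:6379")
app.conf.result_backend = os.environ.get("CELERY_RESULT_BACKEND", "redis://localhost:6379")

# Separate Redis connection used only for locking.
locker = redis.Redis.from_url(app.conf.broker_url)

@app.task(bind=True)
def add(self, x):
    # One lock per (task_id, args): calls that share a key run one by one,
    # calls with a different key can start immediately on another worker process.
    key = "serial-lock:" + json.dumps([self.request.id, x])
    with locker.lock(key, timeout=60, blocking_timeout=120):  # timeouts are placeholders
        time.sleep(10)
        return x + 10

With --concurrency=1 as in the compose file above, the single worker process would simply block on the lock, so this only helps once there are enough worker processes to pick up tasks with other keys in the meantime. I also saw third-party packages like celery-singleton, but those seem aimed at deduplicating identical tasks rather than queuing them, so I'm not sure they fit.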
My company uses self-managed, auto-scaling Docker runners on AWS, via Docker Machine. This configuration is documented here.
We have a single runner/runner-manager EC2 instance whose config.toml contains several different runner configurations, each registered with different tags, so that different groups in our GitLab org get a dedicated runner by use of runner tags, all from a single runner manager that spins up the appropriate executor for the corresponding tag in the job definition.
The runner for my group has been working flawlessly for months. Today I created a job using the parallel:matrix: keyword:
Build Images:
  image: myimage
  stage: build
  script:
    - docker build -f $DOCKERFILE -t $IMAGE_TAG .
    - docker push $IMAGE_TAG
  parallel:
    matrix:
      - DOCKERFILE: $CI_PROJECT_DIR/Dockerfile
        IMAGE_TAG: myrepo/myimage:standard
      - DOCKERFILE: $CI_PROJECT_DIR/super.Dockerfile
        IMAGE_TAG: myrepo/myimage:super
  rules:
    - when: always
When I push a commit, neither this job nor any of the others that should run get triggered. There is no error message or anything, and the CI/CD -> Jobs page does not show any jobs either.
This is the config.toml used on the runner manager. The runner I am attempting to run this job with is the first runner, "my-runner":
concurrent = 100
check_interval = 0
[session_server]
session_timeout = 1800
[[runners]]
name = "my-runner"
limit = 6
url = "https://gitlab.com"
token = "XYZABC"
executor = "docker+machine"
[runners.custom_build_dir]
[runners.cache]
Type = "s3"
Path = "cache"
Shared = true
[runners.cache.s3]
ServerAddress = "s3.amazonaws.com"
BucketName = "mybucket"
BucketLocation = "us-east-1"
[runners.cache.gcs]
[runners.cache.azure]
[runners.docker]
tls_verify = false
image = "alpine:latest"
privileged = true
disable_entrypoint_overwrite = false
oom_kill_disable = false
disable_cache = false
volumes = ["/var/run/docker.sock:/var/run/docker.sock", "/cache"]
shm_size = 0
[runners.machine]
IdleCount = 0
IdleTime = 600
MaxBuilds = 10
MachineDriver = "amazonec2"
MachineName = "gitlab-docker-machine-%s"
MachineOptions = ["amazonec2-instance-type=t3.medium", "amazonec2-vpc-id=vpc-xxxxxxxx", "amazonec2-security-group=my-security-group", "amazonec2-iam-instance-profile=xxxxxx", "amazonec2-root-size=32", "amazonec2-ami=ami-218k65t87w8b6posq", "amazonec2-subnet-id=subnet-xxxxxxxxx", "amazonec2-zone=a"]
[[runners.machine.autoscaling]]
Periods = ["* * 13-23 * * mon-fri *"]
Timezone = "UTC"
IdleCount = 1
IdleTime = 600
[[runners.machine.autoscaling]]
Periods = ["* * 2-11 * * * *"]
Timezone = "UTC"
IdleCount = 0
IdleTime = 300
[[runners]]
name = "other-runner"
limit = 6
url = "https://gitlab.com"
token = "LMNOP"
executor = "docker+machine"
...
...
...
There are several more runners defined in this config, but they are all very similar. Each is registered with different tags.
My Question: In the GitLab CI docs it says:
Multiple runners must exist, or a single runner must be configured to run multiple jobs concurrently
and to me it seems like multiple runners do exist, since the runner I am using has a limit of 6. Do the executors actually need to be spun up and sitting idle for this to work? Is there any way I can get these parallel jobs to run without increasing my runner idle count?
Edit: Some additional information
This is just one job of about a dozen in this file (load-tests.yml).
My gitlab-ci.yml file imports jobs from about 10 other files via:
include:
  - local: .gitlab/load-test.yml
The pipeline never gets created. If I comment out this job, then the pipeline runs, including the other jobs in this file.
I can provide the entire file verbatim, but everything works fine if this job is not included. I'm fairly experienced with GitLab CI, so I'm sure the issue lies with this job and/or the runner config when using these keywords.
Tags and other keys are set in the defaults in the .gitlab-ci.yml file. None of them are of significance: things like default variables, default before_script, cache, etc.
Edit: We are using GitLab SaaS (Premium, I believe, but not sure), with the runner manager running GitLab Runner v14.1.0.
I want each task to run one at a time, after the previous one finishes. I have:
deploy_1:
  script:
    - scp -r $CI_PROJECT_DIR/script.sh ubuntu@domain1.tld.pl:/root/script.sh
  stage: deploy
  only:
    - master

deploy_2:
  script:
    - scp -r $CI_PROJECT_DIR/script.sh ubuntu@domain2.tld.pl:/root/script.sh
  stage: deploy
  only:
    - master
and more... and there are two tasks running at the same time. In the runner config:
concurrent = 1
check_interval = 0
[session_server]
session_timeout = 1800
[[runners]]
name = "myrunner"
url = "https://XXXXXX/"
token = "XXXXX"
limit = 1
request_concurrency = 1
Any ideas?
I have to run them one by one because I get timeouts.
You could add
needs: ["deploy_1"]
to your deploy_2 job. It will chain deploy_2 to deploy_1.
I have a Python script to execute ansible-playbook programmatically; Python calls the Ansible API, but the play is not getting executed. I believe it is because start_at_task is set to None.
What should the value of start_at_task be? Could somebody help me?
Ansible Version: 2.9.9
Python Version: 3.6.8
This is my run_playbook method:
def run_playbook(play_book, extra_vars, servers, inventory_path, tags='all'):
    base_playbook_path = os.environ.get('PLAYBOOK_PATH',
                                        '/hom/playbooks/')
    playbook_path = base_playbook_path + play_book
    context.CLIARGS = ImmutableDict(tags=tags, connection='paramiko', remote_user='xyz', listtags=False, listtasks=False,
                                    listhosts=False, syntax=False, module_path=None, forks=100,
                                    private_key_file='/var/lib/jenkins/.ssh/xyz.pem', ssh_common_args=None, ssh_extra_args=None,
                                    sftp_extra_args=None, scp_extra_args=None, become=None, become_method=None,
                                    become_user=None, verbosity=True, check=False, start_at_task=None)
    loader = DataLoader()
    loader.load_from_file(base_playbook_path + '.vault_pass.txt')
    inventory = InventoryManager(loader=loader, sources=inventory_path)
    inventory.subset(servers)
    variable_manager = VariableManager(loader=loader, inventory=inventory)
    variable_manager._extra_vars = extra_vars
    passwords = {}
    playbook = PlaybookExecutor(playbooks=[playbook_path],
                                inventory=inventory,
                                variable_manager=variable_manager,
                                loader=loader, passwords=passwords)
    result = playbook.run()
    return result
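For reference, this is roughly how I call run_playbook (the playbook name, inventory path, extra vars and host list below are placeholders, not my real values):

# Hypothetical call; every value below is a placeholder.
rc = run_playbook(
    play_book='kernel_version.yml',
    extra_vars={'some_var': 'some_value'},
    servers='vm1,vm2',
    inventory_path='/path/to/inventory/hosts',
)
print('PlaybookExecutor.run() returned:', rc)  # 0 should mean the play ran successfully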
and this is a simple playbook that prints the kernel version:
---
- name: Get Kernel Versions
  gather_facts: no
  hosts: all
  become: yes
  become_method: sudo
  tasks:
    - name: Fetch Kernel Version
      shell: cat /etc/redhat-release
      register: os_release
    - debug:
        msg: "{{ os_release.stdout }}"
Output:
PLAY [Get Kernel Versions] ****************************************************************************************************************************
PLAY RECAP ********************************************************************************************************************************************
0
I would like to use the Ansible 2.9.9 Python API to get a config file from the servers in my hosts file and parse it to JSON format.
I don't know how to call an existing Ansible task using the Python API.
Going by the Ansible API documentation, how do I integrate an Ansible task with the sample code below?
Sample.py
#!/usr/bin/env python
import json
import shutil
from ansible.module_utils.common.collections import ImmutableDict
from ansible.parsing.dataloader import DataLoader
from ansible.vars.manager import VariableManager
from ansible.inventory.manager import InventoryManager
from ansible.playbook.play import Play
from ansible.executor.task_queue_manager import TaskQueueManager
from ansible.plugins.callback import CallbackBase
from ansible import context
import ansible.constants as C
class ResultCallback(CallbackBase):
    """A sample callback plugin used for performing an action as results come in

    If you want to collect all results into a single object for processing at
    the end of the execution, look into utilizing the ``json`` callback plugin
    or writing your own custom callback plugin
    """

    def v2_runner_on_ok(self, result, **kwargs):
        """Print a json representation of the result

        This method could store the result in an instance attribute for retrieval later
        """
        host = result._host
        print(json.dumps({host.name: result._result}, indent=4))
# since the API is constructed for CLI it expects certain options to always be set in the context object
context.CLIARGS = ImmutableDict(connection='local', module_path=['/to/mymodules'], forks=10, become=None,
become_method=None, become_user=None, check=False, diff=False)
# initialize needed objects
loader = DataLoader() # Takes care of finding and reading yaml, json and ini files
passwords = dict(vault_pass='secret')
# Instantiate our ResultCallback for handling results as they come in. Ansible expects this to be one of its main display outlets
results_callback = ResultCallback()
# create inventory, use path to host config file as source or hosts in a comma separated string
inventory = InventoryManager(loader=loader, sources='localhost,')
# variable manager takes care of merging all the different sources to give you a unified view of variables available in each context
variable_manager = VariableManager(loader=loader, inventory=inventory)
# create data structure that represents our play, including tasks, this is basically what our YAML loader does internally.
play_source = dict(
    name = "Ansible Play",
    hosts = 'localhost',
    gather_facts = 'no',
    tasks = [
        dict(action=dict(module='shell', args='ls'), register='shell_out'),
        dict(action=dict(module='debug', args=dict(msg='{{shell_out.stdout}}')))
    ]
)
# Create play object, playbook objects use .load instead of init or new methods,
# this will also automatically create the task objects from the info provided in play_source
play = Play().load(play_source, variable_manager=variable_manager, loader=loader)
# Run it - instantiate task queue manager, which takes care of forking and setting up all objects to iterate over host list and tasks
tqm = None
try:
    tqm = TaskQueueManager(
        inventory=inventory,
        variable_manager=variable_manager,
        loader=loader,
        passwords=passwords,
        stdout_callback=results_callback,  # Use our custom callback instead of the ``default`` callback plugin, which prints to stdout
    )
    result = tqm.run(play)  # most interesting data for a play is actually sent to the callback's methods
finally:
    # we always need to cleanup child procs and the structures we use to communicate with them
    if tqm is not None:
        tqm.cleanup()

    # Remove ansible tmpdir
    shutil.rmtree(C.DEFAULT_LOCAL_TMP, True)
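My current idea for the config-file case is to keep the structure of Sample.py and only swap out the callback and the task: read the remote file with the slurp module and convert it inside the callback. A rough sketch, assuming an INI-style config at a placeholder path (/etc/myapp/app.conf); it would plug into the TaskQueueManager flow above in place of ResultCallback and play_source. Does something like this look right?

import base64
import configparser
import json

from ansible.plugins.callback import CallbackBase

class ConfigToJsonCallback(CallbackBase):
    """Turn the result of a slurp task into JSON, one object per host."""

    def v2_runner_on_ok(self, result, **kwargs):
        content = result._result.get('content')
        if content is None:          # not a slurp result (e.g. a debug task)
            return
        text = base64.b64decode(content).decode('utf-8')   # slurp returns base64-encoded file content
        parser = configparser.ConfigParser()
        parser.read_string(text)
        as_dict = {section: dict(parser.items(section)) for section in parser.sections()}
        print(json.dumps({result._host.name: as_dict}, indent=4))

# Play definition: just slurp the remote config file from every host.
play_source = dict(
    name="Read remote config",
    hosts='all',                      # the hosts from the inventory / hosts file
    gather_facts='no',
    tasks=[
        dict(action=dict(module='slurp', args=dict(src='/etc/myapp/app.conf'))),  # placeholder path
    ],
)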
sum.yml: generates a summary file for each host
- hosts: staging
  tasks:
    - name: pt_mysql_sum
      shell: PTDEST=/tmp/collected;mkdir -p $PTDEST;cd /tmp;wget percona.com/get/pt-mysql-summary;chmod +x pt*;./pt-mysql-summary -- --user=adm --password=***** > $PTDEST/pt-mysql-summary.txt;cat $PTDEST/pt-mysql-summary.out;
      register: result
      environment:
        http_proxy: http://proxy.example.com:8080
        https_proxy: https://proxy.example.com:8080
    - name: ansible_result
      debug: var=result.stdout_lines
    - name: fetch_log
      fetch:
        src: /tmp/collected/pt-mysql-summary.txt
        dest: /tmp/collected/pt-mysql-summary-{{ inventory_hostname }}.txt
        flat: yes
hosts file
[staging]
vm1 ansible_ssh_host=10.40.50.41 ansible_ssh_user=testuser ansible_ssh_pass=*****
I have Airflow version 1.10.9 running in Docker.
It works, but it has some errors that I can see using docker logs:
Traceback (most recent call last):
File "/usr/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
self.run()
File "/usr/local/lib/python3.6/dist-packages/airflow/executors/local_executor.py", line 112, in run
key, command = self.task_queue.get()
File "<string>", line 2, in get
File "/usr/lib/python3.6/multiprocessing/managers.py", line 757, in _callmethod
kind, result = conn.recv()
File "/usr/lib/python3.6/multiprocessing/connection.py", line 250, in recv
buf = self._recv_bytes()
File "/usr/lib/python3.6/multiprocessing/connection.py", line 407, in _recv_bytes
buf = self._recv(4)
File "/usr/lib/python3.6/multiprocessing/connection.py", line 383, in _recv
raise EOFError
EOFError
And also this one (not the full trace), which is raised when I click on the green success mark in the Recent Tasks column.
self.fail('no filter named %r' % node.name, node.lineno)
File "/usr/local/lib/python3.6/dist-packages/jinja2/compiler.py", line 309, in fail
raise TemplateAssertionError(msg, lineno, self.name, self.filename)
jinja2.exceptions.TemplateAssertionError: no filter named 'min'
What I don't understand is how it raises an EOFError while Airflow keeps running. I also don't understand what this error means.
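From reading the first traceback, my understanding is that a LocalExecutor worker is blocked on task_queue.get() and the process on the other end of the multiprocessing connection goes away (for example during a scheduler restart or shutdown), so recv() has nothing left to read; that would explain why Airflow itself keeps running. Below is a plain-Python illustration of what I think is happening (this is not Airflow code, just the multiprocessing mechanism from the traceback):

import multiprocessing as mp

def worker(write_end):
    # Exit without ever sending anything; exiting closes this copy of the pipe.
    write_end.close()

if __name__ == "__main__":
    read_end, write_end = mp.Pipe(duplex=False)
    p = mp.Process(target=worker, args=(write_end,))
    p.start()
    write_end.close()    # the parent must also drop its copy of the write end
    try:
        read_end.recv()  # blocks; raises EOFError once no writer is left
    except EOFError:
        print("EOFError: every writer closed before sending data")
    p.join()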
entrypoint.sh
#!/usr/bin/env bash
TRY_LOOP="20"
: "${AIRFLOW_HOME:="/usr/local/airflow"}"
: "${AIRFLOW__CORE__FERNET_KEY:=${FERNET_KEY:=$(python -c "from cryptography.fernet import Fernet; FERNET_KEY = Fernet.generate_key().decode(); print(FERNET_KEY)")}}"
: "${AIRFLOW__CORE__EXECUTOR:="LocalExecutor"}"
# Load DAGs examples (default: Yes)
if [[ -z "$AIRFLOW__CORE__LOAD_EXAMPLES" && "${LOAD_EX:=n}" == n ]]; then
AIRFLOW__CORE__LOAD_EXAMPLES=False
fi
export \
AIRFLOW_HOME \
AIRFLOW__CORE__EXECUTOR \
AIRFLOW__CORE__FERNET_KEY \
AIRFLOW__CORE__LOAD_EXAMPLES \
# Install custom python package if requirements.txt is present
if [ -e "/requirements.txt" ]; then
$(command -v pip) install --user -r /requirements.txt
fi
wait_for_port() {
local name="$1" host="$2" port="$3"
local j=0
while ! nc -z "$host" "$port" >/dev/null 2>&1 < /dev/null; do
j=$((j+1))
if [ $j -ge $TRY_LOOP ]; then
echo >&2 "$(date) - $host:$port still not reachable, giving up"
exit 1
fi
echo "$(date) - waiting for $name... $j/$TRY_LOOP"
sleep 5
done
}
# Other executors than SequentialExecutor drive the need for an SQL database, here PostgreSQL is used
if [ "$AIRFLOW__CORE__EXECUTOR" != "SequentialExecutor" ]; then
# Check if the user has provided explicit Airflow configuration concerning the database
if [ -z "$AIRFLOW__CORE__SQL_ALCHEMY_CONN" ]; then
# Default values corresponding to the default compose files
: "${POSTGRES_HOST:="XXX"}"
: "${POSTGRES_PORT:="XXX"}"
: "${POSTGRES_USER:="XXX"}"
: "${POSTGRES_PASSWORD:="XXX"}"
: "${POSTGRES_DB:="XXX"}"
: "${POSTGRES_EXTRAS:-""}"
AIRFLOW__CORE__SQL_ALCHEMY_CONN="postgresql+psycopg2://${POSTGRES_USER}:${POSTGRES_PASSWORD}@${POSTGRES_HOST}:${POSTGRES_PORT}/${POSTGRES_DB}${POSTGRES_EXTRAS}"
export AIRFLOW__CORE__SQL_ALCHEMY_CONN
# Check if the user has provided explicit Airflow configuration for the broker's connection to the database
if [ "$AIRFLOW__CORE__EXECUTOR" = "CeleryExecutor" ]; then
AIRFLOW__CELERY__RESULT_BACKEND="db+postgresql://${POSTGRES_USER}:${POSTGRES_PASSWORD}@${POSTGRES_HOST}:${POSTGRES_PORT}/${POSTGRES_DB}${POSTGRES_EXTRAS}"
export AIRFLOW__CELERY__RESULT_BACKEND
fi
else
if [[ "$AIRFLOW__CORE__EXECUTOR" == "CeleryExecutor" && -z "$AIRFLOW__CELERY__RESULT_BACKEND" ]]; then
>&2 printf '%s\n' "FATAL: if you set AIRFLOW__CORE__SQL_ALCHEMY_CONN manually with CeleryExecutor you must also set AIRFLOW__CELERY__RESULT_BACKEND"
exit 1
fi
# Derive useful variables from the AIRFLOW__ variables provided explicitly by the user
POSTGRES_ENDPOINT=$(echo -n "$AIRFLOW__CORE__SQL_ALCHEMY_CONN" | cut -d '/' -f3 | sed -e 's,.*@,,')
POSTGRES_HOST=$(echo -n "$POSTGRES_ENDPOINT" | cut -d ':' -f1)
POSTGRES_PORT=$(echo -n "$POSTGRES_ENDPOINT" | cut -d ':' -f2)
fi
wait_for_port "Postgres" "$POSTGRES_HOST" "$POSTGRES_PORT"
fi
# CeleryExecutor drives the need for a Celery broker, here Redis is used
if [ "$AIRFLOW__CORE__EXECUTOR" = "CeleryExecutor" ]; then
# Check if the user has provided explicit Airflow configuration concerning the broker
if [ -z "$AIRFLOW__CELERY__BROKER_URL" ]; then
# Default values corresponding to the default compose files
: "${REDIS_PROTO:="redis://"}"
: "${REDIS_HOST:="redis"}"
: "${REDIS_PORT:="6379"}"
: "${REDIS_PASSWORD:=""}"
: "${REDIS_DBNUM:="1"}"
# When Redis is secured by basic auth, it does not handle the username part of basic auth, only a token
if [ -n "$REDIS_PASSWORD" ]; then
REDIS_PREFIX=":${REDIS_PASSWORD}@"
else
REDIS_PREFIX=
fi
AIRFLOW__CELERY__BROKER_URL="${REDIS_PROTO}${REDIS_PREFIX}${REDIS_HOST}:${REDIS_PORT}/${REDIS_DBNUM}"
export AIRFLOW__CELERY__BROKER_URL
else
# Derive useful variables from the AIRFLOW__ variables provided explicitly by the user
REDIS_ENDPOINT=$(echo -n "$AIRFLOW__CELERY__BROKER_URL" | cut -d '/' -f3 | sed -e 's,.*@,,')
REDIS_HOST=$(echo -n "$REDIS_ENDPOINT" | cut -d ':' -f1)
REDIS_PORT=$(echo -n "$REDIS_ENDPOINT" | cut -d ':' -f2)
fi
wait_for_port "Redis" "$REDIS_HOST" "$REDIS_PORT"
fi
case "$1" in
webserver)
airflow initdb
if [ "$AIRFLOW__CORE__EXECUTOR" = "LocalExecutor" ] || [ "$AIRFLOW__CORE__EXECUTOR" = "SequentialExecutor" ]; then
# With the "Local" and "Sequential" executors it should all run in one container.
airflow scheduler &
fi
exec airflow webserver
;;
worker|scheduler)
# Give the webserver time to run initdb.
sleep 10
exec airflow "$@"
;;
flower)
sleep 10
exec airflow "$@"
;;
version)
exec airflow "$@"
;;
*)
# The command is something like bash, not an airflow subcommand. Just run it in the right environment.
exec "$@"
;;
esac
Airflow.cfg
[core]
# The home folder for airflow, default is ~/airflow
# The folder where your airflow pipelines live, most likely a
# subfolder in a code repository
# This path must be absolute
dags_folder = /usr/local/airflow/dags
# The folder where airflow should store its log files
# This path must be absolute
base_log_folder = /usr/local/airflow/logs
# Airflow can store logs remotely in AWS S3, Google Cloud Storage or Elastic Search.
# Users must supply an Airflow connection id that provides access to the storage
# location. If remote_logging is set to true, see UPDATING.md for additional
# configuration requirements.
remote_logging = False
remote_log_conn_id =
remote_base_log_folder =
encrypt_s3_logs = False
# Logging level
logging_level = INFO
fab_logging_level = WARN
# Logging class
# Specify the class that will specify the logging configuration
# This class has to be on the python classpath
# logging_config_class = my.path.default_local_settings.LOGGING_CONFIG
logging_config_class =
# Log format
# we need to escape the curly braces by adding an additional curly brace
log_format = [%%(asctime)s] {%%(filename)s:%%(lineno)d} %%(levelname)s - %%(message)s
simple_log_format = %%(asctime)s %%(levelname)s - %%(message)s
# Log filename format
# we need to escape the curly braces by adding an additional curly brace
log_filename_template = {{ ti.dag_id }}/{{ ti.task_id }}/{{ ts }}/{{ try_number }}.log
log_processor_filename_template = {{ filename }}.log
# Hostname by providing a path to a callable, which will resolve the hostname
hostname_callable = socket:getfqdn
# Default timezone in case supplied date times are naive
# can be utc (default), system, or any IANA timezone string (e.g. Europe/Amsterdam)
default_timezone = utc
# The executor class that airflow should use. Choices include
# SequentialExecutor, LocalExecutor, CeleryExecutor, DaskExecutor
executor = LocalExecutor
# The SqlAlchemy connection string to the metadata database.
# SqlAlchemy supports many different database engine, more information
# their website
sql_alchemy_conn = $AIRFLOW_CONN_PROD_RDS
# If SqlAlchemy should pool database connections.
sql_alchemy_pool_enabled = True
# The SqlAlchemy pool size is the maximum number of database connections
# in the pool. 0 indicates no limit.
sql_alchemy_pool_size = 5
# The SqlAlchemy pool recycle is the number of seconds a connection
# can be idle in the pool before it is invalidated. This config does
# not apply to sqlite. If the number of DB connections is ever exceeded,
# a lower config value will allow the system to recover faster.
sql_alchemy_pool_recycle = 1800
# How many seconds to retry re-establishing a DB connection after
# disconnects. Setting this to 0 disables retries.
sql_alchemy_reconnect_timeout = 300
# The amount of parallelism as a setting to the executor. This defines
# the max number of task instances that should run simultaneously
# on this airflow installation
parallelism = 32
# The number of task instances allowed to run concurrently by the scheduler
dag_concurrency = 16
# Are DAGs paused by default at creation
dags_are_paused_at_creation = True
# When not using pools, tasks are run in the "default pool",
# whose size is guided by this config element
non_pooled_task_slot_count = 128
# The maximum number of active DAG runs per DAG
max_active_runs_per_dag = 16
# Whether to load the examples that ship with Airflow. It's good to
# get started, but you probably want to set this to False in a production
# environment
load_examples = False
# Where your Airflow plugins are stored
plugins_folder = /usr/local/airflow/plugins
# Secret key to save connection passwords in the db
fernet_key =
# Whether to disable pickling dags
donot_pickle = False
# How long before timing out a python file import while filling the DagBag
dagbag_import_timeout = 30
# The class to use for running task instances in a subprocess
task_runner = BashTaskRunner
# If set, tasks without a `run_as_user` argument will be run with this user
# Can be used to de-elevate a sudo user running Airflow when executing tasks
default_impersonation =
# What security module to use (for example kerberos):
security =
# If set to False enables some unsecure features like Charts and Ad Hoc Queries.
# In 2.0 will default to True.
secure_mode = False
# Turn unit test mode on (overwrites many configuration options with test
# values at runtime)
unit_test_mode = False
# Name of handler to read task instance logs.
# Default to use task handler.
task_log_reader = task
# Whether to enable pickling for xcom (note that this is insecure and allows for
# RCE exploits). This will be deprecated in Airflow 2.0 (be forced to False).
enable_xcom_pickling = True
# When a task is killed forcefully, this is the amount of time in seconds that
# it has to cleanup after it is sent a SIGTERM, before it is SIGKILLED
killed_task_cleanup_time = 60
# Whether to override params with dag_run.conf. If you pass some key-value pairs through `airflow backfill -c` or
# `airflow trigger_dag -c`, the key-value pairs will override the existing ones in params.
dag_run_conf_overrides_params = False
[cli]
# In what way should the cli access the API. The LocalClient will use the
# database directly, while the json_client will use the api running on the
# webserver
api_client = airflow.api.client.local_client
# If you set web_server_url_prefix, do NOT forget to append it here, ex:
# endpoint_url = http://localhost:8080/myroot
# So api will look like: http://localhost:8080/myroot/api/experimental/...
endpoint_url = http://localhost:8080
[api]
# How to authenticate users of the API
auth_backend = airflow.api.auth.backend.default
[lineage]
# what lineage backend to use
backend =
[atlas]
sasl_enabled = False
host =
port = 21000
username =
password =
[operators]
# The default owner assigned to each new operator, unless
# provided explicitly or passed via `default_args`
default_owner = Airflow
default_cpus = 2
default_ram = 2048
default_disk = 2048
default_gpus = 0
[hive]
# Default mapreduce queue for HiveOperator tasks
default_hive_mapred_queue =
[webserver]
# The base url of your website as airflow cannot guess what domain or
# cname you are using. This is used in automated emails that
# airflow sends to point links to the right web server
base_url = http://localhost:8080
# The ip specified when starting the web server
web_server_host = 0.0.0.0
# The port on which to run the web server
web_server_port = 8080
# Paths to the SSL certificate and key for the web server. When both are
# provided SSL will be enabled. This does not change the web server port.
web_server_ssl_cert =
web_server_ssl_key =
# Number of seconds the webserver waits before killing gunicorn master that doesn't respond
web_server_master_timeout = 120
# Number of seconds the gunicorn webserver waits before timing out on a worker
web_server_worker_timeout = 120
# Number of workers to refresh at a time. When set to 0, worker refresh is
# disabled. When nonzero, airflow periodically refreshes webserver workers by
# bringing up new ones and killing old ones.
worker_refresh_batch_size = 1
# Number of seconds to wait before refreshing a batch of workers.
worker_refresh_interval = 30
# Secret key used to run your flask app
secret_key = temporary_key
# Number of workers to run the Gunicorn web server
workers = 4
# The worker class gunicorn should use. Choices include
# sync (default), eventlet, gevent
worker_class = sync
# Log files for the gunicorn webserver. '-' means log to stderr.
access_logfile = -
error_logfile = -
# Expose the configuration file in the web server
expose_config = False
# Set to true to turn on authentication:
# https://airflow.incubator.apache.org/security.html#web-authentication
authenticate = False
# Filter the list of dags by owner name (requires authentication to be enabled)
filter_by_owner = False
# Filtering mode. Choices include user (default) and ldapgroup.
# Ldap group filtering requires using the ldap backend
#
# Note that the ldap server needs the "memberOf" overlay to be set up
# in order to user the ldapgroup mode.
owner_mode = user
dag_default_view = tree
dag_orientation = LR
demo_mode = False
log_fetch_timeout_sec = 30
hide_paused_dags_by_default = False
page_size = 100
# Use FAB-based webserver with RBAC feature
rbac = False
# Define the color of navigation bar
navbar_color = #007A87
# Default dagrun to show in UI
default_dag_run_display_number = 25
[email]
email_backend = airflow.utils.email.send_email_smtp
[smtp]
smtp_host = localhost
smtp_starttls = True
smtp_ssl = False
smtp_port = 25
smtp_mail_from = airflow@example.com
[celery]
celery_app_name = airflow.executors.celery_executor
worker_concurrency = 16
worker_log_server_port = 8793
# The Celery broker URL. Celery supports RabbitMQ, Redis and experimentally
# a sqlalchemy database. Refer to the Celery documentation for more
# information.
# http://docs.celeryproject.org/en/latest/userguide/configuration.html#broker-settings
broker_url =
# The Celery result_backend. When a job finishes, it needs to update the
# metadata of the job. Therefore it will post a message on a message bus,
# or insert it into a database (depending of the backend)
# This status is used by the scheduler to update the state of the task
# The use of a database is highly recommended
# http://docs.celeryproject.org/en/latest/userguide/configuration.html#task-result-backend-settings
result_backend = sqlite:///~/airflow_tutorial/airflow.db
# Celery Flower is a sweet UI for Celery. Airflow has a shortcut to start
# it `airflow flower`. This defines the IP that Celery Flower runs on
flower_host = 0.0.0.0
# The root URL for Flower
# Ex: flower_url_prefix = /flower
flower_url_prefix =
# This defines the port that Celery Flower runs on
flower_port = 5555
# Default queue that tasks get assigned to and that worker listen on.
default_queue = default
# Import path for celery configuration options
celery_config_options = airflow.config_templates.default_celery.DEFAULT_CELERY_CONFIG
# In case of using SSL
ssl_active = False
ssl_key =
ssl_cert =
ssl_cacert =
[celery_broker_transport_options]
# This section is for specifying options which can be passed to the
# underlying celery broker transport. See:
# http://docs.celeryproject.org/en/latest/userguide/configuration.html#std:setting-broker_transport_options
# The visibility timeout defines the number of seconds to wait for the worker
# to acknowledge the task before the message is redelivered to another worker.
# Make sure to increase the visibility timeout to match the time of the longest
# ETA you're planning to use.
#
# visibility_timeout is only supported for Redis and SQS celery brokers.
# See:
# http://docs.celeryproject.org/en/master/userguide/configuration.html#std:setting-broker_transport_options
#
#visibility_timeout = 21600
[dask]
# This section only applies if you are using the DaskExecutor in
# [core] section above
# The IP address and port of the Dask cluster's scheduler.
cluster_address = 127.0.0.1:8786
# TLS/ SSL settings to access a secured Dask scheduler.
tls_ca =
tls_cert =
tls_key =
[scheduler]
# Task instances listen for external kill signal (when you clear tasks
# from the CLI or the UI), this defines the frequency at which they should
# listen (in seconds).
job_heartbeat_sec = 5
# The scheduler constantly tries to trigger new tasks (look at the
# scheduler section in the docs for more information). This defines
# how often the scheduler should run (in seconds).
scheduler_heartbeat_sec = 5
# after how much time should the scheduler terminate in seconds
# -1 indicates to run continuously (see also num_runs)
run_duration = -1
# after how much time a new DAGs should be picked up from the filesystem
min_file_process_interval = 0
# How many seconds to wait between file-parsing loops to prevent the logs from being spammed.
min_file_parsing_loop_time = 1
dag_dir_list_interval = 300
# How often should stats be printed to the logs
print_stats_interval = 30
child_process_log_directory = ~/airflow_tutorial/logs/scheduler
# Local task jobs periodically heartbeat to the DB. If the job has
# not heartbeat in this many seconds, the scheduler will mark the
# associated task instance as failed and will re-schedule the task.
scheduler_zombie_task_threshold = 300
# Turn off scheduler catchup by setting this to False.
# Default behavior is unchanged and
# Command Line Backfills still work, but the scheduler
# will not do scheduler catchup if this is False,
# however it can be set on a per DAG basis in the
# DAG definition (catchup)
catchup_by_default = True
# This changes the batch size of queries in the scheduling main loop.
# If this is too high, SQL query performance may be impacted by one
# or more of the following:
# - reversion to full table scan
# - complexity of query predicate
# - excessive locking
#
# Additionally, you may hit the maximum allowable query length for your db.
#
# Set this to 0 for no limit (not advised)
max_tis_per_query = 512
# Statsd (https://github.com/etsy/statsd) integration settings
statsd_on = False
statsd_host = localhost
statsd_port = 8125
statsd_prefix = airflow
# The scheduler can run multiple threads in parallel to schedule dags.
# This defines how many threads will run.
max_threads = 2
authenticate = False
[ldap]
# set this to ldaps://<your.ldap.server>:<port>
uri =
user_filter = objectClass=*
user_name_attr = uid
group_member_attr = memberOf
superuser_filter =
data_profiler_filter =
bind_user = cn=Manager,dc=example,dc=com
bind_password = insecure
basedn = dc=example,dc=com
cacert = /etc/ca/ldap_ca.crt
search_scope = LEVEL
[mesos]
# Mesos master address which MesosExecutor will connect to.
master = localhost:5050
# The framework name which Airflow scheduler will register itself as on mesos
framework_name = Airflow
# Number of cpu cores required for running one task instance using
# 'airflow run <dag_id> <task_id> <execution_date> --local -p <pickle_id>'
# command on a mesos slave
task_cpu = 1
# Memory in MB required for running one task instance using
# 'airflow run <dag_id> <task_id> <execution_date> --local -p <pickle_id>'
# command on a mesos slave
task_memory = 256
# Enable framework checkpointing for mesos
# See http://mesos.apache.org/documentation/latest/slave-recovery/
checkpoint = False
# Failover timeout in milliseconds.
# When checkpointing is enabled and this option is set, Mesos waits
# until the configured timeout for
# the MesosExecutor framework to re-register after a failover. Mesos
# shuts down running tasks if the
# MesosExecutor framework fails to re-register within this timeframe.
# failover_timeout = 604800
# Enable framework authentication for mesos
# See http://mesos.apache.org/documentation/latest/configuration/
authenticate = False
# Mesos credentials, if authentication is enabled
# default_principal = admin
# default_secret = admin
# Optional Docker Image to run on slave before running the command
# This image should be accessible from mesos slave i.e mesos slave
# should be able to pull this docker image before executing the command.
# docker_image_slave = puckel/docker-airflow
[kerberos]
ccache = /tmp/airflow_krb5_ccache
# gets augmented with fqdn
principal = airflow
reinit_frequency = 3600
kinit_path = kinit
keytab = airflow.keytab
[github_enterprise]
api_rev = v3
[admin]
# UI to hide sensitive variable fields when set to True
hide_sensitive_variable_fields = True
[elasticsearch]
elasticsearch_host =
# we need to escape the curly braces by adding an additional curly brace
elasticsearch_log_id_template = {dag_id}-{task_id}-{execution_date}-{try_number}
elasticsearch_end_of_log_mark = end_of_log
[kubernetes]
# The repository and tag of the Kubernetes Image for the Worker to Run
worker_container_repository =
worker_container_tag =
# If True (default), worker pods will be deleted upon termination
delete_worker_pods = True
# The Kubernetes namespace where airflow workers should be created. Defaults to `default`
namespace = default
# The name of the Kubernetes ConfigMap Containing the Airflow Configuration (this file)
airflow_configmap =
# For either git sync or volume mounted DAGs, the worker will look in this subpath for DAGs
dags_volume_subpath =
# For DAGs mounted via a volume claim (mutually exclusive with volume claim)
dags_volume_claim =
# For volume mounted logs, the worker will look in this subpath for logs
logs_volume_subpath =
# A shared volume claim for the logs
logs_volume_claim =
# Git credentials and repository for DAGs mounted via Git (mutually exclusive with volume claim)
git_repo =
git_branch =
git_user =
git_password =
git_subpath =
# For cloning DAGs from git repositories into volumes: https://github.com/kubernetes/git-sync
git_sync_container_repository = gcr.io/google-containers/git-sync-amd64
git_sync_container_tag = v2.0.5
git_sync_init_container_name = git-sync-clone
# The name of the Kubernetes service account to be associated with airflow workers, if any.
# Service accounts are required for workers that require access to secrets or cluster resources.
# See the Kubernetes RBAC documentation for more:
# https://kubernetes.io/docs/admin/authorization/rbac/
worker_service_account_name =
# Any image pull secrets to be given to worker pods, If more than one secret is
# required, provide a comma separated list: secret_a,secret_b
image_pull_secrets =
# GCP Service Account Keys to be provided to tasks run on Kubernetes Executors
# Should be supplied in the format: key-name-1:key-path-1,key-name-2:key-path-2
gcp_service_account_keys =
# Use the service account kubernetes gives to pods to connect to kubernetes cluster.
# It's intended for clients that expect to be running inside a pod running on kubernetes.
# It will raise an exception if called from a process not running in a kubernetes environment.
in_cluster = True
[kubernetes_secrets]
# The scheduler mounts the following secrets into your workers as they are launched by the
# scheduler. You may define as many secrets as needed and the kubernetes launcher will parse the
# defined secrets and mount them as secret environment variables in the launched workers.
# Secrets in this section are defined as follows
# <environment_variable_mount> = <kubernetes_secret_object>:<kubernetes_secret_key>
#
# For example if you wanted to mount a kubernetes secret key named `postgres_password` from the
# kubernetes secret object `airflow-secret` as the environment variable `POSTGRES_PASSWORD` into
# your workers you would follow the following format:
# POSTGRES_PASSWORD = airflow-secret:postgres_credentials
#
# Additionally you may override worker airflow settings with the AIRFLOW__<SECTION>__<KEY>
# formatting as supported by airflow normally.
All of this setup is running on a c5.large EC2 instance in AWS.