using variables in ansible hosts targeting in playbook - scope

I am trying to write a very flexible playbook that targets hosts based on the environment they are in. I am using as many variables as possible, so the playbook can be reused for other projects/environments with minimal changes.
I have a single application.yml:
---
- name: Prepare app-server for "The app"
  hosts: "{{'env'}}_super_app"
  vars:
  vars_files:
    - "environments/{{env}}.yml"
  sudo: yes
  tasks:
    - command: echo {{env}}
  roles:
    - common
    - nginx
    - php5-fpm
    - nodejs
    - newrelic
    - users
    - composer

- name: Install and configure mysql for "The super app"
  hosts:
    - "{{env}}_super_db"
  vars:
  vars_files:
    - "environments/{{env}}.yml"
  sudo: yes
  roles:
    - common
    - mysql
    - newrelic
Here is the playbook directory structure:
├── environments
│   ├── prod.yml << environment specific vars
│   ├── stag.yml << environment specific vars
│   └── uat.yml << environment specific vars
├── roles
│   ├── common
│   ├── composer
│   ├── mysql
│   ├── newrelic
│   ├── nginx
│   ├── nodejs
│   ├── php5-fpm
│   └── users
├── users
│   └── testo.yml
├── prod << inventory file for production
├── README.md
├── application.yml << application playbook
├── stag << inventory file for staging
└── uat << inventory file for uat
Here are the contents of the uat inventory file:
[uat_super_app]
10.10.10.4
[uat_super_db]
10.10.10.5
When I run my playbook, I pass the environment as an extra variable:
ansible-playbook -K -i uat application.yml -e="env=uat" --check
The idea being:
If {{env}} is set to uat, then the environments/uat.yml vars will be used, AND the hosts in [uat_super_app] will be targeted, because {{env}}_super_app expands to uat_super_app.
If I or anyone else makes a mistake and tries to run the uat vars against the production inventory, the hosts will not match and Ansible will not run the playbook.
ansible-playbook -K -i prod application.yml -e="env=uat" --check
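For illustration, the prod inventory would presumably mirror the uat one with prod-prefixed group names (the contents here are hypothetical):
[prod_super_app]
10.10.20.4
[prod_super_db]
10.10.20.5
With env=uat the host pattern expands to uat_super_app, which matches no group in that inventory, so nothing runs.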
This playbook works when I do not use variables in hosts targeting.
The problem is that no hosts match either way:
ansible-playbook -K -i uat application.yml -e="env=uat" --check -vvvv
SUDO password:
PLAY [Prepare app-server for "The app"] *******************************
skipping: no hosts matched
PLAY [Install and configure mysql for "The app"] **********************
skipping: no hosts matched
PLAY RECAP ********************************************************************

hosts: "{{'env'}}_super_app"
That looks like you are using the literal string 'env', not the variable, so it evaluates to env_super_app. I think you meant:
hosts: "{{ env }}_super_app"

Thanks udondan, but that didn't work.
The solution was to pass the vars in the same way that the playbook expects to see them.
If I were specifying the env in the playbook I would have:
---
- name: Prepare app-server for "The app"
  hosts: uat_super_app
  vars:
    - env: uat
So the correct way of passing the variable is:
ansible-playbook -K -i uat application.yml -e='vars: env=uat'
To test this I used the --list-hosts option:
ansible-playbook -K -i uat application.yml -e='vars: env=uat' --list-hosts
playbook: application.yml
play #1 (Prepare app-server for "The super app"): host count=1
10.10.10.4
play #2 (Install and configure mysql for "The super app"): host count=1
10.10.10.5
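The accidental-mismatch safety check can be verified the same way by pointing at the prod inventory; with the uat vars, --list-hosts should then report no matching hosts for either play:
ansible-playbook -K -i prod application.yml -e='vars: env=uat' --list-hosts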

Related

GitHub Actions not creating Rust binaries

I am using GitHub Actions to cross-compile my Rust program. The action completes successfully, and files are created in the target directory, but there is no binary. This is my workflow file:
name: Compile and save program
on:
  push:
    branches: [main]
    paths-ignore: ["samples/**", "**.md"]
  workflow_dispatch:
jobs:
  build:
    strategy:
      fail-fast: false
      matrix:
        target:
          - aarch64-unknown-linux-gnu
          - i686-pc-windows-gnu
          - i686-unknown-linux-gnu
          - x86_64-pc-windows-gnu
          - x86_64-unknown-linux-gnu
    name: Build executable
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v3
      - name: Set up Rust
        uses: actions-rs/toolchain@v1
        with:
          toolchain: stable
      - name: Install dependencies
        run: |
          rustup target add ${{ matrix.target }}
      - name: Build
        uses: actions-rs/cargo@v1
        with:
          use-cross: true
          command: build
          args: --target ${{ matrix.target }} --release --all-features --target-dir=/tmp
      - name: Debug missing files
        run: |
          echo "target dir:"
          ls -a /tmp/release
          echo "deps:"
          ls -a /tmp/release/deps
      - name: Archive production artifacts
        uses: actions/upload-artifact@v3
        with:
          name: ${{ matrix.target }}
          path: |
            /tmp/release
And this is the layout of the created directory when targeting Windows x86_64 (the only difference when targeting other platforms is the names of the directories within .fingerprint and build):
.
├── .cargo-lock
├── .fingerprint/
│ ├── libc-d2565b572b77baea/
│ ├── winapi-619d3257e8f28792/
│ └── winapi-x86_64-pc-windows-gnu-7e7040207fbb5417/
├── build/
│ ├── libc-d2565b572b77baea/
│ ├── winapi-619d3257e8f28792/
│ └── winapi-x86_64-pc-windows-gnu-7e7040207fbb5417/
├── deps/
│ └── <empty>
├── examples/
│ └── <empty>
└── incremental/
└── <empty>
As you can see, there is no binary, and this is reflected in the uploaded artifact.
What is causing this?
EDIT 1
The program builds fine on my local device. My .cargo/config.toml is below.
[target.aarch64-unknown-linux-gnu]
linker = "aarch64-linux-gnu-gcc"
And this is my Cargo.toml:
[package]
name = "brainfuck"
version = "0.4.0"
edition = "2021"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
console = "0.15.2"
either = "1.8.0"
EDIT 2
While messing around in a test repo, I discovered that this issue only arises when specifying the target. If I don’t specify a target and just use the default system target, I get a binary as expected.
It turns out I didn’t read the cargo docs properly. The build cache docs mention that the results of a build with a specified --target are stored in target/<triple>/ (so target/<triple>/release/ for a release build), and that is indeed where they were.
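In this workflow that means the executables end up under /tmp/<target triple>/release rather than /tmp/release (assuming --target-dir=/tmp is honored the same way), so the debug and upload steps could be pointed there. A sketch of just those two steps:
      - name: Debug missing files
        run: |
          echo "target dir:"
          ls -a /tmp/${{ matrix.target }}/release
      - name: Archive production artifacts
        uses: actions/upload-artifact@v3
        with:
          name: ${{ matrix.target }}
          path: |
            /tmp/${{ matrix.target }}/release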

How does the syntax to specify file type changes in Gitlab's CI rules work?

My file structure consists of 2 main directories, resources and src. resources has images in a subdirectory and various JSON files; src has many nested directories with .ts files in each:
├── package.json
├── package-lock.json
├── README.md
│
├── .docker
│ ├── Dockerfile
│ └── aBashScript.sh
│
├── resources
│ ├── data.json
│ └── images
│ └── manyimages.png
│
├── src
│ ├── subdirectory1
│ └── NestedDirectories
│
├── .gitlab-ci.yml
├── tsconfig.eslint.json
├── tsconfig.json
├── eslintrc.json
└── prettierrc.json
My gitlab-ci.yml has two stages, build and deploy
What I want:
1- If it's a commit on branches "main" or "dev" and anything that affects the actual project changes, run build.
That is anything under resources or src (and their nested directories), the Dockerfile, package.json and package-lock.json.
I'd be content with "any .ts file changed" too, since the other criteria usually only change when that happens.
2- If build ran and it's a commit on the default branch ("main"), then run the deploy stage.
Also, for clarification: when I say there's a commit on branch X, I mean an accepted merge request, i.e. an actual change on that branch. At some point in my tinkering it was running on (non-accepted) merge requests, but I forgot what I changed to fix that.
What happens:
1- If I specify the changes rule on build, then it never runs; however, even if build doesn't run, deploy always runs (if on branch "main").
.gitlab-ci.yml
variables:
  IMAGE_TAG: project

stages:
  - build
  - deploy

build_image:
  stage: build
  image: docker:20.10.16
  services:
    - docker:20.10.16-dind
  variables:
    DOCKER_TLS_CERTDIR: "/certs"
  before_script:
    - echo $REGISTRY_PASS | docker login -u $REGISTRY_USER --password-stdin
  script:
    - |
      if [[ "$CI_COMMIT_BRANCH" == "$CI_DEFAULT_BRANCH" ]]; then
        tag="latest"
        echo "Running on default branch '$CI_DEFAULT_BRANCH': tag = '$tag'"
      else
        tag="$CI_COMMIT_REF_SLUG"
        echo "Running on branch '$CI_COMMIT_BRANCH': tag = $tag"
      fi
    - docker build -f .docker/Dockerfile -t $REPO_NAME:$IMAGE_TAG-$tag .
    - docker push $REPO_NAME:$IMAGE_TAG-$tag
  rules:
    - if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH || $CI_COMMIT_BRANCH == "dev"'
      changes:
        - "*.ts"
        - "*.json"
        - Dockerfile

deploy:
  stage: deploy
  before_script:
    - chmod SSH_KEY
  script:
    - ssh -o StrictHostKeyChecking=no -i $SSH_KEY $VPS "
      echo $REGISTRY_PASS | docker login -u $REGISTRY_USER --password-stdin &&
      cd project &&
      docker-compose pull &&
      docker-compose up -d"
  rules:
    - if: '$CI_COMMIT_BRANCH == "main"'
This is the most basic one I could cobble up, basically excluding just the readme, but the build stage doesn't run (deploy does run even if build didn't).
Normally this is something I'd be able to "brute force" figure out myself, but to avoid uselessly modifying my files just to test the changes rule, I've only been able to test this when making actual modifications to the project.
There seem to be a lot of examples in questions and tutorials out there, but I think something is off with my file structure, as I've had no luck copying their changes rules.
The changes: entries are glob patterns, not regexes. So in order to match .ts files in any directory, you'll need to use "**/*.ts", not *.ts (which would only match files in the root).
changes:
  - "**/*.ts"
  - "**/*.json"
  # ...
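Note also that in the directory tree above the Dockerfile sits under .docker/, so a plain Dockerfile entry would never match either. A sketch of the build_image rules with the question's criteria spelled out (paths taken from the tree above):
rules:
  - if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH || $CI_COMMIT_BRANCH == "dev"'
    changes:
      - .docker/Dockerfile
      - package.json
      - package-lock.json
      - "src/**/*"
      - "resources/**/*"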
If build ran and it's a commit on the default branch ("main") then run the deploy stage.
To get this effect, you'll want your deploy job to share some of the rules of your build job.
deploy:
  rules:
    - if: "$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH"
      changes:
        - Dockerfile
        - "**/*.ts"
        - "**/*.json"
Or a little fancier way that reduces code duplication:
rules:
  - if: "$CI_COMMIT_BRANCH != $CI_DEFAULT_BRANCH"
    when: never # only deploy on default branch
  - !reference [build_image, rules]

Managing multiple terraform statefiles with gitlab

Just as an example, my git repository would look like so:
└── git_repo_example
    ├── modules
    │   ├── module_1
    │   │   ├── main.tf
    │   │   └── var.tf
    │   └── module_2
    │       ├── main.tf
    │       └── var.tf
    └── projects
        ├── project_1
        │   ├── main.tf
        │   ├── terraform.tfstate
        │   └── vars.tf
        └── project_2
            ├── main.tf
            ├── terraform.tfstate
            └── vars.tf
7 directories, 10 files
My team wants to make our terraform state files gitlab-managed, so that the statefiles would be locked in case multiple people want to run or modify a single project at the same time.
All of the examples I can find for managing terraform via gitlab only seem to assume 1 tfstate file and project, but my repository has multiple. Breaking this up into multiple repositories would be difficult to manage since they all reference the same modules, and it seems that placing everything into one folder is against terraform best-practices.
How would one best go about managing one repository with multiple terraform projects / statefiles via gitlab?
I have a similar-ish directory structure that works well with GitLab-managed state per project directory.
I'd recommend replacing local TF development with GitLab CI/CD pipelines, using the provided GitLab container image, as it supports the GitLab backend by default.
I use environments (representing each project directory) to manage the pipeline (CI/CD variables are managed per environment). The TF state file is named according to the TF_ADDRESS variable:
image: registry.gitlab.com/gitlab-org/terraform-images/stable:latest

variables:
  TF_ROOT: ${CI_PROJECT_DIR}/${ENVIRONMENT}
  TF_ADDRESS: ${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/terraform/state/${ENVIRONMENT}
Here, a build job is defined to create the TF plan; it runs only when the development directory is modified on the default branch. An identical job for the production directory is also defined. Each environment references a unique TF state file managed by GitLab:
.plan:
  stage: build
  environment:
    name: $ENVIRONMENT
  script:
    - gitlab-terraform plan
    - gitlab-terraform plan-json
  cache:
    policy: pull # don't update the cache
  artifacts:
    name: plan
    paths:
      - ${TF_ROOT}/plan.cache
    reports:
      terraform: ${TF_ROOT}/plan.json
Development Build Terraform Plan:
  extends: .plan
  variables:
    ENVIRONMENT: development
  rules:
    - if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH'
      changes:
        - "*"
        - "development/**/*"
Production Build Terraform Plan:
  extends: .plan
  variables:
    ENVIRONMENT: production
  rules:
    - if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH'
      changes:
        - "*"
        - "production/**/*"
I have exactly the same kind of terraform scripts repository, with a "run" script at the top, which will, for each application, do a
cd modules/modulex
terraform init -backend-config=backend.tf -reconfigure
With backend.tf (here for Azure):
container_name = "tfbackend"
key = "aStorageAccount/aFileShare/path/to/modulex.tfstate"
resource_group_name = "xxx"
That does create a modules/modulex/.terraform/terraform.tfstate
However, this file is local and neither versioned nor "locked".
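To get the GitLab-managed (versioned and locked) state the question asks about, that same per-module init could instead point at GitLab's HTTP state backend. A rough sketch, with the state name and token handling as placeholders:
terraform {
  backend "http" {}
}
and then, for example from CI:
terraform init -reconfigure \
  -backend-config="address=${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/terraform/state/modulex" \
  -backend-config="lock_address=${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/terraform/state/modulex/lock" \
  -backend-config="unlock_address=${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/terraform/state/modulex/lock" \
  -backend-config="lock_method=POST" \
  -backend-config="unlock_method=DELETE" \
  -backend-config="username=gitlab-ci-token" \
  -backend-config="password=${CI_JOB_TOKEN}" \
  -backend-config="retry_wait_min=5"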

Terraform and AWS ECS: Configs/Secrets similar to Kubernetes or Docker swarm

Is there anything similar to Kubernetes / Docker Swarm secrets in the AWS ECS ecosystem? I know of the options to use the SSM Parameter Store or S3 buckets, but neither is comparable in usability to the solutions in Kubernetes or Swarm.
The SSM Parameter Store is limited to 4/8 KB per secret, which makes it impossible to use for bigger config files. Kubernetes allows up to 1 MB per secret.
And both SSM PS and S3 require me to maintain distinct images for everything that doesn't support configuration via environment variables (the only way I know of to get config data into a container in ECS).
Am I missing an obvious simpler solution?
Currently the workflow looks like this for me:
Create SSM parameter with base64 encoding
Create image (for example nginx) that parses environment variables into target files, for example nginx.conf, and then calls the default entrypoint and passes on any arguments.
Use secrets in my Terraform ECS task definition.
Here is an example for a Dockerfile and run.sh
#!/bin/sh
echo "version 1.0"
if [ -z "${NGINX_CONF}" ]
then
  echo "no settings received for nginx.conf, continuing with the default settings"
else
  echo ${NGINX_CONF} | base64 -d > /etc/nginx/nginx.conf
  echo "created /etc/nginx/nginx.conf"
fi
/docker-entrypoint.sh "$@"
FROM nginx:latest
COPY files/run.sh /bin/run.sh
ENTRYPOINT [ "/bin/run.sh" ]
CMD ["nginx", "-g", "daemon off;"]
I have found a solution I am happy with for now.
Here is how I manage configs/secrets with Terraform / AWS ECS.
The basic idea is as follows: we store configuration files without sensitive data in the repository next to Terraform. Secrets are stored in the AWS Parameter Store. To get the data into our containers at runtime, we modify the entrypoint. We could of course just create modified images, but that creates a big maintenance overhead in my opinion. Using the entrypoint approach, we can keep using vanilla images.
The disadvantage is that I have to create custom entrypoint scripts. That means I have to find the Dockerfile of the image I'm interested in and extract the commands used to start the actual process running in the image.
I have a git repository like this:
├── files
│   └── promstack
│       ├── grafana
│       │   ├── default-datasources.yml
│       │   ├── grafana.ini
│       │   └── run.sh
│       ├── loki
│       │   └── run.sh
│       ├── nginx
│       │   ├── nginx.conf
│       │   └── run.sh
│       └── prometheus
│           ├── prometheus.yml
│           ├── rules-alerting.yml
│           ├── rules-recording.yml
│           └── run.sh
├── myscript.tf
└── variables.tf
The run.sh scripts represent the entrypoints. Here is an exemplary run.sh:
#!/bin/sh
set -x

require () {
  if [ ! "$1" ]; then
    echo "ERROR: var not found"
    exit 1
  fi
}

expand () {
  var_name="${1}"
  file="${2}"
  eval var="\$$var_name"
  sed -i "s+\${${var_name}}+${var}+g" ${file}
  sed -i "s+\$${var_name}+${var}+g" ${file}
}

require ${GRAFANA_INI}
require ${DEFAULT_DATASOURCES_YML}
require ${DOMAIN}

echo ${GRAFANA_INI} | base64 -d > /etc/grafana/grafana.ini
chmod 666 /etc/grafana/grafana.ini
expand DOMAIN /etc/grafana/grafana.ini

echo ${DEFAULT_DATASOURCES_YML} | base64 -d > /etc/grafana/provisioning/datasources/default.yml
chmod 666 /etc/grafana/provisioning/datasources/default.yml

su -s "/bin/sh" -c "/run.sh" grafana
Here is a part of a Terraform script (the ECS container task definition, to be exact):
{
  name: "grafana",
  image: "grafana/grafana:7.0.5",
  portMappings: [{
    containerPort: 3000,
    hostPort: 0,
    protocol: "tcp"
  }],
  user: "0",
  entryPoint: [ "/bin/sh", "-c", join(" ", [
    "export DEFAULT_DATASOURCES_YML=${base64encode(file("${path.module}/files/promstack/grafana/default-datasources.yml"))};",
    "export GRAFANA_INI=${base64encode(file("${path.module}/files/promstack/grafana/grafana.ini"))};",
    "echo '${base64encode(file("${path.module}/files/promstack/grafana/run.sh"))}' | base64 -d | sh;"
  ])],
  secrets: [
    {
      name: "DOMAIN",
      valueFrom: "<my ssm parameter>"
    }
  ]
},
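For completeness, the SSM parameter referenced by DOMAIN (step 1 of the workflow above) can itself be managed from the same Terraform code; a minimal sketch with a hypothetical parameter name:
resource "aws_ssm_parameter" "domain" {
  name  = "/promstack/domain" # hypothetical name
  type  = "SecureString"
  value = var.domain
}
The task definition's valueFrom can then reference aws_ssm_parameter.domain.arn instead of a hard-coded ARN.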

How to deploy to appengine flexible using bazel and google cloud deploy, using a non-containerized artifact?

I have a project structure like:
.
├── app
│   ├── BUILD
│   ├── entry.py
│   ├── forms.py
│   ├── __init__.py
│   ├── jinja_custom_filter.py
│   ├── models.py
│   ├── __pycache__
│   ├── static
│   ├── templates
│   ├── utils.py
│   └── views.py
├── app.db
├── app.yaml
├── BUILD
├── cloudbuild.yaml
├── config.py
├── __init__.py
├── LICENSE
├── manage.py
├── requirements.txt
├── run.py
└── WORKSPACE
4 directories, 20 files
Project uses flask, sqlalchemy (see further below)
How does one deploy to App Engine using Google Cloud Build with the non-containerized deployment option, i.e. just an artifact?
Here is my cloudbuild.yaml:
# In this directory, run the following command to build this builder.
# $ gcloud builds submit
steps:
# Fetch source.
#- name: "docker.io/library/python:3.6.8"
#  args: ['pip', 'install', '-t', '/workspace/lib', '-r', '/workspace/requirements.txt']
#- name: 'gcr.io/cloud-builders/git'
#  args: ['clone', '--single-branch', '--branch=develop', 'https://github.com/codecakes/<myproject>_gae.git', '<myproject>_gae']
# Build the Bazel builder and output the version we built with.
#- name: 'gcr.io/cloud-builders/docker'
#  args: ['build', '--tag=gcr.io/$PROJECT_ID/deploy:latest', '.']
# Build the targets.
#- name: 'gcr.io/$PROJECT_ID/bazel'
- name: 'gcr.io/cloud-builders/bazel'
  args: ['build', '--spawn_strategy=standalone', '//app:entry', '--copt', '--force_python=PY3', '--color=yes', '--curses=yes', '--jobs=10', '--loading_phase_threads=HOST_CPUS', '--aspects=@bazel_tools//tools/python:srcs_version.bzl%find_requirements', '--output_groups=pyversioninfo', '--verbose_failures']
  dir: '.'
- name: 'gcr.io/cloud-builders/bazel'
#  args: ['run', '--spawn_strategy=standalone', '//:run', '--copt', '--verbose_failures=true', '--show_timestamps=true', '--python_version=PY3', '--build_python_zip', '--sandbox_debug', '--color=yes', '--curses=yes', '--jobs=10', '--loading_phase_threads=HOST_CPUS', '--aspects=@bazel_tools//tools/python:srcs_version.bzl%find_requirements', '--output_groups=pyversioninfo']
  args: ['build', '--spawn_strategy=standalone', ':run', '--copt', '--aspects=@bazel_tools//tools/python:srcs_version.bzl%find_requirements', '--verbose_failures=true', '--show_timestamps=true', '--python_version=PY3', '--build_python_zip', '--sandbox_debug', '--color=yes', '--curses=yes', '--jobs=10', '--loading_phase_threads=HOST_CPUS']
  dir: '.'
artifacts:
  objects:
    location: 'gs://<myproject>/'
    paths: ['cloudbuild.yaml']
#images: ['gcr.io/$PROJECT_ID/deploy:latest']
I then do
sudo gcloud builds submit --config cloudbuild.yaml ./
which gives me
ID CREATE_TIME DURATION SOURCE IMAGES STATUS
d4dfd7dd-0f77-49d1-ac4c-4e3a1c84e3ea 2019-05-04T07:52:13+00:00 1M32S gs://<myproject>_cloudbuild/source/1556956326.46-3f4abd9a558440d8ba669b3d55248de6.tgz - SUCCESS
Then I do
sudo gcloud app deploy app.yaml --user-output-enabled --account=<myemail>
which gives me:
Updating service [default] (this may take several minutes)...failed.
ERROR: (gcloud.app.deploy) Error Response: [3] Docker image asia.gcr.io/<myproject>/appengine/default.20190502t044929:latest was either not found, or is not in Docker V2 format. Please visit https://cloud.google.com/container-registry/docs/ui
My WORKSPACE file is:
load("#bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")
load("#bazel_tools//tools/build_defs/repo:git.bzl", "git_repository")
# Use the git repository
git_repository(
name = "bazel_for_gcloud_python",
remote = "https://github.com/weisi/bazel_for_gcloud_python.git",
branch="master",
)
# https://github.com/bazelbuild/rules_python
git_repository(
name = "io_bazel_rules_python",
remote = "https://github.com/bazelbuild/rules_python.git",
# NOT VALID! Replace this with a Git commit SHA.
branch= "master",
)
# Only needed for PIP support:
load("#io_bazel_rules_python//python:pip.bzl", "pip_repositories")
pip_repositories()
load("#io_bazel_rules_python//python:pip.bzl", "pip_import")
# This rule translates the specified requirements.txt into
# #my_deps//:requirements.bzl, which itself exposes a pip_install method.
pip_import(
name = "my_deps",
requirements = "//:requirements.txt",
)
# Load the pip_install symbol for my_deps, and create the dependencies'
# repositories.
load("#my_deps//:requirements.bzl", "pip_install")
pip_install()
And app.yaml:
runtime: custom
#vm: true
env: flex
entrypoint: gunicorn -b :$PORT run:app

runtime_config:
  # You can also specify 2 for Python 2.7
  python_version: 3

handlers:
- url: /$
  secure: always
  script: auto
- url: /.*$
  static_dir: app/static
  secure: always

# This sample incurs costs to run on the App Engine flexible environment.
# The settings below are to reduce costs during testing and are not appropriate
# for production use. For more information, see:
# https://cloud.google.com/appengine/docs/flexible/python/configuring-your-app-with-app-yaml
manual_scaling:
  instances: 1
resources:
  cpu: 1
  memory_gb: 0.5
  disk_size_gb: 10
And the packages in requirements.txt:
alembic==0.8.4
Babel==2.2.0
blinker==1.4
coverage==4.0.3
decorator==4.0.6
defusedxml==0.4.1
Flask==0.10.1
Flask-Babel==0.9
Flask-Login==0.3.2
Flask-Mail==0.9.1
Flask-Migrate==1.7.0
Flask-OpenID==1.2.5
flask-paginate==0.4.1
Flask-Script==2.0.5
Flask-SQLAlchemy==2.1
Flask-WhooshAlchemy==0.56
Flask-WTF==0.12
flipflop==1.0
future==0.15.2
guess-language==0.2
gunicorn==19.9.0
itsdangerous==0.24
Jinja2==2.8
Mako==1.0.3
MarkupSafe==0.23
pbr==1.8.1
pefile==2016.3.28
PyInstaller==3.2
python-editor==0.5
python3-openid==3.0.9
pytz==2015.7
six==1.10.0
speaklater==1.3
SQLAlchemy==1.0.11
sqlalchemy-migrate==0.10.0
sqlparse==0.1.18
Tempita==0.5.2
Werkzeug==0.11.3
Whoosh==2.7.0
WTForms==2.1
As I understand it after talking to Google Cloud developers, you have to use a Dockerfile, because Cloud Build builds using containers in order to push your image to App Engine.
See my own workaround.
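For reference, a minimal Dockerfile for this flex service could look like the sketch below; it sidesteps Bazel entirely and just installs requirements.txt, reusing the gunicorn entrypoint from app.yaml (the base image tag and port handling are assumptions):
FROM python:3.6
WORKDIR /app
COPY . /app
RUN pip install -r requirements.txt
# App Engine flexible sets $PORT (8080 by default) for the container.
CMD exec gunicorn -b :$PORT run:app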
