Not able to execute block in an Ansible YAML playbook - Linux

I am using the block keyword in the Ansible playbook below. I want Play 2 and Play 3 to run only if files exist, but for some reason I get an error when I execute the playbook.
---
- name: Play 1
  hosts: 127.0.0.1
  tasks:
    - name: find the latest file
      find: paths=/var/lib/jenkins/jobs/process/workspace/files
            file_type=file
            age=-1m
            age_stamp=mtime
      register: files
    - name: Play 2 & 3 if Play 1 has a file
      block:
        - name: Play 2
          hosts: all
          serial: 5
          tasks:
            - name: copy latest file
              copy: src=data_init/goldy.init.qa dest=/data01/admin/files/goldy.init.qa
            - name: copy latest file
              copy: src=data_init/goldy.init.qa dest=/data02/admin/files/goldy.init.qa
        - name: Play 3
          hosts: 127.0.0.1
          tasks:
            - name: execute command
              shell: ./data_init --init_file ./goldy.init.qa
      when: files != ""
Below is the error. Any idea what I am doing wrong here?
ERROR! no action detected in task. This often indicates a misspelled module name, or incorrect module path.
The error appears to have been in '/var/lib/jenkins/jobs/process/workspace/test.yml': line 14, column 9, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
block:
- name: Play 2
^ here

I think the confusion here stems from the mismatch of play and block. Ansible playbooks may contain one or many plays; a play is a top-level structure in a playbook (remember, playbooks are just YAML, so it is all effectively a data structure). Blocks come in when you want to combine a series of tasks into a unit you can take group action on, such as applying a shared conditional, but also for error catching and recovery. Blocks are part of a play and can be put almost anywhere a task can. In the syntax you've written, however, new plays are nested inside other plays, which is not allowed. Hope this helps, happy automating!
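To make the distinction concrete, here is a minimal sketch (the hosts, task names, and condition variable are hypothetical) of a block living inside a single play's task list, carrying a shared when and a rescue section:

```yaml
---
- name: One play containing a block
  hosts: all
  tasks:
    - name: grouped tasks
      block:
        - name: first task
          debug:
            msg: "runs only if the shared condition holds"
        - name: second task
          debug:
            msg: "the same condition applies here"
      rescue:
        - name: error recovery
          debug:
            msg: "runs if any task in the block fails"
      when: some_condition | default(false)
```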

There are several things wrong in this, and I assume you're new to Ansible. Your structure is wrong: plays cannot be nested inside a block (and naming a block requires Ansible 2.3 or later). Also, files is registered as a dictionary, so comparing it to an empty string does not tell you whether anything was found; check its matched count instead. Try:
---
- name: Play 1
  hosts: 127.0.0.1
  tasks:
    - name: find the latest file
      find:
        paths: /var/lib/jenkins/jobs/process/workspace/files
        file_type: file
        age: -1m
        age_stamp: mtime
      register: files
    - debug:
        msg: "{{ files }}"
      when: files.matched > 0
    - block:
        - name: copy latest file
          copy: src=data_init/goldy.init.qa dest=/data01/admin/files/goldy.init.qa
        - name: copy latest file
          copy: src=data_init/goldy.init.qa dest=/data02/admin/files/goldy.init.qa
        - name: execute command
          shell: ./data_init --init_file ./goldy.init.qa
      when: files.matched > 0
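Note that merging everything into one play drops the hosts: all / serial: 5 behaviour the question's Play 2 asked for. If the copies really must run on all hosts while the find runs only on the controller, one sketch (assuming the first play registered files on 127.0.0.1) is to keep separate plays and read the registered result across plays through hostvars:

```yaml
- name: Play 2
  hosts: all
  serial: 5
  tasks:
    - name: copy latest file
      copy: src=data_init/goldy.init.qa dest=/data01/admin/files/goldy.init.qa
      when: hostvars['127.0.0.1'].files.matched > 0
```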

Related

Argo workflow template execution failing when I use when condition with withParam loop along with the artifact input

I have the following workflow template, which has the when condition when: "'{{item}}' =~ '^tests/'", an artifact input (a file path in an AWS S3 bucket), and a withParam loop.
Here is my workflow template:
apiVersion: argoproj.io/v1alpha1
kind: WorkflowTemplate
metadata:
  name: process-wft
spec:
  entrypoint: main
  templates:
    - name: main
      inputs:
        parameters:
          - name: dir-process
            default: true
          - name: dir
        artifacts:
          - name: Code
      dag:
        tasks:
          - name: process-wft-tests
            when: "'{{item}}' =~ '^tests/'"
            templateRef:
              name: tf-wf-rn
              template: main
            arguments:
              parameters:
                - name: dir-process
                  value: "{{inputs.parameters.dir-process}}"
              artifacts:
                - name: Code
                  from: "{{inputs.artifacts.Code}}"
            withParam: "{{inputs.parameters.dir}}"
Here is the extracted result of my input artifact Code, which is passed in from my workflow:
inputs:
  artifacts:
    - archive:
        tar:
          compressionLevel: 9
      archiveLogs: true
      globalName: GitSource
      name: Code
      path: /mnt/out/code
      s3:
        key: process-kfxqf/process-kfxqf-1938174407/GitSource.tgz
It gives the below error when I run my workflow:
message: failed to resolve {{inputs.artifacts.Code}}
What mistake am I making here? And if this approach cannot work, what is an alternative way to get it working?
Note: if I remove the when condition, the workflow runs fine. The issue appears only when I add the when condition.
This is a bug.
It seems that when the when condition evaluates to false, some code skips populating the artifact (which makes sense, to save some time), but other code does not respect the when condition and still expects the artifact to be populated.
Potential workarounds:
1. Move the conditional logic into the container: remove the when condition, pass the dir parameter to the main template in your tf-wf-rn WorkflowTemplate, and change that main template to run the regex against the dir parameter; if it doesn't match, just exit 0. This could make the workflow much slower, because you'll have to spin up a pod for each iteration of the loop just to determine whether there is any work to be done.
2. If you can calculate all the information about the artifact up front, pass that information as parameters to the main template in your tf-wf-rn WorkflowTemplate, then actually load the artifact in that non-conditioned, non-looped template. (Basically, hopscotch over the problematic code.)
3. Try an older version. If you find a working older version, please 1) comment on the bug report and 2) make sure the older version doesn't have any relevant security vulnerabilities before running it on a production system.
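The first workaround could look roughly like the hypothetical sketch below: the main template of tf-wf-rn receives dir as a plain parameter, runs the regex check itself, and exits 0 when there is nothing to do. The image and shell commands are illustrative assumptions, not your actual template:

```yaml
- name: main
  inputs:
    parameters:
      - name: dir
  container:
    image: alpine:3.18
    command: [sh, -c]
    args:
      - |
        case "{{inputs.parameters.dir}}" in
          tests/*) echo "processing {{inputs.parameters.dir}}" ;;
          *) echo "nothing to do"; exit 0 ;;
        esac
```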

Ansible: How to add Linux module command path (opkg not in PATH)

My Linux box (BusyBox) mostly uses read-only filesystems. I have the option to install my programs under a different path, e.g. PATH=/opt/bin:/opt/sbin. The package manager also sits in this folder (executable name: /opt/bin/opkg).
When I try to use the Ansible opkg module I get the following error:
"Failed to find required executable opkg in paths: /bin:/usr/bin:/bin:/usr/sbin:/sbin"
Question: how can I tell Ansible to look for the opkg executable in a different path?
Any ideas are welcome!
Thank you!
I found some useful links:
https://docs.ansible.com/ansible/latest/reference_appendices/faq.html
Ansible - Set environment path as inventory variable
And here is my example:
---
- hosts: CBOX-0001
  gather_facts: True
  gather_subset:
    - "!all"
  environment:
    PATH: "/opt/bin:/opt/sbin:/usr/bin:/usr/sbin:{{ ansible_env.PATH }}"
  collections:
    - community.general
  tasks:
    - name: "install opkg packages"
      opkg:
        name: "{{ item }}"
        state: present
      with_items:
        - screen
        - mc
        - rclone

Ansible return code error: 'dict object' has no attribute 'rc'

I am using Ansible to automate the installation, configuration, and deployment of an application server which uses JBoss, so I need to use the built-in jboss-cli to deploy packages.
This Ansible task is literally the last stage to run; it simply needs to check whether a deployment already exists and, if it does, undeploy and redeploy it (to be idempotent).
Running the commands below manually on the server and checking the return code after each one works as expected, but something, somewhere in Ansible refuses to read the return codes correctly!
# BLAZE RMA DEPLOYMENT
- name: Check if Blaze RMA has been assigned to dm-server-group from a previous Ansible run
  shell: "./jboss-cli.sh --connect '/server-group=dm-server-group/deployment={{ blaze_deployment_version }}:read-resource()' | grep -q success"
  args:
    chdir: "{{ jboss_home_dir }}/bin"
  register: blaze_deployment_status
  failed_when: blaze_deployment_status.rc == 2
  tags: # We always need to check this, as the output determines whether or not we need to undeploy an existing deployment.
    - skip_ansible_lint

- name: Undeploy Blaze RMA if it has already been assigned to dm-server-group from a previous Ansible run
  command: "./jboss-cli.sh --connect 'undeploy {{ blaze_deployment_version }} --all-relevant-server-groups'"
  args:
    chdir: "{{ jboss_home_dir }}/bin"
  when: blaze_deployment_status.rc == 0
  register: blaze_undeployment_status

- name: Deploy Blaze RMA once successfully undeployed
  command: "./jboss-cli.sh --connect 'deploy {{ jboss_deployment_dir }}/{{ blaze_deployment_version }} --server-groups=dm-server-group'"
  args:
    chdir: "{{ jboss_home_dir }}/bin"
  when: blaze_undeployment_status.rc == 0 or blaze_deployment_status.rc == 1
Any advice would be appreciated.
Your second task contains a when clause. If the task is skipped, Ansible still registers the variable, but the registered data has no rc attribute.
You need to take this into account when using the variable in the next task. The following condition on the last task should fix your issue:
when: blaze_undeployment_status.rc | default('') == 0 or blaze_deployment_status.rc == 1
The same situation can also arise for the author when running ansible with --check.
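An equivalent approach, instead of defaulting the missing attribute, is to test whether the previous task actually ran; a sketch of the last task using the skipped test:

```yaml
- name: Deploy Blaze RMA once successfully undeployed
  command: "./jboss-cli.sh --connect 'deploy {{ jboss_deployment_dir }}/{{ blaze_deployment_version }} --server-groups=dm-server-group'"
  args:
    chdir: "{{ jboss_home_dir }}/bin"
  when: (blaze_undeployment_status is not skipped and blaze_undeployment_status.rc == 0)
        or blaze_deployment_status.rc == 1
```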

Ansible task write to local log file

Using Ansible, I would like to write the stdout of a task running a command to a local (i.e. on the managed server) log file.
For the moment I can only do this using a task like this:
- name: Run my command
  shell: <command> <arg1> <arg3> ... | tee -a <local log file>
The reason is that the command takes a long time to complete (i.e. we cannot wait until it finishes to get its output), and we would like to collect the output during its execution.
Is there any "Ansible" way to redirect the stdout of the command to a local log file during its execution, without using the tee pipe?
This doesn't 100% answer your question, as you won't get a constantly updating file on your managed server, but you could use async tasks:
# Requires ansible 1.8+
- name: 'YUM - async task'
  yum:
    name: docker-io
    state: installed
  async: 1000
  poll: 0
  register: yum_sleeper

- name: 'YUM - check on async task'
  async_status:
    jid: "{{ yum_sleeper.ansible_job_id }}"
  register: job_result
  until: job_result.finished
  retries: 30
and then dump the output of the finished job to a file with (note that with poll: 0 the stdout lives in the async_status result, not in yum_sleeper):
- name: save log locally
  copy:
    content: '{{ job_result.stdout }}'
    dest: file.log
  delegate_to: localhost
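If the goal is simply a log file on the managed server that fills up while the command runs, another option is to redirect inside the shell task itself and fire it asynchronously. A sketch, keeping the question's placeholders; the log path is an assumption of mine:

```yaml
- name: run command, appending output to a log on the managed host
  shell: "<command> <arg1> <arg3> ... >> /var/log/mycommand.log 2>&1"
  async: 3600
  poll: 0
  register: long_job
```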

Did not find expected key while parsing a block mapping

I am running an Ansible playbook but getting the error below (Ansible 2.7.6, Ubuntu 16.04):
(<unknown>): did not find expected key while parsing a block mapping at line 6 column 3
I tried without become: yes, the ubuntu user, and sudo, but I still get the same issue, and Ansible says:
The offending line appears to be:
  - name: build npm
    ^ here
- hosts: all
  vars:
    app_dir: /home/ubuntu/app/backend-app-name
  tasks:
    - name: build npm
      command: "chdir={{ app_dir }} {{ item }}"
      with_items:
        - /usr/bin/npm run build
        become: yes
        become_user: ubuntu
        become_method: sudo
The indentation is wrong. The correct syntax is:
tasks:
  - name: build npm
    command: ...
    with_items:
      - /usr/bin/npm run build
    become: yes
    become_user: ubuntu
    become_method: sudo
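Applied to the playbook in the question, the whole file would then look like this (same hosts and variables as the question; only the indentation of the become keys changes):

```yaml
---
- hosts: all
  vars:
    app_dir: /home/ubuntu/app/backend-app-name
  tasks:
    - name: build npm
      command: "chdir={{ app_dir }} {{ item }}"
      with_items:
        - /usr/bin/npm run build
      become: yes
      become_user: ubuntu
      become_method: sudo
```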
I received this same error ("while parsing a block mapping, did not find expected key") when there was an extra single quote in a YAML task:
- task: DotNetCoreCLI@2
  inputs:
    command: 'pack'
    packagesToPack: '**/DoesNotMatter/OL.csproj'
    #...
    versionEnvVar: 'PACKAGEVERSION''
See the last (extra ') character of the code sample.
Removed trailing whitespace
I had a similar issue when RuboCop parsed a YAML file:
› ruby_koans (mark) rubocop --auto-gen-config
(.rubocop.yml): did not find expected key while parsing a block mapping at line 1 column 1
Removing the trailing whitespace fixed it (in VS Code, using the "Trim Trailing Whitespace" setting):
› ruby_koans (mark) rubocop --auto-gen-config
Added inheritance from `.rubocop_todo.yml` in `.rubocop.yml`.
Phase 1 of 2: run Layout/LineLength cop
Inspecting 42 files
