I am running an Ansible playbook and getting the error below (Ansible 2.7.6, Ubuntu 16.04).
(<unknown>): did not find expected key while parsing a block mapping at line 6 column 3
I also tried it without become: yes, become_user: ubuntu and become_method: sudo, but I get the same issue, with Ansible saying:
The offending line appears to be:
- name: build npm
^ here
In the playbook I have:
- hosts: all
  vars:
    app_dir: /home/ubuntu/app/backend-app-name
  tasks:
   - name: build npm
    command: "chdir={{ app_dir }} {{ item }}"
    with_items:
      - /usr/bin/npm run build
    become: yes
    become_user: ubuntu
    become_method: sudo
Your indentation is wrong. The correct syntax is:
tasks:
  - name: build npm
    command: ...
    with_items:
      - /usr/bin/npm run build
    become: yes
    become_user: ubuntu
    become_method: sudo
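A quick way to catch this kind of YAML problem before running anything is to let Ansible parse the playbook without executing it (the file name here is just a placeholder for yours):

ansible-playbook --syntax-check playbook.yml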
I received this same error when there was an extra single-quote in the YAML task.
While parsing a block mapping, did not find expected key.
- task: DotNetCoreCLI@2
  inputs:
    command: 'pack'
    packagesToPack: '**/DoesNotMatter/OL.csproj'
    #...
    versionEnvVar: 'PACKAGEVERSION''
Note the extra trailing ' character on the last line of the code sample.
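With the stray quote removed, the line parses fine:

versionEnvVar: 'PACKAGEVERSION'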
Removed Trailing Whitespace
I had a similar issue when RuboCop parsed the YAML file.
› ruby_koans (mark) rubocop --auto-gen-config
(.rubocop.yml): did not find expected key while parsing a block mapping at line 1 column 1
Removed the trailing whitespace (in VS Code, using the "Trim Trailing Whitespace" setting).
› ruby_koans (mark) rubocop --auto-gen-config
Added inheritance from `.rubocop_todo.yml` in `.rubocop.yml`.
Phase 1 of 2: run Layout/LineLength cop
Inspecting 42 files
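For reference, the inheritance that --auto-gen-config adds to the top of .rubocop.yml is just:

inherit_from: .rubocop_todo.yml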
I am using Ansible to automate the installation, configuration and deployment of an application server that uses JBoss, so I need to use the built-in jboss-cli to deploy packages.
This Ansible task is literally the last stage to run. It simply needs to check whether a deployment already exists and, if it does, undeploy it and redeploy it (to be idempotent).
Running the commands below manually on the server and checking the return code after each one works as expected; something, somewhere in Ansible refuses to read the return codes correctly!
# BLAZE RMA DEPLOYMENT
- name: Check if Blaze RMA has been assigned to dm-server-group from a previous Ansible run
  shell: "./jboss-cli.sh --connect '/server-group=dm-server-group/deployment={{ blaze_deployment_version }}:read-resource()' | grep -q success"
  args:
    chdir: "{{ jboss_home_dir }}/bin"
  register: blaze_deployment_status
  failed_when: blaze_deployment_status.rc == 2
  tags: # We always need to check this as the output determines whether or not we need to undeploy an existing deployment.
    - skip_ansible_lint

- name: Undeploy Blaze RMA if it has already been assigned to dm-server-group from a previous Ansible run
  command: "./jboss-cli.sh --connect 'undeploy {{ blaze_deployment_version }} --all-relevant-server-groups'"
  args:
    chdir: "{{ jboss_home_dir }}/bin"
  when: blaze_deployment_status.rc == 0
  register: blaze_undeployment_status

- name: Deploy Blaze RMA once successfully undeployed
  command: "./jboss-cli.sh --connect 'deploy {{ jboss_deployment_dir }}/{{ blaze_deployment_version }} --server-groups=dm-server-group'"
  args:
    chdir: "{{ jboss_home_dir }}/bin"
  when: blaze_undeployment_status.rc == 0 or blaze_deployment_status.rc == 1
Any advice would be appreciated.
Your second task contains a when clause. If it is skipped, Ansible still registers the variable, but there is no rc attribute in the data.
You need to take this into consideration when using the variable in the next task. The following condition on the last task should fix your issue:
when: blaze_undeployment_status.rc | default('') == 0 or blaze_deployment_status.rc == 1
The same situation can also come up when you run ansible with --check, because command and shell tasks are normally skipped in check mode.
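An alternative, if you prefer not to default the missing rc, is to test whether the registered task actually ran; a minimal sketch of the last task using Ansible's skipped test:

- name: Deploy Blaze RMA once successfully undeployed
  command: "./jboss-cli.sh --connect 'deploy {{ jboss_deployment_dir }}/{{ blaze_deployment_version }} --server-groups=dm-server-group'"
  args:
    chdir: "{{ jboss_home_dir }}/bin"
  when: (blaze_undeployment_status is not skipped and blaze_undeployment_status.rc == 0) or blaze_deployment_status.rc == 1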
I am using a block in the Ansible playbook below. Basically, I want to execute Play 2 and Play 3 only if files exist, but for some reason I get an error when I execute the playbook.
---
- name: Play 1
  hosts: 127.0.0.1
  tasks:
    - name: find the latest file
      find: paths=/var/lib/jenkins/jobs/process/workspace/files
            file_type=file
            age=-1m
            age_stamp=mtime
      register: files
    - name: Play 2 & 3 if Play 1 has a file
      block:
        - name: Play 2
          hosts: all
          serial: 5
          tasks:
            - name: copy latest file
              copy: src=data_init/goldy.init.qa dest=/data01/admin/files/goldy.init.qa
            - name: copy latest file
              copy: src=data_init/goldy.init.qa dest=/data02/admin/files/goldy.init.qa
        - name: Play 3
          hosts: 127.0.0.1
          tasks:
            - name: execute command
              shell: ./data_init --init_file ./goldy.init.qa
      when: files != ""
Below is the error. Any idea what I am doing wrong here?
ERROR! no action detected in task. This often indicates a misspelled module name, or incorrect module path.
The error appears to have been in '/var/lib/jenkins/jobs/process/workspace/test.yml': line 14, column 9, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
block:
- name: Play 2
^ here
I think the confusion here stems from the mismatch of Play and Block. Ansible playbooks may contain one or many Plays; a Play is a top-order structure in a Playbook (remember, Playbooks are just YAML, so it is all effectively a data structure). Blocks come in when you want to combine a series of tasks, effectively as a unit that you can take group action on, such as conditionals, but also for error catching and recovery. Blocks are part of a Play and can be put almost anywhere a task can. However, in your syntax you have defined new Plays nested within others, which is not allowed. Hope this helps, happy automating!
There are several things wrong in this, and I assume you're new to Ansible. You cannot put a name on a block. Your structure is also wrong, and files is not defined. Try:
---
- name: Play 1
  hosts: 127.0.0.1
  tasks:
    - name: find the latest file
      find: paths=/var/lib/jenkins/jobs/process/workspace/files
            file_type=file
            age=-1m
            age_stamp=mtime
      register: files
    - debug:
        msg: "{{ files }}"
      when: files != ""
    - block:
        - name: copy latest file
          copy: src=data_init/goldy.init.qa dest=/data01/admin/files/goldy.init.qa
        - name: copy latest file
          copy: src=data_init/goldy.init.qa dest=/data02/admin/files/goldy.init.qa
        - name: execute command
          shell: ./data_init --init_file ./goldy.init.qa
      when: files != ""
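A further refinement you might prefer: find registers a dictionary rather than a string, so comparing its matched count states the intent more directly than files != ""; a minimal sketch of the condition:

- block:
    - name: copy latest file
      copy: src=data_init/goldy.init.qa dest=/data01/admin/files/goldy.init.qa
  when: files.matched > 0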
I'm running a playbook which defines several packages to install via apt:
- name: Install utility packages common to all hosts
  apt:
    name: "{{ item }}"
    state: present
    autoclean: yes
  with_items:
    - aptitude
    - jq
    - curl
    - git-core
    - at
    ...
A recent Ansible update on my system now produces this message about the playbook above:
[DEPRECATION WARNING]: Invoking "apt" only once while using a loop via squash_actions is deprecated. Instead of
using a loop to supply multiple items and specifying `name: {{ item }}`, please use `name: [u'aptitude',
u'jq', u'curl', u'git-core', u'at', u'heirloom-mailx', u'sudo-ldap', u'sysstat', u'vim', u'at', u'ntp',
u'stunnel', u'sysstat', u'arping', u'net-tools', u'lshw', u'screen', u'tmux', u'lsscsi']` and remove the loop.
If I'm understanding this correctly, Ansible now wants the list of packages as an array, which leaves this:
name: [u'aptitude', u'jq', u'curl', u'git-core', u'at','heirloom-mailx', u'sudo-ldap', u'sysstat', u'vim', u'at', u'ntp',u'stunnel', u'sysstat', u'arping', u'net-tools', u'lshw', u'screen', u'tmux', u'lsscsi']
Is there a better way? Just seems like I'll be scrolling right forever in VIM trying to maintain this. Either that, or word wrap it and deal with a word-cloud of packages.
You can code the array in YAML style to make it more readable:
- name: Install utility packages common to all hosts
  apt:
    name:
      - aptitude
      - jq
      - curl
      - git-core
      - at
    state: present
    autoclean: yes
I had this same question, and it looks like each set of packages with the same state will have to be its own block. Looking at Ansible's documentation, they have a block for each state as an example, so I took that example, cut up my packages based on their states, followed Ignacio's example, and it ended up working perfectly.
So basically it would look like this:
- name: Install packages required for log-deployment
  apt:
    name:
      - gcc
      - python-devel
    state: latest
    autoclean: yes

- name: Install packages required for log-deployment
  apt:
    name:
      - python
      - mariadb
      - mysql-devel
    state: installed
Hope that makes sense and helps!
I came across this exact same problem, but with a much longer list of apps, held in a vars file. This is the code I implemented to get around that problem. The list of apps is placed into the "apps" variable and Ansible iterates over that.
- name: Install default applications
  apt:
    name: "{{ item }}"
    state: latest
  loop: "{{ apps }}"
  when: ansible_distribution == 'Ubuntu' or ansible_distribution == 'Debian'
  tags:
    - instapps
The file holding the list of apps to install is in the Defaults directory in the role directory for this task - namely the "common" role directory.
roles
  - common
    - Defaults
      - main.yml
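A minimal sketch of what that vars file could contain, assuming the standard lowercase roles/common/defaults/main.yml path (the package names here are only placeholders):

# roles/common/defaults/main.yml
apps:
  - aptitude
  - jq
  - curl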
For the record, you can also write it this way:
- name: Install utility packages common to all hosts
  apt:
    state: present
    autoclean: yes
    pkg: [
      "aptitude",
      "jq",
      "curl",
      "git-core",
      "at",
    ]
However, if you are upgrading existing roles to the new apt requirements, Ignacio's accepted answer is far better, as all you need to do is add some indentation to the already existing entries.
I need to install a number of packages on Linux boxes. A few of the packages may be missing for various reasons (OS version, essentially):
- vars:
    pkgs:
      - there_1
      - not_there_1
      - there_2
      ...
but I would like to manage them from a single playbook. So I cannot stick them all in a single
yum: state=latest name="{{pkgs}}"
because missing packages would mess up the transaction so that nothing gets installed.
However, the obvious (and slow) one-by-one install also fails, because the first missing package blows the entire loop out of the water:
- name: Packages after not_there_1 are not installed
  yum: state=latest name="{{item}}"
  ignore_errors: yes
  with_items: "{{ pkgs }}"
Is there a way to ignore errors within a loop so that all items are given a chance (i.e., install errors behave like a continue in the loop)?
If you need to loop over a set of tasks as a unit, it would be -so- nice if we could use with_items on an error-handling block, right?
Until that feature comes around, you can accomplish the same thing with include_tasks and with_items. Doing this should allow a block to handle failed packages, or you could even include some checks and package installs in the sub-tasks if you wanted.
First, set up a Sub-Tasks.yml to contain your install tasks:
Sub-Tasks.yml
- name: Install package and handle errors
  block:
    - name: Install package
      yum: state=latest name="{{ package_name }}"
  rescue:
    - debug:
        msg: "I caught an error with {{ package_name }}"
Then your playbook will set up a loop over these tasks:
- name: Install all packages ignoring errors
  include_tasks: Sub-Tasks.yml
  vars:
    package_name: "{{ item }}"
  with_items:
    - "{{ pkgs }}"
I'm trying to make a playbook that installs Halyard on a remote host using ansible-playbook. One of the tasks is to execute the "InstallHalyard.sh" wrapper script for the actual installer (reference: https://www.spinnaker.io/setup/install/halyard/).
No error or failure is reported, but also nothing changes, as if the InstallHalyard.sh script were not executed. It works fine if I run it manually, though. The other, similar task works perfectly.
You can see the InstallHalyard.sh script implementation here.
Any idea what is happening?
Here is the task in my playbook:
- name: Run InstallHalyard.sh
  become: yes
  become_user: sudo
  shell: "{{ ansible_env.HOME }}/.spinnaker/InstallHalyard.sh"
Any help would be appreciated, thank you very much :)
EDIT:
I already tried using the script, command, and shell modules.
FYI, the InstallHalyard.sh script will call itself, passing an env variable, and needs to do a curl -O.
I suspect the "export" operation inside the script doesn't work, as Ansible has a different understanding of environment vars. (For example, Ansible will not recognize "$HOME"; instead it uses "{{ ansible_env.HOME }}".)
EDIT:
I found that the script does an operation that forks; does Ansible handle this kind of operation?
Tested on localhost, got the same result.
SOLUTION
It is because of Ansible's different interpretation of environment vars. If I execute the script manually, the wrapper sets the environment variable and passes it on to the next script calls, which Ansible is not able to do. So what I do is set the environment variable explicitly in the task before executing the script (just adding two lines).
Here is my revised task:
- name: Run InstallHalyard.sh
  become: yes
  become_user: sudo
  environment:
    HAL_USER: "{{ ansible_env.USER }}"
  shell: "{{ ansible_env.HOME }}/.spinnaker/InstallHalyard.sh"
The question is really: how do I debug the output of a failing command? I'm guessing the command is failing with a useful error message, but that message is not being displayed anywhere.
Here is an example to help debug this failing script:
- name: Run InstallHalyard.sh
  become: yes
  become_user: sudo
  shell: "{{ ansible_env.HOME }}/.spinnaker/InstallHalyard.sh"
  register: output

- debug: msg="{{ output.stdout }}"
- debug: msg="{{ output.stderr }}"
Try that and see what error message it was giving you.
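If the script task itself fails, the debug tasks after it will not run unless you let the play continue, and it can be handy to dump the whole registered result rather than picking out individual fields; a small sketch:

- name: Run InstallHalyard.sh
  shell: "{{ ansible_env.HOME }}/.spinnaker/InstallHalyard.sh"
  register: output
  ignore_errors: yes   # let the play continue so the debug below still runs

- debug:
    var: output   # prints rc, stdout, stderr and the rest of the result

Running ansible-playbook with -vvv also prints the full module result for each task.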