I have an API server that may trigger multiple node-ansible runs simultaneously, each connecting to a remote machine to do something.
Here's the node.js code:
// app.js
const Ansible = require('node-ansible')

let ansibleNum = 10
for (let i = 0; i < ansibleNum; i += 1) {
  let command = new Ansible.Playbook().playbook('test')
  command.inventory('hosts')
  command.exec()
    .then(successResult => {
      console.log(successResult)
    })
    .catch(err => {
      console.log(err)
    })
}
And the ansible playbook:
# test.yml
---
- hosts: all
  remote_user: ubuntu
  become: true
  tasks:
    - name: Test Ansible
      shell: echo hello
      register: result # store the result in a variable called "result"
    - debug: var=result.stdout_lines
As ansibleNum increases, the probability that a playbook run fails also increases.
The failure message is:
fatal: [10.50.123.123]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Shared connection to 10.50.123.123 closed.\r\n", "unreachable": true}
I've read another similar question here, but the solutions it provides don't work in my case.
Another way to trigger the problem is to run two playbooks in parallel from the shell, without node.js:
ansible-playbook -i hosts test.yml & ansible-playbook -i hosts test.yml
I've pushed the code to GitHub; you can download it directly.
Does anyone know why the shared connection got closed?
I've set the ControlMaster argument to auto by following the documentation here.
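For reference, that setting lives under the [ssh_connection] section of ansible.cfg; a typical multiplexing configuration (values here are illustrative, per the Ansible docs) looks like:
[ssh_connection]
ssh_args = -o ControlMaster=auto -o ControlPersist=60s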
It's strange that setting the connection type to paramiko solves my problem.
Here's the config file located in ~/.ansible.cfg:
[defaults]
transport = paramiko
Based on this document, it seems that paramiko doesn't support persistent connections.
I'm still confused about why this setting solves my problem.
Related
I am trying to use the azure_rm plugin in Ansible to generate dynamic hosts on the Azure platform. With a keyed-group conditional, I am able to make it work successfully with an Ansible ad-hoc command. However, it does not work when I try to pass the same group to ansible-playbook. Can anyone please help with how to run an ansible-playbook the same way?
Below is my dynamic inventory generation file:
---
plugin: azure_rm
auth_source: msi
keyed_groups:
  - prefix: tag
    key: tags
When I use the file to ping the target VM, I get the success response below.
Command used:
ansible -m ping tag_my_devops_ansible_slave -i dynamic_inventory_azure_rm.yml
Response:
devops-eastus2-dev-ansibleslave-vm_2f44 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "ping": "pong"
}
However, when I use the same inventory with ansible-playbook, I get the error below.
Command used:
ansible-playbook tag_cdo_devops_ansible_slave -i dynamic_inventory_azure_rm.yml test-playbook.yml
Error:
ansible-playbook: error: unrecognized arguments: test-playbook.yml
Can anyone please help with how to execute an ansible-playbook for the above use case?
The ansible-playbook command does not accept a list of targets on the command line; instead, the playbook file has hosts: as a top-level key indicating the hosts to which the playbook will apply.
So, if the playbook is always going to be used with that tag, you can just indicate that in the playbook:
- hosts: tag_cdo_devops_ansible_slave
  tasks:
    - debug: var=ansible_host
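With the tag baked into hosts: like this, no target argument is needed on the command line, so the run is simply (reusing the inventory file from the question):
ansible-playbook -i dynamic_inventory_azure_rm.yml test-playbook.yml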
It also appears that hosts: does honor jinja2 templating, so you can achieve what you're trying to do via:
- hosts: '{{ azure_playbook_hosts }}'
  tasks:
    - debug: var=ansible_host
and then run:
ansible-playbook -e azure_playbook_hosts=tag_cdo_devops_ansible_slave -i dynamic_inventory_azure_rm.yml test-playbook.yml
Or you can create a dedicated inventory file that only returns hosts matching your desired tag, and then use -i for that inventory along with hosts: all in the playbook file.
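For that last option, here is a minimal sketch of such a dedicated inventory file, assuming the azure_rm plugin's exclude_host_filters option and a hypothetical tag key (adjust the expression to your actual tags):
# tagged.azure_rm.yml (the plugin expects the filename to end in azure_rm.yml)
plugin: azure_rm
auth_source: msi
exclude_host_filters:
  - "'my_devops' not in tags"
Then use hosts: all in the playbook and run ansible-playbook -i tagged.azure_rm.yml test-playbook.yml.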
I'm managing my infrastructure as code (IaC) on AWS with Terragrunt + Terraform.
I have already added the SSH key and GPG key to GitLab and left the branch unprotected in the repository as a test, but it didn't work.
This is the module call, equivalent to Terraform's main.tf:
# ---------------------------------------------------------------------------------------------------------------------
# Terragrunt configuration
# ---------------------------------------------------------------------------------------------------------------------
terragrunt = {
  terraform {
    source = "git::ssh://git@gitlab.compamyx.com.br:2222/x/terraform-blueprints.git//route53?ref=0.3.12"
  }

  include = {
    path = "${find_in_parent_folders()}"
  }
}

# ---------------------------------------------------------------------------------------------------------------------
# Blueprint parameters
# ---------------------------------------------------------------------------------------------------------------------
zone_id = "ZDU54ADSD8R7PIX"
name    = "k8s"
type    = "CNAME"
ttl     = "5"
records = ["tmp-elb.com"]
The point is that when I run terragrunt init, in one of the modules I get the following error:
ssh: connect to host gitlab.company.com.br port 2222: Connection timed out
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
[terragrunt] 2020/02/05 15:23:18 Hit multiple errors:
exit status 1
I tested with
ssh -vvvv -T gitlab.companyx.com.br -p 2222
and also got a timeout.
This doesn't appear to be a terragrunt or terraform issue at all, but rather, an issue with SSH access to the server.
If you are getting a timeout, it seems like it's most likely a connectivity issue (i.e., a firewall/network ACL is blocking access on that port from where you are attempting to access it).
If it were an SSH key issue, you'd get an "access denied" or similar issue, but the timeout definitely leads me to believe it's connectivity.
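A quick way to confirm that from the machine running terragrunt is to probe the port directly; netcat is just one option here:
# does anything answer on that host/port at all?
nc -zv gitlab.companyx.com.br 2222
If that also times out, the problem is routing or firewalling rather than keys or repository permissions.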
I have a Jenkins pipeline and I'm trying to run a Postgres container and connect to it for some Node.js integration tests. My Jenkinsfile looks like this:
stage('Test') {
steps {
script {
docker.image('postgres:10.5-alpine').withRun('-e "POSTGRES_USER=fred" -e "POSTGRES_PASSWORD="foobar" -e "POSTGRES_DB=mydb" -p 5432:5432') { c->
sh 'npm run test'
}
}
}
What hostname should I use to connect to the postgres database inside of my nodejs code? I have tried localhost but I get a connection refused exception:
ERROR=Error: connect ECONNREFUSED 127.0.0.1:5432
ERROR:Error: Error: connect ECONNREFUSED 127.0.0.1:5432
Additional Details:
I've added a sleep for 30 seconds for the container to start up. I know there are better ways to do this but for now I just want to solve the connection problem.
I run docker logs on the container to see if it is ready to accept connections, and it is.
stage('Test') {
steps {
script {
docker.image('postgres:10.5-alpine').withRun('-e "POSTGRES_USER=fred" -e "POSTGRES_PASSWORD="foobar" -e "POSTGRES_DB=mydb" -p 5432:5432') { c->
sleep 60
sh "docker logs ${c.id}"
sh 'npm run test'
}
}
}
tail of docker logs command:
2019-09-02 12:30:37.729 UTC [1] LOG: database system is ready to accept connections
I am running Jenkins itself in a docker container, so I am wondering if this is a factor?
My goal is to have a database startup with empty tables, run my integration tests against it, then shut down the database.
I can't run my tests inside the container because the code I'm testing lives outside the container and triggered the Jenkins pipeline to begin with. This is part of a Jenkins multibranch pipeline, triggered by a push to a feature branch.
Your code sample is missing a closing curly bracket and has an excess/mismatched quote, so it is not clear whether you actually ran (or wanted to run) your sh commands inside or outside the withRun block.
Depending on where the closing bracket was, the container might already have shut down.
Generally, the Postgres connection is fine once those syntax issues are fixed:
pipeline {
  agent any
  stages {
    stage('Test') {
      steps {
        script {
          docker.image('postgres:10.5-alpine')
            .withRun('-e "POSTGRES_USER=fred" ' +
                     '-e "POSTGRES_PASSWORD=foobar" ' +
                     '-e "POSTGRES_DB=mydb" ' +
                     '-p 5432:5432'
            ) {
            sh script: """
              sleep 5
              pg_isready -h localhost
            """
            //sh 'npm run test'
          }
        }
      }
    }
  }
}
results in:
pg_isready -h localhost
localhost:5432 - accepting connections
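On the Node side, npm run test would then connect to the published port; here is a minimal sketch using the common pg package, with connection values taken from the withRun flags above (the file name is made up):
// db-smoke-test.js
const { Client } = require('pg')

const client = new Client({
  host: 'localhost', // assumes the test runs on the host the port is published to
  port: 5432,
  user: 'fred',
  password: 'foobar',
  database: 'mydb'
})

client.connect()
  .then(() => client.query('SELECT 1')) // trivial query to prove connectivity
  .then(res => {
    console.log(res.rows)
    return client.end()
  })
  .catch(err => console.error(err))
Note that if Jenkins itself runs inside a container, as the question mentions, localhost inside that container is not the Docker host; that may be a separate factor worth checking.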
Please refer to this link to understand what I have done.
Short description
I need to run the top command on a remote machine, capture its output, and then save the resulting file on the local machine.
test.yml
---
- hosts: webservers
  remote_user: root
  tasks:
    - name: 'Copy top.sh to remote machine'
      synchronize: mode=push src=top.sh dest=/home/raj
    - name: Execute the script
      command: sh /home/raj/top.sh
      async: 45
      poll: 5
    - name: 'Copy system.txt to local machine'
      synchronize: mode=pull src=system.txt dest=/home/bu
top.sh
#!/bin/bash
top > system.txt
Problem
top.sh never ends, so I am trying to poll the result every five seconds and copy it to the local machine, but it is not working. It throws the error below:
stderr: top: failed tty get
<job 351267881857.24744> FAILED on 192.168.1.7
Note: I get this error only when I include the async and poll options.
Hello Bilal, I hope this is useful for you.
Your syntax uses poll: 5; see this link: http://docs.ansible.com/ansible/playbooks_async.html
poll makes Ansible wait for the task to complete, but the top command doesn't stop until you stop it or the system shuts down, so use poll: 0.
"Alternatively, if you do not need to wait on the task to complete, you may “fire and forget” by specifying a poll value of 0:"
Now forget the task; to collect the top result file from the remote machine and store it locally, use the syntax below:
- hosts: webservers
  remote_user: root
  tasks:
    - name: 'Copy top.sh to remote machine'
      synchronize: mode=push src=top.sh dest=/home/raj
    - name: collecting top result
      command: sh /home/raj/top.sh
      async: 45
      poll: 0
    - name: 'Copy top command result to local machine'
      synchronize: mode=pull src=/home/raj/Top.txt dest=/home/raj2/Documents/Ansible
top.sh:
#!/bin/bash
top -b > /home/raj/Top.txt
This works for me. Ping me if you have any problems.
Do you need to run the top command itself, or is this just an example of a long-running program you want to monitor?
The error you're receiving:
top: failed tty get
...happens when the top command isn't running in a real terminal session. The mode of SSH that Ansible uses doesn't support all the console features that a full-blown terminal session would have, which is what top expects.
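If only a snapshot is needed rather than the interactive display, top's batch mode sidesteps the tty requirement entirely; a minimal variant of the script, same idea as the -b flag in the accepted answer above:
#!/bin/bash
# batch mode (-b) writes plain text and -n 1 stops after one iteration, so no tty is needed
top -b -n 1 > /home/raj/Top.txt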
When I try to connect through SSH from any language (tried with Golang & Nodejs) to one of my servers from Windows the agent forwarding doesn't work.
I'm saying this because some commands like git pull throw errors (Permission denied (publickey)), while there are none if I log in directly using PuTTY.
I tried to use the environment variable SSH_AUTH_SOCK, but it seems no such variable is set on Windows. I expected Pageant to do the job.
Code example in NodeJS (simple-ssh lib):
this.ssh = new SSH({
  // other unimportant variables
  agent: process.env.SSH_AUTH_SOCK, // which is undefined
  agentForward: true
});
How does this work on Windows?
For Pageant on Windows, you should use the special 'pageant' value for agent instead:
this.ssh = new SSH({
  // other unimportant variables
  agent: 'pageant',
  agentForward: true
});
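If you are on Windows but using the built-in OpenSSH agent rather than Pageant, newer versions of the underlying ssh2 library also accept the agent's named pipe. A hedged sketch (the pipe path is the Windows OpenSSH agent's default; whether simple-ssh passes it through depends on its bundled ssh2 version):
this.ssh = new SSH({
  // '\\\\.\\pipe\\openssh-ssh-agent' is the default named pipe of the
  // Windows OpenSSH agent service; fall back to SSH_AUTH_SOCK elsewhere
  agent: process.platform === 'win32'
    ? '\\\\.\\pipe\\openssh-ssh-agent'
    : process.env.SSH_AUTH_SOCK,
  agentForward: true
});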