Please refer to this link to understand what I have done.
Short description
I need to run the top command on a remote machine, capture its output to a file, and then save that file on the local machine.
test.yml
---
- hosts: webservers
  remote_user: root
  tasks:
    - name: 'Copy top.sh to remote machine'
      synchronize: mode=push src=top.sh dest=/home/raj
    - name: Execute the script
      command: sh /home/raj/top.sh
      async: 45
      poll: 5
    - name: 'Copy system.txt to local machine'
      synchronize: mode=pull src=system.txt dest=/home/bu
top.sh
#!/bin/bash
top > system.txt
Problem
top.sh never ends, so I am trying to poll the result every five seconds and copy it to the local machine, but it is not working. It throws the error below:
stderr: top: failed tty get
<job 351267881857.24744> FAILED on 192.168.1.7
Note: I get this error only when I include the async and poll options.
Hello Bilal, I hope this is useful for you.
Your syntax uses poll: 5; see this link: http://docs.ansible.com/ansible/playbooks_async.html
poll waits on the task to complete, but the top command doesn't stop until the user stops it or the system shuts down, so use poll: 0. As the documentation says:
"Alternatively, if you do not need to wait on the task to complete, you may “fire and forget” by specifying a poll value of 0:"
Now forget the task; to collect the top result file from the remote machine and store it locally, use the syntax below:
- hosts: webservers
  remote_user: root
  tasks:
    - name: 'Copy top.sh to remote machine'
      synchronize: mode=push src=top.sh dest=/home/raj
    - name: collecting top result
      command: sh /home/raj/top.sh
      async: 45
      poll: 0
    - name: 'Copy top command result to local machine'
      synchronize: mode=pull src=/home/raj/Top.txt dest=/home/raj2/Documents/Ansible
top.sh:
#!/bin/bash
top -b > /home/raj/Top.txt
This works for me. Ping me if you have any problems.
Do you need to run the top command itself, or is this just an example of a long-running program you want to monitor?
The error you're receiving:
top: failed tty get
...happens when the top command isn't running in a real terminal session. The mode of SSH that Ansible uses doesn't support all the console features that a full-blown terminal session would have, which is what top expects.
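If you only need a snapshot rather than an interactive display, batch mode sidesteps the tty requirement entirely (a minimal sketch, assuming the usual procps top):
top -b -n 1 > system.txt    # -b: batch mode, no tty needed; -n 1: print one iteration and exit
Without -n, top keeps producing iterations indefinitely, which is why the scripts above never end on their own.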
yst_c_testInbound is an existing job in the box yst_b_test_Inbound_U01.
We are changing the DNS alias from the old name "str-uat.capint.com" to the new name "str-r7uat.capint.com".
AUTOSERV, SERVER1, and SERVER2 are all set properly.
The job is created successfully if the actual machine name is given for the "machine" tag in the JIL file content. The old DNS name is also working fine.
It gives the following error for the new DNS name. Please let me know what the issue with the DNS is.
Pinging str-r7uat.capint.com works fine.
Error:
C:\AutoSys_Tools\bin>jil < yst_c_testInbound.jil
CAUAJM_I_50323 Inserting/Updating job: yst_c_testInbound
CAUAJM_E_10281 ERROR for Job: yst_c_testInbound < machine 'str-r7uat.capint.com' does not exist >
CAUAJM_E_10302 Database Change WAS NOT successful.
CAUAJM_E_50198 Exit Code = 1
JIL file Content - yst_c_testInbound.jil
update_job: yst_c_testInbound job_type: CMD
box_name: yst_b_test_Inbound_U01
command: perl -w $SYSTR_PL/strInBound.pl -PortNo 12222
machine: str-r7uat.capint.com
owner: testulnx
permission:
date_conditions: 0
description: "JMS Flow process to send the messages from STR to MQ"
std_out_file: ">>$STR_LOG/tradeflow_arts_impact_$$YST_STR_CURR_BUS_DATE.log"
std_err_file: ">>$STR_LOG/tradeflow_arts_impact_$$YST_STR_CURR_BUS_DATE.log"
alarm_if_fail: 0
profile: "/apps/profile/test_profile"
alarm_if_terminated: 0
timezone: US/Eastern
The error above occurs while creating the job using the JIL file yst_c_testInbound.jil.
You need to add the machine first. You can't update a job that references an undefined machine.
If you run:
autorep -M str-r7uat.capint.com
It will most likely return CAUAJM_E_50111 Invalid Machine Name: str-r7uat.capint.com
So add the machine first; then you can run the update-job JIL.
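For example, a minimal JIL to define the machine might look like this (the type attribute value is an assumption; adjust it to your agent setup):
insert_machine: str-r7uat.capint.com
type: a
Load it with jil the same way (jil < add_machine.jil); afterwards autorep -M str-r7uat.capint.com should list the machine, and the update-job JIL will go through.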
Cheers.
Can I execute a shell script on the same server as Prometheus/Alertmanager when an alert triggers?
If so, please help me with the configuration.
You can use prometheus-am-executor to run any shell or even Python script.
Here is a sample.yml file:
listen_address: ":8094" # Where Alertmanager sends alerts
# Display more output
verbose: true
commands:
  - cmd: python3
    args: ["script.py"] # Script which you want to execute
A maintained alternative is https://github.com/adnanh/webhook, which allows you to install local webhooks with scripts attached.
Example config:
- id: redeploy-webhook
  execute-command: "/var/scripts/redeploy.sh"
  command-working-directory: "/var/webhook"
The default port of the webhook process is 9000, so the following URL would execute the redeploy.sh script from the config example above:
http://yourserver:9000/hooks/redeploy-webhook
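You can smoke-test the hook by hand before wiring it into Alertmanager (assuming the defaults above):
curl http://yourserver:9000/hooks/redeploy-webhook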
This URL can then be used in your Alertmanager config:
receivers:
  - name: 'general'
    webhook_configs:
      - url: http://yourserver:9000/hooks/redeploy-webhook
I am using SaltStack to start up an ArangoDB instance on a CentOS 7 machine. I would like to start it with a custom password, so I would like to run ARANGODB_DEFAULT_ROOT_PASSWORD=<my password> arango-secure-installation after the ArangoDB 3.5 RPM is installed on the machine but before it starts up, because you can only set the password while it is not running. I'm not sure how to do that exactly with SaltStack, but I assume it has something to do with the cmd.run Salt function.
Here's the installation/startup salt code I have:
arangodb_3_server:
  pkg.latest:
    - refresh: True
    - pkgs:
      - arangodb3
  cmd.run:
    - name: "ARANGODB_DEFAULT_ROOT_PASSWORD={{ arangodb.get('ARANGO_ROOT_PASSWORD', '') }} arango-secure-installation"
  service.running:
    - name: arangodb3
    - enable: True
    - watch:
      - file: /etc/arangodb3/arangod.conf
So I'm wondering: can I basically just put the secure-installation command somewhere in there to accomplish this? From what I've tried, I've only gotten compilation errors, or the password simply isn't set.
On Ubuntu I used policy-rc.d to return a non-zero code so the package would not start the service; I did not find an equivalent for CentOS. You can stop the service using service.dead after installation, then run your command with cmd.run, and then start the service with service.running, as sketched below.
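A minimal sketch of that ordering (the state IDs other than arangodb_3_server are illustrative; the require chains force install → stop → set password → start, and the watch on arangod.conf from your original state can be re-added if you manage that file elsewhere):
arangodb_3_server:
  pkg.latest:
    - refresh: True
    - pkgs:
      - arangodb3

# Stop the service so the password can be set; it may have been
# started by the package installation.
arangodb_stopped_for_password:
  service.dead:
    - name: arangodb3
    - require:
      - pkg: arangodb_3_server

arangodb_set_root_password:
  cmd.run:
    - name: "ARANGODB_DEFAULT_ROOT_PASSWORD={{ arangodb.get('ARANGO_ROOT_PASSWORD', '') }} arango-secure-installation"
    - require:
      - service: arangodb_stopped_for_password

arangodb_running:
  service.running:
    - name: arangodb3
    - enable: True
    - require:
      - cmd: arangodb_set_root_password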
The task's image_resource property is marked as optional in the documentation, but GNU/Linux tasks fail without it.
Also, the docs for the type property of image_resource say:
Required. The type of the resource. Usually docker-image
But I couldn't find any information about other supported types.
How can I run tasks on the underlying system without any container technology, like in my Windows and macOS workers?
In Concourse, you are really not supposed to do anything outside of a container; that is one of its main features. Concourse starts a new container for each build step. If you want to run one or more Linux commands in sh or bash inside the container, you can try something like the task config below.
- task: linux
  config:
    platform: linux
    image_resource:
      type: docker-image
      source: {repository: ubuntu, tag: '18.04'}
    run:
      dir: /<path-to-dir>
      path: sh
      user: root
      args:
        - -exc
        - |
          echo "Running in Linux!"
          ls
          scp <you#your-host-machine:file> .
          telnet <your-host-machine>
          <whatever>
          ...
I've got a server on which Supervisord is managing my processes. I normally start supervisord with the following command:
sudo /var/www/imd/venv/bin/supervisord -c /var/www/imd/deploy/supervisord.conf
I'm now trying to set things up with Ansible, but I'm unsure of how I should start supervisord from it. I can of course do it using something like:
- name: run supervisord
  command: "/var/www/imd/venv/bin/supervisord -c /var/www/imd/deploy/supervisord.conf"
This works, but only the first time you run it. The second time you run the same playbook, supervisord is of course already running, which causes the following error:
TASK [run supervisord] *******************************************************
fatal: [ansible-test1]: FAILED! => {"changed": true, "cmd": ["/var/www/imd/venv/bin/supervisord", "-c", "/var/www/imd/deploy/supervisord.conf"], "delta": "0:00:00.111700", "end": "2016-06-03 11:57:38.605804", "failed": true, "rc": 2, "start": "2016-06-03 11:57:38.494104", "stderr": "Error: Another program is already listening on a port that one of our HTTP servers is configured to use. Shut this program down first before starting supervisord.\nFor help, use /var/www/imd/venv/bin/supervisord -h", "stdout": "", "stdout_lines": [], "warnings": []}
Does anybody know how I can correctly run supervisord with Ansible? All tips are welcome!
[EDIT]
Because the solution in the answer by mbarthelemy doesn't work for socket files, I have now managed to get it working with the following:
- name: run supervisord
  shell: if [ ! -S /var/run/supervisor.sock ]; then sudo /var/www/imd/venv/bin/supervisord -c /var/www/imd/deploy/supervisord.conf; fi
This of course is not very "Ansibleish". If anybody has a real Ansible-based solution, that would still be really welcome.
You can use the supervisorctl module:
- supervisorctl:
    name: my_app
    state: restarted
    config: /var/opt/my_project/supervisord.conf

or

- name: Restart my_app
  supervisorctl:
    name: my_app
    state: restarted
    config: /var/opt/my_project/supervisord.conf
Full documentation is at https://docs.ansible.com/ansible/2.7/modules/supervisorctl_module.html#examples
Your situation is specific, since you don't seem to use a regular Supervisor installed as a normal system package; in that case you would start/stop/restart it like any other regular system service, using Ansible's service module.
By default, upon starting, Supervisor creates a socket on which it listens for administration commands from supervisorctl. When it stops, it is supposed to remove it.
Try to find where this socket is created in your specific setup (the default would be /var/run/supervisor.sock).
Then, let the Ansible command module know that the socket exists when the supervisord process is already running, using the creates option (see the documentation). This way it won't try to run the command if supervisord is already running:
- name: run supervisord
  command: "./venv/bin/supervisord -c ./deploy/supervisord.conf"
  args:
    chdir: /var/www/imd
    creates: /var/run/supervisor.sock
Edit: while this would be the right answer if /var/run/supervisor.sock were a regular file, it won't work here because it's a socket, and Ansible's creates parameter won't work with it.
The most Ansible-ish solution I can think of is using an external Ansible module like one of these, to check whether your process already exists (test_process) or is already listening (test_tcp). Alternatively, a conditional check with built-in modules is sketched below.
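As a sketch with only built-in modules, you can stat the socket and make the command conditional on it, which is the same idea as the shell test in the question's edit (paths copied from the question):
- name: check for the supervisord socket
  stat:
    path: /var/run/supervisor.sock
  register: supervisor_sock

- name: run supervisord
  command: "/var/www/imd/venv/bin/supervisord -c /var/www/imd/deploy/supervisord.conf"
  when: not supervisor_sock.stat.exists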