yst_c_testInbound is an existing job in the box yst_b_test_Inbound_U01.
We are changing the DNS alias from the old name "str-uat.capint.com" to the new name "str-r7uat.capint.com".
The AUTOSERV, SERVER1, and SERVER2 environment variables (set AUTOSERV, set SERVER1, set SERVER2) are set properly.
The job is created successfully if the actual machine name is given for the "machine" tag in the JIL file, and the old DNS name also works fine.
It gives the following error for the new DNS name. Please let me know what the issue with the DNS name is.
Pinging str-r7uat.capint.com works fine.
Error:
C:\AutoSys_Tools\bin>jil < yst_c_testInbound.jil
CAUAJM_I_50323 Inserting/Updating job: yst_c_testInbound
CAUAJM_E_10281 ERROR for Job: yst_c_testInbound < machine 'str-r7uat.capint.com' does not exist >
CAUAJM_E_10302 Database Change WAS NOT successful.
CAUAJM_E_50198 Exit Code = 1
JIL file Content - yst_c_testInbound.jil
update_job: yst_c_testInbound job_type: CMD
box_name: yst_b_test_Inbound_U01
command: perl -w $SYSTR_PL/strInBound.pl -PortNo 12222
machine: str-r7uat.capint.com
owner: testulnx
permission:
date_conditions: 0
description: "JMS Flow process to send the messages from STR to MQ"
std_out_file: ">>$STR_LOG/tradeflow_arts_impact_$$YST_STR_CURR_BUS_DATE.log"
std_err_file: ">>$STR_LOG/tradeflow_arts_impact_$$YST_STR_CURR_BUS_DATE.log"
alarm_if_fail: 0
profile: "/apps/profile/test_profile"
alarm_if_terminated: 0
timezone: US/Eastern
While creating the job using the JIL file yst_c_testInbound.jil, I get the error shown above.
You need to add the machine first. You can't update a job that references a machine which is not defined.
If you run:
autorep -M str-r7uat.capint.com
It will most likely return CAUAJM_E_50111 Invalid Machine Name: str-r7uat.capint.com
So add the machine first; then you can run the update_job JIL.
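A minimal machine-definition JIL would look something like this (a sketch; add_machine.jil is just a placeholder file name, and the attributes should match how your other agent machines are defined):

insert_machine: str-r7uat.capint.com
/* type: a marks a standard agent machine; adjust if your agents use different attributes */
type: a

Load it with jil < add_machine.jil, confirm with autorep -M str-r7uat.capint.com, and then re-run the update_job JIL above.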
Cheers.
I am trying to copy a war file from my localhost to a Tomcat webapps folder using a command line script in Azure DevOps. My release succeeds, but the war file is not copied to the destination folder. How can I fix this issue?
The folder path is mentioned below.
The script you shared, echo cd E:\apache-tomcat-9.0.41\webapps, just prints cd E:\apache-tomcat-9.0.41\webapps; check the pic below. We could refer to this doc for more details.
We could copy the file via the cmd below. Add a Command line task and enter the script below:
cd {file path}
copy {file name} {target path}
According to the screenshot you shared, you could try the script below:
cd C:\agent\_work\r1\a\_nitesh482.Devops\warfile\webapp\target
copy webapp.war E:\apache-tomcat-9.0.41\webapps
Result:
Update1
If the file does not exist, we will get the message "The system cannot find the file specified." and the error message "[error]Cmd.exe exited with code '1'".
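If you want the release to fail with a clearer message in that case, one option is a small guard before the copy (a sketch reusing the paths from above; adjust them to your agent):

cd C:\agent\_work\r1\a\_nitesh482.Devops\warfile\webapp\target
REM fail early with an explicit message if the war file is missing
if not exist webapp.war (
  echo webapp.war was not found in %CD%
  exit /b 1
)
copy webapp.war E:\apache-tomcat-9.0.41\webapps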
Update2
Please ensure that you are using a self-hosted agent instead of a hosted agent. If you are using a hosted agent, we will get an error message; check the pic below.
Self-hosted agent:
Hosted agent:
I have an Elastic Beanstalk environment which is running a Docker container with a Node.js API. On the AWS Console, if I select my environment and then go to Configuration / Software, I have the following:
Log groups: /aws/elasticbeanstalk/my-environment
Log streaming: Enabled
Retention: 3 days
Lifecycle: Keep after termination.
However, if I click on that log group in the CloudWatch console, I see a Last Event Time from some weeks ago (which I believe corresponds to when the environment was created) and the logs have no content.
Since this is a dockerized application, logs for the server itself should be at /aws/elasticbeanstalk/my-environment/var/log/eb-docker/containers/eb-current-app/stdouterr.log.
If I instead get the logs directly from the instances by going once again to my EB environment, clicking "Logs" and then "Request last 100 Lines", the logging is happening correctly. I just can't see a thing when using CloudWatch.
Any help is gladly appreciated
I was able to get around this problem.
So the CloudWatch Logs agent makes a hash based on the first line of your log file and the log stream key, and the problem was that the first line of my stdouterr.log file was actually an empty line!
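You can check this directly on the instance, for example (same log path as in the configuration below):

# if this prints only a "$", the first line of the log file is empty
head -n 1 /var/log/eb-docker/containers/eb-current-app/stdouterr.log | cat -A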
After a couple of days of playing around and getting help from the good AWS support team, I first connected via SSH to the EC2 instance associated with my EB environment. There you need to add the following line to the /etc/awslogs/config/beanstalklogs.conf file, right after the "file=/var/log/eb-docker/containers/eb-current-app/stdouterr.log" line:
file_fingerprint_lines=1-20
With this, you tell the awslogs agent that it should calculate the hash using lines 1 through 20 of the log file. You could change 20 to a larger or smaller number depending on your logging content; however, I don't know if there is an upper limit for the value.
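The relevant stanza in beanstalklogs.conf should then look roughly like this (the log group name is taken from the environment in the question; yours will differ):

[/var/log/eb-docker/containers/eb-current-app/stdouterr.log]
log_group_name=/aws/elasticbeanstalk/my-environment/var/log/eb-docker/containers/eb-current-app/stdouterr.log
log_stream_name={instance_id}
file=/var/log/eb-docker/containers/eb-current-app/stdouterr.log
file_fingerprint_lines=1-20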
After doing so, you need to restart the AWS Logs Service on the instance.
For this you would execute:
sudo service awslogs stop
sudo service awslogs start
or simpler:
sudo service awslogs restart
After these steps I started using my environment and the logging was now being properly streamed to the CloudWatch console!
However, this would not survive a new deployment, the EC2 instance being replaced, or the auto scaling group spawning another instance.
To fix this permanently, you can add the log config via the .ebextensions directory at the root of your application before deploying.
I added a file called logs.config to the newly created .ebextensions directory and placed the following content:
files:
  "/etc/awslogs/config/beanstalklogs.conf":
    mode: "000644"
    owner: root
    group: root
    content: |
      [/var/log/eb-docker/containers/eb-current-app/stdouterr.log]
      log_group_name=/aws/elasticbeanstalk/EB-ENV-NAME/var/log/eb-docker/containers/eb-current-app/stdouterr.log
      log_stream_name={instance_id}
      file=/var/log/eb-docker/containers/eb-current-app/*stdouterr.log
      file_fingerprint_lines=1-20
commands:
  01_remove_eb_stream_config:
    command: 'rm /etc/awslogs/config/beanstalklogs.conf.bak'
  02_restart_log_agent:
    command: 'service awslogs restart'
Of course, I replaced EB-ENV-NAME with my environment name on EB.
Hope it can help someone else!
For 64-bit Amazon Linux 2 the setup is slightly different.
For log delivery, the AWS CloudWatch agent is installed in /opt/aws/amazon-cloudwatch-agent and the Elastic Beanstalk configuration is in /opt/aws/amazon-cloudwatch-agent/etc/beanstalk.json. It is set to log the output of the container, assuming there is a file called stdouterr.log; here is a snippet of the config:
{
  "file_path": "/var/log/eb-docker/containers/eb-current-app/stdouterr.log",
  "log_group_name": "/aws/elasticbeanstalk/EB-ENV-NAME/var/log/eb-docker/containers/eb-current-app/stdouterr.log",
  "log_stream_name": "{instance_id}"
}
However, when I look for that file_path it doesn't exist; instead I have a file path that encodes the current Docker container ID, /var/log/eb-docker/containers/eb-current-app/eb-e4e26c0bc464-stdouterr.log.
This logfile is created by the script /opt/elasticbeanstalk/config/private/eb-docker-log-start, which is started by the eb-docker-log service. The default contents of this file are:
EB_CONFIG_DOCKER_CURRENT_APP=`cat /opt/elasticbeanstalk/deployment/.aws_beanstalk.current-container-id | cut -c 1-12`
mkdir -p /var/log/eb-docker/containers/eb-current-app/
docker logs -f $EB_CONFIG_DOCKER_CURRENT_APP >> /var/log/eb-docker/containers/eb-current-app/eb-$EB_CONFIG_DOCKER_CURRENT_APP-stdouterr.log 2>&1
To temporarily fix the logging you can manually run the following (replacing the Docker ID), and then logs will start to appear in CloudWatch:
ln -sf /var/log/eb-docker/containers/eb-current-app/eb-e4e26c0bc464-stdouterr.log /var/log/eb-docker/containers/eb-current-app/stdouterr.log
To make this permanent, I added an .ebextension that fixes the eb-docker-log service so that it re-creates this link. Create a file in your source code under .ebextensions called fix-cloudwatch-logging.config and set its contents to:
files:
  "/opt/elasticbeanstalk/config/private/eb-docker-log-start":
    mode: "000755"
    owner: root
    group: root
    content: |
      EB_CONFIG_DOCKER_CURRENT_APP=`cat /opt/elasticbeanstalk/deployment/.aws_beanstalk.current-container-id | cut -c 1-12`
      mkdir -p /var/log/eb-docker/containers/eb-current-app/
      ln -sf /var/log/eb-docker/containers/eb-current-app/eb-$EB_CONFIG_DOCKER_CURRENT_APP-stdouterr.log /var/log/eb-docker/containers/eb-current-app/stdouterr.log
      docker logs -f $EB_CONFIG_DOCKER_CURRENT_APP >> /var/log/eb-docker/containers/eb-current-app/eb-$EB_CONFIG_DOCKER_CURRENT_APP-stdouterr.log 2>&1
commands:
  fix_logging:
    command: systemctl restart eb-docker-log.service
    cwd: /home/ec2-user
    test: "[ ! -L /var/log/eb-docker/containers/eb-current-app/stdouterr.log ] && systemctl is-active --quiet eb-docker-log"
Please refer to this link to understand what I have done.
Short description
I need to run the top command on a remote machine, capture its output, and then save the resulting file on my local machine.
test.yml
---
- hosts: webservers
  remote_user: root
  tasks:
    - name: 'Copy top.sh to remote machine'
      synchronize: mode=push src=top.sh dest=/home/raj
    - name: Execute the script
      command: sh /home/raj/top.sh
      async: 45
      poll: 5
    - name: 'Copy system.txt to local machine'
      synchronize: mode=pull src=system.txt dest=/home/bu
top.sh
#!/bin/bash
top > system.txt
Problem
top.sh never ends, so I am trying to poll the result every five seconds and copy it to the local machine, but it is not working. It throws the error below:
stderr: top: failed tty get
<job 351267881857.24744> FAILED on 192.168.1.7
Note: I get this error only when I include the async and poll options.
Hello Bilal, I hope this is useful for you.
Your syntax uses poll: 5; see this link: http://docs.ansible.com/ansible/playbooks_async.html
poll makes Ansible wait for the task to complete, but the top command doesn't stop until you stop it or the system shuts down, so use poll: 0.
"Alternatively, if you do not need to wait on the task to complete, you may “fire and forget” by specifying a poll value of 0:"
Now fire and forget the task, then collect the top result file from the remote machine and store it locally, using the syntax below:
- hosts: webservers
  remote_user: root
  tasks:
    - name: 'Copy top.sh to remote machine'
      synchronize: mode=push src=top.sh dest=/home/raj
    - name: collecting top result
      command: sh /home/raj/top.sh
      async: 45
      poll: 0
    - name: 'Copy top command result to local machine'
      synchronize: mode=pull src=/home/raj/Top.txt dest=/home/raj2/Documents/Ansible
top.sh:
#!/bin/bash
top -b > /home/raj/Top.txt
This works for me. Ping me if you have any problems.
Do you need to run the top command itself, or is this just an example of a long-running program you want to monitor?
The error you're receiving:
top: failed tty get
...happens when the top command isn't running in a real terminal session. The mode of SSH that Ansible uses doesn't support all the console features that a full-blown terminal session would have, which is what top expects.
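If you do need top itself, one way around both problems (no tty and a never-ending process) is to run it in batch mode with a fixed number of iterations, roughly like this:

#!/bin/bash
# -b: batch mode, so top does not need a tty
# -n 1: take a single snapshot and exit, so the task actually completes
top -b -n 1 > /home/raj/Top.txt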
I am trying to set up a Spark JobServer (SJS) to execute jobs on a standalone Spark cluster. I am trying to deploy SJS on one of the non-master nodes of the Spark cluster. I am not using Docker, but am trying to do it manually.
I am confused by the help documents in the SJS GitHub repo, particularly the deployment section. Do I need to edit both local.conf and local.sh to run this?
Can someone point out the steps to set up the SJS in the spark cluster?
Thanks!
Kiran
Update:
I created a new environment to deploy the jobserver on one of the nodes of the cluster. Here are its details:
env1.sh:
DEPLOY_HOSTS="masked.mo.cpy.corp"
APP_USER=kiran
APP_GROUP=spark
INSTALL_DIR=/home/kiran/job-server
LOG_DIR=/var/log/job-server
PIDFILE=spark-jobserver.pid
JOBSERVER_MEMORY=1G
SPARK_VERSION=1.6.1
MAX_DIRECT_MEMORY=512M
SPARK_HOME=/home/spark/spark-1.6.1-bin-hadoop2.6
SPARK_CONF_DIR=$SPARK_HOME/conf
SCALA_VERSION=2.11.6
env1.conf
spark {
  master = "local[1]"
  webUrlPort = 8080
  job-number-cpus = 2
  jobserver {
    port = 8090
    bind-address = "0.0.0.0"
    jar-store-rootdir = /tmp/jobserver/jars
    context-per-jvm = false
    jobdao = spark.jobserver.io.JobFileDAO
    filedao {
      rootdir = /tmp/spark-job-server/filedao/data
    }
    datadao {
      rootdir = /tmp/spark-jobserver/upload
    }
    result-chunk-size = 1m
  }
  context-settings {
    num-cpu-cores = 1
    memory-per-node = 1G
  }
  home = "/home/spark/spark-1.6.1-bin-hadoop2.6"
}
Why don't you set JOBSERVER_FG=1 and try running server_start.sh? This would run the process in the foreground and should display the error on stderr.
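For example (a sketch assuming the jobserver was deployed to the INSTALL_DIR from your .sh file, which is /home/kiran/job-server in the question):

cd /home/kiran/job-server
# run the jobserver in the foreground so any startup error is printed to stderr
JOBSERVER_FG=1 ./server_start.sh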
Yes, you have to edit both files, adapting them for your cluster.
The deploy steps are explained below:
Copy config/local.sh.template to <environment>.sh and edit as appropriate.
This file is mostly for environment variables that are used by the deployment script and by the server_start.sh script. The most important ones are: the deploy host (the IP or hostname where the jobserver will run), the user and group of execution, the JobServer memory (it will be the driver memory), the Spark version, and the Spark home.
Copy config/shiro.ini.template to shiro.ini and edit as appropriate. NOTE: only required when authentication = on
If you are going to use shiro authentication, then you need this step.
Copy config/local.conf.template to <environment>.conf and edit as appropriate.
This is the main configuration file for JobServer and for the contexts that JobServer will create. The full list of the properties you can set in this file can be seen on this link.
bin/server_deploy.sh <environment>
After editing the configuration files, you can deploy using this script. The parameter must be the name that you chose for your .conf and .sh files.
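For the files shown in the question (env1.sh and env1.conf), that would be:

bin/server_deploy.sh env1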
Once you run the script, JobServer will connect to the host entered in the .sh file and will create a new directory with some control files. Then, every time you need to change a configuration entry, you can do it directly on the remote machine: the .conf file will be there with the name you chose and the .sh file will be renamed to settings.sh.
Please note that, if you haven't configured an SSH key based connection between the machine where you run this script and the remote machine, you will be prompted for a password during its execution.
If you have problems with the creation of directories on the remote machine, you can try to create them yourself with mkdir (they must match the INSTALL_DIR configuration entry of the .sh file) and change their owner and group to match the ones entered in the .sh configuration file.
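For example, with the values from env1.sh in the question, that would be roughly (a sketch to run on the remote machine):

# INSTALL_DIR and LOG_DIR, owned by APP_USER:APP_GROUP from env1.sh
sudo mkdir -p /home/kiran/job-server /var/log/job-server
sudo chown -R kiran:spark /home/kiran/job-server /var/log/job-server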
On the remote server, start it in the deployed directory with server_start.sh and stop it with server_stop.sh
This is self-explanatory: once you have done all the other steps, you can start the JobServer service on the remote machine by running the script server_start.sh, and you can stop it with server_stop.sh.
I began modifying a profile and made some mistakes along the way.
Because of this I have PIDs in the profile which I'd like to delete entirely.
These can be seen in the fabric:profile-display default output shown at the bottom of this post.
They are:
http:
patch.repositories=http:
org.ops4j.pax.url.mvn.repositories=http:
I can't find the correct command to delete these. I've tried:
config:delete org.ops4j.pax.url.mvn.repositories=http:
which completes successfully, but the default profile still lists this PID.
I've also tried:
fabric:profile-edit --delete -p org.ops4j.pax.url.mvn.repositories=http: default
which fails with:
Error executing command: String index out of range: -1
This indicates a property path /property must be specified.
Simply appending / doesn't work either.
One more problem is that I have a pid with a seemingly empty name, as indicated by this line:
PID: (nothing follows this output prefix).
Output of fabric:profile-display default:
Profile id: default
Version : 1.0
Parents :
Associated Containers :
Container settings
----------------------------
Repositories :
mvn:org.fusesource.fabric/fuse-fabric/7.0.1.fuse-084/xml/features
Features :
fabric-agent
karaf
fabric-jaas
fabric-core
Agent Properties :
patch.repositories = http://repo.fusesource.com/nexus/content/repositories/releases,
http://repo.fusesource.com/nexus/content/groups/ea
org.ops4j.pax.url.mvn.repositories = http://repo1.maven.org/maven2,
http://repo.fusesource.com/nexus/content/repositories/releases,
http://repo.fusesource.com/nexus/content/groups/ea,
http://repository.springsource.com/maven/bundles/release,
http://repository.springsource.com/maven/bundles/external,
http://scala-tools.org/repo-releases
org.ops4j.pax.url.mvn.defaultRepositories = file:${karaf.home}/${karaf.default.repository}#snapshots,
file:${karaf.home}/local-repo#snapshots
Configuration details
----------------------------
PID:
PID: org.ops4j.pax.url.mvn
org.ops4j.pax.url.mvn.useFallbackRepositories false
org.ops4j.pax.url.mvn.disableAether true
org.ops4j.pax.url.mvn.repositories ${profile:org.fusesource.fabric.agent/org.ops4j.pax.url.mvn.repositories}
org.ops4j.pax.url.mvn.defaultRepositories ${profile:org.fusesource.fabric.agent/org.ops4j.pax.url.mvn.defaultRepositories}
PID: patch.repositories=http:
PID: org.ops4j.pax.url.mvn.repositories=http:
PID: http:
PID: org.fusesource.fabric.zookeeper
zookeeper.url ${zk:root/ip}:2181
I'd be extremely grateful if someone could point me to the correct command(s).
I had a look at the command-line code for fabric:profile-edit with --delete, and unfortunately this function seems to be designed for deleting key/value pairs from a PID, rather than deleting the PID itself.
(Here's the code for ProfileEdit.java on github)
So basically you can use that command to "empty out" the PIDs, but not to remove them.
fabric:profile-edit --delete --pid mypid/mykey=myvalue myprofile
Knowing that this doesn't help you much, I asked my colleague who sits next to me (and is much smarter than me) and he recommended the following:
Enable the Fuse Management Console with container-add-profile root fmc
Open fmc in a browser (mine is on localhost at port 8181), go to the Profiles page, and choose your profile from the list
Go to the Config Files tab, find the PID you want to nuke, and click the cross (X).
Et voila, the PID should be gone. I'd be interested to know if this works for you, including on the "blank" PID...
The following works in Fuse 6.2:
1) for property files (which become PID objects)
# create
profile-edit --resource foobar.properties default
# delete
profile-edit --delete --pid foobar default
2) for arbitrary files
# create
profile-edit --resource foobar.xml default
# delete
Only via the hawtio web console; see the screenshot: