I want to use the Publish Over SSH Jenkins plugin to transfer some files and then run some commands on a target Linux machine. That target machine needs to be specified as a build parameter, though, as we have around 10 potential target machines.
I know I can use parameterized publishing to choose which host to use for the publish over SSH build step, but it seems I have to define every possible server here and duplicate the file set and commands for each one. Is there a way to supply the file set and commands just once and have this apply to all potential target machines?
i.e. can the bit in the red circle in this screenshot be specified just once for both servers? And I don't mean just putting these commands in a script and transferring that to all servers - I'd still need to put the transfer command in each server configuration block.
Thanks,
Sarah
Hope you are doing great.
I am reaching out to the community as I am currently stuck with a problem: executing a sequence of commands from a Linux machine using JMeter.
A bit of background:
I have an external VM which is used to mimic the transfer of files to various inbound channels.
This VM is basically acting as a third party which hosts files that are then transferred to different locations by following a sequence of commands.
The sequence of commands that I am trying to execute to mimic the third party is as below (see the sketch after this list):
ls (to list the files in the Home Dir)
mv test123.txt test456.txt (This renames the file in the home Dir from test123.txt to test456.txt)
Then we connect to the File exchange server using the command below
sftp -P 24033 testuser@test-perf.XYZ.com
password is test#123456
Once Connected we execute the below sequence
ls (This will list the folders Inbound and Route)
cd Route (To change dir to Route)
ls (List the account IDs)
put test456.txt 12345 (12345 is the account ID)
After the last command executes, the file is transferred to an internal folder based on the account ID.
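For reference, here is a minimal sketch of the same sequence in non-interactive form, assuming sshpass is available to feed the password (hostname, port, credentials and filenames are the ones quoted above; adjust as needed):

#!/bin/bash
# Rename the file in the home dir, then push it via sftp in batch mode.
mv test123.txt test456.txt

# Write the remote command sequence to an sftp batch file.
cat > upload.batch <<'EOF'
ls
cd Route
ls
put test456.txt 12345
bye
EOF

# sshpass supplies the password so sftp can run unattended.
sshpass -p 'test#123456' sftp -P 24033 -b upload.batch testuser@test-perf.XYZ.com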
I did some searching on Stack Overflow and found a couple of links, but I was not able to use them successfully to simulate the above sequence of commands.
The closest one I could find is below:
How to execute Linux command or shell script from APACHE JMETER
But this does not talk about executing from a Linux machine itself.
Any help on how to approach this one would be appreciated. Thanks in advance.
PS: I am using JMeter because I have to keep this sequence executing continuously until I have transferred the expected number of files within a peak-hour duration, and these files are of different sizes, ranging from a few MB to a couple of GB.
New Edit
I used the JSR223 PreProcessor, where I have my sequence of commands, and then I call that command in the OS Process Sampler; I created a script as below.
The script executes on the Linux box without any error but the file is not transferred to the destination. Am I missing something?
During some research I found the lftp command, but I'm not sure how to use it in my case, or whether it will work at all.
Any suggestions?
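For reference, a possible lftp invocation for the same transfer might look like this (host, port, credentials and account ID are the ones from above; whether this particular server accepts it is untested):

# Connect over SFTP on port 24033, change into Route, upload the file under the account ID, then exit.
lftp -u testuser,'test#123456' -p 24033 sftp://test-perf.XYZ.com -e "cd Route; put test456.txt -o 12345; bye"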
To execute commands on the local Linux machine you can use the OS Process Sampler.
To execute commands on a remote Linux machine you can use the SSH Command Sampler.
See the How to Run External Commands and Programs Locally and Remotely from JMeter article for more information if needed.
To transfer the file from local to remote you can use the SSH SFTP Sampler.
In order to get the SSH Command and SSH SFTP Samplers, install the SSH Protocol Support plugin using the JMeter Plugins Manager:
I have several scripts on my local machine. These scripts run install and configuration commands to set up my Elasticsearch nodes. I have 15 nodes coming and we definitely do not want to do that by hand.
For now, let's call them Script_A, Script_B, Script_C and Script_D.
Script_A will be the one to initiate the process; it currently contains:
#!/bin/bash
read -p "Enter the hostname of the remote machine: " hostname
echo "Now connecting to $hostname!"
ssh root@$hostname
This works fine, obviously, and I can get into any server I need to. My confusion is about running the other scripts remotely. I have read a few other articles/SO questions, but I'm just not understanding the methodology.
I will have a directory on my machine as follows:
Elasticsearch_Installation
|
|=> Scripts
|
|=> Script_A, Script_B, etc..
Can I run Script_A, which remotes into the server, then come back to my local machine and run Script_B and so on within the remote server, without moving the files over?
Please let me know if any of this needs to be clarified; I'm fairly new to the Linux environment in general, much less running remote installs from scripts over the network.
Yes, you can. Use ssh in non-interactive mode; it will be like launching a command in your local environment.
ssh root@$hostname /remote/path/to/script
Nothing will be changed in your local system, you will be at the same point where you launched the ssh command.
NB: this command will ask you for a password. If you want a truly non-interactive flow, set up passwordless login to the host, as explained here:
How to ssh to localhost without password?
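If the scripts only exist locally, another common pattern (not mentioned in the answer above, so treat it as a sketch) is to feed a local script to a remote shell over ssh, so nothing needs to be copied first:

# "bash -s" makes the remote bash read the script from stdin.
ssh root@$hostname 'bash -s' < ./Scripts/Script_B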
You have a larger problem than just setting up many nodes: you have to be concerned with ongoing maintenance and administration of all those nodes, too. This is the space in which configuration management systems such as Puppet, Ansible, and others operate. But these have a learning curve to overcome, and they require some infrastructure of their own. You would probably benefit from one of them in the medium-to-long term, but if your new nodes are coming next week(ish) then you probably want a solution that you can use immediately to get up and going.
Certainly you can ssh into the server to run commands there, including non-interactively.
My confusion is running the other scripts remotely.
Of course, if you want to run your own scripts on the remote machine then they have to be present there, first. But this is not a major problem, for if you have ssh then you also have scp, the secure copy program. It can copy files to the remote machine, something like this:
#!/bin/bash
read -p "Enter the hostname of the remote machine: " hostname
echo "Now connecting to $hostname!"
scp Script_[ABCD] root@${hostname}:./
ssh root@${hostname} ./Script_A
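To cover all 15 nodes, the same copy-and-run steps could be wrapped in a loop. A minimal sketch, assuming the hostnames are listed one per line in a file called hosts.txt (a name invented here for illustration):

#!/bin/bash
# Copy the scripts to every node listed in hosts.txt and run Script_A on each.
while read -r hostname; do
    echo "Setting up $hostname"
    scp Script_[ABCD] "root@${hostname}:./"
    ssh "root@${hostname}" ./Script_A
done < hosts.txt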
I also manage Elasticsearch clusters with multiple nodes. A hack that works for me is using the Terminator terminal emulator and splitting it into multiple windows/panes, one for each ES node. Then you can broadcast the commands you type in one window into all the windows.
This way, you run commands and view their results almost interactively across all nodes in parallel. You could also save this layout of windows in Terminator, and then you can get this view quickly using a shortcut.
PS: this approach will only work if you have a small number of nodes, and even then only for small tasks. The only thing that will scale with the number of nodes and the number and variety of tasks you need to perform will probably be a config management solution like Puppet or Salt.
Fabric is another interesting project that may be relevant to your use case.
I have created a simple freestyle job in Jenkins locally on a Windows box. I have created a single parameter for the job using https://wiki.jenkins.io/display/JENKINS/Extensible+Choice+Parameter+plugin
However, once I create this same job in another instance of Jenkins, a CloudBees Jenkins instance running on Linux, the plugin no longer provides files in the configuration or dropdown when I run with parameters.
Notice Linux on the left and Windows on the right. Is this really related to Linux, or is it perhaps another issue?
Does it require significant work for the job running on the Linux Jenkins instance to be able to reference the UNC path that we use when we run the job on the Windows instance? I am not as up to speed with Linux as others, so I need help on this one.
Is Linux able to access a Windows file share?
Where is the gap in my understanding of how file shares work in this way?
I would imagine that if a plugin exists for Jenkins, unless otherwise stated in the documentation, the plugin should work on whatever is running Jenkins.
Perhaps Linux has an entirely different way of working with file/share paths?
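For what it's worth, Linux does handle this differently: there is no native notion of a UNC path, so a Windows (SMB/CIFS) share is usually mounted into the filesystem first and then accessed through an ordinary path. A minimal sketch, with a made-up share name, mount point and credentials, assuming cifs-utils is installed on the Linux Jenkins machine:

# Mount the Windows share \\fileserver\builds onto a local directory,
# then the job can read it via a normal Linux path.
sudo mkdir -p /mnt/builds
sudo mount -t cifs //fileserver/builds /mnt/builds -o username=jenkins,password=secret
ls /mnt/builds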
I have Jenkins installed on a Linux server. It can run builds on itself. I want to create either a Freestyle Project or an External Job that transfers a bash script and runs it on two separate Linux servers. Where in the GUI do I configure the destination server when I create a build? I have added "nodes" in the GUI. I can see the free space of the servers in the Jenkins GUI, so I know the credentials work. But when I create a build, I see no field that would tell Jenkins to push the bash scripts and run them on certain servers.
Are Jenkins nodes just servers that lend computing power to the master server? Or are they the targets of Jenkins builds? I believe that Jenkins "slaves" provide computing power to the Jenkins master server.
Normally Jenkins is used to integrate code. What do you call the servers that Jenkins pushes code into? They would be called Chef clients or Puppet agents if I was using Chef or Puppet for integrating code. I've been doing my own research, but I don't seem to know the specific vocabulary.
I've been working with such tools for several years, and as far as I know there isn't a ubiquitous language for this.
The nodes you can configure in Jenkins itself to add 'computing power' are indeed called build slaves.
Usually, external machines you will copy to, deploy to or otherwise use in jobs are called "target machines", as they are the target of an action in your job.
Nodes can be used in several forms. You can use agents, which require a small installation on the node machine; this creates a running agent service with which Jenkins can communicate.
Another way is to simply allow Jenkins to connect to a machine via ssh and let it execute commands there. Both are called nodes and could be called build slaves, but the first are usually dedicated nodes, while the second can be any kind of machine, as long as the ssh user can execute the build.
I also have not found any different terms for these two types.
It's probably not a real answer to your questions, but I do hope it helped.
We have a Jenkins server that uses the SSH plugin to configure SSH remote hosts within the global Jenkins configuration. These ssh connections use a public/private key for authentication to the remote host.
We then use these configured SSH remote hosts in the build step "Execute shell script on remote host using ssh" (I believe this is also part of the SSH plugin) in a number of jobs.
The problem I'm having is that any job using "Execute shell script on remote host using ssh" must run on a Windows slave, since I haven't found a way to put in some sort of relative path to the keyfile.
On windows the file would be located at: C:\Users\<username>\.ssh\
On linux the file would be located at: /home/<username>/.ssh/
I've tried many iterations of using system environment variables, setting environment variables on the node configuration page and using these as part of the keyfile path without any luck.
Am I missing something? Is there a better way to handle it? There must be a way to manage keyfile locations and differences between ssh remote hosts across slaves.
Unfortunately, I believe there isn't a way to specify a relative path — the keyfile path configured must be the same on every build slave. Far from ideal, I know.
I'm not sure how Windows would handle it, but perhaps something like /ssh/whatever.key would work, if you were to place the file at c:\ssh\whatever.key and /ssh/whatever.key for Windows and Linux machines, respectively.
In any case, the plugin has since been modified to use the Jenkins Credentials plugin, which allows you to manage username/password or private key-based credentials from the Jenkins UI, without having to place files on disk.
However, although this has been integrated into the SSH plugin, there has not yet been a new release containing this functionality; it looks like it should be coming "soon".
So if the workaround doesn't work, you can try to:
Wait for a new release
Post on the jenkinsci-users list to ask about a new release
Download and install a recent build of the plugin
(though I would be careful to back up the existing job config before trying this; or try it on a separate Jenkins instance)