Maintain code in bash scripts or in Jenkins? - linux

I'm currently working with Linux VMs and I use Jenkins Pipelines to run various jobs written in bash. I have two options regarding where the code is written and maintained:
In pipelines with sh '#some code' (Git integrated)
In bash scripts placed in the VM with sh './bashscript'
Which one would you suggest?

Use Git to store the scripts and any related code: Git is a version control system, so every user with access can view the files and track or make changes.
When the Jenkins job runs, a workspace folder is created on the server the job runs on, and the script is copied from Git into that folder.
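As a minimal sketch of that setup (the stage name and the scripts/deploy.sh path are hypothetical, not from the question), the pipeline checks the repository out into the workspace and runs the script from there:
pipeline {
    agent any
    stages {
        stage('Run script') {
            steps {
                // pulls the Jenkinsfile's repository (including the scripts) into the workspace
                checkout scm
                // run the version-controlled script; the path is a placeholder
                sh 'chmod +x scripts/deploy.sh && ./scripts/deploy.sh'
            }
        }
    }
}
This way the script gets the same review and history as the rest of the code, instead of living unversioned on the VM.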

Related

Jenkins - workspace for declarative Jenkins pipeline?

I am running a test job on Jenkins, using Jenkinsfile pipeline. The job targets the node ubuntu-node.
After the job is done, when I select the "Workspace" link, I get 2 entries, for example:
Workspaces for validation_my_proj #105
/home/jenkins/workspace/validation_my_proj#script on master
/home/jenkins/workspace/validation_my_proj on ubuntu-node
Could someone explain why I have two workspaces? What does the first entry, validation_my_proj#script on master, mean?
I am having some problems linking the executable produced by the build with a shared library (using the Meson build system), and I am wondering whether this workspace setup has anything to do with it, because locally everything works fine; it only fails on Jenkins.

Automating build installation process

I work on a SaaS product of my company, which is hosted on a private cloud. Every time a fresh BOM package is made available by the DEV team in the common share folder, we, the testing team, install the build on our application servers (three multi-node servers, one primary and two secondary).
The build installation is done entirely manually on the three app servers (Linux machines); the steps we follow are:
Stop all the app servers
Copy the latest build (the .zip build file) from the code repository server
Unzip the contents of the file into a folder on the app server (using the unzip command)
Back up the existing running build on all three servers (the command is something like ant -f primaryBackup.xml, ant -f secondaryBackup.xml)
Run the install on all three servers (the command is something like ant -f primaryInstall.xml, ant -f secondaryInstall.xml)
Restart all the servers and check that the latest build was applied successfully.
Question: I want to automate this entire process, so that I only have to supply the latest build number and the script takes care of the whole installation.
Presently I don't understand how this can be done. Where should I start? Is this feasible? Would a shell script of the entire process be the solution?
There are many build automation/continuous deployment tools out there that would help you automate your deployment pipeline. Some of the more popular configuration automation tools are Puppet, Chef, Ansible, and SaltStack. I only have experience with Ansible and Chef, but my impression has been that Chef is the more "user-friendly" option. I would start there... (Chef uses the Ruby language and Ansible uses Python.)
I can answer specific questions about this, but your original question is really open-ended and broad.
Free tutorials: https://learn.chef.io/
EDIT: I do not suggest provisioning your servers/deployments using bash scripts... that is generally messy, and as your automation grows (which it likely will), your code will gradually become unmanageable. Using something like Chef, you could set up periodic checks for new code in your repositories and deploy when new code is detected (or when certain conditions are met). You can write straight bash code within a Ruby block that will remotely stop/start a service like this (example):
bash 'restart_app_service' do
  cwd 'current/working/directory'
  user 'user_name'
  code <<-EOH
    # stop the running service, then start it again (hypothetical helper scripts)
    nohup ./stopservice.sh &
    sleep 2m
    nohup ./startservice.sh &
    sleep 3m
  EOH
end
To copy code from Git, for example (I am assuming GitHub here, as I do not know where your code resides):
git "/opt/mysources/couch" do
repository "git://git.apache.org/couchdb.git"
reference "master"
action :sync
ssh_wrapper "/some/path/git_wrapper.sh"
end
Let's say your code lives somewhere else, Bamboo or Jenkins for example; there is a Ruby/Chef resource for it, or some way to call it using straight Ruby code.
This is something that you and your team will have to figure out a strategy for.
You could untar a file with a tar resource like so:
tar_package 'http://pgfoundry.org/frs/download.php/1446/pgpool-3.4.1.tar.gz' do
  prefix '/usr/local'
  creates '/usr/local/bin/pgpool'
end
or use the generic Linux command like so:
execute 'extract_some_tar' do
  command 'tar xzvf somefile.tar.gz'
  cwd '/directory/of/tar/here'
  not_if { ::File.exist?('/file/contained/in/tar/here') }  # skip if already extracted
end
You can start up the servers the way I wrote the first block of code (assuming they are services; if you need to restart the actual machines, you can just run init 6 or something similar).
This is just an example of the flexibility these utilities offer.

Execute a script after every git push

There is a server running with a Git instance on it. I want a script to run every time a user does a git push to the server; the script should execute first, and then the push should continue.
Any workarounds?
You've tagged this GitHub, so I'm assuming that you are referring to public GitHub and not GitHub Enterprise.
You cannot run a script "server-side" on GitHub's servers, because that would obviously be a massive vulnerability, but you can set up a webhook to trigger a script on another server.
Basically, whenever someone does a push, a specific URL is sent data about the push, and you can trigger a script from that. For more information on webhooks, see the GitHub API docs.
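As a hedged sketch, a push webhook can even be registered through the GitHub REST API; OWNER, REPO, the token, and the receiver URL below are all placeholders:
# Register a push webhook via the GitHub REST API (all values are placeholders)
curl -X POST \
  -H "Authorization: token $GITHUB_TOKEN" \
  -H "Accept: application/vnd.github+json" \
  https://api.github.com/repos/OWNER/REPO/hooks \
  -d '{"name": "web", "active": true, "events": ["push"],
       "config": {"url": "https://example.com/webhook", "content_type": "json"}}'
GitHub will then POST the push payload to https://example.com/webhook, where your script can pick it up.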
I am not sure if you want the script to run prior to the push or after it, so here is my answer for pre-push. If you want post-push (i.e. after the push), you have to adapt the pre-push hook accordingly: check whether the push succeeded, and then do the post-push work.
As suggested by @Travis, Git hooks are what you are looking for. To execute a script pre-push, all you have to do is create a pre-push file in .git/hooks: put your code in the script file .git/hooks/pre-push and save it, then make it executable with chmod +x .git/hooks/pre-push. Once this is done, the script will be executed each time you run the push command.
PS: Please note that I haven't tested all of this, but it is expected to work this way.
In short, assuming you (a Linux user) are in the project directory:
vim .git/hooks/pre-push # then add your code and save the file
# Also put the shebang on top to identify the interpreter
chmod +x .git/hooks/pre-push # make it executable
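For illustration, a minimal pre-push hook might look like this; run_tests.sh is a hypothetical check script, and a non-zero exit aborts the push:
#!/bin/sh
# .git/hooks/pre-push -- runs before the push is transmitted
echo "Running pre-push checks..."
./run_tests.sh || exit 1   # hypothetical check; failing it aborts the push
exit 0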
You should look into git hooks:
8.3 Customizing Git - Git Hooks
and another site regarding this technology:
githooks.com

How to migrate Jenkins job from windows local machine to Linux server?

I installed Jenkins on my local machine (Windows) and created a new job with it, which works perfectly... Now I have installed Jenkins on a dedicated Linux server. How do I migrate the job from Windows (my local machine) to the newly installed Jenkins on the Linux server?
The safest solution is to use the Job Import plugin.
Install this plugin on the Linux server, then import the job from the Windows Jenkins URL :)
You can also check in your job configs with a smart .gitignore (or the equivalent in your SCM of choice) and treat %JENKINS_HOME% as a checked-in, versioned directory.
Job configs are OS-independent, though the job itself might contain OS-specific scripts (if you use a shell script instead of a Maven pom file / Ant build.xml).
Then you can just check out the job repo to the new Linux host's $JENKINS_HOME directory and start up Jenkins; all your jobs should be found and added to your Linux Jenkins (without the need for a plugin).
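As a rough sketch of that flow (the repository URL and paths are hypothetical):
# Clone the versioned job configs and drop them into the new server's JENKINS_HOME
git clone ssh://git@git.example.com/jenkins-jobs.git /tmp/jenkins-jobs
cp -r /tmp/jenkins-jobs/jobs/* "$JENKINS_HOME/jobs/"
# Restart Jenkins (or use Manage Jenkins -> Reload Configuration from Disk)
sudo systemctl restart jenkins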
Generally speaking... the fewer plugins, the more stable your Jenkins install will be.

Jenkins to SCP a few folders and files from SVN to linux server?

I use Jenkins to do an automatic weekly deployment to a Tomcat server, which is fairly simple using curl with the Tomcat manager; since I am only uploading a .war file, it is very straightforward.
But when it comes to a backend console application, does anyone have an idea how to use Jenkins to upload an entire set of folders and files onto a Linux box? My project is built via Ant and has all the folders inside SVN.
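For context, the manager-based .war deployment mentioned above boils down to something like this (host, credentials, and paths are hypothetical):
# Deploy a .war through the Tomcat manager's text API (all values are placeholders)
curl -u deployer:secret --upload-file target/myapp.war \
  "http://tomcat-host:8080/manager/text/deploy?path=/myapp&update=true"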
A couple things come to mind.
Probably the most straightforward thing to do is use the Ant scp task to push the directory or directories up to the server. You'll need the jsch jar on your Ant classpath to make it work, but that's not too bad to deal with. See the Ant docs for the scp task here. If you want to keep your main build script clean, just make another build script that Jenkins can run, named 'deploy.xml' or similar. This has the added benefit that you can use it from places other than Jenkins.
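The task essentially automates a recursive copy; as a plain-shell illustration of the end result (not the Ant task itself; host and paths are hypothetical):
# Recursively copy the build output up to the server
scp -r build/output/ deploy@appserver:/opt/myapp/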
Another idea is to check the files out directly on the server from SVN. Again, Ant can probably help you with this if you use the sshexec task and run the svn checkout inside of it. The sshexec docs are here.
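In plain shell terms, that approach amounts to something like the following (host, repository URL, and target path are hypothetical):
# Log in to the server and check the project out there directly
ssh deploy@appserver 'svn checkout https://svn.example.com/repo/trunk /opt/myapp'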
Finally, Jenkins has a "Publish Over SSH" plugin you might try out. I've not used it personally, but it looks promising! Right over here!
