Automating build installation process - linux

I work on a SaaS-based product at my company, which is hosted on a private cloud. Every time a fresh BOM package is made available by the DEV team in the common share folder, we, the testing team, install the build on our application servers (3 multi-node servers, with one being primary and the other two secondary).
The build installation is done entirely manually on the three app servers (Linux machines). The steps we follow are as below:
Stop all the app servers
Copy the latest build from a code repository server (copy the .zip build file)
Unzip the contents of the file onto a folder on the app server (using the unzip command)
Run a backup of the existing running build on all three servers (the command is something like: ant -f primaryBackup.xml, ant -f secondaryBackup.xml)
Then run the install on all three servers (the command is something like: ant -f primaryInstall.xml, ant -f secondaryInstall.xml)
Then restart all the servers and check whether the latest build was applied successfully.
Question: I want to automate this entire process, so that I only have to supply the latest build number and the script takes care of the whole installation.
Presently I don't understand how this can be done. Where should I start? Is this feasible? Would a shell script of the entire process be the solution?
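Roughly, what I imagine such a script doing on each server is something like the sketch below (the paths, host name, and build-file names are made up for illustration):
#!/bin/bash
# Rough sketch only - paths, hosts and file names are placeholders.
set -e
BUILD_NO="$1"                          # e.g. ./install_build.sh 4.2.117
BUILD_ZIP="build-${BUILD_NO}.zip"      # hypothetical naming convention
REPO_HOST="repo-server"                # hypothetical code repository server
APP_DIR="/opt/app"                     # hypothetical install location

# 1. Stop the app server
"${APP_DIR}/bin/stopserver.sh"

# 2. Copy the latest build from the repository server
scp "${REPO_HOST}:/share/builds/${BUILD_ZIP}" /tmp/

# 3. Unzip the contents onto a folder on the app server
unzip -o "/tmp/${BUILD_ZIP}" -d "${APP_DIR}/staging/${BUILD_NO}"

# 4. Back up the existing build, then 5. install the new one
cd "${APP_DIR}/staging/${BUILD_NO}"
ant -f primaryBackup.xml
ant -f primaryInstall.xml

# 6. Restart and verify
"${APP_DIR}/bin/startserver.sh"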

There are many build automation / continuous deployment tools out there that would help you automate your deployment pipeline. Some of the more popular configuration automation tools are Puppet, Chef, Ansible, and SaltStack. I only have experience with Ansible and Chef, but my impression has been that Chef is the more "user-friendly" option. I would start there... (Chef uses the Ruby language and Ansible uses Python).
I can answer specific questions about this, but your original question is really open-ended and broad.
Free tutorials: https://learn.chef.io/
EDIT: I do not suggest provisioning your servers/deployments using bash scripts... that is generally messy, and as your automation grows (which it likely will), your code will gradually become unmanageable. Using something like Chef, you could set periodic checks for new code in your repositories and deploy when new code is detected (or when certain conditions are met). You could write straight bash code within a Ruby block that will remotely stop/start a service like this (example):
bash 'restart the application service' do
  cwd 'current/working/directory'
  user 'user_name'
  code <<-EOH
    nohup ./stopservice.sh &
    sleep 2m
    nohup ./startservice.sh &
    sleep 3m
  EOH
end
To copy code from Git, for example (I am assuming a plain Git repository in this example, as I do not know where your code resides):
git "/opt/mysources/couch" do
repository "git://git.apache.org/couchdb.git"
reference "master"
action :sync
ssh_wrapper "/some/path/git_wrapper.sh"
end
Let's say that your code lives anywhere else... Bamboo or Jenkins, for example... there is a Ruby/Chef resource for it, or some way to call it using straight Ruby code.
This is something that you and your team will have to figure out a strategy for.
You could untar a file with a tar resource like so:
tar_package 'http://pgfoundry.org/frs/download.php/1446/pgpool-3.4.1.tar.gz' do
  prefix '/usr/local'
  creates '/usr/local/bin/pgpool'
end
or use the generic Linux command like so:
execute 'extract_some_tar' do
  command 'tar xzvf somefile.tar.gz'
  cwd '/directory/of/tar/here'
  not_if { File.exist?("/file/contained/in/tar/here") }
end
You can start up the servers the way I wrote the first block of code (assuming they are services... if you need to restart the actual servers, then you can just run init 6 or something).
This is just an example of the flexibility these utilities offer.

Related

Maintain code on bash scripts or Jenkins?

I'm currently working with Linux VMs and I use Jenkins Pipelines to run various jobs written in bash. I have 2 options regarding where the code is written and maintained:
In pipelines with sh '#some code' (Git integrated)
In bash scripts placed in the VM with sh './bashscript'
Which one would you suggest?
Use Git to store the scripts or related code, as Git is a version control system and all users who have access can view the file or make changes.
When the Jenkins job runs, a workspace folder is created on the server on which the job is running, and the script is copied from Git into that folder.
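In practice, the checkout plus sh './bashscript' combination boils down to something like this on the agent (the repo URL and script name below are placeholders):
# Roughly what the pipeline's checkout + sh './bashscript' steps amount to.
git clone https://example.com/your/repo.git workspace
cd workspace
chmod +x ./bashscript
./bashscript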

Installing Node in a linux grid server

So, some background: I'm installing Node on a host server, but it's a grid server, not a server that's solely for my website.
The grid server doesn't give me a root user / administrative powers. So to install Node I found this workaround: http://iantearle.com/blog/media-temple-grid-and-nodejs . It's a Linux grid server and I've never used Linux, so could someone explain to me what the commands mean, especially: ./configure --prefix=~/opt/
Lastly, I followed the steps, but when I try to run the node command on the server it says node: command not found, which is why I'm trying to understand the steps. Thanks.
To explain the process:
Configure
The configure script is responsible for getting ready to build the software on your specific system. It makes sure all of the dependencies for the rest of the build and install process are available, and finds out whatever it needs to know to use those dependencies.
Unix programs are often written in C, so we’ll usually need a C compiler to build them. In these cases the configure script will establish that your system does indeed have a C compiler, and find out what it’s called and where to find it.
Make
Once configure has done its job, we can invoke make to build the software. This runs a series of tasks defined in a Makefile to build the finished program from its source code.
The tarball you download usually doesn’t include a finished Makefile. Instead it comes with a template called Makefile.in and the configure script produces a customised Makefile specific to your system.
Make Install
Now that the software is built and ready to run, the files can be copied to their final destinations. The make install command will copy the built program, and its libraries and documentation, to the correct locations.
--prefix=~/opt/ -> will set the installation directory to /home/yourhome/opt instead of the system default.
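Put together, the sequence from the guide looks roughly like this, run from the unpacked Node source directory:
./configure --prefix=$HOME/opt   # prepare the build; install under /home/yourhome/opt instead of /usr/local
make                             # compile the sources using the generated Makefile (this can take a while)
make install                     # copy the built program, libraries and docs into ~/opt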
Now, if you didn't get errors while doing the 3 steps explained above, make sure you do the following:
nano ~/.bash_profile
export PATH=~/opt/bin:${PATH}
nano is a text editor, and you are opening the .bash_profile file with it.
You need to add export PATH=~/opt/bin:${PATH} to that file and save it using Ctrl+X.
Then restart your terminal.
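If you prefer not to edit the file by hand, appending the line and reloading the profile does the same thing:
# Append ~/opt/bin to the PATH and reload the profile in the current shell
echo 'export PATH=~/opt/bin:${PATH}' >> ~/.bash_profile
source ~/.bash_profile    # or simply restart the terminal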
The GitHub repository for Node.js specified there is outdated; use the following instead:
git clone https://github.com/nodejs/node.git
P.S. node: command not found usually happens when the program is not installed correctly or its executable isn't in your terminal's PATH variable.
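A quick way to check which of the two it is (assuming the ~/opt prefix used above):
ls ~/opt/bin/node     # does the binary exist where it was supposed to be installed?
echo $PATH            # is the expanded ~/opt/bin directory listed here?
command -v node       # prints the path the shell would use, or nothing at all
~/opt/bin/node -v     # run it directly, bypassing PATH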

Jenkins tfs plug-in and checkout source on remote node

First, I'm a Jenkins neophyte. I have made a free-style software project in Jenkins to perform my Linux build. The Jenkins server is running on Windows so there are slave nodes configured for doing this Linux build. The sources are kept in a TFS server.
I updated our TFS plugin to the latest of 4.0.0. This plugin says that it is no longer necessary for slave nodes to have the Team Explorer Everywhere package installed as it uses the Java API. However, when I kick off my build, I get this:
Started by user Andy Falanga (afalanga)
[EnvInject] - Loading node environment variables.
Building remotely on dmdevlnx64-01 (PY27-64 CENTOS6-64 LOG4CPLUS PY26-64) in workspace /home/builder/jenkins/workspace/Linux Autotools Build
Deleting project workspace... done
Querying for remote changeset at '$/Sources/Branches/Andy/AutotoolsMigration' as of 'D2015-10-05T18:26:27Z'...
Query result is: Changeset #4872 by 'WINNTDOM\afalanga' on '2015-09-25T23:36:24Z'.
Listing workspaces from http://ets-tfs:8080/tfs/SoftwareCollection...
... Long list of workspaces
Workspace Created by Team Build
Getting version 'C4872' to '/home/builder/jenkins/workspace/Linux Autotools Build'...
Finished getting version 'C4872'.
[Linux Autotools Build] $ /bin/bash /tmp/hudson7081873611439714406.sh
Bootstrapping autotools
/tmp/hudson7081873611439714406.sh: line 4: ./bootstrap: No such file or directory
Build step 'Execute shell' marked build as failure
Notifying upstream projects of job completion
Finished: FAILURE
I log into that system and look in the directory /home/builder/jenkins/workspace/Linux Autotools Build and sure enough, there's nothing there. My configuration is pretty simple.
I have "Discard old builds" checked, with a simple rotation (this is just me learning how to use it).
I have it set to "Restrict where the build is done", with a label that maps to the 3 slave nodes for doing this build.
All TFS credentials are input and correct.
No build triggers
A simple shell script for Build->Execute Shell which bootstraps the autotools and calls configure and then make.
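The shell step is essentially just this (simplified):
echo "Bootstrapping autotools"
./bootstrap
./configure
make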
What am I doing incorrectly?
I found the answer and am posting it here in case someone else runs into this. This seems better than simply deleting the question. The TFS plugin doesn't seem to like spaces in the project name. The name before was Linux Autotools Build, which didn't work; the name now is LinuxAutotoolsBuild, which does.
The errors provided by the Jenkins system didn't provide enough information for this to be apparent. After trying a few other things the thought occurred, "Perhaps the spaces are causing grief."
Hope this helps someone.

Automating an install of Apache Ant

I've manually installed Ant on many servers simply by unzipping the Ant files into a location and setting up ~/.bash_profile to configure the users' PATH to see it.
I need to automate the setup now on servers which do not have internet connectivity.
We are using Nolio for deployment, but I don't care if the automation is done via Nolio. If it can be scripted, I can easily just make Nolio call the script.
I don't think editing the users' .bash_profiles is a good way to do the automation.
So, assuming I get Ant on to the servers and unzip it, what's the best way to install it so that all users will have access to it?
You can try using pssh (parallel ssh). It's pretty awesome. Create a file with all your remote hosts (hosts.txt below), then run:
pssh -h hosts.txt "command1 && command2 && command3"
You can use pscp to deliver scripts, then use pssh to execute them. Works very well. Alternatively, you could become a Puppet master and work everything off Puppet. You can do some cool stuff with it, like automating builds based on hostname convention. LAMP build? Name the host web01.blarg.awesome or whatever, set Puppet up to recognize it based on a regex, then deliver the appropriate packages.
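As a rough sketch of that approach for the Ant case (the host file, Ant version and install paths are placeholders; dropping a file into /etc/profile.d is one common way to expose the PATH to every user without touching individual .bash_profiles):
# On the machine driving the rollout:
pscp -h hosts.txt apache-ant-1.9.4-bin.zip /tmp/apache-ant-1.9.4-bin.zip
pscp -h hosts.txt install_ant.sh /tmp/install_ant.sh
pssh -h hosts.txt "sudo bash /tmp/install_ant.sh"

# install_ant.sh (what runs on each server): unzip Ant and expose it to all users
unzip -o /tmp/apache-ant-1.9.4-bin.zip -d /opt
ln -sfn /opt/apache-ant-1.9.4 /opt/ant
printf 'export ANT_HOME=/opt/ant\nexport PATH=$ANT_HOME/bin:$PATH\n' > /etc/profile.d/ant.sh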
GL.

Run build script in Code::Blocks instead of building a target

Background:
I recently joined a software development company as an intern and am getting used to a new build system. The software is for an embedded system, and let's just say that all building and compiling is done on a build box. The building makes use of code generation using XML files, and then makes use of make files, spec files, and version files as well.
We develop on our own computers (Linux, Mandriva distro) and build using the following steps:
ssh buildserver
use mount to mount drive on personal computer to the buildserver
set the environment using . ./set_env (may not be exactly that)
cd app_dir/obj (where makefile is)
make spec_clean
make spec_all
make clean
make
The Question:
I am a newbie to Code::Blocks and Linux and was wondering how to set up a project file so that it simply runs a script file executing these commands, instead of invoking the build process on my own computer. Sort of like a pre-build script. I want to pair the execution of this script to Ctrl-F9 (build) and capture any output from the above commands in the build log window.
In other words, there is no build configuration or target that the project needs to worry about. I don't even need a compiler installed on my computer! I wish to set this up so that I can have the full features of an IDE.
Appreciate any suggestions!
Put your commands in a shell script file, e.g.:
#!/bin/sh
mount ... /mnt/path/buildserver
. ./set_env
cd app_dir/obj
make spec_clean
make spec_all
make clean
make
Say you name it /path/to/my_build_script (on the build server), then chmod 755 /path/to/my_build_script there and invoke the following from your ssh client machine:
script -c 'ssh buildserver "/path/to/my_build_script"'
When it finishes, check the file typescript under the current directory for the captured output.
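If you would rather stream the output (for example, into Code::Blocks' build log) instead of reading the typescript file afterwards, piping through tee is an alternative:
# Run the remote build and both display and record its output locally
ssh buildserver '/path/to/my_build_script' 2>&1 | tee build.log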
HTH
