From what I understand, a CloudFormation template can retrieve a file from a remote location and run it (e.g. a bash shell script), for example downloading a bash script that installs Graphite/OpenTSDB RRD tools.
My question is: is there a best practice for scripting the installation step by step with CloudFormation template commands versus having the template retrieve a bash script that performs the installation?
Thanks
There is no "best" way to do it, there are only lots of different options with different trade-offs.
Putting scripts in your CF template quickly becomes tiresome because you have to quote your data.
Linking to shell scripts can get complex because you have to specify everything in detail, and the steps can get brittle.
After a while, you'll want to use Puppet or Chef. These let you declare what you want ("Apache 2.1 should be installed, the config file should look like this...") instead of specifying how it should be done. This can keep complex things organized. (But there is a learning curve. Look into OpsWorks.)
After that, you'll want to bundle your image into an AMI (this speeds things up if your build takes a while, and your build otherwise relies on other servers on the internet being up in order to install!)
I'd suggest you use user-data, given as a parameter to your template. Whether it is saved locally or remotely, it is best to separate your infrastructure details (i.e. the template) from the boot logic (the shell script). The user data can be a shell script, and it will get invoked when your instances boot.
Here's an example of providing user-data as a parameter:
"Parameters":{
"KeyName":{
"Description":"N/A",
"Type":"String"
},
"initScript":{
"Description":"The shell script to be executed on boot",
"Type":"String"
},
},
"Resources":{
"workersGroup1":{
"GlobalWorker":{
"Type":"AWS::EC2::Instance",
"Properties":{
"InstanceType":"t1.micro",
"ImageId":"ami-XXXX",
"UserData":{"Fn::Base64":{"Fn::Join":["", [{"Ref":"initScript"}]]}},
...
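When creating the stack, the boot script can then be passed in as the parameter value. Here is a hypothetical AWS CLI invocation, assuming the template is saved as workers.json and the boot script as boot.sh (stack, key and file names are all placeholders):

# Hypothetical invocation; stack, key and file names are placeholders.
aws cloudformation create-stack \
    --stack-name workers \
    --template-body file://workers.json \
    --parameters ParameterKey=KeyName,ParameterValue=my-key \
                 ParameterKey=initScript,ParameterValue="$(cat boot.sh)"

Note that user data and template parameters have size limits, so a long installer script may be better fetched from S3 by a short bootstrap snippet passed this way.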
Related
I have a shell script that installs a particular piece of software on an Azure VM. Say it is install_software.sh.
There are environment-specific parameters defined in a .param file. For example,
INSTALLATION_PATH=XYZ
INSTALLER_LOCATION=ABC
I plan to do this:
Create 3 param files specific to DEV, QA, PROD environments
Load all three files from GitHub onto the VM
Accept environment name as an argument while executing the script, example:
sh install_software.sh DEV
Check whether $1 of the executed command is DEV, and export the variables from the DEV .param file (a sketch follows below).
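For illustration, a minimal sketch of that selection step, assuming the three files are named DEV.param, QA.param and PROD.param and sit next to the script (all names and paths here are illustrative):

#!/bin/bash
# install_software.sh - illustrative sketch only
ENV_NAME="${1:?Usage: sh install_software.sh DEV|QA|PROD}"
PARAM_FILE="$(dirname "$0")/${ENV_NAME}.param"

if [ ! -f "$PARAM_FILE" ]; then
    echo "Unknown environment: $ENV_NAME" >&2
    exit 1
fi

# Export every variable assigned in the chosen .param file
set -a
. "$PARAM_FILE"
set +a

echo "Installing to $INSTALLATION_PATH from $INSTALLER_LOCATION"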
Now, do you think this is a good approach, or is there a smarter one? I would appreciate being pointed to any sample code snippets too. Thank you very much.
I am trying to debug a shell script that is executed via a Jenkins job. The first thing the script does is include another script that is in a completely different repo. My instinct is telling me that the user that Jenkins is executing the script from has access to the directory for the other repo through $PATH or some other similar mechanism, but nothing I’m seeing indicates this.
I've looked over the variables in http://$host/systemInfo, tried logging on to the Linux box, switched to various users and searched through the command history for each, looked at the $PATH variable for each, and even tried executing a test shell script with the same include as different users. I'm still not seeing anything that indicates how Jenkins is able to include a file from a different repo, and I have not been able to get the include to work in my test script.
My main questions are:
How can I determine what user Jenkins is executing the original shell script as? I would assume user 'jenkins' but I'm not able to get the include to work in my test script executing as this user.
How is Jenkins able to include a script from a different repo?
I'm sure I'm just running into some fundamental Jenkins ignorance on my part but not finding answers. Thanks in advance for any insight.
Finally found the answer and it seems really obvious now that I see it. The Jenkins server that the job runs from has a PATH environment variable defined in the server config in the Jenkins interface. This PATH points to the directory containing the external script.
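If anyone else runs into this, a quick way to confirm both the executing user and the effective PATH is to add a throwaway "Execute shell" build step to the job, along the lines of:

# Throwaway diagnostic build step - prints the effective user and environment
whoami
echo "$PATH"
env | sort

That shows exactly what the job sees at runtime, including anything injected by the server or node configuration.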
I've just started to get to grips with Jenkins. It currently performs the following tasks:
Pulls the latest codebase from git
Uploads the codebase via sftp to my environment
Sends a notification email to the testers and the PM to inform them of a completed deployment.
However for it to be truly useful I need it to perform two more tasks:
Delete the robots.txt and .htaccess files that exist in the git repo and replace them with predefined versions specific to the server
Go through all the code and remove specific code blocks (perhaps something between comments, e.g. /** Dev only **/ Code to be removed goes here /** Dev only **/ or something like that).
Are there any plugins which can accomplish these things, or would I have to read up on writing Groovy scripts for this sort of thing (I don't know anything about those yet)?
On a related note: I'd also love it if it could combine kit and SASS files; I can't see a plugin for these, but I assume I can just install Compass on my build server and run it from the command line during the build. Is that correct?
Instead of putting your build tasks directly into the Jenkins job, I recommend writing a build script to accomplish your publishing/deployment tasks.
Jenkins is great for having a single point of automation that is easy to run, can publish build results, and can track successes and failures. In my experience though, you're better off not putting your individual tasks and configuration steps into the Jenkins job configuration. At some point, you'll want to be able to run this job without Jenkins, either because you want to test local changes, or you want to handle multiple jobs and trying to keep job configurations in sync is not fun, or because you're moving to another build/deployment system. Also, putting the build script into a file allows you to put it into your source control system and track changes.
My advice: choose a scripting language (Python, Ruby, Perl, whatever you're comfortable with) or a build system (SCons and Rake are options) and write a build script. In Python, Ruby, and Perl it's easy to manipulate files (#1), and all of them have a wide choice of templating systems that will accomplish #2. The Jenkins job then becomes just running your build script on the command line (or executing it through a language-specific builder). And the build script can include running any of the tasks that you decide to put in your build (Compass, etc.).
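If you stay with plain shell for now, a rough sketch of tasks #1 and #2 could look like the following. The deploy/ directory, the file names and the assumption that the dev-only markers always appear in pairs are all made up for the example:

#!/bin/bash
# Rough sketch only; paths and the marker convention are assumptions.
SRC=build/codebase

# 1. Swap in the server-specific robots.txt and .htaccess
cp deploy/robots.txt.server   "$SRC/robots.txt"
cp deploy/htaccess.server     "$SRC/.htaccess"

# 2. Strip everything between paired /** Dev only **/ markers (assuming PHP sources)
find "$SRC" -name '*.php' -print0 | while IFS= read -r -d '' f; do
    sed -i '/\/\*\* Dev only \*\*\//,/\/\*\* Dev only \*\*\//d' "$f"
done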
How can I schedule a build, without creating a tag, for Windows, Linux and WCE in Hudson using a shell script, and generate a report that will be sent to a specified server?
And so the conditions are:
1. How can I create the build without creating a new tag?
2. How is it possible to execute .sh scripts on Windows and WCE (Windows Mobile)? Is it simply by going through Cygwin? Moreover, since the build is cross-platform (3 platforms), does that mean I must run the build 3 times?
3. How can I generate a report and save it in a directory on a server that I'm authorized to access?
I know that I asked many questions at once; this is my first time using Hudson and these are the kind of details I'm unsure about. Moreover, I don't want to make a mistake by creating new tags during my tests. The 1st and 3rd questions are the most important. If anyone gives me the right answer to them, I'll choose it as the accepted answer.
Thank you a lot.
First, people nowadays mostly use Jenkins instead of Hudson (open source, better support).
A build can be started manually in Hudson/Jenkins: just click the green arrow. It will create a new build but won't change your repository (unless the last step of your build creates a tag; in that case, just remove that step for testing).
Usually, .sh scripts run in shell executables (ash, sh, bash, csh...) and are not supported by the default shell on Windows. You'll have to go through Cygwin or use a platform-specific build command.
This one is not entirely clear to me. If you use Jenkins to set up a matrix build (with the matrix axis being your target platform), you'll automatically get a nice report in Jenkins itself (the status of each build). You can keep artifacts (use the post-build action "Archive the artifacts") or use another plugin to publish the files you like (for example, FTP publishing). A plain shell step can also do the publishing, as sketched below.
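Assuming you have ssh access to the target server, a post-build "Execute shell" step along these lines would do it (host and paths are placeholders):

# Post-build shell step; host and paths are placeholders
scp build/reports/report.html user@reports.example.com:/var/reports/${JOB_NAME}-${BUILD_NUMBER}.html

JOB_NAME and BUILD_NUMBER are standard variables that Jenkins exposes to build steps.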
Sorry for not being able to be more precise; that's as far as I understand your questions.
I write company internal software in PHP and C++.
What are the best methods of deploying this type of software to a Linux machine? Currently we use svn export; are there any other methods?
We use checkinstall. Just write a simple Makefile that copies the files to target directories on the target machine and then run checkinstall to create RPM, DEB or TGZ package, which you can later easily install with distribution package management tools.
You can even add shell scripts that are executed before and after files are copied, so you can do some pre and post processing like adding user accounts, crontab entries, etc.
Once you get more advanced, you can add dependencies to these packages so a single command could also pull in and install PHP, MySQL, Apache, GCC libraries, and even required PHP or Apache modules or some external C++ libs you might need.
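As a rough illustration of the workflow (the package name, version and paths are made up), the Makefile just needs an install target that copies files into place, and checkinstall wraps that install step into a package:

# The Makefile's "install" target copies files into place, e.g.:
#   install:
#       install -D -m 0755 myapp $(DESTDIR)/usr/local/bin/myapp
#       cp -r web/ $(DESTDIR)/var/www/myapp/

# Build a DEB/RPM/TGZ from that install step and register it with
# the package manager (name and version are placeholders):
sudo checkinstall --pkgname=myapp --pkgversion=1.0 --backup=no make install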
I think it depends on what you mean by deploy. Typically a deploy process for web projects involves a configuration scripting step in which you can take the same deploy package and cater it to specific servers (staging, development, production) by altering simple configuration directives.
In my experience with Linux servers, these systems are often custom built and often use rsync rather than svn export and/or scp alone.
A script might be executed from the command line like so:
$ deploy-site --package=app \
--platform=dev \
--title="Revsion 1.2"
Internally, the system would take whatever was in trunk for the given package from SVN (I'm sure you could adapt this really easily for git too) and generate a new unique tag with the log entry "deploying Revision 1.2".
Then it would patch any configuration scripts with the appropriate changes (URLs, hosts, database passwords, etc.) before rsyncing it to the appropriate destination.
If there are issues with the deployment, it's as easy as running the same command again only this time using one of your auto-generated tags from an earlier deploy:
$ deploy-site --package=app \
--platform=dev \
--title="Reverting to Revision 1.1" \
--tag=20090714200154
If you also have to compile on the other end, you could include a Makefile as part of your configuration patching and then execute a command via ssh that compiles the recently deployed code once the rsync process completes, roughly as sketched below.
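A minimal sketch of that rsync-then-compile step (host and paths are placeholders):

# Sync the prepared release to the target, then build it remotely.
rsync -az --delete build/app/ deploy@dev.example.com:/var/www/app/
ssh deploy@dev.example.com 'cd /var/www/app && make'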
There is, in my experience, a tradeoff between security and ease of deployment.
For my deployments, I've never had a problem using scp to move the files from one machine to another. You can write a simple BASH script to take a list of machines (from a text file or STDIN) and push a given directory/application to a given directory on all of the machines; a sketch follows below. If you hypothetically pushed it to a bin directory, the end user would never know the difference.
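A minimal version of such a script, assuming a servers.txt with one hostname per line (the file name, user and paths are placeholders):

#!/bin/bash
# Push a local directory to the same destination on every machine in servers.txt
APP_DIR=./myapp
DEST_DIR=/usr/local/bin

while IFS= read -r host; do
    # Redirect stdin so the copy cannot consume the host list
    scp -r "$APP_DIR"/* "deploy@${host}:${DEST_DIR}/" < /dev/null
done < servers.txt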
The only problem with that would be when you have multiple architectures and OSes, where it has to be compiled on each one individually. In that case, you could just write a script (the first example that pops into my mind is Net::SSH from Ruby) to take that list of servers, cd to the given directory, and run the compilation script. However, if all machines use the same architecture and configuration, you can hypothetically just compile it once on the machine that you are using to distribute.