How would you get ansible or puppet to deal with the following use case:
An application, X version 1, is installed with its configuration variables for version 1. Subsequently X version 2 is released with a different config variable set (i.e. the application may have added or removed variables in its files under /etc). I want to upgrade to X version 2 and preserve the old configuration from X version 1. I also want the option to roll back to X version 1 at a later date, restoring it to the configuration state it had prior to upgrading to X version 2.
How would you go about ensuring this using Ansible or Puppet?
Your question is likely to be flagged as overly broad because there are so many potential answers/approaches, and it's going to depend greatly upon a number of other questions, such as:
Are you using a package manager (rpm, apt, etc.) or are you installing applications manually, using GNU Automake, or something else?
Precisely what sorts of configuration files are involved, how many, where are they located, etc?
At the most basic level, if you're relying on well-maintained packages then simply using the appropriate package manager may suffice. If you're doing anything beyond that, you're going to have to customize things based on your own preferences. There is no single right or wrong answer here simply because there are so many different approaches depending on your individual needs and requirements.
By way of example, suppose you have an application that relies on the configuration file /etc/service.conf, which only has a single entry containing a version number:
version: 1.2.3
You could simply template this file and specify the version number in Ansible or Puppet. For example, in Ansible you would just have a template that looks like this:
version: {{ version }}
And then a playbook that looks something like this:
- hosts: localhost
  vars:
    version: 1.2.3
  tasks:
    - yum: name=package-{{ version }}
           state=present
    - template: src=service.template
                dest=/etc/service.conf
Of course you might want to expand this to ensure other versions of the package are removed so only the latest version exists.
If your environment is more complex, for example if there are a lot of different configuration files to maintain or templating isn't a viable solution, then you probably want to implement some sort of backup/archiving of the configuration files before updating them. This could be done in any one of a number of ways, for example:
Using the Ansible fetch module to fetch configuration files from the target server
Simply invoking tar, cp, or something similar to make a backup of the files on the target server (a sketch of this option follows the list)
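As a rough sketch of the tar option (the paths and filenames here are illustrative, not prescriptive):
# Snapshot the application's config before upgrading
BACKUP="/var/backups/X-config-$(date +%Y%m%d%H%M%S).tar.gz"
tar -czpf "$BACKUP" /etc/service.conf
# ...perform the upgrade to version 2...
# Rolling back later restores the saved configuration state:
# tar -xzpf "$BACKUP" -C /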
You could also design a completely unique method of maintaining multiple versions of applications. For example, we use symlinks to manage multiple versions of third party applications as well as our own applications. We build and install different versions of Apache in locations like /deploy/software/httpd-2.2.21, /deploy/software/httpd-2.4.0, etc. and have a symlink named /deploy/software/httpd that points to the version we currently want to run. All the version-specific configuration files, etc. remain in the version-specific directory, so switching versions is as simple as shutting down Apache, changing the symlink, and restarting Apache.
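A hedged sketch of that switch, using the example paths above and assuming apachectl lives under each version's bin directory:
/deploy/software/httpd/bin/apachectl stop       # stop the currently linked version
ln -sfn /deploy/software/httpd-2.4.0 /deploy/software/httpd   # repoint the symlink; -n replaces the link instead of descending into it
/deploy/software/httpd/bin/apachectl start      # start the newly linked version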
I just cloned our remote repo on my new system that I just installed Node/npm on. Then I ran npm install to get all the packages installed. Was this not the right command?
VSCode is showing me huge differences in the lock file. The lockfileVersion changed from 1 to 2 and there are many, perhaps hundreds, of changes in this huge file. Why would that happen, and what is the potential impact of checking this in?
It looks like the changes are mostly related to node modules. example:
"node_modules/css-declaration-sorter/node_modules/chalk/node_modules/supports-color": {}
Where that entry wasn't there in the existing repo.
Or am I making a big deal out of nothing?
Your package.json file specifies limits and ranges of acceptable versions, while the lock file specifies the exact versions you are using, taking into account all the dependency resolutions that were available the last time you ran install.
In general, if your code builds and runs, you want to commit the lock file to your repository. This ensures the production build will use the exact versions you have built and tested with.
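As an aside, this is exactly what npm ci is for: it installs from the lock file verbatim and fails if package.json and the lock file disagree, so CI/production builds reproduce the dependency tree you tested with:
npm ci    # clean, reproducible install driven entirely by package-lock.json
By contrast, npm install may rewrite the lock file when the ranges in package.json permit newer versions; the lockfileVersion bump from 1 to 2, in particular, simply indicates the file was rewritten by npm 7 or later.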
Is it possible to have a directory-isolated bin folder? That is, all installed packages would be available only in that specific directory.
For example, I have a directory ~/projects and I would like to have the git command available only in that folder.
I think you may be interested in using one of these two tools:
https://github.com/kennethreitz/autoenv
https://github.com/direnv/direnv
The first tool (autoenv, mostly written in Bash) is simpler to install and use but is not maintained anymore, and the second tool (direnv, mostly written in Go) provides more features, including the ability to unset environment variables.
For more details on their respective features, you can take a look at this GitHub issue.
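For example, with direnv you could drop a .envrc file into ~/projects that prepends a project-local bin directory to your PATH (the tools/bin location below is a hypothetical place to put your git binary):
# ~/projects/.envrc -- loaded by direnv only while you are inside ~/projects
PATH_add tools/bin    # direnv stdlib helper; prepends ~/projects/tools/bin to PATH
After creating the file, run direnv allow ~/projects once to approve it; the PATH change is reverted automatically as soon as you leave the directory.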
Okay, I have a couple of inquiries:
1 - Let's say I have a solution that references several external projects. I want to reference specific Labels (that represent stable versions) on those external projects. I know that you can do this by doing a Get Specific Version by Label on those projects. But once you've done that, is there a convenient way to do a Get on the whole solution, and have it preserve all of the specific versions?
Ultimately, I would like to do a single Get and have it get latest where that is applicable and get specific versions where that is applicable. It seems frustrating to have to do separate Gets on all the projects.
2 - Is it possible to build binaries from labels? When an external project is a stable version that isn't going to change, it makes sense to just reference the binary. When you create a label and build it, does it generate binaries in a specific location for that label that can be referenced?
On your first question: While TFS allows you to grab sources by Label, there is no way to set up a workspace configuration that is bound to a specific Label or Changeset for a specific path. The only thing I can think of would be to create a batch file which fetches the latest version for most paths and specific labels for the components:
tf get $/Project/Sources /version:T /recursive
tf get $/Project/ComponentA /version:LMyLabelName1 /recursive
tf get $/Project/ComponentB /version:LMyLabelName2 /recursive
The way forward here is to publish your external references to a NuGet repository (which can be your own) and then configure NuGet to get a specific version. A CI build can publish a new version to your NuGet server, and you can set up your own server so that you don't need to publish all your binaries to a public one.
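For instance, pinning a component to an exact version from a private feed could look like this (the package name and feed URL are hypothetical):
nuget install MyCompany.ComponentA -Version 1.2.3 -Source https://nuget.example.com/feed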
On your second question: yes, you can build by label. In the Queue Build screen you can set the version spec for what will be built.
You can specify a Changeset number (C######), a Label (LLabelname), etc. Any version spec will do (see the command-line docs for an explanation of version specs).
By default, no easily referenceable name is generated if you build by label. I suspect that some clever build customization would allow you to drop the build output in a predefined folder based on the label name, but there is no such out-of-the-box functionality.
I've been playing around with how I want to set up my upgrades for an application install I'm creating using InstallShield, using basic MSI projects.
I don't support any additional features/components and most of the upgrades will just be files/folders being added/removed from the default component.
I seem to be having difficulty removing files/folders when creating an upgrade. I create my upgrade by copy/pasting the original setup.ism (i.e. version 1 of my install), so that I have all the files/folders of the original install, and then I just add/remove any changes. Is this correct, or should the upgrade.ism only contain newly added/removed files and folders?
I first tried a Minor upgrade. I figured out how to remove files (right click > delete, then add an entry in the RemoveFiles editor), but I haven't figured out how to remove folders. I don't want to have to manually add each file to the RemoveFiles table as there are likely to be hundreds of them. How can I have an upgrade remove a folder and all its children?
I've also tried the Major upgrade, which is very easy as I don't have to worry about removing files/folders due to it uninstalling first. But then I don't get the dialog that informs the user that it is actually an upgrade.
You can use the Action property defined in the Upgrade table to detect whether a major upgrade is occurring and present different UI elements to your user.
Most people will never need minor upgrades and/or patches. For most applications the major upgrade is the simplest approach to maintain, and the downside of shipping the entire package is minimal. It's only for really large installers shipped to thousands or millions of customers that this becomes an issue.
To remove a file during a minor upgrade you need to 'puncture' the component: author it as transitive (InstallShield: Reevaluate Condition = true) and give it an expression that always returns false. Check out:
Uninstall a component during minor upgrade
Your approach of removing the component and authoring a rule in the RemoveFile table is incorrect. This breaks the component rules and reference counting.
It's a good idea to learn how minor upgrades work and what you can and can't do but don't be surprised if you find yourself leaning on major upgrades more.
I write company-internal software in PHP and C++.
What are the best methods of deploying this type of software to Linux machines? Currently we use svn export; are there any other methods?
We use checkinstall. Just write a simple Makefile that copies the files to target directories on the target machine and then run checkinstall to create RPM, DEB or TGZ package, which you can later easily install with distribution package management tools.
You can even add shell scripts that are executed before and after files are copied, so you can do some pre and post processing like adding user accounts, crontab entries, etc.
Once you get more advanced, you can add dependencies to these packages so a single command could also pull in and install PHP, MySQL, Apache, GCC libraries, and even required PHP or Apache modules or some external C++ libs you might need.
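A minimal sketch of that workflow, assuming your Makefile has an install target that copies files into place (the package name and version are illustrative):
make                                  # build the C++ pieces
sudo checkinstall --pkgname=myapp --pkgversion=1.0 \
    --default make install            # wrap "make install" into an RPM/DEB/TGZ package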
I think it depends on what you mean by deploy. Typically a deploy process for web projects involves a configuration scripting step in which you take the same deploy package and tailor it to specific servers (staging, development, production) by altering simple configuration directives.
In my experience with Linux servers, these systems are often custom built, and they often use rsync rather than svn export and/or scp alone.
A script might be executed from the command line like so:
$ deploy-site --package=app \
              --platform=dev \
              --title="Revision 1.2"
Internally, the system would take whatever was in trunk for the given package from SVN (I'm sure you could adapt this really easily for Git too) and generate a new unique tag with the log entry "deploying Revision 1.2".
Then it would patch any configuration scripts with the appropriate changes (URLs, hosts, database passwords, etc.) before rsyncing it to the appropriate destination.
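The patching step could be as simple as a sed pass over a checked-in template (the placeholder names and files are hypothetical):
# Substitute per-platform values into the exported config
sed -e "s/@DB_HOST@/db.dev.example.com/" \
    -e "s/@DB_PASS@/$DEV_DB_PASS/" \
    config.php.in > config.php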
If there are issues with the deployment, it's as easy as running the same command again, only this time using one of your auto-generated tags from an earlier deploy:
$ deploy-site --package=app \
              --platform=dev \
              --title="Reverting to Revision 1.1" \
              --tag=20090714200154
If you have to also do a compile on the other end, you could include as part of your configuration patching a Makefile and then execute a command via ssh that would compile the recently deployed code once the rsync process completes.
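Put together, the sync-then-compile step might look something like this (the host and paths are placeholders):
HOST="dev.example.com"
SRC_DIR="./build/app/"      # locally exported and patched working copy
DEST_DIR="/var/www/app/"
rsync -az --delete "$SRC_DIR" "$HOST:$DEST_DIR"   # mirror the tree to the server
ssh "$HOST" "cd '$DEST_DIR' && make"              # compile once the sync completes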
There is, in my experience, a tradeoff between security and ease of deployment.
For my deployments, I've never had a problem using scp to move files from one machine to another. You can write a simple Bash script to take a list of machines (from a text file or STDIN) and push a given directory/application to a given directory on all of the machines; a sketch follows below. If you hypothetically did it to a bin directory, the end user would never know the difference.
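A minimal sketch of such a script (hosts.txt, the app directory, and the destination are all illustrative):
#!/usr/bin/env bash
# Push a local directory to every host listed in hosts.txt (one hostname per line)
APP_DIR="./myapp"
DEST="/usr/local/bin"
while read -r host; do
    scp -r "$APP_DIR" "$host:$DEST"
done < hosts.txt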
The only problem with that would be when you have multiple architectures and OSes, where it has to be compiled on each one individually. In that case, you could just write a script (the first example that pops into my mind is Net::SSH from Ruby) to take that list of servers, cd to the given directory, and run the compilation script on each (a plain-ssh version is sketched below). However, if all machines use the same architecture and configuration, you can hypothetically just compile it once on the machine that you are using to distribute.
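The same idea in plain ssh rather than Ruby's Net::SSH, reusing the hosts.txt list from the sketch above (the source path is hypothetical):
# Compile on each target once the sources have been pushed
while read -r host; do
    ssh "$host" "cd /usr/local/src/myapp && make"
done < hosts.txt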