I have a number of docker hosts - for argument's sake, let's use 3 as an example. On those 3 hosts I have a total of 10 applications. Each is either standalone or part of a stack. Most of the applications hold some persistent data (configuration files and the like).
I have, in the past, used a flat SVN structure:
docker-data
    stack1
        app1
        app2
        app3
    stack2
        app4
        app5
    stack3
        app6
        app7
        app8
        app9
The whole structure and the config files were under SVN and checked out on all 3 hosts. I had to remember which host ran which apps, and make changes and commit as needed.
Here host 1 = stack1, host 2 = stack2, and host 3 = stack3 plus app8 and app9.
As part of a rebuild, I was going to look at moving to git and a better structure.
I did see someone suggest that my entire structure go into git as the master branch, with a branch for each host.
master
    docker-data
        stack1
            app1
            app2
            app3
        stack2
            app4
            app5
        stack3
            app6
            app7
            app8
            app9

host1
    docker-data
        stack1
            app1
            app2
            app3
This seemed like quite a good approach, but what struck me is how to set it up initially.
If all my apps are in master, how do I initialise the branch for host1 and only pull app1, app2 and app3?
I assume I then merge back to master if any of those configurations change.
And finally, I want to move app8 from host3's branch into host1's branch.
Is this overcomplicated? And are there any posts or commands that would help me facilitate this?
In general, it's not a good idea to use different Git branches for configuration on different systems, because you run into exactly the kind of problems you've described. If you do this, you'll also likely run into conflicts whenever you need to merge.
The typical way that one manages configuration for different systems is a templating system of some sort. For example, you could use Ruby's ERB to write templates, and then configure each host with a YAML configuration file, with a build step producing an output directory with the configurations for each host. You will have just one main branch, with additional feature branches that get merged in as they're ready.
This approach is similar to the way that other configuration systems like Puppet and Ansible are designed to work, and it is generally much more robust than using separate branches for separate configurations.
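For example, a minimal sketch of such a build step, assuming a hypothetical templates/app.conf.erb that reads its values from environment variables (rather than the per-host YAML files mentioned above):

# render one output directory per host from a shared ERB template
for host in host1 host2 host3; do
  mkdir -p build/"$host"
  HOST_NAME="$host" erb templates/app.conf.erb > build/"$host"/app.conf
done

You would then deploy build/host1 to host1, and so on; the branch-per-host problem disappears because the per-host differences live in data rather than in the version-control structure.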
I'm hoping to leverage GitLab CI/CD / gitlab-runner to keep custom code up to date on a fleet of servers.
The desired effect is that when a commit is made against a certain project in GitLab, several servers then automatically pull those changes down.
Is it possible to leverage gitlab-runners in this way, so that every runner registered with the project executes the contents of the .gitlab-ci.yml file? Or is there a better tool to accomplish this?
I could use Ansible to push updated files down to each server, but I was looking for something simpler - something inherent in GitLab.
Edit: Alternative Solution
I decided to go the route of pre- and post-hook files in my repos, as described here:
https://gist.github.com/noelboss/3fe13927025b89757f8fb12e9066f2fa
Basically, I will be designating a primary server as the main source for code pushes into the master repo, and have defined my entire fleet as remote repos in .git/config there. Using post-receive hooks inside of bare repos on all of my servers, I can then copy my code into the proper execution path.
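For anyone following along, a minimal sketch of that kind of hook (the work tree path and branch name are assumptions on my part):

#!/bin/sh
# hooks/post-receive inside the bare repo on each server:
# check the freshly pushed code out into the execution path
GIT_WORK_TREE=/var/www/app git checkout -f master

On the primary server, each fleet member is then added as an extra push URL (remote name and URL here are hypothetical), e.g. git remote set-url --add --push production ssh://deploy@server2/srv/git/app.git, so a single git push fans out to all of them.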
@ahxn81 Runners aren't really intended to be used in the pull fashion you describe. The Ansible push method you proposed is more in line with a typical deploy flow. I can see why you might prefer the simplicity of the pull method over pushing via script. These days a fleet of servers is often run on Kubernetes or Docker Swarm, which can simplify deployment after an initial setup headache.
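If you do end up scripting the push yourself rather than using Ansible, even something as small as this covers the basic case (host names and path are hypothetical, and it assumes key-based SSH access):

# fan the latest code out to every server in the fleet
for server in app1.example.com app2.example.com app3.example.com; do
  ssh deploy@"$server" 'cd /srv/app && git pull --ff-only'
done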
I have multiple servers in a load-balanced environment running the same Node application. I want the code on these servers to be the same everywhere. I currently maintain a git repo for the code on these servers, but have to manually SSH into each of them and pull the code from the git repo. Is there any simple way I can push the code onto all the servers?
Straightforward solutions that come to mind:
1) use a cron job on the servers that does the work you're doing manually, i.e. a git pull (needs Linux), or
2) use git hooks to trigger the pull on the other servers (see the sketch below). With this solution you need to have the list of servers to trigger the update. Hooks are basically scripts that are executed before/after events like commits, pushes, etc.
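A minimal sketch of both options, with hypothetical paths, schedule and host names:

# option 1: crontab entry on each server, pulling every five minutes
*/5 * * * * cd /srv/app && git pull --ff-only

# option 2: post-receive hook on the central repo that triggers the pull everywhere
for server in node1.example.com node2.example.com; do
  ssh deploy@"$server" 'cd /srv/app && git pull --ff-only'
done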
It looks like your question is how to deploy a Node.js app onto multiple servers. Here is a related question.
I have a Jenkins set-up consisting of one master and two slaves. I have Jenkins jobs (which run only on the slaves) which create binaries on every commit. Currently, Jenkins archives these artifacts into some place within the Jenkins master. When I wish to download the binaries using a bash shell script, I use wget url_link_to_particular_artifact. I wish to change this. I want to copy all the generated artifacts into one common location on the master node, so the URL would remain the same and only the last part would change with respect to the generated binary name. I label my binaries with tags, so it is easy to retrieve them later on. Now, is there a plugin which will copy artifacts onto the master node, but to a location that I can provide? The master and slave nodes are all Red Hat Linux machines.
I have already gone through the Artifactory Plugin and I do not wish to use it. I want something really simple to implement. Is there really a need for a web server to be running at the location on the master where I wish to copy the artifacts to? Can I transfer the artifacts from slave to master over SSH? If yes, how?
EDIT:
I have made some progress and am sort of stuck now. Assuming we have a web server running on the Jenkins master node, is it possible for the slave nodes to send the artifacts to this location, with the web server writing them into the file system at that location on the master?
This, of course, is possible, but let me explain why this is a bad idea.
Jenkins is not your artifact repository. You can indeed store your artifacts in Jenkins, but it was not designed to do so. If you do that for most of your jobs, you will run into problems with disk space and so on, or even race conditions with names.
Not to mention that you don't want to have hundreds or thousands of files in one directory.
A better approach would be to use an artifact repository, such as Nexus, to store your artifacts. You can manage and retrieve them easily through different channels.
Keep in mind that it would be nice to keep your Jenkins in stateless mode and version control your configuration for easy restoration.
If you still want to store your artifacts in one web location, I'd suggest setting up an nginx server that proxies /jenkins calls to Jenkins and serves /artifacts from your artifacts directory.
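A rough sketch of that nginx setup (the config path, port and artifacts directory here are assumptions):

sudo tee /etc/nginx/conf.d/jenkins-artifacts.conf >/dev/null <<'EOF'
server {
    listen 80;
    # pass /jenkins through to the Jenkins instance
    location /jenkins/ {
        proxy_pass http://127.0.0.1:8080/jenkins/;
    }
    # serve the artifacts directory straight from disk
    location /artifacts/ {
        alias /var/lib/artifacts/;
        autoindex on;
    }
}
EOF
sudo nginx -s reload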
There is a need for one puppet agent to contact several different puppet masters.
Reason: there are different groups that create different and independent sets of manifests.
Possible groups and their tasks
Application Vendor: configuration of application
Security: hardening
Operations: routing tables, monitoring tools
Each of these groups should run its own puppet master - the data (manifests and appropriate data) should be strictly separated. If possible, one group should not even see / have access to the manifests of the others (we are using MAC on the puppet agent OSes).
Thoughts and ideas that all failed:
using (only) hiera is not as flexible as needed - there is the need to have different manifests.
r10k: supports more than one environment, but each environment can only access one set of manifests.
multiple identical puppet servers using e.g. DNS round robin: this is the other way round - we need different puppet masters.
Some ways that might be possible but...
running multiple instances of the puppet agent. That 'feels' strange. Advantage: the access rights can be limited as needed (e.g. the application puppet agent can run under the application user).
patching puppet so that it can handle more than one puppet master. Disadvantage: might be some work.
using other mechanisms to split responsibility. Example: use different git repositories, create one puppet master, and have the puppet master pull all the different repositories and serve the manifests.
My questions:
Is there a straightforward way of implementing this requirement with puppet?
If not, is there some best practice for how to do this?
While I think what you are trying to do here is better tackled by incorporating all of your modules and data onto a single master (and utilizing environments would be effectively the same situation, with different masters providing different sets of modules/data), this can be achieved by implementing a standard multi-master infrastructure: one CA master for cert signing, and multiple compile masters with certs signed by that CA master, configured to forward cert traffic to it, with each master configured to serve whatever you need. You then end up having to specify which master you want to check in to on each run (via a cronjob or some other approach), and each check-in can potentially change settings set by another master (which rather undermines the hardening/security concept).
I would urge you to think more deeply about how to bring the different groups together (e.g. access-controlled git repos for each division's hiera data and modules) so that a central master can serve your needs, with access to that master being the only way to get data/modules from everywhere.
This type of setup will be complex to implement, but the end result will be more reliable and maintainable. Puppet Inc. may even be able to offer consulting to help you get it right.
There are likely other approaches too, just fyi.
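As a very rough sketch of that central-master idea, the master (or a cron/CI job on it) could simply assemble each group's access-controlled repo into one environment; the repo and path names here are invented for illustration:

# pull each group's module repo into the single master's environment
MODULE_DIR=/etc/puppetlabs/code/environments/production/modules
for repo in vendor_app security_hardening operations_base; do
  if [ -d "$MODULE_DIR/$repo/.git" ]; then
    git -C "$MODULE_DIR/$repo" pull --ff-only
  else
    git clone "ssh://git@git.example.com/puppet/$repo.git" "$MODULE_DIR/$repo"
  fi
done

Access control then lives on the git server, while the agents only ever talk to the one master.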
I've often found it convenient to multi-home a puppet agent for development purposes, because with a local puppet server you can instantly test manifest changes - there's no requirement to commit, push and r10k deploy environment like there is if you're just using directory environments and a single (remote) puppet server.
I've found the best way to do that is to just vary the path configuration (otherwise you run into problems with e.g. the CA certs failing to verify against the other server) - a form of your "running multiple instances of puppet agents" suggestion. (I still run them all privileged, so they can all use apt package {} etc.)
For Puppet 3, I'd do this by varying the libdir with --libdir (because the ssldir was under the libdir), but now (Puppet 4+) it looks more sensible to vary the --confdir. So, for example:
$ sudo puppet agent -t # Runs against main puppet server
$ sudo puppet agent -t \
    --server=puppet.dev.example.com \
    --confdir=/etc/puppetlabs/puppet-dev   # Runs against dev puppet server
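The dev confdir just needs its own puppet.conf (and by default it gets its own ssldir underneath it); something like the following, where the server name is of course an assumption:

$ sudo mkdir -p /etc/puppetlabs/puppet-dev
$ sudo tee /etc/puppetlabs/puppet-dev/puppet.conf <<'EOF'
[main]
server = puppet.dev.example.com
EOF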
I have a set of servers with code bases on them; let's call them p1, p2, p3. I have a development server, d1, which I use to store the code. Each p server is different, with a different code base.
I'm trying to figure out how to manage the git repos correctly so that each of the "p" servers keeps the "d1" server up to date.
Here's what I did:
Created a git repo on p1 and made an initial commit.
Created a --bare clone of the repo and scp'd it to the d1 server.
Repeated this for all servers (roughly the commands sketched below).
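For each server, the commands were along these lines (paths are illustrative):

# on p1: turn the existing code base into a repo
cd /srv/code
git init
git add -A
git commit -m "initial commit"

# create a bare clone and copy it up to d1
git clone --bare /srv/code /tmp/p1.git
scp -r /tmp/p1.git d1:/git/p1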
My d1 server now has a /git/ folder with subfolders p1, p2, p3. Each of these has the normal bare-repo contents: HEAD, branches, config, description, hooks, info, objects, refs.
I can clone these repos to another machine or folder and I get to see the actual files, which is exactly what I wanted.
OK so here is my problem.
How do I keep the p1 repo up to date when someone clones the d1 copy and commits to it?
Do I need to run git fetch on p1,
or should I have people change p1 and then git push to d1?
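To make the two options concrete (remote name and paths assumed):

# option 1: pull the changes into p1 after they land on d1
git -C /srv/code remote add d1 ssh://d1/git/p1    # one-time setup
git -C /srv/code pull d1 master

# option 2: have everyone (including p1) push to d1, and p1 only ever pulls from d1
git -C /srv/code push d1 master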
You can implement mirroring with gitolite to keep a central server with all the latest code from the others.
From http://gitolite.com/gitolite/mirroring.html:
Mirroring is simple: you have one "master" server and one or more "slave" servers. The slaves get updates only from the master; to the rest of the world they are at best read-only.
In the following pictures, each box (A, B, C, ...) is a repo. The master server for a repo is colored red, slaves are green. The user pushes to the repo on the master server (red), and the master server -- once the user's push succeeds -- then does a git push --mirror to the slaves. The arrows show this mirror push.
The first picture shows what gitolite mirroring used to be like a long time ago (before v2.1, actually). There is exactly one master server; all the rest are slaves. Each slave mirrors all the repos that the master carries, no more and no less.
This is simple to understand and manage, and might actually be fine for many small sites. The mirrors are more "hot standby" than anything else.
But when you have 4000+ developers on 500 repos using 25 servers in 9 cities, that single server tends to become a wee bit stressed. Especially when you realise that many projects have highly localised development teams. For example, if most developers for a project are in city X, with perhaps a few in city Y, then having the master server in city Z is... suboptimal :-)
And so, for about 3 years now, gitolite could do this:
You can easily see the differences in this scenario, but here's a more complete description of what gitolite can do:
Different masters and sets of slaves for different repos. This lets you do things like:
Use the server closest to most of its developers as the master for that repo.
Mirror a repo to only some of the servers.
Have repos that are purely local to a server (not mirrored at all).
Push to a slave on demand or via cron (helps deal with bandwidth or connectivity constraints).
All this is possible whether or not the gitolite-admin repo is mirrored -- that is, whether all servers have the exact same gitolite-admin repo or not.
Pushes to a slave can be transparently forwarded to the real master. Your developers need not worry about where a repo's master is -- they just write to their local mirror for all repos, even if their local mirror is only a slave for some.
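If full gitolite mirroring is more than you need, the core mechanism it describes - the master doing a git push --mirror to the slaves after each accepted push - can be approximated with a plain post-receive hook (the mirror URLs here are hypothetical):

#!/bin/sh
# post-receive hook on the central (master) copy of a repo:
# mirror every accepted push out to the other servers
for mirror in ssh://git@mirror1/git/app.git ssh://git@mirror2/git/app.git; do
  git push --mirror "$mirror"
done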