I've googled this and searched for answers here but either cannot find it or I'm making this too complicated in my head.
This is the situation: There are two machines, A and B. Both of them install a binary.
Machine A will have to run a script once the binary is installed on machine B, not before.
This is only the first step, but the other steps are similar: each has a dependency on the other machine.
I can't seem to find a way to do this in puppet. Can someone put me on the right track please?
Thank you
You're describing an orchestration problem, and Puppet itself is not intended or built for such problems. Puppet, Inc. offers an orchestrator bundled with Puppet Enterprise, the commercial edition, and that would be your best bet if it is available to you. Alternatively, earlier versions of Puppet used MCollective, which should be available to you even if you're using open-source Puppet.
If there were just one such interaction between the machines in question then it might make sense to hack together some orchestration with Puppet itself -- it is possible. But you seem to be saying that you have multiple points where the two machines need to synchronize, and I really can't recommend trying to build that out with Puppet. If you could, say, fully configure machine B before configuring A, so that there is only one synchronization point, then that might be a different story.
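If you do end up scripting that one synchronization point outside Puppet, the shape of it is simple: machine A polls until machine B reports the binary, then runs the script. A minimal sketch in Python, purely for illustration - the host name, binary path and script path below are placeholders, not anything from your setup:

    #!/usr/bin/env python3
    """Wait until machine B has the binary, then run the local script.
    Host name and paths are placeholders, not values from the question."""
    import subprocess
    import sys
    import time

    REMOTE_HOST = "machine-b.example.com"            # placeholder
    REMOTE_BINARY = "/usr/local/bin/the-binary"      # placeholder
    LOCAL_SCRIPT = "/usr/local/bin/post-install.sh"  # placeholder

    def binary_installed_on_b():
        # 'test -x' exits 0 only if the file exists and is executable on B.
        result = subprocess.run(
            ["ssh", REMOTE_HOST, "test", "-x", REMOTE_BINARY],
            stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
        )
        return result.returncode == 0

    def main():
        while not binary_installed_on_b():
            time.sleep(30)  # poll every 30 seconds
        sys.exit(subprocess.run([LOCAL_SCRIPT]).returncode)

    if __name__ == "__main__":
        main()

Puppet can happily deploy and schedule a script like that on machine A, but the waiting itself stays outside the Puppet run - which is really the point: the cross-machine ordering is orchestration, not configuration.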
I have a bunch of TeamCity agents (Windows, Linux, AWS, OpenShift). To keep their state consistent, it is desirable that all of them have the same software and versions installed. Checking them manually is very tedious.
Hence I have decided to build an application which shows this information as a dashboard, i.e. a snapshot view of all the agents and the software installed on them. I have decided to use Python (v3.6) for the implementation. I am not a hardcore developer, so this will be a learn-and-do project for me.
I was thinking of some sort of code base on each agent that would fetch the necessary details for that agent only.
I will then have a central server that collects this data from each agent and displays it in the form of a dashboard.
Please let me know whether the above design is a proper way of doing this, or suggest an alternative if there is one.
Some pointers on how to go about implementing it would also be of great help.
If you have full control over the agents' machines, consider using Ansible to enforce the desired configurations. In general it is much more convenient and safe to control the agents' configurations than to ask them whether they have the proper ones, and with Ansible or a similar configuration management tool you can do this in a scalable way.
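That said, if you do want to build the Python agent/dashboard you describe, a minimal sketch of the agent-side collector could look something like the following. The central endpoint URL and the list of tools checked are assumptions for illustration, not part of your setup:

    #!/usr/bin/env python3
    """Collect a basic software inventory on one agent and POST it to a
    central server. The endpoint URL and the tools checked are placeholders."""
    import json
    import platform
    import subprocess
    import urllib.request

    CENTRAL_ENDPOINT = "http://dashboard.example.com/api/inventory"  # placeholder

    def tool_version(command):
        """Return the first line of `<command> --version`, or None if not installed."""
        try:
            result = subprocess.run(
                [command, "--version"],
                stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                universal_newlines=True, timeout=10,
            )
            return (result.stdout or result.stderr).splitlines()[0].strip()
        except (OSError, IndexError, subprocess.TimeoutExpired):
            return None

    def collect():
        # One JSON document per agent: who am I, what OS, which tools/versions.
        return {
            "agent": platform.node(),
            "os": platform.platform(),
            "software": {tool: tool_version(tool)
                         for tool in ("git", "java", "python", "docker")},
        }

    def report(payload):
        request = urllib.request.Request(
            CENTRAL_ENDPOINT,
            data=json.dumps(payload).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(request, timeout=10)

    if __name__ == "__main__":
        report(collect())

The central server then only needs an endpoint that stores the latest payload per agent and a page that renders those payloads as a table; running the collector from each agent's scheduler (or as a TeamCity build step) keeps the snapshot reasonably fresh.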
Scenario: two developers working on the same project (VS2010, C#, MVC3, WinXP) on separate stand-alone computers. Due to IA restrictions (DoD) we are NOT allowed to connect these two computers in any way. The only way we are allowed to pass data between the computers is via a CD-R/DVD-R disc. We need to be able to share an SVN repository for the code we are writing. I'm trying to figure out the best way to do this.
Will this scenario even work? What is the best workflow to use? I would appreciate any guidance or suggestions.
It sounds to me like you would be better off using distributed source control, such as Mercurial or Git, for this project. SVN makes merging exceptionally hard, whereas distributed source control would let you just pass changesets back and forth.
Also, distributed source control keeps a full repository on each system, which is what you would have to do in this situation anyway.
This book should help you with most things Mercurial-related.
This link explains how to pull new changesets into your repository.
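If you go the Mercurial route, the disc exchange itself is easy to script around hg bundle and hg pull. A rough sketch - the repository path and the staging folder that gets burned to disc are placeholders:

    #!/usr/bin/env python3
    """Sketch of the CD-R exchange using Mercurial bundles.
    The repository path and staging folder are placeholders."""
    import os
    import subprocess

    REPO = r"C:\work\project"   # local Mercurial repository (placeholder)
    STAGING = r"D:\exchange"    # folder burned to / read from the disc (placeholder)

    def export_bundle():
        """Write all changesets in the local repo to a bundle file for the disc."""
        subprocess.run(
            ["hg", "--repository", REPO, "bundle", "--all",
             os.path.join(STAGING, "outgoing.hg")],
            check=True,
        )

    def import_bundle():
        """Pull the other developer's changesets from the disc, then update."""
        subprocess.run(
            ["hg", "--repository", REPO, "pull",
             os.path.join(STAGING, "incoming.hg")],
            check=True,
        )
        subprocess.run(["hg", "--repository", REPO, "update"], check=True)

    if __name__ == "__main__":
        export_bundle()

After pulling you may still need hg merge and hg commit if you both touched the same files, but that merge happens locally with full history available, which is exactly what this sneakernet setup needs.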
In your situation I would propose the following: set up and maintain the SVN repository on one selected PC (say, the most reliable one). The other members pass CD-Rs with patches when they finish a piece of work; those patches are integrated into that SVN repo, and for each member a patch is created in return so that the code stays similar on each PC. I know this sounds awkward, but it may be the best option in this case, and the patch operations can be automated.
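As a sketch of what that automation could look like on each developer's PC - the working-copy path and exchange folder are placeholders - the outgoing-patch step might be no more than:

    #!/usr/bin/env python3
    """Dump local SVN changes into a dated patch file for the CD-R.
    The working-copy path and exchange folder are placeholders."""
    import datetime
    import os
    import subprocess

    WORKING_COPY = r"C:\work\project"  # SVN working copy (placeholder)
    EXCHANGE_DIR = r"D:\exchange"      # folder burned to the CD-R (placeholder)

    def create_patch():
        """Write `svn diff` output to a timestamped patch file and return its path."""
        stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M")
        patch_path = os.path.join(EXCHANGE_DIR, "changes-%s.patch" % stamp)
        diff = subprocess.run(
            ["svn", "diff"], cwd=WORKING_COPY,
            stdout=subprocess.PIPE, check=True,
        )
        with open(patch_path, "wb") as patch_file:
            patch_file.write(diff.stdout)
        return patch_path

    if __name__ == "__main__":
        print("Wrote", create_patch())

On the PC that keeps the repository, the incoming patch files can then be applied (with GNU patch, or with svn patch on newer Subversion releases), committed, and the return patches generated the same way from the committed revisions.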
From a design perspective, the code architecture needs to be good: clear separation of modules, loosely coupled code, strict OOP, and reduced code dependencies. That way two people can work without much interaction. Do plan your integration, and define your code/class signatures beforehand if possible.
Weird question, perhaps. We have a number of simple utilities written in-house that need to be run on an automated basis. These are not build jobs. Just things like running SendOutHourlyEmailAlarms.exe, KeepFoldersInSynch.exe and such. I would normally set these things up as simple scheduled tasks/AT commands (or a Windows Service if more granular control is needed over the scheduling), but a co-worker has set up a number of these tasks as build projects on the CruiseControl.NET server. I asked him why he set these up this way and his response was that the executions (and their logs, return values, thrown exceptions) were all tracked and logged and that this information was accessible through an organized interface on the build server website. I couldn't argue with this.
But this just has a smell that I can't quite identify. Is this a proper use of CruiseControl.NET? If not, what are the dangers? Even if it may fit the bill, aren't there other products better suited for this type of thing?
We have all sorts of non-build-related tasks in there, for the exact same reason your coworker gave: I want one spot to look up any and all jobs I need to run.
Some examples of our CC.NET projects:
FTP installers to remote QA
Creating source code documentation
Creating VMs with the installers installed, for QA in the morning
Archiving installers
Pretty much anything I have to do by hand more than once becomes a project. IMHO it is much better than a scheduled task for one other reason as well: our config files are in source control, so we have one place to make adjustments. We do not have to log into multiple servers to make changes, or wonder which server ran what.
I think your coworker has made a good argument. If these tasks are related to the development process, then placing them in CruiseControl.NET as projects seems acceptable. I would draw the line at using a development server to run production processes, though. Although it is true that "if the only tool you have is a hammer, you tend to see every problem as a nail," that doesn't mean the hammer isn't capable of solving a lot of problems!
Just because a tool is designed to solve a particular problem does not mean that it will not be equally good at solving similar problems outside the scope originally conceived by the tool's creator. If CruiseControl.NET solves these problems well, then it is absolutely the appropriate tool to use.
I need software to manage the configuration of Linux servers from one central location. It should be able to push changes to the servers automatically. Version control would be an advantage...
I've heard good things about Puppet (as matli suggested) and Cfengine, which are both listed at http://en.wikipedia.org/wiki/Comparison_of_open_source_configuration_management_software
Have a look at Puppet
There's also Chef and bcfg2. If you're a Java guy, Control Tier is nice. There are some new projects in the python space to address the issue as well: Kokki, Overmind, Edison.
They all do essentially the same thing, just in different ways. If you're a Ruby developer, Chef is going to feel VERY familiar. If you aren't a developer and don't care about the language, Puppet, while written in Ruby, abstracts it all away into a DSL.
Check out Blueprint and Blueprint I/O. Blueprint is an open-source tool for figuring out what's been done to a server: packages, file modifications and source installs are detected and packaged up in a reusable format - a blueprint. Blueprint I/O is a tool for moving blueprints to another server. Together, they make for a drop-dead simple configuration management tool. Hope this helps.
https://github.com/devstructure/blueprint (Blueprint on GitHub)
https://github.com/devstructure/blueprint-io (Blueprint I/O on GitHub)
Old question, but this might still be helpful to you: we are releasing ConfigChief, a hosted central configuration repository with versioning, auditing and access control. It turns the problem around by having servers pull their configuration rather than having it pushed to them, which is the approach provided by Puppet and the rest.
You can sign up for the beta at http://woot.configchief.com if you like.
We are cross-compiling an application for an embedded Linux target under desktop Linux. For testing and other purposes we are using statically linked libraries with our application. The testing library we are using is CMockery.
My question is: Where should the static libraries and include files for CMockery live, given that we are cross-compiling?
If we weren't cross-compiling, they would go in /usr/local/lib.
Some suggestions from our team have been:
/opt/google/lib and /opt/google/include
/opt/embeddedLinuxDistro/usr/local/share/google/lib (and include)
/usr/local/arch/lib (and include)
Any pointers appreciated!
Note: After writing this answer, my summary would be:
Keep anything that is not standard to the Linux distro you're using separate. In fact, keep files for different projects separate even if they share libraries. This will make it much easier to move your files to another machine, to set up multiple complete builds for testing, and, most importantly, to recreate the build from scratch.
The decision is really subjective.
Do you just need one copy of the library for all users?
Does it rarely change?
If your build machine caught fire and you had no backups of that machine, how quickly and easily could you re-build your environment of libraries and cross-compilers?
I ask these questions, because if the library changes often or different users may need different versions, you're better off having it be portable. That is, you can specify in your build where to find the files.
Of your team's suggestions, I would lean towards a path that contains a reference to your project. This will make it easier, a year from now when someone asks you to set up another build machine, to reproduce everything.
Lastly, I wouldn't worry about trying to adhere to "standard" library locations, because you're not creating and managing a Linux distribution. Furthermore, most people don't really know about anything more than /usr/lib and /usr/local/lib, and even the people who know those don't know the difference.
Do what's best for your project no matter what that may be.