Is there a way I can get complete user information using Puppet code?
Like group ID, user ID, password expiration details, etc.
Maybe using Facter?
You can run any command using Facter and centralize the result with Puppet, so it is technically possible, but it is not advised (as boxdog correctly stated).
If you do that, the collected fact will be huge (as JSON), Puppet will have to store it in the database, and your performance will slow down.
Think of Puppet's Facter as the temperature reading for your AC: you need the temperature fact to be collected by the configuration management tool so that you can drive the decision to turn the heating on or off. The reason you collect facts is to drive configuration decisions. To monitor your infrastructure you need a system monitoring tool, not a configuration management tool.
If you want to collect this information one time, Puppet has support for:
Bolt
MCollective (deprecated for newer versions, available on <= 2018.1)
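For a one-off collection like that, the simplest thing is a small script you run with Bolt (or drop into Facter's external facts directory, if you really do want it as a fact, with the caveats above). Here is a minimal sketch in Python using only the standard library; the output shape is illustrative, not a specific Puppet format:

```python
#!/usr/bin/env python3
"""Illustrative sketch: dump local user/group info as JSON.

Could be run ad hoc via `bolt script run`, or dropped into
Facter's external facts directory (recent Facter versions accept
JSON from executable facts; older ones expect key=value pairs).
"""
import grp
import json
import pwd

users = {}
for entry in pwd.getpwall():
    users[entry.pw_name] = {
        "uid": entry.pw_uid,
        "gid": entry.pw_gid,
        "home": entry.pw_dir,
        "shell": entry.pw_shell,
        # Groups the user is a member of (by group name).
        "groups": [g.gr_name for g in grp.getgrall() if entry.pw_name in g.gr_mem],
        # Password expiration lives in /etc/shadow and needs root;
        # left out here to keep the sketch unprivileged.
    }

print(json.dumps({"local_users": users}, indent=2))
```

Run ad hoc, this costs nothing; stored as a fact and collected on every agent run, it is exactly the kind of large structured fact that will bloat PuppetDB.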
On all of our servers we have .env files, which set the configuration for the server (Node.js) on start.
Now I want to edit these files from an admin panel (another web service, which talks to the main server through an API).
Are there any best practices, or just good ideas, for how I could implement this?
First idea: create another web server on each instance, with only two API endpoints (read, write), which restarts the server after editing the configs. This idea looks too heavy.
Second idea: create a bash script that polls the admin server for the current configs and rewrites the local .env file if it finds changes, but this produces a lot of unnecessary requests (a request every minute, while the configs change about once a month).
What do you think? Any ideas?
You have a couple of options, and it depends primarily on your deployment strategy.
If you have a distributed environment and/or your configuration changes often (i.e.: running multiple docker containers, rotating keys, etc.) I'd highly recommend using a K/V store and reading configuration(s) dynamically during application start. Check out HashiCorp Vault, etcd or even mongodb.
If your configuration contains sensitive data definitely use something like HashiCorp Vault. If you use a configuration tool like ansible, it has ansible-vault which will encrypt your secret(s) at rest and decrypt them during deployment.
I would highly advise against storing (even potentially) sensitive data such as api keys, tokens, etc. in version control. This is a pretty big attack vector and will lead you down a dark road.
Worst case, use environment variables. Almost all CI/CD tooling supports these, and you can maintain separation of concerns.
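If it helps, here is a minimal sketch of the "read configuration at application start" pattern, assuming HashiCorp Vault via the hvac client library, with an environment-variable fallback. The secret path "myapp/config" and the variable names are made up; the same pattern applies in Node.js with the equivalent client:

```python
"""Minimal sketch: load config from Vault at startup, falling back to
environment variables. The secret path and key names are hypothetical."""
import os


def load_config():
    vault_addr = os.environ.get("VAULT_ADDR")
    if vault_addr:
        import hvac  # pip install hvac
        client = hvac.Client(url=vault_addr, token=os.environ["VAULT_TOKEN"])
        secret = client.secrets.kv.v2.read_secret_version(path="myapp/config")
        return secret["data"]["data"]  # the actual key/value pairs
    # Fallback: plain environment variables (e.g. injected by CI/CD).
    return {
        "DB_URL": os.environ.get("DB_URL", ""),
        "API_KEY": os.environ.get("API_KEY", ""),
    }


config = load_config()
```

Keeping the lookup at startup means a config change becomes "write to the K/V store, then restart or redeploy the service", with no file editing on the instances at all.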
I would like to use Terraform programmatically, like API/function calls, to create and tear down infrastructure in multiple specific steps, e.g. reserve a couple of EIPs, add an instance to a region, and assign one of the IPs, all in separate steps. Terraform will currently run locally and not on a server.
I would like to know if there is a recommended way/best practices for creating the configuration to support this? So far it seems that my options are:
Properly define input/output, heavily rely on resource separation, modules, the count parameter and interpolation.
Generate the configuration files as JSON, which appears to be less common.
Thanks!
Instead of using Terraform directly, I would recommend a third-party build/deploy tool such as Jenkins, Bamboo, Travis CI, etc. to manage the release of your infrastructure managed by Terraform. The reason is that you should treat your Terraform code in exactly the same manner as you would application code (i.e. have a proper build/release pipeline). As an added bonus, these tools come with a standard API that can be used to execute your build and deploy processes.
If you choose not to create a build/deploy pipeline, another option is to use a tool such as RunDeck, which allows you to execute arbitrary commands on a server. It also has the added bonus of an excellent privilege control system that only allows specified users to execute commands. A further option is to upgrade from the open-source version of Terraform to the Pro/Premium version, which includes an integrated GUI and an extensive API.
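To illustrate, here is a rough sketch of how a pipeline job (or a RunDeck command) could drive Terraform in separate steps using only the standard CLI. The resource addresses are hypothetical, and note that -target is intended for exceptional cases rather than routine use:

```python
"""Sketch of driving Terraform in discrete steps from a script or CI job.
The resource addresses (aws_eip.nat, aws_instance.app) are hypothetical."""
import json
import subprocess


def tf(*args):
    # Run a Terraform CLI command and fail loudly on a non-zero exit code.
    subprocess.run(["terraform", *args], check=True)


tf("init", "-input=false")

# Step 1: reserve the EIPs only.
tf("apply", "-auto-approve", "-input=false", "-target=aws_eip.nat")

# Step 2: create the instance (association could be a later step).
tf("apply", "-auto-approve", "-input=false", "-target=aws_instance.app")

# Read declared outputs for use by the next stage of the pipeline.
outputs = json.loads(subprocess.check_output(["terraform", "output", "-json"]))
print(outputs)
```

A pipeline tool would simply run these same commands as separate stages, which gives you the "multiple specific steps" behaviour without writing your own API around Terraform.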
As for best practices for using an API to automate creation/teardown of your infrastructure with Terraform, the best practices are the same regardless of what tools you are using. You mentioned some good ones, such as clearly defining inputs/outputs and creating a separation of concerns. Some others I can recommend are:
Create all of your infrastructure code with idempotency in mind.
Use modules to separate the common shared portions of your code. This reduces the number of places that you will have to update code and therefore the number of points of error when pushing an update.
Write your code with scalability in mind from the beginning. It is much simpler to start with this than to adjust later on when it is too late.
I would like to know if there is any API in the Vespa platform that I can use to create a search definition (SD) at runtime.
This is a requirement because the documents that I will index depend on user input in my front-end application.
No, there is no such API available. The idea of deploying an immutable application package (including the SD) is a conscious design choice to ensure appropriate management of multiple search clusters in multiple locations over time as well as enabling source control management.
If needed, one could build what you describe "on top" of Vespa: A web service that will let you mutate an existing SD and, upon submit, create the updated application package and deploy to your Vespa cluster. Vespa will (in most cases) handle schema changes without impacting serving.
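As a rough sketch of the core of such a service, assuming a config server reachable on port 19071 and its prepareandactivate deploy endpoint; the SD template, field handling, and file layout below are illustrative, not a Vespa API:

```python
"""Rough sketch: regenerate a search definition and redeploy the
application package. Paths, the SD template, and field types are
illustrative; adapt to your application package layout."""
import io
import os
import zipfile

import requests

APP_DIR = "application"  # contains services.xml, hosts.xml, ...
SD_TEMPLATE = """search {name} {{
    document {name} {{
{fields}
    }}
}}
"""


def write_sd(name, field_names):
    # Render a minimal SD with one string field per user-supplied name.
    rendered = "\n".join(
        f"        field {f} type string {{ indexing: summary | index }}"
        for f in field_names
    )
    path = os.path.join(APP_DIR, "searchdefinitions", f"{name}.sd")
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path, "w") as fh:
        fh.write(SD_TEMPLATE.format(name=name, fields=rendered))


def deploy(config_server="http://localhost:19071"):
    # Zip the whole application package and hand it to the deploy API.
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as zf:
        for root, _, files in os.walk(APP_DIR):
            for f in files:
                full = os.path.join(root, f)
                zf.write(full, os.path.relpath(full, APP_DIR))
    url = f"{config_server}/application/v2/tenant/default/prepareandactivate"
    resp = requests.post(url, data=buf.getvalue(),
                         headers={"Content-Type": "application/zip"})
    resp.raise_for_status()
```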
I'm currently evaluating using Hazelcast for our software. Would be glad if you could help me elucidate the following.
I have one specific requirement: I want to be able to configure distributed objects (say maps, queues, etc.) dynamically. That is, I can't have all the configuration data at hand when I start the cluster. I want to be able to initialise (and dispose) services on-demand, and their configuration possibly to change in-between.
The version I'm evaluating is 3.6.2.
The documentation I have available (the Reference Manual, the Deployment Guide, as well as the "Mastering Hazelcast" e-book) is very skimpy on details w.r.t. this subject, and even partially contradictory.
So, to clarify an intended usage: I want to start the cluster; then, at some point, create, say, a distributed map structure, use it across the nodes; then dispose it and use a map with a different configuration (say, number of backups, eviction policy) for the same purposes.
The documentation mentions, and this is to be expected, that bad things will happen if nodes have different configurations for the same distributed object. That makes perfect sense and is fine; I can ensure that the configs will be consistent.
Looking at the code, it would seem to be possible to do what I intend: when creating a distributed object, if it doesn't already have a proxy, the HazelcastInstance will go look at its Config to create a new one and store it in its local list of proxies. When that object is destroyed, its proxy is removed from the list. On the next invocation, it would go reload from the Config. Furthermore, that config is writeable, so if it has been changed in-between, it should pick up those changes.
So this would seem like it should work, but given how silent the documentation is on the matter, I'd like some confirmation.
Is there any reason why the above shouldn't work?
If it should work, is there any reason not to do the above? For instance, are there plans to change the code in future releases in a way that would prevent this from working?
If so, is there any alternative?
Changing the configuration on the fly for an already-created distributed object is not possible with the current version, though there is a plan to add this feature in a future release. Once created, the map configs stay at the node level, not at the cluster level.
As long as you are creating the distributed map fresh from the config, using it, and then destroying it, your approach should work without any issues.
We are a mixed Linux/Windows shop that successfully adopted Puppet for configuration management a while ago. We'd like to drop Ansible in as our deployment orchestration tool (research suggests that Puppet doesn't do this very well), but we have questions about how to integrate the two products.
Today, Puppet is the source of truth with respect to environment info (which nodes belong to which groups, etc.). I want to avoid duplicating this information in Ansible. Are there any best practices with regard to sharing environment information between the two products?
One way to reduce the amount of duplicated state between the systems is to use Ansible's "Dynamic Inventory" support. Instead of defining your hosts/groups in a text file, you use a script that pulls the same data from somewhere else. This could be PuppetDB, Foreman, etc., and is going to depend on your environment.
Writing a new script is also pretty simple; it just needs to be any executable (bash/python/ruby/etc.) that returns JSON in a specific format.
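For illustration, a minimal dynamic inventory script pulling hosts from PuppetDB might look like the sketch below; the PuppetDB URL and the grouping-by-environment logic are assumptions to adapt to your setup:

```python
#!/usr/bin/env python3
"""Minimal sketch of an Ansible dynamic inventory script that pulls
hosts from PuppetDB's v4 query API. URL and grouping are illustrative."""
import json
import sys

import requests

PUPPETDB = "http://puppetdb.example.com:8080"


def build_inventory():
    nodes = requests.get(f"{PUPPETDB}/pdb/query/v4/nodes").json()
    inventory = {"_meta": {"hostvars": {}}}
    for node in nodes:
        # Group hosts by their Puppet environment; any grouping works here.
        group = node.get("catalog_environment", "ungrouped")
        inventory.setdefault(group, {"hosts": []})["hosts"].append(node["certname"])
    return inventory


if __name__ == "__main__":
    if len(sys.argv) > 1 and sys.argv[1] == "--list":
        print(json.dumps(build_inventory()))
    else:
        # Ansible may also call `--host <name>`; hostvars are already in _meta.
        print(json.dumps({}))
```

Ansible calls the script with --list when building the inventory; pointing it at PuppetDB keeps Puppet as the single source of truth for which nodes exist and how they are grouped.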
Lastly, it is possible to roll out new releases with Puppet, but it is easier with a "microservice"-like release process. Ensuring apps/services/databases remain backwards compatible across versions can make pushing out releases trivial with Puppet and your favorite package manager.
Using Puppet and MCollective should be the way to go if you are looking for a solution from Puppet Labs:
https://puppetlabs.com/mcollective