I'm looking for a solution for moving configuration from a test environment to a production environment for Graylog.
Once I configure inputs and streams in the test environment, I would like to easily move this configuration to production. Doing it manually can introduce bugs, e.g. by misspelling a name.
How can I do that? Is there an option to export and import Graylog configuration?
Thanks in advance.
If you only want to migrate Inputs (incl. Extractors), Streams, Dashboards, and Lookup Tables (incl. Data Adapters and Lookup Caches), you can use Content Packs for that.
See System/Content Packs in the web interface.
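If you want to script the promotion rather than click through the UI, recent Graylog versions also expose content packs through the REST API. The exact path below is an assumption based on recent releases, so verify it in the API browser of your own installation first:

    # list content packs known to the server (endpoint path is an assumption; check your API browser)
    curl -u admin:yourpassword -H 'Accept: application/json' \
         http://graylog.example.com:9000/api/system/content_packs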
I know that a good way to store data like DB passwords, etc. is via environment variables, but setting environment variables manually for every server instance created is time-consuming.
I'm planning to deploy my project to the cloud (using AWS Elastic Beanstalk or Heroku).
Where should I store my DB password?
I think the .ebextensions file isn't a good option because it's tracked in VCS.
Don't ever store secrets in source control. A common practice is to either put them in a secure file or in something like https://www.vaultproject.io/ and then inject them (programmatically, via a script or some other deployment/configuration tool) into the environment when you bring up your VM (or container, or whatever).
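As a rough sketch of that injection step (the secrets file path, key names, and launch command are all assumptions, and error handling is omitted):

    #!/usr/bin/env python3
    # Read secrets from a file that is provisioned onto the box (never committed)
    # and launch the application with them injected as environment variables.
    import json
    import os
    import subprocess
    import sys

    SECRETS_FILE = "/etc/myapp/secrets.json"   # assumed location, outside the repo

    def main():
        with open(SECRETS_FILE) as f:
            secrets = json.load(f)             # e.g. {"DB_PASSWORD": "...", "DB_USER": "..."}

        env = os.environ.copy()
        env.update({key: str(value) for key, value in secrets.items()})

        # hand the enriched environment to the real application process
        sys.exit(subprocess.call(["python3", "run_server.py"], env=env))

    if __name__ == "__main__":
        main()

The same idea works with Vault: fetch the secrets at boot time instead of reading a local file, then export them into the environment the same way.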
My recommendation is to create a properties file that is stored in the resources folder of your application so the code can read it directly; no environment variables are needed. One properties file can contain the user IDs and passwords for all databases, and the deploy job can pick the right connection based on the URL mapping in the properties file. For example, look at a Spring/Hibernate example project that uses a properties file, or look at Ant deploy scripts. Hope it helps.
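To make that concrete, a sketch of such a properties file (every key and value here is made up):

    # db.properties - kept under src/main/resources; keys and values are examples only
    db.dev.url=jdbc:mysql://dev-db:3306/acme
    db.dev.username=acme_dev
    db.dev.password=changeme
    db.prod.url=jdbc:mysql://prod-db:3306/acme
    db.prod.username=acme_prod
    db.prod.password=changeme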
It has never happened before, but all of a sudden my server keeps running out of space because of these Puppet reports.
Can setting reports=none in the puppet.conf file automatically disable the generation of the .yaml report files? Or is there a better way of doing it?
Will we ever need these .yaml files and do they affect anything if I delete them all?
Puppet has fairly good online documentation, including for historic versions of the software. You can find a good overview of the reporting feature here: https://docs.puppetlabs.com/guides/reporting.html. A bit more detail is provided in the relevant section of the manual proper, and the configuration reference provides some detail on the relevant configuration settings.
Some of the configurable aspects of reporting are:
whether agents send reports [the report parameter in the [agent] section of the config file]. This is configured on a per-agent basis. By default, they do send reports.
what the master does with the reports it receives [the reports parameter in the [master] section of the config file]. The default is to use (only) the store report handler, which dumps them to YAML files in the configured report directory.
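For reference, those master-side defaults correspond roughly to the following in puppet.conf (the report directory path differs between Puppet versions, so treat it as an assumption):

    [master]
    # default behaviour: hand each received report to the "store" handler,
    # which writes it out as a YAML file under the configured reportdir
    reports = store
    # reportdir = /var/lib/puppet/reports   # typical default, but version-dependent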
Can setting reports=none in the puppet.conf file automatically disable the generation of the .yaml report files?
No 'none' handler is documented, but you can write and plug in custom report handlers, as described in the docs. A none handler ought to be trivial to write, but see below.
Or is there a better way of doing it?
I'd recommend configuring your agents not to send reports in the first place. That should be less work for all involved, human and machine. Do that by setting report = false (note: singular "report") in the [agent] section of each agent's Puppet configuration file. You may need to restart the agents afterwards.
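For example, in each agent's puppet.conf (the file's location varies by platform and Puppet version):

    [agent]
    # stop this agent from sending reports to the master at all
    report = false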
Will we ever need these .yaml files and do they affect anything if I delete all of them?
They are for your benefit. If you have no use for them, then you can safely delete them.
Currently we have a development cloud service (acme-dev-service) and a production cloud service (acme-prod-service). Our current setup in our solution has a cloud service project called acme.application that uses transformation of the .cscfg and .csdef files for deploying the project to the two environments (production and development). I don’t like the transformation method because it feels like a bit of a hack to me. After doing some research it seems that you can have multiple configuration files, which solves some of the issues, but I am running into problems because you are only allowed one service definition. This doesn’t work for us because the production environment requires extra certificates as well as different hostHeader bindings than our dev environment does.
So it seems we can't really get away from using the transformations. I guess my question boils down to: am I looking at the Azure Service Project files in the wrong light? Should we really be mapping one Azure Project to one Azure cloud service? Should I have an Azure Project for Production and a second Azure Project for Development?
Is there a better way to do this? Or a best practice for working with multiple environments in Azure?
The CSDefinition file is the real kicker here. If you have a value you need to be different between two environments (dev/test/stage/production, etc.) then you really have three options:
1) Manually modify the value before a deployment. Errr....Okay....you have two options.
1) Tap into the MSBuild process, determine which cloud configuration you have selected (the one that determines which version of the .cscfg file will be used), and then have the build modify the .csdef after the build and prior to packaging (there is a point when the file has been copied to a different directory just before packaging, and that is where you want to make the change). This can be tricky, though I've seen it done and have even done it myself in the early SDK days. Here is a blog post explaining one example where he's using WebConfigTransformRunner to do just that: http://fabriccontroller.net/blog/posts/apply-xdt-transforms-to-your-servicedefinition-csdef-file/ (a sketch of what such a transform looks like follows these options). I don't really think this is your best option because it is opaque. It's not evident what is going on, and someone who comes along after you to maintain the code will not know about this little gem and will spend forever trying to figure out why some value they put into the csdef is somehow getting overwritten after they publish to a different environment.
2) Use the two Azure Project approach you mentioned. You can set up build definitions in your build tool of choice that determine which of the Azure projects you want to build and publish. Personally I think this is the best way to deal with different .csdef files. It's straightforward and doesn't require modifying the csproj files. I'm not opposed to changing csproj files; it's just not overly obvious when it has been done and, speaking as someone who has inherited things like that, it's not easy to find when people do that kind of thing and they aren't around to tell you about it.
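If you do go with the transform approach from option 1, the transform file itself is ordinary XDT applied to the service definition. A minimal sketch, where the role name, certificate, and host header are made-up values rather than anything from your project:

    <?xml version="1.0" encoding="utf-8"?>
    <!-- ServiceDefinition.Release.csdef: XDT transform sketch; all names and values are assumptions -->
    <ServiceDefinition xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition"
                       xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
      <WebRole name="Acme.Web">
        <!-- add the production-only certificate -->
        <Certificates xdt:Transform="Insert">
          <Certificate name="ProdSsl" storeLocation="LocalMachine" storeName="My" />
        </Certificates>
        <Sites>
          <Site name="Web">
            <Bindings>
              <!-- swap the host header for the production binding -->
              <Binding name="HttpIn" endpointName="HttpIn" hostHeader="www.acme.com"
                       xdt:Transform="SetAttributes(hostHeader)" xdt:Locator="Match(name)" />
            </Bindings>
          </Site>
        </Sites>
      </WebRole>
    </ServiceDefinition>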
Our product consists of a client, a server, and agents, each deployed on a different machine. The QA team is having a hard time manipulating the log4net sections in the respective config files. Right now, they have to remote desktop to all the relevant machines, open Notepad on each of them, and edit the files one at a time, switching between machines as they go. A real pain in the ass.
Can anyone suggest a better solution to this problem?
Thanks.
You could store the log4net configuration in a database (you could then even consider creating a web interface that allows your QA team to modify the configuration). You have to figure out how your applications pick up the new configuration (e.g. some remote admin interface that allows you to tell your applications to use the new configuration).
On start-up you load the configuration from there. It is probably advisable to have a backup configuration in a file that is loaded first, in case loading from the database fails. That fallback configuration could, for instance, be set up so that the QA team gets an email if loading the configuration from the database fails.
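A sketch of such a fallback log4net configuration (the addresses and SMTP host are assumptions):

    <log4net>
      <!-- fallback config: mail errors such as "could not load config from DB" to QA -->
      <appender name="QaMail" type="log4net.Appender.SmtpAppender">
        <to value="qa-team@example.com" />
        <from value="app@example.com" />
        <subject value="Logging configuration could not be loaded" />
        <smtpHost value="smtp.example.com" />
        <threshold value="ERROR" />
        <layout type="log4net.Layout.PatternLayout">
          <conversionPattern value="%date %level %logger - %message%newline" />
        </layout>
      </appender>
      <root>
        <level value="ERROR" />
        <appender-ref ref="QaMail" />
      </root>
    </log4net>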
Another option would be to store all log4net configuration files on a network share... create an application setting that tells your application where to find the log4net configuration and call the Configure() method accordingly. Again the question is how your applications pick up the new configuration.
I'm not sure if ConfigureAndWatch() would behave as expected if the configuration file is on a network share. If it does, that would be quite an easy option to implement.
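For the network-share option, the pointer to the shared file could live in each application's own config file, something like this (the key name and UNC path are assumptions); the application then passes that path to Configure()/ConfigureAndWatch():

    <appSettings>
      <!-- each machine reads the shared log4net configuration from this path -->
      <add key="Log4NetConfigPath" value="\\fileserver\config\log4net.config" />
    </appSettings>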
In a typical enterprise scenario with in-house development, you might have dev, staging, and production environments. You might use SVN to contain ongoing development work in a trunk, with patches being stored in branches, and your released code going into appropriately named tags. Migrating binaries from one environment to the next may be as simple as copying them to middle-ware servers, GAC'ing things that need to be GAC'ed, etc. In coordination with new revisions of binaries, databases are updated, usually by adding stored procedures, views, and adding/adjusting table schema.
In a SharePoint environment, you might use a similar version control scheme. Custom code (assemblies) ends up in features that get installed either manually or via various setup programs. However, some of what needs to be promoted from dev to staging, and then on to production, might be database content that supports the custom code bits.
If you've managed an enterprise SharePoint environment, please share your thoughts on how you manage promotion of code and content changes between environments, while protecting your work and your users, and keeping your sanity.
I assume that when you talk about database content you are referring to the actual contents contained in a site or a list.
Probably the best way to do this is to use the stsadm import and export commands to export and import content from one environment to another. (Don't use backup/restore when going from one environment to another.)
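A sketch of that round trip (the site URLs and file name are placeholders):

    stsadm -o export -url http://dev/sites/acme -filename acme-content.cmp -includeusersecurity -versions 4
    stsadm -o import -url http://staging/sites/acme -filename acme-content.cmp -includeusersecurity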
For any file changes (assemblies, aspx) you can use Features and then keep track of the installers. You would install the feature and do an upgrade to push changes.
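Assuming the Features are packaged as WSP solutions (all names below are placeholders), the install-and-upgrade cycle might look like:

    rem initial deployment
    stsadm -o addsolution -filename Acme.Intranet.wsp
    stsadm -o deploysolution -name Acme.Intranet.wsp -immediate -allowgacdeployment
    stsadm -o activatefeature -name AcmeWebParts -url http://staging/sites/acme
    rem pushing an updated build of the same solution
    stsadm -o upgradesolution -name Acme.Intranet.wsp -filename Acme.Intranet.wsp -immediate -allowgacdeployment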
There's no easy way to sync the data... you can use the stsadm import/export commands as John pointed out. But this may not be straightforward, especially if the servers are configured differently.
There's also Data Sync Studio product (http://www.simego.net/DataSync_Studio.aspx) you can try.
Depending on what form the database content takes, I would keep the creation of it in code so it's all in one place (your Visual Studio project) and can also be managed via source control. Deployment of the content could be done either via a console application or, even better, a feature receiver.
You might also like to read this blog post and look at the tool mentioned there for another approach.
The best resource I can point you to is Eric's paper:
http://msdn.microsoft.com/en-us/library/bb428899.aspx
I was part of a team working to better the story around development of WSS and MOSS solutions with TFS, but I don't know where that stands.