Azure Cloud Service - different mapping per environment

I have a Cloud Service in Azure and I have multiple environments.
One of my classes uses a key-value mapping for some of its calculations.
The number of keys in that mapping varies depending on the environment.
I'm guessing I have no choice but to insert the mapping (somehow) into the environment's configuration (.cscfg file).
Since the configuration is in XML format, I'm wondering what would be the cleanest and most extensible way to define the mapping for each of the environments.
Thanks
For example:
I have this ID to Region mapper:
private static readonly Dictionary<string, Region> Id = new Dictionary<string, Region>
{
{"1", Region.UsE},
{"2", Region.UsE},
{"3", Region.UsE},
{"4", Region.UsSC},
{"5", Region.UsSC},
{"6", Region.UsSC},
{"7", Region.EuW},
{"8", Region.EuN}
};
This mapping changes between environments, and I would like to set it elegantly in the .cscfg file of each environment.
Hope this better explains my question.

You can add the values to the ConfigurationSettings element of the .CSCFG files for each environment. The values may then be read using the CloudConfigurationManager class.
You could also just have per-environment XML or JSON files.
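One way to do the ConfigurationSettings approach (a sketch; the setting name RegionMapping and the id:region pair encoding are illustrative choices, not anything Azure prescribes) is to serialize the whole map into a single setting in each environment's .cscfg:
<ConfigurationSettings>
  <Setting name="RegionMapping" value="1:UsE;2:UsE;3:UsE;4:UsSC;5:UsSC;6:UsSC;7:EuW;8:EuN" />
</ConfigurationSettings>
and split it back into a dictionary at startup:
// Read the serialized mapping for the current environment and parse it.
// "RegionMapping" and the id:region format are illustrative, not required by Azure.
// Needs System.Linq and the Microsoft.WindowsAzure.ConfigurationManager package.
string raw = CloudConfigurationManager.GetSetting("RegionMapping");
Dictionary<string, Region> id = raw
    .Split(';')
    .Select(pair => pair.Split(':'))
    .ToDictionary(
        parts => parts[0],
        parts => (Region)Enum.Parse(typeof(Region), parts[1]));
This keeps each environment's mapping in one setting, and adding or removing keys requires only a .cscfg change.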

Micrometer - Add default prefix in metric name

In Micrometer, we can create a new gauge with something like
myMeterRegistry.gauge("my_metric", 69);
See the code here https://github.com/micrometer-metrics/micrometer/blob/master/micrometer-core/src/main/java/io/micrometer/core/instrument/MeterRegistry.java#L468
Would it be possible to include a "prefix" name by default for my myMeterRegistry object?
Done manually, it would look like:
myMeterRegistry.gauge("myprefix_my_metric", 69);
My goal is that every developer who creates a gauge metric in my application does not have to take care of adding the "myprefix_" at the beginning of the metric name.
A MeterFilter would let you do that (but don't!):
new MeterFilter() {
    @Override
    public Meter.Id map(Meter.Id id) {
        return id.withName("myprefix." + id.getName());
    }
}
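Such a filter is registered through the registry's config, before any meters are created (a sketch; registry stands for your myMeterRegistry):
// Every meter created after this point gets "myprefix." prepended to its name.
registry.config().meterFilter(new MeterFilter() {
    @Override
    public Meter.Id map(Meter.Id id) {
        return id.withName("myprefix." + id.getName());
    }
});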
However, a common prefix is typically a smell of incorrect dimensionality. Usually users try to add a region, host, or the application's name as a prefix. Those are better provided as tags, since then you can aggregate across systems and use common dashboards.
The commonTags approach is recommended:
registry.config().commonTags("team", "myteam", "region", "us-east-1");
For hierarchical meter registries, tags will be included in the name as a prefix.

Combining configuration files

I'm developing a Java EE application which has to read two files in order to load its configuration parameters. Both are properties-like files.
The first one contains default configuration properties and the other one overrides them. The first is read-only, the second can be modified, and I need to react to changes made to that second file.
I've taken a look at several resources:
Composite Configuration
Combined Configuration
Combining Configuration Sources
I've not been able to figure out how to set up this configuration strategy with commons-configuration2.
Up to now, I've been able to read from one configuration file:
FileBasedConfigurationBuilder<PropertiesConfiguration> builder =
new FileBasedConfigurationBuilder<PropertiesConfiguration>(PropertiesConfiguration.class)
.configure(new Parameters().properties()
.setFileName(ConfigurationResources.PROPERTIES_FILEPATH)
.setThrowExceptionOnMissing(true)
.setListDelimiterHandler(new DefaultListDelimiterHandler(';'))
.setIncludesAllowed(false));
Any ideas?
You need CombinedConfiguration. Here is some sample code:
Parameters params = new Parameters();
CombinedConfigurationBuilder builder = new CombinedConfigurationBuilder()
.configure(params.fileBased().setFile(new File("configuration.xml")));
CombinedConfiguration cc = builder.getConfiguration();
Here the configuration.xml file contains the list of property files:
<configuration systemProperties="systemProperties.xml">
  <!-- Load the system properties -->
  <system/>
  <!-- Now load the property files -->
  <properties fileName="myprops1.properties"/>
  <properties fileName="myprops2.properties"/>
</configuration>
This documentation on Combined Configuration will be really helpful
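Once built, values are looked up on the combined configuration as usual (the key name here is illustrative):
// With the default override semantics, sources listed earlier in
// configuration.xml take precedence for duplicate keys.
String dbHost = cc.getString("database.host");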
Parameters params = new Parameters();
FileBasedConfigurationBuilder<FileBasedConfiguration> config1 =
    new FileBasedConfigurationBuilder<FileBasedConfiguration>(PropertiesConfiguration.class)
        .configure(params.properties().setFileName("file1.properties"));
FileBasedConfigurationBuilder<FileBasedConfiguration> config2 =
    new FileBasedConfigurationBuilder<FileBasedConfiguration>(PropertiesConfiguration.class)
        .configure(params.properties().setFileName("default_file2.properties"));
CombinedConfiguration config = new CombinedConfiguration(new OverrideCombiner());
config.addConfiguration(config1.getConfiguration()); // this one overrides config2
config.addConfiguration(config2.getConfiguration());
return config;
This is something I have used in my project to create a combined configuration. A combined configuration naturally creates a hierarchy of configurations taken from different (or the same) sources. For example, you could also read the file name from a system property: FileBasedConfigurationBuilder<FileBasedConfiguration> config2 = new FileBasedConfigurationBuilder<FileBasedConfiguration>(PropertiesConfiguration.class).configure(params.properties().setFileName(System.getProperty("default_file2.properties")));
The FileBasedConfigurationBuilder can be substituted with any kind of configuration builder you like. Refer to https://commons.apache.org/proper/commons-configuration/apidocs/org/apache/commons/configuration2/builder/BasicConfigurationBuilder.html
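Since the question also asks about reacting to changes in the writable override file, commons-configuration2's reloading support can be layered on top (a sketch; the file name override.properties and the one-minute check interval are illustrative):
// A builder that can detect changes to the underlying file
// (org.apache.commons.configuration2.builder.ReloadingFileBasedConfigurationBuilder).
ReloadingFileBasedConfigurationBuilder<PropertiesConfiguration> overrideBuilder =
    new ReloadingFileBasedConfigurationBuilder<PropertiesConfiguration>(PropertiesConfiguration.class)
        .configure(new Parameters().properties().setFileName("override.properties"));
// Poll the file every minute; after a change is detected,
// overrideBuilder.getConfiguration() returns the reloaded contents.
PeriodicReloadingTrigger trigger = new PeriodicReloadingTrigger(
    overrideBuilder.getReloadingController(), null, 1, TimeUnit.MINUTES);
trigger.start();
Note that a CombinedConfiguration built once will not see later reloads, so rebuild the combined view after a change (or look at ReloadingCombinedConfigurationBuilder).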

Collect values from other puppet classes

I'm trying to implement automatic monitoring using nagios/icinga and puppet.
Hosts and basic services are working, but now I want to implement different checks for services based on hostgroups. While I could set up the hostgroups in Hiera, I want to be able to do the following:
Apply a class for each service (like ssh, http) which only "exports" a hostgroup name (like "ssh-servers" and "http-servers"),
and also apply a base class which "collects" these names, joins them into a string, and exports a nagios_host resource like this:
@@nagios_host { $::fqdn:
  ensure     => present,
  use        => "generic-host",
  alias      => $::hostname,
  address    => $::ipaddress,
  hostgroups => $hostgroups, # this should be something like "ssh-servers, http-servers"
}
I'm just starting with puppet and looked at virtual resources and exported resources but I'm not sure how to apply this correctly. Is this even possible?
The export/import paradigm does not lend itself well to this type of data gathering. If you want to take advantage of it, you will need to define resource types that Just Work when gathered on the Nagios server from all the agent catalogs.
Your mileage will very likely improve if you rely on PuppetDB queries instead. You get much more control that way.
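For example, the base class could query PuppetDB directly for the exported hostgroup markers (a sketch, assuming Puppet 4+, the puppetdb_query function from the puppetlabs-puppetdb module, stdlib's join, and a hypothetical defined type site::hostgroup whose exported instances carry the hostgroup name as their title):
# Collect the titles of all exported site::hostgroup resources
# and join them into the comma-separated string Nagios expects.
$rows       = puppetdb_query('resources[title] { type = "Site::Hostgroup" and exported = true }')
$hostgroups = join($rows.map |$r| { $r['title'] }, ', ')

@@nagios_host { $::fqdn:
  ensure     => present,
  use        => "generic-host",
  alias      => $::hostname,
  address    => $::ipaddress,
  hostgroups => $hostgroups,
}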

How to build a file based on defined-type instances in Puppet

I want my Puppet class to create a file resource with contents based on all instances of a particular defined type. I looked at this question with the idea of iterating over the instances to build the file, but apparently it's a "Bad Idea" per the one answer currently there.
Some background: I am building a monitor_service class in Puppet to deploy a custom monitoring application. The application reads a config file that tells it what to monitor, one item per line, along the lines of
ITEM: /var/things/thing-one (123)
ITEM: /var/things/thing-two (456)
I am also writing a defined type that deploys instances of the monitored items:
define my_thing::monitored_thing ( $port ) {
file { "/var/things/$name":
...
}
}
On a given node, I set up several monitored_things like
my_thing::monitored_thing { "thing-one":
port => 123
}
my_thing::monitored_thing { "thing-two":
port => 456
}
What's the "right" Puppet idiom for building the monitoring service config file? I would prefer for this to work in such a way that the monitor_service class doesn't have to be told which monitored_thing instances it is watching -- just creating a monitored_thing instance should cause it to be added to the config file automatically.
There are several ways to modify/declare only part of a file within multiple defined types:
Use puppetlabs-stdlib's file_line. This lets you specify that a file should contain a specific line. Best when you do not care about the other file contents and just want to make sure a line is present or absent.
Use puppetlabs-concat if you want to make sure that the final file only includes the fragments that you are specifying, or when the order of the fragments matters (see the sketch after this list).
Use the augeas type if you need to edit/add configuration in a file with a more complicated structure, like XML or Apache configuration files.
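For this particular case, puppetlabs-concat maps naturally onto the requirement that declaring a monitored_thing adds it to the config file; a sketch (the target path /etc/monitor/monitored.conf is illustrative; the ITEM line format is taken from the question):
# In the monitor_service class: declare the config file once.
concat { '/etc/monitor/monitored.conf':
  ensure => present,
}

# In the defined type: each instance contributes one line.
define my_thing::monitored_thing ( $port ) {
  concat::fragment { "monitored_${name}":
    target  => '/etc/monitor/monitored.conf',
    content => "ITEM: /var/things/${name} (${port})\n",
  }
}
This keeps the monitor_service class unaware of the individual instances: just declaring a my_thing::monitored_thing adds its line to the file.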

Windows Azure Table Services - Extended Properties and Table Schema

I have an entity that, in addition to a few common properties, contains a list of extended properties stored as (Name, Value) pairs of strings within a collection. I should probably mention that these extended properties vary widely from instance to instance, and that they only need to be listed for each instance (there won't be any queries over the extended properties, for example finding all instances with a particular (Name, Value) pair). I'm exploring how I might persist this entity using Windows Azure Table Services. With the particular approach I'm testing now, I'm concerned that there may be a degradation of performance over time as more distinct extended property names are encountered by the application.
If I were storing this entity in a typical relational database, I'd probably have two tables to support this schema: the first would contain the entity identifier and its common properties, and the second would reference the entity identifier and use EAV style row-modeling to store the extended (Name, Value) pairs, one to each row.
Since tables in Windows Azure already use an EAV model, I'm considering custom serialization of my entity so that the extended properties are stored as though they were declared at compile time for the entity. I can use the Reading- and Writing-Entity events provided by DataServiceContext to accomplish this.
private void OnReadingEntity(object sender, ReadingWritingEntityEventArgs e)
{
    MyEntity Entry = e.Entity as MyEntity;
    if (Entry != null)
    {
        XElement Properties = e.Data
            .Element(Atom + "content")
            .Element(Meta + "properties");

        // select metadata from the extended properties
        Entry.ExtendedProperties = (from p in Properties.Elements()
                                    where p.Name.Namespace == Data
                                        && !IsReservedPropertyName(p.Name.LocalName)
                                        && !string.IsNullOrEmpty(p.Value)
                                    select new Property(p.Name.LocalName, p.Value)).ToArray();
    }
}

private void OnWritingEntity(object sender, ReadingWritingEntityEventArgs e)
{
    MyEntity Entry = e.Entity as MyEntity;
    if (Entry != null)
    {
        XElement Properties = e.Data
            .Element(Atom + "content")
            .Element(Meta + "properties");

        // add extended properties to the metadata
        foreach (Property p in (from p in Entry.ExtendedProperties
                                where !IsReservedPropertyName(p.Name)
                                    && !string.IsNullOrEmpty(p.Value)
                                select p))
        {
            Properties.Add(new XElement(Data + p.Name, p.Value));
        }
    }
}
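(For context, these handlers are attached to the events mentioned above; context is assumed to be the DataServiceContext/TableServiceContext in use:)
// Attach the handlers to the context's serialization events.
context.ReadingEntity += OnReadingEntity;
context.WritingEntity += OnWritingEntity;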
This works, and since I can define requirements for extended property names and values, I can ensure that they conform to all the standard requirements for entity properties within a Windows Azure Table.
So what happens over time as the application encounters thousands of different extended property names?
Here's what I've observed within the development storage environment:
The table container schema grows with each new name. I'm not sure exactly how this schema is used (probably for the next point), but obviously this XML document could grow quite large over time.
Whenever an instance is read, the xml passed to OnReadingEntity contains elements for every property name ever stored for any other instance (not just the ones stored for the particular instance being read). This means that retrieval of an entity will become slower over time.
Should I expect these behaviors in the production storage environment? I can see how these behaviors would be acceptable for most tables, as the schema would be mostly static over time. Perhaps Windows Azure Tables were not designed to be used like this? If so, I will certainly need to change my approach. I'm also open to suggestions on alternate approaches.
Development storage uses SQL Express to simulate cloud table storage. Ignore what you see there... the production storage system doesn't store any schema, so there's no overhead to having lots of unique properties in a table.
