Micrometer: Why does CompositeMeterRegistry manage a Set of registries instead of a List/LinkedHashSet?

I am using the Micrometer project to publish metrics from my Spring Boot 2.x application.
I have configured two MeterRegistry beans (a StepMeterRegistry and a SimpleMeterRegistry), so Spring Boot internally creates a CompositeMeterRegistry holding both.
Now I am using the /actuator/metrics endpoint to view the current metrics.
The implementation of MetricsEndpoint works like this:
- It fetches the first registry from the CompositeMeterRegistry and displays its metrics.
So the issue is this:
- CompositeMeterRegistry manages a Set, so the order is not guaranteed, but I want only the SimpleMeterRegistry to be published on the /actuator/metrics endpoint.
My current workaround is declaring a @Primary bean for MetricsEndpoint and providing it the SimpleMeterRegistry:
@Primary
@Bean
public MetricsEndpoint metricsEndpoint(SimpleMeterRegistry registry) {
    return new MetricsEndpoint(registry);
}
Since Micrometer is now managed by Spring, why don't they keep a List or LinkedHashSet of registries in CompositeMeterRegistry? That would be easier for developers to use.
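For reference, a minimal sketch of the Set-based API in question (assuming Micrometer 1.x; the two SimpleMeterRegistry instances merely stand in for my two registries):
import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.composite.CompositeMeterRegistry;
import io.micrometer.core.instrument.simple.SimpleMeterRegistry;

import java.util.Set;

public class CompositeOrderDemo {
    public static void main(String[] args) {
        CompositeMeterRegistry composite = new CompositeMeterRegistry();
        composite.add(new SimpleMeterRegistry());
        composite.add(new SimpleMeterRegistry()); // stand-in for the second registry

        // getRegistries() exposes a Set<MeterRegistry>, so there is no
        // guaranteed "first" registry for MetricsEndpoint to pick.
        Set<MeterRegistry> registries = composite.getRegistries();
        MeterRegistry first = registries.iterator().next(); // iteration order undefined
        System.out.println(first.getClass().getSimpleName());
    }
}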

Related

Assign Application Insights cloud_RoleName to Windows Service running w/ OWIN

I have an application built from a series of web servers and microservices, perhaps 12 in all. I would like to monitor and, importantly, map this suite of services in Application Insights. Some of the services are built with .NET Framework 4.6 and deployed as Windows services using OWIN to receive and respond to requests.
In order to get the instrumentation working with OWIN I'm using the ApplicationInsights.OwinExtensions package. I'm using a single instrumentation key across all my services.
When I look at my Application Insights Application Map, it appears that all the services I've instrumented are grouped into a single "application", with a few "links" to outside dependencies. I do not seem to be able to produce the "Composite Application Map" whose existence is suggested here: https://learn.microsoft.com/en-us/azure/application-insights/app-insights-app-map.
I'm assuming this is because I have not set a different "RoleName" for each of my services. Unfortunately, I cannot find any documentation that describes how to do so. In my map, the big circle in the middle is actually several different microservices.
I do see that the OwinExtensions package offers the ability to customize some aspects of the telemetry reported but, without a deep knowledge of the internal structure of App Insights telemetry, I can't figure out whether it allows the RoleName to be set and, if so, how to accomplish this. Here's what I've tried so far:
appBuilder.UseApplicationInsights(
    new RequestTrackingConfiguration
    {
        GetAdditionalContextProperties = ctx =>
            Task.FromResult(
                new[] { new KeyValuePair<string, string>("cloud_RoleName", ServiceConfiguration.SERVICE_NAME) }
                    .AsEnumerable())
    }
);
Can anyone tell me how, in this context, I can instruct App Insights to collect telemetry which will cause a Composite Application Map to be built?
The following is the overall doc about TelemetryInitializer, which is exactly what you want for setting additional properties on collected telemetry; in this case, setting Cloud.RoleName enables the application map:
https://learn.microsoft.com/en-us/azure/application-insights/app-insights-api-filtering-sampling#add-properties-itelemetryinitializer
Your telemetry initializer code would be something along the following lines (the class name here is illustrative):
public class CloudRoleNameInitializer : ITelemetryInitializer
{
    public void Initialize(ITelemetry telemetry)
    {
        if (string.IsNullOrEmpty(telemetry.Context.Cloud.RoleName))
        {
            // Set the role name for this service here.
            telemetry.Context.Cloud.RoleName = "RoleName";
        }
    }
}
Register the initializer at startup (for example by adding it to TelemetryConfiguration.Active.TelemetryInitializers, or via ApplicationInsights.config) and give each service a distinct role name. Please try this and see if it helps.

Cassandra: customer data per keyspace

Problem: one of our new customers wants their data to be stored in their own country (legal regulations). However, our existing customers' data is spread across a few datacenters in different countries.
Question: how can we separate the new customer's data so that it resides in its own country, without much change to the existing Cassandra architecture?
Potential solution #1: use a separate keyspace for this customer. Schemas will be the same between keyspaces, which adds complexity for data migration and so on. DataStax support confirmed that it is possible to configure a keyspace per region.
However, the Spring Data Cassandra version we use doesn't allow choosing the keyspace dynamically.
The only way is to use CqlTemplate and either run "USE keyspace" before every call or prefix the table in every query ("SELECT * FROM blabla.mytable"), but that sounds like a hack to me.
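For illustration, a minimal sketch of that prefixing workaround (assuming Spring Data Cassandra 2.x package names; resolveKeyspace(...) and the tenant-to-keyspace mapping are made up for the example):
import org.springframework.data.cassandra.core.cql.CqlTemplate;

public class KeyspacePrefixExample {

    private final CqlTemplate cqlTemplate;

    public KeyspacePrefixExample(CqlTemplate cqlTemplate) {
        this.cqlTemplate = cqlTemplate;
    }

    public long countForTenant(String tenantId) {
        // Prefix the keyspace directly in the statement instead of running "USE keyspace"
        String keyspace = resolveKeyspace(tenantId);
        return cqlTemplate.queryForObject(
                "SELECT count(*) FROM " + keyspace + ".mytable", Long.class);
    }

    private String resolveKeyspace(String tenantId) {
        return "ks_" + tenantId; // illustrative mapping only
    }
}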
Potential solution #2: use a separate environment for the new client, but management rejected that.
Any other ways to achieve this goal?
Update 3
The example and explanation below are the same as in the GitHub repository.
Update 2
The example on GitHub is now working. The most future-proof solution seemed to be using repository extensions. I will update the example below soon.
Update
Note that the solution I originally posted had some flaws, which I discovered during JMeter tests. The DataStax Java driver reference advises avoiding setting the keyspace through the Session object; you have to set the keyspace explicitly in every query.
I've updated the GitHub repository and also changed the solution's description.
Be very careful though: if the session is shared by multiple threads, switching the keyspace at runtime could easily cause unexpected query failures. Generally, the recommended approach is to use a single session with no keyspace, and prefix all your queries.
Solution Description
I would set up a separate keyspace for this specific customer and add support for changing the keyspace in the application. We previously used this approach with an RDBMS and JPA in production, so I would say it can work with Cassandra as well. The solution was similar to the one below.
I will briefly describe how to prepare and set up Spring Data Cassandra to select the target keyspace on each request.
Step 1: Preparing your services
First, define how the tenant ID is set on each request. In the case of a REST API, a good example is a specific HTTP header:
Tenant-Id: ACME
Similarly, with any remote protocol you can forward the tenant ID on every message; for example, with AMQP or JMS you can put it in a message header or property, as in the sketch below.
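A minimal sketch of forwarding the tenant ID as a JMS message property with Spring's JmsTemplate (the queue name and payload are illustrative):
import org.springframework.jms.core.JmsTemplate;

public class TenantAwareSender {

    private final JmsTemplate jmsTemplate;

    public TenantAwareSender(JmsTemplate jmsTemplate) {
        this.jmsTemplate = jmsTemplate;
    }

    public void send(String tenantId, Object payload) {
        // Attach the tenant ID as a message property so the consumer
        // can restore it before touching the data layer
        jmsTemplate.convertAndSend("user-events", payload, message -> {
            message.setStringProperty("Tenant-Id", tenantId);
            return message;
        });
    }
}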
Step 2: Getting tenant ID in application
Next, store the incoming header on each request inside your controllers. You can use a ThreadLocal (a sketch follows after the controller example) or a request-scoped bean:
@Component
@Scope(scopeName = "request", proxyMode = ScopedProxyMode.TARGET_CLASS)
public class TenantId {

    private String tenantId;

    public void set(String id) {
        this.tenantId = id;
    }

    public String get() {
        return tenantId;
    }
}
@RestController
public class UserController {

    @Autowired
    private UserRepository userRepo;

    @Autowired
    private TenantId tenantId;

    @RequestMapping(value = "/userByName")
    public ResponseEntity<String> getUserByUsername(
            @RequestHeader("Tenant-ID") String tenantId,
            @RequestParam String username) {
        // Setting the tenant ID
        this.tenantId.set(tenantId);
        // Finding user
        User user = userRepo.findOne(username);
        return new ResponseEntity<>(user.getUsername(), HttpStatus.OK);
    }
}
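As an alternative to the request-scoped bean, a minimal sketch of the ThreadLocal approach mentioned above (remember to clear the value in a servlet filter's finally block so pooled threads don't leak tenant IDs between requests):
public final class TenantContext {

    private static final ThreadLocal<String> TENANT = new ThreadLocal<>();

    private TenantContext() {
    }

    public static void set(String tenantId) {
        TENANT.set(tenantId);
    }

    public static String get() {
        return TENANT.get();
    }

    // Call from a finally block when request processing ends
    public static void clear() {
        TENANT.remove();
    }
}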
Step 3: Setting tenant ID in data-access layer
Finally, extend the repository implementation and set the keyspace according to the tenant ID:
public class KeyspaceAwareCassandraRepository<T, ID extends Serializable>
        extends SimpleCassandraRepository<T, ID> {

    private final CassandraEntityInformation<T, ID> metadata;
    private final CassandraOperations operations;

    @Autowired
    private TenantId tenantId;

    public KeyspaceAwareCassandraRepository(
            CassandraEntityInformation<T, ID> metadata,
            CassandraOperations operations) {
        super(metadata, operations);
        this.metadata = metadata;
        this.operations = operations;
    }

    private void injectDependencies() {
        SpringBeanAutowiringSupport
                .processInjectionBasedOnServletContext(this, getServletContext());
    }

    private ServletContext getServletContext() {
        return ((ServletRequestAttributes) RequestContextHolder.getRequestAttributes())
                .getRequest().getServletContext();
    }

    @Override
    public T findOne(ID id) {
        injectDependencies();
        CqlIdentifier primaryKey = operations.getConverter()
                .getMappingContext()
                .getPersistentEntity(metadata.getJavaType())
                .getIdProperty().getColumnName();
        Select select = QueryBuilder.select().all()
                .from(tenantId.get(), metadata.getTableName().toCql())
                .where(QueryBuilder.eq(primaryKey.toString(), id))
                .limit(1);
        return operations.selectOne(select, metadata.getJavaType());
    }

    // All other overrides should be similar
}
@SpringBootApplication
@EnableCassandraRepositories(repositoryBaseClass = KeyspaceAwareCassandraRepository.class)
public class DemoApplication {
    ...
}
Let me know if there are any issues with the code above.
Sample code in GitHub
https://github.com/gitaroktato/spring-boot-cassandra-multitenant-example
References
Example: Using JPA interceptors
Spring #RequestHeader example
Spring request-scoped beans
Datastax Java Driver Reference
After much back and forth, we decided not to do dynamic keyspace resolution within the same JVM.
The decision was to have a dedicated Jetty/Tomcat per keyspace and to define at the nginx router level which server each request should be redirected to (based on the companyId in the request URL).
For example, all our endpoints contain /companyId/<value>, so based on that value we can redirect the request to the proper server, which uses the correct keyspace.
The advice about two keyspaces is correct.
If the question is just about having two keyspaces, why not configure two keyspaces:
- for the region-dependent client, write to both;
- for the others, write to the one (main) keyspace only.
No data migration will be required.
Here is a sample of how to configure Spring repositories to hit different keyspaces:
http://valchkou.com/spring-boot-cassandra.html#multikeyspace
The choice of repository can be a simple if/else:
if (Arrays.asList(1, 2, 3).contains(org)) { // region-dependent clients
    repoA.save(entity);
    repoB.save(entity);
} else {
    repoA.save(entity);
}

Why is data access tightly coupled to the Service base in ServiceStack

I'm curious why the decision was made to couple the Service base class in ServiceStack to data access (via the Db property). With web services it is very popular to use a data repository pattern to fetch the raw data from the database. These data repositories can be used by many services without having to call a service class.
For example, let's say I am supporting a large retail chain that operates across the nation. There are a number of settings that differ across stores, like tax rates. Each call to one of the web services will need these settings for its domain logic. With a repository pattern I would simply create a data access class whose sole responsibility is to return these settings. In ServiceStack, however, I am exposing these settings as a Service (which they need to be as well). In my service call, the first thing I end up doing is newing up the Settings service and using it inside my other service. Is this the intention? Since services return an object, I have to cast the result to the typed service result.
ServiceStack's convenient ADO.NET IDbConnection Db property lets you quickly create database-driven services (i.e. the most popular kind) without the overhead and boilerplate of creating a repository, if preferred. As ServiceStack services are already testable and the DTO pattern provides a clean, endpoint-agnostic web service interface, there's often not a lot of value in wrapping and proxying "one-off" data access into a separate repository.
But at the same time there's nothing forcing you to use the base.Db property (which has no effect if unused). The Unit Testing Example on the wiki shows how to use either base.Db or the repository pattern:
public class SimpleService : Service
{
    public IRockstarRepository RockstarRepository { get; set; }

    public List<Rockstar> Get(FindRockstars request)
    {
        return request.Aged.HasValue
            ? Db.Select<Rockstar>(q => q.Age == request.Aged.Value)
            : Db.Select<Rockstar>();
    }

    public RockstarStatus Get(GetStatus request)
    {
        var rockstar = RockstarRepository.GetByLastName(request.LastName);
        if (rockstar == null)
            throw HttpError.NotFound("'{0}' is no Rockstar".Fmt(request.LastName));

        var status = new RockstarStatus
        {
            Alive = RockstarRepository.IsAlive(request.LastName)
        }.PopulateWith(rockstar); // Populates with matching fields

        return status;
    }
}
Note: returning an object or a strongly-typed DTO response like RockstarStatus has the same effect in ServiceStack, so if preferred you can return a strongly-typed response and avoid any casting.

How does the Social Business Toolkit Samples application use managed-beans.xml?

So far I have:
- installed and started sbt.sample-1.0.0.20140125-1133.ear on my WebSphere Application Server,
- added a URL resource for the SBT properties file.
The Social Business Toolkit Samples app runs fine and I'm able to connect to my IBM Connections server and retrieve some ActivityStream entries.
When I first loaded the application, I noticed this error:
Exception stack trace: com.ibm.websphere.naming.CannotInstantiateObjectException: A NameNotFoundException occurred on an indirect lookup on the name java:comp/env/url/ibmsbt-managedbeansxml. The name java:comp/env/url/ibmsbt-managedbeansxml maps to a JNDI name in deployment descriptor bindings for the application performing the JNDI lookup. Make sure that the JNDI name mapping in the deployment descriptor binding is correct. If the JNDI name mapping is correct, make sure the target resource can be resolved with the specified name relative to the default initial context.
In the Samples application's ibm-web-bnd.xml file I found this line:
<resource-ref name="url/ibmsbt-managedbeansxml" binding-name="url/ibmsbt-managedbeansxml" />
And in the web.xml:
<resource-ref>
    <description>Reference to a URL resource which points to the managed bean configuration for the Social Business Toolkit.</description>
    <res-ref-name>url/ibmsbt-managedbeansxml</res-ref-name>
    <res-type>java.net.URL</res-type>
    <res-auth>Container</res-auth>
    <res-sharing-scope>Shareable</res-sharing-scope>
</resource-ref>
I'm wondering: why should there be a URL resource pointing to the JSF application configuration resource file (managed-beans.xml) in the first place? According to the Java EE documentation, the JavaServer Faces implementation will look for it in the /WEB-INF/ folder.
Does the SBT use JavaServer Faces technology somewhere? Or can I choose not to use the managed-beans.xml file in my own applications that use the SBT?
I wouldn't recommend considering them related. managed-beans.xml had a prior name, and it's just a set of configuration objects. The project itself does not use JavaServer Faces.
I just read the documentation again, more carefully than the first time, and I think I now have a better understanding of what I asked in my second question. From the documentation:
In a web application SBTFilter (HTTP servlet filter) is responsible for initializing the application using the servlet context. Application does the initialization like loading the managed beans and properties factories.
The sample app is a web application. I think that in my own application I can choose to use com.ibm.commons.runtime.impl.app.ApplicationStandalone instead of com.ibm.commons.runtime.impl.servlet.ApplicationServlet and then configure an endpoint programmatically. Or, alternatively, not use an Application at all, like so:
// Initialize the SBT runtime without a servlet container
RuntimeFactory runtimeFactory = new RuntimeFactoryStandalone();
Application application = runtimeFactory.initApplication(null);
Context.init(application, null, null);

How to specify and organize OXM_METADATA_SOURCE in glassfish v4 MOXy Provider?

I am a fan of both Glassfish and MOXy, and it's good news for me that MOXy has been bundled into Glassfish v4.
I have read and tried a few MOXy examples on the internet. I like the dynamic OXM_METADATA_SOURCE part, since when providing RESTful services the "client perspective" is much more flexible than the domain classes.
So here is the problem:
Different RESTful services can have different views of the same domain classes, which in my work is a very common case. So there can be a lot of OXM metadata binding files for every service. And as we know, a single OXM metadata file can only correspond to a single Java package, so there will be even more OXM metadata files to maintain.
Back to JAX-RS: are there any frameworks, design patterns, or best practices for managing the mapping between a set of OXM metadata files and the service itself?
You can try a new feature called Entity Filtering, introduced in Jersey 2.3. Even though Entity Filtering is not based on OXM_METADATA_SOURCE, you can achieve your goal with it.
Let's assume you have the following domain class (the annotations are custom entity-filtering annotations; a sketch of defining one is shown further below):
public class Project {

    private Long id;
    private String name;
    private String description;

    @ProjectDetailedView
    private List<Task> tasks;

    @ProjectAnotherDetailedView
    private List<User> users;

    // ...
}
And, of course, some JAX-RS resources, e.g.:
@Path("projects")
@Produces("application/json")
public class ProjectsResource {

    @GET
    @Path("{id}")
    public Project getProject(@PathParam("id") final Long id) {
        return ...;
    }

    // ...
}
Now we have two detailed views defined on the domain class (via annotations) and the resource class. If you annotate the getProject resource method with:
- @ProjectDetailedView, the returned entity will contain id, name, description AND the list of tasks from Project;
- @ProjectAnotherDetailedView, the returned entity will contain id, name, description AND the list of users from Project.
If you leave the resource method un-annotated, the resulting entity will contain only id, name, and description.
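For completeness, a custom entity-filtering annotation can be defined along these lines (a sketch following the pattern used in Jersey's entity-filtering example; the nested Factory class is a convenience for obtaining annotation instances programmatically):
import java.lang.annotation.Documented;
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

import javax.enterprise.util.AnnotationLiteral;

import org.glassfish.jersey.message.filtering.EntityFiltering;

@Target({ElementType.TYPE, ElementType.METHOD, ElementType.FIELD})
@Retention(RetentionPolicy.RUNTIME)
@Documented
@EntityFiltering
public @interface ProjectDetailedView {

    // Convenience factory for obtaining an instance of the annotation
    public static class Factory extends AnnotationLiteral<ProjectDetailedView>
            implements ProjectDetailedView {

        public static ProjectDetailedView get() {
            return new Factory();
        }
    }
}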
You can find more information about Entity Filtering in the User Guide or you can directly try it in our example: entity-filtering.
Note 1: Entity Filtering works only with the JSON media type (via MOXy) at the moment. Support for other media types/providers is planned for the future.
Note 2: Jersey 2.3 is not integrated into any (promoted) build of GF 4.0. The next Jersey version that should be part of GF 4.0 is 2.4, which we plan to release in the next few weeks.
