Multiple Authorizers Support - security

I understand that Kafka ships with SimpleACLAuthorizer, which supports:
SSL
SASL
Can we implement a custom Authorizer, say MyCustomAuthorizer (one that links to LDAP for user-group association), and then run MyCustomAuthorizer ALONG with the SimpleACLAuthorizer above?
In short - can Kafka host multiple Authorizers at the same time?
One way I can see is to have MyCustomAuthorizer implement all three mechanisms itself, but this is not really effective.
authorizer.class.name=<Foo.class> <- Supported
authorizer.class.names=<Foo.class,Bar.class> <- Is this possible ?
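If running several authorizers side by side isn't supported, the fallback I can think of is a single composite authorizer registered under authorizer.class.name that fans out to delegates. Below is a minimal, self-contained sketch of the idea - note that MiniAuthorizer is a deliberately simplified stand-in for Kafka's real Authorizer interface, and CompositeAuthorizer / MyCustomAuthorizer are hypothetical names:

import java.util.Arrays;
import java.util.List;
import java.util.Map;

// Sketch only: one registered authorizer that fans out to several delegates.
// MiniAuthorizer is a simplified stand-in for Kafka's real Authorizer interface.
interface MiniAuthorizer {
    void configure(Map<String, ?> configs);
    boolean authorize(String principal, String operation, String resource);
}

public class CompositeAuthorizer implements MiniAuthorizer {
    private final List<MiniAuthorizer> delegates;

    public CompositeAuthorizer(MiniAuthorizer... delegates) {
        this.delegates = Arrays.asList(delegates);
    }

    @Override
    public void configure(Map<String, ?> configs) {
        // Propagate broker configs to every delegate (e.g. LDAP connection settings).
        delegates.forEach(d -> d.configure(configs));
    }

    @Override
    public boolean authorize(String principal, String operation, String resource) {
        // Deny unless every delegate allows; "allow if any allows" is the other
        // common policy - pick whichever matches the security requirements.
        return delegates.stream()
                        .allMatch(d -> d.authorize(principal, operation, resource));
    }
}

The composite class itself would then be the single value of authorizer.class.name.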


Confusion about interaction with other domains

We're creating a new application for an entirely new domain model (and Bounded Context) 'Appointment'. We chose to combine CQS with Hexagonal Architecture (using ports and adapters) for our new domain.
Our package structure mainly looks like this:
.appointments
    .application
        .command
        .representation
        - AppointmentScheduleApplicationService.java
        - AppointmentScheduleQueryService.java
    .domain.model
    .port.adapter
        .integration
        .persistence
        .web
    .service
        - AppointmentScheduleFacade.java
My questions:
Is this package structure OK for what we're trying to achieve?
We want every communication to/from other domains to go through the AppointmentScheduleFacade interface. Cross-domain communication happens as plain method invocations (no RPC or REST), as the domains are not distributed.
The facade mainly delegates to:
AppointmentScheduleApplicationService.java for model modification
AppointmentScheduleQueryService.java for passing data to other domains.
Is this setup OK? Or should another domain correspond directly with the ApplicationService and QueryService?
Your structure seems to be fine, but of course it depends on how you use it. Hexagonal architecture is not just a matter of folder structure.
Regarding communication between modules or contexts, I suggest you strive for as little coupling as possible. You can achieve that with a message bus on which you publish your domain events; other domains can retrieve those messages and do whatever they need with them. That way one module doesn't need to know about the other modules - it only needs to know the bus and how to read a message from it (typically serialized in JSON format).
Modules publish and subscribe to events: this is the dependency inversion principle, but at the architecture level.
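To make this concrete, here is a minimal in-memory sketch; DomainEventBus and AppointmentScheduled are hypothetical names, and a real bus would serialize events (typically to JSON) and run on real messaging infrastructure:

import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;

// Minimal in-memory event bus: modules only know the bus, never each other.
final class DomainEventBus {
    private final Map<Class<?>, List<Consumer<Object>>> subscribers = new ConcurrentHashMap<>();

    <T> void subscribe(Class<T> eventType, Consumer<T> handler) {
        subscribers.computeIfAbsent(eventType, k -> new CopyOnWriteArrayList<>())
                   .add(e -> handler.accept(eventType.cast(e)));
    }

    void publish(Object event) {
        subscribers.getOrDefault(event.getClass(), List.of())
                   .forEach(h -> h.accept(event));
    }
}

// A domain event published by the Appointment context (records need a recent JDK).
record AppointmentScheduled(String appointmentId, String patientId) {}

class Example {
    public static void main(String[] args) {
        DomainEventBus bus = new DomainEventBus();
        // Another context subscribes without knowing the Appointment module's internals.
        bus.subscribe(AppointmentScheduled.class,
                e -> System.out.println("billing reacts to " + e.appointmentId()));
        bus.publish(new AppointmentScheduled("appt-42", "patient-7"));
    }
}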
If you post some code example, I could be more explicit.
Good luck!

Spring Integration Multiple Endpoints

Currently I am working on a Spring Integration application which has the following scenario.
There is a transformer which transforms the incoming message into a particular object type.
Once the transformation is done, we need to write it to a log file and to a database table, and then finally send it to a JMS outbound adapter.
I was reading the Spring Integration reference and found out there are two ways to approach this scenario.
Introduce a pub-sub channel as the output channel of the above-mentioned transformer, and have the File-outbound, DB-outbound and JMS-outbound adapters as the subscribers.
Introduce a Recipient List Router just after the transformer and specify the File-outbound, DB-outbound and JMS-outbound adapters as the recipients.
When it comes to Enterprise Integration Patterns, what is the best way to handle this scenario? Any new suggestions and improvements are welcome.
Thanks,
Keth
There is no "best way" - both solutions are equivalent and there is little difference at runtime. So it's your preference; I generally use pub/sub for the simple case and an RLR if the recipients are conditional (with selectors).

Service Fabric - a web API in the cluster whose only job is to serve data from a reliable collection

I am new to Service Fabric and currently I am struggling to find out how to access data from a reliable collection (one that is defined and initialized in a stateful service context) from a Web API (which is also living in the Service Fabric cluster, as a separate application). The problem is very basic and I am sure I am missing something very obvious, so apologies to the community if this sounds lame.
I have a large XML, portions of which I want to expose via Web API endpoints as results of various queries. I searched for similar questions but couldn't find a suitable answer.
I would be happy to see how an experienced SF developer would do such a task.
EDIT: I posted the solution I came up with below.
After reading around and observing others' issues and Azure's samples, I have implemented a solution. I am posting the gotchas I had here, hoping that it will help other devs who are new to Azure Service Fabric (disclaimer: I am still a newbie in Service Fabric, so comments and suggestions are highly appreciated):
First, pretty simple - I ended up with a stateful service and a stateless Web API service in an Azure Service Fabric application:
DataStoreService - a stateful service that reads the large XMLs and stores them in a reliable dictionary (this happens in the RunAsync method).
The Web API provides an /api/query endpoint that filters the collection of XElements stored in the reliable dictionary and serializes the result back to the requester.
3 Gotchas
1) How to get your hands on the reliable dictionary data from the stateless service, i.e. how to get an instance of the stateful service from the stateless one:
ServiceUriBuilder builder = new ServiceUriBuilder("DataStoreService");
IDataStoreService DataStoreServiceClient = ServiceProxy.Create<IDataStoreService>(builder.ToUri(), new ServicePartitionKey("Your.Partition.Name"));
The above code already gives you the instance, i.e. you need to use a service proxy. For that purpose you need to:
define an interface that your stateful service will implement, and use it when invoking the Create method of ServiceProxy (IDataStoreService above);
pass the correct partition key to the Create method. This article gives a very good intro to Azure Service Bus partitions.
2) Registering replica listeners - in order to avoid errors saying
The primary or stateless instance for the partition 'a67f7afa-3370-4e6f-ae7c-15188004bfa1' has invalid address, this means that right address from the replica/instance is not registered in the system
you need to register replica listeners, as stated in this post:
public DataStoreService(StatefulServiceContext context)
    : base(context)
{
    configurationPackage = Context.CodePackageActivationContext.GetConfigurationPackageObject("Config");
}
// The registration itself: a remoting listener lets ServiceProxy calls reach this replica.
protected override IEnumerable<ServiceReplicaListener> CreateServiceReplicaListeners()
{
    return new[] { new ServiceReplicaListener(ctx => this.CreateServiceRemotingListener(ctx)) };
}
3) Service Fabric namespacing and referencing services - I took the ServiceUriBuilder class from the service-fabric-dotnet-web-reference-app. Basically you need something to generate a URI of the form:
new Uri("fabric:/" + this.ApplicationInstance + "/" + this.ServiceInstance);,
where ServiceInstance is the name of the service you want to get an instance of (DataStoreService in this case).
You can use Web API with OWIN to set up a communication listener and expose data from your reliable collections. See Build a web front end for your app for info on how to set that up. Take a look at the WordCount sample in the Getting started sample apps, which feeds a bunch of random words into a stateful service and keeps a count of the words processed. Hope that helps.

MyCouch client configuration and usage

I have a couple of questions regarding the (excellent) CouchDB .NET client MyCouch:
Is there some built-in retry policy in case of "transient" failures (like the server responding with 503)?
Should instances of MyCouchClient or MyCouchStore be cached to be reused? Right now I'm creating one for each incoming request, but I'm wondering if that incurs a performance penalty.
I would like to customize the configuration of Json.NET as used by MyCouch, like adding a new StringEnumConverter { CamelCaseText = true } to the list of Converters. Is there a way to achieve that through the API?
Thanks
1) There's no magic in the MyCouchClient; it just issues simple requests and responses. For the MyCouchStore, however, I would gladly accept a pull request adding options for retries or e.g. auto-batching queries.
2) Here are some links to information that will help you decide between per-request and per-application:
Is async HttpClient from .Net 4.5 a bad choice for intensive load applications?
http://msdn.microsoft.com/en-us/library/system.net.servicepointmanager.defaultconnectionlimit.aspx
So doing one per application would probably require reconfiguring the connection limit.
I have this centralized in my IoC config, and by default I am not doing one per application. The first "connection" can take a bit longer, but subsequent ones have been measured down to milliseconds against Cloudant by other users, so that should in general not be an issue.
3)
You can configure the serializer by providing a custom MyCouchClientBootstrapper with a custom implementation of: https://github.com/danielwertheim/mycouch/blob/master/source/projects/MyCouch.Net45/MyCouchClientBootstrapper.cs#L170
And you also have to extend this guy: https://github.com/danielwertheim/mycouch/blob/master/source/projects/MyCouch.Net45/Serialization/SerializationConfiguration.cs#L9
Feel free to suggest changes that make this process simpler for you.

Issue with Hazelcast active-passive WAN Replication

I have several active and one passive (no wan-replication element) Hazelcast clusters.
When an item is added to the global WAN-replicated map, I see the following message in the log of the passive cluster instance:
Received wan merge but no merge policy defined!
However, as I understand from 'hazelcast-fullconfig.xml', there is a default merge policy for the map (hz.ADD_NEW_ENTRY). I also tried to set it explicitly.
So, as I understand it, the wan-replication merge policy and the map merge policy are different things.
According to the manual, a passive endpoint should not have a wan-replication element.
Any ideas how I can configure wan-replication for the passive endpoints? Have I missed something?
In version 2.x, you should define a wan-replication-ref (and merge policy) on the passive side as well.
See the testWANClusteringActivePassive test in https://github.com/hazelcast/hazelcast/blob/maintenance-2.x/hazelcast/src/test/java/com/hazelcast/impl/WanReplicationTest.java
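As a sketch, the passive side's map config carries the wan-replication-ref and merge policy even though that cluster publishes nothing itself. The snippet below uses the programmatic Config API with 3.x-style class names (in 2.x the same thing is usually declared in hazelcast.xml), and the map/replication names are made up:

import com.hazelcast.config.Config;
import com.hazelcast.config.MapConfig;
import com.hazelcast.config.WanReplicationRef;
import com.hazelcast.core.Hazelcast;

public class PassiveClusterConfig {
    public static void main(String[] args) {
        // Reference the WAN replication scheme and pick a merge policy for incoming entries.
        WanReplicationRef ref = new WanReplicationRef();
        ref.setName("my-wan-scheme");           // assumed replication scheme name
        ref.setMergePolicy("hz.ADD_NEW_ENTRY"); // the merge policy from the question

        MapConfig mapConfig = new MapConfig("global-map") // assumed map name
                .setWanReplicationRef(ref);

        Config config = new Config();
        config.addMapConfig(mapConfig);
        Hazelcast.newHazelcastInstance(config);
    }
}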
