I find it difficult to decide whether something should be part of the domain or the application layer.
Reading through this answer helps a lot with concepts like authorisation but I still find myself struggling with other things.
To illustrate my confusion, please consider the case of posting a comment. Here are the things that need to happen before a comment can be posted. I indicate in parentheses where I think each piece of functionality should go.
Make sure user role/status is allowed to comment on this post (authorisation, goes to Application)
Make sure post to which we are commenting exists and is published (Domain)
Make sure user has not posted more than 5 comments in the last minute (throttling, intuition says it goes to Application)
Make sure comment is not an empty string (Domain)
Make sure comment does not have dirty words (Domain?)
Make sure comment does not have duplicates from same user in this post (Domain?)
Format the comment (Application)
Remove certain HTML tags from comment that are not permitted for current user (Application)
Check comment for spam (Application?)
I can't decide whether checking a comment for spam is a domain concern or an application concern; the same goes for throttling. From my point of view both of these concerns are important and must be present. But the same is true of authorisation, and we know that should not be in the Domain.
If I split these concerns between a domain service and an application service, then I feel like my domain is not completely enforced and actually relies on the application to make prior checks. In that case, what is the whole point? Why don't I just do it all in the application to reduce confusion?
My current setup does this:
Controller ->
App.CommentingService.Comment() ->
Domain.CommentingService.Comment()
It would be helpful if someone could go through all the required steps to create a comment and assign each to the right layer, giving some reasoning behind it.
Your setup looks right. Application services often come in two flavours:
Application features: email notifications, authorization, persistence, etc. Every feature your system has besides the domain goes here.
Application coordination: to fulfil a use case you need to coordinate application features and the domain. All the plumbing goes here.
Keep in mind that application coordination models use cases, so it does not always match 1 app service = 1 domain service, because a use case can involve more than one process.
Controller
App.CommentingService.Comment() //coordination of the features and domain below
App.AuthService().Authorize(); //feature
Domain.CommentingService.Comment(); //domain
App.PersistenceService().Persist(); //feature
App.NotificationService().SendNotificationToUser(); //feature
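Fleshed out a little, that coordination service might look like the following C# sketch. All of the type and method names here are illustrative assumptions, not a prescribed API:

using System;

public record Comment(Guid UserId, Guid PostId, string Text);

// Domain: invariants such as "a comment is never empty" live here.
public class CommentingDomainService
{
    public Comment Comment(Guid userId, Guid postId, string text)
    {
        if (string.IsNullOrWhiteSpace(text))
            throw new ArgumentException("Comment cannot be empty.");
        return new Comment(userId, postId, text);
    }
}

// Application features, each behind its own abstraction.
public interface IAuthService { void Authorize(Guid userId, string action, Guid resourceId); }
public interface ICommentRepository { void Persist(Comment comment); }
public interface INotificationService { void SendNotificationToUser(Guid userId, Comment comment); }

// Application coordination: plumbs the features and the domain together.
public class CommentingAppService
{
    private readonly IAuthService _auth;
    private readonly CommentingDomainService _domain;
    private readonly ICommentRepository _repository;
    private readonly INotificationService _notifier;

    public CommentingAppService(IAuthService auth, CommentingDomainService domain,
                                ICommentRepository repository, INotificationService notifier)
    {
        _auth = auth;
        _domain = domain;
        _repository = repository;
        _notifier = notifier;
    }

    public void Comment(Guid userId, Guid postId, string text)
    {
        _auth.Authorize(userId, "comment", postId);          // feature: authorization
        var comment = _domain.Comment(userId, postId, text); // domain
        _repository.Persist(comment);                        // feature: persistence
        _notifier.SendNotificationToUser(userId, comment);   // feature: notification
    }
}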
Why don't I just do it all in application to reduce confusion?
Responsibility segregation, loose coupling, dependency injection, etc.; all of these are good for many reasons. I will give you a real example I was involved in recently: having a loosely coupled assembly (it was in .NET Framework) with just the domain services allowed me to host the same domain, untouched, in a web app, a desktop app and a SOAP web service, changing only the app coordination services, because the requirements and use cases were different in each app.
As for what goes into the domain and what does not: it is very hard to give you a direct answer, because it depends a lot on what your domain is. For example:
Make sure user has not posted more than 5 comments in the last minute
You have to ask why you are throttling. To prevent a messy UI? For performance reasons? To prevent a denial-of-service threat? Or because it breaks a rule in your "game", since your "game" gives the user only a limited number of tries in a given time span? That is what indicates whether something is Domain or Application.
Here's an abstract question with real world implications.
I have two microservices; let's call them the CreditCardsService and the SubscriptionsService.
I also have a SPA that is supposed to use the SubscriptionsService so that customers can subscribe. To do that, the SubscriptionsService has an endpoint where you can POST a subscription model to create a subscription, and in that model is a creditCardId that points to a credit card that should pay for the subscription. There are certain business rules that say whether or not you can use said credit card for the subscription (the expiration is more than 12 months away, it's a VISA, etc.). These specific business rules are tied to the SubscriptionsService.
The problem is that the team working on the SPA wants a /CreditCards endpoint in the SubscriptionsService that returns all valid credit cards of the user that can be used in the subscription model. They don't want to implement in the SPA the same business validation rules that are in the SubscriptionsService itself.
To me this seems to go against the SOLID principles that are central to microservice design, specifically separation of concerns. I also ask myself: what precedent is this going to set? Are we going to have to add a /CreditCards endpoint to the OrdersService or any other service that might use creditCardId as a property of its model?
So the main question is this: What is the best way to design this? Should the business validation logic be duplicated between the frontend and the backend? Should this new endpoint be added to the SubscriptionsService? Should we try to simplify the business logic?
It is a completely fair request and you should offer that endpoint. If you define the rules for what CC is valid for your service, then you should offer any and all help dealing with it too.
Logic should not be repeated. That tends to make systems unmaintainable.
This has less to do with SOLID, although SRP would also say that if you are responsible for something, then any related logic also belongs to you. This concern cannot be separated from your service, since it is defined there.
As a solution option, I would perhaps look into whether you can get away with linking to the CC service, since you already have one. Can you redirect the client, perhaps with a constructed query, to the CC service to get all the relevant CCs, without the Subscription service actually knowing them? A sketch of that idea follows.
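One hedged way to express that redirect idea in ASP.NET Core terms (the endpoint, the query parameters and the CC service URL are all invented for illustration):

using System;
using Microsoft.AspNetCore.Mvc;

public class EligibleCardsController : ControllerBase
{
    // Rather than owning the card data, answer with a redirect that sends the
    // client to the CC service, with the subscription rules encoded as a query
    // the CC service would have to understand.
    [HttpGet("subscriptions/eligible-cards")]
    public IActionResult Get(Guid userId)
    {
        var query = $"userId={userId}&brand=VISA&minMonthsValid=12";
        return Redirect($"https://cards.example.internal/cards?{query}");
    }
}

The trade-off is that the CC service must now support those filter parameters, which pushes part of the subscription vocabulary upstream.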
What is the best way to design this? Should the business validation logic be duplicated between the frontend and the backend? Should this new endpoint be added to the SubscriptionsService? Should we try to simplify the business logic?
From my point of view, I would integrate the "Subscription BC" (S-BC) with the "CreditCards BC" (CC-BC). The CC-BC is upstream and the S-BC is downstream. You could do it with a REST API in the CC-BC, or with a message queue.
But what I validate is the operation done with a CC, not the CC itself, i.e. validate "is this CC valid for subscription". And that validation is in S-BC.
If the SPA wants to retrieve "the CCs of a user that he/she can use for subscription", it is a functionality of the S-BC.
The client (SPA) should call the S-BC API to use that functionality, and the S-BC performs it by getting the CCs from the CC-BC and doing the validation.
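A minimal sketch of that flow, assuming a hypothetical REST endpoint on the CC-BC and an illustrative rule set (none of these names come from the question):

using System;
using System.Collections.Generic;
using System.Linq;
using System.Net.Http;
using System.Net.Http.Json;
using System.Threading.Tasks;

public record CreditCard(Guid Id, string Brand, DateTime Expiration);

public class SubscribableCardsQuery
{
    private readonly HttpClient _creditCardsApi; // client for the upstream CC-BC

    public SubscribableCardsQuery(HttpClient creditCardsApi) => _creditCardsApi = creditCardsApi;

    public async Task<IReadOnlyList<CreditCard>> GetForUser(Guid userId)
    {
        // 1. Get the user's cards from the upstream CC-BC.
        var cards = await _creditCardsApi
            .GetFromJsonAsync<List<CreditCard>>($"/users/{userId}/cards") ?? new();

        // 2. Apply the subscription rules, which live here in the S-BC:
        //    the S-BC validates the operation, not the card itself.
        return cards.Where(IsValidForSubscription).ToList();
    }

    private static bool IsValidForSubscription(CreditCard card) =>
        card.Brand == "VISA" && card.Expiration >= DateTime.UtcNow.AddMonths(12);
}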
In microservices and DDD the subscriptions service should have a credit cards endpoint if that is data that is relevant to the bounded context of subscriptions.
The creditcards endpoint might serve a slightly different model of data than you would find in the credit cards service itself, because in the context of subscriptions a credit card might look or behave differently. The subscriptions service would probably have a creditcards table or backing store to hold its own schema of credit cards, and would refer to some source of truth to keep that data in good shape (for example, messages about card events on a bus, or some other mechanism).
This enables three things. Firstly, the subscriptions service won't be completely knocked out if the cards service goes down for a while; it can refer to its own table and work anyway. Secondly, your domain code will be more focused, as it will only have to deal with the properties of credit cards that really matter to solving the current problem. Finally, your cards store can even have extra domain-specific properties that are computed and materialized on store.
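A hedged sketch of that synchronisation idea (the event shape, the store interface and the rule are all assumptions; any real bus and schema would differ):

using System;
using System.Threading.Tasks;

// Event published by the cards service (shape is an assumption).
public record CardUpdated(Guid CardId, Guid UserId, string Brand, DateTime Expiration);

// The subscriptions service's own schema for cards, including an extra
// domain-specific property computed and materialized on store.
public record SubscriptionCard(Guid CardId, Guid UserId, string Brand,
                               DateTime Expiration, bool IsSubscribable);

public interface ISubscriptionCardStore { Task Upsert(SubscriptionCard card); }

public class CardEventsHandler
{
    private readonly ISubscriptionCardStore _store; // the local creditcards table

    public CardEventsHandler(ISubscriptionCardStore store) => _store = store;

    // Called for each card event taken off the bus; keeps the local copy fresh,
    // so the service keeps working even when the cards service is down.
    public Task Handle(CardUpdated evt) =>
        _store.Upsert(new SubscriptionCard(
            evt.CardId, evt.UserId, evt.Brand, evt.Expiration,
            IsSubscribable: evt.Brand == "VISA" &&
                            evt.Expiration >= DateTime.UtcNow.AddMonths(12)));
}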
Obligatory Fowler link: Bounded Context Pattern
Even when the source of truth is the domain model, and ultimately you must have validation at the domain model level, validation can still be handled at both the domain model level (server side) and the UI (client side).

Client-side validation is a great convenience for users. It saves time they would otherwise spend waiting for a round trip to the server that might return validation errors. In business terms, even a few fractions of seconds multiplied hundreds of times each day adds up to a lot of time, expense, and frustration. Straightforward and immediate validation enables users to work more efficiently and produce better quality input and output.

Just as the view model and the domain model are different, view model validation and domain model validation might be similar but serve a different purpose. If you are concerned about DRY (the Don't Repeat Yourself principle), consider that in this case code reuse might also mean coupling, and in enterprise applications it is more important not to couple the server side to the client side than to follow the DRY principle. (From the .NET Microservices: Architecture for Containerized .NET Applications book.)
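To illustrate the point, the same expiry rule might deliberately exist twice: once as the authoritative check in the domain model, once as a throwaway convenience check in the client. A C# sketch (the names are made up):

using System;

public record CreditCard(string Brand, DateTime Expiration);

// Server side: the authoritative rule lives in the domain model and always runs.
public class Subscription
{
    public CreditCard Card { get; }

    public Subscription(CreditCard card)
    {
        if (card.Expiration < DateTime.UtcNow.AddMonths(12))
            throw new ArgumentException("Card must be valid for at least 12 more months.");
        Card = card;
    }
}

// Client side: a view-model check with the same rule exists purely for fast
// feedback. It can be bypassed or drift out of date, which is acceptable
// precisely because the domain check above remains the source of truth.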
I have been looking around SO and other online resources but can't seem to find how this is done. I was wondering how things like magnet links work on torrent websites. They automatically open an application and pass it the appropriate params. I was wondering how I could create one to send custom params to a program from the web?
Thanks
I wouldn't say this is an answer, but it is too long to fit in a comment.
Apps tend to register as authorities that can open a specific scheme. I don't know how it's done in desktop apps (especially because it varies by OS), but on Android you can catch schemes or base URLs with Intent Filters.
The way it works (and I'm pretty sure the functionality is cross-OS) is:
Your app tells the system it can "read" a specific scheme or base url (it could be magnet:// or even http://www.twitter.com/).
When you try to open a URI (Uniform resource identifier, a supergroup that can contain URLs), the system searches for any application that was registered for that kind of URI. I guess it runs from more specific and complete formats to the base. So for instance, this tweet: https://twitter.com/korcholis/status/491724155176222720 may be traced in this order:
https://twitter.com/korcholis/status/491724155176222720 Oh, no registrar? Moving on
https://twitter.com/korcholis/status Nothing yet? Ok
https://twitter.com/korcholis Nnnnnnope?
https://twitter.com Anybody? Ah, you, Totally random name for a Twitter Client know how to handle these links? Then it's yours
This random twitter client gets the full URI and does something accordingly.
As you can see, nobody else got a chance to handle https://, since another application caught the URI before them; in this case, the ones left out would be your browsers.
The system also defines, somehow, a default handler. This is the real reason browsers battle to be your default browser of choice: it just means they want to be the default applications that catch http://, https:// and probably some more.
The true wonder here is that, as long as there's an app that catches a scheme, you can set whichever one you want. For instance, it's common practice for apps from the same developer to register the same schemes, in case the developer wants to share tasks between them; this ensures the user has to use that group of apps together. So, one app can just offer data such as:
my-own-scheme://user/12
While another app is registered to get links that start with
my-own-scheme://
So, if you want to make your own schemes, that's OK, as long as they don't collide with others'. And if you want to read others' schemes, well, that's up to you to research. See? This is not a real answer, but I hope it removes almost all doubt.
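On Windows, for example, registering a handler for a custom scheme comes down to a few registry keys. A minimal C# sketch (the scheme and the executable path are placeholders):

using Microsoft.Win32;

class SchemeRegistration
{
    static void Main()
    {
        // Register my-own-scheme:// for the current user, so that opening such
        // a link launches app.exe with the full URI passed in as %1.
        using var key = Registry.CurrentUser.CreateSubKey(@"Software\Classes\my-own-scheme");
        key.SetValue("", "URL:my-own-scheme");
        key.SetValue("URL Protocol", ""); // marks this key as a URI scheme handler
        using var command = key.CreateSubKey(@"shell\open\command");
        command.SetValue("", "\"C:\\MyApp\\app.exe\" \"%1\"");
    }
}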
I am a bit confused about how XEP-0114 works. Does servicing a domain using a component mean that the server will no longer do anything on behalf of that domain, or does it just mean that the component will ALSO be allowed to service all users on that domain?
More specifically, is it possible to have multiple components servicing the same domain? For example, one component could handle MUC, another could store all messages in a history store, and a third could handle the roster, etc., all while the XMPP server continues handling users like it normally would, replying to presence, iq packets, etc. This means the components would have to be written so that their realms don't intersect with each other.
Answering #dhruvbird's second question in the comments above, if you have delegated a domain to your XEP-114 component, that component is responsible for everything about that domain, including all of the presence states of the users in that domain. That is possible, if tedious, but make sure you've read the new RFC 6121 recently.
Note: most servers have a component that implements all of this presence subscription logic - it's where the real IM business logic is implemented. You'll effectively be writing a replacement for that logic, so make sure there's no other way to solve your problem first.
Our team has built a web application using Ruby on Rails. It currently doesn't restrict users from making excessive login requests. We want to ignore a user's login requests for a while after several failed attempts, mainly to defend against automated robots.
Here are my questions:
How to write a program or script that can make excessive requests to our website? I need it because it will help me to test our web application.
How to restrict a user who made some unsuccessful login attempts within a period? Does Ruby on Rails have built-in solutions for identifying a requester and tracking whether she made any recent requests? If not, is there a general way (not specific to Ruby on Rails) to identify a requester and keep track of her activities? Can I identify a user by IP address, cookies, or some other information I can gather from her machine? We also hope to distinguish normal users (who make infrequent requests) from automated robots (which make requests frequently).
Thanks!
One trick I've seen is to include form fields on the login form that are made invisible to the user through CSS.
Automated systems/bots will still see these fields and may attempt to fill them with data. If you see any data in such a field, you immediately know it's not a legit user and can ignore the request.
This is not a complete security solution but one trick that you can add to the arsenal.
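The question is about Rails, but the trick is framework-agnostic; here is a minimal sketch in ASP.NET Core terms (the field name is arbitrary):

using Microsoft.AspNetCore.Http;

public static class HoneypotCheck
{
    // The login form would include e.g. <input name="website" style="display:none">.
    // Humans never see or fill it; naive bots often do.
    public static bool LooksLikeBot(IFormCollection form) =>
        !string.IsNullOrEmpty(form["website"]);
}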
In regards to #1, there are many automation tools out there that can simulate large-volume posting to a given URL. Depending on your platform, something as simple as wget might suffice, or something as complex (relatively speaking) as a script that asks a UserAgent to post a given request multiple times in succession (again, depending on the platform, this can be simple, and it also depends on your language of choice for task 1).
In regards to #2, consider first the lesser issue of someone just firing multiple attempts manually. Such instances usually share a session (that being the actual webserver session); you should be able to track failed logins based on these session IDs and force an early failure if the volume of failed attempts breaks some threshold. I don't know of any plugins or gems that do this specifically, but even if there is none, it should be simple enough to create a solution.
If session ID does not work, then a combination of IP and UserAgent is also a pretty safe means, although individuals who use a proxy may find themselves blocked unfairly by such a practice (whether that is an issue or not depends largely on your business needs).
If the attacker is malicious, you may need to look at using firewall rules to block their access, as they are likely going to: a) use a proxy (so IP rotation occurs), b) not use cookies during probing, and c) not play nice with UserAgent strings.
RoR provides means for testing your applications, as described in A Guide to Testing Rails Applications. A simple solution is to write a test containing a loop that sends 10 (or whatever value you define as excessive) login requests. The framework provides means for sending HTTP requests or faking them.
Not many people will abuse your login system, so just remembering the IP addresses of failed logins (for an hour, or whatever period you think is sufficient) would be enough and not too much data to store. Unless some hacker has access to a great many IP addresses... but in such situations you'd need more serious security measures anyway.
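As a rough sketch of that idea (in C# for illustration, though the question is about Rails; a real deployment would keep this state in a shared store rather than in process memory):

using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Linq;

public class LoginThrottle
{
    private readonly ConcurrentDictionary<string, List<DateTime>> _failures = new();
    private readonly int _maxFailures;
    private readonly TimeSpan _window;

    public LoginThrottle(int maxFailures = 5, TimeSpan? window = null)
    {
        _maxFailures = maxFailures;
        _window = window ?? TimeSpan.FromHours(1); // remember failures for an hour
    }

    public void RecordFailure(string ip) =>
        _failures.AddOrUpdate(ip,
            _ => new List<DateTime> { DateTime.UtcNow },
            (_, list) => { lock (list) list.Add(DateTime.UtcNow); return list; });

    // Ignore login requests from this IP once it crosses the threshold.
    public bool IsBlocked(string ip)
    {
        if (!_failures.TryGetValue(ip, out var list)) return false;
        lock (list) return list.Count(t => t > DateTime.UtcNow - _window) >= _maxFailures;
    }
}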
I'm in charge of an app that uses the internet to transfer data between sites, and some customers are being awkward about paying, so we need a mechanism that will allow us to cut off the service of non-payers. I'd like to protect against the admin people using firewalls to block off our checks, but conversely I'd like to give some allowance for our company web site disappearing for some reason and not being accessible.
The scheme I'm imagining is:
server makes twice daily check to web page using a URL like:
http://www.ourcompany.com/check.php?myID=GUID&Code=MyCode
This then returns a response that contains either nothing of interest, or the GUID and a value.
GUID=0
That zero indicates that the server should stop operation. To make it work again, the server will check every 5 minutes for the same info, until the returned value matches what it expects the code it passed in to be transformed into.
This scheme makes sense to me, but the question really is how to protect against blocking. Given we know we must have internet access, how long should we continue to operate without being able to get the response from our web server? Is something like 14 days and then we just shut it off anyway the best way?
The solution I used in the end was pretty much as I suggested. Yes, it is defeatable using tools outlined here, but it is better than nothing.
The app checks daily to access a web site that contains a control file encrypted using public key encryption. It decrypts in memory, and if it finds its GUID, then it must match a code. To disable the operation, the code is set to 0 (zero) which will always fail. When disabled, it checks every two minutes to allow rapid restoration. There is also a manual mechanism to generate a code that will work for a week in case of server trouble.
The code will allow up to 14 days without connecting to the server before it takes this as a deliberate attempt to block it. After 10 days, it shows an error message which asks them to contact support.
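In outline, the check loop could look something like this C# sketch (the names, the response format and the intervals are illustrative; the real implementation also decrypts the control file, which is omitted here):

using System;
using System.Net.Http;
using System.Threading.Tasks;

public class LicenseChecker
{
    private static readonly HttpClient Http = new();
    private DateTime _lastSuccessfulCheck = DateTime.UtcNow;

    public async Task<bool> ServiceAllowed(Guid myId, string myCode)
    {
        try
        {
            var body = await Http.GetStringAsync(
                $"http://www.ourcompany.com/check.php?myID={myId}&Code={myCode}");
            _lastSuccessfulCheck = DateTime.UtcNow;

            // "GUID=0" means: stop operating until a matching code reappears.
            return !body.Contains($"{myId}=0");
        }
        catch (HttpRequestException)
        {
            // Server unreachable: keep running inside the grace period and
            // treat longer silence as a deliberate block (14 days, as above).
            return DateTime.UtcNow - _lastSuccessfulCheck < TimeSpan.FromDays(14);
        }
    }
}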
This method is really easy to circumvent: just use a local DNS server to point www.ourcompany.com at the local machine, or use an HTTP proxy. Then the user can return whatever response they want to the program.
Assuming the user hasn't circumvented the check, how long you are to continue to operate without confirmation is a business decision and not a programming decision.
A user can use a tool such as OWASP WebScarab to change values on the fly and subvert your security model. You need to include something more difficult, such as requiring a secure channel, comparing public keys, and so on.
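For instance, public-key pinning on the client side makes the local-DNS and proxy tricks above much harder, because an impersonating server cannot present the pinned key. A C# sketch (the pinned value is a placeholder):

using System;
using System.Net.Http;

const string PinnedPublicKeyBase64 = "...base64 of our server's public key...";

var handler = new HttpClientHandler
{
    // Accept only certificates whose public key matches the pinned one.
    ServerCertificateCustomValidationCallback = (request, cert, chain, errors) =>
        cert != null && Convert.ToBase64String(cert.GetPublicKey()) == PinnedPublicKeyBase64
};
var http = new HttpClient(handler); // use this client for the license checks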