I need to understand how the subject is stored and propagated in WebLogic.
Once authenticated, where is the subject stored at the HTTP layer?
Internally, is it stored in the HttpSession?
Similarly, where is it stored at the EJB layer?
I have an application where a lot of principals are updated in the subject at the HTTP and EJB layers. In some corner cases, I'm getting a ConcurrentModificationException. Although the fix is simple (just synchronize), I need to understand the internals of where the subject is stored at the HTTP and EJB layers.
This refers to WebLogic implementation internals, which may even change from version to version, so I would keep away from it.
You will find official information only for the public security APIs.
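For what it's worth, here is a minimal sketch that stays within the standard JAAS API (so no WebLogic internals, which is exactly what you want to avoid): it reads the Subject from the current access-control context and serializes principal updates on the principal set, which is one way to avoid the ConcurrentModificationException described above. Note that this only helps if every reader and writer locks on the same set, and whether WebLogic actually propagates the authenticated Subject this way at the HTTP and EJB layers is the version-specific detail not to rely on.

    // Hedged sketch using only the standard JAAS API, not WebLogic internals.
    import java.security.AccessController;
    import java.security.Principal;
    import java.util.Set;
    import javax.security.auth.Subject;

    public class SubjectUtil {

        // The Subject associated with the current access-control context, if any.
        public static Subject currentSubject() {
            return Subject.getSubject(AccessController.getContext());
        }

        // Add a principal while holding a lock on the principal set, so a thread
        // iterating the same set (the usual source of ConcurrentModificationException)
        // cannot interleave with the update. All readers/writers must use the same lock.
        public static void addPrincipal(Subject subject, Principal principal) {
            Set<Principal> principals = subject.getPrincipals();
            synchronized (principals) {
                principals.add(principal);
            }
        }
    }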
We were starting a new project from scratch, and one of the developers asked why we should have any GET API requests at all, since POST APIs are better in every way (at least when using a mobile client).
Looking into this further, it does seem that POST can do everything GET can do, and do it better:
slightly more secure, as parameters are not in the URL
a larger payload limit than a GET request
So is there even a single reason to have a GET API? (This will only be used from a mobile client, so browser-specific caching doesn't affect us.)
Is there ever a need to have a GET request API if POST is better in every way?
In general, yes. In your specific circumstances -- maybe no.
GET and POST are method tokens.
The request method token is the primary source of request semantics
They are a form of metadata included in the HTTP request so that general-purpose components can be aware of the request semantics and contribute constructively.
POST is, in a sense, the wildcard method: it can mean anything. One consequence of this is that, because the method has unconstrained semantics, general-purpose components can't do anything useful other than pass the request along.
GET, however, has safe (and therefore idempotent) semantics. Because the request is idempotent, general-purpose components know that they can resend a GET request when the server returns no response (i.e. when messages are lost on an unreliable transport), and they know that representations of the resource can be pre-fetched, reducing perceived latency.
You dismissed caching as a concern earlier, but you may want to rethink that - the cache constraint is an important element that helped the web take over the world.
Reducing everything to POST reduces HTTP from an application for transferring documents over a network to dumb transport.
Using HTTP for transport isn't necessarily wrong: Simple Object Access Protocol (SOAP) works that way, as does gRPC. You still get authorization and conditional requests, features of HTTP that you might otherwise need to roll yourself.
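For example, a conditional GET lets a client (or any cache along the way) revalidate a representation cheaply, something a generic POST cannot offer. A small illustrative sketch using java.net.http; the URL and endpoint are hypothetical:

    // Issue a GET, remember its ETag, then revalidate with If-None-Match.
    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class ConditionalGet {
        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            URI item = URI.create("https://api.example.com/items/42"); // hypothetical endpoint

            // GET is safe and idempotent, so it may be retried or pre-fetched freely.
            HttpResponse<String> first = client.send(
                    HttpRequest.newBuilder(item).GET().build(),
                    HttpResponse.BodyHandlers.ofString());
            String etag = first.headers().firstValue("ETag").orElse(null);

            // Revalidation: if nothing changed, the server answers 304 with no body.
            if (etag != null) {
                HttpResponse<String> second = client.send(
                        HttpRequest.newBuilder(item).header("If-None-Match", etag).GET().build(),
                        HttpResponse.BodyHandlers.ofString());
                System.out.println(second.statusCode()); // 304 if not modified
            }
        }
    }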
You aren't doing REST at that point, but that's OK; not everybody has to.
That doesn’t mean that I think everyone should design their own systems according to the REST architectural style. REST is intended for long-lived network-based applications that span multiple organizations. If you don’t see a need for the constraints, then don’t use them. (Fielding, 2008)
I want to replicate web traffic from the production server to another instance of the application (a pre-production environment), so that I can verify that any improvements (e.g. performance-wise) that were introduced (and, of course, tested) remain improvements under production-like load.
(Obviously, something that is clearly a performance improvement during tests might turn out not to be one in production, for example when trading time against memory usage.)
There are tools like
teeproxy (https://serverfault.com/questions/729368/replicating-web-application-traffic-to-another-instance-for-testing, https://github.com/chrislusf/teeproxy/blob/master/teeproxy.go)
duplicator (Duplicate TCP traffic with a proxy, https://github.com/agnoster/duplicator)
but they don't seem to address the fact that the duplicated web traffic will get
different session cookies
different CSRF tokens (in my case this is covered by JSF view state ids)
Is there a tool that could do that, automatically?
I had a similar concern and opened an issue here.
Also, since it is an open project, I wanted to see if I could inject my own logic.
I am stuck there: adding headers to the Diffy proxy before multicasting.
I don't have enough reputation to comment, so I added an answer.
Twitter's Diffy is specialized for duplicating HTTP traffic.
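None of these tools rewrite per-environment state for you. Below is a small, purely illustrative sketch of the cookie part only: it re-issues a request against the shadow environment with the production session cookie stripped, so the pre-production server establishes its own session (the shadow host name and the JSESSIONID cookie name are assumptions). CSRF tokens and JSF view state still require a stateful rewrite of each request/response pair, which is exactly why the off-the-shelf duplicators struggle with them.

    // Hypothetical "tee" helper: mirror a GET to a shadow host without the
    // production session cookie. Not a complete proxy, just the cookie handling.
    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.Arrays;

    public class ShadowTee {
        private static final HttpClient CLIENT = HttpClient.newHttpClient();
        private static final String SHADOW_HOST = "https://preprod.example.com"; // assumption

        public static void mirrorGet(String path, String productionCookieHeader) {
            // Drop the production JSESSIONID so the shadow app issues its own session.
            String cookies = productionCookieHeader == null ? "" :
                    Arrays.stream(productionCookieHeader.split(";\\s*"))
                          .filter(c -> !c.startsWith("JSESSIONID="))
                          .reduce((a, b) -> a + "; " + b).orElse("");

            HttpRequest.Builder builder = HttpRequest.newBuilder(URI.create(SHADOW_HOST + path));
            if (!cookies.isEmpty()) {
                builder.header("Cookie", cookies);
            }
            // Fire and forget; shadow responses would be collected and diffed elsewhere.
            CLIENT.sendAsync(builder.GET().build(), HttpResponse.BodyHandlers.discarding());
        }
    }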
Despite having studied Domain-Driven Design for a long time now, there are still some basics that I simply can't figure out.
It seems that every time I try to design a rich domain layer, I still need a lot of Domain Services or a thick Application Layer, and I end up with a bunch of near-anemic domain entities with no real logic in them, apart from "GetTotalAmount" and the like. The key issue is that entities aren't aware of external stuff, and it's bad practice to inject anything into entities.
Let me give some examples:
1. A user signs up for a service. The user is persisted in the database, a file is generated and saved (needed for the user account), and a confirmation email is sent.
The example with the confirmation email has been discussed heavily in other threads, but with no real conclusion. Some suggest putting the logic in an application service that gets an EmailService and FileService injected from the infrastructure layer. But then I would have business logic outside of the domain, right? Others suggest creating a domain service that gets the infrastructure services injected - but in that case I would need to have the interfaces of the infrastructure services inside the domain layer (IEmailService and IFileService) which doesn't look too good either (because the domain layer cannot reference the infrastructure layer). And others suggest implementing Udi Dahan's Domain Events and then having the EmailService and FileService subscribe to those events. But that seems like a very loose implementation - and what happens if the services fail? Please let me know what you think is the right solution here.
2. A song is purchased from a digital music store. The shopping cart is emptied. The purchase is persisted. The payment service is called. An email confirmation is sent.
Ok, this might be related to the first example. The question here is, who is responsible for orchestrating this transaction? Of course I could put everything in the MVC controller with injected services. But if I want real DDD, all business logic should be in the domain. But which entity should have the "Purchase" method? Song.Purchase()? Order.Purchase()? OrderProcessor.Purchase() (domain service)? ShoppingCartService.Purchase() (application service)?
This is a case where I think it's very hard to put real business logic inside the domain entities. If it's not good practice to inject anything into the entities, how can they ever do anything other than check their own (and their aggregate's) state?
I hope these examples are clear enough to show the issues I'm dealing with.
Dimitry's answer points out some good things to look for. It's easy to end up in your scenario, shoveling data from the database up to the GUI through different layers.
I have been inspired by Jimmy Nilsson's simple advice: "Value objects, value objects and more value objects". People often tend to focus too much on nouns and model them as entities, and then naturally have trouble finding DDD behavior. Verbs are easier to associate with behavior, and a good approach is to make these verbs appear in your domain as value objects.
Some guidance I use for myself when trying to develop the domain (I must say that it takes time to construct a rich domain, often several refactoring iterations...):
Minimize properties (get/set)
Use value objects as much as you can
Expose as little as you can. Make your domain aggregates' methods intuitive.
Don't forget that your domain can be rich through validation. Only your domain knows how to conduct a purchase and what's required.
Your domain should also be responsible for validation when your entities transition from one state to another (workflow validations).
I'll give you some examples:
Here is an article I wrote on my blog regarding your issue with an anemic domain: http://magnusbackeus.wordpress.com/2011/05/31/preventing-anemic-domain-model-where-is-my-model-behaviour/
I can also really recommend Jimmy Bogard's blog article about entity validation and using the Validator pattern together with extension methods. It gives you the freedom to validate infrastructural things without making your domain dirty:
http://lostechies.com/jimmybogard/2007/10/24/entity-validation-with-visitors-and-extension-methods/
I use Udi's Domain Events with great success. You can also make them asynchronous if you believe your service can fail, and you can wrap them in a transaction (using the NServiceBus framework).
For your first example (just brainstorming now, to get our minds thinking more in terms of value objects):
Your MusicService.AddSubscriber(User newUser) application service gets a call from a presenter/controller/WCF with a new User.
The service already has IUserRepository and IMusicServiceRepository injected into its constructor.
The music service "Spotify" is loaded through IMusicServiceRepository.
The entity method musicService.SignUp(MusicServiceSubscriber newSubscriber) takes a value object, MusicServiceSubscriber.
This value object must take the User and other mandatory objects in its constructor (value objects are immutable). Here you can also place logic/behavior, like handling subscription IDs etc.
The SignUp method also fires a domain event, NewSubscriberAddedToMusicService.
It is caught by the event handler HandleNewSubscriberAddedToMusicServiceEvent, which has IFileService and IEmailService injected into its constructor. This handler's implementation is located in the application service layer, BUT the event is raised by the domain in MusicService.SignUp. This means the domain is in control. The event handler creates the file and sends the email.
You can persist the user through the event handler OR have the MusicService.AddSubscriber(...) method do this. Both will do it through IUserRepository; it's a matter of taste and perhaps of how it best reflects the actual domain.
Finally... I hope you can take something from the above. The most important thing is to start adding "verb" methods to entities and making them collaborate. You can also have objects in your domain that are not persisted; they are only there to mediate between several domain entities and can host algorithms etc.
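To make the brainstorm above concrete, here is a hedged, single-file Java sketch of the value object, the domain event and the application-layer handler. The names follow the example above; the tiny DomainEvents publisher is invented for illustration (in the spirit of Udi Dahan's pattern) and is not a specific framework.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.function.Consumer;

    class User { final String email; User(String email) { this.email = email; } }

    // Value object: immutable, takes all mandatory data in the constructor.
    final class MusicServiceSubscriber {
        final User user;
        final String subscriptionId;
        MusicServiceSubscriber(User user, String subscriptionId) {
            this.user = user;
            this.subscriptionId = subscriptionId;
        }
    }

    // Domain event raised by the aggregate.
    final class NewSubscriberAddedToMusicService {
        final MusicServiceSubscriber subscriber;
        NewSubscriberAddedToMusicService(MusicServiceSubscriber subscriber) { this.subscriber = subscriber; }
    }

    // Infrastructure contracts; the implementations live outside the domain.
    interface FileService { void createAccountFile(MusicServiceSubscriber s); }
    interface EmailService { void sendConfirmation(User u); }

    // Very small event publisher, invented for this sketch.
    class DomainEvents {
        static final List<Consumer<NewSubscriberAddedToMusicService>> handlers = new ArrayList<>();
        static void raise(NewSubscriberAddedToMusicService e) { handlers.forEach(h -> h.accept(e)); }
    }

    // Aggregate root: the domain decides when the event is raised.
    class MusicService {
        void signUp(MusicServiceSubscriber newSubscriber) {
            // ...domain validation and state transitions go here...
            DomainEvents.raise(new NewSubscriberAddedToMusicService(newSubscriber));
        }
    }

    // Application-layer handler: reacts to the domain event using injected services.
    class NewSubscriberAddedHandler {
        private final FileService files;
        private final EmailService emails;
        NewSubscriberAddedHandler(FileService files, EmailService emails) {
            this.files = files;
            this.emails = emails;
        }
        void register() {
            DomainEvents.handlers.add(e -> {
                files.createAccountFile(e.subscriber);
                emails.sendConfirmation(e.subscriber.user);
            });
        }
    }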
A user signs up for a service. The user is persisted in the database, a file is generated and saved (needed for the user account), and a confirmation email is sent.
You can apply the Dependency Inversion Principle here. Define a domain interface like this:
void ICanSendConfirmationEmail(EmailAddress address, ...)
or
void ICanNotifyUserOfSuccessfulRegistration(EmailAddress address, ...)
The interface can be used by other domain classes. Implement this interface in the infrastructure layer, using real SMTP classes. Inject this implementation at application startup. This way you state the business intent in domain code, and your domain logic has no direct reference to the SMTP infrastructure. The key here is the name of the interface; it should be based on the Ubiquitous Language.
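A hedged Java sketch of the same idea (the answer's I-prefixed names are .NET style; here the prefix is dropped, and EmailAddress, UserRegistration and SmtpRegistrationNotifier are invented purely for illustration):

    // Domain layer: the interface name expresses business intent in the Ubiquitous Language.
    interface CanNotifyUserOfSuccessfulRegistration {
        void notifyUserOfSuccessfulRegistration(EmailAddress address);
    }

    final class EmailAddress {
        final String value;
        EmailAddress(String value) { this.value = value; }
    }

    // Domain logic depends only on the domain-owned interface.
    class UserRegistration {
        private final CanNotifyUserOfSuccessfulRegistration notifier;
        UserRegistration(CanNotifyUserOfSuccessfulRegistration notifier) { this.notifier = notifier; }

        void complete(EmailAddress address) {
            // ...domain rules for completing a registration...
            notifier.notifyUserOfSuccessfulRegistration(address);
        }
    }

    // Infrastructure layer: SMTP-backed implementation, injected at application startup.
    class SmtpRegistrationNotifier implements CanNotifyUserOfSuccessfulRegistration {
        @Override
        public void notifyUserOfSuccessfulRegistration(EmailAddress address) {
            // call your real SMTP / JavaMail client here
        }
    }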
A song is purchased from a digital music store. The shopping cart is emptied. The purchase is persisted. The payment service is called. An email confirmation is sent. Ok, this might be related to the first example. The question here is, who is responsible for orchestrating this transaction?
Use OOP best practices to assign responsibilities (GRASP and SOLID). Unit testing and refactoring will give you design feedback. Orchestration itself can be part of a thin application layer. From the DDD layered architecture:
Application Layer: Defines the jobs the software is supposed to do and directs the expressive domain objects to work out problems. The tasks this layer is responsible for are meaningful to the business or necessary for interaction with the application layers of other systems.
This layer is kept thin. It does not contain business rules or knowledge, but only coordinates tasks and delegates work to collaborations of domain objects in the next layer down. It does not have state reflecting the business situation, but it can have state that reflects the progress of a task for the user or the program.
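As a purely illustrative sketch of such a thin application service for the purchase example (every type here is a stand-in invented for this answer):

    // Stand-in domain types; the real rules would live inside these objects.
    class Customer { }
    class Order { }
    class ShoppingCart {
        Order checkout(Customer customer) { /* domain rules decide what a valid order is */ return new Order(); }
        void empty() { }
    }
    interface OrderRepository { void save(Order order); }
    interface PaymentGateway { void charge(Order order); }
    interface CanSendPurchaseConfirmation { void sendFor(Order order); }

    // The application service only coordinates; it holds no business rules or state.
    class PurchaseSongService {
        private final OrderRepository orders;
        private final PaymentGateway payments;
        private final CanSendPurchaseConfirmation confirmation;

        PurchaseSongService(OrderRepository orders, PaymentGateway payments,
                            CanSendPurchaseConfirmation confirmation) {
            this.orders = orders;
            this.payments = payments;
            this.confirmation = confirmation;
        }

        void purchase(ShoppingCart cart, Customer customer) {
            Order order = cart.checkout(customer); // business logic stays in the domain objects
            payments.charge(order);
            orders.save(order);
            cart.empty();
            confirmation.sendFor(order);
        }
    }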
A big part of your questions relates to object-oriented design and responsibility assignment; you can think of the GRASP patterns here. You can also benefit from object-oriented design books; I recommend the following:
Applying UML and Patterns
I am a bit confused about how XEP-0114 works. Does servicing a domain using a component mean that the server will no longer do anything on behalf of that domain, or does it just mean that the component will ALSO be allowed to service all users on that domain?
More specifically, is it possible to have multiple components servicing the same domain? For example, one component could handle MUC, another could store all messages in a history store, and a third could handle the roster, etc., all while the XMPP server continues handling the user like it normally would, replying to presence, IQ packets, etc. This would mean the components have to be written so that their realms don't intersect with each other.
Answering @dhruvbird's second question in the comments above: if you have delegated a domain to your XEP-0114 component, that component is responsible for everything about that domain, including all of the presence states of the users in that domain. That is possible, if tedious, but make sure you've read the new RFC 6121 recently.
Note: most servers have a component that implements all of this presence subscription logic - it's where the real IM business logic is implemented. You'll effectively be writing a replacement for that logic, so make sure there's no other way to solve your problem first.
I'm developing a Java EE web app using JSF with a shopping-cart-style process, so I want to collect user input over a number of pages and then do something with it.
I was thinking of using an EJB 3 stateful session bean (SFSB) for this, but my research leads me to believe that an SFSB is not tied to a client's HTTP session, so I would have to manually keep track of it via the HttpSession. Some side questions here...
1) Why is it called a session bean? As far as I can see it has nothing to do with a session; I could achieve the same by storing a POJO in the session.
2) What's the point of being able to inject it? If all I'm going to be injecting is a new instance of this SFSB, then I might as well use a POJO.
So, back to the main issue: I see written all over that JSF is a presentation technology, so it should not be used for logic, but it seems the perfect option for collecting user input.
I can set a JSF session-scoped bean as a managed property of all of my request beans, which means it's injected into them; but unlike an SFSB, the JSF-managed session-scoped bean is tied to the HTTP session, and so the same instance is always injected as long as the HTTP session hasn't been invalidated.
So I have multiple tiers:
1st tier) JSF-managed request-scoped beans that deal with presentation, one per page.
2nd tier) A JSF-managed session-scoped bean that has values set in it by the request beans.
3rd tier) A stateless session EJB that executes logic on the data in the JSF session-scoped bean.
Why is this so bad?
An alternative option is to use an SFSB, but then I have to inject it in my initial request bean, store it in the HTTP session, and grab it back in each subsequent request bean; that just seems messy.
Or I could just store everything in the session, but this isn't ideal since it involves the use of literal keys and casting etc., which is error-prone... and messy!
Any thoughts appreciated; I feel like I'm fighting this technology rather than working with it.
Thanks
Why is it called a session bean? As far as I can see it has nothing to do with a session; I could achieve the same by storing a POJO in the session.
From the old J2EE 1.3 tutorial:
What Is a Session Bean?
A session bean represents a single client inside the J2EE server. To access an application that is deployed on the server, the client invokes the session bean's methods. The session bean performs work for its client, shielding the client from complexity by executing business tasks inside the server.
As its name suggests, a session bean is similar to an interactive session. A session bean is not shared--it may have just one client, in the same way that an interactive session may have just one user. Like an interactive session, a session bean is not persistent. (That is, its data is not saved to a database.) When the client terminates, its session bean appears to terminate and is no longer associated with the client.
So it does have to do with a "session", but a session does not necessarily mean an "HTTP session".
What's the point of being able to inject it? If all I'm going to be injecting is a new instance of this SFSB, then I might as well use a POJO.
Well, first of all, you don't inject an SFSB into a stateless component (injection into another SFSB would be OK); you have to do a lookup. Secondly, choosing between the HTTP session and an SFSB really depends on your application and your needs. From a purely theoretical point of view, the HTTP session should be used for presentation-logic state (e.g. where you are in your multi-page form) while the SFSB should be used for business-logic state. This is nicely explained in the "old" HttpSession vs. Stateful session beans thread on TSS, which also has a nice example of where an SFSB would make sense:
You may want to use a stateful session bean to track the state of a particular transaction. i.e some one buying a railway ticket.
The web Session tracks the state of where the user is in the html page flow. However, if the user then gained access to the system through a different channel e.g a wap phone, or through a call centre you would still want to know the state of the ticket buying transaction.
But SFSBs are not simple, and if you don't have needs justifying their use, my practical advice would be to stick with the HTTP session (especially if all this is new to you). Just in case, see:
Stateless and Stateful Enterprise Java Beans
Stateful EJBs in web application?
So, back to the main issue: I see written all over that JSF is a presentation technology, so it should not be used for logic, but it seems the perfect option for collecting user input.
That's not business logic, that's presentation logic.
So I have multiple tiers (...)
No. You probably have a client tier, a presentation tier, a business tier, and a data tier. What you're describing looks more like layers (and I'm not even sure of that). See:
Can anybody explain these words: Presentation Tier, Business Tier, Integration Tier in java EE with example?
Spring, Hibernate, Java EE in the 3 Tier architecture
Why is this so bad?
I don't know, I don't know what you're talking about :) But you should probably just gather the multi-page form information into a session-scoped bean and call a Stateless Session Bean (SLSB) at the end of the process.
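A minimal sketch of that approach (two classes; CartBean, CheckoutService and the navigation outcome are made-up names): the session-scoped JSF bean collects input across pages, and the transactional work is delegated to the SLSB at the end.

    import java.io.Serializable;
    import java.util.ArrayList;
    import java.util.List;
    import javax.ejb.EJB;
    import javax.ejb.Stateless;
    import javax.faces.bean.ManagedBean;
    import javax.faces.bean.SessionScoped;

    @Stateless
    public class CheckoutService {
        // Transactional business logic lives in the stateless EJB.
        public void placeOrder(List<String> items) {
            // persist the order, call the payment service, send the confirmation, ...
        }
    }

    @ManagedBean
    @SessionScoped
    public class CartBean implements Serializable {
        private final List<String> items = new ArrayList<>(); // collected across pages

        @EJB
        private CheckoutService checkoutService;

        public void addItem(String item) { items.add(item); } // called from page beans

        public String checkout() {
            checkoutService.placeOrder(items); // delegate to the SLSB at the end of the process
            items.clear();
            return "orderConfirmed";           // JSF navigation outcome
        }
    }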
1) Why is it called a session bean? As far as I can see it has nothing to do with a session; I could achieve the same by storing a POJO in the session.
Correction: an EJB session has nothing to do with an HTTP session. In EJB, roughly speaking, the client is the servlet container and the server is the EJB container (both running in a web/application server). In HTTP, the client is the web browser and the server is the web/application server.
Does it make more sense now?
2) What's the point of being able to inject it? If all I'm going to be injecting is a new instance of this SFSB, then I might as well use a POJO.
Use EJBs for transactional business tasks. Use a session-scoped managed bean to store HTTP-session-specific data. Neither of the two is a POJO, by the way; they are just JavaBeans.
Why shouldn't I use a JSF SessionScoped bean for logic?
If you aren't taking advantage of transactional business tasks and the abstraction EJB provides around them, then just doing it in a simple JSF managed bean is indeed not a bad alternative. That's also the normal approach in basic JSF applications. The actions, however, usually take place in a request-scoped managed bean into which the session-scoped one is injected as a @ManagedProperty.
But since you're already using EJB, I'd ask whether there wasn't a specific reason for using it. If that's a requirement handed down from above, then I'd just stick to it. At least, your session confusion should now be cleared up.
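For completeness, a hedged sketch of the @ManagedProperty pattern just mentioned, reusing the hypothetical CartBean from the earlier sketch (SelectSongBean and the outcome string are illustrative):

    import javax.faces.bean.ManagedBean;
    import javax.faces.bean.ManagedProperty;
    import javax.faces.bean.RequestScoped;

    @ManagedBean
    @RequestScoped
    public class SelectSongBean {
        // The same session-scoped instance is injected on every request of this HTTP session.
        @ManagedProperty(value = "#{cartBean}")
        private CartBean cart;

        private String selectedSong;

        public String addToCart() {
            cart.addItem(selectedSong); // request-scoped action, session-scoped state
            return "nextPage";          // JSF navigation outcome
        }

        // Setter is required for @ManagedProperty injection.
        public void setCart(CartBean cart) { this.cart = cart; }
        public String getSelectedSong() { return selectedSong; }
        public void setSelectedSong(String selectedSong) { this.selectedSong = selectedSong; }
    }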
Just in case you're not aware of this, and as a small contribution to the answers you have: you could indeed annotate an SFSB with @SessionScoped, and CDI will handle the life cycle of the EJB. This would tie the EJB to the HTTP session that CDI manages. Just letting you know, because in your question you say:
but my research leads me to believe that an SFSB is not tied to a client's HTTP session, so I would have to manually keep track of it via the HttpSession...
Also, you could do what you suggest, but it depends on your requirements: until CDI beans get declarative transaction support, extended persistence contexts, etc., you'll find yourself writing a lot of boilerplate code that makes your bean less clean. Of course, you can also use frameworks like Seam (now moving to DeltaSpike) to enhance certain capabilities of your beans through their extensions.
So I'd say yes, at first glance you may feel it's not necessary to use a stateful EJB, but certain use cases may be better solved with one. If a user adds a product to his cart, and another user adds the same product later, but there is only one unit in stock, who gets it? The one who checks out faster, or the one who added it first? What if you want to access your entity manager to persist a cart in case the user decides to close his browser, or what if you have transactions that span multiple pages and you want every step synchronized to the DB? (Keeping a transaction open for so long is not advisable, but maybe there could be a scenario where this is needed.) You could use an SLSB, but sometimes it's better and cleaner to use an SFSB.
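A minimal sketch of the CDI-managed stateful EJB described in this answer (the names are illustrative): annotating the SFSB with @SessionScoped ties its life cycle to the HTTP session, and it can then simply be @Inject-ed into other beans; CDI destroys it when the session ends.

    import java.io.Serializable;
    import java.util.ArrayList;
    import java.util.List;
    import javax.ejb.Stateful;
    import javax.enterprise.context.SessionScoped;

    @Stateful
    @SessionScoped
    public class SessionCart implements Serializable {
        private final List<String> items = new ArrayList<>();

        // As an EJB, each method runs in a transaction by default, so each step of a
        // multi-page process could be synchronized to the database if that is required.
        public void addItem(String item) { items.add(item); }

        public List<String> getItems() { return items; }
    }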