Enabling a state server moves session state to the state server instead of keeping it in-process.
Does the same happen to application state? Or does it stay in-proc (and thus unshared)?
Suppose I have three states A, B, and C, and the state machine is initialized when a user fires some API call. If the transition is going to happen from state A -> B -> C, then at state B I want to send a response back to the user; the user will then call the same API again, appending some payload, and the machine has to move on to state C.
Is it possible to achieve this business use case using Spring State Machine? If so, how can we do that? If it is not possible with Spring State Machine, which framework can I use to achieve this use case?
Yes, you can achieve this use case,
but you need to persist the information somewhere, so that you know what information to feed in and which state you should reset the state machine to.
There are advanced controls in the state machine that let you reset it to a particular state. I'm assuming that you are using a state machine factory to get a state machine.
stateMachine.getStateMachineAccessor().doWithAllRegions(access -> access
        .resetStateMachine(new DefaultStateMachineContext<>(state, null, null, null)));
With the above code you get a state machine that is already in the required state; you can then pass in the context store with the required information.
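For illustration, here is a minimal sketch of that pattern, assuming the A -> B -> C flow described above; the enum names, the restoreToState helper and the persisted-state lookup are placeholders of mine, not part of Spring State Machine itself. After state B has replied to the caller, the next API call rebuilds a machine from the factory, resets it to B, and then fires the event that drives it to C.

import org.springframework.statemachine.StateMachine;
import org.springframework.statemachine.config.StateMachineFactory;
import org.springframework.statemachine.support.DefaultStateMachineContext;

public class FlowService {

    // Placeholder states and events for the A -> B -> C flow described above.
    public enum States { A, B, C }
    public enum Events { GO_TO_B, GO_TO_C }

    private final StateMachineFactory<States, Events> factory;

    public FlowService(StateMachineFactory<States, Events> factory) {
        this.factory = factory;
    }

    // Rebuild a machine and reset it to the state that was persisted when the
    // previous API call finished (for example B).
    public StateMachine<States, Events> restoreToState(States persistedState) {
        StateMachine<States, Events> machine = factory.getStateMachine();
        machine.stop();
        machine.getStateMachineAccessor().doWithAllRegions(access ->
                access.resetStateMachine(
                        new DefaultStateMachineContext<>(persistedState, null, null, null)));
        machine.start();
        return machine;
    }

    // Second API call: resume at B and drive the machine to C with the new payload.
    public void continueFromB(Object payload) {
        StateMachine<States, Events> machine = restoreToState(States.B);
        machine.getExtendedState().getVariables().put("payload", payload);
        machine.sendEvent(Events.GO_TO_C);
    }
}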
We have a cluster of instances running as worker roles on Azure. They contain services that rely on state. Users connect via Socket.IO.
We need sticky sessions so that a client can stay with a single instance and the service that is running on it. If the user is re-routed to another instance, the client breaks.
My questions:
The LB distribution mode sourceIP will allow sticky sessions, but only as long as the balanced instance set does not change (instance added/removed). When an instance is probed unhealthy, does the set change? And what does that mean for current users? Will they be re-routed?
What is a "new user" anyway? SourceIP uses a 2-tuple hash. Can I rely on clients sitting on the same system not being marked "new" when probes flag an instance unhealthy? Will they still be routed to that instance?
And lastly: is there any sure and easy way to achieve sticky sessions?
Short story:
Any flow already established will persist unless, of course, the destination itself dies; this is about new flows only. The ability to pin all flows from a specific source to a specific VM irrespective of the health status of the backend pool is not supported in Azure Load Balancer, and that is not what sourceIP distribution mode provides. If that is what you need, you can either use another tool in the arsenal or solve this by having each node reach out to wherever state is actually kept.
Read long story below for details.
Long story:
When you create an Azure Load Balancer function, the SDN stack uses the protocol header values to compute a hash. The distribution mode choice governs which protocol header values are used to compute the hash:
5-tuple: source IP, source port, destination IP, destination port, protocol
2-tuple: source IP, destination IP
2-tuple is what is referred to as distribution mode sourceIP. Since the destination IP is always that of the VIP when this is evaluated, only the source IP controls the hash. And because port numbers aren't considered, any session from a given source IP address will result in the same hash.
The hash determines where a new session lands. The session will persist on that destination until the flow terminates.
If the pool changes because a node becomes healthy or sick, the hash will change and impact new flows only.
Any session already in progress stays wherever it is. You just cannot guarantee that a given source will always end up at the same destination over time with a changing pool.
Here's an example:
Let's assume you have a scenario with VMs A, B, and C in a pool behind an LB configuration, and you turn on sourceIP.
The first session from client X comes in and lands on B. As long as the pool stays the same (all nodes remain healthy and sick nodes aren't healed), any additional sessions from X will also land on B.
When the pool changes because a node changes health status from healthy to sick or vice versa, a new session from client X may land on a node other than B. Any sessions already on B remain on B.
Unless of course B itself is dead, in which case those established sessions will time out.
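To make the hashing behaviour above concrete, here is a toy sketch; it is illustrative only and is not Azure's actual hash function or pool logic. It shows why ports play no role in sourceIP mode (every flow from the same client hashes the same) and why a change in the healthy set can send new flows from that client somewhere else.

import java.util.List;
import java.util.Objects;

public class ToyDistribution {

    // sourceIP mode: only source IP and destination IP (the VIP) feed the hash,
    // so every new flow from the same client maps the same way.
    static int twoTupleHash(String srcIp, String vip) {
        return Objects.hash(srcIp, vip);
    }

    // Default mode: IPs, ports and protocol feed the hash, so each new
    // connection (new source port) can land on a different VM.
    static int fiveTupleHash(String srcIp, int srcPort, String vip, int dstPort, String proto) {
        return Objects.hash(srcIp, srcPort, vip, dstPort, proto);
    }

    // A new flow is mapped onto whichever VMs are currently healthy.
    static String pickBackend(int hash, List<String> healthyVms) {
        return healthyVms.get(Math.floorMod(hash, healthyVms.size()));
    }

    public static void main(String[] args) {
        List<String> allHealthy = List.of("A", "B", "C");
        List<String> bMarkedSick = List.of("A", "C");

        int h = twoTupleHash("203.0.113.7", "vip");
        System.out.println("pool {A,B,C}: new flow lands on " + pickBackend(h, allHealthy));
        // Same client, but the healthy set changed -> new flows may land elsewhere;
        // flows already established before the change stay where they are.
        System.out.println("pool {A,C}:   new flow lands on " + pickBackend(h, bMarkedSick));
    }
}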
When subscriptions are created and maintained by Crossbar, where are they stored? I took a quick look through the source code and think they are all stored in local process memory. Is that right? What is the horizontal scale-out model if this is stored in memory? Are connections expected to be stuck to a given node? What if the connection breaks and is re-established, or if a server node goes offline? Do those connections lose all their state (subscription information)?
The scale-out model which Crossbar.io will implement (upcoming in 2015) is described here. On a Crossbar.io node, subscription state is transiently stored in process memory (of each router process) and synchronized across router processes. A given client is always connected to a single node. When it loses its connection, its subscriptions are gone. When a node goes down, the client will automatically reconnect - to another node in a cluster. The client will need to re-establish its subscriptions at the new node. Two clients connected to two different nodes (and the same realm), where both nodes are part of one cluster, will communicate transparently.
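One practical consequence on the client side: since the router's subscription state dies with the connection, the client has to replay its subscriptions after every (re)connect. A minimal sketch of that pattern, using a hypothetical WampSession interface as a stand-in for whatever WAMP client library you use (none of these names are Crossbar.io APIs):

import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;

public class ResubscribingClient {

    // Hypothetical stand-in for a WAMP client session; not a real Crossbar.io API.
    public interface WampSession {
        void subscribe(String topic, Consumer<Object> handler);
    }

    // The client, not the router, is the durable owner of "what am I subscribed to".
    private final List<String> topics = new CopyOnWriteArrayList<>();
    private final Consumer<Object> handler;

    public ResubscribingClient(Consumer<Object> handler) {
        this.handler = handler;
    }

    public void addTopic(String topic) {
        topics.add(topic);
    }

    // Call this from the library's "connection (re)established" callback.
    // Router-side subscription state was lost with the old connection, so
    // every topic has to be subscribed again on the new session.
    public void onConnected(WampSession session) {
        for (String topic : topics) {
            session.subscribe(topic, handler);
        }
    }
}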
I have a state machine working as a workflow service, having receive/send-reply activities as triggers for transitions.
Before sending the replies back, I have to do some work.
Problems come when exceptions happen in the process before sending the reply. In such a case, if I don't handle the exception, the whole workflow is suspended; in any case, I shouldn't move to the next state if the request wasn't properly handled.
Would it be enough to wrap the whole state machine in a TryCatch? Will the state machine recover from the last persisted state (I'm using SQL persistence)?
Are there other solutions?
Remark: workflows are hosted in IIS.
Thanks
Sometimes when IIS restarts the app pool, it will start a new instance of my application before the previous instance has shut down completely. This causes me a lot of problems, so I wonder what I can do about it.
The course of events goes something like this (spanning about 20 seconds):
Application is running, let's call this instance A.
A restart is initiated.
A new instance is started, let's call this B (logged by Application_Start).
An incoming request is processed by instance B; this invalidates all data A has cached.
A timer on instance A is triggered; it assumes its cache is still valid and writes something invalid into the persistent storage.
Instance A is shut down (logged by Application_End).
Preferably I would like to disable the above behavior completely; IIS should only allow one instance. If that's not possible, can I detect in my code whether another instance is already running and then wait for it to quit inside Application_Start? If neither is possible, what is the best way to work around this?
Disable overlapped recycling:
"In an overlapped recycling scenario,
the process targeted for a recycle
continues to process all remaining
requests while a replacement worker
process is created simultaneously. The
new process is started before the old
worker process stops, and requests are
then directed to the new process. This
design prevents delays in service,
since the old process continues to
accept requests until the new process
has initialized successfully, and is
instructed to shut down only after the
new process is ready to handle
requests."
http://msdn.microsoft.com/en-us/library/ms525803(v=vs.90).aspx
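As a sketch of how to apply that on IIS 7+ ("MyAppPool" is a placeholder; the relevant attribute is disallowOverlappingRotation on the app pool's recycling settings), you can either use appcmd or edit applicationHost.config directly:

appcmd set apppool /apppool.name:"MyAppPool" /recycling.disallowOverlappingRotation:true

<applicationPools>
  <add name="MyAppPool">
    <recycling disallowOverlappingRotation="true" />
  </add>
</applicationPools>

In IIS Manager this corresponds to the app pool's advanced setting "Disable Overlapped Recycle". The trade-off follows from the quoted text: with overlap disabled, requests arriving during a recycle may be delayed until the new worker process is ready.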