I have a working Java EE application that runs a multitude of threads. I want to move these threads off my application and simply have access to their data (Strings and ints).
How should I achieve this if I want to, say, call a method on my web application that accesses a thread's data on a different server/JVM?
Say you wanted to separate these layers (perhaps to put them on different machines, or for scalability): you would split the data layer from your presentation layer and run them in different JVMs. You then make the data layer provide a service to the presentation layer. How you do this depends on your preferred transport, e.g. a web service, RMI, JMS, TCP, or shared memory.
In any case, one JVM can only access the data of another process through services that the other JVM exposes. (Except in the case of shared memory, but that is not easy to get working unless your data model is very simple.)
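For example, a minimal RMI sketch of such a service (all names here are made up for illustration, not taken from your application):

    import java.rmi.Remote;
    import java.rmi.RemoteException;
    import java.rmi.registry.LocateRegistry;
    import java.rmi.registry.Registry;
    import java.rmi.server.UnicastRemoteObject;

    // Remote interface the worker JVM exposes to the web application.
    public interface WorkerData extends Remote {
        String getStatus() throws RemoteException;
        int getCompletedCount() throws RemoteException;
    }

    // In the JVM that owns the threads: publish an implementation of the service.
    public class WorkerDataImpl implements WorkerData {
        private volatile String status = "idle";
        private volatile int completedCount = 0;

        public String getStatus() { return status; }
        public int getCompletedCount() { return completedCount; }

        public static void main(String[] args) throws Exception {
            WorkerData stub = (WorkerData) UnicastRemoteObject.exportObject(new WorkerDataImpl(), 0);
            Registry registry = LocateRegistry.createRegistry(1099);
            registry.rebind("workerData", stub);   // the web application looks this name up
        }
    }

    // In the web application's JVM: look up the stub and call it like a local object.
    public class WorkerDataClient {
        public static void main(String[] args) throws Exception {
            Registry registry = LocateRegistry.getRegistry("worker-host", 1099);
            WorkerData data = (WorkerData) registry.lookup("workerData");
            System.out.println(data.getStatus() + " / " + data.getCompletedCount());
        }
    }

The web service or JMS variants look different on the wire, but the shape is the same: the worker JVM owns the data and exposes read methods, and the web application only ever sees what that service returns.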
accesses a thread's data
In Java almost all data is on the heap, which is shared by many threads. Very little of it is scoped completely to an individual thread. So the very idea of moving a thread and only "its data" does not make much sense.
And it's not just Java: pretty much any language with shared mutable state faces the same issue.
Of course your application can have a concept of thread-owned data, but that would be application logic, not something provided by Java itself or its Thread class.
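For illustration (class and field names are made up), what "thread-owned data" usually amounts to is an application-level convention like this:

    // The "thread's data" is really just fields on a heap object that the application
    // associates with one thread; any other thread holding the reference can read them.
    public class Worker implements Runnable {
        private volatile int progress;                 // lives on the shared heap
        private volatile String lastStatus = "starting";

        @Override
        public void run() {
            for (int i = 0; i <= 100; i++) {
                progress = i;                          // "owned" by this thread only by convention
                lastStatus = "step " + i;
            }
        }

        public int getProgress()      { return progress; }
        public String getLastStatus() { return lastStatus; }
    }

It is those getter-style views of the state, not the thread itself, that you would expose remotely.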
I am working on an embedded application running on the Linux kernel. I need to add an auxiliary application that will communicate with the main application by opening a socket between the two. Another option is to embed this auxiliary application into the main application as a new thread, but that would cost a lot of time to rearrange.
What are the advantages/disadvantages of using a standalone auxiliary application? What possible misbehavior or problems might we encounter? I would appreciate your hands-on and/or technical experience.
Thanks
Disadvantages of communication over a socket:
Slower than shared memory.
Additional coding effort.
A third application might hijack the socket.
Advantages of communication over socket:
Easily extended to running the two processes on separate systems.
The two applications can be written in entirely different languages and can use different bitness.
One application can be changed without touching the other if the protocol stays the same.
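To make the socket option concrete, here is a minimal loopback sketch (in Java for illustration; on an embedded Linux system you would more likely use C and perhaps a Unix domain socket, and the port number and protocol below are made up):

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.io.PrintWriter;
    import java.net.ServerSocket;
    import java.net.Socket;

    // Main application side: listen on a local port and answer one request per connection.
    public class MainApp {
        public static void main(String[] args) throws Exception {
            try (ServerSocket server = new ServerSocket(5000)) {
                while (true) {
                    try (Socket client = server.accept();
                         BufferedReader in = new BufferedReader(
                                 new InputStreamReader(client.getInputStream()));
                         PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
                        String request = in.readLine();
                        out.println("ack:" + request);     // trivial line-based protocol
                    }
                }
            }
        }
    }

    // Auxiliary application side: connect, send a request, print the reply.
    public class AuxApp {
        public static void main(String[] args) throws Exception {
            try (Socket socket = new Socket("127.0.0.1", 5000);
                 PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(socket.getInputStream()))) {
                out.println("status?");
                System.out.println(in.readLine());
            }
        }
    }

Note how either side could be rewritten, moved to another machine, or replaced by a different language without the other noticing, as long as the line-based protocol stays the same.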
I would like to implement a mechanism in my server application, but I'm not sure which OTL abstraction would be most appropriate.
My application collects data from various types of equipment.
Some of them use synchronous communication, generating a Delphi event in my server application (push-like).
Some of them use asynchronous communication, requiring my application to periodically request the latest available data (pull-like).
Because I want my server application to stay responsive while requesting the new data as frequently as possible, I want to put that "pull driver" in a separate thread that requests all the configured data points one by one.
I'd like my main thread to spawn this OTL object and then receive the result as a Delphi event in the main thread. This would emulate the "push-like" behaviour my server's main code is already written for.
Think of it as a thread you launch that periodically requests a value you want to monitor and only sends you an event when the value has changed.
Which OTL abstraction (high-level? low-level?) do you think would be appropriate for this behavior?
Thank you.
I'm not sure OTL gives you much benefit here at all, to be honest. I write a lot of classes for managing hardware devices and the class model is almost invariably a plain TThread descendent. OTL is nice for spinning off tasks and work packages, queues, parallel calculations, etc. In this case, however, you don't want to do any of that. What you do want is a class that models your device and encapsulates the functions it can perform.
This is going to be a single worker thread dedicated to pumping reads and writes to the device. It is going to be a long-lived thread that will persist as long as the class that encapsulates the device remains alive - TThread makes sense for this. Your thread is going to be a simple loop that runs continuously, polling all the required data and flushing any write requests.
The class will also serve as a data cache for the device parameters, and you will need some sort of synchronization primitive (mutex, critical section, etc.) to protect reads and writes to those fields through properties. So again it makes sense that these sync objects also live as class fields, and that your thread and the class that models the device live together in a single entity. If you want event notifications, these too wrap conveniently into the same model. One device, one thread, one class. It's a perfect job for a TThread descendent.
I have a lot of singleton implementations in my ASP.NET application, and I want to move the application to an IIS Web Garden environment for performance reasons.
Correct me if I'm wrong, but moving to an IIS Web Garden with n worker processes, there will be one singleton object created in each worker process, which makes it no longer a single object because n > 1.
Can I make all those singleton objects singletons again in an IIS Web Garden?
I don't believe you can (unless you can get those IIS workers to use objects in shared memory somehow).
This is a scope issue. Your singleton instance uses the process space as its scope, and as you've said, your implementation now spans multiple processes. By definition, on most operating systems, a singleton is tied to a certain process space, since it's tied to a single class instance or object.
Do you really need a singleton? That's a very important question to ask before using that pattern. As Wikipedia says, some consider it an anti-pattern (or code smell, etc.).
Examples of alternate designs that may work include...
You can have multiple objects synchronize against a central store or with each other.
Use object serialization if applicable.
Use a Windows Service and some form of IPC, e.g. System.Runtime.Remoting.Channels.Ipc.
I like option 3 for large websites. A companion Windows Service is very helpful in general for large websites. Lots of things like sending mail, batch jobs, etc. should already be decoupled from the frontend processing worker process. You can push the singleton server object into that process and use client objects in your IIS worker processes.
If your singleton class works with multiple objects that share state or just share initial state, then options 1 and 2 should work respectively.
Edit
From your comments it sounds like the first option in the form of a Distributed Cache should work for you.
There are lots of distributed cache implementations out there.
Microsoft AppFabric (formerly called Velocity) is their very recent move into this space.
Memcached ASP.Net Provider
NCache (MSDN article) - a custom ASP.NET cache provider with OutProc support. There should be other custom cache providers out there.
Roll your own distributed cache using Windows Services and IPC (option 3).
PS. Since you're specifically looking into chat, I'd definitely recommend researching Comet (Comet implementation for ASP.NET?, WebSync, etc.).
In a Message-Driven Bean, am I restricted by the same rules as Session Beans (EJB 3 or EJB 3.1), i.e. that the bean must not:
use the java.lang.reflect Java Reflection API to access information unavailable by way of the security rules of the Java runtime environment
read or write nonfinal static fields
use this to refer to the instance in a method parameter or result
access packages (and classes) that are otherwise made unavailable by the rules of Java programming language
define a class in a package
use the java.awt package to create a user interface
create or modify class loaders and security managers
redirect input, output, and error streams
obtain security policy information for a code source
access or modify the security configuration objects
create or manage threads
use thread synchronization primitives to synchronize access with other enterprise bean instances
stop the Java virtual machine
load a native library
listen on, accept connections on, or multicast from a network socket
change socket factories in java.net.Socket or java.net.ServerSocket, or change the stream handler factory of java.net.URL.
directly read or write a file descriptor
create, modify, or delete files in the filesystem
use the subclass and object substitution features of the Java serialization protocol
It is always a good idea not to create threads manually (ExecutorService seems fine in some cases though).
Actually, MDBs are very often used to address this limitation: instead of creating a separate thread, send a task object (put something like MyJob extends Serializable in an ObjectMessage) to a queue and let it be executed in the MDB thread pool, as sketched below. This approach is much more heavyweight, but it scales very well and you don't have to manage any threads manually. In this scenario JMS is just a fancy way of running jobs asynchronously.
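A rough sketch of that pattern (the queue name, the MyJob class, and the JNDI bindings are all made up; JMS 1.1 / EJB 3.1 style APIs):

    import java.io.Serializable;

    // A self-contained unit of work, sent through JMS instead of spawning a thread.
    public class MyJob implements Serializable {
        private final String payload;
        public MyJob(String payload) { this.payload = payload; }
        public void run() {
            // ... do the actual work here ...
        }
    }

    // Producer side (e.g. in a session bean): wrap the job in an ObjectMessage.
    @Resource(mappedName = "jms/jobQueue")
    private Queue jobQueue;
    @Resource(mappedName = "jms/connectionFactory")
    private ConnectionFactory connectionFactory;

    public void submit(MyJob job) throws JMSException {
        Connection connection = connectionFactory.createConnection();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            session.createProducer(jobQueue).send(session.createObjectMessage(job));
        } finally {
            connection.close();
        }
    }

    // Consumer side: the container's MDB pool supplies the threads.
    @MessageDriven(mappedName = "jms/jobQueue")   // destination binding is container-specific
    public class JobMdb implements MessageListener {
        public void onMessage(Message message) {
            try {
                MyJob job = (MyJob) ((ObjectMessage) message).getObject();
                job.run();                        // runs on a container-managed thread
            } catch (JMSException e) {
                throw new RuntimeException(e);
            }
        }
    }

The imports for the JMS and EJB types (javax.jms.*, javax.ejb.MessageDriven, javax.annotation.Resource) are omitted for brevity.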
These EJB restrictions are typically not hard restrictions. In fact, they're not caveats on making your EJBs work properly; they're more like advisories on how to make your EJBs portable across EJB containers.
From time to time, some very fussy EJB container providers (cough... WebSphere... cough) will actually enforce these restrictions through Java security policies, but I would say about half of those restrictions are routinely ignored (just using log4j in your MDB potentially violates about 30% of them).
Violating the other 70% probably indicates an architectural or design problem.
So, can you call System.exit() in an MDB? The answer is yes, but only once... :)
It sounds like, in your case, you need some of these restrictions to rein in potentially misbehaving plugins. I don't know whether MDBs are going to get you out of that problem. I suppose it depends on how much you trust the third-party developers, but rather than use the invocation-based models in EJB, I would install the components as JMX ModelMBeans. You could use the Java security model to limit what they can do, but I suppose that would defeat the purpose.
Perhaps using some run-time (or load-time) AOP byte code engineering, you could rewrite all requests for threads so that they are redirected to a per-component thread factory that you allocate and that limits the number of threads that can be created. You don't want to stop the plugins from doing whatever it is that they do; you just don't want them to take down the whole server when they crash/stall/misbehave.
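A sketch of what such a bounded, per-component thread factory might look like (the AOP interception that would route the plugin's calls through it is omitted, and all names are illustrative):

    import java.util.concurrent.ThreadFactory;
    import java.util.concurrent.atomic.AtomicInteger;

    // Hands out at most `limit` threads for one plugin; further requests fail fast
    // instead of exhausting the server.
    public class BoundedThreadFactory implements ThreadFactory {
        private final String componentName;
        private final int limit;
        private final AtomicInteger created = new AtomicInteger();

        public BoundedThreadFactory(String componentName, int limit) {
            this.componentName = componentName;
            this.limit = limit;
        }

        @Override
        public Thread newThread(Runnable task) {
            int n = created.incrementAndGet();
            if (n > limit) {
                created.decrementAndGet();
                throw new IllegalStateException(
                        componentName + " exceeded its thread budget of " + limit);
            }
            Thread t = new Thread(task, componentName + "-worker-" + n);
            t.setDaemon(true);   // a stalled plugin thread won't block JVM shutdown
            return t;
        }
    }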
Interesting problem.
I tend to use the following as my standard threading model, but maybe it isn't such a great model. What other suggestions do people have, or do they think this is set up well? This is not for a high-performance internet server, though performance is sometimes pretty critical; in those cases I use asynchronous networking methods and reuse buffers, but it is the same model.
There is a gui thread to run the gui.
There is a backend thread that handles anything that is computationally intensive (basically anything the gui can hand off that isn't pretty quick to run) and also is in charge of parsing and acting on incoming messages or gui actions.
There are one or more networking threads that take care of breaking an outgoing send into pieces if necessary, receiving packets from various sockets, and reassembling them into messages.
There is a static class that serves as an intermediary between the networking and backend threads. It acts as a post office. Messages that need to go out are posted to it by backend threads; networking threads check its "outbox" to find messages to send, and post any incoming messages (regardless of the socket they arrive from, though that information is posted with the incoming message) to a static "inbox" this class has, which the backend thread checks to find messages from other machines it should act on.
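In case it helps to see it concretely, here is a rough Java sketch of that post-office intermediary using thread-safe queues (all names are invented; my real implementation differs in the details):

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    // Shared mailbox between the backend thread(s) and the networking thread(s).
    public final class PostOffice {
        // Outbox: backend posts messages here; networking threads drain and send them.
        private static final BlockingQueue<OutgoingMessage> OUTBOX = new LinkedBlockingQueue<>();
        // Inbox: networking threads post reassembled messages here; the backend drains them.
        private static final BlockingQueue<IncomingMessage> INBOX = new LinkedBlockingQueue<>();

        private PostOffice() {}

        public static void postOutgoing(OutgoingMessage m) { OUTBOX.add(m); }
        public static OutgoingMessage takeOutgoing() throws InterruptedException { return OUTBOX.take(); }

        public static void postIncoming(IncomingMessage m) { INBOX.add(m); }
        public static IncomingMessage takeIncoming() throws InterruptedException { return INBOX.take(); }
    }

    // Incoming messages record which connection they arrived on.
    record IncomingMessage(String connectionId, byte[] payload) {}
    record OutgoingMessage(String destination, byte[] payload) {}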
The gui/backend threading interface tends to be more ad hoc; should it have its own post-office-like class, or some alternative intermediary?
Any comments/suggestions on this threading setup?
My primary concern is that you don't really want to lock yourself into the idea that there can only be one back-end thread. My normal model is to use the MVC at first, make sure all the data structures I use aren't inherently unsafe for a threaded environment, avoid singletons, and then profile like crazy, splitting things out as I go while trying to minimize the number of condition variables I'm leveraging. For long asynchronous tasks, I prefer to spawn a new process, particularly if it's something that might want to let the OS give it a differing priority.
This architecture sounds like the classic Model-View-Controller (MVC) architecture, which is usually considered a good thing.