How to set system properties dynamically via Java code in Tomcat 8 (not through the Tomcat configuration files) - multithreading

I have gone through the solution provided by Peter for setting system properties dynamically in a multithreaded context, in the question linked below:
System.setProperty used by a thread impacts other thread in communication to external network elements. How to resolve it?
But the problem is that Tomcat is not considering the system properties that I am setting. So how do I achieve this?
I have multiple threads in a management station connecting to different servers through RMI APIs and downloading the stub accordingly.
I am referring to a stub jar file with the same name at a different location for each server.
Note: the jar versions may differ at each location.
E.g.: MS --> serv1 --> stub location (http://15.xx.xx.xx:port/myfolder/myapp.jar)
MS --> serv2 --> stub location (http://15.yy.yy.yy:port/myfolder/myapp.jar)
I want to set the java.rmi.server.codebase system property for each of these locations dynamically and make it thread-local so that the settings do not override each other.
With the example provided in the link above, I hoped to achieve a solution for this problem.
But when testing that approach, I am unable to set these properties in Tomcat.
Tomcat is ignoring the system properties that I am setting. Tomcat does pick up JVM arguments set through catalina.bat or service.bat, but not properties set via System.setProperty, and I need them to be set dynamically.
Any help here will be great! Thanks.

The java.rmi.server.codebase property is set at the JVM which exports remote objects. Setting it in a client JVM accomplishes exactly nothing, unless that JVM exports remote objects too, i.e. callbacks. It doesn't seem likely that you will be dealing with multiple versions of your own application within the same JVM.
In short, your question doesn't make sense.

As EJP points out, (successfully) setting that property is unlikely to achieve what you want.
But there are a couple of other important misconceptions in your question.
Tomcat doesn't implement RMI. RMI is actually implemented by Java SE itself, so it is not up to Tomcat to pay attention to those property settings.
Typical Java services that use system properties for configuration purposes do it once during the lifetime of the JVM. Typically this happens when the relevant subsystem (e.g. RMI) initializes. The problem with setting system properties programmatically ("dynamically") is ensuring that they are set before the relevant initialization code uses them.
Going back to what you are trying to achieve, it seems that it is the same as or similar to this:
Java custom classloading and RMI
Nobody was able to help that person, and he ended up solving his problem another way. (I think he is saying that he handled serialVersionUID mismatches with customized readObject / writeObject methods ...)
But his Q&A offers one possible way to solve the problem. It is a bit complicated.
The RMI system allows you to provide your own classloader for RMI to use. You do this by implementing the RMIClassLoaderSpi API and then registering your providers as described in the RMIClassLoader javadoc. That's one part of the equation.
The problem is that the RMI classloader is global, but you want RMI on different threads to use different class loaders.
Solution: delegate!
You implement your custom RMI classloader to delegate to one of a number of different classloaders, depending on which versions of the remote APIs the context requires.
Since you have proposed using thread locals, you can declare a thread local variable for use by the custom RMI classloader, and have it use that variable's value to decide which classloader to delegate to.
CAVEAT ... I have not tried this!
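Since the original goal was a per-thread codebase, one variation of this idea is a provider that consults a thread-local codebase and otherwise delegates to the default provider. A minimal sketch (the class name and thread-local are hypothetical, registration would go through a META-INF/services/java.rmi.server.RMIClassLoaderSpi entry or the java.rmi.server.RMIClassLoaderSpi system property, and this is equally untested):

import java.net.MalformedURLException;
import java.rmi.server.RMIClassLoader;
import java.rmi.server.RMIClassLoaderSpi;

public class ThreadLocalCodebaseProvider extends RMIClassLoaderSpi {

    // Each thread sets the codebase it wants before making its RMI calls.
    public static final ThreadLocal<String> CODEBASE = new ThreadLocal<>();

    private final RMIClassLoaderSpi defaultProvider =
            RMIClassLoader.getDefaultProviderInstance();

    // Prefer the thread-local codebase, if one has been set, over the annotated one.
    private String effectiveCodebase(String codebase) {
        String override = CODEBASE.get();
        return (override != null) ? override : codebase;
    }

    @Override
    public Class<?> loadClass(String codebase, String name, ClassLoader defaultLoader)
            throws MalformedURLException, ClassNotFoundException {
        return defaultProvider.loadClass(effectiveCodebase(codebase), name, defaultLoader);
    }

    @Override
    public Class<?> loadProxyClass(String codebase, String[] interfaces, ClassLoader defaultLoader)
            throws MalformedURLException, ClassNotFoundException {
        return defaultProvider.loadProxyClass(effectiveCodebase(codebase), interfaces, defaultLoader);
    }

    @Override
    public ClassLoader getClassLoader(String codebase) throws MalformedURLException {
        return defaultProvider.getClassLoader(effectiveCodebase(codebase));
    }

    @Override
    public String getClassAnnotation(Class<?> cl) {
        return defaultProvider.getClassAnnotation(cl);
    }
}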

Related

alternative to PoolingOptions, any suggestions?

While migrating from cassandra-driver-core (3.4.0) to java-driver-core (4.14.1), I can't find an alternative to the PoolingOptions class in the new version. Has anybody implemented the same before, or does anyone have suggestions?
The Java driver is now driven by a configuration file - you can override the default configuration either with an additional config file or programmatically (see the manual). For pools there is a separate section called pool under datastax-java-driver.advanced.connection (source) where you can customize the size of the pool, the number of in-flight requests, etc.
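As a rough sketch of the programmatic route (the option names assume driver 4.x's DefaultDriverOption; the values and class name are just illustrative, and a reachable cluster is assumed):

import com.datastax.oss.driver.api.core.CqlSession;
import com.datastax.oss.driver.api.core.config.DefaultDriverOption;
import com.datastax.oss.driver.api.core.config.DriverConfigLoader;

public class PoolConfigExample {
    public static void main(String[] args) {
        // Override the pool-related options programmatically, roughly covering
        // what PoolingOptions used to configure in the 3.x driver.
        DriverConfigLoader loader = DriverConfigLoader.programmaticBuilder()
                .withInt(DefaultDriverOption.CONNECTION_POOL_LOCAL_SIZE, 4)
                .withInt(DefaultDriverOption.CONNECTION_POOL_REMOTE_SIZE, 2)
                .withInt(DefaultDriverOption.CONNECTION_MAX_REQUESTS, 1024)
                .build();

        // The session picks up the overridden pool settings at build time.
        try (CqlSession session = CqlSession.builder()
                .withConfigLoader(loader)
                .build()) {
            System.out.println("Connected as session: " + session.getName());
        }
    }
}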

How to update classes of functional objects (Callable) in Hazelcast without restarting

I found two options for adding classes to Hazelcast:
Option 1: User Code Deployment
clientUserCodeDeploymentConfig.addClass(cz.my.DemoTask.class);
The problem is that when I change the code in this task I get this exception:
java.lang.IllegalStateException: Class com.model.myclass is already in a local cache and conflicting byte code representation
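For reference, option 1 is typically wired up on the client roughly like this (a sketch; cz.my.DemoTask is the class from the question and is assumed to be on the client classpath, and the member side must also have user code deployment enabled):

import com.hazelcast.client.HazelcastClient;
import com.hazelcast.client.config.ClientConfig;
import com.hazelcast.client.config.ClientUserCodeDeploymentConfig;
import com.hazelcast.core.HazelcastInstance;

public class UserCodeDeploymentClient {
    public static void main(String[] args) {
        // Enable client user code deployment and register the task class.
        ClientConfig clientConfig = new ClientConfig();
        ClientUserCodeDeploymentConfig codeDeployment =
                clientConfig.getUserCodeDeploymentConfig();
        codeDeployment.setEnabled(true);
        codeDeployment.addClass(cz.my.DemoTask.class);

        HazelcastInstance client = HazelcastClient.newHazelcastClient(clientConfig);
        // ... submit the Callable through the client's IExecutorService here ...
        client.shutdown();
    }
}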
Option 2: Use some serialization like IdentifiedDataSerializable or Portable and add the jar to the Hazelcast client and server via configuration.
So even though this approach is versioned, when you need to change your Task you still need to update the jar and restart the server.
So are there any other options?
I found a similar issue, almost two years old, where it is mentioned:
For the functional objects, we don't have a solution in place but it
is on the road map.
So I am curious whether there has been any update on this.

How to use org.jboss.weld.proxy.dump?

I'm using CDI 1.2 with JBoss Weld 2.4.6, where one can configure the key org.jboss.weld.proxy.dump in weld.properties: "For debugging purposes, it’s possible to dump the generated bytecode of client proxies and enhanced subclasses to the filesystem." Can these classes somehow be used to speed up deployment, by loading them into the container instead of letting the container do the work again?
As the option says, those are purely for debugging purposes.
Weld dumps these proxies as an "on-the-side" action while creating and loading the proxies anyway, so no slowdown or speedup would result. In other words, by the time a proxy can be dumped, it is already "loaded to the container" and no duplication of work happens.
Also note that creating a proxy does not mean the actual contextual instance will be created as well - those are created lazily when you first try to use them.

How do I set the endpoint for a Resource component, e.g. a port number?

I need to have a REDHAWK component's ORB listen on a particular endpoint, specifically on a specified port. I am used to doing this by passing an endpoint parameter to ORB_init, but since REDHAWK calls ORB_init for me, I do not know how to specify a particular giop:tcp::port endpoint. Is there a way to specify ORB_init parameters as a component property? Most programs that call ORB_init pass command-line parameters given to the executable on to ORB_init. Can I add --ORBendpoint to the entry point in the SPD file?
Using a specific port for the ORB goes against the REDHAWK model of deployment-agnostic components. Furthermore, in the 2.1+ shared address space model, the ORB is shared between multiple components, rendering that level of control incompatible. A device or service, on the other hand, is explicitly deployed on a particular host machine, so using a specific ORB port is less fragile. As a general matter, REDHAWK attempts to abstract developers from the CORBA layer.
All of that notwithstanding, in principle it is possible to use specialized CORBA configurations. You cannot add arguments to the entry point in the SPD, but there are a few ways you could override the ORB port:
Edit the entry point to be a script that sets the environment variable OMNIORB_CONFIG and then execs the real executable. Be aware that regenerating the device/service/component must be done carefully to avoid breakage (such as changing the name of the executable or overwriting your script).
Add a simple property, initialized via the command line, called "-ORBendPoint"; I believe this would be passed along to CorbaInit(). This is analogous to an execparam in SCA 2.2.2.
Modify the main() function to call ossie::corba::CorbaInit() before the call to start_component()/start_device()/start_service(). Subsequent calls to CorbaInit() will be no-ops.

Sandbox/JRE limitations of CloudBees?

I'm going to start developing a Java web app that I believe I will be deploying to CloudBees, but am concerned about what JRE/sandbox restrictions may apply.
For instance, with Google App Engine, you're not allowed to execute any methods packaged inside java.io.File or java.net. You're not allowed to start threads without using their custom ThreadFactory. You're not allowed to use JNDI or JMX, or make calls to remote RDBMSes hosted on third-party machines. You're not allowed to use reflection. With GAE, there's a lot you're not allowed to do.
Do these same restrictions hold true for CloudBees? I'm guessing no, as I just read their entire developer docs and didn't run across anything of the sort.
However, what happens if my app tries to write to the local file system when deployed to their servers? They must have certain restrictions as to what can run on their machines, if for no other reason than security!
So I ask: what are these restrictions, or where can I find them listed in their docs? Thanks in advance!
Last I checked (a) there is no sandbox; (b) you can write to the local filesystem, but any files you write there may be discarded if the application is reprovisioned for any reason, i.e. use it for temporary files only. (An optional permanent file store service has been considered as a feature useful for certain applications.)
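Following the "temporary files only" advice, a small Java sketch (assuming Java 11+ for Files.writeString; the file name prefix and contents are arbitrary):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class ScratchFileExample {
    public static void main(String[] args) throws IOException {
        // Write scratch data to the JVM's temporary directory rather than to a
        // fixed local path, since local files may not survive reprovisioning.
        Path scratch = Files.createTempFile("upload-", ".tmp");
        Files.writeString(scratch, "intermediate data");
        System.out.println("Wrote scratch file to " + scratch);
    }
}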
