I found two options for adding classes to Hazelcast:
1. User Code Deployment:
clientUserCodeDeploymentConfig.addClass(cz.my.DemoTask.class);
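A minimal sketch of the client-side wiring this implies (assuming Hazelcast 3.x; user code deployment must also be enabled on the member side):

    import com.hazelcast.client.HazelcastClient;
    import com.hazelcast.client.config.ClientConfig;
    import com.hazelcast.client.config.ClientUserCodeDeploymentConfig;
    import com.hazelcast.core.HazelcastInstance;

    ClientConfig clientConfig = new ClientConfig();
    ClientUserCodeDeploymentConfig clientUserCodeDeploymentConfig =
            clientConfig.getUserCodeDeploymentConfig();
    clientUserCodeDeploymentConfig.setEnabled(true);
    clientUserCodeDeploymentConfig.addClass(cz.my.DemoTask.class);
    // connect with the class registered for deployment to the members
    HazelcastInstance client = HazelcastClient.newHazelcastClient(clientConfig);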
The problem is that when I change the code in this task, I get an exception:
java.lang.IllegalStateException: Class com.model.myclass is already in a local cache and conflicting byte code representation
2. Use serialization such as IdentifiedDataSerializable or Portable, and add the jar to both the client and the server Hazelcast via configuration.
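A minimal sketch of what that looks like (class names and IDs are illustrative; the two classes go in separate files, and the jar containing them has to be on both the client and the member classpath):

    import java.io.IOException;
    import com.hazelcast.nio.ObjectDataInput;
    import com.hazelcast.nio.ObjectDataOutput;
    import com.hazelcast.nio.serialization.DataSerializableFactory;
    import com.hazelcast.nio.serialization.IdentifiedDataSerializable;

    public class DemoTask implements Runnable, IdentifiedDataSerializable {
        private String input; // example field

        @Override
        public void run() {
            // task logic goes here
        }

        @Override
        public int getFactoryId() { return DemoFactory.FACTORY_ID; }

        @Override
        public int getId() { return DemoFactory.DEMO_TASK_ID; }

        @Override
        public void writeData(ObjectDataOutput out) throws IOException {
            out.writeUTF(input);
        }

        @Override
        public void readData(ObjectDataInput in) throws IOException {
            input = in.readUTF();
        }
    }

    // separate file; registered in the serialization config on client and member
    public class DemoFactory implements DataSerializableFactory {
        public static final int FACTORY_ID = 1;
        public static final int DEMO_TASK_ID = 1;

        @Override
        public IdentifiedDataSerializable create(int typeId) {
            return typeId == DEMO_TASK_ID ? new DemoTask() : null;
        }
    }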
So even though this is versioned, when you need to change your Task you have to update the jar and restart the server.
So are there any other options?
I found a similar issue, almost two years old, where it is mentioned:
For the functional objects, we don't have a solution in place but it is on the road map.
So I am curious whether there is any update on this.
I was performing yet another execution of local Scala code against the remote Spark cluster on Databricks and got this.
Exception in thread "main" com.databricks.service.DependencyCheckWarning: The java class <something> may not be present on the remote cluster. It can be found in <something>/target/scala-2.11/classes. To resolve this, package the classes in <something>/target/scala-2.11/classes into a jar file and then call sc.addJar() on the package jar. You can disable this check by setting the SQL conf spark.databricks.service.client.checkDeps=false.
I have tried reimporting, cleaning and recompiling the sbt project to no avail.
Anyone know how to deal with this?
Apparently the documentation has that covered:
spark.sparkContext.addJar("./target/scala-2.11/hello-world_2.11-1.0.jar")
I guess it makes sense that everything you write as code external to Spark is considered a dependency. So a simple sbt publishLocal, and then pointing to the jar path in the above command, will sort you out.
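In other words, the workflow described above amounts to (paths as in the error message; adjust to your project):

    sbt publishLocal
    // then, in the Spark application, before triggering any actions:
    spark.sparkContext.addJar("./target/scala-2.11/hello-world_2.11-1.0.jar")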
My main confusion came from the fact that I didn't need to do this for a very long while, until at some point this mechanism kicked in. Rather inconsistent behavior, I'd say.
A personal observation after working with this setup: it seems you only need to publish the jar once. I have changed my code multiple times, and the changes are reflected even though I have not been continuously publishing jars for each change. That makes the whole task a one-off. Still confusing, though.
I'm using CDI 1.2 with JBoss Weld 2.4.6, where one can configure in weld.properties the key org.jboss.weld.proxy.dump: "For debugging purposes, it's possible to dump the generated bytecode of client proxies and enhanced subclasses to the filesystem." Can these classes somehow be used to speed up deployment, by loading them into the container instead of letting the container do the work again?
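For reference, the setting I mean looks like this (the dump directory is just an example):

    # weld.properties
    org.jboss.weld.proxy.dump=/tmp/weld-proxies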
As the option says, those are purely for debugging purposes.
Weld dumps these proxies as an on-the-side action while creating and loading them anyway, so no slowdown or speedup would happen. In other words, by the time a proxy can be dumped, it is already "loaded into the container", and no duplication of work happens.
Also note that the creation of a proxy does not mean the actual contextual instance is created as well; those are created lazily when you first try to use them.
I have gone through the solution provided by Peter for setting system properties dynamically in a multithreaded environment, at the link below:
System.setProperty used by a thread impacts other thread in communication to external network elements. How to resolve it?
But the problem is that Tomcat is not picking up the system properties I am setting. How can I achieve this?
I have multiple threads in a management station connecting to different servers through RMI APIs and downloading the stubs accordingly.
I am referring to a jar file with the same name as the stub, at a different location for each server.
Note: jar versions may differ between locations.
E.g.: MS --> serv1 --> stub location (http://15.xx.xx.xx:port/myfolder/myapp.jar)
MS --> serv2 --> stub location (http://15.yy.yy.yy:port/myfolder/myapp.jar)
I want to set the java.rmi.server.codebase system property for each of these locations dynamically, and make it thread-local so that the settings do not override each other.
With the example provided in the link above, I hoped to solve this problem.
But when testing that approach, I was unable to set these properties in Tomcat.
Tomcat is ignoring the system properties I set programmatically: it picks up JVM arguments passed through catalina.bat or service.bat, but not properties set at runtime, which is what I need, since they must be set dynamically.
Any help here would be great! Thanks.
The java.rmi.server.codebase property is set at the JVM which exports remote objects. Setting it in a client JVM accomplishes exactly nothing, unless that JVM exports remote objects too, i.e. callbacks. It doesn't seem likely that you will be dealing with multiple versions of your own application within the same JVM.
In short, your question doesn't make sense.
As EJP points out, (successfully) setting that property is unlikely to achieve what you want.
But there are a couple of other important misconceptions in your question.
Tomcat doesn't implement RMI; RMI is implemented by Java SE itself. Therefore, it is not up to Tomcat to pay attention to those property settings.
Typical Java subsystems that use system properties for configuration read them once during the lifetime of the JVM, typically when the relevant subsystem (e.g. RMI) initializes. The problem with setting system properties programmatically ("dynamically") is ensuring that they are set before the relevant initialization code reads them.
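For example (illustrative only), a call like this is only effective if it runs before anything in the JVM causes the property to be read:

    // too late if the RMI runtime has already read the property
    System.setProperty("java.rmi.server.codebase",
            "http://15.xx.xx.xx:port/myfolder/myapp.jar");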
Going back to what you are trying to achieve, it seems that it is the same as or similar to this:
Java custom classloading and RMI
Nobody was able to help that person, and he ended up solving his problem another way. (I think he is saying that he handled serialVersionUID mismatches with customized readObject / writeObject methods ...)
But his Q&A offers one possible way to solve the problem. It is a bit complicated.
The RMI system allows you to provide your own classloader for RMI to use. You do this by implementing the RMIClassLoaderSpi API and then registering your provider as described in the RMIClassLoader javadoc. That's one part of the equation.
The problem is that the RMI classloader is global, but you want RMI on different threads to use different class loaders.
Solution: delegate!
You implement your custom RMI classloader to delegate to one of a number of different classloaders, depending on which versions of the remote APIs the context requires.
Since you have proposed using thread locals, you can declare a thread local variable for use by the custom RMI classloader, and have it use that variable's value to decide which classloader to delegate to.
CAVEAT ... I have not tried this!
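To make the shape of this concrete, here is an untested sketch (all names are mine, not from an existing library). The class would be registered in a META-INF/services/java.rmi.server.RMIClassLoaderSpi file:

    import java.net.MalformedURLException;
    import java.rmi.server.RMIClassLoader;
    import java.rmi.server.RMIClassLoaderSpi;

    public class ThreadLocalRmiClassLoaderSpi extends RMIClassLoaderSpi {

        // each thread sets the codebase it wants before making RMI calls
        public static final ThreadLocal<String> CODEBASE = new ThreadLocal<>();

        // delegate the actual loading to the default provider
        private final RMIClassLoaderSpi defaultSpi =
                RMIClassLoader.getDefaultProviderInstance();

        private static String effective(String codebase) {
            String perThread = CODEBASE.get();
            return perThread != null ? perThread : codebase;
        }

        @Override
        public Class<?> loadClass(String codebase, String name,
                ClassLoader defaultLoader)
                throws MalformedURLException, ClassNotFoundException {
            return defaultSpi.loadClass(effective(codebase), name, defaultLoader);
        }

        @Override
        public Class<?> loadProxyClass(String codebase, String[] interfaces,
                ClassLoader defaultLoader)
                throws MalformedURLException, ClassNotFoundException {
            return defaultSpi.loadProxyClass(effective(codebase), interfaces,
                    defaultLoader);
        }

        @Override
        public ClassLoader getClassLoader(String codebase)
                throws MalformedURLException {
            return defaultSpi.getClassLoader(effective(codebase));
        }

        @Override
        public String getClassAnnotation(Class<?> cl) {
            return defaultSpi.getClassAnnotation(cl);
        }
    }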
I'm trying to debug why remote caching doesn't work for my use case.
I wanted to inspect the cache entries related to Bazel, but realized that I don't really know, and can't find out, which map names are used.
I found one, "hazelcast-build-cache", which seems to hold some of the build and test actions. I've set up a listener to see what gets put there, but I can't see any of the successful actions.
For example, I run a test and want to verify that its success gets cached remotely. I have no idea how to do this. I would like either to learn how to find this out, or to know which map names I can inspect in Hazelcast.
Hazelcast Management Center can show you all the maps/caches that you create or that get created in the cluster, how data is distributed, etc. You can also make use of the various types of listeners within Hazelcast: EntryListener, MapListener, etc.
Take a look at the documentation:
http://docs.hazelcast.org/docs/3.9/manual/html-single/index.html#management-center
http://docs.hazelcast.org/docs/3.9/manual/html-single/index.html#distributed-events
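If you want to inspect things programmatically as well, a rough sketch (assuming Hazelcast 3.9, and using the map name from the question):

    import com.hazelcast.core.DistributedObject;
    import com.hazelcast.core.EntryAdapter;
    import com.hazelcast.core.EntryEvent;
    import com.hazelcast.core.Hazelcast;
    import com.hazelcast.core.HazelcastInstance;
    import com.hazelcast.core.IMap;

    public class CacheInspector {
        public static void main(String[] args) {
            // join the cluster (a client instance would work the same way)
            HazelcastInstance hz = Hazelcast.newHazelcastInstance();

            // list every distributed object (including maps) known to the cluster
            for (DistributedObject object : hz.getDistributedObjects()) {
                System.out.println(object.getServiceName() + " / " + object.getName());
            }

            // watch the map seen in the question; 'true' includes values in events
            IMap<Object, Object> map = hz.getMap("hazelcast-build-cache");
            map.addEntryListener(new EntryAdapter<Object, Object>() {
                @Override
                public void entryAdded(EntryEvent<Object, Object> event) {
                    System.out.println("added: " + event.getKey());
                }
                @Override
                public void entryUpdated(EntryEvent<Object, Object> event) {
                    System.out.println("updated: " + event.getKey());
                }
            }, true);
        }
    }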
For certain jobs, we need to do some cleanup or preparation before the job is run again on another node due to failover. This is especially important if the previous run generated a partial result in the database, which needs to be cleaned up before the job runs again.
I found @GridComputeJobBeforeFailover, but the default GridCompute.run()/call() API doesn't seem to support it. It would be very useful to add a GridComputeJobFailoverAware interface, similar to GridComputeJobMasterLeaveAware: when a closure is an instance of GridComputeJobFailoverAware, use a ComputeJobImpl with @GridComputeJobBeforeFailover.
But for now, is it true that my only option is to implement my own Task/Job if I want something to run before a failover?
Yes, for now you need to implement your own GridComputeTask/GridComputeJob classes. However, your suggestion about supporting this annotation for basic runnables and callables is very valid. I have filed a Jira ticket for it, so it will be added to the product.
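For reference, a rough, untested sketch of what such a job might look like (based on the GridGain compute API; the cleanup helper is hypothetical and application-specific):

    import org.gridgain.grid.GridException;
    import org.gridgain.grid.compute.GridComputeJobAdapter;
    import org.gridgain.grid.compute.GridComputeJobBeforeFailover;

    public class CleanupAwareJob extends GridComputeJobAdapter {

        @Override
        public Object execute() throws GridException {
            // real work, possibly writing partial results to the database
            return null;
        }

        // invoked before this job is failed over to another node
        @GridComputeJobBeforeFailover
        public void onBeforeFailover() {
            cleanupPartialResults(); // hypothetical helper
        }

        private void cleanupPartialResults() {
            // remove any partial rows written by this run
        }
    }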