I'm looking into using the Hazelcast distributed executor functionality, however the
HazelcastClient.getExecutorService(String name) method confuses me.
If I run one 'server' instance like so:
Config config = new Config();
ExecutorConfig executorConfig = new ExecutorConfig("name");
config.addExecutorConfig(executorConfig);
HazelcastInstance instance = Hazelcast.newHazelcastInstance(config);
This creates a Hazelcast node with an ExecutorConfig named 'name'.
Considering this is the only ExecutorConfig for this instance, I would expect to only be able to submit Callables to this node on the executor service named 'name'.
However, if I run the following (in a different process or on a different machine):
HazelcastInstance client = HazelcastClient.newHazelcastClient(new ClientConfig());
CallableTest test = new CallableTest(); //callable that does sleep and sysout
IExecutorService executorService = client.getExecutorService("wrong_name");
executorService.submit(test);
the callable job gets submitted to the 'server' process and gets executed.
This seems odd to me; I would expect to be able to manage the constraints of its executors.
The fact that this job gets executed even though there is no executor service configured with the name 'wrong_name' seems strange.
This leaves me wondering what the executor name is used for and how I can properly configure these executors.
Hazelcast will automatically create an executor with the given name. There is no restriction on the name; you don't need to configure it.
I can imagine that it feels a bit strange.
In short, make sure that you configure the name(s) correctly.
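For illustration, here is a minimal sketch of what "configuring the name correctly" looks like on both sides; the executor name "my-executor" and the pool size are made-up values for this example, and CallableTest is the callable from the question:
import com.hazelcast.client.HazelcastClient;
import com.hazelcast.client.config.ClientConfig;
import com.hazelcast.config.Config;
import com.hazelcast.config.ExecutorConfig;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IExecutorService;

public class ExecutorNameExample {
    public static void main(String[] args) {
        // Member side: configure the executor "my-executor" explicitly,
        // e.g. to control its pool size.
        Config config = new Config();
        config.addExecutorConfig(new ExecutorConfig("my-executor").setPoolSize(4));
        HazelcastInstance member = Hazelcast.newHazelcastInstance(config);

        // Client side: ask for exactly the same name. A different name would
        // still work, but it would get a new executor with default settings
        // rather than the one configured above.
        HazelcastInstance client = HazelcastClient.newHazelcastClient(new ClientConfig());
        IExecutorService executor = client.getExecutorService("my-executor");
        executor.submit(new CallableTest()); // CallableTest from the question; must be Serializable

        client.shutdown();
        member.shutdown();
    }
}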
I have a 3-node application that includes an embedded Hazelcast instance (so, 3 instances). These are used in the application for session sharing across the nodes. In the configuration, I added a local entry listener to write an audit record to the database.
Unfortunately, this caused something of a race condition because the listener ended up being added to the internal map of listeners three times, and I'd end up getting a "connection has already been closed" error. I ended up switching to an interceptor, which did NOT seem to get added multiple times.
I don't consider the interceptor approach to be a great solution. However, addLocalEntryListener() returns a random UUID, so it's not as simple as checking whether a known key already exists in the listener map.
Ideally, I'd like something like this:
@Bean
public HazelcastInstance instance() {
    HazelcastInstance instance = Hazelcast.getInstance();
    IMap<String, ?> sessionsMap = instance.getMap("spring:session:sessions");
    if (/* something something to ensure the listener wasn't already there */) {
        sessionsMap.addLocalEntryListener(new MyLocalListener());
    }
    return instance;
}
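As far as I know, IMap does not let you query which listeners are already registered, so one workaround is to track the registration id yourself and only register once per process. A rough sketch along those lines, assuming Hazelcast 4.x (UUID registration ids); the AtomicReference guard, the class name, and the bean wiring are illustrative only, while MyLocalListener and the map name come from the snippet above:
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.map.IMap;
import java.util.UUID;
import java.util.concurrent.atomic.AtomicReference;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class HazelcastSessionAuditConfig {

    // Remembers the registration id so this JVM adds the listener at most once.
    private final AtomicReference<UUID> listenerId = new AtomicReference<>();

    @Bean
    public HazelcastInstance instance() {
        HazelcastInstance instance = Hazelcast.newHazelcastInstance();
        IMap<String, Object> sessionsMap = instance.getMap("spring:session:sessions");
        if (listenerId.get() == null) {
            UUID id = sessionsMap.addLocalEntryListener(new MyLocalListener());
            if (!listenerId.compareAndSet(null, id)) {
                // Another thread registered first; undo the duplicate registration.
                sessionsMap.removeEntryListener(id);
            }
        }
        return instance;
    }
}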
I'm trying to upgrade to Hazelcast 4.0 in our Spring Boot 2.2.1 application.
We use the @EnableHazelcastHttpSession annotation, which pulls in HazelcastHttpSessionConfiguration, which in turn pulls in HazelcastIndexedSessionRepository from the spring-session-hazelcast jar.
However, this class won't compile because it imports Hazelcast's IMap which has moved to a different package in Hz 4.0.
Is there any way to fix this so that Spring Session works with Hazelcast 4?
I just copied the HazelcastIndexedSessionRepository into my own source code, changed the import from com.hazelcast.core.IMap to com.hazelcast.map.IMap, and swapped the sessionListenerId from String to UUID. If I keep it in the same package, then it loads my class instead of the one in the jar, and everything compiles and works fine.
Edit: We no longer get the SessionExpiredEvent, so something's not quite right, but manual testing shows us that our sessions do time out and force the user to log in again, even across multiple servers.
I found the cause of the error. The session repository is created by HazelcastHttpSessionConfiguration; that class checks which version of Hazelcast is on the classpath, and when hazelcast4 is true it instantiates Hazelcast4IndexedSessionRepository, which doesn't use com.hazelcast.core.IMap, so you don't get the class-not-found exception.
Code of class HazelcastHttpSessionConfiguration
@Bean
public FindByIndexNameSessionRepository<?> sessionRepository() {
    return (FindByIndexNameSessionRepository) (hazelcast4 ? this.createHazelcast4IndexedSessionRepository() : this.createHazelcastIndexedSessionRepository());
}
Remove the usage of HazelcastIndexedSessionRepository and replace it with Hazelcast4IndexedSessionRepository, or remove that code entirely and let Spring auto-configuration do the job via HazelcastHttpSessionConfiguration.
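As a rough sketch of that second option (assuming spring-session-hazelcast with Hazelcast 4 support is on the classpath; the class and bean names are made up), letting HazelcastHttpSessionConfiguration pick the repository could look like this:
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.session.hazelcast.config.annotation.web.http.EnableHazelcastHttpSession;

@Configuration
@EnableHazelcastHttpSession
public class SessionConfig {

    // HazelcastHttpSessionConfiguration detects Hazelcast 4 on the classpath and
    // wires a Hazelcast4IndexedSessionRepository around this instance, so we never
    // reference HazelcastIndexedSessionRepository (and its old IMap import) ourselves.
    @Bean
    public HazelcastInstance hazelcastInstance() {
        return Hazelcast.newHazelcastInstance();
    }
}
A real setup would usually also configure the session map (serializer, indexes) on that instance, which is omitted here.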
I have a question about Groovy scripting in Jmeter.
I have created a function in Groovy that connects to Redis DB, and the function works as expected.
Afterwards, when I try to get data from "main", it complains that it doesn't recognize the Redis get.
My purpose is to create the first function that connects to Redis, the second one that creates Redis key, and the third one to get data using the Redis key.
in the "main" I call connect and it works, but the third one does not work. Is it because the connection is closed?
Without seeing your code it is hard to guess what is wrong. According to the error message, the jedis variable is not defined in the scope where you are trying to access it. You can try defining it globally, like:
import redis.clients.jedis.Jedis // assumes the Jedis client jar is on JMeter's classpath

def jedis = null // make "jedis" variable available to all methods

void connect() {
    jedis = new Jedis(vars.get('Redis_IP'), vars.get('Redis_Port') as int)
}

void somethingElse() {
    if (jedis != null) {
        log.info(jedis.ping())
    }
}
A couple of points to consider:
Don't inline JMeter Variables or Functions into the script body; it makes the compilation-caching feature impossible, so the overall performance of your code will be lower. Also, variables might resolve into something that causes a script interpretation failure or unexpected behaviour. Either use the "Parameters" section or go for the code-based equivalents as in my demo above.
It is recommended to use JMeter built-in features (or plugins) where possible, as even a well-behaved Groovy script doesn't perform as fast as "normal" Java code. Check whether the Redis Data Set matches your use case; if it does, simply install it using the JMeter Plugins Manager and start using it instead of struggling with Groovy.
See JMeter’s Redis Data Set - An Introduction article for step-by-step instructions on the plugin installation and usage
I am using sbt-native-packager with the experimental Java Server archetype. I am trying to identify a conventional way to access my log files, and I'm wondering if anyone knows of a common approach here. Since I am using the Java Server archetype, I am getting a symlink /var/log/$app -> install_dir/$app/log, but it feels a little dirty and less portable to just have log4j open /var/log/$app/error.log directly.
[Update]
I ended up creating an object with run time path information:
import java.io.File
import scala.util.matching.Regex

object MakaraPaths {
  // Paths are resolved relative to the directory that contains the application jar
  def getLogPath = new File(getJarPath, "../logs").getPath
  def getConfigPath = new File(getJarPath, "../conf").getPath

  def getJarPath = {
    // Strip the jar file name, keeping only the directory part of the code-source location
    val regex = new Regex("(/.+/)*")
    val jarPath = Makara.getClass.getProtectionDomain.getCodeSource.getLocation.getPath
    (regex findAllIn jarPath).mkString("")
  }
}
In my main method, I established a system property based on the new MakaraPaths object:
System.setProperty("logPath", MakaraPaths.getLogPath)
I also used this for my config file:
val config = ConfigFactory.parseFile(new File(MakaraPaths.getConfigPath, "application.conf"))
Ultimately, to load the log file, I used a System Property lookup:
<RollingFile name="fileAppender" fileName="${sys:logPath}/server.log" filePattern="${sys:logPath}/server_%d{yyMMdd}.log">
This gets me most of the way where I needed to be. It's not completely portable, but it does technically support my use case. (Deploying to Ubuntu)
You could use a relative path in the log4j configuration: just write logs to logs/filename.log.
During installation the symlink install_dir/$app/logs -> /var/log/$app will be created, and all logs will be written to /var/log/$app/filename.log.
I am trying to write a method to create a database and run migrations on it, given the connection string.
I need the multiple connections because I record an audit log in a separate database.
I get the connection strings out of app.config using code like
ConfigurationManager.ConnectionStrings["Master"].ConnectionString;
The code works with the first connection string defined in my app.config but not the others, which leads me to think that it is somehow picking up the connection string from app.config in a way I don't understand.
My code to create the database if it does not exist is
private static Context MyCreateContext(string ConnectionString)
{
    // put the connection string where the factory method can get it
    AppDomain.CurrentDomain.SetData("ConnectionString", ConnectionString);
    var factory = new ContextFactory();
    // I know I need this line - but I can't see how what follows actually uses it
    Database.SetInitializer(new MigrateDatabaseToLatestVersion<Context, DataLayer.Migrations.Configuration>());
    var context = factory.Create();
    context.Database.CreateIfNotExists();
    return context;
}
The code in the Migrations.Configuration is
public sealed class Configuration : DbMigrationsConfiguration<DataLayer.Context>
{
    public Configuration()
    {
        AutomaticMigrationsEnabled = false;
    }
}
The context factory code is
public class ContextFactory : IDbContextFactory<Context>
{
    public Context Create()
    {
        var s = (string)AppDomain.CurrentDomain.GetData("ConnectionString");
        return new Context(s);
    }
}
Thus I am setting the connection string before creating the context.
Where can I be going wrong, given that the connection strings are all the same except for the database name, and the migration code runs with one connection string but doesn't run with the others?
I wonder if my problem has to do with understanding how Database.SetInitializer actually works. I am guessing it involves reflection or generics. How do I make the call to SetInitializer tie to my actual context?
I have tried the following code but the migrations do not run
private static Context MyCreateContext(string ConnectionString)
{
    Database.SetInitializer(new MigrateDatabaseToLatestVersion<Context, DataLayer.Migrations.Configuration>());
    var context = new Context(ConnectionString);
    context.Database.CreateIfNotExists();
    return context;
}
This question appears to be related
UPDATE:
I can get the migrations working if I refer to the connection string using
public MyContext() : base("MyContextConnection") - which points to <connectionStrings> in the config
I was also able to get migrations working on using different instances of the context, if I created a ContextFactory class and passed the connection to it by referencing a global. ( See my answer to the related question link )
Now I am wondering why it has to be so hard.
I'm not sure exactly what problems you're facing, but let me try.
The easiest way to provide the connection - and be sure it works:
1) Use your DbContext class name - and define a connection in the app.config (or web.config). That's the easiest; you should have a connection there whose name matches your context class name.
2) If you pass it into the DbContext via the constructor - then be consistent and use that one. I'd also suggest reading the connection from the config - and again naming it the same as your context class (use the connection 'name', not the actual connection string).
3) If none is present - EF/CF makes a 'default' one - based on your provider and your context's class name - which usually isn't what you want.
You shouldn't customize with initializers for that reason - initializers should be agnostic and serve another purpose. Set up the connection in the .config - or directly on your DbContext.
Also check this Entity Framework Code First - How do I tell my app to NOW use the production database once development is complete instead of creating a local db?
Always check where your data goes - before doing anything.
For how the initializer actually works, check this other post of mine where I made a thorough example:
How to create initializer to create and migrate mysql database?
Notes: (from the comments)
Connection shouldn't be very dynamic - config is the right place for it to be, unless you have a good reason.
Constructor should work fine too.
CreateDbIfNotExists doesn't work well together with the 'migration' initializer. You can just use the MigrateDatabaseToLatestVersion initializer; don't 'mix' them.
Or - put something like public MyContext() : base("MyContextConnection") - which points to <connectionStrings> in the config
To point to connection - just use its 'name' and put that into constructor.
Or use something like ConfigurationManager.ConnectionStrings["CommentsContext"].ConnectionString
Regarding using 'multiple databases' with migrations (local and remote from one app) - not exactly related, but see this link: Migration not working as I wish... Asp.net EntityFramework
Update:
(further discussion here - Is adding a class that inherits from something a violation of the solid principles if it changes the behavior of code?)
It is getting interesting here. I did manage to reproduce the problems you're facing. Here is a short breakdown of what I think is happening:
First, this worked 'happily':
Database.SetInitializer(new CreateAndMigrateDatabaseInitializer<MyContext, MyProject.Migrations.Configuration>());

for (var flip = false; true; flip = !flip)
{
    using (var db = new MyContext(flip ? "Name=MyContext" : "Name=OtherContext"))
    {
        // insert some records...
        db.SaveChanges();
    }
}
(I used custom initializer from my other post, which controls migration/creation 'manually')
That worked fine w/o an Initializer. Once I switched that on, I ran into some curious problems.
I deleted the Db-s (two, one for each connection). I expected it to either not work, or to create one db on the first pass and the other on the next (like it did w/o migrations, with just the 'Create' initializer).
What happened, to my surprise, is that it actually created both databases on the first pass??
Then, being a curious person:), I put breakpoints on the MyContext ctor, and debugged through the migrator/initializer. Again empty/no db-s etc.
It created the first instance on my call within the flip. Then, on the first access to the 'model', it invoked the initializer. The migrator took over (having had no db-s). During migrator.Update(), it actually constructs the MyContext (I'm guessing via the generic param in Configuration) - and calls the 'default' empty ctor. That had the 'other connection/name' by default - and so it created the other Db as well.
So I think this explains what you're experiencing, and why you had to create the 'Factory' to support the Context creation. That seems to be the only way, along with setting some AppDomain-wide 'connection string' (which you did well actually) that isn't 'overridden' by the default ctor call.
The solution I see is: just run everything through the factory and 'flip' connections in there (no need for a static connection, as long as your factory is a singleton).
You can supply a configuration in the MigrateDatabaseToLatestVersion constructor.
If you set the initializer in the DbContext you can also pass a 'true' to use the current connection string.