How to put things in a HashMap which is in another HashMap - hashmap

I have something like this at the top of my code:
protected HashMap<String, HashMap<String, Player>> prisonsPlayers = new HashMap<String, HashMap<String, Player>>();
And I need to insert data into it. Since I am new to Java and have never worked with a HashMap before, this is a bit of a problem. I tried something like this:
protected HashMap<String, Player> loggedPlayers = new HashMap<String, Player>();
prisonsPlayers.put(player.getName(), loggedPlayers);
Is this right, or how can I do it another way?

Yep, you are doing it right. This is multi-layered HashMap behavior.
Add some values to loggedPlayers:
loggedPlayers.put(player1.getName(), player1);
loggedPlayers.put(player2.getName(), player2);
And then add:
prisonsPlayers.put(player.getName(), loggedPlayers);
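A complete, minimal sketch of the whole flow (the Player class, the "alcatraz" key, and the names here are hypothetical stand-ins for the asker's types):

import java.util.HashMap;

public class PrisonRegistry {
    // Hypothetical minimal Player type standing in for the asker's class
    static class Player {
        private final String name;
        Player(String name) { this.name = name; }
        String getName() { return name; }
    }

    public static void main(String[] args) {
        // Outer map: prison name -> (player name -> Player)
        HashMap<String, HashMap<String, Player>> prisonsPlayers = new HashMap<>();

        HashMap<String, Player> loggedPlayers = new HashMap<>();
        Player alice = new Player("Alice");
        loggedPlayers.put(alice.getName(), alice);

        prisonsPlayers.put("alcatraz", loggedPlayers);

        // Reading back: first get the inner map, then the player
        Player found = prisonsPlayers.get("alcatraz").get("Alice");
        System.out.println(found.getName()); // prints "Alice"
    }
}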

Related

Accessing keys and values from nested hashmaps in Java

I have nested hashmaps in the format below in my code. When the code is executed, this class also gets executed.
public class NestedHashMap {
    HashMap<String, HashMap<String, HashMap<String, String>>> map1 = new HashMap<String, HashMap<String, HashMap<String, String>>>();

    public void hashMapData() {
        HashMap<String, HashMap<String, String>> map2 = new HashMap<String, HashMap<String, String>>();
        HashMap<String, String> map3 = new HashMap<String, String>();
        // map3 contains key 'Student Exam Code' and value 'Student Name'.
        // map3.put("1352", "ABCDE");
        // map3.put("4581", "JDWEF");
        // map3.put("1587", "OWELW");
        // map2 contains key 'Exam Centre Code' and value map3.
        // map2.put("CENTRE092", map3);
        // map1 contains key 'Exam Paper Code' and value map2.
        // map1.put("ENG02", map2);
        // The data in each hashmap is stored through a for loop.
    }

    public void getData() {
        String strValue = map1.get("ENG02").get("CENTRE092").get("1352");
        System.out.println(strValue);
    }
}
I am getting the below error when the method 'getData()' gets executed.
java.lang.NullPointerException: Cannot invoke "java.util.HashMap.get(Object)" because the return value of "java.util.HashMap.get(Object)" is null
at scripts.NestedHashMap.getData(NestedHashMap.java:159)
Could someone please help to sort this issue out?
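The chained get() calls throw as soon as any intermediate lookup returns null (for example, if the puts in hashMapData() never ran, as in the commented-out code above). A null-safe sketch of getData(), purely to show where the null appears:

public void getData() {
    HashMap<String, HashMap<String, String>> centres = map1.get("ENG02");
    HashMap<String, String> students = (centres == null) ? null : centres.get("CENTRE092");
    String strValue = (students == null) ? null : students.get("1352");
    System.out.println(strValue); // prints null instead of throwing a NullPointerException
}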

Java 8 - iterate 2 hash maps and create a new hash map with only the matching keys

We have a requirement to achieve this in Java 8; could anyone please help?
We have a method taking 2 parameters as input; both parameters are HashMap<String, Dog>.
We want to iterate both hash maps and return one HashMap.
The result hash map should contain only the matched keys and their corresponding values from the 2 hashmaps; for the value of each matched key (i.e. the Dog), we want to set some attributes from HashMap1 and some attributes from HashMap2.
Please suggest how we can achieve this in Java 8.
You can implement the iteration simply, like this:
public Map<String, Dog> combineMap(Map<String, Dog> first, Map<String, Dog> second) {
    Map<String, Dog> result = new HashMap<>();
    for (Map.Entry<String, Dog> entry : first.entrySet()) {
        if (second.containsKey(entry.getKey())) {
            Dog dogFirst = entry.getValue();
            Dog dogSecond = second.get(entry.getKey());
            Dog combineDog = new Dog();
            // Copy whatever attributes you need from dogFirst and dogSecond here
            result.put(entry.getKey(), combineDog);
        }
    }
    return result;
}
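Since the question asks for Java 8 specifically, the same thing can also be written with streams. A sketch, assuming a hypothetical Dog with name and breed fields just to show attributes being mixed from both maps:

import java.util.Map;
import java.util.stream.Collectors;

public class CombineMaps {
    // Hypothetical Dog type; the real attribute names are whatever the asker's class has
    static class Dog {
        String name;
        String breed;
    }

    public static Map<String, Dog> combineMap(Map<String, Dog> first, Map<String, Dog> second) {
        return first.entrySet().stream()
                .filter(e -> second.containsKey(e.getKey()))    // keep matching keys only
                .collect(Collectors.toMap(Map.Entry::getKey, e -> {
                    Dog combined = new Dog();
                    combined.name = e.getValue().name;               // attribute from the first map
                    combined.breed = second.get(e.getKey()).breed;   // attribute from the second map
                    return combined;
                }));
    }
}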

Cucumber V5-V6 - passing complex object in feature file step

So I have recently migrated to v6, and I will try to simplify my question.
I have the following class
@AllArgsConstructor
public class Songs {
    String title;
    List<String> genres;
}
In my scenario I want to have something like:
Then The results are as follows:
|title |genre |
|happy song |romance, happy|
And the implementation should be something like:
@Then("The results are as follows:")
public void theResultsAreAsFollows(Songs song) {
    // Some code here
}
I have the default transformer
@DefaultParameterTransformer
@DefaultDataTableEntryTransformer(replaceWithEmptyString = "[blank]")
@DefaultDataTableCellTransformer
public Object transformer(Object fromValue, Type toValueType) {
    ObjectMapper objectMapper = new ObjectMapper();
    return objectMapper.convertValue(fromValue, objectMapper.constructType(toValueType));
}
My current issue is that I get the following error: Cannot construct instance of java.util.ArrayList (although at least one Creator exists)
How can I tell Cucumber to interpret specific cells as lists, while keeping everything in the same step rather than splitting it apart? Or better, how can I send an object in a step containing different variable types such as List, HashSet, etc.?
If I replace the list with a String, everything works as expected.
@M.P.Korstanje, thank you for your idea. If anyone is trying to find a solution for this, here is how I did it, as per the suggestions received. I inspected the type fromValue has and updated the transform method into something like:
if (fromValue instanceof LinkedHashMap) {
    Map<String, Object> map = (LinkedHashMap<String, Object>) fromValue;
    Set<String> keys = map.keySet();
    for (String key : keys) {
        if (key.equals("genres")) {
            // Split the comma-separated cell into a List<String>
            List<String> genres = Arrays.asList(map.get(key).toString().split(",", -1));
            map.put("genres", genres);
        }
    }
    // Convert once, after all keys have been inspected
    return objectMapper.convertValue(map, objectMapper.constructType(toValueType));
}
It is somewhat specific, but I could not find a better solution :)
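As a side note, a sketch of an alternative (not the asker's code): since cucumber-java 5+, you can register a dedicated @DataTableType for Songs and do the comma-splitting there, leaving the default transformer untouched. The column names are assumed to match the feature file table above:

import io.cucumber.java.DataTableType;
import java.util.Arrays;
import java.util.Map;

public class SongsTableType {
    @DataTableType
    public Songs songsEntry(Map<String, String> row) {
        // Each table row arrives as a Map of column header -> cell value
        return new Songs(row.get("title"),
                Arrays.asList(row.get("genre").split(",\\s*")));
    }
}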

Why am I getting poor performance with Hazelcast relative to database?

I have a cluster that I routinely run with several nodes, and I am interested in resolving some performance issues. It could be that what I am doing is correct, but I am not entirely sure and could use some expert guidance. The goal of this project was to offload database data into a Hazelcast map to provide more scalable and performant access.
Assume there are three nodes in the cluster and there are 30,000 entries in the container map, spread roughly evenly across the cluster. For the sake of the question, assume a simple structure like the following, with the usual getters, setters, constructors and so on:
class Container {
    int id;
    Set<Integer> dataItems;
}

class Data {
    int id;
    String value;
}
The map config for the two maps looks like the following:
<map name="Container">
    <in-memory-format>OBJECT</in-memory-format>
    <backup-count>1</backup-count>
    <async-backup-count>0</async-backup-count>
    <time-to-live-seconds>0</time-to-live-seconds>
    <max-idle-seconds>259200</max-idle-seconds>
    <eviction-policy>LRU</eviction-policy>
    <max-size policy="PER_NODE">0</max-size>
    <eviction-percentage>25</eviction-percentage>
    <merge-policy>com.hazelcast.map.merge.PutIfAbsentMapMergePolicy</merge-policy>
</map>
As you can see, this map has a long eviction time and is used heavily. Since the data experiences heavy write traffic as well as even heavier read traffic, I thought a near cache might not be entirely helpful, as invalidations would be frequent. A standard iteration strategy, if this were a local data set, would be something like the following:
public List<Map<String, Object>> jsonMap(final Set<Integer> keys) {
    final IMap<Integer, Container> cmap = hazelcast.getMap("Containers");
    final IMap<Integer, Data> dmap = hazelcast.getMap("Data");
    final List<Map<String, Object>> result = new ArrayList<>();
    cmap.getAll(keys).values().stream().forEach((c) -> {
        final Map<String, Object> cJson = new HashMap<>();
        result.add(cJson);
        cJson.put("containerId", c.id);
        final List<Map<String, Object>> dataList = new ArrayList<>();
        cJson.put("data", dataList);
        dmap.getAll(c.dataItems).values().stream().forEach(d -> {
            final Map<String, Object> dJson = new HashMap<>();
            dataList.add(dJson);
            dJson.put("id", d.id);
            dJson.put("value", d.value);
        });
    });
    return result;
}
As you can see, there is simple iteration here to create a JSON representation. However, since the data is scattered across the nodes, we have found this to be extremely slow: an order of magnitude slower than simply getting the data from the database directly. That has led some to question the strategy of using Hazelcast at all. As a solution, I proposed redesigning the system to use a CompletableFuture created with an execution callback.
public <K, R> CompletableFuture<R> submitToKeyOwner(final K key, final String executor, final Callable<R> callable) {
    final CompletableFuture<R> future = new CompletableFuture<>();
    hazelcast.getExecutorService(executor).submitToKeyOwner((Callable<R> & Serializable) callable, key, new FutureExecutionCallback<>(future));
    return future;
}

public class FutureExecutionCallback<R> implements ExecutionCallback<R> {
    private final CompletableFuture<R> future;

    public FutureExecutionCallback(final CompletableFuture<R> future) {
        this.future = future;
    }

    @Override
    public void onResponse(final R response) {
        future.complete(response);
    }

    @Override
    public void onFailure(final Throwable t) {
        future.completeExceptionally(t);
    }
}
public List<Map<String, Object>> jsonMap2(final Set<Integer> keys) {
    final List<Map<String, Object>> result = new ArrayList<>();
    keys.stream().forEach(k -> {
        result.add(submitToKeyOwner(k, (Callable<Map<String, Object>> & Serializable) () -> {
            final IMap<Integer, Container> cmap = hazelcast.getMap("Containers");
            final Container c = cmap.get(k);
            final Map<String, Object> cJson = new HashMap<>();
            cJson.put("containerId", c.id);
            final List<Map<String, Object>> dataList = new ArrayList<>();
            cJson.put("data", dataList);
            c.dataItems.stream().forEach((dk) ->
                dataList.add(submitToKeyOwner(dk, (Callable<Map<String, Object>> & Serializable) () -> {
                    final IMap<Integer, Data> dmap = hazelcast.getMap("Data");
                    final Data d = dmap.get(dk);
                    final Map<String, Object> dJson = new HashMap<>();
                    dJson.put("id", d.id);
                    dJson.put("value", d.value);
                    return dJson;
                }).join()));
            return cJson;
        }).join());
    });
    return result;
}
Essentially I have devolved everything into submitToKey calls and used CompletableFutures to wrap it all up, the logic being that the fetch of each object will run only on the node where it is locally stored. Although this works, it is still slower than accessing the database directly: for the hundreds of records we are accessing, a single Hibernate call would bring the records back in nanoseconds, while this approach is measured in tens of milliseconds. That seems counterintuitive; I would think access to the cache should be much quicker than it actually is. Perhaps I am doing something wrong in the implementation of the iteration, or in the general paradigm. Entry processors are not an option because, although I have posted a trivial example, the real example uses other maps in its processing as well, and entry processors have serious limitations. Map-reduce is not appropriate because the administrative overhead of the job has proven more costly than either of these two methods.
The question I have is whether each of these is the right paradigm, and whether I should be expecting tens or hundreds of milliseconds of latency. Is this just the cost of doing business in a clustered, fault-tolerant world, or is there something I can do to reduce the time? Finally, is there a better paradigm to use when accessing data in this manner?
Thanks a bunch for your time.
It won't solve your problem, but it's worth mentioning that <in-memory-format>BINARY</in-memory-format> usually yields better performance than <in-memory-format>OBJECT</in-memory-format> (using OBJECT adds a serialization step to map.get()).
From the docs:
Regular operations like get rely on the object instance. When the OBJECT format is used and a get is performed, the map does not return the stored instance, but creates a clone. Therefore, this whole get operation includes a serialization first on the node owning the instance, and then a deserialization on the node calling the instance. When the BINARY format is used, only a deserialization is required; this is faster.
Also, I read that you are using Hibernate; have you considered simply using Hazelcast as a Hibernate second-level cache (instead of implementing the cache logic yourself)? It works for Hibernate 3 and Hibernate 4.
(And one last thing: I believe setting eviction-percentage and eviction-policy does nothing unless you also set max-size.)
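For reference, a sketch of that one-line change against the map config quoted in the question (everything else unchanged):

<map name="Container">
    <!-- BINARY keeps entries serialized, so a get() only pays one deserialization -->
    <in-memory-format>BINARY</in-memory-format>
    <!-- remaining settings as in the original config -->
</map>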

GXT Grid ValueProvider / PropertyAccess for a Map<K,V> Datastore?

Rather than using bean model objects, my data model is built on key-value pairs in a HashMap container.
Does anyone have an example of GXT's Grid ValueProvider and PropertyAccess that will work with an underlying Map?
It doesn't have one built in, but it is easy to build your own. Check out this blog post for a similar way of thinking, especially the ValueProvider section: http://www.sencha.com/blog/building-gxt-charts
The purpose of a ValueProvider is to be a simple reflection-like mechanism to read and write values in some object. The purpose of PropertyAccess<T>, then, is to autogenerate some of these value/modelkey/label provider instances based on getters and setters as found on Java beans, a very common use case. It doesn't have much more complexity than that; it is just a way to ask the compiler to do some very easy boilerplate code for you.
As that blog post shows, you can very easily build a ValueProvider just by implementing the interface. Here's a quick example of how you could make one that reads a Map<String, Object>. When you create each instance, you tell it which key you are working off of and the type of data it should find when it reads out that value:
public class MapValueProvider<T> implements ValueProvider<Map<String, Object>, T> {
    private final String key;

    public MapValueProvider(String key) {
        this.key = key;
    }

    public T getValue(Map<String, Object> object) {
        return (T) object.get(key);
    }

    public void setValue(Map<String, Object> object, T value) {
        object.put(key, value);
    }

    public String getPath() {
        return key;
    }
}
You then build one of these for each key you want to read out, and can pass it along to ColumnConfig instances or whatever else might be expecting them.
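For instance (a sketch: the three-argument ColumnConfig constructor is from GXT 3, and the "name" key is hypothetical):

// One provider per column, for grid rows of type Map<String, Object>
MapValueProvider<String> nameProvider = new MapValueProvider<String>("name");
ColumnConfig<Map<String, Object>, String> nameColumn =
        new ColumnConfig<Map<String, Object>, String>(nameProvider, 150, "Name");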
The main point though is that ValueProvider is just an interface, and can be implemented any way you like.