I need to store an object describing the memory details of a memory block allocated by sbrk(), at the beginning of the memory block itself.
for example:
metaData det();
void* alloc = sbrk(sizeof(det)+50000);
// a piece of code to place det at the beginning of alloc
I am not allowed to use placement new, and I am not allowed to allocate memory using new/malloc etc.
I know that simply assigning it to the memory block would cause undefined behaviour.
I was thinking about using memcpy (I think that can cause problems, as det is not dynamically allocated).
Could assigning a pointer to the object at the beginning work (only if there's no other choice), or memcpy?
Thanks.
I am not allowed to use placement new
Placement new is the only way to place an object in an existing block of memory.
[intro.object] An object is created by a definition, by a new-expression, when implicitly changing the active member of a union, or when a temporary object is created
You cannot make a definition refer to an existing memory region, so that's out.
There's no union, so that's also out.
A temporary object cannot be created in an existing block of memory, so that's also out.
The only remaining way is with a new expression, and of those, only placement new can refer to an existing block of memory.
So you are out of luck as far as the C++ standard goes.
However, the same problem exists with malloc. There is a ton of code out there that uses malloc without bothering to use placement new. Such code just casts the result of malloc to the target type and proceeds from there. This method works in practice, and there is no sign of it ever being broken.
metaData *det = static_cast<metaData *>(alloc);
On an unrelated note, metaData det(); declares a function, and sizeof is not applicable to functions.
When I read Java Concurrency in Practice by Brian Goetz, I recall him saying "Immutable objects, on the other hand, can be safely accessed even when synchronization is not used to publish the object reference" in the chapter about visibility.
I thought this implies that if you publish an immutable object, all of its fields (including final references to mutable objects) are visible to other threads that might use them, and are at least up to date as of when the object finished construction.
Now, I read in https://www.cs.umd.edu/~pugh/java/memoryModel/jsr-133-faq.html that
"Now, having said all of this, if, after a thread constructs an immutable object (that is, an object that only contains final fields), you want to ensure that it is seen correctly by all of the other thread, you still typically need to use synchronization. There is no other way to ensure, for example, that the reference to the immutable object will be seen by the second thread. The guarantees the program gets from final fields should be carefully tempered with a deep and careful understanding of how concurrency is managed in your code."
They seem to contradict each other and I am not sure which to believe.
I have also read that if all fields are final, then we can ensure safe publication even if the object is not, per se, immutable.
For example, I always thought that this code from Brian Goetz's Java Concurrency in Practice was fine when publishing an object of this class, due to this guarantee.
@ThreadSafe
public class MonitorVehicleTracker {
    @GuardedBy("this")
    private final Map<String, MutablePoint> locations;

    public MonitorVehicleTracker(Map<String, MutablePoint> locations) {
        this.locations = deepCopy(locations);
    }

    public synchronized Map<String, MutablePoint> getLocations() {
        return deepCopy(locations);
    }

    public synchronized MutablePoint getLocation(String id) {
        MutablePoint loc = locations.get(id);
        return loc == null ? null : new MutablePoint(loc);
    }

    public synchronized void setLocation(String id, int x, int y) {
        MutablePoint loc = locations.get(id);
        if (loc == null)
            throw new IllegalArgumentException("No such ID: " + id);
        loc.x = x;
        loc.y = y;
    }

    private static Map<String, MutablePoint> deepCopy(Map<String, MutablePoint> m) {
        Map<String, MutablePoint> result = new HashMap<String, MutablePoint>();
        for (String id : m.keySet())
            result.put(id, new MutablePoint(m.get(id)));
        return Collections.unmodifiableMap(result);
    }
}
public class MutablePoint { /* Listing 4.5 */ }
For example, in this code, what if the final-field guarantee did not hold: one thread constructs an instance of this class, and another thread then sees a non-null reference to that object but a null locations field at the time it uses it?
Once again, I don't know which is correct, or whether I happened to misinterpret the article, Goetz, or both.
This question has been answered a few times before but I feel that many of those answers are inadequate. See:
https://stackoverflow.com/a/14617582
https://stackoverflow.com/a/35169705
https://stackoverflow.com/a/7887675
Effectively Immutable Object
etc...
In short, Goetz's statement in the linked JSR 133 FAQ page is more "correct", although not in the way that you are thinking.
When Goetz says that immutable objects are safe to use even when published without synchronization, he means to say that immutable objects that are visible to different threads are guaranteed to retain their original state/invariants, all else remaining the same. In other words, properly synchronized publication is not necessary to maintain state consistency.
In the JSR-133 FAQ, when he says that:
you want to ensure that it is seen correctly by all of the other thread (sic)
He is not referring to the state of the immutable object. He means that you must synchronize publication in order for another thread to see the reference to the immutable object at all. There's a subtle difference in what the two statements are talking about: JCIP is referring to state consistency, while the FAQ page is referring to access to a reference of an immutable object.
The code sample you provided has nothing, really, to do with anything Goetz says here, but to answer your question: a correctly initialized final field will hold its expected value if the object is properly initialized (beware the difference between initialization and publication). The code sample also synchronizes access to the locations field, to ensure that updates to the data behind that final field are thread-safe.
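To make the distinction concrete, here is a minimal sketch (the class and field names are illustrative, not from JCIP or the FAQ): the reference is published with no synchronization, so a reader may never see it, but any reader that does see it is guaranteed to see the final fields fully initialized.

// Illustrative sketch: final-field guarantees vs. visibility of the reference itself.
final class ImmutablePoint {
    final int x;
    final int y;
    ImmutablePoint(int x, int y) { this.x = x; this.y = y; }
}

class UnsafePublication {
    static ImmutablePoint point;           // plain field: publication is not synchronized

    static void writer() {
        point = new ImmutablePoint(1, 2);  // a reader may never see this write (FAQ statement)...
    }

    static void reader() {
        ImmutablePoint p = point;
        if (p != null) {                   // ...but if it does see a non-null reference,
            // the final fields are guaranteed to be 1 and 2 (JCIP statement)
            System.out.println(p.x + ", " + p.y);
        }
    }
}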
In fact, to elaborate further, I suggest that you look at JCIP listing 3.13 (VolatileCachedFactorizer). Notice that even though OneValueCache is immutable, it is stored in a volatile field. To illustrate the FAQ statement: VolatileCachedFactorizer will not work correctly without volatile. "Synchronization" here refers to using a volatile field to ensure that updates made to it are visible to other threads.
A good way to illustrate the first JCIP statement is to remove volatile. In this case, the CachedFactorizer won't work. Consider this: what if one thread set a new cache value, but another thread tried to read the value and the field was not volatile? The reader might not see the updated OneValueCache. BUT, recalling that Goetz refers to the state of the immutable object, IF the reader thread happened to see an up-to-date instance of OneValueCache stored at cache, then the state of that instance would be visible and correctly constructed.
So although it is possible to lose updates to cache, it is impossible to lose the state of the OneValueCache if it is read, because it is immutable. I suggest reading the accompanying text stating that "volatile reference used to ensure timely visibility."
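Here is a stripped-down sketch of that arrangement (simplified, not the book's exact listing; the int-based cache is an assumption made for brevity):

// Simplified sketch of the JCIP pattern: an immutable holder published through a volatile field.
final class OneValueCache {
    private final int lastNumber;
    private final int[] lastFactors;       // defensively copied, never leaked

    OneValueCache(int number, int[] factors) {
        this.lastNumber = number;
        this.lastFactors = (factors == null) ? null : factors.clone();
    }

    int[] getFactors(int number) {
        if (lastFactors == null || number != lastNumber)
            return null;
        return lastFactors.clone();
    }
}

class VolatileCachedFactorizer {
    // volatile makes each newly published OneValueCache promptly visible to readers;
    // the immutability of OneValueCache keeps whatever cache a reader sees consistent.
    private volatile OneValueCache cache = new OneValueCache(0, null);

    int[] service(int number) {
        int[] factors = cache.getFactors(number);
        if (factors == null) {
            factors = factor(number);
            cache = new OneValueCache(number, factors);
        }
        return factors;
    }

    private int[] factor(int n) {
        return new int[] { n };            // placeholder for real factorization
    }
}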
As a final example, consider a singleton that uses a FinalWrapper for thread safety. Note that FinalWrapper is effectively immutable (depending on whether the singleton itself is mutable), and that the helperWrapper field is in fact non-volatile. Recalling the second FAQ statement, that synchronization is required for access to the reference, how can this "correct" implementation possibly be correct!?
In fact, it is possible to do without volatile here because it is not necessary for threads to immediately see the up-to-date value of helperWrapper. If the value held by helperWrapper is non-null, then great: our first JCIP statement guarantees that the state of the FinalWrapper is consistent, and that we have a fully initialized Foo singleton that can be readily returned. If the value is actually null, there are two possibilities: either this is the first call and the wrapper has not been initialized yet, or it is just a stale value.
In the case that it is the first call, the field is checked again in a synchronized context, as suggested by the second FAQ statement. The thread will find that the value is still null, initialize a new FinalWrapper, and publish it under synchronization.
In the case that it is just a stale value, entering the synchronized block lets the thread establish a happens-before order with the preceding write to the field. By definition, if a value is stale, some writer has already written to the helperWrapper field and the current thread simply has not seen it yet. Since, per the first scenario, a truly uninitialized helperWrapper is only ever initialized under this same lock, entering the synchronized block establishes a happens-before relationship with that previous write. Therefore the thread can recover by re-reading the field once it has entered the synchronized context, obtaining the most up-to-date, non-null value.
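For reference, a sketch of that idiom (Foo is a stand-in for the actual singleton type, not from the original source):

// Sketch of the FinalWrapper lazy-initialization idiom discussed above.
class Foo { }

final class FinalWrapper<T> {
    final T value;                           // final: safely published once the wrapper is visible
    FinalWrapper(T value) { this.value = value; }
}

class FooHolder {
    private FinalWrapper<Foo> helperWrapper; // intentionally NOT volatile

    Foo getInstance() {
        FinalWrapper<Foo> wrapper = helperWrapper;
        if (wrapper == null) {               // either not yet initialized, or a stale read
            synchronized (this) {            // happens-before with any earlier initializing write
                if (helperWrapper == null) {
                    helperWrapper = new FinalWrapper<Foo>(new Foo());
                }
                wrapper = helperWrapper;     // re-read under the lock: now guaranteed non-null
            }
        }
        return wrapper.value;                // final field guarantees a fully constructed Foo
    }
}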
I hope that my explanations and the accompanying examples that I have given will clear things up for you.
Every time I call generate_directed_packet, a new object is created. Should I worry about deleting the packet object before creating the next one? If so, how should I go about deleting it?
function void generate_directed_packet();
    packet = new();
    void'(packet.randomize());
endfunction : generate_directed_packet
SystemVerilog has automatic memory management. That means it keeps an object around only as long as there is a class variable containing a handle to that object. The simulator "deletes" the object once there are no more class variables holding a handle to it. "Deletes" is in quotes because you have no knowledge of when it actually deletes the object; more likely it keeps the space around and reclaims it on a later new() of the same class.
If you are using the UVM, the typical case is you are generating packets in a sequence, and sending them to a driver. What you are really doing is creating a handle to a new object in the sequence, and then copying the handle from variables in the sequence to variables in the driver.
As you copy the handle from one variable to another, you erase the reference to the older object. So as long as you are not putting handles to your objects into a data structure that grows as you add more object handles to it, the space from the old objects gets reclaimed.
HashMap vs ConcurrentHashMap: when the value is an AtomicInteger or a LongAdder, is there any harm in using a HashMap in a multithreaded environment?
Yes, there is.
An object being of type AtomicInteger or LongAdder just means that the object itself is safe under concurrent modification (i.e. if two threads try to modify it, they will do so one after the other). However, if the map containing the objects is itself a HashMap, then concurrent modification of the map is not safe. For instance, if you want to add a key-value pair only if the key doesn't already exist in the map, you cannot safely use the putIfAbsent() operation, because it is not synchronized/thread-safe on a HashMap. If you use it anyway, it is possible that two threads will call this method at the same time, both of them reaching the conclusion that the map doesn't have the key, and then both of them adding a key-value pair, with one of them overwriting the other's value.
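A small, self-contained sketch of that failure mode (the class name and counts are illustrative): two threads hammering a plain HashMap can lose increments when their putIfAbsent calls race, and can corrupt the map outright.

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative sketch: the check-then-act race on a plain HashMap.
public class UnsafeCounterDemo {
    private static final Map<String, AtomicInteger> counts = new HashMap<>();

    static void increment(String key) {
        // Not atomic on HashMap: two threads can both see "absent" and both insert,
        // so one counter (and its increments) can be silently replaced.
        counts.putIfAbsent(key, new AtomicInteger());
        counts.get(key).incrementAndGet();
    }

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> { for (int i = 0; i < 100_000; i++) increment("hits"); };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        // Often prints less than 200000; unsynchronized structural modification can
        // also corrupt the HashMap itself (exceptions or even an infinite loop).
        System.out.println(counts.get("hits"));
    }
}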
You cannot use a HashMap in a multithreaded environment. The reason is as follows:
If multiple threads operate on a plain HashMap, they can damage its internal structure, which is an array of linked lists. Links can go missing or form cycles, and the result is a HashMap that is totally unusable and corrupt. This is why you should always use a ConcurrentHashMap in a multithreaded environment, regardless of what values you want to store in the map.
Now, in a ConcurrentHashMap of type, say, Map<String, V>, V could be a LongAdder, an AtomicLong, etc. Remember that compound operations (check-then-act sequences) on a ConcurrentHashMap are not atomic by default. However, if you use, say, a LongAdder, then you can write the following without any need to synchronize:
map.putIfAbsent("abc", new LongAdder());
map.get("abc").increment();
I need to use the proxy instance created by Proxy.newProxyInstance as the key of a HashMap. I am not sure whether it overrides the equals and hashCode methods of Object, and I am not sure whether Object's implementation is suitable for that.
I checked the doc for the hashCode method:
As much as is reasonably practical, the hashCode method defined by class Object does return distinct integers for distinct objects. (This is typically implemented by converting the internal address of the object into an integer.)
The hashCode of an Object will then change after the object instance is moved during GC. Is this right?
I got @NPE's explanation:
The computation is performed the first time a hash code is required. To maintain consistency, the result is then stored in the object's header and is returned on subsequent calls to hashCode(). The caching is done outside this function.
So the hashCode of an Object will not change.
I got this from the Proxy Javadoc:
A method invocation on a proxy instance through one of its proxy interfaces will be dispatched to the invoke method of the instance's invocation handler, passing the proxy instance, a java.lang.reflect.Method object identifying the method that was invoked, and an array of type Object containing the arguments.
That is to say, only the methods of its proxy interfaces are dispatched to the invocation handler; equals and hashCode are excluded unless they are also declared in those interfaces.
So the proxy should not override Object's implementations.
Thanks