Best Practice: Use a reference to objects in a loop, or plain array access? (C++98)

I have an array of, say, 100 objects of type Data: Data data_array[100]. What would be the best practice for accessing these objects in a loop in C++98?
1.
for (int i = 0; i < 100; ++i)
{
    Data& data_obj = data_array[i];
    // do a lot of work with it, call functions and so on
}
2.
for (int i = 0; i < 100; ++i)
{
    // do a lot of work with it, call functions and so on,
    // but always through data_array[i]
}
Is there a performance penalty for method 1 over method 2, or will compiler optimizations eliminate any difference anyway?
What would be the preferred way to write this?
PS: I don't have a PC at hand to test out the performance myself.

If you have a disassembler, you can read the generated assembly and check: there are very few differences between the two versions. In fact, data_array[i] is effectively turned into a temporary variable anyway. That said, I think the code is more readable the first way.
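For illustration, here is a minimal compilable C++98 sketch of option 1 (the Data class and its update() member are made up for the example):

class Data {
public:
    Data() : value_(0) {}
    void update(int v) { value_ = v; }
private:
    int value_;
};

int main()
{
    Data data_array[100];
    for (int i = 0; i < 100; ++i)
    {
        // Bind the reference once, then use it for every access in the body.
        Data& data_obj = data_array[i];
        data_obj.update(i);
    }
    return 0;
}

With optimizations enabled, both forms will normally compile to identical machine code, so the reference buys readability rather than speed.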

Related

Fastest way to determine best variables to get highest return

I'm trying to find the best variables for a stock trading bot in order to get the best return. Even with a small amount of data, it's already doing hundreds of billions of loop iterations, which takes a while. When I scale this up with more data, it will take even longer to find the best variables.
What I'm doing is having multiple nested loops that try every different scenario to determine the best variables to get the highest return.
Below is an example. While it's obvious what the best variables for this scenario are, it's just a mock-up of what I'm doing, not code from the project, but rather a simplified version of what I'm trying to achieve.
let bestReturn = 0;
for (let variableOne = 0; variableOne < 10; variableOne++) {
    for (let variableTwo = 0; variableTwo < 10; variableTwo++) {
        for (let variableThree = 0; variableThree < 10; variableThree++) {
            if ((variableOne * variableTwo * variableThree) > bestReturn) {
                bestReturn = variableOne * variableTwo * variableThree;
            }
        }
    }
}
I'm currently just running this on my Windows PC, but is there a system that handles this process faster, like Linux for example? Or is there a better way of doing this? Thanks!
Doing a grid search (which is essentially what your loops are doing) over a high-dimensional parameter space does not look very promising. But you can at least start with a coarse grid and refine the search around the best result of the first rough scan of the parameter space.
Parameter optimization is a huge field, and at least for generic goal functions that cannot be solved analytically, the best you can do is apply heuristics to find some (hopefully quite good) local optimum.
The downhill simplex method (Nelder-Mead) is extremely easy to implement and quite efficient, so maybe you want to start with that.
Consider running it multiple times with randomized starting parameters.
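To make the coarse-then-refine idea concrete, here is a minimal C++ sketch (not code from the question; the objective is the same toy product used above, standing in for the real backtest). It scans a coarse grid, then re-scans a shrinking box around the best point:

#include <cstdio>

// Stand-in for the real backtest: maps three parameters to a return.
double objective(double a, double b, double c)
{
    return a * b * c;
}

int main()
{
    const double domLo = 0.0, domHi = 10.0;   // search domain per axis
    double lo[3] = {domLo, domLo, domLo};
    double hi[3] = {domHi, domHi, domHi};
    double best[3] = {domLo, domLo, domLo};
    double bestReturn = -1e300;
    const int steps = 10;                     // grid points per axis per pass

    for (int pass = 0; pass < 3; ++pass)      // each pass shrinks the box
    {
        double h[3];
        for (int d = 0; d < 3; ++d)
            h[d] = (hi[d] - lo[d]) / steps;

        for (int i = 0; i <= steps; ++i)
            for (int j = 0; j <= steps; ++j)
                for (int k = 0; k <= steps; ++k)
                {
                    double r = objective(lo[0] + i * h[0],
                                         lo[1] + j * h[1],
                                         lo[2] + k * h[2]);
                    if (r > bestReturn)
                    {
                        bestReturn = r;
                        best[0] = lo[0] + i * h[0];
                        best[1] = lo[1] + j * h[1];
                        best[2] = lo[2] + k * h[2];
                    }
                }

        // Shrink the box around the best point, clamped to the domain.
        for (int d = 0; d < 3; ++d)
        {
            lo[d] = (best[d] - h[d] < domLo) ? domLo : best[d] - h[d];
            hi[d] = (best[d] + h[d] > domHi) ? domHi : best[d] + h[d];
        }
    }

    std::printf("best return %f at (%f, %f, %f)\n",
                bestReturn, best[0], best[1], best[2]);
    return 0;
}

Three passes of roughly 1,300 evaluations each reach about the same resolution as a single grid of several million points, at the risk of locking onto a local optimum, which is why randomized restarts are worth adding.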

An efficient, non-loop way to erase from unordered_map with a predicate (C++11)?

Algorithms and member functions are suggested over hand-written loops for efficiency when working with containers. However, the associative containers (unordered_map among them) do not work with the erase(remove_if) idiom, and it appears that the common approach is to fall back on a loop.
Here uom is a std::unordered_map whose mapped values are raw pointers:
for (auto it = uom.begin(); it != uom.end(); ) {
    if (it->second->toErase()) {
        delete it->second;     // omit the delete if using std::unique_ptr
        it = uom.erase(it);    // C++11: erase returns the next valid iterator
    } else {
        ++it;
    }
}
// as per Scott Meyers, Effective STL, pg 45
Is this as efficient as possible? It seems like there should be a better way, something like the erase(remove_if) idiom but one that works for unordered_map (I understand that the associative containers cannot be "re-ordered", hence remove_if is not supported). Is this really the best way to erase entries from an unordered_map using a predicate? Any suggestions?
Thank you in advance.
That is as efficient as possible. If you want something more convenient, you could use boost's erase_if template - see here. unordered_map maintains a linked list of nodes in each bucket, so erasing them is cheap. There is no need for remove_if-style "compaction", which suits std::vector's use of contiguous memory.
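If pulling in Boost is not an option, a hand-rolled erase_if along these lines (a sketch, not Boost's actual implementation) wraps the same loop in a reusable template:

#include <string>
#include <unordered_map>

template <typename Container, typename Predicate>
void erase_if(Container& c, Predicate pred)
{
    for (auto it = c.begin(); it != c.end(); )
    {
        if (pred(*it))
            it = c.erase(it);  // erase returns the iterator after the erased node
        else
            ++it;
    }
}

int main()
{
    std::unordered_map<std::string, int> m = {{"a", 1}, {"b", 2}, {"c", 3}};

    // Erase every entry with an even mapped value.
    erase_if(m, [](const std::pair<const std::string, int>& p) {
        return p.second % 2 == 0;
    });
    return 0;
}

(C++20 eventually standardized exactly this as std::erase_if.)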

How to use async.map

I have two for loops, one nested in the other. I want to iterate over a single object and change a property on it to another value, something like this:
for (let i = 0; i < items.length; i++) {
    obj.changeThisAttribute = "abc";
    for (let j = 0; j < items.anotherobj.length; j++) {
        items.anotherobj.changeThisAttribute = "dyz";
    }
}
return items;
Is there any better way of doing this? I have read about async.map and think it would be a good solution, but there is no good example of it. Please suggest a running example or an alternative way of achieving this.
You're not performing anything asynchronous here, so there is no point in async.map.
Unless this is very CPU intensive (it looks fine; profile it, and consider how many objects you have), your code looks fine.
It's readable, straightforward and simple; no need to look for alternative ways.
(I'm assuming your inner loop goes through items[i].anotherobj and not items.anotherobj, though.)

Counter++ in Parallel.ForEach

I understand that using iterator++ inside Parallel.ForEach is not a good option, but right now I'm forced to use a counter inside a Parallel.ForEach loop; the counter is used to pick up column names of a dynamic object at runtime. Any suggestion on what the best option would be? I read somewhere on Stack Overflow that using Interlocked inside Parallel.ForEach is likewise bad design.
If you really need parallel processing, the indices will have to be pre-computed, with something like Enumerable.Range(0, cols.Length).ToArray(). Otherwise each column would depend on the previous one, which obviously doesn't parallelize.
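Since the thread is about .NET, the following is only an analogue of that pattern, sketched in C++ with made-up column names: pair each element with its index up front so the workers never touch a shared counter:

#include <cstddef>
#include <string>
#include <thread>
#include <vector>

int main()
{
    std::vector<std::string> cols = {"open", "high", "low", "close"};

    // Pre-compute the indices, the analogue of
    // Enumerable.Range(0, cols.Length).ToArray().
    std::vector<std::size_t> indices(cols.size());
    for (std::size_t i = 0; i < indices.size(); ++i)
        indices[i] = i;

    // Each worker reads its own (index, column) pair; no shared counter,
    // so no Interlocked-style synchronization is needed.
    std::vector<std::thread> workers;
    for (std::size_t i : indices)
        workers.emplace_back([i, &cols] {
            const std::string& name = cols[i];  // index and name travel together
            (void)name;                         // process the column here
        });

    for (std::thread& t : workers)
        t.join();
    return 0;
}

A real implementation would use a thread pool or a parallel algorithm rather than one thread per column, but the point is the same: the index travels with the work item instead of living in shared mutable state.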

Groovy for loop execution time

O Groovy Gurus,
This code snippet runs in around 1 second
for (int i in (1..10000000)) {
    j = i;
}
while this one takes almost 9 seconds
for (int i = 1; i < 10000000; i++) {
    j = i;
}
Why is it so?
OK, here is my take on why.
If you convert both scripts to bytecode, you will notice that:
The for-in loop uses a Range. An Iterator is used to advance on each pass, and the comparison (<) is made directly against an int (or Integer) to determine whether the exit condition has been met.
The traditional for loop uses the usual increment, condition check, and action. For checking the condition i < 10000000 it uses Groovy's ScriptBytecodeAdapter.compareLessThan. If you dig into that method's code, you will find that both sides of the comparison are taken in as Object, and a lot goes on: casting, comparing them as objects, and so on.
ScriptBytecodeAdapter.compareLessThan --> ScriptBytecodeAdapter.compareTo --> DefaultTypeTransformation.compareTo
There are other classes in the typehandling package that implement compareTo specifically for the math data types; I am not sure why they are not being used (if indeed they are not being used).
I suspect that is the reason the second loop takes longer.
Again, please correct me if I am wrong or missing something.
In your testing, be sure to "warm up" the JVM before taking measurements, otherwise you may end up triggering various startup actions in the platform (class loading, JIT compilation). Run your tests many times in a row, too. Also, if the second test happened to run during a garbage collection, that might have had an impact. Try running each of your tests 100 times, print the times after each run, and see what that tells you.
If you can eliminate potential startup-time artifacts as Jim suggests, then I'd hazard a guess that the Java-style for loop in Groovy is not as well implemented as the original Groovy-style for loop. It was only added in v1.5 after user requests, so perhaps its implementation was a bit of an afterthought.
Have you taken a look at the bytecode generated for your two examples to see if there are any differences? There was a discussion about Groovy performance here, in which one of the comments (from one 'johnchase') says this:
I wonder if the difference you saw related to how Groovy uses numbers (primitives) - since it wraps all primitives in their equivalent Java wrapper classes (int -> Integer), I’d imagine that would slow things down quite a bit. I’d be interested in seeing the performance of Java code that loops 10,000,000 using the wrapper classes instead of ints.
So perhaps the original Groovy for loop does not suffer from this? Just speculation on my part, really.
