I am working on a project that was written in FORTRAN 95 (using the FTN95 compiler) and VC++ 6, and which was migrated to Intel Parallel Studio 15 and Visual Studio 2012 respectively. Since the migration, other features have been added, but it has now been discovered that an older feature generates NaN values at run time in the FORTRAN code.
After several hours of debugging the program I am quite sure that the new features are not to blame, mainly because the only causes of the bug I found are in code that is identical to the latest working version of the program (compiled with FTN95 & VC++ 6). I therefore tend to believe that the issue was caused by the migration itself. The problem is that the program compiles without errors, so I'm asking for any ideas on how I could track this issue down.
I know the description is vague at best, and I cannot give many details about the bug since it stretches across thousands of lines of code (the application is used for scientific calculations). Any ideas would be highly appreciated.
EDIT: Some general details about the bug
The program gets user input from the GUI and initializes the corresponding variables in FORTRAN. This stage works as expected and values are passed correctly. After all the desired options have been chosen, the FORTRAN code runs its calculations and outputs its results (the NaN values) to a file.
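One approach that may help narrow this down is a minimal sketch like the following, assuming the C++ host can run a bit of setup code before the calculation starts (the helper name is invented, the CRT calls are standard MSVC): unmask the relevant floating-point exceptions in the process so that, under the debugger, execution breaks at the very first operation that produces a NaN, even if that operation happens inside the FORTRAN code. Intel's Fortran compiler also has an /fpe:0 option that serves a similar purpose on the Fortran side.

    #include <float.h>

    // Hypothetical helper: call once from the C++ side before starting the calculation.
    void enable_fp_traps()
    {
        unsigned int cw;
        _clearfp();   // clear any pending floating-point exceptions first
        // Unmask invalid-operation, divide-by-zero and overflow; keep the rest masked.
        _controlfp_s(&cw,
                     _MCW_EM & ~(_EM_INVALID | _EM_ZERODIVIDE | _EM_OVERFLOW),
                     _MCW_EM);
    }

With these exceptions unmasked, the debugger typically stops right at the statement that first produces the NaN, which is usually much faster than tracing the value backwards from the output file.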
Related
I am new to C++ programming and have been setting up Unreal Engine 4 with Visual Studio 2019.
The problem I am facing is that VS seems very slow to recognize my code in C++.
For example, it can take more than 10 seconds for it to recognize that I wrote a method declaration, apply the proper coloring, give me IntelliSense (IntelliCode?) suggestions, or show me any errors. Same problem with variables and such. Code completion pretty much never happens.
It's unusable right now and I would appreciate it if somebody had an idea of what's going on.
I never have this problem in C#.
I tried activating/deactivating extensions (I only have a couple of them). I have also run a VS repair.
I have a recent computer (i7 9k, 16 GB RAM) running Windows 10.
Thanks in advance.
sigh
In my third try at introducing Linux into my everyday life, I'm still bouncing hard off programming.
I have a solution for my major thesis' program created in Visual Studio (five projects in one solution: 4 static libs, one executable), and I have managed it with VS since its beginning. It's hosted in Git, so I thought I'd try changing it and compiling it on my laptop with Ubuntu 18.04. While changing it was a no-effort task (Linux text editors are neat, as opposed to the almost non-existent IDEs), compiling, on the other hand, was the point where I doubted all my skills.
The first problem I've encountered is that I have no idea how to manage multiple projects outside VS. I mean, there are .vcxproj files with the project data, but from what I saw they are not very useful outside Visual Studio.
The second problem is include paths - I have no idea how to point #include directives at specific directories without writing ../MainFolder/subfolder/file.h, which is extremely unaesthetic (a minimal sketch of the usual alternative is below).
I expect that those are just the tips of the iceberg and that I will encounter a massive number of problems in the future, but for now - can anybody give me an idea of how to manage such a project on Linux?
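For the include-path point, here is a minimal two-file sketch (all paths, names and the build command are hypothetical, not taken from the actual thesis project): the compiler is given an include directory with -I, so the sources can use project-relative includes instead of ../.. chains.

    // MainFolder/libmath/include/libmath/solver.h  (contents shown here as a comment)
    // -------------------------------------------------------------------------------
    // #pragma once
    // namespace libmath { inline int solve() { return 42; } }

    // MainFolder/app/main.cpp
    // -----------------------
    #include <libmath/solver.h>   // resolved through the -I path below, no ../.. needed
    #include <cstdio>

    int main() {
        std::printf("%d\n", libmath::solve());
        return 0;
    }

    // Build from the repository root with an explicit include directory:
    //   g++ -I MainFolder/libmath/include MainFolder/app/main.cpp -o app
    // A Makefile's CXXFLAGS or CMake's target_include_directories() does the same job.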
I have the following problem (and cannot really produce a minimal test case):
I am porting a package from C++ via Rcpp to R.
The tests (I am testing whether the output matrix is exactly what I would get when calling the C++ code directly) under Linux and OS X are absolutely equal, no difference.
But when testing either via build_win() or on a Windows 8.1 virtual machine I get different results (the results between those two are consistent, so I have Linux/OS X vs. Windows results).
I already replaced the one rand() call with the corresponding Rcpp sugar, so this should not be the problem (I hope, at least).
As running the tests via "R -d valgrind" also produces no errors, I am a bit puzzled about how to proceed.
All tests are done with R 3.2.0 (local machines) and the latest unstable version (via build_win()).
So my questions are:
Are there any known Rcpp differences when compiling on Windows (e.g. the compiler provided by Rtools is too old, and therefore numeric computations (using the STL, no other libraries like Boost/Eigen etc.) are expected to be slightly different)?
Is there a good way to debug the problem? I would basically need to trace the C++ code line by line, and I am not even sure how to do that except with heavy use of std::cout.
Thanks.
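On the tracing point, a minimal sketch (the function, its body and all names are invented for illustration): Rcpp::Rcout behaves like std::cout but writes to R's console, so a printf-style trace printed at full precision can be compared between the Linux/OS X and Windows runs to see where the numbers first diverge.

    #include <Rcpp.h>
    #include <cmath>
    #include <iomanip>

    // [[Rcpp::export]]
    Rcpp::NumericMatrix traced_kernel(Rcpp::NumericMatrix x) {
        for (int i = 0; i < x.nrow(); ++i) {
            double acc = 0.0;
            for (int j = 0; j < x.ncol(); ++j)
                acc += x(i, j) * x(i, j);
            // Print with full double precision so platform differences in the
            // last bits become visible in the trace.
            Rcpp::Rcout << "row " << i << " acc = "
                        << std::setprecision(17) << acc << "\n";
            x(i, 0) = std::sqrt(acc);
        }
        return x;
    }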
The truth about the 32-bit/64-bit problem is indeed written up here:
Different behaviour of sqrt when compiled with 64 or 32 bits
Adding the -ffloat-store option did fix my problem.
I never expected that; I thought the problem was in the source code.
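For anyone hitting the same thing, a minimal sketch of the mechanism (this only illustrates the effect; the exact outcome depends on compiler, architecture and flags, and it does not reproduce the package's numbers): on 32-bit x86 builds that use the x87 FPU, intermediate results may be held at 80-bit precision, while -ffloat-store (or an SSE2-based build, as is the default on 64-bit) forces each assignment to be rounded back to a 64-bit double.

    #include <cmath>
    #include <cstdio>

    int main() {
        volatile double x = 0.1;   // volatile keeps the compiler from folding the constants
        volatile double y = 0.3;
        double t = x * y;          // may be kept in an 80-bit x87 register without -ffloat-store
        double r = std::sqrt(t);   // so the last bits of r can differ between builds
        std::printf("%.17g\n", r);
        return 0;
    }

    // Compare, on a 32-bit x87 build:
    //   g++ -m32 -O2 prec.cpp && ./a.out
    //   g++ -m32 -O2 -ffloat-store prec.cpp && ./a.out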
I know questions like this have been asked before, but none of them have the exact answer I'm searching for.
Today I was taking part in an ACM-ICPC contest with my team. Usually we use the GNU C++ 4.8.1 compiler (which was available at the contest). We had written code which exceeded the time limit on test case 10. At the end of the contest, with less than 2 minutes remaining, I submitted exactly the same code with Visual C++ 2013 (the same source file, just submitted under a different language option) and it got accepted and worked. There were more than 60 test cases and our code passed them all.
Once more: there were no differences between the source codes of the two submissions.
Now I'm just interested in why this happened.
Does anyone know what the reason is?
Without knowing the exact compiler options you used, this question is a bit difficult to answer. Usually, compilers come with many options and provide default values which are used as long as the user does not override them. This is also true for code-optimization options. Both of the mentioned compilers are capable of significantly improving the speed of the generated binary when told to do so. A wild guess would be that in your case the optimization settings used by the GNU compiler did not improve the executable's performance as much as the VC++ settings did, for example because no optimization flags were used in one case. Another wild guess would be that one compiler generated a debug binary and the other did not (check for the -g option with GCC, which switches debug symbol generation on).
On the other hand, depending on the program you created, it could of course be that VC++ was simply better at performing the optimization than g++.
If you are interested in an easy way to increase performance, have a look at the high-level optimization flags at https://gcc.gnu.org/onlinedocs/gnat_ugn/Optimization-Levels.html or, for the full story, at the complete list at https://gcc.gnu.org/onlinedocs/gcc/Optimize-Options.html.
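To make the point concrete, here is a tiny made-up example (the file name is hypothetical; the flags are standard GCC options): the same source built at -O0 and at -O2 can behave very differently on a tight loop, which is exactly the kind of difference that can turn a time-limit-exceeded verdict into an accepted one.

    // sum.cpp - build the same file twice and compare the run times:
    //   g++ -O0 sum.cpp -o sum_slow    (no optimization, what a default/debug build may use)
    //   g++ -O2 sum.cpp -o sum_fast    (a typical release/contest setting)
    #include <cstdio>
    #include <vector>

    int main() {
        std::vector<long long> v(10000000, 1);
        long long total = 0;
        for (std::size_t i = 0; i < v.size(); ++i)   // at -O0, operator[] and size() are not
            total += v[i];                           // inlined; at -O2 the loop becomes tight
        std::printf("%lld\n", total);
        return 0;
    }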
More input on comparing compilers:
http://willus.com/ccomp_benchmark2.shtml?p1
http://news.dice.com/2013/11/26/speed-test-2-comparing-c-compilers-on-windows
I'm a little late with this question, but better late than never. I've been using Visual Studio 6.0 since it came out, but recently switched to VS 2013 on a new PC.
I've gotten my projects to build under 2013, but the resulting executables it produces are consistently bigger than the ones VS 6.0 produced. I've seen a similar thread on here about that happening in the transition from VS 2008 to VS 2010, and the comments and suggestions there all seem to attribute the change to changes in the MFC libraries that are statically linked in. But my projects are straight C code. No C++, let alone MFC. And the 'Use of MFC' option on my project is set to "Use Standard Windows Libraries" (presumably set by the import tool that generated the 2013-compatible project). The only non-standard library it uses is wsock32.lib.
The extra size isn't a killer, but it's significant relative to the size of the whole app. My biggest .exe goes from 980 KB to 1.3 MB - about a 35% increase in size for an app whose small size was a selling point (i.e. install this tiny app and you have access to all of our goodies). That's without debugging info - the increase for the debug version is even bigger - but I don't really care about that.
Any ideas how to strip out the new cruft - or even how to find out what it is?
This is a good manual on how to make your binaries smaller.
The basic ideas are the following (a short sketch of some of them follows the list):
- Don't forget about Release mode
- Define WIN32_LEAN_AND_MEAN before including <windows.h>
- Dynamically link to the C++ runtime
- Compile the executable without debugging information
- Compile with /O1, the 'optimize for size' flag
- Remove the iostream and fstream headers; use low-level I/O instead if possible
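A minimal sketch of how a few of those items look in practice for a straight-C-style program (the file name and the cl command line are just examples; adjust them to the actual project settings):

    // Example build: optimize for size, dynamic CRT, no debug info:
    //   cl /O1 /MD tiny.c wsock32.lib

    #define WIN32_LEAN_AND_MEAN   // trims rarely used parts of <windows.h>
    #include <windows.h>
    #include <stdio.h>            // plain C stdio instead of <iostream> keeps the
                                  // C++ iostream machinery out of the binary

    int main(void)
    {
        printf("hello\n");
        return 0;
    }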
Typically you would generate a MAP file on both systems and figure out which sections contribute the most to the size.
Anton's answer reminds me: first check whether they are both linked the same way (both static or both dynamic; otherwise it is apples and oranges).