ATL uses thunks to manage callbacks for windows, and apparently it needs to allow for data execution.
Microsoft says:
Note that system DEP policy can override, and having DEP AlwaysOn will disable ATL thunk emulation, regardless of the attribute.
Am I correct in translating this quote to (more or less) "ATL applications can crash due to system policies"?
Is there a way to make a pre-ATL-8.0 application work correctly on any system, hopefully while still turning on DEP for everything other than the thunk?
DEP is enabled per process, so you cannot disable DEP for just the buggy fragment. The options are either to rebuild the binary with a fixed ATL so that it is DEP-compatible, or to disable DEP for the whole process that uses the binary.
Earlier ATL versions indeed had this problem and it was fixed at some point.
DEP exceptions are configured under My Computer, Properties, Advanced tab, Performance Settings, Data Execution Prevention.
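If you control the application's own startup code, there is also a programmatic middle ground worth sketching (this is my addition, not part of the original answer): opt the process into DEP but leave ATL thunk emulation enabled by not passing the disable flag to SetProcessDEPPolicy. This still cannot override a system-wide AlwaysOn or AlwaysOff policy, and the function only exists on XP SP3 / Vista SP1 and later, so it is looked up dynamically here; the function name EnableDepKeepAtlThunkEmulation is just illustrative.
#include <windows.h>
typedef BOOL (WINAPI *SetProcessDEPPolicyFn)(DWORD);
static BOOL EnableDepKeepAtlThunkEmulation(void)
{
    const DWORD PROCESS_DEP_ENABLE_FLAG = 0x00000001;  /* PROCESS_DEP_ENABLE */
    SetProcessDEPPolicyFn pfn = (SetProcessDEPPolicyFn)
        GetProcAddress(GetModuleHandleA("kernel32.dll"), "SetProcessDEPPolicy");
    if (pfn == NULL)
        return FALSE;  /* API not available (pre-XP SP3) */
    /* Deliberately NOT passing PROCESS_DEP_DISABLE_ATL_THUNK_EMULATION (0x2),
       so the OS keeps emulating pre-ATL-8.0 thunks for this process. */
    return pfn(PROCESS_DEP_ENABLE_FLAG);
}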
It is not a problem with ATL 8.0:
If possible, replace the older components with ones built to support the "No eXecute Compatibility", such as those using ATL 8.0 or newer.
The ATL thunk strategy was devised as a lookup convenience and to avoid using thread-local storage for a window-handle-to-object map, but the thunk emulation required in DEP-aware OS's negates and even reverses any performance improvement. Newer versions of ATL don't require the thunk emulation because their thunks are created in executable data blocks.
EDIT: Sorry, didn't notice you asked about pre-8.0 ATL.
ucontext_t has been removed from POSIX, but is still there in glibc.
Is it safe to use it on linux-arm64 if I don't care about interoperability? Any gotchas (floating-point registers or anything else I should be wary of)?
Yes, it should be perfectly safe to use. Just because ucontext.h was removed from POSIX (it is gone as of POSIX.1-2008, and therefore also from POSIX.1-2017/SUSv7) does not mean that glibc no longer supports the functionality.
The header was removed because IEEE Std 1003.1-2001/Cor 2-2004, item XBD/TC2/D6/28, applied in the previous version of the standard, had already marked the getcontext, makecontext, setcontext, and swapcontext functions obsolescent, which made the header de facto obsolescent as well.
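As a sanity check, here is a minimal sketch (my own example, plain C against glibc's <ucontext.h>) of the classic getcontext/makecontext/swapcontext round trip; it should build and run with current glibc, including on aarch64:
#include <stdio.h>
#include <ucontext.h>
static ucontext_t main_ctx, co_ctx;
static void coroutine(void) {
    puts("in coroutine");
    swapcontext(&co_ctx, &main_ctx);   /* yield back to main */
    puts("coroutine resumed");
}   /* falling off the end switches to uc_link (main_ctx) */
int main(void) {
    static char stack[64 * 1024];      /* stack for the coroutine context */
    getcontext(&co_ctx);
    co_ctx.uc_stack.ss_sp = stack;
    co_ctx.uc_stack.ss_size = sizeof stack;
    co_ctx.uc_link = &main_ctx;
    makecontext(&co_ctx, coroutine, 0);
    swapcontext(&main_ctx, &co_ctx);   /* run coroutine until it yields */
    puts("back in main");
    swapcontext(&main_ctx, &co_ctx);   /* resume it; returns via uc_link */
    puts("done");
    return 0;
}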
I am using Qt 5.6. I have created an application which contains many widgets such as labels, buttons, etc.
Now I have added a label to my GUI, but I am not able to access it (ui->label_name).
While compiling the application I get the message below.
In file included from mainwindow.cpp:10:0: ui_mainwindow.h: In member function 'void Ui_MainWindow::setupUi(QMainWindow*)': ui_mainwindow.h:853:10: note: variable tracking size limit exceeded with -fvar-tracking-assignments, retrying without
void setupUi(QMainWindow *MainWindow)
Variable tracking assignments is a compiler (GCC) option intended to improve the debugging experience in optimized builds; the message is a note, not an error. If you are experiencing problems while debugging your program, you can disable optimizations or disable this feature altogether (with the compiler flag '-fno-var-tracking-assignments'). There may also be a bug in your compiler version; if your compiler is too old, consider an upgrade. There is no limit on the number of Qt widgets, though.
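If your project uses qmake (the usual case for a Qt 5.6 widget application), a minimal sketch of how to pass that flag from the .pro file would be:
# silence the note by disabling variable tracking assignments (debug-info feature only)
QMAKE_CXXFLAGS += -fno-var-tracking-assignments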
I'm thinking of porting some of my cross-platform scripts to node.js partly to learn node.js, partly because I'm more familiar with JavaScript these days, and partly due to problems with large file support in other scripting languages.
Some scripting languages seem to have patchy support for large file offsets, depending on such things as whether they are running on a 32-/64-bit OS or processor, or need to be specifically compiled with certain flags.
So I want to experiment with node.js anyway, but Googling I'm not finding much either way about its support (or its library/framework support, etc.) for large files with 64-bit offsets.
I realize that to some extent this will depend on JavaScript's underlying integer support. If I correctly read What is JavaScript's Max Int? What's the highest Integer value a Number can go to without losing precision?, it seems that JavaScript uses floating point internally even for integers and therefore
the largest exact integral value is 2^53
Then again node.js is intended for servers and servers should expect large file support.
Does node.js support 64-bit file offsets?
UPDATE
Despite the _LARGEFILE_SOURCE and _FILE_OFFSET_BITS build flags, now that I've started porting my project that requires this, I've found that fs.read(files.d.fd, chunk, 0, 1023, 0x7fffffff, function (err, bytesRead, data)) succeeds but 0x80000000 fails with EINVAL. This is with version v0.6.11 running on 32-bit Windows 7.
So far I'm not sure whether this is a limitation only in fs, a bug in node.js, or a problem only on Windows builds.
Is it intended that greater-than-31-bit file offsets work in node.js in all core modules on all platforms?
Node.js is compiled with _LARGEFILE_SOURCE and _FILE_OFFSET_BITS on all platforms, so internally it should be safe for large file access. (See the common.gypi in the root of the source dir.)
In terms of the libraries, it uses Number for the start (and end) options when creating read and write streams (see fs.createReadStream). This means you can address positions up to 2^53 through node (see What is JavaScript's highest integer value that a Number can go to without losing precision?). This is visible in the lib/fs.js code.
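A quick sketch of what that looks like in practice ('huge.bin' is a placeholder file name, and this assumes a node version with 64-bit offset support, i.e. 0.7.9+ as noted below):
var fs = require('fs');
// start/end are plain Numbers, so positions up to 2^53 - 1 can be expressed.
// Here: read a 1 KiB slice starting at the 4 GiB mark, beyond any 32-bit offset.
var start = 4 * 1024 * 1024 * 1024;
var stream = fs.createReadStream('huge.bin', { start: start, end: start + 1023 });
stream.on('data', function (chunk) {
  console.log('read ' + chunk.length + ' bytes at offset >= 4 GiB');
});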
It was a little difficult to track down, but node.js has only supported 64-bit file offsets since version 0.7.9 (unstable), from the end of May 2012; in stable versions, since version 0.8.0, from the end of June 2012.
fs: 64bit offsets for fs calls (Igor Zinkovsky)
On earlier versions, failure modes when using larger offsets vary from silently seeking to the beginning of the file to throwing an exception with EINVAL.
See the (now closed) bug report:
File offsets over 31 bits are not supported
To check for large file support programmatically from node.js code:
// process.version looks like "v0.8.0"; compare the parts numerically
var v = process.version.substring(1).split('.').map(Number);
if (v[0] > 0 || v[1] > 7 || (v[1] === 7 && v[2] >= 9)) {
  // use 64-bit file offsets...
}
I have a quad-core laptop running Windows XP, but looking at Task Manager R only ever seems to use one processor at a time. How can I make R use all four processors and speed up my R programs?
I have a basic system I use where I parallelize my programs on the "for" loops. This method is simple once you understand what needs to be done. It only works for local computing, but that seems to be what you're after.
You'll need these libraries installed:
library("parallel")
library("foreach")
library("doParallel")
First you need to create your computing cluster. I usually do other stuff while running parallel programs, so I like to leave one core free. The "detectCores" function will return the number of cores in your computer.
cl <- makeCluster(detectCores() - 1)
registerDoParallel(cl, cores = detectCores() - 1)
Next, call your for loop with the "foreach" command, along with the %dopar% operator. I always use a "try" wrapper to make sure that any iterations where the operations fail are discarded, and don't disrupt the otherwise good data. You will need to specify the ".combine" parameter, and pass any necessary packages into the loop. Note that "i" is defined with an equals sign, not an "in" operator!
data = foreach(i = 1:length(filenames), .packages = c("ncdf","chron","stats"),
.combine = rbind) %dopar% {
try({
# your operations; line 1...
# your operations; line 2...
# your output
})
}
Once you're done, clean up with:
stopCluster(cl)
The CRAN Task View on High-Performance Computing with R lists several options. XP is a restriction, but you can still get something like snow working over sockets within minutes.
As of version 2.14.0, R comes with native support for multi-core computations. Just load the parallel package
library("parallel")
and check out the associated vignette
vignette("parallel")
I hear tell that REvolution R supports better multi-threading than the typical CRAN version of R, and REvolution also supports 64-bit R on Windows. I have been considering buying a copy, but I found their pricing opaque; there's no price list on their web site. Very odd.
I believe the multicore package works on XP. It gives some basic multi-process capability, especially through offering a drop-in replacement for lapply() and a simple way to evaluate an expression in a separate process (mcparallel()).
On Windows I believe the best way to do this would probably be with foreach and snow as David Smith said.
However, Unix/Linux based systems can compute using multiple processes with the 'multicore' package. It provides a high-level function, 'mclapply', that performs a parallelized lapply across multiple cores. An advantage of the 'multicore' package is that each worker process gets a private copy of the Global Environment that it may modify. Initially, this copy is just a pointer to the Global Environment, making the sharing of variables extremely cheap as long as the Global Environment is treated as read-only.
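For example, a minimal fork-based sketch (Unix/Linux only):
library(multicore)   # or library(parallel) in R >= 2.14.0, which absorbed it
# fork-based parallel lapply; each worker starts with a copy-on-write view of the Global Environment
squares <- mclapply(1:100, function(x) x^2, mc.cores = 4)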
Rmpi requires that the data be explicitly transferred between R processes instead of working with the 'multicore' closure approach.
-- Dan
If you do a lot of matrix operations and you are using Windows, you can install Revolution R Open (revolutionanalytics.com/revolution-r-open) for free; it comes with the Intel MKL libraries, which allow multithreaded matrix operations. On Windows, if you take the libiomp5md.dll, Rblas.dll and Rlapack.dll files from that install and overwrite the ones in whatever R version you like to use, you'll have multithreaded matrix operations (typically a 10-20x speedup for matrix operations). Or you can use the Atlas Rblas.dll from prs.ism.ac.jp/~nakama/SurviveGotoBLAS2/binary/windows/x64, which also works on 64-bit R and is almost as fast as the MKL one. I found this the single easiest thing to do to drastically increase R's performance on Windows systems. In fact, I'm not sure why these don't come as standard on R Windows installs.
On Windows, multithreading unfortunately is not well supported in R (unless you use OpenMP via Rcpp), and the available SOCKET-based parallelization on Windows systems, e.g. via the parallel package, is very inefficient. On POSIX systems things are better, as you can use forking there (the multicore package is, I believe, the most efficient one). You could also try package Rdsm for multithreading within a shared-memory model. I've got a version on my GitHub with the unix-only flag removed, which should also work on Windows (earlier, Windows wasn't supported because the dependency bigmemory supposedly didn't work on Windows, but now it seems it does):
library(devtools)
devtools::install_github('tomwenseleers/Rdsm')
library(Rdsm)
I have read in other discussions that Release dlls have reduced size compared to Debug dlls. But why is it that the size of the dll I have made is the other way around: the Release dll is bigger than the Debug dll. Will it cause problems?
It won't cause problems; it's probably that the compiler is inlining more functions in the release build and creating larger code. It all depends on the code itself.
Nothing to worry about.
EDIT:
If you are really worried about size and not worried about speed, you can turn on optimize-for-size, or turn off automatic inlining, and see what difference you get.
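For example, with the MSVC command-line compiler (mylib.cpp is a placeholder; the equivalent settings live under C/C++, Optimization in the Visual Studio project properties):
REM optimize for size instead of speed
cl /O1 /LD mylib.cpp
REM keep speed optimizations but disable automatic inline expansion
cl /O2 /Ob0 /LD mylib.cpp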
EDIT:
More info: you can use dumpbin /headers to see where the DLL gets larger.
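For example (MyLib.dll is a placeholder), from a Visual Studio command prompt:
dumpbin /headers Release\MyLib.dll > release_headers.txt
dumpbin /headers Debug\MyLib.dll > debug_headers.txt
REM compare the "size of code" and section sizes reported in the two dumps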
How much bigger is your Release DLL than your Debug DLL?
Your Debug DLLs might seem small if you are generating PDB symbol files (so the debug symbols are not actually in the DLL file), or your Release DLL might be larger if you are inadvertently compiling debug symbols into it.
This can be caused by performance optimizations like loop unrolling. If the difference is significant, check your Release linker settings to make sure that you're not statically linking anything in.
Performance can be affected if your application performs performance-critical tasks. A Release version can even be larger than a Debug version if options to include debug information in the generated code are enabled. But this also depends on the compiler you are using.