I'm looking for a memory tester that would cover as much of physical memory as possible on a running machine that doesn't have ECC RAM. It should test memory in chunks: for example, allocate 100 MB, test it, release it, allocate another 100 MB, and so on. I know that some regions of memory are already in use, so the kernel would have to relocate them first.
I found that this product has an option to specify the physical location, but it doesn't work because the mmap() function doesn't map the specified physical location. I could get around that by modifying the kernel, but that still wouldn't solve the problem, because some sections are already allocated.
I think this is a known problem, so has anyone already solved it?
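To make the chunk-by-chunk idea concrete, here is a minimal user-space sketch in C++. Note that it can only sample whatever physical pages the kernel happens to hand out; it cannot target specific physical addresses, and the chunk size, pass count, and pattern values are just illustrative:

```cpp
// Minimal sketch of a chunk-by-chunk user-space memory test.
// It cannot choose *which* physical pages it gets -- the kernel decides --
// so it only samples whatever memory happens to back each allocation.
#include <cstdint>
#include <cstdio>
#include <cstdlib>
#include <sys/mman.h>   // mlock/munlock (Linux/POSIX)

static bool test_chunk(std::size_t bytes) {
    auto* buf = static_cast<std::uint64_t*>(std::malloc(bytes));
    if (!buf) return true;                       // out of memory: just stop, not a failure
    mlock(buf, bytes);                           // try to keep pages resident (may fail without privileges)
    const std::size_t words = bytes / sizeof(std::uint64_t);
    const std::uint64_t patterns[] = {0x5555555555555555ULL, 0xAAAAAAAAAAAAAAAAULL};
    bool ok = true;
    for (std::uint64_t p : patterns) {
        for (std::size_t i = 0; i < words; ++i) buf[i] = p;      // write pattern
        for (std::size_t i = 0; i < words; ++i)                  // read it back and compare
            if (buf[i] != p) { std::printf("mismatch at word %zu\n", i); ok = false; }
    }
    munlock(buf, bytes);
    std::free(buf);
    return ok;
}

int main() {
    const std::size_t chunk = 100u * 1024 * 1024;   // 100 MB per pass, as in the question
    for (int pass = 0; pass < 40; ++pass)           // 40 passes ~ 4 GB sampled in total
        if (!test_chunk(chunk)) return 1;
    return 0;
}
```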
Memtest86 is probably the way to go; you can boot it from a ramdisk. We have used it for years to test memory in the factory.
http://www.memtest.org/
I was told once there is a book that shows you how to make a database from scratch using sed, awk, and the Linux filesystem. I thought I had the name, but now I cannot find it. What is this book called?
Edit:
My understanding is that this book was meant for learning how databases work and how to build your own entirely from scratch using awk and the filesystem. From how it was explained, you could build your own version of /rdb; then, when you finished, you could just use /rdb itself, but you'd now know how it was made.
So, at the end of the book, you'd have almost completely remade /rdb yourself.
Is it "Unix Relational Database Management: Application Development in the Unix Environment (/RDB)" http://www.amazon.com/exec/obidos/ISBN=013938622X/cbbrownecompu-20/ ?
I'm new to OpenCL and need help choosing a language for writing an OpenCL program. There are many languages available for that (such as C/C++, Python, and Java). I want to develop an application on distributed OpenCL using VirtualCL.
I would suggest using C++ for your OpenCL program.
To put it plainly, the main reason to use C++ rather than the other languages is performance.
It is much faster at runtime than the others, which allows you to write very efficient code that still has a high level of abstraction.
The best way to phrase it is:
Less code, no run-time overhead, more safety.
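For a feel of what the C++ host side involves, here is a minimal vector-addition sketch using the plain OpenCL C API from C++. Error handling is omitted for brevity, and the kernel name vadd and the sizes are just placeholders:

```cpp
// Minimal OpenCL host-code sketch in C++ (vector addition).
#include <CL/cl.h>
#include <vector>
#include <cstdio>

static const char* kSrc = R"(
__kernel void vadd(__global const float* a, __global const float* b, __global float* c) {
    size_t i = get_global_id(0);
    c[i] = a[i] + b[i];
})";

int main() {
    const size_t n = 1024;
    std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n, 0.0f);

    cl_platform_id platform; cl_device_id device; cl_int err;
    clGetPlatformIDs(1, &platform, nullptr);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_DEFAULT, 1, &device, nullptr);

    cl_context ctx = clCreateContext(nullptr, 1, &device, nullptr, nullptr, &err);
    cl_command_queue q = clCreateCommandQueue(ctx, device, 0, &err);

    // Build the kernel from source and create the device-side buffers.
    cl_program prog = clCreateProgramWithSource(ctx, 1, &kSrc, nullptr, &err);
    clBuildProgram(prog, 1, &device, nullptr, nullptr, nullptr);
    cl_kernel k = clCreateKernel(prog, "vadd", &err);

    size_t bytes = n * sizeof(float);
    cl_mem da = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, bytes, a.data(), &err);
    cl_mem db = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, bytes, b.data(), &err);
    cl_mem dc = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, bytes, nullptr, &err);

    clSetKernelArg(k, 0, sizeof(cl_mem), &da);
    clSetKernelArg(k, 1, sizeof(cl_mem), &db);
    clSetKernelArg(k, 2, sizeof(cl_mem), &dc);

    // Run over n work-items and read the result back.
    clEnqueueNDRangeKernel(q, k, 1, nullptr, &n, nullptr, 0, nullptr, nullptr);
    clEnqueueReadBuffer(q, dc, CL_TRUE, 0, bytes, c.data(), 0, nullptr, nullptr);

    std::printf("c[0] = %f\n", c[0]);   // expect 3.0
    return 0;
}
```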
I'm trying to develop a 3D graphics application on my SAMA5D34-EK board (Cortex-A5 ARM without a GPU), and it seems that OpenGL cannot be used.
Can anyone suggest an alternative solution?
You could try Mesa 3D's software renderer; it's essentially OpenGL implemented entirely in software.
Your CPU seems to have some graphics acceleration built in, so if you could use that, perhaps you'd gain performance. Probably not, though; all the linked document says is:
[The] peripheral set includes an LCD controller with overlays for hardware-accelerated image composition [...]
So that might be too narrowly focused to be useful in a general-purpose rendering solution. I didn't chase down any more detailed documentation, so it's hard to be sure.
Also, using such acceleration might mean writing your own low-level engine/renderer, which would probably be a very large project.
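If you do go the Mesa route, one way to render entirely in software without any display server is Mesa's off-screen interface, OSMesa. The following is only a minimal sketch and assumes your Mesa build includes libOSMesa; the resolution and clear colour are arbitrary:

```cpp
// Minimal sketch of software-only OpenGL rendering via Mesa's off-screen
// interface (OSMesa). Link against -lOSMesa -lGL on a Mesa build that
// includes OSMesa support.
#include <GL/osmesa.h>
#include <GL/gl.h>
#include <cstdio>
#include <vector>

int main() {
    const int width = 320, height = 240;
    std::vector<unsigned char> framebuffer(width * height * 4);  // RGBA8 target in plain memory

    OSMesaContext ctx = OSMesaCreateContext(OSMESA_RGBA, nullptr);
    if (!ctx || !OSMesaMakeCurrent(ctx, framebuffer.data(), GL_UNSIGNED_BYTE,
                                   width, height)) {
        std::fprintf(stderr, "could not create software GL context\n");
        return 1;
    }

    // Ordinary OpenGL calls from here on -- executed entirely on the CPU.
    glClearColor(0.0f, 0.5f, 1.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);
    glFinish();

    // The rendered image now lives in `framebuffer`; from there it could be
    // handed to the board's LCD controller or written out to a file.
    std::printf("first pixel: %u %u %u %u\n",
                framebuffer[0], framebuffer[1], framebuffer[2], framebuffer[3]);

    OSMesaDestroyContext(ctx);
    return 0;
}
```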
I would like to start writing lots of tiny "utility" NodeJS-based apps -- things like stream filters, generators, and the like, that might be 30-40 LOC each. Each one would consume nearly zero CPU, RAM, or bandwidth (once the overhead of NodeJS and OS processes is factored out). The point is, I want a simple way to run thousands of them.
What do I need? Are there any PaaS's that can run thousands of NodeJS apps for a reasonable price ($10/mo)? Is there some kind of middleware that can give me thousands of sandboxed "partitions" on top of one Node process? Or is there some binary that's made for this that I could put on a VPS?
You can use the vm module for sandboxing JavaScript code. It is still a work in progress, so be sure to read the caveats.
Functions that you can use:
runInThisContext: compiles and runs the code within the current global context (it has access to global variables, but not to the local scope).
runInNewContext: runs the code with its own, separate set of globals as the context.
runInContext: takes a previously created context object and runs the code inside it.
I'd like to improve my understanding of NTFS semantics; ideally, I'd like some kind of specification document(s).
I could (in theory) figure out the basics by experimentation, but there's always the possibility that I'd be ignoring some important variable.
For example, I'm having difficulty finding definitive information on the following:
(1) When do file times (created/modified/accessed) get set/updated? For example, does copying and/or moving a file affect any or all of these times? What about if the file is being copied/moved between volumes? What about alternate streams?
(2) How do sharing modes and read/write access interact?
(3) What happens to security information (SACL, DACL, ownership etc.) when a file is copied and/or moved?
As I said, I could probably "answer" these questions by writing some code, but that would only tell me how the specific operations I tested behaved across any machines that I ran the code on. I'd like to find a resource that can tell me how this stuff is supposed to behave, identifying all the variables that could affect the behaviour.
TIA!
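As a concrete example of the "answer it by writing some code" route mentioned above, a rough Win32 sketch for probing question (1) might look like the following. The paths are placeholders, and of course it only reveals what one particular machine does, not what NTFS guarantees:

```cpp
// Rough sketch of probing NTFS timestamp behaviour empirically (Win32).
// File paths are placeholders; results describe one system, not the spec.
#include <windows.h>
#include <cstdio>

static void printTimes(const wchar_t* path) {
    WIN32_FILE_ATTRIBUTE_DATA info;
    if (!GetFileAttributesExW(path, GetFileExInfoStandard, &info)) return;
    auto show = [](const char* label, const FILETIME& ft) {
        SYSTEMTIME st;
        FileTimeToSystemTime(&ft, &st);
        std::printf("%s %04u-%02u-%02u %02u:%02u:%02u\n", label,
                    st.wYear, st.wMonth, st.wDay, st.wHour, st.wMinute, st.wSecond);
    };
    show("created: ", info.ftCreationTime);
    show("modified:", info.ftLastWriteTime);
    show("accessed:", info.ftLastAccessTime);
}

int main() {
    printTimes(L"C:\\temp\\original.txt");
    CopyFileW(L"C:\\temp\\original.txt", L"C:\\temp\\copy.txt", FALSE);
    printTimes(L"C:\\temp\\copy.txt");                          // which times were carried over by the copy?
    MoveFileW(L"C:\\temp\\copy.txt", L"D:\\temp\\moved.txt");   // cross-volume move
    printTimes(L"D:\\temp\\moved.txt");
    return 0;
}
```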
Apparently there are no public non-NDA specifications. Projects such as NTFS-3G would greatly benefit from one, but even they don't mention anything of the sort.
A predecessor of NTFS-3G, called linux-ntfs, has made some documentation on its own here. Maybe that's good enough for you, maybe not.