How to find an address every time - visual-c++

I have been working on a server that consists of two programs I made: one is the server itself, and the other is an error handler that restarts the main server if it fails. The second program's main way of handling data is by reading values directly from the server's memory (while debugging I was filling in the addresses by hand), because writing the values to a text file would take too long and would also use disk space I really need. I have about 100,000 values, but I only need around 100 of them. I need to find only those, and if I get the wrong one I might crash the server by trying to fix what's "wrong" when nothing is. (Sometimes there are more, but there will not be more than 100k of them by the time I need to know the addresses.)
I don't need people to tell me to do this some other way; I would just like to know how to find one value among all the others. I can't write them to a text file; I can only read them from memory because of the way I set it up, and I don't want to spend 2-3 weeks recoding it.
Edit:
Sorry if I was not clear.
I need the address of a value in memory (an int, a bool, etc.) so I can find it.
Also, I really don't want to share anything between the two programs, because if one crashes it might take the other down with it. If they share something and one crashes and does not restart, my server will be offline until someone tells me or I do an update, so a day or two.
If anyone else is confused, sorry; just ask and I'll edit.

You won't be able to find them in memory unless you already know their values.
And if you already know their values, why bother looking them up?
If it'd take you 2-3 weeks to re-code it, you should probably spend those 2-3 weeks rewriting your "server" application so that it's more maintainable.
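That said, if you do already know a value, scanning the target process's memory for it looks roughly like the sketch below. This uses the Win32 calls OpenProcess, VirtualQueryEx, and ReadProcessMemory, since the question is tagged visual-c++; the PID and the value searched for are placeholders, and a real scanner would have to cope with many false matches and re-scan as values change.

    // Sketch: scan another process's committed, writable memory for a
    // known 32-bit value. Minimal error handling; PID is hypothetical.
    #include <windows.h>
    #include <cstdio>
    #include <cstring>
    #include <vector>

    int main() {
        DWORD pid = 1234;   // hypothetical: PID of the server process
        int needle = 42;    // hypothetical: a value you already know
        HANDLE h = OpenProcess(PROCESS_VM_READ | PROCESS_QUERY_INFORMATION,
                               FALSE, pid);
        if (!h) return 1;

        MEMORY_BASIC_INFORMATION mbi;
        unsigned char* addr = nullptr;
        while (VirtualQueryEx(h, addr, &mbi, sizeof(mbi)) == sizeof(mbi)) {
            if (mbi.State == MEM_COMMIT && (mbi.Protect & PAGE_READWRITE)) {
                std::vector<unsigned char> buf(mbi.RegionSize);
                SIZE_T got = 0;
                if (ReadProcessMemory(h, mbi.BaseAddress, buf.data(),
                                      buf.size(), &got)) {
                    for (SIZE_T i = 0; i + sizeof(needle) <= got; ++i)
                        if (std::memcmp(buf.data() + i, &needle,
                                        sizeof(needle)) == 0)
                            std::printf("match at %p\n",
                                (void*)(static_cast<unsigned char*>(
                                    mbi.BaseAddress) + i));
                }
            }
            addr = static_cast<unsigned char*>(mbi.BaseAddress) + mbi.RegionSize;
        }
        CloseHandle(h);
        return 0;
    }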

Sorry, it doesn't work that way. Many "values" (variables) are not stored in memory. Instead, they are stored in CPU registers. This is done because registers are a lot faster. However, they are also scarce, so in a big program like yours they will be reused. At different times, different variables will be mapped to a particular register. As a result, even if you know that localVariable732 is sometimes mapped to the ECX register, you won't know whether the ECX register currently contains the localVariable732 value.
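To illustrate the point about registers (a hypothetical fragment, not the poster's code):

    // At -O2 the accumulator below will normally be register-allocated,
    // so it has no stable memory address that a scanner could find.
    int sum(const int* data, int n) {
        int total = 0;                 // likely lives only in a register
        for (int i = 0; i < n; ++i)
            total += data[i];
        return total;
    }

    // A volatile variable must be kept in memory and re-read on every
    // access, so it does have a findable address (at some cost in speed).
    volatile int g_serverStatus = 0;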

Related

Why does all the information I find on the internet about using the SDRAM of my DE1-SOC point me to using NIOS-II?

I'm doing a simple project: taking 100 numbers from an external memory (one by one), doing some simple arithmetic on each number (like adding 1), and returning the result to another memory.
I successfully did that project by "representing" the memory in Verilog code; however, I now want to synthesize my design using the board's SDRAM. How I load data into the SDRAM, and what I do with the resulting data output back to the SDRAM, is irrelevant for my homework.
But I just can't understand what to do; all the information on the internet leads me to use NIOS-II. Considering that I have to load data into the SDRAM to make it useful, among other reasons, is NIOS-II perhaps the most recommended way to do this? Can it be done without it, and would that be more practical?
This might not be the place to have your homework done. Additionally, your question is very unclear. Let's try anyway:
I successfully did that project by "representing" the memory in Verilog code
I assume that you mean that you downloaded a model corresponding to the memory you have on your board.
taking 100 numbers from an external memory
I wonder how you do that. Did you load some initialization file, or did you write the numbers first? In the first case: the initialization will not be synthesized and you might read random data; you should refer to the datasheet of your memory for this. If you expect specific values, you will need to write them to memory during some initialization procedure.
Of course you will need the correct constraints for your device. So I'd suggest that you take the NIOS-II example, get it up and running, and then get rid of the NIOS-II in a next step. At least you will be sure that the interfacing between the controller and the SDRAM is correct. Then read the datasheet of the controller: it probably has a read strobe, a write strobe, data-in and data-out ports, some configuration, perhaps a burst length. If you need help with that, you'll need to come up with a more specific question.

Netlogo 5.1 (and 5.05) Behavior Space Memory Leak

I have posted on this before, but thought I had tracked it down to the NW extension; however, the memory leakage still occurs in the latest version. I found this thread, which discusses a similar issue but attributes it to BehaviorSpace:
http://netlogo-users.18673.x6.nabble.com/Behaviorspace-Memory-Leak-td5003468.html
I have found the same symptoms. My model starts out at around 650 MB, but over each run the private working set memory rises, to the point where it hits the 1024 MB limit. I have sufficient memory to raise this limit, but in reality that would only delay the onset. I am using the table output, as based on previous discussions this helps, and it does, but it only slows the rate of increase. Eventually the memory usage rises to a point where the PC starts to struggle. I am clearing all data between runs, so there should be no hangover. I noticed in the highlighted thread that they were going to run headless. I will try this, but I wondered if anyone else had noticed the issue? My other option is to break the BehaviorSpace simulation into a few batches so the issue never arises, but it would be nice to let the model run and walk away, as it takes around 2 hours to go through.
Some possible next steps:
1) Isolate the exact conditions under which the problem does or does not occur. Can you make it happen without involving the nw extension, or not? Does it still happen if you remove some of the code from your model? What if you keep removing code — when does the problem go away? What is the smallest code that still causes the problem? Almost any bug can be demonstrated with only a small amount of code — and finding that smallest demonstration is exactly what is needed in order to track down the cause and fix it.
2) Use standard memory profiling tools for the JVM (for example, jmap -histo or VisualVM) to see what kinds of objects are using the memory. This might provide some clues to possible causes.
In general, we are not receiving other bug reports from users along these lines. It's routine, and has been for many years now, for people to use BehaviorSpace (both headless and not) and do experiments that last for hours or even for days. So whatever it is you're experiencing almost certainly has a more specific cause -- most likely, in the nw extension -- that could be isolated.

How to tell Linux that a mmap()'d page does not need to be written to swap if the backing physical page is needed?

Hopefully the title is clear. I have a chunk of memory obtained via mmap(). After some time, I have concluded that I no longer need the data within this range. I still wish to keep the range itself, however; that is, I do not want to call munmap(). I'm trying to be a good citizen and not make the system swap more than it needs to.
Is there a way to tell the Linux kernel that if the given page is backed by a physical page and if the kernel decides it needs that physical page, do not bother writing that page to swap?
I imagine under the hood this magical function call would destroy any mapping between the given virtual page and physical page, if present, without writing to swap first.
Your question (as stated) makes no sense.
Let's assume that there was a way for you to tell the kernel to do what you want.
Let's further assume that it did need the extra RAM, so it took away your page, and didn't swap it out.
Now your program tries to read that page (since you didn't want to munmap the data, presumably you might try to access it). What is the kernel to do? The choices I see:
it can give you a new page filled with 0s.
it can give you SIGSEGV
If you wanted choice 2, you could achieve the same result with munmap.
If you wanted choice 1, you could mmap over the existing mapping with MAP_FIXED | MAP_ANON (or munmap followed by a new mmap).
In either case, you can't depend on the old data being there when you need it.
The only way your question would make sense is if there were some additional mechanism for the kernel to let you know that it is taking away your page (e.g. by sending you a special signal). But the situation you describe is likely rare enough that it does not warrant the additional complexity.
EDIT:
You might be looking for madvise(..., MADV_DONTNEED); a sketch follows below.
You could munmap the region, then mmap it again with MAP_NORESERVE.
If you know at initial mapping time that swapping is not needed, use MAP_NORESERVE.
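A minimal sketch of the MADV_DONTNEED behavior on Linux, assuming a private anonymous mapping (the size is arbitrary). The range stays mapped, but for anonymous memory the next read sees zero-filled pages instead of the old data being swapped back in:

    #include <sys/mman.h>
    #include <cassert>
    #include <cstring>

    int main() {
        const size_t len = 1 << 20;  // 1 MiB, arbitrary
        char* p = static_cast<char*>(mmap(nullptr, len,
                                          PROT_READ | PROT_WRITE,
                                          MAP_PRIVATE | MAP_ANONYMOUS,
                                          -1, 0));
        if (p == MAP_FAILED) return 1;

        std::memset(p, 0xAB, len);       // dirty the pages
        madvise(p, len, MADV_DONTNEED);  // contents are now disposable

        // Still mapped; anonymous pages read back as zeros afterwards.
        assert(p[0] == 0);
        munmap(p, len);
        return 0;
    }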

How to open and read 1000s of files very quickly

My problem is that my application takes too long to load thousands of files. Yes, I know it's going to take a long time, but I would like to make it faster by any amount. What I mean by "load" is open the file to get its descriptor and then read the first 100 bytes or so of it.
So my main strategy has been to create a second thread that opens and closes (without reading any contents) all the files. This seems to help because the thread runs ahead of the main thread, and I'm guessing the OS caches the files ahead of time, so that when my main thread opens them it's a quick open. This has actually helped, because the second thread can warm the cache while my main thread is parsing the data read in from the files already opened.
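(For illustration, a minimal sketch of that look-ahead idea in portable C++; here the helper thread also reads the first bytes so the file data itself lands in the OS cache. The file list is a placeholder.)

    #include <fstream>
    #include <functional>
    #include <string>
    #include <thread>
    #include <vector>

    // Touch each file so the OS pulls its metadata and first block into cache.
    static void warm_cache(const std::vector<std::string>& paths) {
        char buf[100];
        for (const auto& p : paths) {
            std::ifstream in(p, std::ios::binary);
            in.read(buf, sizeof(buf));
        }
    }

    int main() {
        std::vector<std::string> paths;  // placeholder: thousands of names
        std::thread prefetch(warm_cache, std::cref(paths));
        for (const auto& p : paths) {
            (void)p;  // main thread: open p, read ~100 bytes, parse as before
        }
        prefetch.join();
        return 0;
    }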
So my real question is...what else can I do to make this faster? What approaches are there? Has anyone had success doing this?
I've heard of OS prefetching calls, but those were for virtual memory pages. Is there a way to tell the OS: hey, I'm going to be needing all these files pretty soon, I suggest you start gathering them for me ahead of time? My lookahead thread is pretty crude.
Are there low-level disk techniques I could use? Is there possibly a pattern of file access that would help? Right now, the files being loaded all come from the same folder. I suppose there is no way to determine where exactly they lie on disk and which ordering of file opens would be fastest for the disk. I'm also guessing that the disk has some hardware to make this as efficient as possible too.
My application is mainly for Windows, but Unix suggestions would help as well.
I am programming in C++ if that makes a difference.
Thanks,
-julian
My first thought is that this is going to be hard to work around from a programmatic level.
You'll find Linux and OSX can access thousands of files like this in a fraction of the time it takes Windows. I don't know how much control you have over the machine. If you can keep the thousands of files on a FAT partition, you should see better results than with NTFS.
How often are you scanning these files, and how often are they changing? If the ratio is heavily on the reading side, it would make sense to copy the start of each file into a cache. The cache could store the filename, the modification time, and the first 100 bytes of each of the thousands of files.
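A sketch of such a cache in C++17 (the directory scan and the 100-byte prefix mirror the question; the struct and function names are made up):

    #include <filesystem>
    #include <fstream>
    #include <string>
    #include <unordered_map>

    namespace fs = std::filesystem;

    struct CachedHeader {
        fs::file_time_type mtime;  // to detect files changed since caching
        std::string prefix;        // first 100 bytes (fewer for short files)
    };

    std::unordered_map<std::string, CachedHeader>
    build_cache(const fs::path& dir) {
        std::unordered_map<std::string, CachedHeader> cache;
        for (const auto& entry : fs::directory_iterator(dir)) {
            if (!entry.is_regular_file()) continue;
            std::ifstream in(entry.path(), std::ios::binary);
            std::string buf(100, '\0');
            in.read(&buf[0], 100);
            buf.resize(static_cast<size_t>(in.gcount()));
            cache[entry.path().string()] = {entry.last_write_time(),
                                            std::move(buf)};
        }
        return cache;
    }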

multithreading and reading from one file (perl)

Hey, sharp minds!
I need your expert guidance in making some choices.
Situation is like this:
1. I have approx. 500 flat files containing from 100 to 50000 records that have to be processed.
2. Each record in the files mentioned above has to be replaced using a value from a separate huge file (2-15 GB) containing 100-200 million entries.
So I thought to do the processing using multiple cores - one file per thread/fork.
Is that a good idea, given that each thread needs to read from the same huge file? Loading it into memory is a bit of a problem due to its size. Using Tie::File is an option, but does that work with threads/forks?
I need your advice on how to proceed.
Thanks
Yes, of course, using multiple cores for a multi-threaded application is a good idea; that's what those cores are for. Though it sounds like your problem involves heavy I/O, so it might be that you will not use that much CPU anyway.
Also, since you are only going to read that big file, tie should work perfectly; I haven't heard of problems with that. But if you are going to search that big file for each record in your smaller files, then I guess it will take a long time regardless of the number of threads you use. If the data from the big file can be indexed by some key, then I would advise putting it in some NoSQL database and accessing it from your program. That would probably speed up your task even more than using multiple threads/cores.
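For the indexing idea, one minimal sketch staying in Perl uses the DB_File module (bundled with many Perl builds, backed by Berkeley DB) to build a disk-backed hash once, so each worker can do lookups instead of scanning; the key<TAB>value layout of the big file and the file names are assumptions:

    use strict;
    use warnings;
    use Fcntl;
    use DB_File;

    # Build the index once; it lives on disk, so it need not fit in RAM.
    my %index;
    tie %index, 'DB_File', 'big.idx', O_RDWR|O_CREAT, 0644, $DB_HASH
        or die "tie failed: $!";

    open my $big, '<', 'bigfile.txt' or die "open failed: $!";
    while (my $line = <$big>) {
        chomp $line;
        my ($key, $value) = split /\t/, $line, 2;   # assumed record layout
        $index{$key} = $value;
    }
    close $big;

    # Later, in each worker thread/fork:
    # my $replacement = $index{$record_key};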
