I wrote an application in Haskell to analyze a log file.
When I run it on the same log file, it sometimes takes 30 s and sometimes 20 s; the execution time differs by up to 10 seconds.
Why is there such a large difference in running time?
Try separating the processing time from the file-access time.
Read the entire file into memory, track that time, then process the data in your storage structures and track that time separately.
My gut instinct is that the file access is the random contributor. But gut instinct is not a good substitute for a profiler.
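A minimal sketch of that split, shown in Python purely for illustration (the same idea works in Haskell by taking a timestamp around each phase); the log path and the "processing" step are placeholders:

    import time

    LOG_PATH = "app.log"  # placeholder path

    # Time the file access on its own: read everything into memory first.
    t0 = time.perf_counter()
    with open(LOG_PATH, "rb") as f:
        data = f.read()
    t1 = time.perf_counter()

    # Now time the pure processing step, which no longer touches the disk.
    lines = data.splitlines()  # placeholder for the real analysis
    t2 = time.perf_counter()

    print(f"file read:  {t1 - t0:.3f} s")
    print(f"processing: {t2 - t1:.3f} s")

If the file-read number is the one that varies between runs, the OS page cache and other disk activity are the likely culprits; if the processing number varies, a profiler is the next step.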
The difference is more than likely caused by other processes that are running at the same time on the system.
I have a Python script that I would like to run once a minute using a cron job. The script imports some Python modules and config files each time it is run. The issue is that there is a large overhead (1-2 minutes) from all the imports. (The modules and files are relatively small; the total size is only 15 MB, so they can easily fit in memory.)
Once everything is imported, the rest of the script runs relatively quickly (about 0.003 seconds; it's not computationally demanding).
Is it possible to cache all the imports, once, the very first time the script is run, so that all subsequent times the script is run there is no need to import the modules and files again?
No, you can't. You would have to use persistent storage, such as shelve, or a database such as SQLite (which can also run in-memory), where you'd store any expensive computations that should persist between sessions; subsequently you'd just read those results back from memory/disk, depending on your chosen storage.
Moreover, note that modules are in fact cached upon import to improve load time, just not in memory: they are cached on disk as .pyc files under __pycache__. The import machinery itself is insignificant in general, so your imports take that long not because of the import itself but because of the computations inside those modules, so you might want to optimise those.
The reason you can't do what you want is that in order to keep data in memory, the process must keep running. Memory belongs to the process running the script, and once that script finishes, the memory is freed. See here for additional details regarding your issue.
You can't just run a script and leave its results sitting in memory until you might run it another time: firstly, nothing knows when that other time would be (it might be 1 minute later, it might be 1 year later), and secondly, if you could do that, imagine how quickly you'd run out of memory when different scripts from different applications across the OS (it's not just your program out there) filled the memory with the results of their computations.
So you can either run your code in an indefinite loop with sleep (and keep the process active) or you can use a crontab and store your previous results somewhere.
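A minimal sketch of the "indefinite loop with sleep" option, so the expensive imports are paid only once per process; the module and config names are placeholders for whatever your script actually loads:

    import time

    # Heavy imports and config loading happen once, when the process starts.
    # import my_big_module          # placeholder for the slow imports
    # config = load_config(...)     # placeholder for the config files

    def do_work():
        # The ~0.003 s of actual work goes here.
        pass

    while True:
        start = time.monotonic()
        do_work()
        # Sleep out the remainder of the one-minute period instead of relying on cron.
        time.sleep(max(0.0, 60.0 - (time.monotonic() - start)))

You'd start this once (e.g. from a systemd unit or an @reboot cron entry) instead of scheduling it every minute.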
I have written a Sybase stored procedure to move data for a given ID from certain tables (~50) in the primary DB to an archive DB. Since it's taking a very long time to archive, I am thinking of executing the same stored procedure in parallel, with a unique input ID for each call.
I manually ran the stored proc twice at the same time with different inputs and it seems to work. Now I want to use Perl threads (a maximum of 4 threads), with each thread executing the same procedure with a different input.
Please advise whether this is the recommended way, or whether there is a more efficient way to achieve this. If the experts' choice is threads, any pointers or examples would be helpful.
What you do in Perl does not really matter here: what matters is what happens on the side of the Sybase server. Assuming each client task creates its own connection to the database, it's all fine, and how the client achieves this makes no difference to the Sybase server. But do not use a model where the different client tasks try to share the same client-server connection, as that will never run in parallel.
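The structure that matters is one connection per parallel task. A sketch of that shape in Python (the asker's tool is Perl, but the connection-per-worker layout is the same); db_connect and the proc name are placeholders for whatever Sybase client library and procedure you actually use:

    from concurrent.futures import ThreadPoolExecutor

    def db_connect():
        # Placeholder: open a NEW connection to the Sybase server here,
        # using your preferred client library. Never share one connection
        # object between workers.
        raise NotImplementedError

    def archive_one(input_id):
        conn = db_connect()                      # one connection per task
        try:
            cur = conn.cursor()
            cur.execute("exec archive_proc ?", (input_id,))  # placeholder proc
            conn.commit()
        finally:
            conn.close()

    ids = [101, 102, 103, 104]                   # example input IDs
    with ThreadPoolExecutor(max_workers=4) as pool:   # at most 4 parallel calls
        list(pool.map(archive_one, ids))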
No 'answer' per se, but some questions/comments:
Can you quantify "taking a very long time to archive"? Assuming your archive process consists of a mix of insert/select and delete operations, do query plans and MDA data show fast, efficient operations? If you're seeing table scans, sort merges, deferred inserts/deletes, etc ... then it may be worth the effort to address said performance issues.
Can you expand on the comment that running two stored proc invocations at the same time seems to work? Again, any sign of performance issues for the individual proc calls? Any sign of contention (eg, blocking) between the two proc calls? If the archival proc isn't designed properly for parallel/concurrent operations (eg, eliminate blocking), then you may not be gaining much by running multiple procs in parallel.
How many engines does your dataserver have, and are you planning on running your archive process during a period of moderate-to-heavy user activity? If the current archive process runs at/near 100% cpu utilization on a single dataserver engine, then spawning 4 copies of the same process could see your archive process tying up 4 dataserver engines with heavy cpu utilization ... and if your dataserver doesn't have many engines ... combined with moderate-to-heavy user activity at the same time ... you could end up invoking the wrath of your DBA(s) and users. Net result is that you may need to make sure your archive process doesn't hog the dataserver.
One other item to consider, and this may require input from the DBAs ... if you're replicating out of either database (source or archive), increasing the volume of transactions per a given time period could have a negative effect on replication throughput (ie, an increase in replication latency); if replication latency needs to be kept at a minimum, then you may want to rethink your entire archive process from the point of view of spreading out transactional activity enough so as to not have an effect on replication latency (eg, single-threaded archive process that does a few insert/select/delete operations, sleeps a bit, then does another batch, then sleeps, ...).
It's been my experience that archive processes are not considered high-priority operations (assuming they're run on a regular basis, and before the source db fills up); this in turn means the archive process is usually designed so that it's efficient while at the same time putting a (relatively) light load on the dataserver (think: running as a trickle in the background) ... ymmv ...
I need to test some Node frameworks, or at least their routing part: from the moment the request arrives at the Node process until a route has been decided and a function/class with the business logic is about to be called, i.e. just before calling it. I have looked long and hard for a suitable approach, but concluded that it must be done directly in the code and not with an external benchmark tool, because I fear measuring the wrong attributes. I tried artillery and ab, but they measure a lot more than I want to measure: RTT, bad OS scheduling, random tasks executing in the OS, and so on. My initial benchmarks for my custom routing code using process.hrtime() show approx. 0.220 ms (220 microseconds) execution time, but the external measurement shows 0.700 ms (700 microseconds), which is not an acceptable difference since it's roughly 3.2x the in-process time. Sometimes execution time jumps to 1.x seconds due to GC or system tasks. Now I wonder what a reproducible approach would look like. Maybe like this:
Use Docker with Scientific Linux to get a somewhat controlled environment.
A minimal Docker container install; a Node-only container, no extras.
Store time results in global scope until the test is done, then save them to disk.
Terminate all applications with high/moderate disk I/O and/or CPU usage on the host OS.
Measure time as explained above and cross my fingers.
Any other recommendations to take into consideration?
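One pattern worth considering for the "store in memory, save at the end" steps is to warm up first, take many samples, and report percentiles rather than means, since GC pauses and system tasks mostly distort the tail. A sketch of that pattern, written in Python purely to show the shape (in Node, process.hrtime.bigint() plays the role of time.perf_counter_ns(), and route() stands in for the routing code under test):

    import json
    import time

    def route(path):
        # Placeholder for the routing code under test.
        return path.startswith("/api")

    samples_ns = []

    # Warm-up pass so JIT/caches settle before measuring.
    for _ in range(10_000):
        route("/api/users/42")

    # Measure many iterations; keep everything in memory, write to disk once at the end.
    for _ in range(100_000):
        t0 = time.perf_counter_ns()
        route("/api/users/42")
        samples_ns.append(time.perf_counter_ns() - t0)

    samples_ns.sort()
    report = {
        "p50_us": samples_ns[len(samples_ns) // 2] / 1_000,
        "p99_us": samples_ns[int(len(samples_ns) * 0.99)] / 1_000,
    }
    with open("bench.json", "w") as f:
        json.dump(report, f)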
Why does slave4 take so much time while slave5 and slave8 take so little? The hardware of slave4 is older than that of the other two nodes, but the difference in running time is huge. Why?
Without the code of your job I cannot be 100% sure, but I would assume that you've done some grouping instead of doing a reduction first.
It seems like every node except slave4 sends all its data to slave4, which then does all the computations.
It's a very common error for beginners.
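Assuming this is a Spark job (the question doesn't say which framework; in plain Hadoop MapReduce the equivalent fix is adding a combiner), the grouping-vs-reduction point looks like this in PySpark, with made-up data skewed onto one key:

    from pyspark import SparkContext

    sc = SparkContext(appName="skew-demo")  # assumes local mode or an existing cluster
    pairs = sc.parallelize([("hot_key", 1)] * 100_000 + [("rare", 1)])

    # Skewed: groupByKey ships every value for a key to a single node,
    # which then does all the summing (what slave4 appears to be doing).
    grouped = pairs.groupByKey().mapValues(lambda vs: sum(vs))

    # Better: reduceByKey combines values locally on each node first,
    # so far less data and work land on any one slave.
    reduced = pairs.reduceByKey(lambda a, b: a + b)

    print(reduced.collect())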
I need to implement a daemon that extracts data from a database, loads the data into memory, and, based on this data, performs actions like sending emails or writing/updating files. These actions need to be performed every 30 minutes.
I really can't decide: compile a C++ program that will do the task, or use scripts and miscellaneous Linux tools (sed/awk)?
What would be the fastest way to do this, to save CPU and memory?
The dilemma is also about maintaining this process: if it's a script, it needs no compilation and I can just drop it onto any Linux/Unix machine, but if it's native, that's harder.
What do you think?
Use cron(1) to start your program every 30 minutes.
So-called scripting languages will definitely enable you to write your program more quickly than C++. But doing this with shell and sed and/or awk, while definitely possible, is very difficult when you have to cope with all the corner cases, particularly regarding string escaping (think quotes, “&”s, “;”s…).
I suggest you go with a more full featured “scripting” language such as Perl or Python.
Why are you trying to save CPU & Memory? Are you absolutely sure this is a real requirement (or just "premature optimization")?
Unless performance is critical, there's absolutely no reason to code such a thing in C++. It seems to be a sort of maintenance process (right?). I say write it in the highest level script language you know. Python or PHP seem like good candidates. Even if you don't know these languages, it would still take you less time to familiarize yourself with them than it would take you to do it in C++.
I'd go with a Python/Perl/Ruby implementation with a cron entry to schedule the script to run every 30 minutes.
If performance becomes an issue you can add a column to your DB that tracks the last time you ran calculations for the account, and then split the processing of your records into groups of 2, 3, or 4, running them every 15, 10, or 5 minutes respectively.
If after splitting your calculations into groups, you still have performance demands then consider C++/C/Java.
I'd still run this using cron though. No need to be a daemon unless you are providing on-demand services.
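A minimal sketch of the cron-driven script shape suggested above, in Python; the crontab line, database, table/column names, and mail setup are all placeholders for whatever you actually use:

    #!/usr/bin/env python3
    # Scheduled by cron rather than running as a daemon, e.g.:
    #   */30 * * * * /usr/bin/env python3 /opt/jobs/report.py
    import sqlite3          # placeholder: swap in your real DB driver
    import smtplib
    from email.message import EmailMessage

    def load_rows():
        con = sqlite3.connect("/var/lib/jobs/data.db")   # placeholder path
        try:
            return con.execute("SELECT email, status FROM accounts").fetchall()
        finally:
            con.close()

    def main():
        for email, status in load_rows():
            if status == "notify":
                msg = EmailMessage()
                msg["To"] = email
                msg["From"] = "jobs@example.com"
                msg["Subject"] = "Status update"
                msg.set_content(f"Current status: {status}")
                with smtplib.SMTP("localhost") as smtp:   # assumes a local MTA
                    smtp.send_message(msg)
            else:
                # Placeholder for the write/update-files branch.
                with open(f"/var/lib/jobs/{email}.txt", "w") as f:
                    f.write(status)

    if __name__ == "__main__":
        main()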