I'm trying to use fio to replay some block traces.
The job file I wrote looks like:
[global]
name=replay
filename=/dev/md0
direct=1
ioengine=psync
[replay]
read_iolog=iolog.fio
replay_no_stall=0
write_lat_log=replay_metrics
numjobs=1
The key here is that I want to use "psync" as the ioengine and replay the iolog.
However, with psync, fio seems to ignore the "replay_no_stall" option and ignores the timestamps in the iolog anyway.
Also, setting numjobs to 4 seems to make fio run 4 copies of the same workload, instead of using 4 threads to split the workload.
So, how can I make fio with psync respect the timestamps, and use multiple threads to replay the trace?
Without seeing a small problem snippet of the iolog itself I can't say why the replay is always going as fast as possible. Be aware that waits are in milliseconds and successive waits in the iolog MUST increase if the later ones are to have an effect (as they are relative to the start of the job itself and not to each other or the previous I/O). See the "Trace file format v2" section of the HOWTO for more details. This problem sounds like a good question for the fio mailing list (but as it's a question please don't put it in the bug tracker).
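For illustration only, a v2 iolog with strictly increasing waits might look something like this (the device, offsets, and the exact wait-line syntax here are my assumptions; check the "Trace file format v2" section of the HOWTO for the authoritative format):

fio version 2 iolog
/dev/md0 add
/dev/md0 open
/dev/md0 wait 100
/dev/md0 read 0 4096
/dev/md0 wait 600
/dev/md0 read 4096 4096
/dev/md0 wait 1500
/dev/md0 write 8192 4096
/dev/md0 close

Note that each wait (100, 600, 1500) is larger than the previous one, which per the above is required for the later waits to have any effect.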
numjobs is documented as only creating clones in the HOWTO so your experience matches the documented behaviour.
Sadly, fio replay currently (end of 2016) doesn't work in a way that lets a single replay file be arbitrarily split among multiple jobs, and you need multiple jobs to make fio use multiple threads/processes. If you don't mind the fact that you will lose I/O ordering between jobs, you could split the iolog into 4 pieces and create a job that uses each of the new iolog files.
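As a rough sketch of such a split (in Python; this assumes a v2-style iolog where the version header and add/open/close lines must be replicated into every piece, and that waits are relative to the start of the job, so each piece keeps its waits increasing):

# split_iolog.py - round-robin split of a v2-style iolog into N pieces
N = 4

with open("iolog.fio") as src:
    lines = src.readlines()

outs = [open("iolog.%d.fio" % i, "w") for i in range(N)]
io_idx = 0
for line in lines:
    fields = line.split()
    if not fields:
        continue
    # replicate the version header and file-management actions into every piece
    if fields[0] == "fio" or (len(fields) >= 2 and fields[1] in ("add", "open", "close")):
        for out in outs:
            out.write(line)
    else:
        # distribute the actual I/O (and wait) lines round-robin
        outs[io_idx % N].write(line)
        io_idx += 1

for out in outs:
    out.close()

Each piece then gets its own job with its own read_iolog. Depending on how your trace is structured, you may want to keep each wait line together with the I/O line that follows it rather than splitting them independently.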
I have multiple processes executing at the same time, each with an endless for or while loop that monitors some data on the network. At the moment I am using threads to execute them and stop them when a certain condition is met.
In this scenario, which one is better?
1. Multiprocessing
2. Multithreading
3. asyncio
One thing I want to mention: the processes that execute simultaneously are not dependent on each other.
Please share your thoughts.
Thank you
Before going to tweak your server code, do check the possibility of a device-side slow update.
Besides that, in my opinion you are in a many-short-reads, few-long-writes pattern.
Dividing your reading and updating payloads should be the first thing to consider:
handling reads and updates sequentially in one thread may suffer a heavy impact when facing a burst of updates.
You can start by setting up two thread pools, one for reads and one for updates, and tweak the pool sizes based on benchmark stats (a sketch follows below).
If CPU usage stays high (> 70%), go asyncio.
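A minimal sketch of the two-pool idea with Python's concurrent.futures (the pool sizes and the request object's is_read flag are assumptions to adapt):

from concurrent.futures import ThreadPoolExecutor

# separate pools so a burst of (long) updates cannot starve the (short) reads
read_pool = ThreadPoolExecutor(max_workers=8, thread_name_prefix="read")
update_pool = ThreadPoolExecutor(max_workers=2, thread_name_prefix="update")

def handle_read(request):
    ...  # short read: fetch and return the monitored data

def handle_update(request):
    ...  # long write: apply the update

def dispatch(request):
    # route each incoming request to the pool matching its type
    if request.is_read:
        return read_pool.submit(handle_read, request)
    return update_pool.submit(handle_update, request)

Benchmark and adjust max_workers for each pool independently.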
BTW: updating your question is better than leaving your information in comments.
I have written a Sybase stored procedure to move data from certain tables [~50] on the primary db for a given id to an archive db. Since it's taking a very long time to archive, I am thinking of executing the same stored procedure in parallel with a unique input id for each call.
I manually ran the stored proc twice at the same time with different inputs and it seems to work. Now I want to use Perl threads [maximum 4 threads], with each thread executing the same procedure with a different input.
Please advise if this is a recommended way, or if there is a more efficient way to achieve this. If the experts' choice is threads, any pointers or examples would be helpful.
What you do in Perl does not really matter here: what matters is what happens on the side of the Sybase server. Assuming each client task creates its own connection to the database, it's all fine, and how the client achieves this makes no difference to the Sybase server. But do not use a model where the different client tasks try to share the same client-server connection, as that will never run in parallel.
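To illustrate the one-connection-per-worker shape (sketched in Python for brevity; the same structure applies with Perl threads and DBD::Sybase, and the helper, proc name, and ids below are all hypothetical):

import threading

IDS = [101, 102, 103, 104]  # hypothetical input ids, one per worker

def archive_worker(input_id):
    # each thread opens its OWN connection; never share one connection
    # across threads, since a single connection cannot run calls in parallel
    conn = connect_to_sybase()  # hypothetical helper for your driver
    cur = conn.cursor()
    cur.execute("exec archive_proc @id = %d" % input_id)  # hypothetical proc
    conn.commit()
    conn.close()

threads = [threading.Thread(target=archive_worker, args=(i,)) for i in IDS]
for t in threads:
    t.start()
for t in threads:
    t.join()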
No 'answer' per se, but some questions/comments:
Can you quantify "taking a very long time to archive"? Assuming your archive process consists of a mix of insert/select and delete operations, do query plans and MDA data show fast, efficient operations? If you're seeing table scans, sort merges, deferred inserts/deletes, etc ... then it may be worth the effort to address said performance issues.
Can you expand on the comment that running two stored proc invocations at the same time seems to work? Again, any sign of performance issues for the individual proc calls? Any sign of contention (eg, blocking) between the two proc calls? If the archival proc isn't designed properly for parallel/concurrent operations (eg, eliminate blocking), then you may not be gaining much by running multiple procs in parallel.
How many engines does your dataserver have, and are you planning on running your archive process during a period of moderate-to-heavy user activity? If the current archive process runs at/near 100% cpu utilization on a single dataserver engine, then spawning 4 copies of the same process could see your archive process tying up 4 dataserver engines with heavy cpu utilization ... and if your dataserver doesn't have many engines ... combined with moderate-to-heavy user activity at the same time ... you could end up invoking the wrath of your DBA(s) and users. Net result is that you may need to make sure your archive process doesn't hog the dataserver.
One other item to consider, and this may require input from the DBAs ... if you're replicating out of either database (source or archive), increasing the volume of transactions per a given time period could have a negative effect on replication throughput (ie, an increase in replication latency); if replication latency needs to be kept at a minimum, then you may want to rethink your entire archive process from the point of view of spreading out transactional activity enough so as to not have an effect on replication latency (eg, single-threaded archive process that does a few insert/select/delete operations, sleeps a bit, then does another batch, then sleeps, ...).
It's been my experience that archive processes are not considered high-priority operations (assuming they're run on a regular basis, and before the source db fills up); this in turn means the archive process is usually designed so that it's efficient while at the same time putting a (relatively) light load on the dataserver (think: running as a trickle in the background) ... ymmv ...
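As a rough sketch of that trickle pattern (in Python; the batch size, pause, and the archive_one_batch helper are all assumptions to tune):

import time

BATCH = 500   # rows per insert/select + delete batch (assumed)
PAUSE = 5.0   # seconds to sleep between batches (assumed)

while True:
    moved = archive_one_batch(BATCH)  # hypothetical: one small insert/select + delete
    if moved == 0:
        break          # nothing left to archive this cycle
    time.sleep(PAUSE)  # spread transactional load out to protect replication latency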
I read this question but it didn't really help.
First and most important thing: time performance is the focus of the application that I'm developing.
We have a client/server model (even distributed or cloud if we wish) and a data structure D hosted on the server. Each client request consists of:
Read something from D
Possibly write something to D
Possibly delete something from D
We can say that in this application the relation between the numbers of received operations can be described as delete << write << read. In addition:
Read ops absolutely cannot wait: they must be processed immediately
Write and delete ops can wait some time, but the sooner, the better.
From the description above, any locking mechanism is unacceptable: it would imply that read operations could wait, which cannot happen (sorry if I stress it so much, but it's really a crucial point).
Consistency is not necessary: if a write/delete operation has been performed and then a read operation doesn't see the write/delete effect it's not a big deal. It would be better, but it's not required.
The solution should be data-structure-independent, so it shouldn't matter if we write on a vector, list, map or Donald Trump's face.
The data structure could occupy a big amount of memory.
My solution so far:
We use two servers: the first server (called f) has Df, and the second server (called s) keeps Ds updated.
f answers client requests using Df and sends the write/delete operations to s. Then s applies the write/delete operations to Ds sequentially.
At a certain point, all future client requests are redirected to s. At the same time, f copies s's updated Ds into its Df.
Now, the f and s roles are swapped: s will answer client requests using Ds, and f will keep its copy updated. The swapping process is periodically repeated.
Notice that I omitted on purpose A LOT of details for simplicity (for example, once the swap has been done, f has to finish all the pending client requests before applying the write/delete operations received from s in the meantime).
Why do we need two servers? Because the data structure is potentially too big to fit into one machine's memory.
Now, my question is: is there some similar approach in the literature? I came up with this protocol in 10 minutes; I find it strange that no (better) similar solution has already been proposed!
PS: I could have forgotten some application specs, so don't hesitate to ask for any clarification!
The scheme that you have works. I don't see any particular problem with it. This is basically how many databases run their HA solutions. They apply a log of writes to replicas. This model affords a great deal of flexibility in how the replicas are formed, accessed and maintained. Failovers are easy, too.
An alternative technique is to use persistent datastructures. Each write returns you a new and independent version of the data. All versions can be read in a stable and lock-free way. Versions can be kept or discarded at will. Versions share as much of the underlying state as possible.
Usually, trees underlie such persistent datastructures because it is easy to update a small part of the tree and reuse most of the old tree.
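A minimal path-copying sketch (in Python, with a toy binary search tree; all names here are illustrative): insert() returns a new root and shares every untouched subtree with the old version, so old versions stay readable without locks.

class Node:
    __slots__ = ("key", "left", "right")
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def insert(root, key):
    # copy only the nodes along the search path; every other subtree is shared
    if root is None:
        return Node(key)
    if key < root.key:
        return Node(root.key, insert(root.left, key), root.right)
    if key > root.key:
        return Node(root.key, root.left, insert(root.right, key))
    return root  # key already present: the old version is reused as-is

v1 = insert(insert(insert(None, 5), 2), 8)
v2 = insert(v1, 3)  # v1 is untouched and still readable concurrently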
A reason you might not have found a more sophisticated approach is that your problem is extremely general: You want this to work with any data structure at all and the data can be big.
SQL Server Hekaton uses a quite sophisticated data structure to achieve lock-free, readable, point-in-time snapshots of any database contents. Maybe it's worth a look at how they do it (they released a paper describing every detail of the system). They also allow for ACID transactions, serializability and concurrent writes. All lock-free.
At the same time, f copies s's updated Ds into its Df.
This copy will take a long time because the data is big. It will block readers. A better approach is to apply the log of writes to the writable copy before accepting new writes there. That way reads can be accepted continuously.
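A tiny sketch of that log-replay catch-up (treating D as a Python dict purely for illustration; the log entries are assumed (kind, key, value) tuples):

def sync_before_swap(df, op_log):
    # replay only what changed since the last swap, instead of copying
    # the whole (big) structure; readers of the serving copy never block
    for kind, key, value in op_log:
        if kind == "write":
            df[key] = value
        else:  # "delete"
            df.pop(key, None)
    op_log.clear()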
The switchover is also a short period where reads might have a slightly higher latency than normal.
Concise-ish problem explanation:
I'd like to be able to run multiple (we'll say a few hundred) shell commands, each of which starts a long running process and blocks for hours or days with at most a line or two of output (this command is simply a job submission to a cluster). This blocking is helpful so I can know exactly when each finishes, because I'd like to investigate each result and possibly re-run each multiple times in case they fail. My program will act as a sort of controller for these programs.
for all commands in parallel {
    submit_job_and_wait()
    tries = 1
    while ! job_was_successful and tries < 3 {
        resubmit_with_extra_memory_and_wait()
        tries++
    }
}
What I've tried/investigated:
I was so far thinking it would be best to create a thread for each submission which just blocks waiting for input. There is enough memory for quite a few waiting threads. But from what I've read, Perl threads are closer to duplicate processes than threads in other languages, so creating hundreds of them is not feasible (nor does it feel right).
There also seem to be a variety of event-loop-ish cooperative systems like AnyEvent and Coro, but these seem to require you to rely on asynchronous libraries, otherwise you can't really do anything concurrently. I can't figure out how to run multiple shell commands with them. I've tried using AnyEvent::Util::run_cmd, but after I submit multiple commands, I have to specify the order in which I want to wait for them. I don't know in advance how long each submission will take, so I can't recv without sometimes getting very unlucky. This isn't really parallel.
my $cv1 = run_cmd("qsub -sync y 'sleep $RANDOM'");
my $cv2 = run_cmd("qsub -sync y 'sleep $RANDOM'");
# Now should I $cv1->recv first or $cv2->recv? Who knows!
# Out of 100 submissions, I may have to wait on the longest one before processing any.
My understanding of AnyEvent and friends may be wrong, so please correct me if so. :)
The other option is to run the job submission in its non-blocking form and have it communicate its completion back to my process, but the inter-process communication required to accomplish and coordinate this across different machines daunts me a little. I'm hoping to find a local solution before resorting to that.
Is there a solution I've overlooked?
You could instead use scientific workflow software such as FireWorks or Pegasus, which are designed to help scientists submit large numbers of computing jobs to shared or dedicated resources. They can also do much more, so they might be overkill for your problem, but they are still worth a look.
If your goal is to find the tightest memory requirements for your job, you could also simply submit your job with a large amount of requested memory, and then extract the actual memory usage from accounting (qacct) or, cluster policy permitting, by logging on to the compute node(s) where your job is running and viewing the memory usage with top or ps.
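For example, with Grid Engine accounting (the job id here is hypothetical), something like:

qacct -j 1234567 | grep maxvmem

should show the job's peak memory usage, which you can then use to right-size the memory request on the next submission.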
I need to implement a daemon that extracts data from a database, loads the data into memory, and, according to this data, performs actions like sending emails or writing/updating files. These actions need to be performed every 30 minutes.
I really don't know what to decide: compile a C++ program that will do the task, or use scripts and miscellaneous Linux tools (sed/awk)?
What will be the fastest way to do this, to save CPU and memory?
The dilemma is about maintaining this process: if it's a script, it does not need compilation and I can just drop it onto any Linux/Unix machine, but if it's native, it's harder.
What do you think?
Use cron(1) to start your program every 30 minutes.
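For example, a crontab(5) entry like the following (the script path is hypothetical) runs the task every 30 minutes:

*/30 * * * * /usr/local/bin/report_task.py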
So-called scripting languages will definitely enable you to write your program more quickly than C++. But doing this with shell and sed and/or awk, while definitely possible, is very difficult when you have to cope with all the corner cases, particularly regarding string escaping (think quotes, “&”s, “;”s…).
I suggest you go with a more full-featured “scripting” language such as Perl or Python.
Why are you trying to save CPU & Memory? Are you absolutely sure this is a real requirement (or just "premature optimization")?
Unless performance is critical, there's absolutely no reason to code such a thing in C++. It seems to be a sort of maintenance process (right?). I say write it in the highest level script language you know. Python or PHP seem like good candidates. Even if you don't know these languages, it would still take you less time to familiarize yourself with them than it would take you to do it in C++.
I'd go with a Python/Perl/Ruby implementation with a cron entry to schedule the script to run every 30 minutes.
If performance becomes an issue, you can add a column to your DB that tracks the last time you ran calculations for the account, and then split the processing of your records into 2, 3, or 6 groups, running them every 15, 10, or 5 minutes respectively.
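A small sketch of that grouping (in Python; the field names and the split fraction are assumptions), using the last-run column to pick the most overdue records each run:

import time

N_GROUPS = 3  # assumed: 3 groups, one processed every 10 minutes

def pick_group(accounts):
    # take the ~1/N most overdue records, oldest last_run first
    overdue = sorted(accounts, key=lambda a: a["last_run"])
    return overdue[: max(1, len(overdue) // N_GROUPS)]

def process(account):
    ...  # run this account's calculations
    account["last_run"] = time.time()  # record completion for the next pick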
If, after splitting your calculations into groups, you still have performance demands, then consider C++/C/Java.
I'd still run this using cron, though. There's no need for a daemon unless you are providing on-demand services.