Improving AES processing speed with multithreading

I implemented a basic AES function, but it is too slow. How can I enable threading on my AES function to improve its speed through parallel processing?

In the general case you can't speed it up by using multiple threads.
Some encryption modes generate a pseudo-random stream of bytes and XOR it with the input data.
With such a mode you could pre-calculate (predict) parts of that stream in parallel threads and encrypt parts of the same buffer simultaneously.
But in chaining modes such as CBC, each block of ciphertext depends on the result of the previous block, so you can't speed those up with multithreading.
Actually my answer can be both true and false depending on which mode you use, but most likely you are not using ECB mode, so you can't speed it up.
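For modes whose keystream depends only on a counter (CTR is the usual example), the pre-calculation idea above can be sketched in Python. The keystream generator below is a hash-based stand-in, not real AES; it just has the one property that matters here: block i depends only on the key and i, never on previous blocks, so chunks of the buffer can be processed independently.

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor

BLOCK = 32  # keystream block size of our stand-in generator

def keystream_block(key: bytes, counter: int) -> bytes:
    # Stand-in for AES(counter) in CTR mode: each block is a function of
    # the key and the counter value only, never of previous blocks.
    return hashlib.sha256(key + counter.to_bytes(16, "big")).digest()

def xor_chunk(key: bytes, data: bytes, first_block: int) -> bytes:
    # Encrypt one contiguous chunk whose first byte sits at block
    # number `first_block` of the overall buffer.
    out = bytearray(len(data))
    for i in range(0, len(data), BLOCK):
        ks = keystream_block(key, first_block + i // BLOCK)
        for j, b in enumerate(data[i:i + BLOCK]):
            out[i + j] = b ^ ks[j]
    return bytes(out)

def ctr_encrypt_parallel(key: bytes, data: bytes, workers: int = 4) -> bytes:
    # Split the buffer on block boundaries; each chunk starts at a known
    # counter value, so all chunks can be processed in any order.
    blocks_per_chunk = max(1, (len(data) // BLOCK + workers) // workers)
    step = blocks_per_chunk * BLOCK
    chunks = [(data[off:off + step], off // BLOCK)
              for off in range(0, len(data), step)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        parts = pool.map(lambda c: xor_chunk(key, c[0], c[1]), chunks)
    return b"".join(parts)
```

Because the stream is XORed in, the same function decrypts. Note that in CPython the threads here mostly illustrate the structure; a real speedup needs an AES implementation that releases the GIL (as C-backed crypto libraries do for large buffers) or separate processes.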

Related

Does Zarr have built-in multi-threading support for fast reads and writes?

I am trying to speed up reading and writing Zarr files using multi-threading. For example, if I can store an array in 5 chunks, is there a way to use a thread per chunk to speed up reading and writing the array to and from disk (possibly using ThreadSynchronizer() and the synchronizer argument)? I just want to speed up read/write. I don't want to parallelize the computation. I know that can be done with dask. Thanks.
Zarr supports parallel reads and writes to different chunks without any additional synchronization, see the documentation. As pointed out in the documentation, the bottleneck may be the GIL, in which case using multiple processes (rather than threads) can help speed up the processing.
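The one-thread-per-chunk pattern can be sketched with the standard library alone. This imitates the layout of a directory store (one file per chunk, which is how Zarr's default on-disk store works); the file names and helper functions are illustrative, not Zarr's actual API:

```python
import os
import tempfile
from concurrent.futures import ThreadPoolExecutor

# Hypothetical chunk layout: one file per chunk. Each worker touches a
# different file, so the workers need no synchronization between them.

def write_chunk(store: str, idx: int, payload: bytes) -> None:
    with open(os.path.join(store, f"chunk.{idx}"), "wb") as f:
        f.write(payload)

def read_chunk(store: str, idx: int) -> bytes:
    with open(os.path.join(store, f"chunk.{idx}"), "rb") as f:
        return f.read()

def parallel_roundtrip(chunks: list) -> list:
    # Write all chunks concurrently, then read them back concurrently.
    with tempfile.TemporaryDirectory() as store:
        with ThreadPoolExecutor(max_workers=len(chunks)) as pool:
            list(pool.map(lambda p: write_chunk(store, p[0], p[1]),
                          enumerate(chunks)))
            return list(pool.map(lambda i: read_chunk(store, i),
                                 range(len(chunks))))
```

Threads work well here because file I/O releases the GIL; if per-chunk compression/decompression dominates instead, that is the case where switching to processes pays off.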

Multi-threaded performance and profiling

I have a program that scales badly to multiple threads, although – theoretically – it should scale linearly: it's a calculation that splits into smaller chunks and doesn't need system calls, library calls, locking, etc. Running with four threads is only about twice as fast as running with a single thread (on a quad core system), while I'd expect a number closer to four times as fast.
The run times of the pthreads, C++0x threads, and OpenMP implementations agree.
In order to pinpoint the cause, I tried gprof (useless) and valgrind (I didn't see anything obvious). How can I effectively benchmark what's causing the slowdown? Any generic ideas as to its possible causes?
— Update —
The calculation involves Monte Carlo integration and I noticed that an unreasonable amount of time is spent generating random numbers. While I don't know yet why this happens with four threads, I noticed that the random number generator is not reentrant. When using mutexes, the running time explodes. I'll reimplement this part before checking for other problems.
I did reimplement the sampling classes, which improved performance substantially. The remaining problem was, in fact, contention of the CPU caches (revealed by cachegrind, as Evgeny suspected).
You can use oprofile. Or a poor man's pseudo-profiler: run the program under gdb, stop it and look where it is stopped. "valgrind --tool=cachegrind" will show you how efficiently CPU cache is used.
Monte Carlo integration seems to be a very memory-intensive algorithm. Try to estimate how much memory bandwidth is used; it may be the limiting factor for your program's performance. Also, if your system is only 2-core with hyperthreading, it should not work much faster with 4 threads compared with 2 threads.
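The reentrancy problem described in the update has a standard fix: give each thread its own generator state instead of guarding one shared generator with a mutex. A minimal sketch (the sampled workload is a stand-in for the real integrand):

```python
import random
import threading

# Each worker owns a private random.Random instance, so the hot loop
# needs no lock; one shared generator behind a mutex would serialize
# exactly the part we are trying to run in parallel.

def sample_sum(seed: int, n: int, out: list, idx: int) -> None:
    rng = random.Random(seed)  # per-thread state, reentrant by construction
    out[idx] = sum(rng.random() for _ in range(n))

def parallel_sample(n_threads: int = 4, n: int = 10_000) -> list:
    out = [0.0] * n_threads
    threads = [threading.Thread(target=sample_sum, args=(s, n, out, s))
               for s in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return out
```

Distinct fixed seeds also make each thread's stream reproducible, which helps when comparing serial and parallel runs during benchmarking.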

Do I get a performance penalty when mixing SIMD instructions and multithreading

I was interested in doing a project about face recognition (to make use of the SIMD instruction set). But during the first semester of the current year, I learnt something about threads and I was wondering if I could combine the two.
When should I avoid combining multithreading and SIMD instructions? When is it worth it to do it?
Saving x87/MMX/XMM/YMM registers can take quite some time and cause significant cache thrash. Normally, saving and restoring of FP state is done in a lazy manner: upon a context switch, the kernel remembers the current thread as the "owner" of the FP state and sets the TS flag in CR0 - this will cause a trap to the kernel whenever a thread attempts to execute an FP instruction. The FP state of the old thread and the FP state of the currently executing thread are saved and restored, respectively, at that time.
Now, if for extended periods of time (several or many context switches) no other thread than yours uses FP insns - the lazy policy will cause no FP state to be saved/restored whatsoever and you won't get performance hit.
Since we're obviously talking about multiprocessor system, the threads, which execute your algorithm in parallel won't conflict with each other because they should execute on their own CPU/core/HT and have a private set of registers.
tl;dr
You shouldn't be concerned with the overhead of saving and restoring FP registers.
Why do you think there would be a problem? SIMD registers will be swapped out like any other CPU registers when a thread change occurs.
There aren't any new issues to worry about with multithreading and SIMD. So long as you're doing the SIMD correctly and efficiently, you shouldn't have anything to worry about.
Meaning SIMD has its own implementation challenges, as does multithreading. But combining them won't make either more complex.

Which thread to use for audio decoding?

When working with audio playback I am used to the following pattern:
one disk (or network) thread which reads data from disk (or the network) and fills a ringbuffer
one audio thread which reads data from the ringbuffer, possibly performs DSP, and writes to audio hardware
(pull or push API)
This works fine, and there's no issue when working with, say, a WAV file.
Now, if the source data is encoded in a compressed format, like Vorbis or MP3, decoding takes some time.
And it seems like it's quite common to perform decoding in the disk/network thread.
But isn't this wrong design? While disk or network access blocks, some CPU time is available for decoding, but is wasted if decoding happens in the same thread.
It seems to me that if the network becomes slow, the risk of buffer underruns is higher if decoding happens sequentially.
So, shouldn't decoding be performed in the audio thread?
In my context, I would prefer to avoid adding a dedicated decoding thread. It's for mobile platforms and SMP is pretty rare right now. But please tell if a dedicated decoding thread really makes sense to you.
It's more important for the audio thread to be available for playing audio smoothly than for the network thread to maintain a perfectly sized buffer. If you're only using two threads, then the decoding should be done on the network thread. If you were to decode on the playing thread then it's possible the time could come that you need to push more audio out to the hardware but the thread is busy decoding. It's better if you maintain a buffer of already decoded audio.
Ideally you would use three threads. One for reading the network, one for decoding, and one for playing. In our application that handles audio/video capture, recording, and streaming we have eight threads per stream (recently increased from six after we added new functionality). It's much easier for each thread to have its own responsibility, and then it can appropriately measure its performance against those of its incoming/outgoing buffers. This also benefits profiling and optimization.
If your device has a single CPU, all threads are sharing it. OS thread switching is usually very efficient (you won't lose any meaningful CPU power to the switching). Therefore, you should create more threads if it will simplify your logic.
In your case, there is a pipeline. A different thread for each stage of the pipeline is a good pattern. The alternative, as you noticed, involves complex logic, synchronization, events, interrupts, or whatever. Sometimes there are no good alternatives at all.
Hence, my suggestion - create a dedicated thread for the audio decoding.
If you'll have more than a single CPU, you'll even gain more efficiency by using one thread for each pipeline step.
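The reader / decoder / player pipeline recommended above can be sketched with one thread per stage and bounded queues as the ring buffers between them. The "decoding" step here is a stand-in transform, not a real Vorbis/MP3 decoder, and the sink is a plain list standing in for the audio hardware:

```python
import queue
import threading

SENTINEL = None  # marks end of stream

def reader(src, encoded_q):
    for packet in src:           # stand-in for disk or network reads
        encoded_q.put(packet)
    encoded_q.put(SENTINEL)

def decoder(encoded_q, pcm_q):
    while (packet := encoded_q.get()) is not SENTINEL:
        pcm_q.put(packet.upper())   # stand-in for real decoding work
    pcm_q.put(SENTINEL)

def player(pcm_q, sink):
    while (frame := pcm_q.get()) is not SENTINEL:
        sink.append(frame)          # stand-in for writing to the DAC

def run_pipeline(packets):
    # Bounded queues apply backpressure: a slow stage stalls the one
    # feeding it instead of growing memory without limit.
    encoded_q = queue.Queue(maxsize=8)
    pcm_q = queue.Queue(maxsize=8)
    out = []
    stages = [threading.Thread(target=reader, args=(packets, encoded_q)),
              threading.Thread(target=decoder, args=(encoded_q, pcm_q)),
              threading.Thread(target=player, args=(pcm_q, out))]
    for t in stages:
        t.start()
    for t in stages:
        t.join()
    return out
```

On a single core the stages interleave; on multiple cores they genuinely overlap, with no change to the code.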

Programming for Multi core Processors

As far as I know, the multi-core architecture in a processor does not affect the program. The actual instruction execution is handled in a lower layer.
My question is: given a multicore environment, can I use any programming practices to utilize the available resources more effectively? How should I change my code to gain more performance in multicore environments?
That is correct. Your program will not run any faster (except for the fact that the core is handling fewer other processes, because some of the processes are being run on the other core) unless you employ concurrency. If you do use concurrency, though, more cores improves the actual parallelism (with fewer cores, the concurrency is interleaved, whereas with more cores, you can get true parallelism between threads).
Making programs efficiently concurrent is no simple task. If done poorly, making your program concurrent can actually make it slower! For example, if you spend lots of time spawning threads (thread construction is really slow), and do work on a very small chunk size (so that the overhead of thread construction dominates the actual work), or if you frequently synchronize your data (which not only forces operations to run serially, but also has a very high overhead on top of it), or if you frequently write to data in the same cache line between multiple threads (which can lead to the entire cache line being invalidated on one of the cores), then you can seriously harm the performance with concurrent programming.
It is also important to note that if you have N cores, that DOES NOT mean that you will get a speedup of N; that is the theoretical limit. In fact, maybe with two cores it is twice as fast, but with four cores it might be about three times as fast, and then with eight cores it is about three and a half times as fast, etc. How well your program is actually able to take advantage of these cores is called its parallel scalability. Often communication and synchronization overhead prevent a linear speedup, although, ideally, if you can avoid communication and synchronization as much as possible, you can hopefully get close to linear.
It would not be possible to give a complete answer on how to write efficient parallel programs on StackOverflow. This is really the subject of at least one (probably several) computer science courses. I suggest that you sign up for such a course or buy a book. I'd recommend a book to you if I knew of a good one, but the parallel algorithms course I took did not have a textbook. You might also be interested in writing a handful of programs using a serial implementation, a parallel implementation with multithreading (regular threads, thread pools, etc.), and a parallel implementation with message passing (such as with Hadoop, Apache Spark, Cloud Dataflows, asynchronous RPCs, etc.), and then measuring their performance, varying the number of cores in the case of the parallel implementations. This was the bulk of the course work for my parallel algorithms course and can be quite insightful. Some computations you might try parallelizing include computing Pi using the Monte Carlo method (this is trivially parallelizable, assuming you can create a random number generator where the random numbers generated in different threads are independent), performing matrix multiplication, computing the row echelon form of a matrix, summing the squares of the numbers 1...N for some very large N, and I'm sure you can think of others.
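As a starting point, the Monte Carlo Pi exercise mentioned above might look like this in Python, using processes rather than threads so the cores actually work in parallel (CPython's GIL prevents pure-Python threads from speeding up CPU-bound loops). Each worker gets its own seeded generator, so the samples drawn in different processes are independent:

```python
import random
from multiprocessing import Pool

def count_inside(args):
    # Count how many uniform points in the unit square fall inside the
    # quarter circle of radius 1; each process has a private generator.
    seed, n = args
    rng = random.Random(seed)
    return sum(1 for _ in range(n)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)

def estimate_pi(n_workers: int = 4, n_per_worker: int = 100_000) -> float:
    # Area of the quarter circle is pi/4, so pi ~= 4 * hits / samples.
    with Pool(n_workers) as pool:
        hits = pool.map(count_inside, [(s, n_per_worker)
                                       for s in range(n_workers)])
    return 4.0 * sum(hits) / (n_workers * n_per_worker)
```

Timing this while varying `n_workers` is exactly the kind of scalability measurement the course work described: no shared state, no synchronization beyond the final sum, so it should scale close to linearly with physical cores.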
I don't know if it's the best possible place to start, but I've subscribed to the article feed from Intel Software Network some time ago and have found a lot of interesting thing there, presented in pretty simple way. You can find some very basic articles on fundamental concepts of parallel computing, like this. Here you have a quick dive into openMP that is one possible approach to start parallelizing the slowest parts of your application, without changing the rest. (If those parts present parallelism, of course.) Also check Intel Guide for Developing Multithreaded Applications. Or just go and browse the article section, the articles are not too many, so you can quickly figure out what suits you best. They also have a forum and a weekly webcast called Parallel Programming Talk.
Yes, simply adding more cores to a system without altering the software would yield no improvement (with the exception that the operating system would be able to schedule multiple concurrent processes on separate cores).
To have your operating system utilise your multiple cores, you need to do one of two things: increase the thread count per process, or increase the number of processes running at the same time (or both!).
Utilising the cores effectively, however, is a beast of a different colour. If you spend too much time synchronising shared data access between threads/processes, your level of concurrency will take a hit as threads wait on each other. This also assumes that you have a problem/computation that can relatively easily be parallelised, since the parallel version of an algorithm is often much more complex than the sequential version thereof.
That said, especially for CPU-bound computations with work units that are independent of each other, you'll most likely see a linear speed-up as you throw more threads at the problem. As you add serial segments and synchronisation blocks, this speed-up will tend to decrease.
I/O heavy computations would typically fare the worst in a multi-threaded environment, since access to the physical storage (especially if it's on the same controller, or the same media) is also serial, in which case threading becomes more useful in the sense that it frees up your other threads to continue with user interaction or CPU-based operations.
You might consider using programming languages designed for concurrent programming. Erlang and Go come to mind.
