how to generate a delay - multithreading

I'm new to kernel programming and I'm trying to understand some OS basics. I'm trying to generate a delay using a technique I've implemented successfully on a 20 MHz microcontroller.
I know this is a totally different environment, as I'm using Linux CentOS on my 2 GHz Core 2 Duo processor.
I've tried the following code, but I'm not getting a delay.
#include <linux/kernel.h>
#include <linux/module.h>

int init_module(void)
{
    unsigned long int i, j, k, l;

    for (l = 0; l < 100; l++)
    {
        for (i = 0; i < 10000; i++)
        {
            for (j = 0; j < 10000; j++)
            {
                for (k = 0; k < 10000; k++);
            }
        }
    }
    printk("\nhello\n");
    return 0;
}

void cleanup_module(void)
{
    printk("bye");
}
When I run dmesg after inserting the module, as quickly as I possibly can, the string "hello" is already there. If my calculation is right, the above code should give me at least a 10 second delay.
Why is it not working? Is there anything related to threading? How could a 2 GHz processor execute the above code instantly, without any noticeable delay?

The compiler is optimizing your loop away since it has no side effects.
To actually get a 10 second (non-busy) delay, you can do something like this:
#include <linux/sched.h>
//...
unsigned long to = jiffies + (10 * HZ); /* current time + 10 seconds */
while (time_before(jiffies, to))
{
    schedule();
}
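Another sketch in the same spirit (not from the original answer): the kernel's schedule_timeout() helper yields the CPU until the timeout expires, but the task state must be set first or it returns immediately:
set_current_state(TASK_INTERRUPTIBLE);
schedule_timeout(10 * HZ);   /* sleep for roughly 10 seconds, yielding the CPU */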
or better yet:
#include <linux/delay.h>
//...
msleep(10 * 1000);
For short delays you may use mdelay(), ndelay(), and udelay().
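For illustration, a quick sketch of how those busy-wait helpers (also from <linux/delay.h>) are typically called; the values here are arbitrary, and these calls spin the CPU, so they only suit very short waits:
ndelay(500);   /* busy-wait roughly 500 nanoseconds  */
udelay(100);   /* busy-wait roughly 100 microseconds */
mdelay(5);     /* busy-wait roughly 5 milliseconds   */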
I suggest you read Linux Device Drivers, 3rd edition, chapter 7.3, which deals with delays, for more information.

To answer the question directly: it's likely your compiler sees that these loops don't do anything and "optimizes" them away.
As for the technique itself, it looks like you're trying to use all of the processor to create a delay. While this may work, an OS is designed to maximize useful processor time; this just wastes it.
I understand it's experimental, but just a heads up.

Related

How to count branch mispredictions?

I've got a task to count the branch misprediction penalty (in ticks), so I wrote this code:
#include <stdio.h>
#include <stdlib.h>

/* rdtsc() is a helper (defined elsewhere) that returns the CPU time-stamp counter */

int main (int argc, char ** argv) {
    unsigned long long start, end;
    FILE *f;
    f = fopen("output", "w");
    long long int k = 0;
    unsigned long long min;
    int n = atoi(argv[1]); // n1 = atoi(argv[2]);
    for (int i = 1; i <= n + 40; i++) {
        min = 9999999999999;
        for (int r = 0; r < 1000; r++) {
            start = rdtsc();
            for (long long int j = 0; j < 100000; j++) {
                if (j % i == 0) {
                    k++;
                }
            }
            end = rdtsc();
            if (min > end - start) min = end - start;
        }
        fprintf(f, "%d %llu \n", i, min);
    }
    fclose(f);
    return 0;
}
(rdtsc is a function that measures time in ticks)
The idea of this code is that it goes into the branch (if (j % i == 0)) periodically, with a period equal to i, so at some point it starts causing mispredictions. The rest of the code is mostly repeated measurement, which I need to get more precise results.
Tests show that branch mispredictions start to happen around i = 47, but I don't know how to count the exact number of mispredictions so that I can count the exact number of ticks. Can anyone explain how to do this without using any external programs like VTune?
It depends on the processor you're using. In general, cpuid can be used to obtain a lot of information about the processor, and what cpuid does not provide is typically accessible via SMBIOS or other regions of memory.
Doing this in code at a general level, without the processor's support functions and manual, will not tell you as much as you want with a great degree of certainty, but it may be useful as an estimate, depending on what you're looking for and how your code is compiled, e.g. the flags you use during compilation.
In general, speculative execution is not observable by programs: work that moves through the pipeline and is then determined to be unneeded is discarded.
Depending on how you use specific instructions in your program, you may be able to use such stale cache information for better or worse, but the logic involved varies greatly depending on the CPU in use.
See also Spectre and Rowhammer for interesting examples of using such techniques for privileged execution.
See the comments below for links with code related to the use of cpuid as well as rdrand, rdseed, and a few others (rdtsc).
It's not completely clear what you're looking for, but this will surely get you started and provides some useful examples.
See also Branch mispredictions
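For what it's worth, here is a minimal sketch (not part of the original answer, assuming Linux) that counts branch misses directly with the perf_event_open syscall around the measured loop, so no external tool such as VTune is needed; error handling is omitted:
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <linux/perf_event.h>

static int perf_open(__u32 type, __u64 config)
{
    struct perf_event_attr attr;
    memset(&attr, 0, sizeof(attr));
    attr.size = sizeof(attr);
    attr.type = type;
    attr.config = config;
    attr.disabled = 1;          /* start stopped; enabled via ioctl below */
    attr.exclude_kernel = 1;
    attr.exclude_hv = 1;
    return (int)syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
}

int main(void)
{
    int fd = perf_open(PERF_TYPE_HARDWARE, PERF_COUNT_HW_BRANCH_MISSES);
    long long misses = 0, k = 0;

    ioctl(fd, PERF_EVENT_IOC_RESET, 0);
    ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);

    for (long long j = 0; j < 100000; j++)   /* the loop being measured */
        if (j % 47 == 0)
            k++;

    ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);
    read(fd, &misses, sizeof(misses));       /* counter value as a 64-bit integer */
    printf("branch misses: %lld (k=%lld)\n", misses, k);
    close(fd);
    return 0;
}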

Crosscompiling from Linux to Windows, but Windows terminal won't stop? (gcc)

Quick question:
I'm using Ubuntu as my coding environment, and I'm trying to write a C program for Windows for school.
The assignment says I have to do something using the system clock, so I decided to make a quick benchmarking program. Here it is:
#include <stdio.h>
#include <unistd.h>
#include <time.h>

int main () {
    int i = 0;
    int p = (int) getpid();
    int n = 0;
    clock_t cstart = clock();
    clock_t cend = 0;
    for (i = 0; i < 100000000; i++) {
        long f = (((i+9)*99)%4)+(8+i*999);
        if (i % p == 0)
            printf("i=%d, f=%ld\n", i, f);
    }
    cend = clock();
    printf("%.3f cpu sec\n", ((double)cend - (double)cstart) * 1.0e-6);
    return 0;
}
When I cross compile from Ubuntu to Windows using mingw32, it's fine. However, when I run the program on Windows, two issues appear:
The benchmark runs as expected and takes roughly 5 seconds, yet the timer says it took 0.03 seconds. (This doesn't happen when testing in my Ubuntu VM: if the benchmark takes 5 seconds in real time, the timer says 5 seconds. So obviously this is an issue.)
Then, once the program is done, the Windows terminal closes immediately.
How do I make the program stay open so you can look at your time for more than about 10 milliseconds, and how can I make the reported runtime of the benchmark reflect its score like it does when I test on Ubuntu?
Thanks!
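For what it's worth, a sketch of the likely explanation (not part of the original question): clock() ticks at CLOCKS_PER_SEC, which is 1,000,000 on Linux but only 1,000 with the Microsoft C runtime, so the hard-coded 1.0e-6 factor under-reports the time on Windows. Dividing by CLOCKS_PER_SEC and pausing before exit would address both symptoms:
cend = clock();
/* portable: scale by the platform's own CLOCKS_PER_SEC instead of 1.0e-6 */
printf("%.3f cpu sec\n", ((double)(cend - cstart)) / CLOCKS_PER_SEC);

/* keep the console window open until Enter is pressed */
printf("Press Enter to exit...\n");
getchar();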

OpenCL float sum reduction

I would like to apply a reduction to this piece of my kernel code (1-dimensional data):
__local float sum = 0;
int i;
for(i = 0; i < length; i++)
sum += //some operation depending on i here;
Instead of having just 1 thread perform this operation, I would like to have n threads (with n = length), and at the end have 1 thread compute the total sum.
In pseudo code, I would like to able to write something like this:
int i = get_global_id(0);
__local float sum = 0;
sum += //some operation depending on i here;
barrier(CLK_LOCAL_MEM_FENCE);
if(i == 0)
res = sum;
Is there a way?
I have a race condition on sum.
To get you started you could do something like the example below (see Scarpino). Here we also take advantage of vector processing by using the OpenCL float4 data type.
Keep in mind that the kernel below returns a number of partial sums: one for each local work group, back to the host. This means that you will have to carry out the final sum by adding up all the partial sums, back on the host. This is because (at least with OpenCL 1.2) there is no barrier function that synchronizes work-items in different work-groups.
If summing the partial sums on the host is undesirable, you can get around this by launching multiple kernels. This introduces some kernel-call overhead, but in some applications the extra penalty is acceptable or insignificant. To do this with the example below you will need to modify your host code to call the kernel repeatedly and then include logic to stop executing the kernel after the number of output vectors falls below the local size (details left to you or check the Scarpino reference).
EDIT: Added an extra kernel argument for the output. Added a dot product to sum over the float4 vectors.
__kernel void reduction_vector(__global float4* data, __local float4* partial_sums, __global float* output)
{
    int lid = get_local_id(0);
    int group_size = get_local_size(0);

    partial_sums[lid] = data[get_global_id(0)];
    barrier(CLK_LOCAL_MEM_FENCE);

    for (int i = group_size/2; i > 0; i >>= 1) {
        if (lid < i) {
            partial_sums[lid] += partial_sums[lid + i];
        }
        barrier(CLK_LOCAL_MEM_FENCE);
    }

    if (lid == 0) {
        output[get_group_id(0)] = dot(partial_sums[0], (float4)(1.0f));
    }
}
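For illustration (not part of the original answer), the host-side finish could look something like this once the per-group partial sums have been read back; num_groups and host_partial are hypothetical names:
/* num_groups = global work size / local work size */
float total = 0.0f;
for (size_t g = 0; g < num_groups; g++)
    total += host_partial[g];   /* one partial sum per work-group */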
I know this is a very old post, but from everything I've tried, the answer from Bruce doesn't work, and the one from Adam is inefficient due to both global memory use and kernel execution overhead.
The comment by Jordan on the answer from Bruce is correct that this algorithm breaks down in each iteration where the number of elements is not even. Yet it is essentially the same code that can be found in several search results.
I scratched my head over this for several days, partially hindered by the fact that my language of choice is not C/C++ based, and also because it is tricky, if not impossible, to debug on the GPU. Eventually, though, I found an answer that worked.
This is a combination of the answer by Bruce and the one from Adam. It copies the source from global memory into local memory, and then reduces by folding the top half onto the bottom half repeatedly, until no data is left.
The result is a buffer containing the same number of items as there are work-groups used (so that very large reductions can be broken down), which must be summed by the CPU; alternatively, call another kernel and do this last step on the GPU.
This part is a little over my head, but I believe this code also avoids bank conflict issues by reading from local memory essentially sequentially. Would love confirmation of that from anyone who knows.
Note: the global 'AOffset' parameter can be omitted from the source if your data begins at offset zero. Simply remove it from the kernel prototype and from the fourth line of code, where it's used as part of an array index...
__kernel void Sum(__global float * A, __global float * output, ulong AOffset, __local float * target) {
    const size_t globalId = get_global_id(0);
    const size_t localId = get_local_id(0);
    target[localId] = A[globalId + AOffset];

    barrier(CLK_LOCAL_MEM_FENCE);

    size_t blockSize = get_local_size(0);
    size_t halfBlockSize = blockSize / 2;
    while (halfBlockSize > 0) {
        if (localId < halfBlockSize) {
            target[localId] += target[localId + halfBlockSize];
            if ((halfBlockSize * 2) < blockSize) { // uneven block division
                if (localId == 0) { // only work-item 0 folds in the leftover odd element
                    target[localId] += target[localId + (blockSize - 1)];
                }
            }
        }
        barrier(CLK_LOCAL_MEM_FENCE);
        blockSize = halfBlockSize;
        halfBlockSize = blockSize / 2;
    }
    if (localId == 0) {
        output[get_group_id(0)] = target[0];
    }
}
https://pastebin.com/xN4yQ28N
You can use the new work_group_reduce_add() function for sum reduction inside a single work-group if you have support for OpenCL C 2.0 features.
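For example, a minimal sketch of such a kernel (assuming the device and compiler support OpenCL C 2.0; the kernel name and arguments are illustrative):
__kernel void sum_wg(__global const float *data, __global float *partial)
{
    /* every work-item in the work-group must call work_group_reduce_add */
    float s = work_group_reduce_add(data[get_global_id(0)]);
    if (get_local_id(0) == 0)
        partial[get_group_id(0)] = s;   /* one partial sum per work-group */
}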
A simple and fast way to reduce data is by repeatedly folding the top half of the data into the bottom half.
For example, please use the following ridiculously simple CL code:
__kernel void foldKernel(__global float *arVal, int offset) {
    int gid = get_global_id(0);
    arVal[gid] = arVal[gid] + arVal[gid + offset];
}
With the following Java/JOCL host code (or port it to C++ etc):
int t = totalDataSize;
while (t > 1) {
    int m = t / 2;
    int n = (t + 1) / 2;
    clSetKernelArg(kernelFold, 0, Sizeof.cl_mem, Pointer.to(arVal));
    clSetKernelArg(kernelFold, 1, Sizeof.cl_int, Pointer.to(new int[]{n}));
    cl_event evFold = new cl_event();
    clEnqueueNDRangeKernel(commandQueue, kernelFold, 1, null, new long[]{m}, null, 0, null, evFold);
    clWaitForEvents(1, new cl_event[]{evFold});
    t = n;
}
The host code loops log2(n) times, so it finishes quickly even with huge arrays. The fiddling with "m" and "n" is there to handle non-power-of-two array sizes; a short trace is shown below.
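For instance, a short trace of that loop for a hypothetical totalDataSize of 7 shows how the odd element is handled:
t = 7: m = 3, n = 4 -> work-items 0..2 add elements 4..6 onto 0..2 (element 3 untouched); t becomes 4
t = 4: m = 2, n = 2 -> work-items 0..1 add elements 2..3 onto 0..1; t becomes 2
t = 2: m = 1, n = 1 -> work-item 0 adds element 1 onto element 0; t becomes 1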
- Easy for OpenCL to parallelize well for any GPU platform (i.e. fast).
- Low memory, because it works in place.
- Works efficiently with non-power-of-two data sizes.
- Flexible, e.g. you can change the kernel to do "min" instead of "+".

What is the difference between VC++ 2010 Express and Borland C++ 3.1 when compiling a simple C++ code file?

I no longer know what to think or what to do. The following code compiles fine in both IDEs, but in the VC++ case it causes weird heap corruption messages like:
"Windows has triggered a breakpoint in Lab4.exe.
This may be due to a corruption of the heap, which indicates a bug in Lab4.exe or any of the DLLs it has loaded.
This may also be due to the user pressing F12 while Lab4.exe has focus.
The output window may have more diagnostic information."
It happens when executing the Task1_DeleteMaxElement function; I left comments there.
Nothing like that happens when it is compiled in Borland C++ 3.1, and everything works as expected.
So... what's wrong with my code or with VC++?
#include <conio.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <memory.h>

void PrintArray(int *arr, int arr_length);
int Task1_DeleteMaxElement(int *arr, int arr_length);

int main()
{
    int *arr = NULL;
    int arr_length = 0;

    printf("Input the array size: ");
    scanf("%i", &arr_length);

    arr = (int*)realloc(NULL, arr_length * sizeof(int));

    srand(time(NULL));
    for (int i = 0; i < arr_length; i++)
        arr[i] = rand() % 100 - 50;

    PrintArray(arr, arr_length);
    arr_length = Task1_DeleteMaxElement(arr, arr_length);
    PrintArray(arr, arr_length);

    getch();
    return 0;
}

void PrintArray(int *arr, int arr_length)
{
    printf("Printing array elements\n");
    for (int i = 0; i < arr_length; i++)
        printf("%i\t", arr[i]);
    printf("\n");
}

int Task1_DeleteMaxElement(int *arr, int arr_length)
{
    printf("Looking for max element for deletion...");

    int current_max = arr[0];
    for (int i = 0; i < arr_length; i++)
        if (arr[i] > current_max)
            current_max = arr[i];

    int *temp_arr = NULL;
    int temp_arr_length = 0;
    for (int j = 0; j < arr_length; j++)
        if (arr[j] < current_max)
        {
            temp_arr = (int*)realloc(temp_arr, temp_arr_length + 1 * sizeof(int)); // if the initial array size is more than 4, the breakpoint activates here
            temp_arr[temp_arr_length] = arr[j];
            temp_arr_length++;
        }

    arr = (int*)realloc(arr, temp_arr_length * sizeof(int));
    memcpy(arr, temp_arr, temp_arr_length);

    realloc(temp_arr, 0); // if the initial array size is 4 or less, the breakpoint activates at this line

    return temp_arr_length;
}
My guess is that VC++ 2010 is rightly detecting memory corruption which Borland C++ 3.1 ignores.
How does it work?
For example, when allocating memory for you, VC++ 2010's realloc could well "mark" the memory around it with some special values. If you write over those values, realloc detects the corruption and then crashes.
The fact that it works with Borland C++ 3.1 is pure luck. That is a very, very old compiler (20 years!), and thus it is more tolerant/ignorant of this kind of memory corruption (until some random, apparently unrelated crash occurs in your app).
What's the problem with your code?
The source of your error:
temp_arr = (int*)realloc(temp_arr, temp_arr_length + 1 * sizeof(int))
For the following temp_arr_length values, in 32-bit, the allocation will be of:
- 0 : 4 bytes = 1 int when you expect 1 (OK)
- 1 : 5 bytes = 1.25 ints when you expect 2 (Error!)
- 2 : 6 bytes = 1.5 ints when you expect 3 (Error!)
You got your priorities (i.e. operator precedence) wrong. As you can see:
temp_arr_length + 1 * sizeof(int)
should be instead
(temp_arr_length + 1) * sizeof(int)
You allocated too little memory, and thus wrote well beyond what was allocated for you.
Edit (2012-05-18)
Hans Passant commented on allocator diagnostics. I took the liberty of copying his comments here until he writes his own answer (I've already seen comments disappear on SO):
It is Windows that reminds you that you have heap corruption bugs, not VS. BC3 uses its own heap allocator so Windows can't see your code mis-behaving. Not noticing these bugs before is pretty remarkable but not entirely impossible.
[...] The feature is not available on XP and earlier. And sure, one of the reasons everybody bitched about Vista. Blaming the OS for what actually were bugs in the program. Win7 is perceived as a 'better' OS in no small part because Vista forced programmers to fix their bugs. And no, the Microsoft CRT has implemented malloc/new with HeapAlloc for a long time. Borland had a history of writing their own, beating Microsoft for a while until Windows caught up.
[...] the CRT uses a debug allocator like you describe, but it generates different diagnostics. Roughly, the debug allocator catches small mistakes, Windows catches gross ones.
I found the following links explaining what is done to memory by Windows/CRT allocators before and after allocation/deallocation:
http://www.codeguru.com/cpp/w-p/win32/tutorials/article.php/c9535/Inside-CRT-Debug-Heap-Management.htm
https://stackoverflow.com/a/127404/14089
http://www.nobugs.org/developer/win32/debug_crt_heap.html#table
The last link contains a table I printed out and always keep near me at work (it was this table I was searching for when I found the first two links...).
If it is crashing in realloc, then you are overstepping the bookkeeping memory of malloc & free.
The incorrect code is as below:
temp_arr = (int*)realloc(temp_arr, temp_arr_length + 1 * sizeof(int));
should be
temp_arr = (int*)realloc(temp_arr, (temp_arr_length + 1) * sizeof(int));
Due to the operator precedence of * over +, in the next run of the loop, when you expect realloc to be passed 8 bytes, it might be passed only 5 bytes. So, in your second iteration, you will be writing 3 bytes into someone else's memory, which leads to memory corruption and an eventual crash.
Also
memcpy(arr, temp_arr, temp_arr_length);
should be
memcpy(arr, temp_arr, temp_arr_length * sizeof(int) );
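Putting both fixes together, the relevant lines would look something like this (a sketch, not the full corrected function):
temp_arr = (int*)realloc(temp_arr, (temp_arr_length + 1) * sizeof(int)); /* grow by whole ints */
temp_arr[temp_arr_length] = arr[j];
temp_arr_length++;
/* ... */
memcpy(arr, temp_arr, temp_arr_length * sizeof(int)); /* copy a byte count, not an element count */
free(temp_arr); /* clearer than realloc(temp_arr, 0) for releasing the buffer */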

Question regarding the clock() function

Why does the time to execute function f1() change from one run to another in debug mode? Why is it always zero in release mode?
I didn't include stdio.h or cstdio, yet the code compiled. How?
#include <iostream>
#include <ctime>

void f1()
{
    for( int i = 0; i < 10000; i++ );
}

int main()
{
    clock_t start, finish;
    start = clock();
    for( int i = 0; i < 100000; i++ ) f1();
    finish = clock();
    double duration = (double)(finish - start) / CLOCKS_PER_SEC;
    printf( "Duration = %6.2f seconds\n", duration);
}
Possibly the machine you're running your test code on is too fast. Try increasing the loop count to a really huge number.
Another thing to try is testing with the sleep() function.
This should confirm the behavior of your clock() measurements.
I believe the reason you are seeing zero runtime for f1() in release mode is because the compiler is optimizing the function. Since your for loop doesn't have a code block, it can effectively be pulled out during compilation.
I'm guessing that this optimization is not performed in debug mode, which would explain why you see a longer execution time there. It varies between runs simply because your OS scheduler (almost certainly) does not guarantee your process a fixed time slice.
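A quick sketch (not from the original answer) of one common way to keep such a loop from being optimized away, so the measurement behaves similarly in both build modes:
void f1()
{
    volatile int sink = 0;          // volatile forces each store to actually happen
    for (int i = 0; i < 10000; i++)
        sink += i;
}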
As for why you can use printf() when you have not explicitly included <cstdio>, it's because of the <iostream> include.
From looking at my headers in C:\Program Files\Microsoft Visual Studio 10.0\VC\include, I can see that iostream includes istream and ostream, both of which include ios, which includes xlocnum, which includes both cstdlib and cstdio.
