What does "need_resched:" mean in the Linux kernel C function schedule()?

need_resched:
preempt_disable();
cpu = smp_processor_id();
rq = cpu_rq(cpu);
rcu_note_context_switch(cpu);
prev = rq->curr;
switch_count = &prev->nivcsw;
release_kernel_lock(prev);
What I would like to ask is: what is the role of the "need_resched:" label?
The Linux kernel version is 2.6.35.3.

need_resched: is simply a label. Later in the code you will find:
if (need_resched())
goto need_resched;
I.e., if the rescheduling flag is set (which is what need_resched() tests), execution jumps back to this point in the code and the scheduling pass runs again.
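The label/goto construct here is plain C control flow, nothing scheduler-specific. A minimal standalone sketch of the same retry structure (not kernel code; need_resched_stub() is a made-up stand-in for the kernel's flag test):

```cpp
#include <cassert>

// Hypothetical stand-in for need_resched(): pretend rescheduling is
// requested twice before the flag finally clears.
static int pending = 2;
static bool need_resched_stub() { return pending-- > 0; }

int schedule_like() {
    int passes = 0;
need_resched:                 // same idea as the label in schedule()
    ++passes;                 // body of one scheduling pass
    if (need_resched_stub())
        goto need_resched;    // flag still set: run the pass again
    return passes;
}
```

Here schedule_like() re-runs its body until the stub flag clears, mirroring how schedule() loops back while need_resched() keeps returning true.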

Related

QThread isRunning always true

I am using Qt 5.11.2 on both Linux and Windows. I am trying to keep my UI from freezing during long-running functions. I started using QThread and achieved exactly what I wanted on Windows. However, when I tested the same code on Linux (RHEL 7), the threads never appeared to finish.
Here is what I tried:
void MainWidget::Configure_BERT_DSO(bool isOptimization, double lineRate, int Scaling)
{
QThread *bertThread = QThread::create([this, lineRate, Scaling]{ ConfigureBert(lineRate, Scaling); });
QThread *dsoThread = QThread::create([this, lineRate]{ ConfigureDSO(lineRate); });
bertThread->setObjectName("My Bert Thread");
dsoThread->setObjectName("My DSO Thread");
bertThread->start();
dsoThread->start();
while(bertThread->isRunning() || dsoThread->isRunning()) // even tried isFinished()
{
qApp->processEvents();
}
bertThread->exit();
dsoThread->exit();
delete bertThread;
delete dsoThread;
}
On Windows the while loop exits after a while and both threads execute correctly with no problem.
On Linux, to make sure that both functions execute correctly, I added qDebug() at the start and end of each function, and they are all reached at the expected time. But the problem is that isRunning() never becomes false (and likewise isFinished() never becomes true), so my loop gets stuck.
Thread: QThread(0x1d368b0, name = "My Bert Thread") started.
Thread: QThread(0x1d368e0, name = "My DSO Thread") started.
Thread: QThread(0x1d368b0, name = "My Bert Thread") finished.
Thread: QThread(0x1d368e0, name = "My DSO Thread") finished.
Is this platform-dependent, or is there something that I might be missing?
EDIT
I also tried bertThread->wait() and dsoThread->wait() to check whether they return after my functions finish, but the first one I call never returns, although both functions reach their end successfully.
Your help is much appreciated
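As an aside on the pattern in the question (not from the original post): spinning on isRunning() while pumping the event loop can be replaced with blocking joins. A minimal standard-library sketch of the join-instead-of-poll idea, with hypothetical workers standing in for ConfigureBert/ConfigureDSO:

```cpp
#include <atomic>
#include <cassert>
#include <thread>

std::atomic<int> done{0};   // stands in for the qDebug() "finished" traces

// Hypothetical stand-ins for ConfigureBert / ConfigureDSO.
void configure_bert() { ++done; }
void configure_dso()  { ++done; }

void configure_both() {
    std::thread bert(configure_bert);
    std::thread dso(configure_dso);
    bert.join();   // blocks until the worker returns: no polling loop
    dso.join();    // both workers are guaranteed finished after this
}
```

With QThread the equivalent blocking call is wait(); the sketch only illustrates why a busy loop on a "still running" flag is fragile compared with an explicit join.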

How does a ioctl() call the driver code

I am working on a testing tool for nvme-cli (written in C; it runs on Linux).
For SSD validation purposes, I am looking for a custom command (e.g. an I/O command that writes, then reads the same location, and finally compares whether both sets of data match).
For reads, the ioctl() function is used as shown in the code below.
struct nvme_user_io io = {
.opcode = opcode,
.flags = 0,
.control = control,
.nblocks = nblocks,
.rsvd = 0,
.metadata = (__u64)(uintptr_t) metadata,
.addr = (__u64)(uintptr_t) data,
.slba = slba,
.dsmgmt = dsmgmt,
.reftag = reftag,
.appmask = appmask,
.apptag = apptag,
};
err = ioctl(fd, NVME_IOCTL_SUBMIT_IO, &io);
Can I trace where exactly the control of execution goes, in order to understand the read path?
Also I want to have another command that looks like
err = ioctl(fd,NVME_IOCTL_WRITE_AND_COMPARE_IO, &io);
so that I can internally do a write, then read the same location, and finally compare both sets of data to ensure that the disk contains only the data that I wanted to write.
Since I am new to NVMe and ioctl(), please correct me if there are any mistakes.
nvme_io() is the main command handler; it accepts as a parameter the NVMe opcode that you want to send to your device. According to the standard, there are separate commands (opcodes) for read, write and compare. You could either send those commands separately, or add a vendor-specific command to calculate what you need.
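To illustrate the write-then-read-then-compare flow the answer describes, here is a sketch that simulates the three steps against an in-memory buffer. In a real tool each stub would instead fill a struct nvme_user_io and issue ioctl(fd, NVME_IOCTL_SUBMIT_IO, &io) with the corresponding NVMe opcode (Write 01h, Read 02h, Compare 05h per the NVMe spec); the stub names here are invented for the demo:

```cpp
#include <cassert>
#include <cstring>
#include <vector>

// In-memory stand-in for one LBA range on the device.
std::vector<unsigned char> disk(4096, 0);

// Each helper models one NVMe command that a real tool would submit
// via NVME_IOCTL_SUBMIT_IO with opcode 0x01 / 0x02 / 0x05.
void nvme_write_stub(const unsigned char *buf, std::size_t n) {
    std::memcpy(disk.data(), buf, n);
}
void nvme_read_stub(unsigned char *buf, std::size_t n) {
    std::memcpy(buf, disk.data(), n);
}
bool nvme_compare_stub(const unsigned char *buf, std::size_t n) {
    return std::memcmp(disk.data(), buf, n) == 0;
}

bool write_and_compare(const unsigned char *data, std::size_t n) {
    nvme_write_stub(data, n);                 // step 1: write
    std::vector<unsigned char> back(n);
    nvme_read_stub(back.data(), n);           // step 2: read it back
    return nvme_compare_stub(data, n) &&      // step 3: device-side compare
           std::memcmp(back.data(), data, n) == 0;
}
```

The point is the sequencing: three separate commands through the same submission path, exactly as the answer suggests, rather than one combined kernel-side operation.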

Torch - Multithreading to load tensors into a queue for training purposes

I would like to use the threads library (or perhaps parallel) for loading/preprocessing data into a queue, but I am not entirely sure how it works. In summary:
Load data (tensors), pre-process the tensors (this takes time, hence why I am here) and put them in a queue. I would like to have as many threads as possible doing this so that the model is not waiting, or at least not waiting for long.
Take the tensor at the front of the queue, forward it through the model, and remove it from the queue.
I don't really understand the example at https://github.com/torch/threads well enough. A hint or example of where I would load data into the queue and train would be great.
EDIT 14/03/2016
In this example "https://github.com/torch/threads/blob/master/test/test-low-level.lua" using a low level thread, does anyone know how I can extract data from these threads into the main thread?
Look at this multi-threaded data provider:
https://github.com/soumith/dcgan.torch/blob/master/data/data.lua
It runs this file in the thread:
https://github.com/soumith/dcgan.torch/blob/master/data/data.lua#L18
by calling it here:
https://github.com/soumith/dcgan.torch/blob/master/data/data.lua#L30-L43
And afterwards, if you want to queue a job into the thread, you provide two functions:
https://github.com/soumith/dcgan.torch/blob/master/data/data.lua#L84
The first one runs inside the thread, and the second one runs in the main thread after the first one completes.
Hopefully that makes it a bit more clear.
If Soumith's examples in the previous answer are not very easy to use, I suggest you build your own pipeline from scratch. Here is an example of two synchronized threads: one for writing data and one for reading data.
local t = require 'threads'
t.Threads.serialization('threads.sharedserialize')
local tds = require 'tds'
local dict = tds.Hash() -- only local variables work here, and only tables or tds.Hash()
dict[1] = torch.zeros(4)
local m1 = t.Mutex()
local m2 = t.Mutex()
local m1id = m1:id()
local m2id = m2:id()
m1:lock()
local pool = t.Threads(
1,
function(threadIdx)
end
)
pool:addjob(
function()
local t = require 'threads'
local m1 = t.Mutex(m1id)
local m2 = t.Mutex(m2id)
while true do
m2:lock()
dict[1] = torch.randn(4)
m1:unlock()
print ('W ===> ')
print(dict[1])
collectgarbage()
collectgarbage()
end
return __threadid
end,
function(id)
end
)
-- Code executing on master:
local a = 1
while true do
m1:lock()
a = dict[1]
m2:unlock()
print('R --> ')
print(a)
end
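The two-mutex handoff above is language-agnostic. A sketch of the same writer/reader lock-step in C++ (bounded iterations replace the infinite loops so it terminates; note the torch Mutex is unlocked by a different thread than the one that locked it, which std::mutex forbids, so a condition variable expresses the handoff instead):

```cpp
#include <cassert>
#include <condition_variable>
#include <mutex>
#include <thread>

std::mutex m;
std::condition_variable cv;
bool ready = false;     // true: reader's turn; false: writer's turn
int shared_value = 0;   // plays the role of dict[1]

int run(int iterations) {
    std::thread writer([&] {
        for (int i = 1; i <= iterations; ++i) {
            std::unique_lock<std::mutex> lk(m);
            cv.wait(lk, [] { return !ready; });  // wait for an empty slot
            shared_value = i;                    // "dict[1] = torch.randn(4)"
            ready = true;
            cv.notify_all();
        }
    });
    int sum = 0;
    for (int i = 0; i < iterations; ++i) {
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, [] { return ready; });       // wait for fresh data
        sum += shared_value;                     // "a = dict[1]"
        ready = false;
        cv.notify_all();
    }
    writer.join();
    return sum;
}
```

Each side blocks until the other has finished its turn, so every value is written exactly once and read exactly once, which is the guarantee the Lua example's m1/m2 pair provides.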

Node.js isr function not working on Galileo board

I want to use the Node.js mraa library on a Galileo board.
I need to set up an interrupt.
I attempt this by:
var param=1;
var myLed = new mraa.Gpio(2);
myLed.dir(mraa.DIR_IN); //set the gpio direction to input
myLed.isr(mraa.EDGE_BOTH,function f(x){},param );
I get this error:
in method 'Gpio_isr', argument 3 of type 'void (*)(void *)'
The documentation for this function states
mraa_result_t isr ( Edge mode,
void(*)(void *) fptr,
void * args
)
Sets a callback to be called when pin value changes
Parameters
mode The edge mode to set
fptr Function pointer to function to be called when interrupt is triggered
args Arguments passed to the interrupt handler (fptr)
Returns
Result of operation
I don't know how to set up the function's parameters...
There is an open issue about this. The current response is that the isr method is not currently working.
Link:
https://github.com/intel-iot-devkit/mraa/issues/110
As pointed out in the issue, you can now do:
var m = require('mraa')
function h() {
console.log("HELLO!!!!")
}
x = new m.Gpio(14)
x.isr(m.EDGE_BOTH, h)
You'll need to be on v0.5.4-134-gd6891e8 or later from the master branch. You can use npm to get the correct version installed on your board, or just compile from sources (you'll need SWIG 3.x):
npm install mraa

Why is the kernel using the default block driver instead of my driver's code?

I wrote a block driver program which creates a dummy block device (sbd0). I registered all device operations for that block device: (Refer to include/linux/blkdev.h in 2.6.32 kernel source)
static struct block_device_operations sbd_ops = {
.owner = THIS_MODULE,
.open = sbd_open,
.release = sbd_close,
.ioctl = sbd_ioctl,
.getgeo = sbd_getgeo,
.locked_ioctl = sbd_locked_ioctl,
.compat_ioctl = sbd_compat_ioctl,
.direct_access = sbd_direct_access,
.media_changed = sbd_media_changed,
.revalidate_disk = sbd_revalidate_disk
};
I compiled the driver program. I inserted the module and /dev/sbd0 was created. Now I want to test my driver code. So I wrote an application as below.
fd = open("/dev/sbd0", O_RDONLY);
retval = ioctl(fd, BLKBSZGET, &blksz); //trying to get logical block size
The output is: 4096
This surprised me: I didn't implement an ioctl handler for BLKBSZGET, yet the call didn't invoke my sbd_ioctl; instead a default implementation handled it and gave me the result. For open and close calls it did execute my sbd_open and sbd_close. Then I tried:
retval = ioctl(fd, HDIO_GETGEO, &geoinfo);
It invoked sbd_getgeo but I thought it would invoke sbd_ioctl.
Here are my questions:
I implemented a driver and created a device. If I perform any operation on that device, shouldn't it invoke my driver's code? How can it use a few of my driver functions but default implementations for the rest?
ioctl(fd, HDIO_GETGEO, ...) didn't invoke my .ioctl handler, but it invoked .getgeo. How is this possible?
The ioctl dispatching is handled by the blkdev_ioctl function, which will process some of the ioctls directly, without calling into your driver's specific routine.
For HDIO_GETGEO, it calls your driver's getgeo function directly (from kernel 3.13.6, doesn't appear to have changed much since 2.6.32):
[...]
/*
* We need to set the startsect first, the driver may
* want to override it.
*/
memset(&geo, 0, sizeof(geo));
geo.start = get_start_sect(bdev);
ret = disk->fops->getgeo(bdev, &geo); /* <- here */
[...]
For BLKBSZGET, it calls block_size(bdev), which simply returns bdev->bd_block_size.
You'll find blkdev_ioctl in block/ioctl.c if you need to know what happens for other ioctls.
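The dispatch pattern blkdev_ioctl uses — answer some commands generically, route some to a specific driver callback, and only fall through to the driver's generic ioctl for the rest — can be sketched abstractly. The command numbers and ops struct below are invented for the demo, not the kernel's:

```cpp
#include <cassert>

// Hypothetical command numbers standing in for BLKBSZGET / HDIO_GETGEO.
enum { CMD_GET_BLKSZ = 1, CMD_GET_GEO = 2 };

// Stand-in for struct block_device_operations: per-driver callbacks.
struct block_ops {
    int (*getgeo)(int *heads);   // like .getgeo
    int (*ioctl)(int cmd);       // like .ioctl, the catch-all
};

struct blkdev {
    int block_size;
    const block_ops *fops;
};

// Mimics blkdev_ioctl: some commands are answered from generic device
// state, some are routed to a dedicated driver callback, and only the
// rest reach the driver's generic ioctl handler.
int blkdev_ioctl_sim(blkdev *bdev, int cmd, int *out) {
    switch (cmd) {
    case CMD_GET_BLKSZ:
        *out = bdev->block_size;          // generic: driver never sees it
        return 0;
    case CMD_GET_GEO:
        return bdev->fops->getgeo(out);   // routed to .getgeo, not .ioctl
    default:
        return bdev->fops->ioctl(cmd);    // everything else: .ioctl
    }
}

// A toy driver, standing in for the sbd driver's callbacks.
int sbd_getgeo_sim(int *heads) { *heads = 16; return 0; }
int sbd_ioctl_sim(int) { return -1; }     // "unsupported" for the demo
const block_ops sbd_ops_sim = { sbd_getgeo_sim, sbd_ioctl_sim };
```

With blkdev blk{4096, &sbd_ops_sim}, CMD_GET_BLKSZ is answered without touching the driver, and CMD_GET_GEO lands in the getgeo callback rather than the generic ioctl — exactly the behavior the question observed.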
