I am working on a testing tool for nvme-cli (written in C, running on Linux).
For SSD validation purposes, I am looking for a custom command (for example, an I/O command that writes data, then reads the same data back, and finally compares whether the two buffers match).
In user space I need to invoke a minimum of two ioctl() calls, one with the write command (nvme_cmd_write) and another with the read command (nvme_cmd_read), and then compare the two buffer contents.
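To make the idea concrete, here is a rough sketch of that user-space sequence (untested; it assumes the NVME_IOCTL_SUBMIT_IO interface from <linux/nvme_ioctl.h>, a 512-byte LBA, and a namespace block device such as /dev/nvme0n1; note that the write to LBA 0 is destructive):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <stdint.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/nvme_ioctl.h>

#define LBA_SIZE 512

/* Write a test pattern to one LBA, read it back, compare the buffers.
 * Returns 0 on match, 1 on mismatch, -1 on ioctl failure. */
static int write_read_compare(int fd, unsigned long long slba)
{
    unsigned char wbuf[LBA_SIZE], rbuf[LBA_SIZE];
    struct nvme_user_io io;

    memset(wbuf, 0xA5, sizeof(wbuf));              /* test pattern */
    memset(rbuf, 0x00, sizeof(rbuf));

    memset(&io, 0, sizeof(io));
    io.opcode  = 0x01;                             /* nvme_cmd_write */
    io.slba    = slba;
    io.nblocks = 0;                                /* 0-based: one block */
    io.addr    = (unsigned long long)(uintptr_t)wbuf;
    if (ioctl(fd, NVME_IOCTL_SUBMIT_IO, &io) < 0)
        return -1;

    io.opcode  = 0x02;                             /* nvme_cmd_read */
    io.addr    = (unsigned long long)(uintptr_t)rbuf;
    if (ioctl(fd, NVME_IOCTL_SUBMIT_IO, &io) < 0)
        return -1;

    return memcmp(wbuf, rbuf, LBA_SIZE) ? 1 : 0;
}

int main(int argc, char **argv)
{
    int fd, rc;

    if (argc < 2) {
        fprintf(stderr, "usage: %s /dev/nvmeXnY\n", argv[0]);
        return 1;
    }
    fd = open(argv[1], O_RDWR);
    if (fd < 0) {
        perror("open");
        return 1;
    }
    rc = write_read_compare(fd, 0);                /* destructive: writes LBA 0 */
    if (rc < 0)
        perror("NVME_IOCTL_SUBMIT_IO");
    else
        puts(rc == 0 ? "buffers match" : "buffers differ");
    close(fd);
    return rc;
}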
The issue arises when I want to send these commands in parallel. At the block level (using ioctl()) we were not able to place the commands in different I/O submission queues.
So can we have a custom command (nvme_cmd_write_compare) sent from ioctl(), with a new module at the driver level to handle this new command?
Since I am new to NVMe/ioctl(), please correct me if there are any mistakes.
I want to know whether we can implement this.
I am using python to monitor a folder and check if files are being copied in and if so, replicate those to a new location.
I am using the following to monitor the folder:
fsmonitor
The issue I am facing is that I am unable to discern whether a file is in use and currently in the process of being written to disk. If it is, I want to wait until copying is complete and only then start copying it to the new location.
So how do I find out if a file is in use/open?
I have seen some suggestions here where I try to write to the file in question, and if that fails it indicates that the file is in use:
example answer (I've seen similar in python)
But I am reluctant to use such a method due to the fear that it might cause corruption and such issues.
Is there an alternative/safer way to do this? Or is testing write permissions safe?
Is anyone familiar with pywin32? Does it provide such tools? The site looks arcane, so I wonder whether it exposes the latest APIs provided by Windows; even fsmonitor mentioned above uses the same library, and I wonder if there are newer/more efficient ways to do this.
Currently, I am using psutil's proc.open_files() to loop through all processes and list their open files. If the files I am concerned about appear in that list, I wait and try again. However, this creates a humongous list of files and uses 12% of my CPU, so I desperately need an alternative.
In response to Adrian McCarthy:
I started out assuming that it is safe to act on whatever fsmonitor puts out, but consider the following output, which is for a single file copy:
0 86 0
create C:\Users\ScanUser\Pictures\syncTest dotnet-sdk-5.0.203-win-x64 - Copy.exe 3684bf38
create C:\Users\ScanUser\Pictures\syncTest dotnet-sdk-5.0.203-win-x64 - Copy.exe 3684bf38
0 86 0
modify C:\Users\ScanUser\Pictures\syncTest dotnet-sdk-5.0.203-win-x64 - Copy.exe a8cf3250
modify C:\Users\ScanUser\Pictures\syncTest dotnet-sdk-5.0.203-win-x64 - Copy.exe a8cf3250
0 160 0
modify C:\Users\ScanUser\Pictures\syncTest dotnet-sdk-5.0.203-win-x64 - Copy.exe caef5c64
modify C:\Users\ScanUser\Pictures\syncTest dotnet-sdk-5.0.203-win-x64.exe caef5c64
modify C:\Users\ScanUser\Pictures\syncTest dotnet-sdk-5.0.203-win-x64 - Copy.exe caef5c64
modify C:\Users\ScanUser\Pictures\syncTest dotnet-sdk-5.0.203-win-x64.exe caef5c64
So the conundrum is: at which 'modify' do I start copying the file? I can wait a few seconds/minutes to see whether another 'modify' appears for that file, but how do I decide how long to wait? A large file over SFTP may take 30 minutes to arrive, so I need something scalable.
Also, I would like to avoid making multiple copy actions for one file, since that would make the script inefficient.
This may help you:
check if a file is open in Python
Here is some code:
try:
    # try to open the file
    with open("file", "r") as file:
        pass  # some code here
except IOError:
    # if it throws an error, that means the file is in use
    pass
I think you're unnecessarily concerned about working with the file while another process still has it open.
On Windows, fsmonitor uses the ReadDirectoryChangesW mechanism. That means you'll get a notification about a change after it happens. So if a process writes to foo.log, you'll get a notification after the write operation has completed. (In fact, I think it's after the update of the directory metadata.)
To copy the file, you need read access. So just go ahead and open it for reading.
If it opens, then it's safe to read, even if another process has it open. You cannot corrupt a file by reading it even if another process is writing to it.
If it fails to open, then another process has it open and is intentionally preventing other processes from reading it (probably because they know they'll be actively updating it). In that case, you can try again later.
Trying to first check whether another process is using the file doesn't actually help because the answer could change between the moment you check and the moment you try to act on that information.
When you open a file, the system does the permission check and the opening under a mutex*, so the answer cannot change in between. There's no way for you to simulate that yourself from user-mode code. Once you have the file open, you can safely use it.
If you try to read from a file at the same moment another process tries to write to it, the system will ensure that the read will get the data as it was before the write or as it is after the write. It won't get a result that's a mixture of old and new.
That said, if you're reading the file with a bunch of small read operations while another process is writing to it with a bunch of small write operations, it's possible you might capture some intermediate state of the file. But that's okay. The original file is unharmed, and those writes will trigger another fsmonitor notification, so your code will start over and make another copy of the file.
* I'm using "mutex" in a generic sense: It uses some sort of synchronization mechanism, but it might not necessarily be a Windows Mutex object.
Using uClinux, we have one of two flash devices installed: a 1GB flash or a 2GB flash. I need to determine from user space which one is fitted.
The only way I can think of to solve this is to somehow get the device ID, which is down in the device driver code; for me that is in:
drivers/mtd/devices/m25p80.c
I have been using the mtdinfo command (which comes from the mtd-utils binaries, derived from mtdinfo.c/h). It reports various information about the flash partitions, including flash type 'nor', eraseblock size '65536', etc., but nothing that I can identify the chip with.
It's not very clear to me how I can get information from "driver-land" into "user-land". I am looking at extending the mtdinfo command to print more information, but there are many layers...
What is the best way to achieve this?
At the moment, I have found no easy way to do this without code changes. However I have found an easy code change (probably a bit of a hack) that allows me to get the information I need:
In the relevant file (in my case drivers/mtd/devices/m25p80.c) you can call one of the following:
dev_err(dev, "...");
dev_alert(dev, "...");
dev_warn(dev, "...");
dev_notice(dev, "...");
_dev_info(dev, "...");
These are defined in include/linux/device.h, so they are part of the Linux driver interface and can be used from any driver (the first argument is the relevant struct device pointer).
I found that dev_err() and dev_alert() both get printed "on screen" at run time. However, all of these device messages can be found in /var/log/messages. Since I added a message in the format dev_notice(dev, "JEDEC id %06x\n", jedecid);, I could find the device ID with the following command:
cat /var/log/messages | grep -i jedec
Obviously using dev_err() or dev_alert() is not quite right for this, but dev_notice() or even _dev_info() seems more appropriate.
Not yet marking this as the answer since it requires code changes - still hoping for a better solution if anyone knows of one...
Update
Although the above "solution" works, it's a bit crappy; it will certainly do the job and is good enough for mucking around. But I decided that if I am making code changes I may as well do it properly. So I have now implemented changes to add a sysfs interface, such that you can get the flash ID with the following command:
cat /sys/class/m25p80/m25p80_dev0/device_id
The main function calls required for this are (in this order):
alloc_chrdev_region(...)
class_create(...)
device_create(...)
sysfs_create_group(...)
This should give enough of a hint for anyone wanting to do the same, though I can expand on that answer if anyone wants it.
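To expand on that hint a little, here is a rough sketch of what the sysfs plumbing could look like (the names flash_jedec_id and m25p80_sysfs_init and the attribute layout are illustrative, error handling is trimmed, and class_create() is shown with the older two-argument form that matches kernels of this vintage):

#include <linux/module.h>
#include <linux/device.h>
#include <linux/fs.h>
#include <linux/sysfs.h>

static u32 flash_jedec_id;      /* filled in by the probe code that reads the JEDEC id */

static ssize_t device_id_show(struct device *dev,
                              struct device_attribute *attr, char *buf)
{
        return sprintf(buf, "%06x\n", flash_jedec_id);
}
static DEVICE_ATTR(device_id, 0444, device_id_show, NULL);

static struct attribute *m25p80_attrs[] = {
        &dev_attr_device_id.attr,
        NULL,
};
static const struct attribute_group m25p80_attr_group = {
        .attrs = m25p80_attrs,
};

static int m25p80_sysfs_init(void)
{
        dev_t devt;
        struct class *cls;
        struct device *dev;

        if (alloc_chrdev_region(&devt, 0, 1, "m25p80"))
                return -1;
        cls = class_create(THIS_MODULE, "m25p80");                 /* /sys/class/m25p80 */
        dev = device_create(cls, NULL, devt, NULL, "m25p80_dev0"); /* .../m25p80_dev0 */
        return sysfs_create_group(&dev->kobj, &m25p80_attr_group); /* .../device_id */
}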
I have written a Linux SCSI low-level driver for a CD-ROM. I am able to receive commands one by one from the application, and I am testing it using sg3_utils.
Now I want to receive more than one command while still serving the first one.
For this I tried changing the struct scsi_host members can_queue and cmd_per_lun to larger values such as 40, but I am still not able to receive multiple commands.
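For reference, this is roughly where those values live in my host template (a sketch only; the template name is a placeholder and the real callbacks such as .queuecommand are omitted):

#include <linux/module.h>
#include <scsi/scsi_host.h>

static struct scsi_host_template my_cdrom_template = {
        .module       = THIS_MODULE,
        .name         = "my_cdrom",
        .this_id      = -1,
        .sg_tablesize = SG_ALL,
        .can_queue    = 40,     /* commands the host may have outstanding */
        .cmd_per_lun  = 40,     /* commands queued to a single LUN */
        /* .queuecommand and the other callbacks come from the existing driver */
};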
Is there any way to test multiple-command reception in existing drivers such as scsi_debug?
Please give a little more information... when you say "receive commands", that implies you are a target. Maybe you mean "sending commands"?
Let's take, for example, the "top" application, which displays system information and periodically updates it.
I want to run it using node.js and display that information (and updates!).
Code I've come up with:
#!/usr/bin/env node
var spawn = require('child_process').spawn;
var top = spawn('top', []);

top.stdout.on('readable', function () {
    console.log("readable");
    console.log('stdout: ' + top.stdout.read());
});
It doesn't behave the way I expected. In fact it produces no actual top output:
readable
stdout: null
readable
stdout:
readable
stdout: null
And then exits (that is also unexpected).
The top application is just an example. The goal is to proxy those updates through Node and display them on the screen (the same way as running top directly from the command line).
My initial goal was to write a script to send a file using scp. I did that, and then noticed that I was missing the progress information which scp itself displays. I looked around at the scp node modules and they do not proxy it either. So I backtracked to a common application like top.
top is an interactive console program designed to be run against a live pseudo-terminal.
As to your stdout reads, top sees that its stdin is not a tty and exits with an error, hence no output on stdout. You can see this happen in the shell: if you run echo | top, it will exit because stdin is not a tty.
Even if it were actually running, though, its output data is going to contain control characters for manipulating a fixed-dimension console (like "move the cursor to the beginning of line 2"). It is an interactive user interface and a poor choice as a programmatic data source. "Screen scraping" and interpreting this data to extract meaningful information is going to be quite difficult and fragile. Have you considered a cleaner approach, such as getting the data you need out of /proc/meminfo and the other special files the kernel exposes for this purpose? Ultimately top gets all of its data from readily-available special files and system calls, so you should be able to tap into data sources that are convenient for programmatic access instead of trying to screen-scrape top.
Now of course, top has analytics code to compute averages and so forth that you may have to re-implement, so both screen scraping and going through clean data sources have pros and cons, and aspects that are easy and difficult. But my $0.02 would be to focus on good data sources instead of trying to screen-scrape a console UI.
Other options/resources to consider:
The free command such as free -m
vmstat
and other commands described in this article
the expect program is designed to help automate console programs that expect a terminal
And just to be clear, yes it is certainly possible to run top as a child process, trick it into thinking there's a tty and all the associated environment settings, and get at the data it is writing. It's just extremely complicated and is analogous to trying to get the weather by taking a photo of the weather channel on a TV screen and running optical character recognition on it. Points for style, but there are easier ways. Look into the expect command if you need to research more about tricking console programs into running as subprocesses.
I have an assignment in my Operating Systems class to make a simple pseudo-stack Linux device driver. So, for example, if I were to write "Hello" to the device driver, it would return "olleH" when I read from it. We have to construct a tester program in C that calls the read/write functions of the device driver to demonstrate that it works in a FILO manner. I have done all of this, and my tester program, in my opinion, demonstrates the purpose of the assignment; however, out of curiosity, in Bash I executed the following commands:
echo "Test" > /dev/driver
cat /dev/driver
where /dev/driver is the special file I created using mknod. However, when I do this, I get a black screen full of errors. After I switch back to the GUI view using Ctrl+Alt+F7, I see that Bash has returned "Killed".
Does anyone know what could be causing this? I am confused, since my tester program calls open(), read(), and write() and everything functions as it should.
If I need to show some code, just ask.
The function in your device driver that writes into the buffer you provide (i.e. your read handler) is most likely causing this issue.
To debug, you can do the following:
First, make sure the read part is fine. You can printk your internal buffer after you read from input to ensure this.
Second, in your write function, printk some information instead of actually writing anything and make sure everything is fine.
Also, make sure the function that fills the user's buffer makes it clear when the data has ended. I'm not particularly sure about device drivers, but you either need to return 0 as the number of bytes read when read is called again with nothing left, or set an eof variable (if that is one of the arguments to your function); otherwise cat will keep reading forever.
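To illustrate the EOF point, here is a minimal sketch of a read handler that returns the stored (reversed) data once and then signals end-of-file, so that cat /dev/driver terminates cleanly (the buffer and variable names are illustrative, not taken from your driver):

#include <linux/fs.h>
#include <linux/uaccess.h>

static char   stack_buf[4096];   /* holds the reversed data */
static size_t stack_len;         /* how many bytes are currently stored */

static ssize_t driver_read(struct file *filp, char __user *buf,
                           size_t count, loff_t *ppos)
{
        size_t avail;

        if (*ppos >= stack_len)         /* nothing left: return 0 => EOF, cat stops */
                return 0;

        avail = stack_len - (size_t)*ppos;
        if (avail > count)
                avail = count;

        if (copy_to_user(buf, stack_buf + *ppos, avail))
                return -EFAULT;

        *ppos += avail;
        return avail;                   /* number of bytes actually read */
}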