C Program to Store ttyUSB* devices in an array - linux

I have a C program which uses system() to list /dev/ttyUSB* devices. How can I store them in an array and process them?
#include <stdio.h>
#include <stdlib.h>
int main()
{
    system("ls /dev/ttyUSB*");
    printf("Done");
    exit(0);
}

Using system for these things is a bad idea.
First of all, you have to parse the output of ls, which you should avoid.
Apart from that, this is quite inefficient. Starting programs is rather slow, and here you are running a program (written in C) that starts another program (written in C) which calculates something and renders this something into a human-readable form, and then you have to parse the human-readable form to find out what the original something was...
A better way is to take a shortcut and "calculate the something" directly:
Check out glob(3).

Related

How to pass array(or vec) between Go and Rust?

I want to build a local service mainly based on Golang, and I'm using Rust to rewrite part of my code to make it run faster and use less memory. Everything went well until I tried to pass an array between them.
I've defined the same data structure in both Rust and Go, and they work well separately, but I don't know how to pass an array.
Here are my structure definitions:
// go part
type ContractItems []ContractItem // Here `ContractItem` is another plain struct with simple k-v structure.
// rust part
type ContractItems = Vec<ContractItem>;
// c header file
#include <stdarg.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>
typedef struct Vec_ContractItemPrice Vec_ContractItemPrice;
typedef struct Vec_ContractItemPrice ContractItemPrices;
struct ContractItemsPriceHandler contract_items_price_handler_from(ContractItemPrices c, int64_t contract_id);
I haven't used C for a long time, so the C header file was generated automatically by cbindgen.
And I received this panic report:
# command-line-arguments
cgo-gcc-prolog: In function '_cgo_fbbb6cfe6006_Cfunc_contract_items_price_handler_from':
cgo-gcc-prolog:46:22: error: field 'p0' has incomplete type
cgo-gcc-prolog:53:45: error: type of formal parameter 1 is incomplete
I expect to pass an array, or something array-like, that can be turned into a Vec or plain array on the Rust side and a slice or array on the Go side. Anyway, I'm not very good at C, so possible solutions on the Rust or Go side are more appreciated, since I'm not sure about my ability to debug additional C code.

Is there a linux command (along the lines of cat or dd) that will allow me to specify the offset of the read syscall?

I am working on a homework assignment for an operating systems class, and we are implementing basic versions of certain file system operations using FUSE.
The only operation that we are implementing that I couldn't test to a point I was happy with was the read() syscall. I am having trouble finding a way to get the read() syscall to be called with an offset other than 0.
I tried some of the commands (like dd, head, and tail) mentioned in answers to this question, but by the time that they reached my implementation of the read() syscall the offset was 0. To clarify, when I called these commands I received (at the calling terminal) the bytes in the file that were specified in the calls, but in another terminal that was displaying the syscalls that were being handled by FUSE, and hence my implementations, it displayed that my implementation of the read() syscall was always being called with offset 0 (and usually size of 4096, which I presume is the block size of the real linux file system I am using). I assume that these commands are making read() syscalls in blocks of 4096 bytes, then internally (i.e., within the dd, head, or tail command's code rather than through syscalls) modifying the output to what is seen on the calling terminal.
Is there any command (or script) I can run (or write and then run in the case of the script) that will allow me to test this syscall with varying offset values?
I figured out the issue I was having. For posterity, I will record the answer rather than just delete my question, because the answer wasn't necessarily easy to find.
Essentially, the issue occurred within FUSE. FUSE defaults to not using direct I/O (which is definitely the correct default to have, don't get me wrong), which is what resulted in the reads coming in 4096-byte chunks (these are the result of FUSE using a page cache of file contents [AKA a file content cache] in the kernel). For what I wanted to test (as explained in the question), I needed to enable direct I/O. There are a few ways of doing this, but the simplest way for me was to pass -o direct_io as a command line argument. This worked for me because I was using the fuse_main call in the main function of my program.
So my main function looked like this:
int main(int argc, char *argv[])
{
    return fuse_main(argc, argv, &my_operations_structure, NULL);
}
and I was able to call my program like this (I used the -d option in addition to the -o direct_io option in order to display the syscalls that FUSE was processing and the output/debug info from my program):
./prog_name -d -o direct_io test_directory
Then, I tested my program with the following simple test program (I know I don't do very much error checking, but this program is only for some quick and dirty tests):
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>

int main(int argc, char *argv[])
{
    FILE * file;
    char buf[4096];
    int fd;

    memset(&buf[0], 0, sizeof(buf));
    if (argc != 4)
    {
        printf("usage: ./readTest [size] [offset] [filename]\n");
        return 0;
    }
    file = fopen(argv[3], "r");
    if (file == NULL)
    {
        printf("Couldn't open file\n");
        return -1;
    }
    fd = fileno(file);
    pread(fd, (void *) buf, atoi(argv[1]), (off_t) atoi(argv[2]));
    printf("%s\n", buf);
    return 0;
}

lseek() on /dev/watchdog causes system crash

I'm new to this forum and I would like to ask the experts a question.
I wrote the following program ( part of a bigger thing, but this is the code that causes me trouble)
#include <unistd.h>
#include <fcntl.h>
int main()
{
    int fd;

    fd = open("/dev/watchdog", O_RDONLY);
    lseek(fd, 0, SEEK_END);
    return 0;
}
The thing that bothers me is that after I run this program as root, the system crashes after 20-30 seconds, and I can't seem to figure out why. This does not happen as a regular user.
Could you please enlighten me regarding this issue?
Thanks!
PS. Yes, I know that /dev/watchdog is a character file and it's not seekable, but this seems really weird.
It looks like /dev/watchdog is doing what it's supposed to do. Once you open /dev/watchdog, you have to keep writing to it; otherwise the system reboots. It is probably not the lseek that is crashing it, it is the lack of writing. See the Linux man pages for watchdog for more info.
When you ran as a non-root user, your open of /dev/watchdog probably just failed, so the system did not reboot. Your code is not checking for an error from open().

What happens when a parent process and a child process read from the same file and write to the same output file?

#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>

int fdrd, fdwt;
char c;

void rdwrt();

int main(int argc, char *argv[])
{
    if (argc != 3)
        exit(1);
    if ((fdrd = open(argv[1], O_RDONLY)) == -1)
        exit(1);
    if ((fdwt = creat(argv[2], 0666)) == -1)
        exit(1);
    fork();
    rdwrt();
    exit(0);
}

void rdwrt()
{
    for (;;)
    {
        if (read(fdrd, &c, 1) != 1)
            return;
        write(fdwt, &c, 1);
    }
}
This program forks a child process,then parent process and child process try to read the same input file and write to the same output file.
Execute this program like this:
[root#localhost]./a.out input output
where content of input file is:
abcdefghijklmnopqrstuvwxyz
abcdefghijklmnopqrstuvwxyz
abcdefghijklmnopqrstuvwxyz
abcdefghijklmnopqrstuvwxyz
abcdefghijklmnopqrstuvwxyz
abcdefghijklmnopqrstuvwxyz
I thought the output file should have the same number of characters as the input file, though the character order would probably differ, depending on how the two processes compete.
It turns out that the output file is:
abcdefghijklmnonqbcdefghijklwxyczdefjklpqrstuvwxyz
abcefgklmvwxefgklmnopqrstuvw
qrstuyz
abcdhijxyz
Actually, these two files have different numbers of characters:
[root#localhost]wc -m input output
162 input
98 output
Now I wonder why?
The contents of the output file will be difficult to predict because your program contains a race condition. Specifically, it depends on process scheduling.
Requested update:
This question is actually more interesting than it looked at first glance.
I'm going to make some predictions (tested successfully...)
On Unix-like systems1 ... then, yes, the number of characters will always be the same but the order will be difficult to predict.
You tagged your question linux and unix, and on those systems, all of which1 properly implement the fork model, both processes will share a single file position for both (forked) instances of fdrd, and they will share a second file position for both instances of fdwt.
If you could slow down time and watch the program run, at any point there are things you know and things you don't.
You don't know which child will win the race to do the next read, but you do know which character the winner will read, because they are always at the same file position. After the winner gets that next character, you still don't know who will read the following one, because the race is still on.
In fact, it is possible that the same process will win the race again, and again, because the scheduler probably won't want to run it for a very small time slice.
At any moment you also know that the next character will be written at EOF because, again, shared write position.
Now, you might ask: if both processes are always at the same input and output file positions, how does the output get scrambled at all?
Well, there is more than one race, one to the read and a second to the write. (Or one, kinda complicated race.) One child may have read its character but not written it when it gets time-sliced. So now it starts losing the race to the write statement and then probably to several iterations of read/write. So a character can get hung up in one child.
And finally, on merely-API-compatible C environments running over other operating systems, anything could happen. The OP's system appears to be one of these, or perhaps the test was flawed. My OSX system behaves as predicted.
1. "Real" UNIX, *BSD, OSX, or Linux.

Setting the thread /proc/PID/cmdline?

On Linux/NPTL, threads are created as some kind of process.
I can see that some of my processes have a weird cmdline:
cat /proc/5590/cmdline
hald-addon-storage: polling /dev/scd0 (every 2 sec)
Do you have an idea how I could do that for each thread of my process? That would be very helpful for debugging.
If you want to do this in a portable way, something that will work across multiple Unix variations, there are very few options available.
What you have to do is have your caller process call exec with the argv[0] argument pointing to the name that you would like to see in the process output, and the filename pointing to the actual executable.
You can try this behavior from the shell by using:
exec -a "This is my cute name" bash
That will replace the current bash process with one named "This is my cute name".
For doing this in C, you can look at the source code of sendmail or any other piece of software that has been ported extensively and find all the variations that are needed across operating systems to support this.
Some operating systems have a setproctitle(3) API; some others allow you to overwrite the contents of argv[0] and show that result.
argv points to writable strings. Just write stuff to them:
#include <string.h>
#include <unistd.h>
int main(int argc, char **argv)
{
    /* Note: the new name must not be longer than the original argv[0],
       or strcpy will overrun the buffer. */
    strcpy(argv[0], "Hello, world!");
    sleep(10);
    return 0;
}
Bah... the code is not that nice; the trick is to reuse the environ (here argv_buffer) buffer:
memset (argv_buffer[0] + len, 0, argv_size - len);
argv_buffer[1] = NULL;
Any better idea?
Is that working for different threads?
