How to read /dev/tty in WSL2?

I am trying to read the terminal from my code, but opening /dev/tty fails:
FILE *fp = fopen("/dev/tty", "r");
if (!fp) {
    perror("fopen /dev/tty");  /* report why the open failed */
    return NULL;
}
The file pointer is always null.
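One way to see why the open fails is to run the program under strace and look at the errno on the open call (a sketch; the program name and the exact errno shown are illustrative):
$ strace -f -e trace=open,openat ./myprog 2>&1 | grep tty
openat(AT_FDCWD, "/dev/tty", O_RDONLY) = -1 ENXIO (No such device or address)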
If I just type tty in the terminal, I get this:
$ tty
/dev/pts/7
$ echo "blah" > /dev/tty
blah
$ ls -rtl /dev/tty
crw-rw-rw- 1 root root 5, 0 Feb 7 14:57 /dev/tty
So the device seems to be working fine from the shell, but not from my code. Has anyone experienced this before and could perhaps give me a quick pointer?
I've checked this WSL issue, but mine doesn't look permission-related:
https://github.com/microsoft/WSL/issues/617
Thanks!

Related

How do you disable cores on an RPi running Ubuntu 18.04?

Everything below was done as the root user.
I tried the solution stated here and got this:
$ echo 0 > /sys/devices/system/cpu/online
bash: /sys/devices/system/cpu/online: Permission denied
I then changed the permissions on /sys/devices/system/cpu/online and got this:
$ sudo chmod 777 /sys/devices/system/cpu/online
$ echo 0 > /sys/devices/system/cpu/online
bash: echo: write error: Input/output error
This raspberrypi forum thread does not help, since there is no /boot/cmdline.txt file on my system.
The file /sys/devices/system/cpu/online contains the text 0-3 and the file /sys/devices/system/cpu/offline contains nothing, so I assume that 0 in /sys/devices/system/cpu/online would mean only core 0 is online, 0-1 would mean cores 0 and 1 are online, and so on. I've also tried the above commands with 0-1 instead of 0 and got the same results.
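For reference, the aggregate /sys/devices/system/cpu/online file is generated by the kernel and is read-only, which is where the I/O error comes from; the writable knobs are the per-CPU files (a sketch, assuming the kernel was built with CPU hotplug support):
$ echo 0 > /sys/devices/system/cpu/cpu3/online
$ cat /sys/devices/system/cpu/online
0-2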

Reading the value of stdout and stderr

I am trying to read the values of stdout and stderr using the following commands:
cat /dev/stderr
cat /dev/stdout
But the command just keeps running.
Use a FIFO Instead
Technically, /dev/stdout and /dev/stderr are really file descriptors, not FIFOs or named pipes. On my system, they're actually just symlinks to /dev/fd/1 and /dev/fd/2. Those descriptors are typically linked to your TTY or PTY. So, you can't really read from them the way you're trying to do.
What you probably want is the mkfifo utility. For example, to write to standard error, and then read it from another command or script:
# Create a named pipe.
$ mkfifo error
# See what a named pipe looks like in the filesystem.
$ ls -l error
prw-r--r-- 1 user staff 0 May 13 01:47 error|
# In a subshell: redirect stderr to the error FIFO, then point stdout
# at stderr (which now refers to the FIFO). Background the write so it
# doesn't block. Then read from the FIFO until empty, which ends both tasks.
$ ( echo foo 2> error 1>&2 & ); cat error
foo
As a more verbose but less contorted example, consider this:
$ ruby -e 'STDERR.puts "Some error."' 2> error & cat error
[1] 32458
Some error.
[1]+ Done ruby -e 'STDERR.puts "Some error."' 2> error
In this example, Ruby uses standard error to write a string to the error FIFO we created earlier. The write happens in the background, but blocks until the FIFO is emptied by the cat command. Once the FIFO is emptied, the background job completes.
The FIFO is just a special type of file, so you can remove it when you're done with rm error.
I don't really know what you mean by reading the value of stdout or stderr, but I can tell you why that cat /dev/stderr keeps running: it's waiting for data to read from the fd.
On the systems I can test this on, both output fd's are connected to the terminal just like stdin is, and reading from them works just fine. On Linux, we can view this with:
$ ls -l /proc/self/fd
lrwx------ 1 nobody nogroup 64 May 13 21:47 0 -> /dev/pts/1
lrwx------ 1 nobody nogroup 64 May 13 21:47 1 -> /dev/pts/1
lrwx------ 1 nobody nogroup 64 May 13 21:47 2 -> /dev/pts/1
lr-x------ 1 nobody nogroup 64 May 13 21:47 3 -> /proc/44664/fd
The permission bits at the start of the line show that all of the fd's 0 to 2 are opened for both reading and writing.
Reading from them works in practice, too (the first foo and the asdf are typed input):
$ cat /dev/stderr
foo
foo
$ read -u 2 ; echo "reply: $REPLY"
asdf
reply: asdf
Though really, it might be better to just open /dev/tty and read from there if we want to interact on the terminal even when there are redirections in place. (That's what ssh does to ask the password, for example.)
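A small sketch of that pattern (the script name and prompt are illustrative): even with stdin and stdout redirected, the script can still talk to the terminal through /dev/tty:
$ cat > ask.sh <<'EOF'
#!/bin/sh
printf 'Enter value: ' > /dev/tty
read value < /dev/tty
echo "got: $value"
EOF
$ sh ask.sh < /dev/null > /tmp/out
Enter value: 42
$ cat /tmp/out
got: 42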

Unable to write to /dev/* files

I'm writing a basic char device driver for the Linux kernel.
For this, the code flow I have considered is as follows:
alloc_chrdev_region() -> to dynamically allocate a major number
class_create() -> to create a device class in sysfs
device_create() -> to create the device node under /dev/
cdev_init() -> to initialize the char device structure
cdev_add() -> to add my device structure to the kernel
I have added read, write, open, release methods in code.
When I try to read the device file under /dev/, my read method is called.
But when I try to write to the device file using echo, it gives the error
"bash: /dev/scull: Permission denied"
I have checked the file's permissions using ls -l, and I have permission to read and write this file.
This problem occurs for every device driver module I have written, yet it works well on another machine.
I'm working on Ubuntu 15.10 with a custom-compiled 4.3.0 kernel.
The result of ls -l /dev/scull:
crw------- 1 root root 247, 0 Dec 30 18:06 /dev/scull
The exact command I used to write to the file:
$ sudo echo 54 > /dev/scull
The source code for the write implementation:
ssize_t scull_write(struct file *filp, const char __user *buf,
                    size_t count, loff_t *f_pos)
{
    pr_alert("Device Written\n");
    return count; /* claim the whole buffer so userspace does not retry the write */
}
The behavior I'm seeking: I should be able to see 'Device Written' in the dmesg output.
I assume that you are normally not root in your bash shell. In that case, this command line
sudo echo 54 > /dev/scull
does not do what you think. The command is executed in two steps:
Bash sets up the output redirection, i.e., it tries to open /dev/scull with the current user's privileges.
The command sudo echo 54 is executed, with stdout connected to the file.
Since you have no write permission as a non-root user, the first step fails and bash reports
"bash: /dev/scull: Permission denied"
You must already be root to set up the output redirection. Thus execute
sudo -i
which gives you an interactive shell with root privileges. Then you can execute
echo 54 > /dev/scull
within that root shell.
I know the thread is old, but in case someone wants an alternative method that avoids switching to the root user, here is the solution:
sudo bash -c 'echo "54" > /dev/my_dev'
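Another common pattern avoids spawning a root shell entirely: sudo runs tee with root privileges, and tee itself opens the device for writing:
echo 54 | sudo tee /dev/scull > /dev/null
The > /dev/null part only silences tee's copy on stdout; the write to the device still happens as root.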
I wanted to note that on your system only root (the file owner) has read/write permissions. Your (normal) user account does not! So another (fast) solution would be to give all users read/write permissions.
Probably this is not the safest solution! Only do this in your test environment!
sudo chmod a+rw /dev/scull
Now you can test your module from your user account (without sudo):
echo "hello, world!" > /dev/scull
cat < /dev/scull
You can also do this by becoming root with the command
sudo su
and then changing into the /dev directory and running your command (to write data into /dev/scull):
cd /dev
echo 54 > scull

bash redirect to /dev/stdout: Not a directory

I recently upgraded from CentOS 5.8 (with GNU bash 3.2.25) to CentOS 6.5 (with GNU bash 4.1.2). A command that used to work on CentOS 5.8 no longer works on CentOS 6.5. It is a silly example with an easy workaround, but I am trying to understand what is going on under the bash hood that causes the different behavior. Maybe it is a new bug in bash 4.1.2, or an old bug that was fixed, making the new behavior expected?
CentOS 5.8:
(echo "hi" > /dev/stdout) > test.txt
echo $?
0
cat test.txt
hi
CentOS 6.5:
(echo "hi" > /dev/stdout) > test.txt
-bash: /dev/stdout: Not a directory
echo $?
1
Update: It doesn't look like this problem is related to the CentOS version. I have another CentOS 6.5 machine where the command works. I have eliminated environment variables as the culprit. Any ideas?
On all the machines these commands give the same output:
ls -ld /dev/stdout
lrwxrwxrwx 1 root root 15 Apr 30 13:30 /dev/stdout -> /proc/self/fd/1
ls -lL /dev/stdout
crw--w---- 1 user1 tty 136, 0 Oct 28 23:21 /dev/stdout
Another update: It seems the sub-shell is inheriting the redirected stdout of the parent shell. That is not too surprising, I guess, but why does it work on one machine and fail on the other when both run the same bash version?
On the working machine:
((ls -la /dev/stdout; ls -la /proc/self/fd/1) >/dev/stdout) > test.txt
cat test.txt
lrwxrwxrwx 1 root root 15 Aug 13 08:14 /dev/stdout -> /proc/self/fd/1
l-wx------ 1 user1 aladdin 64 Oct 29 06:54 /proc/self/fd/1 -> /home/user1/test.txt
I think Yu Huang is right: redirecting to /tmp works on both machines. Both machines use an Isilon NAS for the /home mount, but one probably has a slightly different filesystem version or configuration that causes the error. In conclusion, redirecting to /dev/stdout should be avoided unless you know the parent process will not redirect it.
UPDATE: This problem arose after an upgrade from NFS v3 to v4. After downgrading back to v3, this behavior went away.
Good morning, user1999165 :)
I suspect it's related to the underlying filesystem. On the same machine, try:
(echo "hi" > /dev/stdout) > /tmp/test.txt
/tmp/ should be a Linux-native (ext3 or similar) filesystem.
On many Linux systems, /dev/stdout is an alias (a link or similar) for file descriptor 1 of the current process. Seen from C, the global stdout is connected to file descriptor 1.
That means echo foo > /dev/stdout is the same as echo foo 1>&1 or a redirect of a file descriptor to itself. I wouldn't expect this to work since the semantics are "close descriptor to redirect and then clone the new target". So to make it work, there must be special code which notices that the two file descriptors are actually the same and which skips the "close" step.
My guess is that on the system where it fails, BASH isn't able to figure out /dev/stdout == fd1 and actually closes it. The error message is weird, though. OTOH, I don't know any other common error which would fit better.
Note: I tried to replicate your problem on Kubuntu 14.04 with BASH 4.3.11 and here the redirect works (i.e., I don't get an error). Maybe it's a bug in BASH 4.1 that has been fixed since.
I was seeing parallel issues writing piped stdin input to AWS EFS (NFSv4). (I am using CentOS 6.8, so unfortunately I cannot upgrade bash to 4.2.)
I asked AWS support about this; here's their response:
This problem is not related to EFS itself, the problem here is with bash. This issue was fixed in bash 4.2 or later in RHEL.
To avoid this problem, please try to create a file handle before running the echo command within a subshell; after that, the same file handle can be used as a redirect, like the example below:
exec 5> test.txt; (echo "hi" >&5); cat test.txt
hi
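Once done, the extra descriptor can be released with:
exec 5>&-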

How to redirect output of an already running process [duplicate]

This question already has answers here:
Redirect STDERR / STDOUT of a process AFTER it's been started, using command line?
Normally I would start a command like
longcommand &
I know you can redirect its output by doing something like
longcommand > /dev/null
for instance to get rid of the output, or
longcommand > output.log 2>&1
to capture it.
But I sometimes forget, and was wondering if there is a way to capture or redirect after the fact.
longcommand
ctrl-z
bg 2>&1 > /dev/null
or something like that, so I can continue using the terminal without messages popping up on it.
See Redirecting Output from a Running Process.
First, I run the command cat > foo1 in one session and verify that data from stdin is copied to the file. Then, in another session, I redirect the output.
First, find the PID of the process:
$ ps aux | grep cat
rjc 6760 0.0 0.0 1580 376 pts/5 S+ 15:31 0:00 cat
Now check the file handles it has open:
$ ls -l /proc/6760/fd
total 3
lrwx------ 1 rjc rjc 64 Feb 27 15:32 0 -> /dev/pts/5
l-wx------ 1 rjc rjc 64 Feb 27 15:32 1 -> /tmp/foo1
lrwx------ 1 rjc rjc 64 Feb 27 15:32 2 -> /dev/pts/5
Now run GDB:
$ gdb -p 6760 /bin/cat
GNU gdb 6.4.90-debian
[license stuff snipped]
Attaching to program: /bin/cat, process 6760
[snip other stuff that's not interesting now]
(gdb) p close(1)
$1 = 0
(gdb) p creat("/tmp/foo3", 0600)
$2 = 1
(gdb) q
The program is running. Quit anyway (and detach it)? (y or n) y
Detaching from program: /bin/cat, process 6760
The p command in GDB prints the value of an expression; an expression can be a function to call, even a system call. So I execute a close() system call and pass file handle 1, then I execute a creat() system call to open a new file. The result of the creat() was 1, which means that it replaced the previous file handle. If I wanted to use the same file for stdout and stderr, or if I wanted to replace a file handle with some other number, then I would need to call the dup2() system call to achieve that result.
For this example I chose to use creat() instead of open() because it takes fewer parameters. The C macros for the flags are not usable from GDB (it doesn't use C headers), so I would have to read header files to discover their values; that's not hard to do, but it would take more time. Note that 0600 is the octal permission for the owner having read/write access and the group and others having no access. It would also work to use 0 for that parameter and run chmod on the file later on.
After that I verify the result:
$ ls -l /proc/6760/fd/
total 3
lrwx------ 1 rjc rjc 64 2008-02-27 15:32 0 -> /dev/pts/5
l-wx------ 1 rjc rjc 64 2008-02-27 15:32 1 -> /tmp/foo3 <====
lrwx------ 1 rjc rjc 64 2008-02-27 15:32 2 -> /dev/pts/5
Typing more data into cat results in the file /tmp/foo3 being appended to.
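If stderr should land in the same file, the dup2() variant mentioned above can be run in the same GDB session (a sketch; the $-numbers depend on how many expressions have been printed so far):
(gdb) p dup2(1, 2)
$3 = 2
dup2(1, 2) makes file handle 2 a copy of file handle 1, so anything the process writes to stderr now also goes to /tmp/foo3.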
If you want to close the original session you need to close all file handles for it, open a new device that can be the controlling tty, and then call setsid().
You can also do it using reredirect (https://github.com/jerome-pouiller/reredirect/).
The command below redirects the outputs (standard and error) of the process PID to FILE:
reredirect -m FILE PID
The README of reredirect also explains other interesting features: how to restore the original state of the process, how to redirect to another command or to redirect only stdout or stderr.
The tool also provides relink, a script that redirects the outputs to the current terminal:
relink PID
relink PID | grep useful_content
(reredirect seems to have the same features as Dupx, described in another answer, but it does not depend on gdb.)
Dupx
Dupx is a simple *nix utility to redirect standard output/input/error of an already running process.
Motivation
I've often found myself in a situation where a process I started on a remote system via SSH takes much longer than I had anticipated. I need to break the SSH connection, but if I do so, the process will die when it tries to write something to the stdout/stderr of a broken pipe. I wish I could suspend the process with ^Z and then do a
bg %1 >/tmp/stdout 2>/tmp/stderr
Unfortunately this will not work (in shells I know).
http://www.isi.edu/~yuri/dupx/
Screen
If the process is running in a screen session, you can use screen's log command to write the output of that window to a file:
Switch to the script's window and press C-a H to start logging.
Now you can:
$ tail -f screenlog.2 | grep whatever
From screen's man page:
log [on|off]
Start/stop writing output of the current window to a file "screenlog.n" in the window's default directory, where n is the number of the current window. This filename can be changed with the 'logfile' command. If no parameter is given, the state of logging is toggled. The session log is appended to the previous contents of the file if it already exists. The current contents and the contents of the scrollback history are not included in the session log. Default is 'off'.
I'm sure tmux has something similar as well.
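For the record, tmux's counterpart is pipe-pane, which copies a pane's output to a command (a sketch; the log path is arbitrary):
$ tmux pipe-pane -o 'cat >> ~/output.log'
The -o flag lets the same invocation toggle logging on and off.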
I collected some information on the internet and prepared a script that requires no external tools: see my response here. Hope it's helpful.
