QEMU: /bin/sh: can't access tty; job control turned off - linux

As a development environment for the Linux kernel, I'm using QEMU with an initramfs set up similarly to what is shown here, with a few additional executables. Basically, it uses BusyBox to create a minimal environment and packages it up using cpio. The content of init is shown below.
$ cat init
mount -t proc none /proc
mount -t sysfs none /sys
echo -e "\nBoot took $(cut -d' ' -f1 /proc/uptime) seconds\n"
exec /bin/sh
Using following command to start VM:
qemu-system-x86_64 -kernel bzImage -initrd initramfs -append "console=ttyS0" -nographic
It throws following error:
/bin/sh: can't access tty; job control turned off
The system functions normally in most cases, but I'm not able to create a background process:
$ prog &
/bin/sh: can't open '/dev/null'
$ fg
/bin/sh: fg: job (null) not created under job control
The root of all these problems seems to be the lack of access to a tty. How can I fix this?
EDIT: Apart from the accepted answer, BusyBox's cttyhack can be used as a workaround:
$ cat init
#!/bin/sh
mount -t proc none /proc
mount -t sysfs none /sys
mknod -m 666 /dev/ttyS0 c 4 64
echo -e "\nBoot took $(cut -d' ' -f1 /proc/uptime) seconds\n"
setsid cttyhack sh
exec /bin/sh

From Linux From Scratch Chapter 6.8. Populating /dev
6.8.1. Creating Initial Device Nodes
When the kernel boots the system, it requires the presence of a few device nodes, in particular the console and null devices. Create these by running the following commands:
mknod -m 600 /dev/console c 5 1
mknod -m 666 /dev/null c 1 3
You should then continue with the steps in "6.8.2. Mounting tmpfs and Populating /dev". Note the <-- below; I also suggest you read the entire (free) LFS book.
mount -n -t tmpfs none /dev
mknod -m 622 /dev/console c 5 1
mknod -m 666 /dev/null c 1 3
mknod -m 666 /dev/zero c 1 5
mknod -m 666 /dev/ptmx c 5 2
mknod -m 666 /dev/tty c 5 0 # <--
mknod -m 444 /dev/random c 1 8
mknod -m 444 /dev/urandom c 1 9
chown root:tty /dev/{console,ptmx,tty}
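Tying this back to the QEMU setup in the question, a minimal init that creates the needed nodes before handing off to the shell could look like the following sketch (it combines the LFS node list with BusyBox's cttyhack; it is not a verbatim script from either source):

```shell
#!/bin/sh
# Sketch: initramfs init that creates device nodes before starting the shell.
mount -t proc none /proc
mount -t sysfs none /sys
mount -n -t tmpfs none /dev
mknod -m 622 /dev/console c 5 1
mknod -m 666 /dev/null    c 1 3
mknod -m 666 /dev/tty     c 5 0
mknod -m 666 /dev/ttyS0   c 4 64    # matches -append "console=ttyS0"
echo -e "\nBoot took $(cut -d' ' -f1 /proc/uptime) seconds\n"
# setsid + cttyhack give the shell a controlling terminal, enabling job control.
exec setsid cttyhack /bin/sh
```

With /dev/null present, `prog &` and `fg` should work, and the "can't access tty" warning should disappear.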

Related

Device node in LXC is not accessible when connected via SSH

I have a problem where a physical hardware device passed through to an LXC container cannot be read from or written to when I am connected via SSH.
The device node of my physical hardware device looks like this:
myuser@myhost:~$ ls -la /dev/usb/hiddev0
crw-rw-rw- 1 root root 180, 0 Jul 30 10:27 /dev/usb/hiddev0
This is how I create and start my container:
myuser@myhost:~$ sudo lxc-create -q -t debian -n mylxc -- -r stretch
myuser@myhost:~$ sudo lxc-start -n mylxc
Then I add the device node to the LXC:
myuser@myhost:~$ sudo lxc-device -n mylxc add /dev/usb/hiddev0
Afterwards the device is available in the LXC and I can read from it after having attached to the LXC:
myuser@myhost:~$ sudo lxc-attach -n mylxc
root@mylxc:/# ls -la /dev/usb/hiddev0
crw-r--r-- 1 root root 180, 0 Aug 27 11:26 /dev/usb/hiddev0
root@mylxc:/# cat /dev/usb/hiddev0
����������^C
root@mylxc:/#
I then enable root access via SSH without a password:
myuser@myhost:~$ sudo lxc-attach -n mylxc
root@mylxc:/# sed -i 's/#\?PermitRootLogin.*/PermitRootLogin yes/g' /etc/ssh/sshd_config
root@mylxc:/# sed -i 's/#\?PermitEmptyPasswords.*/PermitEmptyPasswords yes/g' /etc/ssh/sshd_config
root@mylxc:/# sed -i 's/#\?UsePAM.*/UsePAM no/g' /etc/ssh/sshd_config
root@mylxc:/# passwd -d root
passwd: password expiry information changed.
root@mylxc:/# /etc/init.d/ssh restart
Restarting ssh (via systemctl): ssh.service.
root@mylxc:/# exit
When I connect via SSH now, the device node is there, but I cannot access it:
myuser@myhost:~$ ssh root@<lxc-ip-address>
root@mylxc:~# ls -la /dev/usb/hiddev0
crw-r--r-- 1 root root 180, 0 Aug 27 11:26 /dev/usb/hiddev0
root@mylxc:~# cat /dev/usb/hiddev0
cat: /dev/usb/hiddev0: Operation not permitted
In both cases (lxc-attach and ssh) I am the root user (verified via whoami), so this cannot be the problem.
Why am I not allowed to access the device when I am connected via SSH?
EDIT
In the meantime I found out that the error disappears when I call all the LXC initialization commands directly one after another in a script, i.e.:
sudo lxc-create -q -t debian -n mylxc -- -r stretch
sudo lxc-start -n mylxc
sudo lxc-device -n mylxc add /dev/usb/hiddev0
...
And then all the SSH configuration as described above. The device is correctly accessible via SSH then.
As soon as some time passes between lxc-start and lxc-device, the error appears, e.g.:
sudo lxc-create -q -t debian -n mylxc -- -r stretch
sudo lxc-start -n mylxc
sleep 1
sudo lxc-device -n mylxc add /dev/usb/hiddev0
...
Why is the timing relevant here? What happens during the first second within the LXC that makes the device become unaccessible?
With help from the lxc-users mailing list I found out that the restriction is intended. Access to devices has to be allowed explicitly in the LXC's config using their major/minor numbers:
lxc.cgroup.devices.allow = c 180:* rwm
The unrestricted access using lxc-attach seems to be some bug in my case. Devices should never be accessible in the LXC if not explicitly allowed.
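As a sketch, the whitelist entry can be added and verified like this (the config path and the cgroup-v1 layout are assumptions; both vary between LXC versions and between privileged and unprivileged setups):

```shell
# Sketch: whitelist char devices with major 180 (USB hiddev) for the container.
# The config path is an assumption; unprivileged containers typically use
# ~/.local/share/lxc/mylxc/config instead.
echo 'lxc.cgroup.devices.allow = c 180:* rwm' | \
    sudo tee -a /var/lib/lxc/mylxc/config

# Restart the container so the new whitelist takes effect.
sudo lxc-stop -n mylxc
sudo lxc-start -n mylxc

# On cgroup-v1 hosts, the effective whitelist is visible from the host:
cat /sys/fs/cgroup/devices/lxc/mylxc/devices.list
```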

Suspend / Hibernate cannot resume laptop

I'm trying to configure suspend/hibernate on my laptop and I've run into some trouble.
I noticed the following problems:
Suspend:
- Closing the lid of my laptop: when I open the lid again, nothing happens. I have to hold down the power button to force a shutdown and then switch the laptop back on.
- Typing systemctl suspend: same as above.
Hibernate:
- Typing systemctl hibernate: the laptop seems to shut down
I read the following links for help:
Hibernate with swap file
Suspend and hibernate
My system :
4.13.0-38-generic #43-Ubuntu SMP Wed Mar 14 15:20:44 UTC 2018 x86_64 GNU/Linux
My swap :
$ cat /etc/fstab
# <file system> <mount point> <type> <options> <dump> <pass>
UUID=403661c1-c7b4-47a8-9493-c5c0262ce14e / ext4 errors=remount-ro 0 1
UUID=BF40-CD4F /boot/efi vfat umask=0077 0 1
/swapfile none swap sw 0 0
/swap swap swap defaults 0 0
What I've done :
Create swap file
$ fallocate -l 8g /swap
$ mkswap /swap
Add swappiness
sysctl -w vm.swappiness=1
$ cat /etc/sysctl.conf | grep swappiness
vm.swappiness=1
$ swapon /swap
Configure grub
$ cat /etc/default/grub | grep -i grub_cmdline_linux_default
GRUB_CMDLINE_LINUX_DEFAULT="resume=/swap resume_offset=60354560 quiet splash"
$ sudo filefrag -v /swap | head -n4
Filesystem type is: ef53
File size of /swap is 8589934592 (2097152 blocks of 4096 bytes)
ext: logical_offset: physical_offset: length: expected: flags:
0: 0.. 0: 60354560.. 60354560: 1:
According to the previous links I could configure /etc/mkinitcpio.conf, but there is no such file on my system.
So I don't really know how to configure my initramfs.
Here is my configuration from /sys/power :
$ cat /sys/power/disk
[platform] shutdown reboot suspend test_resume
$ cat /sys/power/mem_sleep
s2idle [deep]
$ cat /sys/power/image_size
3261943808
$ cat /sys/power/resume
0:0
Could you give me some hints to move forward? Thank you.
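One hint: resume= on the kernel command line must name the block device that holds the swap file (not the file itself), and the 0:0 in /sys/power/resume means no resume device is set yet. A hedged sketch of wiring this up, assuming the root filesystem is on /dev/sda2 (the device name and 8:2 are placeholders; adjust for your machine):

```shell
# Sketch: resume-from-swap-file setup; /dev/sda2 and 8:2 are placeholders.
# Major:minor of the device holding /swap:
lsblk -no MAJ:MIN "$(findmnt -no SOURCE /)"
# First physical block of the swap file (this is the resume_offset value):
sudo filefrag -v /swap | awk '$1 == "0:" { sub(/\.\.$/, "", $4); print $4 }'
# For the running kernel (the GRUB line needs resume=/dev/sda2 resume_offset=<N>):
echo 8:2 | sudo tee /sys/power/resume
```

After changing GRUB_CMDLINE_LINUX_DEFAULT, run update-grub; on Ubuntu-based systems the initramfs also reads /etc/initramfs-tools/conf.d/resume, so update that and run update-initramfs -u.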

Access urandom device get "permission denied", why?

I can create a new urandom device node in a directory (test_urandom in the example below), and it works as expected. E.g.
test_urandom$ sudo mknod -m 0444 ./urandom c 1 9
test_urandom$ ls -l
total 0
cr--r--r-- 1 root root 1, 9 Jun 9 09:06 urandom
test_urandom$ head -c 10 ./urandom
�׫O1�9�^
However, if I create the same device node in another directory, which in my case is on an ext4 filesystem on LVM (Logical Volume Management), it fails and the system complains with permission denied.
test_urandom_lvm$ sudo mknod -m 0444 ./urandom c 1 9
test_urandom_lvm$ ls -l
total 0
cr--r--r-- 1 root root 1, 9 Jun 9 09:06 urandom
test_urandom_lvm$ head -c 10 ./urandom
head: cannot open ‘./urandom’ for reading: Permission denied
If I am allowed to create a device node in the filesystem, why am I not allowed to read from it? What caused the permission denied, and what change is needed to make it work?
The filesystem is mounted with the nodev option, which inhibits block and character special device operation. Remounting it with the dev option will allow them to work.
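A sketch of diagnosing and lifting the restriction (/mnt/lvm is a placeholder for the mount point that holds test_urandom_lvm):

```shell
# Sketch: spot the nodev option, then remount with device files allowed.
findmnt -no OPTIONS /mnt/lvm              # look for "nodev" in the output
sudo mount -o remount,dev /mnt/lvm        # lift the restriction for this mount
head -c 10 /mnt/lvm/test_urandom_lvm/urandom   # should now return random bytes
```

To make this permanent, drop nodev from (or add dev to) the mount's options in /etc/fstab.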

sshpass Failed to Get a Pseudo Terminal in Windows 10 ubuntu linux bash

Has anyone tried using sshpass in a Windows 10 Insider Preview Linux terminal?
It just returns this error:
root@T430U:~# sshpass -p mypass ssh user@host
Failed to get a pseudo terminal: No such file or directory
Like it says here, you have to mount the pts directory:
rm -rf /dev/ptmx
mknod /dev/ptmx c 5 2
chmod 666 /dev/ptmx
umount /dev/pts
rm -rf /dev/pts
mkdir /dev/pts
vim /etc/fstab
(added: none /dev/pts devpts defaults 0 0)
mount /dev/pts
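Once devpts is mounted, pty allocation can be sanity-checked without sshpass; script(1) is used here only because it allocates a pseudo terminal for the command it runs (a sketch, assuming the util-linux version of script):

```shell
# Sketch: confirm devpts is mounted and that a pty can actually be allocated.
grep devpts /proc/mounts            # expect a devpts line for /dev/pts
script -qc 'tty' /dev/null          # prints /dev/pts/N if allocation works
```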

XFS grow not working

So I have the following setup:
[ec2-user@ip-172-31-9-177 ~]$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 80G 0 disk
├─xvda1 202:1 0 6G 0 part /
└─xvda2 202:2 0 4G 0 part /data
All the tutorials I find say to use xfs_growfs <mountpoint>, but that has no effect, nor does the -d option:
[ec2-user@ip-172-31-9-177 ~]$ sudo xfs_growfs -d /
meta-data=/dev/xvda1 isize=256 agcount=4, agsize=393216 blks
= sectsz=512 attr=2, projid32bit=1
= crc=0
data = bsize=4096 blocks=1572864, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=0
log =internal bsize=4096 blocks=2560, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
data size unchanged, skipping
I should add that I am using:
[ec2-user@ip-172-31-9-177 ~]$ cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.0 (Maipo)
[ec2-user@ip-172-31-9-177 ~]$ xfs_info -V
xfs_info version 3.2.0-alpha2
[ec2-user@ip-172-31-9-177 ~]$ xfs_growfs -V
xfs_growfs version 3.2.0-alpha2
Before running xfs_growfs, you must resize the partition the filesystem sits on.
Give this one a go:
sudo growpart /dev/xvda 1
As per https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/recognize-expanded-volume-linux.html
You have a 4GB XFS filesystem on a 4GB partition, so there is nothing for xfs_growfs to do.
To grow it, enlarge the partition with parted, then use xfs_growfs to expand the filesystem. You can use parted's rm without losing data, as long as the recreated partition starts at the same sector.
# umount /data
# parted
GNU Parted 3.1
Using /dev/xvda
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) unit s
(parted) print
....
(parted) rm 2
(parted) mkpart
....
(parted) print
(parted) quit
# xfs_growfs /dev/xvda2
# mount /dev/xvda2 /data
Done. No need to update /etc/fstab as the partition numbers are the same.
Before running xfs_growfs, do the following step first:
# growpart <device> <partition-number>
# growpart /dev/xvda 1
CHANGED: partition=1 start=4096 old: size=31453151 end=31457247 new: size=41938911,end=41943007
# xfs_growfs -d /
Many servers won't have the growpart utility by default, so you can follow the steps below.
Install growpart using your distribution's package manager; the following is for RPM/Fedora-based systems:
yum install cloud-utils-growpart
Run the growpart command on the partition that needs to change.
growpart /dev/xvda 1
Finally, run the xfs_growfs command.
xfs_growfs -d /dev/xvda1
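Summarizing the answers above as one sequence for the root filesystem (a sketch; growpart rewrites the partition entry in place, and xfs_growfs grows a mounted XFS filesystem given its mount point):

```shell
# Sketch: grow partition 1 of /dev/xvda, then grow the XFS filesystem on it.
sudo growpart /dev/xvda 1      # extend the partition table entry
sudo xfs_growfs /              # online-grow the XFS filesystem mounted at /
df -h /                        # verify the new size
```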
