I have a script that creates a virtual machine using virt-install. This script uses a kickstart file for unattended installation. It works perfectly fine when triggered from a shell, but it throws the following error when triggered through crontab:
error: Cannot run interactive console without a controlling TTY
The VM creation process continues in the background, but my script doesn't wait for virt-install to complete and moves on to the next commands. I want my script to wait for the virt-install command to finish its job before moving to the next command. Is there any way I can either get control of a TTY or make my script wait for virt-install to complete?
Edit
Here is the virt-install command that my script executes (in case it helps in figuring out the issue):
virt-install --connect=qemu:///system \
--network=bridge:$BRIDGE \
$nic2 \
--initrd-inject=$tmp_ks_file \
--controller type=scsi,model=virtio-scsi \
--extra-args="ks=file:/$(basename $tmp_ks_file) console=tty0 console=ttyS0,115200" \
--name=$img_name \
--disk $libvirt_dir/$img_name.img,size=$disk \
--ram $mem \
--vcpus=2 \
--check-cpu \
--accelerate \
--hvm \
--location=$tree \
--nographics
Thanks in advance,
Kashif
I was finally able to resolve this issue in two steps:
First, remove the console-related configuration from the virt-install command (see --extra-args in the command above).
Second, add some logic to wait for virt-install to complete. I added a shutdown in the post-install (%post) section of the kickstart file so that the VM shuts off after it finishes installing all the packages. Then, in my script, I waited for the VM to reach the shut-off state before moving to the next command.
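This is roughly what the wait looked like; a minimal sketch, assuming $img_name holds the VM name from the virt-install command above (the 30-second polling interval is arbitrary):
# Wait until libvirt reports the domain as shut off (the kickstart %post runs a shutdown)
while [ "$(virsh --connect qemu:///system domstate "$img_name" 2>/dev/null)" != "shut off" ]; do
    sleep 30
done
# Now it is safe to move on to the next commands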
This way I am able to run my script from crontab. It also worked with Jenkins.
Hope this helps someone facing the same issue.
I use a service to generate sitemaps and I'm trying to automate the retrieval process.
I have been using wget to fetch the data and add it to my server.
Here is my wget statement:
wget --no-check-certificate --quiet \
--output-document sitemap.xml \
--method POST \
--timeout=0 \
--header 'Content-Type: application/x-www-form-urlencoded' \
--body-data 'method=download_sitemap&api_key=[SECRET_KEY]&site_id=[SECRET_ID]&sitemap_id=sitemap.xml' \
'https://pro-sitemaps.com/api/'
This code works great for me, no issues.
I ran crontab -e and added the code to my crontab using nano, which looks like this:
25 0 * * * "/etc/letsencrypt"/acme.sh --cron --home "/etc/letsencrypt" > /dev/null
07 18 * * * wget --no-check-certificate --quiet \ --output-document "/FILE/PATH/sitemap.xml" \ --method POST \ --timeout=0 \ --header 'Content-Type: application/x-www-form-urlencoded' \ --body-data 'method=download_sitemap&api_key=[SECRET_KEY]&site_id=[SECRET]&sitemap_id=sitemap.xml' \ 'https://pro-sitemaps.com/api/'
My problem is that my code is not running from the crontab. I have set the time so my server time matches my local time. I have tried running the wget statement all on one line and removing any extra spacing in the code block. I tried shorthanding the options (-T instead of --timeout), and I have tried adding a space at the end of each cron job. I am a bit stumped. It's probably something really simple that I missed in the documentation. Does anybody have any suggestions or notice anything off with what I'm doing in my crontab?
I have looked at these two questions, which is where I have gotten my troubleshooting ideas so far: How to get CRON to call in the correct PATHs
and this one: CronJob not running
Again, I have no issues when I run the wget statement in my terminal; it pulls everything just as expected. My issue is just that when I put the wget command in my crontab, the command won't run.
\ denotes line continuation; you should not use it if you have the whole command on one line. For example,
wget --continue \
https://www.example.com
after conversion to one line is
wget --continue https://www.example.com
Regarding cron: if you have a working command, you might put it in a file and run that file through bash. For example, you might create fetcher.sh with the following content
wget -P /path/to/catalog https://www.example.com
inside /path/to/catalog, and then add
58 23 * * * bash /path/to/catalog/fetcher.sh
where /path/to/catalog is the path to an existing directory; this would then download the example domain into /path/to/catalog every day at 23:58 (two minutes before midnight).
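Applied to your case, a sketch of fetcher.sh could simply contain the wget command that already works for you interactively (the output path is the one from your crontab line, and the secret placeholders are kept as-is):
#!/bin/bash
# fetcher.sh - the same wget call that already works from the terminal
wget --no-check-certificate --quiet \
    --output-document "/FILE/PATH/sitemap.xml" \
    --method POST \
    --timeout=0 \
    --header 'Content-Type: application/x-www-form-urlencoded' \
    --body-data 'method=download_sitemap&api_key=[SECRET_KEY]&site_id=[SECRET]&sitemap_id=sitemap.xml' \
    'https://pro-sitemaps.com/api/'
The crontab entry then becomes a single short line with no continuations:
07 18 * * * bash /path/to/fetcher.sh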
I've built a script to automate a CMake build of OpenCV4. The relevant part of the script is written as:
install.sh
#!/bin/bash
#...
cd /home/pi/opencv
mkdir build
cd build
cmake -D CMAKE_BUILD_TYPE=RELEASE \
-D CMAKE_INSTALL_PREFIX=/usr/local \
-D OPENCV_EXTRA_MODULES_PATH=~/opencv_contrib/modules \
-D ENABLE_NEON=ON \
-D ENABLE_VFPV3=ON \
-D BUILD_TESTS=OFF \
-D OPENCV_ENABLE_NONFREE=ON \
-D INSTALL_PYTHON_EXAMPLES=ON \
-D BUILD_EXAMPLES=OFF ..
This part of the code is first executed from the /home/pi/ directory. If I execute these lines in the CLI, they work and the CMake configuration completes without error. If I run this same code from a bash script, the cmake command fails with: -- Configuring incomplete, errors occurred!
I believe this is similar to these two SO threads (here and here), inasmuch as they both describe situations where calling a secondary script from the first script creates a problem (or at least that's what I think they are saying). If that is the case, how can you start a script from /parent/, change to /child/ within the script, and execute a secondary program (CMake) as though it were executed from the /child/ directory?
If I've missed my actual problem, highlighting that would be even more helpful.
Update with Full Logs
Output logs from the unsuccessful run via the bash script: CMakeOutput.log and CMakeError.log.
When executed from the CLI, the successful logs are success_CMakeOutput.log and success_CMakeError.log.
Update on StdOut
I looked through the files above and they look the same... Here is the failed screen output (note the bottom lines) and the successful screen output.
You are running your script as the root user, whose home directory is /root, while the opencv_contrib directory is in /home/pi. /home/pi is most probably the home directory of the user pi, so the ~ in the path below expands to the wrong place.
Update this line:
-D OPENCV_EXTRA_MODULES_PATH=~/opencv_contrib/modules \
with the proper path to opencv_contrib. Either place opencv_contrib in the home directory of the root user (if you intend to run the script as root), or provide a full path to the opencv_contrib directory that does not depend on HOME:
-D OPENCV_EXTRA_MODULES_PATH=/home/pi/opencv_contrib/modules \
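A quick way to confirm what is happening is to print what the tilde actually expands to in each context; a minimal check, run once as pi and once the way the script is normally invoked (e.g. via sudo):
# ~ expands to $HOME, which is /root when the script runs as root
echo "HOME is $HOME"
ls -d ~/opencv_contrib/modules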
I'm developing a web app that will run on a terminal with two monitors (vertical set-up). The terminal has Linux Mint installed.
I need to open two different instances of google-chrome on two different monitors, but in the same session.
So far I've reached this:
#clear any cache from previous run of terminal
rm -rf /dev/shm/Chrome
mkdir -p /dev/shm/Chrome
export DISPLAY=:0.0
exec /usr/bin/google-chrome \
--display=":0.0" \
--new-window \
--no-sandbox \
--disable-setuid-sandbox \
--start-maximized \
--kiosk-printing \
--start-fullscreen \
--user-data-dir=/dev/shm/Chrome \
--no-first-run \
'http://google.com/'
second script
#second instance started in different place
export DISPLAY=:0.1
exec /usr/bin/google-chrome \
--display=":0.1" \
--new-window \
--no-sandbox \
--disable-setuid-sandbox \
--start-maximized \
--kiosk-printing \
--start-fullscreen \
--user-data-dir=/dev/shm/Chrome \
--no-first-run \
'http://google.com/'
It does open two different windows of Google Chrome and opens them full screen.
However, if they share a profile (in this case --user-data-dir=/dev/shm/Chrome), they both open on the same display.
If each instance uses a different folder, they open on different monitors, but then they do not share the same session, which I need for further development. I plan to use the Broadcast Channel API; an example can be found here: https://github.com/irekrog/broadcast-channel-api-simple-example . If Chrome does not share the session, it is impossible to communicate over the broadcast channel.
Note: the --window-position=X,Y flag also does not seem to work and just breaks everything when they are in the same session.
There is a clone of this question for Windows: How to open two instances of Chrome kiosk mode in different displays (Windows). But I need a solution for Linux; I don't believe I have access to the WinAPI used in the accepted answer there.
Any solutions or workarounds are appreciated.
QEMU supports deterministic record and replay as documented at: https://github.com/qemu/qemu/blob/v2.9.0/docs/replay.txt
However, I could not get replay working for a full Linux kernel boot: it always hangs at some point.
These are the commands I'm running:
#!/usr/bin/env bash
cmd="\
time \
./buildroot/output.x86_64~/host/usr/bin/qemu-system-x86_64 \
-M pc \
-append 'root=/dev/sda console=ttyS0 nokaslr printk.time=y - lkmc_eval=\"/rand_check.out;wget -S google.com;/poweroff.out;\"' \
-kernel './buildroot/output.x86_64~/images/bzImage' \
-nographic \
\
-drive file=./buildroot/output.x86_64~/images/rootfs.ext2,if=none,id=img-direct,format=raw \
-drive driver=blkreplay,if=none,image=img-direct,id=img-blkreplay \
-device ide-hd,drive=img-blkreplay \
\
-netdev user,id=net1 \
-device rtl8139,netdev=net1 \
-object filter-replay,id=replay,netdev=net1 \
"
echo "$cmd"
eval "$cmd -icount 'shift=7,rr=record,rrfile=replay.bin'"
# Different than previous.
eval "$cmd -icount 'shift=7,rr=record,rrfile=replay.bin'"
# Same as previous.
eval "$cmd -icount 'shift=7,rr=replay,rrfile=replay.bin'"
and my kernel and root filesystem were generated with this Buildroot setup: https://github.com/cirosantilli/linux-kernel-module-cheat/tree/0a1a600d49d1292be82a47cfde6f0355996478f0 which uses QEMU v2.9.0.
lkmc_eval gets evaled by my init scripts. Here we print userspace stuff that is usually random to check that we are actually deterministic, and then power off the machine.
How I came up with those commands:
start from the working command I used in my repo without record replay
copy paste the hard disk and networking parts from the wiki: https://wiki.qemu.org/Features/record-replay
The in-tree docs say there is no networking support, but the wiki and git log say it was added as of v2.9.0, so I think the docs are just outdated compared to the wiki.
Using that setup, the boot replay progresses quite far, but hangs at the message:
[ 31.692427] NET: Registered protocol family 17
In the initial record, the next message would have been:
[ 31.777326] sd 1:0:0:0: [sda] Attached SCSI disk
so I'm suspicious that it is a block device matter.
The timestamps are however identical, so I'm confident that the record and replay has worked so far.
If for the networking I use just:
-net none
then the record itself hangs at:
[ 19.669685] ALSA device list:
[ 19.670756] No soundcards found.
If anyone wants to try a QEMU patch against it, just check out your patch inside /qemu/ and run:
./build -t host-qemu-reconfigure
to rebuild.
Your command line looks OK, but unfortunately record/replay in QEMU is broken in this release.
I hope that it will be fixed in the coming weeks.
I have made a Docker image from a Dockerfile, and I want a cron job executed periodically when a container based on this image is running. My Dockerfile is this (the relevant parts):
FROM l3iggs/archlinux:latest
COPY source /srv/visitor
WORKDIR /srv/visitor
RUN pacman -Syyu --needed --noconfirm \
&& pacman -S --needed --noconfirm make gcc cronie python2 nodejs phantomjs \
&& printf "*/2 * * * * node /srv/visitor/visitor.js \n" >> cronJobs \
&& crontab cronJobs \
&& rm cronJobs \
&& npm install -g node-gyp \
&& PYTHON=/usr/sbin/python2 && export PYTHON \
&& npm install
EXPOSE 80
CMD ["/bin/sh", "-c"]
After creating the image, I run a container and verify that the cron job has indeed been added:
crontab -l
*/2 * * * * node /srv/visitor/visitor.js
Now, the problem is that the cron job is never executed. I have, of course, tested that "node /srv/visitor/visitor.js" executes properly when run manually from the console.
Any ideas?
One option is to use the host's crontab in the following way:
0 5 * * * docker exec mysql mysqldump --databases myDatabase -u myUsername -pmyPassword > /backups/myDatabase.sql
The above takes a daily backup of a MySQL database.
If you need to chain complicated commands you can also use this format:
0 5 * * * docker exec mysql sh -c 'mkdir -p /backups/`date +\%d` && for DB in myDB1 myDB2 myDB3; do mysqldump --databases $DB -u myUser -pmyPassword > /backups/`date +\%d`/$DB.sql; done'
The above takes a 30-day rolling backup of multiple databases and runs a shell for loop on a single line rather than writing and calling a shell script to do the same. So it's pretty flexible.
Or you could put complicated scripts inside the Docker container and run them like so:
0 5 * * * docker exec mysql /dailyCron.sh
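What /dailyCron.sh contains is up to you; as a sketch, it could simply be the same rolling backup written out over multiple lines (the database names and credentials are the placeholders used above):
#!/bin/sh
# /dailyCron.sh inside the mysql container: same 30-day rolling backup, just readable
DAY=$(date +%d)
mkdir -p /backups/"$DAY"
for DB in myDB1 myDB2 myDB3; do
    mysqldump --databases "$DB" -u myUser -pmyPassword > /backups/"$DAY"/"$DB".sql
done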
It's a little tricky to answer this definitively, as I don't have time to test, but you have various options open to you:
You could use the Phusion base image, which comes with an init system and cron installed. It is based on Ubuntu and is comparatively heavyweight (at least compared to Arch Linux): https://registry.hub.docker.com/u/phusion/baseimage/
If you're happy to have everything started from cron jobs, you could just start cron from your CMD and keep it in the foreground (cron -f).
You can use a lightweight process manager to start cron and whatever other processes you need (Phusion uses runit; Docker seems to recommend supervisor).
You could write your own CMD or ENTRYPOINT script that starts cron and your process. The only issue with this is that you will need to be careful to handle signals properly or you may end up with zombie processes. (A rough sketch of such a script is at the end of this answer.)
In your case, if you're just playing around, I'd go with the last option; if it's anything more serious, I'd go with a process manager.
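For that last option, a rough sketch of an entrypoint script, assuming the cronie package from the Dockerfile above (cronie's daemon stays in the foreground with crond -n; Debian-based images would use cron -f instead):
#!/bin/sh
# start.sh - keep the cron daemon in the foreground so the container stays up
# and the crontab installed at build time actually fires
exec crond -n
In the Dockerfile you would then COPY this script in, make it executable, and replace the CMD with something like CMD ["/srv/visitor/start.sh"].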
If you're running your Docker container with --net=host, see this thread:
https://github.com/docker/docker/issues/5899
I had the same issue, and my cron tasks started running when I included --pid=host in the docker run command line arguments.