This is a convoluted one, so bear with me. I have a Perl script located on a Windows VirtualBox guest. I want to call this script from the Linux host and have it read a folder shared from the host. Reading the folder fails.
On the host I call this script and it gives me the following output:
host:~/$ ./script.pl /nfs/nasi/temp
[2014-04-02 10:50:55] Uploading file records to localhost
[2014-04-02 10:50:55] Running VirtualBox for Kaspersky
fatal: opendir(E:\nasi\temp) failed: No such file or directory
[2014-04-02 10:50:56] Uploading malware samples data to localhost
host:$
The host-side script converts the argument /nfs/nasi/temp to E:\nasi\temp and invokes the guest-side script with the following command:
/usr/bin/VBoxManage guestcontrol <guest> execute \
--image "C:\strawberry\perl\bin\perl.exe" \
--username <user> --password <pass> \
--wait-stdout --wait-stderr --wait-exit -- \
"C:\antivirus\kaspersky.pl" "E:\nasi\temp"
However, when I run the same guest-side script with the same argument directly on the guest, I get the following:
C:\antivirus>C:\strawberry\perl\bin\perl.exe C:\antivirus\kaspersky.pl E:\nasi\temp
[2014-04-02 10:54:19] Running Kaspersky Antivirus
[2014-04-02 10:54:20] Parsing Kaspersky report
[2014-04-02 10:54:20] Uploading Kaspersky results to 10.0.0.1
C:\antivirus>
But wait, it gets weirder. When, instead of the shared directory E:\, I point it to C:\, it has no problem reading the directory and just happily keeps going. So the error only shows up when I run the command from the host through VirtualBox and point it to the share.
Here is the relevant code:
sub createSamplesMap {
    opendir( my $dh, $ARGV[0] )
        or die "fatal: opendir($ARGV[0]) failed: $!\n";
    my @files = readdir( $dh );
    foreach my $file ( @files ) {
        if ( ! -d $file ) {
            ...
        }
    }
    closedir( $dh );
}
I tried different ways of reading the filenames from the directory, but they didn't work either. Here's what I tried:
my @files = <$ARGV[0]\\*>;
my @files = glob( $ARGV[0] . '\\*' );
I don't know whether to blame Perl or VirtualBox. Does anyone have an idea what the problem might be?
Windows 7, Strawberry Perl v5.18.2
Ubuntu 12.04.04, Perl v5.14.2
VirtualBox 4.2.16r86992
crosspost: https://forums.virtualbox.org/viewtopic.php?f=2&t=61011
I've found the problem. As mentioned on the VirtualBox forum, the issue was with the environment variables set when running the Perl script through guestcontrol. After much googling I also found a blog post by kissmyarch in which he describes how he solved the same problem.
You can set environment variables using the --environment option of VBoxManage guestcontrol, and according to kissmyarch you need to set USERPROFILE to get it to work. That alone did not work for me.
So instead I used the following code from the script to figure out what environment variables were set:
foreach my $key ( sort keys %ENV ) {
    print "$key = $ENV{$key}\n";
}
and ran it both directly on the guest and through guestcontrol to compare the two environments.
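For anyone who wants to repeat that comparison, here is a rough sketch of how the two dumps could be captured and diffed. The script name dumpenv.pl and the .txt file names are placeholders I'm introducing for illustration, not my actual file names:
# Guest side (run in cmd.exe): save the %ENV loop above as C:\antivirus\dumpenv.pl and dump it:
#   C:\strawberry\perl\bin\perl.exe C:\antivirus\dumpenv.pl > C:\antivirus\env_guest.txt

# Host side: capture the environment that guestcontrol hands to the same interpreter
/usr/bin/VBoxManage guestcontrol <guest> execute \
    --image "C:\strawberry\perl\bin\perl.exe" \
    --username <user> --password <pass> \
    --wait-stdout --wait-exit -- \
    "C:\antivirus\dumpenv.pl" > env_guestcontrol.txt

# Copy env_guest.txt to the host (the shared folder works for this) and compare
diff env_guest.txt env_guestcontrol.txt
My command now looks like this: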
/usr/bin/VBoxManage guestcontrol <vm> execute \
--image "C:\strawberry\perl\bin\perl.exe" \
--username <user> --password <pass> \
--environment "USERPROFILE=C:\Users\<user>" \
--environment "APPDATA=C:\Users\<user>\AppData\Roaming" \
--environment "LOCALAPPDATA=C:\Users\<user>\AppData\Local" \
--environment "HOMEDRIVE=C:" \
--environment "HOMEPATH=\Users\<user>" \
--environment "LOGONSERVER=\\\<server>" \
--environment "SESSIONNAME=Console" \
--environment "TEMP=C:\Users\<user>\AppData\Local\Temp" \
--environment "TMP=C:\Users\<user>\AppData\Local\Temp" \
--environment "USERDOMAIN=<domain>" \
--environment "USERNAME=<user>" \
--wait-stdout --wait-stderr --wait-exit \
-- "C:\antivirus\kaspersky.pl" "E:\nasi\temp"
Somewhere in that big pile of environment variables is one that is important.
Thanks to all that helped.
Related
I have a shell script that runs a docker container on a remote server.
I'm trying to send the hostname of the remote server into the container, but I just get the hostname of my local computer where I run the script.
The command looks like this in the script:
ssh $remote "docker run -h '`hostname`' \
-e 'VARIABLE=$SCRIPT_VAR' \
-e 'HOST_HOSTNAME=`hostname`' \
..."
Both hostname and the environment variable host.hostname become the name of my local computer.
I know I can use single quotes like this:
ssh $remote 'echo "`hostname`"'
and it will work, but then I cannot use script variables like $SCRIPT_VAR.
How can I get it to evaluate on the remote server while still being able to use local variables?
Use single quotes around the remote command so that $(hostname) is expanded by the remote shell, and splice the local $SCRIPT_VAR in by briefly closing the single quotes. You still need to ensure that the expansion of $SCRIPT_VAR is quoted to prevent it from being subjected to word splitting or pathname expansion:
ssh $remote 'docker run -h "$(hostname)" \
-e "VARIABLE='"$SCRIPT_VAR"'" \
-e "HOST_HOSTNAME=$(hostname)" \
...'
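If the nested quoting is hard to follow, a quick sanity check is to echo both values before wiring them into docker run. This is just an illustrative sketch; it assumes $remote already points at your server and uses a dummy value for SCRIPT_VAR:
SCRIPT_VAR="example-value"   # expanded by the local shell, before ssh runs
ssh "$remote" 'echo "remote host: $(hostname)"; echo "local var: '"$SCRIPT_VAR"'"'
Everything inside the single quotes, including $(hostname), is evaluated by the remote shell, while $SCRIPT_VAR sits in a brief double-quoted gap and is expanded locally before ssh runs.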
I have installed GitLab Community Edition on my Raspberry Pi 3. Everything is working fine, but when the application is up there are 25 Sidekiq threads. They are eating up my memory and I don't want that many threads.
I tried to control this by adding the file /opt/gitlab/embedded/service/gitlab-rails/config/sidekiq.yml:
# Sample configuration file for Sidekiq.
# Options here can still be overridden by cmd line args.
# Place this file at config/sidekiq.yml and Sidekiq will
# pick it up automatically.
---
:verbose: false
:concurrency: 5
# Set timeout to 8 on Heroku, longer if you manage your own systems.
:timeout: 30
# Sidekiq will run this file through ERB when reading it so you can
# even put in dynamic logic, like a host-specific queue.
# http://www.mikeperham.com/2013/11/13/advanced-sidekiq-host-specific-queues/
:queues:
  - critical
  - default
  - <%= `hostname`.strip %>
  - low

# you can override concurrency based on environment
production:
  :concurrency: 5
staging:
  :concurrency: 5
I have restarted the application many times and even ran "reconfigure", but it's not helping. The sidekiq.yml file is not being picked up at all.
Can anybody please let me know where I am going wrong?
I found your question while searching for a solution to the same problem. Nothing I found worked, so I tried it myself and found the right place to reduce Sidekiq from 25 to 5 threads. I use the GitLab Omnibus version, so I think the path is identical to yours:
/opt/gitlab/sv/sidekiq/run
In this file you find the following code:
#!/bin/sh
cd /var/opt/gitlab/gitlab-rails/working
exec 2>&1
exec chpst -e /opt/gitlab/etc/gitlab-rails/env -P \
-U git -u git \
/opt/gitlab/embedded/bin/bundle exec sidekiq \
-C /opt/gitlab/embedded/service/gitlab-rails/config/sidekiq_queues.yml \
-e production \
-r /opt/gitlab/embedded/service/gitlab-rails \
-t 4 \
-c 25
Change the last line to "-c 5". The result should look like this:
#!/bin/sh
cd /var/opt/gitlab/gitlab-rails/working
exec 2>&1
exec chpst -e /opt/gitlab/etc/gitlab-rails/env -P \
-U git -u git \
/opt/gitlab/embedded/bin/bundle exec sidekiq \
-C /opt/gitlab/embedded/service/gitlab-rails/config/sidekiq_queues.yml \
-e production \
-r /opt/gitlab/embedded/service/gitlab-rails \
-t 4 \
-c 5
Last but not least, you have to restart the GitLab service:
sudo gitlab-ctl restart
No idea what happens on a GitLab update; I think I will have to change this value again. It would be nice if the GitLab developers added this option to gitlab.rb in the /etc/gitlab directory.
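If you would rather script the edit, for instance to reapply it after an update overwrites the file, here is a minimal sketch using sed, assuming the run file still contains the -c 25 flag shown above:
# Back up the runit service file, then swap the concurrency flag
sudo cp /opt/gitlab/sv/sidekiq/run /opt/gitlab/sv/sidekiq/run.bak
sudo sed -i 's/-c 25/-c 5/' /opt/gitlab/sv/sidekiq/run

# Restart so runit picks up the new flag
sudo gitlab-ctl restart sidekiq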
I created a tar file of a live CentOS system with:
tar --numeric-owner \
--exclude=/proc \
--exclude=/sys \
--exclude=/mnt \
--exclude=/var/cache \
--exclude=/usr/share/doc \
--exclude=/tmp \
--exclude=/var/log \
-zcvf /mnt/rhel7-base.tar.gz /
and then ran
cat rhel7-base.tar.gz | docker import - rhel7/01
to load it into Docker. It finished without an error and I can find the image with the docker images command. Finally, I tried to run it with docker run -i -t rhel7/01 (also without the -i and -t switches), but nothing happens:
[root@vhp ~]# docker run rhel7/01
[root@vhp ~]#
I'm wondering if anyone can tell me what I am doing wrong.
Not 100% sure, but it seems that you're missing a command for Docker to execute inside your image; try:
[root@vhp ~]# docker run -it rhel7/01 bash   # this should drop you into bash inside the container
You can also check whether the container is running with docker ps -a.
It is better to pull the centos image from the official Docker repo, configure it according to your use, and then push the image to a registry. It is easy and simple:
docker push imagename
Make a Dockerfile like this:
#- pull base image.
FROM centos:latest
#- setting locale
ENV LANG en_US.UTF-8
ENV LANGUAGE en_US:en
ENV LC_ALL en_US.UTF-8
#- add for terminal
ENV TERM xterm
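From there you would build the image, test it interactively, and push it to your registry. A rough sketch of those steps; the tags mycentos and myregistry/mycentos are just example names:
# Build the image from the directory containing the Dockerfile
docker build -t mycentos .

# Check that you get an interactive shell inside the container
docker run -it mycentos bash

# Tag and push it to your registry
docker tag mycentos myregistry/mycentos
docker push myregistry/mycentos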
I'm trying to boot up a guest OS to continue with my work, but I have a problem with my virsh installation.
Here is the relevant part of the installation script:
qemu-img create -f qcow2 -o preallocation=metadata ~/images/${vm_name}.qcow2 ${pool_size}G
# create dir for images
mkdir ~/images/
virt-install \
--connect qemu:///system \
--name $vm_name \
--ram 10240 \
--vcpus 4 \
--disk ~/images/${vm_name}.qcow2,size=$pool_size,bus=virtio,sparse=false,format=qcow2 \
--network network=default,model=virtio \
--location http://ua.archive.ubuntu.com/dists/trusty-updates/main/installer-amd64/ \
--initrd-inject=$current_dir/preseed.cfg \
--extra-args="file=file:/preseed.cfg vga=788 quiet console=tty0 utf8 console=ttyS0,115200" \
--os-type=linux \
--virt-type kvm \
--video=vga \
--noreboot \
--cpu host \
--hvm
virsh start $vm_name
echo "----------Login to console----------"
virsh console $vm_name
When I try to run this script as a file like ./script.sh, it produces an error:
Formatting '/home/{username}/images/test.qcow2', fmt=qcow2 size=53687091200 encryption=off cluster_size=65536 preallocation='metadata' lazy_refcounts=off refcount_bits=16
mkdir: cannot create directory '/home/flash/images/': File exists
ERROR 'DebianDistro' object has no attribute '_prefix'
error: failed to get domain 'test'
error: Domain not found: no domain with matching name 'test'
----------Login to console----------
error: failed to get domain 'test'
error: Domain not found: no domain with matching name 'test'
I have already tried reinstalling the KVM and QEMU packages using this guide - https://help.ubuntu.com/community/KVM/Installation - and everything completed successfully.
I am sure the script works fine, as I was using it before on another machine without any problems.
Another try, using the script below:
virt-install --connect qemu:///system -n test -r 10240 \
--vcpus=4 \
--disk path=/data0/images/test.img,size=50,format=qcow2,bus=virtio,cache=none \
--cdrom /home/{username}/Downloads/kvm/ubuntu-14.iso \
--vnc \
--os-type=linux \
--accelerate \
--network network=default \
--hvm
This produces an error:
ERROR internal error: process exited while connecting to monitor: Could not access KVM kernel module: Permission denied
failed to initialize KVM: Permission denied
Also, when I try to list all OS variants with virt-install --os-variant list, it does not recognize this command and tries to boot up a guest OS instead of listing the variants.
Can you please help me find out what the problem is here?
To fix this error:
ERROR 'DebianDistro' object has no attribute '_prefix'
Edit the file /usr/share/virt-manager/virtinst/urlfetcher.py and change this in line 1034:
if self._prefix:
to this:
if self._url_prefix:
Ubuntu 14.04.
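If you prefer to apply the same change from the command line, here is a sketch using sed; the line number 1034 comes from the answer above and may differ in other virt-manager versions, so verify it in the file first:
# Back up the original file
sudo cp /usr/share/virt-manager/virtinst/urlfetcher.py /usr/share/virt-manager/virtinst/urlfetcher.py.bak

# Apply the one-line change on line 1034 only
sudo sed -i '1034s/self\._prefix/self._url_prefix/' /usr/share/virt-manager/virtinst/urlfetcher.py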
I have a little problem that is troubling me. Can I connect to a Windows machine through Remote Desktop Protocol from Linux (Ubuntu)?
In Windows I have Remote Desktop Connection:
http://www.techotopia.com/images/8/81/Windows_server_2008_remote_desktop_connection.jpg
but in Linux I can connect only to other Linux machines, and I don't know if this is possible. Is it?
Thank you
There are bare-bones applications like rdesktop, as well as a number of nicer ones that can set up configuration defaults and so on. As things go with KDE and Gnome, these apps sometimes go stale, get replaced, or have inconsistent naming, but hey, the price is right.
I currently like remmina the best. It is a Gnome/Gtk+ application. One nice feature is that it also has NX plugins and more. It all works out of the box on my Ubuntu systems.
I always use this configuration, simply running rdesktop from a terminal:
$ rdesktop -u REMOTE_USER -p REMOTE_PASSWORD -k pt -g 1440x900 -T "MY REMOTE SERVER" -N -a 16 -z -xl -r clipboard:CLIPBOARD -r disk:SHARE_NAME_ON_REMOTE=LOCAL_SHARED_FOLDER_PATH SERVER_HOSTNAME_OR_IP_ADDRESS
This includes setting the keyboard layout (-k pt; choose the right one for you), the window size (-g 1440x900), a meaningful window title (-T "MY REMOTE SERVER"), clipboard support (-r clipboard:CLIPBOARD), and a shared folder (-r disk:SHARE_NAME_ON_REMOTE=LOCAL_SHARED_FOLDER_PATH) between your Linux host and the Windows host.
Additionally, you can create a shell script for each connection that you use frequently:
#!/bin/sh
rdesktop \
-u REMOTE_USER \
-p REMOTE_PASSWORD \
-k pt \
-g 1440x900 \
-T "MY REMOTE SERVER" \
-N \
-a 16 \
-z \
-xl \
-r clipboard:CLIPBOARD \
-r disk:SHARE_NAME_ON_REMOTE=LOCAL_SHARED_FOLDER_PATH \
SERVER_HOSTNAME_OR_IP_ADDRESS
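Save it, for example, as remote-server.sh (the name is just an example), make it executable, and run it:
chmod +x remote-server.sh
./remote-server.sh
Since the script contains the password in plain text, it is worth keeping its permissions restricted to your own user.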
Yes you can; there are apps like Remote Desktop Viewer that are capable of using RDP (Remote Desktop Protocol).