Cannot connect to X server :0.0 with a Qt application - linux

Compiling on Fedora 10.
I have just started my first Qt GUI application, using all the default settings.
It's just a simple form. It builds without any errors, but when I try to run the application I get the following message:
Starting /home/rob/projects/qt/test1/test1/test1...
No protocol specified
test1: cannot connect to X server :0.0
Thanks for any advice.

The general causes for this are as follows:
DISPLAY not set in the environment.
Solution:
export DISPLAY=:0.0
./myQtCmdHere
(This one doesn't appear to be the one at fault here, though, as the error says which X display it's trying to connect to. Also, the display isn't always :0.0, but most of the time it is.)
A non-authorised user trying to run the X application.
Solution (as the X-owning user, i.e. yourself):
xhost +local:root # where root is the local user you want to grant access to.

Also, if you'd like your X server to be able to receive connections over TCP, these days you must almost always explicitly enable this. To test whether your server is allowing remote TCP connections, try:
telnet 127.0.0.1 6000
If telnet is able to connect, then your X server is listening. If it can't, then neither can any remote X application, and you need to enable remote TCP connections on your server.
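As a hedged sketch of how to enable it (where you set this varies by display manager; -listen tcp and -nolisten tcp are standard Xorg server flags):
startx -- -listen tcp   # Xorg 1.17+ disables TCP by default; this re-enables it
On older servers the default was inverted, and TCP was open unless the server was started with -nolisten tcp.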

Adding to the above answers.
I was in a similar situation while running tests for Code2Pdf at travis.
I solved the problem using xvfb-run. Quoting from the manpage,
xvfb-run is a wrapper for the Xvfb(1x) command which simplifies the task of running commands (typically an X client, or a script containing a list of clients to be run) within a virtual X server environment.
The script that I wrote was:
check_install_xvfb() { # check and install xvfb
    if hash xvfb-run 2>/dev/null; then
        :
    else
        sudo apt-get update
        sudo apt-get upgrade
        sudo apt-get install xvfb
    fi
}
check_install_xvfb
export DISPLAY=localhost:1.0
xvfb-run -a bash .misc/tests.sh
# .misc/tests.sh is the script that runs unit tests. You can replace it with something suitable for you.
Please bear with my bash style; I am a bash noob.
Running the above script helped me.
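If you only need a one-off headless run rather than a CI script, a minimal invocation (assuming xvfb-run is installed; ./test1 being the binary from the original question) would be:
xvfb-run -a ./test1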
You can see the failing build and passing build.
Hope it helps

Related

FortiClient headless linux cli - how to install and configure to handle certain IP ranges only or permit SSH

I am trying to configure the headless VPN only FortiClient on an AWS ubuntu 20.04 ec2 instance, and though I am able to connect to the target, I am then disconnected from the instance and cannot progress.
Setup:
wget http://cdn.software-mirrors.com/forticlientsslvpn_linux_4.4.2328.tar.gz
tar -xzvf forticlientsslvpn_linux_4.4.2328.tar.gz
cd ./forticlientsslvpn/64bit/helper
sudo ./setup.linux.sh
# Accept license
cd ..
./forticlientsslvpn_cli --server serveraddress:port --vpnuser username
# Enter password
##Connected!
At this stage, I am booted out of the instance and cannot reconnect (requiring a soft restart of the instance to gain access again)
I can see that there is a configuration file at forticlientsslvpn/64bit/helper/config but I cannot find any documentation describing what can be configured there or whether it is something I should be concerned with.
The CLI itself doesn't take any options other than:
forticlientsslvpn_cli [--proxy proxyaddress:proxyport] --server vpnserveraddress:vpnport [--proxyuser proxyuser] [--vpnuser vpnuser] [--pkcs12 pkcs12path] [--keepalive]
I would like to either:
Preserve my original SSH connection (and any future connections) so I can develop within the VPN or;
Limit the VPN to only package traffic that is going to a specific IP range (CIDR block)
I have found three different methods of installing the client (sudo apt install forticlient, sudo apt install -y openfortivpn, and the tarball above) and cannot work out which is appropriate. I have looked into FortiClientLinuxGuide and installed that tool, but couldn't find out how to configure it as a VPN (or where to add the configuration); similar experience with the second one.
This seems to be the only documentation about how to configure the CLI, and it's just the bare minimum: How to setup and install SSLVPN.
This post describes the same problem (ssh-telnet-disconnects), and the solution looks like it would work if only I knew how to set that configuration.
Alternatively, I have looked up split-tunnel configuration, which looks like it would be ideal, but I cannot work out how to set it up; the documentation covers only the GUI: Enable-split-tunnel-feature.
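For the first goal, one generic route-level workaround (standard Linux routing, not FortiClient-specific; the address below is a placeholder for your SSH client's public IP) is to pin a host route through the instance's current default gateway before connecting, so the tunnel cannot capture that traffic:
GW=$(ip route | awk '/^default/ {print $3; exit}')
sudo ip route add 203.0.113.45/32 via "$GW"   # 203.0.113.45 = your SSH client's public IP (placeholder)
./forticlientsslvpn_cli --server serveraddress:port --vpnuser username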

Keep SSH running on Windows 10 Bash

I am having a problem keeping SSH running on the Windows Subsystem for Linux. It seems that if a shell is not open and running bash, all processes in the subsystem are killed. Is there a way to stop this?
I have tried to create a service using nssm but have not been able to get it working. Now I am attempting to start a shell and then just send it to the background, but I haven't quite figured out how.
You have to keep at least one bash console open in order for background tasks to keep running: as soon as you close your last open bash console, WSL tears down all running processes.
And, yes, we're working on improving this scenario in the future ;)
Update 2018-02-06
In recent Windows 10 Insider builds, we added the ability to keep daemons and services running in the background, even if you close all your Linux consoles!
One remaining limitation with this scenario is that you do have to manually start your services (e.g. $ sudo service ssh start in Ubuntu), though we are investigating how we might be able to allow you to configure which daemons/services auto-start when you login to your machine. Updates to follow.
To maintain WSL processes, I place this file in C:\Users\USERNAME\AppData\Roaming\Microsoft\Windows\Start Menu\Programs\Startup\wsl.vbs
set ws=wscript.createobject("wscript.shell")
ws.run "C:\Windows\System32\bash.exe -c 'sudo /etc/rc.local'",0
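(The trailing 0 is the window-style argument to wscript's Run method: it launches bash.exe hidden, so no console window flashes up at login.)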
In /etc/rc.local I kick off some services and finally "sleep" to keep the whole thing running:
/usr/sbin/sshd
/usr/sbin/cron
#block on this line to keep WSL running
sleep 365d
In /etc/sudoers.d I added an 'rc-local' file to allow the above commands without a sudo password prompt:
username * = (root) NOPASSWD: /etc/rc.local
username * = (root) NOPASSWD: /usr/sbin/cron
username * = (root) NOPASSWD: /usr/sbin/sshd
This worked well on 1607, but after the update to 1704 I can no longer connect to WSL via SSH.
Once you have cron running, you can use sudo crontab -e -u username to define cron jobs with @reboot to launch them at login.
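For example, a hypothetical entry to start sshd at login would be:
@reboot /usr/sbin/sshd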
Just read through this thread earlier today and used it to get sshd running without having a WSL console open.
I am on Windows 10 Version 1803 and using Ubuntu 16.04.5 LTS in WSL.
I needed to make a few changes to get it working. Many thanks to google search and communities like this.
I modified /etc/rc.local as such:
mkdir /var/run/sshd
/usr/sbin/sshd
#/usr/sbin/cron
I needed to add the directory for sshd, or I would get the error "Missing privilege separation directory /var/run/sshd".
I commented out cron because I was getting similar errors and haven't had the time or need to fix it yet.
I also changed the sudoers entries a little bit in order to get them to work:
username ALL = ....
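Spelled out along the lines of the earlier answer's entries, that would presumably read: username ALL = (root) NOPASSWD: /etc/rc.local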
Hope this is useful to someone.

Run a command in remote server

What would be the best way to run commands on remote servers? I am thinking of using SSH, but is there a better way than that?
I use Red Hat Linux, and I want to run a command on one of the servers, specify which other servers the command should run on, and have it do exactly the same thing on the servers specified. Puppet alone couldn't do it, but I might be able to combine some other tool with Puppet to do the job for me.
It seems you are able to log on to the other servers without entering a password. I assume this is based on SSH keys, as described here: https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Deployment_Guide/s2-ssh-configuration-keypairs.html
You say another script is producing a list of servers. You can now use the following simple script to loop over the list:
for server in $(./server-list-script); do
    echo "$server:"
    ssh "username@$server" mkdir /etc/dir/test123
done > logfile 2>&1
The file "logfile" will collect the output. I'm pretty sure Puppet is able to do this as well.
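One refinement worth considering (standard OpenSSH options, not part of the original answer): adding -o BatchMode=yes -o ConnectTimeout=5 to the ssh invocation makes the loop fail fast and non-interactively on unreachable hosts instead of hanging at a password prompt.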
Your solution will almost definitely end up involving ssh in some capacity.
You may want something to help manage the execution of commands on multiple servers; ansible is a great choice for something like this.
For example, if I want to install libvirt on a bunch of servers and make sure libvirtd is running, I might pass a configuration like this to ansible-playbook:
- hosts: all
  tasks:
    - yum:
        name: libvirt
        state: installed
    - service:
        name: libvirtd
        state: running
        enabled: true
This would ssh to all of the servers in my "inventory" (a file -- or command -- that provides ansible with a list of servers), install the libvirt package, start libvirtd, and then arrange for the service to start automatically at boot.
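For illustration, a minimal INI-style inventory could be a file hosts.ini like this (hostnames are placeholders):
[all]
server1.example.com
server2.example.com
You would then point the playbook at it with: ansible-playbook -i hosts.ini playbook.yml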
Alternatively, if I want to run puppet apply on a bunch of servers, I could just use the ansible command to run an ad-hoc command without requiring a configuration file:
ansible all -m command -a 'puppet apply'

How do I deploy code to hardware nodes that are all on separate networks?

This is an interesting problem I've been thinking about recently and have not come up with or found a solution that I find acceptable.
I'm playing with Raspberry Pis and currently have six that I want to use throughout a few of my personal properties for surveillance purposes.
Making them work and sending video streams to my server is all easy, well and good, but how in the world do I deploy code updates to these "nodes"? They are not on the same network, and some are behind Wi-Fi networks that I don't have port-forwarding access to, so it's not like I can just port forward, SSH into them, and run some .sh magic to update things.
The best I've come up with is using autossh to keep a constant connection open to one of my servers through reverse SSH, and then SSH into them through my parent server in parallel and run a .sh script on them when I want to update. But this seems excessive, and I'm sure some solution or platform exists to solve this: how else do companies like Redbox or Nest, for example, update firmware on their systems remotely?
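For reference, the reverse-tunnel arrangement described above might look like this (host name and port are hypothetical; -M 0, -f, -N, and -R are documented autossh/OpenSSH options):
autossh -M 0 -f -N -R 2201:localhost:22 pi@parent-server.example.com
# then, from the parent server: ssh -p 2201 pi@localhost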
I'm actually doing something similar. I have Pis deployed around the city that I live in. In order not to have to worry about port forwarding and people changing their router configurations, I started using a service called PageKite: http://pagekite.net/
I'm not affiliated with them, but I can't say enough good things about the service and price. My Pis are hooked up to screens that need to display certain things at certain times, and I'm able to VNC in very easily no matter where the Pi is to see what's currently playing. I can obviously just SSH in as well.
The following steps from my pi setup guide deal with installing pagekite and getting it to start on boot:
echo deb http://pagekite.net/pk/deb/ pagekite main | sudo tee -a /etc/apt/sources.list
sudo apt-key adv --recv-keys --keyserver keys.gnupg.net AED248B1C7B2CAC3
sudo apt-get update
sudo apt-get install pagekite
sudo leafpad /etc/pagekite.d/10_account.rc
Replace NAME.pagekite.me with the name of the kite
Replace YOURSECRET with whatever the secret is from the pagekite admin console
Remove the line “abort_not_configured” and the comment above it
sudo cp /etc/pagekite.d/80_sshd.rc.sample /etc/pagekite.d/80_sshd.rc
sudo invoke-rc.d pagekite restart
sudo reboot
This assumes you've made an account and set up a "kite".
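To actually reach sshd through the kite, PageKite's documentation describes tunnelling SSH over the HTTPS port with a proxy helper such as corkscrew; from memory the client-side recipe is along these lines (check the PageKite SSH howto for the exact form):
ssh -o CheckHostIP=no -o "ProxyCommand corkscrew %h 443 %h %p" pi@NAME.pagekite.me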
I think you basically need a reliable reverse tunnel such as PageKite, especially if you plan on expanding your network, as it will turn into a nightmare at a certain size. I believe I'm just going to keep a list of SSH usernames, SSH passwords, and PageKite addresses, then write a script that loops through them and rsyncs my local directory with the new code to the remote directory on the Pi (see the sketch below).
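A sketch of that deploy loop (the list file, its format, and the paths are hypothetical):
# pis.txt holds one "user kitename.pagekite.me" pair per line
while read -r user kite; do
    rsync -az ./code/ "$user@$kite:/home/$user/code/"
done < pis.txt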

Forwarding X11 without SSH? How do I run local apps on another Pc running X Server?

I am using Cygwin X and Debian. I can forward my X session via SSH, but what happens is that I seem to lose the display forwarding in the X session once in a while (from Cygwin to Linux). So I am guessing that it is an implementation issue with Cygwin, because I never lose the X11 display within the same SSH session when I go Linux to Linux.
This also happens when an X11-forwarded app tries to fork another process. Let's say I run Thunderbird and click a URL inside an email: naturally Thunderbird will try to start the default web browser, but it does not manage it with the Cygwin X server, and here is the message I get when the SSH session gives up the display, for reasons I have not been able to determine:
"Error: cannot open display: localhost:10.0"
The other issue is that since SSH gives up the display variable, I have to restart my SSH session to get it working, which also kills the other apps I might be running in that session.
Anyway, after struggling with this for a while, I am thinking that I want to be able to open my apps on another display without using SSH forwarding. I am using this internally on an almost closed LAN, so I am not worried about security for now. I just want to be able to run the app on the Linux machine and see it on the PC that is running Cygwin.
I tried the basic DISPLAY variable approach, export DISPLAY=MY_CYGWIN_PC_IP:0.0 (on the Linux PC), but it does not work.
So I am wondering how I can achieve this. What are the proper settings for what I need?
Your direction was OK: export DISPLAY is what you want, but it is not enough.
On the target, you need to type
xhost +from.where.the.windows.are.coming.com
This gives the X server permission to accept remote windows from that computer.
Beware, it is not really secure! A possible attacker could not only see the windows you have open, but even control your mouse and keyboard. But for simple setups, or if you can trust the remote machine and the network between you, it may be OK.
If not, there is a more advanced authorization mechanism based on preshared keys, named xauth. Google for xauth.
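A common minimal recipe (standard xauth subcommands; the host name is a placeholder) copies the magic cookie for your display to the remote machine instead of opening things up with xhost:
xauth extract - "$DISPLAY" | ssh user@remote.example.com xauth merge -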
The Xorg server has an option to disable remote connections, and some distributions (e.g. Ubuntu!) turn this option on by default. You can test it: if you can telnet to TCP port 6000, remote connections are allowed.
If you are using ssh -X, don't. Use ssh -Y.
See also: Cygwin XWin server randomly loses connection.
Basically, to make this work as in the old times, you need to enable XDMCP in the display manager and use X11; Xwayland doesn't seem to work for me either.
sddm doesn't support XDMCP, but gdm does; you need to edit /etc/gdm/custom.conf and add:
[security]
DisallowTCP=false
[xdmcp]
Enable=true
xhost + ip_of_remote_computer
echo $DISPLAY (note the display number, usually :0 or :1)
Afterwards you can verify:
netstat -l | grep xdmcp
udp 0 0 0.0.0.0:xdmcp 0.0.0.0:*
lsof -i :xdmcp
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
gdm 862335 root 12u IPv4 71774686 0t0 UDP *:xdmcp
On the remote host:
export DISPLAY="ip_of_server:0" (use :0, or whatever number echo $DISPLAY reported on the server, as mentioned above)
xclock &
References:
http://www.softpanorama.org/Xwindows/Troubleshooting/can_not_open_display.shtml
https://tldp.org/HOWTO/html_single/XDMCP-HOWTO/
https://wiki.archlinux.org/title/XDMCP
