TeamViewer quick connect button with Qt Creator - Wine

First, I am using CentOS 6.9 and Qt Creator 3.0.0.
I am able to connect to the server directly from a terminal with this command:
/usr/bin/teamviewer -i [serverid] -P [password]
Then I hooked it up to a button in Qt with this code:
system("gnome-terminal --hide-menubar --profile=noclose -x bash -c '/usr/bin/teamviewer -i [serverid] -P [password];'");
But that does not work; it shows this error:
Init...
CheckCPU: SSE2 support: yes
XRandRWait: No value set. Using default.
XRandRWait: Started by user.
Checking setup...
/opt/teamviewer/tv_bin/wine/bin/wineserver: Symbol `wine_casemap_upper' has different size in shared object, consider re-linking
/opt/teamviewer/tv_bin/wine/bin/wineserver: Symbol `wine_casemap_lower' has different size in shared object, consider re-linking
Launching TeamViewer ...
/opt/teamviewer/tv_bin/wine/bin/wineserver: Symbol `wine_casemap_upper' has different size in shared object, consider re-linking
/opt/teamviewer/tv_bin/wine/bin/wineserver: Symbol `wine_casemap_lower' has different size in shared object, consider re-linking
Launching TeamViewer GUI ...
I also tried different methods using QProcess::start and QProcess::execute, still no luck.

I just resolved it myself.
I had installed Wine on the system and assumed I had removed it correctly, but I had not.
It happened because not all Wine components had been removed.
I had only removed Wine with yum remove wine; the problem was solved with yum remove wine*.
By the way, thanks to @nwp for changing my question tags to wine and teamviewer only. That made me recheck the Wine packages on my system.
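For anyone hitting the same thing, the check and cleanup boil down to something like this (a sketch; exact package names may differ on your system):
rpm -qa | grep -i wine     # list any Wine components still installed
yum remove "wine*"         # remove them all, not just the base package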

Related

Cannot get custom kernel to boot - mkinitcpio does not add any modules

1. What I am trying to achieve:
Build a custom kernel so I can install and run Anbox-git from the AUR on my Arch laptop. A custom kernel is needed for the package to work.
2. What I did to achieve it:
Downloaded the Arch Linux kernel v5.8.5-arch1 from here
I followed the guidelines in the Arch wiki article on traditional compilation to create the custom kernel
Via make nconfig I applied the changes mentioned in the Anbox Arch wiki page.
Via make nconfig I changed the EFIVAR_FS option from "M" to "*" to resolve an error from earlier attempts.
Via make nconfig, under Location: -> Device Drivers -> Multiple devices driver support (RAID and LVM) (MD [=y]) -> Device mapper support (BLK_DEV_DM [=y]), I enabled a few more options (*) because on earlier builds mkinitcpio gave errors about missing modules for DM_CRYPT and some more DM_ modules which I cannot easily reproduce (I will if necessary for the answer, but I hope it is irrelevant).
After creating the config this way I did:
sudo make modules_install
sudo cp -v arch/x86_64/boot/bzImage /boot/vmlinuz-linux58ac
sudo cp /etc/mkinitcpio.d/linux.preset /etc/mkinitcpio.d/linux58ac.preset
Adapted the preset file per the Arch wiki instructions (a sketch of such a preset follows after these steps)
sudo mkinitcpio -p linux58ac
Important: mkinitcpio runs fine, but keeps giving me a warning:
WARNING: No modules were added to the image. This is probably not what
you want.
sudo grub-mkconfig -o /boot/grub/grub.cfg
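For reference, the adapted preset file mentioned in the steps above would look roughly like this (a sketch based on the stock linux.preset from the Arch wiki, for illustration only; my actual file may differ):
# mkinitcpio preset file for the custom 'linux58ac' kernel
ALL_config="/etc/mkinitcpio.conf"
ALL_kver="/boot/vmlinuz-linux58ac"

PRESETS=('default' 'fallback')

default_image="/boot/initramfs-linux58ac.img"

fallback_image="/boot/initramfs-linux58ac-fallback.img"
fallback_options="-S autodetect"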
3. Expected result:
I am able to reboot, select the new kernel from grub menu, get the usual LVM password prompt, and launch into it without problems.
4. Result I get:
I can reboot and select the new kernel from GRUB, but when I select it I get:
Warning: /lib/modules/5.8.5-arch1/modules.devname not found, ignoring.
Starting version 246.4-1-arch
ERROR device 'dev/mapper/vg0-root' not found. Skipping fsck.
mount /new_root: special device /dev/mapper/vg0-root does not exist.
You are being dropped into an emergency shell.
I checked, and /lib/modules/5.8.5-arch1/modules.devname does indeed exist. But I think the actual problem is that mkinitcpio doesn't add the correct modules to the image for the custom kernel, causing it to be unbootable.
Any help appreciated!

How can I solve stdarg.h No such file or directory while compiling out-of-tree Linux kernel module?

I have an out-of-tree Linux kernel module that I need to compile. When I execute "make" in the kernel module directory I am getting:
"fatal error: stdarg.h: No such file or directory"
Before starting the build I installed the header files for my Linux distribution.
$sudo apt-get install kernel-headers-$(uname -r)
How can I solve this compilation error? (my distribution is Ubuntu 16.04 with linux-headers-4.15.0-42-generic)
I ran a search for stdarg.h with the "locate" command to see if I could spot the file on the system.
I got:
/usr/include/c++/5/tr1/stdarg.h
/usr/lib/gcc/i686-linux-gnu/5/include/cross-stdarg.h
/usr/lib/gcc/i686-linux-gnu/5/include/stdarg.h
...
It tells me there is at least one stdarg.h provided by the compiler.
I tried to include the path "/usr/lib/gcc/i686-linux-gnu/5/include" in the kernel module Makefile so stdarg.h could be picked up. It did not work (while building, another reference to stdarg.h in the official kernel headers was not being resolved).
I finally created a symlink directly under:
/usr/src/linux-headers-4.15.0-42-generic/include
$sudo ln -s /usr/lib/gcc/i686-linux-gnu/5/include/stdarg.h stdarg.h
This was just enough to solve the compilation issue.
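For reference, an equivalent way to create the link without hard-coding the GCC version path (an illustrative variant, not exactly what I ran) is to ask GCC for its internal include directory:
cd /usr/src/linux-headers-4.15.0-42-generic/include
sudo ln -s "$(gcc -print-file-name=include)/stdarg.h" stdarg.h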
I am wondering whether the kernel headers should come with an implementation of stdarg.h by default (this is the first time I have encountered this issue). I have also read that the compiler provides an implementation, and that most of the time it is better to use the compiler's version.
Updated note: if the above solution still does not solve the problem:
Before running make again, do a make clean. Then do an ls -la in the folder and look for a ".cache.mk" file. If it is still there, remove it and run make again. That should solve the problem.
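In other words, roughly:
make clean
ls -la             # look for a leftover .cache.mk
rm -f .cache.mk    # remove it if it is still present
make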
I had the same issue with CentOS 9, and the other answers didn't work for me. Apparently the problem is that in more recent kernels the include should not be <stdarg.h> but <linux/stdarg.h>. VirtualBox Guest Additions 6.1.34 checks for a kernel version of 5.15.0 or higher, but my kernel is 5.14.xx, so the wrong stdarg.h include gets selected.
Solving the issue
Dependencies
Install all the dependencies for the Guest Additions:
gcc make perl kernel-devel kernel-headers bzip2 dkms
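On CentOS 9 that would be something like this (assuming dnf; adjust to your package manager):
sudo dnf install gcc make perl kernel-devel kernel-headers bzip2 dkms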
Installation
Run the Guest Additions installation like you would normally. It will fail, saying it is unable to compile the kernel modules. That's expected. It will copy all the files we need to the VM disk.
Editing
We now need to edit the offending files:
/opt/VBoxGuestAdditions-6.1.34/src/vboxguest-6.1.34/vboxguest/include/iprt/stdarg.h
/opt/VBoxGuestAdditions-6.1.34/src/vboxguest-6.1.34/vboxsf/include/iprt/stdarg.h
On line 48 (this may change between versions), the code checks the Linux version and selects the correct header accordingly. We need to replace if RTLNX_VER_MIN(5,15,0) with if RTLNX_VER_MIN(5,14,0) in both files.
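A quick way to make both edits is with sed (a sketch; the paths and the version macro are specific to Guest Additions 6.1.34 and a 5.14 kernel):
sudo sed -i 's/RTLNX_VER_MIN(5,15,0)/RTLNX_VER_MIN(5,14,0)/' \
    /opt/VBoxGuestAdditions-6.1.34/src/vboxguest-6.1.34/vboxguest/include/iprt/stdarg.h \
    /opt/VBoxGuestAdditions-6.1.34/src/vboxguest-6.1.34/vboxsf/include/iprt/stdarg.h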
Compile the kernel modules
We can now compile the kernel modules, and the error should be gone.
sudo rcvboxadd quicksetup all
I personally got an error the first time, but then I recompiled without changing anything and it worked.
Remember that this is just a workaround; it may not work with different versions.
If you are using Arch Linux with the zen kernel:
sudo CPATH=/usr/src/linux-zen/include/linux vmware-modconfig --console --install-all
I had the same problem with VirtualBox 6.1.0 running Arch Linux with kernel 6.1.9.
I downloaded the VirtualBoxGuestAdditions_7.2.0.iso file from the https://download.virtualbox.org/virtualbox/7.0.2/ link (you may select one more appropriate to your VirtualBox version) and assigned it as an optical drive to the VirtualBox machine. After starting the system, running the blkid command in a terminal showed the name of the CD-ROM device, which was /dev/sr0. Then I created an iso folder under /mnt:
mkdir /mnt/iso
and mounted the CD drive to that folder:
mount -o loop /dev/sr0 /mnt/iso
After that I cd'ed to /mnt/iso:
cd /mnt/iso
and manually ran the VirtualBoxGuestAdditions.run script:
sh ./VirtualBoxGuestAdditions.run
which successfully compiled and installed the required VirtualBox guest modules.
Now every time I update the kernel version I redo the same procedure, and it works fine.
It also removes the old 6.1.0 Guest Additions folder.

Qt deployment error [duplicate]

I wrote an application for Linux which uses Qt5.
But when I try to launch it on a Linux system without the Qt SDK installed, the console output is:
Failed to load platform plugin "xcb". Available platforms are:
How can I fix this? Maybe I need to copy some plugin file?
When I use Ubuntu with Qt5 installed but rename the Qt directory, the same problem occurs. So it must be using some file from the Qt directory...
UPDATE:
When I create a "platforms" folder in the app dir with the file libqxcb.so, the app still does not start, but the error message changes:
Failed to load platform plugin "xcb". Available platforms are:
xcb
How can this happen? How can platform plugin be available but can't be loaded?
Use ldd (man ldd) to show shared library dependencies. Running this on libqxcb.so
.../platforms$ ldd libqxcb.so
shows that xcb depends on libQt5DBus.so.5 in addition to libQt5Core.so.5 and libQt5Gui.so.5 (and many other system libs). Add libQt5DBus.so.5 to your collection of shared libs and you should be ready to move on.
As was posted earlier, you need to make sure you install the platform plugins when you deploy your application. Depending on how you want to deploy things, there are two methods to tell your application where the platform plugins (e.g. plugins/platforms/libqxcb.so) are at runtime, either of which may work for you.
The first is to export the path to the directory through the QT_QPA_PLATFORM_PLUGIN_PATH environment variable.
QT_QPA_PLATFORM_PLUGIN_PATH=path/to/plugins ./my_qt_app
or
export QT_QPA_PLATFORM_PLUGIN_PATH=path/to/plugins
./my_qt_app
The other option, which I prefer, is to create a qt.conf file in the same directory as your executable, with the following contents:
[Paths]
Plugins=/path/to/plugins
More information regarding this can be found here and at using qt.conf
I tried to start my binary, compiled with Qt 5.7, on Ubuntu 16.04 LTS where Qt 5.5 is preinstalled. It didn't work.
At first I inspected the binary itself with ldd, as was suggested here, and satisfied all "not found" dependencies. Then the notorious This application failed to start because it could not find or load the Qt platform plugin "xcb" error was thrown.
How to resolve this in Linux
First you should create a platforms directory next to your binary, because that is where Qt looks for the XCB plugin. Copy libqxcb.so there. I wonder why the authors of other answers didn't mention this.
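Roughly like this (the Qt path is just an example; adjust it to where your Qt is installed):
mkdir -p /path/to/your/app/platforms
cp /path/to/Qt/5.7/gcc_64/plugins/platforms/libqxcb.so /path/to/your/app/platforms/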
Then you may want to run your binary with QT_DEBUG_PLUGINS=1 environment variable set to check which dependencies of libqxcb.so are not satisfied. (You may also use ldd for this as suggested in the accepted answer).
The command output may look like this:
me@xerus:/media/sf_Qt/Package$ LD_LIBRARY_PATH=. QT_DEBUG_PLUGINS=1 ./Binary
QFactoryLoader::QFactoryLoader() checking directory path "/media/sf_Qt/Package/platforms" ...
QFactoryLoader::QFactoryLoader() looking at "/media/sf_Qt/Package/platforms/libqxcb.so"
Found metadata in lib /media/sf_Qt/Package/platforms/libqxcb.so, metadata=
{
"IID": "org.qt-project.Qt.QPA.QPlatformIntegrationFactoryInterface.5.3",
"MetaData": {
"Keys": [
"xcb"
]
},
"className": "QXcbIntegrationPlugin",
"debug": false,
"version": 329472
}
Got keys from plugin meta data ("xcb")
loaded library "/media/sf_Qt/Package/platforms/libqxcb.so"
QLibraryPrivate::loadPlugin failed on "/media/sf_Qt/Package/platforms/libqxcb.so" : "Cannot load library /media/sf_Qt/Package/platforms/libqxcb.so: (/usr/lib/x86_64-linux-gnu/libQt5DBus.so.5: version `Qt_5' not found (required by ./libQt5XcbQpa.so.5))"
This application failed to start because it could not find or load the Qt platform plugin "xcb"
in "".
Available platform plugins are: xcb.
Reinstalling the application may fix this problem.
Aborted (core dumped)
Note the failing libQt5DBus.so.5 library. Copy it to your library path; in my case that was the same directory my binary is in (hence LD_LIBRARY_PATH=.). Repeat this process until all dependencies are satisfied.
P.S. thanks to the author of this answer for QT_DEBUG_PLUGINS=1.
I tried the main parts of each answer, to no avail. What finally fixed it for me was to export the following environment variables:
LD_LIBRARY_PATH=/usr/local/lib:~/Qt/5.9.1/gcc_64/lib
QT_QPA_PLATFORM_PLUGIN_PATH=~/Qt/5.9.1/gcc_64/plugins/
Ubuntu 16.04 64bit.
I got the problem for apparently no reason. The night before, I had watched a movie on my VideoLAN instance; that night I wanted to watch another one with VideoLAN, but VLC just didn't want to run because of the error in the question.
I googled a bit and found the solution that fixed my problem: from then on, VLC has been runnable just like before. The solution is this command:
sudo ln -sf /usr/lib/x86_64-linux-gnu/qt5/plugins/platforms/ /usr/bin/
I am not able to explain its consequences, but I know it creates a missing symbolic link.
Since version 5, Qt uses a platform abstraction system (QPA) to abstract from the underlying platform.
The implementation for each platform is provided by plugins. For X11 it is the XCB plugin. See Qt for X11 requirements for more information about the dependencies.
There can be many causes of this problem. The key is to use
export QT_DEBUG_PLUGINS=1
before you run your Qt application. Then inspect the output, which will point you in the direction of the error. In my case it was:
Cannot load library /opt/nao/plugins/platforms/libqxcb.so: (/opt/nao/bin/../lib/libz.so.1: version `ZLIB_1.2.9' not found (required by /usr/lib/x86_64-linux-gnu/libpng16.so.16))
But that is solved in different threads. See for instance https://stackoverflow.com/a/50097275/2408964.
This information will probably help. I was on Ubuntu 18.04, and when I tried to install Krita using the PPA method, I got this error:
This application failed to start because it could not find or load the Qt platform plugin "xcb" in "".
Available platform plugins are: linuxfb, minimal, minimalegl, offscreen, wayland-egl, wayland, xcb.
Reinstalling the application may fix this problem.
Aborted
I tried all the solutions that I found in this thread and on other sites without any success.
Finally, I found a post where the author mentions that it is possible to activate Qt5's plugin debugging using this simple command:
export QT_DEBUG_PLUGINS=1
After adding this command I ran Krita again and got the same error, but this time I knew its cause:
libxcb-xinerama.so.0: cannot open shared object file: No such file or directory.
This error prevents "xcb" from loading properly. So the solution would be to install libxcb-xinerama.so.0, right? However, when I ran the command:
sudo apt install libxcb-xinerama
the lib was already installed. Now what, Teo? Well, then I used an old trick :) Yeah, that one: --reinstall
sudo apt install --reinstall libxcb-xinerama
TLDR: This last command solved my problem.
I ran into a very similar problem with the same error message. First, debug a bit by turning on Qt's plugin debugging from the command line:
export QT_DEBUG_PLUGINS=1
and rerun the application. For me this revealed the following:
"Cannot load library /home/.../miniconda3/lib/python3.7/site-packages/PyQt5/Qt/plugins/platforms/libqxcb.so: (libxkbcommon-x11.so.0: cannot open shared object file: No such file or directory)"
"Cannot load library /home/.../miniconda3/lib/python3.7/site-packages/PyQt5/Qt/plugins/platforms/libqxcb.so: (libxkbcommon-x11.so.0: cannot open shared object file: No such file or directory)"
Indeed, I was missing libxkbcommon-x11.so.0 and libxkbcommon-x11.so.0. Next, check your architecture using dpkg from the linux command line. (For me, the command "arch" gave a different and unhelpful result)
dpkg --print-architecture #result for me: amd64
I then googled "libxkbcommon-x11.so.0 ubuntu 18.04 amd64", and likewise for libxkbcommon.so.0, which yields those packages on packages.ubuntu.com. That told me, in retrospect unsurprisingly, that I was missing packages called libxkbcommon-x11-0 and libxkbcommon0, and that installing those packages would include the needed files (the dev versions would not). Then the solution:
sudo apt-get update
sudo apt-get install libxkbcommon0
sudo apt-get install libxkbcommon-x11-0
So, I spent about a day trying to figure out what the issue was; I tried all the proposed solutions, such as installing xcb libs or exporting the Qt plugins folder, but none of them worked. The suggestion to use QT_DEBUG_PLUGINS=1 to debug the issue didn't give me direct insight like in that answer; instead I was getting something about unresolved symbols within Qt5Core.
That gave me a hint, though: what if it's trying to mix files from different Qt installations? On my machine I had the standard version installed in /home/username/Qt/ and some local builds within my project that I compiled myself (I have other custom-built kits in other locations as well). Whenever I tried to use any of the kits (installed by the Qt maintenance tool or built by myself), I would get the "xcb error".
The solution was simple: provide the Qt path through CMAKE_PREFIX_PATH and not through Qt5_DIR as I had done, and that solved the problem. Example:
cmake .. -DCMAKE_PREFIX_PATH=/home/username/Qt/5.11.1/gcc_64
I faced the same problem after installing Viber. It had all the required Qt libraries in /opt/viber/plugins/.
I checked the dependencies of /opt/viber/plugins/platforms/libqxcb.so and found missing ones: libxcb-render.so.0, libxcb-image.so.0, libxcb-icccm.so.4, libxcb-xkb.so.1.
So I resolved my issue by installing the packages that provide these libraries:
apt-get install libxcb-xkb1 libxcb-icccm4 libxcb-image0 libxcb-render-util0
I like the solution with qt.conf.
Put a qt.conf file next to the executable with these lines:
[Paths]
Prefix = /path/to/qtbase
And it works like a charm :^)
For a working example:
[Paths]
Prefix = /home/user/SDKS/Qt/5.6.2/5.6/gcc_64/
The documentation on this is here: https://doc.qt.io/qt-5/qt-conf.html
All you need to do is
pip uninstall PyQt5
and
conda install pyqt
Most PyQt problems can be fixed with this simple solution.
In my case, I needed to deploy two Qt apps on an Ubuntu VirtualBox guest. One was command-line ("app"), the other GUI-based ("app_GUI").
I used "ldd app" to find out what the required libs are, and copied them to the Ubuntu guest.
While the command-line executable "app" worked OK, the GUI-based executable crashed, giving the "Failed to load platform plugin "xcb"" error. I checked ldd for libxcb.so, but this too had no missing dependencies.
The problem seemed to be that while I did copy all the right libraries, I had accidentally also copied libraries that were already present on the guest system, meaning that (a) it was unnecessary to copy them in the first place and (b), worse, copying them produced incompatibilities between the installed libraries.
Worse still, they were undetectable by ldd, as I said.
The solution? Make sure that you copy the libraries shown as missing by ldd and absolutely no extra libraries.
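A quick way to list exactly which libraries ldd considers missing, and therefore the only ones worth copying (app_GUI here stands in for your own binary):
ldd ./app_GUI | awk '/not found/ {print $1}'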
In my case missing header files were the reason libxcb was not built by Qt. Installing them according to https://wiki.qt.io/Building_Qt_5_from_Git#Linux.2FX11 resolved the issue:
yum install libxcb libxcb-devel xcb-util xcb-util-devel mesa-libGL-devel libxkbcommon-devel
Folks trying to get this working on Ubuntu 20.04, please try running this and see if it solves the problem. It worked for me:
sudo apt-get update -y
sudo apt-get install -y libxcb-xinerama0
I link all Qt stuff statically into the generic Linux builds of my open source projects. It makes life a bit easier. You just need to build static versions of the Qt libraries first. Of course, this cannot be applied to closed-source software due to licensing issues. Deployment of Qt5 apps on Linux is currently a bit problematic because Ubuntu 12.04, for example, doesn't have Qt5 libraries in its package repositories.
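If you go that route, building a static Qt from source looks roughly like this (a sketch; configure options vary between Qt versions and the install prefix is just an example):
./configure -static -prefix /opt/qt5-static -opensource -confirm-license -nomake examples -nomake tests
make -j"$(nproc)"
make install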
I had this problem, and on a hunch I removed the Qt Configs from my environment. I.e.,
rm -rf ~/.config/Qt*
Then I started qtcreator and it reconfigured itself with the existing state of the machine. It no longer remembered where my projects were, but that just meant I had to browse to them "for the first time" again.
But more importantly it built itself a coherent set of library paths, so I could rebuild and run my project executables again without the xcb or qxcb libraries going missing.
I faced the same situation, but on a Ubuntu 20.04 VM.
TL;DR: Check file permissions.
What I did:
I copied the required Qt libs to /usr/local/lib/x86_64-linux-gnu/ and added that directory to LD_LIBRARY_PATH
I copied the platforms folder from Qt to my application directory and added it to QT_PLUGIN_PATH
I ran ldd on the executable and on the offending libqxcb.so (ldd libqxcb.so), and it complained about some dependencies although ldconfig listed them as found:
linux-vdso.so.1 (0x00007ffee19af000)
libQt5XcbQpa.so.5 => not found
libfontconfig.so.1 => /lib/x86_64-linux-gnu/libfontconfig.so.1 (0x00007f7cb18fb000)
libfreetype.so.6 => /lib/x86_64-linux-gnu/libfreetype.so.6 (0x00007f7cb183c000)
libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x00007f7cb1820000)
libQt5Gui.so.5 => /usr/local/lib/x86_64-linux-gnu/libQt5Gui.so.5 (0x00007f7cb0fd4000)
libQt5DBus.so.5 => not found
I used export QT_DEBUG_PLUGINS=1 for further info. It complained about missing files, although they were there.
What I found:
For some reason, when copying to the VM through the shared folder, the file permissions were not correct.
Thus, I ran sudo chmod 775 * on the libs and voilà.
I solved the issue through this https://github.com/NVlabs/instant-ngp/discussions/300
pip uninstall opencv-python
pip install opencv-python-headless
This seems to have been a problem with the cv2 Python package and how it pulls in Qt.
sudo ln -sf /usr/lib/...."adapt-it"..../qt5/plugins/platforms/ /usr/bin/
It creates the symbolic link that was missing. Good for Qt! Good for VLC!

How does "biosdevname" really work?

I know the purpose of the "biosdevname" feature in Linux, but I'm not sure exactly how it works.
I tested it with Ubuntu 14.04 and Ubuntu 14.10 (both 64-bit server editions) and it looks like they enable it by default - right after system startup my network interface has a name such as p4p1 instead of eth0, and no customization is needed. As I understood it, in order for biosdevname to be enabled, BOTH of these conditions must be met:
a boot option biosdevname=1 must be passed to a kernel
biosdevname package must be installed
As I already mentioned, both Ubuntu 14.04 and 14.10 seem to offer biosdevname as a default feature: they come with the biosdevname package already installed, and I didn't need to modify grub.cfg either - GRUB_CMDLINE_LINUX_DEFAULT has no parameters and my network interface still has a BIOS name (p*p*) instead of a kernel name (eth*).
Later I wanted to restore the old-style device naming, and that's where the interesting part begins. I decided to experiment a bit while trying to disable the biosdevname feature. Since it requires the biosdevname package to work (or so I read here and there), I assumed removing it would be enough to disable the feature, so I typed:
sudo apt-get purge biosdevname
To my surprise, after a reboot my network interface was still p4p1, so biosdevname clearly still worked even though the biosdevname package had been wiped out.
As a next step, I applied the appropriate changes to /etc/network/interfaces in order to restore the old name of my network interface (removed the entry for p4p1 and added an entry for eth0). As a result, after another reboot, ifconfig reported neither eth0 nor p4p1, which was further proof that the OS still understood BIOS names instead of kernel names.
It turned out that I also had to explicitly change the GRUB entry to GRUB_CMDLINE_LINUX_DEFAULT=biosdevname=0 and update GRUB to get the expected result (biosdevname disabled and the old name of the network interface restored).
My question is: how could biosdevname work without the biosdevname package? Is it not required after all? If so, what exactly provides the biosdevname functionality and how does it work?
The reason biosdevname keeps annoying you even after you uninstall the package is that it installed itself into the initrd (initial ramdisk) file as well.
When uninstalling, /usr/share/initramfs-tools/hooks/biosdevname is removed, but there is no postrm script in the package, so update-initramfs is not executed and biosdevname is still present in the /boot/initrd... file used in the first stage of system startup.
You can fully get rid of it like this:
$ sudo update-initramfs -u
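To see the leftover files for yourself, you can list the initrd contents before and after running the update (lsinitramfs ships with initramfs-tools on Ubuntu):
lsinitramfs /boot/initrd.img-$(uname -r) | grep -i biosdevname   # leftover hook/udev bits show up here; empty after the update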

Headless X11 Angstrom

I have a BeagleBone with no LCD/display. When I try to use startx in the console, it says /dev/fb0 doesn't exist. The xorg.conf file is using the fbdev driver. Apparently, if an LCD is detected, everything works.
How can I setup a virtual display so I can vnc to it?
Thought I'd better answer this for reference. Oh, I also got the 'Tumbleweed' badge... Great...
If no LCD/DVI cape is attached, then the boot doesn't load a frame buffer (/dev/fb0). As such, no X11 server starts up. x11vnc requires a real X11 server to be running for it to work. There is also the program xvnc which can create a virtual X11/frame buffer on your behalf, but I couldn't see it in the Angstrom packages.
So I installed Xvfb and created a virtual frame buffer. Install the package:
xserver-xorg-xvfb
When starting it, keep in mind (for newbies like me coming from Windows) that it is case-sensitive. To create a virtual X11 server:
Xvfb :1 -screen 0 1024x768x16 &
When you do this, you will probably get these errors:
(EE) AIGLX error: dlopen of /usr/X11/lib/dri/swrast_dri.so failed (dlopen(/usr/X11/lib/dri/swrast_dri.so, 5): image not found)
(EE) GLX: could not load software renderer
So, install the package:
mesa-dri-driver-swrast
OK, error gone. Now we can export our display (an environment variable so Firefox, or whatever X11 client you run, can attach to the display):
export DISPLAY=:1
Load up Firefox (something to see):
firefox &
And now we try to start x11vnc:
x11vnc -display :1 -bg -nopw -xkb
At this point, with this distro, you'll see an error about XTEST not being found / not being available when x11vnc was built. The issue is described here.
I made sure that I had all the proper libraries installed, so I figured it must have been a bad build on Angstrom. So, now to build it myself. I ensured all required libraries were available; these are the ones ending with '-dev'; by default they all appeared to be available. I followed the instructions here.
Except the copy line didn't work too well for me, so do what you need to do to copy it to the /usr/bin folder.
Now it starts, and there are no errors about XTEST, and the input works!
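For reference, the whole sequence condensed into one place (package names as above from the Angstrom feeds, installed with opkg; the x11vnc is the self-built one copied to /usr/bin):
opkg install xserver-xorg-xvfb mesa-dri-driver-swrast
Xvfb :1 -screen 0 1024x768x16 &
export DISPLAY=:1
firefox &
x11vnc -display :1 -bg -nopw -xkb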
