Error in TightVNC Viewer: no connection could be made because the target machine actively refused it [closed] - vnc

I am using TightVNC Viewer on Windows 7 to connect to an Ubuntu machine. I get the error "Error in TightVNC Viewer: no connection could be made because the target machine actively refused it."
I do not have any firewall set up.
When I run ps -ef | grep vnc, I get:
root 5265 4521 0 15:57 pts/1 00:00:00 sudo x11vnc -safer -localhost -nopw -accept popup:0 -once -viewonly -display :0
root 5266 5265 0 15:57 pts/1 00:00:00 x11vnc -safer -localhost -nopw -accept popup:0 -once -viewonly -display :0
mmm 5890 5269 0 16:06 pts/2 00:00:00 grep --color=auto vnc
When I run x11vnc -forever -usepw -httpdir /usr/share/vnc-java/ -httpport 5800, I get:
23/05/2014 16:16:12 * XOpenDisplay failed (:0)
* x11vnc was unable to open the X DISPLAY: ":0", it cannot continue.
* There may be "Xlib:" error messages above with details about the failure.
I am not sure where the issue is.
From TightVNC Viewer I tried connecting as 171.69.35.33, 171.69.35.33:5900, and 171.69.35.33::5901.
ps aux | grep vnc
117 6125 2.1 8.3 4832760 679396 ? Sl 16:14 1:13 /usr/bin/kvm -S -M pc-1.0 -enable-kvm -m 4096 -smp 4,sockets=4,cores=1,threads=1 -name talon -uuid 33c53705-1847-e2a4-897d-436c39337179 -nodefconfig -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/talon.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -drive file=/home/nso/build23-patch/talon-amd64-0.0.0.23_output/talon-amd64-0.0.0.23.qcow2,if=none,id=drive-ide0-0-0,format=qcow2 -device ide-drive,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0,bootindex=1 -drive file=/home/nso/build23-patch/talon-amd64-0.0.0.23_output/talon.iso,if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw -device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -netdev tap,fd=18,id=hostnet0,vhost=on,vhostfd=19 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:16:3e:36:ce:ec,bus=pci.0,addr=0x3,bootindex=2 -chardev socket,id=charserial0,host=127.0.0.1,port=2225,telnet,server,nowait -device isa-serial,chardev=charserial0,id=serial0 -usb -vnc 127.0.0.1:1 -vga cirrus -device intel-hda,id=sound0,bus=pci.0,addr=0x5 -device hda-duplex,id=sound0-codec0,bus=sound0.0,cad=0 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x7
mandnaya 6756 0.0 0.0 8112 896 pts/2 R+ 17:10 0:00 grep --color=auto vnc

sudo apt-get install x11vnc
x11vnc -storepasswd
x11vnc -usepw
sudo x11vnc -xkb -noxrecord -noxfixes -noxdamage -display :0 -auth /var/run/lightdm/root/:0 -usepw
This solved my issue.
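If the viewer still reports "actively refused", it may also help to confirm that a VNC server is actually listening before connecting. A minimal check, assuming the default VNC port range starting at 5900:
# list listening TCP sockets and look for a VNC port (5900, 5901, ...)
sudo ss -tlnp | grep 590
# a LISTEN line owned by x11vnc means the server is up on that port;
# no output means nothing is listening, which is exactly what the viewer
# reports as "the target machine actively refused it"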

Related

Varnish 6 reload

I've upgraded my Varnish from 6.2.x to 6.6.x. Almost everything works OK, but reload does not.
After "start", ps shows:
root 10919 0.0 0.0 18960 5288 ? Ss 22:38 0:00 /usr/sbin/varnishd -j unix,user=vcache -F -a :80 -T localhost:6082 -f /etc/varnish/default.vcl -p thread_pools=8 -p thread_pool_min=100 -p thread_pool_max=4000 -p workspace_client=128k -p workspace_backend=128k -l 200m -S /etc/varnish/secret -s malloc,256m -s static=file,/data/varnish_storage.bin,80g
Now I try to reload:
Apr 8 22:42:10 xxx varnishd[10919]: CLI telnet 127.0.0.1 5282 127.0.0.1 6082 Rd auth 0124ef9602b9e6aad2766e52755d02a0d17cd6cfe766304761d21ea058bd8b3b
Apr 8 22:42:10 xxx varnishd[10919]: CLI telnet 127.0.0.1 5282 127.0.0.1 6082 Wr 200 -----------------------------#012Varnish Cache CLI 1.0#012-----------------------------#012Linux,5.4.0-107-generic,x86_64,-junix,-smalloc,-sfile,-sdefa
ult,-hcritbit#012varnish-6.6.1 revision e6a8c860944c4f6a7e1af9f40674ea78bbdcdc66#012#012Type 'help' for command list.#012Type 'quit' to close CLI session.
Apr 8 22:42:10 xxx varnishd[10919]: CLI telnet 127.0.0.1 5282 127.0.0.1 6082 Rd ping
Apr 8 22:42:10 xxx varnishd[10919]: CLI telnet 127.0.0.1 5282 127.0.0.1 6082 Wr 200 PONG 1649450530 1.0
Apr 8 22:42:10 xxx varnishd[10919]: CLI telnet 127.0.0.1 5282 127.0.0.1 6082 Rd vcl.load reload_20220408_204210_11818 /etc/varnish/default.vcl
Apr 8 22:42:15 xxx varnishreload[11818]: VCL 'reload_20220408_204210_11818' compiled
Apr 8 22:42:20 xxx varnishreload[11818]: Command: varnishadm -n '' -- vcl.use reload_20220408_204210_11818
Apr 8 22:42:20 xxx varnishreload[11818]: Rejected 400
Apr 8 22:42:20 xxx varnishreload[11818]: CLI communication error (hdr)
Apr 8 22:42:20 xxx systemd[1]: varnish.service: Control process exited, code=exited, status=1/FAILURE
Apr 8 22:42:20 xxx systemd[1]: Reload failed for Varnish Cache, a high-performance HTTP accelerator.
and now ps shows:
vcache 10919 0.0 0.0 19048 5880 ? SLs 22:38 0:00 /usr/sbin/varnishd -j unix,user=vcache -F -a :80 -T localhost:6082 -f /etc/varnish/default.vcl -p thread_pools=8 -p thread_pool_min=100 -p thread_pool_max=4000 -p workspace_client=128k -p workspace_backend=128k -l 200m -S /etc/varnish/secret -s malloc,256m -s static=file,/data/varnish_storage.bin,80g
vcache 10959 0.4 0.2 84585576 23088 ? SLl 22:39 0:01 /usr/sbin/varnishd -j unix,user=vcache -F -a :80 -T localhost:6082 -f /etc/varnish/default.vcl -p thread_pools=8 -p thread_pool_min=100 -p thread_pool_max=4000 -p workspace_client=128k -p workspace_backend=128k -l 200m -S /etc/varnish/secret -s malloc,256m -s static=file,/data/varnish_storage.bin,80g
I see the process owner was changed to vcache. What is wrong with it? Another reload will fail too, with the same reject code.
Can you try removing -j unix,user=vcache from your varnishd runtime command? If I remember correctly, Varnish will automatically drop privileges on the worker process without needing the jailing settings to be set explicitly.
If that doesn't work, please also explain which commands you used to start Varnish and reload Varnish.
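For reference, this is the runtime command from the question with only the -j option removed and everything else unchanged (adjust it in whatever unit file or init script starts varnishd on your system):
/usr/sbin/varnishd -F -a :80 -T localhost:6082 -f /etc/varnish/default.vcl \
  -p thread_pools=8 -p thread_pool_min=100 -p thread_pool_max=4000 \
  -p workspace_client=128k -p workspace_backend=128k -l 200m \
  -S /etc/varnish/secret -s malloc,256m -s static=file,/data/varnish_storage.bin,80g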

Why can some processes (even user processes) not be migrated to a certain CPU by `cpuset(7)`?

Why can some processes not be migrated to a certain CPU by cpuset(7) while others can?
I found that the following processes could not really be migrated to a certain CPU (checking the cpuset filesystem looks fine, but if you check the affinity of these processes with top or htop, you can see that the cpuset does not actually take effect for them):
/sbin/init splash
/usr/sbin/rpc.idmapd
/lib/systemd/systemd-timesyncd
/lib/systemd/systemd-timesyncd
/usr/sbin/cups-browsed
/usr/sbin/sshd -D
/sbin/dhclient -d -q -sf /usr/lib/NetworkManager/nm-dhcp-helper -pf
/var/run/dhclient-
/usr/sbin/dnsmasq --no-resolv --keep-in-foreground --no-hosts --bind-interfaces --pid-
sshd: john [priv]
sshd: john [priv]
sshd: john#notty
/usr/lib/openssh/sftp-server
lightdm --session-child 12 15
upstart-file-bridge --daemon --user
/usr/lib/gvfs/gvfsd-fuse /run/user/1000/gvfs -f -o big_writes
/usr/lib/at-spi2-core/at-spi-bus-launcher
/usr/bin/dbus-daemon --config-file=/etc/at-spi2/accessibility.conf --nofork --print-addre
/usr/lib/at-spi2-core/at-spi2-registryd --use-gnome-session
/usr/lib/update-notifier/system-crash-notification
/usr/lib/x86_64-linux-gnu/hud/hud-service
/usr/lib/dconf/dconf-service
/usr/lib/x86_64-linux-gnu/indicator-power/indicator-power-service
/usr/lib/x86_64-linux-gnu/indicator-power/indicator-power-service
/usr/lib/x86_64-linux-gnu/indicator-datetime/indicator-datetime-service
/usr/lib/x86_64-linux-gnu/indicator-sound/indicator-sound-service
/usr/lib/x86_64-linux-gnu/indicator-printers/indicator-printers-service
/usr/lib/evolution/evolution-source-registry
/usr/lib/evolution/evolution-source-registry
/usr/lib/colord/colord
/usr/lib/colord/colord
/usr/lib/evolution/evolution-calendar-factory
/usr/bin/gnome-software --gapplication-service
/usr/lib/unity-settings-daemon/unity-fallback-mount-helper
/usr/lib/gvfs/gvfs-udisks2-volume-monitor
/usr/lib/gvfs/gvfs-udisks2-volume-monitor
/usr/lib/udisks2/udisksd --no-debug
/usr/lib/gvfs/gvfs-gphoto2-volume-monitor
/usr/lib/evolution/evolution-calendar-factory-subprocess --factory contacts --bus-name or
zeitgeist-datahub
I think that may be because your computer uses a NUMA model rather than an SMP model. This can solve the problem, but I'm not sure whether that is the reason.
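As a quick cross-check of the behaviour described in the question (cpuset filesystem looks fine, affinity unchanged), you can read a process's effective affinity directly. A minimal sketch, with <PID> standing for the process in question:
# cpuset the process is placed in
cat /proc/<PID>/cpuset
# CPUs the kernel actually allows it to run on
taskset -pc <PID>
grep Cpus_allowed_list /proc/<PID>/status
# if these disagree with the cpuset, the migration did not take effect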

docker tty: command executed in a container displayed on the host

Run container:
[root@localhost ~]# tty
/dev/pts/3
[root@localhost ~]# docker run -it nginx /bin/bash
root@bee12031f933:/# sleep 20
root@bee12031f933:/#
See:
[root@localhost ~]# tty
/dev/pts/2
[root@localhost ~]# w
17:43:24 up 19 days, 45 min, 5 users, load average: 0.00, 0.01, 0.05
USER TTY FROM LOGIN# IDLE JCPU PCPU WHAT
root pts/0 192.168.1.22 16:24 1:01m 0.73s 0.00s sleep 20
root pts/1 192.168.1.22 11:31 1:02m 4.92s 4.65s docker run -it centos:7.7.1908
root pts/2 192.168.1.22 16:31 4.00s 0.70s 0.01s w
root pts/3 192.168.1.22 15:09 4.00s 0.25s 0.07s docker run -it nginx /bin/bash
root pts/4 192.168.1.22 16:41 44.00s 0.06s 0.06s -bash
The docker container is running on pts/3, and I executed the command "sleep 20" inside the container. Then I ran "w" on the host, and it shows the command "sleep 20" being executed on pts/0. What is the reason for this?
Why does the host display commands executed inside containers?
Docker is similar to how LXC works. It allows sandboxing processes from one another and controlling their resource allocations.
Since the resources are only "separated", the system shows the information based on what it knows.
myuser@localhost: ~ $ tty
/dev/pts/1
myuser@localhost: ~ $ docker run --rm -it ubuntu:18.04 bash
root@36ed505961f4:/# tty
/dev/pts/0
Check the Kernel Namespaces for more info.
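To see that separation concretely, you can compare the namespaces of a shell on the host with those of a process inside a container. A minimal sketch, assuming a running container named mycontainer (hypothetical name):
# namespaces of the current host shell
ls -l /proc/$$/ns
# find the container's init process as seen from the host, then inspect its namespaces
docker inspect --format '{{.State.Pid}}' mycontainer
sudo ls -l /proc/<that pid>/ns
# the pid, mnt and uts links differ, so tools on each side only see their own view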

Why does Ubuntu 18.04 use `/sbin/init` instead of `systemd`? [closed]

First of all, here is my system environment:
# cat /proc/version
Linux version 4.15.0-52-generic (buildd@lgw01-amd64-051) (gcc version 7.3.0 (Ubuntu 7.3.0-16ubuntu3)) #56-Ubuntu SMP Tue Jun 4 22:49:08 UTC 2019
# cat /etc/issue
Ubuntu 18.04.2 LTS \n \l
According to the Ubuntu Wiki, Ubuntu has used systemd by default since 15.04, and systemd runs with PID 1 as /sbin/init. However, I found a different result on my Ubuntu 18.04:
# ps aux | awk '$2==1{print $0}'
root 1 0.0 0.8 159692 8784 ? Ss Oct24 0:21 /sbin/init noibrs splash
# lsof -p 1 | grep txt
systemd 1 root txt REG 252,1 1595792 927033 /lib/systemd/systemd
So, my questions are:
Why does Ubuntu 18.04 use /sbin/init instead of /lib/systemd/systemd?
Why does lsof -p 1 | grep txt return /lib/systemd/systemd while the process with PID 1 is /sbin/init?
/sbin/init is a symbolic link to /lib/systemd/systemd
Take a look at the output of stat /sbin/init or readlink /sbin/init
This is what they mean by systemd "running as /sbin/init". The systemd binary is linked as /sbin/init and started by that link name.
Update
To further explain the difference between the ps and lsof output: ps is showing the command that started the process, while lsof is showing which files a process has opened.
When systemd was started, it was invoked as /sbin/init noibrs splash; the filesystem resolved the link to /lib/systemd/systemd, which was then read from disk and executed.
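A quick way to confirm this on Ubuntu 18.04 (a minimal check; the exact link target can vary between releases):
# where does /sbin/init point?
readlink -f /sbin/init
# expected output on Ubuntu 18.04: /lib/systemd/systemd
ls -l /sbin/init
# e.g. lrwxrwxrwx 1 root root 20 ... /sbin/init -> /lib/systemd/systemd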

Difference between pidof and pgrep? [closed]

I'm not sure why pidof doesn’t work, but pgrep works.
$ pidof squid
returns nothing
$ pgrep squid
returns 3322
How can I get the 3322 using pidof?
pidof returns details for an actual program name, whereas pgrep returns details for any processes that match the provided pattern. This is clearly stated in the man pages of both tools.
pidof [-s] [-c] [-n] [-x] [-m] [-o omitpid[,omitpid..]] [-o omitpid[,omitpid..]..] program [program..]
vs.
pgrep [options] pattern
When you're looking for the executable squid, pgrep can match it because the pattern matches /usr/bin/squid*, whereas pidof cannot find a program called squid because the Squid daemon is likely called something like /usr/bin/squid-server.
For example, here I'm looking at the output of ps and looking for programs running with the name systemd within them:
$ ps -eaf | grep systemd
root 1 0 0 Sep03 ? 00:00:05 /usr/lib/systemd/systemd --switched-root --system --deserialize 21
root 425 1 0 Sep03 ? 00:00:03 /usr/lib/systemd/systemd-journald
root 480 1 0 Sep03 ? 00:00:00 /usr/lib/systemd/systemd-udevd
dbus 630 1 0 Sep03 ? 00:00:01 /usr/bin/dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation
root 648 1 0 Sep03 ? 00:00:00 /usr/lib/systemd/systemd-logind
pgrep is able to find them as well:
$ pgrep -l systemd
1 systemd
425 systemd-journal
480 systemd-udevd
648 systemd-logind
But pidof only finds the first one:
$ pidof systemd
1
That's because only PID 1 runs under the exact name systemd (/usr/lib/systemd/systemd); the others have different names, such as systemd-journald and systemd-udevd.
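To get the 3322 with pidof, you need the process's exact name rather than a pattern. A minimal sketch, assuming the PID reported by pgrep is 3322 (the real binary name on your system may differ):
# find out what the process matched by pgrep is actually called
ps -p 3322 -o comm=
readlink /proc/3322/exe
# then pass that exact name (or the full path) to pidof
pidof <name printed above>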
