Assume that you run an application with a web UI (e.g. Jupyter, Kubernetes Dashboard UI) on a Linux/UNIX server (sarah@10.0.0.100). On the server, you confirm that you can access the web UI by opening http://localhost:8001 in Firefox.
You have separate workstations on the same network. Is there an easy way to access the web UI by simply opening http://10.0.0.100:8001 from a web browser on the workstations?
Workaround. Establish an SSH connection with port tunneling:
$ ssh -N -L 8001:localhost:8001 sarah@10.0.0.100
You can establish a similar connection by using other SSH client tools such as PuTTY. Then open http://localhost:8001 from a web browser on your workstation.
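If you do this often, the forwarding can also live in ~/.ssh/config (a sketch; the host alias webui is made up):
Host webui
    HostName 10.0.0.100
    User sarah
    LocalForward 8001 localhost:8001
With that entry, ssh -N webui establishes the same tunnel.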
But it is tedious to establish an SSH connection every time, so I am looking for a better approach.
Why not just do this?
jupyter notebook --ip=0.0.0.0 --port=8001
OK, if you want a system-level proxy that works on your local LAN, you can do the following:
On your workstation, install proxychains-ng.
Modify proxychains.conf with the following:
[ProxyList]
socks4 127.0.0.1 8001
On your workstation, run:
proxychains4 -f proxychains.conf ssh -L 8001:0.0.0.0:8001 sarah@10.0.0.100
Now you can hit http://<your_workstation>:8001 from anywhere on your LAN and it will be proxied to your remote system.
To keep your tunnel always connected, you can install autossh, and replace the proxy command with:
proxychains4 -f proxychains.conf \
autossh -t -M 0 \
-o 'ServerAliveInterval=30' \
-o 'ServerAliveCountMax=10000' \
-o 'SendEnv=TERM_PROGRAM' \
-o 'ExitOnForwardFailure=no' \
-o 'TCPKeepAlive=yes' \
-L 8001:0.0.0.0:8001 \
sarah@10.0.0.100
You should also consider using key-based auth for this.
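A minimal key setup might look like this (a sketch, assuming OpenSSH; the key path is arbitrary, and -N '' creates the key without a passphrase so the tunnel can reconnect unattended):
ssh-keygen -t ed25519 -f ~/.ssh/tunnel_key -N ''
ssh-copy-id -i ~/.ssh/tunnel_key.pub sarah@10.0.0.100
Then add -i ~/.ssh/tunnel_key to the autossh command above so it can reconnect without prompting.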
The setup is as follows:
remote private server far far away
remote private server has private gitlab instance on port XXXX
remote private server is configured to allow SSH sign-on via SSH key
gitlab instance on port XXXX of remote private server requires SSH key authentication using different SSH key
How can I clone that repository onto my local machine, and push/pull data remotely given that setup?
This is how I access it locally when I am not far, far away from remote private server:
git clone git@XXX.XXX.XX.X:REPODIR/repo_name.git
In this case, XXX.XXX.XX.X is the IP of the GitLab instance on the remote network.
Is there any way to tunnel into the remote network and access the GitLab instance by proxy (forgive me if I am using the word wrongly)?
Thank you.
OK, mostly thanks to @o11c for this; here are the findings that let me clone my repo remotely.
Disclaimer: ProxyJump (-J, see the ssh man page) is the shorthand, more modern version of this, but I couldn't get it working -- if anyone wants to update this with their implementation of ProxyJump, that would be useful!
SSH into your account on the main server, opening a dynamic (SOCKS) proxy port for your GitLab or other application instance, using your main identity (the key can live in ~/.ssh, or you can reference it manually with -i):
ssh -ND 3131 nkunes@XXX.XXX.1.146 -i ../../keys/XXX-ssh &
I then source this bash script in the shell where I intend to run git commands (notice the ProxyCommand usage instead of ProxyJump; this is the older way of doing this, yet it works well for me. Also notice that the port in 127.0.0.1:3131 should match the SOCKS port you opened above):
alias ssh="ssh -o ProxyCommand='/usr/bin/nc -X 4 -x 127.0.0.1:3131 %h %p'"
export GIT_SSH=~/Desktop/XXX-eng/ssh-access/ssh-proxy.sh
export PRE_SSH_ALIAS_PROMPT="$PS1"
export PS1="<< SSH ALIAS >>$PS1"
Where ssh-proxy.sh is defined as follows (again, match the SOCKS port, and possibly use ProxyJump if you want the modern implementation):
#!/bin/sh
ssh -o ProxyCommand='/usr/bin/nc -X 4 -x 127.0.0.1:3131 %h %p' "$@"
Then, you can clone normally using:
git clone git@XXX.XXX.XX.X:REPODIR/repo_name.git
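For anyone who wants the ProxyJump variant mentioned in the disclaimer above, an untested sketch in ~/.ssh/config might look like this (the alias gitlab-internal and the key path ~/.ssh/gitlab_key are made up; addresses and the jump account are as above):
Host gitlab-internal
    HostName XXX.XXX.XX.X
    User git
    IdentityFile ~/.ssh/gitlab_key
    ProxyJump nkunes@XXX.XXX.1.146
Then git clone gitlab-internal:REPODIR/repo_name.git should go through the jump host without any aliases or wrapper scripts.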
Hi, I am trying to set up DC/OS on Debian 8 Jessie. I have a working SSH connection with the SSH key, and I am able to log in without a password to all masters and agents (they are running CentOS 7). The strange thing is that it does not work when running --preflight; it says connection refused for all nodes.
TASK:
/usr/bin/ssh -oConnectTimeout=10 -oStrictHostKeyChecking=no -oUserKnownHostsFile=/dev/null -oBatchMode=yes -oPasswordAuthentication=no -p22 -i genconf/ssh_key -tt root@192.168.122.131 sudo rm -rf /opt/dcos_install_tmp
STDERR:
ssh: connect to host 192.168.122.131 port 22: Connection refused
STDOUT:
If I run this command in a terminal, it works just fine. So it fails only when run via bash dcos_generate_config.sh --preflight. Any idea what could be wrong, please?
In bash, -- denotes the end of command-line options, so what you probably need to do is:
bash dcos_generate_config.sh -- --preflight
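As a generic illustration of the -- convention (unrelated to DC/OS itself; install.log is a hypothetical file):
# without --, grep would try to parse --preflight as one of its own options;
# after --, it is treated as the literal search pattern
grep -- --preflight install.log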
I am using SSH Secure Shell to connect to a server. My connection is allowed to tunnel X11 connections, but when I execute the command, the display does not show up. I get the message:
couldn't connect to display "localhost:12.0"
I have an SSH server installed and running on my machine.
Remember: Both the client and the server have to allow X forwarding.
On the server look in /etc/ssh/sshd_config and make sure you have X11Forwarding yes. You will need to restart the service if you edit this file.
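For example, on a systemd-based server (a sketch; service names vary by distribution):
sudo systemctl restart sshd    # or: sudo service ssh restart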
On the client, look in /etc/ssh/ssh_config (your user ~/.ssh/config will override global settings, if you have created this file) and make sure you have ForwardX11 yes.
Alternatively, give the -X switch when you create your client connection, e.g. ssh -X user@host.
Oh, and of course, your client needs to be running an X server which you have authority to use! E.g. if you connect from Windows using PuTTY alone, it will never work, as Windows does not ship with an X server!
I figured it out. I needed an X server installed on my computer, not an SSH server. I installed Xming for that purpose, and now everything works as it should.
I work at an organization with stringent security requirements, sometimes excessively so. My project team is trying to create an SVN repository, and we are having difficulties setting one up to comply with both our needs and our security requirements.
Our IT department requires us to authenticate ourselves with two-factor authentication. Each developer has an RSA token that must be used to log in to the repository host machines via SSH. The value displayed on the token changes once per minute and each value can be used only once.
The developers require the capability to store passwords. This prevents us from using svn+ssh to log in to the repository. Since the RSA token changes once a minute, we can't store the SSH passwords. Worse, the RSA token would reduce us to one SVN operation each minute. This is flatly unacceptable, especially since we have scripts that chain multiple SVN operations together.
We attempted to compromise by opening an SSH tunnel with port forwarding. We would open up a tunnel using ssh user@hostmachine -L 3690:localhost:3690 to forward all SVN requests on our local machine to the secure machine, where an svnserve process was running. This meant we could log in with two-factor authentication, and then use a separate SVN username and password (which could be stored) with our utilities.
Unfortunately, we noticed that we didn't need the tunnel; port 3690 was reachable from any computer that could see the host. This is unacceptable to IT, and our sysadmin thinks that svnserve is the problem, so she is wondering if we have to go back to svn+ssh.
Is there any solution that works? Is our sysadmin correct? Is there an option on svnserve that will force it to listen only to traffic from localhost?
Use:
svnserve -dr /my/repo --listen-host 127.0.0.1
This way the service will only listen on the loopback interface. When you connect with ssh use:
ssh -L3690:127.0.0.1:3690 user@svnserver.mycompany.com
Also see:
vince@f12 ~ > svnserve --help
usage: svnserve [-d | -i | -t | -X] [options]
Valid options:
-d [--daemon] : daemon mode
-i [--inetd] : inetd mode
-t [--tunnel] : tunnel mode
-X [--listen-once] : listen-once mode (useful for debugging)
-r [--root] ARG : root of directory to serve
-R [--read-only] : force read only, overriding repository config file
--config-file ARG : read configuration from file ARG
--listen-port ARG : listen port
[mode: daemon, listen-once]
--listen-host ARG : listen hostname or IP address
[mode: daemon, listen-once]
-T [--threads] : use threads instead of fork [mode: daemon]
--foreground : run in foreground (useful for debugging)
[mode: daemon]
--log-file ARG : svnserve log file
--pid-file ARG : write server process ID to file ARG
[mode: daemon, listen-once]
--tunnel-user ARG : tunnel username (default is current uid's name)
[mode: tunnel]
-h [--help] : display this help
--version : show program version information
svnserve might have options to only listen on localhost, but this sounds like a firewall configuration issue.
If port 3690 isn't meant to accessible externally, it should be blocked by the firewall. It shouldn't matter whether svnserve or anything else is listening on that port. svnserve can then continue to listen on 3690, but will only receive connections from localhost because others are blocked by the firewall.
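For example, with iptables the block might look like this (a sketch; adapt to whatever firewall framework your distribution actually uses):
# allow loopback clients (including the SSH-forwarded connections), then drop everything else aimed at 3690
iptables -A INPUT -p tcp --dport 3690 -s 127.0.0.1 -j ACCEPT
iptables -A INPUT -p tcp --dport 3690 -j DROP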
I need to create a bash script which will connect to an FTP server, upload a file, and close the connection. Usually this would be an easy task, but I need to specify some specific proxy settings, which makes it difficult.
I can connect to the FTP server fine using a GUI client, e.g. FileZilla, with the following settings:
Proxy Settings
--------------
FTP Proxy : USER#HOST
Proxy Host: proxy.domain.com
Proxy User: blank
Proxy Pass: blank
FTP Settings
------------
Host : 200.200.200.200
Port : 21
User : foo
Pass : bar
What I can't seem to do is replicate these settings in a text-based FTP client, e.g. ftp, lftp, etc. Can anyone help with setting this script up?
Thanks in advance!
According to the docs, lftp should support the ftp_proxy environment variable, e.g.
ftp_proxy=ftp://proxy.domain.com lftp -c "cd /upload; put file" ftp://200.200.200.200
If that works, you can put
export ftp_proxy=ftp://proxy.domain.com
in your shell configuration files, or
set ftp:proxy=ftp://proxy.domain.com
in your ~/.lftprc.
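Putting it together, a minimal upload script might look like this (a sketch, using the host, credentials, and proxy from the question; the local file path is hypothetical, and I have not tested it against a USER@HOST-style proxy):
#!/bin/bash
# upload one file through the FTP proxy with lftp, then disconnect
lftp -c "
set ftp:proxy ftp://proxy.domain.com
open -u foo,bar ftp://200.200.200.200
put /path/to/uploadfile
bye
"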
Alternatively, try running the commands that your GUI FTP client is running, e.g.
upload.lftp
USER ...@...
PASS ...
PUT ...
And run it using -s:
lftp -s upload.lftp 200.200.200.200
Or try curl -T, or ncftpput.
Something like:
FTP_PROXY=ftp://proxy.domain.com curl -T uploadfile -u foo:bar ftp://200.200.200.200/myfile
might work.
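Similarly, an ncftpput attempt might look like this (a sketch; /upload is a hypothetical remote directory, and note that ncftp reads its proxy settings from ~/.ncftp/firewall rather than the environment, so the proxy would need to be configured there):
ncftpput -u foo -p bar 200.200.200.200 /upload uploadfile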