Noob questions about SVN checkout and related network issues - Linux

We have a local server with SVN installed on it that we are using for development/testing purposes. We would like to check out the data from it to the live server, which is hosted somewhere out there.
The only way to do that which I could think of was to use "svn checkout" from the live server, right? This way we do not need to FTP the changes to it, which may cause problems if we forget to upload some of the changes. And if we find a problem we can always go back to a previous stable version, right? Correct me if I am wrong about any of this.
The problem is that our local server (Ubuntu) does not have an IP that is reachable from outside. We have a router from our ISP, but we cannot use that to access the local server from the live one. We are willing to ask the ISP to set up a second IP for the local server, but for security's sake they want to set up a separate machine with Windows and Windows-based security software (firewall - http://www.kerio.com/control/ - and antivirus) that will cost us a lot. Can we just set up a free firewall on the local server (Ubuntu, as I said) and solve the problem without spending additional money?
I hope I was clear.

It's always hard to comment without knowing the exact situation, but this sounds a bit crazy.
What you would usually do is set up port forwarding for one port to the local server. The server would then be reachable (for example) through 123.45.67.89:3690
That's a three-minute task to set up in a normal household router.
As long as the Ubuntu server is closed otherwise, and Subversion or whatever you are using for authentication is properly configured and up to date, this should not create security issues.
In any case, putting a Windows machine in between to act as a firewall sounds really unnecessary. Ubuntu comes with everything necessary to secure the setup properly.
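For example, once the router forwards port 3690 to the Ubuntu box and svnserve is listening there, the checkout from the live server is just a plain svn URL (123.45.67.89 is the example public IP from above; the repository path and target directory are made-up placeholders):
svn checkout svn://123.45.67.89:3690/repos/project /var/www/project
You would still want authentication enabled in svnserve.conf so that only your own accounts can read the repository.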

If the remote server has an ssh server, then you can use ssh forwarding.
From the internal svn server:
ssh -R 7711:localhost:3690 {REMOTE_SERVER}
7711 is an arbitrary port (you can use any free port on the remote system) that will be forwarded from the remote system to port 3690 (svn) on the svn server.
3690 is the port on the internal svn server that you want to talk to (via svn://).
If you are using subversion over http:// then use port 80 instead of 3690.
If you are using subversion over https:// then use port 443 instead of 3690.
After setting up the forward, you can do this on the remote system:
svn checkout {SCHEME}://localhost:7711/{PATH}
{SCHEME} is svn, http, https, etc.
{PATH} is the normal svn path you want to check out.
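Put together, a complete round trip might look like this (the user, host name and repository path are made-up placeholders; 7711 is just the arbitrary port from above):
ssh -R 7711:localhost:3690 deploy@live.example.com
# run the line above on the internal svn server; it opens a shell on the live server
svn checkout svn://localhost:7711/repos/trunk
# run the checkout inside that shell, i.e. on the live server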
Notes:
The forwarded traffic is tunneled through the ssh connection (on a separate "channel"), so it is also encrypted, which is a nice benefit.
By default, the remote end of the forward will listen on the loopback interface, so only processes on that system will be able to use the forwarded port.
As soon as you close the ssh session, the forwarded port will also close. It only lasts for the duration of the ssh connection.
ssh forwarding is very powerful. If you can ssh between two systems, then you can get around any sort of connection problem like this.
Do man ssh and read about the -L and -R options.
Useful links about ssh forwarding:
http://www.rzg.mpg.de/networkservices/ssh-tunnelling-port-forwarding
http://www.walkernews.net/2007/07/21/how-to-setup-ssh-port-forwarding-in-3-minutes/

Check if your ISP router provides port forwarding.
You should probably forward the ssh port (after ensuring that everyone's password is secure, or enforcing login with ssh key files), and use the svn+ssh protocol to access your repository.
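With the ssh port forwarded to the local server, the checkout from the live server could then look roughly like this (the user, public IP and repository path are placeholders, and the repository is assumed to live at that path on the local server's filesystem):
svn checkout svn+ssh://devuser@203.0.113.5/var/svn/project
Everything is tunneled over ssh, so port 22 is the only one that has to be opened on the router.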

You should be able to open up and forward a single port (3690 by default) on your existing IP to the local server, as pointed out by Pekka. This depends on your router, and your ability to access the configuration interface on the router.
Instead of having to deal with SSH and worry about people trying to access your local server from anywhere, you could set up a firewall to only allow incoming traffic from your single remote server. Depending on the router setup, you could simply use the built-in firewall on the local server. It would still be advisable to have some svn authentication, though.
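On Ubuntu that can be a couple of ufw rules; a minimal sketch, assuming plain svnserve on port 3690 and placeholder addresses for the LAN and the live server:
sudo ufw default deny incoming
sudo ufw allow from 192.168.0.0/24 to any port 22 proto tcp   # keep LAN ssh access (adjust to your subnet)
sudo ufw allow from 203.0.113.5 to any port 3690 proto tcp    # only the live server may reach svnserve
sudo ufw enable
Everything else stays closed, without buying any extra hardware or software.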
The SSH forwarding method described by kanaka avoids the whole issue of remote access to the local machine, but it requires you to run the forwarding command from the local server every time the remote server needs to reach svn.

Related

Access Router From Internet

I am aware of how to access my router under normal circumstances (simply entering your public IP address); however, I have forwarded a few ports to a web server that I have set up. Ports 22, 80, and 8080 are all forwarded (for different reasons), and my public IP is set up through a DNS system.
Now when I attempt to access my router settings (through my public IP) it redirects me to my website. I tried entering:
PU:BL:IC:IP:8080
and
PU:BL:IC:IP:80
with no success. I did attempt to disable my web server (which I still have access to) and that also failed. Is there any way around this without having to go home and change the settings manually? I have DMZ disabled, if that's any help.
You have forwarded port 22, which is usually the SSH port. There are three ways to access your router from SSH:
Use SSH port forwarding to poke a hole through the router and access your router's admin interface from the local computer. To do this in OpenSSH from the command line, you would use the option -L 12345:router-ip:80. In PuTTY, you would use the Connection/SSH/Tunnels category to add a local forwarded port with source 12345 and destination router-ip:80. Then you can access your admin interface from your local machine by browsing to http://localhost:12345. If your router uses a different port than 80, change that in the examples above. If you want to use a different local port than 12345, you may change that as well.
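For example, with OpenSSH from the command line (192.168.1.1 is an assumed LAN address for the router; substitute your own, and use whatever user and public address you normally ssh to):
ssh -L 12345:192.168.1.1:80 user@your.public.ip
# then browse to http://localhost:12345 on the machine you ran ssh from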
Use a text-mode browser, such as lynx or elinks, from the SSH connection. This is the simplest to set up, but using modern web apps in text-mode browsers can be difficult or impossible.
If you have an X server running at your current location, use SSH's X11 forwarding to run a graphical browser. Use the -X option for OpenSSH at the command line, or check the X11 forwarding box in Connection/SSH/X11 in PuTTY.

Getting "I won't open a connection to" when connecting to FTP server from Google Compute Engine

I ssh'ed into my Google Compute Engine VM and want to ftp to another server from there. It asked for my username and password, and I could log in without a problem. But when I do ls or get, I receive this error:
500 I won't open a connection to 10.240.XX.XX (only to XX.XX.XX.XX)
ftp: bind: Address already in use
That 10.240.XX.XX is the internal IP address I see in the ifconfig output.
How can I transfer files from another server using FTP?
System: Debian 7
You are using the active mode of FTP to connect to a server running Pure-FTPd. In active mode, the server has to connect back to the client to open a data transfer connection (for file transfers or directory listings). For that, the client sends its IP address to the FTP server in the PORT command.
If the FTP server is outside of the GCE private network, it obviously cannot connect back to the client machine, as the machine is behind a firewall and NAT.
And Pure-FTPd actually checks explicitly that the IP address in the PORT command matches the client IP address of the FTP control connection. It won't match if the client sends its internal IP address within the GCE network. In that case, the Pure-FTPd server rejects the transfer outright (without even trying to connect) with the error message you are getting:
I won't open a connection to ... (only to ...)
(where the first ... is the IP address provided by the client in the PORT command [the local address within the GCE private network], and the second ... is the external [NATed] IP address of the client, as known by the server).
Even if the client reported the external [NATed] address in the PORT command, it still would not work, as the connection attempt would not get past the NAT and firewall.
For this reason, the passive FTP mode exists, in which the client connects to the server to open the data transfer connection. Hardly anyone uses the active mode nowadays.
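To make the difference concrete, this is roughly what goes over the FTP control connection in each mode (the addresses and ports are made-up; FTP encodes the port number as two bytes, so 150,201 means 150*256+201 = 38601):
PORT 10,240,0,2,150,201
(active mode: the client asks the server to connect back to 10.240.0.2:38601, which fails behind NAT)
PASV
227 Entering Passive Mode (203,0,113,10,195,80)
(passive mode: the server tells the client to connect out to 203.0.113.10:50000, which passes through NAT fine)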
See (my article) FTP connection modes for details about the modes.
So, switch to the passive mode. How this is done is client-specific.
In most common *nix ftp command-line clients, use the -p command-line switch, though the passive mode is used by default anyway:
-p   Use passive mode for data transfers. Allows use of ftp in environments where a firewall prevents connections from the outside world back to the client machine. Requires that the ftp server support the PASV command. This is the default now for all clients (ftp and pftp) due to security concerns using the PORT transfer mode. The flag is kept for compatibility only and has no effect anymore.
Some clients also support the passive command.
If you are on Windows, you cannot use the built-in command-line ftp.exe client, as it does not support the passive mode at all. You have to install a third-party client. See How to use passive FTP mode in Windows command prompt?
Enable passive mode in your FTP client. If you have already connected, type:
ftp> passive
Passive mode on.
You are currently using FTP in passive mode.
If you use the WSL2 Linux subsystem on Windows 10, use:
pftp
If you use the PsPad editor and have the same issue, try enabling passive mode in your connection settings.

Accessing a server as localhost?

I use ssh keys to access a server at, let's say, 200.200.200.200. It works fine. How can I access that server on my host as 127.0.0.1?
I have tried my best but couldn't make it work.
You normally do this via port forwarding: you forward the remote port (the one on the server) that you are interested in to your local machine. Then you can access it via 127.0.0.1:
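For instance, assuming the service you care about is a web server on port 80 of the remote machine (the user name and the local port 8080 are placeholders):
ssh -L 8080:localhost:80 user@200.200.200.200
# now http://127.0.0.1:8080 on your machine reaches port 80 on the server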
Example tutorial:
https://help.ubuntu.com/community/SSH/OpenSSH/PortForwarding
In PuTTY it is also straightforward:
http://www.cs.uu.nl/technical/services/ssh/putty/puttyfw.html
You could also modify your local hosts file to point to this server, but that often causes hiccups with local services.

linux command to connect to another server using hostname and port number

What is the Linux command to connect to another server using a host name and port number?
How do I connect to another server using only a host name and port number and then check if an existing process is running? The only way I see it working is to log in to the server and run the ps command. But is there a way to do it without logging in directly to the other server, connecting only with a host name and port number, and then checking the running process?
If you just want to try an arbitrary connection to a given host/port combination, you could try one of nmap, telnet or nc (netcat).
Note that you can't necessarily determine from the outside whether or not a process is running - it might be listening on that port but simply ignore anything it sees over it. To really be sure, you will need to run ps or netstat (or similar) via ssh.
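For example, a quick probe from the outside plus a definitive check over ssh might look like this (the host, user and port are placeholders):
nc -zv app-server.example.com 8080                                # only tells you whether the port accepts connections
ssh admin@app-server.example.com 'netstat -tlnp | grep :8080'     # shows which process is actually listening
netstat -tlnp only shows process names for sockets you own unless you run it as root; ss -tlnp is the modern equivalent.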
If you want to use SSH from e.g. a script or, more generally, without typing in login information, then you will want to use public key authentication. Ubuntu has some good documentation on how to set this up, and it's very much applicable to other distributions as well: https://help.ubuntu.com/community/SSH/OpenSSH/Keys.
If you have no access to the server you're trying to list processes on at all, then I'm afraid there isn't a way to list running processes remotely (besides remote tools like nmap and so on, as mentioned earlier - you can always probe public ports without authentication [although you might make people angry if you do this to servers you don't own]). This is a feature, not a problem.
telnet can connect to most services. With it you can make sure the port is open and see the hello message (if any). nc is a bit more low-level.
eri@eri-macro ~ $ telnet smtp.yandex.ru 25
Trying 87.250.250.38...
Connected to smtp.yandex.ru.
Escape character is '^]'.
220 smtp16.mail.yandex.net ESMTP (Want to use Yandex.Mail for your domain? Visit http://pdd.yandex.ru)
helo
501 5.5.4 HELO requires domain address.
HELO ya.ru
250 smtp16.mail.yandex.net
MAIL FROM: <someusername@somecompany.ru>
502 5.5.2 Syntax error, command unrecognized.
If it is a plain-text protocol, you can talk with the service from the keyboard. If the connection is secured, try openssl.
openssl s_client -quiet -connect www.google.com:443
depth=1 /C=ZA/O=Thawte Consulting (Pty) Ltd./CN=Thawte SGC CA
verify error:num=20:unable to get local issuer certificate
verify return:0
GET /
<HTML><HEAD>
If the protocol is unknown, you may just see a lot of garbage characters, or only the Connected to ... message.
Try this:
ssh <YOUR_HOST_NAME> 'ps auxwww'
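If you only care about one particular process, you can filter the output in the same command (the user, host and process name here are placeholders):
ssh admin@app-server.example.com "ps auxwww | grep '[n]ginx'"
# the [n] bracket trick keeps the grep command itself out of the results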
Like Dark Falcon said in the comments, you need a protocol to communicate with the server; a port alone is useless in this case.
By default on unix (and unix like) servers, ssh is the way to go.
Remote shell with this command. The example cats a file on the remote machine:
rsh host port 'cat remotefile' >> localfile
host and port are self-explanatory
remotefile: name of a file (in the home directory) on the machine you are remotely logging in to
localfile: name of the local file to append the output to
Use monitoring software (like Nagios). It watches your processes, sensors, load and whatever else you configure it to watch. It continuously stores logs. It alerts you by email/SMS/Jabber if something fails. You can access it with a browser or via an HTTP API.

Mercurial hg clone error - "abort: error: Name or service not known"

I have installed the latest hg package available for Fedora Linux. However, hg clone reports an error.
hg clone http://localmachine001:8000/ repository
reports:
"abort: error: Name or service not known"
localmachine001 is a computer within the local network. I can ping it from my Linux box without any problems. I can also use the same http address and browse the existing code. However, hg clone does not work.
If I execute the same command from my Macintosh machine, I can easily clone the repository.
Some Internet resources recommend editing the .hgrc file and adding a proxy to it:
[http_proxy]
host=proxy:8080
I have tried that without any success. Also, I assume that a proxy is not needed in this case, since the hg server machine is on my local network.
Can anyone recommend what I should do, or how I could track down the problem?
Thank you in advance.
Running hg as root solved the problem for me, but I still don't know why.
The problem is (likely) that your http proxy is not able to:
Resolve localmachine
Reach your (or localmachine's) local IP, even if it could resolve 'localmachine' to your correct local address.
First, make sure nothing on your side (iptables / NAT / firewall) is preventing egress or ingress on the proxy port. If it works when you're root, that's the clue - work backwards from there.
It's also conceivable that your proxy is mangling the responses from the remote hg server enough to confuse Mercurial but not your browser. In either case, it's best to just go around the proxy if the hg server is on the local (localhost/LAN) network.
Fortunately, the [http_proxy] directive supports bypassing the proxy for certain host names, which is ideal for dealing with stuff on the same side of a NAT, or hosts that only exist on one machine (e.g. resolved via /etc/hosts). This saves the pain of having to edit .hgrc every time you need to change the behavior.
See the documentation, or simply make your .hgrc look something like this:
[http_proxy]
host=proxy:8080
no=localmachine,192.168.1.123,192.168.1.234,...,...
The operative directive of course being no. I'm not sure if you can use wildcards when specifying the hosts (I don't use the proxy feature, so I have no way of testing that, and it's not specified in the documentation). You might try experimenting with that, e.g. 192.168.1.*, and let us know if that works as well.
Anyway, for the terminally lazy (or people in a rather big hurry), the related section of the documentation linked above:
http_proxy
Used to access web-based Mercurial repositories through a HTTP proxy.
host
Host name and (optional) port of the proxy server, for example "myproxy:8000".
no
Optional. Comma-separated list of host names that should bypass the proxy.
passwd
Optional. Password to authenticate with at the proxy server.
user
Optional. User name to authenticate with at the proxy server.
This happened to me when my DNS name server settings weren't configured. Try to ping the remote repository host and try to ping some well-known host e.g. google.com. If it doesn't work, fix your DNS settings.
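A quick way to check that (the host names are just the ones from the question):
ping -c 1 localmachine001      # can the repository host be resolved and reached?
ping -c 1 google.com           # can anything be resolved at all?
nslookup localmachine001       # which name server answers, if any?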
The error is because of DNS; add entries to /etc/resolv.conf.
$ cat /etc/resolv.conf
search xyz.com abc.com       # search domains
nameserver 10.192.160.12     # DNS 1
nameserver 10.193.180.23     # DNS 2
This lets you reach the local machine by name, and hence the clone will succeed.
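If you cannot change the DNS settings, a plain /etc/hosts entry achieves the same for a single host (the IP address below is a placeholder for the repository machine's address):
echo "192.168.0.4  localmachine001" | sudo tee -a /etc/hosts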
Clone and the web interface use exactly the same mechanism, so it's very odd that you can see the repo at http://localmachine001:8000/ but you can't clone from it.
What about trying to go to that machine by IP address? Something like hg clone http://192.168.0.4:8000 and see what happens?
Any more detail with --debug?
I faced the same issue. To resolve it, I just disconnected my laptop from the internet and reconnected again.
Thank you, happy coding ;)
