I'm on a laptop (Ubuntu) on a network that uses an HTTP proxy (only HTTP connections are allowed).
When I run svn up against a URL like 'http://.....' everything works fine (the Google Chrome repository updates perfectly), but now I need to run svn up against a server with an 'svn://....' URL and I get connection refused.
I've set the proxy configuration in /etc/subversion/servers, but it doesn't help.
Does anyone have a suggestion or solution?
In /etc/subversion/servers you are setting http-proxy-host, which has nothing to do with svn:// URLs; those connect to a different server, usually running on port 3690 and started by the svnserve command.
If you have access to the server, you can set up svn+ssh:// as explained here.
Update: You could also try using connect-tunnel, which uses your HTTPS proxy server to tunnel connections:
connect-tunnel -P proxy.company.com:8080 -T 10234:svn.example.com:3690
Then you would use
svn checkout svn://localhost:10234/path/to/trunk
Ok, this should be really easy:
$ sudo vi /etc/subversion/servers
Edit the file:
[global]
http-proxy-host=my.proxy.com
http-proxy-port=3128
Save it, run svn again and it will work.
If you can get SSH to it, you can use an SSH port-forwarded SVN server.
Use SSH's -L flag (local forwarding; -R does the reverse direction) to make an SSH tunnel so that
127.0.0.1:3690 actually connects to remote:3690 over the SSH tunnel, and then you can use it via
svn co svn://127.0.0.1/....
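A minimal sketch of that tunnel (the user and host names below are placeholders):
ssh -N -L 3690:localhost:3690 user@svn.example.com
The -N flag keeps the session open without running a remote command; while it runs, the svn co command above goes through the tunnel.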
Okay, this topic is somewhat outdated, but since I found it via Google and have a solution, it might be interesting for someone:
Basically, this is (of course) not possible with every HTTP proxy, but it works with proxies that allow HTTP CONNECT to port 3690. The CONNECT method is what proxies use on port 443 to provide a way for secure HTTPS connections. If your administrator configures the proxy to allow CONNECT to port 3690, you can set up your local machine to establish a tunnel through the proxy.
I just needed to check out some files from svn.openwrt.org from within our company network. An easy way to create a tunnel is to add the following line to your /etc/hosts:
127.0.0.1 svn.openwrt.org
Afterwards, you can use socat to create a TCP tunnel to a local port:
while true; do socat tcp-listen:3690 proxy:proxy.at.your.company:svn.openwrt.org:3690; done
You should execute the command as root. It opens local port 3690 and, on each connection, creates a tunnel to svn.openwrt.org on the same port.
Just replace the port and server addresses to suit your own needs.
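If the proxy listens on a port other than socat's default of 8080, the proxyport option can be appended; a sketch, assuming a proxy on port 3128:
while true; do socat tcp-listen:3690 proxy:proxy.at.your.company:svn.openwrt.org:3690,proxyport=3128; done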
When you use an svn:// URI, it connects to port 3690 and probably won't use the HTTP proxy.
svn:// doesn't speak HTTP, therefore there's nothing an HTTP proxy could do.
Any reason why http:// doesn't work? Have you considered https://? If you really need svn://, you probably have to have port 3690 opened in your firewall.
If you're using the standard SVN installation, the svn:// connection works on TCP port 3690, so it's basically impossible to connect unless you change your network configuration (you said only HTTP traffic is allowed) or you install Apache with its SVN module (mod_dav_svn) on the server hosting your SVN repository.
I am having some struggles with proxy settings.
There is a proxy server running which I use, so I've set the proxy URLs in the environment based on this tutorial: http://www.gtkdb.de/index_36_2111.html
This works fine when I use the Chromium browser, but ping and apt-get still do not work.
Did I miss something?
I guess ping and the like don't use the proxy settings from the environment.
To solve your problem with apt, follow this thread: https://askubuntu.com/questions/109673/how-to-use-apt-get-via-http-proxy-like-this.
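In short, apt can be pointed at a proxy with a small config file such as /etc/apt/apt.conf.d/95proxies; a sketch, where the proxy address is a placeholder:
Acquire::http::Proxy "http://proxy.example.com:8080/";
Acquire::https::Proxy "http://proxy.example.com:8080/";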
Ping uses ICMP, not HTTP, HTTPS, or FTP, to do its job, so the proxy variables don't affect it.
If you want ping to work, you'll need to configure your machine's routing table to use the proxy machine as its gateway, and configure iptables on the proxy machine to NAT the traffic. To get an idea, follow this thread:
how to transmit traffic from a linux vpn server to another linux server?
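The NAT side on that gateway machine might look like this minimal sketch (the outbound interface name is an assumption):
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE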
Hope this helps.
The question might not be what you think it is from the title.
I have a Linux machine running a CentOS distribution. I have a certain script which sends HTTP requests but cannot be configured to use a proxy, for certain reasons.
What I'm looking for is to route only the HTTP requests (port 80) through a proxy connection, while other connections such as SSH keep working from the server's own IP.
Can this be done?
Thanks.
You can set the variable for a single command:
http_proxy="http://PROXY:proxyport" yourcommand
or export it and then do what you need:
export http_proxy="http://PROXY_IP:proxyport"
yum update
I am using the gorilla/mux package in Go, but there are some problems. The first is that I have no permission to use port 80 for my application, because I cannot run the application with sudo: $GOPATH is not set when using sudo.
Here is the error I get from my program:
$ go run app.go
2014/06/28 00:34:12 Listening...
2014/06/28 00:34:12 ListenAndServe: listen tcp :80: bind: permission denied
exit status 1
I am unsure whether it will even work once I fix the sudo problem, because Apache is already using port 80 and I am not sure whether both my app and Apache can "play nice" together.
Any advice on how to solve this would be great. Thank you.
Quoting elithar's comment,
You have two options: either turn off Apache (because only one service
can bind to a port), or (better!) use Apache's ProxyPass to proxy any
incoming requests to a specific Hostname to your Go server running on
port (e.g.) 8000. The second method is very popular, robust, and you
can use Apache to handle request logging and SSL for you.
Reverse Proxying
Using Apache on port 80 in this way is called a reverse proxy. It receives all incoming connections on port 80 (and/or port 443 for https) and passes them on, usually unencrypted, via internal localhost connections only, to your Go program running on whatever port you choose. 8000 and 8080 are often used. The traffic between Apache and your server is itself HTTP traffic.
Because your Go program does not run as root, it is unable to alter critical functions on the server. This gives you an extra degree of security: should your program ever contain security flaws, an attacker would gain only limited access.
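As an illustration, the Apache side might look like the following sketch (the hostname and backend port are assumptions, and mod_proxy plus mod_proxy_http must be enabled):
<VirtualHost *:80>
    ServerName example.com
    ProxyPass / http://127.0.0.1:8000/
    ProxyPassReverse / http://127.0.0.1:8000/
</VirtualHost>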
FastCGI
You can improve the overall performance of the reverse proxying by not using HTTP for the connection from Apache to the Go server. This is done via the FastCGI protocol, originally developed for shell, Perl and PHP scripts, but working well with Go too. To use this, you have to modify your Go server to listen using the fcgi API (the net/http/fcgi package). On the Apache side, a FastCGI proxy module such as mod_proxy_fcgi is also required. The traffic from Apache to your server uses a more compact format (not HTTP) and this puts less load on each end.
The choice of socket type is also open: instead of the usual TCP sockets, it is possible to use Unix sockets, which reduce the processing load even further. I haven't done this in Go myself, but the API supports the necessary bits (see a related question).
Nginx
Whilst all the above describes using Apache, there are other server products that can provide a reverse proxy too. The most notable is Nginx (Nginx reverse proxy example), which will give you small but useful performance and scalability advantages. If you have this option on your servers, it is worth the effort to learn and deploy.
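For reference, a minimal Nginx server block for the same reverse-proxy setup might look like this (the hostname and backend port are assumptions):
server {
    listen 80;
    server_name example.com;
    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
    }
}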
Based on this previous answer about environment variables, I was able to solve the sudo problem easily.
https://stackoverflow.com/a/8636711/2576956
sudo visudo
added these lines:
Defaults env_keep +="GOPATH"
Defaults env_keep +="GOROOT"
I am using Ubuntu 12.04, by the way. I think the previous answer about proxying port 80 is the correct choice, because after fixing the sudo issue I got this error about port 80 instead:
$ sudo go run app.go
2014/06/28 01:26:30 Listening...
2014/06/28 01:26:30 ListenAndServe: listen tcp :80: bind: address already in use
exit status 1
Meaning the sudo problem was fixed, but binding to port 80 still fails while another service (Apache) is already using it.
I have a list of IPs for servers on my network, and I need to check from within Node whether they are running SSH. How can I go about this?
It depends how deep you want this check to go. You could simply check whether port 22 is open, if you just want to see whether something is listening there; or you may have to actually connect to establish that it's really the SSH protocol, and maybe port-scan all ports for it.
There are SSH client libs like ssh, or more general network scanners like netty.
Do npm search ssh client, npm search port scanning or npm search nmap for an extended selection of libs that can do that.
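A rough sketch of the banner approach: an SSH server greets each connection with an identification string such as "SSH-2.0-...", so reading the first bytes from port 22 is usually enough (the host below is a placeholder):

var net = require('net');

function checkSsh(host, callback) {
  var socket = net.connect({ host: host, port: 22 });
  socket.setTimeout(3000); // give slow hosts a few seconds to answer
  socket.once('data', function (banner) {
    // SSH servers send an identification string like "SSH-2.0-OpenSSH_..."
    socket.destroy();
    callback(String(banner).indexOf('SSH-') === 0);
  });
  socket.on('error', function () { callback(false); });
  socket.on('timeout', function () { socket.destroy(); callback(false); });
}

checkSsh('192.0.2.10', function (isSsh) { console.log(isSsh); });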
I am setting up my first Node.js server on a cloud Linux node and I am fairly new to the details of Linux admin. (BTW I am not trying to use Apache at the same time.)
Everything is installed correctly, but I found that unless I log in as root, I am not able to listen on port 80 with Node. However, I would rather not run it as root for security reasons.
What is the best practice to:
Set good permissions / user for node so that it is secure / sandboxed?
Allow port 80 to be used within these constraints.
Start up node and run it automatically.
Handle log information sent to console.
Any other general maintenance and security concerns.
Should I be forwarding port 80 traffic to a different listening port?
Thanks
Port 80
What I do on my cloud instances is redirect port 80 to port 3000 with this command:
sudo iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 3000
Then I launch my Node.js on port 3000. Requests to port 80 will get mapped to port 3000.
You should also edit your /etc/rc.local file and add that line minus the sudo. That will add the redirect when the machine boots up. You don't need sudo in /etc/rc.local because the commands there are run as root when the system boots.
Logs
Use the forever module to launch your Node.js with. It will make sure that it restarts if it ever crashes and it will redirect console logs to a file.
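A typical invocation might look like this sketch (the log paths and script name are placeholders):
forever start -l /var/log/app/forever.log -o /var/log/app/out.log -e /var/log/app/err.log server.js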
Launch on Boot
Add your Node.js start script to the file you edited for port redirection, /etc/rc.local. That will run your Node.js launch script when the system starts.
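Put together, the relevant part of /etc/rc.local might look like this sketch (the user name and script path are placeholders):
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 3000
su - appuser -c "forever start /home/appuser/app/server.js"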
Digital Ocean & other VPS
This applies not only to Linode, but to DigitalOcean, AWS EC2 and other VPS providers as well. However, on RedHat-based systems /etc/rc.local is /etc/rc.d/rc.local.
Give Safe User Permission To Use Port 80
Remember, we do NOT want to run your applications as the root user, but there is a hitch: your safe user does not have permission to use the default HTTP port (80). Your goal is to be able to publish a website that visitors can use by navigating to an easy-to-use URL like http://ip/ with no port number attached.
Unfortunately, unless you sign on as root, you'll normally have to use a URL like http://ip:port, where the port number is above 1024.
A lot of people get stuck here, but the solution is easy. There are a few options, but this is the one I like. Type the following commands:
sudo apt-get install libcap2-bin
sudo setcap cap_net_bind_service=+ep `readlink -f \`which node\``
Now, when you tell a Node application that you want it to run on port 80, it will not complain.
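You can verify that the capability took effect with getcap, which ships in the same libcap2-bin package:
getcap $(readlink -f $(which node))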
Check this reference link
Drop root privileges after you bind to port 80 (or 443).
This allows port 80/443 to remain protected, while still preventing you from serving requests as root:
function drop_root() {
process.setgid('nobody');
process.setuid('nobody');
}
A full working example using the above function:
var process = require('process');
var http = require('http');
var server = http.createServer(function(req, res) {
res.write("Success!");
res.end();
});
server.listen(80, null, null, function() {
console.log('User ID:',process.getuid()+', Group ID:',process.getgid());
drop_root();
console.log('User ID:',process.getuid()+', Group ID:',process.getgid());
});
See more details at this full reference.
For port 80 (which was the original question), Daniel is exactly right. I recently moved to HTTPS and had to switch from iptables to a light nginx proxy managing the SSL certs. I found a useful answer, along with a gist by gabrielhpugliese, on how to handle that. Basically I:
Created an SSL Certificate Signing Request (CSR) via OpenSSL
openssl genrsa 2048 > private-key.pem
openssl req -new -key private-key.pem -out csr.pem
Got the actual cert from one of these places (I happened to use Comodo)
Installed nginx
Changed the location in /etc/nginx/conf.d/example_ssl.conf to
location / {
proxy_pass http://localhost:3000;
proxy_set_header X-Real-IP $remote_addr;
}
Formatted the cert for nginx by cat-ing the individual certs together, linked to it in my nginx example_ssl.conf file, uncommented the relevant lines and got rid of 'example' in the file name:
ssl_certificate /etc/nginx/ssl/cert_bundle.cert;
ssl_certificate_key /etc/nginx/ssl/private-key.pem;
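The cat step might look like this sketch (the certificate filenames are placeholders; nginx expects your server cert first, followed by any intermediates):
cat your_domain.crt intermediate.crt > /etc/nginx/ssl/cert_bundle.cert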
Hopefully that can save someone else some headaches. I'm sure there's a pure-Node way of doing this, but nginx was quick and it worked.