How to use connect-proxy on Cygwin - linux

I am trying to run a node.js program behind a corporate firewall. However, I am unable to tell node.js explicitly which proxy to use, and therefore all my external connections time out.
I read in a post that I could use connect-proxy to tunnel through an HTTP proxy, but I have no idea how to actually use it.
I want to run the following:
$ node program.js
using connect-proxy.
The only command I have managed to get working so far is this:
$ connect-proxy -H myproxy.com:8083 google.com
GET
HTTP/1.0 302 Found
Location: http://www.google.com/
...

Before going further, it is worth trying the environment variable that a number of other languages and tools support:
export http_proxy=http://proxyserver:port
Proxies often listen on port 8080, but check the JavaScript in the PAC file loaded by your browser to be sure.
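If the library you use honors that variable (the request package, for instance, does), nothing else is needed. Node's built-in http module does not read it by itself, so as a minimal sketch of what "going through the proxy" means at the HTTP level, here is the same request connect-proxy made above, written in node (all hosts and ports are placeholders from the question):
const http = require('http');

// Pick up the proxy from the environment, falling back to the example
// host from the question; both values are placeholders.
const proxy = new URL(process.env.http_proxy || 'http://myproxy.com:8083');

// Plain-HTTP proxying: connect to the proxy and request the absolute
// URI, which is exactly what the connect-proxy GET above did.
http.request({
  host: proxy.hostname,
  port: Number(proxy.port) || 80,
  method: 'GET',
  path: 'http://www.google.com/',
  headers: { Host: 'www.google.com' },
}, (res) => {
  console.log(res.statusCode, res.headers.location || '');
  res.resume();
}).end();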
If setting the variable produces a different result but still doesn't connect, you probably need to do NTLM authentication with the proxy, and the only way I know to do that is to run NTLMAPS before running your app. If you are really interested in getting this working transparently, then porting NTLMAPS to JavaScript should do the trick.
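For completeness: connect-proxy is designed to tunnel a single TCP connection on behalf of another program rather than to act as a system-wide proxy, which is why it cannot transparently cover a whole node.js process. Its usual idiom is as a proxy command for another tool, e.g. something like this for ssh (host names are placeholders):
$ ssh -o ProxyCommand="connect-proxy -H myproxy.com:8083 %h %p" user@remote-host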

Related

Is there any way to force a program/software to use the system proxy in Linux?

I am working on a Java project in IntelliJ IDEA (Linux) that needs to access websites through a proxy. I have a personal proxy subscription, and I can make requests through it programmatically with something like this:
import org.apache.http.HttpHost;
import org.apache.http.client.fluent.Executor;
import org.apache.http.client.fluent.Request;

// Route the request through the authenticated proxy.
HttpHost proxy = new HttpHost("PROXY_SERVER", PORT);
String res = Executor.newInstance()
        .auth(proxy, "USER_NAME", "PASSWORD")
        .execute(Request.Get("https://example.com").viaProxy(proxy))
        .returnContent().asString();
System.out.println(res);
However, if I set the proxy in /etc/environment with http(s)_proxy, or through the Ubuntu network proxy settings, my browsers (Chrome, Firefox) and some other system programs use the proxy when making requests, but IntelliJ IDEA doesn't follow the system proxy. I've tried to set it manually from the IDEA settings, but it doesn't work; the requests always go out from my current IP.
So I was curious whether it is possible to force a piece of software on Linux to use the system proxy somehow. I should mention that I have tried proxychains, but it didn't work; my server wasn't recognized. Any kind of help/suggestion will be appreciated, as I have little to no experience in networking.
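Not a complete answer, but for a JVM specifically the standard proxy system properties are usually the most direct lever; launching the program with them set pushes most Java HTTP clients through the proxy (PROXY_SERVER and PORT are the same placeholders as above, and the jar name is hypothetical):
java -Dhttp.proxyHost=PROXY_SERVER -Dhttp.proxyPort=PORT \
     -Dhttps.proxyHost=PROXY_SERVER -Dhttps.proxyPort=PORT \
     -jar your-app.jar
Note that these properties carry no credentials; an authenticated proxy additionally needs a java.net.Authenticator registered via Authenticator.setDefault(...). There is also -Djava.net.useSystemProxies=true, which asks the JVM to pick up the operating system's proxy settings at startup.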

Best way to run a Slack API script from a server

I created some internal tools that use the Slack API, and after testing them locally using ngrok and Serveo, I'd like to move them to a server so that other colleagues can use the script.
Do I still need ngrok, Serveo, or some other tunnel to forward the traffic to the Flask app? If not, how would that work? I tried using the server's IP and server name, but that didn't work.
In case a tunnel is needed, what would be the best free tool to use: ngrok, Serveo, or something else?
The problem with ngrok is that the free version expires after 8 hours, meaning that every time the session expires I need to update the Request URL, Options Load URL, and Slash Commands URLs in the Slack API UI, since the URL changes.
Serveo gives the option of keeping the same URL (serveo.net), although it didn't seem to be very stable.
I tried to refresh it by adding the command to a shell script:
ssh -R 80:localhost:5000 serveo.net
But I got this message:
Pseudo-terminal will not be allocated because stdin is not a terminal.
Host key verification failed.
Checking online, I tried the suggested solutions (mainly using -t -t), but that didn't work either; instead I got:
Could not resolve hostname 80:localhost:5000: Name or service not known
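Both messages point at ssh running non-interactively: without a terminal it cannot allocate a pseudo-terminal or prompt you to accept serveo.net's host key, and if the options get shuffled so that -R is separated from its argument, the tunnel spec 80:localhost:5000 gets parsed as a hostname. A sketch of a script-friendly invocation (the option values are illustrative):
ssh -T -o StrictHostKeyChecking=accept-new -o ServerAliveInterval=60 \
    -R 80:localhost:5000 serveo.net
-T explicitly disables pseudo-terminal allocation, accept-new records the host key on first contact instead of prompting (on older OpenSSH versions, pre-populate ~/.ssh/known_hosts with ssh-keyscan serveo.net instead), and the keep-alive pings help with the stability issues mentioned above.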

Linux: redirect localhost port to URL port

I need to redirect localhost:8080 to http://url:8080/.
Some background:
I am using Docker swarm stack services. One service (MAPS) runs a simple HTTP server that lists XML files on port 8080, and another service (WAS) runs WebSphere Application Server with a connector that uses these files; to be more precise, it reads a file maps.xml that lists the URLs of the other files as http://localhost:8080/<file-name>.xml.
I know Docker allows me to call on the service name and port within the services, so I can run curl http://MAPS:8080/ from inside my WAS service and it outputs my list of XML files.
However, this will not always be true. The prod team may change the port number they want to publish, or they might update the maps.xml file and forget to change localhost:8080 to MAPS:8080.
Is there a way to make any call to localhost:8080 get redirected to another URL, preferably using a configuration file? It also needs to be lightweight, since the WAS service is already quite heavy and I can't make the image much larger to deploy.
Solutions I tried:
iptables: installed it in the WAS service container, but when I tried to use it, it said my kernel was outdated.
tinyproxy: tried setting it up as a reverse proxy, but I couldn't make it work.
ncat with inetd: tried this as well, but it didn't work either.
I am NO expert so please excuse any noob mistakes I made. And thanks in advance!
It is generally not a good idea to redirect localhost to another location, as it might disrupt your local environment in surprising ways. Many packages depend on localhost being localhost :-)
It is possible, though, to add MAPS to the hosts file (/etc/hosts), giving it the address of the MAPS service.
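For instance, a line like the following inside the WAS container makes MAPS resolve even when Docker's DNS is not involved (the address is a placeholder for the MAPS service's IP):
10.0.0.5    MAPS
If a genuine localhost:8080 redirect turns out to be unavoidable, a small TCP forwarder is about as lightweight as it gets. A sketch using Node's built-in net module, assuming Node (or an equivalent runtime) is available in the WAS image:
const net = require('net');

// Forward every connection made to 127.0.0.1:8080 inside the
// container on to the MAPS service on port 8080.
net.createServer((client) => {
  const upstream = net.connect(8080, 'MAPS');
  client.pipe(upstream).pipe(client);
  client.on('error', () => upstream.destroy());
  upstream.on('error', () => client.destroy());
}).listen(8080, '127.0.0.1');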

How to capture full HTTP requests on Express so I can replay them against my localhost

I have a problem with an Express.js service running in production that I'm not able to replicate on my localhost. I have already tried replaying all the production URLs against my local machine, but on my machine everything works fine. So I suspect that the problem comes from the data in the HTTP headers (cookies, user agents, languages...).
So, is there a way (some Express module, or a sniffer that runs on Ubuntu) to easily create a dump on the server of each request with its complete headers, so I can later replay those exact requests against my localhost?
You can capture network packets with Wireshark (https://www.wireshark.org/), analyze them, and maybe find the difference between your local environment and the production one.
You can also try a proxy tool like Charles (https://www.charlesproxy.com/) or Fiddler (http://www.telerik.com/fiddler) to log your browser requests.
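Alternatively, a few lines of Express middleware can dump every request's complete headers server-side; a minimal sketch (requests.log is an arbitrary file name):
const express = require('express');
const fs = require('fs');

const app = express();

// Append method, URL, and the full header set of every incoming
// request to a log file, one JSON object per line, for later replay.
app.use((req, res, next) => {
  const entry = { method: req.method, url: req.originalUrl, headers: req.headers };
  fs.appendFile('requests.log', JSON.stringify(entry) + '\n', () => {});
  next();
});

app.listen(3000);
Each logged line can then be turned back into a request against localhost with curl or a small replay script.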

Debugging all HTTP[S] on node.js

I'm having fits accomplishing something, and after scouring Google & SO I'm throwing my hands up after a few days. I'm trying to do something that I think is pretty common: debug / examine all HTTP traffic while developing a node.js app.
On Windows it is as simple as firing up Fiddler, and I can see all HTTP & HTTPS traffic from all processes. But I've switched platforms over to OSX and am trying to make the same thing work.
I've tried using Charles & mitmproxy, but all I'm seeing is the traffic to my node.js app, with the responses. My node.js app calls external services, some using the popular request package (which I have seen how to set up), but also using other packages, like azure-storage. What's troubling me is that I can't get any of the debugging proxies to show me what the azure-storage package is sending to / receiving from the endpoints it calls.
Conceptually I think I get it... I have to tell these different things (like node.js, request & azure-storage) to go through the proxy each of these tools uses... but how can you do that without modifying their source? Can't you, like Fiddler does on Windows, just say "all traffic goes through this proxy"?
I'd use Fiddler on OSX, but it is currently not working, with no ETA in sight after talking to Telerik.
So the problem I was having was what I thought... in my specific instance, the module I was using to access Azure storage was not using the default proxy. I found a package (global-tunnel) that hijacks everything that uses the request package and forces it through a proxy. Now I saw stuff show up in the HTTP debuggers I was using.
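For reference, the setup amounts to a couple of lines at the very top of the app's entry point; a sketch using global-tunnel-ng (a maintained fork of global-tunnel), pointed at a local debugging proxy on port 8888, where Charles listens by default:
const globalTunnel = require('global-tunnel-ng');

// Route all http/https requests made by this process through the
// debugging proxy; must run before other modules start making calls.
globalTunnel.initialize({ host: '127.0.0.1', port: 8888 });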
The problem now is that when I try to reach an HTTPS endpoint through something like Charles, the proxy presents its own SSL certificate, which the Azure client library doesn't trust, so the connections are refused. Back to the drawing board...
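One way out of that on reasonably recent Node versions is to export the debugging proxy's root certificate (Charles offers this in its SSL proxying settings) and tell Node to trust it in addition to the bundled CAs; the path below is wherever you saved the exported certificate:
export NODE_EXTRA_CA_CERTS=/path/to/charles-root.pem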