I am trying to write a Node program (suggestions for other languages are welcome) that can parse and run a corporate proxy PAC script and return the appropriate proxy server to use programmatically.
Are there any existing solutions that can do this? (Or is this only possible through a browser?)
It seems like PAC files assume certain global functions exist in the execution context, such as:
shExpMatch()
myIpAddress() // interestingly, the Node.js ip package returns the true LAN DHCP-assigned IP instead of the VPN IP
The goal is to resolve the right proxy server each time a shell is launched (or not set one at all if not behind a proxy).
Any tips are greatly appreciated.
If you're sticking with Node, I'd recommend using something like pac-resolver for this.
const pac = require('pac-resolver');
const fetch = require('node-fetch');
const ip = require('ip');

fetch('http://<proxy_host>:<proxy_port>')
  .then(res => res.text())
  .then(body => {
    // Override myIpAddress to handle a bug where pinging 8.8.8.8 times out
    // see: https://github.com/TooTallNate/node-pac-resolver/issues/18
    const findProxy = pac(body, { sandbox: { myIpAddress: ip.address } });
    return findProxy('http://google.com/');
  })
  .then(proxy => console.log(proxy))
  .catch(err => console.error(err));
In any case, you can look at the repo to see which global functions need to be defined and how that would be done in JavaScript (a small illustration follows the list below).
If you can use other languages, look at pacparser.
The list of standard global functions, by the way, is here:
dnsDomainIs
shExpMatch
isInNet
myIpAddress
dnsResolve
isPlainHostName
localHostOrDomainIs
isResolvable
dnsDomainLevels
weekdayRange
dateRange
timeRange
alert
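As a rough illustration of what such a sandbox has to supply (a minimal sketch, not pac-resolver's actual source), two of these globals could be written in plain JavaScript like this:
function isPlainHostName(host) {
  // A plain host name contains no dots, e.g. "intranet"
  return host.indexOf('.') === -1;
}

function shExpMatch(str, shexp) {
  // Translate the shell expression into a regular expression:
  // escape regex metacharacters, then map the * and ? wildcards
  const re = shexp
    .replace(/[.+^${}()|[\]\\]/g, '\\$&')
    .replace(/\*/g, '.*')
    .replace(/\?/g, '.');
  return new RegExp('^' + re + '$').test(str);
}

// shExpMatch('proxy.example.com', '*.example.com') === true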
Related
So I made a React Native app with a Node.js backend, and in order to connect the Node.js backend to the React Native frontend I had to create an endpoint like this:
const express = require("express");
const cors = require("cors");
const nodemailer = require("nodemailer");

const app = express();
app.use(express.json()); // parse JSON request bodies

app.post("/send_mail", cors(), async (req, res) => {
  let { text } = req.body;
  const transport = nodemailer.createTransport({
    host: "smtp.mailtrap.io",
    port: 2525,
    auth: {
      user: "usertoken",
      pass: "password"
    }
  });
  await transport.sendMail({
    from: "email@email.com",
    to: "email2@email.com",
    subject: "message",
    html: `<p>${text}</p>`
  });
  res.sendStatus(200); // respond so the client request doesn't hang
});

app.listen(3333);
and in the React Native frontend I call a function like this:
const handleSend = async () => {
  try {
    await axios.post("http://192.168.0.104:3333/send_mail", { // localhost
      text: placeHolderLocationLatLon
    });
  } catch (error) {
    setHandleSetError(error);
  }
};
and it works fine in the local environment. But in the Google Play Store build, this IP address doesn't work because it's my localhost IPv4 address. I tried to use localhost:3333, but that doesn't work either. I can't find anything describing what IP has to be used for an app on the Google Play Store. Does anyone know how I should make this endpoint? Thank you.
You can't just host a service (typically your back-end) by yourself like that. Well, you can, but not easily at your current level of networking knowledge. It would be inefficient (as in you'd have to keep your computer up 24/7) and present security issues.
Does anyone know how I should make this endpoint?
Don't get me wrong, your endpoint is fine in itself. You're just lacking the networking part.
1) For testing purposes ONLY!
If this app is only for testing purposes on your end, and won't be part of a final product on the Google Play Store, there's a tool that lets you achieve this, called ngrok.
It takes your locally-running project and gives it an HTTPS URL anyone can use to access it. I'll let you read the docs to find out how it works.
2) A somewhat viable solution (be extremely careful)
Since ngrok will provide a new URL every time you run it with your local project, it's not very reliable in the long term.
What you could do, if you have a static IP address, is register a domain name, link it to your IP address, and do all the shenanigans behind it (figuring out how to proxy incoming traffic to your machine and project, and so on). But honestly, it's way too cumbersome, and it implies that you have solid knowledge when it comes to securing your own private network, which I doubt.
3) A long-lasting, viable solution
You could rent a preemptible machine on GCP and run your back-end service on it. Of course, you would still have to figure out some networking things and configs, but it's way more reliable than solutions 1 and 2 if your whole app aims to be public.
Google gives you free credit (I think it's around 200 or 250€, can't recall exactly) to get started with their services, and running a preemptible machine is really cheap.
I am developing bots for Telegram. I am from Iran, and the Telegram URL is blocked in my country, so I am forced to use VPN/proxy servers to access the Telegram API from my local dev machine.
But I have other apps running on my system that won't work through a VPN, so I am forced to use Proxifier, where I can define rules for the apps that need to go through a proxy.
But node.exe is ignoring these rules for some reason. I can see in NetLimiter that the connection is coming from C:\Program Files (x86)\nodejs\node.exe, but adding this path to Proxifier's rules has no effect; other apps like Telegram itself, Firefox, etc. work fine with these rules.
So has anyone managed to force node.exe to go through Proxifier?
I also tried to set up a proxy with PHP on my host, but none of the proxy scripts I found were able to handle the file uploads.
My last hope is to install some modules for Apache and use it as a proxy, or just install nginx...
I also tried https://github.com/krisives/proxysocket and https://github.com/TooTallNate/node-https-proxy-agent with no success; it just keeps throwing errors :(
OK, after hours of trying, I finally got this to work with Proxifier:
https://github.com/TooTallNate/node-https-proxy-agent
new HttpsProxyAgent('http://username:password@127.0.0.1:8080')
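For reference, a minimal usage sketch (assuming Proxifier, or any local HTTP proxy, is listening on 127.0.0.1:8080; <token> is a placeholder for your bot token, and the import style may differ between https-proxy-agent versions):
const https = require('https');
const HttpsProxyAgent = require('https-proxy-agent');

const agent = new HttpsProxyAgent('http://username:password@127.0.0.1:8080');

// Route the Telegram API call through the local proxy
https.get('https://api.telegram.org/bot<token>/getMe', { agent }, (res) => {
  res.on('data', (chunk) => process.stdout.write(chunk));
});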
Update:
This approach had its problems, so I created a small personal proxy server with node-http-proxy on my own server and connected to it:
process.env["NODE_TLS_REJECT_UNAUTHORIZED"] = 0;

const debug = require('debug')('app');
const http = require('http');
const httpProxy = require('http-proxy');

const proxy = httpProxy.createProxyServer({
  secure: false
});

proxy.on('error', function (e) {
  debug(e);
});

const server = http.createServer(function (req, res) {
  // You can define here your custom logic to handle the request
  // and then proxy the request.
  proxy.web(req, res, { target: 'https://api.telegram.org' });
});

server.listen(3333);
And then I simply redirected all the requests to this server.
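For instance, a request like the following (a sketch; <server_ip> and <token> are placeholders for my server's address and the bot token) ends up forwarded to https://api.telegram.org:
const http = require('http');

// The proxy server above listens on port 3333 and forwards to api.telegram.org
http.get('http://<server_ip>:3333/bot<token>/getMe', (res) => {
  res.on('data', (chunk) => process.stdout.write(chunk));
});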
I have a few Zeit micro services. This setup is a RESTful API for multiple frontends/domains/clients.
I need to differentiate between these clients in my configs, which are spread throughout the apps. I can, in my handlers, set up a process.env.CLIENT_ID, for example, that I can use in my config handler to know which config to load. However, this would mean launching a new http/micro process for each requesting domain (or whatever method I use; info such as the client ID will probably come in a header) in order to maintain process.env.CLIENT_ID throughout the request and not have it overwritten by another simultaneous request from another client.
So I would have to have each microservice check the client ID, determine if it has already launched a process for that client and use it, or else launch a new one.
This seems messy, but I'm not sure how else to handle things. Passing the client ID around with code calls (i.e. getConfig(client, key)) is not practical in my situation and I would like to avoid that.
Options:
Pass client id around everywhere
Launch new process per host
?
Is there a better way, or have I made a mistake in my assumptions?
If the process-per-client approach is the better way, I am wondering if there is an existing solution to manage this? I've looked at http-proxy, micro-cluster, etc., but none seem to provide a solution to this issue.
Well, I found this nice tool: https://github.com/othiym23/node-continuation-local-storage
// Micro handler
const { createNamespace } = require('continuation-local-storage')

let namespace = createNamespace('foo')

const handler = async (req, res) => {
  const clientId = req.headers.host // some header thing or host
  namespace.run(function () {
    namespace.set('clientId', clientId)
    someCode()
  })
}

// Some other file
const { getNamespace } = require('continuation-local-storage')

const someCode = () => {
  const namespace = getNamespace('foo')
  console.log(namespace.get('clientId'))
}
I'm writing an API using Node.js and Express, and my app is hosted on OpenShift's free plan.
I want to protect my routes from brute force. For example, if an IP sends more than 5 requests per second, then block it for 5 minutes. :)
There's nothing stopping you from implementing this in Node.js/Express directly, but this sort of thing is typically (and almost certainly more easily) handled by putting something like nginx or Apache httpd in front of your app to manage traffic.
This has the added benefit of allowing you to run the app entirely as an unprivileged user, because nginx (or whatever) will be binding to ports 80 and 443 (which requires administrative/superuser privileges) rather than your app. Plus you can easily get a bunch of other desirable features, like caching for static content.
nginx has a module specifically for this:
The ngx_http_limit_req_module module (0.7.21) is used to limit the request processing rate per a defined key, in particular, the processing rate of requests coming from a single IP address.
There are several packages on NPM that are dedicated to this, if you are using the Express framework:
express-rate-limiter
express-limiter
express-brute
These can be used for limiting by IP, but also by other information (e.g. by username, for failed login attempts).
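For example, with express-brute, a minimal sketch could look like this (it assumes an existing Express app object and uses the package's in-memory store; the /auth route name is just an illustration):
const ExpressBrute = require('express-brute');

// The in-memory store is fine for a single process; use a persistent store in production
const store = new ExpressBrute.MemoryStore();
const bruteforce = new ExpressBrute(store);

// Reject requests that hit this route too often from the same IP
app.post('/auth', bruteforce.prevent, (req, res) => {
  res.send('Success!');
});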
It is better to limit rates on a reverse proxy, load balancer, or any other entry point to your Node.js app.
However, that doesn't always fit the requirements.
The rate-limiter-flexible package has the block option you need:
const { RateLimiterMemory } = require('rate-limiter-flexible');

const opts = {
  points: 5,          // 5 points
  duration: 1,        // per second
  blockDuration: 300, // block for 5 minutes if more than points consumed
};

const rateLimiter = new RateLimiterMemory(opts);

const rateLimiterMiddleware = (req, res, next) => {
  // Consume 1 point for each request
  rateLimiter.consume(req.connection.remoteAddress)
    .then(() => {
      next();
    })
    .catch((rejRes) => {
      res.status(429).send('Too Many Requests');
    });
};

app.use(rateLimiterMiddleware);
You can also configure rate-limiter-flexible for any specific route; see the official Express docs about using middleware.
There are also options for cluster or distributed apps, and many other useful settings.
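For example, to protect only a single route instead of the whole app (a sketch; the /auth route is just an illustration, reusing rateLimiterMiddleware from above):
// Apply the limiter to this route only, instead of app.use(...)
app.post('/auth', rateLimiterMiddleware, (req, res) => {
  res.send('OK');
});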
On the client I can use window.location.hostname to get the hostname. How can I get the same on the server?
I need this to work behind an Apache proxy; unfortunately, Meteor.absoluteUrl() gives me localhost:3000. I also want it to work for different domains: I want one Meteor app that gives different results for different domains.
This question is somewhat related: Get hostname of current request in node.js Express
Meteor.absoluteUrl() works, given that your ROOT_URL environment variable is set correctly.
See the following docs: http://docs.meteor.com/#meteor_absoluteurl.
Meteor doesn't know the outside-facing address of the proxy that it's sitting behind, and the (virtual) domain that this proxy was accessed by would have to be forwarded to the Meteor app for it to do what you are asking for. I don't think this is currently supported.
According to this you can now get the Host header inside Meteor.publish() and Meteor.methods() calls by accessing:
this.connection.httpHeaders.host
Elsewhere in the application, it's probably difficult to determine the Host header that is being used to connect.
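For example, inside a method it could be read like this (a minimal sketch; the method name is just an illustration):
Meteor.methods({
  currentHost() {
    // this.connection is null when the method is called from server code
    return this.connection && this.connection.httpHeaders.host;
  }
});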
If you want the hostname of the server, as configured in /etc/hostname for example:
With meteorite:
$ mrt add npm
In your server code:
os = Npm.require('os')
hostname = os.hostname()
This has no connection to the Host header provided in the incoming request.
In any server-side Meteor file you can add:
if (Meteor.isServer) {
  Meteor.onConnection(function (result) {
    var hostname = result.httpHeaders.referer; // This returns http://foo.example.com
  });
}
You can fetch the host as an EnvironmentVariable from the DDP object in methods and publications. The Meteor accounts-base package fetches the userId this way.
const currentDomain = function () {
  const currentInvocation =
    DDP._CurrentMethodInvocation.get() || DDP._CurrentPublicationInvocation.get();
  return currentInvocation.connection.httpHeaders.host;
};
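Usage sketch (the method name is just an illustration): the helper above can then be called from inside any method or publication body:
Meteor.methods({
  whichDomain() {
    // Works because the method invocation is available as an environment variable here
    return currentDomain();
  }
});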