Binding actix to unix socket + running on multiple subdomains - rust

I've run into the following problem. I want to set up actix on a unix socket; that's my preferred way, but I can change it if really needed.
Let's say /etc/server.sock
My nginx server proxies traffic from a few subdomains to this socket.
Now, I want to serve those few subdomains, preferably from the same app, to avoid a mess.
I've tried setting up a tokio::net::UnixListener and passing it into the listen method.
I've also tried putting everything into one server, but then I realized I can't do that because the routes would collide in a few places.

With the help of a friend I managed to find a solution. I have no idea whether it's a good way to do it, but:
1. I've split my routes into separate files:
api (module)
\--- files for all subdomains
main.rs
2. In main.rs I've registered a service for every subdomain:
use actix_web::{App, HttpServer};
use std::io;

mod api;

#[actix_web::main]
async fn main() -> io::Result<()> {
    let server = HttpServer::new(|| {
        App::new()
            .service(api::a_handler())
            .service(api::b_handler())
            .service(api::c_handler())
            .service(api::d_handler())
    });
    server.bind("127.0.0.1:8000")? // I've also switched from unix sockets
        .run()
        .await
}
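A side note on that bind call, since the original goal was the unix socket at /etc/server.sock: if I'm not mistaken, HttpServer also has a bind_uds method on Unix targets, so the socket setup should still be possible. A minimal, untested sketch of that variant (the empty App is just a placeholder):

use actix_web::{App, HttpServer};
use std::io;

#[actix_web::main]
async fn main() -> io::Result<()> {
    HttpServer::new(|| App::new())
        // Unix-only alternative to bind(): listen on the socket from the question,
        // so nginx can keep proxying to unix:/etc/server.sock.
        .bind_uds("/etc/server.sock")?
        .run()
        .await
}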
3. Then, in each api file, I've set up the handlers and guards respectively:
pub fn a_handler() -> Scope {
    web::scope("").guard(guard::Header("host", "a.example.com"))
}
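For completeness, here is a minimal sketch of what one of those per-subdomain modules might look like, assuming actix-web 4. The index route and its response body are hypothetical, and guard::Host is used here as an alternative to matching the raw host header as in the snippet above:

use actix_web::{guard, web, HttpResponse, Responder, Scope};

// Hypothetical handler for the a.example.com subdomain.
async fn index() -> impl Responder {
    HttpResponse::Ok().body("hello from a.example.com")
}

pub fn a_handler() -> Scope {
    // The guard makes the whole scope match only requests for a.example.com,
    // so the other subdomain modules can reuse the same paths without colliding.
    web::scope("")
        .guard(guard::Host("a.example.com"))
        .route("/", web::get().to(index))
}

Each of the b/c/d modules would follow the same pattern with its own host and routes, and main.rs registers them with .service() exactly as in step 2.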

Related

How to listen to https endpoint with Yew & Trunk?

I am trying to develop a web app. For OAuth2 debugging I need the web app to respond over HTTPS. How can this be done while developing with Yew?
Currently I am serving it with:
trunk serve --proxy-backend=<backend-endpoint>
You can use tunnelling through an HTTP tunnel, which exposes your dev server to the internet over HTTPS.
I can recommend ngrok, but there are plenty of other services out there. E.g.:
ngrok http 8080
You can combine it with something like cargo-make to add a new command.
// Uses the ngrok and url crates; Command comes from the standard library.
use std::process::Command;

fn main() -> std::io::Result<()> {
    // Open an ngrok HTTP tunnel to the local dev server on port 8080.
    let tunnel = ngrok::builder()
        .http()
        .port(8080)
        .run()?;
    let public_url: url::Url = tunnel.http()?;

    // Point trunk's proxy backend at the public tunnel URL.
    Command::new("trunk")
        .arg("serve")
        .arg(format!("--proxy-backend={public_url}"))
        .output()
        .expect("failed to execute process");
    Ok(())
}

Is it possible to achieve a p2p connection without an external server if the public IP and listening port are known?

For context, my group and I are attempting to build a simple p2p messaging application in Rust with minimal library use. (We DID attempt using libp2p early on, but unfortunately it's incompatible with the key exchange algorithm we're using.)
The users on each end are required to send one another a public key through a third party messaging service before connecting, and we are able to encode the public IP address and the listening port of the program within this public key. (Meaning that the public IP and listening port of the other party will be known by the program at runtime.)
Since we are able to communicate the router's public IP address and the listening port of the program, would it be possible to establish a p2p connection without the need for an external server or port forwarding? If so, is there a simple solution we're not seeing that uses only the standard library? Currently we're attempting to check for incoming connections using TcpListener (see test code below); we are able to detect connections to localhost on the specified port, but the listener is not reachable over the network.
We're all college students who are new to networking, so any explanation for what technology we're looking for would be greatly appreciated. We've tried researching hole punching, but to our understanding that requires a third server with a known open port. We were hoping that by broadcasting the IP and listening port directly we could bypass this. We're operating on the school's network, so UPnP is disabled.
use std::net::{TcpListener, TcpStream};

// Simple test script for TcpListener - attempting to listen over the network instead of locally.

// Handle new connections - the test program just prints info and lets the connection close.
fn handle_connection(stream: TcpStream) {
    println!("New Client: {}", stream.peer_addr().unwrap());
}

fn main() -> std::io::Result<()> {
    // Listen on any IP and let the OS choose a port - works with localhost and the local address
    // shown by "ipconfig". Does not work with the public IP shown in web browsers.
    // (This is expected - no open port or specialized incoming communication yet.)
    let listener = TcpListener::bind("0.0.0.0:0").unwrap();

    // Show the listening port for testing purposes.
    println!("Listening on: {}", listener.local_addr().unwrap());

    // Accept all incoming connections.
    for stream in listener.incoming() {
        match stream {
            Ok(stream) => {
                handle_connection(stream);
            }
            Err(_) => {
                eprintln!("Connection Error");
            }
        }
    }
    Ok(())
}

Is it possible to 'share' functions between multiple Node.js express servers?

I have two separate Node.js Express servers running on different ports.
On port 5000 runs an Authentication API that handles registration, login, and session verification.
On port 6000 runs a Product API that handles the CRUD operations for the products.
When I create a new product I would like to verify the token found in the request header. So, instead of copying the session verification method over from the Authorization API, I imported it, but for some reason I get this error in the console when starting the app: Error: listen EADDRINUSE: address already in use :::5000
Authorization API exports the method:
export const verifySessionToken = async (sToken: string) => { ... }
Products API imports the Method:
import { verifySessionToken } from '../../../auth/common/verify-session';
If I comment out the import from above, the app runs again.
Is it even possible to import methods from node apps running on different ports?
If it is, what would be the correct way of doing it?
Million thanks!
First off, you import functions from modules. You don't import methods from servers. And, it's perfectly feasible to import the same function for use in two separate servers either in the same nodejs process or in different nodejs processes. The process of importing something from a module has absolutely nothing to do with a server or a port that server is running on. You're just importing a function reference from a file that you can call later.
You do need to make sure that your code is properly modularized so that the process of importing the function doesn't have any unintended side effects such as trying to start another server that you don't want to start. So, perhaps your function isn't properly modularized (put in its own sharable module)?
Is it even possible to import methods from node apps running on different ports? If it is, what would be the correct way of doing it?
Yes. It's very easy if you create your module properly and make sure that it doesn't have any unintended side effects. If you show us the entire module that you're importing from, we can probably help you identify what you are not doing correctly.
FYI, just put this:
export const verifySessionToken = async (sToken: string) => { ... }
in its own file where both places that want to use it can then import it.
I don't think you can run two servers sharing the same files. Why don't you just replicate your function in the other app?

Protecting express js server from brute force

I'm writing an API using Node.js and Express, and my app is hosted on OpenShift's free plan.
I want to protect my routes from brute force. For example, if an IP sends more than 5 requests/sec, block it for 5 minutes. :)
There's nothing stopping you from implementing this in Node.js/express directly, but this sort of thing is typically (and almost certainly more easily) handled by using something like nginx or Apache httpd to handle traffic to your app.
This has the added benefit of allowing you to run the app entirely as an unprivileged user, because nginx (or whatever) will be binding to ports 80 and 443 (which requires administrative/superuser privileges) rather than your app. Plus you can easily get a bunch of other desirable features, like caching for static content.
nginx has a module specifically for this:
The ngx_http_limit_req_module module (0.7.21) is used to limit the request processing rate per a defined key, in particular, the processing rate of requests coming from a single IP address.
There are several packages on NPM that are dedicated to this, if you are using the Express framework:
express-rate-limiter
express-limiter
express-brute
These can be used for limiting by ip, but also by other information (e.g. by username for failed login attempts).
It is better to limit rates on a reverse proxy, load balancer, or any other entry point to your Node.js app.
However, that doesn't always fit the requirements.
The rate-limiter-flexible package has the block option you need:
const { RateLimiterMemory } = require('rate-limiter-flexible');

const opts = {
    points: 5,          // 5 points
    duration: 1,        // per second
    blockDuration: 300, // block for 5 minutes if more than `points` are consumed
};

const rateLimiter = new RateLimiterMemory(opts);

const rateLimiterMiddleware = (req, res, next) => {
    // Consume 1 point for each request
    rateLimiter.consume(req.connection.remoteAddress)
        .then(() => {
            next();
        })
        .catch((rejRes) => {
            res.status(429).send('Too Many Requests');
        });
};

app.use(rateLimiterMiddleware);
You can configure rate-limiter-flexible for any exact route. See the official Express docs on using middleware.
There are also options for cluster or distributed apps, and many other useful ones.

heroku: route subdirectory to a second node.js app?

I have a heroku node.js app running under the domain foo.com. I want to proxy all urls beginning with foo.com/bar/ to a second node.js process - but I want the process to be controlled within the same heroku app. Is this possible?
If not, is it possible to proxy a subdirectory to a second heroku app? I haven't been able to find much control over how to do routing outside of the web app's entry point. That is, I can easily control routing within node.js using Express for example, but that doesn't let me proxy to a different app.
My last resort is simply using a subdomain instead of a subdirectory, but I'd like to see if a subdirectory is possible first. Thanks!
Edit: I had to solve my problem using http-proxy. I have two express servers listening on different ports and then a third externally facing server that routes to either of the two depending on the url. Not ideal of course, but I couldn't get anything else to work. The wrap-app2 approach described below had some url issues that I couldn't figure out.
Just create a new Express app and add a middleware to the main one that hands requests off to the secondary app when they come in on your desired path:
var app2 = express();

app2.use(function(req, res){
    res.send('Hey, I\'m another express server');
});

app.use('/foo', app2);
I haven't tried it on Heroku yet, but it runs in the same process and doesn't create any new TCP binding or process, so it will work. For reference, a modified plain Express template.
And if you really want another Express process handling the connection, you need to use cluster. Check the worker.send utility.
app.use('/foo', function(req, res){
    // You can send req too if you want.
    worker.send('foo', res);
});
This is possible. The most elegant way I could think of is clustering. One Heroku dyno contains four cores, so you can run four workers under one Node process.
Here is an introduction to clustering.
What you're looking at is initializing two Express apps (assuming you're using Express) and serving them from separate workers.
const cluster = require('cluster');

if (cluster.isMaster) {
    // let's make four child processes
    for (var i = 0; i < 4; i++) {
        if (i % 2 == 0) {
            // envForApp1/envForApp2 are env objects telling the fork which app it should run
            cluster.fork(envForApp1);
        } else {
            cluster.fork(envForApp2);
        }
    }
} else {
    // refer to NODE_ENV and see whether this should be your app1 or app2
    // which should be started. This is passed from the fork() before.
    app.listen(8080);
}
