Only allow MVC Web API to be called from an IP address - security

Hi I'd like to restrict access to my MVC Web API app by IP address. I thought it might make a good security "layer". But I have some questions.
This API isn't protecting sensitive data; I'm thinking it would be a useful way to help deter someone trying to hack the API. I'm new to this and would like pragmatic best practices.
Both the API and the client are using SSL certs, so the IPs are static. But the users on the client will be unauthenticated public users of a website, and my team has no meaningful control over this.
What is the most reliable way of detecting the caller's IP in MVC?
How easy is it to spoof?
I'd prefer to do the code in an attribute rather than the web config, so that as a team we can lock the controllers down with differing mechanisms at a more granular level if the needs of the API change.
Or is this not a particularly useful approach? I'm open to alternatives.

Try to do it with CORS... you can choose which origins (hosts) can communicate with your back end... something like this,
in your Startup.cs:
// Requires the Microsoft.Owin.Cors NuGet package
using System.Threading.Tasks;
using System.Web.Cors;
using Microsoft.Owin.Cors;
using Owin;

public partial class Startup
{
    public void Configuration(IAppBuilder app)
    {
        var corsPolicy = new CorsPolicy
        {
            AllowAnyMethod = true,
            AllowAnyHeader = true,
            SupportsCredentials = true,
            Origins = { "http://www.example.com", "http://localhost:38091", "http://localhost:39372" }
        };
        app.UseCors(new CorsOptions
        {
            PolicyProvider = new CorsPolicyProvider
            {
                PolicyResolver = context => Task.FromResult(corsPolicy)
            }
        });
        ConfigureAuth(app);
    }
}

Related

What Node.js / React Native IP address has to be used in the endpoint when the app is on the Google Play Store?

So I made a React Native app with a Node.js backend, and in order to connect the Node.js backend to the React Native frontend I had to create an endpoint like this:
app.post("/send_mail", cors(), async (req, res) => {
let {text} = req.body
var transport = nodemailer.createTransport({
host: "smtp.mailtrap.io",
port: 2525,
auth: {
user: "usertoken",
pass: "password"
}
})
await transport.sendMail({
from: "email#email.com",
to: "email2#email.com",
subject: "message",
html: `<p>${text}</p>`
})
})
and in the React Native frontend I call the function like this:
const handleSend = async () => {
    try {
        await axios.post("http://192.168.0.104:3333/send_mail", { // localhost
            text: placeHolderLocationLatLon
        })
    } catch (error) {
        setHandleSetError(error)
    }
}
and it works fine in the local environment. But from the Google Play Store this IP address doesn't work, because it's my localhost IPv4 address. I tried to use localhost:3333, but that doesn't work either. I can't find anything describing what IP has to be used for the Google Play Store. Does anyone know how I should make this endpoint? Thank you.
You can't just host a service by yourself like that (typically your back end). Well, you can, but not with your knowledge level: it would be inefficient (as in you'd have to keep your computer up 24/7) and present security issues.
Does anyone know how I should make this endpoint?
Don't get me wrong, your endpoint is fine in itself. You're just lacking the networking part.
1) For testing purposes ONLY!
If this app is only for testing purposes on your end, and won't be part of a final product that'll be present on the Google Store, there's a way you can achieve this, called ngrok.
It takes your locally-running project and provides it with an HTTPS URL anyone can use to access it. I'll let you read the docs to find out how it works. (Whichever option you pick, the client has to stop hard-coding the LAN IP - see the sketch after this list.)
2) A somewhat viable solution (be extremely careful)
Since ngrok will provide a new URL every time you run it with your local project, it's not very reliable in the long term.
What you could do, if you have a static IP address, is register a domain name, link it to your IP address, and do all the shenanigans behind it (figuring out how to proxy incoming traffic to your machine and project, and so on). But honestly, it's way too cumbersome, and it implies that you have valuable knowledge when it comes to securing your own private network, which I doubt.
3) A long-lasting, viable solution
You could rent a preemptible machine on GCP and run your back-end service on it. Of course, you would still have to figure out some networking things and configs, but it's way more reliable than solutions 1 and 2 if your whole app aims to be public.
Google gives you free credit (I think it's like 200 or 250€, can't recall exactly) to begin with their stuff, and running a preemptible machine is really cheap.
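
Whichever of these options you pick, the client side ends up the same: the hard-coded LAN IP in the axios call has to become a configurable base URL that you swap per environment (an ngrok URL while testing, your real domain in production). A minimal sketch of that idea, where the module layout, names, and placeholder URL are assumptions rather than anything from the question:

// config.ts - hypothetical module; change the value per environment
export const API_BASE_URL = "https://your-tunnel-or-domain.example"

// mail.ts - same endpoint as in the question, minus the hard-coded LAN IP
import axios from "axios"
import { API_BASE_URL } from "./config"

export async function sendMail(text: string): Promise<void> {
    await axios.post(`${API_BASE_URL}/send_mail`, { text })
}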

NestJS WebSocket Gateway - how to get the full path for the initial connection?

I'm adding a socket.io "chat" implementation to our NestJS app, which currently serves a range of HTTP REST APIs. We have fairly complex tenant-based auth using guards for our REST APIs. Users can belong to one or many tenants, and they target a given tenant via the API URL, which can be either subdomain- or path-based, depending on the deployment environment, for example:
//Subdomain based
https://tenant1.api.server.com/endpoint
https://tenant2.api.server.com/endpoint
//Path based
https://api.server.com/tenant1/endpoint
https://api.server.com/tenant2/endpoint
This all works fine for REST APIs, allowing us to determine the intended tenant (and validate the user access to that tenant) within guards.
The new socket.io implementation is being exposed on the same port at endpoint "/socket", meaning that possible full paths for connection could be:
https://tenant1.api.server.com/socket
https://api.server.com/tenant1/socket
Ideally I want to validate the user (via JWT) and their access to the tenant during the connection of the websocket (and if they are not validated they get immediately disconnected). I have been struggling to implement this with guards, so I have done the JWT/user validation in the socket gateway, which works OK. For the tenant validation, as per the above, I need the FULL URL that was used for the connection, because I will be looking at either the subdomain OR the path, depending on the deployment. I can get the host from the client handshake headers, but cannot find any way to get at the path. Is there a way to get the full path either from the handshake, or perhaps from Nest? I think perhaps I am limited in what I have access to in the handleConnection method when implementing OnGatewayConnection.
Code so far:
import { Logger } from '@nestjs/common'
import {
    OnGatewayConnection,
    OnGatewayDisconnect,
    OnGatewayInit,
    WebSocketGateway,
    WebSocketServer,
} from '@nestjs/websockets'
import { Server, Socket } from 'socket.io'

@WebSocketGateway({
    namespace: 'socket',
    cors: {
        origin: '*',
    },
})
export class ChannelsGateway
    implements OnGatewayInit, OnGatewayConnection, OnGatewayDisconnect {
    @WebSocketServer() public server: Server

    // Init using separate socket service (allowing for other components to push messages)
    afterInit(server: Server) {
        this.socketService.socket = server
        Logger.debug('Socket.io initialized')
    }

    // Method for handling the client initial connection
    async handleConnection(client: Socket, ...args: any[]) {
        // This line gets me the host, such as tenant1.api.server.com, but without the path
        const host = client.handshake.headers.host
        // Get bearer token from authorization header and validate
        // Disconnect and return if not validated
        const bearerJwt = client.handshake.headers.authorization
        const decodedToken = await validateJwt(bearerJwt).catch(error => {
            client.disconnect()
        })
        if (!decodedToken) {
            return
        }
        // What can I use to get at the path, such as:
        // api.server.com/tenant1/socket
        // tenant1.api.server.com/socket
        // Then I can extract the "tenant1" with existing code and validate access
        // Else disconnect the client
        // Further code to establish a private room for the user, etc...
    }

    // other methods for receiving messages, etc...
}
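
One avenue that might be worth checking, though it is not confirmed anywhere in the question: socket.io's Handshake object exposes a url field (the path of the HTTP request that opened the connection) alongside headers and query, so a full URL can potentially be reassembled from the handshake alone. Whether that path still contains a path-based tenant prefix depends on how any proxy/rewrite in front of the app is configured, so treat this as a sketch to verify rather than an answer:

// Inside handleConnection(client: Socket, ...) - a sketch only; verify that
// handshake.url actually carries the tenant prefix in your deployment
const requestPath = client.handshake.url // path of the HTTP request that opened the socket
const fullUrl = `https://${client.handshake.headers.host}${requestPath}`
// fullUrl can then be fed to the existing subdomain/path tenant-extraction code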

Storing a token on the server side using NestJS

I have a nestjs application that consumes third party API for data. In order to use that third party API, I need to pass along an access token. This access token is application-wide and not attached to any one user.
What would be the best place to store such a token in NestJS, meeting the following requirements:
It must be available in the application and not per given user
It must not be exposed to the frontend application
It must work in a load balancer setup
I am looking at NestJS caching https://docs.nestjs.com/techniques/caching, but I am not sure whether that's the best practice, and if it is, whether I should use it with in-memory storage or something like Redis.
Thank you.
If you're working with load balancing, then in-memory solutions are dead on arrival: they will only affect one instance of your server, not all of them. Your best bet for speed and accessibility is Redis: save the token under a simple key and keep it alive from there (updating it as necessary). Just make sure all your instances connect to the same Redis instance, and that the instance can handle the load; that shouldn't be a problem, more of a callout.
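A minimal sketch of that idea, assuming the ioredis client and inventing the service shape, key name, and TTL handling (none of which come from the answer):

import { Injectable } from '@nestjs/common'
import Redis from 'ioredis'

// Hypothetical service illustrating the Redis-backed approach described above;
// the key name and TTL handling are assumptions, not part of the answer.
@Injectable()
export class TokenStore {
    private readonly redis = new Redis(process.env.REDIS_URL ?? 'redis://localhost:6379')

    async getToken(): Promise<string | null> {
        // Every instance behind the load balancer reads the same key
        return this.redis.get('thirdparty:access_token')
    }

    async setToken(token: string, ttlSeconds: number): Promise<void> {
        // Let Redis expire the key so a stale token is never served indefinitely
        await this.redis.set('thirdparty:access_token', token, 'EX', ttlSeconds)
    }
}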
I used a custom provider. Nest allows you to load async custom providers.
export const apiAuth = {
    provide: 'API_AUTH',
    useFactory: async (authService: AuthService) => {
        return await authService.createOrUpdateAccessToken()
    },
    inject: [AuthService]
}
and below is my API client.
import { Inject, Injectable } from '@nestjs/common'

@Injectable()
export class ApiClient {
    // 'API_AUTH' is the async custom provider defined above
    constructor(@Inject('API_AUTH') private auth: IAuth, private authService: AuthService) { }

    public async getApiClient(storeId: string): Promise<ApiClient> {
        // Refresh when the token is within 14400 s (4 h) of expiring
        if ((Date.now() - this.auth.createdAt.getTime()) > ((this.auth.expiresIn - 14400) * 1000)) {
            this.auth = await this.authService.createOrUpdateAccessToken()
        }
        // Note: the ApiClient returned here appears to be the third-party SDK's client,
        // which shares a name with this wrapper in the original post
        return new ApiClient(storeId, this.auth.accessToken);
    }
}
This way the token is requested from storage once and lives with the application; when it expires, it is re-generated and updated.
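For completeness, a custom provider like this has to be registered in a module's providers array so that Nest resolves the factory before injecting 'API_AUTH'. A minimal sketch of that wiring (the module and file names are assumptions):

import { Module } from '@nestjs/common'
import { AuthService } from './auth.service'
import { ApiClient } from './api.client'
import { apiAuth } from './api.auth'

// Hypothetical module wiring for the custom provider shown above
@Module({
    providers: [AuthService, apiAuth, ApiClient],
    exports: [ApiClient],
})
export class ApiModule {}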

Is it possible to use a Cloudflare Service Worker as a proxy web server?

This is just a beginner-level question looking for some advice. The following may misuse some key terms because of my lack of knowledge, but hopefully it delivers the key message to all of you:
Background:
My company is hosting a website on AWS. Everything works fine, except that it cannot be loaded in China because of the well-known Great Firewall, which blocks unregistered/unlicensed websites outside of China. The best solution is perhaps to host a server in China and get an ICP license approved by the Chinese government, but that will take time and many other considerations. So we are now looking for alternatives to let our customers in China read content from our site.
Main idea:
use a Cloudflare Worker to fetch HTTP resources from a given webpage first, and then send the HTTP content to users from the Worker (since Cloudflare is accessible in China)
Example:
Let Cloudflare's registered Service Worker URL be: sample.workers.dev
Target website content to serve: google.com
When a user tries to access this domain (sample.workers.dev), the service worker should try to load all the HTTP content, including images, scripts and CSS, from google.com in the backend, then return the HTTP content to the users directly.
This will work for my company's clients because we will usually generate a URL and send it over to the user by email or some other means, so we can send them a third-party URL that is accessible in China while actually loading content from our original website.
I tried all examples given by Cloudflare already: https://developers.cloudflare.com/workers/templates/
But so far no luck achieving exactly what I want.
Any thoughts?
Yes, it's possible. I remember someone from Cloudflare implemented it as an example.
Found it: https://github.com/lebinh/cloudflare-workers/blob/master/src/proxy.ts
However, Cloudflare Workers does support a streaming API now, IIRC, so there may be a better way to do it.
/**
 * HTTP Proxy to arbitrary URL with Cloudflare Worker.
 */

/**
 * Cloudflare Worker entrypoint
 */
if (typeof addEventListener === 'function') {
    addEventListener('fetch', (e: Event): void => {
        // work around as strict typescript check doesn't allow e to be of type FetchEvent
        const fe = e as FetchEvent
        fe.respondWith(proxyRequest(fe.request))
    });
}

async function proxyRequest(r: Request): Promise<Response> {
    const url = new URL(r.url)
    const prefix = '/worker/proxy/'
    if (url.pathname.startsWith(prefix)) {
        const remainingUrl = url.pathname.replace(new RegExp('^' + prefix), '')
        let targetUrl = decodeURIComponent(remainingUrl)
        if (!targetUrl.startsWith('http://') && !targetUrl.startsWith('https://')) {
            targetUrl = url.protocol + '//' + targetUrl
        }
        return fetch(targetUrl)
    } else {
        return new Response('Bad Request', {status: 400, statusText: 'Bad Request'})
    }
}

interface FetchEvent extends Event {
    request: Request;
    respondWith(r: Promise<Response> | Response): Promise<Response>;
}
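
For reference, this is roughly how a caller would go through such a proxy once the worker is deployed at the sample.workers.dev host mentioned in the question; the target URL is percent-encoded so it survives the decodeURIComponent in the worker (the target here is just a placeholder):

// Hypothetical call through the proxy worker above
async function fetchThroughProxy(target: string): Promise<string> {
    const proxied = `https://sample.workers.dev/worker/proxy/${encodeURIComponent(target)}`
    const res = await fetch(proxied)
    return res.text()
}

// e.g. fetchThroughProxy('https://www.example.com/')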

Unique configuration per vhost for Micro

I have a few Zeit micro services. This setup is a RESTful API for multiple frontends/domains/clients.
In my configs, which are spread throughout the apps, I need to differentiate between these clients. In my handlers I can set up a process.env.CLIENT_ID, for example, that my config handler can use to know which config to load. However, this would mean launching a new http/micro process for each requesting domain (or whatever method I use - info such as the client ID will probably come in a header) in order to maintain the process.env.CLIENT_ID throughout the request and not have it overwritten by another simultaneous request from another client.
So I have to have each microservice check the client ID, determine if it has already launched a process for that client and use that, or else launch a new one.
This seems messy, but I'm not sure how else to handle things. Passing the client ID around with code calls (i.e. getConfig(client, key)) is not practical in my situation and I would like to avoid that.
Options:
Pass client id around everywhere
Launch new process per host
?
Is there a better way or have I made a mistake in my assumptions?
If the process-per-client approach is the better way, I am wondering if there is an existing solution to manage this? I've looked at http-proxy, micro-cluster, etc. but none seem to provide a solution to this issue.
Well, I found this nice tool: https://github.com/othiym23/node-continuation-local-storage
// Micro handler
const { createNamespace } = require('continuation-local-storage')

let namespace = createNamespace('foo')

const handler = async (req, res) => {
    const clientId = req.headers['client-id'] // some header thing or host
    namespace.run(function () {
        namespace.set('clientId', clientId)
        someCode()
    })
}

// Some other file
const { getNamespace } = require('continuation-local-storage')

const someCode = () => {
    const namespace = getNamespace('foo')
    console.log(namespace.get('clientId'))
}
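
A note that isn't in the original answer: recent Node versions ship a built-in equivalent of this pattern, AsyncLocalStorage (from async_hooks), which avoids the extra dependency. A minimal sketch of the same idea using it (the header name is a placeholder, as above):

import { AsyncLocalStorage } from 'async_hooks'
import { IncomingMessage, ServerResponse } from 'http'

// Holds per-request context without passing clientId through every call
const clientContext = new AsyncLocalStorage<{ clientId: string }>()

// Micro handler
const handler = async (req: IncomingMessage, res: ServerResponse) => {
    const clientId = String(req.headers['client-id'] ?? '') // placeholder header name
    return clientContext.run({ clientId }, () => someCode())
}

// Some other file
const someCode = () => {
    const store = clientContext.getStore()
    console.log(store?.clientId)
}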
