Node.js online app with intranet DB

What I want to build is an application that sits online and is used by different groups that each have their own intranet. Now, because of a stupid security policy, the data can't sit outside the intranet. How would you go about building an app that is still online, so you can push updates to everyone at once, but has a DB on each intranet's server?
My initial plan is to use Node.js and MongoDB.

If your db really has to be on-site, and your app really has to be off-site, then probably your only option is to set up a secure connection from your app to the on-site db and pretend it's hosted locally to the app. This may or may not violate the security policies. You could in theory lock it down pretty well with a VPN between the two networks. But this is not for the faint of heart, performance will suffer, and it still has security issues. It also means a bit of work for every site.
If the only reason you want it to be "online" is to push updates, as you stated, then you'll do better installing the app on-premise and having it poll a central server for notifications about new versions, download updates to itself, and install them automatically. Once you've built this, a new installation requires no new work.
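To sketch what that self-updating loop could look like in Node.js (the update URL and the JSON response shape here are assumptions for illustration, not a real service):

    // updater.js - polls a central server for new versions (sketch)
    const https = require('https');

    const CURRENT_VERSION = '1.0.0'; // version of the locally installed app
    const UPDATE_URL = 'https://updates.example.com/latest'; // hypothetical endpoint

    function checkForUpdate() {
      https.get(UPDATE_URL, res => {
        let body = '';
        res.on('data', chunk => (body += chunk));
        res.on('end', () => {
          try {
            const info = JSON.parse(body); // assumed shape: { version, downloadUrl }
            if (info.version !== CURRENT_VERSION) {
              // here you would download info.downloadUrl, verify a signature,
              // swap the app files and restart the process
              console.log('new version available:', info.version);
            }
          } catch (err) {
            console.error('bad update response:', err.message);
          }
        });
      }).on('error', err => console.error('update check failed:', err.message));
    }

    checkForUpdate();
    setInterval(checkForUpdate, 60 * 60 * 1000); // poll the central server hourly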

I'm facing a similar problem right now, so here's my take on it; it's not really mongo or node specific.
Put a db and a simple RESTful server on each client's intranet. The servers can be exactly the same.
Put a routing facade accessible from the internet that redirects requests to the appropriate server based on the URL, i.e. http://facade/server/resource becomes a request to http://server/resource.
Configure the facade so that a request to http://facade/resource goes to each server, retrieves the results, and returns all of them in some aggregated form.
Obviously there are more details to take into account, like permissions (can everybody publish to each server? If not, who can?), but the general idea is there.
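A bare-bones version of that routing facade in Node.js, using only the standard http module (the server registry, hostnames, and ports are invented for the example; the fan-out aggregation case and host-header rewriting are left out):

    // facade.js - routes http://facade/serverName/resource to the matching
    // intranet server (sketch)
    const http = require('http');

    const servers = { // hypothetical registry of per-client servers
      siteA: 'http://10.0.1.10:3000',
      siteB: 'http://10.0.2.10:3000',
    };

    http.createServer((req, res) => {
      const [, name, ...rest] = req.url.split('/');
      const target = servers[name];
      if (!target) {
        res.statusCode = 404;
        return res.end('unknown server');
      }
      // forward the request as-is and stream the upstream response back
      const upstream = http.request(target + '/' + rest.join('/'), {
        method: req.method,
        headers: req.headers,
      }, proxied => {
        res.writeHead(proxied.statusCode, proxied.headers);
        proxied.pipe(res);
      });
      upstream.on('error', () => { res.statusCode = 502; res.end(); });
      req.pipe(upstream);
    }).listen(8080);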

Related

Can I use server-side Blazor for production?

Yes, I know it is ready for production, but I am worried that if I use it in production I may have proxy problems or other issues. My plan is to start a Blazor server-side project and deploy it in IIS. Do I need any setting changes in IIS? Has anyone of you used Blazor server-side in production, and did you face any problems while deploying?
I don’t have the reputation to comment. Server-side Blazor is phenomenal for production. Many companies have been using it for a long time. Deployment is the same as for any asp.net core app. Check out www.csharpacademy.com
I’ve also been receiving more contact from companies interested in converting apps to Blazor server-side from asp.net and WebForms.
A couple of gotchas with server-side Blazor:
If your server goes down even for a millisecond, every client is dead; they will need to refresh to get a new connection. This doesn’t happen often in today’s world, but results may vary depending on the use case.
I’ve seen with csharpacademy that if you leave the webpage open in a tab on a mobile phone for hours and then come back to it after other tasks, the page is dead and a refresh is needed.
UI latency is more noticeable for server-side. If you have a server in the US and clients in other countries, they may see more delay. Again, check out the latency on csharpacademy; it’s hosted in the US and I notice zero latency with the majority of clicks/interactions.
One last comment:
If you’re unsure whether to use server-side or client-side, I’d encourage you to build your app with flexibility in mind. Create a Razor component library and put all your components/logic in there so that you can share the lib between a client-side and a server-side app.
Cheers!

Is ngrok safe to use or can it be compromised?

Is ngrok a safe tool to use? I was reading a tutorial which recommended using ngrok to test API responses for outside services that need to connect to my endpoints.
There is no source code available for version 2.0, considering it started as an open-source project in 2014. I am suspicious of any code that opens a tunnel to my localhost from the cloud. Pretty scary stuff, especially without source code!
It opens up a tunnel to your dev machine, which is partially secured by obscurity (a hard-to-guess subdomain), and can be further secured by requiring a password. But you're still opening yourself up to ngrok itself, and the company is completely opaque (no address, no employees, no business name, no LinkedIn presence; all I can find is that it has 1-10 employees and is private; I'm not even sure what country it's based in). On top of that, the code is not open-sourced. There's no reason to think they're not legit, but there's not a lot of information available to build trust.
You may be able to use ngrok and other local tunnel services with more security by encrypting the traffic. See https://security.stackexchange.com/questions/177280/end-to-end-encryption-for-localtunnel-ngrok-setup/177357#177357 for more information.
I found a good rating, but vacuous information, here:
http://www.scamadviser.com/is-ngrok.com-a-fake-site.html
The kicker for me is
https://developer.atlassian.com/blog/2015/05/secure-localhost-tunnels-with-ngrok/
where the Atlassian folks recommend it highly.
I think I am going to use it.
If anyone is concerned about compromising their development environment, you can use Docker. There are many ngrok/Docker projects, but here is the one I chose: https://github.com/gtriggiano/ngrok-tunnel
For macOS, use "TARGET_HOST=docker.for.mac.localhost"
They now offer a service where you locally run only ssh, with no need to run any of their code on your machine.
You run something like:

    ssh -R 80:localhost:8501 tunnel.us.ngrok.com http

This connects to one of their hosts and forwards connections they receive back to your machine and the service you run on localhost:8501.
This seems secure to me; the only thing is that you don't know what information they collect and who is connecting to your exposed service. They print all connections, but it's their binary that does this, and someone might well listen in without you noticing. You can check connections on your end, but you cannot be sure who it is that connects.
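If the service you expose on localhost:8501 happens to be a Node.js app, checking connections on your end is one listener on the server object (a sketch; note that through the tunnel, remoteAddress will be the tunnel host rather than the end client, which is exactly the limitation described above):

    // log every TCP connection to the locally exposed service (sketch)
    const http = require('http');

    const server = http.createServer((req, res) => {
      res.end('hello'); // your actual app logic goes here
    });

    // fires once per TCP connection, before any HTTP parsing happens
    server.on('connection', socket => {
      console.log('connection from', socket.remoteAddress + ':' + socket.remotePort);
    });

    server.listen(8501);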
Ngrok is a convenient and highly secure utility for creating tunnels to locally hosted applications via a reverse proxy - in other words, a utility for publishing locally hosted applications on the web. Simply put, it gives any locally hosted application a publicly accessible web URL, e.g. a Spring Boot or Node.js based web application, a webhook for a chat application, etc.

Is there any reason why a dev server should be accessible from the internet?

This is a very generic question that popped into my mind. The reason is that I came across a website's dev server which leaked sensitive information about a database connection due to an error. I was stunned at first, and now I wonder why someone would put a development server out on the internet and make it accessible to everyone.
For me there is no reason for doing this.
But it certainly did not happen by accident that a company created a subdomain (dev.example.com) and pushed development code to it. So what could be the reason to ignore such a high security risk?
A quick search did not bring up any information about this. I'm interested in any further readings about this specific topic.
Thank you in advance
There is no reason for your dev servers to be accessible by the general public.
As a customer, I just had an experience with a private-chef site where I spent time interacting with their dev server because it had managed to get crawled by Bing. Everything was the same as the live site, but I got increasingly frustrated because paying a deposit failed to authorise. The customer support team had no idea I was on the wrong site either; the only difference was the URL. My e-mail address is now in their test system, sending me spam every night when they do a test run.
Some options for you to consider, assuming you don't want to change the code on the page:
IP Whitelisting is the bare minimum
Have a separate login page that devs can use that redirects to the dev site with the correct auth token - bonus points for telling stray users that this is a test site and the live site is at https://.....
Use a robots.txt to make sure you don't get indexed (see the example after this list)
Hide it all behind a VNET - this really isn't an issue anymore with VPNs or services like Bastion.
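For the robots.txt item above, the whole file can be two lines; keep in mind it only deters well-behaved crawlers and is not a security control on its own:

    User-agent: *
    Disallow: /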
Also consider the following so your devs/testers don't accidentally use the wrong site:
Have a dev css to make it obvious it's a test system (this assumes you do visual testing later in your pipeline)
Use a banner to make it clear this is a dev site
Note that this applies to a dev server. If you are using ringed/preview/progressive deployment, then those environments should work just as well as the live site, because they are the live site.
It's extremely common for a development environment, or any "lower level" environment for that matter, to be exposed to the public internet. Today, especially with more and more companies working in the public cloud and having remote team members, it's far more productive to have your development team working, and UAT done, without the need to set up a VPN connection or a faster, more expensive direct connection to the cloud from your company's on-premise network.
It's important to mention that exposing to the public internet does not mean that you shouldn't have some kind of HTTP authentication in these environments that hides the details of your website. You can also use a firewall with an IP address whitelist. This is still very important so you don't expose your product and lose a possible competitive advantage. It's also important because lower-level environments tend to be more error-prone, and important details about the inner workings of your application may accidentally show up.
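If the dev site happens to be a Node.js/Express app, that combination of an IP whitelist plus HTTP Basic auth is a few lines of middleware (a sketch; the address and credentials are placeholders, and in production you'd compare credentials in constant time and keep them out of the source):

    // guard.js - IP whitelist + HTTP Basic auth in front of a dev site (sketch)
    const express = require('express');
    const app = express();

    const ALLOWED_IPS = ['203.0.113.7']; // hypothetical office address
    const USER = 'dev', PASS = 'secret'; // placeholder credentials

    app.use((req, res, next) => {
      if (!ALLOWED_IPS.includes(req.ip)) return res.status(403).end();
      const header = req.headers.authorization || '';
      const decoded = Buffer.from(header.split(' ')[1] || '', 'base64').toString();
      const [user, pass] = decoded.split(':');
      if (user === USER && pass === PASS) return next();
      res.set('WWW-Authenticate', 'Basic realm="dev"').status(401).end();
    });

    app.get('/', (req, res) => res.send('dev site'));
    app.listen(3000);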

Removing a Web Front-End server from Farm (Load Balancing)

I am currently working on a project where we have developed a portal on SharePoint. Currently we have two servers using load balancing. We're experiencing a lot of difficulties connected to this, so we are thinking about removing one of the web front-end servers from the farm.
Could this cause any kind of problems that you can think of? I want to be sure before I recommend this to our client. Anything you can think of would be great; pros of doing this are appreciated as well.
The load balancing was agreed on from the beginning of the project, before we came in as consultants.
(I know this could be posted on SharePoint.StackExchange as well, but this could be general knowledge for anyone else too.)
Since "two servers" is not a good idea anyway (you'd normally create at minimum a three server farm - two load balanced web front-ends and one indexing/job server), you can easily merge them into one server. Steps would be like this:
- enable all the services on the server which stays there
- remove the other server from "web front-end" role
- uninstall sharepoint from the other server
This might require recreation of your shared services provider if you are hosting some of the SSP things on the server you are removing.

Accessing data in internal production databases from a web server in DMZ

I'm working on an external web site (in DMZ) that needs to get data from our internal production database.
All of the designs that I have come up with are rejected because the network department will not allow a connection of any sort (WCF, Oracle, etc.) to come inside from the DMZ.
The suggestions that have come from the networking side generally fall under two categories -
1) Export the required data to a server in the DMZ, and somehow get modified/inserted records back in eventually, or
2) Poll from inside, continually asking a service in the DMZ whether it has any requests that need to be serviced.
I'm averse to suggestion 1 because I don't like the idea of a database sitting in the DMZ. Option 2 seems like a ridiculous amount of extra complication for the nature of what's being done.
Are these the only legitimate solutions? Is there an obvious solution I'm missing? Is the "No connections in from DMZ" decree practical?
Edit: One line I'm constantly hearing is that "no large company allows a web site to connect inside to get live production data; that's why they send confirmation emails". Is that really how it works?
I'm sorry, but your networking department is on crack or something like that - they clearly do not understand what the purpose of a DMZ is. To summarize, there are three "areas": the big, bad outside world; your pure and virginal inside world; and the well-known, trusted, safe DMZ.
The rules are:
Connections from outside can only get to hosts in the DMZ, and on specific ports (80, 443, etc);
Connections from the outside to the inside are blocked absolutely;
Connections from the inside to either the DMZ or the outside are fine and dandy;
Only hosts in the DMZ may establish connections to the inside, and again, only on well known and permitted ports.
Point four is the one they haven't grasped - the "no connections from the DMZ" policy is misguided.
Ask them "How does our email system work then?" I assume you have a corporate mail server, maybe exchange, and individuals have clients that connect to it. Ask them to explain how your corporate email, with access to internet email, works and is compliant with their policy.
Sorry, it doesn't really give you an answer.
I am a security architect at a Fortune 50 financial firm. We had these same conversations. I don't agree with your network group. I understand their angst, and I understand that they would like a better solution, but most places don't opt for the better choices (due to ignorance on their part [i.e. the network guys, not you]).
Two options if they are hard set on this:
You can use a SQL proxy solution like GreenSQL (I don't work for them, I just know of them); they are at greensql dot com.
The approach they refer to, which most "large orgs" use, is a tiered web model, where you have a front-end web server (accessed by the public at large), a mid-tier (an application or services layer where the actual processing occurs), and a database tier. The mid-tier is the only thing that can talk to the database tier. In my opinion this model is optimal for most large orgs. BUT, that being said, most large orgs will run into a vendor-provided product that does not support a middle tier, or they developed without a middle tier and the transition requires development resources they don't have to spare, or there is plainly no priority at some companies to go that route.
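As a minimal illustration of that split (here in Node.js; the route, port, and query are invented), the point is that the public front end calls the mid-tier over HTTP and only the mid-tier holds database credentials:

    // mid-tier.js - the only process allowed to talk to the database tier (sketch)
    const http = require('http');

    // stand-in for a real query against the database tier
    function queryPublicOrders(callback) {
      callback(null, [{ id: 1, status: 'shipped' }]);
    }

    http.createServer((req, res) => {
      if (req.method === 'GET' && req.url === '/api/orders') {
        queryPublicOrders((err, rows) => {
          res.writeHead(200, { 'Content-Type': 'application/json' });
          res.end(JSON.stringify(rows));
        });
      } else {
        res.statusCode = 404;
        res.end();
      }
    }).listen(4000); // the front-end web server calls this; it never sees the db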
It's a gray area with no solid right or wrong in that regard, so if they are speaking in terms of finality then they are clearly wrong. I applaud their zeal; as a security professional I understand where they are coming from. BUT we have to enable the business to function securely. That's the challenge, and the gauntlet I always throw down to myself: how can I deliver what my customers (my developers, my admins, my DBAs, business users) want (within reason - and if I tell someone no, I always try to offer an alternative that meets most of their needs)?
Honestly, it should be an open conversation. Here's where I think you can get some room: ask them to threat-model the risk they are looking to mitigate. Ask them to offer alternative solutions that enable your web apps to function. If they are saying they can't talk, then put the onus on them to provide a solution; if they can't, then you default to it working. Point out that you would open connections from the DMZ to the db ONLY on the approved ports. Let them know that the DMZ is for offering external services, and external services are not good without internal data for anything more than, potentially, file transfer solutions.
Just my two cents; I hope this comment helps. And try to go easy on my security brethren. We have some less experienced, misguided members in our flock who cling to some old ways of doing things. As the world evolves, the threat evolves, and so does our approach to mitigation.
Why don't you replicate your database servers? You can ensure that the connection is made from the internal servers to the external servers, and not the other way around.
One way is to use the MS Sync Framework - you can build a simple Windows service that synchronizes changes from the internal database to your external database (which can reside on a separate db server) and then use that in your public-facing website. The advantage is that your sync logic can filter out sensitive data and keep only the things that are really necessary. And since the entire control of the data stays on your internal servers (PUSH data out instead of pull), I don't think IT will have an issue with that.
The connection formed is never inbound - it is outbound - which means no ports need to be opened.
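The answer above uses the MS Sync Framework; as a language-neutral sketch of the same push-out pattern (here in Node.js, with the endpoint, schedule, and data shape invented for illustration):

    // push-sync.js - runs INSIDE the network; periodically pushes filtered
    // rows out to the DMZ copy, so every connection originates internally (sketch)
    const https = require('https');

    function fetchChangedRows(since, callback) {
      // stand-in for a query against the internal database, with sensitive
      // columns already filtered out by the sync logic
      callback(null, [{ id: 42, name: 'widget', publicPrice: 9.99 }]);
    }

    function pushToDmz(rows) {
      const body = JSON.stringify(rows);
      const req = https.request('https://dmz-host.example.com/import', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
      }, res => console.log('dmz responded', res.statusCode));
      req.on('error', err => console.error('push failed:', err.message));
      req.end(body);
    }

    let lastRun = 0;
    setInterval(() => {
      fetchChangedRows(lastRun, (err, rows) => {
        if (!err && rows.length) pushToDmz(rows);
        lastRun = Date.now();
      });
    }, 60 * 1000); // outbound-only, so no inbound ports need to be opened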
I'm mostly with Ken Ray on this; however, there appears to be some missing information. Let's see if I get this right:
You have a web application.
Part of that web application needs to display data from a different production server (not the one that normally backs your site).
The data you want/need is handled by a completely different application internally.
This data is critical to the normal flow of your business and only a limited set needs to be available to the outside world.
If I'm on track, then I would have to say that I agree with your IT department and I wouldn't let you directly access that server either.
Just take option 1. Have the production server export the data you need to a commonly accessible drop location. Have the other db server (the one in the DMZ) pick up the data and import it on a regular basis. Finally, have your web app talk ONLY to the db server in the DMZ.
Given how a lot of people build sites these days, I would also be loath to just open a sql port from the DMZ web server in question to the internal database. Quite frankly, I could be convinced to open the connection if I was assured that: 1. you only used stored procs to access the data you need; 2. the account information used to access the database was encrypted and completely restricted to only running those procs; 3. those procs had zero dynamic sql and were limited to selects; 4. your code was built right.
A regular IT person would probably not be qualified to answer all of those questions. And if this database came from a third party, I would bet you might lose support if you started accessing it from outside its normal application.
Before talking about your particular problem I want to deal with the Update that you provided.
I haven't worked for a "large" company - though large is hard to judge without context - but I have built my share of web applications for the non-profit and the university department that I used to work for. In both situations I always connected to the production DB on the internal network from the web server in the DMZ. I am pretty sure many large companies do this too; think, for example, of how SharePoint's architecture is set up: back-end indexing, database, etc. servers, which are connected to by front-facing web servers located in the DMZ.
Also, the practice of sending confirmation e-mails - I believe you are referring to the confirmations sent when you register for a site - doesn't usually deal with security. They are more a method to verify that a user has entered a valid e-mail address.
Now, with that out of the way, let us look at your problem. Unfortunately, other than the two solutions you presented, I can't think of any other way to do this, though there are some things you might want to think about:
Solution 1:
Depending on the sensitivity of the data that you need to work with, extracting it onto a server in the DMZ - whether using a service or some sort of automatic synchronization software - goes against basic security common sense. What you have done is move the data from a server behind a firewall to one in front of it. They might as well just let you get to the internal db server from the DMZ.
Solution 2:
I am no networking expert, so please correct me if I am wrong, but a polling mechanism still requires some sort of communication back from the web server to inform the database server that it needs some data, which means a port needs to be open - and again, you might as well tell them to let you get to the internal database without the hassle, because you haven't really added any additional security with this method.
So, I hope this at least provides you with some arguments for being allowed to access the data directly. To me it seems like there are many misconceptions in your network department about what a secure database-backed web application architecture should look like.
Here's what you could do... it's a bit of a stretch, but it should work...
Write a service that sits on the server in the DMZ. It will listen on three ports, A, B, and C (pick whatever port numbers make sense). I'll call this the DMZ tunnel app.
Write another service that lives anywhere on the internal network. It will connect to the DMZ tunnel app on port B. Once this connection is established, the DMZ tunnel app no longer needs to listen on port B. This is the "control connection".
When something connects to port A of the DMZ tunnel app, it will send a request over the control connection for a new DB/whatever connection. The internal tunnel app will respond by connecting to the internal resource. Once this connection is established, it will connect back to the DMZ tunnel app on port C.
After possibly verifying some tokens (this part is up to you), the DMZ tunnel app will then forward data back and forth between the connections it received on ports A and C. You will effectively have a transparent TCP proxy created from two services running in the DMZ and on the internal network.
And, for the best part, once this is done you can explain what you did to your IT department and watch their faces as they realize that you did not violate the letter of their security policy, but you are still being productive. I tell you, they will hate that.
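A rough Node.js sketch of both halves (the ports, hosts, and the CONNECT message are invented; token verification, message framing, and error handling are left out - a real control channel would frame its messages instead of assuming one 'data' event per request):

    // Both halves shown in one file for illustration; in reality the "internal"
    // half runs on the internal network and only ever dials OUT.
    const net = require('net');

    const PORT_A = 9000, PORT_B = 9001, PORT_C = 9002; // illustrative ports
    const DB_PORT = 5432;                              // hypothetical internal db

    // ---- DMZ tunnel app ----
    let control = null; // the single control connection from inside
    const waiting = []; // port-A clients waiting to be paired with a port-C socket

    net.createServer(sock => { control = sock; }).listen(PORT_B);

    net.createServer(dataSock => {
      const client = waiting.shift();
      if (!client) return dataSock.destroy();
      client.pipe(dataSock).pipe(client); // splice the sockets: transparent TCP proxy
    }).listen(PORT_C);

    net.createServer(client => {
      if (!control) return client.destroy();
      waiting.push(client);
      control.write('CONNECT\n'); // token verification would go here
    }).listen(PORT_A);

    // ---- internal tunnel app ----
    const ctl = net.connect(PORT_B, 'localhost'); // in reality: the DMZ host
    ctl.on('data', () => {
      // for each CONNECT, bridge the internal resource to port C on the DMZ box
      const db = net.connect(DB_PORT, 'localhost');
      const back = net.connect(PORT_C, 'localhost');
      db.pipe(back).pipe(db);
    });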
If no development solution can be applied because of system engineering restrictions in the DMZ, then give them the ball.
Put your website on the intranet, and tell them: 'Now I need inbound HTTP:80 or HTTPS:443 connections to that application. Set up whatever you want: reverse proxies, ISA Server, protocol breaks, SSL... I will adapt my application if necessary.'
About ISA: I guess they already have one if you have Exchange with external connections.
Lots of companies choose this solution when a resource needs to be shared between the intranet and the public.
Setting up a specific intranet network with strict security rules is the best way to make administration, integration, and deployment easier. What is easier is well known; what is known is mastered: fewer security breaches.
More and more system engineers (like mine) prefer to maintain an intranet network with a small 'security breach' like HTTP than to open other protocols and ports.
By the way, if they knew WCF services, they would have accepted this solution. It is the most secure solution if well designed.
Personally, I use these two methods: TCP (HTTP or not) services and ISA Server.
