I'm a beginner with Node.js and Mongoose. I created a small Node.js app, pushed the code to GitHub, and deployed it to Heroku. After deploying, I checked my website online and it worked fine; I performed some operations on the live site without any problems.
About an hour later I opened my link again and it showed: "This site is blocked due to a security threat that was discovered by the Cisco Umbrella security researchers." Why? What happened, and what steps should I follow? Can anyone help me get my app working online again?
It would really help me out; any kind of help is welcome. Thanks in advance.
Cisco Umbrella blocks sites it deems a security threat in order to protect people accessing them. This is normally because the site has been associated with one of three activities: malware, phishing, or command and control. If the domain is new, Cisco Umbrella also has an option to block newly registered domains for a period of time to protect users against short-lived malicious domains.
This protection only applies when Cisco Umbrella is used as the DNS resolver, either via a client installed on your device or through a network configuration.
If you're certain the domain is safe, you can whitelist the domain via the Umbrella Dashboard, if available. Alternatively, use another DNS resolver service such as Google's, which offers no such protection or filtering.
Related
I am developing a small demo API for my organization. I use Azure AD for authentication for the API (and the OpenAPI docs). Everything works perfectly in my local development environment, and I don't have the hassle of SSL since the oauth2-redirect is localhost. I am now ready to make my demo accessible inside my organization's network. However, Azure App Registration mandates that an oauth2-redirect link has to be https (or localhost, which is why it works perfectly for me). I can understand why, but I am eager to demo my API, so, if at all possible, I would like to avoid the hassle of setting up a reverse proxy, configuring TLS, etc. So my question is: if I use https://10.x.x.x.nip.io/oauth2-redirect, what are the security implications? I fear they are major, unfortunately. I guess nip.io could sniff my authorization if they wanted to?
This is not strictly an answer, since it doesn't discuss the security implications of nip.io. However, if TLS is an issue for a demo, it is automatically configured for an Azure App Service. For small demos it is therefore very convenient to deploy as an Azure App Service, especially if you protect it behind a private link.
It is well documented that Google Apps Script runs on Google App Engine servers that do not have access to a company's internal network/server:
https://developers.google.com/apps-script/reference/url-fetch/url-fetch-app
https://cloud.google.com/appengine/kb/#static-ip
https://developers.google.com/apps-script/guides/jdbc#using_jdbcgetconnectionurl
Per the documentation, if you want a Google Apps Script project to have access to an internal network/server, you will have to whitelist Google's IPs. But we all know that isn't the safest option; in fact, the documentation says so itself:
Note that using static IP address filtering is not considered a safe and effective means of protection. For example, an attacker could set up a malicious App Engine app which could share the same IP address range as your application. Instead, we suggest that you take a defense in depth approach using OAuth and Certs.
The issue is that I cannot find any documentation, reference material, or articles on how an organization should best do what it suggests.
So my question is, how can an organization using G-Suite Enterprise securely allow Google Apps Script projects to access the company's internal network?
The documentation makes it quite clear that, since Apps Script projects run on shared App Engine instances, it is impossible to restrict access by IP, which also implies the networking capability is very limited (i.e. no VPC peering or the like). Therefore, as the highlighted block says, they suggest implementing authentication rather than relying on IP restriction.
Apart from authentication, Apps Script also supports encrypting the connection and authenticating the server with SSL (sample code). This should protect the connection from being eavesdropped when sent over the Internet.
Furthermore, you can implement a "semi IP restriction" mechanism, technically called port knocking, which briefly works as follows:
First, create a special endpoint that requires authentication and accepts an IP address as input. When requested, open up your firewall to accept connections from that IP to your internal network for a limited time (e.g. 5 minutes).
In your Apps Script, use URL Fetch to request that endpoint, so that your script's instance is temporarily allowed to access your network.
Of course this will not be perfect, since one App Engine instance runs many scripts concurrently and the whitelist stays open for a set time, but it is still considerably better than persistently opening the port to all Google (App Engine) IPs.
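The knock-then-call flow above can be sketched in Node.js. The endpoint URLs, the bearer token, and the response shapes are all invented for illustration, and the HTTP client is injected so the sketch runs without a real gateway; in Apps Script, `UrlFetchApp.fetch` would play the client role.

```javascript
// Sketch of the port-knocking flow described above. All URLs, the
// token, and the response shapes are hypothetical. `httpClient` is
// injected so the flow can be exercised without a real network.
async function knockThenCall(httpClient, token) {
  // Step 1: authenticated "knock" asking the firewall to whitelist
  // the caller's public IP for a few minutes.
  const knock = await httpClient('https://gateway.example.com/knock', {
    method: 'POST',
    headers: { Authorization: `Bearer ${token}` },
  });
  if (knock.status !== 200) {
    throw new Error(`knock rejected: ${knock.status}`);
  }
  // Step 2: within the open window, call the internal service normally.
  const res = await httpClient('https://internal.example.com/api/report', {
    method: 'GET',
    headers: { Authorization: `Bearer ${token}` },
  });
  return res.body;
}

// Demo with a stub client standing in for the real network.
const stub = async (url) =>
  url.endsWith('/knock')
    ? { status: 200, body: 'ip whitelisted for 5 min' }
    : { status: 200, body: { rows: 3 } };

knockThenCall(stub, 'demo-token').then((body) => console.log(body));
```

Injecting the client also makes the failure path easy to test: if the knock is rejected (wrong token, expired window), the internal call is never attempted.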
Apps Script is a great tool for simplifying tasks when you are using G Suite services; unfortunately, what you are trying to achieve is not available. Also keep in mind that Apps Script is not built on App Engine; it is a completely different product.
Therefore, if what is shown in the documentation can't fulfill your requirements, please check other Google alternatives such as App Engine or Google Cloud Platform instead of G Suite.
I have been researching for a couple of days and looking at Pluralsight courses, but I can't seem to find a decent answer on how to set up a proper Azure infrastructure.
I have a client app, an API backend, and a database as the core of my overall application. I know I need two separate Web App services and a SQL database.
I also need to allow access to all three only from our company's IP address.
I'm getting lost with all the VNet and VPN talk and am wondering whether that is even required. Is it considered good practice to do IP restrictions and call it a day? Should I add an Application Gateway in front of the client application nonetheless?
If VNets are required, is it a must to do site-to-site? (I don't think we have the authority to do that.) If not, how do we access the backend services like the database and API if everything is locked down?
Any help is appreciated because there is too much information and I can't seem to make sense of any of it.
Thanks
It depends a lot on both the purpose of your client application, web application, and database, as well as the capabilities that currently exist within your organisation. Have you had a look at the reference architectures Microsoft provides as a starting point?
If you are looking at a fairly simple application, deployed to Azure with minimal internal-only use, then use something like this reference architecture: https://learn.microsoft.com/en-us/azure/architecture/reference-architectures/hybrid-networking/vpn. You can actually simplify that a little further by removing the load balancers, etc., if you expect traffic to be generally low.
If you are looking for an external application that can only be managed internally, you should adopt something similar to this reference architecture: https://learn.microsoft.com/en-us/azure/architecture/reference-architectures/n-tier/n-tier-sql-server. Maybe even add a VPN component to the management jump box similar to this architecture: https://learn.microsoft.com/en-us/azure/architecture/reference-architectures/hybrid-networking/vpn.
Even this, however, may be too complicated for your use case. If your application is pretty basic, is secured using username/password or identity federation, and handles low-risk data, then the basic web application architecture would do fine; just read through the various considerations here: https://learn.microsoft.com/en-us/azure/architecture/reference-architectures/app-service-web-app/basic-web-app
I have a mobile application that communicates with a REST-based web service. The web service lives behind the firewall and talks to other systems. Currently this web service requires a firewall port to be opened and an SSL cert generated for each installation. The mobile app sends login credentials so the web service can log in to custom back-end systems.
Recently a customer approached us asking how we could deploy this to 50 offices. As we don't want to, say, modify every firewall in every office, we're looking for options. This is a list of possible solutions and my thoughts on each one:
Open firewall port and expose HTTPS web service - This is our current solution, but we don't want to have to contact 50 network admins and explain why we need to do this.
VPN - Too heavyweight, complex, and expensive; we only need access to one server. It also doesn't solve the problem, as the firewall still needs to be modified.
Microsoft Azure Hybrid Connection Manager - This provides a managed service where the Azure cloud exposes an endpoint. Azure also expects connections from an easy-to-install application that lives behind the firewall. When a REST call is made to the cloud endpoint, the request is forwarded down a socket that was initiated by the software behind the firewall. This does what we want, but as it's a Microsoft solution it might impose other requirements that our customers might not want. Currently the simple Hybrid Connection Manager is free, but for how long?
Jscape MFT Gateway - Similar to Azure, but you can host their server anywhere. Not that expensive, but it is not open source.
Netty - An async Java library/toolkit with which this type of application could easily be built. Client and server apps would need to be built and deployed. We don't know what we don't know about Netty.
MDM, AirWatch, BlackBerry BES - An MDM-based solution would work, except that MDMs are centrally managed and are often not present in every office where the backend services are located. AirWatch has an AppTunnel, but I'm not sure about the specifics.
At this point the Microsoft and Jscape systems are possible solutions.
But most likely these solutions will require us to modify the mobile software to work around issues such as:
How does the user know which server to log in to? A locator service needs to be built such that an email address is used to look up their office, or they need to select their office location from a list.
While the connection is SSL, many companies might want some additional protection, since network login information will be sent down the pipe.
How is load balancing and fail-over managed?
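The locator-service idea above can be as simple as mapping the user's email domain to the office endpoint, with a selection list as the fallback. A minimal Node.js sketch; the directory entries and URLs are entirely hypothetical:

```javascript
// Hypothetical locator: resolve a login email to the office server
// the mobile app should talk to. In practice this table would sit
// behind one well-known HTTPS endpoint; the entries are made up.
const OFFICE_DIRECTORY = {
  'berlin.example.com': 'https://berlin-gw.example.com/api',
  'austin.example.com': 'https://austin-gw.example.com/api',
};

function locateServer(email) {
  const at = email.lastIndexOf('@');
  if (at < 0) throw new Error('not an email address');
  const domain = email.slice(at + 1).toLowerCase();
  const server = OFFICE_DIRECTORY[domain];
  if (!server) {
    // Unknown domain: fall back to letting the user pick from a list.
    return null;
  }
  return server;
}

console.log(locateServer('anna@berlin.example.com'));
// → https://berlin-gw.example.com/api
console.log(locateServer('bob@unknown.example.org'));
// → null
```

Keeping only this tiny lookup service public means the per-office gateways never need to be discoverable, which also gives you one place to hang load-balancing or failover decisions later.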
So, at this point I'm looking for more options. The best option would be a commercial product that offers some level of customization. Second best would be a well-used open-source product that could be installed in AWS and customized.
Thanks
The best approach we found was to use the PuTTY API and set up a reverse proxy.
We have added a Web API services layer to our application to help share the code with various product teams at my client's company. I like this as a way of managing versioning and for code organization, but I'm concerned about violating Martin Fowler's First Law of Distributed Object Design, namely: don't distribute your objects. We can currently host all of the various products on the same box, and I was wondering if having the client application access our web services through localhost would allow us to avoid the issues Martin calls out. If it were WCF, I would configure the endpoint to use Named Pipes, and I guess I'm trying to figure out how to do that in IIS.
If you are hosting all your projects in the same process, it would be possible to go in-memory, but I am not sure how much sense this makes. Here is a good example:
Batching Handler for ASP.NET Web API
A related post for the above one
It demonstrates in-memory hosting of the entire Web API pipeline. In your case it seems that this won't quite work out, but it might be worth considering.