Do browsers' service workers make it easy to build botnets? - security

I've been reading about service workers so I can use some of their features to make my app faster and more reliable. While I'm excited by the possibilities of service workers and persistent storage giving web apps nearly all the capabilities of native apps, I also had this thought.
What if someone wants to build a botnet? They just send their website link to some users, and that's all: service workers get installed in browsers all across the world. Now the site owner can do many things with this botnet, like launching a DDoS attack, cryptomining, etc. I think it might even make it easier for a hacker trying to exploit some browser vulnerability.
Am I missing something?

The web and browser security models have a number of measures in place to prevent or limit abuse.
1) Cross-Origin Resource Sharing (CORS) is a web security feature that limits, or at least makes difficult, attempts by websites/service workers to launch DDoS attacks against other domains.
2) Service workers have a maximum runtime of five minutes (in Chrome, but other browsers have similar limits). After five minutes the service worker is killed, and the user will have to trigger an event before it is reactivated.
3) Most browsers use Safe Browsing to block access to malicious sites, so a site caught abusing service workers could be disabled globally very quickly.
This all assumes that "They just send their website link to some users" is easy to pull off. Email spam filters are really good these days and SMS is expensive, so getting a PWA in front of a large number of users would be difficult.
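To make the second point concrete, here is a minimal sketch (TypeScript, with placeholder file names) of the lifecycle those limits apply to: the page can only register the worker, and the worker itself only runs while handling events the browser dispatches to it.

    // page script (e.g. main.ts) -- registration is the only way a site gets a
    // service worker into a visitor's browser, and it requires HTTPS.
    if ('serviceWorker' in navigator) {
      navigator.serviceWorker
        .register('/sw.js')
        .then((reg) => console.log('service worker registered for scope', reg.scope))
        .catch((err) => console.error('registration failed', err));
    }

    // worker script (sw.js) -- the browser starts it to handle an event and may
    // terminate it shortly after the handlers finish, so there is no supported
    // way to keep an arbitrary loop (a miner, a DDoS client) running indefinitely.
    self.addEventListener('fetch', (event: any) => {
      // pass the request through to the network and let the worker go idle
      event.respondWith(fetch(event.request));
    });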

Related

Potential security breach - open Google script?

I'd like to allow a third-party app (Zapier) to execute a Google Apps Script with access to the Admin Directory API within my organization. For that, I'd need to allow everyone with the link to execute this script, which takes parameters to create a new user within the organization. Also, if somebody were able to edit the script, they could do a lot of harm.
Does somebody know how to prevent that from happening, or how to make this whole process secure?
What are the actual potential threats?
Cheers,
As @Dimu Designs and @James D have already stated, this might not be the best idea.
It might be better to integrate directly with that service if possible. Exposing your Apps Script as a public web app is rife with issues. Anyone with the web app's URL can spam requests to that endpoint and exhaust your service quotas, gumming up your system.
This is not a programming question. However, you'd actually be allowing anonymous users to execute the script as you in a read-only manner.
Also, taking quotas into account, you would have a problem, as @Dimu Designs has already stated.
Passing parameters won't provide sufficient protection against spam attacks. As long as the web app URL is hit via a GET or POST request, it will count against the Trigger Total Runtime quota (since doGet(e) and doPost(e) are considered triggers) and also against the Simultaneous Execution quota. The only real protection is limiting access to trusted parties.
In the end, doing this with Apps Script is probably not the way to go. Doing it internally through an intranet would be a much better solution. But if Apps Script is your only option, be aware of the risks involved in the process.
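As an illustration of that last point (not the asker's actual script), here is a rough sketch of the kind of doGet() handler being discussed. The token name and value are hypothetical, and the Apps Script global is declared loosely so the sketch reads as TypeScript; in the Apps Script editor it already exists. The key takeaway matches the answer above: even a rejected request has already consumed trigger-runtime and simultaneous-execution quota, so a parameter check alone does not prevent quota exhaustion.

    // Loose declaration so this TypeScript sketch type-checks outside Apps Script;
    // in the real editor ContentService is provided by the platform.
    declare const ContentService: any;

    function doGet(e: { parameter?: Record<string, string> }) {
      // Hypothetical shared secret -- anyone who learns the URL and token can
      // still spam the endpoint, so this only raises the bar slightly.
      const SECRET = 'replace-with-a-long-random-token';
      if (!e.parameter || e.parameter.token !== SECRET) {
        // Rejected, but this execution has already counted against the quotas
        // mentioned above.
        return ContentService.createTextOutput('Forbidden');
      }
      // ...call the Admin Directory advanced service to create the user here...
      return ContentService.createTextOutput('OK');
    }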

What do these unknown request_uri values against my nginx web server mean?

I built a web application that has had low traffic so far. After doing some advertising, I noticed some suspicious requests against my server; this is what the Loggly service shows me in its panel:
[Screenshot: Loggly logs of nginx requests]
I am not an expert in information security, but I suspect that someone wants to attack my site or is preparing a future attack.
What do these logs mean exactly?
Should I worry too much about this behavior?
Are they using some exploit scanner software?
I am setting up a web application firewall, adding some rules to DNS, and changing all admin passwords, but what other recommendations should I keep in mind?
Yep, some person or bot is using a vulnerability scanner to poke your server.
Unless it's excessive or causing stability issues, this is normal traffic. Every node that's accessible online will see attempts like this, and if you follow basic security practices (e.g. staying up to date with OS/app patches, using 2FA for logins, shutting down unnecessary services/ports, and vigilantly monitoring your logs and usage, or at a bigger scale, investing in WAF and IPS/IDS products or using a vendor like Cloudflare), you shouldn't have much to worry about.
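As a rough illustration of the "monitor your logs" advice (not something from the original answer), the sketch below counts requests per client IP in an nginx access log and flags unusually chatty ones; the log path and threshold are assumptions.

    import { readFileSync } from 'fs';

    const LOG_PATH = '/var/log/nginx/access.log'; // assumed default nginx log location
    const THRESHOLD = 100;                        // arbitrary cut-off for "suspiciously many" requests

    // In nginx's default "combined" log format the client IP is the first field.
    const counts = new Map<string, number>();
    for (const line of readFileSync(LOG_PATH, 'utf8').split('\n')) {
      const ip = line.split(' ')[0];
      if (!ip) continue;
      counts.set(ip, (counts.get(ip) ?? 0) + 1);
    }

    for (const [ip, n] of counts) {
      if (n >= THRESHOLD) {
        console.log(`${ip} made ${n} requests - worth a closer look`);
      }
    }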
The culprit is the Jorgee Security Scanner [1][2].
[1] https://www.checkpoint.com/defense/advisories/public/2016/cpai-2016-0214.html
[2] https://blog.paranoidpenguin.net/2017/04/jorgee-goes-on-a-rampage/

Is Google App Engine secure enough for financial applications?

I am wondering whether Google App Engine is secure enough for financial applications. This would involve storing sensitive information, access to users' funds, etc. Are there any applications like that already running on App Engine?
Generally speaking, App Engine is more secure than most other options.
(1) Google has more people working on security than most companies can afford to dedicate to their own servers or VMs.
When a new security threat is discovered, Google is very likely to fix it quickly, compared to dedicated servers/VMs, where you have to rely on your own sysadmin to fix it in time.
(2) There is no OS, firewall, etc., to configure, which reduces the possibility of a misconfiguration exposing a security hole. Runtimes are also limited.
Ultimately, the vast majority of all security breaches happen for two reasons:
wrong application code/architecture
human factor (people storing their passwords in email messages, choosing weak passwords, doing harm on purpose, phishing, etc.)
Neither of these factors is any different on App Engine than on any other platform.
As for the second part of your question, Snapchat has a lot of, should I say, very sensitive information. It runs on App Engine.
WebFilings, a company that handles financial data for most of the Fortune 500, runs on Google App Engine. Their statement about it in the second link I've given:
Google App Engine gave us that speed we needed to grow, but as Murray stated, “being on the Google foundation gives us and our customers peace of mind.” WebFilings needed a platform with a strong approach to security that was very reliable because our customers’ security is a top priority. Because our customers rely on accurate and timely access to the cloud as they input pre-released financial information, security is top-of-mind at our company. As a result, we want our customers’ security to be in the best hands, in Google App Engine’s hands.

Browsers are requesting crossdomain.xml & /-7890*sfxd*0*sfxd*0 on my site

Just recently I have seen multiple sessions on my site that are repeatedly requesting /crossdomain.xml & /-7890*sfxd*0*sfxd*0. We have had feedback from some of the folks behind these sessions that they cannot browse the site correctly. Is anyone aware of what might be causing these requests? We were thinking it was either a virus or some toolbar.
The only thing the requests have in common is that they all come from some version of IE (7, 8 or 9).
Independently of the nature of your site/application, ...
... the request for the /crossdomain.xml policy file is indicative of a [typically Adobe Flash, Silverlight, JavaFX or the like] application running on the client workstation and attempting to assert whether your site allows the application to access your site on behalf of the user on said workstation. This assertion of the crossdomain policy is a security feature of the underlying "sandboxed" environment (Flash Player, Silverlight, etc.) aimed at protecting the user of the workstation. That is because, when accessing third-party sites "on behalf" of the user, the application gains access to whatever information these sites will provide in the context of the various sessions or cookies the user may have already started/obtained.
... the request for /-7890*sfxd*0*sfxd*0 is a hint that the client (be it the application mentioned above, some unrelated HTTP reference, a web browser plug-in or yet some other logic) thinks that your site is either superfish.com, some online store affiliated with superfish.com, or one of the many sites that send traffic to superfish.com for the purpose of sharing revenue.
Now... these two kinds of request received by your site may well be unrelated, even though they originate from the same workstation in apparent simultaneity. For example, it could just be that the crossdomain policy assertion is from a web application which legitimately wishes to access some service on your site, while the "sfxd" request comes from a plug-in in the workstation's web browser (e.g. WindowsShopper or, alas, any of a slew of other plug-ins) which somehow triggers its requests based on whatever images the browser receives.
The fact that some of the clients which make these requests are not able to browse your site correctly (whatever that means...) could further indicate that some (I suspect JavaScript) logic on these clients gets the root URL of its underlying application/affiliates confused with that of your site. But that's just a guess; there's not enough context about your site to give more precise hints.
A few suggestions to move forward:
Decide whether your site can and should allow crossdomain access, and to whom, and remove or edit your site's crossdomain.xml file accordingly. Too many sites seem to just put <allow-access-from domain="*"/> in their crossdomain policy file for no good reason (hence putting their users at risk). This first suggestion will not solve the problem at hand, but I couldn't resist the cautionary warning.
Ask one of the users who "cannot access your site properly" to disable some of the plug-ins (aka add-ons) in their web browser and/or to use an alternate web browser, and see if that improves the situation. Disabling plug-ins in a web browser is usually very easy. To speed up the discovery, you may suggest a kind of dichotomy approach: disable several plug-ins at once and continue the experiment with half of those plug-ins or with the ones that were still enabled, depending on whether your site becomes accessible.
If your application serves ads from third-party sites, temporarily disable these ads and see if that helps the users who "cannot access your site properly".

How to protect a website from DoS attacks

What are the best methods for protecting a site from DoS attacks? Any idea how popular sites/services handle this issue?
What are the tools/services at the application, operating system, networking, and hosting levels?
It would be nice if someone could share their real experience dealing with this.
Thanks
Are you sure you mean DoS and not injections? There's not much you can do on the web programming end to prevent DoS, as it's more about tying up connection ports, and blocking happens at the network layer rather than at the application layer (web programming).
As for how most companies prevent them: a lot of companies use load balancing and server farms to absorb the incoming bandwidth. Also, a lot of smart routers monitor activity from IPs and IP ranges to make sure there aren't too many requests coming in (and if there are, they block them before they hit the server).
The biggest intentional DoS I can think of is woot.com during a woot-off, though. I suggest trying Wikipedia (http://en.wikipedia.org/wiki/Denial-of-service_attack#Prevention_and_response) to see what it has to say about prevention methods.
I've never had to deal with this yet, but a common method involves writing a small piece of code to track IP addresses that are making a large number of requests in a short amount of time and denying them before any real processing happens.
Many hosting services provide this along with hosting; check with yours to see if they do.
I implemented this once at the application layer. We recorded all requests served by our server farms through a service that each machine in the farm could send request information to. We then processed these requests, aggregated by IP address, and automatically flagged any IP address exceeding a threshold of a certain number of requests per time interval. Any request coming from a flagged IP got a standard CAPTCHA response; if they failed too many times, they were banned forever (dangerous if you get a DoS from behind a proxy). If they proved they were human, the statistics related to their IP were reset to zero.
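A minimal sketch of the per-IP tracking described above, assuming a single-process Node/TypeScript server; the window length, request limit, and the CAPTCHA/429 hand-off are placeholders rather than the answerer's actual values.

    type Timestamps = number[]; // recent request times (ms) for one IP

    const WINDOW_MS = 60_000;   // 1-minute sliding window (assumed)
    const MAX_HITS = 120;       // requests allowed per window (assumed)
    const hits = new Map<string, Timestamps>();

    // Returns true if the request should be processed, false if the IP has
    // exceeded the limit and should get a CAPTCHA or HTTP 429 instead.
    function allowRequest(ip: string, now = Date.now()): boolean {
      const recent = (hits.get(ip) ?? []).filter((t) => now - t < WINDOW_MS);
      recent.push(now);
      hits.set(ip, recent);
      return recent.length <= MAX_HITS;
    }

    // Example wiring inside a request handler (framework left abstract):
    //   if (!allowRequest(clientIp)) {
    //     // serve a CAPTCHA challenge or a 429 response instead of real work
    //   }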
Well, this is an old one, but people looking to do this might want to look at fail2ban.
http://go2linux.garron.me/linux/2011/05/fail2ban-protect-web-server-http-dos-attack-1084.html
That's more of a Server Fault sort of answer, as opposed to building this into your application, but I think it's the sort of problem that is most likely better tackled that way. If the logic for what you want to block is complex, consider having your application just log enough info to base the banning policy on, rather than trying to put the policy into effect itself.
Consider also that, depending on the web server you use, you might be vulnerable to things like a Slowloris attack, and there's nothing you can do about that at the web application level.
