Security Issues with DocuSign API

We have developed an app in Salesforce which uses the DocuSign web service API (https://demo.docusign.net/api/3.0/dsapi.asmx for development and https://www.docusign.net/api/3.0/dsapi.asmx for production). We found a few vulnerabilities when we ran security scans against both APIs. We used the ZAP tool, and it reported the following vulnerabilities:
X-Frame-Options Header Not Set
Incomplete or No Cache-control and Pragma HTTP Header Set
Web Browser XSS Protection Not Enabled
X-Content-Type-Options Header Missing
Can these issues be fixed on the web services, or is there any documentation that shows these are false positives?
Thanks

ZAP, like all automated scanners, is very good at finding common oversights and comparing applications against best practices. Unfortunately, such scanners often fail to consider the larger scenario at hand. Setting the correct X-headers in the right scenarios is an important protection against common attacks like click-jacking and XSS in client-server web flows, as the headers tell the user's browser which actions should and should not be permitted. Those attacks are not relevant in a server-to-server API flow, however, so these findings should be considered false positives. Thank you for bringing them to our attention; DocuSign continuously invests in its platform's security and we appreciate the scrutiny.

Related

Security vulnerability for mobile applications

I would like to know whether the security vulnerabilities that affect web-based applications, such as those caused by poor input validation, also apply to mobile apps:
SQL injection
XML injection
XSS
CSRF
Click Jacking (Frame bursting)
Since a mobile app runs in its own sandboxed environment, I would have thought that the browser-specific vulnerabilities would not be applicable.
OWASP does not list these as part of its mobile top 10, and I wanted to understand whether there is a scenario where they can pose an issue for mobile apps.
Most of the vulnerabilities described in the OWASP Top 10 are attacks against the server, e.g. SQL injection, XML injection, Java deserialization, CSRF, and others.
Thus it doesn't matter whether the client is a browser or a mobile app: the attacker can craft their requests with any tool they want.
There are also vulnerabilities specific to the client side of mobile applications; these are described in the OWASP Mobile Top 10.
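Because the attacker can craft requests with any tool, the server must validate every parameter itself rather than trust client-side checks. A small illustrative sketch (the `parseId` validator and its rules are hypothetical, not from any particular framework):

```javascript
// Sketch: server-side input validation. Any client (browser, mobile
// app, curl, Burp Suite) can send arbitrary bytes, so the server
// validates every parameter itself before using it.
function parseId(raw) {
  // Accept only plain decimal digits, so payloads like "1 OR 1=1"
  // or "<script>" are rejected before reaching a query or template.
  if (typeof raw !== 'string' || !/^\d{1,10}$/.test(raw)) {
    throw new Error('invalid id parameter');
  }
  return Number(raw);
}
```

Allow-list validation like this is a complement to, not a substitute for, parameterized queries and output encoding.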

Is modifying requests using Burp Suite considered a valid security vulnerability?

I would like to know whether intercepting and modifying requests with Burp Suite before they reach the server is considered a vulnerability.
In our web-based and mobile applications, adequate security measures are in place to prevent replay attacks, ensure data integrity, etc.
The applications are now being evaluated by one of the application security teams. They use Burp Suite to intercept and modify the request payload, and have raised a few security vulnerabilities that aren't reproducible without Burp Suite.
Is a finding that requires such a tool still a valid vulnerability?

How to prevent user from modifying REST request?

This question might sound trivial, but even after reading a number of tutorials, I still don't get how the REST security should be implemented.
I have a webpage and a soon-to-be-ready mobile app. Both of them will be using the REST API (written in Node.js), and the question is: how can I prevent users from modifying those requests? It's very easy to see the network traffic in the browser and all the GET/POST requests made to the server. It also seems very easy to copy such a request, modify its parameters and/or payload, and send it to the server.
How do I make sure that's my webpage or the app who made the request, and not someone else?
Sisyphus is absolutely correct: your focus should be on securing the channel (TLS, SSH, etc) and authentication (e.g. OAuth2).
You should absolutely familiarize yourself with the Open Web Application Security Project (OWASP). In particular, start with:
OWASP Top 10 Cheat Sheet
OWASP REST Security Cheat Sheet
Here is an excellent "hands on" tutorial that gives you a great overview of all the different pieces you need to worry about:
Authenticate a Node.js API with JSON Web Tokens
Once you've gone through the tutorial and scanned the OWASP cheat sheets, you'll have a much better idea of what kinds of things you need to worry about, what options/technologies are available to mitigate those risks, and what might work best for your particular scenario.
Good luck!
Typically, security these days uses a combination of Transport Layer Security and OAuth2. OAuth2 provides authentication and authorisation, ensuring appropriate access to resources, while TLS both secures data over the network and prevents the kind of replay attacks you're concerned about. Neither is really specific to RESTful APIs, and you can find both used in non-REST contexts as well.

Is it possible to detect the platform of my web clients in a verifiable way?

I have a web application that supports a variety of clients on various platforms including desktop browsers, mobile browsers, as well as mobile and tablet native applications. I am wondering if it is possible to detect, in a secure manner, which of these platforms is being used to connect to the service.
This would be useful information to have, and would enable use cases where a security decision could be made based on the client platform. For example, I could restrict access to certain portions of the service if a user was on a mobile client, or a browser with a known vulnerability.
I am aware of EFF's Panopticlick research, which uses a variety of browser-based attributes, such as User-Agent string, installed plugins, screen dimensions, etc. to establish a unique fingerprint for a client, but this doesn't meet the "verifiable" requirement, as all the information is compiled on the client and could easily be spoofed.
I need a solution that is verifiable on the server side that the information sent by the client is accurate. Does such a solution exist?
There is no way to know which user-agent/platform you're dealing with, because ultimately any information you might use to identify them comes from the client side.
Any attribute you use to fingerprint my browser or operating system can be faked by simply sending you different HTTP headers. There are dozens of browser-level HTTP header manipulators and HTTP request code libraries that do precisely that.
I would therefore strongly recommend against making any security decision based on the platform or user-agent of the client. Assume that any rules you set along those lines for usability purposes can be violated by an attacker.

When writing an HTTP proxy, what security problems do I need to think about?

My company has written an HTTP proxy that takes the original website page and translates it. Think of something along the lines of the web translation services provided by Google, Bing, etc.
I am in the middle of security testing the service and associated website. Of course there are going to be a million attacks or misuses of the site that I haven't yet thought of. Additionally, I don't want our site to become a vector that allows anonymous attacks against third-party sites. Since this site will be subject to many eyes from the day it opens, ensuring the security of both our service and the sites visited through it concerns me.
Can anyone point me to any online or published information for security testing. e.g. good lists of attacks to be worried about, security best practices for creating web sites/proxies/etc. I have a good general understanding of security issues (XSS, CSRF, SQL injection, etc). I'm more looking for resources to help me with the specifics of creating tests for security testing.
Any pointers?
Seen:
https://www.owasp.org/index.php/Top_10
https://stackoverflow.com/questions/1267284/common-website-attack-methods-detection-and-recovery
Most obvious problems for a translation service:
Ensure that the proxy cannot access the internal network. Obvious once you think about it, but often forgotten in a first release: a user should not be able to request translation of http://127.0.0.1, etc. As you can imagine, this can cause some serious problems. A clever attack would be http://127.0.0.1/trace.axd, which will expose more than necessary because the server thinks the request is coming from localhost. If you have any IP-based restrictions between that system and other systems, be careful about those as well.
XSS is the other obvious problem: ensure that the translation is delivered to the user from a separate domain (as Google Translate does). This is crucial; don't assume you can filter XSS attacks successfully.
Beyond that, OWASP is the best resource to start with for all the other common web security issues. For automated testing, there are free tools such as Netsparker and Skipfish.