Are Svelte Stores accessible from the browser console?

I wonder if Svelte stores are a good way to store a JWT securely. Are Svelte stores secure against XSS attacks?

Stores cannot be "secure against XSS"; either your site is, or it is not. I suspect your question is whether stores could be read in the case of XSS?
If so, that depends primarily on how the store is handled. To my knowledge there is no global registry of stores, so you cannot simply get a reference to all previously instantiated stores (such a mechanism would also leak memory).
Then the question is how your components are instantiated and whether they could potentially expose the store. If you did not intentionally set global state (e.g. window.app = new App(...)), it should be hard or impossible to get at the store, as Svelte components should not leave references in the DOM.
If malicious code is executing on your site, you probably have other issues to worry about, though.
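Still, to illustrate the scoping point, here is a minimal sketch (assuming the standard svelte/store API; the token store is a made-up name):

// store.js -- a module-scoped store: only modules that import it hold a
// reference, so code typed into the console has no handle to reach it.
import { writable } from 'svelte/store';
export const token = writable(null);

// Anti-pattern: deliberately exposed global state IS reachable from the console:
// window.app = new App({ target: document.body });
// window.token = token; // attacker: window.token.subscribe(t => exfiltrate(t))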

It's very simple:
If any information is stored by JavaScript, in any way, it is accessible by JavaScript and therefore susceptible to XSS.
Cookies, however, when used with the HttpOnly flag, are not accessible through JavaScript, so their contents cannot be stolen via XSS (injected script can still make requests that carry them, but it can never read the token itself).
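As a sketch of that approach (assuming a Node/Express server; issueToken is a hypothetical helper), the JWT never touches page JavaScript:

// Issue the JWT in an HttpOnly cookie instead of handing it to the page.
app.post('/login', (req, res) => {
  const jwt = issueToken(req.body); // hypothetical token helper
  res.cookie('session', jwt, {
    httpOnly: true,   // document.cookie cannot read it
    secure: true,     // only sent over HTTPS
    sameSite: 'strict'
  });
  res.sendStatus(204);
});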

Related

Is HTTP X-XSS-Protection response header sufficient for handling reflected XSS attacks

I am a newbie in web application development. My application uses AngularJS/NodeJS. A security tool reported a reflected XSS vulnerability in our application. From my search on the internet I found that there is an HTTP X-XSS-Protection response header which appears to protect the application against reflected XSS attacks. However, I am not sure whether that is sufficient for handling reflected XSS attacks, or whether input sanitization should also be done in the application.
X-XSS-Protection is not enough, and is already enabled by default in most browsers. All it does is enable the built-in XSS protection in browsers, but the trick is that, to the best of my knowledge (maybe somebody will correct me), it is not specified what exactly the browser should do. Based on my experience, browsers only filter the most trivial XSS: when a parameter contains JavaScript between script tags, that JavaScript will not be run. For example, if the parameter is
something <script>alert(1)</script> something
this will not be run by the browser. And apparently that's it.
For example if you have server-side code like
<div class="userclass-{{$userinput}}">
then this can be exploited with the string abc" onclick="alert(1)" x=", which goes straight through the built-in filter in every browser I tried.
And then we haven't talked about stored or DOM XSS, where this protection is totally useless.
With Angular, it is slightly harder to make your code vulnerable to XSS (see the documentation on how to bind values as HTML, for example), but it is by far not impossible, and because you would almost never need an actual script tag for that, this built-in protection has very limited use in an Angular app.
Also while input validation/sanitization is nice to have, what you actually need against XSS in general is output encoding. With Angular, that is somewhat easier than with many other frameworks, but Angular is not the silver bullet, your application can still be vulnerable.
In short, any time you display user input anywhere (more precisely, any time you insert user input into the page DOM), you have to make sure that JavaScript from the user can't run. This is achieved through output encoding and secure JavaScript use, which mostly means using your template engine or data binding in a way that only binds values as text, and never as anything else (e.g. tags).
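A minimal sketch of output encoding in plain JavaScript (framework-free, for illustration only; the element id is made up):

// Encode HTML metacharacters so user input renders as text, not markup.
function escapeHtml(s) {
  return s.replace(/&/g, '&amp;')
          .replace(/</g, '&lt;')
          .replace(/>/g, '&gt;')
          .replace(/"/g, '&quot;')
          .replace(/'/g, '&#39;');
}

const userInput = 'abc" onclick="alert(1)" x="'; // attacker-controlled
const element = document.getElementById('out');  // hypothetical target node
element.textContent = userInput;   // safe: the DOM treats it as text
// element.innerHTML = userInput;  // unsafe: the DOM parses it as markup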
What X-XSS-Protection does is browser dependent and can't be relied upon for security. It is meant as an extra layer to make it more difficult to exploit vulnerable applications, but is not meant as a replacement for correct encoding. It is a warning that an application is vulnerable and the responsibility to fix it is on the application developers.
This is why in Chrome, errors in the XSS filter are not even considered security bugs. See the Chrome Security FAQ:
Are XSS filter bypasses considered security bugs?
No. Chromium contains a reflected XSS filter (called XSSAuditor) that is a best-effort second line of defense against reflected XSS flaws found in web sites. We do not treat these bypasses as security bugs in Chromium because the underlying issue is in the web site itself. We treat them as functional bugs, and we do appreciate such reports.
The XSSAuditor is not able to defend against persistent XSS or DOM-based XSS. There will also be a number of infrequently occurring reflected XSS corner cases, however, that it will never be able to cover. Among these are:
Multiple unsanitized variables injected into the page.
Unexpected server side transformation or decoding of the payload.
And there are plenty of bugs; bypasses of the filter are found regularly. Just last week there was a fairly simple-looking injected style attribute bypass.
Such a filter would need to tell where values in the output have come from by only looking at the input and the output, without seeing any templates or server-side code. When any processing is done to the input, or there are multiple kinds of decoding occurring, or multiple values are combined together, this can be very difficult.
Much easier would be to take care of the encoding at the template level. Here, we can easily tell the difference between variable values and static code because they haven't been merged together yet.
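A sketch of that idea, using a tagged template literal as a stand-in for a real template engine (reusing the escapeHtml helper sketched above):

function render(strings, ...values) {
  // Static markup (strings) and variable values arrive here separately,
  // so every value can be escaped by default before they are merged.
  return strings.reduce(
    (out, s, i) => out + s + (i < values.length ? escapeHtml(String(values[i])) : ''),
    ''
  );
}

// The attribute-breaking payload from the example above is neutralized:
const html = render`<div class="userclass-${'abc" onclick="alert(1)" x="'}">`;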

Can Mylar be hacked?

I'm interested in using Mylar for an upcoming project.
The promises that Mylar makes seem impressive. However, could a dev write a back-door into the code that is allowed to run (verified by hash/signature), so that the data is compromised (likely via XSS)? The Mylar documentation states:
"Mylar ensures that client-side application code is authentic, even if
the server is malicious."
The only way I can imagine this being protected against is for the browser itself to disallow outbound communication of unencrypted data. But if that were the case, how could the app query the database or make calls back to the server? (I understand that Mylar is best used with a browser-side framework like Meteor, but still, Meteor needs to communicate with the server for certain tasks.)
Is Mylar able to provide complete data security, even from the application developer/server admin?
Here is Mylar's claim (from http://www.mit.edu/~ralucap/mylar.pdf):
3.4 Threat model
Threats. Both the application and the database servers can be fully controlled by an adversary: the adversary may obtain all data from the server, cause the server to send arbitrary responses to web browsers, etc. This model subsumes a wide range of real-world security problems, from bugs in server software to insider attacks. Mylar also allows some user machines to be controlled by the adversary, and to collude with the server. This may be either because the adversary is a user of the application, or because the adversary broke into a user’s machine. We call this adversary active, in contrast to a passive adversary that eavesdrops on all information at the server, but does not make any changes, so that the server responds to all client requests as if it were not compromised.
Guarantees. Mylar protects a data item’s confidentiality in the face of arbitrary server compromises, as long as none of the users with access to that data item use a compromised machine.
In this context, 'compromised machine' means the client machine/browser.
After re-reading the Mylar white paper, I see where the document states:
Assumptions. To provide the above guarantees, Mylar makes the following assumptions. Mylar assumes that the web application as written by the developer will not send user data or keys to untrustworthy recipients, and cannot be tricked into doing so by exploiting bugs (e.g., cross-site scripting). Our prototype of Mylar is built on top of Meteor, a framework that helps programmers avoid many common classes of bugs in practice.
Does this mean the way the application was written at the time of encryption, or at the time of attack? In other words, is the encrypted data somehow tied to a specific version of the application code? Elsewhere in the referenced Mylar white paper it indicates that the app code is verified against a hash signature.
If the app code can simply be hacked at the server, this reduces the value proposition greatly, as any attacker who gains access to the source code could modify it and leech data as it is requested (at the browser). The guarantee of "protecting confidentiality in the face of arbitrary server compromises" seems broad enough to include the idea of the attacker modifying the source code of the application, hence my confusion.
Also refer to section 6 in the white paper for more information. I believe the Mylar doc is conveying that it does mitigate compromised-application-code attacks. I'd really love to hear from a dev with an authoritative understanding of Mylar.
... could a dev write a back-door attack into the code, that is allowed to run (verified by hash/signature), so that the data is compromised (likely via XSS)?
Yes, a developer could write a back-door into the code. There is no way to prevent that, because a developer could claim to be using Mylar while actually not using it, or using a compromised version. Note that Mylar doesn't claim to prevent that. It prevents attacks by server operators, for example when you host your application in a third-party cloud.
3 MYLAR ARCHITECTURE
There are three different parties in Mylar: the users, the web site owner, and the server operator. Mylar’s goal is to help the site owner protect the confidential data of users in the face of a malicious or compromised server operator.
If you don't trust the developers or the web site owner, you have to check the client-side source code every time it's loaded.
Mylar documentation states: "Mylar ensures that client-side application code is authentic, even if the server is malicious."
The only way I can imagine this being protected against is for the browser itself to disallow outbound communication of unencrypted data. But, for that to happen, how can the app query the database, make calls back to the server [...]
Is Mylar able to provide complete data security, even from the application developer/server admin?
That's right, the browser won't send unencrypted data to the server (at least not the data which you marked as secret). I can't provide a full explanation of how it allows a large subset of SQL functionality on encrypted data, because it's complicated. As Raluca Ada Popa explains in one of her presentations, data is encrypted several times with different algorithms, because each algorithm allows different operations on encrypted data (equality check, ordering, text search, ...). The same MIT group also developed CryptDB, which uses the same methodology but only protects the database server.
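To illustrate just the basic principle (plain Web Crypto here, not Mylar's actual API), a client can encrypt a secret field before it ever leaves the browser:

// The server only ever stores ciphertext; the key never leaves the client.
async function encryptField(key, plaintext) {
  const iv = crypto.getRandomValues(new Uint8Array(12)); // AES-GCM nonce
  const ciphertext = await crypto.subtle.encrypt(
    { name: 'AES-GCM', iv },
    key,
    new TextEncoder().encode(plaintext)
  );
  return { iv, ciphertext }; // both are sent to the server
}

The hard part, which Mylar's layered encryption schemes address, is that the server must still be able to search and sort over such ciphertext; plain AES-GCM as above allows neither.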
3.4 Threat model: Both the application and the database servers can be fully controlled by an adversary [...]
When an attacker controls the application server, he could exchange the whole application for his own, which mocks the original user interface. This is where the browser plugin comes into play: the application is signed by the web site owner before it's deployed, so that the browser plugin can check the signature and alert the user if the application was modified.
You might have noticed that Mylar needs the user to check authenticity himself. Other things that a user needs to be aware of:
Mylar applications must be loaded over a secure HTTPS connection.
Retrieved data must be signed by the expected user (for example a chat room must show who created it and the user has to check if someone tries to fake an existing room).
The client machine must not be compromised.
...
Mylar assumes that the web application as written by the developer will not send user data or keys to untrustworthy recipients, and cannot be tricked into doing so by exploiting bugs (e.g., cross-site scripting).
Does this mean the way the application was written at the time of encryption, or at the time of attack?
They assume the application as delivered doesn't contain any bugs which could leak private data. Mylar doesn't prevent coding mistakes, it prevents untrusted modifications later on.
In other words, is the encrypted data somehow tied to a specific version of the application code? Elsewhere in the referenced Mylar white paper it indicates that the app code is verified against a hash signature.
If the app code can simply be hacked at the server, this reduces the value proposition greatly, as any attacker who gains access to the source code could modify it and leech data as it is requested (at the browser).
Encrypted data isn't tied to a specific version. Each version of the application needs to be signed by the web site owner, so that the browser plugin can check its signature and attacks would be obvious to users. A common dynamic web site wouldn't allow signing, because each user's data is different and would modify the received code; therefore application code (HTML, JavaScript, ...) and data are strictly separated. After the application is loaded and its signature has been checked, data is retrieved via AJAX, whereas the AJAX response must not contain executable code (this is part of the Meteor framework; I can't say anything more about it).
Conclusion
If the web site owner himself is dishonest, you can't be sure about privacy. This is especially the case if governments are able to force the web site owner to cooperate.
Also, Mylar doesn't prevent bugs which could leak data. For example, the simplest mistake would be a developer forgetting to mark a field as private.
When an attacker takes over the application server, users are warned, but if they ignore the warning (or never installed the browser plugin) their data can be intercepted.
If you want to outsource hosting of your application or you won't trust your own server operators, Mylar provides better security than any other framework I know of.

Securing a Browser Helper Object

I'm currently in the process of building a browser helper object.
One of the things the BHO has to do is to make cross-site requests that bypass the cross-domain policy.
For this, I'm exposing a __MyBHONameSpace.Request method that uses WebClient internally.
However, it has occurred to me that anyone who is using my BHO now has a CSRF vulnerability everywhere, as a smart attacker can now make arbitrary requests from my clients' computers.
Is there any clever way to mitigate this?
The only way to fully protect against such attacks is to separate the execution context of the page's JavaScript and your extension's JavaScript code.
When I researched this issue, I found that Internet Explorer does provide a way to create such a context, namely via IActiveScript. I have not implemented this solution though, for the following reasons:
Lack of documentation / examples that combines IActiveScript with BHOs.
Lack of certainty about the future (e.g. https://stackoverflow.com/a/17581825).
Possible performance implications (IE is not known for its superb performance; how would two JavaScript engine instances per page affect browsing speed?).
Cost of maintenance: I already had an existing solution which was working well, based on very reasonable assumptions. Because I'm not certain whether the alternative method (using IActiveScript) would be bug-free and future-proof (see 2), I decided to drop the idea.
What I have done instead is:
Accept that very determined attackers will be able to access (part of) my extension's functionality.
@Benjamin asked whether access to a persistent storage API would pose a threat to the user's privacy. I consider this risk to be acceptable, because a storage quota is enforced, all stored data is validated before it's used, and it doesn't give an attacker any more tools to attack the user. If an attacker wants to track the user via persistent storage, they can just use localStorage on some domain and communicate with that domain via an <iframe> using the postMessage API. This method works across all browsers, not just IE with my BHO installed, so it is unlikely that any attacker would spend time reverse-engineering my BHO in order to use the API when there's a method that already works in all modern browsers (IE8+).
Restrict the functionality of the extension:
The extension should only be activated on pages where it needs to be activated. This greatly reduces the attack surface, because an attacker then has to get their code running on one of those pages (e.g. https://trusted.example.com) instead of on any arbitrary site the user can be tricked into visiting.
Create and enforce whitelisted URLs for cross-domain access at extension level (in native code (e.g. C++) inside the BHO).
For sensitive APIs, limit its exposure to a very small set of trusted URLs (again, not in JavaScript, but in native code).
The part of the extension that handles the cross-domain functionality does not share any state with Internet Explorer. Cookies and authorization headers are stripped from the request and response. So, even if an attacker manages to get access to my API, they cannot impersonate the user at some other website, because of missing session information.
This does not protect against sites that use the IP of the requester for authentication (such as intranet sites or routers), but this attack vector is already covered by a correct implementation of the whitelist (see step 2).
"Enforce in native code" does not mean "hard-code in native code". You can still serve updates that include metadata and the JavaScript code. MSVC++ (2010) supports ECMAScript-style regular expressions <regex>, which makes implementing a regex-based whitelist quite easy.
If you want to go ahead and use IActiveScript, you can find sample code in the source code of ceee, Gears (both discontinued) or any other project that attempts to enhance the scripting environment of IE.

Mitigating security risks of javascript objects using require.js

I'm a little paranoid about storing sensitive information in global variables on the browser; who wouldn't be. Enter AMD! My question is, can we confidently use require.js to completely isolate variables, to help mitigate unwanted manipulation of variables from the console? Has anyone found a backdoor, or maybe a better way to put it is, has anyone witnessed any security issues with the require.js library?
Thanks!
No, you can't. Even if you don't have any global variables, the user can still go through your source code and add breakpoints; when the code reaches a breakpoint, he can manipulate all the variables that are accessible in the current scope.
Take a look at this gamedev question, which has some advice on how to make it harder (but not impossible) for users to cheat in your code.
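A minimal sketch of why module scope doesn't help here (assuming RequireJS's define; the 'session' module name is made up):

// `secret` never appears on window, so it's hidden from casual access...
define('session', [], function () {
  var secret = 'jwt-value-here';
  return { get: function () { return secret; } };
});
// ...but a user can open devtools, set a breakpoint inside get(), and
// read `secret` straight from the paused scope. Module scope protects
// against accidental globals, not against whoever controls the browser.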
Yeah, the attacker can always view the source.
But if you shape the payload, minifying and modularizing parts of the client and serving them on demand in accordance with use-case narratives, you effectively add a layer of obscurity that rests on the assumption of human play.
A bot cannot simply traverse directories on a server; instead it must (via JavaScript) navigate the application intelligently, only receiving code at a uniquely specified point in the app. It must know when certain payloads are essential to the use case (say, offering up credit card info N screens into a process).
Moreover, client code can be obfuscated per IP address, or along continuous, periodic release cycles.

Secure programming in dynamic languages

Mistakes in memory management in C, C++, and their ilk are well known. I mostly program in dynamic, weakly typed languages. Are there issues which are particularly important in languages of this type? What language-specific issues might I keep an eye out for?
I'm generally mindful of standard security issues, and try to think about the ways in which code could be misused, but am sure there are plenty of less superficial mistakes I could be making, and am interested in expanding my knowledge in this area.
If you use anything similar to eval(), there is a risk of attack, especially if you feed it anything from outside your application.
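A small sketch of the classic failure mode (calculate is a hypothetical app function; stealCookies stands in for anything an attacker wants to run):

// Building code from outside input turns data into executable script.
const userInput = '1); stealCookies(';   // attacker-controlled string
eval('calculate(' + userInput + ')');    // executes: calculate(1); stealCookies()

// Treat data as data instead: JSON.parse evaluates nothing, and simply
// throws on malformed input like the string above.
const value = JSON.parse('42');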
Just because you're not writing the lower-level code doesn't mean that the language you are using, and therefore your app, won't have these kinds of security problems. So my answer to your question is to ensure that you stay up to date on the latest releases of whatever tools you are using. This is more of an issue for you if you host the environment in which your app runs; otherwise it's more of a problem for users of your app, if they have to run it on their machines.
SQL injection is a common attack which doesn't depend on type management. Generally, missing input validation is a very common cause of security issues.
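For illustration (assuming a node-postgres style client with a query(text, params) method; db is a hypothetical connected client and name is user input):

// String concatenation lets input rewrite the query:
db.query("SELECT * FROM users WHERE name = '" + name + "'"); // injectable:
// name = "x' OR '1'='1" rewrites the WHERE clause.

// Parameterized queries keep input strictly as data:
db.query('SELECT * FROM users WHERE name = $1', [name]);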
In the case of JavaScript the main vulnerabilities according to the EC-Council Secure Programmer Vol.1 are the following:
Cross-Site Scripting (XSS). In an XSS attack, attackers submit client-side executable scripts by inserting malicious JavaScript, VBScript, ActiveX, HTML or Flash into a vulnerable dynamic page, and execute the script on the user's machine to collect the user's information.
Avoiding XSS:
Constrain input: define a codepage that decides which characters are problematic; restrict variables to characters that are explicitly allowed; filter metacharacters depending on the interpreter (HTML, browser and file system).
Apply canonicalization: the canonicalization technique brings the input to an appropriate form before validating it.
Validate the input: validate all external input for field length, data type and range, and against a whitelist to ensure acceptance of only known unproblematic characters (see the sketch after this list).
Encode output: convert metacharacters such as <, > and " to HTML entities, and encode user-supplied output so that any inserted script is prevented from being transmitted to users in an executable form.
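A minimal sketch of whitelist validation (the field name and limits are made up):

// Accept only known-good characters and a bounded length; reject the rest.
function isValidUsername(input) {
  return typeof input === 'string' && /^[A-Za-z0-9_]{1,32}$/.test(input);
}

isValidUsername('alice_42');               // true
isValidUsername('abc" onclick="alert(1)'); // false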
JavaScript Hijacking: allows an unauthorized party to read confidential information. It occurs because most web browsers that implement a security model do not anticipate the use of JavaScript for communication. JavaScript Hijacking is generally carried out through cross-site request forgery. Cross-site request forgery is an attack that causes the victim to submit one or more HTTP requests to a vulnerable website. This attack compromises data integrity and confidentiality, meaning an attacker can read the victim's information and modify the information stored on the vulnerable site.
A JavaScript Hijacking attack can be defended against:
By declining malicious requests.
By preventing direct execution of the JavaScript response.
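A sketch of both defenses (assuming an Express-style server with session middleware; the header name and response prefix are common conventions, not fixed standards):

app.post('/api/update', (req, res) => {
  // Decline malicious requests: check a per-session token that a
  // cross-site attacker cannot read or forge.
  if (req.get('X-CSRF-Token') !== req.session.csrfToken) {
    return res.sendStatus(403);
  }
  // Prevent direct execution of the JavaScript response: the ")]}',\n"
  // prefix breaks cross-site <script src=...> inclusion, while your own
  // XHR client strips it before calling JSON.parse.
  res.send(")]}',\n" + JSON.stringify({ ok: true }));
});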
