I'm new to Parse and have a question about the security of data in Parse.
It really speeds up development and gives you an off-the-shelf database to store all the data coming from your web/mobile app.
How can we enforce security in Parse?
Can anyone please explain?
Basically, there are two levels of security. First, every connection to the Parse servers is made via SSL; any other way of trying to connect is automatically blocked by their servers. That's something you don't need to worry about, but it's good to know the connection is encrypted.
There is, however, something you can and should do to ensure the integrity and security of your data. It's called an ACL (access control list), and one can be defined for every object. Here is the documentation for it. With ACLs you can define which users have read access to an object, and which users can write/edit/delete it. Additionally, you can define global ACLs via the Data Browser.
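For example, here is a minimal sketch using the Parse JavaScript SDK; it assumes a logged-in user, and the "Note" class and field name are just illustrative:

var note = new Parse.Object("Note");
note.set("text", "only I can see this");

var acl = new Parse.ACL(Parse.User.current()); // read/write for the owner only
acl.setPublicReadAccess(false);  // nobody else can read...
acl.setPublicWriteAccess(false); // ...or write this object
note.setACL(acl);
note.save();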
Another thing worth mentioning is client-enabled class creation. In development it is quite useful to have that enabled, because every time you reference a class that doesn't already exist, it's automatically created for you. Similarly, there is the option to disable the creation of new fields for a class. While useful in development, these things should be disabled in production, unless you really have to have them enabled (which shouldn't be the case if your data structure is good enough).
Additionally, you should read this page by Parse; it's all about security and data integrity.
I run a landscaping company and have multiple crews. I want to provide each one with a custom URL (like mysite.com/xxxx-xxxx-xxxx) that shows their daily schedule. Going to the page will list the name, address and phone number of 5-10 customers for the day.
Is it safe/wise to use a UUID in a URL for semi-private data?
Depends on how safe you want it to be.
Are the UUIDs used for anything else? If not, they are fine for creating random URLs.
But browser history would allow anyone using the same machine to find the URLs. Also, unless you're using HTTPS, a network sniffer could easily see the requested URLs and go to the same page.
Another concern is spider bots. Make sure nothing links to those pages and use a robots.txt to prevent indexing of the site, but you still might find that some of the pages show up on search engines. It might be better to set the UUID in a cookie and check that to determine which employee it is, lest your semi-private pages start showing up on Google.
Whether or not that scheme would work for you depends on your threat model (as well as some implementation details). Without a concrete threat model, it is not possible to give a definitive answer to your question.
I can, however, give you some ideas about potential issues with the solution, so you can determine if they are relevant for your application. This is not a complete list.
On the implementation side of things:
Not all UUID generators are created equal. Ideally, you want to use a generator based on a cryptographically secure RNG, providing a UUID where every byte is chosen at random (see the sketch after this list).
Using the UUID for a database lookup or a similar operation is not necessarily a constant-time operation (and thus there might be timing side-channel attacks unless you implement the lookup yourself)
Make sure your URI does not leak via the Referer header
Some tools attempt to detect 'secret' URLs to protect them from history synchronization or other automatic features. Your scheme will most likely not be detected as 'secret'. It might be better to artificially lengthen your URI and to move your UUID into a query parameter.
You can further reduce attack surface with the usual methods (rate limiting, server hardening, etc.)
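As an illustration of the first point, a cryptographically secure token can be generated in a couple of lines of Node.js (crypto.randomUUID() requires Node 14.17 or newer; the URL shape is just the asker's example):

var crypto = require("crypto");

var token = crypto.randomUUID(); // 122 random bits from a CSPRNG
console.log("https://mysite.com/schedule/" + token);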
On the conceptual side of things:
A single identifier for both identification and authentication is not necessarily a bad thing. However, in most cases there is a need for an identification-only identifier – you must not use the 'secret' UUID in those scenarios
If a 'crew' consists of multiple people: you cannot revoke access for a single crew member
Some software (antivirus, browser, etc.) treats information in URLs as public information, and might upload them without user interaction
Recently I discovered how useful and easy parse.com is.
It really speeds up development and gives you an off-the-shelf database to store all the data coming from your web/mobile app.
But how secure is it? From what I understand, you have to embed your app's private key in the code, thus granting access to the data.
But what if someone is able to recover the key from your app? I tried it myself: it took me 5 minutes to find the private key in a standard APK, and there is also the possibility of building a web app with the private key hard-coded in your JavaScript source, where pretty much anyone can see it.
The only way to secure the data I've found is ACLs (https://www.parse.com/docs/data), but this still means that anyone may be able to tamper with writable data.
Can anyone enlighten me, please?
As with any backend server, you have to guard against potentially malicious clients.
Parse has several levels of security to help you with that.
The first step is ACLs, as you said. You can also change permissions in the Data Browser to prevent unauthorized clients from creating new classes or adding rows or columns to existing classes.
If that level of security doesn't satisfy you, you can proxy your data access through Cloud Functions. This is like creating a virtual application server to provide a layer of access control between your clients and your backend data store.
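As a rough sketch of that proxy idea in parse.com-era Cloud Code (the "Order" class and the function name are illustrative, not part of the Parse API):

Parse.Cloud.define("getMyOrders", function(request, response) {
  if (!request.user) {
    return response.error("You must be logged in.");
  }
  Parse.Cloud.useMasterKey(); // bypass ACLs; we enforce access ourselves
  var query = new Parse.Query("Order");
  query.equalTo("owner", request.user); // only return the caller's own rows
  query.find().then(function(orders) {
    response.success(orders);
  }, function(error) {
    response.error(error);
  });
});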
I've taken the following approach in the case where I just needed to expose a small view of the user data to a web app.
a. Create a secondary object which contains a subset of the secure object's fields.
b. Using ACLs, make the secure object only accessible from an appropriate login.
c. Make the secondary object public read.
d. Write a trigger to keep the secondary object synchronised with updates to the primary.
I also use cloud functions most of the time but this technique is useful when you need some flexibility and may be simpler than cloud functions if the secondary object is a view over multiple secure objects.
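A minimal sketch of step (d), assuming parse.com-era Cloud Code; the class and field names ("SecureProfile", "PublicProfile", "displayName") are illustrative:

Parse.Cloud.afterSave("SecureProfile", function(request) {
  Parse.Cloud.useMasterKey(); // the trigger must write past the ACLs
  var secure = request.object;
  var query = new Parse.Query("PublicProfile");
  query.equalTo("source", secure);
  query.first().then(function(pub) {
    pub = pub || new Parse.Object("PublicProfile");
    pub.set("source", secure);
    pub.set("displayName", secure.get("displayName")); // copy the public subset
    var acl = new Parse.ACL();
    acl.setPublicReadAccess(true); // step (c): public read, nobody writes
    pub.setACL(acl);
    return pub.save();
  });
});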
What I did was the following.
Restrict public read/write for all classes, so the only way to access the class data is through Cloud Code.
Verify that the user is a logged-in user via request.user, that the user session is not null, and that the object id is legit.
Once the user is verified, allow the data to be retrieved using the master key.
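A sketch of that flow in parse.com-era Cloud Code (the "Document" class, the "owner" pointer, and the parameter names are illustrative):

Parse.Cloud.define("getDocument", function(request, response) {
  if (!request.user) { // reject anonymous callers
    return response.error("Not logged in.");
  }
  Parse.Cloud.useMasterKey(); // classes have no public read/write
  var query = new Parse.Query("Document");
  query.get(request.params.objectId).then(function(doc) {
    if (doc.get("owner").id !== request.user.id) {
      return response.error("Not authorized."); // legit id, wrong user
    }
    response.success(doc);
  }, function() {
    response.error("Invalid object id.");
  });
});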
Just keep tight control over your global-level security options (client class creation, etc.), your class-level security options (you can, for instance, stop clients from deleting _Installation entries; it's also common to disable user field creation for all classes), and, most important of all, look out for the ACLs.
Usually I use beforeSave triggers to make sure the ACLs are always correct. So, for instance, _User objects are where the recovery email is located. We don't want other users to be able to see each other's recovery emails, so all objects in the _User class must have read and write set to the user only (with public read false and public write false).
This way only the user itself can tamper with their own row. Other users won't even notice this row exists in your database.
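A sketch of such a beforeSave trigger; note that on the very first save a user has no object id yet, so new users would need the same ACL applied at signup or in an afterSave:

Parse.Cloud.beforeSave(Parse.User, function(request, response) {
  var user = request.object;
  if (user.id) { // existing rows only; a brand-new user has no id yet
    var acl = new Parse.ACL(user);  // read/write for the user itself...
    acl.setPublicReadAccess(false); // ...and nobody else
    acl.setPublicWriteAccess(false);
    user.setACL(acl);
  }
  response.success();
});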
One way to limit this further in some situations, is to use cloud functions. Let's say one user can send a message to another user. You may implement this as a new class Message, with the content of the message, and pointers to the user who sent the message and to the user who will receive the message.
Since the user who sent the message must be able to cancel it, and since the user who received the message must be able to receive it, both need to be able to read this row (so the ACL must have read permissions for both of them). However, we don't want either of them to tamper with the contents of the message.
So you have two alternatives: either you create a beforeSave trigger that checks whether the modifications the users are trying to make to this row are valid before committing them, or you set the ACL of the message so that nobody has write permissions, and you create cloud functions that validate the user and then modify the message using the master key.
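A sketch of the second alternative in parse.com-era Cloud Code (the "Message" class, "sender" pointer, and "cancelled" field are illustrative):

Parse.Cloud.define("cancelMessage", function(request, response) {
  if (!request.user) {
    return response.error("Login required.");
  }
  Parse.Cloud.useMasterKey(); // nobody has write access on Message rows
  var query = new Parse.Query("Message");
  query.get(request.params.messageId).then(function(msg) {
    if (msg.get("sender").id !== request.user.id) {
      return response.error("Only the sender can cancel a message.");
    }
    msg.set("cancelled", true); // the only change this function permits
    msg.save().then(function() {
      response.success(true);
    }, function(error) {
      response.error(error);
    });
  }, function() {
    response.error("Message not found.");
  });
});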
Point is, you have to make these considerations for every part of your application. As far as I know, there's no way around this.
Our team has built a web application using Ruby on Rails. It currently doesn't restrict users from making excessive login requests. We want to ignore a user's login requests for a while after she has made several failed attempts, mainly to defend against automated robots.
Here are my questions:
How do I write a program or script that can make excessive requests to our website? I need it to help test our web application.
How do I restrict a user who made some unsuccessful login attempts within a period? Does Ruby on Rails have built-in solutions for identifying a requester and tracking whether she made any recent requests? If not, is there a general way (not specific to Ruby on Rails) to identify a requester and keep track of her activities? Can I identify a user by IP address, cookies, or some other information I can gather from her machine? We also hope to distinguish normal users (who make infrequent requests) from automated robots (who make requests frequently).
Thanks!
One trick I've seen is to include form fields on the login form that are made invisible to the user through CSS hacks.
Automated systems/bots will still see these fields and may attempt to fill them with data. If you see any data in that field, you immediately know it's not a legit user and can ignore the request.
This is not a complete security solution but one trick that you can add to the arsenal.
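As a sketch, here is an Express-style JavaScript handler standing in for the Rails controller; the hidden field name "website" is arbitrary, and express.urlencoded requires Express 4.16 or newer:

// In the form: <input type="text" name="website" style="display:none">
var express = require("express");
var app = express();
app.use(express.urlencoded({ extended: false })); // parse form bodies

app.post("/login", function(req, res) {
  if (req.body.website) { // humans never see the field, so any value means a bot
    return res.status(400).send("Invalid request");
  }
  // ...proceed with normal authentication...
});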
In regards to #1, there are many automation tools out there that can simulate large-volume posting to a given URL. Depending on your platform, something as simple as wget might suffice, or something as complex (relatively speaking) as a script that asks a UserAgent to post a given request multiple times in succession (again, depending on platform, this can be simple; also depending on the language of choice for task 1).
In regards to #2, consider first the lesser issue of someone just firing multiple attempts manually. Such instances usually share a session (that being the actual webserver session); you should be able to track failed logins based on these session IDs and force an early failure if the volume of failed attempts breaks some threshold. I don't know of any plugins or gems that do this specifically, but even if there is not one, it should be simple enough to create a solution.
If session ID does not work, then a combination of IP and UserAgent is also a pretty safe means, although individuals who use a proxy may find themselves blocked unfairly by such a practice (whether that is an issue or not depends largely on your business needs).
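A sketch of that bookkeeping in plain JavaScript; the one-hour window and five-attempt threshold are arbitrary, and the key could be a session id or IP + UserAgent as described:

var WINDOW_MS = 60 * 60 * 1000; // forget failures after an hour
var MAX_FAILURES = 5;
var failures = {}; // key -> { count, firstAt }

function recordFailure(key) {
  var entry = failures[key];
  if (!entry || Date.now() - entry.firstAt > WINDOW_MS) {
    entry = { count: 0, firstAt: Date.now() }; // start a fresh window
  }
  entry.count += 1;
  failures[key] = entry;
}

function isBlocked(key) {
  var entry = failures[key];
  return !!entry &&
    Date.now() - entry.firstAt <= WINDOW_MS &&
    entry.count >= MAX_FAILURES;
}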
If the attacker is malicious, you may need to look at using firewall rules to block their access, as they are likely going to: a) use a proxy (so IP rotation occurs), b) not use cookies during probing, and c) not play nice with UserAgent strings.
RoR provides means for testing your applications, as described in A Guide to Testing Rails Applications. A simple solution is to write such a test containing a loop that sends 10 (or whatever value you define as excessive) login requests. The framework provides means for sending HTTP requests or faking them.
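Outside the Rails test framework, even a small script will do; for instance, in Node.js (18 or newer, which ships fetch; the URL and field names are placeholders):

async function hammerLogin(url, attempts) {
  for (var i = 0; i < attempts; i++) {
    var res = await fetch(url, {
      method: "POST",
      headers: { "Content-Type": "application/x-www-form-urlencoded" },
      body: "login=someone&password=wrong" + i, // deliberately bad credentials
    });
    console.log("attempt", i + 1, "->", res.status);
  }
}

hammerLogin("http://localhost:3000/login", 10);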
Not many people will abuse your login system, so just remembering the IP addresses of failed logins (for an hour, or whatever period you think is sufficient) would be sufficient and not too much data to store. Unless some hacker has access to a great many IP addresses... But in such situations you'd need more serious security measures, I guess.
I'm in charge of an app that uses the internet to transfer data between sites, and some customers are being awkward about paying, so we need a mechanism that will allow us to cut off the service of non-payers. I'd like to protect against the admin people using firewalls to block off our checks, but conversely I'd like to give some allowance for our company web site disappearing for some reason and not being accessible.
The scheme I'm imagining is:
server makes twice daily check to web page using a URL like:
http://www.ourcompany.com/check.php?myID=GUID&Code=MyCode
This then returns a response that contains either nothing of interest, or the GUID and a value.
GUID=0
That zero indicates that the server should stop operation. To make it work again, the server will check every 5 minutes for the same info, until the returned value matches the expected transformation of the code it passed in.
This scheme makes sense to me, but the question really is how to protect against blocking. Given we know we must have internet access, how long should we continue to operate without being able to get the response from our web server? Is something like 14 days and then we just shut it off anyway the best way?
The solution I used in the end was pretty much as I suggested. Yes, it is defeatable using tools outlined here, but it is better than nothing.
The app checks daily to access a web site that contains a control file encrypted using public key encryption. It decrypts in memory, and if it finds its GUID, then it must match a code. To disable the operation, the code is set to 0 (zero) which will always fail. When disabled, it checks every two minutes to allow rapid restoration. There is also a manual mechanism to generate a code that will work for a week in case of server trouble.
The code will allow up to 14 days without connecting to the server before it takes this as a deliberate attempt to block it. After 10 days, it shows an error message which asks them to contact support.
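A sketch of that loop's core decision in JavaScript; decryptControlFile, MY_GUID, and expectedCode are hypothetical stand-ins for the real pieces described above:

var MY_GUID = "xxxx-xxxx-xxxx";          // this installation's id
var GRACE_MS = 14 * 24 * 3600 * 1000;    // 14-day offline allowance
var lastSuccess = Date.now();

// Hypothetical stand-ins for the real decryption and code-matching logic.
function decryptControlFile(buf) { /* public-key decrypt, in memory */ return {}; }
function expectedCode() { /* local transform of the code */ return "..."; }

async function licensedToRun() {
  try {
    var res = await fetch("http://www.ourcompany.com/control.bin");
    var entries = decryptControlFile(await res.arrayBuffer());
    lastSuccess = Date.now();
    return entries[MY_GUID] === expectedCode(); // a 0 in the file always fails
  } catch (e) {
    // Server unreachable: keep running inside the grace period only.
    return Date.now() - lastSuccess < GRACE_MS;
  }
}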
This method is really easy to circumvent: just use a local DNS server to point www.ourcompany.com to the local machine, or use an HTTP proxy. Then the user can return whatever response they want to the program.
Assuming the user hasn't circumvented the check, how long you are to continue to operate without confirmation is a business decision and not a programming decision.
A user can use a tool such as OWASP WebScarab to change values on the fly and subvert your security model. You need to include something more difficult, such as requiring a secure channel, comparing public keys, and so on.
I'm pretty green still when it comes to web programming, I've spent most of my time on client applications. So I'm curious about the common exploits I should fear/test for in my site.
I'm posting the abbreviated OWASP Top 10 (2007) list here so people don't have to follow another link, and in case the source goes down.
Cross Site Scripting (XSS)
XSS flaws occur whenever an application takes user supplied data and sends it to a web browser without first validating or encoding that content. XSS allows attackers to execute script in the victim's browser which can hijack user sessions, deface web sites, possibly introduce worms, etc.
Injection Flaws
Injection flaws, particularly SQL injection, are common in web applications. Injection occurs when user-supplied data is sent to an interpreter as part of a command or query. The attacker's hostile data tricks the interpreter into executing unintended commands or changing data.
Malicious File Execution
Code vulnerable to remote file inclusion (RFI) allows attackers to include hostile code and data, resulting in devastating attacks, such as total server compromise. Malicious file execution attacks affect PHP, XML and any framework which accepts filenames or files from users.
Insecure Direct Object Reference
A direct object reference occurs when a developer exposes a reference to an internal implementation object, such as a file, directory, database record, or key, as a URL or form parameter. Attackers can manipulate those references to access other objects without authorization.
Cross Site Request Forgery (CSRF)
A CSRF attack forces a logged-on victim's browser to send a pre-authenticated request to a vulnerable web application, which then forces the victim's browser to perform a hostile action to the benefit of the attacker. CSRF can be as powerful as the web application that it attacks.
Information Leakage and Improper Error Handling
Applications can unintentionally leak information about their configuration, internal workings, or violate privacy through a variety of application problems. Attackers use this weakness to steal sensitive data, or conduct more serious attacks.
Broken Authentication and Session Management
Account credentials and session tokens are often not properly protected. Attackers compromise passwords, keys, or authentication tokens to assume other users' identities.
Insecure Cryptographic Storage
Web applications rarely use cryptographic functions properly to protect data and credentials. Attackers use weakly protected data to conduct identity theft and other crimes, such as credit card fraud.
Insecure Communications
Applications frequently fail to encrypt network traffic when it is necessary to protect sensitive communications.
Failure to Restrict URL Access
Frequently, an application only protects sensitive functionality by preventing the display of links or URLs to unauthorized users. Attackers can use this weakness to access and perform unauthorized operations by accessing those URLs directly.
The Open Web Application Security Project
-Adam
OWASP keeps a list of the Top 10 web attacks to watch out for, in addition to a ton of other useful security information for web development.
These three are the most important:
Cross Site Request Forgery
Cross Site Scripting
SQL injection
Everyone's going to say "SQL Injection", because it's the scariest-sounding vulnerability and the easiest one to get your head around. Cross-Site Scripting (XSS) is going to come in second place, because it's also easy to understand. "Poor input validation" isn't a vulnerability, but rather an evaluation of a security best practice.
Let's try this from a different perspective. Here are features that, when implemented in a web application, are likely to mess you up:
Dynamic SQL (for instance, UI query builders). By now, you probably know that the only reliably safe way to use SQL in a web app is to use parameterized queries, where you explicitly bind each parameter in the query to a variable (see the first sketch after this list). The places where I see web apps most frequently break this rule is when the malicious input isn't an obvious parameter (like a name), but rather a query attribute. An obvious example is the iTunes-like "Smart Playlist" query builders you see on search sites, where things like where-clause operators are passed directly to the backend. Another great rock to turn over is table column sorts, where you'll see things like DESC exposed in HTTP parameters.
File upload. File upload messes people up because file pathnames look suspiciously like URL pathnames, and because web servers make it easy to implement the "download" part just by aiming URLs at directories on the filesystem. 7 out of 10 upload handlers we test allow attackers to access arbitrary files on the server, because the app developers assumed the same permissions were applied to the filesystem "open()" call as are applied to queries.
Password storage. If your application can mail me back my raw password when I lose it, you fail. There's a single safe, reliable answer for password storage, which is bcrypt; if you're using PHP, you probably want phpass (see the second sketch after this list).
Random number generation. A classic attack on web apps: reset another user's password, and, because the app is using the system's "rand()" function, which is not crypto-strong, the password is predictable. This also applies anywhere you're doing cryptography. Which, by the way, you shouldn't be doing: if you're relying on crypto anywhere, you're very likely vulnerable.
Dynamic output. People put too much faith in input validation. Your chances of scrubbing user inputs of all possible metacharacters, especially in the real world, where metacharacters are necessary parts of user input, are low. A much better approach is to have a consistent regime of filtering database outputs and transforming them into HTML entities, like &quot;, &gt;, and &lt;. Rails will do this for you automatically.
Email. Plenty of applications implement some sort of outbound mail capability that enable an attacker to either create an anonymous account, or use no account at all, to send attacker-controlled email to arbitrary email addresses.
Beyond these features, the #1 mistake you are likely to make in your application is to expose a database row ID somewhere, so that user X can see data for user Y simply by changing a number from "5" to "6".
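To make the first and last points concrete, here is a sketch using node-postgres placeholders, with an ownership check folded into the query (the table and column names are illustrative; a minimal sketch, not a definitive implementation):

var { Pool } = require("pg");
var pool = new Pool(); // connection settings come from environment variables

async function getInvoice(invoiceId, currentUserId) {
  // $1/$2 are bound server-side; user input never becomes SQL text
  var result = await pool.query(
    "SELECT * FROM invoices WHERE id = $1 AND owner_id = $2",
    [invoiceId, currentUserId]
  );
  return result.rows[0] || null; // null instead of someone else's row
}

And for the password-storage point, a sketch with the bcryptjs package (the cost factor of 10 is a common default, not a value from the answer):

var bcrypt = require("bcryptjs");

async function storePassword(plain) {
  return bcrypt.hash(plain, 10); // salted, slow hash; store only this
}

async function checkPassword(plain, storedHash) {
  return bcrypt.compare(plain, storedHash); // never decrypt, only compare
}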
bool UserCredentialsOK(User user)
{
    if (user.Name == "modesty")
    {
        return false; // hard-coded deny for this particular user
    }
    else
    {
        // perform other checks here
        return true;
    }
}
SQL INJECTION ATTACKS. They are easy to avoid but all too common.
NEVER EVER EVER EVER (did I mention "ever"?) trust user information passed to you from form elements. If your data is not vetted before being passed into other logical layers of your application, you might as well give the keys to your site to a stranger on the street.
You do not mention what platform you are on, but if you're on ASP.NET, get a start with good ol' Scott Guthrie and his article "Tip/Trick: Guard Against SQL Injection Attacks".
After that, you need to consider what type of data you will permit users to submit into, and eventually out of, your database. If you permit HTML to be inserted and later presented, you are wide open to Cross Site Scripting attacks (known as XSS).
Those are the two that come to mind for me, but our very own Jeff Atwood had a good article at Coding Horror with a review of the book "19 Deadly Sins of Software Security".
Most people here have mentioned SQL Injection and XSS, which is kind of correct, but don't be fooled - the most important thing you need to worry about as a web developer is INPUT VALIDATION, which is where XSS and SQL Injection stem from.
For instance, if you have a form field that will only ever accept integers, make sure you're implementing something at both the client-side AND the server-side to sanitise the data.
Check and double-check any input data, especially if it's going to end up in an SQL query. I suggest building an escaper function and wrapping it around anything going into a query. For instance (PHP, where myescapefunc is your own escaper):

$query = "SELECT field1, field2 FROM table1 WHERE field1 = '" . myescapefunc($userinput) . "'";
Likewise, if you're going to display any user-inputted information on a webpage, make sure you've stripped any <script> tags or anything else that might result in JavaScript execution (such as onLoad=, onMouseOver=, and other event attributes on tags).
Here is also a short little presentation on security by one of WordPress's core developers:
Security in WordPress
It covers all of the basic security problems in web apps.
The most common are probably database injection attacks and cross-site scripting attacks; mainly because those are the easiest to accomplish (that's likely because those are the ones programmers are laziest about).
You can see even on this site that the most damaging things you'll be looking out for involve code injection into your application, so XSS (Cross Site Scripting) and SQL injection (#Patrick's suggestions) are your biggest concerns.
Basically you're going to want to make sure that if your application allows for a user to inject any code whatsoever, it's regulated and tested to be sure that only things you're sure you want to allow (an html link, image, etc) are passed, and nothing else is executed.
SQL Injection. Cross Site Scripting.
Using stored procedures and/or parameterized queries will go a long way in protecting you from SQL injection. Also do NOT have your web app access the database as sa or dbo - set up a standard user account and set the permissions.
As for XSS (cross site scripting), ASP.NET has some built-in protections. The best thing is to filter input using validation controls and Regex.
I'm no expert, but from what I learned so far the golden rule is not to trust any user data (GET, POST, COOKIE). Common attack types and how to save yourself:
SQL Injection Attack: Use prepared queries
Cross Site Scripting: Send no user data to the browser without filtering/escaping it first. This also includes user data stored in the database, which originally came from users.
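As a sketch of that second point in plain JavaScript (most frameworks ship an equivalent helper):

function escapeHtml(s) {
  return String(s)
    .replace(/&/g, "&amp;")   // must come first
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

// Usage: always escape on output, even for values that came from the database.
var comment = "<script>alert(1)</script>";
console.log("<p>" + escapeHtml(comment) + "</p>");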