Cryptography task for a data repository - security

The application provides the user with the option of downloading existing documents as well as uploading new ones. Before being moved to the file system, each new document is divided into N segments (N ≥ 4, a randomly generated value), and each segment is moved to a different directory in order to further increase system security and reduce the possibility of document theft. The confidentiality and integrity of each segment must be adequately protected, so that only the user to whom the document belongs can access their property and see its contents. The application should detect any unauthorized modification of stored documents and notify the user about it when they try to download such documents.
How can I write code for this? I tried MapReduce, but that is not a solution here; I need help.
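As a starting point, here is a minimal sketch in TypeScript on Node, assuming a per-user 32-byte AES key has already been derived somewhere (for example from the user's login credentials). The function names, the N range, and the docId binding are illustrative assumptions, not a prescribed design. AES-256-GCM covers both confidentiality and integrity per segment, and a failed tag check at download time is exactly the "unauthorized modification" signal the task asks for:

```typescript
import { createCipheriv, createDecipheriv, randomBytes, randomInt } from "node:crypto";

// Pick N >= 4 at random and split the plaintext into N roughly equal segments.
function splitIntoSegments(data: Buffer): Buffer[] {
  const n = randomInt(4, 9); // N in [4, 8]; range is an arbitrary choice
  const size = Math.ceil(data.length / n);
  const segments: Buffer[] = [];
  for (let i = 0; i < n; i++) {
    segments.push(data.subarray(i * size, (i + 1) * size));
  }
  return segments;
}

// Encrypt one segment with AES-256-GCM. The auth tag protects integrity, so any
// modification of the stored segment makes decryption throw later.
function encryptSegment(segment: Buffer, key: Buffer, docId: string, index: number) {
  const iv = randomBytes(12); // fresh IV per segment, stored alongside it
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  cipher.setAAD(Buffer.from(`${docId}:${index}`)); // bind segment to its document and position
  const ciphertext = Buffer.concat([cipher.update(segment), cipher.final()]);
  return { iv, ciphertext, tag: cipher.getAuthTag() };
}

function decryptSegment(
  enc: { iv: Buffer; ciphertext: Buffer; tag: Buffer },
  key: Buffer, docId: string, index: number,
): Buffer {
  const decipher = createDecipheriv("aes-256-gcm", key, enc.iv);
  decipher.setAAD(Buffer.from(`${docId}:${index}`));
  decipher.setAuthTag(enc.tag);
  // Throws on any tampering -- catch it and warn the user instead of returning data.
  return Buffer.concat([decipher.update(enc.ciphertext), decipher.final()]);
}
```

Each encrypted segment (IV, ciphertext, tag) is then written to its own directory; download decrypts segments 0..N-1 in order and concatenates them, notifying the user if any tag check throws.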

Related

Encrypt all user data in my web application

This is not a typical StackOverflow question, as it is quite specific and bound to my current project. Given my project (GitHub link), I would like to encrypt or handle all user data in a way that prevents me, as the service provider, from viewing the data of specific users. This would probably not be feasible in a typical web app with a relational SQL database. I am using Redis, with data that is basically structured as follows:
Users can view their data filtered by two dimensions: a time range and a domain. These are further grouped by a third dimension, multiple charts, so there is data for countries, top landing pages, etc. (it's a web analytics app). Internally, of course, I also need the user baked in as a dimension in the key that holds a chart's data, and of course there is some indexing going on.
Now here is the idea: I could hash the access keys for these single charts - I am only doing direct key access anyway, no scanning (filtering over keys). Furthermore, I would only save the hashed username in the database, so the username becomes the missing piece of information that I don't have and that is needed to retrieve the payloads.
This would leave me with the cleartext payloads, which represent specific charts for specific user selections (yes, I only save user data in aggregated form, by the way), but I would have no reasonable way to map a single chart to a specific user or domain. Given that I have ~70 integrated users at the moment, it would not be feasible to manually map data points to specific users (though I could still see all the domains the users collectively use).
Of course, this relies on the username being somewhat of a secret. I would only save the hashed username to the database and only handle the cleartext username in RAM. I can still greet the user, since the cleartext username is saved in a cookie :-)
With usernames being short and having almost no entropy, I could of course brute-force my own database to regain the missing links and access to all the data of individual users. But before doing that, the more obvious way to "cheat" would be to just run different software (without the hashing) on the server while still claiming everything is encrypted. So my point is that the presented solution is good enough for a hosted service.
Does this sound plausible? Would such an approach add an additional layer of security or be meaningless because it is too easy to circumvent?
In my opinion, I could compare this to locking a bicycle with a very cheap lock. Even if the lock is easily broken, it has a strong symbolic meaning: someone who breaks the lock is doing something worse than stealing a bicycle that has no lock at all. So even if it is not possible to fully protect user data from a hosting provider, it is possible to make the work of accessing it "dirtier", and thus socially and legally less acceptable. Does this make sense? :-)
So my question is: security by obscurity or sound approach?
Cheers!
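For illustration, a minimal sketch of the key-hashing idea in TypeScript with Node's crypto. HMAC_SECRET is a hypothetical server-side pepper, a strengthening of the plain hash described above that blunts the brute-force concern raised in the question; the composite key layout is also made up:

```typescript
import { createHmac } from "node:crypto";

// Derive an opaque Redis key from the cleartext username (held only in RAM and in
// the user's cookie) plus the chart dimensions. The stored keys alone then no
// longer reveal which user or domain a chart belongs to.
// HMAC_SECRET is a hypothetical server-side pepper; with a plain unkeyed hash,
// short low-entropy usernames can be brute-forced straight from the key hashes.
function chartKey(username: string, domain: string, range: string, chart: string): string {
  return createHmac("sha256", process.env.HMAC_SECRET!)
    .update(`${username}:${domain}:${range}:${chart}`)
    .digest("hex");
}

// Direct key access only -- no SCAN over keys is needed, matching the access
// pattern described in the question.
const key = chartKey("alice", "example.com", "2024-01", "countries");
```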

Use localStorage instead of the database to avoid requests to the server

I am creating an application in which users can post information as well as mark someone else's publication as a favorite. When the user performs either of these actions, I store the necessary information in the database, specifically in a document holding the information linked to the user (name, surname, telephone number, etc.).
When the user logs in, I fetch all that information with a single query and keep it in localStorage, reducing queries to the database. In a different section the user can then see the publications they have created as well as the ones they have marked as favorites, very similar to what we commonly see in an online store.
I'm using Angular 6, Node.js and MongoDB. My question is the following:
Is this a correct and effective way to do it?
Should I save it in the database and then perform the corresponding query to obtain it?
[Screenshot of the localStorage contents]
As you can see, I also save the token that I use to authenticate the user's queries, and obviously I do not store the password there. I would like your opinions.
You should never consider localStorage an alternative to the database.
At some point you may have a huge amount of data, and the browser will struggle or crash trying to load it.
Fetch the data you need from the server.
For a small, temporary amount of data you can consider localStorage. Don't pull all the data down in a single query just to save database operations; databases are built to handle that for you.
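A small sketch of that advice in TypeScript for the browser side: cache only the small profile object after login and treat the server as the source of truth. The /api/profile endpoint and the field names are hypothetical:

```typescript
interface Profile { name: string; surname: string; phone: string; }

// Read the cached profile if present; otherwise fetch it once and cache it.
async function getProfile(): Promise<Profile> {
  const cached = localStorage.getItem("profile");
  if (cached) {
    return JSON.parse(cached) as Profile; // cheap read, no server round trip
  }
  const res = await fetch("/api/profile", { credentials: "include" }); // hypothetical endpoint
  const profile: Profile = await res.json();
  localStorage.setItem("profile", JSON.stringify(profile)); // small payload only
  return profile;
}

// On logout, clear everything -- including the auth token mentioned above.
function onLogout(): void {
  localStorage.clear();
}
```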

CouchDB simple document design: need feedback

I am in the process of designing document storage for CouchDB and would really appreciate some feedback. These documents are to represent "assets".
These databases will also be synced locally to the browser via pouchdb.
Requirements:
Each user can have many assets
Users can share assets with others by providing them with a URI such as (xyz.com/some_id). Once users click this URI, they are considered to have "joined" and are now part of a group.
Group users can share assets of their own with other members of the group.
My design
Each user will have his/her own database to store assets - let's call it "user". Each user DB will be prefixed with his/her unique ID.
Shared assets will be stored in a separate database - let's call it "group". Shared assets are DUPLICATED here and have an additional field for userId (to indicate the creator).
Group database is prefixed with a unique ID just like a user database is prefixed with one too.
The reason for storing group assets in a separate database is that when PouchDB runs locally, it only knows about the current user and his/her shared assets. It does not know about other users and should not query those other users' databases.
Any input would be GREATLY appreciated.
Seems like a great design. Another alternative would be to just have one database per group ("role"), and then replicate from a user's group(s) into their local PouchDB.
That might get hairy, though, when it comes time to replicate back to the server, because you're going to have to filter the documents as they leave the user's local database, depending on which group-database they belong to. Still, you're going to have to do that on the server side anyway with your current design.
Either way is fine, honestly. The only downside of your current approach is that documents are duplicated on the server side (once per user-db and once per group-db). On the other hand, your client code becomes dead-simple, because you don't have to do any filtered replication. If you have enough space on your server not to worry about it, then I would definitely go with your approach. :)
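For concreteness, a sketch of the one-database-per-group alternative with filtered replication back to the server, in TypeScript with PouchDB. The database names, the server URL, and the groupId field are hypothetical:

```typescript
import PouchDB from "pouchdb";

const local = new PouchDB("user_abc123");
const remoteUser = new PouchDB("https://couch.example.com/user_abc123");
const remoteGroup = new PouchDB("https://couch.example.com/group_xyz789");

// Pull everything the user may see: their own docs plus the group's shared assets.
local.replicate.from(remoteUser, { live: true, retry: true });
local.replicate.from(remoteGroup, { live: true, retry: true });

// Push back with filters so each document returns only to the database it
// belongs to -- this is the "hairy" part mentioned above.
local.replicate.to(remoteGroup, {
  live: true,
  retry: true,
  filter: (doc: any) => doc.groupId === "xyz789", // groupId is a hypothetical field
});
local.replicate.to(remoteUser, {
  live: true,
  retry: true,
  filter: (doc: any) => !doc.groupId, // private docs have no group marker
});
```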

parse.com security

Recently I discovered how useful and easy to use parse.com is.
It really speeds up the development and gives you an off-the-shelf database to store all the data coming from your web/mobile app.
But how secure is it? From what I understand, you have to embed your app private key in the code, thus granting access to the data.
But what if someone is able to recover the key from your app? I tried it myself: it took me 5 minutes to find the private key in a standard APK, and it is also possible to build a web app with the private key hard-coded in the JavaScript source, where pretty much anyone can see it.
The only way I've found to secure the data is ACLs (https://www.parse.com/docs/data), but this still means that anyone may be able to tamper with writable data.
Can anyone enlighten me, please?
As with any backend server, you have to guard against potentially malicious clients.
Parse has several levels of security to help you with that.
The first step is ACLs, as you said. You can also change permissions in the Data Browser to prevent unauthorized clients from creating new classes or adding rows or columns to existing classes.
If that level of security doesn't satisfy you, you can proxy your data access through Cloud Functions. This is like creating a virtual application server to provide a layer of access control between your clients and your backend data store.
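A minimal sketch of such a proxy, written in the modern parse-server Cloud Code dialect (the original parse.com used a callback style); getMyData and the Score class are hypothetical names:

```typescript
// Clients call this function instead of querying the class directly,
// so all access control lives server-side.
Parse.Cloud.define("getMyData", async (request) => {
  if (!request.user) {
    throw new Parse.Error(Parse.Error.INVALID_SESSION_TOKEN, "Login required");
  }
  const query = new Parse.Query("Score"); // hypothetical class
  query.equalTo("owner", request.user);   // only rows belonging to the caller
  return query.find({ useMasterKey: true });
});
```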
I've taken the following approach in the case where I just needed to expose a small view of the user data to a web app.
a. Create a secondary object which contains a subset of the secure object's fields.
b. Using ACLs, make the secure object only accessible from an appropriate login
c. Make the secondary object public read
d. Write a trigger to keep the secondary object synchronised with updates to the primary.
I also use cloud functions most of the time but this technique is useful when you need some flexibility and may be simpler than cloud functions if the secondary object is a view over multiple secure objects.
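A sketch of steps a-d as a trigger, in the parse-server Cloud Code dialect; SecureProfile, PublicProfile, and displayName are hypothetical names:

```typescript
// Keeps a public-read PublicProfile row in sync with its owner-only
// SecureProfile source, copying only the safe fields.
Parse.Cloud.afterSave("SecureProfile", async (request) => {
  const secure = request.object;
  const query = new Parse.Query("PublicProfile");
  query.equalTo("source", secure);
  let pub = await query.first({ useMasterKey: true });
  if (!pub) {
    pub = new Parse.Object("PublicProfile");
    pub.set("source", secure); // pointer back to the secure object
    const acl = new Parse.ACL();
    acl.setPublicReadAccess(true); // step (c): world-readable, nobody can write
    pub.setACL(acl);
  }
  pub.set("displayName", secure.get("displayName")); // step (a): safe subset only
  await pub.save(null, { useMasterKey: true }); // step (d): trigger keeps it in sync
});
```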
What I did was the following.
Restrict public read/write for all classes. The only way to access the class data is then through Cloud Code.
Verify that the user is logged in via request.user, that the user session is not null, and that the object ID is legitimate.
Once the user is verified, allow the data to be retrieved using the master key.
Just keep tight control of your Global Level Security options (client class creation, etc.), your Class Level Security options (you can, for instance, prevent clients from deleting _Installation entries; it's also common to disable user field creation for all classes), and most important of all, look out for the ACLs.
Usually I use beforeSave triggers to make sure the ACLs are always correct. So, for instance, _User objects are where the recovery email is located. We don't want other users to be able to see each other's recovery emails, so all objects in the _User class must have read and write set to the user only (with public read false and public write false).
This way only the user itself can tamper with their own row. Other users won't even notice this row exists in your database.
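That beforeSave guard is short; a sketch in the same parse-server dialect:

```typescript
// Force every _User row to be readable and writable only by its owner,
// regardless of what ACL the client sent along.
Parse.Cloud.beforeSave(Parse.User, (request) => {
  const acl = new Parse.ACL(request.object); // grants read/write to this user only
  acl.setPublicReadAccess(false);
  acl.setPublicWriteAccess(false);
  request.object.setACL(acl);
});
```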
One way to limit this further in some situations is to use cloud functions. Let's say one user can send a message to another user. You may implement this as a new class Message, with the content of the message and pointers to the user who sent it and the user who will receive it.
Since the user who sent the message must be able to cancel it, and since the user who received the message must be able to receive it, both need to be able to read this row (so the ACL must have read permissions for both of them). However, we don't want either of them to tamper with the contents of the message.
So you have two alternatives: either you create a beforeSave trigger that checks whether the modifications the users are trying to make to this row are valid before committing them, or you set the ACL of the message so that nobody has write permissions, and you create cloud functions that validate the user and then modify the message using the master key.
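A sketch of the second alternative: the Message ACL grants read-only access to both parties, and a cloud function (cancelMessage, a hypothetical name, as are the sender and cancelled fields) performs validated writes with the master key:

```typescript
// Nobody has write access to Message rows; mutations go through this function,
// which validates the caller before using the master key.
Parse.Cloud.define("cancelMessage", async (request) => {
  const query = new Parse.Query("Message");
  const msg = await query.get(request.params.messageId, { useMasterKey: true });
  if (!request.user || msg.get("sender").id !== request.user.id) {
    throw new Parse.Error(Parse.Error.OPERATION_FORBIDDEN, "Only the sender can cancel");
  }
  msg.set("cancelled", true);
  return msg.save(null, { useMasterKey: true });
});
```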
Point is, you have to make these considerations for every part of your application. As far as I know, there's no way around this.

Secure file server

Introduction
I want to create a Java web application for storing and backing up user files, similar to Dropbox. One of the interesting Dropbox features is that it can detect whether a certain file already exists on the server. For example, if one user uploads a file to the server, another user who tries to upload the same file will not need to upload the same file content; the server only needs to mark that this user has the same file. This saves bandwidth and space and increases speed in many ways.
The most basic solution to this problem is to use a file hash string, e.g. SHA-1, MD5, etc., to identify the file. The client software checks whether a certain hash exists on the server. If it does, the client can skip the upload and mark that the user has the same file.
Problem
The web application is implemented in a REST architecture so that users can easily write their own client software to upload their files. For security reasons, SSL is enabled for all transactions. But my biggest security concern is users faking that they have a file without actually owning it, if I use SHA-1 or any other standard hash algorithm. SSL and encryption cannot prevent this: if a user manages to get the hash string (the MD5 and SHA-1 hashes of many files can be found by googling), he can mark that he has the file using the REST service.
So one possible solution is for the server to request a set of random bytes from the file, as well as the hash of the whole file. Here are the example steps:
Client checks whether a certain hash exists on the server. The server returns the required positions of random bytes if the file already exists.
Client sends the random bytes as requested if the server has the file. Client software will not be able to respond without having the actual file.
In this way, it saves bandwidth as well as ensuring that users actually own the files they claim to upload (a client-side sketch of the byte challenge follows below).
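A sketch of steps 1 and 2 in TypeScript on Node; the offset count, paths, and the transport between client and server are illustrative assumptions (see the answer below for stronger variants):

```typescript
import { open } from "node:fs/promises";
import { randomInt } from "node:crypto";

// Server side: pick random positions within the claimed file size.
function pickOffsets(fileSize: number, count = 16): number[] {
  return Array.from({ length: count }, () => randomInt(0, fileSize));
}

// Client side: read exactly those bytes from the local copy and send them back.
// Without the actual file there is no way to produce them.
async function readChallengeBytes(path: string, offsets: number[]): Promise<Buffer> {
  const fh = await open(path, "r");
  const out = Buffer.alloc(offsets.length);
  try {
    for (let i = 0; i < offsets.length; i++) {
      await fh.read(out, i, 1, offsets[i]); // one byte per requested position
    }
  } finally {
    await fh.close();
  }
  return out;
}
```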
Question
I am no expert in web security, so I have no idea whether this is a good idea. I have read articles saying that implementing your own fancy process can reduce security strength, because the scheme has not been vetted and the extra information it exposes may provide a way to crack it.
Does anyone have any comments on the process?
Will it reduce the security?
Does anyone have an idea for solving this problem differently?
I understand that there might not be an exact answer to this question, but I would like to hear from anyone who has encountered the same problem and found a good solution.
Rather than asking the client to upload some random bytes of the file's contents, it may be better to ask the client to upload the hash of a random region of the file. That way you can use a wider range of sizes that you ask the client to verify.
Better yet, though, may be to send the client a random number and require the client to compute an HMAC of the entire file's contents using that number as the key. This is more computationally-expensive since the server must compute the HMAC too, but it verifies that the client has the entire file, not just a small portion of it.
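A sketch of that HMAC challenge in TypeScript on Node; the transport between client and server is out of scope and the names are illustrative:

```typescript
import { createHmac, randomBytes, timingSafeEqual } from "node:crypto";
import { createReadStream } from "node:fs";

// Client side: prove possession of the whole file by keying an HMAC with the
// server's one-time challenge and streaming the entire file through it.
function hmacOfFile(path: string, challengeKey: Buffer): Promise<Buffer> {
  return new Promise((resolve, reject) => {
    const hmac = createHmac("sha256", challengeKey);
    createReadStream(path)
      .on("data", (chunk) => hmac.update(chunk))
      .on("end", () => resolve(hmac.digest()))
      .on("error", reject);
  });
}

// Server side: issue a fresh 32-byte challenge per claim, recompute the HMAC
// over the stored copy, and compare in constant time.
async function verifyClaim(storedPath: string, clientMac: Buffer, challenge: Buffer): Promise<boolean> {
  const expected = await hmacOfFile(storedPath, challenge);
  return expected.length === clientMac.length && timingSafeEqual(expected, clientMac);
}

const challenge = randomBytes(32); // sent to the client with the "file exists" response
```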
One unavoidable side effect of this hash feature, even with a verification scheme, is that it reveals that a copy of the file already exists somewhere on the server. That by itself may be sensitive information.
For the most stringent privacy protection, you should forego this feature and make each user upload their own copy of the file. You can use hash comparison on the server to avoid storing multiple copies of the file, transparently to the clients.
