What is the best approach to store settings in j2me? - java-me

I want to store the settings of my J2ME application. When the application starts, the settings will be loaded. The two ways I have found are:
RecordStore
Reading and writing text files
What I think is:
The risk of using RecordStore is that if the user deletes the RMS data, I will lose the settings. A text file can be stored in the jar, so the user can't delete it.
Which of these two is the better approach, considering performance and the risk above, or is there a better approach than both?
[Edit]
Regarding RMS files:
Suppose the user accesses the phone's storage from a computer; the RMS files are then visible. Regarding text files, if I store the txt file in the app's resources, it will be part of the jar file and the user won't be able to access it. When the application is uninstalled, that file will also get deleted, as it is part of the jar. Considering this as well, what is the best approach?

RMS is definitely the best solution.
There is no way for a user to delete an app's RMS data unless you provide one.
Also, if the user uninstalls your app, the data will be deleted at the same time: this is a good thing.
Using text files is a bad idea: if the file lives on the filesystem, the user can access and delete it at any time, and you have no way of deleting it when the user uninstalls; if you instead bundle it inside the jar as a resource, it is read-only at runtime, so you cannot write updated settings back to it.
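If it helps, here is a minimal sketch of keeping the settings in a single RMS record (the store name "appsettings" and the flat key=value string are illustrative choices, not anything prescribed by the API):

    import javax.microedition.rms.RecordStore;
    import javax.microedition.rms.RecordStoreException;

    /**
     * Minimal settings helper backed by RMS. The whole settings string
     * (e.g. "lang=en;sound=on") is kept in record #1 of one record store,
     * which is only ever written by save().
     */
    public class Settings {

        private static final String STORE_NAME = "appsettings";

        /** Saves the settings string, creating the store and record if needed. */
        public static void save(String settings) throws RecordStoreException {
            RecordStore rs = RecordStore.openRecordStore(STORE_NAME, true);
            try {
                byte[] data = settings.getBytes();
                if (rs.getNumRecords() == 0) {
                    rs.addRecord(data, 0, data.length);    // first run: creates record #1
                } else {
                    rs.setRecord(1, data, 0, data.length); // overwrite existing record
                }
            } finally {
                rs.closeRecordStore();
            }
        }

        /** Loads the settings string, or returns the given default on first run. */
        public static String load(String defaultSettings) throws RecordStoreException {
            RecordStore rs = RecordStore.openRecordStore(STORE_NAME, true);
            try {
                if (rs.getNumRecords() == 0) {
                    return defaultSettings;
                }
                return new String(rs.getRecord(1));
            } finally {
                rs.closeRecordStore();
            }
        }
    }

A real app would more likely serialize individual fields with DataOutputStream over a ByteArrayOutputStream, but the RecordStore calls stay the same.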

Related

Saving records in a protected folder .htaccess, secure?

I have done quite a lot of searching but am not really able to find a clear answer. I'm wondering if storing simple generated record documents (.txt files, e.g. purchase records) in a protected directory with deny from all is secure? Obviously, anyone going directly to the file in the browser will not be able to access it, but I wonder if the information in these text files is visible in other ways?
Why store them in a place accessible by the browser? Can't you place the files somewhere else on the server, in a directory that is not served by the HTTP server?
I assume you would like to access them later through the browser and if that’s the case, can’t you create those reports on the fly each time a request is made for them? I have seen servers littered with saved reports when the best solution would have been to generate the reports again by retrieving data from a database. Please do not take this as an insult, but if my assumption is correct, try to consider another solution.
Technically, the answer to your question is “those files are not accessible if the server is configured correctly, you have no bugs in the code, etc.”
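If the records do need to be reachable through the browser later, one option is to keep them in a directory the web server never exposes and stream them through an authenticated endpoint. A rough sketch follows, written as a Java servlet purely for illustration (the directory, the "id" parameter and the session check are assumptions; adapt the idea to whatever stack the site actually runs on):

    import java.io.*;
    import javax.servlet.ServletException;
    import javax.servlet.http.*;

    /** Streams a purchase record from a directory outside the web root,
     *  after an application-level permission check. */
    public class RecordDownloadServlet extends HttpServlet {

        private static final File RECORD_DIR = new File("/var/app-data/records");

        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            if (req.getSession(false) == null) {          // not logged in
                resp.sendError(HttpServletResponse.SC_FORBIDDEN);
                return;
            }
            File file = new File(RECORD_DIR, req.getParameter("id") + ".txt");
            // Reject path traversal such as id=../../etc/passwd
            if (!file.getCanonicalPath().startsWith(RECORD_DIR.getCanonicalPath())
                    || !file.isFile()) {
                resp.sendError(HttpServletResponse.SC_NOT_FOUND);
                return;
            }
            resp.setContentType("text/plain");
            try (InputStream in = new FileInputStream(file);
                 OutputStream out = resp.getOutputStream()) {
                byte[] buf = new byte[8192];
                int n;
                while ((n = in.read(buf)) != -1) {
                    out.write(buf, 0, n);
                }
            }
        }
    }

The same pattern works for reports generated on the fly: query the database and write the result to the response instead of reading a saved file.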

What's the best way to implement secure storage of user-uploaded files on a website?

I'm working on a web platform which will contain some rather sensitive personal information, and obviously this raises the question of how secure this data will be. Users can upload some files, and I was wondering what the best way is to store them securely.
I've done several searches, and one of the pages which I found quite useful was https://stormpath.com/blog/how-to-gracefully-store-user-files (I'm not using Stormpath btw, just looking for implementation ideas) which said that using Cloud services is one of the best solutions as their security is already quite tight. The caveat I've found in other discussions is that your data is stored by a third-party, and if you use Amazon-managed encryption keys, they can theoretically view your data.
Yet, overall, one thing I don't quite understand - I guess because of my total lack of expertise in the domain - is why storing files elsewhere than on your own server would be more secure. I've tried imagining a few different scenarios:
1- files stored on the webserver with no encryption
-> obvious issue if someone breaks into the server
2- files stored on the webserver, encrypted with a global key, stored outside of the "public" folder
-> if someone manages to get access to the server, they could get the files but also find the encryption key (and whatever they want actually) and access the files?
3- files stored on a 3rd-party cloud provider, encrypted with a global key, stored outside of the "public" folder
-> well... same issue? If someone gets access to the server, they can get the encryption key, and I guess it wouldn't be difficult for them to get the file which holds the credentials to the cloud account, and hence get the files?
Overall, it seems that whenever your web server gets compromised, your data is basically compromised as well? The only solution would be to encrypt the files with a key only known to the user, but in practice this comes with a lot of "usability" cons: data is irrecoverable if the user forgets the key, the user needs to keep a long encryption key safe on top of their password, etc.
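Just to make sure I understand that last option, this is roughly what I imagine it would look like (standard javax.crypto calls; the parameters and the passphrase handling are only my guesses, not a vetted design):

    import java.security.SecureRandom;
    import javax.crypto.Cipher;
    import javax.crypto.SecretKeyFactory;
    import javax.crypto.spec.GCMParameterSpec;
    import javax.crypto.spec.PBEKeySpec;
    import javax.crypto.spec.SecretKeySpec;

    /** Encrypts file bytes with a key derived from a user passphrase.
     *  The server never stores the passphrase or the derived key, so if
     *  the passphrase is lost the data is unrecoverable by design. */
    public class UserKeyEncryption {

        public static byte[] encrypt(char[] passphrase, byte[] plainBytes) throws Exception {
            SecureRandom rng = new SecureRandom();
            byte[] salt = new byte[16];
            byte[] iv = new byte[12];
            rng.nextBytes(salt);
            rng.nextBytes(iv);

            // Derive a 256-bit AES key from the passphrase only.
            SecretKeyFactory kdf = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256");
            byte[] keyBytes = kdf.generateSecret(
                    new PBEKeySpec(passphrase, salt, 200_000, 256)).getEncoded();

            Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
            cipher.init(Cipher.ENCRYPT_MODE,
                    new SecretKeySpec(keyBytes, "AES"),
                    new GCMParameterSpec(128, iv));
            byte[] cipherBytes = cipher.doFinal(plainBytes);

            // Store salt + iv + ciphertext together; none of them reveal the key.
            byte[] out = new byte[salt.length + iv.length + cipherBytes.length];
            System.arraycopy(salt, 0, out, 0, salt.length);
            System.arraycopy(iv, 0, out, salt.length, iv.length);
            System.arraycopy(cipherBytes, 0, out, salt.length + iv.length, cipherBytes.length);
            return out;
        }
    }

If that's roughly right, then the usability cons above apply exactly: forget the passphrase and nothing, not even the server, can decrypt the file.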
Any comments to shed some light on this topic for me?
Thanks very much

Is CouchDB/PouchDB a viable solution for my project? Any advice is welcome

I have been reading up a lot about CouchDB (and PouchDB) and am still unsure what the best option would be for a project of mine.
I do have a possible way to solve the project in my head based on what I have read so far, but I am unsure about things like performance and would love to get some insights. Or perhaps there's a better place to ask this question? Please let me know if that's the case! (Already tried their IRC channel and the mailing list, but no answers there as of yet)
So the project is basically an 'offline-first' mobile application. The users are device installers. They get assigned a few locations and devices to install every day. They need to walk around buildings and update the data (e.g. device X has been installed at location Y, or property A of device B at location C has been changed to D, etc.).
Some more info about the basic data.
There are users, they are the device installers. They need to log into the app.
There are locations, all the places that the device installers need to visit.
There are devices, all the different devices that can be installed by the users.
There are todos, basically a planned installation for a specific user at a specific location for specific devices.
Of course I have tried to simplify the data, but this should contain the gist.
Now, these are important characteristics of the application:
Users, locations and devices can be changed by an administrator (back-end software).
Todos can be planned by an administrator (back-end software).
App user (device installer) only sees his/her own todos/planning for today + 1 week ahead.
Multiple app users (device installers) might be assigned to the same location and/or todos, because for a big building there might be multiple installers at work.
Automatic synchronization between the data in each app in use and the global database.
Secure, it should only be possible for user X to request his/her own todos/planning.
Taking into account these characteristics I currently have the following in mind:
One global 'master' database containing all users, locations, devices, todos.
Filtered replication/sync using a selector object which, for each user, replicates only the data that should be accessible to that specific user.
Ionic application using PouchDB which does full/normal replication/sync with his/her own user database.
Am I correct in assuming the following?
The user of the application using PouchDB will have full read access on his own user database which has been filtered server-side?
For updating data I can make use of validate_doc_update to check whether the user may or may not modify something?
Any changes done on the PouchDB database will be replicated to the 'user' database?
These changes will then also be replicated from the 'user' database to the global 'master' database?
Any changes done on the global 'master' database will be replicated to the 'user' database, but only if required (only if there have been new/changed(/deleted) documents for this user)?
These changes will then also be replicated from the 'user' database to the PouchDB database for the mobile app?
If all this holds true, then it might be a good fit for this project. At least I think so? (Correct me if I'm wrong!) But I did read about 'performance' problems regarding filtered replication. Suppose there are hundreds of users (device installers) (there aren't this many right now, but there might be in the future). Would it be a problem to have this filtered replication running for hundreds of 'user' databases? I did read that CouchDB 2.0 and 2.1 support a selector object for filtered replication instead of the usual JavaScript filter functions, which is supposed to be up to 10x faster. But my question is still: does this work well, even for hundreds (or even thousands) of 'filtered' databases? I don't know enough about the underlying algorithms and limitations, but I am wondering whether any change to the global 'master' database does or does not require expensive calculations to decide which 'filtered' databases to replicate to. And if it does... does it matter in practice?
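For concreteness, this is the kind of selector-filtered replication I have in mind, sketched in Java only for illustration (the app itself is Ionic/PouchDB; the host, credentials, database names and the assigned_to field are all made up):

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.Base64;

    /** Creates a continuous, selector-filtered replication from the master
     *  database into one per-user database by writing a _replicator document. */
    public class CreateUserReplication {

        public static void main(String[] args) throws Exception {
            String userId = "installer-42";
            String doc = "{"
                    + "\"_id\": \"master-to-" + userId + "\","
                    + "\"source\": \"http://couchdb:5984/master\","
                    + "\"target\": \"http://couchdb:5984/user_" + userId + "\","
                    // Only documents listing this installer end up in the user db.
                    + "\"selector\": { \"assigned_to\": { \"$elemMatch\": { \"$eq\": \"" + userId + "\" } } },"
                    + "\"continuous\": true"
                    + "}"; // source/target credentials omitted for brevity

            String auth = Base64.getEncoder().encodeToString("admin:secret".getBytes());
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("http://couchdb:5984/_replicator"))
                    .header("Content-Type", "application/json")
                    .header("Authorization", "Basic " + auth)
                    .POST(HttpRequest.BodyPublishers.ofString(doc))
                    .build();

            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.statusCode() + " " + response.body());
        }
    }

The mobile side would then just do a normal, unfiltered continuous sync between PouchDB and its own user_<id> database.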
Please, any advice would be welcome. I did also consider using other databases. My first approach would actually have been to use a relational database. But one of the required characteristics of this app is real-time synchronization. In the past I have been able to handle this myself using revision fields in an RDBMS and a lot of code, but I would really prefer something as elegant as CouchDB/PouchDB for the synchronization. This is really an area that would save me a lot of headache. Keeping this in mind, what are my options? Am I heading down a reasonable path, or could performance become an issue down the road?
Note that I have also thought about having separate databases for each user ('one database per user'), but I think it might not be the best fit for this project, because some todos might be assigned to multiple users, and when one user updates something for a todo, it must be updated for the other users as well.
Hopefully some CouchDB experts can shed some light on my questions. Much appreciated!
I understand there might be some debate but I am only interested in the facts and expertise of others.

Best image upload directory structure practises?

I have developed a large web application with NodeJS. I allow my users to upload multiple images to my Google Cloud Storage bucket.
Currently, I am storing all images under the same directory of /uploads/images/.
I'm beginning to think that this is not the safest way, and could affect performance later down the track when the directory has thousands of images. It also opens up a threat, since some images are meant to be private, and it could allow users to find images by guessing a unique ID, such as uploads/images/29rnw92nr89fdhw.png.
Would I be best changing my structure to something like /uploads/{user-id}/images/ instead? That way each directory only has a couple dozen images. Although, can a directory handle thousands of subdirectories without suffering performance issues? Does Google Cloud Storage happen to accommodate issues like this?
GCS does not actually have "directories." They're an illusion that the UI and command-line tools provide as a nicety. As such, you can put billions of objects inside the same "directory" without running into any problems.
One addendum there: if you are inserting more than a thousand objects per second, there are some additional caveats worth being aware of. In such a case, you would see a performance benefit to avoiding sequential object names. In other words, uploading /uploads/user-id/images/000000.jpg through /uploads/user-id/images/999999.jpg, in order, in rapid succession, would likely be slower than if you used random object names. GCS has documentation with more on this, but this should not be a concern unless you are uploading in excess of 1000 objects per second.
A nice, long GUID should be effectively unguessable (or at least no more guessable than a password or an access token), but they do have the downside of being non-revocable without renaming the image. Once someone knows it, they know it forever and can leak it to others. If you need firm control of your objects, you could keep them all private and visible only to your project and allow users to access them only via signed URLs. This offers you the most flexibility and control, but it's also harder to implement.
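To illustrate the signed-URL idea: the object stays private, and your backend hands out short-lived links on demand. A rough sketch with the Cloud Storage Java client is below (your stack is NodeJS, which has an equivalent getSignedUrl method on File objects; the bucket name here is made up and the signing credentials must come from a service account):

    import java.net.URL;
    import java.util.concurrent.TimeUnit;
    import com.google.cloud.storage.BlobInfo;
    import com.google.cloud.storage.Storage;
    import com.google.cloud.storage.StorageOptions;

    /** Issues a short-lived read URL for a private object, so the object
     *  itself never needs to be publicly readable. */
    public class SignedUrlExample {

        public static void main(String[] args) {
            // Uses the application default credentials, which must be able to sign.
            Storage storage = StorageOptions.getDefaultInstance().getService();

            BlobInfo blob = BlobInfo.newBuilder("my-uploads-bucket",
                    "uploads/images/29rnw92nr89fdhw.png").build();

            // The link is only valid for 15 minutes; afterwards the client must ask again.
            URL url = storage.signUrl(blob, 15, TimeUnit.MINUTES,
                    Storage.SignUrlOption.withV4Signature());

            System.out.println(url);
        }
    }

Your application then performs its own permission check before handing the URL to the user.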

Chrome Extension Database Storage

I am working on a page action extension and would like to store information that all users of the extension can access. The information will be key:value pairs, where the key is a web url and the value is an array of links.
I have to be able to update the database without redeploying the extension to the chrome store. What is it that I should look into using? The storage APIs seem oriented towards user data rather than data stored by the app and updated by the developer.
If you want something to be updated without deploying an updated version through CWS, you'll need to host the data yourself somewhere and have the extension query it.
Using chrome.storage.local as a cache for said data would be totally appropriate.
The question is pretty broad, so I'll give you some ideas I've used before.
Since you say you don't want to republish when the db changes, you need to host the data for the db yourself. This doesn't mean you need to store an actual db, just a way for the extension to get the data.
Ideally, you are only adding new pairs. If so, an easy way is to store your pairs in a public Google spreadsheet. The extension then remembers the last row synced and uses the row feed to get data incrementally.
There are a few tricks to getting the spreadsheet sync right; take a look at my GitHub project "Plus for Trello" for a full implementation.
This is a good way to sync incrementally, though if the db isn't huge you could just host a CSV file and fetch it periodically from the extension.
Now that you can get the data into the extension, decide how to store it. chrome.storage.local or IndexedDB should be fine, though IndexedDB is usually better for later querying things more complex than just a hash table.
