LevelDB access problems - google-chrome-extension

I'm trying to recover my lost data from the Chrome extension "OneTab".
The extension uses LevelDB to store the data.
I tried https://filext.com/file-extension/LDB
It partially opened the file, and it seems to contain the URLs.
I have good reason to believe one of the .ldb files has the data,
but the encryption makes the file useless.
So how can I decrypt the binaries and retrieve my data?
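For what it's worth, .ldb files are not actually encrypted: they are LevelDB's binary sorted-table format, with data blocks typically compressed with Snappy, which is why a generic file viewer only shows fragments. A LevelDB library can read them, but it needs the extension's whole database directory (CURRENT, MANIFEST-*, *.log and *.ldb files together), not a single .ldb file. Below is a minimal sketch, not a polished recovery tool, using Node's classic-level package; the directory path is a placeholder, and you should work on a copy of the directory taken while Chrome is closed.

```typescript
// A minimal sketch: dump every key/value pair from a copied LevelDB directory.
// Assumes you copied the extension's entire directory (CURRENT, MANIFEST-*,
// *.log, *.ldb) out of Chrome's profile; "./onetab-leveldb-copy" is a placeholder.
import { ClassicLevel } from 'classic-level';

const db = new ClassicLevel('./onetab-leveldb-copy', {
  createIfMissing: false, // fail loudly if the copy is not a valid LevelDB directory
});

for await (const [key, value] of db.iterator()) {
  // Dump everything; search the output for "http" to find the saved URLs.
  console.log(`${key} => ${value}`);
}

await db.close();
```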

Related

How to store a file on disk, reference it in a Postgres database, and keep them in sync?

I want to store a file on the server, i.e. on disk, and store the path of that file in a Postgres table. I know that a file stored on disk is no longer kept in sync with Postgres: someone could delete the file on disk while my column still references it. Digging deeper, I found that Postgres has a feature that lets you save the file on disk, create a symbolic link, and store that in the database itself, which keeps the two in sync and also works as a backup. I also want to control access to the file based on some condition. How can we do this in Postgres? Any help is appreciated.
You can either store files in the filesystem and put their file names or paths into the database (but that way you will have to remember those files when you do backups or migrate data), or you can let Postgres store the files in the database using BLOB (large object), bytea, or text data types.
See the documentation:
https://www.postgresql.org/docs/current/static/largeobjects.html
https://www.postgresql.org/docs/8.4/static/datatype-binary.html
https://www.postgresql.org/docs/8.4/static/datatype-character.html
If you store files in the database, they will be backed up automatically, and you get ACID guarantees and other database goodies.
If you store files in the file system then you are pretty much on your own, but the performance characteristics may have some advantages for certain use cases.
There is also a third option: storing files in services like S3, Cloudinary, Uploadcare, etc. and storing their IDs (usually UUIDs) in the database. This is a pretty common approach for certain data, especially user-uploaded photos.
See this for more info:
https://wiki.postgresql.org/wiki/BinaryFilesInDB
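As a concrete illustration of the bytea route, here is a minimal sketch using Node's pg driver. The "documents" table and the connection string are assumptions made up for this example:

```typescript
// A minimal sketch of storing and retrieving a file as bytea with node-postgres.
// Assumed schema: CREATE TABLE documents (id serial PRIMARY KEY, name text, data bytea);
import { readFile, writeFile } from 'node:fs/promises';
import pg from 'pg';

const pool = new pg.Pool({ connectionString: 'postgres://user:pass@localhost/mydb' });

// Store: node-postgres sends a Buffer parameter as bytea automatically.
const data = await readFile('report.pdf');
const { rows } = await pool.query(
  'INSERT INTO documents (name, data) VALUES ($1, $2) RETURNING id',
  ['report.pdf', data],
);

// Retrieve: bytea columns come back as Buffers.
const res = await pool.query('SELECT data FROM documents WHERE id = $1', [rows[0].id]);
await writeFile('report-copy.pdf', res.rows[0].data);

await pool.end();
```

With the file inside the database, the "control access based on some condition" requirement from the question becomes an ordinary SQL problem: gate the SELECT behind whatever condition you need, or use row-level security.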

Store and access data offline from a website

What is the best/easiest way to store data offline? I have a website that I only run locally (it's just for personal use), so I am not using any PHP or SQL. I have a lot of posts, each with a date, a time, and a description consisting of a lot of text; a few of them contain an audio file (there are very few audio files, so they may be stored separately from the rest). Now I want to make a website which can show these posts on request, but since I am not using either a server or a database, I'm not sure how to store them. Use of any kind of framework or library is allowed, as long as I can use it without an internet connection.
Thanks.
EDIT: JSON is a good way to read data without a server-side language, but I don't know whether (or how) it's possible to write to a file without one. To summarize: I want a database (for both storing and accessing) without the need for a server.
An easy way to do it without setting up a web or database server is to use JSON files, imo. The syntax is very easy to learn!
Edit: if there is a better way to do this without a DB setup / server-side languages, I'd like to hear it.
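Since the hard part is the "write without a server-side language" bit: browsers can do that themselves with localStorage, which is essentially a small string key-value store that persists between visits. A minimal sketch follows; the Post shape mirrors the question (date, time, description, optional audio file), and the "posts" storage key is an arbitrary choice:

```typescript
// A minimal sketch: keep the posts as JSON in the browser's localStorage,
// so both reading and writing need no server at all.
interface Post {
  date: string;
  time: string;
  description: string;
  audio?: string; // file name of an audio file stored next to the page
}

function loadPosts(): Post[] {
  return JSON.parse(localStorage.getItem('posts') ?? '[]');
}

function savePost(post: Post): void {
  const posts = loadPosts();
  posts.push(post);
  localStorage.setItem('posts', JSON.stringify(posts));
}

// Usage: savePost({ date: '2016-05-01', time: '14:30', description: '...' });
```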

Images in Web application

I am working on an application in which users will upload a huge number of images, and I have to show those images on a webpage.
What is the best way to store and retrieve the images?
1) Database
2) FileSystem
3) CDN
4) JCR
or something else
What I know is:
Database: saving and retrieving images from the database will lead to a lot of database queries and will convert blobs to files every time. I think it will degrade the website's performance.
FileSystem: if I keep the image information in the database and the image files in the filesystem, there will be sync issues. For example, if I take a backup of the database, I also have to take a backup of the images folder. And if there are millions of files, it will consume a lot of server resources.
I read it here:
http://akashkava.com/blog/127/huge-file-storage-in-database-instead-of-file-system/
The other options are CDNs and JCR.
Please suggest the best option.
Regards
Using the File System is only really an option if you only plan to deploy to one server (i.e. not several behind a load balancer), OR if all of your servers will have access to a shared File System. It may also be inefficient, unless you cache frequently-accessed files in the application server.
You're right that storing the actual binary data in a Database is perhaps overkill, and not what databases do best.
I'd suggest a combination:
A CDN (such as AWS CloudFront), backed by a publicly-accessible (but crucially publicly read-only) storage such as Amazon S3 would mean that your images are efficiently served, wherever the browsing user is located and cached appropriately in their browser (thus minimising bandwidth). S3 (or similar) means you have an API to upload and manage them from your application servers, without worrying about how all servers (and the outside world) will access them.
I'd suggest perhaps holding meta data about each image in a Database however. This means that you could assign each image a unique key (generated by your database), add extra info (file format, size, tags, author, etc), and also store the path to S3 (or similar) via the CDN as the publicly-accessible path to the image.
This combination of Database and shared publicly-accessible storage is probably a good mix, giving you the best of both worlds. The Database also means that if you need to move / change or bulk delete images in future (perhaps deleting all images uploaded by an author who is deleting their account), you can perform an efficient Database query to gather metadata, followed by updating / changing the stored images at the S3 locations the Database says they exist.
You say you want to display the images on a web page. This combination means that the application server can query the database efficiently for the image selection you want to show (including restricting by author, pagination, etc.), then generate a view containing images that refer to the correct CDN path. It also makes viewing the images quite efficient, as you combine dynamic content (the page upon which the images are shown) with static content (the images themselves, via the CDN).
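To make the combination concrete, here is a minimal sketch of the upload path using the AWS SDK v3 and node-postgres. The bucket name, CDN domain, and "images" table are assumptions made up for this example:

```typescript
// A minimal sketch of the S3 + database combination described above.
// Assumed schema: CREATE TABLE images (id uuid PRIMARY KEY, author text, s3_key text, url text);
import { randomUUID } from 'node:crypto';
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';
import pg from 'pg';

const s3 = new S3Client({ region: 'us-east-1' });
const pool = new pg.Pool(); // connection settings taken from PG* environment variables

async function uploadImage(author: string, body: Buffer, contentType: string): Promise<string> {
  const id = randomUUID();
  const key = `images/${id}`;

  // The bucket itself stays write-protected; the CDN serves it read-only.
  await s3.send(new PutObjectCommand({
    Bucket: 'my-image-bucket',
    Key: key,
    Body: body,
    ContentType: contentType,
  }));

  // Metadata lives in the database; the public URL points at the CDN.
  const url = `https://cdn.example.com/${key}`;
  await pool.query(
    'INSERT INTO images (id, author, s3_key, url) VALUES ($1, $2, $3, $4)',
    [id, author, key, url],
  );
  return url;
}
```

Bulk operations (such as deleting every image by one author) then start with a cheap database query for the affected s3_key values, followed by deletes against S3.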
CDNs may be a good option for you.
You can store the link to the images along with the image information in your database.

Node.js / CouchDB: Use .json files instead of a database + Version Control

I'd like to just use .json files to store data, rather than using a database. Many simple sites have little data, and reading/writing to a file (that can be added to version control) seems adequate, and eliminates the need for database versioning / deployment logistics.
npm: node-store
Here's one way to do it, yet I'd need to implement all kinds of query functionality.
I'm really unfamiliar with CouchDB. From the little I've read, it looks like it might store the JSON data in plain files, but it might use some other kind of on-disk storage. Can someone shed some light on this?
Does CouchDB store its JSON in text-based files that can be added to version control (git)?
Does anyone know of another text-based storage system with some query functionality?
CouchDB is a full-fledged database. The value it gives you over simple file-based storage is additional indexing. I.e., if you go file-based, then you can either do only key-based lookups (the file name) or build your own secondary indexing methodology (symlinks or whatever). Now you're in the database-building business instead of the app-building business, which is silly, because your entire premise seems to be simplicity and focusing on your app.
Also, keep in mind that when you have many (even just two) people writing to your file(s), you're going to run into either filesystem locking problems or users overwriting one another.
You're correct, though: if you only have a few pieces of information, then a single JSON file - basically a config file - is far easier than a database, especially if people are only reading from the file.
Also, keep in mind that there are Database-as-a-Service solutions that remove the need for DIY install/configure/maintenance/administration. One of them is Cloudant which is based on CouchDB, is API compatible, contributes back, etc. (I work at Cloudant).
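To make the "key-based lookups only" point concrete, here is a minimal sketch of the file-per-key approach; the data directory and record shape are arbitrary. Anything beyond get-by-key means scanning every file (or maintaining your own index), which is exactly the work CouchDB's views would do for you. As a bonus, though, the files are plain JSON text, so for small data they version-control cleanly in git, which answers the original question's versioning angle.

```typescript
// A minimal sketch of file-based storage: one JSON file per key under ./data.
// It shows why this gives you only key lookups; any other query is a full scan.
import { readFile, writeFile, readdir } from 'node:fs/promises';

async function set(key: string, value: unknown): Promise<void> {
  await writeFile(`data/${key}.json`, JSON.stringify(value, null, 2));
}

async function get(key: string): Promise<unknown> {
  return JSON.parse(await readFile(`data/${key}.json`, 'utf8'));
}

// "Query" = read everything and filter in memory.
async function where(pred: (v: unknown) => boolean): Promise<unknown[]> {
  const files = await readdir('data');
  const all = await Promise.all(files.map((f) => get(f.replace(/\.json$/, ''))));
  return all.filter(pred);
}
```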
Does anyone know of another text-based storage system with some query functionality?
You can use the ueberDB module with its Dirty file storage.
As far as I remember, this storage just appends your data to the same text file over and over again, so if you really have a small dataset, it'll work just fine.
If your data grows too much, you can always switch storage backends while using the same module.

Malicious code in automatically downloaded files?

I've done my research, but I want to double check:
Let's suppose I have a file like 'myfile.csv' on my website, which was acquired from a malicious source. Can any sort of executable server-side code be hidden in it?
If so, how do I prevent this?
A CSV file would likely not be "executed" on the server side, rather it would simply be served up as-is. You'll want to ensure that your web server's "handlers" are configured properly to prevent your CSV file from being processed by the PHP (or any other unintended) handler.
That said, you should, if possible, validate/sanitize all input into your application, even uploaded files such as CSVs, especially if the content will later be accessible to other users.
Since you mentioned PHP, here is a link to some more info about Apache's Handlers. IIS has similar functionality and I imagine most mature web servers have a way of handling requests for different file extensions differently. http://httpd.apache.org/docs/2.2/handler.html
If possible, I'd recommend storing your CSV files in a database (or really anywhere they cannot be accessed via a URL) and streaming them on-demand via an intermediary PHP file to clients. Here is an example of that: Stream binary file from MySQL to download with PHP
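The linked example uses PHP; the same intermediary idea in Node/Express looks roughly like the sketch below. The storage directory and route are placeholders. The CSV lives outside the web root, and the route streams it with a fixed Content-Type, so no handler will ever try to execute it:

```typescript
// A minimal sketch of serving stored CSVs through an intermediary route
// instead of exposing them at a URL directly.
import path from 'node:path';
import { createReadStream } from 'node:fs';
import express from 'express';

const app = express();
const STORAGE_DIR = '/var/data/csv-uploads'; // not reachable by any URL

app.get('/files/:name', (req, res) => {
  // basename() strips any directory components, so "../../etc/passwd" can't escape.
  const safe = path.basename(req.params.name);
  const full = path.join(STORAGE_DIR, safe);

  res.type('text/csv');
  res.setHeader('Content-Disposition', `attachment; filename="${safe}"`);
  createReadStream(full)
    .on('error', () => res.status(404).end())
    .pipe(res);
});

app.listen(3000);
```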
