chrome.storage.sync limits vs. Google Keep - google-chrome-extension

I understand the limitations of QUOTA_BYTES_PER_ITEM and QUOTA_BYTES when using chrome.storage.sync. I'm finding them quite limiting for an annotated-history extension I am writing. I understand that local storage could avoid this problem, but I need users to be able to keep their data as they move to other devices or someday replace their machine. My question is: are there other storage methods to get around this? What about Google Keep? It is an extension, but it appears capable of "unlimited" storage of notes, or at least far more than chrome.storage.sync allows. Is it simply not playing by the same rules, or are there other methods I could be using? Currently I'm concatenating information into large strings in JavaScript, storing them with chrome.storage.sync, and parsing that information later as my database.
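For reference, my current approach looks roughly like this (simplified; the visitedPages array, key name, and record format are just placeholders):

```js
// Build one large string as a crude "database" and push it to sync storage.
const records = visitedPages.map(p => `${p.url}|${p.note}`);
const blob = records.join('\n');

chrome.storage.sync.set({ historyBlob: blob }, () => {
  if (chrome.runtime.lastError) {
    // A long string quickly trips QUOTA_BYTES_PER_ITEM (8,192 bytes per key),
    // and the whole sync area is capped at QUOTA_BYTES (~100 KB).
    console.error(chrome.runtime.lastError.message);
  }
});

// Later, read it back and parse it.
chrome.storage.sync.get('historyBlob', ({ historyBlob }) => {
  const entries = (historyBlob || '').split('\n').map(line => {
    const [url, note] = line.split('|');
    return { url, note };
  });
  console.log(entries);
});
```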
Thanks for any help!

Related

Best image upload directory structure practices?

I have developed a large web application with NodeJS. I allow my users to upload multiple images to my Google Cloud Storage bucket.
Currently, I am storing all images under the same directory of /uploads/images/.
I'm beginning to think that this is not the safest approach, and that it could affect performance later down the track when the directory has thousands of images. It also opens up a security risk: some images are meant to be private, and users could find them by guessing a unique ID, such as uploads/images/29rnw92nr89fdhw.png.
Would I be better off changing my structure to something like /uploads/{user-id}/images/ instead? That way each directory only has a couple dozen images. Although, can a directory handle thousands of subdirectories without suffering performance issues? Does Google Cloud Storage accommodate issues like this?
GCS does not actually have "directories." They're an illusion that the UI and command-line tools provide as a nicety. As such, you can put billions of objects inside the same "directory" without running into any problems.
One addendum there: if you are inserting more than a thousand objects per second, there are some additional caveats worth being aware of. In that case, you would see a performance benefit from avoiding sequential object names. In other words, uploading /uploads/user-id/images/000000.jpg through /uploads/user-id/images/999999.jpg, in order, in rapid succession, would likely be slower than using random object names. GCS's documentation has more on this, but it should not be a concern unless you are uploading in excess of 1000 objects per second.
A nice, long GUID should be effectively unguessable (or at least no more guessable than a password or an access token), but they do have the downside of being non-revocable without renaming the image. Once someone knows it, they know it forever and can leak it to others. If you need firm control of your objects, you could keep them all private and visible only to your project and allow users to access them only via signed URLs. This offers you the most flexibility and control, but it's also harder to implement.
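For what it's worth, a minimal Node.js sketch of that combination (non-sequential object names plus short-lived signed URLs), assuming the @google-cloud/storage client; the bucket name and paths are placeholders:

```js
const { Storage } = require('@google-cloud/storage');
const crypto = require('crypto');

const storage = new Storage();
const bucket = storage.bucket('my-private-uploads');

async function uploadImage(localPath, userId) {
  // Random (non-sequential, effectively unguessable) object name.
  const objectName = `uploads/${userId}/images/${crypto.randomBytes(16).toString('hex')}.png`;
  // The object stays private; only your project can read it directly.
  await bucket.upload(localPath, { destination: objectName });
  return objectName;
}

async function temporaryUrl(objectName) {
  // Hand out a signed URL that expires in 15 minutes instead of making the object public.
  const [url] = await bucket.file(objectName).getSignedUrl({
    version: 'v4',
    action: 'read',
    expires: Date.now() + 15 * 60 * 1000,
  });
  return url;
}
```

The signed-URL route keeps every object private while still letting the right user download it for a limited time, which is exactly the revocability a bare GUID can't give you.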

Azure - OneDrive transfer

I am very new to Microsoft Azure. I would like to transfer 5 GB of files (datasets) from my Microsoft OneDrive account to Azure Storage (Blob storage, I guess), and then share those files with about 10 other Azure accounts (I have some idea of how to share the files). I am not really sure how to go about it, and I would prefer not to download the 5 GB of files from OneDrive and then upload them to Azure. Help would be greatly appreciated, thanks a lot.
David's comment is correct, but I still want to provide a couple of links to get you started. As he mentioned, if you can break this into several more specific questions you will probably get a much better Stack Overflow response. I think the first part of the question could be phrased as 'How can I quickly transfer 5 GB of files to Azure Storage?'. This is still opinion-based to some degree, but it has a couple of more concrete answers:
AzCopy and DmLib are, respectively, a command-line tool and an Azure library that specialize in bulk transfer. There are a couple of options, including async copy and sync copy. They are geared more toward upload/download from the file system, but they will get you started.
There is also a variety of language-specific storage client libraries with which you can write custom code to connect to OneDrive. Here is a getting-started guide for .NET.
I think this is a very genuine question, as downloading huge files and uploading them back is an expensive and time-consuming task. You can refer to a template here that lets you do a server-side copy; a rough sketch of the same idea with the Azure SDK is below.
Hopefully this benefits you, or someone else.
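A minimal Node.js sketch of a server-side copy with @azure/storage-blob, assuming you can get a direct-download URL for the OneDrive file that Azure can reach without an interactive login; the connection string, container, and file names are placeholders:

```js
const { BlobServiceClient } = require('@azure/storage-blob');

async function copyFromOneDrive() {
  const service = BlobServiceClient.fromConnectionString(
    process.env.AZURE_STORAGE_CONNECTION_STRING
  );
  const container = service.getContainerClient('datasets');
  await container.createIfNotExists();

  // Direct-download URL for the OneDrive file (placeholder).
  const sourceUrl = 'https://example.com/onedrive-direct-download/dataset1.zip';

  // Server-side copy: Azure pulls the bytes itself, so nothing flows through your machine.
  const poller = await container.getBlobClient('dataset1.zip').beginCopyFromURL(sourceUrl);
  await poller.pollUntilDone();
}

copyFromOneDrive().catch(console.error);
```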

Technologies for File Sharing

We presently have a social-networking kind of platform. Next we are working on a file-sharing feature, wherein users should be able to upload and share files (PDF, PPT, DOC, images, ZIP) with friends and groups.
Which specific technologies should we look at? We are not looking for storage providers like Dropbox or Amazon S3 as an answer; we want advice on efficient storage technologies. We have to store attributes of files such as author, with whom the file is shared, edit rights, download rights, etc.
Any help would be appreciated.
The answer depends on your specific requirements. In general, you should look for a provider that offers high availability (e.g. no single point of failure), high durability (once something is written, it stays written), and high performance (low latency, high throughput).
In addition, you may want certain security features, but the specifics are, again, a function of your needs. You noted the ability to specify sharing attributes, so you'll want a provider with a high degree of flexibility and control in specifying access permissions. To store related data, like authorship, you'll want the ability to store and retrieve arbitrary metadata associated with the storage object.
Finally, while you stated you don't want a specific provider recommendation, I will nonetheless add that Google Cloud Storage is an excellent choice because it provides all of the above functionality and more (full disclosure: I work on Google's cloud products).
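To make the metadata point concrete, here is a minimal Node.js sketch using Google Cloud Storage (purely illustrative; the bucket, object, and attribute names are placeholders) that stores sharing attributes as custom object metadata:

```js
const { Storage } = require('@google-cloud/storage');

const storage = new Storage();
const file = storage.bucket('social-files').file('docs/quarterly-report.pdf');

async function recordSharingAttributes() {
  // Custom key/value metadata (string values) travels with the object itself.
  await file.setMetadata({
    metadata: {
      author: 'alice@example.com',
      sharedWith: 'bob@example.com,carol@example.com',
      editRights: 'false',
      downloadRights: 'true',
    },
  });

  // Read it back when rendering the sharing UI or checking permissions.
  const [metadata] = await file.getMetadata();
  console.log(metadata.metadata);
}

recordSharingAttributes().catch(console.error);
```

In practice you would likely keep the authoritative permission records in your own database and rely on the provider's ACLs or signed URLs for enforcement; the object metadata is simply a convenient place to keep file-level attributes next to the file.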

TrueCrypt alternative with API

I am searching for a TrueCrypt alternative that has an API to programmatically access the files. Does anyone know a solution?
The API should support the listing, creating, changing and deleting of files.
DiskCryptor does not have an API, but it is GPL.
If I may, I believe what you are asking for is an abstract file-system library. I understand that you want to load a TrueCrypt or similar container and list its contents. When it is opened, such a container is just raw bytes representing sectors. On top of the encryption, such an API would see only raw sectors, and it would have to make sense of them with a corresponding sector-level API.
You can look at the problem another way: how would you write a program, such as zip, that can present this kind of information about a zip file, a very common container if you will?
So the API you are looking for would need to achieve three things:
Understand the container's encryption scheme (possibly multiple versions of it)
Understand the sector format of the embedded filesystem
Provide a user-friendly API.
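Nothing like this exists off the shelf as far as I know, but purely to illustrate the surface the question asks for (listing, creating, changing, and deleting files), here is a hypothetical JavaScript sketch; the class and method names are invented, and an in-memory map stands in for the decrypted, parsed container:

```js
// Hypothetical API surface only. A real implementation would decrypt the
// container's sectors (layer 1) and parse the embedded filesystem (layer 2)
// behind these calls; here a Map stands in for both.
class ContainerFileSystem {
  constructor() {
    this.entries = new Map(); // path -> Buffer
  }

  listFiles() {
    return Array.from(this.entries.keys());
  }

  createFile(path, contents) {
    this.entries.set(path, Buffer.from(contents));
  }

  changeFile(path, contents) {
    if (!this.entries.has(path)) throw new Error(`no such file: ${path}`);
    this.entries.set(path, Buffer.from(contents));
  }

  deleteFile(path) {
    this.entries.delete(path);
  }
}

// Usage sketch
const container = new ContainerFileSystem();
container.createFile('/notes/todo.txt', 'buy milk');
container.changeFile('/notes/todo.txt', 'buy milk and bread');
console.log(container.listFiles()); // ['/notes/todo.txt']
container.deleteFile('/notes/todo.txt');
```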
I asked myself the same questions a while ago, scoured the net for answers, and this answer is the sum of what I have found so far. I hope you find it a valid answer, even if it's not actionable.
Not yet, anyways ;)
Our SolFS OS Edition might be what you are looking for if you plan to create new software. It's available for Windows, MacOS X, Linux and FreeBSD.
A Java FileSystemProvider with integrated encryption: https://github.com/cryptomator/cryptofs

network drive file sharing

For the better part of 10+ years we have relied on various mapped network drives for file sharing: one drive letter for sharing files between teams, a separate share for the entire organization, a third for personal use, etc. I would like to move away from this and am trying to decide whether an ECM/SharePoint-type solution, or a home-grown app, is worth the cost and the way to go, or whether we should simply keep relying on login scripts and mapped drives due to their relative simplicity. Does anyone have experience with this in their own organization, or thoughts on it?
Thanks.
SharePoint is very good at document sharing.
Documents generally follow an approval process, have permissions, live in clusters... and these things lend themselves well to SharePoint's document libraries.
However, there are some things that don't lend themselves well to living inside SharePoint... do you have a virtual hard drive (.vhd) file that you want to share with a workmate? It's not such a good idea to try to put a 20 GB file into SharePoint.
SharePoint can handle large files, and so can SQL Server behind it... but do you want your SQL Server bandwidth being saturated by such large files? Do you want your backup of SQL Server to hold copies of such large files multiple times?
I believe that there are a few Microsoft partners who offer the ability to disassociate file blobs from the SharePoint database, so that SharePoint can hold the metadata and a file system holds the actual files, and SharePoint simply becomes the gateway to manage access, permissions, and offer a centralised interface to files throughout an organisation. This would offer you the best of both worlds.
Right now though, I consider SharePoint ideal for documents, and I keep large files (that are not document centric) on Windows file shares.
Definitely, use a tool.
The main benefit here is version control: being able to jump easily to a previous version, diff changes, and see who modified what (see most VCSs' blame/annotate tools, which print out the text file showing when and by whom each line was modified).
Second, you can probably benefit from issue tracking/task tracking.
Other benefits include web access from the internet, having a wiki (which can be great in some situations), etc.
I use Subversion + Redmine at work and find them highly useful. Test a few solutions and you will surely find further advantages for yourself.
One thing that can be overlooked in the move to a document-management tool is the planning required around how much is going to be stored, and information-architecture issues like where different content is going to end up.
SharePoint in particular is easy to set up without a good plan going forward, and is particularly vulnerable to difficulties later on when things get too busy.
I would not recommend a home-grown app for something like this. The problem has been solved by off-the-shelf tools, and building one from scratch is going to cost a huge amount and not get you anywhere near the features for the money.
Did I mention how important planning your security groups and document areas (IA) was?
If you just need document storage then SharePoint can do very well. WSS is even free, and it provides very good document-storage capabilities.
But you have to plan carefully, as updating existing applications is painful. If you decide to go with SharePoint, here are a few pieces of advice off the top of my head:
Pay attention to security configuration (user groups, privileges, ...)
Plan your document libraries well, as it is not easy to just move documents between them
Also consider limiting the number of versions a document can have, because SharePoint stores a full copy of each version, not just the changes
Don't use InfoPath :) we have had very bad experiences with it (just don't tell the managers)
If you don't really need to change the graphical look of SharePoint then don't bother with it, as it brings many problems (I'm talking about custom master pages and custom site templates)
Try to use as much out-of-the-box (OOB) functionality as possible, because developing your own web parts not only costs more, it can also be quite complicated
Make sure to turn on search indexing. This one is easy to miss, because it is turned off by default, and you will be as surprised as I was when search doesn't work :)
If you just deploy it and load 10,000 documents into it, you will surely have problems later. If you give a little thought to structure, you will end up with really good document storage.
Migrating is very probably worth the cost in the long term. You will gain reliability, versioning, traceability, and extensibility.
Be sure to first identify the groups/rights, and to identify which links need to be fixed (maybe you have applications that use links to the shares).
An open-source alternative to SharePoint is Alfresco; it is also very good with CIFS (Windows shares).
