Transfer zip file to web server

I would like to develop an app that targets everything from Gingerbread (version 2.3, API 9) to Jelly Bean (version 4.3, API 18).
The problem:
I need to transfer large images (40 to 50 at a time), either independently or in a zip file, without the user having to click on each file being transferred. As far as I can tell I need to use HttpClient (org.apache), which was deprecated after Jelly Bean.
Right now the application takes the images and zips them into a zip file prior to uploading. I can create additional zip files; for example, if I have 50MB to transfer I can make each zip file about 10MB and have 5 files to transfer if I have to. I need to transfer these files to a web server.

I can't seem to find anything about transferring files after Jelly Bean. All the searching I've done turns up the deprecated calls, and the posts are 2-5 years old. I have installed AndFTP and transferred a 16MB zip file last night that was created by my app, but I really don't want to use that, as it will require additional steps from the user. I will try AndFTP today and set up an intent to transfer the files to see how that works out. Supposedly AndFTP works until Lollipop (5.0).

If there is an easier way please let me know; hopefully I've missed something about transferring files. Is there another way to do this after Jelly Bean?
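For context, the receiving end is just a plain HTTP endpoint that accepts multipart uploads. Here is a minimal sketch of what the server side could look like, assuming a Node/Express stack with the multer middleware (my assumption; nothing above dictates the server technology):

// upload-server.ts - sketch of a receiving endpoint (assumed stack:
// Node.js + Express + multer); the app POSTs each zip as multipart/form-data.
import express from "express";
import multer from "multer";

const app = express();

// Keep uploads on disk under ./uploads; the 60MB cap comfortably covers
// the ~10MB zip chunks described above.
const upload = multer({
  dest: "uploads/",
  limits: { fileSize: 60 * 1024 * 1024 },
});

// The client sends each zip as the "archive" field of the multipart body.
app.post("/upload", upload.single("archive"), (req, res) => {
  if (!req.file) {
    return res.status(400).send("no file received");
  }
  console.log(`received ${req.file.originalname} (${req.file.size} bytes)`);
  res.sendStatus(200);
});

app.listen(3000, () => console.log("listening on :3000"));

With an endpoint like this, the client only needs to issue one ordinary multipart POST per zip chunk, so no FTP app or extra user interaction is required.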

Related

Zip Files In Browser Cache

Hey Guys
At the moment I have a NodeJS webapp in the making which scrapes a website for data. Specifically, this webapp scrapes images for the purpose of downloading them. For example, all the image permalinks are scraped from the reddit front page. They are then sent to the client to download individually. My issue is that on the website I am scraping there can be thousands of images.
This provides a horrible user experience if 1000+ images are downloaded to the download folder.
As a result I have two options.
A) Download to a temporary folder on the server. Zip. Send to the client for download. Delete from the server.
B) Download the files to the browser cache. Zip. Download to the specified download directory.
My question to you is this: is option B even possible?
I am relatively new to this entire process and I can't find anything on actually zipping the files in the browser cache. I can implement option A relatively easily, however it requires a large amount of bandwidth, something I can find for around $5/mo on DigitalOcean. However, this entire project is a learning experience, and as a result I would love to be able to manage the files in the browser cache instead.
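For reference, option A can be quite compact if the zip is streamed instead of staged in a temporary folder. A rough sketch, assuming Express plus the archiver package (archiver is my assumption; imageUrls stands in for the scraped permalinks):

// zip-route.ts - sketch of option A as a streaming variant (assumed
// packages: express, archiver; global fetch requires Node 18+).
import express from "express";
import archiver from "archiver";

const app = express();

app.get("/download", async (_req, res) => {
  const imageUrls: string[] = [/* permalinks scraped earlier */];

  res.attachment("images.zip");     // sets Content-Disposition for the browser
  const archive = archiver("zip");
  archive.pipe(res);                // stream zip entries straight to the client

  for (const url of imageUrls) {
    const buf = Buffer.from(await (await fetch(url)).arrayBuffer());
    archive.append(buf, { name: url.split("/").pop() ?? "image" });
  }
  await archive.finalize();         // flushes the zip central directory
});

app.listen(3000);

Streaming this way also sidesteps the delete step, since nothing is ever written to disk on the server.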
I am using the following NPM Modules:
NodeJS
Express
Cheerio
Request
Further Update
I had come across an NPM package called JSZip: https://stuk.github.io/jszip/
However, I was unaware it could be implemented on the client side as well. This was purely an error on my part. This brings up an interesting issue of WebStorage: https://www.w3schools.com/html/html5_webstorage.asp
the maximum storage size for the session is 5MB
From here I will attempt to apply this answer: "How do you cache an image in Javascript" to my current code, and will update this answer with the result for anyone else facing this issue.
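In case it helps anyone else, the client-side route looks roughly like this. A sketch against JSZip (the filenames are guesses, and the image host must send CORS headers for fetch to read the responses); note that the zip is built in memory as a Blob, so the 5MB WebStorage limit quoted above doesn't apply to it:

// client-zip.ts - sketch: fetch images in the browser, zip them with JSZip,
// and hand the result to the user as a single download.
import JSZip from "jszip";

async function zipAndDownload(imageUrls: string[]): Promise<void> {
  const zip = new JSZip();

  for (const [i, url] of imageUrls.entries()) {
    const blob = await (await fetch(url)).blob(); // needs CORS on the image host
    zip.file(`image-${i}.jpg`, blob);             // name/extension are guesses
  }

  const archive = await zip.generateAsync({ type: "blob" });

  // Trigger a normal browser download of the finished archive.
  const a = document.createElement("a");
  a.href = URL.createObjectURL(archive);
  a.download = "images.zip";
  a.click();
  URL.revokeObjectURL(a.href);
}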

How to develop a real time file upload with Angular 2 and Node.js?

Usually an upload takes files to a temp directory first and then moves them to the desired directory. But I'm working with Big Data, e.g. uploading thousands of files at once. So I need to upload those files directly to the desired location, and as each one of them is uploaded to that directory, the user must see the change on the dashboard in real time.
I also need to show the user:
If any exception has occurred while uploading, e.g. a file causing a problem in the uploading process.
There should be an option to skip that file or retry the upload.
A report showing the list of files uploaded successfully vs. files that failed to upload.
If there is any network outage, the upload manager should keep retrying until the network is restored.
The user can pause the upload and restart it on next login (if that is feasible).
This is about full control of the upload process, to give the user the best experience while uploading large sets of data.
You can use ng2-file-upload; it has most of the features you require.
You can also find the demo here.
For the rest of the features you require, you can implement them on top of this library (that's better than writing your own code from scratch).
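As a rough idea of the wiring, here is a sketch against ng2-file-upload's FileUploader (the /api/upload URL and the component layout are placeholders of mine, not from the demo):

// upload.component.ts - sketch: wire ng2-file-upload's FileUploader into a
// dashboard. Assumes FileUploadModule is imported in the NgModule.
import { Component } from "@angular/core";
import { FileUploader, FileItem } from "ng2-file-upload";

@Component({
  selector: "app-upload",
  template: `
    <input type="file" ng2FileSelect [uploader]="uploader" multiple />
    <div *ngFor="let item of uploader.queue">
      {{ item.file.name }} - {{ item.progress }}%
      <button (click)="item.upload()">retry</button>
      <button (click)="item.remove()">skip</button>
    </div>
  `,
})
export class UploadComponent {
  // "/api/upload" is a placeholder endpoint; keep finished items in the
  // queue so the dashboard can show a success/failure report afterwards.
  uploader = new FileUploader({ url: "/api/upload", removeAfterUpload: false });

  constructor() {
    this.uploader.onCompleteItem = (item: FileItem, _res: string, status: number) => {
      console.log(`${item.file.name} finished with HTTP ${status}`);
    };
    this.uploader.onErrorItem = (item: FileItem) => {
      console.warn(`${item.file.name} failed; left in the queue for retry`);
    };
  }
}

Pause/resume across logins would still have to be built on top of this, as far as I know the library doesn't persist queue state between sessions.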

JSON must be no more than 1000000 bytes

We have a Jenkins-Chef setup with a QA build project for a client's website. The build gets the code from Bitbucket, and a script uploads the cookbooks from the Chef client to the Chef server.
These builds ran fine for a long time. Two days ago the automated and manual builds started failing with the following error (taken from the Jenkins console output):
Updated Environment qa
Uploading example-deployment [0.1.314]
ERROR: Request Entity Too Large
Response: JSON must be no more than 1000000 bytes.
From what I understand, the JSON files are related to Node.js, which is what the developers use on this web server.
We looked all over the config files for Jenkins, the Chef server and the QA server. We couldn't find a way to change the 1MB limit that is causing this error.
We tried changing client_max_body_size; it didn't work.
We checked the JSON file sizes; none of them reaches this limit.
Any idea where we can find a solution? Can this limit be changed? Is there anything we can do (infrastructure-wise) or should this be fixed on the developer side?
First of all, the 1M value is more or less hardcoded: the Chef server is not intended to store large objects.
What happens is that before uploading a cookbook, a JSON file with its information is created; as this file will be stored in the DB and indexed, it must not grow too large, to avoid performance problems.
The idea is to upload to the Chef server only what is absolutely necessary: strip VCS directories, any IDE build/project files, etc.
The simplest way to achieve this is the chefignore file. It has to be created just under the cookbook_path.
Its content is a list of wildcard patterns to ignore while uploading the cookbook, so an example could be:
*/.svn/*   # strip Subversion directories
*/.git/*   # strip Git directories
*~         # ignore Vim backup files
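As a side note, the limit is checked against the serialized JSON in bytes, which is not always what a directory listing shows. A quick hypothetical helper (plain Node, not part of Chef) to see how close a given JSON file comes to the 1000000-byte limit:

// json-size.ts - hypothetical helper (mine, not part of Chef): report how
// many bytes a JSON file occupies once serialized, which is the figure the
// 1000000-byte server-side check applies to.
import { readFileSync } from "fs";

const path = process.argv[2];
if (!path) {
  console.error("usage: node json-size.js <file.json>");
  process.exit(1);
}

const compact = JSON.stringify(JSON.parse(readFileSync(path, "utf8")));
const bytes = Buffer.byteLength(compact, "utf8");
console.log(`${path}: ${bytes} bytes${bytes > 1000000 ? " (over the limit)" : ""}`);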

How to throttle bandwidth for OverGrive in Linux (Debian)?

I've installed trickle but can't seem to get it to throttle overGrive.
$ trickle -d 500 -u 100 overgrive
trickle: Could not reach trickled, working independently: No such file or directory
trickle: exec(): No such file or directory
Is there another way to get overGrive to stop sucking up all my bandwidth while syncing?
I managed to solve the issue I had, and overGrive has worked fine for the last couple of weeks. It turned out that it had synchronized with some public files created by different users, which didn't have anything in common with my Google Drive account. What was common to all these files is that they belonged to some courses and had names like "MATH&141, Mod 01: Quiz 2 Toolkit". For some reason these files didn't have a .doc extension and had the symbols & and : in their names, which seems to make overGrive get stuck on them forever.
Anyway, I performed the following steps and it fixed the issue:
downloaded and installed the latest version of overGrive
cleared all trash files from Google Drive online
deleted all files from the local Google Drive folder, removed the following files if present, and restarted overGrive:
.overgrive.lastsync
.overgrive.cache
turned off automatic sync and started synchronization manually
waited until the full synchronization finished
You can check the log file in your Home folder called .overgrive.log to see if there are errors during the synchronization. It might happen that overGrive blocks on some specific file and tries to synchronize it over and over again, causing heavy download/upload usage.

How to create a single-file format for multiple documents with iOS, for Windows, OSX, iPhone

I'm working on an end-user app that will create documents that can contain several files.
The main resource is a SQLite database. The user can store several media files that are referenced from that database.
My first impulse is to use OSX bundles, but those will show up as folders on Windows.
Or put all the data inside the SQLite database, but that will have issues when trying to open large files.
Or maybe use a zip file, but then I need to compress/decompress.
Or maybe there is a magic trick to show a folder as a file under Windows...
Is there a working VFS (virtual file system) for iOS?
I'd go for a well-known archiving format such as zip or tar, because I'm sure there are pre-made libraries for those that you can use. By the way, the ZIP format allows you to store files without compression as well.
Yes, you can use our SolFS Application Edition; this is exactly what you are looking for, and it's available for iOS as well as all other major platforms.
ZIP has the problem of inefficient modification: updating a file inside an existing archive is slow and resource-consuming.
