How to reduce browser cache size when developing a Flutter web app - flutter-web

I'm currently creating a Flutter website and I don't quite understand why the browser cache has a size of about 8.5 MB, although I've only uploaded and used images with a total size of 1.4 MB.
What else does the cache consist of and how can it be reduced?
I don't use many packages, and there isn't much code either.
Is there anything I need to consider with Firebase hosting that affects the cache size?
Thanks in advance for all answers.

Related

PWA app on Node.js fetching data on Safari

Is there any limit on the amount of data that can be fetched in one go by a PWA app on Safari?
I am fetching 50MB of data in one go using a fetch request. It takes about 2 minutes, and after that my service worker stops working on Safari, while the same code continues to work in Chrome on Android.
Apple limits you to roughly 50MB of service worker cache storage, which freaks a lot of folks out, but don't worry.
The #1 reason you would need more storage is media (photos/videos). Regular site assets like HTML, JS and CSS should fit with plenty of room to spare for any application.
For sites with media dependencies I persist images, videos and other large binary assets in IndexedDB (see the sketch below).
Even on iOS you have several GB of storage available. Of course that will vary with the amount of free disk space on the device, but I have tested and consistently found over 4GB available on a 32GB iPhone 6.
I wrote an article on Service Worker cache limits, and maybe it will help you out.
https://love2dev.com/blog/what-is-the-service-worker-cache-storage-limit/
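A minimal sketch of that IndexedDB approach for large media, assuming a database named "media-db" and an object store named "blobs" (both names are illustrative, not from the answer above):

```javascript
// Persist large media responses in IndexedDB instead of the service worker
// Cache Storage. Database and store names are placeholders.
function openMediaDb() {
  return new Promise((resolve, reject) => {
    const request = indexedDB.open('media-db', 1);
    request.onupgradeneeded = () => request.result.createObjectStore('blobs');
    request.onsuccess = () => resolve(request.result);
    request.onerror = () => reject(request.error);
  });
}

async function cacheMedia(url) {
  const response = await fetch(url);
  const blob = await response.blob();            // keep the binary payload
  const db = await openMediaDb();
  return new Promise((resolve, reject) => {
    const tx = db.transaction('blobs', 'readwrite');
    tx.objectStore('blobs').put(blob, url);      // keyed by URL for later lookup
    tx.oncomplete = () => resolve();
    tx.onerror = () => reject(tx.error);
  });
}
```

Reading the asset back is the same pattern with a readonly transaction and get(url).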
According to the documentation, Google Chrome is the best way to render a PWA in the browser or on a mobile device.

Node.js memory fills up too quickly when uploading ~10MB images

Summary
Uploading an image from a Node.js backend to AWS S3 via JIMP uses a huge amount of memory.
Workflow (sketched in code below)
Frontend (React) sends the image to the API via a form submission
The server parses the form data
JIMP rotates the image
JIMP resizes the image if it is > 1980px wide
JIMP creates a Buffer
The Buffer is uploaded to S3
Promise resolves -> image metadata (URL, bucket name, index, etc.) is saved in the database (MongoDB)
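For reference, a rough sketch of that workflow (Jimp 0.x with multer and the AWS SDK v2); the route, field name, bucket name, and the commented-out Image model are placeholders, not taken from the question:

```javascript
const express = require('express');
const multer = require('multer');
const Jimp = require('jimp');
const AWS = require('aws-sdk');

const app = express();
const upload = multer();            // default memory storage: file arrives as a Buffer
const s3 = new AWS.S3();

app.post('/images', upload.single('image'), async (req, res) => {
  // Parse the form data, rotate, and resize if wider than 1980px
  const image = await Jimp.read(req.file.buffer);
  image.rotate(90);                 // placeholder angle; the question doesn't specify
  if (image.bitmap.width > 1980) image.resize(1980, Jimp.AUTO);

  // Create a Buffer and upload it to S3
  const body = await image.getBufferAsync(Jimp.MIME_JPEG);
  const result = await s3
    .upload({ Bucket: 'my-bucket', Key: req.file.originalname, Body: body })
    .promise();

  // Save the image metadata (URL, bucket name, etc.) in MongoDB
  // await Image.create({ url: result.Location, bucket: result.Bucket });
  res.json({ url: result.Location });
});
```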
Background
The server is hosted on Heroku with only 512MB of RAM. Uploading smaller images and all other requests work fine. However, the app crashes when uploading a single image larger than ~8MB, even with only a single user online.
Investigation so far
I've tried to replicate this in my local environment. Since I don't have a memory restriction there, the app won't crash, but the memory usage is ~870MB when uploading a 10MB image. A 6MB image stays around 60MB of RAM usage. I've updated all the npm packages and have tried disabling any processing of the image.
I've tried to look for memory leaks, as seen in the following screenshots; however, following the same workflow as above for the same 6MB image and taking 3 heap snapshots gives around 60MB of RAM usage.
First, I thought the problem was that the image processing (resizing) takes too much memory, but that would not explain the big gap between 60MB (for a 6MB image) and around 800MB (for a 10MB image).
Then I thought it was related to the item "system / JSArrayBufferData" (seen in ref2), which takes around 30% of the memory. However, this item is always there, even when I do not upload an image. It only appears just before I stop recording the snapshot in the "Memory" tab of the Chrome dev tools. I'm still not 100% sure what exactly it is.
Now, I believe this is related to the "TimeList" items (seen in ref3). I think they come from timeouts while waiting for the file to be uploaded to S3. However, here as well, I'm really not sure why this is happening.
The following screenshots show what I consider the important parts of the snapshots from the Chrome inspector attached to the Node.js server running with the --inspect flag.
Ref1: Shows the full items of the 3rd snapshot. All 3 snapshots uploaded the same 6MB image. Garbage seems to be properly collected, as the memory size did not increase.
Ref2: Shows the end of the 3rd snapshot, just before I stopped recording. I'm unsure what "system / JSArrayBufferData" is.
Ref3: Shows the end of the 5th snapshot, which is the one with the 10MB image. Those little, continuous spikes are the "TimeList" items, which seem to be related to a timeout. They appear to show up while the server is waiting for a response from AWS. It also seems this is what's filling up the memory, as this item is not there when uploading anything smaller than 10MB.
Ref4: Shows the immediate end of the 5th snapshot, just before stopping the recording. "system / JSArrayBufferData" appears again, but only at the end.
Question
Unfortunately, I'm not sure how to articulate my question, as I don't know what the problem is or what I really need to look out for. I would be very appreciative of any tips or experiences.
The high memory consumption was caused by the "Jimp" package, which was used to read the file, rotate it, resize it, and create a buffer to upload to the file storage system.
Reading the file, i.e. Jimp.read('filename'), is what caused the memory problem. It's a known bug, as seen here: https://github.com/oliver-moran/jimp/issues/153
I've since switched to the 'sharp' image processing package and am now able to easily upload images and videos larger than 10MB.
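For comparison, the same rotate/resize/buffer step with sharp might look roughly like this (the 1980px width comes from the workflow above; everything else is an assumption):

```javascript
const sharp = require('sharp');

// Sketch only: rotate, cap the width at 1980px, and return a Buffer
// that can be handed to the existing S3 upload step.
async function processImage(inputBuffer) {
  return sharp(inputBuffer)
    .rotate()                                           // auto-rotate based on EXIF orientation
    .resize({ width: 1980, withoutEnlargement: true })  // shrink only, never upscale
    .toBuffer();
}
```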
I hope this helps others running into the same issue.
Cheers

Improving Deployed Server Performance of Large File Uploads

I'm currently working on a web application that deals with large file uploads (we're talking 100MB up to a gigabyte) and then reads each file into memory. I'm attempting to host the server on Heroku, but even with upgraded tiers, the server struggles with 1) EXTREMELY slow upload speed - it takes 20-30 minutes for a decently sized file - and 2) memory issues when reading the file into memory. How would you go about improving performance in this scenario? Is the main concern RAM?
The stack we’re using is React in the front end, Node/Express for the back end.
Also worth noting: we're currently looking into hosting the server ourselves on our own Linux PC. This PC will obviously have more RAM and power than a Heroku server - will this actually improve performance when it comes to file uploads?
What we've tried so far:
Compressing the file in the front-end, but it causes Chrome to crash
Upgrading Heroku to pro levels
Thanks in advance!
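Not mentioned in the thread, but one common mitigation for both symptoms is to stream the upload to disk (or straight to object storage) instead of reading the whole file into memory. A minimal sketch assuming busboy; the route and temp path are made up for illustration:

```javascript
const express = require('express');
const busboy = require('busboy');
const fs = require('fs');
const os = require('os');
const path = require('path');

const app = express();

app.post('/upload', (req, res) => {
  const bb = busboy({ headers: req.headers });

  bb.on('file', (name, file, info) => {
    // The file arrives as a stream; piping it keeps memory usage flat
    // whether the upload is 100MB or a gigabyte.
    const dest = path.join(os.tmpdir(), path.basename(info.filename));
    file.pipe(fs.createWriteStream(dest));
  });

  bb.on('close', () => res.sendStatus(201));
  req.pipe(bb);
});
```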

React Native app closes when the app size reaches 1 GB - PDF react native

I have a problem that I will explain and want to solve.
I have an app I've built with React Native.
There is a company with a large number of files that they want available in the app without Internet access.
Offline, the data size reaches approximately 10 GB.
When adding only two sections of this data to the application, the size reached 1 GB.
The problem is with the Android application built with the React Native CLI framework:
it no longer works because of the large size of the application, and if I delete enough files to bring it below 1 GB, the application works normally and smoothly.
Requirements for the application:
All data is offline
The application works without the Internet
The application is not uploaded to the store
When the application is finished, I will install it on 100 iPads
I want to solve this problem that the application suffers from.

How to reduce memory consumption for Orchard CMS site hosted on Windows Azure Websites

I have an Orchard CMS website currently hosted on Windows Azure Websites.
It's a pretty standard blog where images are hosted on SkyDrive and linked, so the blog itself only serves HTML.
I've set it in Shared mode, running 1 instance.
But I keep getting "quota reached", and it seems like my site is always maxing out the memory (the maximum is 512MB per hour) and I can't understand why.
I've tried increasing to 3 instances, but it doesn't increase the maximum memory I can use.
Update:
The maximum usage limits for websites under Shared mode are:
CPU time: 4 hours per day, 2.5 minutes per 5-minute interval
File system: 1024MB
Memory usage: 512MB per hour
Database: 1024MB (web instance)
Update2:
I've tried re-creating my website in different regions. Currently my site is hosted in US West, which has the above limits, but other regions have slightly different limits; for example, East Asia has a 1024MB-per-hour memory usage limit! I haven't been able to dig up any documentation on this, which is puzzling.
Update3:
In Update2 I mentioned that different regions have different memory-usage-per-hour limits. This is actually not true. I had set up a new site under the "Free" setting with 1024MB per hour, but when I switched it to "Shared" the memory usage limit came down to 512MB per hour.
I have not been able to reproduce this issue in any of my other sites despite them using the same source code, which leads me to believe it's something weird with my particular Azure website setup. Possibly something to do with the dashboard, as mentioned by #Vinblad.
I'm planning to set up a new Azure website in a different region, and while I'm at it, upgrade to Orchard 1.6.
I had a similar issue on Azure with Orchard. It was due to the error log files continually growing and taking up space. I'm manually deleting files at the moment but will have to look into a more automated solution.
512MB/hour doesn't make any sense at all; I agree with Steve. However, 512MB (not per hour) is more than enough to host Orchard. Try measuring memory on your local copy of the site. If you do see abnormal memory consumption, try to profile it and find the module that's responsible. If not, then contact Azure support and ask them why the same application would take more memory on Azure than on your local machine.
Another thing to investigate would be caching: do you have output caching enabled?
I saw this post on the Azure forums where they recommend disabling the dynamic module loader. We gave this a try, but it gave us problems with the images, so we had to revert.
