Error: quota exceeded (DNS resolutions : per day) - node.js

I have a Firebase app with a Cloud Function that generates some thumbnails when an image is uploaded to a particular bucket.
I keep getting these errors, pretty much nonstop:
My question is, and granted I am somewhat new to the Google Cloud Platform, how many times does DNS resolution happen? Does it happen on every upload and download between Firebase and Google Cloud Storage?
All my operations are between Firebase and Google Cloud Storage (i.e. download from the bucket, resize in temp space, and upload back to the bucket), and I have a check that returns early if an image name begins with 'thumb_' to avoid an infinite loop.
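For reference, the guard is roughly like this (a minimal sketch rather than my exact code; the function name and paths are just illustrative):
// Minimal sketch of the early-return guard described above (illustrative names).
const functions = require('firebase-functions');

exports.generateThumbnail = functions.storage.object().onFinalize(async (object) => {
  const filePath = object.name;                // e.g. 'images/photo.jpg'
  const fileName = filePath.split('/').pop();

  // Skip objects this function created itself, otherwise it re-triggers forever.
  if (fileName.startsWith('thumb_')) {
    console.log('Already a thumbnail, skipping.');
    return null;
  }

  // ...download from the bucket, resize in temp space, upload back as `thumb_${fileName}`...
  return null;
});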
That being said, I believe I'm getting this error because I initially got myself into an infinite loop by accident and blew out my quota.
Here is some more info about the DNS resolution quotas. I'm not entirely sure how to interpret it, but it appears 'DNS resolutions per 100 seconds' is exceeded, while 'DNS resolutions per day' is not.

I think the quota limit is 40,000 per 100 seconds. Either you really are making that many calls, or there is a bug in your code that is making too many calls without you realising it.

Related

Azure CDN high max latency

We are experimenting with using Blazor WebAssembly alongside Angular. It works nicely, but Blazor requires a lot of DLLs to be loaded, so we decided to store them in Azure Blob Storage and serve them through the Microsoft CDN on Azure.
When we check the average latency as users start working, it shows values between 200-400 ms. But the maximum latency values jump to 5-6 minutes.
This happens under our usual workload of 1k-2k users over the course of an hour. If they don't yet have the Blazor files cached locally, that can mean over 60 files per user requested from the CDN.
My question is whether this is expected behaviour or whether we have a bad configuration somewhere.
I mention Blazor WebAssembly just in case; I'm not sure whether the problem is specific to the way these files are loaded, or simply due to the large number of files being fetched.
Thanks for any advice in advance.
I did check whether the files are served from cache, and from the response headers it seems so: x-cache: TCP_HIT. The byte hit ratio from the CDN profile also looks OK, mostly 100%, and it never falls below 65%.
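For reference, this is roughly how I checked the header (the URL is just a placeholder for one of the Blazor framework files on our CDN endpoint):
// Quick check of the CDN cache status header (placeholder URL).
const https = require('https');

https.get('https://mycdn.azureedge.net/_framework/blazor.boot.json', (res) => {
  console.log('status :', res.statusCode);
  console.log('x-cache:', res.headers['x-cache']); // TCP_HIT => served from the CDN edge cache
  res.resume(); // discard the body
});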

How to Sync Chrome extension Storage.local when storage.sync exceeded Quota

As the title says, I want to sync chrome extension storage data across devices. I was using the storage.sync method till now. But today I got a new error message as follows:
Unchecked runtime.lastError: QUOTA_BYTES_PER_ITEM quota exceeded
After some googling, I found that storage.sync has a limited quota and that we need to use storage.local instead. But the problem is: how do I sync data between devices while using storage.local?
Please help.
Sample Code:
chrome.storage.sync.set({ tasks }, function () {
  // do something
});
Unfortunately you cannot use chrome.storage.local to sync between devices.
However, the issue here appears to be that a single item is too large: QUOTA_BYTES_PER_ITEM quota exceeded.
Please note that this per-item quota is significantly smaller than the total amount (in bytes) of data you can store using chrome.storage.sync.
Reference: https://developer.chrome.com/extensions/storage#property-sync
I would recommend breaking your items up into smaller chunks and calling chrome.storage.sync.set for each of the smaller sets of items.
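As a rough illustration of that chunking idea (the helper names and chunk size below are made up; the important constraint is that each item stays under chrome.storage.sync.QUOTA_BYTES_PER_ITEM, which is 8,192 bytes):
// Illustrative only: split a large `tasks` array across several sync items.
function saveTasksInChunks(tasks, tasksPerChunk = 20) {
  const items = { tasks_count: Math.ceil(tasks.length / tasksPerChunk) };
  for (let i = 0; i < items.tasks_count; i++) {
    // Choose tasksPerChunk so each item serializes to well under 8,192 bytes.
    items['tasks_' + i] = tasks.slice(i * tasksPerChunk, (i + 1) * tasksPerChunk);
  }
  chrome.storage.sync.set(items, function () {
    if (chrome.runtime.lastError) {
      console.error('Sync failed:', chrome.runtime.lastError.message);
    }
  });
}

function loadTasksFromChunks(callback) {
  chrome.storage.sync.get(null, function (storage) {
    const tasks = [];
    for (let i = 0; i < (storage.tasks_count || 0); i++) {
      tasks.push(...(storage['tasks_' + i] || []));
    }
    callback(tasks);
  });
}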
You could also consider integrating your extension with a cloud database service to allow a larger set of storage capabilities.

Sonos - Seek API returns 'ERROR_DISALLOWED_BY_POLICY'

I tried using the "Seek" API (/playbackSessions/{sessionId}/playbackSession/seek) to seek to a certain time within the track (which is loaded from an Amazon S3 bucket, not from within the local network), and received the following error: "ERROR_DISALLOWED_BY_POLICY".
In the reference it's mentioned that this occurs because "scrubbing is not allowed".
How can I "allow scrubbing"?
I've also noticed that "Resume" (after pause) plays the track from the start. So my guess is that Pause/Resume/Seek is only allowed within the local network. Is that the case? Is there any way to pause and resume a track, or seek to a certain time, while using a track "outside" of the local network (a CDN link such as Amazon S3 bucket files)?
Thanks.
Playback policies are set by the content provider. In the case of Cloud Queue-based playback, these policies are set in the /context or /itemWindow responses, as described here.
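As an illustration only (the exact response shape here is my assumption; check the Cloud Queue documentation for the authoritative field names), the policies your service returns might look something like:
{
  "playbackPolicies": {
    "canSeek": true,
    "canPause": true,
    "canSkip": true
  }
}
Presumably, if seeking is not granted by these policies, the player responds to seek requests with ERROR_DISALLOWED_BY_POLICY, which matches what you're seeing.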

Is this logic for a profile image uploader scalable?

Would the following generic logic for implementing a profile upload feature be scalable in production?
1. Inside a web app, the user selects an image to upload
2. The image gets sent to the server, where it is stored in memory and validated using a Node.js package called multer
3. If the image file is valid, a unique name is generated for the file
4. The image is resized to 150 x 150 using a Node.js package called sharp
5. The image is streamed to Google Cloud Storage
6. Once the image is saved, a public URL of the image is saved under the user's profile in the database
7. The image URL is sent back to the client and the image gets displayed
Languages used to implement the above
This would be implemented using:
Firebase Cloud Functions running on Node.js as the backend
Google Cloud Storage for holding the images
the Firebase database for saving the image URL for the user uploading the image
My current concerns with this are:
Holding the image in memory while it gets validated and processed could clog up the server under heavy load
How well would this way of generating a unique name for the image scale: uuidv4 + current date in milliseconds + the last 5 characters of the image's original file name?
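For context, here is a rough sketch of what I have in mind (endpoint, field, and bucket names are placeholders, not a final implementation):
// Rough sketch of steps 2-6 (placeholder names; error handling kept minimal).
const express = require('express');
const multer = require('multer');
const sharp = require('sharp');
const { Storage } = require('@google-cloud/storage');
const { v4: uuidv4 } = require('uuid');

const app = express();
const upload = multer({ storage: multer.memoryStorage(), limits: { fileSize: 5 * 1024 * 1024 } });
const bucket = new Storage().bucket('my-profile-images'); // placeholder bucket name

app.post('/profile-image', upload.single('image'), async (req, res) => {
  try {
    // Step 3: unique name = uuidv4 + current time in ms + last 5 chars of the original name
    const fileName = `${uuidv4()}_${Date.now()}_${req.file.originalname.slice(-5)}`;

    // Step 4: resize in memory with sharp
    const resized = await sharp(req.file.buffer).resize(150, 150).toBuffer();

    // Step 5: save to Cloud Storage and make the object public
    const file = bucket.file(fileName);
    await file.save(resized, { metadata: { contentType: req.file.mimetype } });
    await file.makePublic();

    // Steps 6-7: the caller stores this URL under the user's profile and displays it
    res.json({ url: `https://storage.googleapis.com/${bucket.name}/${fileName}` });
  } catch (err) {
    res.status(500).json({ error: err.message });
  }
});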
Both multer and sharp are efficient enough for your needs:
Say uploaded 2 MB pictures are downsized to 150x150 resolution.
Let's say you'll have 100,000 users, and every user uploads a picture that gets downsized to a 20 KB image. That's 1.907 GB ≈ 2 GB of storage.
Google Cloud Storage:
One instance in Frankfurt
2GB regional storage
= $0.55 a year
Google Cloud Functions:
With 100k users, for simplicity, let's assume the uploads are evenly distributed, so we'll have 8,334 users per month uploading their images. And let's be extremely pessimistic and say that one resize function takes 3s.
8334 resize operations per month
3s per function
Networking throughput 2MB per function
= $16.24 a year
So you're good to go. Happy coding!
This answer will get outdated eventually, so I'm including a screenshot of the Google price calculator:

Azure Block Blob PUT fails when using HTTPS

I have written a Cygwin app that uploads (using the REST API PUT operation) Block Blobs to my Azure storage account, and it works well for different size blobs when using HTTP. However, use of SSL (i.e. PUT using HTTPS) fails for blobs greater than 5.5 MB. Blobs less than 5.5 MB upload correctly. Anything greater and I find that the TCP session (as seen by Wireshark) reports a dwindling window size that goes to 0 once the aforementioned number of bytes are transferred. The failure is repeatable and consistent. As a point of reference, PUT operations against my Google/AWS/HP cloud storage accounts work fine when using HTTPS for various object sizes, which suggests the problem is not my client but something specific to the HTTPS implementation on the MS Azure storage servers.
If I upload the 5.5MB blob as two separate uploads of 4MB and 1.5MB followed by a PUT Block List, the operation succeeds as long as the two uploads used separate HTTPS sessions. Notice the emphasis on separate. That same operation fails if I attempt to maintain an HTTPS session across both uploads.
Any ideas on why I might be seeing this odd behavior with MS Azure? Same PUT operation with HTTPS works ok with AWS/Google/HP cloud storage servers.
Thank you for reporting this and we apologize for the inconvenience. We have managed to recreate the issue and have filed a bug. Unfortunately we cannot share a timeline for the fix at this time, but we will respond to this forum when the fix has been deployed. In the meantime, a plausible workaround (and a recommended best practice) is to break large uploads into smaller chunks (using the Put Block and Put Block List APIs), thus enabling the client to parallelize the upload.
This bug has now been fixed and the operation should now complete as expected.
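For anyone needing the chunked workaround today, a minimal sketch using the current @azure/storage-blob Node.js SDK (connection string, container, and blob names are placeholders) might look like this:
// Sketch of the Put Block / Put Block List approach (placeholders throughout).
const fs = require('fs');
const { BlobServiceClient } = require('@azure/storage-blob');

async function uploadInBlocks(filePath, blockSize = 4 * 1024 * 1024) {
  const service = BlobServiceClient.fromConnectionString(process.env.AZURE_STORAGE_CONNECTION_STRING);
  const blob = service.getContainerClient('my-container').getBlockBlobClient('my-blob');

  const data = fs.readFileSync(filePath);
  const blockIds = [];

  for (let offset = 0, i = 0; offset < data.length; offset += blockSize, i++) {
    // Block IDs must be base64-encoded strings of equal length.
    const blockId = Buffer.from(String(i).padStart(6, '0')).toString('base64');
    const chunk = data.slice(offset, offset + blockSize);
    await blob.stageBlock(blockId, chunk, chunk.length); // Put Block (these could also run in parallel)
    blockIds.push(blockId);
  }

  await blob.commitBlockList(blockIds); // Put Block List
}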
