Strange behaviour for Azure Blob - Node.js

I am using azure-storage to read blob files from a container in my Node.js code.
My code:
blobService.listBlobsSegmented(containerName, token, { maxResults: 10 }, function(err, result) {
  if (err) {
    console.log("Couldn't list blobs for container %s", containerName);
    console.error(err);
  } else {
    // do things here
  }
});
It works fine, but when I increase the blob limit from 10 to 500 my network stops working. What could be the issue here?

If you do an HTTP traffic capture with a network analyzer like Fiddler or Wireshark, you will find that this SDK (azure-storage) is just a REST API wrapper, so it doesn't have much control over the network. If you still have this problem, I would recommend checking your computer's or router's network settings.
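That said, rather than asking for 500 results in one call, you could page through the container in smaller segments using the continuation token that listBlobsSegmented hands back. A minimal sketch, assuming the same blobService and containerName as in the question:
// Page through all blobs 100 at a time instead of fetching 500 in one request.
function listAllBlobs(blobService, containerName, callback) {
  var entries = [];

  function nextPage(token) {
    blobService.listBlobsSegmented(containerName, token, { maxResults: 100 }, function(err, result) {
      if (err) {
        return callback(err);
      }
      entries = entries.concat(result.entries);
      if (result.continuationToken) {
        // More blobs remain; fetch the next segment.
        nextPage(result.continuationToken);
      } else {
        callback(null, entries);
      }
    });
  }

  nextPage(null);
}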

Related

Google Cloud Function running on nodejs10 test fails

Here are the relevant bits of my function:
  // Finally send the JSON data to the browser or requestor
  res.status(200).send(output);
} catch (err) {
  res.status(500).send(err.message);
} finally {
  await closeConnection(page, browser);
}
When I run this locally it works flawlessly and returns my output to the web browser. When I upload it to Google Cloud Functions and test it, the res.status(200).send(output); line fails with this message:
Error: function execution failed. Details:
res.status is not a function
Has anyone else seen this behavior? I'm completely puzzled as to why it would work perfectly on my local machine, but fail when I run it as a cloud function.
After digging around a bit I found the answer. Google Cloud Functions with a 'background' trigger type do not recognize res.status; instead they expect a callback:
https://cloud.google.com/functions/docs/writing/background#function_parameters
  // Finally send the JSON data to the browser or requestor
  callback(null, output);
} catch (err) {
  callback(new Error('Failed'));
} finally {
  await closeConnection(page, browser);
}
If you run your local development instance with the --signature-type flag it correctly starts up, but you can no longer test by hitting the port in a web browser:
"start": "functions-framework --target=pollenCount --signature-type=cloudevent",
Documentation on how to send mock pub/sub data into your local instance is here:
https://cloud.google.com/functions/docs/running/calling#background_functions
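If it helps, here is a rough Node sketch for poking the locally running Functions Framework with a mock Pub/Sub event (Node 18+ for the global fetch; the header and body shape are my reading of the docs above, so double-check them there; project/topic names are placeholders):
// Post a fake Pub/Sub CloudEvent to the Functions Framework on its default port 8080.
const payload = {
  message: {
    data: Buffer.from('hello world').toString('base64') // Pub/Sub payloads are base64-encoded
  },
  subscription: 'projects/my-project/subscriptions/my-sub' // placeholder
};

fetch('http://localhost:8080', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'ce-id': '1234567890',
    'ce-specversion': '1.0',
    'ce-type': 'google.cloud.pubsub.topic.v1.messagePublished',
    'ce-source': '//pubsub.googleapis.com/projects/my-project/topics/my-topic', // placeholder
    'ce-time': new Date().toISOString()
  },
  body: JSON.stringify(payload)
}).then((res) => console.log('status:', res.status));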

How to update a file when hosting in Google App Engine?

I have a Node.js server service running on Google Cloud App Engine.
I have a JSON file in the assets folder of the project that needs to be updated by the process.
I was able to read the file and the configs inside it, but when writing to the file I get a read-only file system error from GAE.
Is there a way I could write the information to the file without using the Cloud Storage option?
It is a very small file, and using Cloud Storage for it feels like bringing a very big drill machine for an Allen wrench screw.
Thanks
Nope, in App Engine Standard there is no writable persistent file system. The docs mention the following:
The runtime includes a full filesystem. The filesystem is read-only except for the location /tmp, which is a virtual disk storing data in your App Engine instance's RAM.
With that in mind, you can write to /tmp, but I suggest Cloud Storage because if scaling shuts down all the instances, the data will be lost.
You could also consider App Engine Flex, which offers a disk (because its backend is a VM), but the minimum size is 10 GB, so it would be worse than using Storage.
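For completeness, reading and writing a small JSON file under /tmp is just plain fs (the file name here is illustrative):
const fs = require('fs');
const path = require('path');

// /tmp lives in the instance's RAM and disappears when the instance is shut down.
const tmpFile = path.join('/tmp', 'config.json'); // illustrative name

fs.writeFileSync(tmpFile, JSON.stringify({ updatedAt: Date.now() }, null, 2));
const config = JSON.parse(fs.readFileSync(tmpFile, 'utf8'));
console.log(config);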
Thanks again for steering me away from wasting time on a hack solution for the problem.
Anyway, there was no clear code showing how to use the /tmp directory and download/upload the file from an App Engine-hosted Node.js application.
Here is the code if someone needs it:
const {
  Storage
} = require('@google-cloud/storage');
const path = require('path');

class gStorage {
  constructor() {
    this.storage = new Storage({
      keyFile: 'Please add path to your key file'
    });
    this.bucket = this.storage.bucket('your-bucket-name');
    // I am using the same file path and same file to download and upload
    this.filePath = path.join('/tmp', 'YourFileDetails');
  }

  async uploadFile() {
    try {
      await this.bucket.upload(this.filePath, {
        contentType: "application/json"
      });
    } catch (error) {
      throw new Error(`Error when saving the config. Message: ${error.message}`);
    }
  }

  async downloadFile() {
    try {
      await this.bucket.file(path.basename(this.filePath)).download({
        destination: this.filePath
      });
    } catch (error) {
      throw new Error(`Error when reading the config. Message: ${error.message}`);
    }
  }
}
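A hedged usage sketch, assuming the class above is exported from its own module (the module path, bucket and file names are placeholders):
const fs = require('fs');
const gStorage = require('./gStorage'); // hypothetical module path

const store = new gStorage();

async function refreshConfig(update) {
  await store.downloadFile();                                       // pull the current JSON into /tmp
  const config = JSON.parse(fs.readFileSync(store.filePath, 'utf8'));
  Object.assign(config, update);                                    // apply the in-memory changes
  fs.writeFileSync(store.filePath, JSON.stringify(config, null, 2));
  await store.uploadFile();                                         // push the updated JSON back to the bucket
}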

Editing a file in Azure Blob Storage with an API

Here is my scenario.
I have placed a config file (.xml) into an Azure Blob Storage container.
I want to edit that XML file and update/add content to it.
I want to deploy an API to an Azure App Service that will do that.
I built an API that runs locally and handles this, but that isn't exactly going to cut it as a cloud application. This particular iteration is a Node.js API that uses the Cheerio and File-System modules to manipulate and read the file, respectively.
How can I retool this to work with a file that lives in Azure Blob Storage?
Note: Are Azure blobs even the best place to keep the file? Is there a better place to put it?
I found this, but it isn't exactly what I am after: Azure Edit blob
Considering the data stored in the blob is XML (in other words, a string), instead of using the getBlobToStream method you can use getBlobToText, manipulate the string, and then upload the updated string using createBlockBlobFromText.
Here's the pseudocode:
blobService.getBlobToText('mycontainer', 'taskblob', function(error, result, response) {
  if (error) {
    console.log('Error in reading blob');
    console.error(error);
  } else {
    var blobText = result; // result contains the blob's content as a string
    var xmlContent = someMethodToConvertStringToXml(blobText); // convert to XML if that is easier to manipulate
    var updatedBlobText = someMethodToEditXmlContentAndReturnString(xmlContent);
    // Re-upload the blob
    blobService.createBlockBlobFromText('mycontainer', 'taskblob', updatedBlobText, function(error, result, response) {
      if (error) {
        console.log('Error in updating blob');
        console.error(error);
      } else {
        console.log('Blob updated successfully');
      }
    });
  }
});
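Since the question already uses Cheerio, the two placeholder methods could be collapsed into something like the following hedged sketch (the 'setting' element and attribute are illustrative):
const cheerio = require('cheerio');

// Parse the blob text as XML, make an edit, and return the serialized XML string.
function editXmlAndReturnString(blobText) {
  const $ = cheerio.load(blobText, { xmlMode: true });
  $('setting').attr('updated', new Date().toISOString()); // example edit
  return $.xml();
}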
Simply refactor your code to use the Azure Storage SDK for Node.js https://github.com/Azure/azure-storage-node

Preferred method of downloading large files from AWS S3 to EC2 server

I'm having some intermittent problems downloading a largeish (3.5 GB) file from S3 to an EC2 instance. About 95% of the time it works great, and fast, maybe 30 seconds. However, the other 5% of the time it stalls and can take more than 2 hours to download. Restarting the job normally solves the problem, indicating that it is transient. This makes me think there is a problem with how I'm downloading files. Below is my implementation: I pipe the read stream into a write stream to disk and return a promise which resolves when it is done (or rejects on error).
Is this the preferred method of downloading large files from S3 with node.js? Are there any "gotchas" I should know about?
function getDownloadStream(Bucket, Key) {
  return s3
    .getObject({
      Bucket,
      Key
    })
    .on('error', (error) => {
      console.error(error);
      return Promise.reject(`S3 Download Error: ${error}`);
    })
    .createReadStream();
}

function downloadFile(inputBucket, key, destination) {
  return new Promise(function(resolve, reject) {
    getDownloadStream(inputBucket, key)
      .on('end', () => {
        resolve(destination);
      })
      .on('error', reject)
      .pipe(fs.createWriteStream(destination));
  });
}
By default, traffic to S3 goes over the internet, so download speed can be unpredictable. To increase download speed, and for security reasons, you can configure a VPC endpoint for S3: a virtual device that routes traffic between your instance and S3 through AWS's internal network, which is much faster than going over the internet.
While creating the S3 endpoint, you need to select the route tables of the instances where the app is hosted. After creating it you will see an entry in those route tables like destination (com.amazonaws.us-east-1.s3) -> target vpce-xxxxxx, so whenever traffic goes to S3 it is routed through the endpoint instead of over the internet.
Alternatively, you can try parallelising the download, e.g. downloading ranges of bytes in parallel and combining them, but for 3.5 GB the approach above should be fine.
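For reference, a hedged sketch of the ranged approach with the AWS SDK v2 (the part size is arbitrary; this buffers every part in memory, so for a really large file you would write each part at its offset instead; retries and error handling are omitted):
const AWS = require('aws-sdk');
const fs = require('fs');
const s3 = new AWS.S3();

async function downloadInParts(Bucket, Key, destination, partSize = 64 * 1024 * 1024) {
  // Look up the object size, then fetch fixed-size byte ranges in parallel.
  const { ContentLength } = await s3.headObject({ Bucket, Key }).promise();
  const ranges = [];
  for (let start = 0; start < ContentLength; start += partSize) {
    const end = Math.min(start + partSize - 1, ContentLength - 1);
    ranges.push(`bytes=${start}-${end}`);
  }

  const parts = await Promise.all(
    ranges.map((Range) => s3.getObject({ Bucket, Key, Range }).promise())
  );

  // Promise.all preserves request order, so concatenating the bodies rebuilds the file.
  fs.writeFileSync(destination, Buffer.concat(parts.map((part) => part.Body)));
}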

Scheduled task only runs as expected if I run it once - never on its own (Azure mobile services)

I am running a simple script in the Azure Mobile Services scheduler:
function warmup() {
  warmUpSite("http://safenoteit.ca/");
}

function warmUpSite(url) {
  console.info("warming up: " + url);
  var req = require('request');
  req.get({ url: url }, function(error, response, body) {
    if (!error) {
      console.info("hot hot hot! " + url);
    } else {
      console.error('error warming up ' + url + ': ' + error);
    }
  });
}
This runs as expected when I run it manually (the Run once button). However, despite scheduling it to run every 15 minutes, I don't see any console log messages coming from the script. Additionally, the portal tells me that the scheduler is enabled and running.
Anyone else see this issue? The mobile service is running on basic tier and I have very little load on it. I don't see what could cause this issue, which makes the whole scheduler service useless.
UPDATE: Tried the same scheduled script on another mobile service, and everything works! Something's messed up with the mobile service itself. Talking to Microsoft support to resolve this.
It was an issue only Microsoft could fix: they had to redeploy the mobile service.
