error passing empty credentials to firestore emulator - node.js

I am trying to seed some sample data into my local Firestore emulator database. I adapted the example from this GitHub issue.
My code looks like this:
const {Firestore} = require('@google-cloud/firestore');
const {credentials} = require('grpc');

const db = new Firestore({
  projectId: 'my-project-id',
  servicePath: 'localhost',
  port: 8100,
  sslCreds: credentials.createInsecure(),
  customHeaders: {
    "Authorization": "Bearer owner"
  }
});

async function load_data() {
  await db.collection("mycollection").doc("myid").set({ foo: "test" })
}

load_data();
But I receive the error
this.credentials._getCallCredentials is not a function
Tested on Node 10 and 12 with the same error.
Library versions:
@google-cloud/firestore 3.5.1
grpc 1.24.2
Is there a better approach to writing to local emulated firestore? Or is there something wrong with my code?

The problem here is that you're trying to use two different implementations of gRPC together. Internally firestore uses @grpc/grpc-js, so that is what you should be using. You should only need to change the second line to const {credentials} = require('@grpc/grpc-js'); and switch the dependency to that library.
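For reference, a minimal sketch of the corrected seeding script, reusing the project ID, port, and collection name from the question:

const {Firestore} = require('@google-cloud/firestore');
// Use @grpc/grpc-js, the gRPC implementation Firestore uses internally
const {credentials} = require('@grpc/grpc-js');

const db = new Firestore({
  projectId: 'my-project-id',   // emulator project ID from the question
  servicePath: 'localhost',
  port: 8100,                   // emulator port from the question
  sslCreds: credentials.createInsecure(),
  customHeaders: {
    "Authorization": "Bearer owner"
  }
});

async function load_data() {
  await db.collection("mycollection").doc("myid").set({ foo: "test" });
}

load_data();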

Related

AWS Greengrass V2 Node Publishing problem with aws-iot-sdk-v2 JS

For the past few days I've been trying to solve the problem of publishing a message from a Lambda to the AWS cloud using Greengrass v2.
The code in Python was even provided in the documentation and only had to be slightly reworked.
When it comes to the SDK v2 for JS, the documentation contains only a minimal mention of the publish function in the AWS-CRT library.
I tried to create code using components from this library, but it looks like the library also requires a script with parameters.
This is my code, which requires installing aws-iot-device-sdk-v2:
const iotsdk = require("aws-iot-device-sdk-v2");
const mqtt = iotsdk.mqtt;
const os = require("os");
const util = require("util");

const GROUP_ID = process.env.GROUP_ID;
const THING_NAME = process.env.AWS_IOT_THING_NAME;
const THING_ARN = process.env.AWS_IOT_THING_ARN;

topic = "gg/message";
payload = JSON.stringify({ message: util.format("ping") });

function greengrassHelloWorldRun() {
  mqtt.MqttClientConnection.prototype.publish(topic, payload);
}

console.log(topic);
console.log(payload);
setInterval(greengrassHelloWorldRun, 5000);

exports.handler = function (event, context) {
  console.log("event: " + JSON.stringify(event));
  console.log("context: " + JSON.stringify(context));
};
I get errors about arguments and NAPI.
The same errors also appear when using this function as a Lambda component in the Greengrass logs.
Maybe someone has an example of how to publish a message on a topic using a Node Lambda with SDK v2.
After contacting AWS Support, I know it is impossible:
AWS doesn't support mqttProxy IPC for SDK v2 JS yet.
ChristopherTal, I'm also using the Greengrass SDKs for JS, and indeed they're still a work in progress. But I was able to send messages to IoT Core from Greengrass using the JS SDKs.
A few things to mention:
You seem to be using the aws-iot-device-sdk-v2 SDK, which is for things
The aws-greengrass-core-sdk npm package is made for components
It is important to distinguish between things and components and decide which is doing what.
To send data to IoT Core from a thing, you do indeed need to use MQTT. On the deployment page in the Greengrass console, you need to revise the deployment and add the following components:
MQTT Broker
MQTT Bridge
Client device auth
This way your thing connects to the local MQTT Broker through the client device auth component, and the MQTT Bridge decides how the traffic is routed. You can read all the info on the links above.
I even realized this using the standard mqtt npm package. You need to create a certificate and a thing using Lambda or the console, and use those certificates to access the broker.
const mqtt = require('mqtt')
const fs = require('fs')

const ca = fs.readFileSync(locationOfTheCA)
const key = fs.readFileSync(locationOfThePrivateKey)
const cert = fs.readFileSync(locationOfTheCertificate)

console.log('Welcome to MQTT Connector')

const client = mqtt.connect('mqtts://localhost:8883', {
  clientId: 'yourThingNameHere',
  ca: ca,
  key: key,
  cert: cert
})

client.on('connect', function () {
  console.log('Connected to MQTT')
  /* client.subscribe('$aws/*', function (err) {
    if (!err) {
      //client.publish('presence', 'Hello mqtt')
    }
  })*/
})

client.on('message', function (topic, message) {
  // message is Buffer
  console.log(message.toString())
  client.end()
})
Hopefully this helps you out!
Warm regards
Hacor

Different Behavior Deploying AWS Lambda Standalone vs within an Application Stack

Hi everybody, and thanks for taking the time to look at my issue/question.
I am getting different results when deploying my AWS Lambda stand-alone versus within an Application Stack.
I'm trying to connect to AWS Elasticache Redis from within my Lambda. I have .Net Core 3.1 Lambdas (using StackExchange.Redis) which can connect. But I also need to be able to connect from my Node.js Lambdas.
For the Node.js Lambdas, I'm using "node-redis" and "async-redis". I have two Lambdas which are essentially identical except that one is deployed in an Application Stack and the other is deployed as a stand-alone Lambda. Both Lambdas reference the same Lambda Layer (i.e. the same "node_modules"), have the same VPC settings, the same Execution Role, and essentially the same code.
The stand-alone Lambda connects to Redis without issue. The Application Stack Lambda does not; it exits processing before completing, but without raising any error.
At first I thought I might just need to configure my Application Stack, but I cannot find any information indicating we even can configure Application Stacks. So I'm at a loss.
The stand-alone Lambda:
exports.handler = async (event) => {
  const asyncRedis = require("async-redis");
  const redisOptions = {
    host: "XXXXXXXXX.XXXXX.XXXX.use2.cache.amazonaws.com",
    port: 6379
  };

  console.log('A');
  const client = asyncRedis.createClient(redisOptions);
  console.log(client);
  console.log('B');
  const value = await client.get("Key");
  console.log('C');
  console.log(value);
  console.log('D');
  console.log(client);
};
The output of this function is essentially:
A
{RedisClient} --> the "client" object --> Shows connected = false
B
C
{ Correct Data From Redis }
D
{RedisClient} --> the "client" object --> Shows connected = true
The Application Stack Lambda:
async function testRedis2(event, context) {
  console.log('In TestRedis2');
  const asyncRedis = require("async-redis");
  const redisOptions = {
    host: "XXXXXXXXX.XXXXX.XXXX.use2.cache.amazonaws.com",
    port: 6379
  };

  console.log('A');
  const client = asyncRedis.createClient(redisOptions);
  console.log(client);
  console.log('B');
  var value = await client.get("Key");
  console.log('C');
  console.log(value);
  console.log('D');
  console.log(client);
}

module.exports = {
  testRedis2
};
The output of this function is essentially:
In TestRedis2
A
{RedisClient} --> the "client" object --> Shows connected = false
B
I don't understand why these don't perform identically. And I don't get why I don't see further entries in the output.
Has anyone else experienced issues connecting to VPC resources from within an Application Stack?
Thanks
I stumbled across the answer through extensive trial and error. It may be obvious to Node/JS developers but, just in case another JavaScript/Node newbie has the same issue, I'll post the answer here.
The import/require and creation of the client must be at the top of the module, not in the function itself.
So, the following does work in my application stack:
const asyncRedis = require("async-redis");
const redisOptions = {
  host: "XXXXXXXXX.XXXXX.XXXX.use2.cache.amazonaws.com",
  port: 6379
};
const client = asyncRedis.createClient(redisOptions);

async function redisGet(key) {
  // console.log('In redisGet');
  return await client.get(key);
}
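To round this out, a minimal sketch of how such a module-scoped client might be called from the Lambda entry point (the handler body and key name are hypothetical, not from the original post):

const asyncRedis = require("async-redis");

// Created once at module load, so the connection survives across warm invocations
const client = asyncRedis.createClient({
  host: "XXXXXXXXX.XXXXX.XXXX.use2.cache.amazonaws.com",
  port: 6379
});

async function redisGet(key) {
  return await client.get(key);
}

// Hypothetical handler demonstrating the call
exports.handler = async (event) => {
  const value = await redisGet("Key");
  console.log(value);
  return value;
};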

Cloud function to export Firestore backup data. Using firebase-admin or #google-cloud/firestore?

I'm currently trying to build a cloud function to export my Firestore data to my Storage Bucket.
The only example I've found in the Firebase docs on how to do this:
https://googleapis.dev/nodejs/firestore/latest/v1.FirestoreAdminClient.html#exportDocuments
EXAMPLE
const firestore = require('@google-cloud/firestore');

const client = new firestore.v1.FirestoreAdminClient({
  // optional auth parameters.
});

const formattedName = client.databasePath('[PROJECT]', '[DATABASE]');
client.exportDocuments({name: formattedName})
  .then(responses => {
    const response = responses[0];
    // doThingsWith(response)
  })
  .catch(err => {
    console.error(err);
  });
From that example, it seems that I need to install @google-cloud/firestore as a dependency of my cloud function.
But I was wondering if I can access these methods using only the firebase-admin package.
I've thought of that because firebase-admin already has @google-cloud/firestore as a dependency.
> firebase-admin > package.json
"dependencies": {
  "@firebase/database": "^0.4.7",
  "@google-cloud/firestore": "^2.0.0", // <---------------------
  "@google-cloud/storage": "^3.0.2",
  "@types/node": "^8.0.53",
  "dicer": "^0.3.0",
  "jsonwebtoken": "8.1.0",
  "node-forge": "0.7.4"
},
QUESTION:
Is it possible to get an instance of the FirestoreAdminClient and use the exportDocuments method using just the firebase-admin?
Or do I really need to install @google-cloud/firestore as a direct dependency and work with it directly?
The way you're accessing the admin client is correct as far as I can tell.
const client = new admin.firestore.v1.FirestoreAdminClient({});
However, you probably won't get any TypeScript/intellisense help beyond this point since the Firestore library does not actually define detailed typings for v1 RPCs. Notice how they are declared with any types: https://github.com/googleapis/nodejs-firestore/blob/425bf3d3f5ecab66fcecf5373e8dd03b73bb46ad/types/firestore.d.ts#L1354-L1364
Here is an implementation I'm using that allows you to do whatever operations you need, based on the template provided by Firebase here: https://firebase.google.com/docs/firestore/solutions/schedule-export
In my case I'm filtering out collections from Firestore that I don't want the scheduler to automatically back up.
const { Firestore } = require('@google-cloud/firestore')

const firestore = new Firestore()
const client = new Firestore.v1.FirestoreAdminClient()
const bucket = 'gs://backups-user-data'

exports.scheduledFirestoreBackupUserData = async (event, context) => {
  const databaseName = client.databasePath(
    process.env.GCLOUD_PROJECT,
    '(default)'
  )

  const collectionsToExclude = ['_welcome', 'eventIds', 'analyticsData']

  const collectionsToBackup = await firestore.listCollections()
    .then(collectionRefs => {
      return collectionRefs
        .map(ref => ref.id)
        .filter(id => !collectionsToExclude.includes(id))
    })

  return client
    .exportDocuments({
      name: databaseName,
      outputUriPrefix: bucket,
      // Leave collectionIds empty to export all collections
      // or define a list of collection IDs:
      // collectionIds: ['users', 'posts']
      collectionIds: [...collectionsToBackup]
    })
    .then(responses => {
      const response = responses[0]
      console.log(`Operation Name: ${response['name']}`)
      return response
    })
    .catch(err => {
      console.error(err)
    })
}
firebase-admin just wraps the Cloud SDK and re-exports its symbols. You can use the wrapper, or use the Cloud SDK directly, or even a combination of the two if you want. If you want to use both, you have to declare an explicit dependency on @google-cloud/firestore in order to be able to import it directly into your code.
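As an illustration, a minimal sketch of the wrapper route using only firebase-admin (the project and bucket names are placeholders):

const admin = require('firebase-admin');
admin.initializeApp();

// The v1 admin client is reachable through the firebase-admin wrapper
const client = new admin.firestore.v1.FirestoreAdminClient({});

const name = client.databasePath('[PROJECT]', '(default)');
client.exportDocuments({
  name,
  outputUriPrefix: 'gs://[YOUR_BUCKET]', // placeholder bucket
  collectionIds: []                      // empty = export all collections
})
  .then(responses => console.log(responses[0]['name']))
  .catch(err => console.error(err));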
Here is the full explanation with code (I use it and it works very well) on how to do automated Firestore backups by mixing Cloud Scheduler, Pub/Sub and Firebase Functions: https://firebase.google.com/docs/firestore/solutions/schedule-export

How to extract data using an async/await function in Node.js?

I tried to implement a Redis cache in Node.js using MongoDB. I set the data in the cache, but I can't get the data from the cache. How do I solve this issue?
cache.js
async function Get_Value(){
  let response = await client.get('products')
  console.log("_______________")
  console.log(response)
}
I got the output: true
Expected output: the JSON data
How do I get the JSON data using the cache get method?
The Redis client does not provide a full async/await API for Node.js, so as a workaround people usually promisify the library.
const { promisify } = require('util');

const getAsync = promisify(client.get).bind(client);

async function getValue(){
  let response = await getAsync("products");
}
Another approach is to promisify the entire redis library:
const bluebird = require('bluebird');
const redis = require('redis');

bluebird.promisifyAll(redis);
Now you will also be able to call the methods using async/await.
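A small sketch of how the promisified client might then be used (promisifyAll adds Async-suffixed variants such as getAsync; this assumes the 'products' value was stored as a JSON string):

const redis = require('redis');
const bluebird = require('bluebird');

bluebird.promisifyAll(redis);

const client = redis.createClient();

async function getProducts() {
  // getAsync is the promisified variant of client.get
  const raw = await client.getAsync('products');
  return JSON.parse(raw); // assumes the cached value is a JSON string
}

getProducts().then(products => console.log(products));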

How to connect to AWS ElasticSearch using npm elasticsearch and http-aws-es?

I am using the npm elasticsearch package to search my AWS ES domain. Everything seems to work fine when I use Postman to make POST requests with my AWS IAM credentials.
I wanted to do the same in my code (node.js). I referred to this answer here:
How to make calls to elasticsearch apis through NodeJS?
Here is my code:
const elasticsearch = require('elasticsearch');
const awsHttpClient = require('http-aws-es');
const AWS = require('aws-sdk');

const client = new elasticsearch.Client({
  host: 'my-aws-es-endpoint',
  connectionClass: awsHttpClient,
  amazonES: {
    region: 'us-east-1',
    credentials: new AWS.Credentials('my-access-key', 'my-secret-key')
  }
});
But when I run client.search(), it fails with an error:
Elasticsearch ERROR: 2018-10-31T15:12:22Z
Error: Request error, retrying
POST https://my-endpoint.us-east-1.es.amazonaws.com/my-index/student/_search => Data must be a string or a buffer
It also gives me a warning
Elasticsearch WARNING: 2018-10-31T15:12:22Z
Unable to revive connection: https://my-endpoint.us-east-1.es.amazonaws.com/
When I use just the aws-sdk, it works fine (probably because I sign the request there?).
Can someone suggest what I am doing wrong here?
I was able to solve this by specifying the region. There is a problem with the elasticsearch client where it's not able to pick up the region we specify in:
amazonES: {
  region: 'us-east-1',
  credentials: new AWS.Credentials('my-access-key', 'my-secret-key')
}
I solved it by specifying the region using AWS.config.region before the above code:
AWS.config.region = 'us-east-1';
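Putting it together, a minimal sketch of the working setup (the endpoint, credentials, and the final search call are placeholders based on the question):

const elasticsearch = require('elasticsearch');
const awsHttpClient = require('http-aws-es');
const AWS = require('aws-sdk');

// Setting the global region before creating the client works around
// the client not picking up the region from the amazonES options
AWS.config.region = 'us-east-1';

const client = new elasticsearch.Client({
  host: 'my-aws-es-endpoint', // placeholder endpoint
  connectionClass: awsHttpClient,
  amazonES: {
    region: 'us-east-1',
    credentials: new AWS.Credentials('my-access-key', 'my-secret-key')
  }
});

// Hypothetical search to verify the connection
client.search({ index: 'my-index', q: 'test' })
  .then(result => console.log(result.hits))
  .catch(err => console.error(err));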
