How to specify "cluster_id" while creating a NATS JetStream connection?

In nats-streaming I could create a NATS connection via the following code:
import nats, { Stan } from 'node-nats-streaming';
private _client?: Stan;
this._client = nats.connect(clusterId, clientId, { url });
And with some modifications I could do the following for the newer JetStream:
import NATS from "nats";
private _client?: NATS.JetStreamClient;
this._client = (await NATS.connect({ name: clientId, servers: url })).jetstream();
But it seems there is no cluster_id property in JetStream's ConnectionOptions!
So how can I create the equivalent NATS client with the new JetStream connect function?
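For reference, a minimal sketch of the JetStream connection, assuming clientId and url hold the same values as above. Core NATS (which JetStream is built into) has no cluster ID concept at all: the servers list itself determines which cluster you talk to, and the name option takes over the role of the old clientId, so clusterId simply has no equivalent in ConnectionOptions.

import { connect, JetStreamClient } from "nats";

// Sketch: `clientId` and `url` are assumed to be the same values used with
// node-nats-streaming above. There is no cluster_id; the servers you connect
// to identify the cluster.
const nc = await connect({ name: clientId, servers: url });
const js: JetStreamClient = nc.jetstream();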

Related

Hyperledger Fabric Smart Contract with private data works in vscode but not in external app. How can I write data to multiple implicit collections?

Summary: An external app can submit a transaction that writes to a single implicit private collection but fails when writing to 2 implicit collections. Everything works OK when using the vscode blockchain extension instead of the external app.
Details:
I am using the vscode blockchain extension (v2.0.8). When used, it installs microfab version 0.0.11.
I am using a 2-org network created in the vscode extension using the 2-org template.
I have a Smart Contract that writes data to 2 implicit private collections (for Org1 and Org2).
Here is the relevant portion of the smart contract (typescript):
@Transaction()
public async createMyPrivateAsset(ctx: Context, myPrivateAssetId: string): Promise<void> {
    const exists: boolean = await this.myPrivateAssetExists(ctx, myPrivateAssetId);
    if (exists) {
        throw new Error(`The asset my private asset ${myPrivateAssetId} already exists`);
    }
    const privateAsset: MyPrivateAsset = new MyPrivateAsset();
    const transientData: Map<string, Uint8Array> = ctx.stub.getTransient();
    if (transientData.size === 0 || !transientData.has('privateValue')) {
        throw new Error('The privateValue key was not specified in transient data. Please try again.');
    }
    privateAsset.privateValue = transientData.get('privateValue').toString();
    const collectionName: string = await getCollectionName(ctx, ctx.clientIdentity.getMSPID());
    await ctx.stub.putPrivateData(collectionName, myPrivateAssetId, Buffer.from(JSON.stringify(privateAsset)));
}

@Transaction()
public async createMyPrivateAssetMultiple(ctx: Context, myPrivateAssetId: string): Promise<void> {
    const exists: boolean = await this.myPrivateAssetExists(ctx, myPrivateAssetId);
    if (exists) {
        throw new Error(`The asset my private asset ${myPrivateAssetId} already exists`);
    }
    const privateAsset: MyPrivateAsset = new MyPrivateAsset();
    const transientData: Map<string, Uint8Array> = ctx.stub.getTransient();
    if (transientData.size === 0 || !transientData.has('privateValue')) {
        throw new Error('The privateValue key was not specified in transient data. Please try again.');
    }
    privateAsset.privateValue = transientData.get('privateValue').toString();
    for (const mspid of ['Org1MSP', 'Org2MSP']) {
        const collectionName: string = await getCollectionName(ctx, mspid);
        await ctx.stub.putPrivateData(collectionName, myPrivateAssetId, Buffer.from(JSON.stringify(privateAsset)));
    }
}

@Transaction(false)
@Returns('MyPrivateAsset')
public async readMyPrivateAsset(ctx: Context, myPrivateAssetId: string): Promise<string> {
    const exists: boolean = await this.myPrivateAssetExists(ctx, myPrivateAssetId);
    if (!exists) {
        throw new Error(`The asset my private asset ${myPrivateAssetId} does not exist`);
    }
    const collectionName: string = await getCollectionName(ctx, ctx.clientIdentity.getMSPID());
    const privateData: Uint8Array = await ctx.stub.getPrivateData(collectionName, myPrivateAssetId);
    const privateDataString: string = JSON.parse(privateData.toString());
    return privateDataString;
}
createMyPrivateAsset writes to a single implicit collection: everything OK.
createMyPrivateAssetMultiple writes to 2 implicit collections: fails in external app.
Both transactions work perfectly when I use the vscode Transaction View to submit transactions.
For createMyPrivateAssetMultiple, I submit using the Org1 gateway and then call readMyPrivateAsset using the Org1 gateway and also using the Org2 gateway and the private data is returned correctly.
Now, when I use an external app, the transaction createMyPrivateAsset works but createMyPrivateAssetMultiple does not.
Here is the relevant portion of the app (typescript):
// connection
const gateway: Gateway = new Gateway();
const connectionProfilePath: string = path.resolve(__dirname, '..', connectionFile);
const connectionProfile = JSON.parse(fs.readFileSync(connectionProfilePath, 'utf8'));
const connectionOptions: GatewayOptions = { wallet, identity: identity, discovery: { enabled: true, asLocalhost: true } };
await gateway.connect(connectionProfile, connectionOptions);
// Get the network (channel) our contract is deployed to.
const network = await gateway.getNetwork('mychannel');
// Get the contract from the network.
const contract = network.getContract('private-contract');
Here is the transaction submission for createMyPrivateAsset:
let transientData = {
    'privateValue': Buffer.from(`Private value for asset ${assetId}`)
};
const trans: Transaction = contract.createTransaction('createMyPrivateAsset');
const buffer: Buffer = await trans.setTransient(transientData).submit(assetId);
This works fine in the app.
Here is the code for createMyPrivateAssetMultiple:
let transientData = {
    'privateValue': Buffer.from(`Private value for asset ${assetId}`)
};
const trans: Transaction = contract.createTransaction('createMyPrivateAssetMultiple');
const buffer: Buffer = await trans.setTransient(transientData).submit(assetId);
For this transaction, the app throws this (using Org1 gateway):
2022-06-07T13:21:50.727Z - warn: [TransactionEventHandler]: strategyFail: commit failure for transaction "4e9921b590a361ae01bba673e1d3d204d106522780c820055cec0345e1e67e6f": TransactionError: Commit of transaction 4e9921b590a361ae01bba673e1d3d204d106522780c820055cec0345e1e67e6f failed on peer org1peer-api.127-0-0-1.nip.io:8084 with status ENDORSEMENT_POLICY_FAILURE
The microfab docker container log includes this:
> WARN 0ff Failed fetching private data from remote peers for dig2src:[map[{b16526e4cd2ac3f431103cda23a6f64adc12acab0550eff18c1f25f1cc0d8bc1 private-contract _implicit_org_Org2MSP 6 0}:[]]], err: Empty membership channel=mychannel
...
[ org2peer] 2022-06-07 13:22:50.794 UTC [gossip.privdata] RetrievePvtdata -> WARN 220 Could not fetch all 1 eligible collection private write sets for block [20] (0 from local cache, 0 from transient store, 0 from other peers). Will commit block with missing private write sets:[txID: 4e9921b590a361ae01bba673e1d3d204d106522780c820055cec0345e1e67e6f, seq: 0, namespace: private-contract, collection: _implicit_org_Org2MSP, hash: 3e0f263d2edcfaf29df346504a40fdbadce0807938f204fe3e6bf753b751d9a3
Also, package.json includes this:
"dependencies": {
"fabric-network": "~2.1.0"
},
Can anyone shed light on this problem?
You should definitely be using fabric-network@2.2.x, not 2.1.x.
I suspect what is happening is that the VS Code client is not using service discovery and sends proposals for endorsement to all network peers, whereas the standalone client application uses service discovery by default and only sends proposals to the orgs required by the chaincode endorsement policy. To get it to consider the collection endorsement policy, you need to add a chaincode interest to the Contract object before submitting transactions:
https://hyperledger.github.io/fabric-sdk-node/release-2.2/module-fabric-network.Contract.html#addDiscoveryInterest
There is a working example of this in the Fabric samples:
https://github.com/hyperledger/fabric-samples/blob/8ca50df4ffec311e59451c2a7ebe210d9e6f0004/asset-transfer-private-data/application-javascript/app.js#L166-L178
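A minimal sketch of that call, using the contract name from the question and the implicit collection names visible in the peer logs (adjust to your own network):

// Tell discovery which collections the chaincode writes, so the endorsement
// plan covers the policies of both implicit collections.
const contract = network.getContract('private-contract');
contract.addDiscoveryInterest({
    name: 'private-contract',
    collectionNames: ['_implicit_org_Org1MSP', '_implicit_org_Org2MSP'],
});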
Alternatively you could either:
Disable service discovery in the Gateway connection options.
Explicitly set the endorsing orgs for a given transaction invocation (see the sketch after this list):
https://hyperledger.github.io/fabric-sdk-node/release-2.2/module-fabric-network.Transaction.html#setEndorsingOrganizations
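A sketch of option 2, reusing the submission code from the question; the two MSP IDs are the orgs whose implicit collections are written:

const trans: Transaction = contract.createTransaction('createMyPrivateAssetMultiple');
// Endorse with exactly the orgs that must store the private data.
trans.setEndorsingOrganizations('Org1MSP', 'Org2MSP');
const buffer: Buffer = await trans.setTransient(transientData).submit(assetId);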
In general it's much better to use service discovery, so I would not recommend option 1.
The best approach would actually be to use Fabric v2.4+ and the Fabric Gateway client API:
https://hyperledger.github.io/fabric-gateway/
With this API the client (generally) does not need to worry about the organizations required for endorsement when using private data collections or state-/key-based endorsement policies. Things just work automagically.
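For illustration, a hedged sketch of the same submission with that API, assuming a gRPC client connection, identity, and signer have been created as in the fabric-gateway samples (those setup steps are omitted here):

import { connect } from '@hyperledger/fabric-gateway';

// The gateway peer computes the endorsement plan, including implicit
// collection policies, so no discovery interest is needed client-side.
const gateway = connect({ client: grpcClient, identity, signer });
const network = gateway.getNetwork('mychannel');
const contract = network.getContract('private-contract');
await contract.submit('createMyPrivateAssetMultiple', {
    arguments: [assetId],
    transientData: { privateValue: Buffer.from(`Private value for asset ${assetId}`) },
});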

How to connect to Google Cloud SQL (PostgreSQL) from Cloud Functions?

I feel like I've tried everything. I have a Cloud Function that I am trying to connect to Cloud SQL (PostgreSQL engine). Before I do so, I pull connection string info from Secret Manager, set that up in a credentials object, and use a pg (package) Pool to run a database query.
Below is my code:
Credentials:
import { Pool } from 'pg';

// `sqlCredentials` is the questioner's own type; a minimal equivalent is assumed here.
interface sqlCredentials {
    host: string;
    database: string;
    port: string;
    user: string;
    password: string;
}

const credentials: sqlCredentials = {
    "host": "127.0.0.1",
    "database": "myFirstDatabase",
    "port": "5432",
    "user": "postgres",
    "password": "postgres1!"
};

const pool: Pool = new Pool(credentials);
await pool.query(`select CURRENT_DATE;`).catch(error => console.error(`error in pool.query: ${error}`));
Upon running the cloud function with this code, I get the following error:
error in pool.query: Error: connect ECONNREFUSED 127.0.0.1:5432
I have attempted to update the host to the private IP of the Cloud SQL instance, and also to update the host to the Cloud SQL instance name on this environment, but to no avail. Any other ideas?
Through much tribulation, I figured out the answer. Given that there is NO documentation on how to solve this, I'm going to put the answer here in hopes that I can come back here in 2025 and see that it has helped hundreds. In fact, I'm setting a reminder in my phone right now to check this URL on November 24, 2025.
Solution: The host must be set as:
/cloudsql/<googleProjectName(notId)>:<region>:<sql instanceName>
Ending code:
import { Pool } from 'pg';

const credentials: sqlCredentials = {
    "host": "/cloudsql/my-first-project-191923:us-east1:my-first-cloudsql-inst",
    "database": "myFirstDatabase",
    "port": "5432",
    "user": "postgres",
    "password": "postgres1!"
};

const pool: Pool = new Pool(credentials);
await pool.query(`select CURRENT_DATE;`).catch(error => console.error(`error in pool.query: ${error}`));

How can we configure HttpProxy on services.AddAzureClients

We have the following code in our .NET 5.0 ASP.NET Core startup class:
public void ConfigureServices(IServiceCollection services)
{
    services.AddAzureClients(builder =>
    {
        // Add a storage account client
        builder.AddBlobServiceClient(storageUrl);
        // Use the environment credential by default
        builder.UseCredential(new EnvironmentCredential());
    });
    services.AddControllers();
}
For HttpClient we are able to configure an HTTP proxy using the following code, but how do we achieve the same for BlobServiceClient?
services.AddHttpClient("SampleClient", client =>
{
    client.BaseAddress = new Uri("https://sample.client.url.com");
})
.ConfigurePrimaryHttpMessageHandler(() => new HttpClientHandler
{
    Proxy = new WebProxy("https://sample.proxy.url.com")
});
The solution is to pass BlobClientOptions, with the appropriate proxy specified, into the constructor of BlobServiceClient.
Check out this sample code, which shows how to specify a proxy for an HttpClient.
The HttpClient can be passed as a parameter when constructing a new HttpClientTransport, which can be set in the Transport property of BlobClientOptions, which can then be passed into the constructor of BlobServiceClient.
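Putting that together, a minimal sketch; the proxy URL is the placeholder from the question above, and storageUrl is assumed to be the same storage endpoint used in ConfigureServices:

using System;
using System.Net;
using System.Net.Http;
using Azure.Core.Pipeline;
using Azure.Storage.Blobs;

// Wrap an HttpClient that routes through the proxy in an HttpClientTransport,
// then attach it to BlobClientOptions before constructing the client.
var handler = new HttpClientHandler
{
    Proxy = new WebProxy("https://sample.proxy.url.com")
};
var options = new BlobClientOptions
{
    Transport = new HttpClientTransport(new HttpClient(handler))
};
var blobServiceClient = new BlobServiceClient(new Uri(storageUrl), options);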

Asp.net core API server logs events to Confluent Cloud Kafka when testing locally, but not when hosting on Azure App Service

I have some code that my Asp.net Core Web API uses to log certain events to a Kafka server running in Confluent Cloud. When I run the API server on my local machine, it can send and receive with Kafka just fine, but when it is running on an Azure App Service, I receive "Local: Message Timed Out" errors. Is there something about Azure App Service networking that I can modify to make the Kafka network traffic flow correctly?
Here is a snippet of the code below:
public class ConfluentKafkaService {
    private readonly ClientConfig clientConfig = new ClientConfig
    {
        BootstrapServers = "...",
        ClientId = Dns.GetHostName(),
        SecurityProtocol = SecurityProtocol.SaslSsl,
        SaslMechanism = SaslMechanism.Plain,
        SaslUsername = "...",
        SaslPassword = @"..."
    };

    public async Task SendDeviceEvent(DeviceEvent de) {
        var config = new ProducerConfig(clientConfig);
        string topicName = $"...";
        using var producer = new ProducerBuilder<Null, DeviceEvent>(config)
            .Build();
        try {
            await producer.ProduceAsync(topicName, new Message<Null, DeviceEvent> { Value = de });
        }
        catch (ProduceException<Null, DeviceEvent> e) {
            Console.WriteLine($"Error producing message: {e.Message}");
        }
    }
}
My connectivity issue was ultimately caused by Azure App Service not exposing its trusted certificate store to librdkafka correctly. I downloaded cacert.pem from https://curl.haxx.se/docs/caextract.html and pointed to it by setting SslCaLocation in my ClientConfig like so:
private readonly ClientConfig clientConfig = new ClientConfig
{
    BootstrapServers = "",
    ClientId = Dns.GetHostName(),
    SecurityProtocol = SecurityProtocol.SaslSsl,
    SslCaLocation = Path.Combine("assets", "cacert.pem"),
    SaslMechanism = SaslMechanism.Plain,
    SaslUsername = "",
    SaslPassword = ""
};
For further information, see this issue: https://github.com/confluentinc/confluent-kafka-dotnet/issues/1112

Connect to EC2 server through AWS lambda

I have a shell script on my EC2 server and I want to trigger it from an AWS Lambda function. Can anyone suggest how I can access the file in my Lambda function? There is no connectivity issue between Lambda and EC2.
I generated the private key with PuTTYgen and kept it in an S3 bucket, and I am using the same key to connect (with this private key I am able to connect through PuTTY). I have a piece of code like this:
var driver, ssh;
driver = require('node-ssh');
ssh = new driver();

exports.handle = function(error, ctx, cb) {
    ssh = new driver({
        host: 'EC2 public ip',
        username: 'uname',
        privateKey: 'url of s3/privatekey.ppk'
    });
    ssh.connect().then(function() {
        console.log('connected')
    }, function(error) {
        console.log(error);
    });
}
First I am trying to see if I can connect to my EC2 server, and then I can run the shell script through the SSH client. But the connection is not happening. I am getting the error below:
{
    "errorMessage": "config.host must be a valid string",
    "errorType": "Error",
    "stackTrace": [
        "Object.<anonymous> (/var/task/node_modules/node-ssh/lib/helpers.js:15:13)",
        "next (native)",
        "step (/var/task/node_modules/node-ssh/lib/helpers.js:69:191)",
        "/var/task/node_modules/node-ssh/lib/helpers.js:69:437",
        "Object.<anonymous> (/var/task/node_modules/node-ssh/lib/helpers.js:69:99)",
        "Object.normalizeConfig (/var/task/node_modules/node-ssh/lib/helpers.js:42:17)",
        "/var/task/node_modules/node-ssh/lib/index.js:53:25",
        "SSH.connect (/var/task/node_modules/node-ssh/lib/index.js:52:14)",
        "exports.handle (/var/task/index.js:13:7)"
    ]
}
You would need something running on your EC2 instance to "receive" the request.
Some options:
Run a web server and call it from the Lambda function, or
Use the EC2 Run Command, which uses an agent on the EC2 instance and can be called via the AWS API (see the sketch below), or
Have the Lambda function push a message into an Amazon SQS queue and have the instance continually poll the queue
It would be much simpler if you could simply run the code in your Lambda function instead.
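A hypothetical sketch of option 2 with the AWS SDK for JavaScript; the instance ID and script path are placeholders, and it assumes the instance runs the SSM agent with an instance profile permitting Run Command:

// Trigger the shell script via SSM Run Command instead of SSH.
const AWS = require('aws-sdk');
const ssm = new AWS.SSM();

exports.handler = async () => {
    await ssm.sendCommand({
        DocumentName: 'AWS-RunShellScript',               // built-in SSM document
        InstanceIds: ['i-0123456789abcdef0'],             // placeholder instance ID
        Parameters: { commands: ['/home/ec2-user/mydir/runjar.sh'] }
    }).promise();
};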
Posting an answer to this question. Hope it will help.
package com.wb.mars.ingest;

import java.io.File;
import java.io.InputStream;
import java.io.OutputStream;
import java.io.PrintWriter;
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.jcraft.jsch.ChannelExec;
import com.jcraft.jsch.JSch;
import com.jcraft.jsch.Session;

public class EC2ConnectLambda implements RequestHandler<CustomEventInput, CustomEventOutput> {
    public CustomEventOutput handleRequest(CustomEventInput input, Context context) {
        context.getLogger().log("Input: " + input);
        System.out.println("test");
        try {
            String command1 = "cd /home/ec2-user/mydir; ./runjar.sh";
            JSch jsch = new JSch();
            String user = "ec2-user";
            String host = "*.*.*.*";
            int port = 22;

            // Load the private key bundled with the Lambda deployment package.
            //File file = new File( EC2ConnectLambda.class.getResource( "/Linux_EC2.pem" ).toURI() );
            File file = new File(EC2ConnectLambda.class.getResource("/mykey.pem").toURI());
            String privateKeyAbsolutePath = file.getAbsolutePath();
            jsch.addIdentity(privateKeyAbsolutePath);
            System.out.println("identity added");

            Session session = jsch.getSession(user, host, port);
            System.out.println("session created.");
            java.util.Properties config = new java.util.Properties();
            config.put("StrictHostKeyChecking", "no");
            session.setConfig(config);
            session.connect();
            System.out.println("session connected.....");

            // Run the script over an exec channel.
            ChannelExec channel = (ChannelExec) session.openChannel("exec");
            OutputStream o = channel.getOutputStream();
            PrintWriter pw = new PrintWriter(o);
            InputStream in = channel.getInputStream();
            channel.setCommand(command1);
            channel.connect();

            // 4 - Clean up
            channel.disconnect();
            session.disconnect();
        } catch (Exception e) {
            System.err.println(e);
            e.printStackTrace();
        }
        return new CustomEventOutput("lambdaInvoked");
    }
}
