Using the external mongo.exe, I can connect to the databases of our environments via:
mongo.exe "mongodb://aaa.unix.abc:27018,bbb.unix.abc:27018,ccc.unix.abc:27018/mydb?replicaSet=myreplicaset" --authenticationMechanism=GSSAPI --authenticationDatabase=$external --username "user#NONPROD#ABC.COM" --password "password" --ssl --sslCAFile C:\mymongostuff\ca.pem
So I have no problem connecting via batch and PowerShell scripts, but my problem comes when trying to connect from an application (whether Java or JavaScript) running on my local machine.
Below is the test script I'm trying to run (node v14.16.0, npm v6.14.11, mongodb npm library v4.13.0 on a Windows PC):
const { MongoClient } = require('mongodb');
const path = require('path');

const capem = path.join(__dirname, 'ca.pem');

async function main() {
  const uri = "mongodb://aaa.unix.abc:27018,bbb.unix.abc:27018,ccc.unix.abc:27018/mydb?replicaSet=myreplicaset";
  const mongoOpt = { sslValidate: true, sslCert: capem };
  const client = new MongoClient(uri, mongoOpt);
  try {
    await client.connect();
    await doSomething(client);
  } finally {
    await client.close();
  }
}
Running the above runs for many seconds without doing anything before giving a MongoServerSelectionError:
Reason: TopologyDescription {
  type: 'ReplicaSetNoPrimary',
  servers: ...
My suspicion is that the URI is correct, but that I somehow need to specify the "--authenticationMechanism=GSSAPI --authenticationDatabase=$external --username user#NONPROD#ABC.COM --password password --ssl --sslCAFile C:\mymongostuff\ca.pem" part outside the URI for it to be equivalent to my working batch/PowerShell scripts.
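For reference, this is the kind of options object I think the driver wants, with the shell flags moved out of the URI (a sketch based on my reading of the v4 driver options, not verified; from what I can tell, GSSAPI also needs the optional kerberos npm package installed):

const { MongoClient } = require('mongodb');

const uri = "mongodb://aaa.unix.abc:27018,bbb.unix.abc:27018,ccc.unix.abc:27018/mydb?replicaSet=myreplicaset";

const client = new MongoClient(uri, {
  authMechanism: 'GSSAPI',              // --authenticationMechanism=GSSAPI
  authSource: '$external',              // --authenticationDatabase=$external
  auth: { username: 'user#NONPROD#ABC.COM', password: 'password' },
  tls: true,                            // --ssl
  tlsCAFile: 'C:\\mymongostuff\\ca.pem' // --sslCAFile
});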
I have a Node server which handles GCP instance operations. I am trying to update the machine type of an existing running instance. I don't want to update any other properties like disk size.
const Compute = require("@google-cloud/compute");

const computeClient = new Compute.InstancesClient({
projectId: "project",
keyFilename: "keyfile",
});
let resource = {
instance: "testinstance",
instanceResource: {
machineType : "zones/us-central1-a/machineTypes/e2-standard-4",
name: "testinstance"
},
project: "project",
zone : "us-central1-a"
}
const resp1 = await computeClient.update(resource);
When I try to run the above code, this error occurs:
Stacktrace:
====================
Error: Invalid value for field 'resource.disks': ''. No disks are specified.
at Function.parseHttpError (////node_modules/google-gax/build/src/googleError.js:49:37)
at decodeResponse (///node_modules/google-gax/build/src/fallbackRest.js:72:49)
at ////node_modules/google-gax/build/src/fallbackServiceStub.js:90:42
at processTicksAndRejections (node:internal/process/task_queues:96:5)
node version: v16.14.0
@google-cloud/compute version: 3.1.2
Any solution? Any code sample to update the machine type?
If you only want to update your instance's machine type, you should use the setMachineType method directly, which is meant specifically for this. See the example below:
// Imports the Compute library
const {InstancesClient} = require('@google-cloud/compute').v1;
// Instantiates a client
const computeClient = new InstancesClient();
const instance = "instance-name";
const instancesSetMachineTypeRequestResource = {machineType: "zones/us-central1-a/machineTypes/n1-standard-1"}
const project = "project-id";
const zone = "us-central1-a";
async function callSetMachineType() {
// Construct request
const request = {
instance,
instancesSetMachineTypeRequestResource,
project,
zone,
};
// Run request
const response = await computeClient.setMachineType(request);
console.log(response);
}
callSetMachineType();
Note that the machine type can only be changed on a TERMINATED instance, as documented here. You'll need to first ensure the instance is stopped, or stop it in your code, prior to updating the machine type. More details on available methods here.
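As a hedged sketch (not verified, and the operation-waiting details vary by @google-cloud/compute version), stopping the instance before calling the function above could look like:

async function stopThenSetMachineType() {
  // The machine type can only be changed while the instance is TERMINATED,
  // so request a stop first.
  await computeClient.stop({ project, zone, instance });

  // Poll the instance status / zone operation until it reports TERMINATED
  // before continuing (omitted here for brevity).

  await callSetMachineType();
}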
I have a Google Cloud VM named cloudvm and a Node.js script to contact that virtual machine. I want to open a shell and execute
echo "this is fun" > a.txt
using an SSH client in Node.js. I have tried node-ssh with user ID, password, and private key, and the following error occurs:
Message: All configured authentication methods failed
I have used
const {NodeSSH} = require('node-ssh')
const ssh = new NodeSSH()
ssh.connect({
host: 'localhost',
username: 'steel',
privateKey: '/home/steel/.ssh/id_rsa'
})
My final goal is to pass a value into a file inside the Google Cloud VM from a Node.js environment. Any ideas?
After creating an OS user and password in Google Compute Engine, you can connect to the instance via the node-ssh library in Node.js.
const { NodeSSH } = require("node-ssh");
const ssh = new NodeSSH();

async function run() {
  // Connect with the OS user and password created on the instance
  var session = await ssh.connect({
    host: "xxx.xxx.x.xx",
    username: "xxxxxx",
    password: "xxxx",
  });

  var cmde = 'your/command to execute';
  var ot = await session.execCommand(cmde);
  console.log(ot.stdout, ot.stderr);
}

run();
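To cover the original goal of writing a value into a file on the VM, the same session can run the echo command from the question; this would go inside the async function above (a sketch, not something I have run against your VM):

// Write a value into a file on the VM over the open SSH session
var result = await session.execCommand('echo "this is fun" > a.txt');
console.log(result.stderr ? result.stderr : 'a.txt written');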
I'm looking to create a small web application that lists some data about the ingresses in my cluster. The application will be hosted in the cluster itself, so I assume I'm going to need a service account attached to a backend application that calls the Kubernetes API to get the data, then serves that up to the front end through a GET via axios etc. Am I along the right lines here?
You can use the JavaScript Kubernetes client package for Node directly in your Node application to access the kube-apiserver over its REST APIs.
npm install @kubernetes/client-node
You can use either of the following ways to provide authentication information to your Kubernetes client.
This is code which worked for me:
const k8s = require('@kubernetes/client-node');
const cluster = {
name: '<cluster-name>',
server: '<server-address>',
caData: '<certificate-data>'
};
const user = {
name: '<cluster-user-name>',
certData: '<certificate-data>',
keyData: '<certificate-key>'
};
const context = {
name: '<context-name>',
user: user.name,
cluster: cluster.name,
};
const kc = new k8s.KubeConfig();
kc.loadFromOptions({
clusters: [cluster],
users: [user],
contexts: [context],
currentContext: context.name,
});
const k8sApi = kc.makeApiClient(k8s.NetworkingV1Api);
k8sApi.listNamespacedIngress('<namespace>').then((res) => {
console.log(res.body);
});
You need to make the API client according to your ingress API version; in my case I was using NetworkingV1Api.
You can get further options from the JS client repo: https://github.com/kubernetes-client/javascript
If you have different ways to authenticate, as you mentioned a service account, that is also one of them. Yes, you will require it; however, if you are planning to run your script on the cluster only, there is no extra setup needed, because you can directly use loadFromDefault to authenticate, as in the example below:
const k8s = require('@kubernetes/client-node')
const kc = new k8s.KubeConfig()
kc.loadFromDefault()
const k8sApi = kc.makeApiClient(k8s.NetworkingV1beta1Api) // before 1.14 use extensions/v1beta1
k8sApi.listNamespacedIngress('<Namespace name>').then((res) => {
console.log(res.body);
});
You can check out these examples: https://github.com/kubernetes-client/javascript/tree/master/examples (you can also use TypeScript).
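Since the question mentions serving this data to a front end over a GET, here is a minimal sketch of wiring the same call into an Express endpoint (the route, port, and field selection are placeholders I picked, not from the client docs):

const express = require('express');
const k8s = require('@kubernetes/client-node');

const app = express();
const kc = new k8s.KubeConfig();
kc.loadFromDefault(); // when running in-cluster this picks up the pod's service account

const k8sApi = kc.makeApiClient(k8s.NetworkingV1Api);

// GET /ingresses returns the name and hosts of each ingress in the namespace
app.get('/ingresses', async (req, res) => {
  const result = await k8sApi.listNamespacedIngress('<namespace>');
  const data = result.body.items.map((ing) => ({
    name: ing.metadata.name,
    hosts: (ing.spec.rules || []).map((r) => r.host),
  }));
  res.json(data);
});

app.listen(3000);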
I have an Azure Function written with Visual Studio Code; it is a Node.js application with JavaScript code.
The application connects to an Oracle DB to run an Oracle script, and it runs on a Docker image.
I added the npm packages for the Oracle connection:
npm i express
npm i oracledb
Below my some key points of code;
Dockerfile
FROM mcr.microsoft.com/azure-functions/node:3.0
ENV AzureWebJobsScriptRoot=/home/site/wwwroot \
AzureFunctionsJobHost__Logging__Console__IsEnabled=true
COPY . /home/site/wwwroot
RUN cd /home/site/wwwroot && \
npm install
index.js
module.exports = async function (context, req) {
let responseMessage = "";
let connection;
try {
const oracledb = require('oracledb');
connection = await oracledb.getConnection({
user: "xx",
password: "xx",
connectString: req.body
});
let query = 'select * from xx where rownum=1';
result = await connection.execute(query);
responseMessage = result;
} catch (err) {
responseMessage = err.message;
} finally {
if (connection) {
try {
// Always close connections
await connection.close();
} catch (err) {
responseMessage = err.message;
}
}
}
context.res = {
body: responseMessage
};
}
Here is the folder structure of my project:
CASE1: When I run the project with "func start", the application works properly and gets the result.
CASE2: When I run it on my local Docker with the steps below, it returns an error in the HTTP response.
Run "docker build ."
Run "docker run -d -p 99:80 myimage"
It is listed in the "docker ps" output.
When I call the endpoint "http://localhost:99/api/HttpExample" I get the error:
DPI-1047: Cannot locate a 64-bit Oracle Client library: "libclntsh.so: cannot open shared object file: No such file or directory". See https://oracle.github.io/node-oracledb/INSTALL.html for help
Node-oracledb installation instructions: https://oracle.github.io/node-oracledb/INSTALL.html
You must have 64-bit Oracle client libraries in LD_LIBRARY_PATH, or configured with ldconfig.
If you do not have Oracle Database on this computer, then install the Instant Client Basic or Basic Light package from
http://www.oracle.com/technetwork/topics/linuxx86-64soft-092277.html
I searched some documentation but I can't find a solution specifically for an Azure Functions project, because my Dockerfile has to be based on "FROM mcr.microsoft.com/azure-functions/node:3.0".
What should the Dockerfile for this project be?
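Based on the Instant Client instructions in the error message, I think the Dockerfile needs the Oracle client libraries added on top of the same base image. A sketch of what I have in mind (unverified; the download URL and the instantclient directory name are assumptions):

FROM mcr.microsoft.com/azure-functions/node:3.0

ENV AzureWebJobsScriptRoot=/home/site/wwwroot \
    AzureFunctionsJobHost__Logging__Console__IsEnabled=true

# Install Oracle Instant Client (Basic Light) so node-oracledb can find libclntsh.so
RUN apt-get update && apt-get install -y libaio1 unzip curl && \
    mkdir -p /opt/oracle && cd /opt/oracle && \
    curl -o instantclient.zip https://download.oracle.com/otn_software/linux/instantclient/instantclient-basiclite-linuxx64.zip && \
    unzip instantclient.zip && rm instantclient.zip && \
    echo /opt/oracle/instantclient* > /etc/ld.so.conf.d/oracle-instantclient.conf && \
    ldconfig

COPY . /home/site/wwwroot
RUN cd /home/site/wwwroot && \
    npm install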
I am using socket.io in Node.js to implement chat functionality in my Azure cloud project. In it I have been adding the user chat history to tables using Node.js. It works fine when I run it on my local emulator, but strangely when I deploy to my Azure cloud it doesn't work, and it doesn't throw up any error either, so it's really mind-boggling. Below is my code.
var app = require('express')()
, server = require('http').createServer(app)
, sio = require('socket.io')
, redis = require('redis');
var client = redis.createClient();
var io = sio.listen(server,{origins: '*:*'});
io.set("store", new sio.RedisStore);
process.env.AZURE_STORAGE_ACCOUNT = "account";
process.env.AZURE_STORAGE_ACCESS_KEY = "key";
var azure = require('azure');
var chatTableService = azure.createTableService();
createTable("ChatUser");
server.listen(4002);
socket.on('privateChat', function (data) {
var receiver = data.Receiver;
console.log(data.Username);
var chatGUID1 = 'xxxxxxxx-xxxx-4xxx-yxxx-xxxxxxxxxxxx'.replace(/[xy]/g, function(c) {
var r = Math.random()*16|0, v = c == 'x' ? r : (r&0x3|0x8);
return v.toString(16);
});
var chatRecord1 = {
PartitionKey: data.Receiver,
RowKey: data.Username,
ChatID: chatGUID1,
Username: data.Receiver,
ChattedWithUsername: data.Username,
Timestamp: new Date(new Date().getTime())
};
console.log(chatRecord1.Timestamp);
queryEntity(chatRecord1);
});
function queryEntity(record1) {
chatTableService.queryEntity('ChatUser'
, record1.PartitionKey
, record1.RowKey
, function (error, entity) {
if (!error) {
console.log("Entity already exists")
}
else {
insertEntity(record1);
}
})
}
function insertEntity(record) {
chatTableService.insertEntity('ChatUser', record, function (error) {
if (!error) {
console.log("Entity inserted");
}
});
}
It's working on my local emulator but not on the cloud, and I came across a reading that the DateTime variable of an entity should not be null when creating a record in a cloud table. But I'm pretty sure the way I'm passing the timestamp is fine, right? Any other ideas why it might be working locally but not on the cloud?
EDIT:
I have also been getting this error when running the socket.io server, but in spite of this error the socket.io functionality works fine, so I didn't bother to care about it. I have no idea what the error means in the first place.
{ [Error: connect ECONNREFUSED]
code: 'ECONNREFUSED',
errno: 'ECONNREFUSED',
syscall: 'connect' }
A couple of things:
You shouldn't need to set Timestamp; the service should be populating that automatically when you insert a record.
When running it locally you can set the environment variables to the Windows Azure storage account settings and see if it will successfully write to the table when running on your developer box. Instead of running in the emulator, just set the environment variables and run the app directly with node.exe.
Are you running in a web role or worker role? I'm assuming it's a cloud service since you mentioned the emulator. If it's a worker role, maybe add some instrumentation to log to file to assist in debugging. If it's a web role you can add an iisnode.yml file in the root of the application, with the following line in the file to enable logging of stdout/stderr:
loggingEnabled: true
This will capture stdout/stderr to an iislog folder under the approot folder on e: or f: of the web role instance. You can remote desktop to the instance and look at the logs to see if the logs you have for successful insertion are occurring.
Otherwise, it's not obvious from the code above what's going on. Similar code worked fine for me. Relevant bits for my test code can be found at https://gist.github.com/Blackmist/5326756.
Hope this helps.