Service Discovery in Cloud Run - node.js

I have two microservices on Google Cloud Run which are meant to communicate via gRPC:
Products Service
Customer Service
Product Service is started like below:
....
let server = new grpc.Server();
server.addService(products_proto.Product.service, {...});
server.bindAsync("0.0.0.0:50051", grpc.ServerCredentials.createInsecure(), () => {
  server.start();
  console.log('Product Service Started');
});
...
How can Product Service locate Customer Service in production without explicitly specifying the port?
Or do I always have to ask my colleagues which port they exposed their microservice on? Wouldn't that be a very tedious process?
I want my product service to connect to my customer service using something like
....
new customer_proto.Customer("customer-service", grpc.credentials.createInsecure());
....
Instead of
....
new customer_proto.Customer("0.0.0.0:50052", grpc.credentials.createInsecure());
....
Can this be achieved?
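One common pattern on Cloud Run, sketched below under assumptions not stated in the question: every Cloud Run service is reachable only through its own HTTPS URL on port 443, so the client dials that hostname with TLS credentials, and the hostname is injected at deploy time. CUSTOMER_SERVICE_HOST is an environment variable name chosen purely for illustration.
const grpc = require('@grpc/grpc-js');

// Hypothetical: CUSTOMER_SERVICE_HOST is set at deploy time to the Customer
// Service's Cloud Run hostname, e.g. customer-service-<hash>-<region>.run.app
const customerHost = process.env.CUSTOMER_SERVICE_HOST;

// Cloud Run terminates TLS on port 443, so use SSL credentials rather than
// createInsecure().
const customer = new customer_proto.Customer(
  `${customerHost}:443`,
  grpc.credentials.createSsl()
);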

Related

Calling nestjs gRPC service on AWS ECS Fargate, Connection not established

I have two NestJS microservices deployed to AWS ECS (Fargate).
Service A exposes HTTP endpoints and acts as a gateway; service B uses gRPC with NestJS. Service A calls service B's gRPC API.
I've deployed both services on AWS ECS (Fargate) and use service discovery for service B. I've verified that the service discovery name can be resolved from a Cloud9 IDE.
Everything works end to end when I run them locally (I hit service A's endpoint, service A calls service B and gets the expected response). After deployment, I also verified that service B is reachable by calling it directly over its public IP with a Postman gRPC request.
However, in Fargate, when service A tries to call service B, I keep getting Error 14 UNAVAILABLE: Connection not established.
Below is how I bootstrap service B and how I initialize service B's client in service A.
Bootstrap service B
async function bootstrap() {
  const app: INestMicroservice = await NestFactory.createMicroservice(AppModule, {
    transport: Transport.GRPC,
    options: {
      url: '0.0.0.0:50051',
      package: protobufPackage,
      protoPath: join('node_modules/services-proto/proto/serviceb.proto'),
    },
  });
  app.useGlobalFilters(new HttpExceptionFilter());
  app.useGlobalPipes(new ValidationPipe({ whitelist: true, transform: true }));
  await app.listen();
}
bootstrap();
Initialize service B's client in service A
imports: [
  ClientsModule.register([
    {
      name: SERVICE_B_NAME,
      transport: Transport.GRPC,
      options: {
        url: process.env.SERVICE_B_URL,
        package: SERVICE_B_PACKAGE_NAME,
        protoPath: 'node_modules/services-proto/proto/serviceb.proto',
      },
    },
  ]),
],
Appreciate any help! Thanks!
I have tested locally and everything works end to end: I could successfully hit service A's HTTP endpoint and service A successfully invoked service B's gRPC API.
I've verified in the AWS Cloud9 IDE that the service discovery name for service B resolves successfully.
I've made sure the inbound and outbound rules are set correctly for the security groups of both service A and service B.
I've tested that service B's gRPC endpoint is reachable with a Postman gRPC request.
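For reference, a hedged sketch of what SERVICE_B_URL usually needs to look like with ECS service discovery; the namespace and service names below are placeholders, not values taken from the question.
// Placeholder value for illustration only: substitute your own Cloud Map
// service name and namespace, plus the port service B actually listens on
// (50051 above). The gRPC client expects host:port with no http:// scheme;
// if the port is omitted, the client will not dial 50051.
//   SERVICE_B_URL=serviceb.my-namespace.local:50051
const serviceBUrl = process.env.SERVICE_B_URL || 'serviceb.my-namespace.local:50051';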

How can I execute a terminal script/command on a Google Compute Engine instance remotely from my React app?

Is it possible to execute a script remotely on a Google Compute Engine (GCE) instance from my React app?
For example, I would upload a file to Google Cloud Storage, then have some script executed remotely that reads the uploaded file and modifies it (adding some metadata, etc.).
Is it possible at all?
To execute a script remotely, you have to be connected to the VM. For me, the easiest way is to deploy an endpoint on the VM (whatever it is, but remember to secure it with SSL and authentication).
Then call this endpoint from your React app. The endpoint can spawn scripts on the VM.
In the case of Cloud Storage, you can (see the sketch after this list):
Set an event that triggers a Cloud Function
Have the Cloud Function call the endpoint exposed on your VM
Plug a Serverless VPC Connector into your function to reach your VM privately (no public IP required). This is the most secure way.
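A minimal sketch of that flow, not code from the answer: a background Cloud Function triggered by a Cloud Storage finalize event that calls the endpoint on the VM. VM_ENDPOINT is a placeholder environment variable, and a Node.js 18+ runtime is assumed so that fetch is available.
// index.js — deployed as a Cloud Function with a Cloud Storage trigger.
// VM_ENDPOINT is a placeholder for the address of the Express server on the
// VM (ideally a private address reached through a Serverless VPC Connector).
exports.onFileUploaded = async (file, context) => {
  // "file" holds the metadata of the object that was just uploaded.
  console.log(`New object ${file.name} in bucket ${file.bucket}`);

  // Forward the object reference to the VM so its script can process it.
  const url = `${process.env.VM_ENDPOINT}/?bucket=${encodeURIComponent(file.bucket)}&name=${encodeURIComponent(file.name)}`;
  const response = await fetch(url);
  console.log(`VM responded with status ${response.status}`);
};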
UPDATE
Here is the code to run on your VM:
const express = require('express');
const { exec } = require('child_process');

const app = express();

app.get('/', (req, res) => {
  // Change the script here. It's pwd in this example; run whatever you want.
  exec('pwd', function callback(error, stdout, stderr) {
    // Handle the output of your bash script here.
    if (error) {
      return res.status(500).send(stderr);
    }
    res.status(200).send(stdout);
  });
});

const port = process.env.PORT || 8080;
app.listen(port, () => {});
On your VM, install Node.js and Express, then run node index.js. You can customize the port by setting the PORT environment variable to the value you want; otherwise it defaults to 8080.
Open the required firewall rule to your VM to allow calls to your Express server.
Note that this example has neither SSL on the server nor authentication, not even basic auth.
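As a minimal hardening sketch (not part of the original answer), a pre-shared token check could be registered before the route so it runs first; the x-script-token header and SCRIPT_TOKEN variable are names chosen for illustration.
// Hypothetical middleware: reject any request that does not carry the
// pre-shared token stored in the SCRIPT_TOKEN environment variable.
// Register this before app.get('/') so it runs ahead of the route handler.
app.use((req, res, next) => {
  if (req.get('x-script-token') !== process.env.SCRIPT_TOKEN) {
    return res.status(401).send('Unauthorized');
  }
  next();
});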

socket.io client not connecting to Azure server created with socket.io

// This is my client code, which previously pointed to my local server on my LAN network. It worked fine.
// I deployed my server code to an Azure machine, and it runs fine there,
// but it does not connect to the client below.
var socket = io.connect('http://104.222.195.120:4000'); // Azure IP address
socket.on('news', function (data) { // Angular client for the socket
  $rootScope.top = JSON.parse(data); // top receives the data
  $scope.$apply(function () {
    // $scope.newCustomers.push(data.customer);
    // $scope.a1 = data;
  });
});
// Does io.connect('http://104.222.195.120:4000') need more than the IP?
If you're running your server in an Azure Web App, you're going to have issues because you're trying to listen on port 4000. Web Apps only allow for ports 80 and 443.
If you're running your server in an Azure VM, you have to open port 4000 to the outside world via network security group (or endpoint if doing a Classic VM deployment).
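A hedged server-side sketch for the Web App case: bind to the port Azure provides through the PORT environment variable instead of hard-coding 4000, and have the client connect to the site URL without a custom port. The 'news' payload below is only a stand-in.
// Server: listen on the platform-provided port (falls back to 4000 locally).
const http = require('http');
const server = http.createServer();
const io = require('socket.io')(server);

io.on('connection', function (socket) {
  // Emit the same 'news' event the Angular client above listens for.
  socket.emit('news', JSON.stringify({ message: 'hello from Azure' }));
});

server.listen(process.env.PORT || 4000);

// Client: connect without a custom port, e.g.
//   var socket = io.connect('https://<your-app>.azurewebsites.net');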

Azure App Service - Detect Shutdown

I have a Node.js app hosted as an App Service on Microsoft Azure. Periodically, it shuts down. I'm trying to understand when this occurs. In an attempt to do this, I'm sending myself an email on certain events.
Currently, I'm sending an email to myself when the app starts. Then, I try to send an email when the app service stops. I'm attempting this using the following code:
const app = require('./app');
const port = 1337;
const server = app.listen(port);

// Respond to the server starting.
server.on('listening', function() {
  sendEmail('App Service - Listening', 'Web site server listening');
});

server.on('close', function() {
  sendEmail('App Service - Closed', 'Web site server closed.');
});

process.on('SIGTERM', function() {
  sendEmail('App Service - Exited', 'Process exited (via SIGTERM)');
});

process.on('SIGINT', function() {
  sendEmail('App Service - Exited', 'Process exited (via SIGINT)');
});

process.on('exit', function() {
  sendEmail('App Service - Exited', 'Process exited');
});
Please assume the sendEmail function exists and works. As mentioned, I'm successfully getting an email when the app is listening. However, I never receive one when the app goes to sleep/stops listening.
Am I missing something?
If your posted code is hosted on Azure Web Apps, you need to change the port to process.env.PORT to make your Node.js application run on Azure.
Azure Web Apps use IIS to handle script mapping and a named pipe to route HTTP requests, and only ports 80 and 443 are exposed to the public.
Meanwhile, you can set your port to process.env.PORT || 1337 so it runs both on Azure and locally, as shown below.
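Applied to the code in the question, that change would look like this (the rest of the file stays the same):
// Use the port Azure provides via iisnode when deployed; fall back to 1337 locally.
const port = process.env.PORT || 1337;
const server = app.listen(port);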
Update
Always On. By default, web apps are unloaded if they are idle for some period of time. This lets the system conserve resources. In Basic or Standard mode, you can enable Always On to keep the app loaded all the time. If your app runs continuous web jobs, you should enable Always On, or the web jobs may not run reliably.
You can configure this setting in the Azure portal:
Refer to https://azure.microsoft.com/en-us/documentation/articles/web-sites-configure/ for more details.
It could be an uncaught exception; try this to catch them:
process.on('uncaughtException', function(err) {
  console.log(err);
});

Azure webapp to elasticsearch, sometimes "A connection attempt failed" on azure

I have a web app and an Elasticsearch cluster inside a virtual network.
The web app is in one Azure catalog and the Elasticsearch cluster in another catalog/subscription. I cannot have them in the same catalog because of BizSpark subscription rules (5 accounts, $150 each).
Because of this, as far as I understand, I cannot connect the web app to the virtual network with point-to-site.
Therefore I have opened traffic in the virtual network firewall for port 9200, and indexing and searching from the web app against the Elasticsearch cluster works great. But only sometimes?!
Now and then I get this error:
A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond x.x.x.x:9200
I also sometimes get this error from my web job, which is in the same web app.
Is there something blocking my connection to Elasticsearch?
If I change the plan for my web app (e.g. Standard S1 to S1 or vice versa) it starts working again, but then after a while I get blocked again.
As Martin Laarman wrote, SetTcpKeepAlive was the problem.
NEST has implemented this in its ConnectionConfiguration with the method EnableTcpKeepAlive().
I don't know what parameters to call EnableTcpKeepAlive with; the values I set for now seem to work OK.
public static ElasticClient ElasticClient
{
    get
    {
        var uri = new Uri(ConfigurationManager.AppSettings["ElasticSearchUrl"]);
        var setting = new ConnectionSettings(uri);
        setting.SetDefaultIndex("myIndex");
        var connectionConfiguration = new ConnectionConfiguration(uri);
        connectionConfiguration.EnableTcpKeepAlive(200000, 200000);
        var client = new ElasticClient(setting, new HttpConnection(connectionConfiguration));
        return client;
    }
}
In version 7.13.0 of NEST, EnableTcpKeepAlive is on the settings object:
var settings = new ConnectionSettings(url);
settings.EnableTcpKeepAlive(TimeSpan.FromMinutes(1), TimeSpan.FromSeconds(1));
client = new ElasticClient(settings);
