Connect to EC2 server through AWS Lambda - node.js

I have a shell script on my EC2 server and I want to trigger it from an AWS Lambda function. Can anyone suggest how I can access the file from my Lambda function? There is no connectivity issue between Lambda and EC2.
I generated the private key with PuTTYgen and kept it in an S3 bucket, and I am using the same key to connect (with this private key I am able to connect through PuTTY). I have a piece of code like this:
var driver, ssh;
driver = require('node-ssh');
ssh = new driver();
exports.handle = function(error, ctx, cb) {
  ssh = new driver({
    host: 'EC2 public ip',
    username: 'uname',
    privateKey: 'url of s3/privatekey.ppk'
  });
  ssh.connect().then(function() {
    console.log('connected');
  }, function(error) {
    console.log(error);
  });
};
First I am trying to see if I can connect to my EC2 server, and then I can run the shell script through the SSH client. But the connection is not happening. I am getting the error below.
{
  "errorMessage": "config.host must be a valid string",
  "errorType": "Error",
  "stackTrace": [
    "Object.<anonymous> (/var/task/node_modules/node-ssh/lib/helpers.js:15:13)",
    "next (native)",
    "step (/var/task/node_modules/node-ssh/lib/helpers.js:69:191)",
    "/var/task/node_modules/node-ssh/lib/helpers.js:69:437",
    "Object.<anonymous> (/var/task/node_modules/node-ssh/lib/helpers.js:69:99)",
    "Object.normalizeConfig (/var/task/node_modules/node-ssh/lib/helpers.js:42:17)",
    "/var/task/node_modules/node-ssh/lib/index.js:53:25",
    "SSH.connect (/var/task/node_modules/node-ssh/lib/index.js:52:14)",
    "exports.handle (/var/task/index.js:13:7)"
  ]
}

You would need something running on your EC2 instance to "receive" the request.
Some options:
- Run a web server and call it from the Lambda function, or
- Use EC2 Run Command, which uses an agent on the EC2 instance and can be called via the AWS API (see the sketch below), or
- Have the Lambda function push a message into an Amazon SQS queue and have the instance continually poll the queue
It would be much simpler if you could simply run the code in your Lambda function instead.
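For the Run Command option, here is a minimal sketch using the AWS SDK for Java v2. The region, instance ID, and script path are placeholders; the instance needs the SSM agent running and an instance profile that allows ssm:SendCommand.

import java.util.List;
import java.util.Map;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.ssm.SsmClient;
import software.amazon.awssdk.services.ssm.model.SendCommandRequest;
import software.amazon.awssdk.services.ssm.model.SendCommandResponse;

public class RunShellScriptViaSsm {
    public static void main(String[] args) {
        try (SsmClient ssm = SsmClient.builder().region(Region.US_EAST_1).build()) {
            // AWS-RunShellScript is the managed SSM document that runs shell commands.
            SendCommandRequest request = SendCommandRequest.builder()
                    .instanceIds("i-0123456789abcdef0") // placeholder instance ID
                    .documentName("AWS-RunShellScript")
                    .parameters(Map.of("commands",
                            List.of("cd /home/ec2-user/mydir && ./runjar.sh")))
                    .build();
            SendCommandResponse response = ssm.sendCommand(request);
            System.out.println("Command ID: " + response.command().commandId());
        }
    }
}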

Posting an answer to this question. Hope it helps.
package com.wb.mars.ingest;

import java.io.File;
import java.io.InputStream;
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.jcraft.jsch.ChannelExec;
import com.jcraft.jsch.JSch;
import com.jcraft.jsch.Session;

public class EC2ConnectLambda implements RequestHandler<CustomEventInput, CustomEventOutput> {
    public CustomEventOutput handleRequest(CustomEventInput input, Context context) {
        context.getLogger().log("Input: " + input);
        try {
            String command1 = "cd /home/ec2-user/mydir; ./runjar.sh";
            JSch jsch = new JSch();
            String user = "ec2-user";
            String host = "*.*.*.*";
            int port = 22;

            // The .pem key is bundled on the classpath of the deployment package.
            File file = new File(EC2ConnectLambda.class.getResource("/mykey.pem").toURI());
            jsch.addIdentity(file.getAbsolutePath());
            System.out.println("identity added");

            Session session = jsch.getSession(user, host, port);
            java.util.Properties config = new java.util.Properties();
            config.put("StrictHostKeyChecking", "no");
            session.setConfig(config);
            session.connect();
            System.out.println("session connected.....");

            ChannelExec channel = (ChannelExec) session.openChannel("exec");
            channel.setCommand(command1);
            InputStream in = channel.getInputStream();
            channel.connect();

            // Drain the command's output and wait for it to finish;
            // disconnecting immediately would kill the remote command.
            byte[] buffer = new byte[1024];
            while (true) {
                while (in.available() > 0) {
                    int read = in.read(buffer);
                    if (read < 0) break;
                    System.out.print(new String(buffer, 0, read));
                }
                if (channel.isClosed()) break;
                Thread.sleep(500);
            }

            channel.disconnect();
            session.disconnect();
        } catch (Exception e) {
            e.printStackTrace();
        }
        return new CustomEventOutput("lambdaInvoked");
    }
}

Related

How to specify "cluster_id" while creating a NATS JetStream connection?

In nats-streaming I could create a nats connection via the following code:
import nats, { Stan } from 'node-nats-streaming';
private _client?: Stan;
this._client = nats.connect(clusterId, clientId, { url });
And with some modifications I could do the following for the newer JetStream:
import NATS from "nats";
private _client?: NATS.JetStreamClient;
this._client = (await NATS.connect({ name: clientId, servers: url })).jetstream();
But it seems there is no cluster_id property among JetStream's ConnectionOptions!
So how can I create the same NATS client with the new NATS JetStream connect function?
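For reference, the Java client (jnats) has the same shape: the connection takes only server URLs and a connection name, with no cluster id. A minimal sketch, with placeholder URL and names:

import io.nats.client.Connection;
import io.nats.client.JetStream;
import io.nats.client.Nats;
import io.nats.client.Options;

public class JetStreamConnect {
    public static void main(String[] args) throws Exception {
        Options options = new Options.Builder()
                .server("nats://localhost:4222")  // placeholder server URL
                .connectionName("my-client-id")   // roughly the old clientId
                .build();
        Connection nc = Nats.connect(options);
        JetStream js = nc.jetStream();            // JetStream context, no cluster_id needed
        nc.close();
    }
}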

Authentication Failure when uploading to Azure Blob Storage using Azure.Storage.Blob v12.9.0

I get this error when trying to upload files to blob storage. The error is present both when I run on localhost and when I run in an Azure Function.
My connection string looks like:
DefaultEndpointsProtocol=https;AccountName=xxx;AccountKey=xxx;EndpointSuffix=core.windows.net
Authentication information is not given in the correct format. Check the value of the Authorization header.
Time:2021-10-14T15:56:26.7659660Z
Status: 400 (Authentication information is not given in the correct format. Check the value of Authorization header.)
ErrorCode: InvalidAuthenticationInfo
This used to work in the past, but recently it started throwing this error for a new storage account I created. My code looks like below:
public AzureStorageService(IOptions<AzureStorageSettings> options)
{
    _connectionString = options.Value.ConnectionString;
    _containerName = options.Value.ImageContainer;
    _sasCredential = new StorageSharedKeyCredential(options.Value.AccountName, options.Value.Key);
    _blobServiceClient = new BlobServiceClient(new BlobServiceClient(_connectionString).Uri, _sasCredential);
    _containerClient = _blobServiceClient.GetBlobContainerClient(_containerName);
}

public async Task<string> UploadFileAsync(IFormFile file, string location, bool publicAccess = true)
{
    try
    {
        await _containerClient.CreateIfNotExistsAsync(publicAccess
            ? PublicAccessType.Blob
            : PublicAccessType.None);
        var blobClient = _containerClient.GetBlobClient(location);
        await using var fileStream = file.OpenReadStream();
        // throws Exception here
        await blobClient.UploadAsync(fileStream, true);
        return blobClient.Uri.ToString();
    }
    catch (Exception e)
    {
        Console.WriteLine(e);
        throw;
    }
}
// To be able to do this, I have to create the container client via the blobService client which was created along with the SharedStorageKeyCredential
public Uri GetSasContainerUri()
{
    if (_containerClient.CanGenerateSasUri)
    {
        // Create a SAS token that's valid for one hour.
        var sasBuilder = new BlobSasBuilder()
        {
            BlobContainerName = _containerClient.Name,
            Resource = "c"
        };
        sasBuilder.ExpiresOn = DateTimeOffset.UtcNow.AddHours(1);
        sasBuilder.SetPermissions(BlobContainerSasPermissions.Write);
        var sasUri = _containerClient.GenerateSasUri(sasBuilder);
        Console.WriteLine("SAS URI for blob container is: {0}", sasUri);
        Console.WriteLine();
        return sasUri;
    }
    else
    {
        Console.WriteLine(@"BlobContainerClient must be authorized with Shared Key
            credentials to create a service SAS.");
        return null;
    }
}
Please change the following line of code:
_blobServiceClient = new BlobServiceClient(new BlobServiceClient(_connectionString).Uri, _sasCredential);
to
_blobServiceClient = new BlobServiceClient(_connectionString);
Considering your connection string has all the necessary information, you don't really need to do all the things you're doing; you will be using the BlobServiceClient(String) constructor, which accepts the connection string.
You can also delete the following line of code:
_sasCredential = new StorageSharedKeyCredential(options.Value.AccountName, options.Value.Key);
and can probably get rid of AccountName and Key from your configuration settings if they are not used elsewhere.
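The same idea in the Java SDK (azure-storage-blob), as a minimal sketch; the environment variable, container name, and blob name are placeholders:

import com.azure.storage.blob.BlobContainerClient;
import com.azure.storage.blob.BlobServiceClient;
import com.azure.storage.blob.BlobServiceClientBuilder;

public class BlobUpload {
    public static void main(String[] args) {
        String connectionString = System.getenv("AZURE_STORAGE_CONNECTION_STRING"); // assumed env var
        // The connection string already carries the account name and key,
        // so no separate StorageSharedKeyCredential is needed.
        BlobServiceClient serviceClient = new BlobServiceClientBuilder()
                .connectionString(connectionString)
                .buildClient();
        BlobContainerClient containerClient = serviceClient.getBlobContainerClient("images");
        containerClient.createIfNotExists();
        containerClient.getBlobClient("example.png")
                .uploadFromFile("example.png", true); // overwrite = true
    }
}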

Azure web app loaded from GitHub repo based on Spring Boot problem

Yesterday I linked my GitHub repo to an Azure Web App service.
My repo is built with REST requests, and some of them load data from a Firestore-based database. I ran it all on localhost on the embedded Tomcat server that comes with Spring.
I got the web app live, but the POST request that gets a resource from Firebase returned an internal 500 server error, so I checked the App Insights feature to see what exception I get:
java.lang.IllegalStateException: FirebaseApp with name [DEFAULT] doesn't exist.
at com.google.firebase.FirebaseApp.getInstance(FirebaseApp.java:165)
at com.google.firebase.FirebaseApp.getInstance(FirebaseApp.java:136)
My init code for Firebase is:
package Model;

import com.google.auth.oauth2.GoogleCredentials;
import com.google.firebase.FirebaseApp;
import com.google.firebase.FirebaseOptions;
import org.springframework.stereotype.Service;
import javax.annotation.PostConstruct;
import java.io.File;
import java.io.FileInputStream;
import java.util.Objects;

@Service
public class FBInitialize {
    @PostConstruct
    public void initialize() {
        try {
            String fileName = "name of json file with Credential.json";
            ClassLoader classLoader = getClass().getClassLoader();
            File file = new File(Objects.requireNonNull(classLoader.getResource(fileName)).getFile());
            FileInputStream serviceAccount = new FileInputStream(file);
            FirebaseOptions options = new FirebaseOptions.Builder()
                    .setCredentials(GoogleCredentials.fromStream(serviceAccount))
                    .setDatabaseUrl("https://qr-database-my data base url")
                    .build();
            FirebaseApp.initializeApp(options);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
I checked the logs and I did get the request; I am just getting this exception. Has anyone ever encountered this exception?
By the way, in the initializeApp method the getInstance method is being called.
Edit:
I found where the exception is thrown from:
public static FirebaseApp getInstance(@NonNull String name) {
    synchronized(appsLock) {
        FirebaseApp firebaseApp = (FirebaseApp)instances.get(normalize(name));
        if (firebaseApp != null) {
            return firebaseApp;
        } else {
            List<String> availableAppNames = getAllAppNames();
            String availableAppNamesMessage;
            if (availableAppNames.isEmpty()) {
                availableAppNamesMessage = "";
            } else {
                availableAppNamesMessage = "Available app names: " + Joiner.on(", ").join(availableAppNames);
            }
            String errorMessage = String.format("FirebaseApp with name %s doesn't exist. %s", name, availableAppNamesMessage);
            throw new IllegalStateException(errorMessage);
        }
    }
}
The problem may be the source code version on GitHub. Please check the build.gradle file under the android/app folder.
Add the following line:
apply plugin: 'com.google.gms.google-services'
Related Posts:
1. How to solve Exception Error: FirebaseApp with name [DEFAULT] doesn't exist?
2. FirebaseApp with name [DEFAULT] doesn't exist
I'm working with Maven. Also, I figured out that it has something to do with the location of the JSON file with the Google credentials, because repackaging changes the root content to BOOT-INF. So I took the JSON file content, put it in a String, and converted it to an InputStream so that initializeApp and the init would be independent, but still no joy :(
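For reference, a minimal sketch of reading the credentials as a classpath stream instead of a File, which also works inside a repackaged Spring Boot jar, where getResource(...).getFile() yields an unusable path; the file name is a placeholder:

import com.google.auth.oauth2.GoogleCredentials;
import com.google.firebase.FirebaseApp;
import com.google.firebase.FirebaseOptions;
import java.io.InputStream;

public class FBInitializeFromStream {
    public void initialize() throws Exception {
        // getResourceAsStream works even when the resource lives inside the jar.
        try (InputStream serviceAccount = getClass().getClassLoader()
                .getResourceAsStream("service-account.json")) { // placeholder file name
            FirebaseOptions options = new FirebaseOptions.Builder()
                    .setCredentials(GoogleCredentials.fromStream(serviceAccount))
                    .build();
            if (FirebaseApp.getApps().isEmpty()) { // avoid double initialization
                FirebaseApp.initializeApp(options);
            }
        }
    }
}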
New update:
Now I get a different exception; I think it's about security:
java.lang.IllegalStateException: Could not find TLS ALPN provider; no working netty-tcnative, Conscrypt, or Jetty NPN/ALPN available
Again, I'm working with Maven and not on Android.

ASP.NET Core API server logs events to Confluent Cloud Kafka when testing locally, but not when hosting on Azure App Service

I have some code that my ASP.NET Core Web API uses to log certain events to a Kafka server running in Confluent Cloud. When I run the API server on my local machine, it can send and receive with Kafka just fine, but when it is running on an Azure App Service, I receive "Local: Message Timed Out" errors. Is there something about Azure App Service networking that I can modify to make the Kafka network traffic flow correctly?
Here is a snippet of the code below:
public class ConfluentKafkaService {
    private readonly ClientConfig clientConfig = new ClientConfig
    {
        BootstrapServers = "...",
        ClientId = Dns.GetHostName(),
        SecurityProtocol = SecurityProtocol.SaslSsl,
        SaslMechanism = SaslMechanism.Plain,
        SaslUsername = "...",
        SaslPassword = @"..."
    };

    public async Task SendDeviceEvent(DeviceEvent de) {
        var config = new ProducerConfig(clientConfig);
        string topicName = $"...";
        using var producer = new ProducerBuilder<Null, DeviceEvent>(config)
            .Build();
        try {
            await producer.ProduceAsync(topicName, new Message<Null, DeviceEvent> { Value = de });
        }
        catch (ProduceException<Null, DeviceEvent> e) {
            Console.WriteLine($"Error producing message: {e.Message}");
        }
    }
}
My connectivity issue was ultimately caused by Azure App Service not exposing its trusted certificate store to librdkafka correctly. I downloaded cacert.pem from https://curl.haxx.se/docs/caextract.html and pointed to it by setting SslCaLocation in my ClientConfig like so:
private readonly ClientConfig clientConfig = new ClientConfig
{
    BootstrapServers = "",
    ClientId = Dns.GetHostName(),
    SecurityProtocol = SecurityProtocol.SaslSsl,
    SslCaLocation = Path.Combine("assets", "cacert.pem"),
    SaslMechanism = SaslMechanism.Plain,
    SaslUsername = "",
    SaslPassword = ""
};
For further information, see this issue: https://github.com/confluentinc/confluent-kafka-dotnet/issues/1112
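For what it's worth, this workaround is specific to librdkafka-based clients; the Java client uses the JVM's default trust store instead, so an equivalent Java producer needs only the SASL settings. A minimal sketch with placeholder bootstrap server, credentials, and topic:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class ConfluentCloudProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "pkc-xxxxx.region.provider.confluent.cloud:9092"); // placeholder
        props.put("security.protocol", "SASL_SSL");
        props.put("sasl.mechanism", "PLAIN");
        props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.plain.PlainLoginModule required "
                + "username=\"API_KEY\" password=\"API_SECRET\";"); // placeholder credentials
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());
        // No SslCaLocation equivalent needed: JSSE uses the JVM's default trust store.
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("device-events", "{\"example\":true}")); // placeholder topic
            producer.flush();
        }
    }
}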

How to use HTTPBuilder behind a proxy with authentication

I tried for 2 hours and could not make it work.
This is what I did:
grails add-proxy myproxy "--host=<host>" "--port=<port>" "--username=<username>" "--password=<psw>"
grails use-proxy myproxy
I got a connection refused error, which means the proxy is not working.
In my Groovy file, I added the proxy:
def http = new HTTPBuilder("http://headers.jsontest.com/")
http.setProxy(host, port, "http");
http.request(Method.GET, JSON) {
    uri.path = '/'
    response.success = { resp, json ->
        .....
    }
}
I then got groovyx.net.http.HttpResponseException: Proxy Authentication Required.
I could not figure out how to set the user/password for the proxy to make it work.
I tried the Java way; it is not working:
System.setProperty("http.proxyUser", username);
System.setProperty("http.proxyPassword", password);
and
Authenticator.setDefault(new Authenticator() {
    protected PasswordAuthentication getPasswordAuthentication() {
        return new PasswordAuthentication(username, password.toCharArray());
    }
});
Does anyone know how to do this?
Don't know if it will work, but there's some code over here that shows what you should do:
import groovyx.net.http.*
import static groovyx.net.http.ContentType.*
import static groovyx.net.http.Method.*
import org.apache.http.auth.*

def http = new HTTPBuilder( 'http://www.ipchicken.com' )

http.client.getCredentialsProvider().setCredentials(
    new AuthScope("myproxy.com", 8080),
    new UsernamePasswordCredentials("proxy-username", "proxy-password")
)

http.setProxy('myproxy.com', 8080, 'http')

http.request( GET, TEXT ){ req ->
    response.success = { resp, reader ->
        println "Response: ${reader.text}"
    }
}
Proxy authentication uses a different HTTP header (https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Proxy-Authorization), so simply adding the header should work for you.
String basicAuthCredentials = Base64.getEncoder().encodeToString(String.format("%s:%s", username,password).getBytes());
http.setHeaders(['Proxy-Authorization' : "Basic " + basicAuthCredentials])
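The same header approach in plain Java with HttpURLConnection, as a minimal sketch; the proxy host, port, and credentials are placeholders:

import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.Proxy;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class ProxyAuthExample {
    public static void main(String[] args) throws Exception {
        Proxy proxy = new Proxy(Proxy.Type.HTTP,
                new InetSocketAddress("myproxy.com", 8080)); // placeholder proxy
        HttpURLConnection conn = (HttpURLConnection)
                new URL("http://headers.jsontest.com/").openConnection(proxy);
        String credentials = Base64.getEncoder().encodeToString(
                "proxy-username:proxy-password".getBytes(StandardCharsets.UTF_8)); // placeholders
        conn.setRequestProperty("Proxy-Authorization", "Basic " + credentials);
        try (InputStream in = conn.getInputStream()) {
            System.out.println(new String(in.readAllBytes(), StandardCharsets.UTF_8));
        }
    }
}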
