I need to SFTP into an Amazon EC2 instance to send files of data farmed from a Firestore database. I'm trying to open the connection, but I need access to the EC2 secret key file inside the Cloud Function.
I've done slightly similar things, such as with Stripe and its secret key, so I believe this should be possible. How do I upload my secret key file so I can have access to it in the function below?
return sftp.connect({
  host: 'xxxxxxxxxxxx',
  port: 'xxxx',
  username: 'xxxxxx',
  privateKey: 'filepath'
})
I simply put the secret key in the main directory and read it in with:
const privateKey = require('fs').readFileSync('./xxxxxxx.pem', { encoding: 'utf8' });
I may ask another question later to see whether this is secure, but I don't see why not.
I need to get the plaintext value for some 70+ secrets in my GitHub organisation.
There is an endpoint to get the list of secret names, which I am currently using:
let secretsData = await octokit.request("GET /orgs/{org}/actions/secrets", {
org: process.env.org,
});
This gives me back a list of secret names; I need the plaintext value for each of these keys.
Use case:
I am migrating from GitHub Actions secrets to AWS Secrets Manager, so I need a nice way to get the keys and their plaintext values.
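For what it's worth, the list endpoint can be paginated to collect all 70+ names in one go. A sketch using octokit.paginate is below; note that GitHub's REST API only ever returns secret names and metadata, never the secret values themselves:

```javascript
// Sketch: gather every secret name across pages. `octokit.paginate` follows
// the Link headers and flattens the `secrets` array from each page.
async function listOrgSecretNames(octokit, org) {
  const secrets = await octokit.paginate('GET /orgs/{org}/actions/secrets', {
    org,
    per_page: 100,
  });
  return secrets.map((secret) => secret.name);
}
```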
I'm writing an API using Node.js and Express. My database got hacked, so I decided to use a DigitalOcean Managed Database. DigitalOcean Managed Databases require SSL, and they ONLY provide you with one CA certificate. In all the tutorials out there, SSL requires three files, and I didn't find any tutorial on how to connect node-pg with only one file. I finally found the solution and want to share it with the community. Hopefully I save someone a few hours of digging.
The certificate that DigitalOcean provides you is a root certificate, NOT a client certificate. In node-pg, cert refers to the client certificate and ca refers to the root certificate. The ca option is the one that should be used, not cert.
ca: root.crt (for incoming messages; DigitalOcean gives you this one)
key: postgresql.key (you don't need it)
cert: client certificate (you don't need it)
const fs = require('fs');

const config = {
  database: 'database-name',
  host: 'host-or-ip',
  user: 'username',
  password: 'password',
  port: 1234,
  // this object will be passed to the TLSSocket constructor
  ssl: {
    ca: fs.readFileSync('/path/to/digitalOcean/certificate.crt').toString()
  }
}
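A small variation, as a sketch, in case you deploy somewhere you'd rather not keep the certificate file on disk: store the PEM in an environment variable instead (the DO_CA_CERT variable name and helper below are my own invention, not from node-pg):

```javascript
// Sketch: build the same ssl block from an env var instead of a file path.
function buildPgSsl(caPem) {
  return {
    // Only `ca` is needed for a DigitalOcean managed database;
    // no client cert/key is required.
    ca: caPem,
  };
}

const config = {
  // ...database/host/user/password/port as above...
  ssl: buildPgSsl(process.env.DO_CA_CERT || ''),
};
```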
If this answer was useful to you, give it a thumbs up so other people can find it as well.
I have added a Google Cloud service account to a project and it's working. But the problem is that after an hour (I think), I get this error:
The API returned an error: TypeError: source.hasOwnProperty is not a function
Internal Server Error
and I need to restart the application to make it work.
In this StackOverflow post, I found this:
Once you get an access token it is treated in the same way - and is
expected to expire after 1 hour, at which time a new access token will
need to be requested, which for a service account means creating and
signing a new assertion.
but it didn't help.
I'm using Node.js and AWS Secrets Manager.
The code I used to authorize:
const jwtClient = new google.auth.JWT(
  client_email,
  null,
  private_key,
  scopes
);
jwtClient.authorize((authErr) => {
  if (authErr) {
    const deferred = q.defer();
    deferred.reject(new Error('Google Drive authentication error!'));
  }
});
Any ideas?
Hint: is there any policy in AWS Secrets Manager for accessing a secret, or in Google Cloud for accessing a service account? For example, accessing it locally versus online?
[NOTE: You are using a service account to access Google Drive. A service account will have its own Google Drive. Is this your intention or is your goal to share your Google Drive with the service account?]
Is there any policy in AWS secret to access a secret or in google
cloud to access a service account? for example access in local or
online?
I am not sure what you are asking. AWS has IAM policies to control secret management. Since you are able to create a Signed JWT from stored secrets, I will assume that this is not an issue. Google does not have policies regarding accessing service accounts - if you have the service account JSON key material, you can do whatever the service account is authorized to do until the service account is deleted, modified, etc.
Now on to the real issue.
Your Signed JWT has expired and you need to create a new one. You need to track the lifetime of tokens that you create and recreate/refresh the tokens before they expire. The default expiration in Google's world is 3,600 seconds. Since you are creating your own token, there is no "wrapper" code around your token to handle expiration.
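As a sketch of that bookkeeping (the 60-second skew and the function name below are my own choices, not from any library; Google's JWT clients expose the expiry as epoch milliseconds in credentials.expiry_date):

```javascript
// Sketch: decide whether a token should be refreshed before use.
function tokenIsExpiring(expiryDateMs, skewMs = 60000, nowMs = Date.now()) {
  // Treat the token as expiring `skewMs` early so a request never goes
  // out with a token that dies mid-flight.
  return expiryDateMs - nowMs <= skewMs;
}

// Before each Drive call: if tokenIsExpiring(expiry), create and sign a
// new assertion (or call the client's refresh method) first.
```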
The error that you are getting is caused by a code crash. Since you did not include your code, I cannot tell you where. However, the solution is to catch errors so that expiration exceptions can be managed.
I recommend instead of creating the Google Drive Client using a Signed JWT that you create the client with a service account. Token expiration and refresh will be managed for you.
Very few Google services still support Signed JWTs (which your code is using). You should switch to using service accounts, which start off with a Signed JWT and then exchange that for an OAuth 2.0 Access Token internally.
There are several libraries that you can use. Either of the following will provide the features that you should be using instead of crafting your own Signed JWTs.
https://github.com/googleapis/google-auth-library-nodejs
https://github.com/googleapis/google-api-nodejs-client
The following code is an "example" and is not meant to be tested and debugged. Change the scopes in this example to match what you require. Remove the section where I load a service-account.json file and replace with your AWS Secrets code. Fill out the code with your required functionality. If you have a problem, create a new question with the code that you wrote and detailed error messages.
const {GoogleAuth} = require('google-auth-library');
const {google} = require('googleapis');
const key = require('./service-account.json');
/**
* Instead of specifying the type of client you'd like to use (JWT, OAuth2, etc)
* this library will automatically choose the right client based on the environment.
*/
async function main() {
  const auth = new GoogleAuth({
    credentials: {
      client_email: key.client_email,
      private_key: key.private_key,
    },
    scopes: 'https://www.googleapis.com/auth/drive.metadata.readonly'
  });
  const drive = google.drive('v3');
  // List Drive files.
  drive.files.list({ auth: auth }, (listErr, resp) => {
    if (listErr) {
      console.log(listErr);
      return;
    }
    resp.data.files.forEach((file) => {
      console.log(`${file.name} (${file.mimeType})`);
    });
  });
}
main();
Currently, I am accessing an AWS Parameter Store value as an environment variable. It is defined in serverless.yml like so:
environment:
  XYZ_CREDS: ${ssm:xyzCreds}
In code, I access this like so: process.env.XYZ_CREDS
I need to move this value to AWS Secrets Manager and access xyzCreds in the same way.
Based on the serverless documentation, I tried this:
custom:
  xyzsecret: ${ssm:/aws/reference/secretsmanager/XYZ_CREDS_SECRET_MANAGERa~true}
environment:
  XYZ_CREDS: ${self:custom.xyzsecret}}
But it's not working. Please help!
After struggling with this issue myself, I found the solution that worked for me.
Assume that we have a secret XYZ_CREDS where we store user and password key-value pairs. AWS Secrets Manager stores them in JSON format: {"user": "test", "password": "xxxx"}
Here is how to put the user and password into a Lambda function's environment variables:
custom:
  xyzsecret: ${ssm:/aws/reference/secretsmanager/XYZ_CREDS~true}

functions:
  myService:
    handler: index.handler
    environment:
      username: ${self:custom.xyzsecret.user}
      password: ${self:custom.xyzsecret.password}
I'm using Serverless 1.73.1 to deploy to CloudFormation.
Hope this helps others.
Given that the name of your secret in Secrets Manager is correct, I think you might have an extra "a" after "MANAGER", before the decryption flag.
Secrets Manager stores secrets in key-value/JSON format, so specify the variables individually, e.g.:
environment:
  user_name: ${self:custom.xyzsecret.username}
  password: ${self:custom.xyzsecret.password}
Otherwise, pass the Secrets Manager secret name and decrypt it using the aws-sdk in your code:
environment:
  secretkey_name: XYZ_CREDS_SECRET_MANAGERa
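A sketch of that in-code decryption route (assuming the Lambda's execution role has secretsmanager:GetSecretValue; the helper names below are my own):

```javascript
// Sketch: fetch and parse the JSON secret at runtime instead of baking
// its values into the deployment's environment variables.
function parseSecretString(secretString) {
  // Secrets Manager stores key/value secrets as a JSON string,
  // e.g. '{"user":"test","password":"xxxx"}'.
  return JSON.parse(secretString);
}

async function getCreds(secretName) {
  // Required lazily so the parsing helper works without the SDK installed.
  const AWS = require('aws-sdk');
  const client = new AWS.SecretsManager();
  const res = await client.getSecretValue({ SecretId: secretName }).promise();
  return parseSecretString(res.SecretString);
}

// Usage in a handler (sketch):
// const creds = await getCreds(process.env.secretkey_name);
```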
I want to use Google Cloud Storage in Node.js but authenticate with google-auth-library.
Specifically: I want to host this on Heroku, so I want to keep the secret in an environment variable, not in a file (as I'd have to commit the file to deploy to Heroku). Basically what is suggested in the auth library:
https://github.com/googleapis/google-auth-library-nodejs#loading-credentials-from-environment-variables
But I can't seem to pass the resulting client to the Storage constructor?
Reading the code [1,2,3,4,5], you should be able to pass credentials in as constructor options:
const { Storage } = require('@google-cloud/storage');

const storageOptions = {
  projectId: 'your-project-id',
  credentials: {
    client_email: 'your-client-email',
    private_key: 'your-private-key'
  }
};
const client = new Storage(storageOptions);
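To tie this back to the Heroku goal, here is a sketch of pulling those fields out of a single environment variable holding the service-account JSON (the GCLOUD_CREDENTIALS variable name is made up; use whatever config var you actually set):

```javascript
// Sketch: parse the whole service-account JSON out of one env var and map
// it onto the options the Storage constructor accepts.
function credentialsFromEnv(rawJson) {
  const key = JSON.parse(rawJson);
  return {
    projectId: key.project_id,
    credentials: {
      client_email: key.client_email,
      private_key: key.private_key,
    },
  };
}

// Usage (sketch):
// const { Storage } = require('@google-cloud/storage');
// const client = new Storage(credentialsFromEnv(process.env.GCLOUD_CREDENTIALS));
```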