cloudformation/user data...pass OS user password without echoing to logs - linux

So I am trying to figure out how to NOT display my password once it gets passed to the OS from CloudFormation. First, I am using the below in my CloudFormation script; with "NoEcho" the password that I put in is starred out...
"DBPASS" : {
"NoEcho" : "true",
"Description" : "Password for oracle user",
"Type" : "String",
"MinLength" : "1",
"MaxLength" : "20"
},
Then in the user data section of my CloudFormation template I do the below to set the password for the oracle user. But the problem is that the password is echoed to boot.log/cloud-init.log, so it is visible... I am trying to hide the password so it's not seen in the logs.
"DBPASS=",
{
"Ref": "DBPASS"
},
"\n",
"echo -e \"$DBPASS\n$DBPASS\" | passwd $oracle\n",
Then I was thinking of doing something like the below, but I'm not sure how to pass the "DBPASS" variable to the input twice..
stty -echo
read DBPASS
stty echo
My goal is to set the password for the oracle user without echoing it out to the logs...

If you need to protect sensitive information from being readable from within your EC2 instance, then you shouldn't put it in your user-data boot script at all, regardless of whether it's being stored as part of Cloud-init's default log output, because the user-data script will still always be readable as part of the instance metadata.
Refer to this Important note in the Instance Metadata and User Data section of the EC2 documentation:
Important
Although you can only access instance metadata and user data from within the instance itself, the data is not protected by cryptographic methods. Anyone who can access the instance can view its metadata. Therefore, you should take suitable precautions to protect sensitive data (such as long-lived encryption keys). You should not store sensitive data, such as passwords, as user data.
As one alternate approach to sensitive data, you could upload the content to a private S3 bucket, then download it to the EC2 instance using aws s3 cp from your user-data script. See my answer to the question, How can I (securely) download a private S3 asset onto a new EC2 instance with cloudinit? for more details on this approach.
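As a rough sketch of that approach (bucket name, key, and paths are placeholders, and the instance needs an IAM role/instance profile allowing s3:GetObject on the key), the user-data script could fetch the secret at boot and set the password without the value ever appearing in the template:

#!/bin/bash
# Pull the password from a private bucket instead of embedding it in user data.
aws s3 cp s3://my-private-bucket/oracle-password.txt /root/dbpass.txt
DBPASS=$(cat /root/dbpass.txt)

# chpasswd reads "user:password" pairs from stdin, so the value is piped, not printed.
echo "oracle:$DBPASS" | chpasswd

# Remove the local copy once the password has been set.
rm -f /root/dbpass.txt

Note this only mitigates the logging issue; anyone who can use the instance role can still fetch the same object.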

You can pass your DB credentials as a Parameter from the command line. You will need to pass those credentials while launching the CloudFormation stack, but they will not be visible anywhere. Check out this template, where the DB parameters are provided as parameters ('default' is not set in the parameters, so you have to pass them while launching your CloudFormation stack).
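For example (stack and template names here are placeholders), the credential can be supplied at launch time with the AWS CLI so it never appears in the template:

aws cloudformation create-stack \
  --stack-name my-db-stack \
  --template-body file://template.json \
  --parameters ParameterKey=DBPASS,ParameterValue=MySecretPassword

Because the parameter has NoEcho set, describe-stacks and the console should display its value as asterisks.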

Related

How to slurp a value from a file and assign it to a variable inside a puppet module

I'm pretty new to puppet and have run into an issue.
We have a proprietary home-grown API-based secrets management platform. We can either query the API directly or configure so that the secrets for that host are mounted to the root filesystem.
My problem is I can't figure out how to get that information within the context of a puppet module and into a variable so that I can use it. It seems you can't get stdout/stderr back from exec (or can you?), otherwise this would be cake.
So for simplicity, let's say my secret is /etc/app/example/foo.
$roles.each |$role| {
  case downcase($role) {
    'foo': {
      # SOMEHOW I NEED TO GET TOKEN FROM FILESYSTEM OR API CALL HERE
      $token = <GET TOKEN SOMEHOW>
      # here I need to do something with my value
      exec { "my description":
        command     => '/bin/foo',
        environment => ["TOKEN=${token}"],
      }
    }
  }
}
This is basically what I need to do at a basic level. It doesn't matter if I call curl directly (preferred approach) or read a mounted file.
Thx for any help.
you can't get stdout/stderr back from exec (or can you) otherwise this would be cake.
You cannot capture the standard output or error of an Exec's command for reuse, but Puppet's built-in generate() function serves exactly the purpose of executing a command and capturing its output. Normally that would run the command on the server, during catalog compilation, but if you want it to run on the client instead then you can defer its execution. One of the primary purposes for deferring functions is for interaction with secret stores.
With that said, you might want to consider wrapping the whole thing up in a custom resource type. That's maybe a bit more work (especially if you don't speak Ruby), but it's a lot more flexible, and it should make for cleaner and clearer code on the Puppet DSL side, too.
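A minimal sketch of the generate() route for the example above (the resource title is a placeholder, and the trailing newline from cat may need stripping); as written this runs during catalog compilation on the server, so if the secret file only exists on the agent you would wrap the call in a Deferred as described above:

$roles.each |$role| {
  case downcase($role) {
    'foo': {
      # generate() runs the command and returns its standard output as a string.
      $token = generate('/bin/cat', '/etc/app/example/foo')

      exec { 'run foo with token':
        command     => '/bin/foo',
        environment => ["TOKEN=${token}"],
      }
    }
  }
}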

What is the difference between `process.env.USER` and `process.env.USERNAME` in Node?

This is the most robust documentation I can find for the process.env property: https://nodejs.org/api/process.html#process_process_env.
It mentions USER, but not USERNAME. On my machine (Windows/Bash), when I print the contents of process.env, I see USERNAME (my windows username) but not USER. Similarly, echo $USERNAME shows my name but echo $USER returns nothing.
What is the difference between USER and USERNAME? Is it an operating system thing? Are they interchangeable?
The documentation about process.env that you linked to shows an example environment; it is not meant to be normative. process.env can be basically anything -- its values generally have OS defaults provided by the shell, but ultimately they are controlled by the user and/or the process that launched your process.
i.e. a user could run
$ USER=lies node script.js
...and process.env would not contain the real username.
If you're interested in getting information about the user your process is running as, call os.userInfo(), which is (mostly1) consistent across platforms.
> os.userInfo()
{ uid: -1,
gid: -1,
username: 'josh',
homedir: 'C:\\Users\\josh',
shell: null }
1 - on Windows, uid, gid, and shell are useless, as seen above
os.userInfo() calls uv_os_get_passwd, which returns the actual current effective user, regardless of what's in environment variables.
uv_os_get_passwd Gets a subset of the password file entry for the current effective uid (not the real uid). The populated data includes the username, euid, gid, shell, and home directory. On non-Windows systems, all data comes from getpwuid_r(3). On Windows, uid and gid are set to -1 and have no meaning, and shell is NULL.
process.env is the process's environment variables, which are supplied by the OS to the process.
This object can really contain just about anything, as specified the OS and the process that launches it, but by default Windows stores the username in USERNAME and Unix-like systems (Linux, macOS, etc.) store it in USER.
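A quick way to see the difference on your own machine (output will vary by platform):

const os = require('os');

// Environment variables: whichever one your OS/shell happens to set.
console.log(process.env.USER);      // set on Linux/macOS, usually undefined on Windows
console.log(process.env.USERNAME);  // set on Windows, usually undefined on Linux/macOS

// os.userInfo() asks the OS directly and works on both.
console.log(os.userInfo().username);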
I was having a similar issue when trying to connect node.js to mysql via dotenv.
None of the many answers on the web resolved my issue.
Everything worked perfectly well without the .env file, but only with the information required for authentication hard-coded in the app.js file. I tried, unsuccessfully, all of the posted answers, which included (but were not limited to):
changing the information inside the .env file to be with and without ""
changing the name of the .env file
changing the path of the .env file
describing the path to .env file
writing different variations of the dotenv commands inside app.js
At last, I checked whether I had installed dotenv using the npm install dotenv command, and also tried printing the variable with console.log(dotenv.MY_ENV_VAR);, which again showed undefined.
The issue was related to the fact that dotenv confused USER (of the system; like you, I was using Linux) with USERNAME (of the mysql database). USER actually returns the current system user instead of the mysql database user, which I had set as USERNAME in the .env file for convenience. Now it was able to connect to the database!
To check this, you could use:
console.log(process.env.USER);
and:
console.log(process.env.USERNAME);
The first gives you the system user, whereas the second gives the database user.
Actually, any name can be used for the variable that holds the mysql database username, as long as it does not match the reserved name for the system username in Linux, which is USER.
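A minimal sketch of that setup (the names here are just examples); the only rule is that the .env variable holding the database user must not be called USER:

# .env -- USER would collide with the Linux system variable, so use another name
DB_USERNAME=myDbUser
DB_PASSWORD=myDbPassword

// app.js
require('dotenv').config();
const mysql = require('mysql');

const connection = mysql.createConnection({
  host: 'localhost',
  user: process.env.DB_USERNAME,      // not process.env.USER
  password: process.env.DB_PASSWORD
});
connection.connect();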

MongoDB authentication shell/console

I have a Node.js program which uses MongoDB as its database. Now... everyone can access the mongo shell with no issues at all.
Is this how it is meant to be? I want to keep the mongo shell away from anyone else, i.e. you have to authenticate before using the shell. The reason is that I don't want people deleting tables in the database or inserting/modifying documents through the console.
Is there a way to do this? I had a look at https://docs.mongodb.com/manual/security/. However, I am not sure how to implement this in my Node.js program (keeping the password a secret).
Any help would be appreciated. Thanks
A few solutions:
Restrict access to your db to only the required IP addresses. If your app and database are on the same machine, that would be 127.0.0.1 only, plus maybe your PC so you can run queries in a GUI.
Enforce authentication as in this link, with a strong password.
To keep the password 'secret' in your Node program, which I understand as "not hardcoded", make it an env variable and give it to node at runtime, or write it in a file that doesn't live in your repo (.gitignore works too).
With a valid user/password, here's how to authenticate to mongodb using Node:
A mongodb address has 7 components:
protocol: "mongodb://",
host: "localhost",
user: "user",
password: "password",
options: "?authMechanism=MONGODB-CR",
port: "27017",
db: "db_name"
Which all together give a string like:
mongodb://user:password@localhost:27017/db_name?authMechanism=MONGODB-CR
That should be enough for Node to connect using the native Mongo driver.
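For example, a minimal sketch with the native driver (3.x or later), taking the password from an environment variable rather than hard-coding it:

const { MongoClient } = require('mongodb');

// e.g. start the app with: MONGO_PASS=password node app.js
const pass = encodeURIComponent(process.env.MONGO_PASS);
const url = `mongodb://user:${pass}@localhost:27017/db_name?authMechanism=MONGODB-CR`;

MongoClient.connect(url)
  .then(client => {
    // Authenticated; run queries through client.db('db_name') here.
    return client.close();
  })
  .catch(console.error);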
And to authenticate in the shell :
use db_name
db.auth("user", "password" )
or, directly on connection :
mongo -u "user" -p "password" --authenticationDatabase "db_name"

jenkins: setting root url via Groovy API

I'm trying to update Jenkins' root URL via the Groovy API, so I can script the deployment of a Jenkins master without manual input (aside: why is a tool as popular with the build/devops/automation community as Jenkins so resistant to automation?)
Based on this documentation, I believe I should be able to update the URL using the following script in the Script Console.
import jenkins.model.JenkinsLocationConfiguration
jlc = new jenkins.model.JenkinsLocationConfiguration()
jlc.setUrl("http://jenkins.my-org.com:8080/")
println(jlc.getUrl())
Briefly, this instantiates a JenkinsLocationConfiguration object; calls the setter setUrl with the desired value, http://jenkins.my-org.com:8080/; and prints out the new URL to confirm that it has changed.
The println statement prints what I expect it to, but following this, the value visible through the web interface at "Manage Jenkins" -> "Configure System" -> "Jenkins URL" has not updated as I expected.
I'm concerned that the value hasn't been updated properly by Jenkins, which might lead to problems when communicating with external APIs.
Is this a valid way to fix the Jenkins root URL? If not, what is? Otherwise, why isn't the change being reflected in the config page?
You are creating a new JenkinsLocationConfiguration object and updating that new one, not the existing one that Jenkins is actually using.
use
jlc = JenkinsLocationConfiguration.get()
jlc.setUrl("http://jenkins.my-org.com:8080/")   // the setter call from your script
jlc.save()
to get the one from the global jenkins configuration, update it and save the config descriptor back.
see : https://github.com/jenkinsci/jenkins/blob/master/core/src/main/java/jenkins/model/JenkinsLocationConfiguration.java

How to hide password from jenkins shell output

I have two scripts: the first on the file system, the second inside a Jenkins job.
The second script calls the first and passes parameters into it.
The parameters include a password parameter.
How can I hide the password in the logs?
I have tried to hide the output by using the exec command, but that didn't solve the problem.
The Mask Passwords plugin does just that.
Please find below my findings, with a solution that does not use the Mask Passwords plugin:
Brief description of my Jenkins job:
I wrote a job which downloads the artifacts from Nexus based on the parameters given at run-time, then makes a database SQL connection and deploys the SQL scripts using the Maven Flyway plugin. My job takes Environment, Database Schema, Artifact version number, Flyway command, Database User, and its password as input parameters.
Brief background about the problem:
While passing the PASSWORD as a Maven goal parameter, it was showing up in the Jenkins console as plain text.
Although I was using a "Password Parameter" to pass the password at run-time, it still came out as plain text in the console.
I tried to use "secret text" to encrypt the password, but then my job started failing because the encrypted password was being passed to the Maven goals, which could not connect to the DB.
Solution:
I used "Inject passwords to the build as environment variables" from Build Environment and defined its value as my "password parameter" (my password parameter name was db_password) which I am passing as parameter at run-time (eg.: I defined my inject password value as : ${db_password} ).
And this is working as expected. The password which I am passing while running my job is coming as [*******]
[console log:
Executing Maven: -B -f /work/jenkins_data/workspace/S2/database-deployment-via-flyway-EDOS/pom.xml clean compile -Ddb=UAT_cms_core -DdatabaseSchema=cms-core -Dmode=info -DdeploymentVersion=1.2.9 -Ddb_user=DB_USER -Ddb_password=[*******]
]
