Passing Jenkins credentials through Node JS to shell script

My workflow goes like this:
I have a custom node module installed on my Jenkins slave that calls a few shell scripts. I would like to pass the credentials obtained through the credentials() method provided by the Jenkins declarative pipeline syntax.
What happens right now is that when I pass the username and password environment variables provided by the pipeline to the node module, which in turn passes them as arguments to the shell script that curls a file to a remote location,
I get a "bad credentials" failure.
But if I do the same thing by calling the shell script directly, without going through the node module, the same credentials work fine.
Is there any workaround for this behaviour? I am just trying to understand what is going on here.
Thanks in advance.

Related

Clone git project from cloud source repository using Python

I would like to clone a git project stored in a Google Cloud Source Repository from Python code.
I am using the git.Repo.clone_from method and it works, but only after I authenticate using the procedure described here:
https://cloud.google.com/source-repositories/docs/authentication
in the "Authenticate by using manually generated credentials" section.
This procedure creates a .gitcookies file and sets up git to use it with my repository. When I use the git.Repo.clone_from method the same cookie is used, as this method probably runs the git command internally, which also uses the .gitcookies file.
I would like to be independent of the local git configuration and pass a login/password or token directly to the git.Repo.clone_from method in my application. Is that possible? I even tried to read the cookie from .gitcookies and pass it to git.Repo.clone_from via the env parameter, setting the GIT_ASKPASS variable there, but without success, as I do not know what exactly should be set in it.
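For reference, a minimal sketch of the direction being asked about, assuming the remote accepts credentials embedded in the HTTPS clone URL (standard git behaviour, though not every host allows it); the project, repository, token, username and destination path below are placeholders, not values from the question:
import git

# Placeholders: replace with your own project, repository, token and path.
project_id = "my-project"
repo_name = "my-repo"
token = "ACCESS-TOKEN"

# Credentials embedded in the HTTPS URL, so no .gitcookies or local git
# configuration is needed if the server honours this form of authentication.
url = f"https://user:{token}@source.developers.google.com/p/{project_id}/r/{repo_name}"
repo = git.Repo.clone_from(url, "/tmp/" + repo_name)
print(repo.head.commit.hexsha)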

How to delete GKE (Google Kubernetes Engine) cluster using python?

I'm new to GKE-Python. I would like to delete my GKE (Google Kubernetes Engine) cluster using a Python script.
I found the delete_cluster() API in the google-cloud-container Python library for deleting a GKE cluster.
https://googleapis.dev/python/container/latest/index.html
But I'm not sure how to use that API and pass the required parameters in Python. Can anyone explain with an example?
Or is there any other way to delete a GKE cluster in Python?
Thanks in advance.
First you'd need to configure the Python Client for Google Kubernetes Engine as explained in this section of the link you shared. Basically, set up a virtual environment and install the library with pip install google-cloud-container.
If you are running the script within an environment such as Cloud Shell, with a user that has enough access to manage GKE resources (at least the Kubernetes Engine Cluster Admin role assigned), the client library will handle the necessary authentication automatically and the following script will most likely work:
from google.cloud import container_v1
project_id = "YOUR-PROJECT-NAME" #Change me.
zone = "ZONE-OF-THE-CLUSTER" #Change me.
cluster_id = "NAME-OF-THE-CLUSTER" #Change me.
name = "projects/"+project_id+"/locations/"+zone+"/clusters/"+cluster_id
client = container_v1.ClusterManagerClient()
response = client.delete_cluster(name=name)
print(response)
Notice that, as per the delete_cluster method documentation, you only need to pass the name parameter. If for some reason you are only provided the credentials (generally in the form of a JSON key file) of a service account that has enough permissions to delete the cluster, you'd need to modify the client in the script and use the credentials parameter to get the client correctly authenticated, in a similar fashion to:
...
client = container_v1.ClusterManagerClient(credentials=credentials)
...
Where the credentials variable holds a Credentials object built from the service account JSON key file (including its path, if it's not located in the folder where the script is running) that was provided.
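For example, a minimal sketch of building that object with the google-auth library (the key file path is a placeholder):
from google.cloud import container_v1
from google.oauth2 import service_account

# Placeholder path to the provided service account key file.
credentials = service_account.Credentials.from_service_account_file(
    "/path/to/service-account-key.json"
)
client = container_v1.ClusterManagerClient(credentials=credentials)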
Finally, notice that the response variable returned by the delete_cluster method is of the Operation class, which can be used to monitor the long-running operation, in a similar fashion to what is explained here, with the self_link attribute corresponding to the long-running operation.
After running the script you could check the operation with a curl command in a similar fashion to:
curl -X GET \
-H "Authorization: Bearer "$(gcloud auth application-default print-access-token) \
https://container.googleapis.com/v1/projects/[PROJECT-NUMBER]/zones/[ZONE-WHERE-THE-CLUSTER-WAS-LOCATED]/operations/operation-[OPERATION-NUMBER]
by checking the status field of the response (which could be in the RUNNING state while the deletion is happening). Or you could use the requests library, or any equivalent, to automate this checking of the long-running operation within your script.
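A rough sketch of that polling with requests, reusing the same application-default token as the curl example; the project number, zone and operation id are placeholders:
import subprocess
import time
import requests

# Placeholders matching the curl example above.
project_number = "PROJECT-NUMBER"
zone = "ZONE-WHERE-THE-CLUSTER-WAS-LOCATED"
operation_id = "operation-OPERATION-NUMBER"

# Reuse the application-default access token, as in the curl example.
token = subprocess.check_output(
    ["gcloud", "auth", "application-default", "print-access-token"],
    text=True,
).strip()

url = (
    f"https://container.googleapis.com/v1/projects/{project_number}"
    f"/zones/{zone}/operations/{operation_id}"
)

# Poll until the operation leaves the PENDING/RUNNING states.
while True:
    status = requests.get(url, headers={"Authorization": "Bearer " + token}).json()["status"]
    print(status)
    if status not in ("PENDING", "RUNNING"):
        break
    time.sleep(10)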
This page contains an example of the command you are trying to perform.
To add some details that are required for the command to succeed:
Your environment needs to contain the authentication environment variables (such as GOOGLE_APPLICATION_CREDENTIALS); this page contains instructions for how to set them.
Once your environment is successfully authenticated, you can run the delete cluster command like so:
from google.cloud import container_v1
client = container_v1.ClusterManagerClient()
response = client.delete_cluster(name="projects/<project>/locations/<location>/clusters/<cluster>")

Openwhisk - passing environment variables to action

I'm using a NodeJS action in OpenWhisk.
Is there any way to pass environment variables into OpenWhisk so I can read them in my NodeJS action using process.env?
This is possible, but you need to use a custom Docker runtime; the default built-in Node.js runtime does not support it. Apache OpenWhisk uses default action parameters, rather than environment variables, to pass things like credentials and other application configuration to action code.
If you extend the existing Node.js Docker runtime for Apache OpenWhisk, you can set environment variables in the build file for the image. This image can then be used as the --docker parameter value when creating the action.

jenkins: setting root url via Groovy API

I'm trying to update Jenkins' root URL via the Groovy API, so I can script the deployment of a Jenkins master without manual input (aside: why is a tool as popular with the build/devops/automation community as Jenkins so resistant to automation?)
Based on this documentation, I believe I should be able to update the URL using the following script in the Script Console.
import jenkins.model.JenkinsLocationConfiguration
jlc = new jenkins.model.JenkinsLocationConfiguration()
jlc.setUrl("http://jenkins.my-org.com:8080/")
println(jlc.getUrl())
Briefly, this instantiates a JenkinsLocationConfiguration object; calls the setter setUrl with the desired value, http://jenkins.my-org.com:8080/; and prints out the new URL to confirm that it has changed.
The println statement prints what I expect it to, but following this, the value visible through the web interface at "Manage Jenkins" -> "Configure System" -> "Jenkins URL" has not updated as I expected.
I'm concerned that the value hasn't been updated properly by Jenkins, which might lead to problems when communicating with external APIs.
Is this a valid way to fix the Jenkins root URL? If not, what is? Otherwise, why isn't the change being reflected in the config page?
You are creating a new JenkinsLocationConfiguration object and updating that new instance, not the existing one that Jenkins is actually using.
Use
jlc = JenkinsLocationConfiguration.get()
// ...
jlc.save()
to get the one from the global Jenkins configuration, update it, and save the config descriptor back.
See: https://github.com/jenkinsci/jenkins/blob/master/core/src/main/java/jenkins/model/JenkinsLocationConfiguration.java

How to hide password from jenkins shell output

I have two scripts: the first on the file system, the second inside a Jenkins job.
The second script calls the first and passes parameters to it.
The parameters include a password parameter.
How can I hide the password in the logs?
I have tried to hide the output by using the exec command, but that didn't solve the problem.
The Mask Passwords plugin does just that.
Please find below my findings and a solution (without using the Mask Passwords plugin):
Brief description of my Jenkins job:
I wrote a job which downloads artifacts from Nexus based on the parameters given at run time, then makes a database SQL connection and deploys the SQL scripts using the Maven Flyway plugin. My job takes the environment, database schema, artifact version number, Flyway command, database user and its password as input parameters.
Brief background on the problem:
While passing the password as a Maven goal parameter, it was showing up in the Jenkins console as plain text.
Although I was using a "Password Parameter" to pass the password at run time, it still appeared as plain text in the console.
I tried to use "secret text" to encrypt the password, but then my job started failing because the encrypted password was being passed to the Maven goals, which could not connect to the database.
Solution:
I used "Inject passwords to the build as environment variables" from Build Environment and defined its value as my password parameter (my password parameter name was db_password), which I pass at run time (e.g. I defined my injected password value as ${db_password}).
And this is working as expected. The password which I pass while running my job now appears as [*******].
Console log:
Executing Maven: -B -f /work/jenkins_data/workspace/S2/database-deployment-via-flyway-EDOS/pom.xml clean compile -Ddb=UAT_cms_core -DdatabaseSchema=cms-core -Dmode=info -DdeploymentVersion=1.2.9 -Ddb_user=DB_USER -Ddb_password=[*******]
