I'm new to GKE-Python. I would like to delete my GKE (Google Kubernetes Engine) cluster using a Python script.
I found a method, delete_cluster(), in the google-cloud-container Python library for deleting a GKE cluster.
https://googleapis.dev/python/container/latest/index.html
But I'm not sure how to use that API by passing the required parameters in Python. Can anyone explain it to me with an example?
Or is there any other way to delete a GKE cluster in Python?
Thanks in advance.
First you'd need to configure the Python Client for Google Kubernetes Engine as explained in this section of the link you shared. Basically, set up a virtual environment and install the library with pip install google-cloud-container.
If you are running the script within an environment such as Cloud Shell, with a user that has enough access to manage GKE resources (at least the Kubernetes Engine Cluster Admin role assigned), the client library will handle the necessary authentication automatically and the following script will most likely work:
from google.cloud import container_v1
project_id = "YOUR-PROJECT-NAME" #Change me.
zone = "ZONE-OF-THE-CLUSTER" #Change me.
cluster_id = "NAME-OF-THE-CLUSTER" #Change me.
name = "projects/"+project_id+"/locations/"+zone+"/clusters/"+cluster_id
client = container_v1.ClusterManagerClient()
response = client.delete_cluster(name=name)
print(response)
Notice that, as per the delete_cluster method documentation, you only need to pass the name parameter. If for some reason you are only provided the credentials (generally in the form of a JSON key file) of a service account that has enough permissions to delete the cluster, you'd need to modify the client in the script and use the credentials parameter to get the client correctly authenticated, in a similar fashion to:
...
client = container_v1.ClusterManagerClient(credentials=credentials)
...
Where the credentials variable is a credentials object built from the service account JSON key file that was provided, rather than just the filename (and path) of that file.
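For instance, a minimal sketch of building that credentials object with the google-auth library (the key file path is a placeholder):
from google.oauth2 import service_account
from google.cloud import container_v1

# Placeholder path to the provided service account key file.
credentials = service_account.Credentials.from_service_account_file(
    "/path/to/service-account-key.json"
)
client = container_v1.ClusterManagerClient(credentials=credentials)
Depending on the library version, ClusterManagerClient.from_service_account_file("/path/to/service-account-key.json") may also be available as a one-call shortcut.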
Finally, notice that the response variable returned by the delete_cluster method is of the Operation class, which can be used to monitor a long-running operation in a similar fashion to how it is explained here, with the self_link attribute corresponding to the long-running operation.
After running the script you could use a curl command in a similar fashion to:
curl -X GET \
-H "Authorization: Bearer "$(gcloud auth application-default print-access-token) \
https://container.googleapis.com/v1/projects/[RPOJECT-NUMBER]/zones/[ZONE-WHERE-THE-CLUSTER-WAS-LOCATED]/operations/operation-[OPERATION-NUMBER]
and check the status field of the response to that curl command (it could be in the RUNNING state while the deletion is still happening). Or you could use the requests library, or any equivalent, to automate this check of the long-running operation within your script.
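As a rough sketch of that automation (assuming response is the Operation returned by delete_cluster above, and that its self_link attribute holds the same URL used in the curl command), you could poll it with an authorized session from the google-auth library:
import time

import google.auth
from google.auth.transport.requests import AuthorizedSession

# Application Default Credentials, the same ones the client library uses.
credentials, _ = google.auth.default(
    scopes=["https://www.googleapis.com/auth/cloud-platform"]
)
authed_session = AuthorizedSession(credentials)

# self_link is the full URL of the long-running operation.
operation_url = response.self_link

while True:
    status = authed_session.get(operation_url).json().get("status")
    print("Operation status:", status)
    if status == "DONE":
        break
    time.sleep(10)  # Wait a bit before polling again.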
This page contains an example for the command you are trying to perform.
To give some more details that are required for the command to succeed:
Your environment needs to contain the required authentication environment variables; this page contains instructions for how to set them.
Once your environment is successfully authenticated, we can run the delete cluster command like so:
from google.cloud import container_v1
client = container_v1.ClusterManagerClient()
response = client.delete_cluster(name="projects/<project>/locations/<location>/clusters/<cluster>")
I am running a machine learning experiment in Databricks and I want to obtain the workspace URL for certain uses.
I know how to manually obtain the workspace URL of a notebook from this link: https://learn.microsoft.com/en-us/azure/databricks/workspace/per-workspace-urls
Similar to how you can obtain the path of your notebook with
dbutils.notebook.entry_point.getDbutils().notebook().getContext().notebookPath().get()
How do I programmatically obtain the notebook's URL?
There are two things available:
Browser host name - it gives you just the host name, without the http/https scheme, but it's really the name that you see in the browser:
dbutils.notebook.entry_point.getDbutils().notebook().getContext() \
.browserHostName().get()
API URL: the base URL with the HTTPS scheme that you can use to call APIs:
dbutils.notebook.entry_point.getDbutils().notebook().getContext() \
.apiUrl().get()
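As a rough sketch of putting those two pieces together to answer the original question (the /#workspace fragment is an assumed URL pattern, so verify it against what you see in your browser):
context = dbutils.notebook.entry_point.getDbutils().notebook().getContext()

host = context.browserHostName().get()
path = context.notebookPath().get()

# Assumed pattern: https://<host>/#workspace/<notebook path>
notebook_url = "https://{}/#workspace{}".format(host, path)
print(notebook_url)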
P.S. I really prefer to convert that information into a Python dict so that it's easier to investigate and use. I use code like this:
import json
ctx = json.loads(dbutils.notebook.entry_point.getDbutils().notebook() \
.getContext().toJson())
ctx
I'm trying to create a Lambda function that will shut down systemd services running on an EC2 instance. I think using the SSM client from the boto3 module is probably the best choice, and the specific command I was considering is send_command(). Ideally I would like to use Ansible to shut down the systemd service, so I'm trying to use the "AWS-ApplyAnsiblePlaybooks" document.
It's here that I get stuck. The boto3 SSM client wants some parameters; I've tried following the boto3 documentation here, but it really isn't clear on how it wants me to present the parameters. I found the parameters it's looking for inside the "AWS-ApplyAnsiblePlaybooks" document, but when I include them in my code, it tells me that the parameters are invalid.
I also tried going to AWS' GitHub repository because I know they sometimes have code examples, but they didn't have anything for send_command().
I've uploaded a gist in case people are interested in what I've written so far. I would definitely be interested in understanding how others have gotten their Ansible playbooks to run using SSM via boto3 Python scripts.
As far as I can see by looking at the documentation for that SSM document and the code you shared in the gist, you need to add "SourceType":["S3"] and you need to have a path in the SourceInfo like:
{
"path":"https://s3.amazonaws.com/path_to_directory_or_playbook_to_download"
}
So you need to adjust your global variable S3_DEVOPS_ANSIBLE_PLAYBOOKS accordingly.
Take a look at the CLI example from the doc link; it should give you an idea of how to re-structure your Parameters:
aws ssm create-association --name "AWS-ApplyAnsiblePlaybooks" \
--targets Key=tag:TagKey,Values=TagValue \
--parameters '{"SourceType":["S3"],"SourceInfo":["{\"path\":\"https://s3.amazonaws.com/path_to_Zip_file,_directory,_or_playbook_to_download\"}"],"InstallDependencies":["True_or_False"],"PlaybookFile":["file_name.yml"],"ExtraVariables":["key/value_pairs_separated_by_a_space"],"Check":["True_or_False"],"Verbose":["-v,-vv,-vvv, or -vvvv"]}' \
--association-name "name" --schedule-expression "cron_or_rate_expression"
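Translated into the boto3 send_command call you are writing, a minimal sketch might look like this (the instance ID, S3 path and playbook file name are placeholders):
import json
import boto3

ssm = boto3.client("ssm")

# Placeholder S3 location of the directory or playbook to download.
source_info = {"path": "https://s3.amazonaws.com/your-bucket/playbooks/"}

response = ssm.send_command(
    InstanceIds=["i-0123456789abcdef0"],  # Placeholder instance ID.
    DocumentName="AWS-ApplyAnsiblePlaybooks",
    Parameters={
        "SourceType": ["S3"],
        "SourceInfo": [json.dumps(source_info)],
        "InstallDependencies": ["True"],
        "PlaybookFile": ["stop-service.yml"],  # Placeholder playbook name.
        "ExtraVariables": ["SSM=True"],
        "Check": ["False"],
        "Verbose": ["-v"],
    },
)
print(response["Command"]["CommandId"])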
I've gone through all the setup steps to make calls to the Google Vision API from a Node.js App. Link to the guide: https://cloud.google.com/vision/docs/libraries#setting_up_authentication
I'm using the ImageAnnotatorClient from the @google-cloud/vision package to make some text detections.
At first it looked like everything was set up correctly, but I don't know why it only allows me to make one request.
Further requests will give me the following error:
Error: 7 PERMISSION_DENIED: Your application has authenticated using end user credentials from the Google Cloud SDK or Google Cloud Shell which are not supported by the vision.googleapis.com. We recommend configuring the billing/quota_project setting in gcloud or using a service account through the auth/impersonate_service_account setting. For more information about service accounts and how to use them in your application, see https://cloud.google.com/docs/authentication/
If I restart the Node app it again allows me to do one request to the Vision API but then the subsequent requests keep failing.
Here's my code which is almost the same as in the examples:
const vision = require('@google-cloud/vision');
// Creates a client
const client = new vision.ImageAnnotatorClient();
const detectText = async (imgPath) => {
// console.log(imgPath);
const [result] = await client.textDetection(imgPath);
const detections = result.textAnnotations;
return detections;
}
It is worth mentioning that this works every time when I run the Node app on my local machine. The problem only happens on my Ubuntu Droplet from DigitalOcean.
Again, I set everything up as it is in the guides. Created a Service Account, downloaded the Service Account Key JSON file, set up the environment variable like this:
export GOOGLE_APPLICATION_CREDENTIALS="PATH-TO-JSON-FILE"
I'm also setting the environment variable in the .bashrc file.
What could I be missing? Before setting everything up from scratch and going through the whole process again, I thought it would be good to ask for some help.
So I found the problem. In my case, it was a problem with PM2 not passing the system env variables to the Node app.
So I had everything set up correctly auth-wise but the Node app wasn't seeing the GOOGLE_APPLICATION_CREDENTIALS env var.
I deleted the PM2 process, created a new one and now it works.
We want to do some stuff with the data that is in Google Datastore. We already have a database, and we would like to use Python 3 to handle the data and make queries from a script on our development machines. What would be the easiest way to accomplish what we need?
From the Official Documentation:
You will need to install the Cloud Datastore client library for Python:
pip install --upgrade google-cloud-datastore
Set up authentication by creating a service account and setting an environment variable. It will be easier to follow if you take a look at the official documentation for more info about this step. You can perform it using either the GCP Console or the command line.
Then you will be able to connect to your Cloud Datastore client and use it, as in the example below:
# Imports the Google Cloud client library
from google.cloud import datastore
# Instantiates a client
datastore_client = datastore.Client()
# The kind for the new entity
kind = 'Task'
# The name/ID for the new entity
name = 'sampletask1'
# The Cloud Datastore key for the new entity
task_key = datastore_client.key(kind, name)
# Prepares the new entity
task = datastore.Entity(key=task_key)
task['description'] = 'Buy milk'
# Saves the entity
datastore_client.put(task)
print('Saved {}: {}'.format(task.key.name, task['description']))
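Since the question also mentions making queries, here is a minimal sketch of querying that same kind back (it simply reuses the values from the snippet above):
# Query 'Task' entities whose description matches the value saved above.
query = datastore_client.query(kind='Task')
query.add_filter('description', '=', 'Buy milk')

for task in query.fetch():
    print(task.key.name, task['description'])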
As @JohnHanley mentioned, you will find a good example in this Bookshelf app tutorial, which uses Cloud Datastore to store its persistent data and metadata for books.
You can create a service account, download the credentials as a JSON file, and then set an environment variable called GOOGLE_APPLICATION_CREDENTIALS pointing to that JSON file. You can see the details at the link below.
https://googleapis.dev/python/google-api-core/latest/auth.html
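If you prefer not to depend on the environment variable, the Datastore client can also be built directly from the key file; a small sketch (the path is a placeholder):
from google.cloud import datastore

# Placeholder path to the downloaded service account key file.
datastore_client = datastore.Client.from_service_account_json(
    '/path/to/service-account-key.json'
)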
My workflow goes like this:
I have a custom node module installed on my Jenkins slave that calls a few shell scripts. I would like to pass the credentials obtained through the credentials() method provided by the Jenkins declarative pipeline syntax.
What happens right now is that when I pass the username and password environment variables provided by the pipeline to the node module, which in turn passes the arguments to the shell script that is used to curl a file to a remote location, I get a bad credentials failure.
But if I do the same thing by simply calling the shell script directly, without going through the node module, the same credentials work fine.
Is there any workaround for this behaviour? I am just trying to understand what is happening here.
Thanks in advance