FastAPI Azure CLI Swagger - python-3.x

Sorry for this basic question, but I would like some help from you experts as I am still learning FastAPI.
I have a simple test application running Python FastAPI, and I am trying to use it with the Azure CLI.
What I am trying to do is have a GET request, using FastAPI, that lists all the resource groups I have in my subscriptions.
After reading the documentation and some forums, I have this code:
from fastapi import FastAPI
import tempfile
from azure.cli.core import get_default_cli

app = FastAPI()

@app.get("/azure")
def az_cli(args_str: str):
    temp = tempfile.TemporaryFile()
    args = args_str.split()
    # Log in with a service principal, then invoke the requested az command
    code = get_default_cli().invoke(['login', '--service-principal', '-u', '', '-p', '', '--tenant', ''])
    resource = get_default_cli().invoke(args)
    data = temp.read().strip()
    temp.close()
    return [args, resource]
This function authenticates the user with a service principal and then invokes the az command given in args.
If I run uvicorn, head to /docs, and type resource list in the args field, the code works just fine and doesn't throw any error, but nothing shows in the response body; the full output is visible in the terminal, though.
Can somebody please explain how I can get that output into the response body shown in the docs?
Thank you very much for any help you can provide. I hope my example is clear enough; if not, please feel free to ask for more information.
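(Editor's note: invoke() returns an integer exit code and prints the command output to stdout, which is why the output appears in the terminal but not in the response. A common workaround is to pass an out_file to invoke() so the output lands somewhere you can read it back. A minimal, untested sketch, assuming your azure-cli-core version supports the out_file keyword, and with the login call left out for brevity:

import json
import tempfile
from azure.cli.core import get_default_cli
from fastapi import FastAPI

app = FastAPI()

@app.get("/azure")
def az_cli(args_str: str):
    args = args_str.split()
    cli = get_default_cli()
    # Redirect the CLI's output into a temp file instead of stdout
    with tempfile.NamedTemporaryFile(mode='w+', suffix='.json') as temp:
        exit_code = cli.invoke(args, out_file=temp)
        temp.seek(0)
        output = temp.read()
    # Return parsed JSON so it shows up in the response body in /docs
    # (assumes the command's default output format is JSON)
    return {"exit_code": exit_code, "result": json.loads(output) if output else None})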

Related

Get a list of every Layer, in every Service, in every Folder in an ArcGIS REST endpoint

I have two ArcGIS REST endpoints for which I am trying to get a list of every layer:
https://rdgdwe.sc.egov.usda.gov/arcgis/rest/services
https://services1.arcgis.com/RLQu0rK7h4kbsBq5/ArcGIS/rest/services
These are not my organization's endpoints so I don't have access to them internally. At each of these endpoints there can be folders, services, and layers, or just services and layers.
My goal is to get a list of all layers. So far I have tried:
import requests
from bs4 import BeautifulSoup

endpoints = ["https://rdgdwe.sc.egov.usda.gov/arcgis/rest/services",
             "https://services1.arcgis.com/RLQu0rK7h4kbsBq5/ArcGIS/rest/services"]

for item in endpoints:
    # used verify=False because otherwise I get an SSL error for endpoints[0]
    reqs = requests.get(item, verify=False)
    soup = BeautifulSoup(reqs.text, 'html.parser')
    layers = []
    for link in soup.find_all('a'):
        print(link.get('href'))
        layers.append(link)
However this doesn't account for the variable nested folders/services/layers or services/layer schemas, and it doesn't seem to be fully appending to my layers list.
I'm thinking I could also go the JSON route and append ?f=pjson. So for example:
https://rdgdwe.sc.egov.usda.gov/arcgis/rest/services/?f=pjson would get me the folders
https://rdgdwe.sc.egov.usda.gov/arcgis/rest/services/broadband/?f=pjson would get me all the services in the broadband folder
and
https://rdgdwe.sc.egov.usda.gov/arcgis/rest/services/broadband/CDC_5yr_OpioidOverDoseDeaths_2016/MapServer?f=pjson would get me the CDC_OverDoseDeathsbyCounty2016_5yr layer in the first service (CDC_5yr_OpioidOverDoseDeaths_2016) in the broadband folder.
Any help is appreciated. I put this here rather than on GIS Stack Exchange since it seems more of a Python question than a geospatial one.
I don't really agree this is a Python question, because there doesn't seem to be any issue with how the various Python libraries are being used. The main issue appears to be how to work with Esri's REST API. Seeing that Esri is very much a GIS company and their REST API is very much a GIS API, I think GIS StackExchange would have been a better forum for the question.
But, since we are here now....
If you are going to continue working with Esri's REST API with Python, I strongly encourage you to read up on Esri's ArcGIS API for Python. At its core, the ArcGIS API for Python is a Python wrapper for working with Esri's REST API. Unless someone has very basic needs, rolling one's own Python code for Esri's REST API isn't time well spent.
If you are set on rolling your own, I strongly encourage you to read Get started -- ArcGIS REST APIs | ArcGIS Developers. The documentation describes the structure and syntax of the REST API and includes some examples.
The following isn't pretty; it is meant more to help you connect the dots when reading Esri's documentation. That said, it will give you a list of map services on an ArcGIS Server site and the layers for those services.
import json
import requests
import urllib3

urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

services = {}
services_endpoint = "https://fqdn/arcgis/rest/services"

req = requests.get(f"{services_endpoint}?f=json", verify=False)
svcs_root = json.loads(req.text)

# Services nested inside folders
for fld in svcs_root['folders']:
    req = requests.get(f"{services_endpoint}/{fld}?f=json", verify=False)
    svcs_fld = json.loads(req.text)
    for svc in svcs_fld['services']:
        if svc['type'] != 'MapServer':
            continue
        req = requests.get(f"{services_endpoint}/{svc['name']}/{svc['type']}?f=json", verify=False)
        svc_def = json.loads(req.text)
        services.update({svc['name']: {'type': svc['type'], 'layers': svc_def['layers']}})

# Services at the root of the site
for svc in svcs_root['services']:
    if svc['type'] != 'MapServer':
        continue
    req = requests.get(f"{services_endpoint}/{svc['name']}/{svc['type']}?f=json", verify=False)
    svc_def = json.loads(req.text)
    services.update({svc['name']: {'type': svc['type'], 'layers': svc_def['layers']}})
As part of developing GISsurfer (https://gissurfer.com) I was faced with that exact problem, but for any ArcGIS Server that did not require login credentials. My solution was to write PHP code to 'walk the tree' to find all services.

Gcloud not found in flask api

I am using the following Python method in GCP. The method is in test.py:
import functools
import subprocess

@functools.lru_cache()
def get_project():
    return subprocess.check_output(
        "gcloud config list --format 'value(core.project)'", shell=True
    ).decode("utf-8").strip()
When I run test.py on its own on the TPU, it works, but when I call this method from the Flask API I get the error 'gcloud not found'.
However, the same method works both on its own and under the Flask API in a GCP VM.
I am not able to figure out what the possible cause of this could be.
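(Editor's note: one thing worth checking, as an assumption rather than something from the question: the Flask process may have a different PATH than your interactive shell, so gcloud isn't resolvable from it. A quick diagnostic sketch:

import os
import shutil

# Print the PATH the Flask process actually sees
print(os.environ.get("PATH"))
# shutil.which returns None when 'gcloud' is not on that PATH,
# which would explain the 'gcloud not found' error
print(shutil.which("gcloud")))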
This is not exactly an answer to your question, but you might be interested in knowing about the metadata server.
From this answer we can more or less deduce that the metadata server also works with TPUs. Note that I'm not 100% sure on this though.
Try the following code to see if you can get the project id with it.
import requests

def get_project():
    # https://cloud.google.com/compute/docs/metadata/default-metadata-values#project_metadata
    response = requests.get(
        "http://metadata.google.internal/computeMetadata/v1/project/project-id",
        headers={"Metadata-Flavor": "Google"}
    )
    return response.text

Using the SSM send_command in Boto3

I'm trying to create a Lambda function that will shut down systemd services running on an EC2 instance. I think the SSM client from the boto3 module is probably the best choice, and the specific command I was considering is send_command(). Ideally I would like to use Ansible to shut down the systemd service, so I'm trying to use the "AWS-ApplyAnsiblePlaybooks" document.
It's here that I get stuck. The boto3 SSM client wants some parameters, and I've tried following the boto3 documentation, but it really isn't clear on how it wants me to present them. I found the parameters it's looking for inside the "AWS-ApplyAnsiblePlaybooks" document, but when I include them in my code, it tells me that the parameters are invalid. I also tried going to AWS's GitHub repository, because I know they sometimes have code examples, but they didn't have anything for send_command().
I've uploaded a gist in case people are interested in what I've written so far. I would definitely be interested in understanding how others have gotten their Ansible playbooks to run using ssm via boto3 Python scripts.
As far as I can see by looking at the documentation for that SSM document and the code you shared in the gist, you need to add "SourceType": ["S3"], and you need to have a path in the SourceInfo like:
{
    "path": "https://s3.amazonaws.com/path_to_directory_or_playbook_to_download"
}
So you need to adjust your global variable S3_DEVOPS_ANSIBLE_PLAYBOOKS.
Take a look at the CLI example from the doc link; it should give you ideas on how to re-structure your Parameters:
aws ssm create-association --name "AWS-ApplyAnsiblePlaybooks" \
--targets Key=tag:TagKey,Values=TagValue \
--parameters '{"SourceType":["S3"],"SourceInfo":["{\"path\":\"https://s3.amazonaws.com/path_to_Zip_file,_directory,_or_playbook_to_download\"}"],"InstallDependencies":["True_or_False"],"PlaybookFile":["file_name.yml"],"ExtraVariables":["key/value_pairs_separated_by_a_space"],"Check":["True_or_False"],"Verbose":["-v,-vv,-vvv, or -vvvv"]}' \
--association-name "name" --schedule-expression "cron_or_rate_expression"
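Translating that CLI example into boto3's send_command might look something like the sketch below (untested; the tag, bucket path, and playbook file name are placeholders to replace with your own):

import json
import boto3

ssm = boto3.client("ssm")

# Placeholder S3 path pointing at the playbook(s) to download
source_info = {"path": "https://s3.amazonaws.com/my-bucket/playbooks/"}

response = ssm.send_command(
    Targets=[{"Key": "tag:TagKey", "Values": ["TagValue"]}],
    DocumentName="AWS-ApplyAnsiblePlaybooks",
    Parameters={
        # Every value must be a list of strings, even booleans and JSON
        "SourceType": ["S3"],
        "SourceInfo": [json.dumps(source_info)],
        "InstallDependencies": ["True"],
        "PlaybookFile": ["playbook.yml"],
        "ExtraVariables": ["SSM=True"],
        "Check": ["False"],
        "Verbose": ["-v"],
    },
)
print(response["Command"]["CommandId"])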

How to delete GKE (Google Kubernetes Engine) cluster using python?

I'm new to GKE and Python. I would like to delete my GKE (Google Kubernetes Engine) cluster using a Python script.
I found the API delete_cluster() in the google-cloud-container Python library:
https://googleapis.dev/python/container/latest/index.html
But I'm not sure how to use that API by passing the required parameters. Can anyone explain with an example?
Or is there any other way to delete a GKE cluster in Python?
Thanks in advance.
First, you'd need to configure the Python Client for Google Kubernetes Engine as explained in this section of the link you shared: basically, set up a virtual environment and install the library with pip install google-cloud-container.
If you are running the script within an environment such as Cloud Shell, with a user that has enough access to manage the GKE resources (with at least the Kubernetes Engine Cluster Admin permission assigned), the client library will handle the necessary authentication automatically and the following script will most likely work:
from google.cloud import container_v1

project_id = "YOUR-PROJECT-NAME"    # Change me.
zone = "ZONE-OF-THE-CLUSTER"        # Change me.
cluster_id = "NAME-OF-THE-CLUSTER"  # Change me.
name = f"projects/{project_id}/locations/{zone}/clusters/{cluster_id}"

client = container_v1.ClusterManagerClient()
response = client.delete_cluster(name=name)
print(response)
Notice that, as per the delete_cluster method documentation, you only need to pass the name parameter. If for some reason you are only provided the credentials of a service account (generally in the form of a JSON file) that has enough permissions to delete the cluster, you'd need to modify the client construction and use the credentials parameter to get the client correctly authenticated, in a similar fashion to:
...
client = container_v1.ClusterManagerClient(credentials=credentials)
...
Where the credentials variable holds the credentials loaded from the service-account JSON file (including its path if it's not located in the folder where the script is running).
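A minimal sketch of building that credentials object with the google-auth library (the key file name is a placeholder):

from google.cloud import container_v1
from google.oauth2 import service_account

# "sa-key.json" is a placeholder for the key file you were provided
credentials = service_account.Credentials.from_service_account_file("sa-key.json")
client = container_v1.ClusterManagerClient(credentials=credentials)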
Finally, notice that the response variable returned by the delete_cluster method is of the Operation class, which can be used to monitor the long-running operation in a similar fashion to what is explained here, with the self_link attribute corresponding to the long-running operation.
After running the script you could use a curl command in a similar fashion to:
curl -X GET \
    -H "Authorization: Bearer "$(gcloud auth application-default print-access-token) \
    https://container.googleapis.com/v1/projects/[PROJECT-NUMBER]/zones/[ZONE-WHERE-THE-CLUSTER-WAS-LOCATED]/operations/operation-[OPERATION-NUMBER]
by checking the status field of the response (which could be in the RUNNING state while the deletion is happening). Or you could also use the requests library, or any equivalent, to automate this checking of the long-running operation within your script.
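For example, a rough polling sketch with the same client library (assuming the client's get_operation method and the operation name format shown above; the identifiers are placeholders):

import time
from google.cloud import container_v1

client = container_v1.ClusterManagerClient()
# Reuse the project/zone from the delete script and the operation
# name returned by delete_cluster
op_name = "projects/YOUR-PROJECT-NAME/locations/ZONE-OF-THE-CLUSTER/operations/operation-OPERATION-NUMBER"

while True:
    op = client.get_operation(name=op_name)
    if op.status == container_v1.Operation.Status.DONE:
        break
    time.sleep(10)
print("Cluster deletion finished")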
This page contains an example of the command you are trying to perform.
To give some more details that are required for the command to succeed:
Your environment needs to contain the authentication environment variables; this page contains instructions on how to set that up.
Once your environment is successfully authenticated, we can run the delete-cluster command like so:
from google.cloud import container_v1

client = container_v1.ClusterManagerClient()
response = client.delete_cluster(name="projects/<project>/locations/<location>/clusters/<cluster>")

Get list of application packages available for a batch account of Azure Batch

I'm making a Python app that launches a batch.
I want, via user inputs, to create a pool.
For simplicity, I'll just add all the application packages present in the batch account to the pool.
However, I'm not able to get the list of available application packages.
This is the relevant portion of code:
import azure.batch.batch_service_client as batch
from azure.common.credentials import ServicePrincipalCredentials

credentials = ServicePrincipalCredentials(
    client_id='xxxxx',
    secret='xxxxx',
    tenant='xxxx',
    resource="https://batch.core.windows.net/"
)

batch_client = batch.BatchServiceClient(
    credentials,
    base_url=self.AppData['CloudSettings'][0]['BatchAccountURL'])

# Get list of applications
batchApps = batch_client.application.list()
I can create a pool, so the credentials are good and there are applications, but the returned list is empty.
Can anybody help me with this?
Thank you,
Guido
Update:
I tried:
import azure.batch.batch_service_client as batch
batchApps = batch.ApplicationOperations.list(batch_client)
and
import azure.batch.operations as batch_operations
batchApps = batch_operations.ApplicationOperations.list(batch_client)
but they don't seem to work either; batchApps is always empty.
I don't think it's an authentication issue, since I'd get an error otherwise.
At this point I wonder if it's just a bug in the Python SDK?
The SDK versions I'm using are:
azure.batch: 4.1.3
azure: 4.0.0
A screenshot showed the batchApps variable coming back empty.
Is this the link you are looking for?
Understanding the application package concept: https://learn.microsoft.com/en-us/azure/batch/batch-application-packages
Since it's the Python SDK in action here: https://learn.microsoft.com/en-us/python/api/azure-batch/azure.batch.operations.applicationoperations?view=azure-python
That page documents the list operation, along with get.
Hope this helps.
I haven't tried the Azure Python SDK lately, but the way I solved this was to use the Azure REST API:
https://learn.microsoft.com/en-us/rest/api/batchservice/application/list
For the authorization, I had to create an application and give it access to the Batch service, and then I programmatically generated the token with the following request:
import requests

data = {'grant_type': 'client_credentials',
        'client_id': clientId,
        'client_secret': clientSecret,
        'resource': 'https://batch.core.windows.net/'}

postReply = requests.post('https://login.microsoftonline.com/' + tenantId + '/oauth2/token', data)
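From there, a rough sketch of calling the application list endpoint with that token (untested; the account URL and api-version are assumptions to adjust to your Batch account):

import requests

token = postReply.json()['access_token']
# Placeholder Batch account URL, i.e. https://<account>.<region>.batch.azure.com
batch_url = 'https://myaccount.westeurope.batch.azure.com'

resp = requests.get(
    batch_url + '/applications',
    params={'api-version': '2020-09-01.12.0'},
    headers={'Authorization': 'Bearer ' + token},
)
print(resp.json())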
