Python - Error querying Solarwinds N-Central via SOAP - python-3.x

I'm using Python 3 to write a script that generates a customer report for Solarwinds N-Central. The script uses SOAP to query N-Central, and I'm using zeep for this project. While I'm not new to Python, I am new to SOAP.
When calling the CustomerList function I'm getting: TypeError: __init__() got an unexpected keyword argument 'listSOs'
import zeep
wsdl = 'http://' + <server url> + '/dms/services/ServerEI?wsdl'
client = zeep.CachingClient(wsdl=wsdl)
config = {'listSOs': 'true'}
customers = client.service.CustomerList(Username=nc_user, Password=nc_pass, Settings=config)
Per the parameters below, 'listSOs' is not only a valid keyword, it's the only one accepted.
CustomerList
public com.nable.nobj.ei.Customer[] CustomerList(String username, String password, com.nable.nobj.ei.T_KeyPair[] settings) throws RemoteException
Parameters:
username - MSP N-central username
password - Corresponding MSP N-central password
settings - A list of non default settings stored in a T_KeyPair[]. Below is a list of the acceptable Keys and Values. If not used leave null
(Key) listSOs - (Value) "true" or "false". If true only SOs will be shown, if false only customers and sites will be shown. Default value is false.
I've also tried passing the dictionary as part of a list:
config = []
key = {'listSOs': 'true'}
config += key
TypeError: Any element received object of type 'str', expected lxml.etree._Element or builtins.dict or zeep.objects.T_KeyPair
Omitting the Settings value entirely:
customers = client.service.CustomerList(Username=nc_user, Password=nc_pass)
zeep.exceptions.ValidationError: Missing element Settings (CustomerList.Settings)
And trying zeep's SkipValue:
customers = client.service.CustomerList(Username=nc_user, Password=nc_pass, Settings=zeep.xsd.SkipValue)
zeep.exceptions.Fault: java.lang.NullPointerException
I'm probably missing something simple, but I've been banging my head against the wall off and on with this for a while. I'm hoping someone can point me in the right direction.

Here's the source code from my getAssets.py script. I did it in Python 2.7, though it's easily upgradeable. Hope it helps someone else; N-central's API documentation is really bad lol.
# pip2.7 install zeep
import zeep, sys, csv, copy
from zeep import helpers

api_username = 'your_ncentral_api_user'
api_password = 'your_ncentral_api_user_pw'
wsdl = 'https://(yourdomain|tenant)/dms2/services2/ServerEI2?wsdl'
client = zeep.CachingClient(wsdl=wsdl)
response = client.service.deviceList(
    username=api_username,
    password=api_password,
    settings={
        'key': 'customerId',
        'value': 1
    }
)

# If you can't tell yet, I code sloppy
devices_list = []
device_dict = {}
dev_inc = 0
max_dict_keys = 0
final_keys = []

for device in response:
    # Iterate through all device nodes
    for device_properties in device.items:
        # Iterate through each device's properties and add it to a dict (keyed array)
        device_dict[device_properties.first] = device_properties.second
    # Dig further into device properties
    device_properties = client.service.devicePropertyList(
        username=api_username,
        password=api_password,
        deviceIDs=device_dict['device.deviceid'],
        reverseOrder=False
    )
    prop_ind = 0  # This is a hacky thing I did to make my CSV writing work
    for device_node in device_properties:
        for prop_tree in device_node.properties:
            for key, value in helpers.serialize_object(prop_tree).items():
                prop_ind += 1
                device_dict["prop" + str(prop_ind) + "_" + str(key)] = str(value)
    # Append the dict to a list (array), giving us a multi-dimensional array.
    # You need to do a deep copy, as .copy() will act like a pointer
    devices_list.append(copy.deepcopy(device_dict))
    # Check the number of keys in the last item
    if len(devices_list[-1].keys()) > max_dict_keys:
        max_dict_keys = len(devices_list[-1].keys())
        final_keys = devices_list[-1].keys()

print "Gathered all the datas of N-central devices count: ", len(devices_list)

# Write the data out to a CSV
with open('output.csv', 'w') as csvfile:
    fieldnames = final_keys
    writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
    writer.writeheader()
    for csv_line in devices_list:
        writer.writerow(csv_line)
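Back on the original question: the error "expected lxml.etree._Element or builtins.dict or zeep.objects.T_KeyPair" suggests Settings wants key/value pairs shaped like the settings dict above, not a {'listSOs': 'true'} mapping. A sketch for the older ServerEI CustomerList call (untested; server_url, nc_user, and nc_pass stand in for your own values, and the T_KeyPair field names are an assumption based on the error message):
import zeep

wsdl = 'http://' + server_url + '/dms/services/ServerEI?wsdl'  # server_url is a placeholder
client = zeep.CachingClient(wsdl=wsdl)

# T_KeyPair looks like a key/value pair, so pass Settings as a list of
# {'key': ..., 'value': ...} dicts rather than {'listSOs': 'true'}
settings = [{'key': 'listSOs', 'value': 'true'}]
customers = client.service.CustomerList(Username=nc_user, Password=nc_pass, Settings=settings)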

Related

Google cloud function (python) does not deploy - Function failed on loading user code

I'm deploying a simple Python function to Google Cloud but cannot get it to save. It shows this error:
"Function failed on loading user code. This is likely due to a bug in the user code. Error message: Error: please examine your function logs to see the error cause: https://cloud.google.com/functions/docs/monitoring/logging#viewing_logs. Additional troubleshooting documentation can be found at https://cloud.google.com/functions/docs/troubleshooting#logging. Please visit https://cloud.google.com/functions/docs/troubleshooting for in-depth troubleshooting documentation."
Logs don't seem to show much that would indicate an error in the code. I followed this guide: https://blog.thereportapi.com/automate-a-daily-etl-of-currency-rates-into-bigquery/
The only differences are the environment variables and the endpoint I'm using.
The code is below; it's just a GET request followed by a push of the data into a table.
import requests
import json
import time
import os
from google.cloud import bigquery

# Set any default values for these variables if they are not found from Environment variables
PROJECT_ID = os.environ.get("PROJECT_ID", "xxxxxxxxxxxxxx")
EXCHANGERATESAPI_KEY = os.environ.get("EXCHANGERATESAPI_KEY", "xxxxxxxxxxxxxxx")
REGIONAL_ENDPOINT = os.environ.get("REGIONAL_ENDPOINT", "europe-west1")
DATASET_ID = os.environ.get("DATASET_ID", "currency_rates")
TABLE_NAME = os.environ.get("TABLE_NAME", "currency_rates")
BASE_CURRENCY = os.environ.get("BASE_CURRENCY", "SEK")
SYMBOLS = os.environ.get("SYMBOLS", "NOK,EUR,USD,GBP")

def hello_world(request):
    latest_response = get_latest_currency_rates()
    write_to_bq(latest_response)
    return "Success"

def get_latest_currency_rates():
    PARAMS = {'access_key': EXCHANGERATESAPI_KEY, 'symbols': SYMBOLS, 'base': BASE_CURRENCY}
    response = requests.get("https://api.exchangeratesapi.io/v1/latest", params=PARAMS)
    print(response.json())
    return response.json()

def write_to_bq(response):
    # Instantiates a client
    bigquery_client = bigquery.Client(project=PROJECT_ID)
    # Prepares a reference to the dataset
    dataset_ref = bigquery_client.dataset(DATASET_ID)
    table_ref = dataset_ref.table(TABLE_NAME)
    table = bigquery_client.get_table(table_ref)
    # get the current timestamp so we know how fresh the data is
    timestamp = time.time()
    jsondump = json.dumps(response)  # Returns a string
    # Ensure the Response is a String not JSON
    rows_to_insert = [{"timestamp": timestamp, "data": jsondump}]
    errors = bigquery_client.insert_rows(table, rows_to_insert)  # API request
    print(errors)
    assert errors == []
I tried just the part that does the GET request in an offline editor and can confirm the response works fine. I suspect it might have something to do with permissions or the way the script tries to access the database.
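A minimal way to exercise the BigQuery insert path outside Cloud Functions in the same spirit, assuming GOOGLE_APPLICATION_CREDENTIALS points at a service account with access to the dataset (a debugging sketch, not a confirmed fix):
# Hypothetical local smoke test: runs the same functions outside the Cloud Functions runtime
if __name__ == "__main__":
    write_to_bq(get_latest_currency_rates())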

how to patch configmap field using python client library

I have the configmap.yml below, and I want to patch/update the date field from a Python script running in a container in a Kubernetes deployment. I searched various sites but couldn't find any reference on how to do that. Any reference or code sample would be a great help.
apiVersion: v1
kind: ConfigMap
metadata:
  name: sample-configmap
  labels:
    app: test
    parameter-type: sample
data:
  storage.ini: |
    [DateInfo]
    date=1970-01-01T00:00:00.01Z
I went through this reference code for "partially update the specified ConfigMap" but couldn't figure out what the content of body should be, which parameters I should use, and which I can neglect:
from __future__ import print_function
import time
import kubernetes.client
from kubernetes.client.rest import ApiException
from pprint import pprint

configuration = kubernetes.client.Configuration()
# Configure API key authorization: BearerToken
configuration.api_key['authorization'] = 'YOUR_API_KEY'
# Uncomment below to setup prefix (e.g. Bearer) for API key, if needed
# configuration.api_key_prefix['authorization'] = 'Bearer'
# Defining host is optional and defaults to http://localhost
configuration.host = "http://localhost"

# Enter a context with an instance of the API kubernetes.client
with kubernetes.client.ApiClient(configuration) as api_client:
    # Create an instance of the API class
    api_instance = kubernetes.client.CoreV1Api(api_client)
    name = 'name_example'  # str | name of the ConfigMap
    namespace = 'namespace_example'  # str | object name and auth scope, such as for teams and projects
    body = None  # object |
    pretty = 'pretty_example'
    dry_run = 'dry_run_example'
    field_manager = 'field_manager_example'
    force = True
    try:
        api_response = api_instance.patch_namespaced_config_map(name, namespace, body, pretty=pretty, dry_run=dry_run, field_manager=field_manager, force=force)
        pprint(api_response)
    except ApiException as e:
        print("Exception when calling CoreV1Api->patch_namespaced_config_map: %s\n" % e)
The body parameter in patch_namespaced_config_map is the actual ConfigMap data you want to patch; it first needs to be obtained with read_namespaced_config_map.
The following steps are required for all operations that take a body argument:
1. Get the data using the read_*/get_* method.
2. Use the data returned in the first step in the API call that modifies the object (see the sketch below).
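For the ConfigMap in the question, those two steps look roughly like this (a sketch, assuming in-cluster config and the 'default' namespace; the timestamp format is illustrative):
import datetime
from kubernetes import client, config

config.load_incluster_config()  # use config.load_kube_config() when running outside the cluster
v1 = client.CoreV1Api()
name, namespace = 'sample-configmap', 'default'  # namespace is an assumption

# Step 1: read the current ConfigMap
cm = v1.read_namespaced_config_map(name, namespace)

# Step 2: modify the data field and send the object back as the patch body
new_date = datetime.datetime.utcnow().strftime('%Y-%m-%dT%H:%M:%S.%fZ')
cm.data['storage.ini'] = '[DateInfo]\ndate=' + new_date + '\n'
v1.patch_namespaced_config_map(name, namespace, cm)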
Further, for most cases it is enough to pass the required arguments, namely name, namespace, and body, but here is the info about each:
Parameters:
- name (str): name of the ConfigMap
- namespace (str): object name and auth scope, such as for teams and projects
- body (object)
- pretty (str, optional): If 'true', then the output is pretty printed.
- dry_run (str, optional): When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: All - all dry run stages will be processed.
- field_manager (str, optional): fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint. This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch).
- force (bool, optional): Force is going to "force" Apply requests. It means the user will re-acquire conflicting fields owned by other people. The force flag must be unset for non-apply patch requests.
Review the K8s python client README for the list of all supported APIs and their usage.

export data from service now rest API using python

I have to export incident data from the ServiceNow REST API. The incident state is one of new, in progress, pending not resolved, and closed. I am able to fetch data in the active state but not able to apply the correct filter; also, the output shows an extra character 'b', so how do I remove that extra character?
Input:
import requests

URL = 'https://instance_name.service-now.com/incident.do?CSV&sysparm_query=active=true'
user = 'user_name'
password = 'password'
headers = {"Accept": "application/xml"}
response = requests.get(URL, auth=(user, password), headers=headers)
if response.status_code != 200:
    print('Status:', response.status_code, 'Headers:', response.headers, 'Error Response:', response.content)
    exit()
print(response.content.splitlines())
Output:
[b'"number","short_description","state"', b'"INC0010001","Test incident creation through REST","New"', b'"INC0010002","incident creation","Closed"', b'"INC0010004","test","In Progress"']
It's a bytes literal (for more info, please refer to https://docs.python.org/3/reference/lexical_analysis.html#string-and-bytes-literals).
To remove the b prefix, we need to decode the bytes into strings.
Just give this a try:
new_list = []
s = [b'"number","short_description","state"',
     b'"INC0010001","Test incident creation through REST","New"',
     b'"INC0010002","incident creation","Closed"',
     b'"INC0010004","test","In Progress"']
for ele in s:
    ele = ele.decode("utf-8")
    new_list.append(ele)
print(new_list)
Output:
['"number","short_description","state"', '"INC0010001","Test incident creation through REST","New"', '"INC0010002","incident creation","Closed"', '"INC0010004","test","In Progress"']
Hope it works!
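As a side note, requests can do the decoding for you: response.text is the body already decoded to a str, so the b prefix never appears. A minimal sketch against the code in the question:
lines = response.text.splitlines()  # str lines, no b'' prefix
print(lines)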

InvalidResponse error in bigquery load_table_from_file

I am trying to upload a set of CSV data into BigQuery from a BytesIO object, but I keep getting the error InvalidResponse: Response headers must contain header 'location'.
Here is my code:
# self.database = authenticated bigquery.Client
config = bigquery.LoadJobConfig()
config.skip_leading_rows = 1
config.source_format = bigquery.SourceFormat.CSV
config.allow_jagged_rows = True
schema = [
    bigquery.SchemaField("date", "DATE", mode="REQUIRED"),
    bigquery.SchemaField("page_id", "STRING", mode="REQUIRED")
]
# ... Appending a list of bigquery.SchemaField("name", "INTEGER")
config.schema = schema
table = self.get_or_create_table(name, config.schema)  # returns TableReference
file = self.clip_data(local_fp, cutoff_date)  # returns BytesIO
job = self.database.load_table_from_file(
    file, table,
    num_retries=self.options.num_retries,
    job_id=uuid.uuid4().int,
    job_config=config
)  # Error is here.
I have tried searching around but I cannot find any reason or fix for this exception.
InvalidResponse: ('Response headers must contain header', 'location')
The problem was caused by not providing a location in the load_table_from_file call. Adding
location="US"
was enough to fix the problem.
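In other words, the call from the question becomes something like this (the location value must match the destination dataset's region; "US" was the case here):
job = self.database.load_table_from_file(
    file, table,
    location="US",  # region of the destination dataset
    num_retries=self.options.num_retries,
    job_id=uuid.uuid4().int,
    job_config=config
)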

Getting access key age AWS Boto3

I am trying to figure out a way to get a user's access key age through an AWS Lambda function using Python 3.6 and Boto3. My issue is that I can't seem to find the right API call, if any exists, for this purpose. The two closest I can find are list_access_keys, which I can use to find the creation date of a key, and get_access_key_last_used, which gives me the day the key was last used. However, neither (nor any others I can find) simply gives the access key age as shown in the AWS IAM console users view. Does a way exist to simply get the access key age?
This simpler code does the same thing without all the time conversions:
import boto3
from datetime import date
client = boto3.client('iam')
username = "<YOUR-USERNAME>"
res = client.list_access_keys(UserName=username)
accesskeydate = res['AccessKeyMetadata'][0]['CreateDate'].date()
currentdate = date.today()
active_days = currentdate - accesskeydate
print (active_days.days)
There is no direct way. You can use the following code snippet to achieve what you are trying to do:
import boto3, json, time, datetime, sys
client = boto3.client('iam')
username = "<YOUR-USERNAME>"
res = client.list_access_keys(UserName=username)
accesskeydate = res['AccessKeyMetadata'][0]['CreateDate'] ### Use for loop if you are going to run this on production. I just wrote it real quick
accesskeydate = accesskeydate.strftime("%Y-%m-%d %H:%M:%S")
currentdate = time.strftime("%Y-%m-%d %H:%M:%S", time.gmtime())
accesskeyd = time.mktime(datetime.datetime.strptime(accesskeydate, "%Y-%m-%d %H:%M:%S").timetuple())
currentd = time.mktime(datetime.datetime.strptime(currentdate, "%Y-%m-%d %H:%M:%S").timetuple())
active_days = (currentd - accesskeyd)/60/60/24 ### We get the data in seconds. converting it to days
print (int(round(active_days)))
Let me know if this works as expected.
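Per the inline comment in the snippet above, in production you would loop over every key instead of taking index 0. A minimal version of that loop using the same list_access_keys call:
import boto3
from datetime import date

client = boto3.client('iam')
res = client.list_access_keys(UserName="<YOUR-USERNAME>")
for key in res['AccessKeyMetadata']:
    age = date.today() - key['CreateDate'].date()
    print(key['AccessKeyId'], age.days)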
Upon further testing, I've come up with the following, which runs in Lambda. This Python 3.6 function will email users if their IAM keys are 90 days or older.
Pre-requisites:
- All IAM users have an email tag with a proper email address as the value. Example: IAM user tag key: email, IAM user tag value: someone@gmail.com
- Every email used needs to be verified in SES.
import boto3, os, time, datetime, sys, json
from datetime import date
from botocore.exceptions import ClientError

iam = boto3.client('iam')
email_list = []

def lambda_handler(event, context):
    print("All IAM user emails that have AccessKeys 90 days or older")
    for userlist in iam.list_users()['Users']:
        userKeys = iam.list_access_keys(UserName=userlist['UserName'])
        for keyValue in userKeys['AccessKeyMetadata']:
            if keyValue['Status'] == 'Active':
                currentdate = date.today()
                active_days = currentdate - \
                    keyValue['CreateDate'].date()
                if active_days >= datetime.timedelta(days=90):
                    userTags = iam.list_user_tags(
                        UserName=keyValue['UserName'])
                    email_tag = list(filter(lambda tag: tag['Key'] == 'email', userTags['Tags']))
                    if len(email_tag) == 1:
                        email = email_tag[0]['Value']
                        email_list.append(email)
                        print(email)
    email_unique = list(set(email_list))
    print(email_unique)
    RECIPIENTS = email_unique
    SENDER = "AWS SECURITY "
    AWS_REGION = os.environ['region']
    SUBJECT = "IAM Access Key Rotation"
    BODY_TEXT = ("Your IAM Access Key need to be rotated in AWS Account: 123456789 as it is 3 months or older.\r\n"
                 "Log into AWS and go to your IAM user to fix: https://console.aws.amazon.com/iam/home?#security_credential")
    BODY_HTML = """
    AWS Security: IAM Access Key Rotation: Your IAM Access Key need to be rotated in AWS Account: 123456789 as it is 3 months or older. Log into AWS and go to your https://console.aws.amazon.com/iam/home?#security_credential to create a new set of keys. Ensure to disable / remove your previous key pair.
    """
    CHARSET = "UTF-8"
    client = boto3.client('ses', region_name=AWS_REGION)
    try:
        response = client.send_email(
            Destination={
                'ToAddresses': RECIPIENTS,
            },
            Message={
                'Body': {
                    'Html': {
                        'Charset': CHARSET,
                        'Data': BODY_HTML,
                    },
                    'Text': {
                        'Charset': CHARSET,
                        'Data': BODY_TEXT,
                    },
                },
                'Subject': {
                    'Charset': CHARSET,
                    'Data': SUBJECT,
                },
            },
            Source=SENDER,
        )
    except ClientError as e:
        print(e.response['Error']['Message'])
    else:
        print("Email sent! Message ID:")
        print(response['MessageId'])
Using the above methods, you will only get the age of the access keys. As a best practice and security approach, you should also check the rotation period, i.e. when the keys were last rotated. If the key rotation age is more than 90 days, you could alert your team.
The only way to get the rotation age of the access keys is by using the credentials report from IAM: download it, parse it, and calculate the age.
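A sketch of that credential report approach (note that generate_credential_report is asynchronous; real code should retry get_credential_report until the report is ready):
import boto3, csv, io

iam = boto3.client('iam')
iam.generate_credential_report()  # kicks off report generation
report = iam.get_credential_report()  # fails if the report is not ready yet; retry in real code
for row in csv.DictReader(io.StringIO(report['Content'].decode('utf-8'))):
    print(row['user'], row['access_key_1_last_rotated'])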
