python3 : append string to every list - python-3.x

I am using Python 3 with GitPython and it generates the output shown below:
0bf35c4cf243e0fe13adbe7aeba99a03ddf6acfd refs/release/17.xp.0.95/head
d0c5f748e65488ce2e90c1ed027c2da252a5c6a2 refs/release/17.xp.0.96/head
530bdbf8f06859d8aca55cee7b57e27e68e87a94 refs/release/17.xp.0.97/head
0dd0342466540bc38e26ef74af6c8837d165cae5 refs/release/17.xp.0.98/head
919b78fb737b00830a8e48353b0f977c442600dd refs/release/17.xp.0.99/head
But I want to append the string "acme" to every line, for example:
0bf35c4cf243e0fe13adbe7aeba99a03ddf6acfd refs/release/17.xp.0.95/head
acme
d0c5f748e65488ce2e90c1ed027c2da252a5c6a2 refs/release/17.xp.0.96/head
acme
530bdbf8f06859d8aca55cee7b57e27e68e87a94 refs/release/17.xp.0.97/head
acme
0dd0342466540bc38e26ef74af6c8837d165cae5 refs/release/17.xp.0.98/head
acme
919b78fb737b00830a8e48353b0f977c442600dd refs/release/17.xp.0.99/head
acme
Below is the code I am using. Please advise how to append/concatenate the string to the end of every line.
import os, re, sys, argparse
import git

if len(sys.argv) < 2:
    print('Usage : --track <track name> without "track/" ')
    sys.exit()

input_track = sys.argv[1].strip()
print("Checking for the track name - track/", input_track)

def show_ref(input_track, gitname):
    url = "git#github/" + gitname + ".git"
    g = git.cmd.Git()
    ig1 = g.ls_remote(url, "refs/heads/track/" + input_track).split('\d')
    print("Branch for glide-test:\n", '\n'.join(ig1))
    for x in range(13, 20):
        ig6 = g.ls_remote(url, "refs/release/" + str(x) + "." + input_track + ".*/head").split('|')
        print('\n'.join(ig6))

# "\n".join(map(lambda word: word+"x", s.split("\n")))
show_ref(input_track, "acme")

You can simply modify this line:
ig6 = g.ls_remote(url,"refs/release/"+str(x)+"."+input_track+".*/head").split('|')
by adding the string "acme" to the string you build, like this:
ig6 = g.ls_remote(url,"refs/release/"+str(x)+"."+input_track+".*/head acme").split('|')
Is that what you meant?
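If the goal is the output shown in the question, with "acme" printed on its own line after every ref, another option is to post-process the string that ls_remote returns, much like the commented-out map/join line in the question. A minimal sketch, reusing the g, url, x and input_track variables from the question's loop:

# Sketch: print "acme" on its own line after each ref in the ls_remote output.
ig6_raw = g.ls_remote(url, "refs/release/" + str(x) + "." + input_track + ".*/head")
for line in ig6_raw.splitlines():
    if line:
        print(line)
        print("acme")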

Related

Execute customized script when launching instance using openstacksdk

I'm new to OpenStack and I'm trying to create a tool so that I can launch any number of instances in an OpenStack cloud. This was easily done using the nova-client module of openstacksdk.
Now the problem is that I want to make the instances execute a bash script as they are created, by adding it as a userdata file, but it doesn't execute. This is confusing because I don't get any error or warning message. Does anyone know what it could be?
Important parts of the code
The most important parts of the Python program are the function that gets the cloud info, the one that creates the instances, and the main function. I'll post them here as @Corey asked.
"""
Function that allow us to log at cloud with all the credentials needed.
Username and password are not read from env.
"""
def get_nova_credentials_v2():
d = {}
user = ""
password = ""
print("Logging in...")
user = input("Username: ")
password = getpass.getpass(prompt="Password: ", stream=None)
while (user == "" or password == ""):
print("User or password field is empty")
user = input("Username: ")
password = getpass.getpass(prompt="Password: ", stream=None)
d['version'] = '2.65'
d['username'] = user
d['password'] = password
d['project_id'] = os.environ['OS_PROJECT_ID']
d['auth_url'] = os.environ['OS_AUTH_URL']
d['user_domain_name'] = os.environ['OS_USER_DOMAIN_NAME']
return d
Then we have the create_server function:
"""
This function creates a server using the info we got from JSON file
"""
def create_server(server):
s = {}
print("Creating "+server['compulsory']['name']+"...")
s['name'] = server['compulsory']['name']
s['image'] = server['compulsory']['os']
s['flavor'] = server['compulsory']['flavor']
s['min_count'] = server['compulsory']['copyNumber']
s['max_count'] = server['compulsory']['copyNumber']
s['userdata'] = server['file']
s['key_name'] = server['compulsory']['keyName']
s['availability_zone'] = server['compulsory']['availabilityZone']
s['nics'] = server['compulsory']['network']
print(s['userdata'])
if(exists("instalacion_k8s_docker.sh")):
print("Exists")
s['userdata'] = server['file']
nova.servers.create(**s)
And now the main function:
"""
Main process: First we create a connection to Openstack using our credentials.
Once connected we cal get_serverdata function to get all instance objects we want to be created.
We check that it is not empty and that we are not trying to create more instances than we are allowed.
Lastly we create the instances and the program finishes.
"""
credentials = get_nova_credentials_v2()
nova = client.Client(**credentials)
instances = get_serverdata()
current_instances = len(nova.servers.list())
if not instances:
print("No instance was writen. Check instances.json file.")
exit(3)
num = 0
for i in instances:
create_server(i)
exit(0)
For the rest of the code, you can access this public repo on GitHub.
Thanks a lot!
Problem solved
The problem was the content of server['file'], as @Corey said. It cannot be the path to the file where you wrote the data; it must be the content of the file itself or a file-like object. With the OpenStack SDK it must be base64 encoded, but that is not the case with novaclient.
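For reference, a minimal sketch of that fix with novaclient, based on the create_server code above (the file name and keys are the ones used there; this is an illustration, not the exact final code):

# Sketch: pass the script *content* (not its path) as userdata to novaclient.
if exists("instalacion_k8s_docker.sh"):
    with open("instalacion_k8s_docker.sh") as f:
        s['userdata'] = f.read()   # novaclient accepts the text itself or a file-like object
# (With openstacksdk the user_data would additionally need to be base64 encoded.)
nova.servers.create(**s)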
Thanks a lot to @Corey for all the help! :)

Python: skip the lines which do not have all of the starting strings in the output

I am trying to write some code, after getting help from Google and SO, to parse a command's output, but I am still having a problem. The output I expect has three consecutive lines starting with dn, instance and tag, but the very first record only contains dn and tag. I want to skip any record that does not have all three of these starting strings; as I am still learning, I am not getting the idea of how to do that.
Below is my code:
import subprocess as sp

p = sp.Popen(somecmd, shell=True, stdout=sp.PIPE)
stout = p.stdout.read().decode('utf8')
output = stout.splitlines()
startline = ["instance:", "tag"]
for line in output:
    print(line)
Script output:
dn: ou=People,ou=pti,o=pt
tag: pti00631
dn: cn=pti00857,ou=People,ou=pti,o=pt
instance: Jassu Lal
tag: pti00857
dn: cn=pti00861,ou=People,ou=pti,o=pt
instance: Gatti Lal
tag: pti00861
Desired output:
dn: cn=pti00857,ou=People,ou=pti,o=pt
instance: Jassu Lal
tag: pti00857
dn: cn=pti00861,ou=People,ou=pti,o=pt
instance: Gatti Lal
tag: pti00861
Assuming your output is always the same, your loop can look like this:
lines_to_skip = 3
skip_lines = False
skipped_lines = 0
for line in output:
    if "dn: " in line and not "dn: cn" in line:
        skip_lines = True
    if skip_lines:
        if skipped_lines < lines_to_skip:
            skipped_lines += 1
            continue
        if skipped_lines == lines_to_skip:
            skip_lines = False
            skipped_lines = 0
    print(line)
It checks whether there is a dn without the cn, counts to 3 (or rather lines_to_skip), and starts outputting again once it has skipped that many lines.
It's a pretty hacky solution, but the best one I could come up with for the given context.
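Another way to think about it (not from the answer above, just a sketch assuming every record starts with a dn: line, as in the script output) is to group the lines into records and print only the records that contain all three prefixes:

# Sketch: group lines into records starting at each "dn:" line,
# then print only records that contain dn, instance and tag.
required = ("dn:", "instance:", "tag:")
records, current = [], []
for line in output:
    if line.startswith("dn:") and current:
        records.append(current)
        current = []
    current.append(line)
if current:
    records.append(current)

for record in records:
    if all(any(l.startswith(p) for l in record) for p in required):
        print("\n".join(record))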
The code below is more flexible. You only need to add to the necessary_tags dictionary the tags without which you do not want to print; there can be more than 3. It also accounts for situations where you receive a particular tag more than once.
import subprocess as sp

p = sp.Popen(somecmd, shell=True, stdout=sp.PIPE)
stout = p.stdout.read().decode('utf8')
output = stout.splitlines()
output.append("")

necessary_tags = {'dn': 0, 'instance': 0, 'tag': 0}
temp_output = []
for line in output:
    tag = line.split(':')[0].strip()
    if necessary_tags.get(tag, -1) != -1:
        necessary_tags[tag] += 1
        temp_output.append(line)
    elif line == "":
        if all(necessary_tags.values()):
            for out in temp_output:
                print(out)
        temp_output = []
        necessary_tags.update({}.fromkeys(necessary_tags, 0))
        print()

How to handle blank line,junk line and \n while converting an input file to csv file

Below is the sample data in the input file. I need to process this file and turn it into a CSV file. With some help, I was able to convert it, but not fully, since I am not able to handle the \n, the junk line (2nd line) and the blank line (4th line). I also need help filtering transaction_type, i.e. skipping the "rewrite" transaction type.
{"transaction_type": "new", "policynum": 4994949}
44uu094u4
{"transaction_type": "renewal", "policynum": 3848848,"reason": "Impressed with \n the Service"}
{"transaction_type": "cancel", "policynum": 49494949, "cancel_table":[{"cancel_cd": "AU"}, {"cancel_cd": "AA"}]}
{"transaction_type": "rewrite", "policynum": 5634549}
Below is the code
import ast
import csv

with open('test_policy', 'r') as in_f, open('test_policy.csv', 'w') as out_f:
    data = in_f.readlines()
    writer = csv.DictWriter(
        out_f,
        fieldnames=['transaction_type', 'policynum', 'cancel_cd', 'reason'],
        lineterminator='\n',
        extrasaction='ignore')
    writer.writeheader()
    for row in data:
        dict_row = ast.literal_eval(row)
        if 'cancel_table' in dict_row:
            cancel_table = dict_row['cancel_table']
            cancel_cd = []
            for cancel_row in cancel_table:
                cancel_cd.append(cancel_row['cancel_cd'])
            dict_row['cancel_cd'] = ','.join(cancel_cd)
        writer.writerow(dict_row)
Below is my output, which does not yet handle the junk line, the blank line or the "rewrite" transaction type.
transaction_type,policynum,cancel_cd,reason
new,4994949,,
renewal,3848848,,"Impressed with
the Service"
cancel,49494949,"AU,AA",
Expected output
transaction_type,policynum,cancel_cd,reason
new,4994949,,
renewal,3848848,,"Impressed with the Service"
cancel,49494949,"AU,AA",
Hmm, I tried to fix it, but I do not know how CSV files work; with my small knowledge I would suggest you run this code before converting the file.
txt = {"transaction_type": "renewal",
"policynum": 3848848,
"reason": "Impressed with \n the Service"}
newTxt = {}
for i,j in txt.items():
# local var (temporar)
lastX = ""
correctJ = ""
# check if in J is ascii white space "\n" and get it out
if "\n" in f"b'{j}'":
j = j.replace("\n", "")
# for grammar purpose check if
# J have at least one space
if " " in str(j):
# if yes check it closer (one by one)
for x in ([j[y:y+1] for y in range(0, len(j), 1)]):
# if 2 spaces are consecutive pass the last one
if x == " " and lastX == " ":
pass
# if not update correctJ with new values
else:
correctJ += x
# remember what was the last value checked
lastX = x
# at the end make J to be the correctJ (just in case J has not grammar errors)
j = correctJ
# add the corrections to a new dictionary
newTxt[i]=j
# show the resoult
print(f"txt = {txt}\nnewTxt = {newTxt}")
Terminal output:
txt = {'transaction_type': 'renewal', 'policynum': 3848848, 'reason': 'Impressed with \n the Service'}
newTxt = {'transaction_type': 'renewal', 'policynum': 3848848, 'reason': 'Impressed with the Service'}
Process finished with exit code 0
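Building on the question's own DictWriter code rather than the answer above, a sketch that skips the blank and junk lines, drops the "rewrite" transactions, and collapses the embedded \n in the reason field might look like this:

import ast
import csv

with open('test_policy', 'r') as in_f, open('test_policy.csv', 'w') as out_f:
    writer = csv.DictWriter(
        out_f,
        fieldnames=['transaction_type', 'policynum', 'cancel_cd', 'reason'],
        lineterminator='\n',
        extrasaction='ignore')
    writer.writeheader()
    for row in in_f:
        row = row.strip()
        if not row:                        # skip blank lines
            continue
        try:
            dict_row = ast.literal_eval(row)
        except (ValueError, SyntaxError):  # skip junk lines that are not dict literals
            continue
        if dict_row.get('transaction_type') == 'rewrite':
            continue                       # filter out "rewrite" transactions
        if 'reason' in dict_row:
            # collapse the embedded "\n" and surrounding spaces into single spaces
            dict_row['reason'] = ' '.join(dict_row['reason'].split())
        if 'cancel_table' in dict_row:
            dict_row['cancel_cd'] = ','.join(
                c['cancel_cd'] for c in dict_row['cancel_table'])
        writer.writerow(dict_row)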

Python3 decode removes white spaces when they should be kept

I'm reading a binary file that contains code for an STM32. I deliberately placed 2 const strings in the code, which allow me to read the SW version and description from a given file.
When you open the binary file with a hex editor, or even in Python 3, you can see the correct form. But when I run text = data.decode('utf-8', errors='ignore'), it removes the zeros from the file! I don't want this, as I keep the EOL characters to properly split and extract the strings that interest me.
(preview of the end of the data variable)
Svc\x00..\Src\adc.c\x00..\Src\can.c\x00defaultTask\x00Task_CANbus_receive\x00Task_LED_Controller\x00Task_LED1_TX\x00Task_LED2_RX\x00Task_PWM_Controller\x00**SW_VER:GN_1.01\x00\x00\x00\x00\x00\x00MODULE_DESC:generic_module\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00**Task_SuperVisor_Controller\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01\x02\x03\x04\x06\x07\x08\t\x00\x00\x00\x00\x01\x02\x03\x04..\Src\tim.c\x005!\x00\x08\x11!\x00\x08\x01\x00\x00\x00\xaa\xaa\xaa\xaa\x01\x01\nd\x00\x02\x04\nd\x00\x00\x00\x00\xa2J\x04'
(preview of text, i.e. what I receive after decode)
r g # IDLE TmrQ Tmr Svc ..\Src\adc.c ..\Src\can.c
defaultTask Task_CANbus_receive Task_LED_Controller Task_LED1_TX
Task_LED2_RX Task_PWM_Controller SW_VER:GN_1.01
MODULE_DESC:generic_module
Task_SuperVisor_Controller ..\Src\tim.c 5! !
d d J
with open(path_to_file, "rb") as binary_file:
    # Read the whole file at once
    data = binary_file.read()

text = data.decode('utf-8', errors='ignore')

# get index of the "SW_VER:" string in the file
sw_ver_index = text.rfind("SW_VER:")
# SW_VER found
if sw_ver_index != -1:
    # retrieve the value, e.g. for "SW_VER:WB_2.01" it starts at offset 7 and finishes at 14
    sw_ver_value = text[sw_ver_index + 7:sw_ver_index + 14]
    module.append(tuple(('DESC:', sw_ver_value)))
else:
    # SW_VER not found
    module.append(tuple(('DESC:', 'N/A')))

# get index of the "MODULE_DESC:" string in the file
module_desc_index = text.rfind("MODULE_DESC:")
# MODULE_DESC found
if module_desc_index != -1:
    module_desc_substring = text[module_desc_index + 12:]
    module_desc_value = module_desc_substring.split()
    module.append(tuple(('DESC:', module_desc_value[0])))
    print(module_desc_value[0])
As you can see, my whitespace characters are gone, while they should be present.
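One possible workaround (just a sketch, assuming the strings of interest are NUL-terminated, as they appear in the data preview) is to search the raw bytes and cut at the \x00 separators, decoding only the slice you need:

# Sketch: find "SW_VER:" in the raw bytes and decode only up to the next NUL byte,
# so the \x00 separators are never lost in the text conversion.
with open(path_to_file, "rb") as binary_file:
    data = binary_file.read()

marker = b"SW_VER:"
idx = data.rfind(marker)
if idx != -1:
    end = data.find(b"\x00", idx)
    sw_ver_value = data[idx + len(marker):end].decode("ascii", errors="ignore")
    print(sw_ver_value)  # e.g. GN_1.01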

Python - Error querying Solarwinds N-Central via SOAP

I'm using python 3 to write a script that generates a customer report for Solarwinds N-Central. The script uses SOAP to query N-Central and I'm using zeep for this project. While not new to python I am new to SOAP.
When calling the CustomerList function I'm getting: TypeError: __init__() got an unexpected keyword argument 'listSOs'
import zeep
wsdl = 'http://' + <server url> + '/dms/services/ServerEI?wsdl'
client = zeep.CachingClient(wsdl=wsdl)
config = {'listSOs': 'true'}
customers = client.service.CustomerList(Username=nc_user, Password=nc_pass, Settings=config)
Per the parameters below, 'listSOs' is not only a valid keyword, it's the only one accepted.
CustomerList
public com.nable.nobj.ei.Customer[] CustomerList(String username, String password, com.nable.nobj.ei.T_KeyPair[] settings) throws RemoteException
Parameters:
username - MSP N-central username
password - Corresponding MSP N-central password
settings - A list of non default settings stored in a T_KeyPair[]. Below is a list of the acceptable Keys and Values. If not used leave null
(Key) listSOs - (Value) "true" or "false". If true only SOs with be shown, if false only customers and sites will be shown. Default value is false.
I've also tried passing the dictionary as part of a list:
config = []
key = {'listSOs': 'true'}
config += key
TypeError: Any element received object of type 'str', expected lxml.etree._Element or builtins.dict or zeep.objects.T_KeyPair
Omitting the Settings value entirely:
customers = client.service.CustomerList(Username=nc_user, Password=nc_pass)
zeep.exceptions.ValidationError: Missing element Settings (CustomerList.Settings)
And trying zeep's SkipValue:
customers = client.service.CustomerList(Username=nc_user, Password=nc_pass, Settings=zeep.xsd.SkipValue)
zeep.exceptions.Fault: java.lang.NullPointerException
I'm probably missing something simple, but I've been banging my head against the wall off and on for a while; I'm hoping someone can point me in the right direction.
Here's the source code from my getAssets.py script. I did it in Python 2.7, but it's easily upgradeable. Hope it helps someone else; N-central's API documentation is really bad, lol.
# pip2.7 install zeep
import zeep, sys, csv, copy
from zeep import helpers

api_username = 'your_ncentral_api_user'
api_password = 'your_ncentral_api_user_pw'
wsdl = 'https://(yourdomain|tenant)/dms2/services2/ServerEI2?wsdl'

client = zeep.CachingClient(wsdl=wsdl)
response = client.service.deviceList(
    username=api_username,
    password=api_password,
    settings=
    {
        'key': 'customerId',
        'value': 1
    }
)

# If you can't tell yet, I code sloppy
devices_list = []
device_dict = {}
dev_inc = 0
max_dict_keys = 0
final_keys = []

for device in response:
    # Iterate through all device nodes
    for device_properties in device.items:
        # Iterate through each device's properties and add it to a dict (keyed array)
        device_dict[device_properties.first] = device_properties.second
    # Dig further into device properties
    device_properties = client.service.devicePropertyList(
        username=api_username,
        password=api_password,
        deviceIDs=device_dict['device.deviceid'],
        reverseOrder=False
    )
    prop_ind = 0  # This is a hacky thing I did to make my CSV writing work
    for device_node in device_properties:
        for prop_tree in device_node.properties:
            for key, value in helpers.serialize_object(prop_tree).items():
                prop_ind += 1
                device_dict["prop" + str(prop_ind) + "_" + str(key)] = str(value)
    # Append the dict to a list (array), giving us a multi dimensional array;
    # you need to do a deep copy, as .copy will act like a pointer
    devices_list.append(copy.deepcopy(device_dict))
    # check to see the amount of keys in the last item
    if len(devices_list[-1].keys()) > max_dict_keys:
        max_dict_keys = len(devices_list[-1].keys())
        final_keys = devices_list[-1].keys()

print "Gathered all the datas of N-central devices count: ", len(devices_list)

# Write the data out to a CSV
with open('output.csv', 'w') as csvfile:
    fieldnames = final_keys
    writer = csv.DictWriter(csvfile, fieldnames=fieldnames)
    writer.writeheader()
    for csv_line in devices_list:
        writer.writerow(csv_line)
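Back on the original CustomerList error: following the same key/value pattern this answer uses for deviceList, the Settings argument would presumably be built as a key/value pair rather than as {'listSOs': 'true'}. An untested sketch against the ServerEI endpoint from the question:

# Sketch: shape the setting as a T_KeyPair-style key/value pair instead of a plain dict,
# which is what triggered the unexpected-keyword error.
config = [{'key': 'listSOs', 'value': 'true'}]
customers = client.service.CustomerList(Username=nc_user, Password=nc_pass, Settings=config)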
