Stackdriver logging misses log entries - python-3.x

Our team uses Python to log some user access activities.
We created both local logging and Google Cloud Logging (Stackdriver) to capture the exceptions.
The local log shows 5 entries.
Our team's Stackdriver log shows only 2 entries.
I also tested with my own Google Cloud Stackdriver log; it shows all 5 entries.
Here is the code:
import logging as local_logging
import os
import time

from google.cloud import logging as cloud_logging

# Local file logging setup
local_logger = local_logging.getLogger(__name__)
local_logger.setLevel(local_logging.INFO)
handler = local_logging.FileHandler('Azure-user-access-audit-log.log')
handler.setLevel(local_logging.CRITICAL)
local_logging.Formatter.converter = time.gmtime
formatter = local_logging.Formatter('%(asctime)s | %(levelname)s | %(message)s')
handler.setFormatter(formatter)
local_logger.addHandler(handler)

# Google Cloud Logging (Stackdriver) setup
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "my-credential.json"
logging_client = cloud_logging.Client()
log_name = 'Azure-user-access-audit-log'
cloud_logger = logging_client.logger(log_name)
............
if item['subcriptionType'] == 'Hacker':
    user_log = str(item['cloudName'] + " - " + item['tenantId'] + " | " +
                   item['subcriptionType'] + " " + item['principalName'] + " has access to " +
                   item['subscriptionName'] + " as " + item['roleDefinitionName'])
    local_logger.critical(user_log)

    # The data to log to Google Cloud
    google_log_message = item['subcriptionType'] + " " + item['principalName'] + " has access to " + item['subscriptionName'] + " as " + item['roleDefinitionName']
    google_log_severity = 'CRITICAL'
    google_log_insert_id = item['cloudName'] + " - " + item['tenantId']
    print(google_log_message)
    print(google_log_severity)
    print(google_log_insert_id)

    # Writes the log entry
    # cloud_logger.log_text(str(google_log_message), severity=str(google_log_severity), insert_id=str(google_log_insert_id))
    # # cloud_logger.log_struct({
    # #     'subcriptionType': item['subcriptionType'],
    # #     'principalName': item['principalName'],
    # #     'subscriptionName': item['subscriptionName']
    # # }, severity=str(google_log_severity), insert_id=str(google_log_insert_id))
    cloud_logger.log_text(str(google_log_message))
If I added the commented-out piece of code for severity and insert_id, nothing would go through at all. I'm pretty sure that syntax was correct.
Please help me out.
Thank y'all so much

You are using insertId incorrectly. The Stackdriver Logging API considers all log entries in the same project, with the same timestamp, and with the same insertId to be duplicates which can be removed. All of your insertId values seem to be the same. The only reason you're seeing two entries in Stackdriver Logging and not one is that the two entries that do make it through have different timestamps.
You can just omit the insertId field. The API will set one automatically.
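For example, a minimal sketch of the write call (reusing the variables from the question): keep the severity argument and drop insert_id so the API generates a unique one per entry.
# Let the API assign insertId; pass only the text and severity
cloud_logger.log_text(str(google_log_message), severity='CRITICAL')
# If the cloud name and tenant ID matter, put them in the payload instead of insert_id:
cloud_logger.log_struct({
    'cloudName': item['cloudName'],
    'tenantId': item['tenantId'],
    'subcriptionType': item['subcriptionType'],
    'principalName': item['principalName'],
    'subscriptionName': item['subscriptionName'],
    'roleDefinitionName': item['roleDefinitionName']
}, severity='CRITICAL')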

Related

How to set revision interval and add commit field to output in pysvn Subversion?

Why doesn't the output respect the revision interval? It prints all files from all revisions.
How can I print the commit information in the general output?
import pysvn

url = 'http://svn.code.sf.net/p/keepass/code/trunk/'
client = pysvn.Client()
url_info = client.list(
    url,
    peg_revision=pysvn.Revision(pysvn.opt_revision_kind.number, 137),
    revision=pysvn.Revision(pysvn.opt_revision_kind.head),
    recurse=True,
    dirent_fields=pysvn.SVN_DIRENT_ALL,
    patterns=["*.txt"]
)
for entry in url_info:
    print(str(entry[0]["created_rev"].number) + " " + str(entry[0]["last_author"]) + " " + str(entry[0]["repos_path"]))
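If the goal is to see, for each revision in the interval, which files changed and in which commit, one option is pysvn's log call with discover_changed_paths. The snippet below is a sketch only, not verified against this repository; the revision numbers mirror the ones above.
# Sketch: one entry per revision, with its author and changed paths
import pysvn

url = 'http://svn.code.sf.net/p/keepass/code/trunk/'
client = pysvn.Client()
log_entries = client.log(
    url,
    revision_start=pysvn.Revision(pysvn.opt_revision_kind.number, 137),
    revision_end=pysvn.Revision(pysvn.opt_revision_kind.head),
    discover_changed_paths=True,
)
for entry in log_entries:
    print("r" + str(entry['revision'].number) + " " + str(entry['author']))
    for change in entry['changed_paths']:
        print("    " + change['action'] + " " + change['path'])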

Telnet.read_very_eager in FOR loop and the first often returns b' ' while the rest works fine

I'm using telnetlib to control a VIAVI 5800. Here is my code.
I log the results of telnet.read_very_eager(). Sometimes the first read returns b''; in fact it happens quite often. I really don't understand why only the first read has this problem while the others work fine. I also tried adding more delay after changing the time slot, but it still happens.
import telnetlib
import time

from robot.libraries.BuiltIn import BuiltIn

for vc4 in range(vc4, vc4_end):
    for tug3 in range(stm_start, tug3_end):
        for tug2 in range(1, 8):
            for tu12 in range(1, 4):
                tu12_com = ":SENSE:SDH:DS1:E1:LP:C12:CHANNEL" + " " + str(tu12) + "\n"
                tug2_com = ":SENSE:SDH:DS1:E1:LP:C2:CHANNEL" + " " + str(tug2) + "\n"
                tug3_com = ":SENSE:SDH:DS1:E1:LP:C3:CHANNEL" + " " + str(tug3) + "\n"
                vc4_com = ":SENSE:SDH:CHANNEL:STMN" + " " + str(vc4)
                tn.write(tu12_com.encode('ascii'))  # change tu12
                time.sleep(0.1)
                tn.write(tug2_com.encode('ascii'))  # change tug2
                time.sleep(0.1)
                tn.write(tug3_com.encode('ascii'))  # change tug3
                time.sleep(0.1)
                tn.write(vc4_com.encode('ascii'))  # change vc4
                time.sleep(1.5)
                tn.write(b":ABOR\n")
                time.sleep(1)
                tn.write(b":INIT\n")
                time.sleep(5)
                tn.write(b":SENSE:DATA? TEST:SUMMARY\n")
                time.sleep(2)
                result = tn.read_very_eager()
                result_num = str(vc4) + "-" + str(tug3) + "-" + str(tug2) + "-" + str(tu12)
                BuiltIn().log_to_console(result_num)
                BuiltIn().log_to_console(result)
The results look like this:
[screenshot: results saved in an Excel workbook]
[screenshot: results in the RF Ride console]
I'm confused and would appreciate it if anyone could explain this. Thanks a lot.
BTW, my python version is:
C:\Users\Quinn>python -V
Python 3.7.9
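One thing worth noting (an assumption about the cause, not verified on the instrument): read_very_eager() returns whatever is already buffered without blocking, so if the 5800 has not finished replying yet it returns b''. Below is a sketch of a more deterministic read inside the innermost loop, assuming replies are newline-terminated; the terminator and the 10-second timeout are guesses.
# Sketch: block until a reply line arrives instead of polling the buffer
tn.write(b":SENSE:DATA? TEST:SUMMARY\n")
result = tn.read_until(b"\n", timeout=10)  # assumed newline-terminated reply, 10 s timeout
if not result:
    BuiltIn().log_to_console("empty reply for " + result_num)  # nothing arrived before the timeout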

Is there a way to Keep track of all the bad records that are allowed while loading a ndjson file into Bigquery

I have a requirement to keep track of all the bad records that were not fed into BigQuery after allowing max_bad_records. I need them written to a file in storage for future reference. I'm using the BigQuery API for Python; is there a way to achieve this? I think if we allow max_bad_records, we don't have the details of the failed loads in the BQ load job.
Thanks
Currently, there isn't a direct way of accessing and saving the bad records. However, you can access some job statistics, including how many records were skipped and why, through the load job's errors attribute and its _job_statistics() method.
I have created an example, in order to demonstrate how the statistics will be shown. I have the following sample .csv file in a GCS bucket:
name,age
robert,25
felix,23
john,john
As you can see, the last row is a bad record, because age will be imported as INT64 and that row contains a string. I used the following code to upload the file to BigQuery:
from google.cloud import bigquery

client = bigquery.Client()
table_ref = client.dataset('dataset').table('table_name')

job_config = bigquery.LoadJobConfig(
    schema=[
        bigquery.SchemaField("name", "STRING"),
        bigquery.SchemaField("age", "INT64"),
    ]
)
job_config.write_disposition = bigquery.WriteDisposition.WRITE_TRUNCATE
job_config.skip_leading_rows = 1
job_config.max_bad_records = 5
# job_config.autodetect = True
# The source format defaults to CSV, so the line below is optional.
job_config.source_format = bigquery.SourceFormat.CSV
uri = "gs://path/file.csv"

load_job = client.load_table_from_uri(
    uri, table_ref, job_config=job_config
)  # API request
print("Starting job {}".format(load_job.job_id))

load_job.result()  # Waits for table load to complete.
print("Job finished.")

destination_table = client.get_table(table_ref)
print("Loaded {} rows.".format(destination_table.num_rows))

# Below, all the statistics that might be useful in your case
job_state = load_job.state
job_id = load_job.job_id
error_result = load_job.error_result
job_statistics = load_job._job_statistics()
badRecords = job_statistics['badRecords']
outputRows = job_statistics['outputRows']
inputFiles = job_statistics['inputFiles']
inputFileBytes = job_statistics['inputFileBytes']
outputBytes = job_statistics['outputBytes']

print("***************************** ")
print(" non fatal errors: " + str(load_job.errors))
print(" job_state: " + str(job_state))
print(" error_result: " + str(error_result))
print(" job_id: " + str(job_id))
print(" badRecords: " + str(badRecords))
print(" outputRows: " + str(outputRows))
print(" inputFiles: " + str(inputFiles))
print(" inputFileBytes: " + str(inputFileBytes))
print(" outputBytes: " + str(outputBytes))
print(" ***************************** ")
print("------ load_job.errors ")
The output from the statistics:
*****************************
job_state: DONE
non fatal errors: [{u'reason': u'invalid', u'message': u"Error while reading data, error message: Could not parse 'john' as INT64 for field age (position 1) starting at location 23", u'location': u'gs://path/file.csv'}]
error_result: None
job_id: b2b63e39-a5fb-47df-b12b-41a835f5cf5a
badRecords: 1
outputRows: 2
inputFiles: 1
inputFileBytes: 33
outputBytes: 26
*****************************
As shown above, the errors field returns the non-fatal errors, which include the bad records; in other words, it lists the individual errors generated by the job. The error_result field, by contrast, returns error information for the job as a whole.
I believe these statistics can help you analyse your bad records. Lastly, you can write them out to a file, for example:
with open("errors.txt", "x") as f:
f.write(load_job.errors)
f.close()
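If you would rather have one record per line for easier future reference, a small variant (a sketch using the standard json module; the file name is arbitrary):
import json

# Sketch: write each non-fatal error as one JSON line
with open("errors.jsonl", "w") as f:
    for err in load_job.errors or []:
        f.write(json.dumps(err) + "\n")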

list of all SGs allowing 0.0.0.0/0 - AWS Lambda using Python

I'm relatively new to Lambda functions and Python. I'm trying to write a Lambda function that lists all security groups in my AWS account that allow 0.0.0.0/0. I'd appreciate any help.
I have tried the code below, but it gives the instances that are open to 0.0.0.0/0; instead I need a list of all SGs that have the rule.
import sys
import boto
from boto import ec2
from boto import sns

connection = ec2.connect_to_region("region-name")
connSNS = boto.sns.connect_to_region("region-name")
sg = connection.get_all_security_groups()

listOfInstances = ""
messages = "Following Instances have port open to all"

def getTag(instanceId):
    reservations = connection.get_all_instances(filters={'instance_id': instanceId})
    for r in reservations:
        for i in r.instances:
            return i.tags['Name']

try:
    for securityGroup in sg:
        for rule in securityGroup.rules:
            global instanceId
            if (rule.from_port == '0' and rule.to_port == '65535') and '0.0.0.0/0' in str(rule.grants):
                for instanceid in securityGroup.instances():
                    instanceId = str(instanceid)
                    listOfInstances += "Instance Name : " + getTag(instanceId.split(':')[1]) + "\t State:" + instanceid.state + "\t SecurityGroup:" + securityGroup.name + "\n"
    connSNS.publish(topic='SNS-topic-arn-endpoint', message=messages + "\n" + listOfInstances, subject='ProjectName : Server List with Port Open to all')
except:
    print 'Some Error occurred : '
    print sys.exc_info()
    connSNS.publish(topic='SNS-topic-arn-endpoint', message=sys.exc_info(), subject='script ended with error')
These lines are specifically finding instances for the given security groups:
for instanceid in securityGroup.instances():
    instanceId = str(instanceid)
    listOfInstances += "Instance Name : " + getTag(instanceId.split(':')[1]) + "\t State:" + instanceid.state + "\t SecurityGroup:" + securityGroup.name + "\n"
If you do not want the instances, then remove those lines and instead return the security groups themselves.
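A minimal sketch of that change, reusing the connection, sg list, and rule check from the question (the message text and subject below are placeholders): collect the group names and IDs instead of iterating the instances.
# Sketch: report the security groups themselves rather than their instances
listOfGroups = ""
for securityGroup in sg:
    for rule in securityGroup.rules:
        if (rule.from_port == '0' and rule.to_port == '65535') and '0.0.0.0/0' in str(rule.grants):
            listOfGroups += "SecurityGroup: " + securityGroup.name + " (" + securityGroup.id + ")\n"
            break  # one matching rule is enough to report this group
connSNS.publish(topic='SNS-topic-arn-endpoint',
                message="Security groups open to 0.0.0.0/0:\n" + listOfGroups,
                subject='ProjectName : SGs with Port Open to all')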

PKCS11 Python FindObjects in available slots

I wrote a script in Python that gets info from a Cryptoki library. From there I can make (only) LowLevel API calls such as:
C_GetInfo
C_GetSlotList
C_SlotInfo
C_OpenSession
C_GetTokenInfo
C_Logout
C_CloseSession
C_Initialize
Here are a few examples of how they are used:
a.C_Initialize()
print("C_GetInfo:", hex(a.C_GetInfo(info)))
print("Library manufacturerID:", info.GetManufacturerID())
del info
print("C_GetSlotList(NULL): " + hex(a.C_GetSlotList(0, slotList)))
print("\tAvailable Slots: " + str(len(slotList)))
for x in range(len(slotList)):
    print("\tC_SlotInfo(): " + hex(a.C_GetSlotInfo(slotList[x], slotInfo)))
    print("\t\tSlot N." + str(x) + ": ID=" + str(slotList[x]) + ", name='" + slotInfo.GetSlotDescription() + "'")
    print("\tC_OpenSession(): " + hex(a.C_OpenSession(slotList[x], CKF_SERIAL_SESSION | CKF_RW_SESSION, session)))
    print("\t\tSession:" + str(session))
    # print("\tMechList:" + hex(a.C_GetMechanismList(0, slotList[x])))
    print("\tC_GetTokenInfo(): " + hex(a.C_GetTokenInfo(slotList[x], tokenInfo)))
    print("\t\tTokenInfo: Label=" + tokenInfo.GetLabel() + ", ManufacturerID=" + tokenInfo.GetManufacturerID())
    print("\t\tTokenInfo: flags=" + hex(tokenInfo.flags) + ", Model=" + tokenInfo.GetModel())
    print("\tC_Login(): " + hex(a.C_Login(session, CKU_USER, pin)))
    print("\t\tSessionInfo: state=" + hex(sessionInfo.state) + ", flags=" + hex(sessionInfo.flags))
QUESTION
I can't seem to figure out what API call is needed to find objects in the slot list. I have something like print("Finding objects: " + hex(a.C_FindObjects(slotList[x], CKA_CLASS, CKO_CERTIFICATE))), but I'm not sure what arguments to pass or whether it's structured the right way.
I'm using this documentation: LowLevel API pkcs11.
Ultimately I'm trying to extract the specific Omnikey smart card token and use its private key and certificate to sign and verify data.
SearchResult = PyKCS11.LowLevel.ckobjlist(10)
SearchTemplate = PyKCS11.LowLevel.ckattrlist(0)
print "C_FindObjectsInit: " + hex(a.C_FindObjectsInit(session,SearchTemplate))
print "C_FindObjects: " + hex(a.C_FindObjects(session, SearchResult))
print "C_FindObjectsFinal: " + hex(a.C_FindObjectsFinal(session))
First you have to create a result variable and use it to search the object list. I passed 10 in, as I knew there were only a handful of objects and tokens in the list. You can construct a template variable that filters objects by specific attributes, such as whether they are Private or Modifiable, their Key Type, whether they can Encrypt, and so on. Then you call a.C_FindObjectsInit(session, SearchTemplate) to initialize the search in the session with the template specification, a.C_FindObjects(session, SearchResult) to fill the result list, and a.C_FindObjectsFinal(session) to finish. Working with the LowLevel API can be confusing; there is virtually no documentation. Hope this helps.
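If the LowLevel template handling gets in the way, one alternative worth considering is PyKCS11's high-level API, which builds the FindObjects template for you. Below is a sketch only; the library path and PIN are placeholders, and it deliberately swaps the LowLevel calls for the high-level ones.
# Sketch: the same certificate search via PyKCS11's high-level API
import PyKCS11

pkcs11 = PyKCS11.PyKCS11Lib()
pkcs11.load("/path/to/cryptoki.so")  # placeholder path to the Cryptoki library

slot = pkcs11.getSlotList()[0]
session = pkcs11.openSession(slot, PyKCS11.CKF_SERIAL_SESSION | PyKCS11.CKF_RW_SESSION)
session.login("1234")  # placeholder PIN

# Find all certificate objects on the token
certs = session.findObjects([(PyKCS11.CKA_CLASS, PyKCS11.CKO_CERTIFICATE)])
for cert in certs:
    label = session.getAttributeValue(cert, [PyKCS11.CKA_LABEL])[0]
    print("Certificate object:", label)

session.logout()
session.closeSession()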
