append arbitrary data to IIS W3C logs? - iis

I want to append my own column to the IIS W3C request logs. Is this possible?
I want IIS to add a custom column and value to each log line.

In my understanding, it is not possible to add a new column, as the only available fields are the W3C fields when your log format is set to W3C in the IIS settings. You can overwrite a particular field in the W3C log, but you have to write a custom module to do that, and this article has sample code for it.
However, you can install the Advanced Logging module for IIS, which
allows a comprehensive list of custom fields.

You can append custom log data to the IIS log; however, it is appended to the UriQuery (cs-uri-query) column.
Append it via a call to Response.AppendToLog(...)

W3C.zip related files
You could import the W3C format, scrubbing out the headers, then add your own.
Like this, for example:
Step 1:
Get the log u_ex181121.log (get rid of the first 3 header rows, leaving the 4th but stripping out just the '#Fields: ' prefix),
so your header row should look like this:
date time s-ip cs-method cs-uri-stem cs-uri-query s-port cs-username c-ip cs(User-Agent) sc-status sc-substatus sc-win32-status time-taken
Step 2:
Remove any additional headers that may appear throughout the life of the log file.
Step 3:
Import/parse the file, delimiting by space (for example, manually with Excel or with some other ETL process).
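If you want to script the scrub, here is a minimal Python sketch of steps 1 through 3 (the file name matches the example above; W3C fields are space-delimited with no embedded spaces, so a plain split is safe):

# Drop the '#' header rows, keep the field names from the '#Fields: ' row,
# and parse each data row into a dict keyed by field name.
rows, fields = [], []
with open('u_ex181121.log') as f:
    for line in f:
        line = line.rstrip('\n')
        if line.startswith('#Fields: '):
            fields = line[len('#Fields: '):].split(' ')
        elif line.startswith('#') or not line:
            continue  # steps 1 and 2: remove every other header row
        else:
            rows.append(dict(zip(fields, line.split(' '))))

if rows:
    print(len(rows), 'requests; first path:', rows[0]['cs-uri-stem'])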
See W3C.zip for related files:
u_ex181121.log = the log file generated by IIS
u_ex181121.log.xls = the imported version of u_ex181121.log
_ULog_StatusLU.xls = the Status Code Lookup
2018.11.21_Query_Results.xls = the results from the W3C_LOG.sql query
W3C_LOG.sql = the query
--Query Logs
SELECT
tRet.[date]
, tRet.[date-time]
, tRet.[date-time-mst]
, tRet.[s-ip]
, tRet.[cs-method]
, tRet.[cs-uri-stem]
, tRet.[cs-uri-query]
, tRet.[s-port]
, tRet.[cs-username]
, tRet.[c-ip]
, tRet.[cs(User-Agent)]
, tRet.[sc-status]
, tRet.[sc-substatus]
, tRet.[sc-win32-status]
, tRet.[time-taken]
, tRet.[time-taken-seconds]
, tRet.[time-taken-minutes]
, tRet.[sc-status-Description]
, tRet.[sc-status-Notes]
From
(
SELECT
Cast(l.[date] As Date) [date]
, Cast(Replace(Cast(l.[date] As varchar(20)), ' 00:00:00.', '') + Cast(Replace(Replace(Cast(l.[time] as varchar(20)), '1899-12-31', ''), '.', '') as varchar(20)) As DateTime) As [date-time]
, DateAdd('n', (-60*7), (Cast(Replace(Cast(l.[date] As varchar(20)), ' 00:00:00.', '') + Cast(Replace(Replace(Cast(l.[time] as varchar(20)), '1899-12-31', ''), '.', '') as varchar(20)) As DateTime))) As [date-time-mst]
, l.[s-ip]
, l.[cs-method]
, l.[cs-uri-stem]
, l.[cs-uri-query]
, l.[s-port]
, l.[cs-username]
, l.[c-ip]
, l.[cs(User-Agent)]
, l.[sc-status]
, l.[sc-substatus]
, l.[sc-win32-status]
, l.[time-taken]
, ROUND((l.[time-taken] * .001), 2) As [time-taken-seconds]
, ROUND((l.[time-taken] * 1.66667e-5), 4) As [time-taken-minutes]
, lu.[sc-status-Description]
, lu.[sc-status-Notes]
FROM
[u_ex181121.log].[u_ex181121] l
Left Outer Join [_ULog_StatusLU].[StatusLU] lu on l.[sc-status] = lu.[sc-status]
) tRet
Order By
tRet.[date-time-mst] DESC

Related

Python adding blank lines when writing out text and cannot get rid of them

I am in the process of writing a Python script that downloads a copy of quarantined emails from our gateway. The emails are in .eml format (text), so I was thinking this would be easy, but the resulting file does not open properly in Outlook due to the newlines added.
Here is the download/write function:
import json
import requests

def download_message(api_key, endpoint, id):
    endpoint = endpoint + "email/" + id
    headers = {
        'x-fireeye-api-key': api_key,
        'content-type': 'application/json',
        'Accept': 'application/json'}
    data = {}
    r = requests.get(endpoint, headers=headers, data=json.dumps(data))
    if "not found in quarantine." in r.text:
        print("Not found in Quarantine")
    else:
        filename = base_directory + id + ".eml"  # base_directory is defined elsewhere
        f = open(filename, "w")
        f.write(r.text)
        f.close()
        print("Written : " + id + ".eml" + " to disk")
The output file shows extra blank lines when opened in a text editor, and the message renders incorrectly when opened in Outlook (screenshots from the original post are not reproduced here).
If I manually remove all those blank lines (regex: ^\n) and save the file, it works as expected.
I have tried quite a few ways of removing those blank lines, including strip, rstrip, and re.sub, and nothing seems to have worked.
If it helps, what I was trying to do was create a new variable to hold the "modified" text and then pass that to the write function.
It would have looked something like this (sorry, I have tried loads of variations, but I think you will get the point):
filedata = r.text.strip("\n") or filedata = re.sub('^\n', "", r.text)
...
f.write(filedata)
Can anyone help?
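For anyone hitting this later: a likely cause is newline translation. The .eml payload already uses CRLF (\r\n) line endings, and writing r.text in text mode on Windows translates each \n to \r\n again, so every line ends in \r\r\n and renders as a blank line. A minimal sketch of the usual fix, writing the raw bytes instead (the function name is mine; base_directory as in the question):

import requests

def download_message_raw(api_key, endpoint, id):
    # Same request as the question's function, but save r.content in
    # binary mode so Python applies no newline translation at all.
    headers = {'x-fireeye-api-key': api_key, 'Accept': 'application/json'}
    r = requests.get(endpoint + "email/" + id, headers=headers)
    with open(base_directory + id + ".eml", "wb") as f:
        f.write(r.content)

# Alternative: stay in text mode but disable translation with
# open(filename, "w", newline="") so "\r\n" passes through unchanged.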

Odoo 12 : How to prevent default field method to be executed

I scheduled a cron that executes every 1st of the month; the purpose is to allocate leave to all employees according to their tags. Here is a sample of my code:
for leave in leave_type_ids:
    for employee_tag in employee_tag_ids:
        values = {
            'name': 'Allocation mensuelle %s %s' % (now.strftime('%B'), now.strftime('%Y')),
            'holiday_status_id': leave.id,
            'number_of_days': employee_tag.allocation,
            'holiday_type': 'category',
            'category_id': employee_tag.id,
        }
        try:
            self.create(values).action_approve()
        except Exception as e:
            _logger.critical(e)
I want to point out that self is an instance of 'hr.leave.allocation'.
The problem is that when I create the record, the field employee_id is automatically filled with the user/employee OdooBot (the one who executed the program in the cron), and that is not all: the employee OdooBot is also allocated the leave.
This behavior is due to this code in Odoo's native modules:
def _default_employee(self):
    return self.env.context.get('default_employee_id') or self.env['hr.employee'].search([('user_id', '=', self.env.uid)], limit=1)

employee_id = fields.Many2one(
    'hr.employee', string='Employee', index=True, readonly=True,
    states={'draft': [('readonly', False)], 'confirm': [('readonly', False)]},
    default=_default_employee, track_visibility='onchange')
So my question is: how do I prevent this when it's the cron, and keep the normal behavior in the form view?
The "employé" (employee) field should be empty here (shown in a screenshot in the original post), because it is an allocation by tag.
You must loop over hr.employee, because then you can do either of the following:
self.with_context({'default_employee_id': employee.id}).create(...)
OR
self.sudo(employee.user_id.id).create(...)  # sudo expects a user, so go through the employee's linked user
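As a sketch of the first variant applied to the question's cron (the method name is hypothetical; values stands for the same dict built in the question's loop):

def allocate_with_employee_defaults(self, values):
    # Hypothetical helper: default_employee_id in the context is exactly
    # what _default_employee (shown above) checks first, so the record is
    # no longer attributed to the cron user (OdooBot).
    for employee in self.env['hr.employee'].search([]):
        self.with_context(default_employee_id=employee.id).create(values).action_approve()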

Python 3 imaplib (errors: EXPUNGE failed, BAD [b'Command Argument Error. 11']): unable to delete mails from a Microsoft service account

Trial 1:
result, data = mail.uid("STORE", str(message_id), "+X-GM-LABELS", '"\\Trash"')
Output:
BAD [b'Command Argument Error. 11']
Trial 2:
result, data = mail.uid('STORE', str(message_id), '+FLAGS', '(\\Deleted)')
print("Deleted the mail : ", result, "-", details_log[4])
result, data = mail.uid('EXPUNGE', str(message_id))
print("result", result)
print("data", data)
Output:
Deleted the mail : OK
result NO
data [b'EXPUNGE failed.']
Issue: after the EXPUNGE, I even tried to close and log out of the connection, but the mail still doesn't get deleted.
I know this post is old, but for anyone who reads it later on:
When using imaplib's select function to choose a mailbox to view (in my case, the "Inbox" mailbox), I had the readonly argument set to True, to be safe, but this blocked me from deleting emails in Microsoft Outlook. I set it to False and was able to delete emails with the store and expunge methods:
conn.select("Inbox", readonly=False)
# modify search query as you see fit
typ, data = conn.search(None, "FROM", "scammer@whatever.com")
for email in data[0].split():
    conn.store(email, "+FLAGS", "\\Deleted")
conn.expunge()
conn.close()
conn.logout()
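On the question's two trials specifically: X-GM-LABELS is a Gmail-specific extension that Microsoft's servers do not implement, and UID EXPUNGE is part of the UIDPLUS extension, which the server may not offer either. A sketch that flags by UID but issues a plain, argument-free EXPUNGE (host and credentials are placeholders; message_id is assumed to come from a prior UID search, and the mailbox must be selected with readonly=False as in the answer above):

import imaplib

mail = imaplib.IMAP4_SSL("outlook.office365.com")  # placeholder host
mail.login("user@example.com", "password")         # placeholder credentials
mail.select("Inbox", readonly=False)               # readonly=True blocks STORE/EXPUNGE

message_id = b'123'  # assumed: a UID obtained from a prior mail.uid('SEARCH', ...)
# Flag the message by UID, then issue plain EXPUNGE (no arguments);
# unlike UID EXPUNGE it is base IMAP and removes every \Deleted message.
mail.uid('STORE', str(message_id), '+FLAGS', '(\\Deleted)')
mail.expunge()
mail.close()
mail.logout()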

Rhythmbox plugin: get a limited number of songs

Get all songs:
for row in self.shell.props.library_source.props.base_query_model:
    print(row[0].get_string(RB.RhythmDBPropType.TITLE))
I need to get only 10 songs (for example).
First try:
self.shell.props.library_source.props.base_query_model.set_property("limit-value", GLib.Variant("n", 10))
for row in self.shell.props.library_source.props.base_query_model:
    print(row[0].get_string(RB.RhythmDBPropType.TITLE))
Result:
Warning: g_object_set_property: construct property "limit-value" for object 'RhythmDBQueryModel' can't be set after construction
self.shell.props.library_source.props.base_query_model.set_property("limit-value", GLib.Variant("n", 10))
Second try (I do not know how to set the limit value here either, so this attempt queries by GENRE instead):
db = self.shell.props.db
query_model = RB.RhythmDBQueryModel.new_empty(db)
query = GLib.PtrArray()
db.query_append_params(query, RB.RhythmDBQueryType.EQUALS, RB.RhythmDBPropType.GENRE, "Salsa")
db.do_full_query_parsed(query_model, query)
for row in query_model:
    print(row[0].get_string(RB.RhythmDBPropType.ARTIST))
Result:
Rhythmbox crashed; the error is detailed in: How do I query for data in Rhythmbox
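Since limit-value is a construct-only property, one workaround sidesteps GObject properties altogether: iterate the existing model and stop after the first 10 rows. A sketch using only the calls from the question:

# Take only the first 10 rows of the existing query model.
for i, row in enumerate(self.shell.props.library_source.props.base_query_model):
    if i >= 10:
        break
    print(row[0].get_string(RB.RhythmDBPropType.TITLE))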

Why doesn't psycopg2 allow us to open multiple server-side cursors in the same connection?

I am curious why psycopg2 doesn't allow opening multiple server-side cursors (http://initd.org/psycopg/docs/usage.html#server-side-cursors) in the same connection. I ran into this problem recently and had to solve it by replacing the second cursor with a client-side cursor, but I still want to know whether there is a way to do it.
For example, I have these 2 tables on Amazon Redshift:
CREATE TABLE tbl_account (
acctid varchar(100),
regist_day date
);
CREATE TABLE tbl_my_artist (
user_id varchar(100),
artist_id bigint
);
INSERT INTO tbl_account
(acctid, regist_day)
VALUES
('TEST0000000001', DATE '2014-11-23'),
('TEST0000000002', DATE '2014-11-23'),
('TEST0000000003', DATE '2014-11-23'),
('TEST0000000004', DATE '2014-11-23'),
('TEST0000000005', DATE '2014-11-25'),
('TEST0000000006', DATE '2014-11-25'),
('TEST0000000007', DATE '2014-11-25'),
('TEST0000000008', DATE '2014-11-25'),
('TEST0000000009', DATE '2014-11-26'),
('TEST0000000010', DATE '2014-11-26'),
('TEST0000000011', DATE '2014-11-24'),
('TEST0000000012', DATE '2014-11-24')
;
INSERT INTO tbl_my_artist
(user_id, artist_id)
VALUES
('TEST0000000001', 2000011247),
('TEST0000000001', 2000157208),
('TEST0000000001', 2000002648),
('TEST0000000002', 2000383724),
('TEST0000000003', 2000002546),
('TEST0000000003', 2000417262),
('TEST0000000004', 2000076873),
('TEST0000000004', 2000417266),
('TEST0000000005', 2000077991),
('TEST0000000005', 2000424268),
('TEST0000000005', 2000168784),
('TEST0000000006', 2000284581),
('TEST0000000007', 2000284581),
('TEST0000000007', 2000000642),
('TEST0000000008', 2000268783),
('TEST0000000008', 2000284581),
('TEST0000000009', 2000088635),
('TEST0000000009', 2000427808),
('TEST0000000010', 2000374095),
('TEST0000000010', 2000081797),
('TEST0000000011', 2000420006),
('TEST0000000012', 2000115887)
;
I want to select from those 2 tables, then do something with the query results.
I use 2 server-side cursors because I need 2 nested loops in my query, and I want server-side cursors because the results can be very large.
I use fetchmany() instead of fetchall() because I'm running on a single-node cluster.
Here is my code:
import psycopg2
from psycopg2.extras import DictCursor
conn = psycopg2.connect('connection parameters')
cur1 = conn.cursor(name='cursor1', cursor_factory=DictCursor)
cur2 = conn.cursor(name='cursor2', cursor_factory=DictCursor)
cur1.execute("""SELECT acctid, regist_day FROM tbl_account
WHERE regist_day <= '2014-11-25'
ORDER BY 1""")
for record1 in cur1.fetchmany(50):
cur2.execute("""SELECT user_id, artist_id FROM tbl_my_artist
WHERE user_id = '%s'
ORDER BY 1""" % (record1["acctid"]))
for record2 in cur2.fetchmany(50):
print '(acctid, artist_id, regist_day): (%s, %s, %s)' % (
record1["acctid"], record2["artist_id"], record1["regist_day"])
# do something with these values
conn.close()
When running, I got an error:
Traceback (most recent call last):
File "C:\Users\MLD1\Desktop\demo_cursor.py", line 20, in <module>
for record2 in cur2.fetchmany(50):
File "C:\Python27\lib\site-packages\psycopg2\extras.py", line 72, in fetchmany
res = super(DictCursorBase, self).fetchmany(size)
InternalError: opening multiple cursors from within the same client connection is not allowed.
That error occurred at line 20, when I tried to fetch results from the second cursor.
An answer four years later, but it is possible to have more than one server-side cursor open on the same connection. (It may be that the library was updated to fix the problem above.)
The caveat is that you are only allowed to call execute() once on a named cursor, so if you reuse one of the cursors in the fetchmany loop, you need to either remove the name or create another "anonymous" cursor for each query.
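A minimal sketch of that pattern against the question's tables: the outer named cursor streams server-side, and a fresh named cursor is created (and closed) for each inner query, since a named cursor only supports a single execute(). The cursor names are arbitrary, and the parameterized WHERE clause replaces the string interpolation from the original:

import psycopg2
from psycopg2.extras import DictCursor

conn = psycopg2.connect('connection parameters')
outer = conn.cursor(name='outer_cur', cursor_factory=DictCursor)
outer.execute("SELECT acctid, regist_day FROM tbl_account "
              "WHERE regist_day <= '2014-11-25' ORDER BY 1")
for record1 in outer:
    # New server-side cursor per inner query: execute() once, then close.
    inner = conn.cursor(name='inner_cur_' + record1['acctid'], cursor_factory=DictCursor)
    inner.execute("SELECT user_id, artist_id FROM tbl_my_artist "
                  "WHERE user_id = %s ORDER BY 1", (record1['acctid'],))
    for record2 in inner:
        print((record1['acctid'], record2['artist_id'], record1['regist_day']))
    inner.close()
outer.close()
conn.close()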
