I am in the process of writing a Python script that downloads copies of quarantined emails from our gateway. The emails are in .eml format (plain text), so I thought this would be easy, but the resulting file does not open properly in Outlook because of the extra newlines that get added.
Here is the download/write function:
def download_message(api_key, endpoint, id):
    endpoint = endpoint + "email/" + id
    headers = {
        'x-fireeye-api-key': api_key,
        'content-type': 'application/json',
        'Accept': 'application/json'}
    data = {}
    r = requests.get(endpoint, headers=headers, data=json.dumps(data))
    if "not found in quarantine." in r.text:
        print("Not found in Quarantine")
    else:
        filename = base_directory + id + ".eml"
        f = open(filename, "w")
        f.write(r.text)
        f.close()
        print("Written : " + id + ".eml" + " to disk")
Here is an example of the output file when opened in a text editor:
When opened in Outlook, this is what it looks like:
If I manually remove all those blank lines (regex: ^\n) and save the file, it works as expected.
I have tried quite a few ways of removing those blank lines, including strip, rstrip, and re.sub, and nothing has worked.
If it helps, what I was trying to do was create a new variable to hold the "modified" text and then pass that to the write call.
It would have looked something like this (sorry, I have tried loads of variations, but I think you will get the point):
filedata = r.text.strip("\n") or filedata = re.sub('^\n', "", r.text)
...
f.write(filedata)
Can anyone help?
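For what it's worth, a minimal sketch of one likely fix, assuming the blank lines come from newline translation: the .eml data already uses CRLF line endings, and writing the decoded text in text mode on Windows turns each \n back into \r\n, producing the doubled blank lines. Writing the raw response bytes sidesteps that:

# Hedged sketch: write the undecoded bytes so the CRLF line endings in the
# .eml are preserved instead of being doubled by text-mode newline translation.
filename = base_directory + id + ".eml"
with open(filename, "wb") as f:   # binary mode disables newline translation
    f.write(r.content)            # r.content is the raw bytes; r.text is decoded str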
I scheduled a cron that executes on the 1st of every month; its purpose is to allocate leave to all employees according to their tags. Here is a sample of my code:
for leave in leave_type_ids:
    for employee_tag in employee_tag_ids:
        values = {
            'name': 'Allocation mensuelle %s %s' % (now.strftime('%B'), now.strftime('%Y')),
            'holiday_status_id': leave.id,
            'number_of_days': employee_tag.allocation,
            'holiday_type': 'category',
            'category_id': employee_tag.id,
        }
        try:
            self.create(values).action_approve()
        except Exception as e:
            _logger.critical(e)
I want to point out that self is an instance of 'hr.leave.allocation'.
The problem is that when I create the record, the employee_id field is automatically filled with the user/employee OdooBot (the one who executed the program in the cron), and that is not all: the OdooBot employee itself was allocated the leave.
This behavior is caused by this code in Odoo's native modules:
def _default_employee(self):
    return self.env.context.get('default_employee_id') or self.env['hr.employee'].search([('user_id', '=', self.env.uid)], limit=1)

employee_id = fields.Many2one(
    'hr.employee', string='Employee', index=True, readonly=True,
    states={'draft': [('readonly', False)], 'confirm': [('readonly', False)]},
    default=_default_employee, track_visibility='onchange')
So my question is: how do I prevent this when running from the cron, while keeping the normal behavior in the form view?
The "employé" (employee) field should be empty here (in the image below), because it is an allocation by tag.
You must loop over hr.employee records, because then you can do either of the following (see the sketch after the two options):
self.with_context({'default_employee_id': employee.id}).create(...)
OR
self.sudo(employee.user_id.id).create(...)
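A sketch of the first option applied to the question's cron loop might look like the following; the category_ids search domain is an assumption about how employees are linked to their tags, so adjust it to your schema:

for leave in leave_type_ids:
    for employee_tag in employee_tag_ids:
        # Hypothetical domain: employees carrying this tag.
        employees = self.env['hr.employee'].search(
            [('category_ids', 'in', [employee_tag.id])])
        for employee in employees:
            values = {
                'name': 'Allocation mensuelle %s %s' % (now.strftime('%B'),
                                                        now.strftime('%Y')),
                'holiday_status_id': leave.id,
                'number_of_days': employee_tag.allocation,
                'holiday_type': 'category',
                'category_id': employee_tag.id,
            }
            try:
                # default_employee_id takes precedence in _default_employee,
                # so the record is no longer attributed to the cron user.
                self.with_context(
                    default_employee_id=employee.id).create(values).action_approve()
            except Exception as e:
                _logger.critical(e)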
Trial 1:
result, data = mail.uid("STORE", str(message_id), "+X-GM-LABELS", '"\\Trash"')
Output:
BAD [b'Command Argument Error. 11']
Trial 2:
result, data = mail.uid('STORE', str(message_id), '+FLAGS', '(\\Deleted)')
print("Deleted the mail : ", result, "-", details_log[4])
result, data = mail.uid('EXPUNGE', str(message_id))
print("result", result)
print("data", data)
Output:
Deleted the mail : OK
result NO
data [b'EXPUNGE failed.']
Issue: after EXPUNGE, I even tried closing and logging out of the connection, but the mail still doesn't get deleted.
I know this post is old, but for anyone who reads it later on:
When using imaplib's select function to choose a mailbox (in my case, the "Inbox" mailbox), I had the readonly argument set to True to be safe, but this blocked me from deleting emails in Microsoft Outlook. I set it to False and was then able to delete emails with the store and expunge methods:
conn.select("Inbox", readonly=False)
# modify search query as you see fit
typ, data = conn.search(None, "FROM", "scammer@whatever.com")
for email in data[0].split():
conn.store(email, "+FLAGS", "\\Deleted")
conn.expunge()
conn.close()
conn.logout()
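Adapting the same fix to the UID-based calls from the question, a minimal sketch (host and credentials are placeholders) could look like:

import imaplib

mail = imaplib.IMAP4_SSL("imap.example.com")   # placeholder host
mail.login("user@example.com", "password")     # placeholder credentials
mail.select("Inbox", readonly=False)           # readonly=False is the key change

# UID SEARCH returns UIDs, so the STORE below can use mail.uid() safely.
typ, data = mail.uid('SEARCH', None, 'ALL')    # narrow the criteria as needed
for uid in data[0].split():
    mail.uid('STORE', uid, '+FLAGS', '(\\Deleted)')
mail.expunge()   # plain EXPUNGE; 'UID EXPUNGE' requires UIDPLUS server support
mail.close()
mail.logout()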
Get all songs:
for row in self.shell.props.library_source.props.base_query_model:
    print(row[0].get_string(RB.RhythmDBPropType.TITLE))
I need to get only 10 songs (for example).
First try:
self.shell.props.library_source.props.base_query_model.set_property(
    "limit-value", GLib.Variant("n", 10))
for row in self.shell.props.library_source.props.base_query_model:
    print(row[0].get_string(RB.RhythmDBPropType.TITLE))
Result:
Warning: g_object_set_property: construct property "limit-value" for object 'RhythmDBQueryModel' can't be set after construction
self.shell.props.library_source.props.base_query_model.set_property("limit-value", GLib.Variant("n", 10))
Second try: since I do not know how to set the limit value, I tried building my own query model instead, filtering by GENRE:
db = self.shell.props.db
query_model = RB.RhythmDBQueryModel.new_empty(db)
query = GLib.PtrArray()
db.query_append_params(query, RB.RhythmDBQueryType.EQUALS, RB.RhythmDBPropType.GENRE, "Salsa")
db.do_full_query_parsed(query_model, query)
for row in query_model:
    print(row[0].get_string(RB.RhythmDBPropType.ARTIST))
Result:
Rhythmbox crashed; the error is detailed in: How do I query for data in Rhythmbox
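One workaround sketch that sidesteps the construct-only limit-value property entirely: the query model is iterable (as the first snippet shows), so you can simply slice the iteration. This assumes the model's iteration order is the order you want:

import itertools

# Hedged workaround: take only the first 10 rows of the existing model
# instead of setting limit-value, which is settable only at construction.
model = self.shell.props.library_source.props.base_query_model
for row in itertools.islice(model, 10):
    print(row[0].get_string(RB.RhythmDBPropType.TITLE))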
I am curious why psycopg2 doesn't allow opening multiple server-side cursors (http://initd.org/psycopg/docs/usage.html#server-side-cursors) on the same connection. I ran into this problem recently and had to work around it by replacing the second cursor with a client-side cursor. But I still want to know if there is any way to do it.
For example, I have these 2 tables on Amazon Redshift:
CREATE TABLE tbl_account (
acctid varchar(100),
regist_day date
);
CREATE TABLE tbl_my_artist (
user_id varchar(100),
artist_id bigint
);
INSERT INTO tbl_account
(acctid, regist_day)
VALUES
('TEST0000000001', DATE '2014-11-23'),
('TEST0000000002', DATE '2014-11-23'),
('TEST0000000003', DATE '2014-11-23'),
('TEST0000000004', DATE '2014-11-23'),
('TEST0000000005', DATE '2014-11-25'),
('TEST0000000006', DATE '2014-11-25'),
('TEST0000000007', DATE '2014-11-25'),
('TEST0000000008', DATE '2014-11-25'),
('TEST0000000009', DATE '2014-11-26'),
('TEST0000000010', DATE '2014-11-26'),
('TEST0000000011', DATE '2014-11-24'),
('TEST0000000012', DATE '2014-11-24')
;
INSERT INTO tbl_my_artist
(user_id, artist_id)
VALUES
('TEST0000000001', 2000011247),
('TEST0000000001', 2000157208),
('TEST0000000001', 2000002648),
('TEST0000000002', 2000383724),
('TEST0000000003', 2000002546),
('TEST0000000003', 2000417262),
('TEST0000000004', 2000076873),
('TEST0000000004', 2000417266),
('TEST0000000005', 2000077991),
('TEST0000000005', 2000424268),
('TEST0000000005', 2000168784),
('TEST0000000006', 2000284581),
('TEST0000000007', 2000284581),
('TEST0000000007', 2000000642),
('TEST0000000008', 2000268783),
('TEST0000000008', 2000284581),
('TEST0000000009', 2000088635),
('TEST0000000009', 2000427808),
('TEST0000000010', 2000374095),
('TEST0000000010', 2000081797),
('TEST0000000011', 2000420006),
('TEST0000000012', 2000115887)
;
I want to select from those two tables, then do something with the query results.
I use two server-side cursors because I need two nested loops over the results, and I want server-side cursors because the result sets can be very large.
I use fetchmany() instead of fetchall() because I'm running on a single-node cluster.
Here is my code:
import psycopg2
from psycopg2.extras import DictCursor

conn = psycopg2.connect('connection parameters')
cur1 = conn.cursor(name='cursor1', cursor_factory=DictCursor)
cur2 = conn.cursor(name='cursor2', cursor_factory=DictCursor)

cur1.execute("""SELECT acctid, regist_day FROM tbl_account
                WHERE regist_day <= '2014-11-25'
                ORDER BY 1""")
for record1 in cur1.fetchmany(50):
    cur2.execute("""SELECT user_id, artist_id FROM tbl_my_artist
                    WHERE user_id = '%s'
                    ORDER BY 1""" % (record1["acctid"]))
    for record2 in cur2.fetchmany(50):
        print '(acctid, artist_id, regist_day): (%s, %s, %s)' % (
            record1["acctid"], record2["artist_id"], record1["regist_day"])
        # do something with these values
conn.close()
When running, I got an error:
Traceback (most recent call last):
  File "C:\Users\MLD1\Desktop\demo_cursor.py", line 20, in <module>
    for record2 in cur2.fetchmany(50):
  File "C:\Python27\lib\site-packages\psycopg2\extras.py", line 72, in fetchmany
    res = super(DictCursorBase, self).fetchmany(size)
InternalError: opening multiple cursors from within the same client connection is not allowed.
That error occurred at line 20, when I tried to fetch results from the second cursor.
An answer four years later, but it is possible to have more than one cursor open on the same connection. (It may be that the library was updated to fix the problem above.)
The caveat is that you are only allowed to call execute() once on a named cursor, so if you reuse one of the cursors in the fetchmany loop, you would need to either remove the name or create another "anonymous" cursor.
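A minimal sketch of that approach against the tables above, opening a fresh named cursor for each inner query since each named cursor only gets one execute() (the DSN string is the question's placeholder):

import psycopg2
from psycopg2.extras import DictCursor

conn = psycopg2.connect('connection parameters')  # placeholder DSN

cur1 = conn.cursor(name='cursor1', cursor_factory=DictCursor)
cur1.execute("""SELECT acctid, regist_day FROM tbl_account
                WHERE regist_day <= '2014-11-25'
                ORDER BY 1""")

for record1 in cur1.fetchmany(50):
    # A named cursor only allows one execute(), so open a new one per row;
    # the unique name avoids clashing with a cursor that is still open.
    cur2 = conn.cursor(name='cursor2_%s' % record1["acctid"],
                       cursor_factory=DictCursor)
    cur2.execute("""SELECT user_id, artist_id FROM tbl_my_artist
                    WHERE user_id = %s
                    ORDER BY 1""", (record1["acctid"],))
    for record2 in cur2.fetchmany(50):
        print('(acctid, artist_id, regist_day): (%s, %s, %s)' % (
            record1["acctid"], record2["artist_id"], record1["regist_day"]))
    cur2.close()

cur1.close()
conn.close()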