Send an email from a Python3 script with localhost? - python-3.x

I need to send mail from my Python3 script. It works now, but my Gmail password is visible in the script and I cannot trust the admins of this machine, so the solution I see is to set up a local mail server. To do some tests, I tried executing a script (this one: SMTP sink server). While that one is running, I execute my old script with some changes:
import smtplib
# server = smtplib.SMTP('smtp.gmail.com:587')
server = smtplib.SMTP('localhost:25')
# server.ehlo()
# server.starttls()
# server.ehlo()
# server.login('my_account@gmail.com', 'my_password')
server.login(None, None)
server.sendmail('my_account@gmail.com', ['to_user@gmail.com'], 'Hi!')
server.quit()
I understand the script at the link will create a file with the mail content in the folder where it runs, but nothing happens, because I get this error message:
SMTP AUTH extension not supported by server.
I googled and I think this could be sorted out by uncommenting the line server.starttls(), but that gives another error, which is supposedly solved by the server.ehlo() lines, though not in my case.
Any suggestions?

OK, I managed to send the email. All I had to do was remove this line:
server.login(None, None)
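For reference, a minimal working version of the sending script, building the message with email.message instead of calling .as_string() on a plain string (addresses are placeholders; the send itself is commented out since it assumes an unauthenticated SMTP sink running on localhost:25):

```python
import smtplib
from email.message import EmailMessage

# Placeholder addresses; a local SMTP sink needs no login()
msg = EmailMessage()
msg['From'] = 'my_account@gmail.com'
msg['To'] = 'to_user@gmail.com'
msg['Subject'] = 'Test'
msg.set_content('Hi!')

# with smtplib.SMTP('localhost', 25) as server:
#     server.send_message(msg)
```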

Related

How to send emails via Outlook using a terminal server

I have a Python script which is used to send email through Outlook, but the script contains a path from my local system.
Now I have to run this script on a centralised system using PuTTY; as the code is now on Git, how do I change the path in the 'mail.Attachments' section?
The script below runs fine on my local machine, but when I try to run it over PuTTY it throws an error, module win32com.client not found, and doesn't even allow me to install pywin32.
import win32com.client

outlook = win32com.client.Dispatch('Outlook.Application')
mail = outlook.CreateItem(0)  # 0 = olMailItem
mail.To = 'xsupport@xsample.com'
mail.Subject = 'certificate CSR'
mail.Body = ("Attached is the CSR for 'xxx.sample.com'.\n"
             "Please request a duplicate certificate for the Cert Project.\n\n"
             "Thanks,\nPraveen")
mail.Attachments.Add('C:/Users/praveen23/akamai_cert/sample.pem')
mail.Send()
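As a side note, win32com drives the locally installed Outlook via COM, which only exists on Windows, so this script cannot run as-is on a non-Windows central server. A rough cross-platform sketch using only the standard library (the sender address and SMTP host are placeholders, and the attachment path is resolved by the caller instead of hard-coding C:/):

```python
import smtplib
from email.message import EmailMessage
from pathlib import Path

def build_mail(pem_path):
    """Build the CSR email with the .pem file attached."""
    msg = EmailMessage()
    msg['From'] = 'praveen@example.com'  # placeholder sender
    msg['To'] = 'xsupport@xsample.com'
    msg['Subject'] = 'certificate CSR'
    msg.set_content("Attached is the CSR for 'xxx.sample.com'.\n"
                    "Please request a duplicate certificate for the Cert Project.\n\n"
                    "Thanks,\nPraveen")
    pem = Path(pem_path)
    msg.add_attachment(pem.read_bytes(), maintype='application',
                       subtype='octet-stream', filename=pem.name)
    return msg

# Resolve the attachment relative to the script, not a hard-coded local path:
# msg = build_mail(Path(__file__).parent / 'akamai_cert' / 'sample.pem')
# with smtplib.SMTP('mail.example.com') as s:  # placeholder SMTP host
#     s.send_message(msg)
```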

Google Drive API in Celery Task is Failing

Latest update: http requests within the task are working, but not https.
I am trying to use a Celery task to upload files to Google Drive, once the files have been uploaded to the local web server for backup.
I saw multiple questions asking similar things: I cannot make the Google API work in a Celery task, but it works when I run it without delay(). The questions didn't receive any answers.
Question 1, where @chucky is struggling like me.
Implementation and Information:
Server: Django Development Server (localhost)
Celery: Working with RabbitMQ
Database: Postgres
GoogleDriveAPI: V3
I was able to get credentials and a token for accessing drive files and
display the first ten files, if the quickstart file is run separately.
Google Drive API Quickstart.py
Running this Quickstart.py shows the files and folder list of the drive.
So I added the same code, with all the needed imports, to the tasks.py task
named create_upload_folder(), to test whether the task would work and show
the list of files.
I am running it with an Ajax call, but I keep getting this error.
Tracing back shows that the root of the above error is:
[2021-07-13 21:10:03,979: WARNING/MainProcess]
[2021-07-13 21:10:04,052: ERROR/MainProcess] Task create_upload_folder[2463ad5b-4c7c-4eba-b862-9417c01e8314] raised unexpected: ServerNotFoundError('Unable to find the server at www.googleapis.com')
Traceback (most recent call last):
File "f:\repos\vuetransfer\vuenv\lib\site-packages\httplib2\__init__.py", line 1346, in _conn_request
conn.connect()
File "f:\repos\vuetransfer\vuenv\lib\site-packages\httplib2\__init__.py", line 1136, in connect
sock.connect((self.host, self.port))
File "f:\repos\vuetransfer\vuenv\lib\site-packages\eventlet\greenio\base.py", line 257, in connect
if socket_connect(fd, address):
File "f:\repos\vuetransfer\vuenv\lib\site-packages\eventlet\greenio\base.py", line 40, in socket_connect
err = descriptor.connect_ex(address)
It's failing on name resolution (it can't find the IP of www.googleapis.com), most likely because it can't contact a DNS server that has the IP (or can't contact any DNS server at all).
Make sure your DNS server is properly set up, or, if you are behind a corporate proxy/VPN, that you're actually using it.
You can verify it is working by fetching the IPs manually:
$ nslookup www.googleapis.com
Non-authoritative answer:
Name: www.googleapis.com
Address: 172.217.23.234
Name: www.googleapis.com
Address: 216.58.201.74
Name: www.googleapis.com
Address: 172.217.23.202
Name: www.googleapis.com
Address: 2a00:1450:4014:80c::200a
Name: www.googleapis.com
Address: 2a00:1450:4014:800::200a
Name: www.googleapis.com
Address: 2a00:1450:4014:80d::200a
If you can fetch the IPs manually, then there's a connectivity problem with Python itself not being aware of the proxies (that may have been set up on your PC), and in that case try setting:
http_proxy=http://your.proxy:port
https_proxy=http://your.proxy:port
in the environment, as a command prefix, or directly in the configuration of the HTTP client httplib2 uses.
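A sketch of the environment-variable route from inside Python, in case exporting them in the shell isn't an option (the proxy URL and port are placeholders; set them before the HTTP client is created):

```python
import os

# Placeholder proxy; many HTTP clients honour these variables,
# and httplib2 can pick them up via proxy_info_from_environment()
os.environ['http_proxy'] = 'http://your.proxy:3128'
os.environ['https_proxy'] = 'http://your.proxy:3128'
```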
The major problem is with using httplib2 with Python 3 (or some other complication): even though the Google API client for Python says it is fully supported, you can have problems with requests. At least the problem is there for me with Python 3 on Windows.
After a lot of research I found that falling back to Python 2 is one solution, but another is using httplib2shim: after creating the credentials for your service, and before the .build() call for your service, you need to call
...
httplib2shim.patch()
service = build(API_SERVICE_NAME, API_VERSION, credentials=creds)
This will solve the issue of httplib2 not being able to find www.googleapis.com.

How to fix "Cannot connect to server" exception?

I was setting up chromedriver with Selenium, using the test script provided on the chromedriver website. Everything worked fine until I switched to a different WiFi network. Now I'm getting an error message when running my script.
I have searched the web for solutions, and I've tried the following things:
Made sure the chromedriver version matches my Chrome version.
Tried to whitelist the IP address.
Checked for 127.0.0.1 localhost in /etc/hosts.
The test code I'm running (/path/to/my/chromedriver is correct):
import time
from selenium import webdriver

driver = webdriver.Chrome("/path/to/my/chromedriver")  # Optional argument, if not specified will search path.
driver.get('http://www.google.com/xhtml')
time.sleep(5)  # Let the user actually see something!
search_box = driver.find_element_by_name('q')
search_box.send_keys('ChromeDriver')
search_box.submit()
time.sleep(5)  # Let the user actually see something!
driver.quit()
I'm expecting the program to run fine, and the browser should pop up. However, the browser is not popping up and I'm getting the following error message:
File "test.py", line 4, in <module>
driver = webdriver.Chrome("/path/to/my/chromedriver") # Optional argument, if not specified will search path.
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/selenium/webdriver/chrome/webdriver.py", line 73, in __init__
self.service.start()
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/selenium/webdriver/common/service.py", line 104, in start
raise WebDriverException("Can not connect to the Service %s" % self.path)
selenium.common.exceptions.WebDriverException: Message: Can not connect to the Service /path/to/my/chromedriver
When running chromedriver in the terminal I get the following message (and the browser is also not popping up as it is supposed to):
Only local connections are allowed.
Please protect ports used by ChromeDriver and related test frameworks to prevent access by malicious code.
EDIT: I have the same problem with the geckodriver for firefox, so it is not specific for Chrome.
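Whichever driver is used, a quick first check is whether the driver binary is present and executable at the path Selenium is given; a small stdlib sketch (the fallback path mirrors the placeholder above):

```python
import os
import shutil

def driver_usable(path):
    """Check that the driver binary exists and is executable."""
    return os.path.isfile(path) and os.access(path, os.X_OK)

# Prefer a chromedriver found on PATH, else fall back to the explicit path
path = shutil.which('chromedriver') or '/path/to/my/chromedriver'
if driver_usable(path):
    print('chromedriver looks runnable at', path)
else:
    print('chromedriver missing or not executable at', path)
```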

Error psycopg2.OperationalError: fe_sendauth: no password supplied even after Postgres authorizes the connection

My Python script is raising a 'psycopg2.OperationalError: fe_sendauth: no password supplied' error, even though the Postgres server is authorizing the connection.
I am using Python 3.5, psycopg2, Postgres 9.5, and the password is stored in a .pgpass file. The script is part of a RESTful Flask application, using flask-restful. The script runs on the same host as the Postgres server.
I am calling the connect function as follows:
conn_admin = psycopg2.connect("dbname=database user=username")
When I execute the script I get the following stack trace:
File "/var/www/flask/content_provider.py", line 84, in get_report
conn_admin = psycopg2.connect("dbname=database user=username")
File "/usr/local/lib/python3.5/dist-packages/psycopg2/__init__.py", line 130, in connect
conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
psycopg2.OperationalError: fe_sendauth: no password supplied
However, when I look at the Postgres server log I see the following (I enabled the logger to also show all connection requests):
2019-01-04 18:28:35 SAST [17736-2] username@database LOG: connection authorized: user=username database=database
This code runs fine on my development PC; however, when I put it onto the Ubuntu server, I start getting this problem.
To try and find the issue, I have hard-coded the password into the connection string, but I still get the same error.
If I execute the above line directly into my Python terminal on the host, it works fine, with and without the password in the connection string.
EDIT:
One thing I did notice is that on my desktop I use Python 3.6.2, while on the server I use Python 3.5.2.
Try adding the host:
conn_admin = psycopg2.connect("dbname=database user=username host=localhost")
Try adding the password as well, i.e.:
conn = psycopg2.connect("dbname=database user=username host=localhost password=password")
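Since the password lives in .pgpass, it is also worth checking that the file's entry actually matches the connection parameters: libpq matches the hostname, port, database, and username fields, with * as a wildcard. A simplified stdlib sketch of that lookup (real libpq additionally handles escaping and treats localhost specially for socket connections):

```python
def pgpass_lookup(lines, host, port, dbname, user):
    """Return the first matching password from .pgpass-style lines, or None.

    Each line has the form hostname:port:database:username:password,
    where * in any of the first four fields matches anything.
    """
    for line in lines:
        line = line.strip()
        if not line or line.startswith('#'):
            continue
        h, p, d, u, pw = line.split(':', 4)
        if all(pat in ('*', val) for pat, val in
               [(h, host), (p, port), (d, dbname), (u, user)]):
            return pw
    return None

entries = ['localhost:5432:database:username:secret']
pgpass_lookup(entries, 'localhost', '5432', 'database', 'username')  # matches
pgpass_lookup(entries, 'otherhost', '5432', 'database', 'username')  # no match
```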

Paramiko cannot open an ssh connection even with load_system_host_keys + WarningPolicy

I am connecting to a remote server with the following code:
ssh = paramiko.SSHClient()
ssh.load_system_host_keys()
ssh.set_missing_host_key_policy(paramiko.WarningPolicy())
ssh.connect(
    hostname=settings.HOSTNAME,
    port=settings.PORT,
    username=settings.USERNAME,
)
When I'm on local server A, I can ssh onto the remote from the command line, suggesting it is in known_hosts. And the code works as expected.
On local server B, I can also ssh onto the remote from the command line. But when I try to use the above code I get:
/opt/mysite/virtualenv/lib/python3.5/site-packages/paramiko/client.py:763: UserWarning: Unknown ssh host key for [hostname]:22: b'12345'
key.get_fingerprint())))
...
File "/opt/mysite/virtualenv/lib/python3.5/site-packages/paramiko/client.py", line 416, in connect
look_for_keys, gss_auth, gss_kex, gss_deleg_creds, t.gss_host,
File "/opt/mysite/virtualenv/lib/python3.5/site-packages/paramiko/client.py", line 702, in _auth
raise SSHException('No authentication methods available')
paramiko.ssh_exception.SSHException: No authentication methods available
Unlike "SSH - Python with paramiko issue", I am using both load_system_host_keys and WarningPolicy, so I should not need to programmatically add a password or key (and I don't need to on local server A).
Is there some system configuration step I've missed?
Try using fabric (which is built on top of invoke + paramiko) instead of paramiko, and set the following parameters:
con = fabric.Connection('username@hostname', connect_kwargs={'password': 'yourpassword', 'allow_agent': False})
If it keeps failing, check that your password is still valid and that you're not required to change it.
I tested with the wrong user on local server B. The user running the Python process did not have ssh permissions after all. (Command line ssh failed for that user.) Once I gave it permissions, the connection worked as expected.
