Cloud Run Python 3.7: Fatal Python error: Segmentation fault

I get this error when I call an API that publishes a message to Pub/Sub in the background. The application has the same Pub/Sub code elsewhere in another script, and that works fine; this addition is what breaks.
2022-11-01 22:37:01 MDT
Current thread 0x00003ecdf3e00700 (most recent call first):
  File "/usr/local/lib/python3.7/site-packages/grpc/_channel.py", line 933 in _blocking
  File "/usr/local/lib/python3.7/site-packages/grpc/_channel.py", line 945 in __call__
  File "/usr/local/lib/python3.7/site-packages/google/api_core/grpc_helpers.py", line 72 in error_remapped_callable
  File "/usr/local/lib/python3.7/site-packages/google/api_core/timeout.py", line 99 in func_with_timeout
  File "/usr/local/lib/python3.7/site-packages/google/api_core/retry.py", line 190 in retry_target
  File "/usr/local/lib/python3.7/site-packages/google/api_core/retry.py", line 288 in retry_wrapped_func
  File "/usr/local/lib/python3.7/site-packages/google/api_core/gapic_v1/method.py", line 154 in __call__
  File "/usr/local/lib/python3.7/site-packages/google/pubsub_v1/services/publisher/client.py", line 785 in publish
  File "/usr/local/lib/python3.7/site-packages/google/cloud/pubsub_v1/publisher/client.py", line 272 in _gapic_publish
  File "/usr/local/lib/python3.7/site-packages/google/cloud/pubsub_v1/publisher/_batch/thread.py", line 278 in _commit
  File "/usr/local/lib/python3.7/threading.py", line 870 in run
  File "/usr/local/lib/python3.7/threading.py", line 926 in _bootstrap_inner
  File "/usr/local/lib/python3.7/threading.py", line 890 in _bootstrap

Thread 0x00003ecec975e740 (most recent call first):
  File "/usr/local/lib/python3.7/site-packages/gunicorn/workers/sync.py", line 36 in wait
  File "/usr/local/lib/python3.7/site-packages/gunicorn/workers/sync.py", line 84 in run_for_one
  File "/usr/local/lib/python3.7/site-packages/gunicorn/workers/sync.py", line 125 in run
  File "/usr/local/lib/python3.7/site-packages/gunicorn/workers/base.py", line 142 in init_process
  File "/usr/local/lib/python3.7/site-packages/gunicorn/arbiter.py", line 589 in spawn_worker
  File "/usr/local/lib/python3.7/site-packages/gunicorn/arbiter.py", line 622 in spawn_workers
  File "/usr/local/lib/python3.7/site-packages/gunicorn/arbiter.py", line 551 in manage_workers
  File "/usr/local/lib/python3.7/site-packages/gunicorn/arbiter.py", line 202 in run
  File "/usr/local/lib/python3.7/site-packages/gunicorn/app/base.py", line 72 in run
  File "/usr/local/lib/python3.7/site-packages/gunicorn/app/base.py", line 231 in run
  File "/usr/local/lib/python3.7/site-packages/gunicorn/app/wsgiapp.py", line 67 in run
  File "/usr/local/bin/gunicorn", line 8 in <module>

Uncaught signal: 11, pid=42, tid=54, fault_addr=42.
[2022-11-02 04:37:01 +0000] [2] [WARNING] Worker with pid 42 was terminated due to signal 11


dbbackup with docker/celery/celery-beat not working: [Errno 2] No such file or directory: 'pg_dump'

I am setting up backups using django-dbbackup. However, I receive an error when backing up my data. There is a similar question where the person was able to resolve it, but the answer doesn't show how.
My dbbackup version is django-dbbackup==4.0.2
Please find my docker-compose file below:
database:
  build:
    context: .
    dockerfile: pg-Dockerfile
  expose:
    - "5432"
  restart: always
  volumes:
    - .:/myworkdir
  environment:
    POSTGRES_USER: postgres
    POSTGRES_PASSWORD: password
    POSTGRES_DB: faranga_db
redis:
  image: redis
  restart: on-failure
  volumes:
    - .:/myworkdir
  expose:
    - "6379"
celery:
  build: .
  restart: on-failure
  command: bash -c "sleep 10; celery -A project worker -l info"
  volumes:
    - .:/myworkdir
  env_file:
    - .env
  depends_on:
    - database
    - redis
beat:
  build: .
  restart: on-failure
  command: bash -c "sleep 10; celery -A project beat -l info --pidfile=/tmp/celeryd.pid"
  volumes:
    - .:/myworkdir
  env_file:
    - .env
  depends_on:
    - database
    - redis
my celery task:
@app.task
def database_backup():
    management.call_command('dbbackup')

# media backup works just fine
@app.task
def media_backup():
    management.call_command('mediabackup')
DB backup settings:
# django db backup https://django-dbbackup.readthedocs.io/en/master/installation.html
DBBACKUP_STORAGE = 'django.core.files.storage.FileSystemStorage'
DBBACKUP_STORAGE_OPTIONS = {'location': '~/myworkdir/backups/db/'}

def backup_filename(databasename, servername, datetime, extension, content_type):
    pass

DBBACKUP_FILENAME_TEMPLATE = backup_filename
DBBACKUP_CONNECTOR = "dbbackup.db.postgresql.PgDumpBinaryConnector"
Error stack trace:
[2023-02-09 14:44:00,052: ERROR/ForkPoolWorker-6] CommandConnectorError: Error running: pg_dump --dbname=postgresql://postgres:password@database:5432/faranga_db --format=custom
faranga-celery-1 | [Errno 2] No such file or directory: 'pg_dump'
faranga-celery-1 | File "/usr/local/lib/python3.10/dist-packages/dbbackup/utils.py", line 120, in wrapper
faranga-celery-1 | func(*args, **kwargs)
faranga-celery-1 | File "/usr/local/lib/python3.10/dist-packages/dbbackup/management/commands/dbbackup.py", line 93, in handle
faranga-celery-1 | self._save_new_backup(database)
faranga-celery-1 | File "/usr/local/lib/python3.10/dist-packages/dbbackup/management/commands/dbbackup.py", line 106, in _save_new_backup
faranga-celery-1 | outputfile = self.connector.create_dump()
faranga-celery-1 | File "/usr/local/lib/python3.10/dist-packages/dbbackup/db/base.py", line 92, in create_dump
faranga-celery-1 | return self._create_dump()
faranga-celery-1 | File "/usr/local/lib/python3.10/dist-packages/dbbackup/db/postgresql.py", line 112, in _create_dump
faranga-celery-1 | stdout, stderr = self.run_command(cmd, env=self.dump_env)
faranga-celery-1 | File "/usr/local/lib/python3.10/dist-packages/dbbackup/db/base.py", line 180, in run_command
faranga-celery-1 | raise exceptions.CommandConnectorError(
faranga-celery-1 |
faranga-celery-1 | [2023-02-09 14:44:00,080: ERROR/ForkPoolWorker-6] Task administration.tasks.database_backup[601e33a6-0eef-42c0-a355-cb7d33d7ebaa] raised unexpected: CommandConnectorError("Error running: pg_dump --dbname=postgresql://postgres:password@database:5432/faranga_db --format=custom \n[Errno 2] No such file or directory: 'pg_dump'")
faranga-celery-1 | Traceback (most recent call last):
faranga-celery-1 | File "/usr/local/lib/python3.10/dist-packages/dbbackup/db/base.py", line 165, in run_command
faranga-celery-1 | process = Popen(
faranga-celery-1 | File "/usr/local/lib/python3.10/dist-packages/sentry_sdk/integrations/stdlib.py", line 201, in sentry_patched_popen_init
faranga-celery-1 | rv = old_popen_init(self, *a, **kw)
faranga-celery-1 | File "/usr/lib/python3.10/subprocess.py", line 969, in __init__
faranga-celery-1 | self._execute_child(args, executable, preexec_fn, close_fds,
faranga-celery-1 | File "/usr/lib/python3.10/subprocess.py", line 1845, in _execute_child
faranga-celery-1 | raise child_exception_type(errno_num, err_msg, err_filename)
faranga-celery-1 | FileNotFoundError: [Errno 2] No such file or directory: 'pg_dump'
faranga-celery-1 |
faranga-celery-1 | During handling of the above exception, another exception occurred:
faranga-celery-1 |
faranga-celery-1 | Traceback (most recent call last):
faranga-celery-1 | File "/usr/local/lib/python3.10/dist-packages/celery/app/trace.py", line 451, in trace_task
faranga-celery-1 | R = retval = fun(*args, **kwargs)
faranga-celery-1 | File "/usr/local/lib/python3.10/dist-packages/sentry_sdk/integrations/celery.py", line 207, in _inner
faranga-celery-1 | reraise(*exc_info)
faranga-celery-1 | File "/usr/local/lib/python3.10/dist-packages/sentry_sdk/_compat.py", line 56, in reraise
faranga-celery-1 | raise value
faranga-celery-1 | File "/usr/local/lib/python3.10/dist-packages/sentry_sdk/integrations/celery.py", line 202, in _inner
faranga-celery-1 | return f(*args, **kwargs)
faranga-celery-1 | File "/usr/local/lib/python3.10/dist-packages/celery/app/trace.py", line 734, in __protected_call__
faranga-celery-1 | return self.run(*args, **kwargs)
faranga-celery-1 | File "/myworkdir/administration/tasks.py", line 8, in database_backup
faranga-celery-1 | management.call_command('dbbackup')
faranga-celery-1 | File "/usr/local/lib/python3.10/dist-packages/django/core/management/__init__.py", line 181, in call_command
faranga-celery-1 | return command.execute(*args, **defaults)
faranga-celery-1 | File "/usr/local/lib/python3.10/dist-packages/django/core/management/base.py", line 398, in execute
faranga-celery-1 | output = self.handle(*args, **options)
faranga-celery-1 | File "/usr/local/lib/python3.10/dist-packages/dbbackup/utils.py", line 120, in wrapper
faranga-celery-1 | func(*args, **kwargs)
faranga-celery-1 | File "/usr/local/lib/python3.10/dist-packages/dbbackup/management/commands/dbbackup.py", line 93, in handle
faranga-celery-1 | self._save_new_backup(database)
faranga-celery-1 | File "/usr/local/lib/python3.10/dist-packages/dbbackup/management/commands/dbbackup.py", line 106, in _save_new_backup
faranga-celery-1 | outputfile = self.connector.create_dump()
faranga-celery-1 | File "/usr/local/lib/python3.10/dist-packages/dbbackup/db/base.py", line 92, in create_dump
faranga-celery-1 | return self._create_dump()
faranga-celery-1 | File "/usr/local/lib/python3.10/dist-packages/dbbackup/db/postgresql.py", line 112, in _create_dump
faranga-celery-1 | stdout, stderr = self.run_command(cmd, env=self.dump_env)
faranga-celery-1 | File "/usr/local/lib/python3.10/dist-packages/dbbackup/db/base.py", line 180, in run_command
faranga-celery-1 | raise exceptions.CommandConnectorError(
faranga-celery-1 | dbbackup.db.exceptions.CommandConnectorError: Error running: pg_dump --dbname=postgresql://postgres:password@database:5432/faranga_db --format=custom
faranga-celery-1 | [Errno 2] No such file or directory: 'pg_dump'
If you're running this command inside your project container, you don't have access to the Postgres binaries needed to dump and restore your database. Try accessing your Postgres container and calling the command from a shell there.
You can get your database container ID from docker ps and open a shell in the container with docker exec -it [your_container_id] bash. Then try calling pg_dump from inside it. This should work.
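Alternatively, you can install the Postgres client tools into the image that runs dbbackup, so pg_dump is on its PATH. A sketch, assuming the celery service's image is Debian/Ubuntu-based (adjust the package name for other bases):

```dockerfile
# Hypothetical addition to the Dockerfile used by the celery/beat services.
# Installs pg_dump and friends; ideally match the client's major version
# to the Postgres server's version.
RUN apt-get update \
    && apt-get install -y --no-install-recommends postgresql-client \
    && rm -rf /var/lib/apt/lists/*
```

With the client installed in the same container as the celery worker, the dbbackup task can run without reaching into the database container.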

Crontab #reboot and streamlink PATH issues

I am attempting to run a Python 3 script at boot on my Raspberry Pi. The script contains a subprocess call that is giving me problems. I suspect the problem has to do with the PATH of the streamlink call, but I have not managed to solve it.
the script:
#!/usr/bin/env python3
import subprocess

def streamlistener(streamname):
    try:
        print('Listening to stream: ', streamname)
        grepOut = subprocess.check_output(['streamlink', '-p', 'omxplayer', '-a', '--timeout 20', '--player-fifo', '--retry-strea$
        print(grepOut.decode())
        streamlistener(streamname)
    except subprocess.CalledProcessError as e:
        print("ERRORS: ", e.output.decode())
        streamlistener(streamname)

streamlistener('STREAMNAME')
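One way to make PATH problems like this visible, not part of the original script but a sketch of a diagnostic, is to resolve the executable explicitly with shutil.which before handing it to subprocess:

```python
import shutil

def resolve(cmd):
    """Return the absolute path of cmd, raising a clear error if it is
    not on the (possibly minimal) PATH that cron gives the script."""
    path = shutil.which(cmd)
    if path is None:
        raise FileNotFoundError(f"{cmd!r} not found on PATH")
    return path

# The script could then call, for example:
#   subprocess.check_output([resolve('streamlink'), '-p', 'omxplayer', ...])
# so a PATH problem fails with an explicit message instead of a bare Errno 2.
```

This turns the opaque "No such file or directory: 'streamlink'" into an error that names the real cause.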
the crontab entry (same result without the sleep command):
@reboot sleep 60 && sudo python3 /home/pi/Desktop/stream.py 2>&1 | logger -p user.debug -t 'stream'
The error message I get:
Jun 28 17:40:39 raspberrypi stream: Traceback (most recent call last):
Jun 28 17:40:39 raspberrypi stream: File "/home/pi/Desktop/stream.py", line 26, in <module>
Jun 28 17:40:39 raspberrypi stream: streamlistener('northernstreaming')
Jun 28 17:40:39 raspberrypi stream: File "/home/pi/Desktop/stream.py", line 19, in streamlistener
Jun 28 17:40:39 raspberrypi stream: grepOut = subprocess.check_output(['streamlink','-p','omxplayer','-a','--timeout 20', '--pla$
Jun 28 17:40:39 raspberrypi stream: File "/usr/lib/python3.7/subprocess.py", line 395, in check_output
Jun 28 17:40:39 raspberrypi stream: **kwargs).stdout
Jun 28 17:40:39 raspberrypi stream: File "/usr/lib/python3.7/subprocess.py", line 472, in run
Jun 28 17:40:39 raspberrypi stream: with Popen(*popenargs, **kwargs) as process:
Jun 28 17:40:39 raspberrypi stream: File "/usr/lib/python3.7/subprocess.py", line 775, in __init__
Jun 28 17:40:39 raspberrypi stream: restore_signals, start_new_session)
Jun 28 17:40:39 raspberrypi stream: File "/usr/lib/python3.7/subprocess.py", line 1522, in _execute_child
Jun 28 17:40:39 raspberrypi stream: raise child_exception_type(errno_num, err_msg, err_filename)
Jun 28 17:40:39 raspberrypi stream: FileNotFoundError: [Errno 2] No such file or directory: 'streamlink': 'streamlink'
The problem turned out to be the PATH into which pip3 had installed streamlink, not the subprocess.check_output call itself. Setting the path properly, both in .bashrc and at the start of the crontab, fixed it.
Setting the path in .bashrc: https://linuxize.com/post/how-to-add-directory-to-path-in-linux/
Setting the path in cron: https://unix.stackexchange.com/questions/148133/how-to-set-crontab-path-variable
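Concretely, cron lets you set PATH at the top of the crontab itself. A sketch, assuming pip3 installed streamlink into /usr/local/bin (check with `which streamlink` in an interactive shell):

```
# Cron's default PATH is minimal (often just /usr/bin:/bin), so add the
# directory where pip3 installed streamlink (location assumed here).
PATH=/usr/local/bin:/usr/bin:/bin

@reboot sleep 60 && python3 /home/pi/Desktop/stream.py 2>&1 | logger -p user.debug -t 'stream'
```

With PATH set this way, the subprocess call can find streamlink without any change to the Python script.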

I am getting errors when I run the qtlseq tool installed via conda. The failing command is shown in the log below.

[QTL-seq:2019-10-09 09:13:37] !!ERROR!! bcftools concat -a -O z -o Chikpea_qtl/30_vcf/qtlseq.vcf.gz Chikpea_qtl/30_vcf/qtlseq.*.vcf.gz >> Chikpea_qtl/log/bcftools.log 2>&1
Failed to open Chikpea_qtl/30_vcf/qtlseq.NW_004516646.1.vcf.gz: could not load index
Traceback (most recent call last):
File "/home/jthakur/Desktop/Software/QTL-seq/qtlseq/mpileup.py", line 191, in concat
check=True)
File "/home/jthakur/anaconda2/lib/python3.6/subprocess.py", line 438, in run
output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command 'bcftools concat -a -O z -o Chikpea_qtl/30_vcf/qtlseq.vcf.gz Chikpea_qtl/30_vcf/qtlseq.*.vcf.gz >> Chikpea_qtl/log/bcftools.log 2>&1' returned non-zero exit status 255.
Traceback (most recent call last):
File "/home/jthakur/anaconda2/bin/qtlseq", line 11, in
load_entry_point('qtlseq', 'console_scripts', 'qtlseq')()
File "/home/jthakur/Desktop/Software/QTL-seq/qtlseq/qtlseq.py", line 192, in main
QTLseq(args).run()
File "/home/jthakur/Desktop/Software/QTL-seq/qtlseq/qtlseq.py", line 123, in mpileup
mp.run()
File "/home/jthakur/Desktop/Software/QTL-seq/qtlseq/mpileup.py", line 232, in run
self.concat()
File "/home/jthakur/Desktop/Software/QTL-seq/qtlseq/mpileup.py", line 194, in concat
sys.exit()
NameError: name 'sys' is not defined
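The final NameError looks like a secondary bug in qtlseq/mpileup.py itself: the error-handling path calls sys.exit() in a module that apparently never imports sys, which masks the real bcftools failure. A minimal sketch of that failure mode (the function and message are illustrative, not the tool's actual code):

```python
# Reproduces the pattern: a cleanup path calls sys.exit(), but `sys`
# was never imported, so the original error is replaced by a NameError.
def concat():
    try:
        raise RuntimeError("bcftools concat returned non-zero exit status 255")
    except RuntimeError:
        sys.exit()  # NameError: 'sys' is not defined in this module

try:
    concat()
except NameError as e:
    print(e)  # → name 'sys' is not defined
```

So the underlying problem to chase is the bcftools index failure in the log; the NameError is just the messenger being shot.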

Why is my Dockerized Flask app timing out on mail.send?

I'm trying to send mail from my Flask app, which is served by Gunicorn and Nginx. The connection to the Flask app times out when mail.send() is called from the container, but works fine when running locally with Werkzeug's development server.
I tried changing gunicorn worker classes to 'gevent' from 'sync', but the problem persists.
app.py
from flask import Flask
from werkzeug.middleware.proxy_fix import ProxyFix
from flask_sqlalchemy import SQLAlchemy
from flask_mail import Mail

app = Flask(__name__, instance_relative_config=True)
app.config.from_pyfile('config.py')
app.wsgi_app = ProxyFix(app.wsgi_app)
db = SQLAlchemy(app)
mail = Mail(app)

import views
views.py
from app import app, mail
from flask_mail import Message
import logging

@app.route('/')
def hello_world():
    email_msg = Message("Subject", sender="mail@mail.com", recipients=["someone@mail.com"])
    email_msg.body = f"Hello!"
    logging.error("Starting email send /request")
    mail.send(email_msg)
    return 'Hello World!'
wsgi.py
from app import app
import logging

log = logging.getLogger('werkzeug')
log.setLevel(logging.ERROR)

if __name__ == "__main__":
    app.run()  # host='0.0.0.0'
Dockerfile
...
CMD ["gunicorn", "-b", "0.0.0.0:8080", "--worker-class=gevent", "--workers=9", "wsgi:app"]
I expect the email to be sent, but instead it just times out.
Gunicorn output:
gbcodes | ERROR:root:Starting email send /request
gbcodes | ERROR:app:Exception on / [GET]
gbcodes | Traceback (most recent call last):
gbcodes | File "/usr/lib/python3.6/site-packages/flask/app.py", line 2446, in wsgi_app
gbcodes | response = self.full_dispatch_request()
gbcodes | File "/usr/lib/python3.6/site-packages/flask/app.py", line 1951, in full_dispatch_request
gbcodes | rv = self.handle_user_exception(e)
gbcodes | File "/usr/lib/python3.6/site-packages/flask/app.py", line 1820, in handle_user_exception
gbcodes | reraise(exc_type, exc_value, tb)
gbcodes | File "/usr/lib/python3.6/site-packages/flask/_compat.py", line 39, in reraise
gbcodes | raise value
gbcodes | File "/usr/lib/python3.6/site-packages/flask/app.py", line 1949, in full_dispatch_request
gbcodes | rv = self.dispatch_request()
gbcodes | File "/usr/lib/python3.6/site-packages/flask/app.py", line 1935, in dispatch_request
gbcodes | return self.view_functions[rule.endpoint](**req.view_args)
gbcodes | File "/app/views.py", line 13, in hello_world
gbcodes | mail.send(email_msg)
gbcodes | File "/usr/lib/python3.6/site-packages/flask_mail.py", line 491, in send
gbcodes | with self.connect() as connection:
gbcodes | File "/usr/lib/python3.6/site-packages/flask_mail.py", line 144, in __enter__
gbcodes | self.host = self.configure_host()
gbcodes | File "/usr/lib/python3.6/site-packages/flask_mail.py", line 156, in configure_host
gbcodes | host = smtplib.SMTP_SSL(self.mail.server, self.mail.port)
gbcodes | File "/usr/lib/python3.6/smtplib.py", line 1031, in __init__
gbcodes | source_address)
gbcodes | File "/usr/lib/python3.6/smtplib.py", line 251, in __init__
gbcodes | (code, msg) = self.connect(host, port)
gbcodes | File "/usr/lib/python3.6/smtplib.py", line 336, in connect
gbcodes | self.sock = self._get_socket(host, port, self.timeout)
gbcodes | File "/usr/lib/python3.6/smtplib.py", line 1037, in _get_socket
gbcodes | self.source_address)
gbcodes | File "/usr/lib/python3.6/site-packages/gevent/socket.py", line 96, in create_connection
gbcodes | sock.connect(sa)
gbcodes | File "/usr/lib/python3.6/site-packages/gevent/_socket3.py", line 335, in connect
gbcodes | raise error(err, strerror(err))
gbcodes | TimeoutError: [Errno 110] Operation timed out
It turned out my VPS host, Scaleway, was blocking SMTP. Fixed by using port 443, which the SMTP service I use also supports.
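For reference, the change lives in the Flask-Mail settings loaded by app.config.from_pyfile('config.py'). A sketch with hypothetical server values; the key line is MAIL_PORT:

```python
# config.py (sketch) — smtp.example.com is a placeholder for your provider.
# The host blocks standard SMTP ports, so use a port it allows and the
# mail service supports (here 443, over implicit SSL).
MAIL_SERVER = "smtp.example.com"
MAIL_PORT = 443          # instead of 25/465/587
MAIL_USE_SSL = True      # SMTP_SSL connection, matching flask_mail's configure_host
```

If your provider only blocks outbound port 25, an alternative is MAIL_PORT = 587 with MAIL_USE_TLS = True, when the mail service offers STARTTLS there.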

sudo -H -u git gitosis-init < ~/id_rsa.pub | error: no such file or directory

I got this shell command from a blog post about setting up git with gitosis,
but I get a "No such file or directory" error after running it.
[git#209285 ~]$ sudo -H -u git gitosis-init < ~/id_rsa.pub
Traceback (most recent call last):
File "/usr/local/bin/gitosis-init", line 9, in <module>
load_entry_point('gitosis==0.2', 'console_scripts', 'gitosis-init')()
File "/usr/local/lib/python2.7/site-packages/gitosis-0.2-py2.7.egg/gitosis/app.py", line 24, in run
return app.main()
File "/usr/local/lib/python2.7/site-packages/gitosis-0.2-py2.7.egg/gitosis/app.py", line 38, in main
self.handle_args(parser, cfg, options, args)
File "/usr/local/lib/python2.7/site-packages/gitosis-0.2-py2.7.egg/gitosis/init.py", line 138, in handle_args
user=user,
File "/usr/local/lib/python2.7/site-packages/gitosis-0.2-py2.7.egg/gitosis/init.py", line 75, in init_admin_repository
template=resource_filename('gitosis.templates', 'admin')
File "/usr/local/lib/python2.7/site-packages/gitosis-0.2-py2.7.egg/gitosis/repository.py", line 63, in init
close_fds=True,
File "/usr/local/lib/python2.7/subprocess.py", line 522, in call
return Popen(*popenargs, **kwargs).wait()
File "/usr/local/lib/python2.7/subprocess.py", line 710, in __init__
errread, errwrite)
File "/usr/local/lib/python2.7/subprocess.py", line 1335, in _execute_child
raise child_exception
OSError: [Errno 2] No such file or directory
I am puzzled, as the man page says:
-H  The -H (HOME) option sets the HOME environment variable to the home directory of the target user (root by default) as specified in passwd(5). By default, sudo does not modify HOME (see set_home and always_set_home in sudoers(5)).
So the -H option just sets the HOME environment variable to the home directory of the target user as specified in passwd.
However, I specified /home/git as the home directory for the git user in my /etc/passwd file:
apache:x:48:48:Apache:/var/www:/sbin/nologin
git:x:100:101:git version control:/home/git:/bin/bash
duanduan:x:101:500::/home/duanduan:/bin/bash
So why do I still get this message? Or is my understanding of the manual's description incorrect?
Update from the comments:
The same error occurs when specifying an absolute path, so maybe that is not the cause.
sudo -H -u git gitosis-init < /home/git/id_rsa.pub
Traceback (most recent call last):
File "/usr/local/bin/gitosis-init", line 9, in <module>
load_entry_point('gitosis==0.2', 'console_scripts', 'gitosis-init')()
File "/usr/local/lib/python2.7/site-packages/gitosis-0.2-py2.7.egg/gitosis/app.py", line 24, in run
return app.main()
File "/usr/local/lib/python2.7/site-packages/gitosis-0.2-py2.7.egg/gitosis/app.py", line 38, in main
self.handle_args(parser, cfg, options, args)
File "/usr/local/lib/python2.7/site-packages/gitosis-0.2-py2.7.egg/gitosis/init.py", line 138, in handle_args
user=user,
File "/usr/local/lib/python2.7/site-packages/gitosis-0.2-py2.7.egg/gitosis/init.py", line 75, in init_admin_repository
template=resource_filename('gitosis.templates', 'admin')
File "/usr/local/lib/python2.7/site-packages/gitosis-0.2-py2.7.egg/gitosis/repository.py", line 63, in init
close_fds=True,
File "/usr/local/lib/python2.7/subprocess.py", line 522, in call
return Popen(*popenargs, **kwargs).wait()
File "/usr/local/lib/python2.7/subprocess.py", line 710, in __init__
errread, errwrite)
File "/usr/local/lib/python2.7/subprocess.py", line 1335, in _execute_child
raise child_exception
OSError: [Errno 2] No such file or directory
I guess it is because ~ is expanded by bash before being passed to sudo as an argument. Why not try specifying an absolute path for your public key file?
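The expansion behavior the answer describes is easy to check from any shell. A sketch; both lines print the invoking user's home, because ~ (and the `< ~/id_rsa.pub` redirection) are resolved by the calling shell before sudo ever runs:

```shell
# ~ is expanded by the invoking shell, not by sudo or the target user:
echo ~          # the invoking user's home directory
echo "$HOME"    # the same value — both come from the caller's environment
```

So the redirect in `sudo -H -u git gitosis-init < ~/id_rsa.pub` reads from the calling user's home, not git's, regardless of -H.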