There is a daemon that connects to a remote server, fetches files from it, and then performs operations on them. It picks up some files without any problems, but it raises an error on one of them, and I can't figure out why. The file extensions are identical.
The error occurs on a file that was previously saved to the save folder. When there were no other files in the save folder, everything worked correctly. How do I get around this error so that sftp_client will update the file if it is already in the directory?
....
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
try:
    ssh.connect(hostname=ip, username=user_name, password=passwd)
except Exception:
    logger.info("Server is not available, check that the data entered is correct")
    sys.exit()
sftp_client = ssh.open_sftp()
path_serv = '/home/lacit/defect_storage/'  # Path where files are searched for
path_serv_file = sftp_client.listdir_attr(path=path_serv)  # All files at this path
if path_serv_file:
    for entry in path_serv_file:
        checking_number = os.path.splitext(entry.filename)[0]  # Split off .png
        try:
            id_file_name = int(checking_number)  # Check that the name is a number
            logger.info(f"Name of scan - {id_file_name}")
        except ValueError:
            logger.info(f"The scan name must have a numeric value \n Name of scan : {checking_number}")
            sys.exit()
        sftp_client.get(path_serv + entry.filename, '/home/ubuntu/media/' + entry.filename)
        # Save the file path
        path_file_save = '/home/ubuntu/media/' + entry.filename
        # Date-of-update variable
        update_date = datetime.datetime.now()
        logger.info(f"Try to save or update data for leather - {id_file_name}")
        Defect.objects.update_or_create(id_leather=id_file_name,
                                        defaults={'update': update_date,
                                                  'result_auto': None,
                                                  'result_manual': None,
                                                  'path': path_file_save})
        logger.info("Try to call defection_start")
....
The error occurs on the line sftp_client.get(path_serv + entry.filename, '/home/ubuntu/media/' + entry.filename).
Screenshot from the console:
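Since the actual exception text is only in the screenshot, here is a minimal workaround sketch under the assumption that get() fails because the old copy in /home/ubuntu/media/ is not writable by the daemon; the os.access check and the remove are my own additions, not part of the original code:

local_file = '/home/ubuntu/media/' + entry.filename
# Assumption: the stale local copy is read-only or owned by another user, so get() cannot overwrite it.
if os.path.exists(local_file) and not os.access(local_file, os.W_OK):
    os.remove(local_file)
sftp_client.get(path_serv + entry.filename, local_file)

If the remove itself fails with a permission error, the problem is the permissions on /home/ubuntu/media/ rather than on the file.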
I am working on an assignment where I open a syslog for a server program (called ticky) that creates logs and errors, assign the errors to a dictionary, and export them to a CSV file to sort and host on a webpage.
I am unsure how to access the syslog file: the course only covered sys.argv, and I don't know if that can be used here or if I need to figure out how to use the syslog module. Once the log is opened, the regex should pull the error message and add it to the dictionary, either creating a new entry or incrementing the value of an existing key.
Am I on the right track?
#!/usr/bin/env python3
import re
import sys

errors = {}

# log line format
# Jun 1 11:06:48 ubuntu.local ticky: ERROR: Connection to DB failed (username)
logfile = sys.argv[1]
# NOTE: Check to find correct log file
with open(logfile) as f:
    for line in f:
        if "ERROR:" not in line:
            continue
        # search for the error message (and the username in parentheses)
        regex_error = r"ERROR: (.+) \((\w+)\)"
        error = re.search(regex_error, line)
        if error is None:
            continue
        name = error[1]
        errors[name] = errors.get(name, 0) + 1
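For the CSV export step, this is roughly what I had in mind; the errors.csv filename and the Error/Count headers are placeholders I made up, not part of the assignment:

import csv

# Sort the collected errors by how often they occurred, most frequent first.
sorted_errors = sorted(errors.items(), key=lambda item: item[1], reverse=True)
with open("errors.csv", "w", newline="") as csvfile:
    writer = csv.writer(csvfile)
    writer.writerow(["Error", "Count"])
    writer.writerows(sorted_errors)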
I am trying to download a huge zip file (~9 GB zipped and ~130 GB unzipped) from an FTP server with Python using the ftplib library, but unfortunately, when using the retrbinary method, it creates the file in my local directory but does not write into it. After the code runs for a while, I get a timeout error. It used to work fine before, but when I tried to go deeper into the use of sockets with this code, it stopped working. Since the files I am trying to download are huge, I want more control over the connection to prevent timeout errors while downloading. I am not very familiar with sockets, so I may have misused them. I have been searching online but did not find any problems like this. (I tried with smaller files for testing too, but I still have the same issues.)
Here are the two functions I tried; both have problems (method 1 does not write to the file, method 2 downloads the file but I can't unzip it).
import time
import socket
import ftplib
import threading
from zipfile import ZipFile

# To complete
filename = ''
local_folder = ''
ftp_folder = ''
host = ''
user = ''
mp = ''

# timeout error in method 1
def downloadFile_method_1(filename, local_folder, ftp_folder, host, user, mp):
    try:
        ftp = ftplib.FTP(host, user, mp, timeout=1600)
        ftp.set_debuglevel(2)
    except ftplib.error_perm as error:
        print(error)
    with open(local_folder + '/' + filename, "wb") as f:
        ftp.retrbinary("RETR" + ftp_folder + '/' + filename, f.write)

# method 2 works to download the zip file, but header error when unzipping it
def downloadFile_method_2(filename, local_folder, ftp_folder, host, user, mp):
    try:
        ftp = ftplib.FTP(host, user, mp, timeout=1600)
        ftp.set_debuglevel(2)
        sock = ftp.transfercmd('RETR ' + ftp_folder + '/' + filename)
    except ftplib.error_perm as error:
        print(error)

    def background():
        f = open(local_folder + '/' + filename, 'wb')
        while True:
            block = sock.recv(1024*1024)
            if not block:
                break
            f.write(block)
        sock.close()

    t = threading.Thread(target=background)
    t.start()
    while t.is_alive():
        t.join(60)
        ftp.voidcmd('NOOP')

def unzip_file(filename, local_folder):
    local_filename = local_folder + '/' + filename
    with ZipFile(local_filename, 'r') as zipObj:
        zipObj.extractall(local_folder)
And the error I get for method 1:
ftplib.error_temp: 421 Timeout - try typing a little faster next time
And the error I get when I try to unzip after using method 2:
zipfile.BadZipFile: Bad magic number for file header
Also, regarding this code, if anyone could explain what these setsockopt calls do, that would be helpful too:
ftp.sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
ftp.sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 75)
ftp.sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 60)
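From what I understand (this is only my reading of it, so please correct me if I'm wrong), these calls enable TCP keepalive probes on the FTP control socket: SO_KEEPALIVE switches keepalive on, TCP_KEEPIDLE is how many seconds the connection may sit idle before the first probe is sent, and TCP_KEEPINTVL is the interval in seconds between the following probes (the TCP_* option names are Linux-specific). Applied to my case it would look something like this sketch, which I have not tested; the function name is mine:

def downloadFile_keepalive(filename, local_folder, ftp_folder, host, user, mp):
    ftp = ftplib.FTP(host, user, mp, timeout=1600)
    # Ask the OS to keep the otherwise idle control connection alive
    # while the data connection does the long transfer.
    ftp.sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    ftp.sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 60)    # first probe after 60s idle
    ftp.sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 75)   # then a probe every 75s
    with open(local_folder + '/' + filename, 'wb') as f:
        ftp.retrbinary('RETR ' + ftp_folder + '/' + filename, f.write)
    ftp.quit()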
Thanks for your help.
I am running a script that acts as a server and allows two clients to connect to it; one specific client sends a message to the server, the server modifies it, and then sends it to the other client.
This appears to work, as the receiving client acknowledges that the input was received and is valid. This is a script that I intend to run continuously.
However, a big issue is that my /tmp/ directory is filling up with directories named _M... (the ellipsis representing a random string) that contain Python modules (such as cryptography, which, as far as I'm aware, I'm not using) and timezone information (quite literally every timezone that Python supports). It seems to be creating them very frequently, but I can't identify what exactly in the process is doing this.
I have created a working cleanup bash script that removes files older than 5 minutes from the directory every 5 minutes; however, I cannot guarantee that, when I duplicate this process for other devices, the directories will have the same name formatting. Rather than create a unique bash script for each process, I'd rather be able to clean up the directories from within the Python script, or even better, prevent the directories from being created at all.
The problem is, I'm not certain of how this is accomplished, and I do not see anything on SO regarding what is creating these directories, nor how to delete them.
The following is my script
import time, socket, os, sys, re, select

IP = '192.168.109.8'
PORT = [3000, 3001]
PID = str(os.getpid())
PIDFILE = "/path/to/pidfile.pid"
client_counter = 0
sockets_list = []

def runCheck():
    if os.path.isfile(PIDFILE):
        return False
    else:
        with open(PIDFILE, 'w') as pidfile:
            pidfile.write(PID)
        return True

def openSockets():
    for i in PORT:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        s.bind((IP, i))
        s.listen(1)
        sockets_list.append(s)

def receiveMessage(client_socket):
    try:
        message = client_socket.recv(2048).decode('utf-8')
        if not message:
            return False
        message = str(message)
        return message
    except:
        return False

def fixString(local_string):
    #processes
    return local_string

def main():
    try:
        openSockets()
        clients = {}
        print(f'Listening for connections on {IP}:{PORT[0]} and {PORT[1]}...')
        client_count = 0
        while True:
            read_sockets, _, exception_sockets = select.select(sockets_list, [], sockets_list)
            for notified_socket in read_sockets:
                if notified_socket == sockets_list[0] or notified_socket == sockets_list[1]:
                    client_socket, client_address = sockets_list[client_count].accept()
                    client_count = (client_count + 1) % 2
                    sockets_list.append(client_socket)
                    clients[client_socket] = client_socket
                    print('Accepted new connection from: {}:{}'.format(*client_address))
                else:
                    message = receiveMessage(notified_socket)
                    if message is False:
                        continue
                    message = fixString(message)
                    for client_socket in clients:
                        if client_socket != notified_socket:
                            if message != "N/A":
                                client_socket.send(bytes(message, "utf-8"))
            for notified_socket in exception_sockets:
                sockets_list.remove(notified_socket)
                del clients[notified_socket]
            time.sleep(1)
    except socket.timeout:
        for i in sockets_list:
            i.close()
        os.remove(PIDFILE)
        sys.exit()
    except Exception as e:
        for i in sockets_list:
            i.close()
        err_details = 'Error in line {}: {} {}'.format(sys.exc_info()[-1].tb_lineno, type(e).__name__, e)
        os.remove(PIDFILE)
        print("Exception: {}".format(err_details))
        sys.exit()

if __name__ == "__main__":
    if runCheck():
        main()
    else:
        pass
How might I set it up so that the python script will delete the directories it creates in the /tmp/ directory, or better, to not create them in the first place? Any help would be greatly appreciated.
As it turns out, it was PyInstaller that was generating these files. The documentation states that PyInstaller creates this _MEI directory when an executable built in single-file mode runs, and that it is supposed to delete it as well, but for some reason it didn't.
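If anyone else wants to clean these up from inside the script itself, a sketch along these lines should work for a PyInstaller one-file build (sys._MEIPASS is the extraction directory of the currently running executable, so it is skipped; note that this would also delete _MEI directories belonging to other PyInstaller apps that are still running):

import os
import shutil
import sys
import tempfile

def clean_stale_mei_dirs():
    # Directory PyInstaller extracted for *this* run; None when not running frozen.
    current = getattr(sys, '_MEIPASS', None)
    tmp = tempfile.gettempdir()
    for name in os.listdir(tmp):
        path = os.path.join(tmp, name)
        # PyInstaller one-file builds extract into /tmp/_MEIxxxxxx.
        if name.startswith('_MEI') and os.path.isdir(path) and path != current:
            shutil.rmtree(path, ignore_errors=True)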
I am trying to copy contents of a remote directory to my local directory using pysftp.
Here's the code:
import pysftp

cnopts = pysftp.CnOpts()
cnopts.hostkeys = None
p = pysftp.Connection("10.2.2.99", username="user",
                      password="password", cnopts=cnopts)
remote_path = '/cradius-data/files/webapps/vcm/somedirectory'
local_path = 'E:\\New Folder\\FTP Download Folder'
p.get_r(remotedir=remote_path,localdir=local_path)
I get the following File Not Found error message,
No such file or directory: 'E:\\New Folder\\FTP Download Folder\\./cradius-data/files/webapps/vcm/somedirectory/SOME_File.ZIP'
It seems that the two paths are being concatenated for some reason, which leads to an incorrect FileNotFound exception.
Also, I've verified that the file is present in the remote directory.
Any help is much appreciated.
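One workaround sketch that avoids get_r entirely (not verified against this exact server): recurse by hand, building remote paths with posixpath and local paths with os.path.join, so the absolute remote path never gets appended to the Windows path. get_r_portable is a name I made up:

import os
import posixpath
import stat

def get_r_portable(sftp, remotedir, localdir):
    # Recursively download remotedir into localdir.
    os.makedirs(localdir, exist_ok=True)
    for entry in sftp.listdir_attr(remotedir):
        remotepath = posixpath.join(remotedir, entry.filename)
        localpath = os.path.join(localdir, entry.filename)
        if stat.S_ISDIR(entry.st_mode):
            get_r_portable(sftp, remotepath, localpath)
        else:
            sftp.get(remotepath, localpath)

get_r_portable(p, remote_path, local_path)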
It is because of the '\\' in the path string from Windows; if you are copying from Windows to Linux, this problem occurs. I had the same problem. I rewrote the path strings, removing the extra '\' characters, and everything worked fine. You can use my code: include DirectoryCopy.py in your script and use it as it is used in script.py below.
DirectoryCopy.py:
import paramiko
import os
import socket
from stat import S_ISDIR

class SSHSession(object):
    # Usage:
    # Detects DSA or RSA from key_file, either as a string filename or a
    # file object. Password auth is possible, but I will judge you for
    # using it. So:
    # ssh=SSHSession('targetserver.com','root',key_file=open('mykey.pem','r'))
    # ssh=SSHSession('targetserver.com','root',key_file='/home/me/mykey.pem')
    # ssh=SSHSession('targetserver.com','root','mypassword')
    # ssh.put('filename','/remote/file/destination/path')
    # ssh.put_all('/path/to/local/source/dir','/path/to/remote/destination')
    # ssh.get_all('/path/to/remote/source/dir','/path/to/local/destination')
    # ssh.command('echo "Command to execute"')
    def __init__(self, hostname, username='root', key_file=None, password=None):
        #
        # Accepts a file-like object (anything with a readlines() function)
        # in either dss_key or rsa_key with a private key. Since I don't
        # ever intend to leave a server open to a password auth.
        #
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self.sock.connect((hostname, 22))
        self.t = paramiko.Transport(self.sock)
        self.t.start_client()
        #keys = paramiko.util.load_host_keys(os.path.expanduser('~/.ssh/known_hosts'))
        key = self.t.get_remote_server_key()
        # supposed to check for key in keys, but I don't much care right now to find the right notation
        key_file = None
        if key_file is not None:
            if isinstance(key, str):
                key_file = open(key, 'r')
            key_head = key_file.readline()
            key_file.seek(0)
            if 'DSA' in key_head:
                keytype = paramiko.DSSKey
            elif 'RSA' in key_head:
                keytype = paramiko.RSAKey
            else:
                raise Exception("Can't identify key type")
            pkey = keytype.from_private_key(key_file)
            self.t.auth_publickey(username, pkey)
        else:
            if password is not None:
                self.t.auth_password(username, password, fallback=False)
            else:
                raise Exception('Must supply either key_file or password')
        self.sftp = paramiko.SFTPClient.from_transport(self.t)

    def command(self, cmd):
        # Breaks the command by lines, sends and receives
        # each line and its output separately
        #
        # Returns the server response text as a string
        chan = self.t.open_session()
        chan.get_pty()
        chan.invoke_shell()
        chan.settimeout(20.0)
        ret = ''
        try:
            ret += chan.recv(1024).decode('utf-8', 'replace')
        except:
            chan.send('\n')
            ret += chan.recv(1024).decode('utf-8', 'replace')
        for line in cmd.split('\n'):
            chan.send(line.strip() + '\n')
            ret += chan.recv(1024).decode('utf-8', 'replace')
        return ret

    def put(self, localfile, remotefile):
        # Copy localfile to remotefile, overwriting or creating as needed.
        self.sftp.put(localfile, remotefile)

    def put_all(self, localpath, remotepath):
        # recursively upload a full directory
        os.chdir(os.path.split(localpath)[0])
        parent = os.path.split(localpath)[1]
        #print('Parent dir:',parent)
        for walker in os.walk(parent):
            #print("walker:",walker)
            try:
                #print("Directory created with path:",os.path.join(remotepath,walker[0]))
                # temp path for the directory, changing all the '\' to '/'
                direcTemp = os.path.join(remotepath, walker[0])
                direcTemp = direcTemp.replace('\x5c', '/')
                #self.sftp.mkdir(os.path.join(remotepath,walker[0]))
                self.sftp.mkdir(direcTemp)
            except:
                pass
            for file in walker[2]:
                # in order to transfer the whole directory it is necessary to change all the '\' to '/'
                localTemp = os.path.join(walker[0], file)
                localTemp = localTemp.replace('\x5c', '/')
                remoTemp = os.path.join(remotepath, walker[0], file)
                remoTemp = remoTemp.replace('\x5c', '/')
                # print('Local windows path:',localTemp)
                print('Remo Linux path:', remoTemp)
                #self.put(localTemp,os.path.join(remotepath,walker[0],file))
                self.put(localTemp, remoTemp)

    def get(self, remotefile, localfile):
        # Copy remotefile to localfile, overwriting or creating as needed.
        self.sftp.get(remotefile, localfile)

    def sftp_walk(self, remotepath):
        # Kind of a stripped down version of os.walk, implemented for
        # sftp. Tried running it flat without the yields, but it really
        # chokes on big directories.
        path = remotepath
        files = []
        folders = []
        for f in self.sftp.listdir_attr(remotepath):
            if S_ISDIR(f.st_mode):
                folders.append(f.filename)
            else:
                files.append(f.filename)
        print(path, folders, files)
        yield path, folders, files
        for folder in folders:
            new_path = os.path.join(remotepath, folder)
            for x in self.sftp_walk(new_path):
                yield x

    def get_all(self, remotepath, localpath):
        # recursively download a full directory
        # Harder than it sounded at first, since paramiko won't walk
        #
        # For the record, something like this would generally be faster:
        # ssh user@host 'tar -cz /source/folder' | tar -xz
        self.sftp.chdir(os.path.split(remotepath)[0])
        parent = os.path.split(remotepath)[1]
        try:
            os.mkdir(localpath)
        except:
            pass
        for walker in self.sftp_walk(parent):
            try:
                os.mkdir(os.path.join(localpath, walker[0]))
            except:
                pass
            for file in walker[2]:
                self.get(os.path.join(walker[0], file), os.path.join(localpath, walker[0], file))

    def write_command(self, text, remotefile):
        # Writes text to remotefile, and makes remotefile executable.
        # This is perhaps a bit niche, but I was thinking I needed it.
        # For the record, I was incorrect.
        self.sftp.open(remotefile, 'w').write(text)
        self.sftp.chmod(remotefile, 0o755)
script.py:
import paramiko
from DirectoryCopy import SSHSession

def creat_ssh_connection(host, port, username, password, keyfilepath, keyfiletype):
    ssh = None
    key = None
    try:
        if keyfilepath is not None:
            # Get private key used to authenticate user.
            if keyfiletype == 'DSA':
                # The private key is a DSA type key.
                key = paramiko.DSSKey.from_private_key_file(keyfilepath)
            else:
                # The private key is an RSA type key.
                key = paramiko.RSAKey.from_private_key_file(keyfilepath)
        # Create the SSH client.
        ssh = paramiko.SSHClient()
        # Setting the missing host key policy to AutoAddPolicy will silently add any missing host keys.
        # Using WarningPolicy, a warning message will be logged if the host key is not previously known,
        # but all host keys will still be accepted.
        # Finally, RejectPolicy will reject all hosts whose key is not previously known.
        ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        # Connect to the host.
        if key is not None:
            # Authenticate with a username and a private key located in a file.
            ssh.connect(host, port, username, None, key)
        else:
            # Authenticate with a username and a password.
            ssh.connect(host, port, username, password)
        #console = ssh.invoke_shell()
        #console.keep_this = ssh
        return ssh
    except:
        print("exception")

ssh = SSHSession('ipaddres', 'user_name', password='password_string')
# directory_win is copied inside the directory_lin folder.
ssh.put_all('../../windows/example/directory_win', '/linux/example/directory_lin/')
I am creating a tool that gives an overview of hundreds of test results. This tool accesses a log file and checks for Pass and Fail verdicts. When there is a fail, I need to go back to previous lines of the log to capture the cause of failure.
linecache.getline works in my workspace (Python run via Eclipse). But after I created a Windows installer (.exe file) and installed the application on my computer, linecache.getline returns nothing. Is there something I need to add to my setup.py file to fix this, or is it an issue in my code?
Tool Code
precon:
from wx.FileDialog, access the log file
self.result_path = dlg.GetPath()
try:
    with open(self.result_path, 'r') as file:
        self.checkLog(self.result_path, file)

def checkLog(self, path, f):
    line_no = 1
    index = 0
    for line in f:
        n = re.search("FAIL", line, re.IGNORECASE) or re.search("PASS", line, re.IGNORECASE)
        if n:
            currentline = re.sub(r'\s+', ' ', line.rstrip())
            finalresult = currentline
            self.list_ctrl.InsertStringItem(index, finaltestname)
            self.list_ctrl.SetStringItem(index, 1, finalresult)
            if currentline == "FAIL":
                fail_line1 = linecache.getline(path, int(line_no - 3))  # Get reason of failure
                fail_line2 = linecache.getline(path, int(line_no - 2))  # Get reason of failure
                cause = fail_line1.strip() + " " + fail_line2.strip()
                self.list_ctrl.SetStringItem(index, 2, cause)
            index += 1
        line_no += 1
The issue was resolved by using the get_line function from this link:
Python: linecache not working as expected?
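I don't have the function from that link verbatim, but the idea, as I understand it, is to read the requested line straight from the file instead of going through linecache (which was returning nothing in the frozen build). A minimal sketch of such a replacement; get_line is a name of my choosing:

def get_line(path, line_no):
    # Return line number line_no (1-based) of the file at path, or '' if out of range.
    if line_no < 1:
        return ''
    with open(path, 'r') as f:
        for i, line in enumerate(f, start=1):
            if i == line_no:
                return line
    return ''

The two linecache.getline(path, ...) calls in checkLog would then become get_line(path, line_no - 3) and get_line(path, line_no - 2).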