Rename images after they're scanned - Python 3

I am trying to write a Python 3 program that renames files in sequential order immediately after they're scanned.
For example: the scanner creates files named Scan_001.jpg, Scan_002.jpg and so on, but if I remove one of the .jpg files, the scanner restarts the counter. I don't want that.
What I want to do is rename the scanned files in sequential order: regardless of what the original file name is, the sequence must keep increasing and never restart the counter.
something like:
counter = 0
if newfile
    rename newfile.jpg to file_<counter>.jpg
    counter++
end
That's the logic I want to follow, but in Python 3, and it must persist the counter for future scanned files.
Sorry, English isn't my mother language.
Thanks for the help, guys!
Edit: I made this simple script:
import os, time
os.rename('teste', time.strftime("%Y%m%d%H%M%S.jpg"))
Now I need to automate this code so it runs on each Scan_xxx.jpg file every time the scanner generates one.
Solution: I set the script to run from the Windows Task Scheduler every minute. It's not the most appropriate approach, but it does the job.
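For reference, a minimal sketch of what that scheduled script could look like, assuming the scans land in C:\Scans (a hypothetical path) and the counter is persisted in a small text file next to the images:
import os
import glob

SCAN_DIR = r"C:\Scans"  # hypothetical scan folder; adjust to the real one
COUNTER_FILE = os.path.join(SCAN_DIR, "counter.txt")

def load_counter():
    # Read the last used number, or start at 0 if the counter file doesn't exist yet.
    try:
        with open(COUNTER_FILE) as f:
            return int(f.read().strip())
    except (FileNotFoundError, ValueError):
        return 0

def save_counter(value):
    with open(COUNTER_FILE, "w") as f:
        f.write(str(value))

counter = load_counter()
# Rename every Scan_*.jpg the scanner has produced since the last run.
for path in sorted(glob.glob(os.path.join(SCAN_DIR, "Scan_*.jpg"))):
    counter += 1
    os.rename(path, os.path.join(SCAN_DIR, "file_{:06d}.jpg".format(counter)))
save_counter(counter)
Because the counter is stored on disk, the numbering keeps increasing across runs even if earlier files are deleted.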

Related

Want to create a specific script for my Raspberry Pi to watch a directory and do some actions

I am a complete newbie at writing scripts; I just started a few days ago and was already able to create simple scripts to find files, move them, delete them, etc.
I have a Raspberry Pi 4 with Raspberry Pi OS installed on it.
Now I want to create a better script, using "inotify" to monitor a specific directory and perform some actions when specific files are found. And I am a bit lost.
Here is what I have found and tested:
MONITORDIR="/my_dir"
inotifywait -m -r -e create --format '%w%f' "${MONITORDIR}" | while read NEWFILE
do
    ...
done
With this, I can generate an action whenever any new file appears in my folder.
What I want:
If a new file whose name contains a specific string (not the complete name, just a part of the file's name) and has a .pdf extension is detected in the directory,
Then move this file to another directory,
And send an email using postfix, including the name of this new file, without the complete path of the file.
Any help with this will be good for me. Since I am a beginner, I know I have a lot to learn, and I am sure I will.
Thank you!
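If a Python 3 approach is acceptable, here is a minimal sketch of that logic using the watchdog library instead of inotifywait (the destination directory, the name fragment, and the email addresses below are assumptions, not taken from the question):
import os
import shutil
import smtplib
import time
from email.message import EmailMessage
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler

WATCH_DIR = "/my_dir"            # directory to monitor (from the question)
DEST_DIR = "/my_dir/processed"   # hypothetical destination directory (must exist)
NAME_PART = "report"             # hypothetical fragment of the file name to match

class PdfHandler(FileSystemEventHandler):
    def on_created(self, event):
        if event.is_directory:
            return
        name = os.path.basename(event.src_path)
        # Only act on .pdf files whose name contains the expected fragment.
        if name.endswith(".pdf") and NAME_PART in name:
            shutil.move(event.src_path, os.path.join(DEST_DIR, name))
            msg = EmailMessage()
            msg["Subject"] = "New file received: " + name
            msg["From"] = "pi@localhost"        # assumed sender
            msg["To"] = "you@example.com"       # assumed recipient
            msg.set_content("The file %s was moved to %s" % (name, DEST_DIR))
            with smtplib.SMTP("localhost") as smtp:  # postfix listening locally
                smtp.send_message(msg)

observer = Observer()
observer.schedule(PdfHandler(), WATCH_DIR, recursive=True)
observer.start()
try:
    while True:
        time.sleep(1)
except KeyboardInterrupt:
    observer.stop()
observer.join()
The same name test, mv, and mail steps can equally be placed inside the inotifywait loop shown above.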

Run a script from the Windows Task Scheduler that creates an Excel file

I need help with this problem. Searching on Google, I found a way to run the R script without error: creating a .bat file with the paths of Rscript.exe and the script I want to run.
My script is very simple: it creates a data frame and saves it to an Excel file.
library(xlsx)
employee <- c('John Doe','Peter Gynn','Jolie Hope')
salary <- c(21000, 23400, 26800)
startdate <- as.Date(c('2010-11-1','2008-3-25','2007-3-14'))
employ.data <- data.frame(employee, salary, startdate)
write.xlsx(employ.data, 'prueba_r_excel.xlsx')
print('final script')
When I run the file manually, it works without problems and creates the Excel file.
But when I put it in the Windows Task Scheduler, it seems to run the whole script without problems (it shows the print output), but it doesn't create the file for me. Does someone know what the problem could be? Do I have to give some kind of special permission to create new files from the Task Scheduler?
At first I thought the Task Scheduler could not create new files, because I saw that it doesn't have write permissions. But later I found the error: in the part of the script where the file is saved, you have to give the entire path.
The problem was that when I ran the R script manually, the Excel file was created in that same folder, but when executing it from the Task Scheduler it was also created, just in another folder (System32, the Task Scheduler's default working directory).
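The same pitfall applies to the Python rename script in the first question on this page: when the Task Scheduler launches it, the working directory is not the script's folder. A minimal sketch of building absolute paths from the script's own location (using the file names from that question):
import os
import time

# Resolve the folder this script lives in, so the result does not depend on
# whatever working directory the Task Scheduler starts the process in.
SCRIPT_DIR = os.path.dirname(os.path.abspath(__file__))
old_path = os.path.join(SCRIPT_DIR, "teste")
new_path = os.path.join(SCRIPT_DIR, time.strftime("%Y%m%d%H%M%S.jpg"))
os.rename(old_path, new_path)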

Python ImportError when executing through Windows Scheduler

I did some searches for this topic and found some prior threads, but I did not understand any of them as I am still a total beginner in Python.
I have a Python script which has some long string variables stored in various .py files in a sub-directory. I'm importing the .py files from that sub-directory when I run the script. There is a __init__.py file in the sub-directory. The only reason I'm using this setup is that the long string variables which I'm storing in those other files would make the code very difficult to read as they are SQL strings and can span 50-100 lines each.
Everything works perfectly when I run this script through PyCharm.
However, when I run the script through Windows Scheduler or a batch file, I get an ImportError for all of the .py files in the sub-directory. The problem is definitely related to the python script not knowing where to look for those .py files when it's run through Windows Scheduler. But I'm not sure how to fix it.
The action for the scheduler task is to run the python exe
D:\Python35\python.exe
with the argument as the script
D:\python\tableaudatasourcebuilds\dcitechnicalperformance\dcitechnicalperformance0.py
So the full action looks like:
D:\Python35\python.exe "D:\python\tableaudatasourcebuilds\dcitechnicalperformance\dcitechnicalperformance0.py"
The subdirectory which stores the long string variables .py files is:
D:\python\tableaudatasourcebuilds\dcitechnicalperformance\dcitechnicalperformance0\
The imports look like:
from dcitechnicalperformance.dcitechnicalperformance0.dciquer import nzsqldciwk
Does anyone know how to address this problem? Any help is much appreciated.
Good afternoon,
First of all, I don't know how much sense it makes to store long SQL queries in a module. I'm not by any means an expert, but something like a JSON file (or even storing them in a table inside the database) seems like a better approach.
About your problem, I think it lies in the current directory the task is launched from. Let me explain:
In PyCharm, when you run the code, it launches from the location of the file, and so it's able to find the directory with the module.
With the scheduled task, it may be launching in another directory, and so it's unable to find the module because that directory is not on the path.
If you decide to stick with your approach, a plausible solution would be to create a .bat file that changes to the project location:
@ECHO OFF
D:
cd D:\python\tableaudatasourcebuilds\dcitechnicalperformance\
D:\Python35\python.exe dcitechnicalperformance0.py
And that should work.
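Alternatively, a sketch of making the entry script itself independent of the working directory, by putting the directory that contains the dcitechnicalperformance package on sys.path before importing (this assumes the layout described in the question):
import os
import sys

# dcitechnicalperformance0.py lives in ...\tableaudatasourcebuilds\dcitechnicalperformance\,
# so the directory that must be importable is its parent, ...\tableaudatasourcebuilds.
PACKAGE_PARENT = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
if PACKAGE_PARENT not in sys.path:
    sys.path.insert(0, PACKAGE_PARENT)

from dcitechnicalperformance.dcitechnicalperformance0.dciquer import nzsqldciwk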

How to retrieve files generated in the past 120 minutes in Linux and move them to another location

For one of my projects, I have a challenge where I need to take all the reports generated in a certain path, and I want this to be an automated process on Linux. I know how to get the file names which have been updated in the past 120 minutes, but not the files directly. My requirements are as follows:
1. Take the files that have been updated in the past 120 minutes from the path
/source/folder/which/contains/files
2. Do some business logic on these generated files, which I can take care of
3. Move these files to
/destination/folder/where/files/should/go
I know how to achieve #2 and #3 but am not sure about #1. Can someone help me achieve this?
Thanks in Advance.
Write a shell script. Sample below. I haven't provided the commands to get the actual list of file names as you said you know how to do that.
#!/bin/sh
files=<my file list>
for file in $files; do
    cp "$file" <destination_directory>
done
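If a Python 3 version is acceptable, here is a minimal sketch of the whole flow, including part #1: it selects files by modification time and moves them (the paths are the ones given in the question; the business-logic step is left as a placeholder):
import os
import shutil
import time

SOURCE_DIR = "/source/folder/which/contains/files"
DEST_DIR = "/destination/folder/where/files/should/go"
WINDOW_SECONDS = 120 * 60  # files updated within the past 120 minutes

now = time.time()
for entry in os.scandir(SOURCE_DIR):
    if entry.is_file() and now - entry.stat().st_mtime <= WINDOW_SECONDS:
        # business logic on entry.path would go here
        shutil.move(entry.path, os.path.join(DEST_DIR, entry.name))
In plain shell, find's -mmin -120 test performs the same selection.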

Unload a file from an FTP server and rename it on the host

I have one file delivered to an FTP server daily. This file doesn't have the same name every day; the name contains the date and the hour of its creation. For example, today the file is named 20130814_XX_YY_20130814152345, created at 15:23:45, and tomorrow the file might be named 20130815_XX_YY_20130815152421. The _XX_YY_ part is always the same, but the date and hour change every day.
I want to create a host JCL job that gets this file with its variable name and renames it to a host file. How can I do this?
Thank you
Regards
Chuchito
STEP 1: You can use LS in FTP and write its output to disk, so you have a file with the file name in it. Then GET that file.
STEP 2: Process the contents of your file to generate the FTP control cards (at least for the GET). The GET generated will be of the form GET 20130814_XX_YY_20130814152345 'HLQ.MAINFRAM.DATASET', where the server file name has come from the file GETted in STEP 1, and the local (mainframe) file can be hard-coded, or supplied to the generation step if flexibility is required.
STEP 3: Run FTP again with the control card(s) generated.
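For comparison, outside the JCL context, the same list-then-get idea can be sketched in Python 3 with ftplib (the host, credentials, and local file name below are assumptions, not from the question):
from ftplib import FTP

ftp = FTP("ftp.example.com")   # hypothetical server
ftp.login("user", "password")  # hypothetical credentials

# STEP 1 equivalent: list the directory and keep the name matching the fixed part.
names = ftp.nlst()
target = next(name for name in names if "_XX_YY_" in name)

# STEP 2/3 equivalent: GET that file and store it under a fixed local name.
with open("daily_file.dat", "wb") as out:
    ftp.retrbinary("RETR " + target, out.write)
ftp.quit()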
Isn't there anything in the Spec?
Sometimes we create complexities where an "out of the box" solution simplifies life considerably.
After the post was updated, I now understand the problem a bit better.
If the name is required to be so specific, then the other suggested solution (if I understand it) is to have a fixed file name on the server that contains a list of file names to be uploaded.
In fact, the server could create a fixed file name that is really the JCL to run on the mainframe! This file would include the //SYSIN DD * and GET commands. The mainframe uploads this file and submits it as-is to the job reader, which then runs on the mainframe. The last step of this job (created by the server, but run on the mainframe) is to FTP an empty JCL file back to the server; this way the server "knows" that the mainframe has uploaded the files.
Alternatively, why does the non-z/OS system need to name the file with time information? If the mainframe processes the file daily, then the date should be sufficient.
With this change, the mainframe can reliably predict the file name for the day, generate the appropriate GET command, and run.
With a job scheduler, it would be easy to run the upload to the mainframe twice a day. This might address any concerns that are expressed in the desire to include a time in the file's name.
Run a Rexx step via a Background TSO step.
You can then run a listcat to get all the files. You could either write the listcat output to a file and read it in, or trap the output via the Address command or the OutTrap function.
Then use the standard TSO Rename command.
Alternatively, you could run a background ISPF Rexx program and use the ISPF equivalents to get the file name.
(1) The real solution to this should be through a scheduling tool for Mainframe jobs. These tools provide capabilities to take care of formatting like the one you described.
(2) Alternatives: REXX and COBOL
(3) If you prefer not to use REXX, here's a brief outline of how you could create the JCL dynamically using COBOL:
A COBOL program that would read a "template" JCL.
Using INSPECT / REPLACE, you could substitute the prototypes with the string that is populated with the date of your choice (you could supply this as a simple SYSIN parm too, if you want the COBOL code to be flexible on the date selection)
Now that your formatted JCL is ready, you could write it to the output stream
//OUTFILE DD SYSOUT=(INTRDR,)
or
//OUTFILE DD SYSOUT=(,INTRDR)
Anything that is written to INTRDR (the Internal Reader) goes straight to JES to submit your job!
Hope this helps.
