When I run 2to3.py -w my_script.py, it converts my_script.py to Python 3 and saves the original version as my_script.py.bak.
I want the old file to remain as is, and the converted file to go into a new file, like my_script.converted.py. Is there a 2to3.py argument that allows this?
It turns out there are several options for this (sketched below):
Copy the file first to a new location, then run 2to3 -w -n, which modifies the file in place (-w) without making a backup (-n)
2to3 -n -w -o desired/path/to/new/dir specifies an output directory (-o) and disables backups (-n); -w is still needed so the converted file actually gets written
2to3 -n -W --add-suffix=3 will put the converted file in the same location, but with a suffix appended to the name (--add-suffix=; -W forces writing), without making a backup (-n)
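A minimal sketch of each option, using my_script.py from the question (the copy name and the converted_dir directory are just illustrative):

# Option 1: convert a copy in place
cp my_script.py my_script.converted.py
2to3 -w -n my_script.converted.py

# Option 2: write the converted file into a separate directory
mkdir -p converted_dir          # create it first, in case 2to3 doesn't
2to3 -n -w -o converted_dir my_script.py

# Option 3: keep the original and write my_script.py3 next to it
2to3 -n -W --add-suffix=3 my_script.py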
I want to download all the files in a specific directory of my site.
Let's say I have 3 files in my remote SFTP directory
www.site.com/files/phone/2017-09-19-20-39-15
a.txt
b.txt
c.txt
My goal is to create a local folder on my desktop with ONLY those downloaded files. No parent files or parent directories are needed; I am just trying to get a clean result.
I've tried
wget -m --no-parent -l1 -nH -P ~/Desktop/phone/ www.site.com/files/phone/2017-09-19-20-39-15 --reject=index.html* -e robots=off
I got
I want to get
How do I tweak my wget command to get something like that?
Should I use anything else other than wget ?
Ihue,
Taking a shell-programmatic perspective, I would recommend trying the following command line; note that I've also added a citation so you can see the original thread.
wget -r -P ~/Desktop/phone/ -A txt www.site.com/files/phone/2017-09-19-20-39-15 --reject=index.html* -e robots=off
-r enables recursive retrieval. See Recursive Download for more information.
-P sets the directory prefix where all files and directories are saved to.
-A sets a whitelist for retrieving only certain file types. Strings and patterns are accepted, and both can be used in a comma separated list. See Types of Files for more information.
Ref: #don-joey
https://askubuntu.com/questions/373047/i-used-wget-to-download-html-files-where-are-the-images-in-the-file-stored
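If you also want the .txt files to land directly in ~/Desktop/phone/ rather than inside a recreated remote directory tree, a variant worth trying (a sketch using the same URL) adds wget's -nd (--no-directories) flag:

wget -r -l1 -nd -np -A txt -P ~/Desktop/phone/ --reject='index.html*' -e robots=off www.site.com/files/phone/2017-09-19-20-39-15/

Here -l1 limits the recursion to one level and -np (--no-parent) keeps wget from wandering up the directory tree.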
I have a file named test:
[test@mypc ~]$ ls -i
4982967 test
Then I use vim to change its content and enter :w to save it.
It now has a different inode:
[test@mypc ~]$ ls -i
4982968 test
That means it's a different file already. Why would Vim save to another file when I used :w to save to the original one?
By contrast, echo appending to the file does not change the inode, which is what I expected:
[test@mypc ~]$ echo v >> test
[test@mypc ~]$ ls -i
4982968 test
Vim is trying to protect you from disk and OS problems. It writes out a complete copy of the file and, once it is satisfied that this has finished properly, renames the copy to the required filename. Hence the new inode number.
If there were a crash during the save process, the original file would remain untouched, possibly saving you from losing the file completely.
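An illustrative shell equivalent of that write-then-rename save strategy (using the test file from the question):

printf 'new contents\n' > test.new   # write a complete copy of the new contents to a temporary file
mv test.new test                     # rename it over the original: "test" now has a new inode
ls -i test

If keeping the inode matters (hard links, files watched by inode, etc.), Vim's 'backupcopy' option is worth looking at; with :set backupcopy=yes Vim overwrites the original file in place instead of renaming a new copy over it.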
Given a path (on my computer), how can I test whether that file is under version control (i.e., a copy exists in the Perforce depot)? I'd like to test this at the command line.
Check p4 help files. In short, you run p4 files <your path here> and it will give you the depot path to that file. If it isn't in the depot, you'll get "no such file(s)".
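For example (the depot path, revision, and change number below are made up):

$ p4 files //depot/main/foo.c
//depot/main/foo.c#3 - edit change 1234 (text)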
For scripting, p4 files FILE is insufficient because it doesn't change its exit code when there is no such file.
Instead, you can pipe its output through grep, which looks for the telltale pair of leading slashes in Perforce depot paths:
# Silently true when the file exists, false when it does not.
p4_exists() {
  p4 files -e "$1" 2>/dev/null | grep -q '^//'
}
You can get rid of the 2>/dev/null and grep's -q if you want visible output.
Before version 2012.1 of p4 (for example, 2011.1), p4 files didn't support -e. In that case, you'll have to add | grep -v ' - delete [^-]*$' before the grep above.
Warning: A future p4 release could change the formatting and break this logic.
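A sketch of that older-version variant, with the extra grep spliced into the function above:

# For p4 releases whose "files" command lacks -e: filter out deleted revisions first.
p4_exists() {
  p4 files "$1" 2>/dev/null | grep -v ' - delete [^-]*$' | grep -q '^//'
}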
Similar to Adam Katz's solution, but more likely to be supported by future releases of p4, you can pass the global option -s, which prepends a descriptive field to each output line. The field is one of 'text', 'info', 'error' or 'exit', followed by a colon (and a space, it seems), and is intended to facilitate scripting.
For all files passed in to the p4 -s files command, you should get one line back per file. If the file exists in the depot, the line starts with info: whereas if the file does not exist in the depot, the line starts with error:. For example:
info: //depot/<depot-path-to-file>
error: <local-path-to-file>
So, essentially, the status lines are the equivalent of an exit code but on a per-file basis. An exit code alone wouldn't cope neatly with an arbitrary number of files passed in.
Note that if there is a different error (e.g. a connection error) then an error line is still output. So, if you really want the processing to be robust, you may want to combine this with what Adam Katz suggested or perhaps grep for the basename of the file in the output line.
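Putting that together, a sketch of the earlier shell function rewritten around p4 -s (it keys off the info:/error: prefixes described above, so the exact message text doesn't matter):

p4_exists() {
  # True when the depot reports the file (the status line starts with "info: //").
  p4 -s files "$1" 2>/dev/null | grep -q '^info: //'
}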
In a bash script invoked from some directory ($PWD), there is a line where I need to call an executable located in $PWD/bin so that it reads an input file located in $PWD/inputfiles and the resulting output files are stored in $PWD/output.
Can this be achieved?
PS: Now if I am at
cd /home/user
I do
./run config.inp output.dat
with config.inp being at /home/user
config.inp reads files data.txt and lines.txt which are in the same directory.
Now I want to read from /home/user/input and write the output files to /home/user/output
and I try
./run input/config.inp
it says
error, data.txt not found
As the problem is described, this will do it:
bin/executable < inputfiles/input > output/output
If the problem is really that bin/executable creates files in the current directory without allowing the user to specify the input and output files, then it will be a little more complicated. What you would probably want to do instead is:
cd output
ln -s ../inputfiles/input
../bin/executable
rm input
This will create a symbolic link to inputfiles/input from within the output directory, and then delete it afterwards. If you want to eliminate the chance of collisions with files in the output directory, then you need to create a temporary directory with something like TMPDIR=$(mktemp -d), do everything there, and then copy the results back to $OLDPWD/output.
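A minimal sketch of that temporary-directory variant (the input and executable names follow the answer above; adjust the final cp to whatever the program actually writes):

workdir=$PWD
tmpdir=$(mktemp -d)
ln -s "$workdir/inputfiles/input" "$tmpdir/"   # expose the input inside the temp directory
( cd "$tmpdir" && "$workdir/bin/executable" )  # run with the temp directory as the working directory
rm "$tmpdir/input"                             # drop the input link before collecting results
cp "$tmpdir"/* "$workdir/output/"              # copy whatever the run produced
rm -rf "$tmpdir"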
I'm trying to apply a patch using two files in different directories. The output file should be in a different directory too. The first file is /var/local/documents/document.xml, the patch file is /var/local/patches/patch.diff, and I want the output file to be created as /var/local/final/final.xml.
When the files are located in the same directory, this command works:
patch document.xml -i patch.diff -o final.xml
But when they are in separate directories and I try to use the following command:
patch /var/local/documents/document.xml -i /var/local/patches/patch.diff -o /var/local/final/final.xml
I get the following error:
(Stripping trailing CRs from patch.)
patching file {file}
Hunk #1 FAILED at 20.
1 out of 1 hunk FAILED -- saving rejects to file {file}
I've read somewhere that I should use -d and -p to work correctly with directories but I have no clue how I should do it...
Yes, it's the -p switch (in your case it should strip 2 leading components from the paths in the patch):
cd /var/local/documents
patch -p 2 -o ../final/final.xml document.xml < ../patches/patch.diff
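The -d switch mentioned in the question also works; a sketch of that variant, letting patch change into the documents directory itself (paths are absolute so they survive the directory change):

patch -d /var/local/documents -i /var/local/patches/patch.diff -o /var/local/final/final.xml document.xml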
Try this:
$ cp /var/local/documents/document.xml /var/local/final/document.xml
$ (cd /var/local/final && patch document.xml) < /var/local/patches/patch.diff
$ mv /var/local/final/document.xml /var/local/final/final.xml