Can someone tell me the .cmd equivalent command for the shell command find -o -samefile ?
I am trying to loop inside folders to copy files into a different location.
My original shell script is :
for qapi_file in $(find . -type d -iname qapi_export -o -samefile qapi/common -o -samefile core/api); do cp -p $qapi_file/qapi*.h include/qapi/; done
I have tried doing this:
FOR /d %%i IN ( DIR -i qapi_export qapi\common\ core\api) do (xcopy /O %i%\qapi*.h include\qapi)
this does not work as I expect it to. If I use for /d, it loops once and returns no files to copy. On the other hand, if I use for /r, the loop becomes endless and I am unable to break it (I tried using goto :eof for this, but it gives an error: goto was unexpected here).
I am pretty new to this and am unsure how to proceed further. I would appreciate any input in this regard.
Using %%i instead of %i% and looping through all subdirectories with /r will do the trick:
for /r . %%i in (.) do (
xcopy /y/o %%i\qapi_export\qapi*.h include\qapi
xcopy /y/o %%i\qapi\common\qapi*.h include\qapi
xcopy /y/o %%i\core\api\qapi*.h include\qapi
) >nul
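If you only need the directories literally named qapi_export (the find -type d -iname part of the original), you could also enumerate them first with dir /s /b /ad and filter with for /f; a rough, untested sketch (the two fixed paths qapi\common and core\api would still need their own xcopy lines):
for /f "delims=" %%D in ('dir /s /b /ad qapi_export 2^>nul') do (
    xcopy /y /o "%%D\qapi*.h" include\qapi
)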
Please, can someone help me with an alternative for Windows to the following command:
Linux Version:
find Win32_EXE -type f -name '*.json' -exec grep --files-with-match -i 'dbtool' '{}' \;
Windows version:
FIND /i "dbtool" \Win32_EXE*.json
Problem: The Windows syntax lacks an option letting me specify that I want a result returned on the first match.
In Python I would use a recursive glob:
from glob import iglob

def search(file):
    with open(file) as fo:
        for line in fo:
            if "dbtool" in line.lower():
                return True
    return False

for file in iglob("Win32_EXE/**/*.json", recursive=True):
    if search(file):
        print(file)
but I think in general I wouldn't use Python - other solutions are probably faster/better.
e.g. PowerShell:
get-childitem -recurse *.json | where-object { select-string -quiet -pattern "dbtool" -path $_ }
or, if you can install ripgrep, that works well.
rg -l -g "*.json" -i dbtool Win32_EXE
This will find *.json files and search them for the string dbtool. Change the PUSHD directory to your search directory.
PUSHD "C:\src\t"
FOR /F "delims=" %%A IN ('DIR /S /B /A:-D "*.json"') DO (
FIND /I "dbtool" "%%~A"
)
POPD
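As a side note, if only the names of the matching files are wanted (closer to grep --files-with-matches), findstr has /M (print only file names) and /S (recurse into subdirectories); a minimal, untested sketch using the same layout as above:
PUSHD "C:\src\t"
FINDSTR /S /M /I "dbtool" "*.json"
POPD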
I want to delete lines containing the string 'Generation' in all the files under a folder named KCG. I tried the following in the Windows command prompt:
sed -i '/Generation/d' file
But got the following error message:
sed: can't read file: No such file or directory
Next I tried:
sed -i '/Generation/d' airport_related_altitudes_derived_data.c
This worked, but I do not want to enter the filenames of all the files in the folder each time. Is there a command to recursively look for the string in all the files under the folder?
Try this:
find . -type f -exec sed -i '/Generation/d' {} \;
That will recursively find all files ("-type f") under the current directory (".") and call "sed -i '/Generation/d'" on each file ("{}").
A Windows way to enumerate files of a pattern/file type in the current folder:
@Echo off
For %%A in (*.c) Do sed -i '/Generation/d' %%A
Recursive from a start path
For /r "X:\start\path" %%A in (*.c) do sed -i '/Generation/d' %%A
A pure batch way to drop lines containing given words from a file is to use find /v or findstr /v, but it requires a different output file name; there is no in-place editing.
@Echo off
for %%A in (*.c) do findstr /vi "Generation" "%%~A" >"%%~dpnA_new%%~xA"
which creates new files with _new appended before the extension.
It's of course possible to rename the old to .bak and write the changed version to the original name.
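A minimal sketch of that .bak variant (untested; it assumes the .c files sit in the current folder and that no .bak copies exist yet):
@Echo off
rem Snapshot the file list first, then rename each original to .bak and
rem rewrite the original name without the 'Generation' lines.
for /f "delims=" %%A in ('dir /b *.c') do (
    ren "%%~A" "%%~nA.bak"
    findstr /vi "Generation" "%%~nA.bak" >"%%~A"
)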
Due to migrating a batch job to a Linux server, I am having trouble finding the equivalents of the following commands in Linux:
The Y: drive is a mapped drive to the NAS, which is also mounted on the Ubuntu server at /NAS/CCTV. I need to search every subfolder for all .264 files.
The Z: drive is on the Ubuntu server itself; every .mp4 file is just moved here, with no folders. The path on Ubuntu is /Share/CCTV/.
It's just a simple script to convert the CCTV capture .264 format to mp4, move the result to the server for processing, and delete any h264 files and any folder older than 1 day; the script is scheduled to run every 3 minutes.
I have ffmpeg installed on the Ubuntu server; I am just unable to find the equivalent of "for each file in the folders" to do the same.
The same goes for the last forfiles command, which deletes folders older than 1 day.
FOR /r y:\ %%F in (*.h264) do c:\scripts\ffmpeg -i %%F %%F.mp4
FOR /r y:\ %%F in (*.h264) do del %%F
FOR /r y:\ %%G in (*.mp4) do move %%G Z:\
forfiles -p "Y:\" -d -1 -c "cmd /c IF #isdir == TRUE rd /S /Q #path"
I would appreciate any form of help, or a pointer to the right guide so I can rewrite it on the Linux server. I did try searching for "for loop", but everything showed me how to count numbers, so maybe I searched wrongly.
Find all .h264 files (recursively)
find /NAS/CCTV -type f -name '*.h264'
Convert all such files to .mp4
while IFS= read -d '' -r file ; do
    ffmpeg -i "$file" "$file".mp4
done < <(find /NAS/CCTV -type f -name '*.h264' -print0)
Note that this will create files called like filename.h264.mp4. This matches your batch file behavior. If you would prefer to replace the extension, use ffmpeg -i "$file" "${file%.*}".mp4 instead and you will get a name like filename.mp4.
Also move those mp4 files to another directory
while IFS= read -d '' -r file ; do
    ffmpeg -i "$file" "$file".mp4
    if [[ -f $file.mp4 ]] ; then
        mv -f -- "$file".mp4 /Share/CCTV
    fi
done < <(find /NAS/CCTV -type f -name '*.h264' -print0)
Delete old directories (recursively)
find /NAS/CCTV -type d -not -newermt '1 day ago' -exec rm -rf {} +
Documentation.
The find command recursively lists files according to criteria you specify. Any time you need to deal with files in multiple directories or very large numbers of files, it is probably what you want to use. For safety against malicious file names it's important to use -print0 so that file names are delimited by null rather than newline, which then requires the IFS= read -d '' construct to interpret them later.
The while read variable ; do ... done construct reads data from input and assigns each record to the named variable. This allows each matching file to be handled one at a time inside the loop. The insides of the loop should be fairly obvious.
Again find is used to select files, but in this case the files are directories. The test -not -newermt selects files which are not newer (in other words, files which are older) according to their mtime, the modification time; the trailing t means that the next argument is text describing a time. Here you can use any expression understood by GNU date's -d switch, so you can write it in plain English and it will work as expected.
As you embark on your shell scripting journey you should keep two things by your side:
shellcheck - Always run scripts you write through shellcheck to catch basic errors.
Bash FAQ - The bash FAQ at wooledge.org. Most of the answers to questions you have not thought of yet will be here. For example FAQ 15 is highly relevant to this question.
for f in /NAS/CCTV/*.h264; do ffmpeg -i "$f" "$f".mp4; done
rm /NAS/CCTV/*.h264
mv /NAS/CCTV/*.mp4 /Share/CCTV
find /NAS/CCTV/ -type d -ctime +1 -exec rm -rf {} \;
Given there is a secret file deep inside a nested ZIP file, i.e. a zip file inside a zip file inside a zip file, etc...
The zip files are named 1.zip, 2.zip, 3.zip, etc...
We don't know how deep the zip files are nested, but it may be thousands.
What would be the easiest way to loop through all of them up until the last one to read the secret file?
My initial approach would have been to call unzip recursively, but my Bash skills are limited. What are your ideas to solve this?
Thanks Cyrus! The master wizard Shawn J. Goff had the perfect script for this:
while [ "`find . -type f -name '*.zip' | wc -l`" -gt 0 ]; do find -type f -name "*.zip" -exec unzip -- '{}' \; -exec rm -- '{}' \;; done
Here's my 2 cents.
#!/bin/bash
function extract(){
    unzip $1 -d ${1/.zip/} && eval $2 && cd ${1/.zip/}
    for zip in `find . -maxdepth 1 -iname '*.zip'`; do
        extract $zip 'rm $1'
    done
}
extract '1.zip'
Probably not the cleanest way, but that should do the trick:
#!/bin/bash
IDX=1 # ID of your first zip file
while [ 42 ]
do
    unzip $IDX.zip # Extract
    if [[ $? != 0 ]]
    then
        break # Quit if unzip failed (no more files)
    fi
    if [ $IDX -ne 1 ]
    then
        rm $IDX.zip # Remove zip to leave your directory clean
    fi
    (( IDX++ )) # Next file
done
Check out this Java-based utility, nzip, for nested zips.
Extracting and compressing nested zips can be done easily using following commands:
java -jar nzip.jar -c list -s readme.zip
java -jar nzip.jar -c extract -s "C:\project\readme.zip" -t readme
java -jar nzip.jar -c compress -s readme -t "C:\project\readme.zip"
PS. I am the author and will be happy to fix any bugs quickly.
Here is a solution for Windows, assuming 7-Zip is installed in the default location.
@echo off
Setlocal EnableDelayedExpansion
Set source=%1
Set SELF=%~dpnx0
For %%Z in (!source!) do (
set FILENAME=%%~nxZ
)
set FILENAME=%FILENAME:"=%
"%PROGRAMFILES%\7-zip\7z.exe" x -o* -y "%FILENAME%"
REM DEL "%FILENAME%"
rem " This is just to satisfy stackoverflow code formatting!
For %%Z in (!source!) do (
set FILENAME=%%~nZ
)
for %%a in (zip rar jar z bz2 gz gzip tgz tar lha iso wim cab rpm deb) do (
forfiles /P ^"%FILENAME%^" /S /M *.%%a /C "cmd /c if #isdir==FALSE \"%SELF%\" #path"
)
This has been adapted from here https://social.technet.microsoft.com/Forums/ie/en-US/ccd7172b-85e3-4b4a-ad93-5902e0abd903/batch-file-extracting-all-files-from-nested-archives?forum=ITCG
Notes:
The only way to do variable modification using the ~ modifiers is to use a dummy for..in loop. If there is a better way please edit.
~nx modifies the variable to make it a full path+file name.
~dpnx also does the same thing to %0 i.e. gets the full path and filename of the script.
-o* in the 7zip command line allows 7zip to create folder names without the .zip extension like it does when extracting with a right click in the gui.
~n modifies the variable to make it a filename without an extension. i.e. drops the .zip
Note that the escape character (for quotes) in FORFILES /P is ^ (caret) while for the CMD /C it is \. This ensures that it handles path and filenames with spaces also recursively without any problem.
You can remove the REM from the DEL statement if you want the zip file to be deleted after unzipping.
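For reference, a hypothetical invocation: assuming the script above is saved as unznest.bat (a placeholder name) and is run from the folder that holds the outermost archive, it would look something like this:
rem unznest.bat and C:\project are placeholders for your own names
cd /d "C:\project"
unznest.bat 1.zip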
I have the following shell commands for Linux; I need to rewrite them for Windows to be used in the Command Prompt.
find . -name \*.uncompressed.js -type f -delete and find . -name \*.js.map -type f -delete
At the moment I am using
del /s *.uncompressed.js and del /s *.js.map
but this does not work.
Any idea what could be wrong?
You need two separate statements for the Windows command line.
del /s *.uncompressed.js
del /s *.js.map
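As a side note, cmd can also chain the two deletes on one line with &, or pass both masks to a single del; a small sketch:
rem two commands chained with &
del /s *.uncompressed.js & del /s *.js.map
rem or one del with both file masks
del /s *.uncompressed.js *.js.map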