I am making a script that allows me to unzip a given file. My problem is that I don't know how to change into the directory just created by the unzip process.
I tried this command, but it's not working: SITE_DIRECTORY="$(ls -dt */ | head -1)"
Any idea how to get the name of the directory that was just extracted?
Edit: I have now got to SITE_DIRECTORY=$(unzip $SITE_NAME | grep 'creating:' | head -1 | cut -d' ' -f5-)
But a new problem arises: the unzip command does not extract all the files.
New ideas?
If the directory is known, you could
unzip -j yourzip.zip -d /path/to/dir && cd /path/to/dir
Extra info from the man page (-j option):
-j  junk paths. The archive's directory structure is not recreated; all files are deposited in the extraction directory (by default, the current one).
The solution to my problem was the following set of commands:
unzip "$SITE_NAME" > output.txt
SITE_DIRECTORY=$(grep -m1 'creating:' output.txt | cut -d' ' -f5-)
rm output.txt
Thanks go to Evan.
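For reference, the earlier piped attempt most likely truncated the extraction because head -1 exits after its first line and unzip is then killed by SIGPIPE part-way through. A minimal sketch of the same idea without a temporary file (assuming, as above, that the archive's first 'creating:' line names the top-level directory):
# capture unzip's full output first, then parse it for the created directory
unzip_output=$(unzip "$SITE_NAME")
SITE_DIRECTORY=$(printf '%s\n' "$unzip_output" | grep -m1 'creating:' | cut -d' ' -f5-)
cd "$SITE_DIRECTORY"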
Related
I want to diff two files in the same directory in a bash script. To get the full paths of these two files (I need this because the script isn't running in the same directory), I did:
pathToOld=$(ls -Art /dir/path/here | grep somestring | tail -n2 | head -n1)
pathToOld="/dir/path/here/${pathToOld}"
and
pathToNew=$(ls -Art /dir/path/here | grep somestring | tail -n 1)
pathToNew="/dir/path/here/${pathToNew}"
I was able to figure out the above from the following links: link1, link2, link3
If I echo these paths in the .sh script, they come out correctly, like:
>echo "${pathToOld}"
/dir/path/here/oldFile
But when I try to diff the files like so:
diff pathToOld pathToNew
It tells me:
diff: pathToOld: No such file or directory
diff: pathToNew: No such file or directory
How do I make this work?
Btw, I have also tried piping both lines through sed -z 's/\n/ /g' (inspired by this), but that hasn't helped.
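For the record, the immediate cause of the error is that diff receives the literal words pathToOld and pathToNew rather than the variables' values; expanding (and quoting) the variables fixes it:
diff "$pathToOld" "$pathToNew"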
I am new to Linux. I have a folder with many files in it and I need to get the latest file based on the file name. Example: I have 3 files, RAT_20190111.txt, RAT_20190212.txt and RAT_20190321.txt. I need a Linux command to move the latest file here, RAT_20190321.txt, to a specific directory.
If the file pattern remains the same then you can try the below command:
mv $(ls RAT*|sort -r|head -1) /path/to/directory/
As pointed out by @wwn, there is no need to use sort. Since the files are lexicographically sortable, ls already sorts them, so the command becomes:
mv $(ls RAT*|tail -1) /path/to/directory
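If the shell is bash, a glob also expands in sorted order, so parsing ls can be avoided altogether; a small sketch of that variant (assumes at least one RAT_*.txt file exists, and the target path is the same placeholder as above):
# expand the matching names in sorted order and move the last (newest) one
files=(RAT_*.txt)
mv -- "${files[${#files[@]}-1]}" /path/to/directory/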
The following command works.
ls -p | grep -v '/$' | sort | tail -n 1 | xargs -d '\n' -r mv -t /path/to/directory --
The command lists the directory contents with ls -p (so directories get a trailing slash and grep -v '/$' can drop them), sorts the remaining file names, takes the last one, and hands it to mv on a newline-delimited basis; mv -t then moves that file into the required directory.
Hope it helps.
Use the below command:
cp "$(ls | tail -n 1)" /data...
grep -n '[0-9]' test.txt > output.txt
I would like to redirect the above grep results to a new file (not yet created, output2.txt), which needs to be located in a different directory than the directory of test.txt. For example, maybe at nothome/labs/output2.txt. How can I do this?
You can give the output file an absolute path, like this:
grep -n '[0-9]' test.txt > /path/to/output/output.txt
From your posting I guess you might want to create the output path first:
OUTPUT_PATH=/path/to/output
mkdir -p ${OUTPUT_PATH}
grep -n '[0-9]' test.txt > ${OUTPUT_PATH}/output.txt
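Applied to the example location from the question (assuming nothome/labs is a path relative to wherever the script runs):
mkdir -p nothome/labs
grep -n '[0-9]' test.txt > nothome/labs/output2.txt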
The problem:
I want to get all lines of code in my project folder that have ".js", in order to check that I don't have un-minimized JavaScript files.
When I try the following: grep -H ".js\"" *
I get the right lines, but there is still a problem: the results also include lines with ".min.js", which I don't want.
Is it possible, using the grep command, to search my project folder for all files/lines that have ".js" but not ".min.js"?
Thanks.
GalT.
Just pipe the output to another grep, as
grep -H "\.js" * | grep -v "\.min\.js"
You can do this with awk
awk '/\.js/ && !/\.min\.js/' *
To print the filename:
awk '/\.js/ && !/\.min\.js/ {print FILENAME}' *
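If each filename should be printed only once, GNU awk's nextfile statement can move on after the first hit (a small sketch under that assumption):
awk '/\.js/ && !/\.min\.js/ {print FILENAME; nextfile}' *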
The following command searches subfolders as well.
For the current dir you can use this:
find . -type f | xargs grep "\.js" | grep -v "\.min\.js"
For any specific folder:
find /path/to/folder -type f | xargs grep "\.js" | grep -v "\.min\.js"
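A single-command alternative is to let grep recurse itself and filter out the minified hits afterwards (GNU grep; the folder path is a placeholder):
grep -RH '\.js' /path/to/folder | grep -v '\.min\.js'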
My server has been infected with malware. I have upgraded my Linux server to the latest version and no new files are being infected, but I need to clean up all the files now.
I can locate all the files doing the following:
grep -H "gzinflate(base64_decode" /home/website/data/private/assets/ -R | cut -d: -f1
But, I want to now delete the line containing gzinflate(base64_decode in every single file.
I'd use sed -i '/gzinflate(base64_decode/d' to delete those matching lines in a file:
... | xargs -I'{}' sed -i '/gzinflate(base64_decode/d' '{}'
Note: You really want to be using grep -Rl rather than grep -RH ... | cut -d: -f1, as -l lists only the matching filenames, so you don't need to pipe to cut.
Warning: You should really be concerned about the deeper issue of security here, I wouldn't trust the system at all now, you don't know what backdoors are open or what files may still be infected.
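Putting those two pieces together into a single pipeline (same pattern and path as in the question; GNU sed's in-place -i is assumed):
grep -Rl 'gzinflate(base64_decode' /home/website/data/private/assets/ | xargs -I'{}' sed -i '/gzinflate(base64_decode/d' '{}'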
Once you have got these files using your command
grep -H "gzinflate(base64_decode" /home/website/data/private/assets/ -R | cut -d: -f1
you loop through the files one by one and use
grep -v "gzinflate(base64_decode" file > newfile