I have several directories structured like this in a parent directory:
/app/bpp/cpp/dpp/ASM/Report
/ghh/hhh/hhh/ASM/Report
/hh/ASM/Report
As shown above, every ASM directory contains a Report directory along with other subdirectories and files. I want to build a separate directory tree that keeps, for each match, only the immediate parent of ASM, the ASM directory itself, and the Report directory inside it. The result should look like this:
/dpp/ASM/Report
/hhh/ASM/Report
/hh/ASM/Report
It's not absolutely clear what you are asking: do you want to make a copy of the initial directories, or would you rather move them to a new location? (Since this seems to be about shell scripting, you should also tag your question accordingly.)
The best approach would probably be to start with find; the following command:
find / -type d -name Report
will list all directories called Report; you can pipe its output to grep to select only those ending with /ASM/Report:
find / -type d -name Report | grep "/ASM/Report$"
This gives you a good starting point for detecting the directories to be moved or copied.
You can also use the -exec option of find to perform an action directly on each file or directory found. Type man find to see the full power of this tool.
Since it looks like you will have to search the whole filesystem, find may print some warnings (related to permissions); they are harmless, and you can discard them by ending the find command with 2>/dev/null, which throws away the stderr stream (the error messages).
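If the goal is to copy each matching tree into a new place while keeping only the immediate parent of ASM, a minimal sketch built on the command above could look like this (the destination /tmp/asm_reports and the search root / are assumptions; replace cp -r with mv to move instead of copying):
find / -type d -name Report 2>/dev/null | grep "/ASM/Report$" |
while read -r report; do
    asm=$(dirname "$report")                  # .../ASM
    parent=$(basename "$(dirname "$asm")")    # e.g. dpp
    mkdir -p "/tmp/asm_reports/$parent/ASM"
    cp -r "$report" "/tmp/asm_reports/$parent/ASM/"
done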
I would like to be able to sync to revision #0 for all files with a specific extension (so they get deleted). These files are taking up too much space and I don't use them.
I have tried a few different things with no luck:
p4 sync //root/.../*.psd#0
p4 sync //root/...#0/*.psd
The syntax you want is:
p4 sync //....psd#none
(#none is the idiomatic way to specify "no revision", but #0 and @0 should also work.)
The revision specifier always goes immediately after the file path, never in the middle of it. Providing a path like //root/...#0/*.psd should have gotten you an error like Invalid revision number '0/*.psd'.
Note that if your server is case-sensitive (the default if it's hosted on a Unix platform), all parts of a file path are case-sensitive, so you may need to do both .psd and .PSD to cover all your bases.
The following variations might work, with caveats:
p4 sync //.../*.psd#0 -- this works, but is slower due to the double wildcard. You almost never want to do .../* in place of simply ....
p4 sync //root/.../*.psd#0 -- this should also work, but only in a depot that is literally called root. The "root" of the repository (i.e. the parent "directory" of all depots) is just //. If you ran a command against a //root/... path and there is no root depot, you should have gotten an error like //root/... - must refer to client 'yourclient', which is the error you get if you try to reference a specific domain (//something) that isn't a depot and isn't your current client.
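For example, on a case-sensitive server you might run both of these to clear out Photoshop files regardless of extension case (same #none syntax as above):
p4 sync //....psd#none
p4 sync //....PSD#none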
I work at a software development company, and every day when my machine boots I have to run the same commands to start coding. I recently decided to create a bash script to do that for me. The problem is that the commands I need to type differ from one another in only one way: the folder I need to access.
I always have to access a directory that contains folders with different versions of the company code (let's call it "codes" for the sake of the discussion). Every day another folder is added to the "codes" directory (the company code is updated daily), named with a timestamp, e.g. 2021-07-05-17-52-51.
To create my automation script I need to get into the "codes" directory and find the most recently added folder, i.e. the one with the latest timestamp.
I am new to bash and I couldn't find answers on how to get the most recently added folder in a directory using bash, or some way to use tab completion to get the last one.
You can use something like this:
directory=$(ls -At1 | head -n 1)
An explanation in parts:
ls -At1 lists entries (including hidden ones) sorted by modification time, newest first, one per line
head -n 1 returns the first entry
$(...) runs the command in a subshell and sets directory to the name of the entry with the most recent modification time. If you want to ignore hidden files and folders, drop the -A flag from ls.
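Since the folder names are sortable timestamps, a variation that only considers directories could also work. This is an untested sketch, and the $HOME/codes path is an assumption:
cd "$HOME/codes" || exit 1
latest=$(ls -d1 */ | sort | tail -n 1)   # the trailing / makes the glob match directories only
cd "$latest" || exit 1
echo "Now working in $PWD"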
First of all, I want to thank everyone who will help me solve this. I have an exam tomorrow and I have to prepare this script for it. I am really new to Linux and to Bourne shell scripting.
My project should be a portable Bourne shell script which scans a directory for the following files: header.txt, footer.txt and content.txt. The content of these files should be read, ignoring lines starting with #, and used to generate an HTML page with that header, footer and content. The files can contain any text and/or HTML code, but they cannot contain head and body tags. When scanning the directory, the script has to compare the last-modification date of the files (header.txt, footer.txt and content.txt) with the last-modification date of the HTML page (if one already exists), and if any of the files is newer than the HTML page, the script should generate a new HTML page with the latest content.
Thank you very much, as this is very important for me. Please help me get this done.
Thank you very much!
To remove lines beginning with # try this:
grep -v "^#" file
To remove lines that may have spaces (or tabs) before the #:
grep -v "^[[:blank:]]*#" file
I have some image files with the wrong date (the date shown by ls -l, i.e. the modification time), because it was set wrong in the camera. How can I increment the date by two days in a script, changing all *.jpg files in a directory? Bash, Perl, whatever runs on a Linux machine and is appropriate for the job would be fine.
Searching the web I found that touch is used to manipulate the date, but I did not find a way to increment it by two days while preserving the time of day.
Thank you.
I guess that instead of modifying the date of the file (like all the other answers at this time), you would like to modify the metadata, so see this page: http://savvyadmin.com/fixing-dates-in-image-exif-tag-data-from-linux/
You have to use jhead (or exiv2) like this:
jhead -ts2003:01:01-00:00:00 image.jpg
Last but not least, there's a special switch -ta to adjust the date directly, e.g. for 2 days later:
for i in *.jpg; do jhead -ta+48:00 "$i"; done
Use touch to change modtime.
Use date to operate on the date.
Untested:
for f in *.jpg; do
    mtime=$(date -r "$f")                # GNU date -r FILE prints the file's modification time
    nextt=$(date -d "$mtime + 2 days")   # add two days to that timestamp
    touch -d "$nextt" "$f"               # write the new time back to the file
done
touch is the tool for the job.
for file in P123*.JPG; do
    touch --date="$(date -r "$file") + 2 days" "$file"
done
I created a few .sh files and put them in one of the directories in $PATH. Unfortunately, every time I start a new session I have to source them if I want to use them. I did a Google search and couldn't really find a way to avoid having to source these files.
I guess I can place a "source everything" command in ~/.bashrc, but there should be a simpler way to get this done.
Thanks
Let's say all of your scripts are under the ~/.functions directory. Put this in your $HOME/.bashrc:
for file in ~/.functions/*; do
    . "$file"
done
This will source in all files in the ~/.functions directory whenever you start a new shell.
Sourcing all commands in .bashrc is the simple way.
You may want a more sophisticated way of sourcing your startup scripts: create a specific directory, say ~/.start_scripts, put all your commands there, and write a loop in your .bashrc that sources whatever is in that directory, as sketched below. That way, you no longer have to edit .bashrc each time a new command is added to the .start_scripts directory.
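A sketch of that loop, to go in ~/.bashrc (the ~/.start_scripts name follows the suggestion above and is otherwise arbitrary):
# Source every regular, readable file in ~/.start_scripts, if the directory exists
if [ -d "$HOME/.start_scripts" ]; then
    for script in "$HOME/.start_scripts"/*; do
        [ -f "$script" ] && [ -r "$script" ] && . "$script"
    done
fi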