Using AC_CONFIG_FILES to copy a file - autoconf

In my configure.ac file, I have this:
AC_CONFIG_FILES([Makefile foo/bar.h],
[mkdir -p ../dir1 && cp foo/bar.h ../dir1]
)
The goal is to:
Generate the Makefile from Makefile.in
Generate foo/bar.h from foo/bar.h.in
Copy foo/bar.h to dir1/bar.h.
While it works, I'm pretty sure I've done that last part wrong. Looking at the generated output, I see:
case $ac_file$ac_mode in
"Makefile":F) mkdir -p ../dir1 && cp foo/bar.h ../dir1
;;
"foo/bar.h":F) mkdir -p ../dir1 && cp foo/bar.h ../dir1
;;
esac
So it looks like it is doing my 'mkdir' command once for each file in the file list, which is a bit redundant. The fact that it does this in a 'case' statement suggests there is some way to specify commands to run specific to each file (otherwise why have a 'case'?).
What's the trick?

In case you don't need to keep bar.h in the dir foo, I suppose you can do
AC_CONFIG_FILES([Makefile bar.h:foo/bar.h.in])
source: https://www.gnu.org/software/autoconf/manual/autoconf-2.69/html_node/Configuration-Files.html#Configuration-Files
In case bar.h needs to be in both locations and a link is acceptable, use
AC_CONFIG_FILES([Makefile foo/bar.h])
AC_CONFIG_LINKS([bar.h:foo/bar.h])
source: https://www.gnu.org/software/autoconf/manual/autoconf-2.69/html_node/Configuration-Links.html#Configuration-Links

So it turns out I was misinterpreting what I was seeing.
Seeing that 'case' statement made me think there was some way to specify that some of the commands applied only to specific entries from the files parameter (something like [Makefile:stuff1 foo/bar.h:stuff2]). But (thankfully) that's not why the 'case' is there.
The trick is that at the point where the case statement is produced, config.status isn't just walking the entries from that one AC_CONFIG_FILES call. So in order to limit the commands to just the AC_CONFIG_FILES entries they belong to, it uses a 'case'.
While my early attempts suggested that using two AC_CONFIG_FILES calls was not supported, that was wrong too.
My solution was to use two AC_CONFIG_FILES calls, one per set of commands.
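Roughly like this (a minimal sketch of that approach; it keeps the ../dir1 path from my original attempt, so adjust it to wherever dir1 actually lives relative to the build directory):
AC_CONFIG_FILES([Makefile])
AC_CONFIG_FILES([foo/bar.h],
                [mkdir -p ../dir1 && cp foo/bar.h ../dir1])
Now the mkdir/cp commands are attached only to the second call, so they show up only in the "foo/bar.h":F branch of the generated case statement instead of once per output file.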

Related

moving files from a folder into subfolders based on the prefix number with Linux

I'm relatively new to bash and I have tried multiples solutions that I could find here but none of them seem to be working in my case. It's pretty simple, I have a folder that looks like this:
- images/
  - 0_image_1.jpg
  - 0_image_2.jpg
  - 0_image_3.jpg
  - 1_image_1.jpg
  - 1_image_2.jpg
  - 1_image_3.jpg
and I would like to move these jpg files into subfolders based on the prefix number like so:
- images_0/
  - 0_image_1.jpg
  - 0_image_2.jpg
  - 0_image_3.jpg
- images_1/
  - 1_image_1.jpg
  - 1_image_2.jpg
  - 1_image_3.jpg
Is there a bash command that could do that in a simple way?
Thank you
for src in *_*.jpg; do
  dest=images_${src%%_*}/
  echo mkdir -p "$dest"
  echo mv -- "$src" "$dest"
done
Remove both echos if the output looks good.
I would do this with rename a.k.a. Perl rename. It is extremely powerful and performant. Here's a command for your use case:
rename --dry-run -p '$_="images_" . substr($_,0,1) . "/" . $_' ?_*jpg
Let's dissect that. At the right end, we specify that we only want to work on files that start with a single character/digit before an underscore, so we don't do damage by applying the command to files it wasn't meant for. Then --dry-run means it doesn't actually do anything, it just shows you what it would do - this is a very useful feature. Then -p, which handily means "create any necessary directories for me as you go". Then the meat of the command: rename passes you the current filename in the variable $_, and we assign the new name back to $_ to say what we want the file to be called. In this case we just want the word images_, followed by the first digit of the existing filename, then a slash and the original name. Simples!
Sample Output
'0_image_1.jpg' would be renamed to 'images_0/0_image_1.jpg'
'0_image_2.jpg' would be renamed to 'images_0/0_image_2.jpg'
'1_image_3.jpg' would be renamed to 'images_1/1_image_3.jpg'
Remove the --dry-run and run again for real, if the output looks good.
Using rename has several benefits:
that it will warn and avoid any conflicts if two files rename to the same thing,
that it can rename across directories, creating any necessary intermediate directories on the way,
that you can do a dry run first to test it,
that you can use arbitrarily complex Perl code to specify the new name.
Note: On macOS, you can install rename using homebrew:
brew install rename
Note: On some systems, Perl rename is installed as prename to distinguish it from other rename utilities.

Using for in a Script, Ubuntu command line

How can I pass each one of my repository files and to do something with them?
For instance, I want to make a script:
#!/bin/bash
cd /myself
#for-loop that will select one by one all the files in /myself
#for each X file I will do this:
tar -czvf X.tar.gz /myself2
So a for loop in bash is similar to python's model (or maybe the other way around?).
The model goes "for instance in list":
for some_instance in "${MY_ARRAY[@]}"; do
  echo "doing something with $some_instance"
done
To get a list of files in a directory, the quick and dirty way is to parse the output of ls and slurp it into an array, a-la array=($(ls))
To quickly explain what's going on here, to the best of my knowledge: assigning a variable to a space-delimited string surrounded with parens splits the string on whitespace and turns it into an array.
The downside of parsing ls is that it doesn't take into account files with spaces in their names. For that, I'll leave you with a link to turning a directory's contents into an array, the same place I lovingly :) ripped off the original array=($(ls -d */)) command.
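For completeness, a minimal glob-based sketch (assuming the files live in /myself, and with echo standing in for your real command) that avoids parsing ls entirely, so names containing spaces survive intact:
cd /myself || exit 1
files=(*)                          # let the shell glob fill the array
for f in "${files[@]}"; do
  echo "doing something with $f"   # replace echo with the real work
done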
You can use a while loop, as it will take care of whole lines that include spaces as well:
#!/bin/bash
cd /myself
ls | while read -r f
do
  tar -czvf "$f.tar.gz" "$f"
done
You can also try it this way:
for i in $(ls /myself/*)
do
  tar -czvf "$i.tar.gz" "$i"
done

Linux script variables to SCP and delete files

I am looking to set up a script to do the following:
1st: SCP a directory on the first day of month to another server
2nd: Delete the directory after successful transfer
The directory I need to move will always have a different name, and the lowest numbered one is always the one that needs to move:
2018/files/02/
2018/files/03/
So what im looking to write up is something like:
scp /2018/files/% user@host:/backups/2018/files/
{where % = lowest num} &&
rm -rf /2018/files/%
{where % = lowest num} &&
exit
Thanks for any advice
If you are open to using Ruby, you could accomplish it with something like this:
def file_number(filespec)
  filespec.split('/').last.to_i
end
directories = Dir['/2018/files/*'].select { |f| File.directory?(f) }
sorted_dirs = directories.sort_by { |dir| file_number(dir) }
dir_to_copy = sorted_dirs.first
destination_dir = File.join('/', 'backups', dir_to_copy)
`scp -r #{dir_to_copy} user@host:#{destination_dir}`
`rm -rf #{dir_to_copy}`
I have not tested this, but if you have any problems, let me know what they are and I can work through it with you.
While using shell scripting eliminates the need for the Ruby interpreter, to me the code is not nearly as straightforward.
In very large directory lists (maybe 10,000's?) the sort might be intolerably slow, and another method would be needed to optimize for speed.
I would caution you against doing an unconditional rm -rf after the backup -- that seems really risky to me.
The big challenge here is to actually find the right files to copy and, shudder, delete. So let us call that step 0.
Let's start with some boilerplate:
sourceD=/2018/files/
targetD=/backups/2018/files/
And a little assertion, which bails out from the script if $1 does not equate to a directory.
assert_directory() { (cd ${1:?directory name}) || exit; }
step 0: Identify directory:
assert_directory $sourceD
to_be_archived=$(
  # source must be two characters, hence "??"
  # source must be a directory, hence the trailing "/"
  # pathname expansion sorts its results,
  # so the first match must be our source
  set -- $sourceD/??/ &&
  assert_directory "$1"
  echo ${1:?nothing found}
) || exit
This is only a couple of lines of condensed code. Note that this may
cause trouble if you (accidentally) run this multiple times in a row.
Step 1, Copy files now appears to be the easy part.
scp -r ${to_be_archived:?} user@host:${targetD:?}
This is a simple method for copying files, but it is also slow and risky.
Look up rsync over ssh for alternatives.
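For example, a hedged sketch of that alternative (assuming rsync is installed on both ends; --remove-source-files deletes each source file only after it has transferred successfully, which folds step 2 into step 1, though it leaves the now-empty directories behind):
# strip the trailing slash so rsync recreates the directory itself under $targetD
rsync -av --remove-source-files -e ssh "${to_be_archived%/}" "user@host:${targetD:?}"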
Step 2, Remove
The rm -fr line will do the job, but I won't include that here.
We are missing an essential step, as we need to make sure that our
files have arrived safely. Again, rsync has options for that.
In summary:
assert_directory() { (cd ${1:?directory name}) || exit; }
assert_directory $sourceD
to_be_archived=$(
set -- $sourceD/??/ &&
assert_directory "$1"
echo ${1:?nothing found}
) || exit
This will give you the first two-character name directory (if one exists) in sourceD or abort the running script. It will break if $sourceD contains spaces.

bash -- copying and change filename

I need to copy all files from
/dirA/[NAME].20151231.txt
to
/dirB/20151231.[NAME].txt
and
/dirC/20151231/[NAME].txt
i.e. I need to copy the files, but change the name.
You can assume that I know the "date" string before hand, so we can assume 20151231 is a supplied argument.
if I have a list of names, I can do something like
for n in $names; do cp /dirA/$n.$date.txt /dirB/$date.$n.txt; done
But what if I don't have a list of names? I am looking for an elegant solution, as extracting them from dirA sounds a bit cumbersome.
Thanks!
A reasonably reliable way of processing this material is:
date=20151231
cd /dirA || exit 1
mkdir -p "/dirC/$date" || exit 1
for file in *."$date".txt
do
  name="${file%.$date.txt}"
  cp "$file" "/dirB/$date.$name.txt"
  cp "$file" "/dirC/$date/$name.txt"
done
The cd operation is checked; if it fails, there is no point in continuing. Likewise, the mkdir -p operation ensures that the dated directory under /dirC exists, or the script exits. The relevant error messages were already generated by cd and mkdir.
Using the shell globbing to generate the file names is best; it avoids issues with 'what happens if the file name contains spaces (or newlines, or other unexpected characters)'.
The assignment extracts the '[NAME]' portion of the file name. This is then used to copy the file from /dirA to the relevant locations under /dirB and /dirC. It would be feasible to check that /dirB and /dirC also exist if you thought that was necessary.
Maybe I am just awful at asking questions. What I was looking for was a "sed for file names". And I found the answer -- that's rename.
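For the record, a hedged sketch of that idea with Perl rename (same date placeholder as above; --dry-run only previews, so nothing is renamed until you drop it):
date=20151231
rename --dry-run "s/^(.*)\.\Q$date\E\.txt\$/$date.\$1.txt/" *."$date".txt
This turns NAME.20151231.txt into 20151231.NAME.txt in the current directory. Note that rename moves files rather than copying them, so the cp loop above is still the right tool when both /dirB and /dirC need their own copy.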

One liner to append a file into another file but only if it hasn't already been added

I have an automated process that has a number of lines like the following pattern:
sudo cat /some/path/to/a/file >> /some/other/file
I'd like to transform that into a one liner that will only append to /some/other/file if /some/path/to/a/file has not already been added.
Edit
It's clear I need some examples here.
example 1: Updating a .bashrc script for a specific login
example 2: Creating a .screenrc for different logins
example 3: Appending to the end of a /etc/ config file
Some other caveats. The text is going to be added in a block (>>). Consequently, it should be relatively straightforward to see whether the entire code block has been added near the end of the file. I am trying to come up with a simple method for determining whether or not the file has already been appended to the original.
Thanks!
Example python script...
def check_for_appended(new_file, original_file):
    """ Checks original_file to see if it has the contents of new_file """
    new_lines = reversed(new_file.split("\n"))
    original_lines = reversed(original_file.split("\n"))
    appended = None
    for new_line, orig_line in zip(new_lines, original_lines):
        if new_line != orig_line:
            appended = False
            break
        else:
            appended = True
    return appended
Maybe this will get you started - this GNU awk script:
gawk -v RS='^$' 'NR==FNR{f1=$0;next} {print (index($0,f1) ? "present" : "absent")}' file1 file2
will tell you if the contents of "file1" are present in "file2". It cannot tell you why they are present, e.g. whether it is because you previously concatenated file1 onto the end of file2.
Is that all you need? If not update your question to clarify/explain.
Here's a technique to see if a file contains another file
contains_file_in_file() {
  local small=$1
  local big=$2
  # GNU awk: RS='^$' slurps each whole file into a single record
  awk -v RS='^$' '{small=$0; getline; exit !index($0, small)}' "$small" "$big"
}
if ! contains_file_in_file /some/path/to/a/file /some/other/file; then
  sudo cat /some/path/to/a/file >> /some/other/file
fi
EDIT: The OP just told me in the comments that the files he wants to concatenate are bash scripts -- this brings us back to the good ole C preprocessor include guard tactics:
prepend every file with
if [ -z "$__<filename>__" ]; then __<filename>__=1;
(of course replacing <filename> with the name of the file) and at the end
fi
This way, you surround the script in each file with a test for something that's only true once, so the body runs only the first time the file is sourced.
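A hypothetical concrete instance, for a fragment imagined to be called aliases.sh (both the name and the content are made up for illustration):
if [ -z "$__aliases_sh__" ]; then __aliases_sh__=1
  alias ll='ls -la'   # the fragment's real content goes here
fi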
Does this work for you?
sudo bash -c 'set -o noclobber; date > /tmp/testfile'
noclobber prevents overwriting an existing file.
I think it doesn't, since you wrote you want to append something but this technique might help.
When the appending all occurs in one script, then use a flag:
if [ -z "${appended_the_file}" ]; then
cat /some/path/to/a/file >> /some/other/file
appended_the_file="Yes I have done it except for permission/right issues"
fi
I would go on to write a function appendOnce() { ... } with the content above. If you really want an ugly one-liner (ugly: a pain for the eye and for colleagues):
test -z "${ugly}" && cat /some/path/to/a/file >> /some/other/file && ugly="dirt"
Combining this with sudo:
test -z "${ugly}" && sudo "cat /some/path/to/a/file >> /some/other/file" && ugly="dirt"
It appears that what you want is a collection of script segments which can be run as a unit. Your approach -- making them into a single file -- is hard to maintain and subject to a variety of race conditions, making its implementation tricky.
A far simpler approach, similar to that used by most modern Linux distributions, is to create a directory of scripts, say ~/.bashrc.d and keep each chunk as an individual file in that directory.
The driver (which replaces the concatenation of all those files) just runs the scripts in the directory one at a time:
if [[ -d ~/.bashrc.d ]]; then
  for f in ~/.bashrc.d/*; do
    if [[ -f "$f" ]]; then
      source "$f"
    fi
  done
fi
To add a file from a skeleton directory, just make a new symlink.
add_fragment() {
  if [[ -f "$FRAGMENT_SKELETON/$1" ]]; then
    # The following will silently fail if the symlink already
    # exists. If you wanted to report that, you could add || echo...
    ln -s "$FRAGMENT_SKELETON/$1" ~/.bashrc.d/"$1" 2>/dev/null
  else
    echo "Not a valid fragment name: '$1'"
    exit 1
  fi
}
Of course, it is possible to effectively index the files by contents rather than by name. But in most cases, indexing by name will work better, because it is robust against editing the script fragment. If you used content checks (md5sum, for example), you would run the risk of having an old and a new version of the same fragment, both active, and without an obvious way to remove the old one.
But it should be straight-forward to adapt the above structure to whatever requirements and constraints you might have.
For example, if symlinks are not possible (because the skeleton and the instance do not share a filesystem, for example), then you can copy the files instead. You might want to avoid the copy if the file is already present and has the same content, but that's just for efficiency and it might not be very important if the script fragments are small. Alternatively, you could use rsync to keep the skeleton and the instance(s) in sync with each other; that would be a very reliable and low-maintenance solution.
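A minimal sketch of that rsync idea (both paths are assumptions for illustration; --delete makes the instance mirror the skeleton exactly, so drop it if an instance may carry extra fragments of its own):
rsync -a --delete "$FRAGMENT_SKELETON/" ~/.bashrc.d/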
