BASH_REMATCH not outputting a match - linux

Context
I'm trying to use a regex to pull the repository name out of a GitHub HTTPS clone link, and I can't seem to get BASH_REMATCH to work.
For context, I'm writing this as a .sh file and running it in Git Bash.
Code Logic
As you'll see below, I basically feed in an HTTPS clone link and also provide the regex pattern. I tried this on https://regex101.com/ and the regex correctly pulls out the final bit of the string:
/liam_test_3.git
With it also pulling out the specific capture group of just the repository name:
liam_test_3
But the code I've tried to test below is defaulting to the "else" statement and outputting "no match".
Code
#!/bin/bash
# $1 = SSH Clone Link
# $2 = Github Organisation
CLONE="https://github.cloud.companyname.com/Organisation/liam_test_3.git";
re="\/(?!.*\/)(.*).git";
echo "$CLONE"
echo "$re"
if [[ $CLONE =~ $re ]]
then
    repo_dir=${BASH_REMATCH[1]}
    echo "Your repo name is $repo_dir"
else
    echo "No Match"
fi;

Try re="([^/]*)\.git" instead. This will match the final part of the URL (everything after the last /) and capture the repository name. Bash's =~ operator uses POSIX extended regular expressions, which do not support look-ahead assertions such as (?!...), which is why the original pattern never matches here.
Note that you need to escape the . before git; otherwise the pattern would match the first occurrence of git in the URL, because an unescaped . can match the second / of https:// and git then matches the git of github.
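For reference, here is the question's script with only the pattern swapped out (a minimal sketch; the hard-coded CLONE value is kept from the question):

#!/bin/bash
CLONE="https://github.cloud.companyname.com/Organisation/liam_test_3.git"
re="([^/]*)\.git"

if [[ $CLONE =~ $re ]]; then
    repo_dir=${BASH_REMATCH[1]}
    echo "Your repo name is $repo_dir"   # prints: Your repo name is liam_test_3
else
    echo "No Match"
fi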

Related

If statement comparing variable to files in list

I am using a terminal emulator. I have a folder with save files in it and am trying to determine whether the entered text matches any file in the list.
I created a variable called saveFiles using ls, intending to display only the files ending in .save and to strip the extension from the output:
saveFiles=$(cd "${0%/*}"/save; ls *.save* | ls *.save*; cd "${0%/*}")
echo -n ">"
read -r "name"
So $saveFiles equals:
Savegame1 savegame2 savegame3
I'm trying to make an if statement that tests whether the entered variable matches any of the files in the folder.
The following script works except when I type text that appears at the end of a file name. So if one of the files is called savegame.save and I type game, it comes up with a match, because game.save is contained in the string.
if [[ $saveFiles = *"$name".save* ]]
then
    scene=$(cat "save/$name".save)
fi
I need to find a way to test whether any of the strings in $saveFiles is exactly equal to the entered variable $name.
To reiterate, files in folder:
Save1.save
Save2.save
...
Read `$name`
If $name matches any file in the list then load scene otherwise repeat.
I hope this isn't confusing. Please feel free to ask me to clarify further. Thank you.
Maybe I am not understanding the question correctly, but why don't you first request the file name and then query the file system with precisely that name, e.g.
read name
if [[ -f "${name}.save" ]]; then
    echo "Found the file ${name}.save"
fi
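If you also want the "otherwise repeat" behaviour described in the question, a minimal sketch along the same lines (the save/ directory and scene variable are taken from the question) could be:

while true; do
    echo -n ">"
    read -r name
    if [[ -f "save/${name}.save" ]]; then
        scene=$(cat "save/${name}.save")
        break
    fi
    echo "No save file called '${name}', try again."
done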

Pass two variables sequentially

In a Unix shell, git clone <url> will prompt the user for a username and then a password.
I have defined $username and $password variables.
How can I pass the two variables to the command, in that order?
I have tried
echo $password | echo $username | git clone <url>
which did not work.
There are several ways you can do this. What you probably should do, because it's more secure, is use a configuration where the script doesn't have to contain and pass the username and password. For example, you could set up ssh, or you could use a credential helper. (Details depend on your environment, so I'd recommend searching for existing questions and answers re: how to set those up.)
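For example, one commonly used option is Git's built-in in-memory credential cache (a minimal sketch; whether it is appropriate depends on your environment and Git version):

# Cache HTTPS credentials in memory for a while instead of embedding them in a script
git config --global credential.helper cache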
If you still want to have the script pass the values, you basically have two choices: You can use a form of the git command that takes the values on the command line (see brokenfoot's answer), or you can pass the values on STDIN (which is the approach you're attempting, but doesn't work quite the way you're attempting it).
When you use |, you're sending the "standard output" of the command on the left to the "standard input" of the command on the right. So when you chain commands like you show, the first echo is sending output to the second echo - which ignores it. That's not what you want.
You would need a single command that outputs the username, an end-of-line character, the password, and another end-of-line character. That's not easy to do with echo (at least, not portably). You could do something like
git clone <url> <<EOF
$username
$password
EOF
Let me pretend the question is neither git-related nor security-related, and my answer to the literal question "How to pass two variables to a program" is:
( echo $username; echo $password ) | git clone 'url'
That is, just output two strings separated by a newline (echo adds the newline); or do it in one
call to echo:
echo "$username
$password" | git clone 'url'
You can pass the variables like so:
username="xyz"
password="123"
echo "git clone https://$username:$password#github.com/$username/repository.git"
Output:
git clone https://xyz:123@github.com/xyz/repository.git

command_not_found_handler does not work with slashes

I have a problem with the "/" sign in the bash shell (version 4.3 on Ubuntu 16). I have a function:
command_not_found_handler() {
    if [[ "$1" =~ any$ ]]; then
        echo "$1"
    fi
}
This function should echo back any command typed in the terminal when that command ends with any.
This works well, except in the situation when I write something with a /, such as whatever/any. In that event, I receive an error akin to the following:
bash: no such file or directory: whatever/any
Any attempts to escape this / in the function have no effect (for instance if [[ "$1" =~ /any$ ]]; then or if [[ "$1" =~ \/any$ ]]; then).
What can I do to make it work with the / sign?
command_not_found_handle (no trailing r) is only invoked after doing a search through the PATH for a given command.
No such search occurs when the user is passing an explicit path to a command, which is how anything containing a / is interpreted.
To quote the relevant documentation, with emphasis added:
If the name is neither a shell function nor a builtin, and contains no slashes, Bash searches each element of $PATH for a directory containing an executable file by that name. Bash uses a hash table to remember the full pathnames of executable files to avoid multiple PATH searches (see the description of hash in Bourne Shell Builtins). A full search of the directories in $PATH is performed only if the command is not found in the hash table. If the search is unsuccessful, the shell searches for a defined shell function named command_not_found_handle. If that function exists, it is invoked with the original command and the original command’s arguments as its arguments, and the function’s exit status becomes the exit status of the shell. If that function is not defined, the shell prints an error message and returns an exit status of 127.
The entire paragraph of documentation is relevant only in the set of conditions set out at the beginning: A command must not be a shell function, must not be a builtin, and must not contain slashes.
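To illustrate the distinction (the final error line is the one quoted in the question; this assumes the function is defined as command_not_found_handle in an interactive bash session):

# No slash: the PATH search fails, so the handler runs and echoes the command name
$ somecommandany
somecommandany
# Contains a slash: bash treats it as a path and never consults the handler
$ whatever/any
bash: no such file or directory: whatever/any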

One liner to append a file into another file but only if it hasn't already been added

I have an automated process that has a number of lines like the following pattern:
sudo cat /some/path/to/a/file >> /some/other/file
I'd like to transform that into a one liner that will only append to /some/other/file if /some/path/to/a/file has not already been added.
Edit
It's clear I need some examples here.
example 1: Updating a .bashrc script for a specific login
example 2: Creating a .screenrc for different logins
example 3: Appending to the end of a /etc/ config file
Some other caveats: the text is going to be added in a block (>>). Consequently, it should be relatively straightforward to see whether the entire block has been added near the end of a file. I am trying to come up with a simple method for determining whether or not the file has already been appended to the original.
Thanks!
Example python script...
def check_for_appended(new_file, original_file):
    """ Checks original_file to see if it has the contents of new_file """
    new_lines = reversed(new_file.split("\n"))
    original_lines = reversed(original_file.split("\n"))
    appended = None
    for new_line, orig_line in zip(new_lines, original_lines):
        if new_line != orig_line:
            appended = False
            break
    else:
        appended = True
    return appended
Maybe this will get you started - this GNU awk script:
gawk -v RS='^$' 'NR==FNR{f1=$0;next} {print (index($0,f1) ? "present" : "absent")}' file1 file2
will tell you if the contents of "file1" are present in "file2". It cannot tell you why, e.g. because you previously concatenated file1 onto the end of file2.
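For instance, to use that check as a guard for the append from the question, one could wrap it in a test like this (a sketch; sudo sh -c is used so that the redirection also runs with elevated privileges):

if [ "$(gawk -v RS='^$' 'NR==FNR{f1=$0;next} {print (index($0,f1) ? "present" : "absent")}' /some/path/to/a/file /some/other/file)" = "absent" ]; then
    sudo sh -c 'cat /some/path/to/a/file >> /some/other/file'
fi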
Is that all you need? If not, update your question to clarify/explain.
Here's a technique to see if a file contains another file
contains_file_in_file() {
    local small=$1
    local big=$2
    awk -v RS="" '{small=$0; getline; exit !index($0, small)}' "$small" "$big"
}
if ! contains_file_in_file /some/path/to/a/file /some/other/file; then
    sudo cat /some/path/to/a/file >> /some/other/file
fi
EDIT: OP just told me in the comments that the files he wants to concatenate are bash scripts -- this brings us back to the good old C preprocessor include-guard tactics:
prepend every file with
if [ -z "$__<filename>__" ]; then __<filename>__=1; else
(of course replacing <filename> with the name of the file) and at the end
fi
this way, you surround the script in each file with a test for something that's only true once.
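Concretely, if the fragment being appended were a file called myconfig.sh, the guarded block appended to the target would look roughly like this (a sketch; the names are placeholders):

if [ -z "$__myconfig__" ]; then __myconfig__=1;
    # ... original contents of myconfig.sh, for example:
    alias ll='ls -l'
fi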
Does this work for you?
sudo bash -c 'set -o noclobber; date > /tmp/testfile'
noclobber prevents overwriting an existing file.
I think it doesn't, since you wrote you want to append something but this technique might help.
When the appending all occurs in one script, then use a flag:
if [ -z "${appended_the_file}" ]; then
cat /some/path/to/a/file >> /some/other/file
appended_the_file="Yes I have done it except for permission/right issues"
fi
I would go on to wrap this in a function appendOnce() { ... } with the content above (see the sketch after the sudo variant below). If you really want an ugly one-liner (ugly: a pain for the eye and for colleagues):
test -z "${ugly}" && cat /some/path/to/a/file >> /some/other/file && ugly="dirt"
Combining this with sudo:
test -z "${ugly}" && sudo "cat /some/path/to/a/file >> /some/other/file" && ugly="dirt"
It appears that what you want is a collection of script segments which can be run as a unit. Your approach -- making them into a single file -- is hard to maintain and subject to a variety of race conditions, making its implementation tricky.
A far simpler approach, similar to that used by most modern Linux distributions, is to create a directory of scripts, say ~/.bashrc.d and keep each chunk as an individual file in that directory.
The driver (which replaces the concatenation of all those files) just runs the scripts in the directory one at a time:
if [[ -d ~/.bashrc.d ]]; then
    for f in ~/.bashrc.d/*; do
        if [[ -f "$f" ]]; then
            source "$f"
        fi
    done
fi
To add a file from a skeleton directory, just make a new symlink.
add_fragment() {
    if [[ -f "$FRAGMENT_SKELETON/$1" ]]; then
        # The following will silently fail if the symlink already
        # exists. If you wanted to report that, you could add || echo...
        ln -s "$FRAGMENT_SKELETON/$1" ~/.bashrc.d/"$1" 2>/dev/null
    else
        echo "Not a valid fragment name: '$1'"
        exit 1
    fi
}
Of course, it is possible to effectively index the files by contents rather than by name. But in most cases, indexing by name will work better, because it is robust against editing the script fragment. If you used content checks (md5sum, for example), you would run the risk of having an old and a new version of the same fragment, both active, and without an obvious way to remove the old one.
But it should be straight-forward to adapt the above structure to whatever requirements and constraints you might have.
For example, if symlinks are not possible (because the skeleton and the instance do not share a filesystem, for example), then you can copy the files instead. You might want to avoid the copy if the file is already present and has the same content, but that's just for efficiency and it might not be very important if the script fragments are small. Alternatively, you could use rsync to keep the skeleton and the instance(s) in sync with each other; that would be a very reliable and low-maintenance solution.
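For the rsync variant mentioned above, a minimal sketch (reusing the FRAGMENT_SKELETON directory from the function above) might be:

# Mirror the skeleton directory into the instance; --delete removes fragments
# that no longer exist in the skeleton.
rsync -a --delete "$FRAGMENT_SKELETON"/ ~/.bashrc.d/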

find based filename autocomplete in Bash script

There is a command line feature I've been wanting for a long time, and I've thought about how to best realize it, but I got nothing...
So what I'd like to have is: when I start typing a filename and hit tab, for example:
# git add Foo<tab>
I'd like it to run a find . -name "*$1*" and basically autocomplete the complete path to the matched file on my command line.
What I have so far:
I know I'll have to write a function that will call the app with the parameters I want,
for example git add. After that it needs to catch the tab-keystroke event and do the find mentioned above, and display the results if many, or fill in the result if one.
What I haven't been able to figure out:
How to catch the tab-key event from within a function.
So basically in pseudocode:
gadd() {git add autocomplete_file_search($1)}
autocomplete_file_search(keyword) {
if( tab-key-pressed ){
files = find . -name "*$1*";
if( filecount > 1 ) {
show list;
}
if( files == 1 ) {
return files
}
}
}
Any ideas?
Thanks.
Matching anywhere in the filename is rather complicated, and I'm not sure it's really all that useful. Matching at the start of filenames makes more sense and is much easier to implement, even recursively.
Now, you mentioned find as a requirement, but bash (since version 4.0) can also find files recursively, and it should be more efficient to let bash do that part. To match recursively in bash, you enable the globstar shell option by running shopt -s globstar, then two consecutive asterisks, **, will match recursively.
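For example (a quick illustration, separate from the completion code below):

shopt -s globstar
printf '%s\n' **/*.sh    # lists .sh files in the current directory and all subdirectories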
Next up, given that you want to match files recursively inside a git repository, we best have a way to detect that we're actually in a git repository, otherwise, if you accidentally trigger it in / for instance, your prompt will hang while waiting for bash to search through your entire filesystem. The following function should be fairly efficient at determining if we're inside a git repository. Given the current working directory, e.g. /foo/bar/baz, it'll look for /foo/bar/baz/.git, /foo/bar/.git, /foo/.git, /.git and return true if it finds one, false otherwise.
isgit() {
    local p=$PWD
    while [[ $p ]]; do
        [[ -d $p/.git ]] && return
        p=${p%/*}
    done
    return 1
}
For simplicity, we'll create a gadd command to add the completions for. A completion function can only be attached to the first word of the command; e.g. we can add completion for git, but not for git add, so we'll make a new command that turns git add into one word.
gadd() {
    git add "$@"
}
Now for the actual completion function. When triggered by hitting TAB, the function will be invoked with three arguments. $1 is the command being completed, $2 is the current word of the command line being completed, and $3 is the previous word on the line. So the files we want to search will be matched by the glob **/"$2"*; all files starting with "$2". We iterate these filenames, and append them to the COMPREPLY array. If the COMPREPLY array only contains one value when the function is done, the word will be replaced by that value. If it contains more than one value, hit tab another time to get a list of all the matches.
shopt -s globstar
_git_add_complete() {
    local file
    isgit || return
    for file in **/"$2"*; do
        # If the glob doesn't match, we'll get the glob itself, so make sure
        # we have an existing file
        [[ -e $file ]] || continue
        # If it's a directory, add a trailing /
        [[ -d $file ]] && file+=/
        COMPREPLY+=( "$file" )
    done
}
complete -F _git_add_complete gadd
Add the above three code blocks to your ~/.bashrc, then open a new terminal, enter a git repository and try gadd something<tab>.
You should take a look at this introduction to bash completion. Briefly, bash has a system for configuring and extending tab completion. Other shells do this, too, and each one has a different way to set it up. Using this system it is not necessary to do everything yourself and adding custom argument completion to a command is relatively easy.
Does this work?
$ cat .bash_completion
_foo()
{
    local files
    cur=${COMP_WORDS[COMP_CWORD]}
    local files=$(for x in `find -type f`; do echo ${x}; done)
    COMPREPLY=( $( compgen -W "${files}" -- ${cur} ) )
    return 0
}
complete -F _foo foo
$ . /etc/bash_completion
$ foo ./[tab]
I wrote git-number so that I never have to hit tab when specifying files to git.
With git-number I can use numbers to represent the filenames that I want git to handle.

Resources