ln has unexpected behavior when using a wildcard - linux

I am planning on filing a bug against coreutils for this, as the behavior is unexpected and there isn't any practical use for it in the real world... although it did make me chuckle at first, as I never even knew one could create a file with a wildcard in its name. How practical is a filename with a wildcard in it? Who even uses such a feature?
I recently ran a bash command similar to this:
ln -s ../../avatars/* ./
Unfortunately, I did not add the correct number of "../", so rather than giving me an informative error, it simply created a link to a file literally named "*", which does not exist. I would only expect that behavior from this:
ln -s "../../avatars/*" ./
as quoting is the proper way to address such a filename.
Before I submit a bug on coreutils, I would like the opinion of others: is there any practical use for this behavior, or should ln provide a meaningful error message?
And yes, I know one can just link to the entire directory, rather than each file within, but I do not wish newly created files to be replicated to the old location. There are only a few files in there that are being linked right now.
Some might even say that using a wildcard when symlinking is bad practice. However, I know the contents of the directory exactly, and this is much quicker than doing each file manually.

This isn't a bug.
In the shell, if you use a wildcard pattern that doesn't match anything, then the pattern isn't substituted. For example, if you do this:
echo *.c
If you have no .c files in the current directory, it will just print "*.c". If there are .c files in the current directory, then *.c will be replaced with that list.
For many commands, specifying a file that doesn't exist is an error, and you get a message that seems to make sense, such as "cannot access '*.c'". But with ln -s, since a symbolic link's target doesn't have to exist, it goes ahead and makes the link.
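A quick way to see the shell doing this for yourself: the shopt settings below are standard bash and control what happens when a glob matches nothing (this is shell behavior, ln never sees the wildcard):
ls *.c                # no .c files: bash passes the literal string "*.c" to ls
shopt -s nullglob     # unmatched globs now expand to nothing at all
ls *.c                # ls gets no argument and just lists the current directory
shopt -u nullglob
shopt -s failglob     # or treat an unmatched glob as a hard error
ls *.c                # bash reports "no match: *.c" and never runs ls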

Related

Linux: How to delete the original file if the shortcut/link is deleted

I believe the question is self-explanatory, so I am going to make more efficient use of the body by sharing why I asked it in the first place, in the hope of getting a better solution than the one I am currently attempting, and getting two answers for one.
Basically, I am trying to sync two local directories bi-directionally in a way that respects a kind of .gitignore logic, i.e. it will ignore particular files and directories. Better yet, I would love something along the lines of whitelisting!
I am familiar with tools like rsync and unison that get the syncing part done but not the ignoring/whitelisting.
You can get the original file name and delete it when deleting the symlink. For example:
rm symlink_name $(readlink -f symlink_name)
But remember that if there are other symlinks to the same file, they will be left dangling.
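If you do this often, a small shell function can first check that the argument really is a symlink before removing both it and its target (the name rmboth is just illustrative):
rmboth() {                             # usage: rmboth symlink_name
    if [ -L "$1" ]; then
        target=$(readlink -f "$1")     # resolve the target before the link is removed
        rm -- "$1" "$target"
    else
        echo "not a symlink: $1" >&2
        return 1
    fi
}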

Remove a lot of UUID format named files using rm

I have a lot of files in a directory on a Linux system.
The problem is that those files are mixed in with a lot of UUID-named files that got there who knows how.
Is there a way to issue an rm command that deletes those files without the risk of removing the other files? (None of the other files have a UUID-format filename.)
I think it has something to do with defining how many characters there are before each "-" symbol, so something along the lines of "rm 8chars-4chars-4-4-12", but I don't know how to express that to rm. I only know "rm somefolder/*", using * to delete a folder's contents, but that's it.
Thanks in advance.
Actually solved it!
It was as easy as using the "?" wildcard, which matches exactly one character.
So, in this particular case:
rm -v ????????-????-????-*    # this says "remove (verbosely) 8-4-4-whatever"
So, that way, it deletes only files that follow this same format for the filename.
More information here: http://www.linfo.org/wildcard.html
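Note that ????????-????-????-* will also match non-UUID names that happen to have that rough shape. If you want to be stricter and are on GNU find (an assumption; -regextype is a GNU extension), you can match the full 8-4-4-4-12 hex layout and review the list before deleting anything:
find . -maxdepth 1 -regextype posix-extended \
    -regex '\./[0-9a-fA-F]{8}(-[0-9a-fA-F]{4}){3}-[0-9a-fA-F]{12}' -print
# once the listed files look right, append -delete to remove them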

Add comments next to files in Linux

I'm interested in simply adding a comment next to my files in Linux (Ubuntu). An example would be:
info user ... my_data.csv Raw data which was sent to me.
info user ... my_data_cleaned.csv Raw data with duplicates filtered.
info user ... my_data_top10.csv Cleaned data with only top 10 values selected for each ID.
Sort of the way you can comment commits in Git. I don't particularly care about searching on these tags, filtering by them, etc., just seeing them when I list files in a directory. Bonus if the comments/tags follow the file around as I copy or move it.
Most filesystem types support extended attributes where you could store comments.
So for example to create a comment on "foo.file":
xattr -w user.comment "This is a comment" foo.file
The attributes can be copied/moved with the file; just be aware that many utilities require special options to copy extended attributes (for example, cp --preserve=xattr or rsync -X).
Then, to list files with comments, use a script or program that grabs the extended attribute. Here is a simple example to use as a starting point; it just lists the files in the current directory:
#!/bin/sh
# List the files in the current directory, appending the user.comment attribute when set.
ls -1 | while read -r FILE; do
    # Read the attribute; errors for files without one are discarded.
    comment=$(xattr -p user.comment "$FILE" 2>/dev/null)
    if [ -n "$comment" ]; then
        echo "$FILE Comment: $comment"
    else
        echo "$FILE"
    fi
done
The xattr command is really slow and poorly written (it doesn't even return a useful exit status), so I suggest something else if possible: use setfattr and getfattr in a more complete script than the one I have provided, or maybe a custom ls command that is aware of the user.comment attribute.
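For reference, the setfattr/getfattr equivalents of the commands above look roughly like this (both come with the attr package on most distributions):
setfattr -n user.comment -v "This is a comment" foo.file       # write the attribute
getfattr -n user.comment --only-values foo.file 2>/dev/null    # print just the value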
This is a moderately serious challenge. Basically, you want to add attributes to files, keep the attributes when the file is copied or moved, and then modify ls to display the values of these attributes.
So, here's how I would attack the problem.
1) Store the information in a SQLite database. You can probably get away with one table. The table should contain the complete path to the file, and your comment. I'd name the database something like ~/.dirinfo/dirinfo.db. I'd store it in a subfolder, because you may find later on that you need other information in this folder. It'd be nice to key on inodes rather than pathnames, but inodes change too frequently (many programs save a file by writing a new one and renaming it). Still, you might be able to do something where you store both the inode and the pathname, and retrieve by pathname only if the retrieval by inode fails, in which case you'd then update the inode information.
2) write a bash script to create/read/update/delete the comment for a given file.
3) Write another bash function or script that works with ls. I wouldn't call it "ls" though, because you don't want to mess with all the command line options that are available to ls. You're going to be calling ls always as ls -1 in your script, possibly with some sort options, such as -t and/or -r. Anyway, your script will call ls -1 and loop through the output, displaying the file name, and the comment, which you'll look up using the script from 2). You may also want to add file size, but that's up to you.
4) Write functions to replace mv and cp (and ln??). These would be wrapper functions that would update the information in your table, and then call the regular Unix versions of these commands, passing along any arguments received by the functions (i.e. "$@"). If you're really paranoid, you'd also do it for things like scp, which can be used (inefficiently) to copy files locally. Still, it's unlikely you'll catch all the possibilities. What if someone else does a mv on your file, who doesn't have the function you have? What if some script moves the file by calling /bin/mv? You can't easily get around these kinds of issues.
Or if you really wanted to get adventurous, you'd write some C/C++ code to do this. It'd be faster, and honestly not all that much more challenging, provided you understand fork() and exec(). SQLite has a native C API (it is itself written in C), so you'd have to tangle with that too, but since you only have one database and one table, that shouldn't be too challenging.
You could do it in perl, too, but I'm not sure that it would be that much easier in perl, than in bash. Your actual code isn't that complex, and you're not likely to be doing any crazy regex stuff or string manipulations. There are just lots of small pieces to fit together.
Doing all of this is much more work than should be expected for a person answering a question here, but I've given you the overall design. Implementing it should be relatively easy if you follow the design above and can live with the constraints.
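As a rough illustration of steps 1) and 2), something like the following could be a starting point. The database location, table name and function names are placeholders, and the comment text is not escaped, so treat it as a sketch rather than a finished tool:
#!/bin/sh
# Sketch for steps 1) and 2): a comments table keyed by absolute path.
# Source this file, then call setcomment / getcomment.
DB="$HOME/.dirinfo/dirinfo.db"
mkdir -p "$HOME/.dirinfo"
sqlite3 "$DB" 'CREATE TABLE IF NOT EXISTS comments (path TEXT PRIMARY KEY, comment TEXT);'

setcomment() {    # setcomment FILE "comment text"
    path=$(readlink -f "$1")
    sqlite3 "$DB" "INSERT OR REPLACE INTO comments VALUES ('$path', '$2');"
}

getcomment() {    # getcomment FILE
    path=$(readlink -f "$1")
    sqlite3 "$DB" "SELECT comment FROM comments WHERE path = '$path';"
}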

Redirect program output without changing directory

Problem
I'm writing a set of scripts to help with automated batch job execution on a cluster.
The specific thing I have is a $OUTPUT_DIR, and an arbitrary $COMMAND.
I would like to execute the $COMMAND such that its output ends up in $OUTPUT_DIR.
For example, if COMMAND='cp ./foo ./bar; mv ./bar ./baz', I would like to run it such that the end result is equivalent to cp ./foo ./$OUTPUT_DIR/baz.
Ideally, the solution would look something like eval PWD="./$OUTPUT_DIR" $COMMAND, but that doesn't work.
Known solutions
[And their problems]
Editing $COMMAND: In most cases the command will be a script, or a compiled C or FORTRAN executable. Changing the internals of these isn't an option.
unionfs, aufs, etc.: While this is basically perfect, users running this won't have root, and causing thousands of arbitrary mounts seems like a questionable choice.
Copying / hard/soft links: This might be the solution I will have to use: some variety of actually duplicating the entire contents of ./ into ./$OUTPUT_DIR
cd $OUTPUT_DIR; ../$COMMAND : Fails if $COMMAND ever reads files
pipes : only works if $COMMAND doesn't directly work with files; which it usually does
Is there another solution that I'm missing, or is this request actually impossible?
[EDIT:] Chosen Solution
I'm going to go with something where each object in the directory is symbolic-linked into the output directory, and the command is then run from there.
This has the downside of creating a lot of symbolic links, but it shouldn't be too bad.
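A rough sketch of that chosen approach, using the variable names from the question (and assuming $OUTPUT_DIR is not itself inside ./, or is excluded from the glob):
mkdir -p "$OUTPUT_DIR"
ln -s "$PWD"/* "$OUTPUT_DIR"/              # link every object in ./ into the output dir
( cd "$OUTPUT_DIR" && eval "$COMMAND" )    # run the command there, in a subshell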
You can't solve this without making some assumptions about the interface of $COMMAND. There is no single definition of what "output ends up in $OUTPUT_DIR" means: for one program the output may be some files, another might just print something to stdout, and yet another might send data over the network or display something in a GUI. There isn't an obvious way of mapping all of these to "output goes to $OUTPUT_DIR".
So, you need to invent some assumptions and require any $COMMAND implementation to follow them. It may then be as simple as requiring that the command accept a parameter such as --target=<DIR>. If your command is an existing tool, you would have to create a wrapper script around it to translate that parameter into whatever the tool accepts. cp, mv and a few other utilities already accept -t/--target-directory, so that may be a good starting point.
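For instance, with GNU cp and mv the target-directory option already does this kind of redirection (using $OUTPUT_DIR from the question):
mkdir -p "$OUTPUT_DIR"
cp -t "$OUTPUT_DIR" ./foo            # same as: cp ./foo "./$OUTPUT_DIR/foo"
mv -t "$OUTPUT_DIR" ./bar ./baz      # -t takes the directory first, then any number of sources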
You cannot set the output directory; you can only set the working directory.
The problem is that once you change the working directory, other references become invalid. For example, in your command:
cp ./foo ./bar
here ./foo is resolved relative to the new working directory, so it no longer points at the file you meant. If you have a specific command, there are workarounds (creating a script that alters the arguments, prepending the directory to specific arguments), but in general this is not possible.

How do you search for all the files that contain a particular string?

Let's say you're working on a big project with multiple files, directories, and subdirectories. In one of these directories/subdirectories/files, you've defined a method, but now you want to know exactly which files in your entire project have been calling your method. How do you do this?
You mentioned grep, so I'll throw this solution out there. A more robust solution would be to use a version control system, as Fibbe suggested.
find . -exec grep 'method_name' {} \; -print 2> /dev/null
The idea is that, for each file found in the current directory and its sub-directories, a grep for 'method_name' is executed on that file. The 2> /dev/null is nice if you don't want to be warned about all the directories and files you don't have access to.
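If your grep supports recursive search (GNU grep does), the same thing can be written more simply; -n adds line numbers, which helps when tracking down call sites:
grep -rn 'method_name' . 2> /dev/null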
The most common way to do this is with your editor. For example, emacs can do this if you create a tag index with etags.
Source: http://www.gnu.org/software/emacs/emacs-lisp-intro/html_node/etags.html
Then you just type M-. followed by the name of the function you want to visit, and emacs will take you there.
I don't know what system or editor you are using, but most editors have a similar feature.
If you don't use emacs, another good way to keep track of functions, and get a lot of other good features, is to use a version control system like git, which provides really fast searching.
If you don't use a version control system, you may want to look at a program designed just for searching, like OpenGrok.
