Need to include a new file with the 'diff' utility as a patch - Linux

When doing "diff -bBupwr" of two directories, dir and dir.orig, to capture the differences, the utility doesn't include files that exist only in dir; it only reports that e.g. dir/app.c exists only in dir/. I would like such files to be added to the resulting diff file, so that it can be applied as a patch.
I checked 'man diff' but found no clues. I'd appreciate any helpful advice on this. Thanks.

Use the option -N. The man page says:
-N, --new-file
treat absent files as empty
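With -N added to the flags from the question, files that exist only in dir are diffed against an empty file, so the resulting patch will create them when applied. A minimal sketch (the patch file name is arbitrary, and dir.orig is assumed to be the pristine tree):
diff -bBupwrN dir.orig dir > new.patch
cd dir.orig && patch -p1 < ../new.patch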

Find out the path to the original file of a hard link

Now, I get the feeling that some people will think that there was no original file of a hard link, but I would strongly disagree because of the following experiment I did.
Let's create a file with the content pwd and make a hard link to a subfolder:
echo "pwd" > original
mkdir subfolder
cp -l original subfolder/hardlink
Now let's see what the files output if I run them with a shell:
sh original
sh subfolder/hardlink
The output is the same, even though the file hardlink is in a subfolder!
Sorry for the long intro, but I wanted to make sure that nobody says that my following question is irrelevant.
So my question now is: If the content of the original file was not conveniently pwd, how do I find out the path to the original file from a hard link file?
I know that Linux programs seem to know the path somehow, but not the filename, because some programs returned error messages saying that <path to original file>/hardlinkname was not found. But how do they do that?
Thanks in advance for an answer!
Edit: Btw, I fixed the error messages mentioned above by naming the hard links the same as the original file.
But how do they do that?
By looking for the same inode value. Here's one way you can list files with the same inode:
find /home -xdev -samefile original
Replace /home with any other directory you want find to start searching from.
how do I find out the path to the original file from a hard link file?
For hard links there are no multiple files, just one file (inode) with multiple (file) names.
ADDENDUM:
is there no other way to find the hard links of an inode than searching through folders?
ln, ls, find, and stat are the common ways of discovering and querying the filesystem for inodes. Then, depending on what you want to accomplish next, many file, directory, archiving, and searching commands recognize inode values. Some may require a special -inum or --follow or equivalent option to specify inodes.
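For instance, stat can report a file's inode number and hard-link count directly. A quick sketch using GNU stat's format option (the format string here is just one possibility):
stat -c 'inode=%i links=%h name=%n' original subfolder/hardlink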
The find example I gave above is just one such usage. Another is to combine with xargs to operate on all the found files. Here's one way to delete them all:
find /home -xdev -samefile original | xargs rm
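If the file names may contain spaces or other special characters, a null-delimited variant is safer (this assumes GNU find and xargs):
find /home -xdev -samefile original -print0 | xargs -0 rm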
Look under --help for other standard OS commands. Most Linux distributions also come with help files that explain inodes and which tools work with inodes.
pwd prints the present working directory, so of course the output is the same, since you didn't cd into your subfolder before running it.
Sorry to say, but there is no "original" file if you create other hardlinks. If you want to get other hardlinks of a file, look at How to find all hard links to a given file? for example.
Agree with @Emacs User. Your example of pwd is irrelevant and has confused you.
There is no concept of an original file for hard links. Each file name is just a reference to the on-disk content identified by the inode, which keeps a count of those references (see 'ls -li original subfolder/hardlink'). So even if you delete the original file, the hardlink still points to the same content.
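A quick illustration (the numbers below are made up; the point is the identical inode in the first column and the link count of 2):
$ ls -li original subfolder/hardlink
1312496 -rw-r--r-- 2 user user 4 Jan 1 12:00 original
1312496 -rw-r--r-- 2 user user 4 Jan 1 12:00 subfolder/hardlink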
It is impossible to find out, as all hard links are treated the same way, each pointing to the one inode.

How to exclude multiple directories with Exuberant ctags?

I have looked at and tried to use Exuberant Ctags with no luck for what I want to do. I am on a Mac, trying to work in a project where I want to exclude directories such as .git, node_modules, test, etc. When I try something like ctags -R --exclude=[.git, node_modules, test] I get nothing in return. I really only need to have it run in my core directory. Any ideas on how to accomplish this?
The --exclude option does not expect a list of files. According to ctags's man page, "This option may be specified as many times as desired." So, it's like this:
ctags -R --exclude=.git --exclude=node_modules --exclude=test
Read The Fantastic Manual should always be the first step of any attempt to solve a problem.
From $ man ctags:
--exclude=[pattern]
Add pattern to a list of excluded files and directories. This option may be specified as many times as desired. For each file name considered by both the complete path (e.g. some/path/base.ext) and the base name (e.g. base.ext) of the file, thus allowing patterns which match a given file name irrespective of its path, or match only a specific path. If appropriate support is available from the runtime library of your C compiler, then pattern may contain the usual shell wildcards (not regular expressions) common on Unix (be sure to quote the option parameter to protect the wildcards from being expanded by the shell before being passed to ctags; also be aware that wildcards can match the slash character, '/'). You can determine if shell wildcards are available on your platform by examining the output of the --version option, which will include "+wildcards" in the compiled feature list; otherwise, pattern is matched against file names using a simple textual comparison.
If pattern begins with the character '#', then the rest of the string is interpreted as a file name from which to read exclusion patterns, one per line. If pattern is empty, the list of excluded patterns is cleared.
Note that at program startup, the default exclude list contains "EIFGEN", "SCCS", "RCS", and "CVS", which are names of directories for which it is generally not desirable to descend while processing the --recurse option.
From the first two sentences you get:
$ ctags -R --exclude=dir1 --exclude=dir2 --exclude=dir3 .
which may be a bit verbose but that's what aliases and mappings and so on are for. As an alternative, you get this from the second paragraph:
$ ctags -R --exclude=#.ctagsignore .
with the following in .ctagsignore:
dir1
dir2
dir3
which works out to excluding those 3 directories without as much typing.
You can encapsulate a comma-separated list with curly braces to handle multiple exclusions with one --exclude option; the shell's brace expansion turns it into several --exclude arguments before ctags sees them:
ctags -R --exclude={folder1,folder2,folder3}
This appears to only work for folders in the root of where you're issuing the command. Excluding nested folders requires a separate --exclude option.
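For example (the directory names here are hypothetical):
ctags -R --exclude=.git --exclude=src/vendor .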
The other answers were straight to the point, and I thought a little example may help:
You should add a Unix-style asterisk wildcard to exclude the whole contents of each directory:
ctags -R --exclude={.git/*,.env/*,.idea/*} ./
A bit late, but following on romainl's response, you could use your .gitignore file as a basis; you only need to remove any leading slashes from the file, like so:
sed 's/^\///' .gitignore > .ctagsignore
ctags -R --exclude=#.ctagsignore
I really only need to have it run in my core directory.
Simply remove the -R (recursion) flag!
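For example, assuming your core sources sit in a single directory (the path is hypothetical), without -R ctags only processes the files you name:
cd core && ctags *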

Adding a path to the end of gcc search path

I can see that adding a path to the gcc search path can be done by using the -I flag. However, when using -v I can see that this path is searched first.
Is there any way I can have the search path I added searched at the very end?
The -idirafter option allows you to specify an include directory for consideration only after all regular -I directories and the standard system directories. This is documented here:
https://gcc.gnu.org/onlinedocs/cpp/Invocation.html#Invocation
-idirafter dir
Search dir for header files, but do it after all directories specified with -I and the standard system directories have been exhausted. dir is treated as a system include directory. If dir begins with =, then the = will be replaced by the sysroot prefix; see --sysroot and -isysroot.
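For example (the directory names are hypothetical), the following searches ./fallback/include only after ./include and the standard system directories; you can confirm the order in the "#include <...> search starts here:" list printed by -v:
gcc -v -I./include -idirafter ./fallback/include -c main.c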
There is an explanation here on SO: Manipulating the search path for include files, and also here, which may help you.
All three methods from above are mentioned in the linked SO post.
Use the -idirafter option to add a directory to the end of the include search path.

Linux command file for symbolic links

When I try to check the type of an executable file, I always get this:
symbolic link to <another_file>
Then I have to dig further and run file <another_file>. Sometimes this can take 5 or 6 rounds. I'm wondering whether there is a way to make file recursively follow the links to the original file and tell me its type.
Specifying -L to file should make it follow symlinks. From the man page:
-L, --dereference
This option causes symlinks to be followed, as the like-named option in ls(1) (on systems that support symbolic links). This is the default if the environment variable POSIXLY_CORRECT is defined.
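For example, on a typical system where /usr/bin/cc is a chain of symlinks, this prints the type of the final target instead of "symbolic link to ...":
file -L /usr/bin/cc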
use "file -L" to follow symbolic links

How can I open for edit 10k perforce files using list of files as parameter?

I have 10k Perforce files mentioned in my file.txt.
I need to open them using the p4 edit command.
I expect some command like "p4 edit ?????file.txt". Can you help me check these files out?
You can use the -x flag on p4. This is assuming a UNIX shell.
cat file.txt | p4 -x - edit
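The same thing without the cat, letting the shell redirect file.txt into p4's standard input:
p4 -x - edit < file.txt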
I assume you have some copy of the directory structure with your changes, and now you need to add those files to a changelist, which is impossible to do without checking them out. Am I right?
If I needed to change that many files, I would do it like this:
Copy all the files you want to check in, replacing the read-only files (Windows Explorer can do that).
In P4V, go to the directory you need to check out files in, and then call "Reconcile offline work".
In the dialog that appears, choose all files.
You get a new changelist with the changed files checked out.
I used this solution a couple of times - it works for added, changed and deleted files.
Just use the command below to edit all files listed in file.txt:
p4 -x file.txt edit
