On Linux, I need to know which files were added/modified/moved/deleted after compiling and installing an application from source code, i.e. the command-line Linux equivalent to the venerable InCtrl5.
Is there a utility that does this, or a set of commands that I could run and would show me the changes?
Thank you.
Edit: The following commands are sort of OK, but I don't need to know the line numbers on which changes occurred or that "./.." were updated:
# ls -aR /tmp > b4.txt
# touch /tmp/test.txt
# ls -aR /tmp > after.txt
# diff -u b4.txt after.txt
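For instance, a variant of the same idea that lists only the names that appeared or disappeared, without diff's hunk line numbers, could compare sorted snapshots with comm (just a sketch):
# find /tmp | sort > b4.txt
# install the application here
# find /tmp | sort > after.txt
# comm -3 b4.txt after.txt    # show only names unique to one snapshot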
If you only need to know which files were touched, then you can use find for this:
touch /tmp/MARK
# install application here
find / -newercm /tmp/MARK
This will show you all files whose contents or metadata have changed since you touched /tmp/MARK (including newly added files).
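If the scan of / picks up too much noise from virtual or temporary filesystems, a pruned variant (assuming GNU find; the excluded paths are only examples) might look like:
touch /tmp/MARK
# install application here
find / \( -path /proc -o -path /sys -o -path /run \) -prune -o -newercm /tmp/MARK -print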
I would personally use something like Mercurial (version control) to do this.
The main reason is that it is not only effective but also clean, since it only adds a hidden directory at the top of the tree where you want to check for changes.
Let's say that you need to know what files changed in /etc/. So before installation (you need to have Mercurial installed) you add the directory to Mercurial:
cd /etc
hg init
hg add
hg ci -m "adding all files in /etc/ to track them down"
The above will effectively "add" all the files to track them. To verify nothing has changed:
hg st
Should return no files.
If you (or the installation) modify a file, you should see something like this:
hg st
M foo.sh
The "M" before the file states the given file was modified.
For new files you would see a ? before the file like:
? bar.sh
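To see the actual content of a change, or to start tracking a newly created file, the usual Mercurial commands apply (foo.sh and bar.sh are just the example names from above):
hg diff foo.sh    # show the line-level changes made to foo.sh
hg add bar.sh     # start tracking the newly created bar.sh
hg ci -m "snapshot after installation"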
After you are done and no longer want Mercurial, simply remove the hidden directory:
cd /etc
rm -rf .hg
Related
I have a terminal open from one of my projects. Let's say at that time I was in /Users/test_user/Desktop/Project/test_project.
Now in that same terminal, I changed the directory to home (~). Now I want to go back to the directory from which I opened the terminal initially. Is there a way for me to do that?
You can use cd - to go back to your previous directory, which is stored in the OLDPWD environment variable.
If you want to go to another directory (not the one you were previously in), there are multiple ways to do it. One way is to look at the directory stack with the command dirs -v. Sample output will look like this:
0 ~/workspace/stack/c
1 ~/Downloads
2 ~
Then you can use cd ~$INDEX, where INDEX is the number shown in the dirs -v output. So cd ~1 will cd into ~/Downloads in the above example.
Other interesting commands to take a look at are pushd and popd.
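A minimal illustration of how they pair up (the directory name is just an example):
pushd ~/Downloads   # remember the current directory on the stack, then cd to ~/Downloads
# ... do some work there ...
popd                # pop the stack and return to where you were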
I wanted to "clean" my git repo before pushing by removing every JPG file, so I entered:
find . | xargs rm *.png
in the git root and now everything is deleted. My *.py files are also deleted, but I do not know why. It is a Linux (Ubuntu) machine. Is there any chance to recover my files, maybe through my OS?
The command you typed is plain wrong:
find .
This command outputs the name of every file and directory below ., including hidden files.
xargs
This command takes its input and runs the command given as its argument, appending the lines it reads as extra arguments. Therefore, it effectively runs rm *.png <file1_from_find> <file2_from_find> ..., passing every name that find produced to rm.
There is no safeguard such as stopping on errors, so if you let the command run to completion, it unlinked all the files and you now have an empty directory tree. And git will not help you, because it works by storing its metadata and current state within a .git directory at the root of the working directory, in which you just deleted all the files.
So no, unless you made a copy, either manually or by pushing your state to some other place, it's probably gone, but see below. For future reference, here is the correct command to destroy all files ending in .png:
find . -name '*.png' -delete
Optionally add -type f before -delete if you might have directories ending in .png, to filter them out.
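A safer habit is to preview the matches before deleting anything, for example:
find . -type f -name '*.png'            # dry run: list what would be removed
find . -type f -name '*.png' -delete    # same expression, now actually deleting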
Okay, what now: it happens that git marks some of its internal state as read-only, which rm honors if you didn't use rm -f, so you might be able to recover some data from that. Go to the .git directory at your working directory's root. It will contain an objects directory, and some files may have survived there.
Those files are raw zlib-compressed streams; you can see their content using this command:
zlib-flate -uncompress <path_to_the_file
(the zlib-flate command comes from the qpdf package on Ubuntu)
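To go through whatever survived in one pass, a small loop along these lines (the 200-byte preview is arbitrary) could dump the start of every loose object:
cd .git/objects
for f in $(find . -type f -not -path './pack/*'); do
    echo "== $f =="
    zlib-flate -uncompress < "$f" | head -c 200   # object header plus start of content
    echo
done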
For me, the following worked:
$ git checkout -- name_of_deleted_directory
or
$ git checkout -- path_to_local_deleted_directory
In my case, I deleted a directory by mistake and I didn't change any code in that directory but I had changed code in other directories of my git repo.
What is the difference between running: svn update DIR and running svn update with DIR as cwd? (DIR is my checkout's root).
Intuitively, I'd expect the two to do the same thing, but I noticed that when running the former (when cwd is outside the local checkout), sometimes not all updates are fetched. But then running the latter fetches what it needs to.
(running on linux)
EDIT: to all the skeptics, here's a session I've just had:
$ svn up DIR/
Password for 'xxx': ...
Skipped 'DIR'
$ cd DIR/
$ svn up
Password for 'xxx': ...
U aaa
U bbb
...
U .
Updated to revision 8965.
$
The following is pure wild speculation. I know virtually nothing about svn and nothing about its internals.
That being said, I would guess that from outside the checkout, svn looks in the current directory for configuration information, doesn't find any, and then does the minimum necessary to update the given directory (by reading its specific configuration information). From inside the checkout, svn operates in a more recursive/project-aware mode because the local directory contains the configuration it needs.
Examining the operational differences between the two runs with something like strace might provide some clues.
Assuming there is a difference after all and what you are seeing isn't merely later changes getting pulled in by a second update (with an active project for example).
There is no difference. svn update without a target specified simply uses . as the target.
Based on your updated question, there are two ways you can get the "Skipped 'DIR'" message:
You had a path in conflict (either the target or one of its parents) and you would have had to resolve it between the two commands, which seems unlikely given your example.
You had typoed the path in the svn up command and have the cdspell option turned on in your shell.
Take this for example:
$ ls -d trunk
trunk/
$ svn up truunk/
Skipped 'truunk'
$ cd truunk/
trunk/
$ svn up
At revision 1540579.
If you have a simple reproduction method I'd be interested in it.
I am experimenting with some Linux configuration and I want to track my changes. Of course I don't want to put my whole OS under version control.
Is there a way (with git, Mercurial or any VCS) to track the changes without storing the whole OS?
This is what I imagine:
I do a kind of git init -> all hashes of all files are stored, but not the content of the files
I make some changes to my file system -> git detects that the hash of this file has changed
I commit -> the content of the file is stored (or even better the original file and the diff are stored! I know, that is impossible... )
Possible? Impossible? Work-arounds?
EDIT: What I care about is just minimizing the size of the repository and having a repository that contains only my changes. Having all files in my repository is not relevant for me. For example, if I push to GitHub I just want it to contain only the files that have changed.
Take a look at etckeeper, it will probably do the job.
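A typical first run on a Debian/Ubuntu system would be something along these lines (a sketch, not a full walkthrough):
sudo apt-get install etckeeper
sudo etckeeper init                                      # put /etc under version control (git by default)
sudo etckeeper commit "baseline before experimenting"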
What you want is git update-index --info-only or ... --index-info, from the man page: " --info-only is used to register files without placing them in the object database. This is useful for status-only repositories.". --index-info is its industrial-scale cousin.
Do that with the files you want to track, write-tree to write the index structure into the object db, commit-tree that, and update-ref to update a branch.
To get the object name, use git hash-object filename.
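A rough sketch of that plumbing sequence, assuming a file path such as etc/fstab inside the repository (the path and branch name are just placeholders):
git init
git update-index --add --info-only -- etc/fstab            # record the hash, not the content
tree=$(git write-tree --missing-ok)                         # the blobs are deliberately absent
commit=$(git commit-tree -m "track hashes only" "$tree")
git update-ref refs/heads/master "$commit"
git hash-object etc/fstab                                   # print the object name of a single file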
Here is what we do...
su -
cd /etc
echo "*.cache" > .gitignore
git init
chmod 700 .git
cd /etc; git add . && git add -u && git commit -m "Daily Commit"
Then setup crontab:
su -
crontab -e
# Put the following in:
0 3 * * * cd /etc; git add . && git add -u && git commit -m "Daily Commit"
Now you will have a nightly commit of all changes in /etc
If you want to track more than /etc in one repo, then you could simply do it at the root of your filesystem, except adding the proper ignore paths to your /.gitignore. I am unclear on the effects of having git within git, so you might want to be extra careful in that case.
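For the whole-filesystem variant, the /.gitignore would need to exclude the volatile and pseudo filesystems; a purely illustrative starting point might be:
/proc/
/sys/
/dev/
/run/
/tmp/
/var/cache/
/var/tmp/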
I know this question is old, but I thought this might help someone. Inspired by @Jonathon's comment on the "How to record concrete modification of specific files" question, I have created a shell script that enables you to monitor all the changes made to a specific file, while keeping the full change history. The script depends on the inotifywait and git packages being installed.
You can find the script here
https://github.com/hisham-hassan/linux-file-monitor
Usage: file-monitor.sh [-f|--file] <absolute-file-path> [-m|--monitor|-h|--history]
file-monitor.sh --help
-f, --file <absolute-file-path> Adds a file to the monitored files list. The <absolute-file-path>
is the absolute path of the file we need to act on.
PLEASE NOTE: A relative file path could cause issues in the script,
please make sure to use the absolute path of the file. Also try to
avoid symlinks, as they have not been tested.
example: file-monitor.sh -f /absolute/path/to/file/test.txt -m
-m, --monitor Monitors all the changes to the file. The monitoring will keep
happening as long as the script is running; you may need to run it
in the background.
example: file-monitor.sh -f /absolute/path/to/file/test.txt -m
-h, --history Shows the full history of the file.
To exit, press "q"
example: file-monitor.sh -f /absolute/path/to/file/test.txt -h
--uninstall Uninstalls the script from the bin directory,
and removes the monitoring history.
--install Adds the script to the bin directory, and creates
the directories and files needed for monitoring.
--help Prints this help message.
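The underlying idea can be approximated in a few lines of inotifywait plus git; this is only a rough sketch of the approach, not the linked script (the watched path and repo location are placeholders):
#!/bin/sh
FILE=/absolute/path/to/file/test.txt      # the file to watch (placeholder)
REPO="$HOME/.file-monitor-repo"           # where the history is kept (placeholder)

mkdir -p "$REPO" && git -C "$REPO" init -q
while inotifywait -qq -e close_write "$FILE"; do
    cp "$FILE" "$REPO/$(basename "$FILE")"   # snapshot the current version
    git -C "$REPO" add -A
    git -C "$REPO" commit -qm "change at $(date)"
done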
I want to mirror a folder via FTP, like this:
wget --mirror --user=x --password=x ftp://ftp.site.com/folder/subfolder/evendeeper
But I do not want to create a directory structure like this:
ftp.site.com -> folder -> subfolder -> evendeeper
I just want:
evendeeper
And anything below it to be the resulting structure. It would also be acceptable for the contents of evendeeper to wind up in the current directory as long as subdirectories are created for subdirectories of evendeeper on the server.
I am aware of the -np option; according to the documentation it just keeps wget from following links to parent pages (a non-issue for the binary files I'm mirroring via FTP). I am also aware of the -nd option, but this prevents creating any directory structure at all, even for subdirectories of evendeeper.
I would consider alternatives as long as they are command-line-based, readily available as Ubuntu packages and easily automated like wget.
For a path like: ftp.site.com/a/b/c/d
-nH would download all files to the directory a/b/c/d in the current directory, and -nH --cut-dirs=3 would download all files to the directory d in the current directory.
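Applied to the original example, where the remote path is folder/subfolder/evendeeper, that should come out to something like:
wget --mirror -nH --cut-dirs=2 --user=x --password=x ftp://ftp.site.com/folder/subfolder/evendeeper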
I had a similar requirement and the following combination seems to be the perfect choice:
In the below example, all the files in http://url/dir1/dir2 (alone) are downloaded to local directory /dest/dir
wget -nd -np -P /dest/dir --recursive http://url/dir1/dir2
Thanks @ffledgling for the hint on "-nd".
For the above example:
wget -nd -np --mirror --user=x --password=x ftp://ftp.site.com/folder/subfolder/evendeeper
Snippets from manual:
-nd
--no-directories
Do not create a hierarchy of directories when retrieving recursively. With this option turned on, all files will get saved to the current directory, without clobbering (if a name shows up more than once, the filenames will get extensions .n).
-np
--no-parent
Do not ever ascend to the parent directory when retrieving recursively. This is a useful option, since it guarantees that only the files below a certain hierarchy will be downloaded.
The -np (no parent) option will probably do what you want, tied in with -L 1 (I think; I don't have a wget install in front of me), which limits the recursion to one level.
EDIT: OK, gah... maybe I should wait until I've had coffee. There is a --cut-dirs (or similar) option, which allows you to "cut" a specified number of directories from the output path, so for /a/b/c/d, a cut of 2 would force wget to create c/d on your local machine.
Instead of using:
-nH --cut-dirs=1
use:
-nH --cut-dirs=100
This will cut more directories and no folders will be created.
Note: 100 = the number of folders to skip creating.
You can change 100 to any number.