Not symlinking one or two files inside a symlinked directory - Linux

Is there a way not to symlink one or two files within a symlinked directory in CentOS?
I've got the entire directory symlinked, but there are two CSS files for which I would like the website to use the current local copy.

In short: no.
Another way to do this would be to symlink all the files in that directory individually, except those you want a local copy of.
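For example, something along these lines should work; the paths and CSS file names here are just placeholders for your own:
mkdir /var/www/site
ln -s /srv/shared/* /var/www/site/                               # symlink every file in the shared directory
rm /var/www/site/style.css /var/www/site/theme.css               # drop the symlinks for the two CSS files
cp /srv/shared/style.css /srv/shared/theme.css /var/www/site/    # keep real local copies of those two instead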
Still another way to go might be using unionfs or aufs to union-mount the original directory and a directory containing the files you need to keep local, with the directory containing the local files being "on top".
Say, your original directory is orig, the directory with files that should be local is local, the union directory is union, and you want files from both directories to be writable. Then you can union-mount them like this:
unionfs-fuse local=RW:orig=RW union
And unmount like this:
fusermount -u union
See unionfs manpage (unionfs-fuse(8) at least on Debian) for details.

Related

Copying multiple files knowing their directories

I have the paths of multiple files and I need to copy them into a specific folder using the terminal. How do I do it? I also have access to the GUI of the system, as all this is being done in a virtual machine over SSH.
According to its manpage, cp is capable of copying multiple source files to one output directory.
The syntax is as follows:
cp /dir1/file1 /dir1/file2 /dir2/file1_2 /outputdir/
Using this command, you can copy files from multiple directories (/dir1/ and /dir2/ in this example) to one output directory (/outputdir/).

Will copying files recursively from one directory to another cause changes in one directory to be reflected in the other directory's files as well?

If I copy files from one directory to another directory:
Will their inode numbers also change?
Will changes to a file in one directory be reflected in the same file in the other directory?
This is when I use a command like:
cp -r dir1/ dir2/
With a simple copy the file system handles the copied files as newly created ones, and therefore assigns new inodes to them.
Any change made to the originals won't affect the copies. Changes propagate only when you create symbolic or hard links between files.
You can check the inodes of your files with "ls -i filename".
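For example (dir1, dir2 and the file name are just placeholders, and dir2 is assumed not to exist yet):
ls -i dir1/file.txt      # note the inode number of the original
cp -r dir1/ dir2/        # dir2 is created as an independent copy of dir1
ls -i dir2/file.txt      # a different inode: this is a new, unrelated file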

How does the 'mv' command work?

I used the command mv to move files from directory /a/b to directory /v/c. I wanted the whole 'b' directory to be moved to the path /v/c.
While running this command, mv /a/b /v/c, I interrupted it in the middle; the source had a large amount of data. Later I deleted directory 'c', since I thought it had partial files.
Now my question is: will the directory 'b' contain all the original files, along with the files that were already moved to the path /v/c? Or did I lose files by deleting the directory 'c'?
mv across filesystems will:
create the destination directory
for each file: copy and remove original
remove the original directory
Thus, if you interrupt it, some of the files will have been moved but not all. A mv of a directory within the same filesystem is atomic as it's just re-linking the directory's inode to a new location.
At one time, mv could only do the latter.
I believe it depends on whether the source and destination directories were on the same file system or on different file systems. If they were on the same file system, then a "move" just changes the path information for each file. But if they're on different file systems, the "move" command will copy one file at a time and subsequently delete it at the source.
So, in your scenario, if the source and destination were on separate file systems, then yes, you lost files by interrupting mv and then deleting "c".
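You can verify the same-filesystem case yourself: a move within one filesystem is just a rename, so the directory keeps its inode. A quick sketch, with made-up paths and assuming /v/c does not already exist:
ls -id /a/b      # note the directory's inode number
mv /a/b /v/c     # same filesystem: effectively an instantaneous rename
ls -id /v/c      # same inode as before; nothing was copied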

Linux recover deleted directory

I had a directory named foo on my Linux server: home/foo.
I also had a file named foo.tgz in the home directory.
I extracted foo.tgz from the home directory, and the tar file had a directory named foo in it, so the directory home/foo got overwritten. Can I recover the old home/foo directory?
No, you cannot. It wasn't overwritten, though; the contents of the two directories were merged, and only files with the same names as entries in the archive were replaced.
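A small demonstration of that merge behaviour; every path here is made up:
mkdir -p demo/foo && touch demo/foo/old.txt             # an existing directory with one file
mkdir -p /tmp/src/foo && touch /tmp/src/foo/new.txt     # content for the archive
tar -czf demo/foo.tgz -C /tmp/src foo                   # the archive contains foo/new.txt
tar -xzf demo/foo.tgz -C demo                           # extract on top of the existing foo
ls demo/foo                                             # old.txt new.txt: merged, not replaced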

Wget - output directory prefix

Currently I'm trying to use:
wget --user=xxx --password=xxx -r ftp://www.domain.com/htdocs/
But this saves the output files under the current directory in this fashion:
curdir/www.domain.com/htdocs/*
I need it to be:
curdir/*
Is there a way to do this? I only see a way to use an output prefix, but I think that will just allow me to define a directory outside the current dir.
You can combine your --directory-prefix option with --no-directories, if you want all the files inside one directory, or with --no-host-directories, to keep subdirectories but drop the per-host directory.
2.6 Directory Options
‘-nd’
‘--no-directories’
Do not create a hierarchy of directories when retrieving recursively. With this option turned on, all files will get saved to the current directory, without clobbering (if a name shows up more than once, the filenames will get extensions ‘.n’).
‘-nH’
‘--no-host-directories’
Disable generation of host-prefixed directories. By default, invoking Wget with ‘-r http://fly.srk.fer.hr/’ will create a structure of directories beginning with fly.srk.fer.hr/. This option disables such behavior.
‘-P prefix’
‘--directory-prefix=prefix’
Set directory prefix to prefix. The directory prefix is the directory where all other files and subdirectories will be saved to, i.e. the top of the retrieval tree. The default is ‘.’ (the current directory).
(From the wget manual.)
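For example, keeping the placeholder credentials and URL from the question, something like this should drop every file straight into the current directory:
wget --user=xxx --password=xxx -r -nd ftp://www.domain.com/htdocs/
or, to keep the subdirectory structure but skip the www.domain.com host directory:
wget --user=xxx --password=xxx -r -nH ftp://www.domain.com/htdocs/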
