I was wondering if I can do a tail on screen session files,
so I went into /var/run/screen/S-Username.
This is what I found in that directory (using ll -a):
XXXX@ubuntu:/var/run/screen/S-XXXX $ ll -a
total 0
drwx------ 2 XXXX XXXX 60 XXXX 5 09:42 ./
drwxrwxr-x 3 root utmp 60 XXXX 5 09:42 ../
prwx------ 1 XXXX XXXX 0 XXXX 5 09:42 3031.pts-1.ubuntu
I’ve tried googling for “Linux file permissions”,
and no one seems to mention the p flag. Can anyone
tell me what the p permission flag is?
P.S.: Also, it seems that I can't do cat or tail on that file either.
It's not a permission. The p means that it's a named pipe, not a regular file.
p stands for FIFO, a named pipe. So it's not a permission, but a file type (just like d for directory).
You can't use cat or tail to get its content, because a FIFO doesn't store content the way a regular file does; it's used for inter-process communication.
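If you want to see FIFO behavior for yourself, here is a minimal sketch (myfifo is an arbitrary name); a reader only receives whatever a writer feeds through the pipe at that moment:
mkfifo myfifo          # create a named pipe; ls -l will show it with type 'p'
echo hello > myfifo &  # the writer blocks until a reader opens the pipe
cat myfifo             # prints "hello" as it flows through; nothing is stored
rm myfifo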
We have complaints "from the field" (i.e. from sysadmins installing software) that Cygwin "messes up" Windows permissions on NTFS (Windows 7/10/2008/2012, etc.).
Problem Use Case
The general usecase is this:
Sysadmin launches some 'software installer' from the Cygwin bash command line
Installer runs fine
Sysadmin tries to start windows services
Result:
Service fails to start
Workaround Steps
These steps seem to get past the problem:
Sysadmin resets NTFS permissions with the Windows ICACLS command (in this example, "acme" is the newly created directory; the command sets acme and its children to re-inherit permissions from the folder "d:\instances"):
d:\instances> icacls acme /RESET /T /C /Q
Sysadmin starts service
Result:
Windows service starts
Question
What makes Cygwin handle permissions for newly written files differently than PowerShell does? Is it a matter of a wrong umask setting?
Can the sysadmin take steps in advance to ensure Cygwin sets up permissions properly?
thanks in advance
I found the answer here; it refers to this mailing-list message.
You need to edit Cygwin's /etc/fstab and add "noacl" to the list of mount-options.
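For reference, the resulting line (quoted in full in the posts below) looks like this:
none /cygdrive cygdrive binary,noacl,posix=0,user 0 0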
To add to ulathek's answer, here is a copy-paste of the content behind the two URLs:
First:
How to fix incorrect Cygwin permission in Windows 7
Cygwin started to behave quite strangely after recent updates. I was not able to edit files in vim, because it complained that the files were read-only. Even cp -r didn't work correctly. The permissions of a new directory were broken and I was not able to remove it. Pretty weird behavior.
E.g. ls -l
total 2
----------+ 1 georgik None 34 Jul 14 18:09 index.jade
----------+ 1 georgik None 109 Jul 14 17:40 layout.jade
Hm. It is clear that something is wrong with the permissions. Even the owner has no permissions on those files.
Output of mount command:
C: on /cygdrive/c type ntfs (binary,posix=0,user,noumount,auto)
I found a solution at the Cygwin forum. It's quite easy to fix.
Open /etc/fstab and enter the following line:
none /cygdrive cygdrive binary,noacl,posix=0,user 0 0
Save it. Close all Cygwin terminals and start a new terminal.
Output of mount:
C: on /cygdrive/c type ntfs (binary,noacl,posix=0,user,noumount,auto)
Output of ls -l:
total 2
-rw-r--r-- 1 georgik None 34 Jul 14 18:09 index.jade
-rw-r--r-- 1 georgik None 109 Jul 14 17:40 layout.jade
Second:
7/14/2010 10:57 AM
>> Drive Y is a mapping to a network location. Interestingly, ls -l
>> /cygdrive returns:
>> d---------+ 1 ???????? ???????? 24576 2010-07-09 11:18 c
>> drwx------+ 1 Administrators Domain Users 0 2010-07-14 06:58 y
>>
>> The c folder looks weird, the y folder looks correct.
>>
> Try ls -ln /cygdrive. The user and group ownerships on the root of the
> C: drive are most likely not found in your passwd and group files. The
> -n option for ls will print the user and group IDs rather than try to
> look up their names. Unfortunately, I can't think of any way offhand to
> generate the passwd and group entries given only user and group IDs.
> Maybe someone else can comment on that.
>
I think your answer is correct:
$ ls -ln /cygdrive
total 24
d---------+ 1 4294967295 4294967295 24576 2010-07-09 11:18 c
drwx------+ 1 544 10513 0 2010-07-14 11:45 y
I edited my /etc/fstab file (it contained only commented lines) and
added this line at the end of the file:
none /cygdrive cygdrive binary,noacl,posix=0,user 0 0
I closed all my Cygwin processes, opened a new terminal and did an ls -l
on visitor.cpp again:
-rw-r--r-- 1 cory Domain Users 3236 2010-07-11 22:37 visitor.cpp
Success!!! The permissions are now reported as 644 rather than 000 and I
can edit the file with Cygwin vim and not have bogus read-only issues.
Thank you Jeremy.
cory
How can I run a shell command on several files in Linux/Mac while keeping the same name (excluding the extension)?
e.g. let's assume that I want to compile a list of files, using a command, into some other files with the same name:
{command} [name].less [same-name].css
EDIT: Supposing, more generally, that the two targets are located in two different paths, say "path/to/folder2" and "path/to/folder3", and keeping in mind you can always specify the list used in the for loop, you can try:
for i in $(ls path/to/folder3 | grep '\.less$'); do . /path/to/folder1/script.sh "path/to/folder3/$i" "$(echo "path/to/folder2/$i" | sed -e 's/\.less$/.css/')"; done
Still sorry for the brutality and the perhaps inelegant solution.
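A tidier sketch of the same idea, using a glob and basename instead of parsing ls (script.sh and the folder paths are the same placeholders as above):
for f in path/to/folder3/*.less; do
    name=$(basename "$f" .less)   # strip the directory and the .less suffix
    . /path/to/folder1/script.sh "$f" "path/to/folder2/$name.css"
done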
You can do something like this:
ls sameName.*
or simply
ls same* > list_of_filenames_starting_with_SAME.txt
IMHO, the most concise, performant and intuitive solution is to use GNU Parallel. Your command becomes:
parallel command {} {.}.css ::: *.less
So, for example, let's say your "command" is ls -l, and you have these files in your directory:
Freddy Frog.css
Freddy Frog.less
a.css
a.less
then your command would be
parallel ls -l {} {.}.css ::: *.less
-rw-r--r-- 1 mark staff 0 7 Aug 08:09 Freddy Frog.css
-rw-r--r-- 1 mark staff 0 7 Aug 08:09 Freddy Frog.less
-rw-r--r-- 1 mark staff 0 7 Aug 08:09 a.css
-rw-r--r-- 1 mark staff 0 7 Aug 08:09 a.less
The benefits are, firstly, that it is a nice, concise syntax and a one-liner. Secondly, it'll run commands in parallel using as many cores as your CPU(s) have, so it will be faster. If you do that, you may want the -k option to keep the outputs from the different commands in order.
If you need it to run across many folders in a hierarchy, you can pipe the filenames in like this:
find <someplace> -name \*.less | parallel <command> {} {.}.css
To understand these last two points (piping in and order), look at this example:
seq 1 10 | parallel echo
6
7
8
5
4
9
3
2
1
10
And now with -k to keep the order:
seq 1 10 | parallel -k echo
1
2
3
4
5
6
7
8
9
10
If, for some reason, you want to run the jobs sequentially one after the other, just add the switch -j 1 to the parallel command to set the number of parallel jobs to 1.
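For example, with the hypothetical ls -l command from earlier:
parallel -j 1 ls -l {} {.}.css ::: *.less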
Try this out on your Linux machine, as GNU Parallel is generally installed there. On the Mac under OS X, the easiest way to install GNU Parallel is with Homebrew; please ask before trying if you are not familiar with it.
Early in a script, I see this:
exec 3>&2
And later:
{ $app $conf_file &>$app_log_file & } 1>&3 2>&1
My understanding of this looks something like this:
Create fd 3
Redirect fd 3 output to stderr
(Upon app execution) redirect stdout to fd 3, then redirect stderr to stdout
Isn't that some kind of circular madness? 3>stderr>stdout>3>etc?
I'm especially concerned as to the intention/implications of this line because I'd like to start running some apps using this script with valgrind. I'd like to see valgrind's output interspersed with the app's log statements, so I'm hoping that the default output of stderr is captured by the confusing line above. However, in some of the crashes that have led me to wanting to use valgrind, I've seen glibc errors outputted straight to the terminal, rather than captured in the app's log file.
So, the question(s): What does that execution line do, exactly? Does it capture stderr? If so, why do I see glibc output on the command line when an app crashes? If not, how should I change it to accomplish this goal?
You misread the 3>&2 syntax. It means open fd 3 and make it a duplicate of fd 2. See Duplicating File Descriptors.
In the same way 2>&1 does not mean make fd 2 point to the location of fd 1 it means re-open fd 2 as a duplicate of fd 1 (mostly the same net effect but different semantics).
Also remember that all redirections occur as they happen and that there are no "pointers" here. So 2>&1 1>/dev/null does not redirect standard error to /dev/null; it leaves standard error attached to wherever standard output had been attached (probably the terminal).
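You can verify that ordering with a quick sketch (run it in any interactive shell):
{ echo out; echo err >&2; } 2>&1 1>/dev/null
# prints "err": fd 2 was duplicated from fd 1 (the terminal) *before*
# fd 1 was re-pointed at /dev/null, so "out" is discarded while "err"
# still reaches the terminal through the old fd 1 target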
So the code in question does this:
Open fd 3 as a duplicate of fd 2
Re-open fd 1 as a duplicate of fd 3
Re-open fd 2 as a duplicate of fd 1
Effectively those lines send everything to standard error (or wherever fd 2 was attached when the initial exec line ran). If the redirections had been 2>&1 1>&3 then they would have swapped locations. I wonder if that was the original intention of that line since, as written, it is fairly pointless.
Not to mention that, with the redirection inside the brace list, the redirections on the outside of the brace list are fairly useless.
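As an aside, a minimal sketch of the swapped variant described above (o.txt and e.txt are arbitrary names, not part of the original script):
bash -c 'exec 3>&2; { echo out; echo err >&2; } 2>&1 1>&3' >o.txt 2>e.txt
cat o.txt   # contains "err": fd 2 was re-opened onto the old stdout first
cat e.txt   # contains "out": fd 1 was then re-opened onto the old stderr (via fd 3)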
Ok, well let's see what happens in practice:
peter@tesla:/tmp/test$ bash -c 'exec 3>&2; { sleep 60m &>logfile & } 1>&3 2>&1' > stdout 2>stderr
peter@tesla:/tmp/test$ psg sleep
peter 22147 0.0 0.0 7232 836 pts/14 S 15:51 0:00 sleep 60m
peter@tesla:/tmp/test$ ll /proc/22147/fd
total 0
lr-x------ 1 peter peter 64 Jul 8 15:51 0 -> /dev/null
l-wx------ 1 peter peter 64 Jul 8 15:51 1 -> /tmp/test/logfile
l-wx------ 1 peter peter 64 Jul 8 15:51 2 -> /tmp/test/logfile
l-wx------ 1 peter peter 64 Jul 8 15:51 3 -> /tmp/test/stderr
I'm not sure exactly why the author of your script ended up with that line of code. Presumably it made sense to them when they wrote it. The redirections outside the curly braces happen before the redirections inside, so they're both overridden by the &>logfile. Even errors from bash, like "command not found", would end up in the logfile.
You say you see glibc messages on your terminal when the app crashes. I think your app must be using fd 3 after it starts, i.e. it was written to be started from a script that opened fd 3 for it, or else it opens /dev/tty or something.
BTW, psg is a function I define in my .bashrc:
psg(){ ps aux | grep "${@:-$USER}" | grep -v grep; }
recently updated to:
psg(){ local pids=$(pgrep -f "${@:--u$USER}"); [[ $pids ]] && ps u -p $pids; }
psgw(){ local pids=$(pgrep -f "${@:--u$USER}"); [[ $pids ]] && ps uww -p $pids; }
You need a context first, as in @Peter Cordes's example. He provided the context by redirecting >stdout and 2>stderr first.
I have modified his example a bit.
$ bash -c 'exec 3>&2; { sleep 60m & } 1>&3 2>&1' >stdout 2>stderr
$ ps aux | grep sleep
logan 272163 0.0 0.0 8084 580 pts/2 S 19:22 0:00 sleep 60m
logan 272165 0.0 0.0 8912 712 pts/2 S+ 19:23 0:00 grep --color=auto sleep
$ ll /proc/272163/fd
total 0
dr-x------ 2 logan logan 0 Aug 25 19:23 ./
dr-xr-xr-x 9 logan logan 0 Aug 25 19:23 ../
lr-x------ 1 logan logan 64 Aug 25 19:23 0 -> /dev/null
l-wx------ 1 logan logan 64 Aug 25 19:23 1 -> /tmp/tmp.Vld71a451u/stderr
l-wx------ 1 logan logan 64 Aug 25 19:23 2 -> /tmp/tmp.Vld71a451u/stderr
l-wx------ 1 logan logan 64 Aug 25 19:23 3 -> /tmp/tmp.Vld71a451u/stderr
First, exec 3>&2 sets fd 3 to point to the stderr file. Then 1>&3 sets fd 1 to point to the stderr file as well. Lastly, 2>&1 sets fd 2 to point to the stderr file too! (Don't confuse the stderr stream on fd 2 with the file named stderr, which here is just an arbitrary file name.)
The reason fd 0 is set to /dev/null, I'm guessing, is that the command is run in a non-interactive shell.
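That guess is easy to check on Linux (a sketch; /proc is Linux-specific):
bash -c 'sleep 10 & readlink /proc/$!/fd/0'
# prints /dev/null: in a non-interactive shell (no job control),
# bash reopens a background job's stdin from /dev/null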
Sorry if this makes no sense, but I will try to give all the information needed!
I would like to use rsync to copy a range of sequentially numbered files from one folder to another.
I am archiving a DCDM (it's a film thing) and it contains on the order of 600,000 individually numbered, sequential .tif image files (~10 MB ea.).
I need to break this up to properly archive onto LTO6 tapes. And I would like to use rsync to prep the folders such that my simple bash .sh file can automate the various folders and files that I want to back up to tape.
The command I normally use when running rsync is:
sudo rsync -rvhW --progress --size-only <src> <dest>
I use sudo if needed, and I always test the outcome first with --dry-run
The only way I've got anything to work (without kicking out errors) is by using the * wildcard. However, this only handles files matching a set pattern (e.g. 01* will only move files in the range 010000-019999), and I would have to repeat it for 02, 03, 04, etc.
I've looked on the internet, and am struggling to find an answer that works.
This might not be possible, and with 600,000 .tif files, I can't write an exclude for each one!
Any thoughts as to how (if at all) this could be done?
Owen.
You can check for file names starting with a digit by using pattern matching:
for file in [0-9]*; do
  # do something with "$file", whose name starts with a digit
done
Or, you could enable the extglob option and loop over all file names that contain only digits. This could eliminate any potential unwanted files that start with a digit but contain non-digits after the first character.
shopt -s extglob
for file in +([0-9]); do
  # do something with "$file", whose name contains only digits
done
+([0-9]) expands to one or more occurrences of a digit.
Update:
Based on the file name pattern in your recent comment:
shopt -s extglob
for file in legendary_dcdm_3d+([0-9]).tif; do
# do something to $file
done
Globbing is the shell feature that expands a wildcard into a list of matching file names. You have already used it in your question.
For the following explanations, I will assume we are in a directory with the following files:
$ ls -l
-rw-r----- 1 5gon12eder staff 0 Sep 8 17:26 file.txt
-rw-r----- 1 5gon12eder staff 0 Sep 8 17:26 funny_cat.jpg
-rw-r----- 1 5gon12eder staff 0 Sep 8 17:26 report_2013-1.pdf
-rw-r----- 1 5gon12eder staff 0 Sep 8 17:26 report_2013-2.pdf
-rw-r----- 1 5gon12eder staff 0 Sep 8 17:26 report_2013-3.pdf
-rw-r----- 1 5gon12eder staff 0 Sep 8 17:26 report_2013-4.pdf
-rw-r----- 1 5gon12eder staff 0 Sep 8 17:26 report_2014-1.pdf
-rw-r----- 1 5gon12eder staff 0 Sep 8 17:26 report_2014-2.pdf
The simplest case is to match all files. The following makes for a poor man's ls.
$ echo *
file.txt funny_cat.jpg report_2013-1.pdf report_2013-2.pdf report_2013-3.pdf report_2013-4.pdf report_2014-1.pdf report_2014-2.pdf
If we want to match all reports from 2013, we can narrow the match:
$ echo report_2013-*.pdf
report_2013-1.pdf report_2013-2.pdf report_2013-3.pdf report_2013-4.pdf
We could, for example, have left out the .pdf part but I like to be as specific as possible.
You have already come up with a solution to use this for selecting a range of numbered files. For example, we can match reports by quarter:
$ for q in 1 2 3 4; do echo "$q. quarter: " report_*-$q.pdf; done
1. quarter: report_2013-1.pdf report_2014-1.pdf
2. quarter: report_2013-2.pdf report_2014-2.pdf
3. quarter: report_2013-3.pdf
4. quarter: report_2013-4.pdf
If we are too lazy to type 1 2 3 4, we could have used $(seq 4) instead. This invokes the program seq with argument 4 and substitutes its output (1 2 3 4 in this case).
Now back to your problem: If you want chunk sizes that are a power of 10, you should be able to extend the above example to fit your needs.
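For instance, here is a sketch under the assumption that the names are six digits long (e.g. 000123.tif) and live under src/; src/, dest/ and the prefix range are placeholders to adapt:
for p in $(seq -w 0 59); do
    # each two-digit prefix (00..59) selects a block of 10,000 files;
    # test with --dry-run first, as you already do
    rsync -rvhW --progress --size-only "src/${p}"* "dest/chunk_${p}/"
done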
Old question, I know, but someone may find this useful. The above examples for expanding a range also work with rsync. For example, to copy files starting with a, b and c, but not d and e, from dir /tmp/from_here to dir /tmp/to_here:
$ rsync -avv /tmp/from_here/[a-c]* /tmp/to_here
sending incremental file list
delta-transmission disabled for local transfer or --whole-file
alice/
bob/
cedric/
total: matches=0 hash_hits=0 false_alarms=0 data=0
sent 89 bytes received 24 bytes 226.00 bytes/sec
total size is 0 speedup is 0.00
If you are writing to LTO6 tapes, you should consider including "--inplace" in your command. --inplace is meant for writing to linear filesystems such as LTO.
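For example, the command from the previous answer would then become:
rsync -avv --inplace /tmp/from_here/[a-c]* /tmp/to_here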
I know this is really basic, but I cannot find this information
in the ls man page, and need a refresher:
$ ls -ld my.dir
drwxr-xr-x 1 smith users 4096 Oct 29 2011 my.dir
What is the meaning of the number 1 after drwxr-xr-x?
Does it represent the number of hard links to the directory my.dir?
I cannot remember. Where can I find this information?
Thanks,
John Goche
I found it on Wikipedia:
duuugggooo (hard link count) owner group size modification_date name
The number is the hard link count.
If you want a more UNIXy solution, type info ls. This gives more detailed information including:
`-l'
`--format=long'
`--format=verbose'
In addition to the name of each file, print the file type, file
mode bits, number of hard links, owner name, group name, size, and
timestamp (*note Formatting file timestamps::), normally the
modification time. Print question marks for information that
cannot be determined.
That is the number of names (hard links) of the file. And I suppose there is an error here: it must be at least 2 for a directory.
$ touch file
$ ls -l
total 0
-rw-r--r-- 1 igor igor 0 Jul 15 10:24 file
$ ln file file-link
$ ls -l
total 0
-rw-r--r-- 2 igor igor 0 Jul 15 10:24 file
-rw-r--r-- 2 igor igor 0 Jul 15 10:24 file-link
$ mkdir a
$ ls -l
total 0
drwxr-xr-x 2 igor igor 40 Jul 15 10:24 a
-rw-r--r-- 2 igor igor 0 Jul 15 10:24 file
-rw-r--r-- 2 igor igor 0 Jul 15 10:24 file-link
As you can see, as soon as you make a directory, you get 2 at the column.
When you make subdirectories in a directory, the number increases:
$ mkdir a/b
$ ls -ld a
drwxr-xr-x 3 igor igor 60 Jul 15 10:41 a
As you can see, the directory now has three names ('a', '.' inside it, and '..' in its subdirectory):
$ ls -id a ; cd a; ls -id .; ls -id b/..
39754633 a
39754633 .
39754633 b/..
All these three names point to the same directory (inode 39754633).
Trying to explain why a directory's initial link count value is 2. Please see if this helps.
Any file/directory is identified by an inode.
Number of Hard Links = Number of references to the inode.
When a directory/file is created, one directory entry (of the form {myname, myinodenumber}) is created in the parent directory. This makes the reference count of the inode for that file/directory equal to 1.
Now, when a directory is created, space for the directory itself is also allocated, and by default it holds two directory entries: one for the directory just created and one for its parent, i.e. two entries of the form {., myinodenumber} and {.., myparent'sinodenumber}.
The current directory is referred to by "." and the parent is referred to by "..".
So when we create a directory, the initial link count value = 1 + 1 = 2, since there are two references to myinodenumber. And the parent's link count is increased by 1.
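A quick sketch to confirm the rule: each new subdirectory adds one '..' reference to its parent:
mkdir -p d/sub1 d/sub2
ls -ld d   # link count is 4: the entry 'd', its own '.', plus one '..' per subdirectory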