Fortran OPEN-call differs on NFSv3 vs NFSv4 - linux

I'm trying to understand why a Fortran OPEN call in read-write mode succeeds on NFSv3 for a file you only have read permission on, while the same OPEN call fails on NFSv4.
Let me explain. Below is a simple Fortran program that opens a given file (passed as an argument to the program) in read-write mode:
PROGRAM test_open
  IMPLICIT NONE
  ! Parameters
  INTEGER, PARAMETER :: lunin = 10
  CHARACTER(LEN=100) :: fname
  ! Local
  INTEGER :: i,ierr,siteid,nstation
  REAL :: lat, lon, asl
  CHARACTER(len=15) :: name
  !----------------------------------------------------------------
  !
  ! Open input file
  !
  CALL getarg(1,fname)
  OPEN(lunin,file=fname,STATUS='OLD',IOSTAT=ierr)
  IF ( ierr /= 0 ) THEN
     WRITE(6,*)'Could not open ',TRIM(fname),ierr
     STOP
  ENDIF
  WRITE(6,*)'Opened OK'
  CLOSE(lunin)
END PROGRAM test_open
Save the above in test_open.f90 and compile with:
gfortran -o fortran test_open.f90
Now execute the following on an NFSv3 mount point:
strace -eopen ./fortran file-with-only-read-permissions
You should see the following lines (along with a lot of other output):
> open("file-with-only-read-permissions", O_RDWR) = -1 EACCES (Permission denied)
> open("file-with-only-read-permissions", O_RDONLY) = 3
So we can clearly see that we get an "EACCES (Permission denied)" when trying to open with O_RDWR (read-write), but right after that there is another open with O_RDONLY (read-only), and that one succeeds.
Run the same program on a file on an NFSv4 share, and we get the following:
strace -eopen ./fortran file-with-only-read-permissions-on-nfsv4-share
> open("file-with-only-read-permissions-on-nfsv4-share", O_RDWR) = -1 EPERM (Operation not permitted)
So here we get an "EPERM (Operation not permitted)" when trying to open the file with O_RDWR (read-write) and nothing more (i.e. the application fails).
Doing the same test with a small C test program, the open fails in both scenarios (that is, it does not try to open the file read-only after getting the "EACCES" on NFSv3).
So, to the questions:
I assume the above behaviour is due to the implementation of the OPEN call in Fortran, and that if Fortran gets an "EACCES (Permission denied)" while trying to open a file, it will automatically retry opening the file read-only (O_RDONLY). Is this assumption correct?
I also assume that Fortran doesn't have this "fallback method" when getting an "EPERM (Operation not permitted)" while trying to open a file. Is this assumption correct, or am I missing something?
C doesn't seem to implement a "fallback method" for either EACCES or EPERM. That seems correct to me, since it leaves no room for confusion: if you try to open a file in a way that you do not have permission to, the program should fail - in my opinion.
I am aware that there is a distinct difference between "Permission denied" and "Operation not permitted". And I guess that when mounting NFSv4 over Kerberos there is a reason for getting "Operation not permitted" instead of "Permission denied"; however, some clarification regarding this area would be great.
Of course, adding the appropriate specifier to the OPEN call (ACTION='READ') solves the problem. I'm just curious about my assumptions and whether they are correct.
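For reference, with that specifier added, the OPEN statement above becomes:
OPEN(lunin,file=fname,STATUS='OLD',ACTION='READ',IOSTAT=ierr)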

To answer your questions, in order:
You are correct that gfortran will try to reopen a file in read-only mode when EACCES (or EROFS) is encountered.
You are also correct that EPERM is not handled this way; it is not mentioned in the libgfortran source tree at all.
As you say, this is a matter of opinion. Gfortran made the decision to do this a long time ago, and it seems to suit the users just fine.
I do not understand why NFSv4 returns EPERM in such a case. This seems at odds, at least, with the documentation in the open(2) Linux manpage that I have access to, where EPERM is only mentioned when O_NOATIME has been specified (which libgfortran does not do). At any rate, this behavior does not seem to be portable.
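In rough C terms, the fallback described in points 1 and 2 amounts to something like the sketch below (an illustration of the behaviour, not libgfortran's actual code):
#include <errno.h>
#include <fcntl.h>

int open_like_gfortran(const char *path)
{
    int fd = open(path, O_RDWR);           /* first try read-write */
    if (fd < 0 && (errno == EACCES || errno == EROFS))
        fd = open(path, O_RDONLY);         /* fall back to read-only */
    return fd;                             /* EPERM is not retried */
}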

Related

echo into '/proc/filename' gives "Operation Not Permitted" error

Platform: Ubuntu, kernel 5.15.0-43-generic.
I have written a loadable kernel module to create a file under /proc called testproc. The kernel module loads perfectly and creates /proc/testproc. The permissions on /proc/testproc are 0666 and it is owned by root. I am logged in as root for all operations.
I have implemented the read and write handlers in my kernel module, and they get called too.
When I run the command
echo "Hello" > /proc/testproc
the error seen is
bash: echo: write error: Operation not permitted
I am using the call
proc_create("testproc", 0666, NULL, &procfsFuncs)
to create the entry under /proc
Any pointers much appreciated.
I figured out my (trivial) mistake. I was expecting a non-zero result from copy_from_user() on success, when in reality copy_from_user() returns the number of bytes it could not copy, i.e. 0 on success.
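For anyone hitting the same thing, here is a minimal sketch of a write handler that checks copy_from_user() the right way (the handler name, buffer size and log message are just illustrative; it assumes a recent kernel and needs <linux/proc_fs.h> and <linux/uaccess.h>):
static ssize_t testproc_write(struct file *file, const char __user *ubuf,
                              size_t count, loff_t *ppos)
{
    char kbuf[64];
    size_t len = min(count, sizeof(kbuf) - 1);

    /* copy_from_user() returns the number of bytes NOT copied: 0 means success */
    if (copy_from_user(kbuf, ubuf, len))
        return -EFAULT;

    kbuf[len] = '\0';
    pr_info("testproc received: %s\n", kbuf);
    return count;   /* claim all input so echo does not retry */
}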

500 error on binary compiled CGI script after migration to new servers [duplicate]

I have a Perl CGI script that isn't working and I don't know how to start narrowing down the problem. What can I do?
Note: I'm adding the question because I really want to add my very lengthy answer to Stack Overflow. I keep externally linking to it in other answers and it deserves to be here. Don't be shy about editing my answer if you have something to add.
This answer is intended as a general framework for working through
problems with Perl CGI scripts and originally appeared on Perlmonks as Troubleshooting Perl CGI Scripts. It is not a complete guide to every
problem that you may encounter, nor a tutorial on bug squashing. It
is just the culmination of my experience debugging CGI scripts for twenty (plus!) years. This page seems to have had many different homes, and I seem
to forget it exists, so I'm adding it to Stack Overflow. You
can send any comments or suggestions to me at
bdfoy@cpan.org. It's also community wiki, but don't go too nuts. :)
Are you using Perl's built in features to help you find problems?
Turn on warnings to let Perl warn you about questionable parts of your code. You can do this from the command line with the -w switch so you don't have to change any code or add a pragma to every file:
% perl -w program.pl
However, you should force yourself to always clear up questionable code by adding the warnings pragma to all of your files:
use warnings;
If you need more information than the short warning message, use the diagnostics pragma to get more information, or look in the perldiag documentation:
use diagnostics;
Did you output a valid CGI header first?
The server is expecting the first output from a CGI script to be the CGI header. Typically that might be as simple as print "Content-type: text/plain\n\n"; or with CGI.pm and its derivatives, print header(). Some servers are sensitive to error output (on STDERR) showing up before standard output (on STDOUT).
Try sending errors to the browser
Add this line
use CGI::Carp 'fatalsToBrowser';
to your script. This also sends compilation errors to the browser window. Be sure to remove this before moving to a production environment, as the extra information can be a security risk.
What did the error log say?
Servers keep error logs (or they should, at least).
Error output from the server and from your script should
show up there. Find the error log and see what it says.
There isn't a standard place for log files. Look in the
server configuration for their location, or ask the server
admin. You can also use tools such as CGI::Carp
to keep your own log files.
What are the script's permissions?
If you see errors like "Permission denied" or "Method not
implemented", it probably means that your script is not
readable and executable by the web server user. On flavors
of Unix, changing the mode to 755 is recommended:
chmod 755 filename. Never set a mode to 777!
Are you using use strict?
Remember that Perl automatically creates variables when
you first use them. This is a feature, but sometimes can
cause bugs if you mistype a variable name. The pragma
use strict will help you find those sorts of
errors. It's annoying until you get used to it, but your
programming will improve significantly after a while and
you will be free to make different mistakes.
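A minimal CGI skeleton with both pragmas enabled (and the header printed first) looks like this:
#!/usr/bin/perl
use strict;
use warnings;

print "Content-type: text/plain\n\n";
print "Hello from the CGI script\n";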
Does the script compile?
You can check for compilation errors by using the -c
switch. Concentrate on the first errors reported. Rinse,
repeat. If you are getting really strange errors, check to
ensure that your script has the right line endings. If you
FTP in binary mode, check out from CVS, or do something else that
does not handle line end translation, the web server may see
your script as one big line. Transfer Perl scripts in ASCII
mode.
Is the script complaining about insecure dependencies?
If your script complains about insecure dependencies, you
are probably using the -T switch to turn on taint mode, which is
a good thing since it keeps you from passing unchecked data to the shell. If
it is complaining it is doing its job to help us write more secure scripts. Any
data originating from outside of the program (i.e. the environment)
is considered tainted. Environment variables such as PATH and
LD_LIBRARY_PATH
are particularly troublesome. You have to set these to a safe value
or unset them completely, as I recommend. You should be using absolute
paths anyway. If taint checking complains about something else,
make sure that you have untainted the data. See the perlsec
man page for details.
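As a short sketch of what that looks like in practice (the pattern and names here are only examples), you set the environment yourself and untaint values by extracting them with an explicit match:
#!/usr/bin/perl -T
use strict;
use warnings;

# Give the script a known-safe PATH and drop other risky variables.
$ENV{PATH} = '/bin:/usr/bin';
delete @ENV{qw(IFS CDPATH ENV BASH_ENV LD_LIBRARY_PATH)};

# Untaint a value by capturing it with an explicit pattern match.
my $param = defined $ARGV[0] ? $ARGV[0] : '';
my ($safe) = $param =~ /\A([\w.-]+)\z/
    or die "Unsafe parameter\n";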
What happens when you run it from the command line?
Does the script output what you expect when run from the
command line? Is the header output first, followed by a
blank line? Remember that STDERR may be merged with STDOUT
if you are on a terminal (e.g. an interactive session), and
due to buffering may show up in a jumbled order. Turn on
Perl's autoflush feature by setting $| to a
true value. Typically you might see $|++; in
CGI programs. Once set, every print and write will
immediately go to the output rather than being buffered.
You have to set this for each filehandle. Use select to
change the default filehandle, like so:
$|++; #sets $| for STDOUT
$old_handle = select( STDERR ); #change to STDERR
$|++; #sets $| for STDERR
select( $old_handle ); #change back to STDOUT
Either way, the first thing output should be the CGI header
followed by a blank line.
What happens when you run it from the command line with a CGI-like environment?
The web server environment is usually a lot more limited
than your command line environment, and has extra
information about the request. If your script runs fine
from the command line, you might try simulating a web server
environment. If the problem appears, you have an
environment problem.
Unset or remove these variables
PATH
LD_LIBRARY_PATH
all ORACLE_* variables
Set these variables
REQUEST_METHOD (set to GET, HEAD, or POST as appropriate)
SERVER_PORT (set to 80, usually)
REMOTE_USER (if you are doing protected access stuff)
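One quick way to approximate such an environment from the shell (the script name and values below are only placeholders) is:
env -i REQUEST_METHOD=GET SERVER_PORT=80 QUERY_STRING='name=value' /usr/bin/perl -w ./test.cgi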
Recent versions of CGI.pm ( > 2.75 ) require the -debug flag to
get the old (useful) behavior, so you might have to add it to
your CGI.pm imports.
use CGI qw(-debug);
Are you using die() or warn?
Those functions print to STDERR unless you have redefined
them. They don't output a CGI header, either. You can get
the same functionality with packages such as CGI::Carp.
What happens after you clear the browser cache?
If you think your script is doing the right thing, and
when you perform the request manually you get the right
output, the browser might be the culprit. Clear the cache
and set the cache size to zero while testing. Remember that
some browsers are really stupid and won't actually reload
new content even though you tell them to do so. This is
especially prevalent in cases where the URL path is the
same, but the content changes (e.g. dynamic images).
Is the script where you think it is?
The file system path to a script is not necessarily
directly related to the URL path to the script. Make sure
you have the right directory, even if you have to write a
short test script to test this. Furthermore, are you sure
that you are modifying the correct file? If you don't see
any effect with your changes, you might be modifying a
different file, or uploading a file to the wrong place.
(This is, by the way, my most frequent cause of such trouble
;)
Are you using CGI.pm, or a derivative of it?
If your problem is related to parsing the CGI input and you
aren't using a widely tested module like CGI.pm, CGI::Request,
CGI::Simple or CGI::Lite, use the module and get on with life.
CGI.pm has a cgi-lib.pl compatibility mode which can help you solve input
problems due to older CGI parser implementations.
Did you use absolute paths?
If you are running external commands with
system, back ticks, or other IPC facilities,
you should use an absolute path to the external program.
Not only do you know exactly what you are running, but you
avoid some security problems as well. If you are opening
files for either reading or writing, use an absolute path.
The CGI script may have a different idea about the current
directory than you do. Alternatively, you can do an
explicit chdir() to put you in the right place.
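For example (the program and file paths here are just illustrations):
# Run external programs by absolute path...
system('/usr/bin/lpr', '-P', 'office', '/var/tmp/report.txt') == 0
    or die "lpr failed: $?";

# ...and open files by absolute path too.
open my $log, '>>', '/var/log/myapp/cgi.log'
    or die "Cannot open log file: $!";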
Did you check your return values?
Most Perl functions will tell you if they worked or not
and will set $! on failure. Did you check the
return value and examine $! for error messages? Did you check
$@ if you were using eval?
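A couple of typical checks (the file name and risky_call() are hypothetical):
open my $fh, '<', '/etc/myapp.conf'
    or die "Cannot open config: $!";

eval { risky_call() };
if ($@) {
    print "Content-type: text/plain\n\n";
    print "Error: $@\n";
}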
Which version of Perl are you using?
The latest stable version of Perl is 5.28 (or not, depending on when this was last edited). Are you using an older version? Different versions of Perl may have different ideas of warnings.
Which web server are you using?
Different servers may act differently in the same
situation. The same server product may act differently with
different configurations. Include as much of this
information as you can in any request for help.
Did you check the server documentation?
Serious CGI programmers should know as much about the
server as possible - including not only the server features
and behavior, but also the local configuration. The
documentation for your server might not be available to you
if you are using a commercial product. Otherwise, the
documentation should be on your server. If it isn't, look
for it on the web.
Did you search the archives of comp.infosystems.www.authoring.cgi?
This used to be useful, but all the good posters have either died or wandered off.
It's likely that someone has had your problem before,
and that someone (possibly me) has answered it in this
newsgroup. Although this newsgroup has passed its heyday, the collected wisdom from the past can sometimes be useful.
Can you reproduce the problem with a short test script?
In large systems, it may be difficult to track down a bug
since so many things are happening. Try to reproduce the problem
behavior with the shortest possible script. Knowing the problem
is most of the fix. This may certainly be time-consuming, but you
haven't found the problem yet and you're running out of options. :)
Did you decide to go see a movie?
Seriously. Sometimes we can get so wrapped up in the problem that we
develop "perceptual narrowing" (tunnel vision). Taking a break,
getting a cup of coffee, or blasting some bad guys in [Duke Nukem,Quake,Doom,Halo,COD] might give you
the fresh perspective that you need to re-approach the problem.
Have you vocalized the problem?
Seriously again. Sometimes explaining the problem aloud
leads us to our own answers. Talk to the penguin (plush toy) because
your co-workers aren't listening. If you are interested in this
as a serious debugging tool (and I do recommend it if you haven't
found the problem by now), you might also like to read The Psychology
of Computer Programming.
I think CGI::Debug is worth mentioning as well.
Are you using an error handler while you are debugging?
die statements and other fatal run-time and compile-time errors get
printed to STDERR, which can be hard to find and may be conflated with
messages from other web pages at your site. While you're debugging your
script, it's a good idea to get the fatal error messages to display in your
browser somehow.
One way to do this is to call
use CGI::Carp qw(fatalsToBrowser);
at the top of your script. That call will install a $SIG{__DIE__} handler (see perlvar) to display fatal errors in your browser, prepending them with a valid header if necessary. Another CGI debugging trick that I used before I ever heard of CGI::Carp was to
use eval with the DATA and __END__ facilities on the script to catch compile-time errors:
#!/usr/bin/perl
eval join '', <DATA>;
if ($@) { print "Content-type: text/plain\n\nError in the script:\n$@\n"; }
__DATA__
# ... actual CGI script starts here
This more verbose technique has a slight advantage over CGI::Carp in that it will catch more compile-time errors.
Update: I've never used it, but it looks like CGI::Debug, as Mikael S
suggested, is also a very useful and configurable tool for this purpose.
I wonder why no one has mentioned the PERLDB_OPTS RemotePort option; admittedly, there aren't many working examples on the web (RemotePort isn't even mentioned in perldebug), and it was somewhat tricky for me to come up with this one, but here it goes (it is a Linux example).
To do a proper example, first I needed something that can do a very simple simulation of a CGI web server, preferably through a single command line. After finding Simple command line web server for running cgis. (perlmonks.org), I found the IO::All - A Tiny Web Server to be applicable for this test.
Here, I'll work in the /tmp directory; the CGI script will be /tmp/test.pl (included below). Note that the IO::All server will only serve executable files in the same directory as CGI, so chmod +x test.pl is required here. So, to do the usual CGI test run, I change directory to /tmp in the terminal, and run the one-liner web server there:
$ cd /tmp
$ perl -MIO::All -e 'io(":8080")->fork->accept->(sub { $_[0] < io(-x $1 ? "./$1 |" : $1) if /^GET \/(.*) / })'
The webserver command will block in the terminal, and will otherwise start the web server locally (on 127.0.0.1 or localhost) - afterwards, I can go to a web browser, and request this address:
http://127.0.0.1:8080/test.pl
... and I should observe the prints made by test.pl being loaded - and shown - in the web browser.
Now, to debug this script with RemotePort, we first need a listener on the network through which we will interact with the Perl debugger; we can use the command line tool netcat (nc; saw that here: How to remote debug Perl?). So, first run the netcat listener in one terminal - where it will block and wait for connections on port 7234 (which will be our debug port):
$ nc -l 7234
Then, we'd want perl to start in debug mode with RemotePort, when the test.pl has been called (even in CGI mode, through the server). This, in Linux, can be done using the following "shebang wrapper" script - which here also needs to be in /tmp, and must be made executable:
cd /tmp
cat > perldbgcall.sh <<'EOF'
#!/bin/bash
PERLDB_OPTS="RemotePort=localhost:7234" perl -d -e "do '$@'"
EOF
chmod +x perldbgcall.sh
This is kind of a tricky thing - see shell script - How can I use environment variables in my shebang? - Unix & Linux Stack Exchange. But, the trick here seems to be not to fork the perl interpreter which handles test.pl - so once we hit it, we don't exec, but instead we call perl "plainly", and basically "source" our test.pl script using do (see How do I run a Perl script from within a Perl script?).
Now that we have perldbgcall.sh in /tmp - we can change the test.pl file, so that it refers to this executable file on its shebang line (instead of the usual Perl interpreter) - here is /tmp/test.pl modified thus:
#!./perldbgcall.sh
# this is test.pl
use 5.10.1;
use warnings;
use strict;
my $b = '1';
my $a = sub { "hello $b there" };
$b = '2';
print "YEAH " . $a->() . " CMON\n";
$b = '3';
print "CMON " . &$a . " YEAH\n";
$DB::single=1; # BREAKPOINT
$b = '4';
print "STEP " . &$a . " NOW\n";
$b = '5';
print "STEP " . &$a . " AGAIN\n";
Now, both test.pl and its new shebang handler, perldbgcall.sh, are in /tmp; and we have nc listening for debug connections on port 7234 - so we can finally open another terminal window, change directory to /tmp, and run the one-liner webserver (which will listen for web connections on port 8080) there:
cd /tmp
perl -MIO::All -e 'io(":8080")->fork->accept->(sub { $_[0] < io(-x $1 ? "./$1 |" : $1) if /^GET \/(.*) / })'
After this is done, we can go to our web browser and request the same address, http://127.0.0.1:8080/test.pl. However, now when the webserver tries to execute the script, it will do so through the perldbgcall.sh shebang - which will start perl in remote debugger mode. Thus, the script execution will pause - and so the web browser will lock, waiting for data. We can now switch to the netcat terminal, and we should see the familiar Perl debugger text - this time output through nc:
$ nc -l 7234
Loading DB routines from perl5db.pl version 1.32
Editor support available.
Enter h or `h h' for help, or `man perldebug' for more help.
main::(-e:1): do './test.pl'
DB<1> r
main::(./test.pl:29): $b = '4';
DB<1>
As the snippet shows, we now basically use nc as a "terminal" - so we can type r (and Enter) for "run", and the script will run up to the breakpoint statement (see also In perl, what is the difference between $DB::single = 1 and 2?), before stopping again (note that at that point, the browser will still lock).
So, now we can, say, step through the rest of test.pl, through the nc terminal:
....
main::(./test.pl:29): $b = '4';
DB<1> n
main::(./test.pl:30): print "STEP " . &$a . " NOW\n";
DB<1> n
main::(./test.pl:31): $b = '5';
DB<1> n
main::(./test.pl:32): print "STEP " . &$a . " AGAIN\n";
DB<1> n
Debugged program terminated. Use q to quit or R to restart,
use o inhibit_exit to avoid stopping after program termination,
h q, h R or h o to get additional info.
DB<1>
... however, also at this point, the browser locks and waits for data. Only after we exit the debugger with q:
DB<1> q
$
... does the browser stop locking - and finally displays the (complete) output of test.pl:
YEAH hello 2 there CMON
CMON hello 3 there YEAH
STEP hello 4 there NOW
STEP hello 5 there AGAIN
Of course, this kind of debugging can be done even without running the web server - however, the neat thing here is that we don't touch the web server at all; we trigger execution "natively" (for CGI) from a web browser - and the only change needed in the CGI script itself is the change of shebang (and, of course, the presence of the shebang wrapper script as an executable file in the same directory).
Well, hope this helps someone - I sure would have loved to have stumbled upon this, instead of writing it myself :)
Cheers!
For me, I use Log::Log4perl. It's quite useful and easy.
use Log::Log4perl qw(:easy);
Log::Log4perl->easy_init( { level => $DEBUG, file => ">>d:\\tokyo.log" } );
my $logger = Log::Log4perl::get_logger();
$logger->debug("your log message");
Honestly, you can do all the fun stuff above this post.
However, the simplest and most proactive solution I found was to just "print it".
For example:
(Normal code)
`$somecommand`;
To see if it's doing what I really want it to do:
(Trouble shooting)
print "$somecommand";
It is probably also worth mentioning that Perl will always tell you on what line the error occurs when you execute the Perl script from the command line (in an SSH session, for example).
I will usually do this if all else fails. I will SSH into the server and manually execute the Perl script. For example:
% perl myscript.cgi
If there is a problem then Perl will tell you about it. This debugging method does away with any file permission related issues or web browser or web server issues.
You can run the Perl CGI script in a terminal using the command below:
$ perl filename.cgi
It interprets the code and prints the result, including the HTML output.
It will report any errors it finds.

Beaglebone Linux: issues appending a line to a file

I am working to enable SPI on my BeagleBone Black (Angstrom distribution), using the instructions here.
I am at the point where I need to add BB-SPI1-01 to /sys/devices/bone_capemgr.*/slots to enable the drivers.
Issuing the command echo BB-SPI1-01 > /sys/devices/bone_capemgr.*/slots or echo BB-SPI1-01 >> /sys/devices/bone_capemgr.*/slots, however, yields the error echo: Write error: file exists
Trying to add the line with nano also fails. I'm able to open the file and edit it, but when I save, it gives me Error writing slots: no such file or directory
I've set permissions on the file to 777.
Does anybody know why I cannot edit the file? If it's not possible, is there a workaround?
I, too, have battled with this dilemma while trying to port ILI9340C display stuff to the BeagleBone Black. The way /sys/devices/bone_capemgr.* works is that for anything you echo into its slots file, it goes and searches for a Device Tree overlay for that device - a new thing in Linux kernel 3.0 and higher. For anyone who does not know (it took me forever to find this), Device Trees are basically a driver that tells Linux how to deal with a device, but instead of containing any code, they are simply a configuration file, per se, that tells Linux what to put where in order to talk to a device, and what to expect in return. That being said, BB-SPIx-01 is a compiled Device Tree overlay (built from a .dts) in /lib/firmware/ which points to the SPI device and tells spidev what to do with it.
BB-SPI1-01 happens to be connected to the HDMI port already for some audio thing (I think) and, therefore, unless you disable HDMI entirely, SPI1 is always tied up by the HDMI framer. This explains why writing BB-SPI1-01 to /sys/devices/bone_capemgr.*/slots fails. This is a special file, and when you write to it, a kernel process reads your input and proceeds to attempt to make a 'device' file elsewhere, and since BB-SPI1-01 is already enabled, that file already exists, and so the kernel process that handles those things returns an error and pipes it through whatever process initiated it, in this case, you, the user, typing echo BB-SPI1-01 > /sys/devices/bone_capemgr.*/slots.
On the bright side, SPI0 is left unused. Therefore, in order to use it, all you have to do is enable it in userland. To do that, (and you have figured this out already, but for everyone else) type echo BB-SPI0-01 > /sys/devices/bone_capemgr.*/slots at the command line, and then just to be sure that spidev is running, type modprobe spidev as root. Now, to verify, type ls /dev | grep spi and see what comes up. /dev/spidevX.Y is your SPI bus, for me that would be /dev/spidev1.0.
I'm sorry that was really long winded, but I'm culminating my research thus far into one spot in the hopes that it will help someone.
If you have any questions, please feel free to ask!
For those who are curious, while I haven't found the exact answer, I did find some more information.
The SPI1 interface on the BeagleBone Black can't be enabled unless the HDMI interface has been turned off, which I have not done. I'm instead using the SPI0 interface now. Interestingly, that same command works if BB-SPI0-01 is used instead of BB-SPI1-01. Therefore the error in question is probably not coming from the base command, but rather from the system in response to the command (which can't allocate the requested resources due to conflicts with HDMI).
While I haven't tested SPI1 with HDMI turned off, I can only assume that my errors would then go away.
Might it be because you're trying to access more than one file at a time with echo BB-SPI1-01 > /sys/devices/bone_capemgr.*/slots?
Try selecting a single path to the slots file and see if that works.
Based on PyroAVR's answer, here is the concrete solution. You need to disable HDMI, that's easily done by editing the following file: /boot/uEnv.txt
You can uncomment the line which causes HDMI to be disabled by running the following command as root:
sed -i.bck '/cape_disable=capemgr.disable_partno=BB-BONELT-HDMI,BB-BONELT-HDMIN$/ s/^#//' /boot/uEnv.txt
As Mixaz mentioned in a comment, the real errors are found in the dmesg output; "no such file or directory" is a red herring, and even strace doesn't give any clues as to the real problem. In my case I found:
[26858.517893] bone_capemgr bone_capemgr: slot #5: override
[26858.517937] bone_capemgr bone_capemgr: Using override eeprom data at slot 5
[26858.517986] bone_capemgr bone_capemgr: slot #5: 'Override Board Name,00A0,Override Manuf,jc_gpio_test'
[26924.230357] bone_capemgr bone_capemgr: part_number 'jc_gpio_test', version 'N/A'
From that I guessed that it didn't like "0000" as a version number; I changed it to "00A0" and recompiled, and then it worked.
Here's the Makefile I wrote to help automate the process, in case it helps.
%.install: %-00A0.dtbo
	cp -f $< /lib/firmware
	echo $* > /sys/devices/platform/bone_capemgr/slots

%-00A0.dtbo: %.dts
	dtc -O dtb -o $@ -b 0 -@ $<
Use it as make jc_gpio_test.install, assuming your .dts file name is jc_gpio_test.dts.
It turns out my guess was probably wrong. The change that more likely fixed it was adding the -00A0 part to the .dtbo file name; apparently the "dash-versionnumber" is required by the slot loader.

Relinking an anonymous (unlinked but open) file

In Unix, it's possible to create a handle to an anonymous file by, e.g., creating and opening it with creat() and then removing the directory link with unlink() - leaving you with a file with an inode and storage but no possible way to re-open it. Such files are often used as temp files (and typically this is what tmpfile() returns to you).
My question: is there any way to re-attach a file like this back into the directory structure? If you could do this it means that you could e.g. implement file writes so that the file appears atomically and fully formed. This appeals to my compulsive neatness. ;)
When poking through the relevant system call functions, I expected to find a version of link() called flink() (compare with chmod()/fchmod()), but, at least on Linux, this doesn't exist.
Bonus points for telling me how to create the anonymous file without briefly exposing a filename in the disk's directory structure.
A patch for a proposed Linux flink() system call was submitted several years ago, but when Linus stated "there is no way in HELL we can do this securely without major other incursions", that pretty much ended the debate on whether to add this.
Update: As of Linux 3.11, it is now possible to create a file with no directory entry using open() with the new O_TMPFILE flag, and link it into the filesystem once it is fully formed using linkat() on /proc/self/fd/fd with the AT_SYMLINK_FOLLOW flag.
The following example is provided on the open() manual page:
char path[PATH_MAX];
fd = open("/path/to/dir", O_TMPFILE | O_RDWR, S_IRUSR | S_IWUSR);
/* File I/O on 'fd'... */
snprintf(path, PATH_MAX, "/proc/self/fd/%d", fd);
linkat(AT_FDCWD, path, AT_FDCWD, "/path/for/file", AT_SYMLINK_FOLLOW);
Note that linkat() will not allow open files to be re-attached after the last link is removed with unlink().
My question: is there any way to re-attach a file like this back into the directory structure? If you could do this it means that you could e.g. implement file writes so that the file appears atomically and fully formed. This appeals to my compulsive neatness. ;)
If this is your only goal, you can achieve this in a much simpler and more widely used manner. If you are outputting to a.dat:
Open a.dat.part for write.
Write your data.
Rename a.dat.part to a.dat.
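In C, that pattern is only a few lines (the file names are just examples):
#include <stdio.h>

/* Write everything to a temporary name, then atomically rename into place. */
int write_atomically(void)
{
    FILE *out = fopen("a.dat.part", "w");
    if (!out)
        return -1;
    fputs("the data\n", out);
    if (fclose(out) != 0)
        return -1;
    /* rename(2) is atomic within a filesystem: readers see either the old
       a.dat or the complete new one, never a half-written file. */
    return rename("a.dat.part", "a.dat");
}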
I can understand wanting to be neat, but unlinking a file and relinking it just to be "neat" is kind of silly.
This question on serverfault seems to indicate that this kind of re-linking is unsafe and not supported.
Thanks to @mark4o for posting about linkat(2); see his answer for details.
I wanted to give it a try to see what actually happened when trying to actually link an anonymous file back into the filesystem it is stored on. (often /tmp, e.g. for video data that firefox is playing).
As of Linux 3.16, there still appears to be no way to undelete a deleted file that's still held open. Neither AT_SYMLINK_FOLLOW nor AT_EMPTY_PATH for linkat(2) do the trick for deleted files that used to have a name, even as root.
The only alternative is tail -c +1 -f /proc/19044/fd/1 > data.recov, which makes a separate copy, and you have to kill it manually when it's done.
Here's the perl wrapper I cooked up for testing. Use strace -eopen,linkat linkat.pl - </proc/.../fd/123 newname to verify that your system still can't undelete open files. (Same applies even with sudo). Obviously you should read code you find on the Internet before running it, or use a sandboxed account.
#!/usr/bin/perl -w
# 2015 Peter Cordes <peter@cordes.ca>
# public domain. If it breaks, you get to keep both pieces. Share and enjoy
# Linux-only linkat(2) wrapper (opens "." to get a directory FD for relative paths)
if ($#ARGV != 1) {
print "wrong number of args. Usage:\n";
print "linkat old new \t# will use AT_SYMLINK_FOLLOW\n";
print "linkat - <old new\t# to use the AT_EMPTY_PATH flag (requires root, and still doesn't re-link arbitrary files)\n";
exit(1);
}
# use POSIX qw(linkat AT_EMPTY_PATH AT_SYMLINK_FOLLOW); #nope, not even POSIX linkat is there
require 'syscall.ph';
use Errno;
# /usr/include/linux/fcntl.h
# #define AT_SYMLINK_NOFOLLOW 0x100 /* Do not follow symbolic links. */
# #define AT_SYMLINK_FOLLOW 0x400 /* Follow symbolic links. */
# #define AT_EMPTY_PATH 0x1000 /* Allow empty relative pathname */
unless (defined &AT_SYMLINK_NOFOLLOW) { sub AT_SYMLINK_NOFOLLOW() { 0x0100 } }
unless (defined &AT_SYMLINK_FOLLOW ) { sub AT_SYMLINK_FOLLOW () { 0x0400 } }
unless (defined &AT_EMPTY_PATH ) { sub AT_EMPTY_PATH () { 0x1000 } }
sub my_linkat ($$$$$) {
# tmp copies: perl doesn't know that the string args won't be modified.
my ($oldp, $newp, $flags) = ($_[1], $_[3], $_[4]);
return !syscall(&SYS_linkat, fileno($_[0]), $oldp, fileno($_[2]), $newp, $flags);
}
sub linkat_dotpaths ($$$) {
open(DOTFD, ".") or die "open . $!";
my $ret = my_linkat(DOTFD, $_[0], DOTFD, $_[1], $_[2]);
close DOTFD;
return $ret;
}
sub link_stdin ($) {
my ($newp, ) = @_;
open(DOTFD, ".") or die "open . $!";
my $ret = my_linkat(0, "", DOTFD, $newp, &AT_EMPTY_PATH);
close DOTFD;
return $ret;
}
sub linkat_follow_dotpaths ($$) {
return linkat_dotpaths($_[0], $_[1], &AT_SYMLINK_FOLLOW);
}
## main
my $oldp = $ARGV[0];
my $newp = $ARGV[1];
# link($oldp, $newp) or die "$!";
# my_linkat(fileno(DIRFD), $oldp, fileno(DIRFD), $newp, AT_SYMLINK_FOLLOW) or die "$!";
if ($oldp eq '-') {
print "linking stdin to '$newp'. You will get ENOENT without root (or CAP_DAC_READ_SEARCH). Even then doesn't work when links=0\n";
$ret = link_stdin( $newp );
} else {
$ret = linkat_follow_dotpaths($oldp, $newp);
}
# either way, you still can't re-link deleted files (tested Linux 3.16 and 4.2).
# print STDERR
die "error: linkat: $!.\n" . ($!{ENOENT} ? "ENOENT is the error you get when trying to re-link a deleted file\n" : '') unless $ret;
# if you want to see exactly what happened, run
# strace -eopen,linkat linkat.pl
Clearly, this is possible -- fsck does it, for example. However, fsck does it with major localized file system mojo and will clearly not be portable, nor executable as an unprivileged user. It's similar to the debugfs comment above.
Writing that flink(2) call would be an interesting exercise. As ijw points out, it would offer some advantages over the current practice of temporary file renaming (rename, note, is guaranteed to be atomic).
Kind of late to the game but I just found http://computer-forensics.sans.org/blog/2009/01/27/recovering-open-but-unlinked-file-data which may answer the question. I haven't tested it, though, so YMMV. It looks sound.

How can I tarball the proc file system?

I would like to take a snapshot of my entire proc file system, and save it in a tarball (or in the worst case concatenate all of the text files together into a single text file).
But when I run:
tar -c /proc
I get a segfault.
What's the best way to do this? Should I set up some kind of recursive walk through each file?
I only have the basic *nix utilities, such as bash, cat, ls, echo, etc. I don't have anything fancy like python or perl or java.
The Linux /proc filesystem is actually kernel variables pretending to be a filesystem. There is nothing to save and thus nothing to back up. If the system let you, you could rm -rf /proc and it would magically reappear upon the next reboot.
The /dev file system has real i-nodes and they can be backed up. Except they have no contents, just a major and minor number, permissions, and a name. Tools that back up special device files only record those parameters and never try to open(2) the device. However, since the device major and minor numbers are only meaningful on the precise system they are built for, there is little cause for backing them up.
The reason that trying to tar the /proc pseudo-filesystem causes tar to segfault is because /proc has funny file behavior: things like a write-only pseudo-file may appear to have read permissions, but return an error indication if a program tries to open(2) it for backup. That's sure to drive a naïve tar to get persnickety.
Added in response to comment
It doesn't surprise me that tar had problems reading /proc/kmsg because it has some funny properties:
# strace cat /proc/kmsg
execve("/bin/cat", ["cat", "kmsg"],
open("kmsg", O_RDONLY|O_LARGEFILE) = 3
// ok, no problem opening the file for reading
fstat64(3, { st_mode=S_IFREG|0400, st_size=0,
// looks like a normal file of zero length
// but cat does not pay attention to st_size so it just
// does a blocking read
read(3, "<4>[103128.156051] ata2.00: qc t"..., 32768) = 461
write(1, "<4>[103128.156051] ata2.00: qc t"..., 461) = 461
// ...forever...
read(3, "<6>[103158.228444] ata2.00: conf"..., 32768) = 48
write(1, "<6>[103158.228444] ata2.00: conf"..., 48) = 48
+++ killed by SIGINT +++
Since /proc/kmsg is a running list of kernel messages as they happen, it never returns 0 (EOF); it just keeps going until I get bored and press ^C.
Interestingly, my tar has no trouble with /proc/kmsg:
$ tar --version
tar (GNU tar) 1.22
# tar cf /tmp/junk.tar /proc/kmsg
$ tar tvf /tmp/junk.tar
-r-------- root/root 0 2010-09-01 14:41 proc/kmsg
and if you look at the strace output, GNU tar 1.22 saw that st_size == 0 and didn't even bother opening the file for reading, since there wasn't anything there.
I can imagine that your tar saw the length was 0, allocated that much (none) space using malloc(3) which dutifully handed back a pointer to a zero length buffer. Your tar read from /proc/kmsg, got a non-zero length read, and tried to store it in the zero length buffer and got a segmentation violation.
That is but one rat-hole that awaits tar in /proc. How many more are there? Dunno. Will they behave identically? Probably not. Which of the ~1000 or so files which aren't /proc/<pid> pseudo-files are going to have weird semantics? Dunno.
But perhaps the most telling question: What sense would you make of /proc/sys/vm/lowmem_reserve_ratio, will it be different next week, and will you be able to learn anything from that difference?
While the accepted answer makes a lot of sense if you want to debate whether doing something like this makes sense at all, there is nevertheless an answer that works. Here's a script to duplicate the complete /proc file system into /tmp/proc. This can then be tarred and gzipped. I used it to keep a record of the setup and capabilities (memory, bogomips, default processes, etc.) of my trusty old file server before I replaced it with a new one.
cd /
mkdir /tmp/proc
find /proc -type f | while read F ; do
D=/tmp/$(dirname $F)
test -d $D || mkdir -p $D
test -f /tmp/$F || sudo cat $F > /tmp/$F
done
Notes:
Permissions are not preserved since I have to use cat instead of cp.
cp -a /proc /proccopy doesn't work since it crashes on "kcore" as well. mc (Midnight Commander) succeeds in creating a copy of /proc which you can then tar and gzip, but you have to dismiss thousands of "Cannot read file XYZ" errors and it too crashes on 'kcore' with a Bus Error.
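Once the copy exists under /tmp/proc, archiving it is then just:
cd /tmp
tar czf proc-snapshot.tar.gz proc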
A simple answer (in csh-style shell):
ls -Rd /proc/* > proc.lst
foreach item ( `cat proc.lst` )
    echo "proc_file:$item"
    if ( -f $item ) cat $item
end
Apropos the site advisory protocol:
"But avoid ... Making statements based on opinion..."
(IMHO, based on quite a few years' experience) It doesn't require much imagination to think of some good reasons for, at a given point in time, taking a snapshot of selected elements of /proc/* and storing it, or sending it somewhere. Therefore, I would question the usefulness of an 'answer' such as:
The linux /proc filesystem is actually kernel variables pretending to be a filesystem. There is nothing to save thus nothing to backup. If the system let you, you could rm -rf /proc and it would magically reappear upon the next reboot.
...on the grounds that it doesn't answer the question, makes a false assertion, and contains gratuitous information irrelevant to the question.
