I'm using shmget() to allocate a shared memory segment that I then use with pthread_mutex_init() to create a mutex shared between processes. Generally, this works as expected. However, occasionally shmget() returns ENOENT. According to the man page, this should only occur if shmflg doesn't include IPC_CREAT, but I am including it. Here's a snippet of my code:
shmid_ = shmget( MYLOCK_KEY_ID, sizeof(pthread_mutex_t), IPC_CREAT | IPC_EXCL | 0666 );
if ( errno == ENOENT ) {
    // This should never occur since IPC_CREAT was specified
    std::cerr
        << "shmget() returned ENOENT (it thinks IPC_CREAT wasn't specified).\n"
        << "This seems to be a bug in shmget()?" << std::endl;
    exit(1);
}
I'm totally lost as to what could be going on. I've tried this on several systems (Linux kernels 2.6.32 and 3.3.5) but both exhibit the same behavior. Currently, when I obtain this failure mode, I just repeat the process and it usually works. But that seems kind of kludgey and I don't know if this is a bug in shmget() or if I'm just doing something wrong.
Any ideas?
Your if statement is not checking the returned value - the man page says to check shmid_ for -1 and then check errno.
RETURN VALUE
A valid segment identifier, shmid, is returned on success, -1 on error.
What you are doing is just checking errno - it could hold ENOENT left over from some earlier call to some other function that failed.
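For illustration, a minimal sketch of that check, reusing the names from the question (with IPC_CREAT | IPC_EXCL, EEXIST is the failure you would typically see once the check is done correctly, since the segment can survive from a previous run):

shmid_ = shmget( MYLOCK_KEY_ID, sizeof(pthread_mutex_t), IPC_CREAT | IPC_EXCL | 0666 );
if ( shmid_ == -1 ) {
    // errno is only meaningful now that we know the call failed
    if ( errno == EEXIST ) {
        // Segment left over from a previous run: attach without IPC_EXCL
        shmid_ = shmget( MYLOCK_KEY_ID, sizeof(pthread_mutex_t), 0666 );
    }
    if ( shmid_ == -1 ) {
        perror( "shmget" );
        exit( 1 );
    }
}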
I'm coding a Linux x64 assembly program that reads a file, and I want to handle errors like File Not Found or permission errors.
Where can I find a list of SYS_OPEN error codes?
My code to open a file:
SYS_OPEN equ 2
O_RDONLY equ 0
section .data
filename db "file.txt", 0
section .text
global _start
_start:
    mov rax, SYS_OPEN     ; syscall number 2 (open)
    mov rdi, filename     ; 1st arg: pathname
    mov rsi, O_RDONLY     ; 2nd arg: flags
    mov rdx, 0644o        ; 3rd arg: mode (only used with O_CREAT)
    syscall
[...]
When the file is successfully opened, RAX contains the file descriptor (a non-negative integer); if the call fails, RAX contains an error code (a negative integer). I managed to raise a permission error by removing all permissions for all users:
chmod 0000 file.txt
This causes an error with code -13. By deleting the file, I managed to get error -2. Where can I find a list of SYS_OPEN error codes?
PS: Maybe my googling skills are rusty
Linux system call return values from -4095 to -1 are -errno codes. (The actual highest error number that Linux has actually defined is currently about 133, EHWPOISON, but that's the official range.)
strace ./myprog can decode them for you so you don't need to actually write error checking in your toy programs when playing around with system calls.
For example:
$ strace touch /tmp/xyjklj/bar
... (dynamic linker / process startup stuff)
openat(AT_FDCWD, "/tmp/xyjklj/bar", O_WRONLY|O_CREAT|O_NOCTTY|O_NONBLOCK, 0666) = -1 ENOENT (No such file or directory)
utimensat(AT_FDCWD, "/tmp/xyjklj/bar", NULL, 0) = -1 ENOENT (No such file or directory)
... (more system calls as touch(1) finds a locale-specific set of error messages and prints one)
(The -1 is what the libc wrapper function actually returns; the errno code is what strace decoded from the asm syscall return value, which the glibc wrapper will store in errno. When using raw system calls in asm, you don't have to waste instructions doing that. But strace will still say "-1", not the numeric error code)
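As an illustration of that convention (my own sketch, not libc source), a wrapper can classify a raw syscall return value like this:

/* Anything in [-4095, -1] from a raw Linux syscall is a negated errno. */
static long decode_syscall_ret(long raw, int *err)
{
    if (raw >= -4095 && raw <= -1) {
        *err = (int)-raw;   /* e.g. -2 -> ENOENT, -13 -> EACCES */
        return -1;          /* what a libc wrapper would report */
    }
    *err = 0;
    return raw;             /* success: fd, byte count, etc. */
}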
Documentation of most ways SYS_open can fail
Each system call's man page documents which error codes that particular system call can fail with, and in which cases each can happen. (Those lists aren't fully exhaustive, for example not covering weird things a specific filesystem like NFS could return, like EMULTIHOP (see comments).)
For your case, see the ERRORs section of the open(2) man page. e.g. there are several entries for ENOENT, covering all the cases which can lead to that return value.
ENOENT - O_CREAT is not set and the named file does not exist.
ENOENT - A directory component in pathname does not exist or is a
dangling symbolic link.
ENOENT - pathname refers to a nonexistent directory, O_TMPFILE and
one of O_WRONLY or O_RDWR were specified in flags, but
this kernel version does not provide the O_TMPFILE
functionality.
(Spoiler alert, 2 is ENOENT, so -2 is -ENOENT.)
There are of course lots of other fun ways that pathname and file access stuff (and open(2) in particular) can error, including:
EACCES (-13) - The requested access to the file is not allowed, or search
permission is denied for one of the directories in the
path prefix of pathname, or the file did not exist yet and
write access to the parent directory is not allowed. (See
also path_resolution(7).)
EFAULT - pathname points outside your accessible address space.
ENAMETOOLONG -
pathname was too long.
EBUSY - O_EXCL was specified in flags and pathname refers to a
block device that is in use by the system (e.g., it is
mounted).
[this would require root, otherwise you'd get EACCES]
ETXTBSY - pathname refers to an executable image which is currently
being executed and write access was requested.
EWOULDBLOCK -
The O_NONBLOCK flag was specified, and an incompatible
lease was held on the file (see fcntl(2)).
ENODEV - pathname refers to a device special file and no
corresponding device exists. (This is a Linux kernel bug;
in this situation ENXIO must be returned.)
ELOOP - Too many symbolic links were encountered in resolving
pathname.
EISDIR - pathname refers to a directory and the access requested
involved writing (that is, O_WRONLY or O_RDWR is set).
ENOTDIR -
A component used as a directory in pathname is not, in
fact, a directory, or O_DIRECTORY was specified and
pathname was not a directory.
EPERM - The O_NOATIME flag was specified, but the effective user
ID of the caller did not match the owner of the file and
the caller was not privileged.
As well as various limits like the number of open files (ENFILE, EMFILE), or ENOSPC when the disk is full. The above is not a complete list; I just took one example each of the ways to get many (but not all) of the error codes.
As per funnydman's answer, you can look up the number -> symbolic meaning of error values in man pages. Or look in /usr/include/asm-generic/errno-base.h (The full path may differ on some systems, and you'd only include this file indirectly, via #include <errno.h>)
You can interpret these as values of errno; here is the table (to list all of the codes, use errno -l), and also take a look at the docs. A part of the table:
number  hex   symbol  description
2       0x02  ENOENT  No such file or directory
13      0x0d  EACCES  Permission denied
The reasoning behind returning error codes this way is described here: https://stackoverflow.com/a/6008711/9926721
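For a quick lookup from C, here is a small sketch that maps the raw values from the question (-2 and -13) back to their messages via strerror():

#include <stdio.h>
#include <string.h>

int main(void)
{
    long raw[] = { -2, -13 };   /* values observed in RAX */
    for (int i = 0; i < 2; i++)
        printf("%ld -> errno %d: %s\n", raw[i], (int)-raw[i], strerror((int)-raw[i]));
    return 0;
}
/* prints:
   -2 -> errno 2: No such file or directory
   -13 -> errno 13: Permission denied */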
I'm trying to understand why an OPEN call in Fortran on NFSv3 succeeds in read-write mode on a file you only have read permission on, while the same OPEN call on NFSv4 fails.
Let me explain. Below is a simple Fortran program that opens the given file (the argument to the program) in read-write mode:
PROGRAM test_open
IMPLICIT NONE
! Parameters
INTEGER, PARAMETER :: lunin = 10
CHARACTER(LEN=100) :: fname
! Local
INTEGER :: i,ierr,siteid,nstation
REAL :: lat, lon, asl
CHARACTER(len=15) :: name
!----------------------------------------------------------------
!
! Open input file
!
CALL getarg(1,fname)
OPEN(lunin,file=fname,STATUS='OLD',IOSTAT=ierr)
IF ( ierr /= 0 ) THEN
WRITE(6,*)'Could not open ',TRIM(fname),ierr
STOP
ENDIF
WRITE(6,*)'Opened OK'
CLOSE(lunin)
END PROGRAM test_open
Save the above in test_open.f90 and compile with,
gfortran -o fortran test_open.f90
Now, execute the following on a mountpoint with NFSv3,
strace -eopen ./fortran file-with-only-read-permissions
And you should see the following lines (along with a lot of other output),
> open("file-with-only-read-permissions", O_RDWR) = -1 EACCES (Permission denied)
> open("file-with-only-read-permissions", O_RDONLY) = 3
So we can clearly see that we get an "EACCES (Permission denied)" while trying to open in 'O_RDWR' (open read-write), but right after it we see another open with O_RDONLY (open read-only), and that one succeeds.
Run the same program on a file on a NFSv4 share, and we get the following,
strace -eopen ./fortran file-with-only-read-permissions-on-nfsv4-share
> open("file-with-only-read-permissions-on-nfsv4-share", O_RDWR) = -1 EPERM (Operation not permitted)
So here we get an "EPERM (Operation not permitted)" while trying to open the file in 'O_RDWR' (open read-write), and nothing more (i.e. the application fails).
Doing the same test in C with a small test program, it fails to open the file in both scenarios (that is, it will not try to open the file in read-only mode after getting EACCES on NFSv3).
So to the questions,
I assume the above behaviour is due to the implementation of the OPEN-call in fortran, and that if fortran gets an "EACCES (Permission denied)" while trying to open a file, it will automatically try to open the file in read-only (O_RDONLY). Is this assumption correct ?
I also assume that fortran doesn't have this "fallback-method" when getting an "EPERM (Operation not permitted)" while trying to open a file. Is this assumption correct, or am I missing something ?
C doesn't seem to implement a "fallback method" for either EACCES or EPERM. This seems correct to me, since it leaves no room for confusion: if you try to open a file in a way that you do not have permission to, the program should fail - my opinion.
I am aware of that there is a distinct difference between "Permission denied" and "Operation not permitted". And I guess that when mounting NFSv4 over kerberos there is a reason for getting "Permission denied" instead of "Operation not permitted", however some clarification regarding this area would be great.
Of course, adding the appropriate specifier to the OPEN call (ACTION='READ') solves the problem. I'm just curious about my assumptions and whether they are correct.
To answer your questions, in order:
You are correct that gfortran will try to reopen a file in read-only mode when EACCES (or EROFS) is encountered.
You are also correct that EPERM is not handled this way; it is not mentioned in the libgfortran source tree at all.
As you say, this is a matter of opinion. Gfortran made the decision to do this a long time ago, and it seems to suit the users just fine.
I do not understand why NFS v4 returns EPERM in such a case. This seems at odds at least with the documentation in the open(2) Linux manpage that I have access to, where it is only mentioned when O_NOATIME has been specified (which libgfortran does not do). At least, this behavior does not seem to be portable.
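For illustration, here is a C sketch of the fallback described in points 1 and 2 above (my own paraphrase of the behavior, not libgfortran's actual source):

#include <fcntl.h>
#include <errno.h>

/* Sketch: what an OPEN without ACTION= effectively does in gfortran:
   try read-write first, then retry read-only on EACCES (or EROFS).
   EPERM is not in the retry list, so NFSv4's error aborts the open. */
int open_with_fallback(const char *path)
{
    int fd = open(path, O_RDWR);
    if (fd == -1 && (errno == EACCES || errno == EROFS))
        fd = open(path, O_RDONLY);
    return fd;
}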
I've had some readdir() issues occur in an embedded app, so I added this self-contained test at a convenient place in the app code:
FILE *f;
DIR *d;
f = fopen ("/mnt/mydir/myfile", "r");
printf ("fopen %p\r\n", f);
if (f) fclose(f);
d = opendir ("/mnt/mydir");
printf ("opendir ret %p\r\n", f);
if (d)
{
struct dirent *entry;
do
{
errno = 0;
entry = readdir (d);
printf ("readdir ret %p %s, errno %d %s\r\n", entry, entry ? entry->d_name : "", errno, strerror(errno));
} while (entry);
closedir (d);
}
/mnt/mydir is an NFS mount (although I'm not sure if that's relevant). The fopen() call to open a file in that dir always succeeds, and the opendir() on the dir also always succeeds. However, sometimes (most of the time) the readdir() fails with errno=EFAULT.
I don't believe anywhere else in the app is doing anything with that dir. The test is exactly as written, all variables are local stack scope.
If I run it as a standalone program, it always succeeds.
Can anyone offer any suggestions as to what could cause EFAULT here? I'm pretty sure my DIR pointer variable is not being corrupted, although the DIR structure itself could be I guess. I haven't seen any evidence elsewhere of heap corruption.
From the man 2 readdir page:
EFAULT Argument points outside the calling process's address space.
This means that your DIR structure is corrupted.
I think I found the problem. The uClibc implementation of opendir()/readdir() does a stat() on the directory, then later does a stack alloca() of size statbuf.st_blksize. My NFS directory was mounted with rsize=512KB, causing readdir() to try to allocate 512 KB on the stack to hold the dents. My embedded setup does not have that much room between stacks, so at some point it was hitting something below in memory and causing EFAULT.
If I change my NFS mount options to rsize=4096, it works fine.
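For anyone curious, the pattern looks roughly like this (a sketch of the failure mode described above, not uClibc's actual source):

#include <sys/stat.h>
#include <alloca.h>

void opendir_buffer_sketch(const char *path)
{
    struct stat st;
    if (stat(path, &st) != 0)
        return;
    /* With an NFS mount using rsize=512KB, st_blksize can be 512 KiB.
       An alloca() that large can run straight off a small embedded
       thread stack, so later accesses to the buffer fault (EFAULT). */
    char *buf = alloca(st.st_blksize);
    (void)buf;
}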
I'm a newbie in Linux programming. Why does the following code fail? Its output is "failed 20".
But in a terminal, the command sudo mount /dev/sdb /home/abc/work/tmp works.
void main()
{
int rtn;
rtn=mount("/dev/sdb","/home/abc/work/tmp","vfat",MS_BIND,"");
if (rtn==-1)
printf("failed %d.\n",errno);
else
printf("OK!\n");
}
You can't bind-mount a device, only a directory. Try providing a useful value for mountflags.
Error 20 is ENOTDIR (http://www-numi.fnal.gov/offline_software/srt_public_context/WebDocs/Errors/unix_system_errors.html).
I think with MS_BIND, you would need the first argument to be an actual directory somewhere, not a device. See also the man page for mount
What you are trying to do would be equivalent to sudo mount --bind /dev/sdb /home/abc/work/temp which will give you an error too.
You should print out not just the errno value, but also the corresponding error message:
printf("failed %d: %s\n", errno, strerror(errno));
This should reveal the reason for the problem. ("Not a directory", so /home/abc/work/tmp does not seem to be a directory.)
(There are various other problems with your code, such as missing #include statements, and writing error messages to stdout and not stderr, but those are irrelevant to your problem at hand. You can fix them later.)
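Putting those fixes together, a minimal corrected sketch (assuming /dev/sdb really holds a vfat filesystem, and running as root):

#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <sys/mount.h>

int main(void)
{
    /* Mount the device normally; MS_BIND only works with directories. */
    if (mount("/dev/sdb", "/home/abc/work/tmp", "vfat", 0, "") == -1) {
        fprintf(stderr, "failed %d: %s\n", errno, strerror(errno));
        return 1;
    }
    printf("OK!\n");
    return 0;
}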
A process is considered to have completed correctly in Linux if its exit status was 0.
I've seen that segmentation faults often result in an exit status of 11, though I don't know if this is simply the convention where I work (the applications that failed like that have all been internal) or a standard.
Are there standard exit codes for processes in Linux?
Part 1: Advanced Bash Scripting Guide
As always, the Advanced Bash Scripting Guide has great information:
(This was linked in another answer, but to a non-canonical URL.)
1: Catchall for general errors
2: Misuse of shell builtins (according to Bash documentation)
126: Command invoked cannot execute
127: "command not found"
128: Invalid argument to exit
128+n: Fatal error signal "n"
255: Exit status out of range (exit takes only integer args in the range 0 - 255)
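A quick way to observe one of these codes from C (a sketch; the command name is deliberately made up): run a nonexistent command through system(3) and decode the wait-style status it returns:

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>

int main(void)
{
    int status = system("no-such-command");   /* the shell can't find it */
    if (status != -1 && WIFEXITED(status))
        printf("exit status %d\n", WEXITSTATUS(status));   /* prints 127 */
    return 0;
}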
Part 2: sysexits.h
The ABSG references sysexits.h.
On Linux:
$ find /usr -name sysexits.h
/usr/include/sysexits.h
$ cat /usr/include/sysexits.h
/*
* Copyright (c) 1987, 1993
* The Regents of the University of California. All rights reserved.
(A whole bunch of text left out.)
#define EX_OK 0 /* successful termination */
#define EX__BASE 64 /* base value for error messages */
#define EX_USAGE 64 /* command line usage error */
#define EX_DATAERR 65 /* data format error */
#define EX_NOINPUT 66 /* cannot open input */
#define EX_NOUSER 67 /* addressee unknown */
#define EX_NOHOST 68 /* host name unknown */
#define EX_UNAVAILABLE 69 /* service unavailable */
#define EX_SOFTWARE 70 /* internal software error */
#define EX_OSERR 71 /* system error (e.g., can't fork) */
#define EX_OSFILE 72 /* critical OS file missing */
#define EX_CANTCREAT 73 /* can't create (user) output file */
#define EX_IOERR 74 /* input/output error */
#define EX_TEMPFAIL 75 /* temp failure; user is invited to retry */
#define EX_PROTOCOL 76 /* remote error in protocol */
#define EX_NOPERM 77 /* permission denied */
#define EX_CONFIG 78 /* configuration error */
#define EX__MAX 78 /* maximum listed value */
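A minimal sketch of how a program might use these (assuming <sysexits.h> is installed, as shown above):

#include <stdio.h>
#include <sysexits.h>

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s FILE\n", argv[0]);
        return EX_USAGE;   /* 64: command line usage error */
    }
    /* ... process argv[1] ... */
    return EX_OK;          /* 0: successful termination */
}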
8 bits of the return code and 8 bits of the number of the killing signal are mixed into a single value on the return from wait(2) & co.
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>
#include <signal.h>
int main() {
int status;
pid_t child = fork();
if (child <= 0)
exit(42);
waitpid(child, &status, 0);
if (WIFEXITED(status))
printf("first child exited with %u\n", WEXITSTATUS(status));
/* prints: "first child exited with 42" */
child = fork();
if (child <= 0)
kill(getpid(), SIGSEGV);
waitpid(child, &status, 0);
if (WIFSIGNALED(status))
printf("second child died with %u\n", WTERMSIG(status));
/* prints: "second child died with 11" */
}
How are you determining the exit status? Traditionally, the shell only stores an 8-bit return code, but sets the high bit if the process was abnormally terminated.
$ sh -c 'exit 42'; echo $?
42
$ sh -c 'kill -SEGV $$'; echo $?
Segmentation fault
139
$ expr 139 - 128
11
If you're seeing anything other than this, then the program probably has a SIGSEGV signal handler which then calls exit normally, so it isn't actually getting killed by the signal. (Programs can choose to handle any signals aside from SIGKILL and SIGSTOP.)
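To illustrate that last case, a small sketch (the status 42 is arbitrary): catch SIGSEGV and exit normally, and the shell reports the handler's status instead of 139:

#include <signal.h>
#include <unistd.h>

static void handler(int sig)
{
    (void)sig;
    _exit(42);   /* async-signal-safe "normal" exit */
}

int main(void)
{
    signal(SIGSEGV, handler);   /* install handler for the segfault signal */
    raise(SIGSEGV);             /* shell sees status 42, not 139 */
    return 0;
}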
None of the older answers describe exit status 2 correctly. Contrary to what they claim, status 2 is what your command line utilities actually return when called improperly. (Yes, an answer can be nine years old, have hundreds of upvotes, and still be wrong.)
Here is the real, long-standing exit status convention for normal termination, i.e. not by signal:
Exit status 0: success
Exit status 1: "failure", as defined by the program
Exit status 2: command line usage error
For example, diff returns 0 if the files it compares are identical, and 1 if they differ. By long-standing convention, unix programs return exit status 2 when called incorrectly (unknown options, wrong number of arguments, etc.) For example, diff -N, grep -Y or diff a b c will all result in $? being set to 2. This is and has been the practice since the early days of Unix in the 1970s.
The accepted answer explains what happens when a command is terminated by a signal. In brief, termination due to an uncaught signal results in exit status 128 + <signal number>. E.g., termination by SIGINT (signal 2) results in exit status 130.
Notes
Several answers define exit status 2 as "Misuse of bash builtins". This applies only when bash (or a bash script) exits with status 2. Consider it a special case of incorrect usage error.
In sysexits.h, mentioned in the most popular answer, exit status EX_USAGE ("command line usage error") is defined to be 64. But this does not reflect reality: I am not aware of any common Unix utility that returns 64 on incorrect invocation (examples welcome). Careful reading of the source code reveals that sysexits.h is aspirational, rather than a reflection of true usage:
* This include file attempts to categorize possible error
* exit statuses for system programs, notably delivermail
* and the Berkeley network.
* Error numbers begin at EX__BASE [64] to reduce the possibility of
* clashing with other exit statuses that random programs may
* already return.
In other words, these definitions do not reflect the common practice at the time (1993) but were intentionally incompatible with it. More's the pity.
'1': Catch-all for general errors
'2': Misuse of shell builtins (according to Bash documentation)
'126': Command invoked cannot execute
'127': "command not found"
'128': Invalid argument to exit
'128+n': Fatal error signal "n"
'130': Script terminated by Ctrl + C
'255': Exit status out of range
This is for Bash. However, for other applications, there are different exit codes.
There are no standard exit codes, aside from 0 meaning success. Non-zero doesn't necessarily mean failure either.
Header file stdlib.h does define EXIT_FAILURE as 1 and EXIT_SUCCESS as 0, but that's about it.
The 11 on segmentation fault is interesting, as 11 is the signal number that the kernel uses to kill the process in the event of a segmentation fault. There is likely some mechanism, either in the kernel or in the shell, that translates that into the exit code.
Header file sysexits.h has a list of standard exit codes. It seems to date back to at least 1993 and some big projects like Postfix use it, so I imagine it's the way to go.
From the OpenBSD man page:
According to style(9), it is not good practice to call exit(3) with arbitrary values to indicate a failure condition when ending a program. Instead, the predefined exit codes from sysexits should be used, so the caller of the process can get a rough estimation about the failure class without looking up the source code.
To a first approximation, 0 is success, non-zero is failure, with 1 being general failure, and anything larger than one being a specific failure. Aside from the trivial exceptions of false and test, which are both designed to give 1 for success, there are a few other exceptions I found.
More realistically, 0 means success or maybe failure, 1 means general failure or maybe success, 2 means general failure if 1 and 0 are both used for success, but maybe success as well.
The diff command gives 0 if the files compared are identical, 1 if they differ, and 2 if it runs into trouble; 2 also means failure. The less command gives 1 for failure unless you fail to supply an argument, in which case it exits 0 despite failing.
The more command and the spell command give 1 for failure, unless the failure is a result of permission denied, nonexistent file, or attempt to read a directory. In any of these cases, they exit 0 despite failing.
Then the expr command gives 1 for success unless the output is the empty string or zero, in which case, 0 is success. 2 and 3 are failure.
Then there's cases where success or failure is ambiguous. When grep fails to find a pattern, it exits 1, but it exits 2 for a genuine failure (like permission denied). klist also exits 1 when it fails to find a ticket, although this isn't really any more of a failure than when grep doesn't find a pattern, or when you ls an empty directory.
So, unfortunately, the Unix powers that be don't seem to enforce any logical set of rules, even on very commonly used executables.
The status a parent sees from wait(2) is a 16-bit value. If the program was killed by a signal, the low-order byte contains the signal number; otherwise the high-order byte contains the exit status returned by the programmer.
How that status is turned into the variable $? is then up to the shell. Bash takes the low 8 bits of the exit status for a normal exit, and uses 128 + (signal nr) to indicate death by a signal.
The only "standard" convention for programs is 0 for success, non-zero for error. Another convention used is to return errno on error.
Standard Unix exit codes are defined by sysexits.h, as David mentioned.
The same exit codes are used by portable libraries such as Poco - here is a list of them:
Class Poco::Util::Application, ExitCode
A signal 11 is a SIGSEGV (segmentation violation) signal, which is different from a return code. This signal is generated by the kernel in response to a bad page access, and it causes the program to terminate. A list of signals can be found in the signal man page (run "man signal").
When a Linux program returns 0, it means success. Anything else means failure. Each program has its own exit codes, so it would be quite long to list them all...!
About exit code 11: it's indeed the segmentation fault signal number, which mostly means the program accessed a memory location that was not assigned to it.
Some are convention, but some other reserved ones are part of the POSIX standard.
126 -- A file to be executed was found, but it was not an executable utility.
127 -- A utility to be executed was not found.
>128 -- A command was interrupted by a signal.
See the section RATIONALE of man 1p exit.