ucontext_t has been removed from POSIX, but is still there in glibc.
Is it safe to use it on linux-arm64 if I don't care about interoperability? Any gotchas (floating-point registers or anything else I should be wary of)?
Yes, it should be perfectly safe to use. Just because ucontext.h was removed from POSIX.1-2017/SUSv7 does not mean that glibc no longer supports the functionality.
This particular header was removed in the latest version of the standard because IEEE Std 1003.1-2001/Cor 2-2004, item XBD/TC2/D6/28, applied in the previous version, marked the getcontext, makecontext, setcontext, and swapcontext functions obsolescent, and thus the header was also de facto obsolescent.
Is it possible to use sysenter in a 64-bit program on Linux? Or is it impossible to adapt the use of sysenter to the System V calling convention without crashing other dynamically linked libraries? (I know the 32-bit way won't work, but I just want to know if it's possible to work around this, like with int 0x80.)
There is very little documentation on using sysenter even in 32-bit mode, so I couldn't find anything for 64-bit.
I know this is not recommended, but it's the only opcode I can use to trigger a system call as part of a bug-bounty exploit, where the program needs to exit using a special function that can be triggered only from normal execution.
It is possible to use it, but it goes through the 32-bit entry point of the kernel (check the code for more).
The actual location (and code) of this entry point depends on your kernel version.
For versions 4.2 and newer it is entry_SYSENTER_32.
For versions 4.1 and older it is ia32_sysenter_target.
Finally, SYSRET is not available from userspace (it can only be executed from ring 0); see the Intel manual's description of the instruction.
I am aware that an implementer has a choice of whether to zero a malloc'd page or let the OS supply a zeroed page (for optimization purposes).
My question is simple - on Ubuntu 14.04 LTS, which comes with Linux kernel 3.16 and gcc 4.8.4, who will zero my pages? Does it happen in user land or kernel land?
It can depend on where the memory came from. The calloc code is userland, and will zero a memory page that gets re-used by a process. This happens when the memory was previously used and then freed, but not returned to the OS. However, if the page is newly allocated to the process, it will come already cleared to 0 by the OS (for security purposes), and so does not need to be cleared by calloc. This means calloc can potentially be faster than calling malloc followed by memset, since it can skip the memset when it knows the memory will already be zeroed.
That depends on the implementer of your standard library, not on the host system. It is not possible to give a specific answer for a particular OS, since it may be the build target of multiple compilers and their libraries - including on other systems, if you consider the possibility of cross-compiling (building on one type of system to target another).
Most implementations I've seen of calloc() use a call of malloc() followed by either a call of memset() or (with some implementations that target unix) a legacy function called bzero() - which is, itself, sometimes replaced by a macro call that expands to a call of memset() in a number of recent versions of libraries.
memset() is often hand-optimised. But, again, it is up to the implementer of the library.
How do I create a simple function that returns a string on an ARM platform?
procedure Main is
function tst_func return String is
begin
return "string";
end tst_func;
str : String := tst_func; -- <-- Doesn't work, runtime error.
-- AdaCore GPL compiler, cross-development, arm-elf hosted on Windows.
-- Hardware is an STM32F407 Discovery board.
begin
...
The problem is a bug in the runtime system: if your program doesn’t involve any tasking, the environment task’s secondary stack isn’t set up properly, so when your function tries to return a string it thinks the secondary stack has been exhausted and raises Storage_Error.
I have reported this to AdaCore: their recommendation was to include
delay until Ada.Real_Time.Clock;
in your main program.
The bug will likely be resolved in the next GNAT GPL release.
The issue here seems to be that using Ada on small embedded CPUs like the STM32 (ARM Cortex) or the Atmel AVR or TI MSP430 often involves compromises, because the platform may not be capable of running a full Ada RTS (Runtime System), including features like tasking.
Instead, a minimal RTS may be supplied, with restrictions specified by pragmas, that doesn't support tasking, or in this case, features requiring the secondary stack. Funnily enough, the RTS for the AVR does include the files s-secsta.ads/.adb, which implement package System.Secondary_Stack, so the much more powerful STM32 ought to be capable of it. You could look at the RTS sources supplied with the AdaCore GPL package to see whether these files are present.
So - options.
1) Work around it: use fixed-length strings, a table of string constants, or return an access String (i.e. a pointer) to a string allocated on the heap (don't forget to free it!), though heap use is not normally recommended for embedded programming.
2) Find a better RTS. You can compile and link against a different RTS by supplying --RTS=... arguments to the compiler. Here is a thread discussing alternative RTS strategies for this CPU.
I have used flock() and fcntl() in the past, but I've always been concerned that behavior is undefined or problematic for some older versions of Linux.
I need a solution that is compatible with older Linux versions (say, 2.6.18 or later), and NFS 3+.
Will flock() and/or fcntl() work consistently under those circumstances, or do I need to resort to open (.... O_EXCL) to guarantee atomicity?
You definitely cannot expect flock() to work with NFS. fcntl() with F_SETLK has a decent chance of working, with caveats if you have multiple uses in one process: http://0pointer.de/blog/projects/locking.html
Historically, flock has been available for at least a decade, and implemented by the kernel since 2.0. From the flock man page:
Since kernel 2.0, flock() is implemented as a system call in its own
right rather than being emulated in the GNU C library as a call to
fcntl(2). This yields true BSD semantics: there is no interaction
between the types of lock placed by flock() and fcntl(2), and flock()
does not detect deadlock.
I think it will cover your needs, unless you are dealing with pre-2.0 kernels.
ATL uses thunks to manage callbacks for windows, and apparently it needs to allow for data execution.
Microsoft says:
Note that system DEP policy can override, and having DEP AlwaysOn will disable ATL thunk emulation, regardless of the attribute.
Am I correct in translating this quote to (more or less) "ATL applications can crash due to system policies"?
Is there a way to make a pre-ATL-8.0 application work correctly on any system, hopefully while still turning on DEP for everything other than the thunk?
DEP is enabled per process, so you cannot disable DEP for the buggy fragment only. The options are either to rebuild a binary with fixed ATL to make the binary DEP-compatible, or disable DEP for the whole process where the binary is used.
Earlier ATL versions indeed had this problem and it was fixed at some point.
DEP exceptions are configured under System Properties (My Computer → Properties), Advanced tab, Performance Settings, Data Execution Prevention.
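A process can also opt in to DEP programmatically while leaving ATL thunk emulation active, which is as close as you can get to "DEP for everything except the thunk". This is a Windows-only sketch (XP SP3 / Vista SP1 or later, 32-bit processes; it fails when the system policy is AlwaysOn or AlwaysOff, matching the quote above):

```c
#include <windows.h>
#include <stdio.h>

int main(void) {
    /* PROCESS_DEP_ENABLE alone turns DEP on permanently for this
       process but keeps ATL thunk emulation; adding
       PROCESS_DEP_DISABLE_ATL_THUNK_EMULATION would break pre-8.0
       ATL window thunks. Call this as early as possible. */
    if (!SetProcessDEPPolicy(PROCESS_DEP_ENABLE)) {
        fprintf(stderr, "SetProcessDEPPolicy failed: %lu\n",
                (unsigned long)GetLastError());
        return 1;
    }
    puts("DEP enabled, ATL thunk emulation still active");
    return 0;
}
```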
It is not a problem with ATL 8.0:
If possible, replace the older components with ones built to support
the "No eXecute Compatibility", such as those using ATL 8.0 or newer.
The ATL thunk strategy was devised as a lookup convenience and to
avoid using thread-local storage for a window-handle-to-object map,
but the thunk emulation required in DEP-aware OS's negates and even
reverses any performance improvement. Newer versions of ATL don't
require the thunk emulation because their thunks are created in
executable data blocks.
EDIT: Sorry, didn't notice you asked about pre-8.0 ATL.