What is the plan to upgrade time_t for 32-bit Linux?

On 32-bit Linux, time_t is currently 32 bits. It will run out in less than 25 years (mortgage timescales), and Linux is being used embedded in devices with lifetimes longer than 10 years (cars). Is there an upgrade plan for this platform?

There is no "set" time or time frame for which all Linux kernels will be using 64-bit time_t. In fact right now the general consensus is that it will not be changed anytime soon. No one is really that worried about it yet; just like Y2K it will cause problems in code that already relies on time_t.
There are a few Operating Systems that are using the workaround which is to use a wrapper that makes time_t both a 32-bit and a 64-bit integer.
While others have just forcibly upgraded time_t to use 64-bit integers.
For more information please refer to this link:
https://en.wikipedia.org/wiki/Year_2038_problem
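To see concretely where the limit falls, here is a minimal C sketch (my own illustration, not part of any kernel plan): a signed 32-bit time_t counts seconds since 1970-01-01 UTC and tops out at 2^31 - 1, in January 2038.

    /* Print the last moment representable in a signed 32-bit time_t. */
    #include <stdint.h>
    #include <stdio.h>
    #include <time.h>

    int main(void) {
        time_t last = (time_t)INT32_MAX;   /* 2^31 - 1 seconds after the epoch */
        struct tm tm_utc;
        gmtime_r(&last, &tm_utc);          /* convert to broken-down UTC time */
        char buf[64];
        strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S UTC", &tm_utc);
        printf("32-bit time_t overflows after: %s\n", buf);  /* 2038-01-19 03:14:07 UTC */
        return 0;
    }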

There were some good articles about it (specifically about syscalls) on LWN. Have a look at "System call conversion for year 2038".

Related

CPU/Threads usage on M1 Pro (Apple Silicon) using OpenMP

Hope someone knows the answer to this...
I have code that compiles perfectly well with OpenMP (it uses libsharp). However, I am finding it impossible to make the M1 Pro chip use all the 8 or 10 cores I have.
I am setting the thread count correctly with export OMP_NUM_THREADS=10, such that the code correctly identifies it's supposed to be running with 10 threads (see the screenshot from my Activity Monitor below):
[Screenshot: Activity Monitor shows the code is compiled for Apple Silicon and uses 10 threads, but not much of the available CPU.]
Does anyone know how to properly compile/set the number of threads such that all the cores will be used?
This is trivial on x86 architectures.
Not really an answer, but too long for a comment...
If both LLVM and GCC behave the same, then it's not an OpenMP runtime issue. (And your monitor output shows that the correct number of threads has been created.) I'm also not certain that it's really an Arm issue.
Are you comparing with an Apple x86 machine (so running the same operating system), or with a Linux x86 system?
The scheduling decisions of the two OSes are likely different, and (for instance) macOS has no interface to bind threads to logical CPUs.
As well as that, there's the issue of having some fast and some slow cores. That could mean that statically scheduled loops are inefficient.
I'm also confused by the fact that you seem to show multiple instances of your code running at the same time, so you are explicitly causing over-subscription of the logical CPUs...
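To make the static-versus-dynamic scheduling point concrete, here is a minimal OpenMP C sketch (not the asker's code; the loop body and chunk size are arbitrary). Timing it with schedule(static) versus schedule(dynamic) should show whether the slow cores are holding back statically partitioned loops.

    /* Compare loop schedules on mixed fast/slow cores.
     * Build with e.g.: clang -fopenmp sched_test.c -o sched_test */
    #include <omp.h>
    #include <stdio.h>

    #define N 100000000L

    int main(void) {
        double sum = 0.0;
        double t0 = omp_get_wtime();

        /* With schedule(static), each thread gets a fixed slice, so the
         * slowest core sets the pace; schedule(dynamic) lets fast cores
         * keep taking chunks. Swap the clause below to compare. */
        #pragma omp parallel for schedule(dynamic, 4096) reduction(+:sum)
        for (long i = 0; i < N; i++)
            sum += 1.0 / (double)(i + 1);

        printf("threads=%d sum=%f elapsed=%.3fs\n",
               omp_get_max_threads(), sum, omp_get_wtime() - t0);
        return 0;
    }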

How to find the number of bits of OS/390 or z/OS?

What is the command for finding the number of bits of an OS/390 or z/OS system?
Since there didn't seem to be a "real" answer on this thread, I thought I'd provide one just in case anyone needs the information...
The definitive source of whether you're running in 64-bit mode is the STORE FACILITY LIST (STFL, or STFLE) hardware instruction. It sets two different bits - one to indicate that the 64-bit zArchitecture facility is installed, and one to indicate that the 64-bit zArchitecture facility is active (it was once possible to run in 31-bit mode on 64-bit hardware, so this would give you the "installed, but not active" case).
The operating system generously issues STFL/STFLE during IPL, saving the response in the PSA (that's low memory, starting at location 0). This is handy, since STFL/STFLE are privileged instructions, but testing low storage doesn't require anything special. You can check the value at absolute address 0xc8 (decimal 200) for the 0x20 bit to tell that the system is active in 64-bit mode, otherwise it's 31-bit mode.
Although I doubt there are any pre-MVS/XA systems alive anymore (that is, 24-bit), for completeness you can also test CVTDCB.CVTMVSE bit - if this bit is not set, then you have a pre-MVS/XA 24-bit mode system. Finding this bit is simple - but left as an exercise for the reader... :)
If you're not able to write a program to test the above, then there are a variety of ways to display storage, such as TSO TEST or any mainframe debugger, as well as by looking at a dump, etc.
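If you can write a small program, here is a hedged C sketch of the low-storage test described above (assuming, as stated, that reading low storage from problem state needs nothing special; untested, for illustration only):

    /* Test the STFL result the system saved in the PSA at IPL time.
     * Per the answer above: byte at absolute address 200 (0xC8),
     * bit 0x20 set => 64-bit z/Architecture mode is active. */
    #include <stdio.h>

    int main(void) {
        const unsigned char *facl = (const unsigned char *)200;
        if (*facl & 0x20)
            puts("z/Architecture active: 64-bit mode");
        else
            puts("z/Architecture not active: 31-bit mode");
        return 0;
    }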
While I was not able to find commands to give this information, I think below is what you're looking for:
According to this: https://en.wikipedia.org/wiki/OS/390
z/OS is OS/390 with various extensions including support for 64-bit architecture.
So if you're on a zSeries processor with z/OS, you're on 64-bit.
According to this: https://en.wikipedia.org/wiki/IBM_ESA/390
OS/390 was installed on ESA/390 computers, which were 32-bit computers with 31-bit addressing.
For either z/OS or OS/390, I believe you can do a D IPLINFO and look for ARCHLEVEL. ARCHLEVEL 1 = 31-bit, ARCHLEVEL 2 = 64-bit. But it's been a very long time since I've been on an OS/390 system.

What is the current status of GHC on 64 bit Windows?

My current understanding is
No 64-bit GHC, ticket #1884
The 32-bit GHC and the binaries it builds work just fine, because the Windows OS loader converts OS calls and pointers to 64 bits. The same applies to DLLs.
No mixing 32-bit and 64-bit code (i.e., your 32-bit Haskell DLL isn't going to be friends with the 64-bit program that wants to use it)
Latest discussion is a thread started in May 2011
Is this correct? Are there any pitfalls to watch out for, particularly as an FFI user? For example, if I were to export some Haskell code as a 32-bit DLL to some Windows program, should I expect it to work?
Edit: looks like you'd need a 64-bit DLL to go with a 64-bit process
I don't know if anyone's actively working on a 64-bit codegen right now, but 32-bit Haskell will work just fine as long as you're only talking to 32-bit FFI libraries (and/or being embedded in 32-bit host programs). If you want to interact with 64-bit programs, you will need to use some form of IPC, as 32-bit and 64-bit code cannot coexist in one process.
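To see that rule from the host side, here is a hedged Windows C sketch (the DLL name is hypothetical): the loader refuses to map a DLL whose bitness doesn't match the process, failing with ERROR_BAD_EXE_FORMAT (error 193).

    /* Attempt to load a DLL; a 32-bit DLL in a 64-bit process (or vice
     * versa) fails with ERROR_BAD_EXE_FORMAT rather than loading. */
    #include <stdio.h>
    #include <windows.h>

    int main(void) {
        HMODULE h = LoadLibraryA("MyHaskellExport.dll");  /* hypothetical name */
        if (!h) {
            DWORD err = GetLastError();
            if (err == ERROR_BAD_EXE_FORMAT)
                printf("bitness mismatch between DLL and process\n");
            else
                printf("LoadLibrary failed: %lu\n", err);
            return 1;
        }
        printf("loaded OK: DLL and process bitness match\n");
        FreeLibrary(h);
        return 0;
    }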
64-bit Windows is supported now. There is a binary distribution of 64-bit GHC.
No 64-bit Haskell Platform yet, though.

Linux clock_gettime(CLOCK_MONOTONIC) strange non-monotonic behavior

Folks, in my application I'm using clock_gettime(CLOCK_MONOTONIC) to measure the delta time between frames (a typical approach in gamedev), and from time to time I'm facing strange behavior of clock_gettime(..): the returned values occasionally are not monotonic (i.e. the previous time is bigger than the current time).
Currently, if such a paradox happens I simply skip the current frame and start processing the next one.
The question is how can this be possible at all? Is it a bug in the Linux POSIX implementation of clock_gettime? I'm using Ubuntu Server Edition 10.04 (kernel 2.6.32-24, x86_64), gcc-4.4.3.
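For reference, a minimal sketch (assumed, not the asker's actual code) of the frame-delta pattern described, including the skip-the-frame workaround:

    /* Frame-delta timing with a guard against backwards steps.
     * On old glibc, link with -lrt for clock_gettime. */
    #include <stdio.h>
    #include <time.h>

    static double to_sec(struct timespec t) {
        return t.tv_sec + t.tv_nsec / 1e9;
    }

    int main(void) {
        struct timespec prev, cur;
        clock_gettime(CLOCK_MONOTONIC, &prev);
        for (int frame = 0; frame < 1000; frame++) {
            clock_gettime(CLOCK_MONOTONIC, &cur);
            double dt = to_sec(cur) - to_sec(prev);
            if (dt < 0.0) {
                /* the paradox from the question: skip this frame */
                fprintf(stderr, "non-monotonic step: %g s\n", dt);
                prev = cur;
                continue;
            }
            /* ... update/render the frame using dt ... */
            prev = cur;
        }
        return 0;
    }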
man clock_gettime says:
CLOCK_MONOTONIC_RAW (since Linux 2.6.28; Linux-specific)
Similar to CLOCK_MONOTONIC, but provides access to a raw hardware-based time that is not subject to NTP adjustments.
Since CLOCK_MONOTONIC_RAW is not subject to NTP adjustments, I guess CLOCK_MONOTONIC could be.
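A tiny sketch to read the two clocks side by side (CLOCK_MONOTONIC_RAW needs Linux >= 2.6.28, so it is available on the asker's 2.6.32 kernel); over time the two values typically drift apart by whatever NTP has slewed:

    #define _GNU_SOURCE   /* for CLOCK_MONOTONIC_RAW on older glibc */
    #include <stdio.h>
    #include <time.h>

    int main(void) {
        struct timespec mono, raw;
        clock_gettime(CLOCK_MONOTONIC, &mono);      /* NTP-slewed */
        clock_gettime(CLOCK_MONOTONIC_RAW, &raw);   /* raw hardware rate */
        printf("CLOCK_MONOTONIC:     %ld.%09ld\n", (long)mono.tv_sec, mono.tv_nsec);
        printf("CLOCK_MONOTONIC_RAW: %ld.%09ld\n", (long)raw.tv_sec, raw.tv_nsec);
        return 0;
    }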
We had similar problems with Red Hat Enterprise Linux 5.0 with the 2.6.18 kernel and one specific Itanium processor. We couldn't reproduce it with other processors on the same OS. It was fixed in RHEL 5.3 with a slightly newer kernel and some Red Hat patches.
Looks like an instance of
commit 0696b711e4be45fa104c12329f617beb29c03f78
Author: Lin Ming <ming.m.lin@intel.com>
Date: Tue Nov 17 13:49:50 2009 +0800
timekeeping: Fix clock_gettime vsyscall time warp
Since commit 0a544198 "timekeeping: Move NTP adjusted clock
multiplier to struct timekeeper" the clock multiplier of vsyscall is updated with
the unmodified clock multiplier of the clock source and not with the
NTP adjusted multiplier of the timekeeper.
This causes user space observerable time warps:
new CLOCK-warp maximum: 120 nsecs, 00000025c337c537 -> 00000025c337c4bf
See here for a patch. This was included in 2.6.32.19, but may not have been backported by the Debian team(?). You should check it out.
Try CLOCK_MONOTONIC_RAW.
Sure sounds like a bug to me. Perhaps you should report it in Ubuntu's bug tracker.
It's a Linux bug. No adjustment to a monotonic clock can make it go backwards. You're using a very old kernel and a very old distribution.
Edit: are you sure you need to skip the frame? If you call clock_gettime again, what happens?

64-bit Linux, Assembly Language, Issues?

I'm currently in the process of learning assembly language.
I'm using GAS on Linux Mint (32-bit), using this book:
Programming from the Ground Up.
The machine I'm using has an AMD Turion 64 bit processor, but I'm limited to 2 GB of RAM.
I'm thinking of upgrading my Linux installation to the 64-bit version of Linux Mint, but I'm worried that because the book is targeted at 32-bit x86 architecture that the code examples won't work.
So two questions:
Are there likely to be any problems with the code samples?
Has anyone here noticed any benefits in general in using 64-bit Linux over 32-bit? (I've seen some threads on Stack Overflow about this, but they are mostly related to Windows Vista vs. Windows XP.)
Your code examples should all still work. 64-bit processors and operating systems can still run 32-bit code in a sort of "compatibility mode". Your assembly examples are no different. You may have to provide an extra line of assembly or two (such as .code32 in GAS), but that's all.
In general, using a 64-bit OS will be faster than using a 32-bit OS. x86_64 has more registers than i386. Since you're working on assembly, you already know what registers are used for... Having more of them means less stuff has to be moved on and off the stack (and other temporary memory) thus your program spends less time managing data and more time working on that data.
Edit: To compile 32-bit code on 64-bit Linux using GAS, you just use the command-line argument "--32", as noted in the GAS manual.
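A quick way to confirm that a 32-bit build really runs on a 64-bit OS (assuming gcc and the 32-bit runtime libraries are installed; file names are arbitrary):

    /* bits.c - prints the pointer width this binary was built with.
     * 32-bit build on a 64-bit distro:  gcc -m32 bits.c -o bits32
     * native 64-bit build:              gcc bits.c -o bits64
     * for hand-written GAS source, the analogous steps are:
     *   as --32 foo.s -o foo.o && ld -m elf_i386 foo.o -o foo */
    #include <stdio.h>

    int main(void) {
        printf("compiled with %zu-bit pointers\n", 8 * sizeof(void *));
        return 0;
    }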
Even if you run 64-bit Linux, it is possible to compile and run 32-bit binaries on it. I don't know how good Mint's support for that is; you should check.
64-bit assembly, however, is not fully compatible with 32-bit; for example, you have different (more) registers. There are some specific instructions that are not available on one platform or the other.
I would say the move to 64-bit is not a big deal. You can still write 32-bit assembly and then perhaps try to get it also running as 64-bit (shouldn't be too hard), as a source of even more programming/learning fun.
Usually 32 bits is plenty, so only use 64 bits or more if you really need it. It's best to decide before programming whether you want a 32-bit app or a 64-bit app, and then stick to it, as mixed-mode debugging can get tricky fast.
