In node.js how can I know whether fs.stat() will return usable crtime and/or birthtime fields for a given file/path/volume/fs? - node.js

I recently learned that different OSes and even different filesystems under the same OS support different subsets of the timestamps returned by lstat.
The Stats object returned gives us four times, each in two different flavours:
atime: the last time this file was accessed
mtime: the last time this file was modified
ctime: the last time the file status was changed
birthtime: the creation time of this file
Each of these is a js Date object; atimeMs, mtimeMs, ctimeMs, and birthtimeMs are the same values expressed as milliseconds since the POSIX Epoch.
"Modified" means the file's contents were changed by being written to etc.
"Changed" means the file's metadata such as owners and permissions was changed.
Linux traditionally never supported the concept of birth time, but as more newer filesystems began to support it, support has recently been added to (hopefully) all relevant layers of the Linux stack, if I have read correctly.
But Windows and Mac both do support birth time as do their native filesystems.
Windows, on the other hand, did not traditionally have a concept of file change separate from file modification. But to comply with POSIX it added support at the API level and in NTFS. (It doesn't seem to be exposed anywhere in the GUI or on the command line, though.) FAT filesystems do not support it.
When I call lstat on a file on Windows on an NTFS drive the results for ctime look good. When I call it on a file on a FAT drive, ctime contains junk. (In my case it's always 2076-11-29T08:54:34.955Z for every file.)
I don't know if this is a bug.
I don't know what birthtime returns on Linux on filesystems that don't support it. Hopefully null or undefined but perhaps also garbage. I also don't know what Linux or Mac return in ctime for files on FAT volumes.
So is there a way in Node to get info on which of these features are supported for a given file/path/fs?


Okay to call a file core.foo?

On Unix-like operating systems, one should never call a file core because it might be overwritten by a core dump, so that name is de facto reserved for use by the operating system.
But what about with an extension, like core.c? It seems to me this should be perfectly okay; core and core.c are two distinct names.
Is the above correct, and names like core.foo are okay? Or am I missing anything?
On Unix-like operating systems,
I.e. on a system conforming to the Single UNIX Specification (or just to POSIX), or on a system belonging to the Unix family.
one should never call a file core because it might be overwritten by a core dump, so that name is de facto reserved for use by the operating system.
No. I do not think there is any specification that says core dumps go into a file named "core". It's all implementation-defined. From POSIX:
3.117 Core File
A file of unspecified format that may be generated when a process terminates abnormally.
If a core dump is created, where it is created, how, and what its contents are is all up to the implementation. FreeBSD has created a file named executable_name.core for ages, so this is not true "on Unix-like operating systems" in general. Your sentence could be made valid by changing the beginning to:
On a Linux system with a kernel version lower than 2.6 or 2.4.21, one should never call a file "core", because...
Kernels 2.6 and 2.4.21 are more than 15 years old. Newer kernels have /proc/sys/kernel/core_pattern, which allows specifying the filename and location, and even a process to pipe the core dump to. So if you are working with such an archaic kernel (lower than 2.6 or 2.4.21), or if the content of /proc/sys/kernel/core_pattern is naively set to core, then yes, you should be careful when creating a file named "core".
In today's world, that behavior hardly matters at all: on most Linux distributions systemd-coredump takes care of it and creates core dumps in /var/lib/systemd/coredump.
But what about with an extension, like core.c?
It's OK: core and core.c are different filenames. The dot is not a special character, it's just a character like any other; core.c differs from core just as much as core12 or coreAB does...
Is the above correct,
Yes.
and names like core.foo are okay?
Are okay.
Or am I missing anything?
No idea, but if you are, I surely hope you will find it.

What is file expansion?

I found this and couldn't understand it:
Example: Windows 8.3 filename expansion: “c:\program files” becomes “C:\PROGRA~1”
I tried to navigate to the two paths and they both worked.
Could anyone make it clear?
This is a holdover from the days of Windows 95, which extended the FAT filesystem (VFAT, and later FAT32) to support long filenames, which was part of the selling point of the system itself.
At the time there were still old DOS programs and old Win 3.1 programs that relied on the old 8.3 filename convention: eight characters, with three characters for the extension.
Windows 95 incorporated an API to convert automatically in both directions, whilst maintaining compatibility with the existing FAT system, even after using the FAT conversion utility. This ensured that the old applications on it would not break.
That API is still available to this day.
GetShortPathName with the long filename as parameter, returns the short 8.3, with abbreviation in the form of ~.
GetLongPathName with the 8.3 filename as parameter, returns the long filename.
Source found in MSDN
In ye olde days, the FAT file system used by MS-DOS only supported eight-character file names (plus a three-character extension).
When MS switched to the FAT32 file system that used longer names (and later to NTFS), this created migration issues. There were old systems that only supported 8+3 file names that would be accessing FAT32 disks over a network, and there was old software that only worked with 8+3 file names.
The solution MS came up with was to create short path names that used ~ and numbers to create unique 8+3 aliases for longer file names.
If you were on an old system and accessing a networked disk (or even using DOS commands on a FAT32 local disk early on):
c:\program files
became
C:\PROGRA~1
If you had
c:\program settings
That might come out as
C:\PROGRA~2
In short, this was then a system for creating unique 8+3 file names that mapped to longer file names so that they could be used with legacy systems and software.

Syncing a file system that has no file on it

Say I want to synchronize the data buffers of a file system to disk (in my case that of a USB stick partition) on a Linux box.
While searching for a function to do that I found the following
DESCRIPTION
sync() causes all buffered modifications to file metadata and data to be written to the underlying file systems.
syncfs(int fd) is like sync(), but synchronizes just the file system containing the file referred to by the open file descriptor fd.
But what if the file system has no file on it that I can open and pass to syncfs? Can I "abuse" the "." directory entry? Does it appear on all file systems?
Is there another function that does what I want? Perhaps by providing a device file with major / minor numbers or some such?
Yes, I think you can do that. The root directory of your file system will have at least one inode, for the root directory itself, so you can open "." (or the mount point) and pass that descriptor to syncfs. Play around with ls -i to see the inode numbers.
Could you also avoid your problem by mounting the file system with the sync option? Or do performance issues get in the way? Did you have a look at remounting? That can sync your file system as well in particular cases.
I do not know what your application is, but I suffered problems with synchronization of files to a USB stick with the FAT32 file system; it resulted in weird read and write errors. I cannot imagine any other valid reason why you should sync an empty file system.
From man 8 sync description:
"sync writes any data buffered in memory out to disk. This can include (but is not
limited to) modified superblocks, modified inodes, and delayed reads and writes. This
must be implemented by the kernel; The sync program does nothing but exercise the sync(2)
system call."
So note that it's all about modifications (modified inodes, superblocks, etc.). If you don't have any modifications, there is nothing to sync up.

File information on Unix-based file systems

When I create a new file (e.g. touch file.txt) its size equals 0 B.
I'm wondering where its information (size, last modified date, owner name, file name) is stored.
This information is stored on disk and managed by the kernel, of course, but I'd love to know more about it:
Where and how can I get it, using a programming language such as C, and how can I change it?
Can this information be changed simply from a program, or does the kernel prevent such operations?
I'm working on Unix-based file systems, and I'm asking especially about these filesystems.
On Unix systems, it is traditionally stored in the metadata part of a file's representation, called an inode.
You can fetch this information with the stat() call and inspect its fields; you can change the owner and permissions with chown() and chmod().
This information is retrievable using the stat() function (and others in its family). Where it's stored is up to the specific file system and for what should be obvious reasons, you cannot change them unless you have raw access to the drive -- and that should be avoided unless you're ok with losing everything on that drive.
The metadata such as owner, size, and dates is usually stored in a structure called an index node (inode); the filesystem's superblock records where the inode tables live on disk.

Are Linux's timezone files always in /usr/share/zoneinfo?

I'm writing a program that needs to be able to read in the time zone files on Linux. And that means that I need to be able to consistently find them across distros. As far as I know, they are always located in /usr/share/zoneinfo. The question is, are they in fact always located in /usr/share/zoneinfo? Or are there distros which put them elsewhere? And if so, where do they put them?
A quote from tzset(3):
The system timezone directory used depends on the (g)libc version. Libc4 and libc5 use /usr/lib/zoneinfo, and, since libc-5.4.6, when this doesn't work, will try /usr/share/zoneinfo. Glibc2 will use the environment variable TZDIR, when that exists. Its default depends on how it was installed, but normally is /usr/share/zoneinfo.
Note, however, that nothing prevents some perverse distro from patching libc and placing the files wherever they want.
The public-domain time zone database contains the code and data to handle time zones on Linux.
The public-domain time zone database contains code and data that represent the history of local time for many representative locations around the globe. It is updated periodically to reflect changes made by political bodies to time zone boundaries, UTC offsets, and daylight-saving rules. This database (often called tz or zoneinfo) is used by several implementations, including the GNU C Library used in GNU/Linux, FreeBSD, NetBSD, OpenBSD, Cygwin, DJGPP, AIX, Mac OS X, OpenVMS, Oracle Database, Solaris, Tru64, and UnixWare.
That covers a lot of systems, but I can only agree with Roman that nobody can be prevented from creating a distribution that differs, for whatever reason. The existence and location of a zoneinfo directory is not covered by any official standard as far as I know. The standards (e.g. POSIX and XPG4) only establish the API.
