Is there any way to read macOS's "st_flags" field returned by stat() from Node.js?

Today I found out that one of the several ways in which macOS can mark files as hidden is an extra set of "flags" in its stat struct, inherited from BSD.
The chflags system call sets these flags, and the stat call allows reading them.
On Mac you can see them by doing ls -lO. On the BSDs you can see them by doing ls -lo.
I've found that at least the Python that ships with macOS supports these extra fields (I'm not sure whether all Python builds do), and of course I can access them from C.
But is there any way to access them from Node? I usually consider Node pretty good at this kind of thing, but these flags are definitely not exposed in its stat support, nor can I find them in any module on npm or mentioned anywhere on the net. I also don't know how hard it would be to do this in C and call it from Node, but I'm about to look into it.
But perhaps I'm missing something. Even if people rarely code for Mac-specific stuff these days, I'm more surprised that it hasn't been added so the BSDs could make use of it. Maybe Node.js hasn't been popular on those platforms?
So how might I read and set these flags from Node.js code?
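(For reference, here is roughly what the C side looks like; this is a plain C sketch of the stat()/chflags() calls mentioned above, not a Node binding. UF_HIDDEN is the macOS/BSD "hidden" bit; wrapping something like this in a native addon would be one route.)

/* Sketch: reading and setting the BSD file flags on macOS/BSD.
   Compile with: cc flags.c -o flags */
#include <stdio.h>
#include <unistd.h>     /* chflags() */
#include <sys/stat.h>   /* struct stat (st_flags), UF_HIDDEN */

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s file\n", argv[0]);
        return 1;
    }

    struct stat st;
    if (stat(argv[1], &st) != 0) {
        perror("stat");
        return 1;
    }
    printf("st_flags = 0x%x (hidden: %s)\n",
           (unsigned)st.st_flags,
           (st.st_flags & UF_HIDDEN) ? "yes" : "no");

    /* Set the "hidden" flag, keeping whatever flags were already set. */
    if (chflags(argv[1], st.st_flags | UF_HIDDEN) != 0) {
        perror("chflags");
        return 1;
    }
    return 0;
}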

Related

Linux: How to find out which (sub) dependency of my library needs a specific library?

The title may seem complicated.
I made a library to be loaded within a Tcl script. Now I need to transfer it to Ubuntu 12.04.
Tclsh gives the following error:
couldn't load file "/apollo/applications/Linux-PORT/i586/lib/libapmntwraptcl.so":
libgeos-3.4.2.so:
cannot open shared object file: No such file or directory
while executing "load $::env(ACCLIB)/libapmntwraptcl[info sharedlibextension]"
Ubuntu 12.04 doesn't ship version 3.4.2 of libgeos. So I need to know which (sub-)dependency of my library needs the famous libgeos-3.4.2.so, so that I can rebuild it or find an alternative.
Many thanks in advance.
Edit:
Thank you for your answers. I already tried ldd -v and ldd -r; I get 200+ dependencies with ldd -r. Worse, in the result list I see libgeos-3.3.8.so => /usr/lib/libgeos-3.3.8.so (0xb3ea9000) (the version I do have), but when I execute, tclsh says
libgeos-3.4.2.so is missing.
That's why I need something able to tell me the complete dependency tree of my library.
Could anyone give me a concrete hint?
Thank you so much.
You've accidentally (probably through no fault of your own) wandered into “DLL Hell”; the problem is that something libapmntwraptcl.so depends on, possibly indirectly, does not have its own dependencies satisfied. This sort of thing can be very difficult to solve, precisely because the tools that know what went wrong (in particular, the system dynamic linker) produce so little informative output by default.
What's even worse is that you apparently have multiple versions around. That's where DLL Hell reaches its worst incarnation. You need to be a detective to solve this, and it's hard to do sensibly from a distance, because many of the things you'd poke your fingers at depend on what the previous steps revealed.
You need to identify exactly what versions you're loading, with ldd libapmntwraptcl.so (in your shell, not in Tcl). You also need to double check what your environment variables are immediately before the offending load command, as several of them can affect the loading process. The easiest way to do that is to put parray env just before the offending load, which will produce a dump of everything in the context where things could be failing; reading the manual page for ld.so will tell you a lot more about each of the possible candidates for trouble (there's many!).
You might also need to go through the list of libraries identified by the ldd program above and check whether each of those also has all its dependencies satisfied, and in a way that you expect; you should also bear in mind that a failure to locate something with ldd might not mean that the real load actually fails. (That would be too easy.)
You can also try setting the LD_DEBUG environment variable to all before doing the load. That will produce quite a lot of diagnostic output; maybe it will give you enough to figure out what is going wrong.
Finally, on Linux you need to bear in mind that there can be an RPATH set for a particular library (which can affect where it is found) and there's a system library cache which can also affect things.
I'm really sorry the error message isn't better. All I can really say is that it's exactly as much as Tcl is told about what went wrong, and it's hardly anything.
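One more trick worth trying: Tcl's load on Linux is essentially a dlopen(3) call, so a small standalone probe reproduces the failure outside Tcl and can be combined freely with LD_DEBUG or a modified LD_LIBRARY_PATH. A minimal sketch (pass your library's path on the command line):

/* Minimal dlopen probe (illustration only): reproduces the loader error
   outside of Tcl. Build with: cc probe.c -o probe -ldl */
#include <stdio.h>
#include <dlfcn.h>

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s /path/to/library.so\n", argv[0]);
        return 1;
    }
    void *handle = dlopen(argv[1], RTLD_NOW);   /* resolve everything eagerly */
    if (handle == NULL) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }
    puts("loaded fine");
    dlclose(handle);
    return 0;
}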

Will writing C in both Windows and Linux cause compiling problems?

I work from 2 different machines. One is Windows and the other is Linux. If I alternately work on the same project but switch between both OSes, will I eventually run into compiling errors? I ask because maybe there are standards supported by one but not by the other.
That question is pretty broad, and the answer depends, strictly speaking, on your toolchain. If you use the same toolchain on both sides (e.g. GCC/MinGW or Clang), you minimize the chance of this class of errors. If you use Visual Studio on Windows and GCC or Clang on the Linux side, you'll run into more issues simply because some of the headers differ. Once your program leaves the realm of strict ANSI C (C89), you're more on your own.
However, if you aren't careful you may run into plenty of more mundane errors, such as the compiler on Linux choking on Windows line endings because you didn't tell your editor on the Windows side to use Unix-style endings.
Ah, and also keep in mind that if you want to actually cross-compile, GCC may be the best choice, which makes the first part of my answer a moot point. GCC is a proven choice on both ends. And given your question, it's unlikely that you're trying to write something like a kernel-mode driver, which would be fundamentally different.
That should only be an issue if your application uses some platform-specific API.
It is entirely possible to write code that compiles and works on both platforms, but it is not without some difficulties. Compilers let you use non-standard extensions, and it's often hard to build fancier user interfaces (even text-only ones), because as soon as you want to do more than "read a line of text as it is entered in a shell", you're into non-standard territory.
If you do find yourself needing to do more than what the standard C library can do, make sure you isolate those parts of the code into a separate file (or a couple of files, one for Linux/Unix style systems and one for Windows systems).
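A minimal sketch of what that split might look like (the helper name portable_sleep_ms and the three-file layout are just illustrative):

/* platform.h - one declaration shared by both implementations */
#ifndef PLATFORM_H
#define PLATFORM_H
/* Returns the milliseconds slept, or -1 on error. */
int portable_sleep_ms(int ms);
#endif

/* platform_win32.c - compiled only on Windows */
#ifdef _WIN32
#include <windows.h>
#include "platform.h"
int portable_sleep_ms(int ms) { Sleep(ms); return ms; }
#endif

/* platform_posix.c - compiled only on Linux/Unix-style systems */
#ifndef _WIN32
#include <time.h>
#include "platform.h"
int portable_sleep_ms(int ms)
{
    struct timespec ts = { ms / 1000, (ms % 1000) * 1000000L };
    return nanosleep(&ts, NULL) == 0 ? ms : -1;
}
#endif

The rest of the program only ever includes platform.h, so the platform-specific code stays in one place.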
Using the same compiler (gcc) on both sides helps avoid problems of the "compiler B doesn't compile code that works fine in compiler A" kind.
But it's far from an absolute necessity. Just make sure you compile the code on both platforms, and with all of your "supported" compilers, often enough that you discover "it's not working on the other system" before you've dug a very deep hole that's hard to get out of. It certainly helps to have (at least) a virtual machine running the other OS, so you can easily try both variants.
Ideally, you want to set up an automated system, such that when you change the code [and feel that the changes are "complete"], it automatically gets built on both platforms and all compilers you want to use. And if possible, also automatically tested!
I would also seriously consider using version control. That way, when something breaks on one or the other side, you can go back and look at what the code looked like before it stopped working, and (hopefully) find the reason it broke much quicker than "Hmm, I think it's the change I made to foo.c, let's take that out... No, not that one, OK, how about the change here...". With version control you can say "OK, version 1234 doesn't work, let's try version 1220 - OK, that works. Now try 1228, still works - so the change is between 1229 and 1234 - try 1232, ah, it's broken...", with no hand-editing of files, and you can still go to any other version you like with very little difficulty. I have used Mercurial quite a bit, git a little, some Subversion, and worked on a project in Perforce for a few years. All of these are good; personally, I think I prefer Mercurial.
As a side effect, most version control systems also deal with file names and line endings in a saner way than handling them by hand.
If you combine your version control system with an automated build-and-test system such as Jenkins, you can automate nearly everything. Jenkins is free and runs on both Windows and Linux, and you can use it to automatically build and test your code as and when you submit it to the version control system.
It will not be a problem as long as you recompile the source code on the respective OS. If you want to run a compiled file generated on Windows (.exe or .obj) on Linux, or vice versa, that will definitely be a problem and won't work. But you can move your source code (.c/.cpp files) between either OS. Different header files can also cause problems, so take care of that as well. A common practice is to use a single OS for the entire project and avoid multiple OSes unless it is really necessary.

where to find Linux version sys/queue.h header file?

sys/queue.h first appeared in 4.4BSD. Linux includes it in its C library, but the version seems not to be up to date.
The FreeBSD version implements singly-linked lists, singly-linked tail queues, lists and tail queues. The Linux version implements lists, tail queues, and circular queues.
I installed the libbsd-dev package on my Ubuntu PC and then found the BSD version of sys/queue.h in /usr/include/bsd/sys/queue.h.
My questions:
Where can I find the Linux version of this header file?
What's the main difference between these two implementations? Is the Linux version just an outdated version of BSD's?
They share the same ancestry, but it looks like any development that has been done on them diverged a long time ago.
If you want to use it in your project, your best bet is to copy the version you like the most into your project and use that. Don't depend on the system providing it for you. It's just a header file with a bunch of macros; it doesn't need a library or any other dependencies to work, and as such isn't operating-system specific at all. I usually take the one from OpenBSD for my projects.
It looks like Linux's version is seriously outdated. CIRCLEQ has been (rather strongly) deprecated in the BSDs since 2001, and it was even removed from the documentation, although the implementation is still in queue.h. You are supposed to use TAILQ instead, which offers the same functionality with better performance, fewer problems and a saner implementation.
Meanwhile, on Linux it is still documented, but you can find kconfig changes migrating from CIRCLEQ to TAILQ that cite the BSD deprecation.
The concrete problem with CIRCLEQ seems to be that it uses a dedicated head structure, different from a list node, which is nevertheless linked in as if it were a node; so the head pointer has to be kept around and checked at every node access to see whether the node is actually the head. That means two problems: the check at every access, and the need to keep the head pointer at hand, consuming registers or cache.
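For comparison, here is a minimal TAILQ sketch using the BSD-style macros (the struct and field names are made up for the example; on Ubuntu with libbsd-dev you'd include <bsd/sys/queue.h> instead):

/* Minimal TAILQ example; works with the glibc and BSD versions alike. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/queue.h>

struct item {
    int value;
    TAILQ_ENTRY(item) entries;      /* the per-node linkage */
};

TAILQ_HEAD(item_list, item);        /* declares 'struct item_list' */

int main(void)
{
    struct item_list list;
    TAILQ_INIT(&list);

    for (int i = 0; i < 3; i++) {
        struct item *it = malloc(sizeof *it);
        it->value = i;
        TAILQ_INSERT_TAIL(&list, it, entries);
    }

    struct item *it;
    TAILQ_FOREACH(it, &list, entries)
        printf("%d\n", it->value);

    while (!TAILQ_EMPTY(&list)) {
        it = TAILQ_FIRST(&list);
        TAILQ_REMOVE(&list, it, entries);
        free(it);
    }
    return 0;
}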

Suppressing system calls when using gcc/g++

I have a portal on my university LAN where people can upload solutions to programming puzzles in C/C++. I would like to make the portal secure so that people cannot make system calls via their submitted code. There might be several workarounds, but I'd like to know whether I could do it simply by setting some clever gcc flags. libc by default provides <unistd.h>, which appears to be the main header where the system call wrappers are declared. Is there a way I could tell gcc/g++ to 'ignore' this file at compile time so that none of the functions declared in unistd.h can be accessed?
Some particular reason why chroot("/var/jail/empty"); setuid(65534); isn't good enough (assuming 65534 has sensible limits)?
Restricting access to the header file won't prevent you from accessing libc functions: they're still available if you link against libc - you just won't have the prototypes (and macros) to hand; but you can replicate them yourself.
And not linking against libc won't help either: system calls could be made directly via inline assembler (or even tricks involving jumping into data).
I don't think this is a good approach in general. Running the uploaded code in a completely self-contained virtual sandbox (via QEMU or something like that, perhaps) would probably be a better way to go.
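To make the inline-assembler point concrete, here is a rough sketch of a raw write(2) on x86-64 Linux that needs no header and no libc prototype at all (architecture-specific, shown only to illustrate why hiding unistd.h doesn't help):

/* Direct system call on x86-64 Linux, bypassing libc prototypes entirely.
   SYS_write is 1 on this architecture; the kernel is entered via 'syscall'. */
static long raw_write(int fd, const void *buf, unsigned long len)
{
    long ret;
    __asm__ volatile ("syscall"
                      : "=a"(ret)
                      : "a"(1L),           /* rax: syscall number (write) */
                        "D"((long)fd),     /* rdi: fd  */
                        "S"(buf),          /* rsi: buf */
                        "d"(len)           /* rdx: len */
                      : "rcx", "r11", "memory");
    return ret;
}

int main(void)
{
    raw_write(1, "hello\n", 6);
    return 0;
}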
-D can override individual function names. For example:
gcc file.c -Dchown -Dchdir
Or you can set the include guard yourself:
gcc file.c -D_UNISTD_H
However, their effects can easily be reverted with #undef by clever submitters :)
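For example, assuming the grader compiled with gcc file.c -Dchown -D_UNISTD_H, a submission along these lines gets everything back:

/* Undo the command-line defines before pulling the header back in. */
#undef chown        /* cancels -Dchown */
#undef _UNISTD_H    /* cancels the faked include guard (glibc's guard macro) */
#include <unistd.h> /* prototypes and macros are available again */

int main(void)
{
    return chown("/tmp/somefile", 0, 0); /* compiles and links again */
}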

Are there good reasons not to exploit '#!/bin/make -f' at the top of a makefile to give an executable makefile?

Mostly for my amusement, I created a makefile in my $HOME/bin directory called rebuild.mk, and made it executable, and the first lines of the file read:
#!/bin/make -f
#
# Comments on what the makefile is for
...
all: ${SCRIPTS} ${LINKS} ...
...
I can now type:
rebuild.mk
and this causes make to execute.
What are the reasons for not exploiting this on a permanent basis, other than this:
The makefile is tied to a single directory, so it really isn't appropriate in my main bin directory.
Has anyone ever seen the trick exploited before?
Collecting some comments, and providing a bit more background information.
Norman Ramsey reports that this technique is used in Debian; that is interesting to know. Thank you.
I agree that typing 'make' is more idiomatic.
However, the scenario (previously unstated) is that my $HOME/bin directory already has a cross-platform main makefile in it that is the primary maintenance tool for the 500+ commands in the directory.
However, on one particular machine (only), I wanted to add a makefile for building a special set of tools. So, those tools get a special makefile, which I called rebuild.mk for this question (it has another name on my machine).
I do get to save typing 'make -f rebuild.mk' by using 'rebuild.mk' instead.
Fixing the position of the make utility is problematic across platforms.
The #!/usr/bin/env make -f technique is likely to work, though I believe the official rules of engagement are that the line must be less than 32 characters and may only have one argument to the command.
#dF comments that the technique might prevent you passing arguments to make. That is not a problem on my Solaris machine, at any rate. The three different versions of 'make' I tested (Sun, GNU, mine) all got the extra command line arguments that I type, including options ('-u' on my home-brew version) and targets 'someprogram' and macros CC='cc' WFLAGS=-v (to use a different compiler and cancel the GCC warning flags which the Sun compiler does not understand).
I would not advocate this as a general technique.
As stated, it was mostly for my amusement. I may keep it for this particular job; it is most unlikely that I'd use it in distributed work. And if I did, I'd supply and apply a 'fixin' script to fix the pathname of the interpreter; indeed, I did that already on my machine. That script is a relic from the first edition of the Camel book ('Programming Perl' by Larry Wall).
One problem with this for generally distributable Makefiles is that the location of make is not always consistent across platforms. Also, some systems might require an alternate name like gmake.
Of course one can always run the appropriate command manually, but this sort of defeats the whole purpose of making the Makefile executable.
I've seen this trick used before in the debian/rules file that is part of every Debian package.
To address the problem of make not always being in the same place (on my system for example it's in /usr/bin), you could use
#!/usr/bin/env make -f
if you're on a UNIX-like system.
Another problem is that, by using the Makefile this way, you cannot override variables by doing, for example, make CFLAGS=....
"make" is shorter than "./Makefile", so I don't think you're buying anything.
The reason I would not do this is that typing "make" is more idiomatic for building Makefile-based projects. Imagine if, for every project you built, you had to search for the differently named makefile someone created instead of just typing "make && make install".
You could use a shell alias for this too.
We can look at this another way: is it a good idea to design a language whose interpreter looks for a fixed file name if you don't give it one? What if Python looked for a Pythonfile in the absence of a script name? ;)
You don't need such a mechanism in order to have a convention based around a known name. Example: Autoconf's ./configure script.
