The Linux Programming Interface has an exercise in Chapter 3 that goes like this:
When using the Linux-specific reboot()
system call to reboot the system, the
second argument, magic2, must be
specified as one of a set of magic
numbers (e.g., LINUX_REBOOT_MAGIC2).
What is the significance of these
numbers? (Converting them to
hexadecimal provides a clue.)
The man page tells us magic2 can be one of LINUX_REBOOT_MAGIC2 (672274793), LINUX_REBOOT_MAGIC2A (85072278), LINUX_REBOOT_MAGIC2B (369367448), or LINUX_REBOOT_MAGIC2C (537993216). I failed to decipher their meaning in hex. I also looked at /usr/include/linux/reboot.h, which didn't give any helpful comment either.
I then searched in the kernel's source code for sys_reboot's definition. All I found was a declaration in a header file.
Therefore, my first question is, what is the significance of these numbers? My second question is, where's sys_reboot's definition, and how did you find it?
EDIT: I found the definition in kernel/sys.c. I only grepped for sys_reboot, and forgot to grep for the MAGIC numbers. I figured the definition must be hidden behind some macro trick, so I looked at the System.map file under /boot, and found it next to ctrl_alt_del. I then grepped for that symbol, which led me to the correct file. If I had compiled the kernel from source code, I could have tried to find which object file defines the symbol, and gone from there.
Just a guess, but those numbers look more interesting in hex:
672274793 = 0x28121969
85072278 = 0x05121996
369367448 = 0x16041998
537993216 = 0x20112000
Developers' or developers' children's birthdays?
Regarding finding the syscall implementation, I did a git grep -n LINUX_REBOOT_MAGIC2 and found the definition in kernel/sys.c. The symbol sys_reboot is generated by the SYSCALL_DEFINE4(reboot, ...) gubbins, I suspect.
These are the birthdays of Linus Torvalds (the developer of the Linux kernel and of the Git version control system) and his three daughters; they work as the magic numbers to reboot the system.
http://en.wikipedia.org/wiki/Linus_Torvalds
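For what it's worth, here is a minimal sketch (untested, and it needs root) showing where magic2 fits when invoking the raw syscall; the LINUX_REBOOT_* constants come from <linux/reboot.h>, and any of the MAGIC2* values listed above is accepted. Be careful: LINUX_REBOOT_CMD_RESTART really does reboot the machine.

#define _GNU_SOURCE             /* for syscall() */
#include <stddef.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/reboot.h>

int main(void)
{
    /* magic1 must be LINUX_REBOOT_MAGIC1; magic2 may be any of the
       MAGIC2* values, i.e. the hex-encoded birthdays shown above. */
    return syscall(SYS_reboot, LINUX_REBOOT_MAGIC1, LINUX_REBOOT_MAGIC2,
                   LINUX_REBOOT_CMD_RESTART, NULL);
}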
I have created a TensorFlow Lite .tflite model which I plan to use on a microcontroller. However, this file must be converted to a C source file, i.e., a TensorFlow Lite for Microcontrollers model. The TensorFlow documentation provides a simple way to convert it to a C array with the Unix command xxd. I am using Windows 10, do not have access to the Unix command, and there are no alternative Windows methods documented. After searching Super User, I saw that xxd for Windows now exists. I downloaded the command and ran it on my .tflite model. The results were different from the hello world example.
First, the hello world example model.h file has a comment that says it was "Automatically created from a TensorFlow Lite flatbuffer using the command: xxd -i model.tflite > model.cc". When I ran the command, model.h was not "automatically created".
Second, comparing the model.cc file from the hello world example with the model.cc file that I generated, they are quite different and I'm not sure how to interpret this (I'm not referring to the differences in the actual array). Again, the example model.cc file states that it was "automatically created" using the xxd command. Line 28 in the example is alignas(8) const unsigned char g_model[] = { and line 237 is const int g_model_len = 2488;. In comparison, the equivalent lines in the file I generated are unsigned char _________g_model[] = { and unsigned int _________g_model_len = 4009981;
As I am not a C expert, I am not sure how to interpret the differences in the files, or whether I have generated the model.cc file incorrectly. I would greatly appreciate any insight or guidance here on how to properly generate both the model.h and model.cc files from the original model.tflite file.
After doing some experiments, I think this is why you are getting differences:
xxd replaces any non-letter/non-digit character of the path to the input file by an underscore ('_'). Apparently you called xxd with a path for the input file that has 9 such leading characters, perhaps something like "../../../g.model". The syntax of C allows only letters (a to z, A to Z), digits (0 to 9) and underscore as characters of objects' names, and the names need to start with a non-digit. This is the only "manipulation" xxd does to the name of an input file.
Since xxd knows nothing about TensorFlow, it could not have generated the copyright notice. Using this as an indication, any other difference has been inserted by other means by the TensorFlow authors, despite the statement "Automatically created from a TensorFlow Lite flatbuffer ...". This could have been done manually or by a script; unfortunately, a quick search of their repository did not turn up any hint. Apparently the statement refers only to the data values.
So you need to edit your result (a sketch of the edited files follows this list):
Add any comment you see fit.
Add the compiler-specific alignas(8) to the array, if your compiler supports it.
Add the keyword const to the array and the length variable. This will tell the compiler to prohibit any write access, and it will probably place the data in read-only memory.
Rename array and length variables to g_model and g_model_len, respectively. Most probably TensorFlow expects these names.
Copy "model.cc" into "model.h", and then apply more editions, as the example demonstrated.
Don't be bothered by different array values; the different contents of the model files are the reason. The length variable is especially simple to check: it must have exactly the same value as the size of the input file.
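To make the list above concrete, here is a rough sketch of what the two edited files could look like. The names g_model and g_model_len and the length value 2488 are taken from the example quoted above; the byte values are placeholders for whatever xxd produced.

/* model.h -- hand-written wrapper; the declarations match the edited model.cc */
#ifndef MODEL_H_
#define MODEL_H_

extern const unsigned char g_model[];
extern const int g_model_len;

#endif  /* MODEL_H_ */

/* model.cc -- xxd output after the edits described above
   (alignas is a C++11 keyword; in C11 it needs <stdalign.h>) */
#include "model.h"

alignas(8) const unsigned char g_model[] = {
    0x00, 0x00, 0x00, 0x00,  /* ... the bytes produced by xxd ... */
};
const int g_model_len = 2488;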
EDIT:
On line 28, the example converted model shows the text alignas(8) const unsigned char. When I attempt to convert a model (whether it's my custom model or the "hello_world.tflite" example model), the text that appears on line 28 is just unsigned char (any other text on that line is not in question). How is line 28 edited, and why?
Concerning the "how": I firmly believe that the authors of TensorFlow literally used an editor (an IDE or a stand-alone program like Notepad++ or Geany) and edited the line, or used some script to automate this.
The reason for alignas(8) is most probably that TensorFlow expects the data with an alignment of 8 bytes, for example because it casts the byte array to a structure that contains values of 8 bytes width.
The insertion of const will also commonly place the model in read-only memory, which is preferable on most microcontrollers. If it were left out, the model's data would not only be writable, but would also occupy precious RAM.
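As a purely illustrative sketch of that alignment point (the structure layout below is made up, not TensorFlow's actual one): casting a byte array to a structure with 8-byte-wide members is only safe if the array is suitably aligned, which is what alignas(8) guarantees.

#include <stdalign.h>   /* alignas in C11 */
#include <stdint.h>
#include <stdio.h>

struct header {                 /* made-up layout with 8-byte-wide fields */
    uint64_t version;
    uint64_t root_offset;
};

/* Without alignas(8) the array is only guaranteed char alignment, and the
   cast below could fault or be slow on some targets. */
alignas(8) static const unsigned char g_blob[16] = { 0 };

int main(void)
{
    const struct header *h = (const struct header *)g_blob;
    printf("version field reads as %llu\n", (unsigned long long)h->version);
    return 0;
}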
On line 237, the text is specifically const int. When I attempt to convert a model (whether it's my custom model or the "hello_world.tflite" example model), the text that appears on line 237 is unsigned int (any other text on that line is not in question). Why are these two lines different in these specific places? It makes me believe that xxd on Windows is not functioning the same way.
Again, I firmly believe this was edited manually or by a script. TensorFlow might expect this variable to be of data type int, but any xxd I tried (Windows and Linux) generates unsigned int. I don't think that your specific version of xxd functions differently on Windows.
For const the same thoughts apply as above.
Finally, when I attempt to convert the example model "hello_world.tflite" file using the xxd for Windows utility, my resulting array doesn't match the example "hello_world.cc" file. I would expect the array values to be identical if xxd worked correctly. The last question is how to generate the "model.h" and "model.cc" files on Windows.
Did you notice that the model you link to is in another branch of the repository?
If I use the branch on GitHub from your link to "hello_world.cc", I find in "../train/README.md" the archive hello_world_2020_12_28.zip. I unpacked it and ran xxd on the included "model.tflite". The resulting data match the included "model.cc" in the archive, but they do not match the data of "hello_world.cc" in the same branch that you linked. The difference is already there.
My conclusion is that the example result was not generated from the example model. This happens because developers sometimes don't pay enough attention to what they commit. Yes, it's unfortunate, as it irritates and frustrates beginners like you.
But, as I wrote, don't let this give you a headache. Try the simple example and use the documentation as instructions on the process. Regard the differences in specific data as a quirk. You will encounter such things time and again when working with others' projects. It is quite normal.
My program, PKGDAYMONR, has the control option:
ctl-opt Main( CheckDailyPackages )
The CheckDailyPackages procedure has the following PI:
dcl-pi *n ExtPgm( 'PGMNAME' );
As you can see the ExtPgm parameter is not the name of the program. In fact, it’s what came over in the template source and I forgot to change it. Despite the wrong name in ExtPgm, the program runs without a problem.
If I remove that parameter and leave the keyword as just ExtPgm, I get the following message:
RNF3573: A parameter is required for the EXTPGM keyword when the
procedure name is longer than 10.
If I drop ExtPgm from the Procedure Interface altogether, it also complains:
RNF3834: EXTPGM must be specified on the prototype for the MAIN()
procedure.
So why is it that I have to specify a parameter if it doesn't matter what value I enter?
O/S level: IBM i 7.2
Probably worth pursuing as a defect with your service provider; presumably for most that would be IBM rather than a third party, since a third party would have to contact IBM anyhow, given that the perceived issue is clearly with their compiler. Beyond that, as my "Answer", I offer some thoughts:
IMO, and in apparent agreement with the OP, naming the ExtPgm seems pointless in the given scenario. I think the compiler is confused while trying to enforce, in validations of the implicitly generated prototype for the linear-main [for which only a Procedure Interface is supplied], requirements that are appropriate for an explicit prototype but that could be overlooked [thus are no longer requirements] in the given scenario. I am suggesting that while RNF3573 would seem appropriate for diagnosing EXTPGM specifications on an explicit prototype, that same effect is inappropriate [i.e. the validation should not be performed] for an implicit prototype that was generated by the compiler.
FWiW: Was the fixed-format equivalent of that free-form code tested, to see whether the same or a different error was the effect? The following source currently includes the EXTPGM specification with 'PGMNAME' as the argument [i.e. supplying a bogus 10-byte name to placate the compiler, just as is being done in the scenario of the OP, solely to effect a successful compile], but it could be compiled with the other variations by changing the source, mimicking what was done with the free-form variations, to test whether the same/consistent validations and errors are the effect:
- just EXTPGM keyword coded (w/out argument); is RNF3573 the effect?
- the EXTPGM keyword could be omitted; is RNF3834 the effect?
- the D-spec removed entirely (if there are no parameters defined); that was not one of the variations noted in the OP as having been tried, so... what is the effect?
H MAIN(CheckDailyPackages)
*--------------------------------------------------
* Program name: CheckDailyPackages (PGMNAME)
*--------------------------------------------------
P CheckDailyPackages...
P B
D PI EXTPGM('PGMNAME')
/free
// Work is done here
/end-free
P CheckDailyPackages...
P E
I got a response from IBM, and essentially Biswa was on to something; his answer simply wasn't clear (in my opinion).
Essentially the EXTPGM is required on long Main procedure names in order to support recursive program calls.
This is the response I received from IBM explaining the reason for the scenario:
The incorrect EXTPGM would only matter if there was a call to the main
procedure (the program) within the module.
When the compiler processes the procedure interface, it doesn't know
whether there might be a call that appears later in the module.
The EXTPGM keyword is used to define the external name of the program you want to prototype. If you specify EXTPGM, the program will be called dynamically.
Let us take an example in order to explain your query.
PGMA
D cmdExc PR ExtPgm('QSYS/QCMDEXC')
D 200A const
D 15P05 const
c callp cmdExc('CLRPFM LIB1/PF1':200)
C Eval *INLR = *ON
In the above example, cmdExc is used to make a dynamic call to QSYS/QCMDEXC.
When we use the program's own name as the EXTPGM parameter, it acts as an entry point to the program when it is called from other programs or procedures.
But in any case, whatever name we supply as the parameter, EXTPGM will not give any error at compile time; the error appears at run time, when the system tries to resolve the name.
Is there any way to have an Alex macro defined in one source file and used in other source files? In my case, I have definitions for $LowerCaseLetter and $UpperCaseLetter (these are all letters except e and O, since they have special roles in my code). How can I refer to these macros from other .x files?
Disproving that something exists is always harder than finding something that does exist, but I think the info below does show that Alex can only get macro definitions from the .x file it is reading (other than predefined stuff like $white), and not via includes from other files...
You can get the sourcecode for Alex by doing the following:
> cabal unpack alex
> cd alex-3.1.3
In src/Main.hs, predefined macros are first set in variables called initSetEnv (charset macros $white, $printable, and "."), and initREEnv (regexp macros, there are none). This gets passed into runP, in src/ParseMonad.hs, which is used to hold the current parsing state, including all defined macros. The initial state is set using the values passed in, but macros can be added using a function called newSMac (or newRMac for regular expression macros).
Since this seems to be the only way that macros can be set, it is then only a matter of some grep bookkeeping to verify that the only way macros can be added is through an actual macro definition in the source .x file. Unsurprisingly, Alex recursively uses its own .x/.y files for parsing .x source files (src/parser.y, src/Scan.x). It is a couple of levels of indirection away, but you can verify that the only way newSMac can be called is through the src/Scan.x macro:
#smac = \$ #id | \$ \{ #id \}
<0> #smac #ws? \= { smacdef }
Other than some obvious predefined stuff, I don't believe reuse in lexers is all that typical anyway, because at the token level things are usually pretty simple (often simple tokens like SPACE, WORD, NUMBER, and a few operators, symbols and parens are all that are needed). The complexity comes at the parsing stage, although for technical reasons parser includes aren't that common either (see scannerless parsing for a newer technology that does allow reuse through nesting, like JavaScript embedded in HTML... the tools for scannerless parsing are still pretty primitive, though).
If I had a file called raw_text.txt, is there a way I could iterate through each bit?
I see the following but am confused on how to use it:
http://www.gnu.org/software/mit-scheme/documentation/mit-scheme-ref/File-Manipulation.html
— procedure: file-attributes/mode-string attributes
The mode string of the file, a newly allocated string showing the file's mode bits. Under unix, this string is in unix format. Under Windows, this string shows the standard “DOS” attributes in their usual format.
EDIT: I am using mit-scheme
It's implementation-specific. On the Racket side of things, there are a few libraries:
http://planet.racket-lang.org/display.ss?package=bitsyntax.plt&owner=tonyg
http://planet.racket-lang.org/display.ss?package=bit-io.plt&owner=soegaard
You can probably use something like the binary-parse library as well: http://okmij.org/ftp/Scheme/binary-io.html, as long as your implementation of Scheme can support it.
Under MIT Scheme, you can use the bit-string functions.
I haven't actually tried to do anything with this, but I think you're looking for this section of the mit-scheme docs: Input/Output. Specifically the file ports and input procedures sections.
I didn't see anything specifically about reading the binary bits, but if it's character bytes you want, it looks like there are procedures for that. Maybe you want to do something like this?
(call-with-input-file "raw_text.txt" <procedure>)
or
(call-with-binary-file "raw_text.txt" <procedure>)
Where <procedure> will take the file port and use the input procedures to read things from that file.
Just out of curiosity, what are you trying to do?
EDIT: It appears that someone did a write-up on this here.
Is it possible to turn off abbreviation in getopt_long()? From the man page:
Long option names may be abbreviated if the abbreviation is unique or is an exact match for some defined option.
I want to do this because the specification I have received for a piece of code requires full-length exact match of the flags, and there are many flags.
Codeape,
It appears there isn't a way to disable the abbreviation feature. You aren't alone in wishing for this feature. See: http://sourceware.org/bugzilla/show_bug.cgi?id=6863
Unfortunately, it seems the glibc developers don't want the option, as the bug report linked above was resolved with "WONTFIX". You may be out of luck here :-\
If you use argp_parse() instead of getopt() (highly recommended, BTW), you can access the exact flag entered by the user through
state->argv[ state->next - 2 ]
It's a bit of a hack, but should work.
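For example, a rough, untested sketch (the option name --longoption is just an example; the -2 offset follows the expression above and assumes the option and its value were typed as two separate words):

#include <argp.h>
#include <stdio.h>
#include <string.h>

static const struct argp_option options[] = {
    { "longoption", 'l', "VALUE", 0, "Option whose exact spelling we require", 0 },
    { 0 }
};

static error_t parse_opt(int key, char *arg, struct argp_state *state)
{
    switch (key) {
    case 'l':
        /* The flag exactly as the user typed it.  With "--longoption=VALUE"
           (one word) the combined token sits at state->next - 1 instead. */
        if (strcmp(state->argv[state->next - 2], "--longoption") != 0)
            argp_error(state, "the option must be spelled out in full");
        printf("got value %s\n", arg);
        break;
    default:
        return ARGP_ERR_UNKNOWN;
    }
    return 0;
}

static const struct argp argp = { options, parse_opt, "ARGS", "Demo of exact-flag checking" };

int main(int argc, char **argv)
{
    return argp_parse(&argp, argc, argv, 0, NULL, NULL);
}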
This is not a perfect solution, but you can check the exact argument given by the user after calling getopt_long() (normally within the switch) like below:
if (strcmp(argv[optind-1], "--longoption") == 0)
optind points to the next argument to be processed. Thus, you can access the original argument using optind-1.
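A minimal, untested sketch of that check (the long option --longoption is just an example):

#include <getopt.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, char *argv[])
{
    static const struct option longopts[] = {
        { "longoption", no_argument, NULL, 'l' },
        { NULL, 0, NULL, 0 }
    };

    int c;
    while ((c = getopt_long(argc, argv, "", longopts, NULL)) != -1) {
        switch (c) {
        case 'l':
            /* getopt_long has already advanced optind past the option it
               just consumed, so argv[optind - 1] is the flag as typed. */
            if (strcmp(argv[optind - 1], "--longoption") != 0) {
                fprintf(stderr, "abbreviations such as %s are not accepted\n",
                        argv[optind - 1]);
                exit(EXIT_FAILURE);
            }
            puts("--longoption given in full");
            break;
        default:
            exit(EXIT_FAILURE);
        }
    }
    return 0;
}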