How can I add a global variable to the Linux kernel source code?

I'm facing a situation where I need to declare a global variable that every file in the Linux kernel source can reference. As far as I know, Linux is a monolithic kernel, so this should be possible. So I added the global variable to init/main.c. However, when I use extern .. in other files, the linker reports an undefined reference. Any help?
Update
The more files that can access it, the better. To be more concrete, I'm tracing how many times a certain function executes, so I define a global variable and increment it every time the function runs. This is the most straightforward way to meet my needs.

Related

What is the best way to compile a Rust project on-the-fly with another Rust program?

I am exploring the idea of using Rust to dynamically compile a program with specific changes to the source code based on each user’s needs.
My idea was to have the “compiler” program load the main.rs source file for the project it’s going to compile. It uses string replacement to make the necessary changes, and saves the file.
Next I would just like to build the project (from another Rust program) and be able to grab the resulting target file.
Of course, I could probably just use std::process::Command, but is there a better method than manually invoking "cargo build" and then waiting an arbitrary number of seconds for the exe to be ready? With all of the online Rust compilers and such, I figured there is a better way.
There is no library with public API for the compiler at the moment.
You can check that the playground uses cargo as a command line tool: https://github.com/integer32llc/rust-playground/blob/806ce3ec134214356e93d8df751834f1eadc0d84/ui/src/sandbox.rs#L150
just use std::process::Command, but is there a better method rather than manually invoking “cargo build” and then waiting an arbitrary amount of seconds for the exe to be ready
You wouldn't need to wait an arbitrary amount of time. The command can be awaited, and when it completes, either the program compiled successfully and the file has been created, or there was an error and the compiler's exit code is non-zero.

RPG program error: Error MCH3601 was detected in file

We have been facing a very strange issue with one of our RPGLE programs that bombs intermittently with the error in the subject line.
This happens specifically at a line where a write operation is performed to a subfile record format. I have debugged and checked all the values assigned to variables during runtime and could find absolutely no issues. Per the IBM support page https://www.ibm.com/support/pages/node/644069, I can only assume that this might be related to the parameter definitions of the programs called within the RPG. But I have checked the parameters of each and every prototyped program call and everything seems to be in sync.
Can someone please guide me on the direction to take to find the root cause of this problem?
But I have checked the parameters of each and every prototyped program call
Assuming you're using prototypes properly, i.e., there is one prototype defined in a separate source member and it is /INCLUDEd into BOTH the caller and the callee...
Then prototype calls aren't the problem, as long as you're properly handling any *OMIT and *NOPASS parameters.
Look at any old-style CALL or CALLB calls, and anywhere you're not using prototypes properly, meaning there's an explicit PR coded in both caller & callee.
Note that it's not just old-style calls made by the program that bombs; it's calls made anywhere down the call chain.
And if the program is repeatedly called with LR=*OFF or without reclaiming resources, then it could be any old-style calls up the call chain as well.
Lastly, old-style calls include any made by CL or CLLE programs.
Good luck!

Libgit2 global state and thread safety

I'm trying to revise our codebase which seems to be using libgit2 wrong (at least TSAN is going crazy over how we use it).
I understand that most operations are object based (aka, operations on top of repo are localized to that repo), but I'm unclear when it comes to the global state and which operations need to be synchronized globally.
Is there a list of functions that require global synchronization?
Also when it comes to git_repository_open(), do I need to ensure that one path is only ever held by a single thread? I.e. do I need to prevent multiple threads accessing the same repo?

Is reading and writing process.env values synchronous?

Reading and writing environment variables in Node.js is done using the process.env object.
For instance:
process.env.foo evaluates to the env var foo
process.env.bar = 'blah' sets the value of the env var bar to blah
delete process.env.baz deletes the environment variable baz
From trial and error, and the lack of a callback, I assume these actions are synchronous, but I found no reference to it in the process.env documentation.
Is env var access synchronous or asynchronous in Node.js?
Addendum: Why I believe this question to be non-trivial
Following the comments: reading and writing environment variables might mean that the process needs to communicate with the operating system, or perform some sort of blocking I/O operation.
Therefore, it makes sense to ask whether the environment variables are stored as a local in-memory object without any synchronization, or are instead sent to the operating system in a blocking manner.
Moreover, the implementation may vary between operating systems, and the official documentation lacks any promise of a non-blocking operation.
I think the "synchronous"/"asynchronous" may be a bit misleading.
I guess the actual question is: Is reading from or writing to process.env expensive? Does it perform a blocking operation with the operating system?
The short answer is Yes, it can be expensive.
For more background and how much it can impact some apps, see this GitHub issue. It was already stated there in 2015 that the documentation should be updated to make it clear that accessing process.env is slow, but that still hasn't happened.
You can actually see the implementation for process.env in the node.js source code where it's obvious that any access will call one of the functions defined from here onwards.
Note: At the time of writing, this was defined in node.cc in a more straightforward way. The links above still point to the old implementation. Newer versions of node have process.env implemented in a separate file node_env_var.cc which can be found here, but it has more encapsulation, making it harder to follow for the purpose of this explanation.
Depending on the platform, this may have more or less of an impact.
It becomes most obvious on Windows, because there you can view a process' current environment from the outside (while in Linux, the /proc/.../environ file will retain its original contents when the environment was changed with setenv).
For example:
node -e "process.env.TEST = '123'; setInterval(() => {}, 1000);";
This will start a node process which creates a TEST environment variable in the current process' environment and then wait forever.
Now we can open a tool like Process Explorer or Process Hacker and look at the environment of the node process.
And lo and behold, the variable is there. This proves in another way that writing to process.env does in fact access the operating system.
Also, because the object actually queries all data from the OS, it means that it even behaves different than a normal object. Again, Windows example (because it's most quirky):
Windows matches environment variables case-insensitive.
> process.env.TEST = '123'
'123'
> process.env.tEsT
'123'
Windows has hidden environment variables starting with = which cannot be changed through normal means and which are not enumerated. node.js replicates these semantics. The =X: variables in particular represent the current directory in specific drives (yes, Windows stores them per drive).
> Object.keys(process.env).filter(k => k === '=Z:')
[]
> process.env['=Z:']
'Z:\\'
> process.env['=Z:'] = 'Z:\\Temp'
'Z:\\Temp'
> process.env['=Z:']
'Z:\\'
> process.chdir('Z:\\Temp')
undefined
> process.env['=Z:']
'Z:\\Temp'
Now, somebody might think (similar to what was proposed in the GitHub issue that I linked) that node.js should just cache process.env in an actual object and, for child process creation, read the environment from the cached object. This is not advisable for the following reasons:
They would need to copy the semantics of the underlying platform and reimplement them. As you can see in the above example for Windows, this would at some point end up intercepting chdir and trying to automatically update the relevant =X: variable of the affected drive (and then it wouldn't work if a native plugin changed the current directory), or accessing the OS only for some variables, and therein lies madness and huge potential for obscure bugs.
This would break applications which read a process' environment from the outside (like Process Explorer), as they would see incorrect values.
This would create inconsistencies if a native module accessed the environment variables on its own from C++ code, because it would now see a different state than the cached object.
This would cause child processes to not inherit the correct variables if the child process were started by a native module (for the same reason as above).
This should also explain why it is a bad idea to do process.env = JSON.parse(JSON.stringify(process.env)) in your code. For one, it would break case-insensitivity on Windows (and you can't possibly know what modules which some other module requires may depend on that), and apart from that it would of course cause tons of other problems as described above.
Actually, it is a normal object so that you can get the environment variables of the current process; after all, they are just variables that carry some settings into a program. Node.js just sets up a normal object for them after reading them at startup. The documentation doesn't state this explicitly, but it does say that it is an object, along with the following:
It is possible to modify this object, but such modifications will not be reflected outside the Node.js process. In other words, the following example would not work:
$ node -e 'process.env.foo = "bar"' && echo $foo
While the following will:
process.env.foo = 'bar';
console.log(process.env.foo);
Assigning a property on process.env will implicitly convert the value to a string.
This is enough to explain your problem.

How to spawn ghci and pass a pointer to it where it can access later?

In my Haskell program I invoke ghci via createProcess in the System.Process module. I want to run some initialization code in ghci before entering a forever loop which gets user input and passes it to ghci for evaluation. One of the values is a Ptr a which is created using the FFI from C code. Another is a function imported from a shared lib which needs the pointer to do all the useful work. I can "create" this function directly in ghci or make ghci load a module file, but the pointer is the key problem.
That pointer actually points to a large struct and is volatile. Ideally, to make this work, ghci should be able to access the memory at any time. I'm not familiar with Linux processes and don't know if that is possible in this scenario. Or is there a better way to achieve the same effect? (Create a REPL environment with additional helper functions ready for the user to use interactively.)
I prefer not to write my own repl code and not use hint.
