How to construct HTTP_REQUEST_HEADERS in native server push functionality, HttpDeclarePush - iis

I am trying to implement the IIS native server push functionality, HttpDeclarePush, in VC++. Below is the syntax for HttpDeclarePush (from MSDN).
ULONG WINAPI HttpDeclarePush(
    _In_     HANDLE                RequestQueueHandle,
    _In_     HTTP_REQUEST_ID       RequestId,
    _In_     HTTP_VERB             Verb,
    _In_     PCWSTR                Path,
    _In_opt_ PCSTR                 Query,
    _In_opt_ PHTTP_REQUEST_HEADERS Headers
);
While constructing the parameters, I could not work out how to populate the Headers parameter. Can anyone please let me know how to construct the HTTP_REQUEST_HEADERS structure in native C++?
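A minimal sketch of populating the structure, zero-initializing it and filling only the known headers needed, looks like this (DeclarePushForCss, requestQueueHandle and requestId are placeholder names, not from the question; the handle and request ID come from the request that triggers the push; link against httpapi.lib):

#include <windows.h>
#include <http.h>
#include <cstring>

// Hypothetical helper: declare a push of /styles/site.css for the request
// identified by requestQueueHandle/requestId. The header value buffers must
// stay valid until HttpDeclarePush returns.
ULONG DeclarePushForCss(HANDLE requestQueueHandle, HTTP_REQUEST_ID requestId)
{
    HTTP_REQUEST_HEADERS headers;
    ZeroMemory(&headers, sizeof(headers));

    // Known headers are indexed by the HTTP_HEADER_ID enum; entries left
    // zeroed are simply not sent with the pushed request.
    static const CHAR accept[]   = "*/*";
    static const CHAR language[] = "en-US";

    headers.KnownHeaders[HttpHeaderAccept].pRawValue      = accept;
    headers.KnownHeaders[HttpHeaderAccept].RawValueLength = (USHORT)strlen(accept);

    headers.KnownHeaders[HttpHeaderAcceptLanguage].pRawValue      = language;
    headers.KnownHeaders[HttpHeaderAcceptLanguage].RawValueLength = (USHORT)strlen(language);

    // No custom (unknown) headers in this sketch.
    headers.UnknownHeaderCount = 0;
    headers.pUnknownHeaders    = nullptr;

    return HttpDeclarePush(requestQueueHandle, requestId, HttpVerbGET,
                           L"/styles/site.css", nullptr, &headers);
}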

Related

Connecting to external unix domain socket from NDK JNI

I'm building a POC Android app that needs to communicate with an ELF binary over a Unix domain socket server that the binary binds to and listens on. The app is meant for rooted phones and executes the binary as a superuser upon launch. I need to connect with the binary from my client residing in native code, which I'm presently failing to do.
I'm using a self-ported, stripped-down version of libsocket to implement the domain socket functionality for both the binary and the Android app (through JNI). The binary communicates perfectly with a command-line client; however, it fails to connect with the client that I've implemented in JNI code. I've made sure that the binary is running from /data/data/<my_package_name>/files and that the server socket has public access (777).
While researching the above problem, I stumbled across the fact that NDK requires LocalSockets to be in the Linux abstract namespace. My server (arm binary) binds to an absolute path (/data/data/<my_package_name>/files/serversocket) as libsocket does not support the abstract namespace for unix domain sockets (due to its use of strlen() and strncpy(), which do not handle names beginning with \0).
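(As an aside, an abstract-namespace address can be built by hand when needed. The sketch below is my own illustration, not part of libsocket: it places the name after a leading NUL byte and passes an explicit address length, which is exactly what strlen()/strncpy()-based code cannot do.)

#include <stddef.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

// Connect to an abstract-namespace socket. "name" is the socket name without
// the leading NUL byte; it is added here. Returns a connected fd or -1.
int connect_abstract(const char* name)
{
    size_t len = strlen(name);
    if (len > sizeof(((struct sockaddr_un*) 0)->sun_path) - 1)
        return -1;

    int fd = socket(AF_LOCAL, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;

    struct sockaddr_un addr;
    memset(&addr, 0, sizeof(addr));
    addr.sun_family  = AF_LOCAL;
    addr.sun_path[0] = '\0';                  /* abstract namespace marker */
    memcpy(addr.sun_path + 1, name, len);     /* name follows the NUL byte */

    socklen_t addrlen = (socklen_t)(offsetof(struct sockaddr_un, sun_path) + 1 + len);
    if (connect(fd, (struct sockaddr*) &addr, addrlen) != 0) {
        close(fd);
        return -1;
    }
    return fd;
}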
The following is the code for create_socket from libsocket that's failing with a negative fd.
int create_socket(const char* path, int flags) {
    if (path == NULL) {
        return -1;
    }
    if (strlen(path) > sizeof(((struct sockaddr_un*) 0)->sun_path) - 1) {
        return -1;
    }
    int fd = socket(AF_LOCAL, SOCK_STREAM | flags, 0);
    if (fd < 0) {
        return -1;
    }
    struct sockaddr_un addr;
    memset(&addr, 0, sizeof(addr));
    addr.sun_family = AF_LOCAL;
    strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);
    // the connect call below fails, errno is set to 13 (EACCES)
    if (connect(fd, (struct sockaddr*) &addr, sizeof(addr.sun_family) + strlen(addr.sun_path))) {
        close(fd);
        return -1;
    }
    return fd;
}
EDIT:
In the above code, the call to connect() fails with errno set to 13 (EACCES). This looks like an insufficient-privileges problem.
I'm wondering if there's any way for me to connect my client to an absolute path from within the NDK. It works just fine when I package the client in an ELF executable that runs as superuser. Am I missing something obvious here?
To anyone who might be following this: it is necessary to set appropriate permissions on the socket pseudo-file manually every time the server is launched as root, otherwise connect() fails with errno set to EACCES. I'm yet to find a better solution to this.
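For illustration, a minimal sketch of that workaround on the server side (my own code, assuming the server binds to the absolute path from the question) is to relax the permissions on the socket pseudo-file right after bind():

#include <string.h>
#include <sys/socket.h>
#include <sys/stat.h>
#include <sys/un.h>
#include <unistd.h>

// Server-side sketch: bind to an absolute path, then chmod the resulting
// socket pseudo-file so the (non-root) app process is allowed to connect.
int create_server_socket(const char* path)
{
    int fd = socket(AF_LOCAL, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;

    struct sockaddr_un addr;
    memset(&addr, 0, sizeof(addr));
    addr.sun_family = AF_LOCAL;
    strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

    unlink(path);                              /* remove a stale socket file */
    if (bind(fd, (struct sockaddr*) &addr, sizeof(addr)) != 0 ||
        chmod(path, 0777) != 0 ||              /* allow the app's uid to connect */
        listen(fd, 1) != 0) {
        close(fd);
        return -1;
    }
    return fd;
}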

'ioctl' signature for device mapper

The question may seem naive, but I'm new to kernel/driver programming. I created a device mapper over a block device, which is working fine. Its constructor, destructor, and map methods are called.
Now, I'm trying to write an ioctl for this mapper. When ioctl is written for a device, it has the following signature:
int ioctl(int d, /* other args */);
A file structure/descriptor is expected by ioctl; an application process can easily supply one since it has access to the file.
But the ioctl for the device mapper has the following signature (in struct target_type):
typedef int (*dm_ioctl_fn)(struct dm_target *ti, unsigned int cmd,
                           unsigned long arg);
How can a user application reach the device mapper through ioctl without having knowledge of struct dm_target?
- ioctl, which stands for Input/Output Control, is a system call used in Linux to implement device operations that are not available as dedicated system calls in the kernel.
- Its major use is handling device-specific operations for which the kernel has no system call by default. For example, ejecting the media from a CD drive: an ioctl command is implemented to send the eject request to the drive.
- ioctl(fd, cmd, INPARAM or OUTPARAM); the third argument is the input or output parameter. When the operation is not a plain read or write of the device, ioctl is how you interact with it.
- Open ioctl.h and you will find more information; commands are defined like this (where _IOX is one of _IO, _IOR, _IOW, _IOWR):
#define IOCTL_NAME _IOX(magic_number, command_number, argument_type)
static long char_dev_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
    /* verify the argument using access_ok() */
    /* implement support for the ioctl commands */
}
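To connect this back to the question: from user space the application does not need struct dm_target at all. It opens the mapped block device node and issues the ioctl on that file descriptor; for a single-target mapping, the device-mapper core forwards the command to the target's dm_ioctl_fn. A rough sketch (the device name /dev/mapper/mydev and the command MY_DM_CMD are made up for illustration and must match what the target's handler expects):

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

/* Hypothetical command number; keep it in sync with the target's handler. */
#define MY_DM_CMD _IOW('M', 1, int)

int main(void)
{
    /* /dev/mapper/mydev is the node created when the mapping was set up. */
    int fd = open("/dev/mapper/mydev", O_RDWR);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    int value = 42;
    if (ioctl(fd, MY_DM_CMD, &value) < 0)   /* ends up in the target's dm_ioctl_fn */
        perror("ioctl");

    close(fd);
    return 0;
}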

How to export the GetHashInterface function?

I'm going to write a simple algorithm provider under CNG (Cryptography Next Generation), specifically a user-mode hash provider.
According to the instructions in the CNG Development Kit Help, "A hash provider must implement the GetHashInterface function and export it by name".
To implement an algorithm provider, I need to include the "bcrypt.h" file from the CNG Development Kit. This file also declares the GetHashInterface function, but WITHOUT an export directive:
__checkReturn
NTSTATUS
WINAPI
GetHashInterface(
    __in  LPCWSTR pszProviderName,
    __in  LPCWSTR pszAlgId,
    __out BCRYPT_HASH_FUNCTION_TABLE **ppFunctionTable,
    __in  ULONG dwFlags);
If I redefine the function in my header file as an exported function, for example:
#ifndef __CngHashProvider
#define __CngHashProvider
///////////////////////////////////////////////////////////////
#ifndef EXPORT
#define EXPORT extern "C" __declspec(dllexport)
#endif

EXPORT NTSTATUS WINAPI GetHashInterface(
    __in  LPCWSTR pszProviderName,
    __in  LPCWSTR pszAlgId,
    __out BCRYPT_HASH_FUNCTION_TABLE **ppFunctionTable,
    __in  ULONG dwFlags
);
////////////////////////////////////////////////////////////////
#endif // __CngHashProvider
I then get this error message:
Error C2375 'GetHashInterface': redefinition; different linkage
If I remove the EXPORT directive (or remove the whole redeclaration of the function), the error disappears, but the function is not exported from my DLL.
So please tell me how to export the required GetHashInterface function.
At the moment I "found" a way to solve the problem.
I copied the file bcrypt.h from the CNG Development Kit to my project folder and then removed the declaration of the GetHashInterface function. My project includes this modified header file instead of the original one.
I don't know if it is the right way, but it works for me.
You can use a .def file without needing to edit bcrypt.h. In Visual Studio: Add -> New Item -> Code -> Module-Definition File.
Just add this to the file:
LIBRARY "yourlibraryname"
EXPORTS
    GetHashInterface = GetHashInterface
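With the .def file in place, the implementation can simply match the declaration already in bcrypt.h, with no __declspec(dllexport) and no redeclaration. A minimal sketch (the statically allocated, zero-initialized function table is only illustrative; a real provider must fill in every callback the hash interface requires):

#include <windows.h>
#include <bcrypt.h>   // CNG Development Kit header declaring BCRYPT_HASH_FUNCTION_TABLE

NTSTATUS WINAPI GetHashInterface(
    __in  LPCWSTR pszProviderName,
    __in  LPCWSTR pszAlgId,
    __out BCRYPT_HASH_FUNCTION_TABLE **ppFunctionTable,
    __in  ULONG dwFlags)
{
    UNREFERENCED_PARAMETER(pszProviderName);
    UNREFERENCED_PARAMETER(pszAlgId);
    UNREFERENCED_PARAMETER(dwFlags);

    // Hypothetical table; in a real provider this is filled with the
    // provider's hash callbacks before being handed out.
    static BCRYPT_HASH_FUNCTION_TABLE hashFunctionTable = { 0 };

    *ppFunctionTable = &hashFunctionTable;
    return 0;   // STATUS_SUCCESS
}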

C++/CLI and GetLastError

I have created a C++ Test Project for my C++ library in Visual Studio 2010. The test project uses C++/CLI (/clr set) and I am having problems retrieving the last error set by my library functions; GetLastError always returns zero.
In the example below I want to test that the correct return value and last error is set by my Write function:
[TestMethod]
void Write_InvalidHandle_Error()
{
    char buffer[] = "Hello";
    DWORD actual = -1;
    DWORD expected = ERROR_INVALID_HANDLE;
    int actualRetVal = 0;
    int expectedRetVal = -1;
    HANDLE handle = INVALID_HANDLE_VALUE;

    actualRetVal = Write(handle, buffer);
    actual = GetLastError();

    Assert::AreEqual(expectedRetVal, actualRetVal);
    Assert::AreEqual(expected, actual);
}
I have checked my Write function and it does set the correct return value and last error, but the latter is not retrieved in my test method. The problem occurs even when I change the Write function to just set the error and return (and I call no other function before calling GetLastError in my test method):
int Write(HANDLE h, const char* buf)
{
    SetLastError(ERROR_INVALID_HANDLE);
    return -1;
}
Any idea how I can fix this? I assume there is a problem with C++/CLI because when I use my library outside of this testing scenario (pure C++) GetLastError works.
Relying on GetLastError()/SetLastError() across the managed/unmanaged boundary is problematic.
When using P/Invoke and the DllImport attribute you can (must) set the SetLastError property to get access to the native error code on the managed side.
When using C++/CLI, however, the compiler handles all marshalling for you, and explicitly does not set that flag.
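For comparison, an explicit P/Invoke declaration in C++/CLI that keeps the last-error code available could look like the following sketch (the DLL name "MyNativeLib.dll", the undecorated export name and the CheckWrite helper are assumptions for illustration):

#include <windows.h>

using namespace System;
using namespace System::Runtime::InteropServices;

// Explicit DllImport: SetLastError = true tells the marshaller to capture the
// Win32 error code immediately after the native call returns.
[DllImport("MyNativeLib.dll", SetLastError = true, CharSet = CharSet::Ansi)]
extern "C" int Write(IntPtr handle, String^ buffer);

void CheckWrite()
{
    int ret = Write(IntPtr(INVALID_HANDLE_VALUE), "Hello");
    if (ret == -1)
    {
        // Read the captured error via Marshal, not via GetLastError().
        int lastError = Marshal::GetLastWin32Error();
        Console::WriteLine("Write failed, error {0}", lastError);
    }
}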
You can read some more details about it in this blog post. The gist of it is:
If you use DllImport explicitly in C++, the same rules apply as with
C#. But when you call unmanaged APIs directly from managed C++ code,
neither GetLastError nor Marshal.GetLastWin32Error will work reliably.
This is also covered at length in Chapter 9 of "Expert Visual C++/CLI" by Marcus Heege which is available on Google Books:
As mentioned before, for these native local functions, C++/CLI
automatically generates P/Invoke metadata without the lasterror flag,
because it is very uncommon to use the GetLastError value to
communicate error codes within a project. However, the MSDN
documentation on GetLastError allows you to use SetLastError and
GetLastError for your own functions. Therefore, this optimization can
theoretically cause wrong GetLastError values.
Basically, don't do it!
I would recommend using (native) C++ exceptions to communicate errors between managed and unmanaged code; C++/CLI supports these very nicely. If you can't modify your Write() function directly, you could create a wrapper function on the unmanaged side that calls GetLastError() and throws an exception if necessary, as sketched below.
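A minimal sketch of such a wrapper, kept in native code (the name WriteOrThrow is mine, and it assumes the Write(HANDLE, const char*) signature shown in the question):

#include <windows.h>
#include <system_error>

// Declaration from the library's header (signature as in the question).
int Write(HANDLE h, const char* buf);

#pragma unmanaged   // keep the wrapper native so GetLastError() stays reliable
int WriteOrThrow(HANDLE h, const char* buf)
{
    int ret = Write(h, buf);
    if (ret == -1)
    {
        DWORD lastError = GetLastError();
        throw std::system_error(static_cast<int>(lastError),
                                std::system_category(), "Write failed");
    }
    return ret;
}
#pragma managed

The C++/CLI test can then catch the exception (or let the test framework report it) instead of comparing GetLastError() values across the managed/unmanaged boundary.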

How do I use this type of typedef?

In my API header file there is this (not my code):
typedef void (WINAPI *PIN_FUNC)(char*,LPVOID);
__PINLIB__ int WINAPI PIN_GetNumeric(int m_id,char * Message,PIN_FUNC func,LPVOID Param);
and at one point I had it working by doing this in my code
static void WINAPI Pinpad_Handle(char *buf, LPVOID pParam);
void WINAPI PinpadHelper::Pinpad_Handle( char *buf, LPVOID pParam){...}
but I get the distinct feeling that I'm doing it wrong, and being new to VC++ I don't know how to fix it. The tutorial I read on typedef mainly talked about variables and abstraction and so forth (I understood that side of it). I thought that I could do this:
static PIN_FUNC PinpadEvent(char* buffer, LPVOID pParam);
but that throws an error in Visual Studio. How do I do this properly, or did I have it right the first time?
You were doing it right originally. PIN_FUNC is just a C function pointer declaration. You must write a C function with the exact same signature as the function pointer, which you did; there is nothing wrong with your original code. The compiler would have generated an error if you had got the function signature wrong when assigning the function pointer. Lots of programmers make the mistake of casting that error away; that's a fatal mistake that bombs badly at runtime, so never do that.
Not so sure what you tried to do in the last snippet. If you want to declare your own variable that stores the function pointer then that needs to look like this:
static PIN_FUNC PinpadEvent;
Of course it still needs to be assigned. Check your favorite C language programming book about function pointers if you are still confused.
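For completeness, here's a sketch of wiring the callback into the API from the question (apart from PIN_FUNC and PIN_GetNumeric, the names, including the header file name, are hypothetical):

#include <windows.h>
#include "pin_api.h"   // hypothetical name for the header that declares PIN_FUNC and PIN_GetNumeric

// Free function with the exact PIN_FUNC signature; the library calls it back
// with the entered data and the context pointer passed to PIN_GetNumeric.
static void WINAPI Pinpad_Handle(char* buf, LPVOID pParam)
{
    // ... handle the PIN pad data here ...
}

void RequestPin()
{
    int terminalId = 1;                 // hypothetical terminal id
    char message[] = "Enter PIN";
    LPVOID context = nullptr;           // forwarded to the callback as pParam

    // Pass the function itself; it converts to PIN_FUNC without a cast
    // because its signature matches the typedef exactly.
    PIN_GetNumeric(terminalId, message, Pinpad_Handle, context);
}

A static member function such as the asker's PinpadHelper::Pinpad_Handle works just as well, as long as it stays static (no hidden this pointer) and keeps the WINAPI calling convention.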
