RPC communication between Linux and Solaris

I have an RPC server running on Solaris, and an RPC client that runs fine on Solaris.
When I compile and run the same client code on Ubuntu, the server reports "Error decoding arguments".
Solaris uses SunRPC (ONC RPC); I am not sure how to find the version of RPC in use on either system.
Is there any difference between the RPC implementations available on Linux & Solaris?
Could there be a mismatch between the XDR code generated on Solaris and on Linux?
How should I track down the issue?
Note: Code cannot be posted

@twalberg, @cppcoder Have you resolved the problem? I have the same problem, and I can post my code if it would be helpful. The relevant part of the code is:
/* now allocate a LoopListRequestStruct and fill it with request data */
llrs = malloc(sizeof(LoopListRequestStruct));
fill_llrs(llrs);

/* Now, make the client request to the bossServer */
client_call_status = clnt_call(request_client, ModifyDhctState,
                               (xdrproc_t)xdr_LoopListRequestStruct,
                               (caddr_t)llrs,
                               (xdrproc_t)xdr_void,
                               0,
                               dummy_timeval);

void fill_llrs(LoopListRequestStruct* llrs)
{
    Descriptor_Loop* dl = 0;
    DhctState_d *dhct_state_ptr = 0;
    PackageAuthorization_d *pkg_auth_ptr = 0;

    llrs->TRANS_NUM = 999999; /* strictly arbitrary, use whatever you want;  */
                              /* the bossServer simply passes this back in   */
                              /* the response; you can use it to match       */
                              /* request/response if you want, or you can    */
                              /* choose to ignore it                         */

    /* now set the response program number; this is the program number of   */
    /* the transient program that was set up using svc_reg_utils.[ch];      */
    /* it is that program that the response will be sent to                 */
    llrs->responseProgramNum = response_program_number;

    /* now allocate some memory for the data structures that will actually  */
    /* carry the request data */
    llrs->ARG_PTR = malloc(sizeof(LoopListRequestArgs));

    /* we are using a single descriptor loop at a time; this should always  */
    /* be the case */
    llrs->ARG_PTR->loopList.Loop_List_len = 1;
    llrs->ARG_PTR->loopList.Loop_List_val = malloc(sizeof(Descriptor_Loop));

    /* now allocate memory and set the size for the ModifyDhctConfiguration; */
    /* this transaction always has 3 descriptors: the DhctMacAddr_d, the     */
    /* DhctState_d, and the PackageAuthorization_d                           */
    dl = llrs->ARG_PTR->loopList.Loop_List_val;
    dl->Descriptor_Loop_len = 2;
    dl->Descriptor_Loop_val = malloc(2 * sizeof(Resource_descriptor_union));

    /* now, populate each descriptor; the order doesn't really matter, I'm  */
    /* just doing it in the order I always have done                        */

    /* first, the mac address descriptor */
    dl->Descriptor_Loop_val[0].type = dhct_mac_addr_type;
    strcpy(dl->Descriptor_Loop_val[0].Resource_descriptor_union_u.dhctMacAddr.dhctMacAddr,
           dhct_mac_addr);

    /* second, the dhct state descriptor */
    dl->Descriptor_Loop_val[1].type = dhct_state_type;
    dhct_state_ptr = &(dl->Descriptor_Loop_val[1].Resource_descriptor_union_u.dhctState);

    if (dis_enable)
        dhct_state_ptr->disEnableFlag = DIS_Enabled;
    else
        dhct_state_ptr->disEnableFlag = DIS_Disabled;

    if (dms_enable)
        dhct_state_ptr->dmsEnableFlag = DMS_Enabled;
    else
        dhct_state_ptr->dmsEnableFlag = DMS_Disabled;

    if (analog_enable)
        dhct_state_ptr->analogEnableFlag = AEF_Enabled;
    else
        dhct_state_ptr->analogEnableFlag = AEF_Disabled;

    if (ippv_enable)
        dhct_state_ptr->ippvEnableFlag = IEF_Enabled;
    else
        dhct_state_ptr->ippvEnableFlag = IEF_Disabled;

    dhct_state_ptr->creditLimit = credit_limit;
    dhct_state_ptr->maxIppvEvents = max_ippv_events;

    /* we don't currently use the powerkey pin; instead we use an          */
    /* application layer pin for purchases and blocking, so always turn    */
    /* pinEnable off */
    dhct_state_ptr->pinEnable = PE_Disabled;
    dhct_state_ptr->pin = 0;

    if (fast_refresh_enable)
        dhct_state_ptr->fastRefreshFlag = FRF_Enabled;
    else
        dhct_state_ptr->fastRefreshFlag = FRF_Disabled;

    dhct_state_ptr->locationX = location_x;
    dhct_state_ptr->locationY = location_y;
}

I've hit exactly this error during integration with the same software. The Linux build really does create a bad request. The reason for this behaviour is the serialization of a NULL C string: the glibc edition of Sun RPC can't encode one, and xdr_string() returns zero. The sample you are dealing with sets 'pin' to 0. Just replace 'pin' with "", or create a wrapper over xdr_string(), and the samples will work.
My patch to the PowerKey samples looks like this:
<     if (!xdr_string(xdrs, objp, PIN_SZ))
<         return (FALSE);
<     return (TRUE);
---
>     char *t = "";
>     return xdr_string(xdrs, *objp ? objp : &t, PIN_SZ);
but it can be made simpler, of course. In general you should fix the usage of the generated code; in my case it was the 'pin' variable in the sample sources provided by the software authors, which must be initialized before the xdr_string() call.
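A minimal sketch of such a wrapper (the name xdr_string_or_empty is mine, not from the PowerKey sources; it assumes you only need to guard against a NULL char * on the encode side):
#include <rpc/rpc.h>

/* Hypothetical helper: encode/decode a string, treating a NULL pointer
 * as the empty string so the glibc xdr_string() does not reject it. */
static bool_t
xdr_string_or_empty(XDR *xdrs, char **objp, u_int maxsize)
{
    char *empty = "";

    /* On encode, substitute "" for a NULL pointer; xdr_string() does not
     * modify the string it is given in this direction. */
    if (xdrs->x_op == XDR_ENCODE && *objp == NULL)
        return xdr_string(xdrs, &empty, maxsize);

    return xdr_string(xdrs, objp, maxsize);
}
You would then call this in place of the plain xdr_string() in the generated routine for 'pin' (or any other field that may legitimately be NULL).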

Note that XDR handles endianness for its own types, but if you use application-specific opaque fields, decoding will break unless you handle endianness yourself. Make sure integers are sent as XDR integers (e.g. via xdr_int()) rather than as raw bytes.
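For example, a sketch of the difference (the routines xdr_counter_portable/xdr_counter_opaque and the counter field are illustrative, not part of any real interface):
#include <rpc/rpc.h>
#include <string.h>

/* Portable: let XDR serialize the integer (big-endian on the wire). */
bool_t xdr_counter_portable(XDR *xdrs, int *counter)
{
    return xdr_int(xdrs, counter);
}

/* Fragile: hiding the integer inside an opaque blob copies the host's
 * native byte order, so a little-endian Linux client and a big-endian
 * SPARC Solaris server will disagree about the value. */
bool_t xdr_counter_opaque(XDR *xdrs, int *counter)
{
    char buf[sizeof(int)];

    if (xdrs->x_op == XDR_ENCODE)
        memcpy(buf, counter, sizeof(int));
    if (!xdr_opaque(xdrs, buf, sizeof(int)))
        return FALSE;
    if (xdrs->x_op == XDR_DECODE)
        memcpy(counter, buf, sizeof(int));
    return TRUE;
}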

Related

Is it possible to sample LOAD and STORE instructions at the same time in Intel PEBS sampling?

I am trying to use Intel PMU performance monitoring (PEBS) to sample all LOAD and STORE operations in a C/C++ application binary.
The codebase I am using calls perf_event_open() and selects either LOAD or STORE monitoring through the attr->config field, as shown in the code snippet below. I want to add another switch case to sample LOAD_AND_STORE operations, but I don't know what hex value to put into attr->config for the Intel PMU, analogous to the values currently present in the snippet for LOAD and STORE. I would appreciate any pointers or help.
Thanks in advance.
switch (aType)
{
    case LOAD:
    {
        /* commented out by me
        // attr->config = 0x1cd;
        #if defined PEBS_SAMPLING_L1_LOAD_MISS
            //attr->config = 0x5308D1; // L1 load miss
            attr->config = 0x8d1;      // perf stat -e mem_load_uops_retired.l1_miss -vvv ls // for Broadwell
        #elif defined PEBS_SAMPLING_LLC_LOAD_MISS
            attr->config = 0x5320D1;   // LLC load miss
        #else
            attr->config = 0x5381d0;   // all loads
        #endif
        */
        // attr->config = 0x5308D1;    // L1 load miss
        // attr->config = 0x5320D1;    // LLC load miss
        // attr->config1 = 0x3;
        attr->config = 0x5381d0;       // all loads, added by me
        attr->precise_ip = 3;
        load_flag = true;
        break;
    }
    case STORE:
    default:
    {
        attr->config = 0x5382d0;       // all stores (previously 0x2cd)
        // attr->config = 0x8d1;       // mem_load_uops_retired.l3_miss
        // attr->config1 = 0x0;
        attr->precise_ip = 3;
        store_flag = true;
        break;
    }
}
attr->read_format = PERF_FORMAT_GROUP | PERF_FORMAT_ID;
// attr->task = 1;
// fresh creation
// return registerDevice(sessionId);
}
Yes, there is a way to measure all load and store instructions together using the PEBS facility.
The raw event you are looking for is MEM_INST_RETIRED.ANY. The specification of this event for the Skylake microarchitecture is defined here.
The umask for this event is 0x83 and the event code is 0xD0, so the resulting perf event config you are looking for is attr->config = 0x5383d0.
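Applied to the snippet in the question, a hedged sketch of the extra case (LOAD_AND_STORE is a hypothetical enum value in your codebase; the 0x53 prefix mirrors the encoding already used for the other raw events there):
case LOAD_AND_STORE:               /* hypothetical new enum value */
{
    /* MEM_INST_RETIRED.ANY: event code 0xD0, umask 0x83 */
    attr->config = 0x5383d0;
    attr->precise_ip = 3;          /* request precise (PEBS) samples */
    load_flag = true;
    store_flag = true;
    break;
}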

What is the exact Linux equivalent of Windows' WaitOnAddress()?

Using shared memory obtained with the shmget() system call, the aim of my C++ program is to fetch a bid price from the Internet through a server written in Rust, so that each time the value changes I perform a financial transaction.
Server pseudocode:
Shared_struct.price = new_price
Client pseudocode:
Infinite_loop_label:
    Wait until the memory address pointed to by Shared_struct.price changes
    Launch_transaction(Shared_struct.price * 1.13)
    Goto Infinite_loop
Since launching a transaction involves paying transaction fees, I want to create a transaction only once per buy-price change.
Using a semaphore or a futex I can do the reverse, that is, wait for a variable to reach a specific value, but how do I wait until a variable is no longer equal to its current value?
Whereas on Windows I can do something like this on the address of the shared segment:
ULONG g_TargetValue; // global, accessible to all processes
ULONG CapturedValue;
ULONG UndesiredValue;

UndesiredValue = 0;
CapturedValue = g_TargetValue;
while (CapturedValue == UndesiredValue) {
    WaitOnAddress(&g_TargetValue, &UndesiredValue, sizeof(ULONG), INFINITE);
    CapturedValue = g_TargetValue;
}
Is there a way to do this on Linux? Or a straight equivalent?
You can use a futex (I assume var lives in the shared memory segment):
/* Client */
int prv;
while (1) {
    prv = var;
    int ret = futex(&var, FUTEX_WAIT, prv, NULL, NULL, 0);
    /* Spurious wake-up: the value hasn't actually changed */
    if (!ret && var == prv)
        continue;
    doTransaction();
}

/* Server */
int prv = NOT_CACHED;
while (1) {
    var = updateVar();
    if (var != prv || prv == NOT_CACHED)
        futex(&var, FUTEX_WAKE, 1, NULL, NULL, 0);
    prv = var;
}
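glibc does not export a futex() function, so the code above assumes a small wrapper over the raw system call; a minimal sketch via syscall(2):
#include <linux/futex.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <time.h>

/* Thin wrapper over the futex(2) system call. */
static long futex(int *uaddr, int futex_op, int val,
                  const struct timespec *timeout,
                  int *uaddr2, int val3)
{
    return syscall(SYS_futex, uaddr, futex_op, val, timeout, uaddr2, val3);
}
The futex word must be a 32-bit integer (int on the usual Linux ABIs), and because var sits in shared memory the plain, non-private FUTEX_WAIT/FUTEX_WAKE operations work across processes.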
It requires the server side to call futex as well to notify client(s).
Note that the same holds true for WaitOnAddress.
According to MSDN:
Any thread within the same process that changes the value at the address on which threads are waiting should call WakeByAddressSingle to wake a single waiting thread or WakeByAddressAll to wake all waiting threads.
(Added)
A higher-level way to synchronize this is a condition variable, which on Linux is itself implemented on top of futexes.
See link
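A hedged sketch of that approach, assuming both processes map the same segment and using an illustrative shared_data layout that is not taken from the question:
#include <pthread.h>

/* Illustrative layout placed in the shared memory segment. */
struct shared_data {
    pthread_mutex_t mtx;
    pthread_cond_t  cond;
    double          price;
};

/* One-time initialization, e.g. by the server right after shmat(). */
void shared_init(struct shared_data *sd)
{
    pthread_mutexattr_t ma;
    pthread_condattr_t  ca;

    pthread_mutexattr_init(&ma);
    pthread_mutexattr_setpshared(&ma, PTHREAD_PROCESS_SHARED);
    pthread_mutex_init(&sd->mtx, &ma);

    pthread_condattr_init(&ca);
    pthread_condattr_setpshared(&ca, PTHREAD_PROCESS_SHARED);
    pthread_cond_init(&sd->cond, &ca);
}

/* Server: publish a new price and wake any waiters. */
void publish(struct shared_data *sd, double new_price)
{
    pthread_mutex_lock(&sd->mtx);
    sd->price = new_price;
    pthread_cond_broadcast(&sd->cond);
    pthread_mutex_unlock(&sd->mtx);
}

/* Client: block until the price differs from the last value seen. */
double wait_for_change(struct shared_data *sd, double last_seen)
{
    pthread_mutex_lock(&sd->mtx);
    while (sd->price == last_seen)
        pthread_cond_wait(&sd->cond, &sd->mtx);
    double p = sd->price;
    pthread_mutex_unlock(&sd->mtx);
    return p;
}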

In ETW, how to enable ProcessRundown events for Microsoft-Windows-Kernel-Process?

The provider's manifest indicates that it can send Microsoft-Windows-Kernel-Process::ProcessRundown::Info events, which I'd really like to have: they give a summary of the processes that existed at the time the trace was started.
For reference, in the "usual" process provider enabled by EVENT_TRACE_FLAG_PROCESS, a rundown is sent automatically via MSNT_SystemTrace::Process::DCStart events. However, the data fields in that provider do not let you recover the process's image path: the ImageFileName field is an ANSI filename without a path, and the CommandLine field is also unreliable because it could contain a relative path (in the worst case, no path at all). For this reason, I need the Microsoft-Windows-Kernel-Process provider.
After quite a lot of experimentation, I found a very simple way: after the provider is enabled with EnableTraceEx2(EVENT_CONTROL_CODE_ENABLE_PROVIDER), an additional EnableTraceEx2(EVENT_CONTROL_CODE_CAPTURE_STATE) call makes it send the rundown events.
Eventually, I enable the provider this way:
namespace Microsoft_Windows_Kernel_Process
{
    struct __declspec(uuid("{22FB2CD6-0E7B-422B-A0C7-2FAD1FD0E716}")) GUID_STRUCT;
    static const auto GUID = __uuidof(GUID_STRUCT);

    enum class Keyword : u64
    {
        WINEVENT_KEYWORD_PROCESS                  = 0x10,
        WINEVENT_KEYWORD_THREAD                   = 0x20,
        WINEVENT_KEYWORD_IMAGE                    = 0x40,
        WINEVENT_KEYWORD_CPU_PRIORITY             = 0x80,
        WINEVENT_KEYWORD_OTHER_PRIORITY           = 0x100,
        WINEVENT_KEYWORD_PROCESS_FREEZE           = 0x200,
        Microsoft_Windows_Kernel_Process_Analytic = 0x8000000000000000,
    };
}

///////////////////////////////////

const u64 matchAnyKeyword =
    (u64)Microsoft_Windows_Kernel_Process::Keyword::WINEVENT_KEYWORD_PROCESS;

const ULONG status = EnableTraceEx2(
    m_SessionHandle,
    &Microsoft_Windows_Kernel_Process::GUID,
    EVENT_CONTROL_CODE_ENABLE_PROVIDER,
    TRACE_LEVEL_VERBOSE,
    matchAnyKeyword,   // Filter events to specific keywords
    0,                 // No 'MatchAllKeyword' mask
    INFINITE,          // Synchronous operation
    nullptr            // The trace parameters used to enable the provider
);
ENSURE_OR_CRASH(ERROR_SUCCESS == status);
And request the rundown like this:
const ULONG status = EnableTraceEx2(
    m_SessionHandle,
    &Microsoft_Windows_Kernel_Process::GUID,
    EVENT_CONTROL_CODE_CAPTURE_STATE,   // Request 'ProcessRundown' events
    TRACE_LEVEL_NONE,                   // Probably ignored for EVENT_CONTROL_CODE_CAPTURE_STATE
    0,                                  // Probably ignored for EVENT_CONTROL_CODE_CAPTURE_STATE
    0,                                  // Probably ignored for EVENT_CONTROL_CODE_CAPTURE_STATE
    INFINITE,                           // Synchronous operation
    nullptr                             // Probably ignored for EVENT_CONTROL_CODE_CAPTURE_STATE
);
ENSURE_OR_CRASH(ERROR_SUCCESS == status);

Error Domain=NSOSStatusErrorDomain Code=560030580 "The operation couldn’t be completed. (OSStatus error 560030580.)"

I was using AVPlayer to play an online mp3 stream. When I pause the player:
[AVPlayer pause];
AVAudioSession *session = [AVAudioSession sharedInstance];
session.delegate = nil;
NSError *error = nil;
[session setActive:NO error:&error];
NSLog(@"%@", error);
I get the error: Error Domain=NSOSStatusErrorDomain Code=560030580 "The operation couldn’t be completed. (OSStatus error 560030580.)"
Can anyone tell me why, and how to resolve it?
Thank you very much!
I encountered this problem too. I googled and came across this post more than once. On a forum, someone said the error code can be decoded using AVAudioSession.h.
In my case the problem seems to be a race condition: it never happens on my iPhone 5s, but it happens regularly on other devices, e.g. an iPhone 6 Plus.
As far as I know, the error code 560030580 means AVAudioSessionErrorCodeIsBusy.
In AVAudioSession.h, the following is defined:
typedef NS_ENUM(NSInteger, AVAudioSessionErrorCode)
{
    AVAudioSessionErrorCodeNone                  = 0,
    AVAudioSessionErrorCodeMediaServicesFailed   = 'msrv', /* 0x6D737276, 1836282486 */
    AVAudioSessionErrorCodeIsBusy                = '!act', /* 0x21616374,  560030580 */
    AVAudioSessionErrorCodeIncompatibleCategory  = '!cat', /* 0x21636174,  560161140 */
    AVAudioSessionErrorCodeCannotInterruptOthers = '!int', /* 0x21696E74,  560557684 */
    AVAudioSessionErrorCodeMissingEntitlement    = 'ent?', /* 0x656E743F, 1701737535 */
    AVAudioSessionErrorCodeSiriIsRecording       = 'siri', /* 0x73697269, 1936290409 */
    AVAudioSessionErrorCodeCannotStartPlaying    = '!pla', /* 0x21706C61,  561015905 */
    AVAudioSessionErrorCodeCannotStartRecording  = '!rec', /* 0x21726563,  561145187 */
    AVAudioSessionErrorCodeBadParam              = -50,
    AVAudioSessionErrorInsufficientPriority      = '!pri', /* 0x21707269,  561017449 */
    AVAudioSessionErrorCodeResourceNotAvailable  = '!res', /* 0x21726573,  561145203 */
    AVAudioSessionErrorCodeUnspecified           = 'what'  /* 0x77686174, 2003329396 */
} NS_AVAILABLE_IOS(7_0);
According to Apple https://developer.apple.com/documentation/avfoundation/avaudiosession/1616597-setactive :
Deactivating an audio session that has running audio objects stops
them, deactivates the session, and returns an
AVAudioSessionErrorCodeIsBusy error.
As far as I understand, this means your audio is still playing while you try to deactivate the session: the session is deactivated anyway, but you are notified about that fact through this error.
For me, this was the answer: disconnect from AirPlay.

File Descriptor Sharing between Parent and Pre-forked Children

In Unix Network Programming there is an example of a pre-forked server which uses message passing over a Unix domain pipe to instruct child processes to handle an incoming connection:
for ( ; ; ) {
    rset = masterset;
    if (navail <= 0)
        FD_CLR(listenfd, &rset);    /* turn off if no available children */
    nsel = Select(maxfd + 1, &rset, NULL, NULL, NULL);

    /* check for new connections */
    if (FD_ISSET(listenfd, &rset)) {
        clilen = addrlen;
        connfd = Accept(listenfd, cliaddr, &clilen);

        for (i = 0; i < nchildren; i++)
            if (cptr[i].child_status == 0)
                break;              /* available */
        if (i == nchildren)
            err_quit("no available children");
        cptr[i].child_status = 1;   /* mark child as busy */
        cptr[i].child_count++;
        navail--;

        n = Write_fd(cptr[i].child_pipefd, "", 1, connfd);
        Close(connfd);
        if (--nsel == 0)
            continue;               /* all done with select() results */
    }
As you can see, the parent writes the file descriptor for the accepted socket to the pipe, and then calls close on it. When the pre-forked children finish with the socket, they also call close on the descriptor. The thing that is throwing me for a loop is that, because these children are pre-forked, I would assume that only file descriptors which existed at the time the children were forked would be shared. However, if that were true, this example would fail spectacularly, yet it works.
Can someone shed some light on how file descriptors created by the parent after the fork end up being shared with the child processes?
Take a look at the Write_fd implementation. It uses something like:
union {
    struct cmsghdr cm;
    char           control[CMSG_SPACE(sizeof(int))];
} control_un;
struct cmsghdr *cmptr;

msg.msg_control    = control_un.control;
msg.msg_controllen = sizeof(control_un.control);

cmptr             = CMSG_FIRSTHDR(&msg);
cmptr->cmsg_len   = CMSG_LEN(sizeof(int));
cmptr->cmsg_level = SOL_SOCKET;
cmptr->cmsg_type  = SCM_RIGHTS;
*((int *) CMSG_DATA(cmptr)) = sendfd;
That is, sending a control message of type SCM_RIGHTS is the way unixes can share a file descriptor with an unrelated process.
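For context, a minimal self-contained sketch of descriptor passing over a Unix domain socket (the names send_fd/recv_fd are mine, not the book's wrapper functions):
#include <sys/socket.h>
#include <sys/uio.h>
#include <string.h>

/* Send descriptor fd over the Unix domain socket sock, with a 1-byte payload. */
int send_fd(int sock, int fd)
{
    struct msghdr   msg = {0};
    struct iovec    iov;
    char            byte = 0;
    union {
        struct cmsghdr cm;
        char           control[CMSG_SPACE(sizeof(int))];
    } control_un;
    struct cmsghdr *cmptr;

    iov.iov_base       = &byte;
    iov.iov_len        = 1;
    msg.msg_iov        = &iov;
    msg.msg_iovlen     = 1;
    msg.msg_control    = control_un.control;
    msg.msg_controllen = sizeof(control_un.control);

    cmptr             = CMSG_FIRSTHDR(&msg);
    cmptr->cmsg_len   = CMSG_LEN(sizeof(int));
    cmptr->cmsg_level = SOL_SOCKET;
    cmptr->cmsg_type  = SCM_RIGHTS;
    memcpy(CMSG_DATA(cmptr), &fd, sizeof(int));

    return sendmsg(sock, &msg, 0) == 1 ? 0 : -1;
}

/* Receive a descriptor sent with send_fd(); returns it, or -1 on error. */
int recv_fd(int sock)
{
    struct msghdr   msg = {0};
    struct iovec    iov;
    char            byte;
    union {
        struct cmsghdr cm;
        char           control[CMSG_SPACE(sizeof(int))];
    } control_un;
    struct cmsghdr *cmptr;
    int             fd = -1;

    iov.iov_base       = &byte;
    iov.iov_len        = 1;
    msg.msg_iov        = &iov;
    msg.msg_iovlen     = 1;
    msg.msg_control    = control_un.control;
    msg.msg_controllen = sizeof(control_un.control);

    if (recvmsg(sock, &msg, 0) <= 0)
        return -1;

    cmptr = CMSG_FIRSTHDR(&msg);
    if (cmptr && cmptr->cmsg_level == SOL_SOCKET && cmptr->cmsg_type == SCM_RIGHTS)
        memcpy(&fd, CMSG_DATA(cmptr), sizeof(int));

    return fd;
}
The kernel duplicates the descriptor into the receiving process, which is why the child gets a working copy of a socket that did not exist when it was forked. The parent would typically create the connected pair with socketpair(AF_UNIX, SOCK_STREAM, 0, fds) before forking each child.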
You can send (most) arbitrary file descriptors to a potentially unrelated process using the FD-passing mechanism of Unix domain sockets.
This is typically a little-used mechanism and rather tricky to get right - both processes need to cooperate.
Most prefork servers do NOT do this. Rather, they have the child processes call accept() on a shared listening socket, and each child creates its own connected socket that way. Other processes cannot see this connected socket, and there is only one copy of it, so when the child closes it, it's gone.
One disadvantage is that a process cannot tell what the client is going to request BEFORE calling accept(), so you cannot route different types of requests to different children, etc. Once one child has accept()ed a connection, another child cannot.
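A hedged sketch of that more common pattern (error handling kept minimal; handle_connection() is a placeholder, not from the book):
#include <sys/socket.h>
#include <unistd.h>

/* Placeholder for whatever protocol handling the server does. */
void handle_connection(int connfd);

/* Each pre-forked child loops on accept() over the inherited listening fd. */
void child_main(int listenfd)
{
    for (;;) {
        int connfd = accept(listenfd, NULL, NULL);
        if (connfd < 0)
            continue;               /* e.g. interrupted by a signal */
        handle_connection(connfd);
        close(connfd);              /* the only copy of this descriptor */
    }
}

/* Parent: create the listening socket first, then fork N children sharing it. */
void prefork_children(int listenfd, int nchildren)
{
    for (int i = 0; i < nchildren; i++) {
        if (fork() == 0) {          /* child */
            child_main(listenfd);
            _exit(0);
        }
    }
}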
