How to know the request to fuse is congested - fuse

I am new to fuse. I have 2 questions:
Can I set the value of congestion_threshold by itself, without changing the default max_background?
If the number of asynchronous requests in the pending and processing queues reaches the value of the tunable congestion threshold parameter, how can I know it is congested? Where do I check the logs? dmesg?

Without having tested it, the source code suggests there is a separate option for this: https://github.com/libfuse/libfuse/blob/master/lib/helper.c#L90:
CONN_OPTION("congestion_threshold=%u", congestion_threshold, 0),
CONN_OPTION("congestion_threshold=", set_congestion_threshold, 1),
Also from the code, it looks like you can configure it using fuse_parse_conn_info_opts. See https://github.com/libfuse/libfuse/blob/master/include/fuse_common.h#L523:
* The following options are recognized:
*
...
* -o max_background=N sets conn->max_background
* -o congestion_threshold=N sets conn->congestion_threshold
...
**/
struct fuse_conn_info_opts* fuse_parse_conn_info_opts(struct fuse_args *args);
Looking at the kernel code, there are no prints when the congestion actually happens, so there is nothing to look for in dmesg.

Random failure of selenium test on test server

I'm working on a project which uses Node.js and Nightwatch for test automation. The problem is that the tests are not reliable and give lots of false positives. I did everything to make them stable and am still getting errors. I went through some blogs like https://bocoup.com/blog/a-day-at-the-races and did some code refactoring. Does anyone have suggestions to solve this issue? At this moment I have two options: either rewrite the code in Java (removing Node.js and Nightwatch from the solution, as I'm far more comfortable in Java than JavaScript; most of the time I struggle with the non-blocking nature of JavaScript), or take snapshots/review app logs/run one test at a time.
Test environment:
Server - Linux
Display - Framebuffer
Total VMs - 9, with Selenium nodes running the tests in parallel
Browser - Chrome
The type of error I get is "element not found". Most of the time the tests fail as soon as the page is loaded. I have already set an 80-second timeout, so time can't be the issue. The tests run in parallel, but on separate VMs, so I don't know whether that can be an issue or not.
Edit 1:
I was working on this to find the root cause. I did the following things to eliminate the random failures:
a. Added --suiteRetries to retry the failed cases.
b. Went through the error screenshot and DOM source. Everything seems fine.
c. Replaced the browser.pause with explicit waits
Also, while debugging I observed one problem that may be causing the random failures. Here's the code snippet:
for (var i = 0; i < apiResponse.data.length; i++) {
    var name = apiResponse.data[i];
    browser.useXpath().waitForElementVisible(pageObject.getDynamicElement("#topicTextLabel", name.trim()), 5000, false);
    browser.useCss().assert.containsText(
        pageObject.getDynamicElement("#topicText", i + 1),
        name.trim(),
        util.format(issueCats.WRONG_DATA)
    );
}
I added the XPath check to validate whether I'm waiting long enough for the text to appear. I observed that the visibility assertion passes, but in the next assertion #topicText comes back as the previous value or null. This is an intermittent issue, but on the test server it happens frequently.
There is no magic bullet for brittle UI end-to-end tests. In an ideal world there would be an option avoid_random_failures=true that would quickly and easily solve the problem, but for now that's only a dream.
Simply rewriting all the tests in Java will not solve the problem, but if you feel more at home in Java, then I would definitely go in that direction.
As you already know from the article Avoiding random failures in Selenium UI tests, there are 3 commonly used avoidance techniques for race conditions in UI tests:
using constant sleep
using WebDriver's "implicit wait" parameter
using explicit waits (WebDriverWait + ExpectedConditions + FluentWait)
These techniques are also briefly mentioned in WebDriver: Advanced Usage, and you can also read about them here: Tips to Avoid Brittle UI Tests.
Methods 1 and 2 are generally not recommended; they have drawbacks. They can work well on simple HTML pages, but they are not 100% reliable on AJAX pages, and they slow down the tests. The best one is #3, explicit waits.
In order to use technique #3 (explicit waits), you need to familiarize yourself and become comfortable with the following WebDriver tools (I point to their Java versions, but they have counterparts in other languages):
WebDriverWait class
ExpectedConditions class
FluentWait - used very rarely, but very useful in some difficult cases
ExpectedConditions has many predefined wait states; the most used (in my experience) is ExpectedConditions#elementToBeClickable, which waits until an element is visible and enabled such that you can click it.
How to use it, an example: say you open a page with a form which contains several fields into which you want to enter data. Usually it is enough to wait until the first field appears on the page and becomes editable (clickable):
By field1 = By.xpath("//div//input[.......]");
By field2 = By.id("some_id");
By field3 = By.name("some_name");
By buttonOk = By.xpath("//input[ text() = 'OK' ]");
....
....
WebDriverWait wait = new WebDriverWait( driver, 60 ); // wait max 60 seconds
// wait max 60 seconds until element is visible and enabled such that you can click it
// if you can click it, that means it is editable
wait.until( ExpectedConditions.elementToBeClickable( field1 ) ).sendKeys("some data" );
driver.findElement( field2 ).sendKeys( "other data" );
driver.findElement( field3 ).sendKeys( "name" );
....
wait.until( ExpectedConditions.elementToBeClickable( buttonOk ) ).click();
The above code waits until field1 becomes editable after the page is loaded and rendered, but no longer than necessary. If the element is not visible and editable after 60 seconds, the test fails with a TimeoutException.
Usually it's only necessary to wait for the first field on the page; if it becomes active, then the others will be too.

Define new socket option for use in TCP kernel code

I'm trying to add some functionality to the TCP kernel code (in tcp_input.c). I want the code I've implemented to run only in certain situations. I want to add a control flag, which can be set from a user-space application.
I (think I) need to add a new socket option, so that I can accomplish the following with setsockopt().
kernel space:
if (tcp_flags.simulate_ecn_signal) {
    // run code for simulating an ECN signal
}
user space:
if (tcp_info.tcpi_retransmits > LIMIT) {
    u8 simulate_ecn_signal = 1;
    // set the flag so that the kernel code runs
    if (setsockopt(sock, IPPROTO_TCP, TCP_FLAGS, &simulate_ecn_signal, sizeof(simulate_ecn_signal)) < 0)
        printf("Can't set data with setsockopt.\n");
}
In the example code above I've added an example flag, simulate_ecn_signal, which I thought could be a member of a (new) socket option (struct) called tcp_flags that could potentially contain multiple flag values.
How do I define a new socket option, in order to accomplish this?

Why do I need two different connections to wpa_supplicant (wpa_cli - ctrl_conn and mon_conn)

I am writing my own C library to manage WLAN in Linux. I am basing it on the wpa_cli interface, but I cannot understand why they use two wpa_ctrl structures:
static struct wpa_ctrl *ctrl_conn;
static struct wpa_ctrl *mon_conn;
It also works when I open and attach only ctrl_conn?
wpa_cli works in two modes: interactive and non-interactive. When you have a prompt you are using wpa_cli interactively, and vice versa.
Here is the interactive mode:
$ wpa_cli -i wlan0
wpa_cli v2.1
Copyright (c) 2004-2014, Jouni Malinen <j@w1.fi> and contributors
This software may be distributed under the terms of the BSD license.
See README for more details.
Interactive mode
> status
wpa_state=INACTIVE
address=98:fc:11:d1:89:68
uuid=0cb62eb3-776e-55d2-a4f9-983cdd3e48d2
And here is the non-interactive mode:
$ wpa_cli -i wlan0 status
wpa_state=INACTIVE
address=98:fc:11:d1:89:68
uuid=0cb62eb3-776e-55d2-a4f9-983cdd3e48d2
It seems that when you are using the interactive mode, wpa_cli uses both ctrl_conn and mon_conn: ctrl_conn is used to send commands only, and mon_conn is used to get events (i.e. it is the one attached via wpa_ctrl_attach()).
And when you are using the non-interactive mode, wpa_cli uses only ctrl_conn, because no events are returned.
If you plan to use the wpa_supplicant events (and I hope you will), I think it is better to use two different connections, as explained in the wpa_ctrl_request() comments concerning the msg_cb argument:
/**
* wpa_ctrl_request - Send a command to wpa_supplicant/hostapd
* @ctrl: Control interface data from wpa_ctrl_open()
* @cmd: Command; usually, ASCII text, e.g., "PING"
* @cmd_len: Length of the cmd in bytes
* @reply: Buffer for the response
* @reply_len: Reply buffer length
* @msg_cb: Callback function for unsolicited messages or %NULL if not used
* Returns: 0 on success, -1 on error (send or receive failed), -2 on timeout
*
* This function is used to send commands to wpa_supplicant/hostapd. Received
* response will be written to reply and reply_len is set to the actual length
* of the reply. This function will block for up to two seconds while waiting
* for the reply. If unsolicited messages are received, the blocking time may
* be longer.
*
* msg_cb can be used to register a callback function that will be called for
* unsolicited messages received while waiting for the command response. These
* messages may be received if wpa_ctrl_request() is called at the same time as
* wpa_supplicant/hostapd is sending such a message. This can happen only if
* the program has used wpa_ctrl_attach() to register itself as a monitor for
* event messages. Alternatively to msg_cb, programs can register two control
* interface connections and use one of them for commands and the other one for
* receiving event messages, in other words, call wpa_ctrl_attach() only for
* the control interface connection that will be used for event messages.
*/
int wpa_ctrl_request(struct wpa_ctrl *ctrl, const char *cmd, size_t cmd_len,
                     char *reply, size_t *reply_len,
                     void (*msg_cb)(char *msg, size_t len));

Why does process started by systemd not behave same as when started interactively?

I have a program which spawns a real-time thread with the code as follows:
schparam.sched_priority = sched_get_priority_max(SCHED_FIFO);
getrlimit(RLIMIT_RTPRIO, &rlim);
rlim.rlim_cur = schparam.sched_priority;
setrlimit(RLIMIT_RTPRIO, &rlim);
result = pthread_setschedparam(pthread_self(), SCHED_FIFO, &schparam);
if (result != 0)
    printf("failed to set priority\n");
My default system limit does not allow RT scheduled threads, so I need to call setrlimit to raise this value. The above code works as desired when I log in to a root shell and start the program manually.
However, when the program is started automatically by systemd at startup, the scheduling call fails with a permissions error. The setrlimit call appears to work, judging by the return value and subsequent getrlimit calls within the process, but the pthread_setschedparam call does not seem to recognize that the limit has been raised.
Again this all works fine when I start the program manually. What am I missing here?
By default, systemd sets LimitRTPRIO=0. You can verify that with systemctl show $servicename | grep LimitRTPRIO.
If the RLIMIT_RTPRIO soft limit is 0, then the only permitted changes are to lower the priority, or to switch to a non-real-time policy.
Changing LimitRTPRIO as Siosm suggested, e.g. LimitRTPRIO=infinity in your unit file, should do the trick.
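For example, in the service's unit file (the ExecStart path here is just a placeholder for your program):

```
[Service]
ExecStart=/usr/local/bin/rt-program
LimitRTPRIO=infinity
```

After editing, run systemctl daemon-reload and restart the service; systemctl show on the unit should then report LimitRTPRIO=infinity.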

Netty: Pipe-ing output of one Channel to the input of an other

Netty gurus,
I've been wondering if there is a shortcut/Netty utility/smart trick for connecting the input of one channel to the output of another channel. In more detail, consider the following:
Set up a Netty (HTTP) server.
For an incoming MessageEvent, get its ChannelBuffer.
Pipe its input to a NettyClient ChannelBuffer (which is to be set up along the lines of the NettyServer).
I'm interested in how to achieve bullet point 3, since my first thoughts along the lines of
// mock messageReceived(ChannelHandlerContext ctx, MessageEvent e):
ChannelBuffer bufIn = (ChannelBuffer) e.getMessage();
ChannelBuffer bufOut = getClientChannelBuffer();// Set-up somewhere else
bufOut.writeBytes(bufIn);
seem awkward to me because:
A. I have to determine the target ChannelBuffer for each and every messageReceived event
B. Too much low-level tinkering
My wish/vision would be to connect the input of one channel to the output of another channel and let them do their I/O without any additional coding.
Many thanks in advance!,
Traude
P.S.: The issue has arisen as I'm trying to dispatch the various HTTP requests hitting the server (one entry point) to several other servers, depending on the input content (mapping based on the first HTTP request line). Obviously, I also need to do the inverse trick, piping back from client to server, but I guess it'll be similar to the solution of the question above.
It looks like you need to use a multiplexer in your business handler. The business handler could have a map, with the "first HTTP request line" as key and the output channel for the target server as value. Once you do a lookup, you just do a channel.write(channelBuffer);
Also take a look at Bruno de Carvalho's TCP tunnel, which may give you more ideas on how to deal with this kind of requirement.
