I’m able to set a maximum TTL value in Dnsmasq with --max-ttl, but I wonder what the default value is if I don’t set --max-ttl. Does anyone know?
The default value is 0.
The --max-ttl option sets the max_ttl attribute, as can be seen here:
https://thekelleys.org.uk/gitweb/?p=dnsmasq.git;a=blob;f=src/option.c;h=3a870970b82335dcfa59b5cf1ad0ce765d4484a3;hb=HEAD#l3061
3060      else if (option == LOPT_MAXTTL)
3061        daemon->max_ttl = (unsigned long)ttl;
The field is defined in src/dnsmasq.h:
extern struct daemon {
  /* data structures representing the command-line and
     config file arguments. All set (including defaults)
     in option.c */
  [..]
  unsigned long local_ttl, neg_ttl, max_ttl, min_cache_ttl, max_cache_ttl, auth_ttl, dhcp_ttl, use_dhcp_ttl;
So its default value is 0, since the structure is zero-initialized.
Later, you can find tests on that value to trigger specific behavior in src/rfc1035.c:
/* Return the Max TTL value if it is lower than the actual TTL */
if (daemon->max_ttl == 0 || ((unsigned)(crecp->ttd - now) < daemon->max_ttl))
  return crecp->ttd - now;
else
  return daemon->max_ttl;
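In other words, max_ttl == 0 (the default) means "no cap at all". Here is a minimal standalone sketch of the same rule; the helper name is mine, purely for illustration:

/* Illustrative only: how dnsmasq's max_ttl cap behaves. */
static unsigned long effective_ttl(unsigned long remaining, unsigned long max_ttl)
{
    if (max_ttl == 0 || remaining < max_ttl)
        return remaining;   /* no cap configured, or already below the cap */
    return max_ttl;         /* clamp to the configured maximum */
}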
I've been studying P4 program code recently. However, I don't understand what 'current(0, 4)' means in parser.p4.
parser parse_mpls_bos {
    extract(mpls_bos);
    return select(current(0, 4)) {
        0x4 : parse_ipv4;
        default : ingress;
    }
}
The header type for mpls_bos:
header_type mpls_t {
    fields {
        label : 20;
        tc : 3;
        bos : 1;
        ttl : 8;
    }
}
Which field should be equal to 0x4 here in order to go to parse_ipv4?
Can someone help explain?
Thanks in advance.
current allows you to reference bits that have not been parsed yet, without extracting them. It is a valid construct in P4_14 but is not available in P4_16.
The first argument of current is the bit offset: 0 in this case means that you're pointing at the end of the mpls_bos header, i.e. the current parsing position. The second argument is the width in bits.
Since the MPLS header does not contain any information about what the next header is, the code works on the assumption that if the 4 bits following the MPLS header are equal to 4, they are the version field of an IPv4 header.
In the parse_ipv4 state you can then extract the IP header without issues, since the first 4 bits of that header were only used for the transition and have not been extracted yet.
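For intuition, here is a rough C sketch of the same lookahead idea; the function name and buffer layout are mine, purely for illustration:

/* Mimic P4_14's current(0, 4): after the MPLS BoS header has been
   extracted, peek at the next 4 bits without consuming them. */
#include <stddef.h>
#include <stdint.h>

enum next_state { PARSE_IPV4, INGRESS };

/* 'offset' is the byte position just past the extracted mpls_bos header. */
static enum next_state select_after_mpls_bos(const uint8_t *pkt, size_t offset)
{
    uint8_t next4 = pkt[offset] >> 4;   /* high nibble = would-be IP version field */
    return (next4 == 0x4) ? PARSE_IPV4 : INGRESS;
}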
I'm trying to follow the path of a parameter set in Linux user space (arecord/aplay) down to the kernel driver. Let's take arecord's --period-size as an example.
It all starts in the set_params() function in aplay.c:
if (period_time > 0)
    err = snd_pcm_hw_params_set_period_time_near(handle, params, &period_time, 0);
else
    err = snd_pcm_hw_params_set_period_size_near(handle, params, &period_frames, 0);
The function snd_pcm_hw_params_set_period_size_near() is defined in alsa-lib at pcm.c:5186 (https://github.com/alsa-project/alsa-lib/blob/master/src/pcm/pcm.c#L5186), and here my headache starts... This function kicks off a chain of calls to other functions that doesn't make much sense to me and doesn't seem to lead to any final call into the driver.
There is an _end label, so I skipped all calls like snd_pcm_hw_param_set_min() or snd_pcm_hw_param_set_max() and went to snd_pcm_hw_param_set_last(), hoping for some driver invocation like:
drv->hw_params_set(...);
but instead I found an end call to:
MASK_INLINE unsigned int snd_mask_min(const snd_mask_t *mask)
{
    int i;
    assert(!snd_mask_empty(mask));
    for (i = 0; i < MASK_SIZE; i++) {
        if (mask->bits[i])
            return ffs(mask->bits[i]) - 1 + (i << 5);
    }
    return 0;
}
where the return value should be the parameter that gets set.
So to summarize, I find alsa-lib very difficult to read and understand; maybe I am lacking some background, though. My question is simple: how is a user-space parameter passed to the kernel driver? Can you provide the software path, showing which interfaces are called?
Thanks.
The hw_params structure contains a configuration space, which is a description of all possible configurations that the device can support. Numeric parameters are described as intervals (i.e., min and max), access and format as bitmasks.
When you change one parameter, the library calls the kernel driver (SNDRV_PCM_IOCTL_HW_REFINE) to adjust all the other parameters in the hw_params structure that depend on the changed parameter.
After you have reduced the configuration space to the configuration you actually want, you call snd_pcm_hw_params() (→ SNDRV_PCM_IOCTL_HW_PARAMS) to actually configure the device for those parameters. (If some parameter has not been reduced to a single value, snd_pcm_hw_params() will choose a random one.)
snd_pcm_hw_params_set_xxx_near() is more complex because there is no SET_NEAR ioctl. This function tries to adjust the interval so that either its maximum or its minimum is the desired value, and then checks whether the actual maximum or minimum is nearer.
For example, assume a device that supports period sizes of 1024, 2048, 4096, and 8192 frames. Initially, the interval is described as [1024, 8192]. When you call snd_pcm_hw_params_set_period_size_near(4000), the snd_pcm_hw_param_set_near() helper function calls set_min(4000) and set_max(4000) (on separate copies of the hw_params structure), so the intervals are [1024, 4000] and [4000, 8192]; after refining, the driver returns the intervals [1024, 2048] and [4096, 8192]. snd_pcm_hw_param_set_near() then sees that 4096 is nearest to the desired value, so it calls set_first on the second interval, which results in [4096, 4096].
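To make the path concrete, here is a minimal sketch of the user-space sequence described above, using the public alsa-lib API (error handling is omitted for brevity, and "default" is just a placeholder device name):

#include <stdio.h>
#include <alsa/asoundlib.h>

int main(void)
{
    snd_pcm_t *pcm;
    snd_pcm_hw_params_t *params;
    snd_pcm_uframes_t period = 4000;    /* desired period size in frames */

    snd_pcm_open(&pcm, "default", SND_PCM_STREAM_CAPTURE, 0);
    snd_pcm_hw_params_alloca(&params);

    /* Fill params with the device's full configuration space
       (for a hw device this ends up in SNDRV_PCM_IOCTL_HW_REFINE). */
    snd_pcm_hw_params_any(pcm, params);

    /* Narrow the period-size interval; dependent parameters are refined
       by the driver, and 'period' is updated to the nearest supported value. */
    snd_pcm_hw_params_set_period_size_near(pcm, params, &period, 0);

    /* Commit the reduced space to the device (SNDRV_PCM_IOCTL_HW_PARAMS). */
    snd_pcm_hw_params(pcm, params);

    printf("actual period size: %lu frames\n", (unsigned long)period);
    snd_pcm_close(pcm);
    return 0;
}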
I'm developing a network driver and I'm trying to assign packets to different queues based on the IP TOS value. For testing purposes I'm running:
ping -Q 1 10.0.0.2
to set the IP TOS value to 1. The problem I've got is that on the system where I run the ping command, the outgoing skb has skb->priority == 0, but I think it should be 1.
I assumed that setting "-Q 1" would set skb->priority to 1, but it doesn't.
Does anyone know why?
First of all, there is no direct mapping between skb->priority and the IP TOS field. The Linux kernel does it like this:
sk->sk_priority = rt_tos2priority(val);
...
static inline char rt_tos2priority(u8 tos)
{
    return ip_tos2prio[IPTOS_TOS(tos)>>1];
}
(The ip_tos2prio table can be found in net/ipv4/route.c.)
Since IPTOS_TOS() masks the value with 0x1E, it seems to me you'll have to set the TOS byte to at least 8 (so that a bit inside the 0x1E mask is set) to get skb->priority to anything other than 0.
ping -Q 1 sets the whole TOS byte to 1. Note that TOS is deprecated in favor of DSCP: the 2 low-order bits are used for ECN, while the upper 6 bits carry the DSCP value (the "priority").
So you likely have to start at 4 to get a DSCP value of 1, but according to the table above you need a TOS byte of at least 8 (index 4 in ip_tos2prio) to get skb->priority set as well, as in ping -Q 8 10.0.0.2.
However, I'm not sure that will set skb->priority in all cases. If the ping tool crafts packets using raw sockets, it might bypass setting skb->priority.
skb->priority for locally generated packets will be set, though, if you do e.g.:
int tos = 8;  /* IPTOS_THROUGHPUT */
setsockopt(sock_fd, IPPROTO_IP, IP_TOS,
           &tos, sizeof(tos));
So you might need to cook up a small sample program that does the above before sending packets.
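For instance, here is a minimal sketch of such a test program (the destination address and port are placeholders):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <netinet/ip.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    int tos = IPTOS_THROUGHPUT;   /* 0x08: maps to index 4, i.e. priority 2 (BULK) */

    if (setsockopt(fd, IPPROTO_IP, IP_TOS, &tos, sizeof(tos)) < 0)
        perror("setsockopt(IP_TOS)");

    struct sockaddr_in dst = { .sin_family = AF_INET, .sin_port = htons(9) };
    inet_pton(AF_INET, "10.0.0.2", &dst.sin_addr);

    /* The outgoing skb for this datagram should carry a non-zero skb->priority. */
    sendto(fd, "x", 1, 0, (struct sockaddr *)&dst, sizeof(dst));
    close(fd);
    return 0;
}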
The above answer is right; let me complete it here.
static inline char rt_tos2priority(u8 tos)
{
    return ip_tos2prio[IPTOS_TOS(tos)>>1];
}
where IPTOS_TOS is a macro that ANDs the tos value with 0x1E.
So, if you give a TOS of 0xFF, the return statement above reduces to
return ip_tos2prio[(0x1E & 0xFF)>>1];
Calculating further: (0x1E & 0xFF) equals 0x1E, and (0x1E >> 1) gives 0x0F, which is 15 in decimal. So the return statement above is equivalent to
return ip_tos2prio[15];
Now "ip_tos2prio" is a predefined array, like this
const __u8 ip_tos2prio[16]={0,0,0,0,2,2,2,2,6,6,6,4,4,4,4};
where each distinct value has a meaning, 0->BESTEFFORT, 2->BULK, 4->INTERACTIVE BULK, 6 ->INTERACTIVE.
get back to the return statement, it returns the 15th element in ip_tos2prio array, which is 4.
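A quick user-space check of this mapping, with the mask and table copied from above purely for illustration:

#include <stdio.h>

static const unsigned char ip_tos2prio[16] =
    {0, 0, 0, 0, 2, 2, 2, 2, 6, 6, 6, 6, 4, 4, 6, 4};

static unsigned char tos2prio(unsigned char tos)
{
    return ip_tos2prio[(tos & 0x1E) >> 1];
}

int main(void)
{
    printf("TOS 0x01 -> priority %u\n", tos2prio(0x01));  /* 0: why ping -Q 1 shows 0 */
    printf("TOS 0x08 -> priority %u\n", tos2prio(0x08));  /* 2: BULK */
    printf("TOS 0xFF -> priority %u\n", tos2prio(0xFF));  /* 4: the worked example above */
    return 0;
}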
The stat(2) manual addresses support for nanosecond resolution in the timestamp fields, but it doesn't look trivial to test for their presence, or their names, in a program intended to be portable: as many as four feature test macros (_BSD_SOURCE, _SVID_SOURCE, _POSIX_C_SOURCE, _XOPEN_SOURCE) are mentioned. The manual seems to suggest the following:
#if defined(_BSD_SOURCE) || defined(_SVID_SOURCE) || \
defined(_POSIX_C_SOURCE) && _POSIX_C_SOURCE >= 200809L || \
defined(_XOPEN_SOURCE) && _XOPEN_SOURCE >= 700
// use st_atim.tv_nsec, etc.
#elif 1 // really?
// use st_atimensec, etc.
#else // when?
// no nanosecond field exists
#endif
It doesn't mention the possibility of having neither st_atim.tv_nsec nor st_atimensec at all. Is one of the two names guaranteed to exist?
It also says the nanosecond fields are returned with the value 0 when subsecond timestamps are not supported, but that is indistinguishable from an actual value of 0. How do I test whether subsecond timestamps are really supported?
I'm trying to configure an RTC alarm on a Linux device. I've used an example from the RTC documentation:
int retval;
struct rtc_time rtc_tm;

/* .... */

/* Read the RTC time/date */
retval = ioctl(fd, RTC_RD_TIME, &rtc_tm);
if (retval == -1) {
    exit(errno);
}

/* Set the alarm to 5 sec in the future, and check for rollover */
rtc_tm.tm_sec += 5;
if (rtc_tm.tm_sec >= 60) {
    rtc_tm.tm_sec %= 60;
    rtc_tm.tm_min++;
}
if (rtc_tm.tm_min == 60) {
    rtc_tm.tm_min = 0;
    rtc_tm.tm_hour++;
}
if (rtc_tm.tm_hour == 24)
    rtc_tm.tm_hour = 0;

retval = ioctl(fd, RTC_ALM_SET, &rtc_tm);
if (retval == -1) {
    exit(errno);
}
This code snippet uses absolute (wall-clock) time, and it did not work for me. I thought this was due to a hardware bug, but after some seemingly random time the alarm did fire. The only other piece of documentation I've managed to find was a comment in rtc.c:
case RTC_ALM_SET:    /* Store a time into the alarm */
{
    /*
     * This expects a struct rtc_time. Writing 0xff means
     * "don't care" or "match all". Only the tm_hour,
     * tm_min and tm_sec are used.
     */
The fact that only hours, minutes, and seconds are used suggests that the time is relative to the moment the ioctl is called.
Should time passed to ioctl(fd, RTC_ALM_SET, &rtc_tm) be relative or absolute?
The RTC alarm works on absolute time. In other words, if you want the alarm to go off in 5 minutes, you should read the current time, add 5 minutes to it, and use the result to set the alarm time.
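For completeness, a minimal sketch of the remaining steps from the same kernel RTC example, assuming fd is an open descriptor for /dev/rtc as in the snippet above:

unsigned long data;

/* Enable alarm interrupts */
if (ioctl(fd, RTC_AIE_ON, 0) == -1)
    exit(errno);

/* Blocks until the alarm's hour/min/sec match the RTC clock */
if (read(fd, &data, sizeof(data)) == -1)
    exit(errno);

/* Disable alarm interrupts again */
if (ioctl(fd, RTC_AIE_OFF, 0) == -1)
    exit(errno);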
Here is a snippet from a TI RTC chip datasheet (http://www.ti.com/lit/ds/symlink/bq3285ld.pdf):
During each update cycle, the RTC compares the day-of-the-month, hours, minutes, and seconds bytes with the four corresponding alarm bytes. If a match of all bytes is found, the alarm interrupt event flag bit, AF in register C, is set to 1. If the alarm event is enabled, an interrupt request is generated.
I believe this to be pretty standard across RTCs out there...