How to change the watchdog timer in embedded Linux

I have to use the Linux watchdog driver (/dev/watchdog). It works great: I write a character like this:
echo 1 > /dev/watchdog
The watchdog starts, and after about 1 minute the system reboots.
The question is: how can I change the timeout? Do I have to change the time interval in the driver?

Please read the Linux documentation. The standard method of changing the timeout from user space is to use an ioctl().
int timeout = 45; /* a timeout in seconds */
int fd;

fd = open("/dev/watchdog", O_WRONLY);  /* needs <fcntl.h>; check for fd < 0 */
ioctl(fd, WDIOC_SETTIMEOUT, &timeout); /* Send the timeout request to the driver. */
Each watchdog device may have an upper (and possibly lower) limit on what the hardware supports, so you cannot set the timeout arbitrarily high. After setting a timeout, it is therefore good practice to read it back:
ioctl(fd, WDIOC_GETTIMEOUT, &timeout); /* Update timeout with driver value. */
Now the re-read timeout can be used to derive a kick interval:
assert(timeout > 2);
while (1) {
    ioctl(fd, WDIOC_KEEPALIVE, 0);
    sleep(timeout - 2);
}
You can write your own kicking routine as a shell one-liner:
while [ 1 ] ; do sleep 1; echo V > /dev/watchdog; done
However, a dedicated userspace watchdog program is usually used; it takes care of the more esoteric features. You can nice the userspace program down to minimum priority, so that the system resets if userspace becomes hung. BusyBox includes a watchdog applet.
Each watchdog driver has its own module parameters, and most include a way to set the timeout; use either the kernel command line or the module parameter mechanism (many drivers expose a parameter named something like heartbeat or timeout, but the name varies by driver). However, the framework's ioctl timeout is more portable if you do not have specific knowledge of your watchdog hardware, and it is probably more future proof, since the hardware may change.
Sample user space code is included in the Linux samples directory.
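Putting the pieces above together, a minimal sketch might look like this (the 45-second request and the 2-second kick margin are only illustrative, and error handling is trimmed to the essentials):
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/watchdog.h>

int main(void)
{
    int timeout = 45;                        /* requested timeout in seconds */
    int fd = open("/dev/watchdog", O_WRONLY);

    if (fd < 0) {
        perror("open /dev/watchdog");
        return 1;
    }

    ioctl(fd, WDIOC_SETTIMEOUT, &timeout);   /* ask the driver for 45 s */
    ioctl(fd, WDIOC_GETTIMEOUT, &timeout);   /* read back what was actually granted */
    printf("watchdog timeout is %d seconds\n", timeout);

    for (;;) {
        ioctl(fd, WDIOC_KEEPALIVE, 0);       /* kick */
        sleep(timeout > 2 ? timeout - 2 : 1);
    }
    /* not reached; a clean shutdown would write "V" and close(fd) (magic close) */
}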

Related

Linux UART slower than specified Baudrate

I'm trying to communicate between two Linux systems via UART.
I want to send large chunks of data. With the specified Baudrate it should take around 5 seconds, but it takes nearly 10 times the expected time.
As I'm sending more than the buffer can handle at once, the data is sent in small parts and I drain the buffer in between. If I measure the time needed for the drain and the number of bytes written to the buffer, I calculate a baud rate nearly 10 times lower than the specified one.
I would expect transmission to be somewhat slower than the optimum, but not by this much.
Did I miss something while setting the UART or while writing? Or is this normal?
The code used for setup:
int bus = open(interface.c_str(), O_RDWR | O_NOCTTY | O_NDELAY); // <- also tried blocking
if (bus < 0) {
    return;
}
struct termios options;
memset(&options, 0, sizeof options);
if (tcgetattr(bus, &options) != 0) {
    close(bus);
    bus = -1;
    return;
}
cfsetspeed (&options, B230400);
cfmakeraw(&options); // <- also tried this manually. did not make a difference
if (tcsetattr(bus, TCSANOW, &options) != 0) {
    close(bus);
    bus = -1;
    return;
}
tcflush(bus, TCIFLUSH);
The code used to send:
int32_t res = write(bus, data, dataLength);
while (res < dataLength) {
    tcdrain(bus); // <- taking way longer than expected
    int32_t r = write(bus, &data[res], dataLength - res);
    if (r == 0)
        break;
    if (r == -1) {
        break;
    }
    res += r;
}
B230400
The docs are contradictory. cfsetspeed is documented as requiring a speed_t type, while the note says you need to use one of the "B" constants like "B230400." Have you tried using an actual speed_t type?
In any case, the speed you're supplying is the baud rate, which in this case should get you approximately 23,000 bytes/second, assuming there is no throttling.
The speed is dependent on hardware and link limitations. Also the serial protocol allows pausing the transmission.
FWIW, according to the time and speed you listed, if everything works perfectly, you'll get about 1 MB in 50 seconds. What speed are you actually getting?
Another "also" is the options structure. It's been years since I've had to do any serial I/O, but IIRC, you need to actually set the options that you want and are supported by your hardware, like CTS/RTS, XON/XOFF, etc.
As I'm sending more than the buffer can handle at once, the data is sent in small parts and I drain the buffer in between.
You have only provided code snippets (rather than a minimal, complete, and verifiable example), so your data size is unknown.
But the Linux kernel buffer size is known. What do you think it is?
(FYI it's 4KB.)
If I measure the time needed for the drain and the number of bytes written to the buffer, I calculate a baud rate nearly 10 times lower than the specified one.
You're confusing throughput with baudrate.
The maximum throughput (of payload alone) of an asynchronous serial link will always be less than the baud rate, due to the framing overhead per character: two of the ten bits of each frame (assuming 8N1), so at 230400 baud the payload ceiling is 230400 / 10 = 23040 bytes per second. Since your termios configuration is incomplete, the overhead could actually be three of the eleven bits of each frame (assuming 8N2).
In order to achieve the maximum throughput, the transmitting UART must saturate the line with frames and never let the line go idle.
The userspace program must be able to supply data fast enough, preferably by one large write() to reduce syscall overhead.
Did I miss something while setting the UART or while writing?
With Linux, you have limited access to the UART hardware.
From userspace your program accesses a serial terminal.
Your program accesses the serial terminal in a suboptimal manner.
Your termios configuration appears to be incomplete; a fuller setup is sketched below.
It leaves both hardware and software flow-control untouched.
The number of stop bits is untouched.
The Ignore modem control lines and Enable receiver flags are not enabled.
For raw reading, the VMIN and VTIME values are not assigned.
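A sketch of a more complete raw 8N1 setup, assuming no flow control is wanted on either end (adjust to match the remote side), reusing the bus descriptor from the question:
struct termios options;

if (tcgetattr(bus, &options) != 0) {            /* start from the current settings */
    /* handle error */
}

cfmakeraw(&options);                            /* raw: no canonical processing, no echo */
cfsetspeed(&options, B230400);

options.c_cflag |= (CLOCAL | CREAD);            /* ignore modem lines, enable receiver */
options.c_cflag &= ~CSTOPB;                     /* one stop bit */
options.c_cflag &= ~CRTSCTS;                    /* no hardware flow control */
options.c_iflag &= ~(IXON | IXOFF | IXANY);     /* no software flow control */

options.c_cc[VMIN]  = 1;                        /* block until at least one byte arrives */
options.c_cc[VTIME] = 0;                        /* no inter-byte timer */

if (tcsetattr(bus, TCSANOW, &options) != 0) {
    /* handle error */
}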
Or is this normal?
There are ways to easily speed up the transfer.
First, your program combines non-blocking mode with non-canonical mode. That's a degenerate combination for receiving, and suboptimal for transmitting.
You have provided no reason for using non-blocking mode, and your program is not written to properly utilize it.
Therefore your program should be revised to use blocking mode instead of non-blocking mode.
Second, the tcdrain() between write() syscalls can introduce idle time on the serial link. Use of blocking mode eliminates the need for this delay tactic between write() syscalls.
In fact with blocking mode only one write() syscall should be needed to transmit the entire dataLength. This would also minimize any idle time introduced on the serial link.
Note that the first write() does not check the return value for an error condition, which is always possible.
Bottom line: your program would be simpler and throughput would be improved by using blocking I/O.
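Reusing the names from the question, a sketch of the simplified transmit path under those assumptions (descriptor opened without O_NDELAY, so write() blocks; the loop remains only because write() may legitimately return a short count):
int bus = open(interface.c_str(), O_RDWR | O_NOCTTY);   /* blocking mode */

ssize_t total = 0;
while (total < dataLength) {
    ssize_t n = write(bus, &data[total], dataLength - total);
    if (n < 0) {
        if (errno == EINTR)          /* needs <errno.h> */
            continue;                /* interrupted by a signal: just retry */
        perror("write");
        break;                       /* real error */
    }
    total += n;
}
tcdrain(bus);                        /* optional: wait once at the end, only if the
                                        caller must know the bytes have left the UART */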

TTY input queue too slow to return data

I've recently noticed a very odd behavior on my system (running on an AT91SAM9G15): despite the fact that I'm reading the serial port continuously, the TTY driver sometimes takes 1.2 s to deliver data from the input queue.
The thing is, I'm not losing any data; it just takes too many calls to read() for it to arrive.
Maybe my code will help to explain the problem.
First off, I set my serial port:
/* 8N1 */
tty.c_cflag = (tty.c_cflag & ~CSIZE) | CS8;
/** Parity bit (none) */
tty.c_cflag &= ~(PARENB | PARODD);
/** Stop bit (1)*/
tty.c_cflag &= ~CSTOPB;
/* Noncanonical mode */
tty.c_lflag = 0;
tty.c_oflag = 0;
tty.c_cc[VMIN] = 0;
tty.c_cc[VTIME] = 0;
Later on, select is called:
s_ret = select(rfid_fd + 1, &set, NULL, NULL, &port_timeval);
So read() can do its magic:
...
if ((rd_ret = read(rfid_fd, &recv_buff[u16_recv_len], (u16_req_len - u16_recv_len))) > 0)
...
Right afterwards, if I keep reading the serial port for, say, 15 s, several times I see no data coming, and data which I know arrived on time (it's timestamped) comes late. Delays in fetching data from the input queue vary from 300 ms to 1.5 s.
I've tried every setting I could think of. It's tricky now, since I don't know whether the at91 UART driver isn't delivering data to the TTY driver or the TTY driver isn't fetching it. Which is it here?
Any help would be appreciated.
The normal procedure for setting port flags is to read the termios structure with tcgetattr(), save it for restoring later, modify the flags you want to change in a copy of it, and apply the copy with tcsetattr(). You have initialised c_lflag = 0;, which can have secondary effects related to your problem.
The next thing to consider is the documentation for the VMIN and VTIME elements. Setting both to 0 makes reads non-blocking, so you end up in a loop trying to read whatever happens to be in the buffer. Before doing that, remember that you have two parties competing for the buffer without rest: your process, trying to take characters out of it, and the driver's interrupt routine, trying to put in the characters it has just received. It is better (and the problem is probably here) to wait for at least one character to be available by setting VMIN to 1 and VTIME to 0. This makes the driver wake your process as soon as one character is available, which is probably closer to what you want. A sketch of both points follows.
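A sketch of that procedure, assuming rfid_fd is the descriptor from the question:
struct termios saved, tty;

if (tcgetattr(rfid_fd, &saved) != 0) {        /* read the current settings */
    /* handle error */
}
tty = saved;                                  /* modify a copy, keep the original */

cfmakeraw(&tty);                              /* non-canonical, no echo, no signals */
tty.c_cflag = (tty.c_cflag & ~CSIZE) | CS8;
tty.c_cflag &= ~(PARENB | PARODD | CSTOPB);   /* 8N1 */
tty.c_cc[VMIN]  = 1;                          /* read() returns as soon as one byte arrives */
tty.c_cc[VTIME] = 0;                          /* no inter-byte timeout */

if (tcsetattr(rfid_fd, TCSANOW, &tty) != 0) {
    /* handle error */
}

/* ... use the port ... */

tcsetattr(rfid_fd, TCSANOW, &saved);          /* restore the original settings on exit */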
After all these guesses: you haven't posted any reproducible code that can be used to check what you say, so this is the most we can do to help you.

Last Reboot detection on PhyCORE-AM335x-PD13.1.2 Linux 3.2

In an embedded system using a BSP Linux 3.2 on the Sitara AM3359, I want to detect at application startup what caused the last reboot and record it in one of two counters: watchdog reset or power-on reset.
Usually on an MCU, I detect a watchdog reset by reserving a spot in RAM and writing a special key there on the first boot. If the key is not there after a reboot, it was a power-on reset; if it is there, it was a watchdog reset.
My first question is: how can I keep key variables in RAM so that they survive a reboot or a watchdog reset?
It seems something clears the RAM at boot... can I disable that?
There is usually a register with that information. On the AM335x it is the PRM_RSTST register, with the WDT1_RST bit. I am using ioctl() with WDIOC_GETBOOTSTATUS to check whether the last boot was caused by the watchdog or by a power-on reset, but this call doesn't return anything I can understand. Can somebody explain it? How can I get at this register...
Power ON:
test1: 1076092848
test2: 1076113328
test3: 1075589040
test4: 1076203440
watchdog:
test5: 1076481968
test6: 1075732400
test7: 1075965872
The code used:
/* Check if last boot is caused by watchdog */
if (ioctl(fd, WDIOC_GETBOOTSTATUS, &bootstatus) == 0) {
    fprintf(stdout, "Last boot is caused by : %s, bootstatus= %d\n",
            (bootstatus != 0) ? "Watchdog" : "Power-On-Reset", bootstatus);
} else {
    fprintf(stderr, "Error: Cannot read watchdog status\n");
    exit(EXIT_FAILURE);
}
Is there another way to get this information (mmap, write driver, sys, etc)?
I would propose using your bootloader to inspect processor register values (for U-Boot I believe the command is reginfo). Do the same (with another command) for the memory where you store the watchdog keys. Once debugged with your bootloader, you can think about passing them to the kernel.
I started by using the terminal command devmem 0x44E00F08 (BusyBox) to see whether reading the physical memory works, then I used mmap() to read the PRM_RSTST register and determine whether the last reset was a watchdog reset.
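For reference, a sketch of the mmap() route; the 0x44E00F08 address comes from the devmem test above, and the bit position assumed here for WDT1_RST (bit 4) should be verified against the AM335x TRM:
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define PRM_RSTST_PHYS 0x44E00F08UL
#define WDT1_RST_BIT   (1u << 4)      /* assumed bit position; check the TRM */

int main(void)
{
    int fd = open("/dev/mem", O_RDONLY | O_SYNC);
    if (fd < 0) { perror("open /dev/mem"); return 1; }

    long page = sysconf(_SC_PAGESIZE);
    off_t base = PRM_RSTST_PHYS & ~(page - 1);          /* page-aligned mapping base */
    off_t offs = PRM_RSTST_PHYS - base;

    volatile uint32_t *map = mmap(NULL, page, PROT_READ, MAP_SHARED, fd, base);
    if (map == MAP_FAILED) { perror("mmap"); return 1; }

    uint32_t rstst = map[offs / 4];
    printf("PRM_RSTST = 0x%08x -> %s\n", rstst,
           (rstst & WDT1_RST_BIT) ? "watchdog reset" : "power-on reset");

    munmap((void *)map, page);
    close(fd);
    return 0;
}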

Linux, termios: how to handle negative result of select()

I'm developing on an AM335x system with Ubuntu and the latest kernel released by TI (the vendor).
I'm using a virtual TTY device (ttyUSB0) to communicate with a remote device. After about one hour of continuous communication (cyclic open-transmit-receive-close) I get strange behaviour from read(). If the UART is opened in blocking mode, the read hangs forever (no matter what values I set for VMIN and VTIME). If I open it in non-blocking mode, it returns -1 forever (after that first hour).
Now I'm using select() to check if there is data to be read.
If I receive a negative result from select(), how can I handle the error? What is good practice? Do I have to restart the service?
This code is part of a service that starts at boot time (with Upstart). When it hangs, restarting the service makes it work again. The restart has no effect on the device I'm communicating with, which keeps working properly.
This is a piece of code, just for completeness:
FD_ZERO(&set);                      /* clear the set */
FD_SET(tty_fileDescriptor, &set);   /* add our file descriptor to the set */
timeout.tv_sec = 10;
timeout.tv_usec = 0;
rv = select(tty_fileDescriptor + 1, &set, NULL, NULL, &timeout);
if (rv > 0) {
    letti = read(tty_fileDescriptor, payLoadTMP, 300);
} else if (rv < 0) {
    perror("select");
    // what to do here to re-establish communication?
}
The perror's output is:
select: Resource temporarily unavailable
This is a grep on dmesg:
usb 1-1: cp210x converter now attached to ttyUSB0
Any ideas? How can I re-establish the connection?
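As a generic sketch of how a negative select() result is commonly handled (retry on EINTR; otherwise treat the descriptor as unusable, close it, and reopen the port), reusing the names from the snippet above:
rv = select(tty_fileDescriptor + 1, &set, NULL, NULL, &timeout);
if (rv > 0) {
    letti = read(tty_fileDescriptor, payLoadTMP, 300);
} else if (rv < 0) {
    if (errno == EINTR) {
        /* interrupted by a signal: harmless, run the loop again */
    } else {
        /* unexpected error: assume the descriptor is no longer usable,
           close it and let the caller reopen and reconfigure the device */
        perror("select");
        close(tty_fileDescriptor);
        tty_fileDescriptor = -1;
    }
}
This does not diagnose the underlying cp210x hang; it only keeps the service from spinning on a dead descriptor.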

SetPriorityClass equivalent on Linux

I have a daemon-like application that does some disk-intensive processing at initialization. To avoid slowing down other tasks I do something like this on Windows:
SetPriorityClass(GetCurrentProcess(), PROCESS_MODE_BACKGROUND_BEGIN);
// initialization tasks
SetPriorityClass(GetCurrentProcess(), PROCESS_MODE_BACKGROUND_END);
// daemon is ready and running at normal priority
AFAIK, on Unices I can call nice or setpriority and lower the process priority, but I can't raise it back to what it was at process creation (i.e. there's no equivalent to the second SetPriorityClass invocation) unless I have superuser privileges. Is there by any chance another way of doing this that I'm missing? (I know I could run the initialization on a low-priority thread and wait for it to complete on the main thread, but I'd rather avoid that.)
edit: Bonus points for the equivalent SetThreadPriority(GetCurrentThread(), THREAD_MODE_BACKGROUND_BEGIN); and SetThreadPriority(GetCurrentThread(), THREAD_MODE_BACKGROUND_END);
You've said that your processing is disk intensive, so solutions using nice won't work. nice handles the priority of CPU access, not I/O access. PROCESS_MODE_BACKGROUND_BEGIN lowers I/O priority as well as CPU priority, and requires kernel features that don't exist in XP and older.
Controlling I/O priority is not portable across Unices, but there is a solution on modern Linux kernels. You'll need CAP_SYS_ADMIN to lower I/O priority to IOPRIO_CLASS_IDLE, but it is possible to lower and raise priority within the best-effort class without this.
The key function call is ioprio_set, which you'll have to call via a syscall wrapper:
static int ioprio_set(int which, int who, int ioprio)
{
    return syscall(SYS_ioprio_set, which, who, ioprio);
}
Depending on permissions, your entry to background mode is either IOPRIO_PRIO_VALUE(IOPRIO_CLASS_IDLE, 0) or IOPRIO_PRIO_VALUE(IOPRIO_CLASS_BE, 7). The sequence should then be:
#define IOPRIO_PRIO_VALUE(class, data) (((class) << IOPRIO_CLASS_SHIFT) | (data))

ioprio_set(IOPRIO_WHO_PROCESS, 0, IOPRIO_PRIO_VALUE(IOPRIO_CLASS_BE, 7));
// Do work
ioprio_set(IOPRIO_WHO_PROCESS, 0, IOPRIO_PRIO_VALUE(IOPRIO_CLASS_BE, 4));
Note that you may not have permission to return to your original I/O priority, so you may need to return to a different best-effort value instead.
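The IOPRIO_* constants are not exported by older glibc headers, so a self-contained version typically defines them itself; the values below match the kernel's linux/ioprio.h (verify against your kernel), and the background_begin/background_end helpers are just illustrative names:
#include <sys/syscall.h>
#include <unistd.h>

#define IOPRIO_CLASS_SHIFT 13
#define IOPRIO_PRIO_VALUE(class, data) (((class) << IOPRIO_CLASS_SHIFT) | (data))

enum { IOPRIO_CLASS_NONE, IOPRIO_CLASS_RT, IOPRIO_CLASS_BE, IOPRIO_CLASS_IDLE };
enum { IOPRIO_WHO_PROCESS = 1, IOPRIO_WHO_PGRP, IOPRIO_WHO_USER };

static int ioprio_set(int which, int who, int ioprio)
{
    return syscall(SYS_ioprio_set, which, who, ioprio);
}

/* Drop to the lowest best-effort level for the disk-heavy initialization,
   then come back to the default best-effort level (4). */
static void background_begin(void)
{
    ioprio_set(IOPRIO_WHO_PROCESS, 0, IOPRIO_PRIO_VALUE(IOPRIO_CLASS_BE, 7));
}

static void background_end(void)
{
    ioprio_set(IOPRIO_WHO_PROCESS, 0, IOPRIO_PRIO_VALUE(IOPRIO_CLASS_BE, 4));
}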
Actually, if you have a reasonably recent Linux kernel there might be a solution. Here's what TLPI says:
In Linux kernels before 2.6.12, an unprivileged process may use setpriority() only to (irreversibly) lower its own or another process's nice value.
Since kernel 2.6.12, Linux provides the RLIMIT_NICE resource limit, which permits unprivileged processes to increase nice values. An unprivileged process can raise its own nice value to the maximum specified by the formula 20 - rlim_cur, where rlim_cur is the current RLIMIT_NICE soft resource limit.
So basically you have to:
Use ulimit -e to set RLIMIT_NICE
Use setpriority as usual
Here is an example
Edit /etc/security/limits.conf. Add
cnicutar - nice -10
Verify using ulimit
cnicutar@aiur:~$ ulimit -e
30
We like that limit so we don't change it.
nice ls
cnicutar@aiur:~$ nice -n -10 ls tmp
cnicutar@aiur:~$
cnicutar@aiur:~$ nice -n -11 ls tmp
nice: cannot set niceness: Permission denied
setpriority example
#include <stdio.h>
#include <sys/resource.h>
#include <unistd.h>

int main()
{
    int rc;

    printf("We are being nice!\n");

    /* set our nice to 10 */
    rc = setpriority(PRIO_PROCESS, 0, 10);
    if (0 != rc) {
        perror("setpriority");
    }

    sleep(1);
    printf("Stop being nice\n");

    /* set our nice to -10 */
    rc = setpriority(PRIO_PROCESS, 0, -10);
    if (0 != rc) {
        perror("setpriority");
    }

    return 0;
}
Test program
cnicutar@aiur:~$ ./nnice
We are being nice!
Stop being nice
cnicutar@aiur:~$
The only drawback to this is that it's not portable to other Unixes (or is it Unices ?).
To work around lowering the priority and then bringing it back, you can:
fork()
CHILD: lower its own priority
PARENT: wait for the child (keeping the parent's original priority)
CHILD: do the job (at the lower priority)
PARENT: continue at the original priority after the child has finished.
This should be a UNIX-portable solution.
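A minimal sketch of that idea (the nice value of 19 is just an example); note that whatever the child builds in memory stays in the child's address space, so this fits best when the initialization results end up on disk or in shared memory:
#include <stdio.h>
#include <sys/resource.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();

    if (pid == 0) {
        /* child: lower its own priority, then do the heavy initialization */
        setpriority(PRIO_PROCESS, 0, 19);
        /* ... disk-intensive initialization ... */
        _exit(0);
    }

    /* parent: keeps its original priority, waits for the child to finish */
    waitpid(pid, NULL, 0);

    /* the daemon continues here at the original priority */
    printf("initialization done, running at normal priority\n");
    return 0;
}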
