In minicom, how do I get more time to stop autoboot so I can enter u-boot? - ubuntu-14.04

I have BusyBox v1.24.2 running on Ubuntu 14.04. When I reboot I get a prompt that says, "Hit any key to stop autoboot: 3". Normally I get 3 seconds, but now it shows a count of 0 and immediately autoboots, giving me no chance to enter u-boot. Is there a way to increase the time?

The easiest way to achieve this would be to set the u-boot environment variable bootdelay=x (where x is the number of seconds to wait).
You can do this by interrupting the boot sequence and setting the variable at the u-boot prompt, or from userspace by using fw_setenv.
More information here regarding the bootdelay environment variable: https://www.denx.de/wiki/DULG/UBootEnvVariables
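For example, after interrupting autoboot you could set it at the u-boot prompt (standard u-boot commands; the prompt string varies by board):
setenv bootdelay 5
saveenv
Or from a running Linux userspace, assuming the fw_setenv/fw_printenv tools are installed and /etc/fw_env.config matches your board's environment storage:
fw_setenv bootdelay 5
fw_printenv bootdelay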

Related

Can a task that is killed incorrectly on Linux be considered a memory leak?

I use a Raspberry Pi [and run Ubuntu Server 20.04 LTS] quite often, so it is advantageous to use memory as responsibly as possible. That being said, I run a number of processes that seem to run fairly efficiently, using about 2 GB of the 4 GB of available memory. Eventually, though, the memory usage grows closer and closer to the 4 GB level. While investigating memory usage with htop, I noticed something about the Python scripts I'm running (I've provided an image of what I'm describing): the processes seem to stack up.
Could this be because I'm using CTRL + Z rather than CTRL + C to restart my Python script?
Please let me know if I can be more specific.
Yes, it's because you use Ctrl-Z. Use Ctrl-C to interrupt your processes, by sending them SIGINT.
Ctrl-Z only suspends your process and leaves it stopped in the background; it sits there, still holding its memory, until you resume it with fg or bg, or kill it.
Try this when running some terminal program on your rPi. (It works with vi and many other programs.)
Press ctrl-z
Then do some shell commands. ls or whatever
Then type fg to resume your suspended process.
Believe it or not, this stuff works exactly the same on my rPi running GNU/Linux as it did on Bell Labs UNIX Seventh Edition on a PDP 11/70 back in 1976. But that computer had quite a bit less RAM.
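If you already have a stack of stopped Python processes from earlier Ctrl-Z presses, you can clean them up with ordinary shell job control (standard commands; %1 is just the first job number shown by jobs):
jobs          # list the stopped/background jobs of this shell
fg %1         # bring job 1 back to the foreground, then press Ctrl-C
kill %1       # or terminate job 1 directly
If the shell that started them is already gone, kill them by PID from htop instead.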

"clocksource tsc unstable" shown when the linux kernel boots up

I am booting up a Linux kernel using a full-system simulator, and I'd like to run my benchmark on the booted system. However, when it boots up, it shows me this message: "clocksource tsc unstable", and occasionally it hangs at the beginning. Sometimes it does let me run my benchmark, but then it seems to hang in the middle, since the application never finishes and appears to be stuck. Any idea how to fix this issue?
Thanks.
It suggests that the kernel didn't manage to calibrate the TSC (Time Stamp Counter) properly, i.e. the value is stale. This usually happens with VMs. The way to avoid it is to pass a predefined lpj (loops per jiffy) value as a kernel parameter (lpj=). Try it; hopefully the issue will be fixed!
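For example (a sketch; the values are illustrative), take the lpj figure printed by a successful boot in its "Calibrating delay loop ... (lpj=...)" dmesg line and append it to the kernel command line in your simulator or bootloader configuration:
console=ttyS0 root=/dev/sda1 lpj=2490368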

How to debug ARM Linux kernel (msleep()) lock up?

I am first of all looking for debugging tips. If someone can point out the one line of code to change or the one peripheral config bit to set to fix the problem, that would be terrific. But that's not what I'm hoping for; I'm looking more for how to go about debugging it.
Googling "msleep hang linux kernel site:stackoverflow.com" yields 13 answers and none of them is on point, so I think I'm safe to ask.
I rebuilt an ARM Linux kernel for an embedded TI AM1808 ARM processor (Sitara/DaVinci?). I see all of the boot log up to the login: prompt coming out of the serial port, but trying to log in gets no response; it doesn't even echo what I typed.
After lots of debugging I arrived at the kernel and added debugging code between lines 828 and 830 (yes, the kernel version is 2.6.37). This is the point in kernel mode just before '/sbin/init' is called:
http://lxr.linux.no/linux+v2.6.37/init/main.c#L815
Right before line 830 I added a forever-loop printk and I see the results. I have let it run for a couple of hours and it counts to about 2 million. Sample line:
dbg:init/main.c:1202: 2088430
So it has spit out 60 million bytes without problem.
However, if I add msleep(1000) in the loop, it prints only once, i.e. msleep() does not return.
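The loop is essentially this (a sketch; the counter and message format are just what I used):
int dbg_count = 0;
while (1) {
        printk(KERN_ERR "dbg:%s:%d: %d\n", __FILE__, __LINE__, dbg_count++);
        /* msleep(1000);   adding this line makes it print only once */
}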
Details:
Adding a conditional printk at line 4073 in the scheduler, conditioned on a flag that gets set at the start of the forever test loop described above, shows that schedule() is no longer called once it hangs:
http://lxr.linux.no/linux+v2.6.37/kernel/sched.c#L4064
The only selections under .config/'Device Drivers' are:
Block devices
I2C support
SPI support
The kernel and its ramdisk are loaded using uboot/TFTP.
I don't believe it tries to use the Ethernet.
Since all of this happens before '/sbin/init', very little should be happening.
More details:
I have a very similar board with the same CPU. I can run the same uImage and the same ramdisk and it works fine there. I can login and do the usual things.
I have run a memory test (64 MB total; I limited the kernel to 32 MB and tested the other 32 MB; it's a single-chip DDR2) and found no problem.
One board uses UART0 and the other UART2, but the boot log comes out of both, so that should not be the problem.
Any debugging tips are greatly appreciated.
I don't have an appropriate JTAG so I can't use that.
If msleep doesn't return or doesn't make it to schedule, then in order to debug we can follow the call stack.
msleep calls schedule_timeout_uninterruptible(timeout), which calls schedule_timeout(timeout), which in the default case exits without calling schedule if the timeout in jiffies passed to it is < 0, so that is one thing to check.
If the timeout is positive, then setup_timer_on_stack(&timer, process_timeout, (unsigned long)current); is called, followed by __mod_timer(&timer, expire, false, TIMER_NOT_PINNED); before calling schedule.
If we aren't getting to schedule then something must be happening in either setup_timer_on_stack or __mod_timer.
The call chain for setup_timer_on_stack is: setup_timer_on_stack calls setup_timer_on_stack_key, which calls init_timer_on_stack_key, which is either an external function (if CONFIG_DEBUG_OBJECTS_TIMERS is enabled) or calls init_timer_key(timer, name, key), which calls debug_init followed by __init_timer(timer, name, key).
__mod_timer first calls timer_stats_timer_set_start_info(timer) and then makes a whole lot of other function calls.
I would advise starting by putting a printk or two in schedule_timeout, probably on either side of the setup_timer_on_stack call or on either side of the __mod_timer call.
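For example (a sketch against schedule_timeout() in the 2.6.37 sources, kernel/timer.c; only the printk lines are additions):
        expire = timeout + jiffies;
        printk(KERN_ERR "dbg: before setup_timer_on_stack\n");
        setup_timer_on_stack(&timer, process_timeout, (unsigned long)current);
        printk(KERN_ERR "dbg: before __mod_timer\n");
        __mod_timer(&timer, expire, false, TIMER_NOT_PINNED);
        printk(KERN_ERR "dbg: before schedule\n");
        schedule();
        printk(KERN_ERR "dbg: after schedule\n");
Whichever message is the last one you see tells you which call is not coming back.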
This problem has been solved.
With liberal use of printk it was determined that schedule() does indeed switch to another task, the idle task. In this instance, this being an embedded Linux, the original code base I copied from installed an idle task. That idle task is apparently not appropriate for my board; it locked up the CPU and thus caused the hang. Commenting out the call to the idle task
http://lxr.linux.no/linux+v2.6.37/arch/arm/mach-davinci/cpuidle.c#L93
works around the problem.

select() inside infinite loop uses significantly more CPU on RHEL 4.8 virtual machine than on a Solaris 10 machine

I have a daemon app written in C that is currently running with no known issues on a Solaris 10 machine. I am in the process of porting it over to Linux. I have had to make minimal changes. During testing it passes all test cases. There are no issues with its functionality. However, when I view its CPU usage while 'idle' on my Solaris machine, it is using around 0.03% CPU. On the virtual machine running Red Hat Enterprise Linux 4.8, that same process uses all available CPU (usually somewhere in the 90%+ range).
My first thought was that something must be wrong with the event loop. The event loop is an infinite loop (while(1)) with a call to select(). The timeval is set up so that timeval.tv_sec = 0 and timeval.tv_usec = 1000. This seems reasonable enough for what the process is doing. As a test I bumped timeval.tv_sec to 1. Even after doing that I saw the same issue.
Is there something I am missing about how select works on Linux vs. Unix? Or does it work differently with an OS running on a virtual machine? Or maybe there is something else I am missing entirely?
One more thing: I am not sure which version of VMware Server is being used. It was just updated about a month ago, though.
I believe that Linux returns the remaining time by writing it into the time parameter of the select() call and Solaris does not. That means that a programmer who isn't aware of the POSIX spec might not reset the time parameter between calls to select.
This would result in the first call having 1000 usec timeout and all other calls using 0 usec timeout.
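A minimal sketch of the fix (the descriptor name is illustrative): re-arm the timeout inside the loop, because on Linux select() overwrites it with the time remaining.
#include <sys/select.h>
void event_loop(int fd)            /* fd: whatever descriptor the daemon watches */
{
    for (;;) {
        fd_set rfds;
        struct timeval tv;
        FD_ZERO(&rfds);
        FD_SET(fd, &rfds);
        tv.tv_sec = 0;             /* reset every iteration, not once before the loop */
        tv.tv_usec = 1000;
        int ready = select(fd + 1, &rfds, NULL, NULL, &tv);
        if (ready > 0 && FD_ISSET(fd, &rfds)) {
            /* handle input */
        }
    }
}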
As Zan Lynx said, the timeval is modified by select on Linux, so you should reassign the correct value before each select call. I also suggest checking whether some of the file descriptors are in a particular state (e.g. end of file, peer connection closed, ...). Maybe the port is exposing a latent bug in the analysis of the returned values (FD_ISSET and so on). It happened to me too some years ago in a port of a select-driven loop: I was using the returned value in the wrong way, and a closed fd was added to the rd_set, causing select to fail. On the old platform the wrong fd happened to have a value higher than maxfd, so it was ignored. Because of the same bug, the program didn't recognize the select failure (select() == -1) and looped forever.
Bye!

Trying to run my script only at boot time, not at reboot?

Shouldn't the following command run myScript only at runlevel 2? I noticed it executes at reboot too. I want to run it only at startup.
update-rc.d myScript start 01 2 . stop 01 0 1 6 .
That's right. You need to check the first argument (start, stop, etc.) passed to your script to decide what's happening, as sketched below. It is explained in the Debian Policy Manual, section '9.3.2 Writing the scripts'.
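A sketch of that check inside the script (the mapping of actions to runlevels is just an example):
#!/bin/sh
case "$1" in
  start)
    # entering runlevel 2: do the real work
    ;;
  stop)
    # runlevels 0, 1 and 6 (halt, single user, reboot): do nothing, or record state
    ;;
esac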
Alternatively, you can run your script just by placing it in the /etc/init.d/rc.local file, or by not including any stop runlevels in your update-rc.d command.
You should recognize the different runlevels inside myScript and do something different depending on the case. In runlevel 6 you should write some file to be discovered next time you reach runlevel 2.
However, you may need to revise your design as this is a very strange requirement.
What do you need to achieve?
How are startup and reboot different? If the machine is a dual-boot machine, you can reboot from Linux but then boot into Windows for some time, say an hour, before rebooting again into Linux. How is that different from just shutting down the machine for an hour? There is no way you can detect or predict at shutdown time what will happen after the reboot.
What I think you actually want is this: when your machine starts up and boots Linux, your script should check how long it has been since the machine was last shut down. If that is within a short period of time, say 5 minutes, count it as a reboot. Otherwise count it as a startup.
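A rough sketch of that heuristic (the marker file path and the 5-minute threshold are illustrative):
# in the stop case: remember when we went down
date +%s > /var/tmp/last_shutdown
# in the start case: compare against the recorded time
now=$(date +%s)
last=$(cat /var/tmp/last_shutdown 2>/dev/null || echo 0)
if [ $((now - last)) -lt 300 ]; then
    echo "reboot"      # came back up within 5 minutes
else
    echo "cold start"  # treat as a real startup
fi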
