Bash: infinite sleep (infinite blocking) - Linux

I use startx to start X, which evaluates my .xinitrc. In my .xinitrc I start my window manager using /usr/bin/mywm. Now, if I kill my WM (e.g. in order to test some other WM), X terminates too, because the .xinitrc script has reached EOF.
So I added this at the end of my .xinitrc:
while true; do sleep 10000; done
This way X won't terminate if I kill my WM. Now my question: how can I do an infinite sleep instead of looping sleep? Is there a command that will, so to speak, freeze the script?

sleep infinity does exactly what it suggests and works without cat abuse.

tail does not block
As always: For everything there is an answer which is short, easy to understand, easy to follow and completely wrong. Here tail -f /dev/null falls into this category ;)
If you look at it with strace tail -f /dev/null you will notice that this solution is far from blocking! It's probably even worse than the sleep solution in the question, as it uses (under Linux) precious resources like the inotify system. Also, other processes which write to /dev/null make tail loop. (On my Ubuntu 16.10 (64-bit) this adds several dozen syscalls per second on an already busy system.)
The question was for a blocking command
Unfortunately, there is no such thing ..
Read: I do not know any way to achieve this with the shell directly.
Everything (even sleep infinity) can be interrupted by some signal. So if you want to be really sure it does not return unexpectedly, it must run in a loop, like you already did for your sleep. Please note that (on Linux) /bin/sleep apparently is capped at 24 days (have a look at strace sleep infinity), hence the best you can do probably is:
while :; do sleep 2073600; done
(Note that I believe sleep loops internally for values higher than 24 days, but this means: it is not blocking, it is very slowly looping. So why not move this loop to the outside?)
.. but you can come quite near with an unnamed fifo
You can create something which really blocks as long as there are no signals sent to the process. The following uses bash 4, 2 PIDs and 1 fifo:
bash -c 'coproc { exec >&-; read; }; eval exec "${COPROC[0]}<&-"; wait'
You can check that this really blocks with strace if you like:
strace -ff bash -c '..see above..'
How this was constructed
read blocks if there is no input data (see some other answers). However, the tty (a.k.a. stdin) usually is not a good source, as it is closed when the user logs out. Also, it might steal some input from the tty. Not nice.
To make read block, we need to wait on something like a fifo which will never return anything. In bash 4 there is a command which provides us with exactly such a fifo: coproc. If we also wait for the blocking read (which is our coproc), we are done. Sadly this needs to keep two PIDs and a fifo open.
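For readability, here is the same one-liner unrolled and commented (behavior is identical):
bash -c '
  coproc {          # spawn the coprocess: costs one extra PID and a fifo pair
    exec >&-        #   close its stdout back to us; we never read it
    read            #   block on its stdin; nobody ever writes, so: forever
  }
  eval exec "${COPROC[0]}<&-"   # close our read end of the pair
  wait                          # block until the coprocess exits (never)
'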
Variant with a named fifo
If you do not mind using a named fifo, you can do this as follows:
mkfifo "$HOME/.pause.fifo" 2>/dev/null; read <"$HOME/.pause.fifo"
Not using a loop on the read is a bit sloppy, but you can reuse this fifo as often as you like and make the reads terminate using touch "$HOME/.pause.fifo" (if there is more than a single read waiting, all are terminated at once).
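For illustration, pausing and releasing with this fifo could look like this (two shells, as described above):
# shell 1: blocks until somebody opens and closes the write end of the fifo
mkfifo "$HOME/.pause.fifo" 2>/dev/null
read <"$HOME/.pause.fifo"

# shell 2: releases all currently waiting readers at once
touch "$HOME/.pause.fifo"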
Or use the Linux pause() syscall
For infinite blocking there is a Linux kernel call, pause(), which does what we want: wait forever (until a signal arrives). However, there is no userspace program for this (yet).
C
Creating such a program is easy. Here is a snippet to create a very small Linux program called pause which pauses indefinitely (needs diet, gcc etc.):
printf '#include <unistd.h>\nint main(){for(;;)pause();}' > pause.c;
diet -Os cc pause.c -o pause;
strip -s pause;
ls -al pause
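If diet is not installed, a plain libc build should behave the same (a sketch assuming any C compiler; only the binary size differs):
printf '#include <unistd.h>\nint main(){for(;;)pause();}' > pause.c
cc -Os pause.c -o pause && strip -s pause
./pause   # blocks until a signal arrives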
python
If you do not want to compile something yourself, but you have python installed, you can use this under Linux:
python -c 'while 1: import ctypes; ctypes.CDLL(None).pause()'
(Note: Use exec python -c ... to replace the current shell; this frees one PID. The solution can be improved with some IO redirection as well, freeing unused FDs. This is up to you.)
How this works (I think): ctypes.CDLL(None) loads the standard C library and runs the pause() function in it within some additional loop. Less efficient than the C version, but works.
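Putting the note above into practice, the last line of .xinitrc could look like this (a sketch; the /dev/null redirections are just one way to read the IO-redirection suggestion):
# replaces the shell (frees one PID) and detaches the unused standard FDs
exec python -c 'while 1: import ctypes; ctypes.CDLL(None).pause()' \
    </dev/null >/dev/null 2>&1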
My recommendation for you:
Stick with the looping sleep. It's easy to understand, very portable, and blocks most of the time.

Maybe this seems ugly, but why not just run cat and let it wait for input forever?

TL;DR: since GNU coreutils version 9, sleep infinity does the right thing on Linux systems. Previously (and in other systems) the implementation was to actually sleep the maximum time allowed, which is finite.
Wondering why this is not documented anywhere, I bothered to read the sources from GNU coreutils and I found it executes roughly what follows:
Use strtod from C stdlib on the first argument to convert 'infinity' to a double precision value. So, assuming IEEE 754 double precision the 64-bit positive infinity value is stored in the seconds variable.
Invoke xnanosleep(seconds) (found in gnulib), this in turn invokes dtotimespec(seconds) (also in gnulib) to convert from double to struct timespec.
struct timespec is just a pair of numbers: integer part (in seconds) and fractional part (in nanoseconds).
Naïvely converting positive infinity to integer would result in undefined behaviour (see §6.3.1.4 from C standard), so instead it truncates to TYPE_MAXIMUM(time_t).
The actual value of TYPE_MAXIMUM(time_t) is not set in the standard (even sizeof(time_t) isn't); so, for the sake of example let's pick x86-64 from a recent Linux kernel.
This is TIME_T_MAX in the Linux kernel, which is defined (time.h) as:
(time_t)((1UL << ((sizeof(time_t) << 3) - 1)) - 1)
Note that time_t is __kernel_time_t and __kernel_time_t is long; the LP64 data model is used, so sizeof(long) is 8 (64 bits).
Which results in: TIME_T_MAX = 9223372036854775807.
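That number can be reproduced in shell arithmetic (bash uses 64-bit signed integers; the split avoids signed overflow):
printf '%d\n' $(( (1 << 62) - 1 + (1 << 62) ))   # 9223372036854775807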
That is: sleep infinity results in an actual sleep time of 9223372036854775807 seconds (about 10^11 years). And for 32-bit Linux systems (sizeof(long) is 4 (32 bits)): 2147483647 seconds (68 years; see also the year 2038 problem).
Edit: apparently the nanosleep function called is not directly the syscall, but an OS-dependent wrapper (also defined in gnulib).
There's an extra step as a result: for some systems where HAVE_BUG_BIG_NANOSLEEP is true the sleep is truncated to 24 days and then called in a loop. This is the case for some (or all?) Linux distros. Note that this wrapper may be not used if a configure-time test succeeds (source).
In particular, that would be 24 * 24 * 60 * 60 = 2073600 seconds (plus 999999999 nanoseconds); but this is called in a loop in order to respect the specified total sleep time. Therefore the previous conclusions remain valid.
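The 24-day figure is easy to check:
echo $(( 24 * 24 * 60 * 60 ))   # 2073600 seconds, the per-iteration cap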
In conclusion, the resulting sleep time is not infinite but high enough for all practical purposes, even if the resulting actual time lapse is not portable; that depends on the OS and architecture.
To answer the original question, this is obviously good enough, but if for some reason (a very resource-constrained system) you really want to avoid a useless extra countdown timer, I guess the most correct alternative is to use the cat method described in other answers.
Edit: recent GNU coreutils versions will try to use the pause syscall (if available) instead of looping. The previous argument is no longer valid when targeting these newer versions in Linux (and possibly BSD).
Portability
This is an important and valid concern:
sleep infinity is a GNU coreutils extension not contemplated in POSIX. GNU's implementation also supports a "fancy" syntax for time durations, like sleep 1h 5.2s while POSIX only allows a positive integer (e.g. sleep 0.5 is not allowed).
Some compatible implementations: GNU coreutils, FreeBSD (at least from version 8.2?), Busybox (must be compiled with the options FANCY_SLEEP and FLOAT_DURATION).
The strtod behaviour is C and POSIX compatible (i.e. strtod("infinity", 0) is always valid in C99-conformant implementations, see §7.20.1.3).

sleep infinity looks the most elegant, but sometimes it doesn't work for some reason. In that case, you can try other blocking commands such as cat, read, tail -f /dev/null, grep a, etc.

Let me explain why sleep infinity works though it is not documented. jp48's answer is also useful.
The most important thing: By specifying inf or infinity (both case-insensitive), you can sleep for the longest time your implementation permits (i.e. the smaller value of HUGE_VAL and TYPE_MAXIMUM(time_t)).
Now let's dig into the details. The source code of the sleep command can be read from coreutils/src/sleep.c. Essentially, the function does this:
double s; //seconds
xstrtod (argv[i], &p, &s, cl_strtod); //`p` is not essential (just used for error check).
xnanosleep (s);
Understanding xstrtod (argv[i], &p, &s, cl_strtod)
xstrtod()
According to gnulib/lib/xstrtod.c, the call of xstrtod() converts string argv[i] to a floating point value and stores it to *s, using a converting function cl_strtod().
cl_strtod()
As can be seen from coreutils/lib/cl-strtod.c, cl_strtod() converts a string to a floating point value, using strtod().
strtod()
According to man 3 strtod, strtod() converts a string to a value of type double. The manpage says
The expected form of the (initial portion of the) string is ... or (iii) an infinity, or ...
and an infinity is defined as
An infinity is either "INF" or "INFINITY", disregarding case.
Although the document tells
If the correct value would cause overflow, plus or minus HUGE_VAL (HUGE_VALF, HUGE_VALL) is returned
, it is not clear how an infinity is treated. So let's see the source code gnulib/lib/strtod.c. What we want to read is
else if (c_tolower (*s) == 'i'
         && c_tolower (s[1]) == 'n'
         && c_tolower (s[2]) == 'f')
  {
    s += 3;
    if (c_tolower (*s) == 'i'
        && c_tolower (s[1]) == 'n'
        && c_tolower (s[2]) == 'i'
        && c_tolower (s[3]) == 't'
        && c_tolower (s[4]) == 'y')
      s += 5;
    num = HUGE_VAL;
    errno = saved_errno;
  }
Thus, INF and INFINITY (both case-insensitive) are regarded as HUGE_VAL.
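A quick interactive check of that case-insensitivity (assuming GNU sleep):
sleep INF & sleep Infinity &
sleep 1; jobs    # both jobs are still running
kill %1 %2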
HUGE_VAL family
Let's use N1570 as the C standard. HUGE_VAL, HUGE_VALF and HUGE_VALL macros are defined in §7.12-3
The macro
HUGE_VAL
expands to a positive double constant expression, not necessarily representable as a float. The macros
HUGE_VALF
HUGE_VALL
are respectively float and long double analogs of HUGE_VAL.
HUGE_VAL, HUGE_VALF, and HUGE_VALL can be positive infinities in an implementation that supports infinities.
and in §7.12.1-5
If a floating result overflows and default rounding is in effect, then the function returns the value of the macro HUGE_VAL, HUGE_VALF, or HUGE_VALL according to the return type
Understanding xnanosleep (s)
Now we understand the essence of xstrtod(). From the explanations above, it is crystal clear that the xnanosleep(s) we saw first actually means xnanosleep(HUGE_VAL).
xnanosleep()
According to the source code gnulib/lib/xnanosleep.c, xnanosleep(s) essentially does this:
struct timespec ts_sleep = dtotimespec (s);
nanosleep (&ts_sleep, NULL);
dtotimespec()
This function converts an argument of type double to an object of type struct timespec. Since it is very simple, let me cite the source code gnulib/lib/dtotimespec.c. All of the comments are added by me.
struct timespec
dtotimespec (double sec)
{
  if (! (TYPE_MINIMUM (time_t) < sec))               //underflow case
    return make_timespec (TYPE_MINIMUM (time_t), 0);
  else if (! (sec < 1.0 + TYPE_MAXIMUM (time_t)))    //overflow case
    return make_timespec (TYPE_MAXIMUM (time_t), TIMESPEC_HZ - 1);
  else                                               //normal case (looks complex but does nothing technical)
    {
      time_t s = sec;
      double frac = TIMESPEC_HZ * (sec - s);
      long ns = frac;
      ns += ns < frac;
      s += ns / TIMESPEC_HZ;
      ns %= TIMESPEC_HZ;

      if (ns < 0)
        {
          s--;
          ns += TIMESPEC_HZ;
        }

      return make_timespec (s, ns);
    }
}
Since time_t is defined as an integral type (see §7.27.1-3), it is natural to assume the maximum value of type time_t is smaller than HUGE_VAL (of type double), which means we enter the overflow case. (Actually this assumption is not needed since, in all cases, the procedure is essentially the same.)
make_timespec()
The last wall we have to climb is make_timespec(). Very fortunately, it is so simple that citing the source code gnulib/lib/timespec.h is enough.
_GL_TIMESPEC_INLINE struct timespec
make_timespec (time_t s, long int ns)
{
  struct timespec r;
  r.tv_sec = s;
  r.tv_nsec = ns;
  return r;
}

What about sending a SIGSTOP to itself?
This should pause the process until SIGCONT is received, which in your case is: never.
kill -STOP "$$";
# grace time for signal delivery
sleep 60;
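Should you ever want the script to continue after all, a SIGCONT from another shell wakes it up (hypothetical; $pid stands for the stopped script's PID):
kill -CONT "$pid"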

I recently had a need to do this. I came up with the following function that will allow bash to sleep forever without calling any external program:
snore()
{
    local IFS
    [[ -n "${_snore_fd:-}" ]] || { exec {_snore_fd}<> <(:); } 2>/dev/null ||
    {
        # workaround for MacOS and similar systems
        local fifo
        fifo=$(mktemp -u)
        mkfifo -m 700 "$fifo"
        exec {_snore_fd}<>"$fifo"
        rm "$fifo"
    }
    read ${1:+-t "$1"} -u $_snore_fd || :
}
NOTE: I previously posted a version of this that would open and close the file descriptor each time, but I found that on some systems doing this hundreds of times a second would eventually lock up. Thus the new solution keeps the file descriptor between calls to the function. Bash will clean it up on exit anyway.
This can be called just like /bin/sleep, and it will sleep for the requested time. Called without parameters, it will hang forever.
snore 0.1 # sleeps for 0.1 seconds
snore 10 # sleeps for 10 seconds
snore # sleeps forever
There's a writeup with excessive details on my blog here

This approach will not consume any resources to keep the process alive.
while :; do :; done & kill -STOP $! && wait
Breakdown
while :; do :; done & Creates a dummy process in the background
kill -STOP $! Stops the background process
wait Waits for the background process; this blocks forever, because the background process was stopped beforehand
Notes
Works only from within a script file.

Instead of killing the window manager, try running the new one with --replace or -replace if available.

while :; do read; done
No waiting for a sleeping child process.

Related

Can perl sysopen open a file for atomic writes?

While reading the APUE (3rd edition) book, I came across the open system call and its ability to let the user open a file for atomic write operations with the O_APPEND flag, meaning that multiple processes can write to a file and the kernel ensures that the data written to the single file by multiple processes doesn't overlap and all lines are intact.
Upon experimenting successfully with the open system call in a C/C++ program, I was able to validate this; it works just like the book describes. I was able to launch multiple processes which wrote to a single file, and all lines could be accounted for w.r.t. their process PIDs.
I was hoping to observe the same behavior with perl sysopen, as I have some tasks at work which could benefit from this behavior. I tried it out, but it did not work. When I analyzed the output file, I saw signs of a (probable) race condition, as there were many interleaved lines.
Question: Is the perl sysopen call not the same as Linux's open system call? Is it possible to achieve this type of atomic write operation with multiple processes writing to a single file?
EDIT: adding C code, and perl code used for testing.
C/C++ code
#include <fcntl.h>
#include <unistd.h>
#include <cstdio>
#include <string>

constexpr int MAXLINES = 100000;

int main(void)
{
    int fd;
    ssize_t n;
    if ((fd = open("outfile.txt", O_WRONLY|O_CREAT|O_APPEND, 0644)) == -1) {
        printf("failed to create outfile! exiting!\n");
        return -1;
    }
    for (int counter{1}; counter <= MAXLINES; counter++)
    {   /* write string 'line' for MAXLINES no. of times */
        std::string line = std::to_string(getpid())
                         + " This is a sample data line ";
        line += std::to_string(counter) + " \n";
        if ((n = write(fd, line.c_str(), line.size())) == -1) {
            printf("Failed to write to outfile!\n");
        }
    }
    close(fd);
    return 0;
}
Perl code
#!/usr/bin/perl
use Fcntl;
use strict;
use warnings;
my $maxlines = 100000;
sysopen (FH, "testfile", O_CREAT|O_WRONLY|O_APPEND) or die "failed sysopen\n";
while ($maxlines != 0) {
    print FH "($$) This is sample data line no. $maxlines\n";
    $maxlines--;
}
close (FH);
__END__
Update (after initial troubleshooting):
Thanks to the information provided in the answer below, I was able to get it working. I did run into an issue of some missing lines, which was caused by each process opening the file with O_TRUNC, which I shouldn't have done, but I missed it initially. After some careful analysis I spotted the issue and corrected it. As always, Linux never fails you :).
Here is a bash script I used to launch the processes:
#!/bin/bash
# basically we spawn "$1" instances of the same
# executable which should append to the same output file.
max=$1
[[ -z $max ]] && max=6
echo "creating $max processes for appending into same file"
# this is our output file collecting all
# the lines from all the processes.
# we truncate it before we start
>testfile
for i in $(seq 1 $max)
do
    echo $i && ./perl_read_write_with_syscalls.pl 2>>_err &
done
# end.
Verification from output file:
[compuser#lenovoe470:07-multiple-processes-append-to-a-single-file]$ ls -lrth testfile
-rw-rw-r--. 1 compuser compuser 252M Jan 31 22:52 testfile
[compuser#lenovoe470:07-multiple-processes-append-to-a-single-file]$ wc -l testfile
6000000 testfile
[compuser#lenovoe470:07-multiple-processes-append-to-a-single-file]$ cat testfile |cut -f1 -d" "|sort|uniq -c
1000000 (PID: 21118)
1000000 (PID: 21123)
1000000 (PID: 21124)
1000000 (PID: 21125)
1000000 (PID: 21126)
1000000 (PID: 21127)
[compuser#lenovoe470:07-multiple-processes-append-to-a-single-file]$
Observations:
To my surprise, there wasn't any iowait load on the system at all. I was not expecting that. I believe the kernel must somehow have taken care of it, but I don't know how it works. I would be interested to know more about it.
What could be the possible applications of this?
I do a lot of file-to-file reconciliations, and we (at work) always need to parse huge data files (like 30 GB - 50 GB each). With this working, I can now do parallel operations instead of my previous approach, which consisted of: hashing file1, then hashing file2, then comparing key/value pairs from the 2 files. Now I can do the hashing part in parallel and bring down the time it takes even further.
Thanks
It doesn't matter if you use open or sysopen; the key is using syswrite and sysread instead of print/printf/say/etc and readline/read/eof/etc.
syswrite maps to a single write(2) call, while print/printf/say/etc can result in multiple calls to write(2) (even if autoflush is enabled).[1]
sysread maps to a single read(2) call, while readline/read/eof/etc can result in multiple calls to read(2).
So, by using syswrite and sysread, you are subject to all the assurances that POSIX gives about those calls (whatever they might be) if you're on a POSIX system.
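One way to observe the difference yourself (a sketch using strace on Linux; exact syscall counts depend on your Perl build and buffer size):
# one write(2) per syswrite call:
strace -e trace=write perl -e 'syswrite(STDOUT, "line $_\n") for 1..3' >/dev/null
# print is buffered: typically a single write(2) flushing all three lines:
strace -e trace=write perl -e 'print "line $_\n" for 1..3' >/dev/null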
[1] If you use print/printf/say/etc, and limit your writes to less than the size of the buffer between (explicit or automatic) flushes, you'll get a single write(2) call. The buffer size was 4 KiB in older versions of Perl, and it's 8 KiB by default in newer versions of Perl. (The size is decided when perl is built.)

Read from standard input with all MPI processes

So far I've been using OPEN(fid, FILE='IN', ...) and it seems that all MPI processes read the same file IN without interfering with each other.
Furthermore, in order to allow the input file to be chosen among several, I simply made the IN file a symbolic link pointing to the desired input. This means that when I want to change the input file I have to run ln -sf desired-input IN before running the program (mpirun -n $np ./program).
I'd really like to be able to run the program as mpirun -n $np ./program < input-file. To do so I removed the OPEN statement, and the corresponding CLOSE statement, and changed all READ(fid,*) statements to READ(INPUT_UNIT,*) (I'm using the ISO_FORTRAN_ENV module).
But, after all these edits, I've realized that only one process (always 0, I noticed) reads from it, since all others reach EOF immediately. Here is an MWE, using OpenMPI 2.0.1.
! cat main.f90
program main
use, intrinsic :: iso_fortran_env
use mpi
implicit none
integer :: myid, x, ierr, stat
x = 12
call mpi_init(ierr)
call mpi_comm_rank(mpi_comm_world, myid, ierr)
read(input_unit,*, iostat=stat) x
if (is_iostat_end(stat)) write(output_unit,*) myid, "I'm out"
if (.not. is_iostat_end(stat)) write(output_unit,*) myid, "I'm in", myid, x
call mpi_finalize(ierr)
end program main
that can be compiled with mpifort -o main main.f90, run with mpirun -np 4 ./main, and which results in this output
1 I'm out
2 I'm out
3 I'm out
17 this is my input from keyboard
0 I'm in 0 17
I know that MPI has proper routines to perform parallel I/O, but I've found nothing about reading from standard input.
You are seeing the expected behaviour with OpenMPI. By default, mpirun directs UNIX standard input to /dev/null on all processes except the MPI_COMM_WORLD rank 0 process. The MPI_COMM_WORLD rank 0 process inherits standard input from mpirun.
The option --stdin can be used to direct standard input to a different process, but not to all of them.
One could also note that the behaviour of standard input redirection isn't consistent across MPI implementations (the notion isn't specified by the MPI standard). For example, using Intel MPI there is the -s option to its mpirun: mpirun -np 4 -s all ./main does allow all processes access to mpirun's standard input. There's also no guarantee that processes without that redirection will fail, rather than wait, to read.
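For concreteness, the two invocations discussed above could look as follows (option spellings vary across MPI versions, so consult your mpirun man page):
# OpenMPI: route mpirun's stdin to rank 1 instead of the default rank 0
mpirun -np 4 --stdin 1 ./main < input-file

# Intel MPI: give all ranks access to mpirun's stdin
mpirun -np 4 -s all ./main < input-file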

How to write bash output to arbitrary program stdin

I want to (hopefully easily) write from a bash script to any arbitrary program that is already running via that program's stdin.
Say I have some arbitrary program "Sum", that constantly takes user input from the terminal. Every integer it receives from stdin it adds to the current sum and outputs the new sum. Here's example terminal text of what I mean:
$: ./Sum
Sum: 0
Give me an integer: 2
Sum: 2
Give me an integer: 5
Sum: 7
How would I automate this process in a bash script? If I had control of Sum's source code I could let it accept integer arguments. But if I don't have said control of the program, how can I automate the interaction with Sum? Here's pseudocode of a bash snippet of what I want to do:
In example.sh:
#!/bin/bash
my_program_input_point=./Sum &
echo 2 > my_program_input_point
echo 5 > my_program_input_point
Thus on my terminal screen it would still look like this:
$: ./example.sh
Sum: 0
Give me an integer: 2
Sum: 2
Give me an integer: 5
Sum: 7
The difference is I wouldn't have typed any of it.
It feels like this task should be really easy, because basically anything you can do in a terminal, you can also easily do in a script. Except, apparently, directly interact with an arbitrary process once it's started.
This topic answers some aspects of my question by using pipes. However, the accepted answer only works if I have control of both programs on each side of the pipe; the "recipient" program is under no obligation to read from the pipe automatically.
This is assuming I can't modify the program/script in question. I also can't write a "pipe reader" script to wrap around the program (Sum), because that wrapper script would still be required to interact with the running process Sum.
The second answer to this question on serverfault.com (written by jfgagne) seems much closer, but I can't seem to get it working. Is there any easier way to do this that I just don't know about?
For information on how to capture and read an arbitrary program's output, see my next question.
The most straightforward way is to use a pipeline. The trick is that you can make the lefthand side of the pipeline a compound statement, or a function call. It doesn't have to be just a single command.
{
    echo 2
    echo 5
} | ./Sum
or
numbers() {
    echo 2
    echo 5
}
numbers | ./Sum
This lets you do whatever you want to generate the input. You don't have to have it all ahead of time. If you generate the input bit by bit, it'll be fed to ./Sum bit by bit.
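For example, a producer that computes numbers over time feeds ./Sum as it goes:
numbers() {
    for i in 2 5 9; do
        echo "$i"    # each number reaches ./Sum as soon as it is written
        sleep 1      # simulate slow generation
    done
}
numbers | ./Sum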
You can use a named pipe. Whatever starts Sum is responsible for creating the named pipe some place convenient, then starting Sum with the named pipe as its standard input.
mkfifo /tmp/pipe
Sum < /tmp/pipe
Then your script could take the name of the pipe as an argument, then treat it as a file it can write to.
#!/bin/bash
p=$1
echo 2 > "$p"
echo 5 > "$p"
Then you could call your script as, e.g., ./example.sh /tmp/pipe.
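A hypothetical end-to-end session, with one caveat the snippet above glosses over: every echo 2 > "$p" closes the fifo again, which would deliver EOF to Sum, so a dummy write end is held open for the duration:
mkfifo /tmp/pipe
./Sum < /tmp/pipe &     # Sum blocks reading the fifo
exec 3> /tmp/pipe       # hold a write end open so Sum never sees an early EOF
./example.sh /tmp/pipe  # writes 2 and 5 into the fifo
exec 3>&-               # closing it lets Sum see EOF and exit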
You can write to the standard input file descriptor of the running process. Here is the same question: https://serverfault.com/questions/178457/can-i-send-some-text-to-the-stdin-of-an-active-process-running-in-a-screen-sessi
You need to write to /proc/<pid of the program>/fd/0, which is the file descriptor for the standard input of the process with that pid. Make sure you have access to do this.
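For example (hypothetical PID; note that if fd 0 is a terminal, the text merely appears on that terminal and is not read as input by the process):
echo 2 > /proc/1234/fd/0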

Process on Linux: fork value divided by zero

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    printf("%d\n", 1 / fork());
}
By running this app, my output is: 0.
I know that in the parent, fork's return value is a number (the child's PID), and in the child the value is 0.
So why don't I get any problem from dividing 1/0?
Actually, the 1/0 arithmetic exception does occur, it just does not get printed to the console.
Set the core file size to unlimited and you will see the core file:
$ ulimit -c unlimited
And using gdb you can see the arithmetic exception:
$ gdb a.out core
The compiler is transforming your code into more elementary steps
(you could pass the -fdump-tree-all option to GCC, or use MELT graphical probe to look into some intermediate GCC representations)
So basically the compiler is transforming your code into something like
int main()
{
    int t1 = fork();
    int t2 = 1 / t1;
    printf("%d\n", t2);
}
So if t1 gets 0 (in the child process), the division computing t2 is undefined behavior, which usually crashes with a division by zero (i.e. a SIGFPE signal), and the printf is not reached.
Probably, on a PowerPC processor where you can make a division by zero which does not crash, the behavior (still undefined) would be different.
BTW, you should run your program with strace -f to understand what syscalls & signals are involved.

Time taken by `less` command to show output

I have a script that produces a lot of output. The script pauses for a few seconds at point T.
Now I am using the less command to analyze the output of the script.
So I execute ./script | less. I leave it running for sufficient time so that the script would have finished executing.
Now I go through the output of the less command by pressing the Page Down key. Surprisingly, while scrolling at point T of the output, I notice the pause of a few seconds again.
The script does not expect any input and would have definitely completed by the time I start analyzing the output of less.
Can someone explain how the pause of a few seconds is noticeable in the output of less when the script would have finished executing?
Your script is communicating with less via a pipe. A pipe is an in-memory stream of bytes that connects two endpoints: your script and the less program, the former writing output to it, the latter reading from it.
As pipes are in-memory, it would not be pleasant if they grew arbitrarily large. So, by default, there's a limit on the amount of data that can be inside the pipe (written, but not yet read) at any given moment; on Linux it's 64 KiB. If the pipe is full and your script tries to write to it, the write blocks. So your script isn't actually done working; it stopped at some point in a write() call.
How to overcome this? Adjusting the defaults is a bad option; what is used instead is allocating a buffer in the reader, so that it reads into the buffer, freeing the pipe and thus letting the writing program work, but shows to you (or handles) only a part of the output. less has such a buffer, and, by default, expands it automatically. However, it doesn't fill it in the background; it only fills it as you read the input.
So what would solve your problem is reading the file until the end (like you would normally press G), and then going back to the beginning (like you would normally press g). The thing is that you may specify these commands via command line like this:
./script | less +Gg
You should note, however, that you will have to wait until the whole script's output has loaded into memory, so you won't be able to view it right away. less is insufficiently sophisticated for that. But if that's what you really need (browsing the beginning of the output while ./script is still computing its end), you might want to use a temporary file:
./script >x & less x ; rm x
The pipe is full at the OS level, so the script blocks until less consumes some of it.
Flow control. Your script is effectively being paused while less is paging.
If you want to make sure that your command completes before you use less interactively, invoke less as less +G and it will read to the end of the input; you can then return to the start by typing 1G into less.
For some background information there's also a nice article by Alexander Sandler called "How less processes its input"!
http://www.alexonlinux.com/how-less-processes-its-input
Can I externally enforce line buffering on the script?
Is there an off-the-shelf pseudo-tty utility I could use?
You may try using the script command to turn on line-buffered output mode.
script -q /dev/null ./script | less # FreeBSD, Mac OS X
script -c "./script" /dev/null | less # Linux
For more alternatives in this respect please see: Turn off buffering in pipe.
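Another commonly suggested alternative (assuming GNU coreutils is available) is stdbuf, which forces line-buffered stdout without allocating a pseudo-tty:
stdbuf -oL ./script | less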
