Disable pagination on the command line - Linux

I am trying to write a script using the Python module pexpect that will connect to a server and execute commands as if you were typing them at the command line.
So for example, you can have something like:
child = pexpect.spawn('/usr/bin/ssh user@example.com')
child.sendline('ls -al')
or whatever command you want to send. It will act like you are typing in a terminal.
In my script, I am trying to run a command via the sendline() API that essentially dumps a bunch of info to the command line. But the output is paginated, so partway through you are prompted to press a key before the rest is printed.
So for example:
[Some info]
--------------- To continue, press any key. To quit, press 'q'. ---------------
[Some more info]
Is there a way to turn pagination off, or a command I can send before dumping the info that disables it?

In Linux:
You can use redirection to skip the pager (more or less). If it is important to display the output on screen, the output can be piped to tee.
For example, in man ls; ls the man command expects the user to press q to quit before ls is executed. To run both commands without user intervention, you can use man ls | tee; ls. If displaying the output is not mandatory, it can be redirected to /dev/null as well.
For additional help, please specify the exact command that you are trying to execute on the remote server.
In Python: when using pexpect, the interaction can be automated if the intermediate output is known in advance. You can use the expect() function to wait for a particular piece of output and then take the necessary action (for example with sendline()), roughly as in the sketch below.
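A minimal sketch of that approach. The host, the login steps, the shell prompt pattern (r'\$ ') and the paging command ('show info') are all assumptions here; replace them with whatever your server actually uses.
import pexpect

child = pexpect.spawn('/usr/bin/ssh user@example.com', encoding='utf-8')
child.expect('password:')                 # adapt the login to your setup (keys, host-key prompt, ...)
child.sendline('my-password')
child.expect(r'\$ ')                      # wait for the remote shell prompt

child.sendline('show info')               # hypothetical command whose output gets paginated
pages = []
while True:
    i = child.expect([r'To continue, press any key',  # the pager prompt from the question
                      r'\$ '])                         # or we are back at the shell prompt
    pages.append(child.before)
    if i == 0:
        child.send(' ')                   # "press any key": send a space for the next page
    else:
        break                             # shell prompt again, all pages received

print(''.join(pages))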

Related

Linux shell script executes but does not return to command prompt

I have a script that runs a line like the following:
sudo -u $USER $SUDOCMD &>>stdout.log
The sudo command runs a realtime process that prints out lots of stuff to the console.
Each time the script runs, it does not return to the command prompt; you have to press Enter or Ctrl+C to get back to it.
Is there a way to do this automatically, so that I can get a return value from the script and decide whether it ran OK or failed?
Thanks.
What is probably happening here is that your script is printing binary data to the TTY rather than text to standard output/error, and this is hiding your prompt. You can for example try this:
$ PS1='\$ '
$ (printf "first line\nsecond line\r" > $(tty)) &>> output.log
The second command will result in two lines of output, the second one being "mixed in" with your prompt:
first line
$ cond line
As you can see the cursor is on the "c", but if you start typing the rest of the line is overwritten. What has happened here is the following:
You pressed Enter to run the command, so the cursor moved a line down.
The tty command prints the path to the terminal file, something like "/dev/pts/1". Writing to this file means that the output does not go to standard output (which is usually linked to the terminal) but directly to the terminal.
The subshell (similar to running the command in a shell script) ensures that the first redirect isn't overridden by the second one. So the printf output goes directly to the terminal, and nothing goes to the output log.
The terminal now proceeds to print the printf output, which ends in a carriage return. Carriage return moves the cursor to the start of the line you've already written to, so that is where your prompt appears.
By the way:
&>> redirects both standard output and standard error, contrary to what your filename (stdout.log) suggests.
Use More Quotes™: quote your variable expansions, such as "$USER" and "$SUDOCMD".
I would also recommend reading up on how to put a command in a variable.
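If the main goal is just a clean pass/fail signal, without the command's output ever touching the terminal, here is a rough sketch of one alternative: launch the command from Python and capture both streams. The user name and command below are placeholders standing in for $USER and $SUDOCMD.
# Run the command with its output (including any binary noise) appended to
# stdout.log instead of the TTY, then use the exit status to decide
# success or failure.
import subprocess

user = "someuser"                         # placeholder for $USER
sudocmd = ["some-realtime-program"]       # placeholder for $SUDOCMD

with open("stdout.log", "ab") as log:     # append, like &>>
    result = subprocess.run(
        ["sudo", "-u", user] + sudocmd,
        stdout=log,
        stderr=subprocess.STDOUT,         # merge stderr into the same log
    )

if result.returncode == 0:
    print("script ran ok")
else:
    print("script failed with exit code", result.returncode)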

Windows command prompt: capture output of bash script in one step

I trigger a bash script from the Windows command prompt:
postCloneSetup.sh
It opens another window and then returns. The window it spawned stays open and logs output text.
I want to capture the output from the spawned window and return that to the Windows command prompt.
I would prefer to use something like
$(postCloneSetup.sh)          # Linux syntax for capturing output into the current context
for the Windows command prompt.
I'd prefer not to modify postCloneSetup.sh. I know I could have it write out to a file with
exec &> postCloneSetupLog.log
but then I must wait and manually run
type postCloneSetupLog.log
to see the output in the console. That does not work for integration into a CI engine, which is my goal.
How can I capture the output from the spawned console in one command?

How to take a continuous backup of Linux GNOME terminal logs (commands and their output)?

I want to take a continuous backup of everything printed in my Linux terminal. Is it possible that whenever something is printed in my terminal, it automatically gets written to a text file with a timestamp?
Use the script command, i.e.
script log.txt
at the start of your session. You can also add this to your bash profile so that it starts when you open a terminal etc. You need to use
script -a log.txt
to append. Don't try to cat it or tail it while in the session; you need to press Ctrl-D first, then have a look at what got logged.
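If you specifically need a timestamp on every line, which plain script does not add, here is a rough Python sketch of one alternative. It wraps a single command rather than a whole interactive session, and the command name is a placeholder.
# Run a command, prefix every output line with a timestamp, show it in the
# terminal and append it to log.txt at the same time.
import datetime
import subprocess

proc = subprocess.Popen(
    ["some-command", "--verbose"],        # placeholder for the command to log
    stdout=subprocess.PIPE,
    stderr=subprocess.STDOUT,
    text=True,
)

with open("log.txt", "a") as log:
    for line in proc.stdout:
        stamp = datetime.datetime.now().isoformat(timespec="seconds")
        entry = "[" + stamp + "] " + line
        print(entry, end="")              # still show it in the terminal
        log.write(entry)

proc.wait()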

Linux: using the tee command via ssh

I have written a Fortran program (let's call it program.exe) which does some simulation for me. Via ssh I log into some faraway computers to start runs there, whose results I collect after a few days. To stay up to date on how the program is proceeding, I also want to write the shell output into a text file output.txt (since I can't be logged into the faraway computers all the time). The command should be something like
nohup program.exe | tee output.txt > /dev/null &
This enables me to have a look at output.txt to see the current status even though the program hasn't finished its run yet. The above command works fine on my local machine. I first tried with plain '>' redirection, but the problem was that nothing was written to the text file until the whole program had finished (maybe related to the pipe buffer?). So I used the workaround with 'tee'.
The problem now is that when I log into the computer via ssh (ssh -X user@machine), execute the above command, and look at output.txt with the vi editor, nothing appears until the program has finished. If I omit the 'nohup' and '&', I don't even get any shell output until it has finished. My thought was that it might have something to do with data being buffered by ssh, but I'm rather a Linux newbie. I would be very grateful for any ideas or workarounds!
I would use the screen utility (http://www.oreillynet.com/linux/cmd/cmd.csp?path=s/screen) instead of nohup. That way I can put the session into the detached state (^A ^D), reconnect to the host later, retrieve my screen session (screen -r), and monitor my output as if I had never logged out.
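Another option, reusing pexpect from the question at the top of this page: running program.exe under a pseudo-terminal keeps its standard output line-buffered, so output.txt fills up while the run is still going. This is only a sketch, the paths are placeholders, and you would still need nohup or screen to keep it alive after you log out.
# pexpect allocates a pseudo-terminal, so program.exe believes it is writing
# to a terminal and keeps flushing line by line; everything it prints is
# mirrored into output.txt as it appears.
import pexpect

child = pexpect.spawn('./program.exe', timeout=None)
with open('output.txt', 'wb') as log:
    child.logfile_read = log              # copy all output into the log file
    child.expect(pexpect.EOF)             # block until the simulation finishes
child.close()
print('exit status:', child.exitstatus)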

How to simply ignore the output of a program called as an external command in vim?

I can dump the output of my external command into the main window, I can disable "Press ENTER or type command to continue", and I can simply store the output in a register.
But how do I call an external command in vim (it can be any program: apt-get, etc.) and avoid it creating a buffer window for the output? How do I simply IGNORE the output of an external command I ran? I just want to call the command from vim. The command starts a simple webserver (listening on port 8080), and I have to press Ctrl+C to stop it and move away from the external command's output.
I tried silent before !cmd and it works, but I would also like to stop the process my external command created right after it was started.
EDIT: I changed my mind about the simple webserver. In other situations, just doing what the accepted answer suggests works.
Redirect the output to /dev/null:
:!cmd &> /dev/null
Use silent, as you mention, to get rid of the "Press ENTER or type command to continue" prompt:
:silent !cmd &> /dev/null
Read this page for more on hiding this message.
