Scp as background job? - linux

Here is the problem:
I must move some files from one host to another, so I use scp for
it.
But I need to run it without blocking the console, so I should use scp &. However, my job gets killed after I disconnect (I've heard this is caused by the HUP signal), so I found some tricks to work around that.
But I also want to check the progress bar again later, and none of those tricks help, because jobs only displays jobs belonging to the current session.
So how do I fix my problem?
P.S. Sorry for my English.

I'm not sure it's what you want, but I'll suggest GNU Screen. It allows you to run a program so that when you log out, the program continues executing in the background. Later you can log back in and resume interacting with the program.

I guess you want the ability to detach from, and later reattach to, your terminal. The tool for that is called screen.
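If screen is unavailable, the nohup-and-redirection approach the asker alludes to looks roughly like this. This is a sketch: `user@remote:/dest/` and `bigfile.tar` are placeholders, and a local `cp` stands in for `scp` so the pattern can be tried without a remote host:

```shell
# Hypothetical transfer that survives logout: detach stdin, ignore SIGHUP,
# and keep the output in a log you can inspect later.
# nohup scp bigfile.tar user@remote:/dest/ < /dev/null > "$HOME/scp.log" 2>&1 &

# The same pattern exercised locally, with cp standing in for scp:
printf 'payload\n' > src.txt
nohup cp src.txt dst.txt < /dev/null > transfer.log 2>&1 &
wait $!              # in real use you would log out here instead of waiting
cat dst.txt          # prints: payload
```

Note that scp only draws its progress meter when its output is a terminal, so with this pattern you lose the progress bar; that is exactly why the answers above recommend screen for this case.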


Perl Interacting With Terminal

I'm not sure if what I am trying to do is possible, and I'm fairly new to Perl, so I'd appreciate any help.
My Perl application will use system() to issue commands to Perforce that will create a devel/workspace, integrate, sync, etc. But obviously I can't integrate until my devel is created, and I can't sync unless some condition is met, and so on. Also, when my code is synced and I run it, I'm not sure how to tell whether it has finished either.
So I'm wondering how to say (slack pseudo code):
system(create my devel);
wait until devel created
system(integrate blah);
wait until integration complete
system (launch test);
wait until test complete;
etc...
I looked at other questions and saw the possibility of using forks, but I am not familiar with how to code that in this context.
Thanks
Normally, the system command in Perl waits until the command you asked it to run has completed. This works exactly as if you had entered the command at a shell prompt: the program runs, and the shell prompt reappears only once the command has finished whatever it is doing.
Perforce has a free Perl module downloadable from http://www.perforce.com/downloads/Perforce/20-User?qt-perforce_downloads_step_3=6#qt-perforce_downloads_step_3#52, with documentation at http://www.perforce.com/perforce/r12.1/manuals/p4script/02_perl.html#1047731.
But it sounds like you need more experience with Perl multiprogramming and IPC. Have you read the Camel book?

Linux process in background - "Stopped" in jobs?

I'm currently running a process with the & sign.
$ example &
However (please note I'm a newbie to Linux), I realised that almost immediately after running this command, I get a note that my process received a stop signal. If I do
$ jobs
I'll get the list with my example process and a little note, "Stopped". Is it really stopped, not working in the background at all? How exactly does this work? I'm getting mixed information from the Internet.
In Linux and other Unix systems, a job running in the background that tries to read from its controlling terminal (a.k.a. the window it was run in) via stdin will be sent a SIGTTIN signal, which by default stops the program completely, pending the user bringing it to the foreground (fg %job or similar) so input can actually be given to it. To avoid the program being paused in this way, you can either:
Make sure the program's stdin is no longer associated with the terminal, by redirecting it either from a file with appropriate contents for the program to read, or from /dev/null if it really doesn't need input - e.g. myprogram < /dev/null &.
Exit the terminal after starting the program, which severs the association with the program's stdin. But this causes a SIGHUP to be delivered to the program (meaning the input/output channel experienced a "hangup"), which normally terminates it; that can be avoided by using nohup - e.g. nohup myprogram &.
If you are at all interested in capturing the output of the program, this is probably the best option, as it prevents both of the above signals (as well as a couple of others) and saves the output for you to look at, to determine whether there are any issues with the program's execution:
nohup myprogram < /dev/null > ${HOME}/myprogram.log 2>&1 &
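A quick illustration of the stdin-redirection option: with stdin coming from /dev/null, a background job that would otherwise be stopped by SIGTTIN simply reads end-of-file and runs to completion:

```shell
# In an interactive shell, a bare `cat &` would be stopped by SIGTTIN when it
# tried to read the terminal; with stdin from /dev/null it just sees EOF.
cat < /dev/null &
wait $!
echo "cat exited with status $?"   # prints: cat exited with status 0
```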
Yes, it really is stopped and no longer working in the background. To bring it back to life, type fg %job_number.
From what I can gather: background jobs are blocked from reading the user's terminal. When one tries to do so, it is suspended until the user brings it to the foreground and provides some input. "Reading from the user's terminal" can mean either directly trying to read from the terminal or changing terminal settings.
Normally that is what you want, but sometimes programs read from the terminal and/or change terminal settings not because they need user input to continue but because they want to check if the user is trying to provide input.
http://curiousthing.org/sigttin-sigttou-deep-dive-linux has the gory technical details.
Just enter fg, which brings the stopped job back to the foreground and resolves the "stopped jobs" error you get when you then try to exit.

How can I know that a cron job has started or finished?

I have put a long-running Python program in a cron job on a server, so that I can turn off my computer without interrupting the job.
Now I would like to know whether the job started correctly, whether it has finished, whether it stopped at some point for some reason, and so on. How can I do that?
You could have it write to a logfile, but as it sounds like that isn't possible, you could have cron email you the output of the job instead: try adding MAILTO=you@example.com to your crontab. You should also find evidence of cron activity in your system logfiles (try grep cron /var/log/* to find likely logs on your system).
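For example, a crontab entry combining the logfile and MAILTO suggestions might look like this (the paths and address are hypothetical):

```
# m h dom mon dow  command
MAILTO=you@example.com
0 2 * * * /usr/bin/python3 /home/you/longjob.py >> /home/you/longjob.log 2>&1
```

With stdout and stderr appended to the log, the mail only fires for output cron itself sees, but the log gives you a permanent record of each run.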
If you are using cron simply as a way to run processes after you disconnect from a server, consider using screen:
type screen and press return
set your script running
type Ctrl+A Ctrl+D to detach from the screen
The process continues running even if you log off. Later on, simply run
screen -r
and you will be reattached, allowing you to review the script's output.
Why not have that cron job write to a log file? Also, just do a ps before shutdown.

What do you do with badly behaving 3rd party processes in Linux?

Anytime I have a badly behaving process (pegging CPU, or frozen, or otherwise acting strangely) I generally kill it, restart it and hope it doesn't happen again.
If I wanted to explore/understand the problem (i.e. debug someone else's broken program as it's running) what are my options?
I know (generally) of things like strace, lsof, dmesg, etc. But I'm not really sure of the best way start poking around productively.
Does anyone have a systematic approach for getting to the bottom of these issues? Or general suggestions? Or is killing & restarting really the best one can do?
Thanks.
If you have debugging symbols for the program in question installed, you can attach to it with gdb and look at what is going wrong. Start gdb, type attach pid, where pid is the process ID of the program in question (you can find it via top or ps). Then press Ctrl-C to stop it. Typing backtrace gives you the call stack: it tells you which line of code is currently executing and which functions called the currently running one.

Linux - communicating with a process? rejoin process in action?

I feel somewhat dumb asking this, but I'm relatively new to Linux (more in terms of experience than time), and one thing I've always wondered is whether I can 'rejoin' (my own term) a process while it's running.
For example, if I set a game server or eggdrop IRC bot to run in the background, is there a command I can use to view that process in action and view all the output it delivers to the console?
I'm not talking about just viewing the process using the 'top' command, but actually linking to it as if I just ran it from the command line.
Thanks.
Debuggers can "attach" to running processes, but you might be better off running your program in screen (which lets you detach from and reattach to the terminal in a fairly natural way).
There might be some good stuff in:
Redirect STDERR / STDOUT of a process AFTER it’s been started, using command line?
Can you be more specific? Are you just talking about backgrounding a process in the current session, then putting it back in the foreground?
E.g.:
doLongTask &
# Later
fg %3
3 in this example is the job number of this instance of doLongTask. You can see all running jobs with:
jobs
But note this will still only let you see what's being output to the console, i.e. stdout and stderr, minus any redirections.
The simple answer is:
>> ./runmyserver
<press ctrl-z>
>> bg
>> ...do something else ...
>> fg
You can also start in the background with:
>> ./runmyserver &
For more complicated stuff, like disconnecting the server from your console session (so it's still running when you log out), you really want screen. Maybe beg them for it; it isn't really a security risk, and it's a useful program to have around.
Also note that Ctrl-Z will actually pause your server until you type bg, so if people are playing on it, the server might skip a beat; best to do it quickly.
Finally, many game servers have a remote login for this kind of thing, which would solve many of these issues. Make sure your game and host don't already support this before looking for alternatives.
EDIT: Re-read your question. It sounds like you could at least capture the output using a redirect to a file. This won't let you provide more input, though:
./runmyserver > log.txt
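Building on that redirect, you can watch the log as it grows with tail -f; here a one-line script stands in for the server:

```shell
# Stand-in for ./runmyserver: writes to stdout and stderr, both captured.
sh -c 'echo "server started"; echo "a warning" >&2' > log.txt 2>&1 &
wait $!
tail -n 2 log.txt   # shows the last lines; use `tail -f log.txt` to follow live
```

The 2>&1 sends stderr to the same log, so error messages aren't lost; tail -f keeps following the file as the server appends to it, which is as close as redirection gets to "rejoining" the console output.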
If you know ahead of time that you want to do this, use screen(1) and run your server in the foreground in a screen session. You will be able to detach from your screen session and have the process keep running. You can then later re-attach your screen session and view any output it has made since, up to the size of the scrollback buffer.
