How to qlogin to the node that is running a specific job ID - Linux

I have an existing qlogin job like this:
job-ID   prior    name    user     state  submit/start at      queue
---------------------------------------------------------------------------------
3530770  0.50500  QLOGIN  jlsmith  r      10/15/2012 14:02:07  mri.q@compute-0-29.local
The above job was submitted using the standard qlogin command in Linux:
$ qlogin
What I want to do is perform another qlogin so that the new processes run on the same node as the above job ID 3530770.
The idea is that, if it's done correctly, I can see the processes already running under the above job ID in top.
Is there a way to do it?

Either
qlogin -l h=compute-0-29.local
or
qlogin -q "*@compute-0-29.local"
should do the job.
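To check that you landed on the right node (a hypothetical session; the host and user names are taken from the qstat output in the question):
$ qlogin -l h=compute-0-29.local   # request the same execution host
$ top -u jlsmith                   # should now show processes from both qlogin sessions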

Based on talking to some HPC specialists at work and some Google searching on the subject (I also wanted to resume a job ID), it's not really possible if you've already submitted the job. You can qlogin -q <node name> into the node again, but you cannot resume the job on the shell screen.
If you are thinking of starting a new qlogin job, but you would like to be able to resume it at some future point, then you can use screen to do this.
Before you write qlogin into the command line at the front-end node, write screen. It should completely clear the terminal screen.
Now qlogin and put in your job script interactively.
Once your job has started running and you want to leave for a bit, press and hold Ctrl while you press A and D. It should say that your screen was detached and take you back to the front-end node. If you qstat now, you should see your job running.
When you want to resume the job ID (see the running process on the terminal screen), in the front-end node write screen -r. You should be able to see your running process in the terminal again.
Note: if you do this several times and you accumulate multiple screens by accident (happens to me every time), when you screen -r you will get multiple choices instead of automatically resuming the one you want. To try each one out, type screen -r <name of screen listed> one at a time until you find the one you want (detach as specified above). To get rid of the extra screens, write screen -D -r <name>.
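Putting the steps together, a minimal session might look like this (my_job.sh is a hypothetical stand-in for whatever you run interactively):
$ screen          # on the front-end node: start a screen session
$ qlogin          # inside screen: request an interactive session
$ ./my_job.sh     # run your work interactively
(press Ctrl-A, then D, to detach; you land back on the front-end node)
$ qstat           # your qlogin job is still listed as running
$ screen -r       # reattach and see the running process again
$ screen -ls      # if several screens have piled up, list them first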
Hope this helps.

Related

How to Stop & Start Screen Jobs Using Scripts in Linux

I want to use some scripts to stop and start a bunch of programs, each running in a separate linux screen. These programs run continuously and need to be stopped using Ctrl-C.
So I can write some code to stop a screen:
screen -S "mysessionname" -X quit
but do I need to send a Ctrl-C somehow first of all and if so then how?
Also, I can start a new detached screen thus from within a script:
screen -mdS "mysessionname"
but how can I then kick off the job from within this screen using a script? I've tried attaching to the session and then starting the job, all from within a script, but it doesn't seem to work.
Let's hope this can help you: to simulate Ctrl-C in a script, send the matching signal with kill. Ctrl-C sends SIGINT (signal 2), so use kill -INT <pid> (kill -3 sends SIGQUIT, which is what Ctrl-\ sends, and may not stop your program the same way). See the manual of signals: "man 7 signal".
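Since each program already runs in its own screen, you can also have screen deliver the keystroke itself with its stuff command, which types characters into the session as if at the keyboard. A rough sketch, reusing the session name from the question (myprogram is a hypothetical placeholder):
screen -dmS mysessionname                          # start a detached session
screen -S mysessionname -X stuff $'./myprogram\n'  # type the start command into it
screen -S mysessionname -X stuff $'\003'           # later: send the Ctrl-C character (0x03)
sleep 2                                            # give the program time to shut down
screen -S mysessionname -X quit                    # then terminate the session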

Monitor console output

I have a program, let's call it 'foo'.
foo works fine for a random amount of time, during which it announces its progress on the console.
But after some time it stops giving any output. At this point I have to manually close the program (Ctrl-C) and start it again.
I would like to know if there is a way to monitor the console output of a program and, if there is no output for a certain duration, take some action.
The platform is Linux.
I found this on the Internet about a command called watch.
Name
watch - execute a program periodically, showing output fullscreen
Synopsis
watch [-dhvt] [-n <seconds>] [--differences[=cumulative]] [--help] [--interval=<seconds>] [--no-title] [--version] <command>
Description
watch runs command repeatedly, displaying its output (the first screenfull). This allows you to watch the program output change over time. By default, the program is run every 2 seconds; use -n or --interval to specify a different interval.
The -d or --differences flag will highlight the differences between successive updates. The --cumulative option makes highlighting "sticky", presenting a running display of all positions that have ever changed. [...]
watch will run until interrupted.
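Note that watch only displays output; it does not act on silence. To actually take action when foo stops printing, a shell loop with a read timeout is one option. A minimal sketch, assuming foo prints progress to stdout and that 60 seconds of silence means it is stuck:
while true; do
    ./foo | while IFS= read -t 60 -r line; do
        echo "$line"    # pass the progress lines through
    done
    pkill -x foo        # read timed out (or foo exited): kill any leftover foo
    echo "foo stalled or exited; restarting..." >&2
done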

What does the linux make command do?

I created a database and a table inside it. I insert records manually and then run a Python script which reads the records and updates them in the table. I was just trying the make &>1.txt command to redirect the output when the unexpected happened.
Even when I am not running the script and insert records manually from mysql, the script is apparently running somewhere, since it updates the records, but I am not running it in any tab.
What is the reason? The script just starts running by itself!
If it's running in the background (which could have happened if you typed make & >1.txt with a space, not what you showed above) then the jobs command will show it is running, and fg will bring it to the foreground, so you can kill it with Ctrl-C.
(You can also kill it with the jobspec shown by jobs e.g. kill %1 but if you don't know what you're doing it's simpler to bring it to the foreground and interrupt it)
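To illustrate the difference (a hypothetical session; the job number and PID will vary):
$ make & >1.txt      # with the space: make is backgrounded, >1.txt just creates an empty file
[1] 12345
$ jobs               # the stray background make shows up here
[1]+  Running    make &
$ fg %1              # bring it to the foreground, then press Ctrl-C
$ make &>1.txt       # without the space: runs make in the foreground, stdout and stderr into 1.txt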

How to tell if a process has ended?

Besides using top, is there a more precise way of identifying whether the last executed command has finished, if I have to check from a separate session over PuTTY?
pgrep
How about getting it to run another command immediately after, one that sets a flag?
$ do_command ; touch I_FINISHED
Then, when the command finishes, it'll create a file called I_FINISHED that you can look for.
Or do something more sophisticated that writes to a log file, if you're doing it multiple times.
I agree that it may be a faster option in the long run to have your program write to a log file or create a notification. Just put it at the end of the executed code, past the part that you suspect may cause it to hang.
ps -eo cmd
Lists all processes and displays the command line as 'typed' when each command started, so you will be able to tell your script apart from any other Perl processes that are running.
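Putting those suggestions together, a sketch for checking from the second PuTTY session (do_command stands for whatever you actually ran):
$ do_command ; echo "exit status: $?" > I_FINISHED     # in the original session
$ cat I_FINISHED 2>/dev/null || pgrep -fl do_command   # in the second session: finished, or still running?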

Run a command in a shell and keep running the command when you close the session

I am using PuTTY to connect to a remote server. What I want to know is if there is any way to write my commands and allow them to keep running after I close the session with PuTTY. The reason for this is that I do not want to keep the computer on all the time. Is there any way to do this?
Update with the solution
For my question as it is presented, the best solution is to use one of the commands provided, such as nohup, because you do not have to install any additional software. But if you are facing the same problem, use screen: install it and use it. It is amazing.
I have selected the answer of Norman Ramsey as the favourite because it proposes several solutions using both commands and screen. But please check the other answers, especially the one by PEZ, to get an insight into what screen is able to do.
screen! It's the best thing since sliced bread. (Yeah, I know others have already suggested it, but it's so good the whole world should join in and suggest it too.)
screen is like, like, ummmm ... like using VNC or the like to connect to a GUI desktop, but for command shell windows. You can have several shell "windows" open at once in the same screen session. You can do stuff like:
Start a screen session using "screen -dR" (get used to using -dR)
run some commands in one window
press CTRL-A, C to create a new window and open a file there in vim
press CTRL-A, 0 to go back to the first window and issue some command on the file you just edited
CTRL-A, 1 to go back to your vim session
CTRL-A, C for yet another window and maybe do "sudo su -" (because you just happen to need a full root shell)
CTRL-A, 0 and start a background process
CTRL-A, C to create yet a new window, "tail -f" the log for that background process
CTRL-A, d to disconnect your screen then CTRL-D to disconnect from the server
Go on vacation for three weeks
Log on to the server again and issue "screen -dR" to connect to your existing screen session
check the log in the fourth window with CTRL-A, 3 (it's like you've been there watching it all the time)
CTRL-A, 1 to pick up that vim session again
I guess you're starting to get the picture now? =)
It's like magic. I've been using screen for longer than I can remember and I'm still totally amazed with how bloody great it is.
EDIT: Just want to mention there's now also tmux. Very much like screen, but has some unique features, splitting the windows being the most prominent one.
nohup, disown, and screen are all good, but screen is the best because, unlike the other two, it allows you to disconnect from the remote server, keep everything running, and then reconnect later to see what is happening. With nohup and disown you can't resume interacting.
Try using GNU Screen. It allows you to have several shells open at once. And you can disconnect from those running shells (i.e. close the session with PuTTY) and they will keep doing their thing.
What you are looking for is nohup.
See the wiki link for how to use it.
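For example (long_job.sh is a stand-in for your own command):
$ nohup ./long_job.sh > job.log 2>&1 &   # immune to the hangup signal sent when the session closes
$ exit                                   # close the PuTTY session; the job keeps running
$ tail -f job.log                        # after logging back in, follow its output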
screen is the best.
Try:
screen -dmS "MyTail" tail -f /var/log/syslog
This starts the command in the background.
Use screen -ls to list sessions, and screen -r MyTail to enter the session.
If more users need access to the same session, use screen -rx MyTail, and both (or more) users share the session.
If you can't use screen (because, for instance, your SSH session is being programmatically driven), you can also use daemonize to run the program as a daemon.
One way that works well for me is at.
at works like cron, but for a one-time job. I used it today to download a large file without having to keep my session alive.
for example:
$ at 23:55
at> wget http://file.to.download.com/bigfile.iso
at> ^D
You give at a time (in the future) and it gives you a prompt. You enter the commands you want to run at that time and hit Ctrl-D. You can exit your session and it will run the commands at the specified time.
Wikipedia has more info on at.
./command & disown
ssh localhost && ./command && exit
