p4 sync command abruptly terminating with the message "Killed" - perforce

We are having trouble populating the workspace.
The p4 sync command is abruptly terminating with the message "Killed". We did not kill the sync.
Any idea what's going on?
Thanks in advance.

Related

How to handle shell script if the terminal closes abruptly or terminal lost network connection?

I have a big bash script running, but if for some reason the terminal running the script closes, or the SSH connection is lost due to network issues, or the user willingly presses Ctrl+C, how do I capture those scenarios? I want to log a message saying the script exited for one of the above reasons.
To survive terminal closure and network disconnection, use nohup. From man nohup:
NAME
nohup - run a command immune to hangups, with output to a non-tty
Ctrl+C won't be available to the user anymore, but the script can still be killed with kill. Then use trap to catch the signal.
You can also use screen or tmux.
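Putting nohup and trap together, a minimal logging sketch might look like this (the log path and messages are illustrative, not from the question):

```shell
#!/bin/bash
# Illustrative log path -- adjust as needed.
LOGFILE=/tmp/myscript.log

log_exit() {
    echo "$(date): script exiting (${1:-normal exit})" >> "$LOGFILE"
}

# SIGHUP: terminal closed or SSH connection lost (when not under nohup).
trap 'log_exit "SIGHUP: terminal closed or connection lost"; exit 129' HUP
# SIGINT: user pressed Ctrl+C.
trap 'log_exit "SIGINT: Ctrl+C"; exit 130' INT
# SIGTERM: killed with a plain kill.
trap 'log_exit "SIGTERM: killed"; exit 143' TERM

# ... long-running work goes here ...
```

Started under nohup, the HUP trap becomes irrelevant (the shell never receives SIGHUP), but the INT and TERM logging still fires when the script is killed.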

Continuing an operation if terminal closes

I need to execute this command in my terminal, operating on a server. I need to modify it so that if I lose the connection and the terminal closes, the running operation is not stopped but continues to its conclusion.
for file in 4458*/payoffTable*; do cp "$file" /storage/scratch2/id0056/DMM_BASTIAN/M_7/N_15_P_05/Evoluzioni; done &
I know there is a command called "nohup" that allows that, but if I prepend it to the previous command I get an error.
Also, if anyone is able to give a solution to this problem, I would like to know how to monitor the background processes running on a remote server. Thanks for the help.
When you start a command with nohup in the background, the shell prints the PID (process ID), the identifier of the process.
By opening a terminal and running the command
ps -ef | grep PID
the console lists your process created above, if it is still running. If the process is not listed, the execution has completed.
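The error when prepending nohup comes from the fact that nohup runs a program, not shell syntax such as a for loop. One workaround, sketched here with the paths from the question, is to hand the whole loop to a child shell:

```shell
# nohup runs a program, not shell keywords like 'for', hence the error.
# Hand the whole loop to a child bash instead. Paths are the ones from
# the question; copy.log is just an arbitrary name for the output.
nohup bash -c 'for file in 4458*/payoffTable*; do
    cp "$file" /storage/scratch2/id0056/DMM_BASTIAN/M_7/N_15_P_05/Evoluzioni
done' > copy.log 2>&1 &

echo "started with PID $!"   # note this PID for the ps -ef | grep check later
```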

Will the script on remote server keep running after ssh timeout?

I'm running a script on a remote server using ssh. The task is downloading images to the remote server. I'm wondering: will the script keep running after I log out of the ssh session? Why? Could anyone explain in detail?
If you want the script to keep running after logout, you need to detach it from the terminal and run it in the background:
nohup ./script.sh &
If you close the terminal in which you launched a process, the process receives SIGHUP, and unless it handles that signal it will be terminated. HUP means "hang up", as in a phone call.
The nohup command can be used to start a process and prevent SIGHUP signals from being sent to it. An alternative is the bash builtin disown, which does basically the same:
./script.sh &
disown %1
Note that the 1 represents the job ID. If you are running multiple processes in the background, you need to specify the correct job ID.
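A quick local sketch of disown and job IDs (the sleep commands are just stand-ins for real work):

```shell
# Two stand-in background jobs.
sleep 100 & p1=$!
sleep 200 & p2=$!

jobs -l       # lists both jobs with their job IDs and PIDs
disown %2     # job 2 leaves the job table and won't receive SIGHUP
jobs          # now only job %1 is listed
```

Both processes keep running; disown only changes whether the shell will forward SIGHUP to them when it exits.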

bash remote command with sudo does not work

I got this question answered here: bash - running remote script from local machine, about how I could run a remote command involving sudo. I thought it was working, because I got a message saying my server was restarting successfully, so I accepted the answer. But today I saw that the server was actually not restarting; it was being killed for some reason.
Things I did, when I ran this command first time:
ssh user@host.com -t 'sudo /etc/init.d/script restart' # -t can also go just after ssh, same thing
I got this message:
Restarting openerp-server: Stopping openerp-server: openerp-server.
Starting openerp-server: openerp-server.
Connection to host.com closed.
So yeah, then I thought everything was good. But when I actually went to check whether the process had restarted, I saw it was not running.
And when I tried to restart again (with the same command), I got this message:
Restarting openerp-server: Stopping openerp-server: start-stop-daemon: warning: failed to kill 25205: No such process
openerp-server.
Starting openerp-server: openerp-server.
Connection to host.com closed.
But if I use the same command while connected directly to the remote server, everything works fine and my script restarts the server normally.
By "the same command" I mean this:
ssh user@host.com
sudo /etc/init.d/script restart
So what the heck is going on here?
It seems likely that the script that starts it up is doing something naughty that relies on the TTY staying alive briefly after the command returns. It is probably the immediate exit that is causing trouble. If it starts a background job that's attached to the shell inside the TTY, and detaches shortly afterwards, then closing the connection might kill the shell and kill the job inside it. That would explain why, when you restart, the script thinks there's a process number for the service, but then can't find it: maybe the process number gets logged somewhere, but then the process gets killed off before it can get going.
To confirm, you might try a couple of things. Log in remotely, run your sudo command, then exit immediately:
ssh user@host.com
sudo /etc/init.d/script restart; exit
and see if this immediate exit also hits the same problem.
Also try
ssh user@host.com -t 'sudo /etc/init.d/script restart; sleep 30'
to force it to wait for a bit, and see if that gets you anywhere.
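If the immediate exit does turn out to be the culprit, detaching the restart from the ssh TTY is another thing to try. This is a sketch with a placeholder host, using the same init script path as the question:

```shell
# Placeholder host; substitute your own. nohup plus the redirects detach
# the init script from the ssh TTY, so whatever background daemon it
# spawns no longer dies when the connection closes.
remote_restart() {
    ssh -t "$1" 'sudo nohup /etc/init.d/script restart > /tmp/restart.log 2>&1'
}

# usage: remote_restart user@host.com
```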

How to know from a bash script if the user abruptly closes ssh session

I have a bash script that acts as the default shell for a user logging in through ssh.
It provides a menu with several options, one of which is sending a file using netcat.
The netcat on the embedded Linux I'm using lacks the -w option, so if the user closes the ssh connection without ever sending the file, the netcat command waits forever.
I need to know if the user abruptly closes the connection so the script can kill the netcat command and exit gracefully.
Things I've tried so far:
Trapping SIGHUP: it is not issued. The only signal I could find being issued is SIGCONT, but I don't think that's reliable or portable.
Playing with the -t option of the read command to detect a closed stdin: this would work if not for a silly bug in the embedded read builtin (it only times out on the first invocation).
Edit:
I'll try to answer the questions in the comments and explain the situation further.
The code I have is:
nc -l -p 7576 > /dev/null 2>> $LOGFILE < $TMP_DIR/$BACKUP_FILE &
wait
I'm ignoring SIGINT and SIGTSTP, but I've tried trapping all the signals, and the only one received is SIGCONT.
Reading the bash man page, I found that SIGHUP should be sent to both the script and netcat, and that SIGCONT is sent to stopped jobs to ensure they receive the SIGHUP.
I guess the wait makes the script count as stopped, so it receives the SIGCONT, but at the same time the wait somehow eats the SIGHUP.
So I tried changing the wait to a sleep, and then both SIGHUP and SIGCONT are received.
The question is: why is the wait blocking the SIGHUP?
Edit 2: Solved
I solved it by polling for a closed stdin with the read builtin using the -t option. To work around the bug in the embedded read builtin, I spawn it in a new bash (bash -c "read -t 3 dummy").
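A sketch of that solution, using the variables from the question (the defaults below exist only to make the snippet self-contained):

```shell
# LOGFILE, TMP_DIR and BACKUP_FILE come from the question; the defaults
# here only make this sketch self-contained.
LOGFILE=${LOGFILE:-/tmp/send.log}
TMP_DIR=${TMP_DIR:-/tmp}
BACKUP_FILE=${BACKUP_FILE:-backup.bin}
touch "$TMP_DIR/$BACKUP_FILE"

nc -l -p 7576 > /dev/null 2>> "$LOGFILE" < "$TMP_DIR/$BACKUP_FILE" &
NC_PID=$!

# Poll stdin instead of a bare 'wait'. The child bash works around the
# embedded read builtin that only times out on its first invocation.
while kill -0 "$NC_PID" 2>/dev/null; do
    bash -c 'read -t 3 dummy'
    status=$?
    # bash's read returns >128 on timeout; 1..128 means stdin is closed
    # (the ssh session is gone), so kill netcat and bail out.
    if [ "$status" -ne 0 ] && [ "$status" -le 128 ]; then
        kill "$NC_PID" 2>/dev/null
        break
    fi
done
```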
Does the parent PID change? If so, you could look up the parent in the process list and make sure the process name is correct.
I have written similar applications. It would be helpful to see more of your shell code. I think there may be a way of structuring your overall program differently that would address this issue.
