I am trying to use a Perl script to transfer files from one machine to another within a cron job. However, for security reasons the cron job has to run as an unprivileged user, and if I try to establish a connection as this unprivileged user, Net::SFTP::Foreign always refuses to connect. Here is the part of the script I am having trouble with:
my $host = "hostname";
my %args = (
user => "username",
password => "password",
port => '12345'
);
my $sftp_connection = Net::SFTP::Foreign->new($host, %args);
if( $sftp_connection->error ) {
log_message( "E", "Error " . $sftp_connection->status() . " connecting to " . $host );
die;
}
log_message( "A", "Connected" );
I cannot give a full working example, as that would require a real username and password.
If I execute this script as root, everything works fine; however, if I run it as another user, the connection always fails.
Is there a way to get more diagnostic information? I think there was a way to get more output from the actual sftp process, but I cannot look it up right now as CPAN currently does not work from here.
I previously also tried using Net::SFTP instead of Net::SFTP::Foreign, but the error handling in later parts did not work correctly, so switching to Net::SFTP does not seem a viable option right now.
Use metacpan.org
Debugging:
For debugging purposes you can run ssh in verbose mode by passing it the -v option:
my $sftp = Net::SFTP::Foreign->new($host, more => '-v');
"Module description"
After enabling debugging (thanks to all commenters), I could see that host-key verification failed. So after I added the key to the list of known hosts (by doing a manual ssh to the host once), everything worked fine.
For root the key must have been in the list already, so it had nothing to do with privilege, just with prior usage of the account (I seem to have ssh'ed from root to the other host before).
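In case it helps others hitting this from cron, here is a minimal sketch of two ways to handle the host key for the unprivileged user (the /home/cronuser path and the option shown are illustrative assumptions, not from the original script):
# Option 1: seed the unprivileged user's known_hosts once, non-interactively:
#   ssh-keyscan -p 12345 hostname >> ~/.ssh/known_hosts
# Option 2: point the underlying ssh at an explicit known_hosts file via
# Net::SFTP::Foreign's 'more' option (extra ssh command-line arguments):
my $sftp = Net::SFTP::Foreign->new(
    $host,
    user     => "username",
    password => "password",
    port     => '12345',
    more     => [ -o => 'UserKnownHostsFile=/home/cronuser/.ssh/known_hosts' ],
);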
I have an API which, long story short, runs a custom command on a remote machine (I'll call it custom_command). The app runs it via subprocess with shell=False, and arguments may be programmatically appended to the command based on input to the API. For example:
End-user runs curl https://rest-api-address -d {"add_do_this": true}
Server runs subprocess.Popen(["custom_command", "--do-this"], shell=False)
The end users have asked for the ability to append custom arguments when invoking the command by entering a raw string, e.g.:
curl https://rest-api-address -d {"add_do_this": true, "custom_args": "--custom-arg1 val1 --custom-arg2 val2"}
Server runs subprocess.Popen(["custom_command", "--do-this", "--custom-arg1", "val1", "--custom-arg2", "val2"], shell=False)
I'm reluctant to allow them to enter a raw string because of command injection fears, but if I take the string, shlex.split it, and extend the original command, is there still a risk of injection with shell=False? Or any other security risk? The documentation seems to imply not, and I haven't been able to create one in my testing, but something about this feels risky, and I'm not sure if I'm being too paranoid.
For example, the following attempt did not cause an injection:
https://rest-api-address -d {"add_do_this": true, "custom_args": "--custom-arg1 val1;echo 'HAHA' --custom-arg2 val2"}
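A minimal sketch of the approach in question (custom_command is the hypothetical binary from the question), showing why the semicolon payload above stays inert:
import shlex
import subprocess

base_cmd = ["custom_command", "--do-this"]
custom_args = "--custom-arg1 val1;echo 'HAHA' --custom-arg2 val2"

# shlex.split produces ['--custom-arg1', 'val1;echo', 'HAHA', '--custom-arg2', 'val2'].
# With shell=False no shell ever parses these tokens, so the ';' reaches
# custom_command as literal argument text. The remaining risk is whatever
# custom_command itself does with attacker-chosen arguments.
subprocess.Popen(base_cmd + shlex.split(custom_args), shell=False)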
P4 Server version: P4D/LINUX26X86_64/2013.2/938876 (2014/09/23)
on RHEL6
While running a Perl script using P4Perl, I trap errors with something like this:
if ($p4->ErrorCount()) {
    foreach my $err ($p4->Errors) {
        print "$err\n";
    }
}
These errors pop up nondeterministically; sometimes I get them, sometimes not. If I trap an error with the code above and get:
TCP receive failed.
read: socket:
Connection reset by peer
Is that a real error (given that, apparently, the connection was reset)?
Can I ignore this? Will it run the thing I wanted to run after resetting the connection? Or do I need to rerun that command?
I fear that the problem may be rooted in the fact that the Perl script does a fork earlier on and the $p4 handle I have was sent to the forked process. Could I do something like this to detect and remedy it?
use P4;
our $p4 = P4->new;
# <perl forks off a new process...>
if (!$p4->IsConnected()) {
    $p4->SetCwd($CWD);
    if ($p4->ErrorCount()) { handle_p4_error(); }
    $p4->Connect();
    if ($p4->ErrorCount()) { handle_p4_error(); }
}
# ....etc....
exit;

sub handle_p4_error {
    print "<<<P4 ERROR>>>\n";
    foreach my $err ($p4->Errors) {
        print "$err\n";
    }
    exit;
}
Or will the SetCwd fail for lack of connection?
Could the P4 admin be setting some sort of timeout (kill the connection after x minutes of inactivity)?
Thanks for any help!
Is that a real error (given that, apparently, the connection was reset)?
Yes; the connection with the server was terminated.
Can I ignore this? Will it run the thing I wanted to run after resetting the connection?
No.
Or do I need to rerun that command?
Yes; I think you will also need to reopen the connection first.
$p4->Connect();
if($p4->ErrorCount()) {handle_p4_error();}
That is the general pattern, although if the connection fails you probably want to bail out since nothing you do after that point is going to work (and in most cases it means something is configured wrong).
Or will the SetCwd fail for lack of connection?
No; that's purely a client-side action and does not talk to the server.
Could the P4 admin be setting some sort of timeout (kill the connection after x minutes of inactivity)?
That is one possibility -- does your script hold an idle connection open? That's considered poor manners, since enough of those would amount to a denial of service by preventing any new connections from being opened. Another possibility is that there was some other network failure (your VPN connection went down, etc.).
In my particular case, I believe the problem had to do with the fork in the Perl script. Perhaps the handle was passed to the forked process and this interfered with attempts to reconnect in the main process. I had a similar problem with a DB connection, and the remedy was similar as well.
What seemed to work was to unconditionally disconnect from P4 just before the fork and unconditionally reconnect right after. The forked process didn't need a P4 connection, so this is OK (in my particular case).
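A minimal sketch of that remedy (untested, and assuming the forked child genuinely needs no P4 connection):
use P4;

my $p4 = P4->new;
$p4->Connect() or die "failed to connect to Perforce";

# Unconditionally disconnect just before the fork so the child
# never inherits the P4 socket.
$p4->Disconnect();

my $pid = fork();
die "fork failed: $!" unless defined $pid;

if ($pid == 0) {
    # Child: does its work without any P4 connection.
    exit 0;
}

# Parent: unconditionally reconnect right after the fork.
$p4->Connect() or die "failed to reconnect to Perforce";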
In my case, the problem was that I needed to make an ssh connection to the Perforce server, but I hadn't added the correct ssh key. You can list the ssh keys added to your session with this command:
ssh-add -L
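The missing key can then be added with ssh-add (the path here is illustrative):
ssh-add ~/.ssh/id_rsa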
I am running a .bin file via child_process.spawn(), which takes some inputs from the user before completing its setup. It seems to work fine, taking all the inputs correctly via process.stdin.write("input\n");. However, the same doesn't work when the password is sent via stdin. Directly running the bin file and manually entering the password works. Is there some format I am supposed to set before sending the password via Node.js? I just keep seeing * being logged on stdout continuously and the setup doesn't seem to proceed further. Below is the snippet that I am using:
var child_process = require('child_process');
// Note: this local `process` variable shadows Node's global `process` object.
var process = child_process.spawn('./test.bin');
process.stdout.on('data', function(data) {
    if (data.toString().trim() === 'Username:')
        process.stdin.write("test\n"); // This works
    else if (data.toString().trim() === 'Password:')
        process.stdin.write("password\n"); // This doesn't
});
Any input on this would be helpful. Thanks.
Note that when the bin file is run directly, nothing is displayed while the password is typed, but entering the correct password works. So I am thinking there might be some encoding issue or something like that which I may be missing.
Answering my own question: it seemed to work when I entered the password string one character at a time, like below:
process.stdin.write(password[0]);
process.stdin.write(password[1]);
process.stdin.write(password[2]);
process.stdin.write(password[3]);
process.stdin.write("\n");
I have a basic service check in a Puppet manifest that I want running most of the time, with just:
service{ $service_name :
ensure => "running",
enable => "true",
}
The thing is, there are periods of maintenance during which I would like to ensure Puppet doesn't come along and try to start it back up.
I was thinking of creating a file "no_service_start" in a specified path and doing a 'creates' check, like you can with a guard for exec, but it doesn't look like that's available for the service type.
My next thought was to have the actual service init script check for this file itself and just die early if that guard file exists.
While this works in that it prevents the service from starting, it manifests itself as a big red error in Puppet (as expected). Given that the service not starting is the desired outcome when that file is in place, I'd rather not have an error message present and have to spend time wondering whether it's "legit" or not.
Is there a more "puppet" way this should be implemented though?
Define a fact for when maintenance is happening.
Then put the service definition in an if block based off that fact.
if !$::maintenance {
    service { $service_name:
        ensure => "running",
        enable => "true",
    }
}
Then, when Puppet compiles the catalog, if maintenance == true the service will not be managed and will stay in whatever state it currently happens to be in.
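One simple way to supply such a fact (assuming Facter's external-facts support; the file name is illustrative) is to drop a file on the node for the duration of the maintenance window:
# /etc/facter/facts.d/maintenance.txt (create during maintenance, remove afterwards)
maintenance=true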
I don't really like this answer, but one way to work around Puppet spitting out errors when bailing because of a guard file is to have the init script that does the check exit with a code of 0.
How about putting the check outside? You could do something similar to this: https://stackoverflow.com/a/20552751/1097483, except with your service check inside the if block.
As xiankai said, you can do this on the puppetmaster. If you have a script that returns running or stopped as a string, depending on the current time or anything else, you can write something like:
service{ $service_name :
ensure => generate('/usr/local/bin/maintenanceScript.sh'),
}
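A sketch of what such a script might look like (the guard-file path is illustrative; printf avoids emitting a trailing newline into the generated value):
#!/bin/sh
# /usr/local/bin/maintenanceScript.sh: print the desired state for the service
if [ -f /etc/no_service_start ]; then
    printf 'stopped'
else
    printf 'running'
fi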
I wanted to automate ssh logins. After some research, it seemed like Tcl/Expect was the route to go.
However, my issue is that when interact takes over my terminal, things don't work as expected (pun not intended).
For example, if I resize the terminal, the resize does not "take". Also, sometimes interact is not responsive, and sometimes it just hangs for no reason. I have included my code below. My question about the code is: am I missing something?
Also, is there a better way to do this (perhaps with another scripting language)? I need the terminal to be very responsive, and no different than if I had manually typed ssh at the console.
proc Login {username server password} {
    set prompt "(%|>|\#|\\\$) $"
    spawn /usr/bin/ssh $username@$server
    expect {
        -re "Are you sure you want to continue connecting \\(yes/no\\)\\?" {
            exp_send "yes\r"
            # continue to match statements within this expect {}
            exp_continue
        }
        -nocase "password: " {
            exp_send "$password\r"
            interact
        }
    }
}
The issue you bring up about interact not noticing that the window has changed size arises because, by default, interact only watches the streams of characters. When you resize the window, a SIGWINCH signal is raised, and Expect's default action is to ignore it. You should use Expect's trap command to catch SIGWINCH; then you can do what you want, for example propagating the new size to the other side or taking some other local action.
You can find examples in Expect's sample scripts that do this. And there is an explanation and example in the Expect book of how to handle SIGWINCH.
spawn ssh my-server.example.net
# Propagate window-size changes to the spawned program's pty
trap {
    set rows [stty rows]
    set cols [stty columns]
    stty rows $rows columns $cols < $spawn_out(slave,name)
} WINCH
expect {
    "assword: " {send "Let-Me-In-Please\r"}
    "Last login"
}
interact
As W. Craig Trader mentions in a comment, the "right" way to do this is to use ssh-keygen to generate a key for authentication.
You can avoid the "continue" prompt by setting -o StrictHostKeyChecking=no.
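For completeness, the key-based route looks roughly like this (key type and host name are illustrative):
ssh-keygen -t ed25519                      # generate a key pair
ssh-copy-id username@my-server.example.net # install the public key on the server
ssh username@my-server.example.net         # now logs in without a password prompt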