I created a Perl module, to be used in many Perl scripts, that uses Net::SSH::Expect to log in to a host.
package myRoutines;

use v5.22;
use strict;
use warnings;
use Net::SSH::Expect;
use Exporter qw(import);

our @EXPORT_OK = qw(my_login);

sub my_login {
    my $user         = 'xxxx';
    my $port         = '10000';
    my $passwd       = 'XYZ';
    my $adminServer  = 'myServer';
    my $rootpassword = 'ABCDEF';
    my ( $pName, $vName ) = @_;

    our $ssh = Net::SSH::Expect->new(
        host                         => "$adminServer",
        ssh_option                   => "-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null",
        user                         => "$user",
        password                     => "$passwd",
        port                         => "$port",
        raw_pty                      => 1,
        restart_timeout_upon_receive => 1,
        log_file                     => "/var/tmp/clilog_$pName$vName"
    );

    eval {
        my $login_output = $ssh->login();
        if ( $login_output !~ />/ ) {
            die "Login has failed.
Login output was $login_output";
        }
    };

    return $ssh;
}

1;
The scripts will do:
use myRoutines qw(my_login);
our ( $ssh, $pName, $vName );
$pName = 'abc';
$vName = '123';
$ssh = my_login( $pName, $vName );
$ssh->send( "some command\r" );
This all works if I'm in the directory that the script and module are in. If I'm in any other directory, the new call works but the call to $ssh->send does nothing.
I've tried adding to my script:
use lib '/some/dir';
(where the .pm file resides) to force it to find the module, and that seems to work when I'm not in the directory where the .pm file resides.
I've tried to:
use File::chdir;
$CWD = '/some/dir';
and again, the login seems to work but the next send does nothing. So I'm at a loss as to what might be happening and would like some advice.
Update 20170908:
Upon further playing and following the suggestions made, I've done the following and it now works:
removed the eval block, as it was unnecessary.
changed the our variables to my variables.
removed the unnecessary double quotes around variables.
set the following in the script:
use File::Basename;
use Cwd qw( abs_path );
chdir "/some/dir";
use lib dirname(abs_path($0));
my $scriptName = basename($0);
use myRoutines qw(my_login);
my $pName = substr($scriptName, 0, -3); # cut the .pl off the end of the script name to pass it as the pName
Using chdir to change to the directory where my .pl script and .pm file reside, and then setting the lib path, seems to be working as it should.
Borodin, I'm not sure I understand your meaning when you say to object-orient the module and ...., but I would be interested in hearing more to better understand.
If you don't want to hardcode the directory, you can use
use FindBin qw( $RealBin );
use lib $RealBin;
($RealBin is the path to the script. Adjust as needed if myRoutines.pm is in a subdir.)
The simple and easy way would be to place your .pm file in the
/usr/lib64/perl5/
directory; then you shouldn't have any problems.
But that's still not a perfect solution: you should be able to put the .pm file wherever you want.
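If you want to check which directories your perl searches by default, you can print @INC; placing the .pm file in any of those will work:
perl -e 'print "$_\n" for @INC'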
My objective is to write a CLI in TypeScript/Node.js that uses --experimental-specifier-resolution=node, built with yargs, with support for autocompletion.
To make this work, I use this entry.sh file, thanks to this helpful SO answer (and the bin: {eddy: "./entry.sh"} option in package.json points to this file):
#!/usr/bin/env bash
full_path="$(realpath "$0")"
dir_path="$(dirname "$full_path")"
script_path="$dir_path/dist/src/cli/entry.js"
# Path is made thanks to: https://code-maven.com/bash-shell-relative-path
# Combined with knowledge from: https://stackoverflow.com/questions/68111434/how-to-run-node-js-cli-with-experimental-specifier-resolution-node
/usr/bin/env node --experimental-specifier-resolution=node "$script_path" "$@"
This works great, and I can use the CLI. However, autocompletion does not work. According to yargs, I should be able to get autocompletion by appending the output of ./entry.sh completion to ~/.bashrc. However, this does not seem to work.
Output from ./entry.sh completion:
###-begin-entry.js-completions-###
#
# yargs command completion script
#
# Installation: ./dist/src/cli/entry.js completion >> ~/.bashrc
# or ./dist/src/cli/entry.js completion >> ~/.bash_profile on OSX.
#
_entry.js_yargs_completions()
{
    local cur_word args type_list

    cur_word="${COMP_WORDS[COMP_CWORD]}"
    args=("${COMP_WORDS[@]}")

    # ask yargs to generate completions.
    type_list=$(./dist/src/cli/entry.js --get-yargs-completions "${args[@]}")

    COMPREPLY=( $(compgen -W "${type_list}" -- ${cur_word}) )

    # if no match was found, fall back to filename completion
    if [ ${#COMPREPLY[@]} -eq 0 ]; then
      COMPREPLY=()
    fi

    return 0
}
complete -o default -F _entry.js_yargs_completions entry.js
###-end-entry.js-completions-###
I tried modifying the completion output, but I don't really understand bash - just yet 😅
Update
Working on a reproducible example (WIP).
Repo is here.
Currently, one of the big differences is that npm link does not behave the same in the two environments. Only in the repo where I'm trying to reproduce the issue does /usr/local/share/npm-global/bin/ actually get updated. I'm currently investigating this.
You can try setting scriptName in your entry.js file to the name of your wrapper script; that may force the completion script to be generated using that name. I haven't tried it, but looking at the source code of yargs, it appears the $0 parameter can be altered using scriptName, which in turn affects how the completion-generation function generates the completion code:
In yargs-factory.ts:
scriptName(scriptName: string): YargsInstance {
  this.customScriptName = true;
  this.$0 = scriptName;
  return this;
}
In completion.ts:
generateCompletionScript($0: string, cmd: string): string {
  let script = this.zshShell
    ? templates.completionZshTemplate
    : templates.completionShTemplate;
  const name = this.shim.path.basename($0);
  // add ./ to applications not yet installed as bin.
  if ($0.match(/\.js$/)) $0 = `./${$0}`;
  script = script.replace(/{{app_name}}/g, name);
  script = script.replace(/{{completion_command}}/g, cmd);
  return script.replace(/{{app_path}}/g, $0);
}
Also, I'm not sure how the "bin" configuration works, but with scriptName you might no longer need a wrapper.
Make sure the version of yargs you use supports this.
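For illustration, here is a minimal, untested sketch of what that could look like in entry.js (the command is a placeholder, and it assumes a yargs version that supports scriptName):
// entry.js (sketch): tell yargs the wrapper's name so the generated
// completion script registers completions for "entry.sh" instead of "entry.js"
import yargs from 'yargs'
import { hideBin } from 'yargs/helpers'

yargs(hideBin(process.argv))
  .scriptName('entry.sh')
  .command('hello', 'print a greeting', () => {}, () => console.log('hello'))
  .completion()
  .help()
  .parse()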
As a side note, I thought about suggesting that you modify the generated completion script directly, but besides being hackish, that might still leave the script name unrecognized during completion. Anyhow, I just looked at the right approach first.
The modified version would look like this:
_entry.sh_yargs_completions()
{
    local cur_word args type_list

    cur_word="${COMP_WORDS[COMP_CWORD]}"
    args=("${COMP_WORDS[@]}")

    # ask yargs to generate completions.
    type_list=$(/path/to/entry.sh --get-yargs-completions "${args[@]}")

    COMPREPLY=( $(compgen -W "${type_list}" -- ${cur_word}) )

    # if no match was found, fall back to filename completion
    if [ ${#COMPREPLY[@]} -eq 0 ]; then
      COMPREPLY=()
    fi

    return 0
}
complete -o default -F _entry.sh_yargs_completions entry.sh
Another note: If the script name needs to be dynamic based on the name of its caller, you can make it identifiable through an environment variable, so in entry.sh you can declare it like this:
export ENTRY_JS_SCRIPT_NAME=entry.sh
node ...
Then somewhere in entry.js, you can access the variable like this:
process.env.ENTRY_JS_SCRIPT_NAME
Maybe even just specify $0 or ${0##*/}, whichever works:
export ENTRY_JS_SCRIPT_NAME=$0
Thanks, everyone. The solution I ended up with was twofold:
I added a scriptName to the yargs config
In the wrapping .sh file, I used which node to locate the Node binary and properly set the --experimental-specifier-resolution=node flag.
test-cli.js
#!/usr/bin/env node
import yargs from 'yargs'
import { hideBin } from 'yargs/helpers'
import { someOtherModule } from './some-other-module';
someOtherModule();
yargs(hideBin(process.argv))
  .command('curl <url>', 'fetch the contents of the URL', () => {}, (argv) => {
    console.info(argv)
  })
  .command('curlAgain <url>', 'fetch the contents of the URL', () => {}, (argv) => {
    console.info(argv)
  })
  .demandCommand(1)
  .help()
  .completion()
  .scriptName('eddy') // <== Added thanks to konsolebox
  .parse()
test-cli.sh
#!/usr/bin/env bash
full_path="$(realpath "$0")"
dir_path="$(dirname "$full_path")"
script_path="$dir_path/test-cli.js"
node_path="$(which node)" # <== Makes it work on github codespaces 😅
"$node_path" --experimental-specifier-resolution=node "$script_path" "$@"
package.json
{
  "name": "lets-reproduce",
  "type": "module",
  "dependencies": {
    "yargs": "^17.3.1"
  },
  "bin": {
    "eddy": "./test-cli.sh"
  }
}
Steps to install autocompletion:
run npm link
run eddy completion >> ~/.bashrc
source ~/.bashrc
profit 😅🔥
Imagine this contrived scenario:
./main.sh
source ./config.sh
SOME_CONFIG="${SOME_CONFIG}bar"
./output.sh
./config.sh
export SOME_CONFIG='foo'
./output.sh
echo "Config is: ${SOME_CONFIG}"
I am trying to replace ./main.sh with a Node.js-powered ./main.js WITHOUT replacing the other shell files. The exported ./config.sh functions/variables must also be fully available to ./output.sh.
Here is a NON-working ./main.js; I have written it for the sole purpose of explaining what I want the final code to look like:
const terminal = require('child_process').spawn('bash')
terminal.stdin.write('source ./config.sh\n')
process.env.SOME_CONFIG = `${process.env.SOME_CONFIG}bar` // this must be done in JS
terminal.stdin.write('./output.sh\n') // this must be able to access all exported functions/variables in config.sh, including the JS modified SOME_CONFIG
How can I achieve this? Ideally, if there's a library that can do this, I'd prefer that.
While this doesn't fully answer my question, it solves the contrived problem I had at hand and could help others if need be.
In general, if bash scripts communicate with each other via environment variables (e.g. using export/source), this approach will allow you to start moving bash code to Node.js.
./main.js
const child_process = require("child_process");
const os = require("os");

// Source config.sh and print the environment variables, including SOME_CONFIG
const sourcedConfig = child_process
  .execSync(". ./config.sh > /dev/null 2>&1 && env")
  .toString();

// Convert ALL sourced environment variables into an object
const sourcedEnvVars = sourcedConfig
  .split(os.EOL)
  .map((line) => ({
    env: `${line.substr(0, line.indexOf("="))}`,
    val: `${line.substr(line.indexOf("=") + 1)}`,
  }))
  .reduce((envVarObject, envVarEntry) => {
    envVarObject[envVarEntry.env] = envVarEntry.val;
    return envVarObject;
  }, {});

// Make changes
sourcedEnvVars["SOME_CONFIG"] = `${sourcedEnvVars["SOME_CONFIG"]}bar`;

// Run output.sh and pass in the environment variables we got from the previous command
child_process.execSync("./output.sh", {
  env: sourcedEnvVars,
  stdio: "inherit",
});
I'm calling an async Node.js function that uses prompts (https://www.npmjs.com/package/prompts).
Basically, the user is presented with options, and after they select one, I want the selection output to a variable in bash. I cannot get this to work: it either hangs, or outputs everything, since prompts is a user interface that writes to stdout.
//nodefunc.js
async function run() {
  await blahhhh;
  return result; // text string
}
console.log(run()); // note: this prints a pending Promise, not the resolved string
# bash
x=$(node nodefunc.js)
echo "$x"
Unless you can ensure nothing else in the node script will print to stdout, you will need a different approach.
I'd suggest having the node script write to a temporary file, and have the bash script read the output from there.
Something like this perhaps:
const fs = require('fs');
const outputString = 'I am output';
fs.writeFileSync('/tmp/node_output.txt', outputString);
node nodefunc.js
# Assuming the node script ran successfully, read the output file
x=$(</tmp/node_output.txt)
echo "$x"
# Optionally, cleanup the tmp file
rm /tmp/node_output.txt
I am creating a Perl script in which I have to ssh to multiple servers from the same script and run the same commands on all of these remote servers.
Right now I am using an "if" block and calling each of the other servers from this script to run commands on them.
I want to create a function with the set of commands that I need to run on these different servers.
if ($random_number == 1) {
    use Net::SSH::Perl;
    use lib qw(/usr/share/perl5/);
    my $hostname = "10.*.*.*";
    my $username = "root";
    my $password = "root\@123";
    my $cmd1 = "ls /home/ashish/";
    my $cmd2 = "netstat -na | grep *.*.*.*";
    my $ssh = Net::SSH::Perl->new($hostname);
    $ssh->login("$username", "$password");
    my ($stdout, $stderr, $exit) = $ssh->cmd("$cmd1 && $cmd2");
    print $stdout;
}
The commands above, after the if, need to be repeated for different servers; I want to use a function call instead.
Start with general programming tutorials, then do it like this:
use Net::SSH::Perl;
use strict;
use warnings;
my @servers = (
    {
        hostname => 'somehost1',
        username => 'someuser1',
        password => 'somepass1',
        commands => ['somecmd11', 'somecmd12'],
    },
    {
        hostname => 'somehost2',
        username => 'someuser2',
        password => 'somepass2',
        commands => ['somecmd21', 'somecmd22'],
    },
    # ...
);
do_something_on_remote_servers_one_by_one( @servers );
exit(0);
sub do_something_on_remote_servers_one_by_one {
    my (@servers) = @_;
    foreach my $server (@servers) {
        my $ssh = Net::SSH::Perl->new($server->{hostname});
        $ssh->login($server->{username}, $server->{password});
        my $cmd_string = join(' && ', @{ $server->{commands} });
        my ($stdout, $stderr, $exit) = $ssh->cmd($cmd_string);
        print $stdout;
    }
}
After that, you can think about executing the commands in parallel.
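For example, here is a minimal sketch of a parallel variant using the CPAN module Parallel::ForkManager (not part of the original code; the limit of 5 concurrent children is an arbitrary choice):
use Parallel::ForkManager;

sub do_something_on_remote_servers_in_parallel {
    my (@servers) = @_;
    my $pm = Parallel::ForkManager->new(5);   # run at most 5 children at once
    foreach my $server (@servers) {
        $pm->start and next;                  # parent: fork a child, move on
        my $ssh = Net::SSH::Perl->new($server->{hostname});
        $ssh->login($server->{username}, $server->{password});
        my ($stdout, $stderr, $exit) = $ssh->cmd(join(' && ', @{ $server->{commands} }));
        print $stdout;
        $pm->finish;                          # child exits here
    }
    $pm->wait_all_children;
}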
I have a Linux (openSUSE 10.X) box with an SFTP service on it.
When someone puts a file there, I have to move it to another directory with a script. I do not want to write a cron job. Is there an event or something I can check to see whether they have sent a file?
You can write a C application and hook into inotify events.
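You don't necessarily need C, though; as a sketch, here is the same idea with the CPAN module Linux::Inotify2 (the directory paths are hypothetical):
use strict;
use warnings;
use Linux::Inotify2;
use File::Basename ();

my $watch_dir = '/srv/sftp/incoming';   # hypothetical upload directory
my $dest_dir  = '/srv/processed';       # hypothetical destination

my $inotify = Linux::Inotify2->new
    or die "unable to create inotify object: $!";

# IN_CLOSE_WRITE fires when a file opened for writing is closed,
# i.e. when the uploader has finished sending it.
$inotify->watch($watch_dir, IN_CLOSE_WRITE, sub {
    my $event = shift;
    my $name  = File::Basename::basename($event->fullname);
    rename $event->fullname, "$dest_dir/$name"
        or warn "cannot move $name: $!";
});

1 while $inotify->poll;   # block and dispatch events forever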
Also check out Net::SFTP::Server, an SFTP server written in Perl that can be extended to do things like what you need.
Some code:
#!/usr/bin/perl

use strict;
use warnings;
use File::Basename ();

my $server = Server->new(timeout => 15);
$server->run;
exit(0);

package Server;

use Net::SFTP::Server::Constants qw(SSH_FXF_WRITE);
use parent 'Net::SFTP::Server::FS';

sub handle_command_open_v3 {
    my ($self, $id, $path, $flags, $attrs) = @_;
    my $writable = $flags & SSH_FXF_WRITE;
    my $pflags = $self->sftp_open_flags_to_sysopen($flags);
    my $perms = $attrs->{mode};
    my $old_umask;
    if (defined $perms) {
        $old_umask = umask $perms;
    }
    else {
        $perms = 0666;
    }
    my $fh;
    unless (sysopen $fh, $path, $pflags, $perms) {
        $self->push_status_errno_response($id);
        umask $old_umask if defined $old_umask;
        return;
    }
    umask $old_umask if defined $old_umask;
    if ($writable) {
        Net::SFTP::Server::FS::_set_attrs($path, $attrs)
            or $self->send_status_errno_response($id);
    }
    my $hid = $self->save_file_handler($fh, $flags, $perms, $path);
    $self->push_handle_response($id, $hid);
}

sub handle_command_close_v3 {
    my $self = shift;
    my ($id, $hid) = @_;
    my ($type, $fh, $flags, $perms, $path) = $self->get_handler($hid);
    $self->SUPER::handle_command_close_v3(@_);

    # A file opened for writing has just been closed: the upload is
    # complete, so move it to the destination directory.
    if ($type eq 'file' and $flags & SSH_FXF_WRITE) {
        my $name = File::Basename::basename($path);
        rename $path, "/tmp/$name";
    }
}
Save the script somewhere on your server, chmod 755 it, and configure OpenSSH to use it as the SFTP server instead of the default one.
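For that last step, the usual approach is to point the sftp subsystem in /etc/ssh/sshd_config at the script (the path below is hypothetical) and then restart sshd:
# /etc/ssh/sshd_config: replace the stock sftp-server with the custom one
Subsystem sftp /usr/local/bin/my-sftp-server.pl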