Shell - LFTP - multiple extensions - linux

I have been trying to find a way to use mget with only certain file extensions.
I have used the following command (which works just fine if I leave only *.csv):
lftp -e "set xfer:clobber true;mget $SOURCE_DIR*.{csv,txt,xls,xlsx,zip,rar};exit" -u $SOURCE_USERNAME,$SOURCE_PASSWORD $SOURCE_SERVER || exit 0
But no luck; I get the message dir/*.{csv,txt,xls,xlsx,zip,rar} no files found.
I tried adding parentheses:
lftp -e "set xfer:clobber true;mget $SOURCE_DIR(*.{csv,txt,xls,xlsx,zip,rar});exit" -u $SOURCE_USERNAME,$SOURCE_PASSWORD $SOURCE_SERVER || exit 0
Also no luck.
$SOURCE_DIR already has a slash / at the end.
I tried to test lftp locally, but I have problems opening ports on my Vagrant box, hence the question.

I managed to connect to one FTP server without needing to forward ports.
Turns out (I know it seems obvious) you have to specify the full path with the wildcard for each extension,
mget $SOURCE_DIR*.csv $SOURCE_DIR*.txt
separated by spaces.
Also, if one (or more) of the extensions matches no files, the message "*.txt no files found" lands on standard error, which at first kept the rest of my script from proceeding.
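Putting that together, a rough sketch of the full command might look like this (same variables as above; the 2>/dev/null is optional and only hides the "no files found" messages if you do not need them):
lftp -e "set xfer:clobber true;mget ${SOURCE_DIR}*.csv ${SOURCE_DIR}*.txt ${SOURCE_DIR}*.xls ${SOURCE_DIR}*.xlsx ${SOURCE_DIR}*.zip ${SOURCE_DIR}*.rar;exit" -u $SOURCE_USERNAME,$SOURCE_PASSWORD $SOURCE_SERVER 2>/dev/null || exit 0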

How can I ignore whitespace inside an echo command when the usual methods don't seem to work

Hi, I'm pretty new to Linux/bash in general and I'm having some trouble making a script for my coworker. The idea of this script is to help automate my coworker's iptables entries (I don't use iptables myself, so I have no idea how it works; I'm just working as per his instructions). The program is going to ask a few questions, form an entry and then add it to a file on a different host. After the file is written it will also run
"systemctl reload iptables.service" and "systemctl status iptables". I tested that pwd was at least working where I was planning to put these.
The code worked fine with a single word in place of the table_entry variable; I was able to write something to a file on my host computer as a different user.
The problem is that the "table_entry" variable is going to have whitespace in it, and the sudo su -c command gets the second word as input (at least that's what I think is happening), so I get the error sudo: INPUT: command not found, with "INPUT" coming from my case statement.
I tried putting the "table_entry" variable in the forms "{$table_entry}", "$table_entry" and {$table_entry}, but they didn't work.
I removed some of the code to make it more readable (mostly case statements for variables and asking for the host and username).
#!/bin/bash
echo -e "Which ports?
1. INPUT
2. OUTPUT
3. Forward"
read opt_ch
case $opt_ch in
1) chain="INPUT" ;;
2) chain="OUTPUT" ;;
3) chain="FORWARD" ;;
*) echo -e "Wrong Option Selected!!!"
esac
table_entry="-A $chain "#-s $ip_source -d $ip_dest
ssh -t user@host "sudo table_entry=$table_entry su -c 'echo $table_entry >> /home/user/y.txt'"
#^ this line will later also include the systemctl commands separated with ";" ^
I tested a few different approaches for this script overall: heredoc (didn't get input to work very well), Ansible (didn't really seem like a great tool for this job), Python (can't install new modules in the environment). So this is the best solution I came up with given my limited skill set.
Edit: I also realise this is probably not the smartest way to do this script, but it's the only one I have gotten to work so far that can also ask the user for a password when doing the su command. I'm not knowledgeable about transferring passwords safely in a Linux environment or in general, so I like to let Linux handle the passwords for me.
This is a problem with nested quoting, which is really quite an annoying problem to solve.
In this case it seems like you could do it with escaped quotes inside the string; your example would become
ssh -t user@host "sudo table_entry='$table_entry' su -c 'echo \"$table_entry\" >> /home/user/y.txt'"
It seems to me the table_entry='$table_entry' part is redundant, though; this should work:
ssh -t user@host "sudo su -c 'echo \"$table_entry\" >> /home/user/y.txt'"
Your comment (denoted with #) is getting concatenated with the table_entry string you're trying to form. Try adding a space like this:
table_entry="-A $chain " #-s $ip_source -d $ip_dest
Then table_entry gets assigned correctly. I was using KWrite to edit your bash script, and it does text highlighting that quickly showed me the problem.
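Putting the two fixes together (the space before the comment plus the escaped quotes around the variable in the remote command), a minimal sketch of the relevant lines would look like this -- user, host and the target file are the placeholders from the question:
table_entry="-A $chain " #-s $ip_source -d $ip_dest
ssh -t user@host "sudo su -c 'echo \"$table_entry\" >> /home/user/y.txt'"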

Exit from lftp after local shell command execution

I need to quit lftp automatically after some local shell commands are executed. E.g., I need to find some files and exit.
lftp -e "!find . -maxdepth 3 -name \"index.*\" -type f;exit" sftp://user:pass#mysite.com:22
When this command is executed, it keeps me inside the lftp environment, so I need to send an extra "bye" command to leave the app. But I need this to happen automatically once the shell command has run.
I tried
lftp -e "!find . -maxdepth 3 -name \"index.*\" -type f;exit;bye" sftp://user:pass#mysite.com:22
but it doesn't work (it seems "bye" is executed in the local shell context rather than the lftp shell).
Is there any way to exit from local shell mode back to lftp command mode and then perform "bye" within the same session?
Note that what you're trying won't have a useful effect -- the local shell is local to where you're running lftp, so you're running find on the same machine as the client, not the server. There's thus no reason to run find inside lftp as opposed to outside of it.
Getting past that, though, and answering the literal question -- you can split your commands across multiple lines; $'\n' is a literal for a newline, or newlines can be literally added to a single-line string. Thus:
lftp -c '
open sftp://user:pass@mysite.com:22
!find . -maxdepth 3 -name "index.*" -type f
' </dev/null
There's no need for the exit or bye as using -c rather than -e causes the connection to be closed and lftp to automatically exit after all commands are run. Using </dev/null also ensures that even if you did use -e, attempts to read further commands from stdin would return an EOF (and thus likewise indicate an exit).
I've also observed that, somehow, after executing a local command, lftp will run a local version of the next command, even if 'local' was not specified for that second command. Normally this reverts back to sending commands to the remote site the third time around. However, when I walk away from the terminal and come back to issue a third command much later, the new command and all subsequent ones also apply locally, as if the connection had been lost (or had never existed), and in that situation, unless I reconnect to the site, a command such as 'bye' is simply not possible.
What I do to work around this is define a bookmark early on in the connection process that I can reuse later and make sure is open prior to issuing 'bye', which, as you said, should close the connection / the process / the application and/or window.
So initially, issue something like 'bookmark add remote'. And just prior to leaving, issue something like 'open remote' followed by 'bye', and that should work.
NB: Give your bookmarks unique names instead of 'remote' if you wish to connect to multiple servers and plan to do concurrent work, as all sessions will most likely share the same set of lftp bookmarks.
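Inside an lftp session, that workaround would look roughly like this (the bookmark name remote is only an example, and the sftp URL is the one from the question):
open sftp://user:pass@mysite.com:22
bookmark add remote
!find . -maxdepth 3 -name "index.*" -type f
open remote
bye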

shell script can't see files in remote directory

I'm trying to write an interactive script on a remote server, whose default shell is zsh. I've been trying two different approaches to get this to work:
Approach 1: ssh -t <user>@<host> "$(<serverStatusReport.sh)"
Approach 2: ssh <user>@<host> "bash -s" < serverStatusReport.sh
I've been using approach 1 just fine up until now, when I ran into the following issue - I have a block of code that runs depending on whether certain files exist in the current directory:
filename="./service_log.*"
if ls $filename 1> /dev/null 2>&1 ; then
echo "$filename found."
##process files
else
echo "$filename not found."
fi
If I ssh into the server and run the command directly, I see "$filename found."
If I run the block of code above using Approach 1, I see "$filename not found".
If I copy this block into a new script (lets call this script2), and run it using Approach 2, then I see "$filename found".
I can't for the life of me figure out where this discrepancy is coming from. I thought that the difference may be that script2 is piped into bash whereas my original script is being run with zsh... but considering that running the same command verbatim on the server, with its default zsh shell, returns correctly... I'm stumped.
:( any help would be greatly appreciated!
I guess that when executing your approach 1, it is the local shell that expands "$(<serverStatusReport.sh)", not the remote one. You can easily check this with:
ssh -t <user>@<host> "$(<hostname)"
Is the serverStatusReport.sh script also in the PATH on the local host?
What I do not understand is why you get this message instead of an error message.
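One quick way to see which side performs an expansion is to embed something host-specific in the command and vary the quoting; this is only an illustrative check, not part of the original report:
ssh -t <user>@<host> 'echo "expanded on: $(hostname)"'   # single quotes: the remote shell runs hostname
ssh -t <user>@<host> "echo \"expanded on: $(hostname)\"" # double quotes: the local shell runs hostname before ssh is invoked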

How can I check dhcpd.conf against syntax error without running dhcpd?

I need to make sure there are no syntax errors in dhcpd.conf. If there are errors, I want to see what they are.
I can check for syntax errors with this command:
dhcpd -cf /path/to/dhcpd.conf
but that prints a lot of information in addition to the error I got. Another thing is that I don't want to actually run dhcpd, even if there are no syntax errors. I only want to check for syntax errors and see what they are.
Unfortunately, running dhcpd -tf /path/to/dhcpd.conf also didn't solve my problem.
The syntax you are looking for is
dhcpd -t -cf /path/to/dhcpd.conf
The -t option will do a config check:
If the -t flag is specified, the server will simply test the configuration file for correct syntax, but will not attempt to perform any network operations. This can be used to test the new configuration file automatically before installing it.
You do not need to use -cf if you are using the default config file path.
/usr/sbin/dhcpd -t
The one you tried with -tf /path/to/... is quite different and relates to tracing.
One thing not on the manual page, and not covered here yet, is that the '/usr/sbin/dhcpd -t' command uses its return value to indicate whether the configuration is correct or not.
If there are no errors, it will return zero; if there are syntax errors, it will return non-zero (1 for the test I did).
So you can use something like:
/usr/sbin/dhcpd -t
if [ $? -ne 0 ]; then
    echo "Configuration has errors, aborting"
    exit 1
fi
/bin/systemctl restart isc-dhcp-server
This checks that changes made to the configuration are valid before trying to restart the server with the new version.
Unfortunately, I don't think there is any option to display just the errors. It would be possible to use a text-parsing tool (awk, Python, etc.) to remove the header lines (for the version I have, everything up to a line beginning with "For info") and the trailer lines (for the version I have, everything after a line saying "Configuration file errors encountered -- exiting"), which would leave just the syntax error and its location.
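As a rough sketch of that filtering idea (the marker strings are the ones quoted above and may differ between dhcpd versions, so treat the patterns as placeholders):
/usr/sbin/dhcpd -t 2>&1 | sed -e '1,/^For info/d' -e '/^Configuration file errors encountered/,$d'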

iLO3: Multiple SSH commands

Is there a way to run multiple commands in HP's Integrated Lights-Out 3 system via SSH? I can log in to iLO and run commands line by line, but I need to create a small shell script that connects to iLO and runs some commands one by one.
This is the line I use to get information about the iLO version:
/usr/bin/ssh -i dsa_key administrator@<iLO-IP> "version"
Now, how can I do something like this?
/usr/bin/ssh -i dsa_key administrator@<iLO-IP> "version" "show /map1 license" "start /system1"
This doesn't work, because iLO treats it all as one command. But I need something that logs in to iLO, runs these commands and then exits from iLO. It takes too much time to run them one after the other, because every login into iLO over SSH takes ~5-6 seconds (5 commands = 5*5 seconds...).
I've also tried to separate the commands directly in iLO after a manual login, but there is no way to use multiple commands in one line. It seems a command is only finished by pressing return.
iLO-SSH Version is: SM-CLP Version 1.0
The following solutions did NOT work:
/usr/bin/ssh -i dsa_key administrator@<iLO-IP> "version; show /map1 license; start /system1"
/usr/bin/ssh -i dsa_key administrator@<iLO-IP> "version && show /map1 license && start /system1"
This Python module is for HP iLO management; check it out:
http://pypi.python.org/pypi/python-hpilo/
Try putting your commands in a file (named theFile in this example):
version
show /map1 license
start /system1
Then:
ssh -i dsa_key administrator@iLO-IP < theFile
Semicolons and such won't work because you're using the iLO shell on the other side, not a normal *nix shell. So above I redirect the file, with newlines intact, as if you were typing all that into the session by hand. I hope it works.
You are trying to treat iLO like it's a normal shell, but it's really HP's dopey interface.
That being said, the easiest way is to put all the commands in a file and then pipe it to ssh (sending all of the newline characters):
echo -e "version\nshow /map1 license\nstart /system1" | /usr/bin/ssh -i dsa_key administrator#<iLO-IP>
It's a messy workaround, but you might fancy using expect. Your script in expect would look something like this:
# Make an ssh connection
spawn ssh -i dsa_key administrator@<iLO-IP>
# Wait for command prompt to appear
expect "$"
# Send your first command
send "version\r"
# Wait for command prompt to appear
expect "$"
# Send your second command
send "show /map1 license\r"
# Etc...
On the bright side, it's guaranteed to work. On the darker side, it's a pretty clumsy workaround, very prone to breaking if something doesn't go the way it should (for example, if the command prompt character appears in the version output, or something like that).
I'm in the same situation and wish to avoid running a lot of plink commands. I've seen that you can pass a file with the -m option, but apparently it executes just one command at a time :-(
plink -ssh Administrator@AddressIP -pw password -m test.txt
What's the purpose of the file? Is there a special format for it?
My current text file looks like this:
set /map1/oemhp_dircfg1 oemhp_usercntxt1=CN=TEST
set /map1/oemhp_dircfg1 oemhp_usercntxt2=CN=TEST2
...
Is there a way to execute both of these commands?
I had similar issues and ended up using the "RIBCL over HTTPS" interface to the iLO. This has the advantage of being much more responsive than logging in and out over SSH.
Using curl or another command-line HTTP client, try:
USERNAME=<YOUR_ILO_USERNAME>
PASSWORD=<YOUR_ILO_PASSWORD>
ILO_URL=https://<YOUR_ILO_IP>/ribcl
curl -k -X POST -d "<RIBCL VERSION=\"2.0\">
<LOGIN USER_LOGIN=\"${USERNAME}\" PASSWORD=\"${PASSWORD}\">
<RIB_INFO MODE=\"read\">
<GET_FW_VERSION/>
<GET_ALL_LICENSES/>
</RIB_INFO>
<SERVER_INFO MODE=\"write\">
<SET_HOST_POWER HOST_POWER=\"Yes\">
</SERVER_INFO>
</LOGIN>
</RIBCL>" ${ILO_URL}
The formatting isn't exactly the same, but if you have the ability to access the iLO via HTTPS instead of only ssh, this may give you some flexibility.
More details on the various RIBCL commands and options may be found at HP iLO 3 Scripting Guide (PDF).
