What is the meaning of the first byte of each record set when downloading a v(b)-file from z/OS over FTP using "TYPE E" and "MODE B" - mainframe

Right now, I'm trying to upload and download files with variable record lengths from an IBM mainframe running z/OS 2.1. Like this guy: How to FTP a variable length file from linux to mainframe z/OS
curl --user "******" --verbose --silent --show-error "ftp://themainframe/'SOME.FILE.NAME'" | hexdump
0000000 dead cafe babe
0000006
curl --user "******" --quote "site RDw" --verbose --silent --show-error "ftp://themainframe/'SOME.FILE.NAME'" | hexdump
0000000 000a 0000 dead cafe babe
000000a
It looks good. The RDW is "000a 0000" and the record is "dead cafe babe". But if I upload it again, even while using "quote site RDw", the server will ignore the RDW and store it as part of the actual data.
curl --user "******" --quote "site RDw" --verbose --silent --show-error "ftp://themainframe/'SOME.FILE.NAME'" > SOME.FILE.NAME
cat SOME.FILE.NAME | curl --user "******" --upload-file "-" --quote "site RDw" --verbose --silent --show-error "ftp://themainframe/'SOME.FILE.NAME'"
0000000 000c 0000 0008 0000 dead beef
000000c
Since that's not what I wanted, I searched some more and found this article:
http://www-01.ibm.com/support/docview.wss?uid=swg21188301
And gave it another try.
curl --user "******" --quote "TYPE E" --quote "MODE B" --verbose --silent --show-error "ftp://themainframe/'SOME.FILE.NAME'" | hexdump
0000000 4000 04de adbe ef00
0000007
That looked interesting. So I compared it with another file, containing a larger dataset...
0000000 4002 cbdc...
00002ce
And another one...
0000000 8000 16f0...
0000019 4000 16f0...
0000032
My first impression is: an 80 seems to indicate that there will be more data, whereas the 40 indicates the last block. That seemed to be true for every file I tried, for a normal file with variable record lengths as well as for a blocked file with variable record lengths.
So I tried to upload it again...
curl --user "******" --quote "TYPE E" --quote "MODE B" --verbose --silent --show-error "ftp://themainframe/'SOME.FILE.NAME'" > SOME.FILE.NAME
cat SOME.FILE.NAME | curl --user "******" --upload-file "-" --quote "TYPE E" --quote "MODE B" --verbose --silent --show-error "ftp://themainframe/'SOME.FILE.NAME'"
And it seemed to work
Well - at least now I'm able to transfer files with variable record lengths from and to the mainframe while preserving the record lengths.
But - and here is the question:
Is the first byte of each record "only" an indicator for whether there will be more data? Or am I missing something?

The first byte of each block is the block descriptor that RFC 959 defines for block mode (MODE B): 0x80 flags the end of a record (EOR) and 0x40 the end of the file (EOF), so your impression is close. The two bytes that follow are the big-endian length of the record data in that block: in "40 00 04 de ad be ef", 0x40 marks the final block, 0x0004 is the record length, and "dead beef" is the record. RFC 959 also defines 0x20 (suspected errors in the block) and 0x10 (restart marker).
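As a rough self-check, here is a minimal, untested bash sketch that walks the blocks of a file downloaded with TYPE E / MODE B and prints each three-byte header; the file name is the one from the question, and GNU od/stat are assumed:
#!/bin/bash
# Walk the MODE B blocks of a downloaded file and print each header.
f=SOME.FILE.NAME
off=0
size=$(stat -c%s "$f")
while (( off < size )); do
# byte 0: RFC 959 descriptor (0x80 = end of record, 0x40 = end of file)
desc=$(od -An -tx1 -j "$off" -N 1 "$f" | tr -d ' \n')
# bytes 1-2: big-endian length of the record data that follows
len=$(( 16#$(od -An -tx1 -j "$((off + 1))" -N 2 "$f" | tr -d ' \n') ))
printf 'offset %d: descriptor 0x%s, record length %d\n' "$off" "$desc" "$len"
off=$(( off + 3 + len ))
done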

Related

wget recursion and file extraction

I'm trying to use wget to elegantly & politely download all the pdfs from a website. The pdfs live in various sub-directories under the starting URL. It appears that the -A pdf option is conflicting with the -r option. But I'm not a wget expert! This command:
wget -nd -np -r site/path
faithfully traverses the entire site downloading everything downstream of path (not polite!). This command:
wget -nd -np -r -A pdf site/path
finishes immediately having downloaded nothing. Running that same command in debug mode:
wget -nd -np -r -A pdf -d site/path
reveals that the sub-directories are ignored with the debug message:
Deciding whether to enqueue "https://site/path/subdir1". https://site/path/subdir1 (subdir1) does not match acc/rej rules. Decided NOT to load it.
I think this means that the subdirectories did not satisfy the "pdf" filter and were excluded. Is there a way to get wget to recurse into subdirectories (of arbitrary depth) and only download pdfs (into a single local dir)? Or does wget need to download everything, so that I have to manually filter for pdfs afterward?
UPDATE: thanks to everyone for their ideas. The solution was to use a two step approach including a modified version of this: http://mindspill.net/computing/linux-notes/generate-list-of-urls-using-wget/
Try this
1) The -l switch tells wget to go one level down from the primary URL specified. You can obviously change that to however many levels of links you want to follow.
wget -r -l1 -A.pdf http://www.example.com/page-with-pdfs.htm
Refer to man wget for more details.
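Since the question also stresses politeness, wget's rate-limiting switches can be layered on top; an untested sketch (the depth and paths are placeholders):
# pause ~1s (randomized) between requests while recursing for PDFs
wget -r -l5 -nd -np -A '*.pdf' --wait=1 --random-wait -P pdfs/ https://site/path/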
If the above doesn't work, try this:
Verify that the TOS of the website permit crawling it. Then one solution is:
mech-dump --links 'http://example.com' |
grep pdf$ |
sed 's/\s\+/%20/g' |
xargs -I% wget http://example.com/%
The mech-dump command comes with Perl's WWW::Mechanize module (the libwww-mechanize-perl package on Debian and Debian-like distros).
To install mech-dump:
sudo apt-get update -y
sudo apt-get install -y libwww-mechanize-shell-perl
github repo https://github.com/libwww-perl/WWW-Mechanize
I haven't tested this, but you can still give it a try. What I think is that you still need to find a way to get all URLs of a website and pipe them to any of the solutions I have given.
You will need to have wget and lynx installed:
sudo apt-get install wget lynx
Prepare a script; name it whatever you want, for this example pdflinkextractor:
#!/bin/bash
# pdflinkextractor: collect every .pdf link on the page, save the list,
# then download the files into pdflinkextractor_files/.
WEBSITE="$1"
echo "Getting link list..."
# lynx prints numbered links ("  1. http://..."); awk keeps the URL column
lynx -cache=0 -dump -listonly "$WEBSITE" | grep '\.pdf$' | awk '{print $2}' | tee pdflinks.txt
echo "Downloading..."
wget -P pdflinkextractor_files/ -i pdflinks.txt
To run the file:
chmod 700 pdflinkextractor
$ ./pdflinkextractor http://www.pdfscripting.com/public/Free-Sample-PDF-Files-with-scripts.cfm
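The two steps can also be collapsed into a single pipeline; an untested sketch of the same idea (wget -i - reads the URL list from standard input):
lynx -cache=0 -dump -listonly "$WEBSITE" | awk '/\.pdf$/ {print $2}' | wget -P pdflinkextractor_files/ -i -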

Avoiding kinit when cache still has credentials

I have a systemd service that calls a webservice to perform some maintenance periodically (every minute). The service looks like:
[Service]
Type=oneshot
ExecStart=/usr/bin/kinit -kt user.keytab user@DOMAIN
ExecStart=/usr/bin/curl --tlsv1.2 --cacert cert.pem --negotiate --user user: --url https://website/maintenance
Now this destroys and reinitializes my Kerberos ticket every time,
and the kinit can take up to 2-3 minutes.
I would like to avoid that step and only run kinit if needed. Any ideas?
Try the HTTP request, and use the status code to decide whether you need to try kinit. You could grep the output of curl like this:
curl -s -i http://www.example.com | grep "HTTP/" | tail -1
If it's "HTTP/1.1 401 Unauthorized", run kinit and try again. (See here for how to parse out just the numeric part of the response if you prefer)
The "tail -1" part is to make sure you only get the last code; because of the negotiate protocol, you will typically get multiple lines from the grep command, like this:
HTTP/1.1 401 Unauthorized
HTTP/1.1 200 OK
The first one is the initial challenge from the server; the second one is the final response code.
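For the "numeric part" variant, curl can also print just the final status code by itself; a minimal sketch reusing the options from the unit above (URL, certificate, and keytab are the ones from the question):
#!/bin/bash
# Try the request first; -w '%{http_code}' prints only the final status code.
status=$(curl -s -o /dev/null -w '%{http_code}' --tlsv1.2 --cacert cert.pem --negotiate --user user: https://website/maintenance)
if [[ $status == 401 ]]; then
/usr/bin/kinit -kt user.keytab user@DOMAIN   # refresh the ticket and retry
curl --tlsv1.2 --cacert cert.pem --negotiate --user user: https://website/maintenance
fi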
After researching a bit more, I realized that having the logic in the systemd service didn't seem like a good idea. So I decided to go with the suggestion by Elliott Frisch and create a script for it:
#!/bin/bash
# check if ticket is present and not expired
if [[ $(klist -l | awk 'tolower($0) ~ /user/ && tolower($0) !~ /expired/') ]]; then
echo "using ticket cache"
else
echo "no cache authentication for user, kinit needed"
/usr/bin/kinit -kt /user.keytab user@DOMAIN
fi
/usr/bin/curl --tlsv1.2 --cacert cert.pem --negotiate --user user: --url https://website/maintenance
I am then calling this script from my systemd service.
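The unit then shrinks to a single ExecStart; the script path below is just a hypothetical example:
[Service]
Type=oneshot
ExecStart=/usr/local/bin/maintenance.sh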

Best practice to check if a process is running on Linux?

I installed Chef server with the script below. I'm new to Linux and I'm trying to learn how to set up Chef server. I ran the commands chef.io provides and the script succeeded. I'm really not sure how, or what I should do, to check if the process is running. What are the best practices on Linux for seeing whether a process is running? What are the things I could do to find out what I need to know?
#!/bin/bash
echo "Do your provisioning here"
sudo wget https://packages.chef.io/files/stable/chef-server/12.14.0/el/7/chef-server-core-12.14.0-1.el7.x86_64.rpm
sudo chmod a+x chef-server-core-12.14.0-1.el7.x86_64.rpm
sudo rpm -Uvh ./chef-server-core-12.14.0-1.el7.x86_64.rpm
sudo chef-server-ctl reconfigure
sudo openssl rsa -in private.pem -outform PEM -pubout -out ~/.ssh/chef-server.pem
sudo chef-server-ctl user-create admin 'admin' 'email' 'password' --filename ~/.ssh/chef-server.pem
sudo openssl rsa -in private.pem -outform PEM -pubout -out ~/.ssh/chef-server-validator.pem
sudo chef-server-ctl org-create short_name 'idevops' --association_user admin --filename ~/.ssh/chef-server-validator.pem
sudo openssl rsa -in private.pem -outform PEM -pubout -out ~/.ssh/chef-coffee-server-validator.pem
sudo chef-server-ctl org-create 4thcoffee 'iDevops 4th Coffee' --association_user admin --filename ~/.ssh/chef-coffee-server-validator.pem
sudo chef-server-ctl install chef-manage
sudo chef-server-ctl reconfigure
sudo chef-manage-ctl reconfigure
sudo chef-server-ctl install opscode-push-jobs-server
sudo chef-server-ctl reconfigure
sudo opscode-push-jobs-server-ctl reconfigure
sudo chef-server-ctl install opscode-reporting
sudo chef-server-ctl reconfigure
sudo opscode-reporting-ctl reconfigure
sudo chef-server-ctl install PACKAGE_NAME --path /path/to/package/directory
sudo chef-server-ctl install chef-manage --path /root/packages
sudo mkdir /etc/opscode && sudo touch /etc/opscode/chef-server.rb
sudo echo "license['nodes'] = 0" >> /etc/opscode/chef-server.rb
sudo chef-server-ctl reconfigure
OK, there are a few places you can find them online. The first thing you want to do is check if the process is running:
ps aux | grep process_name
Then, if it is running and you still can't access it, use the netstat command and grep for the port:
netstat -anp | grep portnumber
Look to see if the service is listening on the port it's supposed to be. Listening means the application is up and waiting for communication on that port; if instead the port is in use by another app, that's why yours didn't start.
Generally you would then look in the logs:
tail -f -n 100 /path/to/log/file
-n is the number of lines.
-f is a continuous follow, so it's how you watch the file. If you don't specify it, tail will just print the last 100 lines to the screen.
If you just want to check it manually, log into the server and run:
ps auxw | grep YOUR_PROCESS_NAME
First, I do not run commands from a script until I know they work.
You may not be running the service:
sudo service chef-server status #would be a good place to start
To check if a process is running on Linux, I would recommend starting with the following:
ps aux | grep <part of the process name>
ps aux | grep chef
If you do not like the output of aux you can also use -ef like:
ps -ef | grep <part of the process name>
ps -ef | grep chef
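Alternatively, if pgrep is available, it does the same match without also catching the grep process itself:
pgrep -fl chef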
If you know what port the process should be open on you could use netstat:
netstat -anp | grep <port number>
Example: looking for nginx or apache server to be running:
netstat -anp | grep 80
Watching logs I like to use the tail command:
tail -f /var/log/<name of application folder>/<name of log>
Example: watching nginx logs:
tail -f /var/log/nginx/nginx.log
Sometimes it is helpful to watch the syslog as well:
tail -f -n 100 /var/log/syslog
If looking for incoming connections from other machines:
sudo tcpdump -vvv -i any port <port number>
There is a relatively LARGE hint in the output you posted (e.g. chef-server-ctl status):
The status subcommand is used to show the status of all services available
to the Chef server. The results will vary based on the configuration of a
given server.
or, chef-server-ctl status SERVICE_NAME
where SERVICE_NAME represents the name of any service that is listed after
running the service-list subcommand.
See also the start subcommand at chef-server-ctl (executable) (toward the bottom of the page).
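Putting those hints together, a quick first check could look like this (the subcommands are the ones quoted from the docs above):
sudo chef-server-ctl service-list   # list the services the server knows about
sudo chef-server-ctl status         # show the status of each of them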

Using curl to send email

How can I use the curl command line program to send an email from a gmail account?
I have tried the following:
curl -n --ssl-reqd --mail-from "<sender@gmail.com>" --mail-rcpt "<receiver@server.tld>" --url smtps://smtp.gmail.com:465 -T file.txt
With file.txt being the email's contents, however, when I run this command I get the following error:
curl: (67) Access denied: 530
Is it possible to send an email from an account that is hosted by a personal server, still using curl? Does that make the authentication process easier?
curl --ssl-reqd \
--url 'smtps://smtp.gmail.com:465' \
--user 'username@gmail.com:password' \
--mail-from 'username@gmail.com' \
--mail-rcpt 'john@example.com' \
--upload-file mail.txt
mail.txt file contents:
From: "User Name" <username#gmail.com>
To: "John Smith" <john#example.com>
Subject: This is a test
Hi John,
I'm sending this mail with curl through my Gmail account.
Bye!
Additional info:
I’m using curl version 7.21.6 with SSL support.
You don't need to use the --insecure switch, which prevents curl from performing SSL connection verification. See this online resource for further details.
It's considered bad security practice to pass account credentials through
command-line arguments. Use --netrc-file instead; see the documentation and the sketch after these notes.
You must turn on access for less secure apps or use the newer App Passwords.
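A minimal sketch of the --netrc-file approach, assuming a credentials file you create yourself (the file name and its contents below are placeholders):
# ~/.netrc-gmail, chmod 600, containing one line like:
# machine smtp.gmail.com login username@gmail.com password app-password
curl --ssl-reqd --url 'smtps://smtp.gmail.com:465' \
--netrc-file ~/.netrc-gmail \
--mail-from 'username@gmail.com' --mail-rcpt 'john@example.com' \
--upload-file mail.txt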
If one wants to send mail as carbon copy or blind carbon copy:
curl --url 'smtps://smtp.gmail.com:465' --ssl-reqd \
--mail-from 'username@gmail.com' --mail-rcpt 'john@example.com' \
--mail-rcpt 'mary@gmail.com' --mail-rcpt 'eli@example.com' \
--upload-file mail.txt --user 'username@gmail.com:password' --insecure
From: "User Name" <username#gmail.com>
To: "John Smith" <john#example.com>
Cc: "Mary Smith" <mary#example.com>
Subject: This is a test
The BCC recipient (eli) is not specified in the message data, just in the RCPT list.
Create a simple email.conf file like so:
Username: hi@example.com
Password: OKbNGRcjiV
POP/IMAP Server: mail.example.com
And simply run sendmail.sh, like so after making it executable (sudo chmod +x sendmail.sh)
./sendmail.sh
Code
#!/bin/bash
# Parse the three values out of email.conf (strip the "Label: " prefixes)
ARGS=$(xargs echo $(perl -anle 's/^[^:]+//g && s/:\s+//g && print' email.conf) < /dev/null)
set -- $ARGS "$@";
declare -A email;
email['user']=$1
email['pass']=$2
email['smtp']=$3
email['port']='587';
email['rcpt']='your-email-address@gmail.com';
email_content='From: "The title" <'"${email['user']}"'>
To: "Gmail" <'"${email['rcpt']}"'>
Subject: from '"${email['user']}"' to Gmail
Date: '"$(date)"'
Hi Gmail,
'"${email['user']}"' is sending email to you and it should work.
Regards
';
echo "$email_content" | curl -s \
--url "smtp://${email['smtp']}:${email['port']}" \
--user "${email['user']}:${email['pass']}" \
--mail-from "${email['user']}" \
--mail-rcpt "${email['rcpt']}" \
--upload-file - # email.txt
rc=$?  # capture curl's exit status before [[ ]] overwrites $?
if [[ $rc == 0 ]]; then
echo;
echo 'okay';
else
echo "curl error code $rc";
man curl | grep "^ \+$rc \+"
fi
Mind that the form of mail.txt seems to be important: CRLF line endings for Windows, LF for Linux, special characters, etc.
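If line endings turn out to be the problem, the file can be converted before uploading; a small sketch (the sed form assumes GNU sed; unix2dos comes from the dos2unix package):
sed 's/$/\r/' mail.txt > mail_crlf.txt   # LF -> CRLF
# or: unix2dos -n mail.txt mail_crlf.txt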
Finally, after struggling for two hours, it works for me for GMX (they say their SMTP port is 587, and further down, in small letters, add the hint: "also 465 can be used with SSL"):
Under Linux (TinyCore Linux on a Raspberry Pi 3B+ with curl.tcz installed):
curl --ssl-reqd --url 'smtps://mail.gmx.net:465' --user 'mymail@gmx.at:mymailPassword' --mail-from 'mymail@gmx.at' --mail-rcpt 'mymail@gmx.at' --upload-file mail.txt
Under Windows:
curl --ssl-reqd --url "smtps://mail.gmx.net:465" --user "mymail@gmx.at:mymailPassword" --mail-from "mymail@gmx.at" --mail-rcpt "mymail@gmx.at" --upload-file mail_win.txt
with mail.txt:
From: "User Name" <mymail#gmx.at>
To: "John Smith" <mymail#gmx.at>
Subject: This is a test
Hi John,
Im sending this mail with curl thru my gmx account.
Bye!
Note that if Perl's "system()" function is used to execute the curl command, each argument 'word' is a separate item in the argument array, and words must NOT be quoted.
Also note that if sending via Gmail after May 30, 2022, the gmail account must be set up with 2-factor authentication and then you must create an "App Password". The App Password is a long character string that acts as an alternative password and replaces the usual password on the "--user" parameter.

freeTDS inserting blob inside Asterisk dialplan

Inside an Asterisk application (dialplan) I need to insert a sound file into an MSSQL database. I'm using FreeTDS to communicate with the DB.
The example table is called "testblob"
(id int identity, name varchar(50), audiofile varbinary(MAX))
I'm trying with this code:
exten => s,n,Set(arch=${FILE(/var/lib/asterisk/sounds/custom/myFile.wav)})
exten => s,n,Verbose(${arch})
exten => s,n,Set(RESULT=${SHELL(echo -e use "AVL \\ngo\\ninsert into testblob (name, audiofile) values ('mar',${arch})\\ngo"|tsql -H x.x.x.x -p 1433 -U sa -P x)})
But it is not working, almost certainly because of the "special chars" inside the ${arch} variable. I know that ${arch} holds the file contents, but I guess I need to read it as binary or base64-encode it or something like that.
Question is: Is there any way to insert this myFile.wav directly from the shell? Something like:
echo -e use "AVL \\ngo\\ninsert into testblob (name, audiofile) values ('mar',####MAGIC GOES HERE TO READ MYFILE.WAV###)\\ngo"|tsql -H x.x.x.x -p 1433 -U sa -P x
Ok, so the way to do it in a single line is using base64, like this:
(echo -e -n use "AVL \\ngo\\nexec spAVL_SetAlertIVR 1, '";(base64 myFile.wav|tr -d '\n');echo -n -e "'\\ngo") | tsql -H 192.168.1.111 -p 1433 -U sa -P x
Note the use of "( )" and "|", and the trick using "tr -d" to remove the newlines left behind by the base64 command.
Hope this helps someone else
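For the original testblob table, where there is no stored procedure to decode base64 on the SQL side, an alternative untested sketch is to send the file as a T-SQL hex literal (0x...), which needs no decoding at all; xxd is an extra assumption here:
# Build "insert ... values ('mar', 0x<hex of myFile.wav>)" and pipe it to tsql
(echo -en "use AVL\ngo\ninsert into testblob (name, audiofile) values ('mar', 0x"
xxd -p myFile.wav | tr -d '\n'
echo -e ")\ngo") | tsql -H 192.168.1.111 -p 1433 -U sa -P x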
