I want to transfer a few files weekly from a mainframe to a Linux server running Red Hat, using a batch (JCL) job and FTPS.
The Linux server is configured with vsftpd. Is it possible to send files from the mainframe to Linux using FTPS?
I am getting this error while transferring the file from the mainframe to Linux:
EZA1736I FTP
EZY2640I Using 'SYS1.TCPPARMS(FTPDATA)' for local site configuration parameters.
EZA1450I xxx FTP CS xxx
EZA1456I Connect to ?
EZA1736I host_name
EZA1554I Connecting to: host_name xxx.xxx.xxx.xxx port: 21.
220 (vsFTPd 2.0.5)
EZA1701I >>> AUTH TLS
234 Proceed with negotiation.
EZA2897I Authentication negotiation failed
EZA1534I *** Control connection with host_name dies.
EZA1457I You must first issue the 'OPEN' command
EZA1460I Command:
EZA1618I Unknown command: 'Atul'
EZA1619I For a list of the available commands, say HELP
EZA1460I Command:
EZA1736I Summer#123
EZA1618I Unknown command: 'Monsoon#123'
EZA1460I Command:
EZA1736I cd /home/Atul/
EZA1457I You must first issue the 'OPEN' command
From your log you seem to be able to set up an unsecured connection to the FTP server. That's good.
EZA2897I Authentication negotiation failed indicates that the TLS handshake did not complete successfully. Either the partners could not agree on a common TLS version and/or cipher suite, or (the point I'd examine first) the certificate provided by the FTPS server isn't trusted by the client user. To be sure you would have to capture and examine a TCP or TLS trace.
As a first step I would check the certificate presented by the FTP server and compare it to the trusted certificates in your security manager. In the case of RACF you would have to examine SITE certificates and/or certificates in the user's keyring.
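As an illustration only, here is a hedged sketch of what trusting the server's CA certificate in RACF can look like (the ring name FTPRING, the user ID BATCHID that runs the batch job, the dataset name and the label are all placeholders; the CA certificate must already have been uploaded into that dataset):
RACDCERT ID(BATCHID) ADDRING(FTPRING)
RACDCERT CERTAUTH ADD('HLQ.LINUX.CACERT') WITHLABEL('LinuxFtpCA') TRUST
RACDCERT ID(BATCHID) CONNECT(CERTAUTH LABEL('LinuxFtpCA') RING(FTPRING))
Your security administrator may already have a standard ring and naming convention, so treat this only as a starting point.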
Yes, sending from the mainframe to vsftpd using FTPS is certainly possible. Both the client (z/OS in this case) and the server (Linux in this case) need to agree on the encryption method to be used, and I believe that by default z/OS has to trust the server's certificate, which may involve importing the certificate bundle into a key ring that the batch job has access to. The job not having access to a keyring that trusts the chain for the server certificate would be my first guess.
I don't have experience with setting up the RACF keyring things, but I can say that people do successfully send us data every day from z/OS to our Linux server via FTPS.
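Not a definitive setup, but a minimal sketch of what the client side can look like once a suitable keyring exists. Assumed FTP.DATA statements for a TLS-protected client connection (the keyring name FTPRING is a placeholder):
SECURE_FTP       REQUIRED
SECURE_MECHANISM TLS
SECURE_DATACONN  PRIVATE
KEYRING          FTPRING
And a sketch of the batch step itself (host name, credentials, dataset name and target file name are placeholders):
//FTPSTEP  EXEC PGM=FTP,PARM='host_name (EXIT'
//SYSFTPD  DD DISP=SHR,DSN=SYS1.TCPPARMS(FTPDATA)
//OUTPUT   DD SYSOUT=*
//INPUT    DD *
my_user
my_password
cd /home/Atul/
put MY.WEEKLY.DATA weekly_data.txt
quit
/*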
I have tried to create a VPN connection on my Windows 8.1 machine.
I have followed the steps mentioned in the setup. While connecting, it shows the error message:
"A certificate could not be found that can be used with this
Extensible Authentication Protocol. (Error 798)".
How can I resolve this issue? Thanks in advance.
This typically means that the client certificate is not installed on the Windows machine you are trying to connect from.
On the client machine, please open "certmgr.msc" from a command prompt and verify that the client certificate (created/signed by the root certificate) is installed on the machine.
A couple more things to check or verify:
The P2S root cert, the one you uploaded to Azure, must be in the machine's trusted root store in certmgr.
Delete then re-install/import the client cert again; please ensure you do NOT check "Enable strong private key protection. ..."
(It's a long shot but ...) Check if the following regkey is set to 1 (see the sketch after these steps):
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\RasMan\PPP\EAP\13\SelectSelfSignedCert
If it still fails, collect tracing for further debugging from an elevated command prompt:
"netsh ras diag se tr en"
repro the issue (failed connect)
"netsh ras diag se tr dis"
Share/send the contents of your Windows\tracing folder
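For the registry check mentioned above, a hedged example from an elevated command prompt; reg add creates the value if it does not exist yet:
reg query HKLM\SYSTEM\CurrentControlSet\Services\RasMan\PPP\EAP\13 /v SelectSelfSignedCert
reg add HKLM\SYSTEM\CurrentControlSet\Services\RasMan\PPP\EAP\13 /v SelectSelfSignedCert /t REG_DWORD /d 1 /f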
[Edit]
Just found another link with very extensive instructions to try:
https://www.tectimes.net/how-to-microsoft-azure-como-solucionar-el-error-798-a-certificate-could-not-be-used-with-this-extensible-authentication-protocol-en-windows-8-1-en-una-vpn-hacia-azure/
(In Spanish; you will need to translate the page.)
Thanks,
Yushun [MSFT]
We are writing software that runs on both Windows and Linux, and plan to use Windows Active Directory for authentication. I am struggling with the issues described below, and would appreciate any help very much:
Domain name: CORP.COMPANY.COM
Test program running on one Linux machine: host1.corp.company.com
The test program comes from the gss-sample directory in the krb5-1.11.3 download.
The server will be named "gssapitest".
Based on "Step-by-Step Guide to Kerberos 5(krb5 1.0) Interoperability(from Microsoft)
,
First create a user "host1" in the AD to represent the host host1.corp.company.com (the Linux machine).
Use ktpass to generate the keytab (run from Windows):
ktpass /princ host/host1.corp.company.com@CORP.COMPANY.COM /mapuser host1 /pass hostpassword /out file1.keytab
Now in AD, create another domain user "gssapitest" to represent the test server program, and map user similarly:
ktpass /princ gssapitest/host1.corp.company.com@CORP.COMPANY.COM /mapuser gssapitest /pass gssapitestpassword /out file2.keytab
Copy file1.keytab and file2.keytab to the Linux machine host1, and merge them into /etc/krb5.keytab.
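A minimal sketch of that merge with MIT ktutil (run as root on host1):
ktutil
ktutil:  rkt file1.keytab
ktutil:  rkt file2.keytab
ktutil:  wkt /etc/krb5.keytab
ktutil:  quit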
In Linux, "ktutil" shows the content of /etc/krb5.keytab like the following:
slot KVNO Principal
1    4 host/host1.corp.company.com@CORP.COMPANY.COM
2    5 gssapitest/host1.corp.company.com@CORP.COMPANY.COM
On Windows, register the service (using "setspn") for the Linux server program so that the result looks like the following (two entries, one with the short host name and one with the fully qualified host name, for testing purposes; with only one entry, no matter which one, the result was the same):
Registered ServicePrincipalNames for
CN=xxxx,CN=Users,DC=corp,DC=company,DC=com:
gssapitest/host1:2001
gssapitest/host1.corp.company.com:2001
Now I start the server this way:
gss-server -port 2001 gssapitest
and start the client from another terminal this way:
gss-client -port 2001 -user xxxx -pass xxxxpassword host1.corp.company.com gssapitest "abcd"
The error shows on the server side:
GSS-API error accepting context: Unspecified GSS failure. Minor code may
provide more information
GSS-API error accepting context: Wrong principal in request
What could be the likely cause of this? I'd like to know whether the steps I outlined above are all necessary, and which ones are not needed at all or are incorrect.
(Note: I have tried to log in to the Linux machine with both a local user account and a domain account in CORP.COMPANY.COM; the result is the same error. Also, nslookup shows the correct IP-to-host mapping for the Linux machine.)
I would not include the port number when using setspn; I'd expect gssapitest not gssapitest:2001.
In addition, use gssapitest@host as the service name in the call to gss-client:
gss-client -user xxx -pass xxx -port 2001 hostname gssapitest@hostname "test message"
You can use krb5 tracing to get much better logging about what's going on:
export KRB5_TRACE=/tmp/trace.client # and run client
Similar for the server.
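For example, for the server side (same mechanism, different trace file):
export KRB5_TRACE=/tmp/trace.server   # then start the server
gss-server -port 2001 gssapitest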
I did some test runs, and in my case the problem seems to be this: I made changes to my mapped user, i.e., gssapitest (in "Active Directory Users and Computers", I unchecked "Use DES encryption types for this account" under the "Account" tab for this user), after running "ktpass" and merging the output file into the krb5.keytab on the Linux machine.
To fix this problem, I checked "Use DES encryption types for this account" again in Active Directory, then went to the Linux machine and ran "kdestroy" before starting my server and client programs. Then it worked.
If anyone is having similar problems, you may want to look into this possible cause. Thanks.
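If you hit a similar mismatch, one hedged way to compare the encryption types in the keytab against what the KDC actually issues, using standard MIT krb5 tools (the user principal is a placeholder):
klist -e -k /etc/krb5.keytab                               # enctypes stored in the keytab
kinit someuser@CORP.COMPANY.COM                            # get a TGT
kvno gssapitest/host1.corp.company.com@CORP.COMPANY.COM    # request a service ticket
klist -e                                                   # enctypes of the tickets received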
I would like to start this discussion about mysqldump security.
With security I'm not talking about cron tasks that display the password, or password security in any way; instead I'm talking about the security of the command itself.
In my particular case I have set up a cron job on my home server that runs mysqldump to back up my website database on the VPS that I have with 1&1.
So basically the scenario is that my home PC is remotely backing up the MySQL database over port 3306.
This works correctly, but I started having nightmares while sleeping, thinking that maybe someone could listen on port 3306 and get all my data while I'm backing up (with mysqldump). I mean, from what I have understood, MySQL is not under SSL on port 3306, so anybody could potentially get the backup copy of the database?
I mean, would this scenario be possible:
My home PC starts the mysqldump task
My VPS at 1&1 prepares the SQL dump remotely
My home PC receives the dump locally from the remote server
Between point 2 and point 3, is it possible that someone gets a copy of my file?
Thanks in advance for the answers
Marcos
You should not expose port 3306 on your VPS host to the public internet. MySQL's unencrypted port is not secure.
If you're running mysqldump on your VPS host, and only transferring the resulting dump file to your PC, then you can do this securely.
If you can ssh to your VPS, you should be able to use scp too. This gives you the ability to transfer files securely.
Here's a FAQ article about using scp with 1&1. I found this by googling for "1&1 scp":
http://faq.1and1.co.uk/server/root_server/linux_recovery/9.html
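For example, a hedged sketch of pulling an existing dump file down to the PC over scp (user, host and paths are placeholders):
scp marcos@192.168.1.3:/home/marcos/backups/dump.sql.gz ~/backups/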
If you need to run mysqldump on your Home PC and connect remotely to MySQL on the VPS host, you have options:
Run mysqldump on the PC with SSL connection options.
Open a port-forwarding ssh tunnel, then run mysqldump on the PC connecting to the forwarded port (see the sketch after this list).
Run ssh to invoke mysqldump on the VPS, then capture output. See example in the accepted answer to this question: https://serverfault.com/questions/36467/temporary-ssh-tunnel-for-backup-purposes
Create a VPN and do anything you want because it's all encrypted.
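A minimal sketch of the tunnel option mentioned above (local port 3307, the VPS address, database name and credentials are placeholders):
ssh -N -L 3307:127.0.0.1:3306 marcos@192.168.1.3 &                           # forward local 3307 to MySQL on the VPS
mysqldump --protocol=TCP -h 127.0.0.1 -P 3307 -u dbuser -p mydb > dump.sql   # dump through the tunnel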
Re your comments of 10/11:
I need to execute the command from home PC to backup the VPS remotely.
I want to ... receive instead the backup file directly so in the VPS should be saved nothing.
Okay, here's what you can do, without exposing port 3306:
$ ssh marcos@192.168.1.3 'mysqldump ...options.. | gzip -c' > ~/dump.sql.gz
Notice the position of quotes in that command. You're executing on the VPS the command: mysqldump ...options.. | gzip -c. The stdout of that command is a gzipped stream of the dump. That stream is returned via ssh, and then > saves the output locally in the shell on your PC.
Re your comment of 10/13:
now I'm storing on the server an open text file that contains the credentials to access the MySQL server. I mean, if someone breaks into the server they will be able not just to damage the server content but also to damage and steal the MySQL database and information. Am I right?
If you use MySQL 5.6 you can use the new feature to store connection credentials in a semi-encrypted manner. See http://dev.mysql.com/doc/refman/5.6/en/mysql-config-editor.html
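A hedged sketch of that approach (the login-path name, user and database are placeholders):
mysql_config_editor set --login-path=backup --host=localhost --user=backupuser --password
mysqldump --login-path=backup mydb > dump.sql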
If you use MySQL 5.5 or earlier, then you're right, you should be careful to restrict the file permissions of my.cnf. Mode 600 should be enough (i.e. it's not an executable file).
But if someone breaks into your server, they may have broken in with root access, in which case nothing can restrict what files they read.
MySQL doesn't have enough security to block access if someone gains root access, so it's up to you to use other means to prevent breakins. Firewalls, etc.
Yes, it's possible, but you don't mention how you are going to fetch that data. Using ssh/scp (with a dedicated user for dumps, IP filtering, and authentication based on a private key with a key passphrase) is acceptable and, in my opinion, can be considered safe. Another fast and more secure way is to set up a VPN. Anything else is paranoid-level for personal use.
What is the Linux command to connect to another server using a host name and port number?
How can I connect to another server using only a host name and port number, and then check whether an existing process is running? The only way I see it working is to log in to the server and run the ps command. But is there a way to do it without logging in directly to the other server, connecting only with the host name and port number, and checking the running process?
If you just want to try an arbitrary connection to a given host/port combination, you could try one of nmap, telnet or nc (netcat).
Note that you can't necessarily determine whether or not a process is running remotely - something might be listening on that port but simply ignore anything it sees over it. To really be sure, you will need to run ps, netstat, etc. via ssh or similar.
If you want to use SSH from e.g. a script or, more generally, without typing in login information, then you will want to use public key authentication. Ubuntu has some good documentation on how to set this up, and it's very much applicable to other distributions as well: https://help.ubuntu.com/community/SSH/OpenSSH/Keys.
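A minimal sketch of that key setup, followed by a remote process check (user, host and process name are placeholders):
ssh-keygen -t rsa                                 # generate a key pair; accept the defaults
ssh-copy-id user@remote-host                      # install the public key on the server
ssh user@remote-host 'ps aux | grep [m]yprocess'  # now runs without a password prompt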
If you have no access to the server you're trying to list processes on at all, then I'm afraid there isn't a way to list running processes remotely (besides remote tools like nmap and so on, as mentioned earlier - you can always probe public ports without authentication [although you might make people angry if you do this to servers you don't own]). This is a feature, not a problem.
telnet connects to most services. With it you can ensure that the port is open and see the hello message (if any). nc is a more low-level alternative.
eri@eri-macro ~ $ telnet smtp.yandex.ru 25
Trying 87.250.250.38...
Connected to smtp.yandex.ru.
Escape character is '^]'.
220 smtp16.mail.yandex.net ESMTP (Want to use Yandex.Mail for your domain? Visit http://pdd.yandex.ru)
helo
501 5.5.4 HELO requires domain address.
HELO ya.ru
250 smtp16.mail.yandex.net
MAIL FROM: <someusername@somecompany.ru>
502 5.5.2 Syntax error, command unrecognized.
If it is a plain-text protocol you can talk to the service from the keyboard. If the connection is secured, try openssl.
openssl s_client -quiet -connect www.google.com:443
depth=1 /C=ZA/O=Thawte Consulting (Pty) Ltd./CN=Thawte SGC CA
verify error:num=20:unable to get local issuer certificate
verify return:0
GET /
<HTML><HEAD>
If the protocol is not known you may see a lot of gibberish, or just the Connected to ... message.
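If you only need to know whether the port is open at all, nc can do a quick probe (a hedged sketch; host and port are placeholders):
nc -vz host_name 21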
Try this :
ssh <YOUR_HOST_NAME> 'ps auxwww'
Like Dark Falcon said in the comments, you need a protocol to communicate with the server; a port alone is useless in this case.
By default on Unix (and Unix-like) servers, ssh is the way to go.
Remote shell with this command. The example cats a file on the remote machine:
rsh host port 'cat remotefile' >> localfile
host and port are self-explanatory
remotefile: name of a file in the home directory on the remote machine you are logging in to
localfile: name of the local file the output is appended to
Use monitoring software (like Nagios). It watches your processes, sensors, load and whatever else you configure it to watch. It continuously stores logs. It alerts you by email/SMS/Jabber if something fails. You can access it with a browser or via an HTTP API.
I have been through some basics about setting up SSH tunneling via, e.g., PuTTY.
One question: how to let the two SSH ends authenticate each other based on certificate?
For example, using SSH tunneling for remote VNC access...
VNC == SSH (A) ===== SSH (B) === VNC
I want A and B to authenticate each other. It is arguable that VNC could have its own password for protection, but that is not the point here. I could have many apps running on A and B that do not necessarily have user/password protection.
I checked the PuTTY config; there seems to be no option for using a certificate. Someone suggested stunnel, but I would like to see if this is doable using SSH directly. Thanks for any suggestions.
Any particular reason you need to use certificates, and not just ssh keys? The only reason I'm aware of is that it relieves the host administrator of managing a complex configuration of authorized_keys files on hosts that have a lot of users who log in.
OpenSSH introduced certificates in version 5.4, so make sure you're running at least that version on the server side. The client must support SSH certificates as well, and it is unclear to me at this moment whether PuTTY supports them. It does support ssh keys however, and unless you specifically need certificates, key-based authentication should be all you need.
Here is a good read on SSH certificates: http://blog.habets.pp.se/2011/07/OpenSSH-certificates
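For reference, a hedged sketch of how an OpenSSH certificate is issued and trusted (the CA key file names, the certificate identity and the principal are placeholders):
ssh-keygen -s ca_key -I alice_cert -n alice -V +52w id_rsa.pub   # CA signs the user's public key
# on the server, in sshd_config:
TrustedUserCAKeys /etc/ssh/ca_key.pub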
If you just need way to login without being prompted for a password, then just use ssh keys (which is what certificates use anyway).
You say this:
I want A and B to authenticate each other.
Whether you use keys or certificates, you get this already out of the ssh protocol itself. When the client connects to the server, it compares the host key to its local known_hosts file. If it's the first time you've ever connected to that server, it asks you whether you want to accept it. If the server's key has changed since you last logged in, you get the man-in-the-middle warning and, based on your client configuration, it either asks you whether it's OK to proceed or simply doesn't let you continue.
This is the process of the server authenticating itself to the client, which happens before the client attempts to authenticate to the server.
We are working on a solution that has the capability to perform SSH-based authentication. Please have a look at https://cecuring.com
Since we are gathering more users, you are free to submit new feature requests; we will collaborate with you in those cases.