GSSAPI - Windows Active Directory interoperability - error accepting context: Wrong principal in request - Linux

We are writing software that runs on both Windows and Linux, and we plan to use Windows Active Directory for authentication. I am struggling with the issues described below and would very much appreciate any help:
Domain name: CORP.COMPANY.COM
Test program running on one Linux machine: host1.corp.company.com
The test programs come from the gss-sample code in the krb5-1.11.3 source distribution.
The server will be named "gssapitest".
Based on "Step-by-Step Guide to Kerberos 5(krb5 1.0) Interoperability(from Microsoft)
,
First, create a user "host1" in AD to represent the host host1.corp.company.com (the Linux machine).
Use ktpass to generate the keytab (run from Windows):
ktpass /princ host/host1.corp.company.com@CORP.COMPANY.COM /mapuser host1 /pass hostpassword /out file1.keytab
Now in AD, create another domain user "gssapitest" to represent the test server program, and map the user similarly:
ktpass /princ gssapitest/host1.corp.company.com@CORP.COMPANY.COM /mapuser gssapitest /pass gssapitestpassword /out file2.keytab
Copy file1.keytab and file2.keytab to the Linux machine host1 and merge them into /etc/krb5.keytab.
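For reference, merging the two files with MIT ktutil typically looks like this (the /tmp paths are only illustrative):
ktutil
ktutil:  rkt /tmp/file1.keytab
ktutil:  rkt /tmp/file2.keytab
ktutil:  wkt /etc/krb5.keytab
ktutil:  quit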
On Linux, "ktutil" shows the content of /etc/krb5.keytab as follows:
slot  KVNO  Principal
   1     4  host/host1.corp.company.com@CORP.COMPANY.COM
   2     5  gssapitest/host1.corp.company.com@CORP.COMPANY.COM
On Windows, register the service principal names for the Linux server program (using "setspn") so that the result looks like the following (two entries, one with the short host name and the other with the fully qualified host name, for testing purposes; with only one entry, either one, the result was the same):
Registered ServicePrincipalNames for
CN=xxxx,CN=Users,DC=corp,DC=company,DC=com:
gssapitest/host1:2001
gssapitest/host1.corp.company.com:2001
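For reference, entries like those are added with commands of the following form (a sketch; -A simply adds the SPN to the named account):
setspn -A gssapitest/host1:2001 gssapitest
setspn -A gssapitest/host1.corp.company.com:2001 gssapitest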
Now I start the server this way:
gss-server -port 2001 gssapitest
and start the client from another terminal this way:
gss-client -port 2001 -user xxxx -pass xxxxpassword host1.corp.company.com gssapitest "abcd"
The error shown on the server side:
GSS-API error accepting context: Unspecified GSS failure. Minor code may provide more information
GSS-API error accepting context: Wrong principal in request
What could be the likely cause of this? I'd like to know whether the steps I outlined above are all necessary, and which ones are unneeded or incorrect.
(Note: I have tried logging in to the Linux machine with both a local user account and a domain account in CORP.COMPANY.COM; the result is the same error. Also, nslookup shows the correct IP-to-host mapping for the Linux machine.)

I would not include the port number when using setspn; I'd expect gssapitest/host1.corp.company.com, not gssapitest/host1.corp.company.com:2001.
In addition, use gssapitest@hostname as the service name in the call to gss-client:
gss-client -user xxx -pass xxx -port 2001 hostname gssapitest@hostname "test message"
You can use krb5 tracing to get much better logging of what's going on:
export KRB5_TRACE=/tmp/trace.client # and run the client
Similarly for the server.
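For example, on the server side (the trace file path is just an example):
export KRB5_TRACE=/tmp/trace.server
gss-server -port 2001 gssapitest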

I did some test runs, and in my case the problem seems to be this: after running "ktpass" and merging the output files into the krb5.keytab on the Linux machine, I made a change to my mapped user gssapitest (in "Active Directory Users and Computers", under the "Account" tab, I unchecked "Use DES encryption types for this account").
To fix the problem, I re-checked "Use DES encryption types for this account" in Active Directory, then went to the Linux machine and ran "kdestroy" before starting my server and client programs. Then it worked.
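If you suspect a similar keytab/KDC mismatch, one quick sanity check (a sketch, assuming the standard MIT krb5 client tools) is to compare the key version number (KVNO) stored in the keytab with the one the KDC currently reports:
klist -k /etc/krb5.keytab
kinit xxxx@CORP.COMPANY.COM
kvno gssapitest/host1.corp.company.com@CORP.COMPANY.COM
# if the KVNO reported by kvno differs from the one in the keytab,
# regenerate the keytab with ktpass and merge it again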
If anyone is having similar problems, you may want to look into this possible cause. Thanks.

Related

Error transferring files from mainframe to RedHat Linux using FTPS

I want to transfer a few files weekly from a mainframe to a Linux server running RedHat, using a batch (JCL) job and FTPS.
The Linux server is configured with vsftpd. Is it possible to send files from the mainframe to Linux using FTPS?
I am getting this error while transferring a file from the mainframe to Linux:
EZA1736I FTP
EZY2640I Using 'SYS1.TCPPARMS(FTPDATA)' for local site configuration parameters.
EZA1450I xxx FTP CS xxx
EZA1456I Connect to ?
EZA1736I host_name
EZA1554I Connecting to: host_name xxx.xxx.xxx.xxx port: 21.
220 (vsFTPd 2.0.5)
EZA1701I >>> AUTH TLS
234 Proceed with negotiation.
EZA2897I Authentication negotiation failed
EZA1534I *** Control connection with host_name dies.
EZA1457I You must first issue the 'OPEN' command
EZA1460I Command:
EZA1618I Unknown command: 'Atul'
EZA1619I For a list of the available commands, say HELP
EZA1460I Command:
EZA1736I Summer#123
EZA1618I Unknown command: 'Monsoon#123'
EZA1460I Command:
EZA1736I cd /home/Atul/
EZA1457I You must first issue the 'OPEN' command
From your log you seem to be able to set up an unsecured connection to the FTP server; that's good.
EZA2897I Authentication negotiation failed indicates that the TLS handshake did not complete successfully: either the partners could not find a common TLS version and/or cipher suite, or (the point I'd examine first) the certificate provided by the FTPS server isn't trusted by the client user. To be sure, you would have to capture and examine a TCP or TLS trace.
As a first step, I would check the certificate provided by the FTP server and compare it to the trusted certificates in your security manager. In the case of RACF, you would have to examine SITE certificates and/or certificates in the user's keyring.
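One quick way to see the certificate the vsftpd server presents, from any machine with OpenSSL that can reach it (host_name is the placeholder from the log above):
openssl s_client -starttls ftp -connect host_name:21 -showcerts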
Yes, sending from the mainframe to vsftpd using FTPS is certainly possible. Both the client (z/OS in this case) and the server (Linux in this case) need to agree on the encryption method to be used, and I believe that by default z/OS has to trust the server's certificate, which may involve importing the certificate bundle into a key ring that the batch job has access to. The job not having access to a keyring that trusts the chain for the server certificate would be my first guess.
I don't have experience with setting up RACF keyrings, but I can say that people successfully send us data every day from z/OS to our Linux server via FTPS.

How to get and use the cf ssh-code password

We are using the CF Diego API, version 2.89. Currently I am able to use it and see the vcap and app resources when running cf ssh myApp.
Now it's become harder :-)
I want to deploy App1 that will "talk" to App2 and have access to its file system (as is available at the command line when you run ls ...) via code (node.js). Is that possible?
I've found this lib, which provides the ability to connect over ssh via code, but I'm not sure what I should put for host, port, etc.
In the connect call I provided the password, which should be retrieved via code.
EDIT
Here is the snippet, completed to a runnable form with the ssh2 client (the lines around connect are reconstructed; host and credentials are from my environment):

const { Client } = require('ssh2');

const conn = new Client();
conn.on('ready', () => {
  console.log('Client :: ready'); // connected and authenticated
  conn.end();
}).connect({
  host: 'ssh.cf.mydomain.com',
  port: 2222,
  username: 'cf:181c32e2-7096-45b6-9ae6-1df4dbd74782/0',
  password: 'qG0Ztpu1Dh'
});
Now when I use cf ssh-code (to get the password) I see a lot of requests, which I tried to simulate via Postman without success.
Could someone assist? I need to get the password value somehow; if I don't provide it I get the following error:
SSH Error: All configured authentication methods failed
Btw, let's say that I cannot use the CF networking functionality or volume services, and I know that the container is ephemeral ...
What happens behind the scenes when you run cf ssh is documented here.
It obtains an SSH token; this is the same as running cf ssh-code, which just gets an auth code from UAA. If you run CF_TRACE=true cf ssh-code you can see exactly what it's doing behind the scenes to get that code.
You would then need an SSH client (probably a programmatic one) to connect using the following details:
port -> 2222
user -> cf:<app-guid>/<app-instance-number> (ex: cf:54cccad6-9bba-45c6-bb52-83f56d765ff4/0)
host -> ssh.system_domain (look at cf curl /v2/info if you're not sure)
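Put together, a manual test of that flow could look like this (a sketch only: it assumes the sshpass utility is installed, the app is named myApp, and the SSH proxy host from your snippet; cf ssh-code prints the one-time password):
sshpass -p "$(cf ssh-code)" ssh -p 2222 "cf:$(cf app myApp --guid)/0@ssh.cf.mydomain.com"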
Having said this, don't go this route. It's a bad idea. The file system for each app instance is ephemeral. Even if you're connecting from other app instances to share the local file system, you can still lose the contents of that file system pretty easily (cf restart) and for reasons possibly outside of your control (unexpected app crash, platform admin does a rolling upgrade, etc).
Instead store your files externally, perhaps on S3 or a similar service, or look at using Volume services.
I have exclusively worked with PCF, so please take my advice with a grain of salt given your Bluemix platform.
If App1 needs to look at files created by App2, what you need is a common resource.
You can inject an S3 resource as a CUPS (user-provided) service: create a service instance and bind it to both apps. That way both will read from / write to the same S3 endpoint.
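A minimal sketch of that wiring with the cf CLI (the service name and credential keys are placeholders; your S3 provider defines the actual keys):
cf create-user-provided-service my-s3 -p '{"endpoint":"...","access_key_id":"...","secret_access_key":"..."}'
cf bind-service App1 my-s3
cf bind-service App2 my-s3
cf restage App1 && cf restage App2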
A quick Google search for Bluemix S3 resource shows https://console.bluemix.net/catalog/infrastructure/cloud_object_storage
Version 1.11 of Pivotal Cloud Foundry comes with Volume Services.
Seems like Bluemix has a similar resource - https://console.bluemix.net/docs/containers/container_volumes_ov.html#container_volumes_ov
You may want to give that a try.

CUPS session setup failed with 'nt_status_logon_failure'

I am running CUPS on a Debian Linux machine. Using CUPS, I am sending print requests to a Windows XP machine. I have opened ports 445 and 139 and I am able to connect to the Windows machine. The printer is connected to the Windows machine.
I am sending the print request using the following command:
lp -E -d <printer_name> <file_name>
After sending it, I check the printer status using the following command:
lpstat -p <printer_name>
I get the error message below when I execute the above command.
unable to connect to cifs host will retry in 60 seconds..
When I checked the log, I found the following error message:
session setup failed: NT_STATUS_LOGON_FAILURE and NT_STATUS_BAD_NETWORK_NAME
The DeviceURI in /etc/cups/printers.conf looks like the following:
smb://username:password@ip_address_of_windows_machine/printer_Name
Also, if the password contains an '@' symbol, please let me know how to write the '@' in the DeviceURI (user:password@IP).
Unfortunately you don't provide enough detail about your specific setup, so I will try to take several potential problems into account and give hints on how to overcome them:
I.
Did you use the correct share name for your shared Windows printer?
To find out, use this command:
$ smbtree -U windowsusername
You might see something like the following output:
WORKGROUP
\\MURUGA-PC
\\MURUGA-PC\G
\\MURUGA-PC\Z
\\MURUGA-PC\Public
\\MURUGA-PC\print$ Printer Drivers
\\MURUGA-PC\EPSON Stylus CX8400 Series EPSON Stylus CX8400 Series
In other words: your printer's share name may contain spaces, but you cannot use spaces in the device URI for CUPS! What now?
Easy: either (1) rename the share on the Windows side, or (2) escape each space as %20:
smb://muruga:mysecretpassword@muruga-pc/EPSON%20Stylus%20CX8400%20Series
II.
Is your Windows XP machine by any chance using Kerberos authentication, for example because it is part of an Active Directory environment? Then you should refer to this document on cups.org:
Configuring CUPS to Use Kerberos
Kerberos authentication does not work with username/password; it uses 'tickets'.
III.
Otherwise, if your Windows XP machine is part of a "standard" domain, you may be more successful by ditching your device URI of smb://username:password@ip-address-of-windows/printer_name and using this instead:
smb://username:password@domain_name/windows_host/printer_Name
The username has to be the Windows user (with his/her password) who installed the printer on Windows!
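If you prefer not to edit /etc/cups/printers.conf by hand, the same URI can be set with lpadmin (a sketch; the printer name, credentials, and host names are placeholders):
lpadmin -p myprinter -E -v 'smb://winuser:winpass@domain_name/windows_host/printer_Name'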
IV.
Alternatively, you may have success using IPP to print to Windows (though XP needs an IPP-enabling extension installed, provided by Microsoft). Be aware that MS uses a non-standard syntax for its device URIs (using port 80 or 443), and its version of IPP is still 1.0 (which always remained in "draft" status and never made it into an official release by the IETF):
DeviceURI https://mywindowsprintserver/printers/printername/.printer
or
DeviceURI http://mywindowsprintserver/printers/printername/.printer
For username/password authentication to this printer, you need
AuthInfoRequired username,password in /etc/cups/printers.conf and
DefaultAuthType Basic in cupsd.conf.
To use Kerberos, you need
AuthInfoRequired Negotiate in /etc/cups/printers.conf and
DefaultAuthType Negotiate in cupsd.conf.
If the whole setup is in a household with a private LAN/WLAN, you may want to consider removing all access controls (first on the Windows print server side, then):
AuthInfoRequired None in /etc/cups/printers.conf and
DefaultAuthType None in cupsd.conf.
If your problem is that your password contains an '@' character, then try this:
smb://username:'p@ssword'@domain_name/windows_host/printer_Name
or
smb://username:p%40ssword@domain_name/windows_host/printer_Name

Why am I getting SSL_read errors and Rpc_client_frag_read errors when trying to Remote Desktop

I'm trying to set up a remote desktop session for monitoring specific systems at my place of work. I only have access to a Linux machine, and I need to connect via a terminal server gateway. I am using FreeRDP, with the following command to create the connection:
xfreerdp /d:** /u:***** /p:******* /g:******.************.*** /v:****.*********.***** /port:3389 /size:1920x1080
I have hidden all connection details per my supervisor's request; however, both he and I verified that the correct information was entered in each field.
When I send the connection through, I get the following error:
Connected to ******.************.***:443
Connected to ******.************.***:443
TS Gateway Connection Success
Got stub length 4 with flags 3 and called 7
Got stub length 4 with flags 3 and called 6
SSL_read: I/O error: connection reset by peer (104)
Rpc_client_frag_read: error reading header
Would anyone have any idea what I might be missing? I have even tried adding /sec:rdp to the command, and even that produced the same error.
Try RDP from a Windows system (or have someone else try from their system, since you don't have direct access to Windows). I know it won't solve your problem, but it may give you better information. I'm in a similar situation and got the same error message. I tried remmina instead of xfreerdp and got even less information than xfreerdp spits out.
From a Windows VM, at least I could tell when I got my domain\username and password right: it told me my account was not allowed RDP access to that server. I figure that means there are accounts that can RDP in, but mine is not among them. Along the way, though, I found that the remote was using a certificate from an untrusted authority, which was useful information in my case.
If your Linux is old or hasn't been updated, update it: your certificate store may be out of date. But it may also be that your company's Windows domain uses certificates that Linux doesn't know about. It could be a simple matter of you lacking the company-supplied cert (because they push it to all Windows machines on the domain, but your Linux machine doesn't get that "benefit").
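If a company-internal CA turns out to be the problem, on Debian-style systems you can usually add its certificate (in PEM format) to the system trust store like this (the file name is a placeholder):
sudo cp company-ca.crt /usr/local/share/ca-certificates/company-ca.crt
sudo update-ca-certificates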

Linux command to connect to another server using hostname and port number

What is the Linux command to connect to another server using a host name and port number?
How can I connect to another server using only a host name and port number, and then check whether a given process is running there? The only way I can see is to log in to the server and run the ps command. Is there a way to do it without logging in directly to the other server, connecting only with the host name and port number, to check the running process?
If you just want to try an arbitrary connection to a given host/port combination, you could try one of nmap, telnet, or nc (netcat), as in the sketch below.
Note that you can't necessarily determine remotely whether or not a process is running: something might be listening on that port but simply ignore anything it receives. To really be sure, you will need to run ps or netstat etc. via ssh.
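For example (host, port, and process name are placeholders):
nc -vz host1.example.com 2001                     # is anything listening on the port?
ssh user@host1.example.com 'pgrep -a myprocess'   # check for the process over ssh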
If you want to use SSH from e.g. a script or, more generally, without typing in login information, then you will want to use public key authentication. Ubuntu has some good documentation on how to set this up, and it's very much applicable to other distributions as well: https://help.ubuntu.com/community/SSH/OpenSSH/Keys.
If you have no access at all to the server whose processes you're trying to list, then I'm afraid there isn't a way to list running processes remotely (besides remote tools like nmap and so on, as mentioned earlier; you can always probe public ports without authentication, although you might make people angry if you do this to servers you don't own). This is a feature, not a problem.
telnet connects to most services. With it you can ensure that a port is open and see the hello message (if any). nc is more low-level.
eri@eri-macro ~ $ telnet smtp.yandex.ru 25
Trying 87.250.250.38...
Connected to smtp.yandex.ru.
Escape character is '^]'.
220 smtp16.mail.yandex.net ESMTP (Want to use Yandex.Mail for your domain? Visit http://pdd.yandex.ru)
helo
501 5.5.4 HELO requires domain address.
HELO ya.ru
250 smtp16.mail.yandex.net
MAIL FROM: <someusername@somecompany.ru>
502 5.5.2 Syntax error, command unrecognized.
If it is a plain-text protocol, you can talk to the service from the keyboard. If the connection is secured, try openssl:
openssl s_client -quiet -connect www.google.com:443
depth=1 /C=ZA/O=Thawte Consulting (Pty) Ltd./CN=Thawte SGC CA
verify error:num=20:unable to get local issuer certificate
verify return:0
GET /
<HTML><HEAD>
If the protocol is not known, you may see a lot of gibberish, or just a Connected to ... message.
Try this:
ssh <YOUR_HOST_NAME> 'ps auxwww'
Like Dark Falcon said in the comments, you need a protocol to communicate with the server; a port alone is useless in this case.
By default on unix (and unix-like) servers, ssh is the way to go.
Remote shell with this command; the example cats a file on the remote machine:
rsh host 'cat remotefile' >> localfile
host: the remote machine to log in to (rsh uses its own well-known port, 514)
remotefile: the name of a file in the home directory on the remote machine
localfile: the name of the local file the output is appended to
Use monitoring software (like Nagios). It watches your processes, sensors, load, and whatever else you configure; it continuously stores logs and alerts you by email/SMS/Jabber if something fails. You can access it with a browser or via an HTTP API.
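For instance, Nagios's standard check_tcp plugin can also be run by hand to probe a host/port combination (the plugin path varies by distribution; host and port are placeholders):
/usr/lib/nagios/plugins/check_tcp -H host1.example.com -p 2001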
