How can I enable security in Presto?

I am new to PrestoDB. Going through the Presto documentation helped me, but I would like some advice from the Presto experts and some help with the configuration setup, so I am posting this question. The different options are:
1) Coordinator Kerberos Authentication (coordinator-only change)
To use this method I have to enable /etc/krb5.conf. For this, do I have to create a krb5.conf file under the Presto etc directory ({user/presto/etc/krb5.conf}), or do I have to use the system /etc/krb5.conf (see the sketch after these options)? Correct me if I am wrong.
2) LDAP Authentication (coordinator-only change)
The Presto client sends a username and password to the coordinator, and the coordinator validates these credentials using an external LDAP service.
3) Java Keystores and Truststores
Access to the Presto coordinator must be through HTTPS when using Kerberos and LDAP authentication. The Presto coordinator uses a Java Keystore file for its TLS configuration. These keys are generated using keytool and stored in a Java Keystore file for the Presto coordinator.
The alias in the keytool command line should match the principal that the Presto coordinator will use.
Finally
4) Built-in System Access Control
A system access control plugin enforces authorization at a global level, before any connector level authorization. You can either use one of the built-in plugins in Presto or provide your own by following the guidelines in System Access Control.
As the last one is easy, I could directly add a JSON file to provide access, but that doesn't seem like a good idea at larger scale.
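For reference, here is roughly what I understand options 1 and 3 would look like together in the coordinator's etc/config.properties, with the krb5.conf path pointed at whichever file should be used (the property names are taken from the Presto security documentation; the paths, service name and passwords are just placeholders):

# etc/config.properties on the coordinator (illustrative values only)
http-server.authentication.type=KERBEROS
http.server.authentication.krb5.service-name=presto
http.server.authentication.krb5.keytab=/etc/presto/presto.keytab
# point this at the krb5.conf Presto should use, e.g. the system /etc/krb5.conf
http.authentication.krb5.config=/etc/krb5.conf

# HTTPS/keystore settings for option 3
# (keystore generated beforehand with: keytool -genkeypair -alias presto -keyalg RSA -keystore keystore.jks)
http-server.https.enabled=true
http-server.https.port=7778
http-server.https.keystore.path=/etc/presto/keystore.jks
http-server.https.keystore.key=changeit

# for option 2 (LDAP), the authentication type and LDAP properties would be used instead:
# http-server.authentication.type=LDAP
# authentication.ldap.url=ldaps://ldap-server.example.com:636
# authentication.ldap.user-bind-pattern=uid=${USER},ou=people,dc=example,dc=com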

If you are just starting with Presto, it would be good to try out some basic authentication first. As you can see, Kerberos and LDAP have their own external dependencies, so I would recommend trying out file-based authentication, which is very easy to implement.
https://www.qubole.com/blog/simplifying-user-access-in-presto-with-file-based-authentication/
For authorization, you can look at the Hive connector security options like read-only, sql-standard, etc.:
https://prestodb.io/docs/current/connector/hive-security.html
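As a rough sketch of what this could look like (the file-based authenticator property names below follow a Trino-style file authenticator, so treat them as placeholders; the exact names depend on the plugin described in the blog post), the coordinator gets a password-authenticator.properties pointing at a password file, and the Hive catalog gets a security mode:

# etc/password-authenticator.properties (illustrative)
password-authenticator.name=file
file.password-file=/etc/presto/password.db

# etc/catalog/hive.properties - authorization mode for the Hive connector
# (documented values: legacy, read-only, sql-standard, file)
hive.security=sql-standard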

Related

How to use encrypt Cassandra password credentials in config in Quarkus?

Problem Statement
The requirement is to set an encrypted password in the Cassandra configuration so that Quarkus automatically decrypts the password at runtime (like Jasypt).
Example
quarkus.cassandra.auth.username=john
quarkus.cassandra.auth.password=s3cr3t --> instead of this
quarkus.cassandra.auth.password=ENC(1k9u) --> something like this
The recommendation is to use Vault as a ConfigSource. Secrets can be stored in Vault, and Quarkus will read them like any other configuration source.
Please check: https://quarkus.io/guides/vault
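A rough sketch of what that could look like, based on the Vault guide (the Vault path, the cassandra-password key and the userpass credentials are made up for illustration):

# application.properties (illustrative values)
quarkus.vault.url=http://localhost:8200
quarkus.vault.authentication.userpass.username=bob
quarkus.vault.authentication.userpass.password=sinclair

# keys stored under this KV path are exposed as ordinary config properties
quarkus.vault.secret-config-kv-path=myapps/cassandra

# if the Vault path contains a key named cassandra-password,
# it can be referenced like any other config property
quarkus.cassandra.auth.username=john
quarkus.cassandra.auth.password=${cassandra-password}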
I've reached out to the team who maintain the Quarkus extension for Apache Cassandra and got confirmation that unfortunately it's not possible to do that. Cheers!

NiFi authentication

Is there any way we can add security measures for NiFi, like a username and password for the NiFi UI page? Also, is there any way to persist the configuration made in the NiFi UI page?
Need some suggestions on this issue.
All user authentication and authorization mechanisms are only available once TLS is enabled. This was an intentional design decision, because entering sensitive user credentials over a plaintext HTTP connection is unsafe and exposes the user to many opportunities to have those credentials stolen (credentials which, unfortunately, they may reuse for other services).
After enabling TLS for the NiFi application, LDAP, Kerberos, OpenID Connect, Knox, and client certificates are all available as authentication mechanisms.
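For reference, enabling TLS manually mostly comes down to pointing nifi.properties at a keystore and truststore, roughly like this (paths and passwords are placeholders, and recent NiFi releases generate equivalent settings out of the box):

# conf/nifi.properties (illustrative values only)
nifi.web.https.host=0.0.0.0
nifi.web.https.port=8443
nifi.security.keystore=./conf/keystore.p12
nifi.security.keystoreType=PKCS12
nifi.security.keystorePasswd=changeit
nifi.security.keyPasswd=changeit
nifi.security.truststore=./conf/truststore.p12
nifi.security.truststoreType=PKCS12
nifi.security.truststorePasswd=changeit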
With the default settings you can point a web browser at
https://127.0.0.1:8443/nifi
The default installation generates a random username and password, writing the generated values to the application log. The application log is located in logs/nifi-app.log under the installation directory. The log file will contain lines with Generated Username [USERNAME] and Generated Password [PASSWORD] indicating the credentials needed for access. Search the application log for those lines and record the generated values in a secure location.
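For example, something like this pulls the generated credentials out of the log (assuming the default installation layout):

$ grep Generated logs/nifi-app.log
... Generated Username [USERNAME]
... Generated Password [PASSWORD]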
The following command can be used to change the username and password:
$ ./bin/nifi.sh set-single-user-credentials <username> <password>

Can we use the Common Name in TLS certificates for authorization in Cassandra

I have defined roles and permissions in the Cassandra tables as described in the documentation.
I am using client-side authentication to provide access to the Cassandra DB. However, I want to use the CN (Common Name) from the client-side TLS certificate, map it to a user/role, and provide authorization.
Is there any configuration in Cassandra that will authorize based on the CN?
There are multiple clients and I want to ensure that only clients with proper authorization can access the DB.
I do not want the application code to pass the username but use the CN instead.
No, it's not possible in the existing versions of Apache Cassandra.

What is the reason for a Kerberos keytab file when setting up SSH authentication on a server?

I haven't really had much experience with Kerberos, but I am trying to set up SSH authentication with AD on one of my servers using sssd. I have followed the instructions in the sssd documentation here and got it working, but I am struggling to understand why I need a keytab file to set this up.
I've been doing a bit of reading about Kerberos lately, and it appears you only need to create a keytab file on the server when the server needs to authenticate to AD without user interaction, or when you need to implement SSO (when a user requests a ticket for that service).
I simply want my users to enter their username/password when logging in via SSH and have sssd authenticate the user against AD and create a TGT for them. The funny thing is, even when I don't set up sssd and only set up the Kerberos side, I can run kinit and I get a ticket!
So my question is this: can I set up SSH authentication using sssd without generating a keytab file on the server? If not, why not?
Your question in the subject line, "What is the reason for a Kerberos keytab file when setting up SSH authentication on a server?", boils down to a one-line answer: it allows for Kerberos single sign-on authentication to the directory server by decrypting the inbound Kerberos service ticket to "tell" who the user is. As for your other question, "Can I set up SSH authentication using sssd without generating a keytab file on the server?", the answer is yes, you can. But you will be prompted for a username and password whenever you connect to the SSH service, unless you choose to cache the password in whatever SSH utility you are using to connect. Caching the password in such a way, though, is not considered "single sign-on".
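If you want to see this concretely, you can inspect the host keytab the sssd guide has you create and verify that the server can authenticate non-interactively with it, using the standard MIT Kerberos tools (the principal name below is just an example):

$ klist -ke /etc/krb5.keytab                      # list the principals/keys stored in the keytab
$ kinit -k host/myserver.example.com@EXAMPLE.COM  # authenticate with the keytab, no password prompt
$ klist                                           # the resulting TGT now sits in the credential cache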
For additional reference, you can read more about my article on Kerberos keytabs on Microsoft Technet: Kerberos Keytabs – Explained. I frequently go back and edit it based on questions I see here in this forum.

Service Ticket in Kerberos - Hadoop security

I am trying to secure my Hadoop cluster using Kerberos. I am able to generate a TGT using the kinit command, but beyond that I am stuck.
1) I would like to know what a 'service ticket' actually is in practice (not just as a description). With which command/code can we make use of a service ticket?
2) What is the use of the '.keytab' file and the '.keystore' file?
Hadoop-Kerberos story
1. The user sends an authentication request to the KDC using the kinit command.
2. The KDC sends back an encrypted ticket.
3. The user decrypts the ticket by providing his password.
4. Now authenticated, the user sends a request for a service ticket.
5. The KDC validates the ticket and sends back a service ticket.
6. The user presents the service ticket to hdfs@KERBEROS.com.
7. hdfs@KERBEROS.com decrypts the ticket, validating the user's identity.
In the 4th step, 'requesting a service ticket', what does it actually mean? To get a TGT, we use the 'kinit' command. Similarly, what is the procedure/method to get a service ticket?
My Process in detail:
LDAP : ActiveDirectory
Kerberos : Installed in Ubuntu
Hadoop Cluster : Configured in Ubuntu machines with one master and one slave
Ubuntu username : labuser
Realm in Ubuntu : KERBEROS.COM
The plan is to provide Hadoop security with Kerberos and Active Directory.
I generated a TGT (using the kinit command) on the Kerberos server machine for the users present in Active Directory.
Next, to integrate Kerberos with the Ubuntu Hadoop cluster, I did the following:
1) Command to create the principal: addprinc -randkey namenode/labuser@KERBEROS.COM
2) Command to create the keytab: xst -norandkey -k namenode.service.keytab namenode/labuser@KERBEROS.COM (or) ktadd -k namenode.service.keytab namenode/labuser@KERBEROS.COM
3) Added Kerberos-related properties to the Hadoop configuration files (roughly as sketched below).
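(For context, the Kerberos-related properties are along these lines; this is an illustrative snippet, not the exact contents of my files:)

<!-- core-site.xml -->
<property>
  <name>hadoop.security.authentication</name>
  <value>kerberos</value>
</property>

<!-- hdfs-site.xml -->
<property>
  <name>dfs.namenode.kerberos.principal</name>
  <value>namenode/labuser@KERBEROS.COM</value>
</property>
<property>
  <name>dfs.namenode.keytab.file</name>
  <value>/home/labuser/hadoopC/etc/hadoop/namenode.service.keytab</value>
</property>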
Starting the Hadoop cluster, we can see that the login is successful for all the services (NameNode, DataNode, ResourceManager and NodeManager).
Log info: INFO org.apache.hadoop.security.UserGroupInformation: Login successful for user dn/labuser@KERBEROS.COM using keytab file /home/labuser/hadoopC/etc/hadoop/dn.service.keytab
Yet the Hadoop cluster does not start, failing with:
DataNode(java.lang.RuntimeException: Cannot start secure cluster without privileged resources)
NameNode(java.io.FileNotFoundException: /home/labuser/.keystore (No such file or directory))
Please suggest whether the above Kerberos process requires any change. If yes, please explain why.
The Kerberos API will get the service ticket automatically if the protocol for the service is Kerberos-enabled.
The server needs the secret key corresponding to hdfs@KERBEROS.com in a keytab file that it can read, in order to decrypt any incoming connections. Generally, you create this using the kadmin command and install the secret in the keytab file using the appropriate utility (it's different for different versions of the Kerberos source code).
Generally, once you have kinit'd as a client, you will never need to run another explicit Kerberos command to obtain service tickets, PROVIDED all the servers and clients are configured correctly. That's kind of the whole point of Kerberos.
If you really want to obtain a service ticket for testing, you can use the kvno command.
http://web.mit.edu/kerberos/krb5-1.13/doc/user/user_commands/kvno.html
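For example, after a kinit you could run something like this (the service principal name is illustrative):

$ kinit labuser@KERBEROS.COM
$ kvno hdfs/namenode.kerberos.com@KERBEROS.COM
$ klist    # the hdfs/... service ticket now appears in the credential cache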
I have found a solution for:
NameNode(java.io.FileNotFoundException: /SOME/PATH/.keystore (No such file or directory))
Try configuring the HTTP_ONLY policy in hdfs-site.xml:
<property>
<name>dfs.http.policy</name>
<value>HTTP_ONLY</value>
</property>
If you need HTTPS, you need to additionally generate certificates and configure a keystore.
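For example, a self-signed keystore at the path the NameNode is looking for could be generated with something like the following (alias, passwords and DN are placeholders), after which Hadoop's ssl-server.xml would be pointed at it:

$ keytool -genkeypair -alias namenode -keyalg RSA -keysize 2048 \
    -dname "CN=namenode.kerberos.com, OU=hadoop, O=lab, C=US" \
    -keystore /home/labuser/.keystore -storepass changeit -validity 365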
