We are trying to build a setup in which a server submits jobs on behalf of different users to the Livy server via the REST API. We have set up a Kerberos server to authenticate against Livy. However, we want to prevent users from accessing other users' data, the filesystem, and the network.
My question, then, is: how secure is Livy? Users can submit arbitrary code to run on Livy, which gives them the ability to access the filesystem of the host the Livy server resides on. Even if we run Livy as a separate Unix user with very few filesystem permissions, that still seems dangerous to me: users could potentially read the keytab on the Livy server, or inject malware and run it.
I know that each session created spawns its own JVM, so one session lives in one JVM and cannot see another session's data without the corresponding Kerberos ticket. But could I change the security settings of that JVM so it can only access specific paths and specific IP addresses? Would that require changing Livy's source code?
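(For reference, the JVM can enforce this kind of restriction through a SecurityManager policy file rather than through Livy's source; a minimal sketch, where the file path and address are placeholders, and note that in practice Spark needs many more permissions, so expect to iterate:)
// hypothetical livy-session.policy, activated with:
//   -Djava.security.manager -Djava.security.policy=livy-session.policy
grant {
    // filesystem access restricted to one scratch directory (placeholder path)
    permission java.io.FilePermission "/srv/livy/scratch/-", "read,write";
    // network access restricted to one host:port (placeholder address)
    permission java.net.SocketPermission "10.0.0.5:8020", "connect,resolve";
};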
And in the case of using HDFS with Active Directory to secure the data, so that users need a Kerberos credential to access their files, how could I manage multiple principals on one server to get this working?
My conf file is as follows:
livy.environment production
livy.impersonation.enabled true
livy.server.csrf_protection.enabled true
livy.server.port 8999
livy.server.session.timeout 3600000
livy.server.auth.kerberos.keytab /home/harun/Documents/incubator-livy/keytabs/new.keytab
livy.server.auth.kerberos.principal HTTP/livyserver.local@EXAMPLE.COM
livy.server.auth.type kerberos
#livy.server.launch.kerberos.keytab /home/harun/Documents/incubator-livy/conf/livy.headless.keytab
#livy.server.launch.kerberos.principal livy@EXAMPLE.COM
livy.server.access_control.enabled = true
livy.server.access_control.users = livy
livy.superusers=livy
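(For context on the impersonation setting above: the usual pattern is a single Livy principal plus a per-session proxyUser supplied by the submitting server; a sketch of a session-creation request, where the user name is a placeholder:)
POST /sessions
{"kind": "pyspark", "proxyUser": "alice"}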
PS: Does enabling launch.kerberos provide additional security to protect the keytab?
Any help with any of these questions is very much appreciated. Thanks in advance.
I have an Elasticsearch installation (v7.3.2). Is it possible to secure it retrospectively? This link states that a password can only be set "during the initial configuration of the Elasticsearch". Basically, I want consumers of the RESTful API to have to provide a password going forward.
The elastic bootstrap password is used to initialize the internal/reserved users used by the components or features of the Elastic Stack (Kibana, Logstash, Beats, monitoring, ...).
If you want to secure the API, you need to create users/roles for your scenario on top.
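For example, a read-only role and a user for your API consumers can be created via the security API; a sketch, where the index pattern, role name, user name, and password are placeholders:
PUT /_security/role/app_read_only
{"indices": [{"names": ["app-*"], "privileges": ["read"]}]}

PUT /_security/user/app_consumer
{"password": "change-me", "roles": ["app_read_only"]}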
Please use TLS in your cluster when handling passwords, and don't expose the cluster directly, for security reasons.
Here is all the information regarding securing a cluster, including some tutorials: https://www.elastic.co/guide/en/elasticsearch/reference/7.3/secure-cluster.html
EDIT: Added links as requested. Feel free to raise a new question here at SO if you're facing serious problems!
Here you can find a complete guide to installing and securing Elasticsearch.
Basically, the bootstrap password is used initially to set up the built-in Elasticsearch users (like "elastic" and "kibana"). Once this is done, you won't be able to access Elasticsearch anonymously, only as one of the built-in users, e.g. "elastic".
Then you can use the "elastic" user to create additional users (with their own passwords) and roles (e.g. to access specific indices in read-only mode).
As @ibexit wrote, it's highly recommended to secure your cluster and not expose it directly (use a proxy server, secured with SSL).
We are going to change the Cassandra setting from authenticator: AllowAllAuthenticator to authenticator: PasswordAuthenticator
to enable role-based authentication. There will be two roles:
admin, which is a superuser
read-only, which is only allowed to read.
I would like to provide backward compatibility for users of the Cassandra cluster. More specifically,
many users use
a shell script that uses cqlsh
the Python cassandra package
the PHP cassandra package
to only read data from Cassandra. Currently they don't specify any username or password. Therefore,
I would like to make the read-only role a "default" role of sorts: if no username and password are provided,
then the role is automatically set to read-only, so users can read data and clients don't need to change their code.
Is there a way to do this? I'm currently having trouble with the following two parts:
The default user is cassandra if no role/user is specified in cqlsh, and I did not find a way to change the default user/role.
For the default cassandra user, I still have to set a password.
Any suggestions would be appreciated! Thanks in advance.
I come from an Oracle background, where I've done sqlplus "/as sysdba" for years. I like it because the O/S authenticates me. Now, there is something similar in Cassandra, but it isn't secure. Basically, in your home directory there is a hidden subdirectory called ".cassandra". In that directory there is a file called "cqlshrc" (if there isn't, create one), so ~/.cassandra/cqlshrc. In that file you can add authentication information that allows someone to log in by simply typing "cqlsh" without anything else (unless you're connecting remotely, in which case you also need "host" and "port"). The cqlshrc file has, among other things, an authentication section that looks like this:
[authentication]
username = <your_user_name>
password = <your_password>
So you can simply put your desired username and password in that file and connect without supplying credentials (you can also run "cqlsh -u your_user_name" and it will find your password in the cqlshrc file).
You can see a few obvious issues here:
1) The password is in clear text
2) If you change the password, you need to change it in the cqlshrc file as well
I do not recommend you use the "cassandra" user for ANYTHING. In fact, I'd drop it. The reason is that the cassandra user does everything at CL=QUORUM. We found this out when investigating huge I/O requests coming from OpsCenter and our backup tool (as you can see, we use DSE). They were all using the cassandra user and pounding on the node(s) that held the cassandra authentication information. CL=QUORUM is apparently baked into the code - kinda dumb. Anyway, the above is one way to have users log in as a specific user without providing credentials, making it pretty easy to switch.
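A sketch of the role setup in CQL (role names and passwords are placeholders; run this as a superuser):
-- create a dedicated superuser, then lock down the default one
CREATE ROLE admin WITH SUPERUSER = true AND LOGIN = true AND PASSWORD = 'admin-secret';
CREATE ROLE read_only WITH LOGIN = true AND PASSWORD = 'ro-secret';
GRANT SELECT ON ALL KEYSPACES TO read_only;
-- optionally disable the default cassandra user instead of dropping it
ALTER ROLE cassandra WITH LOGIN = false;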
Hope that helps
-Jim
The question may sound odd, but I have a worst-case scenario.
My application server is at http://10.10.10.10/app (call it app-server) and the Apache HTTP server is at http://some.dns.com/app (call it http-server). The two run on different systems.
I know the app-server shouldn't be directly accessible publicly, but let's assume it is. Shibboleth is installed on the http-server, securing the path http://some.dns.com/app/secure, and one servlet is mapped to read attributes from the /secure path.
Suppose someone manages to create a fake Apache server (call it fake-http-server) that also points to the app-server. This fake-http-server then has direct access to the /secure path, and it can manually send Shibboleth-like attributes and log in to the system without any protection.
My question is: is there a mechanism in Shibboleth by which I can check the Shibboleth session in my application, not only in the HTTP layer?
The mod_shib Apache module sets environment variables by default. These variables cannot be spoofed by a proxying Apache server.
From the docs:
The safest mechanism, and the default for servers that allow for it,
is the use of environment variables. The term is somewhat generic
because environment variables don't necessarily always imply the
actual process environment in the traditional sense, since there's
often no separate process. It really refers to a set of controlled
data elements that the web server supplies to applications and that
cannot be manipulated in any way from outside the web server.
Specifically, the client has no say in them.
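In a Java servlet running behind Apache (e.g. proxied over AJP), these server variables typically surface as request attributes, which client-supplied headers cannot overwrite; a minimal sketch, where the attribute id "eppn" is an assumption that depends on your attribute-map.xml:
import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class SecureServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        // Set by the web server via mod_shib; never taken from the client.
        Object eppn = req.getAttribute("eppn");
        if (eppn == null) {
            resp.sendError(HttpServletResponse.SC_FORBIDDEN, "No Shibboleth session");
            return;
        }
        resp.getWriter().println("Authenticated as " + eppn);
    }
}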
If you don't trust the Apache webserver, you can parse the SAML assertion in your code and validate the signatures in the assertion using the certificate provided by the Identity Provider (IdP) making the SAML assertion. But checking signatures is difficult and you need to deal with cases like key rotation and how to handle new certificates being used by the IdP. Shibboleth handles these very difficult and important tasks for you.
I want to securely allow a web application to verify user credentials by checking the user's entered info against /etc/shadow (unless there is some other solution). Is there any safe way to do this? It wouldn't be too hard to run it as root and have it check the file using the designated hash algorithm and salt, but running a web application as root defeats the protections around the hashes and severely reduces the security of the system.
So is there any utility or better way for a web application to verify my Unix credentials, or a safe way to access /etc/shadow?
Normally, the authentication would be handled by the web server (container-managed authentication). For example, in Tomcat, this would authenticate against an Active Directory:
<Realm className="org.apache.catalina.realm.JNDIRealm"
       debug="99"
       connectionName="CN=Our Users,OU=Accounts,DC=domainds,DC=com"
       connectionPassword="S0mePassWord"
       connectionURL="ldap://192.168.1.0:389"
       alternateURL="ldap://192.168.1.1:389"
       referrals="follow"
       userBase="OU=User Accounts,DC=domainds,DC=com"
       userSearch="(sAMAccountName={0})"
       userSubtree="true"
       roleBase="OU=User Roles,DC=domainds,DC=com"
       roleName="cn"
       roleSearch="(member={0})"
       roleSubtree="true"
/>
To do what you ask is not impossible, but be careful not to pass an unencrypted password as a command-line parameter, because those can be visible in some forms of "ps" output.
A script (you would probably need more than one) to do what you ask might expose you to security issues, especially if you are not aware of all the intricacies of code injection into scripts. (Same idea as SQL injection, except for scripts.)
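An alternative that avoids reading /etc/shadow directly is to delegate the check to PAM; a sketch using the third-party libpam4j library (the service name "login" is an assumption, and the process still needs OS permission to perform the check, e.g. membership in the shadow group):
import org.jvnet.libpam.PAM;
import org.jvnet.libpam.PAMException;

public class PamCheck {
    // Returns true if PAM accepts the credentials for the given service.
    public static boolean verify(String user, String password) {
        try {
            new PAM("login").authenticate(user, password); // throws on failure
            return true;
        } catch (PAMException e) {
            return false;
        }
    }
}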
I am making a web app with a login page (using Facelets with JSF 2.0) that checks credentials before redirecting to an isLoggedIn or error page. I have access to the server
the app is deployed on, and Tomcat is used as the container. I would like to log IP addresses that are clearly attempting brute-force attacks. My idea so far is the following, but I am not sure how to get hold of the offending IP, and even if I could, it looks a bit clumsy, so what is a standard/good way of doing this? I would prefer not to use
any other implementation of JSF.
During login, write log messages (from beans) using a logging framework to an app-specific
logfile in Tomcat's log folder, recording failed logins with the offending IP and time (see the sketch after this list).
Create a script that reads the log and checks whether any IPs have a high rate of failed
attempts, then add those IPs to hosts.deny.
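A minimal sketch of the first step, getting the client IP inside a JSF managed bean (the class and method names are illustrative):
import java.util.logging.Logger;
import javax.faces.context.FacesContext;
import javax.servlet.http.HttpServletRequest;

public class LoginAudit {
    private static final Logger LOG = Logger.getLogger(LoginAudit.class.getName());

    // Call this from the bean's login action when authentication fails.
    public static void logFailedLogin(String username) {
        HttpServletRequest request = (HttpServletRequest) FacesContext
                .getCurrentInstance().getExternalContext().getRequest();
        String ip = request.getRemoteAddr(); // behind a proxy, inspect X-Forwarded-For instead
        LOG.warning("Failed login for '" + username + "' from " + ip);
    }
}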
If you're using realm authentication, check Tomcat's LockOutRealm. It does not write to hosts.deny, but it can also prevent brute-force attacks.
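A sketch for server.xml (the nested realm and the thresholds are placeholders; LockOutRealm wraps whatever realm actually authenticates):
<Realm className="org.apache.catalina.realm.LockOutRealm"
       failureCount="5" lockOutTime="300">
  <Realm className="org.apache.catalina.realm.UserDatabaseRealm"
         resourceName="UserDatabase"/>
</Realm>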