I'm currently looking to configure a Kerberos V realm and am wondering about the risk of having systems in my environment that do not use FQDNs (fully qualified domain names).
Most of what my searching turns up says to use FQDNs, but doesn't explain what the risks are of not using them.
It's not exactly a risk in the security sense, but it will create much confusion in configuring various clients and servers.
Kerberos depends on the client and server agreeing on the service name to be used, by some process that is outside the Kerberos protocol. In other words, if I want to use Kerberos telnet to some host, I need to know in advance what service principal that host is using in its /etc/krb5.keytab. There is no way within the Kerberos protocol for the client to learn this.
By default, Kerberos clients usually do a gethostbyname, then a gethostbyaddr on the IP address returned, and then use that hostname to construct a service principal. This is where you will run into problems. You might try turning off DNS canonicalization altogether (it's an option in krb5.conf).
There is also the problem of the default realm being derived from the hostname, but that's a much simpler one to solve using values in /etc/krb5.conf.
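A minimal sketch of what those krb5.conf settings could look like (the realm and domain names here are placeholders, and dns_canonicalize_hostname requires a reasonably recent MIT krb5):

    [libdefaults]
        default_realm = EXAMPLE.COM
        # Stop the client from canonicalizing hostnames via forward/reverse DNS
        dns_canonicalize_hostname = false
        rdns = false

    [domain_realm]
        # Map hostnames (including non-FQDN names) to a realm explicitly
        .example.com = EXAMPLE.COM
        example.com = EXAMPLE.COM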
We are using Snowflake Enterprise Edition.
One of our client systems wants to access our Snowflake account to consume the data.
We have created a user and password and shared them with the client to connect to Snowflake.
Now we want to add extra security to this user by whitelisting their DNS name, so that the username created for this client cannot be misused.
Is there any way we can whitelist a DNS name in Enterprise Edition?
I read that the VPC version has this feature by setting up a firewall behind Snowflake.
We can achieve this using IP mapping in Enterprise, but the client uses dynamic IPs which keep changing.
Regards,
Srinivas.
The network policy feature is IP address or range only, so you won't be able to do name resolution with it currently (i.e., that would be a feature request). I don't think there's one perfect solution to your request.
If the changing IPs are all part of a CIDR range, you could use that; alternatively, a proxy solution would give you a stable IP. A VPN could be another option, with the VPN-issued IP addresses included in the Snowflake network policy.
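For the CIDR approach, here is a rough sketch of what that could look like in SQL (the policy name, user name, and range are placeholders):

    -- Allow only a specific range and attach the policy to the client's user
    CREATE NETWORK POLICY client_policy ALLOWED_IP_LIST = ('203.0.113.0/24');
    ALTER USER client_user SET NETWORK_POLICY = 'client_policy';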
I'm sure there are other methods too, and it's worth a discussion with your security team for more ideas. I welcome others to comment with their ideas as well.
In the world of microservices, endpoints should not (must not) be hardcoded. One of the best ways to achieve this is to have a DNS server and let each microservice register itself while starting. This way, whenever microservice A wants to communicate with microservice B, it just asks DNS for the endpoints where B currently listens.
What I do not understand is: how do microservices know where the DNS server lives?
Basically, DNS is just a 'special' service, and I can have one or multiple instances of it, right? So I should not hardcode its endpoint either, or should I? And let's say I do: what if the DNS instance is moved to a different location? Do I have to manually change its location in the configuration?
Does anyone happen to know how to design this? (Or can anyone point me to a document where this is explained? Although there is plenty of information about microservices and DNS, I cannot find this particular answer anywhere. Maybe it's just too trivial and I am the only one who does not get it.)
Manual setup of DNS is possible, as stated in the other answers, but I would recommend using an infrastructure that supports service discovery in all respects. For example, Kubernetes has built-in DNS support and makes it very easy to expose a Service that can consist of any number of Pods.
An infrastructure technology like Kubernetes will also make many other aspects of the microservices architectural style easier to implement, including high availability and scalability.
Please see the official docs for some more information.
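As a sketch of what that looks like in practice (the names are made up for illustration), a Service manifest like the one below makes the backing Pods resolvable inside the cluster at a stable DNS name such as service-b.default.svc.cluster.local, so callers never need to know individual Pod addresses:

    apiVersion: v1
    kind: Service
    metadata:
      name: service-b          # becomes resolvable as service-b.<namespace>.svc.cluster.local
    spec:
      selector:
        app: service-b         # routes to any Pod labelled app=service-b
      ports:
        - port: 80             # port the DNS name answers on
          targetPort: 8080     # port the Pods actually listen on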
DHCP solves this problem. When a host boots, it sends a broadcast DHCP message. The DHCP server responds with many values, one of which is the location of the DNS servers.
In the case of microservices, the host OS (or container host) will be configured for DNS via DHCP. The microservice code then uses the OS DNS functions to resolve addresses.
https://en.m.wikipedia.org/wiki/Dynamic_Host_Configuration_Protocol
You can use your local network to discover services, via DHCP and the like. But that requires that all services are already "registered" within that DNS server.
Microservices can find each other via service discovery, server-side or client-side. If you choose client-side service discovery, you can use tools like Consul, which provides a bunch of great features. One of them is a DNS endpoint which allows queries via SRV records with <serviceName>.service.consul domain names.
Consul has its own DNS endpoint; you can configure your services to use it (usually on port 8600 locally, as Consul agents run locally).
But you can also configure an actual DNS server to forward queries to Consul, so that you can easily mix service discovery driven by Consul with manually maintained entries in a BIND instance or similar.
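As a rough illustration (the service name "web" is made up), you can query the local agent's DNS endpoint directly, or forward the .consul domain to it from BIND:

    # Ask the local Consul agent for SRV records of a hypothetical "web" service
    dig @127.0.0.1 -p 8600 web.service.consul SRV

    // named.conf fragment: hand anything under .consul to the local agent
    zone "consul" {
        type forward;
        forward only;
        forwarders { 127.0.0.1 port 8600; };
    };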
A known-hostname solution: the fixed part would be the service domain name, for instance xservice.com. You can query this host using standard DNS tools (e.g., dig in your shell).
Finally, in the DNS zone for xservice.com you then add an SRV record with further details (see the example below).
An SRV record lists all the service details, including:
the symbolic service name;
the canonical hostname of the machine providing the service;
the TCP (or UDP) port on which the service is available.
There is other info as well; please see Wikipedia for the complete list.
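A sketch of what such a record could look like in a zone file (the service label, port, and target host are placeholders):

    ; _service._proto.name       TTL   class SRV priority weight port  target
    _api._tcp.xservice.com.      3600  IN    SRV  10       5      8080  node1.xservice.com.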
Please keep in mind this is a somewhat static solution. If you are looking for a more dynamic one, then Oswin's answer might be a better fit :-)
I just installed the DataStax Cassandra cluster.
I have a question regarding security groups and how to limit access.
Currently, there are no security groups on the VNet or on any of the VMs, so everyone can connect to the cluster.
The problem starts when I try to set a security group on the subnet. This is because the HTTP communication of the Cassandra nodes (I think) uses the public IP and not the internal IP. I get an error in OpsCenter that the HTTP connection is down.
The question is: how can I restrict access to the cluster (to a specific IP), while still allowing all the Cassandra nodes to communicate with each other?
It's good practice to exercise security when running inside any public cloud, whether it's Azure, GCE, AWS, etc. Enabling internode SSL is a very good idea because this will secure the internode gossip communications. Then you should also introduce internal authentication (at the very least) so you require a user/password to log in to cqlsh. I would also recommend using client-to-node SSL; 1-way should be sufficient for most cases.
I'm not so sure about Azure, but I know with AWS and GCE the instances will only have a local, internally routed IP (usually in the 10.0.0.0/8 private range) and the public IP will be via NAT. You would normally use the public IP as the broadcast_address, especially if you are running across different availability zones where the internal IP does not route. You may also be running a client application which connects via the public IP, so you'd want to set the broadcast_rpc_address as public too. Both of these are found in cassandra.yaml. The listen_address and rpc_address are both IPs that the node will bind to, so they have to be locally available (i.e. you can't bind a process to an IP that isn't configured on an interface on the node).
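A sketch of the relevant cassandra.yaml address settings, using placeholder addresses (10.0.0.5 as the internal interface, 52.0.0.10 as the NATed public IP):

    listen_address: 10.0.0.5            # local IP the node binds to for gossip
    broadcast_address: 52.0.0.10        # IP other nodes use to reach this node
    rpc_address: 10.0.0.5               # local IP the node binds to for client traffic
    broadcast_rpc_address: 52.0.0.10    # IP clients are told to connect to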
Summary
Use internode SSL
Use client to node SSL
Use internal authentication at the very minimum (LDAP and Kerberos are also supported)
Useful docs
I highly recommend following the documentation here. Introducing security can be a bit tricky if you hit snags (whatever the application). I always start off by making sure the cluster is running OK with no security in place, then introduce one thing at a time, test and verify, and then introduce the next thing. Don't configure everything at once!
Firewall ports
Client to node SSL - note that require_client_auth should be set to false for 1-way SSL.
Node to node SSL
Preparing SSL certificates
Unified authentication (internal, LDAP, Kerberos etc)
Note that when generating SSL keys and certs for node-to-node SSL, typically you'd just generate one pair and use it across all the nodes. Otherwise, if you introduce a new node you'll have to import the new cert into all nodes, which isn't really scalable. In my experience working with organisations running large clusters, this is how they manage things. Client applications may well use the same key, or at least one of their own.
Further info / reading
2-way SSL is supported, but it's not as common as 1-way. It is typically a bit more complex and is switched on with require_client_auth: true in cassandra.yaml.
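For reference, a sketch of the encryption sections in cassandra.yaml (keystore/truststore paths and passwords are placeholders; exact option names can vary slightly between versions):

    server_encryption_options:
      internode_encryption: all          # encrypt all node-to-node traffic
      keystore: /etc/cassandra/conf/keystore.node
      keystore_password: changeit
      truststore: /etc/cassandra/conf/truststore.node
      truststore_password: changeit

    client_encryption_options:
      enabled: true
      keystore: /etc/cassandra/conf/keystore.node
      keystore_password: changeit
      require_client_auth: false         # false = 1-way SSL; true = 2-way (mutual) SSL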
If you're using OpsCenter with SSL, the docs (below) will cover things. Note that essentially it's in two places:
SSL between the OpsCenter agents and the cluster (same as client-to-node SSL above)
SSL between OpsCenter and the Agents
OpsCenter SSL configuration
I hope this helps you achieve what you need!
I have to prove my SolrCloud is secure.
From my understanding of what I am reading, I can secure the Solr instances talking to each other via basic authentication and SSL, which is great: it's secure, it works.
However, I can't see anything that will allow me to secure Zookeeper - or am I mistaken? Is there anything in an open Zookeeper that will allow a malicious user on my internal network to "hack" my SolrCloud, or is it the case that Zookeeper doesn't have anything that needs to be hidden?
Regarding securing ZooKeeper, you may want to check the "ZooKeeper access control using ACLs" link.
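As a rough sketch of what that looks like with the stock zkCli.sh (the username, password, and znode path are placeholders), you can attach a digest ACL so only an authenticated user can modify the Solr data:

    addauth digest solradmin:secret
    setAcl /solr auth:solradmin:secret:cdrwa
    getAcl /solr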
What we do at Measured Search for customers using our Solr-as-a-Service platform is allow them to restrict access to Zookeeper with IP filtering. They can either specify a specific IP address or a CIDR (range) that can have access to Zookeeper.
http://docs.measuredsearch.com/security/
That way, they can secure their Solr instances independently of Zookeeper.
Windows Service Bus 1.0 supports DNS-registered namespaces using New-SbNamespace -AddressingScheme DNSRegistered.
New-SbNamespace command
My Scenario:
All machines on same domain (cromwell.local)
2 Compute Nodes
1 SQL Node on separate server
2 namespaces (NamespaceA & NamespaceB for example)
Should the DNS entries (I'm leaning towards CNAMEs - not a DNS guru) each point to a compute node? That doesn't seem to square with the whole gateway/redirect situation.
It doesn't matter what kind of entry, as long as you have a host name that maps to an IP (or set of IPs, if you're using a clustered install). The host name can be a simple name (e.g. my-server) or a fully qualified domain name (e.g. my-server.mydomain.com). What's important is that the name can be resolved by both parties, and that you pass the same host name to the server when creating the namespace.
One important thing to consider is that the hostname you use must match the CN of the server's SSL certificate, to avoid auth issues (due to CN name matching). If you're using a default install on a domain-joined machine, you should use a hostname with the same domain name (since in default installs on a domain the server uses a *.yourdomain cert). For all the other scenarios (workgroup machines, a hostname that doesn't match the domain) you'll need to provide your own cert. This decision will impact the namespaces you'll be able to have on the server (since all of them will need to match the certificate CN somehow), so weigh your options well.
Based on the scenario you describe, I recommend you do the following:
Your DNS name should point to the IPs of both compute nodes (I'm assuming these are the machines running Service Bus Server). Besides the DNS redirection, this will also give you DNS-based load balancing.
You can only have one namespace per DNS name, so when creating NamespaceA you need to pass the CNAME you created in the first step. If you need more namespaces, you'll need to create more CNAMEs (which could be a problem with your certificate, depending on the hostname/domain names you're picking). A sketch of the command is below.
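As a hedged sketch (assuming a CNAME such as namespacea.cromwell.local has been created, and that I'm remembering the cmdlet parameters correctly; the manage user is a placeholder):

    # Create a DNS-registered namespace bound to the CNAME above
    New-SBNamespace -Name NamespaceA -AddressingScheme DNSRegistered `
        -DnsEntry namespacea.cromwell.local -ManageUsers cromwell\sbadmin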
PS: Service Bus Server doesn't really support a 2-node configuration. You should either go with one node for simplicity, or three if you want a highly available server.