Azure Elasticsearch config file and how to add security

I installed the Azure plugin for Elasticsearch according to this tutorial:
Azure Elasticsearch, which uses the template from here:
github.com/Azure/azure-quickstart-templates/tree/master/elasticsearch
After it is deployed, I am able to connect to Kibana from the tutorial link above. If I would like to add security to the Azure Elasticsearch deployment, how would that be possible?
Furthermore, how do I access elasticsearch.yml to customise the configuration further?
I tried to access the VMs, but there are only two IPs I can reach from the Azure portal: the jumpbox and the Kibana public IP.
I searched the /etc/ folder but didn't find an elasticsearch folder after I remoted into the server.
Please see this photo for the IPs in the Azure portal.
I am also very new to ARM (Azure Resource Manager), and this deployment consists of multiple server nodes connected together. It would be great if someone could explain how Elasticsearch is installed here. As far as I know, the master node assigns tasks to the data nodes after a request is handled at the client node.
The Elasticsearch version is v2.3.1.
Please help.

Once you use the quickstart to install your cluster (a single node, it sounds like), you are in complete control.
In the case of the template, the jumpbox exists as an access point to pivot into the rest of the cluster. This way you avoid ever giving your Elasticsearch instances a public IP address, thereby reducing the chance of a drive-by attack on your cluster -- because it's never exposed! For what it's worth, this is a pretty common strategy for operational isolation.
So, to get started, you should be able to SSH into the jumpbox, and from there use the private address of each Elasticsearch VM to SSH into it:
SSH into the jumpbox
SSH into the rest of the private VMs from the jumpbox
Once you have done that, you should be able to access the elasticsearch.yml file.
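As a minimal sketch of that hop, assuming a hypothetical admin username of azureuser, a hypothetical private address of 10.0.0.10 for a data node, and a package-based install (adjust all three to your deployment):
ssh azureuser@<jumpbox-public-ip>                      # land on the jumpbox first
ssh azureuser@10.0.0.10                                # hop to the Elasticsearch VM over its private IP
sudo find / -name elasticsearch.yml 2>/dev/null        # locate the config if it is not in the usual place
sudo nano /etc/elasticsearch/elasticsearch.yml         # typical location for package-based installs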
How do you add security? The only official way to install security in Elasticsearch is to use the Shield plugin. This allows you to encrypt communication to/from Elasticsearch, as well as provide authentication.
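As a rough sketch of what that install looks like on a 2.3.x node, Shield ships as a plugin alongside the license plugin, and users are managed with the esusers tool; the commands below assume the default /usr/share/elasticsearch install path, so verify them against the Shield documentation for your exact version:
cd /usr/share/elasticsearch
sudo bin/plugin install license
sudo bin/plugin install shield
sudo bin/shield/esusers useradd es_admin -r admin      # prompts for a password; creates a user with the admin role
sudo service elasticsearch restart                     # restart each node after installing the plugins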
Elastic, the company behind Elasticsearch and Kibana, has its own Azure Quick Start for Elasticsearch that does most of what the template you used does, but it also adds security. It may prove easier to delete the old cluster and start a new one from there.

Related

Is it possible to connect to 3rd party database using Azure Logic Apps?

I am new to Cloud and looking to cut down cost on Azure. I already have a database on the hostinger platform and would like to use it for the python script that I want to run on the Azure Logic Apps platform. Is it possible to do this or does Azure prevent any such connections? Do I need to create any connector on Azure for this purpose? I have no idea of running python script on Azure. If this is possible then it can be a great cost cutting measure for me.
One workaround is to use Remote MySQL, found under Databases on the Hostinger platform.
Type the IP address of your remote server in the IP (IPv4 or IPv6) field on the Remote MySQL page, or check the Any Host box to allow connections from any IP.
Then choose the database you wish to access remotely and click Create when you're finished.
Note that a MySQL user must use their MySQL server hostname for remote connections; the hostname can be found at the top of the same page.
You can now use this connection to build your own Logic Apps custom connector and reuse that connector for further database operations.
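To sanity-check the setup, you can confirm that the remote connection works with the mysql command-line client from any machine whose IP you allowed (the hostname, user, and database below are placeholders):
mysql -h your-hostinger-mysql-hostname -u your_db_user -p your_db_name   # prompts for the password; a successful login confirms Remote MySQL is configured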
would like to use it for the python script that I want to run on the Azure Logic Apps platform.
Depending on your requirements, you can use a variety of connectors for this. For example, alongside the custom connector that reaches your Hostinger database, you can have the Logic App call an Azure Function and write that function in Python.
For more information, you can refer to this example.
REFERENCES:
How to Allow Remote Connections to MySQL Database (hostinger.in)

Secure communication between existing Azure App Service and Azure VM cluster

We have an application running in Azure that consists of the following:
A Web App front end, which talks to…
A WebApi running as a Web App as well, which can (as well as a couple other services) talk to…
A Cloud Service load-balanced set of VMs, which host an Elasticsearch cluster.
Additionally, we have the scenario where devs whitelist their IPs so that their localhost version of the API can hit the VMs as well.
We have locked down our Elasticsearch VMs by adding ACLs to the exposed endpoint. I whitelisted the outbound IPs that were listed on my App Services. I was under the mistaken impression that these were unique to my API. It turns out that they are shared across the scale unit in Azure. Other services running in the same scale unit could, if they knew the endpoint, access the data exposed on the endpoint in my cluster. I need to lock this down, and I am trying to find the easiest way. These are the things I am looking at, and I would appreciate advice and/or redirection.
Elastic Shield: Not being considered. This is a product by Elastic that is designed to secure ES. This is ideal, but at the moment it is out of scope (due to the cost and overhead).
Elastic plugins: Not being considered. The main plugins (such as Jetty) appear to be abandoned.
Azure VPN: I originally tried to set this up, but ran into too many difficulties. The ACLs seemed to give me what I needed without much difficulty. I am not sure if I can set this up now. The things I don't know are: I don't think I can move existing VMs into a new VPN; I think you have to recreate the VMs in that VPN from the get-go. Could I move my Web App into the VPN? How does that work? This would probably break my developer scenario, as the localhost API would not be able to access the VPN, right?
Add a certificate to requests: It would be ideal if I could have requests require a cert or a header token. I assume to do this I would need to create a proxy that would run on the VMs and do the validation before forwarding the request on to my Elasticsearch.
Anything else? Is there another option I have not thought of?
Thanks!
~john
You can create a point-to-site VPN connecting your Web App with your IaaS VMs. This is the best solution because you will be able to use only internal IPs on your IaaS side.
The easiest way to do that in the Azure portal is to create the Web App and then create a new VPN and VNet using the "Setup" option at "Your Web App" -> Settings -> Networking -> VNET Integration -> Setup -> Create New Virtual Network.
After that, create your IaaS VMs inside this new VNet.
You can also create an ARM template for the Web App, IaaS VMs, VPN, and everything else that you need. Take a look at my ARM template that creates PHP + MySQL using a Web App and a MariaDB cluster connected by VPN: https://github.com/juliosene/azure-webapp-php-mariadb
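For reference, a rough present-day CLI equivalent of those portal steps might look like the sketch below, assuming the az CLI is available and using placeholder resource names (the original answer predates these commands, so treat this only as an illustration):
az network vnet create -g my-rg -n my-vnet --address-prefix 10.1.0.0/16 --subnet-name apps --subnet-prefix 10.1.1.0/24
az webapp vnet-integration add -g my-rg -n my-webapp --vnet my-vnet --subnet apps   # joins the Web App to the VNet so it can reach the VMs on private IPs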

Link containers in Azure Container Service with Mesos & Marathon

I'm trying to deploy a simple WordPress example (WordPress & MySQL DB) on Microsoft's new Azure Container Service with Mesos & Marathon as the underlying orchestration platform. I already ran this on the services offered by Google (Kubernetes) and Amazon (ECS) and thought it would be an easy task on ACS as well.
I have my Mesos cluster deployed and everything is up and running. Deploying the MySQL container isn't a problem either, but when I deploy my WordPress container I can't get a connection to my MySQL container. I think this might be because MySQL runs on a different Mesos agent?
What I tried so far:
Using the Mesos DNS to get ahold of the MySQL container host (for now I don't really care which container I get ahold of). I set the WORDPRESS_DB_HOST environment var to mysql.marathon.mesos and specified the host of MySQL container as suggested here.
I created a new rule for the Agent Load Balancer and a probe for port 3306 in Azure itself. This worked, but it seems like a very complicated way to achieve something so simple. In Kubernetes and ECS, links can simply be defined by using the container name as the hostname.
Another question that came up: what is the difference in Marathon between setting the port in the Port Mappings section and in the Optional Settings section? (See screenshot attached.)
Update: If I ssh into the master node, then I can dig mysql.marathon.mesos; however, I can't get a connection to work from within another container (in my case the WordPress container).
So there are essentially two questions here: one around stateful services on Marathon, the other around port management. Let me first clarify that neither has anything to do with Azure or ACS in the first place; they are both Marathon-related.
Q1: Stateful services
Depending on your requirements (development/testing or prod) you can either use Marathon's persistent volumes feature (simple but no automatic failover/HA for the data) or, since you are on Azure, a robust solution like I showed here (essentially mounting a file share).
Q2: Ports
The port mapping you see in the Marathon UI screen shot is only relevant if you launch a Docker image and want to explicitly map container ports to host ports in BRIDGE mode, see the docs for details.
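To make the DNS and port pieces concrete, here is a minimal sketch of a Marathon app definition for the WordPress container, assuming Mesos-DNS is running, the MySQL app is named mysql, and that app exposes host port 3306 (the image tag, memory, and password are placeholders):
{
  "id": "/wordpress",
  "cpus": 0.5,
  "mem": 512,
  "instances": 1,
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "wordpress:4.4",
      "network": "BRIDGE",
      "portMappings": [
        { "containerPort": 80, "hostPort": 0, "protocol": "tcp" }
      ]
    }
  },
  "env": {
    "WORDPRESS_DB_HOST": "mysql.marathon.mesos:3306",
    "WORDPRESS_DB_PASSWORD": "changeme"
  }
}
Posting it to Marathon (e.g. curl -X POST http://<marathon-host>:8080/v2/apps -H "Content-Type: application/json" -d @wordpress.json) would deploy it; hostPort 0 lets Marathon pick a free host port for the BRIDGE mapping.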

Setting up OrientDB image on Microsoft Azure platform

I am trying to set up an OrientDB instance on Azure. I followed the procedure documented on the OrientDB website (OrientDB Community Edition 2.0.10). I was able to set up the instance as described. After setting it up, all I could do was ssh to the instance using the username db as mentioned in the document (well, I could have used any name, but for simplicity I followed the doc word for word). I couldn't find information on user root or user orientdb (and a few other users and groups) that were part of this image. Additional users/groups are listed in /etc/passwd, but I am unable to get access to them, and I am unable to find the documentation.
I tried to connect to the OrientDB web interface at http://10.0.0.4:2480 (hosted on the internal network interface within the Azure region) and it doesn't even allow me to create a db or log in. It keeps asking for a username and password, which I don't know (not documented).
Does anyone know where I can find additional documentation/help on this image?
I can always set up a plain Linux OS, install Java, set up OrientDB, and configure it to use Azure storage (mounted as local disk storage). As much as possible, though, I would like to use the image provided by the OrientDB team, as I assume it comes with the recommended configuration.
I want to host/run a clustered OrientDB instance on Azure. Any help is appreciated.
You'll need to ssh to the virtual machine using the username and password that you specified when you created the Azure instance.
To obtain the credentials for Studio, Pabzt is right: take a look at the users section of orientdb-server-config.xml and look for the root user. Its password will be auto-generated, and you can change it.
Pabzt, regarding accessing Studio, you might make sure the OrientDB instance is still running:
sudo systemctl status orientdb
Usually, ports 22 and 2480 are open by default in the OrientDB Azure image. So, it's strange you can't access it.
I had the same problem today. You could connect using ssh. The default password and username can be found in the "orientdb-server-config.xml":
/opt/orientdb/config/orientdb-server-config.xml
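For reference, the credentials live in the <users> block of that file, roughly like the snippet below (the root password shown here is a placeholder; the image generates its own value on first start):
<users>
  <user name="root" password="GENERATED_PASSWORD" resources="*"/>
  <user name="guest" password="guest" resources="connect,server.listDatabases,server.dblist"/>
</users>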
The only thing I can't do is access OrientDB Studio. While I can connect to the public IP address of the VM using ssh, I can't open OrientDB Studio on port 2480 using the same public IP address. I tried adding an inbound security rule in the network security settings for the OrientDB VM, but that didn't help. Still can't connect.
EDIT 22.10.2015 21:00
But I'm sure the password and username are working (from "orientdb-server-config.xml"), because I tried using the binary protocol on port 2424 with the official .NET driver for OrientDB in a client application written in C#, and it worked. I was able to connect and create a new database. I was also able to access the default database, "GratefulDeadConcerts". I used the same public IP address that I use to connect via ssh.
I compared the OrientDB VM created by the image from the Azure marketplace with my other VMs and couldn't find the option to set Endpoints (Azure VM settings). All my other Azure VMs have this option, and I have always used the Endpoint settings to open ports on my virtual machines. It seems that I can only use the endpoints for ssh and port 2424. Maybe those are the ones which are open by default. Any ideas?
EDIT 23.10.2015 14:00
Okay, I found the solution; the OrientDB image from the Azure marketplace works. I just added a new security rule that allows connections from any source port (*) to port 2480 (OrientDB Studio), and now it works.
I had this problem and realized I had missed something. On Azure go to All Resources, click on the Network Security Group for your server, and add an Inbound Security Rule allowing TCP port 2480. I didn't have to add anything using iptables on the server even though 2480 is not listed there. I hope this helps someone else.
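If you prefer the command line over the portal, a rough equivalent with the current az CLI would be something like the rule below (the resource group and NSG names are placeholders):
az network nsg rule create -g my-rg --nsg-name my-orientdb-nsg -n allow-orientdb-studio --priority 1010 --direction Inbound --access Allow --protocol Tcp --destination-port-ranges 2480   # opens OrientDB Studio (port 2480) to inbound traffic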
The endpoints, by default, are set to 22 and 2480. It is strange that you had to open 2480 to any source to get incoming traffic through, but I'm glad you got it to work!
The root in the orientdb-server-config.xml is just for OrientDB and is not related to the system root account.
You should be able to sudo as the system username that you specified when you created the Azure VM. If you can sudo commands you should be able to change the system root password as well.

Managing inter instance access on EC2

We are in the process of setting up our IT infrastructure on Amazon EC2.
Assume a setup along the lines of:
X production servers
Y staging servers
Log collation and Monitoring Server
Build Server
Obviously we have a need to have various servers talk to each other. A new build needs to be scp'd over to a staging server. The log collator needs to pull logs from production servers. We are quickly realizing we are running into trouble managing access keys. Each server has its own key pair and possibly its own security group. We are ending up copying *.pem files over from server to server, making a bit of a mockery of security. The build server has the access keys of the staging servers in order to connect via ssh and push a new build. The staging servers similarly have the access keys of the production instances (gulp!).
I did some extensive searching on the net but couldn't really find anyone talking about a sensible way to manage this issue. How are people with a setup similar to ours handling it? We know our current way of working is wrong. The question is: what is the right way?
Appreciate your help!
Thanks
[Update]
Our situation is complicated by the fact that at least the build server needs to be accessible from an external server (specifically, github). We are using Jenkins and the post commit hook needs a publicly accessible URL. The bastion approach suggested by #rook fails in this situation.
A very good method of handling access to a collection of EC2 instances is using a Bastion Host.
All machines you use on EC2 should disallow SSH access from the open internet, except for the bastion host. Create a new security policy called "Bastion Host", and only allow port 22 incoming from the bastion to all other EC2 instances. All keys used by your EC2 collection are housed on the bastion host. Each user has their own account on the bastion host, and they should authenticate to the bastion using a password-protected key file. Once they log in, they should have access to whatever keys they need to do their job. When someone is fired, you remove their user account from the bastion. If a user copies keys off the bastion, it won't matter, because those keys can't be used unless the user is first logged into the bastion.
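As a sketch of the day-to-day workflow this enables, each engineer's local ~/.ssh/config can route everything through the bastion (the hostnames, usernames, key file, and private IP below are placeholders, and ProxyJump assumes a reasonably recent OpenSSH):
Host bastion
    HostName bastion.example.com
    User alice
    IdentityFile ~/.ssh/alice-bastion.pem
Host staging-web1
    HostName 10.0.1.21
    User deploy
    ProxyJump bastion
With this in place, "ssh staging-web1" authenticates to the bastion first and then hops to the private instance in one step.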
Create two sets of key pairs, one for your staging servers and one for your production servers. You can give your developers the staging keys and keep the production keys private.
I would put the new builds onto S3 and have a Perl script running on the boxes to pull the latest code from your S3 buckets and install it on the respective servers. This way, you don't have to manually scp all the builds over every time. You can also automate this process using some sort of continuous build tool that builds and drops the artifacts into your S3 buckets. Hope this helps.
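As a minimal sketch of that pull step, using the AWS CLI instead of Perl (the bucket name, object key, application name, and install path are placeholders; the instance needs an IAM role or credentials allowing s3:GetObject):
aws s3 cp s3://my-build-bucket/app/latest.tar.gz /tmp/latest.tar.gz   # fetch the newest build artifact
tar -xzf /tmp/latest.tar.gz -C /opt/myapp                             # unpack it into the install directory
sudo service myapp restart                                            # restart so the new build is picked up
Run from cron or triggered by your build tool, this replaces the manual scp step entirely.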
