Is there a simple way on EKS to use pgadmin from local machine to access RDS instance? - amazon-rds

On Amazon EKS, I have attached an RDS instance with private access and I'm able to access it fine from my EKS cluster. Now, for debugging, I'd like to examine the db using pgadmin. What's the simplest way to do this?

This assumes your RDS instance is connected to your cluster, but is not externally accessible.
On your EKS cluster that has access to the RDS instance, add a pgadmin pod by running:
kubectl run pgadmin --image dpage/pgadmin4 --env="PGADMIN_DEFAULT_EMAIL=admin@admin.com" --env="PGADMIN_DEFAULT_PASSWORD=logmein"
Now forward the port for this pgadmin pod:
kubectl port-forward pgadmin 8080:80
Now you can open http://localhost:8080 for full access to your RDS data. Use the username and password you set above to log in to pgadmin.
When you're done, don't forget to delete the pod:
kubectl delete pod pgadmin
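If you'd rather keep pgAdmin on your local machine instead of running it in the cluster, a similar trick is to tunnel straight to the RDS endpoint through a throwaway socat relay pod. A sketch, assuming the `alpine/socat` image and with `RDS_ENDPOINT` as a placeholder for your instance's endpoint hostname:

```shell
# Run a disposable socat relay inside the cluster that forwards to RDS.
# RDS_ENDPOINT is a placeholder for your instance's endpoint hostname.
kubectl run rds-tunnel --image=alpine/socat --restart=Never -- \
  tcp-listen:5432,fork,reuseaddr tcp-connect:RDS_ENDPOINT:5432

# Forward a local port to the relay pod.
kubectl port-forward pod/rds-tunnel 5432:5432

# Point your local pgAdmin (or psql) at localhost:5432; clean up when done.
kubectl delete pod rds-tunnel
```

This works because the relay pod runs inside the VPC that already has network access to the RDS instance, while `port-forward` bridges it to your machine.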

Related

How to connect/communicate Pod to DocumentDb which is outside eks cluster but within same VPC

I want to deploy my full-stack application using AWS EKS, with the backend pod connected to a database (MongoDB-compatible, hosted on an AWS managed service) outside of the cluster. If the EKS cluster and the database are in the same VPC, how should I configure the pod to connect to the external database within that VPC?
We're going to need a bit more detail, but see if this blog gives you an idea of how to accomplish this.
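As a concrete starting point, one common pattern for this is to wrap the external database endpoint in an ExternalName Service so pods reach it through a stable in-cluster DNS name. A sketch, with a placeholder service name and endpoint hostname:

```shell
# Sketch: expose the external DocumentDB endpoint under an in-cluster name.
# The endpoint hostname below is a placeholder for your cluster's endpoint.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: docdb
spec:
  type: ExternalName
  externalName: my-docdb.cluster-abc123.us-east-1.docdb.amazonaws.com
EOF
```

Pods can then use `docdb` as the hostname in their connection string, as long as the database's security group allows the MongoDB port (27017 by default) from the cluster's nodes.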

Why telnet and nc command report connection works in azure kubernetes pod while it shouldn't

I have an Azure AKS Kubernetes cluster. I created a pod with an Ubuntu container from the Ubuntu image, and several other pods from Java/.NET Dockerfiles.
When I enter any of the pods (including the Ubuntu one) and run a telnet/nc command against a remote server/port to validate the connection, the strange thing is that no matter which remote IP and port I choose, the connection is always reported as successful, even when that IP/port should not work.
Here is a snapshot of the command I executed: from the image you can see I'm telnetting to 1.1.1.1 on port 1111. I can try any other IP and port number, and it always reports that the connection succeeded. I tried all the other pods in the AKS cluster, and they behave the same. I also tried re-creating the AKS cluster with CNI networking instead of the default kubenet; still the same. Could anyone help me with this? Thanks a lot in advance.
I figured out the root cause of this issue: I had installed Istio as a service mesh, and it turns out this is expected behavior by design, per this link: https://github.com/istio/istio/issues/36540
However, although this is by design in Istio, I'm still very interested in how to easily figure out whether a remote IP/port TCP connection works from a pod with the Istio sidecar enabled.
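One way to get a real answer, sketched here with a placeholder pod name: since with passthrough traffic the sidecar accepts the TCP handshake itself, the outcome of the actual upstream connection shows up in Envoy's access log rather than in the telnet/nc result:

```shell
# After running the telnet/nc attempt, inspect the sidecar's access log.
kubectl logs $POD_NAME -c istio-proxy | tail -n 20
# A response flag such as UF (upstream connection failure) or UH
# (no healthy upstream) in the matching log line means the real remote
# end was unreachable, even though nc reported success.
```

This assumes access logging is enabled for the mesh; the response flags are standard Envoy access-log fields.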

unable to access DB pod External IP from application

I've created two pods on top of an Azure Kubernetes cluster:
1) Application
2) MS SQL server
Both pods are exposed via the Azure load balancer and both have external IPs. I am unable to use the external IP in my application config file, even though I can connect to that SQL Server from anywhere else. For some reason I am unable to telnet to the DB's external IP from the application container;
the connection times out. But I can ping/telnet the DB's cluster IP, so I tried using the cluster IP in my config file to check whether the connection succeeds, but no luck.
Could someone help me with this?
As Suresh said, we should not use the public IP address to connect them.
We can refer to this article to create an application and a database, then connect the front end to the back end using a service.
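As a minimal sketch of that pattern (pod and service names below are placeholders), give the DB pod its own ClusterIP Service and have the front end use the Service's DNS name rather than any IP:

```shell
# Expose the SQL Server pod internally only; no public IP involved.
kubectl expose pod mssql --name=mssql-svc --port=1433 --type=ClusterIP
# The application's connection string then refers to the stable DNS name:
#   Server=mssql-svc,1433;User Id=sa;Password=...
```

Unlike a raw cluster IP, the Service name survives pod restarts and cluster re-creation, which avoids the class of problem described in this question.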
This issue was fixed another way, but running the application and DB as separate services is still a nightmare in Azure Container Service (Kubernetes).
1) I combined the app and DB in the same container and set the DB connection string to "localhost" or "localhost,1433" in my application config file.
2) Created a Docker image with the above setup.
3) Created the pod.
4) Exposed the pod with two listening ports: kubectl expose pods "xxx" --port=80,1433 --type=LoadBalancer
5) I can now access the DB on port 1433.
In the above setup, we plan to keep the container in an auto-scaled environment with persistent volume storage.
We are also planning scheduled backups of the container, so we do not want to lose the DB data.
Does anybody have other thoughts on the major factors we need to consider in the above setup?
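For the data-loss concern, the usual approach (a sketch with placeholder names, not specific to this exact setup) is to put the database files on a PersistentVolumeClaim rather than inside the container, so the data survives pod restarts and rescheduling:

```shell
# Sketch: claim persistent storage for the SQL Server data directory.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mssql-data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 8Gi
EOF
# Then mount the claim at /var/opt/mssql in the pod spec, so scheduled
# backups can snapshot the volume instead of the whole container.
```

With auto-scaling this also matters because a combined app+DB container cannot safely scale beyond one replica while the data lives inside it.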
This issue was fixed!
I created two pods, for the application and the DB. Earlier, when I provided the DB's cluster IP in the application config file, it did not work, even though I was able to telnet to 1433.
I created another K8s cluster in Azure and tried the same setup (providing the cluster IP). This time it worked like a charm.
Thanks to @Suresh Vishnoi

SSH to Azure's Kubernetes managed master node

I just deployed a managed Kubernetes cluster with Azure Container Service. My deployment includes a single agent machine on the managed cluster and an Azure disk attached to it for persistent storage.
The problem I am facing is that I don't know how to SSH into this agent server. I read that you should be able to SSH into the master node and connect to the agent from there, but since I am using a managed Kubernetes master, I can't find a way to do this.
Any idea? Thank you in advance.
The problem I am facing is that I don't know how to ssh this agent server.
Do you mean you created AKS and can't find the master VM?
If I understand correctly, that is by-design behavior: AKS does not provide direct access (such as via SSH) to the cluster.
If you want to SSH to the agent node, as a workaround we can create a public IP address and associate it with the agent's NIC; then we can SSH to the agent.
Here are my steps:
1. Create a public IP address via the Azure portal.
2. Associate the public IP address with the agent VM's NIC.
3. SSH to the VM using this public IP address.
Note: by default, you can find the SSH key when you create the AKS cluster.
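The same portal steps can be sketched with the Azure CLI. The resource group, NIC, and IP-config names below are placeholders; on AKS the node resources live in the auto-generated `MC_*` resource group:

```shell
# 1. Create a public IP address.
az network public-ip create --resource-group MC_myrg_mycluster_westeurope \
  --name agent-public-ip

# 2. Associate it with the agent VM's NIC.
az network nic ip-config update --resource-group MC_myrg_mycluster_westeurope \
  --nic-name aks-agent-nic-0 --name ipconfig1 \
  --public-ip-address agent-public-ip

# 3. SSH to the VM using the new public IP.
ssh azureuser@$PUBLIC_IP
```

Remember to remove the public IP again afterwards, both for cost and to keep the node off the public internet.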
Basically, you don't even have to add a public IP to that node. Simply add your public SSH key to the desired node with the Azure CLI:
az vm user update --resource-group <NODE_RG> --name <NODE_NAME> --username azureuser --ssh-key-value ~/.ssh/id_rsa.pub
Then run a temporary pod (don't forget to switch to the desired namespace in your Kubernetes config):
kubectl run -it --rm aks-ssh --image=debian
Copy your private SSH key into that pod:
kubectl cp ~/.ssh/id_rsa <POD_NAME>:/id_rsa
Finally, from the pod, connect to the AKS node via its private IP:
ssh -i id_rsa azureuser@<NODE_PRIVATE_IP>
This way you don't have to pay for a public IP and, in addition, it's better from a security perspective.
The easiest way is to use the tool below: it creates a tiny privileged pod on the node and accesses the node using nsenter.
https://github.com/mohatb/kubectl-wls
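On newer kubectl versions there is a built-in variant of this privileged-pod trick. A sketch, with a placeholder node name; exact privilege behavior varies by kubectl version:

```shell
# Start a debug pod on the node with the host filesystem mounted at /host,
# then switch into the node's root filesystem.
kubectl debug node/aks-nodepool1-12345678-vmss000000 -it --image=busybox
chroot /host
```

When you exit, delete the debug pod that `kubectl debug` left behind (its name is printed when it starts).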

zabbix with postgresql on AWS RDS

I have one EC2 instance, and when I check the connection with the psql tool it is OK.
psql --host= etc...
Basically, AWS RDS does not provide an internal IP for connections; I have to use the long endpoint URL instead.
How do I provide this endpoint address to the Zabbix web interface installation tool?
When I use the endpoint in "Database host", it fails :(
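For reference, the Zabbix installer's "Database host" field takes that same endpoint hostname, not an IP. A sketch of verifying it first from the web-server host (the endpoint, user, and database names are placeholders):

```shell
# Confirm the endpoint resolves and PostgreSQL answers on 5432 before
# entering it in the Zabbix frontend setup.
psql --host=myinstance.abc123xyz.eu-west-1.rds.amazonaws.com \
     --port=5432 --username=zabbix --dbname=zabbix -c 'select 1;'
# If this times out, check that the RDS security group allows inbound
# 5432 from the EC2 instance running the Zabbix frontend.
```

If psql connects from that host but the installer still fails, the problem is usually the credentials or database name rather than the endpoint itself.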