Azure Container Instance cannot connect to Kafka cluster - azure

I have an Azure Container Instance, and I want to connect to a Kafka cluster that is also in Azure.
If I configure advertised.listeners in Kafka with a DNS name I can connect, but with a hostname I cannot.
From the ACI I also cannot ping/wget/telnet to other internal resources, only to other ACIs.
vnet kafka = vnetA
subnet kafka = subnetA
vnet ACI = vnetA
subnet ACI = subnetB
I created the ACI with a private IP.

This article explains how to configure your listeners in this situation: https://rmoff.net/2018/08/02/kafka-listeners-explained/
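The key point from that article is that clients must be able to resolve whatever advertised.listeners returns, so the broker should advertise a name the ACI's resolver knows, not its local hostname. A minimal sketch of that listener setup, assuming an illustrative VNet-resolvable FQDN kafka0.internal.example for the broker (the path server.properties is shown relative here; on a real broker it is typically under the Kafka config directory):

```shell
# Append a listener config to the broker's server.properties (sketch).
# "kafka0.internal.example" is a placeholder: substitute a DNS name that
# resolves inside the VNet, because this is the address handed back to clients.
cat >> server.properties <<'EOF'
listeners=INTERNAL://0.0.0.0:9092
advertised.listeners=INTERNAL://kafka0.internal.example:9092
listener.security.protocol.map=INTERNAL:PLAINTEXT
inter.broker.listener.name=INTERNAL
EOF
```

With this, the broker binds on all interfaces but tells connecting clients to use the VNet-resolvable name, which matches the behavior described in the question (DNS works, bare hostname does not).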

Regarding your issue of the Azure Container Instance connecting to other Azure resources: an ACI deployed into a virtual network currently supports only a private IP address and no DNS name labels, and it can connect to other resources in the same VNet, or in a different VNet via peering.
Container groups deployed to a virtual network do not currently
support public IP addresses or DNS name labels.
So you can only connect to the ACI through its private IP inside the VNet, and this feature is still in preview. For more details, see Deploy container instances into an Azure virtual network.
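For reference, deploying an ACI into an existing VNet/subnet so it receives a private IP can be sketched with the Azure CLI. All names here (resource group "rg", container "myaci", image, vnet/subnet names) are illustrative assumptions, not values from the question:

```shell
# Deployment sketch: place the container group in subnetB of vnetA
# with a private IP only (requires an Azure subscription to run).
az container create \
  --resource-group rg \
  --name myaci \
  --image myregistry.azurecr.io/app:latest \
  --vnet vnetA \
  --subnet subnetB \
  --ip-address Private
```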

Related

Cannot connect to Azure SQL database from Azure Container Instance in different subnets

I have an Azure Container Instance (ACI) and an Azure SQL database.
I'm having problems when I try to connect the ACI to the SQL database.
The infrastructure looks like this:
One vnet with two subnets.
The ACI has a private IP in subnet1.
The SQL has a private endpoint in subnet2.
Subnet1 has Microsoft.ContainerInstance/containerGroups configured under subnet delegation.
No NSG between the subnets.
Routing tables connected to the subnets route traffic within the vnet.
The log in the ACI complains about not being able to connect to the database on port 1433.
What am I missing?
It turned out that the problem was DNS.
ACI doesn't use the DNS servers configured on the vnet, so you have to configure DNS for the ACI:
How to get Azure Container Instances using my DNS server?
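Because a VNet-deployed ACI does not inherit the VNet's custom DNS servers, the container group YAML exposes a dnsConfig section for this. A sketch, assuming an illustrative DNS server at 10.0.0.10 and placeholder resource names (the subscription ID in subnetIds is a placeholder you must fill in):

```shell
# Write a container-group definition with explicit DNS servers (sketch),
# then deploy it with: az container create --resource-group rg --file container-group.yaml
cat > container-group.yaml <<'EOF'
apiVersion: '2021-10-01'
name: myaci
location: westeurope
properties:
  containers:
  - name: app
    properties:
      image: myregistry.azurecr.io/app:latest
      resources:
        requests:
          cpu: 1.0
          memoryInGB: 1.5
  osType: Linux
  restartPolicy: Always
  ipAddress:
    type: Private
    ports:
    - port: 80
  dnsConfig:
    nameServers:
    - 10.0.0.10
  subnetIds:
  - id: /subscriptions/<subscription-id>/resourceGroups/rg/providers/Microsoft.Network/virtualNetworks/vnetA/subnets/subnetB
EOF
```

With a DNS server listed under dnsConfig.nameServers, name resolution inside the container goes through that server instead of Azure's default resolver, which is what unblocks the private-endpoint hostname of the SQL database.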

Connect an existing Azure Kubernetes cluster to a new virtual network for a Timescale VPC

The overall goal of this question is to find out the proper way to connect a pre-existing Azure Kubernetes cluster to an Azure virtual network (or redeploy it in the virtual network) so that it can access the Timescale Postgres database (timescale.com) that has been placed in the VPC connected to the virtual network.
What I would like to do is take an existing production Kubernetes cluster and configure it to be able to see the TimescaleDB in the Virtual Private Cloud.
Is it possible to do this with another peering rule?
What I have done
Created a VNet in Azure
Created a timescaledb database at timescaledb.com
Created the appropriate service principals, peering rules, and connected timescaledb to the vnet
Created a NEW Kubernetes cluster in the virtual network
Tested the connection to the database (failed via internet, succeeded within vnet)
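The "another peering rule" asked about can be sketched with the Azure CLI. Peering is directional, so one link is created from each side; all names here (resource group "rg", "aks-vnet", "timescale-vnet") are illustrative assumptions:

```shell
# Peering sketch: link the AKS cluster's VNet to the VNet that is
# peered with the Timescale VPC (requires an Azure subscription to run).
az network vnet peering create \
  --resource-group rg --vnet-name aks-vnet \
  --name aks-to-timescale \
  --remote-vnet timescale-vnet \
  --allow-vnet-access

az network vnet peering create \
  --resource-group rg --vnet-name timescale-vnet \
  --name timescale-to-aks \
  --remote-vnet aks-vnet \
  --allow-vnet-access
```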

Configure and verify Vnet in Azure

I have created a Virtual Network with its subnet, integrated three App Service applications into it, and created a rule on the firewall of my Azure SQL server.
Everything is in the same Azure subscription and region
I need to know whether this is enough to route all the traffic between these instances through the virtual network, or whether I need to configure some other aspect.
Also, how can I inspect the data traffic to verify that the virtual network is being used?
Azure Virtual Network (VNet) allows you to place Azure resources in a non-internet-routable network.
https://learn.microsoft.com/en-us/azure/architecture/example-scenario/private-web-app/private-web-app#architecture
Using Azure App Service regional VNet Integration, the web app connects to Azure through an AppSvcSubnet delegated subnet in an Azure Virtual Network.
Virtual Network only routes traffic and is otherwise empty, but other subnets and workloads could also run in the Virtual Network.
The App Service and Private Link subnets could be in separate peered Virtual Networks, for example as part of a hub-and-spoke network configuration. For regional VNet Integration, the peered Virtual Networks must be located in the same Azure region.
Azure Private Link sets up a private endpoint for the Azure SQL database in the PrivateLinkSubnet of the Virtual Network.
The web app connects to the SQL Database private endpoint through the PrivateLinkSubnet of the Virtual Network.
The database firewall allows only traffic coming from the PrivateLinkSubnet to connect, making the database inaccessible from the public internet.
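The private-endpoint step in that architecture can be sketched with the Azure CLI. All names here are illustrative assumptions (resource group "rg", server "mysqlserver", VNet "myvnet"); "sqlServer" is the group ID used for Azure SQL private endpoints:

```shell
# Private Link sketch: create a private endpoint for the SQL server in
# PrivateLinkSubnet (requires an Azure subscription to run).
az network private-endpoint create \
  --resource-group rg \
  --name sql-private-endpoint \
  --vnet-name myvnet \
  --subnet PrivateLinkSubnet \
  --private-connection-resource-id "$(az sql server show -g rg -n mysqlserver --query id -o tsv)" \
  --group-id sqlServer \
  --connection-name sql-connection
```

For verifying that traffic actually stays on the VNet, NSG flow logs via Azure Network Watcher are the usual tool.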

Azure pods app connect to MSSQL server installed in Azure VM

I have an Azure VM (Windows) with MSSQL installed.
I created a cluster in AKS, then created a pod whose image (an application running in embedded Tomcat) connects to the private IP of the above VM. The container in the pod starts with an error: it cannot connect to that private IP of MSSQL.
I can access that private IP from my local machine (using a VPN), so is there any way/config to make the pod connect to that VM using the private IP? Since it's the same infrastructure, I don't understand why it cannot connect.
(I am a newbie with Azure.)
Thanks a lot.
For your requirement, I don't know how you deployed the VM and the AKS cluster, so here are solutions for the two situations:
AKS cluster with the network type kubenet:
VM in VNet A and AKS in VNet B: create a service with an internal load balancer for the pod, and then peer VNets A and B.
VM and AKS in the same VNet: create a service with an internal load balancer for the pod.
AKS cluster with the network type CNI:
VM in VNet A and AKS in VNet B: peer VNets A and B.
VM and AKS in the same VNet: you don't need to do anything; it should work.
For all of the above solutions, you need to check the NSG rules between the VM and the AKS pod, and allow inbound traffic to the VM on the MSSQL port (1433 by default).
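The internal-load-balancer step above can be sketched as a Kubernetes Service using the well-known Azure annotation that requests a private load balancer IP. The name, selector, and port here are illustrative assumptions:

```shell
# Internal LB sketch: write a Service manifest, then apply it with
# kubectl apply -f internal-lb.yaml
cat > internal-lb.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: app-internal
  annotations:
    # Ask Azure for an internal (VNet-private) load balancer IP
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    app: myapp
  ports:
  - port: 8080
    targetPort: 8080
EOF
```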

ACS Engine Azure VNET Integration

I have trouble integrating an ACS Engine Cluster with my existing VNET in Azure.
Below are the steps to reproduce my issue:
Create an ACS Engine cluster with the default configuration. It creates its own VNET (let's say ACS_VNET).
Create a new VNET (VNET2) with a VM and set up VNET peering to ACS_VNET.
Create a sample service with an Azure private load balancer and try to access its private IP from the above VM in VNET2. It does not work. I also tried the private IP of the pod and a NodePort, with no luck.
I followed this article to create the cluster: https://dzone.com/articles/create-custom-azure-kubernetes-clusters-with-acs-e
The above steps work for an AKS cluster in Azure.
