Simple question: supposing I have several backend microservices, which are only ever accessed by application software, is it best practice to:
point the application software directly to an IP address, or
assign a subdomain to the services?
My assumptions are that (1) avoids DNS lookup latency whereas (2) makes it easier to update the system if the IP ever changes. Is there anything else that affects this?
Neither assigning a subdomain nor pointing directly at an IP address is a scalable approach.
I would suggest pointing your client application software at an API gateway, which becomes the single entry point to these microservices, combined with a service discovery mechanism so that the API gateway can reach the individual microservices.
The flow looks like this:
The client application requests data from a specific service; the request reaches the API gateway.
The API gateway asks the discovery server for the latest reachable address of that specific service.
The discovery server returns the latest reachable address.
The API gateway uses that address to reach the specific service and fetch the resource the client requested.
Every microservice should register itself with the discovery server when it spins up.
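A minimal sketch of that lookup-and-forward step, assuming a hypothetical HTTP discovery endpoint (the URL, path, and JSON shape below are placeholders, not any particular product's API):

```python
import requests

DISCOVERY_URL = "http://discovery.internal:8500"  # hypothetical discovery server


def resolve(service_name: str) -> str:
    # Ask the discovery server for the current reachable address of a service.
    resp = requests.get(f"{DISCOVERY_URL}/services/{service_name}")
    resp.raise_for_status()
    return resp.json()["address"]  # e.g. "http://10.0.3.17:8080" (assumed response shape)


def forward(service_name: str, path: str) -> bytes:
    # What the API gateway effectively does: resolve the service, then proxy the client's request to it.
    base_url = resolve(service_name)
    return requests.get(f"{base_url}/{path}").content
```

In practice you would use an existing gateway (e.g. Kong, Spring Cloud Gateway, or a cloud provider's API gateway) together with a registry such as Consul or Eureka rather than hand-rolling this.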
I am trying to set up a private AKS cluster that I want to manage from a user laptop using kubectl. I created a simple setup with one vNET and an Azure VPN gateway with OpenVPN configuration, where the VPN gateway is attached to one subnet of the vNET and AKS is configured via Azure CNI to live in another subnet of the same vNET. I expected this to be all I would need to manage the cluster as long as I am connected to the VPN (I understood that all subnets in a vNET are routed to each other by default). But when I try to use kubectl I get: Unable to connect to the server: dial tcp: lookup : no such host. My network knowledge does not go too deep unfortunately, but should this just work? It all lives within the same vNET. Thank you.
My setup is very similar and I ran into the same situation. This was a DNS issue for me.
If you have a private DNS zone with your private AKS cluster (it should be in the resource group that was created for the private cluster), go find the DNS record and IP address for the API server. Put that IP address into your hosts file (/etc/hosts on Linux or WSL) together with the fully qualified domain name, then try your kubectl commands again.
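For example, the entry would look something like this (the IP address and FQDN below are placeholders; use the values from your private DNS zone):

```
# /etc/hosts
10.240.0.4  myakscluster-abc123.privatelink.westeurope.azmk8s.io
```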
I am trying to create a new linked service in Azure Data Factory, pointing to a SQL database I have already created. The error message is attached in the screenshot.
[screenshot of the error message]
You are trying to connect to a SQL Database that is behind a firewall. If you allow the IP address (in your case it is 20.42.2.58, as per the screenshot), your linked service should work. Navigate to the Firewall and Virtual Networks blade of your SQL Server and add the IP address; the start IP and the end IP should be the same.
A second option would be to use "Allow Azure services and resources to access this server" - set this option to Yes.
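If you prefer the CLI, a rule along these lines should achieve the same thing (the resource group and server names here are placeholders for your own):

```
az sql server firewall-rule create \
  --resource-group my-rg \
  --server my-sql-server \
  --name AllowAdfClientIp \
  --start-ip-address 20.42.2.58 \
  --end-ip-address 20.42.2.58
```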
I am using Azure Kubernetes Service (AKS) and I need Services solely for TCP connections; I do not need HTTP at all, which I think is important to emphasize. To access our Service from outside the AKS cluster, but only from a VM in a different virtual network (and NOT from the public internet), the conclusion from the Azure admins was that I should:
expose my pod with a ClusterIP Service
install the NGINX Ingress Controller with an internal load balancer to route to that ClusterIP Service
I created a kind: Ingress resource as well, but that was NOT needed at all since I am not using HTTP; as mentioned, I access my containerized app from outside solely via the NGINX IP address and port. Everything works with this ingress controller in place.
I have read a lot about Kubernetes Services, their types, and Ingress, and I would like to summarize my dilemmas and confusions: why did I have to implement this approach rather than a simpler networking setup without Ingress (since I am not using HTTP)?
ClusterIP - used for accessing the Service from any node in the cluster via CLUSTER-IP:PORT
NodePort - used for accessing the Service from outside the cluster via NodeIP:NodePort
Question number 1: why couldn't I just use NodeIP:NodePort to access my Service over TCP from the other Azure virtual network? Of course firewall rules need to be configured, but why is this approach not acceptable, so that I had to install an ingress controller?
LoadBalancer - exposes the Service externally using a cloud provider's load balancer. OK, I must not use a public load balancer, that is clear, but why couldn't I use a Service of type LoadBalancer with an internal load balancer? This is described at the following link:
https://learn.microsoft.com/en-us/azure/aks/internal-lb which states that it is "...accessible only to applications running in the same virtual network as the Kubernetes cluster". The Service is then accessed via its EXTERNAL-IP:PORT (a sketch of such a Service manifest follows below).
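For reference, an internal LoadBalancer Service for a plain TCP port would look roughly like this (the names, selector, and port are placeholders; the annotation is the one documented on the Azure page linked above):

```
apiVersion: v1
kind: Service
metadata:
  name: my-tcp-service
  annotations:
    # Documented Azure annotation that makes the load balancer internal (no public IP)
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    app: my-app        # placeholder: must match your pod labels
  ports:
    - protocol: TCP
      port: 5432       # placeholder TCP port
      targetPort: 5432
```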
Question number 2: is there really no other way to use a LoadBalancer to access the Service from the other virtual network without exposing it publicly on the internet? Does it really require the more complex architecture with an ingress controller? Again, I must emphasize this is only for TCP, not HTTP.
Question number 3: what is the usual Service type and network setup for connecting from a different virtual network? Again, only for TCP, and if it matters, I am asking specifically about Azure.
I would appreciate a full explanation for all three questions, since this kind of use case is not covered explicitly in the kubernetes.io documentation - it always says that Ingress resources are for HTTP, yet based on the instructions I got from the admins they are apparently being used for TCP as well.
Thanks
I created a free-tier EC2 instance in Amazon, reserved an Elastic IP, and assigned it to my instance.
I installed nginx and need to access it from the internet, so I went to my domain registrar GoDaddy and created a CNAME that points to this Elastic IP.
I can access SSH using this IP.
Finally I created a security group and opened the HTTP port.
It does not seem to be working.
Can you please help me troubleshoot why I cannot connect to the web server?
Step #1:
Test via its EIP address, something like http://ec2-xxx-xxx-xxx-xxx
If you can't reach it, add an inbound rule in its security group allowing 0.0.0.0/0 on HTTP port 80; then you should be able to access it.
Later you can tighten that CIDR range.
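With the AWS CLI, that rule would be something like the following (the security group ID is a placeholder for your own):

```
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 80 \
  --cidr 0.0.0.0/0
```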
Step #2:
Access it via the domain name you registered in GoDaddy; if that fails, review the DNS settings in GoDaddy and make sure they point to the right IP address.
Step #3:
Optionally, put an ELB (AWS Elastic Load Balancer) in front of the nginx web server and forward traffic from the ELB to nginx; this is more flexible.
When we spin up a new Ubuntu machine in Azure, we get a public IP address for it. I am working off a trial account. Does anyone know how many public IP addresses Azure can provide? Is there a limit? I believe there is a limit on AWS, after which they want us to use some VPN-like solution. Does this limit exist on Azure or not?
I don't think there is a limit on public IP addresses as such. You get a public IP address for every active deployment in a Cloud Service. Creating a Virtual Machine creates a Cloud Service behind the scenes and puts an active deployment on it. That public IP address is guaranteed not to change while your deployment exists (is not deleted). Whether that public IP address is shared with other deployments, I don't know (but my guess is yes). There is, however, a default limit on the number of cloud services one can create, and this limit depends on the type of subscription you have. So that is the kind of limitation that exists in Azure.