I am attempting to connect a .NET Core API to a database on Azure SQL. Everything works fine while debugging and when running without Istio. As soon as I run with Istio, it does not work. I tried creating a ServiceEntry, but it is not helping. Can you help?
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: azure-sql
spec:
  hosts:
  - <servername>.database.windows.net
  addresses:
  - <ip address>
  ports:
  - name: tcp
    number: 1433
    protocol: TCP
  location: MESH_EXTERNAL
Am I missing something here?
I know this is an old question, and likely you already know this by now, but just in case anyone else is having this issue...
Azure SQL uses gateway redirection, i.e. it redirects the client to a different machine and port, so the actual host and/or port may differ from the one whitelisted in your ServiceEntry.
This issue explains it better than I can: https://github.com/istio/istio/issues/6587
The suggestion there is to disable this gateway redirection in SQL (i.e. switch the connection policy from the default to Proxy), but there may be performance implications if you do so.
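If you go that route, the connection policy can be changed with the Azure CLI; a sketch, where the resource group and server names are placeholders:

az sql server conn-policy update \
  --resource-group my-rg \
  --server myserver \
  --connection-type Proxy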
I haven't seen any other way to get around this, short of allowing all outbound comms to bypass the sidecar via an annotation on the pod template in your K8s deployment YAML:
...
template:
  metadata:
    annotations:
      traffic.sidecar.istio.io/excludeOutboundIPRanges: 0.0.0.0/0
...
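If excluding everything feels too broad, you may be able to narrow the exclusion to just the address range your SQL server resolves to. A sketch in the same fragment style; the CIDR is purely a placeholder, to be replaced with whatever range <servername>.database.windows.net actually resolves to in your region:

...
template:
  metadata:
    annotations:
      # Placeholder CIDR for illustration only; replace with the
      # Azure SQL gateway range relevant to your region.
      traffic.sidecar.istio.io/excludeOutboundIPRanges: 40.68.0.0/16
...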
Related
First of all, I am pretty new to Kubernetes and the containerized world.
My scenario is as follows:
I have an application deployed to AKS, and we are using AGIC as the ingress. The application consumes endpoints hosted outside the AKS cluster. The consumed application is publicly accessible, but it has IP whitelisting, so I am whitelisting the Application Gateway IP. I also created an ExternalName Service, as follows.
kind: Service
apiVersion: v1
metadata:
  name: service-endpoint
spec:
  type: ExternalName
  externalName: endpointname.something.com
  ports:
  - protocol: TCP
    port: 433
But it does not work.
Additionally, I tried hitting the endpoint URL (https://endpointname.something.com) directly from the pod, and I receive a 403.
Can someone advise on the correct steps to achieve this connectivity?
Please note that we fixed this issue by whitelisting the public IP of the AKS load balancer on the target system.
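For anyone hitting the same 403: pod-originated traffic leaves the cluster through the AKS load balancer's outbound public IP, not through the Application Gateway, which is why whitelisting the gateway IP alone wasn't enough. One way to confirm the egress IP as the target sees it; the pod name is arbitrary and ifconfig.me is just one of several IP echo services:

kubectl run egress-check --image=curlimages/curl -it --rm --restart=Never \
  --command -- curl -s https://ifconfig.me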
I started minikube with the Docker driver, but I can access the service only from my local machine. I want to provide that URL to a client.
Can anyone help me with this issue? Is it possible to access a minikube service externally, from machines other than the local one?
my service file is as follows:
apiVersion: v1
kind: Service
metadata:
  name: xxxx
spec:
  selector:
    app: xxxx
  ports:
  - port: 8080
    targetPort: xxxx
  type: LoadBalancer
Thank you
Important: minikube is not meant to be used in production. It's mainly an educational tool, used to teach users how Kubernetes works in a safe, controlled (and usually local) environment. Please do not use it in production environments.
Important #2: Under no circumstances should you give anyone access to your local machine - be it your client or your friend - unless it is a server meant to be accessible from outside the organization and correctly hardened. This is a huge security risk.
Now, off to the question:
Running:
minikube service --url <service name>
will give you a URL with an external IP, probably something in the 192.168.0.0/16 range (if you are on a local network). Then you need to create a port-forwarding rule on your router.
You can find more details here.
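For a quick demo (with the same security caveats as above), you can also bind a forwarded port on all of your machine's interfaces and have the client connect to your machine's IP directly. A sketch using the service name from the question:

kubectl port-forward --address 0.0.0.0 service/xxxx 8080:8080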
I'd like to expose the default port (1883) and the WebSocket port (9001) of an MQTT server on an Azure Kubernetes cluster.
Anyway, here is the deployment I currently wrote:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mqtt-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mqtt-server
  template:
    metadata:
      labels:
        app: mqtt-server
        type: backend
    spec:
      containers:
      - name: mqtt-server
        image: eclipse-mosquitto:1.5.4
        resources:
          requests:
            cpu: 250m
            memory: 256Mi
        ports:
        - name: mqtt-dflt-port
          containerPort: 1883
        - name: mqtt-ws-port
          containerPort: 9001
---
apiVersion: v1
kind: Service
metadata:
  name: mqtt-server-service
spec:
  selector:
    app: mqtt-server
  type: LoadBalancer
  ports:
  - name: mqtt-dflt-port
    protocol: TCP
    port: 1883
    targetPort: 1883
  - name: mqtt-ws-port
    protocol: TCP
    port: 1884
    targetPort: 9001
And when I deploy it, everything is fine, but the MQTT broker is unreachable and my service is described like this:
mqtt-server-service LoadBalancer 10.0.163.167 51.143.170.64 1883:32384/TCP,1884:31326/TCP 21m
Why aren't ports 1883/9001 being forwarded like they should be?
- First, make sure you're connecting to the service's cluster IP from within the cluster, not from the outside.
- Don't bother pinging the service IP to figure out if the service is accessible (remember, the service's cluster IP is a virtual IP and pinging it will never work).
- If you've defined a readiness probe, make sure it's succeeding; otherwise the pod won't be part of the service.
- To confirm that a pod is part of the service, examine the corresponding Endpoints object with kubectl get endpoints.
- If you're trying to access the service through its FQDN or a part of it (for example, myservice.mynamespace.svc.cluster.local or myservice.mynamespace) and it doesn't work, see if you can access it using its cluster IP instead of the FQDN.
- Check whether you're connecting to the port exposed by the service and not the target port.
- Try connecting to the pod IP directly to confirm your pod is accepting connections on the correct port.
- If you can't even access your app through the pod's IP, make sure your app isn't only binding to localhost.
A few of these checks are sketched as commands below.
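As commands; the service name is taken from the question, the pod IP is a placeholder, and the debug image is just a suggestion:

kubectl get endpoints mqtt-server-service   # is any pod backing the service?
kubectl describe svc mqtt-server-service    # compare port vs targetPort
# Raw TCP check straight against the pod:
kubectl run -it --rm debug --image=nicolaka/netshoot --restart=Never -- \
  nc -vz 10.244.1.23 1883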
I don't see anything wrong; the ports you requested are being forwarded. The service also created temporary ports on the nodes for traffic to flow (it always does that). The service has endpoints, so everything is okay.
Just to give more context: it always does that because it needs to route traffic through some port on each node, but it cannot depend on one exact port (it might be occupied), so it uses a random port from the 30000+ range (by default).
https://kubernetes.io/docs/concepts/services-networking/service/#nodeport
If you must have a known, static port assignment, you can add nodePort: <some-number> to the ports definition in your Service, as sketched below. By default, NodePorts are assigned in the range 30000-32767.
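A sketch using the service from the question; 31883 is just an example value within the default range:

apiVersion: v1
kind: Service
metadata:
  name: mqtt-server-service
spec:
  selector:
    app: mqtt-server
  type: LoadBalancer
  ports:
  - name: mqtt-dflt-port
    protocol: TCP
    port: 1883
    targetPort: 1883
    nodePort: 31883  # pinned; must fall within 30000-32767 by default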
I've managed to deploy a .NET Core API to Azure's managed Kubernetes service (ACS), and it's working as expected. The image is hosted in an Azure Container Registry.
I'm now trying to make the service accessible via HTTPS. I'd like a very simple setup.
Firstly, do I have to create an OpenSSL cert or register with Let's Encrypt? I'd ideally like to avoid having to manage SSL certs separately, but from the documentation it's not clear whether this is required.
Secondly, I've got a manifest file below. I can still access port 80 using this manifest; however, I am not able to access port 443. I don't see any errors, so it's not clear what the problem is. Any ideas?
Thanks
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: someappservice-deployment
  annotations:
    service.beta.kubernetes.io/openstack-internal-load-balancer: "false"
    loadbalancer.openstack.org/floating-network-id: "9be23551-38e2-4d27-b5ea-ea2ea1321bd6"
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: someappservices
    spec:
      containers:
      - name: someappservices
        image: myimage.azurecr.io/someappservices
        ports:
        - containerPort: 80
        - containerPort: 443
---
kind: Service
apiVersion: v1
metadata:
  name: external-http-someappservice
spec:
  selector:
    app: someappservices
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
  - name: https
    port: 443
    protocol: TCP
    targetPort: 443
From what I understand, you will need something like an NGINX ingress controller to handle the SSL termination, and you will also need to manage certificates. cert-manager is a nice package that can help with the certs.
Here is a write up on how to do both in an AKS cluster:
Deploy an HTTPS enabled ingress controller on AKS
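To give a feel for the shape of it, here is a minimal sketch of a TLS-enabled Ingress backed by cert-manager. The hostname, the ClusterIssuer name (letsencrypt), and the secret name are all placeholders, and it assumes the NGINX ingress controller and cert-manager are already installed (note it uses the current networking.k8s.io/v1 API rather than the older versions in the question):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: someappservice-ingress
  annotations:
    # Assumes a cert-manager ClusterIssuer named "letsencrypt" exists.
    cert-manager.io/cluster-issuer: letsencrypt
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - myapp.example.com             # placeholder hostname
    secretName: someappservice-tls  # cert-manager stores the cert here
  rules:
  - host: myapp.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: external-http-someappservice
            port:
              number: 80

With an ingress in front, the Service would typically be changed from LoadBalancer to ClusterIP, since the ingress controller becomes the single public entry point and terminates TLS before forwarding plain HTTP to the pods.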
If I don't misunderstand, you want to access your service via HTTPS with simple steps. Yes: if you don't have particularly strict security requirements such as SSL certs, you can just expose the ports through the load balancer and access your service from the Internet; it's simple to configure.
The YAML file you posted looks all right. You can verify the result from the Kubernetes dashboard and the Azure portal, or with the command kubectl get svc.
But if you do have particularly strict security requirements, you need an NGINX ingress controller, as in the other answer here. HTTPS is a network security protocol, so you do indeed need to configure an NGINX ingress controller to terminate it.
I'm trying to connect my Kubernetes deployments together via DNS.
I have a Java (Spring Boot) deployment and a javascript (node.js) deployment, both exposed via a default ClusterIP Service. I need websocket and REST communication between both services.
I've read that I should use DNS so that these two services can talk to each other, but I'm having trouble determining what those DNS names are.
For example,
kubectl get pods --all-namespaces
gives me this:
NAMESPACE   NAME
default     javascript-deployment-65869b7db4-mxfrb
default     java-deployment-54bfc87fd6-z8wml
What do I need to specify in my Service config to stop these random suffixes from being applied?
How do I then determine what my DNS names need to be with a similar form of my-svc.my-namespace.svc.cluster.local?
About your questions:
1. Kubernetes doesn't recommend removing those generated suffixes: they ensure that pod names are unique, and the first part of the hash groups all pods belonging to the same ReplicaSet. So, just as advice, don't touch it. https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#pod-template-hash-label
2. kubectl get services -o wide will show you which port your app is listening on. You just need to use the cluster IP plus that port, as in CLUSTER_IP:PORT, to reach your service.
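Since the question asks specifically about DNS: every Service also gets a cluster DNS record of the form <service>.<namespace>.svc.cluster.local, so you don't need to hard-code the cluster IP at all. A sketch with placeholder names, assuming a Service called java-service in the default namespace listening on port 8080:

# Short name resolves from within the same namespace:
curl http://java-service:8080
# The fully qualified name resolves from any namespace:
curl http://java-service.default.svc.cluster.local:8080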
I fixed it using the Service metadata name and port.
For example, this is my service definition:
apiVersion: v1
kind: Service
metadata:
  name: my-big-deployment
spec:
  ports:
  - port: 8000
    protocol: TCP
    targetPort: 8000
  selector:
    app: my-service
From my applications in the cluster, I can now access this service via the following environment variables:
MY_BIG_DEPLOYMENT_SERVICE_HOST
MY_BIG_DEPLOYMENT_SERVICE_PORT
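For example, from a shell in another pod; note that these variables are injected at pod start time, so the consuming pod must have been created after the Service (this sketch assumes everything lives in the same namespace):

curl "http://${MY_BIG_DEPLOYMENT_SERVICE_HOST}:${MY_BIG_DEPLOYMENT_SERVICE_PORT}"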