I started minikube with the Docker driver, but I can only access the service from my local machine. I want to provide that URL to a client.
Can anyone help me with this? Is it possible to access a minikube service externally, from other machines apart from the local one?
My service file is as follows:
apiVersion: v1
kind: Service
metadata:
  name: xxxx
spec:
  selector:
    app: xxxx
  ports:
    - port: 8080
      targetPort: xxxx
  type: LoadBalancer
Thank you
Important: minikube is not meant to be used in production. It's mainly an educational tool, used to teach users how Kubernetes works in a safe, controlled (and usually local) environment. Please, do not use it in production environments.
Important #2: Under no circumstances should you give anyone access to your local machine - unless it's a server meant to be accessible from outside the organization, and correctly hardened - be it your client or your friend. This is a huge security risk.
Now, off to the question:
Running:
minikube service --url <service name>
will give you a URL with an external IP, probably something in the 192.168.0.0/16 range (if you are on a local network). Then you need to create a port forwarding rule on your router.
You can find more details here.
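For completeness, here is a minimal sketch of how the same Service could be exposed and then forwarded to other machines. The service name and port are taken from the question, but the nodePort value and the forwarding commands are assumptions that depend on your setup; with the Docker driver the node IP is usually not reachable from other machines, so a port-forward bound to all interfaces is often the simplest option:
apiVersion: v1
kind: Service
metadata:
  name: xxxx
spec:
  selector:
    app: xxxx
  ports:
    - port: 8080
      targetPort: xxxx
      nodePort: 30080   # assumed value; must fall in the default 30000-32767 range
  type: NodePort

# Possible ways to reach it from another machine (run these on the minikube host):
#   minikube service --url xxxx
#   kubectl port-forward --address 0.0.0.0 service/xxxx 8080:8080
# then share http://<host-machine-ip>:8080 with the client.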
Related
First of all, I am pretty new to Kubernetes and the containerized world.
My scenario is as follows:
I have an application deployed to AKS, and we are using AGIC as the ingress. The application consumes endpoints hosted outside AKS. The consumed application is publicly accessible, but it has IP whitelisting, so I whitelisted the Application Gateway IP. I also created an ExternalName Service as follows.
kind: Service
apiVersion: v1
metadata:
  name: service-endpoint
spec:
  type: ExternalName
  externalName: endpointname.something.com
  ports:
    - protocol: TCP
      port: 443
But it does not work.
Additionally, I tried to call the endpoint URL (https://endpointname.something.com) directly from the pod, and I received a 403.
Can someone advise what the correct steps would be in order to achieve this connectivity?
Please note that we fixed this issue by whitelisting the public IP of the AKS load balancer on the target system.
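For anyone debugging a similar setup, here is a minimal sketch of a throwaway pod that can be used to reproduce the call from inside the cluster (the pod name and image are assumptions, not part of the original setup); a 403 from here usually means the cluster's egress IP is not whitelisted on the target system:
apiVersion: v1
kind: Pod
metadata:
  name: curl-debug
spec:
  restartPolicy: Never
  containers:
    - name: curl
      image: curlimages/curl:latest            # assumed public image
      # Replace the URL with the real external endpoint under test.
      command: ["curl", "-v", "https://endpointname.something.com"]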
So, I have a really simple Flask app that I'm deploying in a Kubernetes environment using helm. Now, I have the following defined in my values.yaml:
...
service:
  type: ClusterIP
  port: 5000
  targetPort: 5000
  # can add
  # flaskPort: "5000"
ingress:
...
I know that I can set environment variables in my helm install command by typing helm install python-service . --values values-dev.yaml --set flaskPort=5000 and in my Python code just do:
PORT = int(os.environ.get("flaskPort", "5000"))  # fall back to Flask's default port
app.run(port=PORT, debug=True, host="0.0.0.0")
I can also define in my values-dev.yaml and in my templates/deployment.yaml entries for this environment variable flaskPort. But what about the port and targetPort entries in my values-dev.yaml? Wouldn't that clash with whatever flaskPort I set? How do I modify my chart to make sure that whatever port I specify in my helm install command, my python app is started on that port. The python app is a small mock server which responds to simple GET/POST commands.
Each Kubernetes pod has its own IP address inside the cluster, so you don't need to worry about port conflicts. Similarly, each service has its own IP address, distinct from the pod IP addresses, plus its own DNS name, so services can use the same ports as pods or other services without conflicts.
This means that none of this needs to be configurable at all:
Your application can listen on whatever port is the default for its framework; for Flask that is generally port 5000. (It does need to listen on the special "all interfaces" address 0.0.0.0.)
The pod spec should reflect the same (fixed) port number. It can help to give it a name.
ports:
  - name: http
    containerPort: 5000
The service can use any port it likes; for an HTTP-based service I'd recommend the default HTTP port 80. The targetPort: can be a name, which would match the name: of the corresponding pod/container port.
type: ClusterIP
ports:
  - name: http
    port: 80
    targetPort: http
Calls to the service from within the cluster can use plain http://svcname.nsname/ URLs, without really caring how the service is implemented, what the pod IPs are, or what ports the pods happen to be using.
At a Helm level it can make sense to make details of the service configurable; in particular if it's a NodePort or LoadBalancer service (or neither) and any of the various cloud-provider-specific annotations. You don't need to configure the pod's port details, particularly if you've written both the application and the Helm chart. For example, if you run helm create, the template service that you get doesn't allow configuring the pod's port; it's fixed in the deployment spec and available to the service under the http name.
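To illustrate that last point, here is a minimal sketch of what such a chart could look like, with a configurable Service but a fixed container port. The chart name, helper names and label keys are assumptions modeled on the output of helm create, not the asker's actual chart:
# values.yaml (sketch)
service:
  type: ClusterIP
  port: 80            # only the Service side is configurable

# templates/service.yaml (sketch)
apiVersion: v1
kind: Service
metadata:
  name: {{ include "mychart.fullname" . }}
spec:
  type: {{ .Values.service.type }}
  ports:
    - name: http
      port: {{ .Values.service.port }}
      targetPort: http            # matches the fixed container port name below
  selector:
    app.kubernetes.io/name: {{ include "mychart.name" . }}

# templates/deployment.yaml (sketch, container ports only)
#       ports:
#         - name: http
#           containerPort: 5000   # fixed; Flask's default port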
I would like to secure my web application running on Kubernetes (EKS). All the nodes attached to the cluster are running on private subnets.
I have one front-end service and a dozen back-end services.
The front-end service is a pod running a container that listens on port 80. It is configured to be attached to an ELB which only accepts traffic on port 443, with an HTTPS certificate.
apiVersion: v1
kind: Service
metadata:
  name: service_name
  labels:
    app: service_name
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: xxxxxxxxxx
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
spec:
  ports:
    - port: 443        # Exposed port
      targetPort: 80   # Container port
  selector:
    app: service_name
  type: LoadBalancer
The back-end services are pods running containers also running on port 80. None of them have been configured to be accessible from outside the cluster. Back-end services talk to each other by pointing to http://service_name (NOT https) as I configured them with this template:
apiVersion: v1
kind: Service
metadata:
  name: service_name
spec:
  ports:
    - port: 80         # Exposed port
      targetPort: 80   # Container port
  selector:
    app: service_name
It all works but is it sufficient?
Should the front-end/back-end containers use certificate/443 too with a wildcard https certificate? Should this configuration be done inside the container or on the services' configurations?
I have done quite a bit of investigation now and here is what I came down to.
All my EKS EC2 instances are running in private subnets, which means they are not accessible from the outside. Yes, by default Kubernetes does not encrypt traffic between pods, which means that an attacker who gained access to my VPC (a rogue AWS engineer, someone who manages to physically access an AWS data center, someone who compromised my AWS account...) would be able to sniff the network traffic. At the same time, in that scenario the attacker would have access to much more. If they have access to my AWS account, they can download the HTTPS certificate themselves, for instance. If they manage to walk into a (high-security) AWS data center and find my server, the same applies. It comes down to comparing the risk an attacker has to take against the value of your data. If your data includes credit card/payment details or sensitive personal data (date of birth, health details...), SSL encryption is a must.
Anyway, to secure pod-to-pod traffic, there are two options.
Update all the pod source code and add the certificate there. This requires a lot of maintenance if you are running many pods, and certificates expire every other year.
Add an extra 'network layer' like https://istio.io/. This adds complexity to your cluster and, in the case of EKS, support from AWS will be 'best effort'. Ideally, you would pay for Istio support. A minimal example of enforcing mTLS with Istio is shown below.
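If you go the Istio route, mesh-wide mutual TLS can be enforced with a single resource. A minimal sketch (the istio-system namespace and default name follow Istio's documented convention; nothing here comes from the original cluster):
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system   # applying it in the root namespace makes it mesh-wide
spec:
  mtls:
    mode: STRICT            # reject plain-text traffic between sidecars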
For the load balancer, I decided to add an ingress controller to the cluster (Nginx, Traefik...) and set it up with HTTPS. That's critical as the ELB sits on the public subnets.
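For reference, a minimal sketch of an HTTPS-terminating Ingress in front of the front-end service, assuming the front-end Service is switched to a plain ClusterIP service on port 80; the host, secret name and ingress class are placeholders:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: frontend
spec:
  ingressClassName: nginx                  # placeholder; depends on the controller
  tls:
    - hosts:
        - app.example.com                  # placeholder host
      secretName: app-example-com-tls      # TLS certificate stored as a Secret
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: service_name         # the front-end Service
                port:
                  number: 80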
I am attempting to connect a .NET Core API to a database on Azure SQL. Everything works fine while debugging and when running without Istio. As soon as I run with Istio, it does not work. I tried creating a ServiceEntry, but it is not helping. Can you help?
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: azure-sql
spec:
  hosts:
    - <servername>.database.windows.net
  addresses:
    - <ip address>
  ports:
    - name: tcp
      number: 1433
      protocol: TCP
  location: MESH_EXTERNAL
Am I missing something here?
I know this is an old question, and likely you already know this by now, but just in case anyone else is having this issue...
SQL Azure uses gateway redirection (i.e. it redirects to a different machine and port, so the host and/or port may be different from the one you whitelisted).
The issue: https://github.com/istio/istio/issues/6587 explains this better than I can.
The suggestion is to disable this gateway mode in SQL, but there may be performance implications if you do so.
I haven't seen any other way to get around this short of allowing all outbound traffic from your pods in your K8s deployment YAML:
...
template:
  metadata:
    annotations:
      traffic.sidecar.istio.io/excludeOutboundIPRanges: 0.0.0.0/0
...
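A slightly narrower variant of the same workaround, sketched here as an assumption rather than a verified fix: instead of bypassing the sidecar for all outbound traffic, exclude only the Azure SQL gateway IP ranges for your region (the CIDRs below are placeholders; look up the actual ranges for your Azure region):
...
template:
  metadata:
    annotations:
      # comma-separated list of CIDRs to exclude from Envoy redirection
      traffic.sidecar.istio.io/excludeOutboundIPRanges: "x.x.x.x/32,y.y.y.y/32"
...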
I'm trying to connect my Kubernetes deployments together via DNS.
I have a Java (Spring Boot) deployment and a JavaScript (Node.js) deployment, both exposed via a default ClusterIP Service. I need WebSocket and REST communication between both services.
I've read that I should use DNS so that these two services can talk to each other, but I'm having trouble determining what those DNS names are.
For example,
kubectl get pods --all-namespaces
gives me this:
NAMESPACE NAME
default javascript-deployment-65869b7db4-mxfrb
default java-deployment-54bfc87fd6-z8wml
What do I need to specify in my Service config to stop these random suffixes being applied?
How do I then determine what my DNS names need to be with a similar form of my-svc.my-namespace.svc.cluster.local?
About your questions:
1- Kubernetes generates those pod name suffixes on purpose: they ensure that pod names are unique, and the pod-template-hash part groups all the pods managed by the same ReplicaSet.
So, as a piece of advice, don't touch it. https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#pod-template-hash-label
2- kubectl get services -o wide will show you which port your app is listening on. You just need to use the cluster IP plus that port, as in CLUSTER_IP:PORT, to reach your service.
I fixed it using the Service metadata name and port.
For example, this is my service definition:
apiVersion: v1
kind: Service
metadata:
  name: my-big-deployment
spec:
  ports:
    - port: 8000
      protocol: TCP
      targetPort: 8000
  selector:
    app: my-service
From my applications in the cluster I can now access this service via the following environment variables:
MY_BIG_DEPLOYMENT_SERVICE_HOST
MY_BIG_DEPLOYMENT_SERVICE_PORT
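Since the question was specifically about DNS: the same Service is also reachable at my-big-deployment.default.svc.cluster.local (assuming the default namespace), or simply my-big-deployment from within the same namespace. Here is a sketch of wiring that into the consuming deployment via an environment variable; the deployment name, labels and image are assumptions:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: javascript-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: javascript
  template:
    metadata:
      labels:
        app: javascript
    spec:
      containers:
        - name: app
          image: node:18-alpine          # placeholder image
          env:
            - name: JAVA_API_URL
              # <service>.<namespace>.svc.cluster.local:<service port>
              value: "http://my-big-deployment.default.svc.cluster.local:8000"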