I'm trying to use fabric8-cdi described here: https://fabric8.io/guide/cdi.html
I'm using minikube while developing. I start an RC and a service named mev-rserve; here's the service running:
$ kubectl get services
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes 10.0.0.1 <none> 443/TCP 2d
mev-rserve 10.0.0.19 <pending> 6311:31744/TCP 49m
In my webapp I have this bean producer:
@Produces
static RConnection r(@ServiceName("mev-rserve") String endpoint) { /* ... */ }
This works fine if I declare the MEV_RSERVE_SERVICE_HOST and MEV_RSERVE_SERVICE_PORT environment variables as described in the doc I linked, but I want the library to look the service up from the Kubernetes API, and that isn't happening. Here's my configuration:
$ kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /Users/levkuznetsov/.minikube/ca.crt
    server: https://192.168.99.101:8443
  name: minikube
contexts:
- context:
    cluster: minikube
    user: minikube
  name: minikube
current-context: minikube
kind: Config
preferences: {}
users:
- name: minikube
  user:
    client-certificate: /Users/levkuznetsov/.minikube/apiserver.crt
    client-key: /Users/levkuznetsov/.minikube/apiserver.key
From that I've set up the environment as follows:
KUBERNETES_MASTER="https://192.168.99.101:8443"
KUBERNETES_API_VERSION="v1"
KUBERNETES_CERTS_CA_FILE="/Users/levkuznetsov/.minikube/ca.crt"
KUBERNETES_CERTS_CLIENT_FILE="/Users/levkuznetsov/.minikube/apiserver.crt"
KUBERNETES_CERTS_CLIENT_KEY_FILE="/Users/levkuznetsov/.minikube/apiserver.key"
Which results in this exception:
Caused by: java.lang.IllegalArgumentException: No kubernetes service could be found for name: mev-rserve in namespace: null
at io.fabric8.kubernetes.api.KubernetesHelper.getServiceURL(KubernetesHelper.java:1347)
at io.fabric8.cdi.Services.toServiceUrl(Services.java:38)
at io.fabric8.cdi.producers.ServiceUrlProducer.produce(ServiceUrlProducer.java:47)
at io.fabric8.cdi.producers.ServiceUrlProducer.produce(ServiceUrlProducer.java:26)
at io.fabric8.cdi.bean.ProducerBean.create(ProducerBean.java:43)
...
Thanks in advance
In case anyone else is struggling with this: I traced it down to the missing namespace definition. I do not declare a KubernetesClient bean, so I never set the default namespace, which turns out to be null. I don't want to declare one in the app, since in production the environment variables will take precedence anyway; this is for development only. I found it cleaner to set up the environment accordingly.
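For example, a minimal sketch of that environment setup, assuming the fabric8 client picks the default namespace up from the KUBERNETES_NAMESPACE variable and that the service lives in the default namespace:
# added alongside the variables above so the service lookup has a namespace to search in
export KUBERNETES_NAMESPACE="default"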
I'm trying to deploy a gRPC server with Kubernetes and connect to it from outside the cluster.
The relevant part of the server:
function main() {
  var hello_proto = grpc.loadPackageDefinition(packageDefinition).helloworld;
  var server = new grpc.Server();
  server.addService(hello_proto.Greeter.service, {sayHello: sayHello});
  const url = '0.0.0.0:50051';
  server.bindAsync(url, grpc.ServerCredentials.createInsecure(), () => {
    server.start();
    console.log("Started server! on " + url);
  });
}

function sayHello(call, callback) {
  console.log('Hello request');
  callback(null, {message: 'Hello ' + call.request.name + ' from ' + require('os').hostname()});
}
And here is the relevant part of the client:
function main() {
  var target = '0.0.0.0:50051';
  let pkg = grpc.loadPackageDefinition(packageDefinition);
  let Greeter = pkg.helloworld["Greeter"];
  var client = new Greeter(target, grpc.credentials.createInsecure());
  var user = "client";
  client.sayHello({name: user}, function(err, response) {
    console.log('Greeting:', response.message);
  });
}
When I run them manually with Node.js, as well as when I run the server in a Docker container (the client is still run with node, outside a container), it works just fine.
The Dockerfile, run with the command docker run -it -p 50051:50051 helloapp:
FROM node:carbon
# Create app directory
WORKDIR /usr/src/app
COPY package.json .
COPY package-lock.json .
RUN npm install
COPY . .
CMD npm start
However, when I deploy the server with Kubernetes (again, the client isn't run within a container), I'm not able to connect.
The yaml file is as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: helloapp
  strategy: {}
  template:
    metadata:
      labels:
        app: helloapp
    spec:
      containers:
      - image: isolatedsushi/helloapp
        name: helloapp
        ports:
        - containerPort: 50051
          name: helloapp
        resources: {}
status: {}
---
apiVersion: v1
kind: Service
metadata:
  name: helloservice
spec:
  selector:
    app: helloapp
  ports:
  - name: grpc
    port: 50051
    targetPort: 50051
The deployment and the service start up just fine
kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
helloservice ClusterIP 10.105.11.22 <none> 50051/TCP 17s
kubectl get pods
NAME READY STATUS RESTARTS AGE
helloapp-dbdfffb-brvdn 1/1 Running 0 45s
But when I run the client it can't reach the server.
Any ideas what I'm doing wrong?
As mentioned in comments
ServiceTypes
If you have exposed your service as ClusterIP, it's visible only internally in the cluster. If you want to expose your service externally, you have to use either NodePort or LoadBalancer; see the sketch after the excerpt below.
Publishing Services (ServiceTypes)
For some parts of your application (for example, frontends) you may want to expose a Service onto an external IP address, that's outside of your cluster.
Kubernetes ServiceTypes allow you to specify what kind of Service you want. The default is ClusterIP.
Type values and their behaviors are:
ClusterIP: Exposes the Service on a cluster-internal IP. Choosing this value makes the Service only reachable from within the cluster. This is the default ServiceType.
NodePort: Exposes the Service on each Node's IP at a static port (the NodePort). A ClusterIP Service, to which the NodePort Service routes, is automatically created. You'll be able to contact the NodePort Service, from outside the cluster, by requesting <NodeIP>:<NodePort>.
LoadBalancer: Exposes the Service externally using a cloud provider's load balancer. NodePort and ClusterIP Services, to which the external load balancer routes, are automatically created.
ExternalName: Maps the Service to the contents of the externalName field (e.g. foo.bar.example.com), by returning a CNAME record with its value. No proxying of any kind is set up.
Related documentation about that.
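For example, the existing helloservice could be switched to NodePort in place; a rough sketch, assuming the service name from the manifest above and letting Kubernetes assign the node port:
# change the service type, then read back the assigned node port
kubectl patch svc helloservice -p '{"spec": {"type": "NodePort"}}'
kubectl get svc helloservice   # PORT(S) will show something like 50051:3xxxx/TCP
The gRPC client would then target <node-ip>:<node-port> instead of 0.0.0.0:50051.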
Minikube
With minikube you can achieve that with the minikube service command.
There is documentation about minikube service and there is an example.
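Roughly, once the service is reachable as a NodePort (or LoadBalancer), something like this prints a URL that works from the machine running minikube (service name taken from the manifest above):
minikube service helloservice --url
# prints something like http://<minikube-ip>:<node-port>; use that host:port as the gRPC client target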
grpc http/https
As mentioned here by @murgatroid99
The gRPC library does not recognize the https:// scheme for addresses, so that target name will cause it to try to resolve the wrong name. You should instead use grpc-server-xxx.com:9090 or dns:grpc-server-xxx.com:9090 or dns:///grpc-server-xxx.com:9090. More detailed information about how gRPC interprets channel target names can be found in this documentation page.
As it does not recognize https://, I assume the same applies to http://, so that's not possible.
kubectl port-forward
Additionally, as @IsolatedSushi mentioned
It also works when I port-forward with the command kubectl -n hellospace port-forward svc/helloservice 8080:50051
As mentioned here
Kubectl port-forward allows you to access and interact with internal Kubernetes cluster processes from your localhost. You can use this method to investigate issues and adjust your services locally without the need to expose them beforehand.
There is an example in documentation.
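Put together, a rough sketch of that workflow (the namespace and service name come from the comment above; client.js is just a placeholder for however you start the client):
# forward local port 8080 to port 50051 of the service inside the cluster
kubectl -n hellospace port-forward svc/helloservice 8080:50051 &
# point the client at the forwarded port, i.e. target = 'localhost:8080' instead of '0.0.0.0:50051'
node client.js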
I'm trying to get the hello-node service running and accessible from outside on an Azure VM with minikube.
minikube start --driver=virtualbox
created deployment
kubectl create deployment hello-node --image=k8s.gcr.io/echoserver
exposed deployment
kubectl expose deployment hello-node --type=LoadBalancer --port=8080
suppose kubectl get services says:
hello-node LoadBalancer 1.1.1.1 8080:31382/TCP
The public IP of the azure VM is 2.2.2.2, the private IP is 10.10.10.10 and the virtualbox IP is 192.168.99.1/24
How can I access the service from a browser outside the cluster's network?
In your case, you need to use --type=NodePort for creating a service object that exposes the deployment. The type=LoadBalancer service is backed by external cloud providers, which minikube doesn't provide out of the box.
kubectl expose deployment hello-node --type=NodePort --name=hello-node-service
Display information about the Service:
kubectl describe services hello-node-service
The output should be similar to this:
Name: example-service
Namespace: default
Labels: run=load-balancer-example
Annotations: <none>
Selector: run=load-balancer-example
Type: NodePort
IP: 10.32.0.16
Port: <unset> 8080/TCP
TargetPort: 8080/TCP
NodePort: <unset> 31496/TCP
Endpoints: 10.200.1.4:8080,10.200.2.5:8080
Session Affinity: None
Events: <none>
Make a note of the NodePort value for the service. For example, in the preceding output, the NodePort value is 31496.
Get the public IP address of your VM; then you can use this URL:
http://<public-vm-ip>:<node-port>
Don't forget to open this port in firewall rules.
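For example, on an Azure VM that could look roughly like this (a sketch; the resource group and VM names are placeholders, and 31496 is the NodePort noted above):
# allow inbound traffic to the NodePort through the VM's network security group
az vm open-port --resource-group <my-resource-group> --name <my-vm> --port 31496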
I need to set variables inside my server.xml at the time my pod is created. I tried the following and it did not work.
server.xml
<Realm className="org.apache.catalina.realm.JDBCRealm" connectionURL="${db_url}" driverName="com.microsoft.sqlserver.jdbc.SQLServerDriver" roleNameCol="role" userCredCol="password" userNameCol="login" userRoleTable="userRole" userTable="v_login"/>
and my pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: dbtest
spec:
  containers:
  - name: dbtest-container
    image: xxx.azurecr.io/iafoxteste:latest
    ports:
    - containerPort: 8080
    env:
    - name: db_url
      value: "jdbc:sqlserver://xxx.database.windows.net:1433;database=xxx;user=xxx#iafox;password=xxxx;encrypt=true;trustServerCertificate=true;hostNameInCertificate=*.database.windows.net;loginTimeout=30;"
Unless Java can do that natively, Kubernetes won't do it for you, so you need an init script that reads the environment variables and replaces the tokens in your server.xml, or make your app do that somehow.
Kubernetes can't do token replacement.
As was mentioned, Kubernetes doesn't do it for you. In order to pass that value to Tomcat you need to add db_url as a Java system property, e.g. -Ddb_url="jdbc:sqlserver://xxx.database.windows.net:1433;database=xxx;user=xxx#iafox;password=xxxx;encrypt=true;....". Then you need a startup shell script that gets this value from the environment variable and passes it to your CATALINA_OPTS.
Check this stackoverflow question Java system properties and environment variables
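A minimal sketch of what that startup script could look like, assuming a standard Tomcat layout where bin/setenv.sh is sourced at startup (the property name db_url matches the ${db_url} placeholder in the server.xml above):
#!/bin/sh
# bin/setenv.sh -- forward the db_url environment variable (set in the pod's env block)
# to Tomcat as a Java system property so ${db_url} in server.xml resolves to it
CATALINA_OPTS="$CATALINA_OPTS -Ddb_url=${db_url}"
export CATALINA_OPTS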
I'm having issues with internal DNS/service resolution within Kubernetes and I can't seem to track the issue down. I have an api-gateway pod running Kong, which calls other services by their internal service name, i.e. srv-name.staging.svc.cluster.local, and this was working fine up until recently. I then attempted to deploy 3 more services, into two namespaces, staging and production.
The first service works as expected when calling booking-service.staging.svc.cluster.local; however, the same code doesn't seem to work for the production service, and the other two services don't work in either namespace.
The behavior I'm getting is a timeout. If I curl these services from my gateway pod, they all time out, apart from the first service deployed (booking-service.staging.svc.cluster.local). When I call these services from another container within the same pod, they do work as expected.
I have Node services set up for each service I wish to expose to the client side.
Here's an example Kubernetes deployment:
---
# API
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: {{SRV_NAME}}
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: {{SRV_NAME}}
    spec:
      containers:
      - name: booking-api
        image: microhq/micro:kubernetes
        args:
        - "api"
        - "--handler=rpc"
        env:
        - name: PORT
          value: "8080"
        - name: ENV
          value: {{ENV}}
        - name: MICRO_REGISTRY
          value: "kubernetes"
        ports:
        - containerPort: 8080
      - name: {{SRV_NAME}}
        image: eu.gcr.io/{{PROJECT_NAME}}/{{SRV_NAME}}:latest
        imagePullPolicy: Always
        command: [
          "./service",
          "--selector=static"
        ]
        env:
        - name: MICRO_REGISTRY
          value: "kubernetes"
        - name: ENV
          value: {{ENV}}
        - name: DB_HOST
          value: {{DB_HOST}}
        - name: VERSION
          value: "{{VERSION}}"
        - name: MICRO_SERVER_ADDRESS
          value: ":50051"
        ports:
        - containerPort: 50051
          name: srv-port
---
apiVersion: v1
kind: Service
metadata:
  name: booking-service
spec:
  ports:
  - name: api-http
    port: 80
    targetPort: 8080
    protocol: TCP
  selector:
    app: booking-api
I'm using go-micro https://github.com/micro/go-micro with the Kubernetes pre-configuration, which again works absolutely fine in one case, but not in the others, which leads me to believe it's not code related. It also works fine locally.
When I do nslookup from another pod, it resolves the name and finds the cluster IP for the internal Node service as expected. When I attempt to cURL that IP address, I get the same timeout behavior.
I'm using Kubernetes 1.8 on Google Cloud.
I don't understand why you think that it is an issue with the internal DNS/service resolution within Kubernetes since when you perform the DNS lookup it works, but if you query that IP you get a connection timeout.
If you curl these services from outside the pod they all timeout, apart from the first service deployed, no matter if you used the IP or the domain name.
When you call these services from another container within the same pod, they do work as expected.
It seems like an issue with the connection between pods more than a DNS issue, therefore I would focus your troubleshooting in that direction, but correct me if I'm wrong.
Can you perform the classical networking troubleshooting (ping, telnet, traceroute) from a pod toward the IP given by the DNS lookup, and from one of the containers that is timing out toward one of the other pods, and update the question with the results?
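A rough sketch of what that could look like from the gateway pod (names in angle brackets are placeholders, and this assumes the image ships these tools; note a ClusterIP may not answer ping even when healthy, so the telnet check against the service port is the more telling one):
kubectl exec -it <api-gateway-pod> -- sh
# inside the pod:
nslookup booking-service.staging.svc.cluster.local
ping <ip-returned-by-nslookup>
telnet <ip-returned-by-nslookup> 80
traceroute <ip-returned-by-nslookup>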
I am attempting to create a service for creating training datasets using the Prodigy UI tool. I would like to do this using a Kubernetes cluster which is running in Azure cloud. My Prodigy UI should be reachable on 0.0.0.0:8880 (on the container).
As such, I created a deployment as follows:
kind: Deployment
apiVersion: apps/v1beta2
metadata:
  name: prodigy-dply
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prodigy_pod
  template:
    metadata:
      labels:
        app: prodigy_pod
    spec:
      containers:
      - name: prodigy-sentiment
        image: bdsdev.azurecr.io/prodigy
        imagePullPolicy: IfNotPresent
        command: ["/bin/bash"]
        args: ["-c", "prodigy spacy textapi -F training_recipe.py"]
        ports:
        - name: prodigyport
          containerPort: 8880
This should (should being the operative word here) expose port 8880 at the pod level, aliased as prodigyport.
Following that, I have created a Service as below:
kind: Service
apiVersion: v1
metadata:
  name: prodigy-service
spec:
  type: LoadBalancer
  selector:
    app: prodigy_pod
  ports:
  - protocol: TCP
    port: 8000
    targetPort: prodigyport
At this point, when I run the associated kubectl create -f <deployment>.yaml and kubectl create -f <service>.yaml, I get an ExternalIP and associated Port: 10.*.*.*:34672.
This is not reachable by browser, and I'm assuming I have a misunderstanding of how my browser would interact with this Service, Pod, and the underlying Container. What am I missing here?
Note: I am willing to accept that kubernetes may not be the tool for the job here, it seems enticing because of the ease of scalability and updating images to reflect more recent configurations
You can find the public IP address (LoadBalancer Ingress) with this command:
kubectl get service azure-vote-front
The result looks like this:
root#k8s-master-79E9CFFD-0:~# kubectl get service azure
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
azure 10.0.136.182 52.224.219.190 8080:31419/TCP 10m
Then you can browse it with external IP and port, like this:
curl 52.224.219.190:8080
Also, you can find the Load Balancer rules via the Azure portal.
Hope this helps.
You can find the IP address created for your service by getting the service information through kubectl:
kubectl describe services prodigy-service
The IP address is listed next to LoadBalancer Ingress.
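A rough sketch of checking it once the external IP is assigned (port 8000 comes from the Service definition in the question; the IP is whatever LoadBalancer Ingress / EXTERNAL-IP reports):
kubectl get service prodigy-service
# then, with the EXTERNAL-IP it reports:
curl http://<external-ip>:8000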
Also, you can use port forwarding to access your pod:
kubectl port-forward <pod_name> 8880:8880
After that you can access the Prodigy UI at localhost:8880 in your browser.