I used Helm to install Grafana from its GitHub chart repo, as shown below:
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm install prometheus prometheus-community/kube-prometheus-stack -n prometheus
kubectl get namespaces
After that I configured Grafana; it runs on port 9090, which I exposed using the port-forward command.
When I open the browser at localhost:9090 I can't see any Grafana page; it shows an ERR_CONNECTION_REFUSED / localhost refused to connect error. After some time, when I refresh, I am able to see the Grafana page.
Everything works fine up to this point. After that, when I try to open the Grafana dashboard to monitor the logs, it asks for an admin username and credentials, but I never set any credentials.
My questions are:
1) How do I log in to Grafana to monitor the logs? I also want to import Azure Monitor metrics to check them in the Grafana dashboard.
2) Do we need to add role-based policies or a service principal to log in to Grafana?
Can anyone help me with how to log in to the Grafana dashboard to monitor the logs?
Thanks in advance
My Grafana page looks like this:
I tried to reproduce the same issue in my environment and got the results below.
To log in to the Grafana dashboard, we don't need to assign a service principal or role-based policies; only if we want to get Azure metrics into the Grafana dashboard do we have to add role-based policies.
I added the Helm repository and updated it using the commands below:
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
Installed Prometheus using the command below:
helm install prometheus prometheus-community/kube-prometheus-stack -n namespace_name
I configured Prometheus and Grafana and exposed Prometheus on port 9090 using port forwarding:
kubectl port-forward -n prometheus prometheus-prometheus-kube-prometheus-prometheus-0 9090
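If you don't want to depend on the exact pod name, you can usually forward the Prometheus Service instead. The service name below is what the kube-prometheus-stack chart typically creates for a release named prometheus; verify it with kubectl get svc -n prometheus first:
kubectl port-forward -n prometheus svc/prometheus-kube-prometheus-prometheus 9090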
To log in to the Grafana dashboard, a username and password are required.
I used the commands below to get them; the admin username and password are stored base64-encoded in a Kubernetes secret and can be decoded with base64:
kubectl get secret -n namespace_name prometheus-grafana -o jsonpath='{.data.admin-user}' | base64 -d
kubectl get secret -n namespace_name prometheus-grafana -o jsonpath='{.data.admin-password}' | base64 -d
I did local port forwarding for the Grafana pod (use your own pod name):
kubectl port-forward -n prometheus prometheus-grafana-55b48f4cf9-rzkgv 3000
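Alternatively, forward the Grafana Service so you don't have to look up the pod hash. The chart typically names it <release>-grafana (here prometheus-grafana), listening on port 80 and targeting Grafana's 3000:
kubectl port-forward -n prometheus svc/prometheus-grafana 3000:80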
When I open localhost:3000 and log in with that username and password, I can see the Grafana dashboard.
If we use role-based policies, we can access the Azure metrics for a particular service area.
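A minimal sketch of what that can look like, assuming you use Grafana's Azure Monitor data source with a service principal (the name and scope below are placeholders, not values from this setup):
az ad sp create-for-rbac --name grafana-azure-monitor
az role assignment create --assignee <appId_from_the_output_above> --role "Monitoring Reader" --scope /subscriptions/<subscription_id>
Then enter the tenant ID, client ID, and client secret from the first command's output into the Azure Monitor data source settings in Grafana.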
You can find the list of commands below for reference.
Related
At my company, we're setting up an on-prem k8s cluster in combination with a private image repository hosted on Azure (Azure Container Registry).
For development purposes, I'd like our developers to be able to run the apps they create locally using minikube. ACR offers several ways to authenticate, including:
Individual login
Access Token
Service Principal
When developing locally and using the Docker CLI, individual authentication can be set up by running az acr login -n my-repository.azurecr.io. We manage all SSO authentication through Azure Active Directory, and Docker Desktop comes with the docker-credential-wincred.exe helper to delegate authentication handling to the Windows Credential Store. This is specified in ~\.docker\config.json. Pretty neat and seamless, love it.
However, when authenticating k8s to work with an ACR, most documentation steers you towards setting up a Service Principal and storing its credentials in a k8s secret. For a production environment this makes perfect sense, but for a developer machine it feels a bit weird to authenticate using headless credentials.
An alternative is to pull images manually and set imagePullPolicy to IfNotPresent in the k8s manifest. The tricky part here is that Minikube runs in a separate VM with its own Docker daemon. By default, the Docker CLI is not configured to connect to this daemon. Minikube exposes some control over the daemon through the minikube CLI (e.g. minikube image pull), but here we run into authorization issues. Alternatively, we can configure the Docker CLI, which is already set up to use the Windows credential store, to connect to the minikube daemon. This is pretty straightforward using minikube docker-env.
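For reference, the flow I'm describing is roughly this (PowerShell, since we're on Windows; my-repository and my-app are placeholders):
# point the local Docker CLI at minikube's Docker daemon
& minikube docker-env --shell powershell | Invoke-Expression
# authenticate to ACR with individual (AAD) credentials via the credential store
az acr login -n my-repository.azurecr.io
# pull the image straight into minikube's daemon
docker pull my-repository.azurecr.io/my-app:latest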
So, this works... but my question is: isn't there a more sensible way to do this? Can't minikube image somehow be configured to work with the windows credentials store?
I recently installed Eclipse Hono on a Minikube with driver=none.
I want to use Grafana to show me the telemetry data and therefore I need to use Prometheus.
As you can see in the following picture, no port is assigned to Grafana or Prometheus.
This port needs to be exposed, as I need to access the application from outside the server where Hono is running.
How can I activate the Prometheus and Grafana services?
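I assume it needs something like the following (a guess on my side; the namespace and service names are placeholders I'd verify with the first command), but I'm not sure this is the intended way:
kubectl -n hono get svc
# expose via NodePort:
kubectl -n hono patch svc <grafana_service_name> -p '{"spec":{"type":"NodePort"}}'
# or port-forward bound to all interfaces so it's reachable from outside the server:
kubectl -n hono port-forward --address 0.0.0.0 svc/<grafana_service_name> 3000:3000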
A Spring Boot REST API is deployed as a web app by firing up a Docker image in Azure. After that I need to make a POST request to test the API. Here come the issues: it seems I can't access the API. It is not an issue with the code itself, since I get the expected result when I run the code locally.
Here are some of my key steps:
I added the following command to fire up the application from the Docker image (the Docker image is saved in Azure Container Registry):
docker run -d -p 8177:8177 my-api-image:latest
Logged in to Azure from the Azure CLI:
az login
I sent the POST request from the terminal:
curl -X POST -d 'from=161&to=169&limit=100' https://<my-app-name>.azurewebsites.net:8177/readRecords
But I keep getting a connection timeout error:
Failed to connect to <my-app-name>.azurewebsites.net port 8177: Connection timed out
I also tried running the curl command from the shell in the Azure Portal in the browser, and it also gave me the timeout error. Does anyone know the reason for this, and how I can solve it so that I can send a POST request?
Azure Web Apps only support HTTP on port 80 and HTTPS on port 443, so your port 8177 doesn't work from outside. For more details, please read my answers in the posts below.
Related posts:
1. Strapi on Azure does not run
2. Django channels and azure
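If the app must keep listening on 8177 inside the container, a common approach (a sketch; the resource group and app name are placeholders) is to tell App Service which port the container exposes via the WEBSITES_PORT setting, then call the site without a port, since App Service still serves it on 80/443 externally:
az webapp config appsettings set -g <resource_group> -n <my-app-name> --settings WEBSITES_PORT=8177
curl -X POST -d 'from=161&to=169&limit=100' https://<my-app-name>.azurewebsites.net/readRecords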
We just deployed our custom Docker image to Azure Web App Services. I think Docker is running fine, but we need to know the container ID of the specific web app service.
We checked the logs in the Azure portal for the App Service instance; no luck.
We tried to SSH into the container to grab the container ID; got a "connection refused" error.
Does anyone know how to easily grab the container ID of a web app that runs from a Docker image on Azure?
Thanks
I'm not sure what you mean by container ID, but what you can get from a Web App are the instance ID and the container name. You can find both in the logs under the container settings.
As for connecting to the container: it failed because you didn't enable SSH in your custom image. You can follow the steps here to enable SSH in the custom image.
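For example, the container name shows up in the Docker host log stream, which you can read from the CLI (the resource group and app name are placeholders):
az webapp log tail -g <resource_group> -n <app_name>
az webapp log download -g <resource_group> -n <app_name> --log-file webapp_logs.zip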
I have a Node.js app in Azure and I want it to be able to connect/talk to a pod on a Kubernetes cluster (this pod has an external IP and is exposed via a load balancer). On clicking a submit button in my Node.js app, I want to be able to send bash commands to the pod on the Kubernetes cluster.
Would you know how I could connect the app to the pod? I know there is a server.listen function in the index.js file, but I am not too sure how to approach this situation.
Thanks for the help
I'm not sure, but you can connect to the pod in a Kubernetes cluster the same way you connect to a database or other services from a Node.js app.
First of all, you should get the credentials of the AKS cluster through the command az aks get-credentials to create a tunnel to the Kubernetes cluster. Then you can use the command kubectl exec to execute bash shell commands in the pod.
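A minimal sketch of that flow, with placeholder names:
# merge the AKS credentials into your kubeconfig
az aks get-credentials -g <resource_group> -n <aks_cluster_name>
# run a bash command inside the pod
kubectl exec <pod_name> -- /bin/bash -c "echo hello from the pod"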
Also, you can take a look at Executing commands in Pods using K8s API; maybe it will be helpful.
Here are examples of managing pods in a Kubernetes cluster: see kubernetes-client.