I created an AKS cluster with HTTP routing enabled. I also have my project set up with Dev Spaces to use the cluster. When running 'azds up', the app creates all the necessary deployment files (helm.yaml, charts.yaml, values.yaml). However, I want to access my app through a public Dev Spaces URL, but when I run 'azds list-uris' it only gives me the localhost URL, not the Dev Spaces URL.
Can anyone please help?
My azds.yaml looks like this:
kind: helm-release
apiVersion: 1.1
build:
  context: .
  dockerfile: Dockerfile
install:
  chart: charts/webfrontend
  values:
  - values.dev.yaml?
  - secrets.dev.yaml?
  set:
    # Optionally, specify an array of imagePullSecrets. These secrets must be manually created in the namespace.
    # This will override the imagePullSecrets array in values.yaml file.
    # If the dockerfile specifies any private registry, the imagePullSecret for that registry must be added here.
    # ref: https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod
    #
    # For example, the following uses credentials from secret "myRegistryKeySecretName".
    #
    # imagePullSecrets:
    # - name: myRegistryKeySecretName
    replicaCount: 1
    image:
      repository: webfrontend
      tag: $(tag)
      pullPolicy: Never
    ingress:
      annotations:
        kubernetes.io/ingress.class: traefik-azds
      hosts:
      # This expands to form the service's public URL: [space.s.][rootSpace.]webfrontend.<random suffix>.<region>.azds.io
      # Customize the public URL by changing the 'webfrontend' text between the $(rootSpacePrefix) and $(hostSuffix) tokens
      # For more information see https://aka.ms/devspaces/routing
      - $(spacePrefix)$(rootSpacePrefix)webfrontend$(hostSuffix)
configurations:
  develop:
    build:
      dockerfile: Dockerfile.develop
      useGitIgnore: true
      args:
        BUILD_CONFIGURATION: ${BUILD_CONFIGURATION:-Debug}
    container:
      sync:
      - "**/Pages/**"
      - "**/Views/**"
      - "**/wwwroot/**"
      - "!**/*.{sln,csproj}"
      command: [dotnet, run, --no-restore, --no-build, --no-launch-profile, -c, "${BUILD_CONFIGURATION:-Debug}"]
      iterate:
        processesToKill: [dotnet, vsdbg, webfrontend]
        buildCommands:
        - [dotnet, build, --no-restore, -c, "${BUILD_CONFIGURATION:-Debug}"]
I followed the guide below:
https://microsoft.github.io/AzureTipsAndTricks/blog/tip228.html
'azds up' is only giving me an endpoint on localhost:
Service 'webfrontend' port 80 (http) is available via port forwarding at http://localhost:50597
Does your azds.yaml file have an ingress definition for the public 'webfrontend' domain?
Here is an example azds.yaml file created from the .NET Core sample application:
kind: helm-release
apiVersion: 1.1
build:
  context: .
  dockerfile: Dockerfile
install:
  chart: charts/webfrontend
  values:
  - values.dev.yaml?
  - secrets.dev.yaml?
  set:
    replicaCount: 1
    image:
      repository: webfrontend
      tag: $(tag)
      pullPolicy: Never
    ingress:
      annotations:
        kubernetes.io/ingress.class: traefik-azds
      hosts:
      # This expands to [space.s.][rootSpace.]webfrontend.<random suffix>.<region>.azds.io
      # Customize the public URL by changing the 'webfrontend' text between the $(rootSpacePrefix) and $(hostSuffix) tokens
      # For more information see https://aka.ms/devspaces/routing
      - $(spacePrefix)$(rootSpacePrefix)webfrontend$(hostSuffix)
configurations:
  develop:
    build:
      dockerfile: Dockerfile.develop
      useGitIgnore: true
      args:
        BUILD_CONFIGURATION: ${BUILD_CONFIGURATION:-Debug}
    container:
      sync:
      - "**/Pages/**"
      - "**/Views/**"
      - "**/wwwroot/**"
      - "!**/*.{sln,csproj}"
      command: [dotnet, run, --no-restore, --no-build, --no-launch-profile, -c, "${BUILD_CONFIGURATION:-Debug}"]
      iterate:
        processesToKill: [dotnet, vsdbg]
        buildCommands:
        - [dotnet, build, --no-restore, -c, "${BUILD_CONFIGURATION:-Debug}"]
More about it: https://learn.microsoft.com/pl-pl/azure/dev-spaces/how-dev-spaces-works-prep
How many service entries do you see in the 'azds up' log? Are you seeing something similar to:
Service 'webfrontend' port 'http' is available at http://webfrontend.XXX
Did you follow this guide?
https://learn.microsoft.com/pl-pl/azure/dev-spaces/troubleshooting#dns-name-resolution-fails-for-a-public-url-associated-with-a-dev-spaces-service
Do you have the latest version of the azds CLI?
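If in doubt, it is quick to confirm the CLI version and re-check which endpoints are exposed (assuming the standard Dev Spaces CLI commands; once the ingress host is defined, the public azds.io URL should appear next to the localhost port-forward one):

azds --version
azds list-uris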
Related
I uploaded a project to Kubernetes, and for its gateway to route to the services, it requires the following entries:
127.0.0.1 app.my.project
127.0.0.1 event-manager.my.project
127.0.0.1 logger.my.project
and so on.
I can't run any sudo commands, so 'sudo nano /etc/hosts' doesn't work. I tried 'vi /etc/hosts' and it gives a permission denied error. How can I edit the /etc/hosts file, or do some configuration on Azure to make it work like that?
Edit:
To give more information, I have uploaded a project to Kubernetes that has reverse-proxy settings.
So the project's web app cannot be reached via IP. Instead, when I run the application locally, I have to edit the hosts file of the computer I'm using with
127.0.0.1 app.my.project
127.0.0.1 event-manager.my.project
127.0.0.1 logger.my.project
and so on. So whenever I type web-app.my.project, the gateway routes to the web-app part, and if I type app.my.project it routes to the app part, etc.
When I uploaded it to Azure Kubernetes Service, it added a default-http-backend in the ingress-nginx namespace, which was created automatically. To expose these services, I enabled the HTTP routing option in Azure, which gave me the load balancer on the left side of the image. So, if I'm reading the situation correctly (I'm most probably wrong though), it is something like the image below:
So, I added hostAliases to the kube-system, ingress-nginx and default namespaces to mimic editing the hosts file, as I did when running the project locally. But it still gives me the 'default backend - 404' ingress error.
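As a side note, the 'default backend - 404' response from the NGINX ingress controller means no Ingress rule matched the requested host, so the request fell through to default-http-backend. A minimal sketch of a host-based rule is below; the resource and Service names, the port, and the networking.k8s.io/v1 apiVersion are illustrative assumptions (older clusters use extensions/v1beta1), not values taken from this project:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-project-routes            # hypothetical name
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: app.my.project             # one rule per hostname the gateway should answer for
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app                # hypothetical Service name
            port:
              number: 80
  # ...repeat the rule for event-manager.my.project, logger.my.project, and so on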
Edit 2:
I have an nginx-ingress-controller which handles the routing, as far as I understand. So I add hostAliases to it as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ingress-nginx
  template:
    metadata:
      labels:
        app: ingress-nginx
      annotations:
        prometheus.io/port: '10254'
        prometheus.io/scrape: 'true'
    spec:
      hostAliases:
      - ip: "127.0.0.1"
      hostnames:
      - "app.ota.local"
      - "gateway.ota.local"
      - "treehub.ota.local"
      - "tuf-reposerver.ota.local"
      - "web-events.ota.local"
      hostNetwork: true
      serviceAccountName: nginx-ingress-serviceaccount
      containers:
      - name: nginx-ingress-controller
        image: {{ .ingress_controller_docker_image }}
        args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
        - --configmap=$(POD_NAMESPACE)/nginx-configuration
        - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
        - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        ports:
        - name: tcp
          containerPort: 8000
        - name: http
          containerPort: 80
        - name: https
          containerPort: 443
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
When I edit the YAML file as described above, it gives the following error on Azure:
Failed to update the deployment
Failed to update the deployment 'nginx-ingress-controller'. Error: BadRequest (400) : Deployment in version "v1" cannot be handled as a Deployment: v1.Deployment.Spec: v1.DeploymentSpec.Template: v1.PodTemplateSpec.Spec: v1.PodSpec.HostAliases: []v1.HostAlias: readObjectStart: expect { or n, but found ", error found in #10 byte of ...|liases":["ip "127.0|..., bigger context ...|theus.io/scrape":"true"}},"spec":{"hostAliases":["ip "127.0.0.1""],"hostnames":["app.ota.local","g|...
If I edit the YAML file locally and try to apply it with my local kubectl, which is connected to Azure, it gives the following error:
serviceaccount/weave-net configured
clusterrole.rbac.authorization.k8s.io/weave-net configured
clusterrolebinding.rbac.authorization.k8s.io/weave-net configured
role.rbac.authorization.k8s.io/weave-net configured
rolebinding.rbac.authorization.k8s.io/weave-net configured
daemonset.apps/weave-net configured
Using cluster from kubectl context: k8s_14
namespace/ingress-nginx unchanged
deployment.apps/default-http-backend unchanged
service/default-http-backend unchanged
configmap/nginx-configuration unchanged
configmap/tcp-services unchanged
configmap/udp-services unchanged
serviceaccount/nginx-ingress-serviceaccount unchanged
clusterrole.rbac.authorization.k8s.io/nginx-ingress-clusterrole unchanged
role.rbac.authorization.k8s.io/nginx-ingress-role unchanged
rolebinding.rbac.authorization.k8s.io/nginx-ingress-role-nisa-binding unchanged
clusterrolebinding.rbac.authorization.k8s.io/nginx-ingress-clusterrole-nisa-binding unchanged
error: error validating "/home/.../ota-community-edition/scripts/../generated/templates/ingress": error validating data: ValidationError(Deployment.spec.template.spec): unknown field "hostnames" in io.k8s.api.core.v1.PodSpec; if you choose to ignore these errors, turn validation off with --validate=false
make: *** [Makefile:34: start_start-all] Error 1
Adding entries to a Pod's /etc/hosts file provides a Pod-level override of hostname resolution when DNS and other options are not applicable. You can add these custom entries with the HostAliases field in PodSpec. Modification not using HostAliases is not suggested because the file is managed by the kubelet and can be overwritten during Pod creation/restart.
I suggest that you use hostAliases instead:
apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-pod
spec:
  restartPolicy: Never
  hostAliases:
  - ip: "127.0.0.1"
    hostnames:
    - "app.my.project"
    - "event-manager.my.project"
    - "logger.my.project"
  containers:
  - name: cat-hosts
    image: busybox
    command:
    - cat
    args:
    - "/etc/hosts"
Thanks in advance for this awesome stack platform that is JHipster.
I have a question: I am trying to run a microservice directly with:
./mvnw -Pdev -DskipTests
And I am getting an UnknownHostException for 'http://admin:admin@jhipster-registry:8761/eureka/':
2021-09-16 10:06:26.225 INFO 6762 --- [ restartedMain] c.n.d.s.t.d.RedirectingEurekaHttpClient : Request execution error. endpoint=DefaultEndpoint{ serviceUrl='http://admin:admin@jhipster-registry:8761/eureka/'}, exception=I/O error on GET request for "http://admin:admin@jhipster-registry:8761/eureka/apps/": jhipster-registry: Name or service not known; nested exception is java.net.UnknownHostException: jhipster-registry: Name or service not known stacktrace=org.springframework.web.client.ResourceAccessException: I/O error on GET request for "http://admin:admin@jhipster-registry:8761/eureka/apps/": jhipster-registry: Name or service not known; nested exception is java.net.UnknownHostException: jhipster-registry: Name or service not known
at org.springframework.web.client.RestTemplate.doExecute(RestTemplate.java:785)
at org.springframework.web.client.RestTemplate.execute(RestTemplate.java:711)
at org.springframework.web.client.RestTemplate.exchange(RestTemplate.java:602)
at org.springframework.cloud.netflix.eureka.http.RestTemplateEurekaHttpClient.getApplic
My question is: why is it trying to use the host jhipster-registry:8761 instead of what I have in the dev configuration, "localhost"?
eureka:
  instance:
    prefer-ip-address: true
  client:
    service-url:
      defaultZone: http://admin:${jhipster.registry.password}@localhost:8761/eureka/
Right now I am using docker-compose in order to run the needed services, like the registry:
services:
  jhipster-registry:
    image: jhipster/jhipster-registry:v6.8.0
    volumes:
      - ./central-server-config:/central-config
    # By default the JHipster Registry runs with the "dev" and "native"
    # Spring profiles.
    # "native" profile means the filesystem is used to store data, see
    # http://cloud.spring.io/spring-cloud-config/spring-cloud-config.html
    environment:
      - _JAVA_OPTIONS=-Xmx512m -Xms256m
      - JHIPSTER_SLEEP=20
      - SPRING_PROFILES_ACTIVE=dev,oauth2
      - SPRING_SECURITY_USER_PASSWORD=admin
      - JHIPSTER_REGISTRY_PASSWORD=admin
      - SPRING_CLOUD_CONFIG_SERVER_COMPOSITE_0_TYPE=native
      - SPRING_CLOUD_CONFIG_SERVER_COMPOSITE_0_SEARCH_LOCATIONS=file:./central-config
      # - SPRING_CLOUD_CONFIG_SERVER_COMPOSITE_0_TYPE=git
      # - SPRING_CLOUD_CONFIG_SERVER_COMPOSITE_0_URI=https://github.com/jhipster/jhipster-registry/
      # - SPRING_CLOUD_CONFIG_SERVER_COMPOSITE_0_SEARCH_PATHS=central-config
      # For Keycloak to work, you need to add '127.0.0.1 keycloak' to your hosts file
      - SPRING_SECURITY_OAUTH2_CLIENT_PROVIDER_OIDC_ISSUER_URI=http://keycloak:9080/auth/realms/jhipster
      - SPRING_SECURITY_OAUTH2_CLIENT_REGISTRATION_OIDC_CLIENT_ID=jhipster-registry
      - SPRING_SECURITY_OAUTH2_CLIENT_REGISTRATION_OIDC_CLIENT_SECRET=jhipster-registry
    ports:
      - 8761:8761
  keycloak:
    image: jboss/keycloak:12.0.4
    command:
      [
        "-b",
        "0.0.0.0",
        "-Dkeycloak.migration.action=import",
        "-Dkeycloak.migration.provider=dir",
        "-Dkeycloak.migration.dir=/opt/jboss/keycloak/realm-config",
        "-Dkeycloak.migration.strategy=OVERWRITE_EXISTING",
        "-Djboss.socket.binding.port-offset=1000",
        "-Dkeycloak.profile.feature.upload_scripts=enabled",
      ]
    volumes:
      - ./realm-config:/opt/jboss/keycloak/realm-config
    environment:
      - KEYCLOAK_USER=admin
      - KEYCLOAK_PASSWORD=admin
      - DB_VENDOR=h2
    ports:
      - 9080:9080
      - 9443:9443
      - 10990:10990
  test-mysql:
    container_name: test-mysql
    restart: always
    image: mysql:8.0.25
    environment:
      MYSQL_ROOT_PASSWORD: 'root'
    ports:
      # <Port exposed> : <MySQL port running inside container>
      - '3306:3306'
    expose:
      # Opens port 3306 on the container
      - '3306'
    volumes:
      - test-datavolume:/var/lib/mysql
volumes:
  test-datavolume:
I know that if I add the entry "127.0.0.1 jhipster-registry" to /etc/hosts it is going to work, but I can't figure out why it is trying to use jhipster-registry instead of localhost.
Thanks!
In Azure, we are trying to create a container using Azure Container Instances with a prepared YAML file. From the machine where we execute the 'az container create' command, we can log in successfully to our private registry (e.g. fa-docker-snapshot-local.docker.comp.dev on JFrog Artifactory) after entering the password, and we can docker pull the image as well:
docker login fa-docker-snapshot-local.docker.comp.dev -u svc-faselect
Login succeeded
So we can pull it successfully, and the image path is the same as when doing a manual docker pull:
image: fa-docker-snapshot-local.docker.comp.dev/fa/ads:test1
We have a YAML file for the deployment and are trying to create the container using the az command from the SAME server. In the YAML file we have set up the same registry information (server, username and password) and the same image:
az container create --resource-group FRONT-SELECT-NA2 --file ads-azure.yaml
When we try to execute this command, it runs for 30 minutes and after that this message is displayed: "Deployment failed. Operation failed with status 200: Resource State Failed"
Full YAML:
apiVersion: '2019-12-01'
location: eastus2
name: ads-test-group
properties:
  containers:
  - name: front-arena-ads-test
    properties:
      image: fa-docker-snapshot-local.docker.comp.dev/fa/ads:test1
      environmentVariables:
      - name: 'DBTYPE'
        value: 'odbc'
      command:
      - /opt/front/arena/sbin/ads_start
      - ads_start
      - '-unicode'
      - '-db_server test01'
      - '-db_name HEDGE2_ADM_Test1'
      - '-db_user sqldbadmin'
      - '-db_password pass'
      - '-db_client_user HEDGE2_ADM_Test1'
      - '-db_client_password Password55'
      ports:
      - port: 9000
        protocol: TCP
      resources:
        requests:
          cpu: 1.0
          memoryInGB: 4
      volumeMounts:
      - mountPath: /opt/front/arena/host
        name: ads-filesharevolume
  imageRegistryCredentials: # Credentials to pull a private image
  - server: fa-docker-snapshot-local.docker.comp.dev
    username: svcacct-faselect
    password: test
  ipAddress:
    type: Private
    ports:
    - protocol: tcp
      port: '9000'
  volumes:
  - name: ads-filesharevolume
    azureFile:
      sharename: azurecontainershare
      storageAccountName: frontarenastorage
      storageAccountKey: kdUDK97MEB308N=
  networkProfile:
    id: /subscriptions/746feu-1537-1007-b705-0f895fc0f7ea/resourceGroups/SELECT-NA2/providers/Microsoft.Network/networkProfiles/fa-aci-test-networkProfile
  osType: Linux
  restartPolicy: Always
tags: null
type: Microsoft.ContainerInstance/containerGroups
Can you please help us understand why this error occurs?
Thank you
As far as I can tell, there is nothing wrong with your YAML file; I can only give you some possible causes to check.
Make sure the configuration is all correct: the server URL, username, and password, as well as the image name and tag;
Change the port from '9000' to 9000, i.e. remove the quotes so the port is an integer (see the snippet after these points);
Take a look at the Note in the docs: maybe the volume mount crashes the container. In that case, mount the file share to a new folder, i.e. a folder that does not already exist in the image.
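For the second point, the ipAddress section would then look like this (same values, only the port changed to an integer):

ipAddress:
  type: Private
  ports:
  - protocol: tcp
    port: 9000   # integer instead of the quoted string '9000'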
I wrote a Node.js service and built it with Docker. Then I pushed it to Azure Container Registry.
I used Helm to pull the repository from ACR and then deploy to AKS, but the service does not run.
Please give me some advice.
Here are my Helm values. I think I have to set the type and port of the service.
replicaCount: 1
image:
  repository: tungthtestcontainer.azurecr.io/demonode
  tag: latest
  pullPolicy: IfNotPresent
imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""
service:
  name: http
  type: NodePort
  port: 8082
  internalPort: 8082
ingress:
  enabled: false
  annotations: {}
  hosts:
  - host: chart-example.local
    paths: []
  tls: []
resources: {}
nodeSelector: {}
tolerations: []
affinity: {}
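For context, a chart scaffolded with 'helm create' usually wires these values into its templates/service.yaml roughly as sketched below; the helper names (demonode.fullname, demonode.name) and label keys are illustrative assumptions, not the actual chart contents. The main point is that internalPort must match the port the Node.js app really listens on:

apiVersion: v1
kind: Service
metadata:
  name: {{ include "demonode.fullname" . }}
spec:
  type: {{ .Values.service.type }}                    # NodePort
  ports:
  - name: {{ .Values.service.name }}                  # http
    port: {{ .Values.service.port }}                  # 8082, the Service port
    targetPort: {{ .Values.service.internalPort }}    # 8082, must match the app's listening port
    protocol: TCP
  selector:
    app.kubernetes.io/name: {{ include "demonode.name" . }}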
To figure out what happens in these situations, it doesn't matter whether it is Helm or a YAML applied directly with kubectl apply, or whether it's Azure or another provider; I recommend you follow these steps:
Check the status of the release in Helm; you can see it at any time using helm status <release-name>. Check whether the pods are correctly created and the services are also OK.
Check the deployment with kubectl describe deployment <deployment-name>
Check the pod with kubectl describe pod <pod-name>
Check the pod logs with kubectl logs -f <pod-name>
With that, you should be able to find the source of the problem.
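For convenience, the same steps as one sequence, with placeholder names to replace:

helm status <release-name>
kubectl describe deployment <deployment-name>
kubectl describe pod <pod-name>
kubectl logs -f <pod-name>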
I'm working on a project using Helm, Kubernetes and Azure Kubernetes Service, in which I'm trying to use a simple Node image that I have pushed to Azure Container Registry inside my Helm chart, but it returns an ImagePullBackOff error.
Here are some details:
My Dockerfile:
FROM node:8
# Create app directory
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
# Bundle app source
COPY . .
EXPOSE 32000
CMD [ "npm", "start" ]
My helm_chart/values.yaml:
replicaCount: 1
image:
  registry: helmcr.azurecr.io
  repository: helloworldtest
  tag: 0.7
  pullPolicy: IfNotPresent
nameOverride: ""
fullnameOverride: ""
service:
  name: http
  type: LoadBalancer
  port: 32000
  internalPort: 32000
ingress:
  enabled: false
  annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  paths: []
  hosts:
  - name: mychart.local
    path: /
  tls: []
resources: {}
nodeSelector: {}
tolerations: []
affinity: {}
When I try to pull the image directly using the command below:
docker pull helmcr.azurecr.io/helloworldtest:0.7
then it pulls the image successfully.
What can be wrong here?
Thanks in advance!
Your Kubernetes cluster needs to be authenticated to the container registry to pull images; generally this is done with a Docker registry secret:
kubectl create secret docker-registry regcred --docker-server=<your-registry-server> --docker-username=<your-name> --docker-password=<your-pword> --docker-email=<your-email>
If you are using AKS, you can instead grant the cluster's application ID pull rights on the registry; that is enough.
Reading: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
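Both approaches, sketched for this setup with assumed names (regcred, the helmcr registry, and the AKS cluster/resource group placeholders) that you would replace with your own:

# Option 1: create an image pull secret in the cluster and reference it from the pod spec
# (or via an imagePullSecrets value if the chart exposes one)
kubectl create secret docker-registry regcred \
  --docker-server=helmcr.azurecr.io \
  --docker-username=<service-principal-id> \
  --docker-password=<service-principal-password> \
  --docker-email=<your-email>

# Option 2: on AKS, attach the registry so the cluster identity is granted AcrPull
az aks update --name <aks-cluster-name> --resource-group <resource-group> --attach-acr helmcr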