Why am I getting a SQSError: 404 when trying to connect using Boto to an ElasticMQ service that is behind Ambassador? - python-3.x

I have an elasticmq docker container which is deployed as a service using Kubernetes. Furthermore, this service is exposed to external users by way of Ambassador.
Here is the yaml file for the service.
---
kind: Service
apiVersion: v1
metadata:
  name: elasticmq
  annotations:
    getambassador.io/config: |
      ---
      apiVersion: ambassador/v1
      kind: Mapping
      name: elasticmq
      prefix: /
      host: elasticmq.localhost.com
      service: elasticmq:9324
spec:
  selector:
    app: elasticmq
  ports:
  - port: 9324
    protocol: TCP
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: elasticmq
  labels:
    app: elasticmq
spec:
  replicas: 1
  selector:
    matchLabels:
      app: elasticmq
  template:
    metadata:
      labels:
        app: elasticmq
    spec:
      containers:
      - name: elasticmq
        image: elasticmq
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9324
        livenessProbe:
          exec:
            command:
            - /bin/bash
            - -c
            - nc -zv localhost 9324 -w 1
          initialDelaySeconds: 60
          periodSeconds: 5
        volumeMounts:
        - name: elastimq-conf-volume
          mountPath: /elasticmq/elasticmq.conf
      volumes:
      - name: elastimq-conf-volume
        hostPath:
          path: /path/elasticmq.conf
Now I can check that the elasticmq container is healthy and that Ambassador works by doing a curl:
$ curl elasticmq.localhost.com?Action=ListQueues&Version=2012-11-05
[1] 10355
<ListQueuesResponse xmlns="http://queue.amazonaws.com/doc/2012-11-05/">
<ListQueuesResult>
</ListQueuesResult>
<ResponseMetadata>
<RequestId>00000000-0000-0000-0000-000000000000</RequestId>
</ResponseMetadata>
</ListQueuesResponse>[1]+ Done
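(The stray [1] 10355 and [1]+ Done lines are just the shell backgrounding the command at the unquoted &; quoting the URL avoids that. For illustration, the same check can be scripted, a minimal sketch assuming the requests package is installed and the hostname resolves as above:)

import requests

# Same ListQueues call as the curl command, with the query string passed as params
# so no shell quoting is involved.
resp = requests.get(
    "http://elasticmq.localhost.com/",
    params={"Action": "ListQueues", "Version": "2012-11-05"},
)
print(resp.status_code)
print(resp.text)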
On the other hand, when I try to do the same thing using Boto3, I get a SQSError: 404 not found.
This is my Python script:
import boto.sqs.connection

conn = boto.sqs.connection
c = conn.SQSConnection(proxy_port=80, proxy='elasticmq.localhost.com', is_secure=False,
                       aws_access_key_id='x', aws_secret_access_key='x')
c.get_all_queues('')
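For reference, the equivalent request with the boto3 client points endpoint_url at the Ambassador host instead of using proxy settings; a minimal sketch (the region name is arbitrary and the credentials are dummies, since ElasticMQ does not validate them):

import boto3

# Talk to ElasticMQ through Ambassador on port 80 rather than via a proxy.
sqs = boto3.client(
    "sqs",
    endpoint_url="http://elasticmq.localhost.com:80",
    region_name="elasticmq",  # placeholder; any region string works against ElasticMQ
    aws_access_key_id="x",
    aws_secret_access_key="x",
)
print(sqs.list_queues())  # should return the same empty queue list as the curl call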
I thought it had to do with the outside host specified in elasticmq.conf, so I changed that to this:
include classpath("application.conf")

// What is the outside visible address of this ElasticMQ node (used by rest-sqs)
node-address {
  protocol = http
  host = elasticmq.localhost.com
  port = 80
  context-path = ""
}

rest-sqs {
  enabled = true
  bind-port = 9324
  bind-hostname = "0.0.0.0"
  // Possible values: relaxed, strict
  sqs-limits = relaxed
}
I thought changing the elasticmq conf would work, but it doesn't. How can I get this to work?

Related

Shiny proxy on AKS behind an Azure Application Gateway

I've been using ShinyProxy on AKS for the past couple of months and it's been fantastic, no problems at all. However, the need for a more secure setup has arisen, and I have to use it behind an Azure Application Gateway (v2) with WAF and TLS certificates (on the AGW).
The deployments happen with no problems whatsoever, but upon trying to access the application I always get a "404 Not Found", and the health probes always return "no Results". Has anyone been through this before?
Here is my Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{lvappname}}-proxy
  namespace: {{ns}}
  labels:
    app: {{lvappname}}
spec:
  selector:
    matchLabels:
      run: {{lvappname}}-proxy
  replicas: 1
  template:
    metadata:
      labels:
        run: {{lvappname}}-proxy
    spec:
      containers:
      - name: {{lvappname}}-proxy
        image: {{server}}/shiny-app/{{lvappname}}-shiny-proxy-application:{{TAG}}
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080
      - name: kube-proxy-sidecar
        image: {{server}}/shiny-app/{{lvappname}}-kube-proxy-sidecar:{{TAG}}
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8001
And here is my Service and Ingress
kind: Service
apiVersion: v1
metadata:
  name: {{lvappname}}-proxy
  namespace: {{ns}}
  labels:
    app: {{lvappname}}
    tier: frontend
spec:
  selector:
    app: {{lvappname}}-proxy
    tier: frontend
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: {{lvappname}}-proxy
  namespace: {{ns}}
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
    cert-manager.io/cluster-issuer: letsencrypt-staging-application-gateway
    appgw.ingress.kubernetes.io/health-probe-status-codes: "200-399,404"
    appgw.ingress.kubernetes.io/health-probe-path: "/"
  labels:
    app: {{lvappname}}
spec:
  rules:
  - host: {{lvappname}}-{{lvstage}}.{{domain}}
    http:
      paths:
      - path: /
        backend:
          service:
            name: {{lvappname}}-proxy
            port:
              number: 8080
        pathType: Prefix
  tls:
  - hosts:
    - {{lvappname}}-{{lvstage}}.{{domain}}
    secretName: {{lvappname}}-{{lvstage}}.{{domain}}-secret-name
and here is my shinyproxy configuration file
proxy:
  port: 8080
  authentication: none
  landing-page: /app/{{lvappname}}
  hide-navbar: true
  container-backend: kubernetes
  kubernetes:
    namespace: {{ns}}
    image-pull-policy: IfNotPresent
    image-pull-secret: {{lvappname}}-secret
  specs:
  - id: {{lvappname}}
    display-name: {{lvappname}} application
    description: Application for {{lvappname}}
    container-cmd: ["R", "-e", "shiny::runApp('/app/Shiny')"]
    container-image: {{server}}/shiny-app/{{lvappname}}:{{TAG}}
server:
  servlet.session.timeout: 3600
spring:
  session:
    store-type: redis
  redis:
    host: redis-leader
Any help would be deeply appreciated.
Thank you all in advance.

Regex based routing setup for ingress in AKS

I have a K8s cluster in Azure in which I want to host multiple web applications with a single host. Each application has its own service and deployment. How can I achieve something like the following routes?
MyApp.com
Partner1.MyApp.com
Partner2.MyApp.com
Here is what my yml file looks like currently:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp # the label for the pods and the deployments
    spec:
      containers:
      - name: myapp
        image: myimagename
        imagePullPolicy: Always
        ports:
        - containerPort: 6666 # the application listens to this port
---
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp
  ports:
  - protocol: TCP
    port: 6666
    targetPort: 6666
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: partner1-myapp
spec:
  selector:
    matchLabels:
      app: partner1-myapp
  template:
    metadata:
      labels:
        app: partner1-myapp # the label for the pods and the deployments
    spec:
      containers:
      - name: partner1-myapp
        image: myimagename
        imagePullPolicy: Always
        ports:
        - containerPort: 6669 # the application listens to this port
---
apiVersion: v1
kind: Service
metadata:
  name: partner1-myapp
spec:
  selector:
    app: partner1-myapp
  ports:
  - protocol: TCP
    port: 6669
    targetPort: 6669
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: partner2-myapp
spec:
  selector:
    matchLabels:
      app: partner2-myapp
  template:
    metadata:
      labels:
        app: partner2-myapp # the label for the pods and the deployments
    spec:
      containers:
      - name: partner2-myapp
        image: myimagename
        imagePullPolicy: Always
        ports:
        - containerPort: 6672 # the application listens to this port
---
apiVersion: v1
kind: Service
metadata:
  name: partner2-myapp
spec:
  selector:
    app: partner2-myapp
  ports:
  - protocol: TCP
    port: 6672
    targetPort: 6672
  type: ClusterIP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ing
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/proxy-body-size: 70m
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - http:
      paths:
      - path: /(.*)
        pathType: Prefix
        backend:
          service:
            name: partner1-myapp
            port:
              number: 6669
      - path: /(.*)
        pathType: Prefix
        backend:
          service:
            name: partner2-myapp
            port:
              number: 6672
      - path: /(.*)
        pathType: Prefix
        backend:
          service:
            name: myapp
            port:
              number: 6666
---
What can I do to get the above routing?
First, run your two applications using kubectl:
kubectl apply -f Partner1-MyApp.yaml --namespace ingress-basic
kubectl apply -f Partner2-MyApp.yaml --namespace ingress-basic
To set up the routing, once both applications are running in the cluster, traffic can be routed to each of them under EXTERNAL_IP/static. Create a file with the YAML below:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: partner-ingress-static
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/rewrite-target: /static/$2
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /static(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: partner1-myapp
            port:
              number: 6669
For more detail, please refer to this link:
Create an ingress controller in Azure Kubernetes Service (AKS)

Nginx Ingress Controller getting error in Node.js application with routes prefixed

I'm using Ingress Nginx Controller in my Kubernetes Cluster hosted on Google Kubernetes Engine. The application is a Node.js app.
When I integrated my app with Rollbar (logger service) I started to notice repetitive errors every ~15 seconds (154K times in one week).
Error: Cannot GET /
I think the reason is that my Node.js application uses the /v1 prefix in its routes, i.e. the / route doesn't exist.
PS: Rollbar is linked in Develop (local), Testing (Heroku) and Production (GKE) environments and the error only occurs in production.
My ingress file:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-app
spec:
  rules:
  - host: api.company.com
    http:
      paths:
      - path: /v1
        backend:
          serviceName: company-prod-v1-service
          servicePort: 3000
The Ingress documentation says something about the / endpoint, but I don't understand it very well.
I need to get rid of this error. Can any Jedi masters help me fix it?
Thanks in advance
[UPDATE]
Deployment File
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-deploy-v1
  labels:
    app: app-v1
    version: 1.10.0-2
spec:
  selector:
    matchLabels:
      app: app-v1
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  minReadySeconds: 30
  progressDeadlineSeconds: 60
  template:
    metadata:
      labels:
        app: app-v1
        version: 1.10.0-2
    spec:
      serviceAccountName: gke-account
      containers:
      - name: app-container
        image: registry.gitlab.com/company/app-back:1.10.0
        lifecycle:
          postStart:
            exec:
              command:
              - "/bin/sh"
              - "-c"
              - >
                if [ -f "$SECRETS_FOLDER/$APPLE_NOTIFICATION_KEY" ]; then
                  cp $SECRETS_FOLDER/$APPLE_NOTIFICATION_KEY $APP_FOLDER;
                fi;
      - name: cloud-sql-proxy
        image: gcr.io/cloudsql-docker/gce-proxy:1.17
        command:
        - "/cloud_sql_proxy"
        - "-instances=thermal-petal-283313:us-east1:app-instance=tcp:5432"
        securityContext:
          runAsNonRoot: true
        lifecycle:
          preStop:
            exec:
              command: ["/bin/sleep", "300"]
      imagePullSecrets:
      - name: registry-credentials
Ingress Controller Pod file (auto-generated by the installation command)
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2021-06-16T03:08:53Z"
  generateName: ingress-nginx-controller
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  name: ingress-nginx-controller
  namespace: ingress-nginx
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicaSet
    name: ingress-nginx-controller
  resourceVersion: "112544950"
  selfLink: /api/v1/namespaces/ingress-nginx/pods/ingress-nginx-controller
spec:
  containers:
  - image: k8s.gcr.io/ingress-nginx/controller:v0.46.0
    imagePullPolicy: IfNotPresent
    lifecycle:
      preStop:
        exec:
          command:
          - /wait-shutdown
    livenessProbe:
      failureThreshold: 5
      httpGet:
        path: /healthz
        port: 10254
        scheme: HTTP
      initialDelaySeconds: 10
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 1
    name: controller
    ports:
    - containerPort: 80
      name: http
      protocol: TCP
    - containerPort: 443
      name: https
      protocol: TCP
    - containerPort: 8443
      name: webhook
      protocol: TCP
    readinessProbe:
      failureThreshold: 3
      httpGet:
        path: /healthz
        port: 10254
        scheme: HTTP
      initialDelaySeconds: 10
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 1

How can I run a docker container from a docker image from docker hub in skaffold?

Let's say I want to run a container from an image on Docker Hub, for example mosquitto; I'd execute docker run -it -p 1883:1883 -p 9001:9001 eclipse-mosquitto.
I tried to pull the image from gcr.io (deployment.yaml), like done here:
apiVersion: v1
kind: Service
metadata:
  name: mqtt-broker
  labels:
    app: mqtt-broker
spec:
  type: NodePort
  ports:
  - targetPort: 1883
    port: 1883
  selector:
    app: mqtt-broker
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mqtt-broker
  labels:
    app: mqtt-broker
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mqtt-broker
  template:
    metadata:
      labels:
        app: mqtt-broker
    spec:
      containers:
      - name: mqtt-broker
        image: gcr.io/vu-james-celli/eclipse-mosquitto # https://hub.docker.com/_/eclipse-mosquitto
        ports:
        - containerPort: 1883
skaffold.yaml:
apiVersion: skaffold/v2beta10
kind: Config
build:
  artifacts:
  - <other image builds>
deploy:
  kubectl:
    manifests:
    - mqtt-broker/*
portForward:
- resourceType: deployment
  resourceName: mqtt-broker
  port: 1883
  localPort: 1883
- <other port forwardings>
...
However, when I run skaffold --dev --port-forward, I get an error in the output:
- deployment/mqtt-broker: container mqtt-broker is waiting to start: gcr.io/vu-james-celli/eclipse-mosquitto can't be pulled
How do I have to configure skaffold.yaml (schema version v2beta10) when using kubectl to run the mosquitto container as part of a deployment?
You could create a pod with a single container referencing eclipse-mosquitto, and then ensure that pod is referenced from your skaffold.yaml.
apiVersion: v1
kind: Pod
metadata:
  name: mqtt
spec:
  containers:
  - name: mqtt
    image: eclipse-mosquitto
    ports:
    - containerPort: 1883
      name: mqtt
    - containerPort: 9001
      name: websockets
You could turn this into a deployment or replicaset with services, etc.
First, pull the image from docker hub onto the local machine: docker pull eclipse-mosquitto
Second, reference the image in mqtt-broker/deployment.yaml, e.g.:
apiVersion: v1
kind: Service
metadata:
  name: mqtt-broker
  labels:
    app: mqtt-broker
spec:
  type: NodePort
  ports:
  - targetPort: 1883
    port: 1883
  selector:
    app: mqtt-broker
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mqtt-broker
  labels:
    app: mqtt-broker
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mqtt-broker
  template:
    metadata:
      labels:
        app: mqtt-broker
    spec:
      containers:
      - name: mqtt-broker
        image: eclipse-mosquitto
        ports:
        - containerPort: 1883
Third, reference the deployment.yaml in skaffold.yaml, e.g.:
apiVersion: skaffold/v2beta10
kind: Config
build:
  artifacts:
  - <services-under-development>
deploy:
  kubectl:
    manifests:
    - mqtt-broker/deployment.yaml
portForward:
- resourceType: deployment
  resourceName: mqtt-broker
  port: 1883
  localPort: 1883
- <port-forwarding-for-services-under-development>

loadbalancer service won't redirect to desired pod

I'm playing around with kubernetes and I've set up my environment with 4 deployments:
hello: basic "hello world" service
auth: provides authentication and encryption
frontend: an nginx reverse proxy which represents a single-point-of-entry from the outside and routes to the accurate pods internally
nodehello: basic "hello world" service, written in nodejs (this is what I contributed)
For the hello, auth and nodehello deployments I've set up one internal service each.
For the frontend deployment I've set up a load-balancer service which would be exposed to the outside world. It uses a config map nginx-frontend-conf to redirect to the appropriate pods and has the following contents:
upstream hello {
    server hello.default.svc.cluster.local;
}

upstream auth {
    server auth.default.svc.cluster.local;
}

upstream nodehello {
    server nodehello.default.svc.cluster.local;
}

server {
    listen 443;
    ssl on;
    ssl_certificate /etc/tls/cert.pem;
    ssl_certificate_key /etc/tls/key.pem;

    location / {
        proxy_pass http://hello;
    }

    location /login {
        proxy_pass http://auth;
    }

    location /nodehello {
        proxy_pass http://nodehello;
    }
}
When calling the frontend endpoint using curl -k https://<frontend-external-ip> I get routed to an available hello pod which is the expected behavior.
When calling https://<frontend-external-ip>/nodehello, however, I don't get routed to a nodehello pod, but instead to a hello pod again.
I suspect the upstream nodehello configuration to be the failing part. I'm not sure how service discovery works here, i.e. how the dns name nodehello.default.svc.cluster.local would be exposed. I'd appreciate an explanation on how it works and what I did wrong.
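(As a quick sanity check, one way to see whether the service name resolves at all is to run a small Python script from a pod inside the cluster; this is just a sketch, assuming cluster-default DNS:)

import socket

# Resolve the in-cluster service names. If resolution fails here, the problem is
# DNS/service discovery; if it succeeds, the issue is more likely the nginx config.
for name in ("hello", "auth", "nodehello"):
    fqdn = f"{name}.default.svc.cluster.local"
    try:
        print(fqdn, "->", socket.gethostbyname(fqdn))
    except socket.gaierror as err:
        print(fqdn, "failed to resolve:", err)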
yaml files used
deployments/hello.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hello
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: hello
        track: stable
    spec:
      containers:
      - name: hello
        image: "udacity/example-hello:1.0.0"
        ports:
        - name: http
          containerPort: 80
        - name: health
          containerPort: 81
        resources:
          limits:
            cpu: 0.2
            memory: "10Mi"
        livenessProbe:
          httpGet:
            path: /healthz
            port: 81
            scheme: HTTP
          initialDelaySeconds: 5
          periodSeconds: 15
          timeoutSeconds: 5
        readinessProbe:
          httpGet:
            path: /readiness
            port: 81
            scheme: HTTP
          initialDelaySeconds: 5
          timeoutSeconds: 1
deployments/auth.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: auth
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: auth
        track: stable
    spec:
      containers:
      - name: auth
        image: "udacity/example-auth:1.0.0"
        ports:
        - name: http
          containerPort: 80
        - name: health
          containerPort: 81
        resources:
          limits:
            cpu: 0.2
            memory: "10Mi"
        livenessProbe:
          httpGet:
            path: /healthz
            port: 81
            scheme: HTTP
          initialDelaySeconds: 5
          periodSeconds: 15
          timeoutSeconds: 5
        readinessProbe:
          httpGet:
            path: /readiness
            port: 81
            scheme: HTTP
          initialDelaySeconds: 5
          timeoutSeconds: 1
deployments/frontend.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: frontend
        track: stable
    spec:
      containers:
      - name: nginx
        image: "nginx:1.9.14"
        lifecycle:
          preStop:
            exec:
              command: ["/usr/sbin/nginx","-s","quit"]
        volumeMounts:
        - name: "nginx-frontend-conf"
          mountPath: "/etc/nginx/conf.d"
        - name: "tls-certs"
          mountPath: "/etc/tls"
      volumes:
      - name: "tls-certs"
        secret:
          secretName: "tls-certs"
      - name: "nginx-frontend-conf"
        configMap:
          name: "nginx-frontend-conf"
          items:
          - key: "frontend.conf"
            path: "frontend.conf"
deployments/nodehello.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nodehello
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nodehello
        track: stable
    spec:
      containers:
      - name: nodehello
        image: "thezebra/nodehello:0.0.2"
        ports:
        - name: http
          containerPort: 80
        resources:
          limits:
            cpu: 0.2
            memory: "10Mi"
services/hello.yaml
kind: Service
apiVersion: v1
metadata:
  name: "hello"
spec:
  selector:
    app: "hello"
  ports:
  - protocol: "TCP"
    port: 80
    targetPort: 80
services/auth.yaml
kind: Service
apiVersion: v1
metadata:
  name: "auth"
spec:
  selector:
    app: "auth"
  ports:
  - protocol: "TCP"
    port: 80
    targetPort: 80
services/frontend.yaml
kind: Service
apiVersion: v1
metadata:
  name: "frontend"
spec:
  selector:
    app: "frontend"
  ports:
  - protocol: "TCP"
    port: 443
    targetPort: 443
  type: LoadBalancer
services/nodehello.yaml
kind: Service
apiVersion: v1
metadata:
  name: "nodehello"
spec:
  selector:
    app: "nodehello"
  ports:
  - protocol: "TCP"
    port: 80
    targetPort: 80
This works perfectly :-)
$ curl -s http://frontend/
{"message":"Hello"}
$ curl -s http://frontend/login
authorization failed
$ curl -s http://frontend/nodehello
Hello World!
I suspect you might have updated the nginx-frontend-conf when you added /nodehello but have not restarted nginx. Pods won't pick up changed ConfigMaps automatically. Try:
kubectl delete pod -l app=frontend
Until versioned ConfigMaps happen there isn't a nicer solution.
