Datadog GKE NestJS integration using Datadog's Helm chart - node.js

I'm trying to deploy my service and have the Datadog Agent read my local log file from inside the pod.
I'm using Datadog's Helm chart with the following values:
## Default values for Datadog Agent
## See Datadog helm documentation to learn more:
## https://docs.datadoghq.com/agent/kubernetes/helm/

## @param image - object - required
## Define the Datadog image to work with.
#
image:
  ## @param repository - string - required
  ## Define the repository to use:
  ## use "datadog/agent" for Datadog Agent 6
  ## use "datadog/dogstatsd" for Standalone Datadog Agent DogStatsD6
  #
  repository: datadog/agent

  ## @param tag - string - required
  ## Define the Agent version to use.
  ## Use 6.13.0-jmx to enable jmx fetch collection
  #
  tag: 6.13.0

  ## @param pullPolicy - string - required
  ## The Kubernetes pull policy.
  #
  pullPolicy: IfNotPresent

  ## @param pullSecrets - list of key:value strings - optional
  ## It is possible to specify docker registry credentials
  ## See https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod
  #
  # pullSecrets:
  #   - name: "<REG_SECRET>"

nameOverride: ""
fullnameOverride: ""
datadog:
  ## @param apiKey - string - required
  ## Set this to your Datadog API key before the Agent runs.
  ## ref: https://app.datadoghq.com/account/settings#agent/kubernetes
  #
  apiKey: "xxxxxxxx"

  ## @param apiKeyExistingSecret - string - optional
  ## Use an existing Secret which stores the API key instead of creating a new one.
  ## If set, this parameter takes precedence over "apiKey".
  #
  # apiKeyExistingSecret: <DATADOG_API_KEY_SECRET>

  ## @param appKey - string - optional
  ## If you are using clusterAgent.metricsProvider.enabled = true, you must set
  ## a Datadog application key for read access to your metrics.
  #
  appKey: "xxxxxx"

  ## @param appKeyExistingSecret - string - optional
  ## Use an existing Secret which stores the APP key instead of creating a new one.
  ## If set, this parameter takes precedence over "appKey".
  #
  # appKeyExistingSecret: <DATADOG_APP_KEY_SECRET>

  ## @param securityContext - object - optional
  ## You can modify the security context used to run the containers by
  ## modifying the label type below:
  #
  # securityContext:
  #   seLinuxOptions:
  #     seLinuxLabel: "spc_t"

  ## @param clusterName - string - optional
  ## Set a unique cluster name to allow scoping hosts and Cluster Checks easily
  #
  # clusterName: <CLUSTER_NAME>

  ## @param name - string - required
  ## DaemonSet/Deployment container name
  ## See clusterAgent.containerName if clusterAgent.enabled = true
  #
  name: datadog

  ## @param site - string - optional - default: 'datadoghq.com'
  ## The site of the Datadog intake to send Agent data to.
  ## Set to 'datadoghq.eu' to send data to the EU site.
  #
  # site: datadoghq.com

  ## @param dd_url - string - optional - default: 'https://app.datadoghq.com'
  ## The host of the Datadog intake server to send Agent data to. Only set this option
  ## if you need the Agent to send data to a custom URL.
  ## Overrides the site setting defined in "site".
  #
  # dd_url: https://app.datadoghq.com

  ## @param logLevel - string - required
  ## Set logging verbosity; valid log levels are:
  ## trace, debug, info, warn, error, critical, and off
  #
  logLevel: INFO

  ## @param podLabelsAsTags - list of key:value strings - optional
  ## Provide a mapping of Kubernetes Labels to Datadog Tags.
  #
  # podLabelsAsTags:
  #   app: kube_app
  #   release: helm_release
  #   <KUBERNETES_LABEL>: <DATADOG_TAG_KEY>

  ## @param podAnnotationsAsTags - list of key:value strings - optional
  ## Provide a mapping of Kubernetes Annotations to Datadog Tags
  #
  # podAnnotationsAsTags:
  #   iam.amazonaws.com/role: kube_iamrole
  #   <KUBERNETES_ANNOTATIONS>: <DATADOG_TAG_KEY>

  ## @param tags - list of key:value elements - optional
  ## List of tags to attach to every metric, event and service check collected by this Agent.
  ##
  ## Learn more about tagging: https://docs.datadoghq.com/tagging/
  #
  # tags:
  #   - <KEY_1>:<VALUE_1>
  #   - <KEY_2>:<VALUE_2>

  ## @param useCriSocketVolume - boolean - required
  ## Enable container runtime socket volume mounting
  #
  useCriSocketVolume: true

  ## @param dogstatsdOriginDetection - boolean - optional
  ## Enable origin detection for container tagging
  ## https://docs.datadoghq.com/developers/dogstatsd/unix_socket/#using-origin-detection-for-container-tagging
  #
  # dogstatsdOriginDetection: true

  ## @param useDogStatsDSocketVolume - boolean - optional
  ## Enable dogstatsd over Unix Domain Socket
  ## ref: https://docs.datadoghq.com/developers/dogstatsd/unix_socket/
  #
  # useDogStatsDSocketVolume: true

  ## @param nonLocalTraffic - boolean - optional - default: false
  ## Enable this to make each node accept non-local statsd traffic.
  ## ref: https://github.com/DataDog/docker-dd-agent#environment-variables
  #
  nonLocalTraffic: true

  ## @param collectEvents - boolean - optional - default: false
  ## Enable this to start event collection from the kubernetes API
  ## ref: https://docs.datadoghq.com/agent/kubernetes/event_collection/
  #
  collectEvents: true

  ## @param leaderElection - boolean - optional - default: false
  ## Enable the leader election mechanism for event collection.
  #
  # leaderElection: false

  ## @param leaderLeaseDuration - integer - optional - default: 60
  ## Set the lease time for leader election, in seconds.
  #
  # leaderLeaseDuration: 60

  ## @param logsEnabled - boolean - optional - default: false
  ## Enable this to activate Datadog Agent log collection.
  ## ref: https://docs.datadoghq.com/agent/basic_agent_usage/kubernetes/#log-collection-setup
  #
  logsEnabled: true

  ## @param logsConfigContainerCollectAll - boolean - optional - default: false
  ## Enable this to allow log collection for all containers.
  ## ref: https://docs.datadoghq.com/agent/basic_agent_usage/kubernetes/#log-collection-setup
  #
  logsConfigContainerCollectAll: true

  ## @param containerLogsPath - string - optional - default: /var/lib/docker/containers
  ## Set this to allow log collection from the container log path. Set to a different path if not
  ## using the docker runtime.
  ## ref: https://docs.datadoghq.com/agent/kubernetes/daemonset_setup/?tab=k8sfile#create-manifest
  #
  containerLogsPath: /var/lib/docker/containers

  ## @param apmEnabled - boolean - optional - default: false
  ## Enable this to enable APM and tracing, on port 8126
  ## ref: https://github.com/DataDog/docker-dd-agent#tracing-from-the-host
  #
  apmEnabled: true

  ## @param processAgentEnabled - boolean - optional - default: false
  ## Enable this to activate live process monitoring.
  ## Note: /etc/passwd is automatically mounted to allow username resolution.
  ## ref: https://docs.datadoghq.com/graphing/infrastructure/process/#kubernetes-daemonset
  #
  processAgentEnabled: true

  ## @param env - list of object - optional
  ## The dd-agent supports many environment variables
  ## ref: https://github.com/DataDog/datadog-agent/tree/master/Dockerfiles/agent#environment-variables
  #
  # env:
  #   - name: <ENV_VAR_NAME>
  #     value: <ENV_VAR_VALUE>

  ## @param volumes - list of objects - optional
  ## Specify additional volumes to mount in the dd-agent container
  #
  # volumes:
  #   - hostPath:
  #       path: <HOST_PATH>
  #     name: <VOLUME_NAME>

  ## @param volumeMounts - list of objects - optional
  ## Specify additional volumes to mount in the dd-agent container
  #
  # volumeMounts:
  #   - name: <VOLUME_NAME>
  #     mountPath: <CONTAINER_PATH>
  #     readOnly: true

  ## @param confd - list of objects - optional
  ## Provide additional check configurations (static and Autodiscovery)
  ## Each key becomes a file in /conf.d
  ## ref: https://github.com/DataDog/datadog-agent/tree/master/Dockerfiles/agent#optional-volumes
  ## ref: https://docs.datadoghq.com/agent/autodiscovery/
  #
  confd:
    conf.yaml: |-
      init_config:
      instances:
      logs:
        - type: "file"
          path: "/app/logs/service.log"
          service: nodejs
          source: nodejs
          sourcecategory: sourcecode
    # kubernetes_state.yaml: |-
    #   ad_identifiers:
    #     - kube-state-metrics
    #   init_config:
    #   instances:
    #     - kube_state_url: http://%%host%%:8080/metrics

  ## @param checksd - list of key:value strings - optional
  ## Provide additional custom checks as python code
  ## Each key becomes a file in /checks.d
  ## ref: https://github.com/DataDog/datadog-agent/tree/master/Dockerfiles/agent#optional-volumes
  #
  # checksd:
  #   service.py: |-

  ## @param criSocketPath - string - optional
  ## Path to the container runtime socket (if different from Docker)
  ## This is supported starting from agent 6.6.0
  #
  # criSocketPath: /var/run/containerd/containerd.sock

  ## @param dogStatsDSocketPath - string - optional
  ## Path to the DogStatsD socket
  #
  # dogStatsDSocketPath: /var/run/datadog/dsd.socket

  ## @param livenessProbe - object - optional
  ## Override the agent's liveness probe logic from the default.
  ## In case of issues with the probe, you can disable it with the
  ## following values, to allow easier investigation:
  #
  # livenessProbe:
  #   exec:
  #     command: ["/bin/true"]

  ## @param resources - object - required
  ## datadog-agent resource requests and limits
  ## Make sure to keep requests and limits equal to keep the pods in the Guaranteed QoS class
  ## Ref: http://kubernetes.io/docs/user-guide/compute-resources/
  #
  resources: {}
  # requests:
  #   cpu: 200m
  #   memory: 256Mi
  # limits:
  #   cpu: 200m
  #   memory: 256Mi
## @param clusterAgent - object - required
## This is the Datadog Cluster Agent implementation that handles cluster-wide
## metrics more cleanly, separates concerns for better rbac, and implements
## the external metrics API so you can autoscale HPAs based on datadog metrics
## ref: https://docs.datadoghq.com/agent/kubernetes/cluster/
#
clusterAgent:
  ## @param enabled - boolean - required
  ## Set this to true to enable the Datadog Cluster Agent
  #
  enabled: false
  containerName: cluster-agent
  image:
    repository: datadog/cluster-agent
    tag: 1.3.2
    pullPolicy: IfNotPresent

  ## @param token - string - required
  ## This needs to be at least 32 characters a-zA-Z
  ## It is a preshared key between the node agents and the cluster agent
  ## ref:
  #
  token: ""
  replicas: 1

  ## @param metricsProvider - object - required
  ## Enable the metricsProvider to be able to scale based on metrics in Datadog
  #
  metricsProvider:
    enabled: true

  ## @param clusterChecks - object - required
  ## Enable the Cluster Checks feature on both the cluster-agents and the daemonset
  ## ref: https://docs.datadoghq.com/agent/autodiscovery/clusterchecks/
  ## Autodiscovery via Kube Service annotations is automatically enabled
  #
  clusterChecks:
    enabled: true

  ## @param confd - list of objects - optional
  ## Provide additional cluster check configurations
  ## Each key will become a file in /conf.d
  ## ref: https://docs.datadoghq.com/agent/autodiscovery/
  #
  # confd:
  #   mysql.yaml: |-
  #     cluster_check: true
  #     instances:
  #       - server: '<EXTERNAL_IP>'
  #         port: 3306
  #         user: datadog
  #         pass: '<YOUR_CHOSEN_PASSWORD>'

  ## @param resources - object - required
  ## Datadog cluster-agent resource requests and limits.
  #
  resources: {}
  # requests:
  #   cpu: 200m
  #   memory: 256Mi
  # limits:
  #   cpu: 200m
  #   memory: 256Mi

  ## @param priorityClassName - string - optional
  ## Name of the priorityClass to apply to the Cluster Agent
  # priorityClassName: system-cluster-critical

  ## @param livenessProbe - object - optional
  ## Override the agent's liveness probe logic from the default.
  ## In case of issues with the probe, you can disable it with the
  ## following values, to allow easier investigation:
  #
  # livenessProbe:
  #   exec:
  #     command: ["/bin/true"]

  ## @param podAnnotations - list of key:value strings - optional
  ## Annotations to add to the cluster-agent's pod(s)
  #
  # podAnnotations:
  #   key: "value"

  ## @param readinessProbe - object - optional
  ## Override the cluster-agent's readiness probe logic from the default:
  #
  # readinessProbe:

rbac:
  ## @param create - boolean - required
  ## If true, create & use RBAC resources
  #
  create: true

  ## @param serviceAccountName - string - required
  ## Ignored if rbac.create is true
  #
  serviceAccountName: default

tolerations: []
kubeStateMetrics:
  ## @param enabled - boolean - required
  ## If true, deploys the kube-state-metrics deployment.
  ## ref: https://github.com/kubernetes/charts/tree/master/stable/kube-state-metrics
  #
  enabled: true

kube-state-metrics:
  rbac:
    ## @param create - boolean - required
    ## If true, create & use RBAC resources
    #
    create: true
  serviceAccount:
    ## @param create - boolean - required
    ## If true, create a ServiceAccount; requires kube-state-metrics.rbac.create to be true
    #
    create: true

    ## @param name - string - required
    ## The name of the ServiceAccount to use.
    ## If not set and create is true, a name is generated using the fullname template
    #
    name: coupon-service-account

  ## @param resources - object - optional
  ## Resource requests and limits for the kube-state-metrics container.
  #
  # resources:
  #   requests:
  #     cpu: 200m
  #     memory: 256Mi
  #   limits:
  #     cpu: 200m
  #     memory: 256Mi
daemonset:
  ## @param enabled - boolean - required
  ## You should keep the Datadog DaemonSet enabled!
  ## The exceptional case could be a situation when you need to run a
  ## single Datadog pod per every namespace, but you do not need to
  ## re-create a DaemonSet for every non-default namespace install.
  ## Note: StatsD and DogStatsD work over UDP, so you may not
  ## get guaranteed delivery of the metrics in a Datadog-per-namespace setup!
  #
  enabled: true

  ## @param useDedicatedContainers - boolean - optional
  ## Deploy each datadog agent process in a separate container. Allows fine-grained
  ## control over allocated resources and better isolation.
  #
  # useDedicatedContainers: false

  containers:
    agent:
      ## @param env - list - required
      ## Additional environment variables for the agent container.
      #
      # env:

      ## @param logLevel - string - optional
      ## Set logging verbosity; valid log levels are:
      ## trace, debug, info, warn, error, critical, and off.
      ## If not set, falls back to the value of datadog.logLevel.
      #
      logLevel: INFO

      ## @param resources - object - required
      ## Resource requests and limits for the agent container.
      #
      resources:
        requests:
          cpu: 200m
          memory: 256Mi
        limits:
          cpu: 200m
          memory: 256Mi

    processAgent:
      ## @param env - list - required
      ## Additional environment variables for the process-agent container.
      #
      # env:

      ## @param logLevel - string - optional
      ## Set logging verbosity; valid log levels are:
      ## trace, debug, info, warn, error, critical, and off.
      ## If not set, falls back to the value of datadog.logLevel.
      #
      logLevel: INFO

      ## @param resources - object - required
      ## Resource requests and limits for the process-agent container.
      #
      resources:
        requests:
          cpu: 100m
          memory: 200Mi
        limits:
          cpu: 100m
          memory: 200Mi

    traceAgent:
      ## @param env - list - required
      ## Additional environment variables for the trace-agent container.
      #
      # env:

      ## @param logLevel - string - optional
      ## Set logging verbosity; valid log levels are:
      ## trace, debug, info, warn, error, critical, and off.
      ## If not set, falls back to the value of datadog.logLevel.
      #
      logLevel: INFO

      ## @param resources - object - required
      ## Resource requests and limits for the trace-agent container.
      #
      resources:
        requests:
          cpu: 100m
          memory: 200Mi
        limits:
          cpu: 100m
          memory: 200Mi

  ## @param useHostNetwork - boolean - optional
  ## Bind ports on the hostNetwork. Useful for CNI networking where hostPort might
  ## not be supported. The ports need to be available on all hosts. It can be
  ## used for custom metrics instead of a service endpoint.
  ##
  ## WARNING: Make sure that hosts using this are properly firewalled, otherwise
  ## metrics and traces are accepted from any host able to connect to this host.
  #
  useHostNetwork: true

  ## @param useHostPort - boolean - optional
  ## Sets the hostPort to the same value as the container port. Needs to be used
  ## to receive traces in a standard APM setup. Can also be used for sending custom metrics.
  ## The ports need to be available on all hosts.
  ##
  ## WARNING: Make sure that hosts using this are properly firewalled, otherwise
  ## metrics and traces are accepted from any host able to connect to this host.
  #
  useHostPort: true

  ## @param useHostPID - boolean - optional
  ## Run the agent in the host's PID namespace. This is required for Dogstatsd origin
  ## detection to work. See https://docs.datadoghq.com/developers/dogstatsd/unix_socket/
  #
  # useHostPID: true

  ## @param podAnnotations - list of key:value strings - optional
  ## Annotations to add to the DaemonSet's Pods
  #
  # podAnnotations:
  #   <POD_ANNOTATION>: '[{"key": "<KEY>", "value": "<VALUE>"}]'

  ## @param tolerations - array - optional
  ## Allow the DaemonSet to schedule on tainted nodes (requires Kubernetes >= 1.6)
  #
  # tolerations: []

  ## @param nodeSelector - object - optional
  ## Allow the DaemonSet to schedule on selected nodes
  ## Ref: https://kubernetes.io/docs/user-guide/node-selection/
  #
  # nodeSelector: {}

  ## @param affinity - object - optional
  ## Allow the DaemonSet to schedule using affinity rules
  ## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
  #
  # affinity: {}

  ## @param updateStrategy - string - optional
  ## Allow the DaemonSet to perform a rolling update on helm update
  ## ref: https://kubernetes.io/docs/tasks/manage-daemon/update-daemon-set/
  #
  # updateStrategy: RollingUpdate

  ## @param priorityClassName - string - optional
  ## Sets the PriorityClassName if defined.
  #
  # priorityClassName:

  ## @param podLabels - object - optional
  ## Sets podLabels if defined.
  #
  # podLabels: {}

  ## @param useConfigMap - boolean - optional
  ## Configures a configmap to provide the agent configuration
  #
  # useConfigMap: false
deployment:
  ## @param enabled - boolean - required
  ## Apart from the DaemonSet, deploy Datadog agent pods and a related service for
  ## applications that want to send custom metrics. Provides a DogStatsD service.
  #
  enabled: false

  ## @param replicas - integer - required
  ## If you want to use datadog.collectEvents, keep deployment.replicas set to 1.
  #
  replicas: 1

  ## @param affinity - object - required
  ## Affinity for pod assignment
  ## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
  #
  affinity: {}

  ## @param tolerations - array - required
  ## Tolerations for pod assignment
  ## Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
  #
  tolerations: []

  ## @param dogstatsdNodePort - integer - optional
  ## If you're using a NodePort-type service and need a fixed port, set this parameter.
  #
  # dogstatsdNodePort: 8125

  ## @param traceNodePort - integer - optional
  ## If you're using a NodePort-type service and need a fixed port, set this parameter.
  #
  # traceNodePort: 8126

  ## @param service - object - required
  ##
  #
  service:
    type: ClusterIP
    annotations: {}

  ## @param priorityClassName - string - optional
  ## Sets the PriorityClassName if defined.
  #
  # priorityClassName:
clusterchecksDeployment:
  ## @param enabled - boolean - required
  ## If true, deploys an agent dedicated to running the Cluster Checks instead of running them in the DaemonSet's agents.
  ## ref: https://docs.datadoghq.com/agent/autodiscovery/clusterchecks/
  #
  enabled: false

  rbac:
    ## @param dedicated - boolean - required
    ## If true, use a dedicated RBAC resource for the cluster checks agent(s)
    #
    dedicated: false

    ## @param serviceAccountName - string - required
    ## Ignored if rbac.create is true
    #
    serviceAccountName: default

  ## @param replicas - integer - required
  ## If you want to deploy the clusterchecks agent in HA, keep clusterchecksDeployment.replicas set to at least 2,
  ## and increase clusterchecksDeployment.replicas according to the number of Cluster Checks.
  #
  replicas: 2

  ## @param resources - object - required
  ## Datadog clusterchecks-agent resource requests and limits.
  #
  resources: {}
  # requests:
  #   cpu: 200m
  #   memory: 500Mi
  # limits:
  #   cpu: 200m
  #   memory: 500Mi

  ## @param affinity - object - optional
  ## Allow the ClusterChecks Deployment to schedule using affinity rules.
  ## By default, ClusterChecks Deployment Pods are forced to run on different Nodes.
  ## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
  #
  # affinity:

  ## @param nodeSelector - object - optional
  ## Allow the ClusterChecks Deployment to schedule on selected nodes
  ## Ref: https://kubernetes.io/docs/user-guide/node-selection/
  #
  # nodeSelector: {}

  ## @param tolerations - array - required
  ## Tolerations for pod assignment
  ## Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
  #
  # tolerations: []

  ## @param livenessProbe - object - optional
  ## Override the agent's liveness probe logic from the default.
  ## In case of issues with the probe, you can disable it with the
  ## following values, to allow easier investigation:
  #
  # livenessProbe:
  #   exec:
  #     command: ["/bin/true"]

  ## @param env - list of object - optional
  ## The dd-agent supports many environment variables
  ## ref: https://github.com/DataDog/datadog-agent/tree/master/Dockerfiles/agent#environment-variables
  #
  # env:
  #   - name: <ENV_VAR_NAME>
  #     value: <ENV_VAR_VALUE>
As you can see, I expect my logs to be available at /app/logs/service.log, and that's what I'm supplying in my conf.d:
confd:
  conf.yaml: |-
    init_config:
    instances:
    logs:
      - type: "file"
        path: "/app/logs/service.log"
        service: nodejs
        source: nodejs
        sourcecategory: sourcecode
In my service, I use the Winston logger with a file transport and the JSON format:
transports: [
  new transports.File({
    format: winston.format.json(),
    filename: `${process.env.LOGS_PATH}/service.log`,
  }),
],
where
process.env.LOGS_PATH = '/app/logs'
After deploying, when I explore my pod and tail -f service.log in the expected /app/logs folder, I can see that the application does write the logs in JSON format as expected.
However, Datadog doesn't pick up the logs and they are not showing in the Logs section.
NOTE: I do not mount any volumes to or from my service.
What am I missing?
Should I mount my local log to /var/log/pods/[service_name]/?

The main issue is that you would have to mount the volume in both the app container and the agent container in order to make the file available to both. It also means you have to find a place to store the log file before it gets picked up by the agent. Doing this for every container could become difficult to maintain and time-consuming.
An alternative approach is to send the logs to stdout instead and let the agent collect them with the Docker integration. Since you set logsConfigContainerCollectAll to true, the agent is already configured to collect logs from every container's output, so configuring Winston to log to stdout should just work.
See: https://docs.datadoghq.com/agent/docker/log/
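For example, the file transport could be swapped for a console transport; a minimal sketch, assuming winston 3.x and otherwise unchanged logger options:

const { createLogger, transports, format } = require('winston');

const logger = createLogger({
  // JSON to stdout, so the agent's container log collection can parse it
  format: format.json(),
  transports: [new transports.Console()],
});

logger.info('service started');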

To support rochdev's comment, here are a few code snippets to help out (if you do not opt for the stdout method, which should be simpler). These snippets only cover mounting the right volume into the app and agent containers.
On your app deployment, add:
spec:
  containers:
    - name: your_nodejs_app
      ...
      volumeMounts:
        - name: abc
          mountPath: /app/logs
  volumes:
    - hostPath:
        path: /app/logs
      name: abc
And on your agent daemonset:
spec:
  containers:
    - image: datadog/agent
      ...
      volumeMounts:
        ...
        - name: plop
          mountPath: /app/logs
  volumes:
    ...
    - hostPath:
        path: /app/logs/
      name: plop
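Since the agent is installed through the Helm chart, the agent-side mount can also be expressed with the chart's datadog.volumes and datadog.volumeMounts values (visible commented out in the values pasted in the question) instead of patching the DaemonSet by hand. A sketch, where the volume name applogs is chosen arbitrarily:

datadog:
  volumes:
    - hostPath:
        path: /app/logs
      name: applogs
  volumeMounts:
    - name: applogs
      mountPath: /app/logs
      readOnly: true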

Related

Getting Schema validation: Incompatible types in detekt.yml in Android Studio

Is there any way I can fix the schema warning in detekt.yml? I am attaching the screenshot and am open to adding more details.
A minimal reproducible detekt.yml is below:
build:
  maxIssues: 0
  excludeCorrectable: false
  weights:
    complexity: 2
    LongParameterList: 1
    style: 1
    comments: 1

config:
  validation: true
  warningsAsErrors: true
  # when writing own rules with new properties, exclude the property path e.g.: 'my_rule_set,.*>.*>[my_property]'
  excludes: ''

processors:
  active: true
  exclude:
    - 'DetektProgressListener'
  # - 'KtFileCountProcessor'
  # - 'PackageCountProcessor'
  # - 'ClassCountProcessor'
  # - 'FunctionCountProcessor'
  # - 'PropertyCountProcessor'
  # - 'ProjectComplexityProcessor'
  # - 'ProjectCognitiveComplexityProcessor'
  # - 'ProjectLLOCProcessor'
  # - 'ProjectCLOCProcessor'
  # - 'ProjectLOCProcessor'
  # - 'ProjectSLOCProcessor'
  # - 'LicenseHeaderLoaderExtension'

style:
  active: true
  ReturnCount:
    active: true
    max: 2
    excludedFunctions: 'equals'
    excludeLabeled: false
    excludeReturnFromLambda: true
    excludeGuardClauses: true
Android Studio version: Android Studio Dolphin | 2021.3.1 Patch 1
Build #AI-213.7172.25.2113.9123335, built on September 30, 2022
Runtime version: 11.0.13+0-b1751.21-8125866 x86_64
VM: OpenJDK 64-Bit Server VM by JetBrains s.r.o.
Okay, after some exploration I found out that in detekt release v1.21.0 they changed all of their configuration from comma-separated strings to arrays of strings in detekt.yml.
So what initially used to be excludedFunctions: 'equals, hashCode' has now become excludedFunctions: ['equals', 'hashCode'].
As of detekt release v1.22.0, they started throwing an error while building the project.
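Applied to the config above, the fix is just switching to the list syntax; a minimal sketch of the affected block:

style:
  active: true
  ReturnCount:
    active: true
    max: 2
    excludedFunctions: ['equals']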

snakemake allocates memory twice

I am noticing that all my rules request memory twice, once at a lower maximum than what I requested (mem_mb) and then what I actually requested (mem_gb). If I run the rules as localrules, they do run faster. How can I make sure the default settings do not interfere?
resources: mem_mb=100, disk_mb=8620, tmpdir=/tmp/pop071.54835, partition=h24, qos=normal, mem_gb=100, time=120:00:00
The rules are as follows:
rule bwa_mem2_mem:
    input:
        R1 = "data/results/qc/{species}.{population}.{individual}_1.fq.gz",
        R2 = "data/results/qc/{species}.{population}.{individual}_2.fq.gz",
        R1_unp = "data/results/qc/{species}.{population}.{individual}_1_unp.fq.gz",
        R2_unp = "data/results/qc/{species}.{population}.{individual}_2_unp.fq.gz",
        idx = "data/results/genome/genome",
        ref = "data/results/genome/genome.fa"
    output:
        bam = "data/results/mapped_reads/{species}.{population}.{individual}.bam",
    log:
        bwa = "logs/bwa_mem2/{species}.{population}.{individual}.log",
        sam = "logs/samtools_view/{species}.{population}.{individual}.log",
    benchmark:
        "benchmark/bwa_mem2_mem/{species}.{population}.{individual}.tsv",
    resources:
        time = parameters["bwa_mem2"]["time"],
        mem_gb = parameters["bwa_mem2"]["mem_gb"],
    params:
        extra = parameters["bwa_mem2"]["extra"],
        tag = compose_rg_tag,
    threads:
        parameters["bwa_mem2"]["threads"],
    shell:
        "bwa-mem2 mem -t {threads} -R '{params.tag}' {params.extra} {input.idx} {input.R1} {input.R2} | "
        "samtools sort -l 9 -o {output.bam} --reference {input.ref} --output-fmt CRAM -@ {threads} /dev/stdin 2> {log.sam}"
and the config is:
cluster:
  mkdir -p logs/{rule} &&  # change the log file to logs/slurm/{rule}
  sbatch
    --partition={resources.partition}
    --time={resources.time}
    --qos={resources.qos}
    --cpus-per-task={threads}
    --mem={resources.mem_gb}
    --job-name=smk-{rule}-{wildcards}
    --output=logs/{rule}/{rule}-{wildcards}-%j.out
    --parsable  # Required to pass job IDs to scancel
default-resources:
  - partition=h24
  - qos=normal
  - mem_gb=100
  - time="04:00:00"
restart-times: 3
max-jobs-per-second: 10
max-status-checks-per-second: 1
local-cores: 1
latency-wait: 60
jobs: 100
keep-going: True
rerun-incomplete: True
printshellcmds: True
scheduler: greedy
use-conda: True  # Required to run with local conda environment
cluster-status: status-sacct.sh  # Required to monitor the status of the submitted jobs
cluster-cancel: scancel  # Required to cancel the jobs with Ctrl + C
cluster-cancel-nargs: 50
Cheers,
Angel
Right now there are two separate memory resource requirements:
mem_mb
mem_gb
From the perspective of snakemake these are different resources, so both will be passed to the cluster. A quick fix is to use the same units everywhere, e.g. if the resource really requires only 100 MB, then the default resource should be changed to:
default-resources:
  - partition=h24
  - qos=normal
  - mem_mb=100
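Since the rule itself still overrides mem_gb, the rule's resources and the sbatch flag in the cluster profile would also need to agree on that same key for the units to line up end to end. A sketch of the matching submission flag, assuming everything is now expressed in megabytes:

cluster:
  sbatch
    --partition={resources.partition}
    --mem={resources.mem_mb}
    --time={resources.time}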

RASA 2.8 - Final custom action in rules is not run

I'm trying to set up a bot that takes three inputs, two from buttons and one from text. Currently, I'm able to pass in all of the required input. However, when it comes time for the final action, the bot does not seem to know what to do, and returns either a greeting or a message that it does not understand me. Is there anything wrong with my rules or domain files?
rule.yml
version: "2.0"
rules:
- rule: Activate conv_get_job_status
steps:
- intent: get_job_query
- action: action_populate_user_bot_info
- action: job_type_selection #this creates a list of buttons for job type selection
- action: job_type_input #job type form, to set job_type slot
- active_loop: job_type_input
- rule: Submit conv_get_job_status
condition:
- active_loop: job_type_input
- active_loop: job_name_and_number_input
steps:
- action: job_type_input
- active_loop: null
- slot_was_set:
- requested_slot: null
- action: job_name_selection #this creates a list of buttons for job name selection
- action: job_name_and_number_input #job name and number form, to set job_name and job number slots
- active_loop: job_name_and_number_input
- action: job_name_and_number_input
- active_loop: null
- slot_was_set:
- requested_slot: null
- action: action_return_job_status #various utterances, currently not called.
testjob_domain.yml
version: '2.0'
entities:
- job_type
- job_name
- job_number
intents:
- get_job_query
actions:
- action_reset_job_type_input
- job_number_query
- utter_ask_job_number
- utter_ask_job_name
- utter_ask_job_type
- action_return_job_status
- job_name_selection
- get_job_status
- action_reset_get_job_status
- job_type_selection
- action_reset_job_name_and_number_input
responses:
  utter_ask_job_type:
  - text: Select job type
  utter_ask_job_name:
  - text: Select job name
  utter_ask_job_number:
  - text: Enter job number
slots:
  execute_action:
    type: text
  job_type:
    type: text
  job_name:
    type: text
  job_number:
    type: text
forms:
  job_type_input:
    job_type:
    - type: from_text
      intent_name: None
  job_name_and_number_input:
    job_name:
    - type: from_text
      intent_name: None
    job_number:
    - type: from_text
      intent_name: None
The code for action_return_job_status:
from .action_utils import *

class ActionReturnJobStatus(Action):
    def name(self):
        return "action_return_job_status"

    def run(self, dispatcher, tracker, domain):
        logger.info("Action - action_return_job_status")
        logger.info("execute_action_slot: " + str(tracker.slots['execute_action']))
        record_bot_event(tracker, dispatcher, 'new_session', auth_critical=authorization_critical)
        # set auth_critical=True to skip action code execution
        if not verifyBAMToken(tracker, dispatcher, auth_critical=authorization_critical):
            return
        logger.info("RUNNING GET ACTION")  # I would expect to see either this in the logs or the utterances. Neither happens
        dispatcher.utter_message("RUNNING GET ACTION")
        return [SlotSet('latest_faq_question', None), SlotSet('latest_application_name', None)]

Traefik persistent volume timeouts on AKS

I'm struggling to get Traefik working on K8s with ACME enabled. I want to store the certs, as suggested, on a persistentVolume, because requesting certs is rate-limited and in case of pod restarts the cert would get lost. Below is my full config used for stable/traefik (helm chart), installed in Azure AKS.
There is an issue that does not seem to work (or I'm just doing it wrong, of course):
pod has unbound immediate PersistentVolumeClaims
This is the initial error that I receive when booting up the pods. The weird thing is that the PersistentVolumeClaim is actually there and ready. When I check the volume itself in my Azure portal it also says it is mounted to my server:
traefik-acme
Namespace: default
pv.kubernetes.io/bind-completed: yes
pv.kubernetes.io/bound-by-controller: yes
volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/azure-disk
Creation Time: 2019-04-16T09:55 UTC
Status: Bound
Volume: pvc-b673da74-602d-11e9-a537-9275388
Access modes: ReadWriteOnce
Storage class: default
Also the storageClass itself is active:
$ kubectl get sc --all-namespaces
NAME PROVISIONER AGE
default (default) kubernetes.io/azure-disk 4d
managed-premium kubernetes.io/azure-disk 4d
When I then wait a little longer, I receive the error below:
Unable to mount volumes for pod "traefik-d65fcbc8b-lkzsh_default(b68c8aa3-602d-11e9-a537-92753888c74b)": timeout expired waiting for volumes to attach or mount for pod "default"/"traefik-d65fcbc8b-lkzsh". list of unmounted volumes=[acme]. list of unattached volumes=[config acme default-token-p2lgf]
Here is the full K8s event trace (message | source | time):
pod has unbound immediate PersistentVolumeClaims | default-scheduler | 2019-04-16T09:55 UTC
Successfully assigned default/traefik-d65fcbc8b-lkzsh to aks-default-22301976-0 | default-scheduler | 2019-04-16T09:55 UTC
Unable to mount volumes for pod "traefik-d65fcbc8b-lkzsh_default(b68c8aa3-602d-11e9-a537-92753888c74b)": timeout expired waiting for volumes to attach or mount for pod "default"/"traefik-d65fcbc8b-lkzsh". list of unmounted volumes=[acme]. list of unattached volumes=[config acme default-token-p2lgf] | kubelet aks-default-22301976-0 | 2019-04-16T09:57 UTC
AttachVolume.Attach succeeded for volume "pvc-b673da74-602d-11e9-a537-92753888c74b" | attachdetach-controller | 2019-04-16T09:58 UTC
Container image "traefik:1.7.9" already present on machine | kubelet aks-default-22301976-0 | 2019-04-16T10:01 UTC
Created container | kubelet aks-default-22301976-0 | 2019-04-16T10:00 UTC
Started container | kubelet aks-default-22301976-0 | 2019-04-16T10:00 UTC
Back-off restarting failed container | kubelet aks-default-22301976-0 | 2019-04-16T10:02 UTC
Install
Installing the Traefik helm chart is done with:
helm install -f values.yaml stable/traefik --name traefik
Below is the full values.yaml used to install the chart
## Default values for Traefik
image: traefik
imageTag: 1.7.9
testFramework:
  image: "dduportal/bats"
  tag: "0.4.0"

## can switch the service type to NodePort if required
serviceType: LoadBalancer
# loadBalancerIP: ""
# loadBalancerSourceRanges: []
whiteListSourceRange: []
externalTrafficPolicy: Cluster
replicas: 1
# startupArguments:
#   - "--ping"
#   - "--ping.entrypoint=http"
podDisruptionBudget: {}
# maxUnavailable: 1
# minAvailable: 2
# priorityClassName: ""
# rootCAs: []
resources: {}
debug:
  enabled: false
deploymentStrategy: {}
# rollingUpdate:
#   maxSurge: 1
#   maxUnavailable: 0
# type: RollingUpdate
securityContext: {}
env: {}
nodeSelector: {}
# key: value
affinity: {}
# key: value
tolerations: []
# - key: "key"
#   operator: "Equal|Exists"
#   value: "value"
#   effect: "NoSchedule|PreferNoSchedule|NoExecute(1.6 only)"

## Kubernetes ingress filters
# kubernetes:
#   endpoint:
#   namespaces:
#     - default
#   labelSelector:
#   ingressClass:
#   ingressEndpoint:
#     hostname: "localhost"
#     ip: "127.0.0.1"
#     publishedService: "namespace/servicename"
#     useDefaultPublishedService: false

proxyProtocol:
  enabled: false
  # trustedIPs is required when enabled
  trustedIPs: []
  # - 10.0.0.0/8
forwardedHeaders:
  enabled: false
  # trustedIPs is required when enabled
  trustedIPs: []
  # - 10.0.0.0/8

## Add arbitrary ConfigMaps to deployment
## Will be mounted to /configs/, i.e. myconfig.json would
## be mounted to /configs/myconfig.json.
configFiles: {}
# myconfig.json: |
#   filecontents...

## Add arbitrary Secrets to deployment
## Will be mounted to /secrets/, i.e. file.name would
## be mounted to /secrets/mysecret.txt.
## The contents will be base64 encoded when added
secretFiles: {}
# mysecret.txt: |
#   filecontents...
ssl:
  enabled: false
  enforced: false
  permanentRedirect: false
  upstream: false
  insecureSkipVerify: false
  generateTLS: false
  # defaultCN: "example.com"
  # or *.example.com
  defaultSANList: []
  # - example.com
  # - test1.example.com
  defaultIPList: []
  # - 1.2.3.4
  # cipherSuites: []
  # https://docs.traefik.io/configuration/entrypoints/#specify-minimum-tls-version
  # tlsMinVersion: VersionTLS12
  defaultCert: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUVtekNDQTRPZ0F3SUJBZ0lKQUpBR1FsTW1DMGt5TUEwR0NTcUdTSWIzRFFFQkJRVUFNSUdQTVFzd0NRWUQKVlFRR0V3SlZVekVSTUE4R0ExVUVDQk1JUTI5c2IzSmhaRzh4RURBT0JnTlZCQWNUQjBKdmRXeGtaWEl4RkRBUwpCZ05WQkFvVEMwVjRZVzF3YkdWRGIzSndNUXN3Q1FZRFZRUUxFd0pKVkRFV01CUUdBMVVFQXhRTktpNWxlR0Z0CmNHeGxMbU52YlRFZ01CNEdDU3FHU0liM0RRRUpBUllSWVdSdGFXNUFaWGhoYlhCc1pTNWpiMjB3SGhjTk1UWXgKTURJME1qRXdPVFV5V2hjTk1UY3hNREkwTWpFd09UVXlXakNCanpFTE1Ba0dBMVVFQmhNQ1ZWTXhFVEFQQmdOVgpCQWdUQ0VOdmJHOXlZV1J2TVJBd0RnWURWUVFIRXdkQ2IzVnNaR1Z5TVJRd0VnWURWUVFLRXd0RmVHRnRjR3hsClEyOXljREVMTUFrR0ExVUVDeE1DU1ZReEZqQVVCZ05WQkFNVURTb3VaWGhoYlhCc1pTNWpiMjB4SURBZUJna3EKaGtpRzl3MEJDUUVXRVdGa2JXbHVRR1Y0WVcxd2JHVXVZMjl0TUlJQklqQU5CZ2txaGtpRzl3MEJBUUVGQUFPQwpBUThBTUlJQkNnS0NBUUVBdHVKOW13dzlCYXA2SDROdUhYTFB6d1NVZFppNGJyYTFkN1ZiRUJaWWZDSStZNjRDCjJ1dThwdTNhVTVzYXVNYkQ5N2pRYW95VzZHOThPUHJlV284b3lmbmRJY3RFcmxueGpxelUyVVRWN3FEVHk0bkEKNU9aZW9SZUxmZXFSeGxsSjE0VmlhNVFkZ3l3R0xoRTlqZy9jN2U0WUp6bmg5S1dZMnFjVnhEdUdEM2llaHNEbgphTnpWNFdGOWNJZm1zOHp3UHZPTk5MZnNBbXc3dUhUKzNiSzEzSUloeDI3ZmV2cXVWcENzNDFQNnBzdStWTG4yCjVIRHk0MXRoQkN3T0wrTithbGJ0ZktTcXM3TEFzM25RTjFsdHpITHZ5MGE1RGhkakpUd2tQclQrVXhwb0tCOUgKNFpZazErRUR0N09QbGh5bzM3NDFRaE4vSkNZK2RKbkFMQnNValFJREFRQUJvNEgzTUlIME1CMEdBMVVkRGdRVwpCQlJwZVc1dFhMdHh3TXJvQXM5d2RNbTUzVVVJTERDQnhBWURWUjBqQklHOE1JRzVnQlJwZVc1dFhMdHh3TXJvCkFzOXdkTW01M1VVSUxLR0JsYVNCa2pDQmp6RUxNQWtHQTFVRUJoTUNWVk14RVRBUEJnTlZCQWdUQ0VOdmJHOXkKWVdSdk1SQXdEZ1lEVlFRSEV3ZENiM1ZzWkdWeU1SUXdFZ1lEVlFRS0V3dEZlR0Z0Y0d4bFEyOXljREVMTUFrRwpBMVVFQ3hNQ1NWUXhGakFVQmdOVkJBTVVEU291WlhoaGJYQnNaUzVqYjIweElEQWVCZ2txaGtpRzl3MEJDUUVXCkVXRmtiV2x1UUdWNFlXMXdiR1V1WTI5dGdna0FrQVpDVXlZTFNUSXdEQVlEVlIwVEJBVXdBd0VCL3pBTkJna3EKaGtpRzl3MEJBUVVGQUFPQ0FRRUFjR1hNZms4TlpzQit0OUtCemwxRmw2eUlqRWtqSE8wUFZVbEVjU0QyQjRiNwpQeG5NT2pkbWdQcmF1SGI5dW5YRWFMN3p5QXFhRDZ0YlhXVTZSeENBbWdMYWpWSk5aSE93NDVOMGhyRGtXZ0I4CkV2WnRRNTZhbW13QzFxSWhBaUE2MzkwRDNDc2V4N2dMNm5KbzdrYnIxWVdVRzN6SXZveGR6OFlEclpOZVdLTEQKcFJ2V2VuMGxNYnBqSVJQNFhac25DNDVDOWdWWGRoM0xSZTErd3lRcTZoOVFQaWxveG1ENk5wRTlpbVRPbjJBNQovYkozVktJekFNdWRlVTZrcHlZbEpCemRHMXVhSFRqUU9Xb3NHaXdlQ0tWVVhGNlV0aXNWZGRyeFF0aDZFTnlXCnZJRnFhWng4NCtEbFNDYzkzeWZrL0dsQnQrU0tHNDZ6RUhNQjlocVBiQT09Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
  defaultKey: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBdHVKOW13dzlCYXA2SDROdUhYTFB6d1NVZFppNGJyYTFkN1ZiRUJaWWZDSStZNjRDCjJ1dThwdTNhVTVzYXVNYkQ5N2pRYW95VzZHOThPUHJlV284b3lmbmRJY3RFcmxueGpxelUyVVRWN3FEVHk0bkEKNU9aZW9SZUxmZXFSeGxsSjE0VmlhNVFkZ3l3R0xoRTlqZy9jN2U0WUp6bmg5S1dZMnFjVnhEdUdEM2llaHNEbgphTnpWNFdGOWNJZm1zOHp3UHZPTk5MZnNBbXc3dUhUKzNiSzEzSUloeDI3ZmV2cXVWcENzNDFQNnBzdStWTG4yCjVIRHk0MXRoQkN3T0wrTithbGJ0ZktTcXM3TEFzM25RTjFsdHpITHZ5MGE1RGhkakpUd2tQclQrVXhwb0tCOUgKNFpZazErRUR0N09QbGh5bzM3NDFRaE4vSkNZK2RKbkFMQnNValFJREFRQUJBb0lCQUhrTHhka0dxNmtCWWQxVAp6MkU4YWFENnhneGpyY2JSdGFCcTc3L2hHbVhuQUdaWGVWcE81MG1SYW8wbHZ2VUgwaE0zUnZNTzVKOHBrdzNmCnRhWTQxT1dDTk1PMlYxb1MvQmZUK3Zsblh6V1hTemVQa0pXd2lIZVZMdVdEaVVMQVBHaWl4emF2RFMyUnlQRmEKeGVRdVNhdE5pTDBGeWJGMG5Zd3pST3ZoL2VSa2NKVnJRZlZudU1melFkOGgyMzZlb1UxU3B6UnhSNklubCs5UApNc1R2Wm5OQmY5d0FWcFo5c1NMMnB1V1g3SGNSMlVnem5oMDNZWUZJdGtDZndtbitEbEdva09YWHBVM282aWY5ClRIenBleHdubVJWSmFnRG85bTlQd2t4QXowOW80cXExdHJoU1g1U2p1K0xyNFJvOHg5bytXdUF1VnVwb0lHd0wKMWVseERFRUNnWUVBNzVaWGp1enNJR09PMkY5TStyYVFQcXMrRHZ2REpzQ3gyZnRudk1WWVJKcVliaGt6YnpsVQowSHBCVnk3NmE3WmF6Umxhd3RGZ3ljMlpyQThpM0F3K3J6d1pQclNJeWNieC9nUVduRzZlbFF1Y0FFVWdXODRNCkdSbXhKUGlmOGRQNUxsZXdRalFjUFJwZVoxMzlYODJreGRSSEdma1pscHlXQnFLajBTWExRSEVDZ1lFQXcybkEKbUVXdWQzZFJvam5zbnFOYjBlYXdFUFQrbzBjZ2RyaENQOTZQK1pEekNhcURUblZKV21PeWVxRlk1eVdSSEZOLwpzbEhXU2lTRUFjRXRYZys5aGlMc0RXdHVPdzhUZzYyN2VrOEh1UUtMb2tWWEFUWG1NZG9xOWRyQW9INU5hV2lECmRSY3dEU2EvamhIN3RZV1hKZDA4VkpUNlJJdU8vMVZpbDBtbEk5MENnWUVBb2lsNkhnMFNUV0hWWDNJeG9raEwKSFgrK1ExbjRYcFJ5VEg0eldydWY0TjlhYUxxNTY0QThmZGNodnFiWGJHeEN6U3RxR1E2cW1peUU1TVpoNjlxRgoyd21zZEpxeE14RnEzV2xhL0lxSzM0cTZEaHk3cUNld1hKVGRKNDc0Z3kvY0twZkRmeXZTS1RGZDBFejNvQTZLCmhqUUY0L2lNYnpxUStQREFQR0YrVHFFQ2dZQmQ1YnZncjJMMURzV1FJU3M4MHh3MDBSZDdIbTRaQVAxdGJuNk8KK0IvUWVNRC92UXBaTWV4c1hZbU9lV2Noc3FCMnJ2eW1MOEs3WDY1NnRWdGFYay9nVzNsM3ZVNTdYSFF4Q3RNUwpJMVYvcGVSNHRiN24yd0ZncFFlTm1XNkQ4QXk4Z0xiaUZhRkdRSDg5QWhFa0dTd1d5cWJKc2NoTUZZOUJ5OEtUCkZaVWZsUUtCZ0V3VzJkVUpOZEJMeXNycDhOTE1VbGt1ZnJxbllpUTNTQUhoNFZzWkg1TXU0MW55Yi95NUUyMW4KMk55d3ltWGRlb3VJcFZjcUlVTXl0L3FKRmhIcFJNeVEyWktPR0QyWG5YaENNVlRlL0FQNDJod294Nm02QkZpQgpvemZFa2wwak5uZmREcjZrL1p2MlQ1TnFzaWxaRXJBQlZGOTBKazdtUFBIa0Q2R1ZMUUJ4Ci0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==

# Basic auth to protect all the routes. Can use htpasswd to generate passwords
# > htpasswd -n -b testuser testpass
# > testuser:$apr1$JXRA7j2s$LpVns9vsme8FHN0r.aSt11
auth: {}
# basic:
#   testuser: $apr1$JXRA7j2s$LpVns9vsme8FHN0r.aSt11
kvprovider:
  ## If you want to run Traefik in HA mode, you will need to setup a KV Provider. Therefore you can choose one of
  ## * etcd
  ## * consul
  ## * boltdb
  ## * zookeeper
  ##
  ## ref: https://docs.traefik.io/user-guide/cluster/

  ## storeAcme has to be enabled to support HA Support using acme, but at least one kvprovider is needed
  storeAcme: false
  importAcme: false

  # etcd:
  #   endpoint: etcd-service:2379
  #   useAPIV3: false
  #   watch: true
  #   prefix: traefik
  ## Override default configuration template.
  ## For advanced users :)
  ##
  ## Optional
  #   filename: consul.tmpl
  #   username: foo
  #   password: bar
  #   tls:
  #     ca: "/etc/ssl/ca.crt"
  #     cert: "/etc/ssl/consul.crt"
  #     key: "/etc/ssl/consul.key"
  #     insecureSkipVerify: true
  #
  # consul:
  #   endpoint: consul-service:8500
  #   watch: true
  #   prefix: traefik
  ## Override default configuration template.
  ## For advanced users :)
  ##
  ## Optional
  #   filename: consul.tmpl
  #   username: foo
  #   password: bar
  #   tls:
  #     ca: "/etc/ssl/ca.crt"
  #     cert: "/etc/ssl/consul.crt"
  #     key: "/etc/ssl/consul.key"
  #     insecureSkipVerify: true
  ## only relevant for etcd
acme:
  enabled: true
  email: me@gmail.com
  onHostRule: true
  staging: true
  logging: true
  # Configure a Let's Encrypt certificate to be managed by default.
  # This is the only way to request wildcard certificates (works only with the dns challenge).
  domains:
    enabled: true
    # List of sets of main and (optional) SANs to generate for;
    # for wildcard certificates see https://docs.traefik.io/configuration/acme/#wildcard-domains
    domainsList:
      - main: "*.k8s-test.hardstyletop40.com"
      # - sans:
      #     - "k8s-test.hardstyletop40.com"
      # - main: "*.example2.com"
      # - sans:
      #     - "test1.example2.com"
      #     - "test2.example2.com"

  ## ACME challenge type: "tls-sni-01", "tls-alpn-01", "http-01" or "dns-01"
  ## Note the chart's default of tls-sni-01 has been DEPRECATED and (except in
  ## certain circumstances) DISABLED by Let's Encrypt. It remains as a default
  ## value in this chart to preserve legacy behavior and avoid a breaking
  ## change. Users of this chart should strongly consider making the switch to
  ## the recommended "tls-alpn-01" (available since v1.7), dns-01 or http-01
  ## (available since v1.5) challenge.
  challengeType: tls-alpn-01

  ## Configure dnsProvider to perform domain verification using the dns challenge
  ## Applicable only if using the dns-01 challenge type
  delayBeforeCheck: 0
  resolvers: []
  # - 1.1.1.1:53
  # - 8.8.8.8:53
  dnsProvider:
    name: nil
    auroradns:
      AURORA_USER_ID: ""
      AURORA_KEY: ""
      AURORA_ENDPOINT: ""
    azure:
      AZURE_CLIENT_ID: ""
      AZURE_CLIENT_SECRET: ""
      AZURE_SUBSCRIPTION_ID: ""
      AZURE_TENANT_ID: ""
      AZURE_RESOURCE_GROUP: ""
    cloudflare:
      CLOUDFLARE_EMAIL: ""
      CLOUDFLARE_API_KEY: ""
    digitalocean:
      DO_AUTH_TOKEN: ""
    dnsimple:
      DNSIMPLE_OAUTH_TOKEN: ""
      DNSIMPLE_BASE_URL: ""
    dnsmadeeasy:
      DNSMADEEASY_API_KEY: ""
      DNSMADEEASY_API_SECRET: ""
      DNSMADEEASY_SANDBOX: ""
    dnspod:
      DNSPOD_API_KEY: ""
    dyn:
      DYN_CUSTOMER_NAME: ""
      DYN_USER_NAME: ""
      DYN_PASSWORD: ""
    exoscale:
      EXOSCALE_API_KEY: ""
      EXOSCALE_API_SECRET: ""
      EXOSCALE_ENDPOINT: ""
    gandi:
      GANDI_API_KEY: ""
    godaddy:
      GODADDY_API_KEY: ""
      GODADDY_API_SECRET: ""
    gcloud:
      GCE_PROJECT: ""
      GCE_SERVICE_ACCOUNT_FILE: ""
    linode:
      LINODE_API_KEY: ""
    namecheap:
      NAMECHEAP_API_USER: ""
      NAMECHEAP_API_KEY: ""
    ns1:
      NS1_API_KEY: ""
    otc:
      OTC_DOMAIN_NAME: ""
      OTC_USER_NAME: ""
      OTC_PASSWORD: ""
      OTC_PROJECT_NAME: ""
      OTC_IDENTITY_ENDPOINT: ""
    ovh:
      OVH_ENDPOINT: ""
      OVH_APPLICATION_KEY: ""
      OVH_APPLICATION_SECRET: ""
      OVH_CONSUMER_KEY: ""
    pdns:
      PDNS_API_URL: ""
    rackspace:
      RACKSPACE_USER: ""
      RACKSPACE_API_KEY: ""
    rfc2136:
      RFC2136_NAMESERVER: ""
      RFC2136_TSIG_ALGORITHM: ""
      RFC2136_TSIG_KEY: ""
      RFC2136_TSIG_SECRET: ""
      RFC2136_TIMEOUT: ""
    route53:
      AWS_REGION: ""
      AWS_ACCESS_KEY_ID: ""
      AWS_SECRET_ACCESS_KEY: ""
    vultr:
      VULTR_API_KEY: ""

  ## Save ACME certs to a persistent volume.
  ## WARNING: If you do not do this and you have not configured
  ## a kvprovider, you will re-request certs every time a pod (re-)starts
  ## and you WILL be rate limited!
  persistence:
    enabled: true
    annotations: {}
    ## acme data Persistent Volume Storage Class
    ## If defined, storageClassName: <storageClass>
    ## If set to "-", storageClassName: "", which disables dynamic provisioning
    ## If undefined (the default) or set to null, no storageClassName spec is
    ## set, choosing the default provisioner. (gp2 on AWS, standard on
    ## GKE, AWS & OpenStack)
    ##
    storageClass: "default"
    accessMode: ReadWriteOnce
    size: 1Gi
    ## A manually managed Persistent Volume Claim
    ## Requires persistence.enabled: true
    ## If defined, the PVC must be created manually before the volume will be bound
    ##
    # existingClaim:
dashboard:
  enabled: true
  domain: traefik.k8s-test.hardstyletop40.com
  # serviceType: ClusterIP
  service: {}
  # annotations:
  #   key: value
  ingress: {}
  # annotations:
  #   key: value
  # labels:
  #   key: value
  # tls:
  #   - hosts:
  #       - traefik.example.com
  #     secretName: traefik-default-cert
  auth: {}
  # basic:
  #   username: password
  statistics: {}
  ## Number of recent errors to show in the 'Health' tab
  # recentErrors:

service:
  # annotations:
  #   key: value
  # labels:
  #   key: value
  ## Further config for service of type NodePort
  ## Default config with empty string "" will assign a dynamic
  ## nodePort to http and https ports
  nodePorts:
    http: ""
    https: ""
  ## If static nodePort configuration is required it can be enabled as below
  ## Configure ports in allowable range (eg. 30000 - 32767 on minikube)
  # nodePorts:
  #   http: 30080
  #   https: 30443
gzip:
  enabled: true
traefikLogFormat: json
accessLogs:
  enabled: false
  ## Path to the access logs file. If not provided, Traefik defaults it to stdout.
  # filePath: ""
  format: common  # choices are: common, json
  ## for JSON logging, finer-grained control over what is logged. Fields can be
  ## retained or dropped, and request headers can be retained, dropped or redacted
  fields:
    # choices are keep, drop
    defaultMode: keep
    names: {}
    # ClientUsername: drop
    headers:
      # choices are keep, drop, redact
      defaultMode: keep
      names: {}
      # Authorization: redact
rbac:
  enabled: false
## Enable the /metrics endpoint, for now only supports prometheus
## set to true to enable metric collection by prometheus
metrics:
  prometheus:
    enabled: false
    ## If true, prevents exposing port 8080 on the main Traefik service, reserving
    ## it to the dashboard service only
    restrictAccess: false
    # buckets: [0.1,0.3,1.2,5]
  datadog:
    enabled: false
    # address: localhost:8125
    # pushinterval: 10s
  statsd:
    enabled: false
    # address: localhost:8125
    # pushinterval: 10s
deployment:
  # labels to add to the pod container metadata
  # podLabels:
  #   key: value
  # podAnnotations:
  #   key: value
  hostPort:
    httpEnabled: false
    httpsEnabled: false
    dashboardEnabled: false
    # httpPort: 80
    # httpsPort: 443
    # dashboardPort: 8080
sendAnonymousUsage: false
tracing:
  enabled: false
  serviceName: traefik
  # backend: choices are jaeger, zipkin, datadog
  # jaeger:
  #   localAgentHostPort: "127.0.0.1:6831"
  #   samplingServerURL: http://localhost:5778/sampling
  #   samplingType: const
  #   samplingParam: 1.0
  # zipkin:
  #   httpEndpoint: http://localhost:9411/api/v1/spans
  #   debug: false
  #   sameSpan: false
  #   id128bit: true
  # datadog:
  #   localAgentHostPort: "127.0.0.1:8126"
  #   debug: false
  #   globalTag: ""

## Create HorizontalPodAutoscaler object.
##
# autoscaling:
#   minReplicas: 1
#   maxReplicas: 10
#   metrics:
#     - type: Resource
#       resource:
#         name: cpu
#         targetAverageUtilization: 60
#     - type: Resource
#       resource:
#         name: memory
#         targetAverageUtilization: 60

## Timeouts
##
# timeouts:
#   ## responding are timeouts for incoming requests to the Traefik instance
#   responding:
#     readTimeout: 0s
#     writeTimeout: 0s
#     idleTimeout: 180s
#   ## forwarding are timeouts for requests forwarded to the backend servers
#   forwarding:
#     dialTimeout: 30s
#     responseHeaderTimeout: 0s
For your issue, it seems you misunderstand persistent volume claims. When you use the command:
kubectl get sc --all-namespaces
it just shows the storage classes, not the persistent volume claims. A storage class is used to define how a unit of storage is dynamically created with a persistent volume. You need to create the persistent volume claims you need, like this:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: azure-managed-disk
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: managed-premium
  resources:
    requests:
      storage: 5Gi
And you can use the command below to show the persistent volume claims:
kubectl get pvc --all-namespaces
It actually shows the persistent volume claims that you created. Take a look at Dynamically create and use a persistent volume with Azure disks in Azure Kubernetes Service (AKS), or use the special disk that you create.
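If you pre-create a claim like this, the chart already has a hook to consume it: the acme.persistence.existingClaim value shown commented out in the values above. A sketch, assuming the claim name from the example PVC:

acme:
  persistence:
    enabled: true
    existingClaim: azure-managed-disk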
Update
Also, I got the same error as you, but when the pod reached the running state I checked inside the pod and found the volumes all mounted correctly. So I guess the error appears because the pod is not yet in the running state; once the pod is running, the volumes mount as expected.
The main issue is that attaching external Azure resources is slow, and there is an initial retry: the pod reports a lot of errors that it cannot mount the volume, since the volume is dynamically created, but thanks to the retry it recovers after a few minutes.
In fact, the actual container crash was due to an issue with ACME and Traefik itself, and not directly with the volumes.

console_init Causes Kernel Panic

System Details:
OS: Debian/5.0 kernel 2.6.26-2 i686 SMP
Hardware: IBM Thinkpad T40 Type 2373 Pentium M 1.5GHz, 512MB RAM
Sources: sudo apt-get install linux-source-2.6.18 linux-patch-debian-2.6.18 linux-support-2.6.18-5
Toolchain: arm-linux-gcc3.4.cs-uclibc0.9.27 as installed by scratchbox
arm-linux-uclibc-gcc/-g++ -v:
Reading specs from /scratchbox/compilers/arm-linux-gcc3.4.cs-uclibc0.9.27/lib/gcc/arm-linux-uclibc/3.4.2/specs
Configured with: /home/larimo/sb-toolchains/cc/gcc-jp-pass2/work/gcc-2004-q3d/configure --target=arm-linux-uclibc --host=i686-pc-linux-gnu --build=i686-pc-linux-gnu --prefix=/scratchbox/compilers/arm-linux-gcc3.4.cs-uclibc0.9.27 --enable-languages=c,c++ --program-prefix=arm-linux-uclibc- --enable-shared --enable-static --with-sysroot=/scratchbox/compilers/arm-linux-gcc3.4.cs-uclibc0.9.27 --with-local-prefix=/scratchbox/compilers/arm-linux-gcc3.4.cs-uclibc0.9.27 --enable-symvers=gnu --with-gnu-ld
Thread model: posix
gcc version 3.4.2 (release) (CodeSourcery ARM Q3D 2004)
qemu-system-arm: v0.9.1
qemu command line: qemu-system-arm -m 32 -M integratorcp -kernel zImage -serial stdio -S -s
gdb command line: arm-uclibc-gdb --command=gdb_commands.vim --symbols /usr/src/linux-source-2.6.18/vmlinux
kernel config:
#
# Automatically generated make config: don't edit
# Linux kernel version: 2.6.18
# Mon Jun 22 14:52:14 2009
#
CONFIG_ARM=y
# CONFIG_GENERIC_TIME is not set
CONFIG_MMU=y
CONFIG_GENERIC_HARDIRQS=y
CONFIG_HARDIRQS_SW_RESEND=y
CONFIG_GENERIC_IRQ_PROBE=y
CONFIG_RWSEM_GENERIC_SPINLOCK=y
CONFIG_GENERIC_HWEIGHT=y
CONFIG_GENERIC_CALIBRATE_DELAY=y
CONFIG_VECTORS_BASE=0xffff0000
CONFIG_DEFCONFIG_LIST="/lib/modules/$UNAME_RELEASE/.config"
#
# Code maturity level options
#
CONFIG_EXPERIMENTAL=y
CONFIG_BROKEN_ON_SMP=y
CONFIG_LOCK_KERNEL=y
CONFIG_INIT_ENV_ARG_LIMIT=32
#
# General setup
#
CONFIG_LOCALVERSION=""
CONFIG_LOCALVERSION_AUTO=y
CONFIG_SWAP=y
CONFIG_SYSVIPC=y
# CONFIG_POSIX_MQUEUE is not set
# CONFIG_BSD_PROCESS_ACCT is not set
# CONFIG_TASKSTATS is not set
# CONFIG_AUDIT is not set
CONFIG_IKCONFIG=y
# CONFIG_IKCONFIG_PROC is not set
# CONFIG_RELAY is not set
CONFIG_INITRAMFS_SOURCE=""
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
CONFIG_EMBEDDED=y
CONFIG_UID16=y
CONFIG_SYSCTL=y
CONFIG_KALLSYMS=y
CONFIG_KALLSYMS_ALL=y
# CONFIG_KALLSYMS_EXTRA_PASS is not set
# CONFIG_HOTPLUG is not set
CONFIG_PRINTK=y
CONFIG_BUG=y
CONFIG_ELF_CORE=y
CONFIG_BASE_FULL=y
# CONFIG_FUTEX is not set
CONFIG_EPOLL=y
# CONFIG_SHMEM is not set
# CONFIG_SLAB is not set
# CONFIG_VM_EVENT_COUNTERS is not set
CONFIG_TINY_SHMEM=y
CONFIG_BASE_SMALL=0
CONFIG_SLOB=y
#
# Loadable module support
#
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y
# CONFIG_MODULE_FORCE_UNLOAD is not set
# CONFIG_MODVERSIONS is not set
# CONFIG_MODULE_SRCVERSION_ALL is not set
CONFIG_KMOD=y
#
# Block layer
#
# CONFIG_BLK_DEV_IO_TRACE is not set
#
# IO Schedulers
#
CONFIG_IOSCHED_NOOP=y
CONFIG_IOSCHED_AS=y
CONFIG_IOSCHED_DEADLINE=y
CONFIG_IOSCHED_CFQ=y
# CONFIG_DEFAULT_AS is not set
# CONFIG_DEFAULT_DEADLINE is not set
CONFIG_DEFAULT_CFQ=y
# CONFIG_DEFAULT_NOOP is not set
CONFIG_DEFAULT_IOSCHED="cfq"
#
# System Type
#
# CONFIG_ARCH_AAEC2000 is not set
CONFIG_ARCH_INTEGRATOR=y
# CONFIG_ARCH_REALVIEW is not set
# CONFIG_ARCH_VERSATILE is not set
# CONFIG_ARCH_AT91 is not set
# CONFIG_ARCH_CLPS7500 is not set
# CONFIG_ARCH_CLPS711X is not set
# CONFIG_ARCH_CO285 is not set
# CONFIG_ARCH_EBSA110 is not set
# CONFIG_ARCH_EP93XX is not set
# CONFIG_ARCH_FOOTBRIDGE is not set
# CONFIG_ARCH_NETX is not set
# CONFIG_ARCH_H720X is not set
# CONFIG_ARCH_IMX is not set
# CONFIG_ARCH_IOP3XX is not set
# CONFIG_ARCH_IXP4XX is not set
# CONFIG_ARCH_IXP2000 is not set
# CONFIG_ARCH_IXP23XX is not set
# CONFIG_ARCH_L7200 is not set
# CONFIG_ARCH_PNX4008 is not set
# CONFIG_ARCH_PXA is not set
# CONFIG_ARCH_RPC is not set
# CONFIG_ARCH_SA1100 is not set
# CONFIG_ARCH_S3C2410 is not set
# CONFIG_ARCH_SHARK is not set
# CONFIG_ARCH_LH7A40X is not set
# CONFIG_ARCH_OMAP is not set
#
# Integrator Options
#
# CONFIG_ARCH_INTEGRATOR_AP is not set
CONFIG_ARCH_INTEGRATOR_CP=y
CONFIG_ARCH_CINTEGRATOR=y
#
# Processor Type
#
CONFIG_CPU_32=y
# CONFIG_CPU_ARM720T is not set
CONFIG_CPU_ARM920T=y
# CONFIG_CPU_ARM922T is not set
CONFIG_CPU_ARM926T=y
# CONFIG_CPU_ARM1020 is not set
# CONFIG_CPU_ARM1022 is not set
# CONFIG_CPU_ARM1026 is not set
# CONFIG_CPU_V6 is not set
CONFIG_CPU_32v4T=y
CONFIG_CPU_32v5=y
CONFIG_CPU_ABRT_EV4T=y
CONFIG_CPU_ABRT_EV5TJ=y
CONFIG_CPU_CACHE_V4WT=y
CONFIG_CPU_CACHE_VIVT=y
CONFIG_CPU_COPY_V4WB=y
CONFIG_CPU_TLB_V4WBI=y
#
# Processor Features
#
CONFIG_ARM_THUMB=y
# CONFIG_CPU_ICACHE_DISABLE is not set
# CONFIG_CPU_DCACHE_DISABLE is not set
# CONFIG_CPU_DCACHE_WRITETHROUGH is not set
# CONFIG_CPU_CACHE_ROUND_ROBIN is not set
CONFIG_ICST525=y
#
# Bus support
#
CONFIG_ARM_AMBA=y
#
# PCCARD (PCMCIA/CardBus) support
#
#
# Kernel Features
#
CONFIG_PREEMPT=y
# CONFIG_NO_IDLE_HZ is not set
CONFIG_HZ=100
# CONFIG_AEABI is not set
# CONFIG_ARCH_DISCONTIGMEM_ENABLE is not set
CONFIG_SELECT_MEMORY_MODEL=y
CONFIG_FLATMEM_MANUAL=y
# CONFIG_DISCONTIGMEM_MANUAL is not set
# CONFIG_SPARSEMEM_MANUAL is not set
CONFIG_FLATMEM=y
CONFIG_FLAT_NODE_MEM_MAP=y
# CONFIG_SPARSEMEM_STATIC is not set
CONFIG_SPLIT_PTLOCK_CPUS=4096
# CONFIG_RESOURCES_64BIT is not set
CONFIG_LEDS=y
CONFIG_LEDS_TIMER=y
CONFIG_LEDS_CPU=y
CONFIG_ALIGNMENT_TRAP=y
#
# Boot options
#
CONFIG_ZBOOT_ROM_TEXT=0x0
CONFIG_ZBOOT_ROM_BSS=0x0
CONFIG_CMDLINE="root=/dev/mtdblock2 mem=32M"
# CONFIG_XIP_KERNEL is not set
#
# CPU Frequency scaling
#
# CONFIG_CPU_FREQ is not set
#
# Floating point emulation
#
#
# At least one emulation must be selected
#
CONFIG_FPE_NWFPE=y
# CONFIG_FPE_NWFPE_XP is not set
# CONFIG_FPE_FASTFPE is not set
# CONFIG_VFP is not set
#
# Userspace binary formats
#
CONFIG_BINFMT_ELF=y
# CONFIG_BINFMT_AOUT is not set
# CONFIG_BINFMT_MISC is not set
# CONFIG_ARTHUR is not set
#
# Power management options
#
# CONFIG_PM is not set
# CONFIG_APM is not set
#
# Networking
#
CONFIG_NET=y
#
# Networking options
#
# CONFIG_NETDEBUG is not set
# CONFIG_PACKET is not set
CONFIG_UNIX=y
CONFIG_XFRM=y
# CONFIG_XFRM_USER is not set
# CONFIG_NET_KEY is not set
CONFIG_INET=y
# CONFIG_IP_MULTICAST is not set
# CONFIG_IP_ADVANCED_ROUTER is not set
CONFIG_IP_FIB_HASH=y
CONFIG_IP_PNP=y
CONFIG_IP_PNP_DHCP=y
# CONFIG_IP_PNP_BOOTP is not set
# CONFIG_IP_PNP_RARP is not set
# CONFIG_NET_IPIP is not set
# CONFIG_NET_IPGRE is not set
# CONFIG_ARPD is not set
# CONFIG_SYN_COOKIES is not set
# CONFIG_INET_AH is not set
# CONFIG_INET_ESP is not set
# CONFIG_INET_IPCOMP is not set
# CONFIG_INET_XFRM_TUNNEL is not set
# CONFIG_INET_TUNNEL is not set
CONFIG_INET_XFRM_MODE_TRANSPORT=y
CONFIG_INET_XFRM_MODE_TUNNEL=y
CONFIG_INET_DIAG=y
CONFIG_INET_TCP_DIAG=y
# CONFIG_TCP_CONG_ADVANCED is not set
CONFIG_TCP_CONG_BIC=y
# CONFIG_IPV6 is not set
# CONFIG_INET6_XFRM_TUNNEL is not set
# CONFIG_INET6_TUNNEL is not set
# CONFIG_NETWORK_SECMARK is not set
# CONFIG_NETFILTER is not set
#
# DCCP Configuration (EXPERIMENTAL)
#
# CONFIG_IP_DCCP is not set
#
# SCTP Configuration (EXPERIMENTAL)
#
# CONFIG_IP_SCTP is not set
#
# TIPC Configuration (EXPERIMENTAL)
#
# CONFIG_TIPC is not set
# CONFIG_ATM is not set
# CONFIG_BRIDGE is not set
# CONFIG_VLAN_8021Q is not set
# CONFIG_DECNET is not set
# CONFIG_LLC2 is not set
# CONFIG_IPX is not set
# CONFIG_ATALK is not set
# CONFIG_X25 is not set
# CONFIG_LAPB is not set
# CONFIG_ECONET is not set
# CONFIG_WAN_ROUTER is not set
#
# QoS and/or fair queueing
#
# CONFIG_NET_SCHED is not set
#
# Network testing
#
# CONFIG_NET_PKTGEN is not set
# CONFIG_HAMRADIO is not set
# CONFIG_IRDA is not set
# CONFIG_BT is not set
# CONFIG_IEEE80211 is not set
#
# Device Drivers
#
#
# Generic Driver Options
#
CONFIG_STANDALONE=y
CONFIG_PREVENT_FIRMWARE_BUILD=y
# CONFIG_DEBUG_DRIVER is not set
# CONFIG_SYS_HYPERVISOR is not set
#
# Connector - unified userspace <-> kernelspace linker
#
# CONFIG_CONNECTOR is not set
#
# Memory Technology Devices (MTD)
#
CONFIG_MTD=y
CONFIG_MTD_DEBUG=y
CONFIG_MTD_DEBUG_VERBOSE=2
CONFIG_MTD_CONCAT=y
CONFIG_MTD_PARTITIONS=y
CONFIG_MTD_REDBOOT_PARTS=y
CONFIG_MTD_REDBOOT_DIRECTORY_BLOCK=-1
# CONFIG_MTD_REDBOOT_PARTS_UNALLOCATED is not set
# CONFIG_MTD_REDBOOT_PARTS_READONLY is not set
CONFIG_MTD_CMDLINE_PARTS=y
# CONFIG_MTD_AFS_PARTS is not set
#
# User Modules And Translation Layers
#
CONFIG_MTD_CHAR=y
CONFIG_MTD_BLOCK=y
CONFIG_FTL=y
CONFIG_NFTL=y
# CONFIG_NFTL_RW is not set
CONFIG_INFTL=y
CONFIG_RFD_FTL=y
#
# RAM/ROM/Flash chip drivers
#
CONFIG_MTD_CFI=y
# CONFIG_MTD_JEDECPROBE is not set
CONFIG_MTD_GEN_PROBE=y
CONFIG_MTD_CFI_ADV_OPTIONS=y
CONFIG_MTD_CFI_NOSWAP=y
# CONFIG_MTD_CFI_BE_BYTE_SWAP is not set
# CONFIG_MTD_CFI_LE_BYTE_SWAP is not set
CONFIG_MTD_CFI_GEOMETRY=y
# CONFIG_MTD_MAP_BANK_WIDTH_1 is not set
# CONFIG_MTD_MAP_BANK_WIDTH_2 is not set
CONFIG_MTD_MAP_BANK_WIDTH_4=y
# CONFIG_MTD_MAP_BANK_WIDTH_8 is not set
# CONFIG_MTD_MAP_BANK_WIDTH_16 is not set
# CONFIG_MTD_MAP_BANK_WIDTH_32 is not set
# CONFIG_MTD_CFI_I1 is not set
CONFIG_MTD_CFI_I2=y
# CONFIG_MTD_CFI_I4 is not set
# CONFIG_MTD_CFI_I8 is not set
# CONFIG_MTD_OTP is not set
CONFIG_MTD_CFI_INTELEXT=y
# CONFIG_MTD_CFI_AMDSTD is not set
# CONFIG_MTD_CFI_STAA is not set
CONFIG_MTD_CFI_UTIL=y
# CONFIG_MTD_RAM is not set
# CONFIG_MTD_ROM is not set
# CONFIG_MTD_ABSENT is not set
# CONFIG_MTD_OBSOLETE_CHIPS is not set
#
# Mapping drivers for chip access
#
# CONFIG_MTD_COMPLEX_MAPPINGS is not set
# CONFIG_MTD_PHYSMAP is not set
# CONFIG_MTD_ARM_INTEGRATOR is not set
# CONFIG_MTD_PLATRAM is not set
#
# Self-contained MTD device drivers
#
# CONFIG_MTD_SLRAM is not set
# CONFIG_MTD_PHRAM is not set
# CONFIG_MTD_MTDRAM is not set
# CONFIG_MTD_BLOCK2MTD is not set
#
# Disk-On-Chip Device Drivers
#
# CONFIG_MTD_DOC2000 is not set
# CONFIG_MTD_DOC2001 is not set
# CONFIG_MTD_DOC2001PLUS is not set
#
# NAND Flash Device Drivers
#
# CONFIG_MTD_NAND is not set
#
# OneNAND Flash Device Drivers
#
# CONFIG_MTD_ONENAND is not set
#
# Parallel port support
#
# CONFIG_PARPORT is not set
#
# Plug and Play support
#
#
# Block devices
#
# CONFIG_BLK_DEV_COW_COMMON is not set
CONFIG_BLK_DEV_LOOP=y
# CONFIG_BLK_DEV_CRYPTOLOOP is not set
# CONFIG_BLK_DEV_NBD is not set
CONFIG_BLK_DEV_RAM=y
CONFIG_BLK_DEV_RAM_COUNT=1
CONFIG_BLK_DEV_RAM_SIZE=4096
CONFIG_BLK_DEV_RAM_BLOCKSIZE=256
CONFIG_BLK_DEV_INITRD=y
# CONFIG_CDROM_PKTCDVD is not set
# CONFIG_ATA_OVER_ETH is not set
#
# SCSI device support
#
# CONFIG_RAID_ATTRS is not set
# CONFIG_SCSI is not set
#
# Multi-device support (RAID and LVM)
#
# CONFIG_MD is not set
#
# Fusion MPT device support
#
# CONFIG_FUSION is not set
#
# IEEE 1394 (FireWire) support
#
#
# I2O device support
#
#
# Network device support
#
CONFIG_NETDEVICES=y
# CONFIG_DUMMY is not set
# CONFIG_BONDING is not set
# CONFIG_EQUALIZER is not set
# CONFIG_TUN is not set
#
# PHY device support
#
# CONFIG_PHYLIB is not set
#
# Ethernet (10 or 100Mbit)
#
CONFIG_NET_ETHERNET=y
CONFIG_MII=y
CONFIG_SMC91X=y
# CONFIG_DM9000 is not set
#
# Ethernet (1000 Mbit)
#
#
# Ethernet (10000 Mbit)
#
#
# Token Ring devices
#
#
# Wireless LAN (non-hamradio)
#
# CONFIG_NET_RADIO is not set
#
# Wan interfaces
#
# CONFIG_WAN is not set
# CONFIG_PPP is not set
# CONFIG_SLIP is not set
# CONFIG_SHAPER is not set
# CONFIG_NETCONSOLE is not set
# CONFIG_NETPOLL is not set
# CONFIG_NET_POLL_CONTROLLER is not set
#
# ISDN subsystem
#
# CONFIG_ISDN is not set
#
# Input device support
#
CONFIG_INPUT=y
#
# Userland interfaces
#
CONFIG_INPUT_MOUSEDEV=y
CONFIG_INPUT_MOUSEDEV_PSAUX=y
CONFIG_INPUT_MOUSEDEV_SCREEN_X=1024
CONFIG_INPUT_MOUSEDEV_SCREEN_Y=768
# CONFIG_INPUT_JOYDEV is not set
# CONFIG_INPUT_TSDEV is not set
CONFIG_INPUT_EVDEV=y
# CONFIG_INPUT_EVBUG is not set
#
# Input Device Drivers
#
CONFIG_INPUT_KEYBOARD=y
CONFIG_KEYBOARD_ATKBD=y
# CONFIG_KEYBOARD_SUNKBD is not set
# CONFIG_KEYBOARD_LKKBD is not set
# CONFIG_KEYBOARD_XTKBD is not set
# CONFIG_KEYBOARD_NEWTON is not set
# CONFIG_INPUT_MOUSE is not set
# CONFIG_INPUT_JOYSTICK is not set
# CONFIG_INPUT_TOUCHSCREEN is not set
# CONFIG_INPUT_MISC is not set
#
# Hardware I/O ports
#
CONFIG_SERIO=y
# CONFIG_SERIO_SERPORT is not set
# CONFIG_SERIO_AMBAKMI is not set
CONFIG_SERIO_LIBPS2=y
# CONFIG_SERIO_RAW is not set
# CONFIG_GAMEPORT is not set
#
# Character devices
#
CONFIG_VT=y
CONFIG_VT_CONSOLE=y
CONFIG_HW_CONSOLE=y
CONFIG_VT_HW_CONSOLE_BINDING=y
# CONFIG_SERIAL_NONSTANDARD is not set
#
# Serial drivers
#
# CONFIG_SERIAL_8250 is not set
#
# Non-8250 serial port support
#
# CONFIG_SERIAL_AMBA_PL010 is not set
CONFIG_SERIAL_AMBA_PL011=y
CONFIG_SERIAL_AMBA_PL011_CONSOLE=y
CONFIG_SERIAL_CORE=y
CONFIG_SERIAL_CORE_CONSOLE=y
CONFIG_UNIX98_PTYS=y
CONFIG_LEGACY_PTYS=y
CONFIG_LEGACY_PTY_COUNT=256
#
# IPMI
#
# CONFIG_IPMI_HANDLER is not set
#
# Watchdog Cards
#
# CONFIG_WATCHDOG is not set
CONFIG_HW_RANDOM=y
# CONFIG_NVRAM is not set
# CONFIG_DTLK is not set
# CONFIG_R3964 is not set
#
# Ftape, the floppy tape device driver
#
# CONFIG_RAW_DRIVER is not set
#
# TPM devices
#
# CONFIG_TCG_TPM is not set
# CONFIG_TELCLOCK is not set
#
# I2C support
#
# CONFIG_I2C is not set
#
# SPI support
#
# CONFIG_SPI is not set
# CONFIG_SPI_MASTER is not set
#
# Dallas's 1-wire bus
#
#
# Hardware Monitoring support
#
CONFIG_HWMON=y
# CONFIG_HWMON_VID is not set
# CONFIG_SENSORS_ABITUGURU is not set
# CONFIG_SENSORS_F71805F is not set
# CONFIG_HWMON_DEBUG_CHIP is not set
#
# Misc devices
#
#
# LED devices
#
# CONFIG_NEW_LEDS is not set
#
# LED drivers
#
#
# LED Triggers
#
#
# Multimedia devices
#
# CONFIG_VIDEO_DEV is not set
#
# Digital Video Broadcasting Devices
#
# CONFIG_DVB is not set
#
# Graphics support
#
# CONFIG_FIRMWARE_EDID is not set
CONFIG_FB=y
CONFIG_FB_CFB_FILLRECT=y
CONFIG_FB_CFB_COPYAREA=y
CONFIG_FB_CFB_IMAGEBLIT=y
# CONFIG_FB_MACMODES is not set
# CONFIG_FB_BACKLIGHT is not set
CONFIG_FB_MODE_HELPERS=y
CONFIG_FB_TILEBLITTING=y
CONFIG_FB_ARMCLCD=y
CONFIG_FB_S1D13XXX=y
# CONFIG_FB_VIRTUAL is not set
#
# Console display driver support
#
CONFIG_VGA_CONSOLE=y
# CONFIG_VGACON_SOFT_SCROLLBACK is not set
CONFIG_DUMMY_CONSOLE=y
CONFIG_FRAMEBUFFER_CONSOLE=y
# CONFIG_FRAMEBUFFER_CONSOLE_ROTATION is not set
CONFIG_FONTS=y
CONFIG_FONT_8x8=y
CONFIG_FONT_8x16=y
# CONFIG_FONT_6x11 is not set
# CONFIG_FONT_7x14 is not set
CONFIG_FONT_PEARL_8x8=y
CONFIG_FONT_ACORN_8x8=y
CONFIG_FONT_MINI_4x6=y
CONFIG_FONT_SUN8x16=y
# CONFIG_FONT_SUN12x22 is not set
# CONFIG_FONT_10x18 is not set
#
# Logo configuration
#
CONFIG_LOGO=y
CONFIG_LOGO_LINUX_MONO=y
CONFIG_LOGO_LINUX_VGA16=y
CONFIG_LOGO_LINUX_CLUT224=y
# CONFIG_BACKLIGHT_LCD_SUPPORT is not set
#
# Sound
#
# CONFIG_SOUND is not set
#
# USB support
#
CONFIG_USB_ARCH_HAS_HCD=y
# CONFIG_USB_ARCH_HAS_OHCI is not set
# CONFIG_USB_ARCH_HAS_EHCI is not set
# CONFIG_USB is not set
#
# NOTE: USB_STORAGE enables SCSI, and 'SCSI disk support'
#
#
# USB Gadget Support
#
# CONFIG_USB_GADGET is not set
#
# MMC/SD Card support
#
# CONFIG_MMC is not set
#
# Real Time Clock
#
CONFIG_RTC_LIB=y
# CONFIG_RTC_CLASS is not set
#
# File systems
#
CONFIG_EXT2_FS=y
# CONFIG_EXT2_FS_XATTR is not set
# CONFIG_EXT2_FS_XIP is not set
# CONFIG_EXT3_FS is not set
# CONFIG_REISERFS_FS is not set
# CONFIG_JFS_FS is not set
# CONFIG_FS_POSIX_ACL is not set
# CONFIG_XFS_FS is not set
# CONFIG_OCFS2_FS is not set
# CONFIG_MINIX_FS is not set
CONFIG_ROMFS_FS=y
CONFIG_INOTIFY=y
CONFIG_INOTIFY_USER=y
# CONFIG_QUOTA is not set
CONFIG_DNOTIFY=y
# CONFIG_AUTOFS_FS is not set
# CONFIG_AUTOFS4_FS is not set
# CONFIG_FUSE_FS is not set
#
# CD-ROM/DVD Filesystems
#
# CONFIG_ISO9660_FS is not set
# CONFIG_UDF_FS is not set
#
# DOS/FAT/NT Filesystems
#
CONFIG_FAT_FS=y
CONFIG_MSDOS_FS=y
# CONFIG_VFAT_FS is not set
CONFIG_FAT_DEFAULT_CODEPAGE=437
# CONFIG_NTFS_FS is not set
#
# Pseudo filesystems
#
CONFIG_PROC_FS=y
CONFIG_SYSFS=y
# CONFIG_TMPFS is not set
# CONFIG_HUGETLB_PAGE is not set
CONFIG_RAMFS=y
# CONFIG_CONFIGFS_FS is not set
#
# Miscellaneous filesystems
#
# CONFIG_ADFS_FS is not set
# CONFIG_AFFS_FS is not set
# CONFIG_ASFS_FS is not set
# CONFIG_HFS_FS is not set
# CONFIG_HFSPLUS_FS is not set
# CONFIG_BEFS_FS is not set
# CONFIG_BFS_FS is not set
# CONFIG_EFS_FS is not set
# CONFIG_JFFS_FS is not set
CONFIG_JFFS2_FS=y
CONFIG_JFFS2_FS_DEBUG=0
CONFIG_JFFS2_FS_WRITEBUFFER=y
# CONFIG_JFFS2_SUMMARY is not set
# CONFIG_JFFS2_FS_XATTR is not set
# CONFIG_JFFS2_COMPRESSION_OPTIONS is not set
CONFIG_JFFS2_ZLIB=y
CONFIG_JFFS2_RTIME=y
# CONFIG_JFFS2_RUBIN is not set
CONFIG_CRAMFS=y
# CONFIG_VXFS_FS is not set
# CONFIG_HPFS_FS is not set
# CONFIG_QNX4FS_FS is not set
# CONFIG_SYSV_FS is not set
# CONFIG_UFS_FS is not set
#
# Network File Systems
#
CONFIG_NFS_FS=y
# CONFIG_NFS_V3 is not set
# CONFIG_NFS_V4 is not set
# CONFIG_NFS_DIRECTIO is not set
# CONFIG_NFSD is not set
CONFIG_ROOT_NFS=y
CONFIG_LOCKD=y
CONFIG_NFS_COMMON=y
CONFIG_SUNRPC=y
# CONFIG_RPCSEC_GSS_KRB5 is not set
# CONFIG_RPCSEC_GSS_SPKM3 is not set
# CONFIG_SMB_FS is not set
# CONFIG_CIFS is not set
# CONFIG_NCP_FS is not set
# CONFIG_CODA_FS is not set
# CONFIG_AFS_FS is not set
# CONFIG_9P_FS is not set
#
# Partition Types
#
# CONFIG_PARTITION_ADVANCED is not set
CONFIG_MSDOS_PARTITION=y
#
# Native Language Support
#
CONFIG_NLS=y
CONFIG_NLS_DEFAULT="iso8859-1"
# CONFIG_NLS_CODEPAGE_437 is not set
# CONFIG_NLS_CODEPAGE_737 is not set
# CONFIG_NLS_CODEPAGE_775 is not set
# CONFIG_NLS_CODEPAGE_850 is not set
# CONFIG_NLS_CODEPAGE_852 is not set
# CONFIG_NLS_CODEPAGE_855 is not set
# CONFIG_NLS_CODEPAGE_857 is not set
# CONFIG_NLS_CODEPAGE_860 is not set
# CONFIG_NLS_CODEPAGE_861 is not set
# CONFIG_NLS_CODEPAGE_862 is not set
# CONFIG_NLS_CODEPAGE_863 is not set
# CONFIG_NLS_CODEPAGE_864 is not set
# CONFIG_NLS_CODEPAGE_865 is not set
# CONFIG_NLS_CODEPAGE_866 is not set
# CONFIG_NLS_CODEPAGE_869 is not set
# CONFIG_NLS_CODEPAGE_936 is not set
# CONFIG_NLS_CODEPAGE_950 is not set
# CONFIG_NLS_CODEPAGE_932 is not set
# CONFIG_NLS_CODEPAGE_949 is not set
# CONFIG_NLS_CODEPAGE_874 is not set
# CONFIG_NLS_ISO8859_8 is not set
# CONFIG_NLS_CODEPAGE_1250 is not set
# CONFIG_NLS_CODEPAGE_1251 is not set
# CONFIG_NLS_ASCII is not set
CONFIG_NLS_ISO8859_1=y
# CONFIG_NLS_ISO8859_2 is not set
# CONFIG_NLS_ISO8859_3 is not set
# CONFIG_NLS_ISO8859_4 is not set
# CONFIG_NLS_ISO8859_5 is not set
# CONFIG_NLS_ISO8859_6 is not set
# CONFIG_NLS_ISO8859_7 is not set
# CONFIG_NLS_ISO8859_9 is not set
# CONFIG_NLS_ISO8859_13 is not set
# CONFIG_NLS_ISO8859_14 is not set
# CONFIG_NLS_ISO8859_15 is not set
# CONFIG_NLS_KOI8_R is not set
# CONFIG_NLS_KOI8_U is not set
# CONFIG_NLS_UTF8 is not set
#
# Profiling support
#
# CONFIG_PROFILING is not set
#
# Kernel hacking
#
CONFIG_PRINTK_TIME=y
CONFIG_MAGIC_SYSRQ=y
# CONFIG_UNUSED_SYMBOLS is not set
CONFIG_DEBUG_KERNEL=y
CONFIG_LOG_BUF_SHIFT=14
CONFIG_DETECT_SOFTLOCKUP=y
# CONFIG_SCHEDSTATS is not set
# CONFIG_DEBUG_SPINLOCK is not set
# CONFIG_DEBUG_MUTEXES is not set
# CONFIG_DEBUG_RWSEMS is not set
# CONFIG_DEBUG_SPINLOCK_SLEEP is not set
# CONFIG_DEBUG_LOCKING_API_SELFTESTS is not set
CONFIG_DEBUG_KOBJECT=y
CONFIG_DEBUG_BUGVERBOSE=y
CONFIG_DEBUG_INFO=y
# CONFIG_DEBUG_FS is not set
# CONFIG_DEBUG_VM is not set
CONFIG_FRAME_POINTER=y
CONFIG_UNWIND_INFO=y
CONFIG_FORCED_INLINING=y
# CONFIG_RCU_TORTURE_TEST is not set
CONFIG_DEBUG_USER=y
# CONFIG_DEBUG_WAITQ is not set
CONFIG_DEBUG_ERRORS=y
CONFIG_DEBUG_LL=y
# CONFIG_DEBUG_ICEDCC is not set
#
# Security options
#
# CONFIG_KEYS is not set
# CONFIG_SECURITY is not set
#
# Cryptographic options
#
# CONFIG_CRYPTO is not set
#
# Hardware crypto devices
#
#
# Library routines
#
# CONFIG_CRC_CCITT is not set
# CONFIG_CRC16 is not set
CONFIG_CRC32=y
# CONFIG_LIBCRC32C is not set
CONFIG_ZLIB_INFLATE=y
CONFIG_ZLIB_DEFLATE=y
I am trying to run a cross-compiled kernel in QEMU, simulating an ARM9-family processor on an integratorcp board. This is supported by qemu-system-arm, and that machine setting is present in the command line far above. Although I am using the cross-build toolchain supplied by scratchbox, I am not compiling inside scratchbox: cross-compilation support for the kernel is mature, and building the kernel inside scratchbox is actually discouraged. Everything is compiled into this kernel; although loadable module support is enabled, no modules are being built.
I am able to successfully produce a compressed binary (zImage). When qemu-system-arm loads it as the kernel, it successfully uncompresses Linux and then hangs almost immediately. Feeding the uncompressed kernel binary (/usr/src/linux-2.8.16-source/vmlinux) to arm-uclibc-gdb for symbol extraction seems to work as well; setting breakpoints and stepping through the instructions seems accurate.
I was able to use arm-uclibc-gdb to trap the offending line. The crash occurs in drivers/video/console/vgacon.c:462, inside the vgacon_startup(void) function. After attempting to step into the instruction inb_p(VGA_IS1_RC) on line 462, it craps the bed. Setting a breakpoint at __do_kernel_fault, the call stack is as follows:
#0 __do_kernel_fault (mm=0xc0225f28, addr=53, fsr=53, regs=0x30001)
at arch/arm/mm/fault.c:82
#1 0xc002a03c in do_bad_area (tsk=0xc0228340, mm=0xc0225e34,
addr=3221397564, fsr=53, regs=0xc0225f28) at arch/arm/mm/fault.c:145
#2 0xc002a2fc in do_translation_fault (addr=3223477848, fsr=53,
regs=0xc0225f28) at arch/arm/mm/fault.c:356
#3 0xc002a39c in do_DataAbort (addr=3992978394, fsr=53,
regs=0xc0225f28) at arch/arm/mm/fault.c:450
#4 0xc0023848 in __dabt_svc () at proc_fs.h:194
#5 0xc0023848 in __dabt_svc () at proc_fs.h:194
#6 0xc0023848 in __dabt_svc () at proc_fs.h:194
#7 0xc0023848 in __dabt_svc () at proc_fs.h:194
#8 0xc0023848 in __dabt_svc () at proc_fs.h:194
#9 0xc0023848 in __dabt_svc () at proc_fs.h:194
...
#1500 0xc0023848 in __dabt_svc () at proc_fs.h:194
...
#15000 0xc0023848 in __dabt_svc () at proc_fs.h:194
...
I eventually gave up trying to find what was calling __dabt_svc.
A couple of points of interest:
It seems impossible to inspect, or set breakpoints on, certain variables and functions even when they are in scope (inb_p, for example).
I have tried compiling with no optimization, with -O, and with -O2, and have not succeeded in booting past this point.
The inb_p man page clearly states that you need to compile with -O or -O2; otherwise, you risk the routine not being inlined.
Although it prints the "Uncompressing Linux.....Ok, Booting the kernel" message to the screen, once the kernel starts initializing the console you are flying blind. Nothing else gets printed to the screen by the kernel. There is no pretty-printed panic message; you have to break into __show_regs() and manually snoop all the relevant registers (pc, fsr, etc.). This is not as difficult as it sounds. It is about as tedious as it sounds.
I believe I have applied all relevant patches for ARM as well as for Debian.
I am ready to cry like a little girl with a skinned knee.
I would be very grateful for a fresh pair of eyes, or at the very least a nod in the right direction. Thank you in advance for having read this far and for any assistance you can provide.
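The debugging workflow described in the question maps onto QEMU's built-in gdbstub. As a point of reference only (a minimal sketch, assuming the toolchain and paths named above; the asker's exact qemu command line is elided earlier in the post), such a session might look like:
qemu-system-arm -M integratorcp -kernel arch/arm/boot/zImage -s -S
arm-uclibc-gdb vmlinux
(gdb) target remote :1234
(gdb) break vgacon_startup
(gdb) continue
Here -s opens the gdbstub on TCP port 1234 and -S halts the CPU until the debugger connects, so the breakpoint can be planted before the kernel starts running.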
It appears that QEMU does not emulate the VGA device in its ARM Integrator emulation.
Check out this link for a possible workaround.
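The linked workaround is not reproduced here, but given that diagnosis, one sketch of a fix (an assumption on my part, not necessarily what the link proposes) is to build without the VGA console entirely, so vgacon_startup() is never reached, and lean on the framebuffer and PL011 serial consoles that the posted config already enables:
# CONFIG_VGA_CONSOLE is not set
CONFIG_FRAMEBUFFER_CONSOLE=y
CONFIG_SERIAL_AMBA_PL011_CONSOLE=y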
What is the kernel command line? You should first put the console on the serial port using a parameter like this:
console=ttyS0 or console=ttyAMA0
(note that the kernel's console= parameter takes the device name without the /dev/ prefix).
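For example, combining this with the CONFIG_CMDLINE from the posted config would give something like:
console=ttyAMA0 root=/dev/mtdblock2 mem=32M
ttyAMA0 is the AMBA PL011 UART, which the posted config already builds with console support (CONFIG_SERIAL_AMBA_PL011_CONSOLE=y).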
Then the messages should start to flow. If your kernel dies too early, you can also try the following hack:
Index: .kernel/kernel/printk.c
===================================================================
--- .kernel.orig/kernel/printk.c	2009-02-28 02:52:32.000000000 +0100
+++ .kernel/kernel/printk.c	2009-04-30 14:12:29.000000000 +0200
@@ -41,6 +41,7 @@
 {
 }
+extern void printascii(const char *);
 #define __LOG_BUF_LEN (1 << CONFIG_LOG_BUF_SHIFT)
 /* printk's without a loglevel use this.. */
@@ -552,7 +553,7 @@
 	/* Emit the output into the temporary buffer */
 	printed_len = vscnprintf(printk_buf, sizeof(printk_buf), fmt, args);
-
+	printascii(printk_buf);
 	if (printed_len > 0) {
 		unsigned int loglevel;
 		int mark_len;
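For what it's worth, printascii() here is the low-level output routine from arch/arm/kernel/debug.S: it is only built when CONFIG_DEBUG_LL=y (which the posted config sets), and it writes directly to the debug UART, so it produces output long before the regular console is registered.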
