I'm following the steps in this tutorial. I'm having trouble executing this CLI command:
az container create \
--name docks \
--resource-group MyResourceGroup \
--ip-address Public \
--image jenkins/inbound-agent:latest \
--os-type linux \
--ports 80 \
--command-line "jenkins-agent -url http://jenkinsServer:8080 secret agentName"
It gives the following output:
{
"containers": [
{
"command": [
"jenkins-agent",
"-url",
"http://jenkinsServer:8080",
"secret",
"agentName"
],
"environmentVariables": [],
"image": "jenkins/inbound-agent:latest",
"instanceView": {
"currentState": {
"detailStatus": "CrashLoopBackOff: Back-off restarting failed",
"exitCode": null,
"finishTime": null,
"startTime": null,
"state": "Waiting"
},
"events": [
{
"count": 1,
"firstTimestamp": "2022-09-07T16:57:57+00:00",
"lastTimestamp": "2022-09-07T16:57:57+00:00",
"message": "pulling image \"jenkins/inbound-agent#sha256:f495769bfc767bc77f6c2f8268a734dbac98249449f139f95fc434eb26c6489a\"",
"name": "Pulling",
"type": "Normal"
},
{
"count": 1,
"firstTimestamp": "2022-09-07T16:59:00+00:00",
"lastTimestamp": "2022-09-07T16:59:00+00:00",
"message": "Successfully pulled image \"jenkins/inbound-agent#sha256:f495769bfc767bc77f6c2f8268a734dbac98249449f139f95fc434eb26c6489a\"",
"name": "Pulled",
"type": "Normal"
},
{
"count": 2,
"firstTimestamp": "2022-09-07T16:59:57+00:00",
"lastTimestamp": "2022-09-07T17:00:18+00:00",
"message": "Started container",
"name": "Started",
"type": "Normal"
},
{
"count": 1,
"firstTimestamp": "2022-09-07T17:00:08+00:00",
"lastTimestamp": "2022-09-07T17:00:08+00:00",
"message": "Killing container with id XXXXXXXXXXXXXXXXXXXXXXX.",
"name": "Killing",
"type": "Normal"
}
],
"previousState": {
"detailStatus": "Error",
"exitCode": 255,
"finishTime": "2022-09-07T17:00:29.169000+00:00",
"startTime": "2022-09-07T17:00:18.785000+00:00",
"state": "Terminated"
},
"restartCount": 1
},
"livenessProbe": null,
"name": "docks",
"ports": [
{
"port": 80,
"protocol": "TCP"
}
],
"readinessProbe": null,
"resources": {
"limits": null,
"requests": {
"cpu": 1.0,
"gpu": null,
"memoryInGb": 1.5
}
},
"volumeMounts": null
}
],
"diagnostics": null,
"dnsConfig": null,
"encryptionProperties": null,
"id": "/subscriptions/azureSub/resourceGroups/MyResourceGroup/providers/Microsoft.ContainerInstance/containerGroups/docks",
"identity": null,
"imageRegistryCredentials": null,
"initContainers": [],
"instanceView": {
"events": [],
"state": "Running"
},
"ipAddress": {
"dnsNameLabel": null,
"fqdn": null,
"ip": "XX.XXX.XXX.XX",
"ports": [
{
"port": 80,
"protocol": "TCP"
}
],
"type": "Public"
},
"location": "westeurope",
"name": "docks",
"osType": "Linux",
"provisioningState": "Succeeded",
"resourceGroup": "MyResourceGroup",
"restartPolicy": "Always",
"sku": "Standard",
"subnetIds": null,
"tags": {},
"type": "Microsoft.ContainerInstance/containerGroups",
"volumes": null,
"zones": null
}
As you can see, it exits with error code 255, but I haven't found anything related to that yet.
I also tried changing the --command-line to:
java -jar agent.jar -jnlpUrl http://jenkinsServer:8080 secret agentName
but I get the same output.
The command creates the container, but it keeps restarting indefinitely (it starts and then fails).
The Jenkins server runs in a Linux VM set up by following this tutorial.
How can I get a Jenkins agent for that server to run in a Docker container image on Azure?
When I tried to reproduce the issue, I noticed there are several ways to end up with a CrashLoopBackOff error.
1) Environment variable setup
CrashLoopBackOff can occur when the environment variables are set incorrectly.
Please check whether the environment path is set correctly; you can inspect the current variables by typing env in the Azure CLI or PowerShell (see the sketch below).
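Here is a minimal sketch of supplying the Jenkins connection settings as environment variables instead of a command line; it assumes the jenkins/inbound-agent image reads JENKINS_URL, JENKINS_SECRET and JENKINS_AGENT_NAME, and the secret value is a placeholder:
az container create \
  --name docks \
  --resource-group MyResourceGroup \
  --image jenkins/inbound-agent:latest \
  --os-type linux \
  --environment-variables JENKINS_URL=http://jenkinsServer:8080 JENKINS_AGENT_NAME=agentName \
  --secure-environment-variables JENKINS_SECRET=<secret>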
2) Installing the correct software version
Please check which Java version you have installed. If it is JDK 8, update it to Java 11 (for example 11.0.16.1) and it should work:
apt-get update -y
apt-get install -y openjdk-11-jdk
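To confirm the upgrade took effect, you can check the reported version (it should print 11.x):
java -version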
After installing Java, I followed the steps in the MS doc and created the container instance successfully.
az container create \
--name docks \
--resource-group jenkins-get-started-rgz \
--ip-address Public \
--image jenkins/inbound-agent:latest \
--os-type linux \
--ports 80 \
--command-line "jenkins-agent -url http://jenkinsServer:8080 JENKINS_SECRET AGENT_NAME"
Note:
An exit code of 0 means the container's process completed successfully.
An exit code between 1 and 255 indicates an error.
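When the instance keeps crashing, the container logs usually show why the agent process exits. A quick way to inspect them, using the resource group and name from the question:
az container logs --resource-group MyResourceGroup --name docks
az container show --resource-group MyResourceGroup --name docks \
  --query "containers[0].instanceView.currentState"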
Related
I am trying to deploy the Filebeat DaemonSet on Azure Kubernetes Service.
I have grabbed my manifests from: https://github.com/elastic/beats/tree/master/deploy/kubernetes/filebeat
Below is the error I am facing; kindly let me know if I am missing something here.
Error:
{
"kind": "Event",
"apiVersion": "v1",
"metadata": {
"name": "filebeat.1686897c8d8bxxxx",
"namespace": "kube-system",
"selfLink": "/api/v1/namespaces/kube-system/events/filebeat.1686897c8d8bxxxx",
"uid": "5b94cf20-b432-4d77-b20b-f45fd91xxxxx",
"resourceVersion": "708810xx",
"creationTimestamp": "2021-06-08T07:04:43Z",
"managedFields": [
{
"manager": "kube-controller-manager",
"operation": "Update",
"apiVersion": "v1",
"time": "2021-06-08T07:04:45Z",
"fieldsType": "FieldsV1",
"fieldsV1": {
"f:count": {},
"f:firstTimestamp": {},
"f:involvedObject": {
"f:apiVersion": {},
"f:kind": {},
"f:name": {},
"f:namespace": {},
"f:resourceVersion": {},
"f:uid": {}
},
"f:lastTimestamp": {},
"f:message": {},
"f:reason": {},
"f:source": {
"f:component": {}
},
"f:type": {}
}
}
]
},
"involvedObject": {
"kind": "DaemonSet",
"namespace": "kube-system",
"name": "filebeat",
"uid": "80f770e5-2b8b-xxxx-bcea-2c2dfba5xxxx",
"apiVersion": "apps/v1",
"resourceVersion": "7088xxxx"
},
"reason": "FailedCreate",
"message": "Error creating: pods \"filebeat-\" is forbidden: error looking up service account kube-system/filebeat: serviceaccount \"filebeat\" not found",
"source": {
"component": "daemonset-controller"
},
"firstTimestamp": "2021-06-08T07:04:43Z",
"lastTimestamp": "2021-06-08T07:04:45Z",
"count": 9,
"type": "Warning",
"eventTime": null,
"reportingComponent": "",
"reportingInstance": ""
}
Kubernetes is failing to create your pod because it references a service account that does not exist.
Please make sure you apply all the YAML files from the page you mentioned:
https://github.com/elastic/beats/tree/master/deploy/kubernetes/filebeat
As a basic example:
kubectl apply -f filebeat-configmap.yaml
kubectl apply -f filebeat-daemonset.yaml
kubectl apply -f filebeat-role-binding.yaml
kubectl apply -f filebeat-role.yaml
kubectl apply -f filebeat-service-account.yaml
According to the YAML files in the link you provided, the DaemonSet defined in filebeat-daemonset.yaml depends on the service account filebeat, so you need to deploy that service account before you deploy the DaemonSet.
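As a quick check (not part of the original manifests), you can verify that the service account now exists and that the DaemonSet stops reporting FailedCreate:
kubectl get serviceaccount filebeat -n kube-system
kubectl describe daemonset filebeat -n kube-system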
The following command creates a container in Azure that is mapped to a file share/volume:
az container create -g MyResourceGroup --name myapp --image myimage:latest \
  --azure-file-volume-share-name myshare --azure-file-volume-account-name mystorageacct \
  --azure-file-volume-account-key mystoragekey --azure-file-volume-mount-path /mnt/azfile
But I need my container to be mapped to two volumes, not just one. Is this possible?
I do not know whether this is possible via the Azure CLI, but I do know you can do it through an Azure Resource Manager (ARM) template.
In this example, see how the container group has an array of volumes, while each container can have an array of volume mounts.
{
"type": "Microsoft.ContainerInstance/containerGroups",
"apiVersion": "2018-10-01",
"name": "[parameters('ContainerGroupName')]",
"location": "australiaeast",
"identity": {
"type": "UserAssigned",
"userAssignedIdentities": {
"[variables('managedIdentityId')]": {}
}
},
"dependsOn": [
"[variables('managedIdentityId')]"
],
"properties": {
"containers": [
{
"name": "[parameters('ContainerGroupName')]",
"properties": {
"image": "[parameters('SourceImage')]",
"ports": [{"port": 80},{"port": 443}],
"environmentVariables": [],
"resources": { "requests": { "memoryInGB": 1.5, "cpu": 1 } },
"volumeMounts": [
{
"name": "httpscertificatevolume",
"mountPath": "/https"
},
{
"name": "videofoldervolume",
"mountPath": "[variables('videoFolderMountPath')]"
}
]
}
}
],
"volumes": [{
"name": "httpscertificatevolume",
"azureFile": {
"shareName": "[parameters('HttpsCertificateFileShare')]",
"storageAccountName": "[parameters('StorageAccountName')]",
"storageAccountKey" : "[parameters('StorageAccountKey')]"
}
},
{
"name": "videofoldervolume",
"azureFile": {
"shareName": "[parameters('VideoFileShare')]",
"storageAccountName": "[parameters('StorageAccountName')]",
"storageAccountKey" : "[parameters('StorageAccountKey')]"
}
}
]
}
}
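If it helps, here is a hedged sketch of deploying such a template with the CLI, assuming the resource above is embedded in a complete ARM template file (here called containergroup.json) whose parameters match the names used in the snippet; all parameter values are placeholders:
az deployment group create \
  --resource-group MyResourceGroup \
  --template-file containergroup.json \
  --parameters ContainerGroupName=myapp SourceImage=myimage:latest \
               StorageAccountName=mystorageacct StorageAccountKey=<storage-key> \
               HttpsCertificateFileShare=myshare VideoFileShare=myvideoshare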
When sending the CLI command ec2 describe-instances --instance-id, I get all the data, but I specifically need the private IPs, and my query returns null even though I can see them in the output.
The CLI command ec2 describe-instances --instance-id i-0b7xxxxxxxxxxx --query Reservations[] --output json returns the following output:
[
{
"Groups": [],
"Instances": [
{
"AmiLaunchIndex": 0,
"ImageId": "ami-1bxxxxxxx",
"InstanceId": "i-0b7xxxxxxxxx",
"InstanceType": "r4.2xlarge",
"KeyName": "QA-xxx-xxxxxyz",
"LaunchTime": "2019-05-21T06:40:57.000Z",
"Monitoring": {
"State": "disabled"
},
"Placement": {
"AvailabilityZone": "eu-west-1c",
"GroupName": "",
"Tenancy": "default"
},
"PrivateDnsName": "ip-172-xxx-11-211.eu-west-
1.compute.internal",
"PrivateIpAddress": "172.xxx.11.211",
"ProductCodes": [],
"PublicDnsName": "",
"State": {
"Code": 16,
"Name": "running"
},
"StateTransitionReason": "",
"SubnetId": "subnet-3362797a",
"VpcId": "vpc-02a19a65",
"Architecture": "x86_64",
"BlockDeviceMappings": [
{
"DeviceName": "/dev/sda1",
"Ebs": {
"AttachTime": "2019-04-28T11:19:09.000Z",
"DeleteOnTermination": true,
"Status": "attached",
"VolumeId": "vol-02a052466755e023d"
}
}
],
"ClientToken": "qa-sip-sc1-1FBXNRII3WO13",
"EbsOptimized": false,
"EnaSupport": true,
"Hypervisor": "xen",
"IamInstanceProfile": {
"Arn": "arn:aws:iam::1xxxxxxx14:instance-profile/qa.tester.SBC-HA",
"Id": "AIPAI2xxxxxRPSC"
},
"NetworkInterfaces": [
{
"Attachment": {
"AttachTime": "2019-04-28T11:19:09.000Z",
"AttachmentId": "eni-attach-05xxxxxa8",
"DeleteOnTermination": false,
"DeviceIndex": 0,
"Status": "attached"
},
"Description": "SC1 interface for HA and cluster maintenance",
"Groups": [
{
"GroupName": "qa-sip-EvgenyZ-qa-Auto-network-clusterSecurityGroup-A4xxxxxxxC8",
"GroupId": "sg-0a2xxxxxxx2a"
}
],
"Ipv6Addresses": [],
"MacAddress": "06:xx:xx:xx:xx:xa",
"NetworkInterfaceId": "eni-xxxxxxxx",
"OwnerId": "xxxxxxx",
"PrivateDnsName": "ip-172-xxx-11-211.eu-west-1.compute.internal",
"PrivateIpAddress": "172.xxx.11.211",
"PrivateIpAddresses": [
{
"Primary": true,
"PrivateDnsName": "ip-172-xxx-11-211.eu-west-1.compute.internal",
"PrivateIpAddress": "172.xxx.11.211"
},
{
"Primary": false,
"PrivateDnsName": "ip-172-xxx-9-204.eu-west-1.compute.internal",
"PrivateIpAddress": "172.xxx.9.204"
}
],
"SourceDestCheck": true,
"Status": "in-use",
"SubnetId": "subnet-3xxxxa",
"VpcId": "vpc-xxxxx5"
}
I want to get the PrivateIpAddress values 172.xxx.9.204 and 172.xxx.11.211.
For this I am using the following CLI command:
ec2 describe-instances --instance-id i-0b722cc96f7a14bfc \
  --query Reservations[].Instances[].PrivateIpAddress[].PrivateIpAddress --output json
but I am getting null.
Expected: 172.xxx.9.204 and 172.xxx.11.211
In the output of the query with --query Reservations[], the Instances object is inside a list, so you have to index into the list first:
[*].Instances[*].PrivateIpAddress
This will give you:
[
[
"172.xxx.11.211"
]
]
Similarly,
[*].Instances[*].NetworkInterfaces[*].PrivateIpAddresses[*].PrivateIpAddress
Gives you:
[
[
[
[
"172.xxx.11.211",
"172.xxx.9.204"
]
]
]
]
Side Note: AWS CLI uses the JMESPath query language. You can experiment with your queries here: http://jmespath.org/
For me, the following query worked:
aws ec2 describe-instances --instance-id <id> --query Reservations[].Instances[].NetworkInterfaces[].PrivateIpAddresses[].PrivateIpAddress --output json
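If you prefer a flat, whitespace-separated list for scripting rather than nested JSON, the same query can be run with --output text (the instance ID here is a placeholder):
aws ec2 describe-instances --instance-ids <instance-id> \
  --query "Reservations[].Instances[].NetworkInterfaces[].PrivateIpAddresses[].PrivateIpAddress" \
  --output text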
This definition clearly mentions that you can use the networkPolicy property as part of the networkProfile and set it to calico, but that doesn't work. AKS creation just times out, with all the nodes stuck in the Not Ready state.
You need to enable the underlying provider feature first:
az feature list --query "[?contains(name, 'Container')].{name:name, type:type}" # example to list all features
az feature register --name EnableNetworkPolicy --namespace Microsoft.ContainerService
az provider register -n Microsoft.ContainerService
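Feature registration can take a while to complete; you can check its state before creating the cluster (it should eventually report Registered):
az feature show --namespace Microsoft.ContainerService --name EnableNetworkPolicy \
  --query properties.state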
After that you can just use the REST API / an ARM template to create the AKS cluster:
{
"location": "location1",
"tags": {
"tier": "production",
"archv2": ""
},
"properties": {
"kubernetesVersion": "1.12.4", // has to be 1.12.x, 1.11.x doesnt support calico AFAIK
"dnsPrefix": "dnsprefix1",
"agentPoolProfiles": [
{
"name": "nodepool1",
"count": 3,
"vmSize": "Standard_DS1_v2",
"osType": "Linux"
}
],
"linuxProfile": {
"adminUsername": "azureuser",
"ssh": {
"publicKeys": [
{
"keyData": "keydata"
}
]
}
},
"servicePrincipalProfile": {
"clientId": "clientid",
"secret": "secret"
},
"addonProfiles": {},
"enableRBAC": false,
"networkProfile": {
"networkPlugin": "azure",
"networkPolicy": "calico", // set policy here
"serviceCidr": "xxx",
"dnsServiceIP": "yyy",
"dockerBridgeCidr": "zzz"
}
}
}
P.S.
Unfortunately, Helm doesn't seem to work at the time of writing (I suspect this is because kubectl port-forward, which Helm relies on, doesn't work either).
I am trying to bring up an Ubuntu container in a pod in OpenShift. I have set up my local Docker registry and configured DNS accordingly. Starting the Ubuntu container with plain Docker works fine without any issues. When I deploy the pod, I can see that my Ubuntu image is pulled successfully, but the container does not start; it fails with a back-off pulling image error. Is this because my entrypoint does not keep any long-running process inside the container?
"openshift.io/container.ubuntu.image.entrypoint": "[\"top\"]",
Snapshot of the events
Deployment-config :
{
"kind": "DeploymentConfig",
"apiVersion": "v1",
"metadata": {
"name": "ubuntu",
"namespace": "testproject",
"selfLink": "/oapi/v1/namespaces/testproject/deploymentconfigs/ubuntu",
"uid": "e7c7b9c6-4dbd-11e6-bd2b-0800277bbed5",
"resourceVersion": "4340",
"generation": 6,
"creationTimestamp": "2016-07-19T14:34:31Z",
"labels": {
"app": "ubuntu"
},
"annotations": {
"openshift.io/deployment.cancelled": "4",
"openshift.io/generated-by": "OpenShiftNewApp"
}
},
"spec": {
"strategy": {
"type": "Rolling",
"rollingParams": {
"updatePeriodSeconds": 1,
"intervalSeconds": 1,
"timeoutSeconds": 600,
"maxUnavailable": "25%",
"maxSurge": "25%"
},
"resources": {}
},
"triggers": [
{
"type": "ConfigChange"
},
{
"type": "ImageChange",
"imageChangeParams": {
"automatic": true,
"containerNames": [
"ubuntu"
],
"from": {
"kind": "ImageStreamTag",
"namespace": "testproject",
"name": "ubuntu:latest"
},
"lastTriggeredImage": "ns1.myregistry.com:5000/ubuntu#sha256:6d9a2a1bacdcb2bd65e36b8f1f557e89abf0f5f987ba68104bcfc76103a08b86"
}
}
],
"replicas": 1,
"test": false,
"selector": {
"app": "ubuntu",
"deploymentconfig": "ubuntu"
},
"template": {
"metadata": {
"creationTimestamp": null,
"labels": {
"app": "ubuntu",
"deploymentconfig": "ubuntu"
},
"annotations": {
"openshift.io/container.ubuntu.image.entrypoint": "[\"top\"]",
"openshift.io/generated-by": "OpenShiftNewApp"
}
},
"spec": {
"containers": [
{
"name": "ubuntu",
"image": "ns1.myregistry.com:5000/ubuntu#sha256:6d9a2a1bacdcb2bd65e36b8f1f557e89abf0f5f987ba68104bcfc76103a08b86",
"resources": {},
"terminationMessagePath": "/dev/termination-log",
"imagePullPolicy": "Always"
}
],
"restartPolicy": "Always",
"terminationGracePeriodSeconds": 30,
"dnsPolicy": "ClusterFirst",
"securityContext": {}
}
}
},
"status": {
"latestVersion": 5,
"details": {
"causes": [
{
"type": "ConfigChange"
}
]
},
"observedGeneration": 5
}
The problem was with the HTTP proxy. After solving that, the image pull was successful.
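For reference, a common way to handle this on a systemd-based Docker host is a proxy drop-in for the Docker daemon; this is only a sketch with assumed proxy host and NO_PROXY values, and the local registry should be listed in NO_PROXY so pulls from ns1.myregistry.com bypass the proxy:
mkdir -p /etc/systemd/system/docker.service.d
cat > /etc/systemd/system/docker.service.d/http-proxy.conf <<'EOF'
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:3128"
Environment="HTTPS_PROXY=http://proxy.example.com:3128"
Environment="NO_PROXY=localhost,127.0.0.1,ns1.myregistry.com"
EOF
systemctl daemon-reload
systemctl restart docker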