BNA Creation in Fabric error - hyperledger-fabric

I am in the process of deploying my .BNA file to Fabric. I have been testing and prototyping it on the Bluemix playground successfully; however, when I try to install the business network application to Fabric I get the following error.
> Error: Error trying install business network.
> Error: No valid responses from any peers.
> Response from attempted peer comms was an error:
> Error: 14 UNAVAILABLE: Connect Failed
> Command failed
**These are the steps I took:**
1. Launch your Fabric network
> ./startFabric.sh
2. Create the peer admin card
> ./createPeerAdminCard.sh
3. Install the network application to Fabric
> composer network install -a dist/bna.bna -c PeerAdmin#hlfv1
**This is the step where I get the error:**
✖ Installing business network. This may take a minute...
Error: Error trying install business network. Error: No valid responses from any peers.
Response from attempted peer comms was an error: Error: 14 UNAVAILABLE: Connect Failed
Command failed
**Details of my environment:**
Node Version: v8.11.3
Docker version: 18.03
Composer version: v0.19.12
Docker PS:
[Docker PS Screen shot][1]
[1]: https://i.stack.imgur.com/HQGBf.png
Any help is really appreciated.
**UPDATE**
Connection.json for hlfv1
{
"name": "hlfv1",
"x-type": "hlfv1",
"x-commitTimeout": 300,
"version": "1.0.0",
"client": {
"organization": "Org1",
"connection": {
"timeout": {
"peer": {
"endorser": "300",
"eventHub": "300",
"eventReg": "300"
},
"orderer": "300"
}
}
},
"channels": {
"composerchannel": {
"orderers": [
"orderer.example.com"
],
"peers": {
"peer0.org1.example.com": {}
}
}
},
"organizations": {
"Org1": {
"mspid": "Org1MSP",
"peers": [
"peer0.org1.example.com"
],
"certificateAuthorities": [
"ca.org1.example.com"
]
}
},
"orderers": {
"orderer.example.com": {
"url": "grpc://localhost:7050"
}
},
"peers": {
"peer0.org1.example.com": {
"url": "grpc://localhost:7051",
"eventUrl": "grpc://localhost:7053"
}
},
"certificateAuthorities": {
"ca.org1.example.com": {
"url": "http://localhost:7054",
"caName": "ca.org1.example.com"
}
}
}
**hlfv11 vs hlfv1**
I noticed that when I look in the fabric-scripts directory there are two components: hlfv11 and hlfv1.
(Screenshot of the fabric-tools directory omitted.)
When I run startFabric.sh, the output includes a line saying Fabric assumes it is "hlfv11" instead of "hlfv1".
Any help would be appreciated.
docker inspect peer0.org1.example.com
[
{
"Id": "6caa83b2a8a5ee976c9066d0bbd98475e5bff885736ec9931606c33f06ccd9ac",
"Created": "2018-07-20T22:49:51.238208735Z",
"Path": "peer",
"Args": [
"node",
"start"
],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 7506,
"ExitCode": 0,
"Error": "",
"StartedAt": "2018-07-20T22:49:51.543106588Z",
"FinishedAt": "0001-01-01T00:00:00Z"
},
"Image": "sha256:b023f9be07714e495e6d41849d7e916434e85580754423ece145866468ad29a9",
"ResolvConfPath": "/mnt/sda1/var/lib/docker/containers/6caa83b2a8a5ee976c9066d0bbd98475e5bff885736ec9931606c33f06ccd9ac/resolv.conf",
"HostnamePath": "/mnt/sda1/var/lib/docker/containers/6caa83b2a8a5ee976c9066d0bbd98475e5bff885736ec9931606c33f06ccd9ac/hostname",
"HostsPath": "/mnt/sda1/var/lib/docker/containers/6caa83b2a8a5ee976c9066d0bbd98475e5bff885736ec9931606c33f06ccd9ac/hosts",
"LogPath": "/mnt/sda1/var/lib/docker/containers/6caa83b2a8a5ee976c9066d0bbd98475e5bff885736ec9931606c33f06ccd9ac/6caa83b2a8a5ee976c9066d0bbd98475e5bff885736ec9931606c33f06ccd9ac-json.log",
"Name": "/peer0.org1.example.com",
"RestartCount": 0,
"Driver": "aufs",
"Platform": "linux",
"MountLabel": "",
"ProcessLabel": "",
"AppArmorProfile": "",
"ExecIDs": null,
"HostConfig": {
"Binds": [
"/var/run:/host/var/run:rw",
"/Users/wppa/fabric-dev-servers/fabric-scripts/hlfv11/composer/crypto-config/peerOrganizations/org1.example.com/users:/etc/hyperledger/msp/users:rw",
"/Users/wppa/fabric-dev-servers/fabric-scripts/hlfv11/composer:/etc/hyperledger/configtx:rw",
"/Users/wppa/fabric-dev-servers/fabric-scripts/hlfv11/composer/crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/msp:/etc/hyperledger/peer/msp:rw"
],
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},
"NetworkMode": "composer_default",
"PortBindings": {
"7051/tcp": [
{
"HostIp": "",
"HostPort": "7051"
}
],
"7053/tcp": [
{
"HostIp": "",
"HostPort": "7053"
}
]
},
"RestartPolicy": {
"Name": "",
"MaximumRetryCount": 0
},
"AutoRemove": false,
"VolumeDriver": "",
"VolumesFrom": [],
"CapAdd": null,
"CapDrop": null,
"Dns": null,
"DnsOptions": null,
"DnsSearch": null,
"ExtraHosts": null,
"GroupAdd": null,
"IpcMode": "shareable",
"Cgroup": "",
"Links": null,
"OomScoreAdj": 0,
"PidMode": "",
"Privileged": false,
"PublishAllPorts": false,
"ReadonlyRootfs": false,
"SecurityOpt": null,
"UTSMode": "",
"UsernsMode": "",
"ShmSize": 67108864,
"Runtime": "runc",
"ConsoleSize": [
0,
0
],
"Isolation": "",
"CpuShares": 0,
"Memory": 0,
"NanoCpus": 0,
"CgroupParent": "",
"BlkioWeight": 0,
"BlkioWeightDevice": null,
"BlkioDeviceReadBps": null,
"BlkioDeviceWriteBps": null,
"BlkioDeviceReadIOps": null,
"BlkioDeviceWriteIOps": null,
"CpuPeriod": 0,
"CpuQuota": 0,
"CpuRealtimePeriod": 0,
"CpuRealtimeRuntime": 0,
"CpusetCpus": "",
"CpusetMems": "",
"Devices": null,
"DeviceCgroupRules": null,
"DiskQuota": 0,
"KernelMemory": 0,
"MemoryReservation": 0,
"MemorySwap": 0,
"MemorySwappiness": null,
"OomKillDisable": false,
"PidsLimit": 0,
"Ulimits": null,
"CpuCount": 0,
"CpuPercent": 0,
"IOMaximumIOps": 0,
"IOMaximumBandwidth": 0
},
"GraphDriver": {
"Data": null,
"Name": "aufs"
},
"Mounts": [
{
"Type": "bind",
"Source": "/var/run",
"Destination": "/host/var/run",
"Mode": "rw",
"RW": true,
"Propagation": "rprivate"
},
{
"Type": "bind",
"Source": "/Users/wppa/fabric-dev-servers/fabric-scripts/hlfv11/composer/crypto-config/peerOrganizations/org1.example.com/users",
"Destination": "/etc/hyperledger/msp/users",
"Mode": "rw",
"RW": true,
"Propagation": "rprivate"
},
{
"Type": "bind",
"Source": "/Users/wppa/fabric-dev-servers/fabric-scripts/hlfv11/composer",
"Destination": "/etc/hyperledger/configtx",
"Mode": "rw",
"RW": true,
"Propagation": "rprivate"
},
{
"Type": "bind",
"Source": "/Users/wppa/fabric-dev-servers/fabric-scripts/hlfv11/composer/crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/msp",
"Destination": "/etc/hyperledger/peer/msp",
"Mode": "rw",
"RW": true,
"Propagation": "rprivate"
}
],
"Config": {
"Hostname": "6caa83b2a8a5",
"Domainname": "",
"User": "",
"AttachStdin": false,
"AttachStdout": false,
"AttachStderr": false,
"ExposedPorts": {
"7051/tcp": {},
"7053/tcp": {}
},
"Tty": false,
"OpenStdin": false,
"StdinOnce": false,
"Env": [
"CORE_LOGGING_LEVEL=debug",
"CORE_CHAINCODE_LOGGING_LEVEL=DEBUG",
"CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock",
"CORE_PEER_ID=peer0.org1.example.com",
"CORE_PEER_ADDRESS=peer0.org1.example.com:7051",
"CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=composer_default",
"CORE_PEER_LOCALMSPID=Org1MSP",
"CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/peer/msp",
"CORE_LEDGER_STATE_STATEDATABASE=CouchDB",
"CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS=couchdb:5984",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"FABRIC_CFG_PATH=/etc/hyperledger/fabric"
],
"Cmd": [
"peer",
"node",
"start"
],
"Image": "hyperledger/fabric-peer:x86_64-1.1.0",
"Volumes": {
"/etc/hyperledger/configtx": {},
"/etc/hyperledger/msp/users": {},
"/etc/hyperledger/peer/msp": {},
"/host/var/run": {}
},
"WorkingDir": "/opt/gopath/src/github.com/hyperledger/fabric",
"Entrypoint": null,
"OnBuild": null,
"Labels": {
"com.docker.compose.config-hash": "d44983248579bb25822020f82382fba01b891c3338b2fe91bb17ac3936126c69",
"com.docker.compose.container-number": "1",
"com.docker.compose.oneoff": "False",
"com.docker.compose.project": "composer",
"com.docker.compose.service": "peer0.org1.example.com",
"com.docker.compose.version": "1.21.1",
"org.hyperledger.fabric.base.version": "0.4.6",
"org.hyperledger.fabric.version": "1.1.0"
}
},
"NetworkSettings": {
"Bridge": "",
"SandboxID": "5645c1988100b53fa9a8c2d13adc40c43f3995cb808b3eda28771176033b26b4",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {
"7051/tcp": [
{
"HostIp": "0.0.0.0",
"HostPort": "7051"
}
],
"7053/tcp": [
{
"HostIp": "0.0.0.0",
"HostPort": "7053"
}
]
},
"SandboxKey": "/var/run/docker/netns/5645c1988100",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"composer_default": {
"IPAMConfig": null,
"Links": null,
"Aliases": [
"peer0.org1.example.com",
"6caa83b2a8a5"
],
"NetworkID": "d4f496b7b3aeae87d1b1461523bc8620ac34b54d9b3b9f8d31c6cfa7be4da024",
"EndpointID": "a19687702d04e166dc0291dc9ce1130caf5eccf484ece4fd988c13cc2660c8fb",
"Gateway": "172.19.0.1",
"IPAddress": "172.19.0.5",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:ac:13:00:05",
"DriverOpts": null
}
}
}
}
]
Fixed: I needed to reinstall Hyperledger Fabric, Composer, Node, npm, and Docker, and to run "unset ${!DOCKER*}"; there seemed to be a Docker issue. (That command unsets every environment variable whose name starts with DOCKER, such as a stale DOCKER_HOST, so the CLI talks to the local daemon again.)

This error is usually seen when the CLI cannot connect to the Fabric using the addresses specified in the PeerAdmin's connection.json file. Did you download the latest fabric-tools as shown here prior to this?
Sometimes, if there is a proxy involved (on a corporate network), there can be some routing failures.
See the answer here, which may help you: Hyperledger composer network install
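If you want to rule out basic connectivity before digging further, here is a minimal Node/TypeScript sketch (not part of Composer; the hosts and ports are taken from the connection.json in the question) that checks whether each Fabric endpoint is reachable:

```typescript
// Minimal reachability check for the Fabric endpoints in connection.json.
// This only tests TCP connectivity; it does not validate TLS or gRPC.
import * as net from "net";

function checkPort(host: string, port: number, timeoutMs = 3000): Promise<void> {
  return new Promise((resolve, reject) => {
    const socket = net.connect({ host, port });
    socket.setTimeout(timeoutMs);
    socket.once("connect", () => { socket.destroy(); resolve(); });
    socket.once("timeout", () => { socket.destroy(); reject(new Error("timeout")); });
    socket.once("error", reject);
  });
}

async function main(): Promise<void> {
  // Hosts/ports from the question's connection.json; adjust if yours differ.
  const endpoints = [
    { name: "peer0 endorser", host: "localhost", port: 7051 },
    { name: "peer0 event hub", host: "localhost", port: 7053 },
    { name: "orderer", host: "localhost", port: 7050 },
    { name: "ca", host: "localhost", port: 7054 },
  ];
  for (const e of endpoints) {
    try {
      await checkPort(e.host, e.port);
      console.log(`${e.name} (${e.host}:${e.port}) reachable`);
    } catch (err) {
      console.error(`${e.name} (${e.host}:${e.port}) NOT reachable:`, err);
    }
  }
}

main();
```

If the peer or orderer ports are not reachable here, the problem lies in Docker networking (or a DOCKER_HOST pointing at the wrong daemon), not in Composer itself.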

Error 14 means that Composer can't locate the peers. Your issue is here:
"peers": {
"peer0.org1.example.com": {}
}
You need to write something like:
"peers": {
"peer0.org1.example.com": {
"url": "grpc://localhost:7051",
"eventUrl": "grpc://localhost:7053"
}
}
FIXED:
I uninstalled Docker, Node, and npm, reinstalled everything, and made sure to run unset ${!DOCKER*} when first installing Docker for Mac OS.

Related

@pnp/sp list request returns different objects on SharePoint site view/edit mode

I use the @pnp/sp library to get list data in an SPFx web part.
The crucial code for my issue looks as follows:
const list = await sp.web.lists
.getByTitle("ListTitle")
.fields.getByTitle("Category")
.get();
return list.Choices.results;
When I open the SharePoint page where the web part is located in view mode, it loads the correct object:
{
"__metadata": {
"id": "...",
"uri": "...",
"type": "SP.FieldChoice"
},
"DescriptionResource": {
"__deferred": {
"uri": "..."
}
},
"TitleResource": {
"__deferred": {
"uri": "..."
}
},
"AutoIndexed": false,
"CanBeDeleted": true,
"ClientSideComponentId": "00000000-0000-0000-0000-000000000000",
"ClientSideComponentProperties": null,
"ClientValidationFormula": null,
"ClientValidationMessage": null,
"CustomFormatter": null,
"DefaultFormula": null,
"DefaultValue": "SocialEvent",
"Description": "",
"Direction": "none",
"EnforceUniqueValues": false,
"EntityPropertyName": "...",
"Filterable": true,
"FromBaseType": false,
"Group": "Benutzerdefinierte Spalten",
"Hidden": false,
"Id": "...",
"Indexed": false,
"IndexStatus": 0,
"InternalName": "...",
"IsModern": false,
"JSLink": "clienttemplates.js",
"PinnedToFiltersPane": false,
"ReadOnlyField": false,
"Required": true,
"SchemaXml": "<Field Type=\"Choice\" DisplayName=\"Category\" Required=\"TRUE\" EnforceUniqueValues=\"FALSE\" Indexed=\"FALSE\" Format=\"Dropdown\" FillInChoice=\"FALSE\" ID=\"{...}\" SourceID=\"{{listid:...}}\" StaticName=\"...\" Name=\"...\" ColName=\"nvarchar4\" RowOrdinal=\"0\" CustomFormatter=\"\" Version=\"2\"><Default>SocialEvent</Default><CHOICES><CHOICE>SocialEvent</CHOICE><CHOICE>Schulung</CHOICE><CHOICE>Fachbesprechung</CHOICE><CHOICE>Messe</CHOICE></CHOICES></Field>",
"Scope": "/sites/.../Lists/...",
"Sealed": false,
"ShowInFiltersPane": 0,
"Sortable": true,
"StaticName": "...",
"Title": "Category",
"FieldTypeKind": 6,
"TypeAsString": "Choice",
"TypeDisplayName": "Auswahl",
"TypeShortDescription": "Auswahl (Menü)",
"ValidationFormula": null,
"ValidationMessage": null,
"FillInChoice": false,
"Mappings": null,
"Choices": {
"__metadata": {
"type": "Collection(Edm.String)"
},
"results": [
"SocialEvent",
"Schulung",
"Fachbesprechung",
"Messe"
]
},
"EditFormat": 0
}
But when I change the mode to edit, the same code gives me the following result (I shortened it, because the full result is 10,000+ lines):
{
"user": {
"#odata.context": "...",
"#odata.type": "#SP.User",
"#odata.id": "...",
"#odata.editLink": "Web/GetUserById(17)",
"Id": 17,
"IsHiddenInUI": false,
"LoginName": "...",
"Title": "...",
"PrincipalType": 1,
"Email": "...",
"Expiration": "",
"IsEmailAuthenticationGuestUser": false,
"IsShareByEmailGuestUser": false,
"IsSiteAdmin": true,
"UserId": {
"NameId": "...",
"NameIdIssuer": "urn:federation:microsoftonline"
},
"UserPrincipalName": "..."
},
"item": {
"#odata.context": "...",
"#odata.type": "#SP.Data.SitePagesItem",
"#odata.id": "...",
"#odata.etag": "\"363\"",
"#odata.editLink": "...",
"FileSystemObjectType": 0,
"Id": 1,
"ContentTypeId": "...",
"OData__ModerationComments": null,
"ComplianceAssetId": null,
"Title": "Homepage",
"PageLayoutType": "Home",
"BannerImageUrl": {
"Description": "...",
"Url": "..."
},
"Description": "...",
"PromotedState": 0,
"FirstPublishedDate": null,
"LayoutWebpartsContent": null,
"OData__AuthorBylineId": null,
"OData__TopicHeader": null,
"OData__SPSitePageFlags": null,
"OData__SPCallToAction": null,
"OData__OriginalSourceUrl": null,
"OData__OriginalSourceSiteId": null,
"OData__OriginalSourceWebId": null,
"OData__OriginalSourceListId": null,
"OData__OriginalSourceItemId": null,
"ID": 1,
"Created": "2022-07-30T16:11:23-07:00",
"AuthorId": 0,
"Modified": "2022-09-28T00:45:46-07:00",
"EditorId": 0,
"OData__ModerationStatus": 3,
"CheckoutUserId": 0,
"UniqueId": "...",
"owshiddenversion": 363,
"OData__UIVersionString": "41.5",
"GUID": "..."
},
"itemProperties": {}
...
}
The following versions are installed:
"#pnp/common": "1.3.9",
"#pnp/logging": "1.3.9",
"#pnp/odata": "1.3.9",
"#pnp/sp": "1.3.9",
Update:
I installed the web part in different site collections within the tenant, which resulted in the same error. Now I have installed the web part in another tenant, where the problem doesn't occur. Unfortunately, I still cannot determine where the error lies.
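One way to narrow this down is to constrain the request with an explicit $select, so the response cannot carry anything but the field properties you asked for; a hedged sketch against the same @pnp/sp 1.3.9 API used above:

```typescript
// Sketch: request only the Choices property of the field. With verbose OData
// (as in the question's output) the choices arrive under Choices.results.
import { sp } from "@pnp/sp";

export async function getCategoryChoices(): Promise<string[]> {
  const field = await sp.web.lists
    .getByTitle("ListTitle")        // list title from the question
    .fields.getByTitle("Category")  // choice field from the question
    .select("Choices")              // adds $select=Choices to the REST call
    .get();
  return field.Choices.results;
}
```

If even this $select-restricted call returns the page/user object in edit mode, that points at something on that page intercepting or rewriting the request, rather than at the query itself.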

How should one set a custom agent pool in a DevOps release definition?

I create release definitions using the DevOps REST APIs. Due to the lack of documentation, I capture HTTP requests and examine the JSON payloads.
I'm able to set up a release using the Azure agent pools. Only the relevant node follows (a sketch of how such a definition is POSTed appears after it):
"deploymentInput": {
"parallelExecution": {
"parallelExecutionType": 0
},
"agentSpecification": {
"identifier": "windows-2019"
},
"skipArtifactsDownload": false,
"artifactsDownloadInput": {},
"queueId": 749,
"demands": [],
"enableAccessToken": false,
"timeoutInMinutes": 0,
"jobCancelTimeoutInMinutes": 1,
"condition": "succeeded()",
"overrideInputs": {},
"dependencies": []
}
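For reference, a hedged sketch of how such a definition can be POSTed from Node/TypeScript; the vsrm host is the one used by the Release APIs, while the api-version and PAT handling here are assumptions to adapt:

```typescript
// Sketch: create a release definition via the Azure DevOps Release REST API.
// Assumes Node 18+ (global fetch) and a PAT in the AZDO_PAT env variable.
const org = "your-org";          // placeholder: your organization
const project = "your-project";  // placeholder: your project
const pat = process.env.AZDO_PAT ?? "";

export async function createReleaseDefinition(definition: unknown): Promise<unknown> {
  const res = await fetch(
    `https://vsrm.dev.azure.com/${org}/${project}/_apis/release/definitions?api-version=7.0`,
    {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        // PAT auth: empty username, PAT as password, base64-encoded.
        Authorization: `Basic ${Buffer.from(`:${pat}`).toString("base64")}`,
      },
      body: JSON.stringify(definition),
    }
  );
  if (!res.ok) throw new Error(`HTTP ${res.status}: ${await res.text()}`);
  return res.json();
}
```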
I want to set a custom-defined agent pool, but even when I capture the request I still can't understand how to set it. This is the full JSON of an empty release with a custom agent set:
{
"id": 0,
"name": "New release pipeline",
"source": 2,
"comment": "",
"createdOn": "2020-10-31T10:02:19.034Z",
"createdBy": null,
"modifiedBy": null,
"modifiedOn": "2020-10-31T10:02:19.034Z",
"environments": [
{
"id": -1,
"name": "Stage 1",
"rank": 1,
"variables": {},
"variableGroups": [],
"preDeployApprovals": {
"approvals": [
{
"rank": 1,
"isAutomated": true,
"isNotificationOn": false,
"id": 0
}
],
"approvalOptions": {
"executionOrder": 1
}
},
"deployStep": {
"tasks": [],
"id": 0
},
"postDeployApprovals": {
"approvals": [
{
"rank": 1,
"isAutomated": true,
"isNotificationOn": false,
"id": 0
}
],
"approvalOptions": {
"executionOrder": 2
}
},
"deployPhases": [
{
"deploymentInput": {
"parallelExecution": {
"parallelExecutionType": 0
},
"agentSpecification": null,
"skipArtifactsDownload": false,
"artifactsDownloadInput": {},
"queueId": 1039,
"demands": [],
"enableAccessToken": false,
"timeoutInMinutes": 0,
"jobCancelTimeoutInMinutes": 1,
"condition": "succeeded()",
"overrideInputs": {},
"dependencies": []
},
"rank": 1,
"phaseType": 1,
"name": "Agent job",
"refName": null,
"workflowTasks": [],
"phaseInputs": {
"phaseinput_artifactdownloadinput": {
"artifactsDownloadInput": {},
"skipArtifactsDownload": false
}
}
}
],
"runOptions": {},
"environmentOptions": {
"emailNotificationType": "OnlyOnFailure",
"emailRecipients": "release.environment.owner;release.creator",
"skipArtifactsDownload": false,
"timeoutInMinutes": 0,
"enableAccessToken": false,
"publishDeploymentStatus": true,
"badgeEnabled": false,
"autoLinkWorkItems": false,
"pullRequestDeploymentEnabled": false
},
"demands": [],
"conditions": [
{
"conditionType": 1,
"name": "ReleaseStarted",
"value": ""
}
],
"executionPolicy": {
"concurrencyCount": 1,
"queueDepthCount": 0
},
"schedules": [],
"properties": {
"LinkBoardsWorkItems": false,
"BoardsEnvironmentType": "unmapped"
},
"preDeploymentGates": {
"id": 0,
"gatesOptions": null,
"gates": []
},
"postDeploymentGates": {
"id": 0,
"gatesOptions": null,
"gates": []
},
"environmentTriggers": [],
"owner": {
"displayName": "Giacomo Stelluti Scala",
"id": "3617734a-1751-66f2-8343-c71c1398b5e6",
"isAadIdentity": true,
"isContainer": false,
"uniqueName": "giacomo.stelluti#dev4side.com",
"url": "https://dev.azure.com/dev4side/"
},
"retentionPolicy": {
"daysToKeep": 30,
"releasesToKeep": 3,
"retainBuild": true
},
"processParameters": {}
}
],
"artifacts": [],
"variables": {},
"variableGroups": [],
"triggers": [],
"lastRelease": null,
"tags": [],
"path": "\\test-poc",
"properties": {
"DefinitionCreationSource": "ReleaseNew",
"IntegrateJiraWorkItems": "false",
"IntegrateBoardsWorkItems": false
},
"releaseNameFormat": "Release-$(rev:r)",
"description": ""
}
Where is this agent set? Does anyone know how to do it properly?
Any help is really appreciated.
Giacomo S. S.
I've found the solution in this question.
"deploymentInput": {
"parallelExecution": {
"parallelExecutionType": 0
},
"agentSpecification": null,
"skipArtifactsDownload": false,
"artifactsDownloadInput": {},
"queueId": 1039,
"demands": [],
"enableAccessToken": false,
"timeoutInMinutes": 0,
"jobCancelTimeoutInMinutes": 1,
"condition": "succeeded()",
"overrideInputs": {},
"dependencies": []
}
agentSpecification must be null and queueId must be set to the id of the custom pool's agent queue; a sketch for looking that id up follows.
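To find the queueId to put in deploymentInput, you can list the project's agent queues and match on the pool name; a hedged sketch (the endpoint is the distributed task queues API; the api-version is an assumption):

```typescript
// Sketch: resolve an agent queue id by name, for use as deploymentInput.queueId.
// Assumes Node 18+ (global fetch) and PAT-based basic auth as above.
export async function getQueueId(
  org: string, project: string, pat: string, poolName: string
): Promise<number> {
  const res = await fetch(
    `https://dev.azure.com/${org}/${project}/_apis/distributedtask/queues?api-version=7.0-preview.1`,
    { headers: { Authorization: `Basic ${Buffer.from(`:${pat}`).toString("base64")}` } }
  );
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  const body = (await res.json()) as { value: Array<{ id: number; name: string }> };
  const queue = body.value.find(q => q.name === poolName);
  if (!queue) throw new Error(`No agent queue named "${poolName}" in ${project}`);
  return queue.id;
}
```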

How to get the running status of IoT Edge modules via the Azure REST API? (not the connectionState or status from the 'Get Modules on Device' API call)

Context:
Azure REST - Modules - Get Modules On Device
By using this API call, I can get information about the connectionState (connected/disconnected) and status (enabled/disabled) of modules. We can check the runtime status of the modules deployed on the device by visiting the Azure IoT Hub web portal:
portal.azure.com -> IoT Hub -> IoT Edge section -> select the device you wish to find the details for
Question:
How can I get this RUNTIME STATUS via the Azure API? (Please refer to the picture.)
If you check the module twin of $edgeAgent, it looks like the below:
{
"deviceId": "edgeDevice",
"moduleId": "$edgeAgent",
"etag": "AAAAAAAAAEA=",
"deviceEtag": "NDU1OTY3MjA=",
"status": "enabled",
"statusUpdateTime": "0001-01-01T00:00:00Z",
"connectionState": "Disconnected",
"lastActivityTime": "0001-01-01T00:00:00Z",
"cloudToDeviceMessageCount": 0,
"authenticationType": "sas",
"x509Thumbprint": {
"primaryThumbprint": null,
"secondaryThumbprint": null
},
"version": 501,
"properties": {
"desired": {
"schemaVersion": "1.0",
"runtime": {
"type": "docker",
"settings": {
"minDockerVersion": "v1.25",
"loggingOptions": "",
"registryCredentials": {
}
}
},
"systemModules": {
"edgeAgent": {
"type": "docker",
"settings": {
"image": "mcr.microsoft.com/azureiotedge-agent:1.0",
"createOptions": "{}"
}
},
"edgeHub": {
"type": "docker",
"status": "running",
"restartPolicy": "always",
"settings": {
"image": "mcr.microsoft.com/azureiotedge-hub:1.0",
"createOptions": "{\"HostConfig\":{\"PortBindings\":{\"5671/tcp\":[{\"HostPort\":\"5671\"}],\"8883/tcp\":[{\"HostPort\":\"8883\"}],\"443/tcp\":[{\"HostPort\":\"443\"}]}}}"
}
}
},
"modules": {
"CustomModuleName": {
"version": "1.0",
"type": "docker",
"status": "running",
"restartPolicy": "always",
"settings": {
"image": "iotregdev300.azurecr.io/customModuleName:0.0.2-amd64",
"createOptions": "{\"HostConfig\":{\"PortBindings\":{\"8080/tcp\":[{\"HostPort\":\"8080\"}]}}}"
}
}
}
},
"reported": {
"schemaVersion": "1.0",
"version": {
"version": "1.0.9.4",
"build": "32971639",
"commit": "12d55e582cc7ce95c8abfe11eddfbbc938ed6001"
},
"lastDesiredStatus": {
"code": 200,
"description": ""
},
"runtime": {
"platform": {
"os": "linux",
"architecture": "x86_64",
"version": "1.0.9.4"
},
"type": "docker",
"settings": {
"minDockerVersion": "v1.25",
"loggingOptions": "",
"registryCredentials": {
}
}
},
"systemModules": {
"edgeAgent": {
"type": "docker",
"exitCode": 0,
"statusDescription": "running",
"lastStartTimeUtc": "2020-09-09T07:34:34.4585643Z",
"lastExitTimeUtc": "2020-09-09T07:34:26.9869915Z",
"runtimeStatus": "running",
"imagePullPolicy": "on-create",
"settings": {
"image": "mcr.microsoft.com/azureiotedge-agent:1.0",
"imageHash": "sha256:1a2fffc3c74a2b2510a3149bb2295b68a553e4c9aca90698879902f36fd6d163",
"createOptions": "{}"
}
},
"edgeHub": {
"type": "docker",
"status": "running",
"restartPolicy": "always",
"imagePullPolicy": "on-create",
"env": {},
"exitCode": 0,
"statusDescription": "running",
"lastStartTimeUtc": "2020-09-09T07:34:50.8012461Z",
"lastExitTimeUtc": "2020-09-09T07:34:26.9845717Z",
"restartCount": 0,
"lastRestartTimeUtc": "2020-09-09T07:34:26.9845717Z",
"runtimeStatus": "running",
"settings": {
"image": "mcr.microsoft.com/azureiotedge-hub:1.0",
"imageHash": "sha256:f531eb6c23f347c37ea8c90204e9cb12024aec77d8b2e68e93b14c38ec066520",
"createOptions": "{\"HostConfig\":{\"PortBindings\":{\"5671/tcp\":[{\"HostPort\":\"5671\"}],\"8883/tcp\":[{\"HostPort\":\"8883\"}],\"443/tcp\":[{\"HostPort\":\"443\"}]}}}"
}
}
},
"lastDesiredVersion": 64,
"modules": {
"CustomModuleName": {
"exitCode": 0,
"statusDescription": "running",
"lastStartTimeUtc": "2020-09-09T07:34:49.3923079Z",
"lastExitTimeUtc": "2020-09-09T07:34:26.9606688Z",
"restartCount": 0,
"lastRestartTimeUtc": "2020-09-09T07:34:26.9606688Z",
"runtimeStatus": "running",
"version": "1.0",
"status": "running",
"restartPolicy": "always",
"imagePullPolicy": "on-create",
"type": "docker",
"settings": {
"image": "iotregdev300.azurecr.io/custommodulename:0.0.2-amd64",
"imageHash": "sha256:e728d4b8804d2114beab7c1903f706d8152e404be3f5601ee5e7371e8ac32ecf",
"createOptions": "{\"HostConfig\":{\"PortBindings\":{\"8080/tcp\":[{\"HostPort\":\"8080\"}]}}}"
},
"env": {}
}
}
}
}
}
In the above JSON, CustomModuleName is the custom module, and it has a field called runtimeStatus: "running". The same field exists in the edgeHub and edgeAgent modules too. So you just need to fetch the $edgeAgent module twin through the REST API or the Azure device/service SDK; a minimal sketch follows.
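Here is a hedged sketch of that fetch (Node 18+ global fetch; the hub hostname, SAS token source, and api-version are placeholders/assumptions to adapt):

```typescript
// Sketch: read runtimeStatus for every module from $edgeAgent's reported
// properties via the IoT Hub REST API.
const hub = "your-hub.azure-devices.net";      // placeholder IoT Hub hostname
const deviceId = "edgeDevice";                 // device id from the question
const sasToken = process.env.IOTHUB_SAS ?? ""; // a service-scoped SAS token

export async function getModuleRuntimeStatuses(): Promise<Record<string, string>> {
  // %24 is the URL-encoded "$" in $edgeAgent.
  const res = await fetch(
    `https://${hub}/twins/${deviceId}/modules/%24edgeAgent?api-version=2021-04-12`,
    { headers: { Authorization: sasToken } }
  );
  if (!res.ok) throw new Error(`HTTP ${res.status}: ${await res.text()}`);
  const twin = await res.json();
  const reported = twin.properties?.reported ?? {};
  const all = { ...(reported.systemModules ?? {}), ...(reported.modules ?? {}) };
  const statuses: Record<string, string> = {};
  for (const [name, mod] of Object.entries<any>(all)) {
    statuses[name] = mod.runtimeStatus ?? "unknown";
  }
  // e.g. { edgeAgent: "running", edgeHub: "running", CustomModuleName: "running" }
  return statuses;
}
```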

KubeDNS not injecting nameservers, Kubernetes 1.5.2 on RHEL 7

I've created a cluster with a master and 5 nodes, with flannel for the pod network, and that is working fine.
What is not working: after installing KubeDNS (kubedns, dnsmasq, and sidecar), I can't get the new nameserver to be injected into the HOST /etc/resolv.conf, and because of that I can't resolve any hostnames.
Everything else works fine; all KubeDNS containers are running with no errors.
My kube-proxy args:
KUBE_PROXY_ARGS="--cluster-cidr=10.254.0.0/16"
My kubelet configs:
KUBELET_DNS="--cluster-dns=10.254.0.253"
KUBELET_DOMAIN="--cluster-domain=cluster.local"
Here are my configs for the DNS POD:
{
"kind": "Pod",
"apiVersion": "v1",
"metadata": {
"name": "kube-dns-4073989832-f7g5g",
"generateName": "kube-dns-4073989832-",
"namespace": "kube-system",
"selfLink": "/api/v1/namespaces/kube-system/pods/kube-dns-4073989832-f7g5g",
"uid": "6f76055c-5b1e-11e7-b0c5-0050568fc023",
"resourceVersion": "3974782",
"creationTimestamp": "2017-06-27T09:53:13Z",
"labels": {
"k8s-app": "kube-dns",
"pod-template-hash": "4073989832"
},
"annotations": {
"kubernetes.io/created-by": "{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicaSet\",\"namespace\":\"kube-system\",\"name\":\"kube-dns-4073989832\",\"uid\":\"8afa7fce-5a9e-11e7-b714-0050568fc023\",\"apiVersion\":\"extensions\",\"resourceVersion\":\"3974404\"}}\n",
"scheduler.alpha.kubernetes.io/critical-pod": ""
},
"ownerReferences": [
{
"apiVersion": "extensions/v1beta1",
"kind": "ReplicaSet",
"name": "kube-dns-4073989832",
"uid": "8afa7fce-5a9e-11e7-b714-0050568fc023",
"controller": true
}
]
},
"spec": {
"volumes": [
{
"name": "kube-dns-config",
"configMap": {
"name": "kube-dns",
"defaultMode": 420
}
}
],
"containers": [
{
"name": "kubedns",
"image": "vvcelparti01:443/k8s-dns-kube-dns-amd64:1.14.2",
"args": [
"--domain=cluster.local",
"--dns-port=10053",
"--config-dir=/kube-dns-config",
"--kube-master-url=http://10.64.146.26:8080",
"--v=2"
],
"ports": [
{
"name": "dns-local",
"containerPort": 10053,
"protocol": "UDP"
},
{
"name": "dns-tcp-local",
"containerPort": 10053,
"protocol": "TCP"
},
{
"name": "metrics",
"containerPort": 10055,
"protocol": "TCP"
}
],
"env": [
{
"name": "PROMETHEUS_PORT",
"value": "10055"
}
],
"resources": {
"limits": {
"memory": "170Mi"
},
"requests": {
"cpu": "100m",
"memory": "70Mi"
}
},
"volumeMounts": [
{
"name": "kube-dns-config",
"mountPath": "/kube-dns-config"
}
],
"livenessProbe": {
"httpGet": {
"path": "/healthcheck/kubedns",
"port": 10054,
"scheme": "HTTP"
},
"initialDelaySeconds": 60,
"timeoutSeconds": 5,
"periodSeconds": 10,
"successThreshold": 1,
"failureThreshold": 5
},
"readinessProbe": {
"httpGet": {
"path": "/readiness",
"port": 8081,
"scheme": "HTTP"
},
"initialDelaySeconds": 3,
"timeoutSeconds": 5,
"periodSeconds": 10,
"successThreshold": 1,
"failureThreshold": 3
},
"terminationMessagePath": "/dev/termination-log",
"imagePullPolicy": "IfNotPresent"
},
{
"name": "dnsmasq",
"image": "vvcelparti01:443/k8s-dns-dnsmasq-nanny-amd64:1.14.2",
"args": [
"-v=2",
"-logtostderr",
"-configDir=/etc/k8s/dns/dnsmasq-nanny",
"-restartDnsmasq=true",
"--",
"-k",
"--cache-size=1000",
"--log-facility=-",
"--server=/cluster.local/127.0.0.1#10053",
"--server=/in-addr.arpa/127.0.0.1#10053",
"--server=/ip6.arpa/127.0.0.1#10053"
],
"ports": [
{
"name": "dns",
"containerPort": 53,
"protocol": "UDP"
},
{
"name": "dns-tcp",
"containerPort": 53,
"protocol": "TCP"
}
],
"resources": {
"requests": {
"cpu": "150m",
"memory": "20Mi"
}
},
"volumeMounts": [
{
"name": "kube-dns-config",
"mountPath": "/etc/k8s/dns/dnsmasq-nanny"
}
],
"livenessProbe": {
"httpGet": {
"path": "/healthcheck/dnsmasq",
"port": 10054,
"scheme": "HTTP"
},
"initialDelaySeconds": 60,
"timeoutSeconds": 5,
"periodSeconds": 10,
"successThreshold": 1,
"failureThreshold": 5
},
"terminationMessagePath": "/dev/termination-log",
"imagePullPolicy": "IfNotPresent"
},
{
"name": "sidecar",
"image": "vvcelparti01:443/k8s-dns-sidecar-amd64:1.14.2",
"args": [
"--v=2",
"--logtostderr",
"--probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local,5,A",
"--probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local,5,A"
],
"ports": [
{
"name": "metrics",
"containerPort": 10054,
"protocol": "TCP"
}
],
"resources": {
"requests": {
"cpu": "10m",
"memory": "20Mi"
}
},
"livenessProbe": {
"httpGet": {
"path": "/metrics",
"port": 10054,
"scheme": "HTTP"
},
"initialDelaySeconds": 60,
"timeoutSeconds": 5,
"periodSeconds": 10,
"successThreshold": 1,
"failureThreshold": 5
},
"terminationMessagePath": "/dev/termination-log",
"imagePullPolicy": "IfNotPresent"
}
],
"restartPolicy": "Always",
"terminationGracePeriodSeconds": 30,
"dnsPolicy": "Default",
"serviceAccountName": "kube-dns",
"serviceAccount": "kube-dns",
"nodeName": "gopher01",
"securityContext": {}
},
"status": {
"phase": "Running",
"conditions": [
{
"type": "Initialized",
"status": "True",
"lastProbeTime": null,
"lastTransitionTime": "2017-06-27T09:52:45Z"
},
{
"type": "Ready",
"status": "True",
"lastProbeTime": null,
"lastTransitionTime": "2017-06-27T09:52:55Z"
},
{
"type": "PodScheduled",
"status": "True",
"lastProbeTime": null,
"lastTransitionTime": "2017-06-27T09:53:13Z"
}
],
"hostIP": "10.64.146.24",
"podIP": "172.30.18.4",
"startTime": "2017-06-27T09:52:45Z",
"containerStatuses": [
{
"name": "dnsmasq",
"state": {
"running": {
"startedAt": "2017-06-27T09:52:47Z"
}
},
"lastState": {},
"ready": true,
"restartCount": 0,
"image": "vvcelparti01:443/k8s-dns-dnsmasq-nanny-amd64:1.14.2",
"imageID": "docker-pullable://vvcelparti01:443/k8s-dns-dnsmasq-nanny-amd64#sha256:5a9dda0fdf5bf548eb6a63260c3f5e6f5cdc3d0917279e38a435c00967c6c57c",
"containerID": "docker://682fa7e0ffb28f26aee97a8ac7fe564096ece3ef3d7fe14fd9ed6857526d2d2f"
},
{
"name": "kubedns",
"state": {
"running": {
"startedAt": "2017-06-27T09:52:47Z"
}
},
"lastState": {},
"ready": true,
"restartCount": 0,
"image": "vvcelparti01:443/k8s-dns-kube-dns-amd64:1.14.2",
"imageID": "docker-pullable://vvcelparti01:443/k8s-dns-kube-dns-amd64#sha256:c78ed83587e42e7fc21f07756364c568c5c0fe10289f4f7f19d03a97f15b7a60",
"containerID": "docker://20b729004655a43efd384f8dded1f97d898a3b54092e190aba3d2031e72da056"
},
{
"name": "sidecar",
"state": {
"running": {
"startedAt": "2017-06-27T09:52:47Z"
}
},
"lastState": {},
"ready": true,
"restartCount": 0,
"image": "vvcelparti01:443/k8s-dns-sidecar-amd64:1.14.2",
"imageID": "docker-pullable://vvcelparti01:443/k8s-dns-sidecar-amd64#sha256:8d8c0e03e5f91ae85be7402ac88f804c52431dac32491c7a2557fd462fd2695b",
"containerID": "docker://bbaec6e9d0aa933daaee7c33b6d64d0f37f1a57213fabd2aa1c686c61a356f7f"
}
]
}
}
Here is my troubleshooting session:
$ kubectl get svc --namespace=kube-system
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-dns 10.254.0.253 <none> 53/UDP,53/TCP 24d
kubernetes-dashboard 10.254.170.86 <none> 80/TCP 29d
$ kubectl get ep kube-dns --namespace=kube-system
NAME ENDPOINTS AGE
kube-dns 172.30.18.4:53,172.30.18.4:53 24d
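Note that kubelet's --cluster-dns value is written into each pod's /etc/resolv.conf, not the host's, so the host file staying unchanged is expected. One check that helps separate "DNS service broken" from "resolv.conf not pointing at it" is to query the kube-dns ClusterIP directly; a minimal Node/TypeScript sketch using the service IP from the output above (run it from a machine that can route to the service network, e.g. a node):

```typescript
// Sketch: ask kube-dns (ClusterIP 10.254.0.253 from the question) directly.
// If this resolves, the DNS service works and the issue is resolv.conf wiring.
import { Resolver } from "dns";

const resolver = new Resolver();
resolver.setServers(["10.254.0.253"]);

resolver.resolve4("kubernetes.default.svc.cluster.local", (err, addresses) => {
  if (err) {
    console.error("kube-dns did not answer:", err.code);
  } else {
    console.log("kube-dns answered:", addresses);
  }
});
```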

Unable to start a ubuntu container in openshift-origin

I am trying to bring up an Ubuntu container in a pod in OpenShift. I have set up my local Docker registry and configured DNS accordingly. Starting the Ubuntu container with just Docker works fine without any issues. When I deploy the pod, I can see that my Ubuntu image is pulled successfully, but it does not succeed in starting. It fails with a back-off pulling image error. Is this because my entrypoint does not keep any process running inside the container?
"openshift.io/container.ubuntu.image.entrypoint": "[\"top\"]",
(Snapshot of the events omitted.)
Deployment-config :
{
"kind": "DeploymentConfig",
"apiVersion": "v1",
"metadata": {
"name": "ubuntu",
"namespace": "testproject",
"selfLink": "/oapi/v1/namespaces/testproject/deploymentconfigs/ubuntu",
"uid": "e7c7b9c6-4dbd-11e6-bd2b-0800277bbed5",
"resourceVersion": "4340",
"generation": 6,
"creationTimestamp": "2016-07-19T14:34:31Z",
"labels": {
"app": "ubuntu"
},
"annotations": {
"openshift.io/deployment.cancelled": "4",
"openshift.io/generated-by": "OpenShiftNewApp"
}
},
"spec": {
"strategy": {
"type": "Rolling",
"rollingParams": {
"updatePeriodSeconds": 1,
"intervalSeconds": 1,
"timeoutSeconds": 600,
"maxUnavailable": "25%",
"maxSurge": "25%"
},
"resources": {}
},
"triggers": [
{
"type": "ConfigChange"
},
{
"type": "ImageChange",
"imageChangeParams": {
"automatic": true,
"containerNames": [
"ubuntu"
],
"from": {
"kind": "ImageStreamTag",
"namespace": "testproject",
"name": "ubuntu:latest"
},
"lastTriggeredImage": "ns1.myregistry.com:5000/ubuntu#sha256:6d9a2a1bacdcb2bd65e36b8f1f557e89abf0f5f987ba68104bcfc76103a08b86"
}
}
],
"replicas": 1,
"test": false,
"selector": {
"app": "ubuntu",
"deploymentconfig": "ubuntu"
},
"template": {
"metadata": {
"creationTimestamp": null,
"labels": {
"app": "ubuntu",
"deploymentconfig": "ubuntu"
},
"annotations": {
"openshift.io/container.ubuntu.image.entrypoint": "[\"top\"]",
"openshift.io/generated-by": "OpenShiftNewApp"
}
},
"spec": {
"containers": [
{
"name": "ubuntu",
"image": "ns1.myregistry.com:5000/ubuntu#sha256:6d9a2a1bacdcb2bd65e36b8f1f557e89abf0f5f987ba68104bcfc76103a08b86",
"resources": {},
"terminationMessagePath": "/dev/termination-log",
"imagePullPolicy": "Always"
}
],
"restartPolicy": "Always",
"terminationGracePeriodSeconds": 30,
"dnsPolicy": "ClusterFirst",
"securityContext": {}
}
}
},
"status": {
"latestVersion": 5,
"details": {
"causes": [
{
"type": "ConfigChange"
}
]
},
"observedGeneration": 5
}
The problem was with the HTTP proxy. After solving that, the image pull was successful.
