When trying to deploy my IoT Edge modules, I get error code 400 (Invalid config). Looking at the issue, it says "Invalid image placeholder". Inside deployment.template.json, my module looks like this:
"modules": {
"sampleMod": {
"version": "1.0",
"type": "docker",
"status": "running",
"restartPolicy": "always",
"settings": {
"image": "${MODULES.sampleMod}", // This line has 'Invalid image placeholder' as an error
"createOptions": {}
}
}
}
This used to work: a couple of months ago this file deployed just fine, but now it doesn't. If I put a full image URL there it works, but since the URL changes with every version it would be tedious to swap it out after every push.
I'm not sure when this behavior changed (July 12 is a good candidate, as that's the release date of IoT Edge v1.3), but adding the architecture solved it for me. It's worth noting that my module's Dockerfile is named Dockerfile.arm64v8, as it was before this change.
"modules": {
"sampleMod": {
"version": "1.0",
"type": "docker",
"status": "running",
"restartPolicy": "always",
"settings": {
"image": "${MODULES.sampleMod.arm64v8}", // Add architecture here.
"createOptions": {}
}
}
}
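For context, the placeholder is resolved from the module's module.json, and the platforms map in that file is what ties the .arm64v8 suffix to Dockerfile.arm64v8. A minimal sketch of what that file usually looks like; the repository, version, and language values here are assumptions, not taken from the question:
{
  "$schema-version": "0.0.1",
  "image": {
    "repository": "myregistry.azurecr.io/samplemod", // assumption: your registry/repository
    "tag": {
      "version": "0.0.1", // assumption: your module version
      "platforms": {
        "arm64v8": "./Dockerfile.arm64v8" // the platform key the ${MODULES.sampleMod.arm64v8} placeholder refers to
      }
    },
    "buildOptions": []
  },
  "language": "csharp" // assumption: depends on the module template
}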
I am testing simple dashboard examples set up with the Azure CLI, following the docs The structure of Azure dashboards. The file is a single tile (a small square of the browser window) that outputs a short message. I used the following command in the VS Code terminal:
az portal dashboard import --name "mySingleTileDashboard1" --resource-group "example-resources_copy" --input-path singleTileDashboard.json
Here is the terminal output showing the resulting JSON:
{
  "id": "/subscriptions/xxxxxxxxxxxx/resourceGroups/example-resources_copy/providers/Microsoft.Portal/dashboards/mySingleTileDashboard1",
  "lenses": {
    "0": {
      "metadata": null,
      "order": 0,
      "parts": {
        "0": {
          "metadata": null,
          "position": {
            "colSpan": 3,
            "metadata": null,
            "rowSpan": 2,
            "x": 0,
            "y": 0
          }
        }
      }
    }
  },
  "location": "westus",
  "metadata": {
    "inputs": [],
    "settings": {
      "content": {
        "settings": {
          "content": "## Dashboard Overview\r\nSingle tile example. Code lifted from azure-portal-dashboards-structure",
          "subtitle": "",
          "title": ""
        }
      }
    },
    "type": "Extension/HubsExtension/PartType/MarkdownPart"
  },
  "name": "mySingleTileDashboard1",
  "resourceGroup": "example-resources_copy",
  "tags": {
    "hidden-title": "Created via API"
  },
  "type": "Microsoft.Portal/dashboards"
}
The portal shows that the dashboard has been set. The Overview shows all parameters present. But when I use "Go to dashboard" I get an error page:
Dashboard 'arm/subscriptions/xxxxxxxxxx/resourcegroups/example-resources_copy/providers/microsoft.portal/dashboards/mysingletiledashboard1' no longer exists. It was previously published to resource group 'example-resources_copy' in subscription 'xxxxxxxxxxxxx'.
I followed up on the error using Resolve errors for resource not found. The Activity Log showed that Set Dashboard succeeded.
The line in my script that I had doubts about was the following:
"type": "Extension/HubsExtension/PartType/MarkdownPart"
The docs show the line as Extension[azure]/ ... etc. I tried both versions but got the same result.
Previously I have only set a blank dashboard via script, and that worked; here it doesn't. So I suspect the line with the MarkdownPart may be screwing things up.
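For comparison, the layout in The structure of Azure dashboards puts the MarkdownPart type and its settings inside the part's own metadata, rather than at the dashboard's top-level metadata where my output shows them. A trimmed sketch of that documented shape, reusing my tile's values:
"lenses": {
  "0": {
    "order": 0,
    "parts": {
      "0": {
        "position": { "x": 0, "y": 0, "colSpan": 3, "rowSpan": 2 },
        "metadata": {
          "inputs": [],
          "type": "Extension/HubsExtension/PartType/MarkdownPart",
          "settings": {
            "content": {
              "settings": {
                "content": "## Dashboard Overview\r\nSingle tile example.",
                "subtitle": "",
                "title": ""
              }
            }
          }
        }
      }
    }
  }
}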
I am trying to create a rules engine on a CDN endpoint, like this:
But using a JSON file (the result in the image was achieved manually, but now I want to automate it).
So far I have this:
"deliveryPolicy": {
"description": "Rewrite and Redirect",
"rules": [
{
"name" : "UrlFileExtension",
"order": 2,
"conditions": [
{
"name": "UrlFileExtension",
"parameters": {
"#odata.type": "#Microsoft.Azure.Cdn.Models.UrlFileExtensionMatchConditionParameters",
"Extension": 0,
"operator": "LessThanOrEqual",
"matchValues": [0]
}
}
],
"actions": [
{
"name": "UrlRewrite",
"parameters": {
"#odata.type": "#Microsoft.Azure.Cdn.Models.DeliveryRuleUrlRewriteActionParameters",
"sourcePattern": "/",
"destination": "/index.html",
"preserveUnmatchedPath": false
}
}
]
},
The action works just fine, but I can't get the UrlFileExtension condition to work; it does not recognize the odata.type either.
Any hint or suggestion on how to fix the condition?
You might want to try this odata.type for the condition:
"#odata.type": "#Microsoft.Azure.Cdn.Models.DeliveryRuleUrlFileExtensionMatchConditionParameters",
instead of
"#odata.type": "#Microsoft.Azure.Cdn.Models.UrlFileExtensionMatchConditionParameters",
https://learn.microsoft.com/en-us/python/api/azure-mgmt-cdn/azure.mgmt.cdn.models.urlfileextensionmatchconditionparameters?view=azure-python
(I'm aware the documentation is for Python; I could not find a better reference, but it could be your solution.)
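Dropped into your condition block, that would look roughly like this. It is only a sketch: the operator and matchValues are kept from your snippet, the "#odata.type" key is spelled the way you already have it, the "Extension" key is removed (it is not a property of the linked model), and negateCondition/transforms are optional fields from that model. Whether matchValues should hold the string "0" rather than the number 0 is an assumption.
"conditions": [
  {
    "name": "UrlFileExtension",
    "parameters": {
      "#odata.type": "#Microsoft.Azure.Cdn.Models.DeliveryRuleUrlFileExtensionMatchConditionParameters",
      "operator": "LessThanOrEqual",
      "negateCondition": false,
      "matchValues": ["0"], // assumption: extension length expressed as a string
      "transforms": []
    }
  }
]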
I redeployed my (sideloaded) Teams app that implements a very simple bot that auto-messages rooms every day.
This was working for a long time; I made a slight change, so I needed to redeploy it, remove it from the Teams room, and add it back.
After I removed it and tried to add it back (without changing any of the settings), I now get an error telling me "Manifest parsing has failed".
I also get the following errors in my console log:
The manifest is below. It was 100% generated within Teams and is not something I edited myself, so I'm not sure why Teams says it can't parse it (some fields redacted):
{
  "$schema": "https://developer.microsoft.com/en-us/json-schemas/teams/v1.8/MicrosoftTeams.schema.json",
  "manifestVersion": "1.8",
  "version": "1.0.0",
  "id": "dbb36443-1bce-48e0-81d2-b30aa3698144",
  "packageName": "com.prosourcer-teams",
  "developer": {
    "name": "MY NAME",
    "websiteUrl": "URL",
    "privacyUrl": "URL",
    "termsOfUseUrl": "URL"
  },
  "icons": {
    "color": "color.png",
    "outline": "outline.png"
  },
  "name": {
    "short": "ps-app",
    "full": "ps-chatBot"
  },
  "description": {
    "short": "short desc",
    "full": "full desc"
  },
  "accentColor": "#FFFFFF",
  "bots": [
    {
      "botId": "bfcb70de-e093-4733-b236-742eb3b0aad8",
      "scopes": [
        "personal",
        "team",
        "groupchat"
      ],
      "supportsFiles": false,
      "isNotificationOnly": false
    }
  ],
  "permissions": [
    "identity",
    "messageTeamMembers"
  ],
  "validDomains": [
    "URL"
  ]
}
UPDATE: If I try to add the bot to an individual team, I also get the following error in my console. I have confirmed that the appId is correct; I'm not sure where I'm supposed to be setting my TeamsId:
If there's an existing installation still around somewhere, it might be causing this. Try incrementing the version number: it's currently 1.0.0, so try bumping it to 1.0.1 or even 1.1.0.
Update: maybe there's an issue in Teams. There is a question just before yours today with a similar problem, see "Manifest parsing has failed" when installing teams apps from App Studio. It sounds like an issue with Teams or App Studio. If so, you can try manually uploading the manifest to your internal company store.
Manually change the manifest version to 1.7 (down from 1.8). As of Oct 15, 2020 that is the workaround:
"manifestVersion": "1.7"
(The Teams App Studio app generates the manifest with version 1.8, but the Teams client's parsing fails, as you have also run into.)
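Concretely, that means editing two lines at the top of your manifest; the v1.7 schema URL below assumes it follows the same pattern as the v1.8 one:
"$schema": "https://developer.microsoft.com/en-us/json-schemas/teams/v1.7/MicrosoftTeams.schema.json",
"manifestVersion": "1.7",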
I'm having a problem loading the $schema in SPFx in my new web part for SharePoint. The web part works on workbench.aspx, but my whole manifest is not being processed, so I can't set preconfiguredEntries, and that's a big problem for me.
The error is:
Problems loading reference 'https://developer.microsoft.com/json-schemas/spfx/client-side-manifest-base.schema.json': Request vscode/content failed unexpectedly without providing any details.(768)
Any idea on this issue please?
{
  "$schema": "https://developer.microsoft.com/json-schemas/spfx/client-side-web-part-manifest.schema.json",
  "id": "56dab116-67ba-453f-883d-b7a11690e965",
  "alias": "ReadListWebPart",
  "supportedHosts": ["SharePointWebPart"],
  "componentType": "Webpart",
  "version": "1.0",
  "manifestVersion": "2",
  "requiresCustomScript": false,
  "preconfiguredEntries": [{
    "groupId": "5c03119e-3074-46fd-976b-c60198311f70",
    "group": { "default": "Other" },
    "title": { "default": "read-list" },
    "description": { "default": "popis web party" },
    "officeFabricIconFontName": "Page",
    "properties": {
      "vedouci_velke_foto": true,
      "asistenti_pod_vedoucim": false,
      "nazev_web_party": "To jsme my"
    }
  }]
}
I checked the manifest.json; it is the same as yours, and I get the following warning:
Then I tested accessing "https://developer.microsoft.com/json-schemas/spfx/client-side-web-part-manifest.schema.json" locally; no problem, it can still be accessed.
After this, I tested outputting the preconfigured properties in a React SPFx web part like this:
Props.ts
WebPart.ts
.tsx
I was still able to output the properties:
In conclusion, you can just ignore this issue; it is still able to read preconfiguredEntries.
I'm using data factory with blob storage.
I sometimes get the error below, intermittently; it can occur on different pipelines/data sources. However, I always get the same error regardless of which task fails: 400 The specified block list is invalid.
Copy activity encountered a user error at Sink side: ErrorCode=UserErrorBlobUploadFailed,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=Error occurred when trying to upload blob 'https://blob.core.windows.net/', detailed message: The remote server returned an error: (400) Bad Request.,Source=,''Type=Microsoft.WindowsAzure.Storage.StorageException,Message=The remote server returned an error: (400) Bad Request.,Source=Microsoft.WindowsAzure.Storage,StorageExtendedMessage=The specified block list is invalid.
Type=System.Net.WebException,Message=The remote server returned an error: (400) Bad Request.,Source=Microsoft.WindowsAzure.Storage
This seems to be most common when more than one task is writing data to the storage at the same time. Is there anything I can do to make this process more reliable? Is it possible something has been misconfigured? It's causing slices to fail in Data Factory, so I'd really love to know what I should be investigating.
A sample pipeline that has suffered from this issue:
{
  "$schema": "http://datafactories.schema.management.azure.com/schemas/2015-09-01/Microsoft.DataFactory.Pipeline.json",
  "name": "Pipeline",
  "properties": {
    "description": "Pipeline to copy Processed CSV from Data Lake to blob storage",
    "activities": [
      {
        "type": "Copy",
        "typeProperties": {
          "source": {
            "type": "AzureDataLakeStoreSource"
          },
          "sink": {
            "type": "BlobSink",
            "writeBatchSize": 0,
            "writeBatchTimeout": "00:00:00"
          }
        },
        "inputs": [ { "name": "DataLake" } ],
        "outputs": [ { "name": "Blob" } ],
        "policy": {
          "concurrency": 10,
          "executionPriorityOrder": "OldestFirst",
          "retry": 0,
          "timeout": "01:00:00"
        },
        "scheduler": {
          "frequency": "Hour",
          "interval": 1
        },
        "name": "CopyActivity"
      }
    ],
    "start": "2016-02-28",
    "end": "2016-02-29",
    "isPaused": false,
    "pipelineMode": "Scheduled"
  }
}
I'm only using LRS standard storage, but I still wouldn't expect it to intermittently throw errors.
EDIT: adding the linked service JSON:
{
  "$schema": "http://datafactories.schema.management.azure.com/schemas/2015-09-01/Microsoft.DataFactory.LinkedService.json",
  "name": "Ls-Staging-Storage",
  "properties": {
    "type": "AzureStorage",
    "typeProperties": {
      "connectionString": "DefaultEndpointsProtocol=https;AccountName=;AccountKey="
    }
  }
}
Such errors are mostly caused by race conditions, e.g. multiple concurrent activity runs writing to the same blob file.
Could you check your pipeline settings to see whether that is the case? If so, please avoid that configuration.
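If that is what is happening here, two mitigations are worth trying: lower the copy activity's "concurrency" from 10 to 1, or partition the output folder path by slice so each concurrent run writes to its own blob. A sketch of the latter, assuming the "Blob" output dataset is hourly; the folder path and format below are illustrative, not taken from your setup:
{
  "name": "Blob",
  "properties": {
    "type": "AzureBlob",
    "linkedServiceName": "Ls-Staging-Storage",
    "typeProperties": {
      "folderPath": "processed/{Year}/{Month}/{Day}/{Hour}", // illustrative: each hourly slice writes to its own folder
      "format": { "type": "TextFormat" },
      "partitionedBy": [
        { "name": "Year", "value": { "type": "DateTime", "date": "SliceStart", "format": "yyyy" } },
        { "name": "Month", "value": { "type": "DateTime", "date": "SliceStart", "format": "MM" } },
        { "name": "Day", "value": { "type": "DateTime", "date": "SliceStart", "format": "dd" } },
        { "name": "Hour", "value": { "type": "DateTime", "date": "SliceStart", "format": "HH" } }
      ]
    },
    "availability": {
      "frequency": "Hour",
      "interval": 1
    }
  }
}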