Is it possible to point a Google Cloud DNS entry to a Google Compute Engine instance?

Could I use a reference or link to my Google Compute Engine host's logical name, instead of 173.255.x.x. in:
{
  "additions": [
    {
      "kind": "dns#resourceRecordSet",
      "name": "compute-engine-host.domain.com.",
      "rrdatas": [
        "173.255.x.x."
      ],
      "ttl": 60,
      "type": "A"
    }
  ]
}

I don't believe that this is possible yet, but I'll see if we can get it on the list of things to build.
Until then, you should be able to use the Cloud DNS API's Changes.create method to have your GCE machine add/update a DNS record when it boots up...
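For example, a Changes.create request body that swaps the record over to the instance's current address might look like this (a sketch only: the two addresses below are documentation placeholders, and the deletions entry has to match the existing record set exactly):

{
  "deletions": [
    {
      "kind": "dns#resourceRecordSet",
      "name": "compute-engine-host.domain.com.",
      "type": "A",
      "ttl": 60,
      "rrdatas": [ "198.51.100.10" ]
    }
  ],
  "additions": [
    {
      "kind": "dns#resourceRecordSet",
      "name": "compute-engine-host.domain.com.",
      "type": "A",
      "ttl": 60,
      "rrdatas": [ "203.0.113.5" ]
    }
  ]
}

At boot the instance could read its own external IP from the metadata server and POST a change along these lines against the managed zone.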

Related

Why doesn't this declarative net request rule work?

I'm trying to block a website with a Chrome extension that uses the new declarative net request API for Manifest V3, but it isn't working at all. I have added the permission in the manifest and made sure to add the priority, id, action and condition, but it still doesn't do anything. Since I am blocking only one domain, I tried changing the domain list in the condition from "domains" to "domain", but this just blocks every domain. I'm not sure why, but when I open the website in a private/incognito tab, it occasionally works. Here is the applicable part of my manifest:
"declarative_net_request" : {
"rule_resources" : [{
"id": "rules1",
"enabled": true,
"path": "rules.json"
}]
},
"permissions": [
"declarativeNetRequest"
],
Here is my rules.json file.
[{
  "id": 1,
  "priority": 1,
  "action": {
    "type": "block"
  },
  "condition": {
    "domains": ["google.com"],
    "resourceTypes": ["main_frame"]
  }
}]
You need to change the "domains" key to "requestDomains" or "initiatorDomains", since "domains" is deprecated. I am assuming that google.com is your initiator domain, which means you want to block all requests originating from google.com to any destination website. If you want to block google.com as the destination website, i.e. you want to stop any requests being made to google.com, use the "urlFilter" key instead.
I ran into the same problem; Google does a poor job of telling us what's deprecated. "domains" was deprecated and we have to use "initiatorDomains" instead.
What worked for me, inside the rule's condition:
urlFilter: "https://example.com" to block a request going to "https://example.com", and
initiatorDomains: ["mail.google.com"] to block a request coming from "mail.google.com" (just an example).
In Manifest V3 I also had to include:
"host_permissions": [ "<all_urls>" ]
If I had had an example of how to block both from and to, it would have saved me a lot of time. I hope this helps; a combined rule is sketched below.
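Putting the two together, a rules.json along these lines might do it (a sketch only: example.com as the blocked destination, mail.google.com as the initiator, and the resource types are illustrative placeholders, not values from the question):

[{
  "id": 1,
  "priority": 1,
  "action": { "type": "block" },
  "condition": {
    "urlFilter": "https://example.com",
    "initiatorDomains": ["mail.google.com"],
    "resourceTypes": ["main_frame", "sub_frame", "xmlhttprequest"]
  }
}]

Drop "initiatorDomains" entirely if you want to block requests to example.com regardless of where they come from.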

Azure Marketplace image not found

I'm deploying Linux images using Terraform. To do that, we need to get the image purchase plan info from Azure Marketplace images (some require terms to be accepted and some don't). Microsoft has an instructional doc on how to do this: https://learn.microsoft.com/en-us/azure/virtual-machines/windows/cli-ps-findimage. Great. The problem is that it does not work for some images our teams want to deploy, and I can't see why, so I'm stuck not knowing how to deploy the images they've asked for.
Here's an example where finding the purchase plan info of a Checkpoint image works and I can accept the marketplace terms successfully. Notice the "plan" block information from the first command and then the terms showing as "accepted" in the second command:
scott@Azure:~$ az vm image show --urn checkpoint:check-point-cg-r81:mgmt-byol:latest
{
  "automaticOsUpgradeProperties": {
    "automaticOsUpgradeSupported": false
  },
  "dataDiskImages": [],
  "disallowed": {
    "vmDiskType": "None"
  },
  "extendedLocation": null,
  "features": null,
  "hyperVGeneration": "V1",
  "id": "/Subscriptions/5ff78d61-5262-4bd6-81fa-42d8723b8e3e/Providers/Microsoft.Compute/Locations/westus/Publishers/checkpoint/ArtifactTypes/VMImage/Offers/check-point-cg-r81/Skus/mgmt-byol/Versions/8100.900392.0710",
  "location": "westus",
  "name": "8100.900392.0710",
  "osDiskImage": {
    "operatingSystem": "Linux",
    "sizeInBytes": 107374182912,
    "sizeInGb": 100
  },
  "plan": {
    "name": "mgmt-byol",
    "product": "check-point-cg-r81",
    "publisher": "checkpoint"
  },
  "tags": null
}
scott@Azure:~$ az vm image terms show --urn checkpoint:check-point-cg-r81:mgmt-byol:latest
{
  "accepted": true,
  "id": "/subscriptions/5ff78d61-5262-4bd6-81fa-42d8723b8e3e/providers/Microsoft.MarketplaceOrdering/offerTypes/VirtualMachine/publishers/checkpoint/offers/check-point-cg-r81/plans/mgmt-byol/agreements/current",
  "licenseTextLink": "https://mpcprodsa.blob.core.windows.net/legalterms/3E5ED_legalterms_CHECKPOINT%253a24CHECK%253a2DPOINT%253a2DCG%253a2DR81%253a24MGMT%253a2DBYOL%253a24U2R6YKHF2KWHXN7Y4Q4Q4OEKEYL6JZJCCZGIIGQBSB7FNDUBYTDIRQY6QPT5XMT7NGAH5XWH3LHSQY22URTFS3X7HZHQXZ3CIVJKC2Y.txt",
  "marketplaceTermsLink": "https://mpcprodsa.blob.core.windows.net/marketplaceterms/3EDEF_marketplaceterms_VIRTUALMACHINE%253a24AAK2OAIZEAWW5H4MSP5KSTVB6NDKKRTUBAU23BRFTWN4YC2MQLJUB5ZEYUOUJBVF3YK34CIVPZL2HWYASPGDUY5O2FWEGRBYOXWZE5Y.txt",
  "name": "mgmt-byol",
  "plan": "mgmt-byol",
  "privacyPolicyLink": "http://www.checkpoint.com/privacy",
  "product": "check-point-cg-r81",
  "publisher": "checkpoint",
  "retrieveDatetime": "2021-07-21T13:48:54.3464069Z",
  "signature": "R65W6K5QQIRJP7DUOIK26PND236FGY6YIVTOOJ3ZFZC2CRQGPNF5TA5BNANFJWTFRKFZULYKINVSJ2BIB2DDNRW5AMUS2N5KQR7YTBQ",
  "systemData": {
    "createdAt": "2021-07-21T13:48:54.417391+00:00",
    "createdBy": "5ff78d61-5262-4bd6-81fa-42d8723b8e3e",
    "createdByType": "ManagedIdentity",
    "lastModifiedAt": "2021-07-21T13:48:54.417391+00:00",
    "lastModifiedBy": "5ff78d61-5262-4bd6-81fa-42d8723b8e3e",
    "lastModifiedByType": "ManagedIdentity"
  },
  "type": "Microsoft.MarketplaceOrdering/offertypes"
}
Now, using the exact same method, the one Microsoft prescribes in its own documentation, I can get an RHEL image, but when I try to accept the terms it errors out saying the image is not found. For all intents and purposes, the output of the first command has no appreciable difference from the Checkpoint image that worked as expected. Notice also that I included the location just to ensure the image was available in the intended region.
scott@Azure:~$ az vm image show -l westeurope --urn redhat:rhel-byos:rhel-lvm83:latest
{
  "automaticOsUpgradeProperties": {
    "automaticOsUpgradeSupported": false
  },
  "dataDiskImages": [],
  "disallowed": {
    "vmDiskType": "None"
  },
  "extendedLocation": null,
  "features": [
    {
      "name": "IsAcceleratedNetworkSupported",
      "value": "True"
    }
  ],
  "hyperVGeneration": "V1",
  "id": "/Subscriptions/5ff78d61-5262-4bd6-81fa-42d8723b8e3e/Providers/Microsoft.Compute/Locations/westeurope/Publishers/redhat/ArtifactTypes/VMImage/Offers/rhel-byos/Skus/rhel-lvm83/Versions/8.3.20210409",
  "location": "westeurope",
  "name": "8.3.20210409",
  "osDiskImage": {
    "operatingSystem": "Linux",
    "sizeInBytes": 68719477248,
    "sizeInGb": 64
  },
  "plan": {
    "name": "rhel-lvm83",
    "product": "rhel-byos",
    "publisher": "redhat"
  },
  "tags": null
}
scott@Azure:~$ az vm image terms show --urn redhat:rhel-byos:rhel-lvm83:latest
(BadRequest) Offer with PublisherId: 'redhat' and OfferId: 'rhel-byos' not found. Consider the following solutions: 1-Check to see if offer details are correct 2- If this offer is created recently, please allow up to 30 minutes for this offer to be available for purchase 3- If the offer is removed from the marketplace for new purchase. See similar offers here 'https://azuremarketplace.microsoft.com/en-us/marketplace/apps?page=1%26search=redhat%20rhel-byos'. CorrelationId '75335d2a-fc28-4e4c-acd3-ec2ea423f212'.
Clearly, it is the correct information. Yet Azure can't find the image it literally just gave me the info for. What am I missing here? I'm not looking for workarounds or "use a different image" answers. I'm looking to understand what's going on so I can deal with it head on, or deliver bad news backed up by data if need be. Cheers!
I have tested the commands you ran for both images in my subscription. I am able to see the Checkpoint image and its terms as well, but not the RHEL-BYOS image terms.
The Checkpoint offer is public with a pay-as-you-go Azure subscription; that's why it's showing the terms for purchase. But the RHEL-BYOS offer [the bring-your-own-subscription (BYOS), Red Hat Gold Image model] is private.
Requirements to use RHEL BYOS images:
You must have access to the Red Hat Cloud Access program. Enable your Red Hat subscriptions for Cloud Access at the Red Hat Subscription-Manager. You need to have on hand the Azure subscriptions that are going to be registered for Cloud Access.
If the Red Hat subscriptions you enabled for Cloud Access meet the eligibility requirements, your Azure subscriptions are automatically enabled for Gold Image access.
After you finish the Cloud Access enablement steps, Red Hat validates your eligibility for the Red Hat Gold Images. If validation is successful, you receive access to the Gold Images within three hours.
Reference:
Red Hat Enterprise Linux bring-your-own-subscription Azure images - Azure Virtual Machines
Red Hat Enterprise Linux Bring-Your-Own-Subscription Gold Images now Generally Available in Azure | Azure updates | Microsoft Azure

Azure DevOps Multiple Widget Configuration

I am building an Azure DevOps dashboard with custom widgets for my organization. Some of the widgets share common configuration. Is there any way I can achieve this without actually modifying every widget individually? In other words, is there a way I can pass a parameter to all these widgets?
I am very new to Azure/Azure DevOps dashboards. Please route me to the right board if this isn't the right one. Thank you.
I haven't tried this myself, but you could try the Extension Data Service: https://learn.microsoft.com/en-us/azure/devops/extend/reference/client/api/vss/sdk/services/extensiondata/extensiondataservice?view=azure-devops
To instantiate this you provide a publisher name, extension name and registration ID. The ID, I think, is scoped to the VSIX package, so if all your extensions are published in the same package they may be able to share data using this service.
Your other option would be to require the user to set up and configure, in your widget, an Azure service to act as the integration point. If the value is high enough they may do it, but it would be a chore and likely a cost.
You can try to use this REST API: Widgets - Update Widgets
PATCH https://dev.azure.com/{organization}/{project}/{team}/_apis/dashboard/dashboards/{dashboardId}/widgets?api-version=5.1-preview.2
Sample request body:
{
  "id": "69f6c5b7-0eb0-4067-b75f-6edff74d0fcf",
  "eTag": "5",
  "name": "Other Links",
  "position": {
    "row": 1,
    "column": 2
  },
  "size": {
    "rowSpan": 1,
    "columnSpan": 2
  },
  "settings": null,
  "settingsVersion": {
    "major": 1,
    "minor": 0,
    "patch": 0
  },
  "contributionId": "ms.vss-dashboards-web.Microsoft.VisualStudioOnline.Dashboards.OtherLinksWidget"
}

Azure metric alert dimension operator ARM-Template

I want to exclude a virtual machine from an alert rule. I use ARM templates to deploy my alerts. The problem is that the exclude operator doesn't work.
"dimensions": [
{
"name": "Computer",
"operator": "Exclude",
"values": [ "VMname" ]
},
ISSUE: If I choose Exclude as the operator, it behaves exactly the same as Include.
Has anyone had the same issue?
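For context, the dimensions block above sits inside the alert's criteria; a trimmed sketch of the surrounding resource, assuming the Microsoft.Insights/metricAlerts 2018-03-01 schema (the scope, metric name, threshold and aggregation below are illustrative placeholders, not values from the question):

{
  "type": "Microsoft.Insights/metricAlerts",
  "apiVersion": "2018-03-01",
  "name": "cpu-alert-excluding-one-vm",
  "location": "global",
  "properties": {
    "severity": 3,
    "enabled": true,
    "scopes": [
      "[resourceId('Microsoft.OperationalInsights/workspaces', 'my-workspace')]"
    ],
    "evaluationFrequency": "PT5M",
    "windowSize": "PT15M",
    "criteria": {
      "odata.type": "Microsoft.Azure.Monitor.SingleResourceMultipleMetricCriteria",
      "allOf": [
        {
          "criterionType": "StaticThresholdCriterion",
          "name": "Metric1",
          "metricName": "Average_% Processor Time",
          "dimensions": [
            {
              "name": "Computer",
              "operator": "Exclude",
              "values": [ "VMname" ]
            }
          ],
          "operator": "GreaterThan",
          "threshold": 90,
          "timeAggregation": "Average"
        }
      ]
    },
    "actions": []
  }
}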
A related question was raised in this MSDN thread; just sharing this for the benefit of the broader audience who might face a similar issue.

Which action(s) can I use to create a folder in SharePoint Online via Azure Logic App?

As the question title states, I am looking for a proper action in Logic Apps to create a folder. This action will be executed several times -- once per directory, as per a business rule. There will be no files created in these folders, because the intent of the Logic App is to prepare a template folder structure for the users' needs.
In the official documentation I see that there are create file, create item, and list folder actions. They suggest that there might be an action to create a folder too (which I can't find).
If such action does not exist, I may need to use some SharePoint Online API, but that will be a last resort solution.
I was able to create a directory by means of the SharePoint - Create file action. Creating a directory as a side effect of the file creation action is definitely a dirty hack (btw, inspired by a comment on the MS suggestion site). This bug/feature is not documented, so relying on it in a production environment is probably not a good idea.
More than that, if the goal is to create a directory in SharePoint without any files in it whatsoever, an extra step in the Logic App is needed: make sure to delete the file using the Id returned by the Create file action.
Here's what your JSON might look like if you were trying to create a directory called folderCreatedAsSideEffect under the preexisting TestTarget document library.
"actions": {
"Create_file": {
"inputs": {
"body": "#triggerBody()?['Name']",
"host": { "connection": { "name": "#parameters('$connections')['sharepointonline']['connectionId']" } },
"method": "post",
"path": "/datasets/#{encodeURIComponent(encodeURIComponent('https://MY.sharepoint.com/LogicApps/'))}/files",
"queries": {
"folderPath": "/TestTarget/folderCreatedAsSideEffect",
"name": "placeholder"
}
},
"runAfter": {},
"type": "ApiConnection"
},
"Delete_file": {
"inputs": {
"host": { "connection": { "name": "#parameters('$connections')['sharepointonline']['connectionId']" } },
"method": "delete",
"path": "/datasets/#{encodeURIComponent(encodeURIComponent('https://MY.sharepoint/LogicApps/'))}/files/#{encodeURIComponent(body('Create_file')?['Id'])}"
},
"runAfter": {
"Create_file": [
"Succeeded"
]
},
"type": "ApiConnection"
}
},
Correct; so far the SharePoint connector does not support folder-management tasks.
So your best option currently is to use the SharePoint API or client libraries in an API app or Function App.
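If you do go down the API route from inside the Logic App itself, one option is a plain HTTP action that calls the classic SharePoint REST endpoint for creating folders. A minimal sketch, assuming the same MY.sharepoint.com/LogicApps site and TestTarget library as above, and assuming you have already sorted out authentication (an Azure AD app registration and a bearer token, which this sketch does not show how to obtain):

"Create_folder": {
  "type": "Http",
  "inputs": {
    "method": "POST",
    "uri": "https://MY.sharepoint.com/LogicApps/_api/web/folders",
    "headers": {
      "Accept": "application/json;odata=verbose",
      "Content-Type": "application/json;odata=verbose",
      "Authorization": "Bearer <token>"
    },
    "body": {
      "__metadata": { "type": "SP.Folder" },
      "ServerRelativeUrl": "/LogicApps/TestTarget/folderCreatedAsSideEffect"
    }
  },
  "runAfter": {}
}

Getting a valid token for SharePoint Online is the awkward part here, which is why the create-then-delete workaround above may still be the more practical choice.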
