The git cookbook has an error in the URL that it uses to download the appropriate version. The URL is set in the attributes file as a default attribute, so I figured I could just override it with something static, but that does not work. Here is the code from the git cookbook:
case node['platform_family']
when 'windows'
  default['git']['version'] = '2.8.1'
  if node['kernel']['machine'] == 'x86_64'
    default['git']['architecture'] = '64'
    default['git']['checksum'] = '5e5283990cc91d1e9bd0858f8411e7d0afb70ce26e23680252fb4869288c7cfb'
  else
    default['git']['architecture'] = '32'
    default['git']['checksum'] = '17418c2e507243b9c98db161e9e5e8041d958b93ce6078530569b8edaec6b8a4'
  end
  default['git']['url'] = 'https://github.com/git-for-windows/git/releases/download/v%{version}.windows.1/Git-%{version}-%{architecture}-bit.exe'
The cookbook is included as a dependency in my metadata.rb file and used as a resource in my recipe; it is not part of the run list. I've tried overriding the URL in my role file like so:
"name": "web",
"description": "Web Server Role.",
"json_class": "Chef::Role",
"default_attributes": {
"chef_client": {
"interval": 300,
"splay": 60
},
"git": {
"url": "a test string"
}
},...
That did not work, so I tried adding it to the attributes file of my cookbook as a default value, and when that did not work, I tried the override! method, which still did not work.
I think the problem is that the attribute does not exist yet at the point where I declare it, and it then gets overwritten by the git cookbook.
I don't know how to get around that.
Use override_attributes instead of default_attributes:
"name": "web",
"description": "Web Server Role.",
"json_class": "Chef::Role",
"default_attributes": {
"chef_client": {
"interval": 300,
"splay": 60
}
},
"override_attributes": {
"git": {
"url": "a test string"
}
},...
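Role override_attributes sit above cookbook default attributes in Chef's precedence order, so the git cookbook can no longer clobber your value. If you would rather keep the change in a wrapper cookbook instead of the role, an attributes file at the override level works the same way. A minimal sketch, reusing the attribute name from the question:

# attributes/default.rb in your wrapper cookbook
# 'override' outranks the 'default' level used by the git cookbook,
# so this wins regardless of attribute-file load order
override['git']['url'] = 'a test string'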
I have two integration runtimes (both are self-hosted). When I try to delete one of them I get an error message.
Error: Failed to delete integration runtime.
Detail: The document cannot be deleted since it is referenced by AzureSqlDatabaseContoso.
But this is not true: at the moment there is no such thing as "AzureSqlDatabaseContoso". Perhaps it existed at some point. I also searched the source code; it is not present anywhere in the Git branch.
How can I delete it?
This happened to me before. I just recreated the phantom object with the same name, associated it with the IR to be deleted, and then deleted the newly-recreated object (AzureSqlDatabaseContoso, in this case).
After that, ADF let me delete the underlying IR. Weird, but it worked for me.
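If the portal still refuses, deleting the runtime directly through the Azure CLI may also be worth a try. A sketch, assuming the datafactory CLI extension is installed; the resource names are placeholders:

az datafactory integration-runtime delete \
  --resource-group MyResourceGroup \
  --factory-name MyFactory \
  --name MySelfHostedIR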
The answer is what JeffRamos posted. Another option is to rename the file and its "name" field in the Git source, reload ADF, and delete it there.
Source/linkedService/AzureKeyVault.json
rename this to
Source/linkedService/test.json
The JSON content:
{
  "name": "AzureKeyVault",
  "properties": {
    "annotations": [],
    "type": "AzureKeyVault",
    "typeProperties": {
      "baseUrl": "https://mykv.vault.azure.net/"
    }
  }
}
Rename "name" field
{
  "name": "test",
  "properties": {
    "annotations": [],
    "type": "AzureKeyVault",
    "typeProperties": {
      "baseUrl": "https://mykv.vault.azure.net/"
    }
  }
}
I'm trying to get environment info given a project ID and an environment ID, following the GitLab docs. Right now I'm able to get all environments for a project with the following call: http://my_gitlab_url/gitlab/api/v4/projects/27/environments. I correctly get the result:
[
  {
    "id": 46,
    "name": "my_first_env",
    "slug": "my_first_env",
    "external_url": null,
    "project": {
      "id": 27,
      ...
    }
  },
  {
    "id": 47,
    "name": "my_second_env",
    "slug": "my_second_env",
    "external_url": null,
    "project": {
      "id": 27,
      ...
    }
  }
]
Then I want to get the info for a single environment, so using the previous information I call http://my_gitlab_url/gitlab/api/v4/projects/27/environments/47, but I receive a 404 error. That is strange, because I got the (project, environment) pair from the previous call. Using the environment name or slug doesn't work either. Also, on the environment settings page (http://my_gitlab_url/gitlab/my_project/environments/47/edit) I only see the name section, no ID. Plus, the ID in this last URL matches the one I'm using. Am I missing something? Where else can I find the environment ID?
Your requests to the API are perfectly fine, and you are using the correct environment ID.
However, the single-environment endpoint was only added in GitLab 11.11 and is not available on your 11.8 server. You need to update the GitLab server.
I am trying to create my Azure DevOps release pipeline for Azure Data Factory.
I have followed the rather cryptic guide from Microsoft (https://learn.microsoft.com/en-us/azure/data-factory/continuous-integration-deployment) regarding adding additional parameters to the ARM template that gets generated when you do a publish (https://learn.microsoft.com/en-us/azure/data-factory/continuous-integration-deployment#use-custom-parameters-with-the-resource-manager-template).
I created an arm-template-parameters-definition.json file in the root of the master branch, but when I do a publish, the ARMTemplateParametersForFactory.json in the adf_publish branch remains completely unchanged. I have tried many configurations.
I have defined some Pipeline Parameters in Data Factory and want them to be configurable in my deployment pipeline. Seems like an obvious requirement to me.
Have I missed something fundamental? Help please!
The JSON is as follows:
{
  "Microsoft.DataFactory/factories/pipelines": {
    "*": {
      "properties": {
        "parameters": {
          "*": "="
        }
      }
    }
  },
  "Microsoft.DataFactory/factories/integrationRuntimes": {
    "*": "="
  },
  "Microsoft.DataFactory/factories/triggers": {},
  "Microsoft.DataFactory/factories/linkedServices": {},
  "Microsoft.DataFactory/factories/datasets": {}
}
I've been struggling with this for a few days and did not find a lot of info, so here is what I've found out. You have to put the arm-template-parameters-definition.json in the configured root folder of your collaboration branch, which is not necessarily the repository root (see the sketch below).
If you work in a separate branch, you can test your configuration by downloading the ARM templates from the Data Factory. When you make a change in the parameters definition, you have to reload your browser screen (F5) to refresh the configuration.
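A sketch of the layout when the ADF root folder is the repository root; the folder names follow ADF's standard repository layout:

<repo root>
├── arm-template-parameters-definition.json
├── pipeline/
├── dataset/
└── linkedService/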
If you really want to parameterize all of the parameters in all of the pipelines, the following should work:
"Microsoft.DataFactory/factories/pipelines": {
"properties": {
"parameters":{
"*":{
"defaultValue":"="
}
}
}
}
I prefer specifying the parameters that I want to parameterize:
"Microsoft.DataFactory/factories/pipelines": {
"properties": {
"parameters":{
"LogicApp_RemoveFileFromADLSURL":{
"defaultValue":"=:-LogicApp_RemoveFileFromADLSURL:"
},
"LogicApp_RemoveBlob":{
"defaultValue":"=:-LogicApp_RemoveBlob:"
}
}
}
}
Just to clarify on top of Simon's great answer: if you have a non-standard Git hierarchy (i.e. you moved the root to a sub-folder, as I have done with "Source"), it can be confusing when the doc refers to the "repo root". In that case the file belongs in the configured sub-folder, as sketched below.
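A sketch, assuming "Source" is configured as the ADF root folder:

<repo root>
└── Source/
    ├── arm-template-parameters-definition.json
    └── linkedService/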
You've got the right idea, but the arm-template-parameters-definition.json file needs to follow the hierarchy of the element you want to parameterize.
Here is the pipeline activity I want to parameterize; the "url" should change based on the environment it's deployed in:
{
  "name": "[concat(parameters('factoryName'), '/ExecuteSPForNetPriceExpiringContractsReport')]",
  "type": "Microsoft.DataFactory/factories/pipelines",
  "apiVersion": "2018-06-01",
  "properties": {
    "description": "",
    "activities": [
      {
        "name": "NetPriceExpiringContractsReport",
        "description": "Passing values to the Logic App to generate the CSV file.",
        "type": "WebActivity",
        "typeProperties": {
          "url": "[parameters('ExecuteSPForNetPriceExpiringContractsReport_properties_1_typeProperties')]",
          "method": "POST",
          "headers": {
            "Content-Type": "application/json"
          },
          "body": {
            "resultSet": "#activity('NetPriceExpiringContractsReportLookup').output"
          }
        }
      }
    ]
  }
}
Here is the arm-template-parameters-definition.json file that turns that URL into a parameter.
{
  "Microsoft.DataFactory/factories/pipelines": {
    "properties": {
      "activities": [{
        "typeProperties": {
          "url": "-::string"
        }
      }]
    }
  },
  "Microsoft.DataFactory/factories/integrationRuntimes": {},
  "Microsoft.DataFactory/factories/triggers": {},
  "Microsoft.DataFactory/factories/linkedServices": {
    "*": "="
  },
  "Microsoft.DataFactory/factories/datasets": {
    "*": "="
  }
}
So basically, in the pipelines section of the ARM template it looks for properties -> activities -> typeProperties -> url in the JSON and turns that value into a parameter.
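After the next publish, the parameters file in the adf_publish branch should pick up a matching entry along these lines (a sketch; the parameter name is the one generated in the ARM template above, and the value is a placeholder):

"parameters": {
  "ExecuteSPForNetPriceExpiringContractsReport_properties_1_typeProperties": {
    "value": "<environment-specific Logic App URL>"
  }
}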
Here are the necessary steps to clear up confusion:
1. Add the arm-template-parameters-definition.json to your master branch.
2. Close and re-open your Dev ADF portal.
3. Do a new publish.
Your ARMTemplateParametersForFactory.json will then be updated.
I have experienced similar problems with the ARMTemplateParametersForFactory.json file not being updated when I publish after changing the arm-template-parameters-definition.json.
I found that I can force an update of the publish branch by doing the following:
1. Update the custom parameter definition file as you wish.
2. Delete ARMTemplateParametersForFactory.json from the publish branch.
3. Refresh (F5) the Data Factory portal.
4. Publish.
The easiest way to validate your custom parameter JSON syntax seems to be by exporting the ARM template, just as Simon mentioned.
I went to Functions / API Key to retrieve the user and password, but I still receive this error:
Dialog node error
Mandatory action property "credentials" missing or invalid for server-side CloudFunctions action call. The value must be a string that references a variable such as "$my_creds" that expands to an object like {"user":"..", "password":".."}. Dialog node: [GetProducts]
Any ideas why?
// IBM WATSON Dialog:
// Dialog Node Name: GetProducts
// JSON Editor:
{
  "context": {
    "private": {
      "my_creds": {
        "user": "*********",
        "password": "*********"
      }
    }
  },
  "output": {
    "text": {
      "values": [
        "Product : <?entities.products[0].literal?>"
      ],
      "selection_policy": "sequential"
    }
  },
  "actions": [
    {
      "name": "/*****@gmail.com_dev/getProducts2",
      "type": "server",
      "parameters": {
        "url": "<?entities.products[0].literal?>"
      },
      "credentials": "$private.my_creds",
      "result_variable": "context.result"
    }
  ]
}
The credentials need to be present when that dialog node is processed, but the context section only defines what will be present at the end of that processing. Thus, the credentials are not known to the action.
My advice is to NOT store the credentials in the workspace. This is a security issue and bad practice, even for testing. Follow the example in the Watson Assistant documentation; it has instructions on how to add the credentials to the "Try it out" panel. For production, pass in the credentials from the app or middleware, for example like this:
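A sketch against the Watson Assistant v1 workspace API; the API key, workspace ID, and credential values are placeholders, and the version date is one of the published API versions:

curl -X POST -u "apikey:<your_api_key>" \
  --header "Content-Type: application/json" \
  --data '{"input": {"text": "get products"}, "context": {"private": {"my_creds": {"user": "<user>", "password": "<password>"}}}}' \
  "https://gateway.watsonplatform.net/assistant/api/v1/workspaces/<workspace_id>/message?version=2019-02-28"

Because the credentials arrive in the context of the incoming message, they are already present when the dialog node and its action are processed.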
Packagist does not allow package names to have capital letters. To work around this, it recommends using hyphens (-), so my package name went from TableCreator to table-creator. Unfortunately, this seems to have prevented my library from autoloading, with the following error message:
Class 'Company\TableCreator\DatabaseField' not found
This error message disappears as soon as I manually include the specific file rather than relying on the vendor/autoload.php file.
My package's composer.json file is as follows:
{
  "name": "company/table-creator",
  "type": "library",
  "description": "Package creating or editing MySQL tables.",
  "keywords": ["mysql", "mysqli", "models"],
  "license": "MIT",
  "authors": [
    {
      "name": "xxx xxx",
      "email": "xxx@xxx.org",
      "role": "Developer"
    }
  ],
  "require": {
    "php": ">=5.3.0"
  },
  "autoload": {
    "psr-4": {
      "company\\table-creator\\": ""
    }
  }
}
The namespace declared in the files is still namespace Company\TableCreator;.
What do I need to tweak in the Composer config for the classes to autoload now that the package name has a hyphen?
You need to revert the change to the PSR-4 namespace prefix. Packagist's lowercase-and-hyphens rule applies only to the package "name"; the PSR-4 key is unrelated to the package name and must exactly match the case-sensitive PHP namespace declared in your files (Company\TableCreator):
{
  "autoload": {
    "psr-4": {
      "Company\\TableCreator\\": ""
    }
  }
}
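After editing composer.json, regenerate the autoloader so the change takes effect (standard Composer workflow):

composer dump-autoload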