Attach multiple libraries to a cluster when terraforming Databricks - terraform

I'm currently trying to attach more than one Maven artifact to the Terraform configuration of a cluster.
The documentation doesn't say it can't work; it only specifies that each type of library must correspond to one configuration block.
How can I add more than one artifact to my Terraform configuration?

I finally did it by duplicating my configuration blocks:
library {
  maven {
    coordinates = "..."
  }
}
library {
  maven {
    coordinates = "..."
  }
}

You could use a dynamic library block, which will repeat it for you. Here is an example for Python packages, where listOfPythonPackages is a list variable:
dynamic "library" {
for_each = toset(var.listOfPythonPackages)
content {
pypi {
package = library.value
}
}
}
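The same pattern should work for Maven artifacts; here is a minimal sketch, assuming a hypothetical listOfMavenCoordinates variable and that the block sits inside a databricks_cluster resource:
variable "listOfMavenCoordinates" {
  type    = list(string)
  default = []  # e.g. ["com.example:lib-a:1.0", "com.example:lib-b:2.1"]
}

dynamic "library" {
  for_each = toset(var.listOfMavenCoordinates)
  content {
    maven {
      coordinates = library.value
    }
  }
}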

Related

How to allow an ordered list in a custom terraform provider resource?

I have a custom terraform provider with a resource that takes a list as one of its inputs.
Here is the list in question: https://github.com/volterraedge/terraform-provider-volterra/blob/main/volterra/resource_auto_volterra_http_loadbalancer.go#L3501
When I declare the list, it needs to be set as multiple blocks like the following:
active_service_policies {
  policies {
    name      = "foobar"
    namespace = "shared"
  }
  policies {
    name      = "batz"
    namespace = "shared"
  }
}
Instead, I want to be able to declare it like the following:
active_service_policies {
  policies = [
    {
      name      = "foobar"
      namespace = "shared"
    },
    {
      name      = "batz"
      namespace = "shared"
    }
  ]
}
This causes the following error:
Error: Unsupported argument
on main.tf line 79, in resource "volterra_http_loadbalancer" "sp":
79: policies = [
An argument named "policies" is not expected here. Did you mean to define a block
of type "policies"?
Why can't I use an ordered list, and how can I allow its use?
Is this issue because policies is a Type: schema.TypeList? Should it be a TypeSet or some other object instead?
The Terraform SDK you are using was originally designed for Terraform v0.11 and earlier, so it doesn't support configuration constructs that those older Terraform versions didn't support; Terraform v0.11 and earlier didn't support lists of objects in the way you are intending here.
To use the full capabilities of the modern Terraform language you can instead build your provider with the newer Plugin Framework, which is designed around the modern Terraform language type system, although it is also currently less mature than the older SDK due to their difference in age.
In the new framework you can declare a tfsdk.Attribute which has an Attributes field set to a tfsdk.ListNestedAttributes result:
tfsdk.Attribute{
    Attributes: tfsdk.ListNestedAttributes(
        map[string]tfsdk.Attribute{
            "name": tfsdk.Attribute{
                // ...
            },
            "namespace": tfsdk.Attribute{
                // ...
            },
        },
        tfsdk.ListNestedAttributesOptions{},
    ),
    // ...
}
The above (partial) example declares an attribute that expects a list of objects, where each object itself has name and namespace attributes.
The closest you can get to this in the older SDK is the sequence of blocks you showed in your example. In older providers built with that SDK, the common pattern here would be to give the block the singular name policy, rather than the plural name policies, so that it's clearer in the configuration that each block is declaring just one policy in the sequence.
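For reference, a rough sketch of that older-SDK pattern with the singular block name (field names here are illustrative, not the actual volterra provider schema):
"active_service_policies": {
    Type:     schema.TypeList,
    Optional: true,
    MaxItems: 1,
    Elem: &schema.Resource{
        Schema: map[string]*schema.Schema{
            // singular "policy": each repeated block declares one policy in the sequence
            "policy": {
                Type:     schema.TypeList,
                Optional: true,
                Elem: &schema.Resource{
                    Schema: map[string]*schema.Schema{
                        "name":      {Type: schema.TypeString, Required: true},
                        "namespace": {Type: schema.TypeString, Required: true},
                    },
                },
            },
        },
    },
},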

Android Studio - Building library aar file doesn't append flavor or buildType to output

I am running assemble for my library module. I see from the logs that it should generate two files, myLib-release.aar and myLib-debug.aar, inside the myLib/build/outputs/ folder.
However, I only ever find one lib there, myLib.aar, no matter whether I run assemble for both, assembleDebug, or assembleRelease.
Why is this happening?
According to this discussion it is a bug (or planned feature) in Gradle, and to date it is still the same:
https://github.com/gradle/gradle/issues/8328
A workaround can be to implement this:
// in library/library.gradle
afterEvaluate {
    // both variants write the same un-suffixed file, so rename it after each assemble
    def aarFile = file("$buildDir/outputs/aar/library.aar")
    tasks.named("assembleDebug").configure {
        doLast {
            aarFile.renameTo("$buildDir/outputs/aar/library-debug.aar")
        }
    }
    tasks.named("assembleRelease").configure {
        doLast {
            aarFile.renameTo("$buildDir/outputs/aar/library-release.aar")
        }
    }
}
You may then implement copy tasks as desired.
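For instance, a sketch of such a copy task, assuming the module is named library and you want the renamed AARs collected into a top-level dist/ folder:
// hypothetical aggregation task: runs both assembles, then gathers the renamed AARs
tasks.register("collectAars", Copy) {
    dependsOn "assembleDebug", "assembleRelease"
    from("$buildDir/outputs/aar") {
        include "library-*.aar"
    }
    into "$rootDir/dist"
}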

Add parameter "Build Selector for Copy Artifact" using Jenkins DSL

I'm converting a Jenkins job from a manual configuration to DSL, which means I'm attempting to create a DSL script which recreates the job(s) as they exist today.
The job is currently parameterized and one of the parameters is of the type "Build Selector for Copy Artifact". I can see in the job XML that it is the copyartifact plugin and specifically I need to use the BuildSelectorParameter.
However the Jenkins DSL API has no guidance on using this plugin to set a parameter - it only has help for using it to create a build step, which is not what I need.
I also can't find anything to do with this under the parameter options in the API.
I want to include something in the DSL seed script which will create a parameter in the generated job matching the one I configured manually.
If I need to use the configure block then any tips on that would be welcome too, because for a beginner, the documentation on this is pretty useless.
I have found no way to set up the build selector parameter other than using the configure block. This is what I used to set it up:
freeStyleJob {
    ...
    configure { project ->
        def paramDefs = project / 'properties' / 'hudson.model.ParametersDefinitionProperty' / 'parameterDefinitions'
        paramDefs << 'hudson.plugins.copyartifact.BuildSelectorParameter'(plugin: "copyartifact@1.38.1") {
            name('BUILD_SELECTOR')
            description('The build number to deploy')
            defaultSelector(class: 'hudson.plugins.copyartifact.SpecificBuildSelector') {
                buildNumber()
            }
        }
    }
}
In order to get there, I manually created a job with the build selector parameter and then looked at the job's XML configuration under Jenkins to find the relevant part, in my case:
<project>
  ...
  <properties>
    <hudson.model.ParametersDefinitionProperty>
      <parameterDefinitions>
        ...
        <hudson.plugins.copyartifact.BuildSelectorParameter plugin="copyartifact@1.38.1">
          <name>BUILD_SELECTOR</name>
          <description></description>
          <defaultSelector class="hudson.plugins.copyartifact.SpecificBuildSelector">
            <buildNumber></buildNumber>
          </defaultSelector>
        </hudson.plugins.copyartifact.BuildSelectorParameter>
      </parameterDefinitions>
    </hudson.model.ParametersDefinitionProperty>
  </properties>
  ...
</project>
To replicate that using the configure clause you need to understand the following things:
The first argument to the configure clause is the job node.
The / operator returns the child node with the given name, creating it if it doesn't exist.
The << operator appends the right-hand-side node to the left-hand-side node's children.
When creating a node, you can set its attributes in the constructor, like: myNodeName(attributeName: 'attributeValue')
You can pass a closure to the new node and use it to populate its internal structure. A tiny illustration of those operators follows (node names here are made up):
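freeStyleJob('example') {
    configure { project ->
        // '/' returns the <properties> child of the job node, creating it if missing
        def props = project / 'properties'
        // '<<' appends a new <myNode myAttr="value"> element; the closure fills in children
        props << 'myNode'(myAttr: 'value') {
            childNode('some text')
        }
    }
}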
I have Jenkins version 1.6 (with the Copy Artifact plugin) and you can do it in the DSL like this:
job('my-job') {
    steps {
        copyArtifacts('job-id') {
            includePatterns('artifact-name')
            buildSelector { latestSuccessful(true) }
        }
    }
}
A full example:
job('03-create-hive-table') {
    steps {
        copyArtifacts('seed-job-stash') {
            includePatterns('jenkins-jobs/scripts/landing/hive/landing-table.sql')
            buildSelector { latestSuccessful(true) }
        }
        copyArtifacts('02-prepare-landing-dir') {
            includePatterns('jenkins-jobs/scripts/landing/shell/02-prepare-landing-dir.properties')
            buildSelector { latestSuccessful(true) }
        }
        shell(readFileFromWorkspace('jenkins-jobs/scripts/landing/03-ps-create-hive-table.sh'))
    }
    wrappers {
        environmentVariables {
            env('QUEUE', 'default')
            env('DB_NAME', 'table_name')
            env('VERSION', '20161215')
        }
        credentialsBinding { file('KEYTAB', 'mycred') }
    }
    publishers { archiveArtifacts('03-create-landing-hive-table.properties') }
}

Duplicate declaration of same resource defined in separate classes

I have a class definition which requires the build-essential package:
class erlang($version = '17.3') {
  package { "build-essential":
    ensure => installed
  }
  ...
}
Another class in a different module also requires the build-essential package:
class icu {
  package { "build-essential":
    ensure => installed
  }
  ...
}
However, when I try to perform puppet apply, the error I receive is:
Error: Duplicate declaration: Package[build-essential] is already declared in file /vagrant/modules/erlang/manifests/init.pp:18; cannot redeclare at /vagrant/modules/libicu/manifests/init.pp:17 on node vagrant-ubuntu-trusty-64.home
I was expecting classes to encapsulate the resources they use, but this doesn't seem to be the case. How can I resolve this clash?
This is a common question when dealing with multiple modules.
There are a number of ways of doing this; the best practice is to modularise and let a parameter control the installation of build-essential:
class icu ($manage_buildessential = false) {
  if ($manage_buildessential == true) {
    package { "build-essential":
      ensure => installed
    }
  }
}
Then, where you declare your icu class, state explicitly (with a boolean, not a quoted string) whether it should manage the package:
class {'icu':
  manage_buildessential => false,
}
However, for a quick and dirty fix:
if ! defined(Package['build-essential']) {
  package { 'build-essential': ensure => installed }
}
Or if you have puppetlabs-stdlib module:
ensure_packages('build-essential')
If you control both modules, you should write a third class (module) to manage the shared resource.
class build_essential {
  package { 'build-essential': ensure => installed }
}
Contexts that require the package then simply use:
include build_essential
Do not touch the defined() function with a 12" pole. There can be only pain down this road.
There are multiple ways, as the other answers explain, but here is another reliable approach if you want to use the same resource in multiple places:
declare it once as a virtual resource, then realize it multiple times. For example, create a new virtual resource like this:
in modules/packages/manifests/init.pp
class packages {
  @package { 'build-essential':
    ensure => installed
  }
}
Then, in both of your classes, include the lines below to realize the above virtual resource:
include packages
realize Package['build-essential']

Publish artifacts using base plugin

I want to publish a few artifacts using the base plugin. This is how my build looks:
apply plugin: 'base'
group = 'eu.com'
version = '0.9'
def winCdZip = file('dist/winCd.zip')
configurations {
    wincd
}
repositories {
    ivy {
        url 'http://ivy.repo'
    }
}
artifacts {
    wincd winCdZip
}
buildscript {
    repositories {
        ivy {
            url 'http://ivy.repo'
        }
    }
    dependencies {
        classpath group: 'eu.com', name: 'MyCustomTask', version: '0.9-SNAPSHOT', configuration: 'runtime'
    }
}
buildWincd {
    // call MyCustomTask; is it possible to call it in this way?
    MyCustomTask {
        // pass the few parameters required by this task
    }
    // I know that it's impossible to call zip in this way but I don't want to create another task
    zip {
        // build the zip and save it in 'winCdZip'
    }
}
uploadWincd {
    repositories { add project.repositories.ivy }
}
And these are my problems to solve:
Is it possible to create nested tasks?
Is it possible to call zip with closures, without creating a new task?
Is it possible to call a custom task using closures (the same as in the 2nd point)?
I can create a zip/custom task in this way:
task myZip(type: Zip) {
    // do the job
}
Is it possible to call it in this way?
zip {
    // do the job
}
If it is not possible to call tasks using closures, how can I do it? Is creating new tasks the only way? Maybe I can create nested tasks?
The answer to your questions is 'no'. Gradle is a declarative build system: instead of having one task call another, you declare task dependencies, which Gradle will obey during execution.
For some task types (e.g. Copy), there is an equivalent method (e.g. project.copy), but not for Zip. In most cases, it's better to use a task even if a method exists.
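To illustrate the method form (note it runs immediately when evaluated, rather than as a task):
// copies files right away; the paths here are illustrative
project.copy {
    from 'dist'
    into 'backup'
}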
The first several chapters of the Gradle User Guide explain core Gradle concepts such as task dependencies in detail.
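As a rough sketch of the declarative alternative for the build above, assuming the zip contents live in src/wincd and the same Gradle vintage that provides uploadWincd:
task buildWincd(type: Zip) {
    from 'src/wincd'              // assumed location of the files to package
    archiveName = 'winCd.zip'     // old-style Zip properties, matching that Gradle era
    destinationDir = file('dist')
}

artifacts {
    // wiring the task (instead of a plain file) makes uploadWincd depend on buildWincd
    wincd buildWincd
}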
