I'm using jOOQ and gradle-jooq-plugin for code generation. It works fine, but I'm having a problem getting the generated code to update when a table is added or a column is dropped. I was able to force an update by changing the "packageName" config parameter, which generated a new package; going back to the original name then updated the code as expected.
What would be the correct way to re-generate code after schema change with my setup?
jooq {
    version = '3.13.1'
    edition = 'OSS'
    generateSchemaSourceOnCompilation = true
    sample(sourceSets.main) {
        jdbc {
            driver = 'org.postgresql.Driver'
            url = 'jdbc:postgresql://0.0.0.0:5432/victor'
            user = 'postgres'
            password = 'docker'
            properties {
                property {
                    key = 'ssl'
                    value = 'false'
                }
            }
        }
        generator {
            name = 'org.jooq.codegen.DefaultGenerator'
            strategy {
                name = 'org.jooq.codegen.DefaultGeneratorStrategy'
            }
            database {
                name = 'org.jooq.meta.postgres.PostgresDatabase'
                inputSchema = 'public'
                forcedTypes {
                    forcedType {
                        name = 'varchar'
                        expression = '.*'
                        types = 'INET'
                    }
                }
            }
            generate {
                relations = true
                deprecated = false
                records = true
                immutablePojos = true
                fluentSetters = true
            }
            target {
                packageName = 'net.bravo.victor.model'
                directory = 'src/'
            }
        }
    }
}
I'm using https://github.com/etiennestuder/gradle-jooq-plugin
plugins {
    id 'nu.studer.jooq' version '4.1'
}
I am not sure whether it is the correct way, but this works for me:
generateNavigoJooqSchemaSource {
    dependsOn cleanGenerateNavigoJooqSchemaSource
}
task buildJooq(dependsOn: generateNavigoJooqSchemaSource)
So I have created a task with a name I can remember (buildJooq); it depends on the generate task (generateNavigoJooqSchemaSource), which in turn depends on the clean task (cleanGenerateNavigoJooqSchemaSource).
Previously I used this code, which also works:
tasks.named("generateNavigoJooqSchemaSource").configure {
    outputs.upToDateWhen { false }
}
It also forces the task to run every time.
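Applied to the setup in the question, where the config block is named sample, the same wiring would look roughly like this (a sketch; the task names generateSampleJooqSchemaSource and cleanGenerateSampleJooqSchemaSource are assumed to be what the plugin derives from the sample config name, and may differ between plugin versions):
// Force regeneration on every build by depending on the clean task.
// Task names assume a jOOQ config named 'sample', as in the question.
generateSampleJooqSchemaSource {
    dependsOn cleanGenerateSampleJooqSchemaSource
}

// Alternative: mark the generate task as never up to date.
tasks.named("generateSampleJooqSchemaSource").configure {
    outputs.upToDateWhen { false }
}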
Please advise how I can get the value for the key paswd-0 on its own; that is, I need separate values for password and username.
This is remote data from data.terraform_remote_state.user_passwd.outputs.login_passwd:
output = {
  paswd-0 = jsonencode(
    {
      password = "uGo="
      username = "git"
    }
  )
  paswd-1 = jsonencode(
    {
      password = "wM="
      username = "kun"
    }
  )
}
I'm trying this and get an error saying that lookup() requires a map:
output "tetts" {
value = lookup(tomap(data.terraform_remote_state.user_passwd.outputs.login_passwd.paswd-0), "password", null)
}
Ideally I would go through each value and fill in these fields:
argocd_repositories = {
  [
    "private-repo" = {
      url      = "https://repo.git"
      username = "argocd"
      password = "access_token"
    },
    "git-repo" = {
      url      = "https://repo.git"
      password = "argocd_access_token"
      username = "admin"
    },
    "private-helm-chart" = {
      url      = "https://charts.jetstack.io"
      type     = "helm"
      username = "foo"
      password = "bar"
    },
  ]
}
As per my comment, you can get the value from the data source by using the jsondecode built-in function [1]. You would have to update the output to look like the following:
output "tetts" {
value = lookup(tomap(jsondecode(data.terraform_remote_state.user_passwd.outputs.login_passwd["paswd-0"]), "password", null)
}
This only makes it work the way you intended; however, it will output only the value for the password. Since I do not have the remote state, I managed to get close to what you want with locals and the following:
locals {
  output = {
    paswd-0 = jsonencode(
      {
        password = "uGo="
        username = "git"
      }
    )
    paswd-1 = jsonencode(
      {
        password = "wM="
        username = "kun"
      }
    )
  }
  sorted_values = { for k, v in local.output : jsondecode(v).username => jsondecode(v).password }
}
Note that jsondecode is used on the values of the original map. Furthermore, since the decoded JSON values are themselves in key-value format, you can access the keys and corresponding values using the usual Terraform notation (i.e., jsondecode(v).username and jsondecode(v).password). In terraform console, the local sorted_values variable looks like this:
> local.sorted_values
{
  "git" = "uGo="
  "kun" = "wM="
}
I guess this is close to what you wanted to achieve with the tomap function.
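If the end goal is to fill a structure like argocd_repositories from the question, a for expression over the decoded map is one way to sketch it (the repo_urls map and its values below are hypothetical placeholders, not values from your remote state):
locals {
  # Decode each JSON string once, keeping the whole object per key.
  decoded = {
    for k, v in data.terraform_remote_state.user_passwd.outputs.login_passwd :
    k => jsondecode(v)
  }

  # Hypothetical mapping from credential keys to repository URLs.
  repo_urls = {
    paswd-0 = "https://repo.git"
    paswd-1 = "https://charts.jetstack.io"
  }

  argocd_repositories = {
    for k, v in local.decoded : k => {
      url      = local.repo_urls[k]
      username = v.username
      password = v.password
    }
  }
}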
[1] https://www.terraform.io/language/functions/jsondecode
I'm trying to write an ESLint rule that enforces that the name property is defined on any class that extends from an Error/Exception-named class (and fixes it when it is missing).
As far as I can tell, it works in astexplorer.net on its own, but when I run it alongside other rules, it ends up being run multiple times, so the name property ends up repeated multiple times in the resulting "fixed" file.
Is there anything in particular I can do to prevent it being run multiple times? I'm assuming what's happening is that it inserts my name = 'ClassName';, then Prettier needs to reformat the code, which it does, but then maybe my rule is re-run? I'm not sure.
The rule/fix code is shown below. I've tried things like using *fix and yield, but that doesn't seem to help either (see the commented code below, based on the ESLint documentation).
module.exports = {
  meta: {
    hasSuggestions: true,
    type: 'suggestion',
    docs: {},
    fixable: 'code',
    schema: [], // no options
  },
  create: function (context) {
    return {
      ClassDeclaration: function (node) {
        const regex = /.*(Error|Exception)$/;
        // If the parent/superClass name ends with "Error" or "Exception"
        if (node.superClass && regex.test(node.superClass.name)) {
          let name = null;
          const className = node.id.name;
          // Test the class's own name
          if (!regex.test(className)) {
            context.report({
              node: node,
              message: 'Error extensions must end with "Error" or "Exception".',
            });
          }
          // Find the `name` ClassProperty
          node.body.body.some(function (a) {
            if (a.type === 'ClassProperty' && a.key.name === 'name') {
              name = a.value.value;
              return true;
            }
          });
          // Name property is required
          if (!name) {
            context.report({
              node: node,
              message: 'Error extensions should have a descriptive name',
              fix(fixer) {
                return fixer.replaceTextRange(
                  [node.body.range[0] + 1, node.body.range[0] + 1],
                  `name = '${className}';`
                );
              },
              // *fix(fixer) {
              //   name = className;
              //   yield fixer.replaceTextRange(
              //     [node.body.range[0] + 1, node.body.range[0] + 1],
              //     `name = '${className}';`
              //   );
              //
              //   // extend range of the fix to the range of `node.parent`
              //   yield fixer.insertTextBefore(node.body, '');
              //   yield fixer.insertTextAfter(node.body, '');
              // },
            });
          }
        }
      },
    };
  },
};
Turns out I had AST Explorer set to the wrong parser, so it was showing me the wrong node type name: I should have been matching PropertyDefinition instead of ClassProperty.
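For reference, the lookup from the rule above would then become something like this (a sketch; PropertyDefinition is what current ESLint parsers emit for class fields, with ClassProperty kept as a fallback for older parsers):
// Find the `name` class field. Modern parsers emit PropertyDefinition
// where older ones emitted ClassProperty.
node.body.body.some(function (a) {
  if (
    (a.type === 'PropertyDefinition' || a.type === 'ClassProperty') &&
    a.key.name === 'name'
  ) {
    name = a.value.value;
    return true;
  }
  return false;
});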
I have some Kinesis Firehose Delivery Stream resources created via Terraform. Due to a known bug (https://github.com/hashicorp/terraform-provider-aws/issues/9827), when the lambda transform parameters are left at their defaults, Terraform avoids writing them to the state file, and every plan/apply tries to create them again. Because of this issue, I'm trying to add a lifecycle ignore_changes block for them.
This is one of my resources:
resource "aws_kinesis_firehose_delivery_stream" "some_stream" {
name = "some_name"
destination = ""
s3_configuration {
role_arn = "some_name"
bucket_arn = "arn:aws:s3:::somebucket"
prefix = "some/prefix/"
buffer_size = 64
buffer_interval = 60
compression_format = "GZIP"
cloudwatch_logging_options {
enabled = true
log_group_name = aws_cloudwatch_log_group.some_log_group.name
log_stream_name = aws_cloudwatch_log_stream.some_log_stream.name
}
}
elasticsearch_configuration {
domain_arn = "arn:aws:es:some-es-domain"
role_arn = "arn:aws:iam::some-role"
index_name = "some-index"
index_rotation_period = "OneDay"
buffering_interval = 60
buffering_size = 64
retry_duration = 300
s3_backup_mode = "AllDocuments"
cloudwatch_logging_options {
enabled = true
log_group_name = aws_cloudwatch_log_group.some_log_group.name
log_stream_name = aws_cloudwatch_log_stream.some_log_stream.name
}
processing_configuration {
enabled = "true"
processors {
type = "Lambda"
parameters {
parameter_name = "LambdaArn"
parameter_value = "arn:aws:lambda:some-lambda"
}
parameters {
parameter_name = "BufferSizeInMBs"
parameter_value = "3"
}
parameters {
parameter_name = "BufferIntervalInSeconds"
parameter_value = "60"
}
}
}
}
}
In the resource above, BufferSizeInMBs and BufferIntervalInSeconds are constantly changing. I'm trying to ignore these two without touching the LambdaArn, but since all of them use the same structure below, I couldn't quite figure out how to do that; I don't even know whether it's possible.
parameters {
  parameter_name  = ""
  parameter_value = ""
}
I tried this:
lifecycle {
  ignore_changes = [elasticsearch_configuration.0.processing_configuration.0.processors]
}
But this doesn't exclude parameter_name = "LambdaArn" from being ignored.
To go further, I tried something like:
lifecycle {
  ignore_changes = [
    elasticsearch_configuration.0.processing_configuration.0.processors[1],
    elasticsearch_configuration.0.processing_configuration.0.processors[2],
  ]
}
But it didn't work: it didn't give an error, but it didn't ignore the changes either. My Terraform version is 1.1.6 and the provider version is ~> 3.0 (3.75.1 to be exact).
Any help will be highly appreciated,
Thank you very much,
Best Regards.
My build.gradle (using the nu.studer.jooq plugin):
jooq {
    MyProject(sourceSets.main) {
        generator {
            database {
                name = 'org.jooq.meta.extensions.ddl.DDLDatabase'
                properties {
                    property {
                        key = 'scripts'
                        value = 'src/main/resources/database.sql'
                    }
                }
                inputSchema = ''
                outputSchema = 'something'
                // schemata {
                //     schema {
                //         inputSchema = "" // I've tried this too
                //         outputSchema = 'something'
                //     }
                // }
                forcedTypes {
                    forcedType {
                        name = 'varchar'
                        expression = '.*'
                        types = 'JSONB?'
                    }
                    forcedType {
                        name = 'varchar'
                        expression = '.*'
                        types = 'INET'
                    }
                }
            }
            generate {
                relations = true
                springAnnotations = true
                deprecated = false
                fluentSetters = true
                // ...
            }
            target {
                packageName = 'com.springforum'
            }
        }
    }
}
In the build process it generates the schema just fine, but it keeps using the PUBLIC schema for the output even though I've set outputSchema (I've tried both an empty string and a non-empty string).
Update: the problem only happens if inputSchema is empty; I tried another SQL script that declares a schema and it works as intended.
This is a known issue that originates from the fact that behind the scenes DDLDatabase uses an H2 in-memory database to emulate running your SQL script, and then reverse engineers that H2 database. By default, in H2 (and a few other databases), everything goes in the PUBLIC schema. The issue is here: #7650
jOOQ 3.11 workaround
Currently (as of jOOQ 3.11), I suggest you either specify the schema in your DDL script explicitly, or use inputSchema = "PUBLIC", knowing the above.
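In the generator configuration from the question, the second workaround would look roughly like this (a sketch; it simply accepts that the unqualified DDL objects land in H2's PUBLIC schema and maps them to the desired output schema):
database {
    name = 'org.jooq.meta.extensions.ddl.DDLDatabase'
    // Unqualified DDL objects end up in H2's PUBLIC schema,
    // so read them from there and map the output as before.
    inputSchema = 'PUBLIC'
    outputSchema = 'something'
}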
jOOQ 3.12 solution
In jOOQ 3.12, this was fixed through #7759. It will be possible to specify the behaviour of unqualified schema objects:
<!-- The default schema for unqualified objects:
     - public: all unqualified objects are located in the PUBLIC (upper case) schema
     - none: all unqualified objects are located in the default schema (default)
     This configuration can be overridden with the schema mapping feature -->
<property>
  <key>unqualifiedSchema</key>
  <value>none</value>
</property>
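In the nu.studer.jooq Gradle DSL, that generator property would presumably be set the same way as the scripts property in the question (a sketch, assuming jOOQ 3.12+):
database {
    name = 'org.jooq.meta.extensions.ddl.DDLDatabase'
    properties {
        property {
            key = 'scripts'
            value = 'src/main/resources/database.sql'
        }
        // Put unqualified objects in the default schema rather than
        // H2's PUBLIC (available from jOOQ 3.12 onwards).
        property {
            key = 'unqualifiedSchema'
            value = 'none'
        }
    }
}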
I currently have this in my nixpkgs.config:
packageOverrides = pkgs: rec {
  netbeans81 = pkgs.stdenv.lib.overrideDerivation pkgs.netbeans (oldAttrs: {
    name = "netbeans-8.1";
    src = pkgs.fetchurl {
      url = http://download.netbeans.org/netbeans/8.1/final/zip/netbeans-8.1-201510222201.zip;
      md5 = "361ce18421761a057bad5cb6cf7b58f4";
    };
  });
};
and I want to add a kernel config. I added this:
packageOverrides = pkgs: {
  stdenv = pkgs.stdenv // {
    platform = pkgs.stdenv.platform // {
      kernelExtraConfig = "SND_HDA_PREALLOC_SIZE 4096";
    };
  };
};
but that did not work. The problem is that packageOverrides is already defined.
How can I add the kernel configs and my netbeans overrides?
In the Nix language, braces ({}) indicate attribute sets (not scopes as in C++ etc.). A single attribute set can hold multiple items (attribute sets are like dicts in Python). Also, Nix is a functional language, which means there is no mutable state. This, in turn, means that you can't redefine a variable in the same scope. In the words of Eminem, "You only get one shot".
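As a quick illustration of the update operator (//) used in the snippet below (a sketch; the values are arbitrary):
let
  base = { a = 1; b = 2; };
in
  # `//` returns a new set in which right-hand attributes win;
  # nothing is mutated.
  base // { b = 3; }   # evaluates to { a = 1; b = 3; }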
Try this:
packageOverrides = pkgs: rec {
  netbeans81 = pkgs.stdenv.lib.overrideDerivation pkgs.netbeans (oldAttrs: {
    name = "netbeans-8.1";
    src = pkgs.fetchurl {
      url = http://download.netbeans.org/netbeans/8.1/final/zip/netbeans-8.1-201510222201.zip;
      md5 = "361ce18421761a057bad5cb6cf7b58f4";
    };
  });
  stdenv = pkgs.stdenv // {
    platform = pkgs.stdenv.platform // {
      kernelExtraConfig = "SND_HDA_PREALLOC_SIZE 4096";
    };
  };
};