Terraform partially interprets a decimal number

I would like to know if any of you have encountered the following issue.
While trying to upgrade my EKS cluster to version 1.20, I used the following variable:
eks_version = 1.20
In the plan output, Terraform converts 1.20 to 1.2.
For some reason, Terraform does not keep the full decimal number, resulting in an error:
Error: error updating EKS Cluster (stage) version: InvalidParameterException: unsupported Kubernetes version
P.S. I tried using the format function as well:
eks_version = format("%.2s", 1.20)
with the same output.
Any ideas on how to make terraform take into account the whole decimal number?

Ervin's comment is correct.
The answer is to stop formatting the value this way.
The format spec %.2s truncates its input to a width of two characters, so it can never preserve "1.20".
If you want a specific version, remove the call to the format function.
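The root cause isn't formatting at all: in HCL, 1.20 is a number, and as a number 1.20 and 1.2 are the same value, so the trailing zero is lost before format ever sees it. A quick Python sketch of the same effect:

```python
# A numeric type cannot preserve a trailing zero: 1.20 and 1.2
# are literally the same value, so the ".20" form is unrecoverable.
version_as_number = 1.20
print(version_as_number == 1.2)  # True
print(str(version_as_number))    # 1.2

# A string keeps the version exactly as written.
version_as_string = "1.20"
print(version_as_string)         # 1.20
```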

Thank you guys for your comments!
Your comments helped me realize that I need to make this variable a string, not a number!
I had to change my variable definition to a string:
variable "eks_version" {
  type    = string
  default = "1.20"
}
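With the variable typed as a string, the value reaches the API untouched. A minimal sketch of how it might be wired up (the resource body here is illustrative; role_arn and subnet_ids are assumed to be defined elsewhere):

```hcl
resource "aws_eks_cluster" "stage" {
  name     = "stage"
  role_arn = aws_iam_role.eks.arn # assumed to exist elsewhere

  # "1.20" stays "1.20" because the variable is a string, not a number
  version = var.eks_version

  vpc_config {
    subnet_ids = var.subnet_ids # assumed to exist elsewhere
  }
}
```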


I am getting Error: Cycle when running a terraform plan

I was getting the following error while running terraform plan:
Error: Cycle: aws_sagemaker_notebook_instance.mlops_datapipeline_notebookinstance_main, aws_sagemaker_notebook_instance.mlops_datapipeline_notebookinstance_demo, data.aws_iam_policy_document.sagemaker_neptune-access, aws_iam_policy.sagemaker_execution_policy, aws_neptune_cluster.neptune_for_demo, aws_neptune_cluster.neptune_for_main, data.aws_iam_policy_document.neptune-access, aws_iam_policy.neptune_access_policy, aws_iam_role.Neptune_execution_role
I assume you are using AWS because your filename contains "ec2", even though your question doesn't show enough code or provide many details.
The AWS Terraform provider expects tags to be a map, not a single string. You have enclosed the entire thing in double quotes, turning it into a string. Try this:
tags = merge(var.tags, map("Name", format("%s-%d", var.name, count.index + 1)))
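As an aside, if you are on Terraform 0.12 or later, the map function is deprecated in favor of map literals; an equivalent sketch, assuming the same variables:

```hcl
tags = merge(
  var.tags,
  {
    Name = format("%s-%d", var.name, count.index + 1)
  }
)
```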

Dynamic formatting of last modified filter in Data factory DataSet

I'm trying to set the last-modified filter on an Azure Data Factory dataset dynamically.
I'm using the following expression:
#formatDateTime(adddays(utcnow(),-2),'yyyy-mm-ddThh:mm:ss.fffZ')
I'm getting the following error:
Activity Copy1 failed: Failure happened on 'Source' side. ErrorCode=UserErrorInvalidValueInPayload,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=Failed to convert the value in 'modifiedDatetimeStart' property to 'System.Nullable`1[[System.DateTime, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089]]' type. Please make sure the payload structure and value are correct.,Source=Microsoft.DataTransfer.DataContracts,''Type=System.Reflection.TargetInvocationException,Message=Exception has been thrown by the target of an invocation.,Source=mscorlib,''Type=System.FormatException,Message=The DateTime represented by the string is not supported in calendar System.Globalization.GregorianCalendar.,Source=mscorlib,'
I'm also not able to preview the data with this filter. I guess something is wrong here. Any ideas?
From the error message I understand that the string representation of the date is not supported by the calendar:
The DateTime represented by the string is not supported in calendar
Why do you need to format the string for the comparison?
Actually the following commands are tested & working after publish & trigger:
#utcnow()
#adddays(utcnow(),-2)
It's the preview functionality in the front end that is not able to deal with the expressions. This will hopefully be solved by Microsoft.
Perhaps, as a workaround, you could use this expression to get rid of the extra characters in your datetime expression:
#substring(formatDateTime(adddays(utcnow(),-2), 'o'), 0, 23)
I tested this with utcnow() and it should return the datetime in the desired format:
"value": "2019-04-12T10:11:51.108Z"
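(Worth noting: in .NET-style date format strings, mm means minutes and hh means 12-hour time; a hand-written pattern would need MM for months and HH for 24-hour hours, which is the likely reason the original expression produced an unsupported date.) The substring trick itself can be illustrated outside Data Factory; a small Python sketch of the same idea, keeping the first 23 characters of a full timestamp (date, time, and milliseconds):

```python
from datetime import datetime, timedelta, timezone

# Two days ago in UTC, like adddays(utcnow(), -2) in Data Factory.
two_days_ago = datetime.now(timezone.utc) - timedelta(days=2)

# strftime with %f gives a fixed-width fractional part, so the first
# 23 characters are exactly yyyy-MM-ddTHH:mm:ss.fff.
stamp = two_days_ago.strftime("%Y-%m-%dT%H:%M:%S.%f")[:23]
print(stamp)  # e.g. 2019-04-10T10:11:51.108
```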
It turns out you can solve the above by prepending a conversion to string, so from
#formatDateTime(adddays(utcnow(),-2),'yyyy-mm-ddThh:mm:ss.fffZ')
change it to
#string(formatDateTime(adddays(utcnow(),-2),'yyyy-mm-ddThh:mm:ss.fffZ'))
It works on my end.
Encountered the same problem in a data flow: currentUTC() did not work for pulling the last modified file in blob storage, but currentTimestamp() did.

Using indexed types for ElasticSearch in Titan

I currently have a VM running Titan over a local Cassandra backend and would like the ability to use ElasticSearch to index strings using CONTAINS matches and regular expressions. Here's what I have so far:
After titan.sh is run, a Groovy script is used to load in the data from separate vertex and edge files. The first stage of this script loads the graph from Titan and sets up the ES properties:
config.setProperty("storage.backend","cassandra")
config.setProperty("storage.hostname","127.0.0.1")
config.setProperty("storage.index.elastic.backend","elasticsearch")
config.setProperty("storage.index.elastic.directory","db/es")
config.setProperty("storage.index.elastic.client-only","false")
config.setProperty("storage.index.elastic.local-mode","true")
The second part of the script sets up the indexed types:
g.makeKey("property").dataType(String.class).indexed("elastic",Edge.class).make();
The third part loads in the data from the CSV files, this has been tested and works fine.
My problem is, I don't seem to be able to use the ElasticSearch functions when I do a Gremlin query. For example:
g.E.has("property",CONTAINS,"test")
returns 0 results, even though I know this field contains the string "test" for that property at least once. Weirder still, when I change CONTAINS to something that isn't recognised by ElasticSearch I get a "no such property" error. I can also perform exact string matches and any numerical comparisons including greater or less than, however I expect the default indexing method is being used over ElasticSearch in these instances.
Due to the lack of errors when I try to run a more advanced ES query, I am at a loss on what is causing the problem here. Is there anything I may have missed?
Thanks,
Adam
I'm not quite sure what's going wrong in your code. From your description everything looks fine. Can you try the following script (just paste it into your Gremlin REPL)?
config = new BaseConfiguration()
config.setProperty("storage.backend","inmemory")
config.setProperty("storage.index.elastic.backend","elasticsearch")
config.setProperty("storage.index.elastic.directory","/tmp/es-so")
config.setProperty("storage.index.elastic.client-only","false")
config.setProperty("storage.index.elastic.local-mode","true")
g = TitanFactory.open(config)
g.makeKey("name").dataType(String.class).make()
g.makeKey("property").dataType(String.class).indexed("elastic",Edge.class).make()
g.makeLabel("knows").make()
g.commit()
alice = g.addVertex(["name":"alice"])
bob = g.addVertex(["name":"bob"])
alice.addEdge("knows", bob, ["property":"foo test bar"])
g.commit()
// test queries
g.E.has("property",CONTAINS,"test")
g.query().has("property",CONTAINS,"test").edges()
The last 2 lines should return something like e[1t-4-1w][4-knows-8]. If that works and you still can't figure out what's wrong in your code, it would be good if you can share your full code (e.g. in Github or in a Gist).
Cheers,
Daniel

"Different numbers of variable names and field specifiers" error in Tcl on a Linux server

Here I am using set numCut [scan $inline1 "%d"] in a Tcl script on a Linux server, but after executing the script it shows the error below:
`different numbers of variable names and field specifiers`
The variable $inline1 has the value `2) "NYMEX UTBAPI Worker" (NYMEX UTBAPI Poller): STOPPED`.
I searched Google for this and found the following description:
`
0x1771b07c tcl_s_cmdmz_diff_num_var_field
Text: Different numbers of variable names and field specifiers
Severity: tcl_c_general_error
Component: tcl / tcl_s_general
Explanation: The scan command detected that the number of variable names
provided differs from the number of field specifiers provided.
Action: Verify that the number of variable names is the same as the number of
field specifiers.
`
Can anyone help me work out how to solve this issue?
Thanks in advance.
The ability to return the matched fields was added in Tcl 8.5. Prior to that, you had to supply a variable for each field that you had in the scan, and the result would be the number of fields matched (and it still is if you provide variable names).
Change:
set numCut [scan $inline1 "%d"]
to:
scan $inline1 "%d" numCut
Or switch to a more recent version of Tcl if you can, as 8.4 is almost out of its extended support period. (There will be a final patch release this summer to address some minor issues with build problems on recent systems, but that's it. We won't support it after that.)
I think that the Tcl error message is telling you that the number of specifiers in your format string %d is different to the number of variables in your Tcl command scan $inline1 "%d".
So, you have one format specifier, and no variables and that's what the Tcl interpreter is telling you.
Try changing your command to scan $inline1 "%d" numCut and see if that works any better.

Cassandra display of column value

I have upgraded Cassandra from v0.6 to v0.7.2 following the instructions in NEWS.txt. It seemed to be successful, except that the column value has changed.
For example, in 0.6, there was a column that looked like this:
(column=Price, value='2.5')
Now, in 0.7.2, the same column has changed to this:
(column=Price, value=32392e3939)
How can I fix this problem?
The CLI no longer makes assumptions about the type of data you're viewing, so all outputs are in hex unless the data type is known or you tell the CLI to assume a data type.
See this section of the documentation on human readable data in the CLI for more details.
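The hex is just the raw bytes of the stored value, so it can be decoded back to text by hand. A quick Python sketch, decoding the example value from the question:

```python
# The 0.7 CLI prints column values of unknown type as hex-encoded bytes;
# decoding them as ASCII recovers the stored string.
raw = bytes.fromhex("32392e3939")
print(raw.decode("ascii"))  # 29.99
```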
