As the title says, I'm asking for an optional or dynamic value-presence check in a script. My goal is just to combine two metrics of the same type, so joining them would also be a solution.
I have two different routes for the same metric, and only one of them returns an answer at a time. I'd like to group them into one dashboard query like:
fetch container
| { metric custom.googleapis.com/http/.../count
  ; metric custom.googleapis.com/http/joe/.../count }
| join
I tried different combinations; outer_join 0 seemed closest, but having no traffic on one of the routes causes:
> Input table 1 does not have time series identifier column
> 'metric.requestType' that is present in table 0.
NOTE: One endpoint is NOT connected at all for a period of time.
The configuration is similar for each metric in metrics.yaml:
---
apiVersion: monitoring.cnrm.cloud.google.com/v1beta1
kind: MonitoringMetricDescriptor
metadata:
  labels:
    app: << app_name >>
  name: custom/http/client/custom/requests/count
  namespace: << project_name >>
spec:
  type: custom.googleapis.com/http/client/custom/requests/count
  metricKind: GAUGE
  valueType: INT64
  labels:
  - key: pod_name
You might be able to try outer_join 0,0. I've not tested this, but the suggestion is taken from https://stackoverflow.com/a/70595836, which states that it will substitute zeros if either stream's value is missing. There are a couple of variations on this depending on what you want to do.
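For example, a minimal sketch of that suggestion against the query from the question, assuming the goal is the sum of the two counts (untested, per the caveat above):

fetch container
| { metric custom.googleapis.com/http/.../count
  ; metric custom.googleapis.com/http/joe/.../count }
| outer_join 0, 0
| add

The outer_join 0, 0 substitutes a zero value for whichever side has no data, so the combination no longer fails when one route sees no traffic.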
I am currently defining a parameter within my Azure pipeline as follows:
parameters:
- name: nodeSize
  displayName: nodeSize
  type: string
  default: size1
  values:
  - size1
  - size2
The result of this is that, when attempting to run the pipeline, the user is presented with a drop-down menu that allows them to choose one of the defined values, as shown below:
My goal is to create my parameters in a way that the user can select from the dropdown box, OR enter their own value. So the resulting dropdown menu would look like:
Size 1
Size 2
<User's optional input>
I'm afraid that this is not possible. You can create two parameters:
parameters:
- name: nodeSize
  displayName: nodeSize
  type: string
  default: size1
  values:
  - size1
  - size2
- name: customSize
  displayName: customNodeSize
  type: string
  default: ' '
and then check whether customSize was provided. I understand that this is far from perfect, but we are limited to the functionality we have now.
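For example, a minimal sketch of that check using template expressions (the variable name effectiveSize is an assumption; the ' ' sentinel comes from the default above):

variables:
  ${{ if ne(parameters.customSize, ' ') }}:
    effectiveSize: ${{ parameters.customSize }}
  ${{ if eq(parameters.customSize, ' ') }}:
    effectiveSize: ${{ parameters.nodeSize }}

Steps can then reference $(effectiveSize): they get the typed-in value when one was provided and the dropdown selection otherwise.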
So I'm trying to run a query on Google Cloud's Datastore like so:
let query = datastore.createQuery(dataType)
  .select(["id", "version", "lm", "name"])
  .filter("owner", "=", request.params.owner)
  .filter("lm", ">=", request.query.date_start)
  .groupBy(["lm"])
  .order("lm")
Yet I run into an error stating that
no matching index found. recommended index is:
- kind: Entry
  properties:
  - name: lm
  - name: id
  - name: name
  - name: version
When I run the query with id instead of lm in all of those calls, I run into no errors.
Is this because my index.yaml file defines the index like this?
- kind: Entry
  properties:
  - name: id
  - name: version
  - name: lm
  - name: name
Must I actually create a new index with the recommended ordering? Or is there a way I can do this without having to make another index? Thank you!
Yes, you need to create a new index. The existing one you showed is not equivalent, so it can't be used for your query: the property order matters (at least for the inequality filters and sorting). See the related question Google Datastore Composite index issue.
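For reference, a sketch of the resulting index.yaml with the recommended index added alongside the existing one (the new entry is copied from the error message):

- kind: Entry
  properties:
  - name: id
  - name: version
  - name: lm
  - name: name
- kind: Entry
  properties:
  - name: lm
  - name: id
  - name: name
  - name: version

After updating the file, redeploy the indexes (for example with gcloud datastore indexes create index.yaml) and wait for the new index to finish building before re-running the query.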
Please excuse me if this is a dumb question. I'm a Terraform noob trying to determine the best approach to meet an enterprise requirement for resource naming.
Our cloud governance team has determined a naming scheme for all resources where you have [region][resource_type][app_name][instance 001-999][env]. So, for instance, we might have something like the following for VMs:
uw1vmmyapp001dev
uw1vmmyapp002dev
etc.
This is all well and good when deploying from scratch, as I just use ${count.index}. However, now I am trying to determine how to deploy additional resources while starting from the previously deployed resources (which weren't deployed by Terraform). Is there a Terraform standard for gathering the existing inventory, parsing the current values, and starting your incrementing from the highest instance number? (I was using randoms, but our cloud governance team squashed that quickly.)
I'm really doing a poor job with my wording. Hopefully this makes some sort of sense?
Oh, I'm using azurerm_virtual_machine
It's going to be pretty difficult when there aren't any delimiting characters... it's just a shoved-together string. If there were a delimiting character you could maybe use split to break up the string and find the number portion (see the sketch below). There also doesn't appear to be a data source equivalent of azurerm_virtual_machine to get the naming information anyway.
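Purely for illustration, if the names did contain a delimiter (the example name below is hypothetical; the governance scheme doesn't actually allow one), the number portion could be extracted like this:

# Hypothetical name with "-" delimiters, not the real scheme
${element(split("-", "uw1-vm-myapp-001-dev"), 3)}  # => "001"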
Given that you'd need to manually look up the name or id anyway to import information about current resources, you could find the highest numbered VM and then use something like the following to add additional VMs and keep incrementing the number (with var.last_num holding the highest existing instance number):
${var.region}${var.resource_type}${var.appname}${format("%03d", count.index + 1 + var.last_num)}${var.env}
To test what this looks like, you can look at this example:
variable "last_num" {
default = 98
}
variable "region" {
default = "uw"
}
variable "resource_type" {
default = "vm"
}
variable "appname" {
default = "myapp"
}
variable "env" {
default = "dev"
}
resource "local_file" "foo" {
count = 3
filename = "foo.text"
content = "${var.region}${var.resource_type}${var.appname}${format("%03d", count.index + 1 + var.last_num)}${var.env}"
}
This gives naming output like this:
+ local_file.foo[0]
    id:       <computed>
    content:  "uwvmmyapp099dev"
    filename: "foo.text"

+ local_file.foo[1]
    id:       <computed>
    content:  "uwvmmyapp100dev"
    filename: "foo.text"

+ local_file.foo[2]
    id:       <computed>
    content:  "uwvmmyapp101dev"
    filename: "foo.text"
I want to create a config map that uses a JSON object for its value.
The JSON object looks something like this (variable name = manifestJSON):
{
  "appRepoVersionsMap": {
    "repoA": "1.0.0.131",
    "repoB": "1.0.0.7393"
  },
  "deployerVersion": "49",
  "openshiftConfigCommitId": "o76to87y"
}
Then I want to create a configmap that takes this JSON object and adds it as a value of the configmap.
The command I am trying to make work is:
def osCmd = "create configmap manifest-config" +
    " --from-literal=manifest.os_config_branch=${envVars.OS_CONFIG_BRANCH}" +
    " --from-literal=manifest.os_server=${envVars.OPENSHIFT_SERVER_URL}" +
    " --from-literal=manifest.os_manifest=${manifestJSON}"
os.call(osCmd)
OpenShift client gives the following error:
10:23:37 error: cannot add key manifest.os_manifest, another key by that name already exists: map[manifest.os_config_branch:deployment-orchestrator manifest.os_server:<snipped>, manifest.os_manifest:appRepoVersionsMap:repoA:1.0.0.131 ].
So either Groovy or OpenShift sees the JSON object within the JSON object and can't handle it.
I am trying to avoid using --from-file because I would have to write to disk and then run the command, and I am afraid this would cause issues in a Jenkins environment with multiple deploys to multiple projects taking place.
The solution ended up being rather simple; I was overthinking the character escapes while trying various solutions.
To use a JSON object as a value when creating a configmap (or a secret), add single quotes ('') around the JSON object itself:
def osCmd = "create configmap manifest-config" +
    " --from-literal=manifest.os_config_branch=${envVars.OS_CONFIG_BRANCH}" +
    " --from-literal=manifest.os_server=${envVars.OPENSHIFT_SERVER_URL}" +
    " --from-literal=manifest.os_manifest='${manifestJSON}'" // <----- single quotes
os.call(osCmd)
That allowed the configmap creation, and I confirmed from the OpenShift side that the configmap is present with the manifest values I was expecting.
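If it helps, one way to double-check the stored value from the CLI (the key name comes from the command above; note the escaped dot in the jsonpath expression):

oc get configmap manifest-config -o jsonpath='{.data.manifest\.os_manifest}'

This should print the JSON document back verbatim.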
I'm struggling to understand the Hiera way of working with data; it seems to me like plain YAML using front matter to include global data files would be simpler and more powerful.
In any case, I want to accomplish something like this:
# global.yaml
collection1: &collection1
  foo: 1
collection2: &collection2
  bar: 2
collection3: &collection3
  baz: 3

# development_environment.yaml
collection:
  <<: *collection1
  <<: *collection2

# production_environment.yaml
collection:
  <<: *collection2
  <<: *collection3
Essentially, so that I can maintain a couple of lists of things in a single place and then combine them in different ways depending on the environment. Hiera has an option for merging top-level keys vs. deep merging, but I can't find anything about including data from higher up in the hierarchy. (For my particular problem I could also get it working reasonably well if there were a way to overwrite the data in the global file rather than merging it into the more specific file, but that doesn't seem possible either.)
How can I do this? Am I stuck manually duplicating the base data in all my different environments?
I realize that I could put an environment case statement in Puppet code to choose which base collections to include, but that breaks the separation of concerns of keeping data in Hiera and code in Puppet. If I have to do that, I may as well skip Hiera altogether and put my data in Puppet modules.
You can do it by manually loading the list of collections and iterating over it:
# global.yaml
collection1:
  foo: 1
collection2:
  bar: 2
collection3:
  baz: 3

# development_environment.yaml
collection:
- collection1
- collection2

# production_environment.yaml
collection:
- collection2
- collection3
Now you can write something like this:
# this variable will contain something like ['collection1', 'collection2']
$collections = hiera('collection')
# Now get all the corresponding values
$hashparts = $collections.map |$r| { $x = hiera($r); $x }     # e.g. [{"foo"=>1}, {"bar"=>2}]
# Now we merge all the parts
$hash = $hashparts.reduce |$a, $b| { $x = merge($a, $b); $x } # e.g. {"foo"=>1, "bar"=>2}
This is ugly, but it should do what you expect. The $x = function(); $x dance is there because of the unfortunate decision that all lambda functions can be used in any context (statement or value), so we don't know at parse time whether to expect the last "token" of the "block" to be a statement or an expression.
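For what it's worth, a sketch of the same merge on newer Puppet (4+, an assumption about your version), where lookup() supersedes hiera(), hashes merge with the native + operator, and reduce accepts a start value, avoiding the $x = f(); $x workaround:

# Assumes Puppet 4+ and the same Hiera data as above
$collections = lookup('collection')            # e.g. ['collection1', 'collection2']
$hash = $collections.reduce({}) |$memo, $name| {
  $memo + lookup($name)                        # native hash merge; right-hand side wins on conflicts
}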