Loading JSON data with an EPP template in Puppet

I am facing an issue while printing values from a .epp template in Puppet.
I have the following data in Hiera, which defines groups (an array of hashes):
profiles::groups:
  - name: 'admins'
    type: 1
    rights: []
  - name: 'users'
    type: 2
    rights: []
The above Hiera key is used in the .epp template like this:
.....
groups = <%= require 'json'; JSON.pretty_generate scope['groups'] %>
.....
The above declaration should print the values into the file in the following format, but instead it gives the error shown below.
.......
groups = [
  {
    "name": "admins",
    "type": 1,
    "rights": []
  },
  {
    "name": "users",
    "type": 2,
    "rights": []
  }
]
.....
Error Message: Error: Could not retrieve catalog from remote server: Error 500 on SERVER: Server Error: Evaluation Error: Error while evaluating a Function Call, epp(): Invalid EPP: Syntax error at 'json'
Declaring the same thing in an .erb template works fine and produces the desired output.

In ERB it is embedded Ruby, so you can use regular Ruby functions; in EPP it is embedded Puppet, so you need to use Puppet functions. There is a to_json_pretty function available in stdlib (https://forge.puppet.com/modules/puppetlabs/stdlib) which will do the job, and you can use it like this:
groups = <%= $groups.to_json_pretty %>
I've just tested it out with:
# test/manifests/jsontest.pp
class test::jsontest {
  $groups = lookup('test::groups')

  file { '/root/epp':
    ensure  => file,
    content => epp("test/jsondata.epp", {groups => $groups}),
  }
}
# test/templates/jsondata.epp
groups = <%= $groups.to_json_pretty %>
# common.yaml
test::groups:
  - name: 'admins'
    type: 1
    rights: []
  - name: 'users'
    type: 2
    rights: []
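As a side note, EPP templates can declare their expected parameters explicitly at the top of the file, which makes the template's inputs self-documenting. A minimal sketch using the same module layout as above (the Array[Hash] type is an assumption about the shape of the data):
# test/templates/jsondata.epp (with an explicit parameter declaration)
<%- | Array[Hash] $groups | -%>
groups = <%= $groups.to_json_pretty %>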

Related

`apiRevision` flag isn't working in Bicep template

I am using Bicep to deploy an OpenAPI JSON definition into Azure API Management. The snippet looks like this.
resource fuseintegrationsapi 'Microsoft.ApiManagement/service/apis@2021-08-01' = {
  name: '${apim.name}/integrations-api-${environment_name}'
  properties: {
    description: 'Contains integrations apis used to control the platform.'
    type: 'http'
    apiRevision: '1234'
    isCurrent: true
    subscriptionRequired: false
    displayName: 'Integrations Api'
    serviceUrl: '${api_backend_url}/api/test/v1/integrations'
    path: '${environment_name}/api/test/v1/integrations'
    protocols: [
      protocol
    ]
    value: api_link
    format: 'openapi+json-link'
    apiType: 'http'
  }
  dependsOn: [
    api2
  ]
  resource symbolicname 'policies' = {
    name: 'policy'
    properties: {
      value: anonymous_operation_policy
      format: 'rawxml'
    }
  }
}
Even though the revision is hardcoded to 1234, it always uses the default revision of 1 and the API is not updated with the latest OpenAPI specification.
I had the same problem and figured out that you have to put the revision in the name as well:
name: '${apim.name}/integrations-api-${environment_name};rev=1234'
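As a sketch of a small extension of that fix (not tested against this exact template, and the variable name is arbitrary), you can keep the revision in a variable so the name suffix and the apiRevision property cannot drift apart:
// keep the revision in one place; remaining properties elided as in the question
var apiRevisionValue = '1234'

resource fuseintegrationsapi 'Microsoft.ApiManagement/service/apis@2021-08-01' = {
  name: '${apim.name}/integrations-api-${environment_name};rev=${apiRevisionValue}'
  properties: {
    apiRevision: apiRevisionValue
    isCurrent: true
    // (...) other properties as in the question
  }
}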

Flask-restplus: how to define a nested model with 'allOf' operation?

I'm creating a Python Flask-RESTPlus server application and trying to create a model for the input body (of a POST operation) with the 'allOf' operator, equivalent to the following example taken from a swagger.yaml I created with the Swagger editor:
definitions:
  XXXOperation:
    description: something...
    properties:
      oper_type:
        type: string
        enum:
          - oper_a
          - oper_b
          - oper_c
      operation:
        allOf:
          - $ref: '#/definitions/OperA'
          - $ref: '#/definitions/OperB'
          - $ref: '#/definitions/OperC'
It should be something like (just in my crazy imagination):
xxx_oper_model = api.model('XXXOperation', {
    'oper_type': fields.String(required=True, enum=['oper_a', 'oper_b', 'oper_c']),
    'operation': fields.Nested([OperA, OperB, OperC], type='anyof')
})
where OperA, OperB, OperC are also defined as models.
How can I do that?
Actually, I would prefer to use 'oneOf', but as I understand it, that is not supported even in the Swagger editor, so I am trying to use 'allOf' with non-required fields.
Versions: flask restplus: 0.10.1, flask: 0.12.2, python: 3.6.2
Thanks a lot
You need to use api.inherit. As mentioned on page 30 of the documentation, for example:
parent = api.model('Parent', {
    'name': fields.String,
    'class': fields.String(discriminator=True)
})

child = api.inherit('Child', parent, {
    'extra': fields.String
})
This way, Child will have all the properties of Parent plus its own additional property, extra:
{
  "Child": {
    "allOf": [
      {
        "$ref": "#/definitions/Parent"
      },
      {
        "properties": {
          "extra": {
            "type": "string"
          }
        }
      }
    ]
  }
}
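Applied to the models from the question, a rough sketch might look like the following; the extra fields on OperA and OperB are made up purely for illustration:
from flask import Flask
from flask_restplus import Api, fields

app = Flask(__name__)
api = Api(app)

# Shared base model; the discriminator field tells consumers which subtype applies
oper_base = api.model('OperBase', {
    'oper_type': fields.String(required=True, discriminator=True,
                               enum=['oper_a', 'oper_b', 'oper_c']),
})

# Each concrete operation inherits the base and is rendered with allOf in the generated Swagger
oper_a = api.inherit('OperA', oper_base, {
    'target': fields.String,   # illustrative field
})
oper_b = api.inherit('OperB', oper_base, {
    'count': fields.Integer,   # illustrative field
})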

AWS Cloudformation: How to reuse bash script placed in user-data parameter when creating EC2?

In CloudFormation I have two stacks (one nested).
The nested stack "ec2-setup":
{
  "AWSTemplateFormatVersion" : "2010-09-09",
  "Parameters" : {
    // (...) some parameters here
    "userData" : {
      "Description" : "user data to be passed to instance",
      "Type" : "String",
      "Default": ""
    }
  },
  "Resources" : {
    "EC2Instance" : {
      "Type" : "AWS::EC2::Instance",
      "Properties" : {
        "UserData" : { "Ref" : "userData" },
        // (...) some other properties here
      }
    }
  },
  // (...)
}
Now in my main template I want to refer to the nested template presented above and pass a bash script in via the userData parameter. Additionally, I do not want to inline the content of the user-data script, because I want to reuse it for a few EC2 instances (so I do not want to duplicate the script each time I declare an EC2 instance in my main template).
I tried to achieve this by setting the content of the script as the default value of a parameter:
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Parameters" : {
    "myUserData": {
      "Type": "String",
      "Default" : { "Fn::Base64" : { "Fn::Join" : ["", [
        "#!/bin/bash \n",
        "yum update -y \n",
        "# Install the files and packages from the metadata\n",
        "echo 'tralala' > /tmp/hahaha"
      ]]}}
    }
  },
  (...)
  "myEc2": {
    "Type": "AWS::CloudFormation::Stack",
    "Properties": {
      "TemplateURL": "s3://path/to/ec2-setup.json",
      "TimeoutInMinutes": "10",
      "Parameters": {
        // (...)
        "userData" : { "Ref" : "myUserData" }
      }
But I get the following error while trying to launch the stack:
"Template validation error: Template format error: Every Default
member must be a string."
The error seems to be caused by the fact that the { "Fn::Base64" : (...) } declaration is an object, not a string (even though it evaluates to a base64-encoded string).
Everything works fine if I paste my script directly into the Parameters section (as an inline script) when calling my nested template, instead of referring to a string set as a parameter:
"myEc2": {
  "Type": "AWS::CloudFormation::Stack",
  "Properties": {
    "TemplateURL": "s3://path/to/ec2-setup.json",
    "TimeoutInMinutes": "10",
    "Parameters": {
      // (...)
      "userData" : { "Fn::Base64" : { "Fn::Join" : ["", [
        "#!/bin/bash \n",
        "yum update -y \n",
        "# Install the files and packages from the metadata\n",
        "echo 'tralala' > /tmp/hahaha"
      ]]}}
    }
But I want to keep the content of the userData script in a parameter/variable so that I can reuse it.
Is there any way to reuse such a bash script without having to copy/paste it each time?
Here are a few options on how to reuse a bash script in user-data for multiple EC2 instances defined through CloudFormation:
1. Set default parameter as string
Your original attempted solution should work, with a minor tweak: you must declare the default parameter as a string, as follows (using YAML instead of JSON makes it possible/easier to declare a multi-line string inline):
AWSTemplateFormatVersion: "2010-09-09"
Parameters:
  myUserData:
    Type: String
    Default: |
      #!/bin/bash
      yum update -y
      # Install the files and packages from the metadata
      echo 'tralala' > /tmp/hahaha
# (...)
Resources:
  myEc2:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: "s3://path/to/ec2-setup.yml"
      TimeoutInMinutes: 10
      Parameters:
        # (...)
        userData: !Ref myUserData
Then, in your nested stack, apply any required intrinsic functions (Fn::Base64, as well as Fn::Sub which is quite helpful if you need to apply any Ref or Fn::GetAtt functions within your user-data script) within the EC2 instance's resource properties:
AWSTemplateFormatVersion: "2010-09-09"
Parameters:
  # (...) some parameters here
  userData:
    Description: user data to be passed to instance
    Type: String
    Default: ""
Resources:
  EC2Instance:
    Type: AWS::EC2::Instance
    Properties:
      UserData:
        "Fn::Base64":
          "Fn::Sub": !Ref userData
      # (...) some other properties here
# (...)
2. Upload script to S3
You can upload your single Bash script to an S3 bucket, then invoke the script by adding a minimal user-data script in each EC2 instance in your template:
AWSTemplateFormatVersion: "2010-09-09"
Parameters:
  # (...) some parameters here
  ScriptBucket:
    Description: S3 bucket containing user-data script
    Type: String
  ScriptKey:
    Description: S3 object key containing user-data script
    Type: String
Resources:
  EC2Instance:
    Type: AWS::EC2::Instance
    Properties:
      UserData:
        "Fn::Base64":
          "Fn::Sub": |
            #!/bin/bash
            aws s3 cp s3://${ScriptBucket}/${ScriptKey} - | bash -s
      # (...) some other properties here
# (...)
3. Use preprocessor to inline script from single source
Finally, you can use a template-preprocessor tool like troposphere or your own to 'generate' verbose CloudFormation-executable templates from more compact/expressive source files. This approach will allow you to eliminate duplication in your source files - although the templates will contain 'duplicate' user-data scripts, this will only occur in the generated templates, so should not pose a problem.
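As a rough illustration of that approach (a sketch assuming the troposphere library, with placeholder AMI and instance names), a preprocessor script can hold the user-data script in a single Python string and stamp it into as many instances as needed:
from troposphere import Base64, Template
from troposphere.ec2 import Instance

# Single source of truth for the user-data script
USER_DATA = """#!/bin/bash
yum update -y
echo 'tralala' > /tmp/hahaha
"""

template = Template()
for name in ("WebServer", "Worker"):      # placeholder logical resource names
    template.add_resource(Instance(
        name,
        ImageId="ami-12345678",           # placeholder AMI
        InstanceType="t2.micro",
        UserData=Base64(USER_DATA),
    ))

print(template.to_json())                 # emits the generated CloudFormation template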
You'll have to look outside the template to provide the same user data to multiple templates. A common approach here would be to abstract your template one step further, or "template the template". Use the same method to create both templates, and you'll keep them both DRY.
I'm a huge fan of CloudFormation and use it to create most of my resources, especially for production-bound uses. But as powerful as it is, it isn't quite turn-key. In addition to creating the template, you'll also have to call the CloudFormation API to create the stack and provide a stack name and parameters. Thus, automation around the use of CloudFormation is a necessary part of a complete solution. This automation can be simplistic (a bash script, for example) or sophisticated. I've taken to using Ansible's cloudformation module to automate "around" the template, be it creating a template for the template with Jinja, providing different sets of parameters to the same reusable template, or doing discovery before the stack is created; whatever ancillary operations are necessary. Some folks really like troposphere for this purpose; if you're a Pythonic thinker, you might find it to be a good fit. Once you have automation of any kind handling the stack creation, you'll find it's easy to add steps to make the template itself more dynamic, or to assemble multiple stacks from reusable components.
At work we use cloudformation quite a bit and are tending these days to prefer a compositional approach, where we define the shared components of the templates we use, and then compose the actual templates from components.
The other option would be to merge the two stacks, using conditionals to control the inclusion of the defined resources in any particular stack created from the template. This works OK in simple cases, but the combinatorial complexity of all those conditions tends to make it a difficult solution in the long run, unless the differences are really simple.
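For reference, a minimal sketch of that conditional pattern (the parameter, condition, and resource names are illustrative):
Parameters:
  CreateWorker:
    Type: String
    AllowedValues: ["true", "false"]
    Default: "false"
Conditions:
  ShouldCreateWorker: !Equals [!Ref CreateWorker, "true"]
Resources:
  WorkerInstance:
    Type: AWS::EC2::Instance
    Condition: ShouldCreateWorker    # only created when the condition holds
    Properties:
      ImageId: ami-12345678          # placeholder
      InstanceType: t2.micro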
Actually, I found one more solution beyond those already mentioned. On the one hand this solution is a little "hackish", but on the other hand I found it to be really useful for the "bash script" use case (and also for other parameters).
The idea is to create an extra stack, a "parameters stack", which will output the values. Since the outputs of a stack are not limited to strings (as default values are), we can define the entire base64-encoded script as a single output from that stack.
The drawback is that every stack needs to define at least one resource, so our parameters stack also needs to define at least one resource. The solution for this is either to define the parameters in another template which already defines an existing resource, or to create a "fake resource" which will never be created because of a Condition which will never be satisfied.
Here I present the solution with the fake resource. First we create our new paramaters-stack.json as follows:
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description": "Outputs/returns parameter values",
  "Conditions" : {
    "alwaysFalseCondition" : {"Fn::Equals" : ["aaaaaaaaaa", "bbbbbbbbbb"]}
  },
  "Resources": {
    "FakeResource" : {
      "Type" : "AWS::EC2::EIPAssociation",
      "Condition" : "alwaysFalseCondition",
      "Properties" : {
        "AllocationId" : { "Ref": "AWS::NoValue" },
        "NetworkInterfaceId" : { "Ref": "AWS::NoValue" }
      }
    }
  },
  "Outputs": {
    "ec2InitScript": {
      "Value":
        { "Fn::Base64" : { "Fn::Join" : ["", [
          "#!/bin/bash \n",
          "yum update -y \n",
          "# Install the files and packages from the metadata\n",
          "echo 'tralala' > /tmp/hahaha"
        ]]}}
    }
  }
}
Now in the main template we first declare our parameters stack, and later we refer to the output from that parameters stack:
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Resources": {
    "myParameters": {
      "Type": "AWS::CloudFormation::Stack",
      "Properties": {
        "TemplateURL": "s3://path/to/paramaters-stack.json",
        "TimeoutInMinutes": "10"
      }
    },
    "myEc2": {
      "Type": "AWS::CloudFormation::Stack",
      "Properties": {
        "TemplateURL": "s3://path/to/ec2-setup.json",
        "TimeoutInMinutes": "10",
        "Parameters": {
          // (...)
          "userData" : {"Fn::GetAtt": [ "myParameters", "Outputs.ec2InitScript" ]}
        }
      }
    }
  }
}
Please note that one can create up to 60 outputs in one stack file, so it is possible to define 60 variables/parameters per single stack file using this technique.

Allow swagger query param to be array of strings or integers

In building a REST API using Swagger 2.0 (OpenAPI), I want to allow a query param station_id to support the following:
?station_id=23 (returns station 23)
?station_id=23,45 (returns stations 23 and 45)
?station_id=[3:14] (returns stations 3 through 14)
?station_id=100% (% acts as a wildcard, so this returns things like 1001, 10049, etc.)
I use the following swagger definition (an array of strings) as an attempt to accomplish this:
parameters:
  - name: station_id
    in: query
    description: filter stations by station_id
    required: false
    type: array
    items:
      type: string
With this definition, all of the previous examples work except ?station_id=23, where Swagger validation fails with the following message:
{
  "message": "Validation errors",
  "errors": [
    {
      "code": "INVALID_REQUEST_PARAMETER",
      "errors": [
        {
          "code": "INVALID_TYPE",
          "params": [
            "array",
            "integer"
          ],
          "message": "Expected type array but found type integer",
          "path": [],
          "description": "filter stations by station_id"
        }
      ],
      "in": "query",
      "message": "Invalid parameter (station_id): Value failed JSON Schema validation",
      "name": "station_id",
      "path": [
        "paths",
        "/stations",
        "get",
        "parameters",
        "0"
      ]
    }
  ]
}
Note that if I quote the station_id, like ?station_id='23', validation passes and I get a correct response. But I'd really prefer not to have to use quotes. Something like a union type would help solve this, but as far as I can tell union types aren't supported.
I also have another endpoint, /stations/{id}, that can handle the case of a single id, but I still have many other (non-primary-key) numerical fields that I want to filter on in the way specified above, for instance station_latitude.
Any ideas for a workaround? Maybe I can use pattern (regex) somehow? If there is no workaround in the Swagger definition, is there a way to tweak or bypass the validator? This is a Node.js project using swagger-node; I've upgraded swagger-express-mw to version 0.7.0.
I think what you'd need is the anyOf or oneOf keyword similar to the one provided by JSON Schema so that you could define the type of your station_id parameter to be either a number or a string. anyOf and oneOf are supported in OpenAPI 3.0 but not in 2.0. An OpenAPI 3.0 definition would look like this:
openapi: 3.0.0
...
paths:
  /something:
    get:
      parameters:
        - in: query
          name: station_id
          required: true
          explode: false
          schema:
            oneOf:
              - type: integer  # Optional? Array is supposed to cover the use case with a single number
                example: 23
              - type: array
                items:
                  type: integer
                minItems: 1
                example: [23, 45]
              - type: string
                oneOf:
                  - pattern: '^\[\d+:\d+]$'
                  - pattern: '^\d+%$'
                # or using a single pattern
                # pattern: '^(\[\d+:\d+])|(\d+%)$'
                example: '[3:14]'
As an alternative, perhaps you could add sortBy, skip, and limit parameters to allow you to keep the type uniform. For example: ?sortBy=station_id&skip=10&limit=10 would retrieve only stations 10 - 20.
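In Swagger 2.0 those extra parameters could be declared roughly like this (the names sortBy, skip and limit are only a suggestion):
parameters:
  - name: sortBy
    in: query
    description: field to sort results by
    required: false
    type: string
  - name: skip
    in: query
    description: number of results to skip
    required: false
    type: integer
  - name: limit
    in: query
    description: maximum number of results to return
    required: false
    type: integer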

Elasticsearch Error: SearchPhaseExecutionException: SearchParseException

I am getting the following error when I try to use a template search on an AWS Elasticsearch cluster, using the query
"match": { "title": "copyright" }
Parse Failure [Failed to parse source [{\"match\"{\"title\":\"copyright\"}}]]];
nested: Parse Failure [No parser for element [match]]];
The query is failing during the search phase, whilst trying to parse the query.
Why is the Parse failing?
My query works fine for a localhost elasticsearch instance.
Here is my mapping for the index type:
properties: {
  title: { type: 'string' },
  toc: {
    type: 'nested',
    properties: {
      title: { type: 'string' },
    },
  },
},
The AWS Elasticsearch service supports an older version of Elasticsearch than the one I was using: 1.5.2, compared to 2.1.
This older version uses a different syntax for template searches, where the template attribute is used instead of the inline attribute to supply your template, as shown below.
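A rough sketch of the difference, reusing the match query from the question (the parameter name is illustrative and endpoints may vary slightly between versions):
// Elasticsearch 2.x template search uses the inline attribute:
GET /_search/template
{
  "inline": {
    "query": { "match": { "title": "{{search_term}}" } }
  },
  "params": { "search_term": "copyright" }
}

// Elasticsearch 1.5 (as on the AWS service at the time) uses template instead:
GET /_search/template
{
  "template": {
    "query": { "match": { "title": "{{search_term}}" } }
  },
  "params": { "search_term": "copyright" }
}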
