Replace a JSON object array value dynamically in an Azure DevOps pipeline

A JSON file in my repository contains object arrays.
In an Azure pipeline I want to read the file and replace certain object values before proceeding with other tasks such as publishing.
Example JSON file:
"Routes": [
{
"UpstreamPathTemplate": "/product",
"DownstreamHostAndPorts": [
{
"Host": "localhost",
"Port": 7133
}
]
},
{
"UpstreamPathTemplate": "/product/delete",
"DownstreamHostAndPorts": [
{
"Host": "localhost",
"Port": 7133
}
]
},
{
"UpstreamPathTemplate": "/order",
"DownstreamHostAndPorts": [
{
"Host": "localhost",
"Port": 7203
}
]
}
]
Now, for each object, I want to pick the first path segment after '/' from UpstreamPathTemplate and replace DownstreamHostAndPorts[0].Host with that word.
Also, all Port values should be 80.
So after the replacement the above JSON would look like this (a scripted approach is sketched after the expected output):
"Routes": [
{
"UpstreamPathTemplate": "/product",
"DownstreamHostAndPorts": [
{
"Host": "product",
"Port": 80
}
]
},
{
"UpstreamPathTemplate": "/product/delete",
"DownstreamHostAndPorts": [
{
"Host": "product",
"Port": 80
}
]
},
{
"UpstreamPathTemplate": "/order",
"DownstreamHostAndPorts": [
{
"Host": "order",
"Port": 80
}
]
}
]
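
One way to do this is to rewrite the file in place with a small script that runs from a script step in the pipeline, before the publish task. Below is a minimal sketch in TypeScript/Node; the file name ocelot.json and the script name transform-routes.ts are assumptions (adjust them to your repository), and this is one possible approach rather than a built-in Azure DevOps feature.

// transform-routes.ts - a sketch; "ocelot.json" is an assumed file name
import * as fs from "fs";

const filePath = "ocelot.json"; // hypothetical path, change to your JSON file
const config = JSON.parse(fs.readFileSync(filePath, "utf8"));

for (const route of config.Routes) {
  // take the first path segment after '/', e.g. "/product/delete" -> "product"
  const segment = route.UpstreamPathTemplate.split("/").filter(Boolean)[0];
  for (const target of route.DownstreamHostAndPorts) {
    target.Host = segment; // replace Host with the path segment
    target.Port = 80;      // force every Port to 80
  }
}

fs.writeFileSync(filePath, JSON.stringify(config, null, 2));

You could run this from a script or Bash task (for example with npx ts-node transform-routes.ts, or compiled and run with node) before the publish step.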

Related

How to concatenate two template files into one and pass the result to an ECS container task definition in Terraform

I have two template files. I want to merge them into one and pass the result to the container_definitions attribute of the aws_ecs_task_definition resource.
Terraform version: v0.14.9
nginx.tpl.json:
[
  {
    "name": "nginx",
    "image": "public.ecr.aws/nginx/nginx:latest",
    "portMappings": [
      {
        "containerPort": 80,
        "hostPort": 80,
        "protocol": "tcp"
      }
    ]
  }
]
redis.json.tpl:
[
  {
    "name": "redis",
    "image": "public.ecr.aws/peakon/redis:6.0.5",
    "portMappings": [
      {
        "containerPort": 6379,
        "hostPort": 6379,
        "protocol": "tcp"
      }
    ]
  }
]
When I combine the two template files manually, as shown below, it works. But with Terraform's concat or format I get errors.
[
  {
    "name": "nginx",
    "image": "public.ecr.aws/nginx/nginx:latest",
    "portMappings": [
      {
        "containerPort": 80,
        "hostPort": 80,
        "protocol": "tcp"
      }
    ]
  },
  {
    "name": "redis",
    "image": "public.ecr.aws/peakon/redis:6.0.5",
    "portMappings": [
      {
        "containerPort": 6379,
        "hostPort": 6379,
        "protocol": "tcp"
      }
    ]
  }
]
data "template_file" "ecs_task" {
template = format("%s%s",file("./ecs/templates/nginx.json.tpl"),
file("./ecs/templates/redis.json. tpl")
)
} => Here I need to combine the two template files and then pass them onto the container_definitions to the below resource.
resource "aws_ecs_task_definition" "testapp" {
family = "testapp"
network_mode = "awsvpc"
cpu = 256
memory = 512
container_definitions = data.template_file.ecs_task.rendered # I'm getting the following error.
}
Error: invalid character '}' looking for the beginning of object key string
Can someone help me with this, please?
Update
Remove the outer array brackets from your files:
{
  "name": "nginx",
  "image": "public.ecr.aws/nginx/nginx:latest",
  "portMappings": [
    {
      "containerPort": 80,
      "hostPort": 80,
      "protocol": "tcp"
    }
  ]
}
and
{
  "name": "redis",
  "image": "public.ecr.aws/peakon/redis:6.0.5",
  "portMappings": [
    {
      "containerPort": 6379,
      "hostPort": 6379,
      "protocol": "tcp"
    }
  ]
}
Then, instead of "%s%s", use "[%s,%s]"; it seems you are missing the comma between the two objects (and the array brackets around them), i.e. format("[%s,%s]", file("./ecs/templates/nginx.json.tpl"), file("./ecs/templates/redis.json.tpl")).

Dynamically add natRuleCollections for Azure Firewall using ARM template

When creating an Azure Firewall, I would like to dynamically add the natRuleCollections using a parameter:
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "natRules": {
      "value": [
        {
          "name": "sql01",
          "priority": 501,
          "rules": [
            {
              "port": "1433",
              "protocols": [
                "TCP",
                "UDP"
              ],
              "translatedAddress": "10.1.1.1",
              "sourceAddresses": [
                "*"
              ]
            }
          ]
        },
        {
          "name": "other01",
          "priority": 502,
          "rules": [
            {
              "port": "1234",
              "protocols": [
                "TCP",
                "UDP"
              ],
              "translatedAddress": "10.1.1.2",
              "sourceAddresses": [
                "1.2.3.4",
                "5.6.7.8"
              ]
            },
            {
              "port": "5678",
              "protocols": [
                "TCP",
                "UDP"
              ],
              "translatedAddress": "10.1.1.2",
              "sourceAddresses": [
                "9.10.11.12",
                "13.14.15.16"
              ]
            }
          ]
        }
      ]
    }
  }
}
There are examples of creating the firewall with an ARM template, but they are simplified and do not have arrays with real-life scenarios (like re-using created public IP addresses). I can use the copy functionality, but that will only give me the first level, not the nested rules.
How can I achieve this scenario with ARM templates?

The .NET Core Serilog filtering extension is not working

I'm using .NET Core 2.0.9 and Serilog.Filters.Expressions 2.0.0.
I configured my appsettings.json to write to a Log table in the database. The data is recorded successfully in the database, but the RequestPath property is always null:
"Serilog": {
"MinimumLevel": {
"Default": "Debug",
"Override": {
"Microsoft": "Debug"
}
},
"WriteTo": [
{
"Name": "MSSqlServer",
"Args": {
"connectionString": "myconnectionString",
"tableName": "Log"
}
}
],
"WriteTo:Async": {
"Name": "Async",
"Args": {
"configure": [
{
"Name": "File",
"Args": {
"path": "..\\output\\log.txt",
"rollingInterval": "Day"
}
}
]
}
},
"Using": [ "Serilog.Settings.Configuration" ]
"Filter": [
{
"Name": "ByIncludingOnly",
"Args": {
"expression": "RequestPath like '%/api/book%'"
}
}
]
},
But I want to filter and save only log entries that have a specific API path; in this case, just entries whose RequestPath contains the api/user path. But no data is saved anymore and I have no log errors. Any idea why?
Here are the steps that work for me; check the difference:
appsettings.json
"Serilog": {
"MinimumLevel": "Information",
"Override": {
"Microsoft": "Critical"
},
"WriteTo": [
{
"Name": "MSSqlServer",
"Args": {
"connectionString": "Data Source=xx",
"autoCreateSqlTable ": true,
"tableName": "Logs",
"autoCreateSqlTable": true,
"columnOptionsSection": {
"removeStandardColumns": [ "Properties" ],
"customColumns": [
{
"ColumnName": "Release",
"DataType": "varchar",
"DataLength": 32
},
{
"ColumnName": "RequestPath",
"DataType": "varchar"
},
{
"ColumnName": "ConnectionId",
"DataType": "varchar"
}
]
}
}
},
{
"Name": "RollingFile",
"Args": {
"pathFormat": "Logs/app-{Date}.txt",
"outputTemplate": "{Timestamp:yyyy-MM-dd HH:mm:ss.fff zzz} [{Level}] {Message} {UserName} {ActionName} {NewLine} {Exception}"
}
}
],
"Using": [ "Serilog.Settings.Configuration" ],
"Filter": [
{
"Name": "ByIncludingOnly",
"Args": {
"expression": "RequestPath like '%/api%'"
}
}
]
},
Startup.cs
Log.Logger = new LoggerConfiguration()
    .ReadFrom.ConfigurationSection(Configuration.GetSection("Serilog"))
    .CreateLogger();
To check for Serilog errors, add the code below:
Log.Logger = new LoggerConfiguration()
    .ReadFrom.ConfigurationSection(Configuration.GetSection("Serilog"))
    .CreateLogger();

Serilog.Debugging.SelfLog.Enable(msg =>
{
    Debug.Print(msg);
    Debugger.Break();
});

Using different ormconfig.json files depending on env

My ormconfig.json is static, of course; it looks like this:
{
  "type": "mariadb",
  "host": "localhost",
  "port": 3306,
  "username": "root",
  "password": "moove",
  "database": "moove_db",
  "synchronize": true,
  "logging": false,
  "entities": [
    "dist/entity/**/*.js"
  ],
  "migrations": [
    "dist/migration/**/*.js"
  ],
  "subscribers": [
    "dist/subscriber/**/*.js"
  ],
  "cli": {
    "entitiesDir": "dist/entity",
    "migrationsDir": "dist/migration",
    "subscribersDir": "dist/subscriber"
  }
}
But what if I want to create another config for our production server?
Do I create another config file? How do I point TypeORM to the other config file?
For the moment, I was able to just change ormconfig.json to ormconfig.js and then use env variables, like this:
module.exports = {
  "port": process.env.port,
  "entities": [
    // ...
  ],
  "migrations": [
    // ...
  ],
  "subscribers": [
    // ...
  ],
  "cli": {
    // ...
  }
}
Don't use ormconfig.json. You can pass a config object directly to createConnection(), like this:
import { createConnection } from "typeorm";

const config: any = {
  "port": process.env.port || "28017",
  "entities": [
    // ...
  ],
  "migrations": [
    // ...
  ],
  "subscribers": [
    // ...
  ],
  "cli": {
    // ...
  }
}

createConnection(config).then(async connection => {
  await loadPosts(connection);
}).catch(error => console.log(error));
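
Building on that, a common pattern is to keep two plain config objects and pick one based on an environment variable. Below is a minimal sketch; the NODE_ENV check and the DB_* variable names are assumptions (not part of the original answer), and the development values simply mirror the ormconfig.json shown in the question.

import { createConnection, ConnectionOptions } from "typeorm";

// Development config mirrors the original ormconfig.json values.
const devConfig: ConnectionOptions = {
  type: "mariadb",
  host: "localhost",
  port: 3306,
  username: "root",
  password: "moove",
  database: "moove_db",
  synchronize: true,
  logging: false,
  entities: ["dist/entity/**/*.js"],
  migrations: ["dist/migration/**/*.js"],
  subscribers: ["dist/subscriber/**/*.js"],
};

// Production config reads everything from env variables (names are assumptions).
const prodConfig: ConnectionOptions = {
  type: "mariadb",
  host: process.env.DB_HOST,
  port: Number(process.env.DB_PORT) || 3306,
  username: process.env.DB_USER,
  password: process.env.DB_PASS,
  database: process.env.DB_NAME,
  synchronize: false, // usually disabled in production
  entities: ["dist/entity/**/*.js"],
  migrations: ["dist/migration/**/*.js"],
  subscribers: ["dist/subscriber/**/*.js"],
};

// Pick the config based on the environment the app is running in.
const config = process.env.NODE_ENV === "production" ? prodConfig : devConfig;

createConnection(config)
  .then(connection => console.log("connected to", connection.options.database))
  .catch(error => console.log(error));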

Test Kitchen: I want to write node attributes and search for them with node search

In Test Kitchen I am trying to search for nodes, but it is not giving any output.
cookbook/test/integration/nodes
JSON file:
{
  "id": "hive server",
  "chef_type": "node",
  "environment": "dev",
  "json_class": "Chef::Node",
  "run_list": [],
  "automatic": {
    "hostname": "test.net",
    "fqdn": "127.0.0.1",
    "name": "test.net",
    "ipaddress": "127.0.0.1",
    "node_zone": "green",
    "roles": []
  },
  "attributes": {
    "hiveserver": "true"
  }
}
Recipe
hiveNodes = search(:node, "hiveserver:true AND environment:#{node.environment} AND node_color:#{node['node_color']}")
# hiveserverList = ""
# hiveNodes.each do |hnode|
#   hiveserverList += hnode
# end
# file '/tmp/test.txt' do
#   content "#{hiveserverList}"
# end
