Can't set environment variables for container in ApplicationLoadBalancedFargateService - python-3.x

(CDK 1.18.0 and Python 3.6)
task_role = iam.Role(
    self,
    id=f"...",
    assumed_by=iam.ServicePrincipal("ecs-tasks.amazonaws.com"),
    managed_policies=[...]
)
repo = get_repo(self)
task_def = ecs.FargateTaskDefinition(
    self,
    "...",
    memory_limit_mib=30720,
    cpu=4096,
    task_role=task_role,
    execution_role=self.ecs_execution_role,
)
cont = task_def.add_container(
    "...",
    image=ecs.ContainerImage.from_ecr_repository(repo),
    logging=ecs.LogDrivers.aws_logs(stream_prefix=f"Logging"),
    command=["bash", "start.sh"],
    environment={"NAME1": 'VALUE1', "NAME2": 'VALUE2'}  # what would I have to put here?
)
cont.add_port_mappings(ecs.PortMapping(container_port=8080))
fg = ecsp.ApplicationLoadBalancedFargateService(
    self,
    "...",
    task_definition=task_def,
    assign_public_ip=True,
)
I want to pass NAME1=VALUE1 and NAME2=VALUE2 to the container.
I have tried various ways of expressing the environment variables, but none of them worked. Am I doing something fundamentally wrong here?
Other than this specific issue, the service deploys and runs.

The approach you are following seems to work here on the latest version (1.23.0), although I could not find any hint in the release notes as to why this might have changed. Can you update to the latest version?
task_def.add_container(
    "container",
    environment={"a": "b", "c": "d"},
    image=aws_ecs.ContainerImage.from_registry(name="TestImage"),
    memory_limit_mib=512,
)
newtask1C300F30:
  Type: AWS::ECS::TaskDefinition
  Properties:
    ContainerDefinitions:
      - Environment:
          - Name: a
            Value: b
          - Name: c
            Value: d
        Essential: true
        Image: TestImage
        Memory: 512
        Name: container
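If you want to double-check on your side that the environment variables actually reach the synthesized template before deploying, you can run cdk synth and inspect the generated CloudFormation. Below is a minimal sketch of such a check in Python; the template path cdk.out/MyStack.template.json and the helper name are only illustrative assumptions, adjust them to your project:
# Sketch: inspect a synthesized CDK template for container environment variables.
# Assumes `cdk synth` has already written cdk.out/<StackName>.template.json;
# the stack/template name used below is only an example.
import json

def container_environments(template_path):
    with open(template_path) as f:
        template = json.load(f)
    for logical_id, resource in template.get("Resources", {}).items():
        if resource.get("Type") != "AWS::ECS::TaskDefinition":
            continue
        for container in resource.get("Properties", {}).get("ContainerDefinitions", []):
            # CloudFormation renders environment variables as a list of
            # {"Name": ..., "Value": ...} dicts.
            yield logical_id, container.get("Name"), container.get("Environment", [])

if __name__ == "__main__":
    for task_def_id, container_name, env in container_environments("cdk.out/MyStack.template.json"):
        print(task_def_id, container_name, env)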

Related

How to iterate all the variables from a variables template file in an Azure pipeline?

test_env_template.yml
variables:
  - name: DB_HOSTNAME
    value: 10.123.56.222
  - name: DB_PORTNUMBER
    value: 1521
  - name: USERNAME
    value: TEST
  - name: PASSWORD
    value: TEST
  - name: SCHEMANAME
    value: SCHEMA
  - name: ACTIVEMQNAME
    value: 10.123.56.223
  - name: ACTIVEMQPORT
    value: 8161
and many more variables in the list.
I want to iterate through all the variables in test_env_template.yml using a loop, in order to replace the values in a file. Is there a way to do that rather than referencing each value separately like ${{ variables.ACTIVEMQNAME }}, since the number of variables in the template is dynamic?
In short, no. There is no easy way to get only the pipeline variables that come from a specific template. You can get environment variables, but there you will get regular environment variables together with pipeline variables mapped to environment variables.
You can get them via env | sort, but I'm pretty sure that this is not what you want.
You can't display variables specific to a template, but you can get all pipeline variables in this way:
steps:
  - pwsh: |
      Write-Host "${{ convertToJson(variables) }}"
and then you will get
{
  system: build,
  system.hosttype: build,
  system.servertype: Hosted,
  system.culture: en-US,
  system.collectionId: be1a2b52-5ed1-4713-8508-ed226307f634,
  system.collectionUri: https://dev.azure.com/thecodemanual/,
  system.teamFoundationCollectionUri: https://dev.azure.com/thecodemanual/,
  system.taskDefinitionsUri: https://dev.azure.com/thecodemanual/,
  system.pipelineStartTime: 2021-09-21 08:06:07+00:00,
  system.teamProject: DevOps Manual,
  system.teamProjectId: 4fa6b279-3db9-4cb0-aab8-e06c2ad550b2,
  system.definitionId: 275,
  build.definitionName: kmadof.devops-manual 123 ,
  build.definitionVersion: 1,
  build.queuedBy: Krzysztof Madej,
  build.queuedById: daec281a-9c41-4c66-91b0-8146285ccdcb,
  build.requestedFor: Krzysztof Madej,
  build.requestedForId: daec281a-9c41-4c66-91b0-8146285ccdcb,
  build.requestedForEmail: krzysztof.madej@hotmail.com,
  build.sourceVersion: 583a276cd9a0f5bf664b4b128f6ad45de1592b14,
  build.sourceBranch: refs/heads/master,
  build.sourceBranchName: master,
  build.reason: Manual,
  system.pullRequest.isFork: False,
  system.jobParallelismTag: Public,
  system.enableAccessToken: SecretVariable,
  DB_HOSTNAME: 10.123.56.222,
  DB_PORTNUMBER: 1521,
  USERNAME: TEST,
  PASSWORD: TEST,
  SCHEMANAME: SCHEMA,
  ACTIVEMQNAME: 10.123.56.223,
  ACTIVEMQPORT: 8161
}
If you prefix them then you can try to filter using jq.
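Building on that idea, since non-secret pipeline variables are also mapped to environment variables of the job, a small script step could filter the prefixed ones and substitute them into your file. Here is a minimal sketch in Python; the TMPL_ prefix, the file names, and the {{NAME}} placeholder convention are assumptions for illustration only:
# Sketch of a script step: pick up the pipeline variables that Azure DevOps has
# mapped to environment variables, filter them by an agreed prefix, and
# substitute them into a file. The TMPL_ prefix, the file names, and the
# {{NAME}} placeholder convention are illustrative assumptions.
import os

PREFIX = "TMPL_"

# Azure DevOps exposes non-secret pipeline variables as environment variables
# (upper-cased, with dots replaced by underscores).
template_vars = {
    name[len(PREFIX):]: value
    for name, value in os.environ.items()
    if name.startswith(PREFIX)
}

with open("config.template") as f:
    content = f.read()

# Replace each {{NAME}} placeholder with the corresponding variable value.
for name, value in template_vars.items():
    content = content.replace("{{%s}}" % name, value)

with open("config.out", "w") as f:
    f.write(content)

print("Substituted:", ", ".join(sorted(template_vars)) or "(none)")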

Access environment variables set by configMapRef in kubernetes pod

I have a set of environment variables in my deployment using envFrom and configMapRef. The environment variables held in these ConfigMaps were originally generated by kustomize from JSON files.
spec.template.spec.containers[0].
envFrom:
  - secretRef:
      name: eventstore-login
  - configMapRef:
      name: environment
  - configMapRef:
      name: eventstore-connection
  - configMapRef:
      name: graylog-connection
  - configMapRef:
      name: keycloak
  - configMapRef:
      name: database
The issue is that it's not possible for me to access the specific environment variables directly.
Here is the result of running printenv in the pod:
...
eventstore-login={
  "EVENT_STORE_LOGIN": "admin",
  "EVENT_STORE_PASS": "changeit"
}
evironment={
  "LOTUS_ENV":"dev",
  "DEV_ENV":"dev"
}
eventstore={
  "EVENT_STORE_HOST": "eventstore-cluster",
  "EVENT_STORE_PORT": "1113"
}
graylog={
  "GRAYLOG_HOST":"",
  "GRAYLOG_SERVICE_PORT_GELF_TCP":""
}
...
This means that from my Node.js app I need to do something like this:
> process.env.graylog
'{\n "GRAYLOG_HOST":"",\n "GRAYLOG_SERVICE_PORT_GELF_TCP":""\n}\n'
This only returns the JSON string that corresponds to my original JSON file, but I want to be able to do something like this:
process.env.GRAYLOG_HOST
to retrieve my environment variables. However, I don't want to have to modify my deployment to look something like this:
env:
  - name: NODE_ENV
    value: dev
  - name: EVENT_STORE_HOST
    valueFrom:
      secretKeyRef:
        name: eventstore-secret
        key: EVENT_STORE_HOST
  - name: EVENT_STORE_PORT
    valueFrom:
      secretKeyRef:
        name: eventstore-secret
        key: EVENT_STORE_PORT
  - name: KEYCLOAK_REALM_PUBLIC_KEY
    valueFrom:
      configMapKeyRef:
        name: keycloak-local
        key: KEYCLOAK_REALM_PUBLIC_KEY
where every variable is explicitly declared. I could do this, but it is more of a pain to maintain.
Short answer:
You will need to define the variables explicitly, or change the ConfigMaps so that they have a one key = one value structure; this way you will be able to refer to them using envFrom. E.g.:
"apiVersion": "v1",
"data": {
"EVENT_STORE_LOGIN": "admin",
"EVENT_STORE_PASS": "changeit"
},
"kind": "ConfigMap",
More details
ConfigMaps are key-value pairs; that means for one key there is only one value. ConfigMaps can take a string as data, but they can't work with a map.
I tried manually editing the ConfigMap to confirm the above and got the following:
invalid type for io.k8s.api.core.v1.ConfigMap.data: got "map", expected "string"
This is the reason why environment comes up as one string instead of a structure.
For example, this is how the ConfigMap created from configmap.json looks:
$ kubectl describe cm test2
Name:         test2
Namespace:    default
Labels:       <none>
Annotations:  <none>

Data
====
test.json:
----
environment={
  "LOTUS_ENV":"dev",
  "DEV_ENV":"dev"
}
And this is how it's stored in Kubernetes:
$ kubectl get cm test2 -o json
{
    "apiVersion": "v1",
    "data": {
        "test.json": "evironment={\n \"LOTUS_ENV\":\"dev\",\n \"DEV_ENV\":\"dev\"\n}\n"
    },
    ...
In other words, the observed behaviour is expected.
Useful links:
ConfigMaps
Configure a Pod to Use a ConfigMap

Jest test passes locally, but why does it fail when running npm test in GitHub actions?

I'm having an issue with certain Jest tests in the GitHub CI. My project is in TypeScript, so I'm using ts-jest. Here is the function I'm testing; it sets the "text" fields of date and time elements:
const months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun", "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"];
const days = ["Sun", "Mon", "Tue", "Wed", "Thu", "Fri", "Sat"];

export const setDateAndTime = (dateDisplay: TextElement, clockDisplay: TextElement): void => {
  let now: Date = new Date(Date.now());
  dateDisplay.text = `${days[now.getDay()]}, ${months[now.getMonth()]} ${now.getDate()}, ${now.getFullYear()}`;
  let hours: number = preferences.clockDisplay === "12h" ? now.getHours() % 12 || 12 : now.getHours();
  let minutes: number = now.getMinutes();
  clockDisplay.text = minutes < 10 ? `${hours}:0${minutes}` : `${hours}:${minutes}`;
};
Here is a test for that function:
import { TestElement } from "../mocks/test-element";

let dateDisplay = new TestElement() as TextElement;
let clockDisplay = new TestElement() as TextElement;

test("Sets date and time display correctly", () => {
  jest.spyOn(Date, "now").mockImplementation(() => 1607913488);
  setDateAndTime(dateDisplay, clockDisplay);
  expect(dateDisplay.text).toBe("Mon, Jan 19, 1970");
  expect(clockDisplay.text).toBe("9:38");
});
TestElement is just a dummy element with a "text" field:
export class TestElement {
  text = "";
}
Locally, both expect() statements pass, but in GitHub I get the following error for the second statement only:
TypeError: (0 , _jestDiff.diffStringsRaw) is not a function

  18 |   setDateAndTime(dateDisplay, clockDisplay);
  19 |   expect(dateDisplay.text).toBe("Mon, Jan 19, 1970");
> 20 |   expect(clockDisplay.text).toBe("9:38");
     |                             ^
  21 | });
Since the issue is happening only in GitHub, I'll post my node.js.yml configuration as well:
name: Node.js CI

on: [push]

jobs:
  build:
    runs-on: windows-latest
    strategy:
      matrix:
        node-version: [14.x]
    steps:
      - uses: actions/checkout@v2
      - name: Use Node.js ${{ matrix.node-version }}
        uses: actions/setup-node@v1
        with:
          node-version: ${{ matrix.node-version }}
      - run: npm ci
      - run: npx fitbit-build
      - run: npm test
For the life of me, I can't figure out why only the second string comparison fails - it's the exact same function performed on the exact same class of element. After doing some research, the only thing I can find is that diffStringsRaw is used internally by Jest through the jest-diff package, but I haven't gotten much farther than that. Any help would be much appreciated!
Another way to fix this, if you don't want to use the GitHub Action mentioned in the answer below, is to replace your test command with TZ=Europe/London npm test, where Europe/London is your desired time zone.
See this link for discussion
The answer to this wasn't in Jest at all: rather, the GitHub test runner runs in GMT, not the local time zone, which is why the first expect(), comparing the date, passed and the second one, comparing the time, failed.
For anyone interested, the possible solutions are:
Refactor your function or your test to account for the system time zone; or
Add a GitHub action to your configuration that explicitly sets the time zone.
I chose option 2 and went with Setup Timezone (shoutout to zcong1993 for the easy to use action!)

Ansible: set fact from another task's output

I'm having trouble with some Ansible modules. I wrote a custom module, and its output looks like this:
ok: [localhost] => {
    "msg": {
        "ansible_facts": {
            "device_id": "/dev/sdd"
        },
        "changed": true,
        "failed": false
    }
}
My custom module:
#!/usr/bin/env python
from ansible.module_utils.basic import AnsibleModule
import json


def find_uuid():
    with open("/etc/ansible/roles/addDisk/library/debug/disk_fact.json") as disk_fact_file, \
         open("/etc/ansible/roles/addDisk/library/debug/device_links.json") as device_links_file:
        disk_fact_data = json.load(disk_fact_file)
        device_links_data = json.load(device_links_file)

    device = []
    for p in disk_fact_data['guest_disk_facts']:
        if disk_fact_data['guest_disk_facts'][p]['controller_key'] == 1000:
            if disk_fact_data['guest_disk_facts'][p]['unit_number'] == 3:
                uuid = disk_fact_data['guest_disk_facts'][p]['backing_uuid'].split('-')[4]
                for key, value in device_links_data['ansible_facts']['ansible_device_links']['ids'].items():
                    for d in device_links_data['ansible_facts']['ansible_device_links']['ids'][key]:
                        if uuid in d:
                            if key not in device:
                                device.append(key)
    if len(device) == 1:
        json_data = {
            "device_id": "/dev/" + device[0]
        }
        return True, json_data
    # return a two-element tuple in the failure case too, so the unpacking
    # below does not raise a TypeError
    return False, None


check, jsonData = find_uuid()


def main():
    module = AnsibleModule(argument_spec={})
    if check:
        module.exit_json(changed=True, ansible_facts=jsonData)
    else:
        module.fail_json(msg="error find device")


main()
I want to use the device_id variable in other tasks. I think I should handle this with the module.exit_json method, but how can I do that?
I want to use the device_id variable in other tasks
The thing you are looking for is register:, in order to make that value persist into the "host facts" for the hosts against which that task ran. Then you can go with a "push" model, in which you set that fact upon every other host that interests you, or you can go with a "pull" model, wherein interested hosts reach out to get the value at the time they need it.
Let's look at both cases, for comparison.
First, capture that value, and I'll use a host named "alpha" for ease of discussion:
- hosts: alpha
  tasks:
    - name: find the uuid task
      # or whatever you have called your custom task
      find_uuid:
      register: the_uuid_result
Now the output is available on the host "alpha" as {{ vars["the_uuid_result"]["device_id"] }}, which will be /dev/sdd in your example above. One can also abbreviate that as {{ the_uuid_result.device_id }}.
In the "push" model, you can now iterate over all hosts, or just those in a specific group, that should also receive that device_id fact; for this example, let's target an existing group of hosts named "need_device_id":
- hosts: alpha  # just as before, but for context ...
  tasks:
    - find_uuid:
      register: the_uuid_result

    # now propagate out the value
    - name: declare device_id fact on other hosts
      set_fact:
        device_id: '{{ the_uuid_result.device_id }}'
      delegate_to: '{{ item }}'
      with_items: '{{ groups["need_device_id"] }}'
And, finally, in contrast, one can reach over and pull that fact if host "beta" needs to look up the device_id that host "alpha" discovered:
- hosts: alpha
  # as before

- hosts: beta
  tasks:
    - name: set the device_id fact on myself from alpha
      set_fact:
        device_id: '{{ hostvars["alpha"]["the_uuid_result"]["device_id"] }}'
You could also run that same set_fact: device_id: business on "alpha" in order to keep the "local" variable named the_uuid_result from leaking out of alpha's playbook. Up to you.

Elixir postgrex with poolboy example on Windows fails with 'module DBConnection.Poolboy not available'

I am exploring using Elixir for fast Postgres data imports of mixed types (CSV, JSON). Being new to Elixir, I am following the example given in the YouTube video "Fast Import and Export with Elixir and Postgrex - Elixir Hex package showcase" (https://www.youtube.com/watch?v=YQyKRXCtq4s). The basic mix application works until the point that Poolboy is introduced, i.e. Postgrex successfully loads records into the database using a single connection.
When I try to follow the Poolboy configuration and test it by running
FastIoWithPostgrex.import("./data_with_ids.txt")
in iex or on the command line, I get the following error, for which I cannot determine the cause (username and password removed):
** (UndefinedFunctionError) function DBConnection.Poolboy.child_spec/1 is undefined (module DBConnection.Poolboy is not available)
    DBConnection.Poolboy.child_spec({Postgrex.Protocol, [types: Postgrex.DefaultTypes, name: :pg, pool: DBConnection.Poolboy, pool_size: 4, hostname: "localhost", port: 9000, username: "XXXX", password: "XXXX", database: "ASDDataAnalytics-DEV"]})
    (db_connection) lib/db_connection.ex:383: DBConnection.start_link/2
    (fast_io_with_postgrex) lib/fast_io_with_postgrex.ex:8: FastIoWithPostgrex.import/1
I am running this on Windows 10, connecting to a PostgreSQL 10.x Server through a local SSH tunnel. Here is the lib/fast_io_with_postgrex.ex file:
defmodule FastIoWithPostgrex do
  @moduledoc """
  Documentation for FastIoWithPostgrex.
  """

  def import(filepath) do
    {:ok, pid} = Postgrex.start_link(name: :pg,
      pool: DBConnection.Poolboy,
      pool_size: 4,
      hostname: "localhost",
      port: 9000,
      username: "XXXX", password: "XXXX", database: "ASDDataAnalytics-DEV")

    File.stream!(filepath)
    |> Stream.map(fn line ->
      [id_str, word] = line |> String.trim |> String.split("\t", trim: true, parts: 2)
      {id, ""} = Integer.parse(id_str)
      [id, word]
    end)
    |> Stream.chunk_every(10_000, 10_000, [])
    |> Task.async_stream(fn word_rows ->
      Enum.each(word_rows, fn word_sql_params ->
        Postgrex.transaction(:pg, fn conn ->
          IO.inspect Postgrex.query!(conn, "INSERT INTO asdda_dataload.words (id, word) VALUES ($1, $2)", word_sql_params)
          # IO.inspect Postgrex.query!(pid, "INSERT INTO asdda_dataload.words (id, word) VALUES ($1, $2)", word_sql_params)
        end, pool: DBConnection.Poolboy, pool_timeout: :infinity, timeout: :infinity)
      end)
    end, timeout: :infinity)
    |> Stream.run
  end # def import(file)
end
Here is the mix.exs file:
defmodule FastIoWithPostgrex.MixProject do
  use Mix.Project

  def project do
    [
      app: :fast_io_with_postgrex,
      version: "0.1.0",
      elixir: "~> 1.7",
      start_permanent: Mix.env() == :prod,
      deps: deps()
    ]
  end

  # Run "mix help compile.app" to learn about applications.
  def application do
    [
      extra_applications: [:logger, :poolboy, :connection]
    ]
  end

  # Run "mix help deps" to learn about dependencies.
  defp deps do
    [
      # {:dep_from_hexpm, "~> 0.3.0"},
      # {:dep_from_git, git: "https://github.com/elixir-lang/my_dep.git", tag: "0.1.0"},
      {:postgrex, "~>0.14.1"},
      {:poolboy, "~>1.5.1"}
    ]
  end
end
Here is the config/config.exs file:
# This file is responsible for configuring your application
# and its dependencies with the aid of the Mix.Config module.
use Mix.Config
config :fast_io_with_postgrex, :postgrex,
  database: "ASDDataAnalytics-DEV",
  username: "XXXX",
  password: "XXXX",
  name: :pg,
  pool: DBConnection.Poolboy,
  pool_size: 4
# This configuration is loaded before any dependency and is restricted
# to this project. If another project depends on this project, this
# file won't be loaded nor affect the parent project. For this reason,
# if you want to provide default values for your application for
# 3rd-party users, it should be done in your "mix.exs" file.
# You can configure your application as:
#
# config :fast_io_with_postgrex, key: :value
#
# and access this configuration in your application as:
#
# Application.get_env(:fast_io_with_postgrex, :key)
#
# You can also configure a 3rd-party app:
#
# config :logger, level: :info
#
# It is also possible to import configuration files, relative to this
# directory. For example, you can emulate configuration per environment
# by uncommenting the line below and defining dev.exs, test.exs and such.
# Configuration from the imported file will override the ones defined
# here (which is why it is important to import them last).
#
# import_config "#{Mix.env()}.exs"
Any assistance with finding the cause of this error would be greatly appreciated!
I didn't want to go too deep into why this isn't working, but that example is a little old: the poolboy 1.5.1 that gets pulled in with deps.get is from 2015, and the example uses Elixir 1.4.
Also, if you look at Postgrex's mix.exs deps, you will notice that your freshly installed lib (0.14.x) depends on elixir-ecto/db_connection 2.x.
The code you are referring to uses Postgrex 0.13.x, which depends on {:db_connection, "~> 1.1"}, so I would expect incompatibilities.
I would play with the versions of the libs you see in the example code's mix.lock file, and the Elixir version, if I wanted to see that working.
Maybe try lowering the Postgrex version first to something around that time (maybe between 0.12.2 and the locked version of the example).
Also, the version of Elixir might have some play here; check this.
Greetings!
have fun
EDIT:
You can use DBConnection.ConnectionPool instead of poolboy and get away with using the latest Postgrex and Elixir versions. I'm not sure about the performance difference, but you can compare. Just do this:
In config/config.exs (check if you need passwords, etc.):
config :fast_io_with_postgrex, :postgrex,
  database: "fp",
  name: :pg,
  pool: DBConnection.ConnectionPool,
  pool_size: 4
And in lib/fast_io_with.....ex replace both Postgrex.start_link(... lines with:
{:ok, pid} = Application.get_env(:fast_io_with_postgrex, :postgrex)
             |> Postgrex.start_link
That gives me:
mix run -e 'FastIoWithPostgrex.import("./data_with_ids.txt")'
1.76s user 0.69s system 106% cpu 2.294 total
on Postgrex 0.14.1 and Elixir 1.7.3
Thank you; using your advice, I got the original example to work by downgrading the dependency versions in the mix.exs file and adding a dependency on an earlier version of db_connection:
# Run "mix help deps" to learn about dependencies.
defp deps do
[
# {:dep_from_hexpm, "~> 0.3.0"},
# {:dep_from_git, git: "https://github.com/elixir-lang/my_dep.git", tag: "0.1.0"},
{:postgrex, "0.13.5"},
{:db_connection, "1.1.3"},
{:poolboy, "~>1.5.1"}
]
end
I will also try your suggestion of changing the code to replace Poolboy with the new pool manager in the later version of db_connection to see if that works as well.
I'm sure a lot of thought went into the architecture change; however, I must say there is very little out there explaining why Poolboy was once so popular, yet in the latest version of db_connection it isn't even supported as a connection type.
