How to flatten a tuple of server ids to a string? - terraform

I'm trying to create a file that includes the IDs of multiple server hosts that were generated with the count attribute:
resource "aws_instance" "workers" {
count = "${var.worker_count}"
...
}
resource "local_file" "stop_instances" {
filename = "${path.module}/generated/stop_instances.py"
content =<<EOF
import boto3
# Boto Connection
ec2 = boto3.resource('ec2', '${var.region}')
def lambda_handler(event, context):
# Retrieve instance IDs
instance_ids = ["${aws_instance.controller.id}", "${aws_instance.gateway.id}", "${aws_instance.workers.*.id}"]
# stopping instances
stopping_instances = ec2.instances.filter(InstanceIds=instance_ids).stop()
EOF
}
However, I'm getting the following error:
 447:     instance_ids = ["${aws_instance.controller.id}", "${aws_instance.gateway.id}", "${aws_instance.workers.*.id}"]
     |----------------
     | aws_instance.workers is tuple with 3 elements

Cannot include the given value in a string template: string required.
Is there a way that I can flatten the tuple to a string?
I've tried the tostring function, but it only accepts primitive types.

join was the solution for me:
instance_ids = ["${aws_instance.controller.id}","${aws_instance.gateway.id}","${join("\",\"", aws_instance.workers.*.id)}"]

The best way to produce a string from a list will depend on the specific situation, because there are lots of different ways to represent a list in a string.
For this particular situation, it seems likely that JSON array/string syntax is compatible enough with Python syntax that you could get away with using jsonencode to produce a Python list expression:
import boto3

# Boto Connection
ec2 = boto3.resource('ec2', ${jsonencode(var.region)})

def lambda_handler(event, context):
    # Retrieve instance IDs
    instance_ids = ${jsonencode(
      concat(
        [
          aws_instance.controller.id,
          aws_instance.gateway.id,
        ],
        aws_instance.workers.*.id
      )
    )}
    # stopping instances
    stopping_instances = ec2.instances.filter(InstanceIds=instance_ids).stop()
For situations where a lot of data needs to be passed into a program written in another language, and where the JSON syntax might not 100% align with the target language, a more general solution would be to pass the data structure in as JSON and then parse it using the language's own JSON parser.
If you know that all of your values are strings, as in this case, you could also simplify things: join all of the string values together with some delimiter using the join function and then split the result using Python's split method, at the expense of the generated source code looking even less like something a human might hand-write.
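As a rough sketch of that join/split variant (the comma delimiter and the rendered ID string below are assumptions for illustration, not output from the configuration above):
# suppose the template emitted this line via:
#   instance_ids = "${join(",", aws_instance.workers.*.id)}".split(",")
instance_ids = "i-0aaa,i-0bbb,i-0ccc".split(",")  # hypothetical rendered result
print(instance_ids)  # ['i-0aaa', 'i-0bbb', 'i-0ccc']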

Related

DynamoDB Query Reserved Words using boto3 Resource?

I've been playing around with some "weirder" querying of DynamoDB with reserved words using the boto3.resource method, and have come across a pretty annoying issue that I haven't been able to resolve for quite some time (always the same error, sigh) and can't seem to find the answer to anywhere.
My code is the following:
import json
import logging

import boto3
from boto3.dynamodb.conditions import Key

logger = logging.getLogger()
logger.setLevel(logging.INFO)

TABLE_NAME = "some-table"

def getItems(record, table=None):
    if table is None:
        dynamodb = boto3.resource("dynamodb")
        table = dynamodb.Table(TABLE_NAME)
    record = str(record)
    get_item = table.query(
        KeyConditionExpression=Key("#pk").eq(":pk"),
        ExpressionAttributeNames={"#pk": "PK"},
        ExpressionAttributeValues={":pk": record},
    )
    logger.info(
        f"getItem parameters\n{json.dumps(get_item, indent=4, sort_keys=True, default=str)}"
    )
    return get_item

if __name__ == "__main__":
    record = 5532941
    getItems(record)
It's nothing fancy, as I mentioned I'm just playing around, but I'm constantly getting the following error no matter what I try:
"An error occurred (ValidationException) when calling the Query operation: Value provided in ExpressionAttributeNames unused in expressions: keys: {#pk}"
As far as I understand, in order to "replace" the reserved keys/values with something arbitrary, you put them into ExpressionAttributeNames and ExpressionAttributeValues, but I can't wrap my head around why it's telling me that this key is not used.
I should mention that this Primary Key exists with this value in the record var in DynamoDB.
Any suggestions?
Thanks
If you're using Key then just provide the string values and don't be fancy with substitution. See this example:
https://github.com/aws-samples/aws-dynamodb-examples/blob/master/DynamoDB-SDK-Examples/python/WorkingWithQueries/query_equals.py
If you're writing an equality expression as a single string, then you need the substitution. See this example:
https://github.com/aws-samples/aws-dynamodb-examples/blob/master/DynamoDB-SDK-Examples/python/WorkingWithQueries/query-consistent-read.py
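A minimal sketch of both styles, assuming a table whose partition key attribute is named PK (the table name and key value are placeholders):
import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("some-table")

# Style 1: Key object -- pass the real attribute name and value, no substitution
resp = table.query(KeyConditionExpression=Key("PK").eq("5532941"))

# Style 2: expression string -- here the #/: placeholders are required
resp = table.query(
    KeyConditionExpression="#pk = :pk",
    ExpressionAttributeNames={"#pk": "PK"},
    ExpressionAttributeValues={":pk": "5532941"},
)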

Unable to fetch only values from a List in Groovy with Jmeter script

In Groovy, I am getting the output below in a List. I am using the JMeter JSR223 PostProcessor for the script. My List prints the following data as its result.
def a = [{Zip=36448, CountryID=2}]
I want to fetch only the values (36448 and 2) from this List, not the keys. How can I do that?
For simple single instance fetch do this:
def zip = a.first().Zip
def countryId = a.first().CountryID
Seems pretty straightforward if those are the only known values that you want.
If you want all Zips and CountryIDs then you can do this:
def zips = a*.Zip
def countryIds = a*.CountryID
That will return two Lists, one with all the Zips and one with all the CountryIDs, using the spread operator.
I don't know what the data structure inside your list is; your code is not valid Groovy code.
For Map it would be something like:
a[0].collect {it -> it.value}
More information on Groovy scripting in JMeter: Apache Groovy - Why and How You Should Use It

F Strings and Interpolation using a properties file

I have a simple Python app and I'm trying to combine a bunch of output messages to standardize output to the user. I've created a properties file for this, and it looks similar to the following:
[migration_prepare]
console=The migration prepare phase failed in {stage_name} with error {error}!
email=The migration prepare phase failed while in {stage_name}. Contact support!
slack=The **_prepare_** phase of the migration failed
I created a method to handle fetching messages from a Properties file... similar to:
from configparser import ConfigParser, NoOptionError, NoSectionError

def get_msg(category, message_key, prop_file_location="messages.properties"):
    """Get a string from a properties file that is utilized similar to a dictionary
    and used in subsequent messaging between console, slack and email communications."""
    message = None
    config = ConfigParser()
    try:
        dataset = config.read(prop_file_location)
        if len(dataset) == 0:
            raise ValueError("failed to find property file")
        message = config.get(category, message_key).replace('\\n', '\n')  # if it contains newline characters, i.e. \n
    except NoOptionError as no:
        print(f"Bad option for value {message_key}")
        print(f"{no}")
    except NoSectionError as ns:
        print(f"There is no section in the properties file {prop_file_location} that contains category {category}!")
        print(f"{ns}")
    return f"{message}"
The method returns the f-string fine to the calling class. My question is: if the string in my properties file contains text like {some_value} that is intended to be interpolated in the calling class (as with an f-string's curly brackets), why does it come back as a literal? The output is the literal text, not the interpolated value I expect:
What I get The migration prepare phase failed while in {stage_name} stage. Contact support!
What I would like The migration prepare phase failed while in Reconciliation stage. Contact support!
I would like the output from the method to return the interpolated value. Has anyone done anything like this?
I am not sure where you define your stage_name, but in order to interpolate in a config file you need to use ${stage_name}.
Interpolation in f-strings and ConfigParser files is not the same.
Update: added 2 usage examples:
# ${} option using ExtendedInterpolation
from configparser import ConfigParser, ExtendedInterpolation

parser = ConfigParser(interpolation=ExtendedInterpolation())
parser.read_string('[example]\n'
                   'x=1\n'
                   'y=${x}')
print(parser['example']['y'])  # y = '1'

# another option - %()s, which works with the default interpolation
from configparser import ConfigParser

parser = ConfigParser()
parser.read_string('[example]\n'
                   'x=1\n'
                   'y=%(x)s')
print(parser['example']['y'])  # y = '1'
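If the goal is instead to fill the placeholders at the call site rather than inside the properties file, Python's str.format on the retrieved template may be closer to what the question describes. A minimal sketch, different from the ConfigParser interpolation above (get_msg and the message text come from the question; the stage_name value passed in is an assumption):
template = get_msg("migration_prepare", "email")
# the {stage_name} placeholder is resolved here, at the call site
print(template.format(stage_name="Reconciliation"))
# The migration prepare phase failed while in Reconciliation. Contact support!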

Creating custom component in SpaCy

I am trying to create SpaCy pipeline component to return Spans of meaningful text (my corpus comprises pdf documents that have a lot of garbage that I am not interested in - tables, headers, etc.)
More specifically I am trying to create a function that:
takes a doc object as an argument
iterates over the doc tokens
When certain rules are met, yield a Span object
Note I would also be happy with returning a list([span_obj1, span_obj2])
What is the best way to do something like this? I am a bit confused about the difference between a pipeline component and an extension attribute.
So far I have tried:
from spacy.lang.en import English
from spacy.tokens import Doc

nlp = English()
Doc.set_extension('chunks', method=iQ_chunker)
####
raw_text = get_test_doc()
doc = nlp(raw_text)
print(type(doc._.chunks))
>>> <class 'functools.partial'>
iQ_chunker is a method that does what I explained above, and it returns a list of Span objects.
This is not the result I expect, as the function I pass in as method returns a list.
I imagine you're getting a functools partial back because you are accessing chunks as an attribute, despite having passed it in as an argument for method. If you want spaCy to intervene and call the method for you when you access something as an attribute, it needs to be
Doc.set_extension('chunks', getter=iQ_chunker)
Please see the Doc documentation for more details.
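Alternatively, if you keep method=, the extension is exposed as a callable that you have to invoke yourself; a one-line sketch (assuming iQ_chunker takes the doc as its first argument):
chunks = doc._.chunks()  # with method=, doc._.chunks is a partial; calling it runs iQ_chunker(doc)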
However, if you are planning to compute this attribute for every single document, I think you should make it part of your pipeline instead. Here is some simple sample code that does it both ways.
import spacy
from spacy.tokens import Doc

def chunk_getter(doc):
    # the getter is called when we access _.extension_1,
    # so the computation is done at access time
    # also, because this is a getter,
    # we need to return the actual result of the computation
    first_half = doc[0:len(doc)//2]
    second_half = doc[len(doc)//2:len(doc)]
    return [first_half, second_half]

def write_chunks(doc):
    # this pipeline component is called as part of the spacy pipeline,
    # so the computation is done at parse time
    # because this is a pipeline component,
    # we need to set our attribute value on the doc (which must be registered)
    # and then return the doc itself
    first_half = doc[0:len(doc)//2]
    second_half = doc[len(doc)//2:len(doc)]
    doc._.extension_2 = [first_half, second_half]
    return doc

nlp = spacy.load("en_core_web_sm", disable=["tagger", "parser", "ner"])
Doc.set_extension("extension_1", getter=chunk_getter)
Doc.set_extension("extension_2", default=[])
nlp.add_pipe(write_chunks)

test_doc = nlp('I love spaCy')
print(test_doc._.extension_1)
print(test_doc._.extension_2)
This just prints [I, love spaCy] twice because it's two methods of doing the same thing, but I think making it part of your pipeline with nlp.add_pipe is the better way to do it if you expect to need this output on every document you parse.

How to modify the signature of a function dynamically

I am writing a framework in Python. When a user declares a function, they do:
def foo(row, fetch=stuff, query=otherStuff): ...
def bar(row, query=stuff): ...
def bar2(row): ...
When the backend sees query= value, it executes the function with the query argument depending on value. This way the function has access to the result of something done by the backend in its scope.
Currently I build my arguments each time by checking whether query, fetch and the other items are None, and launching the function with a set of args that exactly matches what the user asked for; otherwise I get the "got an unexpected keyword argument" error. This is the code in the backend:
# fetch and query are computed by the backend
if fetch == None and query == None:
    userfunction(row)
elif fetch == None:
    userfunction(row, query=query)
elif query == None:
    userfunction(row, fetch=fetch)
else:
    userfunction(row, fetch=fetch, query=query)
This is not good; for each additional "service" the backend offers, I need to write all the combinations with the previous ones.
Instead of that, I would like to simply take the function and manually add a named parameter before executing it, removing all the unnecessary code that does these checks. Then the user would just use the stuff they really wanted.
I don't want the user to have to modify the function by adding stuff it doesn't want (nor do I want them to specify a kwarg every time).
So, if this is doable, I would like an example of a function addNamedVar(name, function) that adds the variable name to the function function.
I want to do it that way because the user's functions are called many times; the alternative would be, for example, to build a dict of the function's named vars (with inspect) and then call it with **dict, as sketched below. I would really like to just modify the function once to avoid any kind of per-call overhead.
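For reference, that inspect-based filtering the question rules out would look roughly like this (a hedged sketch; call_with_supported is a hypothetical helper, and it does pay the signature-inspection cost on every call):
import inspect

def call_with_supported(func, row, **services):
    # hypothetical helper: forward only the keyword args func actually declares
    accepted = inspect.signature(func).parameters
    kwargs = {k: v for k, v in services.items() if k in accepted and v is not None}
    return func(row, **kwargs)

# usage: call_with_supported(userfunction, row, fetch=fetch, query=query)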
This is indeed doable with AST manipulation, and that's what I am going to do, because that solution suits my use case better. However, you can do what I asked more simply with a function-cloning approach like the code snippet below. Note that this code returns the same function with different default values. You can use this code as an example to do whatever you want.
This works for Python 3:
import inspect
import types

def copyTransform(f, name, **args):
    signature = inspect.signature(f)
    params = list(signature.parameters)
    numberOfParam = len(params)
    numberOfDefault = len(f.__defaults__)
    # defaults belong to the last numberOfDefault parameters
    firstDefaultIndex = numberOfParam - numberOfDefault
    listTuple = list(f.__defaults__)
    for key, val in args.items():
        # index raises ValueError if key is not a defaulted parameter
        toChangeIndex = params.index(key, firstDefaultIndex)
        listTuple[toChangeIndex - firstDefaultIndex] = val
    newTuple = tuple(listTuple)
    oldCode = f.__code__
    # note: on Python 3.8+ CodeType gained co_posonlyargcount;
    # there you can use oldCode.replace(co_name=name) instead
    newCode = types.CodeType(
        oldCode.co_argcount,        # integer
        oldCode.co_kwonlyargcount,  # integer
        oldCode.co_nlocals,         # integer
        oldCode.co_stacksize,       # integer
        oldCode.co_flags,           # integer
        oldCode.co_code,            # bytes
        oldCode.co_consts,          # tuple
        oldCode.co_names,           # tuple
        oldCode.co_varnames,        # tuple
        oldCode.co_filename,        # string
        name,                       # string
        oldCode.co_firstlineno,     # integer
        oldCode.co_lnotab,          # bytes
        oldCode.co_freevars,        # tuple
        oldCode.co_cellvars         # tuple
    )
    newFunction = types.FunctionType(newCode, f.__globals__, name, newTuple, f.__closure__)
    newFunction.__qualname__ = name  # also needed for serialization
    return newFunction
You need to do that weird stuff with the names if you want to pickle your cloned function.
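A hypothetical usage of the copyTransform above, cloning a function with a different default (the function and values here are made up for illustration):
def greet(row, name="world"):
    return f"hello {name}, row {row}"

greet2 = copyTransform(greet, "greet2", name="backend")
print(greet2(1))  # hello backend, row 1
print(greet(1))   # hello world, row 1 -- the original is untouched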