freeradius unlang disable escaping - ldap-query

I have the following list of VLANs I want to check against LDAP.
Tmp-String-0 = "CN=vlan10,CN=Users,DC=aaa,DC=local;CN=vlan20,CN=Users,DC=aaa,DC=local"
With unlang I explode them and loop through them:
if ("%{explode:&control:Tmp-String-0 ;}" > 0) {
foreach &control:Tmp-String-0 {
Then I try to do the membership check with:
Tmp-String-1 := "%{ldap_aaa.local:ldapi://192.168.0.199:389/cn=Users,dc=aaa,dc=local?memberof?sub?(&(objectCategory=User)(sAMAccountName=%{%{Stripped-User-Name}:-%{User-Name}})(memberOf=%{Foreach-Variable-0}))}"
However, %{Foreach-Variable-0} expands to the escaped version of the string:
CN\3dvlan20\2cCN\3dUsers\2cDC\3daaa\2cDC\3dlocal
The escaped version does not work; if I replace it with the hardcoded unescaped version then it works:
CN=vlan20,CN=Users,DC=aaa,DC=local
How do I prevent unlang from escaping the variable?

You can't prevent the escaping in v3. In version 4 there's the concept of "tainted" and "untainted" values, with tainted values being escaped and untainted values being inserted verbatim, but I'm not sure that's implemented yet in the LDAP module expansions.
You could use the group membership checks in the LDAP module to do what you're trying to implement with the xlat expansion, e.g.:
if ("%{explode:&control:Tmp-String-0 ;}" > 0) {
foreach &control:Tmp-String-0 {
if (LDAP-Group == "%{control:Tmp-String-0}") {
update reply {
# VLAN-Attrs
}
}
}
}
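For illustration, the # VLAN-Attrs placeholder above would typically be the standard RADIUS tunnel attributes used for dynamic VLAN assignment. A rough sketch only; the VLAN ID 20 is just borrowed from the question's vlan20 group and would be chosen per group:
update reply {
    # Standard dynamic VLAN assignment attributes
    Tunnel-Type := VLAN
    Tunnel-Medium-Type := IEEE-802
    Tunnel-Private-Group-Id := "20"
}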

Related

How to use an in each statement with mixed-version DML devices?

I've got a device with some common code in DML 1.4 where I want to verify that a parameter is properly set on all fields of all registers by using an in each statement:
dml 1.4;
//import "each-bank.dml";

template stop_template {
    param stop default true;
}

template dont_stop is stop_template {
    param stop = false;
}

in each bank {
    in each register {
        in each field {
            is stop_template;
            #if (this.stop) {
                error "Stop here";
            }
        }
    }
}
For a device in DML 1.2, my error statement triggers on d.o even if I add the template or parameter myself:
dml 1.2;
device each_device;
import "each-import.dml";

bank a {
    register x size 4 @0x0 {
        field alpha @[2:0] {
            is dont_stop;
        }
    }
}

bank d {
    register o size 4 @0x0 is stop_template;
}
DML-DEP each-device.dmldep
DEP each-device-dml.d
DMLC each-device-dml.c
Using the Simics 5 API for test-device module
/modules/test-device/each-device.dml:15:5: In d.o
/modules/test-device/each-import.dml:18:17: error: Stop here
...
gmake: *** [test-device] Error 2
Running the same code with a 1.4 device works as intended. Does the in each statement not work with mixed-version devices?
Your issue is that in DML 1.2, registers with no declared fields get one whole-register-covering field implicitly defined by the compiler. This is then, correctly, picked up by your in each statement.
To work around this, you need your template to check whether it is actually being applied to an explicitly declared field (part of the unfortunate pain that comes with writing common code for both 1.2 and 1.4).
To do this, use the 1.2 parameter 'explicit', which is set by the compiler and is true for declared fields only. This parameter is not present in 1.4, however, so you additionally need to guard this check with a check on the 'dml_1_2' parameter.
Something along the lines of:
in each bank {
    in each register {
        in each field {
            // We cannot put this inside the hashif, despite wanting to,
            // because DML does not (yet) allow conditional templating on
            // parameters
            is stop_template;
            // This _does_ mean we can check this.stop first, saving us
            // a little bit of code
            #if (this.stop) {
                // It is generally considered good practice to keep
                // dml_1_2 checks in their own #if branch, so that they
                // can be removed wholesale once 1.2 is no longer supported
                #if (dml_1_2) {
                    #if (explicit) {
                        error "Stop here";
                    }
                } #else {
                    error "Stop here";
                }
            }
        }
    }
}
In your case, the problematic code is a consistency check that's not strictly necessary for the device to function, so one option is to just skip this check for DML 1.2, assuming that most remaining DML 1.2 devices are legacy things where not much development is happening. So you can just put the problematic definitions inside #if (!dml_1_2) { } and it will compile again. Note that the compiler has a special case for the dml_1_2 parameter, which allows a top-level #if to contain any kind of top-level statement, including template definitions and typedefs.
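As a sketch of that last option (assuming the check lives in the shared each-import.dml file from the question), the whole block can be wrapped so it only applies to 1.4 devices:
// Skip the consistency check entirely for DML 1.2 devices; the compiler's
// special case for dml_1_2 allows in each statements inside a top-level #if.
#if (!dml_1_2) {
    in each bank {
        in each register {
            in each field {
                is stop_template;
                #if (this.stop) {
                    error "Stop here";
                }
            }
        }
    }
}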

Escaping dollar sign in Terraform

In an attempt to create a route key named $disconnect for an API Gateway, I'm running the snippet below, where var.route_name receives the string "disconnect":
resource "aws_apigatewayv2_route" "route" {
api_id = var.apigw_api.id
route_key = "$${var.route_name}"
# more stuff...
}
But it's not escaping it correctly. I couldn't find a proper way to emit a $ followed by var.route_name's content. How can I do that?
In Terraform's template language, the sequence $${ is the escape sequence for literal ${, and so unfortunately in your example Terraform will understand $${var.route_name} as literally ${var.route_name}, and not as a string interpolation at all.
To avoid this, you can use any strategy that causes the initial $ to be separate from the following ${, so that Terraform will understand the first $ as a literal and the remainder as an interpolation sequence.
One way to do that would be to present that initial literal $ via an interpolation sequence itself:
"${"$"}${var.route_name}"
The above uses an interpolation sequence that would typically be redundant -- its value is a literal string itself -- but in this case it's grammatically useful to change Terraform's interpretation of that initial dollar sign.
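Applied to the resource from the question, that first form would look something like this (a sketch; only route_key changes):
resource "aws_apigatewayv2_route" "route" {
  api_id    = var.apigw_api.id
  # The inner "${"$"}" yields a literal dollar sign, and the following
  # interpolation appends the route name, e.g. "$disconnect".
  route_key = "${"$"}${var.route_name}"
  # more stuff...
}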
Some other permutations:
join("", ["$", var.route_name])
format("$%s", var.route_name)
locals {
  dollar = "$"
}

resource "aws_apigatewayv2_route" "route" {
  route_key = "${local.dollar}${var.route_name}"
  # ...
}
Again, all of these are just serving to present the literal $ in various ways that avoid it being followed by either { or ${ and thus avoid Terraform's parser treating it as a template sequence or template escape.
There probably exists an easier way to escape a $ in hcl2 string interpolation, but the format function will also assist you here:
resource "aws_apigatewayv2_route" "route" {
api_id = var.apigw_api.id
route_key = format("$%s", var.route_name)
# more stuff...
}
If you are trying to dynamically set a variable name (i.e. the variable name depends on another variable), that is not possible. Otherwise you could do this:
resource "aws_apigatewayv2_route" "route" {
api_id = var.apigw_api.id
route_key = "$$${var.route_name}"
# more stuff...
}
For the dynamic case, instead create a map route_keys and choose the key based on the name:
locals {
  route_keys = {
    route_name1 = ...
    route_name2 = ...
  }
}

resource "aws_apigatewayv2_route" "route" {
  api_id    = var.apigw_api.id
  route_key = local.route_keys[var.route_name]
  # more stuff...
}

How can I use a variable as an attribute name in Terraform (AWS provider 3.0)?

Is it possible to somehow create an arbitrary attribute from a variable? Here is what I am trying to achieve.
How I currently do it (now deprecated in 3.0.0):
resource "aws_lb_listener_rule" "example" {
condition {
field = var.condition_field
values = var.condition_values
}
}
The new syntax requires a nested block named after the condition type, but my condition type is stored in a variable:
resource "aws_lb_listener_rule" "example" {
condition {
var.condition_field {
values = var.condition_values
}
}
}
Is it possible to somehow create an arbitrary attribute from a variable?
or: Can I store a nested attribute block in a variable?
Background on my question: I am currently trying to upgrade from 2.70.0 to 3.0.0 and there are quite a few breaking changes in my system. One of them includes the aws_lb_listener_rule. If it is not possible to create the attribute from the variable I would have to either pin the version or change the module API used by a ton of projects.
It actually seems like it is not possible to do that. The closest thing I have found that allows me to use 3.0.0 without changing my module variables (and with that all the Terraform scripts that use the module) is dynamic "condition" blocks:
dynamic "condition" {
for_each = var.field == "path-pattern" ? [var.field] : []
content {
path_pattern {
values = var.patterns
}
}
}
This is repeated for all possible var.field values.
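For example, a second copy of the block covers the host-header case (assuming var.field still carries the old field names from the deprecated syntax):
dynamic "condition" {
  for_each = var.field == "host-header" ? [var.field] : []
  content {
    host_header {
      values = var.patterns
    }
  }
}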

How should I handle Perl 6 $*ARGFILES that can't be read by lines()?

I'm playing around with lines which reads lines from the files you specify on the command line:
for lines() { put $_ }
If it can't read one of the filenames it throws X::AdHoc (one day maybe it will have better exception types so we can grab the filename with a .path method). Fine, so catch that:
try {
    CATCH { default { put .^name } }
    for lines() { put $_ }
}
So this catches the X::AdHoc error but that's it. The try block is done at that point. It can't .resume and try the next file:
try {
    CATCH { default { put .^name; .resume } } # Nope
    for lines() { put $_ }
}
Back in Perl 5 land you get a warning about the bad filename and the program moves on to the next thing.
I could filter @*ARGS first, then reconstruct $*ARGFILES if there are some arguments:
$*ARGFILES = IO::CatHandle.new:
    @*ARGS.grep( { $^a.IO.e and $^a.IO.r } ) if +@*ARGS;
for lines() { put $_ }
That works, although it silently ignores bad files. I could handle that, but it's a bit tedious to handle the argument list myself, including - for standard input as a filename and the default with no arguments:
my $code := { put $_ };
@*ARGS = '-' unless +@*ARGS;
for @*ARGS -> $arg {
    given $arg {
        when '-'     { $code.($_) for $*IN.lines(); next }
        when ! .IO.e { note "$_ does not exist"; next }
        when ! .IO.r { note "$_ is not readable"; next }
        default      { $code.($_) for $arg.IO.lines() }
    }
}
But that's a lot of work. Is there a simpler way to handle this?
To warn on bad open and move on, you could use something like this:
$*ARGFILES does role { method next-handle { loop {
    try return self.IO::CatHandle::next-handle;
    warn "WARNING: $!.message()";
}}}
.say for lines
This simply mixes in a role that makes the IO::CatHandle.next-handle method retry getting the next handle (you can also use the but operator to mix into a copy instead, as sketched below).
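An untested sketch of the but variant; but returns a new object with the role mixed in, so assign the result back instead of modifying $*ARGFILES in place:
$*ARGFILES = $*ARGFILES but role {
    method next-handle {
        loop {
            # Retry until a handle opens successfully, warning on each failure
            try return self.IO::CatHandle::next-handle;
            warn "WARNING: $!.message()";
        }
    }
};
.say for lines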
If it can't read one of the filenames it throws X::AdHoc
The X::AdHoc is from .open call; there's a somewhat moldy PR to make those exceptions typed, so once that's fixed, IO::CatHandle would throw typed exceptions as well.
It can't .resume
Yeah, you can only resume from a CATCH block that caught it, but in this case it's caught inside .open call and is made into a Failure, which is then received by IO::CatHandle.next-handle and its .exception is re-.thrown.
However, even if it were resumable here, it'd simply resume into the path where the exception was thrown, not re-try with another handle, so it wouldn't help. (I looked into making it resumable, but that adds vagueness to the on-switch and I'm not comfortable speccing that resuming Exceptions from certain places must be able to meaningfully continue; we currently don't offer such a guarantee for any place in core.)
including - for standard input as a filename
Note that that special meaning is going away in 6.d language as far as IO::Handle.open (and by extension IO::CatHandle.new) goes. It might get special treatment in IO::ArgFiles, but I've not seen that proposed.
Back in Perl 5 land you get a warning about the bad filename and the program moves on to the next thing.
In Perl 6, it's implemented as a generalized IO::CatHandle type users can use for anything, not just file arguments, so warning and moving on by default feels too lax to me.
IO::ArgFiles could be special-cased to offer such behaviour. Personally, I'm against special casing stuff all over the place and I think that is the biggest flaw in Perl 5, but you could open an Issue proposing that and see if anyone backs it.

How does the in operator work in Groovy?

I want to check a string against multiple values in Groovy.
For example:
if (cityName in ('AHD','BLR','DEL'))
{
}
But written this way, it gives a syntax error.
To define an in-place collection (a list literal), use [] instead of ():
if (cityName in ['AHD','BLR','DEL']) {
}
Other than that, in is used correctly.
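A minimal runnable example of the corrected check (city names taken from the question):
def cityName = 'BLR'

// 'in' delegates to the collection's isCase()/contains() check
if (cityName in ['AHD', 'BLR', 'DEL']) {
    println "$cityName is a known city"
}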
