I am using Ansible to create an Azure storage account, whose name must be at most 24 characters. I have looked at Jinja's truncate() filter, but the parameter passed to it removes that number of characters rather than limiting the string to that number of characters.
Is there a different way of implementing a max length of string variable?
Do I need to combine truncate and length filters from Jinja?
You could use Python's slicing notation for this.
Slice objects are also generated when extended indexing syntax is used. For example: a[start:stop:step] or a[start:stop, i].
More in the documentation: https://docs.python.org/3/library/functions.html?highlight=slice#slice
Also a good read: https://python-reference.readthedocs.io/en/latest/docs/brackets/slicing.html
Given:
- debug:
    msg: "{{ str[:24] }}"
  vars:
    str: abcdefghijklmnopqrstuvwxyz0123456789
This should give you:
abcdefghijklmnopqrstuvwx
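The slice behaves the same in plain Python, which is what the Jinja expression delegates to; a quick check (the account name here is just an illustration, not from the question):

```python
name = "mystorageaccountnamethatistoolong"

# Slicing never fails on short strings; it simply returns what is there,
# so this is safe for names both above and below the 24-character limit.
limited = name[:24]

assert len(limited) <= 24
print(limited)  # mystorageaccountnamethat
```

A short name such as "abc" passes through unchanged, so no length check is needed before slicing.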
Related
I created an array variable and tried to pass it to the max math function in ADF, but I'm getting an error. How do I use the max function there?
Array is one of the datatypes supported in ADF for both parameters and variables, so if you have a valid array then max will work. A simple example:
create a valid parameter of the Array datatype:
Create a variable and add a Set Variable activity. Use this expression as the Value argument:
@string(max(pipeline().parameters.pInputArray))
NB I'm using the max function directly on the array and then string as only the String, Array and Boolean datatypes are supported for variables (at this time).
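The expression is just max-then-stringify; a plain Python analogue of the same logic (the array value is made up for illustration):

```python
# Stand-in for pipeline().parameters.pInputArray
p_input_array = [3, 17, 5]

# max over the array, then cast to string, mirroring string(max(...))
value = str(max(p_input_array))
print(value)  # 17
```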
I wrote the code in SAS Enterprise Guide 8.3 and found that there is a space in the name of the file. Does anyone know why? Please see the attached screenshot for details. The space is after the characters "Y_2021". Thanks
year1 is being padded with spaces by the concatenation operator. It is better practice to use cats, catt, or catx rather than the || operator for concatenation. These functions ensure that the values being concatenated are not padded.
Use the code below instead.
csvfile_temp = cats('Y_', year1, '0101_', dateyyyymmdd, '.csv');
SAS has two types of variables. Floating point numbers and fixed length character strings.
You have not explicitly set the types or lengths of any of the variables, so SAS must decide how to define them based on how they are first used in the code. Since TODAY1 gets the result of a PUT() function using a format of width 8, it will be defined as length $8. And since the other variables are first seen as the results of functions operating on TODAY1, they will also be defined as length $8.
So the TRIM() function calls in your second and third assignment statements are doing nothing. TRIM() removes the trailing spaces, but the value is then stored back into a fixed-length variable, which pads it out to 8 bytes again.
You could define the variables before using them.
length today1 $8 year1 $4 ;
You could also create the variables with statements that you are sure will cause SAS to define the length to what you need.
today = today();
year = put(today,year4.);
dateyyyymmdd = put(today,yymmddn8.);
If you do need to concatenate variables that could have short values, then add the TRIM() function calls in the places where you need the trimmed values. Or, better still, use the CATS() function instead of the concatenation operator.
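The padding behaviour can be mimicked in Python, where str.ljust plays the role of a SAS fixed-length $8 character variable (a rough analogy only, not exact SAS semantics):

```python
# A SAS $8 variable holding a 4-character value behaves like a
# string padded to 8 characters with trailing spaces.
year1 = "2021".ljust(8)                    # '2021    '

padded  = "Y_" + year1 + "0101"            # pad survives: 'Y_2021    0101'
trimmed = "Y_" + year1.rstrip() + "0101"   # trim first:   'Y_20210101'

print(repr(padded))
print(repr(trimmed))
```

This is why trimming must happen at the point of concatenation: storing the trimmed value back into the fixed-length variable would re-pad it.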
I have an assignment similar to Scrabble. I have to check whether a subset is in the set; a letter can only be used once, so if the subset has 2 t's and the set has 1 t, the result is false.
My problem is that I used two inputs to let people enter the subset and the set, but that creates a string with no breaks between the letters, which means split() or list() won't create a LIST of individual letters (at least I can't find a way).
My plan was something like
wordset = word.lower().split()
subset = letters.lower()
for i in range(len(subset)):
    if i in subset and in set:
        set.remove(i)
I know that probably won't work, but until I can get it into a list, or someone gives me a hint how to do it with a string, I can't start testing it. Sorry for so much writing.
If you wish to get a list of characters in a given string you can use a list comprehension:
characters = [x for x in some_string]
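For the letter-availability check itself, collections.Counter handles the "use each letter at most once" rule directly by comparing multiplicities (a sketch, not necessarily what the assignment allows you to use):

```python
from collections import Counter

def can_spell(word, letters):
    """True if every letter in word is available in letters,
    counting multiplicity (each letter is used at most once)."""
    need = Counter(word.lower())
    have = Counter(letters.lower())
    return all(have[ch] >= count for ch, count in need.items())

print(can_spell("tt", "t"))      # False: word needs two t's, only one available
print(can_spell("cat", "tack"))  # True
```

This avoids mutating a list with remove() inside a loop, which is where the original sketch would run into trouble.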
I have the following definition in a yaml file:
keepalived:
  cluster_name: "cluster.example.lan"
  cluster_ip: "192.168.1.10"
  cluster_nic: "eth0"
haproxy:
  bind_address: %{hiera('keepalived::cluster_ip')}
And as a result, bind_address gets an empty string.
If I use %{hiera('keepalived')} I get the whole hash printed, but I need only cluster_ip from this hash. How can I look up the cluster_ip?
I think it is not possible:
Hiera can only interpolate variables whose values are strings. (Numbers from Puppet are also passed as strings and can be used safely.) You cannot interpolate variables whose values are booleans, numbers not from Puppet, arrays, hashes, resource references, or an explicit undef value.
Additionally, Hiera cannot interpolate an individual element of any array or hash, even if that element’s value is a string.
You can always define cluster_ip as a variable:
common::cluster_ip: "192.168.1.10"
and then use it:
keepalived:
  cluster_name: "cluster.example.lan"
  cluster_ip: "%{hiera('common::cluster_ip')}"
  cluster_nic: "eth0"
haproxy:
  bind_address: "%{hiera('common::cluster_ip')}"
Hiera uses the . in a string interpolation to look up sub-elements in an array or hash. Change your hiera code to look like this:
keepalived:
  cluster_name: "cluster.example.lan"
  cluster_ip: "192.168.1.10"
  cluster_nic: "eth0"
haproxy:
  bind_address: "%{hiera('keepalived.cluster_ip')}"
For an array, you use the array index (0 based) instead of a hash key.
See interpolating hash or array elements
Are there restricted character patterns within Azure TableStorage RowKeys? I've not been able to find any documented via numerous searches. However, I'm getting behavior that implies such in some performance testing.
I've got some odd behavior with RowKeys consisting of random characters (the test driver does prevent the restricted characters (/ \ # ?), plus single quotes, from occurring in the RowKey). The result is a RowKey that will insert fine into the table but cannot be queried (the result is InvalidInput). For example:
RowKey: 9}5O0J=5Z,4,D,{!IKPE,~M]%54+9G0ZQ&G34!G+
Attempting to query by this RowKey (equality) results in an error (within our app, in Azure Storage Explorer, and in Cloud Storage Studio 2). I took a look at the request being sent via Fiddler:
GET /foo()?$filter=RowKey%20eq%20'9%7D5O0J=5Z,4,D,%7B!IKPE,~M%5D%54+9G0ZQ&G34!G+' HTTP/1.1
It appears the %54 in the RowKey is not escaped in the filter. Interestingly, I get similar behavior for batch requests to table storage with URIs in the batch XML that include this RowKey. I've also seen similar behavior for RowKeys with embedded double quotes, though I have not isolated that pattern yet.
Has anyone come across this behavior? I can easily restrict additional characters from occurring in RowKeys, but would really like to know the 'rules'.
The following characters are not allowed in PartitionKey and RowKey fields:
The forward slash (/) character
The backslash (\) character
The number sign (#) character
The question mark (?) character
Further Reading: Azure Docs > Understanding the Table service data model
public static readonly Regex DisallowedCharsInTableKeys = new Regex(@"[\\#%+/?\u0000-\u001F\u007F-\u009F]");
Detection of Invalid Table Partition and Row Keys:
bool invalidKey = DisallowedCharsInTableKeys.IsMatch(tableKey);
Sanitizing the Invalid Partition or Row Key:
string sanitizedKey = DisallowedCharsInTableKeys.Replace(tableKey, disallowedCharReplacement);
At this stage you may also want to prefix the sanitized key (Partition Key or Row Key) with the hash of the original key to avoid false collisions of different invalid keys having the same sanitized value.
Do not use string.GetHashCode() for this, though: it may produce different hash codes for the same string (across processes and .NET versions), so it cannot be used to identify uniqueness and should not be persisted.
I use SHA256: https://msdn.microsoft.com/en-us/library/s02tk69a(v=vs.110).aspx
to create the byte-array hash of the invalid key, convert the byte array to a hex string, and prefix the sanitized table key with that.
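The sanitise-and-prefix idea translates to any SDK; here is a Python sketch of the same approach (the helper name and 8-character hash-prefix length are arbitrary choices of mine, and the character class mirrors the regex above):

```python
import hashlib
import re

# The disallowed/problematic character class from the answer above,
# in Python regex syntax.
DISALLOWED = re.compile(r'[\\#%+/?\u0000-\u001f\u007f-\u009f]')

def sanitize_key(key, replacement="_"):
    """Replace disallowed characters, then prefix a short SHA-256 hash of
    the original key so that two different invalid keys cannot collide
    after sanitisation."""
    cleaned = DISALLOWED.sub(replacement, key)
    digest = hashlib.sha256(key.encode("utf-8")).hexdigest()[:8]
    return digest + "-" + cleaned

print(sanitize_key("bad/key#1"))
```

Because the hex digest contains only [0-9a-f], the prefixed key itself stays free of disallowed characters.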
Also see related MSDN Documentation:
https://msdn.microsoft.com/en-us/library/azure/dd179338.aspx
Related Section from the link:
Characters Disallowed in Key Fields
The following characters are not allowed in values for the PartitionKey and RowKey properties:
The forward slash (/) character
The backslash (\) character
The number sign (#) character
The question mark (?) character
Control characters from U+0000 to U+001F, including:
The horizontal tab (\t) character
The linefeed (\n) character
The carriage return (\r) character
Control characters from U+007F to U+009F
Note that in addition to the mentioned chars in the MSDN article, I also added the % char to the pattern since I saw in a few places where people mention it being problematic. I guess some of this also depends on the language and the tech you are using to access the table storage.
If you detect additional problematic chars in your case, then you can add those to the regex pattern, nothing else needs to change.
I just found out (the hard way) that the '+' sign is allowed in a PartitionKey, but it is then not possible to query for that key.
I found that in addition to the characters listed in Igorek's answer, these also can cause problems (e.g. inserts will fail):
|
[]
{}
<>
$^&
Tested with the Azure Node.js SDK.
I transform the key using this function:
private static string EncodeKey(string key)
{
    return HttpUtility.UrlEncode(key);
}
This needs to be done for the insert and for the retrieve of course.
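The same round-trip works in Python with urllib (a sketch of the approach; HttpUtility.UrlEncode is the .NET side, and the helper name here is mine):

```python
from urllib.parse import quote, unquote

def encode_key(key):
    # safe="" percent-encodes every reserved character, so '/', '\\',
    # '#', '?' and '%' never reach the table service in raw form.
    return quote(key, safe="")

original = "9}5O0J=5Z,4,D,{!IKPE,~M]%54+9G0ZQ&G34!G+"
encoded = encode_key(original)

# Decoding on retrieval restores the original key exactly.
assert unquote(encoded) == original
print(encoded)
```

The trade-off is that keys stored this way are only readable after decoding, so every read path must call the decode step too.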