Search and replace with the result of a command - vim

In vim, I want to search and replace some text, but I want to replace it with the output of a command.
Given the text:
{ "_template": "foo" }
{ "_template": "foo" }
I want to search and replace that to become:
{ "_id": "239c55fd-538e-485f-8588-83d9735b6819" }
{ "_id": "2ae9f49f-244c-47b0-8f0f-c6c46e860af3" }
The latter is the result of the unix/linux command uuidgen.
It would look something like :s/"_template": "foo"/"_id": "<uuidgen>"/, but I'm unsure what to put at <uuidgen>. The command uuidgen is just an example; it could be any command that takes no arguments and needs no stdin (unlike, say, wc).
It needs to call the command again for each replacement.
Is this possible at all with vim, or would I be better off using sed and/or awk instead?

Hmm...
Something like this?
:s/"_template": "foo"/\='"_id": "'.trim(system('uuidgen')).'"'
The key is the \= replacement expression of :s (see :help sub-replace-expression).
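To run it over the whole buffer, something like this should do it (a sketch: the \= expression is evaluated anew for every substitution, so uuidgen runs once per match; trim() needs a reasonably recent Vim, otherwise substitute(system('uuidgen'), '\n', '', '') does the same job):
:%s/"_template": "foo"/\='"_id": "'.trim(system('uuidgen')).'"'/g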

How do I grep and replace string in bash

I have a file which contains my json
{
"type": "xyz",
"my_version": "1.0.1.66~22hgde",
}
I want to edit the value for the key my_version and every time replace the number after the third dot with another number, stored in a variable, so it becomes something like 1.0.1.32~22hgde. I am using sed to replace it:
sed -i "s/\"my_version\": \"1.0.1.66~22hgde\"/\"my_version\": \"1.0.1.$VAR~22hgde\"/g" test.json
This works, but the issue is that the my_version string doesn't stay constant; it can change to something like 1.0.2.66 or 2.0.1.66. So how do I handle such a case in bash?
how do I handle such a case?
You write a regular expression that matches any possible combination of characters that can appear there. You can learn regex the fun way with regex crossword puzzles online.
Do not edit JSON files with sed - sed is for lines. Consider using JSON aware tools - like jq, which will handle any possible case.
A jq answer: file.json contains
{
"type": "xyz",
"my_version": "1.0.1.66~22hgde",
"object": "can't end with a comma"
}
then, replacing the last octet before the tilde:
VAR=32
jq --arg octet "$VAR" '.my_version |= sub("[0-9]+(?=~)"; $octet)' file.json
outputs
{
"type": "xyz",
"my_version": "1.0.1.32~22hgde",
"object": "can't end with a comma"
}
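If jq isn't available, the regex route from the first answer would look roughly like this (a sketch with GNU sed, assuming the version is always four dot-separated numeric fields followed by a tilde):
sed -i -E "s/(\"my_version\": \"[0-9]+\.[0-9]+\.[0-9]+\.)[0-9]+~/\1$VAR~/" test.json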

Find and return an if code block from a file

I am writing a bash script and need to find whether a code block starting with
if (isset($conf['memcache_servers'])) {
exists in a file. If it does, I need to return the whole if block.
How can I do that?
Code block return example:
if (isset($conf['memcache_servers'])) {
$conf['cache_backends'][] = '.memcache.inc';
$conf['cache_default_class'] = 'MemCache';
$conf['cache_class_cache_form'] = 'DatabaseCache';
}
You can use sed to do this. From a bash command line, run this:
sed -n "/if (isset(\$conf\['memcache_servers'\]))/,/}/p" inputFile
This uses sed's address range /pattern1/,/pattern2/ and p to print everything between and including the if ... { and } lines.
Here, I have used double quotes around the sed script because the first pattern contains single quotes ('), with the $ escaped so the shell doesn't expand it; the square brackets need to be escaped as well: \[ and \].
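If the block can contain nested braces, the /}/ end address stops at the first closing brace. A rough awk sketch that counts braces instead (the single quotes around memcache_servers are matched with . because the whole script sits inside shell single quotes):
awk '/if \(isset\(\$conf\[.memcache_servers.\]\)\)/ { found=1 }
     found { print; depth += gsub(/\{/,"{"); depth -= gsub(/\}/,"}"); if (depth == 0) exit }' inputFile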

Is there a simple way to group multiple lines of a command output based on a beginning and end match?

Is there a simple way to group multiple lines which match a pattern into single lines?
Basically, the output of a command lists something like:
key1 blah blah = dict {
unrelated stuff {
}
something I actually want to match via grep or something
some common end term for key1 I can use as an end pattern match
}
x 100 similar keys
My end-game here in this specific case is to strip an XML of entries which have a specific entry within them. I could do this (and solve a lot of other day-to-day problems) if each entry was its own line instead of multi-line (grep in the matches, sed out the text after the bracket, etc.)
Something like:
print multi-line crap | merge beginningpattern endpattern | grep lines now that everything is merged
Basically, the 'merge' command would strip all linefeeds between each new beginningpattern and endpattern (maybe putting a linefeed at the end).
awk and gsub would be the right way, if I understand your question correctly. For example:
required_string=$(awk 'BEGIN { x=0; y=0 }
  /<yourStartingString>/ { x=1 }
  /<EndingString>/ { x=0 }
  { if (x==1 && y==1) { gsub(/(.*<grepforwhatyouneed>)|(<endgrep>)/, ""); print } }
  { if (x==1 && y==0) y=1 }' "$i.xml")
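A more direct take on the hypothetical 'merge' filter from the question would be a rough sketch like this (merge is a made-up helper; its two arguments are awk regexes for the begin and end patterns, and it keeps backslashes out of them since awk -v processes escapes):
merge() {
  awk -v b="$1" -v e="$2" '
    $0 ~ b { joining=1; buf="" }
    joining { buf = buf (buf == "" ? "" : " ") $0; if ($0 ~ e) { print buf; joining=0 }; next }
    { print }'
}
It would then be used as in the question's pseudocode: ... | merge 'beginningpattern' 'endpattern' | grep ...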

Bash - String replace of index-based substring

I have a file that includes, among other things, a JSON document. The JSON contains passwords that need masking. The bash script responsible for the masking has no way of knowing the actual password itself, so it's not a simple sed search-and-replace.
The passwords appear within the json under a constant key named "password" or "Password". Typically, the appearance is like -
...random content..."Password\":\"actualPWD\"...random content....
The bash script needs to change such appearances to -
...random content..."Password\":\"******\"...random content....
The quotes aren't important, so even ...random
content..."Password\":******...random content...
would work.
I reckon the logic would need to find the index of the ':' that appears after the text "Password"/"password", then take the substring from that point up to the second occurrence of a quote (") and replace the whole thing with *****. But I'm not sure how to do this with sed or awk. Any suggestion would be helpful.
Perl to the rescue!
perl -pe 's/("[Pp]assword\\":\\")(.*?)(\\")/$1 . ("*" x length $2) . $3/ge'
/e interprets the replacement part as code, so you can use the repetition operator x to repeat the asterisk length $2 times.
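For the sample line from the question, this should give:
echo '...random content..."Password\":\"actualPWD\"...random content....' | perl -pe 's/("[Pp]assword\\":\\")(.*?)(\\")/$1 . ("*" x length $2) . $3/ge'
...random content..."Password\":\"*********\"...random content....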
Since JSON is structured, any approach based solely on regular expressions is bound to fail at some point unless the input is constrained in some way. It would be far better (simpler and safer) to use a JSON-aware approach.
One particularly elegant JSON-aware tool worth knowing about is jq. (Yes, the "j" is for JSON :-)
Assuming we have an input file consisting of valid JSON and that we want to change the value of every "password" or "Password" key to "******" (no matter how deeply nested the object having these keys may be), we could proceed as follows:
Place the following into a file, say mask.jq:
def mask(p): if has(p) then .[p] = "******" else . end;
.. |= if type == "object"
then mask("password") | mask("Password") else . end
Now suppose in.json has this JSON:
{"password": "secret", "details": [ {"Password": "another secret"} ]}
Then executing the command:
jq -f mask.jq in.json
produces:
{
"password": "******",
"details": [
{
"Password": "******"
}
]
}
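With jq 1.6 or later, where walk is built in, roughly the same thing can be done directly on the command line (a rough sketch):
jq 'walk(if type == "object" then with_entries(if .key == "password" or .key == "Password" then .value = "******" else . end) else . end)' in.json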
More on jq at https://github.com/stedolan/jq

Extract text between braces

I have a string as:
MESSAGES { "Instance":[{"InstanceID":"i-098098"}] } ff23710b29c0220849d4d4eded562770 45c391f7-ea54-47ee-9970-34957336e0b8
I need to extract the part { "Instance":[{"InstanceID":"i-098098"}] }, i.e. from the first occurrence of '{' to the last occurrence of '}', and keep it in a separate file.
If you have this in a file,
sed 's/^[^{]*//;s/[^}]*$//' file
(This will print to standard output. Redirect to a file or capture into a variable or do whatever it is that you want to do with it.)
If you have this in a variable called MESSAGES,
EXTRACTED=${MESSAGES#*{}
EXTRACTED="{${EXTRACTED%\}*}}"
I would suggest either sed or awk from this article. But initial testing shows it's a little more complicated and you will probably have to use a combination or a pipe:
echo 'MESSAGES { "Instance":[{"InstanceID":"i-098098"}] } ff23710b29c0220849d4d4eded562770 45c391f7-ea54-47ee-9970-34957336e0b8' | sed 's/^\(.*\)}.*$/\1}/' | sed 's/^[^{]*{/{/'
So the first sed deletes everything after the last } and replaces it with a } so it still shows; and the second sed deletes everything up to the first { and replaces it with a { so it still shows.
This is the output:
{ "Instance":[{"InstanceID":"i-098098"}] }
