parse through a text file with elements separated by {} brackets - linux

Is there a way to parse through a text file that includes elements separated by {}s?
Here is a sample from the file:
virtual vs_devtnet_80 {
snat automap
pool pool_devtnet_80
destination 167.69.107.41:http
ip protocol tcp
profiles {
profile_http_health {}
tcp-lan-optimized {}
}
}
virtual vs_devdpp_4430 {
snat automap
pool pool_devdpp_5430
destination 167.69.107.31:https
ip protocol tcp
persist devdpp
profiles tcp-lan-optimized {}
}
virtual vs_devwww30_80 {
snat automap
pool pool_devwww30_80
destination 167.69.107.46:http
ip protocol tcp
profiles {
profile_http_health {}
tcp-lan-optimized {}
}
}
As you can see, the elements are separated, but with {} braces.
Any help would be gladly appreciated. I was trying to use grep, but it only returns one line...
I would like to be able to search by the topmost element, for example search.sh virtual vs_devtnet_80, and have it return the entire blob. Furthermore, perhaps be able to search for both the top layer and one of its sub-layers, for example search.sh virtual vs_devtnet_80 pool, which would return pool_devtnet_80.

Something like:
cat .tmp | sed '/.*{/ ! { s/.*//g}'
This won't solve it completely, but I think it does something similar to what you want.

Look at JSON parsers; they're written in all sorts of languages, and the syntax looks similar enough to give you some ideas on how to tackle this.
Basically what you need to do is have a recursive function that calls itself whenever it encounters a '{' and returns the content whenever it encounters a '}'.
I wrote an article on a lisp-like parser that actually does just that. Check it out here for inspiration: http://www.codeproject.com/KB/linq/TinyLisp.aspx
Rgds Gert-Jan
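The brace-matching idea described above can be sketched without a full recursive parser by simply counting braces. A minimal Python illustration (find_blob and the inline sample config are made up here, not part of any library):

```python
def find_blob(text, header):
    """Return the brace-delimited block whose header line starts at `header`.

    Scans forward from the header, incrementing a depth counter on '{'
    and decrementing on '}'; the block ends when depth returns to zero.
    """
    start = text.find(header)
    if start == -1:
        return None
    depth = 0
    for i in range(start, len(text)):
        if text[i] == '{':
            depth += 1
        elif text[i] == '}':
            depth -= 1
            if depth == 0:
                return text[start:i + 1]
    return None  # unbalanced braces

# A trimmed-down sample in the same shape as the question's config:
config = """virtual vs_devtnet_80 {
snat automap
pool pool_devtnet_80
profiles {
profile_http_health {}
}
}
virtual vs_devdpp_4430 {
pool pool_devdpp_5430
}"""

print(find_blob(config, "virtual vs_devtnet_80"))
```

Nested blocks like profiles {} are handled naturally, because inner braces raise and lower the depth counter without ever bringing it back to zero early.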

This is Tcl syntax, so you can set up a mechanism to run it as a Tcl script that creates a data structure of itself.
#!/usr/bin/env tclsh
# create a safe slave interpreter
set slave [interp create -safe]
# set up the slave to parse the input file
$slave eval {
    proc virtual {subkey body} {
        eval $body
    }
    proc unknown {cmd args} {
        upvar 1 subkey subkey ;# access the 'subkey' variable from the 'virtual' proc
        if {[llength $args] == 1} {set args [lindex $args 0]}
        dict set ::data $subkey $cmd $args
    }
    set data [dict create]
}
$slave expose source
# now do the parsing in the slave
$slave eval source [lindex $argv 0]
# fetch the data structure
set data [$slave eval set data]
# and do stuff with it.
dict for {subkey subdict} $data {
    puts "$subkey destination = [dict get $subdict destination]"
}
And then, parser.tcl input_file outputs
vs_devtnet_80 destination = 167.69.107.41:http
vs_devdpp_4430 destination = 167.69.107.31:https
vs_devwww30_80 destination = 167.69.107.46:http

This should give you the 2nd 'blob', for instance:
sed -n '/virtual vs_devdpp_4430.*/,/^}$/p' filename.txt
and pipe through grep -oP '(?<=pool ).*' to get what follows pool in that blob.

I ended up creating a recursive function that first used strpos to find the search variable within the entire config file, then pushed and popped brackets to return the searched variable's entire body.

Related

How to call a forward the value of a variable created in the script in Nextflow to a value output channel?

I have a process that generates a value. I want to forward this value into a value output channel, but I cannot seem to get it working in one "go" - I always have to write a file to the output and then define a new channel from the first:
process calculate {
    input:
    file div from json_ch.collect()
    path "metadata.csv" from meta_ch
    output:
    file "dir/file.txt" into inter_ch
    script:
    """
    echo ${div} > alljsons.txt
    mkdir dir
    python3 $baseDir/scripts/calculate.py alljsons.txt metadata.csv dir/
    """
}
ch = inter_ch.map{file(it).text}
ch.view()
how do I fix this?
thanks!
best, t.
If your script performs a non-trivial calculation, writing the result to a file like you've done is absolutely fine - there's nothing really wrong with this approach. However, since the 'inter_ch' channel already emits files (or paths), you could simply use:
ch = inter_ch.map { it.text }
It's not entirely clear what the objective is here. If the desire is to reduce the number of channels created, consider instead switching to the new DSL 2. This won't let you avoid writing your calculated result to a file, but it may let you avoid an intermediary channel.
On the other hand, if your Python script actually does something rather trivial and can be refactored away, it might be possible to assign a (global) variable (below the script: keyword) such that it can be referenced in your output declaration, like the line x = ... in the example below:
Valid output
values
are value literals, input value identifiers, variables accessible in
the process scope and value expressions. For example:
process foo {
    input:
    file fasta from 'dummy'
    output:
    val x into var_channel
    val 'BB11' into str_channel
    val "${fasta.baseName}.out" into exp_channel
    script:
    x = fasta.name
    """
    cat $x > file
    """
}
Other than that, your options are limited. You might have considered using the env output qualifier, but this just adds some syntactic sugar to your shell script at runtime, such that an output file is still created:
Contents of test.nf:
process test {
    output:
    env myval into out_ch
    script:
    '''
    myval=$(calc.py)
    '''
}
out_ch.view()
Contents of bin/calc.py (chmod +x):
#!/usr/bin/env python
print('foobarbaz')
Run with:
$ nextflow run test.nf
N E X T F L O W ~ version 21.04.3
Launching `test.nf` [magical_bassi] - revision: ba61633d9d
executor > local (1)
[bf/48815a] process > test [100%] 1 of 1 ✔
foobarbaz
$ cat work/bf/48815aeefecdac110ef464928f0471/.command.sh
#!/bin/bash -ue
myval=$(calc.py)
# capture process environment
set +u
echo myval=$myval > .command.env

Tcl script can't read "startreg(1)": no such variable

I tried to run a Tcl script that creates a geometry file from an input file (where the geometry is defined). The script can be run simply as script.tcl inputfile.
When I run it (on both Mac and Linux) using either wish or tclsh command, I get this error:
can't read "startreg(1)": no such variable
while executing
"if { $startreg($i)==0 && $stopreg($i)==0 } {
# All are material 1, change nothing
} else {
for {set iz $startz($i)} {$iz<=$stopz($i)} {incr i..."
invoked from within
"if [string compare $descrip regions]==0 {
# Get the mednum, start and stop regions
seek $fileid $startpos start
while { [eof $fileid] != 1 } {
..."
(procedure "read_inputfile" line 214)
invoked from within
"read_inputfile "
invoked from within
"if [file exists $inputfile]==1 {
read_inputfile
} else {
puts "The file $inputfile doesn't exist!"
exit
}"
(file "~/EGS_Windows/preview3d.tcl" line 580)
Any help/suggestion would be highly appreciated!
TA
You apparently have never initialized that variable.
% array set startreg {}
% puts $startreg(1)
can't read "startreg(1)": no such element in array
% unset startreg
% puts $startreg(1)
can't read "startreg(1)": no such variable
Is startreg a global variable, and did you forget to global startreg in a proc?
I notice another error in the stacktrace
if [string compare $descrip regions]==0 {
You surely want braces around the condition, so that the test is performed when you expect it to be performed:
if {[string compare $descrip regions]==0} {
This applies to all if expressions, and all expressions in general. See this wiki page: http://wiki.tcl.tk/10225
In this case, if {$descrip eq "regions"} is more clear.

How to modify a perl script to read excel instead of Html files

My first question is:
Is it possible to do this, given that I currently have a perl script which reads an Html file and extracts data to display on another html file?
If the answer for the question above is Yes, my second question would be:
How to do this?
Sorry to ask as frankly as this, but since I'm so new to perl and I have to take on this task, I'm here for some useful advice or suggestions to guide me through it. I appreciate your help in advance.
Here's a part of the code, since the whole chunk is quite long:
$date=localtime();
($TWDAY, $TMTH, $TD1D, $TSE, $TYY) = split(/\s+/, $date);
$TSE =~ s/\://g;
$STAMP=_."$TD1D$TMTH$TYY";
@ServerInfo=();
#--------------------------------------------------------------------------- -------------------------------
# Read Directory
#----------------------------------------------------------------------------------------------------------
$myDir=getcwd;
#----------------------------------------------------------------------------------------------------------
# INITIALIZE HTML FORMAT
#----------------------------------------------------------------------------------------------------------
&HTML_FORMAT;
#----------------------------------------------------------------------------------------------------------
# REPORT
#----------------------------------------------------------------------------------------------------------
if (! -d "$myDir/report") { mkdir("$myDir/report");};
$REPORTFILE="$myDir/report/checkpack".".htm";
open OUT,">$REPORTFILE" or die "\nCannot open out file $REPORTFILE\n\n";
print OUT "$Tag_Header";
#----------------------------------------------------------------------------------------------------------
sub numSort {
if ($b < $a) { return -1; }
elsif ($a == $b) { return 0;}
elsif ($b > $a) { return 1; }
}
@ArrayDir = sort numSort @DirArray;
#while (<@ArrayDir>) {
@OutputDir=grep { -f and -T } glob "$myDir/*.htm $myDir/*.html";
#}
#----------------------------------------------------------------------------------------------------------
@ReadLine3=();
$xyxycnt=0;
foreach $InputFile (@OutputDir) { #---- MAIN
$filename=(split /\//, $InputFile) [-1]; print "-"x80 ; print "\nFilename\t:$filename\n";
open IN, "<$InputFile" or die "Cannot open Input file $InputFile\n";
@MyData=();
$DataCnt=0;
@MyLine=();
$MyLineCnt=0;
while (<IN>) {
$LINE=$_;
chomp($LINE);
$LINE=~s/\<br\>/XYXY/ig;
$LINE=~s/\<\/td\>/ \nXYZXYZ\n/ig;
$LINE=~s/\<dirname\>/xxxdirnameyyy/ig;
$LINE=linetrim3($LINE);
$LINE=linetrim($LINE);
$LINE=~s/XYXY/\<br\>/ig;
$LINE=~s/xxxdirnameyyy/&lt dirname &gt/ig;
$LINE=~s/^\s+//ig;
print OUT2 "$LINE\n";
if (defined($LINE)) { $MyData[$DataCnt]="$LINE"; $DataCnt++ ; }
}
close IN;
foreach $ReadFile (@MyData) { #--- Mydata
$MyLineCnt++;
$MyLine[$MyLineCnt]="";
#### FILENAME
$ServerInfo[0]="$filename";
#### IP ADDRESS
if ($ReadFile =~ /Host\/Device Name\:/) {
#print "$ReadFile\n"
($Hostname)=(split /\:|\s+/, $ReadFile)[3]; print "$Hostname\n";
&myServerInfo("$Hostname","1");
}
if ($ReadFile =~ /IP Address\(es\)/) {@ListIP=(); $SwIP=1; $CntIP=0 ; };
#### OPERATING SYSTEM & VERSION
if ($ReadFile =~ /Operating System\:/) {
$SwIP=0;
$OS= (split /\:|\s+/, $ReadFile)[3]; &myServerInfo("$OS","3") ; print "$OS\n";
$OSVer= (split /\:|\s+/, $ReadFile)[-2]; &myServerInfo("$OSVer","4") ; print "$OSVer\n";
};
#### GET IP VALUE
if ($SwIP==1) {
$ReadFile=(split /\:/,$ReadFile) [2];
$ReadFile=~s/[a-z|A-Z]|\(|\)|\// /ig; print "$ReadFile\n";
if ($CntIP==0) {
#$ListIP[$CntIP]=(split /\s+/,$ReadFile) [1];
@ListIP="$ReadFile";
} elsif ($CntIP==1) { print "\n\t\t $ReadFile\n" ; $ListIP[$CntIP]="\n$ReadFile";
} else { print "\t\t $ReadFile\n" ; $ListIP[$CntIP]="\n$ReadFile"; };
$CntIP++;
}
I'm afraid if you don't understand what is going on in this program and you also don't understand how to approach a task like this at all, Stack Overflow might not be the right place to get help.
Let me try to show you the approach I would take with this. I'm assuming there is more code.
First, write down a list of everything you know:
What is the input format of the existing file
Where does the existing file come from now
What is the output format of the existing file
Where does the generated output file go afterwards
What does the new file look like
Where does the new file come from
Use perltidy to indent the inherited code so you can read it better. The default options should be enough.
Read the code, take notes about what pieces do what, add comments
Write a unit test for the desired output format. You can use Test::More. Another useful testing module here is Test::File.
Refactor the part that generated the output format to work with a certain data structure. Use your tests to make sure you don't break it.
Write code to parse the new file into the data structure from the point above. Now you can plug that in and get the expected output.
Refactor the part that takes the old input file from the existing file location to be a function, so you can later switch it for the new one.
Write code to get the new file from the new file location.
Document what you did so the next guy is not in the same situation. Remember that could be you in half a year.
Also add use strict and use warnings while you refactor to catch errors more easily. If stuff breaks because of that, make it work before you continue. Those pragmas tell you what's wrong. The most common one you will encounter is Global symbol "$foo" requires explicit package name. That means you need to put my in front of the first assignment, or declare the variable before.
If you have specific questions, ask them as a new question with a short example. Read how to ask to make sure you will get help on those.
Good luck!
After seeing your comment, I am thinking you want a different input and a different output. In that case, disregard this, throw away the old code and start from scratch. If you don't know enough Perl, get a book like Curtis Poe's Beginning Perl if you already know programming. If not, check out Learning Perl by Randal L. Schwartz.

get all keys set in memcached

How can I get all the keys set in my memcached instance(s)?
I tried googling, but didn't find much except that PHP supports a getAllKeys method, which means it is actually possible to do this somehow. How can I get the same within a telnet session?
I have tried out all the retrieval related options mentioned in memcached cheat sheet and Memcached telnet command summary, but none of them work and I am at a loss to find the correct way to do this.
Note: I am currently doing this in development, so it can be assumed that there will be no issues due to new keys being set or other such race conditions happening, and the number of keys will also be limited.
Found a way, thanks to the link here (with the original google group discussion here)
First, Telnet to your server:
telnet 127.0.0.1 11211
Next, list the items to get the slab ids:
stats items
STAT items:3:number 1
STAT items:3:age 498
STAT items:22:number 1
STAT items:22:age 498
END
The first number after ‘items’ is the slab id. Request a cache dump for each slab id, with a limit for the max number of keys to dump:
stats cachedump 3 100
ITEM views.decorators.cache.cache_header..cc7d9 [6 b; 1256056128 s]
END
stats cachedump 22 100
ITEM views.decorators.cache.cache_page..8427e [7736 b; 1256056128 s]
END
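The ITEM lines printed by stats cachedump have a regular shape, so they are easy to post-process. A small Python sketch (parse_item is a hypothetical helper, not part of memcached or any client library):

```python
import re

# Matches "ITEM <key> [<size> b; <expiry> s]" as printed by `stats cachedump`
ITEM_RE = re.compile(r'^ITEM (\S+) \[(\d+) b; (\d+) s\]$')

def parse_item(line):
    """Return (key, size_bytes, expiry) for one cachedump ITEM line, or None."""
    m = ITEM_RE.match(line.strip())
    if not m:
        return None
    key, size, expiry = m.groups()
    return key, int(size), int(expiry)

print(parse_item("ITEM views.decorators.cache.cache_header..cc7d9 [6 b; 1256056128 s]"))
```

Lines such as END or the STAT rows simply fail the match and return None, which makes it convenient to filter a raw telnet transcript down to just the keys.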
memdump
There is a memcdump (sometimes memdump) command for that (part of libmemcached-tools), e.g.:
memcdump --servers=localhost
which will return all the keys.
memcached-tool
In the recent version of memcached there is also memcached-tool command, e.g.
memcached-tool localhost:11211 dump | less
which dumps all keys and values.
See also:
What's the simplest way to get a dump of all memcached keys into a file?
How do I view the data in memcache?
Based on @mu 無's answer here, I've written a cache dump script.
The script dumps all the content of a memcached server. It's tested with Ubuntu 12.04 and a localhost memcached, so your mileage may vary.
#!/usr/bin/env bash
echo 'stats items' \
| nc localhost 11211 \
| grep -oe ':[0-9]*:' \
| grep -oe '[0-9]*' \
| sort \
| uniq \
| xargs -L1 -I{} bash -c 'echo "stats cachedump {} 1000" | nc localhost 11211'
What it does is go through all the cache slabs and print 1000 entries of each.
Please be aware of certain limits of this script i.e. it may not scale for a 5GB cache server for example. But it's useful for debugging purposes on a local machine.
If you have PHP & PHP-memcached installed, you can run
$ php -r '$c = new Memcached(); $c->addServer("localhost", 11211); var_dump( $c->getAllKeys() );'
Bash
To get the list of keys in Bash, follow these steps.
First, define the following wrapper function to make it simple to use (copy and paste into shell):
function memcmd() {
    exec {memcache}<>/dev/tcp/localhost/11211
    printf "%s\n%s\n" "$*" quit >&${memcache}
    cat <&${memcache}
}
Memcached 1.4.31 and above
You can use lru_crawler metadump all command to dump (most of) the metadata for (all of) the items in the cache.
As opposed to cachedump, it does not cause severe performance problems and has no limits on the amount of keys that can be dumped.
Example command by using the previously defined function:
memcmd lru_crawler metadump all
See: ReleaseNotes1431.
Memcached 1.4.30 and below
Get list of slabs by using items statistics command, e.g.:
memcmd stats items
For each slab class, you can get the list of items by specifying the slab id along with a limit number (0 - unlimited):
memcmd stats cachedump 1 0
memcmd stats cachedump 2 0
memcmd stats cachedump 3 0
memcmd stats cachedump 4 0
...
Note: You need to do this for each memcached server.
To list all the keys from all slabs, here is the one-liner (per one server):
for id in $(memcmd stats items | grep -o ":[0-9]\+:" | tr -d : | sort -nu); do
    memcmd stats cachedump $id 0
done
Note: The above command could cause severe performance problems while accessing the items, so it's not advised to run on live.
Notes:
stats cachedump only dumps the HOT_LRU (IIRC?), which is managed by a background thread as activity happens. This means that under a new enough version with the 2Q algo enabled, you'll get snapshot views of what's in just one of the LRUs.
If you want to view everything, lru_crawler metadump 1 (or lru_crawler metadump all) is the new mostly-officially-supported method that will asynchronously dump as many keys as you want. You'll get them out of order, but it hits all LRUs, and unless you're deleting/replacing items, multiple runs should yield the same results.
Source: GH-405.
Related:
List all objects in memcached
Writing a Redis client in pure bash (it's Redis, but very similar approach)
Check other available commands at https://memcached.org/wiki
Check out the protocol.txt docs file.
The easiest way is to use python-memcached-stats package, https://github.com/abstatic/python-memcached-stats
The keys() method should get you going.
Example -
from memcached_stats import MemcachedStats
mem = MemcachedStats()
mem.keys()
['key-1',
'key-2',
'key-3',
... ]
I was using Java's spyMemcached, and used this code. It is based on Anshul Goyal's answer
@Autowired
@Qualifier("initMemcachedClient")
private MemcachedClient memcachedClient;

public List<String> getCachedKeys() {
    Set<Integer> slabIds = new HashSet<>();
    Map<SocketAddress, Map<String, String>> stats;
    List<String> keyNames = new ArrayList<>();
    // Gets all the slab IDs
    stats = memcachedClient.getStats("items");
    stats.forEach((socketAddress, value) -> {
        System.out.println("Socket address: " + socketAddress.toString());
        value.forEach((propertyName, propertyValue) -> {
            slabIds.add(Integer.parseInt(propertyName.split(":")[1]));
        });
    });
    // Gets all keys in each slab ID and adds in List keyNames
    slabIds.forEach(slabId -> {
        Map<SocketAddress, Map<String, String>> keyStats = memcachedClient.getStats("cachedump " + slabId + " 0");
        keyStats.forEach((socketAddress, value) -> {
            value.forEach((propertyName, propertyValue) -> {
                keyNames.add(propertyName);
            });
        });
    });
    System.out.println("number of keys: " + keyNames.size());
    return keyNames;
}
Java Solution:
Thanks, @Satvik Nema!
Your solution helped me find the approach, but it doesn't work with xmemcached 2.4.6 (implementation 'com.googlecode.xmemcached:xmemcached:2.4.6').
I'm not sure when the new method getStatsByItem was introduced.
I figured out required changes using documentation and below code worked for me.
// Gets all the slab IDs
Set<Integer> slabIds = new HashSet<>();
Map<InetSocketAddress, Map<String, String>> itemsMap = null;
try {
    itemsMap = this.memcachedClient.getStatsByItem("items");
} catch (Exception e) {
    log.error("Failed while pulling 'items'. ERROR", e);
}
if (Objects.nonNull(itemsMap)) {
    itemsMap.forEach((key, value) -> {
        log.info("itemsMap {} : {}", key, value);
        value.forEach((k, v) -> {
            slabIds.add(Integer.parseInt(k.split(":")[1]));
        });
    });
}
// Gets all keys in each slab ID and adds in List keyNames
slabIds.forEach(slabId -> {
    Map<InetSocketAddress, Map<String, String>> keyStats = null;
    try {
        keyStats = this.memcachedClient.getStatsByItem("cachedump " + slabId + " 0");
    } catch (Exception e) {
        log.error("Failed while pulling 'cachedump' for slabId: {}. ERROR", slabId, e);
    }
    if (Objects.nonNull(keyStats)) {
        keyStats.forEach((socketAddress, value) -> {
            value.forEach((propertyName, propertyValue) -> {
                //keyNames.add(propertyName);
                log.info("keyName: {} Value: {}", propertyName, propertyValue);
            });
        });
    }
});

Puppet iteration string/array

Can you think of a way to solve this problem in Puppet?
I have a custom fact which generates a string of IP addresses depending on the domain it is run on; it can resolve to anywhere from 1 to n addresses.
"10.1.29.1"
"10.1.29.1,10.1.29.5"
"10.1.29.1,10.1.29.5,10.1.29.7"
etc
I want to add these to the hosts file with generated server names of the form servernameX, for example:
10.1.29.1 myservername1
10.1.29.5 myservername2
10.1.29.7 myservername3
So how can you do this, given that Puppet doesn't have an array iterator like "for each"?
Sadly, even if you go ahead and use a custom "define" to iterate over an array after splitting your custom fact on commas, the result will be rather not what you expect and not even close to a "for each" loop -- aside from probably causing you a headache.
Said that, I am not sure if this is what you want to achieve, but have a look at this approach:
$fact = '1.1.1.1,2.2.2.2,3.3.3.3'
$servers = split($::fact, ',')
$count = size($servers)
$names = bracket_expansion("host[01-${count}].address")

file { '/tmp/test.txt':
    content => inline_template('<%= @servers.each_with_index.map {|v,i| "#{v}\t\t#{@names[i]}\n" } %>'),
    ensure  => present
}
What we have there are two custom functions, size() and bracket_expansion(); we then use the values they provide inside a hack that leverages the inline_template() function to render the content of the file, accessing two arrays in parallel -- one with IP addresses from your fact and one with the host names that should follow.
The result is as follows:
matti@acrux ~ $ cat | puppet apply
$fact = '1.1.1.1,2.2.2.2,3.3.3.3'
$servers = split($::fact, ',')
$count = size($servers)
$names = bracket_expansion("host[01-${count}].address")
file { '/tmp/test.txt':
    content => inline_template('<%= @servers.each_with_index.map {|v,i| "#{v}\t\t#{@names[i]}\n" } %>'),
    ensure  => present
}
notice: /Stage[main]//File[/tmp/test.txt]/ensure: created
notice: Finished catalog run in 0.07 seconds
matti@acrux ~ $ cat /tmp/test.txt
1.1.1.1 host01.address
2.2.2.2 host02.address
3.3.3.3 host03.address
matti@acrux ~ $
Both size() and bracket_expansion() functions can be found here:
https://github.com/kwilczynski/puppet-functions/tree/master/lib/puppet/parser/functions/
I hope this helps a little :-)
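Puppet specifics aside, the underlying transformation is just pairing each address with a generated name. A language-neutral sketch in Python (the hosts_entries helper and the "myservername" prefix are taken from the question's example, not from any Puppet API):

```python
def hosts_entries(fact, prefix="myservername"):
    """Turn a comma-separated IP string into hosts-file lines.

    Each address is paired with a generated name: prefix1, prefix2, ...
    """
    ips = fact.split(',')
    return ["%s %s%d" % (ip, prefix, i) for i, ip in enumerate(ips, start=1)]

# Example with the fact value from the question:
print("\n".join(hosts_entries("10.1.29.1,10.1.29.5,10.1.29.7")))
```

In modern Puppet (4+) the same pairing can be done natively with the each() function over the split fact, without custom functions or inline templates.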

Resources