How to assign a value that changes during execution in vim (lightline plugin)?

I'm hacking on the lightline plugin of vim (downloaded version). I can modify the colors of each theme; I did something that works well in the powerline.vim scheme (path: ~/.vim/pack/plugins/start/lightline/autoload/lightline/colorscheme/powerline.vim).
Now I want the color scheme to change while I'm in vim. I added this code at the beginning of powerline.vim:
10 let s:BSsplitscolor = "'darkestgreen', 'brightgreen'"
11 if g:BSsplitsbool == "1"
12 let s:BSsplitscolor = "'gray4', 'brightorange'"
13 endif
14
15 " ============================== NOTE: below : already there
16
17 let s:p = {'normal': {}, 'inactive': {}, 'insert': {}, 'replace': {}, 'visual': {}, 'tabline': {}}
18 let s:p.normal.left = [ [s:BSsplitscolor, 'bold'], ['white', 'gray4'] ]
Here s:BSsplitscolor contains the colors I want: either 'gray4', 'brightorange' if g:BSsplitsbool equals 1, or 'darkestgreen', 'brightgreen' if not. It's g:BSsplitsbool that changes.
Now the problem is at line 18: when I add s:BSsplitscolor after [ [, I get these errors when I restart vim (translated from French):
Error detected while processing function lightline#update[5]..lightline#colorscheme[18]..lightline#highlight:
line 18:
E254: Cannot allocate color darkestgreen
E416: missing equal sign: , 'brightgreen' guibg=bold ctermfg=0 ctermbg=0
Error detected while processing function lightline#update:
line 5:
E171: Missing :endif
I think I'm missing something... I'm not so good at vim scripting: I can write an if statement and a remap, and that's all.

First, the solution:
let s:BSsplitscolor = ['darkestgreen', 'brightgreen']
[...]
let s:p.normal.left = [ s:BSsplitscolor + ['bold'], ['white', 'gray4'] ]
Second, the explanation:
You are trying to build a list of three items:
['darkestgreen', 'brightgreen', 'bold']
out of a string that vaguely looks like a list:
"'darkestgreen', 'brightgreen'"
and a list with a single string:
['bold']
by inserting that string in that list:
[ s:BSsplitscolor, 'bold']
which gives you this monstrosity:
['''darkestgreen'', ''brightgreen''', 'bold']
which is a list of two items, not at all what you are trying to build. I'm not aware of a scripting language where something like that could be expected to work.
The actual solution is to make s:BSsplitscolor a list:
let s:BSsplitscolor = ['darkestgreen', 'brightgreen']
and merge it with ['bold']. This can be done in several ways. With :help expr-+:
let s:p.normal.left = [ s:BSsplitscolor + ['bold'], ['white', 'gray4'] ]
or with :help extend() (note that extend() modifies s:BSsplitscolor in place):
let s:p.normal.left = [ extend(s:BSsplitscolor, ['bold']), ['white', 'gray4'] ]
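Putting it together, the top of powerline.vim would then look like this (a minimal sketch; the get() fallback for an undefined g:BSsplitsbool is my addition, not part of the original):
let s:BSsplitscolor = ['darkestgreen', 'brightgreen']
" use 0 as the default when g:BSsplitsbool has not been set yet
if get(g:, 'BSsplitsbool', 0) == 1
  let s:BSsplitscolor = ['gray4', 'brightorange']
endif
[...]
let s:p.normal.left = [ s:BSsplitscolor + ['bold'], ['white', 'gray4'] ]
Keep in mind that this file only runs when lightline loads the colorscheme, so after changing g:BSsplitsbool at runtime you will likely still need to call lightline#colorscheme() (or lightline#update()) to see the new colors.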

Related

How to force gem that converts all bins to 93k multibins to output 93k native bins?

My need is to get good old-fashioned 93k native bad bins defined in my testflow. My Ruby file compiles, but it looks like the gem is converting all bins to multibins. Is there a way to force this from my Ruby file instead of hacking the gem files? If yes, going ahead with this, I couldn't find how to specify the hardbin description and softbin description in Origen. That is something I would like to add in the Ruby code instead of on the ATE.
Also, on a side note, I am trying to force the output file name to something I want. In the sample code below, I want the output file to be test.tf. The gem is adding some string and an underscore in front of "test". I don't need that either.
sample code:
Flow.create interface: 'MyTester::Interface', params: :room, unique_test_names: nil,
            flow_name: :test, file_name: :test, insertion: :prb do
  test_info1 = {
    "key_1" => [{ :testname => "t1", :sbin => 100, :patternname => "p1" }],
    "key_2" => [{ :testname => "t2", :sbin => 200, :patternname => "t3" }]
  }
  testnum = 100000
  test_info1.each do |key, val|
    puts key
    val.each do |info|
      tname, sb, pname = info.values_at(:testname, :sbin, :patternname)
      puts "#{tname} : #{sb} : #{pname}"
      test_suites.add("#{tname}", pattern: "#{pname}", tim_spec_set: 1, timset: 1,
                      lev_equ_set: 1, lev_spec_set: 10, levset: 1,
                      test_method: test_methods.ac_tml.ac_test.functional_test)
      testnum = testnum + 100
      test :"#{tname}", bin: 10, softbin: "#{sb}", tnum: testnum
    end
  end
end

Why is my file path string not splitting?

I want to find files in a directory, then split the pathname and print each part of the path on a separate line:
(Directory working: '.')
    allFilesMatching: '*.st' do: [ :ff | (ff name)
        findTokens: '/' "Linux separator"
        "splitOn: '/' - this also does not work"
        do: [ :i |
            i displayNl ]]
However, it gives the following error:
$ gst firstline.st
"Global garbage collection... done"
Object: '/home/abcd/firstline.st' error: did not understand #findTokens:do:
MessageNotUnderstood(Exception)>>signal (ExcHandling.st:254)
String(Object)>>doesNotUnderstand: #findTokens:do: (SysExcept.st:1448)
optimized [] in UndefinedObject>>executeStatements (firstline.st:3)
[] in Kernel.RecursiveFileWrapper(FilePath)>>filesMatching:do: (FilePath.st:903)
[] in Kernel.RecursiveFileWrapper>>namesDo:prefixLength: (VFS.st:378)
[] in File>>namesDo: (File.st:589)
BlockClosure>>ensure: (BlkClosure.st:268)
File>>namesDo: (File.st:586)
Kernel.RecursiveFileWrapper>>namesDo:prefixLength: (VFS.st:373)
Kernel.RecursiveFileWrapper>>namesDo: (VFS.st:396)
Kernel.RecursiveFileWrapper(FilePath)>>filesMatching:do: (FilePath.st:902)
File(FilePath)>>allFilesMatching:do: (FilePath.st:775)
Directory class>>allFilesMatching:do: (Directory.st:225)
UndefinedObject>>executeStatements (firstline.st:2)
The error message is really long and complex!
Neither findTokens: nor splitOn: works.
Where is the problem, and how can it be solved?
The message may be long, but the quoted line states the reason:
Object: '/home/abcd/firstline.st' error: did not understand #findTokens:do:
You probably want to split differently, using subStrings: with a character argument. I just tried it on the Windows version of GNU Smalltalk:
The command:
'C:\prg_sdk\GNU Smalltalk(x86)\share\smalltalk\unsupported\torture.st' subStrings: $\
The result:
OrderedCollection ('C:' 'prg_sdk' 'GNU Smalltalk(x86)' 'share' 'smalltalk' 'unsupported' 'torture.st' )
From there you have the path parts in a collection, and you can rebuild the path starting either from the beginning or from the end.
For example, you can start from the beginning like this:
resultPath := nil.
pathCollection := 'C:\prg_sdk\GNU Smalltalk(x86)\share\smalltalk\unsupported\torture.st' subStrings: $\.
pathCollection do: [ :eachPartPath |
    resultPath := resultPath isNil
        ifTrue: [ eachPartPath ]
        ifFalse: [ resultPath, '\', eachPartPath ].
    resultPath displayNl ]
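Applied to the original loop, the same approach could look like this (a minimal sketch, assuming GNU Smalltalk on Linux, where $/ is the separator):
(Directory working: '.')
    allFilesMatching: '*.st' do: [ :ff |
        "subStrings: answers the path parts; print each on its own line"
        (ff name subStrings: $/) do: [ :part |
            part displayNl ] ]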

logstash parse complex message from Telegram

I'm processing Telegram history (a txt file) and I need to extract and process a fairly complex (nested) multiline pattern.
Here's the whole pattern
Free_Trade_Calls__AltSignals:IOC/ BTC (bittrex)
BUY : 0.00164
SELL :
TARGET 1 : 0.00180
TARGET 2 : 0.00205
TARGET 3 : 0.00240
STOP LOS : 0.000120
2018-04-19 15:46:57 Free_Trade_Calls__AltSignals:TARGET
basically I am looking for a pattern starting with
Free_Trade_Calls__AltSignals: ^%(
and ending with a timestamp.
Inside that pattern (a Telegram message) I need to extract:
- the exchange - in brackets in the 1st line
- the value after BUY
- the SELL values in an array of 3, SELL[3]: TARGET 1-3
- the STOP loss value (it can be either STOP, STOP LOSS, or STOP LOS)....
I've found this: Logstash grok multiline message. I am very new to logstash; a friend advised it to me. I was trying to parse this text in NodeJS first, but it was a real pain.
Thanks Rob :)
Since you need to grab values from each line, you don't need to use a multiline modifier. You can skip empty lines with %{SPACE}.
For your given log, this pattern can be used:
Free_Trade_Calls__AltSignals:.*\(%{WORD:exchange}\)\s*BUY\s*:\s*%{NUMBER:BUY}\s*SELL :\s*TARGET 1\s*:\s*%{NUMBER:TARGET_1}\s*TARGET 2\s*:\s*%{NUMBER:TARGET_2}\s*TARGET 3\s*:\s*%{NUMBER:TARGET_3}\s*.*:\s*%{NUMBER:StopLoss}
Please note that \s* is equivalent to %{SPACE}.
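For reference, this is roughly how the pattern could be wired into a Logstash filter block (a sketch: the message field name is an assumption, and the event must already arrive as a single multiline message, e.g. via a multiline codec on the input):
filter {
  grok {
    match => {
      "message" => "Free_Trade_Calls__AltSignals:.*\(%{WORD:exchange}\)\s*BUY\s*:\s*%{NUMBER:BUY}\s*SELL :\s*TARGET 1\s*:\s*%{NUMBER:TARGET_1}\s*TARGET 2\s*:\s*%{NUMBER:TARGET_2}\s*TARGET 3\s*:\s*%{NUMBER:TARGET_3}\s*.*:\s*%{NUMBER:StopLoss}"
    }
  }
}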
It will output:
{
  "exchange": [["bittrex"]],
  "BUY": [["0.00164"]],
  "BASE10NUM": [["0.00164", "0.00180", "0.00205", "0.00240", "0.000120"]],
  "TARGET_1": [["0.00180"]],
  "TARGET_2": [["0.00205"]],
  "TARGET_3": [["0.00240"]],
  "StopLoss": [["0.000120"]]
}

Not sure how to create ArangoDB graph using columns in existing collection

Background
I have a RocksDB-backed collection that contains three fields: _id, author, and subreddit.
Problem
I would like to create an ArangoDB graph connecting these two existing columns, but the examples and the drivers seem to only accept collections as edge definitions.
Issue
The ArangoDB documentation lacks information on how to create a graph using edges and nodes pulled from the same collection.
EDIT:
Solution
This was fixed with a code change; see this ArangoDB issues ticket.
Here's one way to do it using jq, a JSON-oriented command-line tool.
First, an outline of the steps:
1) Use arangoexport to export your author/subredit collection to a file, say, exported.json;
2) Run the jq script, nodes_and_edges.jq, shown below;
3) Use arangoimp to import the JSON produced in (2) into ArangoDB.
There are several ways the graph can be stored in ArangoDB, so ultimately you might wish to tweak nodes_and_edges.jq accordingly (e.g. to generate the nodes first, and then the edges).
INDEX
If your jq does not have INDEX defined, then use this:
def INDEX(stream; idx_expr):
  reduce stream as $row ({};
    .[$row | idx_expr |
      if type != "string" then tojson
      else .
      end] |= $row);
def INDEX(idx_expr): INDEX(.[]; idx_expr);
nodes_and_edges.jq
# This module is for generating JSON suitable for importing into ArangoDB.

### Generic Functions

# assign_keys/2 gives each object in the input array a generated "_key".
def assign_keys(prefix; start):
  . as $in
  | reduce range(0; length) as $i ([];
      . + [$in[$i] + {"_key": "\(prefix)\(start+$i)"}]);

# nodes/2
# $name must be the name of the ArangoDB collection of nodes corresponding to $key.
# The scheme for generating key names can be altered by changing the first
# argument of assign_keys, e.g. to "" if no prefix is wanted.
def nodes($key; $name):
  map( {($key): .[$key]} ) | assign_keys($name[0:1] + "_"; 1);

# nodes_and_edges facilitates the normalization of an implicit graph
# in an ArangoDB "document" collection of objects having $from and $to keys.
# The input should be an array of JSON objects, as produced
# by arangoexport for a single collection.
# If $nodesq is truthy, then the JSON for both the nodes and edges is emitted,
# otherwise only the JSON for the edges is emitted.
#
# The first four arguments should be strings.
#
# $from and $to should be the key names in . to be used for the from-to edges;
# $name1 and $name2 should be the names of the corresponding collections of nodes.
def nodes_and_edges($from; $to; $name1; $name2; $nodesq):
  def dict($s): INDEX(.[$s]) | map_values(._key);
  def objects($key): to_entries[] | {($key): .key, "_key": .value};
  (nodes($from; $name1) | dict($from)) as $fdict
  | (nodes($to; $name2) | dict($to)) as $tdict
  | (if $nodesq then ($fdict | objects($from)), ($tdict | objects($to))
     else empty end),
    (.[] | {_from: "\($name1)/\($fdict[.[$from]])",
            _to: "\($name2)/\($tdict[.[$to]])"}) ;

### Problem-Specific Functions

# If you wish to generate the collections separately,
# then these will come in handy:
def authors: nodes("author"; "authors");
def subredits: nodes("subredit"; "subredits");

def nodes_and_edges:
  nodes_and_edges("author"; "subredit"; "authors"; "subredits"; true);

nodes_and_edges
Invocation
jq -cf nodes_and_edges.jq exported.json
This invocation will produce a set of JSONL (JSON Lines) records: the "authors" nodes, the "subredits" nodes, and the edge collection.
Example
exported.json
[
{"_id":"test/115159","_key":"115159","_rev":"_V8JSdTS---","author": "A", "subredit": "S1"},
{"_id":"test/145120","_key":"145120","_rev":"_V8ONdZa---","author": "B", "subredit": "S2"},
{"_id":"test/114474","_key":"114474","_rev":"_V8JZJJS---","author": "C", "subredit": "S3"}
]
Output
{"author":"A","_key":"name_1"}
{"author":"B","_key":"name_2"}
{"author":"C","_key":"name_3"}
{"subredit":"S1","_key":"sid_1"}
{"subredit":"S2","_key":"sid_2"}
{"subredit":"S3","_key":"sid_3"}
{"_from":"authors/name_1","_to":"subredits/sid_1"}
{"_from":"authors/name_2","_to":"subredits/sid_2"}
{"_from":"authors/name_3","_to":"subredits/sid_3"}
Please note that the following queries take a while to complete on this huge dataset; however, they should finish successfully after some hours.
We start with arangoimp to import our base dataset:
arangoimp --create-collection true --collection RawSubReddits --type jsonl ./RC_2017-01
We use arangosh to create the collections where our final data is going to live:
db._create("authors")
db._createEdgeCollection("authorsToSubreddits")
We fill the authors collection by simply ignoring any subsequently occurring duplicate authors.
We calculate the _key of the author with the MD5() function, so it obeys the restrictions on allowed characters in _key, and so we can find it again later by calling MD5() on the author field:
db._query(`
  FOR item IN RawSubReddits
    INSERT {
      _key: MD5(item.author),
      author: item.author
    } INTO authors
    OPTIONS { ignoreErrors: true }`);
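As a quick sanity check (my addition, not part of the original recipe), you can count the de-duplicated authors afterwards:
db._query(`RETURN LENGTH(authors)`).toArray()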
After we have filled the second vertex collection (we keep the imported collection as the first vertex collection), we have to calculate the edges.
Since each author may have created several subreddits, there will most probably be several edges originating from each author. As previously mentioned,
we can use the MD5() function again to reference the author previously created:
db._query(`
  FOR onesubred IN RawSubReddits
    INSERT {
      _from: CONCAT('authors/', MD5(onesubred.author)),
      _to: CONCAT('RawSubReddits/', onesubred._key)
    } INTO authorsToSubreddits`);
After the edge collection is filled (which may again take a while; we're talking about 40 million edges here), we create the graph description:
db._graphs.save({
  "_key": "reddits",
  "orphanCollections": [],
  "edgeDefinitions": [
    {
      "collection": "authorsToSubreddits",
      "from": ["authors"],
      "to": ["RawSubReddits"]
    }
  ]
})
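Alternatively, the same graph can be created through the general-graph module, the documented arangosh JavaScript API (a sketch doing the same as the save above):
var graph_module = require("@arangodb/general-graph");
graph_module._create("reddits",
  [graph_module._relation("authorsToSubreddits", ["authors"], ["RawSubReddits"])]);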
We can now use the UI to browse the graph, or query it with AQL. Let's pick a more or less random author, the first one from that list:
db._query(`FOR author IN authors LIMIT 1 RETURN author`).toArray()
[
  {
    "_key" : "1cec812d4e44b95e5a11f3cbb15f7980",
    "_id" : "authors/1cec812d4e44b95e5a11f3cbb15f7980",
    "_rev" : "_W_Eu-----_",
    "author" : "punchyourbuns"
  }
]
We identified an author, and now run a graph query for him:
db._query(`FOR vertex, edge, path IN 0..1
OUTBOUND 'authors/1cec812d4e44b95e5a11f3cbb15f7980'
GRAPH 'reddits'
RETURN path`).toArray()
One of the resulting paths looks like this:
{
  "edges" : [
    {
      "_key" : "128327199",
      "_id" : "authorsToSubreddits/128327199",
      "_from" : "authors/1cec812d4e44b95e5a11f3cbb15f7980",
      "_to" : "RawSubReddits/38026350",
      "_rev" : "_W_LOxgm--F"
    }
  ],
  "vertices" : [
    {
      "_key" : "1cec812d4e44b95e5a11f3cbb15f7980",
      "_id" : "authors/1cec812d4e44b95e5a11f3cbb15f7980",
      "_rev" : "_W_HAL-y--_",
      "author" : "punchyourbuns"
    },
    {
      "_key" : "38026350",
      "_id" : "RawSubReddits/38026350",
      "_rev" : "_W-JS0na--b",
      "distinguished" : null,
      "created_utc" : 1484537478,
      "id" : "dchfe6e",
      "edited" : false,
      "parent_id" : "t1_dch51v3",
      "body" : "I don't understand tension at all.\nMine is set to auto.\nI'll replace the needle and rethread. Thanks!",
      "stickied" : false,
      "gilded" : 0,
      "subreddit" : "sewing",
      "author" : "punchyourbuns",
      "score" : 3,
      "link_id" : "t3_5o66d0",
      "author_flair_text" : null,
      "author_flair_css_class" : null,
      "controversiality" : 0,
      "retrieved_on" : 1486085797,
      "subreddit_id" : "t5_2sczp"
    }
  ]
}
For a graph you need an edge collection for the edges and vertex collections for the nodes. You can't create a graph using only one collection.
Maybe this topic in the documentation is helpful for you.
Here's an AQL solution, which however presupposes that all the referenced collections already exist, and that UPSERT is not necessary.
FOR v IN testcollection
  LET a = v.author
  LET s = v.subredit
  FILTER a
  FILTER s
  LET fid = (INSERT {author: a} INTO authors RETURN NEW._id)[0]
  LET tid = (INSERT {subredit: s} INTO subredits RETURN NEW._id)[0]
  INSERT {_from: fid, _to: tid} INTO author_of
  RETURN [fid, tid]
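Run from arangosh, it can be wrapped the same way as the earlier queries:
db._query(`
  FOR v IN testcollection
    LET a = v.author
    LET s = v.subredit
    FILTER a
    FILTER s
    LET fid = (INSERT {author: a} INTO authors RETURN NEW._id)[0]
    LET tid = (INSERT {subredit: s} INTO subredits RETURN NEW._id)[0]
    INSERT {_from: fid, _to: tid} INTO author_of
    RETURN [fid, tid]`);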

RBParser message nodes and targeting the receiver and the argument?

Trying to get some of my old code up and running in Pharo. Some method names are different, but after some hardship I managed to find equivalents that work.
I am parsing my code and I'd like to check whether the receiver or any of the arguments is aSymbol, in an effort to match them to supported alternatives. I've managed to do this for selectors, by analysing RBMessageNodes:
aNode selector == aSymbol ifTrue: [ aNode selector: replacementSymbol ].
How can this be done to arguments and receivers? Is there a comprehensive guide on RBParser somewhere?
By direct manipulation
Assuming that you are looking for cases like this:
aSymbol message: aSymbol message: aSymbol
For receiver you should do:
(aNode isMessage and: [
aNode receiver isVariable and: [
aNode receiver name = 'aSymbol' ]]) ifTrue: [
"do your job here" ]
Here is another example of how to replace #aSymbol arguments with #aNewSymbol:
messageNode arguments: (messageNode arguments collect: [ :arg |
    (arg isLiteralNode and: [ arg value = #aSymbol ])
        ifFalse: [ arg ]
        ifTrue: [ | newNode |
            newNode := #aNewSymbol asLiteralNode.
            arg replaceSourceWith: newNode.
            newNode ] ]).
methodClass compile: ast newSource
replaceSourceWith: makes sure that just the source is replaced, but for newSource to actually return a new source you also need to swap the nodes themselves; that's why I do a collect over the arguments and return the new nodes where needed.
You can view help about RBParser in World Menu > Help > Help Browser > Refactoring Framework.
You can also play around by inspecting
RBParser parseExpression: 'aSymbol message: aSymbol message: aSymbol'
and looking at its contents
By Parse Tree Transformation
You can use pattern code to match and replace certain code. For example to change the symbol argument of a perform: message you can do this:
ast := yourMethod parseTree.
rewriter := RBParseTreeRewriter new
    replace: '`receiver perform: #aSymbol'
    with: '`receiver perform: #newSelector'.
(rewriter executeTree: ast) ifTrue: [
    yourMethod methodClass compile: ast newSource ]
You can learn more about the pattern matching syntax in the help topic World Menu > Help > Help Browser > Refactoring Framework > Refactoring Engine > RBPatternParser …. I think that MatchTool from the Pharo catalog can greatly help you in testing the match expressions (it also has a dedicated help topic about the matching syntax), while RewriteTool can help you preview how your code will be transformed.
