transforms.route.topic.expression and Groovy expression

I'm trying to use Debezium's transforms.route.topic.expression.
Here are the entries in the connector configuration:
"transforms": "dropPrefix,unwrapi,route",
"transforms.dropPrefix.type": "org.apache.kafka.connect.transforms.RegexRouter",
"transforms.dropPrefix.regex": "rocketlawyer.dbo.(.*)",
"transforms.dropPrefix.replacement": "$1",
"transforms.unwrap.type": "io.debezium.transforms.ExtractNewRecordState",
"transforms.unwrap.drop.tombstones": "false",
"transforms.route.type": "io.debezium.transforms.ContentBasedRouter",
"transforms.route.language": "jsr223.groovy",
"transforms.route.topic.expression": "value.snapshotequals('true') ? ${topic} : cdc.$1"
When I apply the configuration, I get the following error:
iguenkin_rocketlawyer_com@dbadm-r208:~/confluent$ curl -d @mssql_trg_cf.json -H "Content-Type: application/json" -X PUT http://localhost:8083/connectors/mssql_trg/config | jq
{
"error_code": 500,
"message": "Unexpected character ('\"' (code 34)): was expecting comma to separate Object entries\n at [Source: (org.glassfish.jersey.message.internal.ReaderInterceptorExecutor$UnCloseableInputStream); line: 1, column: 832]"
}
Column 832 falls inside this entry:
"transforms.route.topic.expression": "value.snapshotequals('true') ? ${topic} : cdc.$1"
(column 830 is the blank space in the string, just before cdc).
Here are my questions:
How can I be sure that jsr223.groovy is installed and accessible to the transformations? As I understand it, it is part of the Debezium connector, so it is not listed as a separate plugin.
Here is the list of jars in the directory where Debezium is installed:
-rw-r--r-- 1 iguenkin_rocketlawyer_com 1234193955 20272 Dec 9 20:44 debezium-api-1.3.1.Final.jar
-rw-r--r-- 1 iguenkin_rocketlawyer_com 1234193955 91369 Dec 9 20:44 debezium-connector-sqlserver-1.3.1.Final.jar
-rw-r--r-- 1 iguenkin_rocketlawyer_com 1234193955 844500 Dec 9 20:44 debezium-core-1.3.1.Final.jar
-rw-r--r-- 1 iguenkin_rocketlawyer_com 1234193955 19090 Dec 15 22:01 debezium-scripting-1.3.1.Final.jar
-rw-r--r-- 1 iguenkin_rocketlawyer_com 1234193955 16839 Dec 16 01:04 groovy-jsr223-3.0.7-indy.jar
If Groovy is installed, what is wrong with the expression? I used the following documentation to set up the connector: https://debezium.io/documentation/reference/configuration/content-based-routing.html
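For comparison, a minimal sketch of a routing expression in the style the documentation uses. This is hedged: it assumes the record still carries the full Debezium envelope (so the router would run before the unwrap transform) and that source.snapshot is the string 'true' during a snapshot; the cdc prefix is illustrative. Two things stand out in the original expression: snapshotequals(...) is not a Groovy method (plain field access plus == is the documented pattern, e.g. value.op == 'u'), and $1 is a RegexRouter capture reference that is not visible inside a ContentBasedRouter expression, where only variables such as topic, key, and value are bound:

"transforms.route.topic.expression": "value.source.snapshot == 'true' ? topic : 'cdc.' + topic"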

Related

Vim loading and formatting slow

I'm using Vim with many plugins (my .vimrc loads a large number of them), yet it used to be very fast. Suddenly, for some reason, it isn't any more; it may have started after I began using eslint. It now takes about two seconds every time I save a file or open a file in a new tab.
Is there any way I can find which plugin is causing all that delay?
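(For reference, tables like the two below are what Vim's built-in profiler produces; a minimal sketch of a session that captures such a log, assuming the slowness shows up on save:)

:profile start /tmp/profile.log
:profile func *
:profile file *
" now save a file (or open a tab) to capture the slow path
:profile pause
:noautocmd qall!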
FUNCTIONS SORTED ON TOTAL TIME
count total (s) self (s) function
1 1.193202 0.000100 <SNR>67_BufWritePostHook()
1 1.192941 0.000469 <SNR>67_UpdateErrors()
1 1.188491 0.000942 <SNR>67_CacheErrors()
1 1.184402 0.000040 287()
1 1.184210 0.000254 286()
1 1.183663 0.000135 SyntaxCheckers_javascript_eslint_GetLocList()
1 1.182540 0.000674 SyntasticMake()
1 1.181614 0.000483 syntastic#util#system()
3 0.023413 0.000393 airline#extensions#tabline#get()
3 0.023020 0.001615 airline#extensions#tabline#tabs#get()
12 0.022842 0.010081 <SNR>180_parse_screen()
2 0.018991 0.003055 381()
12 0.012386 <SNR>180_create_matches()
12 0.011903 0.002183 <SNR>172_OnCursorMovedNormalMode()
8 0.011681 0.000275 <SNR>157_get_seperator()
14 0.010961 0.010719 <SNR>172_OnFileReadyToParse()
46 0.009832 0.003413 airline#highlighter#get_highlight()
10 0.009530 0.000443 <SNR>157_get_transitioned_seperator()
10 0.009087 0.000364 airline#highlighter#add_separator()
10 0.008723 0.000830 <SNR>153_exec_separator()
FUNCTIONS SORTED ON SELF TIME
count total (s) self (s) function
12 0.012386 <SNR>180_create_matches()
14 0.010961 0.010719 <SNR>172_OnFileReadyToParse()
12 0.022842 0.010081 <SNR>180_parse_screen()
92 0.005751 <SNR>153_get_syn()
46 0.009832 0.003413 airline#highlighter#get_highlight()
13 0.003297 <SNR>123_Highlight_Matching_Pair()
2 0.018991 0.003055 381()
1 0.002336 0.002331 gitgutter#sign#remove_signs()
12 0.011903 0.002183 <SNR>172_OnCursorMovedNormalMode()
3 0.001629 airline#extensions#tabline#tabs#map_keys()
3 0.023020 0.001615 airline#extensions#tabline#tabs#get()
1 0.001750 0.001601 gitgutter#async#execute()
12 0.001445 <SNR>146_update()
1 0.001305 0.001302 gitgutter#sign#find_current_signs()
12 0.001276 <SNR>157_get_accented_line()
14 0.001155 <SNR>172_AllowedToCompleteInBuffer()
10 0.003474 0.001059 airline#highlighter#exec()
2 0.001717 0.000985 xolox#misc#cursorhold#autocmd()
1 0.002802 0.000983 347()
1 0.001208 0.000966 gitgutter#sign#upsert_new_gitgutter_signs()
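Reading the first table: nearly all of the 1.19 s spent on save is inside Syntastic's <SNR>67_BufWritePostHook() chain, ending in syntastic#util#system(), i.e. the external eslint check. A quick experiment to confirm it (a sketch; the option and the command are standard Syntastic, and the empty checker list assumes eslint is your configured JavaScript checker):

:let g:syntastic_javascript_checkers = []
" or switch Syntastic to passive mode entirely:
:SyntasticToggleMode

If saving becomes fast again, the delay is the external linter process rather than Vim itself.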

How to use alsa sound and/or snd_pcm_open in docker?

I am running an Ubuntu 12.04 Docker container on an Ubuntu 16.04 host. Some test code I have exercises 'snd_pcm_open'/'snd_pcm_close' operations with the SND_PCM_STREAM_PLAYBACK and SND_PCM_STREAM_CAPTURE stream types.
I do not need any actual sound/audio capabilities but just getting the 'snd_pcm_open' return 0 with a valid handle, then 'snd_pcm_close' to return 0 on the same handle would be good enough for my purposes. I do not want to modify the code as it's already got some not-so-nice platform dependent switches and I am not the maintainer.
I am using the simple code below, compiled with g++ alsa_test.cpp -lasound:
#include <stdio.h>
#include <alsa/asoundlib.h>

int main() {
    snd_pcm_t* handle;
    snd_pcm_stream_t stream_type[] = {SND_PCM_STREAM_PLAYBACK, SND_PCM_STREAM_CAPTURE};
    printf("\nstarting\n");
    for (unsigned char i = 0; i < sizeof(stream_type) / sizeof(stream_type[0]); ++i) {
        printf(">>>>>>>>\n\n");
        int deviceResult = snd_pcm_open(&handle, "default", stream_type[i], 0);
        printf("\n%d open: %d\n", stream_type[i], deviceResult);
        if (deviceResult >= 0) {
            printf("attempting to close %d\n", stream_type[i]);
            snd_pcm_drain(handle);
            deviceResult = snd_pcm_close(handle);
            printf("%d close: %d\n\n", stream_type[i], deviceResult);
        }
        printf("<<<<<<<<\n\n");
    }
    return 0;
}
It works just fine on the host, but despite all the different things I tried, snd_pcm_open returns -2 (ENOENT) for both stream types in the container.
I tried installing libasound2-dev, but modinfo soundcore is empty and /dev/snd does not exist.
I also tried running the container with the options below, even though it feels like massive overkill for such a simple purpose:
--privileged --cap-add=ALL -v /dev:/dev -v /lib/modules:/lib/modules
With these extra parameters, the following commands generate the same output in both the host and the container.
root@31142791f82d:/export# modinfo soundcore
filename: /lib/modules/4.4.0-59-generic/kernel/sound/soundcore.ko
alias: char-major-14-*
license: GPL
author: Alan Cox
description: Core sound module
srcversion: C941364F5CD0B525693B243
depends:
intree: Y
vermagic: 4.4.0-59-generic SMP mod_unload modversions
parm: preclaim_oss:int
root@31142791f82d:/export# ls -l /dev/snd/
total 0
drwxr-xr-x 2 root root 100 Feb 2 21:10 by-path
crw-rw----+ 1 root audio 116, 2 Feb 2 07:42 controlC0
crw-rw----+ 1 root audio 116, 7 Feb 2 07:42 controlC1
crw-rw----+ 1 root audio 116, 12 Feb 2 21:10 controlC2
crw-rw----+ 1 root audio 116, 6 Feb 2 07:42 hwC0D0
crw-rw----+ 1 root audio 116, 11 Feb 2 07:42 hwC1D0
crw-rw----+ 1 root audio 116, 3 Feb 2 07:42 pcmC0D3p
crw-rw----+ 1 root audio 116, 4 Feb 2 07:42 pcmC0D7p
crw-rw----+ 1 root audio 116, 5 Feb 2 07:42 pcmC0D8p
crw-rw----+ 1 root audio 116, 9 Feb 2 10:44 pcmC1D0c
crw-rw----+ 1 root audio 116, 8 Feb 2 07:42 pcmC1D0p
crw-rw----+ 1 root audio 116, 10 Feb 2 21:30 pcmC1D1p
crw-rw----+ 1 root audio 116, 14 Feb 2 21:10 pcmC2D0c
crw-rw----+ 1 root audio 116, 13 Feb 2 21:10 pcmC2D0p
crw-rw----+ 1 root audio 116, 1 Feb 2 07:42 seq
crw-rw----+ 1 root audio 116, 33 Feb 2 07:42 timer
The container only has the root user, by the way, so access rights shouldn't be an issue either.
What would be the easiest and least hacky way to get this working? I'd rather get rid of the privileged mode and the /dev and /lib/modules mappings into the container; however, these containers are not accessed from the outside world and are only created/destroyed for some short-lived tasks, so safety isn't exactly a massive concern.
Thanks in advance.
If you don't actually need the device to work correctly, use the null device instead of default.
To make the null plugin the default one, put this into the container's /etc/asound.conf, or into the user's ~/.asoundrc:
pcm.!default = null;
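A quick check, assuming the test program above was built as a.out (a sketch):

echo 'pcm.!default = null;' > /etc/asound.conf
./a.out   # snd_pcm_open should now return 0 for both stream types

Because the null plugin needs no hardware, the /dev/snd devices, the --privileged flag, and the /dev and /lib/modules mappings can all be dropped.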

Syntaxnet Turkish Language Data Set Non Existent Map Files

I am new to SyntaxNet and I tried to use the pre-trained Turkish language model by following the instructions here.
Point-1 : Although I set the MODEL_DIRECTORY environment variable, tokenize.sh didn't find the related path, and it gives an error like the one below:
root@4562a2ee0202:/opt/tensorflow/models/syntaxnet# echo "Eray eve geldi." | syntaxnet/models/parsey_universal/tokenize.sh
F syntaxnet/term_frequency_map.cc:62] Check failed: ::tensorflow::Status::OK() == (tensorflow::Env::Default()->NewRandomAccessFile(filename, &file)) (OK vs. **Not found: label-map**)
Point-2 : So, I changed tokenize.sh by commenting out MODEL_DIR=$1 and setting my Turkish language model path as below, to carry on:
PARSER_EVAL=bazel-bin/syntaxnet/parser_eval
CONTEXT=syntaxnet/models/parsey_universal/context.pbtxt
INPUT_FORMAT=stdin-untoken
#MODEL_DIR=$1
MODEL_DIR=syntaxnet/models/etiya-smart-tr
Point-3 : After that, when I run it as told, it gives an error like the one below:
root@4562a2ee0202:/opt/tensorflow/models/syntaxnet# echo "Eray eve geldi" | syntaxnet/models/parsey_universal/tokenize.sh
I syntaxnet/term_frequency_map.cc:101] Loaded 29 terms from syntaxnet/models/etiya-smart-tr/label-map.
I syntaxnet/embedding_feature_extractor.cc:35] Features: input.char input(-1).char input(1).char; input.digit input(-1).digit input(1).digit; input.punctuation-amount input(-1).punctuation-amount input(1).punctuation-amount
I syntaxnet/embedding_feature_extractor.cc:36] Embedding names: chars;digits;puncts
I syntaxnet/embedding_feature_extractor.cc:37] Embedding dims: 16;16;16
F syntaxnet/term_frequency_map.cc:62] Check failed: ::tensorflow::Status::OK() == (tensorflow::Env::Default()->NewRandomAccessFile(filename, &file)) (OK vs. **Not found: syntaxnet/models/etiya-smart-tr/char-map**)
I had downloaded the Turkish package by tracing the link pattern indicated, download.tensorflow.org/models/parsey_universal/.zip, and my language mapping file list is below:
-rw-r----- 1 root root 50646 Sep 22 07:24 char-ngram-map
-rw-r----- 1 root root 329 Sep 22 07:24 label-map
-rw-r----- 1 root root 133477 Sep 22 07:24 morph-label-set
-rw-r----- 1 root root 5553526 Sep 22 07:24 morpher-params
-rw-r----- 1 root root 1810 Sep 22 07:24 morphology-map
-rw-r----- 1 root root 10921546 Sep 22 07:24 parser-params
-rw-r----- 1 root root 39990 Sep 22 07:24 prefix-table
-rw-r----- 1 root root 28958 Sep 22 07:24 suffix-table
-rw-r----- 1 root root 561 Sep 22 07:24 tag-map
-rw-r----- 1 root root 5234212 Sep 22 07:24 tagger-params
-rw-r----- 1 root root 172869 Sep 22 07:24 word-map
QUESTION-1 :
I am aware that there is no char-map file in the directory, which is why I got the error written at Point-3 above. So, does anyone have an opinion on how the Turkish language test could have been done, given that the result was shared as 93.363% for part-of-speech, for example?
QUESTION-2:
How can I find the char-map file for Turkish language?
QUESTION-3:
If there is no char-map file, must I train through tracing the steps indicated as SyntaxNet's Obtain Data & Training?
QUESTION-4:
Is there a way to generate the word-map, char-map, etc. files? Is it the well-known word2vec approach that can be used to generate map files that the SyntaxNet tokenizers can process?
Try this issue: https://github.com/tensorflow/models/issues/830; it contains a temporary (at this moment) solution.
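A side note on Point-2: since the stock tokenize.sh reads MODEL_DIR from its first argument ($1), an alternative to editing the script is to pass the model directory on the command line (a sketch using the same paths as above):

echo "Eray eve geldi." | syntaxnet/models/parsey_universal/tokenize.sh syntaxnet/models/etiya-smart-tr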

Convert string to class object in Python

I have a list of strings. Each string in the list has the same format. I would like to convert each string into a class object (if that is the best option), so I can do some analysis on the list of class objects.
As an example, I have the following list:
ls_list = ['-rw-r--r-- 1 ahmed None 0 Apr 21 17:10 bar1',
           '-rw-r--r-- 1 ahmed None 0 Apr 21 17:10 bar2',
           '-rw-r--r-- 1 ahmed None 0 Apr 21 17:10 foo1',
           '-rw-r--r-- 1 ahmed None 0 Apr 21 17:10 foo2']
I would like to convert each of the above strings into a class that has nine members (perm, etc.).
You don't want to do that. Use os.listdir() and os.stat() to get the information you want. You would do it something like this:
import os

data = {}
for file in os.listdir('.'):
    data[file] = os.stat(file)
This gives you the information for all the files in the current directory, as objects, as you requested. These can then be inspected, and you can use other functions to figure out the username of the userid you get, etc.
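For example, to recover the permission string, the owner name, and the size from a stat result (a sketch: stat.filemode needs Python 3.3+, the pwd module is POSIX-only, and 'bar1' is a placeholder filename):

import os
import pwd
import stat

st = os.stat('bar1')
print(stat.filemode(st.st_mode))        # permission string, e.g. -rw-r--r--
print(pwd.getpwuid(st.st_uid).pw_name)  # owner name resolved from the uid
print(st.st_size)                       # size in bytes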
Here's one way of doing what you ask, but as others have noted, don't use this for parsing the output of ls. Also, it assumes that there are exactly eight runs of whitespace in your input string separating the data; if any of the data substrings also contain whitespace, this code will fail:
class FileAttribs(object):
    ORDERED_ATTRIB_NAMES = ["permissions", "links", "owner",
                            "groups", "size", "month", "day", "time", "name"]

    def __init__(self, lsString):
        for (attrib, s) in zip(self.ORDERED_ATTRIB_NAMES, lsString.split()):
            setattr(self, attrib, s)

if __name__ == '__main__':
    ls_list = ['-rw-r--r-- 1 ahmed None 0 Apr 21 17:10 bar1',
               '-rw-r--r-- 1 ahmed None 0 Apr 21 17:10 bar2',
               '-rw-r--r-- 1 ahmed None 0 Apr 21 17:10 foo1',
               '-rw-r--r-- 1 ahmed None 0 Apr 21 17:10 foo2']
    files = []
    for s in ls_list:
        files.append(FileAttribs(s))
    # do stuff with files, e.g.
    for f in files:
        print f.permissions, f.name
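An equivalent sketch (Python 3) with collections.namedtuple instead of a hand-written class; passing maxsplit=8 to split() also fixes the whitespace caveat above, because everything after the eighth run of whitespace is kept intact as the file name:

from collections import namedtuple

FileAttribs = namedtuple('FileAttribs', ['permissions', 'links', 'owner',
                                         'group', 'size', 'month', 'day',
                                         'time', 'name'])

files = [FileAttribs(*s.split(None, 8)) for s in ls_list]
for f in files:
    print(f.permissions, f.name)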

How to grep all the data in a specified col from standard output?

I have standard output that is printed by my program and displayed on the screen. The output looks like the following:
Parsing command line string 'InputFile = foreman.qcif'.
Parsing command line string 'NumberReferenceFrames = 1'.
Parsing command line string 'QPISlice = 24'.
Parsing command line string 'QPPSlice = 24'.
------------------------------- JM 11.0 (FRExt) --------------------------------
Input YUV file : foreman.qcif
Output H.264 bitstream : test.264
Output YUV file : test_rec.yuv
YUV Format : YUV 4:2:0
Frames to be encoded I-P/B : 150/0
PicInterlace / MbInterlace : 0/0
Transform8x8Mode : 1
-------------------------------------------------------------------------------
Frame Bit/pic QP SnrY SnrU SnrV Time(ms) MET(ms) Frm/Fld Ref
-------------------------------------------------------------------------------
0000(NVB) 168
0000(IDR) 34280 24 39.724 41.720 43.998 286 0 FRM 1
0002(P) 7432 24 38.857 41.565 43.829 402 99 FRM 1
0004(P) 8976 24 38.642 41.275 43.698 409 97 FRM 1
0006(P) 8344 24 38.427 41.266 43.515 407 99 FRM 1
0008(P) 8224 24 38.609 41.082 43.524 413 94 FRM 1
0010(P) 7784 24 38.655 40.991 43.235 406 95 FRM 1
0012(P) 7136 24 38.534 40.687 43.273 411 95 FRM 1
0014(P) 6688 24 38.464 40.756 43.146 410 92 FRM 1
0016(P) 7720 24 38.516 40.585 42.851 410 91 FRM 1
0018(P) 6864 24 38.474 40.631 42.958 411 101 FRM 1
0020(P) 8392 24 38.433 40.607 42.646 415 99 FRM 1
0022(P) 9744 24 38.371 40.554 42.498 416 94 FRM 1
0024(P) 8368 24 38.362 40.531 42.380 417 93 FRM 1
0026(P) 7904 24 38.414 40.586 42.415 418 95 FRM 1
0028(P) 8688 24 38.403 40.523 42.366 418 96 FRM 1
0030(P) 9128 24 38.545 40.390 42.661 416 89 FRM 1
0032(P) 9664 24 38.399 40.538 42.740 413 88 FRM 1
0034(P) 8928 24 38.394 40.590 42.852 414 95 FRM 1
0036(P) 10024 24 38.423 40.562 42.697 415 92 FRM 1
0038(P) 9320 24 38.442 40.389 42.689 414 94 FRM 1
0040(P) 7304 24 38.404 40.487 42.884 410 90 FRM 1
0042(P) 8560 24 38.447 40.590 42.673 411 95 FRM 1
.......
Now I only need to process the 4th column, SnrY.
So, my question is: how do I grab this column and store it in a data.txt file? Then I can use a plotting tool or Matlab to plot the trend in these data.
Any advice? Many thanks for your kind help!
Addition:
Since the data has to be picked from standard output, do I need to add the command you provide to my command line so that it captures the data as it is being output?
$ awk '/FRM/ { print $4 }' < in.txt > data.txt
Note: I'm just guessing that the interesting lines contain FRM. If this is not the case, you must come up with a different way to identify them.
tail -n+17 in.txt | cut -c 23-31
This skips the 16 header lines and cuts the character range where the SnrY column sits; it relies on the output being fixed-width.
./cmd_that_generates_results | awk '/^[0-9]/{print $4}' > data.txt
Quick breakdown of awk statement:
/^[0-9]/ : match lines that start with a number
{print $4} : print out fourth column
(Updated) To address the follow-up question ("do I need to add the command to my command line so it picks the data up as it is output?"):
You can use the commands given in the answers by piping your program's standard output to them with the pipe operator (|). The results can then be stored in data.txt using >.
See the example above; similar techniques can be used with the other solutions on this page.
$ ./command | awk 'BEGIN{RS="--+";FS="\n"} END{ for(i=1;i<=NF;i++){if($i ~ /^[0-9]/){m=split($i,a," ");if(a[4]) print a[4]}}}'
39.724
38.857
38.642
38.427
38.609
38.655
38.534
38.464
38.516
38.474
38.433
38.371
38.362
38.414
38.403
38.545
38.399
38.394
38.423
38.442
38.404
38.447
I am anticipating that the data you want may not always start after the 16th line (it may be after the 17th or 18th, etc.), that the FRM/Fld field may have different indicators, or that the 4th field may not sit at the exact same character position every time.
So the above says: set the record separator to the dashed line, so that the last record is the whole block of data you need. Then, with newline as the field separator (FS="\n"), each field is one line of data. Split each line on spaces and take element 4 of the split; that is your desired column.
This will match numbers like 38.4* in the fourth column:
grep -E '^([^ ]+[ ]+){3}38.4'
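Once data.txt exists, the trend can also be plotted without Matlab, e.g. with gnuplot (a sketch, assuming gnuplot is installed; the dumb terminal draws an ASCII plot straight into the console):

gnuplot -e "set terminal dumb; plot 'data.txt' with lines"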
