Is there a way to view exported coloured logs as in console (with colour)?
My program uses colour coding for errors, warnings, etc. If I redirect its output to file.log, I get records like:
[32m[1m(INF)[0m /environment/converter: State map: [ 2, 3, 4, 5, 6, 7, 8, 11, 12, 13, 14, 15, 16, 17, 18 ][0m
[32m[1m(INF)[0m /environment/converter: Action map: [ -1, -1, -1, 0, 1, 2, 3, 4, 5 ][0m
[32m[1m(INF)[0m PyEnv: Observation dims: 15[0m
[32m[1m(INF)[0m PyEnv: Action dims: 6[0m
Random seed None
[32m[1m(INF)[0m GRL seed 1428[0m
Now I want to view file.log, either without the colour codes or, even better, with the colours preserved.
I have tried nano, vi and gedit, but none of them does what I want.
Here are some ideas if you're still looking for a good way to do this.
A text editor won't render the color codes: they are ANSI escape sequences that the terminal interprets, which is why cat appears to work (it simply passes the sequences through to the terminal). However, the way the colors are stored in your file isn't correct in the first place.
I'm not sure how you are writing to your log file, but when I echoed your example and redirected the output into a file, I didn't see colors either. The reason is that each color code has to start with the escape character (ESC, written \e in Bash) before the opening square bracket for the terminal to interpret it.
So I escaped the codes by manually adding \e before every square bracket that starts a color sequence and using echo -e to evaluate the escape characters:
`echo -e "\e[32m\e[1m(INF)\e[0m /environment/converter: State map: [ 2, 3, 4, 5, 6, 7, 8, 11, 12, 13, 14, 15, 16, 17, 18 ]\e[0m`
`\e[32m\e[1m(INF)\e[0m /environment/converter: Action map: [ -1, -1, -1, 0, 1, 2, 3, 4, 5 ]\e[0m`
`\e[32m\e[1m(INF)\e[0m PyEnv: Observation dims: 15\e[0m`
`\e[32m\e[1m(INF)\e[0m PyEnv: Action dims: 6\e[0m`
`Random seed None`
`\e[32m\e[1m(INF)\e[0m GRL seed 1428\e[0m " > example.txt`
Now when I open the file with cat, I see correctly colored text. The color codes are now stored and escaped correctly in the .txt file:
^[[32m^[[1m(INF)^[[0m GRL seed 1428^[[0m
vs
[32m[1m(INF)[0m GRL seed 1428[0m
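If the log is produced by a Python program (the PyEnv lines in your sample suggest it might be), the same escaping can be done at the source. This is only a minimal sketch under that assumption; \x1b is the ESC byte that echo -e writes for \e:
# Minimal sketch: put the ESC byte (\x1b) in front of every color sequence,
# so the file contains real escape sequences the terminal can render later.
GREEN = "\x1b[32m"
BOLD = "\x1b[1m"
RESET = "\x1b[0m"

with open("file.log", "w") as log:
    log.write(f"{GREEN}{BOLD}(INF){RESET} GRL seed 1428{RESET}\n")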
An alternative way to view the colors is to use a package like aha (Ansi HTML Adapter).
It produces HTML instead of plain text, so you can open the output in your browser and it will have the right color coding.
However, you would run into the same problem: if the color codes aren't escaped correctly, the output is not going to be colored.
I ran the following command to convert the correctly formatted text to HTML:
aha -f example.txt > example.html
Opening example.html in the browser shows the same text with the correct colors.
You can find more info on how to use colors in bash in this Bash Tips article.
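If you end up preferring the first option from the question, just reading file.log without the color codes at all, you can strip the sequences before viewing. A minimal Python sketch, assuming the file contains the usual ESC [ ... m color sequences:
import re
import sys

# Minimal sketch: remove ANSI color sequences (ESC [ ... m) and print the plain text.
# Usage: python strip_colors.py file.log
ansi = re.compile(r'\x1b\[[0-9;]*m')

with open(sys.argv[1], encoding='utf-8', errors='replace') as f:
    for line in f:
        sys.stdout.write(ansi.sub('', line))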
I'm using the asn1crypto package with x509. I'd like to find particular values in a .der file. The file is opened and read(), then:
mycert = x509.Certificate.load(data)
This returns an object of type asn1crypto.x509.Certificate whose raw contents look like b'0\x81\x50\...'. In the debugger, mycert can be expanded to show the various keys and values; however, I'd like to search directly in mycert for such keys/values. How can I do this?
EDIT:
The asn1crypto package doesn't have to be used; another one can be used instead.
EDIT:
Expanded code:
with open(cert_path, 'rb') as cert_file:
    data = cert_file.read()
mycert = x509.Certificate.load(data)
a = mycert.native  # doesn't work!
In asn1crypto.x509, the attribute native contains the native Python datatype representation of the certificate. The values are hierarchically structured and can themselves be OrderedDicts:
import asn1crypto.x509 as x509
import pprint

with open('crt.der', mode='rb') as file:
    data = file.read()
mycert = x509.Certificate.load(data)
pprint.pprint(mycert.native)
Output:
OrderedDict([('tbs_certificate',
              OrderedDict([('version', 'v3'),
                           ('serial_number', 15158908894724103801),
                           ('signature',
                            OrderedDict([('algorithm', 'sha256_rsa'),
                                         ('parameters', None)])),
                           ('issuer',
                            OrderedDict([('country_name', 'XX'),
                                         ('state_or_province_name',
                                          'Some-State'),
                                         ('locality_name', 'Some-City'),
                                         ('organization_name', 'example ltd'),
                                         ('common_name', 'www.example.com'),
                                         ('email_address',
                                          'info@example.com')])),
                           ('validity',
                            OrderedDict([('not_before',
                                          datetime.datetime(2022, 9, 5, 6, 58, 21, tzinfo=datetime.timezone.utc)),
                                         ('not_after',
                                          datetime.datetime(2022, 10, 5, 6, 58, 21, tzinfo=datetime.timezone.utc))])),
                           ('subject',
                            OrderedDict([('country_name', 'XX'),
                                         ('state_or_province_name',
                                          'Some-State'),
                                         ('locality_name', 'Some-City'),
                                         ('organization_name', 'example ltd'),
                                         ('common_name', 'www.example.com'),
                                         ('email_address',
                                          'info@example.com')])),
                           ...
You can find several discussions on SO about how to search a nested dict, such as "Find all occurrences of a key in nested dictionaries and lists".
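For example, a minimal recursive helper (a sketch of that approach, not something provided by asn1crypto) that collects every value stored under a given key anywhere in the structure returned by .native could look like this:
def find_key(obj, key):
    # Recursively collect every value stored under `key` in nested dicts/lists;
    # OrderedDict is a dict subclass, so the .native structure is covered.
    results = []
    if isinstance(obj, dict):
        for k, v in obj.items():
            if k == key:
                results.append(v)
            results.extend(find_key(v, key))
    elif isinstance(obj, (list, tuple)):
        for item in obj:
            results.extend(find_key(item, key))
    return results

find_key(mycert.native, 'common_name')
# ['www.example.com', 'www.example.com']  (one from the issuer, one from the subject)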
I am trying to print Korean characters using the HYGothic-Medium font. I have the font as a 'korean.h2gtrm.ttf' file under the site-packages\reportlab\fonts folder.
I did the following:
pdfmetrics.registerFont(UnicodeCIDFont(TTFont('HYGothic-Medium', 'korean.h2gtrm.ttf')))
pdfmetrics.registerFontFamily('HYGothic-Medium', normal='HYGothic-Medium',
                              bold='HYGothic-Medium', italic='HYGothic-Medium',
                              boldItalic='HYGothic-Medium')
addMapping('HYGothic-Medium', 0, 0, 'korean.h2gtrm')  # normal
addMapping('HYGothic-Medium', 0, 1, 'korean.h2gtrm')
addMapping('HYGothic-Medium', 1, 0, 'korean.h2gtrm')
addMapping('HYGothic-Medium', 1, 1, 'korean.h2gtrm')
pstyle = ParagraphStyle(name='KOR', fontName='HYGothic-Medium', fontSize=10)
p = Paragraph(text, pstyle)  # where text is my Korean characters
story.append(p)  # error!
I get the following error:
Can't map determine family/bold/italic for hygothic-medium
paragraph text '<para>\uc724\uadfc Machine</para>' caused exception
Any suggestions?
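For reference, here is a minimal sketch of the two registration routes ReportLab normally expects; this is an assumption about the intended setup rather than a fix for the exact error above, and 'KoreanTTF' is a placeholder name:
from reportlab.pdfbase import pdfmetrics
from reportlab.pdfbase.cidfonts import UnicodeCIDFont
from reportlab.pdfbase.ttfonts import TTFont
from reportlab.lib.styles import ParagraphStyle
from reportlab.platypus import Paragraph

# Option 1: register the built-in Korean CID font by name only
# (UnicodeCIDFont takes a font name, not a TTFont object).
pdfmetrics.registerFont(UnicodeCIDFont('HYGothic-Medium'))

# Option 2: register the TrueType file under its own name and use that name
# as fontName instead ('KoreanTTF' is a placeholder for this sketch).
# pdfmetrics.registerFont(TTFont('KoreanTTF', 'korean.h2gtrm.ttf'))

pstyle = ParagraphStyle(name='KOR', fontName='HYGothic-Medium', fontSize=10)
p = Paragraph('\uc724\uadfc Machine', pstyle)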
I am trying to write an errorformat (efm) for this output:
file: app/assets/javascripts/topbar.js
line 17, col 3, Missing semicolon.
line 19, col 19, 'is_mobile' was used before it was defined.
line 21, col 1965, Expected '{' and instead saw 'check'.
line 25, col 18, 'onScroll' was used before it was defined.
file: app/assets/javascripts/trends.js
line 2, col 55, Missing semicolon.
line 6, col 27, 'trendTypeSelected' was used before it was defined.
line 7, col 32, Expected '===' and instead saw '=='.
JSHint check failed
but I think I am missing the concept of the file name being on a separate line. Can someone help me out with this?
This is what I have so far and it does not work:
%-Pfile:\ %f,line\ %l\,\ col\ %c\,\ %m
Thanks!
Solution:
set efm=%-Pfile:\ %f,%*[\ ]line\ %l\\,\ col\ %c\\,\ %m,%-Q,%-GJSHint\ check\ failed
Vim provides %P, which pushes the parsed file (%f) onto the stack, see :help errorformat-separate-filename.
So for your error output, it would be something like %+Pfile: %f,...
I need to import an Excel document into Mathematica which has 2000 compounds in it, with each compound having 6 numerical constants assigned to it. The end goal is to type a compound name into Mathematica and have the 6 numerical constants be output. So far my code is:
t = Import["Titles.txt.", {"Text", "Lines"}] (imports compound names)
n = Import["NA.txt.", "List"] (imports the 6 values for each compound)
n[[2]] (outputs the second compounds 6 values)
Instead of n[[#]], I would like to know how to type in a compound name from the imported compound names and have the 6 values be output.
I'm not sure I understand your question - you have two text files rather than an Excel file, for example, and it's not clear what the data looks like. But there are probably plenty of ways to do this. Here's a suggestion (it might not be the best way):
Let's assume that you've got all your data into a table (a list of lists):
pt = {
{"Hydrogen", "H", 1, 1.0079, -259, -253, 0.09, 0.14, 1776, 1, 13.5984},
{"Helium", "He", 2, 4.0026, -272, -269, 0, 0, 1895, 18, 24.5874},
{"Lithium" , "Li", 3, 6.941, 180, 1347, 0.53, 0, 1817, 1, 5.3917}
}
To find the information associated with a particular string:
Cases[pt, {"Helium", rest__} -> rest]
{"He", 2, 4.0026, -272, -269, 0, 0, 1895, 18, 24.5874}
where the pattern rest__ holds everything that was found after "Helium".
To look up an entry by its second item, the symbol:
Cases[pt, {_, "Li", rest__} -> rest]
{3, 6.941, 180, 1347, 0.53, 0, 1817, 1, 5.3917}
If you add more information to the patterns, you have more flexibility in how you choose elements from the table:
Cases[pt, {name_, symbol_, aNumber_, aWeight_, mp_, bp_, density_,
crust_, discovered_, rest__}
/; discovered > 1850 -> {name, symbol, discovered}]
{{"Helium", "He", 1895}}
For something interactive, you could knock up a Manipulate:
elements = pt[[All, 1]];
headings = {"symbol", "aNumber", "aWeight", "mp", "bp", "density", "crust", "discovered", "group", "ion"};
Manipulate[
Column[{
elements[[x]],
TableForm[{
headings, Cases[pt, {elements[[x]], rest__} -> rest]}]}],
{x, 1, Length[elements], 1}]
Say I have a matrix that looks something like this:
{{foobar, 77},{faabar, 81},{foobur, 22},{faabaa, 8},
{faabian, 88},{foobar, 27}, {fiijii, 52}}
and a list like this:
{foo, faa}
Now I would like to add up the numbers for each line in the matrix based on the partial match of the strings in the list so that I end up with this:
{{foo, 126},{faa, 177}}
I assume I need to map a Select command, but I am not quite sure how to do that and match only the partial string. Can anybody help me? My real matrix is around 1.5 million lines, so something that isn't too slow would be of added value.
Here is a starting point:
data={{"foobar",77},{"faabar",81},{"foobur",22},{"faabaa",8},{"faabian",88},{"foobar",27},{"fiijii",52}};
{str,vals}=Transpose[data];
vals=Developer`ToPackedArray[vals];
findValPos[str_List,strPat_String]:=
Flatten[Developer`ToPackedArray[
Position[StringPosition[str,strPat],Except[{}],{1},Heads->False]]]
Total[vals[[findValPos[str,"faa"]]]]
Here is yet another approach. It is reasonably fast, and also concise.
data =
{{"foobar", 77},
{"faabar", 81},
{"foobur", 22},
{"faabaa", 8},
{"faabian", 88},
{"foobar", 27},
{"fiijii", 52}};
match = {"foo", "faa"};
f = {#2, Tr @ Pick[#[[All, 2]], StringMatchQ[#[[All, 1]], #2 <> "*"]]} &;
f[data, #] & /@ match
{{"foo", 126}, {"faa", 177}}
You can use ruebenko's pre-processing for greater speed.
This is about twice as fast as his method on my system:
{str, vals} = Transpose[data];
vals = Developer`ToPackedArray[vals];
f2 = {#, Tr @ Pick[vals, StringMatchQ[str, "*" <> # <> "*"]]} &;
f2 /@ match
Notice that in this version I test substrings that are not at the beginning, to match ruebenko's output. If you want to only match at the beginning of strings, which is what I assumed in the first function, it will be faster still.
make data
mat = {{"foobar", 77},
{"faabar", 81},
{"foobur", 22},
{"faabaa", 8},
{"faabian", 88},
{"foobar", 27},
{"fiijii", 52}};
lst = {"foo", "faa"};
now select
r1 = Select[mat, StringMatchQ[lst[[1]], StringTake[#[[1]], 3]] &];
r2 = Select[mat, StringMatchQ[lst[[2]], StringTake[#[[1]], 3]] &];
{{lst[[1]], Total@r1[[All, 2]]}, {lst[[2]], Total@r2[[All, 2]]}}
gives
{{"foo", 126}, {"faa", 177}}
I'll try to make it more functional/general if I can...
edit(1)
This below makes it more general (using the same data as above):
foo[mat_, lst_] := Select[mat, StringMatchQ[lst, StringTake[#[[1]], 3]] &]
r = Map[foo[mat, #] &, lst];
MapThread[ {#1, Total[#2[[All, 2]]]} &, {lst, r}]
gives
{{"foo", 126}, {"faa", 177}}
So now the same code above will work if lst is changed to 3 items instead of 2:
lst = {"foo", "faa", "fii"};
How about:
list = {{"foobar", 77}, {"faabar", 81}, {"foobur", 22}, {"faabaa",
8}, {"faabian", 88}, {"foobar", 27}, {"fiijii", 52}};
t = StringTake[#[[1]], 3] &;
{t[#[[1]]], Total[#[[All, 2]]]} & /@ SplitBy[SortBy[list, t], t]
{{"faa", 177}, {"fii", 52}, {"foo", 126}}
I am sure I have read a post, maybe here, in which someone described a function that effectively combined sorting and splitting but I cannot remember it. Maybe someone else can add a comment if they know of it.
Edit
OK, must be bedtime -- how could I forget GatherBy.
{t[#[[1]]], Total[#[[All, 2]]]} & /@ GatherBy[list, t]
{{"foo", 126}, {"faa", 177}, {"fii", 52}}
Note that for a dummy list of 1.4 million pairs this took a couple of seconds, so it's not exactly a super-fast method.