I'm currently writing a command line tool for myself that needs to print some information on the terminal. I'm a little annoyed by the whole formatting. Here is my example.
import logging

formatter = logging.Formatter(fmt='%(message)s')
console_logger = logging.getLogger("console_logger")
console_logger.setLevel(logging.DEBUG)
console_logger_handler = logging.StreamHandler()
console_logger_handler.setFormatter(formatter)
console_logger.addHandler(console_logger_handler)
console_logger.propagate = False
Here goes some further code, and then I have the printing function:
for element in open_orders:
    console_logger.info("Type: {}, Rate: {}, amount: {}, state: {}, pair: {}/{}, creation: {}, id: {}".format(
        element.type,
        element.rate,
        element.amount,
        element.state,
        element.currency_pair.get_base_currency().upper(),
        element.currency_pair.get_quote_currency().upper(),
        creation_time,
        element.order_id))
I would rather like to have this as a column where the output is aligned at the colon. After each element, a line of underscores or minuses would be nice as well; this should respect the terminal width. I know this can be hardcoded in some manner, but isn't there a better way? Some kind of templating engine that can handle multiline output?
EDIT:
So here is an example:
Type     : buy
Rate     : 1234
amount   : 1
state    : active
pair     : usd/eur
creation : 2017.12.12
id       : 123456
I know this can be printed line by line with format, but I need to determine the length of the longest string on my own, and I was wondering if there isn't a framework or something more elegant doing this for me.
Use format and adjust the field widths to your data:
for element in open_orders:
    console_logger.info("Type: {:25s}, Rate: {:25s}, amount: {:07.2f}, state: {:25s}, pair: {:25s}/{:25s}, creation: {:25s}, id: {:25s}".format(
        element.type,
        element.rate,
        element.amount,
        element.state,
        element.currency_pair.get_base_currency().upper(),
        element.currency_pair.get_quote_currency().upper(),
        creation_time,
        element.order_id))
You can also visit this site: https://pyformat.info/
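If you want the colon-aligned block from your EDIT without pulling in a framework, here is a minimal sketch using only the standard library (the field names and values are just the ones from your example; shutil.get_terminal_size provides the separator width):

import shutil

fields = [
    ("Type", "buy"),
    ("Rate", 1234),
    ("amount", 1),
    ("state", "active"),
    ("pair", "usd/eur"),
    ("creation", "2017.12.12"),
    ("id", 123456),
]

# The widest label decides where the colons line up.
width = max(len(name) for name, _ in fields)
for name, value in fields:
    print("{:<{w}} : {}".format(name, value, w=width))

# Separator line that respects the current terminal width.
print("-" * shutil.get_terminal_size().columns)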
In addition, you could try to use Colorama.
You have to install it, typically from PyPI.
It allows you to handle cursor positioning, so you can control at which position on the screen (terminal) you want to print data, using "coordinates". Also, you can apply colors to text, which could give you a cleaner and prettier look if you want.
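A minimal sketch of what that can look like, assuming Colorama is installed from PyPI (the cursor move below is a raw ANSI escape sequence; Colorama's init() is what makes such sequences work on Windows as well):

from colorama import Fore, Style, init

init()  # enable ANSI escape handling (needed on Windows)

# Move the cursor to row 2, column 10, then print a green value and reset.
print("\033[2;10H" + Fore.GREEN + "state : active" + Style.RESET_ALL)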
So what I finally found, which helps a lot at least in the case of lists and their formatting, is this:
terminaltable
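Presumably this refers to the terminaltables package on PyPI. Under that assumption, a small sketch of how it can be used (AsciiTable takes a list of rows and renders the borders for you):

from terminaltables import AsciiTable

rows = [
    ["Type", "Rate", "amount", "state"],  # header row
    ["buy", "1234", "1", "active"],
    ["sell", "4321", "2", "active"],
]
print(AsciiTable(rows).table)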
I ran a PDF through a series of processes to extract the text from it. I was successful in that regard. However, now I want to extract specific text from documents.
The document is set up as a multi-line string (I believe; when I paste it into Word, the paragraph character is at the end of each line):
Send Unit: COMPLETE
NOA Selection: 20-0429.07
Pni/Trk: 3 Panel / 3 Track
Panel Stack: STD
Width: 142.0000
The information I want to extract is the numbers following "NOA Selection:".
I know I can do a regex something to the effect of:
pattern = re.compile(r'NOA\sSelection:\s\d*-\d*\.\d*')
but I only want the numbers after the NOA selection, especially because NOA Selection will always be the same but the format of the numbers/letters/./-/etc. can vary pretty wildly. This looked promising but it is in Java and I haven't had much luck recreating it in Python.
I think I need to use (?<=...), but haven't been able to implement it.
Also, several of the examples show the string stored in the python file as a variable, but I'm trying to access it from a .txt file, so I might be going wrong there. This is what I have so far.
import re

with open('export1.txt', 'r') as d:
    contents = d.read()

p = re.compile('(?<=NOA)')
s = re.search(p, contents)
print(s.group())
Thank you for any help you can provide.
With your shown samples, you could try the following too. For the sample 20-0429.07 I have kept the .07 part optional in the regex; in case you have values with only 20-0429, it should work for those also.
import re
val = """Send Unit: COMPLETE
NOA Selection: 20-0429.07"""
matches = re.findall(r'NOA\s+Selection:\s+(\d+-\d+(?:\.\d+)?)', val)
print(matches)
Output:
['20-0429.07']
Explanation: adding a detailed explanation (for explanation purposes only).
NOA\s+Selection:\s+  ## Match NOA, spaces (1 or more), Selection:, spaces (1 or more).
(\d+-\d+(?:\.\d+)?)  ## Capturing group: digits (1 or more), -, digits (1 or more),
                     ## then a non-capturing group matching a dot followed by digits, kept optional.
Keeping it simple, you could use re.findall here:
import re

inp = """Send Unit: COMPLETE
NOA Selection: 20-0429.07"""
matches = re.findall(r'\bNOA Selection: (\S+)', inp)
print(matches) # ['20-0429.07']
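If you specifically want to use the (?<=...) lookbehind mentioned in the question, a minimal sketch on the same sample text (note that Python lookbehinds must be fixed-width, so the label is matched literally and the variable part is captured after it):

import re

inp = """Send Unit: COMPLETE
NOA Selection: 20-0429.07"""

m = re.search(r'(?<=NOA Selection: )\S+', inp)
if m:
    print(m.group())  # 20-0429.07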
The Problem
I have a slice of string values wherein each value is formatted based on a template. In my particular case, I am trying to parse Markdown URLs as shown below:
- [What did I just commit?](#what-did-i-just-commit)
- [I wrote the wrong thing in a commit message](#i-wrote-the-wrong-thing-in-a-commit-message)
- [I committed with the wrong name and email configured](#i-committed-with-the-wrong-name-and-email-configured)
- [I want to remove a file from the previous commit](#i-want-to-remove-a-file-from-the-previous-commit)
- [I want to delete or remove my last commit](#i-want-to-delete-or-remove-my-last-commit)
- [Delete/remove arbitrary commit](#deleteremove-arbitrary-commit)
- [I tried to push my amended commit to a remote, but I got an error message](#i-tried-to-push-my-amended-commit-to-a-remote-but-i-got-an-error-message)
- [I accidentally did a hard reset, and I want my changes back](#i-accidentally-did-a-hard-reset-and-i-want-my-changes-back)
What do I want to do?
I am looking for ways to parse this into a value of type:
type Entity struct {
    Statement string
    URL       string
}
What have I tried?
As you can see, all the items follow the pattern: - [{{ .Statement }}]({{ .URL }}). I tried using the fmt.Sscanf function to scan each string as:
var statement, url string
fmt.Sscanf(s, "[%s](%s)", &statement, &url)
This results in:
statement = "I"
url = ""
The issue is with the scanner storing space-separated values only. I do not understand why the URL field is not getting populated based on this rule.
How can I get the Markdown values as mentioned above?
EDIT: As suggested by Marc, I will add a couple of clarification points:
This is a general-purpose question on parsing strings based on a format. In my particular case, a Markdown parser might help me, but my intention is to learn how to handle such cases in general, where a library might not exist.
I have read the official documentation before posting here.
Note: The following solution only works for "simple", non-escaped input markdown links. If this suits your needs, go ahead and use it. For full markdown-compatibility you should use a proper markdown parser such as gopkg.in/russross/blackfriday.v2.
You could use regexp to get the link text and the URL out of a markdown link.
So the general input text is in the form of:
[some text](somelink)
A regular expression that models this:
\[([^\]]+)\]\(([^)]+)\)
Where:
\[ is the literal [
([^\]]+) is for the "some text", it's everything except the closing square brackets
\] is the literal ]
\( is the literal (
([^)]+) is for the "somelink", it's everything except the closing brackets
\) is the literal )
Example:
r := regexp.MustCompile(`\[([^\]]+)\]\(([^)]+)\)`)

inputs := []string{
    "[Some text](#some/link)",
    "[What did I just commit?](#what-did-i-just-commit)",
    "invalid",
}

for _, input := range inputs {
    fmt.Println("Parsing:", input)
    allSubmatches := r.FindAllStringSubmatch(input, -1)
    if len(allSubmatches) == 0 {
        fmt.Println("  No match!")
    } else {
        parts := allSubmatches[0]
        fmt.Println("  Text:", parts[1])
        fmt.Println("  URL: ", parts[2])
    }
}
Output (try it on the Go Playground):
Parsing: [Some text](#some/link)
  Text: Some text
  URL:  #some/link
Parsing: [What did I just commit?](#what-did-i-just-commit)
  Text: What did I just commit?
  URL:  #what-did-i-just-commit
Parsing: invalid
  No match!
You could create a simple lexer in pure-Go code for this use case. There's a great talk by Rob Pike from years ago that goes into the design of text/template which would be applicable. The implementation chains together a series of state functions into an overall state machine, and delivers the tokens out through a channel (via Goroutine) for later processing.
I have an elasticsearch index that contains various member documents. Each member document contains a membership object, along with various fields associated with / describing individual membership. For example:
{"membership": {"join_date": "2015-01-01", "status": "A"}}
Membership status can be 'A' (active) or 'I' (inactive); both are Unicode string values. I'm interested in providing a slight boost to the score of documents that contain an active membership status.
In my groovy script, along with other custom boosters on various numeric fields, I have added the following:
String status = doc['membership.status'].value;
float status_boost = 0.0;
if (status=='A') {status_boost = 2.0} else {status_boost=0.0};
return _score + status_boost
For some reason associated with how strings operate via groovy, the check (status=='A') does not work. I've attempted (status.toString()=='A'), (status.toString()=="A"), (status.equals('A')), plus a number of other variations.
How should I go about troubleshooting this (in a productive, efficient manner)? I don't have a stand-alone installation of groovy, but when I pull the response data in python the status is very much so either a Unicode 'A' or 'I' with no additional spacing or characters.
@VineetMohan is most likely right about the value being 'a' rather than 'A'.
You can check how the values are indexed by spitting them back out as script fields:
$ curl -XGET localhost:9200/test/_search -d '
{
  "script_fields": {
    "status": {
      "script": "doc[\"membership.status\"].values"
    }
  }
}
'
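Since you mentioned pulling the response data in Python, a rough equivalent of that check using the elasticsearch-py client (the index name test is just a placeholder here):

from elasticsearch import Elasticsearch

es = Elasticsearch()
resp = es.search(index="test", body={
    "script_fields": {
        "status": {
            "script": "doc[\"membership.status\"].values"
        }
    }
})

# Each hit carries the script field values under "fields".
for hit in resp["hits"]["hits"]:
    print(hit["fields"]["status"])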
From there, it should be an indication of what you're actually working with. More than likely based on the name and your usage, you will want to reindex (recreate) your data so that membership.status is mapped as a not_analyzed string. If done, then you won't need to worry about lowercasing of anything.
In the meantime, you can probably get by with:
return _score + (doc['membership.status'].value == 'a' ? 2 : 0)
As a big aside, you should not be using dynamic scripting. Use stored scripts in production to avoid security issues.
The problem is this:
In my program, at first the user gets options for a first name - so hopefully he likes something from the options and chooses it - so far everything is OK!
But then, when he types a space, he starts receiving options for the second name, and if he likes something and chooses it, the autocomplete just erases the first name. Is there any way I can change that?
Hello Rich, thank you very much for your response - now I've decided to change my task, and here is what I made: when a user types, for example, the character I, I get all the first names that start with I - so far no problem! And when he types the white space and K, for example, I make a request to my web service that gets the middle names that start with K or the last names that start with K (one of them should start with K for Iwelina), so in this case for Iwelina I've got RADULSKA KOSEWA and KOSEWA NEDEWA! For the source of the autocomplete I concatenate Iwelina with (RADULSKA KOSEWA) and Iwelina with (KOSEWA NEDEWA), so at the end I've got IWELINA RADULSKA KOSEWA and IWELINA KOSEWA NEDEWA! The only problem is that when I type Iwelina K, I get only IWELINA KOSEWA NEDEWA! Here is the code for the autocomplete:
$('#input').autocomplete({
    source: function(request, response) {
        var matcher = new RegExp($.ui.autocomplete.escapeRegex(request.term, " "));
        var data = $.grep(srcAutoComp, function(value) {
            return matcher.test(value.label || value.value || value);
        });
        response(data);
    }
});
If you know how I can change it, I will be glad for the help.
I don't understand how, when the user begins to type the second name, he's getting results that are only the last name. For example, if he types "Joh" and selects "John" from the options, and then continues to type "John Do", then how is it possible that your drop down gives him results for only the last name, like "Doe"?
At any rate, assuming this is truly happening, you could just combine all combinations of first and last names in your source data, and that will show "John Doe" in the drop down when the user types "Joh", selects "John", and then continues to type "John Do".
Another way to do this is with a complicated change to the search and response events to search after a space if it is there, and recombine it with the first string after the search for the last name is complete. If you give me your source data, I could put something together for this.
For convenience in grouping CouchDB functions,
I created a file format that groups separate things together using YAML.
It basically contains entries in the form of name.ext: |
followed by an indented block of code in the language fitting to .ext.
For more pleasant editing, I'd like to have Vim use the correct syntax highlighters for them.
EDIT:
Some code examples, as requested.
Simple:
map.coffee: |
  (doc) ->
    for item in doc.items:
      emit [doc.category, item], null
    return
reduce: _count
More complex:
map.coffee: |
  (doc) ->
    emit doc.category, {items: 1, val: doc.value}
    return
reduce.coffee: |
  (keys, values, rereduce) ->
    ret = {items: 0, val: 0}
    for v in values
      ret.items += doc.items
      ret.val += doc.val
    return ret
I believe that what you want is to make use of Vim's syntax regions (:help syn-region). But regions take delimiters as parameters.
You have a well-defined start but not a defined end; maybe you could work your way around that by establishing some conventions here, like "2 empty new lines at the end".
There are similar answers that might give you a hint (including the docs) on how to implement a solution, like: Embedded syntax highlighting in Vim
Also interesting, and a similar approach, is this Vimtip: http://vim.wikia.com/wiki/Different_syntax_highlighting_within_regions_of_a_file
You have to write your own syntax file, and define a syntax region for each of your entries. Inside that region, you can then syntax-include the corresponding language as defined by your ext. Read all the details at :help :syn-include.
If that sounds too complicated, check out my SyntaxRange plugin. It is based on the Vimtip mentioned by alfredodeza. With it, you can quickly assign a syntax to a range of lines, e.g. :11,42SyntaxInclude perl