Clojure - Docjure: Method works in REPL but not in File - excel

I am just trying to read the content of an Excel file in Clojure, using the docjure library. When I use the sample code in the REPL, the output is what I want. But after inserting it into a file I get a Wrong number of args error for the spreadsheet/select-sheet method.
Here is the code:
(use 'dk.ative.docjure.spreadsheet)
(->> (load-workbook (str (System/getProperty "user.dir") "/resources/public/xls/test.xls")
       (select-sheet "menu")
       (select-columns {:A :number, :D :name})
       ))
The args for this method are [name ^Workbook workbook]. Why does it only need one argument in the REPL but two in the file?

Just as Alex said in the comments, you messed up your parens.
Right now your code evaluates to:
(load-workbook (str (System/getProperty "user.dir")
                    "/resources/public/xls/test.xls")
               (select-sheet "menu")
               (select-columns {:A :number, :D :name}))
Here is how your code should actually look:
(->> "/resources/public/xls/test.xls"
(str (System/getProperty "user.dir")) ; prefix it with user.dir
load-workbook ; load .xls workbook
(select-sheet "menu") ; select menu sheet
(select-columns {:A :number, :D :name})) ; select some columns
This evaluates to:
(select-columns {:A :number, :D :name}
                (select-sheet "menu"
                              (load-workbook (str (System/getProperty "user.dir")
                                                  "/resources/public/xls/test.xls"))))
As you can see, both select-sheet and select-columns are called with two arguments here.
To better understand how the thread-last macro ->> works, see its documentation.
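For instance, you can see what the macro does to a small, unrelated form by expanding it in the REPL (a generic illustration, not tied to docjure):

;; ->> threads the value in as the *last* argument of each following form
(macroexpand '(->> 5 (+ 3) (* 2)))
;; => (* 2 (+ 3 5))

(->> 5 (+ 3) (* 2))
;; => 16

That is exactly the shape of the expansion of your docjure code above.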

Related

iterating through a list to look up data, and construct a string

Elisp newbie, looking for help with this.
I have this variable:
(setq bibtex-completion-additional-search-fields '(tags keywords))
I then have a function which, if this variable is set, needs to iterate through those field names, look them up in a data record, and concatenate the resulting values into a string, which it returns.
Here's what the data looks like:
("2009-03-01 Zukin, Sharon and Trujillo, Valerie and Frase, Peter and Jackson, Danielle and Recuber, Tim and Walker, Abraham gentrification New Retail Capital and Neighborhood Change: Boutiques and Gentrification in New York City article zukin_new_2009"
("date" . "2009-03-01")
("author" . "Zukin, Sharon and Trujillo, Valerie and Frase, Peter and Jackson, Danielle and Recuber, Tim and Walker, Abraham")
("tags" . "gentrification, retail")
("title" . "New {{Retail Capital}} and {{Neighborhood Change}}: {{Boutiques}} and {{Gentrification}} in {{New York City}}")
("=type=" . "article")
("=key=" . "zukin_new_2009"))
This is what I have for the function ATM, which I know is wrong. But I can't wrap my head around how to do this in elisp (I have more experience with Python and Ruby).
(defun bibtex-completion--get-extra-search-data (candidate)
  "Return extended search metadata as string."
  (if bibtex-completion-additional-search-fields
      ;; if the data is present, pull its value(s), join into a single string
      ;; TODO FIX ME, this is wrong
      (format "%s" (cl-loop
                    for field in bibtex-completion-additional-search-fields
                    collect
                    (cdr (assoc field (cdr candidate)))))))
So with the example data above, the function should return that string "gentrification, retail". And if that record were to have a keyword field with "foo", the return string would be "gentrification, retail, foo" (or could just be space-separated; not sure it matters).
First, the keys in your data structure are strings, not symbols. So, you could change your lookup fields,
(setq bibtex-completion-additional-search-fields '("tags" "keywords"))
but using symbols as the cars in the candidate data structure is probably better (efficiency-wise, I believe).
The canonical Elisp for joining a list into a string is (mapconcat #'identity ...):
(mapconcat
 #'identity
 (delq nil
       (cl-loop for field in bibtex-completion-additional-search-fields
                collect (cdr (assoc field (cdr candidate)))))
 ", ")

Optional lines in Ultisnips using parameters

I am trying to have some additional lines inserted in snippets based on a parameter. I am not sure how to design such a snippet.
snippet 'mysnip' 'snippets with optional lines'
This snippet line1 is inserted by default
<This line1a should be inserted if parameter1 is true>
This snippet line2 is inserted by default
<This line2a should be inserted if parameter1 is true>
endsnippet
It is not very clear to me how/where you want to enter your parameters.
One option is to define two snippets, one called mysnip and the other one mysnip1 - in this case you pass the parameter in the snippet name, and the definition of these two snippets should be straightforward.
Another option is to just define one snippet mysnip, and pass the parameter somewhere within this snippet. A working example could look like this:
snippet mysnip1
${1:Change this snippet line to have the text "True" (without quotes).}
This line is always present. `!p
if t[1] == "True":
    snip += "A line displayed when $1 has the text True."
`
endsnippet
You can fake this using regular expression triggers. It only works if you do not want to have tabstops in your optional arguments though:
snippet /mysnip([a-z]*)/ "Optionals" r
this is always here!`!p
if "a" in match.group(1):
    snip += "only when a"
if "b" in match.group(1):
    snip += "only when b"`
endsnippet
If you type mysnip it will just be the first line, mysnipb will give the first and third, and mysnipab will be all of it.
Can't you put your optional lines in a variable that your snippet engine will expand?
In case it doesn't automatically join empty lines produced by an empty variable, you may need to have your variable contain a newline character and put it on the line before/after.
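For example, with UltiSnips you could read the optional text from a Vim variable through `!v` interpolation (a sketch; g:mysnip_extra is a made-up variable name):

snippet mysnip "optional line read from a vim variable" b
This snippet line1 is inserted by default
`!v get(g:, 'mysnip_extra', '')`
This snippet line2 is inserted by default
endsnippet

If g:mysnip_extra is unset you are left with an empty line in the middle, which is exactly the joining caveat mentioned above.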

text with redundant side chars formatting in emacs

I did lots of searching without luck. I think this is easy, but it could still help others, so here it goes.
The goal here is to format a kind of Java String into plain text.
For example, consider a String in Java code,
logger.LogText( "Hi, this is 1st line " + "\n" +
"speak sth. in 2nd line " + "\n" +
"answered...? ");
and I want to copy the whole String, paste it into my plain text file, and then run
M-x some-format-function-by-template-on-selection
and get the result:
Hi, this is 1st line
speak sth. in 2nd line
answered...?
Is there a built-in command for this?
It doesn't have to use a template, but don't you think it would be cool?
Currently I use 'align' as a workaround.
The built-in commands are the regexp functions :-)
(defun my-reduce-to-string (start end)
  "Extract a quoted string from the selected region."
  (interactive "r")
  (let* (;; keep only the contents of each "..." chunk
         (text1 (replace-regexp-in-string ".*?\"\\([^\"]+\\)\"[^\"]*" "\\1"
                                          (buffer-substring start end)))
         ;; turn literal \n sequences into real newlines
         (text (replace-regexp-in-string "\\\\n" "\n" text1)))
    (delete-region start end)
    (insert text)))
Note that this is a destructive function -- it replaces the text in the buffer as requested.
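For a quick non-interactive test you can run it over a whole scratch buffer containing the Java snippet (just a usage sketch):

;; operate on everything in the current buffer
(my-reduce-to-string (point-min) (point-max))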

Stata behaviour on macros, different outputs

I have a manually created list in a macro in Stata, something like
global list1 "a b c d"
which I later iterate through with something like
foreach name in $list1 {
    action
}
I am trying to change this to a DB-driven list because the list is getting big and changing quickly. I create a new $list1 with the following commands:
odbc load listitems=items, exec("SELECT items from my_table")
levelsof listitems
global list1=r(levels)
The items in each are the same, but this list seems to be different, and when I have too many items it breaks in the foreach loop with the error
{ required
r(100);
Also, when I run only levelsof listitems I get the output
`"a"' `"b"' `"c"' `"d"'
which looks a little bit different from the other macro.
I've been stuck on this for a while. Again, it only fails when the number of items becomes large (over 15); any help would be much appreciated.
Solution 1:
levelsof listitems, clean local(list1)
foreach name of local list1 {
    ...action with `name'...
}
Solution 2:
levelsof listitems, clean
global list1 `r(levels)'
foreach name of global list1 {
    ...action with `name'...
}
Explanation:
When you type
foreach name in $list1 {
then whatever is in $list1 gets substituted inline before Stata ever sees it. If global macro list1 contains a very long list of things, then Stata will see
foreach name in a b c d e .... very long list of things here ... {
It is more efficient to tell Stata that you have a list of things in a global or local macro, and that you want to loop over those things. You don't have to expand them out on the command line. That is what
foreach name of local list1 {
and
foreach name of global list1 {
are for. You can read about other capabilities of foreach in -help foreach-.
Also, you originally coded
levelsof listitems
global list1=r(levels)
and you noted that you saw
`"a"' `"b"' `"c"' ...
as a result. Those are what Stata calls "compound quoted" strings. A compound quoted string lets you effectively nest quoted things. So, you can have something like
`"This is a string with `"another quoted string"' inside it"'
You said you don't need that, so you can use the "clean" option of levelsof to not quote up the results. (See -help levelsof- for more info on this option.) Also, you were assigning the returned result of levelsof (which is in r(levels)) to a global macro afterward. It turns out -levelsof- actually has an option named -local()- where you can specify the name of a local (not global) macro to directly put the results in. Thus, you can just type
levelsof listitems, clean local(list1)
to both omit the compound quotes and to directly put the results in a local macro named list1.
Finally, if you for some reason don't want to use that local() option and want to stick with putting your list in a global macro, you should code
global list1 `r(levels)'
rather than
global list1=r(levels)
The distinction is that the latter treats r(levels) as a function and runs it through Stata's string expression parser. In Stata, strings (strings, not macros containing strings) have a limit of 244 characters. Macros containing strings on the other hand can have thousands of characters in them. So, if r(levels) had more than 244 characters in it, then
global list1=r(levels)
would end up truncating the result stored in list1 at 244 characters.
When you instead code
global list1 `r(levels)'
then the contents of r(levels) are expanded in-line before the command is executed. So, Stata sees
global list1 a b c d e ... very long list ... x y z
and everything after the macro name (list1) is copied into that macro name, no matter how long it is.
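Putting Solution 1 together with the ODBC step from the question, the whole thing might look like this sketch (table and variable names are the ones from the question):

odbc load listitems=items, exec("SELECT items from my_table")
levelsof listitems, clean local(list1)
foreach name of local list1 {
    ...action with `name'...
}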

How can I import data from text files into Excel?

I have multiple folders. There are multiple txt files inside these folders. I need to extract data (just a single value: value ---> 554) from a particular type of txt file in these folders (individual_values.txt):
No 100 Value 555 level match 0.443 top level 0.443 bottom 4343
There will be many folders with the same txt file names but different values. Can all these values be copied to Excel one below the other?
I have to extract a value from the txt file I mentioned above. It is the same text file with the same name, located inside different folders. All I want to do is extract this value from all the text files and paste them into Excel or a txt file, one below the other, one per row.
E.g.: the above is a text file; here I have to get the value 555, and similarly the other different values:
555
666
666
776
Yes.
(you might want to clarify your question)
Your question isn't very clear; I imagine you want to know how this can be done.
You probably need to write a script that traverses the folders, reads the individual files, parses them for the value you want, and generates a Comma Separated Values (CSV) file. CSV files can easily be imported to Excel.
There are two or three basic methods you can use to get stuff into an Excel spreadsheet.
You can use OLE wrappers to manipulate Excel.
You can write the file in a binary form
You can use Excel's import methods to take delimited text in as a spreadsheet.
I chose the latter, because 1) it is the simplest, and 2) your problem is so loosely stated that it does not require a more complex approach. The solution below outputs a tab-delimited text file that Excel can easily open.
In Perl:
use IO::File;

my @field_names = split m|/|, 'No/Value/level match/top level/bottom';

my $input = IO::File->new( '<data.txt' );
die 'Could not open data.txt for input!' unless $input;

# collect one hash of field => value per line that has a Value field
my @data_rows;
while ( my $line = <$input> ) {
    my %fields = $line =~ /(level match|top level|bottom|Value|No)\s+(\d+\S*)/g;
    push @data_rows, \%fields if exists $fields{Value};
}
$input->close();

my $tab_file = IO::File->new( '>data.tab' );
die 'Could not open data.tab for output!' unless $tab_file;

# header row, then one tab-delimited row per record
$tab_file->print( join( "\t", @field_names ), "\n" );
foreach my $data_ref ( @data_rows ) {
    $tab_file->print( join( "\t", @$data_ref{@field_names} ), "\n" );
}
$tab_file->close();
NOTE: Excel's text processing is really quite neat. Try opening the text below (replacing the \t with actual tabs) -- or even copying and pasting it:
1\t2\t3\t=SUM(A1:C1)
I chose C# because I thought it would be fun to use a recursive lambda. This will create the csv file containing matches to the regex pattern.
using System;
using System.IO;
using System.Linq;
using System.Text;
using System.Text.RegularExpressions;

string root_path = @"c:\Temp\test";
string match_filename = "test.txt";

Func<string, string, StringBuilder, StringBuilder> getdata = null;
getdata = (path, filename, content) => {
    // grab the captured value from every matching file in this folder
    Directory.GetFiles(path)
             .Where(f => Path.GetFileName(f)
                             .Equals(filename, StringComparison.OrdinalIgnoreCase))
             .Select(f => File.ReadAllText(f))
             .Select(c => Regex.Match(c, @"value[\s\t]*(\d+)",
                                      RegexOptions.IgnoreCase))
             .Where(m => m.Success)
             .Select(m => m.Groups[1].Value)
             .ToList()
             .ForEach(m => content.AppendLine(m));

    // recurse into sub-folders
    Directory.GetDirectories(path)
             .ToList()
             .ForEach(d => getdata(d, filename, content));

    return content;
};

File.WriteAllText(
    Path.Combine(root_path, "data.csv"),
    getdata(root_path, match_filename, new StringBuilder()).ToString());
No.
just making sure you have a 50/50 chance of getting the right answer
(assuming it was a question answerable by Yes and No) hehehe
File_not_found
Gotta have all three binary states for the response.
