I'm using the following groovy code to search a file for a string, an account number. The file I'm reading is about 30MB and contains 80,000-120,000 lines. Is there a more efficient way to find a record in a file that contains the given AcctNum? I'm a novice, so I don't know which area to investigate, the toList() or the for-loop. Thanks!
AcctNum = 1234567890
if (testfile.exists())
{
    lines = testfile.readLines()
    words = lines.toList()
    for (word in words)
    {
        if (word.contains(AcctNum)) { done = true; match = 'YES' ; break }
        chunks += 1
        if (done) { break }
    }
}
Sad to say, I don't even have Groovy installed on my current laptop - but I wouldn't expect you to have to call toList() at all. I'd also hope you could express the condition in a closure, but I'll have to refer to Groovy in Action to check...
Having said that, do you really need it split into lines? Could you just read the whole thing using getText() and then just use a single call to contains()?
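Something along these lines should do it (untested, since as I say I don't have Groovy to hand; note the account number is held as a String here so contains() gets a CharSequence):
def acctNum = '1234567890'
def found = testfile.exists() && testfile.getText().contains(acctNum)
def match = found ? 'YES' : 'NO'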
EDIT: Okay, if you need to find the actual line containing the record, you do need to call readLines() but I don't think you need to call toList() afterwards. You should be able to just use:
for (line in lines)
{
    if (line.contains(AcctNum))
    {
        // Grab the results you need here
        break;
    }
}
When you say efficient, you usually have to decide which direction you mean: whether it should run quickly, or use as few resources (memory, ...) as possible. Often both lie on opposite sides and you have to pick a trade-off.
If you want to be memory-friendly, I'd suggest reading the file line by line instead of reading it all at once, which I suspect readLines does (I could be wrong there, but in other languages something like readLines reads the whole file into an array of strings).
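A rough Groovy sketch of that line-by-line route (untested; it assumes the account number is compared as a String and that each record sits on its own line):
def acctNum = '1234567890'
def matchLine = null
testfile.withReader { reader ->
    def line = reader.readLine()
    while (line != null && matchLine == null) {
        if (line.contains(acctNum)) {
            matchLine = line        // record found; the loop condition now stops the read
        }
        line = reader.readLine()
    }
}
// matchLine holds the full record, or null if no line contained the account number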
If you want it to run quickly I'd suggest, as already mentioned, reading in the whole file at once and looking for the given pattern. Instead of just checking with contains you could use indexOf to get the position and then read the record as needed from that position.
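And a sketch of the whole-file/indexOf idea (also untested; it assumes records are newline-delimited):
def acctNum = '1234567890'
def content = testfile.getText()
def pos = content.indexOf(acctNum)
if (pos >= 0) {
    def start = content.lastIndexOf('\n', pos) + 1        // beginning of the matching line
    def end = content.indexOf('\n', pos)                  // end of the matching line (-1 if last line)
    def record = content.substring(start, end < 0 ? content.length() : end)
    // extract the other fields from record here
}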
I should have explained it better: if I find a record with the AcctNum, I extract other information from that record... so I thought I needed to split the file into multiple lines.
If you control the format of the file you are reading, the solution is to add an index.
In fact, this is how databases are able to locate records so quickly.
But for 30MB of data, I think a modern computer with a decent hard drive should do the trick, instead of overcomplicating the program.
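As a rough illustration only (the ';' delimiter and the position of the account number are assumptions; adjust them to your record layout), building a simple in-memory index once makes every later lookup cheap:
def index = [:]
testfile.eachLine { line ->
    def acct = line.split(';')[0]   // assumed: account number is the first field
    index[acct] = line
}
def record = index['1234567890']    // constant-time lookup per account number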
I want to use the print command below in many places of my script, but I need to keep replacing "Survived" with some other string.
print(df.Survived.value_counts())
Can I automate the process by formatting the variable name the same way as a string? So if I want to replace "Survived" with "different", can I use something like:
var = 'different'
text = 'df.{}.value_counts()'.format(var)
print(text)
Unfortunately this prints out "df.different.value_counts()" as a string, while I need to print the value of df.different.value_counts().
I'm pretty sure a lot of IDEs have an option called refactoring, which lets you change a similar line of code/string on every line where it appears.
In VS Code, one way to refactor is to select part of the code, right-click, and choose "Change All Occurrences". This replaces the exact same code on every line where it exists.
But if you want to do what you proposed, then eval('df.{}.value_counts()'.format(var)) is an option. However, eval is insecure and dangerous, so a safer-looking approach is the ast module's literal_eval function: ast.literal_eval('df.{}.value_counts()'.format(var)).
That said, ast.literal_eval() won't actually work here: it only evaluates Python literals (strings, numbers, lists, and so on), not a method call like this. If you need a dynamic attribute lookup, getattr does the job:
text = getattr(df, var).value_counts()
print(text)
Found the way: print(df[var].value_counts())
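A tiny self-contained check of that pattern (the DataFrame and the 'Survived' column here are just made-up sample data, not from the original question):
import pandas as pd

df = pd.DataFrame({'Survived': [0, 1, 1, 0]})
var = 'Survived'
print(df[var].value_counts())            # same result as df.Survived.value_counts()
print(getattr(df, var).value_counts())   # attribute-style equivalent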
First and foremost, I'm not familiar with Perl at all. I've been studying C++ primarily for the last half year. I'm in a class now that is teaching Linux commands, and we have short little topics on languages used in Linux, including Perl, which is totally throwing me for a loop (no pun intended). I have a text file that contains a bunch of random numbers separated by spaces and tabs, maybe even newlines, that gets read into the program via a filehandle. I'm supposed to write 2 lines of code that split the lines of numbers and merge them into one array, inside of a foreach loop. I'm not looking for an answer, just a nudge in the right direction. I've been trying different things for multiple hours and feel totally silly I can't get it; I'm totally lost with the syntax. It's just a bit odd not working inside a compiler, and I'm out of my comfort zone working outside of C++. I really appreciate it. I've included a few photos. Basically, the code we are writing is just to store the numbers; the rest of the program will determine the smallest number and the sum of all numbers. Mine is currently incorrect because I'm not sure what to do. In the output photo, it displays all the numbers being entered via the text file, so you can see them.
Several things to fix here. First of all, please don't post screenshots of your sample data or code, as it makes it impossible to copy and paste to test your code or data. Post your code/data by indenting it with four spaces and a newline preceding the code block.
Add use strict; in your script. This should be lesson 0 in your class. After that add my to all variable declarations.
To populate @all_numbers with the numbers from each line, without using push, you can use something like this:
foreach my $line (@output_lines)
{
    my @numbers = split /\s/, $line;
    @all_numbers = (@all_numbers, @numbers);
}
You say you're "not looking for an answer," so here's your nudge:
You're almost there. You split each line well (using split /\s/) and store the numeric values in @numbers. However, notice that each time around the loop, you replace (using the assignment, @all_numbers = ...) the whole contents of @all_numbers with the numbers you found in the current line. Effectively, you're throwing away everything you've stored from the previous lines.
Instead, you want to add to @all_numbers, not replace @all_numbers. Have a look at the push() function for how to do this.
NB: Your split() call is fine, but it's more customary to use split(' ', $line) in this case. (See split(): you can use a single space, ' ', instead of the pattern, /\s/, when you want to split on any whitespace.)
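A quick illustration of the difference, assuming a line with leading whitespace and runs of spaces:
my $line = "  3\t14   159";
my @with_pattern = split /\s/, $line;  # ("", "", "3", "14", "", "", "159") - empty fields from runs of whitespace
my @with_space   = split ' ', $line;   # ("3", "14", "159") - awk-style, skips leading whitespace and runs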
It sounds like you need to store all the split elements into one array, so you are looking for the push function.
foreach $line (@input_lines)
{
    push(@all_numbers, split(/\s/, $line));
}
Your problem is that in every iteration, the split values overwrite the array instead of being appended to it. For example,
@array = qw(one two three);
@array = qw(five four seven);
print "@array";
The output is five four seven, not one two three five four seven, because the array is reinitialized with the new values each time. If you want to append the new values at the front or the back of the array, use unshift or push.
For example:
@array = qw(one two three);
push(@array, qw(five four seven));
Another way:
my @all_numbers = map { split ' ', $_ } @output_lines;
See http://perldoc.perl.org/functions/map.html
I have to write a MATLAB function with the following description:
function counts = letterStatistics(filename, allowedChar, N)
This function is supposed to open a text file specified by filename and read its entire contents. The contents will be parsed such that any character that isn't in allowedChar is removed. Finally it will return a count of all N-symbol combinations in the parsed text. This function should be stored in a file named "letterStatistics.m". I made a list of commands and notes on how the function should be organized, according to my professor's lecture notes:
1. Begin the function by setting the default value of N to 1 in case:
a. The user specifies a 0 or negative value of N.
b. The user doesn't pass the argument N into the function, i.e., counts = letterStatistics(filename, allowedChar)
2. Using the fopen function, open the file filename for reading in text mode.
3. Using the function fscanf, read in all the contents of the opened file into a string variable.
4. I know there exists a MATLAB function to turn all letters in a string to lower case. Since my analysis will disregard case, I have to use this function on the string of text.
5. Parse this string variable as follows (use logical indexing or regular expressions – do not use for loops):
a. We want to remove all newline characters without this occurring:
e.g.
In my younger and more vulnerable years my father gave me some advice that I've been turning over in my mind ever since.
In my younger and more vulnerableyears my father gave me some advicethat I've been turning over in my mindever since.
Replace all newline characters (special character \n) with a single space: ' '.
b. We will treat hyphenated words as two separate words, hence do the same for hyphens '-'.
c. Remove any character that is not in allowedChar. Hint: use regexprep with an empty string '' as an argument for replace.
d. Any sequence of two or more blank spaces should be replaced by a single blank space.
6. Use the provided permsRep function to create a matrix of all possible N-symbol combinations of the symbols in allowedChar.
7. Using the strfind function, count all the N-symbol combinations in the parsed text into an array counts. Do not loop through each character in your parsed text as you would in a C program.
8. Close the opened file using fclose.
HERE IS MY QUESTION: As you can see, I have made this list of what the function is, what it should do, and which commands to use (fclose, etc.). The trouble is that I'm aware that closing the file involves 'fclose', but other than that I'm not sure how to execute #8. The same goes for the whole function creation. I have a vague idea of how to create a function using those commands, but I'm unable to produce the actual code. How should I begin? Any guidance/hints would seriously be appreciated, because I have programmer's block and am unable to start!
I think that you are new to MATLAB, so the documentation may be complicated. The root of the problem is a basic understanding of file I/O (input/output), I guess. The thing is that when you open a file using fopen, MATLAB returns an identifier for that file, which is generally called a file ID. When you call fclose, you want MATLAB to understand which file you want to close. So what you have to do is call fclose with the correct file ID.
fid = fopen('test.txt', 'w');  % fopen (not open); 'w' opens the file for writing
fprintf(fid, 'This is a test.\n');
fclose(fid);
fid = 0; % Optional, this will make it clear that the file is not open,
         % but it is not necessary since MATLAB will send a "not open" message anyway
Regarding the function creation the syntax is something like this:
function out = myFcn(x,y)
    z = x*y;
    fprintf('z=%.0f\n',z); % Print value of z in the command window
    out = z>0;
This function multiplies two numbers and returns true if the product is positive; otherwise it returns false. This may not be the best way to do such a test, but it works as an example, I guess.
Please comment if this is not what you want to know.
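If it helps to see how the pieces from your list could hang together, here is a rough, untested skeleton. It only sketches the steps you described, leaves the permsRep/strfind counting as a placeholder, and assumes allowedChar contains plain letters with no regex metacharacters; treat it as a starting point, not a finished solution.
function counts = letterStatistics(filename, allowedChar, N)
    % Step 1: default N to 1 if it is missing or not positive
    if nargin < 3 || N <= 0
        N = 1;
    end

    % Steps 2-3 and 8: open in text mode, read everything, close when done
    fid = fopen(filename, 'rt');
    str = fscanf(fid, '%c');
    fclose(fid);

    % Step 4: disregard case
    str = lower(str);

    % Step 5: newlines and hyphens become spaces, drop disallowed
    % characters, then collapse runs of spaces
    str = regexprep(str, '[\n-]', ' ');
    str = regexprep(str, ['[^' allowedChar ' ]'], '');
    str = regexprep(str, ' +', ' ');

    % Steps 6-7: build combinations with permsRep and count with strfind
    counts = [];   % placeholder
end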
I am comparing strings from a text file, but for some reason they never match. If I do it in Ruby it is very easy, but in Processing I cannot get it to work.
This is the Ruby code that works:
f = File.open("priceMap_current_new.txt")
f.each do |str|
  arrstr = str.split(";")
  if arrstr.length == 1
    puts arrstr[0].inspect if arrstr[0] == "next\n"
  end
end
Now here's the Processing version that doesn't work; actually, it doesn't even work without reading from a file:
String[] mystr = {"number;zero", "number;one", "number;two", "number;three", "number;four"};
for (int i = 0; i < mystr.length; i++) {
  String[] numbers = split(mystr[i], ";");
  if (numbers[0] == "number") {
    println("shoooooooooooooooooout");
  }
}
Additionally, I would like to ask if there's a way to inspect elements like in Ruby; it's very handy, because if I print pts[0] in Processing I get "next" when it's actually "next\n".
Also, how do I check datatypes in Processing? Thanks!
Use if (numbers[0].equals("number"))
From the Processing documentation:
To compare the contents of two Strings, use the equals() method, as in
"if (a.equals(b))", instead of "if (a == b)".
Does anyone know how to generate the possible misspellings of a word?
Example: unemployment
- uemployment
- onemploymnet
- etc.
If you just want to generate a list of possible misspellings, you might try a tool like this one. Otherwise, in SAS you might be able to use a function like COMPGED to compute a measure of the similarity between the string someone entered, and the one you wanted them to type. If the two are "close enough" by your standard, replace their text with the one you wanted.
Here is an example that computes the Generalized Edit Distance between "unemployment" and a variety of plausible misspellings.
data misspell;
input misspell $16.;
length misspell string $16.;
retain string "unemployment";
GED=compged(misspell, string,'iL');
datalines;
nemployment
uemployment
unmployment
uneployment
unemloyment
unempoyment
unemplyment
unemploment
unemployent
unemploymnt
unemploymet
unemploymen
unemploymenyt
unemploymenty
unemploymenht
unemploymenth
unemploymengt
unemploymentg
unemploymenft
unemploymentf
blahblah
;
proc print data=misspell label;
label GED='Generalized Edit Distance';
var misspell string GED;
run;
Essentially you are trying to develop a list of text strings based on some rule of thumb, such as one letter being missing from the word, a letter being misplaced into the wrong spot, one letter being mistyped, etc. The problem is that these rules have to be explicitly defined before you can write the code, in SAS or any other language (this is what Chris was referring to). If your requirement is reduced to this one-wrong-letter scenario then this might be manageable; otherwise, the commenters are correct and you can easily create massive lists of incorrect spellings (after all, all combinations except "unemployment" constitute a misspelling of that word).
Having said that, there are many ways in SAS to accomplish this text manipulation (rx functions, some combination of other text-string functions, macros); however, there are probably better ways to accomplish this. I would suggest an external Perl process to generate a text file that can be read into SAS, but other programmers might have better alternatives.
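For instance, the one-letter-deletion case is only a few lines of Perl (a rough, untested sketch; the hard-coded word is just for illustration, and you would write the results to a text file for SAS to read):
my $word = "unemployment";
my @deletions;
for my $i (0 .. length($word) - 1) {
    my $copy = $word;
    substr($copy, $i, 1) = '';      # drop the letter at position $i
    push @deletions, $copy;
}
print "$_\n" for @deletions;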
If you are looking for a general spell checker, SAS does have proc spell.
It will take some tweaking to get it working for your situation; it's very old and clunky. It doesn't work well in this case, but you may have better results if you try another dictionary. A Google search will show other examples.
filename name temp lrecl=256;
options caps;
data _null_;
file name;
informat name $256.;
input name &;
put name;
cards;
uemployment
onemploymnet
;
proc spell in=name
dictionary=SASHELP.BASE.NAMES
suggest;
run;
options nocaps;