Find Unique Characters in a File

I have a file with 450,000+ rows of entries. Each entry is about 7 characters in length. What I want to know is which unique characters appear in this file.
For instance, if my file were the following:
Entry
-----
Yabba
Dabba
Doo
Then the result would be
Unique characters: {abdoy}
Notice I don't care about case and don't need to order the results. Something tells me this is very easy for the Linux folks to solve.
Update
I'm looking for a very fast solution. I really don't want to have to create code to loop over each entry, loop through each character...and so on. I'm looking for a nice script solution.
Update 2
By fast, I mean fast to implement... not necessarily fast to run.

BASH shell script version (no sed/awk):
while read -n 1 char; do echo "$char"; done < entry.txt | tr '[:upper:]' '[:lower:]' | sort -u
UPDATE: Just for the heck of it, since I was bored and still thinking about this problem, here's a C++ version using std::set. If run time is important, this would be my recommended option, since the C++ version takes slightly more than half a second to process a file with 450,000+ entries.
#include <iostream>
#include <set>
#include <cctype>

int main() {
    std::set<char> seen_chars;
    std::set<char>::const_iterator iter;
    char ch;

    /* ignore whitespace and case */
    while ( std::cin.get(ch) ) {
        if (! isspace(ch) ) {
            seen_chars.insert(tolower(ch));
        }
    }

    for( iter = seen_chars.begin(); iter != seen_chars.end(); ++iter ) {
        std::cout << *iter << std::endl;
    }

    return 0;
}
Note that I'm ignoring whitespace and it's case insensitive as requested.
For a 450,000+ entry file (chars.txt), here's a sample run time:
[user@host]$ g++ -o unique_chars unique_chars.cpp
[user@host]$ time ./unique_chars < chars.txt
a
b
d
o
y
real 0m0.638s
user 0m0.612s
sys 0m0.017s

As requested, a pure shell-script "solution":
sed -e "s/./\0\n/g" inputfile | sort -u
It's not nice, it's not fast and the output is not exactly as specified, but it should work ... mostly.
For even more ridiculousness, I present the version that dumps the output on one line:
sed -e "s/./\0\n/g" inputfile | sort -u | while read c; do echo -n "$c" ; done

Use a set data structure. Most programming languages / standard libraries come with one flavour or another. If they don't, use a hash table (or generally, dictionary) implementation and just omit the value field. Use your characters as keys. These data structures generally filter out duplicate entries (hence the name set, from its mathematical usage: sets don't have a particular order and only unique values).
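For instance, a minimal Python sketch of both approaches; the file name data.txt and the decision to drop newlines are assumptions, not part of the advice above:
# Set version: the set silently drops duplicate characters.
with open("data.txt") as f:
    unique = set(f.read().lower())
unique.discard("\n")
print("Unique characters: {%s}" % "".join(sorted(unique)))

# Dictionary version: characters as keys, values ignored.
seen = {}
with open("data.txt") as f:
    for ch in f.read().lower():
        seen[ch] = True
print("Unique characters: {%s}" % "".join(sorted(seen)))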

Quick and dirty C program that's blazingly fast:
#include <stdio.h>

int main(void)
{
    int chars[256] = {0}, c;

    while ((c = getchar()) != EOF)
        chars[c] = 1;

    for (c = 32; c < 127; c++) // printable chars only
    {
        if (chars[c])
            putchar(c);
    }
    putchar('\n');

    return 0;
}
Compile it, then do
cat file | ./a.out
To get a list of the unique printable characters in file.

Python w/sets (quick and dirty)
s = open("data.txt", "r").read()
print "Unique Characters: {%s}" % ''.join(set(s))
Python w/sets (with nicer output)
import re
text = open("data.txt", "r").read().lower()
unique = re.sub(r'\W', '', ''.join(set(text))) # Ignore non-alphanumeric
print "Unique Characters: {%s}" % unique

Here's a PowerShell example:
gc file.txt | select -Skip 2 | % { $_.ToCharArray() } | sort -CaseSensitive -Unique
which produces:
D
Y
a
b
o
I like that it's easy to read.
EDIT: Here's a faster version:
$letters = @{} ; gc file.txt | select -Skip 2 | % { $_.ToCharArray() } | % { $letters[$_] = $true } ; $letters.Keys

A very fast solution would be to make a small C program that reads its standard input, does the aggregation and spits out the result.
Why the arbitrary limitation that you need a "script" that does it?
What exactly is a script anyway?
Would Python do?
If so, then this is one solution:
import sys

s = set()
while True:
    line = sys.stdin.readline()
    if not line:
        break
    line = line.rstrip()
    for c in line.lower():
        s.add(c)

print("".join(sorted(s)))

Algorithm:
1. Slurp the file into memory.
2. Create an array of unsigned ints, initialized to zero.
3. Iterate through the in-memory file, using each byte as a subscript into the array, and increment that array element.
4. Discard the in-memory file.
5. Iterate over the array of unsigned ints; if a count is non-zero, display the character and its corresponding count.
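A minimal sketch of that algorithm in Python 3 (the file name data.txt is an assumption; iterating bytes yields ints, so each byte works directly as a subscript):
# Slurp the file into memory as bytes.
with open("data.txt", "rb") as f:
    data = f.read()

# Array of counters, one per possible byte value, initialized to zero.
counts = [0] * 256

# Use each byte as a subscript and increment that element.
for byte in data:
    counts[byte] += 1

# Discard the in-memory file.
del data

# Report each character that occurred, with its count.
for value, count in enumerate(counts):
    if count > 0:
        print("%r: %d" % (chr(value), count))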

cat yourfile |
perl -e 'while(<>){chomp;$k{$_}++ for split(//, lc $_)}print keys %k,"\n";'

Alternative solution using bash:
sed "s/./\l\0\n/g" inputfile | sort -u | grep -vc ^$
EDIT Sorry, I actually misread the question. The above code counts the unique characters. Just omitting the c switch at the end obviously does the trick but then, this solution has no real advantage to saua's (especially since he now uses the same sed pattern instead of explicit captures).

While not a script, this Java program will do the work. It's easy to understand and fast (to run):
import java.util.*;
import java.io.*;

public class Unique {
    public static void main( String [] args ) throws IOException {
        int c = 0;
        Set<Character> s = new TreeSet<Character>();
        while( ( c = System.in.read() ) > 0 ) {
            s.add( Character.toLowerCase((char)c) );
        }
        System.out.println( "Unique characters:" + s );
    }
}
You'll invoke it like this:
type yourFile | java Unique
or
cat yourFile | java Unique
For instance, the unique characters in the HTML of this question are:
Unique characters:[ , , , , !, ", #, $, %, &, ', (, ), +, ,, -, ., /, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, :, ;, <, =, >, ?, #, [, \, ], ^, _, a, b, c, d, e, f, g, h, i, j, k, l, m, n, o, p, q, r, s, t, u, v, w, x, y, z, {, |, }]

Print unique characters (ASCII and Unicode UTF-8)
import codecs

file = codecs.open('my_file_name', encoding='utf-8')

letters = set()
# One pass over every character in the file: O(n) overall
for line in file:
    for character in line:
        letters.add(character)

# Joining the collected characters: O(n)
letter_str = ''.join(letters)
print(letter_str)
Save as unique.py, and run as python unique.py.

In C++ I would first loop through the letters of the alphabet, then run strchr() on each with the file contents as the string. This tells you whether that letter exists; if it does, add it to the list.
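A sketch of that idea in Python, with the in membership test standing in for strchr() (the file name and the lowercase-only alphabet are assumptions):
import string

with open("data.txt") as f:
    text = f.read().lower()

# Scan the whole file once per candidate letter, strchr()-style.
found = [c for c in string.ascii_lowercase if c in text]
print("Unique characters: {%s}" % "".join(found))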

Try this file with JSDB Javascript (includes the javascript engine in the Firefox browser):
var seenAlreadyMap = {};
var seenAlreadyArray = [];
while (!system.stdin.eof)
{
    var L = system.stdin.readLine();
    for (var i = L.length; i-- > 0; )
    {
        var c = L[i].toLowerCase();
        if (!(c in seenAlreadyMap))
        {
            seenAlreadyMap[c] = true;
            seenAlreadyArray.push(c);
        }
    }
}
system.stdout.writeln(seenAlreadyArray.sort().join(''));

Python using a dictionary. I don't know why people are so tied to sets or lists to hold these. Granted, a set is probably more efficient than a dictionary, but both offer constant-time item access, and both run circles around a list, where for each character you must scan the list to see whether the character is already in it. Also, lists and dictionaries are built-in Python data types that everyone should be using all the time, so even if set doesn't come to mind, dictionary should.
file = open('location.txt', 'r')

letters = {}
for line in file:
    if line == "":
        break
    for character in line.strip():
        if character not in letters:
            letters[character] = True

file.close()
print "Unique Characters: {" + "".join(letters.keys()) + "}"

A C solution. Admittedly it is not the fastest-to-code solution in the world, but since it is already coded and can be cut and pasted, I think it counts as "fast to implement" for the poster :) I didn't actually see any C solutions, so I wanted to post one for the pure sadistic pleasure :)
#include <stdio.h>
#include <stdlib.h>

#define CHARSINSET 256
#define FILENAME "location.txt"

char buf[CHARSINSET + 1];

char *getUniqueCharacters(int *charactersInFile) {
    int x;
    char *bufptr = buf;
    for (x = 0; x < CHARSINSET; x++) {
        if (charactersInFile[x] > 0)
            *bufptr++ = (char)x;
    }
    *bufptr = '\0';
    return buf;
}

int main() {
    FILE *fp;
    int c;  /* int, not char, so the EOF comparison works */
    int *charactersInFile = calloc(CHARSINSET, sizeof(int));
    if (NULL == (fp = fopen(FILENAME, "rt"))) {
        printf("File not found.\n");
        return 1;
    }
    while (1) {
        c = getc(fp);
        if (c == EOF) {
            break;
        }
        if (c != '\n' && c != '\r')
            charactersInFile[c]++;
    }
    fclose(fp);
    printf("Unique characters: {%s}\n", getUniqueCharacters(charactersInFile));
    return 0;
}

Quick and dirty solution using grep (assuming the file name is "file"):
for char in a b c d e f g h i j k l m n o p q r s t u v w x y z; do
    if [ ! -z "`grep -li $char file`" ]; then
        echo -n $char
    fi
done
echo
I could have made it a one-liner but just want to make it easier to read.
(EDIT: forgot the -i switch to grep)

Well my friend, I think this is what you had in mind....At least this is the python version!!!
f = open("location.txt", "r") # open file
ll = sorted(list(f.read().lower())) #Read file into memory, split into individual characters, sort list
ll = [val for idx, val in enumerate(ll) if (idx == 0 or val != ll[idx-1])] # eliminate duplicates
f.close()
print "Unique Characters: {%s}" % "".join(ll) #print list of characters, carriage return will throw in a return
It does not iterate through each character, it is relatively short as well. You wouldn't want to open a 500 MB file with it (depending upon your RAM) but for shorter files it is fun :)
I also have to add my final attack!!!! Admittedly I eliminated two lines by using standard input instead of a file, and I also reduced the active code from 3 lines to 2. Basically, if I replaced ll in the print line with the expression from the line above it, I would have had one line of active code and one line of imports (the collapsed version is shown after the code below). Anyway, now we are having fun :)
import itertools, sys
# read standard input into memory, split into characters, eliminate duplicates
ll = map(lambda x:x[0], itertools.groupby(sorted(list(sys.stdin.read().lower()))))
print "Unique Characters: {%s}" % "".join(ll) #print list of characters, carriage return will throw in a return

An answer above mentioned using a dictionary. That code can be streamlined a bit, since the Python documentation states:
It is best to think of a dictionary as an unordered set of key: value pairs, with the requirement that the keys are unique (within one dictionary).... If you store using a key that is already in use, the old value associated with that key is forgotten.
Therefore, this line of the code can be removed, since the dictionary keys will always be unique anyway:
if character not in letters:
And that should make it a little faster.
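Concretely, the inner loop from that answer would shrink to a plain assignment (a sketch using the same file name as the original):
file = open('location.txt', 'r')

letters = {}
for line in file:
    for character in line.strip():
        letters[character] = True  # re-assigning an existing key is harmless

file.close()
print "Unique Characters: {" + "".join(letters.keys()) + "}"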

Where C:/data.txt contains 454,863 rows of seven random alphabetic characters, the following code
using System;
using System.IO;
using System.Collections;
using System.Diagnostics;

namespace ConsoleApplication {
    class Program {
        static void Main(string[] args) {
            FileInfo fileInfo = new FileInfo(@"C:/data.txt");
            Console.WriteLine(fileInfo.Length);

            Stopwatch sw = new Stopwatch();
            sw.Start();

            Hashtable table = new Hashtable();
            StreamReader sr = new StreamReader(@"C:/data.txt");
            while (!sr.EndOfStream) {
                char c = Char.ToLower((char)sr.Read());
                if (!table.Contains(c)) {
                    table.Add(c, null);
                }
            }
            sr.Close();

            foreach (char c in table.Keys) {
                Console.Write(c);
            }
            Console.WriteLine();

            sw.Stop();
            Console.WriteLine(sw.ElapsedMilliseconds);
        }
    }
}
produces output
4093767
mytojevqlgbxsnidhzupkfawr
c
889
Press any key to continue . . .
The first line of output tells you the number of bytes in C:/data.txt (454,863 * (7 + 2) = 4,093,767 bytes). The next two lines of output are the unique characters in C:/data.txt (including a newline). The last line of output tells you the number of milliseconds the code took to execute on a 2.80 GHz Pentium 4.

s = open("text.txt", "r").read()
l = len(s)
unique = {}
for i in range(l):
    if unique.has_key(s[i]):
        unique[s[i]] = unique[s[i]] + 1
    else:
        unique[s[i]] = 1
print unique

Python without using a set.
file = open('location', 'r')

letters = []
for line in file:
    for character in line:
        if character not in letters:
            letters.append(character)

print(letters)

Old question, I know, but here's a fast solution, meaning it runs fast, and it's probably also pretty fast to code if you know how to copy/paste ;)
BACKGROUND
I had a huge csv file (12 GB, 1.34 million lines, 12.72 billion characters) that I was loading into postgres that was failing because it had some "bad" characters in it, so naturally I was trying to find a character not in that file that I could use as a quote character.
1. First try: Jay's C++ solution
I started with @jay's C++ answer:
(Note: all of these code examples were compiled with g++ -O2 uniqchars.cpp -o uniqchars)
#include <iostream>
#include <set>
#include <cctype>

int main() {
    std::set<char> seen_chars;
    std::set<char>::const_iterator iter;
    char ch;

    /* ignore whitespace and case */
    while ( std::cin.get(ch) ) {
        if (! isspace(ch) ) {
            seen_chars.insert(tolower(ch));
        }
    }

    for( iter = seen_chars.begin(); iter != seen_chars.end(); ++iter ) {
        std::cout << *iter << std::endl;
    }

    return 0;
}
Timing for this one:
real 10m55.026s
user 10m51.691s
sys 0m3.329s
2. Read entire file at once
I figured it'd be more efficient to read the entire file into memory at once, rather than making all those calls to cin.get(). This reduced the run time by more than half.
(I also added a filename as a command line argument, and made it print out the characters separated by spaces instead of newlines).
#include <set>
#include <string>
#include <iostream>
#include <fstream>
#include <iterator>
#include <cstdio>
#include <cctype>

int main(int argc, char **argv) {
    std::set<char> seen_chars;
    std::set<char>::const_iterator iter;

    std::ifstream ifs(argv[1]);
    ifs.seekg(0, std::ios::end);
    size_t size = ifs.tellg();
    fprintf(stderr, "Size of file: %lu\n", size);

    std::string str(size, ' ');
    ifs.seekg(0);
    ifs.read(&str[0], size);

    /* ignore whitespace and case */
    for (char& ch : str) {
        if (!isspace(ch)) {
            seen_chars.insert(tolower(ch));
        }
    }

    for (iter = seen_chars.begin(); iter != seen_chars.end(); ++iter) {
        std::cout << *iter << " ";
    }
    std::cout << std::endl;
    return 0;
}
Timing for this one:
real 4m41.910s
user 3m32.014s
sys 0m17.858s
3. Remove isspace() check and tolower()
Besides the set insert, isspace() and tolower() are the only things happening in the for loop, so I figured I'd remove them. It shaved off another 1.5 minutes.
#include <set>
#include <string>
#include <iostream>
#include <fstream>
#include <iterator>
#include <cstdio>

int main(int argc, char **argv) {
    std::set<char> seen_chars;
    std::set<char>::const_iterator iter;

    std::ifstream ifs(argv[1]);
    ifs.seekg(0, std::ios::end);
    size_t size = ifs.tellg();
    fprintf(stderr, "Size of file: %lu\n", size);

    std::string str(size, ' ');
    ifs.seekg(0);
    ifs.read(&str[0], size);

    for (char& ch : str) {
        // removed isspace() and tolower()
        seen_chars.insert(ch);
    }

    for (iter = seen_chars.begin(); iter != seen_chars.end(); ++iter) {
        std::cout << *iter << " ";
    }
    std::cout << std::endl;
    return 0;
}
Timing for final version:
real 3m12.397s
user 2m58.771s
sys 0m13.624s

The simple solution from @Triptych helped me already (my input was a file of 124 MB in size, so this approach of reading the entire contents into memory still worked).
However, I had a problem with encoding: Python didn't interpret the UTF-8 encoded input correctly. So here's a slightly modified version that works for UTF-8 encoded files (and also sorts the collected characters in the output):
import io

with io.open("my-file.csv", 'r', encoding='utf8') as f:
    text = f.read()

print "Unique Characters: {%s}" % ''.join(sorted(set(text)))

Related

I'm using Python to remove repeated characters in a string. Is there any way to do this other than 'not in' and set()?

I wrote a function to do this, but unfortunately it doesn't work. Did I miss anything? I'm trying to avoid using 'not in' and set() and anything that C++ doesn't have, but the output is either the same as the input or just no output at all.
def find(number):
    global key
    for m in range(len(match)):
        if number == match[m]:
            key = match[m]
            break
        else:
            key = None

def find1(number1):
    if number1 == None:
        return True
    else:
        return False

a = str(input())
match = '0'
for i in range(len(a)):
    b = find(a[i])
    if find1(b):
        match += a[i]
print(match)
I have no idea why you'd want to do this in Python in a manner that avoids "anything that C++ doesn't have", however it seemed an interesting exercise so comparable solutions in both Python and C++ below.
The solution(s) below use regular expressions (the built-in re library for Python and the standard library regex header for C++). Each matches any single character as the first capture group using (.), then uses \1+ to match one or more repetitions of that first character. The substitution is performed left-to-right over the entire string, and the resulting string is returned: that is, a copy of the original string with repeated characters pruned.
Python Solution
import re

def remove_repeated(s):
    return re.sub(r'(.)\1+', r'\1', s)

s = input("Enter a string: ")
print(remove_repeated(s))
C++ Solution
#include <iostream>
#include <string>
#include <regex>

using namespace std;

string remove_repeated(string s)
{
    regex r("(.)\\1+");
    return s = regex_replace(s, r, "$1");
}

int main()
{
    string s;
    cout << "Enter a string: ";
    cin >> s;
    s = remove_repeated(s);
    cout << s;
    return 0;
}
Input:
tteessttiinngg
Output:
testing
Input and output are the same for both the Python and C++ solutions when testing.

Extra characters and symbols outputted when doing substitution in C

When I run the code using the following key, extra characters are outputted...
TERMINAL WINDOW:
$ ./substitution abcdefghjklmnopqrsTUVWXYZI
plaintext: heTUXWVI ii ssTt
ciphertext: heUVYXWJ jj ttUuh|
These are the instructions (CS50 substitution problem):
Design and implement a program, substitution, that encrypts messages using a substitution cipher.
Implement your program in a file called substitution.c in a ~/pset2/substitution directory.
Your program must accept a single command-line argument, the key to use for the substitution. The key itself should be case-insensitive, so whether any character in the key is uppercase or lowercase should not affect the behavior of your program.
If your program is executed without any command-line arguments or with more than one command-line argument, your program should print an error message of your choice (with printf) and return from main a value of 1 (which tends to signify an error) immediately.
If the key is invalid (as by not containing 26 characters, containing any character that is not an alphabetic character, or not containing each letter exactly once), your program should print an error message of your choice (with printf) and return from main a value of 1 immediately.
Your program must output plaintext: (without a newline) and then prompt the user for a string of plaintext (using get_string).
Your program must output ciphertext: (without a newline) followed by the plaintext’s corresponding ciphertext, with each alphabetical character in the plaintext substituted for the corresponding character in the ciphertext; non-alphabetical characters should be outputted unchanged.
Your program must preserve case: capitalized letters must remain capitalized letters; lowercase letters must remain lowercase letters.
After outputting ciphertext, you should print a newline. Your program should then exit by returning 0 from main.
My code:
#include <cs50.h>
#include <stdio.h>
#include <string.h>
#include <ctype.h>

int main(int argc, string argv[])
{
    char alpha[26] = {'a','b','c','d','e','f','g','h','i','j','k','l','m',
                      'n','o','p','q','r','s','t','u','v','w','x','y','z'};
    string key = argv[1];
    int totalchar = 0;
    for (char c = 'a'; c <= 'z'; c++)
    {
        for (int i = 0; i < strlen(key); i++)
        {
            if (tolower(key[i]) == c)
            {
                totalchar++;
            }
        }
    }
    // accept only a single 26-character key
    if (argc == 2 && totalchar == 26)
    {
        string plaint = get_string("plaintext: ");
        int textlength = strlen(plaint);
        char subchar[textlength];
        for (int i = 0; i < textlength; i++)
        {
            for (int j = 0; j < 26; j++)
            {
                // substitute
                if (tolower(plaint[i]) == alpha[j])
                {
                    subchar[i] = tolower(key[j]);
                    // keep plaintext's case
                    if (plaint[i] >= 'A' && plaint[i] <= 'Z')
                    {
                        subchar[i] = (toupper(key[j]));
                    }
                }
                // if it isn't a letter
                if (!(isalpha(plaint[i])))
                {
                    subchar[i] = plaint[i];
                }
            }
        }
        printf("ciphertext: %s\n", subchar);
        return 0;
    }
    else
    {
        printf("invalid input\n");
        return 1;
    }
}
strcmp compares two strings. plaint[i] and alpha[j] are chars; they can be compared with "regular" comparison operators, like ==.

sed command to remove a string and everything after it in that line

The .cpp file in a directory contains this text:
/**
 * Performs the standard binary search using two comparisons per level.
 * Returns index where item is found or the index where it should
 * be inserted if not found
 */
template <typename Comparable>
int binarySearch( const Comparable* a, int size, const Comparable & x )
{
    int low = 0, high = size - 1;  // Set the bounds for the search
    while( low <= high )
    {
        // Examine the element at the midpoint
        int mid = ( low + high ) / 2;
        if( a[ mid ] < x )
            low = mid + 1;         // If x is in the array, it must be in the upper
        else if( a[ mid ] > x )
            high = mid - 1;        // If x is in the array, it must be in the lower
        else
            return mid;            // Found
    }
    // Return the position where x would be inserted to
    // preserve the ordering within the array.
    return low;
}
Using the Unix sed command, how would I print the contents of the .cpp file above with all the inline comment markers (which look like this: //) deleted, along with all the text after them on that line? I put an example below of what I am looking for; all the // marks and everything after them on the line are gone in this desired output.
/**
 * Performs the standard binary search using two comparisons per level.
 * Returns index where item is found or the index where it should
 * be inserted if not found
 */
template <typename Comparable>
int binarySearch( const Comparable* a, int size, const Comparable & x )
{
    int low = 0, high = size - 1;
    while( low <= high )
    {
        int mid = ( low + high ) / 2;
        if( a[ mid ] < x )
            low = mid + 1;
        else if( a[ mid ] > x )
            high = mid - 1;
        else
            return mid;
    }
    return low;
}
If you don't need to use sed, this can be done easily with grep:
cat file.cpp | grep -v \/\/
Explanation:
grep -v will print all lines that don't match the pattern, and the pattern \/\/ is just an escaped version of //
If you do need to use sed, this can still be done easily (it's just arguably not the right tool for the job, and quite a bit slower).
cat file.cpp | sed '/\/\//d'
This matches every line that contains // and deletes it.
To remove every line that contains "//":
sed '/\/\//d' file.cpp
To remove "//" and all that follows it on the line:
sed 's|//.*||' file.cpp
To do both (i.e. remove the "//" and all that follows it on the line, and remove that whole line if nothing but whitespace came before it):
sed '/^ *\/\//d;s|//.*||' file.cpp

Running statistics on multiple lines in bash

I have multiple HTTP headers in one giant file, separated with one empty line.
Host
Connection
Accept
From
User-Agent
Accept-Encoding
Host
Connection
Accept
From
User-Agent
Accept-Encoding
X-Forwarded-For
cookie
Cache-Control
referer
x-fb-sim-hni
Host
Accept
user-agent
x-fb-net-sid
x-fb-net-hni
X-Purpose
accept-encoding
x-fb-http-engine
Connection
User-Agent
Host
Connection
Accept-Encoding
I have approximately 10,000,000 of headers separated with an empty line.
If I want to discover trends, like header order, I want to aggregate each block of headers into a one-liner (how can I join the lines of each block, where blocks are separated by an empty line, and do that separately for every block?):
Host,Connection,Accept,From,User-Agent,Accept-Encoding
and follow with uniq -c | sort -nk1,
so I could receive:
197897 Host,Connection,Accept,From,User-Agent,Accept-Encoding
8732233 User-Agent,Host,Connection,Accept-Encoding
What would be the best approach and most effective one to parse that massive file and get that data?
Thanks for hints.
Using GNU awk for sorted_in, all you need is:
$ cat tst.awk
BEGIN { RS=""; FS="\n"; OFS="," }
{ $1=$1; cnt[$0]++ }
END {
    PROCINFO["sorted_in"] = "@val_num_desc"
    for (rec in cnt) {
        print cnt[rec] " " rec
    }
}
After running dos2unix on the sample you posted (1.5milGETs.txt):
$ time awk -f tst.awk 1.5milGETs.txt > ou.awk
real 0m4.898s
user 0m4.758s
sys 0m0.108s
$ head -10 ou.awk
71639 Host,Accept,User-Agent,Pragma,Connection
70975 Host,ros-SecurityFlags,ros-SessionTicket,ros-Challenge,ros-HeadersHmac,Scs-Ticket,If-Modified-Since,User-Agent
40781 Host,Accept,User-Agent,Pragma,nnCoection,Connection,X-Forwarded-For
35485 Accept,ros-SecurityFlags,ros-SessionTicket,ros-Challenge,ros-HeadersHmac,Scs-Ticket,If-Modified-Since,User-Agent,Accept-Language,UA-CPU,Accept-Encoding,Host,Connection
34005 User-Agent,Host,Connection,Accept-Encoding
30668 Host,User-Agent,Accept-Encoding,Connection
25547 Host,Accept,Accept-Language,Connection,Accept-Encoding,User-Agent
22581 Host,User-Agent,Accept,Accept-Encoding
19311 Host,Connection,Accept,From,User-Agent,Accept-Encoding
14694 Host,Connection,User-Agent,Accept,Referer,Accept-Encoding,Accept-Language,Cookie
Here's an answer written in (POSIX) C, which AFAICT does what OP wants. The C solution seems to be faster than an AWK based solution. That may or may not be useful, it all depends on how frequent the program is run and the input data.
The main takeaways:
- The program memory-maps the input file and alters the mapped copy. It replaces newline characters with commas where appropriate, and newlines with nul characters to separate each entry in the input file. IOW, foo\nbar\n\nbaz\n becomes foo,bar\0baz\0.
- The program also builds a table of entries, which is just an array of char pointers into the memory-mapped file.
- The program sorts the entries using standard string functions, but only moves the pointer values, not the actual data.
- Then the program creates a new array of unique entries and counts how many instances there are of each string. (This part can probably be made a bit faster.)
- The array of unique entries is then sorted in descending order.
- Finally, the program prints the contents of the unique array.
Anyway, here's the code. (disclaimer: It's written to be postable here on SO)
#include <unistd.h>
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <sys/mman.h>
#include <fcntl.h>

struct uniq {
    char *val;
    size_t count;
};

struct entry {
    char *val;
};

// Some globals
size_t g_filesize;
char* g_baseaddr;

struct entry *g_entries;
size_t g_entrysize, g_entrycapacity;

struct uniq *g_unique;
size_t g_uniquesize, g_uniquecapacity;

static inline void mapfile(const char *filename)
{
    int fd;
    struct stat st;

    if ((fd = open(filename, O_RDWR)) == -1 || fstat(fd, &st)) {
        perror(filename);
        exit(__LINE__);
    }

    g_baseaddr = mmap(NULL, st.st_size, PROT_READ | PROT_WRITE, MAP_PRIVATE, fd, 0);
    if (g_baseaddr == (void *)MAP_FAILED) {
        perror(filename);
        close(fd);
        exit(__LINE__);
    }

    close(fd);
    g_filesize = st.st_size;
}

// Guestimate how many entries we have. We do this only to avoid early
// reallocs, so this isn't that important. Let's say 100 bytes per entry.
static inline void setup_entry_table(void)
{
    g_entrycapacity = g_filesize / 100;
    g_entrysize = 0;

    size_t cb = sizeof *g_entries * g_entrycapacity;
    if ((g_entries = malloc(cb)) == NULL)
        exit(__LINE__);

    memset(g_entries, 0, cb);
}

static inline void realloc_if_needed(void)
{
    if (g_entrysize == g_entrycapacity) {
        size_t newcap = g_entrycapacity * 2;
        size_t cb = newcap * sizeof *g_entries;
        struct entry *tmp = realloc(g_entries, cb);
        if (tmp == NULL)
            exit(__LINE__);

        g_entries = tmp;
        g_entrycapacity = newcap;
    }
}

static inline void add_entry(char *p)
{
    realloc_if_needed();
    g_entries[g_entrysize].val = p;
    g_entrysize++;
}

// Convert input data to proper entries by replacing \n with either
// ',' or \0. We add \0 to separate the entries.
static inline void convert_to_entries(void)
{
    char *endaddr = g_baseaddr + g_filesize;
    char *prev, *s = g_baseaddr;

    // First entry
    prev = s;

    while (s < endaddr) {
        char *nl = strchr(s, '\n');
        if (nl == s) {
            if (nl - prev > 0)      // Skip empty strings
                add_entry(prev);
            *nl = '\0';             // Terminate entry
            s = nl + 1;             // Skip to first byte after \0
            prev = s;               // This is the start of the 'previous' record
        }
        else {
            *nl = ',';              // Replace \n with comma
            s = nl + 1;             // Move pointer forward (optimization).
            if (*s == '\n')
                *(s - 1) = '\0';    // Don't add trailing comma
        }
    }

    if (prev < s)
        add_entry(prev);            // Don't forget last entry
}

static int entrycmp(const void *v1, const void *v2)
{
    const struct entry *p1 = v1, *p2 = v2;
    return strcmp(p1->val, p2->val);
}

// Sort the entries so the pointers point to a sorted list of strings.
static inline void sort_entries(void)
{
    qsort(g_entries, g_entrysize, sizeof *g_entries, entrycmp);
}

// We keep things really simple and allocate one unique entry for each
// entry. That's the worst case anyway and then we don't have to test
// for reallocation.
static inline void setup_unique_table(void)
{
    size_t cb = sizeof *g_unique * g_entrysize;
    if ((g_unique = malloc(cb)) == NULL)
        exit(__LINE__);

    g_uniquesize = 0;
    g_uniquecapacity = g_entrysize;
}

static inline void add_unique(char *s)
{
    g_unique[g_uniquesize].val = s;
    g_unique[g_uniquesize].count = 1;
    g_uniquesize++;
}

// Now count and skip duplicate entries.
// How? Just iterate over the entries table and find duplicates.
// For each duplicate, increment count. For each non-dup,
// add a new entry.
static inline void find_unique_entries(void)
{
    char *last = g_entries[0].val;
    add_unique(last);

    for (size_t i = 1; i < g_entrysize; i++) {
        if (strcmp(g_entries[i].val, last) == 0) {
            g_unique[g_uniquesize - 1].count++;  // Inc last added's count
        }
        else {
            last = g_entries[i].val;
            add_unique(last);
        }
    }
}

static inline void print_unique_entries(void)
{
    for (size_t i = 0; i < g_uniquesize; i++)
        printf("%zu %s\n", g_unique[i].count, g_unique[i].val);
}

static inline void print_entries(void)
{
    for (size_t i = 0; i < g_entrysize; i++)
        printf("%s\n", g_entries[i].val);
}

static int uniquecmp(const void *v1, const void *v2)
{
    const struct uniq *p1 = v1, *p2 = v2;
    return (int)p2->count - (int)p1->count;
}

static inline void sort_unique_entries(void)
{
    qsort(g_unique, g_uniquesize, sizeof *g_unique, uniquecmp);
}

int main(int argc, char *argv[])
{
    if (argc != 2) {
        fprintf(stderr, "USAGE: %s filename\n", argv[0]);
        exit(__LINE__);
    }

    mapfile(argv[1]);
    setup_entry_table();
    convert_to_entries();

    if (g_entrysize == 0)   // no entries in file.
        exit(0);

    sort_entries();
    setup_unique_table();
    find_unique_entries();
    sort_unique_entries();

    if (0) print_entries();
    if (1) print_unique_entries();

    // cleanup
    free(g_entries);
    free(g_unique);
    munmap(g_baseaddr, g_filesize);
    exit(0);
}
Using your 1.5milGETs.txt file (and converting the triple \n\n\n to \n\n to separate blocks), you can use Ruby in paragraph mode:
$ ruby -F'\n' -lane 'BEGIN{h=Hash.new(0); $/=""
def commafy(n)
n.to_s.reverse.gsub(/...(?=.)/,"\\&,").reverse
end
}
h[$F.join(",")]+=1
# p $_
END{ printf "Total blocks: %s\n", commafy(h.values.sum)
h2=h.sort_by {|k,v| -v}
h2[0..10].map {|k,v| printf "%10s %s\n", commafy(v), k}
}' 1.5milGETs.txt
That prints the total number of blocks, sorts them large->small, prints the top 10.
Prints:
Total blocks: 1,262,522
71,639 Host,Accept,User-Agent,Pragma,Connection
70,975 Host,ros-SecurityFlags,ros-SessionTicket,ros-Challenge,ros-HeadersHmac,Scs-Ticket,If-Modified-Since,User-Agent
40,781 Host,Accept,User-Agent,Pragma,nnCoection,Connection,X-Forwarded-For
35,485 Accept,ros-SecurityFlags,ros-SessionTicket,ros-Challenge,ros-HeadersHmac,Scs-Ticket,If-Modified-Since,User-Agent,Accept-Language,UA-CPU,Accept-Encoding,Host,Connection
34,005 User-Agent,Host,Connection,Accept-Encoding
30,668 Host,User-Agent,Accept-Encoding,Connection
25,547 Host,Accept,Accept-Language,Connection,Accept-Encoding,User-Agent
22,581 Host,User-Agent,Accept,Accept-Encoding
19,311 Host,Connection,Accept,From,User-Agent,Accept-Encoding
14,694 Host,Connection,User-Agent,Accept,Referer,Accept-Encoding,Accept-Language,Cookie
12,290 Host,User-Agent,Accept-Encoding
That takes about 8 seconds on a 6 year old Mac.
Awk will be 3x faster and entirely appropriate for this job.
Ruby will give you more output options and easier analysis of the data: you can create interactive HTML documents; output JSON, quoted CSV, or XML trivially; interact with a database; invert keys and values in a statement; filter; etc.
Personally, I'd use a C program, but other alternatives exist as well. Here's an awk snippet that folds lines. Not perfect, but it should get you started :)
$ cat foo.awk
// {
    if (NF == 0)
        printf("\n");
    else
        printf("%s ", $0);
}
$ awk -f foo.awk < lots_of_data | sort | uniq -c | sort -nk1
The last statement will take "forever", which is why a C program may be a good alternative. It depends mostly on how often you need to run the commands.
If you have enough memory (10M records, in your sample about 80 chars per record, 800MB, and if you're counting them, I assume a lot of duplicates) you could hash the records to memory and count while hashing:
$ awk 'BEGIN { RS=""; OFS="," }
{
    b = ""                        # reset buffer b
    for(i=1; i<=NF; i++)          # for every header element in record
        b = b (b=="" ? "" : OFS) $i   # buffer them, comma-separated
    a[b]++                        # hash to a, counting
}
END {                             # in the end
    for(i in a)                   # go thru the a hash
        print a[i] " " i          # print counts and records
}' file
1 Host,Connection,Accept,From,User-Agent,Accept-Encoding
1 cookie,Cache-Control,referer,x-fb-sim-hni,Host,Accept,user-agent,x-fb-net-sid,x-fb-net-hni,X-Purpose,accept-encoding,x-fb-http-engine,Connection
1 User-Agent,Host,Connection,Accept-Encoding
1 Host,Connection,Accept,From,User-Agent,Accept-Encoding,X-Forwarded-For
Output order is random due to the nature of for (i in a), so sort the output afterwards however you please.
Edit:
As @dawg kindly pointed out in the comments, $1=$1 is enough to rebuild the record in comma-separated form:
$ awk 'BEGIN { RS=""; OFS="," }
{
    $1=$1                         # rebuild the record
    a[$0]++                       # hash $0 to a, counting
}
END {                             # in the end
    for(i in a)                   # go thru the a hash
        print a[i] " " i          # print counts and records
}' file

C++/CLI - Split a string with an unknown number of spaces as separator?

I'm wondering how (and in which way it's best) to split a string with an unknown number of spaces as separator in C++/CLI.
Edit: The problem is that the number of spaces is unknown, so when I try to use the Split method like this:
String^ line;
StreamReader^ SCR = gcnew StreamReader("input.txt");
while ((line = SCR->ReadLine()) != nullptr && line != nullptr)
{
    if (line->IndexOf(' ') != -1)
        for each (String^ SCS in line->Split(nullptr, 2))
        {
            //Load the lines...
        }
}
And this is an example of how input.txt looks:
ThisISSomeTxt<space><space><space><tab>PartNumberTwo<space>PartNumber3
When I then run the program, the first part loaded is "ThisISSomeTxt", the second is "" (nothing), the third is also "" (nothing), the fourth is also "" (nothing), the fifth is "PartNumberTwo" and the sixth is "PartNumber3".
I only want ThisISSomeTxt and PartNumberTwo to be loaded. How can I do this?
Why not just use System::String::Split(...)?
The following code example, taken from http://msdn.microsoft.com/en-us/library/b873y76a(v=vs.80).aspx#Y0 , demonstrates how you can tokenize a string with the Split method.
using namespace System;
using namespace System::Collections;

int main()
{
    String^ words = "this is a list of words, with: a bit of punctuation.";
    array<Char>^ chars = {' ', ',', '.', ':'};
    array<String^>^ split = words->Split( chars );
    IEnumerator^ myEnum = split->GetEnumerator();
    while ( myEnum->MoveNext() )
    {
        String^ s = safe_cast<String^>(myEnum->Current);
        if ( !s->Trim()->Equals( "" ) )
            Console::WriteLine( s );
    }
}
I think you can do what you need to do with the String.Split method.
First, I think you're expecting the 'count' parameter to work differently: you're passing in 2, and expecting the first and second results to be returned and the third result to be thrown out. What it actually returns is the first result, plus the second and third results concatenated into one string. If all you want is ThisISSomeTxt and PartNumberTwo, you'll want to manually throw away results after the first 2.
As far as I can tell, you don't want any whitespace included in your return strings. If that's the case, I think this is what you want:
String^ line = "ThisISSomeTxt \tPartNumberTwo PartNumber3";
array<String^>^ split = line->Split((array<String^>^)nullptr, StringSplitOptions::RemoveEmptyEntries);
for (int i = 0; i < split->Length && i < 2; i++)
{
    Debug::WriteLine("{0}: '{1}'", i, split[i]);
}
Results:
0: 'ThisISSomeTxt'
1: 'PartNumberTwo'
