Generating all n-bit strings whose pairwise Hamming distance is n/2

I'm playing with some variant of Hadamard matrices. I want to generate all n-bit binary strings which satisfy these requirements:
You can assume that n is a multiple of 4.
The first string is 0^n, i.e., a string of all 0s.
The remaining strings are sorted in lexicographic order (0 comes before 1).
Every two distinct n-bit strings have Hamming distance n/2, i.e., they agree in exactly n/2 positions and disagree in exactly n/2 positions.
Due to the above condition, every string other than the first must have exactly n/2 ones and n/2 zeros.
(Updated) All the n-bit strings begin with 0.
For example, this is the list that I want for when n=4.
0000
0011
0101
0110
You can easily see that every two distinct rows have Hamming distance n/2 = 4/2 = 2, and the list satisfies all the other requirements as well.
Note that I want to generate all such strings. My algorithm may just output three strings 0000, 0011, and 0101 before terminating. This list satisfies all the requirements above but it misses 0110.
What would be a good way to generate such sets? Python pseudo-code is preferred, but any high-level description will do.
What is the maximum number of such strings for a given n? For example, when n=4, the maximum number of such strings happens to be 4. I'm wondering whether there is a closed-form expression for this upper bound.
Thanks.

To answer question 1,
Starting with a string of n zeros (let's call it s0) and a string of n/2 zeros followed by n/2 1's (call it s1), generate the next permutation (call it p):
scan the string from right to left
replace the first occurrence of "01" with "10"
(unless that occurrence is at the start of the string)
move all 1s to the right of that "01" to the end of the string
return the modified string
Use the permutation generation order to keep a record of permutations added to sets. If XORing p with every string currently in the set gives n/2 set bits each time, add p to the set; otherwise, if XORing p with s1 gives n/2 set bits and p has not been recorded, start a new set search from s0 and s1 with p as one extra condition for the XOR test (since the primary search will visit every permutation, this new set does not need to spawn further sets). Then use p to generate the next permutation.
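A minimal Python sketch of this search, following the pseudocode above (only the primary greedy pass is implemented; the branching into additional set searches is omitted, and counting differing positions stands in for the popcount of the XOR):

def next_perm(s):
    """Next lexicographic arrangement of the bits of s, per the pseudocode
    above; returns None once the leading bit would become 1."""
    bits = list(s)
    for i in range(len(bits) - 2, -1, -1):       # scan right to left for "01"
        if bits[i] == '0' and bits[i + 1] == '1':
            if i == 0:
                return None                      # next arrangement starts with 1
            bits[i], bits[i + 1] = '1', '0'
            bits[i + 2:] = sorted(bits[i + 2:])  # push trailing 1s to the end
            return ''.join(bits)
    return None

def generate(n):
    """Greedily collect n-bit strings with pairwise Hamming distance n/2,
    starting from s0 = 0^n and s1 = 0^(n/2) 1^(n/2). This is the primary
    search only, so it returns one maximal list, not every possible set."""
    assert n % 4 == 0
    half = n // 2
    kept = ['0' * n, '0' * half + '1' * half]    # s0 and s1
    p = next_perm(kept[1])
    while p is not None:
        if all(sum(a != b for a, b in zip(p, q)) == half for q in kept):
            kept.append(p)
        p = next_perm(p)
    return kept

print(generate(4))   # ['0000', '0011', '0101', '0110']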

Related

Find if two strings are anagrams

Faced this question in an interview, which basically stated
Find if the given two strings are anagrams of each other in O(N) time without any extra space
I tried the usual solutions:
Using a character frequency count (O(N) time, O(26) space) (as a variation, iterating 26 times to calculate frequency of each character as well)
Sorting both strings and comparing (O(NlogN) time, constant space)
But the interviewer wanted a "better" approach. Toward the end of the interview, he hinted at "XOR" for this question. I'm not sure how that works, since "aa" XOR "bb" would also be zero without the strings being anagrams.
Long story short, are the given constraints possible? If so, what would be the algorithm?
Given word_a and word_b of the same length, I would try the following:
Define a variable counter and initialise its value to 0.
For each letter i in the alphabet, do the following:
2.1. for j in length(word_a):
2.1.1. if word_a[j] == i, increase the counter by 1: counter += 1
2.1.2. if word_b[j] == i, decrease the counter by 1: counter -= 1
2.2. if, after passing over all the characters of the words, counter is different from 0, you have a different number of i characters in each word and in particular they are not anagrams; break out of the loop and return False
Return True
Explanation
In case the words are anagrams, you have the same number of each of the characters, so a histogram would make sense, but a histogram requires space. Using this method, you run over the n characters of the words exactly 26 times in the case of the English alphabet, or any other constant c representing the number of letters in the alphabet. Therefore, the runtime of the process is O(c*n) = O(n), since c is constant, and you do not use any space besides the one variable.
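A short Python sketch of this counting approach (function and variable names are mine):

from string import ascii_lowercase

def are_anagrams(word_a, word_b):
    """O(26 * n) time and O(1) extra space: one counter, reused per letter."""
    if len(word_a) != len(word_b):
        return False
    for letter in ascii_lowercase:           # a constant number of passes
        counter = 0
        for a, b in zip(word_a, word_b):
            if a == letter:
                counter += 1
            if b == letter:
                counter -= 1
        if counter != 0:                      # letter counts differ
            return False
    return True

print(are_anagrams("listen", "silent"))   # True
print(are_anagrams("aab", "abb"))         # False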
I haven't proven to myself that this is infallible yet, but it's a possible solution.
Go through both strings and calculate 3 values: the sum, the accumulated xor, and the count. If all 3 are equal then the strings should be anagrams.

The Divide of array

I found this problem on an online judge and I don't know how to solve it. Could anyone help me come up with a solution? The tags suggest that the problem can be solved using a trie and dynamic programming.
Time limit: 1.00s
Task description
You are given a sequence of N integer numbers A[0], A[1], ..., A[N-1] and an integer number M. What is the maximal number of non-empty parts you can split the sequence into, so that the XOR (exclusive OR) of all numbers in each part does not exceed M? Each number of the sequence must be in exactly one part. Each part must be a contiguous subsequence.
Input Format
The first line of input consists of two integers N (1<=N<=100000) and M.
The second line of input consists of N space-separated integers A[0], A[1], ..., A[N-1].
Output Format
Output the maximal number of parts, or -1 if you cannot split the sequence.

most efficient way to sort strings with only 2 distinct characters?

If I have strings that I know have no more than 2 distinct characters,
example set:
aab
abbbbabb
bbbaa
aaaaaaa
aaaa
abab
a
aa
aaaaa
aaabba
aabbbab
What's the most efficient way to put them into alphabetical order?
the resulting sorted set:
a
aa
aaaa
aaaaa
aaaaaaa
aaabba
aab
aabbbab
abab
abbbbabb
bbbaa
edit:
I know I could just use a normal sorting algorithm (quick sort, merge sort), but the question is: Does the fact that there are not more than 2 distinct characters make something else more efficient?
If the maximum length of the string matters, I would like to know the answer for 2 different scenarios:
maximum length of the string is the same as the number of strings (n strings being sorted, n maximum length of the string)
maximum length of the string is log n, with n as the number of strings being sorted
I can also assume that all of the strings are distinct.
The String compareTo or compareToIgnoreCase method will return a negative integer, 0, or a positive integer depending on the alphabetical ordering of the two Strings being compared. Try that.
A general sorting algorithm based only on comparisons asymptotically cannot achieve results better than O(n log n). In your case there is additional information (2 distinct characters) which has the potential to improve this result. A simple approach that will yield an O(n) result (a sketch follows the steps below):
Check the first character (let's mark it x).
Scan the string to the end:
whenever x is encountered, increase a counter.
when the non-x character (let's mark it y) is encountered for the first time, store it in a dedicated variable.
Compare x and y.
if x < y, fill the string from the beginning with x's according to the counter and the rest with y's.
if x > y, fill the first (string length - count of x) positions with y's and the rest with x's.
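A Python sketch of these steps for a single string (the function name is mine):

def sort_two_char_string(s):
    """Sort the characters of a string with at most 2 distinct characters
    in O(len(s)) time using the counting steps above."""
    if not s:
        return s
    x = s[0]
    count_x = 0
    y = None
    for ch in s:
        if ch == x:
            count_x += 1
        elif y is None:
            y = ch                            # first non-x character seen
    if y is None:                             # only one distinct character
        return s
    if x < y:
        return x * count_x + y * (len(s) - count_x)
    return y * (len(s) - count_x) + x * count_x

print(sort_two_char_string("abbbbabb"))   # 'aabbbbbb'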

Subsequences whose sum of digits is divisible by 6

Say I have a string whose characters are nothing but digits in the [0-9] range, e.g. "2486". Now I want to find all the subsequences whose sum of digits is divisible by 6. E.g., in "2486" the subsequences are "6", "246" (2 + 4 + 6 = 12 is divisible by 6), "486" (4 + 8 + 6 = 18 is divisible by 6), etc. I know we can do this by generating all 2^n combinations, but that is very costly. What is the most efficient way to do this?
Edit:
I found the following solution somewhere on Quora.
#include <stdio.h>
#include <string.h>

#define MAXLEN 1005   /* assumed bound on the number of digits */
#define MAXN   105    /* assumed bound on the divisor n */

int len, n, ar[MAXLEN], dp[MAXLEN][MAXN];

/* Number of ways to pick digits from ar[idx..len-1] so that a prefix whose
   value is m modulo n is extended to a number divisible by n. */
int fun(int idx, int m)
{
    if (idx == len)
        return (m == 0);
    if (dp[idx][m] != -1)
        return dp[idx][m];
    int ans = fun(idx + 1, m);                    /* skip ar[idx] */
    ans += fun(idx + 1, (m * 10 + ar[idx]) % n);  /* take ar[idx] */
    return dp[idx][m] = ans;
}

int main(void)
{
    /* input: len, n, then the len digits of the array */
    scanf("%d %d", &len, &n);
    for (int i = 0; i < len; i++)
        scanf("%d", &ar[i]);
    memset(dp, -1, sizeof(dp));
    printf("%d\n", fun(0, 0));
    return 0;
}
Can someone please explain the logic behind the expression '(m*10+ar[idx])%n'? Why is m multiplied by 10 here?
Say you have a sequence of 16 digits. You could generate all 2^16 subsequences and test them, which is 65536 operations.
Or you could take the first 8 digits and generate their 2^8 possible subsequences, and sort them based on their sum modulo 6, and do the same for the last 8 digits. This is only 512 operations.
Then you can generate all subsequences of the original 16-digit string whose sum is divisible by 6 by taking each subsequence of the first list with a modulo value equal to 0 (including the empty subsequence) and concatenating it with each subsequence of the last list with a modulo value equal to 0.
Then take each subsequence of the first list with a modulo value equal to 1 and concatenate it with each subsequence of the last list with a modulo value equal to 5. Then 2 with 4, 3 with 3, 4 with 2 and 5 with 1.
So after an initial cost of 512 operations you can generate just those subsequences whose sum is divisible by 6. You can apply this algorithm recursively for larger sequences.
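A Python sketch of this meet-in-the-middle idea (brute force on each half; names and the return format are mine):

def subsequences_divisible_by_6(digits):
    """Split the digit string in half, bucket each half's subsequences by
    (digit sum mod 6), then pair complementary buckets. Returns the
    non-empty subsequences (keeping the original digit order) whose digit
    sum is divisible by 6."""
    half = len(digits) // 2
    left, right = digits[:half], digits[half:]

    def buckets(part):
        by_mod = {r: [] for r in range(6)}
        for mask in range(1 << len(part)):                    # every subsequence
            sub = ''.join(c for i, c in enumerate(part) if mask >> i & 1)
            by_mod[sum(map(int, sub)) % 6].append(sub)
        return by_mod

    lb, rb = buckets(left), buckets(right)
    result = []
    for r in range(6):
        for a in lb[r]:
            for b in rb[(6 - r) % 6]:
                if a or b:                                    # skip the empty pick
                    result.append(a + b)
    return result

print(sorted(subsequences_divisible_by_6("2486")))   # ['24', '246', '48', '486', '6']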
Create an array with a 6-bit bitmap for each position in the string. Work from right to left and fill in the bitmaps so that the bitmap at a position has a bit set for each sum (mod 6) achievable by some subsequence starting just after that position. You can do this from right to left using the bitmap just after the current position. If you see a 3 and the bitmap just after the current position is 010001, then sums 1 and 5 are already accessible by just skipping the 3. Using the 3, sums 4 and 2 become available as well, so the new bitmap is 011011.
Now do a depth first search for subsequences from left to right, with the choice at each character being either to take that character or not. As you do this keep track of the mod 6 sum of the characters taken so far. Use the bitmaps to work out whether there is a subsequence to the right of that position that, added to the sum so far, yields zero. Carry on as long as you can see that the current sum leads to a subsequence of sum zero, otherwise stop and recurse.
The first stage has cost linear in the size of the input (for fixed values of 6). The second stage has cost linear in the number of subsequences produced. In fact, if you have to actually write out the subsequences visited (E.g. by maintaining an explicit stack and writing out the contents of the stack) THAT will be the most expensive part of the program.
The worst case is of course input 000000...0000 when all 2^n subsequences are valid.
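A Python sketch of this bitmap-plus-DFS idea (names are mine; here reach[i] also counts the empty completion, so bit 0 is always set, which simplifies the pruning test):

def divisible_subsequences(digits):
    """reach[i] has bit r set iff some (possibly empty) subsequence of
    digits[i:] has digit sum r mod 6. The DFS then explores only branches
    that can still be completed to a sum of 0 mod 6."""
    n = len(digits)
    reach = [0] * (n + 1)
    reach[n] = 1                              # empty subsequence: sum 0
    for i in range(n - 1, -1, -1):
        d = int(digits[i]) % 6
        shifted = 0
        for r in range(6):                    # sums available if digits[i] is taken
            if reach[i + 1] >> r & 1:
                shifted |= 1 << ((r + d) % 6)
        reach[i] = reach[i + 1] | shifted

    results = []

    def dfs(i, total, chosen):
        needed = (-total) % 6
        if not (reach[i] >> needed & 1):      # no completion can reach 0 mod 6
            return
        if i == n:
            if chosen:                        # exclude the empty subsequence
                results.append(''.join(chosen))
            return
        dfs(i + 1, total, chosen)                                       # skip digits[i]
        dfs(i + 1, (total + int(digits[i])) % 6, chosen + [digits[i]])  # take it

    dfs(0, 0, [])
    return results

print(divisible_subsequences("2486"))   # ['6', '48', '486', '24', '246']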
I'm pretty sure a user named amit recently answered a similar question for combinations rather than subsequences, where the divisor was 4, although I can't find it right now. His answer was to create, in this case, six arrays (call them Array_i) in O(n), where each array contains the array elements with a modular relationship i with 6. With subsequences we also need a way to record element order. For example, in your case of 2486, our arrays could be:
Array_0 = [null,null,null,6]
Array_1 = []
Array_2 = [null,4,null,null]
Array_3 = []
Array_4 = [2,null,8,null]
Array_5 = []
Now just cross-combine the appropriate arrays, maintaining element order: Array_0, Array_2 & Array_4, Array_0 & any other combination of arrays:
6, 24, 48, 246, 486

What is the fastest way to sort n strings of length n each?

I have n strings, each of length n. I wish to sort them in ascending order.
The best algorithm I can think of is O(n^2 log n), using quicksort (comparing two strings takes O(n) time). The challenge is to do it in O(n^2) time. How can I do it?
Also, radix sort methods are not permitted, as you do not know the number of letters in the alphabet beforehand.
Assume any letter is a to z.
Since there is no requirement for in-place sorting, create an array of 26 linked lists:
List[] sorted = new List[26]; // each element is a list you can append to
For a letter x in the string, its bucket index is the ASCII difference x - 'a'.
For example, the index for 'c' is 2, so it is appended as
sorted[2].add('c')
That way, sorting one string takes only O(n).
So sorting all the strings takes O(n^2).
For example, if you have "zdcbacdca".
z goes to sorted['z'-'a'].add('z'),
d goes to sorted['d'-'a'].add('d'),
....
After sorting, one possible result looks like
0 1 2 3 ... 25
a b c d ... z
a b c
c
Note: the assumed alphabet determines the length of the sorted array.
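A Python sketch of the per-string bucketing this answer describes (the function name is mine):

def counting_sort_chars(s):
    """Bucket the characters of one lowercase string as described above,
    then read the buckets back in order: O(len(s)) per string."""
    buckets = [[] for _ in range(26)]         # the sorted[26] array above
    for ch in s:
        buckets[ord(ch) - ord('a')].append(ch)
    return ''.join(''.join(b) for b in buckets)

# Doing this for each of the n strings costs O(n) per string, O(n^2) in total.
print(counting_sort_chars("zdcbacdca"))   # 'aabcccddz'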
For small numbers of strings a regular comparison sort will probably be faster than a radix sort here, since radix sort takes time proportional to the number of bits required to store each character. For a 2-byte Unicode encoding, and making some (admittedly dubious) assumptions about equal constant factors, radix sort will only be faster if log2(n) > 16, i.e. when sorting more than about 65,000 strings.
One thing I haven't seen mentioned yet is the fact that a comparison sort of strings can be enhanced by exploiting known common prefixes.
Suppose our strings are S[0], S[1], ..., S[n-1]. Let's consider augmenting mergesort with a Longest Common Prefix (LCP) table. First, instead of moving entire strings around in memory, we will just manipulate lists of indices into a fixed table of strings.
Whenever we merge two sorted lists of string indices X[0], ..., X[k-1] and Y[0], ..., Y[k-1] to produce Z[0], ..., Z[2k-1], we will also be given 2 LCP tables (LCPX[0], ..., LCPX[k-1] for X and LCPY[0], ..., LCPY[k-1] for Y), and we need to produce LCPZ[0], ..., LCPZ[2k-1] too. LCPX[i] gives the length of the longest prefix of X[i] that is also a prefix of X[i-1], and similarly for LCPY and LCPZ.
The first comparison, between S[X[0]] and S[Y[0]], cannot use LCP information and we need a full O(n) character comparisons to determine the outcome. But after that, things speed up.
During this first comparison, between S[X[0]] and S[Y[0]], we can also compute the length of their LCP -- call that L. Set Z[0] to whichever of S[X[0]] and S[Y[0]] compared smaller, and set LCPZ[0] = 0. We will maintain in L the length of the LCP of the most recent comparison. We will also record in M the length of the LCP that the last "comparison loser" shares with the next string from its block: that is, if the most recent comparison, between two strings S[X[i]] and S[Y[j]], determined that S[X[i]] was smaller, then M = LCPX[i+1], otherwise M = LCPY[j+1].
The basic idea is: After the first string comparison in any merge step, every remaining string comparison between S[X[i]] and S[Y[j]] can start at the minimum of L and M, instead of at 0. That's because we know that S[X[i]] and S[Y[j]] must agree on at least this many characters at the start, so we don't need to bother comparing them. As larger and larger blocks of sorted strings are formed, adjacent strings in a block will tend to begin with longer common prefixes, and so these LCP values will become larger, eliminating more and more pointless character comparisons.
After each comparison between S[X[i]] and S[Y[j]], the string index of the "loser" is appended to Z as usual. Calculating the corresponding LCPZ value is easy: if the last 2 losers both came from X, take LCPX[i]; if they both came from Y, take LCPY[j]; and if they came from different blocks, take the previous value of L.
In fact, we can do even better. Suppose the last comparison found that S[X[i]] < S[Y[j]], so that X[i] was the string index most recently appended to Z. If M ( = LCPX[i+1]) > L, then we already know that S[X[i+1]] < S[Y[j]] without even doing any comparisons! That's because to get to our current state, we know that S[X[i]] and S[Y[j]] must have first differed at character position L, and it must have been that the character x in this position in S[X[i]] was less than the character y in this position in S[Y[j]], since we concluded that S[X[i]] < S[Y[j]] -- so if S[X[i+1]] shares at least the first L+1 characters with S[X[i]], it must also contain x at position L, and so it must also compare less than S[Y[j]]. (And of course the situation is symmetrical: if the last comparison found that S[Y[j]] < S[X[i]], just swap the names around.)
I don't know whether this will improve the complexity from O(n^2 log n) to something better, but it ought to help.
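A simplified Python sketch of the LCP-augmented merge described above (it uses the min(L, M) starting point and the LCPZ bookkeeping, but omits the final "M > L means no comparison is needed" shortcut; all names are mine):

def lcp_compare(a, b, start):
    """Compare strings a and b, given they agree on their first `start`
    characters; return (sign, lcp) with sign < 0 iff a < b."""
    i = start
    while i < len(a) and i < len(b) and a[i] == b[i]:
        i += 1
    if i < len(a) and i < len(b):
        return (-1 if a[i] < b[i] else 1), i
    return len(a) - len(b), i                 # a proper prefix compares smaller

def merge(S, X, lcpX, Y, lcpY):
    """Merge sorted index lists X and Y over the string table S, producing
    Z and its LCP table (lcpX[i] = lcp of S[X[i]] with S[X[i-1]], lcpX[0] = 0)."""
    Z, lcpZ = [], []
    i = j = 0
    L = prevL = 0          # LCP of the current / previous comparison
    start = 0              # characters the next compared pair is known to share
    last = None            # which list the previous "loser" came from
    while i < len(X) and j < len(Y):
        sign, L = lcp_compare(S[X[i]], S[Y[j]], start)
        if sign <= 0:                                     # X[i] is smaller
            lcpZ.append(0 if last is None else (lcpX[i] if last == 'X' else prevL))
            Z.append(X[i])
            last, i = 'X', i + 1
            start = min(L, lcpX[i]) if i < len(X) else 0
        else:                                             # Y[j] is smaller
            lcpZ.append(0 if last is None else (lcpY[j] if last == 'Y' else prevL))
            Z.append(Y[j])
            last, j = 'Y', j + 1
            start = min(L, lcpY[j]) if j < len(Y) else 0
        prevL = L
    while i < len(X):                                     # copy leftovers
        lcpZ.append(lcpX[i] if last == 'X' else (L if last == 'Y' else 0))
        Z.append(X[i])
        last, i = 'X', i + 1
    while j < len(Y):
        lcpZ.append(lcpY[j] if last == 'Y' else (L if last == 'X' else 0))
        Z.append(Y[j])
        last, j = 'Y', j + 1
    return Z, lcpZ

def mergesort_with_lcp(S, idx=None):
    """Return (sorted index list, LCP table) for the strings in S."""
    if idx is None:
        idx = list(range(len(S)))
    if len(idx) <= 1:
        return idx, [0] * len(idx)
    mid = len(idx) // 2
    X, lcpX = mergesort_with_lcp(S, idx[:mid])
    Y, lcpY = mergesort_with_lcp(S, idx[mid:])
    return merge(S, X, lcpX, Y, lcpY)

S = ["banana", "bandana", "band", "apple", "bandit", "banana"]
order, lcp = mergesort_with_lcp(S)
print([S[k] for k in order])   # ['apple', 'banana', 'banana', 'band', 'bandana', 'bandit']
print(lcp)                     # [0, 0, 6, 3, 4, 4]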
You can build a Trie, which will cost O(s*n),
Details:
https://stackoverflow.com/a/13109908
Solving it for all cases should not be possible in better than O(N^2 log N).
However, if there are constraints that relax the string comparison, it can be optimised.
- If the strings have a high repetition rate and are from a finite ordered set, you can use ideas from counting sort and use a map to store their counts; sorting just the map keys then suffices: O(N*M*log M), where M is the number of unique strings. You can even use a TreeMap directly for this purpose.
- If the strings are not random but are the suffixes of some superstring, this can be done in
O(N log^2 N). http://discuss.codechef.com/questions/21385/a-tutorial-on-suffix-arrays
