Tree formation issue by the given custom inputs - python-3.x

You are given a tree of N nodes rooted at node 1. Each node of the tree has a color associated with it. You are then given Q queries. In each query you are given a node number X; you have to mark X as special, along with every other node in X's subtree that has the same color as X. Once a node is marked as special in a query, it remains special for all subsequent queries.
For each query, you need to print the total number of special nodes in the tree after you perform the marking operation in the query.
Input
The first line contains an integer N denoting the total number of nodes in the tree. The next N−1 lines each contain two integers U and V, denoting an edge between nodes U and V in the tree.
The next line contains N space-separated integers denoting the color of each node of the tree.
The next line contains an integer Q denoting the number of queries.
The next Q lines each contain an integer X, the node to be processed in that query.
Output
For each query, you need to print the count of nodes that are special after this query is performed.
Sample Input:
5
3 1
3 2
2 4
2 5
1 1 2 2 1
4
2
4
5
1
Sample Output:
2
3
3
4
The issue is that when I change the test case (reordering the edge lines), the code fails with a runtime error:
5
3 1
2 4
3 2
2 5
1 1 2 2 1
4
2
4
5
1
The expected output should be the same as above, since the reordered edges describe the same tree.
Here is the code we have used:
#the result will be shown here.
special=[]

def remove_1_from_tuple(tup):
    if(tup[0]==1):
        return tup[1]
    else:
        return tup[0]

def make_tree(node,hashmap):
    latest=node
    key=latest.data
    if(key in hashmap):
        hmap=hashmap[key]
        for val in hmap:
            latest.children.append(Node(None,val))
    for child in latest.children:
        make_tree(child,hashmap)

class Node():
    def __init__(self, tree, data, parent=None):
        self.special=False
        self.data = data
        self.parent = parent
        self.children = []
        self.tree = tree
    def set_color(self,color):
        self.color=color
    def set_special(self):
        self.special=True
    def find(self, x):
        if self.data is x: return self
        for node in self.children:
            n = node.find(x)
            if n: return n
        return None
    def get_same_color_in_sub_tree(self,x):
        for node in self.children:
            if(x.color==node.color):
                return node
            node.get_same_color_in_sub_tree(node)
        return None

Nodes_length=int(input())
#the tree is rooted at node 1
n = Node(None,1)
node_tuples=[]
while(Nodes_length>1):
    data_A,data_B=map(int,input().split(" "))
    node_tuples.append((data_A,data_B))
    Nodes_length=Nodes_length-1
elem=remove_1_from_tuple(node_tuples[0])
n.children.append(Node(None,elem))
node_tuples=node_tuples[1::]
#create a hashmap..
hashmap={}
for key,value in node_tuples:
    if(key in hashmap):
        hashmap[key].append(value)
    else:
        hashmap[key]=[]
        hashmap[key].append(value)
#now make the tree using a recursive function...
make_tree(n.children[0],hashmap)
#Assign color to each node..
colors=list(map(int,input().split(" ")))
for index in range(0,len(colors)):
    node_value=index+1
    node=n.find(node_value)
    node.set_color(colors[index])
#get the count of the operations now..
operations_count=int(input())
#run the operations
while(operations_count>0):
    node_value=int(input())
    CTR=0
    if(len(special)!=0):
        CTR=special[-1]
    node=n.find(node_value)
    if(node.special==False):
        node.set_special()
        CTR=CTR+1
        #traverse through the subtree and check if the color is the same
        same_color_node=node.get_same_color_in_sub_tree(node)
        if(same_color_node!=None):
            #mark that node as special..
            same_color_node.set_special()
            #increment the counter by 1
            CTR=CTR+1
    special.append(CTR)
    operations_count=operations_count-1
for obj in special:
    print(obj)
Thanks

This is the code:
#include<bits/stdc++.h>
using namespace std;

const int maxn = 1e5;
vector<int> graph[maxn + 5];
bool special[maxn + 5];
int A[maxn + 5];
int res;
int parent[maxn + 5];
unordered_set<int> chalo;

void dfs(int u, int p, int col){
    if(col == A[u])
        special[u] = true;
    if(special[u])
        chalo.insert(u);
    for (int i: graph[u]){
        if(i == p) continue;
        dfs(i, u, col);
    }
}

void dfs1(int u, int p){
    for (int i: graph[u]){
        if(i == p) continue;
        parent[i] = u;
        dfs1(i, u);
    }
}

int main(){
    int n; cin >> n;
    for (int i = 0, x, y; i < n-1; ++i){
        cin >> x >> y;
        graph[x].push_back(y);
        graph[y].push_back(x);
    }
    for (int i = 1; i <= n; ++i)
        cin >> A[i];
    int q; cin >> q;
    dfs1(1, -1);
    while(q--){
        int x; cin >> x;
        int p = parent[x];
        dfs(x, parent[x], A[x]);
        cout << chalo.size() << endl;
    }
    return 0;
}
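Since the question is tagged python-3.x, here is a rough Python sketch of the same idea as the C++ answer above: build an adjacency list, compute each node's parent once with the tree rooted at 1, then for every query run an iterative DFS from X that marks every same-colored node in X's subtree. This is my own untested translation (variable names are mine), not code from the answer:

import sys
from collections import defaultdict

def main():
    data = sys.stdin.read().split()
    it = iter(data)
    n = int(next(it))
    graph = defaultdict(list)
    for _ in range(n - 1):
        u, v = int(next(it)), int(next(it))
        graph[u].append(v)
        graph[v].append(u)
    color = {i + 1: int(next(it)) for i in range(n)}
    q = int(next(it))

    # parent of every node with the tree rooted at 1 (iterative, so no recursion limit)
    parent = {1: 0}
    stack = [1]
    while stack:
        node = stack.pop()
        for nxt in graph[node]:
            if nxt not in parent:
                parent[nxt] = node
                stack.append(nxt)

    special = set()
    out = []
    for _ in range(q):
        x = int(next(it))
        col = color[x]
        # DFS restricted to the subtree of x: never step back to a node's parent
        stack = [x]
        while stack:
            node = stack.pop()
            if color[node] == col:
                special.add(node)
            for nxt in graph[node]:
                if nxt != parent[node]:
                    stack.append(nxt)
        out.append(str(len(special)))
    print("\n".join(out))

main()

Like the C++ version, this does O(N) work per query; it is meant to show the approach, not to be an optimized solution.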

Related

Optimizing a primality test based on runtime in Python

I'm pretty new to algorithms and runtimes, and I'm trying to optimise a bit of my code for a personal project.
import math
for num in range(0, 10000000000000000000000):
    if all((num**(num+1)+(num+1)**(num))%i!=0 for i in range(2,int(math.sqrt((num**(num+1)+(num+1)**(num))))+1)):
        print(num)
What can I do to speed this up? I know that num=80 should work but my code isn't getting past num=0, 1, 2 (it's not fast enough).
First I define my range, then I say if 'such-and-such' is prime from range 2 to sqrt(such-and-such) + 1, then return that number. Sqrt(n) + 1 is the minimum number of factors to test for the primality of n.
This is a primality test of sequence A051442
You would probably get a minor boost from computing (num**(num+1)+(num+1)**(num)) only once per iteration instead of sqrt(num**(num+1)+(num+1)**(num)) times. As you can see, this will greatly reduce the constant factor in your complexity. It won't change the fundamental complexity because you still need to compute the remainder. Change
if all((num**(num+1)+(num+1)**(num))%i!=0 for i in range(2,int(math.sqrt((num**(num+1)+(num+1)**(num))))+1)):
to
k = num**(num+1)+(num+1)**(num)
if all(k%i for i in range(2,int(math.sqrt(k))+1)):
The != 0 is implicit in Python.
Update
All this is just a trivial improvement to an extremely inefficient algorithm. The biggest speedup I can think of is to reduce the check k % i to only prime i. For any composite i = a * b such that k % i == 0, it must be the case that k % a == 0 and k % b == 0 (if k is divisible by i, it must also be divisible by the factors of i).
I am assuming that you don't want to use any kind of pre-computed prime tables in your code. In that case, you can compute the table yourself. This will involve checking all the numbers up to a given sqrt(k) only once ever, instead of once per iteration of num, since we can stash the previously computed primes in say a list. That will effectively increase the lower limit of the range in your current all from 2 to the square root of the previous k.
Let's define a function to extend our set of primes using the sieve of Eratosthenes:
from math import sqrt
def extend(primes, from_, to):
    """
    primes: a sequence containing prime numbers from 2 to `from_ - 1`, in order
    from_: the number to start checking with
    to: the number to end with (inclusive)
    """
    if not primes:
        primes.extend([2, 3])
        return
    for k in range(max(from_, 5), to + 1):
        s = int(sqrt(k)) # No need to compute this more than once per k
        for p in primes:
            if p > s:
                # Reached sqrt(k) -> short circuit success
                primes.append(k)
                break
            elif not k % p:
                # Found factor -> short circuit failure
                break
Now we can use this function to extend our list of primes at every iteration of the original loop. This allows us to check the divisibility of k only against the slowly growing list of primes, not against all numbers:
primes = []
prev = 0
for num in range(10000000000000000000000):
    k = num**(num + 1) + (num + 1)**num
    lim = int(sqrt(k)) + 1
    extend(primes, prev, lim)
    #print('Num={}, K={}, checking {}-{}, p={}'.format(num, k, prev, lim, primes), end='... ')
    if k <= 3 and k in primes or all(k % i for i in primes):
        print('{}: {} Prime!'.format(num, k))
    else:
        print('{}: {} Nope'.format(num, k))
    prev = lim + 1
I am not 100% sure that my extend function is optimal, but I am able to get to num == 13, k == 4731091158953433 in <10 minutes on my ridiculously old and slow laptop, so I guess it's not too bad. That means that the algorithm builds a complete table of primes up to ~7e7 in that time.
Update #2
A sort-of-but-not-really optimization you could do would be to check all(k % i for i in primes) before calling extend. This would save you a lot of cycles for numbers that have small prime factors, but would probably catch up to you later on, when you would end up having to compute all the primes up to some enormous number. Here is a sample of how you could do that:
primes = []
prev = 0
for num in range(10000000000000000000000):
    k = num**(num + 1) + (num + 1)**num
    lim = int(sqrt(k)) + 1
    if not all(k % i for i in primes):
        print('{}: {} Nope'.format(num, k))
        continue
    start = len(primes)
    extend(primes, prev, lim)
    if all(k % i for i in primes[start:]):
        print('{}: {} Prime!'.format(num, k))
    else:
        print('{}: {} Nope'.format(num, k))
    prev = lim + 1
While this version does not do much for the long run, it does explain why you were able to get to 15 so quickly in your original run. The prime table does not get extended after num == 3, until num == 16, which is when the terrible delay occurs in this version as well. The net runtime to 16 should be identical in both versions.
Update #3
As @paxdiablo suggests, the only numbers we need to consider in extend are multiples of 6 +/- 1. We can combine that with the fact that only a small number of primes generally need to be tested, and convert the functionality of extend into a generator that will only compute as many primes as absolutely necessary. Using Python's lazy generation should help. Here is a completely rewritten version:
from itertools import count
from math import ceil, sqrt

prime_table = [2, 3]

def prime_candidates(start=0):
    """
    Infinite generator of prime number candidates starting with the
    specified number.
    Candidates are 2, 3 and all numbers that are of the form 6n-1 and 6n+1
    """
    if start <= 3:
        if start <= 2:
            yield 2
        yield 3
        start = 5
        delta = 2
    else:
        m = start % 6
        if m < 2:
            start += 1 - m
            delta = 4
        else:
            start += 5 - m
            delta = 2
    while True:
        yield start
        start += delta
        delta = 6 - delta

def isprime(n):
    """
    Checks if `n` is prime.
    All primes up to sqrt(n) are expected to already be present in
    the generated `prime_table`.
    """
    s = int(ceil(sqrt(n)))
    for p in prime_table:
        if p > s:
            break
        if not n % p:
            return False
    return True

def generate_primes(max):
    """
    Generates primes up to the specified maximum.
    First the existing table is yielded. Then, the new primes are
    found in the sequence generated by `prime_candidates`. All verified
    primes are added to the existing cache.
    """
    for p in prime_table:
        if p > max:
            return
        yield p
    for k in prime_candidates(prime_table[-1] + 1):
        if isprime(k):
            prime_table.append(k)
            if k > max:
                # Putting the return here ensures that we always stop on a prime
                # and therefore don't do any extra work
                return
            else:
                yield k

for num in count():
    k = num**(num + 1) + (num + 1)**num
    lim = int(ceil(sqrt(k)))
    b = all(k % i for i in generate_primes(lim))
    print('n={}, k={} is {}prime'.format(num, k, '' if b else 'not '))
This version gets to 15 almost instantly. It gets stuck at 16 because the smallest prime factor for k=343809097055019694337 is 573645313. Some future expectations:
17 should be a breeze: 16248996011806421522977 has factor 19
18 will take a while: 812362695653248917890473 has factor 22156214713
19 is easy: 42832853457545958193355601 is divisible by 3
20 also easy: 2375370429446951548637196401 is divisible by 58967
21: 138213776357206521921578463913 is divisible by 13
22: 8419259736788826438132968480177 is divisible by 103
etc... (link to sequence)
So in terms of instant gratification, this method will get you much further if you can make it past 18 (which will take >100 times longer than getting past 16, which in my case took ~1.25hrs).
That being said, your greatest speedup at this point would be re-writing this in C or some similar low-level language that does not have as much overhead for loops.
Update #4
Just for giggles, here is an implementation of the latest Python version in C. I chose to go with GMP for arbitrary precision integers, because it is easy to use and install on my Red Hat system, and the docs are very clear:
#include <stdio.h>
#include <stdlib.h>
#include <gmp.h>

typedef struct {
    size_t alloc;
    size_t size;
    mpz_t *numbers;
} PrimeTable;

void init_table(PrimeTable *buf)
{
    buf->alloc = 0x100000L;
    buf->size = 2;
    buf->numbers = malloc(buf->alloc * sizeof(mpz_t));
    if(buf == NULL) {
        fprintf(stderr, "No memory for prime table\n");
        exit(1);
    }
    mpz_init_set_si(buf->numbers[0], 2);
    mpz_init_set_si(buf->numbers[1], 3);
    return;
}

void append_table(PrimeTable *buf, mpz_t number)
{
    if(buf->size == buf->alloc) {
        size_t new = 2 * buf->alloc;
        mpz_t *tmp = realloc(buf->numbers, new * sizeof(mpz_t));
        if(tmp == NULL) {
            fprintf(stderr, "Ran out of memory for prime table\n");
            exit(1);
        }
        buf->alloc = new;
        buf->numbers = tmp;
    }
    mpz_set(buf->numbers[buf->size], number);
    buf->size++;
    return;
}

size_t print_table(PrimeTable *buf, FILE *file)
{
    size_t i, n;
    n = fprintf(file, "Array contents = [");
    for(i = 0; i < buf->size; i++) {
        n += mpz_out_str(file, 10, buf->numbers[i]);
        if(i < buf->size - 1)
            n += fprintf(file, ", ");
    }
    n += fprintf(file, "]\n");
    return n;
}

void free_table(PrimeTable *buf)
{
    for(buf->size--; ((signed)(buf->size)) >= 0; buf->size--)
        mpz_clear(buf->numbers[buf->size]);
    free(buf->numbers);
    return;
}

int isprime(mpz_t num, PrimeTable *table)
{
    mpz_t max, rem, next;
    size_t i, d, r;

    mpz_inits(max, rem, NULL);
    mpz_sqrtrem(max, rem, num);

    // Check if perfect square: definitely not prime
    if(!mpz_cmp_si(rem, 0)) {
        mpz_clears(rem, max, NULL);
        return 0;
    }

    /* Normal table lookup */
    for(i = 0; i < table->size; i++) {
        // Got to sqrt(n) -> prime
        if(mpz_cmp(max, table->numbers[i]) < 0) {
            mpz_clears(rem, max, NULL);
            return 1;
        }
        // Found a factor -> not prime
        if(mpz_divisible_p(num, table->numbers[i])) {
            mpz_clears(rem, max, NULL);
            return 0;
        }
    }

    /* Extend table and do lookup */
    // Start with last found prime + 2
    mpz_init_set(next, table->numbers[i - 1]);
    mpz_add_ui(next, next, 2);

    // Find nearest number of form 6n-1 or 6n+1
    r = mpz_fdiv_ui(next, 6);
    if(r < 2) {
        mpz_add_ui(next, next, 1 - r);
        d = 4;
    } else {
        mpz_add_ui(next, next, 5 - r);
        d = 2;
    }

    // Step along numbers of form 6n-1/6n+1. Check each candidate for
    // primality. Don't stop until next prime after sqrt(n) to avoid
    // duplication.
    for(;;) {
        if(isprime(next, table)) {
            append_table(table, next);
            if(mpz_divisible_p(num, next)) {
                mpz_clears(next, rem, max, NULL);
                return 0;
            }
            if(mpz_cmp(max, next) <= 0) {
                mpz_clears(next, rem, max, NULL);
                return 1;
            }
        }
        mpz_add_ui(next, next, d);
        d = 6 - d;
    }
    // Return can only happen from within loop.
}

int main(int argc, char *argv[])
{
    PrimeTable table;
    mpz_t k, a, b;
    size_t n, next;
    int p;

    init_table(&table);
    mpz_inits(k, a, b, NULL);
    for(n = 0; ; n = next) {
        next = n + 1;
        mpz_set_ui(a, n);
        mpz_pow_ui(a, a, next);
        mpz_set_ui(b, next);
        mpz_pow_ui(b, b, n);
        mpz_add(k, a, b);
        p = isprime(k, &table);
        printf("n=%ld k=", n);
        mpz_out_str(stdout, 10, k);
        printf(" p=%d\n", p);
        //print_table(&table, stdout);
    }
    mpz_clears(b, a, k, NULL);
    free_table(&table);
    return 0;
}
While this version has the exact same algorithmic complexity as the Python one, I expect it to run a few orders of magnitude faster because of the relatively minimal overhead incurred in C. And indeed, it took about 15 minutes to get stuck at n == 18, which is ~5 times faster than the Python version so far.
Update #5
This is going to be the last one, I promise.
GMP has a function called mpz_nextprime, which offers a potentially much faster implementation of this algorithm, especially with caching. According to the docs:
This function uses a probabilistic algorithm to identify primes. For practical purposes it’s adequate, the chance of a composite passing will be extremely small.
This means that it is probably much faster than the current prime generator I implemented, with a slight cost offset of some false primes being added to the cache. This cost should be minimal: even adding a few thousand extra modulo operations should be fine if the prime generator is faster than it is now.
The only part that needs to be replaced/modified is the portion of isprime below the comment /* Extend table and do lookup */. Basically that whole section just becomes a series of calls to mpz_nextprime instead of recursion.
At that point, you may as well adapt isprime to use mpz_probab_prime_p when possible. You only need to check for sure if the result of mpz_probab_prime_p is uncertain:
int isprime(mpz_t num, PrimeTable *table)
{
    mpz_t max, rem, next;
    size_t i, r;
    int status;

    status = mpz_probab_prime_p(num, 50);
    // Status = 2 -> definite yes, Status = 0 -> definite no
    if(status != 1)
        return status != 0;

    mpz_inits(max, rem, NULL);
    mpz_sqrtrem(max, rem, num);

    // Check if perfect square: definitely not prime
    if(!mpz_cmp_si(rem, 0)) {
        mpz_clears(rem, max, NULL);
        return 0;
    }
    mpz_clear(rem);

    /* Normal table lookup */
    for(i = 0; i < table->size; i++) {
        // Got to sqrt(n) -> prime
        if(mpz_cmp(max, table->numbers[i]) < 0) {
            mpz_clear(max);
            return 1;
        }
        // Found a factor -> not prime
        if(mpz_divisible_p(num, table->numbers[i])) {
            mpz_clear(max);
            return 0;
        }
    }

    /* Extend table and do lookup */
    // Start with last found prime + 2
    mpz_init_set(next, table->numbers[i - 1]);
    mpz_add_ui(next, next, 2);

    // Step along probable primes
    for(;;) {
        mpz_nextprime(next, next);
        append_table(table, next);
        if(mpz_divisible_p(num, next)) {
            r = 0;
            break;
        }
        if(mpz_cmp(max, next) <= 0) {
            r = 1;
            break;
        }
    }
    mpz_clears(next, max, NULL);
    return r;
}
Sure enough, this version makes it to n == 79 in a couple of seconds at most. It appears to get stuck on n == 80, probably because mpz_probab_prime_p can't determine if k is a prime for sure. I doubt that computing all the primes up to ~10^80 is going to take a trivial amount of time.

coin change recurrence solution

Given a value N, if we want to make change for N cents, and we have an infinite supply of each of S = {S1, S2, ..., Sm} valued coins, how many ways can we make the change? The order of coins doesn't matter. There is an additional restriction though: you can only give change with exactly K coins.
For example, for N = 4, k = 2 and S = {1,2,3}, there are two solutions: {2,2},{1,3}. So output should be 2.
Solution:
#include <stdio.h>

int getways(int coins, int target, int total_coins, int *denomination, int size, int idx)
{
    int sum = 0, i;
    if (coins > target || total_coins < 0)
        return 0;
    if (target == coins && total_coins == 0)
        return 1;
    if (target == coins && total_coins < 0)
        return 0;
    for (i = idx; i < size; i++) {
        sum += getways(coins + denomination[i], target, total_coins - 1, denomination, size, i);
    }
    return sum;
}

int main()
{
    int target = 49;
    int total_coins = 15;
    int denomination[] = {1, 2, 3, 4, 5};
    int size = sizeof(denomination)/sizeof(denomination[0]);
    printf("%d\n", getways(0, target, total_coins, denomination, size, 0));
}
Above is the recursive solution. However, I need help with my dynamic programming solution:
Let dp[i][j][k] represent sum up to i with j elements and k coins.
So,
dp[i][j][k] = dp[i][j-1][k] + dp[i-a[j]][j][k-1]
Is my recurrence relation right?
I don't really understand your recurrence relation:
Let dp[i][j][k] represent sum up to i with j elements and k coins.
I think you're on the right track, but I suggest simply dropping the middle dimension [j], and use dp[sum][coinsLeft] as follows:
dp[0][0] = 1 // coins: 0, desired sum: 0 => 1 solution
dp[i][0] = 0 // coins: 0, desired sum: i => 0 solutions
dp[sum][coinsLeft] = dp[sum - S1][coinsLeft-1]
+ dp[sum - S2][coinsLeft-1]
+ ...
+ dp[sum - SM][coinsLeft-1]
The answer is then to be found at dp[N][K] (= number of ways to add K coins to get N cents)
Here's some sample code (I advise you not to look until you've tried to solve it yourself; it's a good exercise):
public static int combinations(int numCoinsToUse, int targetSum, int[] denom) {
    // dp[numCoins][sum] == ways to get sum using numCoins
    int[][] dp = new int[numCoinsToUse+1][targetSum];

    // Any sum (except 0) is impossible with 0 coins
    for (int sum = 0; sum < targetSum; sum++) {
        dp[0][sum] = sum == 0 ? 1 : 0;
    }

    // Gradually increase number of coins
    for (int c = 1; c <= numCoinsToUse; c++)
        for (int sum = 0; sum < targetSum; sum++)
            for (int d : denom)
                if (sum >= d)
                    dp[c][sum] += dp[c-1][sum - d];

    return dp[numCoinsToUse][targetSum-1];
}
Using your example input:
combinations(2, 4, new int[] {1, 2, 3} ) // gives 2

searching for dynamic programming solution

Problem:
There is a stack consisting of N bricks. You and your friend decide to play a game using this stack. In this game, the players alternately remove 1, 2, or 3 bricks from the top, and the numbers on the removed bricks are added to that player's score. You have to play so that you obtain the maximum possible score, given that your friend also plays optimally and you make the first move.
Input Format
The first line will contain an integer T, i.e. the number of test cases. There will be two lines corresponding to each test case: the first line will contain a number N, i.e. the number of elements in the stack, and the next line will contain N numbers, i.e. the numbers written on the bricks from top to bottom.
Output Format
For each test case, print a single line containing your maximum score.
I have tried recursion but it didn't work:
#include <iostream>
#include <algorithm>
using namespace std;

int recurse(int length, int sequence[5], int i) {
    if(length - i < 3) {
        int sum = 0;
        for(; i < length; i++) sum += sequence[i];
        return sum;
    } else {
        int sum1 = 0;
        int sum2 = 0;
        int sum3 = 0;
        sum1 += recurse(length, sequence, i+1);
        sum2 += recurse(length, sequence, i+2);
        sum3 += recurse(length, sequence, i+3);
        return max(max(sum1, sum2), sum3);
    }
}

int main() {
    int sequence[] = {0, 0, 9, 1, 999};
    int length = 5;
    cout << recurse(length, sequence, 0);
    return 0;
}
My approach to solving this problem was as follows:
Both players play optimally.
So, the solution is to be built in a manner that need not take the player into account. This is because both players are going to pick the best choice available to them for any given state of the stack of bricks.
The base cases:
Either player, when left with the last one/two/three bricks, will choose to remove all bricks.
For the sake of convenience, let's assume that the array is actually in reverse order (i.e. a[0] is the value of the bottom-most brick in the stack) (This can easily be incorporated by performing a reverse operation on the array.)
So, the base cases are:
# Base Cases
dp[0] = a[0]
dp[1] = a[0]+a[1]
dp[2] = a[0]+a[1]+a[2]
Building the final solution:
Now, in each iteration, a player has 3 choices.
pick brick (i), or,
pick brick (i and i-1) , or,
pick brick (i,i-1 and i-2)
If the player opted for choice 1, the following would result:
player secures a[i] points from the brick (i) (+a[i])
will not be able to procure the points on the bricks removed by the opponent. This value is stored in dp[i-1] (which the opponent will end up scoring by virtue of this choice made by the player).
will surely procure the points on the bricks not removed by the opponent. (+ Sum of all the bricks up until brick (i-1) not removed by opponent )
A prefix array to store the partial sums of points of bricks can be computed as follows:
# build prefix sum array
pre = [a[0]]
for i in range(1,n):
    pre.append(pre[-1]+a[i])
And, now, if player opted for choice 1, the score would be:
ans1 = a[i] + (pre[i-1] - dp[i-1])
Similarly, for choices 2 and 3. So, we get:
ans1 = a[i]+ (pre[i-1] - dp[i-1]) # if we pick only ith brick
ans2 = a[i]+a[i-1]+(pre[i-2] - dp[i-2]) # pick 2 bricks
ans3 = a[i]+a[i-1]+a[i-2]+(pre[i-3] - dp[i-3]) # pick 3 bricks
Now, each player wants to maximize this value. So, in each iteration, we pick the maximum among ans1, ans2 and ans3.
dp[i] = max(ans1, ans2, ans3)
Now, all we have to do is to iterate from 3 through to n-1 to get the required solution.
Here is the final snippet in python:
# n (the number of bricks) is assumed to have been read already; this snippet is Python 2
a = map(int, raw_input().split())
a.reverse() # so that a[0] is bottom brick of stack
dp = [0 for x1 in xrange(n)]
dp[0] = a[0]
dp[1] = a[0]+a[1]
dp[2] = a[0]+a[1]+a[2]
# build prefix sum array
pre = [a[0]]
for i in range(1,n):
    pre.append(pre[-1]+a[i])
for i in xrange(3,n):
    # We can pick brick i, (i,i-1) or (i,i-1,i-2)
    ans1 = a[i]+ (pre[i-1] - dp[i-1]) # if we pick only ith brick
    ans2 = a[i]+a[i-1]+(pre[i-2] - dp[i-2]) # pick 2
    ans3 = a[i]+a[i-1]+a[i-2]+(pre[i-3] - dp[i-3]) # pick 3
    # both players maximise this value. Doesn't matter who is playing
    dp[i] = max(ans1, ans2, ans3)
print dp[n-1]
At first sight your code seems totally wrong for a couple of reasons:
The player is not taken into account. You taking a brick or your friend taking a brick is not the same (you have to maximize your score; the total is of course always the total of the scores on the bricks).
It looks like a form of recursion with no memoization, and that approach will obviously explode to exponential computing time (you're using the "brute force" approach, enumerating all possible games).
A dynamic programming approach is clearly possible because the best possible continuation of a game doesn't depend on how you reached a certain state. For the state of the game you'd need
Who's next to play (you or your friend)
How many bricks are left on the stack
With these two input you can compute how much you can collect from that point to the end of the game. To do this there are two cases
1. It's your turn
You need to try to collect 1, 2 or 3 and call recursively on the next game state where the opponent will have to choose. Of the three cases you keep what is the highest result
2. It's your opponent's turn
You need to simulate collection of 1, 2 or 3 bricks and call recursively on next game state where you'll have to choose. Of the three cases you keep what is the lowest result (because the opponent is trying to maximize his/her result, not yours).
At the very beginning of the function you just need to check whether the same game state has been processed before, and when returning from a computation you need to store the result. Thanks to this lookup/memoization the search time will not be exponential, but linear in the number of distinct game states (just 2*N where N is the number of bricks).
In Python:
memory = {}
bricks = [0, 0, 9, 1, 999]

def maxResult(my_turn, index):
    key = (my_turn, index)
    if key in memory:
        return memory[key]
    if index == len(bricks):
        result = 0
    elif my_turn:
        result = None
        s = 0
        for i in range(index, min(index+3, len(bricks))):
            s += bricks[i]
            x = s + maxResult(False, i+1)
            if result is None or x > result:
                result = x
    else:
        result = None
        for i in range(index, min(index+3, len(bricks))):
            x = maxResult(True, i+1)
            if result is None or x < result:
                result = x
    memory[key] = result
    return result

print maxResult(True, 0)
import java.io.*;
import java.util.*;
import java.text.*;
import java.math.*;
import java.util.regex.*;

public class Solution {
    public static void main(String[] args){
        Scanner sc = new Scanner(System.in);
        int noTest = sc.nextInt();
        for(int i = 0; i < noTest; i++){
            int noBrick = sc.nextInt();
            ArrayList<Integer> arr = new ArrayList<Integer>();
            for (int j = 0; j < noBrick; j++){
                arr.add(sc.nextInt());
            }
            long sum[] = new long[noBrick];
            sum[noBrick-1] = arr.get(noBrick-1);
            for (int j = noBrick-2; j >= 0; j--){
                sum[j] = sum[j+1] + arr.get(j);
            }
            long[] max = new long[noBrick];
            if(noBrick >= 1)
                max[noBrick-1] = arr.get(noBrick-1);
            if(noBrick >= 2)
                max[noBrick-2] = (int)Math.max(arr.get(noBrick-2), max[noBrick-1] + arr.get(noBrick-2));
            if(noBrick >= 3)
                max[noBrick-3] = (int)Math.max(arr.get(noBrick-3), max[noBrick-2] + arr.get(noBrick-3));
            if(noBrick >= 4){
                for (int j = noBrick-4; j >= 0; j--){
                    long opt1 = arr.get(j) + sum[j+1] - max[j+1];
                    long opt2 = arr.get(j) + arr.get(j+1) + sum[j+2] - max[j+2];
                    long opt3 = arr.get(j) + arr.get(j+1) + arr.get(j+2) + sum[j+3] - max[j+3];
                    max[j] = (long)Math.max(opt1, Math.max(opt2, opt3));
                }
            }
            long cost = max[0];
            System.out.println(cost);
        }
    }
}
I tried this using Java, seems to work alright.
Here is a better solution that I found on the internet, without recursion.
#include <iostream>
#include <fstream>
#include <algorithm>
#define MAXINDEX 10001
using namespace std;

long long maxResult(int a[MAXINDEX], int LENGTH){
    long long prefixSum[MAXINDEX] = {0};
    prefixSum[0] = a[0];
    for(int i = 1; i < LENGTH; i++){
        prefixSum[i] += prefixSum[i-1] + a[i];
    }
    long long dp[MAXINDEX] = {0};
    dp[0] = a[0];
    dp[1] = dp[0] + a[1];
    dp[2] = dp[1] + a[2];
    for(int k = 3; k < LENGTH; k++){
        long long x = prefixSum[k-1] + a[k] - dp[k-1];
        long long y = prefixSum[k-2] + a[k] + a[k-1] - dp[k-2];
        long long z = prefixSum[k-3] + a[k] + a[k-1] + a[k-2] - dp[k-3];
        dp[k] = max(x, max(y, z));
    }
    return dp[LENGTH-1];
}

int main(){
    int cases;
    int bricks[MAXINDEX];
    ifstream fin("test.in");
    fin >> cases;
    for (int i = 0; i < cases; i++){
        long n;
        fin >> n;
        for(int j = 0; j < n; j++) fin >> bricks[j];
        reverse(bricks, bricks+n);
        cout << maxResult(bricks, n) << endl;
    }
    return 0;
}

CodeJam 2014: How to solve task "New Lottery Game"?

I want to know efficient approach for the New Lottery Game problem.
The Lottery is changing! The Lottery used to have a machine to generate a random winning number. But due to cheating problems, the Lottery has decided to add another machine. The new winning number will be the result of the bitwise-AND operation between the two random numbers generated by the two machines.
To find the bitwise-AND of X and Y, write them both in binary; then a bit in the result in binary has a 1 if the corresponding bits of X and Y were both 1, and a 0 otherwise. In most programming languages, the bitwise-AND of X and Y is written X&Y.
For example:
The old machine generates the number 7 = 0111.
The new machine generates the number 11 = 1011.
The winning number will be (7 AND 11) = (0111 AND 1011) = 0011 = 3.
With this measure, the Lottery expects to reduce the cases of fraudulent claims, but unfortunately an employee from the Lottery company has leaked the following information: the old machine will always generate a non-negative integer less than A and the new one will always generate a non-negative integer less than B.
Catalina wants to win this lottery and to give it a try she decided to buy all non-negative integers less than K.
Given A, B and K, Catalina would like to know in how many different ways the machines can generate a pair of numbers that will make her a winner.
For small inputs we can check all possible pairs, but how do we do it for large inputs? I guess we represent the numbers as binary strings first and then count the combinations whose AND gives an answer less than K, but I can't seem to figure out how to count the possible combinations of two binary strings.
I used a general DP technique that I described in a lot of detail in another answer.
We want to count the pairs (a, b) such that a < A, b < B and a & b < K.
The first step is to convert the numbers to binary and to pad them to the same size by adding leading zeroes. I just padded them to a fixed size of 40. The idea is to build up the valid a and b bit by bit.
Let f(i, loA, loB, loK) be the number of valid suffix pairs of a and b of size 40 - i. If loA is true, it means that the prefix up to i is already strictly smaller than the corresponding prefix of A. In that case there is no restriction on the next possible bit for a. If loA is false, A[i] is an upper bound on the next bit we can place at the end of the current prefix. loB and loK have an analogous meaning.
Now we have the following transition:
long long f(int i, bool loA, bool loB, bool loK) {
    // TODO add memoization
    if (i == 40)
        return loA && loB && loK;
    int hiA = loA ? 1 : A[i]-'0'; // upper bound on the next bit in a
    int hiB = loB ? 1 : B[i]-'0'; // upper bound on the next bit in b
    int hiK = loK ? 1 : K[i]-'0'; // upper bound on the next bit in a & b
    long long res = 0;
    for (int a = 0; a <= hiA; ++a)
        for (int b = 0; b <= hiB; ++b) {
            int k = a & b;
            if (k > hiK) continue;
            res += f(i+1, loA || a < A[i]-'0',
                          loB || b < B[i]-'0',
                          loK || k < K[i]-'0');
        }
    return res;
}
The result is f(0, false, false, false).
The runtime is O(max(log A, log B)) if memoization is added to ensure that every subproblem is only solved once.
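For reference, here is a compact, untested Python sketch of the same digit DP with the memoization filled in via functools.lru_cache (the 40-bit padding and the name count_pairs are my own choices, mirroring the transition above):

from functools import lru_cache

def count_pairs(A, B, K, bits=40):
    a_bits = format(A, '0{}b'.format(bits))
    b_bits = format(B, '0{}b'.format(bits))
    k_bits = format(K, '0{}b'.format(bits))

    @lru_cache(maxsize=None)
    def f(i, loA, loB, loK):
        if i == bits:
            return 1 if (loA and loB and loK) else 0
        hiA = 1 if loA else int(a_bits[i])   # upper bound on the next bit of a
        hiB = 1 if loB else int(b_bits[i])   # upper bound on the next bit of b
        hiK = 1 if loK else int(k_bits[i])   # upper bound on the next bit of a & b
        res = 0
        for a in range(hiA + 1):
            for b in range(hiB + 1):
                k = a & b
                if k > hiK:
                    continue
                res += f(i + 1,
                         loA or a < int(a_bits[i]),
                         loB or b < int(b_bits[i]),
                         loK or k < int(k_bits[i]))
        return res

    return f(0, False, False, False)

This assumes A, B and K fit in 40 bits, as in the answer above; for example, count_pairs(3, 4, 2) should return 10 (every pair except (2, 2) and (2, 3)).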
What I did was just to identify when the answer is A * B.
Otherwise, just brute-force the rest; this code passed the large input.
// for each test case
long count = 0;
if ((K > A) || (K > B)) {
    count = A * B;
    continue; // print count and go to the next test case
}
count = A * B - (A-K) * (B-K);
for (int i = K; i < A; i++) {
    for (int j = K; j < B; j++) {
        if ((i&j) < K) count++;
    }
}
I hope this helps!
Just as Niklas B. said, the whole answer is:
#include <algorithm>
#include <cstring>
#include <iomanip>
#include <iostream>
#include <iterator>
#include <map>
#include <sstream>
#include <string>
#include <vector>

using namespace std;

#define MAX_SIZE 32

int A, B, K;
int arr_a[MAX_SIZE];
int arr_b[MAX_SIZE];
int arr_k[MAX_SIZE];
bool flag[MAX_SIZE][2][2][2];
long long matrix[MAX_SIZE][2][2][2];

long long
get_result();

int main(int argc, char *argv[])
{
    int case_amount = 0;
    cin >> case_amount;
    for (int i = 0; i < case_amount; ++i)
    {
        const long long result = get_result();
        cout << "Case #" << 1 + i << ": " << result << endl;
    }
    return 0;
}

long long
dp(const int h,
   const bool can_A_choose_1,
   const bool can_B_choose_1,
   const bool can_K_choose_1)
{
    if (MAX_SIZE == h)
        return can_A_choose_1 && can_B_choose_1 && can_K_choose_1;
    if (flag[h][can_A_choose_1][can_B_choose_1][can_K_choose_1])
        return matrix[h][can_A_choose_1][can_B_choose_1][can_K_choose_1];

    int cnt_A_max = arr_a[h];
    int cnt_B_max = arr_b[h];
    int cnt_K_max = arr_k[h];
    if (can_A_choose_1)
        cnt_A_max = 1;
    if (can_B_choose_1)
        cnt_B_max = 1;
    if (can_K_choose_1)
        cnt_K_max = 1;

    long long res = 0;
    for (int i = 0; i <= cnt_A_max; ++i)
    {
        for (int j = 0; j <= cnt_B_max; ++j)
        {
            int k = i & j;
            if (k > cnt_K_max)
                continue;
            res += dp(h + 1,
                      can_A_choose_1 || (i < cnt_A_max),
                      can_B_choose_1 || (j < cnt_B_max),
                      can_K_choose_1 || (k < cnt_K_max));
        }
    }
    flag[h][can_A_choose_1][can_B_choose_1][can_K_choose_1] = true;
    matrix[h][can_A_choose_1][can_B_choose_1][can_K_choose_1] = res;
    return res;
}

long long
get_result()
{
    cin >> A >> B >> K;
    memset(arr_a, 0, sizeof(arr_a));
    memset(arr_b, 0, sizeof(arr_b));
    memset(arr_k, 0, sizeof(arr_k));
    memset(flag, 0, sizeof(flag));
    memset(matrix, 0, sizeof(matrix));
    int i = 31;
    while (i >= 1)
    {
        arr_a[i] = A % 2;
        A /= 2;
        arr_b[i] = B % 2;
        B /= 2;
        arr_k[i] = K % 2;
        K /= 2;
        i--;
    }
    return dp(1, 0, 0, 0);
}

Generate all compositions of an integer into k parts

I can't figure out how to generate all compositions (http://en.wikipedia.org/wiki/Composition_%28number_theory%29) of an integer N into K parts, but only doing it one at a time. That is, I need a function that given the previous composition generated, returns the next one in the sequence. The reason is that memory is limited for my application. This would be much easier if I could use Python and its generator functionality, but I'm stuck with C++.
This is similar to Next Composition of n into k parts - does anyone have a working algorithm?
Any assistance would be greatly appreciated.
Preliminary remarks
First start from the observation that [1,1,...,1,n-k+1] is the first composition (in lexicographic order) of n over k parts, and [n-k+1,1,1,...,1] is the last one.
Now consider an example: the composition [2,4,3,1,1], here n = 11 and k = 5. Which is the next one in lexicographic order? Obviously the rightmost part to be incremented is 4, because [3,1,1] is the last composition of 5 over 3 parts.
4 is to the left of 3, the rightmost part different from 1.
So turn 4 into 5, and replace [3,1,1] by [1,1,2], the first composition of the remainder (3+1+1)-1, giving [2,5,1,1,2].
Generation program (in C)
The following C program shows how to compute such compositions on demand in lexicographic order
#include <stdio.h>
#include <stdbool.h>

bool get_first_composition(int n, int k, int composition[k])
{
    if (n < k) {
        return false;
    }
    for (int i = 0; i < k - 1; i++) {
        composition[i] = 1;
    }
    composition[k - 1] = n - k + 1;
    return true;
}

bool get_next_composition(int n, int k, int composition[k])
{
    if (composition[0] == n - k + 1) {
        return false;
    }

    // there's an i with composition[i] > 1, and it is not 0.
    // find the last one
    int last = k - 1;
    while (composition[last] == 1) {
        last--;
    }

    // turn    a b ... y   z 1 1 ... 1
    //                     ^ last
    // into    a b ... (y+1) 1 1 1 ... (z-1)
    // be careful, there may be no 1's at the end
    int z = composition[last];
    composition[last - 1] += 1;
    composition[last] = 1;
    composition[k - 1] = z - 1;
    return true;
}

void display_composition(int k, int composition[k])
{
    char *separator = "[";
    for (int i = 0; i < k; i++) {
        printf("%s%d", separator, composition[i]);
        separator = ",";
    }
    printf("]\n");
}

void display_all_compositions(int n, int k)
{
    int composition[k]; // VLA. Please don't use silly values for k
    for (bool exists = get_first_composition(n, k, composition);
         exists;
         exists = get_next_composition(n, k, composition)) {
        display_composition(k, composition);
    }
}

int main()
{
    display_all_compositions(5, 3);
}
Results
[1,1,3]
[1,2,2]
[1,3,1]
[2,1,2]
[2,2,1]
[3,1,1]
Weak compositions
A similar algorithm works for weak compositions (where 0 is allowed).
bool get_first_weak_composition(int n, int k, int composition[k])
{
    if (n < k) {
        return false;
    }
    for (int i = 0; i < k - 1; i++) {
        composition[i] = 0;
    }
    composition[k - 1] = n;
    return true;
}

bool get_next_weak_composition(int n, int k, int composition[k])
{
    if (composition[0] == n) {
        return false;
    }

    // there's an i with composition[i] > 0, and it is not 0.
    // find the last one
    int last = k - 1;
    while (composition[last] == 0) {
        last--;
    }

    // turn    a b ... y   z 0 0 ... 0
    //                     ^ last
    // into    a b ... (y+1) 0 0 0 ... (z-1)
    // be careful, there may be no 0's at the end
    int z = composition[last];
    composition[last - 1] += 1;
    composition[last] = 0;
    composition[k - 1] = z - 1;
    return true;
}
Results for n=5 k=3
[0,0,5]
[0,1,4]
[0,2,3]
[0,3,2]
[0,4,1]
[0,5,0]
[1,0,4]
[1,1,3]
[1,2,2]
[1,3,1]
[1,4,0]
[2,0,3]
[2,1,2]
[2,2,1]
[2,3,0]
[3,0,2]
[3,1,1]
[3,2,0]
[4,0,1]
[4,1,0]
[5,0,0]
Similar algorithms can be written for compositions of n into k parts greater than some fixed value.
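For instance, here is a rough Python sketch of the same first/next pattern where every part must be at least m (my own transcription of the C functions above, checked only on small cases):

def first_composition_min(n, k, m):
    # [m, m, ..., m, n - (k-1)*m], or None if n is too small
    if n < k * m:
        return None
    return [m] * (k - 1) + [n - (k - 1) * m]

def next_composition_min(comp, n, m):
    # Same stepping rule as get_next_composition, with the floor 1 replaced by m.
    k = len(comp)
    if comp[0] == n - (k - 1) * m:
        return False
    last = k - 1
    while comp[last] == m:
        last -= 1
    z = comp[last]
    comp[last - 1] += 1
    comp[last] = m
    comp[k - 1] = z - 1
    return True

For n = 7, k = 3, m = 2 this yields [2,2,3], [2,3,2], [3,2,2].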
You could try something like this:
Start with the array [1,1,...,1,N-K+1] of (K-1) ones and one entry holding the remainder. The next composition can be created by incrementing the (K-1)th element and decreasing the last element. Repeat this trick as long as the last element is bigger than the second-to-last.
When the last element becomes smaller, increment the (K-2)th element, set the (K-1)th element to the same value, and set the last element to the remainder again. Repeat the process and apply the same principle to the other elements when necessary.
You end up with a constantly sorted array that avoids duplicate compositions.
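A minimal Python sketch of this sorted-array idea (my own code, not from the answer above); it keeps the parts in non-decreasing order, so each multiset of parts appears exactly once:

def first_sorted_composition(n, k):
    # [1, 1, ..., 1, n-k+1], or None if n < k
    if n < k:
        return None
    return [1] * (k - 1) + [n - k + 1]

def next_sorted_composition(a):
    # Bump the rightmost part that can still be raised while keeping the
    # array non-decreasing, then reset everything to its right.
    k = len(a)
    suffix_sum = a[-1]
    for i in range(k - 2, -1, -1):
        suffix_sum += a[i]
        # after raising a[i] by one, every later part must be >= a[i] + 1
        if suffix_sum >= (k - i) * (a[i] + 1):
            a[i] += 1
            for j in range(i + 1, k - 1):
                a[j] = a[i]
            a[-1] = suffix_sum - (k - 1 - i) * a[i]
            return True
    return False

For n = 6 and k = 3 this produces [1,1,4], [1,2,3], [2,2,2]. Note that it enumerates each unordered combination of parts once; if all ordered compositions are needed, the lexicographic generator in the first answer above is the one to use.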
