Method to convert a binary number to decimal - Groovy

I tried to write a program that converts a binary number to decimal, but I am getting a number of errors and I cannot figure out where I am going wrong.
// Takes exponent from the user and calculates 2 ** exponent
int power2(int exponent) {
    result = 2 ** exponent
    return result
}

// Converts binary number to decimal
int binary2decimal(String binary) {
    result = 0
    count = 0
    for (i = binary.length(); i-- > 0;) {
        int d = Integer.parseInt(binary.charAt(i))
        if (d == 1) {
            result = result + power2(count)
        }
        count++
    }
    return result
}

binary2decimal("101110")
binary2decimal("101110")

Integer.parseInt expects a String, but binary.charAt(i) returns a char, so Groovy cannot find a matching parseInt method. Change
int d = Integer.parseInt(binary.charAt(i))
to
int d = Integer.parseInt("${binary[i]}")
and it will work.
An alternative implementation to yours is:
int binary2decimal2(String binary) {
    binary.reverse()
          .toList()
          .indexed()
          .collect { Integer idx, String val -> Integer.parseInt(val) * (2 ** idx) }
          .sum()
}

Assuming you want the shortest route to a clean solution, use Integer.parseInt(String base2num, int radix) with radix = 2.
See http://docs.oracle.com/javase/7/docs/api/java/lang/Integer.html#parseInt(java.lang.String,%20int)
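For example, a minimal sketch in Java (Groovy can call the same Integer.parseInt overload directly; the class and method names here are just for illustration):

    public class Binary2Decimal {
        // radix 2 tells parseInt to interpret the string as base 2
        static int binary2decimal(String binary) {
            return Integer.parseInt(binary, 2);
        }

        public static void main(String[] args) {
            System.out.println(binary2decimal("101110")); // 46
        }
    }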

Related

Is there a more efficient way to reverse an integer (both positive and negative) in Scala?

I am making a program in Scala that takes an integer and reverses it. For example, an input of 30 returns an output of 3. The program must also work for negative numbers; for instance, an input of -89 returns an output of -98. Also, if the first digit of the reversal is 0, it should be truncated (30 becomes 3). This is the code I have written.
import io.StdIn._

val twoDigitNumber: Int = takeInput()
println("The reversal is " + reverse(twoDigitNumber))

// define a function named reverse to handle the actual reversal for -ve and +ve numbers
def reverse(x: Integer): Integer = {
  // 4 possibilities: +ve, 1st digit 0; -ve, 1st digit 0; -ve, 1st digit not 0; +ve, 1st digit not 0
  if (x > 0 && x.toString.reverse.charAt(0) == '0') {
    x.toString.reverse.substring(1).toInt
  } else if (x < 0 && x.toString.substring(1).reverse.charAt(0) == '0') {
    ('-' + x.toString.substring(1).reverse.substring(1)).toInt
  } else if (x < 0 && x.toString.substring(1).reverse.charAt(0) != '0') {
    ('-' + x.toString.substring(1).reverse).toInt
  } else {
    x.toString.reverse.toInt
  }
}

// reads an integer number
def takeInput(): Int = {
  print("Enter a two-digit integer number: ")
  readInt()
}
Is there a more efficient way to do this?
The shortest I found:
x.signum * x.abs.toString.reverse.toInt
It can be done as below, where x is your integer input:
val reverseOutput = if (x > 0) x.toString.reverse.toInt else -1 * ((x * -1).toString.reverse.toInt)
def reverseANumber(n: Int): Int = {
  def _reverseANumber(i: Int, i1: Int): Int = i match {
    case 0 => i1
    case i =>
      val n = i % 10
      val n1 = n * math.pow(10, ((math.log10(i) + 1).toInt) - 1).toInt
      _reverseANumber(i / 10, i1 + n1)
  }
  _reverseANumber(n, 0)
}

Finding the binary composition of a binary number

Very new to C#, so this could be a silly question.
I am working with a lot of UInt64 values. These are expressed as hex, right? If we look at the binary representation of one of them, can we return an array such that, if we apply the 'or' operation to all of its elements, we arrive back at the original UInt64?
For example, let's say
x = 1011
Then, I am looking for an efficient way to arrive at,
f(x) = {1000, 0010, 0001}
Where these numbers are in hex, rather than binary. Sorry, I am new to hex too.
I have a method already, but it feels inefficient. I first convert to a binary string, and loop over that string to find each '1'. I then add the corresponding binary number to an array.
Any thoughts?
Here is a better example. I have a hexadecimal number x, in the form of,
UInt64 x = 0x00000000000000FF
Where the binary representation of x is
0000000000000000000000000000000000000000000000000000000011111111
I wish to find an array consisting of hexadecimal numbers (UInt64??) such that the or operation applied to all members of that array would result in x again. For example,
f(x) = {0x0000000000000080, // 00000....10000000
        0x0000000000000040, // 00000....01000000
        0x0000000000000020, // 00000....00100000
        0x0000000000000010, // 00000....00010000
        0x0000000000000008, // 00000....00001000
        0x0000000000000004, // 00000....00000100
        0x0000000000000002, // 00000....00000010
        0x0000000000000001  // 00000....00000001
       }
I think the question comes down to finding an efficient way to find the index of the '1's in the binary expansion...
public static UInt64[] findOccupiedSquares(UInt64 pieces){
    UInt64[] toReturn = new UInt64[BitOperations.PopCount(pieces)];
    if (BitOperations.PopCount(pieces) == 1){
        toReturn[0] = pieces;
    }
    else{
        int i = 0;
        int index = 0;
        while (pieces != 0){
            i += 1;
            pieces = pieces >> 1;
            if (BitOperations.TrailingZeroCount(pieces) == 0){ // One
                int rank = (int)(i / 8);
                int file = i - (rank * 8);
                toReturn[index] = LUTable.MaskRank[rank] & LUTable.MaskFile[file];
                index += 1;
            }
        }
    }
    return toReturn;
}
Your question still confuses me, as you seem to be mixing the concepts of numbers and number representations: there is an integer, and then there is a hexadecimal representation of that integer.
You can very simply break any integer into its base-2 components.
ulong input = 16094009876; // example input
ulong x = 1;
var bits = new List<ulong>();
do
{
    if ((input & x) == x)
    {
        bits.Add(x);
    }
    x <<= 1;
} while (x != 0);
bits is now a list of integers, each of which represents one of the binary 1 bits within the input. This can be verified by adding (or ORing, which is the same thing here since the bits are disjoint) all the values. So this expression is true:
bits.Aggregate((a, b) => a | b) == input
If you want hexadecimal representations of those integers in the list, you can simply use ToString():
var hexBits = bits.Select(b => b.ToString("X16"));
If you want the binary representations of the integers, you can use Convert:
var binaryBits = bits.Select(b => Convert.ToString((long)b, 2).PadLeft(64, '0'));
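A related trick, sketched here in Java rather than C# and using a signed long in place of UInt64 (so this is an illustration of the idea, not the code above): value & -value isolates the lowest set bit, which lets you loop once per 1 bit instead of once per bit position.

    import java.util.ArrayList;
    import java.util.List;

    public class BitDecomposition {
        // Splits a 64-bit value into its set bits, e.g. 0xFFL -> [0x1, 0x2, ..., 0x80].
        static List<Long> setBits(long value) {
            List<Long> bits = new ArrayList<>();
            while (value != 0) {
                long lowest = value & -value; // isolates the lowest set bit
                bits.add(lowest);
                value ^= lowest;              // clear that bit and continue
            }
            return bits;
        }

        public static void main(String[] args) {
            long x = 0xFFL;
            // ORing the pieces back together reproduces the original value
            long recombined = setBits(x).stream().reduce(0L, (a, b) -> a | b);
            System.out.println(recombined == x); // true
        }
    }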

Number of substrings with count of each character as k

Source: https://www.geeksforgeeks.org/number-substrings-count-character-k/
Given a string and an integer k, find the number of substrings in which every distinct character occurs exactly k times.
I'm looking for an O(n) solution using the two-pointer/sliding-window approach. I'm able to find only the longest substrings satisfying this criterion, but not the substrings inside them.
For example: ababbaba, k = 2
My solution finds abab, ababba, etc., but not bb within ababba.
Can someone help me with the logic?
If you could edit your question to include your solution code, I'd be happy to help you with that.
For now I'm sharing my solution code (in Java), which runs in O(n^2). I've added enough comments to make the code self-explanatory. Nonetheless, the logic for the solution is as follows:
As you correctly pointed out, the problem can be solved using a sliding-window approach (with variable window size). The solution below considers all possible substrings, using nested for loops to set the start and end indices. For each substring, we check whether every element in the substring occurs exactly k times.
To avoid recalculating the counts for every substring, we maintain them in a map and keep putting new elements into the map as we increment the end index (slide the window). This ensures that our solution runs in O(n^2) rather than O(n^3).
To further improve efficiency, we only check the counts of individual elements if the substring's size matches our requirement. For n unique elements (keys in the map), the required substring size is n*k; if the substring's size doesn't match this value, there's no need to check how many times the individual characters occur.
import java.util.*;

/**
 * Java program to count the number of perfect substrings in a given string. A
 * substring is considered perfect if all the elements within the substring
 * occur exactly k number of times.
 *
 * @author Codextor
 */
public class PerfectSubstring {

    public static void main(String[] args) {
        String s = "aabbcc";
        int k = 2;
        System.out.println(perfectSubstring(s, k));

        s = "aabccc";
        k = 2;
        System.out.println(perfectSubstring(s, k));
    }

    /**
     * Returns the number of perfect substrings in the given string for the
     * specified value of k
     *
     * @param s The string to check for perfect substrings
     * @param k The number of times every element should occur within the substring
     * @return int The number of perfect substrings
     */
    public static int perfectSubstring(String s, int k) {
        int finalCount = 0;

        /*
         * Set the initial starting index for the subarray as 0, and increment it with
         * every iteration, till the last index of the string is reached.
         */
        for (int start = 0; start < s.length(); start++) {

            /*
             * Use a HashMap to store the count of every character in the subarray. We'll
             * start with an empty map every time we update the starting index.
             */
            Map<Character, Integer> frequencyMap = new HashMap<>();

            /*
             * Set the initial ending index for the subarray equal to the starting index
             * and increment it with every iteration, till the last index of the string
             * is reached.
             */
            for (int end = start; end < s.length(); end++) {

                /*
                 * Get the count of the character at the end index and increase it by 1. If
                 * the character is not present in the map, use 0 as the default count.
                 */
                char c = s.charAt(end);
                int count = frequencyMap.getOrDefault(c, 0);
                frequencyMap.put(c, count + 1);

                /*
                 * Check if the length of the subarray equals the desired length. The desired
                 * length is the number of unique characters we've seen so far (size of the
                 * map) multiplied by k (the number of times each character should occur). If
                 * the length is as per requirements, check if each element occurs exactly k
                 * times.
                 */
                if (frequencyMap.size() * k == (end - start + 1)) {
                    if (check(frequencyMap, k)) {
                        finalCount++;
                    }
                }
            }
        }
        return finalCount;
    }

    /**
     * Returns true if every value in the map is equal to k
     *
     * @param map The map whose values are to be checked
     * @param k   The required value for keys in the map
     * @return true if every value in the map is equal to k
     */
    public static boolean check(Map<Character, Integer> map, int k) {
        /*
         * Iterate through all the values (frequency of each character), comparing them
         * with k
         */
        for (Integer i : map.values()) {
            if (i != k) {
                return false;
            }
        }
        return true;
    }
}
For a given value k and a string s of length n with alphabet size D, we can solve the problem in O(n*D).
We need to find substrings in which each character occurs exactly k times.
Minimum size of such a substring = k (when only one character is present)
Maximum size of such a substring = k*D (when all characters are present)
So we check all substrings whose sizes are multiples of k in the range [k, k*D].
from collections import defaultdict

ALPHABET_SIZE = 26

def check(count, k):
    for v in count.values():
        if v != k and v != 0:
            return False
    return True

def countSubstrings(s, k):
    total = 0
    for d in range(1, ALPHABET_SIZE + 1):
        size = d * k
        count = defaultdict(int)
        l = r = 0
        while r < len(s):
            count[s[r]] += 1
            # if window size exceeds `size`, advance the left pointer and update count
            if r - l + 1 > size:
                count[s[l]] -= 1
                l += 1
            # if window size is adequate, check and update the total
            if r - l + 1 == size:
                total += check(count, k)
            r += 1
    return total

def main():
    string1 = "aabbcc"
    k1 = 2
    print(countSubstrings(string1, k1))  # output: 6
    string2 = "bacabcc"
    k2 = 2
    print(countSubstrings(string2, k2))  # output: 2

main()
I can't give you an O(n) solution, but I can give you an O(k*n) solution (better than the O(n^2) mentioned on the GeeksforGeeks page).
The idea is that there are at most 26 distinct characters, so we don't have to check every substring: only those of length <= 26*k (a length of 26*k is the case where all characters occur k times; if the length is greater, at least one character must occur at least k+1 times). Also, we only need to check substrings whose lengths are multiples of k.
So, for each starting position there are at most 26 window sizes to try, giving roughly 26*k*n characters to examine in total (assuming k << n). Thus the solution is O(k*n), albeit with a fairly high constant (26).
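A rough sketch of that idea in Java (an illustrative implementation, not the original answer's code): for every start index, extend the window one character at a time up to 26*k characters, and count the window whenever its length equals (distinct characters) * k and every count is exactly k.

    import java.util.HashMap;
    import java.util.Map;

    public class BoundedWindowCount {
        // Counts substrings in which every distinct character occurs exactly k times,
        // looking only at windows of length at most 26 * k.
        static int countPerfect(String s, int k) {
            int n = s.length();
            int total = 0;
            for (int start = 0; start < n; start++) {
                Map<Character, Integer> freq = new HashMap<>();
                int maxEnd = Math.min(n, start + 26 * k);
                for (int end = start; end < maxEnd; end++) {
                    freq.merge(s.charAt(end), 1, Integer::sum);
                    int len = end - start + 1;
                    // Only windows whose length equals (distinct characters) * k can be perfect.
                    if (len == freq.size() * k && freq.values().stream().allMatch(v -> v == k)) {
                        total++;
                    }
                }
            }
            return total;
        }

        public static void main(String[] args) {
            System.out.println(countPerfect("aabbcc", 2));  // 6
            System.out.println(countPerfect("bacabcc", 2)); // 2
        }
    }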
There are a few observations which will help optimize the solution:
Notice that you don't need to check substrings of every possible size; you only need to check substrings of size k, 2k, 3k, and so on up to ALPHABET_SIZE * k (remember the pigeonhole principle).
You can pre-calculate the frequency of each letter up to a given index, and later use those prefix counts to find the frequency of each letter between any two indexes in O(26).
Here is a C++ implementation of your problem in O(n * ALPHABET_SIZE^2).
I have added comments to help you understand the code quickly.
#include <bits/stdc++.h>
#define ll long long
#define ALPHABET_SIZE 26
using namespace std;

int main()
{
    ios_base::sync_with_stdio(false);
    cin.tie(NULL);
    cout.tie(NULL);

    int n, k;
    string s;
    cin >> n >> k;
    cin >> s;

    ll cnt = 0;

    /**
     * Stores the frequency of each letter
     **/
    vector<int> f(ALPHABET_SIZE, 0);

    /**
     * Stores the letter frequencies up to each index
     **/
    vector<vector<int>> v;
    v.push_back(f);

    /**
     * Scan the string from left to right, calculate the frequency of each letter
     * up to that index, and push that frequency array into v.
     * This loop runs n times.
     **/
    for (int i = 1; i <= n; i++)
    {
        f[s[i - 1] - 'a']++;
        v.push_back(f);
    }

    /**
     * This loop runs k times.
     **/
    for (int i = 0; i < k; i++)
    {
        /**
         * start is the lower bound (left end from where the window starts sliding)
         **/
        int start = i;

        /**
         * end is the upper bound (right end up to which the window slides)
         **/
        int end = (n / k) * k + i;
        if (end > n)
        {
            end -= k;
        }

        /**
         * This loop runs n/k times.
         **/
        for (int j = start; j <= end; j += k)
        {
            /**
             * This is an ALPHABET_SIZE * k size window.
             * It slides between start and end (inclusive).
             * This loop runs at most ALPHABET_SIZE times.
             **/
            for (int d = j + k; d <= min(ALPHABET_SIZE * k + j, end); d += k)
            {
                /**
                 * A flag to check whether the substring is valid or not
                 **/
                bool flag = true;

                /**
                 * Check if the frequencies at the two indexes differ only by zero or k
                 * (element-wise). Note that the frequencies at two different indexes
                 * can't be identical. This loop runs ALPHABET_SIZE times.
                 **/
                for (int idx = 0; idx < ALPHABET_SIZE; idx++)
                {
                    if (abs(v[j][idx] - v[d][idx]) != k && abs(v[j][idx] - v[d][idx]) != 0)
                    {
                        flag = false;
                    }
                }

                /**
                 * Increase the total count if the flag is true
                 **/
                if (flag)
                {
                    cnt++;
                }
            }
        }
    }

    /**
     * Print the total count
     **/
    cout << cnt;
    return 0;
}
If you want a simple solution and are not worried about time complexity, here it is.
import java.util.*;

public class PerfectSubstring {

    public static void main(String[] args) {
        String st = "aabbcc";
        int k = 2;
        System.out.println(perfect(st, k));
    }

    public static int perfect(String st, int k) {
        int count = 0;
        for (int i = 0; i < st.length(); i++) {
            for (int j = st.length(); j > i; j--) {
                String sub = st.substring(i, j);
                if (sub.length() >= k && check(sub, k)) {
                    System.out.println(sub);
                    count++;
                }
            }
        }
        return count;
    }

    public static boolean check(String st, int k) {
        Map<Character, Integer> map = new HashMap<>();
        for (int i = 0; i < st.length(); i++) {
            Character c = st.charAt(i);
            map.put(c, map.getOrDefault(c, 0) + 1);
        }
        return map.values().iterator().next() == k && new HashSet<>(map.values()).size() == 1;
    }
}
Here is an answer I did in C#, with O(n^2) complexity. I probably should have used a helper method to avoid having a large chunk of code, but it does the job. :)
namespace CodingChallenges
{
    using System;
    using System.Collections.Generic;

    class Solution
    {
        // Returns the number of perfect substrings of repeating character value 'num'.
        public static int PerfectSubstring(string str, int num)
        {
            int count = 0;
            for (int startOfSliceIndex = 0; startOfSliceIndex < str.Length - 1; startOfSliceIndex++)
            {
                for (int endofSliceIndex = startOfSliceIndex + 1; endofSliceIndex < str.Length; endofSliceIndex++)
                {
                    Dictionary<char, int> dict = new Dictionary<char, int>();
                    string slice = str.Substring(startOfSliceIndex, (endofSliceIndex - startOfSliceIndex) + 1);
                    for (int i = 0; i < slice.Length; i++)
                    {
                        if (dict.ContainsKey(slice[i]))
                        {
                            dict[slice[i]]++;
                        }
                        else
                        {
                            dict[slice[i]] = 1;
                        }
                    }
                    bool isPerfect = true;
                    foreach (var entry in dict)
                    {
                        if (entry.Value != num)
                        {
                            isPerfect = false;
                        }
                    }
                    if (isPerfect)
                    {
                        Console.WriteLine(slice);
                        count++;
                    }
                }
            }
            if (count == 1)
            {
                Console.WriteLine(count + " perfect substring.");
            }
            else
            {
                Console.WriteLine(count + " perfect substrings.");
            }
            return count;
        }

        public static void Main(string[] args)
        {
            string test = "1102021222";
            PerfectSubstring(test, 2);
        }
    }
}
This solution works in O(n*D).
I think it can be upgraded to O(n) by replacing the hashing of frozenset(head_sum_mod_k.items()) with a map implementation that updates its hash incrementally rather than recalculating it; this can be done because only one entry of head_sum_mod_k changes per iteration.
from copy import deepcopy

def countKPerfectSequences(string: str, k):
    print(f'Processing \'{string}\', k={k}')
    # init running sums
    head_sum = {char: 0 for char in string}
    tail_sum = deepcopy(head_sum)
    tail_position = 0
    # to match both 0 & k sequence lengths, test for mod k == 0
    head_sum_mod_k = deepcopy(head_sum)
    occurrence_positions = {frozenset(head_sum_mod_k.items()): [0]}
    # iterate over the string
    perfect_counter = 0
    for i, val in enumerate(string):
        head_sum[val] += 1
        head_sum_mod_k[val] = head_sum[val] % k
        while head_sum[val] - tail_sum[val] > k:
            # update tail to avoid longer-than-k sequences
            tail_sum[string[tail_position]] += 1
            tail_position += 1
        # print(f'str[{tail_position}..{i}]=\'{string[tail_position:i+1]}\', head_sum_mod_k={head_sum_mod_k} occurrence_positions={occurrence_positions}')
        # get matching sequences between head and tail
        indices = list(filter(lambda i: i >= tail_position, occurrence_positions.get(frozenset(head_sum_mod_k.items()), [])))
        # for start in indices:
        #     print(f'{string[start:i+1]}')
        perfect_counter += len(indices)
        # add head
        indices.append(i + 1)
        occurrence_positions[frozenset(head_sum_mod_k.items())] = indices
    return perfect_counter

Garbage in loop for no reason

I wrote a function that receives a string as a char array and converts it to an int:
int makeNumFromString(char Str[])
{
    int num = 0, len = 0;
    int p;
    len = strlen(Str);
    for (p = 0; p < len; p++)
    {
        num = num * 10 + (Str[p] - 48);
    }
    return num;
}
The problem is that no matter how long the input string is, when p gets to 10 the value of num turns to garbage!
I tried debugging and checking the function outside of the larger code, but with no success.
What could be the problem and how can I fix it?
Thanks
Perhaps your int can only store 32 bits, so the number cannot be higher than 2,147,483,647; any input longer than about 10 digits will overflow it.
Try using a type for num with more storage, like long long (on many platforms a plain long is still only 32 bits).
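To see the limit concretely, here is a small sketch in Java (where int is likewise 32 bits and long is 64); the same digit-by-digit accumulation silently wraps around once the value passes 2,147,483,647:

    public class OverflowDemo {
        // Same accumulation as the C function, once with int and once with long.
        static int parseToInt(String s) {
            int num = 0;
            for (int p = 0; p < s.length(); p++) {
                num = num * 10 + (s.charAt(p) - '0'); // silently overflows past 2,147,483,647
            }
            return num;
        }

        static long parseToLong(String s) {
            long num = 0;
            for (int p = 0; p < s.length(); p++) {
                num = num * 10 + (s.charAt(p) - '0'); // 64 bits: fine up to 19 digits
            }
            return num;
        }

        public static void main(String[] args) {
            String eleven = "12345678901"; // 11 digits
            System.out.println(parseToInt(eleven));  // wrapped-around "garbage"
            System.out.println(parseToLong(eleven)); // 12345678901
        }
    }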

Creating a substring with a varying number of chars

void findNthRoot(string x) {
    if (x.find("rt:") != string::npos) {
        unsigned position = x.find("rt:");
        double n = position - 1;
        double z = position + 3;
        string str1 = x.substr(n, 1);
        string str2 = x.substr(z, 1);
    }
}

int main() {
    findNthRoot("2 - 1 + 32rt:43 - 89");
}
As part of a calculator program, users can enter nrt:x to represent the nth root of a number x. The code I have written above works perfectly fine, but only if the user inputs a root with a single-digit n and a single-digit x. So my substrings str1 and str2 in this case would be 2 and 4 respectively. I need the substrings to take the entire number following rt: and the entire number before rt:, and I cannot figure out how to do this. I know this is because I have 1 in x.substr(n, 1), but since the user can enter any number they want, I need to somehow be able to account for that.
In the <cctype> header you have the function std::isdigit(), which tells you whether a character is a digit.
That way you can find the length of your substring with a simple loop:
void findNthRoot(string x)
{
    size_t position = x.find("rt:");
    if (position != string::npos)
    {
        size_t length = 1;
        size_t n = position - 1;
        while (n > 0 && std::isdigit(x[n - 1]))
        {
            --n;
            ++length;
        }
        string str1 = x.substr(n, length);

        length = 1;
        size_t z = position + 3;
        while (z < x.size() - 1 && std::isdigit(x[z + length]))
            ++length;
        string str2 = x.substr(z, length);
    }
}

int main()
{
    findNthRoot("2 - 1 + 32rt:43 - 89");
}
