How to convert a string to its binary representation in Game Maker? - game-maker-language

I found a script that converts binary to a string, but how can I input a string and get its binary representation? Say I put in "P": I want it to output "01010000" as a string.
I have this, but it is not what I am trying to do; it converts a string containing a binary number into the real value of that number:
///string_to_binary(string)
var str = argument0;
var output = "";
for (var i = 0; i < string_length(str); i++) {
    if (string_char_at(str, i + 1) == "0") {
        output += "0";
    }
    else {
        output += "1";
    }
}
return real(output);

Tip: search for the GML term (or its equivalent in another language); these questions have been answered many times. Also, please check your tag: it is the IDE tag, not the language tag.
I'm not familiar with GML myself, but a quick search turned up the following.
A semi-official script for exactly this: http://www.gmlscripts.com/script/bytes_to_bin
/// bytes_to_bin(str)
//
//  Returns a string of binary digits, 1 bit each.
//
//      str     raw bytes, 8 bits each, string
//
/// GMLscripts.com/license
{
    var str, bin, p, byte;
    str = argument0;
    bin = "";
    p = string_length(str);
    repeat (p) {
        byte = ord(string_char_at(str, p));
        repeat (8) {
            if (byte & 1) bin = "1" + bin else bin = "0" + bin;
            byte = byte >> 1;
        }
        p -= 1;
    }
    return bin;
}
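For example, bytes_to_bin("P") returns "01010000", which is exactly the output asked for.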
A GML forum thread with several more examples: https://www.reddit.com/r/gamemaker/comments/4opzhu/how_could_i_convert_a_string_to_binary/
And examples in other languages:
C++: Fastest way to Convert String to Binary?
#include <string>
#include <bitset>
#include <iostream>
using namespace std;

int main() {
    string myString = "Hello World";
    for (std::size_t i = 0; i < myString.size(); ++i) {
        cout << bitset<8>(myString.c_str()[i]) << endl;
    }
}
Java: Convert A String (like testing123) To Binary In Java
String s = "foo";
byte[] bytes = s.getBytes();
StringBuilder binary = new StringBuilder();
for (byte b : bytes) {
    int val = b;
    for (int i = 0; i < 8; i++) {
        binary.append((val & 128) == 0 ? 0 : 1);
        val <<= 1;
    }
    binary.append(' ');
}
System.out.println("'" + s + "' to binary: " + binary);
JS: How to convert text to binary code in JavaScript?
function convert() {
    var output = document.getElementById("ti2");
    var input = document.getElementById("ti1").value;
    output.value = "";
    for (var i = 0; i < input.length; i++) {
        // note: toString(2) does not zero-pad; call .padStart(8, "0")
        // on the result if you need fixed 8-digit groups
        output.value += input[i].charCodeAt(0).toString(2) + " ";
    }
}

I was looking around for a simple GML script to convert a decimal number to binary and return the bits in an array. I didn't find anything for my needs and to my liking, so I rolled my own. Short and sweet.
The first param is the decimal number (as a string or a real) and the second param is the bit length.
// dec_to_bin(num, len);
// argument0, decimal number (string)
// argument1, bit length (integer)
var num = real(argument0);
var len = argument1;
var bin = array_create(len, 0);
for (var i = len - 1; i >= 0; --i) {
    bin[i] = floor(num % 2); // current lowest bit
    num -= num / 2;          // same as num = num / 2; after k halvings,
                             // floor(num % 2) is bit k of the original number
}
return bin;
Usage:
dec_to_bin("48", 10);
Output:
{ { 0,0,0,0,1,1,0,0,0,0 }, }

I think the binary you mean is the one that computers use. If that's the case, just use ordinary binary and add a kind of identification.
Binary is actually simpler than most people think. Every digit is worth double the digit to its right (2^0, 2^1, 2^2, 2^3...), so we get:
1, 2, 4, 8, 16, 32, 64, 128, 256, 512...
Flip it and get:
...512, 256, 128, 64, 32, 16, 8, 4, 2, 1
Every digit is "activated" by a 1; add up all the activated place values and that's the value. For example, 1101 is 8 + 4 + 1 = 13.
OK, so binary is basically another number system; it's not a code or something. Then how are letters and other characters calculated?
They aren't ;-;
We just represent them by their order in the alphabet, so:
a=1
b=2
c=3
...
This means that "b" in binary would be "10", but "2" is also "10". That's where the computers' binary comes in: they just add an identification before the actual number, so:
letter_10 = b
number_10 = 2
signal_10 = "
Wait, if that's binary there can't be letters in it; instead, more 0s and 1s are used, so:
011_10 = b
0011_10 = 2
001_10 = "
Computers also can't know where a number starts and ends, so you always have to use the same amount of digits, which is 8. Now we get:
011_00010 = b
0011_0010 = 2
001_00010 = "
Then remove the "_" because, again, computers will only use 0s and 1s. And done!
So what I mean is: just use the code you had and add 00110000 (48) to the value for digits, or, if you want to translate the numbers to letters as I wanted, add 01100000 (96).
In the case where you have the letter and want the binary, first convert the letter to its number; just know that the letters don't start at 1: the capitalized letters start right after 64 and the non-capitalized right after 96.
ord("p")=112
112-96=16
16 in binary is 10000
10000 + 01100000 = 01110000
"p" in binary is 01110000
ord("P")=80
80-64=16
16 in binary is 10000
10000 + 01000000 = 01010000
"P" in binary is 01010000
That's just an explanation of what the code should do. Actually, I'm looking for a simple way to do the conversion myself, because I can't understand much of the code you showed.
(011)
1000 1111 10000 101 1001 1000 101 1100 10000 101 100
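For reference, the whole recipe above boils down to "take the character code and pad it to 8 binary digits". A minimal sketch in Java (my addition, not the poster's code):

// Character code -> fixed-width 8-bit binary string.
// Assumes the code fits in 0..255.
static String charToBits(char c) {
    String bits = Integer.toBinaryString(c);      // 'P' (code 80) -> "1010000"
    return "0".repeat(8 - bits.length()) + bits;  // left-pad -> "01010000"
}
// charToBits('P') -> "01010000", charToBits('p') -> "01110000"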

Related

Finding the binary composition of a binary number

Very new to C#, so this could be a silly question.
I am working with a lot of UInt64s. These are expressed as hex, right? Looking at a value's binary representation, can we return an array such that applying the 'or' operation to all its members gets us back to the original UInt64?
For example, let's say
x = 1011
Then I am looking for an efficient way to arrive at
f(x) = {1000, 0010, 0001}
where these numbers are stored as hex rather than binary. Sorry, I am new to hex too.
I have a method already, but it feels inefficient: I first convert to a binary string, then loop over that string to find each '1' and add the corresponding binary number to an array.
Any thoughts?
Here is a better example. I have a hexadecimal number x, in the form of,
UInt64 x = 0x00000000000000FF
Where the binary representation of x is
0000000000000000000000000000000000000000000000000000000011111111
I wish to find an array of hexadecimal numbers (UInt64??) such that the OR operation applied to all members of that array would result in x again. For example,
f(x) = {0x0000000000000080, // 00000....10000000
0x0000000000000040, // 00000....01000000
0x0000000000000020, // 00000....00100000
0x0000000000000010, // 00000....00010000
0x0000000000000008, // 00000....00001000
0x0000000000000004, // 00000....00000100
0x0000000000000002, // 00000....00000010
0x0000000000000001 // 00000....00000001
}
I think the question comes down to finding an efficient way to find the indices of the '1's in the binary expansion...
public static UInt64[] findOccupiedSquares(UInt64 pieces){
    UInt64[] toReturn = new UInt64[BitOperations.PopCount(pieces)];
    if (BitOperations.PopCount(pieces) == 1){
        toReturn[0] = pieces;
    }
    else{
        int i = 0;
        int index = 0;
        while (pieces != 0){
            i += 1;
            pieces = pieces >> 1;
            if (BitOperations.TrailingZeroCount(pieces) == 0){ // One
                int rank = (int)(i / 8);
                int file = i - (rank * 8);
                toReturn[index] = LUTable.MaskRank[rank] & LUTable.MaskFile[file];
                index += 1;
            }
        }
    }
    return toReturn;
}
Your question still confuses me, as you seem to be mixing the concepts of numbers and number representations: there is an integer, and then there is a hexadecimal representation of that integer.
You can very simply break any integer into its base-2 components.
ulong input = 16094009876; // example input
ulong x = 1;
var bits = new List<ulong>();
do
{
    if ((input & x) == x)
    {
        bits.Add(x);
    }
    x <<= 1;
} while (x != 0);
bits is now a list of integers, each representing one of the binary 1 bits within the input. This can be verified by adding (or ORing, same thing here) all the values. So this expression is true:
bits.Aggregate((a, b) => a | b) == input
If you want hexadecimal representations of those integers in the list, you can simply use ToString():
var hexBits = bits.Select(b => b.ToString("X16"));
If you want the binary representations of the integers, you can use Convert:
var binaryBits = bits.Select(b => Convert.ToString((long)b, 2).PadLeft(64, '0'));
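As a footnote to the loop above (my addition, not part of the answer): the same decomposition is often written with the classic lowest-set-bit trick x & -x. A Java sketch:

// Decompose x into its single-bit components by repeatedly
// isolating the lowest set bit with x & -x and clearing it.
static long[] bitComponents(long x) {
    long[] parts = new long[Long.bitCount(x)];
    for (int i = 0; x != 0; i++) {
        long lowest = x & -x; // only the lowest 1 bit of x survives
        parts[i] = lowest;
        x ^= lowest;          // clear that bit and continue
    }
    return parts;
}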

How to find the right way to decode a USSD command's result in C#?

I'm working with my GSM modem (Huawei E171) to send USSD commands.
To do this I use these commands first:
AT+CMGF=1
AT+CSCS=? ----> the result is "IRA", my modem's default
After that I sent these commands, got these results, and everything worked fine.
//*141*1# ----> to check my balance
+CUSD: 0,"457A591C96EB40B41A8D0692A6C36C17688A2E9FCB667AD87D4EEB4130103D0C8281E4753D0B1926E7CB2018881E06C140F2BADE5583819A4250D24D2FCBDD653A485AD787DD65504C068381A8EF76D80D2287E53A55AD5653D55431956D04",15
//*100# ----> this command gives me some options to charge my mobile
+CUSD: 1,"06280627062C06470020062706CC06310627064606330644000A0030002E062E063106CC062F00200634062706310698000A0031002E067E062706330627063106AF0627062F000A0032002E0622067E000A0033002E06450644062A000A0034002E06330627064506270646000A0035002E067E06270631063306CC06270646000A002300200028006E0065007800740029000A",72
I found some code to decode these results.
To decode the balance-check result I used:
string result141 = "457A591C96EB40B41A8D0692A6C36C17688A.......";
byte[] packedBytes = ConvertHexToBytes(result141);
byte[] unpackedBytes = UnpackBytes(packedBytes);
// sometimes this works and sometimes it doesn't; I never understood why
string o = Encoding.Default.GetString(unpackedBytes);
My functions are:
public static byte[] ConvertHexToBytes(string hexString)
{
    if (hexString.Length % 2 != 0)
        return null;

    int len = hexString.Length / 2;
    byte[] array = new byte[len];
    for (int i = 0; i < array.Length; i++)
    {
        string tmp = hexString.Substring(i * 2, 2);
        array[i] = byte.Parse(tmp, System.Globalization.NumberStyles.HexNumber);
    }
    return array;
}

// Not shown in the original post; the usual mask table for this unpacker
// (the top 1..7 bits of a byte) looks like this:
static readonly int[] _decodeMask = { 128, 192, 224, 240, 248, 252, 254 };

public static byte[] UnpackBytes(byte[] packedBytes)
{
    byte[] shiftedBytes = new byte[(packedBytes.Length * 8) / 7];
    int shiftOffset = 0;
    int shiftIndex = 0;
    // Shift the packed bytes to the left according
    // to the offset (position of the byte)
    foreach (byte b in packedBytes)
    {
        if (shiftOffset == 7)
        {
            shiftedBytes[shiftIndex] = 0;
            shiftOffset = 0;
            shiftIndex++;
        }
        shiftedBytes[shiftIndex] = (byte)((b << shiftOffset) & 127);
        shiftOffset++;
        shiftIndex++;
    }

    int moveOffset = 0;
    int moveIndex = 0;
    int unpackIndex = 1;
    byte[] unpackedBytes = new byte[shiftedBytes.Length];
    if (shiftedBytes.Length > 0)
    {
        unpackedBytes[unpackIndex - 1] = shiftedBytes[unpackIndex - 1];
    }
    // Move the bits to the appropriate byte (unpack the bits)
    foreach (byte b in packedBytes)
    {
        if (unpackIndex != shiftedBytes.Length)
        {
            if (moveOffset == 7)
            {
                moveOffset = 0;
                unpackIndex++;
                unpackedBytes[unpackIndex - 1] = shiftedBytes[unpackIndex - 1];
            }
            if (unpackIndex != shiftedBytes.Length)
            {
                // Extract the bits to be moved
                int extractedBitsByte = (packedBytes[moveIndex] & _decodeMask[moveOffset]);
                // Shift the extracted bits to the proper offset
                extractedBitsByte = (extractedBitsByte >> (7 - moveOffset));
                // Move the bits to the appropriate byte (unpack the bits)
                int movedBitsByte = (extractedBitsByte | shiftedBytes[unpackIndex]);
                unpackedBytes[unpackIndex] = (byte)movedBitsByte;
                moveOffset++;
                unpackIndex++;
                moveIndex++;
            }
        }
    }
    // Remove the padding if it exists
    if (unpackedBytes[unpackedBytes.Length - 1] == 0)
    {
        byte[] finalResultBytes = new byte[unpackedBytes.Length - 1];
        Array.Copy(unpackedBytes, 0, finalResultBytes, 0, finalResultBytes.Length);
        return finalResultBytes;
    }
    return unpackedBytes;
}
But to decode the second result I used:
string strHex = "06280627062C06470020062706CC06310......";
strHex = strHex.Replace(" ", "");
int nNumberChars = strHex.Length / 2;
byte[] aBytes = new byte[nNumberChars];
using (var sr = new StringReader(strHex))
{
    for (int i = 0; i < nNumberChars; i++)
        aBytes[i] = Convert.ToByte(
            new String(new char[2] { (char)sr.Read(), (char)sr.Read() }), 16);
}
string decodedmessage = Encoding.BigEndianUnicode.GetString(aBytes, 0, aBytes.Length);
Both of them work correctly, but why do I need a different decoding method for each result?
How can I tell which of these two decodings I should use?
USSD +CUSD unsolicited responses are formatted as follows:
+CUSD: <m>[, <str_urc>[, <dcs>]]
where "m" is the type of action required, "str_urc" is the response string, and "dcs" is the response string's encoding.
This quote is from a Siemens Cinterion MC55i manual but applies generally to other modem manufacturers:
If dcs indicates that the GSM 03.38 default alphabet is used, the TA converts the GSM alphabet into the current TE character set according to the rules of GSM 07.05 Annex A. Otherwise, in case of an invalid or omitted dcs, conversion of str_urc is not possible.
USSD responses can be sent in 7-bit encoded format or in UCS2, hence in your two example responses you see a DCS of either 15 or 72:
GSM 03.38 Cell Broadcast Data Coding Scheme in integer format (default 15). In case of an invalid or omitted dcs from the network side (MT), it will not be given out.
So if you get a DCS of 15, the payload is 7-bit encoded, and if it is 72, it is UCS2. From this you can easily select either your first or your second decoding routine.
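As an illustration of that selection, here is a sketch in Java rather than the question's C#; hexToBytes and unpackGsm7 are stand-ins for the two routines in the question:

import java.nio.charset.StandardCharsets;

public class UssdDecode {
    // Pick the decoding routine from the +CUSD <dcs> value.
    static String decodeUssd(String hexPayload, int dcs) {
        byte[] bytes = hexToBytes(hexPayload);
        if (dcs == 72) {
            // UCS2: two bytes per character, big-endian
            return new String(bytes, StandardCharsets.UTF_16BE);
        }
        // default alphabet (e.g. dcs == 15): GSM 7-bit packed septets
        return unpackGsm7(bytes);
    }

    static byte[] hexToBytes(String hex) {
        byte[] out = new byte[hex.length() / 2];
        for (int i = 0; i < out.length; i++)
            out[i] = (byte) Integer.parseInt(hex.substring(2 * i, 2 * i + 2), 16);
        return out;
    }

    // Unpack GSM 7-bit septets. This maps septets straight to chars, which is
    // only right for the basic Latin subset; the full GSM 03.38 alphabet needs
    // a translation table on top, and a trailing padding septet may remain.
    static String unpackGsm7(byte[] packed) {
        StringBuilder out = new StringBuilder();
        int carry = 0, carryBits = 0;
        for (byte b : packed) {
            int cur = b & 0xFF;
            out.append((char) (((cur << carryBits) | carry) & 0x7F));
            carry = cur >> (7 - carryBits);
            if (++carryBits == 7) {
                out.append((char) carry);
                carry = 0;
                carryBits = 0;
            }
        }
        return out.toString();
    }
}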

Finding minimum moves required for making 2 strings equal

This is a question from an online coding challenge (now completed).
I just need some logic on how to approach it.
Problem Statement:
We have two strings A and B with the same superset of characters. We need to change these strings to obtain two equal strings. In each move we can perform one of the following operations:
1. swap two consecutive characters of a string
2. swap the first and the last characters of a string
A move can be performed on either string.
What is the minimum number of moves that we need in order to obtain two equal strings?
Input Format and Constraints:
The first and the second lines of the input contain the two strings A and B. It is guaranteed that the supersets of their characters are equal.
1 <= length(A) = length(B) <= 2000
All the input characters are between 'a' and 'z'
Output Format:
Print the minimum number of moves to the only line of the output
Sample input:
aab
baa
Sample output:
1
Explanation:
Swap the first and last character of the string aab to convert it to baa. The two strings are now equal.
EDIT: Here is my first try, but I'm getting the wrong output. Can someone point out what is wrong in my approach?
int minStringMoves(char* a, char* b) {
    int length, pos, i, j, moves = 0;
    char *ptr;
    length = strlen(a);
    for (i = 0; i < length; i++) {
        // Find the first occurrence of b[i] in a
        ptr = strchr(a, b[i]);
        pos = ptr - a;
        // If it's the last element, swap with the first
        if (i == 0 && pos == length - 1) {
            swap(&a[0], &a[length - 1]);
            moves++;
        }
        // Else swap from the current index down to pos
        else {
            for (j = pos; j > i; j--) {
                swap(&a[j], &a[j - 1]);
                moves++;
            }
        }
        // If equal, break
        if (strcmp(a, b) == 0)
            break;
    }
    return moves;
}
Take a look at this example:
aaaaaaaaab
abaaaaaaaa
Your solution: 8
aaaaaaaaab -> aaaaaaaaba -> aaaaaaabaa -> aaaaaabaaa -> aaaaabaaaa ->
aaaabaaaaa -> aaabaaaaaa -> aabaaaaaaa -> abaaaaaaaa
Proper solution: 2
aaaaaaaaab -> baaaaaaaaa -> abaaaaaaaa
You should check whether swapping in the other direction would give a better result.
But sometimes you will also ruin the earlier part of the string, e.g.:
caaaaaaaab
cbaaaaaaaa
caaaaaaaab -> baaaaaaaac -> abaaaaaaac
You need another swap here to put the 'c' back in the first place.
The proper algorithm is probably even more complex, but you can see now what's wrong in your solution.
The A* algorithm might work for this problem.
The initial node will be the original string.
The goal node will be the target string.
Each child of a node will be all possible transformations of that string.
The current cost g(x) is simply the number of transformations thus far.
The heuristic h(x) is half the number of characters in the wrong position.
Since h(x) is admissible (because a single transformation can't put more than 2 characters in their correct positions), the path to the target string will give the least number of transformations possible.
However, an elementary implementation will likely be too slow. Calculating all possible transformations of a string would be rather expensive.
Note that there's a lot of similarity between a node's siblings (its parent's children) and its children. So you may be able to just calculate all transformations of the original string and, from there, simply copy and recalculate data involving changed characters.
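For concreteness, here is a sketch of those two pieces, the heuristic and the neighbor generation (in Java; the names are mine, not from the answer):

import java.util.ArrayList;
import java.util.List;

// h(x): half the number of characters in the wrong position, rounded up.
// Admissible, because a single move fixes at most two positions.
static int heuristic(String s, String target) {
    int wrong = 0;
    for (int i = 0; i < s.length(); i++)
        if (s.charAt(i) != target.charAt(i)) wrong++;
    return (wrong + 1) / 2;
}

// All strings reachable in one move: every adjacent swap plus the ends swap.
static List<String> neighbors(String s) {
    List<String> result = new ArrayList<>();
    char[] c = s.toCharArray();
    for (int i = 0; i + 1 < c.length; i++) {
        swapChars(c, i, i + 1);
        result.add(new String(c));
        swapChars(c, i, i + 1); // undo before the next swap
    }
    swapChars(c, 0, c.length - 1);
    result.add(new String(c));
    return result;
}

static void swapChars(char[] c, int i, int j) {
    char t = c[i]; c[i] = c[j]; c[j] = t;
}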
You can use dynamic programming. Go over all swap possibilities while storing the intermediate results along with the minimal number of steps that took you there. In effect, you calculate, for every string reachable by applying the given rules some number of times, the minimum number of steps needed to reach it. Once that is all calculated, you can print the minimum number of steps needed to reach the target string. Here's sample code in JavaScript, and its usage for the "aab" and "baa" example:
function swap(str, i, j) {
    var s = str.split("");
    s[i] = str[j];
    s[j] = str[i];
    return s.join("");
}
function calcMinimumSteps(current, stepsCount) {
    // Already reached this string at least as cheaply: stop, since
    // re-expanding it cannot improve anything.
    if (typeof memory[current] !== "undefined" && memory[current] <= stepsCount) {
        return;
    }
    // New string, or a cheaper route to a known one: record it and
    // re-expand so the improvement propagates to all reachable strings.
    memory[current] = stepsCount;
    calcMinimumSteps(swap(current, 0, current.length - 1), stepsCount + 1);
    for (var i = 0; i < current.length - 1; ++i) {
        calcMinimumSteps(swap(current, i, i + 1), stepsCount + 1);
    }
}
var memory = {};
calcMinimumSteps("aab", 0);
alert("Minimum steps count: " + memory["baa"]);
Here is the Ruby logic for this problem; copy this code into an .rb file and execute it.
str1 = "education" # sample first string
str2 = "cnatdeiou" # sample second string
moves_count = 0
no_swap = 0
count = str1.length - 1

def ends_swap(str1, str2)
  str2 = swap_strings(str2, str2.length - 1, 0)
  return str2
end

def swap_strings(str2, cp, np)
  current_string = str2[cp]
  new_string = str2[np]
  str2[cp] = new_string
  str2[np] = current_string
  return str2
end

def consecutive_swap(str, current_position, target_position)
  counter = 0
  diff = current_position > target_position ? -1 : 1
  while current_position != target_position
    new_position = current_position + diff
    str = swap_strings(str, current_position, new_position)
    # p "-------"
    # p "CP: #{current_position} NP: #{new_position} TP: #{target_position} String: #{str}"
    current_position += diff
    counter += 1
  end
  return counter, str
end

while str1 != str2 && count != 0
  counter = 1
  if str1[-1] == str2[0]
    # p "cross match"
    str2 = ends_swap(str1, str2)
  else
    # p "No match for #{str2}-- Count: #{count}, TC: #{str1[count]}, CP: #{str2.index(str1[count])}"
    str = str2[0..count]
    cp = str.rindex(str1[count])
    tp = count
    counter, str2 = consecutive_swap(str2, cp, tp)
    count -= 1
  end
  moves_count += counter
  # p "Step: #{moves_count}"
  # p str2
end

p "Total moves: #{moves_count}"
Please feel free to suggest any improvements in this code.
Try this code. Hope this will help you.
import java.util.Scanner;

public class TwoStringIdentical {
    static int lcs(String str1, String str2, int m, int n) {
        int[][] L = new int[m + 1][n + 1];
        int i, j;
        for (i = 0; i <= m; i++) {
            for (j = 0; j <= n; j++) {
                if (i == 0 || j == 0)
                    L[i][j] = 0;
                else if (str1.charAt(i - 1) == str2.charAt(j - 1))
                    L[i][j] = L[i - 1][j - 1] + 1;
                else
                    L[i][j] = Math.max(L[i - 1][j], L[i][j - 1]);
            }
        }
        return L[m][n];
    }

    static void printMinTransformation(String str1, String str2) {
        int m = str1.length();
        int n = str2.length();
        int len = lcs(str1, str2, m, n);
        // characters outside the LCS must be removed from one
        // string and inserted into the other
        System.out.println((m - len) + (n - len));
    }

    public static void main(String[] args) {
        Scanner scan = new Scanner(System.in);
        String str1 = scan.nextLine();
        String str2 = scan.nextLine();
        printMinTransformation(str1, str2);
    }
}

How do I convert a string version of a number in an arbitrary base to an integer?

How to convert a string to an integer? For example:
"5328764" to an int, base 10
"AB3F3A" to an int, base 16
Any code will be helpful.
Assuming arbitrary base (not 16, 10, 8, 2):
In C (C++), use strtol
return strtol("AB3F3A", NULL, 16);
In Javascript, use parseInt.
return parseInt("AB3F3A", 16);
In Python, use int(string, base).
return int("AB3F3A", 16)
In Java, use Integer.parseInt (thanks Michael.)
return Integer.parseInt("AB3F3A", 16);
In PHP, use base_convert.
return intval(base_convert('AB3F3A', 16, 10));
In Ruby, use to_i
"AB3F3A".to_i(16)
In C#, write one yourself.
In C#, I think it is Convert.ToInt64(value, base), and the base must be 2, 8, 10, or 16.
9999 is really 9000 + 900 + 90 + 9.
So start at the right-hand side of the string and pick off the digits one at a time.
Each digit character has an ASCII code, which can be translated to its numeric value and multiplied by the appropriate power of the base.
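A sketch of that idea in Java (hand-rolled base-10 parsing, right to left):

// parseDecimal("9999") == 9*1 + 9*10 + 9*100 + 9*1000 == 9999
static int parseDecimal(String s) {
    int value = 0, place = 1;
    for (int i = s.length() - 1; i >= 0; i--) {
        value += (s.charAt(i) - '0') * place; // '0'..'9' are contiguous in ASCII
        place *= 10;
    }
    return value;
}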
Two functions in Java, in both directions. The "code" parameter represents the numeral system: "01" for base 2, "0123456789" for base 10, "0123456789abcdef" for hexadecimal, and so on. Note that every symbol in the code must be unique (the originally posted long alphabet contained '#' twice, which makes decoding ambiguous; an '@' is used for the second occurrence here):
public String convert(long num, String code) {
    final int base = code.length();
    String text = "";
    while (num > 0) { // note: returns "" for num == 0
        text = code.charAt((int) (num % base)) + text;
        num /= base;
    }
    return text;
}

public long toLong(String text, String code) {
    final long base = code.length();
    long num = 0;
    long pow = 1;
    int len = text.length();
    for (int i = 0; i < len; i++) {
        num += code.indexOf(text.charAt(len - i - 1)) * pow;
        pow *= base;
    }
    return num;
}

System.out.println(convert(9223372036854775807L, "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"));
System.out.println(convert(9223372036854775807L, "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ#=-+*/^%$@&()!?.,:;[]"));
System.out.println(toLong("Ns8T$87=uh", "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ#=-+*/^%$@&()!?.,:;[]"));
in your example:
toLong("5328764", "0123456789") = 5328764
toLong("AB3F3A", "0123456789ABCDEF") = 11222842

String Format Issue

I've got the following method:
public static string ReturnFormat(string input, int maxLength, int decimalPrecision, char formatChar)
{
    string[] format = new string[2];
    string[] inputs = new string[2];
    inputs = input.Split(CultureInfo.CurrentCulture.NumberFormat.NumberDecimalSeparator[0]);
    if (input.Length > maxLength)
    {
        int offset = 0;
        int counter = 0;
        if (inputs[0].Length > maxLength - (1 + decimalPrecision))
        {
            offset = maxLength - (1 + decimalPrecision);
        }
        else
            offset = inputs[0].Length;
        for (int i = 0; i < offset; i++)
        {
            format[0] += formatChar;
            if (counter < decimalPrecision)
            {
                format[1] += '0';
                counter++;
            }
        }
        System.Windows.Forms.MessageBox.Show("{0:" + format[0] + "." + format[1] + "}");
        return String.Format(CultureInfo.CurrentCulture, "{0:" + format[0] + "." + format[1] + "}", input);
    }
    else
        return input;
}
which I'm using as, say:
ReturnFormat("12.3456789011243", 10, 2, '#') // format is {0:##.00} // output 12.3456789011243
ReturnFormat("12345678901.1243", 10, 2, '#') // format is {0:#######.00} // output 12345678901.1243
Now my issue is that the input string is not formatted well, even though the format string appears to be OK.
Any ideas of what I'm doing wrong?
Your input is a String, not a Double, so it gets formatted like a string: the formatting does not know about decimal places in that case.
You could use Double.Parse() to transform the string into a Double value, but take care to use the right culture.
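The culture pitfall is easy to demonstrate. Here is the analogous situation in Java (only an illustration; the question itself is C#):

import java.text.NumberFormat;
import java.util.Locale;

// "12,345" means 12.345 in a German locale but 12345 in a US locale.
static void cultureDemo() throws java.text.ParseException {
    double de = NumberFormat.getInstance(Locale.GERMANY).parse("12,345").doubleValue(); // 12.345
    double us = NumberFormat.getInstance(Locale.US).parse("12,345").doubleValue();      // 12345.0
    System.out.println(de + " vs " + us);
}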
Another thing: is there a specific reason for not using the more natural format {0:0.00} in both cases? If you really mean a placeholder for digits then # is OK; otherwise 0 is best.
Tested solution (beware: it truncates rather than rounds). I needed some time to understand what was actually wanted:
public static string ReturnFormat(string input, int maxLength, int decimalPrecision)
{
    if (input.Length <= maxLength)
        return input;

    char separator = CultureInfo.CurrentCulture.NumberFormat.NumberDecimalSeparator[0];
    string[] inputs = input.Split(separator);

    // NB: truncating rather than rounding
    if (inputs[1].Length > decimalPrecision)
        inputs[1] = inputs[1].Substring(0, decimalPrecision);

    int digits = (maxLength - decimalPrecision - 1);

    // NB: truncating rather than rounding, adding ~ to signal the
    // presence of missing significant digits
    if (inputs[0].Length > digits)
        inputs[0] = inputs[0].Substring(0, digits - 1) + "~";

    return inputs[0] + separator + inputs[1];
}
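For example, with this version, ReturnFormat("12.3456789011243", 10, 2) yields "12.34" and ReturnFormat("12345678901.1243", 10, 2) yields "123456~.12" (assuming a culture whose decimal separator is ".").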
