I made an encryption tool for encoding text and .txt files. The tool gets the 8-bit binary code of each input character and then encrypts it with other functions. For example, the a.txt file contains:
xxyz
const fs = require('fs');

// read one character at a time so each 'data' chunk is a single character
const reader = fs.createReadStream('./a.txt', {
    flags: 'r',       // the option name is "flags", not "flag"
    encoding: 'utf8',
    highWaterMark: 1
});

reader.on('data', function (chunk) {
    console.log( chunk + " --> " + toBin(chunk) );
    //writer.write( toStr(reverse(toBin(chunk))) );
});
////////////// TRANSLATORS //////////////
function toBin(text) {
    // convert each character to its char code in base 2,
    // then left-pad every code to a full 8 bits
    return (
        Array
            .from(text)
            .reduce((acc, char) => acc.concat(char.charCodeAt(0).toString(2)), [])
            .map(bin => '0'.repeat(8 - bin.length) + bin)
            .join('')
    );
}
function toStr(bin) {
    // convert one 8-bit binary string back into a single character
    return String.fromCharCode(parseInt(bin, 2));
}
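The two translators round-trip a single character cleanly:

console.log( toBin('x') );        // "01111000"
console.log( toStr(toBin('x')) ); // "x"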
OUTPUT:
x --> 01111000
x --> 01111000
y --> 01111001
z --> 01111010
--> 00001010
The last one is the EOL character, I think.
To encrypt this I basically use my functions like:
function swap(bin) {
    // exchange the two 4-bit halves of the 8-bit string
    return bin.slice(4, 8) + bin.slice(0, 4);
}

function reverse(bin) {
    // reverse the bit order
    return bin.split("").reverse().join("");
}
These functions work well for txt files: I can encrypt and decrypt without problems.
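Both helpers are their own inverse, which is why decryption works; applying either twice restores the input:

console.log( swap('01111000') );             // "10000111"
console.log( swap(swap('01111000')) );       // "01111000", back to the original
console.log( reverse(reverse('01111000')) ); // "01111000"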
When I try the same approach on a .png file, for example, there is a problem:
console.log( chunk + " --> " + toStr(toBin(chunk)) )
writer.write( toStr(toBin(chunk)) );
/* OUT
® --> ®
B --> B
` --> `
-->
*/
That looks fine in the console output, but when I try to open the file it created (which is not empty), the viewer says: "Couldn't load image, unrecognized image file format."
When I open both files in a text editor, the original PNG and the new file (string to binary, binary to string, written out again) are clearly not the same. I think I shouldn't read a PNG the way I read a text file. So how should I read it?
NOTE: I tried other toBin functions, but this is the most correct one: some threw a range error when reading files bigger than my txt files, some produced 7-bit codes, and some occasionally produced 000 or undefined.
Thanks.
I think it is about how you write the image file. I hope this helps.
You need to write it as a buffer or with binary encoding:
// in this case the data must already be binary
fs.writeFile("file.png", data, "binary", cb);

// otherwise you can write with a stream
const writer = fs.createWriteStream("file.png", {
    encoding: "binary"
});
writer.write(chunk);
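For completeness, here is a minimal sketch of the Buffer-based approach (the file names are placeholders, and the byte transform just mirrors the reverse() idea from the question). If you omit the encoding option entirely, each chunk arrives as a raw Buffer, so no UTF-8 decoding can corrupt the bytes:

const fs = require('fs');

const reader = fs.createReadStream('./in.png');   // no encoding: chunks are Buffers
const writer = fs.createWriteStream('./out.png');

reader.on('data', function (chunk) {
    const out = Buffer.alloc(chunk.length);
    for (let i = 0; i < chunk.length; i++) {
        // operate on raw byte values instead of decoded characters
        const bin = chunk[i].toString(2).padStart(8, '0');
        out[i] = parseInt(bin.split('').reverse().join(''), 2);
    }
    writer.write(out);
});
reader.on('end', function () { writer.end(); });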
So basically I want to convert a normal UTxO hash like:
550665309dee7e2f64d13f999297f001763f65fe50bb05524afc0990c7dce0c3
to a TransactionUnspentOutput as a hex encoded bytes string like:
828258205537396d59c1b0546bb9cec5cb6b930238af2d8998d24ca1d47e89a3dd400a8701825839016af9a0d2c9b5bce8999bc6430eb48f424399b73f0ecc143f40e8cac89b130cc3198a8594862fe25df331cb79447304dcd49712c86834fdf1821a00150bd0a1581cb0df0ee7dbb96b18b682a1091514f250eb0ec1122e6c4bf3b4d45123a14b436f6e766963743033363701
This is how it is done with the Nami wallet implementation:
cardano.getUtxos(amount?: Value, paginate?: {page: number, limit: number}) : [TransactionUnspentOutput]
I tried to pass a UTxO into the lucid utxoToCore() function:
export const utxoToCore = (utxo: UTxO): Core.TransactionUnspentOutput => {
  const output = C.TransactionOutput.new(
    C.Address.from_bech32(utxo.address),
    assetsToValue(utxo.assets)
  );
  if (utxo.datumHash) {
    output.set_datum(
      C.Datum.new_data_hash(C.DataHash.from_bytes(fromHex(utxo.datumHash)))
    );
  }
  return C.TransactionUnspentOutput.new(
    C.TransactionInput.new(
      C.TransactionHash.from_bytes(fromHex(utxo.txHash)),
      C.BigNum.from_str(utxo.outputIndex.toString())
    ),
    output
  );
};
However, the only output I get is:
TransactionUnspentOutput { ptr: 1247376 }
How do I get the unpacked (?) value, or at least the hex format I want, out of the TransactionUnspentOutput?
It looks like you are calling into an external (WebAssembly) library. Usually, in such cases, it returns a memory address (a pointer into the WASM instance), just like WASM calls through a browser client. One way is to deserialize the object inside your Node.js code.
Maybe you can use a utility function to get it as a string.
E.g. https://github.com/Emurgo/cardano-serialization-lib/blob/master/doc/getting-started/metadata.md#json-conversion
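For example, a hedged sketch: assuming the returned object exposes the to_bytes() serializer that cardano-serialization-lib types generally provide, you can hex-encode the CBOR bytes yourself:

const coreUtxo = utxoToCore(utxo);              // the object from the question
const bytes = coreUtxo.to_bytes();              // Uint8Array with the CBOR encoding
const hex = Buffer.from(bytes).toString('hex'); // the hex-encoded bytes string
console.log(hex);                               // e.g. "828258205537..."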
Hope this helps
I am encoding text to Base64 using NodeJS, and while reading the data back I decode it from Base64. Now I need to turn the data into JSON so I can fill the relevant fields, but it is not converting to JSON.
Here is my code:
fs.readFile('./setting.txt', 'utf8', function(err, data) {
if (err) throw err;
var encodedData = base64.encode(data);
var decoded = base64.decode(encodedData).replace(/\n/g, '').replace(/\s/g, " ");
return res.status(200).send(decoded);
});
In setting.txt I have the following text:
LENGTH=1076
CRC16=28653
OFFSET=37
MEASUREMENT_SAMPLING_RATE=4
MEASUREMENT_RANGE=1
MEASUREMENT_TRACE_LENGTH=16384
MEASUREMENT_PRETRIGGER_LENGTH=0
MEASUREMENT_UNIT=2
MEASUREMENT_OFFSET_REMOVER=1
This decodes the result properly, but when I use JSON.parse(JSON.stringify(decoded)) it's not converting to JSON.
Can someone help me with this?
Try the snippet below. Note that Buffer.from replaces the deprecated new Buffer, and the input encoding must describe what the string already is, not what you want back:

let base64Json = Buffer.from(JSON.stringify({}), 'utf8').toString('base64');
let json = Buffer.from(base64Json, 'base64').toString('utf8');
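The round trip only gives you JSON back if the encoded text was JSON in the first place; a small illustration:

const payload = { LENGTH: 1076, CRC16: 28653 }; // values borrowed from setting.txt
const b64 = Buffer.from(JSON.stringify(payload), 'utf8').toString('base64');
const back = JSON.parse(Buffer.from(b64, 'base64').toString('utf8'));
console.log(back.LENGTH); // 1076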
What does base-64 encoding/decoding have to do with mapping a list of tuples (key/value pairs) like this:
LENGTH=1076
CRC16=28653
OFFSET=37
MEASUREMENT_SAMPLING_RATE=4
MEASUREMENT_RANGE=1
MEASUREMENT_TRACE_LENGTH=16384
MEASUREMENT_PRETRIGGER_LENGTH=0
MEASUREMENT_UNIT=2
MEASUREMENT_OFFSET_REMOVER=1
into JSON?
If you want to "turn it (the above) into JSON", you need to:
Decide on what its JSON representation should be, then
Parse it into its component bits, convert that into an appropriate data struct, and then
Use JSON.stringify() to convert it to JSON.
For instance:
function jsonify( document ) {
    const tuples = document
        .split( /\r\n|\r|\n/ )
        .map( x => x.split( '=', 2 ) )
        .map( ([k, v]) => {
            const n = Number(v);                    // parse the value, not itself
            return [ k, Number.isNaN(n) ? v : n ];  // n === NaN is always false in JS
        });
    const obj = Object.fromEntries(tuples);
    const json = JSON.stringify(obj);
    return json;
}
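For the setting.txt contents above this produces, for instance:

console.log( jsonify('LENGTH=1076\nCRC16=28653') );
// {"LENGTH":1076,"CRC16":28653}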
I have a JAR file with following structure:
com
-- pack1
-- A.class
-- pack2
-- AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA.class
When I try to read, extract, or rename pack2/AA...AA.class (which has a 262-byte-long filename), both Linux and Windows say the filename is too long. Renaming it inside the JAR file doesn't work either.
Any ideas how to solve this issue and make the long class file readable?
This page lists the usual limits of file systems: http://en.wikipedia.org/wiki/Comparison_of_file_systems
As you can see in the "Limits" section, almost no file system allows more than 255 characters.
Your only chance is to write a program that extracts the files and shortens file names which are too long. Java at least should be able to open the archive (try jar -tvf to list the contents; if that works, truncating should work as well).
java.util.jar can handle it:
import java.io.*;
import java.util.Enumeration;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;
import java.util.logging.Level;
import java.util.logging.Logger;

try {
    JarFile jarFile = new JarFile("/path/to/target.jar");
    Enumeration<JarEntry> jarEntries = jarFile.entries();
    int i = 0;
    while (jarEntries.hasMoreElements()) {
        JarEntry jarEntry = jarEntries.nextElement();
        System.out.println("processing entry: " + jarEntry.getName());
        InputStream jarFileInputStream = jarFile.getInputStream(jarEntry);
        // give the class a short temporary name so the OS filename limit is not hit
        OutputStream jarOutputStream = new FileOutputStream(new File("/tmp/test/test" + (i++) + ".class"));
        int b;
        // read() returns -1 at end of stream; available() is not a reliable EOF check
        while ((b = jarFileInputStream.read()) != -1) {
            jarOutputStream.write(b);
        }
        jarOutputStream.close();
        jarFileInputStream.close();
    }
} catch (IOException ex) {
    Logger.getLogger(JARExtractor.class.getName()).log(Level.SEVERE, null, ex);
}
The output will be test<n>.class for each class.
I'm using lazy and fs to read a file line-by-line (reference):
var Lazy = require('lazy');
var fs = require('fs');

var lazy = new Lazy(fs.createReadStream(filename, {
    encoding: 'utf8'
}));

lazy.lines.forEach(function(line) {
    line = String(line);
    // more stuff...
});
The weird thing is, when an empty line is read, String(line) results in the string "0". This is a problem because I can't find a way to distinguish whether the "0" comes from an empty line or from a line that actually contains the single character 0.
Why is it so? And how to solve this problem?
It's certainly a bug in the Lazy module, I can reproduce it.
The problem is this line:
finalBufferLength = buffers.reduce(function(left, right) { return (left.length||left) + (right.length||right); }, 0);
There's an implicit string conversion in there (so the number 0 is converted to the string "0"), because this fixes it:
finalBufferLength = buffers.reduce(function(left, right) { return Number(left.length||left) + Number(right.length||right); }, 0);
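You can reproduce the coercion in isolation (assuming an empty line leaves a single empty string in buffers):

var chunks = [''];  // what an empty line presumably leaves behind
var len = chunks.reduce(function(left, right) {
    return (left.length || left) + (right.length || right);
}, 0);
console.log(len, typeof len); // 0 string -- the stray "0" from the question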
It seems Lazy was only tested with DOS-style line-endings, because when I feed it a file with those, it works just fine. Because of that, as a quick fix, I think you could use this:
lazy
    .map(function(chunk) {
        // normalize Unix-style (LF) line endings to DOS-style (CRLF);
        // the original lookahead /(?!\r)\n/ matches every \n, so /\r?\n/ is safer
        return chunk.replace(/\r?\n/g, '\r\n');
    })
    .lines
    .forEach(function(line) {
        console.log('L', line.toString());
    });
I am writing a list for CouchDB. All the documentation I have read assumes you want to return data as HTML or plain text. I, however, need it returned in JSON format, in exactly the same way that a view would return it (the application I am writing relies on this).
What is the correct way to have a list return its data in JSON format?
Try toJSON(), see the example.
You need to format your output with send() to mimic a JSON response. Here is an example of how we do that in a real case:
function(head, req) {
    start({"headers": {"Content-Type": "application/json"}});
    var keys = {};
    var row;
    while (row = getRow()) {
        // code goes here: collect val1, val2, val3 for each key into `keys`
    }
    send("{\"rows\":[");
    var init = true;
    for (var key in keys) {
        if (init) {
            send("\n");
            init = false;
        } else {
            send(",\n");
        }
        send("{\"key\": " + key + ",\"value\":");
        send("{\"first_val\":" + val1);
        send(", \"second_val\":" + val2);
        send(", \"third_val\":" + val3 + "}}");
    }
    send("\n]}");
}
In this way, the output of the list has the same format as the underlying view.
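Alternatively, a minimal sketch, assuming the list only needs to echo the view's rows: collect them and let JSON.stringify() handle all the quoting for you:

function(head, req) {
    start({"headers": {"Content-Type": "application/json"}});
    var rows = [];
    var row;
    while (row = getRow()) {
        rows.push({ key: row.key, value: row.value });
    }
    send(JSON.stringify({ rows: rows }));
}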