How incomplete is ClojureScript now? (range), (iterate), etc. - node.js

I've been trying to use ClojureScript instead of Clojure lately.
When I compile and run this on node.js:
(.log js/console (range 10))
I get:
$ node app
{ meta: null,
start: 0,
end: 10,
step: 1,
__hash: null,
'cljs$lang$protocol_mask$partition1$': 0,
'cljs$lang$protocol_mask$partition0$': 32375006 }
I'm a bit surprised to see that this simple code does not work.
Is this due to my specific environment? I hope so; if it's a problem on my side, please advise.
Here is the compiled js:
cljs.nodejs = {};
cljs.nodejs.require = require;
cljs.nodejs.process = process;
cljs.core.string_print = cljs.nodejs.require.call(null, "util").print;
var rxcljs = {core:{}};
console.log(cljs.core.range.call(null, 10));

You can either console.log the string representation of (range 10):
(.log js/console (pr-str (range 10)))
or simply use the println function:
(println (range 10))
In either case, (0 1 2 3 4 5 6 7 8 9) is printed as expected.

Looks like you want to print the vector instead; range returns a lazy seq.
Try this:
(.log js/console (vec (range 10)))


Programmatically Lighten or Darken a hex color in lua - nvim highlight colors

The goal is to programmatically change a hex color's brightness in Lua.
This post contains several nice examples in JS: Programmatically Lighten or Darken a hex color (or rgb, and blend colors)
I tried my luck converting one of those functions, but I'm still pretty new to Lua programming. It just needs to work with hex values; rgb and other variants are not needed. I therefore thought the "simpler" answers could serve as inspiration, but I still had no luck with them.
Eventually it will be used to manipulate highlight colors in nvim. I'm getting the color codes with a function I wrote:
local function get_color(synID, what)
    local command = 'echo synIDattr(hlID("' .. synID .. '"),' .. '"' .. what .. '"' .. ')'
    return vim.api.nvim_command_output(command)
end
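For illustration, a hypothetical call ("Normal" is just an example group; "fg#" asks Vim for the GUI foreground color as "#rrggbb"):
local fg = get_color("Normal", "fg#")  -- e.g. "#282c34", depending on the colorscheme
print(fg:sub(2))                       -- strip the leading "#" before using the functions below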
I wouldn't resort to bit ops in Lua 5.2 and lower, especially as Lua 5.1 lacks them (LuaJIT, however, does provide them); use multiplication, floor division & mod instead, and take care to clamp your values:
local function clamp(component)
    return math.min(math.max(component, 0), 255)
end

function LightenDarkenColor(col, amt)
    local num = tonumber(col, 16)
    local r = math.floor(num / 0x10000) + amt
    local g = (math.floor(num / 0x100) % 0x100) + amt
    local b = (num % 0x100) + amt
    return string.format("%#x", clamp(r) * 0x10000 + clamp(g) * 0x100 + clamp(b))
end
Especially with the introduction of bit operators in 5.3, the JavaScript references work with minimal changes:
function LightenDarkenColor(col, amt)
    col = tonumber(col, 16)
    return string.format("%#x", ((col & 0x0000FF) + amt) | ((((col >> 8) & 0x00FF) + amt) << 8) | (((col >> 16) + amt) << 16))
end

print(LightenDarkenColor("3F6D2A", 40))
parseInt became tonumber, and toString(16) became string.format("%#x", ...).
Note that this function does not perform any error handling on overflows.
The second function on the linked page can be ported the same way; a var in JavaScript becomes a local in Lua.
For Lua 5.2 and below, you need to use the bit functions. I ported the second function instead, since the first would get very unreadable very quickly:
function LightenDarkenColor(col, amt)
    local num = tonumber(col, 16)
    local r = bit.rshift(num, 16) + amt
    local b = bit.band(bit.rshift(num, 8), 0x00FF) + amt
    local g = bit.band(num, 0x0000FF) + amt
    local newColor = bit.bor(g, bit.bor(bit.lshift(b, 8), bit.lshift(r, 16)))
    return string.format("%#x", newColor)
end
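A quick usage sketch; it assumes a bit library is in scope before the call (LuaJIT ships one, and on Lua 5.2 you can point the name at the built-in bit32 instead):
bit = bit or require("bit")  -- on Lua 5.2 use: bit = bit32
print(LightenDarkenColor("3F6D2A", 40))  --> 0x679552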

Possible? Rust macro to define a bunch of constants?

Let's assume we want a bunch of constants, associating each square of a chess board with its coordinates, so we can use those constants in our Rust code.
One such definition could be:
#[allow(dead_code)]
const A1: (usize,usize) = (0, 0);
and there would be 64 of them.
Now, as an Emacs user, I could generate the source code easily, for example with:
(dolist (col '(?A ?B ?C ?D ?E ?F ?G ?H))
  (dolist (row '(?1 ?2 ?3 ?4 ?5 ?6 ?7 ?8))
    (insert "#[allow(dead_code)]")
    (end-of-line)
    (newline-and-indent)
    (insert "const " col row ": (usize,usize) = ("
            (format "%d" (- col ?A))
            ", "
            (format "%d" (- row ?1))
            ");")
    (end-of-line)
    (newline-and-indent)))
With the drawback that my file just grew by 128 exceptionally boring lines.
In Common Lisp, I would solve this by defining a macro myself, for example:
(defmacro defconst-square-names ()
  (labels ((square-name (row col)
             (intern
              (format nil "+~C~D+"
                      (code-char (+ (char-code #\A) col))
                      (+ row 1))))
           (one-square (row col)
             `(defconstant ,(square-name row col)
                (cons ,row ,col))))
    `(eval-when (:compile-toplevel :load-toplevel :execute)
       ,@(loop
           for col below 8
           appending
           (loop for row below 8
                 collecting (one-square row col))))))
(defconst-square-names) ;; nicer packaging of those 64 boring lines...
Now, the questions arise, of course:
Is Rust's macro system able to accomplish this?
Can someone show such a macro?
I read that you need to put such a Rust macro into a separate crate or some such?!
UPDATE
@aedm's comment about the seq-macro crate pointed me to my first attempt to get it done. But unfortunately, from skimming various Rust documents about macros, I still don't know how to define and call compile-time functions from within such a macro:
fn const_name(index: usize) -> String {
    format!("{}{}",
            char::from_u32('A' as u32 + (index as u32 % 8)).unwrap(),
            index / 8)
}

seq!(index in 0..64 {
    #[allow(dead_code)]
    const $crate::const_name(index) : (usize, usize) = ($(index / 8), $(index % 8));
});
In my Common Lisp solution, I just defined local functions within the macro to get such things done. What is the Rust way?
Here's one way to do it only with macro_rules! ("macros by example") and the paste crate (to construct the identifiers). It's not especially elegant, but it is fairly short and doesn't require you to write a proc-macro crate.
It needs to be invoked with all of the involved symbols since macro_rules! can't do arithmetic. (Maybe seq-macro would help some with that, but I'm not familiar with it.)
use paste::paste;

macro_rules! board {
    // For each column, call column!() passing the details of that column
    // and all of the rows. (This can't be done in one macro because macro
    // repetition works like "zip", not like "cartesian product".)
    ( ($($cols:ident $colnos:literal),*), $rows:tt ) => {
        $( column!($cols, $colnos, $rows); )*
    };
}

/// Helper for board!
macro_rules! column {
    ( $col:ident, $colno:literal, ($($rows:literal),*) ) => {
        $(
            paste! {
                // [< >] are special brackets that tell the `paste!` macro to
                // paste together all the pieces appearing within them into
                // a single identifier.
                #[allow(dead_code)]
                const [< $col $rows >]: (usize, usize) = ($colno, $rows - 1);
            }
        )*
    };
}

board!((A 0, B 1, C 2, D 3, E 4, F 5, G 6, H 7), (1, 2, 3, 4, 5, 6, 7, 8));

fn main() {
    dbg!(A1, A8, H1, H8);
}

Problem in implementing Persistent Segment Tree

I am trying to implement a Persistent Segment Tree. The queries are of 2 types, 1 and 2:
1 ind val : update the value at ind to val in the array
2 k l r : find the sum of elements from index l to r after the kth update operation.
I have implemented the update and query functions properly and they are working fine on an array. But the problem arises when I am forming the different versions. Basically, this is the relevant part of my code:
while (q--) {
    cin >> type;
    if (type == 1) {
        cin >> ind >> val;
        node *t = new node;
        *t = *ver[size - 1];
        update(t, ind, val);
        ver.pb(t);
        size++;
    }
}
cout << query(ver[0], 0, 1) << ' ' << query(ver[1], 0, 1) << ' ' << query(ver[2], 0, 1);
Now the problem is that it is also changing the values for all the nodes in the array. That means after 3 updates, all the versions are storing the latest tree. This is probably because I am not properly allocating the new pointer: the changes made through the new pointer are getting reflected in all the pointers in the array.
For example, if I give this input:
5
1 2 3 4 5
2
1 1 10
1 0 5
where 5 is the number of elements in the array, followed by the array itself, then q, the number of queries, and then the queries themselves. After carrying out the updates, the query function called with (l, r) = (0, 1) returns 15 for all 3 versions, but it should return 3, 11, 15. What am I doing wrong?
So let's say we have some simple segment tree. For a persistent segment tree, during an update we generate new nodes for every node on the path from the root down to the changed leaf, and re-point the affected child pointers to those new nodes; every untouched subtree stays shared with the previous version. But all you're doing is replacing the root and copying its data: the new root still points at the same children all the earlier versions point at, so update() mutates nodes shared by every version, and all versions end up showing the latest tree.
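To make that concrete, here is a minimal sketch of a path-copying update, assuming each node stores a sum plus two child pointers (the names are illustrative, not your exact definitions):
struct node {
    long long sum = 0;
    node *left = nullptr, *right = nullptr;
};

// Returns the root of a NEW version; only the O(log n) nodes on the
// root-to-leaf path are copied, everything else is shared with `prev`.
node* update(node* prev, int lo, int hi, int ind, int val) {
    node* cur = new node(*prev);      // copy this one node, not the whole tree
    if (lo == hi) {                   // reached the leaf for `ind`
        cur->sum = val;
        return cur;
    }
    int mid = (lo + hi) / 2;
    if (ind <= mid)
        cur->left = update(prev->left, lo, mid, ind, val);
    else
        cur->right = update(prev->right, mid + 1, hi, ind, val);
    cur->sum = cur->left->sum + cur->right->sum;
    return cur;
}
Each update then yields a fresh root, e.g. ver.pb(update(ver.back(), 0, n - 1, ind, val)); the earlier roots keep their own view of the array because none of their nodes were mutated.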

How to read/write a bigint from buffer in node.js 10?

I see that BigInt is supported in node 10. However, there's no ReadBigInt() functionality in the Buffer class.
Is it possible to somehow go around it? Perhaps read 2 ints, cast them to BigInt, shift the upper one and add them to reconstruct the bigint?
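(For what it's worth, a sketch of that two-int idea for a little-endian 64-bit value, using only APIs that exist in Node 10; the bytes are just an example:)
const buf = Buffer.from([1, 0, 0, 0, 0, 0, 0, 0]);
const lo = buf.readUInt32LE(0);
const hi = buf.readUInt32LE(4);
console.log((BigInt(hi) << 32n) | BigInt(lo)); // 1n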
A little late to the party here, but as the BigInt ctor accepts a hex string we can just convert the Buffer to a hex string and pass that in to the BigInt ctor. This also works for numbers > 2 ** 64 and doesn't require any dependencies.
function bufferToBigInt(buffer, start = 0, end = buffer.length) {
    const bufferAsHexString = buffer.slice(start, end).toString("hex");
    return BigInt(`0x${bufferAsHexString}`);
}
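For example (the bytes here are arbitrary; note the hex route reads them big-endian):
console.log(bufferToBigInt(Buffer.from([0x01, 0x00]))); // 256n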
I recently encountered the need to do this as well, and managed to find this npm library: https://github.com/no2chem/bigint-buffer ( https://www.npmjs.org/package/bigint-buffer ), which can read from a buffer as a BigInt.
Example usage (reading; there are more examples on the linked GitHub/npm):
const BigIntBuffer = require('bigint-buffer');
let testBuffer = Buffer.alloc(16);
testBuffer[0] = 0xff; // 255
console.log(BigIntBuffer.toBigIntBE(testBuffer));
// -> 338953138925153547590470800371487866880n
That will read the 16-byte (128-bit) number from the buffer.
If you wish to read only part of it as a BigInt, then slicing the buffer should work.
With Node v12, functions for reading a bigint from buffers were added, so if possible, you should try to use Node v12 or later.
But these functions are just pure math based on reading integers from the buffer, so you can pretty much copy them into your Node 10-11 code.
https://github.com/nodejs/node/blob/v12.6.0/lib/internal/buffer.js#L78-L152
So modifying these methods to not be class methods could look something like this
function readBigUInt64LE(buffer, offset = 0) {
    const first = buffer[offset];
    const last = buffer[offset + 7];
    if (first === undefined || last === undefined) {
        throw new Error('Out of bounds');
    }
    const lo = first +
        buffer[++offset] * 2 ** 8 +
        buffer[++offset] * 2 ** 16 +
        buffer[++offset] * 2 ** 24;
    const hi = buffer[++offset] +
        buffer[++offset] * 2 ** 8 +
        buffer[++offset] * 2 ** 16 +
        last * 2 ** 24;
    return BigInt(lo) + (BigInt(hi) << 32n);
}
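A quick check of the port (the byte values are just an example):
const buf = Buffer.from([0xff, 0, 0, 0, 0, 0, 0, 0]);
console.log(readBigUInt64LE(buf)); // 255n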
EDIT: For anyone else having the same issue, I created a package for this.
https://www.npmjs.com/package/read-bigint
One liner: BigInt('0x'+buffer.toString('hex'))

Snap SVG: Using 'for loop' to Transform/Translate x-position

Using snap.svg.js. I'm trying to translate the x position, but nothing happens.
Here is an example: jsfiddle.net/hswuhdj4
window.objectPool = {
    rectQ1: paper.rect(0, 0, 0, svgHeight).attr({fill: lighterBlue}),
    rectQ2: paper.rect(0, 0, 0, svgHeight).attr({fill: lighterBlue}),
    rectQ3: paper.rect(0, 0, 0, svgHeight).attr({fill: lighterBlue}),
    rectQ4: paper.rect(0, 0, 0, svgHeight).attr({fill: lighterBlue}),
    rectQ5: paper.rect(0, 0, 0, svgHeight).attr({fill: lighterBlue}),
    rectQ6: paper.rect(0, 0, 0, svgHeight).attr({fill: lighterBlue})
}
I use an object pool so I can reuse my objects to keep performance up.
window.rectsQ = [
    objectPool.rectQ1,
    objectPool.rectQ2,
    objectPool.rectQ3,
    objectPool.rectQ4,
    objectPool.rectQ5,
    objectPool.rectQ6
];
I push them into an array rectsQ for easy future access.
var rectAmount = 6;
var rectWidth = 100;
for (var i = 0; i < rectAmount; i++) {
    paper.node.appendChild(window.rectsQ[i].node); // imitates Raphael's toFront()
    window.rectsQ[i].attr({width: rectWidth}); // assign a width
    window.rectsQ[i].transform('T' + (svgWidth - (rectWidth * (i + 1))) + ' 0');
}
First I bring the object to the front, then assign a width, and finally translate the x position to the right side of the svg tag.
It doesn't seem too difficult, but for some reason, no matter what transform I do, the object doesn't move.
// It stays right at these coordinates:
x = 0,
y = 0
// while it should be moved to:
x = svgWidth - rectWidth,
y = 0
I've tried using a relative translation ('t') instead of an absolute translation ('T'). No luck though.
Does anyone have an idea why these Snap objects won't move, and how to fix it?
Removing the 2 extra arguments helped in the JSFiddle I made, but weirdly enough not in my project.
This is the JSFiddle: http://jsfiddle.net/hswuhdj4/3
FIXED!!
What caused the problem was the local snap.svg.js file.
Replacing it with raw.githubusercontent.com/adobe-webplatform/Snap.svg/master/dist/snap.svg-min.js fixed the problem for me.
Does anyone know how this occurred?
Transform usually takes 2 arguments, i.e. T x y; you're giving it 4 though.
The documentation says it works like paths, and if so, T x y 0 0 would be the same as T x y T 0 0, which would move the rect to x y and then move it back again to 0 0.
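In other words, keep the translate string to two values. A small sketch (the rect variable is illustrative):
// absolute translate: put the element at x = 300, y = 0
rect.transform('T300 0');
// relative translate: nudge it a further 50px right from wherever it is now
rect.transform('t50 0');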
