How to write a nested loop over several arguments in a macro? - rust

The following code
macro_rules! test {
    ( $( $x1:expr ),*; blub $( $x2:expr ),* ) => {
        $(
            println!("{} * {} = {}", $x1, $x2, $x1 * $x2);
        )*
    }
}

fn main() {
    test!{1, 2, 3; blub 4, 5, 6};
}
prints:
1 * 4 = 4
2 * 5 = 10
3 * 6 = 18
However, I want to loop over both lists separately, like a nested loop. It should print:
1 * 4 = 4
1 * 5 = 5
1 * 6 = 6
2 * 4 = 8
2 * 5 = 10
2 * 6 = 12
3 * 4 = 12
3 * 5 = 15
3 * 6 = 18
How do I do that?

The only way I found was to cheat a little by using a token tree as one of the arguments like this:
macro_rules! test {
    // Secondary invocation: takes a `[]`-delimited list of expressions
    // as the first argument and a single expression as the second.
    ( [ $( $x1:expr ),* ] ; $x2:expr ) => {
        $(
            println!("{:?} * {:?} = {:?}", $x1, $x2, $x1 * $x2);
        )*
    };
    // Main invocation: takes a token tree `x1` and a `[]`-delimited,
    // `,`-separated list of expressions; for each element of that list
    // it calls itself again with `x1` as the first argument and the
    // element as the second.
    ( $x1:tt [ $( $x2:expr ),* ] ) => {
        $(
            test!($x1; $x2);
        )*
    };
}

fn main() {
    test!{
        [1, 2, 3]
        [4, 5, 6]
    };
}
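For reference, the invocation in main expands in two steps, roughly like this (note that the recursion fixes one element of the second list per call, so the output comes out grouped by the second factor rather than the first):
// step 1: the second arm emits one call per element of the `[4, 5, 6]` list
test!([1, 2, 3]; 4);
test!([1, 2, 3]; 5);
test!([1, 2, 3]; 6);
// step 2: the first arm turns each of those calls into three println! statements,
// e.g. the first call prints 1 * 4, 2 * 4 and 3 * 4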

You'll need a nested loop in some form or another, because you need to iterate over x2 a variable number of times. You can expand each of the repeated token lists into an array and loop over them the same way you would over any iterable collection:
macro_rules! test {
    ( $( $x1:expr ),*; blub $( $x2:expr ),* ) => {
        for x1 in [$($x1),*] {
            for x2 in [$($x2),*] {
                println!("{} * {} = {}", x1, x2, x1 * x2);
            }
        }
    }
}

fn main() {
    test!{1, 2, 3; blub 4, 5, 6};
}
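For reference, this version simply builds an ordinary nested loop at run time; the invocation above expands to roughly:
for x1 in [1, 2, 3] {
    for x2 in [4, 5, 6] {
        println!("{} * {} = {}", x1, x2, x1 * x2);
    }
}
One trade-off worth noting: [$($x1),*] builds a real array, so all expressions in one list must have the same type, whereas the purely macro-based expansion above duplicates the println! for every pair and can therefore also handle mixed types (as long as each product type-checks).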

Related

How can a Rust macro use + as a repetition separator?

I wrote a macro:
macro_rules! test {
    ($trait:tt<$($param:literal)+>) => {
        {
            println!("{}", stringify!($($param)++));
        }
    };
}
For the $param repetition I couldn't use the + character as the separator, because + is the repetition 'op'.
The output looks like 1 2 3 +, but how can I make it 1 + 2 + 3 instead?
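One possible workaround (a sketch, not from the original post; it assumes the literals are space-separated, as in the macro above) is to split off the first literal and emit a literal + before each remaining one:
macro_rules! test {
    ($trait:tt<$first:literal $($rest:literal)*>) => {
        // stringify! inserts a space between tokens, so this yields "1 + 2 + 3"
        println!("{}", stringify!($first $(+ $rest)*));
    };
}

fn main() {
    test!(Foo<1 2 3>); // prints: 1 + 2 + 3
}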

Perl hashes don't work as expected

#!/usr/bin/perl
sub f { {
    a => 1,
    b => 2
} }
sub g { {
    %{f()},
    c => 3,
    d => 4,
} }
use Data::Dumper;
print Dumper g();
The above code outputs
$VAR1 = 'a';
$VAR2 = 1;
$VAR3 = 'b';
$VAR4 = 2;
$VAR5 = 'c';
$VAR6 = 3;
$VAR7 = 'd';
$VAR8 = 4;
whereas, in my understanding, it should output
$VAR1 = {
    'a' => 1,
    'c' => 3,
    'b' => 2,
    'd' => 4
};
What is my misunderstanding?
The problem is that a pair of braces is ambiguous in Perl: it may be either a block or an anonymous hash.
Because of the contents of the hash in your g (please use better names), Perl has assumed that you are writing a code block, which then evaluates to just a list of scalar values.
Make it more explicit, like this, and your code will function as you expect:
use strict;
use warnings 'all';

sub foo {
    {
        a => 1,
        b => 2,
    }
}

sub bar {
    my $href = {
        %{ foo() },
        c => 3,
        d => 4,
    }
}

use Data::Dump;
dd bar();
output
{ a => 1, b => 2, c => 3, d => 4 }
The Perl language has ambiguities. Take for example
sub some_sub {
    { }   # Is this a hash constructor or a block?
}
{ } is valid syntax for a block ("bare loop").
{ } is valid syntax for a hash constructor.
And both are allowed as a statement!
So Perl has to guess. Perl usually guesses correctly, but not always. In your case, it guessed correctly for f(), but not for g().
To fix this, you can give Perl "hints". Unary-+ can be used to do this. Unary-+ is a completely transparent operator; it does nothing at all. However, it must be followed by an expression (not a statement). { } has only one possible meaning as an expression.
+{ } # Must be a hash constructor.
Similarly, you can trick Perl to guess the other way.
{; } # Perl looks ahead, and sees that this must be a block.
So in this case, you could use
sub g { +{
    %{f()},
    c => 3,
    d => 4,
} }
or
sub g { return {
    %{f()},
    c => 3,
    d => 4,
} }
(return must also be followed by an expression, if it is followed by anything at all.)

How to implement Haskell's splitEvery in Swift?

PROBLEM
let x = (0..<10).splitEvery( 3 )
XCTAssertEqual( x, [(0...2),(3...5),(6...8),(9)], "implementation broken" )
COMMENTS
I am running into problems calculating the number of elements in the Range, etc...
extension Range
{
    func splitEvery( nInEach: Int ) -> [Range]
    {
        let n = self.endIndex - self.startIndex // ERROR - cannot invoke '-' with an argument list of type (T,T)
    }
}
The values in a range are of ForwardIndexType, so you can only advance() them,
or compute the distance(), but the subtraction - is not defined. The advance amount has to be of the corresponding
type T.Distance. So this would be a possible implementation:
extension Range {
    func splitEvery(nInEach: T.Distance) -> [Range] {
        var result = [Range]() // Start with empty array
        var from = self.startIndex
        while from != self.endIndex {
            // Advance position, but not beyond the end index:
            let to = advance(from, nInEach, self.endIndex)
            result.append(from ..< to)
            // Continue with next interval:
            from = to
        }
        return result
    }
}
Example:
println( (0 ..< 10).splitEvery(3) )
// Output: [0..<3, 3..<6, 6..<9, 9..<10]
Note, however, that 0 ..< 10 is not a list (or array) of integers. To split an array into subarrays, you could define a similar extension:
extension Array {
    func splitEvery(nInEach: Int) -> [[T]] {
        var result = [[T]]()
        for from in stride(from: 0, to: self.count, by: nInEach) {
            let to = advance(from, nInEach, self.count)
            result.append(Array(self[from ..< to]))
        }
        return result
    }
}
Example:
println( [1, 1, 2, 3, 5, 8, 13].splitEvery(3) )
// Output: [[1, 1, 2], [3, 5, 8], [13]]
A more general approach could be to split all sliceable objects. But Sliceable is a protocol, and protocols cannot be extended. What you can do instead is define a function that takes the sliceable object as its first argument:
func splitEvery<S : Sliceable>(seq : S, nInEach : S.Index.Distance) -> [S.SubSlice] {
    var result : [S.SubSlice] = []
    var from = seq.startIndex
    while from != seq.endIndex {
        let to = advance(from, nInEach, seq.endIndex)
        result.append(seq[from ..< to])
        from = to
    }
    return result
}
(Note that this function is completely unrelated to the (extension) methods
defined above.)
Example:
println( splitEvery("abcdefg", 2) )
// Output: [ab, cd, ef, g]
println( splitEvery([3.1, 4.1, 5.9, 2.6, 5.3], 2) )
// Output: [[3.1, 4.1], [5.9, 2.6], [5.3]]
Ranges are not sliceable, but you could define a separate function that takes a
range argument:
func splitEvery<T>(range : Range<T>, nInEach : T.Distance) -> [Range<T>] {
    var result : [Range<T>] = []
    var from = range.startIndex
    while from != range.endIndex {
        let to = advance(from, nInEach, range.endIndex)
        result.append(from ..< to)
        from = to
    }
    return result
}
Example:
println( splitEvery(0 ..< 10, 3) )
// Output: [0..<3, 3..<6, 6..<9, 9..<10]

Intersecting two iterables lazily in groovy

If I have two iterables and I want to create a third iterable by lazily intersecting them, how can this be done in Groovy? I can, and have, done this using Guava before.
Assuming that by "intersect" you mean "joining the elements together, one at a time", I wrote a package, groovy-stream, which can do this lazily:
@Grab( 'com.bloidonia:groovy-stream:0.8.1' )
import groovy.stream.*

Iterable a = [ 1, 2, 3 ]
Iterable b = [ 4, 5, 6 ]

Stream.from( a )
      .zip( b ) { x, y -> [ x, y ] }
      .each { println it }
Which will print:
[1, 4]
[2, 5]
[3, 6]
Edit:
You could of course roll your own:
Iterable a = [ 1, 2, 3 ]
Iterable b = [ 4, 5, 6 ]

Iterable zip = new LazyZipper( a:a, b:b, joiner:{ x, y -> [ x, y ] } )
for( r in zip ) {
    println r
}

class LazyZipper implements Iterable {
    Iterable a
    Iterable b
    Closure joiner

    class LazyIterator implements Iterator {
        Iterator ia
        Iterator ib

        void remove() { throw new UnsupportedOperationException() }
        boolean hasNext() { ia.hasNext() && ib.hasNext() }
        Object next() { joiner( ia.next(), ib.next() ) }
    }

    Iterator iterator() {
        new LazyIterator( ia:a.iterator(), ib:b.iterator() )
    }
}

I need an optimal implementation of parallel mergesort using Intel Threading Building Blocks

I need an optimal implementation of parallel mergesort using Intel Threading Building Blocks (TBB) in C++.
First off, let me say that in my experience tbb::parallel_sort() is quite efficient and is a bit faster than the code I'm about to post (at least for input on the order of thousands of elements, which is what I've tested).
Having said that, I think the following code is exactly what you are looking for. Variables should be self-explanatory, and the documentation in the code should explain the rest -
This will be needed for parallelization :
#include <tbb/parallel_invoke.h>
(The code below also uses std::vector and std::stable_sort, so <vector> and <algorithm> are needed as well.)
If you choose to use Concurrency::parallel_invoke(), which may work faster, then include this :
#include <ppl.h>
I recommend these settings -
#define MIN_ELEMENTS_FOR_RECURSION (50)
#define MIN_ELEMENTS_FOR_PARALLEL_PROCESSING (100)
Following is the main function to call. Its parameters are iterators to the start and end of a random-access container (e.g., vector, deque, etc.) and a compare function -
template <typename T_it, typename T_it_dereferenced>
void parallelMergeSort( T_it first, T_it last, bool (*firstLessThanSecond)(const T_it_dereferenced& a, const T_it_dereferenced& b) )
{
    // create copy of container for extra space
    std::vector<T_it_dereferenced> copy(first, last);
    parallelMergeSortRecursive( first, last, copy.begin(), copy.end(), firstLessThanSecond );
}
This is called from parallelMergeSort(), and recursively from itself, in order to sort each half -
template <typename T_it, typename T_it_dereferenced>
void parallelMergeSortRecursive( T_it source_first, T_it source_last, T_it copy_first, T_it copy_last,
                                 bool (*firstLessThanSecond)(const T_it_dereferenced& a, const T_it_dereferenced& b), int recursion_depth = 0 )
{
    // divide the array in two, and sort the two halves
    long num_elements = source_last - source_first;
    if ( num_elements > MIN_ELEMENTS_FOR_RECURSION )
    {
        T_it source_middle = source_first + num_elements / 2;
        T_it copy_middle = copy_first + num_elements / 2;
        if ( num_elements > MIN_ELEMENTS_FOR_PARALLEL_PROCESSING )
        {
            // Concurrency::parallel_invoke() may work faster
            tbb::parallel_invoke(
                [=] { parallelMergeSortRecursive( source_first, source_middle, copy_first, copy_middle, firstLessThanSecond, recursion_depth + 1 ); },
                [=] { parallelMergeSortRecursive( source_middle, source_last, copy_middle, copy_last, firstLessThanSecond, recursion_depth + 1 ); }
            );
        }
        else // sort serially rather than in parallel
        {
            parallelMergeSortRecursive( source_first, source_middle, copy_first, copy_middle, firstLessThanSecond, recursion_depth + 1 );
            parallelMergeSortRecursive( source_middle, source_last, copy_middle, copy_last, firstLessThanSecond, recursion_depth + 1 );
        }
        // merge the two sorted halves.
        // the roles of source and copy alternate with each level of recursion:
        // at even recursion depths (including zero, the root level) the recursive
        // calls above leave the sorted halves in the copy, so we merge copy -> source;
        // at odd depths they are in the source, so we merge source -> copy
        if ( recursion_depth % 2 == 0 )
        {
            merge( source_first, copy_first, copy_middle, copy_last, firstLessThanSecond );
        }
        else
        {
            merge( copy_first, source_first, source_middle, source_last, firstLessThanSecond );
        }
    }
    else // very few elements remain to be sorted, stop the recursion and sort in place
    {
        if ( recursion_depth % 2 == 0 )
        {
            std::stable_sort(source_first, source_last, firstLessThanSecond);
        }
        else
        {
            std::stable_sort(copy_first, copy_last, firstLessThanSecond);
        }
    }
}
This is called from the recursive function in order to merge two halves -
template <typename T_it, typename T_it_dereferenced>
void merge( T_it target_first, T_it source_first, T_it source_middle, T_it source_last,
            bool (*firstLessThanSecond)(const T_it_dereferenced& a, const T_it_dereferenced& b) )
{
    // source is assumed to contain two sorted sequences (from first to middle and from middle to last)
    T_it source_it1 = source_first;
    T_it source_it2 = source_middle;
    T_it target_it = target_first;
    for ( /* intentional */ ; source_it1 < source_middle && source_it2 < source_last ; ++target_it )
    {
        if ( firstLessThanSecond(*source_it1, *source_it2) )
        {
            *target_it = *source_it1;
            ++source_it1;
        }
        else
        {
            *target_it = *source_it2;
            ++source_it2;
        }
    }
    // copy the remaining elements of the not-yet-exhausted half into the target.
    // only one of these two loops will execute, since one of the conditions ended the loop above
    for ( /* intentional */ ; source_it1 < source_middle ; ++target_it )
    {
        *target_it = *source_it1;
        ++source_it1;
    }
    for ( /* intentional */ ; source_it2 < source_last ; ++target_it )
    {
        *target_it = *source_it2;
        ++source_it2;
    }
}
TBB already includes a sort function (a parallel quicksort), which is, however, implemented rather poorly (its runtime is at least linear regardless of the number of processors).
My proposal would be that you port a parallel merge sort from an existing implementation.
For example, the GNU parallel mode sort (included in the source files of any recent GCC), which uses OpenMP.
Just replace each #pragma omp with equivalent TBB parallel code.
