How to simplify mathematical formulas with Rust macros?

I must admit I'm a bit lost with macros.
I want to build a macro that does the following task and
I'm not sure how to do it. I want to perform a scalar product
of two arrays, say x and y, which have the same length N.
The result I want to compute is of the form:
z = sum_{i=0}^{N-1} x[i] * y[i].
x is a constant whose elements are 0, 1, or -1 and are known at compile time,
while y's elements are determined at runtime. Because of the
structure of x, many computations are useless (terms multiplied by 0
can be removed from the sum, and multiplications of the form 1 * y[i], -1 * y[i] can be transformed into y[i], -y[i] respectively).
As an example if x = [-1, 1, 0], the scalar product above would be
z=-1 * y[0] + 1 * y[1] + 0 * y[2]
To speed up my computation I can unroll the loop by hand and rewrite
the whole thing without x[i], and I could hard code the above formula as
z = -y[0] + y[1]
But this procedure is not elegant, is error-prone,
and becomes very tedious when N is large.
I'm pretty sure I can do that with a macro, but I don't know where to
start (the different books I read are not going too deep into macros and
I'm stuck)...
Would any of you have an idea how to solve this problem (if it is possible) using macros?
Thank you in advance for your help!
Edit: As pointed out in many of the answers, the compiler is smart enough to optimize away the loop in the case of integers. I am not only using integers but also floats (the x array is i32, but in general y is f64), so the compiler cannot (and rightfully so) optimize away the loop. The following piece of code gives the following asm:
const X: [i32; 8] = [0, 1, -1, 0, 0, 1, 0, -1];
pub fn dot_x(y: [f64; 8]) -> f64 {
    X.iter().zip(y.iter()).map(|(i, j)| (*i as f64) * j).sum()
}
playground::dot_x:
xorpd %xmm0, %xmm0
movsd (%rdi), %xmm1
mulsd %xmm0, %xmm1
addsd %xmm0, %xmm1
addsd 8(%rdi), %xmm1
subsd 16(%rdi), %xmm1
movupd 24(%rdi), %xmm2
xorpd %xmm3, %xmm3
mulpd %xmm2, %xmm3
addsd %xmm3, %xmm1
unpckhpd %xmm3, %xmm3
addsd %xmm1, %xmm3
addsd 40(%rdi), %xmm3
mulsd 48(%rdi), %xmm0
addsd %xmm3, %xmm0
subsd 56(%rdi), %xmm0
retq
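(To spell out the "rightfully so": for floats, 0.0 * y[i] is not always 0.0, so the compiler is not allowed to drop those terms. A minimal illustration:)

// 0.0 * y is not always 0.0, so terms with a zero coefficient cannot simply be dropped
fn main() {
    assert!((0.0_f64 * f64::NAN).is_nan());        // NaN propagates
    assert!((0.0_f64 * f64::INFINITY).is_nan());   // 0.0 * inf is NaN, not 0.0
    assert_eq!((0.0_f64 * -1.0).to_bits(), (-0.0_f64).to_bits()); // the sign of zero flips
}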

First of all, a (proc) macro can simply not look inside your array x. All it gets are the tokens you pass it, without any context. If you want it to know about the values (0, 1, -1), you need to pass those directly to your macro:
let result = your_macro!(y, -1, 0, 1, -1);
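For example, a declarative macro_rules! macro along these lines could expand literal 0, 1 and -1 coefficients directly into the simplified sum. This is only a rough sketch: the dot_x! name and the "y; coefficient list" calling convention are arbitrary choices, and a trailing run of zero coefficients still leaves a literal 0.0 in the generated expression.

macro_rules! dot_x {
    // base cases: a single remaining coefficient
    (@acc $y:expr, $i:expr; 0) => { 0.0 };
    (@acc $y:expr, $i:expr; 1) => { $y[$i] };
    (@acc $y:expr, $i:expr; - 1) => { -$y[$i] };
    // recursive cases: peel off one coefficient and advance the index
    (@acc $y:expr, $i:expr; 0 $($rest:tt)+) => { dot_x!(@acc $y, $i + 1; $($rest)+) };
    (@acc $y:expr, $i:expr; 1 $($rest:tt)+) => { $y[$i] + dot_x!(@acc $y, $i + 1; $($rest)+) };
    (@acc $y:expr, $i:expr; - 1 $($rest:tt)+) => { -$y[$i] + dot_x!(@acc $y, $i + 1; $($rest)+) };
    // entry point: dot_x!(y; -1 1 0) expands to -y[0] + (y[0 + 1] + 0.0)
    ($y:expr; $($coef:tt)+) => { dot_x!(@acc $y, 0; $($coef)+) };
}

fn main() {
    let y = [4.0, 5.0, 6.0];
    assert_eq!(dot_x!(y; -1 1 0), 1.0); // -4.0 + 5.0
}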
But you don't really need a macro for this. The compiler optimizes a lot, as also shown in the other answers. However, as you already mention in your edit, it will not optimize away 0.0 * y[i], as the result of that is not always 0.0. (It could be -0.0 or NaN, for example.) What we can do here is simply help the optimizer a bit by using a match or if, to make sure it does nothing for the 0.0 * y case:
const X: [i32; 8] = [0, -1, 0, 0, 0, 0, 1, 0];
fn foobar(y: [f64; 8]) -> f64 {
    let mut sum = 0.0;
    for (&x, &y) in X.iter().zip(&y) {
        if x != 0 {
            sum += x as f64 * y;
        }
    }
    sum
}
In release mode, the loop is unrolled and the values of X inlined, resulting in most iterations being thrown away as they don't do anything. The only thing left in the resulting binary (on x86_64), is:
foobar:
xorpd xmm0, xmm0
subsd xmm0, qword ptr [rdi + 8]
addsd xmm0, qword ptr [rdi + 48]
ret
(As suggested by #lu-zero, this can also be done using filter_map. That will look like this: X.iter().zip(&y).filter_map(|(&x, &y)| match x { 0 => None, _ => Some(x as f64 * y) }).sum(), and gives the exact same generated assembly. Or even without a match, by using filter and map separately: .filter(|(&x, _)| x != 0).map(|(&x, &y)| x as f64 * y).sum().)
Pretty good! However, this function calculates 0.0 - y[1] + y[6], since sum started at 0.0 and we only subtract and add things to it. The optimizer is again not willing to optimize away a 0.0. We can help it a bit more by not starting at 0.0, but starting with None:
fn foobar(y: [f64; 8]) -> f64 {
    let mut sum = None;
    for (&x, &y) in X.iter().zip(&y) {
        if x != 0 {
            let p = x as f64 * y;
            sum = Some(sum.map_or(p, |s| s + p));
        }
    }
    sum.unwrap_or(0.0)
}
This results in:
foobar:
movsd xmm0, qword ptr [rdi + 48]
subsd xmm0, qword ptr [rdi + 8]
ret
Which simply does y[6] - y[1]. Bingo!

You may be able to achieve your goal with a macro that returns a function.
First, write this function without a macro. This one takes a fixed number of parameters.
fn main() {
    println!("Hello, world!");
    let func = gen_sum([1, 2, 3]);
    println!("{}", func([4, 5, 6])); // 1*4 + 2*5 + 3*6 = 4 + 10 + 18 = 32
}

fn gen_sum(xs: [i32; 3]) -> impl Fn([i32; 3]) -> i32 {
    move |ys| ys[0] * xs[0] + ys[1] * xs[1] + ys[2] * xs[2]
}
Now, completely rewrite it, because the prior design doesn't work well as a macro. We had to give up on fixed-size arrays, as macros appear unable to allocate fixed-size arrays.
Rust Playground
fn main() {
    let func = gen_sum!(1, 2, 3);
    println!("{}", func(vec![4, 5, 6])); // 1*4 + 2*5 + 3*6 = 4 + 10 + 18 = 32
}

#[macro_export]
macro_rules! gen_sum {
    ( $( $x:expr ),* ) => {
        {
            let mut xs = Vec::new();
            $(
                xs.push($x);
            )*
            move |ys: Vec<i32>| {
                if xs.len() != ys.len() {
                    panic!("lengths don't match")
                }
                let mut total = 0;
                for i in 0..xs.len() {
                    total += xs[i] * ys[i];
                }
                total
            }
        }
    };
}
What does this do/What should it do
At compile time, it generates a lambda. This lambda accepts a list of numbers and multiplies it element-wise with a Vec that was generated at compile time. I don't think this is exactly what you were after, as it does not optimize away zeroes at compile time. You could optimize away the zeroes at compile time, but you would necessarily incur some cost at run time by having to check where the zeroes were in x to determine which elements of y to multiply. You could even make this lookup constant-time using a HashSet. It's still probably not worth it in general (where I presume 0 is not all that common). Computers are better at doing one thing that's "inefficient" than they are at detecting that the thing they're about to do is "inefficient" and then skipping it. This abstraction breaks down when a significant portion of the operations they do are "inefficient".
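If you did want to pay that run-time cost, one option (a sketch only; gen_sum_sparse is a hypothetical name, not part of the macro above) is to precompute the indices and signs of the non-zero coefficients once, and have the closure loop only over those:

fn gen_sum_sparse(xs: Vec<i32>) -> impl Fn(&[f64]) -> f64 {
    // precompute (index, coefficient) pairs for the non-zero entries of xs
    let nonzero: Vec<(usize, f64)> = xs
        .iter()
        .enumerate()
        .filter(|(_, &x)| x != 0)
        .map(|(i, &x)| (i, x as f64))
        .collect();
    move |ys: &[f64]| -> f64 { nonzero.iter().map(|&(i, c)| c * ys[i]).sum() }
}

fn main() {
    let dot = gen_sum_sparse(vec![0, -1, 0, 0, 0, 0, 1, 0]);
    let y = [0.5, 1.5, 2.5, 3.5, 4.5, 5.5, 6.5, 7.5];
    assert_eq!(dot(&y[..]), y[6] - y[1]); // only the two non-zero terms are touched
}

Here the zero filtering happens once when the closure is built, rather than on every call, which is the run-time trade-off described above.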
Follow-up
Was that worth it? Does it improve run times? I didn't measure, but it seems like understanding and maintaining the macro I wrote isn't worth it compared to just using a function. Writing a macro that does the zero optimization you talked about would probably be even less pleasant.

In many cases, the optimisation stage of the compiler will take care of this for you. To give an example, this function definition
const X: [i32; 8] = [0, 1, -1, 0, 0, 1, 0, -1];
pub fn dot_x(y: [i32; 8]) -> i32 {
    X.iter().zip(y.iter()).map(|(i, j)| i * j).sum()
}
results in this assembly output on x86_64:
playground::dot_x:
mov eax, dword ptr [rdi + 4]
sub eax, dword ptr [rdi + 8]
add eax, dword ptr [rdi + 20]
sub eax, dword ptr [rdi + 28]
ret
You won't be able to get any more optimised version than this, so simply writing the code in a naïve way is the best solution. Whether the compiler will unroll the loop for longer vectors is unclear, and it may change with compiler versions.
For floating-point numbers, the compiler is not normally able to perform all the optimisations above, since the numbers in y are not guaranteed to be finite – they could also be NaN, inf or -inf. For this reason, multiplying by 0.0 is not guaranteed to result in 0.0 again, so the compiler needs to keep the multiplication instructions in the code. You can explicitly allow it to assume all numbers are finite, though, by using the fmul_fast() intrinsic function:
#![feature(core_intrinsics)]
use std::intrinsics::fmul_fast;
const X: [i32; 8] = [0, 1, -1, 0, 0, 1, 0, -1];
pub fn dot_x(y: [f64; 8]) -> f64 {
    X.iter().zip(y.iter()).map(|(i, j)| unsafe { fmul_fast(*i as f64, *j) }).sum()
}
This results in the following assembly code:
playground::dot_x: # @playground::dot_x
# %bb.0:
xorpd xmm1, xmm1
movsd xmm0, qword ptr [rdi + 8] # xmm0 = mem[0],zero
addsd xmm0, xmm1
subsd xmm0, qword ptr [rdi + 16]
addsd xmm0, xmm1
addsd xmm0, qword ptr [rdi + 40]
addsd xmm0, xmm1
subsd xmm0, qword ptr [rdi + 56]
ret
This still redundantly adds zeros between the steps, but I would not expect this to result in any measurable overhead for realistic CFD simulations, since such simulations tend to be limited by memory bandwidth rather than CPU. If you want to avoid these additions as well, you need to use fadd_fast() for the additions to allow the compiler to optimise further:
#![feature(core_intrinsics)]
use std::intrinsics::{fadd_fast, fmul_fast};
const X: [i32; 8] = [0, 1, -1, 0, 0, 1, 0, -1];
pub fn dot_x(y: [f64; 8]) -> f64 {
    let mut result = 0.0;
    for (&i, &j) in X.iter().zip(y.iter()) {
        unsafe { result = fadd_fast(result, fmul_fast(i as f64, j)); }
    }
    result
}
This results in the following assembly code:
playground::dot_x: # @playground::dot_x
# %bb.0:
movsd xmm0, qword ptr [rdi + 8] # xmm0 = mem[0],zero
subsd xmm0, qword ptr [rdi + 16]
addsd xmm0, qword ptr [rdi + 40]
subsd xmm0, qword ptr [rdi + 56]
ret
As with all optimisations, you should start with the most readable and maintainable version of the code. If performance becomes an issue, you should profile your code and find the bottlenecks. As the next step, try to improve the fundamental approach, e.g. by using an algorithm with better asymptotic complexity. Only then should you turn to micro-optimisations like the one you suggested in the question.

If you can spare an #[inline(always)], using an explicit filter_map() should probably be enough to have the compiler do what you want.
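Spelled out with the X constant from the question, that suggestion might look something like the following sketch; whether the zero terms really disappear still has to be checked in the generated assembly.

const X: [i32; 8] = [0, 1, -1, 0, 0, 1, 0, -1];

#[inline(always)]
pub fn dot_x(y: [f64; 8]) -> f64 {
    X.iter()
        .zip(y.iter())
        .filter_map(|(&x, &y)| if x == 0 { None } else { Some(x as f64 * y) })
        .sum()
}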

Related

Range optimizations opportunities [duplicate]


Why does iteration over an inclusive range generate longer assembly in Rust?

These two loops are equivalent in C++ and Rust:
#include <cstdint>
uint64_t sum1(uint64_t n) {
    uint64_t sum = 0;
    for (uint64_t j = 0; j <= n; ++j) {
        sum += 1;
    }
    return sum;
}
pub fn sum1(num: u64) -> u64 {
    let mut sum: u64 = 0;
    for j in 0u64..=num {
        sum += 1;
    }
    return sum;
}
However the C++ version generates a very terse assembly:
sum1(unsigned long): # @sum1(unsigned long)
xor eax, eax
.LBB0_1: # =>This Inner Loop Header: Depth=1
add rax, 1
cmp rax, rdi
jbe .LBB0_1
ret
while Rust's version is very long with two checks in the loop instead of one:
example::sum1:
xor eax, eax
xor ecx, ecx
.LBB0_1:
mov rdx, rcx
cmp rcx, rdi
adc rcx, 0
add rax, 1
cmp rdx, rdi
jae .LBB0_3
cmp rcx, rdi
jbe .LBB0_1
.LBB0_3:
ret
Godbolt: https://godbolt.org/z/xYW94qxjK
What is Rust intrinsically trying to prevent that C++ is carefree about?
Overflow in the iterator state.
The C++ version will loop forever when given a large enough input:
#include <cstdint>
std::uint64_t sum1(std::uint64_t n) {
    std::uint64_t sum = 0;
    for (std::uint64_t j = 0; j <= n; ++j) {
        __asm__ ("");
        sum += 1;
    }
    return sum;
}
#include <iostream>
int main() {
    std::cout << sum1(UINT64_C(0xffffffff'ffffffff)) << std::endl;
    return 0;
}
This is because when the loop counter j finally reaches 0xffffffff'ffffffff, incrementing it wraps around to 0, which means the loop invariant j <= n is always fulfilled and the loop never exits.
Strictly speaking, invoking the original version of sum1 with 0xffffffff'ffffffff infamously triggers undefined behaviour, though not because of overflow alone, but since infinite loops without externally-visible side effects are UB ([intro.progress]/1). This is why in my version I added an empty __asm__ statement to the function to act as a dummy ‘side effect’ preventing the compiler from taking ‘advantage’ of that in optimisation passes.
The Rust version, on the other hand, is not only perfectly well-defined, but iterates exactly as many times as the cardinality of the range:
use std::num::Wrapping;
fn sum1(num: u64) -> u64 {
    let mut sum = Wrapping(0);
    for _ in 0..=num {
        sum += Wrapping(1);
    }
    return sum.0;
}
fn main() {
println!("{}", sum1(0xffffffff_ffffffff));
}
The above program (slightly modified to avoid getting bogged down in debug versus release mode differences with respect to the summation) will terminate after exactly 18 446 744 073 709 551 616 iterations and print 0; the Rust version carefully maintains iterator state so that overflow does not happen in the iterator.
@user3840170 correctly identified the difference: overflow check in Rust, and not in C++.
Still, the question remains as to why there are 2 checks per loop in the Rust version instead of 1, and the answer to that is that LLVM is not sufficiently smart and/or the RangeInclusive design is not well adapted to LLVM [1].
The optimal code generation for short loops is to split the loop, transforming:
for j in 0u64..=num {
    sum += 1;
}
Into:
for j in 0u64..num { // equivalent to for (auto j = 0; j < num; ++j)
    sum += 1;
}
if 0 <= num {
    sum += 1;
}
This would lead to having a single branch in the main loop, and allow LLVM to switch this to a closed formula [2].
The Loop Splitting optimization applies to RangeInclusive and most other Chain iterators, as indeed a RangeInclusive can be thought of as a chain of a once iterator and half-exclusive range iterator (in either order). It is not always a win: like inlining, it implies duplicating the code of the loop body, which if large may lead to a significant overhead in code size.
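To make the "chain" view concrete, here is a small illustration of that equivalence for an unsigned range (a sketch for intuition, not RangeInclusive's actual implementation):

use std::iter::once;

// 0..=num visits exactly the same values as the half-open range 0..num
// followed by a final once(num); this holds for every u64, including u64::MAX
fn inclusive_as_chain(num: u64) -> impl Iterator<Item = u64> {
    (0..num).chain(once(num))
}

fn main() {
    assert!(inclusive_as_chain(5).eq(0u64..=5));
}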
Unfortunately, LLVM fails to split the loop. I am not sure if it's because the optimization is missing altogether, or it just fails to apply it here for some reason. I'm looking forward to the rustc_codegen_gcc backend, as GCC 7 added Loop Splitting to GCC, and it may be able to generate better code there.
[1] See this comment I left about performance issues with RangeInclusive. I spent significant time banging my head over the issue in 2019, and I dearly wish RangeInclusive (and all ranges) were NOT Iterator; it's a costly mistake.
[2] Chain iterators, in general, perform much better using .for_each(...), specifically because there the loop is (manually) split. See the playground for (0..=num).for_each(|_| sum += 1) being reduced to num + 1.
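For reference, the shape described in footnote [2], written out as a complete function (a sketch; the sum1_for_each name is mine):

pub fn sum1_for_each(num: u64) -> u64 {
    let mut sum: u64 = 0;
    // internal iteration lets the iterator drive the loop, which avoids
    // the per-iteration double check shown in the assembly above
    (0..=num).for_each(|_| sum += 1);
    sum
}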
"These two loops are equivalent in C++ and Rust"
Your two code snippets don't share the same behavior: for (uint64_t j = 0; j <= n; ++j) doesn't handle n == UINT64_MAX (it loops forever in that case), while for j in 0u64..=num does handle it (it never turns into an infinite loop).
An equivalent Rust version could be:
pub fn sum1(num: u64) -> u64 {
    let mut sum: u64 = 0;
    let mut j = 0;
    while j <= num {
        sum = sum.wrapping_add(1);
        j = j.wrapping_add(1);
    }
    sum
}
which currently produces the following asm (godbolt):
example::sum1:
xor eax, eax
.LBB0_1:
add rax, 1
cmp rax, rdi
jbe .LBB0_1
ret

Can compilers (specifically rustc) really simplify triangle-summation to avoid a loop? How?

On page 322 of Programming Rust by Blandy and Orendorff is this claim:
...Rust...recognizes that there's a simpler way to sum the numbers from one to n: the sum is always equal to n * (n+1) / 2.
This is of course a fairly well-known equivalence, but how does the compiler recognize it? I'm guessing it's in an LLVM optimization pass, but is LLVM somehow deriving the equivalence from first principles, or does it just have some set of "common loop computations" that can be simplified to arithmetic operations?
First of all, let's demonstrate that this actually happens.
Starting with this code:
pub fn sum(start: i32, end: i32) -> i32 {
    let mut result = 0;
    for i in start..end {
        result += i;
    }
    return result;
}
And compiling in Release, we get:
; playground::sum
; Function Attrs: nounwind nonlazybind readnone uwtable
define i32 @_ZN10playground3sum17h41f12649b0533596E(i32 %start1, i32 %end) {
start:
%0 = icmp slt i32 %start1, %end
br i1 %0, label %bb5.preheader, label %bb6
bb5.preheader: ; preds = %start
%1 = xor i32 %start1, -1
%2 = add i32 %1, %end
%3 = add i32 %start1, 1
%4 = mul i32 %2, %3
%5 = zext i32 %2 to i33
%6 = add i32 %end, -2
%7 = sub i32 %6, %start1
%8 = zext i32 %7 to i33
%9 = mul i33 %5, %8
%10 = lshr i33 %9, 1
%11 = trunc i33 %10 to i32
%12 = add i32 %4, %start1
%13 = add i32 %12, %11
br label %bb6
bb6: ; preds = %bb5.preheader, %start
%result.0.lcssa = phi i32 [ 0, %start ], [ %13, %bb5.preheader ]
ret i32 %result.0.lcssa
}
Where we can indeed observe that there is no loop any longer.
Thus we validate the claim by Blandy and Orendorff.
As for how this occurs, my understanding is that this all happens in ScalarEvolution.cpp in LLVM. Unfortunately, that file is a 12,000+ line monstrosity, so navigating it is a tad complicated; still, the head comment hints that we should be in the right place, and points to the papers it used, which mention optimizing loops and closed-form functions [1]:
//===----------------------------------------------------------------------===//
//
// There are several good references for the techniques used in this analysis.
//
// Chains of recurrences -- a method to expedite the evaluation
// of closed-form functions
// Olaf Bachmann, Paul S. Wang, Eugene V. Zima
//
// On computational properties of chains of recurrences
// Eugene V. Zima
//
// Symbolic Evaluation of Chains of Recurrences for Loop Optimization
// Robert A. van Engelen
//
// Efficient Symbolic Analysis for Optimizing Compilers
// Robert A. van Engelen
//
// Using the chains of recurrences algebra for data dependence testing and
// induction variable substitution
// MS Thesis, Johnie Birch
//
//===----------------------------------------------------------------------===//
According to this blog article by Krister Walfridsson, it builds up chains of recurrences, which can be used to obtain a closed-form formula for each inductive variable.
This is a mid-point between full reasoning and full hardcoding:
- Pattern-matching is used to build the chains of recurrences, so LLVM may not recognize all ways of expressing a certain computation.
- A large variety of formulas can be optimized, not only the triangle sum.
The article also notes that the optimization may end up pessimizing the code: a small number of iterations can be faster if the "optimized" code requires a larger number of operations compared to the inner body of the loop.
[1] n * (n+1) / 2 is the closed-form function to compute the sum of the numbers in [0, n].
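As a quick sanity check of that closed form against the start..end loop at the top of this answer (a small sketch, not the compiler's derivation; closed_form is a hypothetical helper):

// closed form of `for i in start..end { result += i; }`,
// with the intermediate arithmetic done in i64 to avoid overflow
fn closed_form(start: i32, end: i32) -> i32 {
    if start >= end {
        return 0;
    }
    let n = end as i64 - start as i64; // number of iterations
    ((start as i64 + end as i64 - 1) * n / 2) as i32
}

fn main() {
    for (s, e) in [(0, 10), (1, 101), (-5, 5)] {
        let by_loop: i32 = (s..e).sum();
        assert_eq!(closed_form(s, e), by_loop);
    }
}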

VC++ SSE code generation - is this a compiler bug?

A very particular code sequence in VC++ generated the following instruction (for Win32):
unpcklpd xmm0,xmmword ptr [ebp-40h]
2 questions arise:
(1) As far as I understand the Intel manual, unpcklpd accepts as its 2nd argument a 128-bit-aligned memory address. If the address is relative to a stack frame, alignment cannot be forced. Is this really a compiler bug?
(2) Exceptions are thrown at the execution of this instruction only when run from the debugger, and even then not always. Even attaching to the process and executing this code does not throw. How can this be??
The particular exception thrown is access violation at 0xFFFFFFFF, but AFAIK that's just a code for misalignment.
[Edit:]
Here's some source that demonstrates the bad code generation - but typically doesn't cause a crash. (that's mostly what I'm wondering about)
[Edit 2:]
The code sample now reproduces the actual crash. This one also crashes outside the debugger - I suspect the difference occurs because the debugger launches the program at different typical base addresses.
// mock.cpp
#include <stdio.h>

struct mockVect2d
{
    double x, y;
    mockVect2d() {}
    mockVect2d(double a, double b) : x(a), y(b) {}
    mockVect2d operator + (const mockVect2d& u) {
        return mockVect2d(x + u.x, y + u.y);
    }
};

struct MockPoly
{
    MockPoly() {}
    mockVect2d* m_Vrts;
    double m_Area;
    int m_Convex;
    bool m_ParClear;
    void ClearPar() { m_Area = -1.; m_Convex = 0; m_ParClear = true; }
    MockPoly(int len) { m_Vrts = new mockVect2d[len]; }
    mockVect2d& Vrt(int i) {
        if (!m_ParClear) ClearPar();
        return m_Vrts[i];
    }
    const mockVect2d& GetCenter() { return m_Vrts[0]; }
};

struct MockItem
{
    MockItem() : Contour(1) {}
    MockPoly Contour;
};

struct Mock
{
    Mock() {}
    MockItem m_item;
    virtual int GetCount() { return 2; }
    virtual mockVect2d GetCenter() { return mockVect2d(1.0, 2.0); }
    virtual MockItem GetItem(int i) { return m_item; }
};

void testInner(int a)
{
    int c = 8;
    printf("%d", c);
    Mock* pMock = new Mock;
    int Flag = true;
    int nlr = pMock->GetCount();
    if (nlr == 0)
        return;
    int flr = 1;
    if (flr == nlr)
        return;
    if (Flag)
    {
        if (flr < nlr && flr > 0) {
            int c = 8;
            printf("%d", c);
            MockPoly pol(2);
            mockVect2d ctr = pMock->GetItem(0).Contour.GetCenter();
            // The mess happens here:
            // ; 74 : pol.Vrt(1) = ctr + mockVect2d(0., 1.0);
            //
            //   call ?Vrt@MockPoly@@QAEAAUmockVect2d@@H@Z ; MockPoly::Vrt
            //   movdqa xmm0, XMMWORD PTR $T4[ebp]
            //   unpcklpd xmm0, QWORD PTR tv190[ebp]       **** crash!
            //   movdqu XMMWORD PTR [eax], xmm0
            pol.Vrt(0) = ctr + mockVect2d(1.0, 0.);
            pol.Vrt(1) = ctr + mockVect2d(0., 1.0);
        }
    }
}

void main()
{
    testInner(2);
    return;
}
If you prefer, download a ready vcxproj with all the switches set from here. This includes the complete ASM too.
Update: this is now a confirmed VC++ compiler bug, hopefully to be resolved in VS2015 RTM.
Edit: The connect report, like many others, is now garbage. However the compiler bug seems to be resolved in VS2017 - not in 2015 update 3.
Since no one else has stepped up, I'm going to take a shot.
1) If the address is relative to a stack frame, alignment cannot be forced. Is this really a compiler bug?
I'm not sure it is true that you cannot force alignment for stack variables. Consider this code:
struct foo
{
    char a;
    int b;
    unsigned long long c;
};
int wmain(int argc, wchar_t* argv[])
{
    foo moo;
    moo.a = 1;
    moo.b = 2;
    moo.c = 3;
}
Looking at the startup code for main, we see:
00E31AB0 push ebp
00E31AB1 mov ebp,esp
00E31AB3 sub esp,0DCh
00E31AB9 push ebx
00E31ABA push esi
00E31ABB push edi
00E31ABC lea edi,[ebp-0DCh]
00E31AC2 mov ecx,37h
00E31AC7 mov eax,0CCCCCCCCh
00E31ACC rep stos dword ptr es:[edi]
00E31ACE mov eax,dword ptr [___security_cookie (0E440CCh)]
00E31AD3 xor eax,ebp
00E31AD5 mov dword ptr [ebp-4],eax
Adding __declspec(align(16)) to moo gives
01291AB0 push ebx
01291AB1 mov ebx,esp
01291AB3 sub esp,8
01291AB6 and esp,0FFFFFFF0h <------------------------
01291AB9 add esp,4
01291ABC push ebp
01291ABD mov ebp,dword ptr [ebx+4]
01291AC0 mov dword ptr [esp+4],ebp
01291AC4 mov ebp,esp
01291AC6 sub esp,0E8h
01291ACC push esi
01291ACD push edi
01291ACE lea edi,[ebp-0E8h]
01291AD4 mov ecx,3Ah
01291AD9 mov eax,0CCCCCCCCh
01291ADE rep stos dword ptr es:[edi]
01291AE0 mov eax,dword ptr [___security_cookie (12A40CCh)]
01291AE5 xor eax,ebp
01291AE7 mov dword ptr [ebp-4],eax
Apparently the compiler (VS2010 compiled debug for Win32), recognizing that we will need specific alignments for the code, takes steps to ensure it can provide that.
2) Exceptions are thrown at the execution of this instruction only when run from the debugger, and even then not always. Even attaching to the process and executing this code does not throw. How can this be??
So, a couple of thoughts:
"and even then not always" - Not standing over your shoulder when you run this, I can't say for certain. However it seems plausible that just by random chance, stacks could get created with the alignment you need. By default, x86 uses 4byte stack alignment. If you need 16 byte alignment, you've got a 1 in 4 shot.
As for the rest (from https://msdn.microsoft.com/en-us/library/aa290049%28v=vs.71%29.aspx#ia64alignment_topic4):
On the x86 architecture, the operating system does not make the alignment fault visible to the application. ...you will also suffer performance degradation on the alignment fault, but it will be significantly less severe than on the Itanium, because the hardware will make the multiple accesses of memory to retrieve the unaligned data.
TLDR: Using __declspec(align(16)) should give you the alignment you want, even for stack variables. For unaligned accesses, the OS will catch the exception and handle it for you (at a cost of performance).
Edit1: Responding to the first 2 comments below:
Based on MS's docs, you are correct about the alignment of stack parameters, but they propose a solution as well:
You cannot specify alignment for function parameters. When data that has an alignment attribute is passed by value on the stack, its alignment is controlled by the calling convention. If data alignment is important in the called function, copy the parameter into correctly aligned memory before use.
Neither your sample on Microsoft Connect nor the code above produce the same code for me (I'm only on VS2010), so I can't test this. But given this code from your sample:
struct mockVect2d
{
    double x, y;
    mockVect2d(double a, double b) : x(a), y(b) {}
It would seem that aligning either mockVect2d or the 2 doubles might help.

SSE SIMD Optimization For Loop

I have some code in a loop
for(int i = 0; i < n; i++)
{
    u[i] = c * u[i] + s * b[i];
}
So, u and b are vectors of the same length, and c and s are scalars. Is this code a good candidate for vectorization for use with SSE in order to get a speedup?
UPDATE
I learnt vectorization (turns out it's not so hard if you use intrinsics) and implemented my loop in SSE. However, when setting the SSE2 flag in the VC++ compiler, I get about the same performance as with my own SSE code. The Intel compiler on the other hand was much faster than my SSE code or the VC++ compiler.
Here is the code I wrote for reference
double *u = (double*) _aligned_malloc(n * sizeof(double), 16);
for(int i = 0; i < n; i++)
{
    u[i] = 0;
}
int j = 0;
__m128d *uSSE = (__m128d*) u;
__m128d cStore = _mm_set1_pd(c);
__m128d sStore = _mm_set1_pd(s);
for (j = 0; j <= i - 2; j+=2)
{
    __m128d uStore = _mm_set_pd(u[j+1], u[j]);
    __m128d cu = _mm_mul_pd(cStore, uStore);
    __m128d so = _mm_mul_pd(sStore, omegaStore);
    uSSE[j/2] = _mm_add_pd(cu, so);
}
for(; j <= i; ++j)
{
    u[j] = c * u[j] + s * omegaCache[j];
}
Yes, this is an excellent candidate for vectorization. But, before you do so, make sure you've profiled your code to be sure that this is actually worth optimizing. That said, the vectorization would go something like this:
int i;
for (i = 0; i < n - 3; i += 4)
{
    // load elements u[i, i+1, i+2, i+3]
    // load elements b[i, i+1, i+2, i+3]
    // vector multiply u * c
    // vector multiply s * b
    // add partial results
    // store back to u[i, i+1, i+2, i+3]
}
// Finish up the uneven edge cases (or skip if you know n is a multiple of 4)
for ( ; i < n; i++)
    u[i] = c * u[i] + s * b[i];
For even more performance, you can consider prefetching further array elements, and/or unrolling the loop and using software pipelining to interleave the computation in one loop with the memory accesses from a different iteration.
_mm_set_pd is not vectorized. If taken literally, it reads the two doubles using scalar operations, then combines the two scalar doubles and copies them into the SSE register. Use _mm_load_pd instead.
Probably yes, but you have to help the compiler with some hints.
__restrict__ placed on the pointers tells the compiler that the two pointers do not alias.
If you know the alignment of your vectors, communicate that to the compiler (Visual C++ may have some facility for it).
I am not familiar with Visual C++ myself, but I have heard it is no good for vectorization.
Consider using the Intel compiler instead.
Intel allows pretty fine-grained control over assembly generated: http://www.intel.com/software/products/compilers/docs/clin/main_cls/cref_cls/common/cppref_pragma_vector.htm
Yes, this is a great candidate for vectorization, assuming there is no overlap between the U and B arrays. But the code is bound by memory access (load/store). Vectorization helps reduce cycles per loop, but the instructions will stall due to cache misses on the U and B arrays. The Intel C/C++ Compiler generates the following code with default flags for a Xeon x5500 processor. The compiler unrolls the loop by 8 and employs SIMD ADD (addpd) and MULTIPLY (mulpd) instructions using the xmm[0-16] SIMD registers. In each cycle, the processor can issue 2 SIMD instructions, yielding 4-way scalar ILP, assuming you have the data ready in the registers.
Here U, B, C and S are Double Precision (8 bytes).
..B1.14: # Preds ..B1.12 ..B1.10
movaps %xmm1, %xmm3 #5.1
unpcklpd %xmm3, %xmm3 #5.1
movaps %xmm0, %xmm2 #6.12
unpcklpd %xmm2, %xmm2 #6.12
# LOE rax rcx rbx rbp rsi rdi r8 r12 r13 r14 r15 xmm0 xmm1 xmm2 xmm3
..B1.15: # Preds ..B1.15 ..B1.14
movsd (%rsi,%rcx,8), %xmm4 #6.21
movhpd 8(%rsi,%rcx,8), %xmm4 #6.21
mulpd %xmm2, %xmm4 #6.21
movaps (%rdi,%rcx,8), %xmm5 #6.12
mulpd %xmm3, %xmm5 #6.12
addpd %xmm4, %xmm5 #6.21
movaps 16(%rdi,%rcx,8), %xmm7 #6.12
movaps 32(%rdi,%rcx,8), %xmm9 #6.12
movaps 48(%rdi,%rcx,8), %xmm11 #6.12
movaps %xmm5, (%rdi,%rcx,8) #6.3
mulpd %xmm3, %xmm7 #6.12
mulpd %xmm3, %xmm9 #6.12
mulpd %xmm3, %xmm11 #6.12
movsd 16(%rsi,%rcx,8), %xmm6 #6.21
movhpd 24(%rsi,%rcx,8), %xmm6 #6.21
mulpd %xmm2, %xmm6 #6.21
addpd %xmm6, %xmm7 #6.21
movaps %xmm7, 16(%rdi,%rcx,8) #6.3
movsd 32(%rsi,%rcx,8), %xmm8 #6.21
movhpd 40(%rsi,%rcx,8), %xmm8 #6.21
mulpd %xmm2, %xmm8 #6.21
addpd %xmm8, %xmm9 #6.21
movaps %xmm9, 32(%rdi,%rcx,8) #6.3
movsd 48(%rsi,%rcx,8), %xmm10 #6.21
movhpd 56(%rsi,%rcx,8), %xmm10 #6.21
mulpd %xmm2, %xmm10 #6.21
addpd %xmm10, %xmm11 #6.21
movaps %xmm11, 48(%rdi,%rcx,8) #6.3
addq $8, %rcx #5.1
cmpq %r8, %rcx #5.1
jl ..B1.15 # Prob 99% #5.1
It depends on how you placed u and b in memory.
If the two memory blocks are far from each other, SSE won't help much in this scenario.
It is suggested that the arrays u and b be laid out as AoS (array of structures) instead of SoA (structure of arrays), because you can then load both of them into a register in a single instruction.
