SIMD programming languages - programming-languages

In the last couple of years, I've been doing a lot of SIMD programming, and most of the time I've relied on compiler intrinsics (such as the ones for SSE programming) or on programming assembly to get to the really nifty stuff. However, up until now I've hardly been able to find any programming language with built-in support for SIMD.
Now obviously there are the shader languages such as HLSL, Cg and GLSL that have native support for this kind of stuff. However, I'm looking for something that is able to at least compile to SSE without autovectorization, but with built-in support for vector operations. Does such a language exist?
This is an example of (part of) a Cg shader that does a spotlight and in terms of syntax this is probably the closest to what I'm looking for.
float4 pixelfunction(
    output_vs IN,
    uniform sampler2D texture : TEX0,
    uniform sampler2D normals : TEX1,
    uniform float3 light,
    uniform float3 eye ) : COLOR
{
    float4 color = tex2D( texture, IN.uv );
    float4 normal = tex2D( normals, IN.uv ) * 2 - 1;
    float3 T = normalize(IN.T);
    float3 B = normalize(IN.B);
    float3 N =
        normal.b * normalize(IN.normal) +
        normal.r * T +
        normal.g * B;
    float3 V = normalize(eye - IN.pos.xyz);
    float3 L = normalize(light - IN.pos);
    float3 H = normalize(L + V);
    float4 diffuse = color * saturate( dot(N, L) );
    float4 specular = color * pow(saturate(dot(N, H)), 15);
    float falloff = dot(L, normalize(light));
    return pow(falloff, 5) * (diffuse + specular);
}
Stuff that would be a real must in this language is:
Built in swizzle operators
Vector operations (dot, cross, normalize, saturate, reflect et cetera)
Support for custom data types (structs)
Dynamic branching would be nice (for loops, if statements)

So recently Intel released ISPC, which is exactly what I was looking for when asking this question. It's a language that can link with normal C code, has an implicit execution model, and supports all the features mentioned in the original post (swizzle operators, branching, data structs, vector ops, shader-like syntax), and it compiles for SSE2, SSE4, AVX, AVX2, and Xeon Phi vector instructions.

Your best bet is probably OpenCL. I know it has mostly been hyped as a way to run code on GPUs, but OpenCL kernels can also be compiled and run on CPUs. OpenCL is basically C with a few restrictions:
No function pointers
No recursion
and a bunch of additions. In particular, vector types:
float4 x = (float4)(1.0f, 2.0f, 3.0f, 4.0f);
float4 y = (float4)(10.0f, 10.0f, 10.0f, 10.0f);
float4 z = y + x.s3210; // add the vector y to a swizzle of x that reverses the element order
One big caveat is that the code has to be cleanly separable; OpenCL can't call out to arbitrary libraries, etc. But if your compute kernels are reasonably independent then you basically get a vector-enhanced C where you don't need to use intrinsics.
Here is a quick reference/cheatsheet with all of the extensions.

It's not really the language itself, but there is a library for Mono (Mono.Simd) that will expose the vectors to you and optimise the operations on them into SSE whenever possible.

It's a library for C++, rather than built into the language, but Eigen is pretty invisible once your variables are declared.

Currently my best option is to do it myself by creating a back-end for the open-source Cg frontend that Nvidia released, but I'd like to save myself the effort, so I'm curious whether it's been done before. Preferably I'd like something I can start using right away.

The D programming language also provides access to SIMD, in a similar way to Mono.Simd.

That would be Fortran you are looking for. If memory serves, even the open-source compilers (g95, gfortran) will take advantage of SSE if it's implemented on your hardware.

I know this question is a bit old, but I found myself in a similar predicament and decided I should just make my own.
I haven't gotten very far yet in the slightest, but if you're interested in the directions that I'm exploring it might be worth a look. :)
https://github.com/HappMacDonald/MasterBlaster
MasterBlaster is a functional programming language, but it's going to compile down into a bytecode that is ultimately its own much simpler stack-based language called Crude. Crude then compiles directly into assembly.
My strategy is a SIMD-first one: unoptimized executables will use almost entirely SIMD, and then one of the potential optimizations will be to simplify code that isn't benefiting from SIMD into using only general registers instead.
Crude is up to the Turing-complete stage, but presently it only exists as a few dozen GAS macros. I'm working towards a self-contained compiler for it, and building out the iterator/generator features that are the stars of the show when it comes to SIMD acceleration.
No vector-matrix-etc support just yet, but that is on the roadmap and I'll probably bear your description in mind when writing up that syntax. :)

Related

Equivalent terms of LLVM IR for watermarking by renumbering?

I want to apply an algorithm for watermarking that basically reorders equivalent terms of a programming language:
https://books.google.dk/books?id=mig-bH3u0Z0C&pg=PT595&lpg=PT595&dq=obfuscation+renumbering+register&source=bl&ots=b3vMhp-yTq&sig=RERdnDNewRqBi7ZmSNMlsnPy-Hw&hl=da&sa=X&ved=0ahUKEwiLw-zWrpnSAhWEHJoKHXCpAkMQ6AEIGTAA#v=onepage&q=obfuscation%20renumbering%20register&f=false
Say, T1, T2,...,Tn are equivalent terms of the language, then the watermark is a permutation f such that f(Ti) = Tj.
In this case the programming language is LLVM IR, which is an intermediate language.
The book gives an example of renumbering registers by applying a permutation. However, aren't registers out of scope for LLVM IR, since they are a lower-level detail?
I've been thinking of equivalent terms of LLVM, but cannot come up with some. The more the better, since this means a more flexible degree of watermarking.
Can you think of equivalent terms of LLVM IR such that each could be substituted for some other? Or is it only possible to do such watermarking at the machine code level?
Even if you perform this at the IR level (and you could, by changing patterns), you won't get far, since the machine instruction level would reshuffle everything. You'd be better off writing a (potentially post-RA) machine-instruction-level pass.

GIMP's method of layer compositing/blending

In my quest to add alpha capacity to my image blending tools in Matlab, I've come across a bit of a snag. Among others, I've been using these links as my references as to how foreground and background alpha plays into the composition of both the output color data and output alpha.
My original approach was to simply use a Src-Over composition for "normal" blend mode and a Src-Atop composition for other modes. When compared to the output from GIMP, this produced similar but differing results. The output alpha matches, but the RGB data differs.
Specifically, the foreground's color influence over the background is zero where the background alpha is zero. After spending a few hours looking naively through the GIMP 2.8.10 source, I notice a few things that confuse me.
Barring certain modes and a few ancillary things that happen during export that I haven't gleaned in the code yet, the approach is approximately thus:
if ~normalmode
    FGalpha = min(FGalpha, BGalpha); % << why this?
end
FGalpha = FGalpha * mask * opacity;
OUTalpha = BGalpha + (1 - BGalpha) * FGalpha;
ratio = FGalpha / (OUTalpha + eps);
OUT = OUT * ratio + BG * (1 - ratio);
if normalmode
    OUT = cat(3, OUT, OUTalpha);
else
    OUT = cat(3, OUT, BGalpha);
end
The points of curiosity lie in the fact that I don't understand conceptually why one would take the minimum of layer alphas for composition. Certainly, this approach produces results which match GIMP, but I'm uncomfortable establishing this as a default behavior if I don't understand the reasoning.
This may be best asked of a GIMP forum somewhere, but I figured it would be more fruitful to approach a general audience. To clarify and summarize:
Does it make sense that colors in a transparent BG region are unaffected by multiplication with an opaque foreground color? Wouldn't this risk causing bleeding of unaltered data near hard mask edges with some future operation?
Although I haven't found anything, are there other applications out there that use this approach?
Am I wrong to use GIMP's behavior as a reference? I don't have PS to compare against, and ImageMagick is so flexible that it doesn't really suggest a particular expected behavior. Certainly, GIMP has some things it does incorrectly; maybe this is something else that may change.
EDIT:
I can at least answer the last question by obviating it. I've decided to add support for both SVG 1.2 and legacy GIMP methods. The GEGL methods to be used by GIMP in the future follow the SVG methods, so I figure that suggests the propriety of the legacy methods.
For what it's worth, the SVG methods are all based on a Porter-Duff Src-Over composition. If referring to the documentation, the fact that the blend math is the same gets obfuscated because the blend and composition are algebraically combined using premultiplied alpha to reduce the overall computational cost. With the exception of SoftLight, the core blend math is the same as those used by GIMP and elsewhere.
Any other blend operation (e.g. PinLight, Hue) can be made compatible by just doing:
As = Sa * (1 - Da);
Ad = Da * (1 - Sa);
Ab = Sa * Da;
Ra = As + Ad + Ab; % output alpha
Rc = ( f(Sc,Dc)*Ab + Sc*As + Dc*Ad ) / Ra;
and then doing some algebra if you want to simplify it.
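If it helps to see those formulas in executable form, here is a minimal sketch of mine (Python, assuming straight, non-premultiplied channel values in [0, 1]):
# Composite a custom blend function f(Sc, Dc) using the formulas above.
def composite(f, Sc, Sa, Dc, Da):
    As = Sa * (1 - Da)    # source visible alone
    Ad = Da * (1 - Sa)    # destination visible alone
    Ab = Sa * Da          # source and destination overlap
    Ra = As + Ad + Ab     # output alpha
    if Ra == 0:
        return 0.0, 0.0   # fully transparent result
    Rc = (f(Sc, Dc) * Ab + Sc * As + Dc * Ad) / Ra
    return Rc, Ra

# Example: a 'multiply' blend of a half-opaque source over a 75%-opaque background
print(composite(lambda s, d: s * d, 0.8, 0.5, 0.4, 0.75))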

Is D powerful enough for these features?

For the longest time I wanted to design a programming language that married extensibility with efficiency (and safety, ease-of-use, etc.). I recently rediscovered D and I am wondering if D 2.0 is pretty much the language I wanted to make myself. What I love most is the potential of metaprogramming; in theory, could D's traits system enable the following features at compile time?
Run-time reflection: Are the compile-time reflection features sufficient to build a run-time reflection system a la Java/.NET?
Code conversion: Using a metaprogram, create C#/C++/etc. versions of your D program every time you compile it (bonus point if doc comments can be propagated).
Traits. I don't mean the metaprogramming traits built into D, I mean object-oriented traits for class composition. A D program would indicate a set of traits to compose, and a metaprogram would compose them.
Unit inference engine: Given some notation for optionally indicating units, e.g. unit(value), could a D metaprogram examine the following code, infer the correct units, and issue an error message on the last line? (I wrote such a thing for Boo, so I can assure you this is possible in general, program-wide):
auto mass = kg(2.0);
auto accel = 1.0; // units are strictly optional
auto force = mass*accel;
accel += metresPerSecondSquared(9.81); // units of 'force' and 'accel' are now known
force += pounds(3.0); // unit mismatch detected
Run-time reflection: Are the compile-time reflection features sufficient to build a run-time reflection system a la Java/.NET?
Yes. You can get all the information you need at compile time using __traits and produce the runtime data structures you need for runtime reflection.
Code conversion: Using a metaprogram, create C#/C++/etc. versions of your D program every time you compile it (bonus point if doc comments can be propagated).
No, it simply isn't possible no matter how powerful D is. Some features simply do not transfer over. For example, D has an inline assembler, which is 100% impossible to convert to C#. No language can losslessly convert to all other languages.
Traits. I don't mean the metaprogramming traits built into D, I mean object-oriented traits for class composition. A D program would indicate a set of traits to compose, and a metaprogram would compose them.
You can use template mixins for this, although they don't provide method exclusion.
Unit inference engine: Given some notation for optionally indicating units, e.g. unit(value), could a D metaprogram examine the following code, infer the correct units, and issue an error message on the last line? (I wrote such a thing for boo so I can assure you this is possible in general, program-wide):
Yes, this is straightforward in D. There's at least one implementation already.

Alpha-beta tree search without recursion

I'd like to see an implementation of an alpha-beta search (negamax to be more precise) without recursion. I know the basic idea - to use one or more stacks to keep track of the levels - but having real code would spare me a lot of time.
Having it in Java, C# or Javascript would be perfect, but C/C++ is fine.
Here's the (simplified) recursive code:
function search(crtDepth, alpha, beta)
{
    if (crtDepth == 0)
        return eval(board);
    var moves = generateMoves(board);
    var crtMove;
    var score = 200000;
    var i = 0;
    while (i < moves.length)
    {
        crtMove = moves.moveList[i++];
        doMove(board, crtMove);
        score = -search(crtDepth-1, -beta, -alpha);
        undoMove(board, crtMove);
        if (score > alpha)
        {
            if (score >= beta)
                return beta;
            alpha = score;
        }
    }
    return alpha;
}
search(4, -200000, 200000);
Knuth and Moore published an iterative alpha-beta routine in 1975 using an ad-hoc Algol language.
An Analysis of Alpha Beta Pruning (Page 301)
Also in Chapter 9 of "Selected Papers on Analysis of Algorithms"
It doesn't look very easy to convert into C#, but it might help someone who wants to do it for the pure joy of optimization.
I'm very new to chess programming, so it's beyond my abilities. Plus, my biggest performance gain came when I switched from "Copy-Make" to "Make-Unmake". I'm using XNA, so getting my GC latency down to almost 0 fixed all my performance issues; now it runs faster on my 360 than it does on my PC, so this optimization seems too difficult to attempt for my needs.
Also see Recursion to Iteration
For a more recent bit of code, I wrote a non-recursive Negamax routine as an option in the EasyAI python library. The specific source code is at:
https://github.com/Zulko/easyAI/blob/master/easyAI/AI/NonRecursiveNegamax.py
It uses a simple loop with a fixed array of objects (size determined by target depth) to move up and down the tree in an ordered fashion. For the particular project I was using it on, it was six times faster than the recursive version. But I'm sure each game would respond differently.
There is no way to deny that this is some dense and complex code and conversion to C/Java/C# will be ... challenging. It is pretty much nothing but border cases. :)
If you convert it to C/Java/C#, I would love to see the results. Place a link in the comments?
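For readers who just want the shape of the explicit-stack approach, here is a rough Python sketch of mine (not the EasyAI code itself); generate_moves, do_move, undo_move and evaluate are assumed helpers that mirror the question's pseudocode:
# Negamax with alpha-beta pruning using an explicit stack of frames
# instead of recursion. Each frame mirrors one recursive invocation.
def negamax_iterative(board, depth, alpha=-200000, beta=200000):
    stack = [{"alpha": alpha, "beta": beta, "moves": None,
              "index": 0, "depth": depth, "move": None}]
    result = None  # value "returned" by the frame that finished last

    while stack:
        frame = stack[-1]

        # A child frame just finished: fold its negated score into this frame.
        if result is not None:
            score = -result
            result = None
            undo_move(board, frame["move"])
            if score >= frame["beta"]:
                result = frame["beta"]          # fail-hard beta cutoff
                stack.pop()
                continue
            if score > frame["alpha"]:
                frame["alpha"] = score

        # Leaf: evaluate statically and "return".
        if frame["depth"] == 0:
            result = evaluate(board)
            stack.pop()
            continue

        # Generate this node's moves lazily, on first visit.
        if frame["moves"] is None:
            frame["moves"] = generate_moves(board)

        # All moves tried: "return" alpha.
        if frame["index"] >= len(frame["moves"]):
            result = frame["alpha"]
            stack.pop()
            continue

        # Descend into the next child with a negated, swapped window.
        move = frame["moves"][frame["index"]]
        frame["index"] += 1
        frame["move"] = move
        do_move(board, move)
        stack.append({"alpha": -frame["beta"], "beta": -frame["alpha"],
                      "moves": None, "index": 0,
                      "depth": frame["depth"] - 1, "move": None})

    return result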

What programming languages support arbitrary precision arithmetic?

What programming languages support arbitrary precision arithmetic and could you give a short example of how to print an arbitrary number of digits?
Some languages have this support built in. For example, take a look at java.math.BigDecimal in Java, or decimal.Decimal in Python.
Other languages frequently have a library available to provide this feature. For example, in C you could use GMP or other options.
The "Arbitrary-precision software" section of this article gives a good rundown of your options.
Mathematica.
N[Pi, 100]
3.141592653589793238462643383279502884197169399375105820974944592307816406286208998628034825342117068
Not only does Mathematica have arbitrary precision, but by default it has infinite precision. It keeps things like 1/3 as rationals, and it maintains even expressions involving things like Sqrt[2] symbolically until you ask for a numeric approximation, which you can have to any number of decimal places.
In Common Lisp,
(format t "~D~%" (expt 7 77))
"~D~%" in printf format would be "%d\n". Arbitrary precision arithmetic is built into Common Lisp.
Smalltalk has supported arbitrary-precision Integers and Fractions from the beginning.
Note that the GNU Smalltalk implementation uses GMP under the hood.
I'm also developing ArbitraryPrecisionFloat for various dialects (Squeak/Pharo, VisualWorks and Dolphin), see http://www.squeaksource.com/ArbitraryPrecisionFl.html
Python has this ability. There is an excellent example here.
From the article:
from math import log as _flog
from decimal import getcontext, Decimal
def log(x):
    if x < 0:
        return Decimal("NaN")
    if x == 0:
        return Decimal("-inf")
    getcontext().prec += 3
    eps = Decimal("10")**(-getcontext().prec+2)
    # A good initial estimate is needed
    r = Decimal(repr(_flog(float(x))))
    # Newton iteration; exp() here is the Decimal-based exponential
    # defined elsewhere in the same article (not math.exp)
    while 1:
        r2 = r - 1 + x/exp(r)
        if abs(r2-r) < eps:
            break
        else:
            r = r2
    getcontext().prec -= 3
    return +r
Also, the Python quick-start tutorial discusses arbitrary precision: http://docs.python.org/lib/decimal-tutorial.html
and describes getcontext:
the getcontext() function accesses the current context and allows the settings to be changed.
Edit: Added clarification on getcontext.
Many people recommended Python's decimal module, but I would recommend using mpmath over decimal for any serious numeric uses.
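For example, a small illustration of mine of what mpmath looks like:
from mpmath import mp

mp.dps = 50          # work with 50 significant decimal digits
print(mp.pi)         # pi to ~50 digits
print(mp.sqrt(2))    # sqrt(2) to ~50 digits
print(mp.exp(1))     # e to ~50 digits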
COBOL
77 VALUE PIC S9(4)V9(4).
a signed variable with 4 decimal places.
PL/1
DCL VALUE DEC FIXED (4,4);
:-) I can't remember the other old stuff...
Joking apart, as my examples show, I think you shouldn't choose a programming language depending on a single feature. Virtually all decent and recent languages support fixed precision in some dedicated classes.
Scheme (a variant of Lisp) has a capability called 'bignum'. There are many good Scheme implementations available, both full language environments and embeddable scripting options.
A few I can vouch for:
MIT Scheme (also referred to as GNU Scheme)
PLT Scheme
Chez Scheme
Guile (also a GNU project)
Scheme 48
Ruby's whole numbers are by default not strictly tied to the classical CPU-related limits: integers are automatically and transparently switched to a bignum representation when their size exceeds the maximum of the classical machine-word sizes (and the Rational class provides exact fractions).
One probably wants to use a reasonably optimized and complete math library that builds on these bignums. This is where Mathematica-like software truly shines with its capabilities.
As of 2011, Mathematica is extremely expensive and severely restricted from a hacking and reshipping point of view, especially if one wants to ship the math software as a component of a small, low-priced web application or an open-source project. If one only needs raw number crunching, with no visualizations required, then there is a very viable alternative to Mathematica and Maple: the REDUCE Computer Algebra System, which is Lisp-based, open source, mature (it has existed for decades) and still under active development (in 2011). Like Mathematica, REDUCE uses symbolic calculation.
In Mathematica's favor, as of 2011 it seems to me the best at interactive visualizations, but from a programming point of view I think there are more convenient alternatives even if Mathematica were an open-source project. Mathematica also seems a bit slow and not suitable for working with huge data sets; its niche seems to be theoretical math rather than real-life number crunching. On the other hand, Mathematica's publisher, Wolfram Research, hosts and maintains one of the highest-quality free math reference sites on planet Earth: http://mathworld.wolfram.com/
The online documentation system that comes bundled with Mathematica is also truly good.
When talking about speed, it's worth mentioning that REDUCE is said to run even on a Linux router. REDUCE itself is written in Lisp, and it comes with two Lisp implementations of its own, one implemented in Java and the other in C; both work decently, at least from a math point of view. REDUCE has two modes: the traditional "math mode" and a "programmer's mode" that allows full access to all of the internals through the language REDUCE itself is written in: Lisp.
So, my opinion is that if one looks at the amount of work it takes to write math routines, not to mention all of the symbolic calculation that is already mature in REDUCE, one can save an enormous amount of time (decades, literally) by doing most of the math part in REDUCE, especially since it has been tested and debugged by professional mathematicians over a long period of time, was used for symbolic calculations on old-era supercomputers for real professional tasks, and runs wonderfully, truly fast, on modern low-end computers. Nor has it crashed on me, unlike at least one commercial package that I won't name here.
http://www.reduce-algebra.com/
To illustrate where symbolic calculation is essential in practice, consider solving a system of linear equations by matrix inversion. To invert a matrix, one needs to find determinants. The rounding that takes place with the directly CPU-supported floating point types can turn a matrix that theoretically has an inverse into one that does not. This in turn creates a situation where most of the time the software works just fine, but if the data is a bit "unfortunate" the application crashes, despite the fact that algorithmically there's nothing wrong in the software other than the rounding of floating point numbers.
Absolute-precision rational numbers do have a serious limitation: the more computations are performed with them, the more memory they consume. As of 2011 I don't know of any solution to that problem other than being careful, keeping track of the number of operations performed with the numbers, and then rounding them to save memory; but one has to do the rounding at a very precise stage of the calculations to avoid the aforementioned problems. If possible, the rounding should be done at the very end of the calculations, as the very last operation.
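To make that concrete, here is a tiny illustration of mine in Python, which has exact rationals built in:
from fractions import Fraction

x = Fraction(1, 3)
print(x * 3 == 1)    # True: exact arithmetic, no rounding error

# But the exact representation grows as computations accumulate:
total = sum(Fraction(1, n) for n in range(1, 50))
print(total)         # a fraction with a very large numerator and denominator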
In PHP you have BCMath. You do not need to load any DLL or compile any module.
It supports numbers of any size and precision, represented as strings:
<?php
$a = '1.234';
$b = '5';
echo bcadd($a, $b); // 6
echo bcadd($a, $b, 4); // 6.2340
?>
Apparently Tcl also has them, from version 8.5, courtesy of LibTomMath:
http://wiki.tcl.tk/5193
http://www.tcl.tk/cgi-bin/tct/tip/237.html
http://math.libtomcrypt.com/
There are several JavaScript libraries that handle arbitrary-precision arithmetic.
For example, using my big.js library:
Big.DP = 20; // Decimal Places
var pi = Big(355).div(113);
console.log( pi.toString() ); // '3.14159292035398230088'
In R you can use the Rmpfr package:
library(Rmpfr)
exp(mpfr(1, 120))
## 1 'mpfr' number of precision 120 bits
## [1] 2.7182818284590452353602874713526624979
You can find the vignette here: Arbitrarily Accurate Computation with R: The Rmpfr Package
Java natively can do bignum operations with BigDecimal. GMP is the de facto standard bignum library for C/C++.
If you want to work in the .NET world you can still use the java.math.BigDecimal class. Just add a reference to vjslib (in the framework) and then you can use the Java classes.
The great thing is, they can be used from any .NET language. For example in C#:
using System;
using java.math;

namespace MyNamespace
{
    class Program
    {
        static void Main(string[] args)
        {
            BigDecimal bd = new BigDecimal("12345678901234567890.1234567890123456789");
            Console.WriteLine(bd.ToString());
        }
    }
}
The (free) BASIC dialect X11-Basic ( http://x11-basic.sourceforge.net/ ) has arbitrary precision for integers (and some useful commands as well, e.g. nextprime( abcd...pqrs)).
IBM's interpreted scripting language Rexx provides a custom precision setting with the NUMERIC instruction: https://www.ibm.com/docs/en/zos/2.1.0?topic=instructions-numeric.
The language runs on mainframes and PC operating systems and has very powerful parsing and variable handling, as well as extension packages. Object Rexx is the most recent implementation. Links from https://en.wikipedia.org/wiki/Rexx
Haskell has excellent support for arbitrary-precision arithmetic built in, and using it is the default behavior. At the REPL, with no imports or setup required:
Prelude> 2 ^ 2 ^ 12
1044388881413152506691752710716624382579964249047383780384233483283953907971557456848826811934997558340890106714439262837987573438185793607263236087851365277945956976543709998340361590134383718314428070011855946226376318839397712745672334684344586617496807908705803704071284048740118609114467977783598029006686938976881787785946905630190260940599579453432823469303026696443059025015972399867714215541693835559885291486318237914434496734087811872639496475100189041349008417061675093668333850551032972088269550769983616369411933015213796825837188091833656751221318492846368125550225998300412344784862595674492194617023806505913245610825731835380087608622102834270197698202313169017678006675195485079921636419370285375124784014907159135459982790513399611551794271106831134090584272884279791554849782954323534517065223269061394905987693002122963395687782878948440616007412945674919823050571642377154816321380631045902916136926708342856440730447899971901781465763473223850267253059899795996090799469201774624817718449867455659250178329070473119433165550807568221846571746373296884912819520317457002440926616910874148385078411929804522981857338977648103126085903001302413467189726673216491511131602920781738033436090243804708340403154190336
(try this yourself at https://tryhaskell.org/)
If you're writing code stored in a file and you want to print a number, you have to convert it to a string first. The show function does that.
module Test where
main = do
let x = 2 ^ 2 ^ 12
let xStr = show x
putStrLn xStr
(try this yourself at code.world: https://www.code.world/haskell#Pb_gPCQuqY7r77v1IHH_vWg)
What's more, Haskell's Num abstraction lets you defer deciding what type to use as long as possible.
-- Define a function to make big numbers. The (inferred) type is generic.
Prelude> superbig n = 2 ^ 2 ^ n
-- We can call this function with different concrete types and get different results.
Prelude> superbig 5 :: Int
4294967296
Prelude> superbig 5 :: Float
4.2949673e9
-- The `Int` type is not arbitrary precision, and we might overflow.
Prelude> superbig 6 :: Int
0
-- `Double` can hold bigger numbers.
Prelude> superbig 6 :: Double
1.8446744073709552e19
Prelude> superbig 9 :: Double
1.3407807929942597e154
-- But it is also not arbitrary precision, and can still overflow.
Prelude> superbig 10 :: Double
Infinity
-- The Integer type is arbitrary-precision though, and can go as big as we have memory for and patience to wait for the result.
Prelude> superbig 12 :: Integer
1044388881413152506691752710716624382579964249047383780384233483283953907971557456848826811934997558340890106714439262837987573438185793607263236087851365277945956976543709998340361590134383718314428070011855946226376318839397712745672334684344586617496807908705803704071284048740118609114467977783598029006686938976881787785946905630190260940599579453432823469303026696443059025015972399867714215541693835559885291486318237914434496734087811872639496475100189041349008417061675093668333850551032972088269550769983616369411933015213796825837188091833656751221318492846368125550225998300412344784862595674492194617023806505913245610825731835380087608622102834270197698202313169017678006675195485079921636419370285375124784014907159135459982790513399611551794271106831134090584272884279791554849782954323534517065223269061394905987693002122963395687782878948440616007412945674919823050571642377154816321380631045902916136926708342856440730447899971901781465763473223850267253059899795996090799469201774624817718449867455659250178329070473119433165550807568221846571746373296884912819520317457002440926616910874148385078411929804522981857338977648103126085903001302413467189726673216491511131602920781738033436090243804708340403154190336
-- If we don't specify a type, Haskell will infer one with arbitrary precision.
Prelude> superbig 12
1044388881413152506691752710716624382579964249047383780384233483283953907971557456848826811934997558340890106714439262837987573438185793607263236087851365277945956976543709998340361590134383718314428070011855946226376318839397712745672334684344586617496807908705803704071284048740118609114467977783598029006686938976881787785946905630190260940599579453432823469303026696443059025015972399867714215541693835559885291486318237914434496734087811872639496475100189041349008417061675093668333850551032972088269550769983616369411933015213796825837188091833656751221318492846368125550225998300412344784862595674492194617023806505913245610825731835380087608622102834270197698202313169017678006675195485079921636419370285375124784014907159135459982790513399611551794271106831134090584272884279791554849782954323534517065223269061394905987693002122963395687782878948440616007412945674919823050571642377154816321380631045902916136926708342856440730447899971901781465763473223850267253059899795996090799469201774624817718449867455659250178329070473119433165550807568221846571746373296884912819520317457002440926616910874148385078411929804522981857338977648103126085903001302413467189726673216491511131602920781738033436090243804708340403154190336
