Is there any API call in MiniSat to extract the unsat core, or any other method for doing the same?
I want to extract the unsat core on every invocation of the solver and then work with it.
MiniSat is quite an old program at this point. At the very least, you should look into one of the solvers entered into a recent SAT competition, e.g. Glucose. The competitions have required SAT solvers to emit DRAT proofs of unsatisfiability since 2013. Run whichever solver you choose and have it dump its DRAT proof into proof.out. Then feed proof.out into the drat-trim utility with the -c option, which will produce an UNSAT core in DIMACS format:
drat-trim originalproblem.cnf proof.out -c core.cnf
Note that you don't have to use a MiniSat clone; any modern solver that emits DRAT proofs can have its proof fed into drat-trim to yield an UNSAT core.
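For a concrete end-to-end pipeline with Glucose (the flag names below are from memory and may vary between Glucose versions, so check the solver's help output before relying on them):
glucose -certified -certified-output=proof.out originalproblem.cnf
drat-trim originalproblem.cnf proof.out -c core.cnf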
MiniSat is a constraint programming/satisfaction tool; there is a version of MiniSat which works in the browser: http://www.msoos.org/2013/09/minisat-in-your-browser/
How can I express a scheduling problem with MiniSat? Is there a higher-level language which compiles down to MiniSat input and would let me express it?
I mean for solving problems like exam timetabling. http://docs.jboss.org/drools/release/6.1.0.Final/optaplanner-docs/html_single/#examination
Another high-level modeling language is Picat (http://picat-lang.org/), which has an option to solve/2 that converts the model to CNF when using the sat module, e.g. "solve([dump], Vars)". The syntax when using the sat module - as well as the cp and mip modules - is similar to standard CLP syntax.
For some Picat examples, see my Picat page: http://hakank.org/picat/ .
SAT solvers like MiniSat or CryptoMiniSat typically read a set of clauses (logical OR expressions) in Conjunctive Normal Form (CNF). It takes an encoding step to translate your problem into this CNF format.
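For illustration, here is a tiny instance in the DIMACS CNF format these solvers read: the header line declares 3 variables and 2 clauses, each following line is one OR-clause terminated by 0, and a negative number denotes a negated variable. This file encodes (x1 OR NOT x2) AND (NOT x1 OR x2 OR x3):
p cnf 3 2
1 -2 0
-1 2 3 0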
Circuit SAT solvers process a nested Boolean expression rather than a CNF, but it appears that this type of solver is nowadays outperformed by the CNF SAT solvers.
Constraint programming systems like MiniZinc use a high-level language which is easier to write and to comprehend. Depending on the features being used, MiniZinc can translate its input language into a CNF/DIMACS format suitable for a SAT solver.
Peter Stuckey's paper "There are no CNF Problems" explains the idea. His slides also contain some insights on scheduling.
Have a look at the MiniZinc examples for scheduling written by Hakan Kjellerstrand.
Emmanuel Hebrard's Scheduling and SAT is an extensive treatment of the topic.
I worked on this project a few months ago.
It was really interesting to do.
To use MiniSat (or any other SAT solver),
you will have to reduce the scheduling problem to a SAT problem.
I can recommend this question that I asked in three parts:
Class Scheduling to Boolean satisfiability [Polynomial-time reduction]
Class Scheduling to Boolean satisfiability [Polynomial-time reduction] part 2
Class Scheduling to Boolean satisfiability [Polynomial-time reduction] Final Part
And you will basically see, step by step, how to transform the scheduling problem into a SAT problem that MiniSat can read and solve :).
Thanks again to @amit, who was a very big help in this project.
With this answer, you will be able to handle N rooms with T teachers, who are teaching S subjects to G different groups of students :), which is, I think, enough for 99% of scheduling problems.
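To give a flavor of what such a reduction produces: a constraint that appears throughout scheduling encodings is "at most one of these variables is true" (e.g. at most one class per room per time slot). Below is a minimal sketch of the standard pairwise encoding, written as a hypothetical Haskell helper (variable numbers and clause lists follow the DIMACS convention; this is not code from the linked posts):
import Data.List (tails)

-- Pairwise at-most-one: for every pair of variables, emit a clause
-- forbidding both from being true at once.
atMostOne :: [Int] -> [[Int]]
atMostOne vars = [ [-a, -b] | (a:rest) <- tails vars, b <- rest ]

-- atMostOne [1,2,3] == [[-1,-2],[-1,-3],[-2,-3]]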
Today I wanted to look into options for SAT solving in Haskell. First I thought about writing my own interface to the picosat solver.
Then I found out there is the SBV library.
It interfaces to Z3, Yices, CVC4 and Boolector.
Also, I did a search on GitHub and it turns out there is even a Picosat binding available.
Are there any other Haskell bindings to SAT solvers that are worth looking at, given the constraint of fast/high performance? Clarification: that are suitable for high-performance SAT solving (e.g., problems that run for days, as well as problems that need to finish as fast as possible, since I check 2^20 or more SAT problems). For example, what I am particularly missing on Hackage is a binding to a fast parallel SAT solver like Plingeling. (Also, I found out about the currently updated Picosat binding on GitHub more by accident, and I may very well be missing other options.)
The default option of the SBV library is the Z3 SMT solver. Am I right in my educated guess that picosat is faster for plain SAT-solving than Z3?
Disclosure, I'm the author of the Haskell picosat bindings you mentioned.
SBV is a really robust library that's been around for a while; it's good if you want an interface to external SMT or SAT solvers like Yices or Z3. Picosat is a much simpler library that I wrote simply because I wanted something that could be installed without external dependencies.
Am I right in my educated guess that picosat is faster for plain SAT-solving than Z3?
I don't know what your performance constraints are, but as far as underlying solver libraries go, you're not going to notice a significant difference between Z3 and Picosat until you hit really enormous problems (billions of variables). Both are very heavily optimized libraries, and the bottleneck (at least from the Haskell side) is likely going to be marshalling data between the library and Haskell's runtime.
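For completeness, a minimal usage sketch of the picosat package (from memory of its API, so treat the exact types as an assumption): clauses are lists of nonzero Ints in the DIMACS convention, and solve returns either a satisfying assignment or Unsatisfiable/Unknown:
import Picosat

main :: IO ()
main = do
  -- positive Int = variable, negative Int = its negation
  res <- solve [[1, -2], [2, 3], [-1, -3]]
  print res   -- e.g. Solution [1,2,-3]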
SBV is thread-safe.
Comparing Z3 and Lingeling for SAT performance is not an easy task. I'd hazard a guess that they would be more or less the same unless you take your time to figure out the exact parameters to fine-tune their internal heuristics.
The good thing is that SBV provides a common interface, so you can change the solver by merely importing a different bridge:
import Data.SBV.Bridge.Z3
vs
import Data.SBV.Bridge.Boolector
and if you compile Boolector to use Lingeling, then you can test performance easily by merely changing one line of Haskell.
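As a minimal sketch of what that looks like, using the SBV API of that era (newer SBV releases renamed the Boolean operators to .&& and sNot and dropped the Bridge modules, so adjust accordingly):
import Data.SBV.Bridge.Z3

-- Ask the bridged solver for an assignment with x true and y false.
test :: IO SatResult
test = sat $ do
  x <- sBool "x"
  y <- sBool "y"
  return (x &&& bnot y)

Swapping the import for Data.SBV.Bridge.Boolector is the only change needed to re-run the same query on Boolector.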
I have found a piece of code (a function) in a library that could be improved by compiler optimization (the main idea being to find good material for digging deeper into compilers). I want to automate the measurement of this function's execution time with a script. Since it is a low-level library function that takes arguments, it is difficult to extract it on its own. So I want to find a way to measure exactly this function (precise CPU time) without modifying the library, the application, or the environment. Do you have any ideas how to achieve that?
I could write a wrapper, but in the near future I will need to test many more applications, and writing a wrapper for every one seems very ugly.
P.S.: My code will run on the ARM (armv7el) architecture, which has "Performance Monitor Control" registers. I have read about "perf" in the Linux kernel, but I don't know whether it is what I need.
It is not clear if you have access to the source code of the function you want to profile or improve, i.e. whether you are able to recompile the library in question.
If you are using a recent GCC (that is, 4.6 at least) on a recent Linux system, you could use profilers like gprof (assuming you are able to recompile the library) or, better, oprofile (which you can use without recompiling), and you could customize GCC for your needs.
Be aware that like any measurements, profiling may alter the observed phenomenon.
If you are considering customizing the GCC compiler for optimization purposes, consider making a GCC plugin, or better yet, a MELT extension, for that purpose (MELT is a high-level domain specific language to extend GCC). You could also customize GCC (with MELT) for your own specific profiling purposes.
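Regarding the perf tool mentioned in the question: yes, it is a good fit, since it reads the ARM performance-monitor counters through the kernel and profiles an unmodified binary. A minimal sketch (./yourapp is a placeholder for the real program):
perf stat ./yourapp
perf record -g ./yourapp && perf report
perf stat prints aggregate counters (cycles, instructions, cache misses), while perf record/report attributes CPU time to individual functions, including functions inside shared libraries, which is what you need to time one library function without modifying anything.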
What are good tests to benchmark a crypto library?
Which unit (time, CPU cycles, ...) should we use to compare different crypto libraries?
Are there any tools, procedures, ...?
Any idea or comment is welcome!
Thank you for your inputs!
I assume you mean performance benchmarks. I would say that both time and cycles are valid benchmarks, as some code may execute differently on different architectures (perhaps wildly differently if they're different enough).
If it is extremely important to you, I would do the testing myself. You can use some timer (almost all languages have one) or you can use some profiler (almost all languages have one of these too) to figure out the exact performance for the algorithms you are looking for on your target platform.
If you are looking at one algorithm vs. another one, you can look for data that others have already gathered and that will give you a rough idea. For instance, here are some benchmarks from Crypto++:
http://www.cryptopp.com/benchmarks.html
Note that they use MB/Second and Cycles/Byte as metrics. I think those are very good choices.
Some very good answers before me, but keep in mind that optimizations are a very good way to leak key material through timing attacks (for example, see how devastating they can be for AES). If there is any chance an attacker can time your operations, you want not the fastest but the most constant-time library available (and possibly the most constant power usage available, if there is any chance someone can monitor that). OpenSSL does a great job of keeping on top of current attacks; I can't necessarily say the same of other libraries.
What are good tests to benchmark a crypto library?
The answers below are in the context of Crypto++. I don't know about other libraries, like OpenSSL, Botan, BouncyCastle, etc.
The Crypto++ library has a built-in benchmarking suite.
Which unit (time, CPU cycles, ...) should we use to compare different crypto libraries?
You typically measure performance in cycles-per-byte. A related metric is throughput, measured in MB/s. Converting between the two depends upon the CPU frequency.
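For example, at 2 GHz a cipher running at 4 cycles-per-byte processes 2×10^9 / 4 = 500 million bytes per second, or about 476 MB/s when a MB is taken as 2^20 bytes.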
Are there any tools, procedures, ...?
git clone https://github.com/weidai11/cryptopp.git
cd cryptopp
make static cryptest.exe
# for a 2.0 GHz machine: 2.0e9 / 2^30 = 1.8626 (the suite counts KB as 1024, not 1000)
make bench CRYPTOPP_CPU_SPEED=1.8626
make bench will create a file called benchmark.html.
If you want to manually run the tests, then:
./cryptest.exe b <time in seconds> <cpu speed in GHz>
It will output an HTML-like table without <HEAD> and <BODY> tags. You will still be able to view it in a web browser.
You can also check the Crypto++ benchmark page at Crypto++ Benchmarks. The information is dated, and updating it is on our TODO list.
You also need acumen for what looks right. For example, SSE4.2 and ARMv8 have a CRC32 instruction. Cycles-per-byte should go from about 3 or 5 cpb (software only) to about 1 or 1.5 cpb (hardware acceleration). That equates to a change from roughly 300 or 500 MB/s (software only) to roughly 1.5 GB/s (hardware acceleration) on modern hardware running at around 2 GHz.
Other technologies, like SSE2 and NEON, are trickier to work with. There's a theoretical cycles-per-byte and throughput you should see, but you may not know what it is. You may need to contact the authors of the algorithm to find out. For example, we contacted the authors of BLAKE2 to learn if our ARMv7/ARMv8 NEON implementation was performing as expected because it was missing benchmark results on the author's homepage.
I've also found GCC 4.6 (and above) with -O3 can make a big difference in software-only implementations. That's because GCC heavily vectorizes at -O3, and you might witness a 2x to 2.5x speedup. For example, the compiler may generate code that runs at 40 cpb at -O2, while at -O3 it may run at 15 or 19 cpb. A good SSE2 or NEON implementation should outperform the software-only implementation by at least a few cycles per byte. In the same example, the SSE2 or NEON implementation may run at 8 to 13 cpb.
There are also sites like OpenBenchmarking.org that may be able to provide some metrics for you.
My comments above aside, the US government has the FIPS program that you might want to look at. It's not perfect (by a long shot), but it's a start; you can get an idea of the things they look at when evaluating cryptography.
I also suggest looking at the Computer Security Division of the NIST.
On a side note, reviewing what the master, Bruce Schneier, has to say on the subject of Security Pitfalls in Cryptography is always good. Also: security is harder than it looks.
I am currently using a GCC 3.3.3 based cross compiler to compile for a Xscale PXA270 development board. However, I was wondering if there are other Xscale compilers out there that run on Linux (or Windows for that matter)? The cross compiler setup I am using has horrendous performance on the target device, with certain programs that do a decent amount of math operations performing 10 to 20 times worse on the Xscale processor than on a similarly clocked Pentium 2. Any other options for compilers out there or specific compiler flags I should be setting with my GCC-based compiler that may help with the performance?
Thanks,
Ben
Unlike the Pentium 2, the XScale architecture doesn't have native floating point instructions. That means floating point math has to be emulated using integer instructions - a 10 to 20 times slowdown sounds about right.
To improve performance, you can try a few things:
Where possible, minimise the use of floating point - in some places, you may be able to substitute plain integer or fixed-point calculations (see the worked example after this list);
Trade-off memory for speed, by precalculating tables of values where possible;
Use floats instead of doubles in calculations where you do not need the precision of the latter (including using the C99 float versions of math.h functions);
Minimise conversions between integers and floating point types.
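To make the fixed-point suggestion from the list concrete: in a 16.16 fixed-point format, the value 1.5 is stored as the integer 1.5 × 2^16 = 98304; addition and subtraction remain ordinary integer operations, and a multiplication becomes a 64-bit integer multiply followed by a 16-bit right shift, all of which the XScale executes natively.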
Yes, you don't have an FPU so floating point needs to be done in integer math. However, there are two mechanisms for doing this, and one is 11 times faster than the other.
GCC target arm-linux-gnu normally includes real floating point instructions in the code for ARM's first FPU, the "FPA", now so rare it is nonexistent. These cause illegal instruction traps which are then caught and emulated in the kernel. This is extremely slow due to the context switch.
-msoft-float instead inserts calls to library functions (in libgcc.a). This avoids the switch into kernel space and is 11 times faster that the emulated FPA instructions.
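For example (arm-linux-gnu-gcc is a placeholder here for whatever your cross compiler is actually called):
arm-linux-gnu-gcc -msoft-float -O2 -c foo.c -o foo.o
Every object file linked into the program, libraries included, must be built this way; mixing soft-float and FPA code in one binary will not work, since the two use incompatible calling conventions.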
You don't say what floating point model you are using - it may be that you are already building the whole userland with -msoft-float - but it might be worth checking that your object files contain no FPA instructions. You can check with:
objdump -d file | grep '<space><tab>f' | less
where file is any object file, executable or library that your compiler outputs. All FPA instructions start with f, while no other ARM instructions do. Those are actual space and tab characters there, and you might need to say <control-V><tab> to get the tab character past your shell.
If it is using FPA insns, you need to compile your entire userland using -msoft-float.
The most comprehensive further reading on these issues is http://wiki.debian.org/ArmEabiPort which is primarily concerned with a third alternative: using an arm-linux-gnueabi compiler, a newer alternative ABI that is available from gcc-4.1.1 onwards and which has different characteristics. See the document for further details.
"Other xscale compilers"
Open source: LLVM and pcc, of which LLVM is the most Linux-friendly and functional, and also has a GCC front-end; pcc, a descendant of the venerable Portable C Compiler, seems more BSD-oriented.
Commercial: The Keil compiler (owned by ARM Ltd) seems to produce faster code than GCC, but is not going to impact your lack of an FPU significantly.