Operation of bitvec in pySMT - model-checking

I'm learning bitvectors with pySMT, but I've run into a problem.
I have three bitvectors of size 1, called a, b and c, and I want to encode (a<b)&c:
a = Symbol("a",BVType(1))
b = Symbol("b",BVType(1))
c = Symbol("c",BVType(1))
f = BVAnd(BVUlt(a,b),c)
This fails: BVUlt(a,b) returns a BOOL rather than a BV(1), so I can't apply BVAnd afterwards.
Is there any solution, like converting a BOOL to BVType(1)?
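One common way around this (a minimal sketch, assuming pySMT's Ite and BV constant shortcuts and the BVULT comparison) is to map the Boolean comparison back to a 1-bit vector with an if-then-else:

from pysmt.shortcuts import Symbol, BV, BVAnd, BVULT, Ite
from pysmt.typing import BVType

a = Symbol("a", BVType(1))
b = Symbol("b", BVType(1))
c = Symbol("c", BVType(1))

# Ite turns the Boolean (a < b) into the 1-bit constants 1 and 0,
# so the result can then be combined with c via BVAnd.
lt_as_bv = Ite(BVULT(a, b), BV(1, 1), BV(0, 1))
f = BVAnd(lt_as_bv, c)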

Related

How does the syntax of an if/then/else within a do block work in Haskell?

I'm trying to make the following function:
repcountIORIban :: IORef -> Int -> Int -> Int -> Int -> Lock -> IORef -> Lock -> Int -> Int -> IO ()
repcountIORIban count number lower modulus amountthreads lock done lock2 difference rest = do
if rest > number
then let extra = 1
else let extra = 0
if number + 1 < amountthreads
then
forkIO $ realcountIORIban(count lower (lower + difference + extra - 1) modulus lock done lock2)
repcountIORIban (count (number + 1) (lower + difference + extra) modulus amountthreads lock done lock2 difference rest)
else
forkIO $ realcountIORIban(count lower (lower + difference + extra - 1) modulus lock done lock2)
But I can't compile the program that this function is part of. It gives me the error:
error: parse error on input `else'
|
113 | else let extra = 0
| ^^^^
I've got this error a lot of times within my program, but I don't know what I'm doing wrong.
This is incorrect: you can't put a let after then/else and expect those lets to define bindings that are visible below.
do if rest > number
then let extra = 1 -- wrong, needs a "do", or should be "let .. in .."
else let extra = 0
... -- In any case, extra is not visible here
Try this instead:
do let extra = if rest > number
then 1
else 0
...
Further, you need then do if you need to perform two or more actions after it.
if number + 1 < amountthreads
then do
something
somethingElse
else -- use do here if you have two or more actions
...
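Putting both fixes together, here is a minimal, self-contained sketch of the pattern (with simplified arguments and a hypothetical worker action standing in for realcountIORIban):

import Control.Concurrent (forkIO)
import Control.Monad (void)

-- hypothetical stand-in for realcountIORIban
worker :: Int -> IO ()
worker lo = putStrLn ("counting from " ++ show lo)

repcount :: Int -> Int -> Int -> Int -> IO ()
repcount number lower rest amountthreads = do
  -- a single let binds extra to the value of the whole if-expression
  let extra = if rest > number then 1 else 0
  if number + 1 < amountthreads
    then do                              -- "do" is needed to sequence two actions
      void $ forkIO (worker (lower + extra))
      repcount (number + 1) (lower + extra) rest amountthreads
    else
      void $ forkIO (worker (lower + extra))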

How to call model.matrix or equivalent from Rcpp, possibly in threaded code?

We were hoping to use threads to speed up an algorithm with many loops whose results are not interdependent.
Within the code we hoped to port to Rcpp, there is a call to model.matrix.
This did not appear straightforward to port.
Investigating further (as to what code this runs for our use case) revealed that the S3 method for lm objects does some preparatory work on the variable and then calls the default version of the function, as can be seen in this copy of the code:
function (object, ...)
{
if (n_match <- match("x", names(object), 0L))
object[[n_match]]
else {
data <- model.frame(object, xlev = object$xlevels, ...)
if (exists(".GenericCallEnv", inherits = FALSE))
NextMethod("model.matrix", data = data, contrasts.arg = object$contrasts)
else {
dots <- list(...)
dots$data <- dots$contrasts.arg <- NULL
do.call("model.matrix.default", c(list(object = object,
data = data, contrasts.arg = object$contrasts),
dots))
}
}
}
The default version of the function farms at least some of its functionality out to a compiled C function:
function (object, data = environment(object), contrasts.arg = NULL,
xlev = NULL, ...) {
t <- if (missing(data))
terms(object)
else terms(object, data = data)
if (is.null(attr(data, "terms")))
data <- model.frame(object, data, xlev = xlev)
else {
reorder <- match(vapply(attr(t, "variables"), deparse2,
"")[-1L], names(data))
if (anyNA(reorder))
stop("model frame and formula mismatch in model.matrix()")
if (!identical(reorder, seq_len(ncol(data))))
data <- data[, reorder, drop = FALSE]
}
int <- attr(t, "response")
if (length(data)) {
contr.funs <- as.character(getOption("contrasts"))
namD <- names(data)
for (i in namD) if (is.character(data[[i]]))
data[[i]] <- factor(data[[i]])
isF <- vapply(data, function(x) is.factor(x) || is.logical(x),
NA)
isF[int] <- FALSE
isOF <- vapply(data, is.ordered, NA)
for (nn in namD[isF]) if (is.null(attr(data[[nn]], "contrasts")))
contrasts(data[[nn]]) <- contr.funs[1 + isOF[nn]]
if (!is.null(contrasts.arg)) {
if (!is.list(contrasts.arg))
warning("non-list contrasts argument ignored")
else {
if (is.null(namC <- names(contrasts.arg)))
stop("'contrasts.arg' argument must be named")
for (nn in namC) {
if (is.na(ni <- match(nn, namD)))
warning(gettextf("variable '%s' is absent, its contrast will be ignored",
nn), domain = NA)
else {
ca <- contrasts.arg[[nn]]
if (is.matrix(ca))
contrasts(data[[ni]], ncol(ca)) <- ca
else contrasts(data[[ni]]) <- contrasts.arg[[nn]]
}
}
}
}
}
else {
isF <- FALSE
data[["x"]] <- raw(nrow(data))
}
ans <- .External2(C_modelmatrix, t, data)
if (any(isF))
attr(ans, "contrasts") <- lapply(data[isF], attr,
"contrasts")
ans
}
Is there some way of calling C_modelmatrix from Rcpp at all, whether single- or multi-threaded? Is there a library or package that does essentially the same thing from within Rcpp, so I don't have to reinvent the wheel? I'd rather not fully re-implement everything that model.matrix does if I can avoid it.
As we don't actually have functioning code yet, there isn't any to show.
The relevant portion of the function we were trying to speed up calls model.matrix like this (model.y is an lm; both data arguments are copies of an object originally returned by model.frame(model.y)):
ymat.t <- model.matrix(terms(model.y), data=pred.data.t)
ymat.c <- model.matrix(terms(model.y), data=pred.data.c)
This isn't really a results-based question; it's more of an approach/methods question.
You can call model.matrix from within C++, but you cannot do so in a multi-threaded way.
There will also be overhead, but if the function call is needed deep within the middle of your code, it could be worth it as a convenience.
Example:
#include <Rcpp.h>
using namespace Rcpp;

// [[Rcpp::export]]
RObject call(RObject x, RObject y){
    // look up model.matrix from R and call it on the two arguments
    Environment env = Environment::global_env();
    Function f = env["model.matrix"];
    RObject res = f(x, y);
    return res;
}

Find string in list - Erlang

I am trying to find out whether some string is really in a list. Here is my code:
comparing() ->
FileName = "msg-0001",
{ok,[NumLine],_} = io_lib:fread("msg-~d",FileName),
io:format("Numline:~p~n", [NumLine]),
{ok, Pars} = file:read_file("parsing.txt"),
{ok, Dump} = file:read_file("msg-0001"),
StringNumline = lists:flatten(io_lib:format("~p", [NumLine])),
io:format("StringNumline:~p~n", [StringNumline]),
StringDump = lists:flatten(io_lib:format("~p", [Dump])),
io:format("StringDump:~p~n", [StringDump]),
SubStringDump = string:substr(StringDump, 4),
io:format("SubStringDump:~p~n", [SubStringDump]),
Ndump = concat(StringNumline, SubStringDump),
io:format("Ndump:~p~n", [Ndump]),
FineDump = Ndump--"\">>",
io:format("FineDump:~p~n", [FineDump]),
L1 = binary:split(Pars, <<"\r\n">>, [global]),
io:format("L1=~p~n", [L1]),
Check = lists:member(FineDump, L1),
io:format("Check=~p~n", [Check]),
if
Check ->
file:write_file("check.txt", "true\n", [append]);
true ->
file:write_file("check.txt", "false\n", [append])
end.
Here is the output of the code:
10> c(compare).
{ok,compare}
11> compare:comparing().
Numline:1
StringNumline:"1"
StringDump:"<<\"hello\">>"
SubStringDump:"hello\">>"
Ndump:"1hello\">>"
FineDump:"1hello"
L1=[<<"0hello">>,<<"something">>,<<"anyword">>,<<"1hello">>,<<"2exercise">>,
<<"2solution">>,<<"3test">>,<<"new">>,<<"4check">>,<<"4grade">>]
Check=false
ok
My problem is with the line Check = lists:member(FineDump, L1): it's always false, although 1hello is a member of the list. I don't know where the mistake is. Is lists:member the right function for this operation, or is there some other way to find out whether a string is a member of a list? I'm new to Erlang.
L1 is a list of binaries while FineDump is a string (a list of integers in Erlang). You need to convert FineDump into a binary to make the lists:member/2 call work.
This should work:
Check = lists:member(list_to_binary(FineDump), L1),
You also seem to be doing this in a far more convoluted way than necessary. If I understood the logic correctly, you don't need all that code. You can concatenate NumLine and Dump into a binary using just:
X = <<(integer_to_binary(NumLine))/binary, Dump/binary>>
and then use that directly in lists:member:
lists:member(X, L1)
1> NumLine = 1.
1
2> Dump = <<"hello">>.
<<"hello">>
3> <<(integer_to_binary(NumLine))/binary, Dump/binary>>.
<<"1hello">>

What are the differences between Lwt.async and Lwt_main.run on OCaml/Node.JS?

I am experimenting with js_of_ocaml and node.js. As you know, node.js makes extensive use of callbacks to implement asynchronous requests without introducing explicit threads.
In OCaml we have a very nice threading library, Lwt, coming with a very useful syntax extension. I wrote a prototype with a binding to a node library (an AWS S3 client) and added an Lwt-ish layer to hide the callbacks.
open Lwt.Infix
open Printf
open Js
let require_module s =
Js.Unsafe.fun_call
(Js.Unsafe.js_expr "require")
[|Js.Unsafe.inject (Js.string s)|]
let _js_aws = require_module "aws-sdk"
let array_to_list a =
let ax = ref [] in
begin
for i = 0 to a##.length - 1 do
Optdef.iter (array_get a i) (fun x -> ax := x :: !ax)
done;
!ax
end
class type error = object
end
class type bucket = object
method _Name : js_string t readonly_prop
method _CreationDate : date t readonly_prop
end
class type listBucketsData = object
method _Buckets : (bucket t) js_array t readonly_prop
end
class type s3 = object
method listBuckets :
(error -> listBucketsData t -> unit) callback -> unit meth
end
let createClient : unit -> s3 t = fun () ->
let constr_s3 = _js_aws##.S3 in
new%js constr_s3 ()
module S3 : sig
type t
val create : unit -> t
val list_buckets : t -> (string * string) list Lwt.t
end = struct
type t = s3 Js.t
let create () =
createClient ()
let list_buckets client =
let cell_of_bucket_data data =
((to_string data##._Name),
(to_string data##._CreationDate##toString))
in
let mvar = Lwt_mvar.create_empty () in
let callback error buckets =
let p () =
if true then
Lwt_mvar.put mvar
(`Ok(List.map cell_of_bucket_data @@ array_to_list buckets##._Buckets))
else
Lwt_mvar.put mvar (`Error("Ups"))
in
Lwt.async p
in
begin
client##listBuckets (wrap_callback callback);
Lwt.bind
(Lwt_mvar.take mvar)
(function
| `Ok(whatever) -> Lwt.return whatever
| `Error(mesg) -> Lwt.fail_with mesg)
end
end
let () =
let s3 = S3.create() in
let dump lst =
Lwt_list.iter_s
(fun (name, creation_date) ->
printf "%32s\t%s\n" name creation_date;
Lwt.return_unit)
lst
in
let t () =
S3.list_buckets s3
>>= dump
in
begin
Lwt.async t
end
Since there is no binding to Lwt_main for node.js, I had to run my code with Lwt.async. What are the differences between running the code with Lwt.async and running it with Lwt_main.run – the latter not existing in node.js? Is it guaranteed that the program will wait until the asynchronous threads are completed before exiting, or is this rather a lucky but random behaviour of my code?
The Lwt_main.run function recursively polls the thread whose execution it supervises. At each iteration, if this thread is still running, the scheduler uses one engine (from Lwt_engine) to execute threads waiting for I/Os, either by selecting or synchronising on events.
The natural way to translate this in Node.JS is to use the process.nextTick method, which relies on Node.JS own scheduler. Implementing the Lwt_main.run function in this case can be as simple as:
let next_tick (callback : unit -> unit) =
Js.Unsafe.(fun_call
(js_expr "process.nextTick")
[| inject (Js.wrap_callback callback) |])
let rec run t =
Lwt.wakeup_paused ();
match Lwt.poll t with
| Some x -> x
| None -> next_tick (fun () -> run t)
This function only runs threads of type unit Lwt.t, but this is the main case for a program. It is possible to compute arbitrary values by using an Lwt_mvar.t to communicate.
It is also possible to extend this example to support all sorts of hooks, as in the original Lwt_main.run implementation.
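As an illustration of the Lwt_mvar.t point, here is a sketch that reuses run from above together with the s3 and dump values from the question (assuming they are in scope): the value-producing thread stores its result in an mvar, and a separate unit thread takes it out and consumes it.

let () =
  let mvar = Lwt_mvar.create_empty () in
  (* the producer puts its result into the mvar ... *)
  run (S3.list_buckets s3 >>= Lwt_mvar.put mvar);
  (* ... and a unit Lwt.t thread takes it and prints it *)
  run (Lwt_mvar.take mvar >>= dump)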

Does multithreading conflict with Map in F#?

let len = 25000000
let map = Map.ofArray[|for i =1 to len do yield (i,i+1)|]
let maparr = [|map;map;map;map|]
let f1 i =
for i1 =1 to len do
let l1 = maparr.[i-1].Item(i1)
()
let index = [|1..4|]
let _ = index |> Array.Parallel.map f1
printf "done"
I found that only one core is working at full speed with the code above, but what I expect is all four threads working together with a high level of CPU usage. So it seems that multithreading conflicts with Map, am I right? If not, how can I achieve my initial goal? Thanks in advance.
I think you were tripping a heuristic where the library assumes that, when there are only a small number of tasks, it is fastest to just use a single thread.
This code maxes out all threads on my computer:
let len = 1000000
let map = Map.ofArray[|for i =1 to len do yield (i,i+1)|]
let maparr = [|map;map;map;map|]
let f1 (m:Map<_,_>) =
let mutable sum = 0
for i1 =1 to len do
let l1 = m.Item(i1)
for i = 1 to 10000 do
sum <- sum + 1
printfn "%i" sum
let index = [|1..40|]
printfn "starting"
index |> Array.map (fun t -> maparr.[(t-1)/10]) |> Array.Parallel.iter f1
printf "done"
Important changes:
Reduced len significantly. In your code, almost all the time was spent creating the map.
Actually do work in the loop. In your code, it is possible that the loop was optimised to a no-op.
Run many more tasks. This tricks the scheduler into using more threads, and all is good.
