Haskell and large binary

When broadcasting a BinaryString through a TChan, what gets copied: the whole binary or just a reference?
If the whole binary gets copied, how can I send only a reference?

Only a reference is written to the TChan; the payload is not copied. It would be far too inefficient to copy all the data every time, and since the data is immutable (in general; you can cheat), it is safe to transfer only references.

To be a bit more precise than Daniel (and to confirm Daniel's suspicion in his comment): a pointer to the constructor of the BinaryString (do you mean ByteString?) is written to the TVar.
Let us confirm this by checking the relevant code. TChan is built on top of TVar and uses writeTVar to write the value, which is implemented in GHC.Conc.Sync (and re-exported by GHC.Conc and Control.Concurrent.STM.TVar):
-- |Write the supplied value into a TVar
writeTVar :: TVar a -> a -> STM ()
writeTVar (TVar tvar#) val = STM $ \s1# ->
    case writeTVar# tvar# val s1# of
        s2# -> (# s2#, () #)
The argument is just passed along to writeTVar#, a primitive operation implemented in rts/PrimOps.cmm:
stg_writeTVarzh
{
    W_ trec;
    W_ tvar;
    W_ new_value;

    /* Args: R1 = TVar closure */
    /*       R2 = New value */

    MAYBE_GC (R1_PTR & R2_PTR, stg_writeTVarzh); // Call to stmWriteTVar may allocate

    trec = StgTSO_trec(CurrentTSO);
    tvar = R1;
    new_value = R2;
    foreign "C" stmWriteTVar(MyCapability() "ptr", trec "ptr", tvar "ptr", new_value "ptr") [];

    jump %ENTRY_CODE(Sp(0));
}
This wraps the following code in rts/STM.c:
void stmWriteTVar(Capability *cap,
                  StgTRecHeader *trec,
                  StgTVar *tvar,
                  StgClosure *new_value) {

  StgTRecHeader *entry_in = NULL;
  TRecEntry *entry = NULL;
  TRACE("%p : stmWriteTVar(%p, %p)", trec, tvar, new_value);
  ASSERT (trec != NO_TREC);
  ASSERT (trec -> state == TREC_ACTIVE ||
          trec -> state == TREC_CONDEMNED);

  entry = get_entry_for(trec, tvar, &entry_in);

  if (entry != NULL) {
    if (entry_in == trec) {
      // Entry found in our trec
      entry -> new_value = new_value;
    } else {
      // Entry found in another trec
      TRecEntry *new_entry = get_new_entry(cap, trec);
      new_entry -> tvar = tvar;
      new_entry -> expected_value = entry -> expected_value;
      new_entry -> new_value = new_value;
    }
  } else {
    // No entry found
    StgClosure *current_value = read_current_value(trec, tvar);
    TRecEntry *new_entry = get_new_entry(cap, trec);
    new_entry -> tvar = tvar;
    new_entry -> expected_value = current_value;
    new_entry -> new_value = new_value;
  }

  TRACE("%p : stmWriteTVar done", trec);
}
And here we see that new_value is a pointer that is never dereferenced and is simply stored as such: only the reference ends up in the TVar.
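To see what this looks like from the user side, here is a minimal sketch (assuming the standard stm and bytestring packages; the file name is made up) that broadcasts a ByteString through a TChan. Only the reference to the heap object is enqueued; no payload bytes are copied:
import Control.Concurrent (forkIO)
import Control.Concurrent.STM
import qualified Data.ByteString as BS

main :: IO ()
main = do
    chan   <- newBroadcastTChanIO             -- writers broadcast, readers use dupTChan
    reader <- atomically (dupTChan chan)
    _ <- forkIO $ do
        payload <- BS.readFile "big-file.bin"  -- hypothetical large input
        atomically (writeTChan chan payload)   -- only a pointer is enqueued
    received <- atomically (readTChan reader)
    print (BS.length received)                 -- same heap object as 'payload'
Each reader obtained via dupTChan sees the same heap object; nothing is duplicated except the channel cells themselves.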

Related

How to update a Data attribute defined with Maybe

I'm playing with Haskell using the https://hackage.haskell.org/package/cursor library. I have this data definition:
data TuidoState =
    TuidoState { tuidoStateEntries :: Maybe (NonEmptyCursor Entry) }
And I have this function:
buildNewItem :: TuidoState -> TuidoState
buildNewItem s =
    let nextID = 10 -- TODO update here to function to return ID
        headerTitle = "Test new item"
        newEntry = Entry { entryHeader = Header { headerTitle = headerTitle }
                         , entryBody = Just (Body { bodyTitle = headerTitle })
                         , entryTags = [Tag { tagName = headerTitle }]
                         }
        actualEntries = tuidoStateEntries s
        ne = NE.nonEmpty [newEntry]
    in
        case actualEntries of
            Nothing ->
                s { tuidoStateEntries = Just(makeNonEmptyCursor ne) }
            Just value -> s { tuidoStateEntries = Just(value) } -- possible here I will want to just add the new Entry to the existing list
But I cannot understand the error:
• Couldn't match expected type ‘NE.NonEmpty Entry’
with actual type ‘Maybe (NE.NonEmpty Entry)’
• In the first argument of ‘makeNonEmptyCursor’, namely ‘ne’
In the first argument of ‘Just’, namely ‘(makeNonEmptyCursor ne)’
In the ‘tuidoStateEntries’ field of a record
|
327 | s { tuidoStateEntries = Just(makeNonEmptyCursor ne) }
Could someone help me with it?
nonEmpty takes an arbitrary list, so it cannot guarantee that the result is non-empty. Instead, it returns a Maybe (NonEmpty a): Just a non-empty list, or Nothing if the input list was empty.
Consider using NonEmpty's constructor, (:|), directly instead.
ne = newEntry NE.:| []
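A minimal sketch of the difference, using only Data.List.NonEmpty (the names maybeNe and ne below are just for illustration):
import qualified Data.List.NonEmpty as NE

-- nonEmpty must account for the input list being empty,
-- so it wraps its result in Maybe:
maybeNe :: Maybe (NE.NonEmpty Int)
maybeNe = NE.nonEmpty [1, 2, 3]   -- Just (1 :| [2,3]); NE.nonEmpty [] gives Nothing

-- (:|) builds a NonEmpty directly from a head and a tail, so no Maybe
-- is involved and the value can be passed straight to functions that
-- expect a NonEmpty (such as makeNonEmptyCursor):
ne :: NE.NonEmpty Int
ne = 1 NE.:| [2, 3]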

How to call model.matrix or equivalent from RCPP, possibly in threaded code?

We were hoping to use threads to speed up an algorithm with many loops whose results are not interdependent.
Within the code we hoped to port to Rcpp, there is a call to model.matrix.
This did not appear straightforward to port.
Investigating what code this actually runs for our use case revealed that the S3 method for lm objects does some preparatory work on the variable and then calls the default version of the function, as can be seen in this copy of the code:
function (object, ...)
{
    if (n_match <- match("x", names(object), 0L))
        object[[n_match]]
    else {
        data <- model.frame(object, xlev = object$xlevels, ...)
        if (exists(".GenericCallEnv", inherits = FALSE))
            NextMethod("model.matrix", data = data, contrasts.arg = object$contrasts)
        else {
            dots <- list(...)
            dots$data <- dots$contrasts.arg <- NULL
            do.call("model.matrix.default", c(list(object = object,
                data = data, contrasts.arg = object$contrasts),
                dots))
        }
    }
}
The default version of the function farms at least some of its functionality out to a compiled C function:
function (object, data = environment(object), contrasts.arg = NULL,
          xlev = NULL, ...) {
    t <- if (missing(data))
        terms(object)
    else terms(object, data = data)
    if (is.null(attr(data, "terms")))
        data <- model.frame(object, data, xlev = xlev)
    else {
        reorder <- match(vapply(attr(t, "variables"), deparse2, "")[-1L],
                         names(data))
        if (anyNA(reorder))
            stop("model frame and formula mismatch in model.matrix()")
        if (!identical(reorder, seq_len(ncol(data))))
            data <- data[, reorder, drop = FALSE]
    }
    int <- attr(t, "response")
    if (length(data)) {
        contr.funs <- as.character(getOption("contrasts"))
        namD <- names(data)
        for (i in namD) if (is.character(data[[i]]))
            data[[i]] <- factor(data[[i]])
        isF <- vapply(data, function(x) is.factor(x) || is.logical(x), NA)
        isF[int] <- FALSE
        isOF <- vapply(data, is.ordered, NA)
        for (nn in namD[isF]) if (is.null(attr(data[[nn]], "contrasts")))
            contrasts(data[[nn]]) <- contr.funs[1 + isOF[nn]]
        if (!is.null(contrasts.arg)) {
            if (!is.list(contrasts.arg))
                warning("non-list contrasts argument ignored")
            else {
                if (is.null(namC <- names(contrasts.arg)))
                    stop("'contrasts.arg' argument must be named")
                for (nn in namC) {
                    if (is.na(ni <- match(nn, namD)))
                        warning(gettextf("variable '%s' is absent, its contrast will be ignored",
                                         nn), domain = NA)
                    else {
                        ca <- contrasts.arg[[nn]]
                        if (is.matrix(ca))
                            contrasts(data[[ni]], ncol(ca)) <- ca
                        else contrasts(data[[ni]]) <- contrasts.arg[[nn]]
                    }
                }
            }
        }
    }
    else {
        isF <- FALSE
        data[["x"]] <- raw(nrow(data))
    }
    ans <- .External2(C_modelmatrix, t, data)
    if (any(isF))
        attr(ans, "contrasts") <- lapply(data[isF], attr, "contrasts")
    ans
}
Is there some way of calling C_modelmatrix from Rcpp at all, whether single- or multi-threaded? Is there any library or package that does essentially the same thing from within Rcpp, so I don't have to reinvent the wheel? I'd rather not fully re-implement everything model.matrix does if I can avoid it.
Since we don't actually have functioning code yet, there isn't any to show.
The relevant portion of the function we were trying to speed up calls model.matrix like this (model.y is an lm; pred.data.t and pred.data.c are both copies of an original object returned by model.frame(model.y)):
ymat.t <- model.matrix(terms(model.y), data=pred.data.t)
ymat.c <- model.matrix(terms(model.y), data=pred.data.c)
This isn't really a results-based question; it's more of an approach/methods question.
You can call model.matrix from within C++, but you cannot do so in a multi-threaded way.
There will also be overhead, but if the function call is needed deep within the middle of your code, it could be worth it as a convenience.
Example:
#include <Rcpp.h>
using namespace Rcpp;

// [[Rcpp::export]]
RObject call(RObject x, RObject y) {
    // Fetch model.matrix via the global environment and call it from C++.
    // Note: this calls back into R, so it must run on the main R thread;
    // the R API is not thread-safe.
    Environment env = Environment::global_env();
    Function f = env["model.matrix"];
    RObject res = f(x, y);
    return res;
}

LTL properties and Promela program

I have the following program that models a FIFO with a process in PROMELA:
mtype = { PUSH, POP, IS_EMPTY, IS_FULL };

#define PRODUCER_UID 0
#define CONSUMER_UID 1

proctype fifo(chan inputs, outputs)
{
    mtype command;
    int data, tmp, src_uid;
    bool data_valid = false;

    do
        :: true ->
            inputs?command(tmp, src_uid);
            if
                :: command == PUSH ->
                    if
                        :: data_valid ->
                            outputs!IS_FULL(true, src_uid);
                        :: else ->
                            data = tmp;
                            data_valid = true;
                            outputs!PUSH(data, src_uid);
                    fi
                :: command == POP ->
                    if
                        :: !data_valid ->
                            outputs!IS_EMPTY(true, src_uid);
                        :: else ->
                            outputs!POP(data, src_uid);
                            data = -1;
                            data_valid = false;
                    fi
                :: command == IS_EMPTY ->
                    outputs!IS_EMPTY(!data_valid, src_uid);
                :: command == IS_FULL ->
                    outputs!IS_FULL(data_valid, src_uid);
            fi;
    od;
}

proctype producer(chan inputs, outputs)
{
    mtype command;
    int v;

    do
        :: true ->
            atomic {
                inputs!IS_FULL(false, PRODUCER_UID) ->
                outputs?IS_FULL(v, PRODUCER_UID);
            }
            if
                :: v == 1 ->
                    skip
                :: else ->
                    select(v: 0..16);
                    printf("P[%d] - produced: %d\n", _pid, v);
                    access_fifo:
                    atomic {
                        inputs!PUSH(v, PRODUCER_UID);
                        outputs?command(v, PRODUCER_UID);
                    }
                    assert(command == PUSH);
            fi;
    od;
}

proctype consumer(chan inputs, outputs)
{
    mtype command;
    int v;

    do
        :: true ->
            atomic {
                inputs!IS_EMPTY(false, CONSUMER_UID) ->
                outputs?IS_EMPTY(v, CONSUMER_UID);
            }
            if
                :: v == 1 ->
                    skip
                :: else ->
                    access_fifo:
                    atomic {
                        inputs!POP(v, CONSUMER_UID);
                        outputs?command(v, CONSUMER_UID);
                    }
                    assert(command == POP);
                    printf("P[%d] - consumed: %d\n", _pid, v);
            fi;
    od;
}

init {
    chan inputs = [0] of { mtype, int, int };
    chan outputs = [0] of { mtype, int, int };

    run fifo(inputs, outputs);     // pid: 1
    run producer(inputs, outputs); // pid: 2
    run consumer(inputs, outputs); // pid: 3
}
I want to add wr_ptr and rd_ptr to the program, to indicate the write and read pointers relative to the depth of the FIFO. When a PUSH update is performed:
wr_ptr = wr_ptr % depth;
empty = 0;
if
    :: (rd_ptr == wr_ptr) -> full = true;
fi
and similar changes on POP updates.
Could you please help me add this to the program?
Or should I make it an LTL property and use that to check it?
From the comments: I want to verify this property, for example: if the FIFO is full, one should not have a write request. Is this the right syntax? full means that the FIFO is full and wr_idx is the write pointer. I do not know how to access full, empty, wr_idx, rd_idx, depth of the fifo process in the properties: ltl fifo_no_write_when_full {[] (full -> ! wr_idx)}
Here is an example of the process-based FIFO of size 1 that I gave you here, adapted for an arbitrary size, which can be configured with FIFO_SIZE. For verification purposes, I would keep this value as small as possible (e.g. 3), because otherwise you are just widening the state space without including any more significant behaviour.
mtype = { PUSH, POP, IS_EMPTY, IS_FULL };

#define PRODUCER_UID 0
#define CONSUMER_UID 1
#define FIFO_SIZE 10

proctype fifo(chan inputs, outputs)
{
    mtype command;
    int tmp, src_uid;
    int data[FIFO_SIZE];
    byte head = 0;
    byte count = 0;
    bool res;

    do
        :: true ->
            inputs?command(tmp, src_uid);
            if
                :: command == PUSH ->
                    if
                        :: count >= FIFO_SIZE ->
                            outputs!IS_FULL(true, src_uid);
                        :: else ->
                            data[(head + count) % FIFO_SIZE] = tmp;
                            count = count + 1;
                            outputs!PUSH(data[(head + count - 1) % FIFO_SIZE], src_uid);
                    fi
                :: command == POP ->
                    if
                        :: count <= 0 ->
                            outputs!IS_EMPTY(true, src_uid);
                        :: else ->
                            outputs!POP(data[head], src_uid);
                            atomic {
                                head = (head + 1) % FIFO_SIZE;
                                count = count - 1;
                            }
                    fi
                :: command == IS_EMPTY ->
                    res = count <= 0;
                    outputs!IS_EMPTY(res, src_uid);
                :: command == IS_FULL ->
                    res = count >= FIFO_SIZE;
                    outputs!IS_FULL(res, src_uid);
            fi;
    od;
}
No change to producer, consumer or init was necessary:
proctype producer(chan inputs, outputs)
{
    mtype command;
    int v;

    do
        :: true ->
            atomic {
                inputs!IS_FULL(false, PRODUCER_UID) ->
                outputs?IS_FULL(v, PRODUCER_UID);
            }
            if
                :: v == 1 ->
                    skip
                :: else ->
                    select(v: 0..16);
                    printf("P[%d] - produced: %d\n", _pid, v);
                    access_fifo:
                    atomic {
                        inputs!PUSH(v, PRODUCER_UID);
                        outputs?command(v, PRODUCER_UID);
                    }
                    assert(command == PUSH);
            fi;
    od;
}

proctype consumer(chan inputs, outputs)
{
    mtype command;
    int v;

    do
        :: true ->
            atomic {
                inputs!IS_EMPTY(false, CONSUMER_UID) ->
                outputs?IS_EMPTY(v, CONSUMER_UID);
            }
            if
                :: v == 1 ->
                    skip
                :: else ->
                    access_fifo:
                    atomic {
                        inputs!POP(v, CONSUMER_UID);
                        outputs?command(v, CONSUMER_UID);
                    }
                    assert(command == POP);
                    printf("P[%d] - consumed: %d\n", _pid, v);
            fi;
    od;
}

init {
    chan inputs = [0] of { mtype, int, int };
    chan outputs = [0] of { mtype, int, int };

    run fifo(inputs, outputs);     // pid: 1
    run producer(inputs, outputs); // pid: 2
    run consumer(inputs, outputs); // pid: 3
}
Now you should have enough material to work on, and you should be ready to write your own properties. In this regard, in your question you write:
I do not know how to access the full, empty, wr_idx, rd_idx, depth on the fifo process in the properties ltl fifo_no_write_when_full {[] (full -> ! wr_idx)}
First of all, please note that in my code rd_idx corresponds to head, depth (should) correspond to count, and that I did not use an explicit wr_idx because it can be derived from the former two: it is given by (head + count) % FIFO_SIZE (for instance, with FIFO_SIZE = 4, head = 3 and count = 2, the next write goes to slot (3 + 2) % 4 = 1). This is not just a matter of code cleanliness: having fewer variables in a Promela model actually helps with the memory consumption and running time of the verification process.
Of course, if you really want to have wr_idx in your model you are free to add it yourself. (:
Second, if you look at the Promela manual for ltl properties, you find that:
The names or symbols must be defined to represent boolean expressions on global variables from the model.
So, in other words, it is not possible to refer to local variables inside an ltl expression. If you want to use them, you should move them out of the process's local space and into the global space.
So, to check fifo_no_write_when_full* you could:
move the declaration of count out into the global space
add a label fifo_write: here:
:: command == PUSH ->
    if
        :: count >= FIFO_SIZE ->
            outputs!IS_FULL(true, src_uid);
        :: else ->
            fifo_write:
            data[(head + count) % FIFO_SIZE] = tmp;
            count = count + 1;
            outputs!PUSH(data[(head + count - 1) % FIFO_SIZE], src_uid);
    fi
check the property:
ltl fifo_no_write_when_full { [] ( (count >= FIFO_SIZE) -> ! fifo#fifo_write) }
Third, before any attempt to verify any of your properties with the usual commands, e.g.
~$ spin -a fifo.pml
~$ gcc -o fifo pan.c
~$ ./fifo -a -N fifo_no_write_when_full
you should modify producer and consumer so that neither of them executes for an indefinite amount of time, and thereby keep the search space at a small depth. Otherwise you are likely to get an error of the sort
error: max search depth too small
and the verification will exhaust all of your hardware resources without reaching any sensible conclusion.
*: actually the name fifo_no_write_when_full is quite generic and might have multiple interpretations, e.g.
the fifo does not perform a push when it is full
the producer is not able to push if the fifo is full
In the example I provided I chose to adopt the first interpretation of the property.

Joining on the first finished thread?

I'm writing up a series of graph-searching algorithms in F# and thought it would be nice to take advantage of parallelization. I wanted to execute several threads in parallel and take the result of the first one to finish. I've got an implementation, but it's not pretty.
Two questions: is there a standard name for this sort of function? Not a Join or a JoinAll, but a JoinFirst? Second, is there a more idiomatic way to do this?
//implementation
let makeAsync (locker:obj) (shared:'a option ref) (f:unit->'a) =
    async {
        let result = f()
        Monitor.Enter locker
        shared := Some result
        Monitor.Pulse locker
        Monitor.Exit locker
    }

let firstFinished test work =
    let result = ref Option.None
    let locker = new obj()
    let cancel = new CancellationTokenSource()
    work
    |> List.map (makeAsync locker result)
    |> List.map (fun a -> Async.StartAsTask(a, TaskCreationOptions.None, cancel.Token))
    |> ignore
    Monitor.Enter locker
    while (result.Value.IsNone || (not <| test result.Value.Value)) do
        Monitor.Wait locker |> ignore
    Monitor.Exit locker
    cancel.Cancel()
    match result.Value with
    | Some x -> x
    | None -> failwith "Don't pass in an empty list"
//end implementation

//testing
let delayReturn (ms:int) value =
    fun () ->
        Thread.Sleep ms
        value

let test () =
    let work = [ delayReturn 1000 "First!"; delayReturn 5000 "Second!" ]
    let result = firstFinished (fun _ -> true) work
    printfn "%s" result
Would it work to pass the CancellationTokenSource and test to each async and have the first that computes a valid result cancel the others?
let makeAsync (cancel:CancellationTokenSource) test f =
    let rec loop() =
        async {
            if cancel.IsCancellationRequested then
                return None
            else
                let result = f()
                if test result then
                    cancel.Cancel()
                    return Some result
                else return! loop()
        }
    loop()

let firstFinished test work =
    match work with
    | [] -> invalidArg "work" "Don't pass in an empty list"
    | _ ->
        let cancel = new CancellationTokenSource()
        work
        |> Seq.map (makeAsync cancel test)
        |> Seq.toArray
        |> Async.Parallel
        |> Async.RunSynchronously
        |> Array.pick id
This approach makes several improvements: 1) it uses only async (it is not mixed with Task, which is an alternative way of doing the same thing; async is more idiomatic in F#); 2) there is no shared state, other than the CancellationTokenSource, which was designed for that purpose; 3) the clean function-chaining approach makes it easy to add additional logic/transformations to the pipeline, including trivially enabling or disabling parallelism.
With the Task Parallel Library in .NET 4, this is called WaitAny. For example, the following snippet creates 10 tasks and waits for any of them to complete:
open System.Threading

Array.init 10 (fun _ ->
    Tasks.Task.Factory.StartNew(fun () ->
        Thread.Sleep 1000))
|> Tasks.Task.WaitAny
In case you are OK with using Reactive Extensions (Rx) in your project, the joinFirst method can be implemented as:
let joinFirst (f : (unit->'a) list) =
    let c = new CancellationTokenSource()
    let o =
        f
        |> List.map (fun i ->
            let j = fun () -> Async.RunSynchronously (async { return i() }, -1, c.Token)
            Observable.Defer(fun () -> Observable.Start(j)))
        |> Observable.Amb
    let r = o.First()
    c.Cancel()
    r
Example usage:
[20..30] |> List.map (fun i -> fun() -> Thread.Sleep(i*100); printfn "%d" i; i)
|> joinFirst |> printfn "Done %A"
Console.Read() |> ignore
Update:
Using a MailboxProcessor:
type WorkMessage<'a> =
    | Done of 'a
    | GetFirstDone of AsyncReplyChannel<'a>

let joinFirst (f : (unit->'a) list) =
    let c = new CancellationTokenSource()
    let m = MailboxProcessor<WorkMessage<'a>>.Start(
        fun mbox -> async {
            let afterDone a m =
                match m with
                | GetFirstDone rc ->
                    rc.Reply(a)
                    Some (async { return () })
                | _ -> None
            let getDone m =
                match m with
                | Done a ->
                    c.Cancel()
                    Some (async {
                        do! mbox.Scan(afterDone a)
                    })
                | _ -> None
            do! mbox.Scan(getDone)
            return ()
        })
    f
    |> List.iter (fun t ->
        try
            Async.RunSynchronously (async {
                let out = t()
                m.Post(Done out)
                return ()
            }, -1, c.Token)
        with
        | _ -> ())
    m.PostAndReply(fun rc -> GetFirstDone rc)
Unfortunately, there is no built-in operation for this provided by Async, but I'd still use F# asyncs, because they directly support cancellation. When you start a workflow using Async.Start, you can pass it a cancellation token and the workflow will automatically stop if the token is cancelled.
This means that you have to start the workflows explicitly (instead of using Async.Parallel), so the synchronization must be written by hand. Here is a simple version of an Async.Choice method that does that (at the moment, it doesn't handle exceptions):
open System.Threading

type Microsoft.FSharp.Control.Async with
    /// Takes several asynchronous workflows and returns
    /// the result of the first workflow that successfully completes
    static member Choice(workflows) =
        Async.FromContinuations(fun (cont, _, _) ->
            let cts = new CancellationTokenSource()
            let completed = ref false
            let lockObj = new obj()
            let synchronized f = lock lockObj f
            /// Called when a result is available - the function uses locks
            /// to make sure that it calls the continuation only once
            let completeOnce res =
                let run =
                    synchronized (fun () ->
                        if completed.Value then false
                        else completed := true; true)
                if run then cont res
            /// Workflow that will be started for each argument - run the
            /// operation, cancel pending workflows and then return result
            let runWorkflow workflow = async {
                let! res = workflow
                cts.Cancel()
                completeOnce res }
            // Start all workflows using cancellation token
            for work in workflows do
                Async.Start(runWorkflow work, cts.Token))
Once we write this operation (which is a bit complex, but has to be written only once), solving the problem is quite easy. You can write your operations as async workflows and they'll be cancelled automatically when the first one completes:
let delayReturn n s = async {
    do! Async.Sleep(n)
    printfn "returning %s" s
    return s }

Async.Choice [ delayReturn 1000 "First!"; delayReturn 5000 "Second!" ]
|> Async.RunSynchronously
When you run this, it will print only "returning First!" because the second workflow will be cancelled.

Optimization of F# string manipulation

I am just learning F# and have been converting a library of C# extension methods to F#. I am currently working on implementing a function called ConvertFirstLetterToUppercase based on the C# implementation below:
public static string ConvertFirstLetterToUppercase(this string value) {
    if (string.IsNullOrEmpty(value)) return value;
    if (value.Length == 1) return value.ToUpper();
    return value.Substring(0, 1).ToUpper() + value.Substring(1);
}
The F# implementation:
[<System.Runtime.CompilerServices.ExtensionAttribute>]
module public StringHelper

open System
open System.Collections.Generic
open System.Linq

let ConvertHelper (x : char[]) =
    match x with
    | [| |] | null -> ""
    | [| head; |] -> Char.ToUpper(head).ToString()
    | [| head; _ |] -> Char.ToUpper(head).ToString() + string(x.Skip(1).ToArray())

[<System.Runtime.CompilerServices.ExtensionAttribute>]
let ConvertFirstLetterToUppercase (_this : string) =
    match _this with
    | "" | null -> _this
    | _ -> ConvertHelper (_this.ToCharArray())
Can someone show me a more concise implementation utilizing more natural F# syntax?
open System

type System.String with
    member this.ConvertFirstLetterToUpperCase() =
        match this with
        | null -> null
        | "" -> ""
        | s -> s.[0..0].ToUpper() + s.[1..]
Usage:
> "juliet".ConvertFirstLetterToUpperCase();;
val it : string = "Juliet"
Something like this?
[<System.Runtime.CompilerServices.ExtensionAttribute>]
module public StringHelper =
    [<System.Runtime.CompilerServices.ExtensionAttribute>]
    let ConvertFirstLetterToUppercase (t : string) =
        match t.ToCharArray() with
        | null -> t
        | [||] -> t
        | x -> x.[0] <- Char.ToUpper(x.[0]); System.String(x)
Try the following:
[<System.Runtime.CompilerServices.ExtensionAttribute>]
module StringExtensions =
    let ConvertFirstLetterToUpperCase (data:string) =
        match Seq.tryFind (fun _ -> true) data with
        | None -> data
        | Some(c) -> System.Char.ToUpper(c).ToString() + data.Substring(1)
The tryFind function returns the first element for which the lambda returns true. Since the lambda always returns true, it simply returns the first element, or None if the string is empty. Once you've established that there is at least one element, you know data is not empty, and hence you can call Substring.
There's nothing wrong with using .NET library functions from a .NET language. Maybe a direct translation of your C# extension method is most appropriate, particularly for such a simple function. Although I'd be tempted to use the slicing syntax like Juliet does, just because it's cool.
open System
open System.Runtime.CompilerServices

[<Extension>]
module public StringHelper =
    [<Extension>]
    let ConvertFirstLetterToUpperCase(this:string) =
        if String.IsNullOrEmpty this then this
        elif this.Length = 1 then this.ToUpper()
        else this.[0..0].ToUpper() + this.[1..]
