I am trying to set up a very simple test F# project on Linux with Mono, using Forge to set up the project and install NuGet packages. Forge creates a build.fsx file which uses FAKE. I have tried to adjust this build file (in order to add tests) with inspiration from this tutorial: http://fsharp.github.io/FAKE/gettingstarted.html. The tutorial, however, uses C# for the tests and assumes Windows with .NET as the environment; I want to use F# for the tests and Linux with Mono as the environment.
I think I have almost got it working, but I am getting some cryptic error messages from NUnit. When running the build.fsx file I get the following errors at the end:
...
Invalid argument: -nologo
The value '/home/michel/Documents/FSHARP/UnitTests/test/NUnit.Test.MyTests.dll' is not valid for option '--labels'.
Invalid argument: -xml:./test/TestResults.xml
Running build failed.
Error:
NUnit test failed (255).
---------------------------------------------------------------------
Build Time Report
---------------------------------------------------------------------
Target Duration
------ --------
Clean 00:00:00.0036366
Build 00:00:00.0402828
BuildTest 00:00:00.4911710
Total: 00:00:00.7494956
Status: Failure
---------------------------------------------------------------------
1) Fake.UnitTestCommon+FailedTestsException: NUnit test failed (255).
at Fake.NUnitSequential.NUnit (Microsoft.FSharp.Core.FSharpFunc`2 setParams, IEnumerable`1 assemblies) <0x41d27e50 + 0x0039f> in <filename unknown>:0
at FSI_0001+clo#32-4.Invoke (Microsoft.FSharp.Core.Unit _arg4) <0x41d27dc0 + 0x0006f> in <filename unknown>:0
at Fake.TargetHelper+targetFromTemplate#195[a].Invoke (Microsoft.FSharp.Core.Unit unitVar0) <0x41cd59b0 + 0x00023> in <filename unknown>:0
at Fake.TargetHelper.runSingleTarget (Fake.TargetTemplate`1 target) <0x41ccb490 + 0x000ca> in <filename unknown>:0
My build.fsx file looks like this:
// include Fake libs
#r "./packages/FAKE/tools/FakeLib.dll"
open Fake
// Directories
let buildDir = "./build/"
let testDir = "./test/"
// version info
let version = "0.1" // or retrieve from CI server
// Targets
Target "Clean" (fun _ ->
CleanDirs [buildDir; testDir]
)
Target "Build" (fun _ ->
//MSBuildDebug buildDir "Build" appReferences
!! "/UnitTesting/*.fsproj"
|> MSBuildRelease buildDir "Build"
|> Log "AppBuild-Output: "
)
Target "BuildTest" (fun _ ->
!! "src/NUnit.Test.MyTests/*.fsproj"
|> MSBuildDebug testDir "Build"
|> Log "TestBuild-Output: "
)
Target "Test" (fun _ ->
!! (testDir + "/NUnit.Test.MyTests.dll")
|> NUnit (fun p ->
{ p with
ToolPath = "packages/NUnit.ConsoleRunner/tools"
//DisableShadowCopy = true;
OutputFile = testDir + "TestResults.xml" })
)
Target "Default" (fun _ -> trace "HEEEELLOOOOOO world from FAKE!!!")
"Clean" ==> "Build" ==> "BuildTest" ==> "Test" ==> "Default"
RunTargetOrDefault "Default"
FAKE seems to be looking for a file nunit-console.exe under the packages/NUnit.ConsoleRunner/tools directory, but there is no such file. However, there is a nunit3-console.exe file, so I just made a copy of this file with the name nunit-console.exe.
My simple test file NUnit.Test.MyTests.fs looks like the following:
namespace NUnit.Test.MyTests

module testmodule =

    open NUnit.Framework

    let SayHello name = "Hello"

    [<TestFixture>]
    type myFixture() =

        [<Test>]
        member self.myTest() =
            Assert.AreEqual("Hello World!", SayHello "World")
and the file test/NUnit.Test.MyTests.dll seems to be generated just fine.
What does the cryptic error message mean, and how can I fix it so I can run my tests?
As mentioned by rmunn in the comment, I need to use the function NUnit3 because I am using NUnit version 3.4.1. The function resides in the Fake.Testing module (http://fsharp.github.io/FAKE/apidocs/fake-testing-nunit3.html). I modified my build.fsx file so it now looks like the following:
// include Fake libs
#r "./packages/FAKE/tools/FakeLib.dll"
open Fake
open Fake.Testing // NUnit3 is in here
// Directories
let buildDir = "./build/"
let testDir = "./test/"
// version info
let version = "0.1" // or retrieve from CI server
// Targets
Target "Clean" (fun _ ->
CleanDirs [buildDir; testDir]
)
Target "Build" (fun _ ->
!! "/UnitTesting/*.fsproj"
|> MSBuildRelease buildDir "Build"
|> Log "AppBuild-Output: "
)
Target "BuildTest" (fun _ ->
!! "src/NUnit.Test.MyTests/*.fsproj"
|> MSBuildDebug testDir "Build"
|> Log "TestBuild-Output: "
)
Target "Test" (fun _ ->
!! (testDir + "/NUnit.Test.*.dll")
|> NUnit3 (fun p ->
{ p with
ToolPath = "packages/NUnit.ConsoleRunner/tools/nunit3-console.exe" })
)
Target "Default" (fun _ -> trace "HEEEELLOOOOOO world from FAKE!!!")
"Clean" ==> "Build" ==> "BuildTest" ==> "Test" ==> "Default"
RunTargetOrDefault "Default"
Note that you must specify ToolPath all the way to the nunit3-console.exe file, and not just the directory where it resides.
Now everything seems to work, and I get a fine and simple 'test-summary' in the console output when I run build.fsx. :)
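For reference, since the whole point was running this on Linux/Mono: with the package layout implied by the #r line above, the script can be run directly with Mono (if Forge generated a build.sh wrapper, that does essentially the same thing):

mono packages/FAKE/tools/FAKE.exe build.fsx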
cabal build --help and others mention components, in sentences like:
Build one or more targets from within the project. The available targets are
the packages in the project as well as individual components within those
packages, including libraries, executables, test-suites or benchmarks. Targets
can be specified by name or location. If no target is specified then the
default is to build the package in the current directory.
The cabal user guide mentions them too, in 9. Setup.hs Commands, and gives two prefixes, exe: and lib:, to select them. Are there more of these prefixes?
A component is anything defined by a stanza with its own set of dependencies, etc. So it can be multiple sublibraries, multiple executables, test-suites, and so on.
You're right that this is underdocumented! In the source code we can see the following (https://github.com/haskell/cabal/blob/00a2351789a460700a2567eb5ecc42cca0af913f/Cabal/src/Distribution/Simple/BuildTarget.hs#L569):
matchComponentKind :: String -> Match ComponentKind
matchComponentKind s
| s `elem` ["lib", "library"] = return' LibKind
| s `elem` ["flib", "foreign-lib", "foreign-library"] = return' FLibKind
| s `elem` ["exe", "executable"] = return' ExeKind
| s `elem` ["tst", "test", "test-suite"] = return' TestKind
| s `elem` ["bench", "benchmark"] = return' BenchKind
| otherwise = matchErrorExpected "component kind" s
where
return' ck = increaseConfidence >> return ck
So that's the full list!
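For example, any of those spellings can be used as a prefix when naming a build target on the command line (the package and component names below are hypothetical):

cabal build lib:mypackage        # the package's library
cabal build exe:my-tool          # an executable
cabal build test:my-tests        # a test-suite
cabal build bench:my-benchmarks  # a benchmark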
Can I actually deploy an F# console app as an Azure WebJob? I can't find a proper option in VS2017 :(
And if it is possible, can you have a look at my code? Would it work as-is if I deploy it as an Azure WebJob? Do I need to change something?
open FSharp.Data;
open System
open System.Net.Mail
let server = "smtp.gmail.com"
let sender = "fsharpie.send#gmail.com"
let password = "password"
let port = 587
let SendTest email topic msg =
    use msg =
        new MailMessage(
            sender, email, topic,
            msg)
    let client = new SmtpClient(server, port)
    client.EnableSsl <- true
    client.Timeout <- 20000
    client.DeliveryMethod <- SmtpDeliveryMethod.Network
    client.UseDefaultCredentials <- false
    client.Credentials <- System.Net.NetworkCredential(sender, password)
    client.Send msg

let metaTitle (doc:HtmlDocument) =
    doc.Descendants "meta"
    |> Seq.choose (fun x ->
        match x.AttributeValue("name"), x.AttributeValue("property") with
        | "title", _
        | "headline", _
        | "twitter:title", _
        | _, "og:title" ->
            Some(x.AttributeValue("content"))
        | _, _ -> None
    )

let titles (doc:HtmlDocument) =
    let tagged (tag:string) =
        doc.Descendants tag |> Seq.map (fun x -> x.InnerText())
    Seq.concat [tagged "title"; metaTitle doc; tagged "h1"]

let title (doc:HtmlDocument) =
    titles doc |> Seq.tryHead

let finalTitle (link:string) =
    try
        link
        |> HtmlDocument.Load
        |> titles
        |> Seq.head
    with
    | :? Exception as ex -> ex.Message

[<EntryPoint>]
let main argv =
    let website = "website.com"
    if (finalTitle website <> "expected title") then
        SendTest "result#gmail.com" "Status: Failed" (website + " is down :( ")
    0 // return an integer exit code
Can I actually deploy F# console app as Azure WebJob?
Yes, we can deploy an F# console app as an Azure WebJob.
Can't find a proper option in VS2017
Publishing an F# project as an Azure WebJob directly from the VS2017 tooling is not currently supported.
But we can publish the F# project from the Azure portal. I did a demo of it; the following are my detailed steps:
1. Create the F# project with VS2017
2. Install the WebJobs SDK in the project (a minimal host sketch follows after these steps)
3. Build the project and zip the release or debug output in the bin folder
4. Upload the zip file from the Azure portal
5. Configure the app settings with the storage connection string
6. Check the WebJob with the Azure Kudu tool (https://yourwebsite.scm.azurewebsites.net)
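For step 2, here is a minimal sketch of what the console app's entry point could look like when using the WebJobs SDK host. This assumes the Microsoft.Azure.WebJobs 2.x package; the connection strings come from the app settings configured in step 5:

open Microsoft.Azure.WebJobs

[<EntryPoint>]
let main argv =
    // The host reads the AzureWebJobsStorage / AzureWebJobsDashboard
    // connection strings from the app settings configured in step 5.
    let config = JobHostConfiguration()
    let host = new JobHost(config)
    host.RunAndBlock()   // keeps the process alive as a continuous WebJob
    0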
You need to create an .exe from your F# code, create a zip from the output folder, and upload it to Azure.
https://blogs.msdn.microsoft.com/dave_crooks_dev_blog/2015/02/18/deploying-f-web-job-to-azure/
Another option is to use Azure Functions, which is the evolution of Azure WebJobs and has support for F#: https://learn.microsoft.com/en-us/azure/azure-functions/functions-reference-fsharp
I'm experimenting with Ivory (http://ivorylang.org, https://github.com/GaloisInc/ivory) and using the ivory-hw module to manipulate some registers in a microcontroller. My program looks like this:
cmain :: Def ('[] :-> ())
cmain = voidProc "main" $ body $ do
  setReg regFoo $ do
    clearBit foo_bitbar
    setBit foo_bitbaz
  forever $ return ()

main_module :: Module
main_module = package "main" $ do
  incl cmain

main :: IO ()
main = runCompiler [ main_module ] [] (initialOpts {constFold = True,
                                                    outDir = Just "out"})
Building and running gives:
$ exe
*** Procedure main
ERROR: [ No location available ]:
Unbound value: 'ivory_hw_io_write_u32'
exe: Sanity-check failed!
Adding the option scErrors = False to runCompiler turns sanity checks off, and the code runs to completion, generating sources.
However, main.c contains a call to ivory_hw_io_write_u32, but this function is not defined anywhere (which perhaps explains the error). Poking about GitHub, I can find examples that have a file ivory_hw_prim.h.
After some experimentation, I can include this by adding a module for the hw stuff and then adding that as a dependency to my main_module:
hw_module :: Module
hw_module = package "ivory_hw_prim" hw_moduledef
main_module :: Module
main_module = package "main" $ do
  depend hw_module
  incl cmain
and calling runCompiler with hw_artifacts added, to generate the header:
main = runCompiler [ main_module ] hw_artifacts (initialOpts {scErrors = False,
                                                              constFold = True,
                                                              outDir = Just "out"})
This adds ivory_hw_prim.h to the collection of generated files and adds the necessary #include to main.h.
However, this only works by retaining the scErrors = False option to runCompiler which suggests that I am still not doing this right.
My question is therefore: What is the correct way to use Ivory's HW package?
The solution is to include hw_moduledef in the package:
main_module :: Module
main_module = package "main" $
  incl cmain >> hw_moduledef
(The depend function just includes the header.) Including hw_moduledef in the package "main" makes its definitions visible to the sanity-checker.
By the way, the Ivory module system may be improved in the future so that Ivory computes the dependencies at compile time, relieving the programmer from having to make the includes explicit.
I am new to Gradle. I am trying to read existing files' content and write it to a new file which does not exist in the file system yet.
File existingFile1 = new File(path1);
File existingFile2 = new File(path2);
File newFile = new File(path3);
newFile.withWriter { w ->
    [existingFile1, existingFile2].each { f ->
        // f is already a File here, so use it directly rather than new File(f)
        f.withReader { r ->
            w << r << '\n'
        }
    }
}
But Gradle complains "No such file or directory" for path3. Why does it complain about path3? Of course the file at path3 does not exist yet; I am writing this code precisely to create it. Can anyone explain why Gradle complains about it?
I am writing my Gradle build script in Android Studio and I am using Gradle v2.2.1.
You will also have to create the directory path where the file will end up. If the parent directory is not there yet, creating the file fails. E.g.:
f = new File('/tmp/it/aint/there')
assert !f.exists()
assert !f.parentFile.exists()
f.parentFile.mkdirs() // XXX create the dirs "to the file"
assert f.parentFile.exists()
f.withWriter{ it << 'x' }
assert f.exists()
assert f.text=='x'
I am using R on Linux.
I have a set of functions that I use often and that I have saved in different .r script files. Those files are in ~/r_lib/.
I would like to include those files without having to use the fully qualified name, just "file.r". Basically, I am looking for the same kind of option as -I in a C++ compiler.
Is there a way to set the include path from R, in the .Rprofile or .Renviron file?
Thanks
You can use the sourceDir function in the Examples section of ?source:
sourceDir <- function(path, trace = TRUE, ...) {
    for (nm in list.files(path, pattern = "\\.[RrSsQq]$")) {
        if (trace) cat(nm, ":")
        source(file.path(path, nm), ...)
        if (trace) cat("\n")
    }
}
And you may want to use sys.source to avoid cluttering your global environment.
If you set the chdir parameter of source to TRUE, then the source calls within the included file will be relative to its path. Hence, you can call:
source("~/r_lib/file.R",chdir=T)
It would probably be better not to have source calls within your "library" and make your code into a package, but sometimes this is convenient.
Get all the files of your directory; in your case:
d <- list.files("~/r_lib/")
Then you can load them with a function from the plyr package:
library(plyr)
l_ply(d, function(x) source(paste("~/r_lib/", x, sep = "")))
If you like, you can do it in a loop as well, or use a different function instead of l_ply. Conventional loop:
for (i in 1:length(d)) source(paste("~/r_lib/", d[[i]], sep = ""))
Write your own source() wrapper?
mySource <- function(script, path = "~/r_lib/", ...) {
    ## paste path + filename
    fname <- paste(path, script, sep = "")
    ## source the file
    source(fname, ...)
}
You could stick that in your .Rprofile so it will be loaded each time you start R.
If you want to load all the R files, you can easily extend the above to source all files at once:
mySource <- function(path = "~/r_lib/", ...) {
    ## list of files
    fnames <- list.files(path, pattern = "\\.[RrSsQq]$")
    ## add path
    fnames <- paste(path, fnames, sep = "")
    ## source the files
    lapply(fnames, source, ...)
    invisible()
}
Actually, though, you'd be better off starting your own private package and loading that.