How can I check the client certificate using Snap (Haskell)?

I know it's rarely used, but is it possible to access the client certificate in Snap?
If not, is it possible using a different web stack?

This is not available to you in Snap's snap-server package, which I'm assuming is how you're running your server.
But it's not difficult to build, either by forking or as a separate module (you'll have to copy some code over, though, since some internal values you'll need aren't exported). bindHttps, located in Snap.Internal.Http.Server.TLS, is what you want to target. This function is largely a wrapper around calls to OpenSSL.Session from the HsOpenSSL library, which itself is a loose wrapper around the OpenSSL library.
Lucky for us OpenSSL has full support for client certificates. You simply have to set the verification mode to SSL_VERIFY_PEER. There are other knobs you can fiddle with too. You also have to make sure you install a certificate chain to actually verify the client certificate against. Chain of trust and all that jazz. For reference, see how nginx does it.
Even better, this function is exposed in HsOpenSSL as the function contextSetVerificationMode :: SSLContext -> VerificationMode -> IO (). You'll notice that ctx :: SSLContext exists in the definition of Snap's bindHttps. All you'll have to do is either copy or fork that module and introduce your calls.
It would look something like this (unverified code alert):
± % diff -u /tmp/{old,new}
--- /tmp/old 2016-04-11 11:02:42.000000000 -0400
+++ /tmp/new 2016-04-11 11:02:56.000000000 -0400
@@ -19,6 +19,7 @@
ctx <- SSL.context
SSL.contextSetPrivateKeyFile ctx key
+ SSL.contextSetVerificationMode ctx (SSL.VerifyPeer True True (Just (\_ _ -> return True)))
if chainCert
then SSL.contextSetCertificateChainFile ctx cert
else SSL.contextSetCertificateFile ctx cert
The first boolean tells OpenSSL to fail if no client certificate is present. The second boolean tells OpenSSL that the client certificate is only needed on the first request and no longer needed on renegotiations. The third value is a callback. I think the right thing to do is to just return True in the callback. It's what nginx does, anyway.
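To see the shape of the whole change in one place, here is an unverified sketch of the same context setup as ordinary code, together with how the peer certificate can be read off a connection afterwards. The helper names makeClientAuthContext and logClientCert are invented for illustration; only the OpenSSL.Session and OpenSSL.X509 calls come from HsOpenSSL, and in the real fork this setup would live inside the copied bindHttps.
import qualified OpenSSL.Session as SSL
import qualified OpenSSL.X509 as X509

-- Build an SSLContext that demands a client certificate (unverified sketch).
makeClientAuthContext :: FilePath -> FilePath -> FilePath -> IO SSL.SSLContext
makeClientAuthContext cert key caFile = do
  ctx <- SSL.context
  SSL.contextSetPrivateKeyFile ctx key
  SSL.contextSetCertificateFile ctx cert
  -- The CA bundle is the chain of trust the client certificate is checked against.
  SSL.contextSetCAFile ctx caFile
  -- Fail the handshake if no client certificate is presented; only ask on the
  -- first handshake; accept whatever OpenSSL verified (as nginx does).
  SSL.contextSetVerificationMode ctx
    (SSL.VerifyPeer True True (Just (\_ _ -> return True)))
  return ctx

-- After the handshake, the certificate is available on the connection itself.
logClientCert :: SSL.SSL -> IO ()
logClientCert ssl = do
  mcert <- SSL.getPeerCertificate ssl
  case mcert of
    Nothing   -> putStrLn "no client certificate presented"
    Just cert -> X509.getSubjectName cert False >>= print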

Related

Reading file contents once at haskell application startup

I have an API written using the Servant library which connects to a postgres db. The connection string for my db is stored in a configuration file. Every time I make a request to any endpoint that interacts with the db I have to read the file to get the connection string; this is what I'm trying to avoid.
Step-by-step example of what I'm trying to achieve:
Application starts up.
Contents of the config file are read and bound to some type/object.
I make a request to my endpoint to create an entry in the db.
I read the connection string from the type/object that I bound it to and NOT the config file.
Every subsequent request for the lifetime of the application does not have to read the config file every time it wants to interact with the database.
Doing this in something like Java/C# you would just bind the contents of a file to some POCO, which would be added to your DI container as a singleton so it can be referenced anywhere in your application and persists between API requests. If I have 100 requests that interact with the db, none of those 100 requests would need to read the config file to get the connection string, as it was already loaded into memory when the app started.
I have thought about using the cache package, but is there an easier way to do something like this without a third party package?
Let's begin with this trivial Servant server:
import Servant
import Servant.Server
type FooAPI = Get '[JSON] Int
fooServer :: Server FooAPI
fooServer = pure 1
Suppose we don't want to hardcode that 1. We could turn fooServer into a function like
makeFooServer :: Int -> Server FooAPI
makeFooServer n = pure n
Then, in main, we could read the n from a file then call makeFooServer to construct the server. Something similar could be done for your database connection.
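As a minimal sketch of that wiring, reusing FooAPI and makeFooServer from above and assuming the value sits in a plain text file served by warp (the file name here is made up):
import Network.Wai.Handler.Warp (run)
import Servant

main :: IO ()
main = do
  -- read the configuration exactly once, at startup
  n <- read <$> readFile "config.txt"  -- hypothetical config file
  run 8080 (serve (Proxy :: Proxy FooAPI) (makeFooServer n))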
There's another approach that might sometimes be preferable. Servant lets you define servers whose handlers live in a monad different from Handler, and then transform them into regular servers (tutorial).
We can write a server in which the handler monad is a ReaderT holding the configuration:
import Control.Monad.Trans.Reader
type RHandler env = ReaderT env Handler
fooServer' :: ServerT FooAPI (RHandler Int)
fooServer' = do
  n <- ask
  pure n
Here, ServerT is a more general form of Server that lets you specify the handler monad in an extra type argument.
Then, we use the hoistServer function to supply the initial environment and go back to a regular server:
-- "Server FooAPI" is the same as "ServerT FooAPI Handler"
-- so the transformation we need is simply to run the `ReaderT`
-- by supplying an environment.
backToNormalServer :: Int -> Server FooAPI
backToNormalServer n = hoistServer (Proxy @FooAPI) (flip runReaderT n) fooServer'
The ServerT FooAPI (RHandler Int) approach has the advantage that you still have a server value that you can directly manipulate and pass around, instead of it being the result of a function.
Also, for some advanced use cases, the environment might reflect information derived from the structure of each endpoint.
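Applied to the original question, the ReaderT approach could be wired up roughly like this. The Config type, its field, and the file name are made up for illustration; the point is that main reads the file once and every handler just asks the environment for the connection string.
import Control.Monad.Trans.Reader (ReaderT, asks, runReaderT)
import Network.Wai.Handler.Warp (run)
import Servant

-- Hypothetical configuration record; in the real application it would hold
-- whatever the config file contains (e.g. the postgres connection string).
newtype Config = Config { connectionString :: String }

type AppHandler = ReaderT Config Handler

appServer :: ServerT FooAPI AppHandler
appServer = do
  conn <- asks connectionString
  -- a real handler would use conn to talk to the database;
  -- here we just return something so the example fits FooAPI
  pure (length conn)

main :: IO ()
main = do
  -- the config file (hypothetical path) is read exactly once, at startup
  conn <- readFile "config.json"
  let cfg = Config conn
  run 8080 (serve (Proxy :: Proxy FooAPI)
                  (hoistServer (Proxy :: Proxy FooAPI) (\h -> runReaderT h cfg) appServer))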

Thorntail MP JWT / Undertow: required authentication

I'm trying to set up a JAX-RS service in Thorntail with JWT authentication. Everything works fine (I can inject Principal and the user is correctly set), except that in case of a failed authentication, the response is still sent without any 401 HTTP status. What I've done is:
Added @LoginConfig(authMethod = "MP-JWT", realmName = "my-domain") to my Application-Class
Configured the security-domain
security:
  security-domains:
    my-domain:
      jaspi-authentication:
        login-module-stacks:
          roles-token-stack:
            login-modules:
              jwt-jaspi-login-module:
                code: org.wildfly.swarm.microprofile.jwtauth.deployment.auth.jaas.JWTLoginModule
                flag: required
        auth-modules:
          http:
            code: org.wildfly.extension.undertow.security.jaspi.modules.HTTPSchemeServerAuthModule
            module: org.wildfly.extension.undertow
            flag: required
            login-module-stack-ref: roles-token-stack
Configured JWT-specific things (seem to work, so I'm skipping this here)
What else do I need to do in order for this to work properly? Do I need to add any annotations to my Endpoint? As I said, I want to return a 401 in case of a failed authentication.
What I've found out so far: JASPICAuthenticationMechanism.isMandatory needs to return true in order for this to work. If this is the case, JWTAuthMechanism.sendChallenge is triggered after a failure of JWTAuthMechanism.authenticate and so a 401 is sent to the client. But I have no idea in which cases isMandatory returns true.
Thanks for any help in this case!
One, the configuration of the security domain isn't 100% correct. Here's a fix for one part of the YAML:
roles-token-stack:
  login-modules:
  - login-module: jwt-jaspi-login-module
    code: org.wildfly.swarm.microprofile.jwtauth.deployment.auth.jaas.JWTLoginModule
    flag: required
Two, indeed you need to use the common Java EE annotations (@RolesAllowed, @DenyAll, @PermitAll) on the JAX-RS resources.
Solution (thanks to Ladicek, see comments below):
If you want to use MP JWT, don't start it with Swarm, and don't forget to set the useUberJar flag if starting it with thorntail:run.

Haskell Servant and streaming

I am trying to add a functionality to my servant server that would get a file from Amazon S3 and stream it back to the user.
Because files can be big I don't want to download them locally and then serve them to clients; I'd prefer to stream them directly from S3 to the clients.
I use Amazonka for what I do with S3 and I can get a stream for an S3 file as a Conduit sink.
But now I don't know how to get from Sink to EitherT ServantErr IO a.
Can anyone explain how to do it, or show me an example of how it can be done?
There is nothing in Servant to do this out of the box, but all the needed parts are available.
Before we begin, I assume that if you can stream to a Sink, that means you have a source (the gorsBody of GetObjectResponse is an RsBody, which is a Source).
First of all, Servant provides us with the possibility to add support for new return types by creating a new instance of HasServer, so we could serve an EitherT ServantErr IO (Source ...) and have it stream.
To create that instance, we must implement route :: Proxy layout -> Server layout -> RoutingApplication. Server layout, in this case, just means EitherT ServantErr IO layout, layout being the source we want to serve, so it's the function that returns a source (and may fail with an HTTP error).
We have to return a RoutingApplication, which is (in continuation style) a function that takes a Request and returns a RouteResult Response, which means either an unmatched route error, or a response. Both Request and Response are standard wai, not Servant, so we can now look at the rest of the ecosystem to find how to implement it.
Fortunately, we don't have to go far: Network.Wai.Conduit contains just what we need to implement the route function: responseSource takes a status value, some response headers and your source and gives you a Response.
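To make that piece concrete, here is a rough sketch of just the wai side: wrapping a source of ByteString chunks (which is what you would pull out of amazonka's response body) into a streaming Response. Depending on your wai version the Builder may need to come from blaze-builder instead of bytestring, so treat the imports as indicative.
{-# LANGUAGE OverloadedStrings #-}
import Data.ByteString (ByteString)
import Data.ByteString.Builder (byteString)
import Data.Conduit (Flush (..), Source, mapOutput)
import Network.HTTP.Types (status200)
import Network.Wai (Response)
import Network.Wai.Conduit (responseSource)

-- Turn a conduit source of chunks into a response that streams as it is produced.
streamingResponse :: Source IO ByteString -> Response
streamingResponse body =
  responseSource status200
                 [("Content-Type", "application/octet-stream")]
                 (mapOutput (Chunk . byteString) body)
The remaining work is plugging a value like this into the route implementation of the new HasServer instance.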
So, that is quite a lot of work to do, but everything we need is there. Looking at the source of the instance HasServer * (Get ...) might help.

hunchentoot-based app in a lisp image (from buildapp) immediately returns

So I have an application using restas, based on hunchentoot.
At some point, I have the following function:
(defun main (args)
  (declare (ignore args))
  (set-config)
  (restas:start '#:spa :port 8080))
(set-config) sets a few values related to the database.
Anyway, I then use buildapp in the following way:
buildapp --output dist/spa --load-system spa --asdf-tree ~/quicklisp/ --entry spa::main --compress-core
Which works perfectly. The (set-config) function requires a config.json file to be present, and it indeed doesn't work when the file doesn't exist, so I know for sure that the application is correctly compiled.
When I run the generated binary however, the application immediately returns. Which means the HTTP server doesn't stay up.
I guess it's related to the fact that hunchentoot spawns a new thread, but it shouldn't stop the process, should it?
Also, I don't want to avoid threads, i.e. I still want each request to be handled in a separate thread.
So... I'm not sure exactly why it immediately returns. Why? And how to fix it?
I guess that you have to enter a main loop to keep the program running. The example at http://www.xach.com/lisp/buildapp/ uses the SBCL-specific (sb-impl::toplevel-repl nil).

is it possible to call lua functions defined in other lua scripts in redis?

I have tried to declare a function without the local keyword and then call that function from another script, but it gives me an error when I run the command.
test = function ()
  return 'test'
end

-- from some other script
test()
Edit:
I can't believe I still have no answer to this. I'll include more details of my setup.
I am using node with the redis-scripto package to load the scripts into redis. Here is an example.
var Scripto = require('redis-scripto');
var scriptManager = new Scripto(redis);
scriptManager.loadFromDir('./lua_scripts');
var keys = [key1, key2];
var values = [val];
scriptManager.run('run_function', keys, values, function(err, result) {
console.log(err, result)
})
And the lua scripts.
-- ./lua_scripts/dict_2_bulk.lua
-- turns a dictionary table into a bulk reply table
dict2bulk = function (dict)
  local result = {}
  for k, v in pairs(dict) do
    table.insert(result, k)
    table.insert(result, v)
  end
  return result
end
-- run_function.lua
return dict2bulk({ test=1 })
Throws the following error.
[Error: ERR Error running script (call to f_d06f7fd783cc537d535ec59228a18f70fccde663): #enable_strict_lua:14: user_script:1: Script attempted to access unexisting global variable 'dict2bulk' ] undefined
I'm going to be contrary to the accepted answer, because the accepted answer is wrong.
While you can't explicitly define named functions, you can call any script that you can call with EVALSHA. More specifically, all of the Lua scripts that you have explicitly defined via SCRIPT LOAD or implicitly via EVAL are available in the global Lua namespace at f_<sha1 hash> (until/unless you call SCRIPT FLUSH), which you can call any time.
The problem that you run into is that the functions are defined as taking no arguments, and the KEYS and ARGV tables are actually globals. So if you want to be able to communicate between Lua scripts, you either need to mangle your KEYS and ARGV tables, or you need to use the standard Redis keyspace for communication between your functions.
127.0.0.1:6379> script load "return {KEYS[1], ARGV[1]}"
"d006f1a90249474274c76f5be725b8f5804a346b"
127.0.0.1:6379> eval "return f_d006f1a90249474274c76f5be725b8f5804a346b()" 1 "hello" "world"
1) "hello"
2) "world"
127.0.0.1:6379> eval "KEYS[1] = 'blah!'; return f_d006f1a90249474274c76f5be725b8f5804a346b()" 1 "hello" "world"
1) "blah!"
2) "world"
127.0.0.1:6379>
All of this said, this is in complete violation of spec, and is entirely possible to stop working in strange ways if you attempt to run this in a Redis cluster scenario.
Important Notice: See Josiah's answer above. My answer turns out to be wrong, or at the least incomplete. Which makes me very happy of course; it makes Redis all the more flexible.
My incorrect/incomplete answer:
I'm quite sure this is not possible. You are not allowed to use global variables (read the docs), and the script itself gets a local and temporary scope from the Redis Lua engine.
Lua functions automatically set a 'writing' flag behind the scenes if they do any write action. This starts a transaction. If you cascade Lua calls, the bookkeeping in Redis would become very cumbersome, especially when the cascade is executed on a Redis slave. That's why EVAL and EVALSHA are intentionally not made available as valid Redis calls inside a Lua script. Same goes for calling an already 'loaded' Lua function which you are trying to do. What would happen if the slave is rebooted between the load of the first script and the exec of the second script?
What we do to overcome this limitation:
Don't use EVAL, only use SCRIPT LOAD and EVALSHA.
Store the SHA1 inside a redis hash set.
We automated this in our versioning system, so a committed Lua script automatically gets its SHA1 checksum stored in the Redis master, in a hash set, with a logical name. The clients can't use EVAL (on a slave; we disabled EVAL+LOAD in config). But the client can ask for the SHA1 for the next step. Almost all our Lua functions return a SHA1 for the next call.
Hope this helps, TW
Because I'm not one to leave well enough alone, I built a package that allows for simple internal calling semantics. The package (for Python) is available on GitHub.
Long story short, it uses ARGV as a call stack, translates KEYS/ARGV references to _KEYS and _ARGV, uses Redis as a name -> hash mapping internally, and translates CALL.<name>(<keys>, <argv>) to a table append + Redis lookup + Lua function call.
The METHOD.txt file describes what goes on, and all of the regular expressions I used to translate the Lua scripts are available in lua_call.py. Feel free to re-use my semantics.
The use of the function registry makes this very unlikely to work in Redis cluster or any other multi-shard setup, but for single-master applications, it should work for the foreseeable future.
