My Yesod application should get its approot from an environment variable. This seems to describe exactly this situation. But when doing so, only the links generated by Yesod will take the approot into account; the route definitions in the parseRoutes quasiquoter stay absolute:
mkYesod "App" [parseRoutes|
/ HomeR GET
|]
So setting APPROOT to "domain.com/path" will not work, since there's no route for "/path" but only for "/".
Is there a way to make the routes relative to the approot?
Update:
I tried to modify cleanPath accordingly too, but routes to the home route are still wrong…
instance Yesod Browser where
    approot = ApprootMaster myApproot
    cleanPath site s = do
        if corrected == s
            then Right $ dropprefix (Data.List.map dropDash s)
            else Left $ dropprefix corrected
      where
        corrected = Data.List.filter (not . Data.Text.null) s
        dropDash t
            | Data.Text.all (== '-') t = Data.Text.drop 1 t
            | otherwise = t
        r = Data.Text.drop 1 $ myApproot site
        l = Data.Text.length r
        dropprefix l
            | Data.List.take 1 l == [r] = Data.List.drop 1 l
            | otherwise = l
If I set the approot to "/foo" it works, except that "#{HomeR}" leads to "/foo/", which redirects to "/foo/foo"…
It's not completely clear what problem you're running into. I think what you're saying is, if you simply set APPROOT, then:
The URLs generated by Yesod will include the /path prefix
But incoming requests with the /path prefix will not be parsed correctly
The usual use case for this kind of prefix is that you have a reverse proxy in front of your app, such as Apache or Nginx, which delegates only part of the domain to your app. In this case, APPROOT works fine.
I'm not sure what other use case you're trying to address, but in general you can just drop a part of the request path by overriding the cleanPath method. Or you could use a WAI middleware to modify the request itself if you wanted.
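For the middleware approach, here is a minimal sketch; the prefix segment "path" is an assumption to match the APPROOT above, and note that it only rewrites pathInfo, not rawPathInfo:
import Data.List (stripPrefix)
import Data.Text (Text)
import Network.Wai (Middleware, pathInfo)

-- Strip a fixed prefix from the request path before it reaches the Yesod app.
stripPathPrefix :: [Text] -> Middleware
stripPathPrefix prefix app req = app req'
  where
    req' = case stripPrefix prefix (pathInfo req) of
        Just rest -> req { pathInfo = rest } -- e.g. ["path","foo"] becomes ["foo"]
        Nothing   -> req                     -- requests outside the prefix pass through untouched
You would then wrap the Application produced by toWaiApp with stripPathPrefix ["path"] before handing it to Warp.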
I have a simple snaplet which has its own routing and is pretty much independent of the rest of the application. However, as with most of the application, I want to hide this new snaplet behind the restricted area, where only logged-in users can enter.
For the root snaplet, the problem is solved by a simple function restricted, which accepts a handler, checks whether the user is logged in, and either proceeds with the given handler or redirects to the login screen.
Here is the whole configuration:
appInit :: SnapletInit App App
appInit = makeSnaplet "myapp" "My Example Application" Nothing $ do
    fs <- nestSnaplet "foo" foo fooInit
    ss <- nestSnaplet "sess" sess $
        initCookieSessionManager "site_key.txt" "sess" (Just 3600)
    as <- nestSnaplet "auth" auth $
        initJsonFileAuthManager defAuthSettings sess "users.json"
    addRoutes [("content", restricted $ render "content"),
               ("login", login)]
    return $ App ss as fs
restricted :: Handler App App () -> Handler App App ()
restricted = requireUser auth (redirect "/login")
fooInit :: SnapletInit b Foo
fooInit = makeSnaplet "foo" "A nested snaplet" Nothing $ do
    addRoutes [("info", writeText "Only registered users can have access to it")]
    return Foo
If I enter http://mywebsite/foo/info, I will be able to see the content of the sub-snaplet without logging in. It seems to me that I cannot protect the handlers implemented inside my new Foo without changing that snaplet and modifying its routing. Or am I wrong?
P.S.: There is an option to use wrapSite and check the request URL, but since that implies verification based on the URL rather than on the resource (the handler in this case), it doesn't seem right to me.
The answer here is to use the wrapSite function. It takes an argument of type Handler b v () -> Handler b v (), which is exactly the type of your restricted function.
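For example, a minimal sketch reusing the appInit from the question; the rqURI check is my own addition so that the login screen itself stays reachable, and the per-route restricted wrapper on "content" is no longer needed:
import qualified Data.ByteString.Char8 as B
import Snap.Core (getsRequest, rqURI)
import Snap.Snaplet (wrapSite)

appInit :: SnapletInit App App
appInit = makeSnaplet "myapp" "My Example Application" Nothing $ do
    fs <- nestSnaplet "foo" foo fooInit
    ss <- nestSnaplet "sess" sess $
        initCookieSessionManager "site_key.txt" "sess" (Just 3600)
    as <- nestSnaplet "auth" auth $
        initJsonFileAuthManager defAuthSettings sess "users.json"
    addRoutes [("content", render "content"),
               ("login", login)]
    -- Guard every handler in the site, including those added by fooInit.
    wrapSite $ \site -> do
        uri <- getsRequest rqURI
        if B.pack "/login" `B.isPrefixOf` uri
            then site            -- let the login page through
            else restricted site -- everything else requires a logged-in user
    return $ App ss as fs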
Network.HTTP.Proxy has a nice function called fetchProxy:
fetchProxy :: Bool -> IO Proxy
fetchProxy flg gets the local proxy settings and parse the string into
a Proxy value. If you want to be informed of ill-formed proxy
configuration strings, supply True for flg. Proxy settings are sourced
from the HTTP_PROXY environment variable [...]
I want to use the Proxy obtained this way with the Wreq library, which has its own Proxy that I import like this:
import Network.HTTP.Client.Internal (Proxy(..), Response)
There appears to be a type mismatch between Network.HTTP.Proxy.Proxy and Network.Wreq.Proxy, where I presume they must be identical.
I import both like this:
import Network.Wreq
import Network.HTTP.Proxy (fetchProxy)
How can I use HTTP.Proxy.Proxy with Wreq and why does GHC see them as different types?
It's likely that the Wreq authors were just unaware of the other Proxy, since the two seem to store equivalent information. It'll still be tricky to get them to talk to each other, because fetchProxy stores the host:port as a single string while Wreq's Proxy wants the host and port separately. You'll have to do some URI parsing:
import Control.Lens
import Data.Text.Strict.Lens
import Network.HTTP.Proxy
import Network.Wreq
import URI.ByteString
main :: IO ()
main = do
    Network.HTTP.Proxy.Proxy host _ <- fetchProxy True
    case parseURI strictURIParserOptions (host ^. packed . re utf8) of
        Left e -> do
            putStrLn "uh oh"
            print e
        Right uri ->
            case ( uri ^? uriAuthorityL . _Just . authorityHostL . hostBSL
                 , uri ^? uriAuthorityL . _Just . authorityPortL . _Just . portNumberL) of
                (Just host_, Just port_) -> do
                    let opts = defaults & proxy ?~ httpProxy host_ port_
                    response <- getWith opts "http://example.com"
                    print response
                _ ->
                    putStrLn "uh oh"
I'm using lens here to do the boring bits and pieces of packing/unpacking strings, encoding UTF-8, and talking to the uri-bytestring package for the URI parsing. But the general idea is that datatypes in Haskell can be sliced and diced simply by pattern matching on the constructor; once extracted, the host:port string is funneled down into the httpProxy call, which returns Wreq's Proxy type. By qualifying the name of the constructor (Network.HTTP.Proxy.Proxy) I've let the compiler know which module I want that name from.
It would also not be too difficult, and probably less code to boot, to parse the proxy information from the environment variables yourself. You could even have separate environment variables for the host and the port, which would obviate the need for URI parsing. URIs have such massively low entropy that they're an awful format for storing configuration information.
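For instance, a rough sketch of that alternative, reading a pair of hypothetical PROXY_HOST and PROXY_PORT variables and building Wreq's Proxy directly:
import Control.Lens ((&), (?~))
import qualified Data.ByteString.Char8 as B
import Network.Wreq (defaults, getWith, httpProxy, proxy)
import System.Environment (lookupEnv)
import Text.Read (readMaybe)

main :: IO ()
main = do
    mhost <- lookupEnv "PROXY_HOST"
    mport <- lookupEnv "PROXY_PORT"
    case (mhost, readMaybe =<< mport :: Maybe Int) of
        (Just h, Just p) -> do
            -- no URI parsing needed: host and port arrive separately
            let opts = defaults & proxy ?~ httpProxy (B.pack h) p
            response <- getWith opts "http://example.com"
            print response
        _ -> putStrLn "PROXY_HOST/PROXY_PORT not set (or port not a number)"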
I'm trying to get the chat example from the Yesod book working in the scaffolded site.
I think I've corrected almost everything I had to correct, but all of this is completely new to me (it's my first "real" Haskell project) and I'm not very confident in my modifications; moreover, I'm really stuck at point 7. Could you comment on the following points where necessary, and help me with point 7? (Some questions/remarks are included along the way.)
1. Copy/paste Chat.hs and Chat/Data.hs into the root of my site,
2. add import Chat as Import and import Chat.Data as Import to Import.NoFoundation,
3. add imports of IO, Bool, return, Maybe(Nothing), and ($) in Data.hs, since the NoImplicitPrelude extension is on. This seems very clumsy... Do we have to import all the standard operators in each new file?
4. in Foundation.hs, add getChat to the App record (after appHttpManager and appLogger),
5. in Foundation.hs, add a YesodChat instance for App. I had to modify getUserName in the Just uid case (in the original Chat example it was just Just uid -> return uid):
Just uid -> do
    muser <- runDB $ get uid
    case muser of
        Nothing -> error "uid not in the DB"
        Just user -> return $ userIdent user
This seems very long and nested... Can we do better?
6. In Foundation.hs, add chatWidget ChatR after the line pc <- widgetToPageContent $ do in the defaultLayout definition.
7. Now I have the following warning:
Application.hs:60:36: Warning:
Fields of ‘App’ not initialised: getChat
In the expression: App {..}
I think I have to write something like getChat <- newChan >>= Chat after appLogger <- newStdoutLoggerSet defaultBufSize >>= makeYesodLogger and appStatic <- ... in the makeFoundation definition, but the types don't match. I'm totally lost here; I don't really understand how makeFoundation works.
You actually got almost the entire way there. I think you just need to change the line to:
getChat <- fmap Chat newChan
Alternatively, if you're not familiar with the fmap function yet, you can use do notation and write:
chan <- newChan
let getChat = Chat chan
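For what it's worth, here is a small sketch of why fmap (rather than >>=) is what's needed; the Chat definition below is assumed to match the book's Chat/Data.hs:
import Control.Concurrent.Chan (Chan, newChan)
import Network.Wai.EventSource (ServerEvent)

-- Assumed definition, as in the book's chat example:
data Chat = Chat (Chan ServerEvent)

-- newChan :: IO (Chan ServerEvent), and Chat is a pure constructor, so it is
-- mapped over the action; the original attempt newChan >>= Chat doesn't
-- typecheck because Chat chan is a Chat, not an IO Chat.
mkChat :: IO Chat
mkChat = fmap Chat newChan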
I have a situation where I have to construct compiled splices and feed them data that depends on a URL parameter. I'm struggling to solve the problem.
So there is a simple list of file names that needs to be rendered in a table. Simple.
Files belong to a group or category, so you can list all files or only those belonging to a particular category. I pull the data using this function:
getFilesList :: Maybe ByteString -> AppHandler [Document]
getFilesList cat = do
    let selection = maybe [] (\c -> ["category" =: T.decodeUtf8 c]) cat
    r <- eitherWithDB $ rest =<< find (select selection "files") {project = ["blob" =: 0]}
    return $ either (const []) id r
If it gets Nothing, it pulls the whole list; if it gets Just category, it pulls the files that belong to that category. Easy so far.
I call the above function from within a handler so that I can feed an argument into it.
listFiles :: AppHandler [Document]
listFiles = do
    cat <- getParam "cat"
    let r = maybe Nothing (\c -> if c == "all" then Nothing else Just c) cat
    render "files/list-files"
    getFilesList r
If I get "all" or Nothing from the URL, I get the full list; anything other than that gives me a category-filtered list.
The route looks like this:
("/files/:cat", method GET listFiles)
But now I have a problem, because the method function will only accept a Handler App App (). My handler returns data to be fed into the splices.
I construct my splices like so:
listFilesS :: Splices (Splice (Handler App App))
listFilesS = "files" ## files
  where
    files = manyWithSplices runChildren file $ lift listFiles -- Feed data here
    file = do
        "file-name" ## (pureSplice . textSplice $ at "name")
        "file-oid" ## (pureSplice . textSplice $ id)
        "file-date" ## (pureSplice . textSplice $ dateFromDoc)
        "file-size" ## (pureSplice . textSplice $ fsize)
        "file-type" ## (pureSplice . textSplice $ at "type")
        "file-auth" ## (pureSplice . textSplice $ const "admin")
        "file-link" ## (pureSplice . textSplice $ flink)
        "file-category" ## (pureSplice . textSplice $ at "category")
      where
        id = T.pack . show . valueAt "_id"
        fsize = T.pack . show . round . (flip (/) 1024) . (at "size")
        flink = T.append "/files/" . id
I cannot find a way around it. Probably just missing something stupid.
Any ideas what I am doing wrong?
In any case, my handler function looks incorrect, since I render the template first and then pull the data. But if I fix the handler, I can't feed it the data based on the URL parameter.
Confused.
First of all, if listFiles is just returning [Document], then you don't want to call render "files/list-files". So the first order of business is to eliminate that line entirely. You might wonder why. That brings us to the second point. Your route should look like this:
("/files/:cat", method GET $ render "files/list-files")
Your route is the result of rendering a template. That's pretty much always the case with Heist routes. Sometimes you might want to explicitly call render. Other times you might just use the routes automatically given to you by heistServe.
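For completeness, the handler from the question then shrinks to just fetching the data (a sketch with the render call removed):
listFiles :: AppHandler [Document]
listFiles = do
    cat <- getParam "cat"
    let r = maybe Nothing (\c -> if c == "all" then Nothing else Just c) cat
    getFilesList r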
I can't really comment on listFilesS without seeing more of the code for the Document API, but it looks reasonable. Assuming it works properly, you just have to bind that splice for your application with something like this:
addConfig heist $ mempty { hcCompiledSplices = listFilesS }
Then just use the files tag in your "files/list-files" template.
I'm trying to access the Mt Gox REST API using http-conduit. Queries that just have a path (e.g. https://data.mtgox.com/api/2/BTCUSD/money/ticker) work fine, but when I add a queryString to the request it times out.
So this works:
mtGoxRequest :: String -> QueryText -> Request m
mtGoxRequest p qt = def {
    secure = True,
    host = "data.mtgox.com",
    port = 443,
    method = "GET",
    path = fromString $ "api/2/" ++ p,
    queryString = renderQuery False $ queryTextToQuery qt,
    responseTimeout = Just 10000000
    }
currencyTicker :: Request m
currencyTicker = mtGoxRequest "BTCUSD/money/ticker" []
But this times out:
tradeStream :: UTCTime -> Request m
tradeStream t = mtGoxRequest
    "BTCUSD/money/trades/fetch"
    [("since", Just $ T.pack $ utcToGoxTime t)]
The difference seems to be in the use of a queryString: when I added the bogus query "foo=bar" to the currencyTicker that timed out too.
However all this works fine in the web browser: going to https://data.mtgox.com/api/2/BTCUSD/money/ticker?foo=bar instantly returns the correct error message instead of timing out. The trade fetch URL works as well, although I won't include a link because the "since" argument says how far back to go. Conversely, if I remove the queryString from the trades list request it correctly returns the entire available trade history.
So something about the http-conduit query string is obviously different. Anyone know what it might be?
Here is the Haskell Request object being sent (as printed by "Show"):
Request {
host = "data.mtgox.com"
port = 443
secure = True
clientCertificates = []
requestHeaders = []
path = "api/2/BTCUSD/money/trades/fetch"
queryString = "since=1367142721624293"
requestBody = RequestBodyLBS Empty
method = "GET"
proxy = Nothing
rawBody = False
redirectCount = 10
responseTimeout = Just 10000000
}
According to its returned headers Mt Gox is using cloudflare-nginx and PHP 5.
Edit: Forgot to mention that when I use http-conduit to send a request with a queryString to http://scooterlabs.com/echo I get the correct response as well, so it seems to be some interaction between the Mt Gox webserver and http-conduit.
Got it figured out. You need to add a User-Agent string. So
requestHeaders = [(CI.mk "User-Agent", "Test/0.0.1")],
in the middle of the request function makes it work.
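For completeness, here is the request builder from the question with that header added (the User-Agent value itself is arbitrary):
import qualified Data.CaseInsensitive as CI

mtGoxRequest :: String -> QueryText -> Request m
mtGoxRequest p qt = def {
    secure = True,
    host = "data.mtgox.com",
    port = 443,
    method = "GET",
    requestHeaders = [(CI.mk "User-Agent", "Test/0.0.1")],  -- the missing piece
    path = fromString $ "api/2/" ++ p,
    queryString = renderQuery False $ queryTextToQuery qt,
    responseTimeout = Just 10000000
    }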
$ time curl https://data.mtgox.com/api/2/BTCUSD/money/trades/fetch?since=1367142721624293
...
real 0m20.993s
Looks to me like everything is working correctly: the API call takes a while to return, so http-conduit throws a timeout exception since 20s is longer than 10s.
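If the slow response is expected, another option is simply to allow the request more time; a sketch reusing the builders above (the name tradeStreamPatient is just for illustration):
-- Allow 30 s instead of 10 s for the slow trades endpoint.
tradeStreamPatient :: UTCTime -> Request m
tradeStreamPatient t = (tradeStream t) { responseTimeout = Just 30000000 }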