Basically, right now I run fbId <- runDB $ insert myNewFooBar and I get back a Key FooBar. Is there any way to return the whole FooBar value directly from an insert, without running a separate runDB $ get404 fbId query afterwards?
I just build the Entity Haskell-side: Entity fbId myNewFooBar.
Another, shorter option is to use insertEntity, which returns the full Entity instead of just the key. Under the hood this function calls insert and builds an Entity from the supplied record and the returned key (no additional DB queries).
insertedFooBar <- runDB $ insertEntity myNewFooBar
I am using the Python Bolt driver to create nodes in a Neo4j database. These nodes get altered by apoc.trigger functions, and I want the returned BoltStatementResult to contain the altered version of these nodes.
This is what I have tested so far:
My triggers are working as expected; the stored nodes are altered correctly.
I tried the 'before' and 'after' phases.
I set the trigger functions to return the altered version.
I also wrote a second query to fetch the new, updated node from the database, but this option is quite unsafe, as the node has no unique identifier.
My trigger function:
CALL apoc.trigger.add(
'onCreateNodeAddMetadata',
'UNWIND {createdNodes} AS n
SET n.uid = apoc.create.uuid(), n.timestamp = timestamp() RETURN n',
{phase: 'before'}
)
I expect the return value of my session.write_transaction to contain the added properties.
As a safe workaround (but see caveat below), the Cypher query in your write_transaction can return the native ID of the created node (e.g., RETURN ID(n)).
Then, as long as you know the node was not deleted, you can perform a query for it with that ID (in this example, myID contains the ID value and is passed as a parameter):
MATCH (n) WHERE ID(n) = $myId
...
If the node could be deleted before you search for it by native ID, then this technique is not safe, since Neo4j can re-assign the native ID of a deleted node to another newly-created node.
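The two-step pattern above can be sketched in Python. FakeTx, the Item label, and the returned values are all illustrative stand-ins so the flow runs standalone, not the real driver API; in real code you would pass the transaction object that session.write_transaction hands to your function.

```python
# Sketch of the create-then-refetch workaround. FakeTx mimics the shape of a
# neo4j transaction: run() records each query and returns canned rows.
class FakeTx:
    def __init__(self):
        self.queries = []

    def run(self, query, **params):
        self.queries.append((query, params))
        if "CREATE" in query:
            # pretend the DB assigned native id 42 to the created node
            return [{"id": 42}]
        # pretend the trigger added a uid property before we re-read the node
        return [{"n": {"uid": "fake-uuid"}}]


def create_and_fetch(tx, name):
    # Step 1: create the node and return only its native ID.
    record = tx.run("CREATE (n:Item {name: $name}) RETURN ID(n) AS id", name=name)
    node_id = record[0]["id"]
    # Step 2: fetch the node again by ID; by now the trigger has added
    # uid/timestamp, so the returned node carries the extra properties.
    rows = tx.run("MATCH (n) WHERE ID(n) = $myId RETURN n", myId=node_id)
    return rows[0]["n"]
```

The same caveat applies: this is only safe as long as the node cannot be deleted (and its native ID reused) between the two queries.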
I'm building a web application with Snap that needs to authenticate both staff and customers. So far I'm using the auth snaplet provided by snaplet-postgresql-simple to authenticate the two types of users from the same table in the database.
The initialization code therefore looks something like this:
s <- nestSnaplet "sess" sess $ initCookieSessionManager sessionKeyfile "sess" Nothing (Just sessionTimeout)
db <- nestSnaplet "pg" pg Pg.pgsInit
a <- nestSnaplet "auth" auth $ initPostgresAuth sess db
I'm considering separating the two types of users into two tables for these reasons:
the information associated with each type of user (i.e. the columns) is actually different (e.g. I don't need to know first and last names of staff)
I want to allow staff to authenticate to the backend without being logged into the frontend (I'd need separate cookies then, I guess)
I think security could benefit if the two types of users are in separate tables
I'm considering using two instances each of the session and auth snaplets.
The initialization code would then look something like this:
s1 <- nestSnaplet "sess1" sess1 $ initCookieSessionManager sessionKeyfile "sess1" Nothing (Just sessionTimeout)
s2 <- nestSnaplet "sess2" sess2 $ initCookieSessionManager sessionKeyfile "sess2" Nothing (Just sessionTimeout)
db <- nestSnaplet "pg" pg Pg.pgsInit
a1 <- nestSnaplet "auth1" auth1 $ initPostgresAuth sess1 db
a2 <- nestSnaplet "auth2" auth2 $ initPostgresAuth sess2 db
Is that possible to use several instances of a snaplet like that?
Or does my problem have a better solution?
I wouldn't use two instances. I'd use a single instance where a user represents whatever is common to both, then add a user-type column and put the extra information in other tables linked with a foreign key.
I'm making a little app for work to handle shared to-do lists. I'm almost done, but I'd like to add some very simple authentication. I followed the doc to add hashdb to the scaffolded site (https://github.com/yesodweb/yesod-cookbook/blob/master/cookbook/Using-HashDB-In-A-Scaffolded-Site.md), and it compiles fine, but when I log in with a correct username/password (added by hand in the database) I get this:
10/Dec/2016:13:36:02 +0100 [Debug#SQL] SELECT `name`,`password` FROM `user` WHERE `name`=?; [PersistText "Ulrar"]
10/Dec/2016:13:36:08 +0100 [Error#yesod] Exception from Warp: stack overflow #(cstod_GjWCdZJB9K0EGPCbjz5gnP:Application Application.hs:133:15)
Line 133 of Application.hs is this one: $(qLocation >>= liftLoc)
That's from the default code.
As you can see my "User" table is pretty simple, I have a primary key on the name and a password, and that's it. The name must be unique, of course.
I'll be adding the few users by hand in the database; that'll be more than enough for us.
I tried the query by hand and it returns what you'd expect. Trying to log in with a wrong username/password does "work": it redirects to the login form with an error. Only the correct username/password pair gives that exception, and the page seems to load for a while before throwing it after I click the button.
I end up on http://localhost:3000/auth/page/hashdb/login with just "Something went wrong" written.
I assume I must have missed something. I'm using the yesod-mysql scaffolded site, and I have this in the YesodAuth instance:
authPlugins app = [authHashDB (Just . UniqueUser)]
I removed the getAuthId definition since I had no idea what to put there; the definition from the doc doesn't compile because getAuthIdHashDB apparently isn't exported anymore. Is this my problem?
Thanks !
Yep, my problem was indeed removing getAuthId !
Solved it by adding this instead :
authenticate creds = runDB $ do
    x <- getBy $ UniqueUser $ credsIdent creds
    case x of
        Just (Entity uid _) -> return $ Authenticated uid
        Nothing -> return $ UserError InvalidUsernamePass
I'm facing some issues when trying to bind a list variable in an ArangoDB query. More concretely, the list might look like as follows and comes from a URL parameter in a certain Foxx controller endpoint:
.../someAPIEndpoint?list=A,B,C,D
I would like to be able to do something like this:
stmt = db._createStatement({query: "for i in [@list] return i"});
stmt.bind('list', req.params('list').split(','));
Since I do not know how many values I will receive from the API call, I can't create n bindings for each possible one. Is what I want to achieve even possible?
Thanks in advance.
You were almost there: you can bind an array directly to the parameter (I just removed the "[" and "]" from your query):
stmt = db._createStatement({query: "for i in @list return i"});
stmt.bind('list', req.params('list').split(','));
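To make the shape of the bound value concrete, here is the equivalent request body for ArangoDB's HTTP cursor API (POST /_api/cursor), built in Python; the literal "A,B,C,D" stands in for req.params('list'):

```python
# The comma-separated URL parameter, as in ...?list=A,B,C,D
raw = "A,B,C,D"  # stand-in for req.params('list')

# Splitting yields a plain array, which is bound directly to @list;
# this dict has the shape POST /_api/cursor expects.
body = {
    "query": "for i in @list return i",
    "bindVars": {"list": raw.split(",")},
}
```

The key point is that a bind parameter can hold a whole array, so the number of values never needs to be known in advance.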
I want to be able to provide an UPDATE method to my users that will update the record they specify based on RowKey but it will NOT ADD it if the RowKey they pass in does not exist.
My reasoning is that if they mistakenly send in an invalid RowKey, I do not want them unknowingly ending up with a new entity vs. having updated the one they intended to update in the first place.
Here is the gist of the code I have (but it adds/inserts the entity if it does not exist):
' p below is the entity obj (Inherits from TableServiceEntity)
' PartitionKey and RowKey are set to values of entity to update
MyBase.AttachTo(_tableName, p)
MyBase.UpdateObject(p)
MyBase.SaveChangesWithRetries(Services.Client.SaveChangesWithOptions.Batch)
My issue is that I expected an exception to be thrown when SaveChanges executed and no entity with a matching PK and RK was found. Instead, an entity with the new PK/RK combination is added.
How can I structure my code so that only an UPDATE is done and no ADD if PK, RK is not existent?
I believe if you call the three argument version of AttachTo(), you will get the desired behavior.
MyBase.AttachTo(_tableName, p, "*");
But I haven't actually tried it.
Rather than catching an error when you try to update an item that doesn't exist (I believe breischl is right about how to avoid that), the easiest thing to do would be to run a query first to check whether it actually exists.
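The two behaviours can be contrasted with a toy in-memory table in Python. This is a conceptual sketch only, not the Azure storage client; it just illustrates why an update sent with ETag "*" (If-Match: *) refuses to create missing rows, while a plain attach-and-save behaves like insert-or-replace.

```python
# Toy in-memory table illustrating update-only vs insert-or-replace semantics.
class MiniTable:
    def __init__(self):
        self.rows = {}  # (partition_key, row_key) -> entity dict

    def insert_or_replace(self, pk, rk, entity):
        # What the question observes: a missing row is silently created.
        self.rows[(pk, rk)] = entity

    def update_if_match_any(self, pk, rk, entity):
        # "If-Match: *" semantics: fail when the row does not exist,
        # otherwise replace it.
        if (pk, rk) not in self.rows:
            raise KeyError(f"entity ({pk}, {rk}) not found")
        self.rows[(pk, rk)] = entity
```

With the second method, a mistyped RowKey surfaces as an error instead of quietly creating a new entity, which is exactly the behaviour the question asks for.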