I have existing code that uses ADO.NET and am looking at integrating some new code that uses Entity Framework 5. As a proof of concept I wanted to try a transacted operation with BOTH ADO.NET and EF5.
I tried using TransactionScope, but because I was using two connections, the transaction was promoted to a distributed transaction and I got an error that DTC was not available.
Can I have ADO.NET and EF5 share the same connection to avoid the issue above? If so, code examples would be appreciated, as would any other suggestions.
BTW, I am using EF5 Database First.
Thanks for help in advance.
Regards,
Travis
Yes, but only if you have a Distributed Transaction Coordinator (DTC) configured. DTC transactions are much slower than local transactions.
This may help:
How do I enable MSDTC on SQL Server?
Another option is to get the underlying SqlConnection from the EntityConnection and use that in your ADO.NET commands, but you have to be careful: the connection may be closed when the containing context is disposed.
Or you can pass an EntityConnection to the context when you create it, thus allowing the same connection to be used for both.
I am using Hazelcast to cache data from a database in a proof of concept for a prospective customer.
The client layer is in C#. I am using the .NET DLL to retrieve data from the Hazelcast layer.
My requirement is to execute some business logic steps followed by a transaction. This transaction will insert/update a few records in the database.
So, I want to execute a service method which will take an object as input and return another object as output. The method implementation will have the business logic followed by the transaction. The method should return the result of the execution.
I see that I cannot invoke a generic service through the Hazelcast client; the client only provides methods to get data through Hazelcast data structures.
Is there a solution for my requirement?
Thanks for your answers.
s.r.guruprasad
The Distributed Executor Service or an Entry Processor is what you are looking for, but apparently neither is available for the .NET client.
A solution would be to add another web services layer that uses Hazelcast's Java client, which does support them.
http://docs.hazelcast.org/docs/3.5/manual/html/distributedcomputing.html
I need to write into two different MongoDB collections using an 'all or nothing' process. FYI, I use Node.js on the backend side.
As far as I know, MongoDB provides atomicity for writes to a single collection, but not across multiple collections.
So I'd like to know a way of emulating a transaction in Node.js/MongoDB, so that I avoid writing into one collection if the other write failed, and can roll back if the second process fails.
Thank you guys!
Starting from version 4.0, MongoDB will add support for multi-document transactions. Transactions in MongoDB will work like transactions in relational databases.
For details visit this link:
https://www.mongodb.com/blog/post/multi-document-transactions-in-mongodb?jmp=community
I wrote a library that implements the two phase commit system mentioned above. It might help in this scenario. Fawn - Transactions for MongoDB
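To make the two-phase-commit idea concrete, here is a minimal, hedged sketch in plain Node.js. It uses in-memory arrays as stand-ins for collections, and the function names (`applyOp`, `rollbackOp`, `twoPhaseCommit`) are illustrative, not part of Fawn or the MongoDB driver:

```javascript
// Each "collection" here is just an array of documents.
function applyOp(collection, doc) {
  collection.push(doc);
}

function rollbackOp(collection, doc) {
  const i = collection.indexOf(doc);
  if (i !== -1) collection.splice(i, 1);
}

// Try to apply one write per collection; if any step throws,
// undo the steps that already succeeded, in reverse order.
function twoPhaseCommit(steps) {
  const done = [];
  try {
    for (const step of steps) {
      applyOp(step.collection, step.doc);
      done.push(step);
    }
    return { ok: true };
  } catch (err) {
    // "Rollback" phase: compensate for completed steps.
    for (const step of done.reverse()) {
      rollbackOp(step.collection, step.doc);
    }
    return { ok: false, error: err };
  }
}
```

A real implementation against MongoDB additionally records the transaction's state ("initial", "pending", "applied", "done") in a dedicated collection so that a crashed transaction can be detected and recovered; that bookkeeping is what a library like Fawn automates for you.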
Multi-document transactions have been introduced in MongoDB 4.0!
https://docs.mongodb.com/manual/core/transactions
In MongoDB (prior to 4.0) there is no way to fully implement transactions at the database level. However, there are some mechanisms that provide transaction-like functionality. You can read about them in the documentation.
Since MongoDB 4.0, transactions are supported. Very little change is needed in your current code to use them. There's a new section in the documentation fully dedicated to the subject.
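As a hedged sketch of what the 4.0 approach looks like with the official Node.js driver (v3.1+ against a replica set), writes to two collections can share one session; the database and collection names below are examples only:

```javascript
// Insert into two collections atomically using a driver session.
// withTransaction commits on success, aborts if the callback throws,
// and retries on transient transaction errors.
async function writeBothOrNothing(client, orderDoc, auditDoc) {
  const session = client.startSession();
  try {
    await session.withTransaction(async () => {
      const db = client.db('shop');
      await db.collection('orders').insertOne(orderDoc, { session });
      await db.collection('audit').insertOne(auditDoc, { session });
    });
  } finally {
    await session.endSession();
  }
}
```

Note that passing `{ session }` to each write is what ties the operations into the transaction; a write issued without it runs outside the transaction even if a session is open.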
As we all know, the mongooplog tool is going to be removed in upcoming releases. I need help with the following issue:
I was planning to create a listener using mongooplog that would read any kind of activity on MongoDB and generate a trigger for each activity, which would hit another server. Now, since mongooplog is going away, can anyone suggest an alternative I can use in this case, and how to use it?
I got this warning when trying to use mongooplog. Please let me know if you have any further questions.
warning: mongooplog is deprecated, and will be removed completely in a future release
PS: I am using node.js framework to implement the listener. I have not written any code yet so have no code to share.
The deprecation message you are quoting only refers to the mongooplog command-line tool, not the general approach of tailing the oplog. The mongooplog tool can be used for some types of data migrations, but isn't the right approach for a general purpose listener or to wrap in your Node.js application.
You should continue to create a tailable cursor to follow oplog activity. Tailable cursors are supported directly by the MongoDB drivers. For an example using Node.js see: The MongoDB Oplog & Node.js.
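As a rough sketch of what that looks like with the Node.js driver, assuming a replica set (the oplog only exists on replica set members) and an illustrative callback name `onOp`:

```javascript
// Tail the replica set oplog with a tailable, awaitData cursor.
// Such a cursor blocks waiting for new entries instead of closing
// when it reaches the end of the capped collection.
async function tailOplog(client, onOp) {
  const oplog = client.db('local').collection('oplog.rs');
  const cursor = oplog.find(
    { op: { $in: ['i', 'u', 'd'] } },                  // inserts, updates, deletes
    { tailable: true, awaitData: true, noCursorTimeout: true }
  );
  while (await cursor.hasNext()) {
    onOp(await cursor.next());
  }
}
```

In a real listener you would also record the `ts` field of the last entry you processed and resume the query from there after a restart, so no operations are missed.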
You may also want to watch/upvote SERVER-13932: Change Notification Stream API in the MongoDB issue tracker, which is a feature suggestion for a formal API (rather than relying on the internal oplog format used by replication).
I am new to subsonic and I'd like to know about the best practices regarding the following scenario:
Subsonic supports multiple database systems, e.g. SQLServer and MySQL. Our customers need to decide while deploying our application to their servers, which database system should be used. Long story short: the providerName, normally specified within the application configuration, should be configurable after the application is finished.
How can this be done? Do I have to generate separate data libraries for each database system I want to support?
Thank you in advance
Marco
No, you do not need to generate separate libraries.
However, as you noted, you cannot use raw SQL strings; you always need to build queries through SubSonic's query API.
It is also a good idea to run some tests against the different databases, because not all code has been 100% tested in every case.
Traditionalists argue that stored procedures provide better security than an Object-Relational Mapping (ORM) framework such as NHibernate.
To counter that argument what are some approaches that can be used with NHibernate to ensure that proper security is in place (for example, preventing sql injection, etc.)?
(Please provide only one approach per answer)
Protect your connection strings.
As of .NET 2.0 and NHibernate 1.2, it is easy to use encrypted connection strings (and other application settings) in your config files. Store your connection string in the <connectionStrings> block, then use the NHibernate connection.connection_string_name property instead of connection.connection_string. If you're running a web site and not a Windows app, you can use the aspnet_regiis command line tool to encrypt the <connectionStrings> block, while leaving the rest of your NHibernate settings in plaintext for easy editing.
Another strategy is to use Integrated Authentication for your database connection, if your database platform supports it. That way, you're (hopefully) not storing credentials in plaintext in your config file.
Actually, NHibernate can be vulnerable to SQL injection if you use SQL or HQL to construct your queries. Make sure that you use parameterized queries if you need to do this, otherwise you're setting yourself up for a world of pain.
Use a dedicated, locked-down SQL account
One of the arguments I've heard in favor of sprocs over ORM is that they don't want people to do whatever they want in the database. They disallow select/insert/update/delete on the tables themselves. Every action is controlled through a procedure which is reviewed by a DBA. I can understand where this thinking comes from... especially when you have a bunch of amateurs all with their hands in your database.
But times have changed and NHibernate is different. It's incredibly mature. In most cases it will write better SQL than your DBA :).
You still have to protect yourself from doing something stupid. As Spider-Man says, "with great power comes great responsibility."
I think it's much more appropriate to give NHibernate the proper access to the database and control actions through other means, such as audit logging and regular backups. If someone were to do something stupid, you can always recover.
http://weblogs.asp.net/fbouma/archive/2003/11/18/38178.aspx
Most ORMs handle SQL injection by creating parameterized queries. In NHibernate, if you are using LINQ to NHibernate or the Criteria/QueryOver methods of writing queries, the queries are automatically parameterized. If you are dynamically creating HQL/SQL queries yourself, you are more vulnerable, and you have to remember to parameterize those queries yourself.
OWASP mentions one form of SQL injection vulnerability in the context of ORM tools (and gives HQL injection as an example): http://www.owasp.org/index.php/Interpreter_Injection#ORM_Injection