Hi guys and girls :) I have a question about parallelism and MS SQL in C#.
I have a method that looks in the database for a specific object and, if it does not exist, adds it to the database. Unfortunately this is done inside a Parallel.ForEach, so I have run into the following situation with threads A and B:
A: look for entity with code 'xxx' - result: Not Exist
B: look for entity with code 'xxx' - result: Not Exist
A: Add entity to Db - result OK
B: Add entity to Db - result: "Violation of UNIQUE KEY constraint (...) The duplicate key value is 'xxx' "
What should I do to avoid this situation?
To avoid the duplicate-key error, you can catch the exception here:
try
{
    // Execute your insert against the database
}
catch (Exception ex)
{
    // If your constraint is violated, an exception is thrown here
    Console.WriteLine("db error: " + ex.Message);
}
But this is only a temporary fix. It works, but it is not clean or proper...
For cleaner code, you can create a spooler:
class Spooler
{
    public System.Collections.Generic.List<string> RequestList = new System.Collections.Generic.List<string>();

    private Thread SpoolThread;

    public Spooler()
    {
        // Open your database connection here
        // Start the thread that will check whether a request has been added to the collection
        SpoolThread = new Thread(new ThreadStart(SpoolerRunner));
        SpoolThread.Start();
    }

    public void CreateRequestDb(string dbRequest)
    {
        lock (RequestList)
        {
            RequestList.Add(dbRequest);
        }
    }

    private void SpoolerRunner()
    {
        while (true)
        {
            lock (RequestList)
            {
                if (RequestList.Count >= 1)
                {
                    foreach (string request in RequestList)
                    {
                        // Here, verify the request: check whether the entity already exists,
                        // and add it to the database if it does not
                    }
                    RequestList.Clear();
                }
            }
            // Check whether a request exists in the collection every 30 seconds
            Thread.Sleep(30000);
        }
    }
}
Why use a spooler?
Because if you initialize the spooler before starting your threads, each thread only has to add its request to the collection, and the spooler then processes the requests one after another... not at the same time from several different threads.
EDIT:
This spooler is only a sample that inserts string requests into your database one by one.
You can create a spooler with a collection of whatever object you want and insert it into the database if it does not exist... It is just a sample solution for processing work one item at a time when you have many threads ^^ (see the usage sketch below).
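A minimal usage sketch under the assumptions above (the Spooler class from this answer; the codes collection and the BuildInsertRequest helper are hypothetical):
// Initialize the spooler once, before starting the parallel work.
var spooler = new Spooler();

Parallel.ForEach(codes, code =>
{
    // Each thread only queues its request; the spooler thread later
    // checks for existence and inserts the requests one at a time.
    spooler.CreateRequestDb(BuildInsertRequest(code)); // BuildInsertRequest is a hypothetical helper
});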
I have a simple IBackgroundTask implementation that performs a query and then either performs an insert or one or more updates depending on whether a specific item exists or not. However, the updates are not persisted, and I don't understand why. New items are created just as expected.
The content item I'm updating has a CommonPart and I've tried authenticating as a valid user. I've also tried flushing the content manager at the end of the Sweep method. What am I missing?
This is my Sweep, slightly edited for brevity:
public void Sweep()
{
// Authenticate as the site's super user
var superUser = _membershipService.GetUser(_orchardServices.WorkContext.CurrentSite.SuperUser);
_authenticationService.SetAuthenticatedUserForRequest(superUser);
// Create a dummy "Person" content item
var item = _contentManager.New("Person");
var person = item.As<PersonPart>();
if (person == null)
{
return;
}
person.ExternalId = Random.Next(1, 10).ToString();
person.FirstName = GenerateFirstName();
person.LastName = GenerateLastName();
// Check if the person already exists
var matchingPersons = _contentManager
.Query<PersonPart, PersonRecord>(VersionOptions.AllVersions)
.Where(record => record.ExternalId == person.ExternalId)
.List().ToArray();
if (!matchingPersons.Any())
{
// Insert new person and quit
_contentManager.Create(item, VersionOptions.Draft);
return;
}
// There is at least one matching person; update them
foreach (var updatedPerson in matchingPersons)
{
updatedPerson.FirstName = person.FirstName;
updatedPerson.LastName = person.LastName;
}
_contentManager.Flush();
}
Try to add _contentManager.Publish(updatedPerson). If you do not want to publish, but just save, you don't need to do anything more, as changes in Orchard are saved automatically unless the ambient transaction is aborted. The call to Flush is not necessary at all. This is the case both during a regular request and in a background task.
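A minimal sketch of the update loop with the publish call added (assuming PersonPart exposes the usual ContentItem property of an Orchard content part):
foreach (var updatedPerson in matchingPersons)
{
    updatedPerson.FirstName = person.FirstName;
    updatedPerson.LastName = person.LastName;

    // Publish the underlying content item so the updated draft becomes the published version.
    _contentManager.Publish(updatedPerson.ContentItem);
}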
I have been trying desperately to delete an item from the database but have so far been unable to get it to work. The error message I see is this one:
"The operation failed: The relationship could not be changed because one or more of the foreign-key properties is non-nullable."
I am using EF4.1, with EDMX (Database first) and POCO objects. I have a repository for each type and a general repository that implements the base methods.
The specific problem is when I want to delete an item. Deleting the children is not a problem - everything works perfectly - the problem is when I come to delete the entity itself.
Consider the following model. I have an entity "Foo" which has a 1 to many relationship with "Bar". I call the following method in my repository:
public override void Delete(Models.Foo entity)
{
//Load the child items...
base.Context.Entry(entity).Collection(x => x.Bars).Load();
//Bar
BarRepository barRep = new BarRepository();
foreach (var item in entity.Bars)
{
var obj = barRep.GetById(item.ID);
barRep.Delete(obj);
}
barRep.Save();
//First attempt
//base.Delete(entity);
//base.Save();
//Have to resort to some SQL
base.ExecuteSqlCommand(string.Format("delete from Foo where ID = {0}", entity.ID));
}
The GenericRepository "Delete" method is:
public virtual void Delete(T entity)
{
_entities.Set<T>().Remove(entity);
}
The GenericRepository "Save" method is simply:
public virtual void Save()
{
_entities.SaveChanges();
}
What I would like to get to work is this:
//First attempt
//base.Delete(entity);
//base.Save();
But unfortunately the only way (currently) for me to delete the item is to run some SQL which just calls _entities.Database.ExecuteSqlCommand(SQL).
I've read a lot of things but nothing seems to work. I would really appreciate some help in trying to understand what is going on.
Thanks,
Jose
I use a function to get access to a configuration document:
private Document lookupDoc(String key1) {
try {
Session sess = ExtLibUtil.getCurrentSession();
Database wDb = sess.getDatabase(sess.getServerName(), this.dbname1);
View wView = wDb.getView(this.viewname1);
Document wDoc = wView.getDocumentByKey(key1, true);
this.debug("Got a doc for key: [" + key1 + "]");
return wDoc;
} catch (NotesException ne) {
if (this.DispLookupErrors)
ne.printStackTrace();
this.lastErrorMsg = ne.text;
this.debug(this.lastErrorMsg, "error");
}
return null;
}
In another method I use this function to get the document:
Document wDoc = this.lookupDoc(key1);
if (wDoc != null) {
// do things with the document
wDoc.recycle();
}
Should I be recycling the Database and View objects when I recycle the Document object? Or should those be recycled before the function returns the Document?
The best practice is to recycle all Domino objects during the scope within which they are created. However, recycling any object automatically recycles all objects "beneath" it. Hence, in your example method, you can't recycle wDb, because that would cause wDoc to be recycled as well, so you'd be returning a recycled Document handle.
So if you want to make sure that you're not leaking memory, it's best to recycle objects in reverse order (e.g., document first, then view, then database). This tends to require structuring your methods such that you do whatever you need to/with a Domino object inside whatever method obtains the handle on it.
For instance, I'm assuming the reason you defined a method to get a configuration document is so that you can pull the value of configuration settings from it. So, instead of a method to return the document, perhaps it would be better to define a method to return an item value:
private Object lookupItemValue(String configKey, String itemName) {
Object result = null;
Database wDb = null;
View wView = null;
Document wDoc = null;
try {
Session sess = ExtLibUtil.getCurrentSession();
wDb = sess.getDatabase(sess.getServerName(), this.dbname1);
wView = wDb.getView(this.viewname1);
wDoc = wView.getDocumentByKey(configKey, true);
this.debug("Got a doc for key: [" + configKey + "]");
result = wDoc.getItemValue(itemName);
} catch (NotesException ne) {
if (this.DispLookupErrors)
ne.printStackTrace();
this.lastErrorMsg = ne.text;
this.debug(this.lastErrorMsg, "error");
} finally {
incinerate(wDoc, wView, wDb);
}
return result;
}
There are a few things about the above that merit an explanation:
Normally in Java, we declare variables at first use, not Table of Contents style. But with Domino objects, it's best to revert to TOC so that, whether or not an exception was thrown, we can try to recycle them when we're done... hence the use of finally.
The return Object (which should be an item value, not the document itself) is also declared in the TOC, so we can return that Object at the end of the method - again, whether or not an exception was encountered (if there was an exception, presumably it will still be null).
This example calls a utility method that allows us to pass all Domino objects to a single method call for recycling.
Here's the code of that utility method:
private void incinerate(Object... dominoObjects) {
for (Object dominoObject : dominoObjects) {
if (null != dominoObject) {
if (dominoObject instanceof Base) {
try {
((Base)dominoObject).recycle();
} catch (NotesException recycleSucks) {
// optionally log exception
}
}
}
}
}
It's private, as I'm assuming you'll just define it in the same bean, but lately I tend to define this as a public static method of a Util class, allowing me to follow this same pattern from pretty much anywhere.
One final note: if you'll be retrieving numerous item values from a config document, obviously it would be expensive to establish a new database, view, and document handle for every item value you want to return. So I'd recommend overloading this method to accept a List<String> (or String[]) of item names and return a Map<String, Object> of the resulting values. That way you can establish a single handle for the database, view, and document, retrieve all the values you need, then recycle the Domino objects before actually making use of the item values returned.
Here's an idea I'm experimenting with. Tim's answer is excellent; however, in my case I really needed the document itself for other purposes, so I've tried this:
Document doc = null;
View view = null;
try {
Database database = ExtLibUtil.getCurrentSessionAsSigner().getCurrentDatabase();
view = database.getView("LocationsByLocationCode");
doc = view.getDocumentByKey(code, true);
//need to get this via the db object directly so we can safely recycle the view
String id = doc.getNoteID();
doc.recycle();
doc = database.getDocumentByID(id);
} catch (Exception e) {
log.error(e);
} finally {
JSFUtils.incinerate(view);
}
return doc;
You then have to make sure you're recycling the doc object safely in whatever method calls this one.
I have some temporary documents which exist for a while as config docs then no longer needed, so get deleted. This is kind of enforced by an existing Notes client application: they have to exist to keep that happy.
I've written a class which has a HashMap of Java Date, String, and Double values with the item name as the key. So now I have a serializable representation of the document, plus the original doc's noteID, so that it can be found quickly and amended/deleted when it's no longer needed.
That means the config doc can be collected, a standard routine creates a map of all items for the Java representation, taking into account the item type. The doc object can then be recycled right away.
The return object is the Java class representation of the document, with getValue(String name) and setValue(String name, val), where val can be a Double, String, or Date. NB: this structure has no need for rich text or attachments, so it's kept to simple field values.
It works well, although if the config doc has lots of items, it could mean holding a lot of info in memory unnecessarily. Not so in my particular case though.
The point is: the Java object is now serializable so it can remain in memory, and as Tim's brilliant reply suggests, the document can be recycled right away.
I have a project class, and related to it I have a categories class and responsible class. When I want to add a project object to database, I call a method from the categories class to get the categories id, and do the same for the responsible class. I mean:
int categoryId = getCategoryId("Beverages");
int responsibleId = getResponsibleId("An Example Name");
These two methods are in different classes, but use similar code. When I run the program I get the error:
"There is already an open DataReader associated with this Command which must be closed first."
You must either fix your code so that only one active (open) DataReader object is active on a single DbConnection at a time, or enable MARS in your connection string (MultipleActiveResultSets=True).
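A minimal sketch of a SQL Server connection string with MARS enabled (the server and database names are placeholders):
using System.Data.SqlClient;

// Hypothetical server/database names; the relevant part is MultipleActiveResultSets=True.
var connectionString =
    "Data Source=.;Initial Catalog=MyDatabase;Integrated Security=True;" +
    "MultipleActiveResultSets=True";

using (var connection = new SqlConnection(connectionString))
{
    connection.Open();
    // With MARS enabled, more than one command can have an open DataReader
    // on this single connection at the same time.
}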
I do agree with Mithrandir here, but there is another workaround for your question. I suppose the methods you listed access the database (silly, I know, but bear with me) and that they all open a connection to the same database. What you should add to your code is a finally block, which closes the database connection after the command is executed:
try
{
    string sql = "select * from ....";
    connection.Open();
    command.CommandText = sql;
    command.ExecuteNonQuery();
}
catch (SqlException)
{
}
finally
{
    if (connection != null)
    {
        connection.Close();
    }
}
I'm working with Windows Azure Table Storage and have a simple requirement: add a new row, overwriting any existing row with that PartitionKey/RowKey. However, saving the changes always throws an exception, even if I pass in the ReplaceOnUpdate option:
tableServiceContext.AddObject(TableName, entity);
tableServiceContext.SaveChangesWithRetries(SaveChangesOptions.ReplaceOnUpdate);
If the entity already exists it throws:
System.Data.Services.Client.DataServiceRequestException: An error occurred while processing this request. ---> System.Data.Services.Client.DataServiceClientException: <?xml version="1.0" encoding="utf-8" standalone="yes"?>
<error xmlns="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata">
<code>EntityAlreadyExists</code>
<message xml:lang="en-AU">The specified entity already exists.</message>
</error>
Do I really have to manually query for the existing row first and call DeleteObject on it? That seems very slow. Surely there is a better way?
As you've found, you can't just add another item that has the same row key and partition key, so you will need to run a query to check to see if the item already exists. In situations like this I find it helpful to look at the Azure REST API documentation to see what is available to the storage client library. You'll see that there are separate methods for inserting and updating. The ReplaceOnUpdate only has an effect when you're updating, not inserting.
While you could delete the existing item and then add the new one, you could just update the existing one (saving you one round trip to storage). Your code might look something like this:
var existsQuery = from e in tableServiceContext.CreateQuery<MyEntity>(TableName)
                  where e.PartitionKey == objectToUpsert.PartitionKey
                     && e.RowKey == objectToUpsert.RowKey
                  select e;
MyEntity existingObject = existsQuery.FirstOrDefault();
if (existingObject == null)
{
tableServiceContext.AddObject(TableName, objectToUpsert);
}
else
{
existingObject.Property1 = objectToUpsert.Property1;
existingObject.Property2 = objectToUpsert.Property2;
tableServiceContext.UpdateObject(existingObject);
}
tableServiceContext.SaveChangesWithRetries(SaveChangesOptions.ReplaceOnUpdate);
EDIT: While correct at the time of writing, with the September 2011 update Microsoft has updated the Azure table API to include two upsert commands, Insert or Replace Entity and Insert or Merge Entity.
In order to operate on an existing object NOT managed by the TableContext with either Delete or SaveChanges with the ReplaceOnUpdate option, you need to call AttachTo to attach the object to the TableContext, instead of calling AddObject, which instructs the TableContext to attempt to insert it.
http://msdn.microsoft.com/en-us/library/system.data.services.client.dataservicecontext.attachto.aspx
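A minimal sketch of that approach, reusing the names from the question (entity is assumed to be a new, untracked instance carrying the PartitionKey/RowKey you want to replace):
// Attach the untracked entity with a "*" ETag so the update is unconditional,
// then mark it for update and send it as a full replace rather than an insert.
tableServiceContext.AttachTo(TableName, entity, "*");
tableServiceContext.UpdateObject(entity);
tableServiceContext.SaveChangesWithRetries(SaveChangesOptions.ReplaceOnUpdate);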
In my case it was not allowed to remove the entity first, so I do it like this. It results in a single transaction to the server, which first removes the existing object and then adds the new one, removing the need to copy property values:
var existing = from e in _ServiceContext.AgentTable
where e.PartitionKey == item.PartitionKey
&& e.RowKey == item.RowKey
select e;
_ServiceContext.IgnoreResourceNotFoundException = true;
var existingObject = existing.FirstOrDefault();
if (existingObject != null)
{
_ServiceContext.DeleteObject(existingObject);
}
_ServiceContext.AddObject(AgentConfigTableServiceContext.AgetnConfigTableName, item);
_ServiceContext.SaveChangesWithRetries();
_ServiceContext.IgnoreResourceNotFoundException = false;
Insert/Merge or Update was added to the API in September 2011. Here is an example using the Storage API 2.0, which is easier to understand than the way it is done in the 1.7 API and earlier.
public void InsertOrReplace(ITableEntity entity)
{
retryPolicy.ExecuteAction(
() =>
{
try
{
TableOperation operation = TableOperation.InsertOrReplace(entity);
cloudTable.Execute(operation);
}
catch (StorageException e)
{
string message = "InsertOrReplace entity failed.";
if (e.RequestInformation.HttpStatusCode == 404)
{
message += " Make sure the table is created.";
}
// do something with message
}
});
}
The Storage API does not allow more than one operation per entity (delete+insert) in a group transaction:
An entity can appear only once in the transaction, and only one operation may be performed against it.
see MSDN: Performing Entity Group Transactions
So in fact you need to read first and decide on insert or update.
You may use UpsertEntity and UpsertEntityAsync methods in the official Microsoft Azure.Data.Tables TableClient.
The fully working example is available at https://github.com/Azure-Samples/msdocs-azure-data-tables-sdk-dotnet/blob/main/2-completed-app/AzureTablesDemoApplicaton/Services/TablesService.cs --
public void UpsertTableEntity(WeatherInputModel model)
{
TableEntity entity = new TableEntity();
entity.PartitionKey = model.StationName;
entity.RowKey = $"{model.ObservationDate} {model.ObservationTime}";
// The other values are added like items to a dictionary
entity["Temperature"] = model.Temperature;
entity["Humidity"] = model.Humidity;
entity["Barometer"] = model.Barometer;
entity["WindDirection"] = model.WindDirection;
entity["WindSpeed"] = model.WindSpeed;
entity["Precipitation"] = model.Precipitation;
_tableClient.UpsertEntity(entity);
}