When I remove the task-based parallelism or uncomment the Console.WriteLine() call in the code below, it works fine, but to improve performance I want to process each DataTable column on a separate Task. As written, it throws an IndexOutOfRangeException.
DataTable dt = new DataTable();
private async void btnExcel_Click(object sender, RoutedEventArgs e)
{
dt = new DataTable("worksheet");
dt.Columns.Add("Id");
dt.Columns.Add("MobileNo");
dt.Columns.Add("Name");
dt.Columns.Add("Name1");
dt.Columns.Add("Name2");
dt.Columns.Add("Name3");
dt.Columns.Add("Name4");
for (int i = 0; i < 100; i++)
dt.Rows.Add(i, "99999", "ABC" + i, "n1", "n2", "n3", "n4");
//var tasksInFlight = new Task[dt.Columns.Count];
var tasksInFlight = new List<Task>();
for (int index = 0; index < dt.Columns.Count; index++)
{
tasksInFlight.Add(updateDt(index, "col " + index));
}
//await Task.Factory.ContinueWhenAll(tasksInFlight, cT => { string a = "abc"; });
await Task.WhenAll(tasksInFlight);
}
public async Task updateDt(int colNum, string data)
{
try
{
Task t = Task.Run(() =>
{
for (int i = 0; i < dt.Rows.Count; i++)
{
// Console.WriteLine("Col Num : " + colNum + " i = " + i);
dt.Rows[i][colNum] = data;
}
});
await t;
}
catch (Exception ex)
{
}
}
When you call tasksInFlight.Add(updateDt(index, "col " + index));, the value of index is not read immediately; you merely store a task that will be executed later, when you await Task.WhenAll. It is only when the tasks actually run that the value of index is evaluated, which happens after the loop has finished, at which point index equals dt.Columns.Count and is therefore out of range.
Read about C# closures here.
To fix it, you can do something like this:
for (int index = 0; index < dt.Columns.Count; index++)
{
int tmpIndex = index;
tasksInFlight.Add(updateDt(tmpIndex, "col " + tmpIndex));
}
EDIT: After further investigation it turns out that DataTable is not thread-safe.
In addition to the fix above, DataTable should be accessed in a lock:
lock (dt)
{
dt.Rows[i][colNum] = data;
}
However, unless retrieving the actual data to put in the DataTable is CPU-intensive, the lock eliminates all benefits of concurrency in this case.
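Putting both fixes together, a minimal sketch of the corrected updateDt (field and method names taken from the question, with the lock applied around the write) might look like this:
public async Task updateDt(int colNum, string data)
{
    // Sketch only: same loop as in the question, with each DataTable write
    // serialised through the lock described above.
    await Task.Run(() =>
    {
        for (int i = 0; i < dt.Rows.Count; i++)
        {
            lock (dt)
            {
                dt.Rows[i][colNum] = data;
            }
        }
    });
}
The call site keeps the tmpIndex copy shown above, so each task receives its own column index.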
I am new to the Revit API and am working in C#. I want to get the schedule element parameter values using C#. I used the code below to get the view schedule.
var viewSchedule = new FilteredElementCollector(document)
.OfClass(typeof(ViewSchedule))
.FirstOrDefault(e => e.Name == "MyScheduleName") as ViewSchedule;
[Screenshot: Schedule Element Data]
From the above schedule, I used the code below to get the element data (please refer to the screenshot above), but it takes a long time (10 to 15 seconds) to produce the output.
var rowCount = viewSchedule.GetTableData().GetSectionData(SectionType.Body).NumberOfRows;
var colCount = viewSchedule.GetTableData().GetSectionData(SectionType.Body).NumberOfColumns;
for (int i = 0; i < rowCount; i++)
{
for (int j = 0; j < colCount; j++)
{
data += viewSchedule.GetCellText(SectionType.Body, i, j);
}
}
Please let me know if there is an alternate approach to get the schedule data using C#.
Thanks in advance.
Maybe you can also use ViewSchedule.Export, as demonstrated by The Building Coder in the posts on The Schedule API and Access to Schedule Data.
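As a rough sketch of that route (the folder path below is a placeholder, and ViewScheduleExportOptions is left at its defaults):
// Minimal sketch: export the whole schedule to a delimited text file in one call,
// then parse that file outside the Revit API instead of reading it cell by cell.
var exportOptions = new ViewScheduleExportOptions();
viewSchedule.Export(@"C:\Temp", viewSchedule.Name + ".csv", exportOptions);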
Yes, you can easily access schedule data without exporting.
First, get all the schedules and read the data cell by cell. Second, create a dictionary and store the data as key/value pairs. Now you can use the schedule data however you want. I have tried this in Revit 2019.
Here is the implementation.
public void getScheduleData(Document doc)
{
FilteredElementCollector collector = new FilteredElementCollector(doc);
IList<Element> collection = collector.OfClass(typeof(ViewSchedule)).ToElements();
String prompt = "ScheduleData :";
prompt += Environment.NewLine;
foreach (Element e in collection)
{
ViewSchedule viewSchedule = e as ViewSchedule;
TableData table = viewSchedule.GetTableData();
TableSectionData section = table.GetSectionData(SectionType.Body);
int nRows = section.NumberOfRows;
int nColumns = section.NumberOfColumns;
if (nRows > 1)
{
//valueData.Add(viewSchedule.Name);
List<List<string>> scheduleData = new List<List<string>>();
for (int i = 0; i < nRows; i++)
{
List<string> rowData = new List<string>();
for (int j = 0; j < nColumns; j++)
{
rowData.Add(viewSchedule.GetCellText(SectionType.Body, i, j));
}
scheduleData.Add(rowData);
}
List<string> columnData = scheduleData[0];
scheduleData.RemoveAt(0);
DataMapping(columnData, scheduleData);
}
}
}
public static void DataMapping(List<string> keyData, List<List<string>> valueData)
{
List<Dictionary<string, string>> items = new List<Dictionary<string, string>>();
string prompt = "Key/Value";
prompt += Environment.NewLine;
foreach (List<string> list in valueData)
{
for (int key = 0, value = 0; key < keyData.Count && value < list.Count; key++, value++)
{
Dictionary<string, string> newItem = new Dictionary<string, string>();
string k = keyData[key];
string v = list[value];
newItem.Add(k, v);
items.Add(newItem);
}
}
foreach (Dictionary<string, string> item in items)
{
foreach (KeyValuePair<string, string> kvp in item)
{
prompt += "Key: " + kvp.Key + ",Value: " + kvp.Value;
prompt += Environment.NewLine;
}
}
Autodesk.Revit.UI.TaskDialog.Show("Revit", prompt);
}
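A possible usage sketch, assuming getScheduleData is defined on (or reachable from) an external command class (the class name below is made up):
[Transaction(TransactionMode.ReadOnly)]
public class ReadScheduleDataCommand : IExternalCommand
{
    public Result Execute(ExternalCommandData commandData, ref string message, ElementSet elements)
    {
        Document doc = commandData.Application.ActiveUIDocument.Document;
        getScheduleData(doc); // reads every schedule and shows the key/value pairs
        return Result.Succeeded;
    }
}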
I'm using the same code in 6 Areas, each with a separate database. It has been working fine for a long time, but now it fails in 3 of the 6 with a null reference exception. The classes are all the same, and all database tables hold data.
The first method is the view which displays the data; the second is called by a link in the view to export the data to Excel. In 3 cases the data displays and exports correctly. In 2 of the other 3 cases the data is displayed correctly in the view, and the null reference exception is thrown at the point the data should be retrieved in the export method. In the last case the null reference exception is thrown when trying to return the view. I simply can't understand how it works in some instances but not in others.
public ActionResult PVS(string studySite, string currentFilter, int? page)
{
ViewBag.SelectedID = studySite;
if (studySite != null)
page = 1;
else
studySite = currentFilter;
ViewBag.CurrentFilter = studySite;
IQueryable<PVS> schedule = db.PVS.OrderBy(v => v.SiteNumber).ThenBy(v => v.Participant);
if (studySite != null)
{
string selectedID = studySite;
images = schedule.Where(v => v.SiteNumber == selectedID);
}
int pageSize = 10;
int pageNumber = (page ?? 1);
return View(schedule.ToPagedList(pageNumber, pageSize));
}
public void ExportPVSToExcel()
{
var grid = new System.Web.UI.WebControls.GridView();
var pvs = db.PVS.OrderBy(p => p.SiteNumber).ThenBy(p => p.Participant).ToList();
grid.DataSource = pvs;
grid.DataBind();
// create an Excel worksheet for the data export
ExcelPackage excel = new ExcelPackage();
var workSheet = excel.Workbook.Worksheets.Add("PVS");
var totalCols = grid.Rows[0].Cells.Count;
var totalRows = grid.Rows.Count;
var headerRow = grid.HeaderRow;
for (var i = 1; i <= totalCols; i++)
{
workSheet.Cells[1, i].Value = headerRow.Cells[i - 1].Text;
}
for (var j = 1; j <= totalRows; j++)
{
for (var i = 1; i <= totalCols; i++)
{
var data = pvs.ElementAt(j - 1);
workSheet.Cells[j + 1, i].Value = data.GetType().GetProperty(headerRow.Cells[i - 1].Text).GetValue(data, null);
}
}
using (var memoryStream = new MemoryStream())
{
Response.ContentType = "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet";
Response.AddHeader("content-disposition", "attachment; filename=PVS.xlsx");
excel.SaveAs(memoryStream);
memoryStream.WriteTo(Response.OutputStream);
Response.Flush();
Response.End();
}
}
I am uploading a large file to Azure Storage in 4 MB chunks. I have used the following code for the last year, but for the past month the uploaded file sometimes ends up corrupted and sometimes uploads fine.
Can anyone suggest what I need to change in the code?
//Uploads a file from the file system to a blob. Parallel implementation.
public void ParallelUploadFile(CloudBlockBlob blob1, string fileName1, BlobRequestOptions options1, int rowId, int maxBlockSize = 4 * 1024 * 1024) // rowId moved ahead of the optional maxBlockSize so the signature compiles
{
blob = blob1;
fileName = fileName1;
options = options1;
file = new FileInfo(fileName);
var fileStream = new FileStream(fileName, FileMode.Open, FileAccess.Read,FileShare.ReadWrite);
long fileSize = file.Length;
//Get the filesize
long fileSizeInMb = file.Length/1024/1024;
// let's figure out how big the file is here
long leftToRead = fileSize;
long startPosition = 0;
// have 1 block for every maxBlockSize bytes plus 1 for the remainder
var blockCount =
((int) Math.Floor((double) (fileSize/maxBlockSize))) + 1;
blockIds = new List<string>();
// populate the control array...
for (int j = 0; j < blockCount; j++)
{
var toRead = (int) (maxBlockSize < leftToRead
? maxBlockSize
: leftToRead);
var blockId = Convert.ToBase64String(
Encoding.ASCII.GetBytes(
string.Format("BlockId{0}", j.ToString("0000000"))));
transferDetails.Add(new BlockTransferDetail()
{
StartPosition = startPosition,
BytesToRead = toRead,
BlockId = blockId
});
if (toRead > 0)
{
blockIds.Add(blockId);
}
// increment the starting position
startPosition += toRead;
leftToRead -= toRead;
}
//*******
//PUT THE NO OF THREAD LOGIC HERE
//*******
int runFrom = 0;
int runTo = 0;
int uploadParametersCount = 0;
TotalUpload = Convert.ToInt64(fileSizeInMb);
for (int count = 0; count < transferDetails.Count; )
{
//Create uploading file parameters
uploadParametersesList.Add(new UploadParameters()
{
FileName = file.FullName,
BlockSize = 3900000,
//BlockSize = 4194304,
LoopFrom = runFrom + runTo,
IsPutBlockList = false,
UploadedBytes = 0,
Fs = fileStream,
RowIndex = rowId,
FileSize = Convert.ToInt64(fileSizeInMb)
});
//Logic to create correct threads
if (transferDetails.Count < 50)
{
runTo = transferDetails.Count;
uploadParametersesList[uploadParametersCount].LoopTo += runTo;
count += transferDetails.Count;
}
else
{
var tmp = transferDetails.Count - runTo;
if (tmp > 50 && tmp < 100)
{
runTo += tmp;
count += tmp;
uploadParametersesList[uploadParametersCount].LoopTo += runTo;
}
else
{
runTo += 50;
count += 50;
uploadParametersesList[uploadParametersCount].LoopTo += runTo;
}
}
//Add to Global Const
GlobalConst.UploadedParameters.Add(uploadParametersesList[uploadParametersCount]);
//Start the thread
int parametersCount = uploadParametersCount;
var thread = new Thread(() => ThRunThis(uploadParametersesList[parametersCount]))
{Priority = ThreadPriority.Highest};
thread.Start();
uploadParametersCount++;
//Start a timer here to put all blocks on azure blob
aTimer.Elapsed += OnTimedEvent;
aTimer.Interval = 5000;
aTimer.Start();
}
}
//Timer callback
private void OnTimedEvent(object source, ElapsedEventArgs e)
{
if (uploadParametersesList.Count(o => o.IsPutBlockList) == uploadParametersesList.Count)
{
aTimer.Elapsed -= OnTimedEvent;
aTimer.Stop();
//Finally commit it
try
{
uploadParametersesList.ForEach(x => x.Status = "Uploaded");
blob.PutBlockList(blockIds);
IsCompleted = true;
}
catch (Exception exception)
{
Console.WriteLine(exception.Message);
}
}
}
//Main thread
private void ThRunThis(UploadParameters uploadParameters)
{
try
{
for (int j = uploadParameters.LoopFrom; j < uploadParameters.LoopTo; j++)
{
br = new BinaryReader(uploadParameters.Fs);
var bytes = new byte[transferDetails[j].BytesToRead];
//move the file system reader to the proper position
uploadParameters.Fs.Seek(transferDetails[j].StartPosition, SeekOrigin.Begin);
br.Read(bytes, 0, transferDetails[j].BytesToRead);
if (bytes.Length > 0)
{
//calculate the block-level hash
MD5 md5 = new MD5CryptoServiceProvider();
byte[] blockHash = md5.ComputeHash(bytes);
string convertedHash = Convert.ToBase64String(blockHash, 0, 16);
blob.PutBlock(transferDetails[j].BlockId, new MemoryStream(bytes), convertedHash, options);
//Update Uploaded Bytes
uploadParameters.UploadedBytes += transferDetails[j].BytesToRead;
TotalUploadedBytes += transferDetails[j].BytesToRead;
Console.WriteLine(Thread.CurrentThread.Name);
//Try to free the memory
try
{
GC.Collect();
}
catch (Exception exception)
{
Console.WriteLine(exception.Message);
}
}
}
//Is Completed
uploadParameters.IsPutBlockList = true;
}
catch (Exception exception)
{
Console.WriteLine(Thread.CurrentThread.Name);
uploadParameters.Exception = exception.Message;
Console.WriteLine(exception.Message);
}
}
It's been a long time since I touched large blob uploads with threads, but it looks like your block list is getting out of sequence across the threads.
Why don't you get the block list from the cloud once all blocks have been uploaded, and then use that list for PutBlockList? That would make sure you commit the blocks in the correct sequence.
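A minimal sketch of that idea, assuming the classic Microsoft.WindowsAzure.Storage client used in your code: once every block has been uploaded, list the uncommitted blocks on the service, order them by the sequence number encoded in each block ID, and commit that ordered list instead of the in-memory blockIds.
// Sketch only: replaces blob.PutBlockList(blockIds) in the timer callback.
// Each block ID is the Base64 form of "BlockIdNNNNNNN", so decoding it and
// sorting on the decoded string restores the original upload order.
var uncommitted = blob.DownloadBlockList(BlockListingFilter.Uncommitted);
var orderedBlockIds = uncommitted
    .Select(b => b.Name)
    .OrderBy(name => Encoding.ASCII.GetString(Convert.FromBase64String(name)))
    .ToList();
blob.PutBlockList(orderedBlockIds);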
I have an array of files like this:
string[] unZippedFiles;
The idea is that I want to parse these files in parallel. As they are parsed, records get placed on a ConcurrentBag. As records are being placed, I want to kick off the update function.
Here is what I am doing in my Main():
foreach(var file in unZippedFiles)
{ Parallel.Invoke
(
() => ImportFiles(file),
() => UpdateTest()
);
}
This is what the Update code looks like.
static void UpdateTest( )
{
Console.WriteLine("Updating/Inserting merchant information.");
while (!merchCollection.IsEmpty || producingRecords )
{
merchant x;
if (merchCollection.TryTake(out x))
{
UPDATE_MERCHANT(x.m_id, x.mInfo, x.month, x.year);
}
}
}
This is what the import code looks like. It's pretty much a giant string parser.
System.IO.StreamReader SR = new System.IO.StreamReader(fileName);
long COUNTER = 0;
StringBuilder contents = new StringBuilder( );
string M_ID = "";
string BOF_DELIMITER = "%%MS_SKEY_0000_000_PDF:";
string EOF_DELIMITER = "%%EOF";
try
{
record_count = 0;
producingRecords = true;
for (COUNTER = 0; COUNTER <= SR.BaseStream.Length - 1; COUNTER++)
{
if (SR.EndOfStream)
{
break;
}
contents.AppendLine(Strings.Trim(SR.ReadLine()));
contents.AppendLine(System.Environment.NewLine);
//contents += Strings.Trim(SR.ReadLine());
//contents += Strings.Chr(10);
if (contents.ToString().IndexOf((EOF_DELIMITER)) > -1)
{
if (contents.ToString().StartsWith(BOF_DELIMITER) & contents.ToString().IndexOf(EOF_DELIMITER) > -1)
{
string data = contents.ToString();
M_ID = data.Substring(data.IndexOf("_M") + 2, data.Substring(data.IndexOf("_M") + 2).IndexOf("_"));
Console.WriteLine("Merchant: " + M_ID);
merchant newmerch;
newmerch.m_id = M_ID;
newmerch.mInfo = data.Substring(0, (data.IndexOf(EOF_DELIMITER) + 5));
newmerch.month = DateTime.Now.AddMonths(-1).Month;
newmerch.year = DateTime.Now.AddMonths(-1).Year;
//Update(newmerch);
merchCollection.Add(newmerch);
}
contents.Clear();
//GC.Collect();
}
}
SR.Close();
// UpdateTest();
}
catch (Exception ex)
{
producingRecords = false;
}
finally
{
producingRecords = false;
}
}
The problem I am having is that the Update runs once and then the ImportFiles function just takes over and does not yield to the update function. Any ideas on what I am doing wrong would be of great help.
Here's my stab at fixing your thread synchronisation. Note that I haven't changed the code from a functional standpoint (with the exception of taking out the catch block; swallowing exceptions is generally a bad idea, and they need to be propagated).
Forgive me if something doesn't compile - I'm writing this based on incomplete snippets.
Main
foreach(var file in unZippedFiles)
{
using (var merchCollection = new BlockingCollection<merchant>())
{
Parallel.Invoke
(
() => ImportFiles(file, merchCollection),
() => UpdateTest(merchCollection)
);
}
}
Update
private void UpdateTest(BlockingCollection<merchant> merchCollection)
{
Console.WriteLine("Updating/Inserting merchant information.");
foreach (merchant x in merchCollection.GetConsumingEnumerable())
{
UPDATE_MERCHANT(x.m_id, x.mInfo, x.month, x.year);
}
}
Import
Don't forget to pass in merchCollection as a parameter - it should not be static.
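For example, the signature might become something like this (ImportFiles and merchant are the names from your code; the static modifier is a guess):
static void ImportFiles(string fileName, BlockingCollection<merchant> merchCollection)
{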
System.IO.StreamReader SR = new System.IO.StreamReader(fileName);
long COUNTER = 0;
StringBuilder contents = new StringBuilder( );
string M_ID = "";
string BOF_DELIMITER = "%%MS_SKEY_0000_000_PDF:";
string EOF_DELIMITER = "%%EOF";
try
{
record_count = 0;
for (COUNTER = 0; COUNTER <= SR.BaseStream.Length - 1; COUNTER++)
{
if (SR.EndOfStream)
{
break;
}
contents.AppendLine(Strings.Trim(SR.ReadLine()));
contents.AppendLine(System.Environment.NewLine);
//contents += Strings.Trim(SR.ReadLine());
//contents += Strings.Chr(10);
if (contents.ToString().IndexOf((EOF_DELIMITER)) > -1)
{
if (contents.ToString().StartsWith(BOF_DELIMITER) & contents.ToString().IndexOf(EOF_DELIMITER) > -1)
{
string data = contents.ToString();
M_ID = data.Substring(data.IndexOf("_M") + 2, data.Substring(data.IndexOf("_M") + 2).IndexOf("_"));
Console.WriteLine("Merchant: " + M_ID);
merchant newmerch;
newmerch.m_id = M_ID;
newmerch.mInfo = data.Substring(0, (data.IndexOf(EOF_DELIMITER) + 5));
newmerch.month = DateTime.Now.AddMonths(-1).Month;
newmerch.year = DateTime.Now.AddMonths(-1).Year;
//Update(newmerch);
merchCollection.Add(newmerch);
}
contents.Clear();
//GC.Collect();
}
}
SR.Close();
// UpdateTest();
}
finally
{
merchCollection.CompleteAdding();
}
}
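The key design change: GetConsumingEnumerable blocks while the collection is empty and completes once CompleteAdding is called in the finally block, so the consumer loop ends exactly when the producer finishes (or fails). That is what makes the producingRecords flag and the busy-wait loop unnecessary.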
I am trying to change all the Business Unit references I got after importing a solution to the ones in the Acceptance environment.
QueryExpression ViewQuery = new QueryExpression("savedquery");
String[] viewArrayFields = { "name", "fetchxml" };
ViewQuery.ColumnSet = new ColumnSet(viewArrayFields);
ViewQuery.PageInfo = new PagingInfo();
ViewQuery.PageInfo.Count = 5000;
ViewQuery.PageInfo.PageNumber = 1;
ViewQuery.PageInfo.ReturnTotalRecordCount = true;
EntityCollection retrievedViews = service.RetrieveMultiple(ViewQuery);
//iterate though the values and print the right one for the current user
int oldValues = 0;
int accValuesUpdated = 0;
int prodValuesUpdated = 0;
int total = 0;
foreach (var entity in retrievedViews.Entities)
{
total++;
if (!entity.Contains("fetchxml"))
{ }
else
{
string fetchXML = entity.Attributes["fetchxml"].ToString();
for (int i = 0; i < guidDictionnary.Count; i++)
{
var entry = guidDictionnary.ElementAt(i);
if (fetchXML.Contains(entry.Key.ToString().ToUpperInvariant()))
{
Console.WriteLine(entity.Attributes["name"].ToString());
oldValues++;
if (destinationEnv.Equals("acc"))
{
accValuesUpdated++;
Console.WriteLine();
Console.WriteLine("BEFORE:");
Console.WriteLine();
Console.WriteLine(entity.Attributes["fetchxml"].ToString());
string query = entity.Attributes["fetchxml"].ToString();
query = query.Replace(entry.Key.ToString().ToUpperInvariant(), entry.Value.AccGuid.ToString().ToUpperInvariant());
entity.Attributes["fetchxml"] = query;
Console.WriteLine();
Console.WriteLine("AFTER:");
Console.WriteLine();
Console.WriteLine(entity.Attributes["fetchxml"].ToString());
}
else
{
prodValuesUpdated++;
string query = entity.Attributes["fetchxml"].ToString();
query = query.Replace(entry.Key.ToString().ToUpperInvariant(), entry.Value.ProdGuid.ToString().ToUpperInvariant());
entity.Attributes["fetchxml"] = query;
}
service.Update(entity);
}
}
}
}
Console.WriteLine("{0} values to be updated. {1} shall be mapped to acceptance, {2} to prod. Total = {3} : {4}", oldValues, accValuesUpdated, prodValuesUpdated, total, retrievedViews.Entities.Count);
I see that the new value is corrected, but it does not get saved. I get no error while updating the record and publishing the changes in CRM does not help.
Any hint?
According to your comments, it sounds like the value you're saving on the entity is the value that you want it to be. I'm guessing your issue is that you're not publishing your change. If you don't publish it, I believe it will still give you the old value of the FetchXml.
Try calling this method:
PublishEntity(service, "savedquery");
private void PublishEntity(IOrganizationService service, string logicalName)
{
service.Execute(new PublishXmlRequest()
{
ParameterXml = "<importexportxml>"
+ " <entities>"
+ " <entity>" + logicalName + "</entity>"
+ " </entities>"
+ "</importexportxml>"
});
}
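In the context of your loop, that would mean something like this (a sketch; entity, query and service come from your snippet):
// ... inside the foreach over retrievedViews.Entities ...
entity.Attributes["fetchxml"] = query;
service.Update(entity);
// ... then, after the foreach has finished, publish once so CRM serves the new fetchxml:
PublishEntity(service, "savedquery");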