Threading does not work when deploying to another server - multithreading

I have created a web page where users can upload a file which contains data that are inserted into the Lead table in Microsoft Dynamics CRM 2011.
The weird thing is that when I deploy to our test environment, the application seemingly runs fine but absolutely no rows are imported. In my dev environment it works just fine.
While trying to track down the error, I added a setting that runs the import without threading (essentially doing the work synchronously on the calling thread), and then it inserts all the leads. If I switch back to using threads it inserts no leads at all, although I get no application errors.
Using SQL Server Profiler in dev (where it works with threading) I can see all of the insert statements run. When profiling the test environment, no insert statements are run at all.
I get the feeling that some server issue or setting is causing this behaviour, but my searches haven't turned up anything that helps.
I'm hoping someone recognizes this problem. I don't have much experience with threading, so maybe this is just a bump in the road I need to get over.
I can't show my code in full, but this is basically how I start the threads:
for (int i = 0; i < _numberOfThreads; i++)
{
    MultipleRequestObject mpObject = new MultipleRequestObject()
    {
        insertType = insertType,
        listOfEntities = leadsForInsertionOrUpdate[i].ToList<Entity>()
    };

    Thread thread = new Thread(delegate()
    {
        insertErrors.AddRange(leadBusinessLogic.SaveToCRMMultipleRequest(mpObject));
    });
    thread.Start();
    activeThreads.Add(thread);
}

// Wait for threads to complete
foreach (Thread t in activeThreads)
    t.Join();
I initialize my CRM connection like this (it reads the connection string from web.config -> connectionStrings):
public CrmConnection connection { get; set; }
private IOrganizationService service { get; set; }
public CrmContext crmContext { get; set; }

public CrmGateway()
{
    connection = new CrmConnection("Crm");
    service = (IOrganizationService)new OrganizationService(connection);
    crmContext = new CrmContext(service);
}
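
One thing worth checking (a diagnostic sketch only, not a confirmed fix): exceptions thrown on a worker thread do not surface as application errors, and List<T>.AddRange is not thread-safe when several threads write to insertErrors at once. A variant of the loop above that records per-thread exceptions and serializes access to the shared list might look like this; apart from threadExceptions, the lock, and the try/catch, all identifiers are the ones from the question.

var threadExceptions = new System.Collections.Concurrent.ConcurrentQueue<Exception>();

for (int i = 0; i < _numberOfThreads; i++)
{
    MultipleRequestObject mpObject = new MultipleRequestObject()
    {
        insertType = insertType,
        listOfEntities = leadsForInsertionOrUpdate[i].ToList<Entity>()
    };

    Thread thread = new Thread(() =>
    {
        try
        {
            var errors = leadBusinessLogic.SaveToCRMMultipleRequest(mpObject);
            lock (insertErrors)   // List<T>.AddRange is not thread-safe
            {
                insertErrors.AddRange(errors);
            }
        }
        catch (Exception ex)
        {
            // Without a catch, an exception here silently kills the thread
            // and the application reports no error at all.
            threadExceptions.Enqueue(ex);
        }
    });
    thread.Start();
    activeThreads.Add(thread);
}

foreach (Thread t in activeThreads)
    t.Join();

// Surface anything the worker threads swallowed - in the test environment this
// may show why no inserts ever reach SQL Server when threading is enabled.
foreach (Exception ex in threadExceptions)
    System.Diagnostics.Trace.TraceError(ex.ToString());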

Related

Azure connection attempt failed

I'm using the following code to connect. I can connect to other Azure Resources ok.
But for one resource I get the following error (the URL and key are correct):
{"A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond"}
The code is as follows:
_searchClient = new SearchServiceClient(searchServiceName, new SearchCredentials(apiKey));
_httpClient.DefaultRequestHeaders.Add("api-key", apiKey);
_searchServiceEndpoint = String.Format("https://{0}.{1}", searchServiceName, _searchClient.SearchDnsSuffix);

bool result = RunAsync().GetAwaiter().GetResult();
Any ideas? How can I troubleshoot this? Thanks in advance.
I will show how this is done in C#.
You will need an appsettings.json file and the code below in Program.cs. The documentation example contains a number of other files that you may need to use, learn from, and adapt for your use case.
When working with C# and Azure, first understand what is unique about the file structure of your solution. This is why we build the examples from the docs as we learn the solution. Next, we study the different blocks of code that, when executed, deliver one feature or piece of functionality to the solution as a whole.
appsettings.json
{
    "SearchServiceName": "[Put your search service name here]",
    "SearchIndexName": "hotels",
    "SearchServiceAdminApiKey": "[Put your primary or secondary Admin API key here]",
    "SearchServiceQueryApiKey": "[Put your primary or secondary Query API key here]"
}
Program.cs
namespace AzureSearch.SDKHowTo
{
    using System;
    using System.Linq;
    using System.Threading;
    using Microsoft.Azure.Search;
    using Microsoft.Azure.Search.Models;
    using Microsoft.Extensions.Configuration;
    using Microsoft.Spatial;

    // This sample shows how to delete, create, upload documents and query an index
    class Program
    {
        static void Main(string[] args)
        {
            IConfigurationBuilder builder = new ConfigurationBuilder().AddJsonFile("appsettings.json");
            IConfigurationRoot configuration = builder.Build();

            SearchServiceClient serviceClient = CreateSearchServiceClient(configuration);
            string indexName = configuration["SearchIndexName"];

            Console.WriteLine("{0}", "Deleting index...\n");
            DeleteIndexIfExists(indexName, serviceClient);

            Console.WriteLine("{0}", "Creating index...\n");
            CreateIndex(indexName, serviceClient);

            ISearchIndexClient indexClient = serviceClient.Indexes.GetClient(indexName);

            Console.WriteLine("{0}", "Uploading documents...\n");
            UploadDocuments(indexClient);

            ISearchIndexClient indexClientForQueries = CreateSearchIndexClient(indexName, configuration);
            RunQueries(indexClientForQueries);

            Console.WriteLine("{0}", "Complete. Press any key to end application...\n");
            Console.ReadKey();
        }

        private static SearchServiceClient CreateSearchServiceClient(IConfigurationRoot configuration)
        {
            string searchServiceName = configuration["SearchServiceName"];
            string adminApiKey = configuration["SearchServiceAdminApiKey"];

            SearchServiceClient serviceClient = new SearchServiceClient(searchServiceName, new SearchCredentials(adminApiKey));
            return serviceClient;
        }

        private static SearchIndexClient CreateSearchIndexClient(string indexName, IConfigurationRoot configuration)
        {
            string searchServiceName = configuration["SearchServiceName"];
            string queryApiKey = configuration["SearchServiceQueryApiKey"];

            SearchIndexClient indexClient = new SearchIndexClient(searchServiceName, indexName, new SearchCredentials(queryApiKey));
            return indexClient;
        }

        private static void DeleteIndexIfExists(string indexName, SearchServiceClient serviceClient)
        {
            if (serviceClient.Indexes.Exists(indexName))
            {
                serviceClient.Indexes.Delete(indexName);
            }
        }

        private static void CreateIndex(string indexName, SearchServiceClient serviceClient)
        {
            var definition = new Index()
            {
                Name = indexName,
                Fields = FieldBuilder.BuildForType<Hotel>()
            };

            serviceClient.Indexes.Create(definition);
        }

        // UploadDocuments, RunQueries and the Hotel model are defined elsewhere in the documentation sample.
    }
}
Azure concepts to learn
How and why we create Azure clients
Why we use appsettings.json
What some example file structures for Azure Search solutions look like
What coding language you want to use to build the solution
Whether you want to use the Azure SDK
How to find and create API keys
C# concepts to learn
What an interface is and how you use it
How to import one file in the file structure into another
How the main function works
How to pass variables into a function
How to call a function from within a function
How to write server-side code vs client-side code
How to deploy C# code to Azure
What version of C# you are using
What ASP.NET is and what version you will use
What ASP.NET Core is and what version you will use
As you can see, Azure and C# have a steep learning curve.
Luckily you have Stack Overflow and the documentation to research all of the above questions and more. :)
As for how you would troubleshoot: what I do is research each block of code in the documentation example and run all of the code locally. Then I test each block of code one at a time. You are always testing the data flowing through a block of code, so you can simply log the result of a block of code by creating a test variable and printing that variable to the console.
Because each block of code represents one feature or piece of functionality, each test will show either a pass or fail for delivering that feature or functionality. Thus you can design functionality, implement that design, and create a test for each new feature.
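As a concrete example of testing one block at a time, the connection failure in the question can be isolated with a small probe that bypasses the SDK entirely and calls the search endpoint over plain HTTP. This is only a sketch: the service name and key are assumed to come from appsettings.json above, and the endpoint format and api-key header mirror what the question's own code builds.

// Minimal connectivity probe, independent of the Azure Search SDK.
using System;
using System.Net.Http;
using System.Threading.Tasks;

class ConnectivityProbe
{
    static async Task Main()
    {
        string searchServiceName = "[your search service name]";   // from appsettings.json
        string apiKey = "[your admin or query API key]";            // from appsettings.json
        string endpoint = $"https://{searchServiceName}.search.windows.net/indexes?api-version=2019-05-06";

        using (var client = new HttpClient { Timeout = TimeSpan.FromSeconds(10) })
        {
            client.DefaultRequestHeaders.Add("api-key", apiKey);
            try
            {
                HttpResponseMessage response = await client.GetAsync(endpoint);
                Console.WriteLine($"HTTP {(int)response.StatusCode} {response.StatusCode}");
            }
            catch (HttpRequestException ex)
            {
                // A socket-level failure here points at DNS, firewall or network issues
                // rather than at the SDK code from the question.
                Console.WriteLine("Connection failed: " + ex.Message);
            }
            catch (TaskCanceledException)
            {
                Console.WriteLine("Request timed out - likely a network or firewall issue.");
            }
        }
    }
}

If this probe fails with the same "connected party did not properly respond" error, the problem is network reachability to that one service (for example a firewall or proxy in between), not the client code.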

Handling Acumatica timeout on API Invoke action

I have code in a standalone application that invokes an Acumatica action to generate reports; I am running into timeouts on large documents while the action completes.
What is the best method to handle these timeouts? I need to wait for the action to complete in order to retrieve the files I've generated.
Standalone application code:
public SalesOrder GenerateAcumaticaLabels(string orderNbr, string reportType)
{
    SalesOrder salesOrder = null;

    using (ISoapClientProvider clientProvider = soapClientFactory.Create())
    {
        try
        {
            SalesOrder salesOrderToFind = new SalesOrder
            {
                OrderType = new StringSearch { Value = orderNbr.Split(OrderSeparator.SalesOrder).First() },
                OrderNbr = new StringSearch { Value = orderNbr.Split(OrderSeparator.SalesOrder).Last() },
                ReturnBehavior = ReturnBehavior.OnlySpecified,
            };

            salesOrder = clientProvider.Client.Get(salesOrderToFind) as SalesOrder;

            InvokeResult invokeResult = clientProvider.Client.Invoke(salesOrder, new exportSFPReport());
            ProcessResult processResult = clientProvider.Client.GetProcessStatus(invokeResult);

            // Wait for the update to complete before we attempt to retrieve the files
            while (processResult.Status == ProcessStatus.InProcess)
            {
                Thread.Sleep(1000); // pause for 1 second
                processResult = clientProvider.Client.GetProcessStatus(invokeResult);
            }
        }
And the action in Acumatica:
public PXAction<SOOrder> ExportSFPReport;

[PXButton]
[PXUIField(DisplayName = "Generate Robot SFP PDF")]
protected IEnumerable exportSFPReport(PXAdapter adapter)
{
    // Report parameters
    Dictionary<String, String> parameters = new Dictionary<String, String>();
    parameters["SOOrder.OrderType"] = Base.Document.Current.OrderType;
    parameters["SOOrder.OrderNbr"] = Base.Document.Current.OrderNbr;

    IEnumerable reportFileInfo = ExportReport(adapter, "IN619217", parameters);
    exportTrayLabelReport(adapter, "SFP");

    return reportFileInfo;
}
The problem here is that your action is synchronous, so it is trying to complete within the Invoke call (which is not a good thing for long processes). You have to explicitly make your operation long-running by using PXLongOperation.StartOperation inside your handler, and then your client code should work properly, as it already handles the waiting and checking.
I believe the reason you encounter the time-out is that there is no TCP communication between the time you send the request and the time you receive the response. With the TCP KeepAlive flag set to true, the client will periodically ping the server to reset the time-out period.
That would be the best way. However, Acumatica connections are rather high level, so I don't think you'll be able to easily access that flag. What I would try first, in a scenario that doesn't involve an external application, is to wrap your action event-handler code in a PXLongOperation block, which has to do something similar to keep the connection alive under the hood:
PXLongOperation.StartOperation(this or Base, delegate
{
    // your code here
});
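Applied to the exportSFPReport handler from the question, that could look roughly like the sketch below. This is only an illustration of where StartOperation would go, with the report parameters captured before the delegate starts; ExportReport and exportTrayLabelReport are the question's own methods, and as noted next the handler then returns before the files are produced.

public PXAction<SOOrder> ExportSFPReport;

[PXButton]
[PXUIField(DisplayName = "Generate Robot SFP PDF")]
protected IEnumerable exportSFPReport(PXAdapter adapter)
{
    // Capture the parameters before handing the work to the long-running operation.
    Dictionary<String, String> parameters = new Dictionary<String, String>();
    parameters["SOOrder.OrderType"] = Base.Document.Current.OrderType;
    parameters["SOOrder.OrderNbr"] = Base.Document.Current.OrderNbr;

    PXLongOperation.StartOperation(Base, delegate
    {
        // Runs asynchronously; the Invoke call from the client returns quickly
        // and the client keeps polling GetProcessStatus as it already does.
        ExportReport(adapter, "IN619217", parameters);
        exportTrayLabelReport(adapter, "SFP");
    });

    return adapter.Get();
}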
When I do encounter time-outs in Acumatica that can't be solved with PXLongOperation, I go for the simplest method, which is increasing the IIS timeout in the Web.Config file. I'm not sure whether your use case with an external application will work well with an async PXLongOperation; the handler would return prematurely and the client might not be able to retrieve the async payload.
So you might have to increase the time-out instead. As far as I know there's no real practical drawback to doing this unless your website is under threat of DoS attacks.
You can locate and edit the Web.Config file of your Acumatica instance using the inetmgr program if you are self-hosting Acumatica. Otherwise, talk to your SaaS contact to see if that's an option.
I'm pretty sure you are hitting the IIS time-out. A tell-tale sign would be the connection being lost after exactly 5 minutes, which is the default 300-second value. You can edit the Web.Config file to increase the executionTimeout value. It's not a bad idea to increase maxRequestLength too if you are requesting large amounts of data from the Acumatica API, as this is a common cause of failure that is easy to miss in testing but occurs in real-life scenarios:
<httpRuntime executionTimeout="300" requestValidationMode="2.0" maxRequestLength="1048576" />

Subscribing to Service Fabric cluster level events

I am trying to create a service that will update an external list of Service Endpoints for applications running in my service fabric cluster. (Basically I need to replicate the Azure Load Balancer in my on premises F5 Load Balancer.)
During last month's Service Fabric Q&A, the team pointed me at RegisterServiceNotificationFilterAsync.
I made a stateless service using this method, and deployed it to my development cluster. I then made a new service by running the ASP.NET Core Stateless service template.
I expected that when I deployed the second service, the break point would hit in my first service, indicating that a service had been added. But no breakpoint was hit.
I have found very little in the way of examples for this kind of thing on the internet, so I am asking here hoping that someone else has done this and can tell me where I went wrong.
Here is the code for my service that is trying to catch the application changes:
protected override async Task RunAsync(CancellationToken cancellationToken)
{
    var fabricClient = new FabricClient();
    long? filterId = null;
    try
    {
        var filterDescription = new ServiceNotificationFilterDescription
        {
            Name = new Uri("fabric:")
        };

        fabricClient.ServiceManager.ServiceNotificationFilterMatched += ServiceManager_ServiceNotificationFilterMatched;
        filterId = await fabricClient.ServiceManager.RegisterServiceNotificationFilterAsync(filterDescription);

        long iterations = 0;
        while (true)
        {
            cancellationToken.ThrowIfCancellationRequested();
            ServiceEventSource.Current.ServiceMessage(this.Context, "Working-{0}", ++iterations);
            await Task.Delay(TimeSpan.FromSeconds(1), cancellationToken);
        }
    }
    finally
    {
        if (filterId != null)
            await fabricClient.ServiceManager.UnregisterServiceNotificationFilterAsync(filterId.Value);
    }
}

private void ServiceManager_ServiceNotificationFilterMatched(object sender, EventArgs e)
{
    Debug.WriteLine("Change Occurred");
}
If you have any tips on how to get this going, I would love to see them.
You need to set the MatchNamePrefix to true, like this:
var filterDescription = new ServiceNotificationFilterDescription
{
    Name = new Uri("fabric:"),
    MatchNamePrefix = true
};
Otherwise it will only match specific services. In my application I can catch cluster-wide events when this parameter is set to true.
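To see which service actually triggered the notification, the EventArgs can be cast to the notification type exposed by the service management client. A small sketch of the handler, assuming the FabricClient.ServiceManagementClient.ServiceNotificationEventArgs type from System.Fabric:

private void ServiceManager_ServiceNotificationFilterMatched(object sender, EventArgs e)
{
    // The event is declared with plain EventArgs; the concrete type carries the notification.
    var args = (FabricClient.ServiceManagementClient.ServiceNotificationEventArgs)e;
    var notification = args.Notification;

    Debug.WriteLine($"Service notification for {notification.ServiceName}");
}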

LaunchUriAsync Windows calculator - first time

Perhaps this is expected behavior, but in my experience information on programmatically launching built-in applications in Windows 10 is scarce for anything aside from the Settings app, Maps, and Contacts, and I could use some help on this.
I am launching the stock Windows Calculator from within my application. I took some guesses at the URI and it appears to work, except on the first launch. When we get a new device, the first time the app is run and the calculator launch is attempted, Windows wants to get an app from the Store (which the end users will not have access to); it does not even offer the built-in calculator as a choice. If the calculator is opened manually, even once, it just works from that point on. Is there something else I could or should be doing? Any guidance or suggestions would be greatly appreciated.
I would like to have it work the first time (a setting on the device?), or at least offer the built-in calculator as a choice.
Here is the code I am using:
private async void LaunchCalculatorAsync(object sender, TappedRoutedEventArgs e)
{
    var options = new Windows.System.LauncherOptions();
    options.TreatAsUntrusted = false;
    options.DesiredRemainingView = Windows.UI.ViewManagement.ViewSizePreference.UseNone;

    await Windows.System.Launcher.LaunchUriAsync(new Uri("calculator:"), options);
}
From running a list of installed apps on the device, I see the calculator listed: Microsoft.WindowsCalculator_8wekyb3d8bbwe. I have been unsuccessful in attempting to provide the PreferredApplicationPackageFamilyName using options.PreferredApplicationPackageFamilyName = "WindowsCalculator";
I have tried with and without the "Microsoft." prefix, as well as with and without the odd string of characters.
You can get the demo from Microsoft on GitHub:
Association launching sample
Hope this can help you.
private async void LaunchUriWithWarning()
{
    // Create the URI to launch from a string.
    var uri = new Uri(UriToLaunch.Text);

    // Configure the warning prompt.
    var options = new LauncherOptions() { TreatAsUntrusted = true };

    // Launch the URI.
    bool success = await Launcher.LaunchUriAsync(uri, options);
    if (success)
    {
        rootPage.NotifyUser("URI launched: " + uri.AbsoluteUri, NotifyType.StatusMessage);
    }
    else
    {
        rootPage.NotifyUser("URI launch failed.", NotifyType.ErrorMessage);
    }
}
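Regarding the PreferredApplicationPackageFamilyName attempt in the question: that option expects the full package family name, which the question's own app listing reports as Microsoft.WindowsCalculator_8wekyb3d8bbwe. A sketch of that variant (not verified on a first-run device) would be:

private async void LaunchCalculatorWithPreferredTargetAsync()
{
    var options = new Windows.System.LauncherOptions();
    options.TreatAsUntrusted = false;

    // Full package family name, as listed among the installed apps in the question.
    options.PreferredApplicationPackageFamilyName = "Microsoft.WindowsCalculator_8wekyb3d8bbwe";
    options.PreferredApplicationDisplayName = "Windows Calculator";

    await Windows.System.Launcher.LaunchUriAsync(new Uri("calculator:"), options);
}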

Azure Autoscale Restarts Running Instances

I've been using Autoscale to shift between 2 and 1 instances of a cloud service in a bid to reduce costs. This mostly works, except that from time to time (I'm not sure what the pattern is), the act of scaling up (1->2) causes both instances to recycle, generating a service outage for users.
Assuming nothing fancy is going on in RoleEntry in response to topology changes, why would scaling from 1->2 restart the already running instance?
Additional notes:
- It's clear both instances are recycling by looking at the Instances tab in the Management Portal. The outage can also be confirmed by hitting the public site.
- It doesn't happen consistently, but I'm not sure what the pattern is. It feels like when the 1-instance configuration has been running for multiple days, attempts to scale up recycle both; but if the 1-instance configuration has only been running for a few hours, you can scale up and down without outages.
- The first instance always comes back much faster than the 2nd instance being introduced.
It has always been this way: when you have 1 server running and you go to 2+, the initial server is restarted. In order to have the full SLA, you need to have 2+ servers at all times.
Nariman, see my comment on Brent's post for some information about what is happening. You should be able to resolve this with the following code:
public class WebRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        // For information on handling configuration changes
        // see the MSDN topic at http://go.microsoft.com/fwlink/?LinkId=166357.

        IPHostEntry ipEntry = Dns.GetHostEntry(Dns.GetHostName());
        string ip = null;
        foreach (IPAddress ipaddress in ipEntry.AddressList)
        {
            if (ipaddress.AddressFamily.ToString() == "InterNetwork")
            {
                ip = ipaddress.ToString();
            }
        }

        string urlToPing = "http://" + ip;
        HttpWebRequest req = HttpWebRequest.Create(urlToPing) as HttpWebRequest;
        WebResponse resp = req.GetResponse();

        return base.OnStart();
    }
}
You should be able to control this behavior. In the RoleEntryPoint, there's an event you can trap: RoleEnvironmentChanging.
A shell of some code to put into your solution will look like...
RoleEnvironment.Changing += RoleEnvironmentChanging;

private void RoleEnvironmentChanging(object sender, RoleEnvironmentChangingEventArgs e)
{
}

RoleEnvironment.Changed += RoleEnvironmentChanged;

private void RoleEnvironmentChanged(object sender, RoleEnvironmentChangedEventArgs e)
{
}
Then, inside the RoleEnvironmentChanging method, we can detect what the change is and tell Azure whether or not we want the instance to be recycled (the Cancel flag is only available on the Changing event's arguments).
if (e.Changes.Any(change => change is RoleEnvironmentConfigurationSettingChange))
{
    e.Cancel = true; // cancel the in-place update; the instance is recycled to apply the change
}
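Putting the two snippets together inside the role entry point, a minimal sketch (assuming the standard Microsoft.WindowsAzure.ServiceRuntime API) might look like the following. Per the RoleEnvironment documentation, setting e.Cancel to true asks Azure to recycle the instance in order to apply the change, while leaving it false lets the change be applied without a restart.

using System.Linq;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WebRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        // Hook the change events before the role finishes starting.
        RoleEnvironment.Changing += RoleEnvironmentChanging;
        RoleEnvironment.Changed += RoleEnvironmentChanged;

        return base.OnStart();
    }

    private void RoleEnvironmentChanging(object sender, RoleEnvironmentChangingEventArgs e)
    {
        // Raised before the change is applied to this instance.
        if (e.Changes.Any(change => change is RoleEnvironmentConfigurationSettingChange))
        {
            e.Cancel = true; // recycle this instance to apply the configuration change
        }
    }

    private void RoleEnvironmentChanged(object sender, RoleEnvironmentChangedEventArgs e)
    {
        // Raised after the change has been applied; react here if needed.
    }
}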
