Multithreading using WCF - C# 4.0

Hi, I am new to the whole programming thing. I have been given a task to multithread 4 stored procedures, where each thread runs asynchronously so that the user can get output quickly, and I have to do it using WCF. Can anyone help me out with this? Initially, I am trying to take each procedure and measure how much time it takes to execute using ParameterizedThreadStart, but I am not sure how to go about it.

Considering you are new to the whole programming thing, you can follow these very basic steps to get things done.
Create a new WCF service.
Add 4 methods, each calling one stored procedure.
Add the parameters required by the stored procedures to the methods.
For example, if your stored procedure is MySP(varchar name), then your WCF method will be MySP(string name).
Now deploy your service in IIS, a Windows service, a console app, or wherever you want.
Create a client application; again, it could be anything, a console app, a WinForms app, etc.
Add a reference to your service.
Instantiate the service class and call the Async version of each method. By Async I mean you will see all four methods with Async appended.
For example, you will find your MySP(string name) method as MySPAsync(string name).
There will also be a MySPCompleted event; subscribe to it.
Now all of your methods run asynchronously, and whenever they finish execution they will call your subscribed handlers.
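The steps above look roughly like this on the client side. This is only a sketch: the proxy class name DataServiceClient and the method names are hypothetical; the real names come from the Add Service Reference step.

```csharp
using System;

class Program
{
    static void Main()
    {
        // Hypothetical generated proxy class from Add Service Reference.
        var client = new DataServiceClient();

        // Subscribe to the completed event before starting the call.
        client.MySPCompleted += (sender, e) =>
        {
            if (e.Error != null)
                Console.WriteLine("MySP failed: " + e.Error.Message);
            else
                Console.WriteLine("MySP finished: " + e.Result);
        };

        // Kick off the call; it does not block the caller. Repeat the same
        // pattern for the other three stored-procedure methods.
        client.MySPAsync("some name");

        Console.ReadLine(); // keep the process alive while the calls run
    }
}
```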
I hope this helps you get started :)

There are a couple of different ways to do this. At the highest level, you can place each service request in its own service endpoint. This could mean defining endpoints for each method or, if you are hosting in IIS, placing each service in its own website. At the lower level, you could define callbacks for each method so that WCF does not block while the method calls are taking place.

Related

What is the difference between PubSub.Subscriptions() and Topic.getSubscriptions() in Google PubSub?

I've created a program that handles PubSub messaging using the Google PubSub NodeJS SDK.
While developing this I noticed that the NodeJS Library and docs show two ways of retrieving active subscriptions in Google PubSub:
PubSub.subscriptions('SubscriptionName') docs
PubSub.topic('TopicName').getSubscriptions() docs
I understand that the 2nd option might only list subscriptions related to a topic, but I'm more interested in the workings behind the scene.
In my first attempt I used the 2nd option to retrieve my subscriptions, and that worked while running the application, but I ran into timeouts when trying to mock the call in my unit tests and I couldn't fix it. I switched to the 1st approach, which doesn't use a Promise and just returns a plain Subscription object; this worked just fine in my unit tests.
Are there downsides to not using the promise based call as it might not yield the most up to date results? If not, is there a reason why there are two options and one is promise based and the other is not?
These two APIs do very different things. The first one creates a Subscription object in the client that can be used for receiving messages. It does not retrieve any state from the Cloud Pub/Sub service. Therefore, there is no asynchronous work to do and it need not return a promise. The second one actually goes to the Cloud Pub/Sub service, retrieves the list of subscriptions for the topic, and creates a Subscription object for each one.
If you know the name of the subscription for which you want to receive messages and can be reasonably confident that it exists, then use PubSub.subscriptions('SubscriptionName'). If you try to start receiving messages on this subscription by calling subscription.on('message', messageHandler); and it doesn't exist, then an error is emitted.
If you don't know the name of the subscription and instead need to fetch the list and choose the subscription from which to receive messages from the list of all subscriptions for the topic, then use the PubSub.topic('TopicName').getSubscriptions() call.
For further help with why mocking the getSubscriptions() call didn't work, we would probably need to see the code you were using to mock it.

An Example of a Unit Test for a Time-Triggered Azure Function

I am a newbie in Azure Functions.
I have implemented a time-triggered Azure Function and wish to write unit test cases for it.
I am using SpecFlow and NUnit for writing my test cases.
However, I am unable to find a proper example of how to stub a time-triggered function.
Can someone point me to the correct example?
Thanks.
I wouldn't call it a unit test anymore, but you can trigger non-HTTP functions by calling the following admin endpoint of the function app:
POST <ROOT_URL>/admin/functions/<FUNCTION_NAME>
Note that you need to specify the system key in the x-functions-key header when making a request to a deployed function app.
More info in the docs.
Alternative
What I usually try to do is put as much of the business logic as possible in a separate class, which is easily testable, and call this class from the function.
Personally, I don't think you should test whether the trigger works; that's the responsibility of the Azure Functions runtime. It's fine to test this in a larger-scoped integration test, but not as a fast and frequently executed unit test.
Get the business logic out of the function itself, and instead have the function call libraries.
Add tests for those libraries.
You don't need to do anything that's specific to Azure Functions in order to test your code.
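As a sketch of that separation, assuming a hypothetical cleanup job: the logic lives in a plain class with no Functions dependency, and the timer-triggered function is only a thin wrapper around it. All names here are illustrative.

```csharp
using System;
using System.Collections.Generic;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

// Plain class: unit-testable without the Functions runtime.
public class ReportCleaner
{
    // Removes expired entries and returns how many were removed.
    public int RemoveExpired(DateTime now, List<DateTime> expiries)
    {
        return expiries.RemoveAll(e => e < now);
    }
}

// The timer-triggered function just delegates to the class above.
public static class CleanupFunction
{
    [FunctionName("Cleanup")]
    public static void Run([TimerTrigger("0 */5 * * * *")] TimerInfo timer, ILogger log)
    {
        var cleaner = new ReportCleaner();
        // load expiries from storage, then:
        // int removed = cleaner.RemoveExpired(DateTime.UtcNow, expiries);
        // log.LogInformation("Removed {0} expired reports", removed);
    }
}
```

Your SpecFlow/NUnit tests then target ReportCleaner directly and never need to stub the timer trigger.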
If you are attempting to do integration testing, then follow Marc's advice.

Pass parameters from C# function app to another Javascript function app in Azure

I need to set up an application in Azure and make 2 functions communicate (one written in C# and one written in JavaScript).
The C# part consists of analyzing an XML feed, getting the data, and saving it in objects, then finally sending them to the other JavaScript function via parameters.
I have read that we can establish communication between both functions using HTTP calls, but is it possible to do it with parameters?
If not, would you have any suggestions for achieving something like this properly? I'm getting started with Azure and I don't have enough visibility to know what is recommended in such a situation.
Thank you for your advice.
Yes, this is absolutely possible. How you do this is up to you. If you look at the default HTTP trigger templates, you can see that they take parameters (for example, as query string parameters). You can find more examples in the HTTP and webhook recipes documentation.
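For instance, the C# function could pass its data to the JavaScript function as query-string parameters on an HTTP call. This is only a sketch; the URL, route, and parameter names are hypothetical and depend on how the JavaScript HTTP-triggered function is defined.

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

public static class FeedForwarder
{
    private static readonly HttpClient client = new HttpClient();

    // Sends one feed item to the (hypothetical) JavaScript function.
    public static async Task SendAsync(string title, string link)
    {
        var url = "https://myjsapp.azurewebsites.net/api/ProcessItem"
                + "?title=" + Uri.EscapeDataString(title)
                + "&link=" + Uri.EscapeDataString(link);

        var response = await client.GetAsync(url);
        response.EnsureSuccessStatusCode();
    }
}
```

For larger payloads you would POST the serialized objects in the request body instead of the query string, but the idea is the same.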
You can use other trigger types for cross-function communication as well. Take a look at this documentation for related best practices: https://learn.microsoft.com/en-us/azure/azure-functions/functions-best-practices#cross-function-communication

Long time operations in MVC and WCF service

I have an MVC application that uses a WCF service as a connection layer to the database. Some operations on the WCF service are time-consuming. So, what is the best solution for this problem? Should I use async controllers in my MVC application and Task.Factory.StartNew in WCF? Or should I use simple controllers and AJAX in MVC with Task.Factory.StartNew in WCF? Or is it better to use Task.Factory.StartNew in MVC? Or maybe it is better to use simple threads in WCF?
EDITED:
For example, my service generates reports, which takes a long time. I want the user to be able to start generating a report and not have to wait until it has finished; for example, show 'generating'. Then he can start generating another one, and so on. After generation of a report is finished, return it to the user.
Please, give the best practices in such type solutions or show me the right direction, thanks a lot.
This kind of requirement can be designed in multiple ways, as Tim Rogers pointed out in his comment. Judging by your use case focused on reports, I would probably do it in the following way:
A WCF method to initiate generating a report (it would start a worker thread and return a process/report id of some sort)
A WCF method for checking the status of the operation (based on the operation/report id)
A regular controller which would call the method from point 1 and allow querying for the status of ongoing jobs for a particular user
A view presenting a list of historical and ongoing jobs with their statuses (I would probably implement an AJAX request for refreshing the status to get a nice user experience)
Of course, this is only a rough description. Whereas points 3 and 4 are easy and straightforward, points 1 and 2 may be implemented in a number of ways. You can approach this by implementing your own threading strategy with task persistence (in memory or DB-based), or you can try using a WCF Workflow Service, etc. Hope this helps.
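A minimal sketch of points 1 and 2, assuming an in-memory job store (contract and type names are illustrative; a real implementation would persist job state and handle failures):

```csharp
using System;
using System.Collections.Concurrent;
using System.ServiceModel;
using System.Threading.Tasks;

[ServiceContract]
public interface IReportService
{
    [OperationContract]
    Guid StartReport(string reportType);   // point 1: returns a job id immediately

    [OperationContract]
    string GetStatus(Guid jobId);          // point 2: "Running", "Done", "Unknown"
}

public class ReportService : IReportService
{
    // In-memory job store; swap for a database table if jobs must survive restarts.
    private static readonly ConcurrentDictionary<Guid, string> Jobs =
        new ConcurrentDictionary<Guid, string>();

    public Guid StartReport(string reportType)
    {
        var id = Guid.NewGuid();
        Jobs[id] = "Running";

        // Generate the report on a worker thread; the call returns at once.
        Task.Factory.StartNew(() =>
        {
            // ... long-running report generation goes here ...
            Jobs[id] = "Done";
        });

        return id;
    }

    public string GetStatus(Guid jobId)
    {
        string status;
        return Jobs.TryGetValue(jobId, out status) ? status : "Unknown";
    }
}
```

The MVC controller then calls StartReport once and polls GetStatus (for example via AJAX) until the job reports "Done".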

Data is being entered twice when cross domain web service is fired

I have created a .NET web service, and when I try to call a method that saves data to the database, the request is fired twice. I used a network profiler to check whether two requests are made to the server, but only one request is made. I fail to understand why the data is being entered twice in the database. I am using JSONP to call the cross-domain site.
I just found something interesting. I have two servers. When I host the web service on one and call it cross-domain, the data is entered once, whereas on the other the data is entered twice. Do we need to take care of some IIS settings too?
So, if there aren't two requests being made, then there are almost certainly two calls to the Save() method (or whatever it is called) being fired from the web service endpoint. But there may be duplicate data somewhere too.
Here are a couple of things to check:
What data is actually being transferred? Have you checked this using a tool like Charles?
Is the data being passed to your Save() method the same as the data being passed to the web service?
How is the data being written to the database? Is there duplicate SQL somewhere?
Hey all, I just converted my project to Visual Studio 2010 and then installed it on the server. Everything is running perfectly now. Thanks.
