How should data be pushed into CRM 2011 from another system?
I would like to pass the data to a web service.
I have thought of 2 options so far:
Create a custom entity and create records of this entity by calling the Organization service from the other system. A workflow can handle everything from that point.
Create a WCF service and host it somewhere. The other system passes data to this service and the service interacts with CRM.
The client will just pass us records, so validation must happen on the CRM side.
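For context, here is a rough sketch of what I mean by option 1, calling the CRM 2011 Organization service from the other system (the custom entity and attribute names are made up):

```csharp
// Rough sketch of option 1 using the CRM 2011 SDK (Microsoft.Xrm.Sdk);
// the custom entity "new_stagingrecord" and its attributes are made-up names.
using System;
using System.ServiceModel.Description;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Client;

class PushToCrm
{
    static void Main()
    {
        var serviceUri = new Uri("https://crm.example.com/YourOrg/XRMServices/2011/Organization.svc");
        var credentials = new ClientCredentials();
        credentials.UserName.UserName = "crm-integration-user";
        credentials.UserName.Password = "password";

        using (var service = new OrganizationServiceProxy(serviceUri, null, credentials, null))
        {
            // Create a record of the custom entity; a workflow or plugin
            // registered in CRM can take over from this point.
            var record = new Entity("new_stagingrecord");
            record["new_name"] = "Record from external system";
            record["new_payload"] = "<data>...</data>";
            Guid id = service.Create(record);
        }
    }
}
```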
EDIT:
If the client is an old system (in Cobol or something) is it still possible to connect to the CRM service?
Just to expand on Pedro's answer:
I have actually done both with CRM 4. I would highly recommend that you create your own service for the client to call, which in turn calls the CRM services.
This gives you an extra layer of abstraction for when things change later - and they will.
If your client calls the CRM services directly, it will be difficult/impossible for you to change your internal data structures or move servers around in the farm. This is especially true if you currently use a single-server infrastructure.
Also, don't map the service you create directly to the entity data structure; use an intermediate model.
So if you wanted the client to pass Account details in, have your service expect an XML document that you convert in to an Account Entity, rather than expose the Account Entity and have your client submit that.
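As a rough sketch of that idea (the XML shape and field mapping here are invented; the real contract depends on your schema):

```csharp
// Sketch only: the service accepts a plain XML payload and maps it to a CRM
// Account entity internally, so the client never depends on CRM structures.
// The XML element names are invented; "name" and "telephone1" are standard
// Account attributes.
using System.Xml.Linq;
using Microsoft.Xrm.Sdk;

public class AccountImporter
{
    private readonly IOrganizationService _crm;

    public AccountImporter(IOrganizationService crm)
    {
        _crm = crm;
    }

    public void ImportAccount(string accountXml)
    {
        XElement doc = XElement.Parse(accountXml);

        // The intermediate-model-to-CRM-entity mapping lives here, in one place.
        var account = new Entity("account");
        account["name"] = (string)doc.Element("CompanyName");
        account["telephone1"] = (string)doc.Element("Phone");

        _crm.Create(account);
    }
}
```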
The second choice is, for me, the best way to handle this situation, because you can control and validate everything on your side. You can host the WCF service on the same server as CRM Dynamics, or on another server with access to CRM Dynamics, and interact with CRM through the CRM web services.
I don't think there is a better solution.
Assuming the client has a login to your CRM system, I would actually go with option #1 first. Why?
You can still validate the data in the Pre-Validation plugin stage.
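As a rough illustration, a Pre-Validation plugin might look something like this (the attribute name is made up):

```csharp
// Sketch of a plugin registered on the Pre-Validation stage of Create;
// it rejects bad data before the database transaction starts.
using System;
using Microsoft.Xrm.Sdk;

public class ValidateIncomingRecord : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        var context = (IPluginExecutionContext)serviceProvider
            .GetService(typeof(IPluginExecutionContext));

        if (context.InputParameters.Contains("Target") &&
            context.InputParameters["Target"] is Entity)
        {
            var target = (Entity)context.InputParameters["Target"];

            // Hypothetical rule: the external system must always supply a name.
            if (string.IsNullOrWhiteSpace(target.GetAttributeValue<string>("new_name")))
            {
                throw new InvalidPluginExecutionException("new_name is required.");
            }
        }
    }
}
```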
This is by far the easiest and fastest way to get going. Will things change? Maybe! But:
You are going to spend lots and lots of hours getting a custom WCF service up and running
Somebody has to deploy it
Somebody has to maintain it
Your client has to learn how to connect to a proprietary WCF service instead of saying "Here's a bunch of published info on how to connect to CRM's webservices."
It becomes a "hidden silo" of business logic instead of it all existing in CRM. Any good CRM developer would be able to work on option number one. Number two requires additional skills.
If it really does change over time so much that it needs its own WCF service, you haven't really wasted that much time. All your business logic would be ported from a plugin to the WCF service. But it is much harder to go from the more complex solution (#2) to the simpler solution (#1) if it turns out that you don't need it.
I promise you that your customer wants this done faster and simpler (cheaper) rather than longer and more complex (expensive).
I want to integrate our Dynamics Sales CRM system with another application. The approaches we are considering are using the Web API and webhooks.
Can we use webhooks without using an Azure Function/Azure service?
What are the challenges?
Are there any better methods to pull data from Dynamics CRM?
What should be considered while doing an integration with a Dynamics CRM system?
(I am new to Dynamics)
Link followed: https://dynamicsninja.blog/2019/05/22/d365-webhooks-part-1/
If it's a relatively simple integration, using Power Automate might be your best bet.
https://flow.microsoft.com/
To answer your specific questions, though:
Can we use webhooks without using an Azure Function/Azure service?
No. You'll need to use Power Automate, Logic Apps, or some other method to watch for changes.
What are the challenges?
That's a pretty broad question, can't help you without specific things you've already tried.
Are there any better methods to pull data from Dynamics CRM? What should be considered while doing an integration with a Dynamics CRM system?
If you're just doing reporting (pulling data) then PowerBI is a good choice, otherwise it's the API for custom integration or Power Automate/Logic Apps for something that needs a little less customization.
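If you go the API route, a minimal sketch of pulling data through the Dynamics/Dataverse Web API in C# looks roughly like this (the org URL is a placeholder, and the bearer token would normally be acquired from Azure AD, e.g. with MSAL, for an app registration):

```csharp
// Minimal sketch of querying the Dynamics/Dataverse Web API.
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class WebApiPullExample
{
    static async Task Main()
    {
        var orgUrl = "https://yourorg.crm.dynamics.com";
        var accessToken = "<bearer token from Azure AD>";

        using var client = new HttpClient { BaseAddress = new Uri(orgUrl) };
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", accessToken);
        client.DefaultRequestHeaders.Add("OData-MaxVersion", "4.0");
        client.DefaultRequestHeaders.Add("OData-Version", "4.0");

        // Pull the first 10 accounts, selecting only two columns.
        var response = await client.GetAsync(
            "/api/data/v9.2/accounts?$select=name,telephone1&$top=10");
        response.EnsureSuccessStatusCode();
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}
```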
If you really want a fully integrated CRM, I would consider trying:
• Power Automate (as said before): it's a good way to easily connect several systems.
• Azure Functions: sometimes Power Automate is just not enough, but with Azure Functions you can easily create some serverless code. I have some Azure Functions that are triggered by a message placed on a queue by Power Automate, for example (see the sketch below). You can make both work together, and that's what we have at work :)
The biggest challenge is the "know-how", I guess. A lot of people do not know it yet. Azure Functions can be a little difficult at the beginning (even more so if you want to use early-bound classes for your CRM). With some time you will get it.
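For reference, a minimal sketch of the queue-triggered Azure Function pattern mentioned above (in-process model; the queue name and message shape are just examples):

```csharp
// Power Automate (or anything else) drops a message on the "crm-messages"
// queue; the function picks it up and does the heavier work.
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class ProcessCrmMessage
{
    [FunctionName("ProcessCrmMessage")]
    public static void Run(
        [QueueTrigger("crm-messages", Connection = "AzureWebJobsStorage")] string message,
        ILogger log)
    {
        log.LogInformation($"Processing queue message: {message}");
        // ... call the Dynamics Web API, use early-bound classes, etc.
    }
}
```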
If you want to distribute your tool as a Dynamics solution package that can be added to an organisation without generating API tokens in Azure and so on, I would suggest you don't pull data from Dynamics (that way you would need to authenticate against the MS Web API with a token generated in Azure). Instead, use web components and plugins triggered by core messages to push the data to your API.
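A rough sketch of that push idea: a plugin registered on a core message (e.g. post-operation Create) that forwards the record to your API. The endpoint URL and JSON payload here are placeholders.

```csharp
// Sketch of a plugin that pushes data to an external API on a core message.
using System;
using System.Net.Http;
using System.Text;
using Microsoft.Xrm.Sdk;

public class PushToExternalApiPlugin : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        var context = (IPluginExecutionContext)serviceProvider
            .GetService(typeof(IPluginExecutionContext));

        if (context.InputParameters.Contains("Target") &&
            context.InputParameters["Target"] is Entity)
        {
            var target = (Entity)context.InputParameters["Target"];
            var payload = "{\"id\":\"" + target.Id + "\",\"entity\":\"" + target.LogicalName + "\"}";

            using (var client = new HttpClient())
            {
                var content = new StringContent(payload, Encoding.UTF8, "application/json");
                // Placeholder endpoint; serialize whichever attributes your API expects.
                client.PostAsync("https://your-api.example.com/crm-events", content)
                      .GetAwaiter().GetResult();
            }
        }
    }
}
```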
I understand that microservices are about independent, loosely coupled services. I have read https://en.wikipedia.org/wiki/Microservices.
When it comes to Azure, I understand there are many components like Azure Service Fabric and AKS, and there is also the option of deploying containers within Azure VMs using Docker or other containerization tools. However, since microservices are about developing atomic, individually scalable services, can this also be achieved by deploying each service as an Azure Web API app within an App Service Plan and configuring auto-scale based on performance metrics (though each API app may not be individually scalable, they can still be individually managed in terms of deployment, configuration, etc.)?
Can someone please suggest if this thought process is correct?
Microservices aren't a platform or technology so if you can make small independently deployable services then they are microservices. Sure - some tech helps but it depends on your situation.
If you only need a few services you probably don't need anything complex. Make sure services are well modeled, own their own data and ideally have a good monitoring and deployment pipeline setup. Design for service failure where possible.
Do you need to scale each part independently? Ideally, you should be able to, but do the services have very different requirements? You could have many small App Service plans, but that comes at the cost of unused resources, so split when you need to.
This question, and of course the answers, are going to be opinion based, but generally when thinking in terms of microservices, don't think in terms of things like loads of APIs and VMs. Instead think in terms of: when I upload an image, it needs to be resized and the table updated to give a URL for the thumb; or when record XXX is updated in the database, run XXX in order to create a report, or update Azure Search. Each service just knows how to do a single thing only, i.e. resize an image.
Now one could say: I have a system, a repo library, and some functions library. When an image is posted, I upload it, then call this, and that, etc.
With microservices, you would instead just add the image to a queue and create an Azure Function with a queue trigger (sketched below) that resizes and saves both the large image and the thumb to storage. The function would then either update the database directly or, in true microservice fashion, put the new info on another queue; a second function would watch that queue and insert into the database.
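Something like this (queue and container names, and the SixLabors.ImageSharp library choice, are just example assumptions):

```csharp
// Sketch of the resize function: queue message carries the blob name of the
// uploaded image; the function writes a thumbnail to a second container.
using System.IO;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;
using SixLabors.ImageSharp;
using SixLabors.ImageSharp.Processing;

public static class ResizeImageFunction
{
    [FunctionName("ResizeImage")]
    public static void Run(
        [QueueTrigger("images-to-resize")] string blobName,
        [Blob("images/{queueTrigger}", FileAccess.Read)] Stream original,
        [Blob("thumbs/{queueTrigger}", FileAccess.Write)] Stream thumbnail,
        ILogger log)
    {
        using var image = Image.Load(original);
        image.Mutate(x => x.Resize(200, 0)); // 200px wide, keep aspect ratio
        image.SaveAsJpeg(thumbnail);
        log.LogInformation($"Created thumbnail for {blobName}");
    }
}
```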
You can use the DB queue from anything. You can use the blob queue from anything. Your main API does not care how images are handled. You can change your functions one day to save to Dropbox instead of Azure Blob storage. All really easy, with no rebuild of the API, because the API does not care.
A good example I use it for is email and SMS. My systems don't know how to send an email or an SMS; they only know how to add to a queue. My microservices SendEmail and SendSMS do know how to do it, and I can change how, and with whom, I send that content really easily. I can change from Twilio to SendGrid tomorrow without ever telling the API that I've done it.
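For example, the API side is nothing more than dropping a message on a queue (queue name and message shape invented here):

```csharp
// Sketch: the API only knows how to enqueue; the SendEmail microservice owns
// the "how".
using System.Text.Json;
using Azure.Storage.Queues;

var queue = new QueueClient("<storage connection string>", "send-email");
await queue.CreateIfNotExistsAsync();
await queue.SendMessageAsync(JsonSerializer.Serialize(new
{
    To = "user@example.com",
    Subject = "Welcome",
    Body = "Hello from the queue"
}));
```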
On a more complex thing: I have approval. At the moment that approval sends an email or SMS to either the user or an admin, and that can change over time. So I have an SMS service, an email service and an approval service. When approval happens, it just adds a config to the queue; the rest is done by a Logic App that knows to send an email to XXX and an SMS to XXX and then update the database. My API is just a post that creates a queue message.
Basically what I am saying here is, to get started, maybe port an existing app. Start with the workflow stuff, like send an email, resize an image, create a report, create a PDF, email 50 subscribers, etc., and take all that code out and put it into its own microservice that just knows how to do one thing. Then, when you grow in confidence, create a workflow from all of these services with Logic Apps and let Azure take care of the rest; that's what they want you to do.
I have been assigned to think of a layered microservices architecture for Azure Service Fabric. But since my experience has mostly been with monolithic architectures, I can't come up with a specific solution.
What I have thought as of now is like...
Data Layer - This is where all the Code First entities reside along with the DbContext.
Business Layer - This is where all the Service Managers would be performing and enforcing the Business Logic i.e. UserManager (IUserManager), OrderManager (IOrderManager), InvoiceManager (IInvoiceManager) etc.
WebAPI (Self-Hosted Inside Service Fabric) - Although this Web API is inside Service Fabric, it does nothing except receive the request and call the respective services under Service Fabric. The Web API layer would also do any authentication and authorization (ASP.NET Identity) before passing the call on to other services.
Service Fabric Services - UserService, OrderService, InvoiceService. These services are invoked from the Web API layer and have the Business Layer (IUserManager, IOrderManager, IInvoiceManager) injected via DI to perform their operations.
Do you think this is okay to proceed with?
One theoretical issue, though: while reading up on several microservices architecture resources, I found that all of them suggest keeping the business logic inside the service so that each service can be scaled independently. So I believe I'm violating a basic aspect of microservices.
I'm doing this because the customer requirement is to use this Business Layer across several projects, such as batch jobs (Azure Web Jobs), a backend dashboard for internal employees (ASP.NET MVC), etc. So if I don't keep the Business Layer the same, I would have to write the same business logic again for the Web Jobs and the backend dashboard, which I feel is not a good idea, as a simple change in the business logic would then require code changes in several places.
One more concern is that, in that case, I have to go with service-to-service communication for ACID transactions. For example, while creating an Order, both an Order and an Invoice must be created. So in that case, I thought of using event-driven programming, i.e. the Order Service will emit an event which the Invoice Service can subscribe to, to create the Invoice on creation of the Order (a rough sketch is below). But the complication is that if the Invoice Service fails to create the invoice, it can either keep trying to do so indefinitely (which is a bad idea, I think), or emit another event for the Order Service to subscribe to and roll back the order. There can be lots of confusion with this.
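A rough sketch of what I mean by the event emission and subscription, using Azure Service Bus (the topic "order-created" and subscription "invoice-service" names are invented):

```csharp
using System.Text.Json;
using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;

await using var client = new ServiceBusClient("<service bus connection string>");

// Order Service: publish the event after the order is saved.
ServiceBusSender sender = client.CreateSender("order-created");
await sender.SendMessageAsync(new ServiceBusMessage(
    JsonSerializer.Serialize(new { OrderId = 42, Amount = 99.95m })));

// Invoice Service: subscribe and create the invoice.
ServiceBusProcessor processor = client.CreateProcessor("order-created", "invoice-service");
processor.ProcessMessageAsync += async args =>
{
    // Create the invoice here; decide what happens on failure
    // (retry, dead-letter, or emit a compensating "roll back order" event).
    await args.CompleteMessageAsync(args.Message);
};
processor.ProcessErrorAsync += args => Task.CompletedTask;
await processor.StartProcessingAsync();
```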
Also, I must mention that, we are using a Single Database as of now.
So my questions are...
What issue do you see with my approach? Is it okay?
If not, please suggest me a better approach. You can guide me to some resources for implementation details or conceptual details too.
NOTE: The client's requirement is that they can scale specific modules as needed. For example, UserService might not be used much, as there won't be many signups or user profile changes daily, but OrderService may need to scale out, as there can be lots of orders coming in daily.
I'll be glad to learn, as this is my first chance to get my hands on designing a microservices architecture.
First of all, why does the customer want to use Service Fabric and a microservices architecture when, at the same time, it sounds like there are other parts of the solution (web jobs etc.) that will not be part of that architecture but rather live in their own ecosystem (yet share logic)? I think it would be good for you to first understand the underlying requirements that should guide the architecture. What is most important?
Scalability? Flexibility?
Development and deployment? Maintainability?
Modularity in ability to compose new solutions based on autonomous microservices?
The list could go on. Until you figure this out there is really no point in designing further as you don't know what you are designing for...
As for sharing business logic with web jobs, there is nothing preventing you from sharing code packages containing the same BL; it doesn't have to be a shared service, and it doesn't mean that it has to be packaged the same way in relation to its interface or persistence. Another thing to consider: why do you want to run web jobs when you can build similar functionality in SF services?
Currently our DB runs on the customer's local network and we have a C# client app to consume the data. Due to some business needs, we got the order to start moving everything to Azure. The DB will be moving to Azure SQL.
We had discussion about accessing DB. There are two points:
One guy said that we have to add one more layer between our app (which will be running outside Azure on end-user PCs) and SQL Azure. In other words, he suggested adding an API service that translates all requests to the DB, i.e. app (on-premises) -> API service (on Azure) -> SQL Azure. This approach looks more reliable and secure, since we are hiding SQL Azure behind the facade of the API service and the app talks only to our API service. It looks more like a reverse proxy. Obviously, behind this API we can build a more sophisticated structure of DBs.
The other guy suggested connecting directly to the DB, i.e. app (on-premises) -> SQL Azure. So far we don't have any plans to change the structure of the DB or even increase the number of DBs. He claims it is simpler, that we can secure our connection the same way, and that having an additional service that just relays our queries to the DB and back looks like a waste of time. In the future, if needed, we would add this API.
What would you select and recommend, and why?
A few notes:
We are going to use Azure AD to authenticate users.
Our application will be moving to Azure too, but later (in 1-2 years); we have plans to create a REST API and move to a thin client instead of the fat client we have right now.
Good performance is our goal; we don't want to add extra things that could degrade it. But security is our most important goal as well.
Certainly an intermediate layer is one way to go. There isn't enough detail to be sure, but I wonder why you don't try the second option. Usually some redevelopment is normal. But if you can get away without it, and you get sufficient performance then that's even better.
If your application is not just a prototype (it sounds like it is not), then I advise you to build the intermediate API. The primary reasons for this are:
Flexibility
Rolling out a new version of an API is simple: You have either only one deployment or you have something like Octopus Deploy that deploys to a few instances at the same time for you. Deploying client applications is usually much more involved: Creating installers, distributing them, making sure users install them, etc.
If you build the API, you will be able to make changes to the DB and hide these changes from the client applications by just modifying the API implementation, but keeping the API interfaces the same. Moving forward, this will simplify the tasks for your team considerably.
Security
As soon as you have different roles/permissions in your system, you will need to implement them with DB security features if you connect to the DB directly. This may work for simple cases, but even there it is a pain to manage.
With an API, you can implement authorization in the API using C#. This way, you can build whatever you need and you're not restricted by the security features the DB offers.
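A tiny sketch of what that could look like in an ASP.NET Core controller (the "OrderAdmin" role is a placeholder; it would typically come from Azure AD app roles or groups):

```csharp
// Sketch of API-side authorization; the client app never needs the DB
// credentials, only the API holds them.
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("api/orders")]
public class OrdersController : ControllerBase
{
    // Any authenticated user can read.
    [HttpGet]
    [Authorize]
    public IActionResult GetOrders() => Ok();

    // Only users in a specific role can delete.
    [HttpDelete("{id}")]
    [Authorize(Roles = "OrderAdmin")]
    public IActionResult DeleteOrder(int id) => NoContent();
}
```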
Also, if you don't take extra care about this, you may end up exposing the DB credentials to the client app, which will be a major security flaw.
Conclusion
Build the intermediate API, unless you have strong reasons not to. As always with architecture considerations, I'm sure there are cases where the above points don't apply. Just make sure you understand all the implications if you decide to go the direct route.
My problem is a simple one. I'm working on a project that uses the service bus from Microsoft Azure to send messages asynchronously between different modules on different virtual machines. A lot of messages are sent through this bus, so we want to have some indicators about its performance and other usage information. Why? Because when everything is working, users are happy. When the system is slow, we want to show the user some interesting graphs, statistics, meters and other gadgets to give them an indication of whether there's a problem within Azure or with something else. And to do this, I need data about the usage of the Azure Service Bus.
So, which Azure APIs are available to display what kind of (diagnostic) information about the Service Bus?
(Users should have no access to Azure itself! They should just see some performance data to reassure them that Azure is working fine. Or else I could look at it, discover a problem, fix it, and then make users happy again.)
To elaborate on what I'm looking for: the Azure website has some nice charts when you click Monitor for the Service Bus, showing overviews of the number of incoming messages, the number of errors and their types, size information, and the number of successful operations, all based on a specified period. It would be nice if I could receive this data within my project.
The entity metrics API will give you the exact data the portal is using:
http://msdn.microsoft.com/en-us/library/windowsazure/dn163589.aspx
Here's a Subscribe! episode I recorded with Rajat on the topic: http://channel9.msdn.com/Blogs/Subscribe/Service-Bus-Namespace-Management-and-Analytics
I've spent quite some time to make the entity metrics API work, so I decided to share the results.
Here is a full C# code example of how to consume those API:
github repository.
It's a small library which wraps the HTTP request into strongly typed .NET classes. You can also grab it from NuGet.
Finally, here is my blog post with the walkthrough.