Currently our DB runs in the customer's local network, and we have a C# client app that consumes the data. Due to business needs, we have been told to start moving everything to Azure. The DB will move to Azure SQL.
We had a discussion about how to access the DB. There are two positions:
One colleague said that we have to add one more layer between our app (which will run outside Azure, on end-user PCs) and Azure SQL. In other words, he suggested adding an API service that translates all requests to the DB: app (on-premises) -> API service (on Azure) -> Azure SQL. This approach looks more reliable and secure, since we hide Azure SQL behind the facade of the API service and the app talks only to our API service. It works much like a reverse proxy, and behind this API we could obviously build a more sophisticated structure of DBs.
Another colleague suggested connecting directly to the DB: app (on-premises) -> Azure SQL. So far we have no plans to change the structure of the DB or even to increase the number of DBs. He claims it is simpler and that we can secure the connection just as well; an additional service that merely relays our queries to the DB and back looks like a waste of time. In the future, if needed, we would add the API.
What would you select and recommend, and why?
A few notes:
We are going to use Azure AD to authenticate users.
Our application will move to Azure too, but later (in 1-2 years); we plan to create a REST API and switch to a thin client instead of the fat client we have right now.
Good performance is our goal and we don't want to add extra pieces that could reduce it, but security is just as important to us.
Certainly an intermediate layer is one way to go. There isn't enough detail to be sure, but I wonder why you don't try the second option. Usually some redevelopment is normal, but if you can get away without it and still get sufficient performance, then that's even better.
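For illustration, here is a minimal sketch of what the direct option could look like from the C# client, assuming the Microsoft.Data.SqlClient package and Azure AD authentication; the server, database, table and column names are placeholders:

```csharp
using System;
using Microsoft.Data.SqlClient;

class DirectDbAccess
{
    static void Main()
    {
        // "Active Directory Default" makes the driver use the signed-in
        // Azure AD identity, so no SQL credentials ship with the client.
        var connectionString =
            "Server=tcp:myserver.database.windows.net,1433;" +
            "Database=mydb;" +
            "Authentication=Active Directory Default;" +
            "Encrypt=True;";

        using var connection = new SqlConnection(connectionString);
        connection.Open();

        using var command = new SqlCommand("SELECT TOP 10 Name FROM Customers", connection);
        using var reader = command.ExecuteReader();
        while (reader.Read())
            Console.WriteLine(reader.GetString(0));
    }
}
```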
I hope this helps.
Thank you.
Guy
If your application is not just a prototype (it sounds like it is not), then I advise you to build the intermediate API. The primary reasons for this are:
Flexibility
Rolling out a new version of an API is simple: you either have only one deployment, or you have something like Octopus Deploy that deploys to a few instances at the same time for you. Deploying client applications is usually much more involved: creating installers, distributing them, making sure users install them, etc.
If you build the API, you will be able to make changes to the DB and hide them from the client applications by modifying only the API implementation while keeping the API interfaces the same. Moving forward, this will simplify your team's work considerably.
Security
As soon as you have different roles/permissions in your system, you will need to implement them with DB security features if you connect to the DB directly. That may work for simple cases, but even then it is a pain to manage.
With an API, you can implement authorization in the API in C#. This way you can build whatever you need, and you're not restricted by the security features the DB offers.
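As a rough sketch (assuming ASP.NET Core with Azure AD token validation already wired up; the route, controller and role names below are made up), authorization living in the API might look like this:

```csharp
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("api/orders")]
[Authorize] // every caller must present a valid Azure AD token
public class OrdersController : ControllerBase
{
    [HttpGet]
    public IActionResult GetOrders()
    {
        // Any authenticated caller may read.
        return Ok(/* load orders from Azure SQL here */);
    }

    [HttpDelete("{id}")]
    [Authorize(Roles = "Order.Admin")] // only admins may delete
    public IActionResult DeleteOrder(int id)
    {
        // The DB never sees end-user credentials; the API decides.
        return NoContent();
    }
}
```

The client never holds a DB credential; it only ever holds an Azure AD token for the API.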
Also, if you don't take extra care about this, you may end up exposing the DB credentials to the client app, which will be a major security flaw.
Conclusion
Build the intermediate API, unless you have strong reasons not to. As always with architecture decisions, I'm sure there are cases where the above points don't apply. Just make sure you understand all the implications if you decide to go the direct route.
Related
I have managed to get the C# and DB setup working using ListMappings. However, when I try to deploy the split/merge tool to Azure classic cloud services, it states 'The requested VM tier is currently not available in East US for this subscription. Please try another tier or deploy to a different location.' We tried a few other regions with the same result. Do you know if there is a workaround or an updated version? Is the split/merge service even still relevant? Has anyone got this service to run on Azure lately?
https://learn.microsoft.com/en-us/azure/azure-sql/database/elastic-scale-overview-split-and-merge
The answer to the question of whether it is still relevant is, in my opinion... no. Split/merge is no longer relevant given the maturation of elastic pools. Elastic pools with one database per tenant seem the sustainable way to implement multi-tenancy with legacy code. The initial plan was to add keys to each of our tables to host multiple tenants per database; elastic pools give us the same flexibility without having to make breaking changes to our existing code.
Late post here, but we are implementing ElasticScale for a client to split ~50 clients into a database-per-tenant model. I don't think the split/merge tool will be used over the long term, just for the initial data migration from one DB to many shards, but it has been handy for that purpose. We are using the ElasticScale SDK to let a single API route queries to the appropriate shard(s) based on the sharding key. Happy to compare notes with you if you are still working on this.
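For anyone landing here, the routing piece is roughly the data-dependent routing pattern from the Elastic Database client library; a sketch, with the shard map name and connection strings as placeholders:

```csharp
using System.Data.SqlClient;
using Microsoft.Azure.SqlDatabase.ElasticScale.ShardManagement;

class ShardRouting
{
    static int CountOrders(int tenantId)
    {
        // The shard map manager knows which tenant key lives in which database.
        ShardMapManager smm = ShardMapManagerFactory.GetSqlShardMapManager(
            "<shard-map-manager-connection-string>",
            ShardMapManagerLoadPolicy.Lazy);

        ListShardMap<int> shardMap = smm.GetListShardMap<int>("TenantShardMap");

        // OpenConnectionForKey resolves and opens a connection to the shard
        // holding this tenant; the credentials-only connection string must
        // not name a server or database.
        using (SqlConnection conn = shardMap.OpenConnectionForKey(
            tenantId, "<credentials-only-connection-string>"))
        using (var cmd = new SqlCommand("SELECT COUNT(*) FROM Orders", conn))
        {
            return (int)cmd.ExecuteScalar();
        }
    }
}
```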
First of all, I'd like to say I'm neither a DBA nor a coder; I'm just a regular IT person doing network and infrastructure support. However, I like to get familiar with technologies in general and understand the basics of how they work and how they're implemented, without the deeper specifics.
I've been reading about Azure Storage accounts with regard to tables. As IT, I had to implement simple file shares via SMB 3.0 in order to have them mapped on our network, and I came across the other options: blobs, tables and queues. I've read about them, but I'm still trying to grasp what tables mainly offer a coder.
Correct me if I am wrong: when you code an app with a database, you can put the database on the same or a different server, that server can be on-premises or in the cloud, and you link the two together.
And as far as I understand from what I was able to find on the web, these tables are NoSQL with no constraints; you create the tables and data through Visual Studio thanks to an API, and that information is then reflected in your storage.
How is this useful for the app you're developing?
I've been reading about Azure Storage accounts with regard to tables. As IT, I had to implement simple file shares via SMB 3.0 in order to have them mapped on our network, and I came across the other options: blobs, tables and queues. I've read about them, but I'm still trying to grasp what tables mainly offer a coder.
And as far as I understand from what I was able to find on the web, these tables are NoSQL with no constraints; you create the tables and data through Visual Studio thanks to an API, and that information is then reflected in your storage.
An Azure Storage account is a "box" that keeps your blobs, tables, queues and files organised from a management point of view and for access control. Each storage type is good for its specific tasks.
If the world had one super-storage that solved all our possible cases for storing, querying and managing data, there would not be such a variety of databases and storage types available.
If you need to share the files as a "network folder" - try Azure Files.
If your coders need database storage, then the first question would be: what requirements do they have for the database? What would its purpose be, etc.? Azure in particular has a lot of different database solutions, and again, each of them is good for some specific tasks and can be a poor choice for others.
As to Azure Tables, from the official docs:
Azure Table storage is a service that stores structured NoSQL data in the cloud, providing a key/attribute store with a schemaless design.
So, if your coders do need to store such data, then yes, that would be one of the possible choices.
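As a small sketch of what that looks like in C#, assuming the Azure.Data.Tables package (the account connection string, table name and property names are made up):

```csharp
using Azure.Data.Tables;

class TableDemo
{
    static void Main()
    {
        var client = new TableClient(
            "<storage-account-connection-string>", "Devices");
        client.CreateIfNotExists();

        // Schemaless: each entity only needs PartitionKey + RowKey;
        // the remaining properties can vary from row to row.
        var entity = new TableEntity(partitionKey: "building-1", rowKey: "sensor-42")
        {
            { "Temperature", 21.5 },
            { "Status", "ok" }
        };
        client.AddEntity(entity);

        // Point read by the key pair.
        TableEntity fetched = client.GetEntity<TableEntity>("building-1", "sensor-42");
    }
}
```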
Correct me if I am wrong: when you code an app with a database, you can put the database on the same or a different server, that server can be on-premises or in the cloud, and you link the two together.
Correct. But note that you can either have your own server with the database, which you need to manage yourself, or choose a cloud service that provides the database for you and keeps the underlying server and other maintenance activity managed for you, so you don't need to worry about or spend your time on that.
How is this useful for the app you're developing?
It is important to understand what your requirements are for data storage in order to pick the proper one. This question should perhaps be addressed not to you but to your coders, who are building the app and can consolidate their requirements for the database store. Usually they will tell you exactly what they need, and you may give them ideas or advice about alternatives, if any (a similar solution with extra functionality, a different way the data is stored or processed, more built-in integrations that may matter to you, or a decision on whether to keep your own installation or use a cloud-managed service).
For your likely follow-up question, "When should I use a NoSQL database instead of a relational database? Is it okay to use both on the same site?", see this thread.
Update based on further questions:
If I develop an application with a database whose tables are on Azure, can I call, let's say, functions or data from it in my main application that is hosted on-premises? What's the benefit of doing that versus hosting the tables on-premises, other than it being largely scalable and highly available?
Perhaps you need to better understand the relationship between the App (application) and the DB (database). The database is a standalone system: it stores the data and replies to incoming queries (receives a request, processes it, returns the result). Overall, it does not matter to the DB who is requesting the data; it is a "passive" system. (There are some cases where the DB can trigger further steps in data-processing pipelines, but that is beyond this scope.)
The App, in contrast, is the active system in the App<->DB relationship. (Let's also leave aside more advanced designs where the App is not just one system.) The App receives requests, processes them (possibly making external requests to other services where necessary) and gives a response (with or without data) to the requester. In the App<->DB relationship, those external requests are exactly what happens: at some point the App needs data from the DB, so it makes a request to the DB, obtains the response and continues its own logic.
Where the App server and DB server are placed is not that important (for simplicity). The important part is whether the DB server is accessible to the requests. The DB can be on-prem with a public static IP address; it can be in the cloud on your own server with a public static IP address (sometimes this is achieved in different ways, but we skip that for simplicity); or it can be a Database-as-a-Service cloud solution, where you don't need a server or database configuration at all, just a URL endpoint to query the DB.
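To make that concrete, a small sketch assuming the Azure.Data.Tables package; the account URI, key and table name are placeholders. The point is only that the app needs a reachable endpoint and credentials, regardless of where it runs:

```csharp
using System;
using Azure.Data.Tables;

class EndpointDemo
{
    static void Main()
    {
        // Same code whether this runs on-prem or in the cloud; all that
        // matters is that the endpoint is reachable and we can authenticate.
        var client = new TableClient(
            new Uri("https://myaccount.table.core.windows.net"),
            "Devices",
            new TableSharedKeyCredential("myaccount", "<account-key>"));

        // Request -> the DB processes it -> response; the app then carries on.
        foreach (TableEntity entity in client.Query<TableEntity>(maxPerPage: 10))
            Console.WriteLine(entity.RowKey);
    }
}
```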
I appreciate the answer, and I pretty much agree with what you're saying.
But my question goes beyond what the requirements are for the developers.
I'll modify the question. If I develop an application with a database whose tables are on Azure, can I call, let's say, functions or data from it in my main application that is hosted on-premises? What's the benefit of doing that versus hosting the tables on-premises, other than it being largely scalable and highly available?
Azure Storage tables are the "Notepad" of NoSQL databases: if you want quick and easy key/value pairs, tables are the way to go. If you are looking for the "Word" of NoSQL in Azure, then Cosmos DB is where it's at. Cosmos DB offers global distribution, better features and a better SLA (see the comparison). Tables are cheaper, too.
Azure also supports MySQL, PostgreSQL, MariaDB and MSSQL as PaaS offerings if you wish to use a traditional database.
I understand that microservices are about independent, loosely coupled services. I have read https://en.wikipedia.org/wiki/Microservices.
When it comes to Azure, I understand there are many components like Azure Service Fabric and AKS, plus the option of deploying containers within Azure VMs using Docker or other containerization tools. However, since microservices are about developing atomic, individually scalable services, can this also be achieved by deploying each service as an Azure Web API app within an App Service plan and configuring auto-scale based on performance metrics (though each API app may not be individually scalable, they can still be individually manageable in terms of deployment, configuration, etc.)?
Can someone please suggest if this thought process is correct?
Microservices aren't a platform or a technology, so if you can make small, independently deployable services then they are microservices. Sure, some tech helps, but it depends on your situation.
If you only need a few services you probably don't need anything complex. Make sure services are well modeled, own their own data and ideally have a good monitoring and deployment pipeline setup. Design for service failure where possible.
Do you need to scale each part independently? Ideally you should be able to, but do the services really have very different requirements? You could have many small App Service plans, but that comes at the cost of unused resources, so split when you need to.
This question, and of course the answers, are going to be opinion-based, but generally, when thinking in terms of microservices, don't think in terms of things like loads of APIs and VMs. Instead think in terms of: when I upload an image, it needs to be resized and the table updated with a URL for the thumbnail; or when record XXX is updated in the database, run XXX to create a report or update Azure Search. Each service knows how to do a single thing only, e.g. resize an image.
Now, one could say: I have a system, a repo library and some function libraries. When an image is posted, I upload it, then call this, and that, etc.
With microservices, you would instead just add the image to a queue and create an Azure Function with a queue trigger that resizes the image and saves both the large version and the thumbnail to storage. It would then either update the database or, in true microservice fashion, add a message with the new info to another queue; a second function would watch that queue and insert into the database.
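A sketch of that queue-triggered function, assuming the in-process Azure Functions model (Microsoft.Azure.WebJobs); the queue name and message shape are made up:

```csharp
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class ResizeImage
{
    [FunctionName("ResizeImage")]
    public static void Run(
        // Fires whenever a message lands on the 'images-to-resize' queue.
        [QueueTrigger("images-to-resize")] string imageBlobName,
        ILogger log)
    {
        log.LogInformation("Resizing {blob}", imageBlobName);

        // 1. Load the original blob, resize it, save large + thumb to storage.
        //    (Image processing elided for brevity.)

        // 2. In the 'true microservice' variant, publish the new URLs to a
        //    second queue instead of writing to the database directly;
        //    another function watches that queue and does the insert.
    }
}
```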
You can use the DB queue from anything; you can use the blob queue from anything. Your main API does not care how images are handled. You could change your functions one day to save to Dropbox instead of Azure Blob storage, all really easily and with no rebuild of the API, because the API does not care.
A good example I use this for is email and SMS. My systems don't know how to send an email or an SMS; they only know how to add to a queue. My microservices, SendEmail and SendSMS, do know how to do it, and I can change how, and with whom, I send that content really easily. I could switch from Twilio to SendGrid tomorrow without ever telling the API that I've done it.
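A sketch of the producing side, assuming the Azure.Storage.Queues package; the queue name and message shape are invented:

```csharp
using System.Text.Json;
using Azure.Storage.Queues;

class Notifications
{
    static void RequestEmail(string to, string subject, string body)
    {
        var queue = new QueueClient("<storage-connection-string>", "send-email");
        queue.CreateIfNotExists();

        // The caller knows nothing about SMTP, Twilio or SendGrid; the
        // SendEmail microservice consuming this queue owns those details.
        queue.SendMessage(JsonSerializer.Serialize(new { to, subject, body }));
    }
}
```

The consuming side would look much like the queue-trigger sketch above.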
For something more complex: I have approval, and at the moment an approval sends an email or SMS to either the user or an admin, and that can change over time. So I have an SMS service, an email service and an approval service. When an approval happens, it just adds a config message to the queue; the rest is done by a Logic App that knows to send an email to XXX, send an SMS to XXX and then update the database. My API is just a POST that creates a queue message.
Basically, what I am saying is: to get started, perhaps when porting an existing app, begin with the workflow stuff, like sending an email, resizing an image, creating a report, creating a PDF, emailing 50 subscribers, etc. Take all that code out and put it into its own microservice that knows how to do one thing only. Then, as you grow in confidence, create a workflow from all of these services with Logic Apps and let Azure take care of the rest; that's what they want to do.
We have an old SBS 2008 server that is on the way out. We only use it for authentication, as all our apps are in the cloud now. Is it possible to do away with an in-house server altogether and use cloud services for some or all of what SBS does? Has anybody done this?
[my thoughts so far]
Azure Active Directory looks like it might do the authentication part, and is very cost-effective.
Azure AD Domain Services looks like it might do the Group Policy part, but looks expensive (probably more expensive than a server for a small business).
Azure DNS looks like it might do the DNS part, and is cost-effective as well.
Obviously, DHCP would now go on the router
Please don't shut this question down; I need a helpful answer to a specific question, and I will reword it if need be (just comment). :-)
You really have to look at your overall roadmap and strategy for the company, rather than just replacing what you have with new technologies.
I'd recommend that you try to move away from using AD DS and modernize workstations to Windows 10, Azure AD joined. Move away from Group Policy and look at using Intune for policy management.
For your cloud apps, look at using Azure AD for authentication and not something like LDAP or the like. Basically, standardize on Azure AD for auth across all your access methods, and stay away from traditional AD or AD DS.
These recommendations might seem drastic and a rather large effort; however, if your company is still running SBS, you may have a small enough set of apps and infrastructure to take on a transformational change.
How should data be pushed into CRM 2011 from another system?
I would like to pass the data to a web service.
I have thought of 2 options so far:
Create a custom entity and create records of this entity by calling the Organization service from the other system. A workflow can handle everything from that point.
Create a WCF service and host it somewhere. The other system passes data to this service and the service interacts with CRM.
The client will just pass us records, so validation must happen on the CRM side.
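For reference, option 1 boils down to something like this (a sketch against the CRM 2011 SDK's IOrganizationService; the entity and attribute names are placeholders):

```csharp
using Microsoft.Xrm.Sdk;

class InboundRecordWriter
{
    static void Push(IOrganizationService service, string payload)
    {
        var staging = new Entity("new_inboundrecord");
        staging["new_payload"] = payload;
        staging["new_source"] = "legacy-system";

        // A workflow (or plugin) registered on Create of new_inboundrecord
        // takes over from here and handles validation/processing.
        service.Create(staging);
    }
}
```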
EDIT:
If the client is an old system (in COBOL or something), is it still possible to connect to the CRM service?
Just to expand on Pedro's answer:
I have actually done both with CRM 4. I would highly recommend that you create your own service for the client to call, which in turn calls the CRM services.
This gives you an extra layer of abstraction for when things change later - and they will.
If your client calls the CRM services directly, it will be difficult or impossible for you to change your internal data structures or move servers around in the farm. This is especially true if you currently have a single-server infrastructure.
Also, don't map the service you create directly to the entity data structure; use an intermediate model.
So if you wanted the client to pass account details in, have your service expect an XML document that you convert into an Account entity, rather than exposing the Account entity and having your client submit that.
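A sketch of that mapping, with made-up element and attribute names:

```csharp
using System.Xml.Linq;
using Microsoft.Xrm.Sdk;

class AccountMapper
{
    public static Entity FromXml(string xml)
    {
        XElement doc = XElement.Parse(xml);

        // Clients only ever see the XML contract; if the Account entity's
        // structure changes later, only this mapping has to change.
        var account = new Entity("account");
        account["name"] = (string)doc.Element("CompanyName");
        account["telephone1"] = (string)doc.Element("Phone");
        return account;
    }
}
```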
The second choice is, for me, the best way to handle this situation, because you can control and validate everything on your side. You can host the WCF service on the same server as CRM Dynamics, or on another server with access to CRM Dynamics, and interact with CRM through the CRM web services.
I don't think you have any better solution.
Assuming the client has a login to your CRM system, I would actually go with option #1 first. Why?
You can still validate the data in the Pre-Validation plugin stage (see the sketch at the end of this answer).
This is by far the easiest and fastest way to get going. Will things change? Maybe! But:
You are going to spend lots and lots of hours getting a custom WCF service up and running
Somebody has to deploy it
Somebody has to maintain it
Your client has to learn how to connect to a proprietary WCF service, instead of you saying, "Here's a bunch of published info on how to connect to CRM's web services."
It becomes a "hidden silo" of business logic instead of it all living in CRM. Any good CRM developer would be able to work on option number one; number two requires additional skills.
If it really does change over time so much that it needs its own WCF service, you haven't really wasted that much time: all your business logic would be ported from a plugin to the WCF service. But it is much harder to go from the more complex solution (#2) to the simpler solution (#1) if it turns out that you don't need it.
I promise you that your customer wants this done faster and simpler (cheaper) rather than longer and more complex (expensive).
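Here is the Pre-Validation idea as a sketch (standard CRM plugin shape from Microsoft.Xrm.Sdk; the attribute name is made up):

```csharp
using System;
using Microsoft.Xrm.Sdk;

public class ValidateInboundRecord : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        var context = (IPluginExecutionContext)
            serviceProvider.GetService(typeof(IPluginExecutionContext));

        if (context.InputParameters.Contains("Target"))
        {
            var target = (Entity)context.InputParameters["Target"];

            // Reject bad records before anything is written to the database.
            if (!target.Attributes.Contains("new_payload"))
                throw new InvalidPluginExecutionException(
                    "new_payload is required.");
        }
    }
}
```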