Our application uses an Azure SQL Database.
Apart from our local dev setup, we have two environments:
Staging (for quality assurance and client testing), and
Production (live)
The Staging database and Production database are stored on two separate SQL Database servers; on both servers, the databases have the same name.
Problem:
Since the server names are automatically and uniquely generated (a bunch of random letters), it is very difficult to distinguish Staging from Production.
This also increases the risk of pointing at the wrong database when running change scripts, queries, etc. If it were possible to alias or rename the servers, this wouldn't be a problem, but I know that it isn't.
Any suggestions? What do you do in your environment?
If you want readable database URLs, you could use custom DNS names for your SQL Azure servers.
So you could CNAME your custom domains like this:
liveDB.mydomain.com to random2323LIVE32323.database.windows.net
stageDB.mydomain.com to random43435STAGE34.database.windows.net
But there is one caveat:
You still need the server name, because you have to log in as user@random2323LIVE32323.
Anyway, if you use this setup, the worst case is a rejected login if you mix up the real server names.
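As a rough illustration, a client connection string could then reference the friendly CNAME while the login still carries the real server name (the database name and credentials below are made up):

```csharp
// Minimal sketch, assuming the CNAMEs above are in place.
// The Data Source can be the friendly alias, but the User ID must
// still include the real (random) server name.
using System.Data.SqlClient;

var connectionString =
    "Data Source=liveDB.mydomain.com;" +        // custom CNAME
    "Initial Catalog=MyAppDb;" +                // hypothetical DB name
    "User ID=myUser@random2323LIVE32323;" +     // real server name required here
    "Password=...;Encrypt=True;";               // credentials elided

using var connection = new SqlConnection(connectionString);
connection.Open();
```

If you point the alias at the wrong server, the user@server mismatch rejects the login, which is exactly the safety net described above.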
For a detailed explanation see here
Although it's a bit more administrative work, I typically recommend separate Live IDs for Stage vs. Prod, because I normally want a different set of administrators for my cloud services. If you name the Live IDs PRODAppName and STGAppName, you can make yourself co-admin on both and simply use the portal's Filter capability to see only PROD or STG when you need to know which service is which. Hopefully this makes sense.
I have managed to get the C# and DB setup working using ListMappings. However, when I try to deploy the split/merge tool to an Azure cloud service (classic), it states: 'The requested VM tier is currently not available in East US for this subscription. Please try another tier or deploy to a different location.' We tried a few other regions with the same result. Do you know if there is a workaround or an updated version? Is the split/merge service even still relevant? Has anyone got this service to run on Azure lately?
https://learn.microsoft.com/en-us/azure/azure-sql/database/elastic-scale-overview-split-and-merge
The answer to the question of whether it is still relevant is, in my opinion, no. Split/merge is no longer relevant with the maturation of elastic pools. Elastic pools with one database per tenant seem like the sustainable way to implement multi-tenancy with legacy code. The initial plan was to add keys to each of our tables to host multiple tenants per database; elastic pools give us the same flexibility without having to make breaking changes to our existing code.
Late post here, but we are implementing ElasticScale for a client to split ~50 clients into a database-per-tenant model. I don't think the SplitMerge tool will be used over the long term, just for the initial data migration from one DB to many shards, but it has been handy for that purpose. We are using the ElasticScale SDK to let a single API route queries to the appropriate shard(s) based on the sharding key. Happy to compare notes with you if you are still working on this.
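For anyone comparing notes, the data-dependent routing piece looks roughly like this with the Elastic Database client library (the shard-map name and the int sharding key are assumptions from our setup, not anything prescribed by the SDK):

```csharp
using System.Data.SqlClient;
using Microsoft.Azure.SqlDatabase.ElasticScale.ShardManagement;

static SqlConnection OpenTenantConnection(
    string shardMapManagerConnStr,   // points at the shard map manager DB
    string shardUserConnStr,         // credentials only; no server name
    int tenantId)
{
    // Load the shard map manager that knows where every tenant lives.
    ShardMapManager smm = ShardMapManagerFactory.GetSqlShardMapManager(
        shardMapManagerConnStr, ShardMapManagerLoadPolicy.Lazy);

    // "TenantShardMap" is an assumed name from our setup.
    ListShardMap<int> shardMap = smm.GetListShardMap<int>("TenantShardMap");

    // Routes to whichever shard holds this tenant's data.
    return shardMap.OpenConnectionForKey(
        tenantId, shardUserConnStr, ConnectionOptions.Validate);
}
```

The caller just passes a tenant ID and gets back an open connection to the right shard, which is what lets a single API front many databases.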
First of all, I'd like to say I'm neither a DBA nor a coder; I'm just a regular IT person doing network and infrastructure support. However, I like to get familiar with technologies in general and understand the basics of how they work, without the implementation-specific details.
I've been reading about Azure Storage Accounts with regard to tables. On the IT side, I had to implement simple file shares via SMB 3.0 so they could be mapped on our network, and in doing so I came across other options such as blobs, tables, and queues. I've read about them, but I'm still trying to grasp what tables mainly offer a coder.
Correct me if I'm wrong: when you code an app with a database, you can put the database on the same or a different server, on-premises or in the cloud, and link the two together.
As far as I could find out investigating on the web, these tables are NoSQL with no constraints; you create the tables and data through Visual Studio via an API, and that information is then reflected in your storage account.
How is this useful for the app you're developing?
I've been reading about Azure Storage Accounts with regard to tables. On the IT side, I had to implement simple file shares via SMB 3.0 so they could be mapped on our network, and in doing so I came across other options such as blobs, tables, and queues. I've read about them, but I'm still trying to grasp what tables mainly offer a coder.
As far as I could find out investigating on the web, these tables are NoSQL with no constraints; you create the tables and data through Visual Studio via an API, and that information is then reflected in your storage account.
An Azure Storage Account is a "box" that keeps your Blobs, Tables, Queues, and Files organised, both from a management point of view and for access control. Each storage type is good at its specific tasks.
If the world had just one super-storage that solved every possible case for storing, querying, and managing data, there would not be such a variety of databases, storage types, etc. available.
If you need to share files as a "network folder", try Azure Files.
If your coders need database storage, the first question would be: what requirements do they have for the database? What would its purpose be? And so on. Azure in particular has a lot of different database solutions, and again, each of them is good for some specific tasks and can be a poor choice for others.
As to Azure Tables, from the official docs:
Azure Table storage is a service that stores structured NoSQL data in the cloud, providing a key/attribute store with a schemaless design.
So, if your coders do need to store such data, then yes, that would be one of the possible choices.
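To make that concrete for a non-coder, here is a minimal sketch of what using a table looks like from C# with the Azure.Data.Tables SDK (the table, entity, and property names are invented for illustration):

```csharp
using Azure.Data.Tables;

// Placeholder: the real connection string comes from the storage account.
string storageConnectionString = "<storage account connection string>";

var client = new TableClient(storageConnectionString, "Customers");
client.CreateIfNotExists();

// Schemaless: an entity is just a bag of properties addressed by
// PartitionKey + RowKey; no columns or constraints are declared up front.
var entity = new TableEntity(partitionKey: "EU", rowKey: "customer-001")
{
    { "Name", "Contoso" },
    { "Tier", "Gold" }
};
client.AddEntity(entity);
```

The app simply writes and reads key/attribute entities over the storage API; there is no server or schema for IT to provision beyond the storage account itself.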
Correct me if I'm wrong: when you code an app with a database, you can put the database on the same or a different server, on-premises or in the cloud, and link the two together.
Correct. But you can also run your own server with the database, which you need to manage yourself, or choose a cloud service that provides the database while keeping the underlying server and other maintenance managed for you, so you don't need to worry about or spend your time on that.
How is this useful for the app you're developing?
It is important to understand your data-storage requirements in order to pick the proper one. This question should perhaps be addressed not to you but to your coders, who are building the app and can consolidate their requirements for the database store. Usually they will tell you exactly what they need, and you may give them ideas or advice on alternatives, if any: a similar solution with extra functionality, a different way the data is stored or processed, more built-in integrations that may matter to you, or a decision between keeping your own installation and using a cloud-managed service.
For your possible follow-up question, "When should I use a NoSQL database instead of a relational database? Is it okay to use both on the same site?", see this thread.
Update based on further questions:
If I develop an application with a database whose tables are on Azure, can I call, let's say, functions or data from it in my main application hosted on-premises? What's the benefit of doing that versus hosting the tables on-premises, other than scalability and high availability?
Perhaps you need to better understand the relationship between the app (application) and the DB (database). The database is a standalone system that stores the data and replies to incoming queries (receives a request, processes it, returns the result). Overall, it does not matter to the DB who is requesting the data; it is a "passive" system. (There are cases where a DB can trigger further processing in data pipelines, but that is beyond this scope.)
The app, by contrast, is the active party in the app<->DB relationship. (Let's also set aside more advanced designs where the app is not a single system.) The app receives requests, processes them (making external requests to other "services" if necessary), and gives a response (with or without data) to the requester. In the app<->DB relationship, those external requests are what happens: at some point the app needs data from the DB, so it makes a request to the DB, obtains the response, and continues its own logic.
Where the app server and DB server are placed is not that important (for simplicity). What matters is whether the DB server is reachable for requests. The DB can be on-premises with a public static IP address; it can be in the cloud on your own server with a public static IP address (sometimes this is achieved in different ways, but we'll skip that for simplicity); or it can be a Database-as-a-Service cloud solution, where you don't set up a server or configure the database yourself but get a URL endpoint to query the DB.
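In code, the placement barely matters. A rough sketch of an on-premises C# app querying a cloud DB endpoint (the server, database, and table names are made up):

```csharp
using System.Data.SqlClient;

// The only "cloud-specific" part is the endpoint in the connection string.
var conn = new SqlConnection(
    "Server=tcp:myserver.database.windows.net,1433;" +  // cloud endpoint
    "Database=MyAppDb;User ID=appUser;Password=...;Encrypt=True;");
conn.Open();

// The query itself runs exactly as it would against a local server.
using var cmd = new SqlCommand("SELECT COUNT(*) FROM Orders", conn);
var count = (int)cmd.ExecuteScalar();
```

Swap the connection string for a local server name and nothing else in the app changes, which is the point: the app only cares about a reachable endpoint.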
I appreciate the answer, and I pretty much agree with what you're saying.
But my question goes beyond what the requirements are for the developers.
I'll modify the question: if I develop an application with a database whose tables are on Azure, can I call, let's say, functions or data from it in my main application hosted on-premises? What's the benefit of doing that versus hosting the tables on-premises, other than scalability and high availability?
Azure Storage Tables are the "Notepad" of NoSQL databases. If you want quick and easy key/value pairs, tables are the way to go. If you are looking for the "Word" of NoSQL in Azure, then Cosmos DB is where it's at. Cosmos DB offers global distribution, better features, and a better SLA (see comparison). Tables are cheaper, too.
Azure also supports MySQL, PostgreSQL, MariaDB, and MSSQL as PaaS offerings if you wish to use a traditional database.
Currently our DB runs in the customer's local network, and we have a C# client app that consumes the data. Due to business needs, we have been told to start moving everything to Azure. The DB will move to Azure SQL.
We had a discussion about accessing the DB. There are two positions:
One guy said we have to add one more layer between our app (which will run outside Azure on end-user PCs) and SQL Azure. In other words, he suggested adding an API service that translates all requests to the DB, i.e. app (on-premises) -> API service (on Azure) -> SQL Azure. This approach looks more reliable and secure, since we hide SQL Azure behind the facade of the API service and the app talks only to our API service. It works much like a reverse proxy. Obviously, behind this API we could build a more sophisticated structure of DBs.
Another guy suggested connecting directly to the DB, i.e. app (on-premises) -> SQL Azure. So far we have no plans to change the structure of the DB or even increase the number of DBs. He claims it is simpler, and that we can secure the connection just as well. Having an additional service that merely relays our queries to the DB and back looks like a waste of time; in the future, if needed, we could add such an API.
What would you select and recommend, and why?
A few notes:
We are going to use Azure AD to authenticate users.
Our application will move to Azure too, but later (in 1-2 years); we plan to build a REST API and move to a thin client instead of the fat client we have now.
Good performance is a goal, and we don't want to add anything that degrades it, but security is our most important goal as well.
Certainly an intermediate layer is one way to go. There isn't enough detail to be sure, but I wonder why you don't try the second option. Usually some redevelopment is normal, but if you can get away without it and still get sufficient performance, then that's even better.
I hope this helps.
Thank you.
Guy
If your application is not just a prototype (it sounds like it is not), then I advise you to build the intermediate API. The primary reasons for this are:
Flexibility
Rolling out a new version of an API is simple: You have either only one deployment or you have something like Octopus Deploy that deploys to a few instances at the same time for you. Deploying client applications is usually much more involved: Creating installers, distributing them, making sure users install them, etc.
If you build the API, you will be able to make changes to the DB and hide these changes from the client applications by just modifying the API implementation, but keeping the API interfaces the same. Moving forward, this will simplify the tasks for your team considerably.
Security
As soon as you have different roles/permissions in your system, you will need to implement them with DB security features if you connect to the DB directly. This may work for simple cases, but even there it is a pain to manage.
With an API, you can implement authorization in the API using C#. This way, you can build whatever you need, and you're not restricted by the security features the DB offers.
Also, if you don't take extra care about this, you may end up exposing the DB credentials to the client app, which will be a major security flaw.
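A minimal sketch of what that can look like (ASP.NET Core is used purely for illustration; the route, role name, and repository abstraction are all invented):

```csharp
using System.Threading.Tasks;
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;

public record Order(int Id, string Status);     // hypothetical model

public interface IOrderRepository               // hypothetical data access
{
    Task<Order?> GetAsync(int id);
}

[ApiController]
[Route("api/orders")]
public class OrdersController : ControllerBase
{
    private readonly IOrderRepository _orders;
    public OrdersController(IOrderRepository orders) => _orders = orders;

    [HttpGet("{id}")]
    [Authorize(Roles = "OrderReader")]          // assumed role name
    public async Task<IActionResult> Get(int id)
    {
        // DB credentials stay on the server; the client only sees this API.
        var order = await _orders.GetAsync(id);
        return order is null ? NotFound() : Ok(order);
    }
}
```

The authorization rule lives in C# rather than in DB permissions, and the connection string never leaves the server.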
Conclusion
Build the intermediate API, unless you have strong reasons not to. As always with architecture considerations, I'm sure there are cases where the above points don't apply. Just make sure you understand all the implications if you decide to go the direct route.
I have my Azure SQL database located in West Europe and am considering having a database in the States as well. Deploying my website in the States was easy, but that website then queries the database in Europe, which introduces delays.
What do people do in these cases? Having separate databases for different users could work, I guess, but that fails if a user who is normally on one server gets routed to the other server: his data is not in that database. Is there an easy solution for having the same data available on two Azure SQL servers, with Azure maintaining the sync? And what about conflicts when syncing?
It really depends on your requirements and how you implement routing. You can design your distributed application so that user A, once authenticated, always goes to the US server, for instance, even if he/she is currently in Europe or Asia.
If you want to sync everything everywhere, there is a preview feature named "SQL Data Sync". It can sync data between multiple instances of SQL Server (including on-premises installations) and is quite flexible in its configuration and syncing options. But again, it really depends on the application's requirements. If I were building a distributed system, I would not sync data across continents; I would design the app so that user-specific data lives in only one data centre. That, of course, is impossible if users have access to much more data than just their own profiles.
The best option would be to keep user-specific data in the user's designated data centre and sync only the data that must be available to all users at all locations.
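As a sketch, the routing can be as simple as resolving a connection string from the user's designated data centre (the region map and lookup below are assumptions, not an Azure feature):

```csharp
using System.Collections.Generic;
using System.Data.SqlClient;

// Hypothetical map of "home" data centre -> connection string
// (credentials elided).
var regionDbs = new Dictionary<string, string>
{
    ["EU"] = "Server=tcp:eu-server.database.windows.net;Database=AppDb;...",
    ["US"] = "Server=tcp:us-server.database.windows.net;Database=AppDb;..."
};

// Hypothetical lookup: in practice this comes from the user's profile.
static string GetHomeRegion(string userId) =>
    userId.StartsWith("eu-") ? "EU" : "US";

string home = GetHomeRegion("eu-12345");
using var conn = new SqlConnection(regionDbs[home]);
```

Because the user's home region decides the endpoint, their data never needs to exist in two places, and only the shared reference data has to be synced.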
We are experiencing a very serious unscheduled outage of our Azure application today, now coming up on 9 hours. We reported it to Azure support, and the ops team is actively trying to fix the problem; I do not doubt that. We managed to get our application running on another "test" hosted service that we have and redirected our CNAME to point at that instance, so our customers are happy, but the "main" hosted service is still unavailable.
My own "finger in the air" instinct is that the issue is network related within our data center (west europe), and indeed, later on in the day the service dash board has gone red for that region with a message to that effect. (Our application is showing as "Healthy" in the portal, but is unreachable via our cloudapp.net URL. Additionally threads within our application are logging sql connection exceptions into our storage account as it cannot contact the DB)
What is very strange, though, is that the "test" instance I referred to above is also in the same data centre, yet it has no issues contacting the DB and its external endpoint is fully available.
I would like to ask the community: is there anything I could have done better to avoid this downtime? I followed the guidance about having at least two instances per role, yet I still got burned. Should I move to a more reliable data centre? Should I deploy my application to multiple data centres? And how would I manage the fact that my SQL Azure DB is in the same data centre?
Any constructive guidance would be appreciated - being a techie, I've never had a more frustrating day being able to do nothing to help fix the issue.
There was an outage in the European data center today with respect to SQL Azure. Some of our clients got hit and had to move to another data center.
If you are running mission-critical applications that cannot be down, I would deploy the application into multiple regions. DNS resolution is obviously a weak link in Azure right now, but it can be worked around (if you only run a website, it can be done very simply using Response.Redirect or similar).
Now, there is a data synchronization service from Microsoft that will keep multiple SQL Azure databases in sync. Check here. This way, you can have mirror sites up in different regions and keep them in sync from the SQL Azure perspective.
It would also be a good idea to employ a third-party monitoring service that detects problems with your deployed instances externally. AzureWatch can notify you, or even deploy new nodes if you choose, when some of your instances turn "Unresponsive".
Hope this helps
I can offer some guidance based on our experience:
Host your application in multiple data centres, complete with SQL Azure databases, and connect each application to its data-centre-specific SQL server. You can also cache any external assets (images/JS/CSS) on the data-centre-specific Windows Azure machine or leverage Azure Blob Storage. Note: extra costs will be incurred.
Set up one-way SQL replication between your primary SQL Azure DB and the instance in the other data centre. If you want to do bi-directional replication, take a look at the MSDN site for guidance.
Leverage Azure Traffic Manager to route traffic to the data centre closest to the user. It has geo-detection capabilities, which will also improve the latency of your application: you can map http://myapp.com to the internal URL of your data centre, and a user in Europe should automatically get routed to the European data centre and vice versa for the USA. Note: at the time of writing, there is no way to automatically detect a failure and fail over to another data centre. Manual steps will be involved once a failure is detected, and the failover is a complete set (i.e. you will fail over both the Windows Azure AND SQL Azure instances). If you want micro-level failover, I suggest putting all your config in the service config file and encrypting the values, so you can edit the connection string to connect instance X to DB Y.
You are all set now. I would create or install a local application to detect the availability of the site. A better solution would be to write a diagnostic page or web service that checks the availability of application-specific components, and then poll it from a local computer.
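For example, a bare-bones diagnostic endpoint could actively test the app's own dependencies rather than just returning 200 (a sketch only; modern ASP.NET Core is used for illustration, and the route and config key are assumptions):

```csharp
using System.Data.SqlClient;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Configuration;

[ApiController]
[Route("diagnostics")]
public class DiagnosticsController : ControllerBase
{
    private readonly string _dbConnStr;   // assumed to come from config
    public DiagnosticsController(IConfiguration cfg) =>
        _dbConnStr = cfg.GetConnectionString("Main");

    [HttpGet]
    public IActionResult Get()
    {
        try
        {
            using var conn = new SqlConnection(_dbConnStr);
            conn.Open();                  // fails fast if the DB is unreachable
            return Ok("healthy");
        }
        catch (SqlException)
        {
            return StatusCode(503, "database unreachable");
        }
    }
}
```

Polling this from outside the data centre catches exactly the failure mode described in the question: an app that looks "Healthy" in the portal but cannot reach its database.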
HTH
As you're deploying to Azure, you don't have much control over how SQL Server is set up; MS has already set it up to be highly available.
Having said that, it seems that MS has been having some issues with SQL Azure over the last few days. We've been told it only affected "a small number of users". At one point the service dashboard showed five data centres affected by a problem. I had three databases in one of those data centres go down twice, for about an hour each time, yet one database in another affected data centre had no interruption at all.
If having a database connection is critical to your app, then the only way in the Azure environment to insure against problems that MS hasn't prepared for (this latest technical problem, earthquakes, meteor strikes) is to co-locate your SQL data in another data centre. At the moment the most practical way to do this is to use the Sync Framework. There is the ability to copy SQL Azure databases, but it only works within a data centre. With your data located elsewhere, you could then point your app at the new database if the main one becomes unavailable.
While this looks good on paper, it may not have helped with the latest problem, as that affected multiple data centres. If you'd just been making database copies on a regular basis, that might have been enough to get you through. Or not.
(I would have posted this answer on server fault, but I couldn't find the question)
This is just about a programming/architecture issue, but you may also want to ask the question on webmasters.stackexchange.com.
You need to find out the root cause before drawing any conclusions.
However, my guess is that one of two things was the problem:
The ISP connectivity differs between the test system and your production system: either they use different ISPs, or different lines from the same ISP. When I worked at a hosting company, we made sure our IP connectivity went through at least two different ISPs that did not share fibre to our premises (and, where we could, that had different physical routes to the building; the homing ability of backhoes when there's a critical piece of fibre to dig up is well proven).
Your data centre had an issue with some shared production infrastructure. This might be edge routers, firewalls, load balancers, intrusion detection systems, traffic shapers, etc., which are often installed only on production systems. Defences here involve understanding the architecture and making sure the provider has a (tested!) DR plan for restoring SOME service when things go pear-shaped. The neatest hack I've seen was persuading an IPS (intrusion prevention system) that its own management servers were malicious, so that it couldn't be reconfigured at all.
Just a thought: your DC doesn't host any of the WikiLeaks mirrors, or PayPal/Mastercard/Amazon (who are getting DDoS'd by WikiLeaks supporters at the moment)?