Good Afternoon,
Here is the scenario: our company manages multiple computers/PCs (about 50 or so) located in different geographic locations. These 50 computers run the same software locally. They all connect to an Apache Linux box that is housed on premises, and they are all in a local workgroup. Something like this:
Location 1 (Term1, Term2, Term3, Term4, etc.)
Location 2 (Term1, Term2, Term3, etc.)
Location 3 (Term1, Term2, etc.)
Location 4 (Term1, Term2, Term3, Term4, Term5, etc.)
As mentioned, all the computers run in a workgroup. A local user is defined for each computer, say user1/user2/user3/etc. We have to maintain a list of users, and user management has become a bear as we have grown. In addition, patch management has become very time consuming as well. Traditional Active Directory (on-premise) is not an option, as the clients at each location are different people with different financial resources, and few, if any, are willing to purchase and maintain Windows Server on premise.
The basic thought being: if it's not broke, don't fix it. As far as the users are concerned, they are not seeing a problem, because most only have to deal with 4 or 5 computers in each location.
I noticed that Windows 10 has support for Azure AD. I was wondering if anyone could shed some light on how that would work and how many users I would have to set up in order to control costs. Any thoughts on this would be much appreciated.
Thanks....
You will want to look at Microsoft Intune for client management: https://www.microsoft.com/en-us/server-cloud/products/microsoft-intune/Features.aspx
And more specifically: https://technet.microsoft.com/en-us/library/dn646959.aspx
With Windows 10 you are able to manage the user identities in Azure AD, and the computers can do a domain join to Azure AD. At the same time you can set this up so that the devices are also auto-enrolled into Microsoft Intune for management when they join Azure AD.
This will allow you to manage settings and policies on the Windows 10 machines and control Windows Updates and Windows Defender.
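As a rough illustration of what becomes possible once the devices are joined to Azure AD, here is a hedged sketch (not an official sample) that lists the registered devices through Microsoft Graph. It assumes an app registration with the Device.Read.All application permission and the msal and requests packages; the tenant, client ID, and secret below are placeholders.

```python
# Hedged sketch: list devices registered in Azure AD via Microsoft Graph.
# Assumes an app registration with Device.Read.All (application) permission.
# TENANT_ID, CLIENT_ID and CLIENT_SECRET are placeholders for your own values.
import msal
import requests

TENANT_ID = "<your-tenant-id>"
CLIENT_ID = "<your-app-client-id>"
CLIENT_SECRET = "<your-app-secret>"

app = msal.ConfidentialClientApplication(
    CLIENT_ID,
    authority=f"https://login.microsoftonline.com/{TENANT_ID}",
    client_credential=CLIENT_SECRET,
)
token = app.acquire_token_for_client(scopes=["https://graph.microsoft.com/.default"])

resp = requests.get(
    "https://graph.microsoft.com/v1.0/devices",
    headers={"Authorization": f"Bearer {token['access_token']}"},
)
resp.raise_for_status()
for device in resp.json().get("value", []):
    print(device["displayName"], device.get("operatingSystem"))
```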
This is just a very short description of the possibilities within Azure AD with Windows 10, and even more is coming this summer with the Anniversary Update of Windows 10.
Related
I'm trying to set up a collection of Azure workstation VMs for a small organisation (3 staff and increasing).
My prior experience with Azure is focused on web servers. I'm familiar and comfortable with the resources required for discrete VMs (VM, managed disk, network interface, public IP, DNS).
On the face of it, Azure Virtual Desktop looks like a potentially attractive option for scalability. But I've just followed the Getting Started process, and I have ended up with around 25 new resources spread across three new resource groups.
It is not clear what each of them does or what each of them costs, and I am having difficulty connecting with the test user created as part of this process.
I understand that AVD is probably targeted at large organisations where this complexity may be warranted and navigable.
But with limited time to pursue this, I'm suspecting that the best option for constructing this small network may be to stay with the resources that I am familiar with.
Would appreciate feedback on the following:
It seems that discrete VMs can only be provisioned with a flavor of Windows Server, and desktop versions of Windows (i.e. 10, 11) are only available via AVD. Is there any downside to using Windows Server (compared to a desktop version) as the platform for a workstation? The workload apps here are primarily development-focused: Office, Visual Studio, SSMS, etc.
A copy of Office will be needed on each VM. AVD has a bundled option for this, but I believe it can also be provisioned separately per VM via the Microsoft Account/Office 365 pathway. Any licensing pitfalls there that I should be aware of?
If each staff member is to be allocated their own separate VM, we would want them to call that VM up and shut it down as needed, to avoid wasted compute expense for the majority of the daily cycle - but without granting them access to the Azure portal account. Is there a mechanism that can help with this?
Yes you can, but there are several advantages and disadvantages.
It depends on the server's operating system, but in general it is not that good an idea.
Performance degradation - a server is meant to act as a service provider, and using it as a workstation can consume its resources and degrade its performance.
In some cases a Windows Server OS may actually be a good idea, because the Windows workstation (desktop) editions support only a limited number of processors (fewer than the server editions).
You will need to disable IE Enhanced Security Configuration, plus a few other changes described in the links below.
Add Windows features once the OS is installed, etc.
Reference: https://www.quora.com/Can-I-use-Windows-Server-as-I-would-use-Windows-Desktop-i-e-Win-10-7-8
https://www.researchgate.net/post/Is_it_possible_to_use_a_server_as_a_workstation
Option: enable Hyper-V on the server and add multiple workstation VMs.
https://support.auvik.com/hc/en-us/articles/212801986-How-to-enable-Microsoft-Hyper-V-on-Windows-Servers-and-Workstations
A copy of Office will be needed on each VM. AVD has a bundled option for this, but I believe it can also be provisioned separately per VM via the Microsoft Account/Office 365 pathway. Any licensing pitfalls there that I should be aware of?
A licensed copy of Office can be installed manually or via application packaging. There should not be any challenges there.
If each staff member is to be allocated their own separate VM, we would want them to call that VM up and shut it down as needed, to avoid wasted compute expense for the majority of the daily cycle - but without granting them access to the Azure portal account. Is there a mechanism that can help with this?
Azure Automation has a Start/Stop VM solution that can shut down VMs in Azure when they are not in use; a self-service scripting sketch follows after the reference.
Reference : https://learn.microsoft.com/en-us/azure/automation/automation-solution-vm-management-config
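If you prefer to script it yourself rather than use the Automation solution, here is a hedged sketch using the Azure Python SDK (azure-identity and azure-mgmt-compute). The subscription, resource group, and VM names are placeholders; the idea is that each staff member's credentials only need a role scoped to their own VM (e.g. Virtual Machine Contributor on that VM), not access to the rest of the portal.

```python
# Hedged sketch: start or deallocate a single VM with the Azure Python SDK.
# Requires the azure-identity and azure-mgmt-compute packages; names below
# are placeholders. Deallocating (not just stopping) releases the compute cost.
import sys
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

SUBSCRIPTION_ID = "<subscription-id>"
RESOURCE_GROUP = "<resource-group>"
VM_NAME = "<vm-name>"

client = ComputeManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

if len(sys.argv) > 1 and sys.argv[1] == "start":
    client.virtual_machines.begin_start(RESOURCE_GROUP, VM_NAME).wait()
    print(f"{VM_NAME} started")
else:
    client.virtual_machines.begin_deallocate(RESOURCE_GROUP, VM_NAME).wait()
    print(f"{VM_NAME} deallocated")
```

Staff could run this from a desktop shortcut or small scheduled task; keeping the role assignment scoped to the single VM is what keeps them out of the portal proper.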
We have a suite of apps we are developing. We have already rolled the app out to about 50 users and have over 200 more. Sharing the connections (custom connection & connector) and the apps has become super cumbersome. Long story short, this takes a lot of time. Each time we have a new user we have to share 3 apps and 2 connections, and set up access in an internal method we have. We are using SQL, not CDS.
This has been misery. Is there a way to create one group/address that I could share the apps and connections with, so that I would just add users to this group? It would save us time to just add users to the one list, with access then shared via this common group. Does anyone know a better method to deploy PowerApps like this? We can't share to "everyone". Thanks.
If you have an Azure Active Directory security group you can give it access to the connector and the PowerApp. See: https://powerapps.microsoft.com/en-us/blog/sharing-powerapps-with-multiple-users/
There are distinctions between security groups, distribution groups, Office 365 groups, and on-premises vs. Azure groups. I couldn't tell you the difference between them all, but you can follow Microsoft's instructions on how to share a canvas app, which walk through some of these different methods of sharing.
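To make the "just add the new user to one group" workflow concrete, here is a hedged sketch that adds a user to an existing Azure AD security group through Microsoft Graph. The group ID, user ID, and token are placeholders, and the token is assumed to carry GroupMember.ReadWrite.All (or equivalent) permission; once the apps and connections are shared with the group, membership alone grants access.

```python
# Hedged sketch: add a user to an Azure AD security group via Microsoft Graph.
# GROUP_ID, USER_ID and ACCESS_TOKEN are placeholders; the token needs
# GroupMember.ReadWrite.All (or equivalent) permission.
import requests

GROUP_ID = "<security-group-object-id>"
USER_ID = "<user-object-id>"
ACCESS_TOKEN = "<token acquired via MSAL or az account get-access-token>"

resp = requests.post(
    f"https://graph.microsoft.com/v1.0/groups/{GROUP_ID}/members/$ref",
    headers={
        "Authorization": f"Bearer {ACCESS_TOKEN}",
        "Content-Type": "application/json",
    },
    json={"@odata.id": f"https://graph.microsoft.com/v1.0/directoryObjects/{USER_ID}"},
)
resp.raise_for_status()  # Graph returns 204 No Content on success
```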
I have a BizSpark account but I'm struggling to work out what I'm actually entitled to as part of my free Azure package. The package details are listed here:
http://www.windowsazure.com/en-us/offers/details?locale=en-us&offer=ms-azr-0012p&no-rewrite=true
I need to run:
One virtual machine (running Linux) to power the website
One hosted service to provide the client software (Windows Phone and Windows 8) with database access
One hosted service to provide the virtual machine with database access
Two storage accounts (one for images and one for the virtual machine)
One SQL database
Do the hosted services count as VMs, and can anybody shed some light on the best configuration (VM sizes etc.) to fit all of the above into my subscription, please? Multiple instances would be nice, but I think I might be getting greedy now!
Thank you.
The most important thing to keep in mind is that you get 1,500 hours of small compute instances (this includes both Cloud Services and Virtual Machines). 1,500 hours per month means you can run 2 small instances full time, or choose an equivalent ratio. So you could go for 4 extra small instances and still have room for 2 extra small instances and 1 small instance to use for something else. To keep the SLA (on the hosted service at least) I would suggest the following:
2 extra small instances of a Linux Virtual Machine
2 extra small instances of a hosted service with a web role. The web role would have 2 tasks:
Provide the client software with database access
Provide the Virtual Machine with database access
This might not be the best solution in terms of performance, but you'll be able to run everything high available without having to pay anything extra.
The 2 storage accounts and the SQL Azure database (you must use the web edition) are also covered by the BizSpark subscription.
Update: the 1 small = 4 extra small equivalent ratio isn't right. The ratio is 1 small = 6 extra small.
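As a back-of-the-envelope check of the layout above using the corrected ratio (and assuming a 30-day month, i.e. 720 hours), the four extra small instances consume well under the 1,500 small-instance hours:

```python
# Rough check of the instance-hour budget described above.
# Assumes a 30-day month (720 hours) and the corrected ratio 1 small = 6 extra small.
HOURS_PER_MONTH = 24 * 30          # 720
SMALL_HOURS_ALLOWED = 1500         # BizSpark allowance, in small-instance hours
XS_PER_SMALL = 6                   # 1 small instance = 6 extra small instances

xs_instances = 4                   # 2 XS Linux VMs + 2 XS web role instances
small_hours_used = xs_instances * HOURS_PER_MONTH / XS_PER_SMALL
print(f"Small-instance hours used: {small_hours_used:.0f} of {SMALL_HOURS_ALLOWED}")
# -> 480 of 1500, leaving roughly 1,020 small-instance hours for other workloads
```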
I'm considering joining the Windows Azure Platform Introductory Special, but I'm a little bit afraid of losing money with it. I don't want to develop any fancy large-scale application; I just want to join to learn Azure and do my experiments. What should I be afraid of?
In the data transfer section it says "Data Transfers (per region)" - what does that mean?
Can I put limits in place to stop the app if it goes over this plan, in order to avoid getting charged?
Can it be "pre-pay" instead of "bill pay"?
Would it be enough for a blog?
Any experience so far?
Kind regards.
As ligget pointed out, Azure isn't cost-effective as a host for an application that can be easily deployed to a traditional shared hosting provider. Azure's target market is those that want dedicated resources without the need to micro-manage the infrastructure, plus the capability to easily scale up/down based on demand.
That said, here's the answers to the questions you posted:
Data transfers are based on bandwidth in and out of the hosting data center. Bandwidth for communication occurring between components (SQL Azure, Windows Azure, Azure Storage, etc.) in the same datacenter is not billable.
Your usage is not currently capped when the free quotas are used up. However, you will receive warning emails when those items approach their usage thresholds.
There is the option to pay your subscription using a PO, but the minimum threshold for most of these operations is $500/month. So as a hobbyist, it's unlikely you want to go that route.
The introductory special does not provide enough resources for hosting a 24x7 personal blog. That level includes only 25 hours of compute resources, while a single instance running around the clock consumes roughly 720 hours per month. Each hour a single instance of your application is deployed counts against this, even if the application receives no traffic. Think of it like renting office space: you still pay rent on the office even if there are no customers there.
All this said, there's still much to be learned with the introductory special. The Azure development tools allow you to work with Windows Azure and Azure storage locally and get a feel for how they work. The introductory special then lets you deploy those solutions so you can see what works and what doesn't (not everything that works locally works hosted).
I would recommend you host your blog somewhere else - it's a waste of resources running it on Azure and you'll find much cheaper options. The recently introduced extra small instance would be a better choice in this case, but AFAIK it is charged separately as of now; e.g. even when you have an MSDN subscription, those extra small instance hours do not count towards the free Azure hours that come with the subscription.
There is no pre-pay option I know of and it's not possible to stop the app automatically. It'll be running until the deployment is deleted (beware! even if suspended/stopped the deployment will continue to accrue charges). I believe you will be sent a notification shortly before reaching your free hours threshold.
Be aware that when launching more than 1 instance you are charged for every hour of every instance combined. This can happen for example when you have more than one role in your Azure project (1 web role + 1 worker role - a separate instance will be started for each role).
Data transfer means your entire data transfer: blobs/table storage/queues (transfers between your hosted service and storage account inside the same data center are free) plus whatever data is transferred in/out of your hosted application, e.g. when somebody visits your pages. When you create storage accounts and hosted services in Azure you will specify a region that will host your account/app - hosting in Asia is slightly more expensive than in Europe/the U.S.
Your best bet would be to contact Microsoft with these questions.
We are experiencing a very serious unscheduled downtime of our Azure application today, now coming up to 9 hours. We reported it to Azure support, and the ops team is actively trying to fix the problem; I do not doubt that. We managed to get our application running on another "test" hosted service that we have and redirected our CNAME to point at that instance, so our customers are happy, but the "main" hosted service is still unavailable.
My own "finger in the air" instinct is that the issue is network-related within our data center (West Europe), and indeed, later in the day the service dashboard went red for that region with a message to that effect. (Our application is showing as "Healthy" in the portal, but is unreachable via our cloudapp.net URL. Additionally, threads within our application are logging SQL connection exceptions into our storage account as it cannot contact the DB.)
What is very strange, though, is that the "test" instance I referred to above is also in the same data centre, has no issues contacting the DB, and its external endpoint is fully available.
I would like to ask the community if there is anything that I could have done better to avoid this downtime. I followed the guidance about having at least 2 instances per role, yet I still got burned. Should I move to a more reliable data centre? Should I deploy my application to multiple data centres? How would I manage the fact that my SQL Azure DB is in the same datacentre?
Any constructive guidance would be appreciated - being a techie, I've never had a more frustrating day than one spent being able to do nothing to help fix the issue.
There was an outage in the European data center today with respect to SQL Azure. Some of our clients got hit and had to move to another data center.
If you are running mission-critical applications that cannot be down, I would deploy the application into multiple regions. DNS resolution is obviously a weak link right now in Azure, but it can be worked around (if you only run a website it can be done very simply using Response.Redirect or similar).
Now, there is a data synchronization service from Microsoft that will sync multiple SQL Azure databases. Check here. This way you can have mirror sites up in different regions and keep them in sync from the SQL Azure perspective.
Also, it would be a good idea to employ a 3rd-party monitoring service that detects problems with your deployed instances externally. AzureWatch can notify you, or even deploy new nodes if you choose, when some of the instances turn "Unresponsive".
Hope this helps
I can offer some guidance based on our experience:
Host your application in multiple data centers, complete with SQL Azure databases. You can connect each application to its data-center-specific SQL Server. You can also cache any external assets (images/JS/CSS) on the data-center-specific Windows Azure machine or leverage Azure Blob Storage. Note: extra costs will be incurred.
Set up one-way SQL replication between your primary SQL Azure DB and the instance in the other data center. If you want to do bi-directional replication, take a look at the MSDN site for guidance.
Leverage Azure Traffic Manager to route traffic to the data center closest to the user. It has geo-detection capabilities, which will also improve the latency of your application. So you can map http://myapp.com to the internal URL of your data center, and a user in Europe should automatically get redirected to the European data center, and vice versa for the USA. Note: at the time of writing this post, there is no way to automatically detect and fail over to a data center. Manual steps will be involved once a failure is detected, and failover is a complete set (i.e. you will fail over both the Windows Azure AND SQL Azure instances). If you want micro-level failover, then I suggest putting all your config in the service config file and encrypting the values so you can edit the connection string to connect instance X to DB Y.
You are all set now. I would create or install a local application to detect the availability of the site. A better solution would be to create a page that checks the availability of application-specific components - a diagnostic page or web service - and then poll it from a local computer; a minimal sketch of such a poller follows.
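This is only a rough sketch of that idea, run from a machine outside the data center. The diagnostic URL is a hypothetical placeholder, and the "alert" is just a print statement; you would hook in email/SMS or a failover action as appropriate.

```python
# Minimal external availability poller for a diagnostic page.
# DIAGNOSTIC_URL is a placeholder; adjust interval and alerting to taste.
import time
import urllib.request

DIAGNOSTIC_URL = "http://myapp.cloudapp.net/diagnostics"  # hypothetical endpoint
CHECK_INTERVAL_SECONDS = 60

while True:
    try:
        with urllib.request.urlopen(DIAGNOSTIC_URL, timeout=10) as response:
            status = response.status
    except Exception as exc:
        status = None
        print(f"Site unreachable: {exc}")
    if status != 200:
        print("ALERT: diagnostic check failed - consider failing over")
    else:
        print("OK")
    time.sleep(CHECK_INTERVAL_SECONDS)
```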
HTH
As you're deploying to Azure you don't have much control over how SQL Server is set up. MS has already set it up so that it is highly available.
Having said that, it seems that MS has been having some issues with SQL Azure over the last few days. We've been told that it only affected "a small number of users". At one point the service dashboard had 5 data centres affected by a problem. I had 3 databases in one of those data centres go down twice, for about an hour each time, while one database in another affected data centre had no interruption at all.
If having a database connection is critical to your app, then the only way in the Azure environment to insure against problems that MS haven't prepared for (this latest technical problem, earthquakes, meteor strikes) would be to co-locate your SQL data in another data centre. At the moment the most practical way to do this is to use the Sync Framework. There is an ability to copy SQL Azure databases, but this only works within a data centre. With your data located elsewhere you could then point your app at the new database if the main one becomes unavailable.
While this looks good on paper though, this may not have helped you with the latest problem as it did affect multiple data centres. If you'd just been making database copies on a regular basis, that might have been enough to get you through. Or not.
(I would have posted this answer on server fault, but I couldn't find the question)
This is just about a programming/architecture issue, but you may also want to ask the question on webmasters.stackexchange.com.
You need to find out the root cause before drawing any conclusions.
However, my guess is that one of two things was the problem:
The ISP connectivity differs for the test system and your production system. Either they use different ISPs, or different lines from the same ISP. When I worked at a hosting company we made sure that our IP connectivity went through at least two different ISPs who did not share fibre to our premises (and where we could, they had different physical routes to the building - the homing ability of backhoes when there's a critical piece of fibre to dig up is well proven).
Your datacentre had an issue with some shared production infrastructure. These might be edge routers, firewalls, load balancers, intrusion detection systems, traffic shapers, etc. These are typically only installed on production systems. Defences here involve understanding the architecture and making sure the provider has a (tested!) DR plan for restoring SOME service when things go pear-shaped. The neatest hack I saw here was persuading an IPS (intrusion prevention system) that its own management servers were malicious, so it couldn't be reconfigured at all.
Just a thought - your DC doesn't host any of the Wikileaks mirrors, or Paypal/Mastercard/Amazon (who are getting DDOS'd by wikileaks supporters at the moment)?