Which is faster: AppFabric Cache or Memcached? - azure

We use MS AppFabric Cache to store the central session state, but we want to know how fast this store is. Has anyone instead set up a virtual machine running Memcached and compared the two?
Thanks a lot!

I'm not sure I have seen any benchmarks on this, so it's a hard one to answer, but the new Windows Azure Caching service released on June 7th addresses a lot of the performance issues of the distributed, multi-tenanted Azure Caching service.
Below are extracts from Haishi's blog on Azure Caching; note that Windows Azure Caching now supports Memcached interoperability too...
"The cluster utilizes free memory spaces on your Cloud Service host machines, and the cluster service process can either share the host machines of your web/worker roles, or run on dedicated virtual machines when you use dedicated caching worker roles"
http://haishibai.blogspot.com.au/2012/06/windows-azure-caching-service-memcached.html
Fast performance. The cluster either collocates with your roles, or runs in close proximity based on affinity groups. This helps to reduce latency to minimum. In addition, you can also enable local cache on your web/worker roles so that they get fastest in-memory accesses to cached items as well as notification support.
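There don't seem to be published head-to-head numbers, so the most reliable answer is to run the same microbenchmark against both stores from your own roles. Since the new caching service also speaks the memcached protocol, one sketch can cover either endpoint. A minimal example in Python (the endpoint address is a placeholder; assumes pymemcache is installed):

```python
# Rough GET latency microbenchmark against any Memcached-compatible endpoint
# (e.g. a cache role exposing the memcached protocol).
# Requires: pip install pymemcache
import time
from pymemcache.client.base import Client

# Hypothetical endpoint -- replace with your cache's address and port.
client = Client(("mycache.cloudapp.net", 11211))

client.set("bench-key", b"x" * 1024)  # 1 KB payload

N = 1000
start = time.perf_counter()
for _ in range(N):
    client.get("bench-key")
elapsed = time.perf_counter() - start

print(f"avg GET latency: {elapsed / N * 1000:.2f} ms over {N} calls")
```

Run it from a web/worker role in the same region or affinity group so you measure datacenter-internal latency rather than your own internet connection.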

Related

Trying to find out Azure latency between an on-premises client and an Azure cloud application

I am trying to accomplish the task below.
What I am doing:
All my users are on premises.
The application is hosted on an Azure VM (IaaS).
Question =>
The Azure cloud application talks to the Internet, downloads huge packages, and shares them with the clients, which are on-premises. So I am trying to understand the risk and latency matrix between on-premises users and the Azure cloud application.
If anyone has done this sort of thing and encountered latency issues, what were the possible fixes?
Note => I can't migrate the users to the Azure cloud as of now.
To address latency issues, please try the following:
To reduce the latency between an on-premises client and an Azure cloud application, make use of Azure HPC Cache.
Azure HPC Cache reduces latency for applications where data may be tethered to existing infrastructure because of dataset sizes and operational scale.
Azure HPC Cache automatically caches active data that is present both on-premises and in Azure.
You can make use of Accelerated Networking so that communication is faster.
Try eliminating network congestion.
Try reducing the number of network nodes that traffic must traverse from one stage to another.
Make use of Azure ExpressRoute and Azure Analysis Services to reduce network latency.
Azure ExpressRoute creates a private connection between on-premises sources and Azure.
Azure Analysis Services avoids the need for an on-premises data gateway and generally eliminates network latency.
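Before picking a fix, it's worth measuring the round trip you actually have between the on-premises clients and the Azure VM. A quick sketch using only the Python standard library (host name and port are placeholders for your application endpoint):

```python
# Minimal round-trip latency check from an on-premises client to an
# Azure-hosted endpoint. Host/port are hypothetical -- substitute your VM's.
import socket
import statistics
import time

HOST, PORT = "myapp.westeurope.cloudapp.azure.com", 443
samples = []

for _ in range(20):
    start = time.perf_counter()
    with socket.create_connection((HOST, PORT), timeout=5):
        pass  # time the TCP handshake only
    samples.append((time.perf_counter() - start) * 1000)

print(f"median: {statistics.median(samples):.1f} ms, "
      f"max: {max(samples):.1f} ms over {len(samples)} connects")
```

Run it from a representative on-premises machine at different times of day; the spread between the median and the maximum is often more telling than the average.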
For more detail, please refer to the links below:
https://azure.microsoft.com/en-us/blog/azure-hpc-cache-reducing-latency-between-azure-and-on-premises-storage/
https://blogit.create.pt/gustavobrito/2017/11/27/latency-test-between-azure-and-on-premises-intro/
https://viniciusdeschamps.com.br/3-ways-to-reduce-network-latency-in-azure/#how-can-I-measure-network-latency

What is an Azure role?

I'm reading this article on distributed caching in Azure. Being new to Azure I'm trying to understand what they mean when they use the term "role" in the following context:
In-Role Cache: You can deploy an in-role cache on a co-located or dedicated role in Azure. Co-located means your application is also running on that VM and dedicated means it's running only the cache. Although a good distributed cache provides elasticity and high availability, there's overhead associated with adding or removing cache servers from the cache cluster. Your preference should be to have a stable cache cluster. You should add or remove cache servers only when you want to scale or reduce your cache capacity or when you have a cache server down.
The in-role cache is more volatile than other deployment options because Azure can easily start and stop roles. In a co-located role, the cache is also sharing CPU and memory resources with your applications. For one or two instances, it's OK to use this deployment option. It's not suitable for larger deployments, though, because of the negative performance impact.
You can also consider using a dedicated in-role cache. Bear in mind this cache is deployed as part of your cloud service and is only visible within that service. You can't share this cache across multiple apps. Also, the cache runs only as long as your service is running. So, if you need to have the cache running even when you stop your application, don't use this option.
Microsoft Azure Cache and NCache for Azure both offer the in-role deployment option. You can make Memcached run this configuration with some tweaking, but you lose data if a role is recycled because Memcached doesn't replicate data.
They talk about In-Role cache, cache service, cache VMs and multi-region cache VMs.
I understand cache services to be "server-less", meaning you don't manage the server or cluster (Azure does all of that), as opposed to cache VMs, where you handle deployment of the server and the cache solution on that server.
How does In-Role cache differ, and what is a "role"? I usually think of role as the definition of how a user participates in a given system, and it establishes the capabilities or permissions that members of that role would need within the system to fulfill their duties. This seems different than that.
It's legacy. In the past, there were Azure In-Role Cache and the Azure Managed Cache Service. The recommendation is to use Azure Redis Cache now:
https://azure.microsoft.com/en-us/blog/azure-managed-cache-and-in-role-cache-services-to-be-retired-on-11-30-2016/
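For completeness, connecting to Azure Redis Cache from code is just a normal Redis client pointed at the cache's SSL endpoint. A minimal sketch with redis-py (the cache name, key, and TTL are placeholder values for illustration):

```python
# Connecting to Azure Redis Cache (the recommended replacement) from Python.
# Requires: pip install redis
import redis

r = redis.StrictRedis(
    host="mycache.redis.cache.windows.net",  # hypothetical cache name
    port=6380,                # SSL port exposed by Azure Redis Cache
    password="<access-key>",  # primary/secondary key from the portal
    ssl=True,
)

# Store a session-like value with a 20-minute expiry, then read it back.
r.set("session:42", "some serialized session state", ex=1200)
print(r.get("session:42"))
```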

Azure IO transfer

We have a VM set up to run SQL Server in Azure. We are seeing disk writes of about 0.6 MBps with WRITE THROUGH during testing. We have tried numerous different things, from changing Azure VM types (D series, L series, etc.) to creating different RAID-based disks. Is there a limit in Azure such that non-cached disks can only reach a certain rate, rather than the advertised 500 MBps? Any help to improve the WRITE THROUGH rate?
In order to prevent blocking issues and improve IO performance, we need to:
1. Prevent VM-level throttling at all costs.
2. Prevent disk-level throttling if the application has a dependent blocking issue due to the software design. Adding more disks to create a storage pool may help.
For more information about Azure VM storage performance and throttling, refer to:
Azure VM Storage Performance and Throttling
Since you are using SQL Server, here is an article about how to optimize SQL Server performance in a Microsoft Azure Virtual Machine: Performance best practices for SQL Server in Azure Virtual Machines
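It can also help to separate what SQL Server reports from what the disk itself can sustain. A rough sketch that measures sequential write-through throughput by fsync-ing each block (file path and sizes are placeholders; a dedicated tool such as diskspd will give more rigorous numbers):

```python
# Rough sequential write-through throughput test on a data disk.
# flush() + fsync() forces each block to storage, approximating WRITE THROUGH.
import os
import time

PATH = "/datadisk/writetest.bin"   # hypothetical path, e.g. F:\\writetest.bin on Windows
BLOCK = b"\0" * (1024 * 1024)      # 1 MiB blocks
BLOCKS = 256                       # 256 MiB total

start = time.perf_counter()
with open(PATH, "wb") as f:
    for _ in range(BLOCKS):
        f.write(BLOCK)
        f.flush()
        os.fsync(f.fileno())       # don't let the OS cache absorb the write
elapsed = time.perf_counter() - start

print(f"write-through rate: {BLOCKS / elapsed:.1f} MBps")
os.remove(PATH)
```

If this simple test hits the same ceiling, the limit is at the VM/disk throttling level rather than in SQL Server's configuration.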

How to create a Windows virtual machine with 16GB of RAM

I am totally new to cloud services, and using Windows Azure, I need a web server and a database server, each with 16gb of RAM. However, the extra large windows virtual machines only have 14gb of RAM. How would I go about adding 2gb of RAM to each of these servers, or do I need to do something else, such as incorporate a SQL database? I don't need to know the specifics of installation, all I need to know right now is what needs to be paid for, as I am just trying to figure out the price for everything. Thank you.
The Extra Large (XL) VM size provides 14GB available RAM. This applies to both Virtual Machines (IaaS) and web/worker roles (PaaS). There are no other VM sizes that provide more RAM than that. There's nothing you can do to add 2 extra GB.
UPDATE April 16, 2013: There are now two new sizes: 28GB/4-core and 56GB-8-core, available to Virtual Machines (not for Cloud Services e.g. web & worker roles). Announcement here. There's also a new SharePoint template in the Virtual Machine image gallery (since you mentioned using SharePoint) as well as a SQL Server template.
UPDATE APRIL 30, 2013: The new 28GB/56GB sizes are now available with Cloud Services, coincident with the release of Azure SDK 2.0 for .NET. Details here.
Just to add a bit regarding web servers: unlike on-premises servers, where it's typically economical to buy the largest machine possible, in Windows Azure it's better to go with smaller VMs and have more of them. So, for a web server, go with the smallest VM size that will still run your software. Then, to handle additional traffic, scale to more web instances. As traffic ebbs, reduce the instance count. Load will be distributed amongst all of the web servers (which are stateless - no user affinity to a specific instance).
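Since the available sizes change over time, the safest approach today is to query the size list for your target region rather than rely on a fixed table. A small sketch with the Azure SDK for Python (the subscription ID and region are placeholders):

```python
# List VM sizes in a region offering at least 16 GB of RAM.
# Requires: pip install azure-identity azure-mgmt-compute
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

client = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")

for size in client.virtual_machine_sizes.list(location="westeurope"):
    if size.memory_in_mb >= 16 * 1024:
        print(f"{size.name}: {size.memory_in_mb // 1024} GB RAM, "
              f"{size.number_of_cores} cores")
```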

Is it possible to deploy an application using a Cassandra database on Windows Azure?

I recently got a trial version of Windows Azure and wanted to know if there is any way I can deploy an application using Cassandra.
I can't speak specifically to Cassandra working or not in Azure, unfortunately. That's likely a question for that product's development team.
But the challenge you'll face with this, MySQL, or any other role-hosted database is persistence. Azure roles are in and of themselves not persistent, so whatever back-end store Cassandra is using would need to be placed onto something like an Azure Drive (which is persisted to Azure Blob Storage). However, this would limit the scalability of the solution.
Basically, you run Cassandra as a worker role in Azure. Then, you can mount an Azure drive when a worker starts up and unmount when it shuts down.
This provides some insight re: how to use Cassandra on Azure: http://things.smarx.com/#Run Cassandra
Some help w/ Azure drives: http://azurescope.cloudapp.net/CodeSamples/cs/792ce345-256b-4230-a62f-903f79c63a67/
This should not limit your scalability at all. Just spin up another Cassandra instance whenever processing throughput or contiguous storage become an issue.
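Once a Cassandra node is up in a worker role or VM and its port is open, connecting from application code is straightforward. A sketch with the Python driver over the CQL native protocol (the host name and keyspace are made up for illustration):

```python
# Connecting to a Cassandra node running in an Azure worker role / VM.
# Requires: pip install cassandra-driver
from cassandra.cluster import Cluster

cluster = Cluster(["mycassandra.cloudapp.net"], port=9042)  # hypothetical host
session = cluster.connect()

session.execute("""
    CREATE KEYSPACE IF NOT EXISTS demo
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}
""")
session.set_keyspace("demo")
session.execute("CREATE TABLE IF NOT EXISTS kv (k text PRIMARY KEY, v text)")

# Write one row and read it back.
session.execute("INSERT INTO kv (k, v) VALUES (%s, %s)", ("hello", "azure"))
print(session.execute("SELECT v FROM kv WHERE k = %s", ("hello",)).one())

cluster.shutdown()
```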
You might want to check out AppHarbor. AppHarbor is a .NET PaaS built on top of Amazon. It gives users the portability and infrastructure of Amazon, and they provide a number of the rich services that Azure offers, such as background tasks and load balancing, plus some that it doesn't, like 3rd-party add-ons, dead-simple deployment, and more. They already have add-ons for CouchDB, MongoDB and Redis; if Cassandra got high enough on the requested-features list, I'm sure they could set it up.
