I have a Service Fabric instance that resides in the German cloud. Since OMS is not available in the German cloud, I need to use an OMS instance from West Europe. Because of this, I cannot use the Service Fabric OMS connector solution.
Any information on what I need to enable in order to connect these two services across subscriptions?
You should be able to do this using EventFlow.
It doesn't rely on an agent being present to upload to a Storage Account; it sends events to OMS directly.
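For reference, EventFlow is configured through an eventFlowConfig.json packaged with your service. A minimal sketch that forwards EventSource events straight to an OMS (Log Analytics) workspace could look like this; the provider name and workspace values below are placeholders you'd replace with your own:

```json
{
  "inputs": [
    {
      "type": "EventSource",
      "sources": [
        { "providerName": "MyCompany-MyService" }
      ]
    }
  ],
  "outputs": [
    {
      "type": "OmsOutput",
      "workspaceId": "<your West Europe workspace ID>",
      "workspaceKey": "<your workspace key>"
    }
  ],
  "schemaVersion": "2016-08-11"
}
```

Because the OmsOutput posts over HTTPS with the workspace ID and key, it doesn't matter that the workspace lives in a different cloud region than the cluster.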
May I ask what security protocol (HTTPS, TCP/IP, etc.) is applied in the following scenarios in Azure? I need these details to write my design document.
Between Azure Services
Azure Data Factory interacting with Azure Storage
Azure Databricks interacting with Azure Storage
Azure Python SDK connecting to a Storage Account (is it TCP/IP?)
If there is any support page in MS Azure, please direct me there.
Inside Azure data centers, TLS/SSL is used for communication between services; you can read about it in the "Encryption of data in transit" section on this page.
The main SDK implementations are wrappers around the REST API, and the Python SDK is one of them.
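To illustrate the point: a storage call made through the Python SDK ultimately becomes an HTTPS request against the account's REST endpoint. A stdlib-only sketch of what such a request looks like (the account and container names are made-up placeholders, and the request is only constructed, never sent):

```python
import urllib.request

# Hypothetical account/container names, for illustration only.
account = "mystorageaccount"
container = "mycontainer"

# The Blob service REST endpoint the SDK talks to. Note the https scheme --
# this is where the TLS/SSL protection of data in transit comes from.
url = f"https://{account}.blob.core.windows.net/{container}?restype=container&comp=list"

req = urllib.request.Request(url, headers={"x-ms-version": "2017-04-17"})

# We only build the request here; actually sending it would also require an
# Authorization header (shared-key signature) or a SAS token.
print(req.get_full_url().startswith("https://"))  # True -> the transport is TLS
```

So whether you use Data Factory, Databricks, or the Python SDK, the wire protocol against Storage is the same HTTPS-based REST API.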
I am building an application which uses the following Azure Services.
Azure App Service : To Host the FrontEnd and BackEnd Web Apps
Azure SQL Database : To store the structured data
Azure Cosmos DB : To store the JSON Data
Azure Storage : To store the images, files, videos as blobs
All these services will run in the Central India region. Will I be able to use Azure Search over these services, i.e. SQL DB, Cosmos DB (MongoDB API), and Storage (Blobs and Files)? During a bootcamp last week, an MVP said that the Azure Search feature works only in the West US region.
Thanks,
Manoj Kumar
Azure Search is generally available in Central India and many other regions (I am using it in West Europe).
Have a look here for more information on Azure service availability.
Based on https://azure.microsoft.com/en-in/global-infrastructure/services/?products=search&regions=central-india,south-india,west-india, it seems Azure Search is available in Central India; however, when I tried to create a Search service in that region using the portal, I was not able to do so.
Having said that, Azure Search should still work, as it uses an HTTP-based REST API. Depending on how you end up using Azure Search, you may see some latency based on where your users are located relative to the service.
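Since Azure Search is exposed as a REST API, querying it from an app in another region is just an HTTPS request to the service's endpoint. A stdlib-only sketch of how such a query request is shaped (the service name, index name, key, and API version are placeholders; the request is built but not sent):

```python
import urllib.parse
import urllib.request

# Placeholder values -- substitute your own service, index, and query key.
service = "my-search-service"
index = "hotels"
api_key = "<query key>"

params = urllib.parse.urlencode({
    "api-version": "2016-09-01",  # an API version current at the time of writing
    "search": "budget",
    "$top": "5",
})
url = f"https://{service}.search.windows.net/indexes/{index}/docs?{params}"

# Azure Search authenticates REST calls with an api-key header.
req = urllib.request.Request(url, headers={"api-key": api_key})

# The call is region-agnostic from the caller's side: the same HTTPS request
# works whether your app runs in Central India or elsewhere; only latency differs.
print(req.get_full_url())
```

This is why the data sources being in Central India doesn't block you: the indexers and query traffic go over HTTPS either way, and region mainly affects latency and data-residency considerations.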
I have a site deployed on Azure. I am using Cloud Services, Storage, and SQL Database.
I want to have High Availability and Disaster Recovery for our Azure Website.
My question is: how can we provide this on Azure? Is it already managed by Azure, or do we need to use additional Azure services for it?
Thanks in Advance
Well, I don't think DR is needed, since everything you use is a PaaS service; if you trust Azure, it will handle everything for you. If you don't, well, nothing will help you ;)
So, in my opinion, the best way to achieve what you are looking for is to use the built-in HA for Cloud Services (increase the instance count), while Storage and Azure SQL are HA by design.
If you really want DR, you can implement Traffic Manager with an extra copy of your Cloud Service in another Azure region, and implement Storage replication and Azure SQL replication.
I won't give links to the documentation, as all of these can be found in under five minutes with any search engine.
I have some VMs running on Azure. I'd like to redirect logs from them (Windows Event Logs and MS SQL Server logs) to a specific log concentrator (like Graylog). For Windows logs, I'm using NXLog (https://nxlog.co/docs/nxlog-ce/nxlog-reference-manual.html#quickstart_windows). However, for PaaS offerings such as SQL Server (and PaaS services in general), NXLog does not apply.
Is there a way to redirect logs (VMs and PaaS) just using Azure (web) tools?
Most services keep their logs in a Storage Account so you can tap into that source and forward logs to your own centralized log database. You generally define the storage account at the place you enable diagnostics for the service.
I don't know what kind of logs you are looking for in SQL DB, but the audit logs, for example, are saved in a storage account.
Azure Operations Management Suite (OMS) can ingest from dozens of services as well as custom logs. As itaysk mentioned, most services in Azure write service related diagnostic information to a storage account. It's really easy to ingest these from within OMS.
https://azure.microsoft.com/en-us/services/log-analytics/
For Azure Web Sites, you can use Application Insights and store custom metrics as well. There's also an option to continuously write these metrics to a storage account.
Here's a similar option for Azure SQL:
https://azure.microsoft.com/en-us/documentation/articles/sql-database-auditing-get-started/
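Since most of these diagnostic and audit logs land as blobs, a centralizing job typically lists the container and pulls down new blobs over the Blob service REST API, then forwards their contents to Graylog or OMS. A stdlib-only sketch of parsing a List Blobs response (the sample XML below is a hand-made stand-in for what the service returns; a real call would GET https://<account>.blob.core.windows.net/<container>?restype=container&comp=list with authentication):

```python
import xml.etree.ElementTree as ET

def blob_names(list_blobs_xml: str) -> list:
    """Extract blob names from a Blob service List Blobs XML response."""
    root = ET.fromstring(list_blobs_xml)
    return [name.text for name in root.findall("./Blobs/Blob/Name")]

# Hand-made sample response, mimicking the shape of a listing for a
# hypothetical SQL audit-log container.
sample = """<?xml version="1.0" encoding="utf-8"?>
<EnumerationResults ServiceEndpoint="https://myaccount.blob.core.windows.net/" ContainerName="sqldbauditlogs">
  <Blobs>
    <Blob><Name>server1/db1/2016-05-01/log_00.xel</Name></Blob>
    <Blob><Name>server1/db1/2016-05-01/log_01.xel</Name></Blob>
  </Blobs>
</EnumerationResults>"""

# A forwarder would iterate these names, GET each blob, and push its
# contents to the log concentrator.
print(blob_names(sample))
```

This kind of polling loop is essentially what OMS and other collectors do for you when you point them at the diagnostics storage account.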
I deployed a WorkerRole to an Azure Cloud Service (classic) in the new portal. Along with this, I also created an Azure Storage account for a queue.
When I try to add an AutoScale rule, the storage account is not listed. I tried selecting Other Resource and entering the Resource Identifier of the storage account, but no Metric name is listed.
Is it by design that a classic Cloud Service and a new Storage account don't work together?
Storage account data (e.g. blobs, queues, containers, tables) are accessible simply with account name + key. Any app can work with them.
However, to manage/enumerate available storage accounts, there are Classic-created and ARM-created accounts, each with different APIs.
The original Azure Service Management (ASM) API doesn't know anything about ARM resources. There's a fairly good chance that, since you're deploying to a Classic cloud service, it's using ASM only and will not be able to enumerate ARM-created storage accounts.
If you create a Classic storage account (which has zero difference in functionality), you should be able to see it as an option for auto-scale.
I have a bit more details on the differences in this answer.
At this time, it is not possible to autoscale anything based on a new "v2" storage account. It has nothing to do with the fact that you are using the classic Azure Cloud Service. I am having the same issue with using Azure App Services. In the end, I just created a classic storage account to use for the autoscaling. There is no difference in how you interact with the different types of storage accounts.