Does Microsoft provide standard device provisioning services in the Azure Marketplace that can be used directly to provision any custom device? Or does an OEM need to create a device provisioning service for their custom device, put it in the Marketplace, and then use that service from the Marketplace?
Could someone please clarify? Thanks.
MS has an auto-provisioning service, but it is not a full-fledged system. That is to be expected, as market requirements can be very broad and very much use-case specific.
https://learn.microsoft.com/en-us/azure/iot-dps/
In terms of OEM services: OEMs can probably create such services, but again, use-case and ownership questions arise.
So basically the 'basic' tools are there, but it is largely left to you how you want to deal with the situation. Thinking of integration with CI/CD and the custom software that needs to be installed on each device, some custom work around such a provisioning service is needed. At least in our case.
But this can become a nightmare when you have thousands of devices to manage.
As Van mentioned, Azure does have a device provisioning service that can be used for any custom device, and new features were announced about a month ago.
If there are gaps in the service, please suggest improvements on UserVoice! As the PM for the provisioning service, knowing the gaps means we can start to fill them.
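For reference, here is a minimal sketch of what device-side provisioning through DPS can look like with the azure-iot-device Python SDK and a symmetric-key enrollment. The ID scope, registration ID, and key are placeholders you would replace with your own enrollment details, and error handling is omitted:

```python
# Minimal sketch: register a device through the Device Provisioning Service
# using the azure-iot-device SDK. All values below are placeholders.
from azure.iot.device import ProvisioningDeviceClient, IoTHubDeviceClient, Message

PROVISIONING_HOST = "global.azure-devices-provisioning.net"  # public DPS endpoint
ID_SCOPE = "<your-dps-id-scope>"            # from the DPS overview blade
REGISTRATION_ID = "<your-registration-id>"  # matches the enrollment entry
SYMMETRIC_KEY = "<your-device-symmetric-key>"

# Ask DPS which IoT Hub this device should be assigned to.
provisioning_client = ProvisioningDeviceClient.create_from_symmetric_key(
    provisioning_host=PROVISIONING_HOST,
    registration_id=REGISTRATION_ID,
    id_scope=ID_SCOPE,
    symmetric_key=SYMMETRIC_KEY,
)
registration_result = provisioning_client.register()

if registration_result.status == "assigned":
    # DPS returns the assigned hub; connect to it with the same key.
    device_client = IoTHubDeviceClient.create_from_symmetric_key(
        symmetric_key=SYMMETRIC_KEY,
        hostname=registration_result.registration_state.assigned_hub,
        device_id=registration_result.registration_state.device_id,
    )
    device_client.connect()
    device_client.send_message(Message('{"status": "provisioned"}'))
    device_client.disconnect()
```

The same flow works for any custom or OEM device, since DPS only needs an enrollment entry (individual or group), not anything from the Marketplace.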
I am not understanding when Azure IoT Hub would be preferred over Azure IoT Central. From the reading I have done so far, IoT Central seems better in all aspects.
Can anybody explain the situations where IoT Hub is better than IoT Central?
Thanks
There is no definitive answer to that question; neither is "better", but most of the time one will fit your use case better than the other.
If you want a complete, managed way of connecting devices to the cloud and create dashboards (within the product's limits), a Software as a Service solution like Azure IoT Central can be a match. Think about the requirements of the project you're looking to do, and if it's all supported by IoT Central, go for it! If there are some features you can build by leveraging data export from IoT Central, it might still be a great fit.
If you want to build bi-directional communication and device registration for IoT devices into your own cloud platform, IoT Hub comes into play. Maybe you need tighter control of the data, or maybe the data insights you need aren't supported by IoT Central. There are a lot of cases where IoT Central might not be the best choice. IoT Hub gives you a lot more flexibility that you can use to create almost any IoT scenario.
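To give a feel for the IoT Hub route, here is a rough service-side sketch using the azure-iot-hub Python SDK: registering a device from your own backend and pushing a cloud-to-device message to it. The connection string, device ID, keys, and payload are placeholders, and error handling is omitted:

```python
# Rough sketch (service side): register a device in IoT Hub and send it a
# cloud-to-device message with the azure-iot-hub SDK. Values are placeholders.
import base64
import os

from azure.iot.hub import IoTHubRegistryManager

IOTHUB_CONNECTION_STRING = "<iothub-service-connection-string>"
DEVICE_ID = "my-custom-device-01"

registry_manager = IoTHubRegistryManager(IOTHUB_CONNECTION_STRING)

# Register the device with SAS (symmetric key) auth, supplying our own keys.
primary_key = base64.b64encode(os.urandom(32)).decode()
secondary_key = base64.b64encode(os.urandom(32)).decode()
device = registry_manager.create_device_with_sas(
    DEVICE_ID, primary_key, secondary_key, "enabled"
)
print("Registered device:", device.device_id)

# Bi-directional communication: push a cloud-to-device message to the device.
registry_manager.send_c2d_message(DEVICE_ID, '{"command": "reboot"}')
```

The point is that with IoT Hub you own this layer yourself, which is exactly the flexibility (and the extra work) the paragraph above describes.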
The two are not directly comparable; there are specific advantages of IoT Central which you may need to consider.
IoT Hub is a PaaS service which can be combined with other services to create an IoT solution, while IoT Central is an IoT application platform which can be used as-is or extended via a companion application. To cover even basic functionality yourself you would need over a dozen other services, and you own the responsibility to design, manage, and administer that orchestration.
IoT Central internally uses multiple IoT Hubs (for HA/DR) and a bunch of services to deliver the functionality you see in the application. This includes an App Service to host the UX, a rules engine, fast storage, an API layer, data export, RBAC, in-app multi-tenancy, and so on. The key advantages you get:
A full-featured IoT solution with high availability, security, and scalability, provisioned in under 10 seconds and backed by a 99.9% SLA
Simplicity: easily connect any device or simulate basic capabilities using the built-in Plug and Play support. Just select any device from the PnP catalog and try it out even before purchasing the hardware.
User- or app-level dashboards with device-specific views. Device-specific views can be auto-generated for PnP devices.
Rule creation, alerting, and integration with other applications via Logic Apps and Functions
Data export to Event Hubs, Service Bus, Blob Storage, or webhooks
A rich jobs interface for updating device configurations or firmware
RBAC in combination with organizations allows granting specific permissions to users.
The big advantage is that all of this is available with a very simple per-device, per-month price that starts as low as 8 cents per device per month (under a dollar a year) plus additional messages: https://azure.microsoft.com/en-us/pricing/details/iot-central/
In general, unless you already have the UX, storage, rules engine, and other elements required for an IoT solution and only need to add IoT Hub to ingest and manage IoT devices, it makes more sense to start with IoT Central and build with it. It saves time and effort, and you can focus on your specific differentiation rather than building the underlying plumbing and owning its management and sustenance. It is difficult to reach that price point yourself given the high cost of the cloud engineers required to support and maintain such a system.
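As a small illustration of the "use as-is or extend via a companion application" point, an IoT Central app also exposes a public REST API, so a companion app can, for example, list its devices. A rough sketch with Python's requests; the app subdomain, API token, and api-version value are placeholders/assumptions you should check against the current REST API docs:

```python
# Rough sketch: list devices in an IoT Central application via its REST API.
# Subdomain, API token, and api-version below are placeholders/assumptions.
import requests

APP_SUBDOMAIN = "<your-app-subdomain>"   # the "myapp" in myapp.azureiotcentral.com
API_TOKEN = "<iot-central-api-token>"    # created under Permissions > API tokens
API_VERSION = "2022-07-31"               # check the docs for the current version

response = requests.get(
    f"https://{APP_SUBDOMAIN}.azureiotcentral.com/api/devices",
    params={"api-version": API_VERSION},
    headers={"Authorization": API_TOKEN},
    timeout=30,
)
response.raise_for_status()

for device in response.json().get("value", []):
    print(device["id"], device.get("displayName"))
```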
It is recommended that all customers begin their IoT journey with our aPaaS offering, Azure IoT Central. IoT Central is a ready-made environment for IoT solution development. As an aPaaS offering, it is built to simplify and accelerate IoT solution assembly and operations by preassembling PaaS services from the IoT platform (including IoT Hub and the IoT Hub Device Provisioning Service) and across Azure. A customer that starts with IoT Central builds valuable expertise regardless of whether they go to production with IoT Central or later build a custom solution to meet complex business needs using PaaS services. To learn more about onboarding to Azure IoT, check out this documentation: https://aka.ms/azureiotarch and stay tuned for a session at Microsoft Ignite, Nov 3-4, entitled "Onboarding to Azure IoT".
I'm developing a basic Azure IoT remote monitoring solution with the Azure Solution Accelerator "Remote Monitoring". Now that I have started to actually pay for services instead of using a free account, the costs pile up quickly, and there seem to be a great many resources created behind the scenes. I'm wondering which resources I really need and which ones I could throw away to save money. These are the resources that I have:
App Service plan
App Service
Network interface
Network security group
Public IP address
Virtual network
Storage account
Azure Cosmos DB account
Device Provisioning Service
Event Hubs Namespace
App Service
App Service plan
IoT Hub
Key vault
Logic app
Azure Maps Account
API Connection
Disk
Storage account (2)
Stream Analytics job
Time Series Insights environment
Time Series Insights event source
Virtual machine
Cosmos DB is probably one of the more expensive resources in your list, so if you can find a way to swap in some other datastore for it, you can save some money.
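If it helps, here is a small sketch that enumerates everything the accelerator deployed into its resource group, so you can see which resource types are actually there before deciding what to cut. It assumes the azure-identity and azure-mgmt-resource packages; the subscription ID and resource group name are placeholders:

```python
# Sketch: list every resource the accelerator deployed into its resource group,
# grouped by type, to help spot the cost drivers. Values are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

SUBSCRIPTION_ID = "<subscription-id>"
RESOURCE_GROUP = "<remote-monitoring-resource-group>"

client = ResourceManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

for resource in client.resources.list_by_resource_group(RESOURCE_GROUP):
    # resource.type is e.g. "Microsoft.DocumentDB/databaseAccounts"
    print(f"{resource.type:60} {resource.name}")
```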
Take a look at Remote Monitoring architectural choices. The Azure IoT Remote Monitoring solution accelerator is an open-source, MIT licensed, solution accelerator. To help you speed up your IoT development process, it shows common IoT scenarios such as:
Device connectivity
Device management
Stream processing
The Remote Monitoring solution follows the recommended Azure IoT reference architecture.
This article describes the key architectural and technical choices made in each of the Remote Monitoring subsystems. However, the technical choices Microsoft made in the Remote Monitoring solution aren't the only way to implement a remote monitoring IoT solution. You should regard the technical implementation as a baseline for building a successful application and you should modify it to:
Fit the available skills and experience in your organization.
Meet your vertical application needs.
I have 40+ micro-services using Windows Service Bus 1.1 with lots of Queues/Topics/Subscriptions and messages, and I am going to use Azure Service Bus instead.
How can I move all the information and the farm on-premises to Azure?
I'm not sure you can "move" anything from on-premises into Azure. What you will need to do is transition your solution, and that's where it gets a bit hairy.
First, answer the question of whether you can stop your system for a massive redeployment without impacting the business. If you can (which would be rare), you're in luck: you can take the system offline and "transition" to the new topology on Azure Service Bus. But that is a highly improbable situation.
A more realistic scenario is that you cannot take the system down. The approach then is to transition gradually. The 40 microservices you've mentioned all operate on the same WSSB. You could attempt to move them onto Azure Service Bus one by one, but then the other services need to know how to communicate over both ASB and WSSB. One option is a middleware layer that knows how to send and receive to/from both WSSB and ASB until you can disable WSSB completely. The devil is in the details, which for obvious reasons cannot all be covered here.
And there are also complications such as in-flight messages and messages scheduled for future delivery. Those need to be accounted for. I would recommend turning to Microsoft support for some pointers, but be aware that the product is already out of support, so technically they are not obliged to provide any assistance.
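To make the ASB half of such a dual-bus bridge concrete, here is a rough sketch using the azure-servicebus Python SDK: sending, receiving/completing, and scheduling a message for future delivery (the kind of in-flight state you would have to re-create when cutting over). The connection string and queue name are placeholders:

```python
# Rough sketch of the Azure Service Bus side of a dual-bus bridge:
# send, receive/complete, and schedule a message for future delivery.
# Connection string and queue name are placeholders.
import datetime
from azure.servicebus import ServiceBusClient, ServiceBusMessage

CONNECTION_STR = "<azure-service-bus-connection-string>"
QUEUE_NAME = "orders"

with ServiceBusClient.from_connection_string(CONNECTION_STR) as client:
    with client.get_queue_sender(QUEUE_NAME) as sender:
        # Forward a message that originally arrived on the on-premises WSSB.
        sender.send_messages(ServiceBusMessage('{"orderId": 42}'))

        # Re-create a "message scheduled for the future" on the new bus.
        deliver_at = datetime.datetime.now(datetime.timezone.utc) + datetime.timedelta(hours=1)
        sender.schedule_messages(ServiceBusMessage('{"reminder": true}'), deliver_at)

    # Consume from ASB and hand off to services still listening on WSSB.
    with client.get_queue_receiver(QUEUE_NAME, max_wait_time=5) as receiver:
        for msg in receiver:
            print("received:", str(msg))
            receiver.complete_message(msg)
```

The WSSB side would use the on-premises Service Bus client in the same shuttle pattern; the bridge simply forwards in both directions until nothing is left on WSSB.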
I am having a hard time understanding Windows Azure service bus and access control concepts. In layman's terms, what are they? What are they used for?
The Service Bus component of Windows Azure is meant to handle the problems arising from services that are living in multiple networks. Basically, a service bus just makes it appear as if your code is running on a single machine, while in reality it could be running anywhere within the Azure datacenters.
Access Control lets you use "federated authentication for your service based on a claims-based RESTful model" (sorry, copy and paste from an O'Reilly book about Azure!).
Basically, when you create an Azure site, application, or service, it could be running on any of the thousands of systems within the datacenter. Each of those systems has its own IP address, its own network, memory, processor, and more. To let them collaborate and appear as a single system, these two services were created.
If you want to learn more about Azure, this would be a good moment to buy a book! :-)
Azure is quite complex, and Service Bus and Access Control are somewhat more advanced topics.
Service Bus is a solution for integration between multiple applications, whether they are hosted on the same infrastructure or spread across multiple infrastructures and/or cloud providers. If you search the internet you will find a lot about EAI (enterprise application integration); here is my blog post about the topic:
http://hhaggan.wordpress.com/2013/03/07/introduction-to-enterprise-application-integration-eai/
and here is another that I hope helps you better understand what a service bus is:
http://hhaggan.wordpress.com/2013/03/09/introducing-service-bus/
In other words, it is a messaging platform that lets multiple applications, pieces of software, or services communicate, no matter what programming language they are written in or which OS or platform they are hosted on. You will especially feel its value when you connect many nodes together; I don't mean 5 or 6 nodes, but 10 and above.
There are several flavors of Service Bus, based on either the relayed messaging service or the brokered messaging service; each has its own uses, purpose, and way of working.
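To illustrate the brokered style, here is a small sketch using the azure-servicebus Python SDK with a topic and a subscription: one application publishes, and each subscription receives its own copy of the message. The connection string, topic, and subscription names are placeholders:

```python
# Sketch of brokered messaging with a topic and subscription: one application
# publishes, any number of subscribers consume independently.
# Connection string, topic, and subscription names are placeholders.
from azure.servicebus import ServiceBusClient, ServiceBusMessage

CONNECTION_STR = "<azure-service-bus-connection-string>"
TOPIC_NAME = "telemetry"
SUBSCRIPTION_NAME = "billing-service"

with ServiceBusClient.from_connection_string(CONNECTION_STR) as client:
    # Publisher: drop a message on the topic and move on.
    with client.get_topic_sender(TOPIC_NAME) as sender:
        sender.send_messages(ServiceBusMessage('{"deviceId": "pump-7", "kwh": 1.3}'))

    # Subscriber: each subscription gets its own copy of the message.
    with client.get_subscription_receiver(TOPIC_NAME, SUBSCRIPTION_NAME, max_wait_time=5) as receiver:
        for msg in receiver:
            print("billing-service saw:", str(msg))
            receiver.complete_message(msg)
```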
As for Access Control, this one is easy: it is a way of handling authentication and authorization for your application using third parties. It is claims-based identity, so you can perform the required authentication against a third party's user store and won't need to build everything from scratch in your own database. This helps a lot during development, and I believe it can also help with social media marketing and branding, because Facebook and Twitter can be used during authentication.
If I were to use a separate Windows Server that was PCI-DSS compliant, would I still be compliant if I had SQL Azure hosting the backend? This is assuming that I'm compliant at the application layer and that I'm only storing permitted values (e.g., no CVV), etc.
AWS is now PCI DSS 2.0 Level 1 compliant, so the assumption that Level 1 is not achievable by a cloud vendor is no longer correct:
http://aws.amazon.com/security/pci-dss-level-1-compliance-faqs/
In addition, Rackspace has also achieved PCI Level 1 compliance:
http://www.rackspace.co.uk/rackspace-home/media-centre/news/article/article/rackspace-enhances-security-with-pci-accreditation/
It is true that Microsoft has not yet achieved PCI compliance for Windows Azure.
It is likely that they are actively working on addressing any limitations in Windows Azure so that they will also be able to provide this service to their customers and remain competitive, but as of today they have not yet achieved PCI compliance.
Microsoft writes in the Azure FAQ:
At commercial launch, Windows Azure will not have specific audit or security certifications. You can expect to see us pursue key certifications, such as the ISO27001, in the near future. The Windows Azure Platform and Windows Azure apply the rigorous security practices incorporated in the Security Development Lifecycle (SDL) process. SDL introduces security and privacy early and throughout the development process. The Windows Azure Platform and Windows Azure also benefit from the security capabilities afforded by the Microsoft Global Foundation Services’ (GFS) infrastructure. The GFS assurances are validated by external auditors on a regular basis and include a comprehensive security program that covers the entire delivery stack.
Microsoft makes no claim regarding PCI standards for 3rd-party hosting. There are ways to develop cloud-based applications that use 3rd-party PCI data processors, which may keep the cloud application itself out of scope.
http://www.microsoft.com/windowsazure/faq/default.aspx
choose "Licensing and Service Level Agreements" in the drop down
then find the last paragraph "What industry audit and security certifications cover the Windows Azure Platform? Specifically, call out position on SAS70, ISO 27001, and PCI?"
I'm not sure of the PCI-DSS compliance status of Azure, but I will note that Azure and EC2/S3 are not the same animals. Azure is a completely hosted infrastructure which exposes services and endpoints so that application writers can sit on a fully managed and monitored platform (including the typical security constructs in place for the on-premises Server product) and extend these services to the resident applications.
Considering the amount of time that Microsoft has spent with the PCI folks (from Vista on), I would be highly surprised if a PCI-DSS compliant application didn't maintain its level of certification when extended to Windows Azure.
Hope this helps. The purpose wasn't to bash EC2/S3; it was more to fill in the blanks on Azure.
Mr. Helper :-)
Just an update on this question.
As it stands currently, Windows Azure is indeed PCI DSS Level 1 compliant. See the following Windows Azure Trust Centre article for more information:
Windows Azure Trust Center - Compliance
With PCI DSS it is important to remember that it is not just about storing, it's "store, process, or transmit." If any of this happens in or through the cloud then the cloud becomes part of your cardholder data environment, thus in scope for PCI compliance. Since it's a cloud that you don't control, there would be no way to verify compliance.
No verification, no compliance. Sorry.
Looks like AWS and Rackspace have both achieved some level of compliance (http://aws.amazon.com/security/pci-dss-level-1-compliance-faqs/, http://www.rackspace.co.uk/rackspace-home/media-centre/news/article/article/rackspace-enhances-security-with-pci-accreditation/), but Global Foundation Services (the infrastructure behind Microsoft Windows/SQL Azure, CDN, etc.) has not (http://www.globalfoundationservices.com/security/). I would not be surprised to see GFS achieve some accreditation in the near future, however.
Amazon announced PCI DSS Level 1 compliance on Dec 07, 2010. My answer below is now incorrect.
See http://www.mckeay.net/2009/08/14/cannot-achieve-pci-compliance-with-amazon-ec2s3/. Amazon says you can't achieve PCI-DSS Level 1 compliance on their infrastructure. The important lines are:
"It is possible for you to build a PCI level 2 compliant app in our AWS cloud using EC2 and S3, but you cannot achieve level 1 compliance. If you have a data breach, you automatically need to become level 1 compliant which requires on-site auditing; that is something we cannot extend to our customers."
I haven't read Azure's documentation, but I am pretty sure they don't allow on-site auditing. Given that, the same conclusions would apply to Microsoft Azure as well.