I am modeling an automated storage and retrieval system.
I need a resource that can move horizontally and vertically at the same time (i.e., diagonally) to store and retrieve agents in the Rack.
I found this model in AnyLogic Cloud and I am interested to know what kind of resource has been used:
Model in AnyLogic Cloud
I am using a Forklift now, but it cannot move vertically, and it takes more time to store and retrieve agents in the rack.
I wonder if it's a good idea to use an Azure Cosmos DB "container" to manage an entity's status. For example, an employee's reimbursement can have different statuses like submitted, approved, paid, etc. Do you see any problem with creating a separate container for "Submitted", "Approved", etc.? They would contain similar reimbursement objects, but with slightly different data points depending on their status. For example, the Submitted container could have the manager's name as the approver, while the Paid container could have the payment method.
In other words, it's like a persistent queue: an item gets moved out of one container and into the next as it progresses through the workflow.
Are there any concerns with this approach? Does Azure's "provisioned throughput" pricing model charge by the container, meaning the more containers you have, the more expensive it gets? Or is it at the database level, so that I can have as many containers as I want and I'm only charged for the queries?
Sorry for the newbie questions, learning about Cosmos. Thanks for any advice!
It's a two-part question :).
The first part (single container vs. multiple containers) is basically an opinion-based question. I would have opted for a single-container approach, as it would have given me just one place to look for the current status of an item. But that's just my opinion :).
Regarding your question about the pricing model, Azure Cosmos DB offers you both.
You can provision throughput at the container level as well as at the database level. When you provision throughput at the database level, all containers (up to 25) in that database share the database's throughput.
You can even mix and match the two approaches, i.e. you can provision throughput at the database level and have some containers share the database's throughput while other containers have their own dedicated throughput.
Please note that once a container is created with either shared (database-level) or dedicated (container-level) throughput, that choice can't be changed; you will have to delete the container and create a new one with the changed throughput type.
You can learn more about throughput here: https://learn.microsoft.com/en-us/azure/cosmos-db/set-throughput.
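For illustration, here is a minimal sketch of the two provisioning options using the azure-cosmos Python SDK; the account URI, key, and the database/container names are placeholders, not anything your model requires:

```python
from azure.cosmos import CosmosClient, PartitionKey

client = CosmosClient("<account-uri>", credential="<account-key>")

# Database-level (shared) throughput: containers created without their own
# throughput setting share these 400 RU/s (up to 25 containers).
db = client.create_database_if_not_exists(id="reimbursements", offer_throughput=400)
shared = db.create_container_if_not_exists(
    id="items", partition_key=PartitionKey(path="/employeeId"))

# Container-level (dedicated) throughput: this container gets its own 400 RU/s
# in addition to whatever the database-level setting provides.
dedicated = db.create_container_if_not_exists(
    id="items-dedicated", partition_key=PartitionKey(path="/employeeId"),
    offer_throughput=400)
```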
I want to create a wayfinding app for a specific building using ARCore.
Because the Google Cloud Anchors service has a 24-hour limit, I thought the Azure Spatial Anchors service might do the job.
But my location is in Eastern Europe, and according to the docs, Eastern Europe is not yet supported.
Has anyone tried these services from my location?
I'd try using either West Europe or North Europe; they're the closest regions to Eastern Europe that have Azure Spatial Anchors. (You can get a sense of the Azure network's RTT to various points on the globe from the Azure network round-trip latency statistics page. Those figures are measured between Azure data centers, so they don't take into account things like the app user's cell provider/ISP.)
Also, take a look at the effective anchor experience guidelines for some recommendations about building your UX. For example, design the locating experience assuming the user will spend a few seconds doing it and may need to create a new anchor if an existing one cannot be found. Also, look for opportunities to create a watcher while something else is happening in your app, so that you can overlap multiple operations the user needs to wait for.
For example, in one of our apps, we start loading 3D assets and create a watcher at the same time. When the assets are done loading, we switch to an AR view, and, often, the anchor has also been located by the time the assets have loaded. If not, the UX can handle that case too.
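The overlapping pattern itself can be sketched in plain Python with asyncio; `load_3d_assets` and `locate_cloud_anchor` are hypothetical stand-ins for the real SDK calls, not part of the Spatial Anchors API:

```python
import asyncio

async def load_3d_assets() -> None:
    await asyncio.sleep(2)   # placeholder for downloading/parsing 3D models

async def locate_cloud_anchor(anchor_id: str) -> bool:
    await asyncio.sleep(1)   # placeholder for a watcher finding the saved anchor
    return True

async def enter_ar_view(anchor_id: str) -> None:
    # Start both long-running operations at once, so the user waits for the
    # slower of the two instead of the sum of both.
    assets = asyncio.create_task(load_3d_assets())
    anchor = asyncio.create_task(locate_cloud_anchor(anchor_id))

    await assets                 # switch to the AR view once assets are ready
    if not await anchor:         # often the anchor is already located by now
        print("Anchor not found: fall back to the re-locate / re-create UX")

asyncio.run(enter_ar_view("building-entrance"))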
I have a single Azure SQL server and a single database on it. I want a way to store specific records from selected tables in this database in different regions.
As an example, I have a users table with PII data in it. These users can be from anywhere in the world, but I want the records of users who are from the EU to be stored only in the EU region.
In addition, I want all the other table records related to a specific user to be stored in that user's region as well.
From the application's perspective, I still need to be able to query across all users and all related tables to build dashboard data for the global user base.
Any pointers for solving this scenario would be helpful.
Another approach could be sharding the database. Use horizontal sharding to store the rows for each country/region in a separate database located in that country/region. The Elastic Database Client library will use a shard map to do most of the sharding work for you (assuming you are using .NET). You can use the country code in your shard map to split regional data; a rough sketch of the idea follows the links below.
Reference Architecture: https://learn.microsoft.com/en-us/azure/architecture/patterns/sharding
Elastic Database Client: https://learn.microsoft.com/en-us/azure/sql-database/sql-database-elastic-database-client-library
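The Elastic Database Client itself is .NET-only, so the following is only the core shard-map idea sketched in Python with pyodbc, not the actual library; the connection strings and country codes are placeholders:

```python
import pyodbc

# A tiny "shard map": country code -> connection string of the regional database
# that owns that country's rows.
SHARD_MAP = {
    "FR": "<connection-string-of-eu-database>",
    "DE": "<connection-string-of-eu-database>",
    "US": "<connection-string-of-us-database>",
}

def connection_for(country_code: str) -> pyodbc.Connection:
    """Open a connection to the shard that holds this country's data."""
    return pyodbc.connect(SHARD_MAP[country_code])

# All reads/writes for a French user are routed to the EU shard.
conn = connection_for("FR")
```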
Here is one approach... When your users/tenants register for your service, they will need to pick where their data should reside. This is referred to as data residency. Then, on subsequent requests to read or write data, your application's repository layer needs to know who the request is executing as, so it can look up the appropriate connection string and connect to that database to retrieve/write the data.
The routing data can be replicated to multiple regions and/or housed in a single location, as it would not contain PII. The Azure Web App can be hosted in a single region or replicated to multiple regions with traffic routed to it via a global traffic manager.
This approach supports the case where a European user chooses to have their data reside in France but happens to be visiting the United States.
Barry Luijbregts has a nice Pluralsight course that delves into this approach: https://www.pluralsight.com/courses/azure-paas-building-global-app
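As a rough sketch of that repository-layer lookup, plus the cross-region fan-out the global dashboard needs (the table, columns, and connection strings are invented for illustration):

```python
import pyodbc

# One database per residency region; connection strings are placeholders.
REGION_DATABASES = {
    "eu": "<connection-string-of-eu-database>",
    "us": "<connection-string-of-us-database>",
}

def save_user(user: dict) -> None:
    """Write the user's row into the database of the region they picked at sign-up."""
    conn = pyodbc.connect(REGION_DATABASES[user["residency_region"]])
    conn.execute(
        "INSERT INTO Users (Id, Name, Country) VALUES (?, ?, ?)",
        user["id"], user["name"], user["country"])
    conn.commit()
    conn.close()

def dashboard_user_count() -> int:
    """Fan a read-only aggregate out across every regional database for the dashboard."""
    total = 0
    for conn_str in REGION_DATABASES.values():
        conn = pyodbc.connect(conn_str)
        total += conn.execute("SELECT COUNT(*) FROM Users").fetchone()[0]
        conn.close()
    return total
```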
Good luck!
I am working on a project where the web site (all components are hosted in Azure) will have both US and international users. We are using Blob and Table storage for 99% of the data. What I do not understand is how to set up global instances, including multiple tables, etc., and keep everything in sync. Say a user logs into the site from France: how can I ensure they will always hit the same data center (which implies the same storage instance)? If they hit a different storage instance, their data will not be there and/or will be stale.
Both Compute and Storage are affinitized to a specific data center. There's no global compute or global storage deployment concept.
Having said that: you'll typically host your human-facing app (e.g. a web app) in a single data center. Usually, latency between browser and server is not much of an issue if only a relatively small quantity of data is moving between the two. The majority of bandwidth is typically between the web server and app servers and/or database instances. And in Azure, data doesn't necessarily need to be colocated in the same data center as the web app (though that is the ideal scenario from a latency and egress-bandwidth-cost perspective).
If you want Compute in multiple data centers, you'd need to have a higher-level mechanism doing some type of load balancing for you (such as Azure's Traffic Manager). However, even with Traffic Manager's "closest" setting, you're not really guaranteed that a user in France will hit the W. Europe vs. N. Europe data center. You'd always have to plan for a visitor hitting any data center. This is why it's much simpler to deal with Compute in a single data center.
Regarding data: if your Compute is in a single data center, there's no need (other than disaster recovery) to write data to multiple data centers. If you do decide to deploy Compute to multiple data centers, you'll need your own method for syncing data. For Azure blobs and table storage, you can consider some type of command pattern (e.g. CQRS) where your data operations are queue-driven. This allows you to apply each queued data operation to multiple storage accounts across different data centers.
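As a rough illustration of that queue-driven approach, here is a sketch with the Python Azure Storage SDKs; the queue name, message shape, and connection strings are assumptions for the example, not anything prescribed by Azure:

```python
import json
from azure.storage.blob import BlobServiceClient
from azure.storage.queue import QueueClient

# Blob endpoints in two different data centers (placeholder connection strings).
ACCOUNTS = [
    BlobServiceClient.from_connection_string("<primary-storage-connection-string>"),
    BlobServiceClient.from_connection_string("<secondary-storage-connection-string>"),
]
queue = QueueClient.from_connection_string("<primary-storage-connection-string>", "data-ops")

def process_queue_once() -> None:
    """Apply each queued write command to every storage account before deleting it."""
    for msg in queue.receive_messages():
        op = json.loads(msg.content)   # e.g. {"container": "docs", "blob": "a.txt", "body": "..."}
        for account in ACCOUNTS:
            account.get_blob_client(op["container"], op["blob"]).upload_blob(
                op["body"], overwrite=True)
        queue.delete_message(msg)      # delete only after all replicas are written
```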
Now, you might have data sovereignty issues, where data must reside in a specific data center for specific customers, based on their geo. Again, you'll need to implement this in the app layer. One thought on this is to affinitize a user with a particular data center when they get set up (and just store the data center mapping in a single database along with your web tier). At this point, when a visitor logs in, you can easily look up their correct data center and, within their browsing session, access their data from the specific data center.
AFAIK, Amazon AWS offers so-called "regions" and "availability zones" to mitigate the risk of a partial or complete data center outage. It looks like if I have copies of my application in two "regions" and one "region" goes down, my application can still continue working as if nothing happened.
Is there something like that with Windows Azure? How do I address the risk of a catastrophic data center outage with Windows Azure?
Within a single data center, your Windows Azure application has the following benefits:
Going beyond one compute instance, your VMs are divided into fault domains, across different physical areas. This way, even if an entire server rack went down, you'd still have compute running somewhere else.
With Windows Azure Storage and SQL Azure, storage is triple replicated. This is not eventual replication - when a write call returns, at least one replica has been written to.
Ok, that's the easy stuff. What if a data center disappears? Here are the features that will help you build DR into your application:
For SQL Azure, you can set up Data Sync. This facility synchronizes your SQL Azure database with either another SQL Azure database (presumably in another data center), or an on-premises SQL Server database. More info here. Since this feature is still considered a Preview feature, you have to go here to set it up.
For Azure storage (tables, blobs), you'll need to handle replication to a second data center yourself, as there is no built-in facility today. This could be done with, say, a background task that pulls data every hour and copies it to a storage account somewhere else (see the sketch below). EDIT: Per Ryan's answer, there is data geo-replication for blobs and tables. HOWEVER: aside from a mention in this blog post in December, and possibly at PDC, this is not live.
For Compute availability, you can set up Traffic Manager to load-balance across data centers. This feature is currently in CTP - visit the Beta area of the Windows Azure portal to sign up.
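Here is what such a background copy task might look like with the Python azure-storage-blob SDK; the container name and connection strings are placeholders, the destination container is assumed to exist, and a private source container would additionally need a SAS token:

```python
from azure.storage.blob import BlobServiceClient

primary = BlobServiceClient.from_connection_string("<primary-storage-connection-string>")
secondary = BlobServiceClient.from_connection_string("<dr-storage-connection-string>")

def copy_container_to_dr(container_name: str) -> None:
    """Ask the DR account to copy every blob in a container, server side."""
    source = primary.get_container_client(container_name)
    for blob in source.list_blobs():
        source_url = source.get_blob_client(blob.name).url   # append a SAS token if the source is private
        secondary.get_blob_client(container_name, blob.name).start_copy_from_url(source_url)

copy_container_to_dr("documents")   # run this on a schedule, e.g. hourly
```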
Remember that, with DR, whether in the cloud or on-premises, there are additional costs (such as bandwidth between data centers, storage costs for duplicate data in a secondary data center, and Compute instances in additional data centers).
Just like with on-premises environments, DR needs to be carefully thought out and implemented.
David's answer is pretty good, but one piece is incorrect. For Windows Azure blobs and tables, your data is actually geographically replicated today between sub-regions (e.g. North and South US). This is an async process with a target lag of about 10 minutes. It is also out of your control and exists purely for the case of a data center loss. In total, your data is replicated 6 times across 2 different data centers when you use Windows Azure blobs and tables (impressive, no?).
If a data center were lost, they would flip the DNS for blob and table storage over to the other sub-region and your account would appear online again. This is true only for blobs and tables (not queues, not SQL Azure, etc.).
So, for a true disaster recovery, you could use Data Sync for SQL Azure and Traffic Manager for compute (assuming you run a hot standby in another sub-region). If a datacenter was lost, Traffic Manager would route to the new sub-region and you would find your data there as well.
The one failure mode that you didn't account for is an error being replicated across data centers. In that scenario, you may want to consider running Azure PaaS as part of the HP Cloud offering, in either a load-balanced or failover scenario.