App Engine Cloud Endpoints & Core Data

When using the Cloud Endpoints generated libraries, is there any way to use Core Data without manually creating the models? Is there an easy way to integrate the two so that objects from the endpoint can be persisted with Core Data?

I am not aware of any generic solution. It seems to me that the two philosophies - App Engine Cloud Endpoints as a relational database vs. Core Data as an object graph - are just too different to allow a solution without manual adaptation.
So there is no "easy way to integrate the two". That said, if you write your own conversion layer, the most complicated part is likely to be translating foreign keys into Core Data relationships. Apart from that it should not be too difficult.

Related

Multiple web services sharing single database using Sequelize

I'm trying to build a service architecture that includes two Node.js apps which share the same database. The overall service architecture looks like the diagram below (simplified version).
I'm planning to use Sequelize as the ORM to access the database. As far as I know, if a service uses Sequelize, it needs model definitions that describe the structure of the data tables. In my case, api and service will access the same database, which means they should share the same Sequelize models.
So here is the question: where should I locate the common Sequelize relevant files? It seems I have two choices:
put them in a common location further up the tree (assuming the project structure is a monorepo) so that both apps use the same single set of files
maintain a copy of the files in each app's project folder. In this case each app is independent (say I want to dockerize each app), but whenever the Sequelize files are modified, the same change has to be made in the other app.
I'm not sure whether my understanding is correct. Is my question valid? If so, which is the better choice and practice? I appreciate your answers in advance.
There is no single correct answer; it depends on the specific situation. That said, sharing a database between multiple microservices is a bad design.
Sharing a database means tight coupling at the data level. The direct consequence is that when one service modifies the database table structure, such as deleting the name field of the user table, it may break the APIs of every other service that uses the Sequelize user model: all services then need to update the model definition and modify the implementation code of their APIs.
If all of your services are maintained by one team, I suggest you choose the first solution, which costs less and is easier to maintain. If your services are maintained by different teams, the two solutions end up being similar, because as soon as the table structure is modified, the application-layer model has to be modified, or at least verified to still work.
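If you do go with the first solution, a minimal sketch of the shared module might look like this (the file layout and model fields are assumptions for illustration, not taken from the question):

```js
// shared/models/user.js - the single model definition both apps import
const { DataTypes } = require('sequelize');

// Export a factory so each app can bind the model to its own Sequelize instance.
module.exports = (sequelize) =>
  sequelize.define('User', {
    name: { type: DataTypes.STRING, allowNull: false },
    email: { type: DataTypes.STRING, unique: true },
  });
```

Each app then creates its own connection and binds the shared definition:

```js
// apps/api/db.js (and the same pattern in apps/service)
const { Sequelize } = require('sequelize');
const defineUser = require('../../shared/models/user');

const sequelize = new Sequelize(process.env.DATABASE_URL);
const User = defineUser(sequelize);

module.exports = { sequelize, User };
```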
Therefore, I recommend following the best practices of microservice architecture, first splitting the database vertically according to the business model, and building application APIs on top of it.
Core principles of microservices:
loose coupling
high cohesion

Design for a Cloud Native Application in Azure for ML Insights and Actions

I have an idea for a cloud-native application for algorithmic trading, ideally built entirely from PaaS and SaaS offerings (no IaaS), and I'd like to get some feedback on how I intend to build it. The concept is pretty straightforward: I intend to consume financial trading data from an external SaaS solution via an API query, feed that data into various Azure PaaS solutions (most notably ML for modeling), and then take some action. Here is a high-level diagram I've come up with so far:
(Diagram: Solution Overview)
As a note, while I'm familiar with Azure, I'm not an Azure cloud engineer and have limited experience actually building solutions myself. Consequently, I intend to use this project as a foundation to further educate myself.
When starting on the build, I immediately questioned whether or not I should use Event Hubs. Conceptually it makes sense, in that I'm decoupling the production of a data stream from its consumption; presumably this means fewer complications if and when I need to update the data feed(s) in the future. I also thought about where the data should be stored: a SQL database, or more simply an Azure Table? The trading data will need to be retained for regression testing as I iterate through my models. All that said, I'm looking for insights from anybody who has experience in this space.
Thanks!
There's no real question in here. Take a look at the architecture references provided by Microsoft: https://learn.microsoft.com/en-us/azure/architecture/reference-architectures/
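For what it's worth, the decoupling the asker describes is cheap to prototype. Here is a minimal sketch of publishing incoming quotes to an Event Hub with the @azure/event-hubs package; the hub name and the shape of a quote are assumptions, not part of the question:

```js
const { EventHubProducerClient } = require('@azure/event-hubs');

// Publish a batch of quotes. Consumers (e.g. the ML pipeline) read from the
// hub independently, so the upstream feed can change without touching them.
async function publishQuotes(connectionString, quotes) {
  const producer = new EventHubProducerClient(connectionString, 'trading-data');
  try {
    const batch = await producer.createBatch();
    for (const quote of quotes) {
      if (!batch.tryAdd({ body: quote })) break; // batch is full
    }
    await producer.sendBatch(batch);
  } finally {
    await producer.close();
  }
}
```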

Decision path for Azure Service Fabric Programming Models

Background
We are looking at porting a 'monolithic' 3-tier web app to a microservices architecture. The web app displays listings to a consumer (think Craigslist).
The backend consists of a REST API that calls into a SQL DB and returns JSON for a SPA app to build a UI (there's also a mobile app). Data is written to the SQL DB via background services (FTP + worker roles). There are also some pages that allow writes by the user.
Information required:
I'm trying to figure out whether (and how) Azure Service Fabric would be a good fit for a microservices architecture in my scenario. I know the pros and cons of microservices vs. a monolith, but I'm trying to figure out how the various microservice programming models apply to our current architecture.
Questions
Is Azure Service Fabric a good fit for this? If not, any other recommendations? Currently I'm leaning towards a bunch of OWIN-based .NET web sites, split up by area/service, each hosted on their own machine and tied together by an API gateway.
Which Service Fabric programming model would I go for? Stateless services with their own backing DB? I can't see how the Stateful or Actor model would help here.
If I went with Stateful services/Actors, how would I go about updating data as part of a maintenance/ad-hoc admin request? Traditionally we would simply log in to the DB and update the data, and the API would return the new data - but if it's persisted in memory/across nodes in a cluster, how would we update it? Would I have to expose all of this via methods on the service? Similarly, how would I import my existing SQL data into a stateful service?
For Stateful services/the Actor model, how can I 'see' the data visually, with an object explorer/UI? Our data is our gold, and I'm concerned about the lack of control and visibility over it in the Reliable Services model.
Basically, is there some documentation on the decision path towards which programming model to go for? I could model a "listing" as an Actor and have millions of those - sure - but I could also have a Stateful service that stores the listings locally, and I could also have a Stateless service that fetches them from the DB. How does one decide which is the best approach for a given use case?
Thanks.
What is it about your current setup that isn't meeting your requirements? What do you hope to gain from a more complex architecture?
Microservices aren't a magic bullet. You mainly get four benefits:
You can scale and distribute pieces of your overall system independently. Service Fabric has very sophisticated tools and advanced capabilities for this.
You can deploy and upgrade pieces of your overall system independently. Service Fabric again has advanced capabilities for this.
You can have a polyglot system - each service can be written in a different language/platform.
You can use conflicting dependencies - each service can have its own set of dependencies, like different framework versions.
All of this comes at a cost and introduces complexity and new ways your system can fail. For example, your fast, compile-time-checked in-proc method calls now become slow (by comparison with an in-proc function call), failure-prone network calls. And this is not specific to Service Fabric, by the way; it's just what happens when you go from in-proc method calls to cross-machine I/O, whatever platform you use. The decision path here is a pro/con list specific to your application and your requirements.
To answer your Service Fabric questions specifically:
Which programming model do you go for? Start with stateless services with ASP.NET Core. It's going to be the simplest translation of your current architecture that doesn't require mucking around with your data layer.
Stateful has a lot of great uses, but it's not necessarily a replacement for your RDBMS. A good place to start is hot data that can be stored in simple key-value pairs, is accessed frequently, needs to be low-latency (you get local reads!), and doesn't need to be data-mined. Examples include user session state, cache data, and a "snapshot" of the most recent items in a data stream (like the most recent stock quote in a stream of quotes).
Currently the only way to see or query your data is programmatically directly against the Reliable Collection APIs. There is no viewer or "management studio" tool. You have to write (and secure) an API in each service that can display and query data.
Finally, the actor model is a very niche model. It serves specific purposes, but if you just treat it as a data store it will not work for you. As in your example, a listing per actor probably wouldn't work, because you can't query across that set of actors, or even have multiple users reading the same listing simultaneously.

What does building an application in ArangoDB Foxx offer beyond a regular Node application

I'm learning more about ArangoDB and its Foxx framework, but it's not clear to me what I gain by using that framework over building my own standalone Node.js app for API/access control, logic, etc.
What does Foxx offer that a regular Node.js app wouldn't?
Full disclosure: I'm an ArangoDB core maintainer and part of the Foxx team.
I would recommend taking a look at the webinar I gave last year for a detailed overview of the differences between Foxx and Node and the advantages of using Foxx when you are using ArangoDB. I'll try to give a quick summary here.
If you apply ideas like the Single Responsibility Principle to your architecture, your server-side code has two responsibilities:
Backend: persist and query data using the backend data storage (i.e. ArangoDB or other databases).
Frontend: transform the query results into a format suitable for the client (e.g. HTML, JSON, XML, CSV).
In most conventional applications, these two responsibilities are fulfilled by the same monolithic application code base running in the same process.
However, the task of interacting with the data storage usually requires writing a lot of code that is specific to the database technology. You need to write queries (e.g. using SQL, AQL, ReQL or some other technology-specific language) or use database-specific drivers.
Additionally in many non-trivial applications you need to interact with things like stored procedures which are also part of the "backend code" but live in the database. So in addition to having the application server do two different tasks (storage and rendering), half the code for one of the tasks ends up living somewhere else, often using an entirely different language.
Foxx solves this problem by letting you move the logic we identified as the "backend" of your server-side code into ArangoDB. Not only can you hide all the nitty-gritty of query languages, edges and collections behind a more application-specific API, you also eliminate the network overhead of requests that would otherwise require more than a single round trip to the database.
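To make that concrete, here is a minimal sketch of a Foxx service exposing one application-specific endpoint over an AQL query; the "listings" collection and the route are invented for illustration:

```js
'use strict';
const createRouter = require('@arangodb/foxx/router');
const { db, aql } = require('@arangodb');

const router = createRouter();
module.context.use(router);

// One HTTP request, zero extra round trips: the query runs inside ArangoDB.
router.get('/listings/recent', (req, res) => {
  const listings = db._query(aql`
    FOR l IN listings
      SORT l.createdAt DESC
      LIMIT 10
      RETURN l
  `).toArray();
  res.json(listings);
})
.summary('Most recent listings')
.description('Returns the ten most recently created listings.');
```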
For trivial applications this may mean that you can eliminate the Node server completely and access your Foxx API directly from the client. For more complicated scenarios you may want to use Node to build external microservices your Foxx service can tap into (e.g. to interface with external non-HTTP APIs). Or you can just put your conventional Node app in front of ArangoDB and use Foxx to create an HTTP API that represents your application's problem domain better than the database's raw HTTP API does.
It's also worth keeping in mind that structurally Foxx services aren't entirely dissimilar from Node applications. You can use NPM dependencies and split your code up into modules and it can all live in version control and be deployed from zip bundles. If you're not convinced I'd suggest giving it a try by implementing a few of your most frequent queries as Foxx endpoints and then deciding whether you want to move more of your logic over or not.

Azure Mobile Services - complex processing

I am fairly new to Azure and Mobile Services, and all the examples and tutorials I can find for the table and API scripts are fairly simplistic.
If I have some processes that are fairly complex and rely on pulling information from many different tables, with processing contingent on that data, should I be doing that somewhere other than the API scripts? I am new to Node.js as well, so maybe that's the problem, but I was wondering whether there is a more appropriate place for business logic, such as some bridge I need to add to my stack.
There are a lot of examples available of how to use the mssql object (which is used to query tables), and of Node in general; a healthy search will reveal just about anything you need. Since you said you are new to Node.js, consider using the .NET backend instead. It is based on Entity Framework, and there are lots of Entity Framework examples out there for you too. Finally, there are some really good examples of complex logic in the backend in the sample code available here: http://azure.microsoft.com/en-us/develop/mobile/ios-samples/ (pick your client OS), here: http://azure.microsoft.com/blog/topics/mobile/ and here: http://blogs.msdn.com/b/azuremobile/
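If you do stay on the Node backend, complex multi-table logic usually fits better in a custom API script than in the per-table scripts. A rough sketch in the shape of the classic Mobile Services Node API (the table names and SQL here are invented for illustration):

```js
// api/ordersummary.js - a custom API script in the legacy Node backend
exports.get = function (request, response) {
  var mssql = request.service.mssql;

  // Join several tables in one SQL query instead of chaining table scripts.
  mssql.query(
    'SELECT o.id, o.total, c.name FROM orders o ' +
    'JOIN customers c ON o.customerId = c.id WHERE o.userId = ?',
    [request.user.userId],
    {
      success: function (results) {
        response.send(200, results);
      },
      error: function (err) {
        response.send(500, { error: err.message });
      }
    }
  );
};
```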
Let us know if you have specific questions!