What's the difference between "Google App Engine Flexible project", "Standard project" and "Dataflow project" - google-eclipse-plugin

After installing Google Plugin for Eclipse, there are options to create three different types of projects
Google App Engine Flexible Java Project
Google App Engine Standard Java Project
Google Dataflow Java Project
What is the difference between these three?

App Engine allows you to deploy an application with minimal infrastructure-related configuration. Standard is quicker at both deploying and scaling your application, but has a limited choice of languages and libraries. Flexible manages Docker containers, so it can run almost anything, and offers more powerful machines to choose from (the same ones Compute Engine has), but it takes longer to deploy and is in general more expensive.
Lightweight, generic code that should always be available should run on Standard. A batch job that is processed once a day and requires an expensive setup and third-party libraries would run better on Flex.
More on that here
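To make the difference concrete, the deployment descriptors differ too. A flexible-environment Java app is described by an app.yaml like the minimal sketch below, while a standard-environment Java project uses a WEB-INF/appengine-web.xml descriptor instead (contents illustrative, not a complete configuration):

runtime: java
env: flex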
Google Dataflow Java Project
It's not App Engine at all; as the name says, it's a Google Dataflow Java Project. Dataflow is a separate service directed specifically at batch and streaming jobs. More on it here.

How to use GCP Runtime Configurator from Node.js?

I'm working on a Node.js application hosted on Google Cloud, using Google Application Engine. The app has a few settings like the following:
const TASK_BATCH_SIZE = 50;
Currently, every time we need to change some settings like that one to do some tests we need to re-deploy the app, and that happens very often. We are looking for some alternatives inside the Google Cloud ecosystem that allows us to configure our running services without needing to re-deploy.
One of the things we found in the docs was Runtime Configurator, which still seems to be a beta product. For Node.js specifically, nodejs-rcloadenv is the only client library we found, but it doesn't seem to support the Watcher/Waiter concepts described in the Runtime Configurator docs, or any other way to subscribe to variable changes in a configuration resource.
Is Runtime Configurator the solution to our problem? Are there any other services inside the Google Cloud ecosystem, or any other library for Node.js that could help us with this?
If you want to update the service or change its settings, you need to re-deploy it.
You can't use Runtime Configurator, as it's meant for Compute Engine and not for App Engine.
You can create a feature request on Google's Public Issue Tracker for your issue (updating configuration without re-deploying the service).
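As an illustrative workaround in the meantime (not an official mechanism; the bucket and file names below are made up): some teams keep tunable settings in a small JSON file in Cloud Storage and have the app re-read it periodically, which avoids a re-deploy for changes like TASK_BATCH_SIZE:

const { Storage } = require('@google-cloud/storage');

const storage = new Storage();
// Defaults baked into the deploy; overridden by the file if present.
let settings = { TASK_BATCH_SIZE: 50 };

async function refreshSettings() {
  try {
    // Hypothetical bucket/object holding e.g. {"TASK_BATCH_SIZE": 100}
    const [contents] = await storage
      .bucket('my-app-config')
      .file('settings.json')
      .download();
    settings = { ...settings, ...JSON.parse(contents.toString()) };
  } catch (err) {
    console.error('Settings refresh failed; keeping previous values', err);
  }
}

// Poll once a minute; every instance picks up changes without a re-deploy.
setInterval(refreshSettings, 60 * 1000);
refreshSettings();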

ODM Command Line Build For Classic Rule Projects

I am using ODM 8.10 and want to automate building RuleApp files. The code is currently configured as an old Classic Rule Project, and we are trying to avoid migrating to Decision Services at this time. I have found build jars for Decision Services but nothing so far for Classic Rule Projects. There must be a way to do this, as the RuleApp jar files are created in the Eclipse IDE when you deploy/export a RuleApp. I am trying to find out which jar files the IDE uses and the commands it calls to execute the RuleApp builds.
Re: "There must be a way to do this"
But you will not necessarily have access to it. The ODM product developers have experience, source code, documentation, and other tools that you do not have access to.
Having said that, there is a build/deploy API that you may be able to access via Ant. I haven't used it since switching to Decision Services when that became feasible in ODM 8.7. Standard practice before then was to automate deployments via Ant and a "headless" version of Eclipse. If the latest online docs don't describe it, try the older docs.
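For orientation, the "headless Eclipse" pattern generally means launching Eclipse without a UI and handing it an application entry point, roughly like the command below; the application id and arguments are placeholders, since the ODM-specific values have to come from IBM's documentation for your version:

eclipse -nosplash -consoleLog -data /path/to/rules-workspace -application <odm.deploy.application.id> <ruleapp-arguments>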
WARNING: Classic Rule Projects are a dead end! Not only will the effort you put into building them in a non-standard way be wasted; it will likely be more trouble than just migrating to Decision Services (which is usually not that difficult).

Google App Engine - specify custom build dependencies

My app needs cmake, libx11-dev and libpng-dev to build. I came across this documentation, which leads me to believe that I can list these as dependencies for my app to run on the Google App Engine platform, although I cannot figure out how. I was successfully able to run my app in a Compute Engine instance, although this is costly and, if I'm not mistaken, unnecessary. How do I get the packages listed at the beginning of the question installed beyond session end?
You can only list Node.js dependencies that way. From Declaring and managing dependencies (emphasis mine):
You can use any Linux-compatible Node.js package with App Engine flexible environment, including packages that require native (C) extensions.
You can use dependencies other than Node.js (at least cmake in your list) but only in the flexible environment, via a custom runtime. From About Custom Runtimes:
Custom runtimes allow you to define new runtime environments, which might include additional components like language interpreters or application servers.
See also Building Custom Runtimes.
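As a rough sketch of what that could look like (assuming a Node.js app; the base image and build steps are illustrative, not a verified configuration), app.yaml declares a custom runtime and the Dockerfile installs the native packages:

app.yaml:
runtime: custom
env: flex

Dockerfile:
FROM gcr.io/google-appengine/nodejs
# Install the native build dependencies the app needs.
RUN apt-get update && apt-get install -y cmake libx11-dev libpng-dev
COPY . /app/
RUN npm install
CMD ["npm", "start"]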
You need to keep in mind that the App Engine flexible environment still runs on Compute Engine instances, so you may not get an additional benefit from moving across to it:
Based on Google Compute Engine, the App Engine flexible environment automatically scales your app up and down while balancing the load.
The issue is that if you require cmake, libx11-dev and libpng-dev to build your application, you'll still need an underlying Compute Engine VM to run it. This will be the case even if you move across to Kubernetes Engine.
If you're looking to manage costs for your application, perhaps consider downsizing the VM to a smaller instance, or look into modifying your application to suit the App Engine standard environment or Cloud Functions.

Google App Engine How to implement Profiling(Stack Tracing)?

I am using Google App Engine to run my Node.js app in the flexible environment. I want to generate a flame graph, but since App Engine itself handles scaling and deploying instances, can anyone tell me how to generate a flame graph (Node.js profiling) to trace the requests coming into my Node.js server?
If any of you have worked with Google App Engine on any framework (Node.js or otherwise), please share how you solved this kind of problem on App Engine.
Update: why do we need to delete the instance after debugging it?
Flame graphs are a visualization of profiled software, allowing the most frequent code-paths to be identified quickly and accurately.
So flame graphs have nothing to do with networking, scaling or deploying to GCP.
Anyhow, FlameGraph is just a third-party tool you can install and run, so the answer is: you can make it work the same way you would install and run it on your local computer.
If you don't know how to use FlameGraph to profile Node.js, start with some tutorials, as this site is not for that type of question. A good one is here: https://nodejs.org/en/blog/uncategorized/profiling-node-js/
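That said, if you want the profile captured from inside the running App Engine instance (so you don't fight the platform's scaling), one option is Node's built-in inspector module; the sketch below records a 30-second CPU profile you can open in Chrome DevTools or feed to a flame-graph tool (the file path and duration are arbitrary choices, and Stackdriver Profiler is another Google-provided route worth evaluating):

const inspector = require('inspector');
const fs = require('fs');

const session = new inspector.Session();
session.connect();

// Sample the CPU for 30 seconds, then dump a .cpuprofile file.
session.post('Profiler.enable', () => {
  session.post('Profiler.start', () => {
    setTimeout(() => {
      session.post('Profiler.stop', (err, result) => {
        if (!err) {
          fs.writeFileSync('/tmp/profile.cpuprofile', JSON.stringify(result.profile));
        }
        session.disconnect();
      });
    }, 30 * 1000);
  });
});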
UPDATE: how to SSH into an App Engine flex instance
In the Google Cloud console, go to App Engine -> Instances and use the SSH button next to the instance you want to inspect.

Azure Function Structure

I'm trying to wrap my head around how we're supposed to build Azure functions.
I love the idea of building serverless, compact, single-function apps that respond to events.
Here are the problems I'm running into:
I have nice class libraries built in .NET Standard 2 that handle all my "backend needs" namely handling CRUD ops with Cosmos Db, Azure Table Storage, Azure SQL, Redis, Azure Storage. No matter what I did, I couldn't integrate these class libraries into an Azure Functions project. More details below.
Also, getting dependency injection working in an Azure Functions project has proven to be quite a task, especially with the class libraries mentioned above.
At this point, the only option I'm seeing is to "copy and paste" code into a new Azure Functions project and use it without any DI.
This seems to go against best practices. So what's the solution, other than creating monolithic code or waiting until Azure Functions supports .NET Core and DI?
I thought I could use my .NET Standard class libraries from a regular Azure Functions project targeting .NET Framework. After all, the idea of .NET Standard is to "standardize" things. I opened a couple of posts here on SO. I'm providing the links so that you can see the issues I've run into:
Using .NET Core 2.0 Libraries in WebJob Targeting .NET Framework 4.7
No parameterless constructor error in WebJobs with .NET Core and Ninject
P.S. My previous posts are referring to WebJobs. That was plan B approach because WebJobs seem half a step ahead of Azure Functions when it comes to supporting things like .NET Core and DI. Ultimately, I'd like to build a few Azure Functions that can use my class libraries built in .NET Standard 2.
Also, my previous posts mention that my class libraries target .NET Core 2.0. Since then I converted them to .NET Standard 2 which didn't really take much at all. I did this so that I truly conform to .NET Standard 2.
One issue is that Visual Studio has an outdated version of the Functions Core tools. Until this is resolved, you can work around in the following way:
Install the latest via npm by running npm install -g azure-functions-core-tools
In your Function App in VS, go to the Properties
Go to Debug, and click New... under Profile
Name the new Profile something like FunctionsNpm
Set the executable to (replace [YourUserName]): C:\Users\[YourUserName]\AppData\Roaming\npm\node_modules\azure-functions-core-tools\bin\func.exe
Set the arguments to host start
Set the working directory to $(TargetDir)
In the toolbar, use the green triangle (run) icon's dropdown to change your current profile to the one you just created.
Now when you run from VS, you'll be using the npm tools instead of the older ones that come with the VS package.
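For reference, the profile created by those steps is serialized into Properties/launchSettings.json; a rough sketch of the result (assuming Visual Studio's usual Executable profile schema):

{
  "profiles": {
    "FunctionsNpm": {
      "commandName": "Executable",
      "executablePath": "C:\\Users\\[YourUserName]\\AppData\\Roaming\\npm\\node_modules\\azure-functions-core-tools\\bin\\func.exe",
      "commandLineArgs": "host start",
      "workingDirectory": "$(TargetDir)"
    }
  }
}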
.NET Standard 2 support is on its way; see this GitHub issue.
