Is it possible to stream Cloud Build logs with the Node.js library?

Some context: our Cloud Build process relies on manual triggers and about 8 substitutions to customize deploys to various Firebase projects, hosting sites, and preview channels. Previously we used a bash script and gcloud to automate the selection of these substitution options, the "updating" of the trigger (via gcloud beta builds triggers import; our needs require us to use a single trigger, it's a long story), and the "running" of the trigger.
This bash script was hard to work with and improve, and through the import-run shenanigans actually led to some faulty deploys that caused all kinds of chaos: not great.
However, recently I found a way to pass substitution variables as part of a manual trigger operation using the Node.js library for Cloud Build (runTrigger with subs passed as part of the request)!
Problem: So I'm converting our build utility to Node, which is great, but as far as I can tell there isn't a native way to stream build logs from a running build to the console (except maybe with exec, but that feels hacky).
Am I missing something? Or should I be looking at one of the logging libraries?
I've tried my best scanning Google's docs and APIs (Cloud Build REST, the Node client library, etc.) but to no avail.
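For reference, the closest workaround I've sketched so far is to run the trigger, grab the build id, and then poll Cloud Logging for that build's entries until the build finishes. This is only a rough sketch, assuming the build's logs are routed to Cloud Logging (the default) and treating the project id, trigger id, branch, and substitutions as placeholders:

```js
// Rough sketch, not an official streaming API: run the trigger, then
// poll Cloud Logging for entries tagged with the resulting build id.
const {CloudBuildClient} = require('@google-cloud/cloudbuild');
const {Logging} = require('@google-cloud/logging');

const projectId = 'my-project'; // placeholder
const cb = new CloudBuildClient();
const logging = new Logging({projectId});

async function runTriggerAndTailLogs(triggerId, substitutions) {
  const [operation] = await cb.runBuildTrigger({
    projectId,
    triggerId,
    source: {projectId, branchName: 'main', substitutions}, // adjust to your trigger
  });
  const buildId = operation.metadata.build.id;

  let since = new Date(0).toISOString();
  for (;;) {
    // Fetch only entries newer than the last one printed.
    const [entries] = await logging.getEntries({
      filter:
        `resource.type="build" AND resource.labels.build_id="${buildId}"` +
        ` AND timestamp>"${since}"`,
      orderBy: 'timestamp asc',
    });
    for (const entry of entries) {
      console.log(entry.data);
      // timestamp may be a string or Date depending on transport
      since = new Date(entry.metadata.timestamp).toISOString();
    }
    // Stop once the build has left the QUEUED/WORKING states.
    const [build] = await cb.getBuild({projectId, id: buildId});
    if (!['QUEUED', 'WORKING'].includes(build.status)) break;
    await new Promise((resolve) => setTimeout(resolve, 3000));
  }
}
```

Polling getBuild until the status leaves QUEUED/WORKING is crude, but it at least avoids shelling out to gcloud.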

Related

Can Google App Engine (flexible environment) perform a build step defined in package.json just before deployment?

I couldn't find any documentation about build steps on the flexible environment. The only thing I found is that App Engine will run the start script from your package.json file after deployment, but is it possible to make it run the build script first? This is what Heroku does, and I want to replicate it.
What you're looking for is the script called gcp-build, which can perform a custom build step at deployment, just before starting the application. While this is only documented for the Standard Environment as of now (I've let the engineers know), there are multiple public resources confirming that it works on both environments. See the following links for reference:
Why does Google App Engine flex build step fail while standard works for the same code?
https://github.com/GoogleCloudPlatform/nodejs-docs-samples/tree/master/appengine/typescript
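For illustration, a minimal package.json using gcp-build might look like this (the TypeScript compile step is just an example of a build command):

```json
{
  "scripts": {
    "gcp-build": "npm run build",
    "build": "tsc -p .",
    "start": "node dist/index.js"
  }
}
```

App Engine runs the gcp-build script during deployment, then uses start to launch the app.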

Execution of frontend tests in embedded webapp

In our setup we are building and deploying our UI5 app as an embedded static resource within our Spring Boot Maven-based application. During the CI build with the SAP Cloud SDK pipeline, however, the frontend tests are not being executed.
Looking at the pipeline code, it seems to me that those stages are only executed for HTML5 modules and not for Java modules. However, the npm modules should be available, as they are collected during the initialization stage as far as I can see.
So my question is whether there is a way to execute the frontend tests in this scenario as well, or if not, whether this is intentionally not being done due to other constraints I am not aware of.
For projects using MTA/the Cloud Application Programming Model this is correct. Currently, we expect only HTML5 modules to contain frontends and the corresponding tests. The reason is that MTA brings that structure by default and there have been no other requests for this yet. However, as it also looks like a valid setup, we will discuss implementing it in one of the future releases. You are also invited to create pull requests.
If you are using a plain Maven project generated with the SAP Cloud SDK, you can use this setup with the frontend embedded in the webapp folder. In this case, you only need to configure the npm script ci-frontend-unit-test in the package.json at the root of the project.
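As an illustration, the script entry might look something like this (karma is just an example test runner; use whatever executes your frontend unit tests):

```json
{
  "scripts": {
    "ci-frontend-unit-test": "karma start karma.conf.js --single-run"
  }
}
```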

AWS Lambda Dev Workflow

I've been using AWS for a while now but am wondering how to go about developing with Lambda. I'm a big fan of serverless functions and letting Amazon handle the maintenance, and have been using it for a while. My question: is there a recommended workflow for version control and development?
I understand there's the ability to publish a new version in Lambda, and that you can point to specific versions in a service that calls it, such as API Gateway. I see API Gateway also has some nice abilities to partition who calls which version, i.e. having a test API and also slowly rolling updates out to, say, 10% of production API calls and scaling up slowly.
However, this feels a bit clunky for an actual version control system. Perhaps the functions are coded locally and uploaded using the AWS CLI, and then everything is managed through a third-party version control system (GitHub, Bitbucket, etc.)? Can I deploy to new or existing versions of the function this way? That way I can maintain a separation of test and production functions.
Development also doesn't feel as nice through the editor in Lambda, not to mention that using custom packages requires uploading anyway. It seems local development is the better solution. I'm trying to understand others' workflows so I can improve mine.
How have you approached this issue in your experience?
I wrote roughly a dozen Lambda functions that trigger based on S3 file-write events or on a schedule, and make an HTTP request to an API to kick off data processing jobs.
I don't think there's any gold standard. From my research, there are various approaches and frameworks out there. I decided that I didn't want to depend on frameworks like Serverless or Apex, because I didn't want to learn how to use those on top of learning about Lambda. Instead, I built out improvements organically based on my needs as I was developing a function.
To answer your question, here's my workflow.
1. Develop locally and git commit changes.
2. Mock test data and test locally using mocha and chai.
3. Run a bash script that creates a zip file compressing the files to be deployed to AWS Lambda.
4. Upload the zip file to AWS Lambda (a sketch of steps 3 and 4 follows).
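For steps 3 and 4, the script boils down to something like the following sketch (shown here with the AWS SDK for Node.js rather than the CLI; the function name, region, and file list are placeholders):

```js
// Illustrative deploy step using the AWS SDK for Node.js (v2); the
// workflow above uses a bash script plus the AWS CLI instead.
const AWS = require('aws-sdk');
const fs = require('fs');
const {execSync} = require('child_process');

// Step 3: package the code (the handler file must sit at the zip root).
execSync('zip -r build.zip index.js node_modules');

// Step 4: upload the zip and publish a new version, so callers such as
// API Gateway stages can pin to specific versions.
const lambda = new AWS.Lambda({region: 'us-east-1'});
lambda.updateFunctionCode({
  FunctionName: 'my-function', // placeholder
  ZipFile: fs.readFileSync('build.zip'),
  Publish: true,
}, (err, data) => {
  if (err) throw err;
  console.log(`Deployed version ${data.Version}`);
});
```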
You can have version control on your Lambda using AWS CodeCommit (much simpler than using an external git repository system, although you can do either). Here is a tutorial for setting up a CodePipeline with commit/build/deploy stages: https://docs.aws.amazon.com/codepipeline/latest/userguide/tutorials-simple-codecommit.html
That example deploys to an EC2 instance, so for the deploy portion for a Lambda, see here.
If you set up a pipeline, you can have an initial commit stage, then a build stage that runs your unit tests and packages the code, and then a deploy stage (and potentially more stages if required). It's a very organized way of deploying Lambda changes.
I would suggest you have a look at SAM. SAM is a command-line tool and a framework that helps you develop your serverless application. Using SAM, you can test your applications locally before uploading them to the cloud. It also supports blue/green deployments and CI/CD workflows, starting automatically from GitHub.
https://github.com/awslabs/aws-sam-cli

Creating a Web UI for StrongLoop build & deploy processes?

I want to build a web UI for StrongLoop that would let a user run build and deploy processes, like StrongLoop Arc does.
There are simple Node applications (web services) that were not created with StrongLoop tools. We need to deploy these applications via a web UI. The solution I have in mind is some server-side processes, with the steps listed below:
1. Upload a zip archive (the Node application) to the server.
2. Extract the zip and build it into a tar.gz via a shell command (slc build) through the Node.js child_process API.
3. Deploy the tar.gz file to the relevant StrongLoop host via a shell command (slc deploy ..) through the API mentioned in the previous step (see the sketch after this list).
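For steps 2 and 3, I'm imagining something along these lines (a rough sketch; the slc invocations and host URL are placeholders):

```js
// Rough sketch of steps 2 and 3 via the Node.js child_process API.
// The slc arguments and the host URL are illustrative placeholders.
const {execFile} = require('child_process');
const {promisify} = require('util');
const run = promisify(execFile);

async function buildAndDeploy(appDir, host) {
  // Step 2: build the extracted app into a deployable archive.
  await run('slc', ['build'], {cwd: appDir});
  // Step 3: deploy the archive to the StrongLoop host.
  await run('slc', ['deploy', host], {cwd: appDir});
}

buildAndDeploy('/tmp/extracted-app', 'http://strongloop-host:8701')
  .then(() => console.log('deployed'))
  .catch((err) => console.error('deploy failed', err));
```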
I wonder if there is any alternative way to deploy a Node application (not created with StrongLoop tools) to a StrongLoop host via a web UI, using some StrongLoop API?
I have looked through the API but could not find a specific solution.
What you require is a CDP (continuous delivery pipeline) setup. There seem to be many ways in which you can achieve this (the easiest is using Codeship or a similar platform), but if you want to know how it works, it requires a few orchestration tools to help you. To describe the steps, I'll be using the following tools:
Docker (what is docker?)
Ansible (Use Cases and How it works?)
Jenkins (What is it and Why to use it?)
"There are many other combination of tools that you can look at, but this should give you an idea"
Now that we have the tools, I'll try to describe the deployment pipeline with a very basic use-case.
Step I "Ideally" - Creating a Docker image for your Node.js application.
What everyone generally suggests is that you create a Docker image of your application, then save this image on Docker Hub. How this helps you is that your Node.js application is now contained inside a Docker image, which makes it independent of the host and deployable anywhere you want.
To create this image, all you need to do is create a Dockerfile, which is described in the link I've shared.
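A minimal Dockerfile for a Node.js app, purely as an illustration (base image, port, and start command are placeholders):

```dockerfile
# Illustrative only: adjust the base image, port, and start command.
FROM node:14
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install --production
COPY . .
EXPOSE 3000
CMD ["npm", "start"]
```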
Step II "Ideally" - Creating an Ansible playbook to mimic the setup steps of your application.
Ansible playbooks are basically used to automate every manual process you would otherwise need to perform in order to set up, deploy, and run your application. This removes the need to run even trivial tasks like "slc build" by hand.
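As a sketch, a playbook automating that build step might look like this (the host group, paths, and command are placeholders):

```yaml
# Illustrative playbook: unpack the app on the host and build it there,
# so no one has to run "slc build" manually.
- hosts: app_servers
  tasks:
    - name: Unpack the application archive onto the host
      unarchive:
        src: app.tar.gz        # local archive, a placeholder
        dest: /opt/app
    - name: Run the build step
      command: slc build
      args:
        chdir: /opt/app
```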
Step III "Ideally" - This is where we get to the UI stuff
By using Jenkins, you are given a UI which helps you configure tasks that can be combined with GitHub hooks, triggering the deployment as soon as you make a commit. This is explained in more detail in the link I've shared.
So to summarize, this is roughly what goes on in the background to automate the build and deployment of your application using a UI. I hope this serves as a good starting point for achieving your requirements; and in case you want to skip these steps at the start, you could always go with Codeship or other similar tools to help you with the steps you've mentioned.

How to do continuous deployment - c++ on AWS

I have the source on a repository server. The application is running on an AWS instance. I could make a script that logs in, pulls, compiles, and copies the new binary to its destination.
But how do I copy the new binary if the application is running? What's the usual way to do this? Do I have to stop the application to make an update? How does continuous deployment work then?
I'm using Linux, and the application is in C++.
You would have to restart the application after copying the binary. To make this less painful, though, I would highly recommend that you use one of the continuous building/integration frameworks, for example Jenkins.
It will not only help with the actual deployment process, but can also run tests for you and only deploy if the tests succeed. There is also a plugin for AWS integration.
