I'm trying to use the Nx monorepo approach.
I've created a project with Angular & NestJS:
npx create-nx-workspace --preset=angular-nest
But now I'm trying to create modules.
In a standalone NestJS project, I would usually do something like:
nest g module auth
But if I run it in the root, the file doesn't get created in the correct folder. What is the expected approach to develop in such a monorepo environment? Should I open each app subfolder in a dedicated VS Code window? But then I guess I would lose the benefit of shared libraries?
I could also move my VS Code terminal into the correct folder, but then each time I open a new one I would have to navigate there again, and I'm pretty sure that will lead me to errors.
Thanks
If you're in an Nx workspace, you should be using the nx generator instead of the nest generator.
nx g @nrwl/nest:module auth
nx g @nrwl/nest:controller auth
I believe the generator will ask which project to target as well. There's also the Nx Console plugin for VS Code where you can see the changes before committing to any of them.
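For example, assuming the NestJS app in the workspace is named api (a placeholder project name), the target project can also be passed explicitly:
nx g @nrwl/nest:module auth --project=api
nx g @nrwl/nest:controller auth --project=api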
I created and initialized a Firebase project on my machine and in the Firebase console. However, during the CLI initialization process on my machine I did not tick/include the Firebase Functions feature during the feature selection step.
Halfway through the project I realized there were some features on my website that needed Cloud Functions, and now I'm stuck trying to add Firebase Functions to the project.
Enabling Firebase Functions in the Firebase console is easy enough, but making the necessary changes in the source code to enable it is frustrating (e.g. creating a 'functions' folder in the code, creating a package.json, etc.). Is there a command to automatically generate this?
You should just be able to run firebase init in the same project again to add Cloud Functions. It will add extra information to your firebase.json for the new products you choose, but it will not overwrite what you've already done for Hosting. If you don't trust that process, simply back up your files, run the command, and revert the changes if you don't like them.
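To scaffold only the Functions feature, running firebase init functions should also work. The CLI then generates a functions folder with its own package.json and an index.js, which looks roughly like this minimal sketch (the exported function name here is just an illustration):
// functions/index.js
const functions = require('firebase-functions');

// A minimal HTTPS-triggered function; deploy with: firebase deploy --only functions
exports.helloWorld = functions.https.onRequest((req, res) => {
  res.send('Hello from Firebase!');
});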
I've been using AWS for a while now and am wondering how to go about developing with Lambda. I'm a big fan of serverless functions and letting Amazon handle the maintenance. My question: Is there a recommended workflow for version control and development?
I understand there's the ability to publish a new version in Lambda. And that you can point to specific versions in a service that calls it, such as API Gateway. I see API Gateway also has some nice abilities to partition who calls which version. i.e. Having a test API and also slowly rolling updates to say 10% of production API calls and scaling up slowly.
However, this feels a bit clunky for an actual version control system. Perhaps the functions are coded locally and uploaded using the AWS CLI and then everything is managed through a third party version control system (Github, Bitbucket, etc)? Can I deploy to new or existing versions of the function this way? That way I can maintain a separation of test and production functions.
Development also doesn't feel as nice through the editor in Lambda, not to mention that using custom packages requires uploading anyway. Local development seems like the better solution. I'm trying to understand others' workflows so I can improve mine.
How have you approached this issue in your experience?
I wrote roughly a dozen Lambda functions that trigger on S3 file write events or on a schedule, and make an HTTP request to an API to kick off data processing jobs.
I don't think there's any gold standard. From my research, there are various approaches and frameworks out there. I decided that I didn't want to depend on a framework like Serverless or Apex because I didn't want to learn how to use those on top of learning about Lambda. Instead I built out improvements organically based on my needs as I was developing a function.
To answer your question, here's my workflow.
Develop locally and git commit changes.
Mock test data and test locally using mocha and chai.
Run a bash script that compresses the files to be deployed to AWS Lambda into a zip file.
Upload the zip file to AWS lambda.
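As a rough sketch of the last two steps, a small Node script using the aws-sdk v2 package can push a prebuilt zip to an existing function (the function name, region, and zip file name below are placeholders):
// deploy.js - upload a prebuilt zip to an existing Lambda function
const fs = require('fs');
const AWS = require('aws-sdk');

const lambda = new AWS.Lambda({ region: 'us-east-1' }); // placeholder region

lambda.updateFunctionCode({
  FunctionName: 'my-data-processing-trigger', // placeholder function name
  ZipFile: fs.readFileSync('lambda.zip')      // zip produced by the build script
}).promise()
  .then(res => console.log('Updated code, new version:', res.Version))
  .catch(err => { console.error(err); process.exit(1); });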
You can have version control on your Lambda using AWS CodeCommit (much simpler than using an external git repository system, although you can do either). Here is a tutorial for setting up a CodePipeline with commit/build/deploy stages: https://docs.aws.amazon.com/codepipeline/latest/userguide/tutorials-simple-codecommit.html
That example deploys to an EC2 instance, so for the deploy portion for a Lambda, see here
If you set up a pipeline you can have an initial commit stage, then a build stage that runs your unit tests and packages the code, and then a deploy stage (and potentially more stages if required). It's a very organized way of deploying lambda changes.
I would suggest you have a look at SAM. SAM is a command line tool and a framework to help you develop your serverless application. Using SAM, you can test your applications locally before uploading them to the cloud. It also supports blue/green deployments and CI/CD workflows, starting automatically from GitHub.
https://github.com/awslabs/aws-sam-cli
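As a rough sketch, a typical local loop with the SAM CLI (assuming the project contains a template.yaml describing the function) looks like:
sam build
sam local invoke
sam deploy --guided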
Tech Stack
Bitbucket Pipelines
Docker
Node
Webpack
VueJS 2
Firebase
Question
Which piece of technology above do I use to reconfigure the Firebase API keys before building? I think it should be done in Webpack. Can you point me to an example? I'm a Webpack noob.
The Issue
When I use Webpack to build the Vue project, I need to change $FIREBASE_API_KEY and other config options based on the branch that Bitbucket Pipelines was triggered from. If I check in a change to the "inflight" branch on Bitbucket, Pipelines should build and deploy with the $FIREBASE_API_KEY (and other props) that match the Firebase project "dev".
// Firebase Config
let config = {
  apiKey: "$FIREBASE_API_KEY",
  authDomain: "my-project.firebaseapp.com",
  databaseURL: "https://my-project.firebaseio.com",
  databasecloudfunctionsUrl: "https://my-project.firebaseio.com",
  projectId: "my-project",
  storageBucket: "my-project.appspot.com",
  messagingSenderId: "xyz123"
}
Details
The config info you see above currently lives within my-project\src\validation-service.js
It seems I need to define $FIREBASE_API_KEY (and the other props) in a separate file that Webpack can manipulate.
Does Docker have a role in updating the API keys and other configs for deploying Dev/Prod from Inflight/Master?
Examples - similar tech
https://github.com/bartw/multi_env_webpack_travis_app
https://hackernoon.com/continuous-deployment-of-a-webpack-app-to-multiple-environments-using-travic-ci-d2c6f22eac50
https://www.atlassian.com/continuous-delivery/tips-for-scripting-tasks-with-Bitbucket-Pipelines
Answer
Webpack
Webpack makes it possible to configure Client API Keys for different environments, like Prod or Dev.
I created this project with vue-cli and chose the webpack option. This creates folders and files to configure Continuous Integration for multiple environments.
In my specific case, I saved my two Client API Keys in dev.env.js and prod.env.js. These .js files are in the config folder created by vue-cli.
I used This Tutorial to gain an understanding of how Webpack uses files in the build folder. For vue-cli, keep an eye on NODE_ENV and '"production"'.
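For illustration, the dev.env.js generated by the vue-cli webpack template follows this shape (the key value here is a placeholder; prod.env.js looks the same with the production key):
// config/dev.env.js
'use strict'
const merge = require('webpack-merge')
const prodEnv = require('./prod.env')

module.exports = merge(prodEnv, {
  NODE_ENV: '"development"',
  FIREBASE_API_KEY: '"dev-api-key-placeholder"' // replaced at build time by webpack.DefinePlugin
})
In the app code the key is then read as process.env.FIREBASE_API_KEY, which DefinePlugin substitutes with the literal string during the build.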
More
There are other secret API keys that this application needs access to when deploying. For these, I use environment variables in Bitbucket Pipelines. Although Bitbucket Pipelines uses Docker images to build and deploy, it is the bitbucket-pipelines.yml that can reference encrypted environment variables from Bitbucket, e.g. $PROD_FIREBASE_API_KEY, $DEV_FIREBASE_API_KEY.
I would like to know the best approach to creating several deploys from a big code base. The idea is to divide the big API into microservices (each one on its own server/VM).
The first idea: I could simply create a folder with only the routes available to that microservice, but still using the "common" codebase...
I currently end up with this, and it's a running API in production (with a staging environment on Heroku using their pipeline):
and I was thinking that I could have something like:
Can anyone point me to a good reference on where to start? How can I push multiple versions of the same base code to a server?
For more detail on the technologies used, I'm using:
mocha and chai for tests
sequelize for MariaDB modeling and access
restify as the server engine
When you divide the API into microservices, you have a few options:
Make completely separate repos for all of them with some code duplication
Make completely separate repos but sharing common code as Node modules
Make one repo with multiple microservices, each as its own Node module
Make one repo with one big codebase and build multiple modules with needed parts from that
I'm sure you can do it in even more ways
Having a mismatch between the number of Node modules and code repos will cause some trouble, but it may have benefits in certain cases.
Having a 1-to-1 mapping of repos and modules will be easier to work with in some cases, like the ability to add private GitHub repos directly to dependencies in package.json.
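For example, a private repo can be referenced straight from package.json like this (the organization, repo name, and tag are placeholders):
"dependencies": {
  "shared-models": "git+ssh://git@github.com/my-org/shared-models.git#v1.0.0"
}
npm resolves the URL over SSH using your existing GitHub access, so no registry is needed.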
If you want to factor out some common functionality then you can do it in several ways:
npm supports organizations, scoped packages, private modules, and private scoped packages with restricted access.
You can host a private npm registry
You can host a module on GitHub or GitLab or any other git server
For more info see:
Node.js: How to create paid node modules?
There are some nice frameworks that can help you with splitting your code base into microservices, like Seneca:
http://senecajs.org/
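For a feel of the style, Seneca's basic pattern-matching API looks roughly like this (the role/cmd names are purely illustrative):
// Register an action pattern and call it
const seneca = require('seneca')();

seneca.add('role:math,cmd:sum', (msg, reply) => {
  reply(null, { answer: msg.left + msg.right });
});

seneca.act('role:math,cmd:sum', { left: 1, right: 2 }, (err, result) => {
  if (err) throw err;
  console.log(result.answer); // 3
});
Each microservice can then expose its patterns over HTTP or a message bus via Seneca's transport plugins.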
Or to a certain extent with Serverless if you're using AWS Lambda, Microsoft Azure, IBM OpenWhisk or Google Cloud Platform:
https://serverless.com/