Point of making a stub file - knex - node.js

For a test project I avoided explicitly creating a stub, but the docs mention creating a stub file with:
knex migrate:make --stub
which, according to the configuration we specify, creates <filename>.stub. I looked at someone's starter pack and saw that it was a file similar to a migration file.
From my novice experience, altering and creating tables is done from migrations, and filling in test data is done from seeds. So why do we need a .stub file for both migrations and seeds (which, by the way, seem similar)? What's the whole concept of the .stub file?

The stub file is just a template that gets copied as your new migration.
E.g. you call knex migrate:make new_migration
knex will create a file $timestamp_new_migration.$extension whose contents are the contents of your stub file.

Related

Editing the .env file using node.js

I have a check.env file containing variables like
SharedAccessKey=
I want to set a value in the check.env file from my Node.js code. Articles on the internet cover updating variables at runtime, but my requirement is to change the file itself and keep the changes on disk.
How can I accomplish that?
I found this link: How to change variables in the .env file dynamically in Laravel?
but it is for another language; how can I do it in Node.js?
I was unable to find the best solution, so I went with another approach of my own.
I am now using two files, both with .env extensions, and I copy the main .env file to another empty .env file (e.g. check1.env to check2.env).
Any modifications I make go into the second file (check2.env).
I then do a string replacement in that .env file: I read the contents with fs.readFile(), find the string, and swap the value with data.replace(). This worked for me.
The reason for using two .env files is that even after changing the second file, copying from the first file again gives me the same search string, which I can then replace with a different value.
Please suggest if there is any better approach. Thanks.

Ecto multiple repos - how to ignore one for migrations

I have an app configured with
config :my_app,
  ecto_repos: [MyApp.Repo, MyApp.LegacyRepo]
MyApp.Repo's migrations are managed by Ecto.
MyApp.LegacyRepo migrations are handled by Rails and error out on mix ecto.migrate
Is there a way to specify "I have two repos, but please ignore the second for migrations"?
Another option is to change your config.exs to:
config :my_app, ecto_repos: [MyApp.Repo]
According to the ecto.migrate documentation:
The repositories to migrate are the ones specified under the :ecto_repos option in the current app configuration. However, if the -r option is given, it replaces the :ecto_repos config.
Not having MyApp.LegacyRepo in ecto_repos doesn't prevent reads, writes, or anything else you'd expect. It only configures the migration task.
You can pass a repo into mix ecto.migrate like this:
mix ecto.migrate -r MyApp.Repo
You can update test/test_helper.exs in a Phoenix app to run only one repo's migrations like this:
Mix.Task.run "ecto.migrate", ["-r", "MyApp.Repo", "--quiet"]

Sequelize-cli how to create seed files from an existing database?

Issue:
In order to start in a clean environment when developing features for a web app, I would like to retrieve some data from an existing DB (say the first 10 rows of every table) and create one sequelize seed file per table. It would then be possible to seed an empty DB with these data using the corresponding models and migrations.
I have found the tool sequelize-auto, which seems to work fine for generating a model file from an existing DB (beware of already having, for example, a users.js model; it will be overwritten!): https://github.com/sequelize/sequelize-auto.
This tool creates a model file, but neither a migration nor a seed file.
Question:
Is there a way to build a seed file from an existing database?
I found this module, with which you can create (dump) a seed using the command
npx sequeliseed generate table_name --config
https://www.npmjs.com/package/sequeliseed

How to include static files on Serverless Framework?

I'm creating a Node.js service with the Serverless Framework to validate a feed, so I added a schema file (.json) to the service, but I can't make it work.
It seems not to be included in the package; Lambda doesn't find the file.
First I ran sls package and checked the zip contents, but the file is not present.
I also tried to include the file with:
package:
  include:
    - libs/schemas/schema.json
but it still doesn't work.
How can I make sure a static file is included in the package and can be read inside the lambda function?
It depends on how you are actually trying to load the file.
Are you loading it with fs.readFile or fs.readFileSync? In that case, Serverless doesn't know that you're going to need it. If you add a hint for Serverless to include that file in the bundle, also make sure you know where it is relative to your current working directory or your __dirname.
Are you loading it with require()? (Did you know that you can use require() to load JSON files?) In that case, Serverless should know that you need it, but you still need to make sure that you got the path right.
If all else fails then you can always use a dirty hack.
For example, if you have a file x.json that contains:
{
  "a": 1,
  "b": 2
}
then you can change it to an x.js file that contains:
module.exports = {
  "a": 1,
  "b": 2
};
which you can load just like any other .js file, but it's a hack.
From what I've found, you can do this in several ways:
As stated in another answer: if you are using webpack, you need a webpack plugin to include files in the Lambda zip file.
If you are not using webpack, you can use the serverless package command (include/exclude).
You can create a layer and reference it from the Lambda (the file will be in /opt/<layer_name>). Take into consideration that as of today (Nov 2020) I haven't found a way of doing this with serverless.ts without publishing the layer first (the Lambda's layer property is an ARN string and requires the layer version).
If you're worried about security, you can use AWS Secrets Manager, as stated in this answer.
You can do what @rsp says and include the file in your code.
For those struggling to find a solution in 2022: use the package.patterns parameter. Example:
package:
  patterns:
    - libs/schemas/schema.json
    - '!libs/schemas/schema-prod.json'
(! in front of a file path excludes the specified pattern)
Documentation: https://www.serverless.com/framework/docs/providers/aws/guide/packaging

Nodejs require config

I am writing Node.js at the moment and I was wondering which is better for requiring configuration:
In my main file I require conf.js only once and then pass it to the other files: require('./jwt')(config)
In every file where I need something from the config, I require it directly.
Which one is better? I think it's the first, but I have some files that are used by the controllers (e.g. jwt.js, which verifies and creates tokens). Is it best practice to require this module in the main file (where I don't need it) and pass the config, or to use the second way?
If your main file already requires every other file, then the first option is better; there is no need to add
var LatLonModule = require('conf.js');
in every file.
Otherwise, you can choose the second option.
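The first pattern is plain dependency injection. A toy sketch of the factory shape (the sign/verify bodies are fake placeholders, just to show the wiring; a real jwt.js would use a library such as jsonwebtoken):

```javascript
// Pattern 1: jwt.js exports a factory that receives the config once.
const makeJwt = (config) => ({
  // placeholder implementations, not real JWT signing
  sign: (payload) => `${payload}.${config.secret}`,
  verify: (token) => token.endsWith(`.${config.secret}`),
});

// main.js wires it up once; with real files this would be
// const jwt = require('./jwt')(config);
const config = { secret: 's3cret' };
const jwt = makeJwt(config);
console.log(jwt.verify(jwt.sign('user42'))); // prints true

// Pattern 2 would be `const config = require('./conf')` inside each
// file. Node caches modules, so conf.js is evaluated only once either
// way; the trade-off is explicitness, not performance.
```

Because of the module cache, the second pattern costs nothing extra at runtime; the first mainly buys you easier testing, since you can pass a fake config to the factory.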
