How to use an existing DB for unit testing in Django? - python-3.x

I have created a REST API application using Django REST Framework.
The API just converts data from an existing read-only Postgres DB into a REST API to be consumed by the front end.
Now I need to write some unit tests for the application. Since there are lots of tables and views involved, I cannot create mock data for everything.
Is there a way to run the tests in Django against the existing (Postgres) DB without creating any mock data?
Note: I have read a lot of SO posts related to my problem, but none of them worked, as they were for old versions of Django.

You can't use the Django test command to look at production data: it always creates an empty test database that is populated from fixtures defined in the TestCase.
This is not good practice and should not be done, but if you want to achieve what you stated, you can try this -
DATABASES = {
    'default': {
        ...
        'TEST': {
            'NAME': 'your prod db',
        },
    },
}
in your settings.py file.
(Be warned that this may interfere with your production database.)
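If you combine this with the --keepdb flag, Django will reuse the existing test database instead of recreating and destroying it on every run (note that it will still apply any pending migrations to keep it up to date):
python manage.py test --keepdb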
I would suggest using hypothesis and factories to generate test data instead, as that will help you in the long run and is far more robust.
What you are suggesting would not really be called a test, though, so it may be worth reconsidering how you write tests for your application.

You could do something like this in your settings.py
import sys

if 'test' in sys.argv:
    DATABASES['default'] = {
        # your default database credentials
    }

Related

node and express - how to use fake data

It's been a while since I used Node and Express, and I was sure that this was possible, but I'm having trouble figuring it out now.
I have a simple Postgres database with Sequelize. I am building a back end and don't have a populated database yet. I want to be able to provide fake data to build the front end against and to test with. Is there a way to populate a database when the Node server is started? Maybe by reading a JSON file into the database?
I know that I could point to this fake data using a setting in the environment file, but I don't see how to read in the data on startup. Is there a way to create a local database, read in the data, and point to that?
You can use the faker-factory package; I think it can solve your problem.
https://www.npmjs.com/package/faker-factory
FakerJs provides that solution.
import { faker } from '@faker-js/faker';

const randomName = faker.person.fullName(); // formerly faker.name.findName()
const randomEmail = faker.internet.email();
With the above, you can run a loop (a for loop, to be specific) to create the desired data you may need for your project, as sketched below.
Also, check out the free web APIs that provide fake or real data to work with.
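For example, here is a minimal sketch of seeding a Sequelize model with faker data on server startup; the Company model, its fields, and the module path are hypothetical placeholders:
import { faker } from '@faker-js/faker';
import { sequelize, Company } from './models.js'; // hypothetical module

// Generate `count` fake rows and bulk-insert them.
async function seedFakeData(count = 50) {
  const rows = [];
  for (let i = 0; i < count; i++) {
    rows.push({
      name: faker.company.name(),
      email: faker.internet.email(),
    });
  }
  await Company.bulkCreate(rows);
}

// Run once on server start, after the schema is in place.
sequelize.sync().then(() => seedFakeData());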

Environment specific configuration of datasources in loopback4 application

I have just started my first LoopBack project and have chosen the loopback4 version for the application. It's purely a server application which will interact with databases (Redis and MongoDB) and call external API services, as part of a microservice architecture.
Now, I have 3 datasources in my application, i.e. MongoDB, Redis, and a REST-based datasource to call external services. I am facing 2 problems going forward.
1. Environment-specific configuration of datasources: I need to maintain configuration for all three datasources according to the NODE_ENV environment variable. For lb3 I found this solution:
https://loopback.io/doc/en/lb3/Environment-specific-configuration.html#data-source-configuration
which does not work in lb4. One solution is to add configuration files named mongodb.staging.json and mongodb.production.json (and the same for the redis and rest datasources) in the directory src/datasources, load the right config according to the NODE_ENV variable using an if condition, and pass it to the constructor of the datasource. It works, but it does not seem nice, as it should be the application's responsibility to do this.
Can somebody suggest an lb4 equivalent of the lb3 solution above?
2. Calling external APIs via a datasource: in lb4, to call external services it is recommended to have a separate REST-based datasource, and a service to call it via a controller. In the REST datasource config, one has to define a template of all the API calls that will go to the external service: https://loopback.io/doc/en/lb4/REST-connector.html#defining-a-custom-method-using-a-template.
As my application calls the external service heavily, with a relatively large number of request parameters, it becomes really messy to declare each API call with its request params and maintain this in the datasource config, which has to be environment-specific.
Can somebody suggest a more robust and cleaner alternative for this?
Thanks in advance!!
Using environment variables in datasource configs
The datasource config is simply a JSON file that's imported into *.datasource.ts. Hence, you can replace that JSON file with a TypeScript file and import it accordingly. LoopBack 4 does not provide any custom variable-substitution mechanism; instead, it is recommended to use process.env.
Recent CLI versions replace the JSON config in favour of a single TypeScript file:
import {inject} from '@loopback/core';
import {juggler} from '@loopback/repository';

const config = {
  name: 'db',
  connector: 'memory',
};

export class DbDataSource extends juggler.DataSource {
  static dataSourceName = 'db';
  static readonly defaultConfig = config;

  constructor(
    @inject('datasources.config.db', {optional: true})
    dsConfig: object = config,
  ) {
    super(dsConfig);
  }
}
The dependency injection in the constructor allows you to override the config programmatically via the IoC container of the application.
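Since the config is now plain TypeScript, environment-specific values can also be read from process.env directly. A minimal sketch (the connector and variable names are illustrative):
const config = {
  name: 'db',
  connector: 'postgresql',
  host: process.env.DB_HOST ?? 'localhost',
  port: +(process.env.DB_PORT ?? 5432),
  database: process.env.DB_DATABASE,
};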
Further reading
https://loopback.io/doc/en/lb4/DataSources.html
Calling external APIs without REST connector
The REST connector enforces a well-defined interface for querying external APIs, so that requests can be validated before they are sent out.
If this is not favourable, it is possible to create a new Service as a wrapper around the HTTP calls. From there, you can expose your own functions to handle requests to an external API. As Services do not need to follow a rigid structure, it is possible to customise them to your use-case.
It is also possible to create a new request directly inside the controller using either built-in or external libraries.
Overall, there isn't a 100% right or wrong way of doing certain things in LoopBack 4, which is why the framework provides numerous ways to tackle the same issue.
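For illustration, a minimal sketch of such a service wrapper, assuming the axios HTTP client and a hypothetical external API:
import {injectable, BindingScope} from '@loopback/core';
import axios from 'axios';

@injectable({scope: BindingScope.TRANSIENT})
export class ExternalApiService {
  // Hypothetical base URL; in practice read it from configuration.
  private readonly baseUrl = process.env.EXTERNAL_API_URL ?? 'https://api.example.com';

  // One plain method per external call, instead of a connector template.
  async getUser(id: string): Promise<unknown> {
    const response = await axios.get(`${this.baseUrl}/users/${id}`);
    return response.data;
  }
}
The service can then be injected into a controller, e.g. with the @service decorator from @loopback/core.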

Are migrations really needed in NodeJS Sequelize?

In Sequelize, when we create a model using the following command,
sequelize model:generate --name Company --attributes name:string,desc:text
A migration file is also getting created. And we can make the models sync with the DB by adding the following piece of code.
models.sequelize.sync().then(() => {
  console.log("DB Synced");
}).catch((error) => {
  console.log(error);
});
Therefore, when there's a change in the column names or something, they get synced with the DB.
So do we really need to run migrations at all, at any point in development or production?
Please correct me if I am mistaken.
I strongly recommend using explicit migrations, and pointing your Sequelize config at the tables that store applied migrations and seeds, in case you have different versions of the database in different production environments (even if you are only planning different production deployments).
If you run a newer version of your app, with some model changes, against an older production DB, then executing the sync method changes that production DB accidentally. Moreover, you cannot reverse these changes easily, because you know nothing about the differences between the models in your current app and the tables in the production DB.
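For contrast, a minimal sketch of an explicit, reversible migration (the table and column names are illustrative):
'use strict';

module.exports = {
  // Applied with: npx sequelize-cli db:migrate
  async up(queryInterface, Sequelize) {
    await queryInterface.addColumn('Companies', 'website', {
      type: Sequelize.STRING,
      allowNull: true,
    });
  },

  // Reverted with: npx sequelize-cli db:migrate:undo
  async down(queryInterface) {
    await queryInterface.removeColumn('Companies', 'website');
  },
};
Because every change has an explicit down step, you always know how to move a production DB between schema versions, which sync cannot tell you.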

Loopback4 challenges

I am very new to LoopBack 4. When setting up my project I ran into some issues. Below are a few of them.
Environment based datasource loading
There is no direct way to load the datasource based on the environment.
Some configuration/constant variables need to be defined in a JSON file so they are accessible across the entire application; again, this is also based on the environment.
Not able to connect to a MongoDB Atlas database. In an Express application I am able to connect, but not in LoopBack. Below is the error it returns.
url.dbName || self.settings.database,
^
TypeError: Cannot read property 'dbName' of null
Not able to achieve model relations.
I don't want to return the entire Model in my API response. How can I customize my API response using the Model?
I want to write my business logic in a separate file, not in a controller/repository. Is that a good idea, or where should the business logic live? What are the best practices?
I can't find proper documentation on LoopBack 4 for these issues. Any help would be appreciated.
Let me try and help you with a few of these.
1 - You can do environment-based datasource config loading by adding the below to the constructor of your datasource.ts file.
constructor(
  @inject('datasources.config.pgdb', {optional: true})
  dsConfig: object = config,
) {
  // Override data source config from environment variables
  Object.assign(dsConfig, {
    host: process.env.DB_HOST,
    port: process.env.DB_PORT,
    user: process.env.DB_USER,
    password: process.env.DB_PASSWORD,
    database: process.env.DB_DATABASE,
    schema: process.env.DB_SCHEMA,
  });
  super(dsConfig);
}
After this you can use packages like dotenv to keep env vars out of your repo.
2 - Use dotenv. Load the dotenv config in application.ts by adding this at the end:
dotenv.config();
You may need to import dotenv like this:
import * as dotenv from 'dotenv';
3 - Not sure about this, but check whether it is supported by the datasource generator here.
4 - There are currently only 3 types of relations supported: belongsTo, hasMany, and hasOne. In my experience, that is enough for most applications. Refer to the docs here for details.
5 - You can return any custom model you want. Just make sure that it extends the Entity class from @loopback/repository, and that you define property types using the @property decorator; see the sketch after this list.
6 - You can move your business logic into service classes, or create providers as well. We used to keep DB-specific operational logic, like custom queries, in the repository, and the rest of the business logic inside the controller. But if there is big, complex logic, create a provider class and do it there. Refer to the docs for providers here.
We also created a boilerplate starter project on GitHub to help community members like you get a kick start with some of the basic stuff. Most of the above-mentioned stuff is implemented there. You can just clone it, change the remote URL, and you are all set to go. Take a look here.
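Regarding point 5, a minimal sketch of a trimmed-down response model (the class and property names are illustrative):
import {Entity, model, property} from '@loopback/repository';

// Exposes only the fields the API response should contain.
@model()
export class CompanySummary extends Entity {
  @property({type: 'string', id: true})
  id: string;

  @property({type: 'string'})
  name: string;
}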

Full stack testing Ember.js and Laravel app

I am working out how best to test an application we are developing, using Laravel as the backend (REST API) and Ember.js as the front end.
I am currently using Behat to run acceptance tests for the API. I would like to use Cucumber.js to also create and test features on the Ember side of the project.
I already have an "acceptance" environment dedicated to running the Behat features in the API.
The issue I can see with "full stack" testing from the JS side is how to clean and reseed the database.
My thoughts are to add a route for testing purposes only, e.g.:
if (App::environment() == 'acceptance') {
    Route::get('/test/reseed', function () {
        // delete db if exists
        Artisan::call('migrate');
        // call other migrations...
        Artisan::call('db:seed');
        return "ok";
    });
}
This way only the acceptance environment has access to that route, and it should be all that's needed to set up the API side of things ready for Cucumber.js and Ember to run the tests (probably via Node.js).
I'm asking here to see if there is anything glaring I might have missed, or if there is another way to do this.
The reason I want to be able to test purely from the JS side is so that we can test the systems independently of each other. I am aware that I can test the full application with Behat, Mink, and PhantomJS, for example.
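On the Cucumber.js side, a minimal sketch of a hook that hits the reseed route before each scenario (the package name matches current Cucumber.js releases, and the base URL is a placeholder):
import { Before } from '@cucumber/cucumber';

// Reset and reseed the API database before every scenario.
Before(async function () {
  // Uses the global fetch available in Node 18+; the host is a hypothetical acceptance URL.
  const response = await fetch('http://api.acceptance.local/test/reseed');
  if (!response.ok) {
    throw new Error('Failed to reseed the test database');
  }
});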
