Expediency of using Faker in Minitest fixtures

Like a lot of Rails programmers these days, I'm moving from RSpec to Minitest. I loved having beautiful, meaningful data in my tests, generated with Faker in FactoryGirl factories. However, I was surprised to see a different approach in Minitest fixtures: in none of the examples I've found is Faker used at all. So my question is: what approach should I use for fixtures in Minitest? Should I use Faker to fill in fixtures or not?

There's nothing wrong with using Faker in your fixtures, but I think the answer to your question comes down to the fundamental difference between the two. Both serve the purpose of providing data for running your tests, but factories are generators with the potential to produce models. That means that they're used to create new objects as they're needed in your tests, and in particular, they're used as a shortcut for creating new objects with valid and predictable data without having to specify every attribute.
FactoryGirl.define do
  factory :user do
    first_name "John"
    last_name "Doe"
    admin false
  end
end

user = build(:user, first_name: "Joe")
On the other hand, fixtures give you real data in your application DB. These are created apart from test execution and without validation, so the tendency as applications grow is to have to manage all fixtures as a single test data set. This is a big part of the argument against fixtures, but both factories and fixtures have their strengths and weaknesses.
To answer your question directly though, the reason you haven't found more examples of fixtures using Faker might be because fixture data, being fixed, needs to be more tightly controlled than factory definitions, and some of it might also be that developers inclined to use Minitest might also be inclined to remove unnecessary dependencies from their Gemfiles. But if you want to continue to use Faker for names and addresses and such, there's no reason not to.
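Concretely, Rails fixtures are YAML files that are run through ERB before they're loaded, so nothing stops you from calling Faker inside them. A minimal sketch (the file name and attributes are illustrative):

# test/fixtures/users.yml
john:
  first_name: <%= Faker::Name.first_name %>
  last_name: <%= Faker::Name.last_name %>
  admin: false

Just keep in mind that the ERB is evaluated when the fixtures load, so the generated values differ from run to run; any test that asserts on a specific name should use a literal value instead.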

Related

Django Wagtail dynamically create form without new model

How would I allow my primary users to dynamically create forms they can issue to their end clients? Each of my primary users has their own unique information they would like to collect that I don't know beforehand. I would like to avoid creating new models in code for their dynamic needs and then having to migrate those models.
I came across this, which had an interesting response, but it starts with a disclaimer:
The flexibility of Python and Django allow developers to dynamically create models to store and access data using Django’s ORM. But you need to be careful if you go down this road, especially if your models are set to change at runtime. This documentation will cover a number of things to consider when making use of runtime dynamic models.
That leads me to believe a lot can go wrong.
However, because I'm using Wagtail, I believe there is probably a way to use StructBlocks and StreamFields to accomplish this.
Any guidance would be helpful.
Wagtail provides a form builder module for this purpose.
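For reference, here is a minimal sketch of the form builder setup, adapted from the pattern in the Wagtail documentation (exact import paths vary a little between Wagtail versions; the page and field names are illustrative):

from django.db import models
from modelcluster.fields import ParentalKey
from wagtail.admin.panels import FieldPanel, InlinePanel
from wagtail.contrib.forms.models import AbstractEmailForm, AbstractFormField

class FormField(AbstractFormField):
    # Each row is one editor-defined field: label, field_type, required, ...
    page = ParentalKey("FormPage", on_delete=models.CASCADE, related_name="form_fields")

class FormPage(AbstractEmailForm):
    intro = models.TextField(blank=True)

    content_panels = AbstractEmailForm.content_panels + [
        FieldPanel("intro"),
        InlinePanel("form_fields", label="Form fields"),
    ]

With this in place, your users define whatever fields they need per page in the Wagtail admin; no new models or migrations are required for each new form.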
I have two possible solutions for you, although it should be said that there is probably some Django library I don't know about that already does this. That being said:
1. Prompt your user for which fields they want and the field type.
2. Pass this as a dictionary to some function that generates the HTML for the form.
3. When this form is used, instead of worrying about storing the fields separately, store the submission as a dictionary on the model. There are a couple of ways to do that (one is sketched below).
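A minimal sketch of that dictionary-storage idea, assuming Django 3.1+ for models.JSONField (the model and field names are illustrative):

from django.db import models

class FormDefinition(models.Model):
    # The field spec the user built, e.g. {"age": "number", "name": "text"}
    schema = models.JSONField(default=dict)

class FormSubmission(models.Model):
    definition = models.ForeignKey(FormDefinition, on_delete=models.CASCADE)
    # One submitted form, keyed by the field names in the schema
    data = models.JSONField(default=dict)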
Another way you could do this, albeit more convoluted but perhaps more suited to your needs, is to use MongoDB as the database for Django instead. Because it is schemaless, it might be better suited to your use case. Instructions on using MongoDB with Django are here.
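If you do go the MongoDB route, one commonly cited bridge is djongo, which lets the Django ORM talk to MongoDB; a minimal settings sketch, assuming djongo is installed (the database name is illustrative):

# settings.py
DATABASES = {
    "default": {
        "ENGINE": "djongo",  # djongo translates ORM queries to MongoDB
        "NAME": "dynamic_forms_db",
    }
}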

NestJS dependency injection order when a module depends on a Mongoose model from @nestjs/mongoose, detailed diagram inside

The diagram does a good job of explaining the flow I currently have and why it is failing.
I have a logger.module that exports a logger.service that depends on a @nestjs/mongoose LogModel.
I have a db.module that exports session.service and imports logger.module.
I have a session.service that is exported by the db.module and imports logger.service.
I have a mock.db.module that is exactly like the real db.module (no mocked services, the real ones), except the Mongoose connection is to an in-memory MongoDB.
I have a session.service.spec test file that imports mock.db.module.
However, I can't find any good way of providing LogModel into logger.module that doesn't require me to import @nestjs/mongoose and instantiate/wait for a new connection on every startup.
I was only able to produce two results:
use forwardRef(() => Logger.module.register()) and/or forwardRef(() => Db.module.register()), which causes a heap allocation error;
don't use forwardRef, and get circular dependency warnings.
How can I map these dependencies efficiently in NestJS for this use case?
Diagram:

Code generation against Sprocs?

I'm trying to understand the choices for code-generation/ORM tools and to discover which solution will best meet my requirements and current limitations.
I'm creating a foundational solution to be used for new projects. It consists of ASP.NET MVC 3.0 and layers for business logic and data access. The data access layer will need to go against Oracle for now, then switch to SQL Server this year as the database migration is finished.
From a DTO standpoint, mapping to custom types in the solution, what ORM/code-generation tool can create the code I need while ONLY accessing stored procs in Oracle and SQL Server?
Meaning, I need to generate the custom objects that come back from the stored procedures, and those pushed to them as parameters; I don't need to generate the sprocs themselves, as they already exist. I'm looking for the representation of what the sproc needs and gives back to be generated into DTOs. In some cases I can go against views and generate DTOs, and I'm assuming most tools already do this. But 90% of the time I don't have direct access to any tables or views, only stored procs.
Does this make sense?
ORMs are best at mapping objects to tables (and/or views), not mapping objects to sprocs.
Very few tools can do automated code generation against whatever output a sproc may generate, depending on the complexity of the sproc. It's much more straightforward to generate code for the input to a sproc, as that is generally well defined and clear.
I would say if you are stuck with sprocs, your options for using third party code to help reduce your development and maintenance time are severely limited.
I believe LinqToSql or EntityFramework (or both?) are capable of some magic with regard to SQL Server, trying to figure out, mostly automatically, what a sproc may be returning. I don't think it works all the time; it's just sophisticated guesswork, and I seriously doubt it would work with Oracle. I am not aware of anything else software-wise that even attempts to figure out what a sproc may return.
A sproc can return multiple diverse record sets that can be built dynamically by the sproc depending on the input and data in the database. A technical solution to automatically anticipating sproc output seems like it would require the following:
A static set of underlying data in the database
The ability to pass all possible inputs to the sproc and execute the sproc without any negative impact or side effects
That would give you a static set of possible outputs for any given valid input. A small change in the data in the database could invalidate everything.
If I recall correctly, the magic Microsoft did was something like calling the sproc passing NULL for all input parameters and assuming the output is always exactly the first recordset that comes back from the database. That is clearly an incomplete solution to the problem, but in simple cases it appears to be magic because it can work very well some of the time.
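To make that concrete, here is a rough Python sketch of the same first-recordset trick, assuming SQL Server, pyodbc, and a hypothetical one-parameter sproc (on SQL Server 2012+, sp_describe_first_result_set is a cleaner alternative to the legacy SET FMTONLY approach):

import pyodbc

def describe_sproc_output(conn, sproc_name):
    cursor = conn.cursor()
    # FMTONLY asks SQL Server for result-set metadata without running
    # the procedure body; NULL stands in for the parameter. This is
    # exactly the guesswork described above, with the same limitations.
    cursor.execute(f"SET FMTONLY ON; EXEC {sproc_name} NULL; SET FMTONLY OFF;")
    return [(col[0], col[1].__name__) for col in cursor.description]

conn = pyodbc.connect("DSN=MyDsn")  # hypothetical DSN
for name, py_type in describe_sproc_output(conn, "dbo.GetCustomer"):
    print(f"{name}: {py_type}")  # e.g. emit one DTO property per column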

What is the best method for setting up data for ATDD style automation?

I assume that most implementations have a base set of known data that gets spun up fresh for each test run. I think there are a few basic schools of thought from here:
1. Have test code use application calls to produce the data.
2. Have test code spin up the data manually via direct datastore calls.
3. Have that base set of data encompass everything you need to run the tests.
I think it's obvious that #3 is the least maintainable approach, but I'm still curious whether anyone has been successful with it. Perhaps you could have databases for various scenarios and drop/add them from test code.
It depends on the type of data and your domain. I had one unsuccessful attempt when the schema wasn't stable yet: we kept running into problems adding data to new and changed columns, which broke the tests all the time.
Now we successfully use starting-state data where the dataset is largely fixed, the schema is stable, and the data is required in the same state for all tests (e.g., a postcode database).
For most other stuff, the tests are responsible for setting up data themselves. That works for us!
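As a minimal sketch of option 1 in a pytest-style harness (myapp.services, create_user, and place_order are hypothetical stand-ins for your application's own calls):

import pytest
from myapp import services  # hypothetical application layer

@pytest.fixture
def customer():
    # Create data through the application itself, so the setup survives
    # schema changes that a raw INSERT would not.
    return services.create_user(name="Pat Example", postcode="AB1 2CD")

def test_customer_can_place_order(customer):
    order = services.place_order(customer.id, sku="WIDGET-1")
    assert order.status == "pending"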

Using metamorphic code to reduce boilerplate

Has anyone seen metamorphic code -- that is, code that generates and runs instructions (including IL and Java Bytecode, as well as native code) -- used to reduce boilerplate code?
Regardless of the application or language, typically one has some database code to get rows from the database and return a list of objects. Of course, there are countless ways of doing this based on your database connector. You might end up accessing the cells of the row by index (awkward, because changing "SELECT Name, Age" to "SELECT Age, Name" would break your code, plus the indexes obfuscate meaning), or using myObject.Age = resultRow.getValue("Age") (awkward, because this involves simply going through every field to set its data based on the columns).
Keeping with the database theme, LINQ to SQL is awesome. However, defining data models is less awesome, especially when your database has so many tables that SSMS can't list all of them in the object browser. Also, it's not the stored procedure writing or the SQL involvement that I dislike; just the connection of objects to database.
Someone at the company where I intern wrote a really awesome method on our SqlCommand class (which inherits from the System one) that uses .NET reflection, via System.Reflection.Emit, to generate a method that sets fields (decorated with an attribute containing the name of the column) on any model object with a nullary constructor. I would consider this metamorphic because a specific part of the program writes new methods.
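The same trick can be played in any language with runtime code generation. Here is a toy Python sketch, not the .NET method described above, that builds and compiles a dedicated row-mapper function at runtime from a column list (column names are assumed to be valid identifiers):

class User:
    pass

def make_mapper(cls, columns):
    # Generate the source of a mapper specialized for cls and columns,
    # then exec it: the compiled function has no loops or reflection
    # left at call time.
    lines = ["def map_row(row):", f"    obj = {cls.__name__}()"]
    for col in columns:
        lines.append(f"    obj.{col} = row[{col!r}]")
    lines.append("    return obj")
    namespace = {cls.__name__: cls}
    exec("\n".join(lines), namespace)
    return namespace["map_row"]

map_user = make_mapper(User, ["name", "age"])
user = map_user({"name": "Ada", "age": 36})
print(user.name, user.age)  # Ada 36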
This pattern of generating objects from the database is just one example. Another, which I came across two days ago, is databinding support for SWT (via JFace). I made these perfectly clean models with setAddress(Address address) and getName(), and now I have to pollute the setters with PropertyChangeSupport fire calls and carry around a PropertyChangeSupport instance (even if it is just in an abstract base class)! Then I found PojoBindables, and now I feel like a level-80 databinder, simply because I need to write less.
Specifically, things that use native code with something like this or a Java Agent would be really sweet.
Generic programming might be up your alley. The Concept C++ website has a really good tutorial that covers abstraction and lifting, ideas that can be used in any language to turn boilerplate code into a positive force. By examining a bunch of boilerplate methods that are almost exactly the same, you can derive a set of requirements that unite the code conceptually ("to make X happen you must do Y; to make X1 happen you must do Y with difference 1"). From there you can use a template to capture the commonalities and use the template inputs to specify the differences. C# and Java have their own generics implementations at this point, so they might be worth checking out.
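As a tiny illustration of lifting in Python terms (the parsing task is invented for the example): start from two near-identical functions, extract the difference as a parameter, and the boilerplate collapses into one generic routine.

from typing import Callable, TypeVar

T = TypeVar("T")

# Before lifting: two almost-identical boilerplate functions.
def parse_ints(raw: str) -> list[int]:
    return [int(tok) for tok in raw.split(",")]

def parse_floats(raw: str) -> list[float]:
    return [float(tok) for tok in raw.split(",")]

# After lifting: the shared requirement ("each token must be
# convertible from a string") becomes the type parameter T, and the
# difference becomes the convert argument.
def parse_list(raw: str, convert: Callable[[str], T]) -> list[T]:
    return [convert(tok) for tok in raw.split(",")]

assert parse_list("1,2,3", int) == [1, 2, 3]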
