How to correctly design an application architecture in NestJS? - nestjs

I just started studying my first serious framework. With the code itself there are no problems, but I can't find examples of how to design the application architecture.
For example, I have worked out the related tables in the database, but how do I structure the code correctly?
A rough example; these are the directory layouts:
<src>
  <categories>
    <dto>
      create-category.dto.ts
      create-subcategory.dto.ts
    <models>
      category.model.ts
      subcategory.model.ts
    <services>
      category.service.ts
      subcategory.service.ts
    category.controller.ts
    category.module.ts
OR
<src>
  <categories>
    <dto>
      create-category.dto.ts
    category.model.ts
    category.service.ts
    category.controller.ts
    category.module.ts
  <subcategory>
    <dto>
      create-subcategory.dto.ts
    subcategory.model.ts
    subcategory.service.ts
If possible, send me a link where I can read about this.

It's a long story. Basically (except for simple CRUDs), an entity doesn't equal a module. You can organize directories however you prefer, but you have to focus on defining modules and their boundaries. https://twitter.com/_MaciejSikorski/status/1505613059221594113
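To make that concrete: with the first layout, one categories module can own both entities. A minimal sketch of the module, assuming the file names above (everything here is illustrative, not a prescription):

// categories/category.module.ts
import { Module } from '@nestjs/common';
import { CategoryController } from './category.controller';
import { CategoryService } from './services/category.service';
import { SubcategoryService } from './services/subcategory.service';

@Module({
  controllers: [CategoryController],
  // Both entity services live in the same module because they share one boundary
  providers: [CategoryService, SubcategoryService],
  // Export only what other modules actually need
  exports: [CategoryService],
})
export class CategoryModule {}

Splitting subcategories into their own module (the second layout) only pays off once they form a boundary of their own, e.g. with a separate controller and their own consumers.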


How to invoke whatever generator rule is configured for a concrete instance of an abstract concept?

I have a collection of nodes of concept Command that I'm iterating over with a $LOOP$ macro. Command is an abstract concept. I have defined templates and reduction rules for concrete subconcepts, such as Outline:
template tpl_Outline
input Outline
...
and
reduction rules:
[concept Outline ] --> tpl_Outline
[inheritors false ]
[condition <always>]
Question: How would I invoke the appropriate generator rule for the concrete concept from inside the $LOOP$ macro where the nodes are only known to be of the abstract type Command?
[EDIT] Since the proposed answer is specific to looping over a collection of elements, how would I do the same when there's no looping? That is, how to trigger the configured rule for a given node (e.g. a certain child of the current node).
Note 1: I tried using just $LOOP$[null], hoping for the element nodes to be processed by appropriate rules automatically, but that just produced nulls in the output.
Note 2: I tried $LOOP$[$COPY_SRC$[null]], but that produced
textgen error: 'No textgen for Draw.structure.Outline' in [actualArgument] Outline null[847086916111387210] in Draw.sandbox#0
[EDIT 2] This is actually a working solution. What helped was probably invalidating the caches (just Rebuild Project was not working).
Note 3: Previously I used a template switch to call an appropriate template based on concrete concept, but I now want to support custom extensions of Command so I can no longer create an exhaustive template switch.
Try using $COPY_SRCL$ (the L stands for Loop here); this macro is designed exactly for your situation.
Also, template switches are extensible.
Regarding your Build --> Rebuild Project problem: sometimes File --> Invalidate Caches can help to resolve such problems.

Convention for passing arguments to non-Silicon subblocks/helpers

Sorry if the title is a bit confusing, but what are the options/conventions that Origen provides for setting up subblocks that aren't necessarily silicon models, or are just general helpers?
For example, I have a scan helper plugin that guides the user through creating a scan test program. I'd like to add a list of options/customizations to the top-level app. There are a few ways to do this:
1. I can add a list of attr_readers/methods. I think this looks a bit ugly though, adds a bunch of stuff to the top level that isn't used by anything else, and it blows up $dut.methods.
2. I could use parameters as defined here: http://origen-sdk.org/origen/guides/models/parameters/ and just look them up from the scan tester app (see the sketch after this list). But looking at the guides I don't think that is the intended use case; it looks more like context switching, but maybe that was just the example use case.
3. I could add a scan_tester.setup method or something on the top level. This just seems unnecessary though, since it's basically doing the same thing as #2 but requires a 'setup' method to be called. Yes, it's only one line, but if you mess up or forget to add that line then you've got some debugging to do that #2 avoids (I can print a warning, for example, if the scan parameters aren't provided, to help catch typos, etc.).
4. I can set it up as a subblock (currently how I've got it), but this doesn't really fit. Scan isn't a silicon model, so a base address is useless, but required; it has no registers, etc.
Then there are other 'Ruby' things I could do (set up via on_create, use a global variable, etc.), but these all seem less attractive than the options above for one reason or another (mainly, more setup required on my part than using any of the existing options).
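For reference, option #2 would look roughly like this, going by the parameters guide linked above (the parameter names and values are made up for illustration):

# In the top-level (DUT) model
define_params :default do |params|
  params.scan_chain_length = 6
  params.scan_wgl_dir = "#{Origen.app.root}/pattern/wgl"
end

# The scan helper plugin can then read them from whatever DUT is instantiated
$dut.params.scan_chain_length   #=> 6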
Any one of these would work. But from a convention standpoint, which direction should my scan tester setup go? Is there another option I haven't considered? I'd lean towards option #2 as it looks the cleanest.
Thanks
This is a really good question.
There are actually two other options:
5. Add application config parameters from the plugin: http://origen-sdk.org/origen/release_notes/#v0_7_24
6. Define a constant as used by the JTAG and other early plugins: http://origen-sdk.org/jtag/#How_To_Use
I think #2 is using parameters in a way that was not originally intended; maybe it could work, but I just can't picture it.
I don't really like #5 or #6 since they provide application-level and class-level configuration, which is sometimes what you want, but often these days I see the need more for (DUT) instance-level configuration.
So, my best answer here is that I don't know, but you are touching on a good point that we need to have an official API or at least a recommendation for this.
I think you should be open to the possibility of adding something new to Origen for this if you can think of something better.
As I'm writing this, I suppose #5 would also support instance-level configuration, albeit a bit long-winded:
def initialize(options = {})
  Origen.app.config.scan_chain_length = 6
end
My comment wouldn't keep its format, so here it is again in a form that reads better:
#Ginty
What would you think of a 'component' API? For example, we could have:
# components.rb
component(:scan, TIPScan::ScanTester,
  # options
  wgl_dir: ...,                          # defaults to Origen.app.root/pattern/wgl
  custom_sort: proc { |wgl_name| ... },
)
# then we can do things like:
$dut.scan                  #=> TIPScan instance
$dut.component(:scan)      #=> same as above
$dut.components            #=> [TIPScan instance, ...]
$dut.has_component(:scan)  #=> true, etc.
Pretty much just a stripped-down subblock class to handle these. I think our IAR/C compilers and even CATI could benefit from this, and it would make the setup cleaner and more customizable.

How to run one feature file as initialization (i.e. before all other feature files) in cucumber-jvm?

I have a cucumber feature file 'A' that serves to set up the environment (data cleanup and initialization). I want to have it executed before all other feature files run.
It's kind of like a @Before hook as in http://zsoltfabok.com/blog/2012/09/cucumber-jvm-hooks/. However, that does not work, because my feature file 'A' contains hundreds of cucumber steps and it is not as simple as:
@Before
public void beforeScenario() {
    tomcat.start();
    tomcat.deploy("munger");
    browser = new FirefoxDriver();
}
Instead, it's better to be able to run 'A' as a whole feature file.
I've searched around but did not find an answer. I am surprised that no one has had this type of requirement before.
The closest I found is 'background', but that means I can have only one huge feature file with the content of 'A' as the 'background' at the top, and the rest of my tests in the same file. I really do not want to do that.
Any suggestions?
By default, Cucumber features are run single thread in order by:
Alphabetically by feature file directory
Alphabetically by feature file name within directory
Scenario execution is then by order within the feature file.
So have your initialization feature in the first directory (alphabetically) with a file name that sorts first (alphabetically) in that directory.
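For example, a layout like this (directory and file names are made up for illustration) would make the setup feature run first:

features/
  00_init/
    00_setup.feature      <-- sorts first, so it runs first
  catalog/
    categories.feature
  orders/
    orders.feature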
That being said, it is generally bad practice to require an execution order for your feature files. We run our feature files in parallel, so order is meaningless. For Jenkins or TeamCity you could add a build step that executes the one feature file, followed by a second build step that executes the rest of your feature files.
I also have a project where we have a single feature file that contains a very long scenario called Scenario: Test data, made up of a lot of very long steps like this:
Given the system knows about the following employees
  | uuid | user-key   | name | nickname |
  | 1    | 0101140000 | Anna | annie    |
  ... hundreds of lines like this follow ...
We see this long "system knows" scenario as quite valuable, since it gives our testers, Product Owner, and developers a baseline of what data is in the system. Our domain is quite complex, and we need this baseline of reference data for everyone to be able to understand the tests.
(These reference data become almost like well-known personas, and are a shared team metaphor.)
In the beginning, we were relying on the alphabetical naming convention to have AAA.feature run first.
Later, we discovered that this setup was brittle, and decided to use the following trick, inspired by the PageObject pattern:
Add a background with the single line Given(~'^I set test data for all feature files$')
In the step definition, have a factory to create the test data, and make sure inside the factory method that it is only created once, e.g. testFactory.createTestData()
In this way, you have both the convenience of expressing reference setup as a scenario, that enhances team communication, but you also have a stable test setup.
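A minimal sketch of such a step definition in Java (class and method names are illustrative; the annotation lives in io.cucumber.java.en in recent cucumber-jvm versions, in cucumber.api.java.en in older ones):

import io.cucumber.java.en.Given;

public class TestDataSteps {

    // Shared across scenarios in the same JVM, so the data is only created once
    private static boolean testDataCreated = false;

    @Given("I set test data for all feature files")
    public void setTestDataForAllFeatureFiles() {
        if (!testDataCreated) {
            createTestData();
            testDataCreated = true;
        }
    }

    private void createTestData() {
        // Load the reference employees and other baseline data into the system here
    }
}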
Hope this is helpful!
Agata

Sphinx4 figuring out correct models

I am trying to use the Sphinx4 library for speech recognition, but I cannot seem to figure out the correct combination of acoustic model-dictionary-language model. I have tried out various combinations and I get a different error every time.
I am trying to follow the tutorial on http://cmusphinx.sourceforge.net/wiki/tutorialsphinx4. I do not have a config.xml as I would if I were using ConfigurationManager instead of Configuration, because there is no perceivable way of passing the location of the config file to the Configuration itself (ConfigurationManager takes it as a constructor argument); and that might be my problem right there. I just do not know how to point to one, and since the tutorial says "It is possible to configure low-level components of the application through XML file although you should do that ONLY IF you understand what is going on.", I assume having a config.xml file is not compulsory.
Combining the latest dictionary (7b, obtained from SourceForge) with the latest acoustic model (cmusphinx-en-us-5.2.tar.gz, from SF again) and the language model (cmusphinx-5.0-en-us.lm.gz, from SF again) results in a NullPointerException in startRecognition. The issue is similar to the problem here: sphinx-4 NullPointerException at startRecognition, but the link given in the answer no longer works. I obtained 0.7a from SF (since that is the dict the link seems to point at), but with that one I get an even earlier error during execution: Error loading word: ;;;. I tried downloading the latest models and dict from the GitHub repo, and that results in java.lang.IndexOutOfBoundsException: Index: 16128, Size: 16128.
Any help is much appreciated!
You need to use the latest code from GitHub:
http://github.com/cmusphinx/sphinx4
as described by the tutorial:
http://cmusphinx.sourceforge.net/wiki/tutorialsphinx4
The correct models (en-us) are already included; you should not replace anything. You should not configure any XML files; use the samples as provided in the sources.
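For reference, the bundled en-us models are addressed with resource: paths; a minimal sketch along the lines of the current tutorial (the exact model file names can differ between releases):

import edu.cmu.sphinx.api.Configuration;
import edu.cmu.sphinx.api.SpeechResult;
import edu.cmu.sphinx.api.StreamSpeechRecognizer;

import java.io.FileInputStream;
import java.io.InputStream;

public class TranscriberDemo {
    public static void main(String[] args) throws Exception {
        Configuration configuration = new Configuration();
        // Models shipped inside the sphinx4-data jar, no separate downloads needed
        configuration.setAcousticModelPath("resource:/edu/cmu/sphinx/models/en-us/en-us");
        configuration.setDictionaryPath("resource:/edu/cmu/sphinx/models/en-us/cmudict-en-us.dict");
        configuration.setLanguageModelPath("resource:/edu/cmu/sphinx/models/en-us/en-us.lm.bin");

        StreamSpeechRecognizer recognizer = new StreamSpeechRecognizer(configuration);
        InputStream stream = new FileInputStream("test.wav");  // 16 kHz, 16-bit mono PCM

        recognizer.startRecognition(stream);
        SpeechResult result;
        while ((result = recognizer.getResult()) != null) {
            System.out.println(result.getHypothesis());
        }
        recognizer.stopRecognition();
    }
}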

setupcon use a variant as default

Context
I'm building my complete Debian system configuration, so I'm modifying the keyboard and console setups. I prefer not to modify the base files, to keep maximum compatibility and modularity. So I want to use VARIANT files (see setupcon(5)) and load them at init. But I'm not sure I'm doing it right.
Desired Architecture
I will only use the keyboard file for the following example.
There is the base file /etc/default/keyboard and two possible custom files (according to setupcon(5)):
~/.keyboard
    provides custom behaviour per $HOME (per user)
/etc/default/keyboard.variant
    a global and default keyboard setup
I would like to use all three at the same time.
Problem
The daemons calling setupcon are console-setup and console-setup-mini (according to the comments in their init.d scripts). They are started before the login shell, so they won't know about ~/.keyboard.
To pick up a variant, setupcon needs to be called as
setupcon variant
or, looking at the sources, with the variable $VARIANT set:
VARIANT=variant
What is the best solution to adopt while preserving maximum modularity?
Thank you,
