Cannot find crate on refactor - rust

I am unable to understand why I cannot import a library into my file.
Here is what the folder looks like:
src/proofsystem/
├── proofsystem
│   ├── aggregation.rs
│   ├── mod.rs
│   └── simple.rs
└── mod.rs
In aggregation.rs, I import:
use crate::proofsystem::{prepare_circuit_and_public_input, gen_pk};
use crate::proofsystem::ModelInput;
use solax::Address;
I want to move solax::Address to mod.rs; however, the compiler can't seem to find the imports. mod.rs now looks like this:
/// Aggregation circuit
#[cfg(feature = "activate")]
pub mod suffice;
use crate::commands::{data_path, Cli};
use crate::fieldutils::i32_to_felt;
use crate::graph::{utilities::vector_to_quantized, Model, ModelCircuit};
use crate::tensor::{Tensor, TensorType};
use solax::Address;
It is important to note that solax is an optional dependency in Cargo.toml.
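If the build only fails when the feature pulling in solax is off, the usual fix is to put the same #[cfg] gate on everything in mod.rs that touches the optional dependency. Cargo gives an optional dependency an implicit feature with the same name (or you can route it through the existing activate feature with activate = ["solax"] in Cargo.toml). A minimal sketch, assuming the feature is named solax; the Verifier struct is purely illustrative:

// src/proofsystem/proofsystem/mod.rs
// Builds without `--features solax` must never see the `solax` name,
// so the import and every item that uses it get the same gate.
#[cfg(feature = "solax")]
use solax::Address;

#[cfg(feature = "solax")]
pub struct Verifier {
    pub address: Address, // hypothetical field, only to show a gated use site
}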

Related

Unable to inject a class-based provider in a function expression in nestjs

I have created the following project structure using NestJS
.
├── app.controller.ts
├── app.module.ts
├── app.service.ts
├── config
│   ├── config.controller.ts
│   ├── config.module.ts
│   └── config.service.ts
├── handlers
│   ├── handler1.ts
│   ├── handler2.ts
│   └── handlers.module.ts
├── main.ts
└── producer
└── producer.ts
Both the handler files, viz. handler1.ts & handler2.ts, look something like this:
export const handler = async (args) => {
...
...
};
I have a ConfigModule in which I've registered ConfigService as a provider. What I want is to somehow use ConfigService in handler1.ts & handler2.ts. Mind you, these handlers are not classes, but plain function expressions. I know that if they were classes, I could have injected ConfigService through the handler's constructor using dependency injection. Unfortunately, declaring these handlers as classes is no longer an option, because these handler files are consumed by producer.ts, which reads the entire file and expects an exported function.
I went through the NestJS documentation and found out about property-based injection and ModuleRef, but they were of no help. I also found this link, which I think is very close to my problem, but after going through the comments I found out this is not possible.
I don't have much liberty to change the existing code, but I'd still like to know what options I have here, ideally a solution that requires few changes. Thanks!
Note: File handlers.module.ts is serving no purpose in this scenario.
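One option that keeps the handlers as plain functions is NestJS's standalone application context: bootstrap the DI container once, then pull ConfigService out of it inside the function body. A rough sketch, assuming AppModule wires up ConfigModule; the caching helper getContext is illustrative:

// handler1.ts
import { INestApplicationContext } from '@nestjs/common';
import { NestFactory } from '@nestjs/core';
import { AppModule } from '../app.module';
import { ConfigService } from '../config/config.service';

// Create (and cache) a standalone DI context; no HTTP server is started.
let contextPromise: Promise<INestApplicationContext> | undefined;
function getContext(): Promise<INestApplicationContext> {
  if (!contextPromise) {
    contextPromise = NestFactory.createApplicationContext(AppModule);
  }
  return contextPromise;
}

export const handler = async (args: unknown) => {
  const ctx = await getContext();
  const config = ctx.get(ConfigService); // resolve the provider from the container
  // ... use `config` and `args` as before
};

The exported value is still a plain async function, so producer.ts can keep consuming the file unchanged.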

Rust - Include rust module in another directory

This is my directory structure
src/
├── lib.rs
├── pages/
│   ├── mod.rs
│   └── home_page.rs
└── components/
    ├── mod.rs
    └── header.rs
Inside my pages/home_page.rs I try to access my pub struct Header, which is inside components/header.rs.
My components/mod.rs contains pub mod header;, and that works fine because inside lib.rs I can use it like:
mod components;
use components::header::Header;
However, I don't know how to access it in pages/home_page.rs. How can I get access to that struct? Is it something in Cargo.toml?
You can use a whole bunch of Rust keywords to navigate between the modules of your crate:
super::components::Header
// `super` is like a `parent` of your current mod
crate::components::Header
// `crate` is like the root of your current crate
And to include submodules of current mod:
self::submodule1::MyStruct
// `self` is like current module
You can read more about that here
Also, it is a good idea to make a prelude mod for your crate and re-export your crate's main items there, so you can then bring them in just by writing use crate::prelude::*;.
You can read more about the prelude in the official Rust docs and here.
Inside my src/pages/home_page.rs I can use my header like: use crate::components::header::Header;
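Putting it all together: the crate:: path only resolves if lib.rs declares both modules, and components must be visible to its sibling (pub, or at least pub(crate)). A minimal sketch of the files involved; the render function is just there to show the import compiling:

// src/lib.rs
pub mod components;
pub mod pages;

// src/components/mod.rs
pub mod header;

// src/components/header.rs
pub struct Header;

// src/pages/mod.rs
pub mod home_page;

// src/pages/home_page.rs
use crate::components::header::Header;

pub fn render() -> Header {
    Header // hypothetical; returns the imported struct to prove it resolves
}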

Terraform - How to use tfvars with modules

With three different environments, I want to be able to dynamically set variables based on the environment. In my example below, let's say the instance type is different between dev and prod. I'm not able to reference instance_type within the module UNLESS I have a vars.tf file alongside my terraform.tfvars.
The error I get is:
unknown variable referenced: 'instance_type'. define it with 'variable' blocks
If that's the case, then wouldn't this file be the same exact file under modules/apollo/vars.tf?
I thought modules/apollo/vars.tf defines the necessary variables needed for the module. I didn't think it was necessary within the "root" level under env-dev/services/apollo/. If there's a "better" way of doing this, I'm all ears.
├── env-dev
│   └── services
│       └── apollo
│           ├── main.tf
│           ├── terraform.tfvars
│           └── vars.tf  # Do I need this?
├── env-test
├── global
├── mgmt
└── modules
    ├── apollo
    │   ├── main.tf
    │   ├── user_data.tpl
    │   └── vars.tf
    └── defaults
        └── main.tf
env-dev/services/apollo/terraform.tfvars
instance_type = "t2.medium"
env-prod/services/apollo/terraform.tfvars
instance_type = "t2.large"
modules/apollo/vars.tf
variable "instance_type" {
  description = "EC2 Instance Type"
}
modules/apollo/main.tf
resource "aws_instance" "instance" {
  ...
  instance_type = "${var.instance_type}"
  ...
}
Adjust the structure; this is my understanding of your application:
├── dev
│   └── apollo_terraform.tfvars
├── test
│   └── apollo_terraform.tfvars
├── global
│   └── apollo_terraform.tfvars
├── mgmt
│   └── apollo_terraform.tfvars
├── main.tf, vars.tf, output.tf, apollo.tf, default.tf, etc.
└── modules
    ├── apollo
    │   ├── main.tf
    │   ├── user_data.tpl
    │   └── vars.tf
    └── defaults
        └── main.tf
apollo.tf will contain the module block that sources the shared module apollo; the same setup applies to default.tf.
Your plan/apply command should look like this:
terraform plan -var-file=${env}/apollo_terraform.tfvars
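A sketch of how that root-level apollo.tf, plus the matching declaration in the root vars.tf, could look; the resource details stay inside the shared module:

# vars.tf (root) — declares the input that each *.tfvars file assigns
variable "instance_type" {
  description = "EC2 Instance Type"
}

# apollo.tf (root) — passes the root variable through to the shared module
module "apollo" {
  source        = "./modules/apollo"
  instance_type = "${var.instance_type}"
}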
I've been trying to achieve something similar, as intuitively this seems to be how it should work; however, I am coming to the conclusion that modules are simply not designed for this use case.
Basically, you are assigning values to variables that do not exist in your test/prod. To work around this, instead of providing assignments in .tfvars, you could try declaring the variables with default values:
env-dev/services/apollo/variables.tf
variable "instance_type" {
  default = "t2.medium"
}
env-prod/services/apollo/variables.tf
variable "instance_type" {
  default = "t2.large"
}
Having those declared and assigned default values still does not automatically link them to the input variables declared in your module, so
additionally, in env-dev/services/apollo/main.tf and env-prod/services/apollo/main.tf, you would still need to fill in the properties for your module:
module "aws_inst" {
  source        = "../../../modules/apollo"
  instance_type = "${var.instance_type}"
}
You can quickly see how this defeats the purpose of modules in this scenario.
To elaborate, I think that modules were not designed for defining a single resource per module so that its values can be filled in dynamically, but rather for creating "collections" of resources within a module, where they can share/reuse the same variables.
Note that when you assign a value to the instance_type key in the module call, you are actually passing that value to the module's input variable, which is then assigned to the resource argument of the same name.

Node Express CRUD application folder structure

I am using the following folder structure for my Express CRUD application. It keeps the code manageable, with each file containing no more than 70 lines of code. What are your thoughts on the folder structure?
.
├── bin
│   └── www
├── common
│   ├── enums
│   │   └── logTypesEnum.js
│   └── validators
│       └── studentNameValidator.js
├── config
│   └── db.js            # config file for database connection
├── models
│   └── log.js           # contains model data for model 'log'
├── routes
│   ├── log
│   │   ├── index.js     # handles all routes for /log/ endpoints, requires the files in this directory, and contains middleware code
│   │   ├── insert.js    # handles all routes for /log/insert endpoints
│   │   ├── remove.js    # handles all routes for /log/remove endpoints
│   │   └── exportCSV.js # handles all routes for /log/exportCSV endpoints
│   └── student
│       ├── index.js
│       ├── insert.js
│       └── remove.js
├── public
│   ├── javascripts
│   ├── images
│   └── stylesheets
├── views
│   ├── log
│   │   ├── index.jade
│   │   ├── insert.jade
│   │   ├── remove.jade
│   │   └── exportCSV.jade
│   └── student
│       ├── index.jade
│       └── insert.jade
└── app.js
I am not sure why you decided on the number 70, unless you read somewhere that 70 lines makes some sort of ideal microservice, which your structure does not allow for anyway.
As to the directory structure. I have come to the conclusion that internal directory structures are usually based on the programmer or team leader. It is more of a matter of what makes sense in your head as you see the visual design and implementation of your code.
That said, IMHO, overly complicated, or let us say overly structured, directory structures in Node, and for instance PHP, cause an inordinate amount of moving up and down directory trees in order to access code, classes, or just plain functions. Not to mention that it becomes gibberish for those who may come after you to maintain the code.
So use the directory structure you feel at home with, but make it clean and not complicated. Do not try to pigeonhole every aspect of every call and function into a specific 70-line definable directory structure (again, no clue where that number came from).
Clean, simple and sensible.
Those would be the best rules to follow IMHO.
Edit based on OP questions below:
Clarity of code is not defined by the number of lines. When I test coders, no matter what language they specialize in, I put some code up on the screen and ask them to translate that code line for line into plain English. First, this actually tests the candidate's ability. And second, if the code is clear and good, then any other coder should be able to read it and clearly understand what is going on in it. (Actually it tests both the original coder and the candidate.)
So clear code is not about lines. It is about excellent coding practices and the ability to make your code do what you envision.
Microservices has become something of a buzzword, but essentially it simply means "focused". It means that you are creating a module to do a specific task. Each module in your system does a specific task or tasks essential to your system and focuses only on that task. I think that may actually be what you are striving for. There are quite a few really good articles out there on Node and microservices.

test or prevent some relative path imports / requires

I have a folder structure like so:
.
└── client
    ├── components
    └── routes
        ├── index.js
        ├── Login
        │   ├── index.js
        │   ├── assets
        │   ├── components
        │   ├── container
        │   └── modules
        └── UpdatePassword
            ├── index.js
            ├── assets
            ├── components
            ├── container
            └── modules
I would like to see if anyone is importing files from the UpdatePassword folder to the Login folder and vice versa.
Basically I'm following a fractal project structure where I want components that are related to the UpdatePassword or Login route to exist only in their respective folders. Shared components would live in the client/components subdirectory. To maintain a structure like this, I would like to write a test that fails when an 'unacceptable' import or require is used, i.e. if a file in UpdatePassword imports from Login/components.
Is there a way to test or check whether an import is coming from specific folders?
Try madge: I usually run it as madge --image /path-to-folder/dependencies.png routes (there is also an exclude option if you need it).
You'll get a visual graph which shows you the dependencies between files.
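madge also has a programmatic API, so the check can become an actual failing test. A rough sketch using Jest, assuming the client/routes layout above (the path prefixes are illustrative):

// forbidden-imports.test.js
const madge = require('madge');

test('Login and UpdatePassword do not import from each other', async () => {
  const res = await madge('client/routes');
  const tree = res.obj(); // maps each file to the files it depends on

  const offenders = Object.entries(tree).filter(([file, deps]) =>
    (file.startsWith('Login/') && deps.some((d) => d.startsWith('UpdatePassword/'))) ||
    (file.startsWith('UpdatePassword/') && deps.some((d) => d.startsWith('Login/')))
  );

  expect(offenders).toEqual([]);
});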
I have no idea about a native way to do it, but you can wrap the require function:
function myRequire(fromPath, requiredPath) {
  // Judge whether `fromPath` may load `requiredPath`;
  // here: Login and UpdatePassword must not import from each other.
  var can = !(
    (fromPath.includes('Login') && requiredPath.includes('UpdatePassword')) ||
    (fromPath.includes('UpdatePassword') && requiredPath.includes('Login'))
  );
  if (can) {
    return require(requiredPath);
  } else {
    throw new Error(`you can not load ${requiredPath} from ${fromPath}`);
  }
}
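If you'd rather catch this at lint time instead of at runtime, eslint-plugin-import's no-restricted-paths rule is built for exactly this case; a sketch of the relevant .eslintrc.js fragment, with paths assuming the layout above:

// .eslintrc.js
module.exports = {
  plugins: ['import'],
  rules: {
    'import/no-restricted-paths': ['error', {
      zones: [
        // fail when Login code reaches into UpdatePassword, and vice versa
        { target: './client/routes/Login', from: './client/routes/UpdatePassword' },
        { target: './client/routes/UpdatePassword', from: './client/routes/Login' },
      ],
    }],
  },
};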
