What is the preferred way to re-use helper code in examples?
For example, if there is some duplicate code in both example_1.rs and example_2.rs:
- src
  - ...
- benches
  - ...
- examples
  - example_1.rs
  - example_2.rs
- tests
  - ...
I currently have something like this:
- src
  - ...
- benches
  - ...
- examples
  - lib
    - helper_1.rs
    - helper_2.rs
  - example_1.rs
  - example_2.rs
- tests
  - ...
Then, in example_1.rs, to access the helpers, I use:
use helper_1::helper_function;
#[path = "./lib/helper_1.rs"]
mod helper_1;
This seems to work, but I end up with some dead_code warnings, presumably because each example is its own executable and not every executable uses every function.
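For illustration, a minimal sketch of one common workaround, assuming the layout above (the #[allow(dead_code)] attribute is my addition, not part of the original setup): since each example is compiled as its own binary and only calls some of the helpers, the lint can be silenced on the shared module.

#[path = "./lib/helper_1.rs"]
#[allow(dead_code)] // each example binary only uses a subset of the helpers
mod helper_1;

use helper_1::helper_function;

fn main() {
    helper_function();
}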
I am working on a gRPC experiment at the GitHub repo below, using the buf CLI to generate the gRPC server and client libraries. I have it working in Go, but I want to generate libraries for Rust.
The git repo is at: https://github.com/vinceyoumans/wc6
The buf file is at: buf.gen.yaml
I have the Rust code commented out, but I believe the problem is that I do not have the correct plugin for Rust. The buf documentation on this is almost nonexistent, and I am not a Rust expert either. I am looking for guidance on how this should be done.
The yaml file to use buf is:
Documentation: https://docs.buf.build/configuration/v1/buf-gen-yaml
version: v1
plugins:
  - name: go # Synonym with: protoc-gen-<name>
    out: gen/go
    opt: paths=source_relative
  - name: go-grpc
    out: gen/go
    opt:
      - paths=source_relative
      - require_unimplemented_servers=false
  # - name: rust
  #   out: gen/rust
  #   opt: paths=source_relative
Take a look at https://docs.rs/protoc-gen-prost/latest/protoc_gen_prost/
There's configuration for using prost. Something like:
version: v1
plugins:
  - name: prost
    out: gen
    opt:
      - bytes=.
      - compile_well_known_types
      - extern_path=.google.protobuf=::pbjson_types
      - file_descriptor_set
      - type_attribute=.helloworld.v1.HelloWorld=#[derive(Eq\, Hash)]
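For context, a rough sketch of how the generated Rust code might then be pulled into a crate. The file name and relative path are assumptions based on the gen output directory and the helloworld.v1 package used above; adjust them to your actual layout.

// protoc-gen-prost typically writes one file per proto package,
// e.g. gen/helloworld.v1.rs for the helloworld.v1 package.
pub mod helloworld {
    pub mod v1 {
        // include! paths are relative to this source file; adjust as needed.
        include!("../gen/helloworld.v1.rs");
    }
}

fn main() {
    // HelloWorld is the message named in the type_attribute example above.
    let msg = helloworld::v1::HelloWorld::default();
    println!("{:?}", msg);
}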
I'm setting up a test framework for my project. After configuring pytest and coverage, it shows 100% coverage for all the Python files, even though there are no tests yet. I'm guessing it is counting the source code itself as tests and reporting 100% coverage for all the scripts.
Apologies, I cannot post the image from my work account. Please let me know if there is anything wrong with my configuration or with the way I'm running it.
Project structure:
etl-orchestrator
- etl_api
  - com.etl.api
  - com.etl.tests
- etl_core
  - com.etl.core
  - com.etl.tests
- etl_services
  - com.etl.services
  - com.etl.tests
- .coveragerc
- pytest.ini
- setup.py
.coveragerc
[run]
source = .
omit =
    */__init__.py
    *tests*
    *virtualenvs*
    .venv/*
    *build/*
pytest.ini
[pytest]
python_files = tests/*.py
addopts = --cov-config=.coveragerc --cov=etl_api --cov=etl_core --cov=etl_services
command to run:
cd <project root directory>
pytest
I know there are a lot of similar questions out there, but none of them has a proper answer. I am trying to deploy my code using a GitLab CI/CD pipeline. While executing the deployment stage, my pipeline failed with the error below.
My serverless.yml has this code related to the exclude patterns:
package:
  patterns:
    - '!nltk'
    - '!node_modules/**'
    - '!package-lock.json'
    - '!package.json'
    - '!__pycache__/**'
    - '!.gitlab-ci.yml'
    - '!tests/**'
    - '!README.md'
The error I am getting is:
Serverless Error ----------------------------------------
No file matches include / exclude patterns
I forgot to mention that I have an nltk layer, which I am deploying in the same serverless.yml as my Lambda function and other resources.
I am not sure exactly what has to be done to get rid of the error. Any help would be appreciated, thank you.
Your directives do not define any inclusive patterns; everything is excluded. Perhaps you want to start by listing the files and directories you need packaged. Each pattern builds on the ones before it.
Something like:
package:
  patterns:
    - "**/**"
    - '!nltk'
    - '!node_modules/**'
    - '!package-lock.json'
    - '!package.json'
    - '!__pycache__/**'
    - '!.gitlab-ci.yml'
    - '!tests/**'
    - '!README.md'
See https://www.serverless.com/framework/docs/providers/aws/guide/packaging/#patterns
I did as much research as I could, but I can't seem to find a way to structure my folder the way I want to.
My folder structure looks like this:
aws-lambdas
  database_credentials.yml (just a file to read the creds from in a var)
  serverless.yml
  functions
    ETLTool
      somefile1.py
      somefile2.py
      lambda_function.py
      ETLToolFullLoadSLS.yml
      ETLToolSLS.yml
    TriggerSnowflakeETL
      somefile1.py
      somefile2.py
      lambda_function.py
      TriggerSnowflakeETLSLS.yml
What I want to do is pull all the .yml files from inside the functions folder into the serverless.yml at the root folder. My main serverless.yml file looks like this:
service: SnowflakePoc
frameworkVersion: '2'

custom:
  database_credentials: ${file(./database_credentials.yml):database_credentials}

provider:
  name: aws
  runtime: python3.8
  lambdaHashingVersion: 20201221
  timeout: 90
  memorySize: 2048
  stage: dev
  region: eu-west-2
  vpc:
    securityGroupIds:
      - sg-013059b0cbf4054b5
      - sg-02c6fcaa9f2bfac7f
    subnetIds:
      - subnet-04aa5cacdb8d9d077
      - subnet-0ea7eb629fbc6f6a8
  iam:
    role: arn:aws:iam::309161096106:role/LamdaRDSAccess

functions:
  - ${file(./functions/ETLTool/ETLToolSLS.yml)}
  - ${file(./functions/ETLTool/ETLToolFullLoadSLS.yml)}
  - ${file(./functions/TriggerSnowflakeETL/TriggerSnowflakeETLSLS.yml)}

plugins:
  - serverless-python-requirements
The issue is that the whole functions/* folder is picked up by each of the Lambdas, even though I have something like this in each inner function .yml file:
TriggerETLTool:
  timeout: 600
  memorySize: 5000
  reservedConcurrency: 3
  handler: functions/TriggerSnowflakeETL/lambda_function.lambda_handler
  layers:
    - arn:aws:lambda:eu-west-2:309161096106:layer:Snowflake:3
    - arn:aws:lambda:eu-west-2:309161096106:layer:DatabaseUtilities:5
  package:
    patterns:
      - '!functions/TriggerSnowflakeETL/**'
      - functions/TriggerSnowflakeETL/lambda_function.py
Inside AWS it looks like this:
Pic from AWS Lambda Source Code
Is there a better pattern than having one directory per Lambda?
I would like just the files inside each function folder to sit at the root of my Lambdas once they reach AWS, rather than nested inside a folder as they are in the image. Also, I'd like each Lambda to contain only the files from its own folder inside functions, rather than the whole functions directory.
If you want to package each function individually, you'll need two things (one of which you've already done):
Configure your serverless.yml file to package functions individually:
service: SnowflakePoc

package:
  individually: true

provider:
  ...
In each function, specify the pattern to correctly zip just that part (you've already done this)
Packaging individually is configurable globally or at a per-function level, so you can choose what's best for you.
You can find more information in the documentation.
Let's say I have the following directory structure in a VSCode project:
- MY_EXAMPLES
  - Example_1
    - src
      - main.rs
    - Cargo.lock
    - Cargo.toml
  - Example_2
    - src
      - main.rs
    - Cargo.lock
    - Cargo.toml
Now I want to compile the comments from all main.rs files. In this case, that'd be MY_EXAMPLES/Example_1/src/main.rs and MY_EXAMPLES/Example_2/src/main.rs. Note that I would like this solution to scale, so I could automatically compile the comments if I had 10 or 20 Example_X folders.
MY_EXAMPLES/Example_1/src/main.rs
/// My notes on Example 1 that I'd like to add to my README.md
fn main() {
    //..
}
MY_EXAMPLES/Example_2/src/main.rs
/// My notes on Example 2 that I'd like to add to my README.md
fn main() {
    //..
}
Running:
C:\Users\Primary User\Desktop\MY_EXAMPLES> cargo doc
should create a README.md file that looks like:
### Example_1
My notes on Example 1 that I'd like to add to my README.md
### Example_2
My notes on Example 2 that I'd like to add to my README.md
So our general template for the README.md content is something like:
### Project Rootdir
rustdoc comments
Does rustdoc have any kind of templating system, similar to that of JSDoc, etc?
I've also seen this similar question, which is currently without an answer.