My current project tree for cypress looks something like this:
├── cypress
│   ├── OtherProjectFolder
│   │   ├── frontend
│   │   │   └── TestUI.feature
│   │   ├── pages_objects
│   │   │   └── mainPage.js
│   │   └── step_definitions
│   │       └── Testui.js
│   └── e2e
│       ├── backend
│       │   └── TestBackend.feature
│       ├── pages_objects
│       │   └── backendPage.js
│       └── step_definitions
│           └── TestBackend.js
Essentially I want to keep all my step definitions in one directory and all my page objects in another directory per project, because I have many projects to automate.
Here is what my current cucumber preprocessor config looks like in package.json:
"cypress-cucumber-preprocessor": {
  "nonGlobalStepDefinitions": false,
  "step_definitions": "cypress/e2e"
}
If I change the stepDefinitions path to "cypress/OtherProjectFolder", it no longer picks up the steps in e2e. If I just put "cypress", I get the error shown in the attached screenshot. I'm wondering if there is a way to make the step definitions global?
Try:
"cypress-cucumber-preprocessor": {
  "nonGlobalStepDefinitions": false,
  "stepDefinitions": "cypress/your_folder_name/*.js"
}
You can of course add more paths after that.
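For example, if your preprocessor version follows the docs linked below, stepDefinitions also accepts an array of glob patterns, so steps from several project folders are picked up at once. A sketch (the globs are placeholders for your own layout):
"cypress-cucumber-preprocessor": {
  "stepDefinitions": [
    "cypress/e2e/**/*.js",
    "cypress/OtherProjectFolder/**/*.js"
  ]
}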
Good to look at:
https://github.com/badeball/cypress-cucumber-preprocessor/blob/master/docs/step-definitions.md
I am facing a problem while deploying to Vercel via now.
I can successfully execute yarn run build and yarn run start locally, but when I try to deploy via now, I get this error:
Error: You have 'doc' in your headerLinks, but no 'docs' folder exists one level up from 'website' folder. Did you run `docusaurus-init` or `npm run examples`? If so, make sure you rename 'docs-examples-from-docusaurus' to 'docs'.
2020-08-24T18:48:36.566Z at forEach (/vercel/58999d95/node_modules/docusaurus/lib/core/nav/HeaderNav.js:253:15)
2020-08-24T18:48:36.566Z at Array.forEach (<anonymous>)
2020-08-24T18:48:36.566Z at HeaderNav.renderResponsiveNav (/vercel/58999d95/node_modules/docusaurus/lib/core/nav/HeaderNav.js:248:17)
2020-08-24T18:48:36.566Z at HeaderNav.render (/vercel/58999d95/node_modules/docusaurus/lib/core/nav/HeaderNav.js:325:19)
2020-08-24T18:48:36.566Z at processChild (/vercel/58999d95/node_modules/react-dom/cjs/react-dom-server.node.development.js:3134:18)
2020-08-24T18:48:36.567Z at resolve (/vercel/58999d95/node_modules/react-dom/cjs/react-dom-server.node.development.js:2960:5)
2020-08-24T18:48:36.567Z at ReactDOMServerRenderer.render (/vercel/58999d95/node_modules/react-dom/cjs/react-dom-server.node.development.js:3435:22)
2020-08-24T18:48:36.567Z at ReactDOMServerRenderer.read (/vercel/58999d95/node_modules/react-dom/cjs/react-dom-server.node.development.js:3373:29)
2020-08-24T18:48:36.567Z at renderToStaticMarkup (/vercel/58999d95/node_modules/react-dom/cjs/react-dom-server.node.development.js:4004:27)
2020-08-24T18:48:36.567Z at renderToStaticMarkupWithDoctype (/vercel/58999d95/node_modules/docusaurus/lib/server/renderUtils.js:16:48)
2020-08-24T18:48:36.607Z error Command failed with exit code 1.
Here is my file structure.
example
├── Dockerfile
├── docker-compose.yml
├── docs
│   ├── doc1.md
│   ├── doc2.md
│   ├── doc3.md
│   ├── exampledoc4.md
│   └── exampledoc5.md
└── website
    ├── README.md
    ├── blog
    │   ├── 2016-03-11-blog-post.md
    │   ├── 2017-04-10-blog-post-two.md
    │   ├── 2017-09-25-testing-rss.md
    │   ├── 2017-09-26-adding-rss.md
    │   └── 2017-10-24-new-version-1.0.0.md
    ├── core
    │   └── Footer.js
    ├── package.json
    ├── pages
    │   └── en
    │       ├── help.js
    │       ├── index.js
    │       └── users.js
    ├── sidebars.json
    ├── siteConfig.js
    ├── static
    │   ├── css
    │   │   └── custom.css
    │   └── img
    │       ├── favicon.ico
    │       ├── oss_logo.png
    │       ├── undraw_code_review.svg
    │       ├── undraw_monitor.svg
    │       ├── undraw_note_list.svg
    │       ├── undraw_online.svg
    │       ├── undraw_open_source.svg
    │       ├── undraw_operating_system.svg
    │       ├── undraw_react.svg
    │       ├── undraw_tweetstorm.svg
    │       └── undraw_youtube_tutorial.svg
    └── yarn.lock
Done in 0.57s.
This was actually generated via npx docusaurus-init, but I am still unable to deploy.
Any help would be highly appreciated :)
Fixed it by doing npm install and yarn run build
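For reference, the sequence that worked (presumably run from the website directory, where Docusaurus expects package.json; that location is an assumption based on the tree above):
npm install      # reinstall dependencies in the build environment
yarn run build   # rebuild the site; after this the now/vercel deploy went through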
I'm building a system formed by 3 programs; let's call them A, B, and C. Right now I think my file structure is a disaster.
Inside my root folder I have some shortcuts to the .bat files that run the programs and a folder called programs_data. Inside that folder I have 4 separate folders: one for each program plus a common folder.
The issue is that I need each program, and the sub-scripts within those programs, to be able to import from the common folder, and I need programs B and C to be able to call Program A's API.
As it is right now, I have a mess of sys.path appends in the sub-files to import functions from upper levels.
What's the correct way to structure something like this?
Current structure:
root
├── Configuration.lnk
├── Documentation.lnk
├── Program A.lnk
├── Program B.lnk
├── Program C.lnk
├── programs_data
│   ├── Program A
│   │   ├── Program A API.py
│   │   ├── Program A.bat
│   │   ├── Program A.py
│   │   ├── src
│   │   │   ├── server.py
│   │   │   ├── test_functions.py
│   │   │   └── validation.py
│   │   └── targets
│   │       ├── sql_querys
│   │       │   ├── query1.sql
│   │       │   ├── query2.sql
│   │       │   └── queryn.sql
│   │       ├── target1.py
│   │       ├── target2.py
│   │       ├── target3.py
│   │       └── targetn.py
│   ├── Program B
│   │   ├── Program B.bat
│   │   ├── Program B.py
│   │   ├── classifiers
│   │   │   ├── classifier1.py
│   │   │   ├── classifier2.py
│   │   │   └── classifiern.py
│   │   ├── events.log
│   │   ├── o365_token.txt
│   │   └── src
│   │       ├── batchECImport.py
│   │       ├── classifier.py
│   │       └── logger.py
│   ├── Program C
│   │   ├── Program C.bat
│   │   ├── Program C.py
│   │   ├── Reports
│   │   │   ├── report 1
│   │   │   │   └── report.py
│   │   │   └── report 2
│   │   │       └── report.py
│   │   ├── o365_token.txt
│   │   ├── schedule.xlsx
│   │   └── src
│   │       └── report.py
│   └── common
│       ├── APIMailboxManager.py
│       ├── Documentation
│       │   └── Documentation.pdf
│       ├── FlexibleProcess.py
│       ├── config.py
│       ├── misc.py
│       ├── print_functions.py
│       ├── production_support_passwords.py
│       ├── reports_log.db
│       └── reports_log.py
└── schedule spreadsheet.lnk
Thanks!
You could use __init__.py files to configure the import path of your modules in each directory. Documentation here.
In these files, you should only add the relative path to the common folder.
I don't think there is another way to have a common folder if you want to avoid duplicated code...
I would suggest putting your root folder on the PYTHONPATH environment variable (https://www.tutorialspoint.com/What-is-PYTHONPATH-environment-variable-in-Python) so that you don't have to append to sys.path.
That way you can easily write code like this:
from root.programs_data.src import server
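For example, a minimal sketch of what one of the launcher .bat files could look like under this suggestion (the paths are assumptions based on the tree above, and common would also need an __init__.py to be importable as a package):
@echo off
rem Put programs_data on PYTHONPATH so the shared "common" package is importable everywhere
set "PYTHONPATH=%~dp0programs_data;%PYTHONPATH%"
python "%~dp0programs_data\Program A\Program A.py"
Any script started this way can then use a plain import such as from common import misc instead of patching sys.path.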
I need to run the scripts in these folders as parallel jobs.
Here is what my folder structure looks like:
.
├── quo1374
├── quo2147
├── quo1407
......
├── quo1342
│   ├── dist
│   │   └── v0.1.0-alpha
│   │       └── unix
│   │           └── mjolnir
│   ├── examples
│   │   ├── values-local-134217728-m4.2xlarge.yaml
│   │   │
│   ├── remote_script.sh
│   └── run
│       ├── quo1342-134217728-m4.2xlarge
│       │   ├── quo1342-134217728-m4.2xlarge
│       │   └── quo1342-134217728-m4.2xlarge.sh
│       ├── quo1342-134217728-m4.xlarge
│       │   ├── quo1342-134217728-m4.xlarge
│       │   └── quo1342-134217728-m4.xlarge.sh
│       ├── quo1342-134217728-m5.12xlarge
│       │   ├── quo1342-134217728-m5.12xlarge
│       │   └── quo1342-134217728-m5.12xlarge.sh
│       ├── quo1342-134217728-m5.16xlarge
│       │   ├── quo1342-134217728-m5.16xlarge
│       │   └── quo1342-134217728-m5.16xlarge.sh
│       ├── quo1342-134217728-m5.24xlarge
│       │   ├── quo1342-134217728-m5.24xlarge
│       │   └── quo1342-134217728-m5.24xlarge.sh
│       ├── quo1342-134217728-m5.4xlarge
│       │   ├── quo1342-134217728-m5.4xlarge
│       │   └── quo1342-134217728-m5.4xlarge.sh
│       ├── quo1342-134217728-m5.8xlarge
│       │   ├── quo1342-134217728-m5.8xlarge
│       │   └── quo1342-134217728-m5.8xlarge.sh
│       ├── quo1342-134217728-m5.metal
│       │   ├── quo1342-134217728-m5.metal
│       │   └── quo1342-134217728-m5.metal.sh
│       ├── quo1342-134217728-t2.2xlarge
│       │   ├── quo1342-134217728-t2.2xlarge
│       │   └── quo1342-134217728-t2.2xlarge.sh
│       ├── quo1342-134217728-t3a.2xlarge
│       │   ├── quo1342-134217728-t3a.2xlarge
│       │   └── quo1342-134217728-t3a.2xlarge.sh
│       └── quo1342-134217728-t3a.xlarge
│           ├── quo1342-134217728-t3a.xlarge
│           └── quo1342-134217728-t3a.xlarge.sh
For example, the script quo1342-134217728-m4.2xlarge.sh runs one job; this is a subset of the jobs I would like to run. I am trying to come up with a script that will take the contents of run/quo134*/quo1342-134217728*.sh and run each as a separate job, i.e. when activated, it would loop through each of the scripts in the folder, with the whole folder's job backgrounded by a &. The reasoning behind this is that I have about 12 separate folders that look like this, and I would love to run them in parallel. It is, however, important that the scripts within each folder run sequentially.
Here is an attempt at what I am trying to do. Although it does not work, I hope it adds clarity to my question.
for f in *
do cd $f/run
for f in *.sh
bash "$f" -H &
cd ..
done
done
I would appreciate any pointers on this.
Update
The answer from dash-o helped, but led to another issue. The bash scripts use relative paths, e.g. quo1342-134217728-t3a.xlarge.sh contains references like
../../dist/v0.1.0-alpha/unix/mjolnir
When I use your script it runs, but it appears that the execution does not respect the relative paths in the script, i.e.
ssh: Could not resolve hostname : Name or service not known + ../../dist/v0.1.0-alpha/unix/mjolnir destroy ../../examples/values-local-549755813888-t3a.xlarge.yaml
Is there a way to run the scripts that doesn't break this?
You can implement this with a helper function:
function run_folder {
    local dir="$1" f=
    cd "$dir/run" || return
    # sequential execution
    for f in */*.sh ; do
        # Execute each script from inside its own folder.
        (cd "${f%/*}" && bash "${f##*/}" -H)
    done
}

# parallel execution
for j in * ; do
    run_folder "$j" &
done
wait
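A minimal way to drive this, assuming the snippet above is saved as run_all.sh in the directory that contains the quo* folders (the file name is just a placeholder):
cd /path/to/dir/with/quo_folders   # the directory holding quo1342, quo1374, ...
bash run_all.sh                    # folders run in parallel; scripts inside each folder run sequentially
Note that the (cd ... && bash ...) subshell executes every .sh from inside its own folder, so relative references such as ../../dist/v0.1.0-alpha/unix/mjolnir are resolved from there.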
My google-fu seems to have failed me.
I have a directory structure as follows:
provisioner/
├── etc
│   ├── apt
│   │   ├── preferences.d
│   │   │   ├── experimental.pref
│   │   │   ├── security.pref
│   │   │   ├── stable.pref
│   │   │   ├── testing.pref
│   │   │   └── unstable.pref
│   │   └── sources.list.d
│   │       ├── experimental.list
│   │       ├── security.list
│   │       ├── stable.list
│   │       ├── testing.list
│   │       └── unstable.list
I have full access to the machine. Is there a command that would allow me to put everything in provisioner/etc into /etc, provisioner/var into /var, and so on?
Side question: what is this operation called?
Thank you!
Turns out my google-fu isn't really on point.
rsync can be used locally:
rsync -a provisioner/ /
(Note the trailing slash on provisioner/ so that its contents are merged into /, rather than creating a /provisioner directory.)
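To preview what would be copied before touching the live system, a dry run helps (-n is dry-run, -i itemizes the changes):
rsync -ani provisioner/ /   # show what would change, write nothing
rsync -a   provisioner/ /   # then apply for real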
Alternatively, from inside your provisioner directory, loop over its subdirectories and copy their contents into the corresponding directories under the root filesystem:
for d in */; do cp -r "${d}"* /"${d}" ; done
My team uses a Puppet architecture which currently accommodates a single application in multiple environments (vagrant, staging, production.)
We now want to expand the scope of this setup to support additional applications. Many of them will use a subset of the existing modules we've already defined, and others will call for new modules to be defined (which may or may not be shared.)
What is the most appropriate Puppet architecture, for supporting multiple environments of multiple applications?
In such an architecture, each application would amount to a module, presumably. What's the best way of (file-) structurally differentiating between a module which is an application, and a module which is a dependency of one or more modules?
Could it be as simple as adding a third modules folder under a top-level applications folder, for example? Or is there a better tiering strategy?
Research so far hasn't turned up any best-practice examples / boilerplates, e.g. via example42 or puppetlabs on GitHub.
Our file structure:
puppet
├── environments
│   ├── production → manifests → init.pp
│   ├── staging → manifests → init.pp
│   └── vagrant → manifests → init.pp
├── hiera.yaml
├── hieradata
│   ├── accounts.yaml
│   ├── common.yaml
│   └── environments
│       ├── production.yaml
│       ├── staging.yaml
│       └── vagrant.yaml
├── modules
│   ├── acl [..]
│   ├── newrelic [..]
│   ├── nginx [..]
│   └── puma [..]
└── vendor
    ├── Puppetfile
    ├── Puppetfile.lock
    └── modules [..]
I'm sure there are a lot of opinions on what the 'most appropriate' solution for this is, but I'll give you mine.
Puppet is actually designed to support multiple applications in multiple environments right out of the box, with some notable caveats:
All common dependencies (within a single environment) must be pinned to the same version
So if you have three applications that need Apache, you can only have one Apache module (see the Puppetfile sketch after this list)
All applications can be referenced using a distinctive name
i.e. if you have three different node.js applications that require their own module, you would need three uniquely named modules (or manifests) for them
You are willing to tackle the upkeep/maintenance of updating dependencies for multiple applications simultaneously
If app1 needs to update an Apache module dependency, you're willing to make sure that apps 2-* remain compatible
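For illustration, the "one version per environment" pinning is usually expressed in the Puppetfile that r10k/Code Manager deploys for each environment. A minimal sketch (module names and versions here are placeholders, not taken from the setup above):
# Puppetfile (one per environment branch when using r10k)
forge 'https://forge.puppet.com'

# every application in this environment sees exactly this Apache version
mod 'puppetlabs-apache', '5.10.0'
mod 'puppetlabs-stdlib', '6.6.0'

# an internal module pulled from git at a fixed tag
mod 'profiles',
  :git => 'https://git.example.com/puppet/profiles.git',
  :tag => 'v1.4.2'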
The other thing to keep in mind is that Puppet's terminology for 'environment' is an acknowledged misnomer. Most well operated environments I have seen actually have distinct Puppet masters in each of their true 'environments' (vagrant/dev/stage/prod) in order to avoid the perils of environment leakage as well as test out upgrades to the Puppet infrastructure (You should have somewhere to test an upgrade to your Puppet version that doesn't instantly impact your production)
Therefore, this frees up the Puppet 'environment directories' to operate free of the true 'environment' concept, and should be considered 'a collection of modules at a particular revision' instead of an 'environment'. You do still need to be cognizant of environment leakage, but this does open up a potential avenue for splitting up your modules.
Another concept you will want to keep in mind is Roles and Profiles (Well discussed by Gary Larizza, Adrien Thebo, and Craig Dunn). These help enable separating business logic from technology management modules. You can then handle dependency ordering and business oriented logic separate from the code/modules for managing individual components.
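For a rough sense of what that separation looks like in code (the class names below are invented for illustration, not taken from your setup): a role is just a bundle of profiles, and a profile wraps the component modules with your business logic.
# role: one class per kind of server, containing nothing but profile includes
class role::app1_web {
  include profile::base
  include profile::nginx
  include profile::app1
}

# profile: site/business logic that wraps a component module
class profile::nginx {
  class { 'nginx':
    manage_repo => true,   # illustrative parameter only
  }
}
Nodes are then classified with exactly one role, which keeps application-specific logic out of the shared component modules.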
With all of these concepts in place, here are two architectural layouts that may be a good fit in your use case:
Environments by application
puppet
├── environments (Managed by r10k/code manager)
│   ├── app1
│   │   └── modules
│   │       ├── profiles [..]
│   │       └── app1_specific_component [..]
│   ├── app2
│   │   └── modules
│   │       ├── profiles [..]
│   │       └── app2_specific_component [..]
│   └── app3
│       └── modules
│           ├── profiles [..]
│           └── app3_specific_component [..]
├── hiera.yaml
├── hieradata
│   ├── accounts.yaml
│   ├── common.yaml
│   └── applications
│       ├── app1
│       │   ├── default.yaml
│       │   └── environments (server environments)
│       │       ├── vagrant
│       │       │   └── roles
│       │       │       ├── role1.yaml
│       │       │       ├── role2.yaml
│       │       │       └── role3.yaml
│       │       ├── stg
│       │       │   └── roles
│       │       │       ├── role1.yaml
│       │       │       ├── role2.yaml
│       │       │       └── role3.yaml
│       │       └── prd
│       │           └── roles
│       │               ├── role1.yaml
│       │               ├── role2.yaml
│       │               └── role3.yaml
│       ├── app2
│       │   ├── default.yaml
│       │   └── environments
│       │       ├── vagrant
│       │       │   └── roles
│       │       │       ├── role1.yaml
│       │       │       ├── role2.yaml
│       │       │       └── role3.yaml
│       │       ├── stg
│       │       │   └── roles
│       │       │       ├── role1.yaml
│       │       │       ├── role2.yaml
│       │       │       └── role3.yaml
│       │       └── prd
│       │           └── roles
│       │               ├── role1.yaml
│       │               ├── role2.yaml
│       │               └── role3.yaml
│       └── app3
│           ├── default.yaml
│           └── environments
│               ├── vagrant
│               │   └── roles
│               │       ├── role1.yaml
│               │       ├── role2.yaml
│               │       └── role3.yaml
│               ├── stg
│               │   └── roles
│               │       ├── role1.yaml
│               │       ├── role2.yaml
│               │       └── role3.yaml
│               └── prd
│                   └── roles
│                       ├── role1.yaml
│                       ├── role2.yaml
│                       └── role3.yaml
├── modules (These are common to all environments, to prevent leakage)
│   ├── acl [..]
│   ├── newrelic [..]
│   ├── nginx [..]
│   └── puma [..]
└── vendor
    ├── Puppetfile
    ├── Puppetfile.lock
    └── modules [..]
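A hiera.yaml hierarchy that consumes a layout like this might look roughly as follows (Hiera 5 syntax; the application, server_environment, and role variables are assumptions that would have to be supplied by facts, an ENC, or top-scope variables):
---
version: 5
defaults:
  datadir: hieradata
  data_hash: yaml_data
hierarchy:
  - name: "Per application, per server environment, per role"
    path: "applications/%{application}/environments/%{server_environment}/roles/%{role}.yaml"
  - name: "Per application defaults"
    path: "applications/%{application}/default.yaml"
  - name: "Accounts"
    path: "accounts.yaml"
  - name: "Common"
    path: "common.yaml"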
Environments as a 'release' (for iteration on Puppet code over time)
puppet
├── environments (Managed by r10k/code manager)
│   ├── release_1
│   │   └── modules
│   │       ├── profiles [..]
│   │       ├── app1_specific_component [..]
│   │       ├── app2_specific_component [..]
│   │       ├── app3_specific_component [..]
│   │       ├── acl [..] (v1)
│   │       ├── newrelic [..]
│   │       ├── nginx [..]
│   │       └── puma [..]
│   ├── release_2
│   │   └── modules
│   │       ├── profiles [..]
│   │       ├── app1_specific_component [..]
│   │       ├── app2_specific_component [..]
│   │       ├── app3_specific_component [..]
│   │       ├── acl [..] (v1.1)
│   │       ├── newrelic [..]
│   │       ├── nginx [..]
│   │       ├── puma [..]
│   │       └── some_new_thing_for_release_2 [..]
│   └── release_3
│       └── modules
│           ├── profiles [..]
│           ├── app1_specific_component [..]
│           ├── app2_specific_component [..]
│           ├── app3_specific_component [..]
│           ├── acl [..] (v2.0)
│           ├── newrelic [..]
│           ├── nginx [..]
│           ├── puma [..]
│           ├── some_new_thing_for_release_2 [..]
│           └── some_new_thing_for_release_3 [..]
├── hiera.yaml
├── hieradata
│   ├── accounts.yaml
│   ├── common.yaml
│   ├── environments
│   │   ├── release_1.yaml
│   │   ├── release_2.yaml
│   │   └── release_3.yaml
│   └── roles
│       ├── role1
│       │   ├── default.yaml
│       │   └── environments (server environments)
│       │       ├── vagrant
│       │       │   ├── defaults.yaml
│       │       │   └── release (optional, only if absolutely necessary)
│       │       │       ├── release_1.yaml
│       │       │       ├── release_2.yaml
│       │       │       └── release_3.yaml
│       │       ├── stg
│       │       │   ├── defaults.yaml
│       │       │   └── release (optional)
│       │       │       ├── release_1.yaml
│       │       │       ├── release_2.yaml
│       │       │       └── release_3.yaml
│       │       └── prd
│       │           ├── defaults.yaml
│       │           └── release (optional)
│       │               ├── release_1.yaml
│       │               ├── release_2.yaml
│       │               └── release_3.yaml
│       ├── role2
│       │   ├── default.yaml
│       │   └── environments
│       │       ├── vagrant
│       │       │   └── defaults.yaml
│       │       ├── stg
│       │       │   └── defaults.yaml
│       │       └── prd
│       │           └── defaults.yaml
│       └── role3
│           └── default.yaml
├── modules (Anything with ruby libraries should go here to prevent leakage)
│   └── stdlib [..]
└── vendor
    ├── Puppetfile
    ├── Puppetfile.lock
    └── modules [..]
Keep in mind that the nesting order (release/environment/role etc...) is flexible based on what makes the most sense for your implementation (and some can be eliminated if you're not going to use them).
I encourage you to take this information as merely a starting point, and not a concrete 'do this for instant success'. Having a highly skilled Puppet Architect work with you to understand your precise needs and environments will end up in a far better tuned and appropriate solution than the assumptions and 'cookie cutter' type solutions you are likely to find online (including mine).