I'm trying to copy a file from a module:
# init.pp
file { '/home/michael/projets/test_puppet/LICENSE':
  source  => 'puppet:///modules/java/LICENSE',
  replace => false,
}
But when I run puppet apply:
puppet apply --modulepath=/home/michael/projets/test_puppet/modules/ manifests/init.pp
I get this error:
Error: /Stage[main]/Main/File[/home/michael/projets/test_puppet/LICENSE]:
Could not evaluate: Could not retrieve information from environment production
source(s) puppet:///modules/java/LICENSE
My directory structure:
.
├── manifests
│   └── init.pp
└── modules
    ├── java
    └── stdlib
With the path puppet:///modules/java/LICENSE, the file needs to be at
/home/michael/projets/test_puppet/modules/java/files/LICENSE
Could you confirm that the LICENSE file exists at that path?
Please go through the docs on The Puppet File Server; they explain how this mapping works.
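For example, assuming the module really is named java, moving the file into the module's files/ directory should fix the error (the cp source path below is illustrative):
# the file server maps puppet:///modules/<module>/<file> to <module>/files/<file>
mkdir -p /home/michael/projets/test_puppet/modules/java/files
cp /path/to/LICENSE /home/michael/projets/test_puppet/modules/java/files/LICENSE
puppet apply --modulepath=/home/michael/projets/test_puppet/modules/ manifests/init.pp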
I recently created a Snakemake profile using the guide at Snakemake-Profiles/slurm. I was able to install the profile successfully, and it does work when I pass its path directly. However, when using the profile name, as in
snakemake --profile slurm --dry-run
I get the error:
Error: profile given but no config.yaml found. Profile has to be given as
either absolute path, relative path or name of a directory available in either
/etc/xdg/snakemake or /home/GROUP/USERNAME/.config/snakemake.
I have indeed installed the profile under ~/.config/snakemake. Here is the tree of this directory:
/home/GROUP/USERNAME/.config/snakemake
.
└── slurm
    ├── cluster_config.yaml
    ├── config.yaml
    ├── CookieCutter.py
    ├── __pycache__
    │   ├── CookieCutter.cpython-39.pyc
    │   └── slurm_utils.cpython-39.pyc
    ├── settings.json
    ├── slurm-jobscript.sh
    ├── slurm-status.py
    ├── slurm-submit.py
    └── slurm_utils.py
2 directories, 10 files
I can keep specifying the full path to this profile when running Snakemake, but it would be more convenient to simply give it the profile name. Does anyone know why Snakemake doesn't seem to pick up that the slurm profile exists?
I solved my issue by installing Snakemake in a Conda environment and re-installing the profile. I'm not sure whether it was the Conda environment or the profile re-install that fixed it.
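For anyone debugging the same error, a quick sanity check (a sketch; adjust the paths to your setup) is to confirm that the config.yaml Snakemake looks for exists and is readable, and that the path-based invocation still works:
ls -l ~/.config/snakemake/slurm/config.yaml
snakemake --profile ~/.config/snakemake/slurm --dry-run   # explicit path
snakemake --profile slurm --dry-run                       # name lookup in ~/.config/snakemake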
I have just published a new TypeScript-based module to the npm registry, ooafs. However, when I install it and import it in another TypeScript project, VSCode gives me the following error on the import statement: Cannot find module 'ooafs'.ts(2307).
The module's TypeScript sources are compiled to JavaScript in a dist/ folder, and declaration files (.d.ts) are generated alongside them.
Here's the tree of the published module (what you download when you npm install it):
.
├── dist
│   ├── Entry.d.ts
│   ├── EntryFilter.d.ts
│   ├── EntryFilter.js
│   ├── Entry.js
│   ├── EntryType.d.ts
│   ├── EntryType.js
│   ├── FSTypings.d.ts
│   ├── FSTypings.js
│   ├── index.d.ts
│   └── index.js
├── LICENSE
├── package.json
└── README.md
The package.json does contain the following entries:
{
  "main": "dist/index.js",
  "types": "dist/index.d.ts",
  ...
}
Because the module works normally on RunKit (pure JS), I assume the problem is TypeScript-related, and it's not the first time TypeScript has told me a module doesn't exist when missing declaration files were the only problem.
Am I missing a step in the compilation process?
Are my package.json properties wrong?
If you need to see more code, the GitHub link is at the beginning of the question, and the published module structure can be found here: https://unpkg.com/ooafs#0.1.2/dist/.
Actually, the problem didn't come from my module (ooafs). It was a problem with the tsconfig.json of the project I was using the module in: the module property apparently has to be set to commonjs.
Very late edit:
I also highly recommend setting esModuleInterop to true, which allows you to import non-ES6 modules in a more natural manner.
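Putting both of those together, the relevant part of the consuming project's tsconfig.json would look something like this (a minimal sketch; all other options omitted):
{
  "compilerOptions": {
    "module": "commonjs",
    "esModuleInterop": true
  }
}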
That answer is not the real fix, and it's certainly not ideal when you have to use top-level await (which doesn't work with commonjs).
You want to make sure your import path is the final file that Node will try to load. So you cannot rely on folders resolving to folder/index.js, and you cannot rely on giving file names without extensions (give the ".js" extension).
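In other words, under ES-module resolution every import must name the exact file Node will load (the path and name below are purely illustrative):
// helpers.ts compiles to helpers.js, so import the compiled file:
import { helper } from "./utils/helpers.js";
// these fail at runtime under ESM:
// import { helper } from "./utils";          (no folder/index.js resolution)
// import { helper } from "./utils/helpers";  (missing extension)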
My understanding is that Bazel expects projects to be under a monorepo with a WORKSPACE file at the top-level and BUILD files in every project:
Repo
├── ProjectA
│   └── BUILD
├── ProjectB
│   └── BUILD
└── WORKSPACE
However, going through the Bazel NodeJS rules documentation, it seems to suggest that every project should have its own WORKSPACE file where it defines its dependencies, i.e. ...
Repo
├── ProjectA
│   ├── BUILD
│   └── WORKSPACE
└── ProjectB
    ├── BUILD
    └── WORKSPACE
This looks similar to a multi-repo setup, with every project referencing the others as external dependencies. That seemed okay to me, until I realized that for external dependencies Bazel requires all transitive dependencies to be specified in the WORKSPACE file of every package, which is definitely not ideal.
What's the easiest way to use Bazel with NodeJS projects, with some projects possibly written in other languages? Also, is there an example somewhere for Bazel being used in a multi-repo setting?
Thanks!
I think the two possible options are in fact:
Repo
├── MyProject
│   └── BUILD
├── third_party
│   └── ProjectB
│       └── BUILD
└── WORKSPACE
or
Repo
├── MyProject
│   └── BUILD
└── WORKSPACE
where in the second case the WORKSPACE references ProjectB with the npm_install rule, as described at https://github.com/bazelbuild/rules_nodejs#using-bazel-managed-dependencies
I'm still trying to figure this out myself, but what I've gathered so far is that there should be only one WORKSPACE file, at the root of the repo. You need a package.json file (probably at the root) listing all the dependencies used in the whole repo, then call npm_install or yarn_install in the WORKSPACE file to download them all.
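A minimal sketch of what that WORKSPACE might contain (the release URL, sha256, and the exact .bzl load path vary with your rules_nodejs version, so treat these as placeholders):
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

# fetch the NodeJS rules (fill in a real release URL and sha256)
http_archive(
    name = "build_bazel_rules_nodejs",
    urls = ["https://github.com/bazelbuild/rules_nodejs/releases/..."],
)

load("@build_bazel_rules_nodejs//:index.bzl", "npm_install")

# one npm_install for the whole repo, driven by the root package.json
npm_install(
    name = "npm",
    package_json = "//:package.json",
    package_lock_json = "//:package-lock.json",
)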
Then your package's BUILD file can reference a dependency with @npm//some_package, as in:
filegroup(
    name = "sources",
    srcs = ["index.js"],
)

npm_package(
    name = "pkg",
    srcs = ["package.json"],
    deps = [
        ":sources",
        "@npm//lodash",
    ],
)
There are a few dependency edge cases I haven't figured out yet, so this may not be perfectly correct. Good luck.
Setup
I have the following tree structure in my project:
Cineaste/
├── cineaste/
│   ├── __init__.py
│   ├── metadata_errors.py
│   ├── metadata.py
│   └── tests/
│       └── __init__.py
├── docs/
├── LICENSE
├── README.md
└── setup.py
metadata.py imports metadata_errors.py with the statement:
from .metadata_errors import *
The leading dot makes this a relative import of a module in the same package.
I can run metadata.py in the PyCharm 2016 editor just fine with the following run configuration:
Problem
However, with this configuration I cannot debug metadata.py. PyCharm returns the following error message (partial stack trace):
from .metadata_errors import *
SystemError: Parent module '' not loaded, cannot perform relative import
The PyCharm debugger is invoked like so:
/home/myself/.pyenv/versions/cineaste/bin/python /home/myself/bin/pycharm-2016.1.3/helpers/pydev/pydevd.py --multiproc --module --qt-support --client 127.0.0.1 --port 52790 --file cineaste.metadata
Question
How can I set up this project so that PyCharm is able to run and debug a file that makes relative imports?
Today (PyCharm 2018.3) it is really easy, but not obvious.
You can choose the target to run, a script name or a module name, by clicking the "Script Path" label in the Edit Configuration window:
One possible solution is to run your module through an intermediate script that you launch in debug mode.
E.g. test_runner.py:
import runpy
runpy.run_module('cineaste.metadata')
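Debugging test_runner.py then executes cineaste.metadata as a module inside its package, so the relative import resolves. Note that runpy.run_module sets __name__ to the module's own name by default; pass run_name="__main__" if you need an if __name__ == "__main__" block to run.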
You might also try removing the last node (/cineaste) from the Working Directory. This configuration works for me (run and debug) in PyCharm 2017.2.2.
I would also suggest not using import *, since it can cause many problems down the road, e.g. two classes or methods ending up with the same name.
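For example, assuming metadata_errors.py defines an exception class (MetadataError is a hypothetical name here), an explicit relative import would be:
# import only the names you actually use
from .metadata_errors import MetadataError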
So I'm working on a project that has a Rails-like folder structure, though it's handled by Node.js tooling (Grunt as the task runner). I'm using Bower to manage my vendor assets.
My folder structure looks like this:
.
└── src
    ├── app
    │   └── assets
    │       ├── javascripts
    │       └── stylesheets
    │           └── application.scss
    ├── public
    └── vendor
        └── bower
Basically, all the development source code lives in the app/assets folder, public is where production files go, and vendor is where third-party stuff goes.
So as you can see, I have this application.scss file. It's the stylesheet manifest I'm using, responsible for importing all the modules that should later be compiled into my final stylesheet.
The problem is that I don't see a sane way to reference libraries installed through Bower from inside my manifest file.
With the Rails Asset Pipeline/Sprockets I would do //= require_tree vendor/bower and that would work, but I don't know what the equivalent is in the context of this project.
Do you have any suggestions on what I could do?
P.S.: Using Grunt tasks to "handle" this is out of the question.
Just configure Bower to install packages into vendor/assets/components by creating a file called .bowerrc in your root directory:
{"directory": "vendor/assets/components"}
Everything inside vendor/assets and app/assets is added to the load path, so you can just reference those files.
You may need to point at the actual file you want to load. Let's say you installed the normalize-scss package; you'll probably have to add this to your application.scss file:
@import "normalize-scss/normalize";
This is just a guess, but I'd bet on it.
EDIT: The above works on Rails apps, which apparently isn't your case. If you're using Grunt to compile SCSS, you can add the Bower directory to your Sass load path with the loadPath option.
The Sass task in your Gruntfile may look something like this:
{
  sass: {
    dist: {
      // Grunt's "files" object maps destination to source
      files: {"public/application.css": "src/app/assets/stylesheets/application.scss"},
      options: {
        loadPath: ["src/vendor/bower"]
      }
    }
  }
}
To import a file, you'd do something like I said above (referencing the full path). I didn't test the Grunt configuration, but it'll probably work.
Bower downloads whole Git repositories. Example:
bower install jquery
This would create the following structure:
tree vendor/bower
bower
└── jquery
    ├── bower.json
    ├── component.json
    ├── composer.json
    ├── jquery.js
    ├── jquery-migrate.js
    ├── jquery-migrate.min.js
    ├── jquery.min.js
    ├── jquery.min.map
    ├── package.json
    └── README.md
1 directory, 10 files
It doesn't make much sense to load all those files, in my opinion. What you could do instead is:
- create a vendor/require directory
- symlink all the required files into it:
  cd vendor/require; ln -s ../bower/jquery/jquery.min.js
- then require all the files with Ruby's help, or manually:
Dir['path/to/vendor/require/*.js'].each do |file_name|
  puts "<script type=\"text/javascript\" src=\"#{file_name}\"></script>"
end
You could also use Grunt and its concat task:
grunt.initConfig({
  concat: {
    options: {
      separator: ';',
    },
    dist: {
      src: ['path/to/vendor/bower/jquery/jquery.min.js', 'path/to/vendor/bower/other-package/init.min.js'],
      // or, if you decide to create those symlinks:
      // src: ['path/to/vendor/require/*'],
      dest: 'path/to/public/js/built.js',
    },
  },
});
With Compass on Sass you could use:
@import 'path/to/directory/*';
(Glob imports like this are a Compass feature, not plain Sass.)