I am new to Pachyderm.
I have a pipeline that extracts, transforms, and then saves to the db.
Everything is already written in Node.js and dockerized.
Now I would like to move it over and use Pachyderm.
I tried following the Python examples they provide, but creating this new pipeline always fails and the job never starts.
All my code does is take /pfs/data and copy it to /pfs/out.
Here is my pipeline definition:
{
  "pipeline": {
    "name": "copy"
  },
  "transform": {
    "cmd": ["npm", "start"],
    "image": "simple-node-docker"
  },
  "input": {
    "pfs": {
      "repo": "data",
      "glob": "/*"
    }
  }
}
All that happens is that the pipeline fails and the job never starts.
Is there a way to debug why the pipeline is failing?
Is there something special about my Docker image that needs to happen?
Offhand I see two possible issues:
The image name doesn't have a prefix. By default, images are pulled from Docker Hub, and Docker Hub images are prefixed with the user who owns the image (e.g. maths/simple-node-docker).
The cmd doesn't seem to include a command that copies anything. I'm not familiar with Node, but it looks like this starts npm and then does nothing else. Perhaps npm start loads and runs your script by default? If so, it might help to post your script as well.
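For what it's worth, you can usually see why a pipeline is stuck by inspecting it and pulling its logs; depending on your pachctl version, pachctl inspect pipeline copy (or inspect-pipeline) and pachctl logs --pipeline=copy (or get-logs --pipeline=copy) should surface things like image pull failures. And if you want to rule out npm entirely, a spec along these lines does the copy with a plain shell command (the image prefix is a placeholder; substitute your Docker Hub user):
{
  "pipeline": {
    "name": "copy"
  },
  "transform": {
    "cmd": ["sh", "-c", "cp -r /pfs/data/* /pfs/out/"],
    "image": "your-dockerhub-user/simple-node-docker"
  },
  "input": {
    "pfs": {
      "repo": "data",
      "glob": "/*"
    }
  }
}
If npm start is supposed to do the copy, make sure the image's package.json actually defines a start script that runs it.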
I'm using PM2+ to manage my Node.js deployments.
Usually, when I deploy an application with pm2 start src/app.js, I get details about the app's versioning. However, when I deploy using an ecosystem file, the versioning field only shows N/A.
PM2 normally extracts this information directly using vizion.
But since that didn't work with the ecosystem file, I specified the GitHub repository directly, just as the documentation describes.
This is my current pm2-services.json ecosystem file:
{
  "apps": [
    {
      "name": "my-node-app",
      "cwd": "./my-node-app-repo-folder",
      "script": "src/app.js",
      "env": {
        "NODE_ENV": "production"
      },
      "repo": "https://github.com/MyUserName/MyNodeAppRepo.git",
      "ref": "origin/master"
    }
  ]
}
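For completeness, I deploy the ecosystem file and check the app's metadata roughly like this (as far as I can tell, pm2 describe is where the versioning details show up):
pm2 start pm2-services.json
pm2 describe my-node-app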
For the ref field, I also tried putting refs/remotes/origin/master, remotes/origin/master and master.
Sadly none of them worked (I made sure they are correct using git show-ref).
Additional info:
NodeJS Version: v15.11.0
NPM Version: 7.6.3
PM2 Version: 4.5.6 (latest at the time of writing)
So, how do I get the Versioning field to display correctly?
Note: This isn't really an issue but rather a minor inconvenience. I just want to know what I'm doing wrong.
Let's say I have the following scripts in my package.json
{
  "scripts": {
    "release:public": "....",
    "release:beta": "...."
  }
}
But now I want to add a prerelease script that is identical for both release:public and release:beta. Is it possible to have something like prerelease:*, or is there another way to run a script before both of them?
I do understand your question, and the example from RobC is usable, but the naming is not recommended. When you prefix a script name with pre and the rest of the name matches another script, the pre script runs before that script (https://docs.npmjs.com/cli/v8/using-npm/scripts#npm-run-user-defined). The alternative is an explicit helper script that you chain in front of each release script:
{
  ...
  "scripts": {
    "beforerelease": "....",
    "release:public": "npm run beforerelease && ....",
    "release:beta": "npm run beforerelease && ...."
  },
  ...
}
But that is just as much work as using the pre-script functionality, like this:
{
  ...
  "scripts": {
    "prerelease:public": "....",
    "release:public": "....",
    "prerelease:beta": "....",
    "release:beta": "...."
  },
  ...
}
Like Kousha probably has, I have a package with a lot of different run scripts, and I want one script that runs before many of the others. So the question still stands: is it possible in any way to use wildcards in the command part of a script in package.json?
I have my Node server at F:\proj\dev-react-node-java\src\server. I used jasmine init to create the spec folder there, and running jasmine in the terminal runs the specs (tests) correctly.
I wish to run the tests from F:\proj\dev-react-node-java, so I used the command
jasmine --config=src/server/spec/support/jasmine.json
at that path, but I get the message 'No specs found'. Why is it not using the correct configuration file (jasmine.json)?
I am sure --config reaches this file because:
Giving a wrong path gives a 'Cannot find module' error.
Writing invalid JSON also generates an error.
Here is the jasmine.json for reference:
{
  "spec_dir": "spec",
  "spec_files": [
    "**/*[sS]pec.js"
  ],
  "helpers": [
    "helpers/**/*.js"
  ],
  "stopSpecOnExpectationFailure": false,
  "random": true
}
spec/support/jasmine.json is the default path as far as I understand, since running the jasmine command from, say, F:\proj\dev-react-node-java\src\server\spec also results in 'No specs found'.
Jasmine version is 3.6.1.
P.S. This is my first question asked here. Please inform if I made any mistakes in asking. Thank you.
I did find the reason. It is indeed not an issue with the config flag but rather with my jasmine.json file.
I thought the --config flag simply specified the path to the file instead of the default spec/support/jasmine.json, and that it would then behave the same as if the config had been at spec/support/jasmine.json (i.e. as if the command had been run from src/server).
But
F:\proj\dev-react-node-java>jasmine --config=src/server/spec/support/jasmine.json
is not the same as
F:\proj\dev-react-node-java\src\server>jasmine --config=spec/support/jasmine.json
What it does instead is resolve the paths inside the config file (spec_dir, helpers) relative to the directory the command was called from, and then use those to run the tests.
Hence, what worked was changing the spec_dir field:
{
  "spec_dir": "src/server/spec",
  "spec_files": [
    "**/*[sS]pec.js"
  ],
  "helpers": [
    "helpers/**/*.js"
  ],
  "stopSpecOnExpectationFailure": false,
  "random": true
}
A little more clarification and some examples in the docs would have been nice, but perhaps I misunderstood the functionality.
I have a (demo) application hosted on Heroku. I've enabled Heroku's "review app" feature to spin up new instances for pull request reviews. These review instances all get a new MongoDB (on mLab) provisioned for them through Heroku's add-on system. This works great.
In my repository, I've defined some seeder scripts to quickly get a test database up and running. Running yarn seed (or npm run seed) fills the database with test data. This works great during development, and it would be perfect for review apps as well. I want to execute the seeder command in the postdeploy hook of the Heroku review app, which can be done by specifying it under the environments.review section of the app.json file. Like so:
{
  "name": "...",
  "addons": [
    "mongolab:sandbox"
  ],
  "environments": {
    "review": {
      "addons": [
        "mongolab"
      ],
      "scripts": {
        "postdeploy": "npm run seed"
      }
    }
  }
}
The problem is, the seeder script relies on some development-only dependencies (faker, ts-node [this is a TypeScript project], and mongo-seeding) to execute, and these dependencies are not available in the postdeploy phase of a Heroku app.
I also don't think that "compiling" the TypeScript in the regular build step is the best idea, since the seeder script is only used in development (and review apps). Besides, I'm not sure that would resolve the issue with missing dependencies like faker.
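For context, the relevant part of my package.json looks roughly like this (the seed entry point and versions are just placeholders):
{
  "scripts": {
    "seed": "ts-node seed/index.ts"
  },
  "devDependencies": {
    "faker": "...",
    "mongo-seeding": "...",
    "ts-node": "..."
  }
}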
How would one go about this? Any tricks I'm missing?
Can I maybe skip Heroku's step where it actively deletes development dependencies? But only for review apps? Or even better, can I "exclude" just the couple of dependencies I need, and only for review apps?
The Heroku docs indicate that when the NODE_ENV variable contains anything but "production", the devDependencies will not be removed after the build step.
To make sure this only happens for Heroku review apps, you can set the NODE_ENV variable under the environments.review section of the app.json file. The following config should do the trick:
{
  "name": "...",
  "addons": [
    "mongolab"
  ],
  "environments": {
    "review": {
      "addons": [
        "mongolab:sandbox"
      ],
      "env": {
        "NODE_ENV": "development"
      },
      "scripts": {
        "postdeploy": "npm run seed"
      }
    }
  }
}
I'm trying to use the linkurious library (a sigma fork), which provides "main": "dist/sigma.require.js" in its package.json. This allows me to do:
var sigma = require('linkurious');
However, the plugins are not included, so I have to require them separately. The problem is that the plugins rely on the sigma variable being available in the global scope, so I've shimmed things as follows (from the package.json):
"browser": {
"sigma": "./node_modules/linkurious/dist/sigma.js",
"linkurious/plugins": "./node_modules/linkurious/dist/plugins.js"
},
"browserify-shim": {
"sigma": {"exports": "sigma"},
"linkurious/plugins": { "depends": [ "sigma" ] }
},
"browserify": {
"transform": [ "browserify-shim" ]
},
When run in a browser, this doesn't generate errors while the plugins are included (I gather this means the global variable is available), but references to the plugins fail (as if they failed to attach themselves, or attached themselves to a non-global variable).
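For reference, the requires in my index.js boil down to something like this (simplified):
// index.js (simplified)
var sigma = require('sigma');    // the shimmed build, which should export the global sigma
require('linkurious/plugins');   // should attach the plugins to that global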
I'm using grunt-browserify to run the process where I have it configured like this (from the Gruntfile.js):
grunt.initConfig({
  browserify: {
    libs: {
      files: { 'inc.js': ['index.js'] },
    },
  }
});
I've attached a little project to this issue with the minimal code required to demonstrate the problem, in the hope that someone else can replicate it and figure it out. Unpack it, run npm install; npm start, and point a browser at http://localhost:8002/ to see the issue.
Thanks in advance,
ekkis
sigma.zip
- edit I -
Incidentally, bendrucker at the git repo (see: https://github.com/thlorenz/browserify-shim/issues/215) suggests I need to do a global transform. It's been explained to me that shimming doesn't work on node_modules files and that for those I need a global transform. This doesn't make much sense to me, as the whole point of shimming is that you don't own the code you're shimming. In any case, bendrucker pointed me to another SO post where the question is posed but no answers are provided.
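If I'm reading that right, the global transform would go into the Gruntfile roughly like this (untested; I'm assuming grunt-browserify hands these options straight to browserify):
grunt.initConfig({
  browserify: {
    libs: {
      files: { 'inc.js': ['index.js'] },
      options: {
        // apply browserify-shim to files under node_modules as well
        transform: [['browserify-shim', { global: true }]]
      }
    }
  }
});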
help?