npm glob pattern not matching subdirectories - node.js

In my package.json, I have a scripts block that uses **/*Test.js to match files. When run via npm, the pattern does not match sub-directories more than one level deep. When executed on the command line directly, it works as expected.
Can anyone explain what is happening, and provide a workaround or solution?
package.json
{
"name": "immutable-ts",
"scripts": {
"test": "echo mocha dist/**/*Test.js",
}
}
Execution
% npm run test
> immutable-ts@0.0.0 test .../immutable-ts
> echo mocha dist/**/*Test.js
mocha dist/queue/QueueTest.js dist/stack/StackTest.js
% echo mocha dist/**/*Test.js
mocha dist/queue/QueueTest.js dist/stack/StackTest.js dist/tree/binary/BinaryTreeTest.js
% ls dist/**/*
dist/collections.js dist/queue/QueueTest.js dist/tree/binary/BinaryTree.js dist/immutable.js.map dist/stack/Stack.js.map dist/tree/binary/BinaryTreeTest.js.map
dist/immutable.js dist/stack/Stack.js dist/tree/binary/BinaryTreeTest.js dist/queue/Queue.js.map dist/stack/StackTest.js.map
dist/queue/Queue.js dist/stack/StackTest.js dist/collections.js.map dist/queue/QueueTest.js.map dist/tree/binary/BinaryTree.js.map

Solution
Change your scripts so that what you pass to Mocha is protected from expansion by the shell:
"scripts": {
"test": "mocha 'dist/**/*Test.js'",
}
Note the single quotes around the parameter given to mocha.
Explanation
This issue is fixable without resorting to external tools. The root cause of your problem is that npm uses sh as the shell that runs your script commands.
It is overwhelmingly the case that when a *nix process starts a shell, it starts sh unless something tells it to do otherwise. The shell preference you set for logins does not count as "telling it otherwise": if you have, say, zsh as your login shell, that does not mean npm will use zsh.
Implementations of sh that do not include extensions beyond what sh should provide do not understand the ** glob the way you want. As far as I can tell, it is interpreted as *. However, Mocha interprets the paths passed to it using a JavaScript implementation of globs, so you can work around the issue by protecting your globs from being interpreted by sh. Consider the following package.json:
{
"name": "immutable-ts",
"scripts": {
"bad": "mocha test/**/*a.js",
"good": "mocha 'test/**/*a.js'",
"shell": "echo $0"
}
}
The shell script is just so that we can check what shell is running the script. If you run it, you should see sh.
Now, given the following tree:
test/
├── a.js
├── b.js
├── x
│   ├── a
│   │   ├── a.js
│   │   └── b.js
│   ├── a.js
│   └── b
│       └── a.js
└── y
    ├── a.js
    └── q
With all a.js and b.js files containing it(__filename);, you get the following results:
$ npm run bad
> immutable-ts@ bad /tmp/t2
> mocha test/**/*a.js
- /tmp/t2/test/x/a.js
- /tmp/t2/test/y/a.js
0 passing (6ms)
2 pending
$ npm run good
> immutable-ts@ good /tmp/t2
> mocha 'test/**/*a.js'
- /tmp/t2/test/a.js
- /tmp/t2/test/x/a.js
- /tmp/t2/test/x/a/a.js
- /tmp/t2/test/x/b/a.js
- /tmp/t2/test/y/a.js
0 passing (5ms)
5 pending
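The difference can be reproduced without Mocha at all. The following throwaway demonstration is a sketch: the directory names are made up, and echo stands in for whatever command would receive the expanded arguments.

```shell
# Build a tiny throwaway tree
dir=$(mktemp -d)
mkdir -p "$dir/x/a"
touch "$dir/x/a.js" "$dir/x/a/a.js"

# Plain sh: ** degrades to a single-level *, so the deeper file is missed
sh_out=$(cd "$dir" && sh -c 'echo **/*.js')
echo "sh:   $sh_out"

# bash with globstar enabled: ** recurses into subdirectories
bash_out=$(cd "$dir" && bash -c 'shopt -s globstar; echo **/*.js')
echo "bash: $bash_out"
```

The first echo prints only x/a.js, while the second also finds x/a/a.js, mirroring the bad/good difference above.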

You can inline the find command with the -name option in your scripts to replace the extended globbing syntax provided by zsh.
In your case, the command would be:
mocha `find dist -type f -name '*Test.js'`
You can realistically omit the -type f part if you're confident that you won't ever put "Test.js" in a directory name. (A safe assumption, most likely, but I included it for completeness' sake.)
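As a sketch of that approach, here is a hypothetical miniature of the dist tree from the question; find's traversal order is not guaranteed, so the output is piped through sort.

```shell
# Recreate a miniature version of the dist tree from the question
dir=$(mktemp -d)
mkdir -p "$dir/dist/queue" "$dir/dist/tree/binary"
touch "$dir/dist/queue/QueueTest.js" "$dir/dist/tree/binary/BinaryTreeTest.js"

# find recurses to any depth regardless of which shell runs the npm script
found=$(cd "$dir" && find dist -type f -name '*Test.js' | sort)
echo "$found"
```

Unlike the ** glob, this behaves identically under sh, bash, and zsh.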

The glob expansion is actually done by your shell, which is why it works from the command line.
Alternatively, you can run mocha --recursive and point it at your test directory.

Related

How to run FZF in vim in a directory where the directory path comes from a function?

I'd like to run fzf file finder (inside vim) on a custom directory, but the directory name varies at runtime.
For example, say vim is started at the root of the project directory. The subdirectories look like:
$ tree -ad
.
├── docs
├── .notes
│   ├── issue_a
│   └── new_feature
├── README
├── src
└── tests
Every time I create a branch, I also create a directory in .notes/BRANCH_NAME where I store notes, tests results, etc. The .notes directory itself is ignored by git.
I'd like to run FZF on the .notes/BRANCH_NAME directory. The branch name will come from a function (say using https://github.com/itchyny/vim-gitbranch).
I am able to run fzf on the .notes directory by :FZF .notes, but I don't know how to run it on the branch directory within .notes.
Thanks!
Edit: Added what I'd tried:
I tried saving the output of gitbranch#name() to a variable and then using it to call fzf#run(), but it didn't quite work:
function! NotesDir()
let branch=gitbranch#name()
let ndir=".notes/" . branch
call fzf#run({'source': ndir})
endfunction
command! NotesDir call NotesDir()
When I run :NotesDir while on branch issue_a, I see fzf window with an error:
> < [Command failed: .notes/issue_a]
.notes/issue_a indicates that the ndir variable holds the correct notes directory path, but I couldn't figure out how to pass it to fzf.
Looking at the documentation for fzf#run(), it looks like source could be either a string, interpreted as a shell command to execute, or a list, used as-is to populate the plugin's window.
So your mistake was passing a path as source, which was interpreted as an external command, instead of either an external command or a list of paths.
It also says that you should at least provide a sink, but some examples don't have it, so YMMV.
If I am reading that section correctly, the following approaches should work:
" with a string as external command
call fzf#run({'source': 'find ' .. ndir, 'sink': 'e'})
" with a list
let list = globpath('.', ndir, 0, 1)
\ ->map({ _, val -> val->fnamemodify(':.')})
call fzf#run({'source': list, 'sink': 'e'})
NOTE: I don't use that plugin so this is not tested.
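One way to see what the string-source form would feed into the plugin: a string source is run as a shell command, and its stdout lines populate the list, so you can preview it in a terminal. A sketch with a made-up branch directory and file name:

```shell
# fzf's string source is just a shell command; its stdout lines become the list
dir=$(mktemp -d)
mkdir -p "$dir/.notes/issue_a"
touch "$dir/.notes/issue_a/todo.txt"

# This is what 'find ' .. ndir would produce for ndir = ".notes/issue_a"
lines=$(cd "$dir" && find .notes/issue_a -type f)
echo "$lines"
```

Each line of that output would appear as one candidate in the fzf window.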

Renaming files in a directory based on an instructions file

I have a directory that regroups a .sql file and multiple data files.
/home/barmar/test.dir
├── database.sql
├── table1.unl
├── table2.unl
├── table3.unl
├── table4.unl
├── table5.unl
└── table6.unl
The .sql file contains an unload instruction for every .unl file. The issue I have is that the names of the .unl files are not the same as in the instructions in the .sql file.
Usually the name should be TABLE_TABID.unl. I'm looking for a way to retrieve the names from the .sql file and rename the .unl files correctly.
The .sql file contains multiple instructions; here's an example of the lines that contain the correct names:
{ unload file name = table1_89747.unl number of rows = 8376}
As you can see, the only thing in common is the table name (table1 in the example).
The expected result should be something like that:
/home/barmar/test.dir
├── database.sql
├── table1_89747.unl
├── table2_89765.unl
├── table3_89745.unl
├── table4_00047.unl
├── table5_00787.unl
└── table6_42538.unl
This sed line will generate commands to rename files like table1.unl to names like table1_89747.unl:
sed -n 's/.*name = \([^_]*\)\(_[^.]*\.unl\).*/mv '\''\1.unl'\'' '\''\1\2'\''/p' <database.sql
Assumptions: spaces exist around the = sign, and the filename is of the form FOO_BAR.unl, i.e. the underscore character and the extension are always present.
Sample output:
$ echo '{ unload file name = table1_89747.unl number of rows = 8376}' | sed -n 's/.*name = \([^_]*\)\(_[^.]*\.unl\).*/mv '\''\1.unl'\'' '\''\1\2'\''/p'
mv 'table1.unl' 'table1_89747.unl'
To generate and execute the commands:
eval $(sed -n 's/.*name = \([^_]*\)\(_[^.]*\.unl\).*/mv '\''\1.unl'\'' '\''\1\2'\'';/p' <database.sql | tr -d '\n')
It goes without saying: before running this, make sure your database.sql doesn't contain malicious strings that could lead to renaming files outside the current directory.
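If you would rather avoid eval entirely, one alternative (a sketch, not from the answer above; it assumes the generated names contain no whitespace) is to emit "old new" pairs and feed them to a read loop. The demonstration below runs in a throwaway directory with sample data mirroring the question.

```shell
# Throwaway directory with sample data mirroring the question
dir=$(mktemp -d)
cd "$dir"
printf '{ unload file name = table1_89747.unl number of rows = 8376}\n' > database.sql
touch table1.unl

# Emit "oldname newname" pairs and mv each one; no eval involved
sed -n 's/.*name = \([^_]*\)\(_[^.]*\.unl\).*/\1.unl \1\2/p' database.sql |
while read -r old new; do
    mv -- "$old" "$new"
done
ls
```

Because nothing is re-parsed by the shell, a stray quote or semicolon in database.sql cannot inject extra commands, though path traversal via ../ in names would still need checking.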

Recursive code to traverse through directories in python and filter files

I would like to recursively search through "project" directories for a "Feedback Report" folder, and if that folder has no more sub-directories, I would like to process its files in a particular manner.
After we have reached the target directory, I want to find the latest feedback report.xlsx in that directory (which will contain many previous versions of it).
The data is really huge and inconsistent in its directory structure. I believe the following algorithm should bring me close to my desired behavior, but I am still not sure. I have tried multiple scrappy scripts to convert the tree into a JSON path hierarchy and then parse it, but the inconsistency makes the code huge and unreadable.
The path of the file is important.
My algorithm that I would like to implement is:
dictionary_of_files_paths = {}

def recursive_traverse(path):
    # not sure if this is the right base case
    if path.isdir:
        if re.match(dir_name, *eedback*port*) and dir has no sub directory:
            process(path, files)
            return
        for contents in os.listdir(path):
            recursive_traverse(os.path.join(path, contents))
    return

def process(path, files):
    files.filter(filter files only with xlsx)
    files.filter(filter files only that have *eedback*port* in it)
    files.filter(os.path.getmtime > 2016)
    files.sort(key=lambda x: os.path.getmtime(x))
    reversed(files)
    dictionary_of_files_paths[path] = files[0]

recursive_traverse("T:\\Something\\Something\\Projects")
I need guidance before I actually implement this, and I need to validate whether it is correct.
There is another snippet for the path hierarchy that I got from Stack Overflow:
try:
    for contents in os.listdir(path):
        recursive_traverse(os.path.join(path, contents))
except OSError as e:
    if e.errno != errno.ENOTDIR:
        raise
# file
Use pathlib and glob.
Test directory structure:
.
├── Untitled.ipynb
├── bar
│   └── foo
│       └── file2.txt
└── foo
    ├── bar
    │   └── file3.txt
    ├── foo
    │   └── file1.txt
    └── test4.txt
Code:
from pathlib import Path

here = Path('.')
for subpath in here.glob('**/foo/'):
    if any(child.is_dir() for child in subpath.iterdir()):
        continue  # Skip the current path if it has child directories
    for file in subpath.iterdir():
        print(file.name)
        # process your files here according to whatever logic you need
Output:
file1.txt
file2.txt

Tell Mocha where to look for test files

How can I tell Mocha where to look for my test files? My test files don't reside in the standard ./test folder at the project root; they sit in Modules/.*/Tests/.
I've attempted to use the --grep command line argument, but I have a feeling this argument looks for test NAMES, not test FILE NAMES.
mocha --grep Modules\/.*\/Tests
The above command gives the error:
Warning: Could not find any test files matching pattern: test
No test files found
You have to match not only the directory but also the files in it. Besides, you don't need grep; use only --:
mocha -- Modules/**/Tests/*.js
# or mocha -- Modules/Tests/*.js (I am not sure because you didn't give the project hierarchy)

Get path to first called/started module by node command

How do I get the path to the directory where the 'first called module' is?
For example, we have a bash file called startnode.sh in the /bin/ folder:
node ~/path/to/file/index.js
now in index.js we have complex code with requiring other files like:
var myMoudle = require("./module.js");
etc. Now we are in a random folder, /qpa/dir/, and we start our node process via the bash file:
startnode
and in our module.js we print the working directory:
console.log(process.cwd());
Surprise! process.cwd() returns /qpa/dir/ instead of ~/path/to/file/.
Why?
I need the directory where the node process began, i.e. the directory of the first called module.
Use environment variables. Consider the following structure:
test
├── bin.sh
└── test
    └── hello.js
in bin.sh:
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )" node BASE_PATH/test/test/hello.js
in hello.js:
console.log(__dirname); // BASE_PATH/test/test
console.log(process.env.DIR); // BASE_PATH/test
console.log(process.cwd()); // BASE_PATH
note: you need to replace BASE_PATH with your path (~/path/to/file/)
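The key mechanism here is the VAR=value cmd form: the assignment is exported only into that one command's environment. A minimal shell-only illustration of the same idea, with no node required (the /some/base path is made up):

```shell
# DIR is visible inside the child process, but the parent shell is untouched
captured=$(DIR=/some/base sh -c 'echo "$DIR"')
echo "$captured"
```

In bin.sh above, DIR is computed from BASH_SOURCE, so it always points at the directory containing the script, no matter where you invoke it from.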
