I'm writing a CLI that runs on Node.js. The basic usage scenario is that one would pass in a number of files, and/or folders, and/or glob patterns, e.g.:
my-cli file.foo file2.foo
my-cli folder/ folder2/
my-cli folder/**/*.foo folder2/**/*.foo
my-cli file.foo folder/ folder/**/*.foo
And so on. I'm wondering what's the most efficient way of handling whatever file/folder/glob style input? I'm using Optimist to get argv._ (all the args), and this is my CoffeeScript (sorry) code handling the different variations of the input:
_ = require 'lodash'
fs = require 'fs'
glob = require 'glob'

module.exports = (files) ->
  htmlFiles = []
  ext = 'html'
  files = _.flatten files
  _.forEach files, (file) ->
    fileExt = file.split('.').pop()
    if fileExt isnt ext
      if fs.lstatSync(file).isDirectory()
        file = if file.charAt(file.length - 1) isnt '/' then file + '/' else file
        # concat returns a new array, so the result must be reassigned
        htmlFiles = htmlFiles.concat glob.sync file + '**/*.' + ext
    else
      htmlFiles.push file
  htmlFiles
I feel this is a bit messy, and I'm hoping there's a nice library somewhere on npm that I could use, or some other tricks I could take advantage of?
Related
I'm currently working on an application that should be able to list out the items in a directory, however, after looking at the OS module in Nim, I couldn't find a way to see what is inside a directory. Is it possible that it hasn't been implemented yet, or did I maybe just look at the wrong place to find such a function?
So in short, how can I see what is inside /home/username/Documents/? How can I list out its content in Nim?
You need to look in the Iterators section of the os module.
There is walkDir and related iterators for this purpose:
iterator walkDir(dir: string; relative = false; checkDir = false): tuple[
    kind: PathComponent, path: string]
Walks over the directory dir and yields for each directory or file in
dir. The component type and full path for each item are returned.
You can use it like this:
import os

for kind, path in walkDir("/home/username/Documents/"):
  case kind:
  of pcFile:
    echo "File: ", path
  of pcDir:
    echo "Dir: ", path
  of pcLinkToFile:
    echo "Link to file: ", path
  of pcLinkToDir:
    echo "Link to dir: ", path
Recursive traversal of subfolders with filtering by extensions:
import os
import sequtils

let
  targetFolder = "/home/user/media/"
  targetExt = @[".mp3", ".webm", ".mkv"]

proc scanFolder(tgPath: string, extLst: seq[string]): seq[string] =
  var
    fileNames: seq[string]
    path, name, ext: string
  for kind, obj in walkDir tgPath:
    if kind == pcDir:
      # recurse into subdirectories
      fileNames = concat(fileNames, scanFolder(obj, extLst))
    (path, name, ext) = splitFile(obj)
    if ext in extLst:
      fileNames.add(obj)
  return fileNames

var fileList = scanFolder(targetFolder, targetExt)
for f in fileList:
  echo f
My Node.js program wants to read the contents of the file "test.txt" on a Windows machine. It checks with fs.existsSync() that the file exists and reads its content. But now I want the program instead to give an error or warning if the name of the file on disk is actually "TEST.txt" or any other name which differs in case from the name my program is looking for, e.g. "test.txt".
Is there a straightforward way to figure out that even though existsSync() tells me a file exists, the file on disk has a name which differs in case from the filename I am using to look for it?
You can use fs.readdir to get a list of all files in the directory and then compare the filename to see if it matches exactly.
var fs = require('fs');
var path = __dirname;
var filename = 'test.txt';
var files = fs.readdirSync(path);
var exists = files.includes(filename);
// true if file on disk is "test.txt",
// false if file on disk is "TEST.txt"
console.log(exists);
Can anyone please suggest how to match multiple file extensions with the glob.sync method?
Something like:
const glob = require('glob');
let files = glob.sync(path + '**/*.(html|xhtml)');
Thank you :)
You can use this (which most shells support as well):
glob.sync(path + '**/*.{html,xhtml}')
Or one of these:
glob.sync(path + '**/*.+(html|xhtml)')
glob.sync(path + '**/*.#(html|xhtml)')
I'm looking to monkey-patch require() to replace its file loading with my own function. I imagine that internally require(module_id) does something like:
Convert module_id into a file path
Load the file path as a string
Compile the string into a module object and set up the various globals correctly
I'm looking to replace step 2 without reimplementing steps 1 + 3. Looking at the public API, there's require() which does 1 - 3, and require.resolve() which does 1. Is there a way to isolate step 2 from step 3?
I've looked at the source of require mocking tools such as mockery -- all they seem to be doing is replacing require() with a function that intercepts certain calls and returns a user-supplied object, and passes on other calls to the native require() function.
For context, I'm trying to write a function require_at_commit(module_id, git_commit_id), which loads a module and any of that module's requires as they were at the given commit.
I want this function because I want to be able to write certain functions that a) rely on various parts of my codebase, and b) are guaranteed to not change as I evolve my codebase. I want to "freeze" my code at various points in time, so thought this might be an easy way of avoiding having to package 20 copies of my codebase (an alternative would be to have "my_code_v1": "git://..." in my package.json, but I feel like that would be bloated and slow with 20 versions).
Update:
So the source code for module loading is here: https://github.com/joyent/node/blob/master/lib/module.js. Specifically, to do something like this you would need to reimplement Module._load, which is pretty straightforward. However, there's a bigger obstacle: step 1, converting module_id into a file path, is actually harder than I thought, because resolveFilename needs to be able to call fs.exists() to know where to terminate its search. So I can't just substitute out individual files; I have to substitute entire directories, which means it's probably easier just to export the entire git revision to a directory and point require() at that directory than to override require().
Update 2:
Ended up using a different approach altogether... see answer I added below
You can use the require.extensions mechanism. This is how the coffee-script coffee command can load .coffee files without ever writing .js files to disk.
Here's how it works:
https://github.com/jashkenas/coffee-script/blob/1.6.2/lib/coffee-script/coffee-script.js#L20
loadFile = function(module, filename) {
  var raw, stripped;
  raw = fs.readFileSync(filename, 'utf8');
  stripped = raw.charCodeAt(0) === 0xFEFF ? raw.substring(1) : raw;
  return module._compile(compile(stripped, {
    filename: filename,
    literate: helpers.isLiterate(filename)
  }), filename);
};

if (require.extensions) {
  _ref = ['.coffee', '.litcoffee', '.md', '.coffee.md'];
  for (_i = 0, _len = _ref.length; _i < _len; _i++) {
    ext = _ref[_i];
    require.extensions[ext] = loadFile;
  }
}
Basically, assuming your modules have a set of well-known extensions, you should be able to use this pattern of a function that takes the module and filename, does whatever loading/transforming you need, and then returns an object that is the module.
This may or may not be sufficient to do what you are asking, but honestly from your question it sounds like you are off in the weeds somewhere far from the rest of the programming world (don't take that harshly, it's just my initial reaction).
So rather than mess with the node require() module, what I ended up doing is archiving the given commit I need to a folder. My code looks something like this:
fs = require 'fs'
child_process = require 'child_process'

# commit_id is the commit we want
# (note that if we don't need the whole repository,
# we can pass "commit_id path_to_folder_we_need")
#
# path is the path to the file you want to require starting from the repository root
# (ie 'lib/module.coffee')
#
# cb is called with (err, loaded_module)
#
require_at_commit = (commit_id, path, cb) ->
  dir = 'old_versions' # make sure this is in .gitignore!
  dir += '/' + commit_id
  # './' makes require resolve relative to this file instead of node_modules
  do_require = -> cb null, require './' + dir + '/' + path
  if not fs.existsSync(dir)
    fs.mkdirSync(dir)
    cmd = 'git archive ' + commit_id + ' | tar -x -C ' + dir
    child_process.exec cmd, (error) ->
      if error
        cb error
      else
        do_require()
  else
    do_require()
I am using R on Linux.
I have a set of functions that I use often and have saved in different .r script files. Those files are in ~/r_lib/.
I would like to include those files without having to use the fully qualified name, just "file.r". Basically I am looking for the same thing as the -I option of the C++ compiler.
Is there a way to set the include path from R, in the .Rprofile or .Renviron file?
Thanks
You can use the sourceDir function in the Examples section of ?source:
sourceDir <- function(path, trace = TRUE, ...) {
  for (nm in list.files(path, pattern = "\\.[RrSsQq]$")) {
    if (trace) cat(nm, ":")
    source(file.path(path, nm), ...)
    if (trace) cat("\n")
  }
}
And you may want to use sys.source to avoid cluttering your global environment.
If you set the chdir parameter of source to TRUE, then the source calls within the included file will be relative to its path. Hence, you can call:
source("~/r_lib/file.R", chdir = TRUE)
It would probably be better not to have source calls within your "library" and make your code into a package, but sometimes this is convenient.
Get all the files of your directory; in your case:
d <- list.files("~/r_lib/")
Then you can load them with a function from the plyr package:
library(plyr)
l_ply(d, function(x) source(paste("~/r_lib/", x, sep = "")))
If you like you can do it in a loop as well, or use a different function instead of l_ply. Conventional loop:
for (i in seq_along(d)) source(paste("~/r_lib/", d[[i]], sep = ""))
Write your own source() wrapper?
mySource <- function(script, path = "~/r_lib/", ...) {
  ## paste path + filename
  fname <- paste(path, script, sep = "")
  ## source the file
  source(fname, ...)
}
You could stick that in your .Rprofile so it will be loaded each time you start R.
If you want to load all the R files, you can extend the above easily to source all files at once
mySource <- function(path = "~/r_lib/", ...) {
  ## list of files
  fnames <- list.files(path, pattern = "\\.[RrSsQq]$")
  ## add path
  fnames <- paste(path, fnames, sep = "")
  ## source the files
  lapply(fnames, source, ...)
  invisible()
}
Actually, though, you'd be better off starting your own private package and loading that.