ESLint import/no-unresolved ignore pattern for imports starting with #

If I put this in my .eslintrc.js file, I expect it to ignore only imports starting with #, but instead it turns off the rule for everything.
How can I turn off no-unresolved only for imports starting with #, for example require('#testmodule')?
rules: {
  'import/no-unresolved': [2, { ignore: ['^#.+$'] }],
},

AFAIK the ignore pattern is always anchored at the beginning of the line, so your pattern looks for lines starting with #, e.g. #require('testmodule');.
But more importantly, you used ignore, which expects a boolean; since ['^#.+$'] evaluates to true, the rule simply gets turned off entirely, as you've noticed.
using this (or something adapted if you want to use mjs imports) should work:
rules: {
  'import/no-unresolved': [2, { ignorePattern: ["require\\('#.+'\\)"] }],
},
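To illustrate the anchoring point from the answer, here is a quick plain-JavaScript check (no ESLint involved; '#testmodule' is the question's example):

// '^#.+$' only matches strings that begin with '#'
console.log(/^#.+$/.test("#testmodule"));             // true
console.log(/^#.+$/.test("require('#testmodule')"));  // false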


How can I get the relative path of the current directory up to an arbitrary parent dir?

This is not a module, just a workspace.
Folder structure:
workspace1
- workspace2
- workspace3
- workspace4
- workspace5
If I cd into workspace4, the full path is /Users/me/my-files/terraform/workspace1/workspace3/workspace4.
How can I use Terraform functions to get just the path workspace1/workspace3/workspace4?
Is there a way I can get the full path (https://www.terraform.io/docs/configuration/functions/abspath.html) and then trim out everything before workspace1? Perhaps replace() can do that? There are many new features in the latest Terraform version, though, so I want to check there isn't one that makes this easy which I could not find in the docs.
# Trying to use capture groups doesn't seem to work (it just outputs the full path)
locals {
  test = replace(
    abspath(path.root),
    "/(.*)(workspace1.*)",
    "$2"
  )
}
output "test" { value = "${local.test}" }
Edit
This match should work, but it's not supported:
test = replace(
  abspath(path.root),
  "/.+?(?=workspace1)/",
  "$1"
)
It fails with: unsupported Perl syntax: `(?=`.
Assuming there aren't any complications like symlinks to make the problem more "interesting", perhaps you could do it by using abspath both on path.module and on a relative path from there to your codebase root and then use the length of the latter to trim the former. For example:
locals {
  module_path = abspath(path.module)
  codebase_root_path = abspath("${path.module}/../..")

  # Trim local.codebase_root_path and one additional slash from local.module_path
  module_rel_path = substr(local.module_path, length(local.codebase_root_path) + 1, length(local.module_path))
}
In the above I'm assuming that the current module is two nesting levels under the codebase root, and using that to discover the absolute path of the codebase root in order to trim it off the absolute module path.
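For instance, with the directory layout from the question, the three locals would evaluate roughly as follows (paths taken from the question's example):

# module_path        = "/Users/me/my-files/terraform/workspace1/workspace3/workspace4"
# codebase_root_path = "/Users/me/my-files/terraform/workspace1"
# module_rel_path    = "workspace3/workspace4"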
This works; I think I had the syntax wrong:
# Folder structure:
# workspace1
# - workspace2
# - workspace3
# - workspace4
# - workspace5
# Absolute path on disk: /Users/me/my-files/terraform/workspace1/workspace3/workspace4
locals {
  test = replace(
    abspath(path.root),
    "/.+?(workspace1.*)/",
    "$1"
  )
}
output "test" { value = "${local.test}" }

Extracting pattern which does not necessarily repeat

I am working with ANSI 835 plain-text files and am looking to capture all data in segments which start with “BPR” and end with “TRN”, including those markers. A given file is a single line; within that line the segment can, but does not always, repeat. I am running the process on multiple files at a time, and ideally I would be able to record the file name in which the segment(s) occur.
Here is what I have so far, based on an answer to another question:
#!/bin/sed -nf
/BPR.*TRN/ {
  s/.*\(BPR.*TRN\).*/\1/p
  d
}
/from/ {
  : next
  N
  /BPR/ {
    s/^[^\n]*\(BPR.*TRN\)[^\n]*/\1/p
    d
  }
  $! b next
}
I run all files I have through this and write the results to a file which looks like this:
BPR*I*393.46*C*ACH*CCP*01*011900445*DA*0000009046*1066033492**01*071923909*DA*7234692932*20150120~TRN
BPR*I*1611.07*C*ACH*CCP*01*031100209*DA*0000009108*1066033492**01*071923909*DA*7234692932*20150122~TRN
BPR*I*1415.25*C*CHK************20150108~TRN
BPR*H*0*C*NON************20150113~TRN
BPR*I*127.13*C*CHK************20150114~TRN
BPR*I*22431.28*C*ACH*CCP*01*071000152*DA*99643*1361236610**01*071923909*DA*7234692932*20150112~TRN
BPR*I*182.62*C*ACH*CCP*01*071000152*DA*99643*1361236610**01*071923909*DA*7234692932*20150115~TRN
Ideally each line would be prepended with the file name like this:
IDI.Aetna.011415.64539531.rmt:BPR*I*393.46*C*ACH*CCP*01*011900445*DA*0000009046*1066033492**01*071923909*DA*7234692932*20150120~TRN
IDI.BCBSIL.010915.6434438.rmt:BPR*I*1611.07*C*ACH*CCP*01*031100209*DA*0000009108*1066033492**01*071923909*DA*7234692932*20150122~TRN
IDI.CIGNA.010215.64058847.rmt:BPR*I*1415.25*C*CHK************20150108~TRN
IDI.GLDRULE.011715.646719.rmt:BPR*H*0*C*NON************20150113~TRN
IDI.MCREIN.011915.6471442.rmt:BPR*I*127.13*C*CHK************20150114~TRN
IDI.UHC.011915.64714417.rmt:BPR*I*22431.28*C*ACH*CCP*01*071000152*DA*99643*1361236610**01*071923909*DA*7234692932*20150112~TRN
IDI.UHC.011915.64714417.rmt:BPR*I*182.62*C*ACH*CCP*01*071000152*DA*99643*1361236610**01*071923909*DA*7234692932*20150115~TRN
The last two lines would be an example of a file where the segment pattern repeats.
Again, prepending each line with the file name is ideal. What I really need is to be able to process a given single-line file which has the “BPR…TRN” segment repeating and write all segments in that file to my output file.
Try with awk:
awk '
  /BPR/ { sub(".*BPR", "BPR") }
  /TRN/ { sub("TRN.*", "TRN") }
  /BPR/,/TRN/ { print FILENAME ":" $0 }
' *.rmt
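If the segment repeats within one single-line file, a hedged alternative (not from the answer above) is grep's -o flag, assuming no other segment terminator ~ appears between BPR and TRN, as in the sample output. -o prints each match on its own line, and -H forces the file-name prefix even for a single file:

grep -oH 'BPR[^~]*~TRN' *.rmt
# output: one "filename:BPR...~TRN" line per match, including repeats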

Changing how nodejs require() fetches files

I'm looking to monkey-patch require() to replace its file loading with my own function. I imagine that internally require(module_id) does something like:
1. Convert module_id into a file path
2. Load the file path as a string
3. Compile the string into a module object and set up the various globals correctly
I'm looking to replace step 2 without reimplementing steps 1 + 3. Looking at the public API, there's require() which does 1 - 3, and require.resolve() which does 1. Is there a way to isolate step 2 from step 3?
I've looked at the source of require-mocking tools such as mockery; all they seem to be doing is replacing require() with a function that intercepts certain calls and returns a user-supplied object, passing other calls on to the native require() function.
For context, I'm trying to write a function require_at_commit(module_id, git_commit_id), which loads a module and any of that module's requires as they were at the given commit.
I want this function because I want to be able to write certain functions that a) rely on various parts of my codebase, and b) are guaranteed to not change as I evolve my codebase. I want to "freeze" my code at various points in time, so thought this might be an easy way of avoiding having to package 20 copies of my codebase (an alternative would be to have "my_code_v1": "git://..." in my package.json, but I feel like that would be bloated and slow with 20 versions).
Update:
So the source code for module loading is here: https://github.com/joyent/node/blob/master/lib/module.js. Specifically, to do something like this you would need to reimplement Module._load, which is pretty straightforward. However, there's a bigger obstacle: step 1, converting module_id into a file path, is actually harder than I thought, because resolveFilename needs to be able to call fs.exists() to know where to terminate its search. So I can't just substitute out individual files; I have to substitute entire directories, which means it's probably easier just to export the entire git revision to a directory and point require() at that directory than to override require().
Update 2:
Ended up using a different approach altogether... see answer I added below
You can use the require.extensions mechanism. This is how the coffee-script coffee command can load .coffee files without ever writing .js files to disk.
Here's how it works:
https://github.com/jashkenas/coffee-script/blob/1.6.2/lib/coffee-script/coffee-script.js#L20
loadFile = function(module, filename) {
  var raw, stripped;
  raw = fs.readFileSync(filename, 'utf8');
  // strip a leading byte-order mark (0xFEFF), if present
  stripped = raw.charCodeAt(0) === 0xFEFF ? raw.substring(1) : raw;
  return module._compile(compile(stripped, {
    filename: filename,
    literate: helpers.isLiterate(filename)
  }), filename);
};

if (require.extensions) {
  _ref = ['.coffee', '.litcoffee', '.md', '.coffee.md'];
  for (_i = 0, _len = _ref.length; _i < _len; _i++) {
    ext = _ref[_i];
    require.extensions[ext] = loadFile;
  }
}
Basically, assuming your modules have a set of well-known extensions, you should be able to use this pattern of a function that takes the module and filename, does whatever loading/transforming you need, and then returns an object that is the module.
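As a minimal sketch of that pattern (the '.mymod' extension and the transformSource function are hypothetical placeholders, not part of any real loader):

// route a hypothetical '.mymod' extension through a custom loader
var fs = require('fs');

// placeholder for whatever loading/transforming step you need
function transformSource(source) {
  return source;
}

require.extensions['.mymod'] = function (module, filename) {
  var source = fs.readFileSync(filename, 'utf8');
  // module._compile evaluates the transformed source as if it were
  // the file's original contents
  module._compile(transformSource(source), filename);
};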
This may or may not be sufficient to do what you are asking, but honestly from your question it sounds like you are off in the weeds somewhere far from the rest of the programming world (don't take that harshly, it's just my initial reaction).
So rather than mess with Node's require() machinery, what I ended up doing is archiving the given commit I need to a folder. My code looks something like this:
# commit_id is the commit we want
# (note that if we don't need the whole repository,
# we can pass "commit_id path_to_folder_we_need")
#
# path is the path to the file you want to require starting from the repository root
# (ie 'lib/module.coffee')
#
# cb is called with (err, loaded_module)
#
require_at_commit = (commit_id, path, cb) ->
  dir = 'old_versions' # make sure this is in .gitignore!
  dir += '/' + commit_id
  do_require = -> cb null, require dir + '/' + path
  if not fs.existsSync(dir)
    fs.mkdirSync(dir)
    cmd = 'git archive ' + commit_id + ' | tar -x -C ' + dir
    child_process.exec cmd, (error) ->
      if error
        cb error
      else
        do_require()
  else
    do_require()
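A usage sketch under the same assumptions (the commit id and module path are placeholders):

# hypothetical commit id and repository-relative path
require_at_commit 'a1b2c3d', 'lib/module.coffee', (err, mod) ->
  if err
    console.error err
  else
    # mod is the module as it existed at that commit
    console.log mod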

Using Pickle with spork?

Pickle doesn't seem to be loading for me when I'm using spork...
If I run my cucumber normally, the step works as expected:
➜ bundle exec cucumber
And a product exists with name: "Windex", category: "Household Cleaners", description: "nasty bluish stuff" # features/step_definitions/pickle_steps.rb:4
But if I run it through spork, I get an undefined step:
You can implement step definitions for undefined steps with these snippets:
Given /^a product exists with name: "([^"]*)", category: "([^"]*)", description: "([^"]*)"$/ do |arg1, arg2, arg3|
  pending # express the regexp above with the code you wish you had
end
What gives?
So it turns out there is an extra config line necessary in features/support/env.rb when using spork, in order for Pickle to be able to pick up on AR models, à la this gist:
In features/support/env.rb
Spork.prefork do
  ENV["RAILS_ENV"] ||= "test"
  require File.expand_path(File.dirname(__FILE__) + '/../../config/environment')

  # So that Pickle's AR adapter properly picks up the application's models.
  Dir["#{Rails.root}/app/models/*.rb"].each { |f| load f }

  # ...
end
Adding in this line fixes my problem. This is more of a spork issue than a guard one, per se. I'll update my question...

Groovy CliBuilder: only the last longOpt is taken into account

I'm trying to use the Groovy CliBuilder to parse command-line options, using multiple long options without a short option.
I have the following processor:
def cli = new CliBuilder(usage: 'Generate.groovy [options]')
cli.with {
  h longOpt: "help", "Usage information"
  r longOpt: "root", args: 1, type: GString, "Root directory for code generation"
  x args: 1, type: GString, "Type of processor (all, schema, beans, docs)"
  _ longOpt: "dir-beans", args: 1, argName: "directory", type: GString, "Custom location for grails bean classes"
  _ longOpt: "dir-orm", args: 1, argName: "directory", type: GString, "Custom location for grails domain classes"
}
options = cli.parse(args)

println "BEANS=${options.'dir-beans'}"
println "ORM=${options.'dir-orm'}"

// check for null before dereferencing options
if (!options || options.h) {
  cli.usage()
  System.exit(0)
}
According to the Groovy documentation, I should be able to use "_" for multiple options when I want it to ignore the short option name and use a long option name only:
Another example showing long options (partial emulation of arg
processing for 'curl' command line):
def cli = new CliBuilder(usage:'curl [options] <url>')
cli._(longOpt:'basic', 'Use HTTP Basic Authentication')
cli.d(longOpt:'data', args:1, argName:'data', 'HTTP POST data')
cli.G(longOpt:'get', 'Send the -d data with a HTTP GET')
cli.q('If used as the first parameter disables .curlrc')
cli._(longOpt:'url', args:1, argName:'URL', 'Set URL to work with')
Which has the following usage message:
usage: curl [options] <url>
--basic Use HTTP Basic Authentication
-d,--data <data> HTTP POST data
-G,--get Send the -d data with a HTTP GET
-q If used as the first parameter disables .curlrc
--url <URL> Set URL to work with
This example shows a common convention. When mixing short and long names, the short names are often one character in size. One-character options with arguments don't require a space between the option and the argument, e.g. -Ddebug=true. The example also shows the use of '_' when no short option is applicable.
Also note that '_' was used multiple times. This is supported, but if any other shortOpt or any longOpt is repeated, then the behavior is undefined.
http://groovy.codehaus.org/gapi/groovy/util/CliBuilder.html
When I use "_", only the last one in the list (the last one encountered) is accepted. Am I doing something wrong, or is there a way around this issue?
Thanks.
Not sure what you mean by "it only accepts the last one", but this should work:
def cli = new CliBuilder().with {
  x 'something', args: 1
  _ 'something', args: 1, longOpt: 'dir-beans'
  _ 'something', args: 1, longOpt: 'dir-orm'
  parse "-x param --dir-beans beans --dir-orm orm".split(' ')
}

assert cli.x == 'param'
assert cli.'dir-beans' == 'beans'
assert cli.'dir-orm' == 'orm'
I learned that my original code works correctly. What is not working is the function that takes all of the options built in the with closure and prints a detailed usage message. The function call built into CliBuilder that prints the usage is:
cli.usage()
The original code above prints the following usage line:
usage: Generate.groovy [options]
--dir-orm <directory> Custom location for grails domain classes
-h,--help Usage information
-r,--root Root directory for code generation
-x Type of processor (all, schema, beans, docs)
This usage message makes it look like I'm missing options. I made the mistake of not printing each individual item separately from this usage function call. That's what made it look like only the last _ item in the with closure was being honored. I added this code to prove that values were being passed:
println "BEANS=${options.'dir-beans'}"
println "ORM=${options.'dir-orm'}"
I also discovered that you must use = between a long option and its value, or it will not parse the command-line options correctly (--long-option=some_value).
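A quick sketch of that last point, reusing the cli defined at the top of the question (the option values here are placeholders):

// per the note above, join long options and values with '='
def opts = cli.parse('-x all --dir-beans=beans --dir-orm=orm'.split(' '))
assert opts.'dir-beans' == 'beans'
assert opts.'dir-orm' == 'orm'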
