.arangosh.rc not sourced on Mac OSX - arangodb

I am working through the ArangoDB documentation and am currently on the section ArangoDB Shell Configuration; it describes an .arangosh.rc file that is sourced from your home directory, placing custom code into the ArangoDB shell's global scope. Following the documentation to a T, I've created an .arangosh.rc file in my home directory (~/.arangosh.rc) and added the example function
timed = function (cb) {
  var internal = require("internal");
  var start = internal.time();
  cb();
  internal.print("execution took: ", internal.time() - start);
};
I've tried exiting and restarting the ArangoDB shell, as well as completely restarting my terminal session, but I can't get arangosh to source the rc file. When I invoke timed() I get a
ReferenceError: timed is not defined

As far as I can see, the condition for sourcing ~/.arangosh.rc changed somewhere in 2.6, but this looks like an error to me. I have reverted that change in the 2.7, 2.8, and devel branches, so the file will get sourced there again. The fix will be included in the next official releases.
If you want to apply it before that, the commit id for 2.7 is 8e85a2fbb67c8c50c75cf93aefb7365e1e9fd7d1
It also looks like, as of 2.7, any "globals" in the rc file need to be attached to the global object. For example,
timed = function (cb) { ... };
should become
global.timed = function (cb) { ... };
I have also updated the docs to reflect this change.
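For reference, a minimal sketch of the corrected ~/.arangosh.rc for 2.7 and later, with the example helper attached to global as described above:
// ~/.arangosh.rc
global.timed = function (cb) {
  var internal = require("internal");
  var start = internal.time();
  cb();
  internal.print("execution took: ", internal.time() - start);
};
Inside arangosh you can then call it as, e.g., timed(function () { /* code to measure */ });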


Neovim LSP: pyright server does not dynamically recognize changes in sub folders

I set up Neovim LSP using nvim-lspconfig and the lsp-installer, where I also installed the pyright server.
Without any further configuration it worked out of the box. However, when I have a class in a subfolder and add a new method, pyright does not recognize this method when I want to access it in a different file. When I restart Neovim, or close and reopen the file, pyright suddenly recognizes the newly added method.
I also tried :LspRestart with no effect.
I tried to add some settings to the pyright server:
return {
  settings = {
    python = {
      analysis = {
        autoSearchPaths = true,
        diagnosticMode = "workspace",
        useLibraryCodeForTypes = true,
      }
    }
  },
}
But this also had no effect.
:LspLog also does not show anything which could point to the issue:
[START][2022-07-15 11:11:05] LSP logging initiated
[WARN][2022-07-15 11:11:09] ...lsp/handlers.lua:109 "The language server pyright triggers a registerCapability handler despite dynamicRegistration set to false. Report upstream, this warning is harmless"
[WARN][2022-07-15 11:11:09] ...lsp/handlers.lua:456 "stubPath typings is not a valid directory."
[WARN][2022-07-15 11:11:20] ...lsp/handlers.lua:109 "The language server pyright triggers a registerCapability handler despite dynamicRegistration set to false. Report upstream, this warning is harmless"
I also could not find any setting related to this issue that could solve it.
Since I am new to Python, the way I import and structure classes might not be common and might be causing this problem.
As the main entry point I have main.py in the root folder.
All other source files are in a program/ folder, which does not have an __init__.py.
Inside program/ there are folders which each have an __init__.py file, e.g. core/.
core/__init__.py:
from .myClass import myClass
and in main.py I import it like this:
from subfolder.core import myClass
myClass.newMethod() # this is only recognized by lsp/pyright after the file is closed and reopen
Is the issue a bug in pyright (not likely I guess), a missing setting or my strange folder/import structure?
Can you try this: create (or modify) pyproject.toml, put it in the project root directory. Inside pyproject.toml, add the following lines:
[tool.pyright]
extraPaths = ["program/core" ,"program/directory_2", "program/directory_3"]
The idea is that you have to add the subdirectories manually, which is really tedious, but at least it works in my case.

Using the node-cmd module inside the Squirrel events handler function

I'm building a desktop app for Windows using electron-packager and electron-squirrel-startup, and I would like to execute some Windows cmd commands during the installation of my application. To do so I was planning to use the node-cmd node module, but it doesn't really work inside the handleSquirrelEvent function. An example command like this:
function handleSquirrelEvent(application) {
  const squirrelEvent = process.argv[1];
  switch (squirrelEvent) {
    case '--squirrel-install':
    case '--squirrel-updated':
      var cmd = require('node-cmd');
      cmd.run('touch example.created.file');
  }
};
Seems to work: a file example.created.file is created in the my_app/node_module/node-cmd/example directory.
But any other code does not work. Even if I only change the name of the file to be "touched", nothing happens.
OK, it turns out example.created.file already exists in that directory, and I suspect that you can only use update.exe-supported commands in the case '--squirrel-updated' sections. So this will not work.
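If the goal is just to see what is actually happening during those events, one option is to log from inside the handler to an absolute path, so the result does not depend on the installer's working directory. A minimal sketch using Node's built-in fs module (the log path C:\temp\squirrel-events.log is a hypothetical example):
const fs = require('fs');

function handleSquirrelEvent(application) {
  const squirrelEvent = process.argv[1];
  // Append the Squirrel event that fired to a log file at an absolute,
  // hypothetical path, independent of the installer's working directory.
  fs.appendFileSync('C:\\temp\\squirrel-events.log', squirrelEvent + '\n');
  switch (squirrelEvent) {
    case '--squirrel-install':
    case '--squirrel-updated':
      // installation-time commands would go here
      break;
  }
}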

How do I include this directory in the $PATH env var?

I'm building a package for GitHub's Atom editor and I'm running into a challenge trying to get a child process to execute with Node.js. I'm pretty sure that the problem is that the environment Atom runs in doesn't include the path to the mrt script. So when I run this from within my package:
exec = require("child_process").exec
child = undefined
child = exec("/usr/local/bin/mrt add iron-router", { cwd: path }, (error, stdout, stderr) ->
  console.log "stdout: " + stdout
  console.log "stderr: " + stderr
  console.log "exec error: " + error if error isnt null
  return
)
in the console, I get:
Atom has a web inspector built right into it, and you can actually see the paths that Atom has included. So when I go to Atom's console and type process.env.PATH, it shows the paths /usr/bin:/bin:/usr/sbin:/sbin. So I somehow need to make Atom aware of the mrt script's path. Anyone know how I might go about doing that?
I also reached out on Atom's discussion forum yesterday, but have yet to come up with a solution.
Edit:
I should also note that the normal command for executing the mrt package installer is mrt add package-name, but as advised on Atom's discussion forum, I've been using the full path.
Edit 2:
I've created symlinks to node in my /usr/bin directory, and it's working now. Now I'm trying to get node to create the symlinks for me using fs.symlink, but that doesn't seem to be working.
To sum it up, the problem is that Atom uses the PATH from where it is launched. Consequently, the path to node and the path to mrt were not included in Atom's PATH. The solution came to me when someone on the Atom Discussion forum pointed out Atom's class BufferedNodeProcess.
At the time of this answer there is a slight bug with that class, so I was not able to use it - the GitHub team works fast, and I wouldn't be surprised if it was fixed within the next couple of days. I was, however, able to use some of its code to get Atom's environment. Also, I ended up using node's spawn method instead of exec, since that's what BufferedNodeProcess uses. Plus you can read each individual line of the stdout.
options =
  cwd: atom.project.getPath()
options.env = Object.create(process.env) unless options.env?
options.env["ATOM_SHELL_INTERNAL_RUN_AS_NODE"] = 1
node = (if process.platform is "darwin" then path.resolve(process.resourcesPath, "..", "Frameworks", "Atom Helper.app", "Contents", "MacOS", "Atom Helper") else process.execPath)
mrt = spawn(node, [
  "/usr/local/lib/node_modules/meteorite/bin/mrt.js"
  "add"
  "iron-router"
], options)
mrt.stdout.on "data", (data) ->
  console.log "stdout: " + data
  return
mrt.stderr.on "data", (data) ->
  console.log "stderr: " + data
  return
mrt.on "close", (code) ->
  console.log "child process exited with code " + code
  return
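Another option, since the underlying issue is that Atom inherits a minimal PATH when launched from the Finder, is to extend the PATH you hand to the child process. Here is a minimal sketch in plain Node.js, assuming mrt lives in /usr/local/bin and that node itself is on that extended PATH (adjust the directory to your setup):
var spawn = require("child_process").spawn;

// Copy the current environment and prepend the directory that holds mrt
// (assumed here to be /usr/local/bin) to PATH before spawning.
var env = Object.assign({}, process.env);
env.PATH = "/usr/local/bin:" + env.PATH;

var mrt = spawn("mrt", ["add", "iron-router"], { cwd: atom.project.getPath(), env: env });
mrt.stdout.on("data", function (data) { console.log("stdout: " + data); });
mrt.stderr.on("data", function (data) { console.log("stderr: " + data); });
mrt.on("close", function (code) { console.log("child process exited with code " + code); });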

in node.js, cannot get rid of a bad symlink

I have an uglify function that creates a file lib-0.1.4-min.js and then symlinks that to lib-production-min.js. 0.1.4 is the current version.
Due to synchronization of this directory, sometimes lib-production-min.js is a broken link.
When I run the compile function, fs.existsSync( "lib-production-min.js" ) returns false. When I try to create the symlink later, node errors out with "file already exists".
var version = 'lib-0.1.4-min.js';
var prod = 'lib-production-min.js';
// if production exists, get rid of it
if( fs.existsSync(prod) ) fs.unlinkSync( prod ); // not exists - not deleted
// link version to production
fs.symlinkSync( version, prod ); // ERROR: file already exists
How do I check if this dead link is in the directory?
Will a normal fs.unlinkSync( "lib-production-min.js" ) delete it?
fs.lstat() or fs.lstatSync() might help you. They return information about the link itself, rather than following it.
Use fs.readlinkSync(symlinkPath) to get the file pointed to by the symlink, and then use fs.existsSync with that path.
The problem is that the link file itself exists; it is the destination of the link that is missing.
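Putting those suggestions together, a minimal sketch (reusing the file names from the question) that removes a dangling link before recreating it:
var fs = require('fs');

var version = 'lib-0.1.4-min.js';
var prod = 'lib-production-min.js';

// lstatSync() inspects the link itself, so it only throws ENOENT when
// nothing (not even a dangling link) exists at that path.
var linkExists = true;
try {
  fs.lstatSync(prod);
} catch (e) {
  linkExists = false;
}

if (linkExists) fs.unlinkSync(prod); // removes the link even if its target is gone
fs.symlinkSync(version, prod);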

How do you get the path of the running script in groovy?

I'm writing a Groovy script that I want to be controlled via a properties file stored in the same folder. However, I want to be able to call this script from anywhere. When I run the script, it always looks for the properties file based on where it is run from, not where the script is.
How can I access the path of the script file from within the script?
You are correct that new File(".").getCanonicalPath() does not work. That returns the working directory.
To get the script directory
scriptDir = new File(getClass().protectionDomain.codeSource.location.path).parent
To get the script file path
scriptFile = getClass().protectionDomain.codeSource.location.path
As of Groovy 2.3.0 the @SourceURI annotation can be used to populate a variable with the URI of the script's location. This URI can then be used to get the path to the script:
import groovy.transform.SourceURI
import java.nio.file.Path
import java.nio.file.Paths
@SourceURI
URI sourceUri
Path scriptLocation = Paths.get(sourceUri)
Note that this will only work if the URI is a file: URI (or another URI scheme type with an installed FileSystemProvider), otherwise a FileSystemNotFoundException will be thrown by the Paths.get(URI) call. In particular, certain Groovy runtimes such as groovyshell and nextflow return a data: URI, which will not typically match an installed FileSystemProvider.
This makes sense if you are running the Groovy code as a script; otherwise the whole idea gets a little confusing, IMO. The workaround is here: https://issues.apache.org/jira/browse/GROOVY-1642
Basically this involves changing startGroovy.sh to pass in the location of the Groovy script as an environment variable.
As long as this information is not provided directly by Groovy, it's possible to modify the groovy.(sh|bat) starter script to make this property available as a system property:
For Unix boxes, just change $GROOVY_HOME/bin/groovy (the sh script) to do
export JAVA_OPTS="$JAVA_OPTS -Dscript.name=$0"
before calling startGroovy
For Windows:
In startGroovy.bat add the following 2 lines right after the line with the :init label (just before the parameter slurping starts):
@rem get name of script to launch with full path
set GROOVY_SCRIPT_NAME=%~f1
A bit further down in the batch file, after the line that says set JAVA_OPTS=%JAVA_OPTS% -Dgroovy.starter.conf="%STARTER_CONF%", add the line:
set JAVA_OPTS=%JAVA_OPTS% -Dscript.name="%GROOVY_SCRIPT_NAME%"
For Gradle users
I had the same issue when I started to work with Gradle. I wanted to compile my Thrift files with a remote Thrift compiler (customized by my company).
Below is how I solved my issue:
task compileThrift {
    doLast {
        def projectLocation = projectDir.getAbsolutePath(); // HERE is what you've been looking for.
        ssh.run {
            session(remotes.compilerServer) {
                // Delete existing thrift file.
                cleanGeneratedFiles()
                new File("$projectLocation/thrift/").eachFile() { f ->
                    def fileName = f.getName()
                    if (f.absolutePath.endsWith(".thrift")) {
                        put from: f, into: "$compilerLocation/$fileName"
                    }
                }
                execute "mkdir -p $compilerLocation/gen-java"
                def compileResult = execute "bash $compilerLocation/genjar $serviceName", logging: 'stdout', pty: true
                assert compileResult.contains('SUCCESSFUL')
                get from: "$compilerLocation/$serviceName" + '.jar', into: "$projectLocation/libs/"
            }
        }
    }
}
One more solution. It works perfectly even if you run the script using GroovyConsole:
File getScriptFile() {
    new File(this.class.classLoader.getResourceLoader().loadGroovySource(this.class.name).toURI())
}
println getScriptFile()
Workaround: for us the script was running in an ANT environment, and by storing some ancestor directory (knowing the subpath) in the Java system properties (System.setProperty( "dirAncestor", "/foo" )) we could access the dir ancestor via Groovy's properties.get('dirAncestor').
Maybe this will help for some of the scenarios mentioned here.
