Node.js REPL function docstring

Today I started with Node.js, and I have a problem with the Node.js REPL.
When I use Python, I just append "?" to check a function's docstring (in the Python REPL, and in IPython too).
For example, in IPython I can inspect the structure of the cos function by typing "math.cos?".
I can't find an equivalent in the Node.js REPL, and this feature is very useful when programming.
Can you give me a tip for checking docstrings in the Node.js REPL?
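To make it concrete, this is the workflow I mean on the Python side (a minimal sketch using the standard math module):
import math

# Plain Python REPL: print the docstring
help(math.cos)

# IPython adds the shorthand:
#   In [1]: math.cos?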

I'm afraid there is currently no built-in command in the Node.js REPL to check a function's structure or docstring, the way you can in the Python REPL or IPython. The good news is that contributors are already working on a feature like this.
For the time being you can use node-help or node-repl as an alternative. They are not ideal, but they should work for now.

Related

How to display all the functions / api available in a class / file in python

I am very new to Python and have started using the PyCharm IDE for development.
How can I display all the functions / APIs available in a class or file in Python?
For example, when coding in Scala or Java in an IDE, we get all the functions/methods available for a class or object just by typing object_name. (dot), and the IDE shows the whole API.
I am not able to get that for Python in PyCharm.
For example, I wanted to see all the functions available on a Python list. I found dir(module-name), but that is not what I need while developing quickly.
Thanks in advance.
dir(module-name) and dir(class-name) are exactly how you do this. Granted, they don't return things in a user-friendly manner and you may need to dig down further for nested modules, but the only better option is to learn from the package/API docs.
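For example, a quick sketch in the interpreter using the built-in list type (this is the same information an IDE's completion is built on):
import inspect

# Public methods of the built-in list type
print([name for name in dir(list) if not name.startswith('_')])

# Docstring of a single method
help(list.append)

# (name, member) pairs, including inherited ones
print(inspect.getmembers(list, callable))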

How to use helm-semantic-or-imenu for code navigation with type annotated python code

I would like to use the helm-semantic-or-imenu command to navigate components of type-annotated Python code, but whatever code analyzer is used to identify the components doesn't seem to recognize type-annotated Python code. Functions with a return type annotation don't get recognized at all, and functions with annotated arguments show the types instead of the argument names in the signatures.
The main problem I have is that I do not properly understand the components that are involved in making this work (when it does work). Obviously it might help to somehow update the code analyzer, but in which project do I find it? helm? semantic? imenu? Or, as someone mentioned elsewhere with regard to code analysis, python.el? I could really use some help getting started on solving this. If the code analyzer is found in python.el, can I try to modify it and make Emacs use a local version preferentially over the installed one?
EDIT:
After making the initial post I finally made a breakthrough in trying to figure out where the components come from. I searched for python*.el across the whole file system and discovered these:
./usr/share/emacs/26.2/lisp/cedet/semantic/wisent/python.elc
./usr/share/emacs/26.2/lisp/cedet/semantic/wisent/python-wy.elc
I found the source for Emacs 26.2 and discovered that this python.el is indeed responsible for parsing Python files for semantic. Internally it also uses python-wy for recognizing a large portion of the language components. Unfortunately, that is where I hit a brick wall. I was hoping to be able to monkey-patch the function that recognizes a function definition via a regexp or something, but semantic actually solves the problem the right way: python-wy is auto-generated from a formal grammar definition file (admin/grammars/python.wy in the Emacs git repository), and figuring out how to modify that is unfortunately well beyond my abilities.
The semantic python backend doesn't appear to parse type annotations correctly (and there hasn't been much recent development on those libraries as far as I can tell). Since helm-semantic-or-imenu favors semantic when it is active, you can disable semantic altogether for python buffers unless you use its other features (personally I only use it for C/C++).
When the semantic mode-specific libraries are loaded they set imenu-create-index-function and imenu-default-goto-function, causing imenu to use semantic instead of python.el's imenu function.
To disable semantic support for your Python files you can customize semantic-new-buffer-setup-functions, adding entries only for the modes you do want semantic support for, e.g. in your semantic hook (or alternatively with the customize UI):
(setq semantic-new-buffer-setup-functions
      '((c-mode . semantic-default-c-setup)
        (c++-mode . semantic-default-c-setup)
        (srecode-template-mode . srecode-template-setup-parser)
        (texinfo-mode . semantic-default-texi-setup)
        ;; etc.
        ;; (makefile-automake-mode . semantic-default-make-setup)
        ;; (makefile-mode . semantic-default-make-setup)
        ;; (makefile-gmake-mode . semantic-default-make-setup)
        ))

How to run Java from Python [duplicate]

What is the best way to call java from python?
(jython and RPC are not an option for me).
I've heard of JCC: http://pypi.python.org/pypi/JCC/1.9
a C++ code generator for calling Java from C++/Python
But this requires compiling every possible call; I would prefer another solution.
I've heard about JPype: http://jpype.sourceforge.net/
tutorial: http://www.slideshare.net/onyame/mixing-python-and-java
import jpype
jpype.startJVM("path to jvm.dll", "-ea")  # replace with the actual path to jvm.dll / libjvm.so
javaPackage = jpype.JPackage("JavaPackageName")
javaClass = javaPackage.JavaClassName
javaObject = javaClass()
javaObject.JavaMethodName()
jpype.shutdownJVM()
This looks like what I need.
However, the last release is from Jan 2009 and I see people failing to compile JPype.
Is JPype a dead project?
Are there any other alternatives?
You could also use Py4J. There is an example on the frontpage and lots of documentation, but essentially, you just call Java methods from your python code as if they were python methods:
from py4j.java_gateway import JavaGateway
gateway = JavaGateway() # connect to the JVM
java_object = gateway.jvm.mypackage.MyClass() # invoke constructor
other_object = java_object.doThat()
other_object.doThis(1,'abc')
gateway.jvm.java.lang.System.out.println('Hello World!') # call a static method
As opposed to Jython, one part of Py4J runs in the Python VM so it is always "up to date" with the latest version of Python and you can use libraries that do not run well on Jython (e.g., lxml). The other part runs in the Java VM you want to call.
The communication is done through sockets instead of JNI and Py4J has its own protocol (to optimize certain cases, to manage memory, etc.)
Disclaimer: I am the author of Py4J
Here is my summary of this problem: 5 Ways of Calling Java from Python
http://baojie.org/blog/2014/06/16/call-java-from-python/ (cached)
Short answer: Jpype works pretty well and is proven in many projects (such as python-boilerpipe), but Pyjnius is faster and simpler than JPype
I have tried Pyjnius/Jnius, JCC, javabridge, Jpype and Py4j.
Py4j is a bit hard to use, as you need to start a gateway, adding another layer of fragility.
Pyjnius docs and Github.
From the github page:
A Python module to access Java classes as Python classes using JNI.
PyJNIus is a "Work In Progress".
Quick overview
>>> from jnius import autoclass
>>> autoclass('java.lang.System').out.println('Hello world')
Hello world
>>> Stack = autoclass('java.util.Stack')
>>> stack = Stack()
>>> stack.push('hello')
>>> stack.push('world')
>>> print stack.pop()
world
>>> print stack.pop()
hello
If you're in Python 3, there's a fork of JPype called JPype1-py3
pip install JPype1-py3
This works for me on OSX / Python 3.4.3. (You may need to export JAVA_HOME=/Library/Java/JavaVirtualMachines/your-java-version)
from jpype import *
startJVM(getDefaultJVMPath(), "-ea")
java.lang.System.out.println("hello world")
shutdownJVM()
I'm on OSX 10.10.2, and succeeded in using JPype.
Ran into installation problems with Jnius (others have too), Javabridge installed but gave mysterious errors when I tried to use it, Py4J has the inconvenience of having to start a Gateway server in Java first, and JCC wouldn't install. Finally, JPype ended up working. There's a maintained fork of JPype on GitHub. It has the major advantages that (a) it installs properly and (b) it can very efficiently convert Java arrays to numpy arrays (np_arr = java_arr[:]).
The installation process was:
git clone https://github.com/originell/jpype.git
cd jpype
python setup.py install
And you should be able to import jpype
The following demo worked:
import jpype as jp
jp.startJVM(jp.getDefaultJVMPath(), "-ea")
jp.java.lang.System.out.println("hello world")
jp.shutdownJVM()
When I tried calling my own java code, I had to first compile (javac ./blah/HelloWorldJPype.java), and I had to change the JVM path from the default (otherwise you'll get inexplicable "class not found" errors). For me, this meant changing the startJVM command to:
jp.startJVM('/Library/Java/JavaVirtualMachines/jdk1.7.0_79.jdk/Contents/MacOS/libjli.dylib', "-ea")
c = jp.JClass('blah.HelloWorldJPype')
# Where my java class file is in ./blah/HelloWorldJPype.class
...
I've been integrating a lot of stuff into Python lately, including Java. The most robust method I've found is to use IKVM and a C# wrapper.
IKVM has a neat little application that allows you to take any Java JAR, and convert it directly to .Net DLL. It simply translates the JVM bytecode to CLR bytecode. See http://sourceforge.net/p/ikvm/wiki/Ikvmc/ for details.
The converted library behaves just like a native C# library, and you can use it without needing the JVM. You can then create a C# DLL wrapper project, and add a reference to the converted DLL.
You can now create some wrapper stubs that call the methods you want to expose, and mark those methods as DllExport. See https://stackoverflow.com/a/29854281/1977538 for details.
The wrapper DLL acts just like a native C library, with the exported methods looking just like exported C methods. You can connect to them using ctypes as usual.
I've tried it with Python 2.7, but it should work with Python 3 as well. It works on Windows and on Linux.
If you happen to use C#, then this is probably the best approach to try when integrating almost anything into python.
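To give an idea of the Python side, here is a minimal ctypes sketch; the DLL name MyJavaWrapper.dll and the exported AddNumbers function are hypothetical placeholders for whatever your wrapper actually exposes:
import ctypes

# Load the wrapper DLL produced by the C# project described above
# (hypothetical name -- adjust to your build output)
wrapper = ctypes.cdll.LoadLibrary("MyJavaWrapper.dll")

# Declare the signature of a hypothetical exported stub
wrapper.AddNumbers.argtypes = [ctypes.c_int, ctypes.c_int]
wrapper.AddNumbers.restype = ctypes.c_int

# Calls straight into the IKVM-converted Java code, no JVM process needed
print(wrapper.AddNumbers(2, 3))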
I'm just beginning to use JPype 0.5.4.2 (July 2011) and it looks like it's working nicely...
I'm on Xubuntu 10.04
I'm assuming that if you can get from C++ to Java then you are all set. I've seen a product of the kind you mention work well. As it happens the one we used was CodeMesh. I'm not specifically endorsing this vendor, or making any statement about their product's relative quality, but I have seen it work in quite a high volume scenario.
I would say generally that, if at all possible, I would recommend keeping away from direct integration via JNI. A simple REST service approach, or a queue-based architecture, will tend to be simpler to develop and diagnose. You can get quite decent performance if you use such decoupled technologies carefully.
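As an illustration of how small the Python side of such a decoupled setup can be, here is a standard-library sketch; the URL and the JSON payload are hypothetical and depend entirely on the service you put in front of your Java code:
import json
import urllib.request

# Hypothetical REST endpoint exposed by the Java side
url = "http://localhost:8080/api/compute"
payload = json.dumps({"x": 1, "y": 2}).encode("utf-8")

request = urllib.request.Request(
    url, data=payload, headers={"Content-Type": "application/json"}
)
with urllib.request.urlopen(request) as response:
    result = json.loads(response.read().decode("utf-8"))

print(result)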
From my own experience trying to run some Java code from within Python (much as Python code can be run from within Java), I was unable to find a straightforward methodology.
My solution was to run the Java code as BeanShell scripts, calling the BeanShell interpreter as a shell command from within my Python code after writing the Java code to a temporary file with the appropriate packages and variables.
If this approach sounds helpful, I am glad to share more details of my solution.
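A rough sketch of what that looks like from Python (the BeanShell jar name and the one-line snippet of Java are placeholders; adjust the classpath to your installation):
import subprocess
import tempfile

# Write the (placeholder) Java/BeanShell code to a temporary script file
script = 'System.out.println("hello from BeanShell");'
with tempfile.NamedTemporaryFile("w", suffix=".bsh", delete=False) as f:
    f.write(script)
    script_path = f.name

# Run it with the BeanShell interpreter as an external command
completed = subprocess.run(
    ["java", "-cp", "bsh-2.0b6.jar", "bsh.Interpreter", script_path],
    capture_output=True, text=True, check=True,
)
print(completed.stdout)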

How can I load Raphael within IPython notebook, avoiding some issues that arise due to require.js?

In an IPython notebook, one would expect the following code to cause Raphael.js to load successfully into the global namespace.
from IPython.display import Javascript
raphael_url = "https://cdnjs.cloudflare.com/ajax/libs/raphael/2.1.0/raphael-min.js"
Javascript('alert(Raphael);', lib=[raphael_url])
However, it does not work in recent versions of IPython which use require.js. Turns out, Raphael.js, which IPython loads using jQuery.getScript(), recognizes the presence of require.js and as such does not insert itself into the global namespace. In fact, if one first runs javascript code removing the window.define object, Raphael no longer realizes require.js is present, and it inserts itself into the global namespace as I would like. In other words, the code above works after running the following:
Javascript('window.define = undefined;')
Thus, the only way I am able to get Raphael to load within a recent version of IPython notebook is to delete (or set aside) window.define.
Having identified the problem, I am not familiar enough with require.js to know what piece of software is acting against protocol. Is Raphael using a poor way of testing for the existence of require.js? Should IPython be using require.js directly instead of jQuery.getScript() when it loads user-provided javascript libraries? Or is there a way I as the user ought to be embracing require.js, which will give me the Raphael object without needing any special hacks? (If the answer to the last question is yes, is there a way I can also support older versions of IPython notebook, which do not use require.js?)
The first part of my answer won't please you: loading and requiring JavaScript libraries in the IPython notebook web app has not really been solved yet, so for now I would suggest not building too much on the assumption that you can load a library like that, and relying more on custom.js.
That being said, if Raphael is not in the global namespace, require is smart enough to cache it and give you a reference to it. Then in the callback you can just assign it to a global:
require(['raphael'], function( raph ){
    window.raphael = raph;
})
Or something like that should do the trick.
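From the notebook side, the same idea can be driven with IPython.display. This is a rough, untested sketch along the lines of the answer above; the CDN URL is the one from the question, minus the .js extension that require.js expects you to drop:
from IPython.display import Javascript, display

display(Javascript("""
require.config({
    paths: {
        raphael: "https://cdnjs.cloudflare.com/ajax/libs/raphael/2.1.0/raphael-min"
    }
});
require(["raphael"], function (raph) {
    window.Raphael = raph;   // expose it globally, as in the callback above
    alert(Raphael);
});
"""))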

I'm getting an invalid syntax error in configparser.py

I'm trying to get the pymysql module working with Python 3 on a Macintosh. Note that I am a beginning Python user who decided to switch from Ruby, and I am trying to build a simple (sigh) database project to drive my learning of Python.
In a simple (I thought) test program, I am getting a syntax error in configparser.py (which is used by the pymysql module):
def __init__(self, defaults=None, dict_type=_default_dict,
             allow_no_value=False, *, delimiters=('=', ':'),
             comment_prefixes=('#', ';'), inline_comment_prefixes=None,
             strict=True, empty_lines_in_values=True,
             default_section=DEFAULTSECT,
             interpolation=_UNSET):
According to Komodo, the error is on the second line. I assume it is related to the asterisk, but regardless, I don't know why there would be a problem like this with a standard Python module.
Anyone seen this before?
You're most certainly running the code with a 2.x interpreter. I wonder why it even tries to import 3.x libraries, perhaps the answer lies in your installation process - but that's a different question. Anyway, this (before any other imports)
import sys
print(sys.version)
should show which Python version is actually run, as Komodo Edit may be choosing the wrong executable for whatever reason. Alternatively, leave out the parens and it simply fails if run with Python 3.
In Python 3.2 the configparser module does indeed look that way. Importing it works fine from Python 3.2, but not from Python 2.
Am I right in guessing you get the error when you try to run your module with Komodo? Then you have simply configured the wrong Python executable.
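For reference, the bare * in that signature introduces keyword-only arguments, which exist only in Python 3; a minimal sketch of the difference:
# Python 3: valid -- "delimiters" can only be passed by keyword
def parse(source, *, delimiters=('=', ':')):
    return source, delimiters

print(parse("some.ini", delimiters=(';',)))

# Python 2: the same "def" line is rejected at compile time with
# "SyntaxError: invalid syntax", pointing at the bare "*".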
