I am creating an npm library in which I need to read the files in the folder from which my library function was invoked on the command line, and then operate on those files.
By "operate" I mean checking whether a variable or function exists, modifying variables and functions, etc.
The files will be TypeScript files.
Any help on how to proceed will be great.
Seems like you need some kind of AST parser like Esprima or babel-parser. These tools can parse the content of JS/TS files and build an abstract syntax tree that can be traversed, modified, and converted back to source code.
There are a lot of useful tools in the Babel toolset that simplify these operations. For example, babel-traverse simplifies searching for a target statement or expression, babel-types helps match the types of AST nodes, and babel-generator generates source code from the AST.
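A minimal sketch of that workflow, assuming the Babel packages are installed; the file name example.ts and the variable name config are just placeholders:

// Requires: npm install @babel/parser @babel/traverse @babel/generator
import * as fs from "fs";
import * as path from "path";
import { parse } from "@babel/parser";
import traverse from "@babel/traverse";
import generate from "@babel/generator";

// Read a TypeScript file from the directory the CLI was invoked in
const file = path.join(process.cwd(), "example.ts"); // placeholder name
const source = fs.readFileSync(file, "utf8");

// Parse it into an AST; the "typescript" plugin enables TS syntax
const ast = parse(source, { sourceType: "module", plugins: ["typescript"] });

// Check whether a variable named "config" is declared anywhere
let hasConfig = false;
traverse(ast, {
  VariableDeclarator(nodePath) {
    const id = nodePath.node.id;
    if (id.type === "Identifier" && id.name === "config") {
      hasConfig = true;
    }
  },
});
console.log("config declared:", hasConfig);

// After modifying the AST, turn it back into source code
const { code } = generate(ast);
fs.writeFileSync(file, code);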
It's going to be very difficult to get these answers without running the files.
So the best approach is probably to just import the files as usual and see what side effects running them had. For example, you can check whether a file exported anything.
If this doesn't solve your problem, you will have to parse the files. The best way to do that might be to use the TypeScript compiler itself:
https://github.com/microsoft/TypeScript/wiki/Using-the-Compiler-API
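For instance, here is a small sketch using the Compiler API to list the functions declared in a file (the file name is a placeholder):

import * as ts from "typescript";
import * as fs from "fs";
import * as path from "path";

// Parse one file from the invocation directory into a TypeScript AST
const fileName = path.join(process.cwd(), "example.ts"); // placeholder
const sourceFile = ts.createSourceFile(
  fileName,
  fs.readFileSync(fileName, "utf8"),
  ts.ScriptTarget.Latest,
  /* setParentNodes */ true
);

// Walk the tree and collect the names of declared functions
const functionNames: string[] = [];
const visit = (node: ts.Node): void => {
  if (ts.isFunctionDeclaration(node) && node.name) {
    functionNames.push(node.name.text);
  }
  ts.forEachChild(node, visit);
};
visit(sourceFile);
console.log("functions found:", functionNames);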
For the TypeScript ANTLR target that Sam and I have been working on, I would like the code generation tool to create a single TypeScript file to hold all the classes generated from a named grammar input. Is this output file structure going to be hard to achieve?
So for example, I'd like Expr.g4 -> Expr.g4.ts. That one TypeScript file could contain named exports for the {ExprLexer, ExprParser, ExprListener} classes, visitor code if requested, maybe even some loose factory functions, etc.
I've been looking into the source code under tool/src/org/antlr/v4/codegen to find out how the number and names of the output files are determined, in particular CodeGenPipeline.java. This class works in conjunction with the language-specific target class, but the pipeline has a lot (perhaps too much) knowledge of possible output files built into it. None of what I see in CodeGenPipeline.java seems well matched to my 1:1 input-to-output file model.
It seems like the knowledge of which files should be generated for a given language target should come from the language.stg file if possible, but I can't find any evidence that this approach has been implemented. Can anyone fill me in on why it hasn't been tried, or any reasons it can't work?
In CMake, we can use find_dependency() in a package's -config.cmake file, which "forwards the correct parameters for QUIET and REQUIRED which were passed to the original find_package() call." So, naturally, we'll want to do that instead of calling find_package() in such files.
Also, for a dependency on a threads library, CMake offers the FindThreads module: we write include(FindThreads), preceded by some preference-setting commands, and get a bunch of interesting variables set. So that seems preferable to find_package(Threads).
And thus we have a dilemma: what should we put in -config.cmake files for a threads library dependency? The former, or the latter?
Following a discussion in comments with @Tsyarev, it seems that:
find_package(Threads) includes the FindThreads module internally...
... which means it "respects" the preference variables affecting FindThreads behavior.
So it makes sense, functionally and aesthetically, to just use find_package() in your main CMakeLists.txt and find_dependency() in your -config.cmake.
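A sketch of what that looks like in the config file; the package name MyLib is a placeholder:

# MyLib-config.cmake
include(CMakeFindDependencyMacro)

# State the same preferences you set before find_package(Threads)
# in your main CMakeLists.txt, e.g.:
set(THREADS_PREFER_PTHREAD_FLAG ON)

# Forwards the caller's QUIET/REQUIRED arguments to find_package(Threads)
find_dependency(Threads)

include("${CMAKE_CURRENT_LIST_DIR}/MyLib-targets.cmake")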
I'm trying to cleanly write some universal JavaScript code, for node and the browser.
Most of the code is env-agnostic, however, some implementation parts detect the environment (node or browser) and conditionally execute different code.
I would like to activate the node typings ONLY for those specific files. However, I couldn't find a way to do so. Either:
- the node typings, when referenced in even one file, take effect for all files (bad, since I could inadvertently rely on node specifics), or
- if I don't reference the node typings at all, TypeScript obviously complains about a lot of unknown definitions, which would be painful to patch by hand.
Does anyone have a clean way of activating some type definitions for a selected set of files?
It's not possible at this time.
A solution: build the node-dependent and node-independent files separately. This could be done automatically with a script; see the sketch below.
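A sketch of that split using two project files, assuming newer TypeScript versions that support the "types" compiler option; the directory layout is a placeholder:

// tsconfig.json -- the env-agnostic build, no ambient node typings
{
  "compilerOptions": { "types": [] },
  "include": ["src/**/*.ts"],
  "exclude": ["src/node/**"]
}

// tsconfig.node.json -- only the node-specific files see the node typings
{
  "compilerOptions": { "types": ["node"] },
  "include": ["src/node/**/*.ts"]
}

Each build is then run separately, e.g. tsc -p tsconfig.json && tsc -p tsconfig.node.json.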
I am trying to write source code in one language and have it converted to both native C++ and JS source. Ideally the converted source should be human-readable and resemble the original source as closely as possible. I was hoping Haxe could solve this problem for me: I code in Haxe and have it converted to the corresponding C++ and JS source.
However, the examples of Haxe I'm finding seem to create the final application for you. So with C++ it will use msbuild (or whatever compiler it finds) and create the final exe for you from the generated C++ code. Does Haxe also expose the generated C++ and JS source code for you to view, or is it all done internally and inaccessible? If it is accessible, is it possible to skip the building side of Haxe so that it simply creates the source code and stops?
Thanks
When you generate CPP, all the intermediate files are generated and kept wherever you decide to put your output (the path given with -cpp pathToOutput). The fact that you get an executable is probably because you are using the -main switch. That implies an entry point to your application, but it is not really required; you can just pass the compiler a list of the types that you want built into your output.
For JS it is very similar: a single JS file is generated, and it only has an entry point if you used -main.
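A sketch of both invocations; the class path and the type name com.example.Parser are placeholders, and -D no-compilation is the hxcpp define that (to my knowledge) skips invoking the native toolchain after the C++ sources are generated:

# Emit C++ sources into out/cpp: no -main, just list the types to build,
# and stop before the native build step
haxe -cp src -cpp out/cpp -D no-compilation com.example.Parser

# Emit a single readable JS file from the same sources
haxe -cp src -js out/app.js com.example.Parser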
Regarding the other topic, whether your Haxe code resembles the generated code: the answer is yes, but... some of the types (like Enum and Abstract) only exist in Haxe, so they will generate code that works functionally but might look quite different. Also, Haxe has an always-on optimizer/analyzer that might mangle your code in unexpected ways (the analyzer can be disabled). I still find that it is not that difficult to figure out the Haxe source from the generated code. JS has support for source maps, which is really useful for debugging. So in the end, Haxe doesn't do anything to obfuscate your generated code, but it also doesn't do much to preserve it too strictly.
I have a code snippet similar to this:
# Compile protobuf headers
env.Protoc(...)
# Move headers to 'include' (compiled via protobuf)
env.Command([include headers...], [headers...], move_func)
# Compile program (depends on 'include' files)
out2 = SConscript('src/SConscript')
Depends(out2, [include headers...])
Basically, I have Protoc() compiling protobuf files, then the headers are moved to the 'include' directory by env.Command() and finally the program is compiled through a SConscript file in the 'src'.
Since these are header files that are being moved (and the src compilation depends on them), they are not explicitly defined as a dependency by SCons (as far as I understand). Thus, the compilation runs, but the header files haven't been moved yet, so it fails. I have tried exposing the dependency via Depends() and Requires() without success.
I understand that in the usual case SCons should figure out dependencies, but I don't know how it could do that here.
Thanks!
You seem to be thinking about your build process in "make" ways, which is the wrong approach when using SCons. You can't order single build steps by putting them in different SConscripts and then including those in a special order. You have to define proper dependencies between your actual sources (C/CPP files, for example) and a target like a program or PDF file. Then SCons is able to figure out the correct build order, and it will traverse the folder structure of your project automatically, entering subfolders more than once if the dependency graph (DAG) dictates it.
Defining this kind of dependency between inputs and outputs is usually done using a Builder, and in your case the Install() builder would be a good fit, as sketched below. Please also regard the hints for #2 in the list of "most frequently-asked FAQs" ( https://bitbucket.org/scons/scons/wiki/FrequentlyAskedQuestions ).
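A sketch of the Install() approach against your snippet, assuming your Protoc() builder returns nodes for the generated headers; the .proto name is a placeholder:

# Compile protobuf files (as in your snippet; source name is a placeholder)
headers = env.Protoc('messages.proto')

# Install() copies the headers and, unlike a detached Command() with an
# opaque move_func, hands SCons target nodes it can track in the DAG
installed = env.Install('include', headers)

# Tie the program build to the installed copies
out2 = SConscript('src/SConscript')
Depends(out2, installed)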
Further, I can only recommend reading a little more of the UserGuide ( http://www.scons.org/doc/production/HTML/scons-user.html ) to get a better feeling for how to do things in a more "SConsy" way. If you get stuck, feel free to ask further questions on our mailing list at scons-users@scons.org (see http://www.scons.org/lists.php ).
Finally, if you have a lot of steps that you want to execute serially, and they don't require any special input/output files, SCons is probably not the right tool for your current task. It's designed as a file-oriented build system with automatic parallelization in mind; a simple (Python?) script might be better at the purely serial stuff...