The Theos documentation lists this among the things it supports:
Third party frameworks can be placed inside $THEOS/lib, and utilised with instance_EXTRA_FRAMEWORKS. (kirb)
But I am not sure how to make it work or how to troubleshoot it. Can someone explain what this is for and how to use it? If I have already built a binary and the binary needs some frameworks, how do I do that?
I tried to follow the samples, putting the frameworks under $THEOS/lib and adding the flag, but at runtime (for example, when I add AWSCore.framework and AWSS3.framework) it reports "library not loaded, image not found".
I need to understand how the framework gets linked into my binary, what the run path is, and how to debug where things go wrong. Does the binary already contain the framework, or should I copy it somewhere? Thank you.
I want to write an extension-based application, meaning an application made up of a primary program (the core) plus one or more extensions that are not required but can be added to the core to extend the application's features. The extension host of VS Code is a very good and interesting example. I have tried several times to understand how it works and have also searched for this approach, but found nothing. I want to know how to write a program that can dynamically add and remove small pieces of code as extensions. I don't know what these pieces of code should look like, how they should be loaded into and unloaded from the program, or whether each of them should be a separate program, and so on with such "I don't know"s in this context.
I welcome any ideas and guidance on this problem.
I usually code in C++/Rust.
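One common approach in C++/Rust is to compile each extension as a shared library (.so/.dll/.dylib) that exports a known entry point, and have the core load it at runtime. Below is a minimal Rust sketch using the libloading crate; the plugin path, the exported symbol name "run", and its signature are illustrative assumptions, not a fixed plugin ABI.

    // Host side: Cargo.toml needs a dependency on libloading (e.g. libloading = "0.8").
    use libloading::{Library, Symbol};

    fn main() -> Result<(), Box<dyn std::error::Error>> {
        // Load one extension at runtime. The path and symbol name are hypothetical;
        // a real host would scan an extensions directory and keep the handles around.
        unsafe {
            let lib = Library::new("./extensions/libhello_ext.so")?;
            let run: Symbol<unsafe extern "C" fn() -> i32> = lib.get(b"run")?;
            let status = run();
            println!("extension returned {}", status);
        } // `lib` is dropped here, which unloads the extension from the process.
        Ok(())
    }

    // Extension side: a separate crate built with crate-type = ["cdylib"] in its
    // Cargo.toml, so it produces the shared library that the host loads above.
    #[no_mangle]
    pub extern "C" fn run() -> i32 {
        println!("hello from the extension");
        0
    }

Unloading is just dropping the Library handle, and nothing obtained from the library may outlive it; partly because in-process unloading is so easy to get wrong, hosts like VS Code instead run extensions in a separate process and talk to them over IPC.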
I have a stack that was originally built in Hypercard and then migrated to Metacard. Obviously, it has expanded greatly over that time. Some core features broke when I tried to migrate to Runrev, which is why I've waited until now to finally do that. I'm keeping it as a stack rather than an exe so I can save changes to it. I've built a standalone player to launch it, and that is working. I've included the revmessagebox.rev stack in the standalone stack settings. This does add it, but incorrectly: I can put messages to it from my stack, but it won't run commands and it's missing all its icons. I've also included the revimagelibrary.rev and revtools.rev stacks in the hope of fixing this, but no dice. I was also hoping that including revimagelibrary.rev would get my old Metacard icons to display, but no dice. I appreciate any help I can get on this.
Rich
I don't think you can. The message box is part of the IDE and requires the development environment to run. When you build a standalone, your scripts etc. are compiled and an interpreter for commands is no longer present.
To replicate it in a standalone, you could use a simple window with a field to accept text; you would then pass the entered text to a "do" command. The other functions of the message box (accessed via the icons you mentioned) are also development tools and don't make much sense in a standalone.
The message box is not only integrated into the IDE; the engine also has hooks that directly support it. I'm not sure those hooks are included with the engine that is built into a standalone, so even if you adapt the existing message box for your standalone, it still may not work correctly.
The solution, as others have said, is to build your own stack that functions as a pseudo-message box. It is easy to display messages in your own stack, and pretty easy to execute simple commands using the "do" command. It is somewhat more difficult to execute complex or multi-line commands. But I agree with Dunbarx that I'd assess the need for such a thing if you are planning this standalone for distribution. It's a non-standard interface element.
What James said. But note that though the msg box is indeed integral to the IDE, it is still just a stack, and that stack can be replicated to whatever extent you need.
That said, the msg box is usually used as a development tool, to test short scripts (usually one-liners), to get or set property values quickly, as a simple calculator, that sort of stuff. If you need that sort of functionality, you should probably integrate it more comprehensively into the structure of your project.
Craig Newman
As part of a major refactoring of my Node.js app (going DDD), I'm looking for a library that, by inspecting the code, can visualize module dependencies (established by 'requiring' them) between the different Node modules.
Visualizing in table format is fine; I don't need fancy graphs.
Any Node libraries out there?
If you can also accept some fancy graphs: http://hughsk.github.com/colony/
I do not know if this exists, but I found the following with a quick search:
http://toolbox.no.de/packages/subdeps
http://toolbox.no.de/packages/fast-detective
Maybe subdeps is not exactly what you want right now, but I think you could use these projects to build that yourself?
See also https://github.com/pahen/madge
Create graphs from your CommonJS, AMD or ES6 module dependencies. Could also be useful for finding circular dependencies in your code. Tested on Node.js and RequireJS projects. Dependencies are calculated using static code analysis.
I just published node-dependency-visualizer, a small module that creates a digraph from your Node dependencies. Paired with Graphviz/dot, you can render the dependency graph as an SVG (or another image format) that you can include with your documentation, embed in your Readme.md, and so on.
However, it does not check whether the dependencies are actually needed in the code (I'm not sure whether that is what the OP meant by "requiring"). Of course this question is old, but this tool might be helpful for others, too.
Sample image (Angular CLI):
I'd like to use @Grape in my Groovy program, but my program consists of several files. The examples on the Groovy Grape page all seem to assume that your script will consist of one file. How can I do this? Should I just add it to one of the files and expect that the imports will work from the others? If so, is it common to place all the @Grape calls in one file with no other code? Do I need to add the Grape call to every file that will import the package? Do I need to download the JAR and create a Gradle file, which I was getting along without up to this point?
The Grape engine and the @Grab annotation were created as part of core Groovy with single-file scripts in mind, to allow a chunk of text to easily become a fully functional program.
For larger applications, Gradle is an awesome build tool with lots of useful features.
But yes, you can manage all the application dependencies with Grape alone.
Whether you annotate every file or a single one does not matter; just make sure the @Grab-annotated file is read before you try to use the external class.
Annotating the main class is probably better, as you will easily lose track of library versions if the annotations are scattered.
And yes, you should consider Gradle for any application with more than a dozen files, or anything you might want to reuse elsewhere as a library.
In my opinion, it depends on how your program is to be run...
If your program is to be run as a collection of standalone scripts, then I'd probably stick the @Grab annotations required by each script at the top of each of them.
If your program is more of a standard style program with a single point of entry, then I'd go for using a build tool like Gradle (as you say), as you get a lot of easy wins by using it.
Firstly, it makes it easy to define your dependencies (and to build a single large jar containing all of them).
Secondly, Gradle makes it really easy to start writing tests, include code-coverage plugins, or add useful tools like CodeNarc that suggest possible fixes or improvements to your code. These all become invaluable not only for improving your code (or knowing that your code works), but also when refactoring it: you know you haven't broken anything that used to work.
There is this executable that is packed; however, neither PEiD nor Protection ID nor RDG tells me what it is, as they don't recognize it.
How do I go about finding the packer?
Or what if it's custom-made?
It could easily have been derived from another packer in a way that destroys the signature by which those tools recognize the packer. Someone with experience looking at packed binaries might be able to spot obvious signs that it originated from a specific tool, but if all three tools fail to detect it, there's a good chance that it's custom-made. One sign that it's custom-made would be if the unpacking code is fairly simple and doesn't run through more than a few KB of code before executing the payload. Also look for signs that it doesn't appear capable of packing generic program binaries.
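When all of the signature-based tools come up empty, a cheap first check is to measure the file's entropy: packed or encrypted data tends toward 7-8 bits per byte, while ordinary code and data sit noticeably lower. Here is a minimal Rust sketch of that idea (it measures the whole file; real tools measure each PE section separately):

    use std::{env, fs};

    // Shannon entropy in bits per byte: values near 8.0 suggest compressed or
    // encrypted (i.e. packed) data; plain machine code is usually well below that.
    fn shannon_entropy(data: &[u8]) -> f64 {
        let mut counts = [0u64; 256];
        for &b in data {
            counts[b as usize] += 1;
        }
        let len = data.len() as f64;
        counts
            .iter()
            .filter(|&&c| c > 0)
            .map(|&c| {
                let p = c as f64 / len;
                -p * p.log2()
            })
            .sum()
    }

    fn main() {
        let path = env::args().nth(1).expect("usage: entropy <file>");
        let data = fs::read(&path).expect("could not read file");
        println!("{}: {:.3} bits/byte", path, shannon_entropy(&data));
    }

High overall entropy combined with a small, low-entropy stub around the entry point fits the picture described above: a simple custom loader sitting in front of a compressed or encrypted payload.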