I'm having trouble installing OpenSceneGraph with VRML plugins. Can anyone provide some suggestions?
I'm working on Snow Leopard, and I downloaded the latest OpenVRML (0.18) and OpenSceneGraph (3.0.1).
I can get OpenSceneGraph to work; however, I need to load VRML files, and when I use OSG to read one, it says that it can't find any plugin.
I tried installing OpenVRML by compiling from source, but it fails during configuration, since it seems to be unable to find the libboost_thread-mt library file, even though I have it installed and linking against it works.
OSG picks plugins based on the extension of the file it's trying to load. If your file ends in .vrml, try changing it to .wrl and see if it loads; maybe your installation already came with the VRML plugin, but it wasn't being found.
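For a quick check from the command line (assuming the osgviewer utility that ships with OSG is on your path; the file names here are just examples):

cp model.vrml model.wrl
osgviewer model.wrl

If osgviewer complains about a missing plugin for .wrl as well, then the VRML plugin (osgdb_vrml, which depends on OpenVRML) really isn't part of your installation.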
If you just have a few files, then it'll be easier to use something like:
http://www.cs.princeton.edu/~min/meshconv/
to convert them to STL or OBJ files, rather than building the plugin. Otherwise, you'll want to consult the OSG mailing list/forum. Join the forum now anyway; new users' messages often don't get posted right away.
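If you go the meshconv route, the invocation is, as far as I remember, just the input file plus the target format via -c (double-check against its usage output before relying on this; the file name is an example):

meshconv model.wrl -c obj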
For a university project, I am trying to find a reasonably painless way to modify certain applications from the official Debian repository, such as eog. I want to clarify that I am unfamiliar with Linux and GTK+. My idea was to be able to work comfortably in terms of finding variable and function definitions and trying step-by-step debugging, while getting used to GTK+ and the application's source code. I tried to understand the code while working from the terminal, but in my opinion, it was a real pain.
So far, I managed to install the application's build dependencies with
sudo apt build-dep eog
and I received the source code with
apt-get source eog
After I installed Eclipse, I tried to get GTK+ running with the minimal example from the GTK+ reference manual. I found a very useful, easy explanation here; it's the answer from Wed, 04 November 2015 12:51. No problem so far. So in theory, I should be able to write GTK+ applications in Eclipse. But when I try to make a new project and include eog's source and header files, I run into a mass of unresolved inclusions, missing header files, undefined references, etc.
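From what I've read, the include paths Eclipse is missing are the ones pkg-config reports for GTK+ (I'm assuming eog builds against GTK+ 3 here):

pkg-config --cflags gtk+-3.0
pkg-config --libs gtk+-3.0

Adding those directories and libraries to the Eclipse project settings by hand should at least make the indexer happy, but doing that for every dependency seemed tedious.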
So I wanted to ask: has anybody worked on a similar task and can provide some help? Or does anyone have a better idea?
In case anybody ever comes across this again, I found an alternative solution: when you download the source files of eog, you will come across the Meson build system. Eclipse supports C Meson projects. I just copied all the downloaded source files into a Meson project in Eclipse and could compile it right away.
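For reference, what Eclipse drives here corresponds roughly to the usual Meson command-line workflow (the build directory name is just an example; older Meson versions use plain "meson builddir" instead of "meson setup"):

meson setup builddir
ninja -C builddir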
I've created a nimble library package as per the documentation. When I try to build it using nimble build I get the following error.
Error: Nothing to build. Did you specify a module to build using the bin key in your .nimble file?
I can do this, and it does fix the error, but according to the documentation, adding the bin key to the .nimble file turns my package into a binary package.
Other things I have tried:
Use nimble install: This does not appear to verify that my code will actually compile, and it will happily install anything to the local package directory (I added a C# class to my .nim file, for example, and it was successfully installed).
Use nimble c: This works but I have to pass in the path to the nim file I want to compile and the binDir entry in the .nimble file is ignored resulting in the output being placed in the same directory as the file being built. This complicates the development cycle because I have to manually clean up after the compiler.
Use the compiler directly. This is pretty much the same as the previous option with the same flaws.
I guess I could also create a separate .nim file and import my library after it is installed, but this is a big overhead just to verify that a package in the early stages of development will actually compile.
I just want to be able to verify that the source code in my library package is syntactically correct and will compile. How is this meant to be done for library packages?
From the link you provided to the nimble package manager documentation, I have the feeling that
https://github.com/nim-lang/nimble#tests
is what you are looking for. But I have never used the test command, so I am not sure. I still do my tests manually; I read the nimble docs maybe four years ago and cannot really remember them. Currently there is a lot of package-manager-related work going on: I heard there is a new, alternative package manager called nimph, and from a forum thread I gather that nimble is going to change and improve as well. Maybe you should consider subscribing to the Nim forum; that is the place where the bright Nim devs are. Well, at least a few of them.
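If all you want is a compile check without installing anything, plain nim check might also be enough; it runs the compiler's error checking without generating code (the module path is just an example of a typical nimble layout):

nim check src/mylib.nim

And if you add a tests directory as the nimble docs describe, nimble test should compile and run those test modules for you.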
I am getting into a position where I have to use other people's code for projects, for example OpenTLD. I want to change some of the code to give it more functionality and use it in a different way. What I have found is that many people have packaged their files in such a way that you are supposed to use
cmake
and then
make
and sometimes after that
make install
I don't want to install the software on my system. What I am looking to do is get these people's code to a point where I can add to it in Eclipse, or even just using Nano, and then compile it.
At what point is the code in a workable/usable state? Can I use it after running cmake, or do I need to also call make? Is my thinking correct that it would be better to edit the code after calling cmake, as opposed to before? My finished code doesn't need to be cross-platform; it will only run on Linux. Is it easier to learn CMake and edit the code before running cmake, as opposed to not learning CMake and editing the code afterwards, if that is even possible?
Your question is a little open-ended.
Looking at the OpenTLD project, there is a binary and a library available for use. If you are interested in using the binary in your code, you need to download the executables (Linux executables are not posted). If you are planning to use the library, you have two options: either use the pre-built library, or build it during your own build process. You would include the header files in your custom application and link against the library.
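Regarding cmake versus make: cmake only generates the build system (Makefiles, in the default case); it does not compile anything itself, so you do need to run make afterwards. The usual out-of-source workflow looks like this (the build directory name is just a convention):

mkdir build
cd build
cmake ..
make

After that, you can edit the sources and simply rerun make; cmake only needs to rerun when the CMakeLists.txt files change, and make usually triggers that automatically. You can skip make install entirely and point your own project at the headers and the library inside the build tree.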
If you add more details, probably others can pitch in with new answers or refine the older ones.
This is possibly related to Using a plugin generated with Firebreath in a Firefox Extension?; however, my question is more specific, so here goes...
I'm working on Linux (Ubuntu 11.04), and I have built a Mozilla/Firefox (Firefox 7) plugin using FireBreath. The resulting plugin on this platform is an "npXXX.so" file, which I have symlinked in ~/.mozilla/plugins. Then, I coded an extension that uses this plugin - and apart from the symlink, nothing else seems required; all seems to work just smashing :)
So, knowing that "firefox supports installing your plugin via XPI. This is not recommended by the FireBreath team", now I'd still like to package the extension AND the plugin into an XPI file. So, I'm reading a bit on Structure of an installable bundle - MDN, and I can see these two possibilities:
/components/* XPCOM components (*.js, *.dll), and interface files from *.xpt (>=1.7)
...
/plugins/* NPAPI Plugins (>=1.8)
...
binary-component components/linux/mycomponent.so ABI=Linux_x86-gcc3
Now, it says: "The older XPCOM- and LiveConnect-based APIs for plugins should not be used.", so I guess the "/components" directory should not be used (even though it is given as an example on the above page). And I cannot find this stated explicitly anywhere, but I'm guessing FireBreath builds NPAPI plugins - so presumably "/plugins" is the way to go. (There is also mention of "/platform", but it clearly says that's been deprecated for Firefox > 3.6.)
Ok, so far so good... So I try to copy the plugin file to plugins/linux inside the extension directory:
cp -L ~/.mozilla/plugins/npXXX.so plugins/linux/
... and then insert the following in chrome.manifest:
binary-component plugins/linux/npXXX.so ABI=Linux_x86-gcc4
... then I zip the whole extension directory (the plugin included) as an .xpi and try to install it on a different computer. There, the .xpi successfully installs, the .so file is indeed unpacked under the profile's extensions/XXX/plugins/linux/ directory, and all the cross-platform (JavaScript) code of the extension works fine; except that the plugin cannot be found.
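(For reference, the packaging step itself is nothing special; an .xpi is just a zip of the extension directory's contents, with install.rdf at the top level. What I mean by "zip the whole extension directory" is something like this, run from inside the extension directory - the .xpi name is arbitrary:

zip -r ../myextension.xpi *

)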
Now, of course, the user could symlink the extension's .so into ~/.mozilla/plugins/ themselves; however, I would like to spare the user that :)
How would I go about this kind of packaging thing - is there a recommended way to do it?
Many thanks in advance for any answers,
Cheers!
Edit: I found Shipping a plugin as a Toolkit bundle - MDN, which claims only an install.rdf and a plugins/obj.so are needed; then I found Running Quake Live in Firefox 4, 5 and 6 on Linux [fixes inside], referring to a QuakeLivePlugin_433-modded_ff10.xpi, and that one does indeed follow such a simple structure. If I install that, I get both a Quake extension and a Quake plugin (even with the Error Console complaining "Could not read chrome manifest file '/path/to/extensions/quakeliveplugin#idsoftware.com/chrome.manifest'.") ... but if I try the same with my FireBreath plugin (e.g. just an install.rdf and the plugin in /plugins), only the extension gets shown - no plugin (and no reasonable error messages). Could this be a problem with FireBreath?
Well, I'll post this as an answer - I have just confirmed that the FireBreath plugin does in fact work when packaged in the simple "toolkit bundle" way as an .xpi extension.
Basically, I just cleaned up my development PC's Firefox profile and tried to install the .xpi carrying the plugin there - and on the dev PC, the plugin shows up in about:plugins and runs just fine (even though it's just unpacked in profile/extensions/EXT/plugins/obj.so, and not in ~/.mozilla/plugins)... In fact, I packaged the extension and the plugin in separate .xpi's, which were then merged into a single one as recommended in Multiple Item Package - MDN - and that works fine too (upon loading the merged xpi, one gets prompted about installing two extensions: one for the plugin-carrying one, and the other for the "plain" extension)...
So the problem was on the other test computer only - and it seems to be that I'm using Gnome libraries in the plugin, and while my dev PC is Ubuntu 11.04, I think this test PC was Ubuntu 10.04... So, quite likely, the problem is incompatible Gnome libraries in the plugin build; unfortunately, I don't get many errors back from Firefox, even if I do:
NSPR_LOG_MODULES=IPCPlugins:5 NSPR_LOG_FILE=/tmp/plugins.log /path/to/firefox -P myprofile
(... as recommended in Logging Multi-Process Plugins - MDN - however, /tmp/plugins.log remains empty). The only thing Firefox on the problem machine spits out is something like this on stdout:
WARNING: Application calling GLX 1.3 function "glXCreatePixmap" when GLX 1.3 is not supported! This is an application bug!
WARNING: Application calling GLX 1.3 function "glXDestroyPixmap" when GLX 1.3 is not supported! This is an application bug!
(firefox:6548): GLib-GObject-WARNING **: /build/buildd/glib2.0-2.24.1/gobject/gsignal.c:1149: unable to lookup signal "text-insert" for non instantiatable type `AtkText'
... and I cannot tell whether this has something to do with the plugin or not... But I guess at least the packaging part is confirmed as working now :) Cheers!
EDIT: After a while, the Firefox on the test PC did spit out the following (although I would have expected this message to pop up instantly):
LoadPlugin: failed to initialize shared library /path/to/profile/extensions/extXXX/plugins/npXXX.so [/usr/lib/libstdc++.so.6: version `GLIBCXX_3.4.14' not found (required by /path/to/profile/extensions/extXXX/plugins/npXXX.so)]
... which finally confirms it was a build problem I had.
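(In case anyone hits the same thing: a quick way to see which GLIBCXX versions the target machine's libstdc++ actually provides is

strings /usr/lib/libstdc++.so.6 | grep GLIBCXX

If GLIBCXX_3.4.14 is missing from that list, the plugin was built against a newer libstdc++ than the target has, which matches my situation.)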
I have been looking around for a way to add code highlighting to the Subversion and Apache installation that hosts my local Subversion projects. It runs on Fedora Core 10 installed in a VM. I would like to use SyntaxHighlighter, but I have no idea how I can get Apache to automatically insert the required JavaScript into my source code files (without tainting the source).
Is it possible to modify my existing installation of Apache 2.2/SVN 1.5.5 to use SyntaxHighlighter? (This is what it looks like.)
There is a project called WebSVN hosted by Collabnet that seems to have something similar built in; however, after the trouble I've gone through to get web Subversion working (and Fedora configured nicely), I don't want to use OpenCollabnet's version of WebSVN. Plus, their version does not support the latest Subversion and Apache.
How can I add some form of code highlighting to the Apache that serves my Subversion source?
I was using Trac as web-based project management software. It does issue tracking and wiki, but it also provides a repository browser with syntax coloring. It supports a bunch of different syntax colorers: GNU Enscript, SilverCity, and Pygments.
With Trac installed, I checked out Enscript, SilverCity, and Pygments.
There were no FC10 packages for the first two, but there was a Pygments package... and it looks rather nice.
Demos here
The C++ highlighting is what I'm interested in, and it looks decent: C++ Highlighting
Pygments is obviously not as nice as SyntaxHighlighter, which I would still prefer to use if someone knows an easy way to set it up.
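For what it's worth, Pygments can also be driven from the command line to produce standalone highlighted HTML, which might serve as a stop-gap until a SyntaxHighlighter-based setup turns up (the file name is just an example; the lexer is auto-detected from the extension):

pygmentize -f html -O full -o myfile.cpp.html myfile.cpp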