I have a DAE file and, separately, textures that I'd like to include in the export.
I use the COLLADA2GLTF exporter and run the following:
./COLLADA2GLTF-bin -i /Users/Andy/downloads/Ball_Dae/ball_v2.dae -o /Users/Andy/downloads/Ball_Dae/Ball_Output --binary --metallicRoughnessTextures /Users/Andy/downloads/Ball_Dae/texture/
What I need is to create a glTF from the DAE and use the textures from the /texture folder as the PBR textures.
I'm not sure whether I'm doing this right.
Per the COLLADA2GLTF docs, if you want the textures to be separate from the rest of the glTF file, you must use the --separate option. It's possible that option is incompatible with --binary, which typically implies bundling everything into a single binary file. If the documented options aren't working, I'd suggest filing a bug on the COLLADA2GLTF repository.
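If the goal is a standalone .gltf with the textures written out next to it, a hedged sketch of that invocation (dropping --binary, since a binary .glb normally embeds everything) would be:
./COLLADA2GLTF-bin -i /Users/Andy/downloads/Ball_Dae/ball_v2.dae -o /Users/Andy/downloads/Ball_Dae/Ball_Output/ball_v2.gltf --separate
Whether --separate can be combined with the metallic-roughness options is worth verifying against the exporter's --help output.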
It is possible to set both the favicon and the logo of the rustdoc for a crate by using:
#![doc(html_favicon_url = "<url_to>/favicon.ico")]
#![doc(html_logo_url = "<url_to>/logo.png")]
as documented here.
However I do not want to upload my logo publicly and would therefore like to automatically include these files in /target/doc and reference them from there.
Currently I have put the respective data URLs (base64 encoded) into these fields and it works fine, but it enormously bloats the source file where these attributes are set.
I know I could just copy the images into target/doc after generating the documentation using a script and then reference them via relative URLs, but I would like to avoid this so that I can still generate the documentation using cargo doc.
Edit
The suggestion from the comment to set rustdoc's --output flag via rustdocflags in .cargo/config.toml also did not work, because it leads to error: Option 'output' given more than once. Apart from that, it is not suited to my case: as far as I understand, I can only give absolute paths there, whereas I need a solution using relative paths for the images, because they are stored in a subdirectory of the cargo root directory to allow easy transfer to another system via git, etc.
Thanks to the latest comment from eggyal I finally figured out how to do this:
In my build.rs I copy the files to target/doc/:
fn main() {
    // Copy the images to the output when generating documentation
    println!("cargo:rerun-if-changed=assets/doc");
    // target/doc may not exist yet on a clean build, so create it first
    std::fs::create_dir_all("target/doc").expect("Failed to create target/doc.");
    std::fs::copy("assets/doc/logo.ico", "target/doc/logo.ico").expect("Failed to copy crate favicon when building documentation.");
    std::fs::copy("assets/doc/logo.png", "target/doc/logo.png").expect("Failed to copy crate logo when building documentation.");
}
and then I just had to make sure to use an absolute path when referencing them, like so:
#![doc(html_favicon_url = "/logo.ico")]
#![doc(html_logo_url = "/logo.png")]
In general it would be better to read the CARGO_TARGET_DIR environment variable instead of hardcoding target/doc, but this is not yet available in build scripts.
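If CARGO_TARGET_DIR does happen to be set in the build script's environment (for example because you export it yourself), a hedged sketch of reading it with a fallback could look like this; the assets/doc paths are the same as above:
fn main() {
    println!("cargo:rerun-if-changed=assets/doc");
    // Use CARGO_TARGET_DIR when it is set in the environment, otherwise fall back to "target".
    let target = std::env::var("CARGO_TARGET_DIR").unwrap_or_else(|_| String::from("target"));
    let doc_dir = std::path::Path::new(&target).join("doc");
    std::fs::create_dir_all(&doc_dir).expect("Failed to create the doc output directory.");
    std::fs::copy("assets/doc/logo.ico", doc_dir.join("logo.ico")).expect("Failed to copy crate favicon.");
    std::fs::copy("assets/doc/logo.png", doc_dir.join("logo.png")).expect("Failed to copy crate logo.");
}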
I am trying to customize the color of LaTeX inline formulas in the HTML output of the Sphinx documentation package.
The details:
I have a file called func.rst, which includes the following line:
Let :math:`x_{1}` be a binary variable.
which is rendered successfully in the documentation I created with Sphinx.
(I have 'sphinx.ext.imgmath' listed in extensions in conf.py)
My goal is to have x_{1} colored in red.
Things I tried:
Adding the color inside the formula:
Let :math:`\color{red}x_{1}` be a binary variable.
while also defining
latex_elements['preamble'] = '\usepackage{xcolor}'
in the conf.py file.
Trying to define all math output globally with:
latex_elements['preamble'] = r'''
\usepackage{xcolor}
\everymath{\color{red}}
\everydisplay{\color{red}}
'''
Needless to say, both (and many more less promising ideas) failed.
Copying over my answer to the cross-posted question at tex.sx:
As you seem to be targeting HTML with math rendered as PNG images (or SVGs), the config value to set isn't latex_elements, but imgmath_latex_preamble.
I have since tested this and it works.
For completeness' sake, I am adding the full solution here. (THANKS jfbu!)
In conf.py I defined extensions = ['sphinx.ext.imgmath', <some_more_unrelated_stuff>]
Also in conf.py I defined
imgmath_latex_preamble=r'\usepackage{xcolor}'
(EDIT: as opposed to what I previously wrote, there is no need to additionally define imgmath_latex="/usr/local/texlive/2017/bin/x86_64-darwin/latex". Thanks jfbu again!)
In the .rst file where I have the latex expression, I have
Let :math:`\color{red}x_{1}` be a binary variable.
In the terminal I run
make clean html
("make clean" is the sphinx's best friend)
And its working! wohoo!
Is there a kind of "include" directive in RPM spec? I couldn't find an answer by googling.
Motivation: I have an RPM spec template which the build process fills in with the version, revision and other build-specific data. This is currently done with sed. I think it would be cleaner if the spec could #include a build-specific definitions file generated by the build process, so I don't need to search and replace in the spec.
If there is no include, is there an idiomatic way to do this (quite common, I believe) task?
Sufficiently recent versions of rpmbuild certainly do support %include:
%include common.inc
Unfortunately, they aren't very smart about it -- there is no known set of directories in which they will look for the requested files, for example. But it is there, and variables are expanded, for example:
%include %{_topdir}/Common/common.inc
RPM does not support includes.
I have solved similar problems with either the m4 macro processor or by just concatenating parts of the spec (when the "include" was at the beginning).
If you only need to pass a few variables at build time, and not include several lines from another file, you can run
rpmbuild --define 'myvar SOMEVALUE' -bb myspec.spec
and you can use %myvar in the spec.
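For example (the fields and package name below are just an illustration), the passed value can then be used anywhere macros are expanded:
# fragment of myspec.spec; %{myvar} expands to SOMEVALUE from --define
Release: 1.%{myvar}%{?dist}
Source0: myapp-%{myvar}.tar.gz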
I faced this same issue recently. I wanted to define multiple sub-packages that were similar, but each varied just slightly (they were language-specific RPMs). I didn't want to repeat the same boiler-plate stuff for each sub-package.
Here's a generic version of what I did:
%define foo_spec() %{expand:%(cat '%{myloc}/main-foo.spec')}
%{foo_spec bar}
%{foo_spec baz}
%{foo_spec qux}
The use of %{expand} ensures that %(cat) is only executed a single time, when the macro is defined. The content of the main-foo.spec file is then expanded three times, and each time %1 inside main-foo.spec expands to bar, baz and qux in turn, allowing me to treat it as a template. You could easily extend this to more than one parameter if you have the need (I did not).
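To illustrate, a hypothetical main-foo.spec template might look like this, with %1 standing for the sub-package name passed at each call site:
# main-foo.spec -- %1 is the argument given to %{foo_spec ...}
%package %1
Summary: %{name} language files for %1
%description %1
Files specific to the %1 language.
%files %1
%{_datadir}/%{name}/%1/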
For the underlying issue, there may be two additional solutions that are present in all rpm versions that I am aware of:
Subpackages
Macros and rpmrc files
Subpackages
Another alternative (and perhaps the "RPM way") is to use sub-packages. Maximum RPM also has information and examples of subpackages.
I think the question is trying to structure something like,
two spec files; say rpm_debug.spec and rpm_production.spec
both use %include common.spec
debug and production could also be client and server, etc. As in the examples of redefining a variable above, each spec can have its own list of variables (see the sketch below).
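A rough sketch of that layout, where common.spec holds the shared %build/%install/%files sections and the build_type variable is made up for illustration:
# rpm_debug.spec
%define build_type debug
%include common.spec

# rpm_production.spec
%define build_type production
%include common.spec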
Limitations
The main advantage of subpackages is that only one build takes place; this may also be a disadvantage. The debug and production example may highlight this. It can be worked around by using strip to create variants, or by compiling twice with different output (perhaps using VPATH with GNU Make). Having to compile large packages that differ only in simple variations (with/without developer info such as headers, static libraries, etc.) can make you appreciate this approach.
Macros and Rpmrc
Subpackages don't solve the problem of structural defines that you want for an entire rootfs hierarchy, or a larger collection of RPMs. We have rpmbuild --showrc for this. You can have a large number of variables and macros defined by altering the rpmrc and macros files used when you run rpm and rpmbuild. From the man page:
rpmrc Configuration
/usr/lib/rpm/rpmrc
/usr/lib/rpm/redhat/rpmrc
/etc/rpmrc
~/.rpmrc
Macro Configuration
/usr/lib/rpm/macros
/usr/lib/rpm/redhat/macros
/etc/rpm/macros
~/.rpmmacros
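For example, a per-user ~/.rpmmacros commonly overrides the build tree location and adds site-wide definitions (the values here are placeholders):
# ~/.rpmmacros
%_topdir    %(echo $HOME)/rpmbuild
%vendor     Example Corp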
I think these two features can solve all the problems that %include can. However, %include is a familiar concept and was probably added to make rpm more full-featured and developer friendly.
Which version are you talking about? I currently have %include filename.txt in my spec file and it seems to work just like the C #include directive.
> rpmbuild --version
RPM version 4.8.1
You can include the *.inc files from the SOURCES directory (%_sourcedir):
Source1: common.inc
%include %{SOURCE1}
In this way they will automatically be packaged into the SRPM.
I've used scripts (name your favorite) to take a template and create the spec file from it. Also, the %files section can import a file list created by another process, e.g. Python's bdist_rpm.
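As a sketch of that last pattern, the list can be generated during %install and consumed with %files -f (the files.list name is made up):
%install
# record everything placed into the buildroot as a file list
find %{buildroot} -type f | sed 's|^%{buildroot}||' > files.list

%files -f files.list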
I have a Flash component that's just a library of compiled code with some exposed API calls. Normally we distribute this as a SWC or MXP, and it works just fine. Recently I had a client express interest in using my component, but they do all their development in MTASC only. MTASC doesn't support SWC files, so is there a good way to send precompiled code that would work in MTASC? I'm not able to send them the original source code, but if there's some other method I'd appreciate it. I do have access to the source, so I can recompile it however necessary. Thanks!
I did find an answer, though I'm not 100% sure this is exactly the process, since I'm no longer at that job and don't have the computer in front of me anymore. It was a bit of a hack.
What it involved basically was unzipping the SWC file and getting a .swf and a bunch of .asi files out.
The .asi files are really just ActionScript files, but they contain intrinsic definitions, or just prototypes or footprints of what's actually there. The real meat of it is still in the .swf.
So you rename all those .asi files to .as and then put them into your MTASC classpath. Since they contain the definitions, you shouldn't get any more "undefined variable" or "undefined function" errors at compile time. Now you just need to pull in the SWF, where the actual function bodies are defined, using loadMovie. Once the loadMovie is complete, you should be able to use all of the functions.
The only caveat, of course, is that you have to wait for that SWF to load before calling any of the functions from the SWC.
So, step by step, it looks like this:
1.) Unzip the SWC file. This can be done with WinZip or the OS X terminal's unzip command.
2.) Rename the .asi files to .as.
3.) Add the new .as files to the MTASC classpath.
4.) Add AS code to load the .swf, making sure none of the SWC's functions are called before the SWF is loaded (a rough sketch follows after this list).
5.) Compile.
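For step 4, here is a rough ActionScript 2 sketch of the loading step; it uses MovieClipLoader rather than a bare loadMovie call because it provides an onLoadInit callback, and the clip, file and initComponent names are made up:
// load the library SWF extracted from the SWC; only call its API after onLoadInit fires
var holder:MovieClip = this.createEmptyMovieClip("lib_mc", this.getNextHighestDepth());
var listener:Object = new Object();
listener.onLoadInit = function(target:MovieClip):Void {
    // safe to start calling the component's functions from here on
    initComponent(target);
};
var loader:MovieClipLoader = new MovieClipLoader();
loader.addListener(listener);
loader.loadClip("library.swf", holder);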
I'm pretty sure this is what we did, but I'm not in a spot to try it out right now.
Hope this helps, let me know if you have any other questions and I'll see if I can help figure it out any more.
In this instance I'm using C with autoconf, but the question applies elsewhere.
I have a glade xml file that is needed at runtime, and I have to tell the application where it is. I'm using autoconf to define a variable in my code that points to the "specified prefix directory"/app-name/glade. But that only begins to work once the application is installed. What if I want to run the program before that point? Is there a standard way to determine what paths should be checked for application data?
Thanks
Thanks for the responses. To clarify, I don't need to know where the app data is installed (e.g. by searching in /usr, /usr/local, etc.); the configure script does that. The problem was more about determining whether the app has been installed yet. I guess I'll just check the install location first, and if it's not there, fall back to "./src/foo.glade".
I don't think there's any standard way to locate such data.
Personally, I'd keep a list of paths and check whether the file can be found in any of them; the list should contain the DATADIR+APPNAME defined by autoconf, plus CURRENTDIRECTORY+POSSIBLE_PREFIX, where the prefix might be some folder from your build root.
But in any case, don't forget to use those defines from autoconf for your data files; they make your software easier to package (e.g. as deb/rpm).
There is no prescription for how this should be done in general, but Debian packagers usually install the application data somewhere in /usr/share, /usr/lib, et cetera. They may also patch the software to make it read from the appropriate locations. You can see the Debian policy for more information.
I can, however, say a few words about how I do it. First, I don't expect to find the file in a single directory; I create a list of directories that I iterate through in my wrapper around fopen(). This is the order in which I believe the file lookup should be done:
current directory (obviously)
~/.program-name
$(datadir)/program-name
$(datadir) is a variable you can use in Makefile.am. Example:
AM_CPPFLAGS = $(ASSERT_FLAGS) $(DEBUG_FLAGS) $(SDLGFX_FLAGS) $(OPENGL_FLAGS) -DDESTDIRS=\"$(prefix):$(datadir)/:$(datadir)/program-name/\"
This of course depends on the output from configure and what your configure.ac looks like.
So, just make a wrapper that will iterate through the locations and get the data from those dirs. Something like a PATH variable, except you implement the iteration.
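A minimal sketch of such a wrapper, assuming the colon-separated DESTDIRS define from the Makefile.am fragment above (the ~/.program-name entry is left out for brevity):
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Try the current directory first, then each directory listed in DESTDIRS. */
FILE *find_and_fopen(const char *name, const char *mode)
{
    FILE *f = fopen(name, mode);        /* current directory */
    if (f)
        return f;

    char *dirs = strdup(DESTDIRS);      /* e.g. "/usr/local:/usr/local/share/:..." */
    char path[4096];
    for (char *dir = strtok(dirs, ":"); dir != NULL; dir = strtok(NULL, ":")) {
        snprintf(path, sizeof path, "%s/%s", dir, name);
        f = fopen(path, mode);
        if (f)
            break;
    }
    free(dirs);
    return f;                           /* NULL if the file was not found anywhere */
}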
After writing this post, I noticed I need to clean up our implementation in this project, but it can serve as a nice start. Take a look at our Makefile.am for using $(datadir) and our util.cpp and util.h for a simple wrapper (yatc_fopen()). We also have yatc_find_file() in case some third-party library is doing the fopen()ing, such as SDL_image or libxml2.
If the program is installed globally:
/usr/share/app-name/glade.xml
If you want the program to work without being installed (i.e. just extract a tarball), put it in the program's directory.
I don't think there is a standard way of placing files. I build it into the program, and I don't limit it to one location.
It depends on how much customising of the config file is going to be required.
I start by constructing a list of default directories and work through them until I find an instance of glade.xml, then stop looking; if I don't find it, I exit with an error. Good candidates for the default list are /etc, /usr/share/app-name, /usr/local/etc.
If the file is designed to be customizable, then before looking through the default directories I work through a list of user files and paths. If none of the user versions is found, I then look in the list of default directories. Good candidates for the user config files are ~/.glade.xml, ~/.app-name/glade.xml or ~/.app-name/.glade.xml.