How to link to local images on Node.js version of Tiddlywiki? - node.js

I'm using the Node.js version of TiddlyWiki, and I'd like to link to images on my filesystem.
The documentation listed here doesn't work; in the [img[path]] tag, for the path part I put something like /Users/documents/ken/path_to_image.jpg yet nothing shows up in the tiddler.
My wiki exists in /Users/documents/ken/wiki.

I know this is an old post, but zacts stated that you can use a macro plugin or simply use the [img] tag to point to the relative path of the image from the tiddlywiki.html file. The OP, however, is using the Node.js version, and zacts apparently didn't read that. There is no tiddlywiki.html file for TiddlyWiki on Node.js; that approach only works with the static .html version of TiddlyWiki, not the Node.js version.
Currently there is no way to point to a local file through the Node.js version of TiddlyWiki: its server is not a general-purpose file server, so it does not serve subfolders like /images/ off of the root URL. The only way is to run a parallel web server on the same machine and use the full web URL to the images served up from that server.

In case someone else stumbles across this problem:
I could not find this documented anywhere, but what seems to work is to simply copy the image into the tiddlers directory, restart the Node.js server, and search for the image title from TiddlyWiki. There will be a tiddler containing that image, which you can edit at your leisure.
Alternatively, copy the image as image_name.jpg (or image_name.png) into the tiddlers directory, and create an image_name.jpg.meta text file with the following contents (use the content type that matches your image format, e.g. image/png for a PNG):
title: image_name
type: image/jpeg
Upon restarting the TiddlyWiki Node.js server, a tiddler with the title image_name containing the image will be there.
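For reference, the resulting wiki folder would look roughly like this (the wiki folder name is just an example):
mywiki/
  tiddlywiki.info
  tiddlers/
    image_name.jpg
    image_name.jpg.meta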

If you are using the Node.js version, you can simply put it in the ./files folder, and then use [img[./files/xxx.jpg]] to reference it.
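For reference, the files folder lives inside the wiki folder, next to tiddlywiki.info (the wiki folder name here is just an example):
mywiki/
  tiddlywiki.info
  tiddlers/
  files/
    xxx.jpg
A tiddler can then reference the image with [img[./files/xxx.jpg]].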

I had this same issue recently, and I found a neat little solution for it. Let me send you the links, and I'll post the snippets here.
I happened to stumble across this tiddlywiki image gallery homepage that linked to a macro plugin that lets you link in local images. Here is the link to the tiddler for the plugin: http://www.richshumaker.com/tw5/tw-photo.html#External%20Image%20Path. Here is the original TiddlyWiki google groups post of the plugin for this: https://groups.google.com/forum/#!msg/tiddlywiki/ChRV6sjQpn4/bCm35_XhGmkJ.
I hope this helps! =) (note: when I get more time I may clean up the formatting of this post).

It is very simple: you use the _canonical_uri field.
The field value is something like "./wiki/path_to_image.jpg" (mine is "./files"), at the same level as the tiddlers folder. I have not experimented with files outside the root folder of the wiki. The dot in the path might be omitted.
The content type might be "audio/mp3" or "image/jpeg"; look at the "parser" shadow tiddlers. Your browser might support more content types, like "audio/wav", but you would have to add the corresponding line to "$:/core/modules/parsers/audioparser.js", for example. It might be the same thing for images. Check your browser support.
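For example, an external image tiddler's fields would look roughly like this (the title and path are placeholders, mirroring the layout described above):
title: MyExternalImage
type: image/jpeg
_canonical_uri: ./files/path_to_image.jpg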
I really do not know why this fact is so obscure, but it works wonders.

Related

How to change the size of the images through the magnolia cms imaging module

I have put the jar file inside the library of my Magnolia installation (https://nexus.magnolia-cms.com/#nexus-search;classname~ImagingServlet), but when I invoke the URL to return the cropped image, I get an error. Can someone help me?
[this is what appears to me]
I would guess that you are either missing the destinations folder or the user has no access to it. Alternatively, you are missing imaging support for DAM; see the documentation on the various additional jars you might need.
Also, it seems you just copied the thumbnail link from the admincentral, so perhaps you might want to read more on using themes and predefined image variations.

How do I serve MathJax from a local Happstack server?

I'm not a developer/programmer. I'm just someone trying to use Gitit to take notes. I've got it to the point where it runs on Windows, but the math looks best using MathJax. I don't want to rely on a remote CDN to get MathJax working (power cuts and internet disconnections are very frequent here). The author of the app mentions it can be set up in "4 lines of code" in Happstack:
mathjax-script: https://d3eoax9i5htok0.cloudfront.net/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML
# specifies the path to MathJax rendering script.
# You might want to use your own MathJax script to render formulas without
# Internet connection or if you want to use some special LaTeX packages.
# Note: path specified there cannot be an absolute path to a script on your hdd,
# instead you should run your (local if you wish) HTTP server which will
# serve the MathJax.js script. You can easily (in four lines of code) serve
# MathJax.js using http://happstack.com/docs/crashcourse/FileServing.html
# Do not forget the "http://" prefix (e.g. http://localhost:1234/MathJax.js)
The link to the tutorial is broken, so I'd be grateful for some assistance. Is there any MathJax configuration I need to change, or will simply extracting the files do? I'll be writing lots of math in gitit. I'd prefer not to set up Apache etc. to serve MathJax; Gitit already uses Happstack, so I'd prefer using that. Thanks!
EDIT: Just to be clear I'm not sure how to assign the port 1234 to serve this script
Ok I got MathJax working using portable Apache and the MathJax archive downloaded from docs.mathjax.org. The URL needs to be of the form (assuming you extracted the files into apache2/htdocs/MathJax):
http://localhost/MathJax/MathJax.js?config=TeX-AMS-MML_HTMLorMML
I wanted to keep this lightweight by reusing the same instance of Happstack as Gitit, but that seems beyond my skills/available time right now.
EDIT: Just found out that ghc will pack everything into one exe when building. So I doubt it is even possible to use the same Happstack instance, as the root directory of the server doesn't exist?
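For anyone who still wants the Happstack route, the file-serving tutorial referenced in the config comment essentially boils down to a tiny standalone server like the sketch below. This is only an illustration: the port (1234, taken from the example URL in the config comment) and the directory name "mathjax" are my own assumptions, and it runs as a separate process rather than reusing gitit's own Happstack instance.

module Main where

import Happstack.Server (Browsing (EnableBrowsing), Conf (port), nullConf,
                         serveDirectory, simpleHTTP)

-- Serve everything under ./mathjax (the unpacked MathJax archive) on port 1234,
-- so the script is reachable at http://localhost:1234/MathJax.js
main :: IO ()
main = simpleHTTP nullConf { port = 1234 } $
         serveDirectory EnableBrowsing [] "mathjax"

With that running, the config line would point at http://localhost:1234/MathJax.js?config=TeX-AMS-MML_HTMLorMML.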
From the documentation, the static directory should work just fine:
On receiving a request, gitit always looks first in the static
directory (or in whatever directory is specified for static-dir in the
configuration file). If a file corresponding to the request is found
there, it is served immediately. If the file is not found in static,
gitit next looks in the static subdirectory of gitit's data file
($CABALDIR/share/gitit-x.y.z/data). This is where default css, images,
and javascripts are stored. If the file is not found there either,
gitit treats the request as a request for a wiki page or wiki command.
So, you can throw anything you want to be served statically (for
example, a robots.txt file or favicon.ico) in the static directory.
You can override any of gitit's default css, javascript, or image
files by putting a file with the same relative path in static. Note
that gitit has a default robots.txt file that excludes all URLs
beginning with /_.
(source: https://github.com/jgm/gitit)
Download the MathJax.js file from e.g. cdn.mathjax.org and place it in data/static/js/MathJax.js. Then change the config you quote to:
mathjax-script: http://localhost:5001/js/MathJax.js
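Equivalently, going by the documentation quoted above, you could keep the file in the wiki's own static directory rather than the cabal data directory. A sketch, assuming gitit's default port 5001 and the default static-dir:
static/js/MathJax.js
# in the gitit configuration file:
port: 5001
static-dir: static
mathjax-script: http://localhost:5001/js/MathJax.js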

KML relative URL warning

I have this little piece of KML code which shows an image when the placemark is clicked; all the images I have are stored locally, and it works fine when using the .kml file itself.
Once the file is compressed into a .kmz, I get a warning (marked in yellow) on the first line of the CDATA, covering some of my text.
It says: "This balloon may be using incorrectly formatted image URL"
All of my images work fine, they are not missing, and the relative URL is correct, but apparently the syntax is not.
Does anyone out there know of a solution to get rid of that hardcoded message? Or, even better, how to "tune" the code so these warnings don't show :)
I've seen a couple of examples stating it should help, but none suits my needs, unfortunately.
These are some of the solutions I've looked at, but I still haven't got it working.
Option 1: Fix the URLs
Your base URL is one directory down from where you thought it was, so you can simply add “../” to the beginning of each offending relative URL. This works fine in earlier versions of Google Earth as well, because older versions will look in both directories (and as a bonus, your content will render faster in older versions).
To fix the above example, we’d change:
<img src="images/image.png">
to
<img src="../images/image.png">
Option 2: Add a <base> tag
As with any other browser, you can add a <base> tag to your HTML to set the base URL of that content. The href parameter of the <base> tag must be an absolute URL, so you’ll have to hard-code your server name and path. Adding the <base> tag to your BalloonStyle can fix all of your URLs in one go.
To fix the above example, we’d add: <base href="http://host.example.com/kmz/somelayer/"> to the BalloonStyle (or description, if we only have a few affected placemarks).
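For illustration, a minimal BalloonStyle sketch with the <base> tag in place (the style id and host are placeholders, following the example URL above):
<Style id="somelayer-style">
  <BalloonStyle>
    <text><![CDATA[
      <base href="http://host.example.com/kmz/somelayer/">
      <img src="images/image.png">
    ]]></text>
  </BalloonStyle>
</Style>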
Option 3: Move the files
If you can’t change the balloon content, you can still correct it by moving the resources it points to into the correct locations. Depending on the URL, there are a couple of ways to do this. In our example, you’d move or copy the “images” folder and its contents to the KMZ archive. If the offending URL was “../files/another_image.png” (which should have been “../../files/another_image.png”), you could move or copy the files folder into the somelayer folder to fix the problem.
In many cases, though, there will be many layers all referencing the files folder, so moving the files folder into each layer folder can get tedious. If you have access to the web server configuration, you can solve this by adding an HTTP redirect from each incorrect location that redirects up a directory. You could also move the KMZ file up a directory, but this will change the URL that people must use to access your KMZ file.
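If you have that kind of server access and happen to run Apache, one way to express such a redirect is a mod_alias rule along these lines (a sketch only; the paths follow the example above and would need adapting to your layout):
RedirectMatch ^/kmz/somelayer/images/(.*)$ /kmz/images/$1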
I wanted to look at your KML to see if it was properly formatted and test it on my end.

How can I download source code from Linux Cross Reference library (generated by LXR engine)?

I wanted to download Linux kernel module source code from http://lxr.free-electrons.com/source/net/bluetooth/. Is there any tool like SVN to download the source code generated by LXR engine?
Thanks in advance!
If you're still looking for an answer, here is a procedure based on a little-documented feature of LXR.
1. Display the file you are interested in.
2. Modify the URL in the browser address bar, adding ?_raw=1 at the end, and go there (i.e. press the return key).
3. The file is then displayed "as is", without any decoration (it is sent as text/plain).
4. You can now save the file from the browser menu command File -> Save As.
NOTES:
The ?_raw=1 argument can be used to have HTML files interpreted by your browser, i.e. displayed as HTML because they will be sent as text/html.
The feature has been present in LXR for ages, though in versions older than 0.10 the argument is spelled ?raw=1 (without underscore).
I checked that ?raw=1 works with free-electrons though they use 0.3.1 which was released in 2003!
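Applied to the directory from the question, that gives URLs of this form (the file name here is only an example):
http://lxr.free-electrons.com/source/net/bluetooth/hci_core.c?raw=1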
The documentation on lxr states that there is a git repo.
http://lxr.linux.no/
I have never tried it, but it may be what you are looking for.
Not exactly an answer, but I was looking for a related thing - the possibility to download individual C files from LXR as plain text. If it was possible to download files as text, in principle you could write some sort of a parser/automatic downloader for a whole directory.
The documentation for the LXR engine seems to be here:
LXR Cross Referencer - Browse /doc at SourceForge.net
... and as far as I could see from the 1.0 PDF manual (note that lxr.free-electrons.com seems to run on version 0.3.1, though), there is no mention of plain-text source files being exported in addition to the HTML.
So, very likely, as plain-text source files do not seem to be available in an LXR export, there will be no tool able to download them. (Btw, I too wish there was a tool like this; otherwise the only option is to clone the entire Linux source tree via git.)
Note, however, that there is also an experimental version of LXR from lxr.linux.no; that one generates HTML that requires JavaScript, and it shows a "download" button in its interface, from which one can download the plain-text source file. As lxr.linux.no is down for me at the moment, here's a link to an annotated HTML page on another site that seems to use the same engine (there is no note at the moment about the LXR engine's version number):
http://lxr.missinglinkelectronics.com/#linux+v2.6.38/sound/drivers/dummy.c
... and this is what the link to obtain the plain-text version looks like:
http://lxr.missinglinkelectronics.com/linux+v2.6.38/+save=sound/drivers/dummy.c
Note that this is a different URL format than what lxr.free-electrons.com would use:
http://lxr.free-electrons.com/source/sound/drivers/dummy.c?v=2.6.38
... and there is a note on the start page ( http://lxr.missinglinkelectronics.com/ ), visible once you enable JavaScript, which states that:
lxr.missinglinkelectronics.com is currently running an experimental fork of the LXR software provided by lxr.linux.no.
... or, in other words: the link format for downloading plain-text source files from lxr.linux.no will not work for the (current) lxr.free-electrons.com installation.
Here you can browse the references and can also download the source files:
https://code-grep.com/view/project/54b083273b2082684a000008/linux-3.19-rc2
On free-electrons.com, it works by adding the argument "raw=1" in the URL. For example, this URL...
http://lxr.free-electrons.com/source/drivers/misc/lis3lv02d/lis3lv02d.c?v=3.8
... will become this:
http://lxr.free-electrons.com/source/drivers/misc/lis3lv02d/lis3lv02d.c?v=3.8&raw=1
The resulting page can then be saved using the "file saving" feature of your browser. On Linux and Windows, this is usually mapped to the Ctrl+S keyboard shortcut.

XUL accessing resources and application structure

So I'm new to XUL.
As a language it seems easy enough and I'm already pretty handy at javascript, but the thing I can't wrap my mind around is the way you access resources from manifest files or from xul files. So I did the 'Getting started with XULRunner' tutorial... https://developer.mozilla.org/en/getting_started_with_xulrunner
and I'm more confused than ever... so I'm hoping someone can set me straight.
Here is why... (you may want to open the tutorial for this).
The manifest file, the prefs.js and the xul file all refer to a package called 'myapp', which, if everything I've read thus far on MDN can be trusted, means that inside the chrome directory there must be either a jar file or a directory called myapp, but there is neither. The root directory of the whole app is called myapp, but I called mine something completely different and it still worked.
When I placed the content folder inside another folder called 'foo', and changed all references from 'myapp' to 'foo', thereby (I thought) creating a 'foo' package, a popup informed me that it couldn't find 'chrome://foo/content/main.xul', though that's exactly where it was.
Also in the xul file it links to a stylesheet inside 'chrome://global/skin/' which doesn't exist. Yet something is overriding any inline styling I try to do to the button. And when I create a css file and point the url to it, the program doesn't even run.
Can someone please explain what strange magic is going on here... I'm very confused.
When you register a content folder in a chrome.manifest you must use the following format:
content packagename uri/to/files/ [flags]
The uri/to/files/ may be absolute or relative to the location of the manifest. That is, it doesn't matter what the name of the containing folder is relative to your package name; the point is to tell chrome how to resolve URIs of the following form:
chrome://packagename/content/...
The packagename simply creates a mapping to the location of the files on disk (wherever that may be).
The chrome protocol defines a logical package structure; it simply maps one URL to another. The structure on disk might be entirely different, and the files might not even be located on disk. When the protocol handler encounters an address like chrome://foo/content/main.xul it checks: "Do we have a manifest entry somewhere that defines the content mapping for package foo?" If it then finds content foo file:///something/ - it doesn't care whether that URL refers to a file; it simply resolves main.xul relative to file:///something/, which results in file:///something/main.xul. So file:///something/main.xul will be the URL from which the data will be read in the end - but you could also map a chrome package to another chrome URL, a jar URL or something else (theoretically you could even use http, but that is forbidden for security reasons).
If you look into the Firefox/XULRunner directory you will see another chrome.manifest there (in Firefox 4/5 it is located inside the omni.jar file). That's where the mappings for the global package, for example, are defined.
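For example, with the tutorial's layout the relevant manifest line and the mapping it creates would look roughly like this (the on-disk folder name is simply whatever you registered; chrome does not infer it):
content myapp content/
chrome://myapp/content/main.xul  ->  (folder containing chrome.manifest)/content/main.xul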
