I am using Mithril and am dynamically mounting some content. The first time the page loads, everything works fine. Once I click a button, which causes one component to unmount and another to mount, MathJax won't render again. I have tried calling MathJax.Hub.Queue(["Typeset", MathJax.Hub]); on element and page load, but it still does not work. The console indicates that MathJax.Hub is undefined after the remount. MathJax itself is being loaded synchronously.
What can I do?
Your error message suggests that you may be using MathJax version 3 rather than version 2. The MathJax API changed significantly in version 3, and MathJax.Hub is no longer part of it. The replacement you are looking for is probably MathJax.typesetPromise(). See the MathJax documentation on typesetting and converting math for more details.
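For example, the typeset call can be triggered from the component's lifecycle hooks. A minimal sketch, assuming MathJax v3 is already loaded globally; the component name, container id, and sample formula are made up:

const MathView = {
  // re-typeset only the mounted subtree after it is created or redrawn
  oncreate: (vnode) => MathJax.typesetPromise([vnode.dom]),
  onupdate: (vnode) => MathJax.typesetPromise([vnode.dom]),
  view: () => m("div", "When \\(a \\ne 0\\), \\(ax^2+bx+c=0\\) has two solutions.")
};

m.mount(document.getElementById("math-container"), MathView);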
I would like to render using raylib via the Ray4Laz .dll bindings and output the renders to a Lazarus TOpenGLControl, but it gives an access violation error.
I can't even load up a texture with the library and render that without it showing a completely different image that's all garbled. It's probably pointing to some wrong memory address.
In general, I would like to understand what the issue might be, even if this can't be achieved without some major rewrites.
As the title states, I'm having a finicky issue with .NET Core 2.2. I'm using tag helpers all over, but on the specific page that I'm having trouble with, it's actually the most simple use-case of all:
<a class="logout" asp-page="/Admin/Logout">Logout</a>
Some relevant notes:
As of yesterday, it worked in all environments without any issues.
This morning I made some changes seemingly unrelated to this page, and published again.
In the published version (on Azure), the tag helpers for this page only don't render, but instead appear in the source code as literals. (e.g. <a asp-page="..."></a>)
Still works without issue locally.
Here is the directory structure. The page in question is /Admin/Index.cshtml:
And my _ViewImports.cshtml (which again, I haven't changed in months):
@using redacted
@namespace redacted.Pages
@addTagHelper *, Microsoft.AspNetCore.Mvc.TagHelpers
Explicitly adding the tag helper to the .cshtml file has solved this for me in the past. Unfortunately, I am not aware of why this issue occurs randomly. In my specific use case, we were using custom and third-party tag helpers, which we suspect were causing the issues.
Try adding this line to the /Admin/Index.cshtml file:
@addTagHelper *, Microsoft.AspNetCore.Mvc.TagHelpers
I have a scraping script written in Ruby which uses Selenium, Watir and ChromeDriver. Everything works just fine with a visible Chrome browser window, but trying to run in headless mode just hits:
Selenium::WebDriver::Error::UnknownError: unknown error: Element <input id="desired_element" type="checkbox" name="desired_element" checked="checked"> is not clickable at point (660, 594). Other element would receive the click: <html lang="en">...</html>
I'm using Watir 6.8 and the latest ChromeDriver, 2.33.
Any ideas what's behind the different behavior of the page in headless vs. non-headless mode, and how I can deal with it?
The error message is telling you what the problem is when it says "Other element would receive the click:"
This means some other element on the screen is covering the checkbox you are trying to interact with. Most likely this is caused by the default browser size in headless mode being different from the default size when the browser is run non-headless, resulting in a different arrangement of elements.
You can verify whether this is the case by asking for the size of the window in both headless and normal modes and seeing if the resulting values are the same:
size = browser.window.size
puts "The browser window is #{size.width} wide by #{size.height} high"
There are a few potential ways to solve this:
Specify or alter the browser window size, for example:
browser.window.resize_to 1024, 768
I prefer this, and normally have a command such as that to set the browser size right after it is initialized. Set it either to the minimum supported size for your site or the minimum recommended size.
Use another means to 'click' on the checkbox, such as sending a space character to it:
browser.checkbox(name: "desired_element").send_keys " "
I do not prefer this, as it doesn't really solve the source of the problem, and you may experience similar issues interacting with other elements on the site as your script progresses.
The reason for this kind of error is that the page hasn't loaded well enough to locate the element. As you have mentioned that the tests previously passed before you enabled headless mode, such issues might be caused by .click(); please try replacing .click with .send_keys:
.send_keys(selenium.webdriver.common.keys.Keys.SPACE)
When you use .send_keys() you might hit one more issue if it is failing to find the element; to solve this, you will have to find the element first and then send the keys to it:
from selenium.webdriver.common.keys import Keys

element = driver.find_element_by_tag_name("html")
element.send_keys(Keys.SPACE)
Hope this helps you.
In manifest.json, we specify our background page and can put an HTML or a JS file for it. Since it is only a script that executes, what sense does it make to have an HTML file for it?
I mean, where is the UI going to be shown anyway?
Similarly, the devtools_page property has to be an HTML file. What sense does that make?
It will not be shown anywhere (that's the essence of "background"), but some elements on it make sense.
You can have an <audio> tag, and if you play it, it will be heard.
You can have an <iframe> with some other page loaded invisibly.
...and so on
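For instance, a script running on the background page could play such an <audio> element. A tiny sketch, where the element id is made up:

// somewhere in the background page's script
const alarm = document.getElementById("alarm-sound"); // an <audio> tag declared in background.html
alarm.play(); // audible even though the page itself is never displayed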
As for devtools_page, it would actually be visible in the interface, as an extra panel in the DevTools.
It is possible that devtools_page must be an HTML file just for legacy reasons: it was not updated when manifest version 2 rolled out with changes to how background pages are specified. Still, the same arguments as above apply.
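For devtools_page specifically, the HTML file usually does nothing more than load a script that registers a panel. A minimal sketch, with made-up file names and panel title:

// devtools.js, referenced from the devtools_page HTML via a <script> tag
chrome.devtools.panels.create(
  "My Panel",   // title shown in the DevTools tab strip
  "icon.png",   // panel icon
  "panel.html", // page rendered inside the panel
  function (panel) {
    // panel is an ExtensionPanel; wire up further behavior here if needed
  }
);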
background_page is a legacy feature from the initial support of extensions in Chrome. background.scripts was added in Chrome 18. I can't speak for Google's original intentions, but I'd guess that in the original design using a page felt more natural and would be less likely to confuse developers. Once they realized how many background pages were just being used to load JavaScript, it made sense to explicitly support that.
I have a big SVG document here, containing a map of all the quests in a certain online game. Each quest node is inside an SVG <a> element, linking to a distinct named anchor in a big HTML document that loads in another tab, containing further details about that particular quest. This works exactly as desired in desktop Safari, and I'd expect it to work just as well in any browser that supports SVG at all, since I'm using only the most basic form of linking, but it fails badly on Mobile Safari (iOS 6), which is my single most important browser target, considering that the game in question is for the iPad. It only scrolls to the correct anchor on the initial load of the HTML page; clicking a different quest in the SVG tab will cause a switch to the HTML tab, and the hash (fragment ID) in the address bar changes, but the page doesn't auto-scroll.
This appears to be a known limitation in Mobile Safari: hash-only changes in the URL apparently used to force a page reload, and that got over-fixed such that nothing gets triggered at all now. The fixes I've found online all seem to apply only in cases where the URL change is generated programmatically, from within the same document, rather than via static links from a different document.
Further details:
I've tried doing the named anchors in both the old <a name="..."> form, and the newer <h1 id="..."> form. No difference.
I've tried adding an onhashchange handler to force the scrolling to take place (roughly as sketched after this list), but the handler isn't being called at all (verified by putting an alert() in it).
I could presumably fix the problem by having each quest's details in a separate HTML file, but that would severely affect usability - with all the details in a single file, you can use your browser's Find feature to search through them all at once. (Also, deploying 1006 files to my web hosting after each update would be a bit of a pain...)
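For reference, the kind of handler meant above might look like this; a rough sketch, assuming the fragment matches an element id in the details page:

window.addEventListener("hashchange", function () {
  // look up the element whose id matches the current fragment and scroll to it
  var target = document.getElementById(decodeURIComponent(location.hash.slice(1)));
  if (target) target.scrollIntoView();
});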
Anybody have an idea for a work-around?