Resolve file name on URL redirect - NSIS

I use WinInet (via an NSIS plugin called INetC) to download a file.
I want to save the file under the same file name that is used in the URL; for example, http://some.domain.com/myfile.doc should be saved as myfile.doc. The problem is when the URL redirects: for example, I can get http://some.domain.com/ which redirects to http://some.domain.com/myfile.doc, and I still want to save it as myfile.doc.
How do I tackle this problem?

INetC was not really designed to support that, but I guess you could call INetC::head in a loop and parse the returned headers until the response is no longer a redirect...
Edit:
Since INetC is only designed to deal with files named by the caller, it just relies on the high-level WinInet default handling.
While it might be possible to modify INetC or create a new plugin, it might be less work to have the server do the work. It could return a Content-Disposition header that INetC::head can download, or expose a special URL like server.com/?getname that just returns the name, so you grab the name first with INetC::get and then perform the real INetC::get with the correct destination filename...
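A rough NSIS sketch of that two-step idea, to drop into a section of your script (the /?getname helper URL is hypothetical and the server would have to implement it; the sketch also assumes the helper response is a single line containing just the bare file name, with no trailing newline):

Section "Download"
  ; hypothetical helper URL that returns only the target file name as plain text
  inetc::get /SILENT "http://some.domain.com/?getname" "$TEMP\fname.txt" /END
  Pop $0                                ; "OK" on success
  StrCmp $0 "OK" 0 done

  ; read the file name returned by the server
  FileOpen $1 "$TEMP\fname.txt" r
  FileRead $1 $2
  FileClose $1

  ; now perform the real download using that name as the destination
  inetc::get "http://some.domain.com/" "$TEMP\$2" /END
  Pop $0
done:
SectionEnd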

Related

How to append to a txt file through the karate.write(value,file.txt) function? [duplicate]

In my Karate tests I need to write response IDs to txt files (or any other file format such as JSON). I was wondering if Karate has any capability to do this; I haven't seen it in the documentation. If not, is there a simple JavaScript function to do so?
Try the karate.write(value, filename) API, but we don't encourage it. Also note that the file will be written only to the current "build" directory, which will be target for Maven projects / the stand-alone JAR.
value can be any data type, and Karate will write the bytes (or plain text) out. There is no built-in support for any other format.
EDIT: for others coming across this answer in the future, the right thing to do is:
- don't write files in the first place; you never need to do this, and this question is typically asked by inexperienced folks who for some reason think that the only way to "save" a response before validation is to write it to a file. No, please don't waste your time - just match against the response. You can save it (or parts of it) to variables while you make other HTTP requests. And do not write your tests so that scenarios (or features) depend on other scenarios; this is a very bad practice. Also note that by default, Karate will dump all HTTP requests and responses in the log file (typically target/karate.log) and also in the HTML report.
- see if karate.write() works for you, as per the answer above
- write a custom Java helper (or a JS function that uses the JVM) to do what you want, using Java interop
Also note that you can use karate.toCsv() to convert JSON into CSV if needed.
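If you do go the karate.write() route, a minimal sketch of a feature file looks something like this (the httpbin URL and the response-id.txt file name are only placeholders):

Feature: saving a value to a file (not recommended)

Scenario: write a response id to a text file
  Given url 'https://httpbin.org/uuid'
  When method get
  Then status 200
  # the file ends up in the "build" directory, e.g. target/ for a Maven project
  * karate.write(response.uuid, 'response-id.txt')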
My justification for writing to a file is a different one. I am using Karate explicitly to implement a mock. I want to expose an endpoint where the upstream system sends some basic data as a JSON payload via POST/PUT, and Karate constructs the subsequent payload file and stores it in a specific folder; this newly created payload file is then exposed through another GET call.

p:remoteCommand to upload files

I would like to upload files programmatically to my JSF application. The user should select a directory on their system, and a JS script should loop over every file in the directory and send each one to the server-side listener.
I cannot use FileUpload because it cannot select a whole directory with thousands of files, so I was thinking of using jQuery and sending each file to a remoteCommand, but I have no clue how to send the file itself (normally I just pass strings).
so I was thinking of using jQuery and sending each file to a remoteCommand, but I have no clue how to send the file itself
Don't go there. It is a bad attempt at a workaround for a bad design choice. You'd most likely run into similar problems, and what about the user having to select a lot of files a second time if it fails halfway? It might become slow, and you might run into browser limits (search for uploading multiple files in plain HTML)...
If you still want to do it via a web browser (which, according to one of your other questions, you do not want to), maybe try something like https://webdeltasync.github.io/ (disclaimer: I did not use it myself, and there might be similar ones, see https://www.google.com/search?q=browser+based+rsync; it is just a hint in which direction to find a real solution).

How do I serve MathJax from a local Happstack server?

I'm not a developer/programmer. I'm just someone trying to use Gitit to take notes. I've got it to the point where it runs on Windows, but the math looks best using MathJax. I don't want to rely on a remote CDN to get MathJax working (power cuts and internet disconnections are very frequent here). The author of the app mentions it can be set up in "4 lines of code" in Happstack:
mathjax-script: https://d3eoax9i5htok0.cloudfront.net/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML
# specifies the path to MathJax rendering script.
# You might want to use your own MathJax script to render formulas without
# Internet connection or if you want to use some special LaTeX packages.
# Note: path specified there cannot be an absolute path to a script on your hdd,
# instead you should run your (local if you wish) HTTP server which will
# serve the MathJax.js script. You can easily (in four lines of code) serve
# MathJax.js using http://happstack.com/docs/crashcourse/FileServing.html
# Do not forget the "http://" prefix (e.g. http://localhost:1234/MathJax.js)
The link to the tutorial is broken, so I'd be grateful for some assistance. Is there any MathJax configuration I need to change, or will simply extracting the files do? I'll be writing lots of math in gitit. I'd prefer not to set up Apache etc. to serve MathJax; Gitit already uses Happstack, so I'd prefer using that. Thanks!
EDIT: Just to be clear, I'm not sure how to assign port 1234 to serve this script.
OK, I got MathJax working using portable Apache and the MathJax archive downloaded from docs.mathjax.org. The URL needs to be of the form (assuming you extracted the files into apache2/htdocs/MathJax):
http://localhost/MathJax/MathJax.js?config=TeX-AMS-MML_HTMLorMML
I wanted to keep this lightweight by reusing the same instance of Happstack as Gitit, but that seems beyond my skills/available time right now.
EDIT: Just found out that GHC packs everything into one exe when building. So I doubt it is even possible to use the same Happstack instance, as the root directory of the server doesn't exist?
From the documentation, the static directory should work just fine:
On receiving a request, gitit always looks first in the static directory (or in whatever directory is specified for static-dir in the configuration file). If a file corresponding to the request is found there, it is served immediately. If the file is not found in static, gitit next looks in the static subdirectory of gitit's data file ($CABALDIR/share/gitit-x.y.z/data). This is where default css, images, and javascripts are stored. If the file is not found there either, gitit treats the request as a request for a wiki page or wiki command.
So, you can throw anything you want to be served statically (for example, a robots.txt file or favicon.ico) in the static directory. You can override any of gitit's default css, javascript, or image files by putting a file with the same relative path in static. Note that gitit has a default robots.txt file that excludes all URLs beginning with /_.
(source: https://github.com/jgm/gitit)
Download the MathJax.js file from e.g. cdn.mathjax.org and place it in data/static/js/MathJax.js. Then change the config you quote to:
mathjax-script: http://localhost:5001/js/MathJax.js
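And in case you still want the separate local server on port 1234 that the config comments describe (instead of letting gitit itself serve the file as above), the broken FileServing tutorial boils down to roughly this sketch; the mathjax directory name is an assumption, use wherever you extracted the MathJax archive:

import Happstack.Server

-- serve everything under ./mathjax on http://localhost:1234/
-- so MathJax.js ends up at http://localhost:1234/MathJax.js
main :: IO ()
main = simpleHTTP nullConf { port = 1234 } $
         serveDirectory EnableBrowsing [] "mathjax"

Run that alongside gitit and point mathjax-script at http://localhost:1234/MathJax.js?config=TeX-AMS-MML_HTMLorMML instead.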

jZebra text file printing

I am using the jZebra applet to print a dynamically generated text file. But it seems the browser has cached the file and keeps printing the same old contents even after the file contents are changed. "applet.clear()" did not help me. What am I missing?
A dirty trick for preventing caching is to simply put a JavaScript timestamp at the end of the URL. This makes the URL appear unique to the web browser; it works for me every time, especially for IE, and should also correct your problem for Java.
If you have a URL, i.e.
var url = "http://foo.bar";
Change it to:
var url = "http://foo.bar?" + new Date().getTime();
Since jZebra allows the file either to be provided as a URL parameter or to have its contents appended directly, you should probably specify how you're appending your file next time for better clarity.
-Tres

XUL accessing resources and application structure

So I'm new to XUL.
As a language it seems easy enough, and I'm already pretty handy at JavaScript, but the thing I can't wrap my mind around is the way you access resources from manifest files or from XUL files. So I did the 'Getting started with XULRunner' tutorial... https://developer.mozilla.org/en/getting_started_with_xulrunner
and I'm more confused than ever... so I'm hoping someone can set me straight.
Here is why... (you may want to open the tutorial for this).
The manifest file, the prefs.js and the XUL file all refer to a package called 'myapp', which, if everything I've read thus far on MDN can be trusted, means that inside the chrome directory there must be either a jar file or a directory called myapp, but there is neither. The root directory of the whole app is called myapp, but I called mine something completely different and it still worked.
When I placed the content folder inside another folder called 'foo' and changed all references from 'myapp' to 'foo', thus (I thought) creating a 'foo' package, a popup informed me that it couldn't find 'chrome://foo/content/main.xul', though that's exactly where it was.
Also, the XUL file links to a stylesheet inside 'chrome://global/skin/', which doesn't exist. Yet something is overriding any inline styling I try to do to the button. And when I create a css file and point the url to it, the program doesn't even run.
Can someone please explain what strange magic is going on here... I'm very confused.
When you register a content folder in a chrome.manifest you must use the following format:
content packagename uri/to/files/ [flags]
The uri/to/files/ may be absolute or relative to the location of the manifest. That is, it doesn't matter what the name of the containing folder is relative to your package name; the point is to tell chrome how to resolve URIs of the following form:
chrome://packagename/content/...
The packagename simply creates a mapping to the location of the files on disk (wherever that may be).
The chrome protocol defines a logical package structure; it simply maps one URL to another. The structure on disk might be entirely different, and the files might not even be located on disk. When the protocol handler encounters an address like chrome://foo/content/main.xul it checks: "Do we have a manifest entry somewhere that defines the content mapping for package foo?" If it then finds content foo file:///something/ - it doesn't care whether that URL refers to a file; it simply resolves main.xul relative to file:///something/, which results in file:///something/main.xul. So file:///something/main.xul will be the URL from which the data will be read in the end - but you could also map a chrome package to another chrome URL, a jar URL or something else (theoretically you could even use http, but that is forbidden for security reasons).
If you look into the Firefox/XULRunner directory you will see another chrome.manifest there (in Firefox 4/5 it is located inside the omni.jar file). That's where the mappings for the global package are defined, for example.
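As a concrete illustration (the package name foo and the folder layout here are just an example): if your chrome.manifest sits next to a folder named content/ that contains main.xul, the single line

content foo content/

is enough for chrome://foo/content/main.xul to resolve to that main.xul, regardless of what the application's root directory happens to be called.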
