I have some documents in HTML and I need them to be printed/generated on a server (no UI, automated, Linux-based).
I'm very satisfied with Google Chrome's "HTML to PDF" output for these documents, but I'm wondering: is it possible to somehow use that "HTML to PDF" printing engine from the Google Chrome browser for this purpose?
Actually, I found the solution:
The first one is wkhtmltopdf: http://code.google.com/p/wkhtmltopdf/
And in the end I realized that mPDF (a PHP library) can help me too :)
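If you need to drive the conversion from server-side code, a minimal Node.js sketch (my own illustration, assuming wkhtmltopdf is installed and on the PATH; the file names are placeholders) could look like this:

var execFile = require('child_process').execFile;

// wkhtmltopdf <input> <output>; the input can be a local HTML file or a URL
execFile('wkhtmltopdf', ['invoice.html', 'invoice.pdf'], function (err) {
  if (err) {
    console.error('conversion failed:', err);
  } else {
    console.log('invoice.pdf written');
  }
});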
If you need an HTTP API service to convert HTML to PDF from a URL, you may want to check this answer I wrote, which explains how to do it.
Example:
https://dhtml2pdf.herokuapp.com/api.php?url=https://www.github.com&result_type=show
shows the PDF generated from the site https://www.github.com in the browser.
See the project on GitHub.
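For completeness, here is a tiny Node sketch (mine, not from that answer) that calls the example endpoint above and saves the response to disk, assuming the service simply returns the PDF bytes:

var https = require('https');
var fs = require('fs');

var api = 'https://dhtml2pdf.herokuapp.com/api.php?url=https://www.github.com&result_type=show';

https.get(api, function (res) {
  // stream the generated PDF straight into a local file
  res.pipe(fs.createWriteStream('github.pdf'));
});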
Hope it helps.
Related
I am using a Node.js server to serve static files. When a PDF file is served, the browser displays the URL of the PDF path as the tab title:
127.0.0.1:8080/docs/sample
How can I set this to a custom title, say "Sample"?
I have tried the following things, but no luck:
res.setHeader('Content-Disposition', 'inline;filename="sample.pdf"');
Setting the meta tag of the PDF file to "sample"
Any help will be much appreciated.
If you're using a static file server, then yes, you are serving it as a download. Modern browsers often contain a built-in PDF viewing plugin, so instead of asking the user where to save the file, the browser will just display the PDF right in a browser tab. It still downloaded the file; in that case it just saved it to a temporary cache on your machine.
What I'm getting at is that you cannot control the browser title in that case, because it's just the browser trying to be nice and make things convenient for the user. The PDF file itself has no idea whether it is being displayed in the browser's built-in viewer or in Adobe Reader on the desktop. There are no HTTP headers you could send down to set the title either, because browsers expect page titles to be set from HTML or JavaScript running on an actual web page.
Now, if you were to embed the PDF file in an HTML page with some kind of PDF viewer then you could control the page title with a simple <title>some title</title> tag or calling document.title = 'some title'; from JavaScript. That works because the browser is rendering an actual web page that you control, and that page just happens to have an embedded PDF viewer on it.
Here's an example of an embeddable PDF viewer. http://pdfobject.com/
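As a rough sketch (plain Node, with made-up names and paths), you could serve a small wrapper page whose <title> you control and let the browser's built-in viewer render the PDF via an <embed> tag, while still serving the PDF file itself as before:

var http = require('http');

http.createServer(function (req, res) {
  if (req.url === '/docs/sample') {
    // The tab title comes from <title>; the PDF is rendered inside the page.
    res.setHeader('Content-Type', 'text/html');
    res.end(
      '<!DOCTYPE html><html><head><title>Sample</title></head>' +
      '<body style="margin:0">' +
      '<embed src="/docs/sample.pdf" type="application/pdf" style="width:100%;height:100vh">' +
      '</body></html>'
    );
    return;
  }
  // ...fall through to your existing static file handling (/docs/sample.pdf, etc.)
  res.statusCode = 404;
  res.end();
}).listen(8080);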
//hack alert
You could trick the browser by setting the last fragment of the URL to whatever you like, i.e. change sample in your example to the desired title.
(Tested in Chrome 58.0.3029.110.)
HTH
In SpiderMonkey, I want to get the page URL inside a function that builds a string. How can I do that? Also, I don't know where the SpiderMonkey forum site is. Can someone tell me? Thanks!
The standalone SpiderMonkey is just a JavaScript engine; it does not expose web-related features (HTML, DOM). If you are running a JavaScript script inside Firefox, you can get the URL through the standard DOM object window.
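For example, in a script that runs where the DOM is available (i.e. inside Firefox, not in the standalone SpiderMonkey shell), a string-building function like this hypothetical helper can read it:

// window.location.href holds the current page's URL as a string
function buildMessage() {
  return 'You are viewing: ' + window.location.href;
}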
I would like to modify my extension's popup dynamically (at run time) and want to specify a custom popup HTML file that is loaded from my server.
In Firefox, I can easily accomplish this with XUL overlays which I can specify at run-time.
And document.loadOverlay() does allow me to specify a 'remote' URL for the overlay.
Is the same possible in Chrome?
I've been playing with the chrome.browserAction.setPopup(details) API, but it seems that the details.popup param must specify a local file, not a remote URL.
I have answered this exact same question on the Chromium-Extensions mailing list.
There is no API to load external popups, but you can do that with plain JavaScript. Here is what you could do (I have done this in the past):
- Use an iframe + extension messaging within the popup. The iframe points to some external URL not hosted in the extension.
- Use templates (jQuery templates, for example), load those template files into your background page, and just use them to construct your popup.
- Download the HTML contents using XHR and load them within the popup by constructing the DOM.
I usually use the template approach, but I use the popup-iframe approach when I want to manage the entire popup on the server side so I don't have to push updates to the extension gallery. I am not a fan of downloading the HTML contents; templating seems safer.
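For reference, a rough sketch of the XHR approach (the URL and element id are made up; note that the extension's content security policy restricts what fetched markup can execute, which is part of why templating feels safer):

// popup.js
var xhr = new XMLHttpRequest();
xhr.open('GET', 'https://example.com/popup.html');
xhr.onload = function () {
  // Parse the downloaded markup and copy its elements into the popup document
  var doc = new DOMParser().parseFromString(xhr.responseText, 'text/html');
  var container = document.getElementById('container');
  Array.prototype.forEach.call(doc.body.children, function (node) {
    container.appendChild(document.importNode(node, true));
  });
};
xhr.send();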
Hope this helped!
I'd like to scrape all the URLs my searches return when searching for things via Google. I've tried making a script, but Google did not like it, and adding cookie support and CAPTCHA handling was too tedious. I'm looking for something that, while I'm browsing through the Google search pages, will simply take all the URLs on the pages and put them into a .txt file or store them somehow.
Does any of you know of something that will do that? Perhaps a Greasemonkey script or a Firefox addon? It would be greatly appreciated. Thanks!
See the JSON/Atom Custom Search API.
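As a hedged sketch of what using that API might look like from Node (you need your own API key and custom search engine id; the response field names, e.g. items[].link, are from memory, so verify them against the current docs):

var https = require('https');

var key = 'YOUR_API_KEY';          // placeholder
var cx  = 'YOUR_SEARCH_ENGINE_ID'; // placeholder
var url = 'https://www.googleapis.com/customsearch/v1?key=' + key +
          '&cx=' + cx + '&q=' + encodeURIComponent('pokemon');

https.get(url, function (res) {
  var body = '';
  res.on('data', function (chunk) { body += chunk; });
  res.on('end', function () {
    var results = JSON.parse(body);
    (results.items || []).forEach(function (item) {
      console.log(item.link); // each result URL
    });
  });
});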
I've done something similar for Google Scholar, where there is no API available. My approach was basically to create a proxy web server (a Java web app on Tomcat) that would fetch the page, do something with it, and then show it to the user. This is a 100% functional solution but requires quite a bit of coding. If you are interested, I can go into more detail and put up some code.
Google search results are very easy to scrape. Here is an example in PHP.
<?php
# a trivial example of how to scrape Google
$html = file_get_contents("http://www.google.com/search?q=pokemon");
$dom = new DOMDocument();
@$dom->loadHTML($html); // @ suppresses warnings about malformed markup
$x = new DOMXPath($dom);
foreach ($x->query("//div[@id='ires']//h3//a") as $node)
{
    echo $node->getAttribute("href")."\n";
}
?>
You may try the IRobotSoft bookmark addon at http://irobotsoft.com/bookmark/index.html
I find that when I am doing web development there are a few browser plugins that are very useful to me.
For Firefox I am using:
Firebug - Great for inspecting the HTML elements and working with CSS.
YSlow for Firebug - Developed by Yahoo! and gives timing and tips about page resources.
Live HTTP headers - Lets you inspect the headers that are sent to your browser.
For IE I am using:
Fiddler - "a Web Debugging Proxy which logs all HTTP(S) traffic between your computer and the Internet"
I am always looking for other great tools to use. So what is everyone else using?
In addition to what you have:
Web Developer toolbar adds a lot of extra functionality (cookie, form, and image inspection; viewing the generated DOM; etc.).
HTML Validator - Great for a quick check to make sure your pages are valid. Also good when there are display errors: you can quickly see whether they come from improperly generated HTML.
ColorZilla - I use this a lot to pull exact colors from a page to the clipboard.
Fireshot - Takes screenshots and annotates them conveniently; helpful.
Extended Statusbar modifies the status bar to show speed, percentage, time, and loaded size (useful for seeing how many images are being loaded, page weight, etc.).
ShowIP displays the IP address of the current page in the status bar.
External IP displays your external IP address in the status bar.
On a side note, I also find it useful to run these extensions in Firefox Portable, so that I have a browser set up specifically for development work with the relevant extensions installed, and to avoid slowing down or destabilizing my primary browser (e.g. Firebug used to crash my browser all the time when accessing Gmail).
URL Params (Firefox extension) to view the POST and GET parameters of a webpage. Useful for checking your forms.
HttpFox
The one that prevents you from accessing StackOverflow is pretty useful.
All of these are Firefox plugins.
Firebug for JavaScript and CSS debugging. Firebug allows you, for example, to examine the DOM tree while JavaScript modifies it. Firebug is my main tool.
Live HTTP Headers for looking at what data is actually inside requests and responses.
Web Developer toolbar contains smaller utilities. For example, it can validate HTML and CSS.
Dust Me Selectors finds which pieces of CSS are unused.
IE Developer Toolbar
Venkman debugger for Firefox
Firecookie and console 2
How about TwitterFox, to keep in touch with developer colleagues and friends on Twitter.
MeasureIt - For getting the exact size of items rendered on a page in Firefox.
Firebug - Also lets me see the JS requests being sent from one page to another and which data is being sent.
- I can see the data inside the JS variables.
- Replaces the Error Console. It also indicates in the status bar if it has found an error, so I can inspect it.
- Good for seeing the structure of the HTML when developing an AJAX application.