How can I download a file using browser APIs in Linux? - linux

There is an API in Windows called URLDownloadToCacheFile that downloads data to the Internet cache and returns the file name of the cache location for retrieving the bits.
Is there any API (C/C++) in Linux that downloads a file from the internet?
There are some libraries (e.g. curl, ...) that are useful for downloading, but I want a simpler API that does not depend on any libraries other than the browser.
Note that I want a C/C++ API, not a command line tool.
Thanks

A browser is an external application. On a typical Linux system, there's nothing with a status comparable to that of IE on Windows. You can use Firefox as your browser; you can also completely uninstall Firefox and use only Chrome; you can even use w3m exclusively, with no GUI browser installed at all.

You seem to be somewhat confused about the differences between Windows and other operating systems.
There's no monolithic "browser" or "Internet cache" built into Linux. On Windows you're simply using a function from a library they provide, but it's integrated into the OS (along with Internet Explorer).
There's really no parallel in Linux. The OS is not tightly coupled with the applications running on it. Using cURL or a similar library is how you approach what you're trying to do.

Like the other answers noted, there is no such thing as a built-in HTTP API on Linux systems, and you should be quick to accept that you need an HTTP library to do the task. But fear not: linking to libraries and deploying programs linked to libraries is much easier and less error-prone than on Windows, so library dependencies are not that much of an issue.
libcurl is a well-established solution for HTTP and HTTPS.
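As a rough illustration of how small the libcurl route is, here is a minimal sketch that downloads a URL to a local file (the URL and the output filename are placeholders; build with something like gcc fetch.c -lcurl):

#include <stdio.h>
#include <curl/curl.h>

int main(void)
{
    FILE *out = fopen("download.bin", "wb");   /* placeholder output file */
    CURL *curl;
    CURLcode res = CURLE_FAILED_INIT;

    if (!out)
        return 1;

    curl_global_init(CURL_GLOBAL_DEFAULT);
    curl = curl_easy_init();
    if (curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/file");
        curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);
        /* the default write callback writes the response body to the FILE* passed here */
        curl_easy_setopt(curl, CURLOPT_WRITEDATA, out);
        res = curl_easy_perform(curl);
        curl_easy_cleanup(curl);
    }
    curl_global_cleanup();
    fclose(out);
    return (res == CURLE_OK) ? 0 : 1;
}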

Related

Package node.js app as cross-platform executable, not for desktop app

There are a lot of questions on this topic, but they don't seem to distinguish between executables for desktop or server-side apps. I suppose my first question would be: what's the difference? For example, Zeit/pkg says it is a "node.js binary compiler", whereas nwjs (formerly node-webkit) says it is "an app runtime based on Chromium and node.js".
I tried zeit/pkg and it works great, but have read that there can be performance issues unless it's configured properly. I wanted to make sure I was choosing the right tool and came across nwjs. It seems to do a lot of the same stuff pkg does, but has a larger following, as well as more docs and a robust api. Can I use nwjs as a server-side executable (i.e. not using the desktop feature) the same way I would use pkg?
This answer states that nwjs "is an option, but it really isn't set-up to do a server - client type relationship", but then a comment says "you can launch a server from node-webkit just in the way you launch it in Node.js. It's just that node-webkit provide another way beyond B/S architecture".
So, is nwjs effectively the same as pkg, or fundamentally different?
I realize that there's also Electron, which states "build cross platform desktop apps" and appears similar to nwjs. I'm not trying to get into an Electron vs nwjs debate, but rather desktop vs. server, if there's a difference.
You've got most of it already; only a few clarifications are needed. The reason nw.js / Electron describe themselves as desktop frameworks is that their core architecture integrates node.js with Chromium precisely so you can build applications that have a UI. You can still use part of those frameworks (the node.js side) without opening a visible UI, in which case the behavior is similar to plain node.js. There is a caveat, though: because they are tightly integrated with Chromium at the core, in some cases you need a screen for Chromium to initialize correctly (or a virtual framebuffer, as a lot of CI setups create).
Also, if your concern is performance, I doubt using a UI framework for server-side work achieves what you want: while the difference won't be huge, the integration between node and Chromium obviously has overhead compared to bare node.js.
Getting back to the original question, I feel the question itself is somewhat vague. If the intention is truly a server-side application, you probably don't need to package it into a single binary the way pkg does; instead, deploy node and its dependency modules correctly, or package it in some installable form.

Why Chrome on Linux shows "External protocol request" dialog for unknown protocol?

I am creating a custom protocol handler for Google Chrome on Linux. My link looks like this:
<a href="myprotocol:...">Trigger my app with param</a>
I have noticed that if 'myprotocol:' is not registered (my app is not installed), Google Chrome on Linux displays an "External Protocol Request" dialog and tries to use xdg-open.
On other OSes, such as Windows 10 and OS X El Capitan, nothing is displayed if the protocol is not registered.
I have also verified that Firefox works consistently for unknown protocols on Windows, OS X and Linux - nothing is displayed.
Chrome behavior on Linux is quite confusing for users.
Any idea why Chrome on Linux (I was testing on Ubuntu 14.04) acts differently from any other OS and web browsers?
The issue is really that if Chrome lacks a local protocol handler, it wants to use the handler configured in the user's environment. No two OSes provide exactly the same API to launch a default handler, and figuring out which program that would be before actually launching it isn't even a clean API on Windows or Linux.
Both the Mac and Windows implementations end up knowing which external application is ultimately responsible for the protocol and are therefore able to suppress unhandled calls without issuing a call warning. But the Windows implementation is actually a kludge that relies on observations of the Windows registry in existing versions of Windows. This type of API violation is more dangerous on Linux, where many flavors have very different forks of the related settings tools.
It is actually considered a bug that Windows and OS X don't issue an alternate warning that they've called nothing, so you may want to comment here if you think that is the right behavior.
Here is my observation of how the 3 systems work based on the current source:
Linux
In Linux, when you register protocol handlers with the (window) system, you do something like:
xdg-settings set default-url-scheme-handler myprotocol evolution.desktop
Now, the application evolution is responsible for your protocol and anything can call:
xdg-open myprotocol:...
To now open evolution on these links. The other OSes have similar mechanisms, but may not have an external program as the call stub.
This is nice and abstract, and knowing that the external app you are calling is xdg-open avoids much complication in the Linux implementation. But it is not exactly the information the user probably wants. Getting that information would require using xdg-settings instead, and it risks being incorrect if there is, or ever will be, a way to conditionally override the default handler in some flavors of this system.
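If you want that lookup from C rather than by shelling out to xdg-settings, GIO (part of GLib) exposes the same information; this is a different route than the xdg-settings call above, and a minimal sketch, assuming the GLib development files and a build line like gcc handler.c $(pkg-config --cflags --libs gio-2.0), looks roughly like:

#include <stdio.h>
#include <gio/gio.h>

int main(void)
{
    /* Ask GIO which application is registered as the default handler for the scheme. */
    GAppInfo *app = g_app_info_get_default_for_uri_scheme("myprotocol");

    if (app) {
        printf("%s handles it (%s)\n",
               g_app_info_get_display_name(app),
               g_app_info_get_id(app));   /* e.g. evolution.desktop */
        g_object_unref(app);
    } else {
        printf("no handler registered\n");
    }
    return 0;
}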
Windows
In the Windows handler, apparently you can just go snooping around in the registry and then make an educated guess as to what calling the API is actually going to do. Technically, Chrome has to do this, since the way it opens external programs is through a system API, so there is no external stub like xdg-open to refer to in the warning.
Mac
In the "mac" handler, there is a proper API to ask about the app your specific URL will launch, so chrome does, then if the application name the empty string it can completely drop the call before generating the warning.

Google chrome extension with NPAPI moving to NaCl

I have recently developed a Google Chrome extension that uses an NPAPI plugin made with the FireBreath framework. I just now found out that Google will shortly discontinue these types of plugins and eventually ban all existing extensions that use them. So, I would like to eventually move to the NaCl / PPAPI architecture, but I am not sure whether this architecture can even support what I am currently doing in the NPAPI plugin.
In my current NPAPI plugin I support OS X and Windows. In the OS X version, the plugin calls the system() function, which runs a small one-line AppleScript. It looks like this:
osascript -e 'tell app ...
In the Windows version, it calls functions in a COM library. Both versions end up doing exactly the same thing. Another option I have is executing a Python script; if I were to go this route, I would most likely want to embed Python in the native component.
Is any of this possible anymore with NaCl / PPAPI?
The ability to call an arbitrary system() function or execute arbitrary functions from a COM library is the #1 reason for NPAPI deprecation. Ditto for executing a Python script (you can run a Python script in NaCl, of course, but it won't be able to call system() or a COM library either).
It's not news: as was noted in the Chrome comic book on the day of Chromium's release, NPAPI plugins are unrestricted, and that's a big problem: http://www.google.com/googlebooks/chrome/small_30.html
It was obvious even back then that this situation could only be tolerated for so long. Plugins were tolerated for five years because some important things were unimplementable without them, but now it's time to kill plugins and make sure nothing in the browser can access the OS directly.
If you want to implement some functionality that currently cannot be implemented in the browser because there is no appropriate API, the right way is to ask about it on chromium-dev and get that API added to Chromium (and perhaps other browsers, too). For example, access to COM ports (not COM libraries) was added recently (see http://developer.chrome.com/apps/app_hardware.html).
Since you are already using an extension, you may want to look at Native Messaging as a replacement for your use of NPAPI.
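For a sense of what that replacement involves on the native side: a native messaging host is an ordinary executable that Chrome launches and talks to over stdin/stdout, where each message is a 32-bit length prefix (in native byte order) followed by UTF-8 JSON. A minimal sketch of a host that reads one message and sends back a fixed reply (the JSON handling is deliberately left as a stub):

#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    uint32_t len = 0;

    /* Chrome sends a 32-bit message length, then the JSON payload. */
    if (fread(&len, sizeof(len), 1, stdin) != 1)
        return 0;

    char *msg = malloc(len + 1);
    if (!msg || fread(msg, 1, len, stdin) != len)
        return 1;
    msg[len] = '\0';   /* msg now holds the JSON sent by the extension */

    /* Reply using the same length-prefixed framing. */
    const char *reply = "{\"status\":\"ok\"}";
    uint32_t rlen = (uint32_t)strlen(reply);
    fwrite(&rlen, sizeof(rlen), 1, stdout);
    fwrite(reply, 1, rlen, stdout);
    fflush(stdout);

    free(msg);
    return 0;
}

The host binary also has to be declared to Chrome through a small JSON manifest; see the Native Messaging documentation for the exact fields.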
If you don't need interaction between the browser and the application, you can use external protocol support. You need to register the protocol in the registry on Windows; I don't know how external protocols work on OS X. When the user clicks an external protocol link, Chrome shows a dialog that allows the user to launch the application.

What’s the best way to distribute a binary application for Linux?

I just finished porting an application from Windows into Linux.
I have to create an installer of the application.
The application is not open source => I should distribute the application's binaries (executable file, couple .so files, help files and images).
I found several methods to do it:
- RPM and DEB packages;
- installer in .sh files;
- Autopackage.
I don't like the first method (RPM and DEB packages) because I don't want to maintain different packages for different Linux distros.
What is the best way to distribute a binary application for Linux?
Having been through this a couple of times with commercial products, I think the very best answer is to use the native installer for each supported platform. Anything else produces an unpleasant experience for the end-user, and in practice you have to test on every platform you want to support anyway, so it's not really a significant burden to maintain packages for each. The idea that you can create a binary that can "just work" on every platform out there, including some you've never even heard of, just really doesn't work all that well.
My recommendation is that you pick a platform or two to support initially (Red Hat and Ubuntu would be my suggestions) and then let user demand drive the creation of additional installation packages. Perhaps make it known that you're willing to support additional platforms, for a modest fee that covers your time and effort in packaging and testing on that platform. If a platform proves to be very different, you may need to charge more for ongoing support.
Oh, and I cannot overemphasize the value of virtual machines for scenarios like this. You need to build VMs for each platform you support, and perhaps multiple VMs per platform to make it easy to test different configurations.
There were a lot of good answers (mine included :)) here. Although that is more about binary compatibility (which you do need to worry about).
For the installer I would recommend autopackage (we successfully released several versions of our software with it); they have already done the "installer.sh" part and more (desktop integration, for example).
You have to be careful and test your upgrade scenarios and such, depending on how complex your package structure is, but it is pretty neat overall. I fixed a few bugs with dependency handling in 1.2.6, so it should be fine.
UPDATE: The original question was deleted, so I'm reposting the full answer here. Ignore all references to autopackage; that project was merged into Listaller, and I'm not sure whether the relevant parts survived.
For standard libraries (like crypto++, pthreads, etc) that are likely to be available in a distribution -- link dynamically and tell users to get them from their distro repository. Or link statically if it is feasible.
For weird libraries whose version you must control (if you want to deploy a Qt4 app in the territory of enemy gnomes, for example), compile them yourself and install them into a private spot only your app knows about.
Never install private libs into standard places unless you can be sure to not interfere with package systems of all distros you support. (and that they can't interfere with you either).
Use rpath instead of LD_LIBRARY_PATH, and set it properly for all your binaries and all .so files that reference each other. You can set the rpath on your binary to "$ORIGIN:$ORIGIN/../lib:/opt/my/private/libs" and have the linker search those places before any standard paths (you have to pass a linker flag for $ORIGIN to work, I think). Make sure to set the rpath on your libs too: for example, QtGui needs QtCore, and if the user happens to install a standard package with a different version, you absolutely don't want it picked up (exe -> ../lib/QtGui.so (4.4.3) -> /usr/local/lib/QtCore.so (4.4.2) -- a sure way to die early).
If you compile with any rpath, you can change it later with chrpath, thus making it possible to tweak the install location as part of post-processing or an install script.
Maintain binary compatibility. glibc is pretty much fixed for your users, so you should link against a sufficiently old version; 2.3 is a safe bet. You can use APBuild -- a gcc wrapper that enforces the glibc version and does a few other binary compatibility tricks, so you don't have to compile all your apps on a really old distro.
If you link to anything statically, it generally will have to be rebuilt with APBuild too, otherwise it is bound to drag in newer glibc symbols. All .so's you install privately will naturally have to be built with it as well. Sometimes you have to patch third-party libs to use older symbols. (I had to patch ruby to return real permissions instead of effective ones, since there is no such function in older glibc. Still not sure if I broke anything :)).
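To illustrate the kind of thing APBuild automates, here is a rough by-hand sketch of pinning one symbol to an older glibc version with a .symver directive. The GLIBC_2.2.5 version string is the x86-64 baseline and differs on other architectures, and you may need -fno-builtin-memcpy for a real call to be emitted, so treat the details as assumptions to verify with objdump -T on your own binary:

/* Force references to memcpy to bind against the old versioned symbol,
 * so the resulting binary does not require the newer memcpy@GLIBC_2.14. */
__asm__(".symver memcpy, memcpy@GLIBC_2.2.5");

#include <string.h>
#include <stdio.h>

int main(void)
{
    char dst[16];
    memcpy(dst, "hello", 6);
    puts(dst);
    return 0;
}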
For integration with desktop environments (file associations, MIME types, icons, start menu entries, etc.) use xdg-utils. Beware though: like everything on Linux, they don't really like spaces in filenames :). Make sure to test those things on each target distro -- xdg implementations are riddled with bugs and quirks.
For the actual install you can either provide a variety of native packages (rpm, deb and a few more), roll out your own installer, or find an installer that works on all distros, bypassing native package managers. We successfully used Autopackage (same people who made APBuild) for that.
It's possible to install an RPM on Debian and a DEB on RHEL.
If you are going to statically link this program, or dynamically link only with libraries that you will be distributing in the package, then it doesn't much matter how you distribute it. The simplest way is tar.gz and that would work.
OTOH if it is dynamically linked with system libraries, and particularly if it has dependencies on dynamic libraries that will be shared with the client's other applications, then you kind of need to do either RPM, DEB, or both.
You may want to try out InstallBuilder. It is cross-platform (it runs on Windows, Linux, Mac OS X, Solaris and nearly any other Unix platform out there). It is used by Intel, Motorola, GitHub, MySQL, Nokia/Trolltech and many other companies, so you will be in good company :) In addition to binary installers, it can also create cross-distro RPM and DEB packages.
InstallBuilder is commercial, but we offer free licenses for open source programs and very significant discounts for mISVs or solo-developers, just drop us a line.
Create a .tar.bz2 archive with the binary, then publish a feed for it, like this:
<?xml version="1.0" ?>
<interface uri="http://mysite/myprog.xml"
           xmlns="http://zero-install.sourceforge.net/2004/injector/interface">
  <name>MyProgram</name>
  <summary>what it does</summary>
  <description>A longer description goes here.</description>
  <implementation main='bin/myprog'
                  id="sha1new=THEDIGEST"
                  version='1.0'>
    <archive href='http://mysite/myprogram-1.0.tar.bz2'
             size='10000'/>
  </implementation>
</interface>
Sign it with your GPG key. You can use the tools on 0install.net to calculate the digest and add the GPG signature for you in the correct format.
Then, put it on your web-site at the address in the uri attribute. Any user on most Linux distributions (e.g. Ubuntu, Fedora, Debian, Gentoo, ArchLinux, etc) can then install and run your program with:
0launch http://mysite/myprog.xml
Their system will also check for updates periodically. There are various GUIs for the different desktop environments, but the command-line will work everywhere.
Also look at some of the existing feeds for inspiration.
Let me mention an additional possibility, although I am not aware of its current status: the Loki installer. Loki was a company that ported video games to Linux. It went under in 2002, but the installer is still available.
InstallShield is also available for Linux. No idea on its status though.
Although many people are proposing that you go with tar.gz, please don't. I assume you want to provide a pleasant installation experience for your users. A tar.gz is one of the most low-level, low-quality, low-usability choices you can make. It works everywhere because it does basically nothing, as you know.
The guys at freedesktop.org and the LSB are quite clear on where to put stuff. What you need is a friendly program to do that. Autopackage imho has the numbers (I love it), but despite its age, I haven't seen a single program out there distributed as an autopackage.
Evaluate it carefully, but don't skip the chance of being part of the momentum in favour of it, just because it's not popular. If it works for you, and it works for your users, everything else does not matter.
There is no best way (universally speaking).
tar.gz the binaries, that should work.
Today, I would also look at Snapcraft and Flatpak which are embraced by some popular distributions. I explored other options and it is what ended up working best for me. Flatpak in particular also helped me learn about standard Linux desktop conventions to follow.
You may also want to look at AppImage (https://appimage.org/). The concept is that it produces a single binary file that the user downloads, makes executable, and runs directly; no installation necessary, no dependencies to install (since the AppImage typically includes all the dependencies except basic stuff like glibc). This makes for a really great user experience!
Some downsides:
- The image may be large, since it probably includes all files/libraries/... the app depends on.
- As the image creator, you're responsible for security updates to any of the libraries you bundle into your image.
- An AppImage is great for a user-run application that's pretty isolated from anything else on the system (i.e. daemons, system configuration, etc.), but if your app relies on things like udev integration, desktop file installation, dbus registration, etc., this isn't easy, since the app's files aren't available when the app isn't running (making udev rules hard), and there is by definition no installer that gets run (making desktop file installation hard).
I've also looked into this at work and I'd have to agree there really isn't a "best way". If your application is being distributed as source then I'd go with the make/configure methods packaged up in a tar.gz. That seems fairly universal in the Linux world.
A good way to get an idea of what to do is to look at larger organizations and see how they distribute their binaries.

Does an RDP client library under Linux exist?

Are there any libraries for connecting as a client via Remote Desktop Protocol (RDP) in Linux? The language used is secondary to the issue of existence. Any mainstream language would do (e.g. C++, Perl, Java, Ruby, PHP, Python), and even less popular ones like OCaml or Scheme.
Is there any option available other than taking the rdesktop source and hacking a library out of that?
There is a set of cross-platform, open-source RDP libraries available in the FreeRDP project. They are written in C and licensed under the Apache License 2.0. See http://www.freerdp.com
Typing rdp into my Mandriva Software Management tool revealed libxrdp, which is a library that xrdp depends on, but I don't know the details, so it may not be what you want.
The project website is xrdp.sourceforge.net.
You can look at these implementations:
- FreeRDP (Apache License) - mostly C.
- FreeRDP C# bindings
- FreeRDP-WebConnect for HTML5 stuff
- rdesktop (GPLv2) - mostly C.
- rdpy (GPLv3) - Python, but the bitmap handling is written in C (borrowing code from rdesktop)
- properJavaRDP (GPL) - Java
Non-portable implementations:
- Terminals (MS-CL) - a Visual Studio project.
And the reference documents:
- http://msdn.microsoft.com/en-us/library/cc240445.aspx
- http://msdn.microsoft.com/en-us/library/cc240452.aspx (message flows / connection sequence)
rdesktop is going to be your best option. The code is quite clean and I don't think making a library would be a huge deal.
Another option, if you prefer Java, is the ProperJavaRDP client: http://properjavardp.sourceforge.net/ . It's nearly a straight port of rdesktop.
Sorry, but a quick strace and nm of rdesktop reveal nothing beyond X, crypto, and compression libs.
rdesktop does allow embedding into other windows; how does that not serve your purpose?
See the -X option in the help.
You could embed rdesktop in a window of your own per J-16 SDIZ's suggestion and then send X.org events to that window programmatically. A similar route would be to install a VNC server on the Windows machine and run a VNC client on the Linux machine. That way you can also programmatically send X.org events to the VNC client.
This is what browsershots.org uses to programmatically control various web browsers in a cross-platform way through Python. Have a look at the gui directory of the browsershots.org client source code.
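A rough sketch of that embedding route in C (assuming the Xlib headers and linking with -lX11; the hostname my-windows-host is a placeholder, and real code would size the window and handle errors properly; rdesktop's -X flag takes the id of an existing X window to draw into):

#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>
#include <X11/Xlib.h>

int main(void)
{
    /* Create a plain X window that will host the RDP session. */
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy)
        return 1;

    Window host = XCreateSimpleWindow(dpy, DefaultRootWindow(dpy),
                                      0, 0, 1024, 768, 0, 0, 0);
    XMapWindow(dpy, host);
    XFlush(dpy);

    /* Hand the window id to rdesktop so it embeds itself into our window. */
    char wid[32];
    snprintf(wid, sizeof(wid), "%lu", (unsigned long)host);

    pid_t child = fork();
    if (child == 0) {
        execlp("rdesktop", "rdesktop", "-X", wid, "my-windows-host", (char *)NULL);
        _exit(127);   /* only reached if exec failed */
    }

    /* Wait for the rdesktop session to end, then clean up. */
    waitpid(child, NULL, 0);
    XDestroyWindow(dpy, host);
    XCloseDisplay(dpy);
    return 0;
}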
I've gotten xrdp to work with RHEL on EC2: xrdp.org
