A list of Windows COM objects?

Hiya. I just want to write some Windows Script Host scripts, and I know it is possible to interface with this thing called "COM" by using the WScript.CreateObject() method. What I don't know is which objects are available for instantiating with this method. My google-fu only shows me the thousand and one sophisticated methods available for instantiating and communicating with COM objects. There is one Stack Overflow answer with a PowerShell script which supposedly lists /currently available/ COM objects. What I am interested in is a list of objects that can reasonably be expected to be available by default on most machines with some version of Windows installed. This appears to be a tall order. In particular, I am interested in any /default/ available COM objects which may be used for image resizing and cropping. I want to know if this is possible without installing some external third-party utility like ImageMagick.

Your best bet is probably to use that PowerShell script (or Microsoft's OLE/COM Object Viewer) to see what is installed on a machine after a fresh install of Windows.
There is no "standard list", unfortunately. It even changes depending on which version of Windows you have installed. The most comprehensive answer would be to install a collection of Windows versions (XP, Vista, 7, maybe 8) and find the intersection of the lists.
Alas, that is a lot of work for what is basically an investigative exercise. If there is a particular component or task you're trying to find out about, you might ask a new question about just that. Or you could simply try it on some clean installs of Windows.
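For what it's worth, the ProgIDs you can pass to WScript.CreateObject() are registered in the registry under HKEY_CLASSES_ROOT, each with a CLSID subkey, and that is essentially where any such listing (the PowerShell script, OLE View) gets its data. Below is a rough sketch of that walk in Go, using the golang.org/x/sys/windows/registry package; it is an illustration of the mechanism rather than something you would run from WSH, and the check for a CLSID subkey is only a heuristic.

package main

import (
	"fmt"

	"golang.org/x/sys/windows/registry"
)

func main() {
	// Open HKEY_CLASSES_ROOT for enumeration (Windows-only).
	root, err := registry.OpenKey(registry.CLASSES_ROOT, "", registry.ENUMERATE_SUB_KEYS)
	if err != nil {
		panic(err)
	}
	defer root.Close()

	names, err := root.ReadSubKeyNames(-1) // -1 = read all subkey names
	if err != nil {
		panic(err)
	}
	for _, name := range names {
		// A key directly under HKCR that carries a CLSID subkey is a registered ProgID.
		k, err := registry.OpenKey(registry.CLASSES_ROOT, name+`\CLSID`, registry.QUERY_VALUE)
		if err != nil {
			continue // not a ProgID (file extension, interface, etc.)
		}
		k.Close()
		fmt.Println(name)
	}
}

Keep in mind this only tells you what is registered on that particular machine; it does not answer the "available by default everywhere" question, which is exactly why the intersection-of-clean-installs approach above is suggested.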

Related

Does Go have OS-specific packages that cannot be used on another OS?

I cannot call myself an absolute beginner in Go, but I'm definitely not an expert.
Today I noticed something VERY confusing. I was experimenting with Unix sockets and Windows named pipes, and from my research there are two packages that support Windows named pipes:
https://github.com/natefinch/npipe
https://github.com/Microsoft/go-winio (I could not find ANY documentation, help, or how-to-use whatsoever for this package)
My OS is Linux, and I decided to give it a go: go get the package(s) and write the code, to be tested later on a Windows machine. But, to my surprise, at least in VSCode, those packages are not recognized by the tooling.
When I look at npipe, for example, I see it only has npipe_windows.go, which, if I'm not mistaken, is supposed to be used automatically on Windows.
So, I think there is a concept of OS-specific packages in Go, right? And if so, does it mean that I cannot use, for example, VSCode's Go tools to code against Windows packages on Linux?
That, in my opinion, would be extremely inconvenient: having to switch systems in order to write something that works on both Linux and Windows... although I guess that's only true when developing on Linux; developing on Windows would cover both.
But for me, it doesn't make sense NOT to be able to develop something on Linux, which is IMHO the best environment to develop on (except for Apple-related code, of course).
Am I missing something here?
Thank you
So, I think my question actually concerns the tooling rather than the Go language itself. gopls is the tool used by VSCode if you choose to use the Go language server.
As the README says, it's in alpha and not stable, and there are known issues listed on the repo, which seem to be the source of my confusion.
I think the main issue related to what I'm seeing is:
x/tools/gopls: does not handle build tags
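For reference, here is what such an OS-specific file looks like. The _windows suffix in the file name (or an explicit //go:build windows constraint) tells the Go toolchain to compile the file only when GOOS=windows, which is why a toolchain configured for Linux, and gopls while it ignores build tags, acts as if the package's symbols don't exist. A minimal, hypothetical example:

// pipe_windows.go (hypothetical): the _windows suffix alone restricts this file to
// GOOS=windows builds; an explicit "//go:build windows" line at the top of the file
// would do the same for any file name.
package pipe

import "fmt"

// Path returns the Windows named-pipe path for a given pipe name,
// e.g. `\\.\pipe\mypipe`.
func Path(name string) string {
	return fmt.Sprintf(`\\.\pipe\%s`, name)
}

Note that you can still cross-compile such code from Linux with GOOS=windows go build ./..., so switching operating systems is only needed to actually run it; it was just the editor tooling that did not respect build tags at the time.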

Tools to help manage sets of multiple versions of executables on Linux?

We are in a networked Linux environment and what I'm looking for is a FOSS or generic system level method for managing what versions of executables and libraries get used, per session. The executables would preferably be installed on the network. The executables will be in-house tools and installs of commercial packages like Houdini, Maya and Nuke.
The need for this is that we'd prefer to have multiple versions of the software installed and available for the artists but there needs to be an easy way to select which version to use. As an added benefit, I'd like to be able to track the version of software used to generate a given output as metadata. I've worked at studios that did this successfully but I was not 100% up to speed on how it was achieved. Every executable in a given set was assigned a single uber version for the set. That way, the "approved packages" of the studio tools were all collapsed into a single package of tools that were known to work together.
Due to the way they install, some programs make setting this up easy (It's as simple as adding their install directories to $PATH). Other programs don't make it quite so easy. I'm particularly worried about how to handle the libraries a program might install. What's needed is a generic access method I can use to wrap everything into a clean front end.
Does anyone know of such a system available in the wild or am I going to have to implement it from scratch? Google hasn't been very helpful in finding a solution.
Thanks!
Check out the "modules" system at http://modules.sourceforge.net/ ; it's quite widely used in HPC.
There is also eselect. I have only used it on Funtoo (an offshoot of Gentoo), but it seems to do what you need. It is also written entirely in Bash, so it should be quite possible to port it to other distros.
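The core mechanism behind the "modules" approach is simple: pick a version, prepend its directories to PATH (and LD_LIBRARY_PATH) for the current session, and record what was chosen. Here is a minimal sketch of that idea as a one-shot wrapper in Go, assuming a hypothetical /net/tools/<app>/<version>/ install layout; Environment Modules does this per shell session, and far more robustly, so treat this only as an illustration.

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"syscall"
)

func main() {
	if len(os.Args) < 3 {
		fmt.Fprintln(os.Stderr, "usage: run <app> <version> [args...]")
		os.Exit(2)
	}
	app, version := os.Args[1], os.Args[2]

	// Hypothetical layout: /net/tools/<app>/<version>/{bin,lib}
	root := filepath.Join("/net/tools", app, version)

	// Put the chosen version first on the search paths, for this process tree only.
	os.Setenv("PATH", filepath.Join(root, "bin")+":"+os.Getenv("PATH"))
	os.Setenv("LD_LIBRARY_PATH", filepath.Join(root, "lib")+":"+os.Getenv("LD_LIBRARY_PATH"))

	// Record the selection so downstream tools can stamp their output with it.
	os.Setenv("TOOL_VERSION", app+"-"+version)

	bin := filepath.Join(root, "bin", app)
	args := append([]string{bin}, os.Args[3:]...)
	if err := syscall.Exec(bin, args, os.Environ()); err != nil {
		fmt.Fprintln(os.Stderr, "exec failed:", err)
		os.Exit(1)
	}
}

A real system also handles switching and unloading versions within a running shell, conflicts between selections, and so on, which is where a hand-rolled wrapper like this quickly runs out of steam.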

Is it possible to intercept DNS queries using LSP/SPI?

I wrote my own LSP, which is working fine. However, I cannot catch DNS queries. For example, there is no function like WSPGetHostByName or WSPGetAddrInfo.
My LSP also supports the UDP protocol, but that does not catch the DNS queries either. If I run nslookup from the console (cmd.exe) it seems to work, but I cannot catch gethostbyname. Does anyone know how to do that? I don't think writing an NSP (Name Service Provider) is the solution, but I might be wrong.
Thanks
We have developed an LSP that can "intercept" DNS queries. The only way to do it is by hooking all of the DNS functions. Keep in mind there are a few challenges you need to solve:
- You need to use a good hooking library that supports both 32-bit and 64-bit code.
- The library's license must be right for your application; there are some free libraries, but some can be used freely only in free projects.
- When you hook the functions, you need to make sure not to modify values that are not IP-based, and to defer the query to the real function.
Intercepting UDP will not work, since the queries go out from the Windows DNS Client service; so unless you write a low-level driver (TDI, NDIS or WFP) you must hook the functions (or write an NSP). nslookup works for you because it creates the DNS queries itself.
My solution would be as follows:
Take the well-known web browser firefox.exe,
copy it to a new name, icefoxy.exe,
and modify the EXE so it will load a custom DLL.
I already did this a few months ago, but since Firefox is constantly getting updates, that means:
EITHER: keep one version and do not update (at your own risk; this may cause security problems, since vulnerabilities will not be fixed)
OR: update your modification every time firefox.exe changes.
The DLL can easily be written in Delphi.
Modifying Firefox requires assembly language, unless you know how to download all the files necessary to compile Firefox yourself, have access to a C/C++ compiler (likely MinGW GCC), and are prepared for the fact that there are mutually incompatible dialects of C++, so if your g++ (part of the GCC suite) is incompatible with your Firefox source, the attempt will fail.
I am not a C++ expert myself, so I took the (for me, at least) easier route of using machine language; that way I do not need to be a C/C++ expert to get the job done.
Some related points:
What functions must be hooked to intercept all of Firefox's access to DNS server(s)?
I noticed that if you load a Delphi DLL into Icefoxy.exe (a renamed copy of Firefox.exe),
then a Delphi form's colors are missing. E.g. if you set (either in the Object Inspector or in code):
Label1.Color := clLime;
you still see a label withOUT a lime background color. I do not know the exact reason, but it seems that the Delphi VCL relies on being used from an EXE, and when you use VCL components inside a DLL instead of an EXE, some things (such as colors) do not work as intended.
I'd be happy to post my code (both the assembly-language modifications to Firefox and the Delphi DLL source code), but how/where can I post it so it is publicly viewable?
I used Delphi 7 to make the DLL.
If you use Delphi 2009 or later, you need to take extra care that any string data passed between the Delphi code and any non-Delphi code has the correct encoding, because in Delphi 2009 and all newer versions the String type is an alias for UnicodeString, whereas in older Delphi versions it is an alias for AnsiString.
At the time, this was just a small experiment to find out whether I could force Firefox to load my own DLL into its process address space.
Another interesting idea would be to get access to Firefox's DOM (Document Object Model) from a Delphi DLL; that would give a working alternative to TWebBrowser (which is based on the ActiveX version of Microsoft's Internet Explorer).
I know there have been Firefox-based components like TWebBrowser, but their problem is that nobody has cared to update them for a very long time, so they are only compatible with some very outdated version of Firefox.

What’s the best way to distribute a binary application for Linux?

I just finished porting an application from Windows to Linux.
Now I have to create an installer for the application.
The application is not open source, so I have to distribute the application's binaries (executable file, a couple of .so files, help files and images).
I found several methods to do it:
- RPM and DEB packages;
- an installer in a .sh file;
- Autopackage.
I don't like the first method (RPM and DEB packages) because I don't want to maintain different packages for different Linux distros.
What is the best way to distribute a binary application for Linux?
Having been through this a couple of times with commercial products, I think the very best answer is to use the native installer for each supported platform. Anything else produces an unpleasant experience for the end-user, and in practice you have to test on every platform you want to support anyway, so it's not really a significant burden to maintain packages for each. The idea that you can create a binary that can "just work" on every platform out there, including some you've never even heard of, just really doesn't work all that well.
My recommendation is that you pick a platform or two to support initially (Red Hat and Ubuntu would be my suggestions) and then let user demand drive the creation of additional installation packages. Perhaps make it known that you're willing to support additional platforms, for a modest fee that covers your time and effort in packaging and testing on that platform. If a platform proves to be very different, you may need to charge more for ongoing support.
Oh, and I cannot overemphasize the value of virtual machines for scenarios like this. You need to build VMs for each platform you support, and perhaps multiple VMs per platform to make it easy to test different configurations.
There were a lot of good answers (mine included :)) here, although that question is more about binary compatibility (which you do need to worry about).
For the installer I would recommend Autopackage (we successfully released several versions of our software with it); they have already done the "installer.sh" part and more (desktop integration, for example).
You have to be careful and test your upgrade scenarios and such, depending on how complex your package structure is, but it is pretty neat overall. I fixed a few bugs with dependency handling in 1.2.6, so it should be fine.
UPDATE: The original question was deleted, so I am reposting the full answer here. Ignore the references to Autopackage; it was merged into Listaller, and I am not sure whether the relevant parts survived.
For standard libraries (like crypto++, pthreads, etc.) that are likely to be available in a distribution, link dynamically and tell users to get them from their distro's repository, or link statically if that is feasible.
For weird libraries whose version you must control (if you want to deploy a Qt4 app in the territory of enemy gnomes, for example), compile them yourself and install them into a private spot only your app knows about.
Never install private libs into standard places unless you can be sure they will not interfere with the package systems of all the distros you support (and that those can't interfere with you either).
Use rpath instead of LD_LIBRARY_PATH, and set it properly for all your binaries and all the shared libraries that reference each other. You can set the rpath on your binary to "$ORIGIN:$ORIGIN/../lib:/opt/my/private/libs" and have the linker search those places before any standard paths (you have to set some linker flag for $ORIGIN to work, I think). Make sure to set the rpath on your libs too: for example QtGui needs QtCore, and if the user happens to have a standard package with a different version installed, you absolutely don't want it picked up (exe -> ../lib/QtGui.so (4.4.3) -> /usr/local/lib/QtCore.so (4.4.2) is a sure way to die early).
If you compile with any rpath at all, you can change it later with chrpath, which makes it possible to tweak the install location as part of post-processing or an install script.
Maintain binary compatibility. glibc is pretty much fixed for your users, so you should link against a sufficiently old version; 2.3 is a safe bet. You can use apbuild, a gcc wrapper that enforces a glibc version and does a few other binary-compatibility tricks, so you don't have to compile all your apps on a really old distro.
If you link anything statically, it generally has to be rebuilt with apbuild too; otherwise it is bound to drag in newer glibc symbols. All the .so files you install privately naturally have to be built with it as well. Sometimes you have to patch third-party libs to use older symbols. (I had to patch Ruby to return real permissions instead of effective ones, since there is no such function in older glibc. Still not sure if I broke anything :)).
For integration with desktop environments (file associations, MIME types, icons, start-menu entries, etc.) use xdg-utils. Beware though: like everything on Linux, they don't really like spaces in filenames :). Make sure to test those things on each target distro; xdg implementations are riddled with bugs and quirks.
For the actual install you can either provide a variety of native packages (rpm, deb and a few more), roll out your own installer, or find an installer that works on all distros while bypassing the native package managers. We successfully used Autopackage (from the same people who made apbuild) for that.
It's possible to install an RPM on Debian and a DEB on RHEL.
If you are going to statically link this program, or dynamically link only against libraries that you will be distributing in the package, then it doesn't much matter how you distribute it. The simplest way is a tar.gz, and that would work.
OTOH, if it is dynamically linked against system libraries, and particularly if it has dependencies on dynamic libraries that will be shared with the client's other applications, then you kind of need to do either RPM, DEB, or both.
You may want to try out InstallBuilder. It is cross-platform (it runs on Windows, Linux, Mac OS X, Solaris and nearly any other Unix platform out there). It is used by Intel, Motorola, GitHub, MySQL, Nokia/Trolltech and many other companies, so you will be in good company :) In addition to binary installers, it can also create cross-distro RPM and DEB packages.
InstallBuilder is commercial, but we offer free licenses for open source programs and very significant discounts for micro-ISVs and solo developers; just drop us a line.
Create a .tar.bz2 archive with the binary, then publish a feed for it, like this:
<?xml version="1.0" ?>
<interface uri="http://mysite/myprog.xml"
           xmlns="http://zero-install.sourceforge.net/2004/injector/interface">
  <name>MyProgram</name>
  <summary>what it does</summary>
  <description>A longer description goes here.</description>
  <implementation main='bin/myprog'
                  id="sha1new=THEDIGEST"
                  version='1.0'>
    <archive href='http://mysite/myprogram-1.0.tar.bz2'
             size='10000'/>
  </implementation>
</interface>
Sign it with your GPG key. You can use the tools on 0install.net to calculate the digest and add the GPG signature for you in the correct format.
Then, put it on your website at the address given in the uri attribute. Any user on most Linux distributions (e.g. Ubuntu, Fedora, Debian, Gentoo, Arch Linux) can then install and run your program with:
0launch http://mysite/myprog.xml
Their system will also check for updates periodically. There are various GUIs for the different desktop environments, but the command-line will work everywhere.
Also look at some of the existing feeds for inspiration.
Let me mention an additional possibility, although I am not aware of its current status: the Loki installer. Loki was a company that ported video games to Linux. It went under in 2002, but the installer is still available.
InstallShield is also available for Linux. No idea on its status, though.
Although many people are proposing that you go with a tar.gz, please don't. I assume you want to provide a pleasant installation experience for your users. A tar.gz is one of the most low-level, low-quality, low-usability choices you can make. It works everywhere because it does basically nothing, as you know.
The guys at freedesktop.org and the LSB are quite clear on where to put stuff. What you need is a friendly program to do that. Autopackage IMHO has the numbers (I love it), but despite its age, I haven't seen a single program out there distributed as an autopackage.
Evaluate it carefully, but don't pass up the chance to be part of the momentum in its favour just because it's not popular. If it works for you and it works for your users, nothing else matters.
There is no best way (universally speaking).
tar.gz the binaries, that should work.
Today, I would also look at Snapcraft and Flatpak, which are embraced by some popular distributions. I explored other options, and these are what ended up working best for me. Flatpak in particular also helped me learn the standard Linux desktop conventions to follow.
You may also want to look at AppImage (https://appimage.org/). The concept is that it produces a single binary file that the user downloads, marks executable, and runs directly; no installation is necessary and there are no dependencies to install (since the AppImage typically includes all the dependencies except basic stuff like glibc). This makes for a really great user experience!
Some downsides:
- The image may be large, since it probably includes all the files/libraries the app depends on.
- As the image creator, you're responsible for security updates to any of the libraries you add into your image.
- An AppImage is great for a user-run application that's pretty isolated from anything else on the system (daemons, system configuration, etc.), but if your app relies on things like udev integration, desktop-file installation or D-Bus registration, this isn't easy, since the app's files aren't available when the app isn't running (making udev rules hard), and there is by definition no installer that gets run (making desktop-file installation hard).
I've also looked into this at work, and I'd have to agree there really isn't a "best way". If your application is being distributed as source, then I'd go with the make/configure method packaged up in a tar.gz. That seems fairly universal in the Linux world.
A good way to get an idea of what to do is to look at larger organizations and see how they distribute their binaries.

GUI/TUI Linux library

Is there any UI library that can be used to build both a text user interface (ncurses) and a graphical user interface (GTK? Qt?) from the same source?
I know that debconf can be used with various frontends; I would like to build something similar, but programmable.
The library that gives YaST the independence to do ncurses, GTK and Qt frontends from one codebase provides what you are looking for, and it is not tied to YaST itself.
Actually, libyui only requires the standard C++ library and pthreads (IIRC). The UI plugins of course require the respective libraries (Qt, ncurses). YaST uses libyui via a set of YCP bindings that export a YCP-like API on top of libyui.
The library is a bit low-level (one layer below an event loop); my colleague Klaus Kämpf wrote about using it some time ago in his blog, including binding it to scripting languages using SWIG.
The only part that is SUSE-specific is the packaging, so you would need to package it yourself. Stack Overflow did not allow me to link more than once; the code of the library is linked from Klaus's blog (replace "libyui" with "qt" or "ncurses" in the URL for the plugins' code).
Also google for "YaST Independence From YCP" to find a blog entry by Andreas Jäger on the subject.
You could write your program to use ncurses, and then use PDCurses to turn it into an X11 application, as the README advertises.
I know this because I've used PDCurses as a portable curses implementation, though I've never tested its X11 capabilities.
Not exactly a library, but you could consider writing a web app that degrades well to Lynx.
The GoboLinux guys have created their own toolkit for Python called AbsTK. They use it for their installer, which actually works really well. I have never used the toolkit myself, but the apps built with it seem solid.
There's Cursed GTK, but it seems a bit dated. I found some references to a port of Qt to ncurses called Qt Console, but it seems to have disappeared.
By using a library that targets both the text-mode and GUI environments, you run a big risk of getting stuck with the worst of both worlds.
You will be better off structuring your code using the MVC pattern and providing separate views and controllers for each platform you target. Pushing all the logic down into the model classes has several other benefits (see the sketch after this list):
- The code will be easier to test, because you are forced to keep the user interface out of the actual domain logic.
- Your program can have user interfaces that have very little in common, e.g. a web UI, or a UI driven by speech.
- You can run the program easily with no UI at all (i.e. script it) by accessing the model classes directly, in the same way that the controller classes do.
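A minimal sketch of that split in Go (the names here are illustrative, not from any particular toolkit): the model has no UI dependencies, and an ncurses or GTK/Qt front end would simply be another implementation of the same View interface.

package main

import "fmt"

// Model: pure domain logic, no UI dependencies.
type Counter struct{ n int }

func (c *Counter) Increment() { c.n++ }
func (c *Counter) Value() int { return c.n }

// View: anything that can present the model's state.
type View interface {
	Render(value int)
}

// A trivial "text mode" view; a GUI view would satisfy the same interface.
type ConsoleView struct{}

func (ConsoleView) Render(value int) { fmt.Printf("count = %d\n", value) }

// Controller: wires user actions to the model and refreshes the view.
type Controller struct {
	model *Counter
	view  View
}

func (ctl *Controller) OnClick() {
	ctl.model.Increment()
	ctl.view.Render(ctl.model.Value())
}

func main() {
	ctl := &Controller{model: &Counter{}, view: ConsoleView{}}
	ctl.OnClick() // scripting the app: drive the controller (or the model) directly, no UI needed
}

Swapping ConsoleView for, say, a GTK- or ncurses-backed view changes nothing in Counter or Controller, which is the point of pushing the logic into the model.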
I think what's used for configuring the Linux kernel at compile time is dialog/cdialog/Xdialog. But it's been a while since I've compiled a kernel, so my memory may be off. The most promising link I can find is this one for Xdialog.
Maybe Tcl/Tk would provide what you want: http://www.tcl.tk/
Here's the page on interfacing with curses; there is a claim there of integration with ncurses:
http://www2.tcl.tk/2372
