NTLMSetUserInfo on Windows CE 6 - windows-ce

I want to create my own user configuration utility targeted at WinCE 6 in C++. All the NTLM functions (such as this one) require ntlmssp.lib, which I cannot find. I have searched my Platform Builder directories but can only find the DLL, not the LIB.
Can anyone shed some light on how I actually access these functions, or find the lib file?

The lib is probably generated when you build your platform (the DEF file is in public\common\oak\lib), so that it only includes the entry points you've selected in your design. For example, I do see ntlmssp.lib in my release directory for a couple of designs.
The proper way to do this is to roll and install an SDK based on your OS Design; then the lib will be installed in the right place on your dev machine. (Alternatively, make your utility a subproject of your OS Design, which will then look in the release folder for the LIB.)
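If you go the subproject route, here is a minimal sketch of what the subproject's sources file might contain. This is an assumption, not a verified recipe: the $(_SYSGENOAKROOT) location in particular is a guess, so check where your build actually drops ntlmssp.lib before relying on it.
TARGETNAME=userconfig
TARGETTYPE=PROGRAM
SOURCES=userconfig.cpp
# Assumption: the generated ntlmssp.lib lands in the sysgen'd OAK lib tree
TARGETLIBS=$(_SYSGENOAKROOT)\lib\$(_CPUINDPATH)\ntlmssp.lib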

Related

Building the client of libmysqlclient on Windows with MSVC

ALL,
Has anyone succeeded in building a client of libmysqlclient on Windows?
Following these instructions I can build the library itself.
But then I try to follow the MySQL documentation, which reads:
To specify header and library file locations, use the facilities provided by your development environment.
With the old MySQL Connector/C I was able to download just the code for the connector and build it, and then there was only one mysql.h.
With the new library (8.0) I have to get the whole package, and therefore it will have multiple copies of mysql.h (yes, I did check by doing a search for the file from Windows Explorer and Terminal/Bash).
In terms of the library it is easy, as there will hopefully be just one and I can use the -L option for the linker.
But how do I get the proper include folder?
TIA!!
BTW, the tag here needs to be changed - it is not called connector-c anymore.
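For what it's worth, the include directory you want is the one that directly contains mysql.h under the install prefix, not the copies scattered through the source tree. Here is a hedged sketch from a developer command prompt; the paths are placeholders for a typical 8.0 layout, and it assumes you link the dynamic client library through its libmysql.lib import library:
rem Placeholder paths; point them at your MySQL 8.0 install prefix
cl /I"C:\mysql-8.0\include" myclient.c ^
   /link /LIBPATH:"C:\mysql-8.0\lib" libmysql.lib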

Cross platform installation conventions & best practices

I manage an open source project that we currently distribute as a zipped bundle of files. We provide a bundle for Windows and Mac -- we currently expect Linux users to compile it themselves.
This program comes with a bunch of auxiliary files that the user will need to access. These include example files and example/default scripts (like plugins) that the user will need to be able to easily find (preferably not searching through a maze of /usr/local/foo/examples/scripts).
The user will also have their own files (that they may want to store in random locations), but they will also have their own collection of scripts (that they probably want centralized so they are always available).
I would like to support installation in multi-user environments where the user does not have permissions to mess with the program installation. The program will include an API (shared library and header) and a Python wrapper for that too. It would be nice to make those available automatically.
We build the project with CMake and currently use CPack to bundle the zip files. CPack has much more capability than we are currently using. This is not a mechanical question of how to build the package/installation files, but a convention question of where to put all the stuff.
We would like to have an Application on MacOS, an installer for Windows, and packages for Linux. Mac Apps package icons, fonts, images, etc. nicely, but they don't seem to support user-visible files very well.
I would love for there to be a cross-platform standard way of handling this situation, but I have trouble finding decent examples on individual platforms.
Is there anything better for us to do than just a zip of files?
Providing an archive of those extra files is probably one of the best solutions. You may encourage users to download it on the first start of the program, and let them decide where they want the files.
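Since the project already builds with CMake, one hedged sketch of the conventional Linux layout is to route everything through GNUInstallDirs and let CPack's generators map the result; the "foo" names below are placeholders:
# CMakeLists.txt fragment; "foo" names are placeholders
include(GNUInstallDirs)
install(TARGETS foo
        RUNTIME DESTINATION ${CMAKE_INSTALL_BINDIR}
        LIBRARY DESTINATION ${CMAKE_INSTALL_LIBDIR})
install(FILES include/foo.h DESTINATION ${CMAKE_INSTALL_INCLUDEDIR})
# User-visible examples and default scripts belong under the data dir,
# which becomes /usr/share/foo/... on most Linux systems
install(DIRECTORY examples/ DESTINATION ${CMAKE_INSTALL_DATADIR}/foo/examples)
This keeps the API (library and header) in the places a packaging tool expects, while the examples land in one predictable, documented spot instead of a maze of paths.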

Script Create Package for Windows Store apps

I am maintaining a set of eleven Windows Store apps. I would like to automate the "Create Package" task, which I am currently doing through the wizard in Visual Studio, in order to produce test packages (signed with my test certificate).
Is there a way to script this task? I was thinking probably using MSBuild or PowerShell, my goal is to have a single script to run that would generate all my app packages and copy them all to a given target directory.
I found some documentation about using the wizard on MSDN, but nothing about scripting the task.
Any ideas?! Thanks.
MSBuild will create app packages for you, in the AppPackages folder. You can also do it manually using MakeAppx, but I've found it to be a bit more cumbersome.
Some things to note: There is a build target called Publish you should use (/t:Publish) when making the actual packages. You should look into the different command-line switches, such as DebugSymbols.
You'll likely want to use the 32-bit MSBuild, as I've had issues with the 64-bit version and things like the Multilingual App Toolkit. Also, regarding the Multilingual App Toolkit, make sure you do a full rebuild before building your app package: if an entry is missing in one language but present in another, the entry from the secondary language will be used, so you can end up with multiple languages all popping up on the same page.
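As a hedged sketch, the whole task can boil down to one command per app, which a script can loop over for all eleven projects; the solution name and output directory are placeholders, and DebugSymbols is the switch mentioned above:
rem Placeholder names; run from a VS developer prompt (32-bit MSBuild)
msbuild MyApp.sln /t:Publish /p:Configuration=Release ^
    /p:AppxPackageDir=C:\TestPackages\ /p:DebugSymbols=false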
Hope this helps and happy coding!

What’s the best way to distribute a binary application for Linux?

I just finished porting an application from Windows into Linux.
I have to create an installer of the application.
The application is not open source, so I have to distribute its binaries (the executable, a couple of .so files, help files, and images).
I found several methods to do it:
- RPM and DEB packages;
- installer in .sh files;
- Autopackage.
I don't like the first method (RPM and DEB packages) because I don't want to maintain different packages for different Linux distros.
What is the best way to distribute a binary application for Linux?
Having been through this a couple of times with commercial products, I think the very best answer is to use the native installer for each supported platform. Anything else produces an unpleasant experience for the end-user, and in practice you have to test on every platform you want to support anyway, so it's not really a significant burden to maintain packages for each. The idea that you can create a binary that can "just work" on every platform out there, including some you've never even heard of, just really doesn't work all that well.
My recommendation is that you pick a platform or two to support initially (Red Hat and Ubuntu would be my suggestions) and then let user demand drive the creation of additional installation packages. Perhaps make it known that you're willing to support additional platforms, for a modest fee that covers your time and effort in packaging and testing on that platform. If a platform proves to be very different, you may need to charge more for ongoing support.
Oh, and I cannot overemphasize the value of virtual machines for scenarios like this. You need to build VMs for each platform you support, and perhaps multiple VMs per platform to make it easy to test different configurations.
There were a lot of good answers (mine included :)) here, although that question is more about binary compatibility (which you do need to worry about).
For the installer I would recommend Autopackage (we successfully released several versions of our software with it); they did the "installer.sh" part already, and more (desktop integration, for example).
You have to be careful and test your upgrade scenarios and such, depending on how complex your package structure is, but it is pretty neat overall. I fixed a few bugs with dependency handling in 1.2.6, so it should be fine.
UPDATE: The original question was deleted, so reposting full answer here, ignore all references to autopackage, that was merged into Listaller, not sure if relevant parts survived.
For standard libraries (like crypto++, pthreads, etc) that are likely to be available in a distribution -- link dynamically and tell users to get them from their distro repository. Or link statically if it is feasible.
For weird libraries whose version you must control (if you want to deploy a Qt4 app in the territory of enemy gnomes, for example), compile them yourself and install them into a private spot only your app knows about.
Never install private libs into standard places unless you can be sure to not interfere with package systems of all distros you support. (and that they can't interfere with you either).
Use rpath instead of LD_LIBRARY_PATH, and set it properly for all your binaries and all the shared libraries that reference each other. You can set the rpath on your binary to "$ORIGIN:$ORIGIN/../lib:/opt/my/private/libs" and have the linker search those places before any standard paths. (You have to set some linker flag for $ORIGIN to work, I think.) Make sure to set the rpath on your libs too: for example, QtGui needs QtCore, and if the user happens to install a standard package with a different version, you absolutely don't want it picked up (exe -> ../lib/QtGui.so (4.4.3) -> /usr/local/lib/QtCore.so (4.4.2) -- a sure way to die early).
If you compile with any rpath, you can change it later with chrpath, thus making it possible to tweak the install location as part of post-processing or an install script.
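A minimal shell sketch of both steps; the binary and library names are placeholders, and -z origin is the linker flag alluded to above:
# Link with rpaths resolved relative to the executable's own location
g++ -o myapp main.o -Llib -lprivate \
    -Wl,-z,origin -Wl,-rpath,'$ORIGIN:$ORIGIN/../lib:/opt/my/private/libs'
# Later, inspect or rewrite the rpath without relinking
chrpath -l myapp
chrpath -r '$ORIGIN/../lib' myapp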
Maintain binary compatibility. glibc is pretty much fixed for your users, so you should link against some sufficiently old version; 2.3 is a safe bet. You can use APBuild -- a gcc wrapper that enforces the glibc version and does a few other binary compatibility tricks, so you don't have to compile all your apps on a really old distro.
If you link to anything statically, it generally will have to be rebuilt with APBuild too, otherwise it is bound to drag in newer glibc symbols. All the .so files you install privately will naturally have to be built with it as well. Sometimes you have to patch third-party libs to use older symbols. (I had to patch Ruby to return real permissions instead of effective ones, since there are no such functions in older glibc. Still not sure if I broke anything :)).
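A hedged sketch of how APBuild typically slots into an autotools-style build; apgcc and apg++ are the wrapper names the apbuild package provides, as far as I remember:
# Build through the apbuild wrappers instead of plain gcc/g++,
# so the result only references old glibc symbol versions
export CC=apgcc CXX=apg++
./configure --prefix=/opt/myapp
make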
For integration with desktop environments (file associations, MIME types, icons, start menu entries, etc.) use xdg-utils. Beware though: like everything on Linux, they don't really like spaces in filenames :). Make sure to test those things on each target distro -- xdg implementations are riddled with bugs and quirks.
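A short sketch of the usual calls; all file names below are placeholders:
# Desktop entry, custom MIME type, and icon, registered per the XDG specs
xdg-desktop-menu install --novendor myapp.desktop
xdg-mime install myapp-mimetypes.xml
xdg-icon-resource install --novendor --size 64 myapp.png myapp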
For the actual install you can either provide a variety of native packages (rpm, deb and a few more), roll your own installer, or find an installer that works on all distros bypassing native package managers. We successfully used Autopackage (same people who made APBuild) for that.
It's possible to install an RPM on Debian and a DEB on RHEL (tools like alien convert between the formats).
If you are going to statically link this program, or dynamically link only with libraries that you will be distributing in the package, then it doesn't much matter how you distribute it. The simplest way is tar.gz and that would work.
OTOH if it is dynamically linked with system libraries, and particularly if it has dependencies on dynamic libraries that will be shared with the client's other applications, then you kind of need to do either RPM, DEB, or both.
You may want to try out InstallBuilder. It is cross-platform (it runs on Windows, Linux, Mac OS X, Solaris and nearly any other Unix platform out there). It is used by Intel, Motorola, GitHub, MySQL, Nokia/Trolltech and many other companies, so you will be in good company :) In addition to binary installers, it can also create cross-distro RPMs and DEB packages.
InstallBuilder is commercial, but we offer free licenses for open source programs and very significant discounts for mISVs or solo-developers, just drop us a line.
Create a .tar.bz2 archive with the binary, then publish a feed for it, like this:
<?xml version="1.0" ?>
<interface uri="http://mysite/myprog.xml"
           xmlns="http://zero-install.sourceforge.net/2004/injector/interface">
  <name>MyProgram</name>
  <summary>what it does</summary>
  <description>A longer description goes here.</description>
  <implementation main='bin/myprog'
                  id="sha1new=THEDIGEST"
                  version='1.0'>
    <archive href='http://mysite/myprogram-1.0.tar.bz2'
             size='10000'/>
  </implementation>
</interface>
Sign it with your GPG key. You can use the tools on 0install.net to calculate the digest and add the GPG signature for you in the correct format.
Then, put it on your web-site at the address in the uri attribute. Any user on most Linux distributions (e.g. Ubuntu, Fedora, Debian, Gentoo, ArchLinux, etc) can then install and run your program with:
0launch http://mysite/myprog.xml
Their system will also check for updates periodically. There are various GUIs for the different desktop environments, but the command-line will work everywhere.
Also look at some of the existing feeds for inspiration.
I'll mention an additional possibility, although I am not aware of its current status: the Loki installer. Loki was a company that ported video games to Linux. It went under in 2002, but the installer is still available.
InstallShield is also available for Linux. No idea on its status though.
Although many people are proposing you go with a tar.gz, please don't. I assume you want to provide a pleasant installation experience for your users. A tar.gz is one of the lowest-level, lowest-quality, lowest-usability choices you can make. It works everywhere because it does basically nothing, as you know.
The guys at freedesktop.org and the LSB are quite clear on where to put stuff. What you need is a friendly program to do that. Autopackage imho has the numbers (I love it), but despite its age, I haven't seen a single program out there distributed as an autopackage.
Evaluate it carefully, but don't skip the chance of being part of the momentum in favour of it, just because it's not popular. If it works for you, and it works for your users, everything else does not matter.
There is no best way (universally speaking).
tar.gz the binaries, that should work.
Today, I would also look at Snapcraft and Flatpak which are embraced by some popular distributions. I explored other options and it is what ended up working best for me. Flatpak in particular also helped me learn about standard Linux desktop conventions to follow.
You may also want to look at AppImage (https://appimage.org/). The concept is that it produces a single binary file that the user downloads, marks executable, and runs directly; no installation necessary, no dependencies to install (since the app image typically includes all the dependencies except basic stuff like glibc). This makes for a really great user experience! A build sketch follows the list of downsides below.
Some downsides:
The image may be large, since it probably includes all files/libraries/... the app depends on.
As the image creator, you're responsible for security updates to any of the libraries you add into your image.
An AppImage is great for a user-run application that's pretty isolated from anything else on the system (i.e. daemons, system configuration, etc.), but if your app relies on things like udev integration, desktop file installation, D-Bus registration, etc., this isn't easy, since the app's files aren't available when the app isn't running (making udev rules hard), and there is by definition no installer that gets run (making desktop file installation hard).
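If the trade-offs are acceptable, here is a hedged sketch of producing one with appimagetool. The AppDir layout is an assumption: it needs your binary, its bundled libraries, an AppRun entry point, a .desktop file, and an icon.
# Turn a prepared AppDir into a single self-contained executable image
appimagetool MyApp.AppDir MyApp-x86_64.AppImage
chmod +x MyApp-x86_64.AppImage   # what end users do after downloading
./MyApp-x86_64.AppImage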
I've also looked into this at work and I'd have to agree there really isn't a "best way". If your application is being distributed as source then I'd go with the make/configure methods packaged up in a tar.gz. That seems fairly universal in the Linux world.
A good way to get an idea of what to do is to look at larger organizations and see how they distribute their binaries.

How to make binary distribution of Qt application for Linux

I am developing cross-platform Qt application.
It is freeware though not open-source. Therefore I want to distribute it as a compiled binary.
On Windows there is no problem: I pack my compiled exe along with MinGW's and Qt's DLLs, and everything works great.
But on Linux there is a problem because the user may have shared libraries in his/her system very different from mine.
Qt deployment guide suggests two methods: static linking and using shared libraries.
The first produces a huge executable and also requires static versions of many libraries that Qt depends on, i.e. I'd have to rebuild all of them from scratch. The second method is based on reconfiguring the dynamic linker right before application startup and seems a bit tricky to me.
Can anyone share their experience distributing Qt applications under Linux? What method should I use? What problems might I run into? Are there any other ways to get this job done?
Shared libraries is the way to go, but you can avoid LD_LIBRARY_PATH (which involves running the application via a launcher shell script, etc.) by building your binary with the -rpath linker flag, pointing to where you store your libraries.
For example, I store my libraries either next to my binary or in a directory called "mylib" next to my binary. To use this on my QMake file, I add this line in the .pro file:
QMAKE_LFLAGS += -Wl,-rpath,\\$\$ORIGIN/lib/:\\$\$ORIGIN/../mylib/
And I can run my binaries with my local libraries overriding any system library, and with no need for a launcher script.
You can also distribute Qt's shared libraries on Linux and get your software to load those instead of the system default ones. Shared libraries can be overridden using the LD_LIBRARY_PATH environment variable. This is probably the simplest solution for you. You can always set this in a wrapper script for your executable.
Alternatively, just specify the minimum library version that your users need to have installed on the system.
When we distribute Qt apps on Linux (or really any apps that use shared libraries) we ship a directory tree which contains the actual executable and associated wrapper script at the top with sub-directories containing the shared libraries and any other necessary resources that you don't want to link in.
The advantage of doing this is that you can have the wrapper script set up everything you need for running the application, without having to worry about the user setting environment variables, installing to a specific location, etc. If done correctly, this also means you don't have to worry about where the application is called from, because it can always find its resources.
We actually take this tree structure even further by placing all the executables and shared libraries in platform/architecture sub-directories, so that the wrapper script can determine the local architecture, call the appropriate executable for that platform, and set the environment variables to find the appropriate shared libraries. We found this setup particularly helpful when distributing for multiple different Linux versions that share a common file system.
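A minimal sketch of such a wrapper, assuming libraries shipped in a lib/ directory next to bin/ (all names are placeholders):
#!/bin/sh
# Resolve the directory tree this script was installed into
HERE="$(dirname "$(readlink -f "$0")")"
# Put the shipped libraries first, preserving anything the user already set
export LD_LIBRARY_PATH="$HERE/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
exec "$HERE/bin/myapp" "$@"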
All this being said, we do still prefer to build statically when possible, and Qt apps are no exception. You can definitely build Qt statically, and you shouldn't have to build a lot of additional dependencies, as krbyrd noted in his response.
sybreon's answer is exactly what I have done. You can either always add your libraries to LD_LIBRARY_PATH or you can do something a bit more fancy:
Set up your shipped Qt libraries one per directory. Write a shell script that runs ldd on the executable and greps for 'not found'; for each of those libraries, add the appropriate directory to a list (let's call it $LDD). After you have them all, run the binary with LD_LIBRARY_PATH set to its previous value plus $LDD.
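A hedged sketch of that script, assuming one shipped library per directory under libs/ (all names are placeholders):
#!/bin/sh
HERE="$(dirname "$(readlink -f "$0")")"
LDD=""
# For every library the system cannot resolve, add our shipped copy's dir
for missing in $(ldd "$HERE/myapp" | grep 'not found' | awk '{print $1}'); do
    for dir in "$HERE"/libs/*/; do
        [ -e "$dir$missing" ] && LDD="${LDD:+$LDD:}${dir%/}"
    done
done
# Run with the previous LD_LIBRARY_PATH plus the collected directories
export LD_LIBRARY_PATH="${LD_LIBRARY_PATH:+$LD_LIBRARY_PATH:}$LDD"
exec "$HERE/myapp" "$@"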
Finally, a comment about "I'll have to rebuild all of them from scratch": no, you won't have to. If you have the dev packages for those libraries, you should have .a files that you can statically link against.
Not an answer as such (sybreon covered that), but please note that you are not allowed to distribute your binary if it is statically linked against Qt, unless you have bought a commercial license; otherwise your entire binary falls under the GPL (or you're in violation of Qt's license).
If you have a commercial license, never mind.
If you don't have a commercial license, you have two options:
Link dynamically against Qt v4.5.0 or newer (the LGPL versions - you may not use the previous versions except in open source apps), or
Open your source code.
Probably the easiest way to create a Qt application package on Linux is linuxdeployqt. It collects all required files and lets you build an AppImage which runs on most Linux distributions.
Make sure you build the application on the oldest still-supported Ubuntu LTS release so your AppImage can be listed on AppImageHub.
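A hedged sketch of the usual invocation; the path is a placeholder, and the matching qmake must be on PATH (or passed with -qmake):
# Point it at the built binary (or its .desktop file) and ask for an AppImage
linuxdeployqt ./build/myapp -appimage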
You can look into the QtCreator folder and use it as an example. It has qt.conf and qtcreator.sh files in QtCreator/bin.
lib/qtcreator is the folder with all the needed Qt *.so libraries. The relative path is set inside qtcreator.sh, which should be renamed to your-app-name.sh.
imports, plugins, and qml are inside the bin directory. The path to them is set in the qt.conf file. This is needed for deploying QML applications.
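For reference, a hedged sketch of what such a qt.conf (placed next to the executable in bin/) might contain; the Prefix is resolved relative to the directory holding qt.conf, and the exact paths below are assumptions mirroring the layout described above:
[Paths]
; layout assumed from the QtCreator example above
Prefix = ..
Libraries = lib/qtcreator
Plugins = bin/plugins
Imports = bin/imports
Qml2Imports = bin/qml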
This article has information on the topic. I will try it myself:
http://labs.trolltech.com/blogs/2009/06/02/deploying-a-browser-on-gnulinux/
In a few words:
- Configure Qt with -platform linux-lsb-g++
- Linking should be done with -lsb-use-default-linker
- Package everything and deploy (this will need a few tweaks, but I haven't tried it yet, sorry)
