Does anyone have a guide on setting up AHBot in an AzerothCore 3.3.5 server? I can find no mention of ahbot in world.conf, no DB tables like auctionhousebot in the database, and no other references. It is included in TrinityCore, so all the Google references I find point me back there.
Our server is just two of us and it would be nice to have some items on the auction house. At the moment, if you need a silver bar you have to go out in the world and find it. That was exciting the first couple of times... but gets old.
I followed the installation guide on GitHub: https://www.azerothcore.org/wiki/Installation and am using the latest release under Ubuntu Linux.
Thank you for any help. You are my last hope...
AHBot is not currently an 'official' supported module of AzerothCore (which is why you aren't finding any settings for it in the worldserver config or anywhere else).
There is a module in development that isn't currently part of the AC list, but I've gotten it working on my personal server: https://github.com/AyaseCore/mod-ahbot Note that this is a bit different than a normal module as there is also a git patch that needs to be applied manually.
There's a module in the AzerothCore repository for which you have to apply the patch manually, but it won't compile on Windows x64, and on Unix x64 it fails with an error about 9 arguments. It does seem to compile for some people, though.
Although I am relatively new to using Linux, I would like to know more about how deploying packages works. I have tried searching for this but have had no luck. I have seen countless packages and install scripts that use the same-looking 'graphical' command-line installer that lets the user select options for the package. Take the Debian net install, for example. [1]
As I have a lot to learn, I would only like a summary of how this is possible, and any resources that anyone has on how developers do this.
Thanks in advance.
[1] http://doudoulinux.org/blog/public/screenshots/install/install-selected-tasks.png
OK, now I believe I understand what you're after. I'll still insist that interactive configuration of packages is distro-specific and should not be the main form of package configuration.
It is preferred to ship a default working configuration and then document how the user can change the configuration files (normally in /etc) to reconfigure the package. It is often useful to ship several default configurations (example package: wpa_supplicant, which ships with several examples of network configuration, all disabled by default) and let the user choose by uncommenting lines.
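For illustration, a hypothetical configuration file in that style (the network names and keys are made up):

# /etc/wpa_supplicant/wpa_supplicant.conf
# Several example networks ship disabled; the user enables one
# by uncommenting the relevant block.
#network={
#    ssid="home-network"
#    psk="your-passphrase-here"
#}
#network={
#    ssid="office-network"
#    key_mgmt=WPA-EAP
#    identity="user@example.org"
#}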
Debian
The Debian-specific way to get packages configured is debconf; its configuration is a simple shell script (or a Perl script, or whatever else that can talk over STDIN/STDOUT) and a template file. The template file is what provides the options in the aptitude/apt-get interface, as in the screenshot in your question.
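As a minimal sketch (the package name and question are invented), the template file could look like this:

Template: mypackage/enable-extras
Type: boolean
Default: false
Description: Enable the optional extras?
 This longer paragraph is shown when the user asks for more details
 about the question.

and the accompanying config script just asks the question over the debconf protocol:

#!/bin/sh
set -e
# load the debconf shell bindings
. /usr/share/debconf/confmodule
# queue the question at 'medium' priority, then flush the queue
db_input medium mypackage/enable-extras || true
db_go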
It is worth reading the Debian guidelines on package configuration to get an idea of which kind of configuration is too much.
And they also have a thorough tutorial. Since you said you do not have packaging experience, I also recommend reading the introduction to packaging, which will tell you where the files should be placed. Also, debhelper is a great tool for placing files in the correct place in the package directory.
On other distros
Each distro has its own way of adding configuration to packages. Debian is notable for its debconf as it is one of the most feature rich tools for configuring packages (together with dpkg-reconfigure).
Developers of the software that is shipped in a distro package are often different people from the ones who do the packaging. Configuration options left by developers are often much more thorough than those in the package (e.g. inclusion or exclusion of certain libraries).
The fact that you're most familiar with the Debian distro (an assumption based on your question tags) might give you a misleading idea of package configuration. Only Debian-based distros have so many configurable packages; other distros often use package dependencies to install differently configured packages. For example:
Red Hat-based distros (.rpm packages) have no tool such as debconf (as far as I am aware); they use distinct (conflicting) package names to install differently configured packages (see the sketch after this list).
Arch Linux based distros enforce the user configuration. In essence, arch forces the user to configure their packages by going into their configuration files and changing the configuration (that is a very good thing if you want to learn).
Funnily enough, both RPM- and Arch-based distros often ship po-debconf, an adapted debconf for that distro. Yet I cannot tell you much about it, since I never tried it.
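To illustrate the conflicting-package-names approach used on RPM distros, here is a fragment of a hypothetical .spec file (all package names are invented):

# myserver-postgres.spec -- one of two mutually exclusive builds
Name:      myserver-postgres
Summary:   My server, built against PostgreSQL
# the alternative build cannot be installed at the same time
Conflicts: myserver-mysql
Provides:  myserver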
I'm trying to profile a web application with Xdebug and Webgrind, since I'm doing it on a remote Linux server. For some weird reason it doesn't show call names or file sources. I suspected there might be some kind of problem with reading the script files (not sure if it's doing that), but giving the target folders 777 permissions didn't make any difference. Does anybody have a clue where I'm failing?
Thanx!
The Webgrind version found on Google Code does not work with Xdebug 2.3.
Here is a fork that works: webgrind
OK, 24 hours later:
It seems that Webgrind doesn't support the cachegrind function-name compression feature that was introduced in Xdebug 2.3 (released 2015). The latest Webgrind was released around 2008-2009, so it makes sense that it doesn't work. The same applies to the WinCacheGrind client. Currently it seems the only working cachegrind analyzers are QCacheGrind on Windows and KCacheGrind on Linux.
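For reference, the compression replaces repeated file and function names with numeric ids in the cachegrind output. A made-up fragment in the callgrind file format:

# before compression, every record repeats the full names:
fl=/var/www/index.php
fn=render_page

# with compression (Xdebug 2.3+), the first occurrence defines an id
# and later records refer back to it:
fl=(1) /var/www/index.php
fn=(2) render_page
...
fl=(1)
fn=(2)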
When I manage to find some free time, I'll fork the project and make it compatible with the compression.
I'm going to upgrade my company's Subversion server from version 1.6 to 1.7. The server runs on Linux (Ubuntu, AFAIK).
I've read all those:
Subversion 1.7 release notes
I've also read those posts:
subversion-client-version-confusion
how-to-upgrade-svn-server-from-1-6-to-1-7
Here and now, I know how to perform this. It's not a big deal. What concerns me the most is the current hooks infrastructure. There are several scripts in bash and perl.
As of now, I've found no information referring to hook infrastructure changes, but maybe there are some known issues I missed? Is there anything against the upgrade I should know about?
PS: The 'try and see what comes' method is absolutely unavailable. I'd like the upgrade to be as smooth as possible. Repository users shouldn't even notice any changes. I can't allow myself any failure in this matter.
The Subversion compatibility guarantees promise that your hook scripts are called exactly the same way in 1.7 as in 1.6. In 1.7 (and future versions) more arguments can be passed to scripts, but the old arguments still match the old behavior. So if you created your scripts like the templates, to ignore 'extra' arguments, you shouldn't see a difference.
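As a minimal sketch of what "like the templates" means (the banned user name is invented), here is a start-commit hook that reads only the arguments it needs:

#!/bin/sh
# start-commit hook: use only the documented arguments and silently
# ignore any extras a newer Subversion may append (e.g. capabilities).
REPOS="$1"
USER="$2"
# CAPABILITIES="$3"   # added in 1.5; safe to ignore

if [ "$USER" = "some-banned-user" ]; then
  echo "User $USER may not commit." >&2
  exit 1
fi
exit 0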
Subversion 1.7 didn't change the repository format since 1.6, so you can even (accidentally) use the svnlook from 1.6 to access the repository after upgrading.
"Try and see what comes method is absolutely unavailable..."
Yes, the 'try and see what comes' method is available. You build a copy of your Subversion 1.6 environment, make the Subversion 1.7 changes, and test until everything is correct.
I don't see how you can accomplish your goal of a quiet upgrade unless you copy and test.
I guess it depends on what you do with your hooks...
If your hooks are using svnlook, you should have no issues. If you're using an API (like the Python API), you probably are also okay as long as you're doing svnlook type of stuff.
Where you might start running into problems is if you poked and prodded where you weren't supposed to poke and prod. For example, instead of using svnlook, you use svn. There are a couple of places where the parameters have changed. Also, if you did an svn checkout (an absolute no-no in a hook) and then looked in the .svn directories, you'll get a surprise. Follow the rules, color inside the lines, and your hooks won't have any issues.
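A rough illustration of the distinction, reusing the $REPOS and $TXN arguments a pre-commit hook receives:

# safe: inspect the in-flight transaction server-side with svnlook
svnlook log -t "$TXN" "$REPOS"
svnlook changed -t "$TXN" "$REPOS"

# risky: shelling out to the svn client couples the hook to
# client-side behaviour and the .svn working-copy format,
# which changed completely in 1.7
svn checkout file://"$REPOS" /tmp/wc   # don't do this in a hook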
I don't know of any issues from Subversion 1.1 through 1.7 that should affect well-behaved hooks, and I suspect that you will not have any issues as long as we are still in Subversion 1.x. When Subversion 2.x comes out, all bets are off.
Yes, there have been some changes in how hooks work. The start-commit hook has an extra field that wasn't in versions 1.4 and earlier (the capabilities field), but nothing that would affect current hooks. And, in either Subversion 1.5 or 1.6, users gained the ability to set revision properties when doing a commit. These don't affect current hooks, but they might be features that you want to incorporate into your current hooks.
The upgrade has been performed and succeeded. Subversion server was updated without issues. Hooks were designed without any hacks or slashes, respecting the rules and common sense. It was risky but promising and came out profitable (checkouts are light-speed now).
Just for the sake of completeness: there was a subsequent, centrally managed client upgrade. And there were issues, though they were non-critical and predictable. After the client transition from svn 1.6 to 1.7.7, the working-copy format changed. Every existing working copy had to be manually upgraded (or wiped out and checked out clean again).
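For reference, the per-working-copy fix amounts to one command per working copy (the paths are illustrative):

# upgrade an existing 1.6 working copy in place, using a 1.7 client
svn upgrade /path/to/working-copy

# ...or wipe it and check out fresh
rm -rf /path/to/working-copy
svn checkout http://svn.example.com/repo/trunk /path/to/working-copy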
Server upgrade is safe though.
I just finished porting an application from Windows into Linux.
I have to create an installer of the application.
The application is not open source, so I have to distribute the application's binaries (the executable file, a couple of .so files, help files, and images).
I found several methods to do it:
- RPM and DEB packages;
- installer in .sh files;
- Autopackage.
I don't like the first method (RPM and DEB packages) because I don't want to maintain different packages for different Linux distros.
What is the best way to distribute a binary application for Linux?
Having been through this a couple of times with commercial products, I think the very best answer is to use the native installer for each supported platform. Anything else produces an unpleasant experience for the end-user, and in practice you have to test on every platform you want to support anyway, so it's not really a significant burden to maintain packages for each. The idea that you can create a binary that can "just work" on every platform out there, including some you've never even heard of, just really doesn't work all that well.
My recommendation is that you pick a platform or two to support initially (Red Hat and Ubuntu would be my suggestions) and then let user demand drive the creation of additional installation packages. Perhaps make it known that you're willing to support additional platforms, for a modest fee that covers your time and effort in packaging and testing on that platform. If a platform proves to be very different, you may need to charge more for ongoing support.
Oh, and I cannot overemphasize the value of virtual machines for scenarios like this. You need to build VMs for each platform you support, and perhaps multiple VMs per platform to make it easy to test different configurations.
There were a lot of good answers (mine included :)) here, although that thread is more about binary compatibility (which you do need to worry about).
For the installer I would recommend autopackage (we successfully released several versions of our software with it); they did the "installer.sh" part already, and more (desktop integration, for example).
You have to be careful and test your upgrade scenarios and such, depending on how complex your package structure is, but it is pretty neat overall. I fixed a few bugs with dependency handling in 1.2.6, so it should be fine.
UPDATE: The original question was deleted, so I'm reposting the full answer here. Ignore all references to autopackage; it was merged into Listaller, and I'm not sure if the relevant parts survived.
For standard libraries (like crypto++, pthreads, etc.) that are likely to be available in a distribution, link dynamically and tell users to get them from their distro repository. Or link statically if that is feasible.
For weird libraries whose version you must control (if you want to deploy a Qt4 app in the territory of enemy gnomes, for example), compile them yourself and install them into a private spot only your app knows about.
Never install private libs into standard places unless you can be sure not to interfere with the package systems of all distros you support (and that they can't interfere with you either).
Use rpath instead of LD_LIBRARY_PATH, and set it properly for all your binaries and all the .so files that reference each other. You can set the rpath on your binary to "$ORIGIN:$ORIGIN/../lib:/opt/my/private/libs" and have the linker search those places before any standard paths (note that ELF rpath entries are colon-separated, and you have to set some linker flag for $ORIGIN to work, I think). Make sure to set the rpath on your libs too: for example, QtGui needs QtCore, and if the user happens to have a standard package with a different version installed, you absolutely don't want it picked up (exe -> ../lib/QtGui.so (4.4.3) -> /usr/local/lib/QtCore.so (4.4.2) is a sure way to die early).
If you compile with any rpath, you can change it later with chrpath, thus making it possible to tweak the install location as part of post-processing or an install script.
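A minimal sketch of both steps (the binary and library paths are illustrative):

# link with a relative rpath; the single quotes keep the shell
# from expanding $ORIGIN
gcc -o myapp main.o \
    -Wl,-rpath,'$ORIGIN:$ORIGIN/../lib:/opt/my/private/libs' \
    -Wl,-z,origin

# inspect the rpath, then rewrite it during post-processing
# (note: chrpath can only replace the rpath with one of equal
# or shorter length)
chrpath -l myapp
chrpath -r '$ORIGIN/../lib' myapp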
Maintain binary compatibility. glibc is pretty much static for your users, so you should link against some sufficiently old version; 2.3 is a safe bet. You can use APBuild -- a gcc wrapper that enforces the glibc version and does a few other binary compatibility tricks -- so you don't have to compile all your apps on a really old distro.
If you link to anything statically, it generally will have to be rebuilt with APBuild too, otherwise it is bound to drag in newer glibc symbols. All the .so files you install privately will naturally have to be built with it too. Sometimes you have to patch third-party libs to use older symbols. (I had to patch Ruby to return real permissions instead of effective ones, since there is no such function in older glibc. Still not sure if I broke anything :)).
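One way to verify the result, as a sketch (the binary name is illustrative): list the versioned glibc symbols a binary actually references and check that nothing newer than your target version shows up.

# list the glibc symbol versions referenced by a binary
objdump -T myapp | grep -o 'GLIBC_[0-9.]*' | sort -uV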
For integration with desktop environments (file associations, MIME types, icons, start menu entries, etc.) use xdg-utils. Beware though: like everything on Linux, they don't really like spaces in filenames :). Make sure to test those things on each target distro -- xdg implementations are riddled with bugs and quirks.
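A sketch of the usual calls from an install script (all the myapp names and files are invented):

# install a menu entry and an icon
xdg-desktop-menu install --novendor myapp.desktop
xdg-icon-resource install --novendor --size 48 myapp-48.png myapp

# register a custom MIME type and make the app its default handler
xdg-mime install myapp-mime.xml
xdg-mime default myapp.desktop application/x-myapp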
For the actual install you can either provide a variety of native packages (rpm, deb and a few more), roll out your own installer, or find an installer that works on all distros, bypassing native package managers. We successfully used Autopackage (same people who made APBuild) for that.
It's possible to install an RPM on Debian and a DEB on RHEL, e.g. by converting the package with alien.
If you are going to statically link this program, or dynamically link only with libraries that you will be distributing in the package, then it doesn't much matter how you distribute it. The simplest way is tar.gz and that would work.
OTOH if it is dynamically linked with system libraries, and particularly if it has dependencies on dynamic libraries that will be shared with the client's other applications, then you kind of need to do either RPM, DEB, or both.
You may want to try out InstallBuilder. It is cross-platform (runs on Windows, Linux, Mac OS X, Solaris and nearly any other Unix platform out there). It is used by Intel, Motorola, GitHub, MySQL, Nokia/Trolltech and many other companies, so you will be in good company :) In addition to binary installers, it can also create cross-distro RPMs and DEB packages.
InstallBuilder is commercial, but we offer free licenses for open source programs and very significant discounts for mISVs or solo-developers, just drop us a line.
Create a .tar.bz2 archive with the binary, then publish a feed for it, like this:
<?xml version="1.0" ?>
<interface uri="http://mysite/myprog.xml"
           xmlns="http://zero-install.sourceforge.net/2004/injector/interface">
  <name>MyProgram</name>
  <summary>what it does</summary>
  <description>A longer description goes here.</description>
  <implementation main="bin/myprog"
                  id="sha1new=THEDIGEST"
                  version="1.0">
    <archive href="http://mysite/myprogram-1.0.tar.bz2"
             size="10000"/>
  </implementation>
</interface>
Sign it with your GPG key. You can use the tools on 0install.net to calculate the digest and add the GPG signature for you in the correct format.
Then, put it on your website at the address in the uri attribute. Any user on most Linux distributions (e.g. Ubuntu, Fedora, Debian, Gentoo, Arch Linux, etc.) can then install and run your program with:
0launch http://mysite/myprog.xml
Their system will also check for updates periodically. There are various GUIs for the different desktop environments, but the command-line will work everywhere.
Also look at some of the existing feeds for inspiration.
Let me mention an additional possibility, although I am not aware of its current status: the Loki installer. Loki was a company that ported video games to Linux. It went down in 2002, but the installer is still available.
InstallShield is also available for Linux. No idea on its status, though.
Although many people are proposing that you go with tar.gz, please don't. I assume you want to provide a pleasant installation experience for your users. A tar.gz is one of the most low-level, low-quality, low-usability choices you can make. It works everywhere because it does basically nothing, as you know.
The guys at freedesktop.org and the LSB are quite clear on where to put stuff. What you need is a friendly program to do that for you. Autopackage IMHO has the numbers (I love it), but despite its age, I haven't seen a single program out there distributed as an autopackage.
Evaluate it carefully, but don't skip the chance of being part of the momentum in favour of it, just because it's not popular. If it works for you, and it works for your users, everything else does not matter.
There is no best way (universally speaking).
tar.gz the binaries, that should work.
Today, I would also look at Snapcraft and Flatpak which are embraced by some popular distributions. I explored other options and it is what ended up working best for me. Flatpak in particular also helped me learn about standard Linux desktop conventions to follow.
You may also want to look at AppImage (https://appimage.org/). The concept is that it produces a single binary file that the user downloads, sets executable, and runs directly; no installation necessary, no dependencies to install (since the AppImage typically includes all the dependencies except basic stuff like glibc). This makes for a really great user experience!
Some downsides:
The image may be large, since it probably includes all files/libraries/... the app depends on.
As the image creator, you're responsible for security updates to any of the libraries you add into your image.
An AppImage is great for a user-run application that's pretty isolated from anything else on the system (i.e. daemons, system configuration, etc.), but if your app relies on things like udev integration, desktop file installation, D-Bus registration, etc., this isn't easy, since the app's files aren't available when the app isn't running (making udev rules hard), and there is by definition no installer that gets run (making desktop file installation hard).
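For completeness, the user-side flow described above is just this (the file name is invented):

# download, mark executable, run; no root and no installation needed
chmod +x MyApp-x86_64.AppImage
./MyApp-x86_64.AppImage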
I've also looked into this at work, and I'd have to agree there really isn't a "best way". If your application is being distributed as source, then I'd go with the configure/make method packaged up in a tar.gz. That seems fairly universal in the Linux world.
A good way to get an idea of what to do is to look at larger organizations and see how they distribute their binaries.