AlienVault ships by default as an .iso image built on a Debian core. I want to install it on Ubuntu 12.04. How can I do that? Is it possible or not? (AlienVault is a SIEM product; it is an open-source tool for monitoring security logs and is used in Security Operations Centers. I need to install it on Ubuntu. All the files of this product are in the pool directory of its Debian .iso image.)
Sadly, you cannot. OSSIM is an installable distribution; there are no individual packages. You can only install it on a bare system or in a VM.
You can install it in a VM or as the operating system itself. It cannot be installed the way you want, since it is not a package. If you are asking with a specific network diagram in mind, that diagram will need to change, since the SIEM will be installed independently; it can still integrate other security solutions afterwards, though.
I read some docs, and probably the direct answer is "no, that's not possible!".
But I was wondering if there is a chance to reach the goal with some kind of system like VirtualBox, Docker, and so on.
I'd like to understand whether it makes sense to try those routes before wasting too much time on them. Any help will be appreciated.
Building macOS packages on Linux is possible with some caveats:
macOS apps can only be code signed on macOS.
npm packages using native modules must offer prebuilt binaries for the target platform.
It's probably best to just avoid native modules where possible.
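Assuming this is an Electron app packaged with electron-builder (an assumption on my part, suggested by the mention of npm native modules), the unsigned macOS artifacts can be produced from Linux roughly like this:

```shell
# Build the macOS target from Linux without publishing; code signing is
# skipped here because, as noted above, it only works on macOS.
npx electron-builder --mac --publish never
```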
You can install macOS on VirtualBox, so you can probably do that.
Something like this: https://techsviewer.com/install-macos-sierra-vmware-windows/
For Linux, things are similar. Just download a VMDK (Virtual Machine Disk) and use it.
On Debian, after you create the machine, you need a few configuration commands and that is all.
Performance is OK.
I think it's a fairly common problem, but I need community opinion, so I am posting this question.
Use case: I am trying to create a single (32-bit) package for all the Linux distros (32-bit and 64-bit) that I want to support.
Problem: The INSTALLER
Needs to be able to run pre/post install scripts.
Should be able to run on both 32-bit and 64-bit machines
Should be able to support older and newer distros (CentOS 6 and above)
Should have an online repository for updating packages.
Should be able to run without X server
Should not have any dependency on software that cannot be installed using standard yum/zypper/apt commands, and should not depend on any non-standard repository.
I came across this link:
https://www.reddit.com/r/linux/comments/4ohvur/nix_vs_snap_vs_flatpak_what_are_the_differences/
It lists many alternatives, but none of them seems to satisfy all the above requirements (or have I overlooked something?).
In addition I looked at the following two alternatives:
Qt Installer Framework (needs X to run, if I am right)
self-extracting scripts with tars bundled.
The only solution that fits all the needs is "self-extracting scripts with tars bundled". But it requires a lot of work, effectively managing all the installation/upgrade logic myself. Before I go ahead with this alternative, can anyone confirm whether they have had success creating a single package for many distros?
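For reference, the self-extracting approach can be sketched in a few lines of portable shell. Everything here (file names, the payload contents) is illustrative; the real installer would carry your application tarball and real pre/post-install scripts:

```shell
#!/bin/sh
# Build a self-extracting installer: a small stub script with a gzipped
# tarball appended after a marker line. (All names are illustrative.)
set -e

# Create a toy payload containing an install script.
mkdir -p payload
echo 'echo "installing..."' > payload/install.sh
chmod +x payload/install.sh
tar czf payload.tar.gz -C payload .

# Write the stub. At run time it finds the marker in itself, extracts
# everything after it, and runs the bundled install script.
cat > installer.sh <<'EOF'
#!/bin/sh
set -e
SKIP=$(awk '/^__ARCHIVE__/ {print NR + 1; exit 0}' "$0")
TMPDIR=$(mktemp -d)
tail -n +"$SKIP" "$0" | tar xz -C "$TMPDIR"
sh "$TMPDIR/install.sh"
rm -rf "$TMPDIR"
exit 0
__ARCHIVE__
EOF

# Append the payload and make the result executable.
cat payload.tar.gz >> installer.sh
chmod +x installer.sh
```

Running `./installer.sh` unpacks the payload into a temporary directory and executes its `install.sh`; pre/post-install logic, upgrade handling, and 32/64-bit detection would all live inside that script, which is exactly the bookkeeping the question calls "a lot of work".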
I don't put much faith in rolling your own installer with a self-extracting archive. Every distro is different and should be addressed using its own installation mechanisms. Also, writing your own installer is reinventing the wheel.
I'd advocate using the packaging methods of all the distros you're targeting. Essentially, one SPEC file is typically enough to support CentOS 6 and 7 and all modern Fedora versions; use mock or the COPR service to generate the binary packages for the distros you're targeting. A single debian/rules file should likewise be enough to generate Debian, Ubuntu and Mint packages. Add a PKGBUILD if you want to support Arch Linux, too (it's pretty easy).
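As a rough illustration of how little a single SPEC file needs to contain (package name, license, and file list are all made up here):

```spec
Name:           myapp
Version:        1.0
Release:        1%{?dist}
Summary:        Example application
License:        GPLv2
Source0:        %{name}-%{version}.tar.gz
BuildRequires:  gcc

%description
Illustrative spec; one file like this can typically be rebuilt for
CentOS 6/7 and current Fedora via mock or COPR.

%prep
%setup -q

%build
%configure
make %{?_smp_mflags}

%install
make install DESTDIR=%{buildroot}

%files
%{_bindir}/myapp
```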
Admittedly, this way you end up with a whole bunch of different packages rather than one. However, you now have an installer for each system that actually fits that system and is linked against the libraries available on that distro, so you don't have to bundle all the dependencies as you would in a Flatpak, etc.
Installation from distribution-specific packages is almost always "smoother" than installation through a self-extracting archive that wasn't designed for the user's specific distro version, so this is probably a big plus for your users. Also, having packages usually makes it easy and reliable to offer an update path, should you decide your software needs patches later on.
I'm currently working on a Raspberry Pi which runs OpenELEC as its operating system. Unfortunately, apt-get can't be used on this distribution. I have a lot of things to install, and it would take far too long to do it without apt-get.
So my question is: do you know any equivalent of apt-get that can be used on OpenELEC, or a way to use apt-get on this OS?
Alternatively, which OS would you advise so that I don't encounter this problem anymore?
Thanks in advance.
From OpenELEC WIKI:
Unlike other XBMC distributions, OpenELEC isn't based on Ubuntu, Debian or Arch - in fact, it's not based on any distribution. Instead, OpenELEC has been built from scratch specifically to act as a media center. This means it can be streamlined to certain hardware and only needs to include the packages absolutely required, making OpenELEC as streamlined as possible. In addition, OpenELEC is designed to be managed as an appliance - it can automatically update itself, is managed almost entirely from within XBMC and boots in seconds. You never need to see a management console or have Linux knowledge to use it.
So the answer is: you cannot use apt-get or anything similar.
You can change the OS and use the official Raspbian, or, if you need a media center, OSMC.
I want to change the Linux distro on my development (host) machine, which I use for embedded development.
I cross-compile applications for many different processors, and I need to download various libraries to evaluate their functionality, performance and stability on different devices, as well as on the PC.
So, is Ubuntu 9.04 a good choice for me?
Thanks,
Sunny.
If you are using GCC or another source-based compiler that runs on Linux, then I would say yes, you want a Linux distro, and Ubuntu is currently the most popular/best. I would try to avoid distro-specific things; drive down the middle of the road and you should be able to use any distro equally well.
That will largely depend on your needs. For an embedded system, I'd go with any distribution that sports a very small footprint and supports the necessary hardware.
Depending on your hardware, Debian might work fine. You could create your image with debootstrap which allows for fairly small customized installs. It still includes apt and other things which might not be desirable, although that could be to your benefit if you need to push out updates.
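A debootstrap invocation for such a minimal image might look like this (the suite, architecture, target path and mirror are all illustrative, and the command needs root):

```shell
# Bootstrap a minimal Debian system into a target directory.
# --variant=minbase keeps the install close to the smallest usable set.
sudo debootstrap --arch=armhf --variant=minbase stable /srv/target http://deb.debian.org/debian
```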
If you did go with Debian, you could most likely do all your development on Ubuntu and then push to your embedded system.
I use Ubuntu for my host system and a chrooted Gentoo install for building apps for an embedded target. I found Gentoo a good choice, as it is source-distributed and makes it easy to select which version of a particular library is installed.
One thing that is good to know is that Ubuntu and its derivatives use dash, not bash, as /bin/sh. This confuses crosstool and can give you severe headaches.
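You can check what your /bin/sh actually is, since bashisms in `#!/bin/sh` build scripts are what trips things up:

```shell
# On Ubuntu, /bin/sh is typically a symlink to dash rather than bash.
ls -l /bin/sh
# Bashisms such as `[[ ... ]]` or `source file` then fail in scripts
# that start with #!/bin/sh; giving those scripts an explicit
# #!/bin/bash shebang is the usual fix.
readlink /bin/sh || true
```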
My company has a software product that's written in C for a Linux platform, built with autotools and distributed via binary packages. To make the binaries, we first produce a source RPM and then compile the source from the SRPM.
Currently we only provide RPM packages for 64-bit Fedora 10, but we want to start providing packages for multiple Linux distributions - 32-bit as well as 64-bit - and possibly different versions of each distribution as well (e.g. Fedora 11 as well as Fedora 10).
I've heard that the best way to produce builds for multiple Linux flavours is to have a single build server and use a different chrooted environment for each set of packages that you want to build. Does anyone have a good resource that explains this in more detail, maybe with examples of well-known projects that use this build mechanism, or a better alternative to achieve the same goal?
Maybe you can research the following projects to get started:
Novell Build service
Fedora Koji
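On the RPM side, Koji builds on mock, which automates exactly the chroot-per-target setup the question describes; a local invocation looks roughly like this (the config name and SRPM file are illustrative):

```shell
# Rebuild a source RPM inside a clean chroot for one target; mock pulls
# that distro's packages into the chroot and builds in isolation.
mock -r fedora-10-x86_64 --rebuild myapp-1.0-1.src.rpm
# Repeat with a different -r config per target (e.g. an EPEL or i386 one).
```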
You can use the LSB AppChecker to test your application/dynlib/shell-script compatibility. After that you can use RPM for all RPM distributions, alien for the apt-get distributions, and tar.gz for the others.
Tools like checkinstall will help you to produce packages for different distros. Personally, if you are looking to integrate with existing package management systems, you will also want to host multiple repositories on your servers and provide packages there, then have users configure their package managers to pull the apps off your servers.
Depending on what exactly your software does and which dependencies it has (if any) on local libraries, you may be able to build it against an older glibc and have it work on many different distributions. This is what we do with InstallBuilder. If you do not have dependencies on specific packages, it is also possible to create RPM or DEB packages that will run on most RPM- or DEB-based Linux distros out there. Cross-Linux development, in any case, is not easy :) Good luck!
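To see which glibc a binary actually requires (and thus roughly how old a distro it can still run on), you can inspect its versioned symbols; `myapp` here is a placeholder for your binary:

```shell
# The newest GLIBC_* version referenced is the minimum glibc needed at
# runtime; building on an older distro keeps this number low.
objdump -T myapp | grep -o 'GLIBC_[0-9.]*' | sort -Vu | tail -n 1
```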
This is one of the cases covered by Bob Aiello in this article on build agents. We have several customers who use this approach to build on several platforms in parallel.