Linux - generic network configuration

I am creating a set of utilities to configure a Linux system for a particular network and domain configuration. One of the steps of this configuration is configuring the network interface. Although I can easily configure things like Samba or NTP (since they have the same configuration file syntax regardless of distro), networking seems to be a bit more difficult.
With Debian using /etc/network/interfaces, Fedora and Red Hat using /etc/sysconfig/network-scripts, and other distros using their respective network managers (I personally use netctl on Arch), getting a script to be able to configure a network interface seems nearly impossible. Is there any portable way of configuring a network interface?
Failing that, what would be a good way to create 'modules' for various distros? My project will use Autotools for configuring and installing the utilities, so some 'compilation' can be done then, but how can I detect which network manager is in use? Problems with this approach include what to do when no module exists for the network manager in use. Or should I leave it to the compiling user to decide which 'module' to enable?
This question then extends to packaging - when creating this as a package, how can I support the multiple network managers in existence, even on one distro? In Arch, for example, the user may have netctl installed, which uses one set of configuration files, or Wicd, or perhaps GNOME's NetworkManager.

You can design your software around a plug-in API. The API describes how a network interface can be configured. You would then have one plug-in for NetworkManager, one for Wicd, one for /etc/network/interfaces, and so on. The user can choose which plug-ins to compile during the ./configure stage, so that they don't need to install unneeded dependencies.
When multiple plug-ins are enabled (for example NetworkManager and Wicd), you can use D-Bus to check which service is registered to manage network interfaces.
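As a rough sketch of what such a plug-in API could look like (all class and function names here are hypothetical, and backend detection uses simple file-system markers rather than the D-Bus query described above):

```python
import os
from abc import ABC, abstractmethod

class NetworkBackend(ABC):
    """One plug-in per network configuration system."""

    #: file or directory whose presence suggests this backend is in use
    marker = None

    def detect(self, root="/"):
        """Return True if this backend's configuration exists under root."""
        return os.path.exists(os.path.join(root, self.marker))

    @abstractmethod
    def configure_static(self, iface, address, gateway):
        """Return backend-specific configuration text for a static address."""

class DebianInterfacesBackend(NetworkBackend):
    marker = "etc/network/interfaces"

    def configure_static(self, iface, address, gateway):
        return (f"iface {iface} inet static\n"
                f"    address {address}\n"
                f"    gateway {gateway}\n")

class SysconfigBackend(NetworkBackend):
    marker = "etc/sysconfig/network-scripts"

    def configure_static(self, iface, address, gateway):
        return (f"DEVICE={iface}\nBOOTPROTO=static\n"
                f"IPADDR={address}\nGATEWAY={gateway}\n")

def pick_backend(backends, root="/"):
    """Return the first backend whose marker exists, or None."""
    for backend in backends:
        if backend.detect(root):
            return backend
    return None
```

Each distro-specific module then only has to implement the abstract methods; the core utilities never touch configuration files directly.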

Related

What to consider when writing distribution-independent Linux applications?

I wish to write a graphical tool with which one could configure and query information about a Linux system.
In order to achieve some independence from the underlying Linux distribution, I am planning to require that the target system uses systemd, and that the target system has the PackageKit console program installed.
With this, I will have excluded Slackware Linux, since it does not use systemd.
What other considerations should I keep in mind when designing such a tool? Beyond the abstraction layer over the package manager and the use of systemd, are there any other things that I would have to consider?
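The package-manager abstraction described above can be kept very thin, since pkcon already hides the distro-specific backend. A minimal wrapper might look like this (the function names are hypothetical; this assumes pkcon's -y/--noninteractive flag):

```python
import subprocess

def pkcon_command(action, package, noninteractive=True):
    """Build an argument list for PackageKit's console client.

    pkcon abstracts over apt, dnf, zypper, etc., so the same command
    works on any distribution that ships PackageKit.
    """
    cmd = ["pkcon"]
    if noninteractive:
        cmd.append("-y")       # don't prompt for confirmation
    cmd += [action, package]
    return cmd

def install(package):
    """Run 'pkcon -y install <package>'; returns the exit status."""
    return subprocess.call(pkcon_command("install", package))
```

The GUI tool would call wrappers like these instead of shelling out to apt or dnf directly.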

Is Webmin advisable to run on an ARM-based embedded system in Debian?

Looking to implement a web-based system that allows the following:
Managing the web-based system configuration
Configuring operating-system internals, such as users, disk quotas, services, or configuration files, as well as modifying and controlling open-source apps
Custom modules to configure proprietary hardware functions
You could go with the Webmin/Virtualmin combination. It can do nearly all of the things mentioned. It has both a free GPL version and a Pro version with extended functionality.
To install the GPL version, you just need to download and run the install.sh script:
wget http://software.virtualmin.com/gpl/scripts/install.sh
Later, if you'd like, you can purchase the Pro version.

How can I run a distributed domain under RedHawk

I am trying to create a distributed domain using RedHawk 2.0.1 and cannot find enough information on setting it up in the manual. I have two related issues. I want to run the domain manager on the same host as the IDE, but run one or more components on another node. I see how to create a new node project, but do not see how to specify the network location it should run on. I can add it to the domain, but it simply runs two device managers on the local host. I also do not see details of how to make specific components run on the alternate node. Does this require manually adding allocation properties?
The related issue is that I would like to use a non-x86 node as the remote node. I am trying to use an ARM processor and following the instructions in the Sub$100 manual I was able to build and install the runtime system on my ARM, but I find that the GPP device's GPP.spd.xml still has x86 as the processor name while the prf.xml has arm as the required property.
The manual seems to indicate that the binaries for all nodes will be in the sdr of the domain manager, so am I supposed to copy the sdr entries for my arm gpp device and all components back to the sdr of the domain manager host and then they will be deployed back to my arm at domain and waveform launch?
Are there better detailed instructions for distributed domains somewhere that I am missing?
I believe the last supported version of REDHAWK for the Sub$100 project was 1.10, so we're in uncharted territory. That being said, let's take a stab at it.
The first thing you should do is make sure the /etc/omniORB.cfg file for your Domain Manager looks like this:
InitRef = NameService=corbaname::<external IP>:2809
InitRef = EventService=corbaloc::<external IP>:11169/omniEvents
where <external IP> should be replaced with your network IP (i.e., not localhost or 127.0.0.1). Restart the CORBA naming and event services with this command:
sudo $OSSIEHOME/bin/cleanomni
The next step is to configure your ARM device to point to the Domain Manager. Edit the /etc/omniORB.cfg file on the ARM device to match the one from your Domain Manager, including the IP address. Note that you don't have to start the naming and event services on the ARM device.
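For example, if the Domain Manager host's network IP were 10.3.1.245, both machines' /etc/omniORB.cfg would read (the address is just an illustration; substitute your own):

```
InitRef = NameService=corbaname::10.3.1.245:2809
InitRef = EventService=corbaloc::10.3.1.245:11169/omniEvents
```

Both entries point at the Domain Manager host, so the ARM node resolves the same naming and event services.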
Now for running the GPP on the ARM device: you will have to create that node on the ARM device, since the Domain doesn't know about that device yet and cannot access its filesystem. Page 16 of the 1.10 version of the Sub$100 document (http://ufpr.dl.sourceforge.net/project/redhawksdr/redhawk-doc/1.10.0/REDHAWK-Sub100-Manual-v1.10.0.pdf) has the instructions for installing the GPP.
Note that the newest version of the GPP is actually a C++ Device now, so the second step should be 'cd framework-GPP/cpp' and the third step should be 'git checkout 2.0.1'. Once that's installed, there are still a couple more issues to take care of. First, run the following command:
$SDRROOT/dev/devices/GPP/cpp/devconfig.py --location $SDRROOT/dev/devices/GPP
That will configure your GPP to recognize that it is on an ARM platform (as long as your processor is an armv7l processor).
Next, run the following:
$SDRROOT/dev/devices/GPP/cpp/create_node.py --domainname <RH Domain Name>
That will actually create the DeviceManager profile that will contain your GPP.
The final step involves making sure that the node will be configured correctly. Check out Page 21, Step 5. Basically, you can remove the x86_64 implementation and replace any instances of 'x86' with 'armv7l'.
As for your question about building your components, yes, you have to build them for the platform of interest and then install them to the Domain Manager SDRROOT. If you have a cross-compiler set up to build your components (and the framework), this will make your life a lot easier. However, if you don't, the workaround is to build the components on your ARM device, then install the XML files and the executable to the Domain. In order to make any components work with your ARM GPP, they will need to have an ARM implementation with a processor name that matches that of your GPP in their SPD.
I know that's a lot and I haven't run through these instructions in a while, so let me know if you have any questions or anything doesn't work.
Apparently replies are very limited in length, so I'll call this an answer. Thanks for your response. I have actually tried part of this, but will see if your information gets me any further. After writing this question I explored a little further. I found that the code I had compiled on the ARM and installed still had "x86" and "x86-64" in the domain profile for the device manager, and no "armv7l", so I patched the profile and tried starting the device manager on the ARM manually (after setting omniORB.cfg to point to the name server on the domain manager host). It started up fine and said it was trying to connect, and the name server on the domain manager host then had an entry for the ARM device manager, but the IDE did not list the additional device manager. If I killed the ARM device manager, it said that it was interrupted while waiting to register, so I assume the device manager registered with the name server but never got a reply from the domain manager. This does not make me hopeful that your steps will work, but I'll give them a try.
Update: following the steps in the Sub$100 document more closely, it appears that $SDRROOT/dev/devices/GPP/cpp/devconfig.py did not edit GPP.spd.xml to put in the correct processor and compiler version, but after hand-editing these I was able to launch the full domain (DomainManager, DeviceManager, GPP device) on the ARM processor, and was able to connect to this running domain from the IDE running on x86. After exporting and rebuilding my waveform components and editing their domain profiles, I was able to use the IDE to successfully launch and control a very small three-component waveform. So running the entire domain on the ARM works OK.
But I still cannot start the DeviceManager on the ARM and have it register with the DomainManager on x86 (after editing the DCD to point to the x86 domain), i.e., run a distributed domain with two nodes. It starts and says it is registering with the DomainManager, and it must partly succeed, because the DeviceManager shows up in the NamingService under the domain; but the IDE never shows the new DeviceManager in the domain, and the DeviceManager never starts the GPP device. If the DeviceManager is killed, it prints "Interrupted waiting to register with DomainManager", so even though it got registered in the Naming Service, it appears that the DomainManager never replied to the registration request.

How are applications managed in Linux?

I am looking for design suggestions/documents etc. which contain specifics of system application management in Linux (Ubuntu, Debian, etc.).
Can you please point to a source of information or suggest a design?
I'm not sure I understand what you mean by system application management (it can certainly mean several different things).
In practice, Linux distributions have a package management system to deal with that issue. init or systemd (etc.) is in charge of starting/stopping/managing daemons and servers, and its configuration is related to packaging.
Read also the Ubuntu Packaging Guide, How To Package For Debian, the Debian New Maintainer's Guide, etc.
If you are coding some service application, read Advanced Linux Programming and about daemon(3) & syslog(3).
Also, study the source code of relevant system applications (similar to the one you are dreaming of), since Linux is generally (mostly) free software.
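As a minimal illustration of the daemon(3)/syslog(3) pattern mentioned above, here is a sketch in Python (function names are mine; a real service today would usually let systemd handle daemonization instead):

```python
import os
import sys
import syslog

def daemonize():
    """Classic double-fork so the process detaches from its terminal."""
    if os.fork() > 0:
        os._exit(0)              # first parent exits
    os.setsid()                  # start a new session, drop the controlling tty
    if os.fork() > 0:
        os._exit(0)              # second parent exits; the child is the daemon
    devnull = os.open(os.devnull, os.O_RDWR)
    for fd in (0, 1, 2):         # detach stdin/stdout/stderr
        os.dup2(devnull, fd)

def log_startup(name):
    """Log through syslog, as daemon(3)-style services do."""
    syslog.openlog(name, syslog.LOG_PID, syslog.LOG_DAEMON)
    syslog.syslog(syslog.LOG_INFO, "service started")
    syslog.closelog()
```

A service would call daemonize() once at startup and then log everything via syslog rather than stdout.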

Linux per-program firewall similar to Windows and Mac counterparts

Is it possible to create a GUI firewall that works like its Windows and Mac counterparts? On a per-program basis, with a popup notification window when a specific program wants to send/receive data over the network.
If not, why? What does the Linux kernel lack that prevents such programs from existing?
If yes, why aren't there such programs?
P.S. This is a programming question, not a user one.
Yes, it's possible. You will need to set up firewall rules to route traffic through a userspace daemon; it'll involve quite a bit of work.
Because they're pretty pointless: if the user understands which programs he should block from net access, he could just as well use one of the multiple existing friendly netfilter/iptables frontends to configure this.
It is possible, there are no restrictions and at least one such application exists.
I would like to clarify a couple of points though.
If I understood this article correctly, the firewalls mentioned here so far, and the iptables this question is tagged under, are packet filters: they accept and drop packets depending mostly on the IP addresses and ports they come from or are sent to.
What you describe looks more like mandatory access control to me. There are several utilities for that purpose in Linux - SELinux, AppArmor, Tomoyo.
If I had to implement the graphical utility you describe, I would pick, for example, AppArmor, which supports whitelists and, to some extent, dynamic profiling, and try to make a GUI for it.
OpenSUSE's YaST features a graphical interface for AppArmor setup and 'learning', but it is specific to that distribution.
So Linux users and administrators have several ways to control network (and file) access on a per-application basis.
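For instance, a minimal AppArmor profile that denies a single program all network access might look like this (the path and program name are made up for illustration):

```
# /etc/apparmor.d/usr.bin.example - hypothetical profile
/usr/bin/example {
  #include <abstractions/base>

  deny network inet,      # no IPv4 sockets
  deny network inet6,     # no IPv6 sockets

  /usr/bin/example mr,    # the program may map/read its own binary
}
```

A GUI frontend would essentially generate and reload profiles like this one.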
Why the graphical frontends for MAC are so few is another question. It's probably because desktop Linux users tend to trust software they install from repositories and have less reason to control it this way (if an application is freely distributed, it has less reason to call home, and packages are normally reviewed before they get into repositories), while administrators and power users are fine with the command line.
As desktop Linux gets more popular and people install more software from the AUR, PPAs, or even gnome-look.org, where packages and scripts are not reviewed that thoroughly (if at all), demand for this type of software (user-friendly, simple-to-configure MAC) might grow.
To answer your third point: there is such a program, which provides zenity popups. It is called Leopard Flower:
http://sourceforge.net/projects/leopardflower
Yes, everything is possible. There are real antiviruses for Linux, so there could also be firewalls with a GUI. But as a Linux user, I can say that such a firewall is not needed.
I reached this question as I am currently trying to migrate from a Mac to Linux. There are a lot of applications I run on my Mac and on my Linux PC. Some of them I trust fully, but others I do not. Whether or not they are installed from a source that checks them, do I have to trust them because someone else did? No, I am old enough to choose for myself.
In times where privacy is getting more and more complicated to achieve, and distributions exist that show we should not trust everyone, I like to be in control of what my applications do. This control might not end at the connection to the network/Internet, but that is what this question (and mine) is about.
I have used LittleSnitch on Mac OS X over the past years, and I was surprised how often an application wants to access the internet without me even noticing: to check for updates, to call home, ...
Now that I would like to switch to Linux, I tried to find the same thing, as I want to be in control of what leaves my PC.
During my research I found a lot of questions about this topic. This one, in my opinion, best describes what it is about. The question for me is the same: I want to know when an application tries to send or receive information over the network/internet.
Solutions like SELinux and AppArmor might be able to allow or deny such connections, but configuring them means a lot of manual work, and they do not inform you when a new application tries to connect somewhere. You have to know in advance which application you want to deny network access.
The existence of Douane (How to control internet access for each program? and DouaneApp.com) shows that there is a need for an easy solution. There is even a distribution which seems to have such a feature included. I am not sure what Subgraph OS (subgraph.com) is using, but they state something like this on their website, and it reads exactly like the initial question: "The Subgraph OS application firewall allows a user to control which applications can initiate outgoing connections. When an unknown application attempts to make an outgoing connection, the user will be prompted to allow or deny the connection on a temporary or permanent basis. This helps prevent malicious applications from phoning home."
As it seems to me, there are only two options at the moment: one, compile Douane manually myself, or two, switch distributions to Subgraph OS. As one of the answers states, everything is possible, so I am surprised there is no other solution. Or is there?
