I am building a cross-platform application consisting of several modules that exchange data with each other.
That means my question relates to both Windows and Linux.
Q: When using TCP/IP for inter-process communication, does the OS perform any special optimization when both endpoints are on localhost?
I've heard somewhere that in this case Windows can bypass the network drivers and just use shared memory. I have no source or proof for this statement, but the idea of switching off unused layers sounds logical.
Is that true, and if so, where can I read the details?
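For what it's worth, the throughput itself is easy to measure; here is the kind of minimal check I have in mind (POSIX sockets, so it covers only the Linux side of my question; error handling omitted):

```c
/* Rough loopback throughput check (POSIX; a Windows port would use
 * Winsock). Times how fast bytes move through 127.0.0.1.
 * Partial writes are ignored for brevity. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/wait.h>
#include <time.h>
#include <unistd.h>

#define TOTAL_BYTES (256L * 1024 * 1024)
#define CHUNK 65536

int main(void) {
    int srv = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    addr.sin_port = 0;                      /* let the kernel pick a port */
    bind(srv, (struct sockaddr *)&addr, sizeof addr);
    socklen_t len = sizeof addr;
    getsockname(srv, (struct sockaddr *)&addr, &len);
    listen(srv, 1);

    if (fork() == 0) {                      /* child: the sender */
        int c = socket(AF_INET, SOCK_STREAM, 0);
        char buf[CHUNK] = {0};
        connect(c, (struct sockaddr *)&addr, sizeof addr);
        for (long sent = 0; sent < TOTAL_BYTES; sent += CHUNK)
            write(c, buf, CHUNK);
        close(c);
        _exit(0);
    }

    int conn = accept(srv, NULL, NULL);
    char buf[CHUNK];
    long total = 0;
    ssize_t n;
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    while ((n = read(conn, buf, CHUNK)) > 0)
        total += n;
    clock_gettime(CLOCK_MONOTONIC, &t1);
    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("%.1f MB/s over loopback\n", total / secs / 1e6);
    wait(NULL);
    return 0;
}
```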
I am working on a Linux-based embedded project with C/C++ and Python applications, and we need an inter-process communication (IPC) method to transport JSON-based messages between those applications. Initially DBus was an obvious option, since it is present in almost all Linux distributions and is stable, proven software. There are libraries for many programming languages, and DBus has a very granular and nice permission system, which is a requirement for our project (for security reasons).
But unfortunately we have experienced some drawbacks of DBus:
We have hit stability bugs: in some specific congestion situations there were memory leaks that led to dead IPC, and only an application restart helped.
Merely using DBus introduced 3-5 MB of RAM usage per application (which, on a system with 512 MB of RAM multiplied across 25 applications, leaves some room for improvement).
The data-flow model (signals / methods) seems a bit too complicated for our use-case.
Our next idea is to switch to one of the available message brokers. We are also looking for some nice-to-have features:
The ability to broadcast or multicast messages to multiple applications.
Presence of applications as they connect to or disconnect from the bus server (the server can broadcast when new applications connect and when applications disconnect).
A watchdog for connected applications. Sometimes apps misbehave on the IPC (by not answering IPC messages); the server's watchdog could detect that, disconnect the application, and inform the others that the application is dead. A sketch of this idea follows below.
How do we avoid DBus in this scenario?
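To illustrate the watchdog feature, here is a rough sketch of the server-side bookkeeping I picture. All the names here (client_t, MAX_MISSED, ...) are made up for illustration and not tied to any particular broker:

```c
/* Sketch of a heartbeat watchdog: the server pings every client on a
 * fixed interval and disconnects any client that misses too many pings.
 * All identifiers are illustrative placeholders. */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

#define MAX_CLIENTS 25
#define MAX_MISSED  3          /* pings a client may miss before removal */

typedef struct {
    int fd;                    /* connected socket, -1 if slot unused */
    int missed;                /* consecutive unanswered pings */
    char name[32];             /* application name, for presence messages */
} client_t;

static client_t clients[MAX_CLIENTS];

/* Broadcast a presence/death notification to every remaining client. */
static void broadcast(const char *msg) {
    for (int i = 0; i < MAX_CLIENTS; i++)
        if (clients[i].fd >= 0)
            send(clients[i].fd, msg, strlen(msg), MSG_NOSIGNAL);
}

/* Called once per heartbeat interval (e.g. from a timerfd). */
void heartbeat_tick(void) {
    for (int i = 0; i < MAX_CLIENTS; i++) {
        if (clients[i].fd < 0)
            continue;
        if (++clients[i].missed > MAX_MISSED) {
            char note[64];
            snprintf(note, sizeof note, "DEAD %s\n", clients[i].name);
            close(clients[i].fd);
            clients[i].fd = -1;
            broadcast(note);   /* tell the others the app is gone */
        } else {
            send(clients[i].fd, "PING\n", 5, MSG_NOSIGNAL);
        }
    }
}

/* Called whenever a client answers a ping; resets its miss counter. */
void on_pong(client_t *c) { c->missed = 0; }
```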
I am building a message layer for processes running on an embedded Linux system. I am planning to use sockets. This system might be ported to different operating systems down the road, so portability is a concern; performance ranks below portability in priority.
I have a few questions regarding my way forward.
I am thinking of using Internet sockets over TCP/IP for this communication between local processes, for the sake of portability. Is there any reason I should not do that and should use domain sockets instead?
Does using Internet sockets instead of domain sockets really improve portability?
If this is indeed the way forward, can you point me in the right direction (how to use ports for each process etc.) with some online resources?
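For concreteness, the way I picture it, the two options differ mainly in the address family and the address structure; a minimal sketch of creating a listening endpoint each way (error handling omitted):

```c
/* Sketch: the same listening endpoint created two ways. Only the
 * address family and address struct differ; the accept/read/write
 * code is identical afterwards. Error handling omitted. */
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

int listen_tcp_loopback(unsigned short port) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);    /* portable almost everywhere */
    struct sockaddr_in a = {0};
    a.sin_family = AF_INET;
    a.sin_addr.s_addr = htonl(INADDR_LOOPBACK);  /* 127.0.0.1: local peers only */
    a.sin_port = htons(port);
    bind(fd, (struct sockaddr *)&a, sizeof a);
    listen(fd, 16);
    return fd;
}

int listen_unix(const char *path) {
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);    /* POSIX; not classic Windows */
    struct sockaddr_un a = {0};
    a.sun_family = AF_UNIX;
    strncpy(a.sun_path, path, sizeof a.sun_path - 1);
    unlink(path);                                /* remove a stale socket file */
    bind(fd, (struct sockaddr *)&a, sizeof a);
    listen(fd, 16);
    return fd;
}
```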
Sorry for the rather long post.
I need some input regarding a project that I am going to undertake.
I am trying to make an application that collects kernel debugging information from a guest Linux OS located inside a VMware virtual machine and sends it to the host OS efficiently.
So far, I have found a similar project, but written for Windows [1].
The author of the project wrote a DLL that is loaded into memory and replaces the implementation of the KdSendPacket and KdReceivePacket functions to use the VMware GuestRPC [2] mechanism instead of the slow serial port.
The data is then sent to a debugging application on the host (KD or WinDbg) through a named pipe.
The author claims a speed-up of up to 45% by avoiding serial-port transmission.
I am trying to achieve something similar for Linux, to make the debugging process a little faster than using the serial port.
My concrete questions are:
Do any similar applications exist?
I didn't manage to find any.
Would such an application be worthwhile, compared to netconsole [3], for example?
What method of intercepting printk messages would you suggest?
Is there an equivalent of KdSendPacket/KdReceivePacket on Linux?
[1] http://virtualkd.sysprogs.org/dox/operation.html
[2] http://articles.sysprogs.org/kdvmware/guestrpc.shtml
[3] http://www.kernel.org/doc/Documentation/networking/netconsole.txt
Using the serial port is really suboptimal; even the (virtual) network would be preferable to that. But getting back to host-guest IPC channels, VMware's VMCI comes to mind.
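On reasonably recent Linux guests, VMCI is exposed to user space through the vsock address family, so it looks like ordinary socket code. A minimal sketch of the guest connecting to a host-side service (this assumes the vsock transport is available; the port number is an arbitrary placeholder):

```c
/* Sketch: guest-side vsock connection to the host. Linux exposes
 * VMCI via AF_VSOCK when the transport is available; the host must
 * be listening on the chosen (placeholder) port. */
#include <linux/vm_sockets.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void) {
    int fd = socket(AF_VSOCK, SOCK_STREAM, 0);
    struct sockaddr_vm addr = {0};
    addr.svm_family = AF_VSOCK;
    addr.svm_cid = VMADDR_CID_HOST;   /* well-known CID 2: the host */
    addr.svm_port = 9999;             /* arbitrary port for this sketch */
    if (connect(fd, (struct sockaddr *)&addr, sizeof addr) < 0) {
        perror("connect");
        return 1;
    }
    const char *msg = "debug data would go here\n";
    write(fd, msg, strlen(msg));
    close(fd);
    return 0;
}
```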
Many approaches can be used to achieve your goal. The following methods apply if a network connection is available:
Use the syslog service and transfer logs over the network to your server: syslogd and syslog-ng seem to support sending logs to a log server with some filter criteria.
Directly call TCP/UDP socket functions in your kernel module to send your collected data back to the server; a sketch follows below.
As for other approaches, you may write an application on the host machine that calls the hypervisor's shared-memory access functions to read the memory buffer of your kernel module. However, while the Xen and KVM hypervisors both support such APIs, I am not sure whether VMware has this kind of library.
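For the second method, a rough sketch of a kernel module sending UDP datagrams. The in-kernel socket API varies between kernel versions; this follows the sock_create_kern()/kernel_sendmsg() style of recent kernels, and the server address/port are placeholders:

```c
/* Sketch: a kernel module sending collected data to a server over UDP.
 * API details shift between kernel versions; server IP/port below are
 * placeholders. */
#include <linux/in.h>
#include <linux/inet.h>
#include <linux/module.h>
#include <linux/net.h>
#include <net/net_namespace.h>
#include <net/sock.h>

static struct socket *log_sock;

static int send_to_server(const void *data, size_t len)
{
	struct sockaddr_in dst = {
		.sin_family = AF_INET,
		.sin_port   = htons(5140),                    /* placeholder port */
		.sin_addr   = { .s_addr = in_aton("192.168.1.10") }, /* placeholder */
	};
	struct kvec vec = { .iov_base = (void *)data, .iov_len = len };
	struct msghdr msg = { .msg_name = &dst, .msg_namelen = sizeof(dst) };

	return kernel_sendmsg(log_sock, &msg, &vec, 1, len);
}

static int __init logsender_init(void)
{
	/* UDP socket owned by the kernel, in the initial network namespace. */
	return sock_create_kern(&init_net, AF_INET, SOCK_DGRAM,
				IPPROTO_UDP, &log_sock);
}

static void __exit logsender_exit(void)
{
	sock_release(log_sock);
}

module_init(logsender_init);
module_exit(logsender_exit);
MODULE_LICENSE("GPL");
```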
I'm preparing to write a multithreaded network application. At the moment I'm wondering what the best thread pattern for my program is. The whole application will handle up to 1000 descriptors (local files, network connections using various protocols, and additional descriptors for timer and signal handling). The application will be optimized for Linux. The program will run on regular personal computers, so I assume they will have at least a Pentium 4.
Here's my current idea:
One thread will handle network I/O using epoll (see the sketch below).
A second thread will handle local-like I/O (disk I/O, timers, signal handling), also using epoll.
A third thread will handle the UI (CLI, GTK+, or Qt).
Handling each network connection in a separate thread would kill the CPU because of too many context switches.
Maybe there's a better way to do this?
Do you know of any documents/books about designing multithreaded applications? I'm looking for answers to questions like: what's a rational number of threads? etc.
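For reference, the skeleton of the epoll-based network thread I have in mind looks roughly like this (level-triggered; error handling and the actual protocol logic omitted):

```c
/* Skeleton of the epoll network thread: one epoll instance watches all
 * network descriptors, and events are dispatched from a single loop
 * instead of one thread per connection. */
#include <sys/epoll.h>
#include <sys/socket.h>
#include <unistd.h>

#define MAX_EVENTS 64

void network_thread(int listen_fd) {
    int ep = epoll_create1(0);
    struct epoll_event ev = { .events = EPOLLIN, .data.fd = listen_fd };
    epoll_ctl(ep, EPOLL_CTL_ADD, listen_fd, &ev);

    for (;;) {
        struct epoll_event events[MAX_EVENTS];
        int n = epoll_wait(ep, events, MAX_EVENTS, -1);
        for (int i = 0; i < n; i++) {
            int fd = events[i].data.fd;
            if (fd == listen_fd) {
                /* New connection: register it with the same epoll set. */
                int conn = accept(listen_fd, NULL, NULL);
                struct epoll_event cev = { .events = EPOLLIN, .data.fd = conn };
                epoll_ctl(ep, EPOLL_CTL_ADD, conn, &cev);
            } else {
                /* Existing connection is readable: read and process. */
                char buf[4096];
                ssize_t len = read(fd, buf, sizeof buf);
                if (len <= 0) {          /* peer closed or error */
                    epoll_ctl(ep, EPOLL_CTL_DEL, fd, NULL);
                    close(fd);
                }
                /* else: process buf[0..len) */
            }
        }
    }
}
```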
You're on the right track. You want to use a thread pool pattern to handle the networking rather than one thread per network connection.
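A minimal sketch of that pattern with POSIX threads: the epoll loop stays single-threaded and pushes ready descriptors onto a queue that a fixed set of workers drains. The queue size and worker count here are arbitrary illustrative choices:

```c
/* Sketch of a fixed thread pool: the I/O thread enqueues ready file
 * descriptors; NUM_WORKERS workers dequeue and process them. */
#include <pthread.h>
#include <stdio.h>

#define QUEUE_SIZE 128
#define NUM_WORKERS 4

static int queue[QUEUE_SIZE];
static int head, tail, count;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t not_empty = PTHREAD_COND_INITIALIZER;
static pthread_cond_t not_full = PTHREAD_COND_INITIALIZER;

/* Called by the epoll thread when a descriptor is ready for work. */
void enqueue_ready_fd(int fd) {
    pthread_mutex_lock(&lock);
    while (count == QUEUE_SIZE)
        pthread_cond_wait(&not_full, &lock);
    queue[tail] = fd;
    tail = (tail + 1) % QUEUE_SIZE;
    count++;
    pthread_cond_signal(&not_empty);
    pthread_mutex_unlock(&lock);
}

static void *worker(void *arg) {
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&lock);
        while (count == 0)
            pthread_cond_wait(&not_empty, &lock);
        int fd = queue[head];
        head = (head + 1) % QUEUE_SIZE;
        count--;
        pthread_cond_signal(&not_full);
        pthread_mutex_unlock(&lock);

        /* Do the per-connection work here (parse, respond, ...). */
        printf("worker %lu handling fd %d\n",
               (unsigned long)pthread_self(), fd);
    }
    return NULL;
}

void start_pool(void) {
    for (int i = 0; i < NUM_WORKERS; i++) {
        pthread_t t;
        pthread_create(&t, NULL, worker, NULL);
    }
}
```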
This website may also be helpful to you and lists the most common design patterns and in what situations they can be used.
http://sourcemaking.com/design_patterns/
To handle the disk I/O you might like to consider using mmap under Linux. It's very fast and efficient. That way you let the kernel do the work, and you probably won't need a separate thread for that.
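A minimal sketch of what that looks like (error handling kept minimal; the checksum loop just stands in for real processing):

```c
/* Sketch: reading a file via mmap instead of read(); the kernel pages
 * the data in on demand as it is touched. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int sum_file_bytes(const char *path) {
    int fd = open(path, O_RDONLY);
    struct stat st;
    fstat(fd, &st);

    unsigned char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED) { close(fd); return -1; }

    long sum = 0;
    for (off_t i = 0; i < st.st_size; i++)  /* touching pages faults them in */
        sum += p[i];

    munmap(p, st.st_size);
    close(fd);
    printf("%s: %lld bytes, checksum %ld\n", path, (long long)st.st_size, sum);
    return 0;
}
```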
I'm currently playing with Boost.Asio, which seems to be quite good; it uses epoll on Linux. Since it appears you are using a cross-platform GUI toolkit like Qt, Boost.Asio will also provide cross-platform support, so you will be able to use it on Windows or Linux. I think there might be a cross-platform mmap too.
I need to produce an embedded ARM design that has requirements to do many of the things embedded Linux would do. However, the design is cost-sensitive and does not need huge amounts of horsepower; mostly it will be talking to serial interfaces. Ideally I would like to use one of the low-end ARMs. What is the lowest configuration of an ARM on which you have successfully used embedded Linux?
Edit:
The application needs a file system on some kind of flash device and the ability to run applications for processing the data. Some of the applications might be written by people other than myself. I also need the ability to load new applications, or update old ones, over the serial ports.
When I have looked at other embedded OSes, they seem to be more of a real-time threading solution than something with the ability to run applications. I am open to whatever will get the job done.
I think you need to weigh your cost options here.
ARM + Linux is an option, but you will be paying a very high overhead for such a simple (from your description) set of features. You can't just look at the cost of the ARM chip; you must also consider the external RAM that will very likely be required, as well as flash, to get enough space to run the kernel plus apps.
NOTE: you may be able to avoid the external requirements with a very minimal kernel and simple apps combined with a uC with large internal resources.
A second option is a much simpler microcontroller with a lightweight OS. This will cut your hardware costs on the CPU, and you can likely run something like this without external RAM or flash (depending on the application's RAM and program-space requirements).
A third option: I don't actually see anything in your requirements that demands any OS at all. Basic file systems are very simple; for instance, there are even FAT drivers out there for 8-bit PICs. Interfacing to an SD card requires only a SPI port and minimal external circuitry.
The application bit could be simple or complex. I've built systems around PIC18 microcontrollers that run a web server and allow program updates via a simple upload screen: the system stores the new program into an EEPROM or flash, reboots into a bootloader, and copies the new program into internal program memory. You could likely design a way to do this without the reboot via a cooperative-multitasking type of architecture. Whichever way you go, the programmers writing the apps will need knowledge of the architecture and access to the libraries/drivers you write. Your best bet to simplify this is to provide as simple an API as possible and to try to automate the build process for them.
The third option will be the "cheapest" in terms of hardware, as there will be very little overhead in the processing of your applications, allowing you to get away with minimal processing power and memory. It will likely require more programming/software architecting on your part, but won't require nearly the research you would need to undertake to get Linux up and running, in addition to learning to write the needed device drivers under a Linux paradigm.
As always, you have to include the software development costs in the build cost of the device. If you plan to build 10,000+ of these, you're likely better off keeping hardware costs down and putting more manpower into designing a software solution that allows that hardware to meet the design goals. If you're building 10 of them, you're better off spending an extra $15-20 on hardware if it can cut down on your software development costs, for example an ARM with an MMU, full Linux kernel support, and available device drivers.
I kind of feel that you're selecting the worst of both worlds at the moment: you're paying extra to get a uC you can run Linux on, but by doing so you're also selecting a part that will likely be the most complex to get Linux up and running on, especially having not worked with Linux on embedded platforms before.
I've had success even on an ARM7TDMI, so I don't think you're going to have any trouble. If you have a low-requirements system, you could use any kind of lightweight real-time executive and have a much better experience than you would getting Linux to work.
I've used a TS-7200 for about five years to run a web server and mail server, using Debian GNU/Linux. It is 200 MHz, has 32 MB of RAM, and is quite adequate for these tasks. It has a serial port built in and is based on an ARM920T.
This would be overkill for your job; I mention it so you have another data point.
For several years I've been using a gumstix to do prototyping and testing and I've had good results with it. I don't know if the processor they are using (Intel PXA255 on my board) is considered low-cost, but the entire Verdex line seems pretty cheap to me for an adaptable device.
uClinux is designed specifically for resource-constrained targets, but perhaps more importantly, for targets without an MMU.
However, you have to have a good reason to use Linux on such a system rather than a small real-time executive. Out-of-the-box networking, readily available drivers and protocol stacks for complex hardware, and support for existing POSIX legacy or open-source code are a few, perhaps. If you don't need those, Linux is still large, and you may be squandering resources for no real benefit. In most cases you will still need off-chip SDRAM and flash if you choose Linux of any flavour.
I would not regard serial I/O as 'complex hardware', so unless you are running a complex but standard protocol, your brief description does not appear to warrant the use of Linux, IMO.
My DLINK DIR-320 router runs Linux inside.
And I know some tinkerers who flash it with Optware and connect USB hubs, HDDs, USB flash drives, and much more.
It's a low-cost, ready-to-use "platform" (if you don't need mass production), though it may be more powerful than you need.
Additionally, it can be configured wirelessly via the web interface, even from your PDA :)