Windows 10 IoT Enterprise - Soft Real-time Performance with Audio Service enabled

I want to use soft real-time performance (RTP) on Windows 10 IoT Enterprise, but the required setup steps include disabling the Windows Audio service. Is there any way to keep audio on the machine while still using RTP? This feature is exclusive to this edition of Windows and is a selling point for Kiosk Mode, but it removes any and all audio from your device. That doesn't seem like a fair trade...
One thought I had was dedicating a core (or cores) to the service, on the theory that the service touches all CPU cores regardless of RTP and thereby interferes with it, but this was really just a shot in the dark, since there is zero explanation of why any of the services must be disabled in the first place. Regardless, it was not possible to even attempt this, because Windows does not allow changing the core affinity of Windows system processes (from what little I have seen online on this subject).
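For what it's worth, the affinity experiment could at least be attempted with something like the sketch below, which uses the third-party psutil package to try to pin the audio engine process to a single core. The choice of audiodg.exe as the relevant process is an assumption, and for protected system processes the call is expected to fail with an access-denied error, which would confirm the limitation described above.

```python
# Hypothetical experiment: try to confine the Windows audio engine process
# (assumed here to be audiodg.exe) to core 0 using psutil. Protected system
# processes are expected to raise AccessDenied, matching the limitation above.
import psutil

for proc in psutil.process_iter(["pid", "name"]):
    if (proc.info["name"] or "").lower() == "audiodg.exe":
        try:
            proc.cpu_affinity([0])  # pin the process to core 0 only
            print(f"Pinned PID {proc.info['pid']} to core 0")
        except psutil.AccessDenied:
            print(f"Access denied for PID {proc.info['pid']}")
```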

Related

Alternative to Azure Accelerated Networking

I am looking for an alternative to Azure Accelerated Networking. The use case remains the same: I want better response times on my VM, which supports hyperthreading. My concern is CPU core underutilization caused by the Accelerated Networking requirement of maintaining 4 vCPUs. The application doesn't even use two cores. Let me know if there are any possible solutions.
Receive Side Scaling (RSS) is one known option...
If the Windows VM supports Accelerated Networking, enabling that feature would be the optimal configuration for throughput. For all other Windows VMs, using Receive Side Scaling (RSS) can achieve higher maximum throughput than a VM without RSS. RSS may be disabled by default in a Windows VM.
On Linux VMs, it is enabled by default.
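As a minimal sketch (assuming a Windows guest where the built-in NetAdapter PowerShell cmdlets are available), RSS can be checked and enabled like this; the adapter name 'Ethernet' is an assumption:

```python
# Check and enable RSS in a Windows VM by shelling out to PowerShell.
import subprocess

def run_ps(command: str) -> str:
    """Run a PowerShell command and return its stdout."""
    result = subprocess.run(
        ["powershell", "-NoProfile", "-Command", command],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

# Show the current RSS state for all adapters.
print(run_ps("Get-NetAdapterRss | Format-Table Name, Enabled"))

# Enable RSS on a specific adapter (the name 'Ethernet' is an assumption).
run_ps("Enable-NetAdapterRss -Name 'Ethernet'")
```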

How to avoid DBus on Linux in an embedded environment?

I am working on a Linux-based embedded project with C/C++ and Python applications, and we need an inter-process communication (IPC) method to transport JSON-based messages between those applications. Initially DBus was an obvious option, since it is present in almost all Linux distributions and is quite stable, proven software, with libraries for many programming languages. DBus also has a very granular and nice permission system, which is a requirement for our project (for security reasons).
But unfortunately we have experienced some drawbacks of DBus:
We have hit some stability bugs: in certain congestion situations there were memory leaks that led to dead IPC, where only an application restart helped.
Merely using DBus added 3-5 MB of RAM usage per application (which, on a system with 512 MB of RAM and multiplied across 25 applications, is 75-125 MB in total - some room for improvement).
The data flow model (signals / methods) seems a bit too complicated for the use case we have.
Our next idea is to switch to one of the available message brokers, but we are also looking for some nice-to-have features:
Being able to broadcast or multicast messages to multiple applications
Presence tracking of applications as they connect to and disconnect from the bus server (the server can broadcast when new applications connect and when applications disconnect)
A watchdog for connected applications: apps sometimes misbehave on the IPC (by not answering IPC messages), and a server-side watchdog could detect that, disconnect the offending application, and inform the others that the application is dead
How do we avoid DBus in this scenario?
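The question names no specific broker, but for illustration, here is a rough sketch of how the broadcast, presence, and watchdog features could look with ZeroMQ (the third-party pyzmq package) in place of DBus. The socket endpoints, message schema, and 60-second liveness budget are all assumptions. Applications would connect a SUB socket to the PUB endpoint to receive broadcasts and a PUSH socket to the PULL endpoint to send messages and periodic heartbeats.

```python
# Sketch of a central "bus" process: fans JSON messages out to all connected
# applications, announces connects/disconnects, and runs a simple watchdog.
import json
import time
import zmq

ctx = zmq.Context()

pub = ctx.socket(zmq.PUB)    # broadcast channel: server -> all apps
pub.bind("ipc:///tmp/bus-pub")

pull = ctx.socket(zmq.PULL)  # inbound channel: apps -> server
pull.bind("ipc:///tmp/bus-in")

last_seen = {}               # app name -> timestamp of its last message

while True:
    # Wait up to 1 s for an inbound message, then run a watchdog pass.
    if pull.poll(timeout=1000):
        msg = json.loads(pull.recv())
        app = msg["from"]                      # assumed message field
        if app not in last_seen:
            pub.send_json({"event": "connected", "app": app})
        last_seen[app] = time.time()
        if msg.get("type") != "heartbeat":
            pub.send_json(msg)                 # broadcast ordinary messages

    # Watchdog: declare apps dead after 60 s of silence and tell the others.
    now = time.time()
    for app, seen in list(last_seen.items()):
        if now - seen > 60:
            del last_seen[app]
            pub.send_json({"event": "disconnected", "app": app})
```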

Why is a Trusted Execution Environment more significant for mobile devices?

I've been trying to understand what a Trusted Execution Environment is and how it works. Why is there such a strong emphasis on mobile devices? I've been trying to find out what the difference is between personal computers and mobile devices with respect to a TEE. What am I missing?
Even though it's late, I will add my comments in the simplest possible way for reference.
As the world moves toward enterprise mobility, using mobile devices for work is becoming essential for many companies and organizations. From there comes a need to secure those devices: not only the data, but the processes and memory allocation as well, especially as governments and sensitive departments start to use mobile devices.
Starting from the very lowest level of mobile device architecture: every mobile device has a processor, and processor manufacturers have come up with technology that creates two isolated areas running at the same time on the CPU (e.g. ARM TrustZone), controlled at the SoC (System on Chip) level.
The first area is the one everyone uses on mobile devices (the Normal World, or Rich Execution Environment - REE); the second is the secure area (the Secure World, or Trusted Execution Environment - TEE). Each area has its own operating system running on the same CPU, but their processes and memory allocation are totally separate.
Many mobile device manufacturers (e.g. Samsung) have started to utilize that area by loading a third-party secure operating system (OS) into it (e.g. the Kinibi OS from Trustonic).
Developing applications (Trusted Applications - TAs) for the secure world is not an easy process, provisioning them there is another story, and integrating those applications with the normal world is yet another (a special SDK provided by the TEE OS owner has to be used).
It is worth mentioning that applications running in the TEE can have extraordinary privileges, so TEE OS owners normally limit what TAs can do.
Lastly, although the TEE is considered (so far) a secure area for sensitive processes, there are other ways to achieve the same level of security (or even better) on mobile devices.

Can you think of a reason why Windows might not enable audio if no one is logged in?

I'm having a bizarre problem with some virtual servers created to record podcasts. They run on Amazon AWS as Windows Server 2012 instances, and a small C# app tells FFmpeg to do the heavy lifting of capturing the virtual screen and reading from the virtual sound card (Virtual Audio Cable: https://en.wikipedia.org/wiki/Virtual_Audio_Cable) via DirectShow filters.
The problem I have is that if I leave the machine to do its stuff unattended, the recordings are sometimes silent. If I log in via VNC and watch it working, the audio is recorded just fine. All other aspects of the test operation are the same, and the virtual machine is shut down between successive recordings, so each one should theoretically be a clean slate. The app runs under a logged-in session (hence the use of VNC rather than RDP).
I'm now wondering if there is some optimisation in the Windows sound engine whereby it doesn't bother playing audio if it thinks no one is listening. The confusing thing to me is that not every virtual machine suffers these problems; some of them record fine in unattended mode (and they're all created from the same seed virtual hard disk image).
I'm asking this question with the aim of putting together a list of things I can check, look into, or debug... I don't have much knowledge of how MME/DirectSound/WASAPI work internally.
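One concrete item for that list: log whether an interactive console session is actually attached when each recording starts. A diagnostic sketch (Python here for brevity; the same Win32 call is reachable from the C# app) might look like this:

```python
# Diagnostic sketch: check for an attached console session before recording.
# WTSGetActiveConsoleSessionId returns 0xFFFFFFFF when no session is attached
# to the (virtual) console, which would fit the "silent when unattended" case.
import ctypes

kernel32 = ctypes.windll.kernel32
kernel32.WTSGetActiveConsoleSessionId.restype = ctypes.c_uint32

session_id = kernel32.WTSGetActiveConsoleSessionId()
if session_id == 0xFFFFFFFF:
    print("No active console session - audio endpoints may be unavailable")
else:
    print(f"Active console session id: {session_id}")
```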

How to do power saving on an ARM-based embedded Linux system?

I plan to develop a nice little application that will run on an ARM-based embedded Linux platform; however, since that platform will be battery-powered, I'm searching for relevant information on how to handle power saving.
It is kind of important to get decent battery time.
I think the Linux kernel implemented some support for this, but I can't find any documentation on this subject.
Any input on how to design my program and the system is welcome.
Any input on how the Linux kernel tries to solve this type of problem is also welcome.
Other questions:
How much does the program in user space need to do?
And do you need to modify the kernel?
What kernel system calls or APIs are good to know about?
Update:
It seems like the folks involved with the "Free Electrons" site have produced some nice presentations on this subject.
http://free-electrons.com/services/power-management/
http://free-electrons.com/docs/power
http://free-electrons.com/docs/optimizations
But maybe someone else has even more information on this subject?
Update:
It seems like Adam Shiemke's idea to go look at the MeeGo project may be the best tip so far.
It may be the best battery-powered embedded Linux project out there at this moment.
And Nokia is usually kind of good at this type of thing.
Update:
One has to be careful with Android, since it has a "modified" Linux kernel at the bottom, and some of the things the folks at Google have done do not use baseline/normal Linux kernels. I think some of their power management ideas could be troublesome to reuse in other projects.
I haven't actually done this, but I have experience with the two separately (Linux and embedded power management). There are two main Linux distributions that come to mind when thinking about power management: Android and MeeGo. MeeGo uses (as far as I can tell) an unmodified 2.6 kernel with some extras hanging on. I wasn't able to find much on exactly what their power management strategy is, although I suspect more will come out about it as the product approaches maturity.
There is much more information available on Android, however. They run a fairly heavily modified 2.6 kernel. You can see a good bit on the different strategies implemented in http://elinux.org/Android_Power_Management (as well as kernel drama). Some other links:
https://groups.google.com/group/android-kernel/browse_thread/thread/ee356c298276ad00/472613d15af746ea?lnk=raot&pli=1
http://www.ok-labs.com/blog/entry/context-switching-in-context/
I'm sure that you can find more links of this nature. Since both projects are open source, you can grab the kernel code, and probably get further information from people who actually know what they are talking about in forums and groups.
At the driver level, you need to make sure that your drivers can properly handle suspend and can shut off devices that are not in use. Most devices aimed at the mobile market offer very fine-grained support for turning individual components off and for tweaking clock settings (remember, dynamic power scales with voltage squared times frequency, so lowering the clock, which usually permits a lower voltage, saves power superlinearly).
Hope this helps.
You can do quite a bit of power-saving without requiring any special support from the OS, assuming you are writing (or at least have the source code for) your application and drivers.
Your drivers need to be able to disable their associated devices and bring them back up without requiring a restart or introducing system instability. If your devices are connected to a PCI/PCIe bus, research which power states they support (D0 - D3) and what your driver needs to do to transition between these low-power modes. If you are selecting hardware devices to use, look for devices that adhere to the PCI Power Management Specification or have similar functionality (such as a sleep mode and a "wake up" interrupt signal).
When your device boots up, every device that has the ability to detect whether it is connected to anything needs to do so. If any ports or buses detect that they are not being used, power them down or put them to sleep. A port running at full power but sitting unused can waste more power than you might think it would. Depending on your particular hardware and use case, it might also be useful to have a background app that monitors device usage, identifies unused/idle resources, and acts appropriately (like a "screen saver" for your hardware).
Your application software should make sure to detect whether hardware devices are powered up before attempting to use them. If you need to access a device that might be placed in a low-power mode, your application needs to be able to handle a potentially lengthy delay in waiting for the device to wake up and respond. Your applications should also be considerate of a device's need to sleep. If you need to send a series of commands to a hardware device, try to buffer them up and send them out all at once instead of spacing them out and requiring multiple wakeup->send->sleep cycles.
Don't be afraid to under-clock your system components slightly. Besides saving power, this can help them run cooler (which requires less power for cooling). I have seen some designs that use a CPU that is more powerful than necessary by a decent margin, which is then under-clocked by as much as 40% (bringing the performance down to the original level but at a fraction of the power cost). Also, don't be afraid to spend power to save power. That is, don't be afraid to use CPU time monitoring hardware devices for opportunities to disable/hibernate them (even if it will cause your CPU to use a bit more power). Most of the time, this tradeoff results in a net power savings.
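To make the "power down what's idle" idea concrete, here is a small sketch using Linux runtime power management via sysfs. The USB device path is hypothetical, and writing these files generally requires root; on other buses the same control/runtime_status files apply.

```python
# Sketch: let the kernel runtime-suspend an idle device via sysfs runtime PM.
from pathlib import Path

POWER_DIR = Path("/sys/bus/usb/devices/1-1/power")  # hypothetical device

def set_runtime_pm(auto: bool) -> None:
    # "auto" allows the kernel to suspend the device when idle;
    # "on" holds it at full power.
    (POWER_DIR / "control").write_text("auto" if auto else "on")

def runtime_status() -> str:
    # Reports "active", "suspended", etc.
    return (POWER_DIR / "runtime_status").read_text().strip()

set_runtime_pm(True)
print(runtime_status())  # "suspended" once the device has gone idle
```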
One of the most important things to think about as a power-aware application developer is to avoid unnecessary timers. If possible, use interrupt-driven solutions instead of polled solutions. If a timer must be used, then use as long a polling interval as possible.
For example, if something special should be done at a certain room temperature, it is unnecessary to check the temperature every 100 ms, since the temperature of a room changes slowly. A more reasonable polling interval could be 60 s.
This affects power consumption in several ways. In Linux, the CPUIDLE subsystem takes the CPU (SoC) to as deep a power-saving state as possible, depending on when it predicts the next wakeup will occur. Having a lot of timers in a system fragments the sleep, making it impossible to stay in the deeper sleep states for longer periods. A typical deep sleep state for CPUIDLE turns the CPU off but keeps the RAM in self-refresh. When a timer triggers, the CPU boots and serves the application's timer.
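The room-temperature example rendered as code: a minimal sketch that polls a sysfs thermal zone once per minute, leaving CPUIDLE long idle windows between readings. The zone path and the 30 °C threshold are assumptions.

```python
# Poll the temperature every 60 s instead of every 100 ms so the SoC can stay
# in deep sleep between readings. Thermal zone values are millidegrees Celsius.
import time

THERMAL_ZONE = "/sys/class/thermal/thermal_zone0/temp"  # assumed zone path

while True:
    with open(THERMAL_ZONE) as f:
        temp_c = int(f.read()) / 1000.0
    if temp_c > 30.0:
        print(f"Temperature threshold crossed: {temp_c:.1f} C")
    time.sleep(60)  # a long poll interval -> longer, deeper sleep states
```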
It's not actually your topic, but it might come in handy for logging your progress: I was looking into testing/measuring my embedded Linux system, and Chris Desjardins from this forum recommended this to me:
I have successfully used bootchart in the past:
http://elinux.org/Bootchart
Here is a list of other things that may also help:
http://elinux.org/Boot_Time
