When I use some applications from the Microsoft Store (e.g., the HoloAnatomy Demo) for more than 30 minutes, my HoloLens 2 gets so hot that it shuts down. Is this normal? Can I resolve this problem?
It is recommended that you operate the HoloLens between +50ºF (+10ºC) and +80ºF (+27ºC). You can find more detail on the temperature range in Temperature and regulatory information; here is a quote from the guidelines:
Store device in an environment within temperature range (either in Standby or Off) for an hour before using the device.
Use device in an environment within temperature range.
Use device indoors.
Use device in shade; even indoors, avoid direct sunlight through windows or skylights.
If you follow the above guidelines but experience unexpected overheating issues, ensure Full telemetry is enabled before submitting Feedback.
I want to measure the arbitration time on the MAC layer at runtime.
Setup: I have installed a patched Linux kernel with the ath9k driver. I have no tools other than my two computers to find out what I am looking for. Both exchange messages over a 5 GHz ad hoc network, so there is nothing else between them. Moreover, both use ptpd over Ethernet. I followed the instructions at https://wireless.wiki.kernel.org/en/users/drivers/ath9k/debug to enable debug mode; however, when I run dmesg, I am not sure I see more information than before. Additionally, retransmissions on the MAC layer are disabled.
Anyway, I found out that with my setup I cannot see exactly what happens on my Wi-Fi adapter. Still, I want to get close, for instance to see a tendency, a rise or fall, of the arbitration time. My idea is to use kernel traces to get as close as possible to the MAC layer. By the way, if I have wrong assumptions, I appreciate any hint. I assume the messages pass through the following layers (omitting the application here):
kernel (tx) UDP socket -> IP -> 80211a -> PHY -> 80211a -> IP -> UDP socket (rx) kernel
Current state: So far, with a little help from trace-cmd, ftrace, and the ath9k source code, I have figured out that the closest I can get is in functions of xmit and mac. In my traces I can see that an skb is consumed at one point, as well as a DMA transfer to the Wi-Fi device. From there on, I suppose, it is up to my Wi-Fi adapter to do the arbitration and transmission. The next thing I can trace happens in the receiver's kernel, where I can see something like an rx interrupt. So the MAC layer operations and the physical transmission occur between those two measurement points. Am I correct so far? Is it true that the time spent on IP is negligible because there is no routing in ad hoc mode (A)? If so, I can measure the MAC layer operations plus transmission time, but I cannot separate them from each other.
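To pair the two measurement points, my post-processing currently looks roughly like the following sketch (the event names and the trace text format are just what I grep for in my own traces, so treat them as assumptions; the clocks are synchronized via ptpd):

    import re

    # Extract timestamps from trace-cmd/ftrace text output on sender and
    # receiver, then compute tx -> rx deltas. The event names below are
    # assumptions; substitute whatever actually appears in your traces.
    TS_RE = re.compile(r"\s(\d+\.\d+):\s+(\S+)")

    def event_times(trace_file, event_name):
        """Return all timestamps (in seconds) at which event_name appears."""
        times = []
        with open(trace_file) as f:
            for line in f:
                m = TS_RE.search(line)
                if m and event_name in m.group(2):
                    times.append(float(m.group(1)))
        return times

    tx = event_times("sender.trace", "ath9k_hw_txstart")  # assumed tx event
    rx = event_times("receiver.trace", "ath_rx_tasklet")  # assumed rx event

    # Pair each tx with the first rx that follows it (one frame in flight).
    for t in tx:
        later = [r for r in rx if r > t]
        if later:
            print(f"tx {t:.6f} -> rx {later[0]:.6f}: {(later[0] - t) * 1e6:.1f} us")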
The ath9k source code contains lots of other functions that did not appear in my traces, like ath_tx_send_normal and ath_tx_complete, so I am wondering if I missed something in the ath9k debug tutorial. Is there any possibility of retrieving the information I am looking for (B)? If not, is it sufficient to calculate the physical transmission time and subtract it from the duration it takes to get from one kernel to the other (C)?
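To make (C) concrete: my understanding is that the pure 802.11a transmission time can be computed from the PHY parameters and subtracted from the kernel-to-kernel duration, leaving the MAC part. A minimal sketch (the frame length and rate are made-up inputs; DIFS/backoff and the ACK are deliberately excluded, since the backoff is exactly the arbitration part I want to isolate):

    import math

    def ofdm_airtime_us(payload_bytes, rate_mbps):
        """Rough 802.11a (OFDM) frame airtime: preamble + SIGNAL + data symbols.

        Ignores MAC overhead (DIFS, backoff, ACK); only the on-air time of
        one frame is estimated.
        """
        ndbps = {6: 24, 9: 36, 12: 48, 18: 72,
                 24: 96, 36: 144, 48: 192, 54: 216}[rate_mbps]
        # 16 service bits + payload + 6 tail bits, rounded up to whole symbols
        n_symbols = math.ceil((16 + 8 * payload_bytes + 6) / ndbps)
        return 16 + 4 + 4 * n_symbols  # 16 us preamble + 4 us SIGNAL + 4 us/symbol

    # e.g. a 1500-byte MPDU at 54 Mbit/s
    print(ofdm_airtime_us(1500, 54))  # ~244 us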
Besides, I would like to know how I can access the strings that appear in functions like read_file_phy_err and ath_tx_complete, and the ath_dbg output (D).
I would really appreciate any hints that help me understand the processes in the Linux kernel, ath9k, and 802.11. If I have not made clear what I am doing, I will try to clarify. Thanks in advance.
I am working on 3D photography and need to synchronize 4 Azure Kinect cameras.
I am not comfortable with C++ and am working in Python. Could anyone help me find code (like the green screen example) for synchronizing in Python?
Please refer to Synchronize multiple devices for physically connecting multiple Azure Kinect cameras.
I have written a simple Python example of configuring 2 Azure Kinect devices to be synced together where one is master and the other subordinate: k4a_sync.py. It assumes that the k4a python package has already been installed and that the devices are already physically connected with a sync wire before running the example.
Note that based on reading the system timestamps of the collected captures, the time between captures seems to be on the order of 100-200 ms, which is way higher than the synchronization settings should produce. This may be related to an open issue: https://github.com/microsoft/Azure-Kinect-Sensor-SDK/issues/1665
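If the Microsoft k4a package is a hurdle, the same master/subordinate configuration can also be expressed with the third-party pyk4a bindings. Here is a minimal, untested sketch (the device indices, the 160 us subordinate delay, and the capture timestamp attribute are assumptions to verify against pyk4a's documentation and your rig):

    from pyk4a import PyK4A, Config, WiredSyncMode, ColorResolution, DepthMode

    # Sketch of a 2-device setup using the third-party pyk4a bindings
    # (not the k4a package used in k4a_sync.py).
    sub = PyK4A(Config(
        color_resolution=ColorResolution.RES_720P,
        depth_mode=DepthMode.NFOV_UNBINNED,
        wired_sync_mode=WiredSyncMode.SUBORDINATE,
        subordinate_delay_off_master_usec=160,  # offset depth lasers to avoid interference
    ), device_id=1)
    master = PyK4A(Config(
        color_resolution=ColorResolution.RES_720P,
        depth_mode=DepthMode.NFOV_UNBINNED,
        wired_sync_mode=WiredSyncMode.MASTER,
    ), device_id=0)

    sub.start()     # subordinates must be started before the master
    master.start()

    for _ in range(10):
        cap_m, cap_s = master.get_capture(), sub.get_capture()
        # device timestamps of the color images (attribute name per pyk4a docs)
        print(cap_m.color_timestamp_usec, cap_s.color_timestamp_usec)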
I am working on a universal application, and I am trying to detect whether it runs on a desktop computer or on a real IoT device (Raspberry Pi 2). Following the recommendation, I am trying to use API contract checks; however, this returns true even on the desktop machine:
ApiInformation.IsApiContractPresent( "Windows.Devices.DevicesLowLevelContract", 1, 0 );
Obviously when I try to call GpioController.GetDefault(), it fails on the desktop, but strangely with a FileNotFoundException: "The specified module could not be found."
So what is the right way to detect a real device?
Thanks for your help,
György
Edit:
On some desktops GpioController.GetDefault() returns null, while on other machines it fails with FileNotFoundException.
Edit:
My goal is to ensure that I can safely call any GPIO or IoT Core specific APIs without using try-catch blocks everywhere to handle exceptions when running on a non-IoT device.
You can find the type of device your app is running on by
Windows.System.Profile.AnalyticsInfo.VersionInfo.DeviceFamily
Source:
https://msdn.microsoft.com/en-us/library/windows/apps/windows.system.profile.analyticsversioninfo.aspx
Microsoft does suggest maximising your reach with universal apps by checking for capabilities instead of just checking the device family.
There's an article about all that here:
https://msdn.microsoft.com/en-us/library/windows/apps/dn894631.aspx
It depends on what aspect of a "real device" you want to check. Using API Contract information is not a good proxy, as you have found (although it should return null, not crash, on desktop -- that's a bug). Using AnalyticsInfo can be a reasonable proxy but you have to be careful about receiving new values over time, plus it actually identifies the OS type rather than the physical hardware. Enumerating hardware devices is the best way to detect hardware, but they can come and go dynamically as the user plugs and unplugs them.
What is it you are looking to do differently based on the answer?
I'm building a list of variables available on mobile devices, for device signature analysis. Here is what I've identified so far. Please help me fill out the list. Thanks!
General HTTP Variables
IP Address
Cookie
User Agent
Smart Phone Variables
Device ID
Geolocation
Device MAC Address
Javascript Variables
Current Time
Time Zone
Screen Size
Supported Fonts
Preferred Language
Installed Components
Cookies Enabled
The general pattern for getting variables might look something like this:
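For instance, here is a minimal server-side sketch, assuming a Flask endpoint (the framework choice and cookie name are mine, not part of the list above); the JavaScript variables would be collected in the browser and POSTed to the same endpoint:

    from flask import Flask, request, jsonify

    app = Flask(__name__)

    @app.route("/fingerprint", methods=["POST"])
    def fingerprint():
        # Server-visible HTTP variables
        signature = {
            "ip": request.remote_addr,
            "user_agent": request.headers.get("User-Agent"),
            "cookie": request.cookies.get("session_id"),  # cookie name is an assumption
            "accept_language": request.headers.get("Accept-Language"),
        }
        # Client-side JavaScript variables (time zone, screen size, fonts, ...)
        # are expected to be gathered in the browser and POSTed as JSON.
        signature.update(request.get_json(silent=True) or {})
        return jsonify(signature)

    if __name__ == "__main__":
        app.run()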
The Panopticlick project fingerprints browsers using a variety of techniques, including the version numbers of the browser and all installed components (Flash, Java, etc.). The project also looks at supported fonts, preferred language, screen size, and time zone.
Check out the results.
I plan to develop a nice little application that will run on an ARM-based embedded Linux platform; however, since that platform will be battery-powered, I'm searching for relevant information on how to handle power saving.
Getting decent battery life is kind of important.
I think the Linux kernel has implemented some support for this, but I can't find any documentation on the subject.
Any input on how to design my program and the system is welcome.
Any input on how the Linux kernel tries to solves this type of problem is also welcome.
Other questions:
How much does the program in user space need to do?
And do you need to modify the kernel?
What kernel system calls or APIs are good to know about?
Update:
It seems like the folks involved with the "Free Electrons" site have produced some nice presentations on this subject.
http://free-electrons.com/services/power-management/
http://free-electrons.com/docs/power
http://free-electrons.com/docs/optimizations
But maybe someone else has even more information on this subject?
Update:
It seems like Adam Shiemke's idea to go look at the MeeGo project may be the best tip so far.
It may be the best battery powered Embedded Linux project out there at this moment.
And Nokia is usually kind of good at this type of thing.
Update:
One has to be careful with Android, since it has a "modified" Linux kernel underneath, and some of the things the folks at Google have done do not use baseline/normal Linux kernels. I think some of their power management ideas could be troublesome to reuse for other projects.
I haven't actually done this, but I have experience with the two pieces separately (Linux and embedded power management). Two main Linux distributions come to mind when thinking about power management: Android and MeeGo. MeeGo uses (as far as I can tell) an unmodified 2.6 kernel with some extras hanging on. I wasn't able to find a lot on exactly what their power management strategy is, although I suspect more will come out about it as the product approaches maturity.
There is much more information available on Android, however. It runs a fairly heavily modified 2.6 kernel. You can see a good bit about the different strategies implemented at http://elinux.org/Android_Power_Management (as well as the kernel drama). Some other links:
https://groups.google.com/group/android-kernel/browse_thread/thread/ee356c298276ad00/472613d15af746ea?lnk=raot&pli=1
http://www.ok-labs.com/blog/entry/context-switching-in-context/
I'm sure that you can find more links of this nature. Since both projects are open source, you can grab the kernel code, and probably get further information from people who actually know what they are talking about in forums and groups.
At the driver level, you need to make sure that your drivers can properly handle suspend, and that they shut off devices that are not in use. Most devices aimed at the mobile market offer very fine-grained support for turning individual components off and for tweaking clock settings (remember, dynamic power scales roughly with voltage squared times frequency, P ≈ C·V²·f, so lowering clock and core voltage together pays off more than linearly).
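A back-of-the-envelope illustration of why scaling clock and voltage together pays off (all the numbers here are made up):

    # Rough dynamic-power model: P ~ C * V^2 * f (illustrative values only)
    def dynamic_power(c_eff, voltage, freq_hz):
        return c_eff * voltage**2 * freq_hz

    p_full = dynamic_power(1e-9, 1.2, 800e6)  # full speed
    p_slow = dynamic_power(1e-9, 1.0, 480e6)  # 40% lower clock, lower core voltage
    print(p_slow / p_full)  # ~0.42: a 40% clock cut buys ~58% power savings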
Hope this helps.
You can do quite a bit of power-saving without requiring any special support from the OS, assuming you are writing (or at least have the source code for) your application and drivers.
Your drivers need to be able to disable their associated devices and bring them back up without requiring a restart or introducing system instability. If your devices are connected to a PCI/PCIe bus, research which power states they support (D0 - D3) and what your driver needs to do to transition between these low-power modes. If you are selecting hardware devices to use, look for devices that adhere to the PCI Power Management Specification or have similar functionality (such as a sleep mode and a "wake up" interrupt signal).
When your device boots up, every device that has the ability to detect whether it is connected to anything needs to do so. If any ports or buses detect that they are not being used, power them down or put them to sleep. A port running at full power but sitting unused can waste more power than you might think it would. Depending on your particular hardware and use case, it might also be useful to have a background app that monitors device usage, identifies unused/idle resources, and acts appropriately (like a "screen saver" for your hardware).
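On Linux, one low-effort way to implement that "hardware screen saver" idea is through the runtime power-management knobs the kernel exposes in sysfs. A sketch (the paths are the standard sysfs layout for PCI devices, but check what your kernel and drivers actually support; this needs root):

    import glob

    # Opt every PCI device into kernel runtime power management by writing
    # "auto" to its sysfs power/control file (the default is often "on",
    # i.e. never auto-suspend). Verify that each device's driver actually
    # handles runtime suspend before enabling this blindly.
    for ctl in glob.glob("/sys/bus/pci/devices/*/power/control"):
        try:
            with open(ctl, "w") as f:
                f.write("auto")
            print("runtime PM enabled:", ctl)
        except OSError as e:
            print("skipped:", ctl, e)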
Your application software should make sure to detect whether hardware devices are powered up before attempting to use them. If you need to access a device that might be placed in a low-power mode, your application needs to be able to handle a potentially lengthy delay in waiting for the device to wake up and respond. Your applications should also be considerate of a device's need to sleep. If you need to send a series of commands to a hardware device, try to buffer them up and send them out all at once instead of spacing them out and requiring multiple wakeup->send->sleep cycles.
Don't be afraid to under-clock your system components slightly. Besides saving power, this can help them run cooler (which requires less power for cooling). I have seen some designs that use a CPU that is more powerful than necessary by a decent margin, which is then under-clocked by as much as 40% (bringing the performance down to the original level but at a fraction of the power cost). Also, don't be afraid to spend power to save power. That is, don't be afraid to use CPU time monitoring hardware devices for opportunities to disable/hibernate them (even if it will cause your CPU to use a bit more power). Most of the time, this tradeoff results in a net power savings.
One of the most important things to think about as a power-aware application developer is to avoid unnecessary timers. If possible, use interrupt-driven solutions instead of polled solutions. If a timer must be used, then use as long a polling interval as possible.
For example, if something special should be done at a certain room temperature, it is unnecessary to check the temperature every 100 ms, since the temperature of a room changes slowly. A more reasonable polling interval could be 60 s.
This affects power consumption in several ways. In Linux, the CPUIDLE subsystem puts the CPU (SoC) into as deep a power-saving state as possible, depending on when it predicts the next wakeup will occur. Having a lot of timers in a system fragments the sleep, making it impossible to stay in the deeper sleep states for longer periods. A typical deep sleep state for CPUIDLE turns the CPU off but keeps the RAM in self-refresh. When a timer triggers, the CPU boots back up and serves the application's timer.
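To make the interrupt-driven versus polled distinction concrete, here is a sketch that blocks on a GPIO edge using the legacy sysfs GPIO interface instead of waking up every 100 ms (the pin number is an assumption, and newer kernels would use the gpiod character device instead):

    import select

    # Instead of waking up every 100 ms to poll, block until the kernel
    # signals an edge on the GPIO line. Assumes the pin has been exported
    # and configured beforehand:
    #   echo 42 > /sys/class/gpio/export
    #   echo both > /sys/class/gpio/gpio42/edge
    GPIO_VALUE = "/sys/class/gpio/gpio42/value"  # pin 42 is an assumption

    with open(GPIO_VALUE) as f:
        poller = select.poll()
        poller.register(f, select.POLLPRI | select.POLLERR)  # POLLPRI signals the edge
        f.read()  # consume the initial state so the first poll() doesn't fire at once
        while True:
            events = poller.poll(60_000)  # sleep up to 60 s; CPU can enter deep idle
            f.seek(0)
            value = f.read().strip()
            print("edge" if events else "timeout", value)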
It's not exactly your topic, but it might come in handy as you track your progress: I was looking for ways to test/measure my embedded Linux system, and Chris Desjardins from this forum recommended this:
I have successfully used bootchart in the past:
http://elinux.org/Bootchart
Here is a list of other things that may also help:
http://elinux.org/Boot_Time