I am trying to create an installer that checks to see if the current hardware meets minimum system requirements.
In order to do this I need the processor, total physical RAM, and operating system version, and most importantly the SSD and TPM information.
I have searched the forums, but I haven't found a function that will give me information like the SSD and TPM details.
Does anyone have any idea how I might accomplish this using an NSIS script?
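I haven't found a built-in NSIS function for these either. What you can do is query the underlying Win32 APIs, e.g. from NSIS via the System plugin or a small helper DLL. Here is a minimal C sketch of the queries involved, assuming Windows 7 or later; drive selection and error handling are simplified, and PhysicalDrive0 is just an example:

```c
#include <windows.h>
#include <winioctl.h>
#include <tbs.h>      /* TPM Base Services; link with tbs.lib */
#include <stdio.h>

int main(void) {
    /* Total physical RAM, reported in kilobytes. */
    ULONGLONG ramKb = 0;
    if (GetPhysicallyInstalledSystemMemory(&ramKb))
        printf("RAM: %llu MB\n", ramKb / 1024);

    /* SSD check: a drive reporting no seek penalty is (almost always) an SSD. */
    HANDLE h = CreateFileA("\\\\.\\PhysicalDrive0", 0,
                           FILE_SHARE_READ | FILE_SHARE_WRITE,
                           NULL, OPEN_EXISTING, 0, NULL);
    if (h != INVALID_HANDLE_VALUE) {
        STORAGE_PROPERTY_QUERY q = {0};
        DEVICE_SEEK_PENALTY_DESCRIPTOR d = {0};
        DWORD n = 0;
        q.PropertyId = StorageDeviceSeekPenaltyProperty;
        q.QueryType  = PropertyStandardQuery;
        if (DeviceIoControl(h, IOCTL_STORAGE_QUERY_PROPERTY,
                            &q, sizeof q, &d, sizeof d, &n, NULL))
            printf("SSD: %s\n", d.IncursSeekPenalty ? "no" : "yes");
        CloseHandle(h);
    }

    /* TPM check: ask TBS whether a TPM device is present. */
    TPM_DEVICE_INFO info = {0};
    if (Tbsi_GetDeviceInfo(sizeof info, &info) == TBS_SUCCESS)
        printf("TPM version: %u\n", info.tpmVersion);
    else
        printf("No TPM found\n");
    return 0;
}
```

The processor and OS version can be read the same way (e.g. from the registry or GetVersionEx-style APIs), and all of these calls can be reached from NSIS through System::Call if you don't want a helper executable.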
I am attempting to write an x86_64 PC emulator. I was wondering in what memory location the UEFI is mapped. I know that a BIOS is usually mapped from 0xf0000-0xfffff and 0xf0000000-0xffffffff. Is UEFI mapped to the same locations?
Yes, the UEFI firmware is loaded to the same locations as legacy BIOS. Otherwise, why would cs:ip point to 0xFFFFFFF0 in its initial state?
Check out the OvmfPkg in EDK II. This is an open-source UEFI firmware for virtual machines. You can load it into well-known emulators like Bochs and QEMU.
You can also use VMware's EFI firmware, but since it is proprietary, you might want to read VMware's license before you proceed with it.
UEFI is not loaded at a specific memory location. Something needs to be placed where the processor starts fetching instructions, and then the SEC and PEI stages prepare for DXE to be uncompressed somewhere dynamically, loading individual PE/COFF images as it goes.
The way you find out what memory regions are reserved is by calling the GetMemoryMap boot service at runtime.
You may find the documentation of the existing EDK2 virtual machine port helpful.
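To make the GetMemoryMap suggestion concrete, here is a minimal EDK II-style sketch using the usual two-call pattern: query the required size, allocate, then fetch the map. The padding of two extra descriptors is a common convention, since the allocation itself can change the map:

```c
#include <Uefi.h>
#include <Library/UefiBootServicesTableLib.h>
#include <Library/MemoryAllocationLib.h>
#include <Library/UefiLib.h>

EFI_STATUS PrintMemoryMap(VOID) {
  UINTN                  MapSize = 0, MapKey, DescSize, i;
  UINT32                 DescVersion;
  EFI_MEMORY_DESCRIPTOR  *Map = NULL, *Desc;
  EFI_STATUS             Status;

  /* First call with a zero-sized buffer just reports the size needed. */
  Status = gBS->GetMemoryMap(&MapSize, Map, &MapKey, &DescSize, &DescVersion);
  if (Status != EFI_BUFFER_TOO_SMALL) return Status;

  /* Allocating the buffer can itself add map entries, so pad a little. */
  MapSize += 2 * DescSize;
  Map = AllocatePool(MapSize);
  if (Map == NULL) return EFI_OUT_OF_RESOURCES;

  Status = gBS->GetMemoryMap(&MapSize, Map, &MapKey, &DescSize, &DescVersion);
  if (EFI_ERROR(Status)) { FreePool(Map); return Status; }

  /* Descriptors are DescSize apart, which may exceed sizeof(*Desc). */
  for (i = 0; i < MapSize / DescSize; i++) {
    Desc = (EFI_MEMORY_DESCRIPTOR *)((UINT8 *)Map + i * DescSize);
    Print(L"type=%2d phys=0x%016lx pages=%ld\n",
          Desc->Type, Desc->PhysicalStart, Desc->NumberOfPages);
  }
  FreePool(Map);
  return EFI_SUCCESS;
}
```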
I was looking at the Embedded ECG data acquisition system project on Instructables, and it mentions a TODO:
Combining the OS and bare-bone firmware
UNDER CONSTRUCTION
** Since the bootloader only loads one firmware to the Core,
I need to modify the ELF file, to have Linux and bare-bone Core at the same time **
It seems to me an interesting approach to getting a full-featured Linux and a critical real-time OS on one board (for example a Raspberry Pi). Is it really possible? I have heard that Linux can be set up not to use some cores. But I suppose Linux uses virtual memory and bare-bone firmware usually does not. Can memory be shared between these OSes? What about interrupts? Can the two OSes handle interrupts separately? Can the bootloader load these two systems onto the cores at once? I can imagine one thread in the bootloader jumping to the address of the bare-bone OS. Is that the correct approach?
Yes, it is possible, even if the full setup is not straightforward.
A couple of examples:
Xilinx released a white paper explaining how to run Linux + FreeRTOS on a dual-core Zynq ARM
Evidence explained how to run Linux + Erika Enterprise RTOS on a dual-core Freescale i.MX6 ARM
Those examples are based on system partitioning by hard-coding the assignment of the different cores to different OSs.
If your system is capable of hardware-assisted virtualization, you can use a hypervisor for making (and enforcing) such a partitioning. You can for example use Siemens' Jailhouse, KVM or Xen.
Kind of. This is what people already do to some extent with the network stack and drivers. For example, the IsoStack idea works in a similar way. There's a project that actually implements this on Linux by dedicating cores to network cards, but my google-fu is failing me.
I'm running embedded Linux (Debian on ARM/x86_64). Since it is very much like a full OS, apart from some hardware differences and a different platform, you may consider it a regular machine. It will be used in the robotics field, where the computer will ALWAYS be hard reset by cutting power. Using a UPS is not an option, so I need to make the system as close to infallible as possible.
I'm running some processor-intensive tasks, like OpenCV, OpenNI and OpenKinect. How do I use a robust filesystem like ZFS to mirror the entire disk on the SSD for error correction? Does ZFS perform well on Linux? I'm still kind of a newbie in Linux, so I don't understand its internal workings.
My list of possible platforms is:
--Debian on Raspberry Pi
--Kubuntu on ODROID-X2
--Ubuntu on PandaBoard
--Ubuntu on NUC i3/i5
Also, how can I make sure the filesystem doesn't get damaged during a reset? I need the computer to start in good time, i.e. under 3 minutes, for the competition.
I will probably be using a 32 GB SSD, so I guess a 16 GB partition mirrored 2x works, or 12 GB mirrored 3x. I only need to get an OpenCV install working, because the code will be downloaded from a Samba/NFS share automatically!
Thanks for your help and good luck ;)!
ZFS is not suited for low-memory systems. It does perform well on systems with 4 GB of RAM or more.
I am looking for some pointers to understand how the Linux kernel implements the setup of various hardware clocks. This basically relates to setting up the clocks that hardware features like the LCD, UART etc. will use. For example, when Linux boots, how does it handle setting up the clocks for the UART or USB? Maybe there is something like a clock manager.
I am basically trying to implement something similar for a different OS on new hardware that I am working on. Any help would be really appreciated.
[Edit]
Thanks for the replies and the links. So here is what I have implemented up to now. This should give you an idea of where I'm headed.
I looked up the Hardware Reference Manual (HRM) for the particular system I'm targeting and wrote some code to monitor/modify the signals/pins of the peripherals I am interested in, i.e. turning them ON/OFF from the command line. A collection of these clocks/signals together controls a peripheral. The HRM says that if you want to turn on the UART or something, then turn on such-and-such signals/pins. And @BjoernD, yes, I am using something like mmap() to talk to the peripherals.
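To illustrate, here is a minimal sketch of such a utility using mmap() on /dev/mem; the register address and bit position are hypothetical placeholders, the real values come from the HRM:

```c
#include <stdio.h>
#include <stdint.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>

#define CLK_GATE_REG_PHYS  0x4A000100u   /* hypothetical; see your HRM */
#define UART_CLK_EN_BIT    (1u << 3)     /* hypothetical bit position  */

int main(void) {
    int fd = open("/dev/mem", O_RDWR | O_SYNC);
    if (fd < 0) { perror("open /dev/mem"); return 1; }

    long pagesz = sysconf(_SC_PAGESIZE);
    off_t base  = CLK_GATE_REG_PHYS & ~(pagesz - 1);
    off_t off   = CLK_GATE_REG_PHYS - base;

    /* Map the page containing the register into our address space. */
    volatile uint32_t *page = mmap(NULL, pagesz, PROT_READ | PROT_WRITE,
                                   MAP_SHARED, fd, base);
    if (page == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

    volatile uint32_t *reg = page + off / sizeof(uint32_t);
    *reg |= UART_CLK_EN_BIT;             /* ungate the UART clock */
    printf("reg now 0x%08x\n", *reg);

    munmap((void *)page, pagesz);
    close(fd);
    return 0;
}
```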
The meat of my question is that I want to understand the design and implementation of a Clock/Peripheral Manager that uses the utility I have already written. This Clock/Peripheral Manager would give me control over enabling/disabling the peripherals I want. Basically, this Manager would enable me to make changes in the init code that is running right now. Also, at run time processes could call this Manager to turn devices ON/OFF so that power consumption is optimized. It might not have made perfect sense, but I'm myself still trying to wrap my head around this.
Now I'm sure something like this has been implemented in Linux, or for that matter in any OS, for power reasons (nobody would want to waste power by turning on all peripherals at boot time). I want to understand its software architecture. A reference from any OS would do for now, to at least get a head start. Also, I am not writing my own OS; there is an OS in place, but I'm looking more at board-level software, a.k.a. a BSP, to work on. But thanks for the OS links anyway, they are really good. Appreciate it.
Thanks!
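As a sketch of the kind of Clock/Peripheral Manager described above - loosely modeled on the refcounted clk_get()/clk_enable()/clk_disable() pattern of the Linux clock framework; all names and the clock table here are made up for illustration:

```c
#include <stdio.h>
#include <string.h>

/* One entry per gateable clock; reg_bit would come from the HRM. */
struct clk {
    const char   *name;
    unsigned int  reg_bit;    /* which gate bit to poke           */
    int           refcount;   /* how many users currently need it */
};

static struct clk clk_table[] = {
    { "uart0", 3, 0 },
    { "usb",   7, 0 },
    { "lcd",  12, 0 },
};

/* Placeholder for the mmap-based register utility described above. */
static void hw_gate(unsigned int bit, int on) {
    printf("gate bit %u -> %s\n", bit, on ? "on" : "off");
}

struct clk *clk_get(const char *name) {
    for (size_t i = 0; i < sizeof clk_table / sizeof *clk_table; i++)
        if (strcmp(clk_table[i].name, name) == 0)
            return &clk_table[i];
    return NULL;
}

/* Only the first user touches hardware; later users share the clock. */
void clk_enable(struct clk *c)  { if (c && c->refcount++ == 0) hw_gate(c->reg_bit, 1); }
void clk_disable(struct clk *c) { if (c && --c->refcount == 0) hw_gate(c->reg_bit, 0); }

int main(void) {
    struct clk *uart = clk_get("uart0");
    clk_enable(uart);    /* driver A needs the UART        */
    clk_enable(uart);    /* driver B too; no hardware poke */
    clk_disable(uart);   /* A done; still on for B         */
    clk_disable(uart);   /* B done; clock actually gated   */
    return 0;
}
```

The refcounting is the key design point: the init code and run-time processes can all call the Manager without knowing about each other, and a clock is only gated off when the last user releases it.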
What you want to achieve is highly specific to a) the platform you are using and b) the device you want to use. For instance, on x86 there are 3 ways to communicate with a device:
Interrupts allow the device to signal the CPU. The OS usually provides mechanisms to register interrupt handlers - functions that are called upon the occurrence of an interrupt. In Linux, see request_irq() and friends in include/linux/interrupt.h.
Memory-mapped I/O is physical memory of the device that the platform's BIOS makes available in the same way you also access plain physical memory - simply by writing to a memory address. What exactly is behind such memory (e.g., network interface config registers or an LCD frame buffer) depends on the device and is usually specified in the device's data sheet.
I/O ports are accessed through a special address space and special instructions (inb/outb and co.). Other than that, they work similarly to I/O memory; a schematic sketch covering all three mechanisms follows this list.
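A schematic Linux kernel-module fragment touching all three mechanisms. The IRQ number and MMIO base are made-up values for illustration; 0x3f8 is the classic COM1 port base:

```c
#include <linux/module.h>
#include <linux/interrupt.h>
#include <linux/io.h>      /* ioremap(), readl(), inb()/outb() */

#define DEMO_IRQ   5
#define DEMO_MMIO  0xfe000000UL   /* hypothetical device register block */
#define DEMO_PORT  0x3f8          /* classic COM1 I/O port base         */

static void __iomem *regs;

/* 1) Interrupts: called whenever the device raises DEMO_IRQ. */
static irqreturn_t demo_isr(int irq, void *dev_id) {
    pr_info("demo: interrupt\n");
    return IRQ_HANDLED;
}

static int __init demo_init(void) {
    int ret = request_irq(DEMO_IRQ, demo_isr, IRQF_SHARED, "demo", &regs);
    if (ret) return ret;

    /* 2) Memory-mapped I/O: map the physical register block, then
     *    access it through the io accessors. */
    regs = ioremap(DEMO_MMIO, 0x1000);
    if (regs)
        pr_info("demo: reg0 = 0x%x\n", readl(regs));

    /* 3) Port I/O: special address space, special instructions. */
    outb(0x00, DEMO_PORT + 1);    /* e.g. disable UART interrupts */
    return 0;
}

static void __exit demo_exit(void) {
    if (regs) iounmap(regs);
    free_irq(DEMO_IRQ, &regs);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");
```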
There's a multitude of ways to find out what resources a device provides and where the BIOS mapped them. Some platforms use ACPI tables (google for the spec yourself; it runs to well over a thousand pages), PCI provides info on devices in a standardized way through the PCI config space (see the sketch below), USB has similar ways of discovering devices attached to the bus, and some devices, e.g. UARTs, are simply specified to be available at a pre-configured I/O range that is fixed for your platform.
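For the PCI case specifically, configuration mechanism #1 on x86 exposes config space through I/O ports 0xCF8/0xCFC. A minimal user-space sketch for Linux, requiring root for iopl(); on a modern system you would normally read /sys/bus/pci instead:

```c
#include <stdio.h>
#include <stdint.h>
#include <sys/io.h>   /* iopl(), inl(), outl(); x86 Linux, needs root */

/* Read one 32-bit register from PCI config space via ports 0xCF8/0xCFC. */
static uint32_t pci_cfg_read(int bus, int dev, int fn, int reg) {
    uint32_t addr = (1u << 31)            /* enable bit */
                  | ((uint32_t)bus << 16)
                  | ((uint32_t)dev << 11)
                  | ((uint32_t)fn  << 8)
                  | (reg & 0xfc);
    outl(addr, 0xCF8);
    return inl(0xCFC);
}

int main(void) {
    if (iopl(3)) { perror("iopl"); return 1; }

    /* Register 0 of every function holds the vendor and device IDs. */
    for (int dev = 0; dev < 32; dev++) {
        uint32_t id = pci_cfg_read(0, dev, 0, 0);
        if ((id & 0xffff) != 0xffff)      /* 0xffff = no device here */
            printf("00:%02x.0 vendor=%04x device=%04x\n",
                   dev, id & 0xffff, id >> 16);
    }
    return 0;
}
```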
As a start for understanding Linux, I'd recommend "Understanding the Linux Kernel". For specifics on how Linux handles devices and what it takes to write drivers, have a look at Linux Device Drivers. Furthermore, you will need to have a look at the peculiarities of your platform and of the device you want to drive.
If you want to write your own OS, a UART is certainly something that will be veeery helpful for printing debug output, so you might want to go for this first.
Now that I've written all this down, it seems that your actual question is: how do I get started with operating system design? This question should be highly valuable for you: What are some resources for getting started in operating system development?
The two big power users in most computers are the CPU and the disks. Both of these have capabilities for power saving in Linux. The CPU clock can be slowed down when the system is not busy, and the disk motors can be stopped when no I/O is happening. For a UART, even if you save all of the power that it uses by turning off its clock, it is still tiny compared to the others because a UART doesn't have much logic in it.
The best ways to save power are:
1) Use a more efficient power supply
2) Replace the rotating disk with an SSD
3) Slow down the CPU and memory bus (see the cpufreq sketch below)
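To illustrate point 3: on Linux, the CPU clock is managed by the cpufreq subsystem, which can be steered from user space through sysfs. A minimal sketch, assuming a cpufreq driver is loaded and the powersave governor is available (needs root):

```c
#include <stdio.h>

/* Switch CPU0 to the "powersave" cpufreq governor via sysfs. */
int main(void) {
    const char *path =
        "/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor";
    FILE *f = fopen(path, "w");
    if (!f) { perror(path); return 1; }   /* no cpufreq driver, or not root */
    fputs("powersave\n", f);
    fclose(f);

    /* Read back the governor now in effect. */
    char buf[64] = "";
    f = fopen(path, "r");
    if (f && fgets(buf, sizeof buf, f)) printf("governor: %s", buf);
    if (f) fclose(f);
    return 0;
}
```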
I'm trying to get hold of CPU architecture information under Linux.
I understand the information is available via the sysfs filesystem.
I have CentOS 5 running in a Xen VM. The sysfs filesystem is mounted. However, the /sys/devices/system/cpu/cpu0/ directory is almost empty. The only entry is a single file, "online", with a value of "1".
What gives? Where's all my CPU information?
The actual CPU information is still in /proc/cpuinfo.
The sysfs files are used to control things like scheduling and frequency settings, not to get information about the CPUs themselves.
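For example, a minimal snippet that pulls the model name out of /proc/cpuinfo (the field is "model name" on x86; other architectures use different field names):

```c
#include <stdio.h>
#include <string.h>

/* Print the "model name" line(s) from /proc/cpuinfo. */
int main(void) {
    FILE *f = fopen("/proc/cpuinfo", "r");
    if (!f) { perror("/proc/cpuinfo"); return 1; }

    char line[256];
    while (fgets(line, sizeof line, f))
        if (strncmp(line, "model name", 10) == 0)
            fputs(line, stdout);

    fclose(f);
    return 0;
}
```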
Okay, I've just had a chat with a sysadmin at work.
Looking at some machines, it looks like this information simply is not exposed to VMs. The VM thinks it has a generic virtual CPU - rather than a CPU of the type of the real underlying one - and the cache information simply is not published.
It is published (and it's nice to finally see it!) on real machines with reasonably modern kernels.