I want to search for a graphics card that suits my motherboard, but I don't know how!
For example, I have this motherboard, and I don't know which type of graphics card I should search for.
To answer your question, it just depends on your budget and how much power your POWER SUPPLY can handle. ECS is not considered a high-end motherboard manufacturer, but that really doesn't matter.
The graphics card is just like a CPU: it requires power, a lot of it at times. When a game runs, it depends heavily on the GPU, which draws a lot of power. As you can see, your motherboard has a PCI Express x16 slot, meaning it will take basically ANY card that supports PCI Express x16.
EXPANSION SLOT 1 x PCI Express x16 Gen2.0 slot <<--- here
Now your only problem is determining how much power you need. I'd recommend getting a power supply that has at least two 6-pin PCIe power connectors (you can Google that) and is rated for at least 600 watts, to support any graphics card you buy.
It has a built-in graphics card. See the photo at the link you sent: VGA and DVI; see also the specs under rear panel I/O. If you insist on another graphics card, Google "pci graphics card". Your motherboard has 4 PCI slots. Read the spec.
By the way, this question is not appropriate for Stack Overflow. Super User would be more appropriate.
In terms of working on microcontrollers and microprocessors, I have been told to work with an SPI interface instead of USB. When I went deeper into my hardware selection, I noticed that there are many other interfaces, like MIPI DSI, CSI, and so on. So what are the differences? Can I choose something with a MIPI DSI interface and use it in my system, which will be a sensor system?
Your question is rather broad for Stack Overflow and has a really wide scope, but fundamentally, all the interfaces you've listed (SPI, USB, MIPI DSI, MIPI CSI, etc.) are just communication interfaces: ways for external components like sensors, cameras, displays, input devices, and storage units to talk to a processor, each usually designed with specific goals in mind. USB, for example, was designed to be generic and to connect peripherals to desktop/laptop consumer systems, from keyboards and mice to webcams and other devices, while MIPI DSI was specifically designed to interface mobile/embedded displays to the host processor. Because of these differing design goals, the interfaces all have quite different physical and link-layer implementations (i.e., the actual number of wires and the way the signals on those wires are sent and received).
When deciding which interface is right for your system, you need to look at what the processor supports and what interfaces the device you wish to hook up to it supports. If, say, you have a simple accelerometer and a simple microprocessor, it's likely both will use either a SPI or I2C interface. If it's a larger processor and a camera, then either USB or MIPI CSI might be a good option. Also recognize that, depending on software support, writing code to interface with a USB sensor can be quite a bit more complex than for a SPI sensor (hence the widespread use of SPI and I2C in embedded systems). Without more details, I can't say what's a good way to hook things up (and that type of question might better fit on Electronics Stack Exchange: https://electronics.stackexchange.com/).
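For a taste of why SPI is the usual choice for simple sensors, here is a minimal sketch of a single register read using the Linux spidev interface. The device path and the 0x0F register address are hypothetical placeholders for whatever your sensor's datasheet specifies:

    /* Minimal SPI register read via Linux spidev. The device path and the
     * 0x0F register address are made-up placeholders for your sensor. */
    #include <fcntl.h>
    #include <linux/spi/spidev.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/dev/spidev0.0", O_RDWR);
        if (fd < 0) { perror("open"); return 1; }

        uint8_t mode = SPI_MODE_0;   /* clock polarity/phase, per the datasheet */
        uint32_t speed = 1000000;    /* 1 MHz is safe for most simple sensors */
        ioctl(fd, SPI_IOC_WR_MODE, &mode);
        ioctl(fd, SPI_IOC_WR_MAX_SPEED_HZ, &speed);

        /* Many sensors signal a read by setting the top bit of the address,
         * then clock the result out on the following byte. */
        uint8_t tx[2] = { 0x0F | 0x80, 0x00 };
        uint8_t rx[2] = { 0, 0 };
        struct spi_ioc_transfer xfer;
        memset(&xfer, 0, sizeof(xfer));
        xfer.tx_buf = (unsigned long)tx;
        xfer.rx_buf = (unsigned long)rx;
        xfer.len = 2;

        if (ioctl(fd, SPI_IOC_MESSAGE(1), &xfer) < 0) { perror("xfer"); return 1; }
        printf("register 0x0F = 0x%02X\n", rx[1]);
        close(fd);
        return 0;
    }

The equivalent read over USB would involve enumeration, descriptors, and endpoint management, which is exactly the extra complexity mentioned above.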
I am looking into persona devices as described in Appendix G of the Redhawk manual.
Is there a detailed "how to" for this anywhere?
In my scenario my 'Programmable Device' would be a Redhawk FEI device that interfaces with a kernel API that controls tuners, fans, gps, buttons and LCD displays. I would like to break this out into three or four persona devices that interface with the main FEI Device.
Thought I'd ask.
If you head to Geon's GitHub and look at the RFNoC_ProgrammableDevice and RFNoC_DefaultPersona, you can get an idea of how these Devices interact with one another. It should be noted that these Devices are still under development. Unfortunately, the manual appendix you mentioned and these examples are really the closest thing to a "how to" there is right now.
That being said, this pattern is generally reserved for FPGAs, with the programmable Device controlling access to the programmable hardware (and FEI functionality, if present) and the persona(s) controlling access to specific bit file capabilities. If you're not interacting with an FPGA, then the pattern will most likely be more trouble than it's worth to obtain modularity.
I realized after many years of using and programming computers that the stack of software that actually draws on the screen is mostly a mystery to me.
I have worked on some embedded LCD GUI applications and I think that provides some clues as to a simplified stack but the whole picture for something like the Windows operating system is still murky.
From what I know:
Lowest level 0 is electronic hardware (integrated circuits) that provides a digital interface to turn a pixel on the screen a certain color or greyscale shade. The interface is documented in data sheets, so you know how to toggle the digital lines to set any pixel the way you want.
Next level 1 is a hardware driver. This usually abstracts the hardware into a common interface. Something like SetPixel() etc.
Next level 2 is a 2D/3D graphics library (of which I have limited widget/single-screen experience). The lower levels seem to provide a buffer or range of memory that represents the pixels on the screen. The graphics library abstracts this so you can call functions like DrawText("text", 10, 10, "font") and it will set the pixels for you in the right way (I sketch what I mean just after these levels).
Next level would be the magic of the OS. The windows/buttons/forms/WPF/etc is created in memory and then routed to the appropriate driver while also being directed to a certain part of the screen?
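To make levels 1 and 2 concrete, here is roughly the picture I have from my embedded work; all the names and the 640x480 16-bit format are made up:

    /* My mental model of levels 1-2: the driver exposes a linear framebuffer
     * and a pixel primitive, and the graphics library builds on top of it.
     * Names and the 640x480 RGB565 format are invented for illustration. */
    #include <stdint.h>

    #define WIDTH  640
    #define HEIGHT 480

    static uint16_t framebuffer[WIDTH * HEIGHT]; /* memory the hardware scans out */

    /* Level 1: the driver's pixel-level abstraction. */
    static void SetPixel(int x, int y, uint16_t color)
    {
        if (x >= 0 && x < WIDTH && y >= 0 && y < HEIGHT)
            framebuffer[y * WIDTH + x] = color;
    }

    /* Level 2: the graphics library builds primitives out of pixels. */
    static void FillRect(int x, int y, int w, int h, uint16_t color)
    {
        for (int row = y; row < y + h; row++)
            for (int col = x; col < x + w; col++)
                SetPixel(col, row, color);
    }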
But how does something like Windows really work?
I would assume that the GPU fits between level 0 and level 1. The GPU drives the pixels on the display directly, and the level 1 driver is now a GPU driver. There are more functions available to enable the added functionality a GPU provides. (What would this be, though? Does the OS pass an array of triangles in 3D space, and the GPU processes this into a 3D perspective view and then chucks it on the screen?)
The biggest mystery to me though is when you get into the windows part of things. You can have SketchUp, Visual Studio, and an FPS game all running at the same time and be able to switch between them, or in some cases tile them on the screen or have them spread across multiple screens. How is this tracked and rendered? Each of these would have to be running in the background, and the OS would have to say which graphics pipe should be connected to which part of the screen. How would Windows say this part of the screen is a 3D game and this part is a 2D WPF app, etc.?
On top of all that, you have DirectX used in one application and Qt in another. I remember having multiple games or apps running that use the same technology, so how would that work? From what I can see you would have Application -> Graphics library (DirectX, WPF, etc.) -> Frame buffer -> Windows director (deciding where and to what part of the screen this frame buffer should be scaled) -> Driver?
In the end it is just bits toggling to indicate which pixel should be what color but it is one hell of a lot of toggling bits along the way to get there.
If I fire up Visual Studio and create a basic WPF app, what all is going on in the background when I drop a button on the screen and hit start? I have used the VS designer to drop it on, created it in XAML, and I have even manually drawn things pixel by pixel in an embedded system, but what happens in between, the so-called meat of this sandwich?
I have used Android, iOS, Windows, and Linux, and it seems to be common functionality, but I have never seen or heard an explanation of the how behind what I outline above; I only have a slightly educated guess.
Is anyone able to shed some light on how this works?
VGA
Assuming x86, VGA memory is mapped at a standard video buffer address in the lowest 1 MiB (0x000B8000 for text mode and 0x000A0000 for graphics mode). There are also many VGA registers that control the behaviour of the card. There were two widely used video modes, mode 0x12 (16-color 640x480) and mode 0x13 (256-color 320x200). Mode 0x12 involved switching planes (blue, green, red, intensity) with VGA registers, while mode 0x13 involved a 256-color palette which can be modified using VGA registers.
Normally, an OS relying on VGA would set the mode using BIOS while booting, or write to the appropriate VGA registers at runtime (if it knows what it is doing). To draw to the screen, the video driver would either simply write to the video memory (mode 0x13) or combine that with writing to VGA registers too (mode 0x12).
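As a concrete sketch, this is essentially all a mode 0x13 video driver has to do to plot a pixel (assuming bare x86 with the legacy video region identity-mapped; the byte written is an index into the 256-color palette):

    #include <stdint.h>

    #define VGA_MODE13_BASE ((volatile uint8_t *)0x000A0000)
    #define VGA_WIDTH  320
    #define VGA_HEIGHT 200

    /* Plot one pixel in mode 0x13: one byte per pixel, value = palette index. */
    static void put_pixel_mode13(int x, int y, uint8_t color_index)
    {
        if (x >= 0 && x < VGA_WIDTH && y >= 0 && y < VGA_HEIGHT)
            VGA_MODE13_BASE[y * VGA_WIDTH + x] = color_index;
    }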
Most cards in use today are still (partly) VGA compatible.
VBE
Some years later, VESA invented the "VESA BIOS Extensions", a standard interface for video cards that allowed higher resolutions and greater color depths. The video memory was exposed in two different ways: banked mode and linear framebuffer. Banked mode exposes a small portion of the video memory at a low address (0x000A0000), and the video driver needs to switch banks almost every time the screen is updated. The linear framebuffer is a much more convenient solution, mapping the entire video memory at a non-standard high address.
During boot, an OS would call the VBE interface to query the supported modes and set the most convenient one, or it would bypass the VBE interface and write directly to the needed video hardware registers (if it knows what it is doing). In either case, banked mode or linear framebuffer, the video driver then writes to the address at which the video memory is mapped.
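A sketch of a linear-framebuffer pixel write; the base address, pitch (bytes per scanline), and 32 bpp format would come from the VBE mode info block queried at boot, and the values here are hypothetical:

    #include <stdint.h>

    /* Both values come from the VBE mode info block; these are placeholders. */
    static volatile uint8_t *lfb = (volatile uint8_t *)0xE0000000;
    static uint32_t pitch = 4096; /* bytes per scanline, often > width * 4 */

    static void put_pixel_32bpp(int x, int y, uint32_t argb)
    {
        *(volatile uint32_t *)(lfb + y * pitch + x * 4) = argb;
    }

Note the pitch: scanlines may be padded, so you cannot assume width * bytes-per-pixel.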
Most cards in use today are still (partly) VBE compatible.
Modern video interfaces
Modern video interfaces usually aren't documented as openly as VGA and VBE. The video memory is still mapped at an address, and hardware registers and/or buffers contain modifiable information about the behaviour of the graphics card. The difference is that these interfaces aren't standardised anymore, so nowadays an advanced OS requires a different driver for each graphics card.
I'm using my iPhone to scan in a complex 2D barcode. Problem is, the iPhone camera doesn't do so well at very close distances (less than 3 inches).
I was wondering if there were a way I could affix a Bluetooth low energy "sticker" to a piece of paper. The idea being instead of using the camera to scan a 2D barcode, I could just put my iPhone near the paper and "scan" it.
I'm extremely new to Bluetooth tech, so it's quite possible that what I'm asking for is completely ridiculous. Please forgive me, if that is the case.
Unlike NFC, Bluetooth Low Energy devices need a power source, so it's impossible to just "print" them. They need a BLE chip and a battery to operate. So while you could use BLE the same way you use NFC (proximity-based actions), you won't be able to do it with just a sticker.
Register with the Bluetooth SIG for a manufacturer ID. Then put the manufacturer ID in an advertisement packet as AD type 0xFF, with the 16-bit ID followed by the data. You must be sure your length is correct or iOS can't decode it.
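As a sketch, the advertising payload layout looks like this; 0xFFFF is a placeholder company ID (reserved for internal testing), so a real product would use the ID assigned by the Bluetooth SIG:

    #include <stdint.h>

    /* One AD structure: length, AD type, then the payload. The length byte
     * counts the AD type plus everything after it, or iOS won't decode it. */
    static const uint8_t adv_data[] = {
        0x07,                   /* length: 1 type byte + 2 ID bytes + 4 data bytes */
        0xFF,                   /* AD type: manufacturer specific data */
        0xFF, 0xFF,             /* company ID, little-endian (placeholder) */
        0xDE, 0xAD, 0xBE, 0xEF  /* your payload */
    };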
For NFC, your scanner must be pretty close to the tag. But BLE devices work within several tens of meters without any problem. This is like an active RFID chip.
Of course, you need a power source for it. But if you attach this BLE tag to a piece of expensive equipment, the cost of the tag and the battery is not a problem. You can use a button cell battery to power the BLE tag. Let it broadcast/advertise some info once a second. Of course, you have to add some security mechanism if you want to be safe from replay attacks.
I'm attempting to create a generic graphics controller for VGA monitors with an Altera FPGA via a VGA connector, but I cannot find any good online resources explaining the standard specification which monitors use. I've found all the pin descriptions and some resources which describe how to create a specific graphics controller, such as this 8-colour 480x640 controller, but no resources I've found describe the actual 'protocol' which monitors expect.
For example, nowhere have I found what the exact timings are supposed to be between different parts of the signal; in the above, specific timings in µs are given, but not why. Are all the sections supposed to be in these set proportions, or is there some arbitrariness with regard to pause timings between rows, etc.? What would the pseudo-code look like if you were implementing it in code (and wanted to be able to change resolution / colour depth)?
Again, I'm looking for the expected 'protocol' for a generic controller, similar to what an OS would use when no monitor type is specified. Any pointers in the right direction would be appreciated.
I haven't done any lower-level VGA stuff for years, but a book I used that may be of some help is: Programmer's Guide to the EGA, VGA, and Super VGA Cards
The table of contents for the book is as follows:
Introduction to the Programmer's Guide
The EGA, VGA, and Super VGA Features
Graphics Hardware and Software
Types of Graphics Systems
Principles of Computer Graphics
Alphanumeric Processing
Graphics Processing
Color Palette and Color Registers
Reading the State of the EGA and VGA
The EGA/VGA Registers
The EGA/VGA BIOS
Programming Examples
The Super VGA
Graphics Coprocessors
Super VGA Code Basics
The Adapter Interface
The 8514/A
The XGA
ATI Technologies
Chips and Technologies
Cirrus Logic
The Video7 Super VGA Chip Set
IIT
NCR
Oak
S3 Incorporated
The Trident Super VGA Chip Sets
The Tseng Labs Super VGA Chips
The Paradise Super VGA Chips
Weitek
This site has a pretty good discussion of VGA: http://server.oersted.dtu.dk/www/sn/31002/?Materials/vga/main.html
The key to what you're asking is answered by this excerpt from http://web.mit.edu/6.111/www/s2004/NEWKIT/vga.shtml:
"As with RS-232, the standard for VGA video is that there are lots of standards. Every manufacturer seems to list different timings in the manuals for their monitors. The values given in the table above are not particularly critical. On a CRT monitor, the lengths of the front and back porches control the position of the image on the display. If the image appears offset to the right or left, or up or down, try adjusting the front and back porch values for the corresponding direction (or use the image position adjustments on the monitor, which accomplish the same thing)."
The problem is that backwards compatibility doesn't lend itself well to a simple equation for determining these values. There is a modern spreadsheet that will calculate values for monitors that use the most recent standards, but if you're playing around with VGA, the old analog monitors will let you do tricks that you can't do on an LED-type display.
Your resolution is limited by how fast the electronics can turn the electron beam on and off, but the horizontal placement is only limited by your clock and whatever phase adjustments are possible on your FPGA.
For instance, you can set up 640x480 timing on your sync pulses and, instead of clocking data at 25 MHz, use 100 or 200 MHz and simply require a minimum on-time for each pixel, effectively allowing you to smooth-scroll 1/8th of the width of a pixel. You may be able to do similar tweaking to the distance between scan lines, although I've never tried it.
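For reference, here is a sketch of the de facto 640x480@60 Hz numbers (25.175 MHz pixel clock) and the counter structure a controller walks through, written in C as executable pseudo-code; an FPGA would implement the two loops as counters:

    /* De facto 640x480@60 Hz VGA timing. Each line and each frame is four
     * regions: visible, front porch, sync pulse, back porch. Both sync
     * pulses are active-low for this mode. */
    typedef struct {
        int visible, front_porch, sync_pulse, back_porch;
    } vga_timing_t;

    static const vga_timing_t h = { 640, 16, 96, 48 }; /* pixel clocks, total 800 */
    static const vga_timing_t v = { 480, 10,  2, 33 }; /* lines, total 525 */

    /* One frame, one iteration per pixel clock (25.175 MHz / 800 / 525 ~ 60 Hz). */
    void frame(void)
    {
        for (int line = 0; line < 525; line++) {
            for (int px = 0; px < 800; px++) {
                int hsync = !(px >= 656 && px < 752);      /* 640+16 .. +96, active low */
                int vsync = !(line >= 490 && line < 492);  /* 480+10 .. +2,  active low */
                int visible = (px < 640 && line < 480);
                (void)hsync; (void)vsync; (void)visible;
                /* drive RGB from the framebuffer when visible, else black */
            }
        }
    }

As the quote above says, the porch values are not critical; stretching or shrinking them mostly shifts the image on a CRT.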