FPGA IO configuration: Effect of weak pull-up/down on an output

What is the effect of configuring a pull mode on a pin designated as an output in synthesis?
Does the pull mode still take effect?
Is its effect only apparent if we set the output to tristate?
In the Lattice Diamond tool, for example, I am able to configure the pull mode for an output.

It only has an effect when the pin is tristated. If you configure a weak pull-up and then drive the output low, the pull-up just burns a little power, and vice versa for weak pull-downs; with a typical internal pull resistor on the order of tens of kilohms, that is only tens of microamps per pin (e.g. 3.3 V across 50 kΩ is about 66 µA), but it serves no purpose.
I prefer to instantiate an explicit IO buffer so that the tristate control is visible in the design, but that is just a stylistic preference.

Related

Possible to access low-level touchpad input at user level (esp. in Windows) to provide better gestures/palm rejection?

I have a laptop whose touchpad is very sensitive: light, grazing touches of anything other than the finger being used cause unwanted gesture input, even with the sensitivity set to low in the control panel. I can (and probably will) learn over time to hold my wrists in a way that minimizes the problem, but as someone interested in algorithms for things like signal processing and vision, I thought it might be a fun project to try to write a more intelligent filtering algorithm for touch input.
I'm not scared by the math/algorithmic aspect, but I have zero knowledge of how the software stack for input devices works: at what level in the stack such code would need to run, how privileged/close to the kernel I would need to be to get access, and whether that level is even sufficiently documented and accessible to make this possible. Most of the stack presumably handles touch data at the "mouse level" of abstraction, i.e. as a pointer x/y pair, whereas filtering out spurious touches would presumably need to act on a sort of "pixel map" of the pad, with the areas registering touch "bright", before some sort of blob detection computes the pointer coordinates.
Where is this transformation ("pad image" to "pointer") performed: in the driver for the touchpad, in the OS kernel, in some userspace code? Is it performed at all, or does the capacitive sensing circuitry directly detect only the centroid of the points of contact to begin with? (I can't find a good description of even how multi-touch capacitive sensing works at a physics level.) Is this the sort of thing that can only be modified in something like Linux, where every line of code in the whole system is modifiable, or is there a good way to "hook" this process even in OSes that are otherwise proprietary?

How to capture keyboard input during runtime in Verilog?

I've been trying to find a way to capture keyboard input during runtime simulation of my Verilog code. Is this even possible?
I have taken a look at resources like asic-world and the Quick Reference for Verilog found on Google, but found nothing regarding a way to take keyboard inputs.
There seems to be a fundamental misunderstanding here about the difference between using a hardware description language to simulate a design and using that same description to implement the design in actual hardware. It's like drawing a picture of a pinwheel, blowing on the picture, and expecting the pinwheel to start turning.
You can certainly build a 3-D model of that pinwheel, simulate the force of the wind on that model and watch it turn, and then send that model to a 3-D printer to get your pinwheel. I suppose you could even put wind sensors in front of your monitor and write a program that converts the sensor readings into values used in the simulation. The point is, the simulator has no knowledge that a value came from someone blowing on the monitor; it just sees a parameter value change.
Unless you are designing the keyboard hardware yourself and simulating that, there really is not much point in taking keyboard input from a computer and using that to stimulate your design in simulation. The operating system has already abstracted away the keyboard hardware and provides you with a string of character codes. The reason you are simulating in the first place is to verify the functionality of your design. If you find a problem, you are going to want to replay the exact same stimulus until you fix your problem.
Just like in the pinwheel example, I do know it's possible to set up a program that reads keyboard input and provides it as stimulus to a simulation, but that involves inter-process communication (IPC) and tool-specific knowledge to set up, as sketched below.
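For what it's worth, most simulators expose a C interface that makes this kind of setup possible. The sketch below is a minimal, hypothetical example assuming a simulator with SystemVerilog DPI-C support; next_key is a name invented for illustration, and the testbench would declare it with import "DPI-C" function int next_key(); and call it whenever it wants a character:

    /* next_key.c - hypothetical DPI-C helper (assumes a simulator with
     * SystemVerilog DPI-C support, compiled and linked per that tool's
     * documentation).
     *
     * SystemVerilog side:
     *   import "DPI-C" function int next_key();
     *   ...
     *   int c = next_key();  // simulation blocks here until a key arrives
     */
    #include <stdio.h>

    int next_key(void)
    {
        int c = getchar();           /* read one character from stdin */
        return (c == EOF) ? -1 : c;  /* -1 tells the testbench input ran out */
    }

Note that this has exactly the replay problem described above: a rerun needs the same keystrokes, so logging the characters to a file and replaying the file is usually the better setup.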

What is the easiest way to check if SPI is correctly initialized on STM32?

I have an STM32F103C8 chip and an LED display connected via the SPI1 (PA7-PA4) interface. I want to make sure my initialization code is correct before writing the rest of the program. Is there a way to check that?
P.S. I'm writing this in Rust, which is why the code is rather low-level and initialization is a bit harder than it would be in C with the high-level STM32 libraries.
Use a logic analyzer to see what is going on when you send your data.
Test with a simple 74HC595 serial-in, parallel-out shift register; its serial input is SPI-compatible. You can put LEDs on the outputs, or just measure the output voltages, to make sure the data arrived correctly.
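If you want a known-good reference to compare your Rust initialization against, a minimal C bring-up of SPI1 on the STM32F103 looks roughly like the sketch below (assuming ST's CMSIS device header for the register and bit names; the same register writes apply from Rust via a peripheral access crate). Sending a fixed pattern in a loop gives a logic analyzer something easy to trigger on:

    /* Minimal SPI1 master bring-up sketch for an STM32F103
     * (register/bit names as in ST's CMSIS device headers). */
    #include "stm32f1xx.h"

    static void spi1_init(void)
    {
        RCC->APB2ENR |= RCC_APB2ENR_IOPAEN | RCC_APB2ENR_SPI1EN;

        /* PA5 = SCK, PA7 = MOSI: alternate-function push-pull, 10 MHz */
        GPIOA->CRL &= ~(GPIO_CRL_CNF5 | GPIO_CRL_MODE5 |
                        GPIO_CRL_CNF7 | GPIO_CRL_MODE7);
        GPIOA->CRL |=  (GPIO_CRL_CNF5_1 | GPIO_CRL_MODE5_0 |
                        GPIO_CRL_CNF7_1 | GPIO_CRL_MODE7_0);

        /* Master, software NSS, slowest baud rate for easy probing */
        SPI1->CR1  = SPI_CR1_MSTR | SPI_CR1_SSM | SPI_CR1_SSI | SPI_CR1_BR;
        SPI1->CR1 |= SPI_CR1_SPE;
    }

    static void spi1_send(uint8_t b)
    {
        while (!(SPI1->SR & SPI_SR_TXE)) { }  /* wait for empty TX buffer */
        SPI1->DR = b;
    }

    int main(void)
    {
        spi1_init();
        for (;;)
            spi1_send(0xA5);  /* known pattern, easy to spot on an analyzer */
    }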

What are the advantages and disadvantages of right-justified I2S mode over left-justified I2S mode and basic I2S mode?

The I2S specification provides different modes of operation. I tried looking for the advantages of one mode over another but could not find any good literature. Can anyone explain the advantages of one mode over the others?
With samples that are aligned to the most significant bit, the sample length does not matter: if the sent and expected sample lengths differ, the bits that are ignored or read as zero are LSBs, so the value is merely truncated or zero-padded. Standard and left-justified I²S are MSB-aligned in this sense; with right-justified framing, transmitter and receiver must agree on the exact word length or every sample comes out shifted.
Otherwise, all the choices are arbitrary and have no advantage whatsoever over each other.
(This is why hardware designers choose one I²S mode seemingly at random, instead of selecting an obviously best mode.)
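A quick way to see the alignment argument is to pack the same hypothetical 24-bit sample into a 32-bit slot both ways and let a 16-bit receiver read the top of the slot (a plain C illustration, not I²S-specific code):

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        int32_t sample24 = -123456;  /* a signed 24-bit audio sample */

        /* Left-justified: MSB at the top of the slot, LSBs padded with 0 */
        uint32_t left  = (uint32_t)sample24 << 8;
        /* Right-justified: sample sits in the low 24 bits of the slot */
        uint32_t right = (uint32_t)sample24 & 0x00FFFFFF;

        /* A 16-bit receiver that just takes the top 16 bits of the slot: */
        int16_t from_left  = (int16_t)(left  >> 16);  /* correct value, merely
                                                         truncated to 16 bits */
        int16_t from_right = (int16_t)(right >> 16);  /* upper bits of a 24-bit
                                                         word: wrong amplitude */
        printf("%d %d\n", from_left, from_right);
        return 0;
    }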

Can we turn on/off Scroll Lock light by software instructions?

Is it possible to turn the Scroll Lock light on/off with software instructions like outb or something else? And if the answer is yes, can we restrict the use of those instructions so that they are only allowed in ring 0?
Question part one (is it possible?): yes. On Linux, look at ledd:
ledd is part of the ledcontrol package, which allows you to show
arbitrary information on the normally-unused keyboard LEDs. It is fully
scriptable, so you can show any accessible TRUE/FALSE condition or
indicate an arbitrary value. It supports blinking LEDs with priority
levels and animations. The LEDs not used by ledd function as normal.
On Windows, take a look at the SetKeyboardState Win32 API function.
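As for part two of the question: on x86, raw port-I/O instructions (e.g. outb to the legacy keyboard controller at ports 0x60/0x64) are already privileged, and user space only gets them via ioperm()/iopl(), so in practice you go through a kernel-mediated interface. Below is a minimal Linux sketch using the console LED ioctl that this kind of tool can build on (it assumes a real virtual-console tty and sufficient privileges; it will not work from an X/Wayland terminal emulator's pty):

    #include <fcntl.h>
    #include <linux/kd.h>
    #include <sys/ioctl.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/dev/console", O_WRONLY);  /* usually needs root */
        if (fd < 0)
            return 1;

        ioctl(fd, KDSETLED, LED_SCR);  /* Scroll Lock LED on
                                          (LED_NUM/LED_CAP for the others) */
        sleep(2);
        ioctl(fd, KDSETLED, 0);        /* all LEDs off */
        ioctl(fd, KDSETLED, 0xFF);     /* values above 7 revert the LEDs to
                                          tracking the real keyboard state */
        close(fd);
        return 0;
    }

On the Windows side, note that SetKeyboardState only alters the calling thread's keyboard state table; synthesizing an actual Scroll Lock key press (e.g. with keybd_event and VK_SCROLL) is the commonly used way to make the physical LED change.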
