Compiling Linux to enable UART2 - linux

I am using OpenWrt. Initially there is only ttyO0 under /dev, which is the serial console port I use to connect to the board (similar to a BeagleBone Black).
Now I have wired a GPS to UART2, but it seems OpenWrt does not enable that UART by default.
I checked the device tree, am335x-bone.dts (I am using the "bone" dts because my board is not an actual BBB). There is not much in it; most of the configuration comes from am33xx.dtsi and am335x-bone-common.dtsi.
In am33xx.dtsi there are nodes like this under ocp {}:
uart0: serial@44e09000 {
    compatible = "ti,omap3-uart";
    ti,hwmods = "uart1";
    clock-frequency = <48000000>;
    reg = <0x44e09000 0x2000>;
    interrupts = <72>;
    status = "disabled";
};
uart1: serial@48022000 {
    compatible = "ti,omap3-uart";
    ti,hwmods = "uart2";
    clock-frequency = <48000000>;
    reg = <0x48022000 0x2000>;
    interrupts = <73>;
    status = "disabled";
};
uart2: serial@48024000 {
    compatible = "ti,omap3-uart";
    ti,hwmods = "uart3";
    clock-frequency = <48000000>;
    reg = <0x48024000 0x2000>;
    interrupts = <74>;
    status = "disabled";
};
uart3: serial@481a6000 {
    compatible = "ti,omap3-uart";
    ti,hwmods = "uart4";
    clock-frequency = <48000000>;
    reg = <0x481a6000 0x2000>;
    interrupts = <44>;
    status = "disabled";
};
uart4: serial@481a8000 {
    compatible = "ti,omap3-uart";
    ti,hwmods = "uart5";
    clock-frequency = <48000000>;
    reg = <0x481a8000 0x2000>;
    interrupts = <45>;
    status = "disabled";
};
uart5: serial@481aa000 {
    compatible = "ti,omap3-uart";
    ti,hwmods = "uart6";
    clock-frequency = <48000000>;
    reg = <0x481aa000 0x2000>;
    interrupts = <46>;
    status = "disabled";
};
I changed uart2 from "disabled" to "okay" (note the numbering offset: the node labelled uart2 uses hwmod "uart3", and it is the one that becomes ttyO2). I also changed the code in am335x-bone-common.dtsi under &am33xx_pinmux {}:
uart0_pins: pinmux_uart0_pins {
    pinctrl-single,pins = <
        0x170 (PIN_INPUT_PULLUP | MUX_MODE0)    /* uart0_rxd.uart0_rxd */
        0x174 (PIN_OUTPUT_PULLDOWN | MUX_MODE0) /* uart0_txd.uart0_txd */
    >;
};
uart2_pins: pinmux_uart2_pins {
    pinctrl-single,pins = <
        0x150 (PIN_INPUT_PULLUP | MUX_MODE0)    /* uart0_rxd.uart0_rxd */
        0x154 (PIN_OUTPUT_PULLDOWN | MUX_MODE0) /* uart0_txd.uart0_txd */
    >;
};
The uart0 part was already there; I added the uart2 part below it.
Then, in am335x-bone.dts, I added this code to enable uart2:
&uart2 {
    pinctrl-names = "default";
    pinctrl-0 = <&uart0_pins>;
    status = "okay";
};
After compiling this, /dev/ttyO2 shows up in OpenWrt, but when I use a script to write to the port and then read from it, nothing comes back.
This is my script; I use Lua since it's built in:
local clock = os.clock

-- busy-wait for n seconds (os.clock() measures CPU time, so this also burns CPU)
function wait(n)
    local t0 = clock()
    while clock() - t0 <= n do end
end

while true do
    print("Writing")
    wuar0 = io.open("/dev/ttyO0", "w")
    wuar1 = io.open("/dev/ttyO1", "w")
    wuar2 = io.open("/dev/ttyO2", "w")
    wuar0:write("This is uart0 \n")
    wuar1:write("This is uart1 \n")
    wuar2:write("This is uart2 \n")
    wuar0:flush()
    wuar1:flush()
    wuar2:flush()
    -- close the handles themselves (io.close() with no argument would
    -- close the default output file instead)
    wuar0:close()
    wuar1:close()
    wuar2:close()
    wait(2)
    print("Reading")
    ruar0 = io.open("/dev/ttyO0", "r")
    ruar1 = io.open("/dev/ttyO1", "r")
    ruar2 = io.open("/dev/ttyO2", "r")
    print(ruar0:read())
    print(ruar1:read())
    print(ruar2:read())
    ruar0:close()
    ruar1:close()
    ruar2:close()
    wait(2)
end
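One more thing the script does not do is set the line speed: io.open leaves the tty settings alone, so the GPS baud rate (often 9600 for NMEA modules) has to be configured separately, for example with BusyBox stty before the port is opened (a sketch; adjust the rate to match the module):

-- raw mode, no echo, 9600 baud (assumed GPS rate)
os.execute("stty -F /dev/ttyO2 raw -echo 9600")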
Did I do it right? If not, what do I need to do to enable UART2?
I did a lot of research, but most of what I found is out of date and did not work in my case.
If anyone could tell me the steps to enable this, or how to check whether it is enabled or not, any information would help a lot. Thanks.
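Two things look off in the snippets above. The &uart2 override points pinctrl-0 at uart0_pins rather than the newly added uart2_pins, and on the AM335x the pads at offsets 0x150/0x154 are spi0_sclk/spi0_d0, which carry uart2_rxd/uart2_txd in mux mode 1, not mode 0. Assuming the GPS really is wired to those pads, a corrected version would look something like this (a sketch, not verified on this particular board):

uart2_pins: pinmux_uart2_pins {
    pinctrl-single,pins = <
        0x150 (PIN_INPUT_PULLUP | MUX_MODE1)    /* spi0_sclk.uart2_rxd */
        0x154 (PIN_OUTPUT_PULLDOWN | MUX_MODE1) /* spi0_d0.uart2_txd */
    >;
};

&uart2 {
    pinctrl-names = "default";
    pinctrl-0 = <&uart2_pins>;
    status = "okay";
};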

Related

Set I2S as master clock for tlv320aic3110

I am trying to configure the device tree of my ST board to use the tlv320aic3110 as the codec, but I haven't been able to get any sound yet. I thought I had configured I2S as master but, when trying to debug it, I saw this message in the log:
[ 20.307088] st,stm32-i2s 44004000.audio-controller: I2S MCLK frequency is 12000000Hz. mode: slave, dir: input
I have tried following ST's own documentation for this, but that didn't work either; I still get that message and no sound out of the board.
According to the logs, the MCLK does get registered correctly.
This is what I have for the codec in the device tree:
...
sound0: sound@0 {
    compatible = "simple-audio-card";
    simple-audio-card,name = "MySound";
    simple-audio-card,widgets =
        "Microphone", "Microphone Jack",
        "Headphone", "Headphone Jack",
        "Speaker", "Speaker";
    simple-audio-card,routing =
        "MIC1LP", "Microphone Jack",
        "MIC1RP", "Microphone Jack",
        "MIC1LP", "MICBIAS",
        "MIC1RP", "MICBIAS",
        "Headphone Jack", "HPL",
        "Headphone Jack", "HPR",
        "Speaker", "SPL",
        "Speaker", "SPR";
    simple-audio-card,format = "i2s";
    simple-audio-card,bitclock-master = <&sound0_master>;
    simple-audio-card,frame-master = <&sound0_master>;
    simple-audio-card,bitclock-inversion;
    dais = <&i2s1_port>;
    simple-audio-card,convert-rate = <48000>;
    sound0_master: simple-audio-card,cpu {
        sound-dai = <&i2s1>;
        system-clock-frequency = <12000000>;
    };
    simple-audio-card,codec {
        sound-dai = <&codec>;
        system-clock-frequency = <12000000>;
    };
};
...
&i2s1 {
    status = "okay";
    #clock-cells = <0>;
    clock-names = "pclk", "i2sclk", "x8k", "x11k";
    clocks = <&rcc SPI1>,
             <&rcc SPI1_K>,
             <&scmi_clk CK_SCMI_PLL3_Q>,
             <&scmi_clk CK_SCMI_PLL3_R>;
    i2s1_port: port {
        i2s1_endpoint: endpoint {
            remote-endpoint = <&tlv320aic3110_tx_endpoint>;
            format = "i2s";
            mclk-fs = <256>;
        };
    };
};
...
&i2c1 {
    pinctrl-names = "default", "sleep";
    pinctrl-0 = <&i2c1_pins_a>;
    pinctrl-1 = <&i2c1_sleep_pins_a>;
    i2c-scl-rising-time-ns = <96>;
    i2c-scl-falling-time-ns = <3>;
    clock-frequency = <100000>;
    status = "okay";
    /* spare dmas for other usage */
    /delete-property/dmas;
    /delete-property/dma-names;
    codec: codec@18 {
        compatible = "ti,tlv320aic3110";
        reg = <0x18>;
        pinctrl-0 = <&codec_pins_a>;
        #sound-dai-cells = <0>;
        status = "okay";
        clocks = <&i2s1>;
        clock-names = "MCLK";
        system-clock-frequency = <12000000>;
        ai31xx-micbias-vg = <MICBIAS_2_0V>;
        reset-gpios = <&gpiof 6 GPIO_ACTIVE_LOW>;
        /* Regulators */
        HPVDD-supply = <&scmi_v3v3_sw>;    /* 3V3_CODEC */
        SPRVDD-supply = <&scmi_vdd_usb>;   /* 5V0_CODEC */
        SPLVDD-supply = <&scmi_vdd_usb>;   /* 5V0_CODEC */
        AVDD-supply = <&scmi_v3v3_sw>;     /* 3V3_CODEC */
        IOVDD-supply = <&scmi_v3v3_sw>;    /* 3V3_CODEC */
        DVDD-supply = <&scmi_v1v8_periph>; /* 1V8_CODEC */
        ports {
            #address-cells = <1>;
            #size-cells = <0>;
            port@4 {
                reg = <4>;
                tlv320aic3110_tx_endpoint: endpoint {
                    remote-endpoint = <&i2s1_endpoint>;
                    frame-master = <&tlv320aic3110_tx_endpoint>;
                    bitclock-master = <&tlv320aic3110_tx_endpoint>;
                };
            };
        };
    };
};
...
Am I missing something? Should I change something in the driver code or is everything I need to change just in the device tree?
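One inconsistency worth checking: the codec endpoint above names itself as frame and bitclock master, which makes the I2S controller a slave and matches the "mode: slave" line in the log, while the card-level simple-audio-card,bitclock-master/frame-master properties point at the CPU DAI. If the SoC is meant to drive the clocks, here is a sketch of the endpoint with the master phandles flipped to the I2S side (untested, and assuming the graph-level properties are what the driver honours):

port@4 {
    reg = <4>;
    tlv320aic3110_tx_endpoint: endpoint {
        remote-endpoint = <&i2s1_endpoint>;
        /* master phandles now point at the CPU endpoint */
        frame-master = <&i2s1_endpoint>;
        bitclock-master = <&i2s1_endpoint>;
    };
};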

How to configure multiple I/O Expander PCF8574a in a device tree?

I am currently adding a PCF8574A I/O expander to my device tree, am335x-boneblack.dts. I have two I/O expanders, one at 0x38 and another at 0x39.
The code below works fine for a single expander, but if I add the PCF8574A at address 0x39 in the same manner, I get an error.
&i2c1 {
    pinctrl-names = "default";
    pinctrl-0 = <&i2c1_pins_default>;
    status = "okay";
    clock-frequency = <400000>;
    pcf8574a: pcf8574a@38 {
        compatible = "nxp,pcf8574a";
        reg = <0x38>;
        gpio-controller;
        #gpio-cells = <2>;
    };
};
Error log:
"Duplicate label 'pcf8574a' on /ocp/i2c@4802a000/pcf8574a@39 and /ocp/i2c@4802a000/pcf8574a@38"
which I completely understand.
But I don't know how to add another node (or sub-node) to make this work. Any suggestions?
Have you tried this? Give each node a unique label:
&i2c1 {
    pinctrl-names = "default";
    pinctrl-0 = <&i2c1_pins_default>;
    status = "okay";
    clock-frequency = <400000>;
    pcf8574a_38: pcf8574a@38 {
        compatible = "nxp,pcf8574a";
        reg = <0x38>;
        gpio-controller;
        #gpio-cells = <2>;
    };
    pcf8574a_39: pcf8574a@39 {
        compatible = "nxp,pcf8574a";
        reg = <0x39>;
        gpio-controller;
        #gpio-cells = <2>;
    };
};
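With unique labels in place, each expander can then be referenced independently from other nodes. For example, a hypothetical LED hung off pin 0 of the 0x38 expander (just to illustrate the two-cell gpios reference):

leds {
    compatible = "gpio-leds";
    status-led {
        label = "status";
        /* two cells, matching #gpio-cells = <2> above */
        gpios = <&pcf8574a_38 0 GPIO_ACTIVE_LOW>;
    };
};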

Confusion about Texture2D and ShaderResourceViews

I am new to Direct3D 11 and I am currently trying to create a texture programmatically within my code, using this code I found online:
// Some constants
int w = 256;
int h = 256;
int bpp = 4;
int *buf = new int[w*h];

// Declarations
ID3D11Texture2D* tex;
D3D11_TEXTURE2D_DESC sTexDesc;
D3D11_SUBRESOURCE_DATA tbsd;

// Filling the image
for (int i = 0; i < h; i++)
    for (int j = 0; j < w; j++)
    {
        if ((i & 32) == (j & 32))
            buf[i*w + j] = 0x00000000;
        else
            buf[i*w + j] = 0xffffffff;
    }

// Setting up D3D11_SUBRESOURCE_DATA
tbsd.pSysMem = (void *)buf;
tbsd.SysMemPitch = w*bpp;
tbsd.SysMemSlicePitch = w*h*bpp; // Not needed since this is a 2D texture

// Initializing sTexDesc
sTexDesc.Width = w;
sTexDesc.Height = h;
sTexDesc.MipLevels = 1;
sTexDesc.ArraySize = 1;
sTexDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
sTexDesc.SampleDesc.Count = 1;
sTexDesc.SampleDesc.Quality = 0;
sTexDesc.Usage = D3D11_USAGE_DEFAULT;
sTexDesc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
sTexDesc.CPUAccessFlags = 0;
sTexDesc.MiscFlags = 0;
hr = m_pd3dDevice->CreateTexture2D(&sTexDesc, &tbsd, &tex);
and that's all fine and dandy, but I am a bit confused about how to actually load this into the shader. Below I initialized this ID3D11ShaderResourceView:
ID3D11ShaderResourceView* m_pTextureRV = nullptr;
I found in the Microsoft tutorials that I need to use CreateShaderResourceView. But how exactly do I use it? I tried this:
hr = m_pd3dDevice->CreateShaderResourceView(tex, NULL , m_pTextureRV);
but it gives me an error, telling me that m_pTextureRV is not a valid argument for the function. What am I doing wrong here?
The correct way to call that function is:
hr = m_pd3dDevice->CreateShaderResourceView(tex, nullptr, &m_pTextureRV);
Remember that ID3D11ShaderResourceView* is a pointer to an interface. You need a pointer-to-a-pointer to get a new instance of one back.
You should really consider using a COM smart-pointer like Microsoft::WRL::ComPtr instead of raw pointers for these interfaces.
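For instance, the same two calls with ComPtr (a minimal sketch; wrl/client.h provides it, and tex/textureRV here replace the raw pointers above):

#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

ComPtr<ID3D11Texture2D> tex;
ComPtr<ID3D11ShaderResourceView> textureRV;

hr = m_pd3dDevice->CreateTexture2D(&sTexDesc, &tbsd, tex.GetAddressOf());
if (SUCCEEDED(hr))
    hr = m_pd3dDevice->CreateShaderResourceView(tex.Get(), nullptr, textureRV.GetAddressOf());
// No manual Release() calls needed; the ComPtr destructors handle lifetime.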
Once you have created the shader resource view for your texture object, then you need to associate it with whatever slot the HLSL expects to find it in. So, for example, if you were to write an HLSL source file as:
Texture2D tex2D : register( t0 );
SamplerState linearSampler : register( s0 );

float4 PS(float2 tex : TEXCOORD0) : SV_Target
{
    return tex2D.Sample( linearSampler, tex );
}
Then compile it as a Pixel Shader, and bind it to the render pipeline via PSSetShader. Then you'd need to call:
ID3D11ShaderResourceView* srv[1] = { m_pTextureRV };
m_pImmediateContext->PSSetShaderResources( 0, 1, srv );
Of course you also need an ID3D11SamplerState* sampler bound as well:
ID3D11SamplerState* m_pSamplerLinear = nullptr;
D3D11_SAMPLER_DESC sampDesc = {};
sampDesc.Filter = D3D11_FILTER_MIN_MAG_MIP_LINEAR;
sampDesc.AddressU = D3D11_TEXTURE_ADDRESS_WRAP;
sampDesc.AddressV = D3D11_TEXTURE_ADDRESS_WRAP;
sampDesc.AddressW = D3D11_TEXTURE_ADDRESS_WRAP;
sampDesc.ComparisonFunc = D3D11_COMPARISON_NEVER;
sampDesc.MinLOD = 0;
sampDesc.MaxLOD = D3D11_FLOAT32_MAX;
hr = m_pd3dDevice->CreateSamplerState( &sampDesc, &m_pSamplerLinear );
Then when you are about to draw:
m_pImmediateContext->PSSetSamplers( 0, 1, &m_pSamplerLinear );
I strongly recommend you check out the DirectX Tool Kit and the tutorials there.

Device Tree for PHY-less connection to a DSA switch

We have a little problem creating a device tree for our configuration of a Marvell DSA switch and a Xilinx Zynq processor. They are connected like this:
|——————————————| |——————————————————————————————|
| e000b000—|———— SGMII ————|—port6 (0x16) port3 —— PHY3
| Zynq | | mv88e6321 |
| e000c000—|—x x—|—port5 port4 —— PHY4
|——————————————| |——————————————————————————————|
|___________ MDIO _______________|
And we have a device tree for the Linux kernel, which looks like this:
ps7_ethernet_0: ps7-ethernet@e000b000 {
    #address-cells = <1>;
    #size-cells = <0>;
    clock-names = "ref_clk", "aper_clk";
    clocks = <&clkc 13>, <&clkc 30>;
    compatible = "xlnx,ps7-ethernet-1.00.a";
    interrupt-parent = <&ps7_scugic_0>;
    interrupts = <0 22 4>;
    local-mac-address = [00 0a 35 00 00 00];
    phy-handle = <&phy0>;
    phy-mode = "gmii";
    reg = <0xe000b000 0x1000>;
    xlnx,ptp-enet-clock = <0x69f6bcb>;
    xlnx,enet-reset = "";
    xlnx,eth-mode = <0x0>;
    xlnx,has-mdio = <0x1>;
    mdio_0: mdio {
        #address-cells = <1>;
        #size-cells = <0>;
        phy0: phy@16 {
            compatible = "marvell,dsa";
            reg = <0x16>;
        };
    };
};
dsa@0 {
    compatible = "marvell,dsa";
    #address-cells = <2>;
    #size-cells = <0>;
    interrupts = <10>;
    dsa,ethernet = <&ps7_ethernet_0>;
    dsa,mii-bus = <&mdio_0>;
    switch@0 {
        #address-cells = <1>;
        #size-cells = <0>;
        reg = <0 0>;
        port@3 {
            reg = <3>;
            label = "lan0";
        };
        port@4 {
            reg = <4>;
            label = "lan1";
        };
        port@5 {
            reg = <5>;
            label = "lan2";
        };
        port@6 {
            reg = <6>;
            label = "cpu";
        };
    };
};
The problem is, as you can see from the diagram, that there is no PHY attached to port 6, i.e. the connection between the Zynq and the switch is PHY-less. But I had to specify <&phy0> in the device tree to make the DSA driver see the switch, and then it tries to talk to a non-existent PHY and fails, obviously.
So the question is: how do I create a proper device tree for a DSA switch connected to a processor like this?
Thank you for any help!
(There is a somewhat similar question, "P1010 MAC to Switch port direct connection without PHY", but I cannot comment on it and it has no answer, unfortunately.)
Instead of specifying &phy0 when there is no PHY, you can describe the link as a fixed-link:
fixed-link = <0 1 1000 0 0>;
where 0 is the emulated PHY ID, 1 means full duplex, and 1000 is the speed in Mb/s (the last two cells are pause and asym-pause).
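For reference, newer kernels also accept the subnode form of the same binding (see Documentation/devicetree/bindings/net/fixed-link.txt), which reads more clearly:

fixed-link {
    speed = <1000>;
    full-duplex;
};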
You would also want to disable autonegotiation for the processor port to which switch port 6 is connected.
ps7_ethernet_0: ps7-ethernet@e000b000 {
    #address-cells = <1>;
    #size-cells = <0>;
    clock-names = "ref_clk", "aper_clk";
    clocks = <&clkc 13>, <&clkc 30>;
    compatible = "xlnx,ps7-ethernet-1.00.a";
    interrupt-parent = <&ps7_scugic_0>;
    interrupts = <0 22 4>;
    local-mac-address = [00 0a 35 00 00 00];
    fixed-link = <0 1 1000 0 0>;
    phy-mode = "gmii";
    reg = <0xe000b000 0x1000>;
    xlnx,ptp-enet-clock = <0x69f6bcb>;
    xlnx,enet-reset = "";
    xlnx,eth-mode = <0x0>;
    xlnx,has-mdio = <0x1>;
    mdio_0: mdio {
        #address-cells = <1>;
        #size-cells = <0>;
    };
};
dsa@0 {
    compatible = "marvell,dsa";
    #address-cells = <2>;
    #size-cells = <0>;
    interrupts = <10>;
    dsa,ethernet = <&ps7_ethernet_0>;
    dsa,mii-bus = <&mdio_0>;
    switch@0 {
        #address-cells = <1>;
        #size-cells = <0>;
        reg = <22 0>;
        port@3 {
            reg = <3>;
            label = "lan0";
        };
        port@4 {
            reg = <4>;
            label = "lan1";
        };
        port@5 {
            reg = <5>;
            label = "lan2";
        };
        port@6 {
            reg = <6>;
            label = "cpu";
        };
    };
};
I'm assuming the switch chip's SMI address is 0x16 (22 decimal); if not, change reg = <22 0> back to <0 0> under switch@0 as before.
Also, you may need to add the MDIO node's reg address and compatible property, which are not specified in your device tree.

Why won't my texture show up in a full-screen DX11 game?

I'm drawing a texture in a DX11 game. Strangely, the texture never shows up in full-screen mode.
I list my state settings here for reference.
BOOL BlendEnable[] = {TRUE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE, FALSE};
UINT8 RenderTargetWriteMask[] = {0xF, 0xF, 0xF, 0xF, 0xF, 0xF, 0xF, 0xF};
D3D11_BLEND_DESC bs11 = {0};
bs11.AlphaToCoverageEnable = 0;
bs11.IndependentBlendEnable = false;
for (size_t i = sizeof(BlendEnable) / sizeof(BlendEnable[0]); i--;)
{
    bs11.RenderTarget[i].BlendEnable = BlendEnable[i];
    bs11.RenderTarget[i].RenderTargetWriteMask = RenderTargetWriteMask[i];
}
bs11.RenderTarget[0].BlendOp = D3D10_DDI_BLEND_OP_ADD;
bs11.RenderTarget[0].BlendOpAlpha = D3D10_DDI_BLEND_OP_ADD;
bs11.RenderTarget[0].DestBlend = D3D10_DDI_BLEND_INV_SRC_ALPHA;
bs11.RenderTarget[0].DestBlendAlpha = D3D10_DDI_BLEND_ZERO;
bs11.RenderTarget[0].SrcBlend = D3D10_DDI_BLEND_SRC_ALPHA;
bs11.RenderTarget[0].SrcBlendAlpha = D3D10_DDI_BLEND_ONE;
bs11.RenderTarget[0].RenderTargetWriteMask = D3D10_DDI_COLOR_WRITE_ENABLE_ALL;
D3D11_DEPTH_STENCIL_DESC depthDesc;
depthDesc.DepthEnable = 0;
depthDesc.DepthWriteMask = D3D10_DEPTH_WRITE_MASK_ZERO;
depthDesc.DepthFunc = D3D10_COMPARISON_NEVER;
depthDesc.BackEnable = 0;
depthDesc.FrontEnable = 0;
depthDesc.StencilEnable = 0;
depthDesc.StencilReadMask = 0;
depthDesc.StencilWriteMask = 0;
depthDesc.FrontFace.StencilDepthFailOp = D3D10_DDI_STENCIL_OP_KEEP;
depthDesc.FrontFace.StencilFailOp = D3D10_DDI_STENCIL_OP_KEEP;
depthDesc.FrontFace.StencilFunc = D3D10_DDI_COMPARISON_ALWAYS;
depthDesc.FrontFace.StencilPassOp = D3D10_DDI_STENCIL_OP_KEEP;
depthDesc.BackFace.StencilDepthFailOp = D3D10_DDI_STENCIL_OP_KEEP;
depthDesc.BackFace.StencilFailOp = D3D10_DDI_STENCIL_OP_KEEP;
depthDesc.BackFace.StencilFunc = D3D10_DDI_COMPARISON_ALWAYS;
depthDesc.BackFace.StencilPassOp = D3D10_DDI_STENCIL_OP_KEEP;
What's the most likely cause?
Thanks,
Marshall
Not sure if this will solve your issue, but the D3D11_BLEND_DESC/D3D11_DEPTH_STENCIL_DESC fields are being set with D3D10_DDI_ values. The enum values may happen to compile (they are plain integers), but note that D3D11_DEPTH_STENCIL_DESC has no FrontEnable/BackEnable members at all; those exist only in the D3D10 DDI struct, so this code appears to have been written against the wrong headers. Use the D3D11_ equivalents.
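For reference, here is the blend portion of your setup with the proper D3D11 enums (a sketch of the equivalent state, with bs11 as in your code):

bs11.RenderTarget[0].BlendOp = D3D11_BLEND_OP_ADD;
bs11.RenderTarget[0].BlendOpAlpha = D3D11_BLEND_OP_ADD;
bs11.RenderTarget[0].DestBlend = D3D11_BLEND_INV_SRC_ALPHA;
bs11.RenderTarget[0].DestBlendAlpha = D3D11_BLEND_ZERO;
bs11.RenderTarget[0].SrcBlend = D3D11_BLEND_SRC_ALPHA;
bs11.RenderTarget[0].SrcBlendAlpha = D3D11_BLEND_ONE;
bs11.RenderTarget[0].RenderTargetWriteMask = D3D11_COLOR_WRITE_ENABLE_ALL;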
So the texture shows under windowed mode?
