I'm trying to use I2C with beaglebone but I cannot write more than 1 byte - linux

I'm trying to control DAC5571 with Beaglebone Black. My kernel version is:
Linux beaglebone 4.14.108-ti-r137 #1stretch SMP PREEMPT Tue Aug 25 01:48:39 UTC 2020 armv7l GNU/Linux
I can partially control the DAC IC. As you can see here, you need to send 3 bytes: the slave address, CTRL/MSB, and LSB. The IC recognizes the slave address byte and CTRL/MSB. I can read and confirm the output on the output pin. But when I start increasing the voltage value slowly as Vout += 0.05, the output increases as 0.2, 0.4, 0.6, etc.
I've checked with my oscilloscope and I can confirm that the third byte is being transmitted as 0x00 no matter what its actual value is.
Here is my source code:
int DAC5571::writeI2CDeviceByte(int value)
{
    cout << "Starting DAC5571 I2C sensor state write" << endl;
    char namebuf[MAX_BUS];
    snprintf(namebuf, sizeof(namebuf), "/dev/i2c-%d", I2CBus);
    int file;
    if ((file = open(namebuf, O_RDWR)) < 0) {
        cout << "Failed to open DAC5571 Sensor on " << namebuf << " I2C Bus" << endl;
        return(1);
    }
    if (ioctl(file, I2C_SLAVE, I2CAddress) < 0) {
        cout << "I2C_SLAVE address " << I2CAddress << " failed..." << endl;
        return(2);
    }
    int buffer[2];
    buffer[0] = value>>8;
    buffer[1] = value & 0xFF;
    cout << "buffer [0] is " << buffer[0] << endl;
    cout << "buffer [1] is " << buffer[1] << endl;
    if (write(file, buffer, 2) != 2) {
        cout << "Failure to write values to I2C Device address." << endl;
        return(3);
    }
    close(file);
    cout << "Finished DAC5571 I2C sensor state write" << endl;
    return 0;
}
Here is the console output:
Starting DAC5571 I2C sensor state write
buffer [0] is 3
buffer [1] is 128
Finished DAC5571 I2C sensor state write
In my research I've seen that there is a header file called "i2c-core.h" which has a block write function, but I could not include it into my project. Not sure if that would help in my situation.
Can anyone please help me solve my issue of not being able to transmit the LSB part of the data?
Thank you.

int buffer[2];
buffer[0] = value>>8;
buffer[1] = value & 0xFF;
if ( write(file, buffer, 2) != 2) { ... }
The elements of buffer are of type int, which is 4 bytes long. So when you write buffer with a length of 2, you write 2 bytes, which are the first two bytes of the integer buffer[0]. In your example buffer[0] is 3, so since this machine is little-endian, it consists of the bytes 03 00 00 00 and you write out 03 00.
You probably want unsigned char buffer[2]; or maybe uint8_t buffer[2];, so that each element is a byte.
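For example, a minimal sketch of the corrected fragment (assuming the same file descriptor and 16-bit value layout as in the question; uint8_t comes from <cstdint>):
uint8_t buffer[2];
buffer[0] = (value >> 8) & 0xFF; // CTRL/MSB byte
buffer[1] = value & 0xFF;        // LSB byte
// Each element is now one byte, so writing 2 bytes sends both CTRL/MSB and LSB.
if (write(file, buffer, 2) != 2) {
    cout << "Failure to write values to I2C Device address." << endl;
    return(3);
}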

Related

how to work with references using ffi?

I have the following C++ function compiled with node-gyp.
void InterfaceTestOne(const string & mytest)
{
    const string *ptr = &mytest;
    printf("The variable X is at 0x%p\n", (void *)ptr);
    cout << mytest.length() << endl;
    cout << mytest << endl;
}
and nodejs code:
var math = ffi.Library(main, {
    "_Z16InterfaceTestOneRKSsi": ["void", [ref.refType(string)]],
});
var output = ref.allocCString('hello alex');
console.log("length", output.length) #12
console.log("address", output.address()) #4353704472
console.log("ref", output.ref()) # <Buffer#0x103804228 18 42 80 03 01 00 00 00>
math._Z16InterfaceTestOneRKSsi(output.ref())
#c++ output
The variable X is at 0x0x103804230
4347093136 <-- mytest size
This is TestOne
hello alex�����s�P��X��0��
����������������������p
p�]���^#��psp���5�����#4������(�����G����� ��qf���6�ՃP���f���6�Ճ���q�X؃ \���u������������3������3������!Ą
p�?�����p �J
����h����������8n���������8v
�d�����s����� ����������(������a......
So instead of the string size 12, in C++ I got size 4353704472.
The ref address in Node.js and in C++ is also different: 0x103804228 vs 0x103804230.
It looks like that's why I got a ton of garbage in addition to my text.
Any idea how to fix that?

example for yaml-cpp 0.5.3 in linux

I am pretty new to yaml-cpp. I did the tutorials and those are fine, but when I try to parse my own YAML file it is a little difficult for me. I am confused by "operator" and "node".
The YAML file is shown below.
Device:
  DeviceName: "/dev/ttyS2"
  Baud: 19200
  Parity: "N"
  DataBits: 8
  StopBits: 1
Control:
  Kp: 5000
  Ki: 8
  FVF: 100
  VFF: 1962
Could anyone give me an example of how to get the data from that YAML file? Thanks for your help.
Also, I followed this question and I can build it, but when I run it I get Segmentation fault (core dumped).
Code:
#include <yaml-cpp/yaml.h>
#include <string>
#include <iostream>
using namespace std;

int main()
{
    YAML::Node config = YAML::LoadFile("init.yaml");
    //read device
    std::string DeviceName = config["Device"][0]["DeviceName"].as<std::string>();
    int Baud = config["Device"][1]["Baud"].as<int>();
    std::string Parity = config["Device"][2]["Parity"].as<std::string>();
    int DataBits = config["Device"][3]["DataBits"].as<int>();
    int StopBits = config["Device"][4]["StopBits"].as<int>();
    //read control
    int Kp = config["Control"][0]["Kp"].as<int>();
    int Ki = config["Control"][1]["Ki"].as<int>();
    int FVF = config["Control"][2]["FVF"].as<int>();
    int VFF = config["Control"][3]["VFF"].as<int>();
    cout << "DeviceName " << DeviceName << endl;
    cout << "Baud " << Baud << endl;
    cout << "Parity " << Parity << endl;
    cout << "DataBits " << DataBits << endl;
    cout << "StopBits " << StopBits << endl;
    cout << "Kp " << Kp << endl;
    cout << "Ki " << Ki << endl;
    cout << "FVF " << FVF << endl;
    cout << "VFF " << VFF << endl;
    return 0;
}
Your code above results in a bad conversion exception because you access the map items in the wrong way.
instead of
std::string DeviceName = config["Device"][0]["DeviceName"].as<std::string>();
just write
std::string DeviceName = config["Device"]["DeviceName"].as<std::string>();
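A minimal sketch of reading all the fields this way (untested, assuming the YAML layout shown in the question):
YAML::Node config = YAML::LoadFile("init.yaml");

// "Device" and "Control" are maps, so index them by key, not by position
std::string DeviceName = config["Device"]["DeviceName"].as<std::string>();
int Baud               = config["Device"]["Baud"].as<int>();
std::string Parity     = config["Device"]["Parity"].as<std::string>();
int DataBits           = config["Device"]["DataBits"].as<int>();
int StopBits           = config["Device"]["StopBits"].as<int>();

int Kp  = config["Control"]["Kp"].as<int>();
int Ki  = config["Control"]["Ki"].as<int>();
int FVF = config["Control"]["FVF"].as<int>();
int VFF = config["Control"]["VFF"].as<int>();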
best regards
robert

What could be the possible reasons for segmentation fault?

The code snippet is as follows:
ValueMapIter valueIter;
for (valueIter = activeValues->begin(); valueIter != activeValues->end(); ++valueIter)
{
    cout << "Before First" << endl;
    cout << "sizeactivevalue:" << activeValues->size() << endl;
    cout << "first:" << valueIter->first << "Second:" << valueIter->second << endl;
}
The program output while running with gdb is:
Before First
sizeactivevalue:10
Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x2aaaac8ad940 (LWP 8346)]
0x000000000043e732 in ValueManager::InsertModValue (this=0xd455c90, Id=4615, PId=7753, eId=1100000010570903, iId=2, inId=44301, pe=830795, t=25, bl=2, ste=3, sde=2)
at /home/pathtofile/valuemanager.cpp:304
304 cout << "first:" << valueIter->first << "Second:" << valueIter->second << endl;
How can it receive a segmentation fault when I have a local copy of the iterator and the code ran correctly previously?
The program is multithreaded; there is only one activeValues map. The snippet is inside the InsertModValues function.
The size of the activeValues map is 10, so how can the iterator not have a valid first element?

Getting value of a boolean control in v4l2

I am trying to adjust the brightness of a camera. Before setting the brightness, I want to check whether brightness is in auto mode or not. We could do so by checking the volatile flag, but I am running kernel version 2.6.32, which does not have this functionality. So the other option is to check V4L2_CID_AUTOBRIGHTNESS, which I am doing, but it is returning EINVAL.
I am using following code to get the value:
struct v4l2_control control;
control.id = V4L2_CID_BRIGHTNESS; // This is working fine
//control.id = V4L2_CID_AUTOBRIGHTNESS; // This is giving EINVAL in ioctl
if (-1 == ioctl(camDesc, VIDIOC_G_CTRL, &control))
    cerr << "VIDIOC_G_CTRL" << " :: " << errno << endl;
else
    cout << "Successfully got property. Value :: " << control.value << endl;
Maybe V4L2_CID_AUTOBRIGHTNESS is a boolean property and that's why it is giving EINVAL, but then I am not able to find any other method by which I can get the value of a boolean property.
In V4L2, the set of IOCTLs is mostly implemented (or not implemented) on the side of the actual sensor driver. This one is no exception. Thus, you have two potential problems here: a) the driver of the actual sensor does not implement this specific IOCTL, or b) the IOCTL is only meant to set a property, though I think you may still be able to read the last set value.
Some of the properties (user controls) can be disabled, so if we directly change their value using v4l2_control, it may give an error.
The best way is to check the appropriate flags using VIDIOC_QUERYCTRL and then set the actual value.
struct v4l2_queryctrl queryctrl;
queryctrl.id = V4L2_CID_BRIGHTNESS; // V4L2_CID_AUTOBRIGHTNESS i.e. any user ctrl
if (-1 == ioctl(camDesc, VIDIOC_QUERYCTRL, &queryctrl))
{
    if (errno != EINVAL)
        exit(EXIT_FAILURE);
    else
    {
        cerr << "ERROR :: Unable to set property (NOT SUPPORTED)\n";
        exit(EXIT_FAILURE);
    }
}
else if (queryctrl.flags & V4L2_CTRL_FLAG_DISABLED)
{
    cout << "ERROR :: Unable to set property (DISABLED).\n";
    exit(EXIT_FAILURE);
}
else
{
    struct v4l2_control control;
    control.id = queryctrl.id;
    control.value = eValue; // Any value
    if (-1 == ioctl(camDesc, VIDIOC_S_CTRL, &control))
        exit(EXIT_FAILURE);
    cout << "Successfully set property." << endl;
}
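If the goal is to read the current value of a boolean control such as V4L2_CID_AUTOBRIGHTNESS rather than set it, the same query-first pattern should work with VIDIOC_G_CTRL; a rough, untested sketch (camDesc is assumed to be the open device descriptor, as in the question):
struct v4l2_queryctrl queryctrl;
queryctrl.id = V4L2_CID_AUTOBRIGHTNESS;
if (-1 == ioctl(camDesc, VIDIOC_QUERYCTRL, &queryctrl))
{
    cerr << "ERROR :: Control not supported by this driver\n";
}
else if (queryctrl.flags & V4L2_CTRL_FLAG_DISABLED)
{
    cerr << "ERROR :: Control is disabled\n";
}
else
{
    struct v4l2_control control;
    control.id = V4L2_CID_AUTOBRIGHTNESS;
    if (-1 == ioctl(camDesc, VIDIOC_G_CTRL, &control))
        cerr << "VIDIOC_G_CTRL failed :: " << errno << endl;
    else
        cout << "Auto brightness is " << (control.value ? "on" : "off") << endl; // boolean: 0 or 1
}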

Different values depending on floating point exception flags set

Short question:
How can setting the _EM_INVALID exception flag on the FPU result in different values?
Long question:
In our project we have turned off floating point exceptions in our Release build, but turned on ZERODIVIDE, INVALID and OVERFLOW using _controlfp_s() in our Debug build. This is in order to catch errors if they are there.
However, we would also like results of numerical calculations (involving optimisation algorithms, matrix inversion, Monte Carlo and all sorts of things) to be consistent between Debug and Release build to make debugging easier.
I would expect that the setting of the exception flags on the FPU should not affect the calculated values - only whether exceptions are thrown or not. But after working backwards through our calculations I can isolate the below code example that shows that there is a difference on the last bit when calling the log() function.
This propagates to a 0.5% difference in the resulting value.
The below code will give the shown program output when added to a new solution in Visual Studio 2005 on Windows XP and compiled in the Debug configuration. (Release will give a different output, but that's because the optimiser reuses the result from the first call to log().)
I hope that someone can shed a bit of light on this. Thanks.
/*
Program output:
Xi, 3893f76f, 7.4555176582633598
K, c0a682c7, 7.44466687218
Untouched
x, da8caea1, 0.0014564635732296288
Invalid exception on
x, da8caea2, 0.001456463573229629
Invalid exception off
x, da8caea1, 0.0014564635732296288
*/
#include <float.h>
#include <math.h>
#include <limits>
#include <iostream>
#include <iomanip>
using namespace std;
int main()
{
    unsigned uMaskOld = 0;
    errno_t err;
    cout << std::setprecision(numeric_limits<double>::digits10 + 2);
    double Xi = 7.4555176582633598;
    double K = 7.44466687218;
    double x;
    cout << "Xi, " << hex << setw(8) << setfill('0') << *(unsigned*)(&Xi) << ", " << dec << Xi << endl;
    cout << "K, " << hex << setw(8) << setfill('0') << *(unsigned*)(&K) << ", " << dec << K << endl;
    cout << endl;

    cout << "Untouched" << endl;
    x = log(Xi/K);
    cout << "x, " << hex << setw(8) << setfill('0') << *(unsigned*)(&x) << ", " << dec << x << endl;
    cout << endl;

    cout << "Invalid exception on" << endl;
    ::_clearfp();
    err = ::_controlfp_s(&uMaskOld, 0, _EM_INVALID);
    x = log(Xi/K);
    cout << "x, " << hex << setw(8) << setfill('0') << *(unsigned*)(&x) << ", " << dec << x << endl;
    cout << endl;

    cout << "Invalid exception off" << endl;
    ::_clearfp();
    err = ::_controlfp_s(&uMaskOld, _EM_INVALID, _EM_INVALID);
    x = log(Xi/K);
    cout << "x, " << hex << setw(8) << setfill('0') << *(unsigned*)(&x) << ", " << dec << x << endl;
    cout << endl;

    return 0;
}
This is not a complete answer, but it is too long for a comment.
I suggest you isolate the code that does the questionable calculations and put it in a subroutine, preferably in a source module that is compiled separately. Something like:
void foo(void)
{
    double Xi = 7.4555176582633598;
    double K = 7.44466687218;
    double x;
    x = log(Xi/K);
    …Insert output statements here…
}
Then you would call the routine with different settings:
cout << "Untouched:\n";
foo();
cout << "Invalid exception on:\n";
…Change FP state…
foo();
This guarantees that the same instructions are executed in each case, eliminating the possibility that the compiler has for some reason generated separate code for each sequence. Given the way you have compiled the code, I suspect the compiler may have used 80-bit arithmetic in one case and 64-bit arithmetic in another, or may have used 80-bit arithmetic generally but converted some result to 64 bits in one case but not another.
Once that is done, you can partition and isolate the code further. E.g., try evaluating Xi/K once before any of the tests, storing that in a double, and passing it to foo as a parameter. This tests whether the log call differs depending on the floating-point state. I suspect that is the case, as it is unlikely the division operation would differ.
Another advantage of isolating the code this way is that you could step through it in the debugger to see exactly where behavior diverges. You could step through it, one instruction at a time, with different floating-point states simultaneously in two windows and examine the results at each step to see exactly where the divergence is. If there is no divergence by the time you reach the log call, you should step through that, too.
Incidental notes:
If you know Xi and K are close to each other, it is better to compute log(Xi/K) as log1p((Xi-K)/K). When Xi and K are close to each other, the subtraction Xi-K is exact (has no error), and the quotient has more useful bits (the 1 that we already knew about and some zero bits following it are gone).
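For illustration, a small sketch of that suggestion (my own example, not from the original code; log1p is declared in <cmath>):
#include <cmath>

// For Xi close to K, log1p((Xi - K) / K) keeps more significant bits than log(Xi / K),
// because the subtraction Xi - K is exact in that case.
double log_ratio(double Xi, double K)
{
    return log1p((Xi - K) / K); // mathematically equal to log(Xi / K)
}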
The fact that slight changes in your floating-point environment cause a .5% change in your result implies your calculations are very sensitive to error. This suggests that, even if you make your results reproducible, the errors that necessarily exist in floating-point arithmetic cause your result to be inaccurate. That is, the final error will still exist, it just will not be called to your attention by the difference between two different ways of calculating.
It appears in your C++ implementation that unsigned is four bytes but double is eight bytes. So printing the encoding of a double by aliasing it to an unsigned omits half of the bits. Instead, you should convert a pointer to the double to a pointer to const char and print sizeof(double) bytes.
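For example, one way to print the full encoding (using unsigned char here to keep the hex output clean; this is an illustration, not code from the question):
#include <cstdio>

// Prints every byte of the double's encoding, in memory order (little-endian on x86).
void print_double_bytes(double d)
{
    const unsigned char *p = reinterpret_cast<const unsigned char *>(&d);
    for (unsigned i = 0; i < sizeof(double); ++i)
        printf("%02x", p[i]);
    printf("\n");
}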
