Torch C++: API to check NAN - pytorch

I am using libtorch C++. In the Python version we can easily check the values of a tensor by converting it to NumPy, and in NumPy we have np.isnan(). I was wondering if there is a built-in function in libtorch C++ to check whether a tensor has any NaN values?
Thanks,
Afshin

Adding on to Fábio's answer (my reputation is too low to comment):
If you actually want to use the information about NaNs in an assert or if condition, you need to convert it from a torch::Tensor to a C++ bool like so:
torch::Tensor myTensor;
// do something
auto tensorIsNan = at::isnan(myTensor).any().item<bool>(); // will be of type bool
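For example, a minimal sketch of using this in a check (the helper name is mine, not a libtorch API):
#include <torch/torch.h>
#include <stdexcept>

// Hypothetical helper: throw if any element of the tensor is NaN.
void assertNoNan(const torch::Tensor& t) {
    if (at::isnan(t).any().item<bool>()) {
        throw std::runtime_error("tensor contains NaN");
    }
}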

Try at::isnan.
#include <torch/torch.h>
#include <iostream>

int main() {
    torch::Tensor tensor = torch::rand({2, 3});
    std::cout << tensor << std::endl;
    std::cout << at::isnan(tensor) << std::endl;
    return 0;
}
Note: I had to install the nightly build of libtorch since the stable release did not have isnan.

Related

How do I convert strings of numbers, coming from a .txt file, into an int array using std::stoi("string")?

I've been trying to take a .txt document with three number entries, read those entries as strings, convert them to ints, and then put them into an int array, but I've had no success in doing so and I have no clue as to why. Note that the entries as well as some variable names are predetermined by the assignment; additionally, we have to use the std::stoi("string") command, which I am not familiar with, nor has any syntax been provided to us (which is especially strange, since we are usually not allowed to stray too far from the lecture material).
What I expected to happen is that the numbers from the .txt file would be converted into an array. What actually happened is that an "unhandled exception" occurred (my apologies if that term is not exact; we have to program in our native language) and the string library opened itself, marking the error on line 107. The problematic line in my code seems to be auftraegearray[i++] = std::stoi(MengeanAuftraegen);
#include <fstream>
#include <iostream>
#include <string>

int main()
{
    std::fstream Auftraege;
    Auftraege.open("Auftraege37.txt", std::ios::out);
    Auftraege << "10" << std::endl;
    Auftraege << "1" << std::endl;
    Auftraege << "20" << std::endl;
    Auftraege.close();

    int i = 0;
    int auftraegearray[4];
    std::string MengeanAuftraegen;
    Auftraege.open("Auftraege37.txt", std::ios::in);
    while (!Auftraege.eof()) // eof() only becomes true after a read has already failed
    {
        getline(Auftraege, MengeanAuftraegen);
        std::cout << MengeanAuftraegen << std::endl;
        auftraegearray[i++] = std::stoi(MengeanAuftraegen); // std::stoi throws if the line is empty
    }
    Auftraege.close();
}
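For reference, the usual cause of this crash: eof() only turns true after a read has already failed, so the loop runs one extra time and hands std::stoi an empty string, which throws std::invalid_argument. A sketch of a reading loop that avoids this (assuming the same file and variable names as above):
#include <fstream>
#include <iostream>
#include <string>

int main()
{
    int auftraegearray[4];
    int i = 0;
    std::string MengeanAuftraegen;
    std::ifstream Auftraege("Auftraege37.txt");
    // Test getline() itself, so a failed read never reaches std::stoi.
    while (i < 4 && std::getline(Auftraege, MengeanAuftraegen))
    {
        if (MengeanAuftraegen.empty()) continue; // std::stoi would throw on ""
        auftraegearray[i++] = std::stoi(MengeanAuftraegen);
    }
    for (int k = 0; k < i; k++)
        std::cout << auftraegearray[k] << std::endl;
}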

how to convert mpf_class to String

Hello, and sorry for my basic English. I'm trying to convert an mpf_class to a string. I know there is a function (get_str()), but it gives me only the digits and the exponent separately. I want to get the whole expression in a string. I tried using std::ostringstream and it works, but I want to know if there is another way to do it. Let me know if I made myself clear.
Basically what I did was:
std::ostringstream show;
mpf_class result, Afact,Bfact,Cfact;
result=Afact*Bfact/Cfact;
show << result;
ui->lineEdit_5->setText(QString::fromStdString(show.str()));
As you can see, I'm working in a Qt project and I need to show the result in a QLineEdit; with std::ostringstream it works. I was just wondering if there is a GMP function to do that. Thanks.
Not sure whether this can help you, but you can actually print an mpf_class object and use I/O manipulators on it just as you would with a typical float object.
Here is my code:
#include <gmpxx.h>
#include <iostream>
#include <iomanip>

int main(void) {
    mpf_class a;
    a = 3141592653589793.2;
    std::cout << a << std::endl;
    // Outputs 3.14159e+15
    std::cout << std::uppercase << std::showpos << std::setprecision(3) << a << std::endl;
    // Outputs +3.14E+15
}
Then you can use an std::ostringstream object instead of std::cout.
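For instance, a minimal sketch of wrapping the conversion in a helper (the function name is mine, not part of GMP):
#include <gmpxx.h>
#include <sstream>
#include <string>
#include <iomanip>

// Hypothetical helper: format an mpf_class the way std::cout would.
std::string mpfToString(const mpf_class& x, int precision = 6)
{
    std::ostringstream os;
    os << std::setprecision(precision) << x;
    return os.str();
}

With that, the question's line becomes ui->lineEdit_5->setText(QString::fromStdString(mpfToString(result)));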
Reference: https://gmplib.org/manual/C_002b_002b-Formatted-Output.html

Different values depending on floating point exception flags set

Short question:
How can setting the _EM_INVALID exception flag on the FPU result in different values?
Long question:
In our project we have turned off floating point exceptions in our Release build, but turned on ZERODIVIDE, INVALID and OVERFLOW using _controlfp_s() in our Debug build. This is in order to catch errors if they are there.
However, we would also like results of numerical calculations (involving optimisation algorithms, matrix inversion, Monte Carlo and all sorts of things) to be consistent between Debug and Release build to make debugging easier.
I would expect that the setting of exception flags on the FPU should not affect the calculated values - only whether exceptions are thrown or not. But after working backwards through our calculations I was able to isolate the code example below, which shows a difference in the last bit when calling the log() function.
This propagates to a 0.5% difference in the resulting value.
The below code will give the shown program output when adding it to a new solution in Visual Studio 2005, Windows XP and compile in Debug configuration. (Release will give a different output, but that's because the optimiser reuses the result from the first call to log().)
I hope that someone can shed a bit of light on this. Thanks.
/*
Program output:
Xi, 3893f76f, 7.4555176582633598
K, c0a682c7, 7.44466687218
Untouched
x, da8caea1, 0.0014564635732296288
Invalid exception on
x, da8caea2, 0.001456463573229629
Invalid exception off
x, da8caea1, 0.0014564635732296288
*/
#include <float.h>
#include <math.h>
#include <limits>
#include <iostream>
#include <iomanip>
using namespace std;
int main()
{
    unsigned uMaskOld = 0;
    errno_t err;
    cout << std::setprecision(numeric_limits<double>::digits10 + 2);
    double Xi = 7.4555176582633598;
    double K = 7.44466687218;
    double x;
    cout << "Xi, " << hex << setw(8) << setfill('0') << *(unsigned*)(&Xi) << ", " << dec << Xi << endl;
    cout << "K, " << hex << setw(8) << setfill('0') << *(unsigned*)(&K) << ", " << dec << K << endl;
    cout << endl;

    cout << "Untouched" << endl;
    x = log(Xi/K);
    cout << "x, " << hex << setw(8) << setfill('0') << *(unsigned*)(&x) << ", " << dec << x << endl;
    cout << endl;

    cout << "Invalid exception on" << endl;
    ::_clearfp();
    err = ::_controlfp_s(&uMaskOld, 0, _EM_INVALID);
    x = log(Xi/K);
    cout << "x, " << hex << setw(8) << setfill('0') << *(unsigned*)(&x) << ", " << dec << x << endl;
    cout << endl;

    cout << "Invalid exception off" << endl;
    ::_clearfp();
    err = ::_controlfp_s(&uMaskOld, _EM_INVALID, _EM_INVALID);
    x = log(Xi/K);
    cout << "x, " << hex << setw(8) << setfill('0') << *(unsigned*)(&x) << ", " << dec << x << endl;
    cout << endl;

    return 0;
}
This is not a complete answer, but it is too long for a comment.
I suggest you isolate the code that does the questionable calculations and put it in a subroutine, preferably in a source module that is compiled separately. Something like:
void foo(void)
{
    double Xi = 7.4555176582633598;
    double K = 7.44466687218;
    double x;
    x = log(Xi/K);
    …Insert output statements here…
}
Then you would call the routine with different settings:
cout << "Untouched:\n";
foo();
cout << "Invalid exception on:\n";
…Change FP state…
foo();
This guarantees that the same instructions are executed in each case, eliminating the possibility that the compiler has for some reason generated separate code for each sequence. The way you have compiled the code, I suspect it is possible the compiler may have used 80-bit arithmetic in one case and 64-bit arithmetic in another, or may have used 80-bit arithmetic generally but converted some result to 64-bit in one case and not the other.
Once that is done, you can partition and isolate the code further. E.g., try evaluating Xi/K once before any of the tests, storing that in a double, and passing it to foo as a parameter. That tests whether the log call differs depending on the floating-point state. I suspect that is the case, as it is unlikely the division operation would differ.
Another advantage of isolating the code this way is that you could step through it in the debugger to see exactly where behavior diverges. You could step through it, one instruction at a time, with different floating-point states simultaneously in two windows and examine the results at each step to see exactly where the divergence is. If there is no divergence by the time you reach the log call, you should step through that, too.
Incidental notes:
If you know Xi and K are close to each other, it is better to compute log(Xi/K) as log1p((Xi-K)/K). When Xi and K are close to each other, the subtraction Xi-K is exact (has no error), and the quotient has more useful bits (the 1 that we already knew about and some zero bits following it are gone).
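A sketch of that rewrite, using the values from the question:
#include <cmath>
#include <cstdio>

int main()
{
    double Xi = 7.4555176582633598;
    double K = 7.44466687218;
    // Xi - K is exact here (the values are within a factor of two of
    // each other), so log1p keeps bits that log(Xi/K) loses.
    double x = std::log1p((Xi - K) / K);
    std::printf("x = %.17g\n", x);
    return 0;
}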
The fact that slight changes in your floating-point environment cause a .5% change in your result implies your calculations are very sensitive to error. This suggests that, even if you make your results reproducible, the errors that necessarily exist in floating-point arithmetic cause your result to be inaccurate. That is, the final error will still exist, it just will not be called to your attention by the difference between two different ways of calculating.
It appears that in your C++ implementation unsigned is four bytes but double is eight bytes, so printing the encoding of a double by aliasing it to an unsigned omits half of the bits. Instead, you should convert a pointer to the double to a pointer to const char and print sizeof(double) bytes.
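A minimal sketch of that (using unsigned char rather than const char so the bytes print cleanly):
#include <cstdio>

int main()
{
    double x = 0.0014564635732296288;
    const unsigned char* p = reinterpret_cast<const unsigned char*>(&x);
    // Print every byte of the double, not just the first four.
    for (unsigned i = 0; i < sizeof(double); i++)
        std::printf("%02x", p[i]);
    std::printf("\n");
    return 0;
}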

Creating pointer to the sub arrays of mass allocated one dimensional array and release VC++ build

This is my first post; I hope I am not making any mistakes.
I have the following code. I am trying to allocate and access a two-dimensional array in one shot and, more importantly, as one byte array. I also need to be able to access each sub-array individually, as shown in the code. It works fine in debug mode, but in the release build in VS 2012 it causes problems at runtime once compiler optimizations are applied; if I disable the release compiler optimizations, it works. Do I need to do some kind of special cast to inform the compiler?
My priorities in this code are fast allocation and network communication of the complete array, while at the same time being able to work with its sub-arrays.
I prefer not to use Boost.
Thanks a lot :)
#include <iostream>
#include <iomanip>

typedef unsigned char byte; // `byte` is not a standard type; assumed typedef

void PrintBytes(char* x, byte* data, int length)
{
    using namespace std;
    cout << x << endl;
    for (int i = 0; i < length; i++)
    {
        std::cout << "0x" << std::setbase(16) << std::setw(2) << std::setfill('0');
        std::cout << static_cast<unsigned int>(data[i]) << " ";
    }
    std::cout << std::dec;
    cout << endl;
}
byte* set = new byte[SET_SIZE * input_size];
for (int i = 0; i < SET_SIZE; i++)
{
    sprintf((char*)&set[i * input_size], "M%06d", i + 1);
    PrintBytes("set", &set[i * input_size], input_size); // print each sub-array
}
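If the row width is known at compile time, one way to keep the single allocation but get typed sub-array access is a pointer-to-array cast. This is a sketch only, with made-up sizes, since the question does not show how SET_SIZE and input_size are defined:
#include <cstdio>

typedef unsigned char byte; // assumed definition of byte

int main()
{
    const int SET_SIZE = 10;  // hypothetical values
    const int INPUT_SIZE = 8;
    byte* set = new byte[SET_SIZE * INPUT_SIZE];
    // View the flat buffer as SET_SIZE rows of INPUT_SIZE bytes each.
    byte (*rows)[INPUT_SIZE] = reinterpret_cast<byte (*)[INPUT_SIZE]>(set);
    for (int i = 0; i < SET_SIZE; i++)
        std::snprintf(reinterpret_cast<char*>(rows[i]), INPUT_SIZE, "M%06d", i + 1);
    std::printf("%s\n", reinterpret_cast<char*>(rows[0])); // prints M000001
    delete[] set;
    return 0;
}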

Reading a value in associative array creates a new key

I have code such as this. pvalueholder is a polymorphic class; it can hold all sorts of types: string, etc. It can also have an undefined type.
typedef hash_map<pvalueholder, pvalueholder, pvaluehasher> hashtype;

hashtype h;
pvalueholder v;
v = "c";
h[v] = 5;                       // h has one element
pvalueholder v2 = h[v];         // here h gets a new key/value - how is that possible?
cout << (string)(h[v]) << endl; // here h gets another new key/value - how is that possible?
int i = 0;
for (hashtype::iterator h1 = h.begin(); h1 != h.end(); h1++)
{
    cout << "no: " << i++ << endl;
} // this prints three lines; it should print one...
Two values are undefined here, the third one is 5 as expected.
size_t pvaluehasher::operator()(const pvalueholder& p) const
{
    cout << "hashvalue:" << p.value->hashvalue() << endl;
    return p.value->hashvalue();
}
Here is what is printed:
hashvalue:84696444
hashvalue:84696444
hashvalue:84696444
returns:1
hashvalue:84696444
returns:1
hashvalue:84696444
returns:1
returns:1
hashvalue:84696444
Do you have any ideas what it may be?
Thank you.
Solution:
The two-argument operator()(parameter1, parameter2) needs to behave differently with the Microsoft STL: for Microsoft it needs to return the less-than relationship between parameter1 and parameter2, while for gcc it needs to return equality. I returned equality, so the comparison function for the keys was not correct: it returned true for equality while the Microsoft STL requires less-than.
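For reference, a sketch of what an MSVC-compatible traits class looks like, illustrated with std::string keys since the full pvalueholder declaration is not shown (stdext::hash_map also requires the bucket_size and min_buckets constants; this compiles only with the Microsoft STL):
#include <hash_map>  // Microsoft stdext::hash_map
#include <string>
#include <iostream>

struct str_hash_traits
{
    enum { bucket_size = 4, min_buckets = 8 }; // required by MSVC's hash_compare

    // One-argument form: the hash function.
    size_t operator()(const std::string& s) const
    {
        size_t h = 0;
        for (size_t i = 0; i < s.size(); i++)
            h = h * 31 + s[i];
        return h;
    }
    // Two-argument form: LESS-THAN ordering on MSVC, not equality.
    bool operator()(const std::string& a, const std::string& b) const
    {
        return a < b;
    }
};

int main()
{
    stdext::hash_map<std::string, int, str_hash_traits> h;
    h["c"] = 5;
    std::cout << h["c"] << std::endl; // prints 5; still one element
    return 0;
}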
My guess would be that your hash function is incorrect - meaning it produces different hash values given the same key "c".
Show the declaration for pvalueholder and full code for pvaluehasher.
It's almost impossible to comment on hash_map, because it's never been standardized and the existing implementations aren't entirely consistent. Worse, your code doesn't seem to be correct or compilable as it stands -- in some places the value associated with the key seems to be an int, and in other places a string.
Using std::tr1::unordered_map and fixing the rest of the code so it compiles and seems reasonable, like this:
#include <unordered_map>
#include <iostream>
#include <string>
using namespace std;

typedef std::tr1::unordered_map<std::string, int> hashtype;

std::ostream &operator<<(std::ostream &os, std::pair<std::string, int> const &d) {
    return os << d.first << ": " << d.second;
}

int main() {
    hashtype h;
    std::string v = "c";
    h[v] = 5; // h has one element
    int v2 = h[v];
    cout << h[v] << endl;
    int i = 0;
    for (hashtype::iterator h1 = h.begin(); h1 != h.end(); h1++)
    {
        cout << *h1 << endl;
    } // prints one line, as it should
    return 0;
}
The output I get is:
5
c: 5
This seems quite reasonable -- we've inserted only one item, as expected.
