Getting extra newlines with fmt in Windows

I started using fmt for printing recently. I really like the library: it's fast and easy to use. But now that I've completed my conversion, there are ways my program can run that render with a bunch of additional newlines. It doesn't happen in every case, so this will get a bit deep.
What I have is a compiler and a build manager. The build manager (picture Ninja, although this is a custom tool) launches compile processes, buffers the output, and prints it all at once. Both programs have been converted to use fmt. The key function being called is fmt::vprint(stream, format, args). When the build manager prints directly, things are fine. But when it prints the child process output it has buffered, every \n in the data has been prefixed with \r. Windows Terminal renders that fine, but some destinations (such as the Visual Studio output window) do not, and show a bunch of extra newlines.
fmt is open source so I was able to hack on it a bunch and see what is different between what it did and what my program was doing originally. The crux is this:
namespace detail {

FMT_FUNC void print(std::FILE* f, string_view text) {
#ifdef _WIN32
  auto fd = _fileno(f);
  if (_isatty(fd)) {
    detail::utf8_to_utf16 u16(string_view(text.data(), text.size()));
    auto written = detail::dword();
    if (detail::WriteConsoleW(reinterpret_cast<void*>(_get_osfhandle(fd)),
                              u16.c_str(), static_cast<uint32_t>(u16.size()),
                              &written, nullptr)) {
      return;
    }
    // Fallback to fwrite on failure. It can happen if the output has been
    // redirected to NUL.
  }
#endif
  detail::fwrite_fully(text.data(), 1, text.size(), f);
}

}  // namespace detail
As a child process, _isatty() comes back false, so we fall back to fwrite(), and that is what triggers the \r translation. My original program has an fwrite() fallback as well, but it only kicks in when GetStdHandle(STD_OUTPUT_HANDLE) returns nullptr. In the child-process case, there is still a handle we can WriteFile() to.
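A minimal sketch of that original check (simplified, and the function name is invented for illustration):

#include <windows.h>
#include <cstdio>

// Only drop to fwrite() when there is no stdout handle at all,
// rather than keying off _isatty().
void print_raw(const char* data, size_t size) {
    HANDLE out = GetStdHandle(STD_OUTPUT_HANDLE);
    if (out != nullptr && out != INVALID_HANDLE_VALUE) {
        DWORD written = 0;
        WriteFile(out, data, static_cast<DWORD>(size), &written, nullptr);
        return;
    }
    fwrite(data, 1, size, stdout);
}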
The other side effect I see is when I use the fmt way of injecting color, e.g.:
fmt::print(fmt::emphasis::bold | fg(fmt::color::red), "Elapsed time: {0:.2f} seconds", 1.23);
Again, Windows Terminal renders it correctly, but in Visual Studio's output window this turns into a soup of garbage. The native way of doing it, SetConsoleTextAttribute(console, FOREGROUND_RED | FOREGROUND_GREEN | FOREGROUND_INTENSITY), does not trigger that problem.
I tried hacking up the fmt source to be more like my original console-printing code. The key difference was the _isatty() check; I suspect that's too broad a test for the cases where console printing might fail.

\r is added because the stream is opened in text mode. You could try (re)opening it in binary mode, or stripping the \r on the read side.
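A minimal sketch of the binary-mode approach, using the CRT's _setmode() (where you call it depends on how your program sets up stdio):

#include <fcntl.h>   // _O_BINARY
#include <io.h>      // _setmode, _fileno
#include <cstdio>

int main() {
    // Switch stdout to binary mode so the CRT stops translating \n to \r\n.
    // Windows-only; guard with #ifdef _WIN32 in portable code.
    _setmode(_fileno(stdout), _O_BINARY);
    std::fputs("one\ntwo\n", stdout);  // the bytes written are exactly one\ntwo\n
    return 0;
}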


How to check the available free space in Haxe?

I can't find a way to check the free space available on a device using Haxe, OpenFL, Lime, or another library.
I would like to avoid downloading data that would exceed the size recommended for an app on each device.
How do you check that?
Try creating a file of that size! Then either delete it or reopen and write (not append) over its contents.
I don't know whether all the platforms Haxe supports will work with this trick, but the algorithm is reported to work in many places and languages (I personally tested it in Ruby and saw the same suggestion for C++/.NET). To check whether X bytes of disk space are available:
open a new file for writing
seek X-1 bytes from the beginning
write a byte of data (whatever you want, 0, 42...)
close the file (probably unrelated to the task at hand, but don't forget to do that anyway)
If there's insufficient disk space, you'll likely get an exception at some point in this algorithm. You'll have to find out what errors to expect and process them properly.
Using ihx I've found that the following works and requires nothing but the Haxe standard library:
haxe interactive shell v0.3.4
type "help" for help
>> import sys.io.*;
>> var f = File.write('loca', true)
sys.io.FileOutput : { __f => #abstract }
>> f.seek(39999, FileSeek.SeekBegin)
Void : null
>> f.writeByte(0)
Void : null
>> f.close()
Void : null
After these manipulations, I had a file named loca of exactly 40000 bytes in my working directory.
By the way, be careful when doing things like these in ihx since it re-runs the entire session with the last entered line appended each time.
Ongoing experimentation:
However, when there's insufficient disk space, the write may not fail with an error. In that case you'll have to check the real size with sys.FileSystem.stat(path).size. And don't forget to delete the file if there's not enough space.
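Putting the pieces together, a sketch of the whole check (the function name and probe path are mine; adjust the error handling to what your targets actually throw):

import sys.io.File;
import sys.io.FileSeek;
import sys.FileSystem;

class FreeSpace {
    // Probe whether `bytes` of disk space are available by writing a file
    // of that size. `probePath` is a throwaway file deleted before returning.
    static function hasFreeSpace(probePath:String, bytes:Int):Bool {
        var ok = false;
        try {
            var f = File.write(probePath, true);
            f.seek(bytes - 1, FileSeek.SeekBegin);
            f.writeByte(0);
            f.close();
            // Some platforms don't raise on a failed write, so verify the size.
            ok = FileSystem.stat(probePath).size >= bytes;
        } catch (e:Dynamic) {
            ok = false;
        }
        if (FileSystem.exists(probePath)) FileSystem.deleteFile(probePath);
        return ok;
    }

    static function main() {
        trace(hasFreeSpace("loca", 40000)); // true if ~40 KB are available
    }
}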

Issue with filepath name, possible corrupt characters

Perl and HTML, CGI on Linux.
The issue is with a file path name being passed in a form field to a CGI script on the server.
The issue is with the Linux file path, not the PC side.
I am using two programs:
1) A program written years ago: dynamic HTML generated in a Perl program and presented to the user as a form. I modified it by inserting the code needed to let the user select a file from their PC to be placed on the Linux machine.
Because this program already knows the file path needed on the Linux side, I pass that path in a hidden form field to program 2.
2) A CGI program on the Linux side that runs when the form from (1) is posted.
Now the strange issue.
The file path that I pass behaves very oddly.
I can extract it using
my $filepath = $query->param("serverfpath");
The above does populate $filepath with what looks like exactly the correct path.
But it fails, and not in a way that takes me to the file-open error block; instead, the call to the CGI script itself errors out.
However, if I populate $filepath with EXACTLY the same string, hard-coded, it works, and my file successfully uploads.
For example:
$fpath1 = $query->param("serverfpath");
$fpath2 = "/opt/webhost/ims/DOCURVC/data";
A comparison of $fpath1 and $fpath2 reveals that they are exactly equal.
A length check of $fpath1 and $fpath2 reveals that they are exactly the same length.
I have tried many methods of cleaning the data in $fpath1.
I chomp it.
I remove any non-standard characters:
$fpath1 =~ s/[^A-Za-z0-9\-\.\/]//g;
and this:
my $safe_filepath_characters = "a-zA-Z0-9_.-/";
$fpath1 =~ s/[^$safe_filepath_characters]//g;
But no matter what I do, using $fpath1 causes an error, using $fpath2 works.
What could be wrong with the data in $fpath1 that would let it compare as equal to $fpath2, look exactly the same, show exactly the same length, and yet not behave the same?
For the file-open block below:
$upload_dir = $fpath1
causes complete failure of the CGI to load, as if the server cannot find the CGI (which I know is sometimes caused by a syntax error in the script).
$upload_dir = $fpath2
gives me a successful file upload.
$upload_dir = ""
does not fail the call to the CGI; it executes the else block of the if below, as expected.
Here is the file-open block:
if (open(UPLOADFILE, ">$upload_dir/$filename"))
{
    binmode UPLOADFILE;
    while (<$upload_filehandle>)
    {
        print UPLOADFILE;
    }
    close UPLOADFILE;
    $msgstr = "Done with Upload: upload_dir=$upload_dir filename=$filename";
}
else
{
    $msgstr = "ERROR opening for upload: upload_dir=$upload_dir filename=$filename";
}
What other tests should I perform on $fpath1 to find out why it does not work the same as its hard-coded equivalent $fpath2?
I did try character replacement, one character at a time, copying from $fpath2 into $fpath1.
Even after replacing a single character, $fpath1 still failed with the same error, although the characters looked exactly the same.
Is your CGI perhaps running perl with the -T (taint mode) switch (e.g., #!/usr/bin/perl -T)? If so, any value coming from an untrusted source (such as user input, URIs, and form fields) is not allowed to be used in system operations, such as open, until it has been untainted with a regex capture. Note that using s/// to modify it in place will not untaint the value.
$fpath1 =~ /^([A-Za-z0-9\-\.\/]*)$/;   # a capture group is considered untainted
$fpath1 = $1;
die "Illegal character in fpath1" unless defined $fpath1;
should work if taint mode is your issue.
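For context, a minimal self-contained version of that flow (the field name follows the question; the open target is illustrative):

#!/usr/bin/perl -T
use strict;
use warnings;
use CGI;

my $query  = CGI->new;
my $fpath1 = $query->param("serverfpath");   # tainted: comes from the form

# Untaint via a regex capture; an in-place s/// would NOT untaint it.
($fpath1) = $fpath1 =~ /^([A-Za-z0-9\-\.\/]*)$/;
die "Illegal character in serverfpath" unless defined $fpath1;

# $fpath1 is now allowed in system operations such as open().
open my $fh, '>', "$fpath1/upload.dat" or die "open failed: $!";
close $fh;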
But it fails, and not in a way that takes me to the file open error block, but such that the call to the CGI script gives an error.
Premature end of script headers? Try running the CGI from the command line:
perl your_upload_script.cgi serverfpath=/opt/webhost/ims/DOCURVC/data

Linux serial port listener and interpreter?

I'm using a serial device for a project, and what I'm trying to accomplish on the PC side is listening for a command sent by the serial device, interpreting the query, running some code depending on the query, and transmitting back the result.
To be honest, I tried using PHP as the listener, and it works; unfortunately, the polling loop required to make the script act as a receiver loads the CPU to 25%, so it's not really the best option.
I'm using Cygwin right now, but I'd like to create a bash script using native Linux commands.
I can receive data by using:
cat /dev/ttyS2
And send a response with:
echo "command to send" > /dev/ttyS2
My question is: how do I make an automated listener that can receive and send data? The main issue is how to stop the cat /dev/ttyS2 command once information has been received, put the data into a variable that I can compare with a switch or a series of if/else blocks, send back a response, and start the cycle all over again.
Thanks
Is this not what you're looking for?
while read -r line < /dev/ttyS2; do
    # $line is the line read; do something with it
    # which produces $result
    echo "$result" > /dev/ttyS2
done
It's possible that reopening the serial device on every line has some side-effect, in which case you could try:
while read -r line; do
    # $line is the line read; do something with it
    # which produces $result
    echo "$result" > /dev/ttyS2
done < /dev/ttyS2
You could also move the output redirection, but I suspect you will have to turn off stdout buffering.
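Putting that together with the switch idea from the question, a sketch (the commands and replies are invented) that also configures the port first:

#!/usr/bin/env bash
# Configure the port once: 115200 baud, raw mode, no echo (adjust to your device).
stty -F /dev/ttyS2 115200 raw -echo

while read -r line; do
    case "$line" in
        PING)   result="PONG" ;;
        STATUS) result="OK" ;;
        *)      result="ERR unknown command" ;;
    esac
    echo "$result" > /dev/ttyS2
done < /dev/ttyS2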
To remain fairly system-independent, use a cross-platform programming language like Python with a cross-platform serial library like pySerial, and do the processing inside a script. I have used pySerial, and I could run the script cross-platform with almost no changes to the source code. By using bash you're limiting yourself quite a bit.
If you use the right tools, it is possible to have your CPU usage be exactly 0 when your device produces no output.
To accomplish this, you should use a higher-level language (Perl, Python, or C/C++ would do, but not bash) and run a select loop on top of the file handle of your serial device. Here is the Perl documentation: http://perldoc.perl.org/IO/Select.html, but you can use any other language as long as it has support for the select() syscall.
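A minimal Perl sketch of that idea (assumes the port was already configured, e.g. with stty -F /dev/ttyS2 115200 raw, and that commands arrive one per line):

#!/usr/bin/perl
use strict;
use warnings;
use IO::Handle;
use IO::Select;

open my $port, '+<', '/dev/ttyS2' or die "open: $!";
$port->autoflush(1);
my $sel = IO::Select->new($port);

while (1) {
    # can_read() blocks in the kernel until data arrives: 0% CPU while idle.
    for my $fh ($sel->can_read) {
        my $line = <$fh>;
        last unless defined $line;
        chomp $line;
        my $result = "ack: $line";    # replace with real command dispatch
        print {$port} "$result\n";
    }
}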
I would recommend using C++ with Qt 5.1.1 and its QtSerialPort module; it's really easy, and if you are familiar with the framework it'll be a piece of cake!
The QtSerialPort documentation and examples have more information and are worth a look; give it a try,
it's really painless. You can also develop on Windows and then port your code to Linux; it's straightforward.
Declare an object like this:
QSerialPort mPort; //remember to #include <QtSerialPort/QSerialPort>
//also add QT += serialport to your .pro file
Then add this code:
MainWindow::MainWindow(QWidget *parent) : QMainWindow(parent)
{
    setupUi(this);
    connect(this->pushButton, SIGNAL(clicked()), this, SLOT(sendData()));
    mPort.setPortName("ttyS0");
    mPort.setBaudRate(QSerialPort::Baud115200);
    mPort.setParity(QSerialPort::EvenParity);
    if (!mPort.open(QSerialPort::ReadWrite))
    {
        this->label->setText(tr("unable to open port, %1").arg(mPort.error()));
    }
    connect(&(this->mPort), SIGNAL(readyRead()), this, SLOT(readData()));
}

void MainWindow::sendData()
{
    QByteArray data = lineEdit->text().toLatin1();
    if (mPort.isOpen())
    {
        mPort.write(data);
    }
    else
    {
        this->label->setText(tr("port closed %1").arg(mPort.error()));
    }
}

void MainWindow::readData()
{
    // readAll() drains everything the port has buffered, so one call per
    // readyRead() signal is enough; no manual byte-counting loop is needed.
    QString newData = mPort.readAll();
    this->textEdit->insertPlainText("\n" + newData);
}

Running a main function inside another main function in a .hs file changes the IO behaviour?

I'm currently finishing my first Haskell project, and on the final step of the work my I/O function seems to behave strangely after I connected the different Haskell files.
I have a main file (f1.hs) which loads some info from a multimedia library and saves it into variables in a new .hs file (f2.hs). I also have a "data processing and user interface" file (f3.hs), which reads those variables and, depending on what the user orders, sorts them and displays them on screen. This f3.hs file works with menus, driven by the values of the keyboard input (getLine).
In order to make the work sequence "automatic", I made a "main" function in the f1.hs file which creates the f2.hs file and then, using the System.Cmd module, calls system "runhaskell f3.hs". This routes the user from the f1.hs file to the main function of f3.hs.
The strange thing is that, after I did that, all the getLine calls seem to run before the last line of the prompt appears.
What it should appear would be:
Question One.....
Answer: (cursor's place)
but what I get is:
Question One.....
(cursor's place)
Answer:
This only happens if I runhaskell f1.hs. If I runhaskell f3.hs directly, it works correctly (though I can't do that in the final job, as the f2.hs file needs to be created first). Am I doing something wrong with this sequence?
I'm sorry for the lack of code, but I thought it wouldn't help in understanding the problem...
This is typically caused by line buffering, meaning the text does not actually get printed to the console until a newline is printed. The solution is to flush the buffer manually, i.e. something like this:
import System.IO

main = do ...
          putStr "Answer: "
          hFlush stdout
          ...
Alternatively, you can disable buffering with hSetBuffering stdout NoBuffering, though this comes at a slight performance cost, so I recommend flushing manually when you need to print a partial line.
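A minimal sketch of that alternative (disable buffering once at startup instead of flushing at every prompt):

import System.IO

main :: IO ()
main = do
  hSetBuffering stdout NoBuffering  -- prompt text appears immediately
  putStr "Answer: "
  answer <- getLine
  putStrLn ("You picked: " ++ answer)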

fopen crashes only when running from release executable

I make several calls to a function that reads data from an input file. Everything works fine in debug mode, but when I try to run the executable from release mode, the line with fopen crashes the program after a few calls. My code is:
From header file:
#define presstankdatabase "presst_database.txt"
In function:
FILE *fidread;
fidread = fopen(presstankdatabase, "r");
if (fidread == NULL) {
    printf("Failed to open pressurant tank database: %s\n", presstankdatabase);
    return 1;
}
While debugging, I inserted print statements just before and just after the line starting with fidread =. After several calls the program crashes and I get the message "A problem caused the program to stop working correctly. Please close the program." The output from just before the fopen call is displayed, but the output from just after is not. My understanding of fopen is that it should return either a pointer or NULL, but the program crashes before it even gets to the check. The only thing I can think of is that somehow I'm having memory problems, but I don't know how that would fit in with fopen crashing. Does anyone know what might be going on? Thanks!
EDIT 1: I increased the size of three variables; the only places they're used (except in printf() calls) are shown below.
char *constid = (char*)malloc(sizeof(char)*20);
Used like so:
strcpy(constid,"Propellant");
strcpy(constid,"Propellant tank");
strcpy(constid,"Pressurant tank");
If the variables are sized to 20, as shown above, it crashes. But if they're larger (I've tried 120 and 100), the program runs. The variables aren't used anywhere other than in fprintf() or printf() calls.
presstankdatabase should be a pointer to a string containing the filename to open. If fopen() crashes, then that pointer is probably invalid (or NULL). Without any more code it is not possible to debug further. Use the VC debugger to see what's happening...
EDIT:
Another common cause of this is a filename string that suddenly stops being NULL-terminated.
You should add a printf() call to print the filename before opening. It will most probably fail to produce the expected output. If not, then you have a more interesting form of memory corruption that will take some more work to weed out.
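Something like this (the brackets and explicit length make stray characters or a missing terminator easier to spot; assumes <string.h> is included):

printf("About to open: [%s] (len=%zu)\n",
       presstankdatabase, strlen(presstankdatabase));
fflush(stdout);  /* make sure the line is out before any crash */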
EDIT 2:
If the printf() call shows the correct string, then you probably have memory corruption somewhere else in your code that has mangled some internal structure of the C library. A common cause is writing beyond the end (or before the beginning, for that matter) of a static array or a region provided by malloc().
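To illustrate the failure mode, a contrived sketch (not the asker's code): the overrun happens at the strcpy, but the crash surfaces later, inside an unrelated allocation. Because this is undefined behavior, it may or may not crash depending on the allocator, which is exactly why debug and release builds can behave differently.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void) {
    char *constid = malloc(20);
    if (constid == NULL) return 1;

    /* 37 characters plus the NUL terminator (38 bytes) into a 20-byte
       buffer: the excess overwrites the allocator's bookkeeping that
       lives just past the block. */
    strcpy(constid, "Pressurant tank database input record");

    /* Nothing wrong here, but fopen() allocates a FILE buffer, walks the
       damaged heap structures, and crashes far from the real bug. */
    FILE *f = fopen("presst_database.txt", "r");
    if (f != NULL) fclose(f);

    free(constid);
    return 0;
}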
