I want to write a testing program. It will open a special *.tests file and test a given program with the tests from that file.
I need to:
Run some program, e.g. ./main -testing 45 563 67
Read its result.
How can I do this? I want to run the program main with some tests and read its output.
You should use the QProcess class to start your program.
QString program = "./main";
QStringList arguments;
arguments << "-testing" << "45" << "563" << ...;
QProcess *myProcess = new QProcess(parent);
myProcess->start(program, arguments);
Then you can use waitForFinished() to wait for it to finish.
exitCode() will give you the return code.
The readAllStandardOutput() (or readAllStandardError()) method lets you read what the process has written to the console.
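Putting those pieces together, here is a minimal sketch, assuming the tester can simply block until ./main finishes; the helper name runOneTest and the 30-second timeout are just illustrative choices:

#include <QProcess>
#include <QString>
#include <QStringList>
#include <QDebug>

// Minimal sketch: run ./main with one set of test arguments and collect its result.
int runOneTest(const QStringList &testArgs)
{
    QProcess process;
    process.start("./main", QStringList() << "-testing" << testArgs);

    // Block until the child exits (30 s is an arbitrary timeout).
    if (!process.waitForFinished(30000)) {
        qWarning() << "test timed out or failed to start";
        return -1;
    }

    // exitCode() is the child's return code; the output streams hold what it printed.
    qDebug() << "stdout:" << process.readAllStandardOutput();
    qDebug() << "stderr:" << process.readAllStandardError();
    return process.exitCode();
}

You would then call runOneTest() once per line of test arguments read from the *.tests file.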
I have a client program that can be executed in a Linux terminal. The client sends this message to the server and exits immediately once it receives the ack from the server:
struct Msg {
char my_id[16];
};
The server just appends this my_id to a log file.
The thing is, I want Msg::my_id to be the same for every run from the terminal/shell the client is executed from. How would I do this?
Say I am a Linux user and open two terminals: X and Y.
I run my client from X three times, and from Y twice. In that case, what should I add to the client so that I see three Xs and two Ys in the server-side log file?
One thing I can think of is to take the ppid and send it. Would this always work? If not, what would be better alternatives?
@Author: proceeding based on the comment from Barmar, I thought of sharing this sample:
#include <unistd.h>
#include <string.h>
#include <stdio.h>   // for sprintf()
#if defined( CYGWIN_NT ) || defined( LINUX )
#include <iostream>
using namespace std;
#else
#include <iostream.h>
#endif

int main()
{
    // SAMPLE PROGRAM.
    struct Msg
    {
        char my_id[16];
    };
    Msg obj;

    // I am not handling fork/... other related enhancements/updates in the code.
    // getppid() returns the pid of the parent process (here: the terminal's shell).
    sprintf( obj.my_id, "%ld", (long)getppid() );
    cout << obj.my_id << "\n";

    // ttyname() returns the terminal device the client was started from (may be NULL).
    sprintf( obj.my_id, "%s", ttyname(STDIN_FILENO) );
    cout << obj.my_id << "\n";

    return 0;
}
$ g++ -DCYGWIN_NT getmy_id.cpp -o ./a.out
$ ./a.out
47130
/dev/pty0
$ echo $$ current terminal pid
47130 current terminal pid
We can also achieve the same using the user's last login; see:
How to get user's last login including the year in C
I'm pretty new to Linux and I'm trying to read in command-line arguments on a Linux operating system. I want to be able to execute the commands I give as command-line arguments programmatically. Here is what I have so far:
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
int main(int argc, char* argv[])
{
    int counter;
    for(counter = 1; counter < argc; counter++){
        pid_t pid = fork();
        if(pid < 0)
        {
            perror("Forking failed");
            exit(1);
        }
        else if(pid == 0)
        {
            char *args[] = {argv[counter], NULL};
            printf("Argument to be passed: %s \n", argv[counter]);
            execvp(args[0], args);
            perror("Command failed.");
            exit(0);
        }
        printf("Process %s completed successfully.\n", argv[counter]);
    }
    exit(0);
}
My output on terminal:
darren@darren-VirtualBox:~/Desktop$ ./cmdarguments /home/darren/Desktop/fullpathdoc1 /home/darren/Desktop/fullpathdoc2
Process /home/darren/Desktop/fullpathdoc1 completed successfully.
Process /home/darren/Desktop/fullpathdoc2 completed successfully.
darren@darren-VirtualBox:~/Desktop$ Argument to be passed: /home/darren/Desktop/fullpathdoc2
This is the second program that simply prints this statement.
Argument to be passed: /home/darren/Desktop/fullpathdoc1
This is the first program that simply prints this statement.
I want to be able to print out the process name and say that the process completed after each command-line argument has been successfully executed. For some reason, everything seems to execute backwards: my "completed" messages come up first, and the command-line arguments appear to be processed from right to left. Can someone please point out what is wrong with my code and how I can fix it?
When there are multiple processes, which process gets to run first is entirely up to your operating system's (Linux's) scheduler.
Broadly, the parent process -- that's where fork() returns > 0 -- needs to wait for the child process to complete. Bear in mind that each fork()/execvp() produces a separate, concurrent process, so if you don't wait for them, they'll proceed in their own merry way (see the sketch after the link below). There is already a discussion of this issue on SO:
how to correctly use fork, exec, wait
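A minimal sketch of that fix, keeping the question's structure and only adding the wait in the parent (waitpid() and WIFEXITED come from <sys/wait.h>):

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(int argc, char* argv[])
{
    for(int counter = 1; counter < argc; counter++){
        pid_t pid = fork();
        if(pid < 0){
            perror("Forking failed");
            exit(1);
        }
        else if(pid == 0){
            // child: replace this process image with the requested command
            char *args[] = {argv[counter], NULL};
            printf("Argument to be passed: %s \n", argv[counter]);
            execvp(args[0], args);
            perror("Command failed.");
            exit(1);
        }
        // parent: block until this child has finished before starting the next one
        int status = 0;
        if(waitpid(pid, &status, 0) == pid && WIFEXITED(status))
            printf("Process %s completed successfully.\n", argv[counter]);
        else
            printf("Process %s did not complete normally.\n", argv[counter]);
    }
    exit(0);
}

With the wait in place, each "Argument to be passed" line is printed before the corresponding "completed" line, and the arguments are handled left to right.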
I want to read stdout and stderr from a subprocess in the same thread, as described in this post. While the code works as expected under Python 2.7, the select() call in Python 3.3 does not seem to do what it should.
Have a look - here is a script that prints two lines on both stdout and stderr, then waits, and then repeats this a couple of times:
import time, sys

for i in range(5):
    sys.stdout.write("std: %d\n" % i)
    sys.stdout.write("std: %d\n" % i)
    sys.stderr.write("err: %d\n" % i)
    sys.stderr.write("err: %d\n" % i)
    time.sleep(2)
The problematic script starts the script above in a subprocess and reads its stdout and stderr as described in the linked post:
import subprocess
import select

p = subprocess.Popen(['/usr/bin/env', 'python', '-u', 'test-output.py'],
                     stdout=subprocess.PIPE, stderr=subprocess.PIPE)
r = [p.stdout.fileno(), p.stderr.fileno()]

while p.poll() is None:
    print("select")
    ret = select.select(r, [], [])
    for fd in ret[0]:
        if fd == p.stdout.fileno():
            print("readline std")
            print("stdout: " + p.stdout.readline().decode().strip())
        if fd == p.stderr.fileno():
            print("readline err")
            print("stderr: " + p.stderr.readline().decode().strip())
Note that I start the Python subprocess with the -u option, which causes Python not to buffer stdout and stderr. I also print some text before calling select() and readline() to see where the script blocks.
And here is the problem: running the script with Python 3, the output blocks for 2 seconds after each cycle despite the fact that two more lines are waiting to be read. As the text printed before each call shows, it is select() that blocks (not readline()).
My first thought was that under Python 3 select() only resumes on a flush, while under Python 2 it always returns when data is available -- but then only one line would be read every 2 seconds, which is not the case!
So my question is: is this a bug in Python 3's select()? Did I misunderstand the behavior of select()? And is there a way to work around this behavior without having to start a thread for each pipe?
Output when running Python3:
select
readline std
stdout: std: 0
readline err
stderr: err: 0
select <--- here the script blocks for 2 seconds
readline std
stdout: std: 0
select
readline std
stdout: std: 1
readline err
stderr: err: 0
select <--- here the script should block (but doesn't)
readline err
stderr: err: 1
select <--- here the script blocks for 2 seconds
readline std
stdout: std: 1
readline err
stderr: err: 1
select <--- here the script should block (but doesn't)
readline std
stdout: std: 2
readline err
stderr: err: 2
select
.
.
Edit: Please note that it makes no difference whether the child process is a Python script. The following C++ program has the same effect:
#include <iostream>
#include <cstdio>    // fflush
#include <unistd.h>  // usleep

int main() {
    for (int i = 0; i < 4; ++i) {
        std::cout << "out: " << i << std::endl;
        std::cout << "out: " << i << std::endl;
        std::cerr << "err: " << i << std::endl;
        std::cerr << "err: " << i << std::endl;
        fflush(stdout);
        fflush(stderr);
        usleep(2000000);
    }
}
It seems that the reason is buffering in subprocess.PIPE: the first readline() call reads all available data (i.e., two lines) and returns the first one.
After that, there is no unread data in the pipe, so select() does not return immediately. You can check this by doubling the readline() calls:
print("stdout: " + p.stdout.readline().decode().strip())
print("stdout: " + p.stdout.readline().decode().strip())
and ensuring that the second readline() call doesn't block.
One solution is to disable buffering using bufsize=0:
p = subprocess.Popen(['/usr/bin/env', 'python', '-u', 'test-output.py'],
                     stdout=subprocess.PIPE, stderr=subprocess.PIPE, bufsize=0)
Another possible solution is to do a non-blocking readline() or to ask the pipe file object for the size of its read buffer, but I don't know whether that is possible.
You can also read directly from p.stdout.fileno() to implement non-blocking readline().
Update: Python2 vs. Python3
The reason why Python 3 differs from Python 2 here likely lies in the new I/O module (PEP 3116). See this note:
The BufferedIOBase methods signatures are mostly identical to that of RawIOBase (exceptions: write() returns None, read()'s argument is optional), but may have different semantics. In particular, BufferedIOBase implementations may read more data than requested or delay writing data using buffers.
How can I pass a parameter like --message "text" to /usr/bin/gksudo using QProcess, in order to show my own customized text?
Calling just /usr/bin/gksudo with my script.sh works fine.
Here the minimal example:
QString cmd = QString("/usr/bin/gksudo");
QStringList param = ( QStringList << "--message my Text" << "path/to/script.sh")
QProcess.start( cmd, param );
Even if I try to add the parameter to cmd, it fails and no password prompt is shown:
QString cmd = QString("/usr/bin/gksudo --message MyText");
Solution
--message and my Text each have to be their own element:
QStringList param = QStringList() << "--message" << tr("my Text") << "path/to/script.sh";
QProcess takes the first parameter as the command to run and then passes the following space-delimited arguments as arguments to the command.
When you do this:
QStringList param = QStringList() << "--message my Text" << "path/to/script.sh";
and then pass param to QProcess, "path/to/script.sh" is passed as a command-line parameter to gksudo, but what you want is a single argument carrying --message. You need to keep those parameters together with extra quotes. So, in the case of your last example, that would be:
QString cmd = QString("/usr/bin/gksudo \"--message MyText"\");
Note the two additional \" around --message MyText
Passing this to QProcess means there are two arguments; the call to gksudo and its command line argument "--message MyText"
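For completeness, a minimal sketch of the working list-based call; the helper name runScriptWithGksudo, the QObject::tr() wrapper, and the script path are only placeholders:

#include <QProcess>
#include <QString>
#include <QStringList>
#include <QObject>

// Each option and its value is its own list element, so gksudo receives
// --message, the message text, and the script path as three distinct arguments.
void runScriptWithGksudo()
{
    QString cmd = "/usr/bin/gksudo";
    QStringList param;
    param << "--message" << QObject::tr("my Text") << "path/to/script.sh";

    QProcess process;
    process.start(cmd, param);
    process.waitForFinished(-1);   // block until gksudo (and the script) finish
}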
I wrote this code for my class, and when I debug it, it runs but shuts down within seconds. I don't know what I'm doing wrong here; I am really new to C++.
Here is the code:
#include "stdafx.h"
#include<iostream>
using namespace std;
int _tmain(int argc, _TCHAR* argv[])
{
double gallons;
double startmile;
double endmile;
double totalmilestravelled;
cout << "This Program Calculates your vehicle's gas mileage on this trip\n" << endl;
cout << "What is the number of gallons consumed on the trip: ";
cin >> gallons;
cout << "\nWhat was your ending mile?";
cin >> endmile;
cout << "\nWhat was your starting mile?";
cin >> startmile;
totalmilestravelled = endmile-startmile;
double mpg = totalmilestravelled/gallons;
cout << "your gas mileage is: " << mpg << endl;
return 0;
}
and this is the error:
The program '[9848] gasmileage.exe: Native' has exited with code 0 (0x0).
That's not an error. The program exited normally. When you run a program, it executes and exits with an exit code specified by the program. In this case you return 0, so the program exits with code 0. If you want the program to "pause" to allow you to see the result of the program before it closes, add this just before the return statement:
cin.ignore(128, '\n');
cin.get();
The first line discards newlines that were left over in the standard input. Don't worry about this too much until you learn more about the input stream, but you need to do this if you are attempting to read a string after reading numeric input from the user. The second line waits for the user to provide some input (press Return). You don't care what the input is, and you aren't going to do anything with it; you just want to force the program to wait for user input so that you can see what's going on before the program continues (which, in this case, means it immediately exits).
Think about programs that say "Press any key." It's the same thing we're doing here. Giving the user a moment to view the output.
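For illustration, a stripped-down sketch (a plain main() and a single prompt rather than the question's full program) showing where the pause goes, right before the return statement:

#include <iostream>
using namespace std;

int main()
{
    double gallons = 0.0;
    cout << "What is the number of gallons consumed on the trip: ";
    cin >> gallons;
    cout << "you entered: " << gallons << endl;

    // Pause so the console window stays open until the user presses Return.
    cin.ignore(128, '\n');  // discard the newline left over from cin >> gallons
    cin.get();              // wait for one more key press (Return)
    return 0;
}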