When a server fopen()s a FIFO for reading and starts a thread with std::async, the client can fopen() the FIFO for writing the first time. But if the client tries to write again, it is blocked until the thread finishes and the server can fopen() the FIFO for reading again.
Any idea?
Kai
Apologies for my confusing description. Here are snippets of the code.
Server side:
#define FIFO_FILE "/tmp/fifo"

while (1)
{
    char readbuf[25];
    FILE *file = fopen(FIFO_FILE, "r");
    fgets(readbuf, 25, file);
    std::queue<std::string>().swap(msg);
    msg.push(readbuf);
    while (!feof(file))
    {
        fgets(readbuf, 25, file);
        msg.push(readbuf);
    }
    std::cout << "message num: " << msg.size() << std::endl;
    std::future<void> ret = std::async(std::launch::async, command_process, std::ref(msg), std::ref(binfo));
    fclose(file);
    std::cout << "messages are being processed!\n";
}
Client side:
int main(int argc, char *argv[])
{
    FILE *fp;
    if ((fp = fopen(FIFO_FILE, "w")) == NULL) {
        perror("fopen \n");
        return -1;
    }
    fputs(argv[1], fp);
    fclose(fp);
    return 0;
}
After the server starts, it waits at fgets. When the client sends a string through the FIFO, the server receives the string, stores it in msg, and passes it to the command_process thread for processing; the server then waits at fopen for further messages. HOWEVER, if the client tries to send a message again, it is blocked at its own fopen until the server's thread finishes processing, and only then does the server receive the client's message.
I expect the client not to be blocked when sending a message: the server should receive the message immediately, abort the previous message, and process the new one.
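For reference, the client's fopen(FIFO_FILE, "w") corresponds to open(FIFO_FILE, O_WRONLY), which blocks until some process has the FIFO open for reading. A minimal sketch that fails fast instead of blocking (assuming the same /tmp/fifo path; send_nonblocking is a hypothetical helper, not part of the code above), which makes it easy to see when the server does not currently have the read end open:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* With O_NONBLOCK, open() fails with ENXIO instead of blocking
 * when no process has the FIFO open for reading. */
int send_nonblocking(const char *msg)
{
    int fd = open("/tmp/fifo", O_WRONLY | O_NONBLOCK);
    if (fd < 0) {
        perror("open");   /* ENXIO here means the server is not reading right now */
        return -1;
    }
    write(fd, msg, strlen(msg));
    close(fd);
    return 0;
}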
Kai
Related
I have two programs, one for the client and one for the server. They can chat with each other using threads.
I then added two buttons, one on the client and one on the server, to send a file from the server to the client.
When I connect the client to the server, I start the thread, which waits to receive something with the recv() function.
The problem is that when I want to send a file to the client, the transfer does not work because that thread is running.
When I do not use threads, the transfer works.
My question is: can I stop that thread from a button click?
PS: I am a beginner with threads.
This is my code with threads:
delegate System::Void PrintStringDel(String^ str);

void PrintString(String^ str) {
    txtReceive->Text += str;
    txtReceive->SelectionStart = txtReceive->Text->Length;
    txtReceive->ScrollToCaret();
}

void ReceiveThread() {
    int ByteReceived;
    char buff[1024];
    while(true) {
        ByteReceived = recv(SendingSocket, buff, sizeof(buff), 0);
        if(ByteReceived > 0) {
            PrintStringDel^ print = gcnew PrintStringDel(this, &Form1::PrintString);
            String^ text = gcnew String(buff);
            txtReceive->BeginInvoke(print, text);
        }
    }
}
In the client's Connect():
...
Threading::ThreadStart^ ts = gcnew Threading::ThreadStart(this, &Form1::ReceiveThread);
Threading::Thread^ t = gcnew Threading::Thread(ts);
t->Start();
I have a process that uses select() to poll the stdin file descriptor. When I run it from a console it works fine, but after I put the process under cron, the output indicates a problem calling select() on stdin. Is there a way to work around this under cron and make the process think there is an stdin file descriptor that receives nothing?
So what I did was to check /proc/self/fd/0; in case it doesn't point to /dev/pts/something, I skip the select() call. Check your fd 0 using something like this:
bool rc = true;
char linkName[256];
const char* fd0 = "/proc/self/fd/0";
const char* devPts = "/dev/pts";

/* readlink() does not null-terminate; use its return value to terminate the string */
ssize_t len = readlink(fd0, linkName, sizeof(linkName) - 1);
if (len < 0)
    len = 0;
linkName[len] = '\0';

/* strlen(), not sizeof(): sizeof(devPts) is just the size of a pointer */
if (strncmp(linkName, devPts, strlen(devPts)) != 0)
{
    std::cout << "The application's stdin file descriptor doesn't point to /dev/pts/XXX, input will be ignored" << std::endl;
    rc = false;
}
return rc;
I am doing an assignment using pthreads and mutual exclusion. I have to create n print servers and m print clients, each of which has 5 print jobs. We are to create the threads and pass the jobs through a queue of size 4 to the print servers, which then print the job (i.e. busy work in this case). Here is the code for passing the jobs and servicing them.
These are the client and server threads:
void *PrintClient(void *arg){
    int i;
    char str[NUMJOBSPERCLIENT][100];
    for(i=1; i<NUMJOBSPERCLIENT; i++){
        pthread_mutex_lock(&mutex);
        req.clientID = pthread_self();
        req.fileSize = rand_int(FILEMIN, FILEMAX);
        sprintf(str[i], "File_%d_%d", pthread_self(), i);
        req.fileName = str[i];
        append(req);
        pthread_mutex_unlock(&mutex);
        sleep(rand_int(1,3));
    } // for
    pthread_exit(NULL);
} // end PrintClient

void *PrintServer(void *arg){
    pthread_mutex_lock(&mutex);
    pthread_cond_wait(&cond, &mutex);
    while(count > 0){
        take();
        count = count - 1;
    }
    pthread_mutex_unlock(&mutex);
    pthread_exit(NULL);
} // end PrintServer
And this is the code which adds or removes a job from the queue. I know the error is here and has to do with the threads themselves, but I cannot find it for the life of me. So far the debugger has been almost no help (I am running on a university Linux server, which shows no compile errors).
void append(PrintRequest item){
    BoundBuffer[count] = req;
    printf("I am client %s\n", req.fileName);
    count++;
    if(count == BUFSIZE){
        printf("Buffer Size Reached\n");
        pthread_cond_signal(&cond);
    }
} // end append

PrintRequest take(){
    printf("Printing %s\n", BoundBuffer[count].fileName);
    usleep(BoundBuffer[count].fileSize/PRINTSPEED);
    printf("Finished Printing %s\n", BoundBuffer[count].fileName);
} // end take
I guess the segmentation fault is signaled around printf("Printing %s\n", BoundBuffer[count].fileName);, right?
In your PrintClient, you store the file name in the local variable str[][] and copy a pointer to this local variable into the request (req.fileName = str[i];). Thus the address pointed to by req.fileName is allocated on the stack of the client thread.
When the requests are processed in the server thread PrintServer, it is possible that the client thread which generated the request is no longer present. The result is that req.fileName points to an address which no longer exists (the stack memory has already been de-allocated when the client thread exited), so when you dereference that address in printf("Printing %s\n", BoundBuffer[count].fileName);, a segmentation fault is signaled.
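One way around this is to give each request its own copy of the name instead of a pointer into a client thread's stack. A sketch, assuming PrintRequest is your own struct and can be changed (the 100-byte size mirrors the str buffers in PrintClient, and the int fileSize type is an assumption):

typedef struct {
    pthread_t clientID;
    int       fileSize;
    char      fileName[100];   /* the request owns its copy of the name */
} PrintRequest;

/* in PrintClient: fill the request's own buffer instead of pointing at str[i] */
snprintf(req.fileName, sizeof(req.fileName), "File_%lu_%d",
         (unsigned long)pthread_self(), i);   /* cast: pthread_t printed as a number on Linux */

With the copy stored inside the request, the server can safely print the name even after the client thread has exited.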
Good morning. I'm looking for an example of sending a file from one PC to another with QTcpSocket. I tried to write my own code. I have an application in which the user chooses a file from his hard drive (any type) and sends it to the TCP server; the server then sends this file on to the other clients. But I have a problem: when I choose the file and send it, on the client's side I get the message "file is sending", but the server's side shows that the file isn't received with all of its bytes.
Any suggestion please. This is the function for sending the file on the client's side:
void FenClient::on_boutonEnvoyer_2_clicked()
{
    QString nomFichier = lineEdit->text();
    QFile file(lineEdit->text());
    if(!file.open(QIODevice::ReadOnly))
    {
        qDebug() << "Error, file can't be opened successfully !";
        return;
    }
    QByteArray bytes = file.readAll();
    QByteArray block;
    QDataStream out(&block, QIODevice::WriteOnly);
    out << quint32(0);
    out << nomFichier;
    out << bytes;
    out.device()->seek(0);
    out << quint32((block.size() - sizeof(quint32)));
    qDebug() << "Etat : envoi en cours...";
    listeMessages->append("status : sending the file...");
    socket->write(block);
}
and the server side:
void FenServeur::datarecieved()
{
    QTcpSocket *socket = qobject_cast<QTcpSocket *>(sender());
    if(socket == 0)
    {
        qDebug() << "no Socket!";
        return;
    }
    forever
    {
        QDataStream in(socket);
        if(blockSize == 0)
        {
            if(socket->bytesAvailable() < sizeof(quint32))
            {
                qDebug() << "Error < sizeof(quint32))";
                return;
            }
            in >> blockSize;
        }
        if(socket->bytesAvailable() < blockSize)
        {
            qDebug() << "data not recieved with its total bytes";
            return;
        }
        qDebug() << "!!!!!!";
        QByteArray dataOut;
        QString nameFile;
        in >> nameFile >> dataOut;
        QFile fileOut(nameFile);
        fileOut.open(QIODevice::WriteOnly);
        fileOut.write(dataOut);
        fileOut.close();
        blockSize = 0;
    }
}
void FenServeur::sendToAll(const QString &message)
{
    QByteArray paquet;
    QDataStream out(&paquet, QIODevice::WriteOnly);
    out << (quint32) 0;
    out << message;
    out.device()->seek(0);
    out << (quint32) (paquet.size() - sizeof(quint32));
    for (int i = 0; i < clients.size(); i++)
    {
        clients[i]->write(paquet);
    }
}
So I can't write the file that the server received into a new file.
Any suggestions please, and thanks in advance.
Your code is waiting for the other side, but the other side is waiting for you. Any protocol that allows both sides to wait for each other is fundamentally broken.
TCP allows the sender to wait for the receiver but does not allow the receiver to wait for the sender. This makes sense because not allowing the sender to wait for the receiver requires an unlimited amount of buffering. Thus for any application layered on top of TCP, the receiver may not wait for the sender.
But you do:
if(socket->bytesAvailable() < blockSize)
{
    qDebug() << "data not recieved with its total bytes";
    return;
}
Here, you are waiting for the sender to make progress (bytesAvailable to increase) before you are willing to receive (pull data from the socket). But the sender is waiting for you to make progress before it is willing to send more data. This causes a deadlock. Don't do this.
Receive as much data as you can, as soon as you can, whenever you can. Never insist on receiving more data over the network before you will pull already received data from the network stack.
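A sketch of that idea (assuming blockSize and a QByteArray buffer member, here called m_buffer, exist on the receiving class; the names are illustrative, not from the original code): append whatever has already arrived, and only parse once a whole block has accumulated, without ever waiting inside the slot.

void FenServeur::datarecieved()
{
    QTcpSocket *socket = qobject_cast<QTcpSocket *>(sender());
    if (!socket)
        return;

    // Drain everything the network stack already has; never wait for more.
    m_buffer.append(socket->readAll());

    // Parse as many complete blocks as the buffer now holds.
    forever
    {
        if (blockSize == 0)
        {
            if (m_buffer.size() < qint64(sizeof(quint32)))
                return;                              // size header not complete yet
            QDataStream header(m_buffer);
            header >> blockSize;
        }
        if (m_buffer.size() < qint64(sizeof(quint32)) + blockSize)
            return;                                  // body not complete yet

        QByteArray payload = m_buffer.mid(int(sizeof(quint32)), int(blockSize));
        QDataStream in(payload);
        QString nameFile;
        QByteArray dataOut;
        in >> nameFile >> dataOut;

        QFile fileOut(nameFile);
        if (fileOut.open(QIODevice::WriteOnly))
            fileOut.write(dataOut);

        m_buffer.remove(0, int(sizeof(quint32)) + int(blockSize));
        blockSize = 0;
    }
}

Because the slot returns as soon as the data runs out and resumes on the next readyRead(), neither side ever waits on the other.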
I'm working on an application that contains several server sockets that each run in a unique thread.
An external utility (script) is called by one of the threads. This script calls a utility (client) that sends a message to one of the server sockets.
Initially, I was using system() to execute this external script, but we couldn't use that because we had to make sure the server sockets were closed in the child that was forked to execute the external script.
I now call fork() and execvp() myself. I fork() and then in the child I close all the server sockets and then call execvp() to execute the script.
Now, all of that works fine. The problem is that at times the script reports errors to the server app. The script sends these errors by calling another application (client) which opens a TCP socket and sends the appropriate data. My issue is that the client app gets a value of 0 returned by the socket() system call.
NOTE: This ONLY occurs when the script/client app is called using my forkExec() function. If the script/client app is called manually the socket() call performs appropriately and things work fine.
Based on that information, I suspect it's something in my fork()/execvp() code below... Any ideas?
void forkExec()
{
    int stat;

    stat = fork();
    if (stat < 0)
    {
        printf("Error forking child: %s", strerror(errno));
    }
    else if (stat == 0)
    {
        char *progArgs[3];

        /*
         * First, close the file descriptors that the child
         * shouldn't keep open
         */
        close(ServerFd);
        close(XMLSocket);
        close(ClientFd);
        close(EventSocket);
        close(monitorSocket);

        /* build the arguments for script */
        progArgs[0] = calloc(1, strlen("/path_to_script")+1);
        strcpy(progArgs[0], "/path_to_script");
        progArgs[1] = calloc(1, strlen(arg)+1);
        strcpy(progArgs[1], arg);
        progArgs[2] = NULL; /* Array of args must be NULL terminated for execvp() */

        /* launch the script */
        stat = execvp(progArgs[0], progArgs);
        if (stat != 0)
        {
            printf("Error executing script: '%s' '%s' : %s", progArgs[0], progArgs[1], strerror(errno));
        }

        free(progArgs[0]);
        free(progArgs[1]);
        exit(0);
    }

    return;
}
Client app code:
static int connectToServer(void)
{
    int socketFD = 0;
    int status;
    struct sockaddr_in address;
    struct hostent* hostAddr = gethostbyname("localhost");

    socketFD = socket(PF_INET, SOCK_STREAM, 0);

The above call returns 0.

    if (socketFD < 0)
    {
        fprintf(stderr, "%s-%d: Failed to create socket: %s",
                __func__, __LINE__, strerror(errno));
        return (-1);
    }

    memset(&address, 0, sizeof(struct sockaddr));
    address.sin_family = AF_INET;
    memcpy(&(address.sin_addr.s_addr), hostAddr->h_addr, hostAddr->h_length);
    address.sin_port = htons(POLLING_SERVER_PORT);

    status = connect(socketFD, (struct sockaddr *)&address, sizeof(address));
    if (status < 0)
    {
        if (errno != ECONNREFUSED)
        {
            fprintf(stderr, "%s-%d: Failed to connect to server socket: %s",
                    __func__, __LINE__, strerror(errno));
        }
        else
        {
            fprintf(stderr, "%s-%d: Server not yet available...%s",
                    __func__, __LINE__, strerror(errno));
            close(socketFD);
            socketFD = 0;
        }
    }

    return socketFD;
}
FYI
OS: Linux
Arch: ARM32
Kernel: 2.6.26
socket() returns -1 on error.
A return of 0 means socket() succeeded and gave you file descriptor 0. I suspect that one of the file descriptors you close is fd 0, and once it's closed, the next call to a function that allocates a file descriptor will return fd 0 since it's available.
A socket with value 0 is fine; it means stdin was closed, which makes fd 0 available for reuse - such as by a socket.
Chances are one of the file descriptors you close in the forkExec() child path (XMLSocket/ServerFd etc.) was fd 0. That starts the child with fd 0 closed, which won't happen when you run the app from a command line, as fd 0 will already be open as the stdin of the shell.
If you want your socket to not be 0, 1 or 2 (stdin/out/err), call the following in your forkExec() function after all the close() calls:
void reserve_tty()
{
    int fd;
    for(fd = 0; fd < 3; fd++)
    {
        int nfd;
        nfd = open("/dev/null", O_RDWR);

        if(nfd < 0) /* We're screwed. */
            continue;
        if(nfd == fd)
            continue;

        dup2(nfd, fd);
        if(nfd > 2)
            close(nfd);
    }
}
Check for socket() returning -1, which means an error occurred.
Don't forget a call to
waitpid()
End of "obvious question mode". I'm assuming a bit here, but you're not doing anything with the pid returned by the fork() call. (-:
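A minimal sketch of what that could look like in the parent branch of forkExec() (assuming the child pid is the value fork() returned in stat, as in the code above; needs <sys/wait.h>):

else
{
    int childStatus;

    /* parent: reap the child so it does not linger as a zombie */
    if (waitpid(stat, &childStatus, 0) < 0)
        printf("Error waiting for child: %s", strerror(errno));
}

If blocking forkExec() until the script exits is not acceptable, the usual alternative is reaping children with waitpid(-1, &status, WNOHANG) from a SIGCHLD handler.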
As mentioned in another comment, you really should not close 0, 1 or 2 (stdin/out/err). You can put a check in place to make sure you do not close those, so they will not be handed out as new fds when you request a new socket; a sketch of such a check follows.
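A sketch of that check (the close_if_not_std name is illustrative, not from the original code):

#include <unistd.h>

/* Close a descriptor only if it is not stdin, stdout or stderr. */
static void close_if_not_std(int fd)
{
    if (fd > 2)
        close(fd);
}

/* in the forkExec() child, instead of calling close() directly: */
close_if_not_std(ServerFd);
close_if_not_std(XMLSocket);
/* ...and likewise for the remaining descriptors */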