I am tasked with porting a legacy application to a managed language.
A few of the hard-coded calculation models would be extremely time-consuming to port, without gaining anything in terms of features or performance from a full port.
We decided to make a C++/CLI wrapper instead.
i.e. something like this:
FortranLib.h:
#pragma comment(lib, "fortranlibrary.lib")

extern "C" {
    void SUBROUTINENAME(int * param1, int * param2, float * param3, int * returnCode);
}

using namespace System;

namespace FortranlibraryWrapper {
    public ref class FortranLib {
    public:
        enum class ReturnCodes : int {
            ok = 0,
            //... and so on and so forth
        };

        ReturnCodes SubRoutineName(int param1, int param2, float param3);
    };
}
FortranLib.cpp:
#include "stdafx.h"
#include "FortranLib.h"
namespace FortranlibraryWrapper {
FortranLib::CalculationReturnCodes FortranLib::SubRoutineName(int param1, int param2, float param3)
{
int returnCode = -1;
SUBROUTINENAME( ¶m1, ¶m2, ¶m3, &returnCode);
return (ReturnCodes)returnCode;
}
}
In the actual code we have tried to bounds-check params 1-3 to avoid issues, but apparently we are not thorough enough, as we recently saw this type of error come up in a new test case:
Intel(r) Visual Fortran run-time error
forrtl: severe (408): fort: (3): Subscript #1 of the array ....
This is due to a calculation in the Fortran code that determines an array index; the calculated index is outside the bounds of the array.
The problem is that the error is reported as an error dialog and does not raise an exception. We have already tried this:
int returnCode = -1;
try {
    SUBROUTINENAME(&param1, &param2, &param3, &returnCode);
}
catch (...)
{
    throw gcnew System::Exception("fortran runtime error??");
}
return (ReturnCodes)returnCode;
and found that it does not catch anything.
The new application is intended as a server-based service, so I need to somehow capture this error, log it, and ideally continue the service and discard the job that caused the failure.
Does anyone know how to accomplish that?
I would prefer not to edit the Fortran code and recompile it, as I am a novice with that language.
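One direction I have considered is isolating the native call in a disposable child process, so that a Fortran runtime abort kills only that process and not the service. A rough sketch of the idea, where fortranjob.exe is a hypothetical console wrapper around the native call (this does not by itself suppress the error dialog):

#include <cstdlib>
#include <iostream>
#include <sstream>

// Run one job in a child process; a Fortran runtime failure then terminates
// only the child, and the service sees a nonzero exit status.
bool RunJob(int param1, int param2, float param3)
{
    std::ostringstream cmd;
    cmd << "fortranjob.exe " << param1 << " " << param2 << " " << param3;
    int status = std::system(cmd.str().c_str()); // blocks until the child exits
    if (status != 0)
    {
        std::cerr << "job failed with status " << status << ", discarding it\n";
        return false;
    }
    return true;
}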
I'm trying to get into Objective-C, but I don't have a Mac, so I am trying to get it to work on Linux. I found these two articles which talk about compiling Objective-C on Linux: this one and this one.
OK, I forgot to say that I don't want to use GNUstep; it seems dead, and I don't want the whole Cocoa framework, just the Objective-C syntax and the C standard library. But I can't compile any code without GNUstep!
If I try to compile this code:
#import <objc/Object.h>
#import <stdio.h>

@interface Number : Object
{
@public
    int number;
}
- (void)printNum;
@end

@implementation Number
- (void)printNum
{
    printf("%d\n", number);
}
@end

int main(void)
{
    Number *myNumber = [Number new]; // equivalent to [[Number alloc] init]
    myNumber->number = 6;
    [myNumber printNum];
    return 0;
}
I get a segmentation fault, because there are no init or alloc methods. And if I don't inherit from Object, like so:
#include <stdio.h>  // C standard IO library (for printf)
#include <stdlib.h> // C standard library

// Interface
@interface test
- (void)sayHello:(char *)message;
@end

// Implementation
@implementation test
- (void)sayHello:(char *)message {
    printf("%s", message);
}
@end

int main(int argc, char *argv[]) {
    test *test = [[test alloc] init];
    [test sayHello:"Hello world"];
}
I get a Bus error. It seems like the only way to create interfaces and implement them is inheriting from NSObject. How can I fix this?
By the way, I'm using gcc with the -lobjc flag (with gobjc).
EDIT: OK, so I have to create a root object myself if I don't want to use a framework. How can I do this? I imagine it's something like malloc and free in the init and release methods, but I'm not sure.
That's not really how Objective-C works. You need the runtime, I'm afraid. Many of the good bits of Objective-C are about what happens at run time rather than compile time. If you take away the library, there's not a lot left.
I am studying signals from an O'Reilly book. I came across this:
#include <signal.h>

typedef void (*sighandler_t)(int);  /* ----> func ptr returns void; uses typedef */
sighandler_t signal(int signo, sighandler_t handler);
Later on in the code, he just uses:

void sigint_handler(int signo)  /* ----> an ordinary function returning void */
{
}
Can typedef be applied to functions? I want to know how it works.
"Can typedef be applied to functions?"

Yes.

"I want to know how it works."
As in the example you have read, the syntax is rather obscure (after 25 years of C I still have to think about it), but it is quite straightforward. Passing and storing pointers to functions is greatly simplified if you use typedefs.
I suggest you either take a detour and learn about pointers to functions and typedefs of them, or take it as read for now and return to pointers to functions later, as you cannot be a C programmer and avoid them.
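For instance (my_handler is a made-up name, just to show the mechanics), the typedef names the type "pointer to a function taking an int and returning void", and a variable of that type can store, pass, and call any function with a matching signature:

#include <signal.h>
#include <stdio.h>

typedef void (*sighandler_t)(int);  /* a name for "pointer to function(int) returning void" */

void my_handler(int signo)          /* an ordinary function with a matching signature */
{
    printf("got signal %d\n", signo);
}

int main(void)
{
    sighandler_t h = my_handler;    /* store the function pointer */
    h(0);                           /* call through the pointer */
    signal(SIGINT, h);              /* or pass it as an argument */
    return 0;
}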
A signal is just like an interrupt: when it is generated at user level, a call is made to the kernel of the OS, which acts accordingly. To show how signals are handled, here is an example:
#include <stdio.h>
#include <signal.h>
#include <sys/types.h>
#include <unistd.h>   /* for sleep() */

void sig_handler1(int num)
{
    printf("You are here becoz of signal:%d\n", num);
    signal(SIGQUIT, SIG_DFL);   /* restore the default action for SIGQUIT */
}

void sig_handler(int num)
{
    printf("\nHi! You are here becz of signal:%d\n", num);
}

int main()
{
    signal(SIGINT, sig_handler1);
    signal(SIGQUIT, sig_handler);
    while (1)
    {
        printf("Hello\n");
        sleep(2);
    }
}
After running this code, if you press Ctrl+C, the message "You are here becoz of signal:2" is shown instead of the process quitting, because we have replaced the default action for that signal with our own handler; the signal Ctrl+C sends (SIGINT) can be caught this way.
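As a side note (this is not from the book excerpt above): on POSIX systems, sigaction() is generally preferred over signal() because its semantics are more consistent across platforms. Its handler slot takes the same function-pointer type:

#include <signal.h>

void install_handler(void (*fn)(int))
{
    struct sigaction sa;
    sa.sa_handler = fn;         /* the same "pointer to function(int)" type */
    sigemptyset(&sa.sa_mask);   /* block no extra signals while the handler runs */
    sa.sa_flags = SA_RESTART;   /* restart interrupted system calls */
    sigaction(SIGINT, &sa, NULL);
}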
To know more about signals and the types of signals, with examples, please follow this link:
http://www.firmcodes.com/signals-in-linux/
For my purposes, I want to use Qt 5.1 to record sound in WAV format: 16000 Hz, 16-bit, and 1 channel; but the recordings are all 32-bit by default. So I need a class that can set the sample size, and that class is QAudioFormat, which has a setSampleSize() function. That means I can no longer use the QAudioRecorder class, because it cannot take a QAudioFormat as a parameter, but QAudioInput can. So I record sound with QAudioInput using the code below:
#include <QAudioFormat>
#include <QAudioInput>
#include <QString>
#include <QFile>
#include <QDebug>
#include <unistd.h>   // for sleep()

int main()
{
    QFile output;
    output.setFileName("record.raw");
    output.open(QIODevice::WriteOnly);

    QAudioFormat settings;
    settings.setCodec("audio/PCM");
    settings.setSampleRate(16000);
    settings.setSampleSize(16);
    settings.setChannelCount(1);
    settings.setByteOrder(QAudioFormat::LittleEndian);
    settings.setSampleType(QAudioFormat::UnSignedInt);

    QAudioInput *audio = new QAudioInput(settings);
    audio->start(&output);
    sleep(3);
    audio->stop();
    output.close();
    delete audio;
    return 0;
}
Well, after the program ran, record.raw was still empty. I have successfully recorded sound using QAudioRecorder, and the only difference is that QAudioRecorder has a setAudioInput() function (i.e. audio->setAudioInput("alsa:default");). So I think maybe that is the source of the problem, but QAudioInput has no such function. That's my problem; maybe you can give me some advice. Thanks a lot :-)
I'm glad to have found someone with the same issue as mine. I had been trying for a few days to record from a microphone with QAudioRecorder but with a different sample size. Thanks to your example I succeeded by getting rid of QAudioRecorder, so it's my turn to help you.
I think that while the program is inside the sleep function it is not recording any more. You need to use the signals and slots mechanism provided by Qt to keep recording while the timer is running.
#include "AudioInput.h"
void AudioInput::setup(){
output.setFileName("record.raw");
output.open(QIODevice::WriteOnly);
QAudioFormat settings;
settings.setCodec("audio/PCM");
settings.setSampleRate(16000);
settings.setSampleSize(16);
settings.setChannelCount(1);
settings.setByteOrder(QAudioFormat::LittleEndian);
settings.setSampleType(QAudioFormat::UnSignedInt);
audio=new QAudioInput(settings);
audio->start(&output);
QTimer::singleShot(3000, this, SLOT(terminateRecording()));
}
void AudioInput::terminateRecording(){
audio->stop();
output.close();
delete audio;
}
I put your code in a class called AudioInput, and the only difference is that I replaced the sleep call with QTimer::singleShot(3000, this, SLOT(terminateRecording()));. Contrary to sleep, this function won't freeze the program for 3 seconds; it will just invoke the terminateRecording() slot when the time is up.
Here is the rest of the code:
#include <QCoreApplication>

int main(int argc, char** argv){
    QCoreApplication app(argc, argv);
    AudioInput t;
    t.setup();
    app.exec();   // the event loop keeps running, so recording continues until the timer fires
    return 0;
}
and the header:
#include <QObject>
#include <QAudioInput>
#include <QFile>

class AudioInput : public QObject{
    Q_OBJECT
public Q_SLOTS:
    void terminateRecording();
public:
    void setup();
private:
    QAudioInput *audio;
    QFile output;
};
So basically the problem you seem to have is that the backend does not support the settings that you try to push into the QAudioInput. Luckily, Qt has a way of getting the nearest usable format, and here's how to set it:
void AudioInput::setup(){
    output.setFileName("record.raw");
    output.open(QIODevice::WriteOnly);

    QAudioFormat settings;
    settings.setCodec("audio/PCM");
    settings.setSampleRate(16000);
    settings.setSampleSize(16);
    settings.setChannelCount(1);
    settings.setByteOrder(QAudioFormat::LittleEndian);
    settings.setSampleType(QAudioFormat::SignedInt);

    QAudioDeviceInfo info(QAudioDeviceInfo::defaultInputDevice()); // query the capture device, not the output device
    if (!info.isFormatSupported(settings)) {
        settings = info.nearestFormat(settings); // This is the magic line
        settings.setSampleRate(16000);
        qDebug() << "Raw audio format not supported by backend. Trying the nearest format.";
    }

    audio = new QAudioInput(settings);
    audio->start(&output);
    QTimer::singleShot(3000, this, SLOT(terminateRecording()));
}
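One more note on the original goal: QAudioInput writes headerless PCM, so record.raw is not yet a playable WAV file. A rough sketch of wrapping the captured data in a canonical 44-byte WAV header afterwards (assuming 16000 Hz, 16-bit, mono, as above) could be:

#include <QFile>
#include <QDataStream>

// Prepend a canonical 44-byte WAV header to raw 16-bit mono PCM data.
void rawToWav(const QString &rawName, const QString &wavName)
{
    QFile raw(rawName);
    raw.open(QIODevice::ReadOnly);
    QByteArray pcm = raw.readAll();

    QFile wav(wavName);
    wav.open(QIODevice::WriteOnly);
    QDataStream out(&wav);
    out.setByteOrder(QDataStream::LittleEndian);

    out.writeRawData("RIFF", 4);
    out << quint32(36 + pcm.size());   // total file size minus the first 8 bytes
    out.writeRawData("WAVEfmt ", 8);
    out << quint32(16);                // fmt chunk size
    out << quint16(1);                 // format tag: PCM
    out << quint16(1);                 // channel count
    out << quint32(16000);             // sample rate
    out << quint32(16000 * 2);         // byte rate = sample rate * block align
    out << quint16(2);                 // block align = channels * bytes per sample
    out << quint16(16);                // bits per sample
    out.writeRawData("data", 4);
    out << quint32(pcm.size());        // payload size
    out.writeRawData(pcm.constData(), pcm.size());
}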
I've just started experimenting with threads and can't get some basics. How can I write to the console from a thread at an interval of, say, 10 msec? So I have a thread class:
public ref class SecThr
{
public:
    DateTime^ dt;
    void getdate()
    {
        dt = DateTime::Now;
        Console::WriteLine(dt->Hour + ":" + dt->Minute + ":" + dt->Second);
    }
};

int main()
{
    Console::WriteLine("Hello!");
    SecThr^ thrcl = gcnew SecThr;
    Thread^ o1 = gcnew Thread(gcnew ThreadStart(SecThr, &thrcl::getdate));
}
I cannot compile it in Visual C++ 2010 (C++/CLI); I get a lot of errors: C3924, C2825, C2146.
You are just writing incorrect C++/CLI code. The most obvious mistakes:
missing using namespace directives for the classes you use, like System::Threading, required if you don't write System::Threading::Thread in full.
using the ^ hat on value types like DateTime, not signaled as a compile error but very detrimental to program efficiency, it will cause the value to be boxed.
not constructing a delegate object correctly, first argument is the target object, second argument is the function pointer.
Rewriting it so it works:
using namespace System;
using namespace System::Threading;
public ref class SecThr
{
DateTime dt;
public:
void getdate() {
dt= DateTime::Now;
Console::WriteLine(dt.Hour + ":" + dt.Minute + ":" + dt.Second);
}
};
int main(array<System::String ^> ^args)
{
Console::WriteLine("Hello!");
SecThr^ thrcl=gcnew SecThr;
Thread^ o1=gcnew Thread(gcnew ThreadStart(thrcl, &SecThr::getdate));
o1->Start();
o1->Join();
Console::ReadKey();
}
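For the 10 msec interval the question mentions, a sketch of the thread method (the loop count of 100 is arbitrary, just for illustration) could be:

void getdate() {
    for (int i = 0; i < 100; i++) {
        DateTime now = DateTime::Now;
        Console::WriteLine(now.Hour + ":" + now.Minute + ":" + now.Second + "." + now.Millisecond);
        Thread::Sleep(10);   // wait roughly 10 milliseconds between writes
    }
}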
I want to run several threads inside a process. I'm looking for the most efficient way of being able to pass messages between the threads.
Each thread would have a shared-memory input message buffer, and other threads would write to the appropriate buffer.
Messages would have priority. I want to manage this process myself.
Without getting into expensive locking or synchronizing, what's the best way to do this? Or is there already a well proven library available for this? (Delphi, C, or C# is fine).
This is hard to get right without repeating a lot of mistakes other people already made for you :)
Take a look at Intel Threading Building Blocks - the library has several well-designed queue templates (and other collections) that you can test and see which suits your purpose best.
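For example, a rough sketch (note that tbb::concurrent_queue is FIFO, so priorities would still need to be handled on top of it, and header paths can vary between TBB releases):

#include <tbb/concurrent_queue.h>
#include <string>

tbb::concurrent_queue<std::string> inbox;   // one input queue per thread

void producer()
{
    inbox.push("message");        // thread-safe, no explicit lock needed
}

void consumer()
{
    std::string msg;
    while (inbox.try_pop(msg))    // non-blocking receive
    {
        // ... process msg ...
    }
}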
If you are going to work with multiple threads, it is hard to avoid synchronisation. Fortunately it is not very difficult.
For a single process, a Critical Section is frequently the best choice. It is fast and easy to use. For simplicity, I normally wrap it in a class to handle initialisation and cleanup.
#include <Windows.h>
class CTkCritSec
{
public:
CTkCritSec(void)
{
::InitializeCriticalSection(&m_critSec);
}
~CTkCritSec(void)
{
::DeleteCriticalSection(&m_critSec);
}
void Lock()
{
::EnterCriticalSection(&m_critSec);
}
void Unlock()
{
::LeaveCriticalSection(&m_critSec);
}
private:
CRITICAL_SECTION m_critSec;
};
You can make it even simpler using an "autolock" class that locks/unlocks it for you.
class CTkAutoLock
{
public:
CTkAutoLock(CTkCritSec &lock)
: m_lock(lock)
{
m_lock.Lock();
}
virtual ~CTkAutoLock()
{
m_lock.Unlock();
}
private:
CTkCritSec &m_lock;
};
Anywhere you want to lock something, instantiate an autolock. When the function finishes, it will unlock. Also, if there is an exception, it will automatically unlock (giving exception safety).
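For example (g_critSec here is a hypothetical shared instance):

CTkCritSec g_critSec;            // shared by all threads

void TouchSharedState()
{
    CTkAutoLock lk(g_critSec);   // locks on construction ...
    // ... safely read or write the shared data here ...
}                                // ... and unlocks when lk goes out of scope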
Now you can make a simple message queue out of a std::priority_queue:
#include <queue>
#include <deque>
#include <functional>
#include <string>
struct CMsg
{
CMsg(const std::string &s, int n=1)
: sText(s), nPriority(n)
{
}
int nPriority;
std::string sText;
    struct Compare : public std::binary_function<const CMsg *, const CMsg *, bool>
    {
        bool operator () (const CMsg *p0, const CMsg *p1) const
        {
            return p0->nPriority < p1->nPriority;
        }
    };
};
class CMsgQueue :
private std::priority_queue<CMsg *, std::deque<CMsg *>, CMsg::Compare >
{
public:
void Push(CMsg *pJob)
{
CTkAutoLock lk(m_critSec);
push(pJob);
}
CMsg *Pop()
{
CTkAutoLock lk(m_critSec);
CMsg *pJob(NULL);
if (!Empty())
{
pJob = top();
pop();
}
return pJob;
}
bool Empty()
{
CTkAutoLock lk(m_critSec);
return empty();
}
private:
CTkCritSec m_critSec;
};
The content of CMsg can be anything you like. Note that CMsgQueue inherits privately from std::priority_queue. That prevents raw access to the queue without going through our (synchronised) methods.
Assign a queue like this to each thread and you are on your way.
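For illustration, a worker thread could drain its queue like this (the polling Sleep is a placeholder; a production version would block on an event or semaphore instead):

#include <Windows.h>

void WorkerLoop(CMsgQueue &myQueue, volatile bool &running)
{
    while (running)
    {
        CMsg *pMsg = myQueue.Pop();   // highest-priority message, or NULL if empty
        if (pMsg)
        {
            // ... process pMsg->sText ...
            delete pMsg;
        }
        else
        {
            ::Sleep(1);               // nothing queued; yield briefly
        }
    }
}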
Disclaimer: the code here was slapped together quickly to illustrate a point. It probably has errors and needs review and testing before being used in production.