I have two Subnero WNC-M25MSS3 modems, and my goal is to set up a two-node network. The modems have a co-processor and support the Janus protocol.
For my upcoming deployment at sea, modem A will be at the surface and modem B at depth. I can connect to the shell of modem A to run the commands and scripts that I have created there. My goal is the following: I must request data from B when I enter the command "requestData" in the shell of A. Once the command is entered, it generates a Janus message carrying data, which is transmitted to B. When B receives the message, it should automatically respond with its data. This is where I get stuck: I have written the data-transmission scripts for both modems, but I cannot find a way to make B "listen", i.e. wait for the message from A, transmit its data as soon as the message arrives, and then go back to listening.
Here is the function on modem A that triggers a transmission:
B = host('B');                     // address of node B
subscribe phy;
int counterA = 3;                  // counter that will be generated in the future
int ID = 0;                        // 0 = transmit to everybody (broadcast)
int[] request = new int[2];
request[0] = counterA;             // the data is stored in an array
request[1] = ID;
// request[2] = ; ????
phy << new TxJanusFrameReq(type: CONTROL, data: request, to: ID);   // transmit the data as a Janus frame
Here is the infinite loop that I created on modem B so that it transmits with "sendData" once a message is received; this is where I get stuck:
subscribe phy;
while (true) {
    // Wait up to 5 seconds for a received frame.
    // (Assumption: the shell's receive(class, timeout) helper is available here.)
    def ntf = receive(RxFrameNtf, 5000);
    if (ntf == null) {
        println("No data yet");
        continue;                          // nothing received, keep listening
    }
    def receivedData = ntf.data;           // analyse the received data
    println("Amazing, you're out now!!!");
    if (receivedData[1] == 0) {
        println("OK");
        sendData();                        // respond with our data, then listen again
    }
}
The Toit library contains the zip.toit module, which suggests that the system supports data compression and decompression. Unfortunately, there are no examples. Could someone give the simplest possible example of compressing and decompressing data, such as strings or byte arrays?
The zlib library currently only supports compression.
Fundamentally you have to:
create a compressor (zlib or gzip),
feed the data in one task, and
read/use the compressed data in another task.
Here is an example:
import zlib
import bytes
import monitor

main:
  // We use a semaphore to let the main task know when all the
  // compressed data has been handled.
  // If the data is just sent over the network, then the semaphore
  // wouldn't be necessary. The second task (`t`) would just finish
  // once all data has been processed.
  done := monitor.Semaphore
  // We use a byte-buffer to accumulate all the data. If the data is
  // to be sent somewhere, then it's better to just do that directly,
  // instead of using memory here.
  accumulator := bytes.Buffer
  // There are other encoders as well, but the gzip-encoder has
  // the advantage that it produces `.gz` compatible data.
  compressor := zlib.RunLengthGzipEncoder
  // We create a second task that takes out the already compressed
  // data of the compressor.
  t := task::
    while data := compressor.read:
      accumulator.write data
    // We increment the semaphore, so that the other (original) task
    // knows that we are done processing the data.
    done.up
  // In this task we now add data. As example, it's just "foo", but any
  // string or byte-array would work.
  // Can be called multiple times.
  compressor.write "foo"
  // The compressor must be closed.
  // This flushes the last remaining data, and lets the reading
  // task know that it is done.
  compressor.close
  // We wait for the reading task to signal that all data is handled.
  done.down
  // Show the data that was accumulated.
  // You would normally just send it somewhere, and not print it.
  print accumulator.buffer.size
  print accumulator.buffer
A small NB: you can also decode the resulting byte array on Android. I tested this by writing a small Java app:
import android.os.Bundle;
import android.util.Log;
import androidx.appcompat.app.AppCompatActivity;   // or android.support.v7.app.AppCompatActivity on older projects

import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.util.zip.GZIPInputStream;

public class MainActivity extends AppCompatActivity {

    // byte array from toit:
    byte[] bytes = { 31, -117, 8, 0, 0, 0, 0, 0, 0, -1, 75, -53, -49, 7, 0, 33, 101, 115, -116, 3, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0};

    static final String TAG = "GZip";

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        String foo = null;
        try {
            foo = decompress(bytes);
        } catch (IOException exception) {
            Log.d(TAG, exception.toString());
        }
        Log.d(TAG, "decompress->[" + foo + "]");
    }

    public static String decompress(byte[] compressed) throws IOException {
        final int BUFFER_SIZE = 32;
        ByteArrayInputStream is = new ByteArrayInputStream(compressed);
        GZIPInputStream gis = new GZIPInputStream(is, BUFFER_SIZE);
        StringBuilder string = new StringBuilder();
        byte[] data = new byte[BUFFER_SIZE];
        int bytesRead;
        while ((bytesRead = gis.read(data)) != -1) {
            string.append(new String(data, 0, bytesRead));
        }
        gis.close();
        is.close();
        return string.toString();
    }
}
I'm trying to handle data received at about 1 Hz from a robot through a serial port (Bluetooth connection). I will receive data with different headers determining what data will be received and what the expected length of the message is.
For example:
Header: sensorvalues (0x32) -> expected length 11 incl. header.
First I want to check whether the byte is a header and, if it is, extract the expected length (in bytes).
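For reference, a minimal sketch of that framing step (plain C++, independent of the SerialPort class; the header/length table and the names are illustrative assumptions):

#include <cstdint>
#include <deque>
#include <vector>

// Sketch: accumulate raw bytes, then extract one complete message at a time.
// The header -> length mapping is an assumption based on the example above.
static int expected_length_for(uint8_t header) {
    switch (header) {
        case 0x32: return 11;   // sensor values: 11 bytes including the header
        default:   return -1;   // not a known header
    }
}

// Returns true and fills 'msg' when a full message is available in 'rx'.
bool try_extract_message(std::deque<uint8_t>& rx, std::vector<uint8_t>& msg) {
    while (!rx.empty()) {
        int len = expected_length_for(rx.front());
        if (len < 0) {              // not a header: drop the byte and resynchronise
            rx.pop_front();
            continue;
        }
        if (rx.size() < static_cast<size_t>(len))
            return false;           // wait for more bytes to arrive
        msg.assign(rx.begin(), rx.begin() + len);
        rx.erase(rx.begin(), rx.begin() + len);
        return true;
    }
    return false;
}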
I am using VC++ and the SerialPort class in the GUI.
The "serialport->read(buffer, 0, expected_length)" call is very slow and not very reliable. I found some tips at http://www.sparxeng.com/blog/software/must-use-net-system-io-ports-serialport, but the buffer still builds up to the point where I have about 200 bytes in it, which is not very good when I need live sensor values displayed on the GUI.
The key parts of my code are:
System::Void MyForm::serialPort1_DataReceived_1(System::Object^ sender, System::IO::Ports::SerialDataReceivedEventArgs^ e) {
    if (serialPort1->BytesToRead > 0){
        if (write_position == 0){
            // First byte of a message: read the header
            serialPort1->BaseStream->ReadAsync(data_recieved_buffer, 0, 1);
            header = data_recieved_buffer[0];
            if (this->InvokeRequired){
                myrecievedata_delegate^ d = gcnew myrecievedata_delegate(&myrecievedata);
                this->Invoke(d, gcnew array < Object^ > {'h'});
            }
            else
            {
                myrecievedata('h');
            }
        }
        else if (this->serialPort1->BytesToRead > expected_length - 1)
        {
            // Rest of the message is available: read the payload
            serialPort1->BaseStream->ReadAsync(data_recieved_buffer, 0, expected_length - 1);
            if (this->InvokeRequired){
                myrecievedata_delegate^ d = gcnew myrecievedata_delegate(&myrecievedata);
                this->Invoke(d, gcnew array < Object^ > {'b'});
            }
            else
            {
                myrecievedata('b');
            }
        }
        else{
            return;
        }
    }
}
and the received data is sent to:
System::Void MyForm::myrecievedata(char status){
    if (status == 'h'){
        handleheader(header);
    }
    else if (status == 'b'){
        handlebyte();
    }
}
Is the problem in the SerialPort DataReceived event? The only thing I can think of is that Invoke (which I have very little knowledge of) is the problem, since it still keeps the work in the SerialPort thread.
If that is the case, how would I make sure that the data is handled in a different thread?
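For reference, one common way to keep the parsing off the event thread is a small producer/consumer byte queue: the DataReceived handler only enqueues raw bytes, and a worker thread dequeues and parses them. A plain C++ sketch with illustrative names, not tied to the SerialPort class:

#include <condition_variable>
#include <cstddef>
#include <cstdint>
#include <deque>
#include <mutex>
#include <vector>

// A worker thread blocks on pop() while the serial event handler calls push().
class ByteQueue {
public:
    void push(const std::vector<uint8_t>& chunk) {
        {
            std::lock_guard<std::mutex> lock(m_);
            q_.insert(q_.end(), chunk.begin(), chunk.end());
        }
        cv_.notify_one();
    }

    // Blocks until at least n bytes are available, then removes and returns them.
    std::vector<uint8_t> pop(std::size_t n) {
        std::unique_lock<std::mutex> lock(m_);
        cv_.wait(lock, [&] { return q_.size() >= n; });
        std::vector<uint8_t> out(q_.begin(), q_.begin() + n);
        q_.erase(q_.begin(), q_.begin() + n);
        return out;
    }

private:
    std::deque<uint8_t> q_;
    std::mutex m_;
    std::condition_variable cv_;
};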
Thanks in advance!
I am trying to use a Boost condition variable in my application to synchronize two different threads, as follows:
The main thread will create a TCP server and an instance of an object called MIH-User, and register a callback to an event_handler.
Main.cpp
/**
 * Default MIH event handler.
 *
 * @param msg Received message.
 * @param ec  Error code.
 */
void event_handler(odtone::mih::message &msg, const boost::system::error_code &ec)
{
    if (ec)
    {
        log_(0, __FUNCTION__, " error: ", ec.message());
        return;
    }

    switch (msg.mid())
    {
    // Source Server received HO Complete Message
    case odtone::mih::indication::n2n_ho_complete:
    {
        if (ec)
        {
            log_(0, __FUNCTION__, " error: ", ec.message());
            return;
        }

        mih::id mobile_id;             // Mobile node MIHF ID TLV
        mih::link_tuple_id source_id;  // Source Link ID TLV
        mih::link_tuple_id target_id;  // Target Link ID TLV
        mih::ho_result ho_res;         // Handover Result TLV

        // Deserialize received MIH message "N2N Handover Complete Indication"
        msg >> mih::indication()
            & mih::tlv_mobile_node_mihf_id(mobile_id)
            & mih::tlv_link_identifier(source_id)
            & mih::tlv_new_link_identifier(target_id)
            & mih::tlv_ho_result(ho_res);

        log_(0, "has received a N2N_HO_Complete.Indication with HO-Result=", ho_res.get(),
             " from ", msg.source().to_string(), ", for Mobile-IP=", mobile_id.to_string());

        // Find the source transaction which corresponds to this Indication
        src_transaction_ptr t;
        tpool->find(msg.source(), mobile_id.to_string(), t);
        {
            boost::lock_guard<boost::mutex> lock(t->mut);
            t->response_received = true;
            t->ho_complete_result = ho_res;
            t->tid = msg.tid();
        }
        t->cond.notify_one();
    }
    break;
    }
}
int main(int argc, char **argv)
{
    odtone::setup_crash_handler();

    boost::asio::io_service ios;
    sap::user usr(cfg, ios, boost::bind(&event_handler, _1, _2));
    mMihf = &usr;

    // Register the MIH-Usr with the local MIHF
    register_mih_user(cfg);

    // Pool of pending transactions with peer mihfs
    ho_transaction_pool pool(ios);
    tpool = &pool;

    // The io_service object provides I/O services, such as sockets,
    // that the server object will use.
    tcp_server server(ios, cfg.get<ushort>(kConf_Server_Port));
}
The TCP server will listen for new incoming connections; upon receiving a new connection, it will create a new thread corresponding to a source transaction machine and add it to a common transaction pool, as follows:
TCP Server
void handle_request(std::string arg1, std::string arg2)
{
    src_transaction_ptr t(new src_transaction(arg1, arg2));
    tpool->add(t);
    t->run();
}

void handle_read(const boost::system::error_code &error, size_t bytes_transferred)
{
    if (!error)
    {
        // Split received message defining ":" as a delimiter
        std::vector<std::string> strs;
        boost::split(strs, mMessage, boost::is_any_of(":"));

        log_(0, "Received Message from TCP Client: ", mMessage);

        // The first value is the HO Command Initiation message
        if ((strs.at(0).compare("INIT") == 0) && (strs.size() == 3))
        {
            // The second value is the MIHF ID and the third is the IP address
            // Start Source transaction if we receive "Init-Message"
            boost::thread thrd(&tcp_connection::handle_request, this, strs.at(1), strs.at(2));
        }
        else if ((strs.at(0).compare("TEST") == 0) && (strs.size() == 3))
        {
            int max_iterations = atoi(strs.at(2).c_str());
            for (int i = 1; i <= max_iterations; i++)
            {
                boost::thread thrd(&tcp_connection::handle_request,
                                   this, strs.at(1), boost::lexical_cast<std::string>(i));
            }
        }
        else
            log_(0, "Error: Unrecognized message.");

        memset(&mMessage[0], 0, max_length);
        mSocket.async_read_some(boost::asio::buffer(mMessage, max_length),
            boost::bind(&tcp_connection::handle_read, shared_from_this(),
                        boost::asio::placeholders::error,
                        boost::asio::placeholders::bytes_transferred));
    }
}
The source transaction machine will move between different states, and in one of the states it has to suspend execution until it receives the "n2n_ho_complete" indication through the main thread; at that point it sets response_received to true, as follows:
Source Transaction Machine
/**
 * Run Source State Machine transaction.
 */
void src_transaction::run()
{
    // Previous states.

    wait_ho_complete_indication_state:
    {
        log_(1, "is in SRC_WAIT_HO_COMPLETE_INDICATION State for Mobile IP=", ip_address);
        mState = SRC_WAIT_HO_COMPLETE_INDICATION;

        boost::unique_lock<boost::mutex> lock(mut);
        while (!response_received)
        {
            cond.wait(lock);
        }
        response_received = false;

        // Do some stuff
    }

    // Other states

    return;
}
response_received is a public member variable, and each instance of the class has its own copy. When an indication is received through the main thread, the handler looks for the source transaction that matches that indication and sets its response_received to true.
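For comparison, here is a minimal, self-contained version of the wait/notify pattern described above (plain Boost, with illustrative names rather than the project's real classes):

#include <boost/thread.hpp>
#include <boost/ref.hpp>
#include <iostream>

// Minimal sketch of the condition-variable pattern (illustrative names only).
struct transaction
{
    transaction() : response_received(false) {}
    boost::mutex mut;
    boost::condition_variable cond;
    bool response_received;
};

void waiter(transaction &t)
{
    boost::unique_lock<boost::mutex> lock(t.mut);
    while (!t.response_received)   // the loop guards against spurious wake-ups
        t.cond.wait(lock);         // releases the mutex while waiting
    std::cout << "indication received" << std::endl;
}

void notifier(transaction &t)
{
    {
        boost::lock_guard<boost::mutex> lock(t.mut);
        t.response_received = true;   // set the predicate under the lock
    }
    t.cond.notify_one();              // wake the waiting thread
}

int main()
{
    transaction t;
    boost::thread w(waiter, boost::ref(t));
    boost::thread n(notifier, boost::ref(t));
    w.join();
    n.join();
    return 0;
}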
So my problem is: whenever I execute the code, the whole program hangs in the wait_ho_complete_indication_state, and it doesn't respond to anything.
For example, if I request the creation of 10 threads for source transactions, the program creates all of them and they start to work concurrently, until one of them reaches the wait_ho_complete_indication_state; then everything freezes. Even the main thread doesn't respond at all, even if it receives an indication through the event_handler.
So, is my use of the condition variable correct?
Please help with this issue.
Thanks a lot.
I've got a MonoTouch app that does an HTTP POST with a 3.5MB file, and it is very unstable on the primary platforms that I test on (iPhone 3G with OS 3.1.2 and iPhone 4 with OS 4.2.1). I'll describe what I'm doing here and maybe someone can tell me if I'm doing something wrong.
In order to rule out the rest of my app, I've whittled this down to a tiny sample app. The app is an iPhone OpenGL Project and it does only this:
At startup, allocate 6MB of memory in 30k chunks. This simulates my app's memory usage.
Read a 3.5MB file into memory.
Create a thread to post the data. (Make a WebRequest object, use GetRequestStream(), and write the 3.5MB data in).
When the main thread detects that the posting thread is done, go to step 2 and repeat.
Also, each frame, I allocate 0-100k to simulate the app doing something. I don't keep any references to this data so it should be getting garbage collected.
iPhone 3G Result: The app gets through 6 to 8 uploads and then the OS kills it. There is no crash log, but there is a LowMemory log showing that the app was jettisoned.
iPhone 4 Result: It gets an Mprotect error around the 11th upload.
A few data points:
Instruments does NOT show the memory increasing as the app continues to upload.
Instruments doesn't show any significant leaks (maybe 1 kilobyte total).
It doesn't matter whether I write the post data in 64k chunks or all at once with one Stream.Write() call.
It doesn't matter whether I wait for a response (HttpWebRequest.HaveResponse) or not before starting the next upload.
It doesn't matter if the POST data is even valid. I've tried using valid POST data and I've tried sending 3MB of zeros.
If the app is not allocating any data each frame, then it takes longer to run out of memory (but as mentioned before, the memory that I'm allocating each frame is not referenced after the frame it was allocated on, so it should be scooped up by the GC).
If nobody has any ideas, I'll file a bug with Novell, but I wanted to see if I'm doing something wrong here first.
If anyone wants the full sample app, I can provide it, but I've pasted the contents of my EAGLView.cs below.
using System;
using System.Net;
using System.Threading;
using System.Collections.Generic;
using System.IO;
using OpenTK.Platform.iPhoneOS;
using MonoTouch.CoreAnimation;
using OpenTK;
using OpenTK.Graphics.ES11;
using MonoTouch.Foundation;
using MonoTouch.ObjCRuntime;
using MonoTouch.OpenGLES;
namespace CrashTest
{
    public partial class EAGLView : iPhoneOSGameView
    {
        [Export("layerClass")]
        static Class LayerClass ()
        {
            return iPhoneOSGameView.GetLayerClass ();
        }

        [Export("initWithCoder:")]
        public EAGLView (NSCoder coder) : base(coder)
        {
            LayerRetainsBacking = false;
            LayerColorFormat = EAGLColorFormat.RGBA8;
            ContextRenderingApi = EAGLRenderingAPI.OpenGLES1;
        }

        protected override void ConfigureLayer (CAEAGLLayer eaglLayer)
        {
            eaglLayer.Opaque = true;
        }

        protected override void OnRenderFrame (FrameEventArgs e)
        {
            SimulateAppAllocations();
            UpdatePost();

            base.OnRenderFrame (e);

            float[] squareVertices = { -0.5f, -0.5f, 0.5f, -0.5f, -0.5f, 0.5f, 0.5f, 0.5f };
            byte[] squareColors = { 255, 255, 0, 255, 0, 255, 255, 255, 0, 0,
                0, 0, 255, 0, 255, 255 };

            MakeCurrent ();
            GL.Viewport (0, 0, Size.Width, Size.Height);
            GL.MatrixMode (All.Projection);
            GL.LoadIdentity ();
            GL.Ortho (-1.0f, 1.0f, -1.5f, 1.5f, -1.0f, 1.0f);
            GL.MatrixMode (All.Modelview);
            GL.Rotate (3.0f, 0.0f, 0.0f, 1.0f);
            GL.ClearColor (0.5f, 0.5f, 0.5f, 1.0f);
            GL.Clear ((uint)All.ColorBufferBit);
            GL.VertexPointer (2, All.Float, 0, squareVertices);
            GL.EnableClientState (All.VertexArray);
            GL.ColorPointer (4, All.UnsignedByte, 0, squareColors);
            GL.EnableClientState (All.ColorArray);
            GL.DrawArrays (All.TriangleStrip, 0, 4);
            SwapBuffers ();
        }

        AsyncHttpPost m_Post;
        int m_nPosts = 1;

        byte[] LoadPostData()
        {
            // Just return 3MB of zeros. It doesn't matter whether this is valid POST data or not.
            return new byte[1024 * 1024 * 3];
        }

        void UpdatePost()
        {
            if ( m_Post == null || m_Post.PostStatus != AsyncHttpPostStatus.InProgress )
            {
                System.Console.WriteLine( string.Format( "Starting post {0}", m_nPosts++ ) );
                byte [] postData = LoadPostData();
                m_Post = new AsyncHttpPost(
                    "https://api-video.facebook.com/restserver.php",
                    "multipart/form-data; boundary=" + "8cdbcdf18ab6640",
                    postData );
            }
        }

        Random m_Random = new Random(0);
        List< byte [] > m_Allocations;
        List< byte[] > m_InitialAllocations;

        void SimulateAppAllocations()
        {
            // First time through, allocate a bunch of data that the app would allocate.
            if ( m_InitialAllocations == null )
            {
                m_InitialAllocations = new List<byte[]>();
                int nInitialBytes = 6 * 1024 * 1024;
                int nBlockSize = 30000;
                for ( int nCurBytes = 0; nCurBytes < nInitialBytes; nCurBytes += nBlockSize )
                {
                    m_InitialAllocations.Add( new byte[nBlockSize] );
                }
            }

            m_Allocations = new List<byte[]>();
            for ( int i = 0; i < 10; i++ )
            {
                int nAllocationSize = m_Random.Next( 10000 ) + 10;
                m_Allocations.Add( new byte[nAllocationSize] );
            }
        }
    }

    public enum AsyncHttpPostStatus
    {
        InProgress,
        Success,
        Fail
    }

    public class AsyncHttpPost
    {
        public AsyncHttpPost( string sURL, string sContentType, byte [] postData )
        {
            m_PostData = postData;
            m_PostStatus = AsyncHttpPostStatus.InProgress;
            m_sContentType = sContentType;
            m_sURL = sURL;

            //UploadThread();
            m_UploadThread = new Thread( new ThreadStart( UploadThread ) );
            m_UploadThread.Start();
        }

        void UploadThread()
        {
            using ( MonoTouch.Foundation.NSAutoreleasePool pool = new MonoTouch.Foundation.NSAutoreleasePool() )
            {
                try
                {
                    HttpWebRequest request = WebRequest.Create( m_sURL ) as HttpWebRequest;
                    request.Method = "POST";
                    request.ContentType = m_sContentType;
                    request.ContentLength = m_PostData.Length;

                    // Write the post data.
                    using ( Stream stream = request.GetRequestStream() )
                    {
                        stream.Write( m_PostData, 0, m_PostData.Length );
                        stream.Close();
                    }
                    System.Console.WriteLine( "Finished!" );

                    // We're done with the data now. Let it be garbage collected.
                    m_PostData = null;

                    // Finished!
                    m_PostStatus = AsyncHttpPostStatus.Success;
                }
                catch ( System.Exception e )
                {
                    System.Console.WriteLine( "Error in AsyncHttpPost.UploadThread:\n" + e.Message );
                    m_PostStatus = AsyncHttpPostStatus.Fail;
                }
            }
        }

        public AsyncHttpPostStatus PostStatus
        {
            get
            {
                return m_PostStatus;
            }
        }

        Thread m_UploadThread;

        // Queued to be handled in the main thread.
        byte [] m_PostData;
        AsyncHttpPostStatus m_PostStatus;
        string m_sContentType;
        string m_sURL;
    }
}
I think you should read in your file 1 KB (or some arbitrary size) at a time and write it to the web request.
Code similar to this:
byte[] buffer = new byte[1024];
int bytesRead = 0;
using (FileStream fileStream = File.OpenRead("YourFile.txt"))
{
    while ((bytesRead = fileStream.Read(buffer, 0, buffer.Length)) != 0)
    {
        httpPostStream.Write(buffer, 0, bytesRead);
    }
}
This is off the top of my head, but I think it's right.
This way you don't have an extra 3MB floating around in memory when you don't really need to. I think tricks like this are even more important on iDevices (or other devices) than on the desktop.
Test the buffer size too; a larger buffer will get you better speeds up to a point (I remember 8 KB being pretty good).
I'm new to OpenGL.
I want to draw something using OpenGL in a Windows Forms application.
If I use a Win32 application with a WinMain method, the application works.
In the WinMain method, I fill the HWND with the CreateWindow() function and pass the WinMain parameters to CreateWindow.
But when I try to get the handle from a Windows Forms form, I can't get it to work. Every time,
wglCreateContext(hdc) returns NULL.
Here is the example that I am using:
public:
    COpenGL(System::Windows::Forms::Form ^ parentForm, GLsizei iWidth, GLsizei iHeight)
    {
        CreateParams^ cp = gcnew CreateParams;

        // Set the position on the form
        cp->X = 0;
        cp->Y = 0;
        cp->Height = iHeight;
        cp->Width = iWidth;

        // Specify the form as the parent.
        cp->Parent = parentForm->Handle;

        // Create as a child of the specified parent and make OpenGL compliant (no clipping)
        cp->Style = WS_CHILD | WS_VISIBLE | WS_CLIPSIBLINGS | WS_CLIPCHILDREN;

        // Create the actual window
        this->CreateHandle(cp);

        m_hDC = GetDC((HWND)this->Handle.ToPointer());
        if(m_hDC)
        {
            MySetPixelFormat(m_hDC);
            ReSizeGLScene(iWidth, iHeight);
            InitGL();
        }

        rtri = 0.0f;
        rquad = 0.0f;
    }
    GLint MySetPixelFormat(HDC hdc)
    {
        static PIXELFORMATDESCRIPTOR pfd =
        {
            sizeof(PIXELFORMATDESCRIPTOR),
            1,
            PFD_DRAW_TO_WINDOW |
            PFD_SUPPORT_OPENGL |
            PFD_DOUBLEBUFFER,
            PFD_TYPE_RGBA,
            16,
            0, 0, 0, 0, 0, 0,
            0,
            0,
            0,
            0, 0, 0, 0,
            16,
            0,
            0,
            PFD_MAIN_PLANE,
            0,
            0, 0, 0
        };

        GLint iPixelFormat;

        // get the device context's best, available pixel format match
        if((iPixelFormat = ChoosePixelFormat(hdc, &pfd)) == 0)
        {
            MessageBox::Show("ChoosePixelFormat Failed");
            return 0;
        }

        // make that match the device context's current pixel format
        if(SetPixelFormat(hdc, iPixelFormat, &pfd) == FALSE)
        {
            MessageBox::Show("SetPixelFormat Failed");
            return 0;
        }

        if((m_hglrc = wglCreateContext(hdc)) == NULL)
        {
            MessageBox::Show("wglCreateContext Failed");
            return 0;
        }

        if((wglMakeCurrent(hdc, m_hglrc)) == NULL)
        {
            MessageBox::Show("wglMakeCurrent Failed");
            return 0;
        }

        return 1;
    }
How can I solve this problem?
Here, change this in the constructor:
m_hDC = GetDC((HWND)this->Handle.ToPointer());
if(m_hDC)
{
    wglMakeCurrent(m_hDC, NULL);
    MySetPixelFormat(m_hDC);
    ReSizeGLScene(iWidth, iHeight);
    InitGL();
}
You must call wglMakeCurrent after m_hDC has been set up. I am replying to the first part of the example; see "Creating an OpenGL view on a Windows Form".
That solved my problem :)
You can check the GetLastError value; mostly it is because you chose a wrong or incompatible pixel format, so you can try another format. Also, your window class should be flagged with CS_OWNDC, and set DoubleBuffering to false.
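For illustration, a minimal sketch of those two checks (plain Win32 C++; the class name is illustrative, and this is separate from the Windows Forms wrapper above):

#include <windows.h>
#include <cstdio>

// If wglCreateContext returns NULL, GetLastError often reports
// ERROR_INVALID_PIXEL_FORMAT when SetPixelFormat was skipped or an
// incompatible format was chosen.
HGLRC create_context(HDC hdc)
{
    HGLRC rc = wglCreateContext(hdc);
    if (rc == NULL)
        std::printf("wglCreateContext failed, GetLastError() = %lu\n", GetLastError());
    return rc;
}

// Registering a window class with CS_OWNDC so the window keeps its own DC.
ATOM register_gl_class(HINSTANCE hInstance, WNDPROC wndProc)
{
    WNDCLASS wc = {};
    wc.style = CS_OWNDC | CS_HREDRAW | CS_VREDRAW;
    wc.lpfnWndProc = wndProc;
    wc.hInstance = hInstance;
    wc.lpszClassName = TEXT("GLHostWindow");   // illustrative name
    return RegisterClass(&wc);
}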