Compressing data in Toit - zip

The Toit library contains the zip.toit module, which suggests that the system supports data compression and decompression. Unfortunately, there are no examples. Is it possible to give a minimal example of compressing and decompressing data, such as strings or byte arrays?

The zlib library currently only supports compression.
Fundamentally you have to:
1. create a compressor (zlib or gzip),
2. feed the data in one task, and
3. read/use the compressed data in another task.
Here is an example:
import zlib
import bytes
import monitor

main:
  // We use a semaphore to let the main task know when all the
  // compressed data has been handled.
  // If the data is just sent over the network, then the semaphore
  // wouldn't be necessary. The second task (`t`) would just finish
  // once all data has been processed.
  done := monitor.Semaphore
  // We use a byte buffer to accumulate all the data. If the data is
  // to be sent somewhere, then it's better to just do that directly,
  // instead of using memory here.
  accumulator := bytes.Buffer
  // There are other encoders as well, but the gzip encoder has
  // the advantage that it produces `.gz`-compatible data.
  compressor := zlib.RunLengthGzipEncoder
  // We create a second task that takes the already-compressed
  // data out of the compressor.
  t := task::
    while data := compressor.read:
      accumulator.write data
    // We increment the semaphore, so that the other (original) task
    // knows that we are done processing the data.
    done.up
  // In this task we now add data. As an example it's just "foo", but any
  // string or byte array would work.
  // Can be called multiple times.
  compressor.write "foo"
  // The compressor must be closed.
  // This flushes the last remaining data, and lets the reading
  // task know that it is done.
  compressor.close
  // We wait for the reading task to signal that all data is handled.
  done.down
  // Show the data that was accumulated.
  // You would normally just send it somewhere, and not print it.
  print accumulator.buffer.size
  print accumulator.buffer

A small NB: you can also decode the resulting byte array on Android. I tested this by writing a small Java app:
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;
// Plus the usual Android imports: android.os.Bundle, android.util.Log,
// and AppCompatActivity.

public class MainActivity extends AppCompatActivity {
    // byte array from toit:
    byte[] bytes = { 31, -117, 8, 0, 0, 0, 0, 0, 0, -1, 75, -53, -49, 7, 0, 33, 101, 115, -116, 3, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0};

    static final String TAG = "GZip";

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        String foo = null;
        try {
            foo = decompress(bytes);
        } catch (IOException exception) {
            Log.d(TAG, exception.toString());
        }
        Log.d(TAG, "decompress->[" + foo + "]");
    }

    public static String decompress(byte[] compressed) throws IOException {
        final int BUFFER_SIZE = 32;
        ByteArrayInputStream is = new ByteArrayInputStream(compressed);
        GZIPInputStream gis = new GZIPInputStream(is, BUFFER_SIZE);
        StringBuilder string = new StringBuilder();
        byte[] data = new byte[BUFFER_SIZE];
        int bytesRead;
        while ((bytesRead = gis.read(data)) != -1) {
            string.append(new String(data, 0, bytesRead));
        }
        gis.close();
        is.close();
        return string.toString();
    }
}
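For round-trip testing on the Java side, the reverse direction is also easy with the GZIPOutputStream that java.util.zip provides; a minimal sketch, to sit next to decompress() above:

public static byte[] compress(String text) throws IOException {
    ByteArrayOutputStream bos = new ByteArrayOutputStream();
    GZIPOutputStream gos = new GZIPOutputStream(bos);
    gos.write(text.getBytes("UTF-8"));
    // close() finishes the deflate stream and writes the gzip trailer.
    gos.close();
    return bos.toByteArray();
}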

Related

UnetStack: How to get automatic transmission of data between two modems

I have two Subnero WNC-M25MSS3 modems, and my goal is to build a 2-node network. They use a co-processor and the Janus protocol.
During my future deployment at sea, the modem named A will be located at the surface and modem B at depth. I can connect to the shell of modem A in order to run the command scripts I created there. My goal is the following: I must request data from B when I enter the command "requestData" in the shell of A. Once the command is entered, it generates a Janus message, with data inside, which is transmitted to B. When B receives the message, it should automatically respond with the data.
This is where I get stuck: I have created the data-transmission scripts for both modems, but I can't find a way to make B "listen", i.e. wait for the message from A, transmit the data as soon as it receives it, and then listen again.
Here is the function of modem A to call a transmission:
B = host('B');
subscribe phy;
int counterA = 3;           // counter which will be created in the future
int ID = 0;                 // transmitting to everybody
int[] request = new int[2];
request[0] = counterA;      // data is placed in an array
request[1] = ID;
// request[2] = ; ????
phy << new TxJanusFrameReq(type: CONTROL, data: request, to: ID); // transmission of data with Janus
Here is the infinite loop that I created on modem B so that it transmits with "sendData" once a message is received; this is where I get stuck:
subscribe phy;
while (RxFrameNtf != true) {
    //receivedData = [0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0];
    //receivedData = ntf.data;
    try {
        try { // wait 5 seconds
            Thread.sleep(5000);
        } catch (InterruptedException e) {
            // error handling
        }
        receivedData = ntf.data; // analyse the received data
    } catch (Exception e) {
        //println([receivedData]);
        println("No data yet");
        //tampomEvent;
    }
    //println([receivedData]);
}
println("Amazing, you're out now!!!");
if (ntf.data[1] == 0) {
    println("OK");
    sendData;
}
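The polling loop can be avoided entirely: UnetStack's agent framework is event-driven, so modem B can run a small agent that reacts to each received frame and then keeps listening on its own. Below is a rough, untested sketch in Java (UnetStack agents can be written in Groovy or Java), using the UnetAgent, RxFrameNtf and TxFrameReq classes documented in the UnetStack handbook; the reply payload and the data[1] == 0 check are placeholders mirroring the code above:

import org.arl.fjage.AgentID;
import org.arl.fjage.Message;
import org.arl.unet.Services;
import org.arl.unet.UnetAgent;
import org.arl.unet.phy.RxFrameNtf;
import org.arl.unet.phy.TxFrameReq;

// Runs on modem B: answer each matching request, then go back to listening.
public class ResponderAgent extends UnetAgent {
    private AgentID phy;

    @Override
    public void startup() {
        phy = agentForService(Services.PHYSICAL);
        subscribe(topic(phy));  // have RxFrameNtf messages delivered to processMessage
    }

    @Override
    public void processMessage(Message msg) {
        if (msg instanceof RxFrameNtf) {
            byte[] req = ((RxFrameNtf) msg).getData();
            if (req != null && req.length >= 2 && req[1] == 0) {
                byte[] reply = {1, 2, 3};  // placeholder for the real sensor data
                TxFrameReq tx = new TxFrameReq();
                tx.setRecipient(phy);      // ask the physical layer to transmit
                tx.setTo(((RxFrameNtf) msg).getFrom());
                tx.setData(reply);
                send(tx);
            }
        }
    }
}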

How to make image processing use all CPU cores

I need a function that is capable of building an image from multiple merges. So I do this:
public static void mergeImagesByName(List<String> names) {
    File folder;
    File[] listOfFiles;
    List<String> allFilesName;
    folder = new File("images/");
    listOfFiles = folder.listFiles();
    allFilesName = new ArrayList<>();
    for (File fileName : listOfFiles) {
        allFilesName.add(fileName.getName());
    }
    List<String> imgName = names.stream().map(name -> name += ".PNG").collect(Collectors.toList());
    List<String> allExistingName = new ArrayList<>();
    allFilesName.stream().forEach((file) -> imgName.stream().filter((name) -> (file.equals(name))).forEach((name) -> allExistingName.add(name)));
    try {
        File baseImage = new File(folder, "MERGE.PNG");
        BufferedImage textImage = ImageIO.read(new File(folder, "Text.PNG"));
        BufferedImage image = ImageIO.read(baseImage);
        int w = 800;
        int h = 450;
        BufferedImage combined = new BufferedImage(w, h, BufferedImage.TYPE_INT_ARGB);
        Graphics g = combined.getGraphics();
        g.drawImage(image, 0, 0, null);
        for (String name : allExistingName) {
            BufferedImage overlay = ImageIO.read(new File(folder, name));
            g.drawImage(overlay, 0, 0, null);
            ImageIO.write(combined, "PNG", new File(folder, "MERGE.PNG"));
        }
        g.drawImage(textImage, 0, 0, null);
    } catch (IOException ex) {
        Logger.getLogger(MergeImages.class.getName()).log(Level.SEVERE, null, ex);
    }
}
But it's too slow for what I need... it takes almost 5-8 seconds to process all my images and create the result. So I'm thinking: if I make it run on multiple cores simultaneously, that will increase my speed. For example, I have 4 cores; if I can divide my original list of elements into 4 lists, each with a quarter of the original, they can each run on one core, and after all have finished I can merge just the 4 resulting images into one. But I have no idea how to do that... So please, guys, if any of you know how to do that, please show me :D
Thanks, and sorry for my bad English.
The solution was really simple: I just needed to use parallelStream and everything worked much faster.
allExistingName.parallelStream().forEach(name -> {
    BufferedImage overlay = images.get(name);
    g.drawImage(overlay, 0, 0, null);
});
ImageIO.write(combined, "PNG", new File(folder, "MERGE.PNG"));
That was all I needed.
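One caveat worth adding: java.awt.Graphics is not thread-safe, so drawing onto the same Graphics object from a parallelStream can corrupt the output. A safer variant (a sketch, assuming the same folder, combined, g and allExistingName variables as above) parallelizes only the expensive ImageIO.read calls and keeps the drawing sequential:

// Extra imports: java.io.UncheckedIOException, java.util.Map, java.util.stream.Collectors
// Load the overlays in parallel; reading the files is the slow, independent part.
Map<String, BufferedImage> images = allExistingName.parallelStream()
        .collect(Collectors.toConcurrentMap(
                name -> name,
                name -> {
                    try {
                        return ImageIO.read(new File(folder, name));
                    } catch (IOException e) {
                        throw new UncheckedIOException(e);
                    }
                }));

// Draw sequentially: Graphics is not thread-safe, so keep all drawing on one thread.
for (String name : allExistingName) {
    g.drawImage(images.get(name), 0, 0, null);
}

// Write the result once, after all overlays are drawn.
ImageIO.write(combined, "PNG", new File(folder, "MERGE.PNG"));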

How to know if a 3D object is looked at in VR

I am using the RajawaliVR library.
I have added a plane and applied a texture to it. Now I want to know when my object is being looked at, so that I can trigger some event. Is there anything in RajawaliVR or Google Cardboard that can help me achieve this?
Material cruiserMaterial = new Material();
cruiserMaterial.setDiffuseMethod(new DiffuseMethod.Lambert());
cruiserMaterial.setColorInfluence(0);
cruiserMaterial.enableLighting(true);
try {
    cruiserMaterial.addTexture(new Texture("spaceCruiserTex", R.drawable.image2));
} catch (TextureException e) {
    e.printStackTrace();
}
Object3D leftPlane = new Plane(10f, 10f, 1, 1, 1);
leftPlane.setMaterial(cruiserMaterial);
leftPlane.setRotZ(90);
Object3D container = new Object3D();
container.addChild(leftPlane);
container.setRotX(90);
container.setRotY(90);
container.setRotZ(90);
container.setZ(-20);
getCurrentScene().addChild(container);
Just put this in your renderer's main loop (onDrawFrame), iterate over a list of objects, and pass each object as the parameter. The method returns true if you are currently looking at the object.
private static final float YAW_LIMIT = 0.12f;
private static final float PITCH_LIMIT = 0.12f;

/**
 * Check if user is looking at object by calculating where the object is in eye-space.
 *
 * @return true if the user is looking at the object.
 */
private boolean isLookingAtObject(WorldObject object) {
    float[] initVec = { 0, 0, 0, 1.0f };
    float[] objPositionVec = new float[4];
    // Convert object space to camera space. Use the headView from onNewFrame.
    Matrix.multiplyMM(mModelView, 0, this.getHeadViewMatrix(), 0, object.getModel().getModelMatrix().getFloatValues(), 0);
    Matrix.multiplyMV(objPositionVec, 0, mModelView, 0, initVec, 0);
    float pitch = (float) Math.atan2(objPositionVec[1], -objPositionVec[2]);
    float yaw = (float) Math.atan2(objPositionVec[0], -objPositionVec[2]);
    return Math.abs(pitch) < PITCH_LIMIT && Math.abs(yaw) < YAW_LIMIT;
}
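For instance, the per-frame check could look like the following sketch, where checkGaze and onObjectGazedAt are hypothetical names for your own loop and event hook:

// Call once per rendered frame, e.g. from the renderer's onDrawFrame.
private void checkGaze(List<WorldObject> objects) {
    for (WorldObject object : objects) {
        if (isLookingAtObject(object)) {
            onObjectGazedAt(object);  // hypothetical hook: trigger your event here
        }
    }
}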

MonoTouch - WebRequest memory leak and crash?

I've got a MonoTouch app that does an HTTP POST with a 3.5MB file, and it is very unstable on the primary platforms that I test on (iPhone 3G with OS 3.1.2 and iPhone 4 with OS 4.2.1). I'll describe what I'm doing here and maybe someone can tell me if I'm doing something wrong.
In order to rule out the rest of my app, I've whittled this down to a tiny sample app. The app is an iPhone OpenGL Project and it does only this:
1. At startup, allocate 6MB of memory in 30k chunks. This simulates my app's memory usage.
2. Read a 3.5MB file into memory.
3. Create a thread to post the data (make a WebRequest object, use GetRequestStream(), and write the 3.5MB of data in).
4. When the main thread detects that the posting thread is done, go to step 2 and repeat.
Also, each frame, I allocate 0-100k to simulate the app doing something. I don't keep any references to this data so it should be getting garbage collected.
iPhone 3G Result: The app gets through 6 to 8 uploads and then the OS kills it. There is no crash log, but there is a LowMemory log showing that the app was jettisoned.
iPhone 4 Result: It gets an Mprotect error around the 11th upload.
A few data points:
Instruments does NOT show the memory increasing as the app continues to upload.
Instruments doesn't show any significant leaks (maybe 1 kilobyte total).
It doesn't matter whether I write the post data in 64k chunks or all at once with one Stream.Write() call.
It doesn't matter whether I wait for a response (HttpWebRequest.HaveResponse) or not before starting the next upload.
It doesn't matter if the POST data is even valid. I've tried using valid POST data and I've tried sending 3MB of zeros.
If the app is not allocating any data each frame, then it takes longer to run out of memory (but as mentioned before, the memory that I'm allocating each frame is not referenced after the frame it was allocated on, so it should be scooped up by the GC).
If nobody has any ideas, I'll file a bug with Novell, but I wanted to see if I'm doing something wrong here first.
If anyone wants the full sample app, I can provide it, but I've pasted the contents of my EAGLView.cs below.
using System;
using System.Net;
using System.Threading;
using System.Collections.Generic;
using System.IO;
using OpenTK.Platform.iPhoneOS;
using MonoTouch.CoreAnimation;
using OpenTK;
using OpenTK.Graphics.ES11;
using MonoTouch.Foundation;
using MonoTouch.ObjCRuntime;
using MonoTouch.OpenGLES;

namespace CrashTest
{
    public partial class EAGLView : iPhoneOSGameView
    {
        [Export("layerClass")]
        static Class LayerClass ()
        {
            return iPhoneOSGameView.GetLayerClass ();
        }

        [Export("initWithCoder:")]
        public EAGLView (NSCoder coder) : base(coder)
        {
            LayerRetainsBacking = false;
            LayerColorFormat = EAGLColorFormat.RGBA8;
            ContextRenderingApi = EAGLRenderingAPI.OpenGLES1;
        }

        protected override void ConfigureLayer (CAEAGLLayer eaglLayer)
        {
            eaglLayer.Opaque = true;
        }

        protected override void OnRenderFrame (FrameEventArgs e)
        {
            SimulateAppAllocations();
            UpdatePost();

            base.OnRenderFrame (e);
            float[] squareVertices = { -0.5f, -0.5f, 0.5f, -0.5f, -0.5f, 0.5f, 0.5f, 0.5f };
            byte[] squareColors = { 255, 255, 0, 255, 0, 255, 255, 255, 0, 0,
                0, 0, 255, 0, 255, 255 };

            MakeCurrent ();
            GL.Viewport (0, 0, Size.Width, Size.Height);
            GL.MatrixMode (All.Projection);
            GL.LoadIdentity ();
            GL.Ortho (-1.0f, 1.0f, -1.5f, 1.5f, -1.0f, 1.0f);
            GL.MatrixMode (All.Modelview);
            GL.Rotate (3.0f, 0.0f, 0.0f, 1.0f);
            GL.ClearColor (0.5f, 0.5f, 0.5f, 1.0f);
            GL.Clear ((uint)All.ColorBufferBit);
            GL.VertexPointer (2, All.Float, 0, squareVertices);
            GL.EnableClientState (All.VertexArray);
            GL.ColorPointer (4, All.UnsignedByte, 0, squareColors);
            GL.EnableClientState (All.ColorArray);
            GL.DrawArrays (All.TriangleStrip, 0, 4);
            SwapBuffers ();
        }

        AsyncHttpPost m_Post;
        int m_nPosts = 1;

        byte[] LoadPostData()
        {
            // Just return 3MB of zeros. It doesn't matter whether this is valid POST data or not.
            return new byte[1024 * 1024 * 3];
        }

        void UpdatePost()
        {
            if ( m_Post == null || m_Post.PostStatus != AsyncHttpPostStatus.InProgress )
            {
                System.Console.WriteLine( string.Format( "Starting post {0}", m_nPosts++ ) );
                byte [] postData = LoadPostData();
                m_Post = new AsyncHttpPost(
                    "https://api-video.facebook.com/restserver.php",
                    "multipart/form-data; boundary=" + "8cdbcdf18ab6640",
                    postData );
            }
        }

        Random m_Random = new Random(0);
        List< byte [] > m_Allocations;
        List< byte[] > m_InitialAllocations;

        void SimulateAppAllocations()
        {
            // First time through, allocate a bunch of data that the app would allocate.
            if ( m_InitialAllocations == null )
            {
                m_InitialAllocations = new List<byte[]>();
                int nInitialBytes = 6 * 1024 * 1024;
                int nBlockSize = 30000;
                for ( int nCurBytes = 0; nCurBytes < nInitialBytes; nCurBytes += nBlockSize )
                {
                    m_InitialAllocations.Add( new byte[nBlockSize] );
                }
            }

            m_Allocations = new List<byte[]>();
            for ( int i = 0; i < 10; i++ )
            {
                int nAllocationSize = m_Random.Next( 10000 ) + 10;
                m_Allocations.Add( new byte[nAllocationSize] );
            }
        }
    }

    public enum AsyncHttpPostStatus
    {
        InProgress,
        Success,
        Fail
    }

    public class AsyncHttpPost
    {
        public AsyncHttpPost( string sURL, string sContentType, byte [] postData )
        {
            m_PostData = postData;
            m_PostStatus = AsyncHttpPostStatus.InProgress;
            m_sContentType = sContentType;
            m_sURL = sURL;
            //UploadThread();
            m_UploadThread = new Thread( new ThreadStart( UploadThread ) );
            m_UploadThread.Start();
        }

        void UploadThread()
        {
            using ( MonoTouch.Foundation.NSAutoreleasePool pool = new MonoTouch.Foundation.NSAutoreleasePool() )
            {
                try
                {
                    HttpWebRequest request = WebRequest.Create( m_sURL ) as HttpWebRequest;
                    request.Method = "POST";
                    request.ContentType = m_sContentType;
                    request.ContentLength = m_PostData.Length;

                    // Write the post data.
                    using ( Stream stream = request.GetRequestStream() )
                    {
                        stream.Write( m_PostData, 0, m_PostData.Length );
                        stream.Close();
                    }
                    System.Console.WriteLine( "Finished!" );

                    // We're done with the data now. Let it be garbage collected.
                    m_PostData = null;

                    // Finished!
                    m_PostStatus = AsyncHttpPostStatus.Success;
                }
                catch ( System.Exception e )
                {
                    System.Console.WriteLine( "Error in AsyncHttpPost.UploadThread:\n" + e.Message );
                    m_PostStatus = AsyncHttpPostStatus.Fail;
                }
            }
        }

        public AsyncHttpPostStatus PostStatus
        {
            get
            {
                return m_PostStatus;
            }
        }

        Thread m_UploadThread;
        // Queued to be handled in the main thread.
        byte [] m_PostData;
        AsyncHttpPostStatus m_PostStatus;
        string m_sContentType;
        string m_sURL;
    }
}
I think you should read your file in 1KB (or some other arbitrary-size) chunks and write them to the web request.
Code similar to this:
byte[] buffer = new byte[1024];
int bytesRead = 0;
using (FileStream fileStream = File.OpenRead("YourFile.txt"))
{
    while ((bytesRead = fileStream.Read(buffer, 0, buffer.Length)) != 0)
    {
        httpPostStream.Write(buffer, 0, bytesRead);
    }
}
This is off the top of my head, but I think it's right.
This way you don't have an extra 3MB floating around in memory when you don't really need to. I think tricks like this are even more important on iDevices (or other devices) than on the desktop.
Test the buffer size too, a larger buffer will get you better speeds up to a point (I remember 8KB being pretty good).
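One more setting worth testing (an assumption on my part, not something I have verified on MonoTouch): by default, HttpWebRequest buffers the entire request body in memory before sending, which would undo the benefit of chunked writes. Since ContentLength is already set in the sample above, buffering can be disabled:

// In AsyncHttpPost.UploadThread, before writing the body:
HttpWebRequest request = WebRequest.Create( m_sURL ) as HttpWebRequest;
request.Method = "POST";
request.ContentType = m_sContentType;
request.ContentLength = m_PostData.Length;
// Stream the body straight out instead of buffering it all in memory first.
request.AllowWriteStreamBuffering = false;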

wglCreateContext() always returns NULL in OpenGL and Visual C++

I'm new to OpenGL.
I want to draw something using OpenGL in a Windows Forms application.
If I use a Win32 application with a WinMain method, the application works: in WinMain I fill an HWND with the CreateWindow() function, passing the WinMain parameters to CreateWindow.
But when I try to get the Handle from a Windows Form, I can't; every time
wglCreateContext(hdc) returns NULL
Here is the example I took:
public:
    COpenGL(System::Windows::Forms::Form ^ parentForm, GLsizei iWidth, GLsizei iHeight)
    {
        CreateParams^ cp = gcnew CreateParams;

        // Set the position on the form
        cp->X = 0;
        cp->Y = 0;
        cp->Height = iHeight;
        cp->Width = iWidth;

        // Specify the form as the parent.
        cp->Parent = parentForm->Handle;

        // Create as a child of the specified parent and make OpenGL compliant (no clipping)
        cp->Style = WS_CHILD | WS_VISIBLE | WS_CLIPSIBLINGS | WS_CLIPCHILDREN;

        // Create the actual window
        this->CreateHandle(cp);

        m_hDC = GetDC((HWND)this->Handle.ToPointer());
        if(m_hDC)
        {
            MySetPixelFormat(m_hDC);
            ReSizeGLScene(iWidth, iHeight);
            InitGL();
        }

        rtri = 0.0f;
        rquad = 0.0f;
    }

    GLint MySetPixelFormat(HDC hdc)
    {
        static PIXELFORMATDESCRIPTOR pfd =
        {
            sizeof(PIXELFORMATDESCRIPTOR), // size of this pfd
            1,                             // version number
            PFD_DRAW_TO_WINDOW |           // support window
            PFD_SUPPORT_OPENGL |           // support OpenGL
            PFD_DOUBLEBUFFER,              // double buffered
            PFD_TYPE_RGBA,                 // RGBA type
            16,                            // 16-bit color depth
            0, 0, 0, 0, 0, 0,              // color bits ignored
            0,                             // no alpha buffer
            0,                             // shift bit ignored
            0,                             // no accumulation buffer
            0, 0, 0, 0,                    // accum bits ignored
            16,                            // 16-bit z-buffer
            0,                             // no stencil buffer
            0,                             // no auxiliary buffer
            PFD_MAIN_PLANE,                // main layer
            0,                             // reserved
            0, 0, 0                        // layer masks ignored
        };

        GLint iPixelFormat;

        // get the device context's best, available pixel format match
        if((iPixelFormat = ChoosePixelFormat(hdc, &pfd)) == 0)
        {
            MessageBox::Show("ChoosePixelFormat Failed");
            return 0;
        }

        // make that match the device context's current pixel format
        if(SetPixelFormat(hdc, iPixelFormat, &pfd) == FALSE)
        {
            MessageBox::Show("SetPixelFormat Failed");
            return 0;
        }

        if((m_hglrc = wglCreateContext(hdc)) == NULL)
        {
            MessageBox::Show("wglCreateContext Failed");
            return 0;
        }

        if((wglMakeCurrent(hdc, m_hglrc)) == NULL)
        {
            MessageBox::Show("wglMakeCurrent Failed");
            return 0;
        }

        return 1;
    }
How can I solve this problem?
Here, change this in the constructor:
m_hDC = GetDC((HWND)this->Handle.ToPointer());
if(m_hDC)
{
    wglMakeCurrent(m_hDC, NULL);
    MySetPixelFormat(m_hDC);
    ReSizeGLScene(iWidth, iHeight);
    InitGL();
}
You must call wglMakeCurrent after m_hDC has been set up. I am replying to the first article of the example; see Creating an OpenGL view on a Windows Form.
That solved my problem :)
You can check the GetLastError value; mostly this happens because you chose a wrong or incompatible pixel format, so you can try another format. Also, your window class should be flagged CS_OWNDC, and DoubleBuffering should be set to false.
