wglCreateContext() returns NULL every time in OpenGL and Visual C++ - visual-c++

I'm new to OpenGL.
I want to draw something using OpenGL in a Windows Forms application.
If I use a Win32 application with a WinMain method, the application works.
In WinMain I fill an HWND with the CreateWindow() function and pass the WinMain parameters to CreateWindow.
But when I try to get the Handle from a Windows Form, it doesn't work. Every time,
wglCreateContext(hdc) returns NULL.
Here is the example I am working from:
public:
COpenGL(System::Windows::Forms::Form ^ parentForm, GLsizei iWidth, GLsizei iHeight)
{
CreateParams^ cp = gcnew CreateParams;
// Set the position on the form
cp->X = 0;
cp->Y = 0;
cp->Height = iHeight;
cp->Width = iWidth;
// Specify the form as the parent.
cp->Parent = parentForm->Handle;
// Create as a child of the specified parent and make OpenGL compliant (no clipping)
cp->Style = WS_CHILD | WS_VISIBLE | WS_CLIPSIBLINGS | WS_CLIPCHILDREN;
// Create the actual window
this->CreateHandle(cp);
m_hDC = GetDC((HWND)this->Handle.ToPointer());
if(m_hDC)
{
MySetPixelFormat(m_hDC);
ReSizeGLScene(iWidth, iHeight);
InitGL();
}
rtri = 0.0f;
rquad = 0.0f;
}
GLint MySetPixelFormat(HDC hdc)
{
static PIXELFORMATDESCRIPTOR pfd =
{
sizeof(PIXELFORMATDESCRIPTOR), // size of this pfd
1, // version number
PFD_DRAW_TO_WINDOW | // format must support a window
PFD_SUPPORT_OPENGL | // format must support OpenGL
PFD_DOUBLEBUFFER, // must support double buffering
PFD_TYPE_RGBA, // request an RGBA format
16, // 16-bit color depth
0, 0, 0, 0, 0, 0, // color bits ignored
0, // no alpha buffer
0, // shift bit ignored
0, // no accumulation buffer
0, 0, 0, 0, // accumulation bits ignored
16, // 16-bit z-buffer (depth buffer)
0, // no stencil buffer
0, // no auxiliary buffer
PFD_MAIN_PLANE, // main drawing layer
0, // reserved
0, 0, 0 // layer masks ignored
};
GLint iPixelFormat;
// get the device context's best, available pixel format match
if((iPixelFormat = ChoosePixelFormat(hdc, &pfd)) == 0)
{
MessageBox::Show("ChoosePixelFormat Failed");
return 0;
}
// make that match the device context's current pixel format
if(SetPixelFormat(hdc, iPixelFormat, &pfd) == FALSE)
{
MessageBox::Show("SetPixelFormat Failed");
return 0;
}
if((m_hglrc = wglCreateContext(hdc)) == NULL)
{
MessageBox::Show("wglCreateContext Failed");
return 0;
}
if((wglMakeCurrent(hdc, m_hglrc)) == NULL)
{
MessageBox::Show("wglMakeCurrent Failed");
return 0;
}
return 1;
}
How can I solve this problem?

Here is the change in the constructor:
m_hDC = GetDC((HWND)this->Handle.ToPointer());
if(m_hDC)
{
wglMakeCurrent(m_hDC, NULL);
MySetPixelFormat(m_hDC);
ReSizeGLScene(iWidth, iHeight);
InitGL();
}
You must call wglMakeCurrent after m_hDC has been set up. This is in reply to the example in the first answer, from Creating an OpenGL view on a Windows Form.
That solved my problem :)

You can check the GetLastError() value; mostly this happens because you chose a wrong or incompatible pixel format, so you can try another format. Your window class should also be flagged CS_OWNDC, and DoubleBuffering should be set to false.
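For reference, registering an OpenGL-friendly window class with CS_OWNDC looks roughly like this in plain Win32 (a minimal sketch; the class name and window procedure are placeholders, not taken from the original code):
WNDCLASS wc = {0};
wc.style = CS_OWNDC | CS_HREDRAW | CS_VREDRAW; // CS_OWNDC: the window keeps its own device context
wc.lpfnWndProc = DefWindowProc; // placeholder window procedure
wc.hInstance = GetModuleHandle(NULL);
wc.lpszClassName = TEXT("MyGLWindowClass"); // hypothetical class name
RegisterClass(&wc);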

Related

How to create a text file (log file) with all rights (read/write) for all Windows users (Everyone) using sprintf in VC++

I am creating the Smart.log file using the following code:
void S_SetLogFileName()
{
char HomeDir[MAX_PATH];
if (strlen(LogFileName) == 0)
{
TCHAR AppDataFolderPath[MAX_PATH];
if (SUCCEEDED(SHGetFolderPath(NULL, CSIDL_COMMON_APPDATA, NULL, 0, AppDataFolderPath)))
{
strcat(AppDataFolderPath, "\\Netcom\\Logs"); // append the subdirectory to the base path
if (CreateDirectory(AppDataFolderPath, NULL) || ERROR_ALREADY_EXISTS == GetLastError())
sprintf(LogFileName,"%s\\Smart.log",AppDataFolderPath);
else
goto DEFAULTVALUE;
}
else
{
DEFAULTVALUE:
if (S_GetHomeDir(HomeDir,sizeof(HomeDir)))
sprintf(LogFileName,"%s\\Bin\\Smart.log",HomeDir);
else
strcpy(LogFileName,"Smart.log");
}
}
}
and opening and modifying it as follows:
void LogMe(char *FileName,char *s, BOOL PrintTimeStamp)
{
FILE *stream;
char buff[2048] = "";
char date[256];
char time[256];
SYSTEMTIME SystemTime;
if(PrintTimeStamp)
{
GetLocalTime(&SystemTime);
GetDateFormat(LOCALE_USER_DEFAULT,0,&SystemTime,"MM':'dd':'yyyy",date,sizeof(date));
GetTimeFormat(LOCALE_USER_DEFAULT,0,&SystemTime,"HH':'mm':'ss",time,sizeof(time));
sprintf(buff,"[%d - %s %s]", GetCurrentThreadId(),date,time);
}
stream = fopen( FileName, "a" );
if( stream == NULL ) // fopen fails here when access is denied; don't pass a NULL FILE* to fprintf
return;
fprintf( stream, "%s %s\n", buff, s );
fclose( stream );
}
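For context, a typical call site would look something like this (the message string is hypothetical):
LogMe(LogFileName, "Program started", TRUE);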
Here's the problem:
UserA runs the program first; it creates \ProgramData\Netcom\Smart.log using S_SetLogFileName().
UserB runs the program next; it tries to append to or modify Smart.log and gets access denied.
What do I need to change in my code to allow all users to access the Smart.log file?
This is the solution I was looking for; hope it is useful for someone.
Referred from:
void SetFilePermission(LPCTSTR FileName)
{
PSID pEveryoneSID = NULL;
PACL pACL = NULL;
EXPLICIT_ACCESS ea[1];
SID_IDENTIFIER_AUTHORITY SIDAuthWorld = SECURITY_WORLD_SID_AUTHORITY;
// Create a well-known SID for the Everyone group.
AllocateAndInitializeSid(&SIDAuthWorld, 1,
SECURITY_WORLD_RID,
0, 0, 0, 0, 0, 0, 0,
&pEveryoneSID);
// Initialize an EXPLICIT_ACCESS structure for an ACE.
ZeroMemory(&ea, 1 * sizeof(EXPLICIT_ACCESS));
ea[0].grfAccessPermissions = 0xFFFFFFFF;
ea[0].grfAccessMode = GRANT_ACCESS;
ea[0].grfInheritance = NO_INHERITANCE;
ea[0].Trustee.TrusteeForm = TRUSTEE_IS_SID;
ea[0].Trustee.TrusteeType = TRUSTEE_IS_WELL_KNOWN_GROUP;
ea[0].Trustee.ptstrName = (LPTSTR)pEveryoneSID;
// Create a new ACL that contains the new ACEs.
SetEntriesInAcl(1, ea, NULL, &pACL);
// Initialize a security descriptor.
PSECURITY_DESCRIPTOR pSD = (PSECURITY_DESCRIPTOR)LocalAlloc(LPTR,
SECURITY_DESCRIPTOR_MIN_LENGTH);
InitializeSecurityDescriptor(pSD, SECURITY_DESCRIPTOR_REVISION);
// Add the ACL to the security descriptor.
SetSecurityDescriptorDacl(pSD,
TRUE, // bDaclPresent flag
pACL,
FALSE); // not a default DACL
//Change the security attributes
SetFileSecurity(FileName, DACL_SECURITY_INFORMATION, pSD);
if (pEveryoneSID)
FreeSid(pEveryoneSID);
if (pACL)
LocalFree(pACL);
if (pSD)
LocalFree(pSD);
}
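A hedged usage sketch: after the log file is first created, apply the helper to it so later users can append (assuming LogFileName already holds the full path built by S_SetLogFileName):
S_SetLogFileName(); // builds LogFileName as in the question
FILE *f = fopen(LogFileName, "a"); // create the file if it does not exist yet
if (f) fclose(f);
SetFilePermission(LogFileName); // grant Everyone access so other users can open it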

No error on xcb_grab_key but event loop not catching (Global Hotkey)

I am trying to set up a global hotkey on Linux.
I initially used X11 (libX11.so); however, I had to do this from a thread, and XPendingEvent and XNextEvent would eventually crash the app.
So I switched to xcb (libxcb.so.1). There are no errors (I even check with xcb_request_check), yet the event loop is not picking anything up. As soon as I start the loop, I get only one event, which looks like this:
{
response_type: 0,
pad0: 10,
sequence: 2,
pad: [620, 2162688, 0, 0, 0, 0, 0],
full_sequence: 2
}
Here is my code. I actually do this in js-ctypes, but I have cut it down to code that is as simple and language-agnostic as possible:
conn = xcb_connect(null, null);
keysyms = xcb_key_symbols_alloc(conn);
keycodesPtr = xcb_key_symbols_get_keycode(keysyms, XK_Space);
setup = xcb_get_setup(conn);
screens = xcb_setup_roots_iterator(setup);
screensCnt = screens.rem;
for (var i=0; i<screensCnt; i++) {
rez_grab = xcb_grab_key(conn, 1, screens.data.root, XCB_MOD_MASK_ANY, keycodesPtr[0], XCB_GRAB_MODE_ASYNC, XCB_GRAB_MODE_ASYNC);
rez_err = xcb_request_check(conn, rez_grab);
// rez_err is null
xcb_screen_next(&screens);
}
xcb_flush(conn);
// start event loop
while (true) {
ev = xcb_poll_for_event(conn);
console.log(ev);
if (ev != null) {
free(ev);
}
Sleep(50);
}
console.log(ev) gives me what I posted above, with a response_type of 0, and forever after that ev is just null.
Does anyone know what's up? rez_grab as a raw string is xcb_void_cookie_t(2).
Thanks much.
Figured it out at long last! I was using XCB_MOD_MASK_ANY, and that constant works on Debian but not on Ubuntu, which is what I was testing on. I switched it to grab with the lock modifiers (num lock, caps lock, etc.) explicitly, and now it works! :)
ostypes.API('xcb_grab_key')(conn, 1, screens.data.contents.root, ostypes.CONST.XCB_MOD_MASK_LOCK, keycodesArr[j], ostypes.CONST.XCB_GRAB_MODE_ASYNC, ostypes.CONST.XCB_GRAB_MODE_ASYNC); // caps lock
ostypes.API('xcb_grab_key')(conn, 1, screens.data.contents.root, ostypes.CONST.XCB_MOD_MASK_2, keycodesArr[j], ostypes.CONST.XCB_GRAB_MODE_ASYNC, ostypes.CONST.XCB_GRAB_MODE_ASYNC); // num lock
ostypes.API('xcb_grab_key')(conn, 1, screens.data.contents.root, ostypes.CONST.XCB_MOD_MASK_LOCK | ostypes.CONST.XCB_MOD_MASK_2, keycodesArr[j], ostypes.CONST.XCB_GRAB_MODE_ASYNC, ostypes.CONST.XCB_GRAB_MODE_ASYNC); // caps lock AND num lock
That was very crazy; I had no idea the XCB_MOD_MASK_ANY constant didn't work on Ubuntu. This was the original call:
var rez_grab = ostypes.API('xcb_grab_key')(conn, 1, screens.data.contents.root, ostypes.CONST.XCB_MOD_MASK_ANY, keycodesArr[j], ostypes.CONST.XCB_GRAB_MODE_ASYNC, ostypes.CONST.XCB_GRAB_MODE_ASYNC);
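For reference, the same set of grabs can be written as a loop in plain C (a sketch assuming the conn, screens, and keycodesPtr set up earlier; grabbing once per lock-state combination, including the no-lock state, is the usual substitute for XCB_MOD_MASK_ANY):
uint16_t lock_masks[] = {
0, // no lock active
XCB_MOD_MASK_LOCK, // caps lock
XCB_MOD_MASK_2, // num lock
XCB_MOD_MASK_LOCK | XCB_MOD_MASK_2 // both locks
};
for (int m = 0; m < 4; m++)
xcb_grab_key(conn, 1, screens.data->root, lock_masks[m], keycodesPtr[0], XCB_GRAB_MODE_ASYNC, XCB_GRAB_MODE_ASYNC);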
I also tried @n.m.'s code. On Ubuntu it didn't work, but it worked on Debian:
#include <xcb/xcb.h>
#define XK_LATIN1
#include <xcb/xcb_keysyms.h>
#include <X11/keysymdef.h>
#include <stdio.h>
#include <stdlib.h>
int
main ()
{
xcb_connection_t *conn;
conn = xcb_connect (NULL, NULL);
xcb_key_symbols_t * keysyms = xcb_key_symbols_alloc(conn);
xcb_keycode_t * keycodesPtr = xcb_key_symbols_get_keycode(keysyms, XK_space);
const xcb_setup_t* setup = xcb_get_setup(conn);
xcb_screen_iterator_t screens = xcb_setup_roots_iterator(setup);
int screensCnt = screens.rem;
for (int i=0; i<screensCnt; i++) {
xcb_void_cookie_t rez_grab = xcb_grab_key(conn, 1, screens.data->root, XCB_MOD_MASK_ANY, keycodesPtr[0], XCB_GRAB_MODE_ASYNC, XCB_GRAB_MODE_ASYNC);
const static uint32_t values[] = { XCB_EVENT_MASK_EXPOSURE | XCB_EVENT_MASK_BUTTON_PRESS };
xcb_change_window_attributes (conn, screens.data->root, XCB_CW_EVENT_MASK, values);
xcb_screen_next(&screens);
}
xcb_flush(conn);
// start event loop
while (1) {
xcb_generic_event_t *ev = xcb_wait_for_event(conn);
if (ev && ((ev->response_type & ~0x80) == XCB_KEY_PRESS))
{
xcb_key_press_event_t *kp = (xcb_key_press_event_t *)ev;
printf ("Got key press %d\n", (int)kp->event);
}
if (ev != NULL) {
free(ev);
}
}
return 0;
}

How to know if a 3D object is looked at in VR

I am using the RajawaliVR library.
I have added a plane and applied a texture to it. Now I want to know when my object is being looked at so that I can trigger some event. Is there anything in RajawaliVR or Google Cardboard that can help me achieve this?
Material cruiserMaterial = new Material();
cruiserMaterial.setDiffuseMethod(new DiffuseMethod.Lambert());
cruiserMaterial.setColorInfluence(0);
cruiserMaterial.enableLighting(true);
try {
cruiserMaterial.addTexture(new Texture("spaceCruiserTex",
R.drawable.image2));
} catch (TextureException e) {
e.printStackTrace();
}
Object3D leftPlane = new Plane(10f, 10f, 1, 1, 1);
leftPlane.setMaterial(cruiserMaterial);
leftPlane.setRotZ(90);
Object3D container = new Object3D();
container.addChild(leftPlane);
container.setRotX(90);
container.setRotY(90);
container.setRotZ(90);
container.setZ(-20);
getCurrentScene().addChild(container);
Just put this in your renderer's main loop (onDrawFrame), iterate over a list of objects, and pass each object as a parameter. The method returns true if you are currently looking at the object.
private static final float YAW_LIMIT = 0.12f;
private static final float PITCH_LIMIT = 0.12f;
/**
* Check if user is looking at object by calculating where the object is in eye-space.
*
* @return true if the user is looking at the object.
*/
private boolean isLookingAtObject(WorldObject object) {
float[] initVec = { 0, 0, 0, 1.0f };
float[] objPositionVec = new float[4];
// Convert object space to camera space. Use the headView from onNewFrame.
Matrix.multiplyMM(mModelView, 0, this.getHeadViewMatrix(), 0, object.getModel().getModelMatrix().getFloatValues(), 0);
Matrix.multiplyMV(objPositionVec, 0, mModelView, 0, initVec, 0);
float pitch = (float) Math.atan2(objPositionVec[1], -objPositionVec[2]);
float yaw = (float) Math.atan2(objPositionVec[0], -objPositionVec[2]);
return Math.abs(pitch) < PITCH_LIMIT && Math.abs(yaw) < YAW_LIMIT;
}
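For what it's worth, the math here transforms the object's origin into eye space, where the camera looks down the negative Z axis, so pitch = atan2(y, -z) and yaw = atan2(x, -z) are the object's angular offsets from the view direction; the 0.12 rad limits allow roughly 7 degrees of deviation in each axis around the gaze.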

Strange code behavior adding OpenCV to VC++ Kinect SDK samples

I'm new to C++ and am trying to add OpenCV to Microsoft's Kinect samples. I was able to do it for the ColorBasics-D2D sample by modifying this function:
void CColorBasics::ProcessColor()
{
HRESULT hr;
NUI_IMAGE_FRAME imageFrame;
// Attempt to get the color frame
hr = m_pNuiSensor->NuiImageStreamGetNextFrame(m_pColorStreamHandle, 0, &imageFrame);
if (FAILED(hr))
{
return;
}
INuiFrameTexture * pTexture = imageFrame.pFrameTexture;
NUI_LOCKED_RECT LockedRect;
// Lock the frame data so the Kinect knows not to modify it while we're reading it
pTexture->LockRect(0, &LockedRect, NULL, 0);
// Make sure we've received valid data
if (LockedRect.Pitch != 0)
{
BYTE * pBuffer = (BYTE*) LockedRect.pBits;
cvSetData(img,(BYTE*) pBuffer, img->widthStep);
Mat m(img);
Mat hsv;
vector<Mat> mv(3, Mat(cvSize(640,480), CV_8UC1));
cvtColor(m, hsv, CV_BGR2HSV);
cvtColor(hsv, m, CV_HSV2BGR);
IplImage iplimg(m);
cvNamedWindow("rgb",1);
cvShowImage("rgb",&iplimg);
// Draw the data with Direct2D
m_pDrawColor->Draw(static_cast<BYTE *>(LockedRect.pBits), LockedRect.size);
// If the user pressed the screenshot button, save a screenshot
if (m_bSaveScreenshot)
{
WCHAR statusMessage[cStatusMessageMaxLen];
// Retrieve the path to My Photos
WCHAR screenshotPath[MAX_PATH];
GetScreenshotFileName(screenshotPath, _countof(screenshotPath));
// Write out the bitmap to disk
hr = SaveBitmapToFile(static_cast<BYTE *>(LockedRect.pBits), cColorWidth, cColorHeight, 32, screenshotPath);
if (SUCCEEDED(hr))
{
// Set the status bar to show where the screenshot was saved
StringCchPrintf( statusMessage, cStatusMessageMaxLen, L"Screenshot saved to %s", screenshotPath);
}
else
{
StringCchPrintf( statusMessage, cStatusMessageMaxLen, L"Failed to write screenshot to %s", screenshotPath);
}
SetStatusMessage(statusMessage);
// toggle off so we don't save a screenshot again next frame
m_bSaveScreenshot = false;
}
}
// We're done with the texture so unlock it
pTexture->UnlockRect(0);
// Release the frame
m_pNuiSensor->NuiImageStreamReleaseFrame(m_pColorStreamHandle, &imageFrame);
}
This works fine. However, when I added something similar to the SkeletalViewer example, it just displays an empty window.
/// <summary>
/// Handle new color data
/// </summary>
/// <returns>true if a frame was processed, false otherwise</returns>
bool CSkeletalViewerApp::Nui_GotColorAlert( )
{
NUI_IMAGE_FRAME imageFrame;
bool processedFrame = true;
HRESULT hr = m_pNuiSensor->NuiImageStreamGetNextFrame( m_pVideoStreamHandle, 0, &imageFrame );
if ( FAILED( hr ) )
{
return false;
}
INuiFrameTexture * pTexture = imageFrame.pFrameTexture;
NUI_LOCKED_RECT LockedRect;
pTexture->LockRect( 0, &LockedRect, NULL, 0 );
if ( LockedRect.Pitch != 0 )
{
BYTE * pBuffer = (BYTE*) LockedRect.pBits;
cvSetData(img,(BYTE*) pBuffer, img->widthStep);
Mat m(img);
IplImage iplimg(m);
cvNamedWindow("rgb",1);
cvShowImage("rgb",&iplimg);
m_pDrawColor->Draw( static_cast<BYTE *>(LockedRect.pBits), LockedRect.size );
}
else
{
OutputDebugString( L"Buffer length of received texture is bogus\r\n" );
processedFrame = false;
}
pTexture->UnlockRect( 0 );
m_pNuiSensor->NuiImageStreamReleaseFrame( m_pVideoStreamHandle, &imageFrame );
return processedFrame;
}
I'm not sure why the same code doesn't work in this example. I'm using Visual Studio 2010 and OpenCV 2.4.2.
Thanks
Figured it out. I changed it to this:
if ( LockedRect.Pitch != 0 )
{
BYTE * pBuffer = static_cast<BYTE *>(LockedRect.pBits);
cvSetData(img,(BYTE*) pBuffer, img->widthStep);
Mat m(img);
IplImage iplimg(m);
cvNamedWindow("rgb",1);
cvShowImage("rgb",&iplimg);
waitKey(1);
m_pDrawColor->Draw( static_cast<BYTE *>(LockedRect.pBits), LockedRect.size );
}
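A likely explanation, for what it's worth: cvShowImage only queues the image for display, and HighGUI windows are actually repainted while cvWaitKey/waitKey pumps the window message loop. The ColorBasics sample's message loop apparently serviced the OpenCV window, while SkeletalViewer's did not, so without the waitKey(1) call the window never got a chance to redraw.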

MonoTouch - WebRequest memory leak and crash?

I've got a MonoTouch app that does an HTTP POST of a 3.5MB file, and it is very unstable on the primary platforms I test on (iPhone 3G with OS 3.1.2 and iPhone 4 with OS 4.2.1). I'll describe what I'm doing here, and maybe someone can tell me if I'm doing something wrong.
In order to rule out the rest of my app, I've whittled this down to a tiny sample app. The app is an iPhone OpenGL Project and it does only this:
At startup, allocate 6MB of memory in 30k chunks. This simulates my app's memory usage.
Read a 3.5MB file into memory.
Create a thread to post the data. (Make a WebRequest object, use GetRequestStream(), and write the 3.5MB data in).
When the main thread detects that the posting thread is done, go to step 2 and repeat.
Also, each frame, I allocate 0-100k to simulate the app doing something. I don't keep any references to this data so it should be getting garbage collected.
iPhone 3G Result: The app gets through 6 to 8 uploads and then the OS kills it. There is no crash log, but there is a LowMemory log showing that the app was jettisoned.
iPhone 4 Result: It gets an mprotect error around the 11th upload.
A few data points:
Instruments does NOT show the memory increasing as the app continues to upload.
Instruments doesn't show any significant leaks (maybe 1 kilobyte total).
It doesn't matter whether I write the post data in 64k chunks or all at once with one Stream.Write() call.
It doesn't matter whether I wait for a response (HttpWebRequest.HaveResponse) or not before starting the next upload.
It doesn't matter if the POST data is even valid. I've tried using valid POST data and I've tried sending 3MB of zeros.
If the app does not allocate any data each frame, it takes longer to run out of memory (but, as mentioned before, the memory I allocate each frame is not referenced after the frame it was allocated on, so it should be scooped up by the GC).
If nobody has any ideas, I'll file a bug with Novell, but I wanted to see if I'm doing something wrong here first.
If anyone wants the full sample app, I can provide it, but I've pasted the contents of my EAGLView.cs below.
using System;
using System.Net;
using System.Threading;
using System.Collections.Generic;
using System.IO;
using OpenTK.Platform.iPhoneOS;
using MonoTouch.CoreAnimation;
using OpenTK;
using OpenTK.Graphics.ES11;
using MonoTouch.Foundation;
using MonoTouch.ObjCRuntime;
using MonoTouch.OpenGLES;
namespace CrashTest
{
public partial class EAGLView : iPhoneOSGameView
{
[Export("layerClass")]
static Class LayerClass ()
{
return iPhoneOSGameView.GetLayerClass ();
}
[Export("initWithCoder:")]
public EAGLView (NSCoder coder) : base(coder)
{
LayerRetainsBacking = false;
LayerColorFormat = EAGLColorFormat.RGBA8;
ContextRenderingApi = EAGLRenderingAPI.OpenGLES1;
}
protected override void ConfigureLayer (CAEAGLLayer eaglLayer)
{
eaglLayer.Opaque = true;
}
protected override void OnRenderFrame (FrameEventArgs e)
{
SimulateAppAllocations();
UpdatePost();
base.OnRenderFrame (e);
float[] squareVertices = { -0.5f, -0.5f, 0.5f, -0.5f, -0.5f, 0.5f, 0.5f, 0.5f };
byte[] squareColors = { 255, 255, 0, 255, 0, 255, 255, 255, 0, 0,
0, 0, 255, 0, 255, 255 };
MakeCurrent ();
GL.Viewport (0, 0, Size.Width, Size.Height);
GL.MatrixMode (All.Projection);
GL.LoadIdentity ();
GL.Ortho (-1.0f, 1.0f, -1.5f, 1.5f, -1.0f, 1.0f);
GL.MatrixMode (All.Modelview);
GL.Rotate (3.0f, 0.0f, 0.0f, 1.0f);
GL.ClearColor (0.5f, 0.5f, 0.5f, 1.0f);
GL.Clear ((uint)All.ColorBufferBit);
GL.VertexPointer (2, All.Float, 0, squareVertices);
GL.EnableClientState (All.VertexArray);
GL.ColorPointer (4, All.UnsignedByte, 0, squareColors);
GL.EnableClientState (All.ColorArray);
GL.DrawArrays (All.TriangleStrip, 0, 4);
SwapBuffers ();
}
AsyncHttpPost m_Post;
int m_nPosts = 1;
byte[] LoadPostData()
{
// Just return 3MB of zeros. It doesn't matter whether this is valid POST data or not.
return new byte[1024 * 1024 * 3];
}
void UpdatePost()
{
if ( m_Post == null || m_Post.PostStatus != AsyncHttpPostStatus.InProgress )
{
System.Console.WriteLine( string.Format( "Starting post {0}", m_nPosts++ ) );
byte [] postData = LoadPostData();
m_Post = new AsyncHttpPost(
"https://api-video.facebook.com/restserver.php",
"multipart/form-data; boundary=" + "8cdbcdf18ab6640",
postData );
}
}
Random m_Random = new Random(0);
List< byte [] > m_Allocations;
List< byte[] > m_InitialAllocations;
void SimulateAppAllocations()
{
// First time through, allocate a bunch of data that the app would allocate.
if ( m_InitialAllocations == null )
{
m_InitialAllocations = new List<byte[]>();
int nInitialBytes = 6 * 1024 * 1024;
int nBlockSize = 30000;
for ( int nCurBytes = 0; nCurBytes < nInitialBytes; nCurBytes += nBlockSize )
{
m_InitialAllocations.Add( new byte[nBlockSize] );
}
}
m_Allocations = new List<byte[]>();
for ( int i=0; i < 10; i++ )
{
int nAllocationSize = m_Random.Next( 10000 ) + 10;
m_Allocations.Add( new byte[nAllocationSize] );
}
}
}
public enum AsyncHttpPostStatus
{
InProgress,
Success,
Fail
}
public class AsyncHttpPost
{
public AsyncHttpPost( string sURL, string sContentType, byte [] postData )
{
m_PostData = postData;
m_PostStatus = AsyncHttpPostStatus.InProgress;
m_sContentType = sContentType;
m_sURL = sURL;
//UploadThread();
m_UploadThread = new Thread( new ThreadStart( UploadThread ) );
m_UploadThread.Start();
}
void UploadThread()
{
using ( MonoTouch.Foundation.NSAutoreleasePool pool = new MonoTouch.Foundation.NSAutoreleasePool() )
{
try
{
HttpWebRequest request = WebRequest.Create( m_sURL ) as HttpWebRequest;
request.Method = "POST";
request.ContentType = m_sContentType;
request.ContentLength = m_PostData.Length;
// Write the post data.
using ( Stream stream = request.GetRequestStream() )
{
stream.Write( m_PostData, 0, m_PostData.Length );
stream.Close();
}
System.Console.WriteLine( "Finished!" );
// We're done with the data now. Let it be garbage collected.
m_PostData = null;
// Finished!
m_PostStatus = AsyncHttpPostStatus.Success;
}
catch ( System.Exception e )
{
System.Console.WriteLine( "Error in AsyncHttpPost.UploadThread:\n" + e.Message );
m_PostStatus = AsyncHttpPostStatus.Fail;
}
}
}
public AsyncHttpPostStatus PostStatus
{
get
{
return m_PostStatus;
}
}
Thread m_UploadThread;
// Queued to be handled in the main thread.
byte [] m_PostData;
AsyncHttpPostStatus m_PostStatus;
string m_sContentType;
string m_sURL;
}
}
I think you should read in your file 1 KB (or some arbitrary size) at a time and write it to the web request.
Code similar to this:
byte[] buffer = new byte[1024];
int bytesRead = 0;
using (FileStream fileStream = File.OpenRead("YourFile.txt"))
{
while ((bytesRead = fileStream.Read(buffer, 0, buffer.Length)) != 0)
{
httpPostStream.Write(buffer, 0, bytesRead);
}
}
This is off the top of my head, but I think it's right.
This way you don't have an extra 3MB floating around in memory when you don't really need it. I think tricks like this are even more important on iDevices (or other devices) than on the desktop.
Test the buffer size too; a larger buffer will get you better speeds up to a point (I remember 8KB being pretty good).
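One more hedged suggestion, not from the original answer: HttpWebRequest buffers the whole request body in memory by default, so it may also help to set request.AllowWriteStreamBuffering = false before writing (the sample above already sets ContentLength, which that mode requires); otherwise the streamed writes can still be collected into one large in-memory buffer.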
