As the title says, I've got a packet containing a whole JPEG image file, which I've put into a byte array. How do I convert this to an image and then display it?
I don't need to save it, just display it, and once the user enters an answer, delete it.
If you're wondering, it's a captcha image, something like what JDownloader does with captchas.
Edit: I meant, how can I display images in dialog boxes?
You can use the GDI+ Image class to do this, constructing it from an IStream that wraps your in-memory JPEG; GDI+ pulls the bytes out of the stream through ISequentialStream::Read.
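A minimal sketch of that approach, assuming plain Win32/GDI+ (the function name ImageFromJpegBytes is just for illustration): it wraps the byte array in a memory-backed IStream via CreateStreamOnHGlobal and hands it to Gdiplus::Image. GdiplusStartup must have been called beforehand.

#include <windows.h>
#include <objidl.h>
#include <gdiplus.h>
#include <cstring>

Gdiplus::Image* ImageFromJpegBytes(const BYTE* data, size_t size)
{
    // Copy the JPEG bytes into movable global memory for the stream to own.
    HGLOBAL hMem = GlobalAlloc(GMEM_MOVEABLE, size);
    if (!hMem) return NULL;
    memcpy(GlobalLock(hMem), data, size);
    GlobalUnlock(hMem);

    IStream* stream = NULL;
    // TRUE = the stream frees hMem when it is released.
    if (FAILED(CreateStreamOnHGlobal(hMem, TRUE, &stream))) {
        GlobalFree(hMem);
        return NULL;
    }
    Gdiplus::Image* img = new Gdiplus::Image(stream); // decodes via the IStream
    stream->Release(); // the Image keeps its own reference
    return img;
}

In the dialog's WM_PAINT handler you can then draw it with Gdiplus::Graphics::DrawImage, and simply delete the Image (no file involved) once the user has answered the captcha.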
Trying to tackle the scenario where a PDF is the given input. The PDF has a page margin around its content, and when this input PDF is compiled into another document, which has its own resulting margins, the final appearance is off: the content within the PDF now has a "double margin", making the input PDF content smaller.
The goal is to remove margins in an input PDF BEFORE including it in the process that produces the final document.
I have looked into Node modules such as pdf-lib in the hope of cropping a PDF correctly, but most of the documents that come through have the same values for the MediaBox, ArtBox, BleedBox, CropBox, etc., which doesn't give us a jumping-off point on how to recrop the PDF. Since all of those boxes are (0, 0, width of document, height of document), there is not much to go on to find where the content starts.
The hope is to do this all server-side without the need for a user to manually fix the issue.
Has anyone tried to tackle this sort of process? Thoughts? Break down the PDF (which could contain almost anything) with something like the pdf2json module and create a new document with pdf-lib?
How do I actually crop and compress (resize to snippet size) the image on the back-end?
I'm using Croppie on the front-end: https://foliotek.github.io/Croppie/
I'm completely lost; a little guidance would be very helpful.
Thanks!
Why do you want to crop your picture on the back-end? This operation can be done in the browser. When you use the result({ type, size, format, quality, circle }) function, you get the data of the cropped picture. If type is base64, you can save the data to a file on the server after base64-decoding it.
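For the server side, here is a minimal sketch assuming a Java back-end (the question doesn't name one, so the class and helper names are hypothetical). Croppie's base64 result arrives as a data URL, so strip the prefix before decoding.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Base64;

public class CropUpload {
    // dataUrl looks like "data:image/png;base64,iVBOR..." as produced by croppie
    static void saveDataUrl(String dataUrl, Path target) throws IOException {
        String b64 = dataUrl.substring(dataUrl.indexOf(',') + 1); // drop "data:...;base64,"
        Files.write(target, Base64.getDecoder().decode(b64));
    }
}

Compression can then happen in the browser too, by asking result() for a jpeg format with a quality value, so the server only decodes and stores.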
I've stored a video in a BitmapImage and want the user to be able to replay it. However, when I run my program it treats my BitmapImage as if it were empty, when it's not. I've tried doing the same thing with a picture saved to a BitmapImage, and any picture will appear and fill the screen as I tell it to, but videos just don't show up. Why is this?
You can't store a video in a BitmapImage; videos should be stored in a StorageFile.
In your resource dictionary, use:
<MediaElement x:Key="Video1" Source="ms-appx:///Assets/Videos/Video.mp4"/>
and then on the page:
<MediaElement Source="{StaticResource Video1}"/>
Remember to start with AutoPlay="True".
I need to capture an image from my viewer, do some post-processing, and display it back on the viewer.
Right now I am more interested in the first part: capturing an image from the viewer.
While going through OSG I came across ScreenCaptureHandler,
but I am not able to get an image out of it.
I am still working on it, but in case any of you know another way it can be done, or have an example of ScreenCaptureHandler you can share, please do.
To capture the rendered view into an image I use a custom osg::Camera::DrawCallback.
To capture the view at any point: set the draw callback on the camera, force a rendering traversal, then restore a NULL callback.
Notice that the following code is part of a member function of a custom viewer (that's probably not your case):
// Install the capture callback, force one rendering traversal, then remove it.
osgViewer::View::getCamera()->setFinalDrawCallback(new ViewCaptureCallback(img));
osgViewer::Viewer::renderingTraversals();
osgViewer::View::getCamera()->setFinalDrawCallback(NULL);
The ViewCaptureCallback basically uses image->readPixels() to read from the backbuffer.
glReadBuffer( GL_BACK ); // capture what was just rendered to the back buffer
osg::GraphicsContext* gc = renderInfo.getState()->getGraphicsContext();
// Here you should work out the backbuffer's size and pixel format
image->readPixels(0, 0, gc->getTraits()->width, gc->getTraits()->height, pixelFormat, GL_UNSIGNED_BYTE);
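For context, here is a sketch of what the complete callback class might look like. The class name and the readPixels call come from the snippets above; the hard-coded GL_RGB format is an assumption, so in practice derive the real format from the graphics context's traits.

#include <osg/Camera>
#include <osg/GL>
#include <osg/GraphicsContext>
#include <osg/Image>
#include <osg/RenderInfo>
#include <osg/ref_ptr>

class ViewCaptureCallback : public osg::Camera::DrawCallback
{
public:
    explicit ViewCaptureCallback(osg::Image* img) : _image(img) {}

    virtual void operator()(osg::RenderInfo& renderInfo) const
    {
        glReadBuffer(GL_BACK); // read from the freshly rendered back buffer
        osg::GraphicsContext* gc = renderInfo.getState()->getGraphicsContext();
        // GL_RGB is an assumption; inspect gc->getTraits() for the real format
        _image->readPixels(0, 0,
                           gc->getTraits()->width, gc->getTraits()->height,
                           GL_RGB, GL_UNSIGNED_BYTE);
    }

private:
    osg::ref_ptr<osg::Image> _image;
};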
Hope it helps
I am trying to develop a wireless phone projector, wherein I will show the phone screen on a projector using a projector-connected PC.
I am a bit confused about how to capture a screenshot of a running application in J2ME.
Can you help?
I just want to capture a screenshot in J2ME.
I'm not really sure what you want to do, but if you are thinking of a way for your app to take a screenshot of its own screen, then what I can say is that you both can and cannot do it. Why can't you? Say you're using a Canvas to create your screen: there isn't a way of converting the Canvas itself to an Image; the Canvas is limited to drawing itself on the phone screen. But, as I said, you can also create a screenshot of your app's screen. What you need is an Image object drawn over your Canvas. Why an Image? Because an Image object can be converted to an image file, and that image file will be your screenshot. Of course, something has to dynamically create the image source for the Image object on the canvas.
Image myScreen = Image.createImage(createScreen());
A method that creates the screen:
InputStream createScreen() {
    // Dynamically create the image data for the screen, e.g. return a
    // ByteArrayInputStream over encoded image bytes.
    return null; // placeholder
}
You can then use myScreen as your screenshot. The drawback here is that rendering is quite slow; this is possible, but I think it is kind of difficult to implement.
With this code snippet you can take a "screenshot" of the Canvases in your app:
public Image getScreenShot() {
    // Create a mutable off-screen image the size of the canvas
    Image screenshot = Image.createImage(getWidth(), getHeight());
    Graphics g = screenshot.getGraphics();
    paint(g); // let the canvas paint itself into the off-screen image
    // Return an immutable copy of the result
    return Image.createImage(screenshot);
}
Add getScreenShot() to any Canvas that you want a "screenshot" of. Then you can get its RGB data, convert it to a byte[], and send it over the network.
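A minimal sketch of that last step, assuming MIDP 2.0 (the toBytes helper and the plain ARGB byte packing are just illustrations, not a fixed wire format); it would live in the same class, using javax.microedition.lcdui.Image:

// Convert a screenshot Image into a byte[] of ARGB pixel data (MIDP 2.0).
byte[] toBytes(Image img) {
    int w = img.getWidth(), h = img.getHeight();
    int[] argb = new int[w * h];
    img.getRGB(argb, 0, w, 0, 0, w, h); // offset 0, scanlength = width
    byte[] out = new byte[argb.length * 4];
    for (int i = 0; i < argb.length; i++) {
        out[4 * i]     = (byte) (argb[i] >>> 24); // alpha
        out[4 * i + 1] = (byte) (argb[i] >>> 16); // red
        out[4 * i + 2] = (byte) (argb[i] >>> 8);  // green
        out[4 * i + 3] = (byte)  argb[i];         // blue
    }
    return out;
}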
References:
developer.nokia