Saving the stream using Intel RealSense - visual-c++

I'm new to Intel RealSense. I want to learn how to save the color and depth streams to bitmaps, using C++ as my language. I have learned that there is a ToBitmap() function, but it is only available in C#.
So I wanted to know: is there any method or function that will help me save the streams?
Thanks in advance.

I'm also working my way through this. It seems that the only option is to do it manually: we need to get ImageData from PXCImage. The actual pixel data is stored in ImageData.planes, but I still don't fully understand how it's organized.
Here you can find an example of getting depth data: https://software.intel.com/en-us/articles/dipping-into-the-intel-realsense-raw-data-stream?language=en
But I still have no idea what the pitches are, or how the data inside planes is organized.
Here a kind of reverse process is described: https://software.intel.com/en-us/forums/intel-perceptual-computing-sdk/topic/332718
I would be glad if you're able to get some insight from this information, and obviously glad if you've discovered some insight you can share :).
UPD: Here is something that looks like what we need. I haven't worked with it yet, but it sheds some light on the internal organization of planes[0]: https://software.intel.com/en-us/forums/intel-perceptual-computing-sdk/topic/514663
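For illustration, here is a minimal sketch of how that layout is typically read. This is my own sketch, assuming the SDK's AcquireAccess/ReleaseAccess API and 16-bit depth samples, so treat the details as assumptions rather than the documented behavior:

#include <cstdint>

// planes[0] points at the first row of pixels; pitches[0] is the stride of
// one row in bytes, including any padding the SDK may add.
void ReadDepthRows(PXCImage* depthImage) {
    PXCImage::ImageData data = {};
    if (depthImage->AcquireAccess(PXCImage::ACCESS_READ,
                                  PXCImage::PIXEL_FORMAT_DEPTH, &data) < PXC_STATUS_NO_ERROR)
        return;
    PXCImage::ImageInfo info = depthImage->QueryInfo();
    for (int y = 0; y < info.height; ++y) {
        // Advance by pitches[0] bytes per row, then read 16-bit samples.
        auto row = reinterpret_cast<const uint16_t*>(data.planes[0] + y * data.pitches[0]);
        for (int x = 0; x < info.width; ++x) {
            uint16_t depthMillimeters = row[x];  // depth value at pixel (x, y)
            (void)depthMillimeters;              // ... process the sample here ...
        }
    }
    depthImage->ReleaseAccess(&data);
}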
UPD2: To add some completeness to the answer:
You can then create a GDI+ bitmap from the data in ImageData:
auto colorData = PXCImage::ImageData();
if (image->AcquireAccess(PXCImage::ACCESS_READ, PXCImage::PIXEL_FORMAT_RGB24, &colorData) >= PXC_STATUS_NO_ERROR) {
    auto colorInfo = image->QueryInfo();
    // pitches[0] is the length of one row in bytes; planes[0] points at the raw RGB24 pixels.
    auto colorPitch = colorData.pitches[0] / sizeof(pxcBYTE);
    Gdiplus::Bitmap tBitMap(colorInfo.width, colorInfo.height, colorPitch,
                            PixelFormat24bppRGB, colorData.planes[0]);
    // ... use or save tBitMap while the access is still held ...
    image->ReleaseAccess(&colorData);
}
Gdiplus::Bitmap is a subclass of Gdiplus::Image (https://msdn.microsoft.com/en-us/library/windows/desktop/ms534462(v=vs.85).aspx), and an Image can be saved to a file in various formats.
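To write that bitmap to disk, here is a hedged sketch using the standard GDI+ encoder lookup from the MSDN docs (the helper and file names are my own, for illustration):

#include <windows.h>
#include <gdiplus.h>
#include <cstdlib>
#include <cwchar>

// Classic GDI+ helper: find the CLSID of an image encoder by MIME type.
int GetEncoderClsid(const WCHAR* format, CLSID* pClsid) {
    UINT num = 0, size = 0;
    Gdiplus::GetImageEncodersSize(&num, &size);
    if (size == 0) return -1;
    auto* codecs = static_cast<Gdiplus::ImageCodecInfo*>(malloc(size));
    Gdiplus::GetImageEncoders(num, size, codecs);
    for (UINT i = 0; i < num; ++i) {
        if (wcscmp(codecs[i].MimeType, format) == 0) {
            *pClsid = codecs[i].Clsid;
            free(codecs);
            return static_cast<int>(i);
        }
    }
    free(codecs);
    return -1;
}

// Usage, right after constructing tBitMap (while AcquireAccess is still held):
CLSID pngClsid;
if (GetEncoderClsid(L"image/png", &pngClsid) >= 0)
    tBitMap.Save(L"color.png", &pngClsid, nullptr);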

Related

Weights&Biases Sweep - Why might runs be overwriting each other?

I am new to ML and W&B, and I am trying to use W&B to do a hyperparameter sweep. I created a few sweeps and when I run them I get a bunch of new runs in my project (as I would expect):
[Image: new runs being created]
However, all of the new runs say "no metrics logged yet" (image); instead, all of their metrics are going into a single run (the one with the green dot in the image above). This makes the sweep unusable, of course, since the metrics, images, and graph data for many different runs are all being crammed into one run.
Does anyone have experience with W&B? I feel like this is an issue that should be relatively straightforward to solve, like something in the W&B config that I need to change.
Any help would be appreciated. I didn't give too many details because I am hoping this is relatively straightforward, but if there are any specific questions I'd be happy to provide more info. The basics:
Using Google Colab for training
Project is a PyTorch-YOLOv3 object detection model that is based on this: https://github.com/ultralytics/yolov3
Thanks! 😊
Update: I think I figured it out.
I was using the train.py code from the repository I linked in the question, and part of that code specifies the id of the run (used for resuming). With resume="allow" and a fixed id, every sweep run resumes that same run instead of starting its own.
I removed the part where it specifies the id, and it is now working :)
Old code:
wandb_run = wandb.init(config=opt, resume="allow",
                       project='YOLOv3' if opt.project == 'runs/train' else Path(opt.project).stem,
                       name=save_dir.stem,
                       id=ckpt.get('wandb_id') if 'ckpt' in locals() else None)
New code:
wandb_run = wandb.init(config=opt, resume="allow",
                       project='YOLOv3' if opt.project == 'runs/train' else Path(opt.project).stem,
                       name=save_dir.stem)

Skia buggy color blending

I'm using Skia m62 with the OpenGL backend and getting a glitch while rendering a PNG file.
To create SkBitmap I'm using the following code:
const auto codec = SkCodec::MakeFromStream(SkStream::MakeFromFile("test.png"));
const SkImageInfo imageInfo = codec->getInfo().makeColorType(kN32_SkColorType);
SkBitmap bm;
bm.allocPixels(imageInfo);
codec->getPixels(imageInfo, bm.getPixels(), bm.rowBytes());
The rest of the code is a slightly modified version (I cannot find the gl/GrGLUtil.h header) of the example found in the Skia sources: https://github.com/google/skia/blob/master/example/SkiaSDLExample.cpp
The library is built with arguments: skia_use_freetype=true skia_use_system_freetype2=false skia_use_libpng=true skia_use_system_libpng=false skia_use_expat=false skia_use_icu=false skia_use_libjpeg_turbo=false skia_use_libwebp=false skia_use_piex=false skia_use_sfntly=false skia_use_zlib=true skia_use_system_zlib=false is_official_build=true target_os="mac" target_cpu="x86_64"
Here is the FULL EXAMPLE on GitHub illustrating the issue. It contains the PNG under observation and a full setup to run on macOS x86_64.
UPD: Filed a bug in Skia tracker: https://bugs.chromium.org/p/skia/issues/detail?id=7361
I'll quote the answer from Skia's bugtracker:
Skia's GPU backend doesn't support drawing unpremultiplied images, but that is the natural state of most encoded images, including all .pngs. What you're seeing is an unpremultiplied bitmap being drawn as if it were a premultiplied bitmap. Pretty isn't it?
There are a couple easy ways to fix this. The simplest way to adapt the code you have there is to make sure to ask SkCodec for premultiplied output, e.g.
const SkImageInfo imageInfo = codec->getInfo().makeColorType(kN32_SkColorType)
.makeAlphaType(kPremul_SkAlphaType);
Or, you can use this simpler way to decode and draw:
sk_sp<SkImage> img = SkImage::MakeFromEncoded(SkData::MakeFromFileName("test.png"));
...
canvas->drawImage(img, ...);
That way lets SkImage::MakeFromEncoded() make all the choices about format for you.
This solves the issue.
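For background (my own illustration, not part of the quoted answer): "premultiplied" simply means each color channel has already been scaled by the pixel's alpha, which is the form Skia's GPU backend expects. A minimal sketch:

#include <cstdint>

struct RGBA { uint8_t r, g, b, a; };

// Scale the color channels by alpha. Drawing an unpremultiplied pixel as if
// it were premultiplied produces exactly the kind of blending glitch above.
RGBA Premultiply(RGBA px) {
    px.r = static_cast<uint8_t>(px.r * px.a / 255);
    px.g = static_cast<uint8_t>(px.g * px.a / 255);
    px.b = static_cast<uint8_t>(px.b * px.a / 255);
    return px;
}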

Kinect: Extracting audio and face tracking out of xed files

I have a database with some .xed files recorded with a Kinect that I need for my current audio-visual speech recognizer.
First, I would like to extract the audio files out of the xed files. Is there a simple converter for this?
I also want to extract some face tracking features. I have already found an application that does this in real time (http://msdn.microsoft.com/en-us/library/jj131044 and http://nsmoly.wordpress.com/2012/05/21/face-tracking-sdk-in-kinect-for-windows-1-5/). How do I use this with my previously recorded xed files?
Kind regards
For extracting the audio you can use Kinect Studio to replay the recorded data. Since it works as a server, it can act as the input to your own C# solution.
Add the code you can find in the AudioBasis sample, related to the extraction of audio beams. In the Reader_AudioFrameArrived handler you will find lines like the following:
for (int i = 0; i < this.audioBuffer.Length; i += BytesPerSample)
{
    // Extract the 32-bit IEEE float sample from the byte array
    float audioSample = BitConverter.ToSingle(this.audioBuffer, i);
    // ... store audioSample here ...
}
You can collect each audioSample in a List<float> and then write the samples to a file.
Then run your solution, connect Kinect Studio, and play back your data. You should see the recorded data arriving in your solution.
It is not the most efficient method, but it works.
Hope it helps you!

Core Data - NSURL command and momd

I'm working on a project that uses Core Data, and I can't seem to find an adequate explanation of why the following line of code in my program always returns nil for modelURL.
NSURL *modelURL = [[NSBundle mainBundle] URLForResource:@"CoreDataBooks" withExtension:@"momd"];
This example is straight out of Apple's sample code and it actually works in their program, but I can't get it to work in mine.
Questions:
1) Does something have to be in place before I try to implement this? I notice the Apple solution has a "CoreDataBooks.DCBStore" file that I do not have. I've tried a number of things to create this... no luck.
2) momd: I've read a lot about this and it seems it's quite a bit different from "mom". I understand the "d" gives the dataset additional capabilities, and some answers posted here say to use "mom" rather than "momd" without a great explanation of why. In any case, that doesn't work either.
As always, I appreciate your help!
Glenn
So -[NSBundle URLForResource:…] is returning nil. That's supposed to mean the requested resource doesn't exist.
Fire up the Finder and have a look inside the bundle to confirm whether the file really doesn't exist. Is there actually a momd file (or similar) there, but under a different name? If so, you'll probably want to adjust your code to match.
If no such files exist, you probably need to add your Core Data model to your build target.

Decoding protobuf data from plCrashReporter

I'm integrating plCrashReporter into one of my apps to add crash reporting functionality. Essentially, if I detect a crash I gather the crash report as NSData...
NSData *crashData;
NSError *error;
crashData = [crashReporter loadPendingCrashReportDataAndReturnError: &error];
crashData now contains the entire report. I can load this crashData into a PLCrashReport and read parameters out of it, but I'd rather just send the whole blob to my servers and look at it there. When the data reaches me, it looks like a lot of this:
706c6372 61736801 0a110801 1205342e 322e3118 02209184 82e80412
1b0a1263 6f6d2e73 6d756c65 2e545061 696e4465 76120531 2e362e32
1adb0208 00120618 d4a5f59d 03120618 bda5f59d 03120418 b5b96c12
0618df95 b09d0312 0618938b 9f9a0312 0618f9bb f68d0312 0618cdbc
f68d0312
I haven't managed to get anything meaningful out of this. I've tried using plcrashutil, but with no luck...
./plcrashutil convert --format=iphone example.plcrash
Could not decode crash log: Could not decode invalid crash log header
I also tried using Google's protobuf but was unable to get it running.
I do have a dSYM file but am not even at the point of trying to symbolicate this yet.
I'm running Mac OS X 10.6.5.
Any advice would be greatly, greatly appreciated. Thanks!
Got this sorted out! The report gets sent through as hex, but converting it back to binary lets you run it through plcrashutil nicely. Here is my HexToBinary.cpp implementation.
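The original HexToBinary.cpp isn't reproduced here, but a minimal sketch of such a conversion could look like this (the file handling and names are my own illustration, not the author's code):

#include <cctype>
#include <fstream>
#include <string>

// Read an ASCII hex dump, skip whitespace, and emit the raw bytes, so that
// plcrashutil can then decode the resulting .plcrash file.
int main(int argc, char** argv) {
    if (argc != 3) return 1;  // usage: hex2bin input.hex output.plcrash
    std::ifstream in(argv[1]);
    std::ofstream out(argv[2], std::ios::binary);
    std::string pair;
    char c;
    while (in.get(c)) {
        if (std::isxdigit(static_cast<unsigned char>(c))) pair += c;
        if (pair.size() == 2) {  // two hex digits make one byte
            out.put(static_cast<char>(std::stoi(pair, nullptr, 16)));
            pair.clear();
        }
    }
    return 0;
}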
