I have a database with some .xed files recorded with a Kinect that I need for my current audio-visual speech recognizer.
First, I would like to extract the audio files out of the xed files. Is there a simple converter for this?
Also, I want to extract some face recognition features. I have already found an application that does this in real time (http://msdn.microsoft.com/en-us/library/jj131044 and http://nsmoly.wordpress.com/2012/05/21/face-tracking-sdk-in-kinect-for-windows-1-5/). How do I use this with my previously recorded xed files?
Kind regards
For extracting the audio you can use Kinect Studio to play back the recorded data. Since it works as a server, it can act as the input to your own C# solution.
Add the code from the AudioBasics sample that extracts the audio beams. In the function Reader_AudioFrameArrived you will find lines like the following:
for (int i = 0; i < this.audioBuffer.Length; i += BytesPerSample)
{
    // Extract the 32-bit IEEE float sample from the byte array
    float audioSample = BitConverter.ToSingle(this.audioBuffer, i);
    // ... process or store audioSample ...
}
You can save each audioSample in a List and then write the list to a file.
Then, run your solution. Connect Kinect Studio and play your data. You should see the recorded data in the solution.
It is not the most efficient method, but it works.
Hope it helps you!
I am trying to create snapshots from a video stream using the "scene" video filter. I'm on Windows for now, but this will run on Linux. I don't want the video output window to display. I can get the scenes to generate if I don't use the --vout=dummy option; when I include that option, no scenes are generated.
This example on the Wiki indicates that it's possible. What am I doing wrong?
Here is the line of code from the LibVLCSharp code:
LibVLC libVLC = new LibVLC("--no-audio", "--no-spu", "--vout=dummy", "--video-filter=scene", "--scene-format=jpeg", "--scene-prefix=snap", "--scene-path=C:\\temp\\", "--scene-ratio=100", $"--rtsp-user={rtspUser}", $"--rtsp-pwd={rtspPassword}");
For VLC 3, you will need to disable hardware acceleration which seems incompatible with the dummy vout.
In my tests, it was necessary to set this on the media rather than globally:
media.AddOption(":avcodec-hw=none");
I still get many "Too high level of recursion" errors, and for those, I suggest you open an issue on VideoLAN's trac.
I have just started to get back into C++ after years of using Perl, PHP, and assembler. I am trying to create a simple MFC program with Visual Studio 2017 and C++ to open binary files for viewing. I am trying to work within the code created by the wizard, and I have gotten stumped. I know this is probably not the best way of doing what I want, but I am learning.
Anyway, the code I am working on is:
void CAdamImageManagerDoc::Serialize(CArchive& ar)
{
    if (ar.IsStoring())
    {
        // TODO: add storing code here
    }
    else
    {
        // TODO: add loading code here
        char documentBuffer[1001] = { 0 };      // one extra byte keeps the buffer null-terminated
        UINT bytesRead = ar.Read(documentBuffer, 1000);
        documentBuffer[bytesRead] = '\0';
        AfxMessageBox(CString(documentBuffer)); // CString handles the ANSI/Unicode conversion
    }
}
This is called after you select a file using the standard MFC file-open dialog (OnFileOpen). What I am trying to figure out is:
how can I find out the size of the file referenced in the call?
how can I find out the name of the file referenced?
This is my first question on here in almost 10 years so please be gentle and not tell me how I didn't format the question properly or other things.
Use ar.GetFile()->GetFilePath() to get the complete file path to the file (reference)
Use ar.GetFile()->GetLength() to get the file size. (reference)
In general you decode the stream of a CArchive in the reverse order in which you wrote it.
So in most cases there is no need to know the size of the file. Serializing n elements is usually done with CObList or CObArray, or you write the size of a data block into the archive followed by the bytes. You decode the stream the same way:
if (ar.IsStoring())
{
    DWORD dwSize = m_lenData;
    ar << dwSize;                     // length first...
    ar.Write(documentBuffer, dwSize); // ...then the payload
}
else
{
    DWORD dwSize;
    ar >> dwSize;
    ar.Read(documentBuffer, dwSize);
}
If you look at how the MFC code serializes a CString or a CObArray, you will find the same pattern.
Note that in this case the file becomes a binary file; it is no longer just text.
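The same length-prefixed pattern can be sketched with plain iostreams, independent of MFC; WriteBlock and ReadBlock are hypothetical names used only for illustration:

```cpp
#include <cstdint>
#include <fstream>
#include <vector>

// Write a block as <length><bytes>, mirroring the CArchive pattern:
// the length goes first, the payload second.
void WriteBlock(std::ostream& out, const std::vector<char>& data) {
    uint32_t size = static_cast<uint32_t>(data.size());
    out.write(reinterpret_cast<const char*>(&size), sizeof(size));
    out.write(data.data(), size);
}

// Decode in the reverse order: read the length, then exactly that many bytes.
std::vector<char> ReadBlock(std::istream& in) {
    uint32_t size = 0;
    in.read(reinterpret_cast<char*>(&size), sizeof(size));
    std::vector<char> data(size);
    in.read(data.data(), size);
    return data;
}
```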
I am new to LiDAR technology. From the documentation I found that LiDAR data can be visualized using the VeloView software. But my aim is to create a 3D image from a .pcap file and process it further for object detection. Everything runs on Ubuntu 14.04.
Can anyone suggest a good approach?
Open 6 terminal tabs (Ctrl + Shift + t):
1st tab:
$ roscore
2nd tab (run your .pcap file):
$ rosrun velodyne_driver velodyne_node _model:=VLP16 _pcap:=/path/to/file.pcap
3rd tab: (create .bag file to visualize in Rviz)
$ rosrun rosbag record -O vlp_16.bag /velodyne_packets
4th tab (play the .bag file just created in the 3rd tab):
$ rosbag play vlp_16.bag
5th tab (convert velodyne_msgs/VelodyneScan to PointCloud2 and LaserScan topics to visualize in Rviz):
$ roslaunch velodyne_pointcloud VLP16_points.launch
6th tab (the fixed frame MUST be set to "velodyne"):
$ rosrun rviz rviz -f velodyne
In Rviz:
To visualize /scan topic:
Displays -> Add -> By topic -> LaserScan
To visualize velodyne_points topic:
Displays -> Add -> By topic -> PointCloud2
Enjoy!
I would suggest looking into using ROS and the Point Cloud Library; there is a lot of support for this kind of processing in those. For pulling in the data from the VLP-16 you could use this ROS package.
You may use the Point Cloud Library.
Compiling PCL is a bit tricky, but once you get it going, you can do quite a lot of point cloud analysis.
Follow the instructions here for the LiDAR HDL Grabber.
For your purpose, you will want to include <pcl/io/vlp_grabber.h> along with <pcl/io/hdl_grabber.h>. The vlp_grabber basically uses the same hdl_grabber but supplies the VLP calibration parameters. Also, in main, you will want to instantiate a pcl::VLPGrabber instead of a pcl::HDLGrabber.
These few changes alone may not be enough to get a fully functional grabber and viewer, but they are a start.
The example on PCL is for hdl_viewer_simple.cpp, but there is also a vlp_viewer.cpp located in visualization/tools/. Check that out.
This is not a complete answer, but it should give you a path to a solution if you want to use PCL.
I'm new to Intel RealSense. I want to learn how to save the color and depth streams to bitmaps, using C++ as my language. I have learned that there is a ToBitmap() function, but it is only available for C#.
So I wanted to know is there any method or any function that will help me in saving the streams.
Thanks in advance.
I'm also working my way through this. It seems that the only option is to do it manually: we need to get ImageData from the PXCImage. The actual data is stored in ImageData.planes, but I still don't understand how it's organized.
Here you can find an example of getting depth data: https://software.intel.com/en-us/articles/dipping-into-the-intel-realsense-raw-data-stream?language=en
But I still have no idea what the pitches are or how the data inside planes is organized.
Here a kind of reverse process is described: https://software.intel.com/en-us/forums/intel-perceptual-computing-sdk/topic/332718
I would be glad if you are able to get some insight from this information, and I would obviously be glad if you share any insight you discover :).
UPD: Here is something that looks like what we need, I haven't worked with it yet, but it sheds some light on internal organization of planes[0] https://software.intel.com/en-us/forums/intel-perceptual-computing-sdk/topic/514663
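On the pitches: as far as I understand it, a pitch (or stride) is the number of bytes from the start of one image row to the start of the next, and it can exceed width × bytes-per-pixel because rows may be padded. A small sketch of the usual addressing rule (my assumption about the layout, not official documentation):

```cpp
#include <cstddef>

// Byte offset of pixel (x, y) in a plane whose rows are `pitch` bytes apart.
// For a 24-bit RGB format, bytesPerPixel is 3; the pitch may include padding.
std::size_t PixelOffset(std::size_t x, std::size_t y,
                        std::size_t pitch, std::size_t bytesPerPixel) {
    return y * pitch + x * bytesPerPixel;
}
```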
UPD2: To add some completeness to the answer:
You can then create a GDI+ image from the data in ImageData:
auto colorData = PXCImage::ImageData();
if (image->AcquireAccess(PXCImage::ACCESS_READ, PXCImage::PIXEL_FORMAT_RGB24, &colorData) >= PXC_STATUS_NO_ERROR) {
    auto colorInfo = image->QueryInfo();
    auto colorPitch = colorData.pitches[0] / sizeof(pxcBYTE);
    auto baseColorAddress = colorData.planes[0]; // first plane holds the packed pixel data
    Gdiplus::Bitmap tBitMap(colorInfo.width, colorInfo.height, colorPitch, PixelFormat24bppRGB, baseColorAddress);
    // save or clone tBitMap before releasing access, since it points into the frame buffer
    image->ReleaseAccess(&colorData);
}
And Bitmap is a subclass of Image (https://msdn.microsoft.com/en-us/library/windows/desktop/ms534462(v=vs.85).aspx), so you can save the Image to a file in different formats.
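If you would rather avoid GDI+ entirely, a 24-bit BMP file is just a 54-byte header plus padded rows. A hedged sketch follows (WriteBmp24 is my own helper; it assumes tightly packed, top-down BGR input, which is what PIXEL_FORMAT_RGB24 provides as far as I know):

```cpp
#include <cstdint>
#include <fstream>
#include <string>
#include <vector>

// Write raw 24-bit BGR pixels (top-down, tightly packed) as a .bmp file.
// `bgr` must contain width * height * 3 bytes.
void WriteBmp24(const std::string& path, int32_t width, int32_t height,
                const std::vector<uint8_t>& bgr) {
    const uint32_t rowSize = (width * 3 + 3) & ~3u;   // BMP rows are padded to 4 bytes
    const uint32_t dataSize = rowSize * height;
    const uint32_t fileSize = 14 + 40 + dataSize;

    std::ofstream out(path, std::ios::binary);
    auto u16 = [&](uint16_t v) { out.write(reinterpret_cast<const char*>(&v), 2); };
    auto u32 = [&](uint32_t v) { out.write(reinterpret_cast<const char*>(&v), 4); };
    auto i32 = [&](int32_t v)  { out.write(reinterpret_cast<const char*>(&v), 4); };

    out.write("BM", 2); u32(fileSize); u32(0); u32(14 + 40); // BITMAPFILEHEADER
    u32(40); i32(width); i32(-height);   // negative height = top-down row order
    u16(1); u16(24); u32(0); u32(dataSize); i32(0); i32(0); u32(0); u32(0);

    const std::vector<uint8_t> pad(rowSize - width * 3, 0);
    for (int32_t y = 0; y < height; ++y) {
        out.write(reinterpret_cast<const char*>(&bgr[y * width * 3]), width * 3);
        if (!pad.empty())
            out.write(reinterpret_cast<const char*>(pad.data()), pad.size());
    }
}
```

The same loop could consume baseColorAddress directly, stepping by the pitch per row instead of width * 3.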
I'm integrating plCrashReporter into one of my apps to add crash reporting functionality. Essentially, if I detect a crash I gather the crash report as NSData...
NSData *crashData;
NSError *error;
crashData = [crashReporter loadPendingCrashReportDataAndReturnError: &error];
crashData now contains the entire report. I can parse crashData into a PLCrashReport structure and read out its parameters, but I'd rather just send the whole blob to my servers and look at it there. When the data reaches me, it looks like a lot of this:
706c6372 61736801 0a110801 1205342e 322e3118 02209184 82e80412
1b0a1263 6f6d2e73 6d756c65 2e545061 696e4465 76120531 2e362e32
1adb0208 00120618 d4a5f59d 03120618 bda5f59d 03120418 b5b96c12
0618df95 b09d0312 0618938b 9f9a0312 0618f9bb f68d0312 0618cdbc
f68d0312
I haven't had any luck getting anything meaningful out of this. I've tried using plcrashutil, without success:
./plcrashutil convert --format=iphone example.plcrash
Could not decode crash log: Could not decode invalid crash log header
I also tried using Google's protobuf but was unable to get it running.
I do have a dSYM file but am not even at the point of trying to symbolicate this yet.
I'm running Mac OS X 10.6.5.
Any advice would be greatly, greatly appreciated. Thanks!
Got this sorted out! The report is sent through as hex, but converting it back to binary lets you run it through plcrashutil nicely. Here is my HexToBinary.cpp implementation.
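For reference, the hex-to-binary step itself can be sketched in a few lines of standard C++; this HexToBinary is a hedged version of the idea, not the linked implementation, and it assumes well-formed hex input (whitespace and newlines are skipped):

```cpp
#include <cctype>
#include <string>
#include <vector>

// Convert a hex dump (whitespace and newlines ignored) back to raw bytes,
// so the result can be fed to plcrashutil.
std::vector<unsigned char> HexToBinary(const std::string& hex) {
    std::vector<unsigned char> out;
    int hi = -1; // pending high nibble, or -1 if none
    for (char c : hex) {
        if (std::isspace(static_cast<unsigned char>(c))) continue;
        int v = std::isdigit(static_cast<unsigned char>(c))
                    ? c - '0'
                    : std::tolower(static_cast<unsigned char>(c)) - 'a' + 10;
        if (hi < 0) { hi = v; }
        else { out.push_back(static_cast<unsigned char>(hi * 16 + v)); hi = -1; }
    }
    return out;
}
```

As a sanity check, the dump above starts with 706c6372, which decodes to the ASCII bytes "plcr" of the plcrash magic.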