I am new to DirectShow. What I am trying to do is use one IGraphBuilder object to play both a silent AVI video and a separate WAV file at the same time. I can't merge the two into one video, but I would like it to appear as if they were one.
Is there any way to use one set of filters to run an AVI and a WAV file concurrently?
Thanks!
You can achieve this either way: by adding two separate chains into the same graph, or by using two separate filter graphs.
The first method gives you a single graph containing both chains, one rendering the AVI and the other the WAV.
The second method gives you two independent graphs, one per file.
The first approach has the advantage that video and audio stay in sync (they share one clock) and you control playback as a single object. The other method's advantage is the ability to control video and audio separately, including stopping or changing files independently of each other.
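As a minimal sketch of the single-graph approach (file names are placeholders, error handling trimmed): calling RenderFile twice on the same IGraphBuilder builds both chains in one graph, and a single IMediaControl drives them together.

```cpp
#include <dshow.h>
#pragma comment(lib, "strmiids.lib")

int main()
{
    CoInitialize(NULL);

    IGraphBuilder* pGraph = NULL;
    CoCreateInstance(CLSID_FilterGraph, NULL, CLSCTX_INPROC_SERVER,
                     IID_IGraphBuilder, (void**)&pGraph);

    // Each RenderFile call builds its own chain inside the same graph,
    // so both chains share one clock and one run/pause/stop state.
    pGraph->RenderFile(L"silent.avi", NULL);     // placeholder file name
    pGraph->RenderFile(L"soundtrack.wav", NULL); // placeholder file name

    IMediaControl* pControl = NULL;
    pGraph->QueryInterface(IID_IMediaControl, (void**)&pControl);
    pControl->Run();

    IMediaEvent* pEvent = NULL;
    pGraph->QueryInterface(IID_IMediaEvent, (void**)&pEvent);
    long evCode = 0;
    pEvent->WaitForCompletion(INFINITE, &evCode);  // block until playback ends

    pEvent->Release();
    pControl->Release();
    pGraph->Release();
    CoUninitialize();
    return 0;
}
```

For the two-graph variant you would instead create two IGraphBuilder instances, call RenderFile once on each, and drive each one through its own IMediaControl.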
I have built a loop station in JSyn. It allows you to record and play back samples. By playing multiple samples you can layer up sounds (e.g. one percussion sample, one melody, etc.).
JSyn allows me to connect each of the sample players directly to my LineOut, where the signals are mixed automatically. But now I would like to record the sound, just as the user hears it, to a .wav file, and I am not sure what I should connect the input port of the recorder to.
What is the smartest way to connect the audio output of all samples to the WaveRecorder?
In other words: In the Programmer's Guide there is an example for this but I am not sure how I create the "finalMix" used there.
Rather than using multiple LineOuts, just use one LineOut.
You can mix all of your signals together using a chain of MultiplyAdd units.
http://www.softsynth.com/jsyn/docs/javadocs/com/jsyn/unitgen/MultiplyAdd.html
Or you can use a Mixer unit.
http://www.softsynth.com/jsyn/docs/javadocs/com/jsyn/unitgen/MixerStereoRamped.html
Then connect the mix to your WaveRecorder and to your single LineOut.
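Here is a minimal sketch of the MultiplyAdd approach, with two SineOscillators standing in for your sample players (names, gains and the output file are illustrative):

```java
import java.io.File;

import com.jsyn.JSyn;
import com.jsyn.Synthesizer;
import com.jsyn.unitgen.LineOut;
import com.jsyn.unitgen.MultiplyAdd;
import com.jsyn.unitgen.SineOscillator;
import com.jsyn.util.WaveRecorder;

public class RecordMixSketch {
    public static void main(String[] args) throws Exception {
        Synthesizer synth = JSyn.createSynthesizer();

        // Stand-ins for your sample players.
        SineOscillator source1 = new SineOscillator();
        SineOscillator source2 = new SineOscillator();
        source1.frequency.set(440.0);
        source2.frequency.set(660.0);

        // finalMix = source1 * 0.5 + (source2 * 0.5)
        MultiplyAdd scale2 = new MultiplyAdd();   // source2 * 0.5 + 0
        MultiplyAdd finalMix = new MultiplyAdd(); // source1 * 0.5 + scaled source2
        LineOut lineOut = new LineOut();
        WaveRecorder recorder = new WaveRecorder(synth, new File("mix.wav"));

        synth.add(source1);
        synth.add(source2);
        synth.add(scale2);
        synth.add(finalMix);
        synth.add(lineOut);

        source2.output.connect(scale2.inputA);
        scale2.inputB.set(0.5);
        scale2.inputC.set(0.0);

        source1.output.connect(finalMix.inputA);
        finalMix.inputB.set(0.5);
        scale2.output.connect(finalMix.inputC);

        // The same mix goes to both the speakers and the recorder.
        finalMix.output.connect(0, lineOut.input, 0);
        finalMix.output.connect(0, lineOut.input, 1);
        finalMix.output.connect(0, recorder.getInput(), 0);
        finalMix.output.connect(0, recorder.getInput(), 1);

        synth.start();
        lineOut.start();
        recorder.start();
        synth.sleepFor(5.0);   // record five seconds
        recorder.stop();
        recorder.close();
        synth.stop();
    }
}
```

The Mixer-based version is the same idea: each sample player connects to one input of the mixer, and the mixer's output becomes the finalMix feeding both the LineOut and the WaveRecorder.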
I'm doing research for a project that's about to start.
We will be supplied with hundreds of 30-second video files that the end user can select (via various filters); we then want to play them back as if they were one video.
It seems that Media Source Extensions with MPEG-DASH is the way to go.
I feel like it could possibly be solved in the following way, but I'd like to ask anyone who has done similar things whether this sounds right.
My theory:
Create an MPD for each video (via MP4Box or a similar tool)
The user makes selections (each of which has an MPD)
Read each MPD and get its <Period> elements (most likely only one in each)
Create a new MPD file and insert all the <Period> elements into it in order.
Caveats
I imagine this may be problematic if the videos were all different sizes, formats, etc., but in this case we can assume consistency.
So my question to anyone with MPEG-DASH / MPD experience: does this sound right, or is there a better way to achieve this?
Sounds right; multi-period is the only feasible way in my opinion.
Ideally you would encode all the videos with the same settings to provide the end user with a consistent experience. From a technical point of view, however, it shouldn't be a problem if the quality or even the aspect ratio changes from one period to another. You'll need a player which supports multi-period playback, such as dash.js or Bitmovin.
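For illustration, the stitched-together MPD might look roughly like this (IDs, paths and durations are made up; the actual <Period> contents would be copied from the MP4Box-generated MPDs):

```xml
<MPD xmlns="urn:mpeg:dash:schema:mpd:2011" type="static"
     profiles="urn:mpeg:dash:profile:isoff-on-demand:2011"
     mediaPresentationDuration="PT1M0S" minBufferTime="PT2S">

  <!-- Period copied from clip_A.mpd -->
  <Period id="clipA" duration="PT30S">
    <BaseURL>clip_A/</BaseURL>
    <!-- AdaptationSet / Representation elements as generated by MP4Box -->
  </Period>

  <!-- Period copied from clip_B.mpd -->
  <Period id="clipB" duration="PT30S">
    <BaseURL>clip_B/</BaseURL>
    <!-- AdaptationSet / Representation elements as generated by MP4Box -->
  </Period>

</MPD>
```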
I wonder if there is an obvious and elegant way to add additional data to a JPEG while keeping it readable for standard image viewers. More precisely, I would like to embed a picture of the backside of a (scanned) photo into it. Old photos often have personal messages written on the back, be it the date or some notes. Sure, you could use EXIF and add some text, but an actual image of the back is preferable.
Sure, I could also save two files, xyz.jpg and xyz_back.jpg, or arrange both images side by side so both are always visible in one picture, but that's not what I'm looking for.
It is possible and has been done: on the Samsung Note 2 and 3 you can add handwritten notes to the photos you've taken as an image, and some smartphones let you embed voice recordings in the image files while preserving the readability of those files on other devices.
There are two ways you can do this.
1) Use an Application Marker (APP0–APPF), which is the preferred method
2) Use a Comment Marker (COM)
If you use an APPn marker:
1) Do not make it the first APPn in the file. Every known JPEG file format expects some kind of format-specific APPn marker right after the SOI marker. Make sure your marker is not there.
2) Place a unique application identifier (null terminated string) at the start of the data (something done by convention).
All kinds of applications store additional data this way.
One issue is that the length field is only 16 bits (big-endian format). If you have a lot of data, you will have to split it across multiple markers.
If you use a COM marker, make sure it comes after the first APPn marker in the file. However, I would discourage using a COM marker for something like this as it might choke applications that try to display the contents.
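As a rough sketch of the APPn approach (the choice of APP15, the "BACKSIDE" identifier and the file names are invented for illustration; a real application would define its own convention):

```cpp
#include <algorithm>
#include <cstdint>
#include <fstream>
#include <string>
#include <vector>

static std::vector<uint8_t> readFile(const std::string& path) {
    std::ifstream in(path, std::ios::binary);
    return std::vector<uint8_t>((std::istreambuf_iterator<char>(in)),
                                std::istreambuf_iterator<char>());
}

int main() {
    std::vector<uint8_t> front = readFile("front.jpg");
    std::vector<uint8_t> back  = readFile("back.jpg");

    // Skip SOI (FF D8), then skip the first APPn segment (e.g. JFIF or Exif)
    // so our segments come after it, as recommended above.
    size_t pos = 2;
    if (pos + 4 <= front.size() && front[pos] == 0xFF &&
        front[pos + 1] >= 0xE0 && front[pos + 1] <= 0xEF) {
        uint16_t len = static_cast<uint16_t>((front[pos + 2] << 8) | front[pos + 3]);
        pos += 2 + len;                      // 2 marker bytes + segment (incl. length field)
    }

    const std::string id = "BACKSIDE";       // null-terminated application identifier
    const size_t maxPayload = 65535 - 2 - (id.size() + 1);  // 16-bit length limit per segment

    std::vector<uint8_t> out(front.begin(), front.begin() + pos);
    for (size_t off = 0; off < back.size(); off += maxPayload) {
        size_t chunk = std::min(maxPayload, back.size() - off);
        uint16_t segLen = static_cast<uint16_t>(2 + id.size() + 1 + chunk);
        out.push_back(0xFF);
        out.push_back(0xEF);                              // APP15 marker
        out.push_back(static_cast<uint8_t>(segLen >> 8)); // length, big-endian
        out.push_back(static_cast<uint8_t>(segLen & 0xFF));
        out.insert(out.end(), id.begin(), id.end());
        out.push_back(0x00);                              // terminate identifier
        out.insert(out.end(), back.begin() + off, back.begin() + off + chunk);
    }
    out.insert(out.end(), front.begin() + pos, front.end());

    std::ofstream outFile("front_with_back.jpg", std::ios::binary);
    outFile.write(reinterpret_cast<const char*>(out.data()),
                  static_cast<std::streamsize>(out.size()));
    return 0;
}
```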
An interesting question. There are file formats that support multiple images per file (multipage TIFF comes to mind) but JPEG doesn't support this natively.
One feature of the JPEG file format is the concept of APP segments. These are regions of the JPEG file that can contain arbitrary information (as a sequence of bytes). Exif is actually stored in one of these segments, and is identified by a preamble.
Take a look at this page: http://www.sno.phy.queensu.ca/~phil/exiftool/#JPEG
You'll see many segments there that start with APP such as APP0 (which can store JFIF data), APP1 (which can contain Exif) and so forth.
There's nothing stopping you storing data in one of these segments. Conformant JPEG readers will ignore this unrecognised data, but you could write software to store/retrieve data from within there. It's even possible to embed another JPEG file within such a segment! There's no precedent I know for doing this however.
Another option would be to include the second image as the thumbnail of the first. Thumbnails are normally very small, but nothing strictly requires that. Some software might replace or remove it, though.
In general I think using two files and a naming convention would be the simplest and least confusing, but you do have options.
My goal is to show multiple (small) panes of video on-screen simultaneously.
I would prefer to use the hardware scalar. This is currently working well for a single video on a single surface. For multiple streams it appears multiple SurfaceViews are needed - I don't see a way to use the hardware scaler to blit multiple images into different parts of the same Surface. What's the best way to lock/blit image pixels to these surfaces?
ANativeWindow_unlockAndPost causes a wait-for-vsync + swap (I think?), so I can't call this per-SurfaceView in the same update cycle (well I can, but I get horrible jittering).
One alternative is to use a separate render thread per SurfaceView. Does this seem like a sane avenue to pursue? Are there any other ways to update multiple SurfaceViews with a single wait-for-vsync + swap?
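For reference, a minimal sketch of the per-SurfaceView render-thread idea mentioned above (the pixel fill is a placeholder, and each ANativeWindow is assumed to have already been obtained via ANativeWindow_fromSurface on the JNI side):

```cpp
#include <atomic>
#include <cstdint>
#include <cstring>
#include <thread>
#include <vector>

#include <android/native_window.h>

// One loop per SurfaceView: each thread blocks in unlockAndPost on its own,
// so one surface waiting for vsync does not stall the others.
static void renderLoop(ANativeWindow* window, std::atomic<bool>* running) {
    while (running->load()) {
        ANativeWindow_Buffer buffer;
        if (ANativeWindow_lock(window, &buffer, nullptr) != 0) {
            continue;
        }
        // Placeholder: copy the next decoded frame into buffer.bits here,
        // honouring buffer.stride (given in pixels, not bytes).
        for (int32_t y = 0; y < buffer.height; ++y) {
            auto* row = static_cast<uint32_t*>(buffer.bits) + y * buffer.stride;
            std::memset(row, 0x40, buffer.width * sizeof(uint32_t));
        }
        ANativeWindow_unlockAndPost(window);  // queue the buffer for display
    }
    ANativeWindow_release(window);
}

// Called once after all the ANativeWindows have been acquired via JNI.
static std::vector<std::thread> startRenderThreads(
        const std::vector<ANativeWindow*>& windows, std::atomic<bool>* running) {
    std::vector<std::thread> threads;
    for (ANativeWindow* window : windows) {
        threads.emplace_back(renderLoop, window, running);
    }
    return threads;
}
```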
Where else is the stream number used other than in these two places: GetStreamSource and SetStreamSource?
Using multiple streams allows you to combine together vertex component data from different sources. This can be useful when you have different rendering methods, each of which requires different sets of vertex components. Instead of always sending the entire set of data, you can separate it into streams and only use the ones you need. See this chapter from GPU Gems 2 for an example and sample code. It can also be useful for effects such as morphing.
When calling CreateVertexDeclaration, you specify the stream number in the D3DVERTEXELEMENT9 elements to determine which stream each vertex component comes from.
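A short sketch showing both places the stream number appears, assuming positions in stream 0 and a second position set (e.g. a morph target) in stream 1; the buffer names are illustrative and error handling is omitted:

```cpp
#include <d3d9.h>

void setupTwoStreams(IDirect3DDevice9* device,
                     IDirect3DVertexBuffer9* positions,     // stream 0
                     IDirect3DVertexBuffer9* morphTargets)  // stream 1
{
    // The first field of each element is the stream number the component
    // comes from -- this is the other place the stream number appears.
    D3DVERTEXELEMENT9 elements[] = {
        { 0, 0, D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_POSITION, 0 },
        { 1, 0, D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_POSITION, 1 },
        D3DDECL_END()
    };

    IDirect3DVertexDeclaration9* decl = nullptr;
    device->CreateVertexDeclaration(elements, &decl);
    device->SetVertexDeclaration(decl);

    // The same stream numbers again when binding the buffers.
    device->SetStreamSource(0, positions,    0, 3 * sizeof(float));
    device->SetStreamSource(1, morphTargets, 0, 3 * sizeof(float));

    decl->Release();
}
```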