Accessing an FLV clip directly from the Flash player's memory under Linux

I would like to access a video clip directly from the Flash plugin during an RTMP transmission and save it to disk. I'm wondering whether that is a sane idea and whether it would be possible to build a reliable solution.
I know I can read the raw memory of a process, but I'm not looking for "a value", I'm looking for a whole transmission. I can imagine that once an FLV frame has been read from an RTMP message and presented on screen, the plugin can free or overwrite it, and there won't be anything left to read (if I'm not fast enough). I'm also assuming that each chunk of video might be stored at a random address, making this even more difficult or impossible to do.
What would be the best Linux tool for "looking into memory" to investigate this problem?

Even if you access the process' memory, you cannot extract the RTMP stream from it. This is because you don't know which sections of memory are used by variables or Flash player internals and which region holds the RTMP stream. Also, I don't expect the whole RTMP stream to be in memory at once, but only a chunk of it at a time.
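If you still want to poke around, the usual Linux tools for "looking into memory" are gdb attached to the plugin process and the /proc/<pid>/mem and /proc/<pid>/maps files. Programmatically, the same idea can be sketched with process_vm_readv(2). This is only an illustration of reading a peer process's memory, not a way to recover the stream; the PID and address are placeholders you would take from /proc/<pid>/maps.

```c
/* Minimal sketch: read one page of another process's memory on Linux
 * with process_vm_readv(2). Requires ptrace permission (same user or
 * root, and a permissive /proc/sys/kernel/yama/ptrace_scope). */
#define _GNU_SOURCE
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/uio.h>

int main(int argc, char *argv[])
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s <pid> <hex-address>\n", argv[0]);
        return 1;
    }

    pid_t pid = (pid_t)atoi(argv[1]);
    /* Placeholder address; real addresses come from /proc/<pid>/maps. */
    void *remote_addr = (void *)(uintptr_t)strtoull(argv[2], NULL, 16);

    unsigned char buf[4096];
    struct iovec local  = { .iov_base = buf,         .iov_len = sizeof(buf) };
    struct iovec remote = { .iov_base = remote_addr, .iov_len = sizeof(buf) };

    /* Copy one page from the other process's address space into ours. */
    ssize_t n = process_vm_readv(pid, &local, 1, &remote, 1, 0);
    if (n < 0) {
        perror("process_vm_readv");
        return 1;
    }

    /* This is a snapshot of whatever happened to be at that address,
     * not the transmission itself. */
    fwrite(buf, 1, (size_t)n, stdout);
    return 0;
}
```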
Alternative:
If you have the URL of the video you can just use rtmpdump. If you don't have it already, you can obtain the URL with a packet sniffer like Wireshark.
You said in the comments that you've already tried that and ran into problems. However, I fear there is no way around rtmpdump short of manually implementing an RTMP client that emulates the Flash player's behaviour.


Find a video file in the memory dump of a process

I have a player that plays encrypted video files and works like this:
I open an encrypted video file with it
it decrypts the video file and writes it to its memory
and then plays the file from memory.
I want to copy the decrypted video file from memory and play it with a regular video player like VLC, so I created a memory dump with Task Manager and hoped to find the video file in it. Sadly, I don't know enough to locate a video file in a large chunk of bits from memory. I tried to find MP4 patterns in a hex editor and followed every solution I found online, but nothing worked for me, so I'm hoping someone here has an idea and is willing to help me get this done.
I uploaded its memory dump here (after opening a short encrypted video with it).
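For concreteness, "MP4 patterns" usually means the box signatures of the ISO container, the easiest being the four-byte 'ftyp' marker that normally sits near the start of an MP4 file. Below is a minimal sketch (not part of the original post; the dump file name is a placeholder) that scans a raw dump for that signature. A hit only means those four bytes occur somewhere in the dump; as the answers below explain, a complete playable file is unlikely to be present in one piece.

```c
/* Minimal sketch: scan a raw memory dump for the MP4/ISO-BMFF "ftyp"
 * box signature. The default dump file name is a placeholder. */
#include <stdio.h>
#include <string.h>

int main(int argc, char *argv[])
{
    const char *path = (argc > 1) ? argv[1] : "memory.dmp"; /* placeholder */
    FILE *f = fopen(path, "rb");
    if (!f) { perror(path); return 1; }

    unsigned char buf[1 << 20];
    size_t n, carry = 0;
    unsigned long long base = 0;   /* file offset of buf[0] */

    while ((n = fread(buf + carry, 1, sizeof(buf) - carry, f)) > 0) {
        size_t total = carry + n;
        for (size_t i = 0; i + 4 <= total; i++) {
            if (memcmp(buf + i, "ftyp", 4) == 0)
                printf("'ftyp' at offset %llu\n", base + i);
        }
        /* Keep the last 3 bytes so a signature split across reads is not missed. */
        size_t keep = total < 3 ? total : 3;
        memmove(buf, buf + total - keep, keep);
        base += total - keep;
        carry = keep;
    }
    fclose(f);
    return 0;
}
```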
Most probably, the software doesn't decode the whole video file in one go, but in a streaming fashion. This makes it impossible to catch a moment when the complete decoded video is available in the memory dump.
If the player software is open source, compile it with debug symbols and run it under a debugger. Otherwise, resort to reverse engineering.
I don't think the question is on-topic for Stack Overflow in general, not least because it asks about reversing a software solution intended for digital rights management. However, I will still leave an answer.
First of all, as the comments suggest, the topic in question is the reversal of a specific solution from a commercial provider. The ability to recover a media file from a memory dump depends heavily on how this solution is implemented and on the methods the provider used to complicate reversal. Only the simplest, most straightforward solutions are easy to reverse; the more effort a developer puts into covering their traces, the harder, exponentially, it becomes.
Even though there is a small chance of finding the original file in full in memory (through memory dump analysis), it is unlikely to be possible for any media playback application, even one that does no decryption at all. Media playback is typically streaming: data is loaded from disk, storage, network, etc. as needed for playback, not as a full download. Decryption is applied only to the pieces of data needed at that moment, and a decent DRM-enabled application would immediately erase the ephemeral clear data once it is no longer needed. That is, a memory dump would, at best, contain a ridiculously small amount of media data.
To capture or restore the original media file, one would typically have to place oneself as a middleman in some media-streaming-related process and copy the data as it is being streamed during playback. A static memory dump is of little help here.

Which is better to use in the v4l2 framework, user pointer or mmap?

After going through these links,
https://linuxtv.org/downloads/v4l-dvb-apis/uapi/v4l/userp.html
https://linuxtv.org/downloads/v4l-dvb-apis/uapi/v4l/mmap.html
I understood that there are two ways to allocate buffers in the v4l2 framework:
User pointer buffer: the buffer is allocated in user space.
Memory-mapped buffer: the buffer is allocated in kernel space.
I am a bit confused about which one to use while doing v4l2 driver development. I mean, which is the better approach in terms of performance and buffer handling?
I will be using DMA-SG (scatter-gather DMA) for data transfer in my hardware.
It depends... on your requirements.
Case: Visualization of the video stream.
In this case, you might want to write the video data directly to memory that is accessible to the video driver, saving a copy operation, and you will also get the shortest camera-to-display time. Here, a user pointer would be the way to go.
Case: Recording of the video stream.
In this case, you do not care as much about timely delivery, but you do care about not missing frames, so you can use memory-mapped acquisition with multiple buffers.
Case: Single image acquisition for processing.
In this case, timely delivery and missed frames are both less important, so you could use either method, but buffered (memory-mapped) operation will give the fastest acquisition time, since there is always a buffer with recent image data available. A buffer-setup sketch for both methods follows below.
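To make the two paths concrete, here is a minimal, untested sketch of the buffer setup for both methods; the device node, buffer count, and frame size are assumptions, and error handling, queueing, and streaming are omitted.

```c
/* Minimal sketch of the two v4l2 buffer-allocation paths.
 * /dev/video0 and the sizes are assumptions; real code needs full error
 * handling plus VIDIOC_QBUF/DQBUF and VIDIOC_STREAMON. */
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/videodev2.h>

int main(void)
{
    int fd = open("/dev/video0", O_RDWR);   /* assumed device node */

    /* --- Memory-mapped buffers: allocated by the driver in kernel space --- */
    struct v4l2_requestbuffers req;
    memset(&req, 0, sizeof(req));
    req.count  = 4;
    req.type   = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    req.memory = V4L2_MEMORY_MMAP;
    ioctl(fd, VIDIOC_REQBUFS, &req);

    for (unsigned i = 0; i < req.count; i++) {
        struct v4l2_buffer mb;
        memset(&mb, 0, sizeof(mb));
        mb.type   = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        mb.memory = V4L2_MEMORY_MMAP;
        mb.index  = i;
        ioctl(fd, VIDIOC_QUERYBUF, &mb);
        void *start = mmap(NULL, mb.length, PROT_READ | PROT_WRITE,
                           MAP_SHARED, fd, mb.m.offset);
        (void)start; /* queue with VIDIOC_QBUF, then start streaming */
    }

    /* --- User-pointer buffers: memory you allocate yourself in user space --- */
    memset(&req, 0, sizeof(req));
    req.count  = 4;
    req.type   = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    req.memory = V4L2_MEMORY_USERPTR;
    ioctl(fd, VIDIOC_REQBUFS, &req);

    size_t image_size = 640 * 480 * 2;      /* assumed frame size */
    void *userbuf = aligned_alloc(4096, image_size);

    struct v4l2_buffer ub;
    memset(&ub, 0, sizeof(ub));
    ub.type      = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    ub.memory    = V4L2_MEMORY_USERPTR;
    ub.index     = 0;
    ub.m.userptr = (unsigned long)userbuf;
    ub.length    = image_size;
    ioctl(fd, VIDIOC_QBUF, &ub);            /* hand your own memory to the driver */
    return 0;
}
```

With scatter-gather DMA, the user-pointer path means the driver pins your pages and builds the SG list over them, while the mmap path exposes driver-allocated buffers to user space; either way the data itself is not copied.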

Is it possible to record audio to variable/RAM in LiveCode?

Is it possible to record audio to a variable/RAM in LiveCode?
Normal recording requires using a file, but I'm trying to figure out a way to avoid the extra step of writing to disk only to read it back from disk and send it through sockets.
This is currently impossible, and there is no good way to stream content from within LiveCode. When I tried to use video recording and sockets at the same time, I ran into a bug that caused LiveCode (Revolution at the time) to crash. Looking into the crash files, it appeared to me that using the recording routines and the socket routines at the same time caused a memory address conflict. After sending roughly 1000 recorded frames through a socket, Revolution would inevitably crash. To the best of my knowledge, this problem has never been fixed.
I would recommend dedicated software for streaming. A possibility might be VLC. You can use VLC from the command line, which means that you can set up a stream from within LiveCode, using the shell() function.

Capture an OpenGL window in X11 at a fast frame rate - possible?

I have an 800x600 OpenGL application running on my Linux machine (X11). The content of this application (the rendered image) should be exported over the network to another PC.
First of all, I want to know whether it is possible to take snapshots of the application's window at about 30 Hz, save them as JPEG, and export them to the other machine via HTTP or similar (like IP cameras do). Is it possible to read the graphics card's memory (Radeon HD 5800) fast enough to get a frame rate of about 30 pictures per second?
If you're willing to tolerate some latency Pixel Buffer Objects (PBOs) should get you some decent read-back throughput.
libjpeg-turbo looks like a good solution for high-speed JPEG encoding.
If you don't have the source to the app you're trying to monitor then LD_PRELOAD hacks combined with the above should work.
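As a rough sketch of the PBO approach (not from the original answer): double-buffer the read-back so that glReadPixels starts an asynchronous transfer into one PBO while you map the one filled on the previous frame. It assumes a current OpenGL context, GLEW for extension loading, and the 800x600 size from the question; encode_jpeg() is a placeholder for the JPEG/HTTP part.

```c
/* Minimal sketch of double-buffered PBO read-back. glReadPixels into a
 * bound GL_PIXEL_PACK_BUFFER returns immediately; mapping the previous
 * frame's PBO gives the driver a full frame to finish the transfer,
 * which hides most of the read-back latency. */
#include <GL/glew.h>

#define WIDTH  800
#define HEIGHT 600

static GLuint pbo[2];
static int frame;

void init_readback(void)
{
    glGenBuffers(2, pbo);
    for (int i = 0; i < 2; i++) {
        glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[i]);
        glBufferData(GL_PIXEL_PACK_BUFFER, WIDTH * HEIGHT * 4, NULL, GL_STREAM_READ);
    }
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
}

/* Call once per rendered frame, after drawing; encode_jpeg() stands in
 * for whatever you do with the BGRA pixels (e.g. libjpeg-turbo + HTTP). */
void grab_frame(void (*encode_jpeg)(const unsigned char *bgra))
{
    int cur = frame % 2, prev = (frame + 1) % 2;

    /* Kick off an asynchronous read of the current frame. */
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[cur]);
    glReadPixels(0, 0, WIDTH, HEIGHT, GL_BGRA, GL_UNSIGNED_BYTE, 0);

    /* Map the frame started one frame ago; by now it is usually ready. */
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[prev]);
    const unsigned char *pixels = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
    if (pixels)
        encode_jpeg(pixels);
    glUnmapBuffer(GL_PIXEL_PACK_BUFFER);

    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
    frame++;
}
```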
You may want to take a look at VirtualGL which does exactly what you aim for.

Fast Audio Input/Output

Here's what I want to do:
I want to allow the user to give my program some sound data (through a mic input), then hold it for 250ms, then output it back out through the speakers.
I have already done this using the Java Sound API. The problem is that it's rather slow. It takes a minimum of about 1-2 seconds from the time the sound is made to the time it is heard again from the speakers, and I haven't even tried to implement the delay logic yet. Theoretically there should be no delay, but there is. I understand that you have to wait for the sound card to fill up its buffer, and that the sample size and sampling rate have something to do with this.
My question is this: should I continue down the Java path trying to do this? I want to get the delay down to around 100ms if possible. Does anyone have experience using the ASIO driver with Java? Supposedly it's faster.
Also, I'm a .NET guy. Does this make sense to do with .NET instead? What about C++? I'm looking for the right technology to use here, and maybe a good example of how to read/write to audio input/output streams using your suggested technology platform. Thanks for your help!
I've used JavaSound in the past and found it wonderfully flaky (and it keeps changing between VM releases). If you like C#, use it, just use the DirectX APIs. Here's an example of doing kind of what you want to do using DirectSound and C#. You could use the Effects plugins to perform your 250 ms echo.
http://blogs.microsoft.co.il/blogs/tamir/archive/2008/12/25/capturing-and-streaming-sound-by-using-directsound-with-c.aspx
You may want to look into JACK, an audio API designed for low-latency sound processing. Additionally, Google turns up this nifty presentation [PDF] about using JACK with Java.
"Theoretically there should be no delay, but there is."
Well, it's impossible to have zero delay. The best you can hope for is an unnoticeable delay (in terms of human perception). It might help if you describe your basic algorithm for reading & writing the sound data, so people can identify possible problems.
A potential issue with using a garbage-collected language like Java is that the GC will periodically run, interrupting your processing for some arbitrary amount of time. However, I'd be surprised if it's >100ms in normal usage. If GC is a problem, most JVMs provide alternate collection algorithms you can try.
If you choose to go down the C/C++ path, I highly recommend using PortAudio ( http://portaudio.com/ ). It works with almost everything on multiple platforms and gives you low-level control of the sound drivers without actually having to deal with the various sound driver technologies that are around.
I've used PortAudio on multiple projects, and it is a real joy to use. And the license is permissive.
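As a rough illustration of what the mic-to-speakers loop with a 250 ms delay looks like with PortAudio's callback API (a sketch; the sample rate, mono float format, and block size are assumptions):

```c
/* Minimal sketch: microphone -> 250 ms delay line -> speakers with
 * PortAudio. Error handling is trimmed. Build with: gcc delay.c -lportaudio */
#include <portaudio.h>

#define SAMPLE_RATE   44100
#define DELAY_FRAMES  (SAMPLE_RATE / 4)      /* 250 ms at 44.1 kHz */

static float delay_line[DELAY_FRAMES];
static unsigned long pos;

/* Called by PortAudio from its real-time thread for every block of audio. */
static int callback(const void *input, void *output,
                    unsigned long frames,
                    const PaStreamCallbackTimeInfo *timeInfo,
                    PaStreamCallbackFlags statusFlags,
                    void *userData)
{
    const float *in = (const float *)input;
    float *out = (float *)output;

    for (unsigned long i = 0; i < frames; i++) {
        out[i] = delay_line[pos];            /* play what came in 250 ms ago */
        delay_line[pos] = in ? in[i] : 0.0f; /* store the current sample     */
        pos = (pos + 1) % DELAY_FRAMES;
    }
    return paContinue;
}

int main(void)
{
    PaStream *stream;

    Pa_Initialize();
    /* 1 input channel, 1 output channel, 32-bit float samples; a block
     * size of 256 frames keeps the extra buffering latency small. */
    Pa_OpenDefaultStream(&stream, 1, 1, paFloat32, SAMPLE_RATE, 256, callback, NULL);
    Pa_StartStream(stream);

    Pa_Sleep(30 * 1000);                     /* run for 30 seconds */

    Pa_StopStream(stream);
    Pa_CloseStream(stream);
    Pa_Terminate();
    return 0;
}
```

With 256-frame blocks at 44.1 kHz, each block is roughly 6 ms, so the buffering the stream itself adds on top of the intended 250 ms delay stays in the low tens of milliseconds even with hardware latency included.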
If low latency is your goal, you can't beat C.
libsoundio is a low-level C library for real-time audio input and output. It even comes with an example program that does exactly what you want: piping the microphone input to the speaker output.
It's possible with JavaSound to get end-to-end latency in the ballpark of 100-150ms.
The primary cause of latency is the buffer sizes of the capture and playback lines. The bufferSize is set when opening the lines:
capture: TargetDataLine#open(AudioFormat format, int bufferSize)
playback: SourceDataLine#open(AudioFormat format, int bufferSize)
If the buffer is too big it will cause excess latency, but if it's too small it will cause stuttery playback. So you need to find a balance between your application's needs and your computing power.
The default buffer size can be checked with DataLine#getBufferSize when calling #open(AudioFormat format). The default size will vary based on the AudioFormat and seems to be geared for high latency, stutter free playback applications (e.g. internet streaming). If you're developing a low latency application, the default buffer size is much too large and should be changed.
In my testing with a 16-bit PCM AudioFormat, a buffer size of 1024 bytes has been pretty close to ideal for low latency.
The second and often overlooked cause of audio latency is any other activity being done in the capture or playback threads. For example, logging messages to the console can introduce tens of milliseconds of latency. Turn it off.

Resources