How to use the result of guardian-ffmpeg-Android? - android-ndk

I am trying to build ffmpeg for Android. There are many tutorials, but some are very old, so I want one that works with a newer version of ffmpeg and the Android NDK.
After a long search I found one: guardianproject / android-ffmpeg.
The project was updated several months ago. It uses NDK r8 and fetches ffmpeg from upstream, so the version is recent.
After following all the instructions, I am confused about which build result I should use, and how to use it.
The README mentions testing, like:
# embedding metadata into a matroska video
/data/local/ffmpeg -y -i test.mp4 \
    -attach attach.txt -metadata:s:2 mimetype=text/plain \
    -acodec copy -vcodec copy testattach.mkv
First, I cannot find the path /data/local on my device.
Second, this is a shell command; how am I supposed to use it from Android?
Totally confused.
Any light?

I presume that you have successfully built the ffmpeg executable and understand the difference between static and shared libs in the build. If not, you should read up on that before trying to exec it on the CLI in Android.
I think the author runs ffmpeg on the CLI using either System.exec or ProcessBuilder to launch an executable located on the phone in ./data/local/$yourSubDir....
See here for a discussion of ffmpeg on the CLI in Android, and note that a lot of people prefer the full JNI route, using Java interfaces to wrap calls to ffmpeg.main(). IMO the JNI route is more robust for apps you intend to distribute, but the CLI route does work.
The tests that confused you are just example invocations of ffmpeg; in Android they would need to be called using whichever approach you prefer (System.exec or ProcessBuilder). You can get familiar with the CLI by simply running ffmpeg on Windows or Linux in your dev environment: get a shell there and play with the samples mentioned in the ffmpeg FAQs.
You can look at the halfninja project on git for more examples of the JNI approach.
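To get the binary onto the device and try it by hand before wiring it into your app, a minimal sketch with adb (the binary name and paths are assumptions; on some devices /data/local is not writable and /data/local/tmp is the usual alternative):

# push the ffmpeg binary produced by the build to the device
adb push ffmpeg /data/local/ffmpeg
adb shell chmod 755 /data/local/ffmpeg

# sanity check: run it once from a shell before calling it via System.exec/ProcessBuilder
adb shell /data/local/ffmpeg -version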

Related

How can I use a USB stick as the build directory for compiling OpenCV with cmake?

I'm trying to install OpenCV on my Raspberry Pi by following this tutorial: https://www.pyimagesearch.com/2016/04/18/install-guide-raspberry-pi-3-raspbian-jessie-opencv-3/
However, when I get to the build section with cmake, I do not have enough space to compile the package. I read that you can use a USB stick to store the build data, which seems to be the bulk of the space used.
I have attempted to manually create a build folder on the USB stick and copied the CMakeLists.txt file over, but to no avail.
Apologies for the lack of clarity in this question; I'm exceptionally new to Linux and don't even know how to provide additional details. A step-by-step guide for performing step 5 of the above guide, using a USB stick to store the bulk of the data, would be very much appreciated.
Edit: The section I am trying to use is:
cmake -D CMAKE_BUILD_TYPE=RELEASE \
-D CMAKE_INSTALL_PREFIX=/usr/local \
-D INSTALL_PYTHON_EXAMPLES=ON \
-D OPENCV_EXTRA_MODULES_PATH=~/opencv_contrib-3.1.0/modules \
-D BUILD_EXAMPLES=ON ..
I am trying to use /media/usb/build as my build folder, but I still want the package to install to the SD card. How can I do this?
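In case it helps, a minimal sketch of an out-of-source CMake build with the build tree on the USB stick, assuming the stick is mounted at /media/usb and the OpenCV sources are in ~/opencv-3.1.0 (both paths are assumptions; CMAKE_INSTALL_PREFIX keeps the installed files on the SD card):

# create the build tree on the USB stick and work from there
mkdir -p /media/usb/build
cd /media/usb/build

# point cmake back at the source tree; only the build artifacts land on the stick
cmake -D CMAKE_BUILD_TYPE=RELEASE \
    -D CMAKE_INSTALL_PREFIX=/usr/local \
    -D INSTALL_PYTHON_EXAMPLES=ON \
    -D OPENCV_EXTRA_MODULES_PATH=~/opencv_contrib-3.1.0/modules \
    -D BUILD_EXAMPLES=ON ~/opencv-3.1.0

# compile on the stick, then install to the SD card (/usr/local)
make -j4
sudo make install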

GNU Parallel and the --link argument

Hi, I'm very new to Linux and learning to use the terminal and bash. Currently I'm working through the GNU Parallel tutorial. I've come to the section that talks about linking arguments with --link and :::+.
If I try using --link, the terminal says unknown option, and if I try using :::+ it says no such file or directory.
Am I on the wrong version of GNU Parallel? The tutorial on the web is for 20160822, whereas the tutorial from man parallel_tutorial says 20140622.
I've tried to update my parallel version, but I can't seem to get this link option to work.
Use the tutorial that comes with the version of GNU Parallel you use (man parallel_tutorial).
The reason is that the tutorial online is made for the newest version.
20160422 introduced :::+, and in 20160822 --xapply was renamed to --link.
parallel --version will tell you which version you are running.
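For reference, once you are on a new enough version (20160822 or later for --link), linking looks like this; both commands pair the inputs element by element instead of taking the cross product:

# :::+ links the second input source to the first
parallel -k echo {1} {2} ::: A B C :::+ 1 2 3
# --link does the same with two ordinary ::: sources
parallel -k --link echo {1} {2} ::: A B C ::: 1 2 3
# both print: A 1, B 2, C 3 (one pair per line; -k keeps the order)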

MPEG-DASH: using GPAC to get MPD files

My goal is to convert some MPEG-4 files on my hard disk into MPD files that will allow me to use them for MPEG-DASH streaming. I read about GPAC's MP4Box capability to create MPD files, and I successfully compiled GPAC for Ubuntu by following the instructions in these two links:
http://gpac.wp.mines-telecom.fr/2011/04/20/compiling-gpac-on-ubuntu/
http://gpac.wp.mines-telecom.fr/2012/02/01/dash-support/
But when I try to execute any command such as
MP4Box -dash 10000 -frag 1000 -rap myFile.mp4
I get the following error
Option -dash unknown. Please check usage
Are there any commands or options I must use when building GPAC to add DASH support? And is there any other way to get my own MPD file, rather than the ones provided by ITEC?
Thanks in advance!
Try to follow the compile instructions carefully and make sure to fetch the latest version from SVN.
MP4Box should work with your commands.
Looks like you are using an outdated version of MP4Box. Try downloading and compiling the latest one from here (for me the same command works): http://gpac.wp.mines-telecom.fr/downloads/gpac-nightly-builds/
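A quick way to see which MP4Box you are actually running, and to rebuild once you have fresh sources (assuming the usual configure/make build described in the compile guide above, with the default install prefix):

# check which MP4Box is on the PATH and how old it is
which MP4Box
MP4Box -version

# from the GPAC source directory: rebuild, reinstall, then retry the DASH command
./configure && make && sudo make install
MP4Box -dash 10000 -frag 1000 -rap myFile.mp4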

How to use protobuf-csharp-port from unix

We've been using protocol buffers, and are generating the C++ and Python files with protoc, and the C# files with protobuf-csharp-port. At the moment these are done separately: the C++ and Python from Linux, and the C# from Windows. We'd like to have one script generate all of these, running on Linux.
To do this I'm trying to run ProtoGen.exe with Mono, but it's not producing any output. The following command runs, but produces no .cs files and no errors.
mono ../cs/Packages/Google.ProtocolBuffers/tools/ProtoGen.exe --protoc_dir=/usr/local/bin/ ./subdir/simple_types.proto
I've got a feeling that I'm missing something simple.
I don't think I've tried running protoc from ProtoGen.exe on Linux. I'm surprised that it doesn't have any errors, but we can definitely look into that. (If you fancy raising an issue, that would be really helpful - or I'll do it when I get the chance.)
For the moment, I suggest that you run protoc first, using --descriptor_set_out to produce a binary (protobuf) version of the .proto file. That's what ProtoGen.exe is trying to do first, and failing by the sounds of it.
Once you've got the binary version of your message descriptor (I'd call it simple_types.pb), you can run ProtoGen.exe on that. It's been a while since I've done this, but I believe you should be able to just run
mono ../cs/Packages/Google.ProtocolBuffers/tools/ProtoGen.exe ./subdir/simple_types.pb
... and it should magically work.
As a horrible alternative, you could try symlinking protoc.exe to protoc in your binary directory. Fundamentally I suspect that's what's going wrong :)
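If you want to try that symlink route, something like this is what I mean (the protoc.exe name and path are guesses about what ProtoGen.exe looks for, not something I've verified):

# make a protoc.exe name that points at the real protoc binary
ln -s /usr/local/bin/protoc /usr/local/bin/protoc.exe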
My script:
protoc "--proto_path=$SRC_DIR" "--descriptor_set_out=x.protobin" --include_imports $SRC_DIR/x.proto
mono $PRJ_HOME/Google.ProtocolBuffers.2.4.1.521/tools/ProtoGen.exe -line_break=Unix x.protobin
protoc and mono were installed via the distribution package manager:
# archlinux
pacman -S protobuf mono

Beagleboard Angstrom Linux, Image Capture Script streamer alternative

I want to take a snapshot from my Logitech webcam at the desired resolution and save the image using a Linux bash script. I need to do this on my BeagleBoard with the Angstrom image. On the BeagleBoard I can capture using Cheese, but I don't know how to capture from the terminal with a script.
On my host computer I am using streamer with
streamer -c /dev/video0 -b 16 -o outfile.jpeg
But I don't know how to take snapshots in Angstrom. Can you make any suggestions?
How can I capture from the command line?
Regards
I've used mjpg-streamer with some success. It sends a video stream through port 8080, although you can change that by editing the startup script.
I used instructions from here, although I skipped the make install part and just ran it from my home directory. It worked both with the default Angstrom image and with Debian running off the SD card (i.e., non-flashed).
You can view the stream by pointing your browser (either local or over-the-LAN) to http://beagle.address:8080/?action=x, where x is stream or snapshot. I trust those parameters are self-explanatory :).
You can use a text-based browser such as links to open the URL, and links will then prompt you for the filename of the image. That's for testing, then I suppose you can find a way to save the snapshot without human intervention if you intend to use it from a script.
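For the unattended case, a minimal sketch, assuming mjpg-streamer is already running on the board at the default port (the hostname is a placeholder):

# grab a single JPEG snapshot from mjpg-streamer's HTTP interface
wget -O snapshot.jpg "http://beagle.address:8080/?action=snapshot"
# or, equivalently, with curl
curl -o snapshot.jpg "http://beagle.address:8080/?action=snapshot"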
I'm using gstreamer to capture webcam input on a Beaglebone with a Logitech webcam. You'll need gstreamer with gstreamer-utils installed. I'm using Ubuntu, and they can be found in the standard repos. Here's the CLI command:
gst-launch v4l2src num-buffers=1 ! ffmpegcolorspace ! video/x-raw-yuv,width=320,height=240 ! jpegenc ! filesink location=test.jpg
Unfortunately, I'm experiencing some issues: after a number of images the pipeline freezes on v4l2src. Maybe you'll have better luck with your setup.
