Is Adobe Media Encoder (AME) Scriptable? I've heard people mention it was "officially scriptable" but I can't find any reference to its scriptable object set.
Has anyone had any experience scripting AME?
Adobe Media Encoder is 'officially' not scriptable, but we can script AME through the ExtendScript API.
The following functions are available through ExtendScript:
1. Adding a file to batch
Encode progress:
eHost = app.getEncoderHost();
enc = eHost.createEncoderForFormat("QuickTime");
flag = enc.loadPreset("HD 1080i 29.97, H.264, AAC 48 kHz");
if (flag) {
    enc.onEncodeProgress = function (progress) {
        $.writeln(progress);
    };
    enc.encode("/Users/test/Desktop/00000.MTS", "/Users/test/Desktop/0.mov");
} else {
    alert("The preset could not be loaded");
}
Encode end:
eHost = app.getEncoderHost();
enc = eHost.createEncoderForFormat("QuickTime");
flag = enc.loadPreset("HD 1080i 29.97, H.264, AAC 48 kHz");
if (flag) {
    enc.onEncodeFinished = function (success) {
        if (success) {
            alert("Encoding finished successfully");
        } else {
            alert("Encoding failed");
        }
    };
    eHost.runBatch();
} else {
    alert("The preset could not be loaded");
}
2. Start batch
eHost = app.getEncoderHost();
eHost.runBatch();
3. Stop batch
eHost = app.getEncoderHost();
eHost.stopBatch();
4. Pause batch
eHost = app.getEncoderHost();
eHost.pauseBatch();
5. Getting preset formats
eHost = app.getEncoderHost();
list = eHost.getFormatList();
6. Getting presets
eHost = app.getEncoderHost();
enc = eHost.createEncoderForFormat("QuickTime");
list = enc.getPresetList();
and many more...
The closest bits of info I've found are:
http://www.openspc2.org/book/MediaEncoderCC/
That resource is actually good; if you can read Japanese, or at least use Chrome's built-in translate function, you can see it has pages such as this one:
http://www.openspc2.org/book/MediaEncoderCC/easy/encodeHost/009/index.html
Almost all of the basic functionality can be performed through script.
I had a similar question about Soundbooth. I haven't tried scripting Adobe Media Encoder, though; it doesn't show up in the list of applications I could potentially connect to and script with the ExtendScript Toolkit.
I did find this article that might come in handy if you're on Windows. I guess something similar written in AppleScript could do the job on OS X. I haven't tried it, but this Sikuli thing looks nice; maybe it could help with the job.
Adobe Media Encoder doesn't seem to be scriptable. I was wondering, for batch converting, could you use ffmpeg? There seem to be a few scripts out there for that, if you google for ffmpeg batch flv.
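For example, a simple shell loop along these lines (the extensions and ffmpeg defaults are only placeholders) could batch-convert a folder of clips:
for f in *.avi; do
    # convert each AVI in the current directory to FLV using ffmpeg's defaults
    ffmpeg -i "$f" "${f%.avi}.flv"
done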
HTH,
George
Year 2021:
Yes, AME is scriptable in ExtendScript. The AME API documentation can be found at
https://ame-scripting.docsforadobe.dev/index.html.
The API methods can be invoked locally inside AME or remotely through BridgeTalk.
addCompToBatch and the other alternatives in the API doc seem to be safe to use. This works:
app.getFrontend().addCompToBatch(project, preset, destination);
The method requires the project to be structured so that one and only one comp is at the root of the project.
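For the remote case, a minimal BridgeTalk sketch could look like the following; the target name "ame" and the file paths are illustrative assumptions, not values from the API doc:
var bt = new BridgeTalk();
bt.target = "ame"; // assumption: "ame" resolves to the installed AME version
// The body string is evaluated inside AME; paths and preset are placeholders.
bt.body = 'app.getFrontend().addCompToBatch("/path/to/project.aep", "/path/to/preset.epr", "/path/to/output.mp4");';
bt.onResult = function (msg) { $.writeln("AME result: " + msg.body); };
bt.onError = function (msg) { $.writeln("AME error: " + msg.body); };
bt.send();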
encoder.encode (references to which can be found on the web, and which supposedly supports encode-progress callbacks) is not available in AME 2020 and 2021. As a result, this does not work:
var encoder = app.getEncoderHost().createEncoderForFormat(encoderFormat);
var res = encoder.loadPreset(encoderPreset);
if (res) {
    encoder.encode(project, destination); // error: encode is not a function
}
The method seems to have been removed in AME 2017.1, according to the post reporting the issue https://community.adobe.com/t5/adobe-media-encoder-discussions/media-encoder-automation-system-with-using-extendscript/td-p/9344018
The official stance at the moment is "no", but if you open the Adobe ExtendScript Toolkit and set the target app to Media Encoder, you will see in the Data Browser that a few objects and methods are already exposed on the app object, like app.getFrontend(), app.getEncoderHost(), etc. There is no official documentation, though, and no support, so you are free to experiment with them at your own risk.
You can use the ExtendScript reflection interface like this:
a = app.getFrontend()
a.reflect.properties
a.reflect.methods
a.reflect.find("addItemToBatch").description
But as far as I can see, no meaningful information can be found this way beyond the list of method and property names.
More about the ExtendScript reflect interface can be found in the JavaScript Tools Guide CC document.
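For convenience, the same reflection calls can be wrapped in a small helper that prints every exposed name. This is only a sketch built on the standard ExtendScript reflection interface; the AME objects themselves remain undocumented:
function dumpInterface(obj) {
    var r = obj.reflect;
    // list every method and property name exposed by the (undocumented) object
    for (var i = 0; i < r.methods.length; i++) {
        $.writeln("method:   " + r.methods[i].name);
    }
    for (var j = 0; j < r.properties.length; j++) {
        $.writeln("property: " + r.properties[j].name);
    }
}
dumpInterface(app.getFrontend());
dumpInterface(app.getEncoderHost());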
Doesn't seem to be. There are some references to it being somewhat scriptable via FCP XML, yet it's not "scriptable" in the accepted sense.
Edit: it looks like they finally got their finger out and made AME scriptable: https://stackoverflow.com/a/69203537/432987
I got here after it came second in the DuckDuckGo results for "extendscript adobe media encoder". First was a post on the Adobe forums where an Adobe staffer wrote:
Scripting in Adobe Media Encoder is not a supported feature.
and, just to give the finger to anyone seeking to develop solutions for Adobe users using Adobe's platform:
Also, this is a user-to-user forum, not an official channel for support from Adobe personnel.
I think the answer is "Adobe says no"
Related
I have gone through the answer provided here for the difference. But I just need to play a notification sound for about 2 seconds as an alert. No video or any other heavy loading.
This is the notification sound I am about to play.
ms-winsoundevent:Notification.SMS
The below is for MediaPlayerElement:
MediaPlayerElement mediaPlayerElement = new MediaPlayerElement();
mediaPlayerElement.SetMediaPlayer(new Windows.Media.Playback.MediaPlayer { AudioCategory = Windows.Media.Playback.MediaPlayerAudioCategory.Alerts});
mediaPlayerElement.MediaPlayer.AudioCategory = Windows.Media.Playback.MediaPlayerAudioCategory.Alerts;
mediaPlayerElement.Source = Windows.Media.Core.MediaSource.CreateFromUri(new Uri("ms-winsoundevent:Notification.Default"));
mediaPlayerElement.AutoPlay = false;
mediaPlayerElement.MediaPlayer.Play();
The below is for MediaElement:
MediaElement mediaElement = new MediaElement();
mediaElement.AudioCategory = AudioCategory.Alerts;
mediaElement.Source = new Uri("ms-winsoundevent:Notification.Default");
mediaElement.AutoPlay = false;
mediaElement.Play();
Can I use MediaElement since it's a small audio clip, or should I only use MediaPlayerElement as it is the one prescribed by Microsoft? Which one is better to use in this case?
P.S.: I need to set the audio category to Alerts in order to dim any background music.
Can I use MediaElement since it's a small audio clip, or should I only use MediaPlayerElement as it is the one prescribed by Microsoft? Which one is better to use in this case?
Deriving from the official documentation:
In Windows 10, build 1607 and on we recommend that you use MediaPlayerElement in place of MediaElement. MediaPlayerElement has the same functionality as MediaElement, while also enabling more advanced media playback scenarios. Additionally, all future improvements in media playback will happen in MediaPlayerElement.
This means that new features will be developed based on MediaPlayerElement, so we recommend using MediaPlayerElement to give your app a longer life.
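If you don't need any on-screen controls at all (as with a 2-second alert), a bare MediaPlayer, which is what MediaPlayerElement wraps, should also do the job. This is just a sketch based on the general MediaPlayer API, not something prescribed by the quoted documentation:
// Play a short alert sound without putting any element in the visual tree.
// Keep a reference to the player for the duration of playback.
var alertPlayer = new Windows.Media.Playback.MediaPlayer
{
    // The Alerts category dims any background music while the sound plays.
    AudioCategory = Windows.Media.Playback.MediaPlayerAudioCategory.Alerts,
    Source = Windows.Media.Core.MediaSource.CreateFromUri(new Uri("ms-winsoundevent:Notification.Default"))
};
alertPlayer.Play();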
I am following the samples for Microsoft Cognitive Services Speech SDK, namely the Speech Translation.
The sample for dotnet core uses microphone as audio input and translates what you speak. Translated results are also available as synthesized speech. I would like to play this audio but could not find the appropriate code for that.
I tried using NAudio as suggested in this answer but I get garbled audio. I guess there is more to the format of the audio.
Any pointers?
On .NET Core, many audio packages might not work. For example, with NAudio I can't play sound on my Mac.
I got it working using the NetCoreAudio package (NuGet), with the following implementation in the translation Synthesizing event:
recognizer.Synthesizing += (s, e) =>
{
    var audio = e.Result.GetAudio();
    Console.WriteLine(audio.Length != 0
        ? $"AudioSize: {audio.Length}"
        : $"AudioSize: {audio.Length} (end of synthesis data)");
    if (audio.Length > 0)
    {
        var fileName = Path.Combine(Directory.GetCurrentDirectory(), $"{DateTime.Now.ToString("yyyy-MM-dd_HH-mm-ss.wav")}");
        File.WriteAllBytes(fileName, audio);
        var player = new Player();
        player.Play(fileName).Wait();
    }
};
Is there a way to read the info (fps, bitrate, duration, codecs required, etc.) of a media file (avi, mp4, mkv, etc.) on Windows using Visual Studio C++?
I managed to play various files (which I actually don't even want) using directshow (http://msdn.microsoft.com/en-us/library/windows/desktop/dd389098%28v=vs.85%29.aspx) but I don't know how to only get the information from the file.
Edit: I got it working like this...
#include <windows.h>
#include <propsys.h>  // IPropertyStore, SHGetPropertyStoreFromParsingName
#include <propkey.h>  // PKEY_* shell property keys
#include <propidl.h>  // PROPVARIANT, PropVariantInit, PropVariantClear

int height, width, framerate, bitrate;
LARGE_INTEGER duration; // PKEY_Media_Duration is reported in 100-nanosecond units
// initialize the COM library
CoInitialize(NULL);
// open the shell property store for the file (error handling omitted)
IPropertyStore* store = NULL;
SHGetPropertyStoreFromParsingName(L"E:\\test.avi", NULL, GPS_DEFAULT, __uuidof(IPropertyStore), (void**)&store);
PROPVARIANT variant;
PropVariantInit(&variant);
store->GetValue(PKEY_Media_Duration, &variant);
duration = variant.hVal;
store->GetValue(PKEY_Video_FrameHeight, &variant);
height = variant.lVal;
store->GetValue(PKEY_Video_FrameWidth, &variant);
width = variant.lVal;
store->GetValue(PKEY_Video_FrameRate, &variant);
framerate = variant.lVal; // reported multiplied by 1000 (e.g. 29970 for 29.97 fps)
store->GetValue(PKEY_Video_TotalBitrate, &variant);
bitrate = variant.lVal;
// free the last value read and release the store
PropVariantClear(&variant);
store->Release();
//
CoUninitialize();
You can obtain this information via DirectShow; however, if you don't need the playback/streaming pipeline and you are on Windows 7, you likely have a better alternative: getting the data from shell properties, the same properties that supply the data shown in the additional columns of Windows Explorer.
SHGetPropertyStoreFromParsingName gets you the property store
MSDN entry point for Shell Metadata Providers
Code snippet: How to use the IPropertyStore to obtain Media_Duration?
Have you considered using the MediaInfo SDK? You can get extensive information about all of the audio and video streams available in the container, including codec specifics, as well as everything you were asking about.
Their getting started guide and reference documentation are here:
http://mediaarea.net/en/MediaInfo/Support/SDK/Quick_Start
http://mediaarea.net/en/MediaInfo/Support/SDK/More_Info
Code is available at their SourceForge page here.
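For completeness, a minimal sketch along the lines of the SDK's quick-start example might look like this. It assumes a Unicode build of MediaInfoLib, and the parameter strings ("Duration", "FrameRate", "BitRate", "Format") are taken from the MediaInfo documentation and may vary between versions:
#include <iostream>
#include "MediaInfo/MediaInfo.h" // MediaInfoLib SDK header

int main()
{
    MediaInfoLib::MediaInfo MI;
    if (MI.Open(__T("E:\\test.avi")) == 0)
        return 1; // file could not be opened

    // Query individual stream properties as strings.
    std::wcout << L"Duration (ms): " << MI.Get(MediaInfoLib::Stream_General, 0, __T("Duration")) << std::endl;
    std::wcout << L"Frame rate:    " << MI.Get(MediaInfoLib::Stream_Video, 0, __T("FrameRate")) << std::endl;
    std::wcout << L"Bit rate:      " << MI.Get(MediaInfoLib::Stream_Video, 0, __T("BitRate")) << std::endl;
    std::wcout << L"Video codec:   " << MI.Get(MediaInfoLib::Stream_Video, 0, __T("Format")) << std::endl;

    MI.Close();
    return 0;
}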
I'm searching for a way to retrieve data (formatted as JSON) from an API and parse it.
I really want to use the same code for both Android and iOS. I've already seen examples, but they didn't work for both platforms.
If you can provide me with examples for connecting, retrieving, and parsing JSON, that would be best, because I haven't found great docs about a (quite simple) cross-platform implementation.
Comments welcome!
Thanks in advance!
I've used Newtonsoft's JSON library in a MonoTouch solution.
Find the source code here.
As far as retrieving the data goes, that depends on your API; I suspect it's a web API with HTTP calls? If that's the case, you can elaborate further on this; obviously, exception handling and threading are up to you:
HttpWebRequest request = (HttpWebRequest)HttpWebRequest.Create (url);
...
request.BeginGetResponse ((r) =>
{
    string res = null;
    // complete the asynchronous call to obtain the response
    using (var response = request.EndGetResponse (r))
    using (StreamReader srd = new StreamReader (response.GetResponseStream ())) {
        res = srd.ReadToEnd ();
    }
    T jres = Newtonsoft.Json.JsonConvert.DeserializeObject<T> (res);
}, null);
Instead of using NuGet to download Newtonsoft, you need to download it from here: http://components.xamarin.com/view/json.net/
I'm developing an application which I've got running on a server on my Linux desktop. Due to the shortcomings of Flash on Linux (read: too hard), I'm developing the (small) Flash portion of the app in Windows, which means there's a lot of frustrating back and forth. Now I'm trying to capture the output of the Flash portion using Flash Tracer, and that is proving very difficult also. Is there any other way I could monitor the output of trace on Linux? Thanks...
Hope this helps too (for the sake of the Google search I came from):
In order to get trace output, you need the debugger version of Flash Player from
http://www.adobe.com/support/flashplayer/downloads.html (look specifically for the "debugger" version - they are hard to spot at first glance).
Then create an mm.cfg file in your home directory containing:
ErrorReportingEnable=1
TraceOutputFileEnable=1
MaxWarnings=50
And then you are good to go - restart the browser. When traces start to fill in, you will find the log file in
~/.macromedia/Flash_Player/Logs/flashlog.txt
Something like
tail ~/.macromedia/Flash_Player/Logs/flashlog.txt -f
Should suffice to follow the trace.
A different and mind-bogglingly simple workaround that I've used for years is to simply create an output module directly within the swf. All this means is a keyboard shortcut that attaches a MovieClip with a textfield. All my traces go to this textfield instead of (or in addition to) the output window. Over the years I've refined it of course, making the window draggable, resizable, etc. But I've never needed any other approach for simple logging, and it's 100% reliable and reusable across all platforms.
[EDIT - response to comment]
There's no alert quite like JavaScript's alert() function. But using an internal textfield is just this simple:
ACTIONSCRIPT 1 VERSION
(See notes at bottom)
/* import ExternalInterface package */
import flash.external.*;
/* Create a movieclip for the alert. Set an arbitrary (but very high) number for the depth
* since we want the alert in front of everything else.
*/
var alert = this.createEmptyMovieClip("alert", 32000);
/* Create the alert textfield */
var output_txt = alert.createTextField("output_txt", 1, 0, 0, 300, 200);
output_txt.background = true;
output_txt.backgroundColor = 0xEFEFEF;
output_txt.selectable = false;
/* Set up drag behaviour */
alert.onPress = function()
{
    this.startDrag();
};
alert.onMouseUp = function()
{
    stopDrag();
};
/* I was using a button to test ExternalInterface. You don't need to. */
testEI_btn.onPress = function()
{
    output_txt.text = (ExternalInterface.available);
};
Notes: This works fine for AS1, and will translate well into AS2 (best to use strong data-typing if doing so, but not strictly required). It should work in Flash Players 8-10. ExternalInterface was added in Flash 8, so it won't work in previous player versions.
ACTIONSCRIPT 3 VERSION
var output_txt:TextField = new TextField();
addChild(output_txt);
output_txt.text = (String(ExternalInterface.available));
If you want to beef it up a bit:
var alert:Sprite = new Sprite();
var output_txt:TextField = new TextField();
output_txt.background = true;
output_txt.backgroundColor = 0xEFEFEF;
output_txt.selectable = false;
output_txt.width = 300;
output_txt.height = 300;
alert.addChild(output_txt);
addChild(alert);
alert.addEventListener(MouseEvent.MOUSE_DOWN, drag);
alert.addEventListener(MouseEvent.MOUSE_UP, stopdrag);
output_txt.text = (String(ExternalInterface.available));
function drag(e:MouseEvent):void
{
    var alert:Sprite = e.currentTarget as Sprite;
    alert.startDrag();
}
function stopdrag(e:MouseEvent):void
{
    var alert:Sprite = e.currentTarget as Sprite;
    alert.stopDrag();
}
[/EDIT]
If you only need the trace output at runtime, you can use Firebug in Firefox and then use flash.external.ExternalInterface to call the console.log() JavaScript method provided by Firebug.
I've used that strategy multiple times to a large degree of success.
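A minimal AS3 sketch of that approach (assuming the swf runs in a browser page where Firebug's console object exists):
import flash.external.ExternalInterface;

// Forward a message to Firebug's console; fall back to trace() when
// ExternalInterface is not available (e.g. in the standalone player).
function log(msg:String):void
{
    if (ExternalInterface.available)
    {
        ExternalInterface.call("console.log", msg);
    }
    else
    {
        trace(msg);
    }
}

log("hello from the swf");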
Thunderbolt is a great logging framework with built-in firebug support.
I use the Flex compiler on Linux to build ActionScript files and [Embed(source="file")] for all my assets, including images and fonts. I find ActionScript development on Linux very developer-friendly.
Then again, I'm mostly interested in the fact that Flash has become Unix-friendly, as opposed to the other way around :)
To implement Flash Tracer, head to the following address and make sure you have the latest file: http://www.sephiroth.it/firefox/flashtracer/. Install it and restart the browser.
Head over to Adobe and get the latest Flash debugger. Download and install the Firefox version, as Flash Tracer is a Firefox add-on.
Now that Firefox has the latest Flash debugger and Flash Tracer, we need to locate mm.cfg.
Location on PC: C:\Documents and Settings\username
Inside of mm.cfg should be:
ErrorReportingEnable=1
TraceOutputFileEnable=1
MaxWarnings=100 //Change to your own liking.
Once that is saved, open Firefox and go to the Flash Tracer panel via Tools > Flash Tracer. In the panel that pops up there are two icons in the bottom right corner; click the wrench and make sure the path is set to where your log file is being saved. Also check that Flash Tracer is turned on - there is a play/pause button at the bottom.
I currently use this implementation and hope that it works for you. Flash Tracer is a little old, but it works with the newest versions of Firefox. I am using it with Firefox 3.0.10.