I've been trying to figure this out for the past day or two with minimal results. Essentially, I want to send my selected comps in After Effects to Adobe Media Encoder via script and, using information about them (substrings of their comp name, width, etc., all of which I've already worked out), specify the appropriate AME preset based on the conditions met. The two methods I've found so far won't work for what I'm trying to do:
https://www.youtube.com/watch?v=K8_KWS3Gs80
https://blogs.adobe.com/creativecloud/new-changed-after-effects-cc-2014/?segment=dva
Both of these options more or less rely on the output module/render queue (the first option allows sending to AME without specifying a preset), which, at least to my knowledge, no longer allows H.264 output (unless you can somehow trick the render queue with a pre-built set of settings before pushing the queue to AME?).
Another option I've found uses BridgeTalk to bypass the output module/render queue and go directly to AME... BUT it primarily involves specifying a file (rather than the currently selected comps), and it requires having ONLY a single comp (the one to be rendered) at the root level of the project: https://community.adobe.com/t5/after-effects/app-project-renderqueue-queueiname-true/td-p/10551189?page=1
Now, as far as code goes, here's the relevant, non-working portion:
function render_comps(){
    var mySelectedItems = [];
    for (var i = 1; i <= app.project.numItems; i++){
        if (app.project.item(i).selected){
            mySelectedItems[mySelectedItems.length] = app.project.item(i);
        }
    }
    for (var i = 0; i < mySelectedItems.length; i++){
        var mySelection = mySelectedItems[i];
        //~ front = app.getFrontend();
        //~ front.addItemToBatch(mySelection);
        //~ enc = eHost.createEncoderForFormat("H.264");
        //~ flag = enc.loadPreset("HD 1080i 25");
        //app.getFrontend().addItemToBatch(mySelection);
        var bt = new BridgeTalk();
        bt.appName = "ame";
        bt.target = "ame";
        //var message = "alert('Hello')";
        //bt.body = message;
        bt.body = "app.getFrontend().addCompToBatch(mySelection)";
        bt.send();
    }
}
This encapsulates a number of the different attempts and approaches I've tried.
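One thing I've realized while experimenting: the BridgeTalk body is evaluated as a plain string inside AME, so a local variable like mySelection doesn't exist on that side; any values would have to be baked into the string itself. Something like the sketch below, though whether addCompToBatch actually accepts a project path this way is exactly the part I haven't been able to confirm:

var projectPath = app.project.file.fsName;
var bt = new BridgeTalk();
bt.target = "ame";
// serialize concrete values into the message string; AME can't see this script's locals
// (toSource() quotes and escapes the path for us)
bt.body = "app.getFrontend().addCompToBatch(" + projectPath.toSource() + ")";
bt.send();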
I've spent about 4-5 hours trying to scour the internet and various resources but so far have come up short. Thanks in advance for the help!
We are working on a Universal Windows App in which we open files (around 20 MB in size) using the code below.
FileOpenPicker openPicker = new FileOpenPicker();
openPicker.FileTypeFilter.Add(".abc");
StorageFile file = await openPicker.PickSingleFileAsync();
if (file == null) return false;
FlowSheetFilePath = file.Path;

LaunchQuerySupportStatus status = await Launcher.QueryFileSupportAsync(file);
if (status == LaunchQuerySupportStatus.Available)
{
    bool didLaunch = await Launcher.LaunchFileAsync(file);
    if (didLaunch)
    {
    }
}
In the above code, is there any way to determine how much time is needed to completely open a file whose size is around 20 MB?
It's not possible. Note that this will depend not only on device configuration/type but also on CPU load and so on.
If your app reads/processes the file, you may implement some kind of ProgressBar that indicates how much work is already done (via an implementation of IProgress<>) and how much is left. However, that still won't give you a time: you can estimate the time left based on what is already done, but that is only an estimate and will surely change as you go. And it won't help you at all with Launcher.LaunchFileAsync(file).
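If it helps, here is a minimal sketch of that IProgress<> idea: it reads the picked file in chunks and reports the completed fraction. The method name and chunk size are arbitrary, and this only tracks your own reading of the file, not the external launch:

// needs: using System; using System.IO; using System.Threading.Tasks; using Windows.Storage;
private async Task ReadWithProgressAsync(StorageFile file, IProgress<double> progress)
{
    using (Stream stream = await file.OpenStreamForReadAsync())
    {
        byte[] buffer = new byte[64 * 1024];
        long total = stream.Length, done = 0;
        int read;
        while ((read = await stream.ReadAsync(buffer, 0, buffer.Length)) > 0)
        {
            done += read;
            progress.Report((double)done / total); // fraction of work done, not time left
        }
    }
}

Called with, say, new Progress<double>(p => progressBar.Value = p * 100), this drives a ProgressBar, but as said above it only ever tells you fractions of work, never time.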
Poking around, I was unable to discover a way to detect hidden files in OS X with node (nodejs).
Of course, we can easily find the ".dot_hidden" files, but on the Mac there are also files/folders that are "protected" system files which most users shouldn't fiddle with. In the Finder GUI they are invisible, or greyed out when hidden files are forced to be shown via "AppleShowAllFiles".
I did discover a reference to UF_HIDDEN : 0x8000 here:
https://developer.apple.com/library/mac/documentation/FileManagement/Conceptual/FileSystemProgrammingGuide/FileSystemDetails/FileSystemDetails.html
Using node's stat, we can return 2 additional bits of info that may provide a clue to the hidden status:

mode: 33188,   // File protection.
ino: 48064969, // File inode number. An inode is a file system data
               // structure that stores information about a file.
I'm not really a hex/binary guy, but it looks like by grabbing the stat's "ino" property we can apply the 0x8000 mask and determine whether the file is being hinted as hidden or not.
I didn't have any success with the 0x8000 mask on mode, but did have some with ino.
Here's what I've got. Checking "ino" returns 0 or 1726; when it's 1726 the file seems to match a hidden file in OS X.
var fs = require("fs");
var dir = "/";
var list = fs.readdirSync(dir);

list.forEach(function(f){
    // easy dot hidden files
    var hidden = (f.substr(0, 1) == ".");
    var ino = 0;
    var syspath = dir + "/" + f;
    if( ! hidden ){
        var stats = fs.statSync(syspath);
        ino = parseInt( stats.ino & 0x8000, 8);
        // ino yields 0 when hidden and 1726 when not?
        if(ino){
            hidden = true;
        }
    }
    console.log(syspath, hidden, ino);
});
So my question is: am I applying the 0x8000 mask properly on the ino value to yield a proper result?
And how would one go about parsing the ino property to get at all the other flags contained within it?
The inode number (stats.ino) is a number which uniquely identifies a file; it has nothing to do with the hidden status of the file. (Indeed, it's possible to set or clear the hidden flag on a file at any time, and this won't change the inode number.)
The hidden flag is part of the st_flags field in the struct stat structure. Unfortunately, it doesn't look like the node.js fs module exposes this value, so you may need to shell out to the stat shell utility if you need to get this information on Mac OS X. (Short version: stat -f%f file will print a file's flags, represented in decimal.)
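For what it's worth, a minimal sketch of shelling out from node to the stat(1) invocation above; the isHidden name is just illustrative:

var execFile = require("child_process").execFile;

function isHidden(path, callback) {
    // stat -f %f prints st_flags as a decimal number
    execFile("/usr/bin/stat", ["-f", "%f", path], function (err, stdout) {
        if (err) return callback(err);
        var flags = parseInt(stdout.trim(), 10);
        callback(null, (flags & 0x8000) !== 0); // UF_HIDDEN
    });
}

isHidden("/Volumes", function (err, hidden) {
    if (!err) console.log("hidden:", hidden);
});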
Suppose I want to invoke some command on all files in a directory and set a watch to invoke that command on all files that get created in that directory. If I do:
while( ( sdi = readdir( d )) != NULL ) { ... }
closedir( d );
/* Files created here will be missed */
inotify_add_watch( ... );
then some files will potentially be missed. If I call inotify_add_watch() before the readdir(), files may be acted on twice (it would require a fair bit of infrastructure to prevent acting twice, and it seems that the edge cases would be difficult to handle). Is there a simple way to avoid having to record the names of all files worked on during the readdir loop and comparing those to the names returned in the inotify_event structure? I can minimize the number of necessary comparisons with:
while( ( sdi = readdir( d )) != NULL ) { ... }
inotify_add_watch( ... );
while( ( sdi = readdir( d )) != NULL ) { /* record name */ ... }
closedir( d );
And usually the second readdir() loop will do nothing, but this feels like a bad hack.
You simply can't. The more you hack, the more race conditions you'll get.
The simplest actually working solution is to set the watch before using opendir(), and keep a list (set) of already used names (or their hashes).
But this isn't perfect either. The user may have a file open in a text editor; you fix it, the user saves it, and the directory contains the unfixed file anyway, even though it's on your list.
The best method would be for the program to actually distinguish processed files by their content. In other words: set the watch, call the command on the readdir() results, then call it on the inotify results, and let the command itself determine whether the file is already fine or not.
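To make the ordering concrete, a bare-bones sketch (Linux, error handling omitted); how you record and look up the already-processed names is left as comments, since the representation is up to you:

/* add the watch first, then scan; remember names handled during
 * the scan so duplicate inotify events for them can be skipped */
#include <dirent.h>
#include <stdio.h>
#include <sys/inotify.h>
#include <unistd.h>

int main(void)
{
    int fd = inotify_init();
    inotify_add_watch(fd, ".", IN_CREATE | IN_MOVED_TO);

    DIR *d = opendir(".");
    struct dirent *sdi;
    while ((sdi = readdir(d)) != NULL) {
        printf("scan: %s\n", sdi->d_name);  /* run command, record name */
    }
    closedir(d);

    char buf[4096];
    for (;;) {
        ssize_t len = read(fd, buf, sizeof buf);
        for (char *p = buf; p < buf + len;) {
            struct inotify_event *ev = (struct inotify_event *)p;
            if (ev->len)
                printf("event: %s\n", ev->name);  /* skip if already recorded */
            p += sizeof(struct inotify_event) + ev->len;
        }
    }
}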
I want to process some data. I have about 25k items in a Dictionary. In a foreach loop, I query a database to get results for each item; the results are added as the value in the Dictionary.
foreach (KeyValuePair<string, Type> pair in allPeople)
{
    MySqlCommand comd = new MySqlCommand("SELECT * FROM `logs` WHERE IP = '" + pair.Key + "' GROUP BY src", con);
    MySqlDataReader reader2 = comd.ExecuteReader();
    Dictionary<string, Dictionary<int, Log>> allViews = new Dictionary<string, Dictionary<int, Log>>();
    while (reader2.Read())
    {
        if (!allViews.ContainsKey(reader2.GetString("src")))
        {
            allViews.Add(reader2.GetString("src"), reader2.GetInt32("time"));
        }
    }
    reader2.Close();
    reader2.Dispose();
    allPeople[pair.Key].View = allViews;
}
I was hoping to do this faster with multi-threading. I have 8 threads available, and CPU usage is about 13%. I just don't know if it will help, because it relies on the MySQL server. On the other hand, maybe 8 threads would open 8 DB connections and so be faster.
Anyway, if multi-threading would help in my case, how? o.O I've never worked with (multiple) threads, so any help would be great :D
MySqlDataReader is stateful - you call Read() on it and it moves to the next row, so each thread needs its own reader, and you need to concoct the queries so that each thread gets different values. That might not be too hard, as you naturally have many queries with different values of pair.Key.
You also need to either have a temp dictionary per thread, and then merge them, or use a lock to prevent concurrent modification of the dictionary.
The above assumes that MySQL will allow a single connection to perform concurrent queries; otherwise you may need multiple connections too.
First, though, I'd see what happens if you only ask the database for the data you need ("SELECT src,time FROM `logs` WHERE IP = '" + pair.Key + "' GROUP BY src") and use GetString(0) and GetInt32(1) instead of using the names to look up src and time; also, only get each value once from the result.
I'm also not sure about the logic - you are not ordering the log events by time, so which one is returned first (and so gets stored in the dictionary) could be any of them.
Something like this logic - where each of N threads only operates on the Nth pair, each thread has its own reader, and nothing actually changes allPeople, only the properties of the values in allPeople:
private void RunSubQuery(Dictionary<string, Type> allPeople, MySqlConnection con, int threadNumber, int threadCount)
{
    int hoppity = 0; // used to hop over the keys not processed by this thread
    foreach (var pair in allPeople)
    {
        // each of the (threadCount) threads only processes every (threadCount)th key
        if ((hoppity % threadCount) == threadNumber)
        {
            // you may need a con per thread, or it might be that you can share con; I don't know
            MySqlCommand comd = new MySqlCommand("SELECT src,time FROM `logs` WHERE IP = '" + pair.Key + "' GROUP BY src", con);
            using (MySqlDataReader reader = comd.ExecuteReader())
            {
                var allViews = new Dictionary<string, Dictionary<int, Log>>();
                while (reader.Read())
                {
                    string src = reader.GetString(0);
                    int time = reader.GetInt32(1);
                    // do whatever to allViews with src and time
                }
                // no thread will be modifying the same pair.Value, so this is safe
                pair.Value.View = allViews;
            }
        }
        ++hoppity;
    }
}
This isn't tested - I don't have MySQL on this machine, nor do I have your database and the other types you're using. It's also rather procedural (kind of how you would do it in Fortran with OpenMPI) rather than wrapping everything up in task objects.
You could launch threads for this like so:
void RunQuery(Dictionary<string, Type> allPeople, MySqlConnection connection)
{
    lock (allPeople)
    {
        const int threadCount = 8; // the number of threads
        // if it takes 18 seconds currently and you're not at .net 4 yet, then you may as well create
        // the threads here as any saving of using a pool will not matter against 18 seconds
        //
        // it could be more efficient to use a pool so that each thread takes a pair off of
        // a queue, as doing it this way means that each thread has the same number of pairs to process,
        // and some pairs might take longer than others
        Thread[] threads = new Thread[threadCount];
        for (int threadNumber = 0; threadNumber < threadCount; ++threadNumber)
        {
            // copy the loop variable so each closure sees its own value
            int n = threadNumber;
            threads[n] = new Thread(new ThreadStart(() => RunSubQuery(allPeople, connection, n, threadCount)));
            threads[n].Start();
        }
        // wait for all threads to finish
        for (int threadNumber = 0; threadNumber < threadCount; ++threadNumber)
        {
            threads[threadNumber].Join();
        }
    }
}
The extra lock held on allPeople is done so that there is a write barrier after all the threads return; I'm not quite sure if it's needed. Any object would do.
Nothing in this guarantees any performance gain - it might be that the MySQL libraries are single threaded, but the server certainly can handle multiple connections. Measure with various numbers of threads.
If you're using .net 4, then you don't have to mess around creating the threads or skipping the items you aren't working on:
// this time using .net 4 parallel; assumes that connection is thread safe
static void RunQuery(Dictionary<string, Type> allPeople, MySqlConnection connection)
{
    Parallel.ForEach(allPeople, pair => RunPairQuery(pair, connection));
}

private static void RunPairQuery(KeyValuePair<string, Type> pair, MySqlConnection connection)
{
    MySqlCommand comd = new MySqlCommand("SELECT src,time FROM `logs` WHERE IP = '" + pair.Key + "' GROUP BY src", connection);
    using (MySqlDataReader reader = comd.ExecuteReader())
    {
        var allViews = new Dictionary<string, Dictionary<int, Log>>();
        while (reader.Read())
        {
            string src = reader.GetString(0);
            int time = reader.GetInt32(1);
            // do whatever to allViews with src and time
        }
        // no iteration will be modifying the same pair.Value, so this is safe
        pair.Value.View = allViews;
    }
}
The biggest problem that comes to mind is that you are going to use multithreading to add values to a dictionary, which isn't thread safe.
You'll have to do something like this to make it work, and you might not get much of a benefit from implementing it this way, as it still has to lock the dictionary object to add a value.
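A minimal sketch of the locking, assuming a shared sync object; the field name is arbitrary:

private static readonly object dictLock = new object();

// inside each worker thread, guard every write to the shared dictionary
lock (dictLock)
{
    allPeople[pair.Key].View = allViews;
}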
Assumptions:
- There is a table People in your database
- There are a lot of people in your database

Each database query adds overhead, and you are doing one DB query for each of the people in your database. I would suggest it is faster to get all the data back in one query than to make repeated calls:
select l.ip,l.time,l.src
from logs l, people p
where l.ip = p.ip
group by l.ip, l.src
Try this with a loop in a single thread; I believe this will be much faster than your existing code.
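For illustration, a sketch of consuming that single query; it assumes the same allPeople dictionary and con connection as in your code, and the grouping logic is left as a comment:

MySqlCommand comd = new MySqlCommand(
    "SELECT l.ip, l.time, l.src FROM logs l, people p WHERE l.ip = p.ip GROUP BY l.ip, l.src", con);
using (MySqlDataReader reader = comd.ExecuteReader())
{
    while (reader.Read())
    {
        string ip = reader.GetString(0);
        int time = reader.GetInt32(1);
        string src = reader.GetString(2);
        // attach (src, time) to allPeople[ip] here, as in your original loop
    }
}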
Within your existing code, another thing you can do is take the creation of the MySqlCommand out of the loop, prepare it in advance, and just change the parameter. This should speed up execution of the SQL. See http://dev.mysql.com/doc/refman/5.0/es/connector-net-examples-mysqlcommand.html#connector-net-examples-mysqlcommand-prepare
MySqlCommand comd = new MySqlCommand("SELECT * FROM `logs` WHERE IP = ?key GROUP BY src", con);
comd.Parameters.Add("?key", "example");
comd.Prepare();
foreach (KeyValuePair<string, Type> pair in allPeople)
{
    comd.Parameters[0].Value = pair.Key;
    // execute the reader here as before
}
If you are using multiple threads, each thread will still need its own command; at least in MS SQL this would still be faster even if you recreated and prepared the statement every time, because the SQL server can cache the execution plan of a parameterised statement.
Before you do anything else, find out exactly where the time is being spent. Check the execution plan of the query. The first thing I'd suspect is a missing index on logs.IP.
18 minutes for something like this seems much too long to me. Even if you could cut the execution time to an eighth by adding more threads (which is unlikely!), you'd still end up using more than 2 minutes. You could probably read the whole 25k rows into memory in less than five seconds and do the necessary processing in memory...
EDIT: Just to clarify, I'm not advocating actually doing this in memory, just saying that it looks like there's a bigger bottleneck here that can be removed.
I think if you are running this on a multi-core machine you could gain benefits from multi-threading.
However, the way I would approach it is to first look at unblocking the thread you are currently using by making asynchronous database calls. The callbacks will execute on background threads, so you will get some multi-core benefit there, and you won't be blocking threads waiting for the DB to come back.
For IO-intensive apps like this example, you are likely to see improved throughput depending on what load the DB can handle. Assuming the DB scales to handle more than one concurrent request, you should be good.
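A sketch of that callback shape, assuming your Connector/NET version exposes the Begin/EndExecuteReader pair (check your version; that's an assumption on my part):

MySqlCommand comd = new MySqlCommand("SELECT src,time FROM `logs` WHERE IP = ?key GROUP BY src", con);
comd.Parameters.Add("?key", pair.Key);
comd.BeginExecuteReader(ar =>
{
    // this callback runs on a background (thread pool) thread
    using (MySqlDataReader reader = comd.EndExecuteReader(ar))
    {
        while (reader.Read()) { /* process src/time here */ }
    }
}, null);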
Thanks everyone for your help. Currently I am using this
for (int i = 0; i < 8; i++)
{
    ThreadPool.QueueUserWorkItem(addDistinctScres, i);
}
ThreadPool to run all the threads. I use the method provided by Pete Kirkham, and I'm creating a new connection per thread.
Times went down to 4 minutes.
Next I'll make something wait for the ThreadPool work items to call back before performing other functions.
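For that waiting step, one minimal sketch using CountdownEvent (.NET 4), assuming addDistinctScres keeps its current WaitCallback-compatible shape:

using (CountdownEvent done = new CountdownEvent(8))
{
    for (int i = 0; i < 8; i++)
    {
        int n = i; // copy so each closure gets its own value
        ThreadPool.QueueUserWorkItem(state =>
        {
            addDistinctScres(n);
            done.Signal(); // count this work item as finished
        });
    }
    done.Wait(); // blocks until all eight items have signalled
}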
I think the bottleneck now is the MySQL server, because the CPU usage has dropped.
@odd parity I thought about that, but the real thing is waaay more than 25k rows. Idk if that'd work.
This sounds like the perfect job for map/reduce. I am not a .NET programmer, but this seems like a reasonable guide:
http://ox.no/posts/minimalistic-mapreduce-in-net-4-0-with-the-new-task-parallel-library-tpl
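I can't vouch for the details, but from that article the shape would be roughly this in .NET 4 PLINQ; FetchViews is just a stand-in for the per-key query:

// needs: using System.Linq;
// map each IP to its query result in parallel, then reduce into one dictionary
var results = allPeople.Keys
    .AsParallel()
    .Select(ip => new { ip, views = FetchViews(ip) })  // map
    .ToDictionary(r => r.ip, r => r.views);            // reduce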