I'm implementing a shouldWaitForLoadingOfRequestedResource handler for AVPlayer/AVURLAsset HLS videos and found weird behavior in tvOS.
As far as I can tell, it can request the same resources multiple times, including the "root" manifest, second-level manifests, and segments (and I'm not talking about quality switches; it requests exactly the same resources).
At the same time, each request is served well enough by my code – the video plays fine.
Also, exactly the same code works fine on iOS – no duplicated requests.
In which cases could AVURLAsset/AVAssetResourceLoader request the same resources multiple times on tvOS?
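For context, here is a reduced sketch of how my delegate is set up (the URL-keyed cache is only there to show that the duplicate requests get served identically; loadManifestOrKey(for:) stands in for my real fetch code, and content information handling is omitted):

import AVFoundation

final class ResourceLoaderDelegate: NSObject, AVAssetResourceLoaderDelegate {
    // Responses cached by URL, so the repeated tvOS requests for the
    // same manifest/segment are answered identically.
    private var cache = [URL: Data]()

    func resourceLoader(_ resourceLoader: AVAssetResourceLoader,
                        shouldWaitForLoadingOfRequestedResource loadingRequest: AVAssetResourceLoadingRequest) -> Bool {
        guard let url = loadingRequest.request.url else { return false }

        if let cached = cache[url] {
            loadingRequest.dataRequest?.respond(with: cached)
            loadingRequest.finishLoading()
            return true
        }

        // Placeholder for the real manifest/segment/key fetch.
        loadManifestOrKey(for: url) { [weak self] data in
            self?.cache[url] = data
            loadingRequest.dataRequest?.respond(with: data)
            loadingRequest.finishLoading()
        }
        return true
    }

    private func loadManifestOrKey(for url: URL, completion: @escaping (Data) -> Void) {
        completion(Data()) // stand-in; my real code loads the resource here
    }
}

The delegate is attached with asset.resourceLoader.setDelegate(_:queue:) on a serial queue, the same way as on iOS.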
I have the same problem. I can add that I'm using the Apple sample app.
public func resourceLoader(_ resourceLoader: AVAssetResourceLoader, shouldWaitForLoadingOfRequestedResource loadingRequest: AVAssetResourceLoadingRequest) -> Bool {
    print("\(#function) was called in AssetLoaderDelegate with loadingRequest: \(loadingRequest)")
    return shouldLoadOrRenewRequestedResource(resourceLoadingRequest: loadingRequest)
}
This is the debug print I get:
resourceLoader(_:shouldWaitForLoadingOfRequestedResource:) was called in AssetLoaderDelegate with loadingRequest:
{ URL: skd://817015000008100f172b492d3b25f5dda31c59d090b21000 }, request ID = 3,
content information request = AVAssetResourceLoadingContentInformationRequest: 0x14f6dd070, content type = "(null)", content length = 0, byte range access supported = NO, disk caching permitted = NO, renewal date = (null),
data request = AVAssetResourceLoadingDataRequest: 0x14f67ae50, requested offset = 0, requested length = 9223372036854775807, requests all data to end of resource = YES, current offset = 0
I can see that the request ID value in the print is different each time.
I've been trying to figure this out for the past day or two with minimal results. Essentially, what I want to do is send my selected comps in After Effects to Adobe Media Encoder via script and, using information about them (substrings of their comp name, width, etc. - all of which I already know how to get), specify the appropriate AME preset based on the conditions met. The two methods that I've found so far won't work for what I'm trying to do:
https://www.youtube.com/watch?v=K8_KWS3Gs80
https://blogs.adobe.com/creativecloud/new-changed-after-effects-cc-2014/?segment=dva
Both of these options more or less rely on the output module/render queue (with the first option allowing sending to AME without specifying a preset), which, at least to my knowledge, no longer allows H.264 file types (unless you can somehow trick the render queue with a pre-created set of settings before pushing the queue to AME?).
Another option that I've found involves using BridgeTalk to bypass the output module/render queue and go directly to AME... BUT that primarily involves specifying a file (rather than the currently selected comps), and requires having ONLY a single comp (to be rendered) at the root level of the project: https://community.adobe.com/t5/after-effects/app-project-renderqueue-queueiname-true/td-p/10551189?page=1
Now, as far as code goes, here's the relevant, non-working portion:
function render_comps() {
    // Collect the comps currently selected in the Project panel.
    var mySelectedItems = [];
    for (var i = 1; i <= app.project.numItems; i++) {
        if (app.project.item(i).selected)
            mySelectedItems[mySelectedItems.length] = app.project.item(i);
    }

    for (var i = 0; i < mySelectedItems.length; i++) {
        var mySelection = mySelectedItems[i];

        // Earlier attempts via the AME frontend/exporter objects:
        //~ front = app.getFrontend();
        //~ front.addItemToBatch(mySelection);
        //~ enc = eHost.createEncoderForFormat("H.264");
        //~ flag = enc.loadPreset("HD 1080i 25");
        //app.getFrontend().addItemToBatch(mySelection);

        // Current attempt: hand the comp to AME through BridgeTalk.
        var bt = new BridgeTalk();
        bt.appName = "ame";
        bt.target = "ame";
        //var message = "alert('Hello')";
        //bt.body = message;
        bt.body = "app.getFrontend().addCompToBatch(mySelection)";
        bt.send();
    }
}
That encapsulates a number of different attempts and things that I've tried.
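My current suspicion is that the BridgeTalk body is just a string that gets evaluated inside AME, so mySelection (an AE object) doesn't exist on that side. If so, something like the following sketch - with concrete values baked into the string - would be the direction to try (whether addCompToBatch actually expects a saved project path rather than a comp object is an assumption on my part):

// Variation: serialize values into the BridgeTalk message, since the
// body string is evaluated inside AME, not inside After Effects.
// (Requires the project to have been saved.)
var projectPath = app.project.file.fsName.replace(/\\/g, "\\\\");

var bt = new BridgeTalk();
bt.target = "ame";
// Assumption: addCompToBatch() wants a path string on the AME side.
bt.body = 'app.getFrontend().addCompToBatch("' + projectPath + '")';
bt.onError = function (msg) { alert(msg.body); }; // surface AME-side errors
bt.send();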
I've spent about 4-5 hours trying to scour the internet and various resources but so far have come up short. Thanks in advance for the help!
I have a simple Chrome extension, and I'm trying to do some page analysis through content.js. This is the code:
console.log("content.js running.."); //debug
var fromDOM = new XMLSerializer().serializeToString(document);
console.log(fromDOM)
var i = 0;
var item;
for (item in fromDOM) {
var x = fromDOM[item];
if (x == "/"){
i++;
console.log(i);
chrome.runtime.sendMessage({lala: i});
}
}
This code searches for any occurrence of "/" in the page and sends a message to a background script (which currently does nothing).
This for loop alone causes any tab I load to load slower than usual, affecting user performance.
What am I doing wrong here? Can't I do heavy lifting in content scripts, or is there a better way that I'm missing?
Assuming you want to process the current HTML of the page:
Use document.documentElement.innerHTML
Use string methods like indexOf to get the position of each / without enumerating the long HTML string character by character.
Accumulate all the positions in an array and send it in one message, since sending a message is an expensive operation that involves an internal JSON.stringify + JSON.parse.
Don't use console.log when devtools is open as it does a lot of extra processing to format the messages. And generally prefer debugging interactively - there's a panel in devtools to inspect and set breakpoints in content scripts so you can debug the code, view the variables, and so on.
const html = document.documentElement.innerHTML;
const slashes = [];
let pos = -1;
do {
    pos = html.indexOf('/', pos + 1);
    if (pos >= 0) {
        slashes.push(pos);
    }
} while (pos >= 0);
chrome.runtime.sendMessage({lala: slashes});
Now your background listener will receive an array of character positions - not really useful per se, but that's just an example. You can put more info inside the array to make it more meaningful.
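On the receiving side, the background listener could be as simple as this sketch (the logging is just a placeholder):

// background.js - minimal receiver for the message sent above
chrome.runtime.onMessage.addListener((message, sender) => {
  if (message.lala) {
    console.log('Received', message.lala.length, 'slash positions from',
                sender.tab ? sender.tab.url : 'unknown tab');
  }
});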
I want to change max-results while retrieving the comments of a YouTube video. This is my code:
YouTubeService service = new YouTubeService("CLIENT_ID");
String str = "http://gdata.youtube.com/feeds/api/videos/" + videoId;
YouTubeQuery youtubeQuery = new YouTubeQuery(new URL(str));
youtubeQuery.setMaxResults(50);
youtubeQuery.setStartIndex(1);
String videoEntryUrl = youtubeQuery.getUrl().toString();
System.out.println(videoEntryUrl + " *************");
VideoEntry videoEntry = service.getEntry(new URL(videoEntryUrl), VideoEntry.class);
While creating the VideoEntry object in the last line, it gives this error:
Exception in thread "main" com.google.gdata.util.InvalidEntryException: The 'max-results' parameter is not supported on this resource
<errors xmlns='http://schemas.google.com/g/2005'><error><domain>GData</domain><code>unsupportedQueryParam</code><internalReason>The 'max-results' parameter is not supported on this resource</internalReason></error></errors>
My code prints the query, so when the error occurs, the query looks like this:
http://gdata.youtube.com/feeds/api/videos/v_wzBsZLLaE?start-index=1&max-results=40
Why is the max-results parameter not supported in this situation?
You are requesting video information for a single video, so using start-index and max-results on that resource does not make sense. (If it were allowed, both could only be 1.)
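If what you actually want is to page through the video's comments, query the comments feed instead; max-results applies there. A rough sketch with the GData Java client (the /comments feed URL and the CommentFeed/CommentEntry classes are from the v2 API; treat this as an untested outline):

import java.net.URL;

import com.google.gdata.client.youtube.YouTubeService;
import com.google.gdata.data.youtube.CommentEntry;
import com.google.gdata.data.youtube.CommentFeed;

public class CommentsExample {
    public static void main(String[] args) throws Exception {
        YouTubeService service = new YouTubeService("CLIENT_ID");

        // max-results/start-index work on feeds, so query the video's
        // comments feed rather than the single video entry.
        String videoId = "v_wzBsZLLaE";
        String commentsUrl = "http://gdata.youtube.com/feeds/api/videos/"
                + videoId + "/comments?start-index=1&max-results=50";

        CommentFeed feed = service.getFeed(new URL(commentsUrl), CommentFeed.class);
        for (CommentEntry comment : feed.getEntries()) {
            System.out.println(comment.getTitle().getPlainText());
        }
    }
}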
The question sounds very simple, but I couldn't find a way to check whether a track URI is correct.
For example, the normal procedure to play a track from a given valid track URI spotify:track:5Z7ygHQo02SUrFmcgpwsKW is:
1) get an sp_link* via sp_link_create_from_string(const char *track_uri)
2) get an sp_track* via sp_link_as_track(sp_link*)
3) sp_track_add_ref(sp_track*)
4) if sp_track_error() returns SP_ERROR_OK (or SP_ERROR_IS_LOADING followed by SP_ERROR_OK after the metadata_updated callback), call sp_session_player_load() and sp_session_player_play() to load and play the track
5) sp_track_release() and sp_session_player_unload() at the end of the track
Now, if the user made a typo in the Spotify track URI and tried to play spotify:track:5Z7ygHQo02SUrFmcgpwsKQ (the last character Q is a typo; it should be W), none of the possible error checks during the procedure return an error:
step 1) sp_link is not NULL
step 2) sp_track* is not NULL
step 3) returns SP_ERROR_OK
step 4) sp_track_error() returns SP_ERROR_IS_LOADING, metadata_updated never gets called, and of course the program hangs.
The checks before the track is loaded can't detect that a track URI is invalid. The sp_track_availability API sounds like a way, but it always returns NOT AVAILABLE while the track is not loaded, and if the track URI is not correct, the track will never be loaded.
Did I miss something or misunderstand the APIs?
You can check whether the URI is valid by checking the link type, which is defined as:
enum sp_linktype {
    SP_LINKTYPE_INVALID = 0,
    SP_LINKTYPE_TRACK = 1,
    SP_LINKTYPE_ALBUM = 2,
    SP_LINKTYPE_ARTIST = 3,
    SP_LINKTYPE_SEARCH = 4,
    SP_LINKTYPE_PLAYLIST = 5,
    SP_LINKTYPE_PROFILE = 6,
    SP_LINKTYPE_STARRED = 7,
    SP_LINKTYPE_LOCALTRACK = 8,
    SP_LINKTYPE_IMAGE = 9
}
That gives us the possibility to perform link checks like so:
check whether sp_link_type(link) == SP_LINKTYPE_TRACK
create an sp_track* from sp_link_as_track(link), or use sp_track_get_playable(sp_session*, sp_track*)
wait for it to be loaded, then check its availability (see the sketch below)
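Here is a rough sketch of those checks in C (the function names are mine; the availability check assumes sp_track_get_availability from the newer libspotify releases, and in a real app you would call it from the metadata_updated callback once the track has loaded):

#include <stdbool.h>
#include <libspotify/api.h>

/* Returns true if the string parses as a track link at all;
 * on success *out_link holds the link (caller releases it). */
bool is_track_uri(const char *uri, sp_link **out_link)
{
    sp_link *link = sp_link_create_from_string(uri);
    if (link == NULL || sp_link_type(link) != SP_LINKTYPE_TRACK) {
        if (link != NULL)
            sp_link_release(link);
        return false;
    }
    *out_link = link;
    return true;
}

/* Call once the track is loaded, e.g. from metadata_updated. */
bool is_track_playable(sp_session *session, sp_track *track)
{
    if (!sp_track_is_loaded(track) || sp_track_error(track) != SP_ERROR_OK)
        return false;
    return sp_track_get_availability(session, track)
           == SP_TRACK_AVAILABILITY_AVAILABLE;
}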
If you copy the track URI spotify:track:5Z7ygHQo02SUrFmcgpwsKW into the browser's address bar and press Enter, it will automatically switch to the Spotify app and jump to that track.
I am streaming small movies (1-3 MB) off my website into my iPhone app. I have a Slicehost web server; I think it's a "500MB slice". I'm not sure off the top of my head how this translates to bandwidth, but I can figure that out later.
My experience with MPMoviePlayerLoadStateDidChangeNotification is not very good.
I get much more reliable results with the old MPMoviePlayerContentPreloadDidFinishNotification.
If I get an MPMoviePlayerContentPreloadDidFinishNotification, the movie will play without stuttering, but if I use MPMoviePlayerLoadStateDidChangeNotification, the movie frequently stalls.
I'm not sure which load state to check for:
enum {
    MPMovieLoadStateUnknown = 0,
    MPMovieLoadStatePlayable = 1 << 0,
    MPMovieLoadStatePlaythroughOK = 1 << 1,
    MPMovieLoadStateStalled = 1 << 2,
};
MPMovieLoadStatePlaythroughOK seems to be what I want (based on the description in the documentation):
MPMovieLoadStatePlaythroughOK
Enough data has been buffered for playback to continue uninterrupted.
Available in iOS 3.2 and later.
but the load state NEVER gets set to this in my app.
Am I missing something? Is there a better way to do this?
Just making sure that you noticed it's a bit flag, not a plain value:
MPMoviePlayerController *mp = [aNotification object];
NSLog(@"LoadState: %ld", (long)mp.loadState);
if (mp.loadState & MPMovieLoadStatePlaythroughOK)
{
    // Do stuff
}
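For completeness, here is a minimal sketch of the observing side (the method names are just illustrative):

- (void)observeLoadStateForPlayer:(MPMoviePlayerController *)moviePlayer
{
    [[NSNotificationCenter defaultCenter] addObserver:self
                                             selector:@selector(loadStateDidChange:)
                                                 name:MPMoviePlayerLoadStateDidChangeNotification
                                               object:moviePlayer];
}

- (void)loadStateDidChange:(NSNotification *)aNotification
{
    MPMoviePlayerController *mp = [aNotification object];
    if (mp.loadState & MPMovieLoadStatePlaythroughOK) {
        [mp play]; // enough data is buffered to play through
    }
}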