I am using jPlayer, which plays different audio files based on the user input. Every time a user enters an input, I call a REST web service to retrieve the audio file to play. The response from the REST service is a byte[].
What I am trying to achieve is to keep this byte array in memory instead of writing it to a file, and to use that byte[] with jPlayer. I am not sure how to get jPlayer to play a byte[].
var file = [[${audiofile}]];

$(document).ready(function () {
    $("#jquery_jplayer_1").jPlayer({
        ready: function () {
            $(this).jPlayer("setMedia", {
                wav: file
            });
        }
    });
});
The variable file evaluates to a byte[]. When trying to play the audio, I see the following error in the console:
Uncaught TypeError: Object
-1,-5,-112,-60,0,3,19,124,-73.......
I would appreciate any suggestions.
Thanks
Unfortunately, this feature looks unimplemented right now. From the source code of jPlayer 2.9.2, starting at line 1945:
setMedia: function(media) {
/* media[format] = String: URL of format. Must contain all of the supplied option's video or audio formats.
* media.poster = String: Video poster URL.
* media.track = Array: Of objects defining the track element: kind, src, srclang, label, def.
* media.stream = Boolean: * NOT IMPLEMENTED * Designating actual media streams. ie., "false/undefined" for files. Plan to refresh the flash every so often.
*/
Please notice the last line: media.stream is marked NOT IMPLEMENTED. Wish I had better news.
REF: https://github.com/happyworm/jPlayer/blob/master/src/javascript/jplayer/jquery.jplayer.js
Some folks came close to a solution, but not with jPlayer. REF: How to play audio byte array (not file!) with JavaScript in a browser
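If you just need the bytes to play, the gist of the approaches in that thread is to hand the browser the raw data directly, for example via a Blob object URL, bypassing jPlayer entirely. A minimal sketch, assuming bytes holds the (signed) byte values your file variable evaluates to and that the payload really is WAV data:
// Not jPlayer: wrap the bytes in a Blob and play it with a plain Audio element.
var unsigned = Uint8Array.from(bytes);                  // signed Java bytes -> 0..255
var blob = new Blob([unsigned], { type: 'audio/wav' }); // adjust the MIME type if needed
var url = URL.createObjectURL(blob);

var audio = new Audio(url);
audio.play();
// Call URL.revokeObjectURL(url) once you are done with it.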
I am using a third-party library that requires I pass a Uint8List to display an image in a PDF. Their examples have me obtain the image as a File and read the bytes out.
PdfBitmap(file.readAsBytesSync())
This system is great when I am obtaining an image from a server, but I want to display an image stored in local assets.
What I tried to implement was this code:
Future<File> getImageFileFromAssets(String path) async {
  final byteData = await rootBundle.load('assets/$path');
  final file = File('${(await getTemporaryDirectory()).path}/$path');
  await file.writeAsBytes(
      byteData.buffer.asUint8List(byteData.offsetInBytes, byteData.lengthInBytes));
  return file;
}
This returns the error 'No implementation found for method getTemporaryDirectory on channel plugins.flutter.io/path_provider'.
If anyone knows how to get an asset image as a File on web, it would be greatly appreciated.
Why would you want to write byte data to a file just to read it again? Just pass your byte data directly to the constructor that requires it. This should be changed in both your web and mobile implementations, as it will end up being far faster.
final byteData = await rootBundle.load('assets/$path');
PdfBitmap(byteData.buffer.asUint8List())
I want to use server-to-server CloudKit JS to save a record with an Asset field.
The Asset field is an m4a audio file. After saving, the audio file is corrupted and cannot be played.
Apple's documentation is not clear about the Asset field:
In a record that is being saved to the database, the value of an Asset field must be a window.Blob type. In the code fragment above, the type of the assetFile variable is window.File.
Docs:
https://developer.apple.com/documentation/cloudkitjs/cloudkit/database/1628735-saverecords
But in Node.js there is no Blob or File, so I filled the field with a Buffer, like this:
var dstFile = path.join(__dirname,"../test.m4a");
var data = fs.readFileSync(dstFile);
let buffer = Buffer.from(data);
var rec = {
  recordType: "MyAttachment",
  fields: {
    ext: { value: ".m4a" },
    file: { value: buffer }
  }
};
//console.debug(rec);
mydatabase.newRecordsBatch().create(rec).commit().then(function (response) {
  if (response.hasErrors) {
    console.log(">>> saveAttachFile record failed");
    console.warn(response.errors[0]);
  } else {
    var createdRecord = response.records[0];
    console.log(">>> saveAttachFile record success:", createdRecord);
  }
});
The record is saved successfully.
But when I download the audio from icloud.developer.apple.com/dashboard, the audio file is corrupted and cannot be played.
What's wrong with it? Thank you for any replies.
I was having the same problem and have found a working solution!
Remembering that CloudKitJS needs you to define your own fetch method, I implemented a custom one to see what was going on. I then attached a debugger on the custom fetch to inspect the data that was passing through it.
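For reference, CloudKitJS on Node lets you supply that fetch implementation through CloudKit.configure, so a debugging wrapper can be as simple as the sketch below (container configuration omitted; inspectingFetch is just a placeholder name):
const fetch = require('node-fetch');

// Log every request CloudKitJS makes so the outgoing asset bodies can be inspected.
const inspectingFetch = (url, options) => {
  console.log('CloudKitJS ->', url, options && options.body);
  return fetch(url, options);
};

CloudKit.configure({
  services: { fetch: inspectingFetch, logger: console },
  containers: [ /* your container and server-to-server key config */ ]
});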
After stepping through the caller, I found that all asset values are transformed using their toString() method, but only when the library runs in Node.js (which it detects by the absence of the global window object).
When toString() is called on a Buffer, its contents are encoded to UTF-8 (by default), which causes binary assets to become malformed. If you're using node-fetch for your fetch implementation, it supports Buffer and stream.Readable, so this toString() call does nothing but harm.
The most unobtrusive fix I've found is to swap the toString() method on any Buffer or stream.Readable instances passed as asset field values. You should probably use stream.Readable, by the way, so that you don't load the entire asset into memory when uploading; a stream-based sketch follows further down.
Anyway, here's what it looks like in practice:
// Put this somewhere in your implementation
const swizzleBuffer = (buffer) => {
  buffer.toString = () => buffer;
  return buffer;
};
// Use this asset value instead
{ asset: swizzleBuffer(fs.readFileSync(path)) }
Please be aware that this workaround mutates a Buffer in an ugly way (since Buffer apparently can't be extended). It's probably a good idea to design an API that doesn't take Buffer arguments, so that you only mutate instances you create yourself and avoid unintended side effects elsewhere in your code.
Also, be sure to vendor (make a local copy of) CloudKitJS in your project, as this behavior may change in the future.
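For completeness, a stream-based variant of the same workaround might look like the following sketch (it assumes your fetch implementation accepts stream.Readable bodies, as node-fetch does; swizzleStream is an illustrative name and dstFile is the path from the question):
const fs = require('fs');

// Same trick as swizzleBuffer, applied to a readable stream so the whole
// asset never has to be loaded into memory.
const swizzleStream = (stream) => {
  stream.toString = () => stream;
  return stream;
};

// Use this as the asset field value instead
{ file: { value: swizzleStream(fs.createReadStream(dstFile)) } }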
ORIGINAL ANSWER
I ran into the same problem and solved it by encoding my data using Base64. It appears that there's a bug in their SDK which mangles Buffer instances containing non-ASCII characters (which, um, seems problematic).
Anyway, try something like this:
const assetField = { value: Buffer.from(data.toString('base64'), 'ascii') }
Side note:
You'll need to decode the asset(s) on the device before using them. There's no way to do this efficiently without writing your own routines, as the methods included on Data / NSData instances require all data to be in memory.
This is a problem with CloudKitJS (and not the native CloudKit client / service), so the other option is to write your own routine to upload assets.
Neither of these options seems particularly great, but rolling your own at least means there aren't extra steps for clients to take in order to use the asset.
I'm trying to get a multi-audio HLS stream working on a v3 Google Cast custom receiver app. The master playlist of the stream refers to several video renditions of different resolution and two alternative audio tracks:
#EXTM3U
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="aac",LANGUAGE="de",NAME="TV Ton",DEFAULT=YES, AUTOSELECT=YES,URI="index_1_a.m3u8"
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="aac",LANGUAGE="de",NAME="Audiodeskription",DEFAULT=NO, AUTOSELECT=NO,URI="index_2_a.m3u8"
#EXT-X-STREAM-INF:AUDIO="aac",BANDWIDTH=383000,RESOLUTION=320x176,CODECS="avc1.4d001f, mp4a.40.2",CLOSED-CAPTIONS=NONE
index_0_av.m3u8
...more renditions
#EXT-X-STREAM-INF:AUDIO="aac",BANDWIDTH=3697000,RESOLUTION=1280x720,CODECS="avc1.4d001f, mp4a.40.2",CLOSED-CAPTIONS=NONE
index_6_av.m3u8
The video plays fine in both the sender and the receiver app, and I can see both audio tracks in the sender app, but when casting to the receiver there are no controls for changing the audio tracks.
When accessing the AudioTracksManager's getTracks() method while intercepting the LOAD message like so...
playerManager.setMessageInterceptor(
  cast.framework.messages.MessageType.LOAD, loadRequestData => {
    loadRequestData.media.hlsSegmentFormat = cast.framework.messages.HlsSegmentFormat.TS;
    const audioTracksManager = playerManager.getAudioTracksManager();
    console.log(audioTracksManager.getTracks());
    console.log('Load request: ', loadRequestData);
    return loadRequestData;
  });
I get an error saying:
Uncaught Error: Tracks info is not available.
Maybe unrelated, but super weird: I can console.log the request's media prop and see its tracks prop (an array with the expected 1 video and 2 audio tracks); however, if I try to access the tracks property in the LOAD message interceptor, I get undefined.
I currently cannot look into the iOS sender code yet, so I tried to eliminate error sources on the receiver end. The thing is:
I always assumed that the receiver identifies alternative audio tracks on its own when loading HLS playlists. Is this assumption correct or can the AudioTracksManager only access tracks that have been previously defined in a sender app?
I couldn't find a clear statement on that in the Google Cast reference...
OK, feeling stupid for the time I spent on this, but I'm finally able to answer my own question. I didn't realize that I was accessing the AudioTracksManager in the wrong place, namely in the LOAD message interceptor instead of in a PLAYER_LOAD_COMPLETE event listener (as is properly documented here).
After moving my logic into that event listener, I was able to access and programmatically set my audio tracks.
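Roughly, a sketch of the working setup (the track name is taken from the playlist above; adapt the selection logic to your needs):
const context = cast.framework.CastReceiverContext.getInstance();
const playerManager = context.getPlayerManager();

playerManager.addEventListener(
  cast.framework.events.EventType.PLAYER_LOAD_COMPLETE, () => {
    // Track info is available here, unlike in the LOAD interceptor.
    const audioTracksManager = playerManager.getAudioTracksManager();
    const tracks = audioTracksManager.getTracks();
    console.log(tracks);

    // e.g. switch to the audio description track from the playlist
    const audioDescription = tracks.find(t => t.name === 'Audiodeskription');
    if (audioDescription) {
      audioTracksManager.setActiveById(audioDescription.trackId);
    }
  });

context.start();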
So to answer my original question: Yes, the receiver app automatically identifies alternative audio tracks from an HLS playlist.
Google has somewhat recently rolled out the ability to insert audio files from your Drive into Slides with various playback options.
I cannot find any documentation on how to insert a file through Google Apps Script, but I can do so by going through the available menu options. I tried using the insertVideo method but got an error:
"Exception: The parameters (DriveApp.File) don't match the method signature for SlidesApp.Slide.insertVideo."
Here is a general function I'm trying to get to work (NOOB disclaimer goes here):
function uploadAudioToCurrentSlide() {
  var presentation = SlidesApp.getActivePresentation();
  var currentSlide = presentation.getSlides()[0];
  var audioFile = DriveApp.getFileById('idofaudiofileindrive');
  currentSlide.insertVideo(audioFile);
}
Any help is most appreciated!
You want to insert an audio file stored in Google Drive into Google Slides using Google Apps Script.
Issue and workaround:
I think the reason for your issue is that the File object is passed directly to insertVideo. insertVideo expects a URL or a Video object, not a File object, which is why the error occurs.
At present, when insertVideo is used with a URL, the video is required to be a publicly shared YouTube URL.
Also, it seems that an audio file cannot be inserted directly.
Unfortunately, this seems to be the current specification. So, as a workaround, how about the following flow?
First, convert the audio file to a video file such as MP4. As a test, this can be done with another site or tool. (I'm not sure about the file type of your audio file.)
Then insert the converted MP4 file on Google Drive using the Slides API.
When the Slides API is used, you can insert a video file stored in Google Drive into Google Slides. In the sample script below, the CreateVideoRequest of the batchUpdate method of the Slides API is used.
Sample script:
Before you run the script, please enable the Slides API in Advanced Google services.
function myFunction() {
  var fileId = "###"; // Please set the file ID of the converted video file on Google Drive.
  var presentation = SlidesApp.getActivePresentation();
  var currentSlide = presentation.getSlides()[0];
  var resource = {requests: [{createVideo: {source: "DRIVE", id: fileId, elementProperties: {pageObjectId: currentSlide.getObjectId()}}}]};
  Slides.Presentations.batchUpdate(resource, presentation.getId());
}
Note:
If you can upload the audio file to YouTube and share it publicly, you can use your original script with the YouTube URL.
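For example, a sketch using insertVideo(videoUrl) with a placeholder URL (replace it with your own publicly shared video):
function insertYouTubeVideoToCurrentSlide() {
  var presentation = SlidesApp.getActivePresentation();
  var currentSlide = presentation.getSlides()[0];
  currentSlide.insertVideo("https://www.youtube.com/watch?v=###"); // placeholder URL
}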
References:
insertVideo(videoUrl)
Advanced Google services
Method: presentations.batchUpdate
CreateVideoRequest
I'm using the SpotifyAPI-NET on GitHub from JohnnyCrazy to play and pause songs on my Spotify desktop client. This works fine.
Now I want to change the playing position of the currently playing song. So I only want to say something like "SetPlayingPosition(64)" to play the current song from position "01:04". It seems that SpotifyLocalAPI doesn't support this feature.
To play and pause a song the API uses a message with the following format:
http://127.0.0.1:4381/remote/pause.json?pause=true&ref=&cors=&_=1520448230&oauth=oauth&csrf=csrf
I tried to find a summary of possible commands in this format, but I didn't find anything.
Is there something like http://127.0.0.1:4381/remote/seek.json... that I can use to seek to a specific position?
EDIT:
I tried to write my own method in the RemoteHandler class in the local portion of the SpotifyAPI. With this method I can set the position in the current playback.
Here's my code:
internal async Task SendPositionRequest(double playingPositionSec) // The desired playback position in seconds
{
    StatusResponse status = GetNewStatus(); // Get the current status of the local desktop API
    string trackUri = "spotify:track:" + status.Track.TrackResource.ParseUri().Id; // The URI of the current track
    TimeSpan playingPositionTimeSpan = TimeSpan.FromSeconds(playingPositionSec);
    string playingPosStr = playingPositionTimeSpan.ToString(@"mm\:ss"); // Convert the playing position to a string (format mm:ss)
    string playingContext = "spotify:artist:1EfwyuCzDQpCslZc8C9gkG";
    await SendPlayRequest(trackUri + "#" + playingPosStr, playingContext);
    if (!status.Playing) { await SendPauseRequest(); }
}
I need to call the SendPlayRequest() method with the correct playingContext because when the current song is part of a playlist and you call SendPlayRequest() without the context, the next song isn't from the playlist anymore.
But you can see that I use a fixed context at the moment.
So my question is now: How can I get the context (playlist, artist, ...) of the currently played song with the SpotifyLocalAPI?
The SeekPlayback method of the library you mentioned lets you seek through playback on whatever device your user is listening on. You can find the docs here.
Seeking playback is not currently possible using the Spotify Local API portion of that library.