I'm building an Alexa skill for the Echo Show and I've been trying to loop a video (MP4) file. I use the code below to play the video:
if (this.event.context.System.device.supportedInterfaces.VideoApp) {
    this.response.playVideo(LINK);
} else {
    this.response.speak("The video cannot be played on your device. " +
        "To watch this video, try launching the skill from your Echo Show device.");
}
Unfortunately, regardless of which looping approach I try, I either run into a generic Amazon error or the video simply plays once.
I've seen another post that roughly showed how to loop an audio file, but I haven't been able to apply similar logic to video.
Thanks in advance!
The problem is that looping is not yet supported by the VideoApp directive. You can use the APL Video component to loop a video: https://developer.amazon.com/de/docs/alexa-presentation-language/apl-video.html
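For illustration, here is a minimal APL document sketch with a looping video. The source URL is a placeholder, and you would send this document to the device with an Alexa.Presentation.APL.RenderDocument directive; treat the exact layout as a starting point rather than a drop-in answer:

```json
{
  "type": "APL",
  "version": "1.4",
  "mainTemplate": {
    "items": [
      {
        "type": "Video",
        "width": "100%",
        "height": "100%",
        "source": "https://example.com/video.mp4",
        "autoplay": true,
        "repeatCount": -1
      }
    ]
  }
}
```

Setting repeatCount to -1 makes the Video component repeat indefinitely, which is the looping behavior the VideoApp directive lacks.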
I need to get waveform data from a WAV file, but my code returns the wrong waveform (I compared my results with the waveform shown in FL Studio).
This is my code:
path = "/storage/emulated/0/FLM User Files/My Samples/808 (16).wav";
waveb = FileUtil.readFile(path);
waveb = waveb.substring(waveb.indexOf("data") + 4);
byte[] b = waveb.getBytes();
for (int i = 0; i < b.length / 4; i++) {
    map = new HashMap<>();
    map.put("value", String.valueOf((long) ((b[i*4] & 0xFF) + ((b[i*4 + 1] & 0xFF) << 8))));
    map.put("byte", String.valueOf((long) b[i*4]));
    l.add(map);
}
listview1.setAdapter(new Listview1Adapter(l));
((BaseAdapter) listview1.getAdapter()).notifyDataSetChanged();
My results:
FL Studio Mobile results:
I'm not sure I can help, given what I know off of the top of my head, but perhaps this will trigger some ideas in your search for a solution.
It looks to me like you are assuming the sound file is 16-bit stereo, little-endian, and that you are only attempting to inspect one track of the stereo frame. Can you confirm this?
There's at least one way this plan could go awry: the .wav header may be an odd number of bytes in length, so you might not be parsing frame boundaries properly as a result. As an experiment, maybe try a different offset when you index the b[] array, for example b[i*4 + 1] and b[i*4 + 2] instead of b[i*4] and b[i*4 + 1]. This won't solve the general problem of parsing .wav headers, but it could at least get you closer to understanding the situation.
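Beyond the frame offset, one concrete thing worth checking is sign extension. Combining the two bytes as unsigned values, the way the question's code does, yields a value in 0..65535, so negative samples show up as large positive numbers and the plotted waveform looks wrong. (As a side note, in a RIFF file the "data" chunk ID is followed by a 4-byte size field, so the samples start 8 bytes past indexOf("data"), not 4.) A small sketch of the decoding step, using a hypothetical helper named sampleAt:

```java
// Decode one 16-bit little-endian PCM sample with proper sign extension.
// Casting the combined 16-bit value to short restores the sign, so
// 0x8000 becomes -32768 instead of +32768.
public class PcmSample {
    static short sampleAt(byte[] b, int offset) {
        return (short) ((b[offset] & 0xFF) | ((b[offset + 1] & 0xFF) << 8));
    }

    public static void main(String[] args) {
        byte[] data = { (byte) 0xFF, (byte) 0x7F,   // little-endian +32767
                        (byte) 0x00, (byte) 0x80 }; // little-endian -32768
        System.out.println(sampleAt(data, 0)); // 32767
        System.out.println(sampleAt(data, 2)); // -32768
    }
}
```

This doesn't address reading the file as binary rather than as a String, but it should make the decoded values match what FL Studio displays for the same samples.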
It sure looks like Java's AudioInputStream is not available on Android, and every search I've run asking whether there is an Android equivalent has turned up unanswered.
I've used AudioTrack for playback of raw PCM, but I don't know an Android equivalent for reading WAV files. The AudioRecord class and its read() methods look interesting, since the read methods store PCM data in a short array, but I've never used them, and they seem to be hard-coded to the microphone for input.
There used to be a Google Group: andraudio@googlegroups.com. I don't know if it is still around; I used to go there and occasionally ask about things.
Maybe there is code you can use from Oboe or libGDX? The latter makes use of OpenAL and is for cross-platform development, with Android as one of the target platforms. I have not looked into either for this question.
If you do find the answer, it would be great to post it as a solution. This seems to be a matter that many have tried to solve and given up on.
I was trying to build a YouTube streamer in Rust that uses the mpv player. I've managed to extract the URL of a music video from the YouTube search page.
I have set up an input loop to take the user's commands, and actions are taken according to them. When the user enters play thisSong, the music video's URL is extracted and saved as a string. Now I want to start a process running mpv. The output of mpv should be ignored, the player should play music in the background, and the user should return to the prompt, where they can supply commands again.
I tried to set it up, but the problem is that as soon as the mpv child process starts, it starts taking the commands the user types into my main program. I want mpv to ignore that input.
let mut youtube_mpv = match Command::new("mpv")
    .arg(song_url)
    .arg("--no-video")
    .arg("--ytdl-format=worst")
    .arg("--really-quiet")
    .arg("&")
    .stdout(Stdio::null())
    .spawn()
{
    Err(_why) => exit(1),
    Ok(process) => process,
};
println!("Playing {} from YouTube", song_name);
Add .stdin(Stdio::null()).
By default, the subprocess inherits all streams from the parent. If you don't want that, either pipe them (to interact with the subprocess via its stdin/stdout) or null them (to redirect them to/from /dev/null).
Incidentally I don't think this:
.arg("&")
makes any sense. It will pass an & argument to mpv, which mpv will assume is a file, look up, fail to find, and report as an error. Assuming you eventually wait() on the mpv subprocess, it will always report failure.
I intend to make a CLI audio player in Racket, as an exercise to learn Racket and everything else this project entails. I am stuck, though, on how to begin. I can't find any package to play sound files, so I am guessing I may have to make one. How would I go about it?
What you probably want is #lang video (website). It provides a high-level interface for audio playback, allowing you to do something like:
#lang video
(clip "file.mp3")
Since you want to make a little command-line player, you might also want to take a look at its small preview tool.
I ended up doing this the hackish way by calling a shell script from Racket, which is not ideal at all. For reference, I'm putting the code here.
; This creates the initial rsound for a song.
; The rsound is passed around so the whole song
; doesn't have to be decoded from the file every time.
(define (play filepath)
  (cond [(string=? "mp3" (last (regexp-split #rx"\\." filepath)))
         (system* "./mp3-hack" filepath)
         (set! filepath "curr.wav")])
  (define input-pstream (make-pstream))
  (define input-rsound (rs-read filepath))
  (pstream-play input-pstream input-rsound)
  (values input-pstream input-rsound filepath))
And the mp3-hack file just uses ffmpeg:
#!/bin/sh
ffmpeg -i "$1" -acodec pcm_s16le -ac 1 -ar 44100 curr.wav
Yeah, I know. Inelegant, but at least I got it working. I needed it for my hackathon project MPEGMafia
I am using the Vuforia video playback demo with cloud recognition.
I have combined both projects and everything works properly, but currently the video's dimensions follow the detected object. I need a fixed width and height when the video plays.
Can anyone help me?
Thanks in advance.
Well, apparently Vuforia fixes the width and height at the start of the game no matter what the size of the object is. I could not find exactly when this operation happens, but it is done at the beginning of your game. When you change the size of the ImageTarget at runtime, it is not fixed anymore. Add these lines to the OnTrackingFound function of your DefaultTrackableEventHandler.cs:
if (this.name == "WhateverTheNameOfYourRelatedImageTarget" && !isScaled)
{
    // Increase the size however you want; I just added 1 to each dimension
    this.transform.localScale += Vector3.one;
    // Set isScaled so we don't rescale every time the target is found; initially it should be false
    isScaled = true;
}
Good luck!
What I usually do, instead of video playback on the target, is play the video on a Canvas object and hook that object to the DefaultTrackableEventHandler script. When the target is found I call gameObject.SetActive(true), and gameObject.SetActive(false) when the target is lost. With this method the size of the game object is fixed and it stays in a fixed location.
I just made an example you can get here (you have to import it into any project and open the scene Assets/VideoExample/Examples). There you can see a bit more clearly what ScreenSpace - Overlay does; it might be better to just switch to ScreenSpace - Camera in general.
I have configured Sphinx with NetBeans and it's working fine, using a button to start the process. But after it recognizes once, I want to run the process again, and then it gives an error saying "LogMath instance is already present" and that it cannot open the microphone.
Can someone give me a solution? What I want to do is use speech recognition several times in the same form, until it gives the correct answer.
Please help me.
This is the error I get:
"Creating new instance of LogMath while another instance is already present
10:53:27.833 SEVERE microphone Can't open microphone line with format PCM_SIGNED 16000.0 Hz, 16 bit, mono, 2 bytes/frame, big-endian not supported."
You are creating the recognizer again and again, each time you run speech recognition. Make sure that
//Get the spoken text
Result result = recognizer.recognize();
runs against a single recognizer instance. If you construct a new recognizer inside the same event handler every time, it will give this error, so make the recognizer a shared (e.g. public) field, create it only once, and reuse it for each recognition attempt. Then it should work.
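To illustrate the structure of the fix, here is a sketch with a stand-in Recognizer class (the class and its recognize() method mirror the names in the question, but this is not the real Sphinx API; the instance counter only exists to show that construction happens once):

```java
// Sketch of the fix: construct the recognizer once and reuse it for every
// button press, instead of building a new one (and with it a new LogMath
// and microphone line) on each recognition.
public class SpeechForm {
    static int instancesCreated = 0;

    // Stand-in for the Sphinx recognizer; constructing it is the
    // expensive step that must happen only once.
    static class Recognizer {
        Recognizer() { instancesCreated++; }
        String recognize() { return "spoken text"; }
    }

    private Recognizer recognizer;  // shared field, created lazily once

    String onButtonPressed() {
        if (recognizer == null) {
            recognizer = new Recognizer();  // first press only
        }
        return recognizer.recognize();      // safe to call repeatedly
    }

    public static void main(String[] args) {
        SpeechForm form = new SpeechForm();
        form.onButtonPressed();
        form.onButtonPressed();
        form.onButtonPressed();
        System.out.println(instancesCreated); // 1
    }
}
```

The same pattern applies whether the field holds the old-style Sphinx Recognizer or a newer LiveSpeechRecognizer: keep it alive across button presses and only call the recognition method inside the event handler.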