I have been playing around with this project from GitHub, and so far so good.
https://github.com/bespoken/streamer
I would like to enhance it to play a long-form static audio file when the user asks for it. For example, if the user says "Ask Streamer to play the National Anthem", I would like to play just that file. Does anyone have a good idea of the best way to implement this simple thing?
I tried a few approaches and I am having trouble getting the end result. For one, I do not want the static file's data to be saved in DynamoDB, but I still want the podcast information to be saved.
I added an intent for 'Anthem' and sample utterances for that intent. In the constants.js file, I added a new "STATIC_MODE" and tried to replicate how PLAY_MODE is implemented throughout.
Here is the issue I am running into: whenever I stop the anthem file from playing and later invoke the podcast player, it starts playing the anthem instead of the podcasts. I tried commenting out the saveState call in audioEventHandlers.js for the STATIC_MODE handler, yet when I ask to play a podcast, it still plays the anthem.
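Concretely, the handler block I added for STATIC_MODE in audioEventHandlers.js looks roughly like this (a simplified sketch that mirrors the existing PLAY_MODE handlers; the state names follow my constants.js):

var Alexa = require('alexa-sdk');
var constants = require('./constants');

// Audio event handlers I added for the new STATIC_MODE state
var staticAudioEventHandlers = Alexa.CreateStateHandler(constants.states.STATIC_MODE, {
    'PlaybackStarted': function () {
        this.attributes['playbackFinished'] = false;
        this.emit(':saveState', true);
    },
    'PlaybackStopped': function () {
        this.attributes['offsetInMilliseconds'] = this.event.request.offsetInMilliseconds;
        // this.emit(':saveState', true);   // commented out, but the anthem still plays later
    }
});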
Any help would be appreciated!
This is probably bad code, but I have never written JavaScript before; I just tried to follow the project on GitHub to extend the functionality to my liking.
I created the Streamer project that you reference. In the interest of providing a simpler example on how to use the Alexa AudioPlayer, I also created this project:
https://github.com/bespoken/super-simple-audio-player
I believe it does exactly what you requested, which is to simply play a single, static audio file. I created it because I wanted to have a less complicated example to show people how the AudioPlayer works. Hope you find it helpful!
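If it helps as a reference, this is roughly the shape of the response a skill sends in that case: a single AudioPlayer.Play directive pointing at the file. It is only a sketch of a bare Lambda handler, not the exact code from either repository, and the URL and token are placeholders:

exports.handler = function (event, context, callback) {
    // Whatever triggered the skill (launch or a play intent), answer with one
    // AudioPlayer.Play directive that streams a single static file.
    callback(null, {
        version: '1.0',
        response: {
            shouldEndSession: true,
            directives: [
                {
                    type: 'AudioPlayer.Play',
                    playBehavior: 'REPLACE_ALL',
                    audioItem: {
                        stream: {
                            // the file must be served over HTTPS
                            url: 'https://example.com/audio/national-anthem.mp3',
                            token: 'anthem-token',
                            offsetInMilliseconds: 0
                        }
                    }
                }
            ]
        }
    });
};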
What I have:
different video templates without a logo
a UI where users will select a template from that list
a UI where users can upload their logo and add an mp3 file and text
What should be done:
a new video generated based on this configuration.
What I found:
I can use ffmpeg and combine it all, but I am not sure how to create such animations with the logo. Maybe there is an existing GUI.
I also found https://github.com/inlife/nexrender. It works with Adobe After Effects, though, and seems like what I need.
Example of the result: https://www.introbrand.com/logo-opening-mobiles.html
I'm not looking for a ready-made solution, just a few words on how to approach this and what to use.
This is all completely new to me, so if you could suggest something or just tell me the best way to go about it, I'd appreciate it.
From what you described, I would say this is what you are looking for: https://creatomate.com/blog/video-rendering-with-nodejs-and-ffmpeg
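If you go the ffmpeg route, here is a rough sketch of what the composition step can look like from Node.js, using the fluent-ffmpeg package (an assumption on my part; the article above drives ffmpeg from Node.js as well). File names, positions and the text are placeholders for whatever your UI collects:

const ffmpeg = require('fluent-ffmpeg');

// Overlay the user's logo and text on a template clip and swap in their mp3.
ffmpeg('template.mp4')            // input 0: the selected template video
  .input('logo.png')              // input 1: the uploaded logo
  .input('music.mp3')             // input 2: the uploaded audio
  .complexFilter([
    // place the logo in the top-left corner of the template
    '[0:v][1:v] overlay=20:20 [branded]',
    // draw the user's text near the bottom (needs an ffmpeg build with drawtext support)
    "[branded] drawtext=text='My Company':fontcolor=white:fontsize=48:x=(w-text_w)/2:y=h-100 [out]"
  ])
  .outputOptions(['-map', '[out]', '-map', '2:a', '-shortest'])
  .on('end', () => console.log('render finished'))
  .on('error', (err) => console.error('render failed', err))
  .save('result.mp4');

The animated logo intros in your example are usually pre-rendered templates (which is what nexrender drives through After Effects); ffmpeg on its own is better suited to simpler compositing such as overlays, fades and text.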
I'm brand new to programming in general, but specifically to Processing, and I need some help on a final project for my digital media class! Using a QR Code Reader library, I've made a sketch in Processing that successfully reads any QR code I throw at it. For the next step, I wanted to find a way to turn the URL my sketch displays into a working hyperlink. I want to be able to scan the code, display the URL, and then click on the URL to have it open in my browser. I have steps one and two working, but if anyone knows how to do step three, I'm in desperate need of some advice!
This is the first time in my life I've ever had to code (I'm studying creative writing), and I have yet to get a response on any other forums, so I'd really appreciate the help! Thank you!
I don't know about clicking on text, but there is a way to open a URL in a browser!
You use the link() command. I can't actually find it on the reference page, but it takes a single argument, the URL you want to open.
Here is the example from Processing.org.
I want to develop a Google Action (ideally using Dialogflow).
But the Action needs some features for which I couldn't find a solution, and I'm not sure whether it is even possible.
My use cases:
The Action starts an mp3. The user stops and exits the Action, and if the user starts the Action again, I would like to resume the mp3.
But I couldn't find a way to determine the "offset" at which the user stopped the mp3.
And even if I had this offset, I didn't find a way to tell the Google Assistant that I want to play the mp3 but start it at, e.g., minute 51.
I would be really surprised if the possibilities of Google Actions were so extremely restricted.
Can someone confirm that these use cases are not possible, or give me a hint on how to do it?
I only found this, which is restricted to starting an mp3 from the beginning:
https://developers.google.com/actions/assistant/responses#media_responses
Kind Regards
Stefan
To start an mp3 file at a certain point you can try the SSML audio tag and its clipBegin attribute.
clipBegin - A TimeDesignation that is the offset from the audio source's beginning to start playback from. If this value is greater than or equal to the audio source's actual duration, then no audio is inserted.
https://developers.google.com/actions/reference/ssml
To use this, your mp3 file has to be hosted over HTTPS.
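For illustration, here is a rough sketch of a Dialogflow fulfillment that returns this kind of SSML, using the actions-on-google Node.js client library (the intent name, URL and offset are placeholders; 3060s corresponds to minute 51):

const functions = require('firebase-functions');
const { dialogflow } = require('actions-on-google');

const app = dialogflow();

// Hypothetical intent that resumes an episode at a previously stored offset.
app.intent('Resume Episode', (conv) => {
  // In a real Action the offset would be read from storage; it is hard-coded here.
  conv.close(
    '<speak>' +
    '<audio src="https://example.com/episode.mp3" clipBegin="3060s">' +
    'Sorry, the audio could not be loaded.' +
    '</audio>' +
    '</speak>'
  );
});

exports.fulfillment = functions.https.onRequest(app);

Keep in mind that the Assistant caps how long an SSML audio clip may play, so for a full-length recording the media response approach described below may be the better fit.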
Hope that this helps.
You could use Conversational Actions (instead of Dialogflow), where media responses allow setting a start_offset:
....
"content": {
  "media": {
    "start_offset": "2.12345s",
    ...
For more details see
https://developers.google.com/assistant/conversational/prompts-media#MediaResponseProperties
Conversational Actions also seem to be the "newest" technology for Google Actions, or at least the most recently released.
I'm trying to play an mp3 that I will bundle with my app. I added it to the assets, just like the images, and added it to the pubspec, but I can't make it play.
I tried two libraries, rxlabz/audioplayer and evrone/flutter_audio (medrecorder_audio), to no avail.
The first I tried configuring with isLocal: true and added every conceivable callback, but it just returned the error:
Attempt to call getDuration without a valid mediaplayer
I couldn't relate this to Flutter/Dart, but I found it is a common error on Android, related to not calling the prepare method. I checked their code, though, and they seem to be calling it. I thought it might be asynchronous, so I waited a while after creating the player before calling start, but no luck.
The latter just gives the error "startPlay: FAIL" when I start playback.
It seems these libraries are made to play audio that is already on the user's phone, but I'm not sure. I want to play my own custom mp3 bundled as an asset. Could someone shed some light?
Thanks to @aziza's link, and a lot of experimenting, I found the problem.
The file to be played needs to be an MP3; a WAV just won't work and will give a very cryptic error.
This class shows in detail how to do it.
I have a collection of photographs (about 30,000) which I'd like to put online. I've tried doing this before, over the years, with static image galleries, applications such as Gallery2, and self-rolled scripts. None have worked that well, as my requirements are fiddly, but it still seems like this should be a solved problem.
My photos are currently organised into folders named YYYY-MM-DD short album title, using Digikam.
I need a system that:
Is Free software, is essentially feature-complete, and has an active developer community.
Allows new photos and albums to be added and updated automatically with little more manual intervention than rsyncing the source directory on my computer to the web server, and rescanning.
Allows visitors to leave comments
Allows reCAPTCHA or equivalent spam filtering and bulk moderation of these comments.
Reads tags from the IPTC Keywords field.
If it finds a tag named "friends", requires the user to enter a password to view.
If it finds a tag named "family", requires the user to enter a different password to view.
If it finds a tag named "private", does not display the photo at all, or even better, does not upload it to the live web server.
Reads descriptions from the IPTC Caption field.
Creates sane permalinks, e.g. http://example.com/2009/03/28/shortalbumtitle/IMG_0001.jpg
I acknowledge that I may be asking for something that doesn't exist, but I hope it does.
I acknowledge that answers may be something like "use Django and code the bits that don't already exist yourself", in which case do you have any tips? :)
Thanks.
Use Django and code the bits that don't already exist yourself.
Seriously. I was going to write that and was tempted not to when I saw you'd written it yourself, but it really does make the most sense if you have any familiarity with it!
I'd start with django-photologue. Get a basic gallery with tagging and comments working. You'll need a couple of photologue's optional dependencies.
Then I'd write a custom import wrapper that allows you to rsync to a directory and update your library.
Comments are handled internally (through photologue, I think), but if not, there are plenty of comment apps that "just work". There is a reCAPTCHA script that works as just another form field.
PIL can read IPTC data.
The URL structure is up to you =)
I'm finally getting around to doing this. I'm using a local Python script that extracts image metadata (tags, captions and timestamps) with pyexiv2, rotates each image according to its EXIF orientation tag if appropriate using PIL, and exports a hierarchy of files to a temporary directory. From there, rsync uploads it all to my host, and a remote Python script (actually a Django app) imports the metadata into the Django database.