Alexa Audio Player Directive - Speak After Audio Played - node.js

I am trying to make an Alexa fitness app. When the user begins an exercise, Alexa explains the exercise and then plays an audio file. I would then like Alexa to explain the second exercise and play the next audio file, repeating this for around five exercises.
Here is part of my code:
else if (slotValues.area.heardAs == 'lower body' || slotValues.area.heardAs == 'lower' || slotValues.area.heardAs == 'legs') {
    say = 'Great choice! We will get started with a lower body workout. Let\'s begin with a warm-up. The first exercise is ' + lowerWorkout1[0].name + '. ' + lowerWorkout1[0].description + ' 3. 2. 1.';
    handlerInput.responseBuilder.addAudioPlayerPlayDirective('REPLACE_ALL', sLowerWorkout1.url, sLowerWorkout1.token, 0, null, sLowerWorkout1.metadata);
    // Then explain the second exercise
    // Play the second audio
    // Then explain the third exercise
    // Play the third audio
}
// ...
return responseBuilder
    .speak(say)
    //.reprompt('try again, ' + say)
    .getResponse();
I have tried adding a second say variable after the audioPlayerPlayDirective, but it only outputs the second .speak() and then plays the first audio file. I have also tried adding another .speak() after getResponse(), but that gives me an error.
Does anyone have any ideas as to how I can do this?
Thanks,
Martin

Once your audio player starts, your skill's session ends. That means you are either playing MP3s, or you are in the skill session and can speak. Not both.
I guess a workaround would be to have everything, including the spoken explanations, as audio clips.

How to change an input() to lower case?

I'm new to Python and trying my best to learn. At the moment I'm programming along with a YouTube video, but I got stuck on this piece of code, where I'm trying to change user input to lowercase and compare it against a list to see whether the item is available. Every time I run the code I get Not Available. Here is the code:
stock = ['Keyboard', 'Mouse', 'Headphones', 'Monitor']
productName = input('Which product would you like to look up: ').lower()
if productName in stock:
    print('Available')
else:
    print('Not Available')
Change your stock array to be all lowercase, like so:
stock = ['keyboard', 'mouse', 'headphones', 'monitor']
Because you always lowercase the user input, while the stock items in the list are capitalized, they can never match in your if statement. String comparison in Python is case-sensitive (as it is in nearly every programming language).
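Alternatively, if you want to keep the original capitalization in the list (for display, say), you can lowercase both sides at comparison time instead. A minimal sketch (the helper name is my own):

```python
stock = ['Keyboard', 'Mouse', 'Headphones', 'Monitor']

def is_available(product_name):
    # Lowercase both sides of the comparison, so the stock list
    # can keep its original capitalization for display purposes.
    return product_name.lower() in [item.lower() for item in stock]

print('Available' if is_available('keyBOARD') else 'Not Available')  # Available
```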

QR code data does not show up for SOME of my QR codes

I'm working on my own little Python powered OMR multiple choice marking program. A big challenge for me!!
The first step is to create QR codes for each student. This works OK: I generate 162 QR codes for one course.
I want to check them, make sure the data is in the code.
The data should be like this, a student number and a name:
2025010105:段赵元
2025010106:段卓含
2025010108:范玉虹
As you can see, for example, the data for student 2025010107 does not show in my loop. However, using my mobile phone to scan the qr code for 2025010107, I see the correct data. The data is there.
QRfiles = os.listdir(savepathQRcodes)
QRfiles.sort()
for f in QRfiles:
    img = cv2.imread(savepathQRcodes + f)
    detector = cv2.QRCodeDetector()
    data, bbox, straight_qrcode = detector.detectAndDecode(img)
    #text = data.split(':')
    #print('number, name: ', text[0], text[1])
    print(data)
Do you have any idea why the data for some of the qr codes does not show up? The data is there, confirmed with my mobile phone. (My guess is a computer memory problem)
From 162 qr codes, 20 or so do not show data in this loop. Most of the qr codes show the data in this loop.
However, if I open the folder with the qr codes, I can scan the codes which do not show data with my phone and I see the correct data. Same goes for all other qr codes I make.
I will need the data when I get around to marking, so I need to be sure that the qr codes can be read successfully in Python.
Also, it is not always the same qr codes which show no data. I have made them and read them many times for testing.
I have the latest version of opencv.
One thing I noticed: when I reduced the QR code border size to border=1, fewer codes failed to show data. I don't know if that is a clue to the problem.
Maybe you can try adding a loop?
for f in QRfiles:
    img = cv2.imread(savepathQRcodes + f)
    detector = cv2.QRCodeDetector()
    data = None
    while not data:
        data, bbox, straight_qrcode = detector.detectAndDecode(img)
    #text = data.split(':')
    #print('number, name: ', text[0], text[1])
    print(data)
Or maybe data == '', so first identify what the blank data actually is (perhaps None, or an empty string), then do the loop.
That could be a possible fix, but unfortunately I do not know what causes this. If it happens to random QR codes, it might not be your fault.

Slicing a dictionary to print specific information

This is part of something larger I'm working on, but I needed to figure out how to do it for this section in order to move on.
Note: This is not homework, only a personal project.
Roxanne = {'Location': 'Rustboro City', 'Type': 'Rock',
           'Pokemon Used': 'Geodude: Lvl 12, Geodude: Lvl 12, Nosepass: Lvl 15'}
user = input('What would you like to learn about: Gym Leaders, Current Team, Natures, or Evolution ')
# Note: `user == 'Gym Leaders' or 'gym leaders'` would always be true, because
# a non-empty string is truthy; compare against the lowercased input instead.
if user.lower() == 'gym leaders':
    response = input('Who would you like to learn about? '
                     'Roxanne, Brawly, Wattson, Flannery, Norman, Winona, The Twins, or Juan? ')
    if response.lower() == 'roxanne':
        ask = input('What would you like to know about? Location, Type, Pokemon Used ')
        if ask.lower() == 'location':
            # I need to print the Location part of the Roxanne dictionary here, i.e. 'Rustboro City'
I'm just testing to see if I should just print the entire 'Roxanne' dictionary based on user input, or if I should print individual info from it, so that a user can find specific information.
Finally, the only Python experience I have is the introductory course I'm taking at uni right now, so nothing too difficult or complicated. The challenge I set myself was to stay within the boundaries of the course.
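Both options come down to plain dictionary access: index with a key to print one entry, or loop over items() to print everything. A minimal sketch using the dictionary above:

```python
Roxanne = {'Location': 'Rustboro City', 'Type': 'Rock',
           'Pokemon Used': 'Geodude: Lvl 12, Geodude: Lvl 12, Nosepass: Lvl 15'}

# Print one specific piece of information.
print(Roxanne['Location'])  # Rustboro City

# Or print every key/value pair for a full summary.
for key, value in Roxanne.items():
    print(key + ': ' + value)
```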

PyEphem: How to test if an object is above the horizon?

I am writing a Python script that gives basic data for all the planets, the Sun and the Moon. My first function divides the planets between those that are above the horizon, and those that are not risen yet:
planets = {
    'mercury': ephem.Mercury(),
    'venus': ephem.Venus(),
    'mars': ephem.Mars(),
    'jupiter': ephem.Jupiter(),
    'saturn': ephem.Saturn(),
    'uranus': ephem.Uranus(),
    'neptune': ephem.Neptune()
}
def findVisiblePlanets(obs):
    visiblePlanets = dict()
    notVisiblePlanets = dict()
    for obj in planets:
        planets[obj].compute(obs)
        if planets[obj].alt > 0:
            visiblePlanets[obj] = planets[obj]
        else:
            notVisiblePlanets[obj] = planets[obj]
    return (visiblePlanets, notVisiblePlanets)
This works all right; the tuple I receive from findVisiblePlanets corresponds to the actual sky for the given obs.
But in another function, I need to test the altitude of each planet. If it's above 0, the script displays 'setting at xxx', and if it's under 0, the script displays 'rising at xxx'. Here is the code:
if bodies[obj].alt > 0:
    print(' Sets at', setTime.strftime('%H:%M:%S'), deltaSet)
else:
    print(' Rises at', riseTime.strftime('%H:%M:%S'), deltaRise)
So I'm using the exact same condition, except that this time it doesn't work. I am sure I have the correct object behind bodies[obj], as the script displays name, magnitude, distance, etc. But for some reason, the altitude (.alt) is always below 0, so the script only displays the rising time.
I tried print(bodies[obj].alt), and I receive a negative figure in the form of '-0:00:07.8' (example). I tried using int(bodies[obj].alt) for the comparison but this ends up being a 0. How can I test if the altitude is negative? Am I missing something obvious here?
Thanks for your help.
I think I had a similar problem once. As I understand it, PyEphem advances your observer's time when you call next_rising() or next_setting() on a body: it looks for the time point at which the object is first above/below the horizon. If you then read body.alt, it will always be just that little bit below/above the horizon.
You have to store your observer's time somehow and set it again after calculating the setting/rising times.

Sphinx 4 Transcription Time Index

How do I get time index (or frame number) in Sphinx 4 when I set it to transcribe an audio file?
The code I'm using looks like this:
audioURL = ...
AudioFileDataSource dataSource = (AudioFileDataSource) cm.lookup("audioFileDataSource");
dataSource.setAudioFile(audioURL, null);
Result result;
while ((result = recognizer.recognize()) != null) {
    Token token = result.getBestToken();
    //DoubleData data = (DoubleData) token.getData();
    //long frameNum = data.getFirstSampleNumber(); // data seems to always be null
    String resultText = token.getWordPath(false, false);
    ...
}
I tried to get the time of transcription from the result/token objects, e.g. similar to what a subtitler does. I found Result.getFrameNumber() and Token.getFrameNumber(), but they appear to return the number of frames decoded, not the time (or frame) at which the result was found in the context of the entire audio file.
I looked at AudioFileDataSource.getDuration() (which is private) and the Recognizer classes, but haven't figured out how to get the needed transcribed time-index.
Ideas? :)
The frame number is the time multiplied by the frame rate, which is 100 frames/second.
Anyway, please find the patch for subtitles demo which returns timings here:
http://sourceforge.net/mailarchive/forum.php?thread_name=1380033926.26218.12.camel%40localhost.localdomain&forum_name=cmusphinx-devel
The patch applies to subversion trunk, not to the 1.0-beta version.
Please note that this part is under major refactoring, so the API will be obsolete soon. However, I hope you will be able to create subtitles with just a few calls, without all the current complexity.
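Given the 100 frames/second rate mentioned above, turning a frame number into a subtitle-style timestamp is simple arithmetic. A quick sketch in Python (the helper name is my own):

```python
FRAMES_PER_SECOND = 100  # Sphinx 4 frame rate, per the answer above

def frame_to_timestamp(frame_number):
    """Convert a decoder frame number to an HH:MM:SS.mmm timestamp string."""
    total_seconds, frames = divmod(frame_number, FRAMES_PER_SECOND)
    milliseconds = frames * 1000 // FRAMES_PER_SECOND
    minutes, seconds = divmod(total_seconds, 60)
    hours, minutes = divmod(minutes, 60)
    return '%02d:%02d:%02d.%03d' % (hours, minutes, seconds, milliseconds)

print(frame_to_timestamp(36150))  # 00:06:01.500
```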
