I have a Pi running with 4 GPIO input ports.
The target is: if one of the 4 buttons is pressed, an mp3 file should be played, i.e. button1 = file1.mp3, button2 = file2.mp3 and so on.
It doesn't seem too complicated, but 'the devil is in the detail' :-)
This is my code for 2 buttons at the moment:
#!/usr/bin/env python
#coding: utf8
import time
from time import sleep
import os
import RPi.GPIO as GPIO

GPIO.setmode(GPIO.BCM)
GPIO.setup(24, GPIO.IN, pull_up_down=GPIO.PUD_DOWN)
GPIO.setup(23, GPIO.IN, pull_up_down=GPIO.PUD_DOWN)

def my_callback_1(channel):
    print("Button 23 Pressed")
    os.system('omxplayer -o both /root/1.mp3')
    sleep(10)

def my_callback_2(channel):
    print("Button 24 Pressed")
    os.system('omxplayer -o both /root/2.mp3')
    sleep(10)

GPIO.add_event_detect(23, GPIO.RISING, callback=my_callback_1, bouncetime=200)
GPIO.add_event_detect(24, GPIO.RISING, callback=my_callback_2, bouncetime=200)

try:
    while 1:
        time.sleep(0.5)
except KeyboardInterrupt:
    # exits when you press CTRL+C
    print(" Bye Bye")
except:
    print("Other error or exception occurred!")
finally:
    GPIO.cleanup()  # this ensures a clean exit
The sleep time is set to the length of the longer mp3 file.
It's working, but not like I expected.
The problem is: when a button is pushed while a file is already playing, the Pi keeps the button push in a buffer and plays that file again after the current one has finished.
Imagine somebody pushes the same button 5 times: the same mp3 file will be played 5 times in a row.
So I'm searching for a solution like this:
While a file is playing, all input buttons should be "disabled" for that time. When the mp3 file has finished playing, the buttons should be "re-enabled" and another button can be pushed.
How can I do this? Thanks for your help.
I don't see a simple way to do this without adding threads. Note that you are implicitly already using threads behind the scenes with add_event_detect(), which runs the callbacks in separate threads. If add_event_detect doesn't support suppressing the button presses (which I don't think it does), then you can thread it in one of two ways - using python threads or processes, or a simpler way by using bash.
To use background processes in bash, remove your add_event_detect calls, and then in your while loop, you'd do something like (untested):
started_23 = 0
while True:
    if GPIO.input(23) and time.time() - started_23 > 10:
        started_23 = time.time()
        print("Button 23 Pressed")
        os.system('omxplayer -o both /root/1.mp3 &')
    time.sleep(0.200)
Note the ampersand added to the system() call - that will run omxplayer in the background. The started_23 variable keeps track of when the sound was started, in order to prevent replaying it for another 10 seconds. You may want to increase that to cover the length of the file. You can similarly add code for GPIO 24 in the same loop.
Thanks for the help, Brian. You pointed me in the right direction!
I got it. It's working now as I described above.
Here is my code:
try:
    vtc1 = 8       # Time Audiofile 1
    vtc2 = 11      # Time Audiofile 2
    vtc3 = 0       # Time Audiofile 3
    vtc4 = 0       # Time Audiofile 4
    vtc = 0        # Current AudioFileTime
    started_t = 0  # Started Time
    while True:
        if GPIO.input(23) and time.time() - started_t > vtc:
            vtc = vtc1
            started_t = time.time()
            print("Button 23 Pressed")
            os.system('omxplayer -o both /root/1.mp3 &')
        time.sleep(0.200)
        if GPIO.input(24) and time.time() - started_t > vtc:
            vtc = vtc2
            started_t = time.time()
            print("Button 24 Pressed")
            os.system('omxplayer -o both /root/2.mp3 &')
        time.sleep(0.200)
except KeyboardInterrupt:
    print(" Bye Bye")
finally:
    GPIO.cleanup()
The problem was that the second file was started before the first had finished, because the code did not know the length of the currently playing file. So I put the length of the audio file into the "vtc" value when that file is started.
If you push another button, the elapsed playing time is compared against the current file length "vtc". That's it.
Thanks again.
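As a side note, one way to avoid hardcoding the file lengths at all is to start omxplayer with subprocess.Popen and ask whether it is still running; this is only a rough sketch under that assumption (subprocess, proc and busy() are not part of the original code, and the GPIO setup from the first snippet is assumed to have run already):

import subprocess
import time
import RPi.GPIO as GPIO

proc = None  # handle of the currently running omxplayer, if any

def busy():
    # True while the last started player process is still running
    return proc is not None and proc.poll() is None

while True:
    if not busy():
        if GPIO.input(23):
            print("Button 23 Pressed")
            proc = subprocess.Popen(['omxplayer', '-o', 'both', '/root/1.mp3'])
        elif GPIO.input(24):
            print("Button 24 Pressed")
            proc = subprocess.Popen(['omxplayer', '-o', 'both', '/root/2.mp3'])
    time.sleep(0.2)

While the player process is alive, presses are simply ignored, which matches the "disabled while playing" behaviour asked for above.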
Related
I'm new to the Pico, having only used Arduinos before. I'm trying to make a simple rotary encoder program that displays a value from 0-12 on a 0.96" OLED display and lights up that many LEDs on a strip.
I wanted to try out using multiple cores, as interrupts made the LEDs not run smoothly when I had them just cycling (everything would be paused while the encoder was being turned).
However, when I run this program, aside from the encoder being bouncy, the Pico crashes maybe 30 seconds into running the program, making a mess on the display and stopping the code. I feel like there's some rule of using multiple cores that I completely ignored.
Here's the code:
from machine import Pin, I2C
from ssd1306 import SSD1306_I2C
import _thread
import utime
import neopixel

#general variables section
numOn = 0

#Encoder section
sw = Pin(12, Pin.IN, Pin.PULL_UP)
dt = Pin(11, Pin.IN)
clk = Pin(10, Pin.IN)
encodeCount = 0
lastClk = clk.value()
lastButton = False

#Encoder thread
def encoder():
    while True:
        #import stuff that I shouldn't need to according to tutorials, but it doesn't work without it
        global encodeCount
        global lastClk
        global clk
        import utime
        if clk.value() != lastClk:
            if dt.value() != clk.value():
                encodeCount += 1
            else:
                encodeCount -= 1
            if encodeCount > 12:
                encodeCount = 0
            elif encodeCount < 0:
                encodeCount = 12
            lastClk = clk.value()
            print(encodeCount)
        utime.sleep(0.01)

_thread.start_new_thread(encoder, ())

#LED section
numLed = 12
ledPin = 26
led = neopixel.NeoPixel(Pin(ledPin), numLed)

#Screen Section
WIDTH = 128
HEIGHT = 64
i2c = I2C(0, scl=Pin(17), sda=Pin(16), freq=200000)
oled = SSD1306_I2C(WIDTH, HEIGHT, i2c)

#loop
while True:
    for i in range(numLed):
        led[i] = (0, 0, 0)
    for i in range(encodeCount):
        led[i] = (100, 0, 0)
    led.write()
    #Display section
    oled.fill(0)
    oled.text(f'numLed: {numOn}', 0, 0)
    oled.text(f'counter: {encodeCount}', 0, 40)
    oled.show()
I'm probably doing something stupid here, I just don't know what.
Also, any suggestions on simply debouncing the encoder would be very helpful.
Any help would be appreciated! Thanks!
Update: The code above bricked the Pico, so clearly I'm doing something very, very wrong. Commenting out the _thread start line stopped it from crashing again, so the problem is there.
Same issue with very similar code on a Raspberry Pi Pico W. I specify the 'W' because my code works without crashing on an earlier Pico.
I'm wondering if the low-level networking functions might be using the 2nd core and causing a conflict.
I'm adding thread locking to see if passing a baton helps; the link below has an example, e.g.:
# create a lock
lck = _thread.allocate_lock()

# Function that will block the thread with a while loop
# which will simply display a message every second
def second_thread():
    while True:
        # We acquire the traffic light lock
        lck.acquire()
        print("Hello, I'm here in the second thread, writing every second")
        utime.sleep(1)
        # We release the traffic light lock
        lck.release()
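For the baton to actually be passed, the code on the first core has to take the same lock around its access to whatever the two cores share. A rough sketch of that other half, where shared_count and the sleep intervals are assumptions rather than anything from the code above:

import _thread
import utime

lck = _thread.allocate_lock()
shared_count = 0  # hypothetical state shared between the two cores

def second_thread():
    global shared_count
    while True:
        lck.acquire()        # take the baton
        shared_count += 1    # touch shared state only while holding the lock
        lck.release()        # hand the baton back
        utime.sleep(0.01)

_thread.start_new_thread(second_thread, ())

# main loop on the first core
while True:
    lck.acquire()            # wait for the baton before reading shared state
    value = shared_count
    lck.release()
    print(value)
    utime.sleep(1)

Keeping every read and write of the shared variables inside acquire()/release() pairs is the point; whether that also cures the Pico W crash is what the poster above is still testing.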
I am working on building a personal assistant in Python. The Python SpeechRecognition library has built-in Snowboy support, but it appears to be broken. Here is my code. (Note that the problem is that the listen() function never returns.)
import speech_recognition as sr
from SnowboyDependencies import snowboydecoder

r = sr.Recognizer()

def get_text():
    with sr.Microphone(sample_rate=48000) as source:
        audio = r.listen(source, snowboy_configuration=("SnowboyDependencies", {hotword_path}))  # PROBLEM HERE
    try:
        text = r.recognize_google(audio).lower()
    except:
        text = None
        print("err")
    return text
I did some digging in SpeechRecognition and I have found where the problem exists, but I am not sure how to fix it because I am not very familiar with the intricacies of the library. The issue is that sr.listen never returns. It appears that the Snowboy hotword detection is working 100%, because the program progresses past that point when I say my hotword. Here is the source code; I have added my own comments to try to describe the issue further. I added three comments and all of them are enclosed in a multi-line box of #s.
def listen(self, source, timeout=None, phrase_time_limit=None, snowboy_configuration=None):
    """
    Records a single phrase from ``source`` (an ``AudioSource`` instance) into an ``AudioData`` instance, which it returns.

    This is done by waiting until the audio has an energy above ``recognizer_instance.energy_threshold`` (the user has started speaking), and then recording until it encounters ``recognizer_instance.pause_threshold`` seconds of non-speaking or there is no more audio input. The ending silence is not included.

    The ``timeout`` parameter is the maximum number of seconds that this will wait for a phrase to start before giving up and throwing an ``speech_recognition.WaitTimeoutError`` exception. If ``timeout`` is ``None``, there will be no wait timeout.

    The ``phrase_time_limit`` parameter is the maximum number of seconds that this will allow a phrase to continue before stopping and returning the part of the phrase processed before the time limit was reached. The resulting audio will be the phrase cut off at the time limit. If ``phrase_timeout`` is ``None``, there will be no phrase time limit.

    The ``snowboy_configuration`` parameter allows integration with `Snowboy <https://snowboy.kitt.ai/>`__, an offline, high-accuracy, power-efficient hotword recognition engine. When used, this function will pause until Snowboy detects a hotword, after which it will unpause. This parameter should either be ``None`` to turn off Snowboy support, or a tuple of the form ``(SNOWBOY_LOCATION, LIST_OF_HOT_WORD_FILES)``, where ``SNOWBOY_LOCATION`` is the path to the Snowboy root directory, and ``LIST_OF_HOT_WORD_FILES`` is a list of paths to Snowboy hotword configuration files (`*.pmdl` or `*.umdl` format).

    This operation will always complete within ``timeout + phrase_timeout`` seconds if both are numbers, either by returning the audio data, or by raising a ``speech_recognition.WaitTimeoutError`` exception.
    """
    assert isinstance(source, AudioSource), "Source must be an audio source"
    assert source.stream is not None, "Audio source must be entered before listening, see documentation for ``AudioSource``; are you using ``source`` outside of a ``with`` statement?"
    assert self.pause_threshold >= self.non_speaking_duration >= 0
    if snowboy_configuration is not None:
        assert os.path.isfile(os.path.join(snowboy_configuration[0], "snowboydetect.py")), "``snowboy_configuration[0]`` must be a Snowboy root directory containing ``snowboydetect.py``"
        for hot_word_file in snowboy_configuration[1]:
            assert os.path.isfile(hot_word_file), "``snowboy_configuration[1]`` must be a list of Snowboy hot word configuration files"

    seconds_per_buffer = float(source.CHUNK) / source.SAMPLE_RATE
    pause_buffer_count = int(math.ceil(self.pause_threshold / seconds_per_buffer))  # number of buffers of non-speaking audio during a phrase, before the phrase should be considered complete
    phrase_buffer_count = int(math.ceil(self.phrase_threshold / seconds_per_buffer))  # minimum number of buffers of speaking audio before we consider the speaking audio a phrase
    non_speaking_buffer_count = int(math.ceil(self.non_speaking_duration / seconds_per_buffer))  # maximum number of buffers of non-speaking audio to retain before and after a phrase

    # read audio input for phrases until there is a phrase that is long enough
    elapsed_time = 0  # number of seconds of audio read
    buffer = b""  # an empty buffer means that the stream has ended and there is no data left to read
    ##################################################
    ###### THE ISSUE IS THAT THIS LOOP NEVER EXITS ######
    ##################################################
    while True:
        frames = collections.deque()

        if snowboy_configuration is None:
            # store audio input until the phrase starts
            while True:
                # handle waiting too long for phrase by raising an exception
                elapsed_time += seconds_per_buffer
                if timeout and elapsed_time > timeout:
                    raise WaitTimeoutError("listening timed out while waiting for phrase to start")

                buffer = source.stream.read(source.CHUNK)
                if len(buffer) == 0: break  # reached end of the stream
                frames.append(buffer)
                if len(frames) > non_speaking_buffer_count:  # ensure we only keep the needed amount of non-speaking buffers
                    frames.popleft()

                # detect whether speaking has started on audio input
                energy = audioop.rms(buffer, source.SAMPLE_WIDTH)  # energy of the audio signal
                if energy > self.energy_threshold: break

                # dynamically adjust the energy threshold using asymmetric weighted average
                if self.dynamic_energy_threshold:
                    damping = self.dynamic_energy_adjustment_damping ** seconds_per_buffer  # account for different chunk sizes and rates
                    target_energy = energy * self.dynamic_energy_ratio
                    self.energy_threshold = self.energy_threshold * damping + target_energy * (1 - damping)
        else:
            # read audio input until the hotword is said
            #############################################################
            ######## THIS IS WHERE THE HOTWORD DETECTION OCCURS. HOTWORDS ARE DETECTED.
            ######## I KNOW THIS BECAUSE THE PROGRAM PROGRESSES PAST THIS PART.
            #############################################################
            snowboy_location, snowboy_hot_word_files = snowboy_configuration
            buffer, delta_time = self.snowboy_wait_for_hot_word(snowboy_location, snowboy_hot_word_files, source, timeout)
            elapsed_time += delta_time
            if len(buffer) == 0: break  # reached end of the stream
            frames.append(buffer)

        # read audio input until the phrase ends
        pause_count, phrase_count = 0, 0
        phrase_start_time = elapsed_time
        while True:
            # handle phrase being too long by cutting off the audio
            elapsed_time += seconds_per_buffer
            if phrase_time_limit and elapsed_time - phrase_start_time > phrase_time_limit:
                break

            buffer = source.stream.read(source.CHUNK)
            if len(buffer) == 0: break  # reached end of the stream
            frames.append(buffer)
            phrase_count += 1

            # check if speaking has stopped for longer than the pause threshold on the audio input
            energy = audioop.rms(buffer, source.SAMPLE_WIDTH)  # unit energy of the audio signal within the buffer
            if energy > self.energy_threshold:
                pause_count = 0
            else:
                pause_count += 1
            if pause_count > pause_buffer_count:  # end of the phrase
                break

        # check how long the detected phrase is, and retry listening if the phrase is too short
        phrase_count -= pause_count  # exclude the buffers for the pause before the phrase
        ####################################################################
        ####### THE FOLLOWING CONDITION IS NEVER MET, THEREFORE THE LOOP NEVER EXITS AND THE FUNCTION NEVER RETURNS #######
        ####################################################################
        if phrase_count >= phrase_buffer_count or len(buffer) == 0: break  # phrase is long enough or we've reached the end of the stream, so stop listening

    # obtain frame data
    for i in range(pause_count - non_speaking_buffer_count): frames.pop()  # remove extra non-speaking frames at the end
    frame_data = b"".join(frames)

    return AudioData(frame_data, source.SAMPLE_RATE, source.SAMPLE_WIDTH)
The issue is that the main while loop in listen() never exits, and I am not sure why. Note that the SpeechRecognition module works flawlessly when I am not integrating Snowboy. Also note that Snowboy works flawlessly on its own.
I am also providing the snowboy_wait_for_hot_word() method, as the problem could be in there.
def snowboy_wait_for_hot_word(self, snowboy_location, snowboy_hot_word_files, source, timeout=None):
    print("made it")
    # load snowboy library (NOT THREAD SAFE)
    sys.path.append(snowboy_location)
    import snowboydetect
    sys.path.pop()

    detector = snowboydetect.SnowboyDetect(
        resource_filename=os.path.join(snowboy_location, "resources", "common.res").encode(),
        model_str=",".join(snowboy_hot_word_files).encode()
    )
    detector.SetAudioGain(1.0)
    detector.SetSensitivity(",".join(["0.4"] * len(snowboy_hot_word_files)).encode())
    snowboy_sample_rate = detector.SampleRate()

    elapsed_time = 0
    seconds_per_buffer = float(source.CHUNK) / source.SAMPLE_RATE
    resampling_state = None

    # buffers capable of holding 5 seconds of original and resampled audio
    five_seconds_buffer_count = int(math.ceil(5 / seconds_per_buffer))
    frames = collections.deque(maxlen=five_seconds_buffer_count)
    resampled_frames = collections.deque(maxlen=five_seconds_buffer_count)
    while True:
        elapsed_time += seconds_per_buffer
        if timeout and elapsed_time > timeout:
            raise WaitTimeoutError("listening timed out while waiting for hotword to be said")

        buffer = source.stream.read(source.CHUNK)
        if len(buffer) == 0: break  # reached end of the stream
        frames.append(buffer)

        # resample audio to the required sample rate
        resampled_buffer, resampling_state = audioop.ratecv(buffer, source.SAMPLE_WIDTH, 1, source.SAMPLE_RATE, snowboy_sample_rate, resampling_state)
        resampled_frames.append(resampled_buffer)

        # run Snowboy on the resampled audio
        snowboy_result = detector.RunDetection(b"".join(resampled_frames))
        assert snowboy_result != -1, "Error initializing streams or reading audio data"
        if snowboy_result > 0: break  # wake word found

    return b"".join(frames), elapsed_time
I am running Python 3.7 on a Raspberry Pi 3B+ running Raspbian Buster Lite (kernel 4.19.36). Please ask if I can provide any additional information.
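Since the exit condition that never fires is phrase_count >= phrase_buffer_count, and phrase_buffer_count is derived from the Recognizer's phrase_threshold while the recording loop depends on energy_threshold, it may help to log and loosen those settings before calling listen(). This is only a diagnostic sketch, not a confirmed fix; the attributes used (adjust_for_ambient_noise, energy_threshold, dynamic_energy_threshold, pause_threshold, phrase_threshold) are standard Recognizer settings, and hotword_path is a placeholder:

import speech_recognition as sr

hotword_path = "/path/to/hotword.pmdl"  # placeholder for the actual model file
r = sr.Recognizer()

with sr.Microphone(sample_rate=48000) as source:
    # calibrate energy_threshold against the room's background noise
    r.adjust_for_ambient_noise(source, duration=1)
    print("energy_threshold after calibration:", r.energy_threshold)

    # keep the calibrated threshold fixed and loosen the phrase settings so a
    # short utterance after the hotword still satisfies phrase_buffer_count
    r.dynamic_energy_threshold = False
    r.pause_threshold = 0.8   # seconds of silence that end a phrase
    r.phrase_threshold = 0.3  # minimum seconds of speech that count as a phrase

    audio = r.listen(source, snowboy_configuration=("SnowboyDependencies", {hotword_path}))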
I'm working on a Python script to use my old rotary phone as an input device for a web radio. The dialer is connected by GPIO to a Raspberry Pi 3 and launches mplayer to play a station when I dial '1'.
When I launch the script from a terminal (over ssh) it works fine: I get all kinds of information about the channel, the tracks being played etc. Also, when I press '9' or '0' on my keyboard the volume goes up and down.
The next thing I want to do is control the volume by dialing '2' (volume up) or '3' (volume down), from the script(!).
I've tried several libraries like xdotool etc., but they all expect a display, I guess. Nothing seems to work so far.
Is it possible at all? Does anyone have any pointers or solutions? I would be very grateful; this thing has cost me all day and I haven't progressed a bit.
This is the script so far:
#!/usr/bin/env python3
import RPi.GPIO as GPIO
from time import sleep
import subprocess

#GPIO.setwarnings(False)
GPIO.setmode(GPIO.BCM)
GPIO.setup(18, GPIO.IN, pull_up_down=GPIO.PUD_UP)
GPIO.setup(15, GPIO.IN, pull_up_down=GPIO.PUD_UP)

c = 0
last = 1

def count(pin):
    global c
    c = c + 1

def play_radio(dial):
    if dial == 1:
        subprocess.call("mplayer -nocache -afm ffmpeg http://playerservices.streamtheworld.com/api/livestream-redirect/SLAM_MP3.mp3", shell=True)
    if dial == 2:
        pass  # HERE'S WHERE THE VOLUME MUST GO UP, AS IF KEY '0' WERE PRESSED
    if dial == 3:
        pass  # HERE'S WHERE THE VOLUME MUST GO DOWN, AS IF KEY '9' WERE PRESSED

GPIO.add_event_detect(15, GPIO.BOTH)

while True:
    try:
        if GPIO.event_detected(15):
            current = GPIO.input(15)
            if last != current:
                if current == 0:
                    GPIO.add_event_detect(18, GPIO.BOTH, callback=count, bouncetime=10)
                else:
                    GPIO.remove_event_detect(18)
                    number = int((c - 1) / 2)
                    print(number)
                    play_radio(number)
                    c = 0
                last = GPIO.input(15)
    except KeyboardInterrupt:
        break
    sleep(0.3)
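One route that avoids simulating keypresses entirely is to start mplayer in slave mode with subprocess.Popen and write its volume command to stdin. This is only a sketch under the assumption that mplayer's documented -slave protocol ("volume <value>") is acceptable here, with an arbitrary step of 5 and the stream URL taken from the script above:

import subprocess

STREAM = "http://playerservices.streamtheworld.com/api/livestream-redirect/SLAM_MP3.mp3"
player = None  # handle of the running mplayer process, if any

def play_radio(dial):
    global player
    if dial == 1:
        # -slave makes mplayer read commands from stdin instead of the keyboard
        player = subprocess.Popen(
            ["mplayer", "-slave", "-quiet", "-nocache", "-afm", "ffmpeg", STREAM],
            stdin=subprocess.PIPE)
    elif dial == 2 and player is not None:
        player.stdin.write(b"volume 5\n")   # volume up by 5
        player.stdin.flush()
    elif dial == 3 and player is not None:
        player.stdin.write(b"volume -5\n")  # volume down by 5
        player.stdin.flush()

A side effect is that Popen does not block the way subprocess.call(..., shell=True) does, so the dial loop keeps running while the stream plays.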
I have a question about vlc in Python.
import vlc
sound = vlc.MediaPlayer('sound.mp3')
sound.play()
# I want to wait until the sound ends and then run some code, without time.sleep()
import time, vlc

def Sound(sound):
    vlc_instance = vlc.Instance()
    player = vlc_instance.media_player_new()
    media = vlc_instance.media_new(sound)
    player.set_media(media)
    player.play()
    time.sleep(1.5)
    duration = player.get_length() / 1000
    time.sleep(duration)
After the edit: that's exactly what I wanted. Thanks everyone for helping me.
You can use the get_state method (see here: https://www.olivieraubert.net/vlc/python-ctypes/doc/) to check the state of the vlc player.
Something like:
vlc_instance = vlc.Instance()
media = vlc_instance.media_new('sound.mp3')
player = vlc_instance.media_player_new()
player.set_media(media)
player.play()
print(player.get_state())  # Print player's state
To wait until vlc has finished playing the sound, besides your:
player.play()
time.sleep(1.5)
duration = player.get_length() / 1000
time.sleep(duration)
another possible (maybe more precise, but more CPU-costly) method is:
# your code ...
Ended = 6  # vlc.State.Ended
current_state = player.get_state()
while current_state != Ended:
    current_state = player.get_state()
# do whatever you want
print("vlc play ended")
Refer to:
vlc.State definition
vlc.Instance
vlc.MediaPlayer - get_state
I.m.o. Flipbarak had it almost right. My version:
import vlc, time

vlc_instance = vlc.Instance()
song = 'D:\\mozart.mp3'
player = vlc_instance.media_player_new()
media = vlc_instance.media_new(song)
media.get_mrl()
player.set_media(media)
player.play()

playing = set([1, 2, 3, 4])  # Opening, Buffering, Playing, Paused
time.sleep(1.5)  # startup time.
duration = player.get_length() / 1000
mm, ss = divmod(duration, 60)
print("Current song is:", song, "Length:", "%02d:%02d" % (mm, ss))
time_left = True
# the while loop checks every x seconds if the song is finished.
while time_left == True:
    song_state = player.get_state()
    print('song state: %s' % song_state)
    if song_state not in playing:
        time_left = False
    time.sleep(1)  # if 1, then the delay is 1 second.
print('Finished playing your song')
A slight alternative method that I just tested and had good results with (without needing to worry about the State.xxxx types).
This also allowed me to lower the overall wait/delay, as I'm using TTS and found I would average 0.2 seconds before is_playing() returns true.
p = vlc.MediaPlayer(audio_url)
p.play()
while not p.is_playing():
    time.sleep(0.0025)
while p.is_playing():
    time.sleep(0.0025)
The above simply waits for the media to start playing and then for it to stop playing.
Note: I'm testing this via a URL, not a local file, but I was having the same issue and believe this will work the same.
Also, I'm fully aware it's a slightly older and already answered question, but hopefully it's of use to someone.
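If polling is undesirable, another option not shown above (so treat it as a sketch) is python-vlc's event manager: attach a callback for the MediaPlayerEndReached event and block on a threading.Event until it fires.

import threading
import vlc

finished = threading.Event()

def on_end(event):
    # called from vlc's internal thread when playback reaches the end
    finished.set()

player = vlc.MediaPlayer('sound.mp3')
player.event_manager().event_attach(vlc.EventType.MediaPlayerEndReached, on_end)
player.play()

finished.wait()          # blocks until the end-of-media event fires
print("vlc play ended")  # ...then carry on with whatever comes next

Keep the callback minimal; calling back into the same MediaPlayer from inside the event handler is generally discouraged.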
I have a GUI that is controlling a device over USB.
The way I have it set up is with basically two buttons, forward and back; while a button is pressed that command is transmitted to the motor, and when the button is released, the off signal is triggered once.
def on_release():
    print('Off')
    self.off()

def on_click():
    print('forward')
    self.forward()

button = QPushButton('Cut', self)
button.move(100, 70)
button.pressed.connect(on_click)
button.released.connect(on_release)

def on_click():
    print('back')
    self.back()

button = QPushButton('back', self)
button.move(200, 70)
button.pressed.connect(on_click)
button.released.connect(on_release)
I've recently encountered an interesting failure mode: if the USB connection is paused at the moment the button is released (in my case I was using GDB, hit a breakpoint, released the button, then released the breakpoint), the kill signal is never sent and the motor will continue going back or forward forever. (It can be killed either by clicking one of back or forward and releasing, or by killing USB entirely.)
I already have protections in place (a threaded heartbeat signal) for turning off the motor if the USB connection is severed, but I'd like to make this fail a little more safely in case that one particular USB off transmission fails.
Is there a way for me to check whether no buttons are pressed, so I can use this to trigger the off signal?
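One possibility, sketched here with PyQt5 and hypothetical names (MotorWindow, cut_button, back_button and check_buttons are not from the code above), is to poll QAbstractButton.isDown() from a QTimer and re-send the off command whenever nothing is held:

import sys
from PyQt5.QtCore import QTimer
from PyQt5.QtWidgets import QApplication, QPushButton, QWidget

class MotorWindow(QWidget):
    def __init__(self):
        super().__init__()
        # two held-down buttons, laid out as in the question
        self.cut_button = QPushButton('Cut', self)
        self.cut_button.move(100, 70)
        self.back_button = QPushButton('back', self)
        self.back_button.move(200, 70)

        # watchdog: a few times a second, if nothing is held, re-send "off"
        self.watchdog = QTimer(self)
        self.watchdog.timeout.connect(self.check_buttons)
        self.watchdog.start(200)  # milliseconds

    def off(self):
        print('Off')  # stand-in for the real USB off command

    def check_buttons(self):
        # isDown() is True while the user is physically holding the button
        if not (self.cut_button.isDown() or self.back_button.isDown()):
            self.off()

app = QApplication(sys.argv)
w = MotorWindow()
w.show()
sys.exit(app.exec_())

Re-sending off every 200 ms while idle is redundant but usually harmless; if repeated off commands are a problem for your controller, gate the re-send on whether the previous one was acknowledged.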
Learning material from the tjmarkham stepper.py script at https://github.com/tjmarkham/python-stepper, for a Raspberry Pi, which can be put behind your buttons:
#CURRENT APPLICATION INFO
#200 steps/rev
#12V, 350mA
#Big Easy driver = 1/16 microstep mode
#Turn a 200 step motor left one full revolution: 3200

from time import sleep
import RPi.GPIO as gpio  #https://pypi.python.org/pypi/RPi.GPIO
#import exitHandler  #uncomment this and line 58 if using exitHandler

class stepper:
    #instantiate stepper
    #pins = [stepPin, directionPin, enablePin]
    def __init__(self, pins):
        #setup pins
        self.pins = pins
        self.stepPin = self.pins[0]
        self.directionPin = self.pins[1]
        self.enablePin = self.pins[2]

        #use the broadcom layout for the gpio
        gpio.setmode(gpio.BCM)

        #set gpio pins
        gpio.setup(self.stepPin, gpio.OUT)
        gpio.setup(self.directionPin, gpio.OUT)
        gpio.setup(self.enablePin, gpio.OUT)

        #set enable to high (i.e. power is NOT going to the motor)
        gpio.output(self.enablePin, True)

        print("Stepper initialized (step=" + str(self.stepPin) + ", direction=" + str(self.directionPin) + ", enable=" + str(self.enablePin) + ")")

    #clears GPIO settings
    def cleanGPIO(self):
        gpio.cleanup()

    #step the motor
    # steps = number of steps to take
    # dir = direction stepper will move
    # speed = defines the denominator in the waitTime equation: waitTime = 0.000001/speed. As "speed" is increased, the waitTime between steps is lowered
    # stayOn = defines whether or not stepper should stay "on" or not. If stepper will need to receive a new step command immediately, this should be set to "True." Otherwise, it should remain at "False."
    def step(self, steps, dir, speed=1, stayOn=False):
        #set enable to low (i.e. power IS going to the motor)
        gpio.output(self.enablePin, False)

        #set the output to true for left and false for right
        turnLeft = True
        if (dir == 'right'):
            turnLeft = False
        elif (dir != 'left'):
            print("STEPPER ERROR: no direction supplied")
            return False
        gpio.output(self.directionPin, turnLeft)

        stepCounter = 0
        waitTime = 0.000001 / speed  #waitTime controls speed

        while stepCounter < steps:
            #gracefully exit if ctrl-c is pressed
            #exitHandler.exitPoint(True)  #exitHandler.exitPoint(True, cleanGPIO)

            #turning the gpio on and off tells the easy driver to take one step
            gpio.output(self.stepPin, True)
            gpio.output(self.stepPin, False)
            stepCounter += 1

            #wait before taking the next step, thus controlling rotation speed
            sleep(waitTime)

        if (stayOn == False):
            #set enable to high (i.e. power is NOT going to the motor)
            gpio.output(self.enablePin, True)

        print("stepperDriver complete (turned " + dir + " " + str(steps) + " steps)")
teststepper.py:
from stepper import stepper

#stepper variables
#[stepPin, directionPin, enablePin]
testStepper = stepper([22, 17, 23])

#test stepper
testStepper.step(3200, "right")  #steps, dir, speed, stayOn
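To actually put this behind the forward/back buttons from the question above, the step() calls need to run off the GUI thread and stop when the button is released. A rough sketch, where run_motor, stop_flag, the 16-step chunk size and the forward='left' mapping are assumptions rather than part of either listing:

import threading
from stepper import stepper

motor = stepper([22, 17, 23])
stop_flag = threading.Event()

def run_motor(direction):
    # step in small chunks so releasing the button stops the motor quickly
    stop_flag.clear()
    while not stop_flag.is_set():
        motor.step(16, direction, stayOn=True)

def on_click():
    print('forward')
    threading.Thread(target=run_motor, args=('left',), daemon=True).start()

def on_release():
    print('Off')
    stop_flag.set()  # the worker finishes its current 16-step chunk and exits

on_click and on_release would be connected to button.pressed and button.released exactly as in the GUI snippet above; the sketch assumes only one button is held at a time.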