In JupyterLab, can I have ipywidgets "Save" and "Load" buttons that call the notebook's save/load?
And can I access the is_saved attribute of the notebook?
Many thanks!
Yes, just use an ipywidgets Button together with the ipylab module. ipylab provides a bridge to JupyterLab's frontend/backend, so you can execute JupyterLab commands from Python.
from ipylab import JupyterFrontEnd
from ipywidgets import widgets

def save_button_func(button):
    # ask the JupyterLab frontend to run its built-in save command
    app = JupyterFrontEnd()
    app.commands.execute('docmanager:save')

save_button = widgets.Button()
save_button.description = 'Save this notebook'
save_button.on_click(save_button_func)
save_button
Loading a notebook follows the same logic: find the corresponding JupyterLab command name, for example by inspecting elements in the browser or by searching the JupyterLab source on GitHub.
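For example, here is a minimal sketch wiring up both buttons. The save command name is the one from the answer above; 'docmanager:reload' (re-read the notebook from disk) is my assumption for the "load" side, so confirm the exact name with app.commands.list_commands() first:
from ipylab import JupyterFrontEnd
from ipywidgets import widgets

app = JupyterFrontEnd()

# Uncomment to list the document-manager commands available in your JupyterLab:
# print([c for c in app.commands.list_commands() if c.startswith('docmanager')])

def save_notebook(button):
    app.commands.execute('docmanager:save')  # same save command as above

def load_notebook(button):
    # 'docmanager:reload' is assumed to re-read the notebook from disk;
    # verify the name via list_commands() above
    app.commands.execute('docmanager:reload')

save_button = widgets.Button(description='Save this notebook')
save_button.on_click(save_notebook)

load_button = widgets.Button(description='Reload from disk')
load_button.on_click(load_notebook)

widgets.HBox([save_button, load_button])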
I'd like to play audio in a remote Jupyter notebook (like Google Colab or a Binder-hosted JupyterLab) but I can't figure out how. What I would like to get working is something like this:
from pydub import AudioSegment
from pydub.playback import play
start = 1000
end = 3000
audio = AudioSegment.from_file("someaudio.flac")
audio_piece = audio[start:end]
play(audio_piece)
With a playback package like simpleaudio installed, everything works fine on my local machine. But when I try to run this code in Google Colab, for example, I get this error message:
SimpleaudioError: Error opening PCM device. -- CODE: -2 -- MSG: No such file or directory
I tried several other audio packages but I always ran into some trouble. The only thing that works is IPython.display.Audio. But I can't use this for my project because I don't want a player displayed (and it doesn't seem to have an option to play segments of an audio file).
Does anyone know a solution for this?
For MyBinder-served sessions, you have to route audio playback through IPython's display abilities imported into the notebook, or link the playing of audio to ipywidgets running in the notebook.
Example of playing a wav file in a MyBinder-served notebook
Go here and press 'launch binder'.
When the session spins up, paste this into a cell and run that cell:
# PLAY A WAV. BASED ON
# https://ipython.org/ipython-doc/stable/api/generated/IPython.display.html#IPython.display.Audio
from IPython.display import Audio, display
Audio("http://www.nch.com.au/acm/8k16bitpcm.wav") # From URL
# see https://ipython.org/ipython-doc/stable/api/generated/IPython.display.html#IPython.display.Audio
# for other options such as a file you upload to the remote MyBinder-served session
That will show a player control; click 'play' to play the wav file found at that URL.
See the source linked in the comments for more information, such as how you could play your own wav file.
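For instance, to play a file you have uploaded to the session yourself (a minimal sketch; my_audio.wav is a hypothetical filename):
from IPython.display import Audio
Audio("my_audio.wav")  # hypothetical local file uploaded to the running session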
To demo this in JupyterLab:
If you are starting fresh, click here to launch directly into JupyterLab. When that session spins up, open a new notebook and paste in the code above.
If you are already in a session provided by MyBinder in the classic notebook, click the 'Jupyter' logo at the upper left above the notebook. The page will refresh, the JupyterLab interface will load, and you can open a notebook there and paste in the above code.
The environment specified for those sessions isn't overly complex as you can see here.
Example of playing a tone in a MyBinder-served notebook
Similar to everything above; however, use this as the code block you paste into a cell:
# Generate a sound. BASED ON
# https://ipython.org/ipython-doc/stable/api/generated/IPython.display.html#IPython.display.Audio
import numpy as np
from IPython.display import Audio, display
framerate = 44100
t = np.linspace(0,5,framerate*5)
data = np.sin(2*np.pi*220*t) + np.sin(2*np.pi*224*t)
Audio(data,rate=framerate)
Example of an interactive control via ipywidgets of audio generation in a MyBinder-served notebook
Click here to launch a session with the JupyterLab interface backed by an environment with the necessary modules installed, and then run the following in a cell:
!curl -OL https://raw.githubusercontent.com/mlamoureux/PIMS_YRC/master/Widget_audio.ipynb
That will fetch a notebook that you can run, based on code described here.
Alternatively, you can use the classic interface, either by launching fresh into one or by switching back from the JupyterLab interface via the menu bar: 'Help' > 'Launch Classic Notebook'.
(I believe the notebook used in this example is based on the one described here, or vice versa. When I tried that one in the ipywidgets docs from a session that already had ipywidgets installed & not much else, I also had to install matplotlib by running %pip install matplotlib, because of the line import matplotlib.pyplot as plt.)
UPDATE: use pydub to edit audio via MyBinder and hear it played
This was added in response to the comments on the general answer. Specifically, pydub use is demonstrated using a different environment, since pydub needs ffmpeg.
Go here and click on 'launch binder' to spin up a session where ffmpeg is installed in the backing environment; pydub needs ffmpeg or an equivalent.
You can run the following line in a notebook and then open the notebook it fetches to work through that demonstration:
!curl -OL https://gist.githubusercontent.com/fomightez/86482965bbce4bbbb7adb4c98f6cd9e6/raw/d31473699d8a2ec6d31dbf1d9590b8a0ef8972db/pydub_edit_plays_via_mybinder.ipynb
Or step through the equivalent demonstration code in a notebook in JupyterLab by following these steps.
First, enter the following in a cell:
%pip install pydub
Get an audio file to use for testing by running this:
!curl -OL http://www.nch.com.au/acm/8k16bitpcm.wav
Edit that file and play back the result in the notebook, without needing to click a player, by running this in a cell:
from pydub import AudioSegment
from pydub.playback import play
start = 1000  # pydub slices are in milliseconds
end = 3000
audio = AudioSegment.from_file("8k16bitpcm.wav")
audio_piece = audio[start:end]  # keep just the 1 s to 3 s segment
audio_piece.export("test_clip.wav", format='wav')  # write the edited clip to disk
from IPython.display import Audio, display
Audio("test_clip.wav", autoplay=True)  # plays back without clicking the player
I'm trying to automate a process on the OpenSea Create page after logging in with MetaMask. So far, I have managed to develop a simple program that chooses a particular image file using a path that gets passed to the Open File dialog "implicitly". Here's the code:
import pyautogui
import time
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
def wait_xpath(code):  # function to wait until an element located by XPath is present
    WebDriverWait(driver, 60).until(EC.presence_of_element_located((By.XPATH, code)))

opt = Options()  # the variable that will store the Selenium options
opt.add_experimental_option("debuggerAddress", "localhost:9222")  # this allows bulk-dozer to take control of your Chrome browser in DevTools mode
s = Service(r'C:\Users\ResetStoreX\AppData\Local\Programs\Python\Python39\Scripts\chromedriver.exe')  # use the chromedriver located at the corresponding path
driver = webdriver.Chrome(service=s, options=opt)  # start chromedriver.exe with the previous conditions
nft_folder_path = r'C:\Users\ResetStoreX\Pictures\Cryptobote\Cryptobote NFTs\Crypto Cangrejos\SANDwich\Crabs'
start_number = 3
if driver.current_url == 'https://opensea.io/asset/create':
    print('all right')
    print('')
    print(driver.current_window_handle)
    print(driver.window_handles)
    print(driver.title)
    print('')
    nft_to_be_selected = nft_folder_path + "\\" + str(start_number) + ".png"
    wait_xpath('//*[@id="main"]/div/div/section/div/form/div[1]/div/div[2]')
    imageUpload = driver.find_element(By.XPATH, '//*[@id="main"]/div/div/section/div/form/div[1]/div/div[2]').click()  # click on the upload-image button
    print(driver.current_window_handle)
    print(driver.window_handles)
    time.sleep(2)
    pyautogui.write(nft_to_be_selected)
    pyautogui.press('enter', presses=2)
Output:
After checking the URL, the program clicks on the corresponding button to upload a file
Then it waits 2 seconds before pasting the image path into the Name textbox and then pressing Enter
So the file ends up being correctly uploaded to this page.
The thing is, the program above works because the following conditions are met before execution:
The currently open window is the Chrome browser tab (instead of the Python program itself, i.e., the Spyder environment in my case)
After clicking the button to upload a file, the Name textbox is selected by default, regardless of the current path it opens with.
So, I'm kind of a perfectionist, and I would like to know if there's a method (using Selenium or another Python module) to check whether an Open File dialog is open before doing the rest of the work.
I tried print(driver.window_handles) right after clicking that button, but Selenium did not recognize the Open File dialog as another Chrome window; it just printed the tab ID of this page. So it seems Selenium can't do what I want, but I'm not sure, and I would like to hear what other methods could be used in this case.
PS: I had to do the process this way because the send_keys() method did not work on this page
The dialog you are trying to interact with is a native OS dialog; it's not a kind of browser handle/dialog/tab, so Selenium cannot detect it and cannot handle it. There are several approaches for working with such native OS dialogs. I do not want to copy-paste existing solutions; you can try, for example, this solution. It is highly detailed and looks good.
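If you do want to detect the native dialog from Python, one possible approach (Windows-only, and a sketch under assumptions rather than a definitive fix) is to look for the standard Windows dialog class with pywinauto; the title 'Open' is an assumption and can differ per browser and locale:
from pywinauto import Desktop

def open_file_dialog_is_present(timeout=5):
    # '#32770' is the class name of standard Windows dialogs;
    # the title 'Open' may differ per browser/locale
    dlg = Desktop(backend="win32").window(title="Open", class_name="#32770")
    try:
        dlg.wait("exists visible", timeout=timeout)
        return True
    except Exception:
        return False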
I'm trying to perform Keys.ARROW_DOWN in Selenium but it doesn't work. The code opens the context menu, but the arrow-down key does nothing. Here's an example of what I'm doing:
from selenium import webdriver
from selenium.webdriver import ActionChains
from selenium.webdriver.common.keys import Keys
import time
driver = webdriver.Chrome()
driver.get('http://www.google.com.br')
time.sleep(1)
actions = ActionChains(driver)
actions.context_click().send_keys(Keys.ARROW_DOWN).perform()
chromedriver version 83
Can someone please shed some light on what I'm doing wrong?
Thanks for the help!
You can achieve this with the help of pyautogui, as shown in the code below:
import time
import pyautogui
from selenium import webdriver
from selenium.webdriver import ActionChains
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.keys import Keys
options = webdriver.ChromeOptions()
options.add_argument("user-data-dir=C:\\Users\\Sangeeta-Laptop\\AppData\\Local\\Google\\Chrome\\User Data\\Guest Profile")
cdriver = "C:\\Users\\Sangeeta-Laptop\\Downloads\\chromedriver_win32 (4)\\chromedriver"
driver = webdriver.Chrome(executable_path=cdriver, chrome_options=options)
driver.get('http://www.google.com.br')
time.sleep(1)
actions = ActionChains(driver)
actions.context_click().perform()
time.sleep(1)
pyautogui.press("down")
But as 0buz said in the comments, there are multiple ways to achieve your requirement. So please tell us in detail what you are trying to achieve, and maybe we can all help you resolve your issue :)
Keys.ARROW_DOWN within Context Menu
A context menu initiated through context_click() is generally invoked on a WebElement, e.g. a link.
Invoking context_click() on an element opens the browser's native context menu; that is a browser-native operation and can't be managed by Selenium by design.
Conclusion
Using Selenium, you won't be able to interact with browser-native context menu items using send_keys(Keys.ARROW_DOWN), send_keys(Keys.DOWN), etc.
Reference
You can find a relevant discussion in:
Could not do Arrow down using sendKeys(Keys.ARROW_DOWN) in chrome
send_keys(Keys.ARROW_DOWN) works when you send it to an element, for example after entering a value into a field; then you can perform the action, as in the sketch below.
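Here is a minimal sketch of that idea, sending ARROW_DOWN to a page element rather than to the native context menu (Google's search box, located by the name "q", is just an assumed target):
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
import time

driver = webdriver.Chrome()
driver.get('http://www.google.com.br')
search_box = driver.find_element(By.NAME, 'q')  # the search input element
search_box.send_keys('selenium')                # type something so suggestions appear
time.sleep(1)                                   # wait for the suggestion list
search_box.send_keys(Keys.ARROW_DOWN)           # the key press now has a target element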
I need to write an application that basically focuses a Windows window by its title and copy-pastes data into a notepad. I've managed to achieve it with pygetwindow and pyautogui, but it's buggy:
import pygetwindow as gw
import pyautogui
# extract all titles and filter to specific one
all_titles = gw.getAllTitles()
titles = [title for title in all_titles if 'title' in title]
window = gw.getWindowsWithTitle(titles[0])[0].activate()
pyautogui.hotkey('ctrl', 'a')
pyautogui.hotkey('ctrl', 'c')
Using Spyder, I occasionally get the following error when activating:
PyGetWindowException: Error code from Windows: 126 - The specified module could not be found.
Additionally, I would be interested in doing this process without affecting the user working on the machine; activate() basically makes the window pop to the front. Moreover, it would be better not to be OS dependent, but I haven't found anything yet.
I've tried pywinauto, but the SetFocus() method doesn't work (it's buggy, as documented).
Is there any other method which would make the whole process invisible and easier?
Not sure if this will help, but I am using pywinauto to set_focus:
import pywinauto
import pygetwindow as gw
def focus_to_window(window_title=None):
    window = gw.getWindowsWithTitle(window_title)[0]
    if not window.isActive:
        # connect pywinauto to the existing window by its handle and bring it into focus
        pywinauto.application.Application().connect(handle=window._hWnd).top_window().set_focus()
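For example, you would call it with the title of the window you want (a hypothetical Notepad title here):
focus_to_window('Untitled - Notepad')  # hypothetical window title; substitute your own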
I am using Plotly to make graphs in my IPython notebook. I am able to view the graphs in my IPython notebook, but when I upload them to GitHub they are displayed as blank spaces.
I read on the web that this is because GitHub's notebook rendering doesn't support the iframes/JavaScript that Plotly output uses, hence the issue, but is there a workaround?
Here's the link to my GitHub Ipython notebook:
https://github.com/dhavalbhinde/bhinde_dhaval_spring2017/blob/master/Finals/Analysis%203.ipynb
Please, can someone advise how I should handle them?
I found a way to show Plotly plots on GitHub. They aren't interactive anymore, but it's better than nothing.
First
import plotly.io as pio
pio.renderers
This shows the list of available renderers.
If you get an error at this step, you can simply install orca:
conda install -c plotly plotly-orca
Then there are two possible ways.
You can pass "svg" to .show() like this:
import plotly.express as px  # iris / transformed_iris below are this answer's own dataframes
fig = px.scatter_3d(iris, x=transformed_iris['component1'], y=transformed_iris['component2'], z=transformed_iris['component3'], color='species')
fig.show(renderer="svg")
Or you can set pio.renderers.default to "svg":
pio.renderers.default = "svg"
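As a self-contained sketch of this second way (using the built-in iris sample dataset in place of your own data, and assuming plotly-orca or kaleido is installed for static export):
import plotly.express as px
import plotly.io as pio

pio.renderers.default = "svg"  # every fig.show() now produces a static SVG

df = px.data.iris()            # built-in sample dataset standing in for your data
fig = px.scatter_3d(df, x="sepal_width", y="sepal_length",
                    z="petal_width", color="species")
fig.show()                     # rendered as a static image, so GitHub can display it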
Import these and it will work:
from plotly.offline import plot, iplot, init_notebook_mode
import plotly.graph_objs as go
init_notebook_mode(connected=True)