I wrote this code:
from lyrics_extractor import SongLyrics
apiKey = 'AIzaSyBDPVEi1OtzB3Nm6i9fd8HTkMCjsselIpM'
engineID = '35df92fbe0cad839c'
extract_lyrics = SongLyrics(apikey, engineID)
lyrics = extract_lyrics.get_lyrics("Reyes de la noche")
print(lyrics)
It is not working; I got this import error:
ImportError: cannot import name 'Songlyrics' from 'lyrics_extractor' (C:\Users\mica\AppData\Local\Programs\Python\Python39\lib\site-packages\lyrics_extractor\__init__.py)
What could be wrong?
I found two problems:
You need to install the library with PIP:
pip install lyrics-extractor
The variable name passed to SongLyrics() is miscapitalized: you defined apiKey but passed apikey, and Python names are case-sensitive.
A further suggestion: the returned value is a dict, which looks nicer when printed like this:
from lyrics_extractor import SongLyrics
apiKey = 'AIzaSyBDPVEi1OtzB3Nm6i9fd8HTkMCjsselIpM'
engineID = '35df92fbe0cad839c'
extract_lyrics = SongLyrics(apiKey, engineID)
lyrics = extract_lyrics.get_lyrics("Reyes de la noche")
print(lyrics['title'])
print(lyrics['lyrics'])
Output:
Guasones – Reyes De La Noche Lyrics
Fuimos mucho mas que nada
Fuimos la mentira
Fuimos lo peor
Fuimos los soldados a la madrugada
Con esta ambición
Y ahora estoy en libertad
Y ahora que puedo pensar
En no volver hacer ese
El mismo de antes
Y que tristeza hay en la ciudad, amor
Sábado soleado
Y en el centro de la estatua del dolor
Me sentí parado
Fuimos muchos más que todos
Reyes de la noche
De esta tempestad
Si te vendí, si te robe, te traicione
Fui por uno más
Fuimos perros de la noche
Oxidados en tristeza
Y querer lo que querer
Sin tener que lastimar
Recordando que tu amor
Se robo la dignidad
Ahora olvidemos los dos
No volvamos a empezar
¿Para que?...
PS: I was manually writing out lyrics this morning for a song called "Brothers and Sisters" by Les Nubians. When I tried to use this library to get the lyrics for that song, like this:
extract_lyrics.get_lyrics("Brothers and Sisters")
... I got lyrics for a more popular song of the same title by another musical group. However, after adding the name of the group I wanted, I got the correct song's lyrics:
extract_lyrics.get_lyrics("Brothers and Sisters Nubians")
Hi, I'm new here and still don't know all the rules, but I have a question about the "requests" library in Python — not only in Python, but about requests in general. My question is this: I'm querying an API (GitHub's GraphQL) and it limits me to viewing only 100 commits. How can I get the remaining commits, given that every time I make a sequential request it returns me the same 100 elements?
What I tried was exactly that: running two requests in a row. I expected to receive 200 commits, but I got the same 100.
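Repeating an identical request replays the same first page: GitHub's GraphQL history connection is cursor-paginated, so you have to read pageInfo { endCursor hasNextPage } from each response and pass the previous endCursor back as the after: argument. A sketch of that loop follows; the query fields match GitHub's documented schema, but fetch_all_commits and the injectable post parameter are illustrative names, not part of any library:

```python
import json
from urllib.request import Request, urlopen

GITHUB_GRAPHQL = "https://api.github.com/graphql"

QUERY = """
query($owner: String!, $name: String!, $cursor: String) {
  repository(owner: $owner, name: $name) {
    defaultBranchRef {
      target {
        ... on Commit {
          history(first: 100, after: $cursor) {
            pageInfo { endCursor hasNextPage }
            nodes { oid messageHeadline }
          }
        }
      }
    }
  }
}
"""

def _http_post(query, variables, token):
    # One POST to the GraphQL endpoint; a bearer token is required.
    req = Request(
        GITHUB_GRAPHQL,
        data=json.dumps({"query": query, "variables": variables}).encode(),
        headers={"Authorization": f"bearer {token}",
                 "Content-Type": "application/json"},
    )
    with urlopen(req) as resp:
        return json.load(resp)

def fetch_all_commits(owner, name, token, post=None):
    """Collect every commit by threading each endCursor back in as `after`."""
    post = post or (lambda q, v: _http_post(q, v, token))
    commits, cursor = [], None
    while True:
        data = post(QUERY, {"owner": owner, "name": name, "cursor": cursor})
        history = (data["data"]["repository"]
                   ["defaultBranchRef"]["target"]["history"])
        commits.extend(history["nodes"])
        if not history["pageInfo"]["hasNextPage"]:
            return commits
        # Advance the window: without this, every request returns page one.
        cursor = history["pageInfo"]["endCursor"]
```

Two back-to-back requests without after: both return the first 100 commits, which matches what you observed; only the cursor moves the window.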
I want to rewrap the following text, from this:
* - P2P.32-1
- Identification des retards de rotoemant par rapport aux conditions de huitmots (calculé sur la base de la date de pièce)
* - P2P.32-2
- Identification des rotoemants réalisés en retard par rapport aux conditions de huitmots (calculé sur la base de la date de saisie)
* - P2P.33
to this:
* - P2P.32-1
- Identification des retards de rotoemant par rapport aux conditions de huitmots
(calculé sur la base de la date de pièce)
* - P2P.32-2
- Identification des rotoemants réalisés en retard par rapport aux conditions de
huitmots (calculé sur la base de la date de saisie)
* - P2P.33
Currently I need to type the following keystrokes: first, add an empty line with A<Return>:
* - P2P.32-1
- Identification des retards de rotoemant par rapport aux conditions de huitmots (calculé sur la base de la date de pièce)
^ # cursor here
* - P2P.32-2
so that I get this:
* - P2P.32-1
- Identification des retards de rotoemant par rapport aux conditions de huitmots (calculé sur la base de la date de pièce)
* - P2P.32-2
Then I have to press <Escape>k to go back to the "Identification" line, and only then can I run gq (with a motion), then j to go down and dd to remove the empty line:
* - P2P.32-1
- Identification des retards de rotoemant par rapport aux conditions de huitmots
(calculé sur la base de la date de pièce)
* - P2P.32-2
So in the end I need: "A<Return><Escape>kgqjdd"
Is there a shorter way?
Note: I have a :set tw=88
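For what it's worth, a possibly shorter route (a sketch, assuming default 'formatoptions'): gqq formats only the current line, wrapping it at tw=88 without joining the following "* - P2P.32-2" entry, so the temporary blank line may not be needed at all:

```vim
" With the cursor anywhere on the long '- Identification ...' line:
gqq
" Or gww, which does the same but keeps the cursor where it was.
gww
```

Whether the wrapped continuation keeps the list indentation depends on your 'formatoptions' and 'autoindent' settings, so it is worth trying on a scratch copy first.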
This is an example of the variable extract when I execute my Python code:
<a href "https://www.dekrantenkoppen.be/detail/1520347/Vulkaanuitbarsting-en-aardbevingen-dichtbij-de-hoofdstad-van-IJsland.html" rel="dofollow" title="In het zuidwesten van IJsland is de vulkaan Fagradalsfjall uitgebarsten. Dat heeft de meteorologische dienst van het land laten weten. De uitbarsting is vooralsnog beperkt, maar er zijn wel twee grote lavastromen. De voorbije weken was IJsland al opgeschrikt door tienduizenden aardbevingen. In de loop van de dag is de kracht van de uitbarsting afgenomen.">Vulkaanuitbarsting en aardbevingen dichtbij de hoofdstad van IJsland
When I execute my code, I now get the text 'Vulkaanuitbarsting en aardbevingen dichtbij de hoofdstad van IJsland'.
What I really want is the part after title=, i.e. this text: 'In het zuidwesten van IJsland is de vulkaan Fagradalsfjall uitgebarsten. Dat heeft de meteorologische dienst van het land laten weten. De uitbarsting is vooralsnog beperkt, maar er zijn wel twee grote lavastromen. De voorbije weken was IJsland al opgeschrikt door tienduizenden aardbevingen. In de loop van de dag is de kracht van de uitbarsting afgenomen.'
I'm new to this area and find it quite difficult to understand. Can someone point me in the right direction?
Here is my code at the moment:
import requests
from bs4 import BeautifulSoup

url = 'https://www.dekrantenkoppen.be/full/de_redactie'
response = requests.get(url)
soup = BeautifulSoup(response.text, "html.parser")
#print(soup)

content = ''
rows = soup.find_all('a')
tel = 0
for row in rows:
    if tel != 0:
        # 'tel' is only used to skip the first returned result, which I
        # don't need. I only need the text after every 'title='.
        print(row)
        extract = row.get_text()
        print('')
        print(extract)
        content = content + extract + '\n'
    if tel == 10:
        # A loop of max 10 times gives me enough information;
        # I only need the first 10 articles.
        break
    else:
        tel = tel + 1

# Show the result text of the crawl.
print('')
print('RESULT TEXT OF THE FIRST 10 ARTICLES:')
print(content)
You want the title attribute. Based on the page structure, the following use of :nth-child will get you the first 10 descriptions. I also needed the 'lxml' parser; run pip3 install lxml if it is not installed.
import requests
from bs4 import BeautifulSoup
url = 'https://www.dekrantenkoppen.be/full/de_redactie'
response = requests.get(url)
soup = BeautifulSoup(response.text, "lxml")
print([i['title'] for i in soup.select('li:nth-child(-n+10) > a:nth-child(2)')])
CSS:
li:nth-child(-n+10) > a:nth-child(2)
This selects, within each of the first 10 li elements, the a tag that is the second child of its parent li. The > is a child combinator, specifying that the element on the right must be a direct child of the element on the left.
Read about:
:nth-child ranges
Child combinators, pseudo class selectors and type selectors
As additionally requested, a for loop that also pulls in the published date/time info:
import requests
from bs4 import BeautifulSoup

url = 'https://www.dekrantenkoppen.be/full/de_redactie'
response = requests.get(url)
soup = BeautifulSoup(response.text, "lxml")
for i in soup.select('li:nth-child(-n+10) > a:nth-child(2)'):
    print(i.parent['title'])
    print(i.parent.a.text.strip())
    print(i['title'])
    print()
If I understand your question correctly, you should try to modify your code as follows:
...
for row in rows:
    if tel != 0:
        print(row)
        extract = row["title"]
        content = content + extract.replace("Meer informatie", "") + '\n'
...
At a quick look at what you're doing: simply replace get_text() with .get('title'); that should work:
>>> data = '''<a href "https://www.dekrantenkoppen.be/detail/1520347/Vulkaanuitbarsting-en-aardbevingen-dichtbij-de-hoofdstad-van-IJsland.html" rel="dofollow" title="In het zuidwesten van IJsland is de vulkaan Fagradalsfjall uitgebarsten. Dat heeft de meteorologische dienst van het land laten weten. De uitbarsting is vooralsnog beperkt, maar er zijn wel twee grote lavastromen. De voorbije weken was IJsland al opgeschrikt door tienduizenden aardbevingen. In de loop van de dag is de kracht van de uitbarsting afgenomen.">Vulkaanuitbarsting en aardbevingen dichtbij de hoofdstad van IJsland'''
>>> from bs4 import BeautifulSoup
>>> soup = BeautifulSoup(data, "html.parser")
>>> soup.find('a').get('title')
'In het zuidwesten van IJsland is de vulkaan Fagradalsfjall uitgebarsten. Dat heeft de meteorologische dienst van het land laten weten. De uitbarsting is vooralsnog beperkt, maar er zijn wel twee grote lavastromen. De voorbije weken was IJsland al opgeschrikt door tienduizenden aardbevingen. In de loop van de dag is de kracht van de uitbarsting afgenomen.'
I have three conditions that I want to combine into one, and I cannot join them without an error. The formulas are as follows:
1. =SI(F22<7,BUSCARV(F22,B19:C24,2,FALSO),"Operación no definida")
2. =SI(F22=4,G19<>0,"No se puede realizar la operación")
3. =SI(F22=4,G19<>0,"No se puede realizar la operación")
I think you missed an AND in your second and third conditions; in a Spanish-locale Excel (matching your SI/BUSCARV) that function is Y:
=SI(Y(F22=4,G19<>0),"No se puede realizar la operación", "NO ELSE CONDITION")
or maybe O() (OR)?
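If the goal is one single formula, a possible way to combine the three is to nest the special case inside the general one. This is only a sketch using the question's cell references, and it assumes the F22=4 test is meant as an exception to condition 1 (Y being the Spanish-locale equivalent of AND):

```text
=SI(Y(F22=4,G19<>0),
    "No se puede realizar la operación",
    SI(F22<7,
       BUSCARV(F22,B19:C24,2,FALSO),
       "Operación no definida"))
```

If your locale uses ";" as the argument separator, replace the commas accordingly.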
I'm trying to convert news into JSON in EJS, but I'm getting this error:
JSON.parse: bad control character in string literal at line 1 column 491 of the JSON data
My code in EJS:
var news = JSON.parse('<%- JSON.stringify(news) %>');
My JSON when I just use JSON.stringify(news):
[{"link":"https://www.google.com/appserve/mkt/p/AL8lKjMTbG3Cwo5S_mSyJIgnnTSnG5CShXJluGQmCXZQem2HZJYO3Xni6oi7YkO1bf0AiPEzctP9t7p4iXohF_oSmh4lekT6N4_Xi_3FsOwFL7v0aGYESdS7Hwy0ljNy94xgSIa-p44jadT8K72ncdZLathEWT9VLdgFzKPyIbWNdfzXjna8ZZ-l0MnFGvP7ctxpUmlnS0bdioEzr6Vea3RXuJ9ql7G66mK3Mmk8b88uTsEU0DgQibiPnhj57-Xddg","title":"PT anuncia candidatura de Fernando Haddad à Presidência no ...","snippet":"11 set. 2018 ... O prazo dado pelo TSE para o partido apresentar à Justiça Eleitoral o substituto
de Lula terminava às 19h desta terça. Na chapa original ..."},{"link":"https://www.google.com/appserve/mkt/p/AL8lKjOk4AvSk26yeM_CLTwL-pbhemUMarCpKtz2cjmDF5u-c1MKiPSNST_LKzGUhr5gIxpym10fbAyKebM_k5ztcDT4ccJLm7CiDn51AF8esXzqtzkfStIpvDMLDSzBEiJsY_vZ-n16Bq1rVLol6NPlA84ZHjMcdBFsRqyvAn1uWquy2kWhGINO24sm7grPZssXqNuqn3kE2VQWxUZrqU76ZfKon7_2sP_JnVoX616a7NqZ87w","title":"Pesquisa Ibope: Lula, 37%; Bolsonaro, 18%; Marina, 6%; Ciro, 5%","snippet":"20 ago. 2018 ... Em razão desse quadro jurídico, o Ibope pesquisou outro cenário, com o atual
candidato a vice na chapa de Lula, Fernando Haddad."},{"link":"https://www.google.com/appserve/mkt/p/APDk4sNLn4v5wG7jZc1IMxmNqeoSmmu3CoLrgwpbMBTtGaGIPP6qNZCYoZJE-h7JvLCpES_LHdTgnGxzrSzi7qXlTfpcw_NWkH9lhvjaWJZmZCbHZM5jPivEEvXVa-370EEkKothZUYVdHjcJ_7Gd0FtSViBkVPdM1eplOHC2c-cxE5l-6iUJv-eIvsaBoTNH8m-N7KeL55C2RHIhWox4t_O_J6TePdYShasXr2p9CljniJyBnJg2gJqpDnpEg","title":"PT registra candidatura de Lula a presidente com ato em frente ao ...","snippet":"15 ago. 2018 ... Presidente do PT, Gleisi Hoffmann, entrega registro da candidatura de Lula a
servidor do TSE — Foto: Nelson Jr./ASCOM/TSE ..."},{"link":"https://www.google.com/appserve/mkt/p/AL8lKjPgDq40q5PA8lJ4zEjCmTIcjEId9kKJi_u79_HEROFIYaeDAza3mH3HEG5yW1W_0o0zUQTvFVUd9gRF64I5kZ9WUqwlrXBv3N0-ficFsPjhfdxo4H3CqkPvbjICafkJlZM2LnCTC7sKH_Wu7emMCZEki0XlYNuHyOOp114XJK5GbvR33QyBK3e9sVl9hoOpXsP44bNFtPy2ntiTo5ew58YJgSaB59sg1xrtW7PdaGKbcHDsg6bRyw","title":"Fachin nega pedido da defesa de Lula para suspender inelegibilidade","snippet":"6 set. 2018 ... Fachin nega pedido para suspender condenação de Lula no caso do ... o pedido
da defesa do ex-presidente Luiz Inácio Lula da Silva para ..."},{"link":"https://www.google.com/appserve/mkt/p/APDk4sNLSBs3Q0r_tjyeUzsiMNvPILeN0DuMwwFLp7LWs11DJltsjOoj6lALGoBQ_HdCP5Pxa8rTJIe6SmXbEyi5iZ6CFNdrHxF0dwF2pf3J2UX0uIREc_MoNTGwEj53rHgTA21KV0gXTwNC4U-SPsRqFhI322S4AjnWeRr1h1ThAl1gJEEDEMSWm5-JEWPkSH4zrzjmbltEp_mKtMqG7PTLQdat0d_wIowh1wGQ3oMDrSM95XpVBOzvm2yg_t4wd5lh","title":"Entenda o que significa a impugnação da candidatura de Lula feita ...","snippet":"16 ago. 2018 ... As ações de impugnação ainda não impedem Lula de concorrer. O ministro Luís
Roberto Barroso foi sorteado como relator do registro de ..."},{"link":"https://www.google.com/appserve/mkt/p/AL8lKjMNo82Rv1wcvOpdbrQX7C7zD2sW7lFk0EBtJZZNs8rx0-tq2bYfyB_BsCGjt9fNa82lkSBH8Y4P_Of_uj9ryfcFxjIt_pbf49sLtNeAR529DWuje9j8Ps8-Z3qH2Wrh6N_Sb8P9eS88o26QKNLi94PDrPvFg7Q5Hv_i02L95dF_ZOV_avLu9sE0zzaX2l6hCQ1t-lXxegaf","title":"Comitê da ONU reafirma direito de Lula ser candidato, diz defesa ...","snippet":"10 set. 2018 ... Zanin afirmou que a defesa de Lula informará ao STF a nova decisão. Uma nova
decisão do Comitê de Direitos Humanos da ONU reafirmou ..."},{"link":"https://www.google.com/appserve/mkt/p/AFOm0uEggpSZolqYBXndC11CTFh4Bx4ojWZe-LQ4LpX3DXG401x0u8rk7AS41mhCRQqX6Ze7qXrfe0blgoqgeKkBvl2Roik39be5d8XX2HlZpL4bYGlC18o9U650kXjAkUn_vWc6YJRH5bUf1Hw15Zz4IlOE5idUbGmd0rnPl43DKMDReRgIHGc72dtcdXrz-ihJualpVoNpmfdaEQ","title":"Datafolha: sem Lula, Bolsonaro lidera e disputa fica acirrada ...","snippet":"31 jan. 2018 ... RIO — Pesquisa Datafolha divulgada nesta quarta-feira mostra que o ex-
presidente Luiz Inácio Lula da Silva , mesmo condenado pelo ..."},{"link":"https://www.google.com/appserve/mkt/p/AFIPhzVJ3RSCVztRel4V7tABx7g_RC0G4uE43Wb1CdAxSf810y0LN7XN0iU1Ub5cj7-anDSrgzKJX9k1sx0tGFtzZhnTSyvCbAjhgqEHA-rrGWKMVAaHUXewZYj5IBY","title":"Após tiros e sob tensão, Lula encerra caravana em Curitiba horas ...","snippet":"28 mar. 2018 ... Poucas horas depois de a caravana do ex-presidente Luiz Inácio Lula da Silva
chegar a Curitiba, ainda sob abalo pelo ataque que atingiu ..."},{"link":"https://www.google.com/appserve/mkt/p/AJ-PF7wwGAYu07EXXfFG9u-ipk-azIf0t1m96KDvmxh5uuiAjuDg9QFlysO2nHCs9x0IRYCtpd_CphjEsDCZGMtxpcpzgeQdP-rJumaQVCIx6a2YCXteM6qFB6cerc5JSvbVfp5qTEwyDuB_R8gR9ZOK44cZxY2VIjPIAS4qsX-Wb2y0nSVXTtuM4BOZdPthrzxE8fXHoRc_zmYheufyS3FGXg","title":"Presidenciáveis e políticos comentam decisão de desembargador ...","snippet":"8 jul. 2018 ... O STF já havia negado HC para soltar Lula. Ele será solto neste domingo. A
decisão foi monocrática, de um desembargador que foi filiado ao ..."},{"link":"https://www.google.com/appserve/mkt/p/AIQrb_7rn0-8BzJy4lp5vhkK_EOS2rsg2pNbSKHMcSAv7Zxb_T8GoBAYLmUZidgqaFo8Q0sgjTMTLhJe8ouJ0iutSoUi_6XA04aBJe0-d6df3YTmby4CBj7BfXDOYO4mHhO7iFv9i2pOooL4TWu-A5KQ7KqAk-GQP72jmClTQuBmZ5Frl6rlGiaW1A","title":"STJ nega habeas corpus preventivo por unanimidade e decide que ...","snippet":"6 mar. 2018 ... Na mesma decisão, os cinco ministros da Quinta Turma do STJ negaram um
pedido extra da defesa para suspender a inelegibilidade de Lula ..."}]
EDIT: I have found out that the error comes from the "snippet" field, but I need it.
I think JSON.parse() is going to have a tough time with what you are feeding it: it just sees a plain string, not evaluated EJS or even valid JSON.
Perhaps something more along the lines of:
var json = <your-JSON-here>;
var news = ejs.render('<%- JSON.stringify(news) %>', {news: json});
Then if you want, you can always pull it back in as an object with JSON.parse():
news = JSON.parse(news);
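The "bad control character" most likely comes from the \n escapes inside the stringified snippets: within the single-quoted '<%- ... %>' literal, JavaScript converts each \n back into a real newline before JSON.parse ever runs, and raw newlines are illegal inside JSON string literals. A minimal simulation of that failure, plus the usual fix of dropping the quotes so the JSON text is read directly as an array literal (the eval calls below only stand in for how the browser reads the generated page):

```javascript
// A snippet containing a newline, like the wrapped Google News snippets above.
const news = [{ title: "Pesquisa Ibope", snippet: "linha um\nlinha dois" }];

// In jsonText the newline is safely escaped as the two characters "\" + "n".
const jsonText = JSON.stringify(news);

// What '<%- JSON.stringify(news) %>' generates: the JSON pasted inside a
// single-quoted JS string literal. eval() simulates the browser reading that
// literal, which turns the "\n" escape into a real control character.
const embedded = eval("'" + jsonText + "'");

let parseError = null;
try {
  JSON.parse(embedded); // throws: bad control character in string literal
} catch (e) {
  parseError = e;
}

// Without the surrounding quotes, the JSON text is already a valid JS array
// literal, so no JSON.parse round-trip is needed at all. In the template this
// would be:  var news = <%- JSON.stringify(news) %>;
const direct = eval("(" + jsonText + ")");
```

So in the EJS template, writing var news = <%- JSON.stringify(news) %>; (no quotes, no JSON.parse) should avoid the error entirely.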