I'm facing a problem when following this explanation: Power Query - How to extract values in the list of the list?
Basically, I'm in the same situation as that user, but when I recreate the steps in the tutorial I get an error, as shown in the attached image (https://i.stack.imgur.com/6F61G.jpg).
I don't really understand why I'm getting this error. Could someone help?
This is the situation. Don't ask why, it is the way it is.
I created a simple fulltext index over the document contents by grabbing the content from the document server and adding the plain, unformatted content as a new property (raw_content) to each Document node in the Neo4j DB.
Then I created a fulltext index like:
CALL db.index.fulltext.createNodeIndex('content', ['Document'], ['title', 'teaser', 'raw_content'])
So far so good. Search works very well.
Now I want to index the attachments as well. I've got attachment URLs for each document, which I can look up by the docid.
So, before I slide into any antipatterns, I'd like to ask the community how to do this. I've got two approaches in mind:
1. Similar to the way I index the raw_content: is there a way to make Lucene fetch and parse the URLs I give to it?
2. The batch does all the parsing and adds the content to new fields like "attachment01_content" ...
Solution 1 would be preferred, but I did not find any documentation on this.
Solution 2 is ugly, especially because parsing PDF, DOC and friends is something the Lucene ecosystem can already handle.
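To make solution 2 concrete, this is roughly the kind of batch I have in mind (just a sketch in Python with the official neo4j driver; get_attachment_urls() and extract_text() are placeholders for my attachment lookup and for whatever parser would do the actual PDF/DOC extraction, e.g. Tika, and I'm assuming the Document nodes carry a docid property):

# Sketch of solution 2: the batch downloads each attachment, extracts
# plain text, and writes it into extra properties on the Document node
# so a fulltext index can cover it.
import requests
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "secret"))

def get_attachment_urls(docid):
    # Placeholder: however the attachment URLs are resolved from the docid.
    return [f"https://document-server.example/docs/{docid}/attachments/1"]

def extract_text(data):
    # Placeholder: Tika, pdfminer, antiword, ... whatever parses PDF/DOC.
    return data.decode("utf-8", errors="ignore")

def index_attachments(docid):
    with driver.session() as session:
        for i, url in enumerate(get_attachment_urls(docid), start=1):
            text = extract_text(requests.get(url, timeout=30).content)
            # Property names cannot be parameterized in Cypher, hence the f-string.
            session.run(
                f"MATCH (d:Document {{docid: $docid}}) "
                f"SET d.attachment{i:02d}_content = $text",
                docid=docid, text=text,
            )

The extra catch is that every attachmentNN_content property would also have to be listed in db.index.fulltext.createNodeIndex, which is part of why this feels ugly to me.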
any ideas on how to solve this?
I really love the query function to find work items but I cannot figure out how to query for work items that are linked to tests. Is there any column that might help me? Or any other way to get a list of work items that are linked to tests and one for the ones that aren't?
You can use a query of type "Work items and direct links". For user stories without test cases: filter the top-level items to Work Item Type = User Story, filter the linked work items to Work Item Type = Test Case with link type "Tested By", and set the filter option to "Only return items that do not have matching links". Switching that option to "Only return items that have matching links" gives you the stories that are covered by tests.
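If you prefer getting the list programmatically, roughly the same thing can be expressed as a WIQL link query against the REST API. This is only a sketch: organization, project and PAT are placeholders, and I'm assuming the "Tested By" link type's reference name is Microsoft.VSTS.Common.TestedBy-Forward.

# Sketch: list user stories that have no linked test cases via WIQL.
# Change MODE (DoesNotContain) to MODE (MustContain) for the covered ones.
import requests

ORG, PROJECT, PAT = "your-org", "your-project", "your-personal-access-token"

wiql = """
SELECT [System.Id], [System.Title]
FROM WorkItemLinks
WHERE ([Source].[System.WorkItemType] = 'User Story')
  AND ([System.Links.LinkType] = 'Microsoft.VSTS.Common.TestedBy-Forward')
  AND ([Target].[System.WorkItemType] = 'Test Case')
MODE (DoesNotContain)
"""

resp = requests.post(
    f"https://dev.azure.com/{ORG}/{PROJECT}/_apis/wit/wiql?api-version=6.0",
    json={"query": wiql},
    auth=("", PAT),  # basic auth: empty user name plus the personal access token
)
resp.raise_for_status()
for rel in resp.json().get("workItemRelations", []):
    if rel.get("target"):
        print(rel["target"]["id"])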
I'm trying to extract some data from an amazon product page.
What I'm looking for is getting the images from the products. For example:
https://www.amazon.com/gp/product/B072L7PVNQ?pf_rd_p=1581d9f4-062f-453c-b69e-0f3e00ba2652&pf_rd_r=48QP07X56PTH002QVCPM&th=1&psc=1
By using the XPath
//script[contains(., "ImageBlockATF")]/text()
I get the part of the source code that contains the URLs, but two options pop up in the Chrome XPath helper.
By trying things out with XPaths I ended up using this:
//*[contains(@type, "text/javascript") and contains(.,"ImageBlockATF") and not(contains(.,"jQuery"))]
Which gives me exclusively the data I need.
The problem I'm having is that for certain products (it can happen between two pairs of different shoes) sometimes I can extract the data and other times nothing comes out. I extract by doing:
imagenesString = response.xpath('//*[contains(@type, "text/javascript") and contains(.,"ImageBlockATF") and not(contains(.,"jQuery"))]').extract()
In the Chrome XPath helper the data always appears with the XPath above, but in the program itself it sometimes appears and sometimes doesn't. I know the script the console reads can differ from the one on the live site, but I'm struggling with this one because it works only intermittently. Any ideas on what could be going on?
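For reference, once I have the matched script text I pull the URLs out of it like this (just a sketch; the "hiRes"/"large" keys are simply what I currently see inside the ImageBlockATF data, so treat that as an assumption):

import re

# imagenesString is the list returned by the response.xpath(...).extract() above
script_text = imagenesString[0] if imagenesString else ""

# Grab the image URLs embedded in the ImageBlockATF JavaScript blob.
image_urls = re.findall(r'"(?:hiRes|large)":"(https://[^"]+)"', script_text)
print(image_urls)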
I think I found your problem: it's a captcha.
Follow these steps to reproduce:
1. Run the Scrapy shell:
scrapy shell "https://www.amazon.com/gp/product/B072L7PVNQ?pf_rd_p=1581d9f4-062f-453c-b69e-0f3e00ba2652&pf_rd_r=48QP07X56PTH002QVCPM&th=1&psc=1"
2. View the response the way Scrapy sees it:
view(response)
When executing this I sometimes got a captcha.
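A quick way to confirm it from the spider itself might be to check the response body and retry (just a sketch; I'm assuming the robot-check page contains the word "captcha"):

# Sketch for the spider's parse method: bail out and retry when Amazon
# serves the robot-check page instead of the product page.
def parse(self, response):
    if "captcha" in response.text.lower():
        self.logger.warning("Got a captcha page, retrying %s", response.url)
        yield response.request.replace(dont_filter=True)
        return
    imagenesString = response.xpath(
        '//*[contains(@type, "text/javascript") and '
        'contains(., "ImageBlockATF") and not(contains(., "jQuery"))]'
    ).extract()
    # ... continue with the normal extraction ...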
Hope this points you in the right direction.
Cheers
Hey guys, I have been looking around the net and I can't seem to find a viable answer. Here it comes: I've got a list of roughly 1000 addresses for which I want to get the coordinates. The dumb thing is that Google Maps gives me the coordinates of each point, but I'd have to copy/paste 1000 entries to get them into, say, an Excel worksheet. I've seen sites that offer to get me the coords one by one, which again is not viable for me. Is there any way to extract the coords from Google Maps, or any other site that can process large quantities at once?
Thank you
If you can write VBScript, you can implement the Google Geocoding API yourself; someone already wrote it for you: policeanalyst.com/using-the-google-geocoding-api-in-excel.
For bulk conversion, just do a Google search; there are a couple of sites that claim to do it, and this one works: findlatitudeandlongitude.com/batch-geocode
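If you'd rather script it yourself but not in VBScript, the same Google Geocoding API is only a few lines of Python. This is just a sketch: it assumes your addresses sit in a one-column addresses.csv, that you have an API key, and keep Google's usage limits in mind for ~1000 requests.

# Sketch: read addresses from a CSV, geocode each one with the Google
# Geocoding API, and write the coordinates next to them in a new CSV.
import csv
import requests

API_KEY = "your-google-api-key"  # placeholder

with open("addresses.csv", newline="", encoding="utf-8") as fin, \
     open("addresses_with_coords.csv", "w", newline="", encoding="utf-8") as fout:
    writer = csv.writer(fout)
    writer.writerow(["address", "lat", "lng"])
    for row in csv.reader(fin):
        if not row:
            continue
        address = row[0]
        data = requests.get(
            "https://maps.googleapis.com/maps/api/geocode/json",
            params={"address": address, "key": API_KEY},
            timeout=10,
        ).json()
        if data.get("results"):
            loc = data["results"][0]["geometry"]["location"]
            writer.writerow([address, loc["lat"], loc["lng"]])
        else:
            writer.writerow([address, "", ""])  # no match or over quota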
You can upload your file in text-only CSV format at
http://geocoder.ca/?batchupload=1&account=1
Then save the results back to CSV, a shapefile, or even print them on a map as a PDF or PNG file.
When I execute a script that updates calendar entries for 6 months, I get this error. Could you please help me figure out how to solve it?
Error: updating search context has encountered a problem when executing one Lotus script
Looks like your code is not finding the proper document. Test your code in the debugger and use the variables tab to look at the values; you will see that the search is not working.
I would switch from search-based code to iterating a document collection if you don't have that many documents to look at.
You might even set a flag on a document when you save it so you can find it later... that would mean changing the code behind the save button.
There are many ways to fix this issue.