Django 4 By Example code doesn't show body-contents at all - web

I'm reading a book titled Django 4 By Example.
I worked through chapters 01 & 02.
I wrote the code following the book's explanations; everything ran without errors, but the result was not right.
In my case, the body is empty.
So I downloaded the canonical code, ran the migrations, and executed it, but the result was the same.
Naturally, I created some articles in the admin site as a superuser.
The result looks like this.
I'm new to Django and to building web applications, but this code is undoubtedly the same as the example code. I'm using Windows 11, so I tried the same thing on Windows 10, but the result was the same. I also tried both Microsoft Edge and Google Chrome.
The code is here; this image is from chapter02.
django 4 by example github
Do you know what kind of problem this is?
Has this happened only to me? I can't find anyone else who has run into the same problem.
Many readers say this book is wonderful, and yes, I think so, too. But I'm embarrassed by this result.
This is the command prompt message.
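One hedged guess, assuming the book's chapter 1 blog app (where Post has a Draft/Published status field and a custom "published" manager): posts saved as drafts in the admin never reach the list view, because it only shows published posts. A quick check in the Django shell:

# Run with: python manage.py shell
from blog.models import Post

print(Post.objects.count())    # every post, drafts included
print(Post.published.count())  # only posts with status=Published

If the first number is positive and the second is 0, setting each post's status to Published in the admin should make the body appear.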

GitHub auto-grading

Hi there, friends. First, I want to say I'm an absolute beginner; your help keeps me motivated.
I am doing an assignment. They provide templates for the front end (Angular) and back end (Node.js, SQLite), and I need to edit some code to complete it.
1. For the front end:
Implement searching functionality for the teacher and student classes.
2. For the back end:
Update the readStudents function to read all student data.
Update the readStudentInfo function to read the information of a specified student.
Update the addStudent function to add a student.
Update the updateStudent function to update the details of a specific student.
Update the deleteStudent function to delete a specified student.
I have completed all the tasks myself.
I use GitHub auto-grading, and it shows 80/160. I passed all the back-end tasks, but it shows I failed all the front-end tasks.
My part of the task (implementing the search functionality) is complete, so I think this is a problem on their side or some technical error.
I have added screenshots of the auto-grading and the error message.
These are a few example images of the errors; if you want to see all the error screenshots, just ask and I will add them to this question.
The back end is working fine. If I run npm start, it works. It shows:
capstone-project@1.0.0 start
node backend/index.js
DEV DB
Capstone Project Backend is running on http://localhost:8080
But if I click this link, there is nothing in my browser; it shows Cannot GET /.
If you want to see any of my files, you can ask and I will add them here right away.
Please help me solve this error; I am so tired and just want to finish it. Your guidance would move me one step forward.
Thank you for reading this :)
I found the error. The error is:
Autograding
Node.js 12 actions are deprecated. Please update the following actions to use Node.js 16: actions/checkout@v2, education/autograding@v1. For more information see: https://github.blog/changelog/2022-09-22-github-actions-all-actions-will-begin-running-on-node16-instead-of-node12/.
Friends, I hope this information helps you help me.
How do I update that?
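A minimal sketch of the fix, assuming the autograding workflow lives at .github/workflows/classroom.yml (the path and the surrounding keys are assumptions; check your repo's .github/workflows folder for the actual file):

# .github/workflows/classroom.yml (path is an assumption)
steps:
  - uses: actions/checkout@v3      # bumped from @v2; v3 runs on Node.js 16
  - uses: education/autograding@v1 # if no newer tag exists, this part of the
                                   # warning can only be fixed upstream

Note that this is only a deprecation warning; it should not by itself be what fails the front-end tests.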

Why won't Azure OCR code loop through media?

I'm trying to get a project going where I can extract text from JPGs, PNGs, and PDFs. I found this article where someone made something similar.
I tried following this person's guide and tying it to my Azure instance, but when I run the code (basically copied exactly from what he did), I get an error of KeyError: 'regions'.
Any idea what the issue may be? I have the error screenshot below, as well as the code (with my API key and endpoint removed).
So it turns out I was just using the wrong endpoint in the code. I was copying what was given to me in Azure and not what the guy had in his video. Oops.
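For anyone else who lands here, a minimal sketch of the distinction, assuming the classic Computer Vision OCR endpoint (the resource name, key, and file name are placeholders):

import requests

# The /ocr endpoint returns a JSON body with a top-level "regions" key;
# other endpoints (e.g. the Read API) respond differently, which is what
# produces KeyError: 'regions' when the code expects the OCR shape.
endpoint = "https://<your-resource>.cognitiveservices.azure.com/vision/v3.2/ocr"
headers = {
    "Ocp-Apim-Subscription-Key": "<your-key>",
    "Content-Type": "application/octet-stream",
}
with open("sample.jpg", "rb") as f:
    analysis = requests.post(endpoint, headers=headers, data=f.read()).json()

for region in analysis["regions"]:
    for line in region["lines"]:
        print(" ".join(word["text"] for word in line["words"]))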

Scrapy not extracting data from a certain xpath

I'm trying to extract some data from an Amazon product page.
What I'm looking for is getting the images of the products. For example:
https://www.amazon.com/gp/product/B072L7PVNQ?pf_rd_p=1581d9f4-062f-453c-b69e-0f3e00ba2652&pf_rd_r=48QP07X56PTH002QVCPM&th=1&psc=1
By using the XPath
//script[contains(., "ImageBlockATF")]/text()
I get the part of the source code that contains the URLs, but two options pop up in the Chrome XPath helper.
By trying things out with XPaths, I ended up using this:
//*[contains(@type, "text/javascript") and contains(.,"ImageBlockATF") and not(contains(.,"jQuery"))]
which gives me exclusively the data I need.
The problem I'm having is that for certain products (it can happen between two different pairs of shoes), sometimes I can extract the data and other times nothing comes out. I extract by doing:
imagenesString = response.xpath('//*[contains(@type, "text/javascript") and contains(.,"ImageBlockATF") and not(contains(.,"jQuery"))]').extract()
If I use the Chrome XPath helper, the data always appears with the XPath above, but in the program itself it sometimes appears and sometimes doesn't. I know the script the console reads is sometimes different from the one that appears on the site, but I'm struggling with this one because it works only intermittently. Any ideas on what could be going on?
I think I found your problem: it's a captcha.
Follow these steps to reproduce:
1. Run scrapy shell:
scrapy shell "https://www.amazon.com/gp/product/B072L7PVNQ?pf_rd_p=1581d9f4-062f-453c-b69e-0f3e00ba2652&pf_rd_r=48QP07X56PTH002QVCPM&th=1&psc=1"
(the URL needs quotes, or the shell will split it at the & characters)
2. View the response the way Scrapy sees it:
view(response)
When executing this, I sometimes got a captcha.
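If that's what's happening, one way to cope (a sketch of my own, not from the original post; the validateCaptcha marker is an assumption about Amazon's robot-check page) is to detect the captcha in the spider and re-queue the request:

def parse(self, response):
    # Amazon's robot-check page contains a form posting to validateCaptcha
    if response.xpath('//form[contains(@action, "validateCaptcha")]'):
        # dont_filter=True lets the same URL be scheduled again
        yield response.request.replace(dont_filter=True)
        return
    imagenesString = response.xpath(
        '//*[contains(@type, "text/javascript") and '
        'contains(., "ImageBlockATF") and not(contains(., "jQuery"))]'
    ).extract()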
Hope this points you in the right direction.
Cheers

XPath Data Scraping From Online Community

I recently read this article on how to scrape Inbound.org community member profiles using Excel. You can watch the video here if you prefer it that way.
Since the release of that tutorial, the Inbound website structure has changed a bit: as you can see at minute 11:00 of the video, if you attempt to copy the XPath of the social media icons, it now comes out slightly different, and because of this I haven't been able to extract that information.
Here's what I get now:
/html/body/div[3]/div/div/div[1]/div/div[2]/a[1]/i
This is how I wrote the syntax in Excel:
=XPathOnUrl(A2,"//a[@class='twitter']","href")
And then like this:
=XPathOnUrl(A2,"//a[contains(@class,'twitter')]/@href")
Although I tried many different ways, none of them showed me the link to the member's social media profile.
I even tried changing the XPath in multiple ways to get different data from the page, but none of it was the social media information:
=XPathOnUrl(A2,"//*[contains(@class,'member-banner-tagline')]/div[2]/div/div/div[1]/div/div[1]")
=XPathOnUrl(A2,"//*[contains(@class,'member-banner-tagline')]/div[2]/div/div/div[1]/div/h1")
I honestly don't know what to try anymore; something's wrong and I can't figure it out. Does anybody have enough experience with this, or can you pinpoint the problem with my syntax?
Thanks a lot
The first formula you tried looks fine, but this is the one that works for me (SEO Tools version 4.3.4):
=Dump(XPathOnUrl(A2;"//a[@class='twitter']";"href";HttpSettings(TRUE)))
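If it still returns nothing, it can help to sanity-check the selector outside Excel; a minimal Python sketch of my own (the profile URL is a placeholder), assuming requests and lxml are installed:

import requests
from lxml import html

# Run the same XPath the Excel formula uses against one profile page
page = requests.get("https://inbound.org/in/some-member")  # placeholder URL
tree = html.fromstring(page.content)
print(tree.xpath("//a[@class='twitter']/@href"))
# An empty list means the selector (or the page structure) is the problem,
# not the Excel plumbing.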

problems testing sharepoint with selenium (timeouts, repeating auth and missed links)

I have some serious problems testing a SharePoint site with Selenium/Bromine. As I didn't find an answer via various searches, I hope someone here can point me in the right direction.
I am constantly getting timeouts opening the main page, although the server is definitely fast enough to answer the request and sits at 90% idle. Nevertheless, I just get logs like these:
open http://username:passwd@10.13.110.54/default.aspx | Timed out after 90000ms
Test terminated The selenium server did not return OK
The auth popup appears at irregular intervals (every 5 to 10 clicks), although every open command uses http://username:passwd@10.13.110.54/ as a prefix.
Clicking on elements is sometimes not registered; the logs show a successful
isElementPresent link=myLink
click link=myLink
but the browser doesn't react. These are mainly in-page links which open a new folder or an editing box.
I'm not sure whether I should have posted these as three separate questions, but I didn't want to spam.
I hope someone can help me, as I have had these problems for nearly 3 weeks now.
Thanks in advance
Thomas
For your question number 2: okay, this is a really late reply. I stumbled onto this page looking for the answer myself. Given that I have solved it in the meantime, I figured I'd post my answer for other people who end up here.
General solution:
You need to create or use a profile that lets Firefox automatically forward your credentials to the SharePoint website. You can create the profile manually and call it each time; see https://applicationtestingtips.wordpress.com/2009/12/21/seleniumrc-handle-windows-authentication-firefox/ for instructions.
Programmer solution (works in Python, should work similarly in Java):
Alternatively, you can create a new profile on the fly each time. I did that based on the information on the previously mentioned website. I use Python to call Selenium, but this should be rather similar in whatever language you use:
from selenium import webdriver

# Have all your SharePoint hosts here in a comma-separated list
sharepointHosts = 'sharepoint1.mycompany.com,sharepoint2.mycompany.com'
# A fresh profile that forwards NTLM credentials to those hosts
ffProfile = webdriver.FirefoxProfile()
ffProfile.set_preference('network.automatic-ntlm-auth.trusted-uris', sharepointHosts)
ffProfile.set_preference('network.negotiate-auth.delegation-uris', sharepointHosts)
ffProfile.set_preference('network.negotiate-auth.trusted-uris', sharepointHosts)
driver = webdriver.Firefox(firefox_profile=ffProfile)
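With that profile in place, the open/get calls shouldn't need credentials embedded in the URL any more, e.g. (using the host from the question):

driver.get('http://10.13.110.54/default.aspx')  # NTLM credentials come from the profile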
