How to let PySide2 WebEngineView Show a video successfully? - python-3.x

I'm building a simple browser using PySide2 + Python 3.9, but I find that it doesn't play any videos correctly, and I don't know how to solve it.
I tried to visit https://v.qq.com/x/page/w3041d29ecr.html, but on the page I only saw "Your browser does not support this video." On the console, I see these outputs:
js: A cookie associated with a cross-site resource at http://mediav.com/ was set without the `SameSite` attribute. A future release of Chrome will only deliver cookies with cross-site requests if they are set with `SameSite=None` and `Secure`. You can review cookies in developer tools under Application>Storage>Cookies and see more details at https://www.chromestatus.com/feature/5088147346030592 and https://www.chromestatus.com/feature/5633521622188032.
js: Uncaught (in promise) NotSupportedError: The element has no supported sources.
It seems that my browser does not support these video sources.
I used the following code to configure WebEngineView:
self.browser.settings().setAttribute(QWebEngineSettings.PluginsEnabled, True)
self.browser.settings().setAttribute(QWebEngineSettings.JavascriptEnabled, True)
self.browser.settings().setAttribute(QWebEngineSettings.AllowRunningInsecureContent, True)
self.browser.settings().setAttribute(QWebEngineSettings.LocalContentCanAccessFileUrls, True)
self.browser.settings().setAttribute(QWebEngineSettings.LocalContentCanAccessRemoteUrls, True)
But I found that the browser still could not display the video correctly.
From some other questions, I learned that I may need to recompile PySide2, but how should I compile it? Or how else can I solve this problem?
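For reference, here is a minimal, self-contained sketch of the setup described above, using only the settings and URL from the question. Note that even with all of these attributes enabled, H.264/AAC playback generally also requires a QtWebEngine build with proprietary codec support, which the standard PyPI wheels may not include; the canPlayType probe at the end is simply an assumed way to check what the current build can decode.
import sys
from PySide2.QtCore import QUrl
from PySide2.QtWidgets import QApplication, QMainWindow
from PySide2.QtWebEngineWidgets import QWebEngineView, QWebEngineSettings
app = QApplication(sys.argv)
window = QMainWindow()
browser = QWebEngineView()
# Apply the same attributes configured in the question.
settings = browser.settings()
settings.setAttribute(QWebEngineSettings.PluginsEnabled, True)
settings.setAttribute(QWebEngineSettings.JavascriptEnabled, True)
settings.setAttribute(QWebEngineSettings.AllowRunningInsecureContent, True)
settings.setAttribute(QWebEngineSettings.LocalContentCanAccessFileUrls, True)
settings.setAttribute(QWebEngineSettings.LocalContentCanAccessRemoteUrls, True)
def check_h264():
    # Ask the embedded Chromium whether it can decode H.264; an empty result
    # usually means the build was compiled without proprietary codecs.
    browser.page().runJavaScript(
        "document.createElement('video').canPlayType('video/mp4; codecs=\"avc1.42E01E\"')",
        print,
    )
browser.loadFinished.connect(lambda ok: check_h264())
browser.load(QUrl("https://v.qq.com/x/page/w3041d29ecr.html"))
window.setCentralWidget(browser)
window.show()
sys.exit(app.exec_())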

Related

Web extension converted from chrome to safari fails with error "The service_worker script failed to load due to an error."

As stated in the title, I am trying to convert a web extension originally made for chrome to safari, using the tool documented at https://developer.apple.com/documentation/safariservices/safari_web_extensions/converting_a_web_extension_for_safari
The project is created, builds, and launches successfully; however, when the extension is enabled in Safari I get two errors:
"An extension with a non-persistent background page cannot listen to webRequest events."
"The service_worker script failed to load due to an error."
The first error is a general bug in chromium, which is fixed in v107 (verified in chrome canary), and the extension relies on this API to work.
I have no idea what to do with the second error, as it provides no information at all. The option to access the background page process is disabled in the Safari Develop menu with the message "service worker failed to load".
Here is my "manifest.json"
{
"name":"...",
"manifest_version":3,
"content_scripts":[{"all_frames":false,"js":["content.js"],"matches":["file://*/*","http://*/*","https://*/*"],"run_at":"document_idle","match_origin_as_fallback":true}],
"host_permissions":["<all_urls>"],
"permissions":["tabs","activeTab","storage","scripting","notifications","webRequest","downloads","alarms"],
"background":{"service_worker":"background.js"},
"content_security_policy":{"extension_pages":"script-src 'self' 'wasm-unsafe-eval'; object-src 'self'"},
"web_accessible_resources":[{"resources":["inject.js"],"matches":["<all_urls>"]}],
"action":{"default_popup":"popup.html"},
"icons":{"16":"resources/icons/16x16.png","32":"resources/icons/32x32.png","48":"resources/icons/48x48.png","128":"resources/icons/128x128.png"},
"commands":{"_execute_action":{"suggested_key":{"default":"Shift+Alt+C"},"description":"Start the extension"}},
"version":"0.7.5",
"description":"...",
"author":"..."
}
Does anyone have any good suggestions/knowledge on how to debug why the service worker doesn't load? The extension works without any errors or warnings in Google Chrome.
The webRequest API doesn't seem to be available in Safari when using Manifest V3, and since the extension relies on it, this is a blocking issue, meaning the problem cannot currently be solved.
Furthermore, storage.session also caused the service worker to crash, but this could be mitigated by using storage.local instead.
I will have to wait and see if Safari supports the webRequest API in the future.

X-Content-Type-Options Header Missing Website Application SocketIO

I am developing a Node.js Express application that is running in IBM Cloud. I tested my application for security issues via HostedScan.
I'm getting the following result:
"X-Content-Type-Options Header Missing"
The Anti-MIME-Sniffing header X-Content-Type-Options was not set to 'nosniff'. This allows older versions of Internet Explorer and Chrome to perform MIME-sniffing on the response body, potentially causing the response body to be interpreted and displayed as a content type other than the declared content type.
Current (early 2014) and legacy versions of Firefox will use the declared content type (if one is set), rather than performing MIME-sniffing.
The URL flagged with a risk:
https://xxx.xxx.xxx.xxx:xxxxx/socket.io/socket.io.js
It's a GET method.
I implemented the following solution
(server.js)
const helmet = require("helmet")
.....
app.use(helmet.noSniff())
With this code, part of the security issues were gone, such as:
https://xxx.xxx.xxx.xxx:xxxxx/
https://xxx.xxx.xxx.xxx:xxxxx
https://xxx.xxx.xxx.xxx:xxxxx/chatroom
But the security risk with https://xxx.xxx.xxx.xxx:xxxxx/socket.io/socket.io.js still remains.
I also tried the following in my index.html, because I thought the security risk could be caused by my client also interacting with socket.io:
<script
src="https://cdnjs.cloudflare.com/ajax/libs/socket.io/4.5.1/socket.io.js"
integrity="sha512-xxxxxxxx....."
crossorigin="anonymous">
</script>
Does anyone have an approach?

Chrome extension chrome favicon permission [duplicate]

I have recently migrated my chrome extension to manifest v3 using this guide:
https://developer.chrome.com/docs/extensions/mv3/intro/mv3-migration/
The v3 manifest.json file no longer supports using chrome://favicon/. Looking through the documentation, I could not find an alternative. There were some articles I found that said it might be moved to a new favicon permission and be available under the google.favicon namespace. However, they were all older and speculative; I tried these speculations to no avail.
The new API was just released as part of Chrome 104!
To use it, first add the favicon permission to your manifest.json:
{
...
"permissions": ["favicon"],
...
}
Then you can load the favicon using your chrome extension's id, for example:
const faviconSrc = `chrome-extension://${chrome.runtime.id}/_favicon/?pageUrl=${encodeURIComponent(url)}&size=32`;
It seems like they forgot to build this API; you can star the issue on that page or leave a comment to tell them.
Issue 104102: Create a new API permission for access to favicons was fixed on June 13, 2022.
The chrome://favicon Replacement for Extensions document mentions the API:
var faviconUrl = new URL('chrome-extension://<id>/_favicon');
faviconUrl.searchParams.append('page_url', 'http://example.com');
let image = document.createElement('img');
image.src = faviconUrl.href;
// src is 'chrome-extension://<id>/?page_url=http%3A%2F%2Fexample.com%2F'
Note that there's a mistake on the last line. It should be:
// src is 'chrome-extension://<id>/_favicon?page_url=http%3A%2F%2Fexample.com%2F'
Unfortunately, I still haven't been able to get this API working on Chrome Canary 105.0.5174.0, which should already include the changes from the resolved bug. I'm getting Failed to load resource: net::ERR_FAILED errors.

Can MechanicalSoup log into page requiring SAML Auth?

I'm trying to download some files from behind an SSO (Single Sign-On) site. It seems to be SAML-authenticated, and that's where I'm stuck. Once authenticated, I'll be able to perform API requests that return JSON, so there's no need to interpret/scrape.
I'm not really sure how to deal with that in MechanicalSoup (and I'm relatively unfamiliar with web programming in general); help would be much appreciated.
Here's what I've got so far:
import mechanicalsoup
from getpass import getpass
import json
login_url = ...
verbose = True  # assumed flag controlling the debug prints below
br = mechanicalsoup.StatefulBrowser()
response = br.open(login_url)
if verbose: print(response)
# provide the username + password
br.select_form('form[id="loginForm"]')
br.get_current_form().print_summary()  # Just to see what's there; print_summary() already prints.
br['UserName'] = input('Email: ')
br['Password'] = getpass()
response = br.submit_selected().text
if verbose: print(response)
At this point I get a page telling me javascript is disabled and that I must click submit to continue. So I do:
br.select_form()
response = br.submit_selected().text
if verbose: print(response)
That's where I get a complaint about state information being lost.
Output:
<h2>State information lost</h2>
State information lost, and no way to restart the request<h3>Suggestions for resolving this problem:</h3><ul><li>Go back to the previous page and try again.</li><li>Close the web browser, and try again.</li></ul><h3>This error may be caused by:</h3><ul><li>Using the back and forward buttons in the web browser.</li><li>Opened the web browser with tabs saved from the previous session.</li><li>Cookies may be disabled in the web browser.</li></ul>
The only hits I've found on scraping behind SAML logins are all going with a selenium approach (and sometimes dropping down to requests).
Is this possible with mechanicalsoup?
My situation turned out to require JavaScript for login, so my original question about getting through SAML auth did not reflect the true environment, and this question has not truly been answered.
Thanks to @Daniel Hemberger for helping me figure that out in the comments.
In this situation MechanicalSoup is not the correct tool (due to the JavaScript requirement), and I ended up using Selenium to get through authentication and then using requests.
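A rough sketch of that Selenium-then-requests handoff is below; the driver choice, URLs, and cookie handling are illustrative assumptions, not details from the original environment.
import requests
from selenium import webdriver
driver = webdriver.Chrome()
driver.get("https://example.com/login")  # placeholder for the SSO login page
# ...complete the SAML login in the real browser (manually or by locating
# the username/password fields and submitting the form)...
# Once authenticated, copy the browser's cookies into a requests.Session
# and call the JSON API directly.
session = requests.Session()
for cookie in driver.get_cookies():
    session.cookies.set(cookie["name"], cookie["value"], domain=cookie.get("domain"))
response = session.get("https://example.com/api/files")  # placeholder API endpoint
print(response.json())
driver.quit()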

How best to get the user's browser information and settings for debugging purposes?

My problem is that I have a user who is having a problem displaying a portion of a website I am creating, but I am unable to reproduce it on any of my browsers, even with the same version of the browser.
What I'm looking for is probably a website that I can send the user to which will tell me what version of the browser they are running, along with the plugins installed and any other information that might affect the display of a page.
Any one know of anything like this?
Edit: The problem is related to CSS. They want a special image around all the text inputs, but on the user's computer the text input displays partially outside of the image, which is set up as a background.
I need more user-specific information than Google Analytics provides, as you can't separate out a specific user. I also suspect that it's more complicated than just the user agent.
I also can't put the website out there publicly, because they want to keep their idea private until it's released... grr.
I find that sending users to the Support Details site (http://supportdetails.com/) is a great way to get system and browser specifics. At that site, all they have to do is enter your email address and the site will send you details such as:
Operating System
Screen Resolution
Browser Name and version
Browser size (view port)
IP Address
Color Depth
Javascript enabled (Y/N)
Flash version installed
Cookies enabled (Y/N).
Those pieces of info can also be exported as csv or PDF. Pretty sweet.
The site is made by an agency called Imulus.
Unfortunately, I don't know of any site that will log every detail about the user's browser, as you request.
But perhaps browsershots.org could help with your debugging? It allows you to test your design in a lot of different browsers very easily.
EDIT: ... unfortunately it is restricted to the initial design on page load, since it simply takes a screenshot for you.
The classic approach is to use the user agent to determine the browser and OS.
Looks like this site will display it for you.
As for plugins, there are various ways to test in JavaScript for the plugins you are looking for.
You have to test for these on the client side as there is (to my knowledge) no way of detecting these on the server side.
The following crude example shows how to test for Acrobat Reader in IE and Mozilla browsers; it returns an object indicating whether the reader is installed and, if so, which version.
function TestAcro()
{
var acrobat=new Object();
acrobat.installed=false;
acrobat.version='0.0';
if (navigator.plugins && navigator.plugins.length)
{
for ( var x = 0, l = navigator.plugins.length; x < l; ++x )
{
//Note: Adobe changed the name of Acrobat to Adobe Reader
if ((navigator.plugins[x].name.indexOf('Acrobat') != -1) || (navigator.plugins[x].description.indexOf('Acrobat') != -1) || (navigator.plugins[x].name.indexOf('Adobe Reader') != -1) || (navigator.plugins[x].description.indexOf('Adobe Reader') != -1))
{
acrobat.version=parseFloat(navigator.plugins[x].description.split('Version ')[1]);
if (acrobat.version.toString().length == 1) acrobat.version+='.0';
acrobat.installed=true;
break;
}
}
}
else if (window.ActiveXObject)
{
for (x=2; x<10; x++)
{
try
{
var oAcro = new ActiveXObject('PDF.pdfCtrl.' + x); // no need for eval here
if (oAcro)
{
acrobat.installed=true;
acrobat.version=x+'.0';
}
}
catch(e) {}
}
try
{
var oAcro4 = new ActiveXObject('PDF.pdfCtrl.1');
if (oAcro4)
{
acrobat.installed=true;
acrobat.version='4.0';
}
}
catch(e) {}
try
{
var oAcro7 = new ActiveXObject('AcroPDF.PDF.1');
if (oAcro7)
{
acrobat.installed=true;
acrobat.version='7.0';
}
}
catch(e){}
}
return acrobat;
}
Google Analytics? If you have any sort of web analytics program installed on your web server, it generally also gives info such as the operating system, web browser, etc. You could use the user's IP address to find their info in your logs.
Also, what issue are they having? We might be able to help.
I did find this program, but unfortunately it's not a free service, nor is there really any way for me to get the information on that page (unless I pay for it): http://www.cyscape.com/showbrow.aspx
The user agent and related HTTP headers that are sent in all requests can give you some information (browser and version), but for detail about the client-side installation, you may be out of luck for an automated capture mechanism that obtains a list of arbitrary plugins installed on the client browser. This would be a security violation, so unless a browser intentionally exposes them, you wouldn't get access to this without installing a client-side binary.
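If you control the server, one low-tech option (not mentioned above; just an illustrative sketch using Flask, with a made-up route name) is a small debug endpoint that echoes back whatever headers the affected user's browser sends, which you could then ask the user to visit:
from flask import Flask, request, jsonify
app = Flask(__name__)
@app.route("/debug-headers")
def debug_headers():
    # Echo the request headers (User-Agent, Accept, etc.) so the user can
    # copy them back to you, or so you can capture them in the server log.
    return jsonify(dict(request.headers))
if __name__ == "__main__":
    app.run()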
Depending on the relationship with the user, you could try something like GoToMeeting or Copilot so that you can see the bug in action yourself. This would also allow you to peruse the browser settings and plugins.
If it is a CSS issue and the issue is with IE (most often), you may want to consider using the IE 7 library.
When it comes to CSS, I get it working properly in Mozilla browsers, then I see what I need to conditionally hack to make it work in IE. This library comes in handy.
Also, if possible, I would try to limit support to the major modern browsers out there.
And if possible, try to include the mobile browsers (iPhone, etc.).
Hope this helps.
I've been using Ocean's Browser Capabilities in my ASP.NET web sites. It is really easy to get many properties. Specifically I'm using the Ocean2.Web.HttpCapabilities library.
To get the browser type and capabilities:
string browserSettings = Ocean2.Web.HttpCapabilities.BrowserCaps.Build.ProcessDefault(HttpContext.Current.Request);
Here is a sample of the results:
Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0; WOW64; SLCC1; .NET CLR 2.0.50727; .NET CLR 3.0.04506; Media Center PC 5.0; InfoPath.2)
os - Windows Vista
platform - WinNT
win16 - false
win32 - true
win64 - true
type - IE7
browser - IE
version - 7.0
BrowserBuild - aol - false
cookies - true
javascript - true
ecmascriptversion - 1.2
vbscript - true
activexcontrols - true
javaapplets - true
screenBitDepth - 1
mobileDeviceManufacturer - Unknown
mobileDeviceModel - Unknown
You could also try this:
BROWSER PROBE finds details about your browser, plugins, system, screen and much more.
A great tool for support staff and casual users alike.
Browser Probe
Most of these answers are outdated with dead links.
I found http://www.mybrowserinfo.com, which suits my needs. Hope it helps someone else.
A more user-friendly service: https://aboutmybrowser.com/?nr
