Can a Firefox or Chrome extension use service workers? Attempting to install one via a content script doesn't seem to work:
Failed to register/update a ServiceWorker for scope (url)
And installing one via a background script seems to be failing too:
Registration failed with SecurityError: The operation is insecure.
Is this possible?
The service worker is installed if storage is enabled.
But storage is disabled if this Firefox setting is enabled: Preferences / Cookies and Site Data / Delete cookies and site data when Firefox is closed.
Please check the Mozilla ticket for details. You can also test whether the service worker works on the test page.
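For reference, here is a minimal registration sketch (the /sw.js path is illustrative) that surfaces the failure mode described above when run from an affected page:

```js
// Minimal sketch: attempt a registration and log the failure reason.
// With "Delete cookies and site data when Firefox is closed" enabled,
// storage is disabled and the registration rejects with a SecurityError.
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/sw.js')
    .then(function (reg) {
      console.log('Registered with scope:', reg.scope);
    })
    .catch(function (err) {
      console.error('Registration failed:', err.name, err.message);
    });
}
```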
I have a SharePoint-based application that I need to load test.
But when I record the script, the response is not the same as in the browser, so I am unable to work out what needs to be done.
And in the first two requests:
get page
post login
there is no dynamic value in these, so I am not able to understand what is going wrong.
First of all, add an HTTP Cookie Manager to your test plan.
Second, check all fields of the request from the browser (i.e. using the browser developer tools) and from JMeter, paying attention to the URL and headers.
And last but not least, SharePoint installations are very often protected using NTLM or Kerberos; if this is the case, you will need to add a properly configured HTTP Authorization Manager. See the Windows Authentication with Apache JMeter article for more details.
I have slackbox running locally, have created a Spotify dev application, and have successfully authenticated slackbox. It says I am logged in at http://localhost:5000/. All of my variables, including the Slack token, have been set in an .env file via dotenv.
All seems well there.
On the slack side, I have created a slash command mapped to /spotify that POSTs to http://localhost:5000/store. The slash command shows up in my command description list when typing.
When I attempt to use it, though, I get an access denied message in chat, which I assume is due to cross-domain issues:
ERROR: The requested URL could not be retrieved Access Denied.
According to their docs - https://github.com/benchmarkstudios/slackbox - running this locally should work. I also run a Hubot bot locally and it integrates fine with the same slack room.
Any help is appreciated!
https://sprint.ly/blog/5-steps-to-a-slack-integration/
Slack's outgoing slash command requests need to be sent to a public-facing URL, which is a problem if we want to receive these messages on our local development server.
How do we solve this?
One way is to use a secure tunnel, which acts as a public HTTPS URL for our local development server. Problem solved!
Who provides this service?
ForwardHQ provides the best user experience, including a browser extension for setting up a local tunnel in one click. It has a free 7-day trial.
My preferred option is ngrok. It's free for one concurrent tunnel client, with no time restriction. Woop! It's a little harder to use, but it does the job.
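For example, with ngrok installed you can run ngrok http 5000 and point the slash command at the generated HTTPS URL. As a rough illustration of what then arrives at the local server, here is a hypothetical Express handler (the /store route and the token check are illustrative, not slackbox's actual code):

```js
// Hypothetical sketch: what a Slack slash command POSTs to the tunneled app.
// Slack sends application/x-www-form-urlencoded fields such as token,
// command, text, and user_name.
const express = require('express');
const app = express();

app.use(express.urlencoded({ extended: false }));

app.post('/store', (req, res) => {
  // Reject requests that don't carry our slash command verification token.
  if (req.body.token !== process.env.SLACK_TOKEN) {
    return res.status(403).send('Invalid token');
  }
  // slackbox would queue the requested track on Spotify here;
  // this sketch just echoes the command text back into the channel.
  res.send('Received: ' + req.body.text);
});

app.listen(5000, () => console.log('Listening on http://localhost:5000'));
```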
I was trying to build the Chromium source for the Chrome Remote Desktop host and the Chromoting web app on Linux. I followed the instructions from here and here, and the build was successful.
But the problem is that when I add the Chromoting web app as an extension, it starts and asks for authorization, but after that it shows:
Error: invalid_client
Examining the request details, I saw client_id=dummytoken and believe this is the problem. So my question is: why is this happening, and how can I solve it?
Another problem is that when I try to start the Chrome Remote Desktop host process, it stops with the following message:
...
Launching host process
['/opt/google/chrome-remote-desktop/chrome-remote-desktop-host', '--host-config=-', '--audio-pipe-name=/home/diptap/.config/chrome-remote-desktop/pulseaudio#7e4d6b70aa/fifo_output', '--signal-parent']
wait() returned (6794,26112)
Host process terminated
Failure count for 'host' is now 1
OAuth credentials are invalid - exiting.
Cleanup.
Terminating Xvfb
....
Why are my credentials invalid? Are these two problems related? I obtained the credentials by following the steps mentioned in the links.
This is the first time I have built Chromium or any Chrome app, so I may be missing something obvious.
OK, so I just figured this part out (I'm stuck at the next stage myself); I shall help you move forward.
I assume you compiled the Chromoting web app yourself. The credentials from the Google Cloud Console don't seem to stick when you build it; I had to add them manually afterwards.
Go to the folder where the app is located and modify plugin_settings.js as follows:
```js
remoting.Settings.prototype.OAUTH2_CLIENT_ID = 'YOUR CLIENT ID HERE';
remoting.Settings.prototype.OAUTH2_CLIENT_SECRET = 'YOUR CLIENT SECRET HERE';
```
Now you should be able to get past that stage; in fact, you will now be able to access remote machines. Enabling remote access to this machine, however, is posing me a few small problems. Let me know where you get to.
I am a newbie in Azure development. I am migrating an existing ASP.NET application to be hosted as a cloud service. Everything works just fine locally (on the dev AppFabric), but when I publish it to the cloud and then try to sign in on my signin.aspx page, it keeps giving me a "This webpage is not available" error.
After that, refreshing the page gives me an "HTTP Error 503. The service is unavailable." error.
My sign-in page does a lot of work (calling a DB procedure, setting the user session) before redirecting to the default page. [I am using the free Azure trial account.]
What may be causing the problem? How can I debug it? And why does everything work fine on the emulator?
Checking the Event Viewer, I found that the problem was a "Debugger.Break();" call inside a class. It seems that line just freaks the runtime out, causing it to lose control, recycle the application pool, and stop the service!
You should set up a remote desktop connection to your web role instance and look at the Event Log for IIS. You will want this for development and testing independently of your current issue.
Links:
http://www.windowsazure.com/en-us/develop/net/common-tasks/remote-desktop/
http://msdn.microsoft.com/en-us/library/windowsazure/gg443832.aspx
I'm attempting to use grunt-contrib-qunit to run a pre-existing suite of QUnit tests (testing parsing of AJAX request results) in headless mode with PhantomJS on Windows 8.
The tests complete fine in these scenarios:
When the remote page is accessed directly from any browser without Fiddler or another proxy running
When Phantom runs the tests from a command prompt with Fiddler open and running
Oddly, if I don't have Fiddler open and monitoring the requests, the AJAX requests I'm testing never seem to initialize. I've checked my default IE LAN settings and there is no proxy enabled; I've also tried flipping the Auto Detect Settings checkbox there, with no change.
Any thoughts?
Details on my setup:
Node v0.10.4
Latest grunt-contrib-qunit
Windows 8
The QUnit suite is divided into 4 or 5 modules, with setup and teardown tasks in some modules, a mix of asynchronous and synchronous tests, and autorun set to false (a sketch of this shape follows below).
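For reference, a minimal sketch of that suite shape (hypothetical names, using the QUnit 1.x API, and assuming "autorun" means QUnit.config.autostart):

```js
// Hypothetical QUnit 1.x suite shape: a module with setup/teardown,
// one async test and one sync test, with autostart disabled.
QUnit.config.autostart = false;

module('parsing', {
  setup: function () { /* arrange shared fixtures */ },
  teardown: function () { /* clean up */ }
});

asyncTest('parses an AJAX response', function () {
  expect(1);
  $.getJSON('/api/data').done(function (data) {
    ok(data, 'got a parsed response');
    start(); // resume the runner once the async work completes
  });
});

test('parses a literal', function () {
  strictEqual(JSON.parse('"x"'), 'x');
});

QUnit.start(); // kick off the suite manually since autostart is false
```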
Update:
If I turn off the "Reuse client connections" and "Reuse connections to servers" options in Fiddler, I seem to get the same failure behavior as when Fiddler is off. This led me to believe it's a problem with connections being closed prematurely, so I tried setting a custom keep-alive header, but it still errors out.
Update 2:
I still question this, because the page itself loads fine while the requests fail, but it looks like this could possibly be related to NTLM authentication; Fiddler might somehow be facilitating the handshake. There is an open issue for NTLM on the PhantomJS GitHub page.
Update 3:
After continued troubleshooting this evening, it looks like the issue is only with authentication on POST requests; GET requests seem to work fine. I'm working around this for now by routing all requests through an ASHX handler and thus dropping the auth component. The only thing I had to change was to disable web security in PhantomJS to allow the cross-domain requests through.
I was going to say you need to turn off web security, which is done by passing --web-security=no to phantomjs; this will sort out the CORS issues. However, I see in your Update 3 that you've already discovered this.
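If it helps, the flag can be forwarded from a Gruntfile; this is a sketch assuming grunt-contrib-qunit passes double-dashed options through to the PhantomJS command line (the target name and test glob are illustrative):

```js
// Gruntfile.js sketch: forward a PhantomJS flag via the qunit task options.
module.exports = function (grunt) {
  grunt.initConfig({
    qunit: {
      options: {
        '--web-security': 'no' // disable same-origin enforcement in PhantomJS
      },
      all: ['test/**/*.html'] // illustrative glob of QUnit test pages
    }
  });

  grunt.loadNpmTasks('grunt-contrib-qunit');
  grunt.registerTask('test', ['qunit']);
};
```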
For the POST authentication problem, I blogged about the workaround here:
http://darrendev.blogspot.jp/2013/04/phantomjs-post-auth-and-timeouts.html
I've heard the most recent version has fixed this, so upgrading might be the actual answer?
BTW, be careful with auth in PhantomJS, as the auth details are sent on all requests. E.g. if your test page fetches jQuery from a CDN, the CDN will be sent your authentication headers. (SlimerJS has some new features in place for getting around this; AFAIK PhantomJS does not yet.)