I want to create an automated performance-metric gathering tool to collect various page-load-time metrics for a Flash-based web application. I am doing the web automation with a batch script, and I want to collect the metrics using browsermob-proxy (http://opensource.webmetrics.com/browsermob-proxy/), which exports them as a HAR file. I've never done this before, so I was wondering whether this approach is okay. What are the steps to using browsermob-proxy on Windows (Firefox)? There is no information about Windows in the documentation, just Linux. I know I run the proxy from the /bin directory; then what do I need to do?
I downloaded the browsermob zip from here: http://opensource.webmetrics.com/browsermob-proxy/. Unzip it and refer to the README.md.
The README file in the browsermob zip gives the instructions for starting the proxy server. Once the server is started, specify the server port and machine IP in your Firefox proxy settings (Options -> Advanced -> Network -> Settings). Then trigger your URLs in that browser.
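To sketch the steps: start the proxy from the bin directory (there should be a .bat next to the shell script on Windows), create a proxy instance through its REST API, point Firefox at that instance, and pull the HAR once the page has loaded. Below is a minimal Python sketch of the REST calls, assuming the standalone server listens on port 8080; the same calls can be made with curl from your batch script.

```python
import json
import requests

API = "http://localhost:8080"  # REST API of the standalone proxy started from /bin

# Create a new proxy instance; the response contains the port it listens on
proxy_port = requests.post(f"{API}/proxy").json()["port"]
print(f"Point Firefox at localhost:{proxy_port} in its proxy settings")

# Start a new HAR capture on that proxy (one HAR per page load / test run)
requests.put(f"{API}/proxy/{proxy_port}/har", data={"initialPageRef": "page1"})

# ... drive the browser through the pages you want to measure ...
input("Press Enter once the page has finished loading...")

# Fetch the captured HAR and save it for analysis
har = requests.get(f"{API}/proxy/{proxy_port}/har").json()
with open("pageload.har", "w") as f:
    json.dump(har, f, indent=2)

# Clean up the proxy instance
requests.delete(f"{API}/proxy/{proxy_port}")
```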
I would be most grateful if anyone could help me solve this problem with ClickOnce Web deployment.
I have read all the threads on this subject and I have also read through all the Microsoft documentation on the subject. They seem to say a lot without actually being direct or providing helpful examples. However, perhaps I am wrong and I have not looked in the right places.
I have already used ClickOnce successfully to deploy an application on the local area network.
It works well and really isn't that complicated. However, my goal is to deploy this application to customers, who are not connected to my local network.
I have set up a web site (www.mydomain.co.za), which I can access directly or via the ftp protocol.
I have created a subdirectory off the root where I intend to publish the files created by the publish function. The publish function of the application requires a Publishing Folder Location and an Installation Folder URL; I don't really understand the functional difference between these two locations. If I set the Publishing Location to ftp://www.mydomain.co.za/MyProductName and the Installation Folder URL to http://www.mydomain.co.za/MyProductName, the publish process succeeds, and when I check on the web server the files appear to have been published successfully. A further Application Files/MyProductName subdirectory, with the version number appended, was created where all the output was placed.
My next step is then to grab the URL of the setup.exe file and run it from a browser. This downloads setup.exe to my Downloads folder; when I try to run it, I get an error:
Deployment and application do not have matching security zones.
I have seen this come up in other threads, but they don't seem to relate directly to what I am trying to do. They mention using Internet Explorer to achieve some degree of success, but all the browser did was download the file.
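If it is relevant: my understanding is that the Installation Folder URL ends up as the deploymentProvider codebase inside the generated .application manifest. Here is a minimal sketch I can use to check what was actually published (the manifest file name and the asm.v2 namespace are assumptions on my part):

```python
# Print the deploymentProvider URL baked into the published deployment manifest,
# assuming it uses the usual urn:schemas-microsoft-com:asm.v2 namespace.
import xml.etree.ElementTree as ET

MANIFEST = "MyProductName.application"  # the published deployment manifest
NS = {"asmv2": "urn:schemas-microsoft-com:asm.v2"}

root = ET.parse(MANIFEST).getroot()
provider = root.find(".//asmv2:deploymentProvider", NS)
if provider is not None:
    print("deploymentProvider codebase:", provider.get("codebase"))
else:
    print("No deploymentProvider element found")
```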
I have also noted with interest that a web page is created in the root with a button that prompts the user to install the application. This does not work either.
Does anyone know of a more helpful article on this subject, or can anyone offer more insight? I would be very grateful.
I wasn't sure whether to ask this in an Inkscape-specific forum or here under Azure, so I tagged both.
My goal is to run a Windows build of Inkscape, preferably in a cloud Function or in an App Service, to open different vector files and send them back to the user as plain SVG.
I've downloaded the binary archive (https://inkscape.org/en/release/0.92.2/windows/32-bit/) and extracted it in Kudu on both a paid App Service and in a Function App.
When I run inkview.com it seems to work; it outputs info to cmd.
But when I run inkscape.com it just stays open for a couple of seconds and then quits (it outputs a blank line and exits). I've tried -V and -? and many other options (also the -Z no-GUI option).
Does anybody have an idea of what's going on here? Is Azure perhaps missing some dependencies that Inkscape needs to run? Any ideas on how to troubleshoot?
Thanks in advance.
Azure Functions, like Web Apps and Mobile Apps, run in an App Service. The App Service runs in a secure environment called a sandbox, which imposes certain limitations; among them is the use of GDI+.
With Inkscape being a graphics program, I can only imagine that it makes use of GDI+, so it would be blocked.
You can see the list of limitations here: https://github.com/projectkudu/kudu/wiki/Azure-Web-App-sandbox#unsupported-frameworks
To run Inkscape in Azure, you need to host it in something other than an App Service, such as a VM, a Cloud Service, Service Fabric, containers, etc.
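For what it's worth, once the binary is hosted somewhere without the sandbox, the conversion step you describe is a single CLI call. A minimal sketch, assuming Inkscape 0.92's command-line options and a hypothetical install path:

```python
# Convert a vector file to plain SVG by shelling out to Inkscape 0.92,
# assuming it runs on a host without the App Service sandbox (VM, container, ...).
import subprocess

INKSCAPE = r"C:\inkscape\inkscape.com"  # path to the extracted binary (assumption)

def to_plain_svg(input_path: str, output_path: str) -> None:
    # -z / --without-gui avoids starting the GUI; --export-plain-svg writes plain SVG
    subprocess.run(
        [INKSCAPE, "-z", input_path, f"--export-plain-svg={output_path}"],
        check=True,
        timeout=60,
    )

to_plain_svg("drawing.eps", "drawing.svg")
```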
I am new to HP LoadRunner. I was trying to record a script on my virtual machine. However, when I try to record the script, VuGen does not hit the HTTP-based application. I am able to access the application using Internet Explorer.
Has this combination ever worked or is this a new installation?
What recording options have you tried? (HTML, URL, Sockets, Proxy, ...)
What version of LoadRunner VUGEN?
What version of Internet Explorer?
Have you tried a control site, such as the Flights Web application used as a part of the LoadRunner tutorial?
Have you tried a different browser?
Does this work on another machine? ( Look to differences to reconcile )
Have you satisfied all of the requirements for installation, including your credentials level on the host?
Do you have antivirus in the virtual machine which needs to be disabled?
Is VUGEN inside of the virtual machine instance with the browser or outside the virtual machine inside of the core operating system?
It seems the default path for Internet Explorer (C:\Program Files\Internet Explorer\iexplore.exe) was incorrect in HP LoadRunner. After manually selecting the web browser at the correct path (C:\Program Files (x86)\Internet Explorer\iexplore.exe), LoadRunner started to record the scripts.
Thanks all.
I'm building a launcher for internal use as a Chrome packaged app; it includes links to internal resources (databases, web links, etc.).
The problem is with local files. I want them to launch using whatever program is the default handler for them; for example, Access databases open in Access.
I've tried:
Creating a file:/// link. When it is clicked, nothing happens and the link is not followed.
I found an extension (locallinks) here: https://code.google.com/p/locallinks/, which will open local file links. I've tried borrowing from that extension and passing the file link to the background script in my packaged app which would then open a new window with that url. Unfortunately, that results in a file not found, even for simple types such as text files. So obviously the local filesystem is sandboxed. Not surprising.
I thought maybe it would work to pass the link to an extension to open, but in that case, the file would be opened in Chrome and if Chrome does not support it, it would attempt to download the file locally.
The reasons I'm using a Chrome packaged app are:
1. This will be updated often and the Chrome Web Store update feature would make it easy to keep clients updated without having to build our own update mechanism.
2. We can restrict installation of the app through CWS to internal users.
3. The app would be used in Windows, Linux and Mac environments. Obviously the file paths would differ, but since they point at a Samba share, and the mount points and network share drives are known, this is an easy problem to overcome.
4. There is additional functionality we will be building into the Chrome app in the future other than the launcher which fits very well with how Chrome Apps are designed.
My thoughts are:
Native Client? I have read a bit about these, but I think I would end up with the same limitations where the native client app would be sandboxed and may not actually have any better way of launching a local file.
Sockets? Maybe a simple Qt app listening on a socket to launch apps? Since the Qt app would run with user permissions, and the socket would only accept connections from localhost, I guess the socket could in theory be used by a non-privileged app to launch something with user-level permissions. Is there a way for me to limit connections through the socket so that it is only accessible from my extension?
The sockets solution isn't ideal but may work since the app would not be updated often (if ever) since functionality is so simple.
Am I missing an obvious way of doing this that wouldn't require another component (a Qt app)?
Relating to your thought #2: I'm not sure what local installation footprint you are willing to tolerate, but you may consider the following.
Hosting a minuscule local web server, or a Qt app as you mention, which can also launch local programs (any of the lightweight web server frameworks would do). Have your packaged app, or your own Chrome extension, rewrite links so that they point at your web server along with the URL of the original link; the server can then launch whatever program is appropriate. Downside: depending on the implementation, this may bypass some of the browser's security screening of the original links.
You may also look at this stackoverflow question if it helps.
You can limit access by confirming that requests originate from the local machine, or by embedding a key or hash inside your Chrome extension. You could generate the key at installation so that it's unique per machine. None of this will pass serious security scrutiny, so it depends on your risk profile; you will have a hard time justifying that each part is secure and free of potential for exploitation.
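To make the combination of those two ideas concrete, here is a minimal sketch of such a local launcher in Python; the port, token handling and query parameters are all invented for the example, and your extension would rewrite file links into requests against it.

```python
# Tiny local web server: accepts http://127.0.0.1:8765/open?token=...&path=...
# and opens the requested file with its default handler.
import os
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

SHARED_TOKEN = "replace-with-per-machine-key"  # embed the same value in the extension

class LaunchHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        query = parse_qs(urlparse(self.path).query)
        token = query.get("token", [""])[0]
        path = query.get("path", [""])[0]
        if token != SHARED_TOKEN or not os.path.exists(path):
            self.send_error(403)
            return
        os.startfile(path)  # Windows: open with the default handler
        # On Linux/Mac you would shell out to xdg-open / open instead.
        self.send_response(204)
        self.end_headers()

# Bind to localhost only so nothing off the machine can reach it
HTTPServer(("127.0.0.1", 8765), LaunchHandler).serve_forever()
```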
It seems you will need both a Chrome extension and a minuscule local web server to make this work. Maybe it's easier to let users just download the files and click them...
Sorry if this isn't enough help, but you are basically trying to do something that is by design not possible in Chrome, so there is unlikely to be a simple solution.
https://wikis.forgerock.org/confluence/display/openam/Deploy+OpenAM
According to the above documentation, I need to open a web browser to create the initial configuration. This sounds really weird to me. I would like to script the deployment of OpenAM, and it seems impossible.
My current script does the following:
download OpenAM
extract OpenAM
copy the .war into the Tomcat webapps directory
extract the administration tools
extract the configuration tool
extract the diagnostic tool
download OpenDJ
launch the OpenDJ setup with all its arguments
Now I would like to launch the OpenAM configuration tool with the configuration file I want to use, but it seems OpenAM must already be configured:
The configuration tool requires a $HOME/openam/bootstrap file, where $HOME/openam is the configuration folder that should exist once you have already configured it.
Is this true? Must the service already be configured before you can use the configuration tool?
Of course not... the 'configurator tool' is meant to perform the initial configuration of OpenAM.
For the sake of simplicity, you should not use an external data store.
If you really want to use an external configuration store (OpenDJ or Oracle DSEE are the only supported ones currently), the external config store must be up and running before launching 'configurator tool'.
OpenDJ can be configured in an automated way as well.
If you do not need an external configuration store, just deploy the OpenAM web app and use the 'configurator tool'.
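A minimal sketch of that scripted step, assuming the configurator tool jar from the configuration-tools archive; the exact jar name includes the version, and the real property names should be taken from the sample configuration file shipped with the tools:

```python
# Drive the OpenAM configurator tool non-interactively from a script.
# Jar name, property names and values below are examples only.
import subprocess

PROPERTIES = """\
SERVER_URL=http://openam.example.com:8080
DEPLOYMENT_URI=/openam
BASE_DIR=/home/openam/openam
ADMIN_PWD=changeit-admin
AMLDAPUSERPASSWD=changeit-agent
"""

# Write the silent-configuration properties file
with open("config.properties", "w") as f:
    f.write(PROPERTIES)

# Run the configurator tool against the deployed OpenAM web app
subprocess.run(
    ["java", "-jar", "openam-configurator-tool.jar", "--file", "config.properties"],
    check=True,
)
```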
Some helpful links:
Using the CLI
configurator.jar documentation
Automated installation and configuration of OpenAM