I am attempting to obtain a data feed from Yahoo Finance using the following code:
System.Net.WebRequest request = System.Net.WebRequest.Create("http://download.finance.yahoo.com/download/quotes.csv?format=sl&ext=.csv&symbols=^ftse,^ftmc,^ftas,^ftt1x,^dJA");
request.UseDefaultCredentials = true;
// set properties of the request
using (System.Net.WebResponse response = request.GetResponse())
{
    using (System.IO.StreamReader reader = new System.IO.StreamReader(response.GetResponseStream()))
    {
        return reader.ReadToEnd();
    }
}
I have placed this code into a console application and, using Console.WriteLine on the output, I receive the information I require. I used the 'Run as...' command to execute this under a specific domain account.
When I use this code from within a Page_Load event, I receive the following error message: "No connection could be made because the target machine actively refused it 76.13.114.90:80".
This seems to suggest that the call is reaching Yahoo (is this true?) and that something is missing, which in turn suggests an identity difference between the console application and the application pool.
Environment: Windows Server 2003, IIS 6.0, .NET 4.0.
"Target machine actively refused it" indicates that the TCP connection itself is not succeeding. This could be due to the fact that the Proxy settings when run under IIS are not the same as those that apply when you run in the console.
You can fix this by setting a WebProxy on your request, that points to the proxy server being used in the environment.
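For example, a minimal sketch (the proxy address below is a placeholder; substitute the proxy server actually used in your environment):

// A hypothetical proxy address - replace with your environment's proxy.
var proxy = new System.Net.WebProxy("http://yourproxy.yourdomain.local:8080");
proxy.UseDefaultCredentials = true; // or supply an explicit NetworkCredential

System.Net.WebRequest request = System.Net.WebRequest.Create(
    "http://download.finance.yahoo.com/download/quotes.csv?format=sl&ext=.csv&symbols=^ftse,^ftmc,^ftas,^ftt1x,^dJA");
request.Proxy = proxy;
request.UseDefaultCredentials = true;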
Yes, an active refusal is an indication that the target machine is receiving the request and that the information in the headers is either incorrect or insufficient to process it. Since you had to run the console version using a 'Run as...' command, it is entirely possible that the application pool's identity does not have the appropriate permissions or username. You can try changing the application pool identity to that specific domain account to see if that alleviates the problem, but you may have to isolate this particular function into its own application pool in order to protect the rest of the website from that change.
I can't get SqlDataProvider to work when executed in an .fsx script running in an Azure Web Site.
I have started from the samples that Tomas Petricek has here: https://github.com/tpetricek/Dojo-Suave-FsHome.
In short, it is an .fsx script that is executed using the IIS httpPlatformHandler, so that all HTTP requests to my Azure Web Site are forwarded to my F# script.
The F# script uses Suave to handle the requests.
When I tried adding some database access to my HTTP handlers, I ran into problems.
The problematic code looks like this:
[<Literal>]
let connStr = "Server=(localdb)\\v11.0;Initial Catalog=My_Database;Integrated Security=true;"
[<Literal>]
let resolutionFolder = __SOURCE_DIRECTORY__
FSharp.Data.Sql.Common.QueryEvents.SqlQueryEvent |> Event.add (printfn "Executing SQL: %s")
// the following line fails when executing in azure
type db = SqlDataProvider<connStr, Common.DatabaseProviderTypes.MSSQLSERVER, ResolutionPath = resolutionFolder>
let saveData someDataToSave =
    let ctx = db.GetDataContext(Environment.GetEnvironmentVariable("SQLAZURECONNSTR_QUERIES"))
    // ... code using the context here
This works just fine when I run it locally, but when I deploy it to the Azure site it fails at the line where the type db is created.
The error message is (line 70 is the line that has the type db = ...):
D:\home\site\wwwroot\app.fsx(70,11): error FS3033: The type provider
'FSharp.Data.Sql.SqlTypeProvider' reported an error: A network-related
or instance-specific error occurred while establishing a connection to
SQL Server. The server was not found or was not accessible. Verify
that the instance name is correct and that SQL Server is configured to
allow remote connections. (provider: SQL Network Interfaces, error: 52
- Unable to locate a Local Database Runtime installation. Verify that SQL Server Express is properly installed and that the Local Database
Runtime feature is enabled.)
The design-time database in the connStr is not available on the Azure site, but I thought that was why we have the GetDataContext overload that takes a connection string to be used at run-time?
Is it because it is running as a script and not as compiled code that it is trying to access the database when creating the TypeProvider?
If yes, does it mean that my only option is to compile and provide the database code as a compiled assembly that I load and use in my Suave FSX script?
Reading the connection string from a config file does not work very well as this is an Azure site. I really need to get the connection string from an environment variable (which is set in the Azure management interface).
Hmm, this is a bit unfortunate - as @Fyodor mentioned in the comments, the problem is that the script-based deployment to Azure actually compiles the script on the Azure machine - and so you need to have a statically-resolved connection string that works on Azure.
There are three options:
1. Use a compiled project instead. If you compile your F# code locally and deploy the compiled code to Azure, it will work. Sadly, there are no good samples for that.
2. Do some clever trick to make the connection string accessible to the script at compile time.
3. Send a PR to the SQL provider so that you can give it the name of an environment variable and have it read the connection string from there.
I think (3) would actually be quite a nice and useful feature.
I'm not necessarily sure what the best way to do (2) would be. But I think you might be able to modify app.azure.fsx so that it creates a file (say connection.fsx) that contains something like:
module Connection
let [<Literal>] ConnString = "<Contents of SQLAZURECONNSTR_QUERIES>"
Then app.fsx could load this script and use Connection.ConnString in the argument of SQL type provider.
Summary:
Domino server 8.5.3 FP2, Windows Server 2008.
When calling
NotesAgent.runOnServer(noteid)
from a web browser in a thread where the agent is set to "Run as web user", I get the error "HTTP JVM: You are not authorized to perform that operation".
Detail:
All web requests come into our application through two channels: via a Notes agent or via an XPage (that acts like an XAgent). We have a back-end process that can take up to 20 seconds to complete; it is a call to a remote web service and we have no control over it. Due to requirements we cannot queue these documents for a scheduled agent, they need to go immediately... or as immediately as the service will allow!

The two main problems are: 1) the user has to wait up to 20 seconds, and 2) the HTTP thread is not freed up. During a busy time, we have seen the HTTP thread pool saturate.

What I have done in my test environment is send the request to the XAgent, which calls our backing bean; the bean starts a separate thread and returns a message to the user. It's working great: the HTTP thread frees up immediately, the user gets a timely response, and submission to the web service proceeds "asynchronously".
The logic calling the web service is in LotusScript; converting it to Java would be a massive job as there is an enormous number of interconnected processes in LotusScript. In the Java thread the username is the server name and effectiveUserName is the authenticated HTTP user. The thread calls
NotesAgent.runOnServer(noteid)
, which works, except that the agent runs with the credentials of the user that signed the agent. If we set the agent to "Run as web user", I get the error above. As a test, I moved the code that triggers NotesAgent.run() into the main "calling" function, which gets its session via:
JSFUtil.getVariableValue("session")
and this works as expected (user = server, HTTP user = effective user). The thread's session is obtained like this:
this.module = NotesContext.getCurrent().getModule();
this.sessionCloner = SessionCloner.getSessionCloner();
NotesContext context = new NotesContext( this.module );
NotesContext.initThread( context );
session = this.sessionCloner.getSession();
...and, as above, the effective user name is the authenticated HTTP user and the user name is the server name.
If I browse directly to the agent, e.g. .../myapp.nsf/myagent?openagent, the agent runs as the effective HTTP user. I then put my test HTTP user into the highest security group I have on my test server: same error. I then logged in as a server admin user (that has security settings for everything) and got the same error.
On my test server, Domino\jvm\lib\security\java.policy contains the following when running the job from the NSF:
grant {
permission java.security.AllPermission;
};
Since I can trigger the agent using the session from JSFUtil.getVariableValue("session"), is there some security difference when getting a session via SessionCloner.getSessionCloner().getSession()?
Thanks in advance.
Agents and XPages shall not interbreed :-). For the thread, I would remove the need to get the web user. Pass a Java object to the thread that does not contain any Notes objects. Then go old school and use NotesThread.sinitThread() / NotesThread.stermThread() to get a shiny new session and run the agent from there.
This is a long shot, I guess, given the lack of real information I'm offering at this stage. I'll gladly offer up more details on how to reproduce the issue, but I wanted some fast feedback to see if there is a gotcha somewhere that I'm missing.
I've a simple ServiceStack hello world application, in which I'm playing with the Facebook Auth Provider:
Vanilla ServiceStack
Vanilla Facebook Auth Provider
Vanilla User Session
Vanilla OrmLite User Repository
Vanilla OrmLite MySql Db Factory
When debugging on my local machine, on Windows 7 (and 8), everything works a treat. The service launches, the database tables are created, I can log in via Facebook, and records are inserted into the relevant tables.
When running the service on Ubuntu inside a Vagrant box (with VirtualBox as the virtualization provider, hosted on nginx with mono-fastcgi), the service launches correctly and I can see that the tables are created in the MySQL database. When I hit /auth/facebook I am correctly forwarded to Facebook, but I hit an error when the callback to the service occurs.
This is the current output:
[Auth: 07/30/2013 13:02:47]: [REQUEST: {provider:facebook}] System.NullReferenceException: Object reference not set to an instance of an object at
ServiceStack.ServiceInterface.Auth.FacebookAuthProvider.Authenticate (ServiceStack.ServiceInterface.IServiceBase,ServiceStack.ServiceInterface.Auth.IAuthSession,ServiceStack.ServiceInterface.Auth.Auth) <0x0061e> at
ServiceStack.ServiceInterface.Auth.AuthService.Authenticate (ServiceStack.ServiceInterface.Auth.Auth,string,ServiceStack.ServiceInterface.Auth.IAuthSession,ServiceStack.ServiceInterface.Auth.IAuthProvider) <0x000a7> at
ServiceStack.ServiceInterface.Auth.AuthService.Post (ServiceStack.ServiceInterface.Auth.Auth) <0x00303> at
ServiceStack.ServiceInterface.Auth.AuthService.Get (ServiceStack.ServiceInterface.Auth.Auth) <0x00013> at (wrapper dynamic-method) object.lambda_method (System.Runtime.CompilerServices.Closure,object,object) <0x0004f> at
ServiceStack.ServiceHost.ServiceRunner`1<ServiceStack.ServiceInterface.Auth.Auth>.Execute (ServiceStack.ServiceHost.IRequestContext,object,ServiceStack.ServiceInterface.Auth.Auth) <0x00416>
The request is clearly reaching the service (which I'm accessing via localhost:8080, mapped through to the guest machine on port 80), as the error is wrapped nicely in ServiceStack output.
I don't suppose anyone has any clues?
Okay after an evening of investigation - I've found the root cause.
Line 51 of FacebookAuthProvider.cs calls off to Line 28 of WebRequestExtensions.cs - which in turn calls Line 227 of WebRequestExtensions.cs.
This method call fails at around line 255, essentially because Mono doesn't trust any SSL certificates by default, as explained here.
Instead of figuring out the correct configuration for Mono, I've taken the nasty route (for the time being at least) of using the following line in my AppHostBase.Configure implementation:
F#
System.Net.ServicePointManager.ServerCertificateValidationCallback <- new RemoteCertificateValidationCallback(fun _ _ _ _ -> true)
C#
System.Net.ServicePointManager.ServerCertificateValidationCallback += (a, b, c, d) => { return true; };
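If you want something slightly less drastic than trusting every certificate, one possible sketch (my own variation, not from the original post) is to accept certificates that pass normal validation and only relax the check for the hosts involved in the Facebook callback:

// Sketch: accept normally-valid certificates, and only bypass validation
// for requests going to facebook.com hosts.
System.Net.ServicePointManager.ServerCertificateValidationCallback +=
    (sender, certificate, chain, sslPolicyErrors) =>
    {
        if (sslPolicyErrors == System.Net.Security.SslPolicyErrors.None)
            return true; // certificate chain validated normally

        var request = sender as System.Net.HttpWebRequest;
        return request != null && request.RequestUri.Host.EndsWith("facebook.com");
    };

The longer-term fix is to import trusted root certificates into Mono's certificate store (for example with the mozroots tool) so that validation succeeds without any callback.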
I am now up and running (like a fully-operational Death Star).
I am trying to execute this command from a web application on sourceServer:
var mgr = ServerManager.OpenRemote(destServer);
but I receive this error:
Unauthorized access, with a detailed error: "Retrieving the COM class factory for remote component with CLSID {2B72133B-3F5B-4602-8952-803546CE3344} from machine failed due to the following error: 80070005".
I have full administrative rights on both servers.
I can issue that command from a console application no problem, but when I try it from the web application, I get the error!
I have tried enabling the remote management checkbox, starting the Remote Access Auto Connection Manager service, and updating Load User Profile on the application pool from false to true.
I have searched so much to the point that all of my links are pink in color!
Any input is greatly appreciated.
I've decided to ditch the use of ServerManager.OpenRemote() and use the DirectoryEntry way:
DirectoryEntry root = new DirectoryEntry("IIS://server/W3SVC", username, password);
It is much simpler and more straightforward.
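As a rough sketch of where this leads (the server name and credentials are placeholders, a reference to System.DirectoryServices is required, and on IIS 7+ the IIS:// ADSI provider needs the IIS 6 Management Compatibility features installed on the target server):

using System;
using System.DirectoryServices;

class ListRemoteSites
{
    static void Main()
    {
        // Placeholder server name and credentials - replace with real values.
        using (var root = new DirectoryEntry("IIS://destServer/W3SVC", @"DOMAIN\user", "password"))
        {
            foreach (DirectoryEntry site in root.Children)
            {
                // Web sites are child entries with schema class IIsWebServer.
                if (site.SchemaClassName == "IIsWebServer")
                {
                    Console.WriteLine("{0}: {1}", site.Name, site.Properties["ServerComment"].Value);
                }
            }
        }
    }
}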
I have a SharePoint 2013 installation on a Windows 8 machine.
I am trying to create a web application and it is taking forever; the creation process never stops. I checked the application event logs and found this error:
Machine 'SHAREPOINT2013C (SharePoint - 43000(_LM_W3SVC_1458308317_ROOT))' failed ping validation and has been unavailable since '1/22/2013 3:56:48 AM'.
Searched the web but could not find anything that works for me.
Can anyone suggest a way to resolve the issue? Thanks a lot in advance.
Below are my findings:
In order to recognize routing targets, IIS has to be able to process the SPPING HTTP method.
To test, run this code in PowerShell:
$url = "http://your-Routing-Target-Server-Name"
$myReq = [System.Net.HttpWebRequest]::Create($url)
$myReq.Method = "SPPING";
$response = $myReq.GetResponse();
$response.StatusCode
If you get the following error message:
Exception calling "GetResponse" with "0" argument(s): "The remote server returned an error: (405) Method Not Allowed."
that means the web front end is not set up to process the SPPING HTTP method.
To resolve the issue, run the following commands on each routing target server:
Import-Module WebAdministration
Add-WebConfiguration /system.webServer/handlers "IIS:\" -Value @{
    name = "SPPINGVerbHandler"
    verb = "SPPING"
    path = "*"
    modules = "ProtocolSupportModule"
    requireAccess = "None"
}
This will add a handler for the SPPING verb to the IIS configuration.
Run the test script again to make sure this works.
So this has to do with the Request Management service that runs on the WFE servers in SharePoint 2013. The Request Management service is of no value since you only have one server. If you disable this service on your single-server farm, these messages will go away and your web application creation performance will greatly improve.
Mark Ringo
I recently faced this issue. I created a new web application and it showed a popup saying "It shouldn't take long"; then after some time it showed a connection failure page. I browsed to the virtual directory folder for the new web application and found that the folder was totally empty.
Then what I did to solve this problem:
1. Open IIS.
2. Go to Application Pools.
3. Select the Central Admin application pool, right-click, and select "Advanced Settings".
4. There is a property named "Shutdown Time Limit", which is set to 90 by default. I changed it to 400 and clicked OK.
This restarted the application pool automatically. Then I created a new web application from Central Admin again, and it worked for me.
I've found that these events correlate with when the specified application pools are recycled (mine recycle at a specific time in the morning). It's unfortunate that they're logged in the Event Viewer and there is no real way to clean them up.