Reading a file from a UNC path and setting the correct MIME type in an HTTP request - IIS

How would I go about reading a file from a UNC path, discovering the proper MIME type, and streaming that out to a browser?
It feels to me like I'm re-inventing IIS, and I'll also have to maintain my own MIME type database for each file extension. Does the above request sound reasonable, or is there a better way?
I plan on streaming this out via a browser HTTP GET request on IIS 7. If it matters, I'm also running Cognos on the same server. Any framework is OK (WCF, ASPX, etc.).

Using WCF it's pretty basic:
This code can be hosted under IIS/Windows Service/WAS/etc.
I never found a convenient way to handle the MIME type; you will need your own database that maps file extensions to MIME types.
[ServiceContract(SessionMode = SessionMode.NotAllowed)]
public interface IMediaRetriver
{
    [OperationContract]
    [WebGet(UriTemplate = "/get?f={fileName}")]
    Stream Get(string fileName);
}

[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single)]
public class MediaRetriver : IMediaRetriver
{
    public Stream Get(string fileName)
    {
        // Pro tip: this makes the browser's download dialog show the file
        // name instead of "get".
        WebOperationContext.Current.OutgoingResponse.Headers.Add(
            "Content-Disposition", string.Format("inline; filename={0}", fileName));
        WebOperationContext.Current.OutgoingResponse.ContentType =
            "application/octet-stream";
        // Open read-only with read sharing so concurrent requests don't
        // lock each other out.
        return File.Open(fileName, FileMode.Open, FileAccess.Read, FileShare.Read);
    }
}
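A minimal sketch of the extension-to-MIME-type lookup mentioned above, assuming you maintain the table yourself (the entries shown are illustrative, not exhaustive):

private static readonly Dictionary<string, string> MimeTypes =
    new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase)
{
    { ".pdf", "application/pdf" },
    { ".xml", "text/xml" },
    { ".jpg", "image/jpeg" },
    { ".png", "image/png" },
    { ".zip", "application/zip" },
};

private static string GetMimeType(string fileName)
{
    string extension = Path.GetExtension(fileName);
    string mimeType;
    // Fall back to octet-stream for unknown extensions.
    return MimeTypes.TryGetValue(extension, out mimeType)
        ? mimeType
        : "application/octet-stream";
}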

Related

Trying to use HttpClient.GetStreamAsync straight to the adls FileClient.UploadAsync

I have an Azure Function that will call an external API via HttpClient. The external API returns a JSON response. I want to save the response directly to an ADLS File.
My simplistic code is:
public async Task UploadFileBulk(Stream contentToUpload)
{
    await this._theClient.FileClient.UploadAsync(contentToUpload);
}
The this._theClient is a simple wrapper class around the various Azure Data Lake classes such as DataLakeServiceClient, DataLakeFileSystemClient, DataLakeDirectoryClient, DataLakeFileClient.
I'm happy this wrapper works as I expect: I spin one up, set the service, filesystem, directory, and then a filename to create. I've used this wrapper class to create directories etc., so it works as I expect.
I am calling the above method as follows:
await dlw.UploadFileBulk(await this._httpClient.GetStreamAsync("<endpoint>"));
I see the file getting created in the Lake directory with the name I want. However, if I then download the file using Storage Explorer and try to open it in, say, VS Code, it's not in a recognisable format (I can "force" Code to open it, but it looks like binary to me).
If I sniff the traffic with Fiddler, I can see the content from the external API is JSON: the Content-Type is application/json and the body shows in Fiddler as JSON.
If I look at the calls to the ADLS endpoint I can see a PUT call followed by two PATCH calls.
The first PATCH call looks like it is the one sending the content; it has a Content-Type header of application/octet-stream, and the request body is the "binary looking content".
I am using HttpClient.GetStreamAsync as I don't want my Function to have to load the entire API payload into memory (some of the external API endpoints return very large files, over 100 MB). I am thinking I can "stream the response from the external API straight into ADLS".
Is there a way to change how the ADLS FileClient.UploadAsync(Stream stream) method works so I can tell it to upload the file as a JSON file with a content type of application/json?
EDIT:
So it turns out the external API was sending back zipped content, and once I added the following AutomaticDecompression code to my Functions startup, the files uploaded to ADLS as expected.
public override void Configure(IFunctionsHostBuilder builder)
{
    builder.Services.AddHttpClient("default", client =>
    {
        client.DefaultRequestHeaders.Add("Accept-Encoding", "gzip, deflate");
    }).ConfigurePrimaryHttpMessageHandler(() => new HttpClientHandler
    {
        AutomaticDecompression = DecompressionMethods.GZip | DecompressionMethods.Deflate
    });
}
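For completeness, the named client registered above is then resolved through the standard IHttpClientFactory pattern (the factory field name here is illustrative):

// "default" must match the name passed to AddHttpClient above.
var httpClient = this._httpClientFactory.CreateClient("default");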
@Gaurav Mantri has given me some pointers on whether the pattern of "streaming from an output to an input" is actually correct; I will research this further.
Regarding the issue, please refer to the following code:
var uploadOptions = new DataLakeFileUploadOptions();
uploadOptions.HttpHeaders = new PathHttpHeaders();
uploadOptions.HttpHeaders.ContentType = "application/json";
await fileClient.UploadAsync(stream, uploadOptions);
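Putting the answer together with the streaming requirement from the question, a minimal sketch that streams the HTTP response straight into ADLS without buffering it in memory (the endpoint URL and the fileClient/httpClient instances are assumed to already exist):

// ResponseHeadersRead avoids buffering the whole body before we get the stream.
using var response = await httpClient.GetAsync(
    "https://example.com/endpoint", HttpCompletionOption.ResponseHeadersRead);
response.EnsureSuccessStatusCode();

await using var stream = await response.Content.ReadAsStreamAsync();

var uploadOptions = new DataLakeFileUploadOptions
{
    HttpHeaders = new PathHttpHeaders { ContentType = "application/json" }
};
await fileClient.UploadAsync(stream, uploadOptions);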

problems uploading xslx file in body of post request to .net core app on aws-lambda

I'm trying to send a POST request with Postman to our AWS Lambda server. Let me first state that, when running the web server on my laptop using the Visual Studio debugger, everything works fine. When trying to do exactly the same to the URL of the AWS Lambda, I'm getting the following errors when sifting through the logging:
When uploading the normal xlsx file (593 KB in size):
Split or spanned archives are not supported.
When uploading the same file but with a few worksheets removed (because I thought maybe the size was too big, which shouldn't matter, but let's try):
Number of entries expected in End Of Central Directory does not correspond to number of entries in Central Directory.
When uploading a random xlsx file:
Offset to Central Directory cannot be held in an Int64.
I do not know what is going on. It might have something to do with the way Postman serializes the xlsx file and the way my debug session (on a Windows machine) deserializes it, which is different from the way AWS Lambda deserializes it, but that's just a complete guess.
I always get a 400 - Bad Request response.
I'm at a loss and am hoping someone here knows what to do.
This is the method in my controller; however, the problem occurs before this:
[HttpPost("productmodel")]
public async Task<IActionResult> SeedProductModel()
{
try
{
_logger.LogInformation("Starting seed product model");
var memoryStream = new MemoryStream();
_logger.LogInformation($"request body: {Request.Body}");
Request.Body.CopyTo(memoryStream);
var command = new SeedProductModelCommand(memoryStream);
var result = await _mediator.Send(command);
if (!result.Success)
{
return BadRequest(result.MissingProducts);
}
return Ok();
}
catch (Exception ex)
{
_logger.LogError(ex.Message);
return BadRequest();
}
}
Postman: we do not use API keys for our test environment.
Since you are uploading binary content to API Gateway, you need to enable it through the console.
Go to API Gateway -> select your API -> Settings -> Binary Media Types -> add application/octet-stream.
Save it and make sure to redeploy your API, otherwise your changes will have no effect.
To do so, select your API -> Actions -> Deploy API.
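The same change can also be scripted; here is a hedged sketch using the AWS CLI (the API ID and stage name are placeholders), based on API Gateway's patch-operation syntax where "/" in the media type is escaped as "~1":

# Add application/octet-stream as a binary media type, then redeploy.
aws apigateway update-rest-api \
    --rest-api-id abc123 \
    --patch-operations op=add,path=/binaryMediaTypes/application~1octet-stream

aws apigateway create-deployment \
    --rest-api-id abc123 \
    --stage-name test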

Can the ServiceStack config setting "WebHostPhysicalPath" be used for relative paths?

Hello ServiceStack aficionados!
I would like to host static XML files through the ServiceStack service; however, I can't seem to get the configuration right and only receive 404 errors. It feels like I've tried all sorts of path/URL combinations.
Can WebHostPhysicalPath be defined as a relative path? Is there another setting that must be enabled? I was concerned that maybe the XML extension conflicts with the format-conversion features.
Also, can I host Razor cshtml files this way too?
Any comments on this approach?
thanks!
You can return a static file from a service like so:
[Route("/myFile/")]
public class GetMyFile
{
}
public class HelloService : Service
{
public HttpResult Any(GetMyFile request)
{
return new HttpResult(new FileInfo("~/myfile.xml"), asAttachment:true) { ContentType = "text/xml" };
}
}
As for Razor: http://razor.servicestack.net/
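On the relative-path question itself, a hedged sketch, assuming the older ServiceStack EndpointHostConfig API that exposes WebHostPhysicalPath; MapHostAbsolutePath() is ServiceStack's helper for resolving a path relative to the web host root:

public override void Configure(Container container)
{
    // Serve static files from a folder resolved relative to the web host
    // root rather than from an absolute path.
    SetConfig(new EndpointHostConfig
    {
        WebHostPhysicalPath = "~/static".MapHostAbsolutePath(),
    });
}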

Serving a static file with servicestack

How would I go about serving a static file using ServiceStack?
I would like to add a route like Routes.Add("/app"), and when a client issues a GET for this path I need to return a Silverlight .xap file.
ServiceStack is already able to serve static files when they are referenced directly.
Otherwise if you want a service return a file for downloading, you can do so with:
return new HttpResult(new FileInfo("~/app.xap"), asAttachment: true)
{
    ContentType = "application/x-silverlight-app"
};
Note: asAttachment controls whether or not the HTTP Content-Disposition header is sent.
More info about ServiceStack's responses is in this earlier question: ServiceStack and returning a stream

Upload a file to SharePoint through the built-in web services

What is the best way to upload a file to a Document Library on a SharePoint server through the built-in web services that version WSS 3.0 exposes?
Following the two initial answers...
We definitely need to use the Web Service layer as we will be making these calls from remote client applications.
The WebDAV method would work for us, but we would prefer to be consistent with the web service integration method.
There is additionally a web service to upload files, painful but works all the time.
Are you referring to the “Copy” service?
We have been successful with this service’s CopyIntoItems method. Would this be the recommended way to upload a file to Document Libraries using only the WSS web service API?
I have posted our code as a suggested answer.
Example of using the WSS "Copy" Web service to upload a document to a library...
public static void UploadFile2007(string destinationUrl, byte[] fileData)
{
    // List of destination Urls; just one in this example.
    string[] destinationUrls = { Uri.EscapeUriString(destinationUrl) };

    // Empty field information. This can be populated, but not for this example.
    SharePoint2007CopyService.FieldInformation information =
        new SharePoint2007CopyService.FieldInformation();
    SharePoint2007CopyService.FieldInformation[] info = { information };

    // To receive the result Xml.
    SharePoint2007CopyService.CopyResult[] result;

    // Create the Copy web service instance configured from the web.config file.
    SharePoint2007CopyService.CopySoapClient CopyService2007 =
        new CopySoapClient("CopySoap");
    CopyService2007.ClientCredentials.Windows.ClientCredential =
        CredentialCache.DefaultNetworkCredentials;
    CopyService2007.ClientCredentials.Windows.AllowedImpersonationLevel =
        System.Security.Principal.TokenImpersonationLevel.Delegation;

    CopyService2007.CopyIntoItems(destinationUrl, destinationUrls, info, fileData, out result);

    if (result[0].ErrorCode != SharePoint2007CopyService.CopyErrorCode.Success)
    {
        // ...
    }
}
Another option is to use plain ol' HTTP PUT:
WebClient webclient = new WebClient();
webclient.Credentials = new NetworkCredential(_userName, _password, _domain);
webclient.UploadFile(remoteFileURL, "PUT", FilePath);
webclient.Dispose();
Where remoteFileURL points to your SharePoint document library...
There are a couple of things to consider:
Copy.CopyIntoItems needs the document to already be present on some server. The document is passed as a parameter of the web service call, which will limit how large the document can be. (See http://social.msdn.microsoft.com/Forums/en-AU/sharepointdevelopment/thread/e4e00092-b312-4d4c-a0d2-1cfc2beb9a6c)
The 'http put' method (i.e. WebDAV) will only put the document in the library, but not set field values.
To update field values you can call Lists.UpdateListItem after the 'http put'.
Document libraries can have directories; you can make them with 'http mkcol' (see the sketch below).
You may want to check in files with Lists.CheckInFile.
You can also create a custom web service that uses the SPxxx .NET API, but that new web service will have to be installed on the server. It could save trips to the server.
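A hedged sketch of the 'http mkcol' point above: creating a folder in a document library over WebDAV (the URL is illustrative):

// Create a folder in a document library via the WebDAV MKCOL verb.
HttpWebRequest request = (HttpWebRequest)WebRequest.Create(
    "http://servername/sitename/doclibrary/newfolder");
request.Method = "MKCOL";
request.Credentials = CredentialCache.DefaultNetworkCredentials;
using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
{
    // Expect 201 Created on success.
    Console.WriteLine(response.StatusCode);
}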
public static void UploadFile(byte[] fileData)
{
    var copy = new Copy
    {
        Url = "http://servername/sitename/_vti_bin/copy.asmx",
        UseDefaultCredentials = true
    };

    string destinationUrl = "http://servername/sitename/doclibrary/filename";
    string[] destinationUrls = { destinationUrl };

    var info1 = new FieldInformation
    {
        DisplayName = "Title",
        InternalName = "Title",
        Type = FieldType.Text,
        Value = "New Title"
    };
    FieldInformation[] info = { info1 };

    CopyResult[] copyResults;
    copy.CopyIntoItems(
        destinationUrl, destinationUrls, info, fileData, out copyResults);
}
NOTE: Changing the 1st parameter of CopyIntoItems to the file name, Path.GetFileName(destinationUrl), makes the unlink message disappear.
I've had good luck using the DocLibHelper wrapper class described here: http://geek.hubkey.com/2007/10/upload-file-to-sharepoint-document.html
From a colleague at work:
Lazy way: use the Windows WebDAV filesystem interface. It is bad as a programmatic solution because it relies on the WebClient service running on your OS, and it also only works on websites running on port 80. Map a drive to the document library and get on with the file copying.
There is additionally a web service to upload files, painful but works all the time.
I believe you are able to upload files via the FrontPage API but I don’t know of anyone who actually uses it.
Not sure exactly which web service to use, but if you are in a position where you can use the SharePoint .NET API DLLs, then using SPList and SPLibrary.Items.Add is really easy.
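A hedged sketch of that server-side approach using the SharePoint server object model (this only works in code running on the SharePoint server itself; the URL and library name are illustrative):

using (SPSite site = new SPSite("http://servername/sitename"))
using (SPWeb web = site.OpenWeb())
{
    // Add the file to the document library's root folder,
    // overwriting any existing file with the same name.
    SPFolder library = web.GetFolder("Documents");
    library.Files.Add("filename.xlsx", fileData, true);
}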
