I'm implementing a Domino web service provider whose purpose is to receive a base64-encoded stream (which, in the client that consumes the web service, is an attachment file) and transform it back into a file. In the web service provider, developed in Java, I use the Stream class and the MIME classes to convert between the stream and the file. The web service provider works well for files up to 5 MB; for larger files, the error described in the technote is displayed. Has anyone run into this problem? Is there any way around it?
Here is the code for the web service provider:
import java.util.Vector;
import lotus.domino.*;

public class criaAnexo {
    private Vector itemsToRecycle;

    public void attachDocument(byte[] is) {
        // create the output stream used to build the MIME attachment
        try {
            itemsToRecycle = new Vector();
            Session session = NotesFactory.createSession();
            Database db = session.getDatabase("Serverx", "base.nsf");
            if (!db.isOpen()) {
                System.out.println("base.nsf does not exist on Serverx");
            } else {
                Stream outStream = session.createStream();
                outStream.write(is);
                session.setConvertMIME(false);
                // create the MIME body
                Document doc = db.createDocument();
                doc.replaceItemValue("Form", "formAttachment");
                MIMEEntity body = doc.createMIMEEntity();
                // create a child entity for the attachment
                MIMEEntity child = body.createChildEntity();
                // find the file suffix
                // String fileSuffix = files[i].substring(files[i].lastIndexOf(".") + 1);
                String fileSuffix = "pdf";
                // set the child to the outstream using a mapped MIME type
                // MIME type mapping see: http://www.w3schools.com/media/media_mimeref.asp
                // child.setContentFromBytes(outStream, mapMIMEType(fileSuffix), MIMEEntity.ENC_IDENTITY_BINARY);
                child.setContentFromBytes(outStream, "application/pdf", MIMEEntity.ENC_IDENTITY_BINARY);
                // set the name for the file attachment
                MIMEHeader header = child.createHeader("Content-Disposition");
                header.setHeaderVal("attachment; filename=\"teste.pdf\"");
                // set a unique id for the file attachment to be able to refer to it
                header = child.createHeader("Content-ID");
                header.setHeaderVal("teste.pdf");
                // outStream.truncate();
                outStream.close();
                Runtime rt = Runtime.getRuntime();
                long total_mem = rt.totalMemory();
                long free_mem = rt.freeMemory();
                long used_mem = total_mem - free_mem;
                System.out.println("Total memory: " + total_mem);
                System.out.println("Total free memory: " + free_mem);
                System.out.println("Total memory used by the agent: " + used_mem / 1048576);
                doc.save(true, true);
                itemsToRecycle.add(doc);
                session.recycle(itemsToRecycle); // recycle all items added to the vector
                session.recycle();
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
Due to base64 encoding (which inflates the payload by roughly a third) and other overhead, files larger than 5 MB may be exceeding the 10 MB limits you have configured for the "Maximum size of request content" and "Maximum POST data" settings on your Domino server. Try increasing them.
In fact the limitation occurs in the client that consumes the web service, which I also implemented in Domino itself. The technote quoted in the description of the problem implies that the problem is on the provider's side, but in fact it is not: when I implemented the web service client in .NET, the file was streamed without problems.
Related
I created a web API which allows users to send files and upload them to Azure Storage. The way it works is, the client app connects to the API to send one or more files to the file upload controller, and the controller takes care of the rest, such as:
Upload file to Azure storage
Update database
It works great, but I don't think it is the right way to do this, because now I can see there are two different processes:
Upload file from the client's file system to my web API (server)
Upload file to the Azure storage from API (server)
It gives me the feeling that I am duplicating the upload process, as the same file first travels from the client (file system) to my API (server) and then from the API to Azure (the destination). I feel the need to show two progress bars to the client for upload progress (from client to server, and then from server to Azure) - that just doesn't make sense to me, and I feel that my approach is incorrect.
My API accepts up to 250 MB, so you can imagine the load.
What do you guys think?
//// API Controller
if (!Request.Content.IsMimeMultipartContent("form-data"))
{
    throw new HttpResponseException(HttpStatusCode.UnsupportedMediaType);
}
var provider = new RestrictiveMultipartMemoryStreamProvider();
var contents = await Request.Content.ReadAsMultipartAsync(provider);
int Total_Files = contents.Contents.Count();
foreach (HttpContent ctnt in contents.Contents)
{
    await storageManager.AddBlob(ctnt);
}
////// Stream
#region StreamHelper
public class RestrictiveMultipartMemoryStreamProvider : MultipartMemoryStreamProvider
{
    public override Stream GetStream(HttpContent parent, HttpContentHeaders headers)
    {
        var extensions = new[] { "pdf", "doc", "docx", "cab", "zip" };
        var filename = headers.ContentDisposition.FileName.Replace("\"", string.Empty);
        if (filename.IndexOf('.') < 0)
            return Stream.Null;
        var extension = filename.Split('.').Last();
        return extensions.Any(i => i.Equals(extension, StringComparison.InvariantCultureIgnoreCase))
            ? base.GetStream(parent, headers)
            : Stream.Null;
    }
}
#endregion StreamHelper
///// AddBlob
public async Task<string> AddBlob(HttpContent _Payload)
{
    CloudStorageAccount cloudStorageAccount = KeyVault.AzureStorage.GetConnectionString();
    CloudBlobClient cloudBlobClient = cloudStorageAccount.CreateCloudBlobClient();
    // container names must be lowercase
    CloudBlobContainer cloudBlobContainer = cloudBlobClient.GetContainerReference("somecontainer");
    cloudBlobContainer.CreateIfNotExists();
    try
    {
        // await instead of .Result to avoid blocking inside an async method
        byte[] fileContentBytes = await _Payload.ReadAsByteArrayAsync();
        CloudBlockBlob blob = cloudBlobContainer.GetBlockBlobReference("SomeBlob");
        blob.Properties.ContentType = _Payload.Headers.ContentType.MediaType;
        blob.UploadFromByteArray(fileContentBytes, 0, fileContentBytes.Length);
        var snapshot = await blob.CreateSnapshotAsync();
        snapshot.FetchAttributes();
        return "Snapshot ETAG: " + snapshot.Properties.ETag.Replace("\"", "");
    }
    catch (Exception X)
    {
        return "Error: " + X.Message;
    }
}
It gives me the feeling that I am duplicating the upload process as the same file first travels to API (server) and then Azure (destination) from the client (file system).
I think you're correct. One possible solution would be to have your API generate a Shared Access Signature (SAS) token and return that SAS token/URI to the client whenever a client wishes to upload a file.
Using this SAS URI your client can directly upload the file to Azure Storage without sending it to your API first. Once the file is uploaded successfully by the client, it can send a message to the API to update the database.
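For illustration, here is a minimal sketch of issuing such a short-lived, write-only SAS URI with the classic Microsoft.WindowsAzure.Storage SDK (the same family of types used in the question); the container name "uploads" and the 30-minute expiry are assumptions, not taken from the code above.
// Hedged sketch: issue a short-lived, write-only SAS URI so the client can
// upload directly to Blob Storage instead of streaming the file through the API.
using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

public class SasIssuer
{
    public string GetUploadUri(string connectionString, string fileName)
    {
        CloudStorageAccount account = CloudStorageAccount.Parse(connectionString);
        CloudBlobContainer container = account.CreateCloudBlobClient()
                                              .GetContainerReference("uploads"); // assumed container name
        CloudBlockBlob blob = container.GetBlockBlobReference(fileName);

        string sasToken = blob.GetSharedAccessSignature(new SharedAccessBlobPolicy
        {
            Permissions = SharedAccessBlobPermissions.Write,
            SharedAccessExpiryTime = DateTimeOffset.UtcNow.AddMinutes(30) // assumed lifetime
        });

        // The client PUTs the file bytes directly to this URI, then calls the API
        // again so the database can be updated.
        return blob.Uri.AbsoluteUri + sasToken;
    }
}
After the client reports success, the API only has to record the blob URI and any metadata in the database; the file bytes never pass through the API.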
You can read more about SAS here: https://learn.microsoft.com/en-us/azure/storage/common/storage-dotnet-shared-access-signature-part-1.
I also wrote a blog post a while back on using SAS that you may find useful: https://gauravmantri.com/2013/02/13/revisiting-windows-azure-shared-access-signature/.
While debugging my Connect listener (REST, Java), I am trying to create a PDF document based on the XML in the Connect log for the demo environment. (I have to emulate the DocuSign POST request while security issues are being resolved.)
I have the DocuSign Connect Service activated with "Include Documents" and "Include Certificate of Completion" checked.
I can see an Attachment element in the log's XML but no DocumentPDF element. When I save the content as a byte array to a PDF file and then try to open it, it cannot be opened in Acrobat.
Is the Attachment element in the Connect log supposed to be a PDF document?
Here is my code to convert it to a PDF file:
String documentName = parseXMLDoc(xmlDoc, "DocumentStatus[1]/Name");
SimpleDateFormat fmt = new SimpleDateFormat("yyyyMMdd HHmmss");
String nowTime = fmt.format(new Date());
// replace() rather than replaceAll(), which would treat "." as a regex metacharacter
OutputStream out = new FileOutputStream("c:\\temp\\" + documentName.replace(".pdf", "_" + nowTime + ".pdf"));
// sun.misc.BASE64Decoder is an internal API; java.util.Base64 is the standard alternative on Java 8+
BASE64Decoder decoder = new BASE64Decoder();
String encodedBytes = parseXMLDoc(xmlDoc, "Attachment/Data");
byte[] decodedBytes = decoder.decodeBuffer(encodedBytes);
out.write(decodedBytes);
out.close();
where parseXMLDoc is
public static String parseXMLDoc(Document xmlDoc, String searchToken) {
String xPathExpression;
try {
XPath xPath = XPathFactory.newInstance().newXPath();
xPathExpression = "//" + searchToken;
return (xPath.evaluate(xPathExpression, xmlDoc));
} catch (Exception e) {
throw new RuntimeException(e);
}
}
No, the Connect log does NOT contain the actual document bytes... the documents are only included in the actual Connect pushes to your external listener. The log files simply contain metadata about the transaction, not the actual content.
You can possibly get a full response from DocuSign by using https://webhook.site, where you will be given a temporary URL for listening. Plug the temporary URL into a DocuSign Connect configuration (create a new one for testing), and open the XML packet on the webhook.site dashboard when it arrives from DocuSign.
I need a solution for publishing a website from a web application hosted in Azure.
I tried the following code. It creates the domain, but I was not able to upload the published website.
private HttpResponseMessage CreateWebsite(CreateSiteViewModel site)
{
var cert = X509Certificate.CreateFromCertFile(Server.MapPath(site.CertPath));
string uri = string.Format("https://management.core.windows.net/{0}/services/WebSpaces/{1}/sites/", site.Subscription, site.WebSpaceName);
// A URL which looks for the right public key with the incoming HTTPS request
var req = (HttpWebRequest)WebRequest.Create(uri);
String dataToPost =string.Format(
#"<Site xmlns=""http://schemas.microsoft.com/windowsazure"" xmlns:i=""http://www.w3.org/2001/XMLSchema-instance"">
<HostNames xmlns:a=""http://schemas.microsoft.com/2003/10/Serialization/Arrays"">
<a:string>{0}.azurewebsites.net</a:string>
</HostNames>
<Name>{0}</Name>
<WebSpaceToCreate>
<GeoRegion>{1}</GeoRegion>
<Name>{2}</Name>
<Plan>VirtualDedicatedPlan</Plan>
</WebSpaceToCreate>
</Site>", site.SiteName, site.WebSpaceGeo, site.WebSpaceName);
req.Method = "POST"; // Post method
// You can also set ContentType = "text/xml" on the request
req.UserAgent = "Fiddler";
req.Headers.Add("x-ms-version", "2013-08-01");
req.ClientCertificates.Add(cert);
// Attach the certificate to the request. When you browse manually you get a
// dialogue box asking whether you want to browse over a secure connection;
// attaching the certificate suppresses that message (programmatically saying OK to it).
string postData = dataToPost;
var encoding = new ASCIIEncoding();
byte[] byte1 = encoding.GetBytes(postData);
// Set the content length of the string being posted.
req.ContentLength = byte1.Length;
Stream newStream = req.GetRequestStream();
newStream.Write(byte1, 0, byte1.Length);
// Close the Stream object.
newStream.Close();
var rsp = (HttpWebResponse)req.GetResponse();
var reader = new StreamReader(rsp.GetResponseStream());
String retData = reader.ReadToEnd();
rsp.GetResponseStream().Close();
return new HttpResponseMessage
{
StatusCode = rsp.StatusCode,
Content = new StringContent(retData)
};
}
I am not entirely sure what you are trying to achieve here, but if I understand correctly you want to publish a website programmatically.
You cannot do this (publish a website programmatically) with the Azure Management APIs. The Azure Management APIs are for managing Azure services and resources; the web site content itself is not in any way an Azure service, nor an Azure resource.
If you want to programmatically publish a website to an Azure Web Site, I would suggest taking a deep read of How to deploy an Azure Web site.
Out of what is mentioned there, the ones that are pretty easy to automate are (see the sketch after this list):
Web Deploy
Repositories using GIT
MSBuild
any other that you are familiar with ...
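As a concrete illustration of that last bullet, here is a minimal, hedged sketch in C# that pushes a pre-built site as a zip to the Kudu zip deploy endpoint of an Azure Web Site; the site name, the deployment (publishing profile) credentials, and the availability of the /api/zipdeploy endpoint are assumptions, not something taken from the code in the question.
// Hedged sketch: deploy a pre-built site by POSTing a zip to Kudu's zip deploy API.
// Assumes the target is an Azure Web Site/App Service whose Kudu site is reachable,
// and that "user"/"password" are the site's deployment (publishing profile) credentials.
using System;
using System.IO;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

public static class ZipDeploySketch
{
    public static async Task DeployAsync(string siteName, string user, string password, string zipPath)
    {
        var endpoint = new Uri($"https://{siteName}.scm.azurewebsites.net/api/zipdeploy");

        using (var client = new HttpClient())
        using (var zip = File.OpenRead(zipPath))
        {
            var credentials = Convert.ToBase64String(Encoding.ASCII.GetBytes($"{user}:{password}"));
            client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Basic", credentials);

            var content = new StreamContent(zip);
            content.Headers.ContentType = new MediaTypeHeaderValue("application/zip");

            // Kudu unpacks the zip into wwwroot, replacing the existing content.
            HttpResponseMessage response = await client.PostAsync(endpoint, content);
            response.EnsureSuccessStatusCode();
        }
    }
}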
I'm converting a website from a standard ASP.NET website over to use Azure. The website had previously taken an Excel file uploaded by an administrative user and saved it on the file system. As part of the migration, I'm saving this file to Azure Storage. It works fine when running against my local storage through the Azure SDK. (I'm using version 1.3 since I didn't want to upgrade during the development process.)
When I point the code to run against Azure Storage itself, though, the process usually fails. The error I get is:
System.IO.IOException occurred
Message=Unable to read data from the transport connection: The connection was closed.
Source=Microsoft.WindowsAzure.StorageClient
StackTrace:
at Microsoft.WindowsAzure.StorageClient.Tasks.Task`1.get_Result()
at Microsoft.WindowsAzure.StorageClient.Tasks.Task`1.ExecuteAndWait()
at Microsoft.WindowsAzure.StorageClient.CloudBlob.UploadFromStream(Stream source, BlobRequestOptions options)
at Framework.Common.AzureBlobInteraction.UploadToBlob(Stream stream, String BlobContainerName, String fileName, String contentType) in C:\Development\RateSolution2010\Framework.Common\AzureBlobInteraction.cs:line 95
InnerException:
The code is as follows:
public void UploadToBlob(Stream stream, string BlobContainerName, string fileName,
string contentType)
{
// Setup the connection to Windows Azure Storage
CloudStorageAccount storageAccount = CloudStorageAccount.Parse(GetConnStr());
DiagnosticMonitorConfiguration dmc = DiagnosticMonitor.GetDefaultInitialConfiguration();
dmc.Logs.ScheduledTransferPeriod = TimeSpan.FromMinutes(1);
dmc.Logs.ScheduledTransferLogLevelFilter = LogLevel.Verbose;
DiagnosticMonitor.Start(storageAccount, dmc);
CloudBlobClient BlobClient = null;
CloudBlobContainer BlobContainer = null;
BlobClient = storageAccount.CreateCloudBlobClient();
// For large file copies you need to set up a custom timeout period
// and using parallel settings appears to spread the copy across multiple threads
// if you have big bandwidth you can increase the thread number below
// because Azure accepts blobs broken into blocks in any order of arrival.
BlobClient.Timeout = new System.TimeSpan(1, 0, 0);
Role serviceRole = RoleEnvironment.Roles.Where(s => s.Value.Name == "OnlineRates.Web").First().Value;
BlobClient.ParallelOperationThreadCount = serviceRole.Instances.Count;
// Get and create the container
BlobContainer = BlobClient.GetContainerReference(BlobContainerName);
BlobContainer.CreateIfNotExist();
//delete prior version if one exists
BlobRequestOptions options = new BlobRequestOptions();
options.DeleteSnapshotsOption = DeleteSnapshotsOption.None;
CloudBlob blobToDelete = BlobContainer.GetBlobReference(fileName);
Trace.WriteLine("Blob " + fileName + " deleted to be replaced by newer version.");
blobToDelete.DeleteIfExists(options);
//set stream to starting position
stream.Position = 0;
long totalBytes = 0;
//Open the stream and read it back.
using (stream)
{
// Create the Blob and upload the file
CloudBlockBlob blob = BlobContainer.GetBlockBlobReference(fileName);
try
{
BlobClient.ResponseReceived += new EventHandler<ResponseReceivedEventArgs>((obj, responseReceivedEventArgs)
=>
{
if (responseReceivedEventArgs.RequestUri.ToString().Contains("comp=block&blockid"))
{
totalBytes += Int64.Parse(responseReceivedEventArgs.RequestHeaders["Content-Length"]);
}
});
blob.UploadFromStream(stream);
// Set the metadata into the blob
blob.Metadata["FileName"] = fileName;
blob.SetMetadata();
// Set the properties
blob.Properties.ContentType = contentType;
blob.SetProperties();
}
catch (Exception exc)
{
Logging.ExceptionLogger.LogEx(exc);
}
}
}
I've tried a number of different alterations to the code: deleting a blob before replacing it (although the problem exists on new blobs as well), setting container permissions, not setting permissions, etc.
Your code looks like it should work, but it has lots of extra functionality that is not strictly required. I would cut it down to an absolute minimum and go from there. It's really only a gut feeling, but I think it might be the using statement giving you grief. This entire function could be written (presuming the container already exists) as:
public void UploadToBlob(Stream stream, string BlobContainerName, string fileName,
string contentType)
{
// Setup the connection to Windows Azure Storage
CloudStorageAccount storageAccount = CloudStorageAccount.Parse(GetConnStr());
CloudBlobClient BlobClient = storageAccount.CreateCloudBlobClient();
CloudBlobContainer BlobContainer = BlobClient.GetContainerReference(BlobContainerName);
CloudBlockBlob blob = BlobContainer.GetBlockBlobReference(fileName);
stream.Position = 0;
blob.UploadFromStream(stream);
}
Notes on the stuff that I've removed:
You should set up diagnostics just once when your app starts, not every time a method is called - usually in RoleEntryPoint.OnStart() (a minimal sketch follows this list).
I'm not sure why you're trying to set ParallelOperationThreadCount higher if you have more instances. Those two things seem unrelated.
It's not good form to check for the existence of a container/table every time you save something to it. It's more usual to do that check once when your app starts, or to have a process external to the website make sure all the required containers/tables/queues exist. Of course, if you're dynamically creating containers this doesn't apply.
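For the first note, here is a minimal sketch of doing the diagnostics set-up once in the web role's OnStart(), using the same classic diagnostics API as the code above; the connection-string setting name is the conventional one for the diagnostics plugin and is an assumption here, not taken from the question.
// Hedged sketch: configure Windows Azure Diagnostics once at role start-up
// instead of inside every upload call. Assumes the classic SDK 1.x diagnostics
// plugin and its "Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" setting.
using System;
using Microsoft.WindowsAzure.Diagnostics;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WebRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        DiagnosticMonitorConfiguration config = DiagnosticMonitor.GetDefaultInitialConfiguration();
        config.Logs.ScheduledTransferPeriod = TimeSpan.FromMinutes(1);
        config.Logs.ScheduledTransferLogLevelFilter = LogLevel.Verbose;

        // Start the monitor against the storage account named in the service configuration.
        DiagnosticMonitor.Start("Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString", config);

        return base.OnStart();
    }
}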
The problem turned out to be firewall settings on my laptop. It's my personal laptop, originally set up at home, so the firewall rules weren't set up for a corporate environment, which resulted in slow performance on uploads and downloads.
I'm currently developing an application which downloads attachments from a Gmail account.
Right now, I get an error whenever I download a zipped attachment. But not all of them; some I can retrieve without error. Here's the exception message:
Exception in thread "main" com.sun.mail.util.DecodingException: BASE64Decoder: Error in encoded stream: needed 4 valid base64 characters but only got 1 before EOF, the 10 most recent characters were: "Q3w5ilxj2P"
FYI: I was able to download the attachment via the Gmail web interface.
Here's the snippet:
Multipart multipart = (Multipart) message.getContent();
for (int i = 0; i < multipart.getCount(); i++) {
BodyPart bodyPart = multipart.getBodyPart(i);
if (bodyPart.getFileName().toLowerCase().endsWith("zip") ||
bodyPart.getFileName().toLowerCase().endsWith("rar")) {
InputStream is = bodyPart.getInputStream();
File f = new File("/tmp/" + bodyPart.getFileName());
FileOutputStream fos = new FileOutputStream(f);
byte[] buf = new byte[bodyPart.getSize()];
int bytesRead;
while ((bytesRead = is.read(buf)) != -1) {
fos.write(buf, 0, bytesRead);
}
fos.close();
}
}
Does anyone have an idea how to work around this problem?
From the list of known limitations, bugs, and issues of JavaMail:
Certain IMAP servers do not implement the IMAP Partial FETCH functionality properly. This problem typically manifests as corrupt email attachments when downloading large messages from the IMAP server. To workaround this server bug, set the "mail.imap.partialfetch" property to false. You'll have to set this property in the Properties object that you provide to your Session.
So you should just turn off partial fetch in the IMAP session. For example:
Properties props = System.getProperties();
props.setProperty("mail.store.protocol", "imaps");
props.setProperty("mail.imap.partialfetch", "false");
// getStore("imaps") below uses the "imaps" protocol, so its prefixed property must be set too
props.setProperty("mail.imaps.partialfetch", "false");
Session session = Session.getDefaultInstance(props, null);
Store store = session.getStore("imaps");
store.connect("imap.gmail.com", "<username>", "<password>");
source: https://javaee.github.io/javamail/docs/api/com/sun/mail/imap/package-summary.html
If you are using the JavaMail API, then add these lines while you are connecting to the IMAP server:
Properties prop = new Properties();
// use a String value here; JavaMail looks properties up via getProperty()
prop.put("mail.imaps.partialfetch", "false");
Session session = Session.getDefaultInstance(prop, null);
... your code ...
It should work.