I'm trying to use Syncfusion to convert HTML to PDF on Windows Azure.
It works fine on the development machine; however, on Windows Azure it does not work with the HTTPS protocol, only with HTTP.
Syncfusion's troubleshooting website suggests the following solution:
Reason: The OpenSSL package is not installed on the machine.
Solution: Converting HTTPS sites requires the OPENSSL libraries to be installed on the machine. The OPENSSL library can be installed by downloading its setup from the link below:
OpenSSL
Alternatively, the required assemblies can be added to the Windows system folder (on a 64-bit machine they should be placed in
$SystemDrive\Windows\SysWOW64, and on a 32-bit machine in
$SystemDrive\Windows\System32):
libeay32.dll
libssl32.dll
ssleay32.dll
https://help.syncfusion.com/file-formats/pdf/convert-html-to-pdf/webkit#troubleshooting
Is it possible to implement this with App Services? If so, how would I go about it?
When converting HTTPS sites, a blank PDF may be generated because the OPENSSL assemblies are missing from the Azure website. To convert HTTPS sites, the converter requires the OPENSSL assemblies. By default, some Azure websites do not have them, so these assemblies should be added to the website explicitly. The assemblies cannot be placed in the system drive in an Azure App Service environment. Refer to the steps below to place the OPENSSL assemblies in Azure for converting HTTPS sites to PDF.
Create a new folder in the project and copy the OPENSSL assemblies to that folder:
libeay32.dll
libssl32.dll
ssleay32.dll
Include that folder in the project and set its “Copy to Output Directory” property to “Copy always”.
Get the path of the OPENSSL assemblies folder in C# and set it in the environment variable (see the sketch below).
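A minimal sketch of that last step, assuming the DLLs were copied to a folder named "OpenSSL" in the output directory and that appending that folder to the process PATH variable is what lets the converter locate the assemblies (the folder name is a placeholder):

using System;
using System.IO;

// Hypothetical folder name; use whatever folder the OPENSSL DLLs were copied to.
string openSslFolder = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "OpenSSL");

// Append the folder to the process PATH so the native DLLs can be resolved.
string currentPath = Environment.GetEnvironmentVariable("PATH") ?? string.Empty;
Environment.SetEnvironmentVariable("PATH", currentPath + ";" + openSslFolder);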
Refer to the link below for more information about adding the OPENSSL assemblies in Azure.
KB: https://www.syncfusion.com/kb/8821/blank-pdf-is-generated-when-converting-https-sites-to-pdf-in-azure
Note: I work for Syncfusion.
Due to the restrictions of the Azure Web App sandbox, I am sure you cannot directly add the required assemblies to the Windows system folder.
Although I have never used Syncfusion, in my experience you can try adding the required DLLs to the same folder as Syncfusion's assemblies, because the DLL search path is generally the system folder or the directory of the calling assembly.
I have another doubt: Azure supports SSL by default, so I don't think you need to install OpenSSL manually. You may just need to change your code to use an existing SSL certificate to support HTTPS for Syncfusion, as the official document Use an SSL certificate in your application code in Azure App Service describes.
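As an example, a hedged sketch of using an existing certificate in application code per that document (it assumes the WEBSITE_LOAD_CERTIFICATES app setting is set to the certificate's thumbprint; the thumbprint below is a placeholder):

using System;
using System.Security.Cryptography.X509Certificates;

// Requires the WEBSITE_LOAD_CERTIFICATES app setting in App Service;
// replace the placeholder with your certificate's thumbprint.
var store = new X509Store(StoreName.My, StoreLocation.CurrentUser);
store.Open(OpenFlags.ReadOnly);
var matches = store.Certificates.Find(
    X509FindType.FindByThumbprint, "YOUR-CERTIFICATE-THUMBPRINT", validOnly: false);
if (matches.Count > 0)
{
    X509Certificate2 certificate = matches[0];
    // Hand the certificate to the component that needs it.
}
store.Close();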
All of the above is for reference only, based on my own thinking. Hope it helps.
For example, there is an application written using .NET Core 2.1, published under IIS (Windows Server 2016). At the root of the application (next to the binaries (.dll)), random files are being created and modified. Will this affect the performance of the application? Will it make a difference if these files are created in a subdirectory next to the binaries?
If we exclude IIS and host under Kestrel, will anything be affected?
As far as I remember, when hosting applications written on the full .NET Framework under IIS, modifying any file (for example, a text file) in the bin directory resulted in the web application restarting.
I do not know what files are being created and modified, but I would separate them from the app files, for security reasons to say the least. Having said that, to answer your questions:
It should not affect the performance unless the volume of operations is so large that it uses up all the IOPS on the drive or the CPU/RAM of the machine.
Kestrel will not be affected unless you modify files that Kestrel uses for the app in some way; if you use dotnet watch run, it will try to recompile them and rerun the host, otherwise it will ignore their existence until the host is restarted.
IIS should ignore them as well, but I do not know what will happen if you have those files in bin and try to restart the host. I have tried changing and adding a file, and it did not cause a restart. Maybe there is something in the IIS settings, but since I have not set up our IIS, my answer is lacking in that regard.
I am getting started with Office Add-ins through the JavaScript API, and I am going through this tutorial. When I proceed with the Try It Out section, I get this error. The add-in runs fine when I give an absolute path in the SourceLocation node of the manifest, for example E:\Excel-Add-in-Javascript\first-excel-addin\Home.html, but it's the relative path that is not working, for example \\SAAD\Excel-Add-in-Javascript\first-excel-addin\Home.html. Kindly let me know if you have a solution.
The source location node should not contain a relative path. It should use a complete path, either on the internet or on a network share.
In your case, you need to make \\SAAD a network share, not just a folder.
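For illustration, the SourceLocation element with a fully qualified URL might look like this (the URL is a placeholder):

<DefaultSettings>
  <SourceLocation DefaultValue="https://myserver.example.com/first-excel-addin/Home.html" />
</DefaultSettings>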
I don't think that serving from a file path (file:///C:/Users/username/Desktop/something.html) or a share is a supported scenario. It may work, but note that it will run differently (and sometimes not run, or be overly permissive) compared to when you deploy the app for real.
To be clear, you can have a manifest file on a network share for ease of testing the add-in -- and in fact, it's the easiest way to get your add-in registered with Office desktop. But the web content should be served off of a web server (anything from hosting via an IIS local-host web server, to using an Azure Website, to putting your content on github and serving it via https://rawgit.com/).
For a project I am currently working on, I need to create a setup application for an existing desktop application. The setup application will be downloaded from a website, and will download required files to the correct locations. When the application is started, it will look for newer versions of these files, download them if any exist, then start the application.
I am using Visual Studio Online with TFVC, linked to Azure. I have a test application set up so that when I trigger a build, Release Management finds the build directory, and moves the files to Azure Blob Storage, but prepends a GUID to the file names being transferred. So what I have in my storage container is:
{Some GUID}/2390/Test.exe
{Some GUID}/2389/Test.exe
{Some GUID}/2387/Test.exe
...
What I want in my container is the latest version of Test.exe, so I can connect to the container, and determine whether I want to download or not.
I have put together a NullSoft installer that checks a website, and downloads files. I have also written a NullSoft "launcher" that will compare local file versions with versions on the website (using a version xml file on the website), and download if newer, then launch the application. What I need to figure out is how to get the newer files to the website after a build, with automation being one of the goals.
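For illustration, a hedged sketch of that version comparison, shown in C# (the actual launcher is NSIS) and assuming a version XML on the website shaped like <files><file name="Test.exe" version="1.0.2390.0"/></files>; the URL and schema are my own placeholders:

using System;
using System.Diagnostics;
using System.Xml.Linq;

static bool IsNewerVersionAvailable(string localExePath, string versionXmlUrl)
{
    // The local version comes from the executable's file metadata.
    var localVersion = Version.Parse(FileVersionInfo.GetVersionInfo(localExePath).FileVersion);

    // XDocument.Load accepts a URL and fetches the document from the website.
    var doc = XDocument.Load(versionXmlUrl);
    var remoteVersion = Version.Parse(doc.Root.Element("file").Attribute("version").Value);

    return remoteVersion > localVersion;
}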
I am an intern, and new to deployment in general, and I don't even know if I'm going about this the right way.
Questions:
Does what I am doing make sense for what I am trying to accomplish?
We are trying to emulate ClickOnce functionality, but can't use ClickOnce due to the fact that the application dynamically loads a number of DLLs. Is there a way to configure ClickOnce to include non-referenced DLLs?
Is there a best practice for doing what I'm describing?
I appreciate any advice, links to references, or real-world examples.
You are mentioning ClickOnce, which you investigated but can't use. Have you already tried an alternative: Squirrel? With Squirrel you can specify which files should be part of the installation, allowing you to explicitly specify which files to include even if you load them dynamically.
Link: https://github.com/Squirrel/Squirrel.Windows
Squirrel is a full framework for creating an auto-updating application and can work with Azure Blob Storage hosting (and also a CDN if you need to scale up).
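For illustration, a minimal sketch of a Squirrel update check against blob storage, assuming the Releases folder that Squirrel produces has been uploaded to the container (the container URL is a placeholder):

using System.Threading.Tasks;
using Squirrel;

public static async Task UpdateIfNewerAsync()
{
    // Placeholder URL; point it at the container holding Squirrel's Releases output.
    using (var manager = new UpdateManager("https://myaccount.blob.core.windows.net/releases"))
    {
        // Compares the RELEASES file against the installed version,
        // then downloads and applies any newer release.
        await manager.UpdateApp();
    }
}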
I have a standard Azure web role that requires some 3rd-party software to be installed on the cloud service. The web role itself is very basic. The 3rd-party software has a few prerequisites, so everything is included in the role's content (a total of 5 MSI files) and is installed via an elevated startup task.
The software is installing successfully and everything's working, but including these MSIs as content for the webrole results in an 80MB *.cspkg file.
Excluding the installers yields a package size of 10MB. The total size of all the 5 MSI files is 20.5MB. I don't understand why including 20MB of installers results in an 80MB cloud service package.
I think the packaging process might be attempting to compress the files (unintentionally increasing the payload). Is what I'm seeing normal? Or is there any way of reducing the resulting package when installers are included?
The package for web roles is usually bigger than the one for worker roles because your role content is included twice: once in the approot folder and once in the sites\0\ folder.
As for your second thought - compression: this is true. The Azure package is just a regular ZIP file. You can safely rename yourpackage.cspkg to yourpackage.zip and browse its content.
However, your strategy of keeping the installation files in the package is not effective at its root. What I suggest is that you put your 3rd-party installers in blob storage, then download and install them from there. For downloading the content you can use various techniques, from PowerShell to the Azure Bootstrapper and probably many more; one possible approach is sketched below.
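For illustration, a minimal sketch of the download-then-install approach in C#, e.g. as a small console app invoked from the elevated startup task (the blob URL and file names are placeholders):

using System.Diagnostics;
using System.Net;

class InstallerBootstrapper
{
    static void Main()
    {
        // Placeholder URL; in practice, one per prerequisite MSI.
        const string blobUrl = "https://myaccount.blob.core.windows.net/installers/prerequisite.msi";
        const string localPath = "prerequisite.msi";

        // Download the installer from blob storage.
        using (var client = new WebClient())
        {
            client.DownloadFile(blobUrl, localPath);
        }

        // Install silently; the startup task must run elevated for this to succeed.
        Process.Start("msiexec", "/i " + localPath + " /qn").WaitForExit();
    }
}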
Keeping any and all of your 3rd-party installers in blob storage is a recommended best practice when designing applications for Windows Azure.
As to the question of why your web application is copied twice in the Azure deployment package: it has its roots in the days when the Windows Azure web role used IIS Hostable Web Core rather than full IIS, and it probably persists for backward compatibility.
I found that if you add the MSI to the \bin folder of the Azure project, it doesn't get copied multiple times, in this style:
https://learn.microsoft.com/en-us/azure/cloud-services/cloud-services-dotnet-install-dotnet
Using Visual Studio 2017, Azure SDK 2.9, Build Engine version 14.0.27530.0
Currently I generate an installer for a program using NSIS on a Linux machine. The NSIS binaries have been compiled for Ubuntu, and using the .nsi script presents no difficulties. However, the resulting setup.exe file is unsigned. This results in scary warnings for our users who download the installer via most common web browsers, as well as warnings from Windows itself when run.
We'd like to avoid these warnings, and unless I'm missing something, that requires using a Windows tool to sign the generated setup.exe file. Is there a way to do this on a non-Windows machine?
Unfortunately, each installer is unique (different files are bundled depending on the customer's request, and a unique ID is included), so I cannot sign the installer on a Windows machine and then upload it.
Your best choice is probably osslsigncode. It built easily for me (make sure the OpenSSL headers are available). It may have difficulties with the kernel-mode signing policy, though (embedding the parent certs up to the root), so you may still have to resort to Wine in the end.
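For illustration, a typical invocation might look like the following (the certificate, key, and timestamp server are placeholders for your own):

osslsigncode sign -certs cert.pem -key key.pem \
  -t http://timestamp.digicert.com \
  -in setup.exe -out setup-signed.exe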
I had to do this a few weeks ago, without using Wine. What I did was import the .pfx file into Windows and then export it with the "Include all certificates in the certification path if possible" option. Then I followed the instructions on this page.
After you have all the certs (spc and pvk files) you should use the following command:
signcode -spc [spc file] -v [pvk file] -a sha1 -$ commercial -t http://timestamp.verisign.com/scripts/timstamp.dll -tr 10 [exe file to sign]
I had to install the mono-devel package:
sudo apt-get install mono-devel
Signing files for Windows uses Microsoft Authenticode signatures. There is a tool in the SDK that signs executables and DLLs (signtool.exe). You might be able to run that using Wine.
It's also possible to sign files through Windows API calls - these functions might be implemented in Wine as well, but I somewhat doubt it because Authenticode is only used and implemented by Microsoft (as far as I know).
However, this tool doesn't do very much - it basically appends the certificate and a signed timestamp to the end of the file. There may be adaptations for Linux as well.
Here is a link to someone who got it working using signcode.