As per https://nvd.nist.gov/vuln/detail/CVE-2019-10068:
An issue was discovered in Kentico before 12.0.15. Due to a failure to
validate security headers, it was possible for a specially crafted
request to the staging service to bypass the initial authentication
and proceed to deserialize user-controlled .NET object input. This
deserialization then led to unauthenticated remote code execution on
the server where the Kentico instance was hosted.
Does this apply to v12 only, or are lower versions such as v8.2 and v9 affected as well?
We will need a workaround for older versions, i.e. versions prior to v12.0.15.
Take a look under the security bugs section:
https://devnet.kentico.com/download/hotfixes#securityBugs-v12
Due to an error in the Microsoft.Web.Services3 library, it was
possible for a specially crafted request on staging service to bypass
the initial authentication and proceed to deserialize user-controlled
input. The deserialization of the user-controlled input then led to
remote code execution on the server where the Kentico instance was
hosted.
Workaround for all Kentico versions
The workaround for this issue is the same for all projects, regardless of staging utilization - set the 'Staging service authentication' setting to 'X.509':
1. Navigate to 'Settings' -> 'Versioning & Synchronization' -> 'Staging'
2. Under the 'Staging service' section set 'Staging service authentication' to 'X.509'
3. 'Save' the changes
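As a quick sanity check after changing the setting, you can request the staging service directly (the service path below is an assumption - in Kentico 12 the staging service is typically exposed at /CMSPages/Staging/SyncServer.asmx):

```shell
# Run from any machine that can reach the site. After the workaround,
# a request that does not carry a valid X.509 signature should be
# rejected up front instead of falling through to username/password
# processing and the vulnerable deserialization path.
curl -i "https://your-kentico-site/CMSPages/Staging/SyncServer.asmx"
```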
Details
Issue type: Remote Code Execution
Security risk: Critical
Found in version: 12.0.14 and below
Fixed in version: 12.0.15
Fixed date: 3/22/2019
Reported by: Aon’s Cyber Solutions
Recommendation
Install the latest hotfix. You can download the latest hotfix from the Download section on the DevNet portal. If you use an older Kentico version, it is highly recommended to upgrade to the latest version.
Related
I have always deployed from my local machine to Azure (Classic cloud service), but since yesterday I get this error:
Could not complete the request to remote agent URL 'https://[MYNAME].cloudapp.net:8172/msdeploy.axd?site=Default Web Site'. The request was aborted: Could not create SSL/TLS secure channel.
The port is open. Web Deploy is installed. As far as I can see, nothing has changed.
I tried to install a newer version of Web Deploy (3.6), but it didn't help.
What else can be checked?
Thank you.
I faced the same problem: an old Windows Server 2008 R2 machine where nothing had been changed.
Cause (speculation): since Visual Studio 15.9 they disabled SSL 2.0, TLS 1.0, etc., so it can't communicate properly with the old Web Deploy.
Solution: enable TLS 1.2 on the old web server (and disable the old protocols).
I found this link: https://developercommunity.visualstudio.com/content/problem/384634/webdeploy-netcore-api-to-iis-vs159.html
which linked to this blog:
https://tecadmin.net/enable-tls-on-windows-server-and-iis/
Step 1 – Back Up Registry Settings
We strongly recommend taking a backup of the registry before making any changes. Use the link below to find steps on how to export registry values.
Step 2 – Enable TLS 1.2 on Windows
You have two options to enable the TLS version on your system.
Option 1 – Merge Registry File
Download the Enable-TLS12-Windows.reg and Enable-TLS12-TLS11-Windows.reg files on your Windows system. Now right-click on the file and click Merge.
Step 3 – Restart the server.
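For reference, the merged .reg files typically contain entries like the following (these SCHANNEL registry paths are the standard Windows locations for protocol configuration; the SchUseStrongCrypto value is an addition that makes .NET Framework clients such as Web Deploy prefer TLS 1.2):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client]
"DisabledByDefault"=dword:00000000
"Enabled"=dword:00000001

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Server]
"DisabledByDefault"=dword:00000000
"Enabled"=dword:00000001

; Make .NET Framework 4.x applications use the OS default (strong) protocols
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\.NETFramework\v4.0.30319]
"SchUseStrongCrypto"=dword:00000001
```

Disabling SSL 3.0/TLS 1.0 is done the same way, with Enabled=0 and DisabledByDefault=1 under the corresponding protocol keys. A restart is required for the SCHANNEL changes to take effect.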
PS: First I tried the "Client" way, which didn't work for me. But maybe it helps someone:
I found the answer in this thread: https://stackoverflow.com/a/40050789/9624651
which in turn references this link: https://dougrathbone.com/blog/2016/02/28/pci-compliant-web-deploy-getting-webdeploy-working-after-disabling-insecure-ciphers-like-ssl-30-and-tls-10
Do have a look here; I think they have addressed your problem.
I'm trying to build and publish web job via MSBuild and it is failing at
'_CheckAzureNet46Support' with error VerifyAzureNet46Support -
[VerifyAzureNet46Support] C:\Program Files
(x86)\MSBuild\Microsoft\VisualStudio\v14.0\Web\Microsoft.Web.Publishing.targets(4755,
7): Your hosting provider does not yet support ASP.NET 4.6, which your
application is configured to use.
I've published other projects as web jobs to this web app with no issue, but this issue occurs intermittently. Is it something with my configuration of the web app?
The MSBuild arguments used for the build are:
/P:Configuration=Release /p:DeployOnBuild=true
/P:PublishProfileRootFolder="%heckoutDir\BuildConfigurations\publishProfile"
/p:PublishProfile="WebsitePublishProfile"
/P:Password=WebsitepublishProfilePassword
/P:outputdirectory=Bin/Release
This issue occurs if the App Service plan has high RAM or CPU utilization.
To mitigate it, you can scale up the App Service plan.
It is highly unlikely that this has anything to do with the App Service plan; that analysis simply does not make sense.
Rather, a build-time check fails verifying whether the deployment target in Azure supports ASP.NET 4.6. This is done by performing an HTTP GET request to a hard-coded URL at http://go.microsoft.com/fwlink/?LinkID=613106&clcid=0x409. If said request returns 200 OK, the check fails.
Therefore, the more likely explanation is that the support page in question was back up for a few hours by mistake. We had the same problem, but it seems to work now.
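The check's pass/fail logic can be sketched as a small script (this is an assumption based on the description above, not the actual MSBuild target code):

```shell
# Sketch of the _CheckAzureNet46Support behavior as described above:
# the publishing targets issue a GET to a hard-coded fwlink URL, and a
# 200 OK response makes the build fail with the "hosting provider does
# not yet support ASP.NET 4.6" error.
interpret_net46_check() {
  # $1 is the HTTP status code of the fwlink request, obtainable with e.g.
  #   status=$(curl -s -o /dev/null -w '%{http_code}' \
  #     'http://go.microsoft.com/fwlink/?LinkID=613106&clcid=0x409')
  if [ "$1" = "200" ]; then
    echo "check-fails"   # page is reachable => build error is raised
  else
    echo "check-passes"  # page is gone => build proceeds
  fi
}
```

This is why the failure appears intermittent: it depends entirely on whether that remote page is up, not on anything in your project or App Service plan.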
I have a site where local staging worked fine when I tried it, but when I tried to connect it to a remote server it did not work, giving an error that the connection can't be established. Has anyone tried this?
This is the configuration with the error message:
This blog post (disclaimer: my own) explains how to do it with https - you can omit long parts of it if you don't want encryption. It also covers 6.0, but the general principle is still the same.
You want to pay special attention to the paragraph Allow access to webservices in that article and check if your publishing server (the "stage") has access to the live server. In general, if this is not on localhost, it requires configuration as mentioned in that article.
As you indicate that you can't connect to your production server from your staging server, please check by opening a browser, running on the staging server and connect it to the production server - go to http://production-server-name:8080/api/axis and validate that you can connect (note: You get the authoritative result for this test only when not accessing localhost as the production system: Do run the browser on the staging system!) - with this test you can eliminate the first chance of your remote system being disallowed. Once this succeeds, you'll need credentials for the production server to be entered on the staging server - the account that you use needs to have permissions to change all the data it needs to change when publishing content (and pages etc.)
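The browser test described above can also be run from a shell on the staging server (production-server-name is a placeholder for your live host):

```shell
# Run this on the staging server, not the production box, so the request
# exercises the same network path remote publishing uses.
curl -i http://production-server-name:8080/api/axis
# Any HTTP response (even 403) proves connectivity; a connection
# refused or timeout means the remote host or port is blocked.
```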
The error message you give in the added screenshot can appear when the current user on staging does not have access to the production system (with the credentials used) - verify that you have the same user account that you are using on your staging system (the one that gets the error message from the screenshot) in your production system. Synchronize the passwords of the two.
In your comment you mention that you're using different versions for the staging and the production environments - I don't expect that to work, so this might be the root cause. Test with both systems on the same version.
A couple of important points to keep in mind with remote publishing:
If you're not on LDAP (or you have different LDAPs for different environments), you should validate that your user account is exactly the same in both source and target environments. So, if you're on the QA site and you want to remote publish to production, your screen name, email address, and password should all be the same.
Email address is uber important. Depending on which distribution (version) of Liferay you are on, the remote publish code uses your email address irrespective of whether or not you have portal-ext.properties configured to use the screen name.
You should have the Administrator role on both sides. It may not be required in every scenario, but giving that role out to users that do remote publishing has saved me time and effort debugging why someone's remote publish didn't work. Debugging this process takes a very long time.
If remote publishing is causing you problems (and it probably is or you wouldn't be here), try doing LAR file exports / imports. This is important since remote publish failures are not exactly helpful in telling you what failed; they just tell you that they failed. Surprisingly, there are often problems in the export process, and you can sometimes pinpoint some bad documents or a funky development thing you did using Global scope and portlet preferences that caused your remote publish to fail. I generally use this order in this situation:
a) documents and media [exclude thumbnails or your LAR file will likely double in size; also exclude ranks if you're not using them] from the wrench icon in the control panel
b) web content from the wrench icon in the control panel
c) public pages [include data > web content display, but remove all the other data check boxes], include permissions, include categories
d) private pages [same options as public pages]
If you already have Administrator role and it's saying you don't have permissions to RP to the remote site, setup your user on the target environment with the "Site Administrator" or "Site Owner" role.
A little late for first and foremost, but anytime you have something that's not working (remote publishing or otherwise), check the logs before you do anything else. The Liferay code base doesn't include a lot of helpful logging, but you do occasionally get a nugget of information that helps you piece together enough to do root cause analysis.
Cheers! HTH
I would like to update the Diagnostic configuration file for the azure roles whenever I upgrade my deployment. How can I do this automatically?
From time to time, we do change our diagnostics (using code) and upgrade the service. But whenever we upgrade the service, it is still using the old diagnostics configuration, and we do not see any of the new logs we configured in the new code.
How can I achieve this so that whenever I upgrade my deployment, it upgrades the diagnostics configuration as well?
I wonder if you have a bug in your diagnostics updating code. If each role ran code in OnStart or Run to configure diagnostics on startup, there would be no reason that your instances wouldn't be properly configured. I tend to think that imperative code that configures diagnostics is inherently a bad idea in the long run, but it should still work. If you share the code, maybe I can spot an issue.
The best** way I have found to update and enforce configuration is to use the diagnostics.wadcfg file and update it. When you upgrade your deployment, it will use those settings if you have not overridden them in code somewhere. Contrary to Microsoft's guidance at that link, it should be the preferred method, as opposed to code which must be maintained and is orthogonal to your application's purpose. Said another way: a declarative configuration file that your ops team can maintain is usually a better idea than writing code. To use this, just include it in your deployment as content, delete any existing files in wad-control-container (and remove any code that configured diagnostics). It will then configure itself from that file when you next upgrade.
** You can also use a 3rd party SaaS monitoring service to set and maintain your diagnostics config. I work on one such service, but I am guessing you want to know how to do it yourself. :)
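A minimal diagnostics.wadcfg along these lines might look like the following (the element names are from the classic WAD 1.x configuration schema; the quotas and transfer periods are illustrative, not recommendations):

```xml
<?xml version="1.0" encoding="utf-8"?>
<DiagnosticMonitorConfiguration
    xmlns="http://schemas.microsoft.com/ServiceHosting/2010/10/DiagnosticsConfiguration"
    configurationChangePollInterval="PT1M"
    overallQuotaInMB="4096">
  <!-- Trace logs: buffer locally, ship Verbose and above to storage every minute -->
  <Logs bufferQuotaInMB="1024"
        scheduledTransferPeriod="PT1M"
        scheduledTransferLogLevelFilter="Verbose" />
  <!-- Windows event logs to transfer -->
  <WindowsEventLog bufferQuotaInMB="1024" scheduledTransferPeriod="PT1M">
    <DataSource name="Application!*" />
  </WindowsEventLog>
</DiagnosticMonitorConfiguration>
```

Because this file is deployed with the package, editing it and upgrading the deployment is enough to roll out new diagnostics settings, which is exactly the behavior the question asks for.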
I have a 3rd party web page screen capture DLL from http://websitesscreenshot.com/ that lets me target a URL and save the page to a image file. I've moved this code into my Azure-based project and when I run it on my local sandboxed dev box and save to the Azure blob, everything is fine. But when I push the bits to my live server on Azure, it's failing.
I think this is because either MSHTML.dll and/or SHDOCVW.dll are missing from my Azure configuration.
How can I get these libraries (plus any dependent binaries) up to Azure?
I found the following advice on an MSFT forum but haven't tried it yet. http://social.msdn.microsoft.com/Forums/en-US/windowsazuredevelopment/thread/0344dcff-6fdd-4479-a3b4-3e89750a92f4/
Hello, I haven't tried mshtml in the cloud. But generally speaking, to
use a native dll in a Web Role, you add the dll to the Web Role
project just like adding a picture (choose add existing items). Then
make sure the Build Action is set to Content. This tells Visual Studio
to copy the dll file to the output package.
Also check dependencies carefully. A lot of problems related to native
code are caused by missing dependencies, such as a particular VC++
runtime dll.
Thought I'd ask here first before I burn a day or two on an unproven solution.
EDIT #1:
It turns out that our problem was not related to MSHTML.dll or SHDOCVW.dll missing from the Azure server. They're there.
The issue is that by default new server instances have the IE security hardening feature enabled, and this was preventing our 3rd party DLL from executing script. So we needed to turn off the Enhanced IE Security Configuration settings. This is also a non-trivial exercise.
In the meantime, we just created a server-side version of the feature on our site we need to make screen captures from (e.g. we eliminated JSON-based rendering of UI on the client), and we were able to proceed.
I think the solution mentioned in the MSDN forum thread is correct. You should put them as part of your project files, so that the SDK will package and deploy them to the VM on the cloud.
But if they are COM components and need to be registered, you should call the registration command via the Startup feature. Please check http://msdn.microsoft.com/en-us/hh351539
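A sketch of that startup-task approach (the role name, script name, and DLL name below are placeholders, not from the question):

```xml
<!-- ServiceDefinition.csdef: run an elevated startup task before the role starts -->
<WebRole name="MyWebRole">
  <Startup>
    <Task commandLine="startup.cmd" executionContext="elevated" taskType="simple" />
  </Startup>
</WebRole>
```

startup.cmd is deployed alongside the COM DLL (both with Build Action = Content, Copy to Output Directory enabled):

```
REM startup.cmd - register the COM component silently; /s suppresses dialogs
regsvr32 /s mycomponent.dll
exit /b 0
```

The elevated execution context matters: registration writes to HKLM, which the default role identity cannot do.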
HTH