We have observed the vulnerability CVE-2022-29464 being exploited in the wild since April; it allows unrestricted file uploads, resulting in arbitrary remote code execution (RCE).
This affects WSO2 API Manager 2.2.0 and above, Identity Server 5.2.0 and above, Identity Server Analytics 5.4.0 to 5.6.0, Identity Server as Key Manager 5.3.0 and above, Open Banking AM 1.4.0 and above, and Enterprise Integrator 6.2.0 and above.
We are using the WSO2 EI product, versions 6.4.0/6.5.0.
I have also gone through the guidelines in Security Advisory WSO2-2021-1738.
We don't have a support subscription, so I'm planning to remove the <FileUploadConfig> mappings in <product_home>/conf/carbon.xml, as suggested on the same WSO2 security advisory page.
Is this mitigation step enough, or do we need to do more?
As per the advisory, it seems disabling the file upload services is not a complete fix. If you look at the fix that has been implemented, it includes code-level changes as well. [1]
[1] - https://github.com/wso2/carbon-kernel/pull/3152/files
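For context, CVE-2022-29464 abuses path traversal in the uploaded file name to write files outside the intended upload directory, which is why removing the <FileUploadConfig> mappings only closes the exposed endpoints while the kernel fix also hardens the code path itself. The following is purely an illustrative sketch of the kind of server-side path validation involved, not the actual code from the pull request:

```java
import java.io.IOException;
import java.nio.file.Path;
import java.nio.file.Paths;

public class UploadPathCheck {

    /**
     * Illustrative only (not the carbon-kernel patch itself): resolve an
     * uploaded file name against the intended upload root and reject any
     * name that escapes it via ../ sequences.
     */
    static Path resolveSafely(Path uploadRoot, String submittedFileName) throws IOException {
        Path root = uploadRoot.toAbsolutePath().normalize();
        Path target = root.resolve(submittedFileName).normalize();
        if (!target.startsWith(root)) {
            throw new IOException("Rejected path traversal attempt: " + submittedFileName);
        }
        return target;
    }

    public static void main(String[] args) throws IOException {
        Path root = Paths.get("/tmp/uploads");
        System.out.println(resolveSafely(root, "report.pdf"));           // allowed
        System.out.println(resolveSafely(root, "../webapps/shell.jsp")); // throws IOException
    }
}
```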
Related: WSO2IS 5.8 includes Log4j 1.2.17
A security vulnerability, CVE-2019-17571, has been identified in Log4j 1. Log4j includes a SocketServer that accepts serialized log events and deserializes them without verifying whether the objects are allowed, which provides an attack vector that can be exploited.
Does anyone know whether this vulnerability can be exploited in the context of WSO2IS 5.8?
Thanks in advance!
WSO2 issues security patches very frequently, as and when issues are discovered. Can you please write to security@wso2.com and check?
Also, as a security best practice, we recommend always using security@wso2.com to report security issues; this is a common practice followed by all open-source projects.
UPDATE: Even though WSO2 Identity Server 5.8.0 has this dependency, it does not use any of the functionality provided by SocketServer. So anyone using the 5.8.0 version is NOT affected. Also, as of IS 5.9.0 this dependency has been upgraded to Log4j 2.
More details here: https://wso2.com/security
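To make the "does not use SocketServer" point concrete: CVE-2019-17571 only has an attack surface if the application actually starts a Log4j 1.x socket receiver that listens for serialized LoggingEvent objects. A hedged sketch of that vulnerable usage pattern (which, per the update above, WSO2 IS 5.8.0 does not do) is shown below; the port and config file name are just placeholders:

```java
import org.apache.log4j.net.SimpleSocketServer;

public class Log4j1SocketReceiver {
    public static void main(String[] args) {
        // Starts a TCP listener (port 4712 here is arbitrary) that accepts
        // serialized LoggingEvent objects from remote clients and deserializes
        // them -- this deserialization of untrusted data is what CVE-2019-17571
        // targets. Merely having log4j-1.2.17.jar on the classpath, without
        // running a receiver like this, does not expose the listener.
        SimpleSocketServer.main(new String[] { "4712", "log4j-server.properties" });
    }
}
```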
I am using Hazelcast 3.11.2, the free version. I am trying to enforce authentication, i.e. a group password, but it is not working. Hazelcast ignores it and lets nodes join the cluster anyway, whether they specify no password or a wrong one.
According to Hazelcast resources on the net, newer versions starting with 3.8.2 will let members join the cluster with the same group name even if the group password is different or not specified. On the other hand, JAAS is supported in the Enterprise version only.
So how should authentication be added in Hazelcast's community edition? Should I hack something in when members are joining, or is there a better, standardized way?
Open to recommendations... Thx!
The group password was removed because it wasn't meant to be used as a security feature. For the community edition, you can try setting hazelcast.application.validation.token at runtime for all your members.
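A minimal sketch of how that property can be set programmatically on each member, assuming Hazelcast 3.x and a shared secret distributed via an environment variable (the variable name HZ_CLUSTER_TOKEN is just an example):

```java
import com.hazelcast.config.Config;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

public class SecuredMember {
    public static void main(String[] args) {
        Config config = new Config();
        config.getGroupConfig().setName("my-cluster");

        // Members presenting a different token value are rejected during the
        // join handshake. Note this is a join-validation check, not full
        // JAAS-style security (which remains an Enterprise feature).
        config.setProperty("hazelcast.application.validation.token",
                System.getenv("HZ_CLUSTER_TOKEN"));

        HazelcastInstance hz = Hazelcast.newHazelcastInstance(config);
        System.out.println("Members: " + hz.getCluster().getMembers());
    }
}
```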
Security is an Enterprise feature and cannot be accessed in the community version.
When using Azure web/worker roles, users can specify osVersion to explicitly set the "Guest OS image" version. This ensures that when Microsoft issues new critical updates, they first show up in a newer "OS image", which users can explicitly specify and test their service on.
How is the same achieved with Azure Service Fabric? Suppose I deployed my service into Azure Service Fabric and it has been running for a month, and then Microsoft issues updates for the OS on the servers where the service is running. How are they applied such that I can test them first to ensure they don't break the service?
Brett is correct. An SF cluster is based on Azure VMSS, and the expectation is that the customer is responsible for patching the OS. https://azure.microsoft.com/en-us/documentation/articles/service-fabric-cluster-upgrade/
We have heard from the majority of SF customers that this is not at all desirable and that they do not want to be responsible for OS patching.
The feature to enable opt-in automatic OS patching is indeed a very high priority within the Azure Compute team. The exact details of how best to offer this are still in design; however, the intent is to have this functionality enabled before the end of the year.
Although that is the right long-term solution, to mitigate this issue in the short term the SF team is working on a set of steps that will enable customers to opt into having their VMs patched using WU in a safe manner. Once the steps are tested out, we will blog about it and publish a document detailing the steps. Expect that in the next couple of months.
As I understand it, you are currently responsible for managing patching on SF cluster nodes yourself. Apparently moving this to an SF-managed feature is planned, but I have no idea how far down the road it might be.
I personally would make this a high priority. Having used Cloud Services for many years, I have come to rely on never having to patch my VMs manually. SF is a big step backwards in this particular area.
It'd be great to hear from an Azure PM on this...
Automatic image-based patching in Service Fabric, like Cloud Services.
Today you do not have that option. The image-based patching capability is a work in progress. I posted a roadmap to get there on the team blog: https://blogs.msdn.microsoft.com/azureservicefabric/2017/01/09/os-patching-for-vms-running-service-fabric/ Try out the script and report any issues you hit. Looking forward to your feedback.
Lots of parts of Service Fabric are huge rolling dumpster fires and big steps backwards. Whole new hosts of problems have been introduced that the IIS/WAS/WCF team had already solved and that now need to be solved all over again. The concept of releasing a PaaS platform while requiring OS patch management is laughable. To add insult to injury, there is no migration path from "classic cloud PaaS" to this stuff. WEEEE, I get to write my very own service host, something that was provided out of the box for a decade by WAS. Not all of us were scared by the ability to control all aspects of service host communication options via configuration. Now we get to use code, so a tweak to channel configuration requires a full patch/release cycle!
I got an email from Amazon saying that some of my apps use SSL to access S3 buckets. After I contacted their support, they gave me a list of clients, which points to my iOS app running on iOS 7/8. I use AWS iOS SDK version 1.7.1.
The first thought that came to my mind was obviously to update the SDK to the latest version. That took quite some effort due to the major differences between 1.x and 2.x of the SDK. After that, I tested with the simulator pointing to their test endpoint with SSL disabled. It worked, great!
But tonight I did some reading on the AWS forum, and in one thread AWS claimed that all versions of their iOS SDK support TLS... things simply do not add up.
Can anybody think of a reasonable explanation for this? If it is not the SDK, and I obviously never altered the SDK in any way, what caused SSL accesses to show up in their report?
If you have not modified the SDK or implemented NSURLConnection's authentication-related delegates to manipulate the security model, a proxy is a potential cause.
Some of the mobile devices may be behind a proxy that prevents proper TLS negotiation. You may need to identify which mobile devices are using SSL and see whether there are any common network components between them and the AWS service.
I am exploring the idea of hosting my CD environment in Windows Azure. I read that the current release of the DMS does not play ball in the cloud; however, no detailed explanation was given. Apparently Azure support is planned for the second quarter of 2013, but in the meantime I'd like to know why it doesn't work so that I can explore potential workarounds.
For instance, is the issue related to sticky sessions (or the lack thereof)? Or is it related to DMS compatibility with SQL Azure?
It will be an issue with sticky sessions. As the DMS does all its work server-side, it needs proper session state management to work. You could do this on Azure using IaaS, but then you would be responsible for installing and maintaining the Sitecore deployment on the OS rather than using the built-in deployment features.
See this post by Jakob Leander for more info: http://www.sitecore.net/Community/Technical-Blogs/Jakob-Leander/Posts/2013/01/Why-we-love-Sitecore-on-Azure.aspx