CVE NIST Description:
A vulnerability, which was classified as problematic, has been found in fredsmith utils. This issue affects some unknown processing of the file screenshot_sync of the component Filename Handler. The manipulation leads to predictable from observable state. The name of the patch is dbab1b66955eeb3d76b34612b358307f5c4e3944. It is recommended to apply a patch to fix this issue. The identifier VDB-216749 was assigned to this vulnerability.
Has anyone come across this vulnerability? NIST link: NVD - CVE-2021-4277 (nist.gov)
This is a false positive.
The 'project' that the CVE has been raised against is just somebody's dump of scripts that they've written for themselves and that didn't deserve their own repo. (https://github.com/fredsmith/utils)
Not sure why it's been given a CPE/CVE by MITRE.
It might be worth suppressing this CVE (or CPE) entirely, as I don't think this project can be imported as an artifact.
https://github.com/jeremylong/DependencyCheck/issues/5213
Related
I would like to find out whether the Log4j security vulnerability CVE-2021-44228 (https://nvd.nist.gov/vuln/detail/CVE-2021-44228) affects logstash-logback-encoder.
Warning:
Alster's answer is technically correct, but it may be misleading to some people!
I think logstash, logback, and slf4j all use log4j-core 1.x... this means they are not vulnerable to CVE-2021-45046, CVE-2021-44228, or CVE-2021-45105. See Apache's Log4J security bulletin.
HOWEVER, logback uses Log4J version 1.x, and Log4J version 1.2 IS VULNERABLE to CVE-2019-17571 and CVE-2021-4104 (keep reading for more info on these).
On the SLF4J website that Alster linked, the creators say that logback is safe from CVE-2021-45046, CVE-2021-44228, and CVE-2021-45105 because it "does NOT offer a lookup mechanism at the message level". In other words, logback does not directly use the vulnerable JndiLookup.class file within Log4J...
HOWEVER (again), they do mention that JNDI lookup calls are possible from the logback config file. This is documented in CVE-2021-42550, with a CVSS severity score of 6.6. This score is lower than the others because the exploit is harder for an attacker to achieve, reducing the exposure... however, the end result of a successful attack is the same: arbitrary remote code execution.
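For context, the logback configuration file itself can perform a JNDI lookup via the insertFromJNDI element, which is the vector CVE-2021-42550 describes. A minimal sketch (the env-entry name is made up), assuming an attacker is able to modify or supply the config file:

```xml
<configuration>
  <!-- insertFromJNDI performs a JNDI lookup from within the config file.
       An attacker who can edit this file can point the lookup at a hostile
       JNDI server: the CVE-2021-42550 scenario (addressed in logback 1.2.9). -->
  <insertFromJNDI env-entry-name="java:comp/env/appName" as="appName" />
</configuration>
```

This is why the fix for CVE-2021-42550 also depends on keeping the config file itself out of attacker control.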
Additionally, the SLF4J website fails to mention the CVEs that are independently associated with the Log4J 1.x dependency I linked above (CVE-2019-17571 and CVE-2021-4104). Those CVEs are not related to the JndiLookup.class file, so the statement "does NOT offer a lookup mechanism at the message level" is not a mitigation for them. They do discuss some of the details of CVE-2021-4104, but without referencing the actual CVE documentation, and they fail to mention CVE-2019-17571 altogether.
YOU STILL NEED TO MAKE A CHANGE
CVE-2019-17571 has a severity score of CVSS 9.8...
This is an arbitrary remote code execution vulnerability (JUST LIKE CVE-2021-44228 that you asked about in your question).
CVE-2021-4104 has a severity score of CVSS 8.1...
This is also an arbitrary remote code execution vulnerability; its description in the official documentation says it can "result in remote code execution in a similar fashion to CVE-2021-44228".
CVE-2021-42550 has a severity score of CVSS 6.6...
It is also an arbitrary remote code execution vulnerability.
CVE-2021-44228 (which is the one that doesn't affect you) has a severity score of CVSS 10.
My recommendations
While it does seem possible to use logback safely if you smile at it just right, tweak 17 different configurations, upgrade a package, and manually remove a class file from a jar... I do not feel comfortable giving you all of the specifics of that. While I have been trying to help people with the CVE-2021-45046, CVE-2021-44228, and CVE-2021-45105 Log4J 2.x vulnerabilities, the software you are asking about is a whole different level of complexity, and I'm not confident I could steer you in the right direction in a one-time post without me or you missing crucial steps.
This package depends on software that reached end of life in 2015. My recommendation is that it's time to bite the bullet and upgrade to something that isn't holding on by a thread. I know that isn't what you were hoping to hear... I'm sorry.
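If an immediate migration is not possible, one common stopgap is to exclude the Log4j 1.x artifact so it never reaches the classpath. This is a sketch, not a complete mitigation, and the parent artifact shown is illustrative; run `mvn dependency:tree` to see which of your dependencies actually pulls in log4j:log4j:

```xml
<!-- pom.xml fragment: exclude Log4j 1.x wherever it arrives transitively.
     Coordinates of the parent dependency are hypothetical examples. -->
<dependency>
  <groupId>net.logstash.logback</groupId>
  <artifactId>logstash-logback-encoder</artifactId>
  <version>6.6</version>
  <exclusions>
    <exclusion>
      <groupId>log4j</groupId>
      <artifactId>log4j</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```

Excluding the jar only helps if nothing on your classpath actually needs the Log4j 1.x classes at runtime, so test after excluding.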
Logback does NOT offer a lookup mechanism at the message level. Thus, it is deemed safe with respect to CVE-2021-44228.
reference
I added one dependency to my project, which added another and another; in the end, I got the crate pelite. This crate has a "blob" file which Windows flagged as "Trojan:Win32/Fuery.B!cl".
I assumed that this was a false positive, but it wasn't shown as a "maybe/possible" trojan. I found the crate on GitHub and downloaded the "blob" file from there, and it is fine. If I download it from crates.io (either via Cargo or manually), then I get the trojan warning.
My problem is that cargo run downloaded and ran it, as the antivirus couldn't stop it or delete the file.
Your first step should be to establish that the malware was not run on your system. Nothing inside of Cargo or Rust will run that specific file automatically, but the crate might contain a build script, which Cargo compiles and executes automatically during a build.
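To make the build-script point concrete, here is a sketch of what a build.rs looks like (the file path is hypothetical). Cargo compiles and runs this program before building the crate itself, so simply running `cargo build` on an untrusted crate executes its author's code:

```rust
// build.rs -- Cargo compiles and runs this before building the crate itself.
// Any code here executes with the user's privileges, so `cargo build` alone
// is enough for a malicious crate to run code; no call into the library is
// needed when vetting whether malware "was run".

fn rerun_directive(path: &str) -> String {
    // Typical legitimate use: ask Cargo to re-run the script when a file changes.
    format!("cargo:rerun-if-changed={}", path)
}

fn main() {
    println!("{}", rerun_directive("tests/blob"));
    // Nothing prevents a build script from reading files or spawning
    // processes here, which is why build.rs deserves a look when auditing.
}
```

So when auditing a crate like pelite, checking whether it ships a build.rs (and what that script does) is a quick way to bound what could have executed on your machine.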
The next step is to ascertain if it actually is malware. pelite has an issue where this has been raised:
Ugh this dumb issue, it's a false positive which I already tried to make go away once.
That file contains like 200 PE samples for testing pelite against. These are fairly unusual samples because I wanted to see how pelite would fare and due to their unusual nature tend to get picked up by anti virus.
If you trust the author that it's not really an issue, then there's nothing else to do.
If the author isn't aware of the issue, you can try reaching out to them, following any security contact information they might have (relatively rare for most crates) or opening an issue.
If you don't trust the crate owner or they are unreachable, your final step should be to contact the Rust Security Team via email. Be complete and thorough about the issue and provide as much information as you can.
In my personal opinion, the particular warning you are asking about is a false positive and I would not worry about it. Running an online virus scanner (which I don't know the quality of) reports it as a large number of possible things, in line with what the author said about being a large number of samples.
I've been using Terraform for some time now, and a doubt that always comes to my mind is: which versioning scheme is terraform-core using?
Is it semantic versioning, AKA semver? Because if it is, why does a minor-version upgrade break compatibility? For example, upgrading a project from 0.11.x to 0.12.y rewrites the Terraform state in the 0.12.x format, and downgrading back to 0.11.x is not allowed.
Another related question: why did they opt to start their version numbers at 0.X.X rather than 1.X.X? Does it mean anything?
They do use semantic versioning, but their interpretation is a little different from most.
Here's an answer on a GitHub issue from a Hashicorp employee regarding their versioning methodology:
At HashiCorp we take the idea of a v1.0 very seriously, and once Terraform gets there it will represent a strong promise of compatibility because we believe that the configuration language, internal architecture, CLI, and other product features are right for the long haul.
The current state of Terraform is a little more subtle. We do still consider backward-compatibility very important since we know there is a lot of production infrastructure depending on Terraform today. We must therefore make compromises so we can keep making progress towards something we could make v1.0 promises about. While we keep these disruptions to a minimum, they cannot always be avoided, and so we try to be very explicit about them in the changelog and, where applicable, in upgrade guides.
With this in mind, at this time we suggest always referring to the changelog before upgrading since this is our primary means to note any special considerations that apply during an upgrade. We try to reserve significant breaking changes for increases to the second (traditionally "minor") position in the version number, which at this time represent our "major" development milestones as we work towards an eventual v1.0.
Since Terraform is an application rather than a library we do not intend to follow the Semantic Versioning conventions to the letter, but since they do indeed represent common versioning idiom we are likely to follow them in spirit, since of course we wish to be as clear as possible. As #kshep noted, v0 releases are special in the semver conventions, but the meaning of v1.0 in semver is broadly consistent with how we intend to interpret it.
I'm sorry that our version numbering practices caused confusion here; based on this feedback, we will attempt to be clearer about the significance and risk of each release when we announce it and will work on writing some more explicit documentation on what I wrote above.
Ref: https://github.com/hashicorp/terraform/issues/15839#issuecomment-323106524
Better late than never: https://www.hashicorp.com/blog/announcing-hashicorp-terraform-1-0-general-availability
From here on you can expect regular semantic versioning support.
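In practice, this history means version pinning mattered more before 1.0. A sketch of how a project might express its compatibility expectations in its configuration (the version numbers are illustrative):

```hcl
terraform {
  # Pre-1.0, the second number carried breaking changes, so pin it:
  #   required_version = "~> 0.12.29"   # allows 0.12.x patch releases only
  # From 1.0 on, semver applies, so a major-version constraint is enough:
  required_version = "~> 1.0"
}
```

The `~>` ("pessimistic") constraint allows only the rightmost listed component to increment, which is why `~> 0.12.29` would have protected a pre-1.0 project from an unintended 0.13 state upgrade.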
I have a package on Hackage which depends on a third-party package that doesn't build on newer versions of GHC (>= 7.2). The problem with the other package can be solved with just a one-line patch (a LANGUAGE pragma). I sent the patch upstream twice, but received no feedback. As a result, my package is not installable either until the dependency is fixed.
I could upload a fixed version of the dependency package (with a minor version bump), but I'd like to hear the community's attitude toward such non-maintainer uploads. Again, I don't want to change the library interface; I would only add a new compilation flag to make it buildable again.
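For reference, such a patch really is a single line: a LANGUAGE pragma above the module header. A sketch of what it might look like as a diff (the module name and the specific pragma are hypothetical; the pragma you actually need depends on the compile error):

```diff
--- a/src/Data/LambdaThing.hs
+++ b/src/Data/LambdaThing.hs
@@ -1,1 +1,2 @@
+{-# LANGUAGE FlexibleContexts #-}
 module Data.LambdaThing where
```

A change of this shape affects only how the module is compiled, not its exported interface, which is the point of the question: the upload would be build-fix only.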
Are non-maintainer uploads to Hackage allowed and tolerated?
When a fork of the package on Hackage is a better approach?
Package uploads by non-maintainers are allowed (there may be license issues, but most packages on Hackage, if not all, have licenses permitting this), but of course they are not usually done. They are tolerated if done in good faith and with reasonable procedure.
If you contact the maintainer and don't get any response within n weeks (I'm not sure what the appropriate value of n is; not less than 3, I'd say), uploading a new version yourself becomes an option, though discussing it on the mailing lists first seems more prudent. If the package looks abandoned, even taking over maintainership may be the appropriate action (of course after again contacting the maintainer and giving them time to respond), but that should definitely be discussed with the community (on haskell-cafe, for example).
Whether to prefer a non-maintainer upload or a fork must be left to your judgment; personally, I tend to believe forks step on fewer people's toes.
But a better founded reply would be possible if we knew which package is concerned and could look at the concrete situation.
Forking is intrusive for a package that you suspect is still maintained but whose author is temporarily missing. By intrusive, I mean that other programmers might pick up your fork and then not go back to the mainline once the original author has resumed work on it.
For packages where the original author has left the Haskell community, my personal opinion is that it's better to fork the package if you are going to develop it further. Forking prevents succession problems, such as those that happened with Parsec, where many developers didn't want to update because the successor was, for some time, slower and less well documented than the original.
In all cases, asking on the Cafe is best: regardless of whether people have chosen not to follow it, it is still the center of the Haskell community.
For the particular case in the question, while it is nice if things on Hackage compile, there is no rule that says they have to. A package that depends on a broken package could simply put change instructions for the broken dependency on its front page, i.e. "This package depends on LambdaThing-0.2.0 which is broken, to fix LambdaThing add ... to the file Lambda.hs"
I would say, it's a very good idea to consult the mailing lists regarding the specific package and the specific person who is missing. I took control of the haskell-src-meta package from its original owner, but only after consulting with the lists and IRC, who assured me that Matt Morrow had been missing for months and no-one knew why.
In my opinion, package ownership should probably only be changed where there is a consensus to do so, or at the very least after efforts have been made to find one. In the development version of the Hackage software, it's my understanding that there are access controls so that only administrators can make this kind of intervention.
Cabal allows for a freeform Stability field:
stability: freeform
The stability level of the package, e.g. alpha, experimental, provisional, stable.
What are the community conventions about these stability values? What is considered experimental, and what is provisional? I see that only a few packages are declared stable. What kind of stability does it refer to: stability of the exposed API, or the ultimate bug-free state of the software?
The field is mostly defunct now, and shouldn't be used. As Max said, it will probably be replaced by something meaningful in the future.
If you're interested in the history, the field originated in a design proposal for the first set of Hierarchical Haskell Libraries. That document describes the original intended meanings for the values.
Currently this field is a very poor guide to the stability of the library, so is mostly ignored. Duncan Coutts (one of the main Cabal and Hackage developers) has said that he eventually plans to replace this field entirely, with something like a social voting system on Hackage.
Personally (and I'm not alone), I just always omit the stability field. Given that it's going to go away, it's probably not worth losing any sleep over what to put into it.
The original intended meanings were:
experimental: the API is unstable. It may change at any time, i.e. with any version number change;
provisional: the API is moving towards stability. It may be changed at every minor revision, but should provide deprecated versions of features;
stable: the API is stable. Only additions should be made at minor releases. After changes in the API, deprecated features should be kept for at least one major release.
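Under those original conventions, the field would have appeared in a package description along these lines (the package name and version are made up):

```
name:       my-package
version:    0.3.1.0
-- "provisional": the API may still change at minor revisions, but
-- deprecated versions of changed features should be provided.
stability:  provisional
```

Given the answers above, though, a value placed here today communicates little, since tools and users largely ignore the field.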
As the other answers pointed out, the community seems not to be following these guidelines anymore.
As Simon Marlow points out, this is described in a design proposal for the first set of Hierarchical Haskell Libraries. The original link is dead, but you can find a copy in the wayback machine.