Is VSCode vulnerable to CVE-2020-15999?

As VSCode seems to contain elements of Chromium (and thus FreeType), is it vulnerable to CVE-2020-15999?
This is a heap buffer overflow in FreeType < 2.10.4 with the potential for remote code execution. If so, is there already a planned release date for a patch?
I know this is an unusual question for Stack Overflow, but the official repository's issue template links here, so I guess this is the correct place to ask.
Best regards

Related

When should I create my own module package instead of using other packages?

I'm still a new Node.js developer, currently building a personal project, and I recently found out that there are open-source packages available on npm similar to the thing I'm developing.
These packages use advanced concepts that I haven't encountered yet and provide more options than I need. But after thinking about it, it occurred to me: why not develop a package that serves my project the way I want, instead of using packages where I won't use more than 5% of their functions?
Benefits of using an existing, well-supported module:
You can spend your development time on the things that haven't already been written by someone else, making faster progress on your project.
Well tested by the community (pre-tested code saves you lots of time)
Other people finding and fixing bugs (don't underestimate the importance of this)
The code will likely be kept up-to-date as tech changes over time
A possible community of people who know the package and whom you can ask questions.
Non-issues with using an existing, well-supported module:
Code size is rarely an issue for server-side nodejs development, so the fact that a package may contain extra code you don't need is generally not a practical issue of any consequence. If code size is paramount (say, if you were running on a small embedded system), then nodejs itself might not be the right environment, as it's not exactly compact.
Reasons not to use an existing, well-supported module:
You aren't allowed to use open-source code in your project (but then you wouldn't be using nodejs if that were the case).
No existing module does what you want.
Existing modules that do what you want don't appear to be well supported, or have many relevant bugs that have been open for a long time. In this case, it still might be worthwhile for you to clone the repository and use it as a starting point or learning point for your own module.
I'm still a new Node.js developer, currently building a personal project, and I recently found out that there are open-source packages available on npm similar to the thing I'm developing.
IMO, this is part of the magic sauce of doing nodejs development. The huge repository of open-source packages (through npm), which are so easy to use, makes your development far more productive than developing everything from scratch yourself.
why not develop a package that serves my project the way I want, instead of using packages where I won't use more than 5% of their functions?
Unused code doesn't really cost you anything of consequence in a server-side environment. If you really wanted to, you could use a bundler that supports tree-shaking, which removes the code you're not using.
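As a minimal sketch of how that works (the package name big-utils and its exports are hypothetical), an ES-module import of a single function is all a tree-shaking bundler such as esbuild or Rollup needs to drop the rest of the package from the output:

    // app.mjs -- only slugify (and whatever it uses internally) ends up
    // in the bundle; unused exports of the hypothetical big-utils
    // package are removed by tree-shaking.
    import { slugify } from 'big-utils';

    console.log(slugify('Hello World'));

    // One way to build: npx esbuild app.mjs --bundle --minify --outfile=app.js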
The question that really matters is whether an existing module meets your needs, or is close enough that you only have to write a little bit of code in order to use it. If that's the case, then the question becomes this: "Why should I use my precious development time to write a package from scratch, when I could use far less development time by using something that is already available for free, already tested and already proven, and then spend the development time I would have spent on that package on other things that advance my product/service further?"
In many ways, this is really no different than using the fs module built into nodejs. You use it because it's already developed and already tested and saves you time over developing your own file access module. Yes, the fs module contains lots of code you may never need, but that's not the question. The question is whether it already contains the code you DO need.

Log4Shell allows remote code execution. Could this be used to patch the vulnerability remotely?

The current Log4Shell issue (the famous CVE-2021-44228) gives attackers a vector to execute untrusted code on vulnerable, internet-facing machines.
Could this be used to patch the vulnerability, by placing code on the target machines that patches the vulnerable libraries?
Edit: To clarify the question, this is only aimed at the technical question of whether patching running Java code remotely through this shell access vector is possible.
There seems to be at least one proof-of-concept-level project that attempts to patch the vulnerability in place, so it is technically feasible: https://github.com/Cybereason/Logout4Shell.
The authors detail their approach in a blog post, from which it is clear that the exploit-fix combination (which they call a "vaccine") is also an advertisement for their company. Keep that in mind when evaluating it.
Legally, it is a clear no, as you would still be executing code on a system you don't own, without authorisation from the owner. Although the intention would be good (preventing malicious hackers from attacking the system), this way of solving the problem would still be illegal.
Technically, yes. As you correctly understood, you can get full control of the system by executing arbitrary code, including installing a "fix".
Realistically, it depends on how you want to implement the fix, and on whether you can find a generic, scalable solution that "fixes" differently configured systems with the same approach.
Updating the library would require identifying the location of the vulnerable file log4j-core.jar and replacing it with the fixed version. But then you would also need to restart the service using the library, since the vulnerable version might already be loaded in memory; a change to the file on disk is not reflected in a running program.
The location of the file and the running services will vary from system to system, making it harder to implement a generic solution.
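As a rough sketch of what such a fix would have to do on each machine (the paths and the service name are hypothetical, and real deployments often hide the jar inside fat jars or wars):

    # Find candidate vulnerable jars; locations vary per system.
    find / -name 'log4j-core-2.*.jar' 2>/dev/null

    # After replacing a jar with a fixed version, restart the service so
    # the patched classes are actually loaded; the on-disk change alone
    # does not affect the running JVM.
    systemctl restart some-java-service    # hypothetical service name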
The vulnerability has already been public for five days and is very easy to exploit. If you haven't patched it yet, you might already have been exploited. An attacker could already have installed other malicious software that is able to load and execute any new code in the future via remote control, and that would not require the log4j exploit any more.
If you haven't fixed the log4j vulnerability yet, updating log4j is no longer enough. You have to assume that your system is already compromised by sleeping trojans and you should reset the whole system to get rid of it.
My suggestion is not to do it. Try to contact the owner of the system and tell them to update, and to reset the complete system.

Reliability and flaws with GitHub's 'Dependabot: Automated security fixes'

I want to learn from others whether GitHub's 'Dependabot: Automated security fixes' is a secure, reliable solution to security issues.
The one and only time GitHub flagged a security issue, it was also for outdated dependencies.
At that time I pulled from the repository, manually updated them, and then pushed them back up.
To date I have not found on Stack Overflow any comment directly addressing my question.
Please correct me if I am wrong.
How safe are the automated fixes? Can we blindly rely on them? Should I use them with any sort of caution?
[Screenshot including the green 'Create automated security fix' button.]
Dependabot will notify you about security issues known for certain dependencies.
The only drawback could be that the new dependency brings other issues: there is no guarantee that the upgrade won't provoke side effects, such as bugs introduced by unexpected behavior of the dependency in your project.
So don't forget to test the upgrade on your project, whether through automated tests or manually.
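For example, a minimal GitHub Actions workflow (a sketch for a Node project; adjust to your stack) makes sure every Dependabot pull request runs your test suite before you merge it:

    # .github/workflows/ci.yml
    name: CI
    on: [pull_request]
    jobs:
      test:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          - uses: actions/setup-node@v4
            with:
              node-version: 20
          - run: npm ci
          - run: npm test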

What is the difference between Cabal and Stack?

Yesterday I learnt about a new Haskell tool called Stack. At first blush, it looks like it does much the same job as Cabal. So, what is the difference between them? Is Stack a replacement for Cabal? In which cases should I use Stack instead of Cabal? What can Stack do that Cabal can't?
Is Stack a replacement for Cabal?
Yes and No.
In which cases should I use Stack instead of Cabal? What can Stack do that Cabal can't?
Stack uses the curated Stackage packages by default. That being so, any dependencies are known to build together, avoiding version conflict problems (which, back when they were commonplace in the Haskell experience, used to be known as "cabal hell"). Recent versions of Cabal also have measures in place to prevent conflict. Still, setting up a reproducible build configuration in which you know exactly what will be pulled from the repositories is more straightforward with Stack. Note that there is also provision for using non-Stackage packages, so you are good to go even if a package isn't present in the Stackage snapshot.
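A minimal stack.yaml illustrating both points (the snapshot and the extra package are just examples):

    # All dependencies are resolved against this curated snapshot, so the
    # chosen versions are fixed and known to build together.
    resolver: lts-15.3

    packages:
      - .

    # Packages (or versions) not present in the snapshot can still be used:
    extra-deps:
      - acme-missiles-0.3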
Personally, I like Stack and would recommend that every Haskell developer use it. Development on it is fast, and it has a much better UX. And there are things Stack does that Cabal doesn't yet provide:
Stack even downloads GHC for you and keeps it in an isolated location.
Docker support (which is very convenient for deploying your Haskell applications)
Reproducible Haskell scripts: you can pin the version of a package and get a guarantee that the script will always execute without any problem (see the sketch after this list). (Cabal also has a script feature, but fully ensuring reproducibility with it is not quite as straightforward.)
The ability to do stack build --fast --file-watch. This automatically rebuilds whenever you change a local file. Using it along with the --pedantic option is, for me, a decisive advantage.
Stack supports creating projects using templates. It also supports your own custom templates.
Stack has built-in hpack support. It provides an alternative (IMO, better) way of writing cabal files, using a YAML format that is more widely used in the industry.
Intero has a smooth experience when working with Stack.
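Here is a minimal sketch of such a reproducible script (the snapshot is illustrative); it runs identically on any machine with Stack installed, because the resolver pins GHC and every package version:

    #!/usr/bin/env stack
    -- stack --resolver lts-15.3 script
    -- hello.hs: a reproducible Stack script. Make it executable with
    -- chmod +x hello.hs and run it directly, or run: stack hello.hs
    main :: IO ()
    main = putStrLn "Hello from a pinned Stack script!"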
There is a nice blog post explaining the difference: Why is Stack not Cabal? While Cabal has, in the intervening years since that post, evolved so as to overcome some of the issues discussed there, the discussion of the design goals and philosophy behind Stack remains relevant.
In what follows, I will refer to the two tools being compared as cabal-install and stack. In particular, I will use cabal-install to avoid confusion with the Cabal library, which is common infrastructure used by both tools.
Broadly speaking, we can say cabal-install and stack are frontends to Cabal. Both tools make it possible to build Haskell projects whose sets of dependencies might conflict with each other within the confines of a single system. The key difference between them lies in how they address this goal:
By default, cabal-install will, when asked to build a project, look at the dependencies specified in its .cabal file and use a dependency solver to figure out a set of packages and package versions that satisfy it. This set is drawn from Hackage as a whole -- all packages and all versions, past and present. Once a feasible build plan is found, the chosen version of the dependencies will be installed and indexed in a database somewhere in ~/.cabal. Version conflicts between dependencies are avoided by indexing the installed packages according to their versions (as well as other relevant configuration options), so that different projects can retrieve the dependency versions they need without stepping on each other's toes. This arrangement is what the cabal-install documentation means by "Nix-style local builds".
When asked to build a project, stack will, rather than going to Hackage, look at the resolver field of stack.yaml. In the default workflow, that field specifies a Stackage snapshot, which is a subset of Hackage packages with fixed versions that are known to be mutually compatible. stack will then attempt to satisfy the dependencies specified in the .cabal file (or possibly the package.yaml file -- different format, same role) using only what is provided by the snapshot. Packages installed from each snapshot are registered in separate databases, which do not interfere with each other.
We might say that the stack approach trades some setup flexibility for straightforwardness when it comes to specifying a build configuration. In particular, if you know that your project uses, say, the LTS 15.3 snapshot, you can go to its Stackage page and know, at a glance, the versions of any dependency stack might pull from Stackage. That said, both tools offer features that go beyond the basic workflows so that, by and large, each can do all that the other does (albeit possibly in a less convenient manner). For instance, there are ways to freeze exact versions of a known good build configuration and to solve dependencies with an old state of Hackage with cabal-install, and it is possible to require non-Stackage dependencies or override snapshot package versions while using stack.
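For instance, freezing a known-good configuration with cabal-install is a one-liner (a sketch; the index-state timestamp is illustrative):

    # Writes cabal.project.freeze, pinning the exact version of every
    # dependency in the current build plan for future rebuilds.
    cabal freeze

    # A cabal.project can also pin the state of Hackage itself:
    #   index-state: 2020-03-01T00:00:00Z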
Lastly, another difference between cabal-install and stack which is big enough to be worth mentioning in this overview is that stack aims at providing a complete build environment, with features such as automatic GHC installation management and Docker integration. In contrast, cabal-install is meant to be orthogonal to other parts of the ecosystem, and so it doesn't attempt to provide this sort of feature (in particular, GHC versions have to be installed and managed separately, for instance through the ghcup tool).
From what I can glean from the FAQ, it seems that Stack uses the Cabal library, but not the cabal.exe binary (more correctly known as cabal-install). It looks like the aim of the project is automatic sandboxing and avoidance of dependency hell.
In other words, it uses the same Cabal package structure, it just provides a different front-end for managing this stuff. (I think!)

Is it sensible to build an application with static linking on linux?

I need to build an application running on an embedded, vendor-supplied version of Linux. According to the documentation it has libc version 2.8.90. I have built a simple application in C++ on a desktop and copied the binary across to the hardware, along with copies of the libraries it is linked to. To remove any potential conflicts from linking against different library versions, I considered linking statically. After some research I found the following question and answers, and reading through them gave the impression that linking statically is not a good thing to do. What I could not find there (or anywhere else so far) was a simple explanation of why this is frowned upon. To me (pretty much a novice to Linux) it seems like a way of solving my problem of bundling my executable as a single package and running it on my hardware, yet it is clearly considered a bad idea. Can someone please explain why?
Obviously I am aware that it would cause bloating of my binary, but I am not worried about that. Additionally, I am aware of the licensing issues, but I am not particularly concerned with that aspect; this is not a commercial application, so I do not think it applies to me.
The advantages are, as you expect, a single binary that works without having to install the other dependencies and which you can easily move around.
The disadvantages are the size, the need to recompile the entire application if there's an update (e.g. a security fix) to a linked library, and perhaps licensing issues (as you've noted).
Tradeoffs. If it solves your problem, go for it.
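As a minimal sketch of the trade-off (assuming g++ on a glibc desktop; file names are arbitrary), you can build the same program both ways and compare:

    # Dynamic build: small binary, but needs compatible shared libraries
    # on the target at run time.
    g++ -o app_dynamic main.cpp
    ldd app_dynamic        # lists the shared libraries it will load

    # Static build: self-contained, but larger, and it must be relinked
    # to pick up library updates such as security fixes.
    g++ -static -o app_static main.cpp
    ldd app_static         # prints "not a dynamic executable"
    ls -lh app_dynamic app_static

    # Caveat: glibc's NSS functions (e.g. getaddrinfo) can still dlopen
    # shared objects at run time, even from a "static" binary.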
