How should I compile my assets (TypeScript) before publishing to Packagist?

I am building a plugin for my website that does some antispam (primarily for educational reasons).
It is mostly finished, but now I am hitting a massive speed bump in the final stretch before I start doing "live" trials with it.
You see, I have published the plugin on Packagist, but the problem is that Packagist only takes what is in the repo; it does not take the assets that roll out of my pipelines as artifacts (the final JS that the client can actually understand and use).
Without that JS, however, the plugin cannot work.
In my experience, though, compiling the code on my local machine and then manually committing it to Git is a huge pain and has caused a lot of errors in the past (sometimes I would forget to commit the JS, sometimes I would forget to commit the TS, or I would forget to recompile it all, etc.).
TypeScript here is somewhat interchangeable with things like SCSS: basically any front-end asset that still has to be compiled somewhere.
So my question is how I would go about this, because obviously I don't want to tell people who want to use this plugin (I try to build my code assuming they are end users, not developers): "yeah, you can use it, but you are going to need to set up a TypeScript compiler and everything, regardless of whether you are going to make changes to the plugin."
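One common way to take the manual steps out of the loop is a pre-commit hook that rebuilds and stages the compiled output, so neither the recompile nor the commit can be forgotten. A minimal sketch, assuming `tsc` is configured to emit into `dist/` and the hook is wired up via husky or `.git/hooks/pre-commit` (the file name and paths are illustrative):

```ts
// scripts/precommit-build.ts — minimal sketch of a pre-commit hook
// (wire it up via husky or .git/hooks/pre-commit; run with npx tsx).
// Assumes tsc is configured to emit into dist/, and that dist/ is
// committed so Packagist consumers receive ready-made JS.

import { execSync } from "node:child_process";

// Rebuild; a failing compile aborts the commit because execSync throws.
execSync("npx tsc", { stdio: "inherit" });

// Stage the freshly compiled assets so they can never be forgotten.
execSync("git add dist", { stdio: "inherit" });

console.log("dist/ rebuilt and staged");
```

Alternatively, the CI pipeline that already produces the artifacts could commit them back, for example to a dedicated release branch that contains the built JS.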

Related

Is there a standard way to serve latest version of static files from a git repository with Node.js?

Is there any standard/conventional way (package/library, pattern, etc.) to serve the latest version of some static files from a given git repository in Node.js?
My idea so far is to clone the repository at npm start and take the files that I need from there, and to provide a webhook to be called when the repository receives a commit, to then pull the changes so that the files get updated even when the app is running.
This seems like it could work, but I do not want to reinvent the wheel, if it already exists. If there is a method that is already relatively well known, a conventional pattern or a package that already does this, it would be wise to use it and not bother with implementing the details, fixing security vulnerabilities and keeping it up to date.
Also, while we are on it, are there reasons why I should not do this in the first place?
Edit:
I should probably give an explanation on why I would want to do this.
I basically have a web app (nothing too complex, a few routes here and there), let's call it example.com, and a few "self contained sites" (i.e. collections of static files that for all intents and purposes could live independently of the main website and each other).
I could put them each on their own sub-domain, but that would probably mean using more app instances and time just for serving some static files, so I want to serve these "sub-websites" on their own paths on that app, like example.com/sub-site1, example.com/sub-site2, etc., from the same app instance as the main example.com, so as not to incur higher hosting costs.
I could also modify those static websites to wrap them in Node.js packages and install the packages in the main app, but I want to keep them clean, simple, static files, agnostic of the platform they are being served from.
That leaves me with the option of moving a bunch of files to the main app, but I do not want to manually restart the app each time any one of those sites gets updated, or even worse, manually update the repository of the main app with the new versions of the static files.
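For concreteness, here is a minimal TypeScript sketch of the idea described above (clone at startup, pull on webhook). Express and simple-git are assumed to be installed, and the repo URL, directory, port, and route are illustrative, not taken from the question:

```ts
// server.ts — sketch of the clone-at-start plus webhook-pull idea.
// Assumes `express` and `simple-git` are installed; REPO_URL,
// CLONE_DIR, the port, and the /webhook route are all illustrative.

import express from "express";
import simpleGit from "simple-git";
import fs from "node:fs";

const REPO_URL = "https://github.com/example/static-sites.git"; // hypothetical
const CLONE_DIR = "./static-sites";

const app = express();

async function ensureClone(): Promise<void> {
  // Clone once at startup if the working copy doesn't exist yet.
  if (!fs.existsSync(CLONE_DIR)) {
    await simpleGit().clone(REPO_URL, CLONE_DIR);
  }
}

// Serve the cloned static files under their own path.
app.use("/sub-site1", express.static(`${CLONE_DIR}/sub-site1`));

// Webhook the repo host calls on push; pull the latest files.
// In production you would verify the webhook signature here.
app.post("/webhook", async (_req, res) => {
  await simpleGit(CLONE_DIR).pull();
  res.sendStatus(204);
});

ensureClone().then(() => app.listen(3000));
```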
Is there any standard/conventional way (package/library, pattern, etc.) to serve the latest version of some static files from a given git repository in Node.js?
There is no standard for this kind of task. The best approach is to use the official GitHub SDK, since it has a getContent method for exactly this purpose.
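A minimal sketch with Octokit (the owner, repo, and path are placeholders):

```ts
// fetch-file.ts — sketch of fetching a file's contents with the
// official GitHub SDK (Octokit); owner/repo/path are placeholders.

import { Octokit } from "@octokit/rest";

async function fetchFile(): Promise<string> {
  const octokit = new Octokit(); // pass { auth: token } for private repos

  const { data } = await octokit.rest.repos.getContent({
    owner: "example-user",    // placeholder
    repo: "static-sites",     // placeholder
    path: "sub-site1/index.html",
  });

  // For a single file, the API returns its content base64-encoded.
  if (Array.isArray(data) || data.type !== "file") {
    throw new Error("expected a file");
  }
  return Buffer.from(data.content, "base64").toString("utf8");
}

fetchFile().then(console.log);
```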
Also, while we are on it, are there reasons why I should not do this in the first place?
A GitHub repository can be deleted, but an npm package cannot (there is an unpublish policy for exactly this reason).
I don't know your use case, but are those files critical for your application?
If so, it is risky to depend like this on an ephemeral repository: you could just fork it and keep the fork up to date with a GitHub Action.

How can I maintain a modern static website without transpiler or bundler tools?

We have a static website and we outsource its maintenance. We don't have a source code repository, so the contractor edits the code on the production server directly.
That has been no problem, as our website was built decades ago with old-school HTML4 only. What is stored on the web server is the source code.
Today, a website can be composed with a UI framework, e.g. Vue, React, etc. Sometimes the HTML file contains web components and other JS modules. A little googling taught me that building a website today needs npm, Node.js, webpack, Gulp, and so on; they manage JS modules and bundle/build the production code.
My problem is that we would like to revamp our website with a modern UI (HTML5, CSS3, mobile friendly...). The tools I just mentioned will "process" the source code and output production code, but we don't have a source code server (e.g. a Git server) for our contractor to store the source code on (our company management doesn't allow us to purchase private repository services on the internet, e.g. GitHub, GitLab, etc.).
Can I keep using the old-school way, where the source code on the production web server is always the only source code?
I have tried RequireJS myself; it loads JS modules in the browser, so I can handle module loading without Node.js and webpack, and write the web components in vanilla JS. Is that the only solution available to me?
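As a point of comparison, modern browsers can also load ES modules natively, so a web component can be written and served with no build tools at all; this is an alternative to RequireJS, not what the question used. A minimal sketch (the file and element names are made up):

```ts
// hello-card.js — a self-contained web component; no bundler or
// compiler needed (this is plain JS, which is also valid TypeScript).
// Load it with:
//   <script type="module" src="/js/hello-card.js"></script>
// and use it as: <hello-card name="visitor"></hello-card>

class HelloCard extends HTMLElement {
  connectedCallback() {
    // Shadow DOM keeps the component's markup and styles isolated.
    const shadow = this.attachShadow({ mode: "open" });
    shadow.innerHTML = `
      <style>p { font-family: sans-serif; }</style>
      <p>Hello, ${this.getAttribute("name") ?? "world"}!</p>
    `;
  }
}

customElements.define("hello-card", HelloCard);
```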
You certainly could continue to manage this site the "old school" way, but in doing so, you'll be ignoring the benefits that all the modern tools give you.
For example
no git (or other version control) means no rolling back changes (or errors)
using version control software also means you have a backup and you don't need to set up a backup scheme on the production server to save your files
editing on the production server means that if someone makes a typo, the live site is messed up, etc.
I would strongly recommend modern tools; if cost is a concern, consider free tools:
Bitbucket has long offered free private repositories; GitHub has recently started offering them as well.
Tools such as Hugo, Jekyll, and others permit creation of static sites quickly and easily.
Edit in answer to some of the comments...
Switching to a more modern development workflow (including version control) is not just about saving money, it's also about:
Does the employer/client want their developer(s) spending a lot of time managing the site - possibly including fixing problems - or do they want them working on something else?
Is the employer/client willing to have periods of time when the site does not work correctly? As @birdspider mentions in the comments above, if you have multiple people working on the website on the production server, they're going to mess up each other's work. Note that using a VCS helps avoid some of the problems with people stepping on each other's toes, and it also makes fixing those conflicts much easier.
If you approach the employer/client with these points and their answer is "we just don't like it", then there's probably not much else you can do. If I were in your shoes, I'd be strongly tempted to either a) implement something on my own (just to preserve my own sanity, although really this is probably not a good idea) or b) find a new job.

When to add a dependency? Are there cases where I should rather copy the functionality?

I recently helped out on a project where I added a really small dependency; in fact, it only contains a regular expression (https://www.npmjs.com/package/is-unc-path).
The feedback I got from the developer of the project was that he tries to minimize third-party dependencies if they can be implemented easily; if I understand him correctly, he was asking me to just copy the code instead of adding another dependency.
To me, adding a new dependency looks just like putting some lines of code into an extra file in the repo, with the added benefit that the developers get informed by an update if the code needs a change.
Is it just a religious conviction that drives a developer to do this? Are there actual costs (performance- or space-wise, etc.) to adding a dependency?
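For a sense of scale, inlining such a micro-dependency would look roughly like this; the regex below is illustrative, not necessarily the package's exact pattern:

```ts
// isUncPath.ts — a hand-rolled stand-in for the is-unc-path package.
// NOTE: this regex is illustrative, not the package's exact pattern.

// A UNC path looks like \\server\share\...
const UNC_PATH = /^\\\\[^\\]+\\[^\\]+/;

export function isUncPath(path: string): boolean {
  return UNC_PATH.test(path);
}

// Example: isUncPath("\\\\server\\share\\file.txt") === true
```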
I also once had a dispute with my manager concerning third-party libraries; the problem was even bigger, as he had come to believe that you should version the node_modules folder.
The source of any conflict is usually ignorance.
His arguments were:
you should deliver a working product to the client, without requiring them to do any extra jobs like npm install
if GitHub or npm is down at the moment you run npm install on the server, what will you do?
if the library that you install has a bug, who will be responsible?
My arguments were:
versioning node_modules is not going to work, due to how package dependencies work: each library downloads its own node_modules dependencies, and your Git repository will rapidly grow to hundreds of MB. Deploys will become slower and slower, since downloading half a GB of code every time takes time. npm uses a module caching mechanism, so if there are no changes it will not download code uselessly.
the problem with left-pad was painful, but after that npm implemented a lockfile system (package-lock.json), and now every package is pinned to an exact version with an integrity checksum.
And GitHub and npm are not single-instance services; they run in the cloud.
When installing a dependency there are community best practices to keep in mind; they usually boil down to: 1. Does the repo have unit tests? 2. How many downloads does it have? 3. When was it last updated?
The Node.js ecosystem is built on modularity; Node is not popular because of luck, but because of how it was designed for creating and reusing modules. Sometimes working in a Node.js environment feels like putting Lego pieces together and building your own toy. This is the main reason development in Node.js is so fast: people just reuse stuff.
In the end he stuck to his own ideas, and I left the project :D

Is updating npm dependencies not recommended on a production application?

I recently started exploring npm and installed a GitHub repo, yoonic/nicistore.
But when I try to build it, it fails.
My question is: if I start building things on top of Node, which I see has so many dependencies from different vendors, am I completely at the mercy of the respective package developers?
I have seen that most Node-based GitHub repos fail to build on the first try. If I update one of the modules by running a console command, is it likely to break the whole application?
And if it does, doesn't that prove Node.js to be an unreliable and unstable development platform?
Think of it as the opposite of most other languages.
You are writing an app in Java.
You want to use LibA, LibB and LibC.
So you try to use LibC 2.4, and as soon as you do, your dependency manager throws all kinds of errors at you.
Why?
Because LibB is using LibC 1.9
So now what are your options?
Strip out all of the calls to all of the new API for LibC that you wanted to use...
...or hope that LibB is open-source, and you can contribute an update for a new version of it, so that you can use the latest version of LibC (and hope it doesn't update).
So now you've done that... but now you've broken your LibA, because it wants the old LibB.
You didn't even want LibA, you just had to have it for your app to be happy with your framework, and the libs that you did actually want to use (B and C). LibA is closed-source, and isn't maintained, anymore. Tough luck. Go back to your old ways, and forget about how much better life could be, if you could only use your framework with the new version of LibC. Or start praying that your framework does a major rewrite, to get rid of the LibA dependency... but then figure out what new hell you have to deal with, just to get LibC working.
Is this really better than Node?
What Node allows you to do is install a dependency at a different version than the one your other dependencies are using, side by side.
Not that you can't do that with Java, too... but the entire community has decided that it's just never going to try to do that, and thus outlaws it at the tool level.
Next, you see too many things which leave you at the mercy of too many vendors...
Going back to Java (or C++, or nearly any mainstream language), looking at Java, itself, how many libraries are made by Sun Microsystems, or by James Gosling?
Moreover, if you want to boil it down, to suggest using only, say, one huge, overarching framework (like Spring MVC) and using no other libraries of any kind (like JodaTime), then how many libraries does Spring itself lean on, and why are they of no concern to you, even if you're just using the compiled VM bytecode?
In fact, a strong argument could be made to be more wary of compiled binaries, in languages where it was traditional to see strong, copy-left licensing like that of the GNU GPL... in that realm, you open yourself up to craziness.
Most of the Node stuff, by comparison, is dirt-simple freeware. And even if it's not, it's quickly replaceable as most are micro-libraries.
I would suggest that updating, via the CLI, a Node package your server depends on is less hazardous than doing the same to a full-fledged Java project, if your goal is to see your project compile again some time in the next week, but with the newer fixes/features...
...but if you're talking about a full-scale, production application, you also want to be cognizant of what it is you're doing, with regards to your codebase, regardless.
As to why things don't build for you on the first try, assuming that you're on a non-Windows platform, and your environment is up to date, I don't know.
Most C/C++ projects I clone don't build for me, first try, either. I usually forget something, or there was something poorly documented, or the actual project was set up to make unfair assumptions about the system it would operate in.
Does that mean that C++ is an unreliable/unstable development platform?
Or the hours/days spent on getting Eclipse set up in an enterprise environment, with all kinds of crazy, company-specific projects and project settings?
It sounds like a case of bad design, more than anything.
Then again, most of my projects these days are wrapped in Docker containers. They all run in the same environment, whether they're running in Windows, on a Mac, or on the server. That tends to take the sting out of building projects, regardless of what language the code is in, or what VM / processor they're running on.
You should also be using npm shrinkwrap files (or, on modern npm, package-lock.json) or Yarn lockfiles to preserve the build configuration with the known-working versions of libraries. And you should have unit and integration tests to ensure that changing library versions has no discernible impact on your system.
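As a sketch of that second point, even a tiny test that pins down behavior the app relies on will fail loudly in CI when a dependency upgrade changes it. The function here is inlined so the example stays self-contained; in a real project it would wrap third-party code:

```ts
// smoke.test.ts — run with: node --test  (Node 18+ built-in runner)
// Minimal regression test: pin down behavior the app relies on, so
// that a dependency upgrade which changes it fails loudly in CI.

import { test } from "node:test";
import assert from "node:assert/strict";

function formatPrice(cents: number): string {
  // Imagine this delegates to a currency-formatting dependency.
  return `$${(cents / 100).toFixed(2)}`;
}

test("price formatting stays stable across upgrades", () => {
  assert.equal(formatPrice(1999), "$19.99");
});
```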

Does the case for not including Node modules in version control also apply to Composer packages?

In doing research on whether Node's node_modules should be checked into your version control repository, the most recent consensus seems to be that you should include it for applications you deploy.
Sources:
Check in node_modules vs. shrinkwrap
Should I check in node_modules to git when creating a node.js app on Heroku?
https://www.npmjs.org/doc/misc/npm-faq.html#Should-I-check-my-node_modules-folder-into-git
Reading these arguments led me to question whether Composer's /vendor directory should also be checked into version control. Composer's documentation suggests that you don't:
Should I commit the dependencies in my vendor directory?
The general recommendation is no. The vendor directory [...] should be added to .gitignore/svn:ignore/etc.
The best practice is to then have all the developers use Composer to install the dependencies. Similarly, the build server, CI, deployment tools etc should be adapted to run Composer as part of their project bootstrapping.
While it can be tempting to commit it in some environment, it leads to a few problems:
Large VCS repository size and diffs when you update code.
Duplication of the history of all your dependencies in your own VCS.
[...]
Contrasting that argument is this one (source):
Doesn’t checking in node_modules create a lot of noise in the source tree that isn’t related to my app?
No, you're wrong; this code is used by your app, it's part of your app, and pretending it's not will get you into trouble. You depend on other people's code, and they are just as likely to write new bugs as you are, probably more so. Checking all of that code in to source control gives you a way to audit every line that ever changed in your application. It allows you to use $ git bisect locally and be assured that it's the same as in production and that every machine in production is identical. No more tracking down unknown changes in dependencies; all the changes, in every line, are viewable in source control.
In summary, the question is this: Why would one gitignore (i.e. not version control) node_modules but not do the same for Composer's vendor/ directory?
The reasons to commit external dependencies are:
it's easier to deploy with git pull
the code used is directly included in the commit anyone checks out
Arguments against this are
Git is not a deployment tool
every dependency manager has a way to make sure that exactly the code that was used can be fetched again
I don't know anything about npm, but for Composer that last point is implemented by committing composer.lock.
I don't think the "audit code" argument is valid in every case. If you write software that itself gets audited, and consequently needs all of its libraries audited, then probably all code changes do need to be preserved. This isn't true in the general case.
git bisect still works just as well with a committed composer.lock. It does require installing the dependencies at every bisect step, but that is just one simple step, probably already done in the automated test suite run.
The last thing to worry about is used packages going offline. This really is more of a problem with Composer, because there is no central hosting of the downloadable releases (npm probably does provide this). If this is a problem, either commit the code (and try to figure out how to update that missing package in the future), or set up an instance of Satis to create a local copy of the code you use.
Putting all your modules in your VCS makes it really heavy to download or upload. For example, I work on two Node.js projects, and their node_modules directories total between 250 MB and 500 MB, whereas the whole codebase with assets is generally less than 40 MB. Every Node.js developer likes Node's lightness, so the code must stay easy to download and share.
For the second point, an alternative way to avoid regressions is to be more restrictive with the dependency versions in your package.json.
Finally, the best argument is to take a look at the famous modules everybody knows:
express ignores node_modules
mocha ignores node_modules
q ignores node_modules
...
The more I research this, the more I'm starting to think that there is no single correct answer here, just different opinions and the pros and cons of each method.
This blog post, approaching it from the context of Bower, does a good job of weighing the pros and cons of each: http://addyosmani.com/blog/checking-in-front-end-dependencies/.
In short: At least for right now, weigh the pros and cons and decide what best fits your situation.
