I am working on a project that has a few different code-bases (mostly Meteor), but they all use some of the same API code (schemas, publications and methods).
It doesn't seem very intuitive to have one repository and use sub-trees/sub-modules in my case. Maybe I'm wrong and somebody could help clear it up.
An example structure of my project:
project-landing-page (meteor app for collecting leads and offering basic account management)
project-app (an angular/ionic/cordova/meteor app that is distributed via the app store)
project-worker (a set of cron-like scripts that are executed in the background to manage the data in the mongo instance)
They all share the same schemas, and the two meteor apps use the same methods and publications. It seems a bit cluttered to have one repo for all of this code. Making a branch for the app would also branch the code for the worker scripts. That just seems messy.
Would it be okay to have another repo called "project-apis" that provides the shared code and could be cloned into the other projects? What are the drawbacks? Other than having to run git pull whenever the "project-apis" repo is updated, I can't really see any.
Would any git-wizards be able to chime in?
Thanks!
Is there any standard/conventional way (package/library, pattern, etc.) to serve the latest version of some static files from a given git repository in Node.js?
My idea so far is to clone the repository at npm start and take the files that I need from there, and to provide a webhook to be called when the repository receives a commit, to then pull the changes so that the files get updated even when the app is running.
This seems like it could work, but I do not want to reinvent the wheel, if it already exists. If there is a method that is already relatively well known, a conventional pattern or a package that already does this, it would be wise to use it and not bother with implementing the details, fixing security vulnerabilities and keeping it up to date.
Also, while we are on it, are there reasons why I should not do this in the first place?
Edit:
I should probably give an explanation on why I would want to do this.
I basically have a web app (nothing too complex, a few routes here and there), let's call it example.com, and a few "self contained sites" (i.e. collections of static files that for all intents and purposes could live independently of the main website and each other).
I could put each of them on its own sub-domain, but that would probably mean using more app instances and time just to serve some static files, so I want to serve these "sub-websites" on their own paths within that app, like example.com/sub-site1, example.com/sub-site2, etc., from the same app instance as the main example.com, so as not to incur higher hosting costs.
I could also modify those static websites to wrap them in Node.js packages and install the packages in the main app, but I want to keep them clean, simple, static files, agnostic of the platform they are being served from.
That leaves me with the option of moving a bunch of files to the main app, but I do not want to manually restart the app each time any one of those sites gets updated, or even worse, manually update the repository of the main app with the new versions of the static files.
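For reference, here is a rough sketch of the approach I have in mind, assuming Express and the simple-git package (the repository URL, directory, port and webhook route are placeholders):

    const path = require('path');
    const express = require('express');
    const simpleGit = require('simple-git');

    const REPO_URL = 'https://github.com/example/sub-site1.git'; // placeholder
    const LOCAL_DIR = path.join(__dirname, 'sub-sites', 'sub-site1');

    const app = express();

    async function bootstrap() {
      // Clone the static site once at startup (a real version would first check
      // whether the clone already exists) and serve it on its own path.
      await simpleGit().clone(REPO_URL, LOCAL_DIR);
      app.use('/sub-site1', express.static(LOCAL_DIR));

      // Webhook called by the git host on every push: pull the latest files
      // so the running app picks them up without a restart.
      app.post('/hooks/sub-site1', async (req, res) => {
        await simpleGit(LOCAL_DIR).pull();
        res.sendStatus(200);
      });

      app.listen(3000);
    }

    bootstrap();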
Is there any standard/conventional way (package/library, pattern, etc.) to serve the latest version of some static files from a given git repository in Node.js?
There is no standard for this kind of task. The closest thing is probably the official GitHub SDK (Octokit), which provides a getContent method for exactly this purpose.
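For instance, a minimal sketch with @octokit/rest (the owner, repo and path values are placeholders; file contents come back base64-encoded):

    const { Octokit } = require('@octokit/rest');

    // A token is only needed for private repositories.
    const octokit = new Octokit({ auth: process.env.GITHUB_TOKEN });

    async function fetchFile() {
      const { data } = await octokit.repos.getContent({
        owner: 'example-owner',    // placeholder
        repo: 'example-repo',      // placeholder
        path: 'public/index.html'  // placeholder
      });
      // Files are returned with base64-encoded content.
      return Buffer.from(data.content, 'base64').toString('utf8');
    }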
Also, while we are on it, are there reasons why I should not do this in the first place?
A GitHub repository can be deleted, but an npm package cannot (there is an unpublish policy for this reason).
I don't know your use case, but are those files critical for your application?
If so, it is risky to depend like this on an ephemeral repository: you could fork it and keep the fork up to date with a GitHub Action.
I set up a self-hosted GitLab instance and it's working fine. My problem right now is that I don't really understand how the frontend works, mostly because I've been focusing on the backend and because I couldn't find documentation about it either. I want to understand how I can comment out things I don't want to show to the user, change aspects of the overall design and text, and generally have control over the frontend.
I'm running on Debian 9; the setup was done with Bitnami on a Google Cloud VM. As far as I understand, I have to manually change the files I want, but I really don't understand the structure of this type of frontend.
What language do I need to know here and where should I find the documentation, how to find the correct directory and files, etc.?
While GitLab doesn't officially support any type of "custom frontend", what you can do is:
Fork GitLab
Use the GitLab Development Kit to implement your changes
Run a Source Install of your fork
The frontend is mostly written in HAML (for the server-side bits) and Vue.js (for the client-side bits).
Note: Even an Omnibus install copies raw Ruby and JavaScript files somewhere, and since they're physically on the system they can be manually manipulated and hot-patched, but that's not really a sustainable way of introducing changes to the codebase.
I would like to know the best approach to creating several deploys from a big code base. The idea is to divide the big API into microservices (each one on its own server/VM).
The first idea: I could simply create a folder with only the available routes for that microservice, but still using the "common" codebase...
I currently end up with this, and it's a running API in production (with a staging environment on Heroku using their pipeline):
and I was thinking that I could have something like:
Can anyone point me to a good reference on... where to start? How can I push multiple versions of the same base code to a server?
For more detail on the technologies used, I'm using:
Mocha and Chai for tests
Sequelize for MariaDB modeling and access
Restify as the server engine
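To illustrate the first idea above, here is a rough sketch (the module paths and names are made up for the example): a deploy for one microservice would only register its own routes on a Restify server while still requiring the shared codebase:

    const restify = require('restify');

    // Shared/common codebase and the routes this microservice owns
    // (hypothetical module paths, just for illustration).
    const db = require('./common/db');             // Sequelize models
    const usersRoutes = require('./routes/users'); // only this service's routes

    const server = restify.createServer({ name: 'users-service' });
    server.use(restify.plugins.bodyParser());

    // Each route module registers its endpoints on the server it is given.
    usersRoutes.register(server, db);

    server.listen(8080, () => {
      console.log('%s listening at %s', server.name, server.url);
    });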
When you divide the API into microservices, you have a few options:
Make completely separate repos for all of them with some code duplication
Make completely separate repos but sharing common code as Node modules
Make one repo with multiple microservices, each as its own Node module
Make one repo with one big codebase and build multiple modules with needed parts from that
I'm sure you can do it in even more ways.
Having a mismatch between the number of Node modules and code repos will cause some trouble, but it may have benefits in certain cases.
Having a 1-to-1 mapping of repos and modules will be easier to work with in some cases, for example the ability to add private GitHub repos directly to the dependencies in package.json.
If you want to factor out some common functionality then you can do it in several ways:
npm supports organizations, scoped packages, private modules and private scoped packages with restricted access.
You can host a private npm registry
You can host a module on GitHub or GitLab or any other git server
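For example, pointing a dependency at a private Git repository directly from package.json could look like this (the organization, repository name and tag are placeholders):

    {
      "dependencies": {
        "common-code": "git+ssh://git@github.com/your-org/common-code.git#v1.2.0"
      }
    }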
For more info see:
Node.js: How to create paid node modules?
There are some nice frameworks that can help you with splitting your code base into microservices, like Seneca:
http://senecajs.org/
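To give a feel for it, here is a minimal Seneca sketch (the role, command and data are made up for illustration): each microservice registers actions by pattern, and callers invoke them by pattern rather than by module path:

    const seneca = require('seneca')();

    // One microservice exposes an action, matched by pattern.
    seneca.add({ role: 'orders', cmd: 'total' }, (msg, reply) => {
      reply(null, { total: msg.items.reduce((sum, item) => sum + item.price, 0) });
    });

    // Another part of the system calls it by pattern, not by require() path.
    seneca.act(
      { role: 'orders', cmd: 'total', items: [{ price: 2 }, { price: 3 }] },
      (err, result) => console.log(result.total) // 5
    );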
Or to a certain extent with Serverless if you're using AWS Lambda, Microsoft Azure, IBM OpenWhisk or Google Cloud Platform:
https://serverless.com/
I've got a VS2013 solution with a mix of Node.js (using TypeScript) and C# class library projects (they're bound together by Edge.js). Where the Node.js projects are concerned, one can be considered a library (a RabbitMQ bus implementation), and two are applications which are meant to be hosted as part of a fourth project, with both using the bus.
So there is one project (host) which will depend on three projects (bus, app1 and app2): it starts the bus and passes it to app1 and app2.
Of course, I could just lump all these projects together and be done with it - but that's a horrible idea.
How do I package these projects up for proper reuse and referencing (like assemblies in traditional .NET)?
Is that best done with NPM? If so, does VS provide anything in this area? If not, then what?
Note that, aside from the Bus project, I'm not looking to release these publicly - I'm not sure if that changes anything.
In general, if something can be bundled together as an independent library, then it's best to consider it a Node package and refactor that logic out into its own project. It sounds like you've already done this to some extent, separating out your bus, app1, and app2 projects. I would recommend they each have their own Git repository if they are separate packages.
Here's some documentation to get you started with Node packages:
https://www.npmjs.org/doc/misc/npm-developers.html
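As a rough illustration (the package and file names are hypothetical), the bus package would expose its public surface from its entry point, and the host would then consume it like any other dependency:

    // bus package: index.js (the "main" file declared in its package.json)
    const RabbitBus = require('./lib/rabbit-bus'); // hypothetical implementation module

    module.exports = {
      createBus: (options) => new RabbitBus(options)
    };

    // host project: app.js
    const { createBus } = require('my-rabbit-bus'); // hypothetical package name

    const bus = createBus({ url: 'amqp://localhost' });
    // The host would then pass this bus instance to app1 and app2.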
The host project, if it's not something you would package but instead deploy, probably does not need to be bundled as a Node package. I would instead just consider this something that would be pulled down from Git and started on some server machine.
With all that said, your last line is important:
I'm not looking to release these publicly
GitHub does have private repositories, but as of now npmjs.org does not offer private packages. There are options for creating your own private registry (Sinopia and Kappa offer different ways of accomplishing this), but if you don't want this code available to everyone, do not publish it to npmjs.org. You can still package it up in the way I've outlined above, just not publish it yet.
I am wondering what are your best practices for a Single Web Page app project using the MEAN stack (MongoDB, Express, Angular and Node.js).
Right now we have the following organization:
One Git repository for the Angular Client side code
One Git repo for the node.js & express server side code.
Browsing some blogs and checking Node.js boilerplates, I saw that a common structure is to have only one repository handling both the Angular code and the server code.
I'd like to know, from the community, if this approach is really better than having two different repos in terms of versioning, ease of deployment, etc.
From my personal point of view, I don't see that much difference...
I don't see much difference either. It should really be driven by the team. Your code organization could be beneficial if you had separate front-end and back-end teams. I've seen an environment where the UI developers only checked out the UI portion and hooked it up to a REST back-end deployed somewhere on a DEV server.
The second consideration is the release procedure. If your front-end and back-end are tightly coupled, they will be released together 99% of the time, so you don't need to handle two repos. However, if your back-end will serve as a REST endpoint for other clients, not only your UI, and you plan to release front-end changes without touching the back-end (no downtime for external clients), you may want to use two separate repos.
Also think about your CI server. You may want to run front-end and back-end builds and tests separately. However, for most CI servers it does not matter whether it is one repo or two.