I've been using git for a little while now on a new project I am working on.
I decided to use GitLab.com, as I would like the option to keep my repos private until I'm ready to share them (which GitHub doesn't allow me to do).
The whole beauty of git for me is that I have a copy of the whole repo on my local machine and on the remote site.
However, I make lots of comments on my 'local' GitLab instance.
I know that I can put the wiki into source control; is it possible to do the same thing with the comments and milestones (or in some other way share them between repositories)?
I feel that this should be possible.
Maybe using an RSS feed to push and pull the data to/from the various locations.
Or could I use the issues as a 'mailing list' somehow, with a 'mail into list' feature? (However, I would then need to get my local GitLab instance to mail any new issues to the remote; this could probably be set up using some form of 'auto forward' filter in my mail client / Gmail.)
Are any of these ideas even possible?
Is there a better solution? I'd prefer something that will integrate into my GitLab instances (local and remote), rather than having to use a separate interface; I like everything to be in a single place if possible.
Remember also that I'd like to have access to my issues etc. when offline (and then have them 'sync' when I go back online).
Thanks for any help in advance.
David
You could build a script and make use of the API to sync your issues and notes. Maybe a script that pulls all of the new issues and notes and POSTs them to the equivalent projects on GitLab.com. You could run the script manually or create a cron job to post the new items periodically.
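For illustration, here's a minimal one-way sketch of that idea in TypeScript (Node 18+, so the built-in fetch is available). It assumes the current v4 REST API; the hostnames, project IDs, and token variables are placeholders, and matching on issue titles is only a naive stand-in for real de-duplication and paging:

```typescript
// sync-issues.ts: one-way sync sketch, local GitLab -> GitLab.com.
// Hostnames, project IDs, and token env vars are placeholders.

interface Issue {
  title: string;
  description: string;
}

interface Instance {
  base: string;
  project: number;
  token: string;
}

const LOCAL: Instance = {
  base: 'https://gitlab.example.local/api/v4',
  project: 1,
  token: process.env.LOCAL_TOKEN ?? '',
};

const REMOTE: Instance = {
  base: 'https://gitlab.com/api/v4',
  project: 42,
  token: process.env.REMOTE_TOKEN ?? '',
};

// List a project's issues (first page only; real code would follow
// the pagination headers).
async function listIssues(api: Instance): Promise<Issue[]> {
  const res = await fetch(`${api.base}/projects/${api.project}/issues?per_page=100`, {
    headers: { 'PRIVATE-TOKEN': api.token },
  });
  return (await res.json()) as Issue[];
}

// POST one issue to the other instance.
async function createIssue(api: Instance, issue: Issue): Promise<void> {
  await fetch(`${api.base}/projects/${api.project}/issues`, {
    method: 'POST',
    headers: { 'PRIVATE-TOKEN': api.token, 'Content-Type': 'application/json' },
    body: JSON.stringify({ title: issue.title, description: issue.description }),
  });
}

// Naive de-duplication: push any local issue whose title is unknown remotely.
async function sync(): Promise<void> {
  const [local, remote] = await Promise.all([listIssues(LOCAL), listIssues(REMOTE)]);
  const seen = new Set(remote.map((i) => i.title));
  for (const issue of local) {
    if (!seen.has(issue.title)) await createIssue(REMOTE, issue);
  }
}

sync().catch(console.error);
```

Notes and milestones have equivalent endpoints (e.g. /projects/:id/issues/:issue_iid/notes), so the same pattern extends to them, and a cron entry gives you the periodic sync.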
Related
I am using Pull Request Manager Hub from the marketplace for our Azure DevOps projects/repos. I'd like something that has a cleaner UI; this one seems too busy, everything looks like a button, and some of the icons don't show up completely. I don't want to complain too much, but is anyone using something else that they like more? My main requirement is that I should be able to see all pull requests across repositories in the same project.
I'm the creator of Pull Request Manager Hub, and new improvements (better UI, lots of new features) have been released to address some of the main complaints. Also, feel free to open issues in the GitHub repo, where we try to fix things as soon as possible.
Thanks ;-)
My main requirement is that I should be able to see all pull requests across repositories in the same project.
For this requirement, you can use the Pull Request Search extension. This extension allows pull requests to be filtered by status, creator, reviewer, title, start date, end date, and repository. You can select different repos in the same project from the Repo drop-down list.
Another extension, Pull Request Dashboard, can also show pull requests across all repositories, but it has a flaw: you can only see pull requests with active status.
The marketplace listing at https://marketplace.visualstudio.com/items?itemName=mimeo-vs-marketplace.mimeo-active-pull-requests has a plugin called 'All Active Pull Requests' (among others); it is the best I have found so far.
We are using Git for a website project where the develop branch will be the source of the test server, and the master branch will serve as the source for the live, production site. The reason is to keep the git-related steps (switching branches, pushing and pulling) to a minimum for the intended user population. It should be possible for these (not extremely technical) users to run a script that merges develop into master, after being alerted that this will be pushed to live. master cannot be modified by normal users; only one special user can do the merge.
This is where I'm not sure how to integrate this identity change into my code below:
https://gist.github.com/jfix/9fb7d9e2510d112e83ee49af0fb9e27f
I'm using the simple-git npm library. But more generally, I'm not sure whether what I want to do is actually possible, as I can't seem to find information about this anywhere.
My intention, of course, would be to use a GitHub personal access token instead of a password.
Git itself doesn't do anything about user or permission management. So the short answer is: don't try to do anything sneaky. Rather, use GitHub's user accounts the way they were intended.
What I suggest is to give this special user their own GitHub account, with their own copy of the repo. Let's say the main repo is at https://github.com/yourteam/repo, and the special repo is at https://github.com/special/repo.
The script will pull changes from the team repo's develop branch, merge them into its own master branch, and push to https://github.com/special/repo.
Then, it will push its changes to the team's master branch. This step can optionally be a forced push, since no one else is supposed to mess with master, anyway. (In case someone does, using a forced push here means they have to fix their local repo to match the team repo later on, rather than having the script fail until someone fixes the team repo.)
At the same time, your CI software will notice that master has changed at https://github.com/special/repo, and will publish as you normally would. This is the linchpin: the CI doesn't pay attention to the team repo, so although your team has permission to change it, those changes don't make it into production.
This special user will need commit access to the team repo, in addition to its own GitHub repo. The easiest way is probably to use an SSH key, and run the git push command from the script, rather than trying to use the GitHub API.
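To make that concrete, here's a rough sketch of what the special user's script could look like with simple-git (the library the question already uses). The clone path, bot identity, repo URLs, and DEPLOY_TOKEN variable are all placeholders, and it assumes the token-in-URL style of HTTPS auth rather than the SSH key suggested above:

```typescript
// deploy.ts: sketch of the special user's merge-and-publish script.
// Clone path, URLs, bot identity, and DEPLOY_TOKEN are placeholders.
import simpleGit from 'simple-git';

const TEAM_REPO = `https://${process.env.DEPLOY_TOKEN}@github.com/yourteam/repo.git`;
const git = simpleGit('/srv/deploy/special-clone');

async function deploy(): Promise<void> {
  // Commits are attributed to the deploy account, not to whoever runs this.
  await git.addConfig('user.name', 'Deploy Bot');
  await git.addConfig('user.email', 'deploy@example.com');

  // Fetch the team's develop and merge it into this clone's master.
  await git.checkout('master');
  await git.pull(TEAM_REPO, 'develop');

  // Publish to the special repo (the one the CI watches)...
  await git.push('origin', 'master');

  // ...and finally bring the team repo's master in line.
  await git.push(TEAM_REPO, 'master');
}

deploy().catch((err) => {
  console.error(err);
  process.exit(1);
});
```

Since the identity and token live with the dedicated account's clone, whoever triggers the script never needs push rights to master themselves.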
I am writing a little webapp that allows users to browse a VisualSVN server. I would like to add a GitHub-like online editor to this webapp, so users can edit files online, leave a message, and have the changes appear in the repository.
For that I need to check out the files locally. My idea was to check them out into a MongoDB, so I can save the changes per user, like a local working copy.
Is there a way (without reimplementing the svn protocol) to make a checkout into a database, or even just into memory, and then write it to the database?
If there are any questions, just ask :)
By the way, if someone is interested, here is the code: https://bitbucket.org/Knerd/svn-browser
There is no way to do an svn checkout directly into a database, but there are some options.
First of all, you can simply create a virtual disk that resides in memory and perform checkouts to that disk. Then you can store the checked-out files in the database.
Another option is to use the rich Subversion API directly. Note that Subversion is written in C, so you will need a bridge between Node.js and SVN (as far as I can remember, there are no official Subversion bindings for Node.js, but there are for Python and Java, and there is an unofficial nodesvn package available for Node.js). Using the API you can implement your own 'in-database' working copy.
You can also use the svnmucc utility (which is shipped with VisualSVN Server) to make commits directly in the repository (without even making a working copy). If you combine it with svn ls, svn info, etc., you can implement repository browsing and editing of files.
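As an illustration of the svnmucc route, here is a sketch from Node (TypeScript); the repository URL and paths are placeholders, and credentials are left out (add --username/--password or rely on Subversion's credential cache):

```typescript
// commit-edit.ts: commit an edited file straight to the repository
// with svnmucc, no working copy. URL and paths are placeholders.
import { execFile } from 'node:child_process';
import { writeFile } from 'node:fs/promises';
import { tmpdir } from 'node:os';
import { join } from 'node:path';
import { promisify } from 'node:util';

const run = promisify(execFile);
const REPO_URL = 'https://svn.example.com/svn/myrepo';

async function commitEdit(repoPath: string, content: string, message: string): Promise<void> {
  // Stage the user's edited content in a temp file...
  const tmp = join(tmpdir(), `svn-edit-${Date.now()}`);
  await writeFile(tmp, content, 'utf8');

  // ...and `put` it over the existing path in one new revision.
  await run('svnmucc', [
    '-U', REPO_URL,
    '-m', message,
    'put', tmp, repoPath, // repoPath is relative to REPO_URL, e.g. 'trunk/index.php'
  ]);
}
```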
I have a couple of dependencies in my Java project on third-party libs, and some of them are undergoing development that I would like to track.
I would like to be notified (by email, desktop popup, or otherwise) when changes are committed to the remote svn repo, so I can examine their impact etc.
I looked at svnmailer, but it seems to require the repo to be local (I think?).
I also found some Windows tools that do the job, but I am running a Linux desktop, so no go there.
Worst case, I can write a cron script to poll for remote changes using the command-line tools, but I would prefer an existing tool.
Sounds like a good use for a continuous integration server. Tools like CruiseControl or Hudson are designed for this use case: the whole point of them is to check your source control regularly, retrieve any changes, build the project and notify someone. In this case, it sounds like you don't even need to build the project, just send the email.
If you don't already have a CI server this might seem like overkill, but I bet once you've got one set up you'll find yourself using it again.
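If a CI server really is more than you want, the cron fallback from the question is only a few lines. A sketch (TypeScript on Node; the repository URL and state-file path are placeholders, and the console.log stands in for whatever notification you prefer):

```typescript
// svn-watch.ts: minimal poller for the cron approach. The repository
// URL, state file, and console.log notification are placeholders.
import { execFile } from 'node:child_process';
import { readFile, writeFile } from 'node:fs/promises';
import { promisify } from 'node:util';

const run = promisify(execFile);
const REPO = 'https://svn.example.com/svn/thirdparty-lib';
const STATE = '/var/tmp/svn-watch.last-rev';

// Read the repository's HEAD revision from `svn info` on the remote URL.
async function headRevision(): Promise<number> {
  const { stdout } = await run('svn', ['info', REPO]);
  const match = stdout.match(/^Revision:\s*(\d+)/m);
  if (!match) throw new Error('could not parse svn info output');
  return Number(match[1]);
}

async function check(): Promise<void> {
  const head = await headRevision();
  const last = Number(await readFile(STATE, 'utf8').catch(() => '0'));
  if (head > last) {
    // New commits: fetch their log messages and hand them to whatever
    // notifier you prefer (mail, notify-send, ...).
    const { stdout } = await run('svn', ['log', '-r', `${last + 1}:${head}`, REPO]);
    console.log(stdout);
    await writeFile(STATE, String(head), 'utf8');
  }
}

check().catch(console.error);
```

Run it from cron every few minutes and swap the console.log for mail or notify-send.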
This is somewhat related to my security question here. Is it a bad idea to use an hg / mercurial repository for a live website? If so, why?
Furthermore, we have dev, test and production installations of our website, like dev.example.com, test.example.com and www.example.com. If it's a bad idea to use a repository for a live/production website, would it be OK to use an hg repository for the dev and test sites?
I'm also concerned about ease of deployment. We have technical and less technical co-workers who will be working with the site. The technical people (software engineers) won't have any problem working with the command line or TortoiseHg. I'm more concerned about the less technical people (web designers): they won't be comfortable working on the command line, and may even find TortoiseHg daunting. These co-workers mostly upload .css files and images to the server. I'd like for these files (at least the .css files) to be under version control, but I want this to be as transparent as possible for the non-technical team members.
What's the best way to achieve this?
Edit:
Our 'site' is actually a multi-site CMS setup with a main repository and several subrepositories. Mock-up of the repository structure:
/root [main repository containing core files and subrepositories]
/modules [modules subrepository]
/sites/global [subrepository for global .css and .php files]
/sites/site1 [site1 subrepository]
...
/sites/siteN [siteN subrepository]
Software engineers would work in the root, modules and sites/global repositories. Less technical people (web designers) would work only in the site1 ... siteN subrepositories.
Yes, it is a bad idea.
Do not have your repository as your website. It means that things checked in, but unworking, will immediately be available. And it means that accidental checkins (it happens) will be reflected live as well (i.e. documents that don't belong there, etc).
I actually address this "concept" (source control as deployment) with a tool I've written; a few other companies are addressing this topic now as well, so you'll see it more. Mine is for SVN (at the moment), so it's not particularly relevant; I mention it only to show that I've considered this previously. (Not on a repository, though, but on a working copy; in that scenario the answer is the same: better to have a non-versioned "free" area as the website directory, and automate (via user action) the copying of the versioned data into that directory.)
Many folks keep their sites in repositories, and so long as you don't have people live-editing the live site, you're fine. Have a staging/dev area where your non-revision-control folks make their changes, and then have someone more RCS-friendly do the commit-pull-merge-push cycle periodically.
So long as it's the conscious action of a judging human doing the staging-area -> production-repo push you're fine. You can even put a hook into the production clone that automatically does a 'hg update' of the working directory within that production clone, so that 'push' is all it takes to deploy.
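The hook itself is a one-liner in the production clone's .hg/hgrc, something like:

```
[hooks]
# after each push into this clone, update the served working directory
changegroup = hg update
```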
That said, I think you're underestimating either your web team or TortoiseHg; they can get this.
Personally (I'm a team of 1), I quite like the idea of using source control as a live website, more so with hg than with svn.
The way I see it, you can load an entire site (add/remove files) with a single command,
which is much easier than ftp/ssh this, delete that, etc.
If you are using Apache (and probably IIS as well), you can make a simple .htaccess file that will block all .hg files (or .svn if you are using svn), as shown below.
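For example, a single mod_alias rule in that .htaccess (a sketch; adjust to taste) answers any request into repository metadata with a 404:

```
# deny web access to .hg (or .svn) directories
RedirectMatch 404 /\.(hg|svn)(/|$)
```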
My preferred structure is:
Development site: on the local machine, running directly out of a repository (no security is really required here; do what you like and commit as required).
Staging/test machine: a separate box or VM running a recent copy of the live database (I have a script to push committed changes to the staging server and run tests).
Live machine: open an SSH connection, push changes to the live server, test again (this can all be scripted reasonably easily; google for examples).
Because of the push/pull nature of hg, you can commit changes and test without the danger of pushing a broken build to the live website. Like you say in your comments, only specific people should have permission to push a version to the live site. (If it fails, you should easily be able to revert to the previous version via source control.)
Why not have a repo also be an active web server (for the dev or test/QA environment, anyway)?
Here's what I am trying to implement:
Developers have local test environments in which they can build and test their code
Developers make a clone of the dev environment on their local dev machine
Developers commit as often as they want to their local repo
When a chunk of work is done and tested, the developer pushes the working change sets to the dev repo
Changes would be merged and tested on Dev, then pushed to Test/QA, and so on.
BTW, we're using Mercurial. I believe this model would only work using a distributed source code management tool.
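For what it's worth, the day-to-day commands in that model stay short; a sketch with hypothetical server URLs:

```
hg clone https://dev-server/project    # one-time: local clone of the dev repo
hg commit -m "WIP: new feature"        # commit locally as often as you like
hg push                                # once tested, publish change sets to dev

hg push https://qa-server/project      # after merging on dev, promote to test/QA
```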