I maintain a C# project from both my work computer and my home computer.
I created the project on my work laptop and pushed the commits to the DevOps repo before going home. That night I cloned the repo from DevOps so I could continue working on the program from my home computer, after pulling the commit I had made at the workplace. I made another commit before ending work for the day.
Now, here is the question, as I am going to continue working tomorrow at my workplace. Because the project was created on my work computer, I am afraid that if I pull the commits made on the home computer, there may be conflicts if I screw something up. What is the safest way to pull the commits to a previous machine (the project folder on the workplace PC, that is) without any conflicts? Again, I am new to Azure DevOps. I will add more information if needed.
(For pulling to new machines, I am comfortable enough. I'm more concerned with pulling to a previous machine after working from a newer computer.)
Don't leave any uncommitted changes on your work computer if you plan to continue working from home. Commit and push your current state to Azure DevOps before continuing from another computer. As long as every machine is fully committed and pushed before you switch, each pull is a simple fast-forward and cannot conflict.
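The routine can be sketched end to end. In this sketch a local bare repository stands in for the Azure DevOps remote, and the paths and branch name are assumptions:

```shell
#!/bin/sh
# Hypothetical sketch of the two-machine routine. A local bare repo stands in
# for the Azure DevOps remote; all paths and the branch name are made up.
set -e
tmp=$(mktemp -d)
git init -q --bare "$tmp/devops.git"
git --git-dir="$tmp/devops.git" symbolic-ref HEAD refs/heads/master

# Work PC: create the project and push before leaving.
git clone -q "$tmp/devops.git" "$tmp/work" 2>/dev/null
cd "$tmp/work"
git config user.email you@example.com; git config user.name you
git symbolic-ref HEAD refs/heads/master
echo 'class Program {}' > Program.cs
git add . && git commit -q -m "initial commit"
git push -q origin master

# Home PC: clone, work, commit, push before calling it a night.
git clone -q "$tmp/devops.git" "$tmp/home" 2>/dev/null
cd "$tmp/home"
git config user.email you@example.com; git config user.name you
echo '// evening work' >> Program.cs
git add . && git commit -q -m "evening work"
git push -q origin master

# Work PC, next morning: a clean tree means the pull is a fast-forward,
# so there is nothing to conflict with.
cd "$tmp/work"
git status --porcelain      # no output = nothing uncommitted
git pull -q origin master
```

As long as each machine ends its session with a push and starts the next one with a pull on a clean working tree, every pull is a fast-forward and there is nothing for Git to merge, so no conflicts can occur.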
Related
I currently work on solutions/projects within a single Git repository, in Visual Studio. The commits I make go to a local folder on the Visual Studio server, and then I use 'git push origin master' (after changing directory to my local folder/repository) to push commits to a GitLab instance in my company's corporate space. The purpose of this is less about using branches and software development (I am the only person who does any work on this), and more about having a way to roll back changes and keep a master copy off the server.
I now want a fresh copy of this Git repository, so I can use it as a new baseline for an application migration. I will still continue to work on the existing repository too.
What is the best way to make a copy of the existing repository that I can treat as a totally separate thing, without accidentally messing up my existing config on the server? Should I do the clone from GitLab? Or clone locally and then push that up to a new space in my GitLab? Honestly, I'm a bit confused at this point about the proper model for this stuff.
Sounds like you'd like to fork the project: keep the existing repo and start a new, separate repo based on the old one.
Browse to your project in Gitlab
On the main repo screen, click "fork" in the top right
Select a new organisation, or keep the same one, as you'd like
The project is now forked!
From here, you can clone the fork to your local machine in a new folder. These are now separate projects, and code updates can be added, committed and pushed to the separate repos.
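The effect of the fork can also be shown from the command line. This is a local simulation only: the two bare repositories stand in for the original GitLab project and its fork, and every name and path here is made up:

```shell
#!/bin/sh
# Sketch of what the fork gives you, simulated with local bare repos standing
# in for the GitLab projects (names and paths are hypothetical).
set -e
tmp=$(mktemp -d)
git init -q --bare "$tmp/original.git"
git --git-dir="$tmp/original.git" symbolic-ref HEAD refs/heads/master

# The existing project, with one commit.
git clone -q "$tmp/original.git" "$tmp/wc" 2>/dev/null
cd "$tmp/wc"
git config user.email you@example.com; git config user.name you
git symbolic-ref HEAD refs/heads/master
echo v1 > app.txt && git add . && git commit -q -m baseline
git push -q origin master

# The "fork": a brand-new repo seeded with everything from the original.
git init -q --bare "$tmp/fork.git"
git --git-dir="$tmp/original.git" push -q --mirror "$tmp/fork.git"
git --git-dir="$tmp/fork.git" symbolic-ref HEAD refs/heads/master

# Clone the fork into a new folder and commit to it; the original is untouched.
git clone -q "$tmp/fork.git" "$tmp/migration"
cd "$tmp/migration"
git config user.email you@example.com; git config user.name you
echo v2 >> app.txt && git add . && git commit -q -m "migration-only change"
git push -q origin master
```

After this, the fork has one commit more than the original, which is exactly the "totally separate thing" behaviour you want for the migration baseline.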
I have a project running on a remote server. I cloned it onto the server to run it. The problem is that every time I make a change to the code via git, I have to go into the remote server, delete the folder, and clone it again. How can it automatically detect a change in the repo and update itself?
You're looking for what's called continuous delivery/deployment.
Since you're using GitHub, you may want to look at GitHub Actions. This is one of many mechanisms that are available.
You can configure Actions to run various steps (including building, testing, and deploying your code to the DigitalOcean droplet) every time you push a commit.
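As a rough sketch (the workflow file name, secret names, server path, and branch are all assumptions, not something from the question), a workflow that pulls the latest code on the droplet after every push might look like:

```yaml
# .github/workflows/deploy.yml -- minimal sketch; DEPLOY_SSH_KEY, DEPLOY_USER,
# DROPLET_HOST and /srv/myapp are hypothetical names for your own secrets/paths.
name: deploy
on:
  push:
    branches: [master]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Update the code on the droplet
        run: |
          install -m 600 /dev/null key
          echo "${{ secrets.DEPLOY_SSH_KEY }}" > key
          ssh -i key -o StrictHostKeyChecking=accept-new \
            "${{ secrets.DEPLOY_USER }}@${{ secrets.DROPLET_HOST }}" \
            'cd /srv/myapp && git pull'
```

With this in place, the manual delete-and-reclone step disappears: pushing to the branch triggers the workflow, which runs git pull on the server for you.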
At the moment we have a local server as our SVN server, running Subversion Edge 3.1.0, where users push their commits; it acts as the main repository. Recently this has been giving us problems: the server tends to switch off or run into trouble, after which it needs to be restarted.
Since we also have some people offshore working on the same repository, we decided it's best to set up an Azure VM. This will act as a backup server and also keep the repository updated with each commit (like Dropbox, File Sync, etc.).
My questions are,
Has anyone actually managed to set up an environment similar to this?
How do commits work when someone pushes to the cloud repository and someone else then pushes to the local repository?
Has anyone actually managed to set up an environment similar to this?
As long as you have the networking configured such that users can reach this Azure VM via HTTP (preferably HTTPS), it should be no different from hosting a repository on your company network.
How do commits work when someone pushes to the cloud repository and someone else then pushes to the local repository?
Subversion has no notion of a "cloud repository" vs. a "local repository" because it's a centralized VCS - there is only one repository, ever.
Users would simply commit to your Azure-hosted repository instead of the on-premises one. The commits work exactly the same.
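Existing working copies don't even need a fresh checkout: each user can point their copy at the new server with "svn relocate". A sketch, where the file:// URLs merely simulate the old and new server locations (the real ones would be your on-premises and Azure URLs):

```shell
#!/bin/sh
# Sketch: after moving the repository to the Azure VM, each user points their
# existing working copy at the new URL with "svn relocate". The file:// URLs
# here just simulate the old and new server locations.
set -e
tmp=$(mktemp -d)
svnadmin create "$tmp/old-server"
svn checkout -q "file://$tmp/old-server" "$tmp/wc"

# Move the repository (hotcopy preserves the UUID, which relocate checks).
svnadmin hotcopy "$tmp/old-server" "$tmp/azure-server"

# One command per working copy; local changes and history are untouched.
svn relocate "file://$tmp/azure-server" "$tmp/wc"
svn info "$tmp/wc" | grep '^URL'
```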
this will act as a backup server and also have the repository updated with each commit
Subversion on its own is not a backup! You must take regular backups of your repository and keep them in a location that is separate from the repository server to truly keep your repository data safe.
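A real backup is easily scripted with svnadmin. The paths below are hypothetical; in practice you would run something like this from cron and copy the results off the repository server:

```shell
#!/bin/sh
# Sketch of a scriptable Subversion backup (paths are hypothetical; in real
# use, run from cron and ship the results off the repository server).
set -e
tmp=$(mktemp -d)
svnadmin create "$tmp/repo"          # stands in for your real repository
mkdir -p "$tmp/backups"

# Full, consistent copy, safe to take while the repository is in use:
svnadmin hotcopy "$tmp/repo" "$tmp/backups/repo-$(date +%F)"

# Or a portable dump stream that any Subversion version can load:
svnadmin dump -q "$tmp/repo" > "$tmp/backups/repo-$(date +%F).svndump"
```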
Your repository will always be "updated with each commit" because that's how Subversion works in the first place. Assuming your developers are committing code regularly, that is.
Seemingly both GitLab and gitlab-mirrors are set up and working correctly for me. However, not long after I create a new branch under username Admin using the default root account (not to be confused with the gitmirror user created shortly afterwards), I'll see in the commit history that the Admin user deleted the branch entirely, typically an hour later.
Prior to the new branch being deleted, everything seems fine, and it is really cool to see my branch alongside the history of previous commits in the repo I mirrored using gitlab-mirrors.
To be clearer, I am using gitlab-mirrors to pull a GitHub-hosted repo into GitLab. That repo already has two branches, one of which is called master (and is HEAD), and I can see those fine. The next thing I do is create a new branch for my own development as the Admin user, and it seems to work just fine. Within an hour, though, the log will show that the Admin user deleted the newly created branch (this happens every time, automatically, and I certainly didn't do it manually myself).
I've checked cron, and it seems OK. During setup, as the gitmirror user I used 'crontab -e' to install the following text (tweaked for my own domain, of course):
Host gitlab.example.com
User git
Since the machine is a virtual machine, it is easy to save snapshots, which I've done. So it is possible to restore a snapshot with all the changes described and watch the Admin user automatically delete my new branch shortly after restoration (because the snapshot was saved hours ago already), which is handy for debugging purposes.
Should I not be creating repos using the default GitLab root account? Is there a log somewhere that shows more than the recent Git history on the GitLab projects page? What else can I try?
I've created two different servers from scratch using the latest software versions now, and I get identical results on each.
This is somewhat related to my security question here. Is it a bad idea to use an hg/Mercurial repository for a live website? If so, why?
Furthermore, we have dev, test and production installations of our website, like dev.example.com, test.example.com and www.example.com. If it's a bad idea to use a repository for a live/production website, would it be OK to use an hg repository for the dev and test sites?
I'm also concerned about ease of deployment. We have technical and less technical co-workers who will be working with the site. The technical people (software engineers) won't have any problem working with the command line or TortoiseHg. I'm more concerned about the less technical people (web designers): they won't be comfortable working on the command line, and may even find TortoiseHg daunting. These co-workers mostly upload .css files and images to the server. I'd like these files (at least the .css files) to be under version control, but I want this to be as transparent as possible for the non-technical team members.
What's the best way to achieve this?
Edit:
Our 'site' is actually a multi-site CMS setup with a main repository and several subrepositories. Mock-up of the repository structure:
/root [main repository containing core files and subrepositories]
/modules [modules subrepository]
/sites/global [subrepository for global .css and .php files]
/sites/site1 [site1 subrepository]
...
/sites/siteN [siteN subrepository]
Software engineers would work in the root, modules and sites/global repositories. Less technical people (web designers) would work only in the site1 ... siteN subrepositories.
Yes, it is a bad idea.
Do not have your repository be your website. It means that things that are checked in but not working will immediately be available, and accidental check-ins (they happen) will be reflected live as well (i.e. documents that don't belong there, etc.).
I have actually addressed this concept (source control as deployment) with a tool I've written (a few other companies are addressing this topic now as well, so you'll see it more). Mine is for SVN at the moment, so it's not particularly relevant; I mention it only to show that I've considered this before. Not on a repository, though, but on a working copy; in that scenario the answer is the same: better to have a non-versioned "free" area as the website directory, and automate (via user action) the copying of the versioned data into that directory.
Many folks keep their sites in repositories, and so long as you don't have people live-editing the live-site you're fine. Have a staging/dev area where your non-revision control folks make their changes and then have someone more RCS-friendly do the commit-pull-merge-push cycle periodically.
So long as it's the conscious action of a judging human doing the staging-area -> production-repo push you're fine. You can even put a hook into the production clone that automatically does a 'hg update' of the working directory within that production clone, so that 'push' is all it takes to deploy.
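A minimal version of that hook, added to the .hg/hgrc of the production clone (this assumes hg is on the server's PATH):

```ini
[hooks]
# After every incoming push, bring the working directory up to date:
changegroup = hg update
```

This keeps the deployment a deliberate act: nothing changes on the live site until someone pushes to the production clone.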
That said, I think you're underestimating either your web team or TortoiseHg; they can get this.
Personally (I'm a team of 1), I quite like the idea of using source control as a live website, more so with hg than with svn.
The way I see it, you can update an entire site (add/remove files) with a single command,
which is much easier than "ftp/ssh this, delete that", etc.
If you are using Apache (and probably IIS as well), you can make a simple .htaccess file that blocks all .hg files (or .svn if you are using svn).
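For Apache, a single mod_alias rule in the site's .htaccess is enough (the exact rule is an assumption about your setup; IIS would need its own equivalent):

```
# Return 404 for anything under the repository metadata directory
RedirectMatch 404 /\.hg(/|$)
```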
My preferred structure is:
development site on the local machine, running directly out of a repository (no security is really required here; do what you like, commit as required)
staging/test machine: a separate box or VM running a recent copy of the live database
(I have a script to push committed changes to the staging server and run tests)
live machine
(open an ssh connection, push changes to the live server, test again; this can all be scripted reasonably easily, google for examples)
Because of the push/pull nature of hg, you can commit changes and test without the danger of pushing a broken build to the live website. As you say in your comments, only specific people should have permission to push a version to the live site. (If it fails, you should easily be able to revert to the previous version via source control.)
Why not have a repo also be an active web server (for dev or test/QA environment anyway)?
Here's what I am trying to implement:
Developers have local test environments in which they can build and test their code
Developers make a clone of the dev environment on their local dev machine
Developers commit as often as they want to their local repo
When a chunk of work is done and tested, the developer pushes the working changesets to the dev repo
Changes would be merged and tested on Dev, then pushed to Test/QA, and so on.
BTW, we're using Mercurial. I believe this model would only work using a distributed source code management tool.