I am working at a small startup with approximately 12-15 developers. We recently had an issue with one of our servers whereby the entire server was "re-provisioned", i.e. completely wiped of all the code, databases, and information on it. Our hosting company informed us that only someone with access to the server account could have done something like this, and we believe it may have been a disgruntled employee (we have recently had to downsize). We had local backups of some of the data but are still trying to recover from the loss.
My question is this: we have recently begun using GitHub to manage source control on some of our other projects, and have more than a few private repositories. Is there any way to ensure some sort of protection of our source code? What I mean is that I am aware that you can delete an entire project on GitHub, or even a series of code updates. I would like to prevent this from happening.
What I would like to do is create (perhaps in a separate repository) a complete replica of the project on Git, and ensure that only a single individual has access to this replicated project. That way, if the original project is corrupted or destroyed for any reason, we can restore to where we were (with history intact) from the backup repository.
Is this possible? What is the best way to do this? GitHub has recently introduced "Company" accounts... is that the way to go?
Any help on this situation would be greatly appreciated.
Cheers!
Well, if a disgruntled employee leaves, you can easily remove them from all your repositories, especially if you are using Organizations: you just remove them from a team. In the event that someone who still had access for some reason deletes a repository maliciously, we have daily backups of all of the repositories and will reconstitute one if you ask. So at worst you would never lose more than one day of code work, and likely someone on the team will have a clone with that code anyhow. If you need more protection than that, then yes, you can set up a cron'd fetch or something similar that mirrors your code more often.
First, you should really consult GitHub support; only they can tell you how they do backups, what options for permission control they have (especially now that they have introduced "Organizations"), etc. You also have an agreement with them, so do read it.
Second, it's still very easy to run git fetch from cron, say, once an hour (on your local machine or on your server), and then you're pretty safe.
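For example, a bare mirror refreshed hourly from cron could look something like the sketch below (the repository URL and backup path are placeholders):

```bash
# One-time setup: create a bare mirror of the GitHub repository
# (URL and backup path here are hypothetical).
git clone --mirror git@github.com:example-org/example-repo.git /backups/example-repo.git

# Crontab entry: refresh the mirror at the top of every hour.
# 0 * * * * cd /backups/example-repo.git && git remote update --prune
```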
Git is a distributed system, so your local copy contains the same history as your remote copy on GitHub. You should be OK to push it back up there.
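As a minimal sketch, assuming "origin" points at a freshly re-created (empty) GitHub repository, restoring from any up-to-date local clone looks like this:

```bash
# Push every branch and every tag from the local clone back to GitHub;
# "origin" is assumed to point at the re-created repository.
git push origin --all
git push origin --tags
```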
Related
I am evaluating GitLab for my enterprise, and I really hope my assumption is wrong here, because I REALLY like the product.
For my enterprise, the built-in permissions are far too open. We could lose about 2 or 3 industry certifications due to failed security audits if we turned it on with those permission levels and the permissions per level.
How do I create my own security levels? Guest and Reporter need to be purged from the system completely; Enterprise Security would crucify me in the front lobby if I put those in. Then Developer's permissions need to be slashed way back, Master needs to be slashed way back, and I need to create maybe 3 more specialist roles.
I know there isn't a UI to do this, but please tell me there's a file somewhere I can edit to do this. I would hate to be forced to spend 5X as much on GitHub over this single issue.
A custom permission system has been under discussion for quite some time now; see the discussion here. Right now the only way you can modify rights is by editing the file ability.rb, which defines what permissions each role has.
def project_owner_rules, for instance, defines what permissions the project owner has.
Keep in mind that this file will be overwritten with every update, so if you make changes, keep a copy of your modifications around.
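One way to do that, sketched here under the assumption that your GitLab install is a git checkout and the file lives at app/models/ability.rb, is to keep your changes as a patch and re-apply it after each update:

```bash
# Save the local permission tweaks as a patch (paths are assumptions).
cd /home/git/gitlab
git diff app/models/ability.rb > ~/ability-permissions.patch

# ...run the GitLab update, which overwrites ability.rb...

# Re-apply the saved tweaks on top of the updated file.
git apply ~/ability-permissions.patch
```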
I would like to share a project\solution with two teams, ideally on two TFS servers.
The option of having both teams use the same TFS doesn't work, because one of the teams doesn't have access to one of the TFS servers, yet hosting the solution on that TFS is a requirement.
It looks as follows:
Project\solution -> Team1 -> TFS1 (requirement)
Same Project\solution -> Team1 + Team2 -> TFS2 (???)
What are my options? Is there a tool out there that can do this? Should I use two different version control packages?
You can use the TFS Integration Platform to sync the Team Projects between the TFS installs... but the best option is to access one TFS through a TFS Proxy.
Another way is to use a Git repository: you can sync the remote repository with your local one, but the work items would still only be accessible through TFS.
There are really three ways to solve your problem. The reality is that only #1 is effective if you can't use the cloud. Note that #3 is fraught with pain and suffering: as with all dark-side hacks/workarounds, nothing meets the needs like solving the underlying problem rather than sweeping it under the carpet.
All access - the only really viable solution is to give all required users access to the TFS server. I have worked with healthcare, banking, defence, and even insurance. In all cases, in all companies, you can have a single server that all required users can access. In some cases it is hard and fraught with bureaucracy, but ultimately it can be done.
Visual Studio Online - while there is fear of the cloud, this is likely your only option if you have externals that really can't directly access your network. This would be that common server. If you are in Europe, then MS has just signed an agreement that ostensibly puts EU-located servers for an American company outside the reach of the Patriot Act (untested). You can also easily use the TFS Integration Tools to create a one-way sync between VSO and your local server.
Bi-directional synchronization - this can be achieved in many ways, but there is always a penalty in merging if you have changes on both ends. You can use the TFS Integration Tools, which are free, or a commercially available tool like OpsHub. If you are using Git as your repository within TFS, then you can use the command line to push source between the two servers, even if they can't communicate, by using a USB stick (see the sketch below).
Use #1 or #2, and only ever use #3 as a temporary, short-term measure.
I use the tools all the time to move one way only from one system to another, and even this is a complicated experience. To move stuff bi-directionally you will need a full-time resource to resolve conflicts.
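For completeness, here is a rough sketch of the USB-stick variant of #3 mentioned above, using git bundle (all paths are hypothetical):

```bash
# On the source server: pack all refs into a single file on the stick.
git bundle create /media/usb/repo.bundle --all

# On the disconnected target: clone from the bundle the first time...
git clone /media/usb/repo.bundle repo

# ...and pull later bundles into the existing clone on subsequent trips.
git pull /media/usb/repo.bundle master
```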
If the servers can communicate with each other, you may be able to use a system akin to replication: there is one master TFS instance, and external sites use a proxy so the second team can work without direct or always-available access to the main server.
Alternatively, you may be able to use branches: you could keep a branch for the external team's code and then merge to or from that branch to your mainline. In this scheme you would be able to sync the code by copying between the branch and the external site, so you could even transfer updates on memory sticks if there is no direct net connection. Each sync would be pretty time-consuming, though, as someone on your main server team would have to merge code back and forth through the branch.
Another thing to consider is whether there is any way you can divide up the codebase and the tasks to minimise the overlap between the two teams, for example if one team provides a library that the other uses. This is just to minimise the merging needed.
I maintain the website for my daughter's school. Historically I used our service provider's webftp interface to make edits in place through their web-based GUI, which worked well for the small edits I typically need to make. They recently disabled webftp for security reasons, leaving FTP as the only means to modify web pages and upload documents. Yes, I know FTP isn't very secure itself, but that's another story.
I have used wget to pull down the site and have built a git repository to manage changes. As I make changes, I want to upload the new and modified files to the website. Ideally I would only upload the new and changed files, not the entire site. My question is how do I do this?
I have found wput, which looks promising, but its documentation is not clear about which directory that wget created is the one I should recursively upload, and what the target directory should be. Since we are talking about a live site, I don't want to experiment until I get things right.
This seems like it should be a common use case with a well-known solution, but I have had little luck finding one. I have tried searches on Google and Stack Overflow with terms like "upload changed files ftp linux", but no clear answer pops up like I usually get. Some recommend rsync, but the target is on the service provider's system, so that might not work. Many variants of my question that I have found are Windows-centric and of little help since I work on Linux.
I suppose I could set up a target VM and experiment with that, but that seems excessive for what should be an easily answered question, hence my appeal to Stack Overflow. What do you recommend?
Maybe this answer helps you: https://serverfault.com/questions/24622/how-to-use-rsync-over-ftp/24833#24833
It uses lftp's mirror function to sync up a remote and local directory tree.
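A minimal invocation, with hypothetical credentials and paths, might look like:

```bash
# "mirror -R" reverses the direction (local -> remote); --only-newer skips
# files that have not changed. Host, user, and paths are placeholders.
lftp -u username,password -e "mirror -R --only-newer ./site /public_html; quit" ftp.example.com
```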
I also used mc's (Midnight Commander) built-in FTP client quite a lot when maintaining a site.
You should use git with git-ftp. It is generally a good idea to use a VCS for any project...
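A rough sketch of the git-ftp workflow, with placeholder credentials and URL:

```bash
# One-time: record the current state of the remote site so git-ftp
# has a baseline to diff against (URL and credentials are placeholders).
git ftp init --user username --passwd secret ftp://ftp.example.com/public_html

# After each commit, upload only the files that changed:
git ftp push --user username --passwd secret ftp://ftp.example.com/public_html
```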
Say I've got a \\Repo\... repo. Currently devs generally tend to do all their work directly in there, which normally isn't a problem for small pieces of work. Every so often, this approach fails for various reasons, mainly because they're unable to submit the incomplete change to Live.
So, I was wondering, is there a way to enforce on the server that:
1) no files can be directly checked out from \\Repo\...
2) users then branch to a private area (\\Projects\...)
3) dev, test, submit, dev, test, submit, ...
4) on dev complete, they can re-integrate back into \\Repo\...
I guess the last part is the problem, as files need to be checked out! Has anyone implemented something similar? Any suggestions are much appreciated.
There is no way (that I know of) to enforce this type of workflow in P4. You could try to enforce it by setting commit triggers, restricting permissions, or locking files; however, I believe it would only result in more work (micro-management) and frustrate you and your team.
The best way to establish and enforce any SCM workflow is to make it company/studio policy. Your team should be responsible for, and able to, follow the set procedure and determine (by themselves or through discussion) whether an issue can be fixed in the main line.
One note about the proposed workflow: creating a new branch for every issue will eventually cause problems, and at some point you will need to perform maintenance on the server to conserve disk space and depot browsing speed.
For more information on (over)branching in Perforce, read this Perforce blog entry from 2009: Perforce Anti-Patterns Part 2: Overuse of branching.
In many studios using Perforce, most developers have their own "working" branch which they continually re-use whenever there are changes that are not safe or able to be performed in the main line.
If I understand your question properly, you should try Perforce's shelving and working-offline features. Process is the main thing for achieving success in this scenario, so you might need to set up the right process to execute this.
For more info about shelving and working offline with Perforce, you can try the following links...
http://www.perforce.com/perforce/doc.current/manuals/cmdref/shelve.html
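A brief sketch of the shelving commands (the changelist number is just an example):

```bash
# Store the files opened in pending changelist 1234 on the server.
p4 shelve -c 1234

# Optionally revert the local copies once they are safely shelved.
p4 revert -c 1234 //...

# Later, or from another workspace, restore the shelved files.
p4 unshelve -s 1234
```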
A friend of mine and I are developing a web server for system administration in Perl, similar to Webmin. We have set up a Linux box with the current version of the server working, along with other open-source web products like webmail, a calendar, an inventory management system, and more.
Currently, the code is not under revision control and we're just doing periodic snapshots.
We would like to put the code under revision control.
My question is: what would be a good way to set this up, and what software solution should we use?
One solution I can think of is to make the root of the project, which is currently on the Linux box, the root of the repository as well. We would then check out the code on our personal machines, work on it, commit, and test the result.
Any other ideas, approaches?
Thanks a lot,
Spasski
Version Control with Subversion covers many fundamental version control concepts in addition to being the authority on Subversion itself. If you read the first chapter, you might get a good idea on how to set things up.
In your case, it sounds like you're doing the actual development on the live system. This doesn't really matter as far as a version control system is concerned; you can still use Subversion for:
Committing as a means of backing up your code and updating your repository with working changes. Make a habit of committing after testing, so there are as few broken commits as possible.
Tagging as a means of keeping track of what you do. When you've added a feature, make a tag. This way you can easily revert to "before we implemented X" if necessary.
Branching to develop larger chunks of changes. If a feature takes several days to develop, you might want to commit during development, but not to the trunk, since you would then be committing something that is only half finished. In this case, you should commit to a branch (see the sketch after this list).
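A minimal sketch of those three operations, against a hypothetical repository URL:

```bash
# Commit after testing, with a meaningful message.
svn commit -m "Add calendar module, tested on the live box"

# Tag: record the state before starting on feature X (URL is a placeholder).
svn copy http://svn.example.com/repo/trunk \
         http://svn.example.com/repo/tags/before-feature-X \
         -m "Tag state before implementing feature X"

# Branch: a place to commit half-finished work on feature X.
svn copy http://svn.example.com/repo/trunk \
         http://svn.example.com/repo/branches/feature-X \
         -m "Create branch for feature X"
```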
Where you create a repository doesn't really matter, but you should only place working copies where they are actually usable. In your case, it sounds like the live server is the only such place.
For a more light-weight solution with less overhead, where any folder anywhere can be a repository, you might want to use Bazaar instead. Bazaar is a more flexible version control system than Subversion and might suit your needs better. With Bazaar, you could make a repository of your live system instead of setting up a repository somewhere else, but still follow the three guidelines above.
How many webapp instances can you run?
You shouldn't commit untested code, or make commits from a machine that can't run your code. Though you can push to backup clones if you like.