How to work collaboratively on a website

I'm working on a website with some other people. Usually when we want to modify something, we make the change on our own machine and just upload the new version with FTP, hoping it works (or that nobody will notice it doesn't before we correct it), and that's it.
That's already not the best way to work alone, and it's even worse for working collaboratively, so I'm asking for advice.
I think a solution like SVN/Git/Mercurial could help. I found Bitbucket, which offers free private repositories with Mercurial. But even then, how can I upload the changes I made to the FTP server and make sure the version on my computer is the same as the one on the server?
We are all doing this in our free time (unpaid), and people come and go every year, so I'm looking for something free, easy to use (explaining to everyone why we should use a DVCS is already hard), and which doesn't rely on a specific person.
The server we are using to host the website is a cheap one and doesn't allow the use of SSH, SVN, etc.
Thank you

Version control will not help with the issue you are describing - namely, uploading untested changes to a production site.
What you (and your team) need is better quality control procedures - you need a test website and a tester (QA) person. The process would be:
Make a change
Update the test website
Have the update and the whole website signed off by QA
Update the production/live site
What you will gain by using version control (CVS, SVN, Git or anything else) is recoverability - you will be able to go back to a version before any breaking change. It will still not solve the issue of "the new code broke the site".
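For example, with Git that recoverability is a couple of commands (a minimal sketch; the commit hash is hypothetical):

git log --oneline              # locate the last known-good commit, e.g. a1b2c3d
git revert HEAD                # create a new commit that undoes the breaking change
git checkout a1b2c3d -- .      # or restore the whole working tree to the good version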

You want scheduled releases.
Commit and update code regularly
Code freeze, or develop in a branch and merge to the trunk (see the sketch after this list)
Test on a staging environment
If you find a bug, go back to step 1
Release
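A branch-and-merge cycle in SVN might look like this (a sketch; the repository URL and branch name are placeholders):

svn copy http://example.com/repo/trunk http://example.com/repo/branches/release-1.1 -m "Create release branch"
# develop and test on the branch, then merge it back:
cd trunk-working-copy
svn merge http://example.com/repo/branches/release-1.1
svn commit -m "Merge release-1.1 back into trunk"

The same idea works in Git or Mercurial with their branch and merge commands.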

You need to understand that what represents your latest correct working build is not what's on the server but what's in your source repository, whether that's SVN or just the file system - anything, as long as it isn't the live server! Make sure everything works locally as expected, then, unless the site is huge (I guess not, given your situation), deploy it in its entirety as a single version.
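Since your host only allows FTP, a whole-site deploy can be scripted with a mirroring client such as lftp (a sketch; the host, credentials and paths are placeholders):

lftp -u user,password ftp.example.com -e "mirror -R --delete ./site/ /public_html/; quit"

mirror -R uploads the local tree to the server, and --delete removes remote files that no longer exist locally, so the server always matches the version you tested.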


Using Headless Domino Designer to create NSF on a Domino Server

This wiki (https://www-10.lotus.com/ldd/ddwiki.nsf/dx/Headless_Designer_Wiki) seemed to indicate that you can only create an NSF under your Notes Data directory. I have done a couple of quick tests, and the only workaround I can find is to install Domino Designer on the same server as the target Domino server and set the target as the Domino data folder (i.e., C:\Domino\Data\sample.nsf instead of just sample.nsf).
The reason for this is that I am trying to find an automated way of doing the following:
Import an ODP into the workspace
Associate it with a new NSF, but choose a Domino server as the target
Does anyone have another workaround for this?
I wish I had a more complete answer for you, but as this is still unanswered after a few days, I'll try to add some insight. It sounds like you have some experience getting headless DDE builds to work, so I won't focus on that. If you're looking for my take on headless DDE builds, I blogged on the subject a while ago, but have since adapted the Jenkins CI based process I outlined there into a GitLab CI runner based solution, which I described in another SO answer.
Firstly, I would strongly recommend against setting your Designer target as the same as a server instance. This might work, but seems an unnecessary complication, and potentially issue prone, IMO.
My interpretation of your steps:
automatically receive updates (e.g.- on master branch, or all commits, etc.)
perform build via headless DDE
deploy built NSF
Splitting apart the logic for deploying the built NSF is ideal here, since you have an asset that needs to be parked in a server path. The two main approaches I see are either:
having a dev/staging server that you can programmatically restart on demand
a more complex mechanism, in an NSF or server plugin, that will ingest the NSF's design and replace the design elements in a (newly created) destination NSF
As you can imagine, that last one is a bit tricky, but it's something I've put off working on until I have more "free time". As for the former, you'll likely want someone with a bit of an admin/operations skill set to assist you, but in my mind there would be a total of three scripts involved (sketched after this list):
one to down the destination server (this is why it should be a dev/staging server)
one to copy the built NSF to the destination file system path
one to start up the destination server
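On Windows, those three scripts could be as simple as the following sketch (the service name, file paths and NSF name are assumptions about your environment, not facts from it):

rem 1. down the destination (dev/staging) server
net stop "Lotus Domino Server (DominoData)"

rem 2. copy the built NSF into the server's data directory
copy /Y C:\builds\sample.nsf C:\Domino\Data\sample.nsf

rem 3. start the destination server back up
net start "Lotus Domino Server (DominoData)"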
If you have a design task set to run at a certain interval, pointed at the staging server for any changes, you could conceivably pull from that at whatever interval suits you; nightly, etc. I hope the perspective helps.
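For what it's worth, the design task can also be kicked off on demand from the Domino server console rather than waiting for its scheduled run (this is the standard console command; its exact switches vary by version):

load design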

retrieve lost code after merge went wrong

So we made a rookie mistake. One of our project team members had forgotten to commit for a couple of weeks (some of which were vacation), but when he did commit, he did something wrong: most of the code he wrote was overwritten with what was on the server after he tried to resolve all the conflicts automatically.
So is there any way to get the code he used to have on his PC back? A lot of work has been lost, and we can't really afford to redo it all.
Just to clarify: the code that is lost is not on the server; it was his uncommitted changes on his client machine.
We are using Team Foundation Server and Visual Studio.
Take a look at folder:
C:\Users\%USERNAME%\AppData\Local\Temp\TFSTemp\
In my case, searching by method name, I was able to recover from a bad merge =)
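A quick way to search that folder from the command line (a sketch; the method name is a placeholder for one you know was in the lost code):

findstr /s /i /m "MyLostMethodName" "C:\Users\%USERNAME%\AppData\Local\Temp\TFSTemp\*"

/s searches subdirectories, /i ignores case, and /m prints only the names of the matching files.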
Nope. Code that never made it into source control is lost. This is one of the biggest selling points of distributed version control like Git: you can commit locally and often, without having to push anything to the server.
If you're using Windows 7 or similar, check for previous versions of the file on his/her computer: right-click the file, and the option should be there.
If you have not compiled after doing the merge, you can use dotPeek by JetBrains to decompile the assembly and get your code back.

Remote PC, Office PC connecting to TFS

I'm not sure how to ask this question so let me explain my situation...
I generally work remotely but travel into the office once or twice a week. When I'm working remotely I use a VPN to gain access to everything that I would have as if I were in the office. When I'm writing code at home I'll grab the latest version of the code I'm working on from TFS and use the local workspace on the home PC. However, when I'm at the office I have no way of accessing that code unless I check it in from the home PC, and I'd rather not check in half-written code. What is the best possible way to have both sets of code available on both PCs? I've read about remote workspaces but I'm not sure how to set anything like that up.
Any help would be appreciated.
Simplest answer is to shelve your changes. That way they get stored on the server but don't get committed to the code base; you can then unshelve and carry on where you left off. Plus this means you'll be working with code on your local machine, negating any issues with the VPN connection. You can also share shelvesets with other members of your team.
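From a Visual Studio developer command prompt, that looks like this (a sketch; the shelveset name is a placeholder):

rem on the home PC: store all pending changes on the server
tf shelve "WIP-HomePC"
rem on the office PC: pull the same changes back down
tf unshelve "WIP-HomePC"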

Version control on shared web host with Bazaar

I have a project I am going to begin co-developing on one of my web servers. Due to the nature of this kind of thing I'd like to have some version control going on. I've been searching all day for something that fits my needs and Bazaar seems the way to go, but I cannot figure out how to configure it.
My web host is Linux, without SSH (or SFTP as far as I can tell). I've read that you can use Bazaar in this situation to make a "dumb" server, but I can't for the life of me figure out how to configure, or find a guide. Everything out there requires either SSH/CLI access (both of which I don't have) or are too vague to follow. I am using the Windows GUI for Bazaar as well.
Can anyone either point me to a guide/instructions on how to do it, or post one here?
Edit Since Original Post
I have been trying several things since my original post. It might be that I am misunderstanding how Bazaar is meant to work. What I want is to have my PHP files etc. on my web host (to which I do not have SSH access) so that I and my co-developers can edit and test files without overwriting each other.
I initially tried to "start a new project" on my server via "ftp://user:pass@server" and it says that it is successful. Then it prompts with an "Unable to open location" error saying "C:/ftp:/user:pass@server is not a branch, checkout, or repository.
Do you want to open it as a virtual repository, searching for nested locations?"
When I hit yes, it gives me an error "Unable to change to C:/ftp:/user:pass@server - closing page."
If I do the same thing with the "Open an existing location" option, it gives me the same error, except afterwards the Bazaar GUI hangs with "Not Responding" and needs to be killed.
Either way, nothing is created that I can then interact with in Bazaar. If I create a local project and then push, it all seems to work. However, if I try to commit changes so I can push them, I get an error: "Bazaar has encountered an environmental error. Please report a bug if this is not the result of a local problem at https://bugs.launchpad.net/qbzr/+filebug including this traceback, and a description of what you were doing when the error occurred." The "show details" says "bzr: ERROR: Unable to determine your name.
Please, set your name with the 'whoami' command.
E.g. bzr whoami "Your Name <name@example.com>"
Before you can commit revisions, you need to set a name and email address. These are important metadata in a commit. You can set these in the Settings | Configuration | User Configuration menu. On the General tab enter the Name and E-mail fields. It's recommended to use real data in public projects, so that others who view your project can contact you in case they have questions. But it doesn't have to be real. This is a one-time initial setup.
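The command-line equivalent is a single command (the name and address here are placeholders):

bzr whoami "Your Name <your.name@example.com>"
bzr whoami

Running bzr whoami with no arguments prints the currently configured identity, so you can verify the setting took effect.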
As a next step, I would do a test to make sure you can really use your server over FTP, as a sanity check:
Commit a few revisions in your local repository, just so that you have something to push. It could be anything, doesn't matter.
Try a push to a URL in the format: ftp://user:pass@server/absolute/path/to/somewhere. In the example in your post you wrote ftp://user:pass@server, but it's important to have an absolute path there, like in this example.
If for some reason the push doesn't work well using the GUI, try it on the command line, for example:
bzr push ftp://user:pass@server/absolute/path/to/somewhere
This should really give an error message we can debug. In that case, paste the output into your question.
UPDATE
You said in comments that something was wrong with your name+email setting, and changing that resolved the problem. It would be nice to know what exactly was the problem there.
About bzr push to an FTP server: I double-checked, and this will never create the files on the server. From bzr push -h:
The target branch will not have its working tree populated because this
is both expensive, and is not supported on remote file systems.
Some smart servers or protocols may put the working tree in place in the future.
Over FTP it's a "dumb" server, so it definitely won't put the files there, only the .bzr directory, which is the repository and branch data. If you want to have the files there, I'm afraid you have to copy manually. There is a related bzr-push-and-update plugin, but it requires ssh access, which is not your case.
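If you do want the plain files on the host, a workable fallback (a sketch under your constraints; the export path is a placeholder) is to export a clean copy of the branch's tree locally and upload it with any FTP client:

bzr export ../site-snapshot

Then upload ../site-snapshot to the web root with your FTP client. bzr export writes the latest committed tree without the .bzr metadata, so what you upload is exactly what the branch contains.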

XPages build process and replication

I'm wondering if someone can enlighten me a little bit on the XPages build process and how it works with other replica copies of a database. Much of the advice I've seen posted regarding working with Domino Designer indicates (logically) that you'll get a much faster response working on local copies and then replicating those to the server.
I'll usually save my changes locally, build manually, and replicate to the server, and most of the time that seems to work fine. However, on some occasions I've found that when I view the work I've done in the browser on the server copy, it doesn't seem to have updated... in fact, in a couple of scary incidents, it displayed a version from several weeks ago (where is it even getting that from??). This isn't a browser caching issue, and I've opened the design elements (XPages, custom controls) on the server copy and verified that the changes ARE there. I end up having to perform a Clean (not just a build) on the server copy of the application, and then it displays as expected.
This seems like a foolish question, but you shouldn't have to perform a build on each replica copy, correct? Any thoughts as to what might be the issue here? There is another developer involved, and he works directly on the server since he's in the same location, but we are rarely working at the same time, and never on the same elements. We are not using source control at this time.
We have seen similar behavior ourselves.
In our case, we do development on a server, clean / build the project, and then copy that database as a template to a deployment server. From there, we update the design in the production database.
We have noticed that the build process sometimes fails, especially when working over slower links, so we always repeat the clean / build / refresh process a couple of times, and we try to do it while in the office, with a fast connection between the workstations and the server.
We haven't experienced build problems lately, so this repeating of the build process obviously helps.
We have also seen that replicating a design between local and server copies sometimes causes build-related problems, which could explain the problems you are seeing. We have stopped using replication because of that and now always work on the server copy directly.
I don't think that your not-using of source control software has anything to do with it.
I usually make all changes inside a local template, then perform "Project \ Clean", then update the design in the server database. It works in 99% of cases. If not, I perform "Project \ Clean" once again. I hate this, but it looks like it's the only way to get consistent code on production.
