Setting up environment for project - web

I have a WordPress site. The goal is to create a web app, somehow merge the code into the existing WordPress code base, and give developers access to the Git repository so they can push their changes to the code base. I'm not sure of the steps involved in achieving this goal, but I'd appreciate input from anyone who has experience with this.
I'm sure there is a much easier way to do this, but I have tried downloading the WP theme from my hosting cPanel and then uploading that folder to GitHub. Does anyone have experience with Git integrations for Bluehost?
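For reference, the manual approach described above roughly amounts to the following (the theme folder name and repository URL here are placeholders):
# Initialize a repository inside the downloaded theme folder
cd wp-content/themes/my-theme
git init
git add .
git commit -m "Initial import of theme from cPanel download"
# Point at an empty GitHub repository and push (URL is a placeholder)
git remote add origin git@github.com:your-org/my-theme.git
git push -u origin master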

Related

Change Magento theme

I want to change the theme in Magento 2. The problem is that the client doesn't want the website to go down for the three days I'd need to finish the new design and add the client's images and information. If I build the site on a subdomain, how can I move it to the root folder and then connect the database of the old site without losing any data or doing something wrong?
If you can suggest something else, I'm all ears.
I haven't tried anything yet. I'm new to Magento, so I can't figure out what to do.
Thank you
I suggest you set up Git on your project.
Here is an explanation of how to set up Git in an existing project:
Add Magento Existing project to Github repository
Once the project is in a repo, clone it to your local environment, take a database dump from the server, and set up the project locally.
Finally, you can work and test your code independently without affecting production, and when you're done, Git makes it easy to transfer your changes to the production instance.
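A rough sketch of that workflow (paths, URLs and credentials are all placeholders):
# On the server: put the existing Magento code under version control
cd /var/www/magento
git init
git add .
git commit -m "Import existing Magento project"
git remote add origin git@github.com:your-org/magento-site.git
git push -u origin master
# Dump the production database
mysqldump -u dbuser -p magento_db > magento_db.sql
# On your local machine: clone the repo and load the dump
git clone git@github.com:your-org/magento-site.git
mysql -u root -p magento_local < magento_db.sql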

Deploy files from branch to folder

My company has recently moved to a self-hosted GitLab instance, and now I'm trying to wrap my head around it to see how we could use its CI/CD features for our cases. For instance:
We have a PHP-based front-end project consisting of several PHP, CSS and JS files that, as of now, are copied to our Apache2 folder in the same structure we use for development.
Is it possible to configure GitLab to do this for us by implementing the following algorithm:
we make a DEPLOY branch in our repository
when we’re done with changes in other branches, we merge those branches into DEPLOY
GitLab CI/CD detects the new activity and automatically puts the latest version of the files from this branch into our Apache2 directory (connected as a remote folder)
Any advice and guidance is appreciated. I'm currently lost in the many manuals describing Docker-based deployment and the like, which we don't yet use in our project.
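For what it's worth, a minimal .gitlab-ci.yml sketch of the algorithm above; the /var/www/html path and the availability of rsync on the runner are assumptions, and the DEPLOY branch name is taken from the question:
# .gitlab-ci.yml -- minimal sketch; path and runner setup are assumptions
deploy:
  stage: deploy
  script:
    # Copy the repository contents into the mounted Apache2 directory,
    # preserving the same structure used in development
    - rsync -av --delete --exclude='.git' ./ /var/www/html/
  only:
    - DEPLOY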

Setup Continuous Deployment with Dropbox on Windows Azure Website

Where I work, our marketing team is looking for a "quick and easy" method of periodically updating some files on a website of ours. I opened my mouth and said "We can use Azure Websites with Dropbox!". It all works fine, except that with Dropbox, files only deploy if I log into the Azure Portal and click Sync. Needless to say, this is a deal breaker, because the users want to save a file and have everything appear magically.
Is there a way to set up continuous deployment via Dropbox on Azure? I don't mind setting up a job to run every 15 minutes to perform a file upload if needed, but I'd prefer to avoid that if possible.
Thanks In Advance
Currently we don't support continuous sync with Dropbox. The challenge is the noise and the reliability of the site given those changes. Imagine users naturally modifying files one by one while Dropbox syncs them one at a time; you can get into a situation where your site is in a transient bad state.
This is not currently possible using the Dropbox integration in Azure Websites. The best option for this is the local Git integration, where Azure provides you a remote Git location that you can push to, which triggers an update.
So that gives you the deployment, but not the Dropbox behavior you want, as someone would still need to commit and push.
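A minimal sketch of that push-to-deploy flow (the remote URL is a placeholder; Azure shows you the real one in the portal):
# Add the remote Git URL that Azure provides for your site
git remote add azure https://yoursite.scm.azurewebsites.net/yoursite.git
# Pushing triggers a deployment
git push azure master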
To get that, you could look into implementing a Git hook to mimic the behavior, auto-committing and pushing when a file changes.
Something like this would give you that behavior, but you'd need to translate to a server-based model.
Git Repo Auto-commit and Push
Alternatively, you can host the site in GitHub or Visual Studio Online, and I believe you get that hook automatically.
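As a rough illustration of the auto-commit idea, assuming a Linux box with inotify-tools installed and the azure remote from above (a real version would need debouncing to avoid the transient-state problem mentioned earlier):
#!/bin/sh
# Watch the working tree and auto-commit/push on any change
while inotifywait -r -e modify,create,delete --exclude '\.git' .; do
    git add -A
    git commit -m "Auto-commit: file change detected"
    git push azure master
done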

How to deploy a Node app to Azure if the Node app is buried in the repo directory

I am trying to deploy a project to Azure via the "remote git repo" method, but in my repo the actual Node application is a few directories in. Thus, Azure does not do anything when the repo is pushed.
Is there some way to configure the Azure website to run from a directory buried in the repo?
There's actually a super easy way; this scenario was anticipated by the Azure team.
You simply create a text file at the root of your project called .deployment, containing the following:
[config]
project = mysubfolder
When you either Git deploy or use CI to deploy from source control, the entire repository is deployed, but the .deployment file tells Kudu (that's the engine that handles your website management) where the actual website (or node project) is.
You can find more info here.
Also, check out this post where I mention an alternative strategy for project repos in case that helps.
This isn't so much an Azure question as a Git question. What you want to know is whether there is a way to clone only a sub-directory or branch of a project. From researching this just a couple of weeks ago, the best I could find were solutions for doing a sparse clone, which does allow you to restrict the files cloned (almost there) but does so within the entire project's directory structure (denied).
A couple of related SO questions & answers which you might find helpful:
How do I clone a subdirectory only of a Git repository?
(Short answer 'no')
Checkout subdirectories in Git?
(Answer describes the sparse checkout ability).
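For reference, a minimal sketch of the sparse checkout approach described there (repository URL and folder name are placeholders); note that it restricts the working tree but still clones the full history, which is the limitation mentioned above:
# Classic sparse checkout (newer Git also has a 'git sparse-checkout' command)
git clone --no-checkout https://github.com/your-org/big-repo.git
cd big-repo
git config core.sparseCheckout true
echo "mysubfolder/" >> .git/info/sparse-checkout
git checkout master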
I would love to see if a git guru might have a better answer based on updates to git, etc.
Good luck with it - I ended up just putting my node app in its own Git project as it seemed the most straightforward approach overall, though not ideal.

Using an hg repository as a web site

This is somewhat related to my security question here. Is it a bad idea to use an hg / mercurial repository for a live website? If so, why?
Furthermore, we have dev, test and production installations of our website, like dev.example.com, test.example.com and www.example.com. If it's a bad idea to use a repository for a live/production website, would it be OK to use an hg repository for the dev and test sites?
I'm also concerned about ease of deployment. We have technical and less technical co-workers who will be working with the site. The technical people (software engineers) won't have any problem working with the command line or TortoiseHG. I'm more concerned about the less technical people (web designers). They won't be comfortable working on the command line, and may even find TortoiseHG daunting. These co-workers mostly upload .css files and images to the server. I'd like for these files (at least the .css files) to be under version control, but I want this to be as transparent as possible for the non technical team members.
What's the best way to achieve this?
Edit:
Our 'site' is actually a multi-site CMS setup with a main repository and several subrepositories. Mock-up of the repository structure:
/root [main repository containing core files and subrepositories]
/modules [modules subrepository]
/sites/global [subrepository for global .css and .php files]
/sites/site1 [site1 subrepository]
...
/sites/siteN [siteN subrepository]
Software engineers would work in the root, modules and sites/global repositories. Less technical people (web designers) would work only in the site1 ... siteN subrepositories.
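For illustration, the subrepositories in a layout like this would be declared in an .hgsub file in the main repository (the source URLs are placeholders):
# .hgsub in the root repository: maps checkout paths to subrepo sources
modules = https://hg.example.com/modules
sites/global = https://hg.example.com/sites-global
sites/site1 = https://hg.example.com/site1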
Yes, it is a bad idea.
Do not have your repository be your website. It means that things checked in but not working will immediately be available, and accidental check-ins (they happen) will be reflected live as well (e.g. documents that don't belong there).
I actually address this concept (source control as deployment) with a tool I've written; a few other companies are addressing this topic now as well, so you'll see it more. Mine is for SVN at the moment, so it's not particularly relevant here; I mention it only to show that I've considered this before. (Not with a repository as the site, though, but a working copy; in that scenario the answer is the same: it's better to have a non-versioned "free" area as the website directory and automate, via a user action, the copying of the versioned data into that directory.)
Many folks keep their sites in repositories, and so long as you don't have people live-editing the live site, you're fine. Have a staging/dev area where your non-revision-control folks make their changes, and then have someone more RCS-friendly do the commit-pull-merge-push cycle periodically.
So long as it's the conscious action of a judging human doing the staging-area -> production-repo push, you're fine. You can even put a hook into the production clone that automatically does an 'hg update' of the working directory within that production clone, so that 'push' is all it takes to deploy.
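A minimal sketch of that hook, placed in the .hg/hgrc of the production clone:
# .hg/hgrc in the production clone: update the working directory on every push
[hooks]
changegroup = hg update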
That said, I think you're underestimating either your web team or TortoiseHg; they can get this.
Personally (I'm a team of one), I quite like the idea of using source control as a live website, more so with hg than with svn.
The way I see it, you can update an entire site (add/remove files) with a single command,
which is much easier than FTP/SSH this, delete that, etc.
If you are using Apache (and probably IIS as well), you can make a simple .htaccess rule that blocks all access to the .hg files (or .svn if you are using svn).
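A minimal sketch of such a rule, assuming mod_alias is available (adjust the pattern for .svn or .git as needed):
# .htaccess: return 404 for anything under the .hg directory
RedirectMatch 404 /\.hg(/.*)?$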
My preferred structure is:
development site on the local machine, running directly out of a repository (no security is really required here; do what you like, commit as required)
staging/test machine on a separate box or VM running a recent copy of the live database
(I have a script to push committed changes to the staging server and run tests)
live machine
(open an SSH connection, push changes to the live server, test again; this can all be scripted reasonably easily, google for examples)
Because of the push/pull nature of hg, you can commit changes and test without the danger of pushing a broken build to the live website. Like you say in your comments, only specific people should have permission to push a version to the live site. (If it fails, you should easily be able to revert to the previous version via source control.)
Why not have a repo also be an active web server (for the dev or test/QA environments, anyway)?
Here's what I am trying to implement:
Developers have local test environments in which they can build and test their code
Developers make a clone of the dev environment on their local dev machine
Developers commit as often as they want to their local repo
When a chunk of work is done and tested, the developer pushes the working changesets to the dev repo
Changes would be merged and tested on Dev, then pushed to Test/QA, and so on.
By the way, we're using Mercurial. I believe this model would only work with a distributed source control tool.
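A sketch of that daily cycle with Mercurial (host names and repo paths are placeholders):
# Clone the dev repo to the local machine
hg clone https://dev.example.com/repo myproject
cd myproject
# ...edit, then commit locally as often as you like
hg commit -m "Work in progress"
# When a chunk of work is done and tested, push the changesets to the dev repo
hg push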
