Not all Mercurial changesets are visible using RhodeCode - Linux

Using RhodeCode 1.5.4. Our remote Mercurial repositories live on Linux Red Hat servers. We typically use Tortoise to clone these repos locally via RhodeCode to a Windows environment, do work locally, then push back to the remote repositories. This all works fine. However, I am also the owner of the remote repositories, and as such, I sometimes make changes to them via scripts executed directly on the remote server. These changes are not reflected in RhodeCode. I have to pull the repo locally, make a change, then push to the remote repo before changes made directly on the server show up in RhodeCode.

This happens because of the caching system that RhodeCode uses. You can use the invalidate-cache API call (https://docs.rhodecode.com/RhodeCode-Enterprise/api/api.html#invalidate-cache) to trigger cache invalidation manually, or simply go to repository settings > caches > invalidate cache to trigger it from the UI.
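A rough sketch of that API call with curl (the server URL, token, and repository name below are placeholders; check the linked docs for the exact endpoint and parameters of your RhodeCode version):

curl -X POST https://your.rhodecode.server/_admin/api \
  -H "Content-Type: application/json" \
  -d '{"id": 1, "auth_token": "YOUR_API_TOKEN", "method": "invalidate_cache", "args": {"repoid": "your-repo-name"}}'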

Related

Issue with authentication using GitKraken and self-hosted GitLab

So, I have a self-hosted GitLab instance; I use GitKraken Pro and Git for Windows to enable the LFS component.
I'm having an issue where, when I push to my repository, GitKraken asks for authentication (username and password). However, I have no such issue when I pull the data; it just does the job. The same goes for LFS: I can pull but not push the data without a password.
Here is the setup:
I have not been successful in setting up SSH keys at this point, as GitKraken refuses to use them, and I'm still trying to work that issue out.
I have checked the "remember me" option in the pop-up, and that doesn't seem to work.
I have set up an access token, and that all works.
GitLab CE is installed on a Linux OS, and I am connecting to it from four Windows PCs using GitKraken (all on Pro licenses).
GitLab CE is updated to the latest version, and the same goes for GitKraken.
Obviously, the preferred method of connecting is SSH, but it refuses to work. I have tried GitKraken's version of SSH and manually creating and installing the keys using the command line.
When I enter the username/password to push the data, it only works for that single push, even if I don't restart GitLab. Every single time I need to enter a username and password, and this is tiresome.
What I'm asking is: how can I fix this? This is my first full-fledged self-hosted Git server, and I've learned things on the fly. I do have normal Git experience, but the setup for self-hosting is a lot more involved compared to just using Git itself.
The reason I am self-hosting is cost. My repos are gigabytes in size and I have many, so I need my own setup to avoid those kinds of costs.
How can this be fixed?
Double-check whether your GitKraken remote is actually using an SSH URL (git@yourServer:user/repo), as a username/password should only be needed for an HTTPS URL (https://yourServer/user/repo).
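For example, you can inspect the remote and, once your key is accepted, switch it to SSH (the URL below is a placeholder for your repository):

git remote -v                                            # shows whether origin uses https:// or git@
git remote set-url origin git@yourServer:user/repo.git   # switch origin to the SSH URL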
Check that the SSH port is reachable from your Windows machine:
curl -v telnet://yourServer:22
If it does not connect, double-check your Omnibus installation on Linux, making sure the SSH daemon is started and active, using the right sshd_config.
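On the Linux box, assuming a systemd-based distribution, something along these lines confirms the daemon and its port (the service may be called ssh rather than sshd on Debian/Ubuntu):

sudo systemctl status sshd              # is the SSH daemon running?
grep -i '^port' /etc/ssh/sshd_config    # which port is configured (nothing printed means the default, 22)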
The OP Maize adds in the comments:
A complete reinstall and removal of settings in GitKraken solved the issue.
The previous uninstall seems to have kept the settings, so when I removed those, it sorted itself out.

Updating local repo every time the original repo is changed

I have a project running on a remote server; I cloned it onto the server to run it. The problem is that every time I make a change to the code via Git, I have to go into the remote server, delete the folder, and clone it once again. How can the server automatically detect a change in the repo and update itself?
You're looking for what's called continuous deployment/delivery.
Since you're using GitHub, you may want to look at GitHub Actions. This is one of many mechanisms that are available.
You can configure a workflow to trigger various actions (including building, testing, and deploying your code [to the Digital Ocean droplet]) every time you push a commit.
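In the simplest setup, the deploy step of such a workflow just runs something like the following over SSH against the droplet (user, host, path, and service name are placeholders; adapt to however your app is started):

ssh deploy@your-droplet 'cd /srv/your-project && git pull origin main && sudo systemctl restart your-app'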

remote deploy scripts for nodejs?

I am looking for a way to easily deploy a nodejs app via a command line script.
I found one solution:
https://github.com/Skookum/nimbus
I also heard that the whole thing can be done with git and post commit hooks.
What would people recommend?
Edit: I am deploying it to my own box where I have root.
You have two options for a self-hosted setup.
Do it all yourself
This entails Git post-receive hooks. In short, you set up your production box to host a copy of your repository; on your local machine you set up a remote, let's call the remote production.
Now when you run git push production master on your local machine, the updates are sent and the server executes the post-receive hook, which runs whatever you wish.
Actions you may want are: checking out/writing the data in the repo to files/folders (the Git repo on the server is stored as a bare repo); restarting your web server; notifying you that there's been a deployment; etc.
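A minimal sketch of that setup (all paths, branch, and service names are placeholders):

# on the server: a bare repo plus a hook that writes the files to the web root
git init --bare /srv/git/myapp.git
mkdir -p /var/www/myapp
cat > /srv/git/myapp.git/hooks/post-receive <<'EOF'
#!/bin/sh
GIT_WORK_TREE=/var/www/myapp git checkout -f master
# restart your app/webserver here if needed, e.g. sudo systemctl restart myapp
EOF
chmod +x /srv/git/myapp.git/hooks/post-receive

# on your local machine: add the remote and deploy with a push
git remote add production ssh://you@yourserver/srv/git/myapp.git
git push production master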
I'd suggest reading up on it at http://git-scm.com/book/en/Customizing-Git-Git-Hooks and taking a look at a few tutorials; this one (http://ryanflorence.com/deploying-websites-with-a-tiny-git-hook/) looks pretty legit.
Use a service to manage it for you. http://www.deployhq.com/ is the only one that springs to mind, but I'm sure there are others.
Good Luck and Happy Hacking :)
There is a tool called shipit.js (https://github.com/shipitjs/shipit) which allows you to perform different deployment tasks like:
moving code from the repo to the server
restarting server
installing node_modules
etc.
You create a config file and then run npx shipit deploy; all the tasks you specify are performed. In case of failure, it has a rollback mechanism.
There is a nice screencast about it: https://youtu.be/8PpBySjkWEM.

Linux to Windows repository pull and push

Our entire code base at work is housed in an IIS Windows environment. My task was to copy the code to our new Linux Ubuntu server and go through it to make the changes necessary to get it to run on the Linux box. It took a couple of months, but it works. In the meantime, code updates were made to the code base on the production Windows server by another developer. Now I have the task of pushing the changes to the Linux box so we can pull the trigger and run it live in the new environment.
PROBLEM:
When I performed a git push origin master, I was thrown an error stating that I must perform a pull first. Running git status says the branches have diverged, with 11 and 3 commits each, respectively. The problem is that over the course of two months I can't remember all the changes made, and something could come crashing down in the Windows environment, which can't happen even for a short time. I just need some advice.
I was wondering if it's possible to create a clone of origin/master, then pull the changes from prod into my local copy, merge the files, and then push to the Linux box, since I can't push there now because of the needed pull.
When you do git pull, the changes from the remote's master branch are fetched into the remote-tracking branch origin/master and then merged into your local master. That remote-tracking branch is your "local clone of origin/master," so to speak. Inspect the state of your own master branch, and if everything looks right, then push your changes.
Run gitk --all before and after the pull, and you will see that the merge took place, but only on your system.
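If you would rather inspect before anything touches your branch, a slightly more cautious sequence (assuming the branch is master) is:

git fetch origin            # updates origin/master only; your local master is untouched
gitk --all                  # compare master and origin/master before merging
git merge origin/master     # the merge half of what git pull would have done
git push origin master      # publish the merged result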

Syncing website files between local and live servers using GIT?

Say I have two web servers, one local development and one live.
Under SVN I would check out the website files to my local web server's public_html directory and also to the live web server's public_html directory. I would then work on the files directly on the local server and commit any changes to the central repository.
When I'm ready for those changes to go live on the live server, I would SSH in and perform an SVN update.
Essentially I have two working copies, one on the live server and one locally, though other users may also have working copies on their local machines. But there will only ever be one working copy on the live server. The reason for this is so that we can just perform an SVN update on the live server every time we want changes to be published.
How can a similar workflow be accomplished using Git?
To model your current workflow almost exactly, do the following (a concrete command sketch follows these steps):
Set up a git repo.
Clone the repo on the server and locally.
Work locally
git push to the git repo
ssh to server
git pull.
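In commands (server names and paths are placeholders), that workflow looks roughly like this:

# one-time: create the shared bare repository on a central host
git init --bare /srv/git/site.git

# clone it into your local working copy and into the live web root
git clone ssh://you@gitserver/srv/git/site.git
git clone ssh://you@gitserver/srv/git/site.git /var/www/site   # on the live server

# day to day: work locally, commit, push, then update the live working copy
git commit -am "Update pages"
git push origin master
ssh you@liveserver 'cd /var/www/site && git pull'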
Another way to do it would be to set up a "production" branch in git, have a cron job that continually pulls this branch on the server, and then just merge and push to the "production" branch any time you want to publish your changes. Sounds like you need a more concrete branching strategy.
See: the git-flow branching model and the git-flow CLI tool.
Good luck! This is a very solvable problem with git.
You might find this useful: http://joemaller.com/990/a-web-focused-git-workflow/
In your local working copy:
git push ssh://you@yourserver/path/to/your/wc
will push the committed changes in your local version to yourserver.
Having a setup that triggers automatic pulling, as leonbloy and codemac suggested, may seem like a good idea at first, but it tends to be very fragile. I suggest a different alternative:
http://toroid.org/ams/git-website-howto

Resources