I cloned a couple of repositories from GitHub, but now every time I make a new file, my Source Control tracks it and wants to send it to GitHub. I only want it to track the files I cloned from GitHub.
Also, every time I load a Python file, I have to choose a Python Interpreter. How do I get it to choose automatically? I only have one Python anyway.
You can add them to a .gitignore file.
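As a minimal sketch, a .gitignore at the root of the cloned repository might look like this (the file and folder names are only placeholders for whatever new files you create):

    # .gitignore -- entries below are only examples
    # ignore a specific file
    scratch.py
    # ignore an entire folder
    notes/
    # ignore everything matching a pattern
    *.log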
If you'd like to set up a default interpreter for your applications, you can add an entry for python.pythonPath manually inside your User Settings.
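For example, the User Settings entry could look like the sketch below; the path is only a placeholder for wherever your Python actually lives (on newer versions of the Python extension the setting is named python.defaultInterpreterPath instead, but it works the same way):

    // settings.json (User Settings) -- the path is just an example;
    // point it at your actual interpreter, e.g. the output of `which python3`
    {
        "python.pythonPath": "/usr/bin/python3"
    }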
So, to give some context: recently I had to delete the whole project from my personal computer for some reasons. Now I want to download the whole project back onto my PC but don't know how. I assumed that the Clone button would do that (since that's what it does in Git), but that seems not to be the case. I then tried to force-update all the files, thinking it would download the missing ones. It kind of works, I think, but it takes awfully long: around 16 hours to update a folder that was about 20 GB in size.
The project in question is a UE4 project, in case that's important.
Now for the question: how can I, most easily, download the whole project from Perforce onto my PC? Thank you for your help in advance!
In the future, you can simplify this by using Perforce's "remove from workspace" option or the p4 sync #none command to delete the project from your PC (a short sketch follows the list below). If you use Perforce's commands to clean up the workspace, it will:
- not delete anything that isn't backed up to Perforce (i.e. files you didn't add to Perforce will be safe)
- keep track of what you deleted (so the next time you do a normal "sync" it will just put it all back, without the force flag)
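A minimal sketch of that workflow from the command line, assuming your client maps the project under //depot/MyProject (the depot path is just an example):

    # remove the project's files from this workspace
    # (the server keeps everything and remembers what you had synced)
    p4 sync //depot/MyProject/...#none

    # later, a normal sync puts it all back -- no force flag needed
    p4 sync //depot/MyProject/...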
In the situation you found yourself in, an option apart from "force download" is the p4 clean command, which will reset your workspace's state to whatever you last synced to. Note that this will not necessarily preserve any local changes, but if you delete the entire workspace root this isn't a concern.
The time it takes to re-sync a folder is largely a function of your network speed to the Perforce server, but in some cases it can be improved by parallelization (I believe P4V will do this automatically, and it's easy to enable via the command line, but if you're syncing via the UE4 plugin this may not be the case); see https://community.perforce.com/s/article/9064
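For the situation you were already in, a hedged command-line sketch of the two alternatives above (depot path and thread count are just examples):

    # reset the workspace to the state of the last sync:
    # reverts edits, restores deleted files, removes files Perforce doesn't know about
    p4 clean //depot/MyProject/...

    # or force-sync with parallel transfers
    # (the server has to allow parallel sync via its net.parallel.max setting)
    p4 sync -f --parallel=threads=4 //depot/MyProject/...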
I am working on automating the markdown spell check for all the documents on my website, which involves multiple git repos. I have a .spelling file that contains all the words to be excluded from the documents. I would like to keep it as one file that stays updated across the entire website. I can get it to work for one repo. I looked into the npm package method. Is there a way to configure package.json to share this file across many repos? Or is there a better way to do it without npm? Thanks!
Make a separate spell-check repository with the .spelling file and script in it, then include it as a submodule in each of your docs repos. You can then reference it from each repository separately, and pull its latest updates into each one.
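A rough sketch of the submodule approach (the URL and folder name are placeholders, and depending on your checker you may need to symlink spelling/.spelling into the repo root):

    # inside one of your docs repos: add the shared spelling repo as a submodule
    git submodule add https://github.com/your-org/spelling-common.git spelling

    # later, pull its latest .spelling into this repo and record the new version
    git submodule update --remote spelling
    git add spelling
    git commit -m "Update shared .spelling"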
This could be cumbersome if you have a large number of docs repos, so another alternative is to centralize the spell-check script: make a separate repository for it and add a configuration file telling the script which GitHub repositories to spellcheck. This way, you can selectively apply the spell-check process to any number of repositories in your organization.
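One possible shape for that centralized variant, as a sketch only: repos.txt is a hypothetical config file listing one clone URL per line, and the last step assumes the mdspell CLI from the markdown-spellcheck npm package is installed:

    #!/bin/sh
    # loop over every repository listed in the (hypothetical) repos.txt
    while read -r url; do
        name=$(basename "$url" .git)
        git clone --depth 1 "$url" "$name"
        cp .spelling "$name/.spelling"   # reuse the single shared .spelling
        # run the spell check against all markdown files in that repo
        (cd "$name" && mdspell --report '**/*.md')
    done < repos.txt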
I created a new repository in SVN with svnadmin create /myrepo on my server. With my client I did a checkout, added new files, and later committed. If I make a checkout from another computer I get the recently added files, which is perfect, but in the folder /myrepo on my server there are still no files: none of the new files that were added from my client are visible there. I know Subversion implements many algorithms to handle version control, so my question is: should I be able to see all the new files added to /myrepo on my server, without making a checkout with a client or something like that?
I want to know where my files are saved on my server.
Thanks
No. The files are stored in the repository you created, but in a specialized database. If you go to myrepo and look in the db folder, you'll see that there are revision files stored there. Those files contain the structure and data of the repository at specific instances in time. The Subversion book has some information on the structure. You can also look at the documentation in the actual Subversion repository about the structure used to store the data.
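If you just want to see what the repository contains without creating a working copy, you can ask the repository directly on the server; a quick sketch (the path is whatever you passed to svnadmin create):

    # print the tree stored in the repository's latest revision
    svnlook tree /myrepo

    # or list it recursively over the file:// protocol
    svn ls -R file:///myrepo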
I am searching for a solution to automatically merge files on upload.
To be more precise, we are working in small groups doing web development, working remotely on the same folder on our Debian server. The problem, of course, is that we often have situations where up to 3 people need to write to the same PHP file; at the moment we are trying to coordinate when each person is allowed to work on it.
So my idea was to find a Subversion-like solution that just merges every time we save the file via sshfs.
You should use version control. Here are some options. Which one you should use depends on a variety of factors.
- Mercurial
- Git
- Subversion
You can then have the server your site is on pull from the repository.
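As one hedged sketch using Git (paths, file names, and branch setup are only examples), each person works in their own clone and the server pulls the merged result instead of everyone editing the same folder over sshfs:

    # each developer, in their own clone of the project
    git pull                          # bring in teammates' changes; merges happen here
    git add index.php                 # stage your own edits
    git commit -m "Describe the change"
    git push

    # on the Debian server, in a clone that the web server serves
    cd /var/www/site
    git pull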
I have been going through documentation and such and have SVN working, but I want to put it on an existing directory. I imported that directory, so do I rename/delete the non-SVN directory and then check out from SVN to the non-SVN directory's location? I am just trying to understand how to get it to start publishing to our website URL.
If so, is there any way to keep the current non-SVN directory and make it SVN-controlled, rather than import and overwrite?
Thanks, I am trying to understand SVN, but find a lot of the tutorials and such on the web to be confusing.
Yes, you have it exactly. Once your code has been added to the repository, you can get rid of or rename your original code directory. Then checkout the project from the repository into the same location as your previous code and continue working from there.
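A minimal sketch of that sequence (the repository URL and paths are only examples):

    # one-time import of the existing code
    svn import /path/to/mycode http://svn.example.com/repos/myproject/trunk -m "Initial import"

    # move the original aside and check out a working copy in its place
    mv /path/to/mycode /path/to/mycode.bak
    svn checkout http://svn.example.com/repos/myproject/trunk /path/to/mycode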
UPDATE
To make it so that your website is updated from the repository, you actually need two working directories, and a repository.
Repository: The repository stores the code and changesets, but isn't directly accessible as a file system. Keep a backup!
Working directory 1: You develop and test your code from a working directory checked out from the repository. Commit changes back to the repository.
Working directory 2: Rename the code directory on your webserver. Checkout a copy of the code to your web server in its place. Technically it is now a working directory, since it contains the .svn metadata directories, though you won't usually make changes here.
Make changes to your code from your development working directory (1) and commit them back to the repository. When you are satisfied that they are working correctly and have been properly tested, do svn update on the web server's code copy (2) (or, if you're using TortoiseSVN on the web server, do an update). This will synchronize the server code with the current development version.
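The day-to-day cycle then looks something like this (the web server path is just an example):

    # in your development working copy (1)
    svn commit -m "Describe the change"

    # on the web server, in the checked-out copy (2)
    svn update /var/www/site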
Subversion will not automatically push updates to your web server. You will need to pull them in with an update when you need to. It is possible to use what's called a "post-commit hook" to cause Subversion to execute a script when commits are made, and that script could update or export code to your production web server. However, you would need to write the script, and it's kind of an advanced usage of Subversion. I would recommend trying out the method I described with a working copy on the web server to get accustomed to the workflow before trying anything more complicated.
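For reference, a post-commit hook is just an executable script named post-commit in the repository's hooks directory. A bare-bones sketch might look like the following; the working-copy path is a placeholder, and in practice you'd want error handling and a user with the right permissions:

    #!/bin/sh
    # <repository>/hooks/post-commit
    # Subversion passes the repository path and new revision number as arguments.
    REPOS="$1"
    REV="$2"

    # refresh the web server's working copy after every commit
    svn update --quiet /var/www/site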
Addendum: If you really want to do this (and I don't recommend it unless you test thoroughly), a very easy method would be to schedule a cron job that runs svn update every couple of hours (or minutes) on your production site.
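If you go the cron route, the crontab entry could be as simple as this sketch (every two hours; the path is an example):

    # m h dom mon dow  command
    0 */2 * * * svn update --quiet /var/www/site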
Don't forget that if you do happen to modify your code directly on the web server, you must commit it back to the repository from there, and do an update on your development working copy.