Can't install vim Tabular Plugin - vim

There are no installation instructions for the Tabular plugin. I tried both copying the files into the correct folders and putting it under ~/.vim/bundle to let pathogen deal with it; in both cases I get the following error message when I load up Vim (if it's of any concern, the message is repeated 6 times).
AddTabularPattern: Vim(runtime):E194: No alternate file name to substitute for '#': runtime autoload/tabular#ElementFormatPattern.vim
EDIT: some more information in case it helps diagnose the problem.
Here is how the files are laid out in my ~/.vim/bundle/godlygeek-tabular-b7b4d87 folder (note that I have not shown all files):
.vim/
├── [drwxrwxr-x] bundle
│   ├── [drwxrwxr-x] godlygeek-tabular-b7b4d87
│   │   ├── [drwxrwxr-x] after
│   │   │   └── [drwxrwxr-x] plugin
│   │   │       └── [-rw-rw-r--] TabularMaps.vim
│   │   ├── [drwxrwxr-x] autoload
│   │   │   └── [-rw-rw-r--] tabular.vim
│   │   ├── [drwxrwxr-x] doc
│   │   │   └── [-rw-rw-r--] Tabular.txt
│   │   └── [drwxrwxr-x] plugin
│   │       └── [-rw-rw-r--] Tabular.vim

Could you tell us a bit more about your setup? With a diagram if possible?
The AddTabularPattern command is called exactly 6 times from after/plugin/TabularMaps.vim and declared in plugin/Tabular.vim. I don't see why it would trigger the expansion of #, though.
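One quick thing worth checking (a suggestion only, the error message does not prove it): whether Tabular ended up installed twice, e.g. both copied into ~/.vim directly and left under the pathogen bundle directory, which could also explain the repeated messages:
# list every copy of the Tabular scripts that Vim might be sourcing
find ~/.vim -iname 'tabular*.vim'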

IIRC, Tabular comes with an after directory (which contains its own plugin directory), an autoload directory, a doc directory & a plugin directory.
So just copy the contents of those directories to their counterparts in $HOME/.vim/ (creating any directories that do not already exist) & you're good to go.
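For example, assuming the plugin was unpacked to the bundle path shown in the question, the manual copy could look roughly like this (a sketch, not an official install script):
# copy the plugin files into their standard runtimepath locations
mkdir -p ~/.vim/autoload ~/.vim/plugin ~/.vim/doc ~/.vim/after/plugin
cd ~/.vim/bundle/godlygeek-tabular-b7b4d87
cp autoload/tabular.vim         ~/.vim/autoload/
cp plugin/Tabular.vim           ~/.vim/plugin/
cp doc/Tabular.txt              ~/.vim/doc/
cp after/plugin/TabularMaps.vim ~/.vim/after/plugin/
# then regenerate the help tags from inside Vim with :helptags ~/.vim/doc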

I realize that this does not answer your question directly (I could not replicate the error), but you could try using Vundle to manage your Vim plugins. It serves a very similar purpose to pathogen, but with one important difference: it's entirely declarative.
If you used Vundle, installing Tabular would require you only to put this line into your .vimrc:
Bundle 'Tabular'
and then issue the :BundleInstall command, and that's it.
It also supports downloading plugins directly from GitHub, so alternatively you could use:
Bundle 'godlygeek/tabular'
I switched from pathogen to Vundle a while ago and haven't looked back.
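For completeness, Vundle itself is usually installed by cloning it into ~/.vim/bundle; a rough sketch (the repository has moved around over the years, so check the Vundle README for the current URL):
# clone Vundle into the location it expects (URL may differ; see the Vundle README)
git clone https://github.com/gmarik/Vundle.vim.git ~/.vim/bundle/Vundle.vim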


How to centralize database models folder if it is used in multiple projects?

We have 3 different REST APIs, namely A, B and C, and all of them access the same MySQL database.
Our stack is NodeJS + ExpressJS + SequelizeJS ORM + MySQL.
Right now, if I want to create a new DB model, I need to create it in API A and then
ask the developers working on APIs B and C to copy and paste the new model into their projects' models folders.
This is very inefficient and error-prone.
So instead of doing this manually, can we automate the task with a new repo in Bitbucket?
The idea is to create a repo in Bitbucket and somehow reference that models folder from all 3 projects, instead of keeping a models folder in each and every project.
How do I achieve this using NodeJS, ExpressJS and Bitbucket?
I am assuming that by APIs A, B and C you are referring to completely different projects here. If that's the case, then I can suggest using Git submodules. But having used submodules extensively, I would suggest using them only if it is unavoidable.
The project structure I usually work with:
.(Git Root)
├── logs
├── resources
├── schema
│   └── <different-entities>
├── src
│   ├── config
│   ├── controllers
│   ├── jobs
│   │   ├── emails
│   │   └── notifications
│   ├── locales
│   ├── middlewares
│   ├── migrations
│   ├── models (You need to have a git submodule here)
│   ├── public
│   ├── seeders
│   ├── services
│   │   ├── entities
│   │   └── factories
│   ├── transformers
│   ├── types
│   ├── types-override
│   ├── util
│   └── validators
│       ├── keywords
│       └── <different-entities>
├── storage
│   ├── <date>
├── stubs
└── temp_files
This sounds easy but keep these things in mind:
If your existing project has the models directory in its Git history, you cannot create a submodule on that directory (at least not in an easy way; the way I did it was to rename models to shared-models).
Now there will be 2 Git repositories:
A. a Git repository containing all the model files (you will never open this Git repo in your IDE)
B. the main Git repository of your project.
So there will be unnecessary merge conflicts much of the time you
merge branches in your main repository, because the main repository just keeps track of the commit hash your models repository should be on at that moment. Any new commit (irrespective of whether it would even be a fast-forward) will be treated as a merge conflict.
Third, and the drawback that's most tiring: suppose you just have to make a change in a model file and nothing else in the project changes (yes, that's infrequent but possible, e.g. adding another enum value to a status key). To achieve this you will have to make two commits: the first in the models repository, which stores the actual changes, then one in the main repository, which stores the new commit hash, and then push two commits to two different repositories.
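As a rough illustration of that two-commit dance (file names and branch names here are hypothetical):
# 1st commit: inside the submodule checkout, where the model files actually live
cd src/shared-models
git add status.model.js
git commit -m "Add new enum value to status"
git push origin master

# 2nd commit: back in the main repository, which only records the submodule's new commit hash
cd ../..
git add src/shared-models
git commit -m "Bump shared-models submodule"
git push origin master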
If I have not lost you already and you feel this is the right approach for your use case (I can't think of anything better, though), here are the steps:
1. Copy your complete models directory to a new place (let's say your desktop).
2. Run git init inside ~/Desktop/models.
3. Push it to a separate Bitbucket repo (I usually name it <project>-models-<lang>, e.g. facebook-models-node).
4. Come back to your main project A.
5. Remove the models directory there.
6. Run: git submodule add <HTTPS/SSH URL of the Bitbucket repo> src/shared-models
7. Replace imports of models from src/models with src/shared-models.
8. Repeat steps 5-7 for the other projects too.
Official Git submodule documentation: https://git-scm.com/book/en/v2/Git-Tools-Submodules
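Putting the steps above together as a shell sketch (the repository name and URLs are placeholders; substitute your own):
# extract the models into their own repository
cp -r src/models ~/Desktop/models
cd ~/Desktop/models
git init
git add . && git commit -m "Extract shared Sequelize models"
git remote add origin git@bitbucket.org:yourteam/yourproject-models-node.git
git push -u origin master

# wire it back into project A as a submodule
cd /path/to/project-a
git rm -r src/models
git commit -m "Remove in-repo models"
git submodule add git@bitbucket.org:yourteam/yourproject-models-node.git src/shared-models
git commit -m "Add shared models as a submodule"
# finally, update imports from src/models to src/shared-models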

How to start a pyscaffold (python) project?

How do I start the PyScaffold project?
I used this command to create the project:
putup sampleProject
but I don't know how to start the project.
You don't start a PyScaffold project per se -- its goal is simply to create the files and folders that you will commonly need for your project. See my structure below from "putup MyTestProject". Look at all the nice stuff already created that you now don't have to do by hand.
To get started, you need to start adding packages/code to "..src/mytestproject" and run that code like you normally would.
Might I recommend the use of a good IDE, such as PyCharm? I think you will find it makes starting your journey much easier.
A second recommendation -- if you are just getting started, you might skip pyscaffold for now. While a great tool, it might add confusion that you don't need right now.
MyTestProject/
├── AUTHORS.rst
├── CHANGELOG.rst
├── docs
│   ├── authors.rst
│   ├── changelog.rst
│   ├── conf.py
│   ├── index.rst
│   ├── license.rst
│   ├── Makefile
│   └── _static
├── LICENSE.txt
├── README.rst
├── requirements.txt
├── setup.cfg
├── setup.py
├── src
│   └── mytestproject
│       ├── __init__.py
│       └── skeleton.py
└── tests
    ├── conftest.py
    └── test_skeleton.py
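To illustrate the "run it like you normally would" part, getting started typically looks something like this (assuming pip and pytest are available; details may vary between PyScaffold versions):
# generate the project and install it in editable/development mode
putup MyTestProject
cd MyTestProject
pip install -e .
# run the generated example tests to confirm everything is wired up
python -m pytest tests/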
[Edit]
With respect to why "python skeleton.py" gives an output: the library is simply providing an example to show the user where to start adding code, and how the code relates to the tests (test_skeleton.py). The intent is that skeleton.py will be erased and replaced with your code structure. This may be a few .py files, or packages and sub-packages containing .py files. Read it this way: "Your code goes here ... and here is an arbitrary example to get you started."
But you have to ask yourself what you are trying to accomplish. If you are just creating a few scripts for yourself -- for nobody else in the world to see -- do you need the additional stuff (docs, setup, licensing, etc.)? If the answer is no, don't use PyScaffold; just create your scripts in a venv and be on your way. This scaffolding is meant to give you most of what you need to create a full, GitHub-worthy project to potentially share with the world. Based on what I gather your Python experience to be, I don't think you want to use PyScaffold.
But specific to your question: were I starting with PyScaffold, I would erase skeleton.py, replace it with "mytester.py", use the begins library to parse my incoming command-line arguments, then write individual methods to respond to my command-line calls.

git - Merge only core functionality of two different branches

So here's the thing:
I have a webapp project I write in Node using Express as a server (in the master branch), and I also have a version of the same app used to build a node-webkit desktop app (in the nwjs-sdk branch).
The differences between the two branches are only a handful of files, mainly in the root of the directory.
So here's a rough idea of the contents of each branch:
The master branch:
├── package.json
├── node_modules/
├── public
│   ├── css
│   │   ├── cassette.css
│   │   └── style.css
│   ├── data
│   │   └── metadata.json
│   ├── index.html
│   ├── js
│   │   ├── cookie-monster.js
│   │   ├── jquery-3.2.1.min.js
│   │   └── mixtape.js
│   └── tracks
├── README.md
├── readTracks.js
└── server.js
And the nwjs-sdk branch:
├── app
│   ├── css
│   │   ├── cassette.css
│   │   └── style.css
│   ├── data
│   │   ├── metadata.json
│   │   └── tracks.json
│   ├── index.html
│   ├── js
│   │   ├── jquery-3.2.1.min.js
│   │   └── mixtape.js
│   ├── main.js
│   ├── package.json
│   ├── tracks
│   └── uploads
├── package.json
├── readTracks.js
└── writeID3.js
Basically the main difference is that the Express server is gone and the public/ dir is renamed to app/.
The core functionality of my app is in public/js/mixtape.js and in index.html (app/js/mixtape.js and app/index.html in the nwjs-sdk branch).
What I want to do is work on the master branch tweaking the core functionality and when everything is ready copy that functionality to the nwjs-sdk branch without breaking the app for node-webkit.
Any ideas on how to use git for this?
You want to perform a merge. But git is really dumb: if you ask it to merge two histories, one without a public folder and one with a public folder, then the merged history will contain a public folder. The same argument applies to the app folder. So after the merge you should expect to see both folders. Instead, you'll want to direct git yourself.
git checkout master;
git merge --no-commit nwjs-sdk;
This will cause git to pause right before it makes the merge commit. At this point you should move your files around how you would like them (git is not going to be able to figure this out for you). When you're happy with how your files look, you just need to make a regular commit. Since you're creating new history, you can always go back in time; you don't need to worry about losing something in the following instructions.
# Un-stage the changes git was preparing to commit
git reset
# Use git rm to stage the removal of old files (you may want to read up on this command, perhaps try --dry-run)
git rm -r foo ...
# Use git add to stage the new files
git add app ...
# Package everything in the stage into your merge commit
git commit

Managing third party libraries (not node modules) with nodejs?

I'm using a package.json file to define my Node.js requirements, along with npm update, and of course it's working fine.
How can I manage (i.e. update the easy way) other third-party libraries with Node? For example:
https://github.com/documentcloud/backbone.git
https://github.com/twitter/bootstrap.git
In a vendor folder.
Summary: I think you want to use http://twitter.github.com/bower/
Details:
There are two ways to understand your question:
how to manage/update non-npm code?
how to manage/update client-side javascript assets?
The question is worded as the former, but from your included examples I think what you mean to ask is the latter.
In the case of server-side code, just insist that all code ships with an npm-style package.json manifest. If the author of the code is unresponsive, fork it and add the manifest. There is no excuse.
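As a small illustration: once a repository carries a package.json, npm can install it straight from its git URL, so the dependency never needs to be published to the npm registry (a sketch; the Backbone URL is just the example from the question):
# install a dependency directly from a git repository (requires a package.json in that repo)
npm install git+https://github.com/documentcloud/backbone.git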
For client-side code, the situation is different. There is no established standard for package management; however, it's a widely recognized problem and a very active field of development. Several challengers have risen recently, trying to grab the dominant position: BPM, Jam or Ender. It's your call which to pick; they are well summarized here: A package manager for web assets
However, all of the above address a slightly too ambitious problem - they try to sort out the transport of those modules to the browser (via lazy loading, require-js style, dependency resolution, concatenation/minification etc.). This also makes them more difficult to use.
A new entrant to the field is Bower, from Twitter. It focuses just on the download/update lifecycle in your vendor folder and ignores browser delivery. I like that approach. You should check it out.
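A hedged sketch of what that looks like in practice (package names are whatever they were registered under with Bower; the vendor directory override goes in .bowerrc):
# Bower is distributed through npm
npm install -g bower
# tell Bower to put packages in vendor/ instead of its default directory
echo '{ "directory": "vendor" }' > .bowerrc
# fetch the libraries from the question
bower install backbone bootstrap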
You could go for git submodules:
http://git-scm.com/book/en/Git-Tools-Submodules
Using someone else's repo as a Git Submodule on GitHub
[UPDATE 1]
Do this at the root of your repository:
git submodule add git://github.com/documentcloud/backbone.git vendors/backbone
git submodule add git://github.com/twitter/bootstrap.git vendors/bootstrap
Check this for more: http://skyl.org/log/post/skyl/2009/11/nested-git-repositories-with-github-using-submodule-in-three-minutes/
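For the "update the easy way" part, the day-to-day submodule commands look roughly like this (a sketch; adjust paths and the upstream branch name as needed):
# after cloning your project, fetch the vendored submodules
git submodule update --init
# later, move every submodule to the latest upstream commit
git submodule foreach 'git pull origin master'
# then record the new submodule commits in your own repository
git add vendors/
git commit -m "Update vendored libraries"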
Although this may not be the Node.js way, and some purists may complain, Composer will do what you want. Even though Composer is used with PHP projects, there is no reason why it cannot be used to manage third-party non-npm repos for Node.js projects too. Obviously it is preferable for the third-party library to include a package.json, but that's not always going to happen. I tried this on my current Node.js app and it worked perfectly for me.
Pros:
One JSON file specifies custom external dependencies that do not have package.json files
Customize where packages are stored in the project folder
Works with private repos and repos not originally intended for use with nodejs
Cons:
Requires php cli
An extra step for updating dependencies
An extra JSON file for dependencies
Here's how to do it (you need to be able to run PHP from the CLI):
1. Download Composer (directly into your root Node.js project folder)
curl -s https://getcomposer.org/composer.phar > composer.phar
2. Create a composer.json file (in the root of the project)
{
    "repositories": [
        {
            "type": "package",
            "package": {
                "name": "twitter/bootstrap",
                "version": "2.0.0",
                "dist": {
                    "url": "https://github.com/twitter/bootstrap/zipball/master",
                    "type": "zip"
                },
                "source": {
                    "url": "https://github.com/twitter/bootstrap.git",
                    "type": "git",
                    "reference": "master"
                }
            }
        }
    ],
    "require": {
        "twitter/bootstrap": "2.0.0"
    }
}
3. Run the Composer update
php composer.phar update
This will download the package into the vendor folder as you requested:
├── vendor
│   ├── ...
│   ├── composer
│   │   └── installed.json
│   └── twitter
│       └── bootstrap
│           ├── LICENSE
│           ├── Makefile
│           ├── README.md
│           ├── docs
│           │   ├── assets
│           │   └── ...
│           ├── img
│           │   ├── glyphicons-halflings-white.png
│           │   └── glyphicons-halflings.png
│           ├── js
│           │   ├── bootstrap-affix.js
│           │   └── ...
│           ├── less
│           │   ├── accordion.less
│           │   └── ...
│           └── ...

Couchapp directory structure, updates?

When generating a new couchapp, I get this structure:
appname
├── _attachments
│   └── style
├── evently
│   ├── items
│   │   └── _changes
│   └── profile
│       └── profileReady
│           └── selectors
│               └── form
├── lists
├── shows
├── updates
├── vendor
│   └── couchapp
│       ├── _attachments
│       ├── evently
│       │   ├── account
│       │   │   ├── adminParty
│       │   │   ├── loggedIn
│       │   │   ├── loggedOut
│       │   │   ├── loginForm
│       │   │   │   └── selectors
│       │   │   │       └── form
│       │   │   └── signupForm
│       │   │       └── selectors
│       │   │           └── form
│       │   └── profile
│       │       ├── loggedOut
│       │       ├── noProfile
│       │       │   └── selectors
│       │       │       └── form
│       │       └── profileReady
│       └── lib
└── views
    └── recent-items
Now, since this structure is meant to reflect the JSON structure of a CouchDB _design document, I figured this out:
[_attachments] Attachments are stored as binary data. JavaScript, CSS, and HTML files are stored here.
[evently] ???
[lists] Lists are JavaScript functions that are executed to render HTML or AtomFeeds from view results.
[shows] Show functions are the analogue to list functions, but render content by transforming a document into other formats (such as html, xml, csv, png).
[updates] ???
[vendor] Home of external libraries.
[views] Views contain MapReduce functions that can later be queried through the HTTP API.
Apart from me hopefully not being completely wrong with the filled-in descriptions, how would I describe the updates directory? Does it host validation functions?
The second question would be how you would describe the evently directory...
If there is a summary for this already existing, please point me to it!
Kind Regards!
The generate command builds the backbone document format that CouchDB needs, and it also sets up a web app development framework, Evently. I don't know Evently very well, but basically it gives a developer tools and suggestions to make the UI and the couch interact.
Personally, I never use the couchapp generate command. I just create the _id file from scratch (echo -n _design/whatever > _id), then create folders and files as I need them.
List functions (one per file) receive _view output to produce any HTTP response (e.g. XML RSS).
Show functions (one per file) receive one document to produce any HTTP response.
Update functions (one per file) receive one HTTP query and output one prepared document to be stored by couch. (For example, receiving a form submission and building a JSON document.)
View functions (one map.js and one reduce.js in a folder) are CouchDB views and provide for the querying and stuff.
I'm not sure about updates and vendor. They aren't relevant to the CouchDB server.
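As a hedged illustration of how those pieces are reached over CouchDB's HTTP API (the database name "mydb" and design document name "appname" are made up; the view name comes from the tree above):
# query a view directly
curl http://127.0.0.1:5984/mydb/_design/appname/_view/recent-items
# render one document through a show function
curl http://127.0.0.1:5984/mydb/_design/appname/_show/myshow/some_doc_id
# feed a view's rows through a list function
curl http://127.0.0.1:5984/mydb/_design/appname/_list/mylist/recent-items
# send a request through an update handler, which writes the document it returns
curl -X PUT http://127.0.0.1:5984/mydb/_design/appname/_update/myupdate/some_doc_id -d 'field=value'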
I have been using couchapp for about a week or two now. It took me more than a while to grasp how CouchDB works and how couchapp fits in. In fact, I had the very questions that you are having, and I'm sure that every newbie to couchapp will have these questions lingering in their mind. To save them some time at least, I'm posting some of the links that helped me get better at answering the very questions you have asked. The links are below:
http://couchapp.org/page/filesystem-mapping
http://couchapp.org/page/couchapp-usage
http://couchapp.org/page/evently-do-it-yourself
http://www.ibm.com/developerworks/opensource/tutorials/os-couchapp/?ca=drs-
Hope they help.
Update functions are documented in the CouchDB wiki. Quoting it:
[...] you should think about an _update handler as complementary to _show functions, not to validate_doc_update functions.
Evently is documented on the CouchApp site. The documentation is weak; I am using it in a project and I have found only a short blog post with useful info. Luckily the source code is easy to understand. But look at the Pages app for sample usage. Anyway, it is not clear to me how widely used it is.
