I want a CSV file to open in Vim the same way it opens in Microsoft Office: the data should be in column format, the commas should not be visible, and it should be easy to traverse. Is this possible in Vim with the help of any plugins?
I am probably a little late answering this question, but for completeness I'll answer anyway. I have written the csv plugin, which should make it possible to do what you want.
Among other things, it allows you to:
Display which column the cursor is in, as well as the number of columns
Search for text within a column using the :SearchInColumn command
Highlight the column the cursor is in using the :HiColumn command
Visually arrange all columns using the :ArrangeColumn command
Delete a column using the :DeleteColumn command
Display a vertical or horizontal header line using the :Header or :VHeader command
Sort a column using the :Sort command
Copy a column to a register using the :Column command
Move a column behind another column using the :MoveCol command
Calculate the sum of all values within a column using the :SumCol command (you can also define your own custom aggregate functions)
Move through the columns using normal-mode commands (W forwards, H backwards, K upwards, J downwards)
Set up nice syntax highlighting, concealing the delimiter, if your Vim supports it
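A quick session might look like this (a minimal sketch; the column numbers and pattern are illustrative, and the plugin's help documents the exact arguments):

:HiColumn 3
:SearchInColumn 2 /foobar/
:ArrangeColumn
:Sort 1
:DeleteColumn 4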
I've tried Christian's csv plugin, and it is useful for quick looks at csv files, especially when you need to look at many different files.
However, when I'm going to be looking at the same csv file more than a few times, I import the file into sqlite3, which makes further analysis much faster and easier to perform.
For instance, if my file looks like this:
file.csv:
field1name, field2name, field3name
field1data, field2data, field3data
field1data, field2data, field3data
I create a new sqlite db (from the command line):
commandprompt> sqlite3 mynew.db
Then create a table in the db to import the file into:
sqlite> create table mytable (field1name, field2name, field3name);
sqlite> .mode csv
sqlite> .headers ON
sqlite> .separator ,
sqlite> .import file.csv mytable
Now the new table 'mytable' contains the data from the file, but the first row stores the header, which you typically don't want, so you need to delete it (use single quotes around the field value; if you use double quotes you'll delete all rows):
sqlite> delete from mytable where field1name = 'field1name';
Now you can easily look at the data, filter by complex formulas, sort by multiple fields, etc.
sqlite> select * from mytable limit 30;
(Sorry this turned into a sqlite tutorial but it seems like every time that I don't import into sqlite, I end up spending much more time using vim/less/grep/sort/cut than I would have had I just imported in the first place).
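If you find yourself doing this import often, it can be scripted; here is a minimal sketch using Python's built-in csv and sqlite3 modules (the file, database, and table names are illustrative, and it assumes simple header names). Reading the header row separately also avoids the DELETE step above:

import csv
import sqlite3

con = sqlite3.connect('mynew.db')
with open('file.csv', newline='') as f:
    reader = csv.reader(f)
    header = [h.strip() for h in next(reader)]   # first row holds the column names
    cols = ', '.join(header)
    placeholders = ', '.join('?' * len(header))
    con.execute('CREATE TABLE IF NOT EXISTS mytable ({0})'.format(cols))
    con.executemany('INSERT INTO mytable VALUES ({0})'.format(placeholders), reader)
con.commit()
con.close()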
There is also the rainbow_csv Vim plugin. It highlights CSV/TSV file columns in different "rainbow" colors and lets you write SQL-like SELECT and UPDATE queries using Python or JavaScript expressions.
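For instance, with a CSV buffer open you can run something like the following (a minimal sketch; a1, a2, ... are the plugin's column variables, and the condition shown is illustrative):

:Select a1, a3 where int(a2) > 100 order by a1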
You probably want to look at sc as an alternative. Have a look at this Linux Journal page.
Here are some tips for working with CSV files in Vim:
http://vim.wikia.com/wiki/Working_with_CSV_files
I'm not sure if there's a way to display it in columns without the commas, though the tips at that link let Vim traverse and manipulate CSV very easily.
I use @chrisbra's plugin,
" depending on your package manager
dein#add('chrisbra/csv.vim')
and I add a quick command on buffer load (this could be risky on large files):
autocmd BufRead *.csv :%ArrangeColumn
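If that risk worries you, a hedged variant only arranges files under a certain size (the one-megabyte threshold here is illustrative):

autocmd BufRead *.csv if getfsize(expand('%')) < 1000000 | execute '%ArrangeColumn' | endif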
I am using Power Query Editor to create a working file, using multiple tables from several sources.
After I combine these and make my working file, I do some work in columns that I add to the working file later on.
I have noticed that the values I enter in the working file are not bound to the main key (let's assume the first column), but are independent values in a column.
The result is that if one table changes, for example a line is deleted or I change the sorting of the query, my working file is wrong, since the source data changed but the added columns remained as they were.
Is there a way to have the added columns bound to a key value, as with VLOOKUP, for example?
How can I make a file that updates from different sources but still lets me work on it without the risk of misplacing the work I do?
I hope I am clear.
Thank you in advance!
This is fairly simple if each line in your table is unique (in your example, you say the first column can serve as a key). Set up your working columns on the table, then load the table into PQ (as a connection only). Then go to the original query that combines your data and add a merge at the end, merging against the table you just loaded into PQ and matching on your key. Then expand only your working columns from the merge.
This way, whenever you refresh your table, it will match lines against its existing output before updating, so data in your work columns will be maintained. Note, however, that this only retains values, not any formulas you may be using in your work columns.
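In Power Query's M, the merge step might look something like this (a sketch only; the query and column names Combined, WorkingTable, Key, and MyNotes are illustrative placeholders):

let
    // output of the query that combines your sources
    Source = Combined,
    // left-join against the working table (loaded as connection only) on the key
    Merged = Table.NestedJoin(Source, {"Key"}, WorkingTable, {"Key"}, "Work", JoinKind.LeftOuter),
    // expand only the manually maintained columns back out of the merge
    Expanded = Table.ExpandTableColumn(Merged, "Work", {"MyNotes"})
in
    Expanded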
I have a Python script that creates a CSV file, and one of the columns has values such as 4-10, 10-0, etc.
When I open the CSV in Excel, it formats these values as dates, e.g., 4-Oct. When I go to Format Cells and change the type to Text, it changes 4-10 to 43012.
What's the easiest way to stop this?
When you import the data into Excel, tell the Import Wizard that the field is Text.
My preference is to deal with the inputs when possible, and in this case, if you have control over the Python script, it may be preferable to simply modify it so that Excel's default behavior interprets the file in the desired way.
Borrowing from this similar question with a million upvotes, you can modify your Python script to include a non-printing character:
output.write('"{0}\t","{1}\t","{2}\t"\n'.format(value1, value2, value3))
This way, you can easily double-click to open the file and the contents will be treated as text, rather than interpreted as a numeric/date value.
The benefit of this is that other users won't have to remember to use the wizard, and it may be easier to deal with mixed data as well.
Example:
def writeit():
    csvPath = r'c:\debug\output.csv'
    a = '4-10'
    b = '10-0'
    with open(csvPath, 'w') as f:
        # the trailing tab inside the quotes makes Excel treat each value as text
        f.write('"{0}\t","{1}\t"'.format(a, b))

writeit()
In a text editor, the resulting file shows each value followed by a tab, wrapped in quotes; when opened via double-click in Excel, 4-10 and 10-0 appear as text instead of being converted to dates.
I downloaded geo data from a site that provides it as text files. When I copy and paste these files into Excel, each value shows up in its own column.
My main problem with Excel is that it is very bad with large data; my data file is 100+ MB. Therefore, I use MacVim. MacVim, however, shows the data as raw delimiter-separated text.
How can I delete or even select a column of data using MacVim? Is there a way to distinguish columns in MacVim the same way that Excel distinguishes them?
Thank you, your help is much appreciated
There seems to be a nice plugin for dealing with CSV files within Vim at: https://github.com/chrisbra/csv.vim
I'd also suggest looking at the csvkit tools by Christopher Groskopf:
https://csvkit.readthedocs.org/
Make the invisible tab character look like "| " using the 'listchars' setting; that makes it easier to visually distinguish columns. Then you can use blockwise visual mode to select columns. This may still not work where the columns are not aligned correctly due to the length of text in previous cells. You can work around that by replacing one tab with two tabs, but then you will see less data, obviously.
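For example (a minimal sketch; the backslash-escaped trailing space after the bar is required, since the tab setting takes two characters):

:set list
:set listchars=tab:\|\ 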
The columns are apparently TAB separated.
To delete the first column, you can :%s/\S*\t//
To delete the e.g. 4th column, you can :%s/\(\(\S*\t\)\{3}\)\S*\t*/\1/
To delete all but the e.g. 4th column, you can :%s/\(\S*\t\)\{3}\(\S*\).*/\2/
To delete all but the first column, you can :%s/\(\S*\).*/\1/
I'm trying to select multiple values based on a search key. In its most basic form there is no problem with this. I followed this example and everything went well:
http://office.microsoft.com/en-us/excel-help/how-to-look-up-a-value-in-a-list-and-return-multiple-corresponding-values-HA001226038.aspx
=IF(ISERROR(INDEX($A$1:$B$7,SMALL(IF($A$1:$A$7=$A$10,ROW($A$1:$A$7)),ROW(1:1)),2)),"",INDEX($A$1:$B$7,SMALL(IF($A$1:$A$7=$A$10,ROW($A$1:$A$7)),ROW(1:1)),2))
The problem with this, however, is that in my case I have multiple CSV files (external) where some values in my column A look like this:
=- sometext // results in a #NAME? error
Excel interprets these as formulas when they are actually only supposed to be strings. Sure, I could change them to text and save the files again, but I would like to avoid any manipulation of these CSV files.
I tried to extend the second IF statement (if you read it from left to right) with:
IF(AND($A$1:$A$7 <> "#NAME?", $A$1:$A$7=$A$10,ROW($A$1:$A$7)))
and
IF(AND(NOT(ISERROR($A$1:$A$7)), $A$1:$A$7=$A$10,ROW($A$1:$A$7)))
Neither worked. (Sorry if I messed up some syntax or formula names; I'm using a different language version.)
Here is a small image of what's happening right now and how it should look: on the right side you can see a list of values next to Test1 which are missing on the left side due to the #NAME? error.
I would suggest opening the CSV files as text files, selecting Comma as your delimiter, and then selecting Text as your column data format. This way, Excel will treat all your data as text and will not try to read =- sometext as a formula.
To do so, you would need to change your .csv files' extension to .txt or anything else (even no extension at all).
Instead of "Opening" the CSV file, you can "Import" it. This will open the Text Import Wizard which will allow you to specify particular columns as Text. This is located in different areas in different versions of Excel. In Excel 2007, it is on the Data Tab / Get External Data / From Text. The example below demonstrates bringing in long numbers, but it should work just as well with your formula "lookalikes"
I used MS SQL Server 2008 R2 (MS SQL), where I could right-click the query result and copy/paste it with headers into Excel for easy exploration. Now with pgAdmin (PostgreSQL) I have to export (File > Export > CSV) and then do a bunch of Excel steps (Text to Columns).
Is there an easy way to copy/paste the query result with headers into Excel?
For pgAdmin 4, there is an option to "Copy with headers". It is a drop-down beside the copy button in the Query Tool menu:
pgAdmin seems to make the semicolon the default field separator, while Excel seems to prefer tabs by default.
You could try to change Excel, or just use the "Text to Columns" feature each time.
I personally would go to Preferences -> Query tool -> Results grid and change the following:
Result copy quote character: "
Result copy field separator: Tab
Copy column names: True
This will make it behave more like SQL Server Management Studio.
There are a lot of different ways to accomplish what you want here. The question is a bit confusing, because you are talking about Excel, but then you talk about '/var/lib/postgres/myfile1.csv', which makes me think you are now using some flavor of Linux.
I'm using Ubuntu 12.04 with pgAdminIII 1.16.0. And I have Open Office installed with LibreOffice 3.5.4.2 as the Excel replacement.
I'm not sure why you want to take the information out of the grid in pgAdminIII, but assuming you just want to move the data over to a spreadsheet to play with it, about the easiest way is to run your query and click the upper-left corner of the results (which, just like in a spreadsheet, selects everything), then copy. You should then be able to open LibreOffice and paste in the information, which brings up the same dialog you would see when importing a CSV file.
Also, you should be able to start psql and then use a "COPY" command. If you get a permissions error, try the suggested "\COPY" instead. Please see the PostgreSQL docs; there is also a wiki page on the topic.
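For example, from within psql (a minimal sketch; the table name and output path are illustrative):

\copy (SELECT * FROM mytable) TO '/tmp/mytable.csv' WITH CSV HEADER

This writes the result, headers included, to a CSV file you can open in a spreadsheet.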
If I'm missing what you are trying to do, please ask questions in the comments section, and I'll try to improve my answer accordingly.
You have to set your query tool output to text, not the grid. That way, the column names and the query results are all in the same copied text. When you do this you are no longer dealing with CSV; the whole result set, field names included, comes over as text in the copy-and-paste process.
Answering quite an old post: the answer by @Phillip Fleischer seems to be the best way, at least in pgAdmin III. But in pgAdmin III version 1.22.2 (the one I am using), instead of Preferences..., the settings mentioned are under File > Options > Query tool > Results grid.