log line number in vim whenever a line is deleted

I have an application that generates a txt file with thousands of lines. I have to delete some lines manually by going through the file (using vim). However, I might need to regenerate the same file if a change in format is required, and that would make me go through the file again to delete the same lines.
What would save me from deleting the lines manually over and over is if vim somehow logged the line number whenever I delete a line; I could then use a script to remove those lines. Is it possible to get this behavior in vim?
Otherwise, is there another editor that offers this behavior? There are too many lines to delete for logging each line number manually to be feasible.

As suggested by phd and wxz, I was able to use git diff on the file and extract the deleted lines, using the node package gitdiff-parser to parse the diff. The -U0 flag keeps context lines out of the hunks, so each hunk corresponds exactly to one deleted range.
const gitDiffParser = require('gitdiff-parser')
const { exec } = require("child_process");

// Run the diff and resolve the promise with its raw output.
let p = new Promise((res, rej) => {
    exec("git diff -U0 file.txt", (error, stdout) => {
        if (error) return rej(error);
        res(stdout);
    });
});

p.then(s => {
    const diff = gitDiffParser.parse(s);
    // oldStart/oldLines locate each removed block in the original file.
    diff[0].hunks.forEach(element => {
        console.log(`start: ${element.oldStart}, end: ${element.oldStart + element.oldLines - 1}`)
    });
})
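For example, if lines 3-5 and line 12 had been deleted (a hypothetical edit), the script would print:
start: 3, end: 5
start: 12, end: 12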
Another solution, or rather a hack, was to append its line number to each line of the file and, after removing the required lines, extract the line numbers that were left.
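A minimal sketch of that hack in Node, assuming a tab between the tag and the content and the hypothetical names file.txt and file.tagged.txt (step 1 runs before the vim session, step 2 after it):
const fs = require('fs');

// Step 1 (before editing): write a copy with every line tagged by its
// 1-based line number.
const lines = fs.readFileSync('file.txt', 'utf8').split('\n');
fs.writeFileSync('file.tagged.txt',
    lines.map((line, i) => `${i + 1}\t${line}`).join('\n'));

// Step 2 (after deleting lines from file.tagged.txt in vim): any tag
// missing from the edited copy is a deleted line number.
const kept = new Set(
    fs.readFileSync('file.tagged.txt', 'utf8').split('\n')
        .filter(Boolean)
        .map(l => Number(l.split('\t')[0])));
const deleted = lines.map((_, i) => i + 1).filter(n => !kept.has(n));
console.log('deleted line numbers:', deleted);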

Related

Puppet search and replace a number using file pattern replacement

I am fairly new to Puppet. I am trying to set a TimeOut value in the 'httpd.conf' file using the below code snippet in my Puppet 'config.pp':
replace { 'Httpd Timeout':
    file        => 'httpd.conf',
    pattern     => 'TimeOut*',
    replacement => 'TimeOut 900',
}
But instead of replacing the value, the code just appends "900" to the line, like this:
"TimeOut 900 300 "
How can I modify the code so that the line becomes the following?
TimeOut 900
I am using Puppet community version 3.7.
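The likely culprit is the pattern rather than the replacement: in a regular expression, TimeOut* means "TimeOu followed by zero or more t", so only the word TimeOut itself is matched and replaced, and the old value 300 survives. A quick illustration in JavaScript (Puppet applies its own regex handling, so this is only an analogy):
// "TimeOut*" matches just the word, so the old value is left in place:
console.log('TimeOut 300'.replace(/TimeOut*/, 'TimeOut 900'));
// -> "TimeOut 900 300"

// Matching through the end of the line replaces the value as intended:
console.log('TimeOut 300'.replace(/TimeOut.*/, 'TimeOut 900'));
// -> "TimeOut 900"
A pattern along the lines of 'TimeOut.*' should therefore replace the whole setting.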

Logstash KV plugin working

I am trying to use Logstash's KV plugin. I have the following log format:
time taken for transfer for all files in seconds=23 transfer start time= 201708030959 transfer end time = 201708030959
My .conf file has the following KV filter:
filter {
    kv {
        value_split => "="
    }
}
When I run Logstash, it parses the complete log file line by line, except for the lines containing "=". I need the seconds, start time, and end time separated out as key-value pairs. Please suggest.
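The stray spaces around "=" (and the spaces inside the keys) are what a plain value_split chokes on. As an illustration of the parsing problem in Node rather than Logstash configuration, whitespace-tolerant matching recovers the three pairs:
// Keys may contain spaces and "=" may be padded, so match loosely
// and trim the keys afterwards.
const line = 'time taken for transfer for all files in seconds=23 ' +
    'transfer start time= 201708030959 transfer end time = 201708030959';
const pairs = [...line.matchAll(/([\w ]+?)\s*=\s*(\S+)/g)]
    .map(m => [m[1].trim(), m[2]]);
console.log(Object.fromEntries(pairs));
// { 'time taken for transfer for all files in seconds': '23',
//   'transfer start time': '201708030959',
//   'transfer end time': '201708030959' }
In Logstash itself, the kv filter's trim options (trim_key/trim_value in recent versions) may achieve the same normalization.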

Writing at end of every line in a file in Node.js?

I have a file
one
two
three
I want to append a word at the end of every line in this file. How can I achieve that in Node?
e.g.
onecandy
twocandy
threecandy
Then I want to use this file in another function, i.e. after "candy" has been added to every line. How do I do that?
You will have to read each line to know where it ends, and you also have to write at the end of each line. In short, you have to read everything and write at the end of every line; just appending won't save much performance, it only complicates things.
var fs = require("fs");

// Read the whole file and split it into lines.
var allLines = fs.readFileSync('./input.txt').toString().split('\n');

// Truncate the file, then re-append every line with "candy" at the end.
// (writeFileSync is synchronous and takes no callback.)
fs.writeFileSync('./input.txt', '');
allLines.forEach(function (line) {
    var newLine = line + "candy";
    console.log(newLine);
    fs.appendFileSync("./input.txt", newLine + "\n");
});

// Each line now has "candy" appended.
allLines = fs.readFileSync('./input.txt').toString().split('\n');
Note: for replacing only some specified lines, you can go through this answer.
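Assuming the synchronous style above is acceptable, the same result needs only a single write instead of one append per line (a sketch; an empty trailing line, if any, is left untouched):
var fs = require('fs');

// Append "candy" to every non-empty line, then write back in one go.
var result = fs.readFileSync('./input.txt', 'utf8')
    .split('\n')
    .map(function (line) { return line ? line + 'candy' : line; })
    .join('\n');
fs.writeFileSync('./input.txt', result);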

Extracting pattern which does not necessarily repeat

I am working with ANSI 835 plain text files and am looking to capture all data in segments which start with “BPR” and end with “TRN”, including those markers. A given file is a single line; within that line the segment can, but does not always, repeat. I am running the process on multiple files at a time, and ideally I would be able to record the file name in which the segment(s) occur.
Here is what I have so far, based on an answer to another question:
#!/bin/sed -nf
/BPR.*TRN/ {
    s/.*\(BPR.*TRN\).*/\1/p
    d
}
/from/ {
    : next
    N
    /BPR/ {
        s/^[^\n]*\(BPR.*TRN\)[^\n]*/\1/p
        d
    }
    $! b next
}
I run all files I have through this and write the results to a file which looks like this:
BPR*I*393.46*C*ACH*CCP*01*011900445*DA*0000009046*1066033492**01*071923909*DA*7234692932*20150120~TRN
BPR*I*1611.07*C*ACH*CCP*01*031100209*DA*0000009108*1066033492**01*071923909*DA*7234692932*20150122~TRN
BPR*I*1415.25*C*CHK************20150108~TRN
BPR*H*0*C*NON************20150113~TRN
BPR*I*127.13*C*CHK************20150114~TRN
BPR*I*22431.28*C*ACH*CCP*01*071000152*DA*99643*1361236610**01*071923909*DA*7234692932*20150112~TRN
BPR*I*182.62*C*ACH*CCP*01*071000152*DA*99643*1361236610**01*071923909*DA*7234692932*20150115~TRN
Ideally each line would be prepended with the file name like this:
IDI.Aetna.011415.64539531.rmt:BPR*I*393.46*C*ACH*CCP*01*011900445*DA*0000009046*1066033492**01*071923909*DA*7234692932*20150120~TRN
IDI.BCBSIL.010915.6434438.rmt:BPR*I*1611.07*C*ACH*CCP*01*031100209*DA*0000009108*1066033492**01*071923909*DA*7234692932*20150122~TRN
IDI.CIGNA.010215.64058847.rmt:BPR*I*1415.25*C*CHK************20150108~TRN
IDI.GLDRULE.011715.646719.rmt:BPR*H*0*C*NON************20150113~TRN
IDI.MCREIN.011915.6471442.rmt:BPR*I*127.13*C*CHK************20150114~TRN
IDI.UHC.011915.64714417.rmt:BPR*I*22431.28*C*ACH*CCP*01*071000152*DA*99643*1361236610**01*071923909*DA*7234692932*20150112~TRN
IDI.UHC.011915.64714417.rmt:BPR*I*182.62*C*ACH*CCP*01*071000152*DA*99643*1361236610**01*071923909*DA*7234692932*20150115~TRN
The last two lines would be an example of a file where the segment pattern repeats.
Again, prepending each line with the file name is ideal. What I really need is to be able to process a given single-line file in which the “BPR…TRN” segment repeats and write all of those segments to my output file.
Try with awk:
awk '
/BPR/ { sub(".*BPR","BPR") }
/TRN/ { sub("TRN.*","TRN") }
/BPR/,/TRN/ { print FILENAME ":" $0 }
' *.rmt
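One caveat: awk's sub() replaces the leftmost longest match, so ".*BPR" strips everything up to the last BPR and the script keeps only one segment per input line. Since these files are single lines in which the segment may repeat, a non-greedy match that collects every occurrence is closer to the requirement; a sketch in Node (extract.js is a hypothetical name, with the file list supplied by the shell):
const fs = require('fs');

// Usage: node extract.js *.rmt
for (const file of process.argv.slice(2)) {
    const text = fs.readFileSync(file, 'utf8');
    // The non-greedy [\s\S]*? stops at the nearest TRN, so every
    // BPR...TRN segment in the long single line is captured.
    for (const seg of text.match(/BPR[\s\S]*?TRN/g) || []) {
        console.log(`${file}:${seg}`);
    }
}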

CakeDC csvimport fetches no records from CSV file

I have the following CSV file; the first line is the header:
admission_no;first_name;last_name;gender;date_of_birth;join_date;form_id;stream_id;school_photo
1003;"cavin";"cavin";"cavin";"male";1992-11-02;2007-01-25;1;1
1004;"joshua";"joshua";"joshua";"male";1992-11-03;2007-01-26;1;1
1005;"elijah";"elijah";"elijah";"male";1992-11-04;2007-01-27;1;1
1006;"lawrent";"lawrent";"lawrent";"male";1992-11-05;2007-01-28;1;1
1007;"steven";"steven";"steven";"male";1992-11-06;2007-01-29;1;2
1008;"javan";"javan";"javan";"male";1992-11-07;2007-01-30;1;2
1009;"miller";"miller";"miller";"male";1992-11-08;2007-01-31;1;2
1010;"javis";"javis";"javis";"male";1992-11-09;2007-02-01;1;2
1011;"fredrick";"fredrick";"fredrick";"male";1992-11-10;2007-02-02;1;3
1012;"fredrick";"fredrick";"fredrick";"male";1992-11-11;2007-02-03;1;3
1013;"nahashon";"nahashon";"nahashon";"male";1992-11-12;2007-02-04;1;3
1014;"nelson";"nelson";"nelson";"male";1992-11-13;2007-02-05;1;3
1015;"martin";"martin";"martin";"male";1992-11-14;2007-02-06;1;4
1016;"felix";"felix";"fwlix";"male";1992-11-15;2007-02-07;1;4
1017;"tobias";"tobias";"tobias";"male";1992-11-16;2007-02-08;1;4
1018;"dennis";"dennis";"dennis";"male";1992-11-17;2007-02-09;1;4
1019;"bildad";"bildad";"bildad";"male";1992-11-18;2007-02-10;1;5
1020;"syslvester";"sylvester";"sylvester";"male";1992-11-19;2007-02-11;1;5
And my database table columns are: admission_no, first_name, last_name, gender, date_of_birth, join_date, form_id, stream_id and school_photo.
Using the CakeDC Utils plugin to import the data, I get a flash message:
Successfully imported 0 records from file.csv
I have tried removing the header, changing the delimiter, and even adding NULL for the school_photo column since it is nullable, but nothing seems to work.
Can someone tell me what I am doing wrong?
I am generating the CSV using LibreOffice on Ubuntu.
The import function:
function import() {
    $modelClass = $this->modelClass;
    if ($this->request->is('POST')) {
        $records_count = $this->$modelClass->find('count');
        try {
            $this->$modelClass->importCSV($this->request->data[$modelClass]['CsvFile']['tmp_name']);
        } catch (Exception $e) {
            $import_errors = $this->$modelClass->getImportErrors();
            $this->set('import_errors', $import_errors);
            $this->Session->setFlash(__('Error importing')." ".$this->request->data[$modelClass]['CsvFile']['name'].",".__('column mismatch'));
            $this->redirect(array('action' => 'import'));
        }
        $new_records_count = $this->$modelClass->find('count') - $records_count;
        $this->Session->setFlash(__('Successfully imported')." ".$new_records_count." ".'records from'." ".$this->request->data[$modelClass]['CsvFile']['name']);
        //$this->redirect(array('action' => 'index'));
    }
    $this->set('modelClass', $modelClass);
    $this->render('../Common/import');
}//end import
The view file:
<h3>Import <?php echo Inflector::pluralize($modelClass); ?> from CSV data file</h3>
<?php
echo $this->Form->create($modelClass, array('action' => 'import', 'type' => 'file'));
echo $this->Form->input('CsvFile', array('label' => '', 'type' => 'file'));
echo $this->Form->end('Submit');
?>
Make sure your line endings are something your server can read. I've been burned by this a million times... especially when using a Mac to generate the CSV from Excel. Save as "CSV for Windows".
Background: OS X tends to save files with \r (carriage return) line endings, while Windows uses \r\n (carriage return, line feed). To make it worse, all other Unix-like systems (e.g. Linux) tend to use only \n.
Thus, if you save a file on OS X and then open it on Windows, Windows thinks the file is only one big line.
You can verify this with a good editor (a programming editor that can display whitespace as symbols) or by looking at the actual hex code of the file.
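If the endings do turn out to be the problem, normalizing them before the import is a one-liner; a minimal sketch in Node, assuming the uploaded file is file.csv:
const fs = require('fs');

// Rewrite \r\n and bare \r as \n so the importer sees one row per line.
const raw = fs.readFileSync('file.csv', 'utf8');
fs.writeFileSync('file.csv', raw.replace(/\r\n?/g, '\n'));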
