This is my first post, so I apologize if the format isn't the best. I'm a student writing a program that imports a document, reads it line by line, and reverses the letters in each word, writing the reversed words to a new file. For example, "Jon 123" would be stored and written as "321 noJ". I have gotten the input to work, but there is a problem with writing the line: the program only writes the last word that is stored.
The abridged main method code is as follows:
//get first line of text
line = bw.readLine();

//while string is not null
while (line != null)
{
   System.out.println("Processing...");   //display message to show work being done

   tokenLine = lineToken(line);           //tokenize string

   //to prevent exception from no token found
   while (tokenLine.hasMoreTokens())
   {
      word = flipWord(tokenLine);         //get next token and reverse letters
      newLine = marginCheck(word);        //store or write line depending on margin
      flushClose(newLine);                //write and flush buffer then close file
   }

   //move to next line in file
   line = bw.readLine();
}

flushClose(newLine);                      //write and flush buffer then close file

//output completion message
System.out.println("The new file has been written.");
The relevant methods are as follows:
public static StringTokenizer lineToken(String line)
{
   //local constants

   //local variables
   StringTokenizer tokenLine;               //store tokenized line

   /******************* Start lineToken Method ***************/

   tokenLine = new StringTokenizer(line);   //tokenize the current line of text

   return tokenLine;
}//end lineToken
public static String flipWord(StringTokenizer tokenLine)
{
   //local constants

   //local variables
   String word;          //store word for manipulation
   String revWord = "";  //store characters as they are flipped

   /******************************* Start flipWord Method ******************/

   //store the next token as a string
   word = tokenLine.nextToken();

   //for each character store that character to create a new word
   for (int count = word.length(); count > 0; count--)
      revWord = revWord + word.charAt(count - 1);   //store the new word character by character

   return revWord;       //return the word reversed
}//end flipWord
public static String marginCheck(String revWord) throws Exception
{
   //local constants
   final int MARGIN = 60;             //maximum characters per line

   //local variables
   String newLine = "";               //store the new line
   FileWriter fw;                     //writes to output file
   BufferedWriter bWriter;            //instantiate buffered writer object
   PrintWriter pw;                    //instantiate print writer object
   String outFile = "RevWord.text";   //file to write to

   /************* Start marginCheck Method ************/

   //open the output file for writing
   fw = new FileWriter(outFile);
   bWriter = new BufferedWriter(fw);
   pw = new PrintWriter(bWriter);

   //if the buffered line concatenated with the word is less than the margin
   if (newLine.length() + revWord.length() <= MARGIN)
      newLine = newLine + revWord + " ";   //the buffered line adds the word
   else
      //put a newline character at the end and write the line
      newLine = newLine + "\n";

   pw.println(newLine);

   //use this word as the first word of the next line
   newLine = revWord + " ";

   return newLine;   //return for use with flush
}//end marginCheck
public static void flushClose(String inLine) throws Exception
{
   //local constants

   //local variables
   FileWriter fw;                     //writes to output file
   BufferedWriter bWriter;            //instantiate buffered writer object
   String outFile = "RevWord.text";   //file to write to

   /************ Start flushClose Method *********/

   fw = new FileWriter(outFile);
   bWriter = new BufferedWriter(fw);  //initialize writer object

   //write the last line to the output file then flush and close the buffer
   bWriter.write(inLine);
   bWriter.flush();
   bWriter.close();
}//end flushClose
I am not sure, but my best guess is that every time you write to the file you are overwriting it instead of appending to it.
Try new FileWriter(outFile, true);
Answer from: http://www.mkyong.com/java/how-to-append-content-to-file-in-java/
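A minimal sketch of that fix, reusing the RevWord.text file name from the question: either pass true for append, or (cleaner) open the writer once before the loop and close it once at the end, so each write adds to the file instead of replacing it. StringBuilder.reverse() can also stand in for the hand-rolled character loop.

```java
import java.io.BufferedWriter;
import java.io.FileWriter;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;

public class ReverseWords {
    // Reverse the characters of one word, e.g. "Jon" -> "noJ".
    static String flip(String word) {
        return new StringBuilder(word).reverse().toString();
    }

    public static void main(String[] args) throws IOException {
        // Open the writer ONCE, outside the loop; a new FileWriter without
        // append = true truncates the file, which is why only the last
        // word survived in the original program.
        try (BufferedWriter out = new BufferedWriter(new FileWriter("RevWord.text"))) {
            List<String> lines = List.of("Jon 123");   // stands in for the input file
            for (String line : lines) {
                StringBuilder flipped = new StringBuilder();
                for (String word : line.split("\\s+")) {
                    flipped.append(flip(word)).append(' ');
                }
                out.write(flipped.toString().trim());
                out.newLine();
            }
        }
        System.out.println(Files.readAllLines(Paths.get("RevWord.text")).get(0));
        // prints noJ 321
    }
}
```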
Related
I'm using the charAt method to find the first and second letters in a string that was read from a file, but after getting the first character with charAt(0), the charAt(1) call throws an exception saying the string is too short, when I know it is not. Here is the code.
while (inputFile.hasNext()) {
   //read file first line
   String line = inputFile.nextLine();

   //if the first 2 letters of the line the scanner is reading are the same
   //as the search letters print the line and add one to the linesPrinted count
   String lineOne = String.valueOf(line.charAt(0));
   String lineTwo = String.valueOf(line.charAt(1));
   String searchOne = String.valueOf(search.charAt(0));
   String searchTwo = String.valueOf(search.charAt(1));

   if (lineOne.compareToIgnoreCase(searchOne) == 0 && lineTwo.compareToIgnoreCase(searchTwo) == 0) {
      System.out.println(line);
      linesPrinted++;
   }
}
I've tried checking, by printing, to make sure the string isn't being changed after the charAt(0) call, and I know it isn't. I've also run the program with no problems after removing just that line, so I am sure this is what's causing the problem.
The only functional change needed would be to change hasNext to hasNextLine.
Since you might encounter a line shorter than 2 characters, say an empty line at the end of the file, also check the length.
while (inputFile.hasNextLine()) {
   // read file next line
   String line = inputFile.nextLine();
   if (line.length() < 2) {
      continue;
   }

   // if the first 2 letters of the line the scanner is reading are the same
   // as the search letters print the line and add one to the linesPrinted count
   String lineOne = line.substring(0, 1);
   String lineTwo = line.substring(1, 2);
   String searchOne = search.substring(0, 1);
   String searchTwo = search.substring(1, 2);

   if (lineOne.equalsIgnoreCase(searchOne) && lineTwo.equalsIgnoreCase(searchTwo)) {
      System.out.println(line);
      linesPrinted++;
   }
}
There is a problem with special characters in other languages and scripts: a Unicode code point (symbol, character) can be more than one Java char.
while (inputFile.hasNextLine()) {
   // read file next line
   String line = inputFile.nextLine();
   if (line.length() < 2) {
      continue;
   }

   // if the first 2 letters of the line the scanner is reading are the same
   // as the search letters print the line and add one to the linesPrinted count
   int[] lineStart = line.codePoints().limit(2).toArray();
   int[] searchStart = search.codePoints().limit(2).toArray();
   String lineKey = new String(lineStart, 0, lineStart.length);
   String searchKey = new String(searchStart, 0, searchStart.length);

   if (lineKey.equalsIgnoreCase(searchKey)) {
      System.out.println(line);
      linesPrinted++;
   }
}
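A quick check of the code-point approach, using a character outside the Basic Multilingual Plane (the mathematical double-struck capital J, U+1D541, which occupies two Java chars):

```java
public class CodePointDemo {
    // Take the first two Unicode code points of a string as a comparison key.
    static String firstTwoCodePoints(String s) {
        int[] cps = s.codePoints().limit(2).toArray();
        return new String(cps, 0, cps.length);
    }

    public static void main(String[] args) {
        String line = "\uD835\uDD41on 123";   // one code point (U+1D541) but two chars, then "on 123"
        System.out.println(line.substring(0, 2).length());   // 2 chars, yet only 1 visible symbol
        System.out.println(firstTwoCodePoints(line));        // the first two symbols, not chars
    }
}
```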
My text file:
3.456 5.234 Saturday 4.15am
2.341 6.4556 Saturday 6.08am
At the first line, I want to read 3.456 and 5.234 only.
At the second line, I want to read 2.341 and 6.4556 only.
The same goes for any following lines.
Here's my code so far:
InputStream instream = openFileInput("myfilename.txt");
if (instream != null) {
   InputStreamReader inputreader = new InputStreamReader(instream);
   BufferedReader buffreader = new BufferedReader(inputreader);
   String line = null;
   while ((line = buffreader.readLine()) != null) {
   }
}
Thanks for showing some effort. Try this
while ((line = buffreader.readLine()) != null) {
   String[] parts = line.split(" ");
   double x = Double.parseDouble(parts[0]);
   double y = Double.parseDouble(parts[1]);
}
I typed this from memory, so there might be syntax errors.
int linenumber = 1;
while ((line = buffreader.readLine()) != null) {
   String[] parts = line.split(Pattern.quote(" "));
   System.out.println("Line " + linenumber + " -> First Double: " + parts[0]
         + " Second Double: " + parts[1]);
   linenumber++;
}
The code from Bilbert is almost right. You should use a Pattern and call quote() for the split; this keeps stray whitespace out of the array. Without the pattern, your problem would be that whitespace is left in the array entries after every split. Also, I added a line number to my output, so you can see which line contains what. It should work fine.
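One caveat worth adding to both answers above: split(" ") and split(Pattern.quote(" ")) both split on exactly one space, so a run of two spaces, or a tab, produces empty or wrong entries. Trimming the line and splitting on the regex \s+ is more robust:

```java
public class SplitDemo {
    public static void main(String[] args) {
        String line = "3.456   5.234\tSaturday 4.15am";   // extra spaces and a tab

        // \s+ collapses any run of whitespace into a single separator
        String[] parts = line.trim().split("\\s+");
        double x = Double.parseDouble(parts[0]);
        double y = Double.parseDouble(parts[1]);
        System.out.println(x + " " + y);   // prints 3.456 5.234
    }
}
```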
I am very new to java, and this is homework. Any direction would be appreciated.
The assignment is to read an external text file and then parse that file to produce a new file.
The external file looks something like this:
2 //number of lines in the file
3,+,4,*,2,-.
5,*,2,T,1,+
I have to read this file and produce an output that takes the preceding int value and prints the following character (skipping the comma). So the output would look like this:
+++****--
*****TT+
I have tried to setup my code using two methods. The first to read the external file (passed as a parameter) which, as long as there is a next line, will call a second method, processLine, to parse the line. This is where I am lost. I can't figure out how this method should be structured so it reads the line and interprets the token values as either ints or chars, and then executes code based on those values.
I am only able to use what we have covered in class, so no external libraries, just the basics.
public static void numToImageRep(File input, File output)   //rcv file
      throws FileNotFoundException {
   Scanner read = new Scanner(input);
   while (read.hasNextLine()) {           //read file line by line
      String data = read.nextLine();
      processLine(data);                  //pass line for processing
   }
}

public static void processLine(String text) {   //incomplete, all falls apart here.
   Scanner process = new Scanner(text);
   while (process.hasNext()) {
      if (process.hasNextInt()) {
         int multi = process.nextInt();
      }
      if (process.hasNext() == ',') {
      }
   }
}
This method can serve as a simple example that does the job:
public static String processLine(String text) {
   String result = "";
   String[] splitted = text.split(",");
   int remaining = 0;

   for (int i = 0; i < splitted.length; i += 2) {
      remaining = Integer.parseInt(splitted[i]);
      while (remaining-- > 0)
         result += splitted[i + 1];
   }
   return result;
}
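For reference, here is the same logic run against the sample lines from the question (this sketch swaps the string concatenation for a StringBuilder, which avoids rebuilding the string on every append; otherwise it matches the answer):

```java
public class ProcessLineDemo {
    // Each pair in the comma-separated line is (count, symbol):
    // "3,+,4,*,2,-" means 3 plus signs, 4 asterisks, 2 dashes.
    public static String processLine(String text) {
        StringBuilder result = new StringBuilder();
        String[] split = text.split(",");
        for (int i = 0; i < split.length; i += 2) {
            int remaining = Integer.parseInt(split[i]);
            while (remaining-- > 0)
                result.append(split[i + 1]);
        }
        return result.toString();
    }

    public static void main(String[] args) {
        System.out.println(processLine("3,+,4,*,2,-"));   // prints +++****--
        System.out.println(processLine("5,*,2,T,1,+"));   // prints *****TT+
    }
}
```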
When a log message contains embedded new line characters, then the alignment of such log messages is not proper in the log file.
For example, if I am using the conversion pattern:
[%-5level] %message%newline
and if I log an exception stack trace which contains embedded newline characters, or any other multi-line log message, then the additional lines in the message start from the beginning of the line.
Is it possible that for each such additional line, the conversion pattern is followed, and the text indented appropriately?
I do it the following way:
void Log(string message, int levelsDeep)
{
    StringBuilder sb = new StringBuilder();
    for (int i = 0; i < levelsDeep; i++)
        sb.Append(" ");
    string spacer = sb.ToString();

    string msg = message.Replace("\r\n", "\r\n" + spacer);

    // The prefix and suffix newline characters ensure that this message and the
    // next one start on a new line and are not impacted by this message's spacer.
    msg = "\r\n" + msg + "\r\n";

    // call log4net functions to log the 'msg'
}
I have a tab-delimited text file that is many GB in size. The task is to append the header text to each column value. At the moment I use a StreamReader to read line by line and append the headers to each column, which takes a lot of time. Is there a way to make this faster? I was thinking there might be a way to process the file column-wise. One option would be to import the file into a database table and then bcp out the data after appending the headers. Is there any better way, perhaps by calling PowerShell or awk/sed from C# code?
The code is as follows:
StreamReader sr = new StreamReader(FilePath, System.Text.Encoding.Default);
string mainLine = sr.ReadLine();
string[] fileHeaders = mainLine.Split(new string[] { "\t" }, StringSplitOptions.None);
string newLine = "";
System.IO.StreamWriter outFileSw = new System.IO.StreamWriter(outFile);

while (!sr.EndOfStream)
{
    mainLine = sr.ReadLine();
    string[] originalLine = mainLine.Split(new string[] { "\t" }, StringSplitOptions.None);
    newLine = "";

    for (int i = 0; i < fileHeaders.Length; i++)
    {
        if (fileHeaders[i].Trim() != "")
            newLine = newLine + fileHeaders[i].Trim() + "=" + originalLine[i].Trim() + "&";
    }

    outFileSw.WriteLine(newLine.Remove(newLine.Length - 1));
}
Nothing else operating on just text files is going to be significantly faster - fundamentally you've got to read the whole of the input file, and you've got to create a whole new output file, as you can't "insert" text for each column.
Using a database would almost certainly be a better idea in general, but adding a column could still end up being a relatively slow business.
You can improve how you're dealing with each line, however. In this code:
for (int i = 0; i < fileHeaders.Length; i++)
{
    if (fileHeaders[i].Trim() != "")
        newLine = newLine + fileHeaders[i].Trim() + "=" + originalLine[i].Trim() + "&";
}
... you're using string concatenation in a loop, which will be slow if there's a large number of columns. Using a StringBuilder is very likely to be more efficient. Additionally, there's no need to call Trim() on every string in fileHeaders on every line. You can just work out which columns you want once, trim the header appropriately, and filter that way.
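A sketch of both suggestions together, trim and filter the headers once up front, then build each line with a StringBuilder (shown in Java for consistency with the rest of this page; the C# StringBuilder API is analogous, and the column names are made up for illustration):

```java
import java.util.ArrayList;
import java.util.List;

public class HeaderJoin {
    // Build "header=value&header=value" for one data line, using header
    // columns that were trimmed and filtered once, before the main loop.
    static String buildLine(String[] trimmedHeaders, List<Integer> keep, String[] fields) {
        StringBuilder sb = new StringBuilder();
        for (int i : keep) {
            if (sb.length() > 0) sb.append('&');
            sb.append(trimmedHeaders[i]).append('=').append(fields[i].trim());
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        String[] headers = "Name\t \tAge".split("\t", -1);   // middle header is blank

        // Do the Trim() work and the which-columns-to-keep decision exactly once.
        List<Integer> keep = new ArrayList<>();
        for (int i = 0; i < headers.length; i++) {
            headers[i] = headers[i].trim();
            if (!headers[i].isEmpty()) keep.add(i);
        }

        String[] fields = "Jon\tx\t41".split("\t", -1);
        System.out.println(buildLine(headers, keep, fields));   // prints Name=Jon&Age=41
    }
}
```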