Incomplete statement at end of file - cassandra

After running the command below:
sh cqlsh --request-timeout=3600 -f test.cql
I get the following error:
Incomplete statement at end of file
Even though my first line is use sample; followed by 50 insert queries.
What could be the reasons for this error?

That error is returned if the statement at the end of the file either (a) has invalid syntax, or (b) is not terminated correctly, e.g. with a missing semicolon.
Sometimes the issue can occur several lines up from the last statement in the input file.
Check that the CQL statements have valid syntax. It might be necessary to do a process of elimination: split the file into smaller chunks of, say, 10 statements each so you can identify the offending statement. Cheers!
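Before splitting, a quick shell pre-check can flag likely culprits. This is a sketch that assumes one statement per line (multi-line statements would need a smarter check); the sample file below stands in for the test.cql from the question:

```shell
# Build a sample file like the one in the question; the last INSERT
# is deliberately missing its terminating semicolon.
printf '%s\n' \
  'use sample;' \
  "INSERT INTO t (id) VALUES (1);" \
  "INSERT INTO t (id) VALUES (2)" > test.cql

# Print every line (with its line number) that does not end in ';'.
grep -n -v ';[[:space:]]*$' test.cql
# -> 3:INSERT INTO t (id) VALUES (2)
```

Any line it reports is a good first place to look before resorting to bisecting the file.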

Related

Argument list too long error in while loop reading from infinite input stream

I have a script that prints my volume status. It checks the output of pactl subscribe to determine when something has changed. Currently I'm doing this with a while loop, and after the script has been running for a certain period of time (I can replicate quickly by holding a key to toggle mute for about a minute), the only output is "/usr/bin/grep: Argument list too long"
I've tried using < <(pactl subscribe), piping into the while loop, and also reading from a fifo. None of these work. Is this expected? If so, what would be the way to handle something like pactl subscribe that prints infinite output? Since the first error mentioned ponymix, I thought it might be an issue there, but using pamixer instead fixes nothing either.
The full script is here. Here is a relevant excerpt:
while read -r event; do
    if echo "$event" | grep --quiet --invert-match --ignore-case "client"; then
        print_volume
    fi
done < <(pactl subscribe)
I expect no errors. The first error is line 36: /usr/bin/ponymix: Argument list too long. The second error is line 36: /usr/bin/grep: Argument list too long. Then afterwards all output is line 88: /usr/bin/grep: Argument list too long.
Edit: This is not the same issue as the suggested duplicate caused by passing a long argument list to something. I am not using globbing like in that example.
The issue is that inside the print_volume function, I was repeatedly sourcing a file with exports in it. As pointed out by Charles Duffy, this caused the environment to grow too large.
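A minimal sketch of that failure mode: sourcing a file of exports in a loop makes an exported variable grow without bound, and since every child process receives a copy of the whole environment, exec eventually fails with "Argument list too long". The file and variable names here are made up for illustration:

```shell
# A file of exports, as in the broken print_volume function.
# Each source APPENDS to GROWING instead of setting it once.
cat > env_exports.sh <<'EOF'
export GROWING="$GROWING:/some/long/path/segment"
EOF

for i in 1 2 3; do
  . ./env_exports.sh
done

echo "${#GROWING}"   # the exported value grows on every source
```

With thousands of iterations (one per pactl event), the environment passed to grep and ponymix eventually exceeds the kernel's ARG_MAX limit, which covers environment size as well as argument size.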

Python3: Format SOME MYSQL warnings and write ALL to file

We have a script that handles data import. Now that most of the data is properly sanitized, we want to focus on fine-tuning the MySQL backend. This backend is in some cases too rigidly defined (i.e. strings are longer than the varchar allows ...). Since we have new data on a weekly basis, we want to log issues weekly so that we can use that log to check the data source and, if necessary, modify the backend.
For this the importscript needs to be modified slightly:
Non-1366 MySQL warnings need to be suppressed and pretty-printed (OK)
All MySQL warnings need to be written to a log file (OK)
Hide the default warning notice on import, because this floods the terminal with 1366 warnings. (NOT OK)
The code I have now is (This is only a small part out of a larger script):
into_file_operation = "LOAD DATA LOCAL INFILE '%s/%s.csv' INTO TABLE %s FIELDS TERMINATED BY ',' ENCLOSED BY '\"' ESCAPED BY '' LINES TERMINATED BY '\\r';" % (folder, name, name)
#warnings.filterwarnings("ignore")
cursor.execute(into_file_operation)
conn.commit()
warnings = conn.show_warnings()
for w in warnings:
    if w[1] != 1366:
        pprint(w, width=100, depth=2)  # pretty-print non-1366 warnings
    else:
        print(str(w[1]), end='\r')  # this can be turned into pass later
    errorfile.write(str(w) + "\n")  # write ALL warnings to file
    errorfile.flush()
errorfile.write("\n")
errorfile.write("********* DONE TABLE **********")
errorfile.write("\n")
errorfile.flush()
conn.close()
This meets the first two demands, yet it still outputs the very long warnings to the console, which we want to get rid of:
C:\Users\me\AppData\Local\Programs\Python\Python36-32\lib\site-packages\pymysql\cursors.py:166: Warning: (1366, "Incorrect integer value: '' for column 'y1' at row 126") result = self._query(query)
and errors like 1265 can be shown, but need to be pretty-printed. With the current code, these get printed twice (first the long raw warning, then a formatted warning via pprint).
I think my next step would be to do something with STDOUT, yet whatever I tried, I kept ending up with an empty database. I've also tried the filter with the ignore parameter, but then I don't get any warnings at all, which defeats the purpose of having the log.

node.js / oracledb driver limitations

Node.js ver: 9.2
Oracledb driver ver: 2.0.15
I have written an anonymous PL/SQL block with declaration, execution and exception sections, about 200 lines of code in total.
This runs perfectly fine when running directly on Oracle server or using any tool that can run it. However, running from within the .js file gives an error:
"detailed_message":"ORA-06550: line 1, column 3681:\nPL/SQL: ORA-00905: missing keyword\nORA-06550: line 1, column 3467:\nPL/SQL: SQL Statement ignored\nORA-06550: line 1, column 3736:\nPLS-00103: Encountered the symbol \"ELSE\" when expecting one of the following:\n\n ( begin case declare end exception exit for goto if loop mod\n null pragma raise return select update while with\n
Since the code runs fine on the server directly, I would not suspect any issues with the block itself. I also have another anonymous block of fewer than 100 lines that runs fine from the .js file.
I would like to know if there are any limitations in the db driver when running such a long block. (I would rather not store this procedure in the db either.)
There's no artificial limit on the PL/SQL block size in node-oracledb.
Check your syntax, e.g. quote handling. Note the current examples use backticks.
If you're concatenating quoted strings together, make sure each string ends or begins with whitespace:
"BEGIN " +
"FORALL ... " +
...
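The pitfall is easy to reproduce in any language: without the trailing space, adjacent fragments fuse into a single token, which is exactly the kind of thing that produces a "missing keyword" parse error at some column deep inside the one-line statement. A shell illustration (not node-oracledb code, just string concatenation):

```shell
# Concatenation without a separating space fuses the keywords.
bad="BEGIN""NULL;"
good="BEGIN ""NULL;"
echo "$bad"    # BEGINNULL;  -- one fused token, invalid PL/SQL
echo "$good"   # BEGIN NULL; -- two tokens, as intended
```

Since Oracle receives the whole block as a single line, the ORA-06550 column number (3681 here) points at where the parser gave up, which is usually just after the fused token.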

Fortran error check on formatted read

In my code I am attempting to read in output files that may or may not have a formatted integer in the first line of the file. To aid backwards compatibility I am attempting to be able to read in both examples as shown below.
head -n 3 infile_new
22
8
98677.966601475651 -35846.869655806520 3523978.2959464169
or
head -n 3 infile_old
8
98677.966601475651 -35846.869655806520 3523978.2959464169
101205.49395364164 -36765.047712555031 3614241.1159234559
The format of the top line of infile_new is '(i5)' and so I can accommodate this in my code with a standard read statement of
read(iunit, '(I5)' ) n
This works fine, but if I attempt to read in infile_old using this, I get an error, as expected. I have attempted to get around this by using the following:
read(iunit, '(I5)', iostat=ios, err=110) n
110 if (ios == 0) then
    print *, 'error in file, setting n'
    naBuffer = na
    !rewind(iunit) ! not sure whether to rewind or close/open to reset file position
    close(iunit)
    open(iunit, file=fname, status='unknown')
else
    print *, "Something very wrong in particle_inout"
end if
The problem here is that when reading either the old or the new file, the code ends up in the error branch. I've not been able to find much documentation on using the read statement in this way, and cannot determine what is going wrong.
My one theory was my use of ios == 0 in the if statement, but I figured that since I shouldn't get an error when reading the new file, it shouldn't matter. It would be great to know if anyone knows a way to catch such errors.
From what you've shown us, after the code executes the read statement it executes the statement labelled 110. If there wasn't an error, so that ios == 0, the true branch of the if construct is executed.
So if there is an error in the read, the code jumps to that statement, and if there isn't, it falls through to the same statement. The code doesn't magically know to skip the code starting at label 110 when there isn't an error in the read statement. Personally, I've never used both iostat= and err= in the same read statement, and here I think it's tripping you up.
Try changing the read statement to
read(iunit, '(I5)' , iostat=ios) n
You'd then need to re-work your if construct a bit, since iostat==0 is not an error condition.
Incidentally, to read a line which is known to contain only one integer I wouldn't use an explicit format, I'd just use
read(iunit, * , iostat=ios) n
and let the run-time worry about how big the integer is and where to find it.

Can't create arrays in while loop or if statement?

When I try to create arrays in my script I get errors.
id[1]=string2; generates the error id[1]=string2: not found
I'm guessing it has something to do with the fact that I'm in an if statement or while loop, since they use []? I'm running a VM, so attached is a picture of the script so far. The array assignment at the top, a[1]=string;, generates no errors, but the one inside the logic, id[1]=string2;, does.
Posting as an answer so this question can be marked as resolved:
Your script is being executed by sh, not bash. Add a correct shebang line as the first line of the script file:
#!/bin/bash
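A quick way to see the difference is to run the same assignment under both interpreters. A sketch; on systems where /bin/sh is a strict POSIX shell such as dash, the array assignment is rejected exactly as in the question:

```shell
# The failing construct from the question, saved to a small script.
cat > arr_demo.sh <<'EOF'
id[1]=string2
echo "${id[1]}"
EOF

# Under bash the array assignment works and prints the element.
bash arr_demo.sh

# Under plain sh it may fail with "id[1]=string2: not found",
# depending on which shell /bin/sh points to.
sh arr_demo.sh 2>/dev/null || echo "sh rejected the array assignment"
```

This is also why running the script as ./script.sh with the bash shebang works, while invoking it as sh script.sh ignores the shebang and reintroduces the error.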
