I'm trying to get the standard Vim :clist / :cope quickfix functionality working with PL/SQL compilation output.
Specifically, I'm trying (and failing) to capture the filename from the compiler output.
I have the PL/SQL code compiling okay (well, when there are no errors =)), and I've got errorformat picking up the error messages, line numbers and column numbers, but I can't get it to pick up the filename (which Vim needs in order to jump to the file).
This is the best errorformat I've been able to come up with:
:set efm=%+P[%f],%E%l/%c%m,%C%m,%Z
This is the output from the compiler. It only has the filename on the first line, which I try to pick up with the %+P[%f] pattern.
Output:
[code/voyager/db/db_source/pck_policy_2.pks]
SQL*Plus: Release 10.2.0.1.0 - Production on Tue Sep 8 14:51:24 2009
Copyright (c) 1982, 2005, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL>
Warning: Package created with compilation errors.
SQL> Errors for PACKAGE PCK_POLICY_2:
LINE/COL ERROR
-------- --------------------------------------------------------------
21/7 PLS-00103: Encountered the symbol "EXP_PBIT_ID" when expecting
one of the following:
:= . ) , # % default character
The symbol "," was substituted for "EXP_PBIT_ID" to continue.
185/1 PLS-00103: Encountered the symbol "END" when expecting one of the
following:
constant exception <an identifier>
<a double-quoted delimited-identifier> table long double ref
char time timestamp interval date binary national character
nchar
LINE/COL ERROR
-------- -------------------------------------------------------------
SQL> Disconnected from Oracle Database 11g Enterprise Edition
:clist afterwards shows the errors have been caught, but not the filename:
:clist
19:21 col 7 error: PLS-00103: Encountered the symbol "EXP_PBIT_ID" when expecting one of the following: := . ) , # % default character The symbol "," was substituted for "EXP_PBIT_ID" to continue.
21:185 col 1 error: PLS-00103: Encountered the symbol "END" when expecting one of the following: constant exception <an identifier> <a double-quoted delimited-identifier> table long double ref char time timestamp interval date binary national character nchar
Does anyone know how I can make this pick up the filename?
Thanks
Dave Smylie
The syntax you're using looks OK to me, so all I have to offer are thoughts...
Even though the file names are not displayed, can you still jump to the correct line in the quickfix window?
Does the compiler produce any output before the file name? Have you tried a %E before the %+P?
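For instance, since SQL*Plus prints banner lines that match none of the patterns, one variant to experiment with is an explicit catch-all ignore at the end (an untested sketch, not a verified fix):

:set efm=%+P[%f],%E%l/%c%m,%C%m,%Z,%-G%.%#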
Even though it's not the error format you are looking for, have you tried the oracle.vim plugin?
Knowing NOTHING about efm or vim or this parsing you are trying to do, I would say that you might have better luck performing a query to get the error information. Check out the view user_errors; it should be easier to get the data the way you want using this method.
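For example (a sketch; user_errors is Oracle's documented per-schema compilation-error view, and the package name is taken from the output above):

SELECT line, position, text
  FROM user_errors
 WHERE name = 'PCK_POLICY_2'
   AND type = 'PACKAGE'
 ORDER BY sequence;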
I am doing a full extract from a table ABC. In the copy activity, I have given this query:
select * from ABC
I am facing an issue for a few rows (they contain special characters: Japanese and Korean):
Error code: 2200
Failure type: User configuration issue
Details: Failure happened on 'Source' side. ErrorCode=DB2DriverRunFailed,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=Error thrown from driver. Sql code: '-343',Source=Microsoft.DataTransfer.ClientLibrary.Db2Connector,''Type=Microsoft.HostIntegration.DrdaClient.DrdaException,Message=HISMPCB0001 In BasePrimitiveConverter an exception has occurred. Exception description: Output buffer is smaller than required size 12 SQLSTATE=HY000 SQLCODE=-343,Source=Microsoft.HostIntegration.Drda.Requester,'
The character which is causing the issue is '轎ᆃ '
The error description states that a BasePrimitiveConverter exception has occurred, and that the output buffer is smaller than the required size. So, please try converting the column to an acceptable type like GRAPHIC in DB2. Refer to the following link to understand more.
https://bytes.com/topic/db2/answers/488983-storing-some-japanese-data
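For example, assuming the Japanese/Korean text lives in a VARCHAR column named DESCRIPTION (a hypothetical name; note that, depending on the DB2 version, an in-place type change may not be permitted, in which case the column has to be recreated):

ALTER TABLE ABC ALTER COLUMN DESCRIPTION SET DATA TYPE VARGRAPHIC(512)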
Referring to these links, I understand that this error might be due to the datatype of the source column, or the encoding used. Try working with the different encoding options available in your source dataset. Here is a thread with a different source but the same problem of not being able to retrieve special characters:
https://learn.microsoft.com/en-us/answers/questions/467456/failure-happened-source-side-in-copy-activity-for.html
We have a script that handles data import. Now that most of the data is properly sanitized, we want to focus on fine-tuning the MySQL backend. This backend is in some cases too rigidly defined (e.g. strings are longer than the VARCHAR columns allow). Since we get new data on a weekly basis, we want to log the warnings weekly so that we can use that log to check the data source and, if necessary, modify the backend.
For this, the import script needs to be modified slightly:
Non-1366 MySQL warnings need to be suppressed and pretty-printed (OK)
All MySQL warnings need to be written to a log file (OK)
Hide the default error notice on import, because this floods the terminal with 1366 warnings (NOT OK)
The code I have now is (this is only a small part of a larger script):
into_file_operation = "LOAD DATA LOCAL INFILE '%s/%s.csv' INTO TABLE %s FIELDS TERMINATED BY ',' ENCLOSED BY '\"' ESCAPED BY '' LINES TERMINATED BY '\\r';" % (folder, name, name)
#warnings.filterwarnings("ignore")
cursor.execute(into_file_operation)
conn.commit()
mysql_warnings = conn.show_warnings()  # fetch before closing; renamed so it no longer shadows the warnings module
for w in mysql_warnings:
    if w[1] != 1366:
        pprint(w, width=100, depth=2)  # pretty-print non-1366 warnings
    else:
        print(str(w[1]), end='\r')  # this can be turned into pass later
    errorfile.write(str(w) + "\n")  # write ALL warnings to the log
    errorfile.flush()
conn.close()  # close() needs parentheses; a bare conn.close is a no-op
errorfile.write("\n")
errorfile.write("********* DONE TABLE **********")
errorfile.write("\n")
errorfile.flush()
This meets the first two demands, yet it still outputs the very long warnings to the console, which we want to get rid of:
C:\Users\me\AppData\Local\Programs\Python\Python36-32\lib\site-packages\pymysql\cursors.py:166: Warning: (1366, "Incorrect integer value: '' for column 'y1' at row 126") result = self._query(query)
Errors like 1265 can be shown, but they need to be pretty-printed. With the current code, these get printed twice (first the long warning, then a formatted warning via pprint).
I think my next step would be to do something with stdout, yet whatever I tried, I kept ending up with an empty database. I've also tried to use the filter with the ignore parameter, but then I don't get any warnings at all, which defeats the purpose of having the log.
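One approach that might satisfy the third demand without losing the log (a sketch, assuming pymysql's console notice is an ordinary Python warning and that the server-side SHOW WARNINGS result is unaffected by Python-level filtering):

import warnings

with warnings.catch_warnings():
    warnings.simplefilter("ignore")  # hide pymysql's console warning around execute() only
    cursor.execute(into_file_operation)
conn.commit()
mysql_warnings = conn.show_warnings()  # the server still reports the warnings here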
Node.js ver: 9.2
Oracledb driver ver: 2.0.15
I have written an anonymous PL/SQL block with declaration, execution and exception sections, about 200 lines of code in total.
This runs perfectly fine when run directly on the Oracle server or using any tool that can run it. However, running it from within the .js file gives an error:
"detailed_message":"ORA-06550: line 1, column 3681:\nPL/SQL: ORA-00905: missing keyword\nORA-06550: line 1, column 3467:\nPL/SQL: SQL Statement ignored\nORA-06550: line 1, column 3736:\nPLS-00103: Encountered the symbol \"ELSE\" when expecting one of the following:\n\n ( begin case declare end exception exit for goto if loop mod\n null pragma raise return select update while with\n
Since the code runs fine on the server directly, I would not suspect any issues with the block itself. I also have another anonymous block of fewer than 100 lines of code that runs fine from the .js file.
I would like to know if there are any limitations with the db driver when running such a long block. (I would not want to store this procedure in the db either.)
There's no artificial limit on the PL/SQL block size in node-oracledb.
Check your syntax, e.g. quote handling. Note the current examples use backticks.
If you're concatenating quoted strings together, make sure each string ends or begins with whitespace:
"BEGIN " +
"FORALL ... " +
...
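For comparison, a minimal template-literal version (a sketch; `connection` is assumed to be an open node-oracledb connection inside an async function):

// backticks preserve whitespace, avoiding the concatenation pitfalls above
const plsql = `
  BEGIN
    NULL;  -- block body goes here
  END;`;
const result = await connection.execute(plsql);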
OK, so my latest project requires loading an Excel 2007 spreadsheet into a SQL Server table. I'm working in SSIS 2008R2. Based on some stuff I found on the internet, I opened the Excel source in Advanced editor and changed the datatype of the long column to DT_NTEXT, so that it wouldn't truncate it. Then I made the database column VARCHAR(MAX). This runs correctly in debug mode on my laptop.
Then I deployed it to the development server and attempted to load the same test file. It failed with the following error messages:
Error: Code: 0xC0208265
Source: Main Data Flow Task Get Main Data [1]
Description: Failed to retrieve long data for column "DESCR".
End Error
Error: Code: 0xC020901C
Source: Main Data Flow Task Get Main Data [1]
Description: There was an error with output column "DESCR" (72) on output "Excel Source Output" (9). The column status returned was: "DBSTATUS_UNAVAILABLE".
End Error
Error: Code: 0xC0209029
Source: Main Data Flow Task Get Main Data [1]
Description: SSIS Error Code DTS_E_INDUCEDTRANSFORMFAILUREONERROR. The "output column "DESCR" (72)" failed because error code 0xC0209071 occurred, and the error row disposition on "output column "DESCR" (72)" specifies failure on error. An error occurred on the specified object of the specified component. There may be error messages posted before this with more information about the failure.
End Error
Searching for information about the error, I found about a million sites offering the same three suggested solutions:
Add 'IMEX=1' to the extended properties of the connection string.
It was already there.
Change the TypeGuessRows key in the registry.
This was set to zero on the server, which I understand to mean that it should look at the entire file. Nevertheless, I changed it to 8 to match my laptop. The same error occurred when I ran it again. Then I changed it to 1,763, which is more than the number of rows in the spreadsheet. It still gave the same error. So, I put it back to zero. (There's a 1,900-character value in the first row of my test file, so it shouldn't really matter how many it checks, in this case.)
Change the datatype to DT_WSTR(4000) in the source.
The column is supposed to have up to 10,000 characters, so I'm not sure this would be a good idea even if it worked. However, I tried it anyway. This time it gave me a truncation error. I changed the truncation error disposition to "ignore failure" and it loaded the data, but truncated the value to 255 characters. I have verified that the length is 4000 and doesn't get changed when I save the file, but it's still truncating at 255 characters.
I have no idea what else to look at. Any help would be appreciated.
UPDATE 1/29: The package, without any changes, works correctly when running on the pre-production server. It still fails when running on the development server. Both servers have the same version of SSIS (including minor version numbers) as well as the same versions of Windows, Access and Excel. I do not know how to explain this, nor do I know how to tell if it would work in production.
I created a new package with similar non-functional requirements (Excel 2007 file, SSIS 2008, SQL Server 2008 R2, VARCHAR(MAX) target column) and it worked just fine after deployment into the database server. My package:
Metadata at the Excel Source component's output (checked using Advanced Editor): DT_NTEXT
Derived Column component between source and destination to cast from Unicode to non-Unicode using (DT_TEXT,1252) (see the expression sketch after this list)
Metadata at the OLE DB Destination component's input (checked using Advanced Editor): DT_TEXT
Target Column data type: VARCHAR(MAX)
I do not explicitly use the extended property IMEX in the connection
Executed by right-clicking on the package at the database server, and loaded a file with a few thousand characters per record into the table without truncation. Hope this helps
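For illustration, the cast in the Derived Column editor would look something like the following SSIS expression (the column name DESCR is assumed from the error messages earlier in this thread):

(DT_TEXT,1252)DESCR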
I have faced this issue while importing an Excel file with a field containing more than 255 characters. I solved it using Python.
Simply import the Excel file into a pandas DataFrame and calculate the length of each of those string values per row.
Then sort the DataFrame in descending order on that length. This enables SSIS to allocate maximum space for that field, since it scans only the first few rows to decide the storage:
import pandas as pd
from pandas import ExcelWriter

df = pd.read_excel(f, sheet_name=0, skiprows=1)  # f is the path of the source workbook
df = df.drop(df.columns[[0]], axis=1)  # drop the first column
df['length'] = df['Item Description'].str.len()  # character count of the long field
df.sort_values('length', ascending=False, inplace=True)  # longest values first
writer = ExcelWriter('Clean/Cleaned_' + f[5:])
df.to_excel(writer, sheet_name='Billing', index=False)
writer.save()
Here I am using `set numCut [scan $inline1 "%d"]` in a Tcl script on a Linux server, but after executing the script it shows the error below:
`different numbers of variable names and field specifiers`
The value of the variable `$inline1` is `2) "NYMEX UTBAPI Worker" (NYMEX UTBAPI Poller): STOPPED`
I searched Google for this and found the following description:
0x1771b07c tcl_s_cmdmz_diff_num_var_field
Text: Different numbers of variable names and field specifiers
Severity: tcl_c_general_error
Component: tcl / tcl_s_general
Explanation: The scan command detected that the number of variable names provided differs from the number of field specifiers provided.
Action: Verify that the number of variable names is the same as the number of field specifiers.
Can anyone help me work out how to solve this issue? Thanks in advance.
The ability to return the matched fields was added in Tcl 8.5. Prior to that, you had to supply a variable for each field that you had in the scan, and the result would be the number of fields matched (and it still is if you provide variable names).
Change:
set numCut [scan $inline1 "%d"]
to:
scan $inline1 "%d" numCut
Or switch to a more recent version of Tcl if you can, as 8.4 is almost out of its extended support period. (There will be a final patch release this summer to address some minor issues with build problems on recent systems, but that's it. We won't support it after that.)
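For illustration, a minimal sketch of both calling conventions, using the value from the question:

set inline1 {2) "NYMEX UTBAPI Worker" (NYMEX UTBAPI Poller): STOPPED}
# 8.4 and earlier: variables are required; the result is the match count
set matched [scan $inline1 "%d" numCut]   ;# matched == 1, numCut == 2
# 8.5 and later: with no variables, scan returns the matched fields instead
set fields [scan $inline1 "%d"]           ;# fields == 2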
I think the Tcl error message is telling you that the number of specifiers in your format string (%d) differs from the number of variables in your Tcl command scan $inline1 "%d".
So you have one format specifier and no variables, and that's what the Tcl interpreter is telling you.
Try changing your command to scan $inline1 "%d" numCut and see if that works any better.