Sybase bcp doesn't export field headers? - sap-ase

Does anybody know how to force Sybase bcp to export column headers along with the data?
The utility documentation curiously ignores this important feature... perhaps I am missing something.
Any help will be greatly appreciated.

It looks like this older post could give you some answers.
export table to file with column headers (column names) using the bcp utility and SQL Server 2008
Unfortunately, BCP does not support a command-line flag that adds the headers, so they have to be produced separately.
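A common workaround, sketched below with placeholder server, login, table and delimiter values, is to build the header row yourself from the catalog and then append the bcp output underneath it:
# 1. pull the column names out of syscolumns with isql (-b suppresses isql's own headers)
isql -U myuser -P mypass -S MYSERVER -b <<EOF > cols.txt
select name from mydb..syscolumns where id = object_id('mydb..mytable') order by colid
go
EOF
# 2. drop padding/status lines and join the names into one delimited header row
grep -v 'affected' cols.txt | sed 's/ *$//;/^$/d' | paste -s -d'|' - > export.txt
# 3. export the data itself with bcp and append it under the header
bcp mydb..mytable out data.txt -U myuser -P mypass -S MYSERVER -c -t '|'
cat data.txt >> export.txt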

Related

Using timestamp in file loaded with SQL Loader

I need to load a dat file named XXXX into an Oracle database every day. The thing is, the file name carries a timestamp, like XXXX20191120.dat.
Is it possible to create a configuration in the .ctl file so that the INFILE '/blaa/blaa/blaa/XXXXX20191120.dat' part can be different each day? If so, please give an example.
If this has to be done with a separate shell script, please give an example of that instead.
Thank you all.
If you need to use a different filename each time, don't put it in the ctl file; use the command-line parameter DATA instead, e.g.
data=/bla/bla/xxxxx20191121.dat
Look at the documentation; I gave the 12.1 reference since you did not specify which version you're using.
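If the file name has to follow today's date, a small wrapper script can build it and pass it in on the command line. This is only a sketch, with placeholder credentials, paths and a made-up control-file name:
# build today's file name (e.g. XXXX20191120.dat) and hand it to SQL*Loader via data=
TODAY=$(date +%Y%m%d)
sqlldr userid=scott/tiger control=/blaa/blaa/load_xxxx.ctl \
       data=/blaa/blaa/blaa/XXXX${TODAY}.dat \
       log=/blaa/blaa/load_xxxx_${TODAY}.log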

Cassandra database import issue for timeuuid

I have installed Cassandra 2.2.12 locally on my Windows machine. I have exported the database from the live server into a '.sql' file using the 'razorsql' GUI tool. I don't have server access for the live environment, only database access. When I try to import the '.sql' file into my local Cassandra setup using 'razorsql', it gives me the error: Invalid STRING constant '8ca25030-89ab-11e7-addb-70a0656e5127' for "id" of type timeuuid.
Even when I tried using the COPY FROM command, it returned the same error. Please find the attached screenshot for more detail on the error.
Could anybody please help?
You should not put the value in quotes, because then it gets interpreted as a string instead of a UUID, hence the error message.
See also: Inserting a hard-coded UUID via CQLsh (Cassandra)
I think you have two solutions:
edit your export file and remove the single quotes from the inserts (see the sketch after this list).
rerun the export, this time exporting the data as CSV, and run the COPY command in cqlsh; in that case the CSV file will not have quotes.
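For the first option, the only difference is the quoting of the timeuuid literal. A sketch with a made-up keyspace, table and column list (only the id value comes from the error message):
# fails: the quoted value is parsed as a string, not a timeuuid
cqlsh -e "INSERT INTO mykeyspace.mytable (id, name) VALUES ('8ca25030-89ab-11e7-addb-70a0656e5127', 'example');"
# works: the timeuuid literal is left unquoted
cqlsh -e "INSERT INTO mykeyspace.mytable (id, name) VALUES (8ca25030-89ab-11e7-addb-70a0656e5127, 'example');"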

Oracle Data Integrator Oracle to Excel encoding

I am trying to export my Oracle view data to an Excel sheet using Oracle Data Integrator.
English text is exported fine, but Russian (Cyrillic) text comes out wrong!
Please help me figure out how to configure the data sources and encoding.
After the export to Excel, the data has cp1252 encoding, but there is no place where such an encoding is configured!
Information:
The Oracle DS uses jdbc:oracle:thin and
NLS_LANG=AMERICAN_AMERICA.CL8MSWIN1251
(set in the Windows registry and environment variables, same as the database).
ODI is started with these options in product.conf:
AddVMOption -Dfile.encoding=Cp1251
AddVMOption -Dsun.jnu.encoding=Cp1251
AddVMOption -Duser.language=ru
AddVMOption -Duser.country=RU
(and I see these values in Help-About-Properties).
The Excel DS uses jdbc:odbc with the
charSet=cp1251
property.
Oracle 12c, ODI 12c.
If I execute simple Java code with the
-Dfile.encoding=Cp1251
option, Russian text displays correctly, but not through ODI...
I would be glad of any advice!
I want to answer my own question.
Maybe it is obvious to everyone else, but it took me a while to find the solution...
I needed to change the Windows property "Current language for non-Unicode programs" from English to Russian.
After that I don't even need the "AddVMOption -Dfile.encoding=Cp1251" options in ODI; they are set correctly once that property is changed.
I hope this information is helpful to someone.

Bulk Insert csv file into Azure

I want to import a CSV data file straight into a table I've created in Azure. I understand that BULK INSERT isn't supported in Azure.
Can anyone please suggest a method for importing a flat file straight into Azure? I want to be able to do this straight from the command line (so that I can monitor the execution time using set statistics io on;), and without going through SSIS or other applications. I've searched for guidance on this, but all the articles I find appear to reference using BCP, which appears to be an add-in?
Grateful for any suggestions.
R,
Jon
I've actually used the method stated here: Bulk insert with Azure SQL.
Basically, you bulk insert your data into a local MSSQL database (which is supported).
Then create a txt file with all the data and bulk insert it into your Azure table by using the BCP command at a command prompt.
I ended up doing the same as Jefferey, but in case the site goes down, here's the syntax:
BCP <databasename>.dbo.<tablename> IN <localfolder>\<filename>.txt -S <servername> -d <database> -U <username>#<servername> -P <password> -q -c -C -t ;
The -C option enables you to use UTF-8, which is needed for cases with special characters (like æøå).

Cassandra: No column definition found for column error

I'm a newbie to Cassandra. I tried to insert a log file into Cassandra using the GUI tool Spoon, from Pentaho Data Integration. The file has the following fields: Timestamp, process_id, IP, child_id, dev_id, filter_level_id, uh, URL. The log file did not have a header, so I typed these column names into the file and loaded it into Cassandra using the tool. But when I tried to create an index on the IP field using cql3, I got the following error:
No column definition found for column ip
Am I doing something wrong? How can this be resolved?
Thanks in advance for any help.
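For reference, a sketch of the step being attempted, with made-up keyspace and table names since the question does not give them; the error usually means cqlsh cannot find a column with that exact (lower-cased) name in the table's schema, so it is worth checking the definition first:
# inspect the table definition to confirm a column named ip really exists
cqlsh -e "DESCRIBE TABLE mykeyspace.weblog;"
# the index creation that raises the error when the column is not defined
cqlsh -e "CREATE INDEX ip_idx ON mykeyspace.weblog (ip);"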
