I want to import a CSV data file straight into a table I've created in Azure. I understand that BULK INSERT isn't supported in Azure.
Can anyone suggest a method for importing a flat file straight into Azure? I want to be able to do this from the command line (so that I can monitor the execution time using SET STATISTICS IO ON;), and without going through SSIS or other applications. I've searched for guidance on this, but all the articles I find appear to reference BCP - is that an add-in?
Grateful for any suggestions.
R,
Jon
I've actually used the method stated here: Bulk insert with Azure SQL.
Basically, you bulk insert your data into a local MSSQL database (which is supported).
Then create a txt file with all the data and bulk insert it into your Azure table using the BCP command from the command prompt.
I ended up doing the same as Jefferey, but in case the site goes down here's the syntax:
BCP <databasename>.dbo.<tablename> IN <localfolder>\<filename>.txt -S <servername> -d <database> -U <username>@<servername> -P <password> -q -c -C -t ;
The -C option lets you specify the code page, e.g. UTF-8, which is needed for special characters (like æøå).
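For example, with made-up server, database, and file names, and a semicolon-delimited file, the full command could look something like this (a reasonably recent bcp accepts 65001 as the UTF-8 code page for -C):
BCP MyDb.dbo.MyTable IN C:\data\mytable.txt -S myserver.database.windows.net -d MyDb -U myuser@myserver -P MyPassword -q -c -C 65001 -t ;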
I'm using EXEC CICS SYNCPOINT and EXEC CICS SYNCPOINT ROLLBACK to commit/back out updates to VSAM and DB2 tables when an abend happens. However, only updates to DB2 tables are backed out, not those to VSAM. Am I missing something? The CICS parameter RLS is set to RLS=NO.
It will depend on the type of files you are using. If you are using RLS files, then you have to define the files correctly with IDCAMS, using the LOG parameter; see:
https://www.ibm.com/docs/en/zos/2.2.0?topic=cics-recoverable-nonrecoverable-data-sets
If you are using non-RLS files then you need to set the attributes correctly on your FILE definition.
See the following page in the CICS documentation, which describes file recovery:
https://www.ibm.com/docs/en/cics-ts/5.6?topic=resources-recovery-files
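As a rough sketch of the RLS case (the data set name and sizes below are made up), the recoverability comes from the LOG attribute on the IDCAMS definition:
DEFINE CLUSTER ( NAME(PROD.CUSTOMER.KSDS) -
  INDEXED KEYS(10 0) RECORDSIZE(200 400) -
  LOG(UNDO) )
LOG(UNDO) gives dynamic transaction backout only; LOG(ALL) adds forward recovery and also requires LOGSTREAMID; LOG(NONE) means the data set is non-recoverable, so updates are not backed out at syncpoint rollback.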
I need to load a dat file named XXXX into an Oracle database every day. The thing is, the file name carries a timestamp, like: XXXX20191120.dat
Is it possible to set up the .ctl file so that the INFILE '/blaa/blaa/blaa/XXXXX20191120.dat' part can be different each day? If so, please give an example.
If this has to be done with a separate shell script, please give an example.
Thank you all
If you need to use a different filename each time, don't put it in the ctl file; use the command-line parameter DATA, e.g.
data=/bla/bla/xxxxx20191121.dat
Look at the doc; I gave the 12.1 reference as you did not specify which version you're using.
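If you want to drive it from a shell script, a minimal sketch could look like this (the control file name and credentials are made up); it builds today's file name and passes it via the DATA parameter:
TODAY=$(date +%Y%m%d)
sqlldr userid=scott/tiger control=/blaa/blaa/blaa/load_xxxx.ctl data=/blaa/blaa/blaa/XXXX${TODAY}.dat log=/blaa/blaa/blaa/XXXX${TODAY}.log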
I have installed Cassandra 2.2.12 locally on my Windows machine. I have exported the database from the live server into a '.sql' file using the 'razorsql' GUI tool. I don't have server access for live, only database access. When I try to import the '.sql' file into my local Cassandra setup using 'razorsql', it gives me an error (Invalid STRING constant '8ca25030-89ab-11e7-addb-70a0656e5127' for "id" of type timeuuid).
I even tried using the COPY FROM command, and it returns the same error. Please find the attached screenshot for more detail on the error.
Could anybody please help?
You should not put any quotes around the value, because then it gets interpreted as a string instead of a UUID - hence the error message.
See also: Inserting a hard-coded UUID via CQLsh (Cassandra)
I think you have two solutions:
Edit your export file and remove the single quotes from the inserts.
Rerun the export, export the data as CSV, and run the COPY command in cqlsh. In this case, the CSV file will not have quotes.
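To illustrate (keyspace, table, and column names here are made up), the difference is just the quotes around the timeuuid value:
-- fails: the quoted value is a string literal, not a timeuuid
INSERT INTO my_ks.my_table (id, name) VALUES ('8ca25030-89ab-11e7-addb-70a0656e5127', 'x');
-- works: unquoted timeuuid literal
INSERT INTO my_ks.my_table (id, name) VALUES (8ca25030-89ab-11e7-addb-70a0656e5127, 'x');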
I have an SSIS package that loads an Excel file from a network drive. It's designed to load the content and then move the file to an archive folder.
Everything works fine when the following SQL statement runs in an SSMS window.
However, when it's copied into a SQL Agent job and executed from there, the file is neither loaded nor moved, yet the agent log shows "successful".
The same thing also happens with an "SSIS job" step instead of a T-SQL step, even with a proxy for a Windows account (the same account as the SSMS login).
Declare @execution_id bigint
EXEC [SSISDB].[catalog].[create_execution] @package_name=N'SG_Excel.dtsx', @execution_id=@execution_id OUTPUT, @folder_name=N'ETL', @project_name=N'Report', @use32bitruntime=True, @reference_id=Null
Select @execution_id
DECLARE @var0 smallint = 1
EXEC [SSISDB].[catalog].[set_execution_parameter_value] @execution_id, @object_type=50, @parameter_name=N'LOGGING_LEVEL', @parameter_value=@var0
EXEC [SSISDB].[catalog].[start_execution] @execution_id
GO
P.S. At first a relative path to the network drive was used; I then switched to an absolute path (\\server\folder). That did not solve the issue.
SSIS package jobs run under the context of the SQL Server Agent. What account is set up to run the SQL Server Agent on the SQL Server? It may need to run as a domain account that has access to the network share.
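If you're not sure which account that is, one way to check (assuming SQL Server 2008 R2 SP1 or later) is:
SELECT servicename, service_account
FROM sys.dm_server_services;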
Or you can copy the Excel file to a local folder on the SQL Server, so the package can access the file there.
Personally, I avoid the File System Task - I have found it unreliable. I would replace that with a Script Task and use .NET methods from the System.IO namespace, e.g. File.Move. These are far more reliable and have mature error handling.
Here's a starting point for the System.IO namespace:
https://msdn.microsoft.com/en-us/library/ms404278.aspx
Be sure to select the relevant .NET version using the Other Versions link.
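As a rough sketch of what the Script Task body could look like (C#; the paths are hard-coded here for illustration and would normally come from package variables):
// Main() of the SSIS Script Task: move the processed file into the archive folder
string source = @"\\server\folder\SG_Excel.xlsx";            // hypothetical file name
string target = @"\\server\folder\Archive\SG_Excel.xlsx";
if (System.IO.File.Exists(target))
    System.IO.File.Delete(target);    // File.Move throws if the target already exists
System.IO.File.Move(source, target);
Dts.TaskResult = (int)ScriptResults.Success;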
When I have seen things like this in the past, it's been that my package isn't accessing the path I thought it was at run time; it's looking somewhere else, finding an empty folder, and exiting with success.
SSIS can have a nasty habit of going back to variable defaults. It may be looking at a different path you used in dev. Maybe hard-code all path values as a test, or put in breakpoints and double-check the run-time values of all variables and parameters.
Other long shots may be:
Name resolution, are you sure the network name is resolving correctly at runtime?
32/64-bit issues. Dev tends to run 32-bit, live may be 64-bit, which may interfere with file paths. Maybe force 32-bit at run time?
There is an issue with the SQL statements not having statement terminators (;), and that is what is causing the problem.
Declare @execution_id bigint ;
EXEC [SSISDB].[catalog].[create_execution] @package_name=N'SG_Excel.dtsx', @execution_id=@execution_id OUTPUT, @folder_name=N'ETL', @project_name=N'Report', @use32bitruntime=True, @reference_id=Null ;
Select @execution_id ;
DECLARE @var0 smallint = 1 ;
EXEC [SSISDB].[catalog].[set_execution_parameter_value] @execution_id, @object_type=50, @parameter_name=N'LOGGING_LEVEL', @parameter_value=@var0 ;
EXEC [SSISDB].[catalog].[start_execution] @execution_id ;
GO
I have faced a similar issue with Service Broker.
I need to get a dump (with data) from a remote Cassandra database. I was able to get the database schema via the following command. How can I get all the data in the keyspace?
I'm using Cassandra 1.1.9
echo -e "connect localhost/9260;\r\n use PWC_Keyspace;\r\n show schema;\n" | bin/cassandra-cli -h localhost -port 9260 > dilshan.cdl
With Cassandra 1.1.9, I don't believe you have access to cqlsh with the COPY TO command, so you'll be stuck with two options.
1) Export the data from the data files (sstables) on disk using sstable2json, or
2) Write a program to iterate over every row and copy/serialize it to a format you find easier to work with.
You MAY be able to use a more recent cqlsh (say, from 2.0, which still used Thrift instead of the native interface), point it at your 1.1.9 server, and use 'COPY TO' to export each table to a CSV. However, the COPY command in cqlsh for 2.0 doesn't use paging, and Cassandra 1.1.9 doesn't support paging, so there's a very good chance it will simply time out and fail.
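For option 1, the invocation is roughly as follows (the column family and sstable file names are placeholders; point it at the relevant -Data.db files under your data directory):
bin/sstable2json /var/lib/cassandra/data/PWC_Keyspace/<columnfamily>/<sstable>-Data.db > dump.json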