How to use the imp command to overwrite existing data - linux

I am using the imp command to import a database, but when we run imp a second time the data is inserted again, so every row ends up duplicated. We want to remove the old data and insert fresh data instead.
This is what I tried...
Please suggest a specific parameter that solves this type of problem.
Thanks, and sorry for my English.

IMPDP has the parameter: TABLE_EXISTS_ACTION = {SKIP | APPEND | TRUNCATE | REPLACE}
table_exists_action=skip: This says to ignore the data in the import file and leave the existing table untouched. This is the default and it is not a valid argument if you set content=data_only.
table_exists_action=append: This says to append the export data onto the existing table, leaving the existing rows and adding the new rows from the dmp file. Of course, the number and types of the data columns must match to use the append option. Just like the append hint, Oracle will not re-use any space on the freelists, and the high-water mark for the table will be raised to accommodate the incoming rows.
table_exists_action=truncate: This says to truncate the existing table rows, leaving the table definition and replacing the rows from the expdp dmp file being imported. To use this option you must not have any referential integrity (constraints) on the target table. You use the table_exists_action=truncate when the existing table columns match the import table columns. The truncate option cannot be used over a db link or with a cluster table.
table_exists_action=replace: This says to drop the existing table and replace both the table definition and rows from the import dmp file. To use this option you must not have any referential integrity (constraints) on the target table. You use table_exists_action=replace when the existing table columns do not match the import table columns.
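For your case (remove the old data and load fresh rows), a Data Pump import with table_exists_action=replace or truncate is the usual approach. A minimal sketch, assuming the dump was taken with expdp and that the schema, connection, directory object, and file names below are placeholders:

impdp scott/password@orcl directory=DATA_PUMP_DIR dumpfile=mydb.dmp schemas=scott table_exists_action=replace logfile=imp_mydb.log

The legacy imp utility has no equivalent of table_exists_action, so if you must stay with imp, drop or truncate the target tables yourself before importing, or re-export with expdp so you can use impdp.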

Related

How to make a file in Excel that refreshes from Query Editor and work on it

I am using Power Query Editor to create a working file, using multiple tables from several sources.
After I combine these and make my working file, I use it to do some work in columns I add later to the working file.
I have noticed that the values I enter in the working file are not bound to the main key (let's assume the first column), but are independent values in a column.
The result is that if one table changes, for example one line is deleted or I change the sorting of the Query, my working file is wrong, since the data changed but the added columns remain as they were.
Is there a way to have the added columns to be bound with a value, as it is for example with VLOOKUP?
How can I make a file that updates from different sources but still lets me work on it without the risk of misplacing the work I do?
I hope I am clear.
Thank you in advance!
This is fairly simple if each line in your table is unique (in your example you say the first column can serve as a key). Set up your working columns on the table and then load the table into PQ (as a connection only). Then go to your original query that is combining your data and add a merge at the end, where you merge against the table you just loaded into PQ and match on your key. Then expand only your working columns from the merge.
This way, whenever you refresh your table, it will match lines against its existing output in your work before updating, so data in your work columns will be maintained. Note, however, that this only retains values, not any formulas you may be using in your work columns.
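As a rough sketch of that merge step in M (the query names CombinedData and WorkingTable, the key column ID, and the work columns Notes and Status are all hypothetical placeholders):

let
    Source = CombinedData,
    // left-join the connection-only worksheet table back onto the combined data by key
    Merged = Table.NestedJoin(Source, {"ID"}, WorkingTable, {"ID"}, "Work", JoinKind.LeftOuter),
    // keep only the manual work columns from the merge
    Expanded = Table.ExpandTableColumn(Merged, "Work", {"Notes", "Status"})
in
    Expanded

WorkingTable here is the worksheet table (including your manual columns) loaded back into Power Query as a connection-only query, so each refresh re-attaches your manual values by key.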

Ignore extra columns in CSV when using COPY FROM in Cassandra

I have a table with two columns id, name and I am ingesting data from a 3rd party that may add data to each row without my knowledge. I want to ensure the file still loads until I am ready to change my data model, ignoring any extra columns that are added to the end of each row.
E.g. If the csv file changes to id, name, email at some point, I want my COPY FROM query to continue happily loading id, name.
I've tried to implement SKIPCOLS, but as far as I can see, this only seems to work when there is a match between fields in the first place.
i.e. The following is the correct usage of COPY FROM with SKIPCOLS but will not help my needs as I can't seem to reference a column that only exists in the csv.
COPY users (id, name) FROM 'users.csv' WITH SKIPCOLS = 'name';
Is there another way to do this or a different way to use SKIPCOLS that I am missing?

Copying Data into Cassandra table

Can we import/copy multiple files into a Cassandra table, where the files and the table have the same column names?
COPY table1(timestamp ,temp ,total_load ,designl) FROM 'file1', 'file2' WITH HEADER = 'true';
I tried the above syntax, but it fails with:
Improper COPY command.
What I mean is: suppose we have hundreds of delimited files with the same columns, and I want to load all of them into a single Cassandra table with a single CQL query.
Is this possible?
When I tried it with a separate COPY command for each file into the same table, the data was overwritten.
Please help me!
You can specify multiple files with the following syntax:
COPY table1("timestamp", temp, total_load, designl) FROM 'file1, file2' WITH HEADER = 'true';
or you can also use wildcards:
COPY table1("timestamp", temp, total_load, designl) FROM 'folder/*.csv' WITH HEADER = 'true';
Two remarks however:
Timestamp is a type name in Cassandra; if your column has this name, you need to quote it as I did in the example above.
If your data is overwritten when executing several copy commands, then it will be overwritten even if you execute a single copy command. If you have several lines for the same PRIMARY KEY, then only the last row will win.
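To illustrate the second point, consider a hypothetical table definition like the one below: any CSV row that maps to an existing primary key simply overwrites that row, whether the rows come from one COPY command or several.

CREATE TABLE table1 (
    "timestamp" timestamp PRIMARY KEY,
    temp double,
    total_load double,
    designl text
);
-- Two input rows with the same "timestamp" value end up as a single row;
-- the one loaded last wins.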

Adding columns to a sybase table with unique auto_identity index option

I've inherited a Sybase database that has the 'unique auto_identity index' option enabled on it. As part of an upgrade process I need to add a few extra columns to the tables in this database i.e.
alter table mytable add <newcol> float default -1 not null
When I try to do this I get the follow error:
Column names in each table must be unique, column name SYB_IDENTITY_COL in table #syb__altab....... is specifed more than once
Is it possible to add columns to a table with this property enabled?
Update 1:
I created the following test that replicates the problem:
use master
sp_dboption 'esmdb', 'unique auto_identity index', true
use esmdb
create table test_unique_ids (test_col char)
alter table test_unique_ids add new_col float default -1 not null
The alter table command here produces the error. (Have tried this on ASE 15/Solaris and 15.5/Windows)
Update 2:
This is a bug in the Sybase dbisql interface, which the client tools Sybase Central and Interactive SQL use to access the database and it only appears to affect tables with the 'unique auto_identity index' option enabled.
To work around the problem use a different SQL client (via JDBC for example) to connect to the database or use isql on the command line.
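For example, the same ALTER TABLE runs cleanly through the command-line isql client (server name and login below are placeholders):

isql -Usa -SMYSERVER
1> use esmdb
2> go
1> alter table test_unique_ids add new_col float default -1 not null
2> go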
There should be no problem ALTERing a table with such columns; the error message indicates the problem is something else. I need to see the CREATE TABLE DDL.
Even if we can't ALTER TABLE, which we will try first, there are several workarounds.
Responses
Hah! Internal Sybase error. Open a TechSupport case.
Workaround:
1. Make sure you get the exact DDL (sp_help on the table). Note the IDENTITY columns and indices.
2. Create a staging table that is exactly the same, using the DDL from (1), but exclude the indices.
3. INSERT new_table SELECT * FROM old_table. If the table is large, break it into batches of 1000 rows per batch.
4. Now create the indices.
5. If the table is very large, AND time is an issue, then use bcp. You need to research that first; I am happy to answer questions afterwards.
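A rough sketch of steps 2-3 in Sybase T-SQL, assuming a hypothetical table with an IDENTITY column (the table and column names and types are placeholders, not the asker's real DDL):

create table mytable_new (
    id     numeric(10,0) identity,
    name   varchar(40)   not null,
    newcol float         default -1 not null
)
go
-- preserve the existing identity values while copying the rows across
set identity_insert mytable_new on
insert mytable_new (id, name, newcol)
    select id, name, -1 from mytable
set identity_insert mytable_new off
go

After that, create the indices on the new table, drop the old table, and rename the staging table into its place.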
When I ran your sample code I first get the error:
The 'select into' database option is not enabled for database 'mydb'. ALTER TABLE with data copy cannot be done. Set the 'select into' database option and re-run
This is no doubt because the data in your table needs to be copied out, since the new column is NOT NULL. I think this uses tempdb, and the error message you've posted refers to a temp table. Is it possible that this dboption has been accidentally enabled for tempdb?
It's a bit of a shot in the dark, as I only have 12.5 to test on here, and it works for me. Or it could be a bug.

SSIS Excel Data Source - Is it possible to override column data types?

When an Excel data source is used in SSIS, the data types of the individual columns are derived from the data in the columns. Is it possible to override this behaviour?
Ideally, we would like every column delivered from the Excel source to be a string data type, so that data validation can be performed on the data received from the source in a later step in the data flow.
Currently, the Error Output tab can be used to ignore conversion failures - the data in question is then null, and the package will continue to execute. However, we want to know what the original data was so that an appropriate error message can be generated for that row.
According to this blog post, the problem is that the SSIS Excel driver determines the data type for each column by reading the values of the first 8 rows:
If the top 8 records contain an equal number of numeric and character types, then the priority is numeric
If the majority of the top 8 records are numeric, then it assigns the data type as numeric and all character values are read as NULLs
If the majority of the top 8 records are of character type, then it assigns the data type as string and all numeric values are read as NULLs
The post outlines two things you can do to fix this:
First, add IMEX=1 to the end of your Excel driver connection string. This tells the driver to treat columns with intermixed types as text. However, this is not sufficient on its own if all of the data in the first 8 rows is numeric.
In the registry, change the value of HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Jet\4.0\Engines\Excel\TypeGuessRows to 0. This ensures that the driver looks at all the rows to determine the data type for the column.
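For reference, a typical Jet/OLE DB connection string with IMEX=1 looks something like the following (the file path is a placeholder, and the provider and "Excel 8.0" version string depend on your file format):

Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:\MyData\source.xls;Extended Properties="Excel 8.0;HDR=YES;IMEX=1";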
Yes, you can. Just go into the output column list on the Excel source and set the type for each of the columns.
To get to the output columns list, right-click the Excel source, select 'Show Advanced Editor', and click the tab labeled 'Input and Output Properties'.
A potentially better solution is to use the derived column component where you can actually build "new" columns for each column in Excel. This has the benefits of
You have more control over what you convert to.
You can put in rules that control the change (i.e. if null give me an empty string, but if there is data then give me the data as a string)
Your data source is not tied directly to the rest of the process (i.e. you can change the source and the only place you will need to do work is in the derived column)
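As an example of such a rule, a Derived Column expression along these lines (the column name MyCol and the length 50 are placeholders) yields an empty string for NULLs and otherwise casts the incoming value to a string:

ISNULL([MyCol]) ? "" : (DT_WSTR, 50)[MyCol]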
If your Excel file contains a number in the column in question in the first row of data, it seems that the SSIS engine will reset the type to a numeric type. It kept resetting mine. I went into my Excel file and changed the numbers to "Numbers stored as text" by placing a single quote in front of them. They are now read as text.
I also noticed that SSIS uses the first row of data to override what the programmer has indicated is the actual type (I even told Excel to format the entire column as TEXT, but SSIS still looked at the data, which was a bunch of digits, and reset the type). Once I fixed that by putting a single quote in my Excel file in front of the number in the first row of data, I thought it would get it right, but no, there is additional work.
In fact, even though the SSIS External DataSource Column now has the type DT_WSTR, it will still read 43567192 as 4.35671E+007. So you have to go back into your Excel file and put single quotes in front of all the numbers.
Pretty LAME, Microsoft! But there's your solution. I have no idea what to do if the Excel file is not under your control.
I was looking for a solution to a similar issue but didn't find anything on the internet. Although most of the solutions I found work at design time, they don't work when you want to automate your SSIS package.
I resolved the issue and made it work by changing the properties of "Excel Source". By default the AccessMode property is set to OpenRowSet. If you change it to SQL Command, you can write your own SQL to convert any column as you wish.
For me SSIS was treating the NDCCode column as float, but I needed it as a string and so I used following SQL:
Select [Site], Cstr([NDCCode]) as NDCCode From [Sheet1$]
The Excel source in SSIS behaves strangely. SSIS determines the type of data in a particular column by reading the first 10 rows, hence the issue. If you have a text column with null values in the first 10 rows, SSIS takes the data type as Int. With a bit of struggle, here is a workaround:
Insert a dummy row (preferably as the first row) in the worksheet. I prefer doing this through a Script Task; you may consider using some service to preprocess the file before SSIS connects to it.
With the dummy row, you are sure that the data types will be set as you need.
Read the data using Excel source and filter out the dummy row before you take it for further processing.
I know it is a bit shabby, but it works :)
I could fix this issue. While creating the SSIS package, I manually changed the specific column to text (open the Excel file, select the column, right-click the column, select Format Cells, on the Number tab select Text, and save the Excel file).
Now create the SSIS package and test it. It works. Now try to use the excel file where this column was not set as text.
It worked for me and I could execute the package successfully.
This can be resolved simply: just untick the box "First row as column names" and all data will be collected as the text data type. The only downside of this choice is that you have to manage the column names yourself (they come through as Column 1, Column 2, etc.) and handle the first row, which contains the column names.
I had trouble implementing the solution here - I could follow the instructions, but it only gave new errors.
I solved my conversion issues by using a Data Conversion component. This can be found in the SSIS Toolbox under Data Flow Transformations. I placed the Data Conversion between my Excel Source and OLE DB Destination, linked the Excel Source to the Data Conversion and the Data Conversion to the OLE DB Destination, then double-clicked the Data Conversion to bring up the list of data columns. I gave the problem column a new alias and changed the Data Type column.
Lastly, in the Mappings of the OLE DB Destination, use the alias column name rather than the original Excel column name. Job done.
You can use a Data Conversion component to convert to the desired data types.