Oracle Data Pump import to another schema: ORA-31696 (Linux)

When trying to export a table and import it into another schema, I am facing the following issue:
ORA-31696: unable to export/import TABLE_DATA:"schemaowner"."tablename":"SYS_P41" using client specified AUTOMATIC method
ORA-31696: unable to export/import TABLE_DATA:"schemaowner"."tablename":"SYS_P42" using client specified AUTOMATIC method
ORA-31696: unable to export/import TABLE_DATA:"schemaowner"."tablename":"SYS_P43" using client specified AUTOMATIC method
ORA-31696: unable to export/import TABLE_DATA:"schemaowner"."tablename":"SYS_P44" using client specified AUTOMATIC method
I want to export all data from a table in one schema and import them into another schema. The errors occur during import. Below are the command I used for export and the import command that fails:
expdp schema1/schema1#db1 directory=MYDIR tables=schema1.tbl dumpfile=tbl.dmp logfile=tbl.log content=data_only version=10.2.0.4.0
The above works and the dump file is created, but when trying:
impdp schema2/schema2#db1 directory=MYDIR tables=tbl dumpfile=tbl.dmp logfile=tblload.log content=data_only version=10.2.0.40 remap_schema=schema1:schema2
The above fails with the errors shown at the beginning. Can you please advise me what I am doing wrong? I would truly appreciate it.
Also, for reference, I run these commands on a Linux OS with Oracle 10g.

Related

Unable to import from a relative path using Node.js

Currently my automation framework is built on Cucumber + Node.js + WebdriverIO. It has the following structure for data files:
main/
..../data
......../region1.js
......../region2.js
In my step definitions I need to import the data files so that my functions can use the data for the region I intend to execute, which I provide at run time.
How should I write my import statement? For example, I tried the following, but it does not work:
import users from '../main/data/*';
Posting a solution which I came across:
Step 1: Add an index.js file under the /data folder.
Step 2: Add the following code to index.js:
import * as region1 from "./region1"
import * as region2 from "./region2"
export {
region1,
region2
}
Now, in the file that requires this data, add the following import line:
import * as myValues from "./main/data"
You need to pass your required region from the command line as an environment variable, let's say REGION.
If you want to access a value from the respective region file based on the value passed in REGION, the following code will work:
const myreqData = myValues[process.env.REGION].<respective node in the js file>;
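For instance, here is a minimal sketch of the whole pattern; the users export, the "Alice" value, and the relative paths are placeholders and depend on where your files actually live, not part of the original question:

// main/data/region1.js (assumed shape; "users" is a hypothetical export)
export const users = { admin: { name: "Alice" } };

// in a step definition: pick the region at run time, e.g. REGION=region1 npm test
import * as myValues from "../main/data";

const regionData = myValues[process.env.REGION];
console.log(regionData.users.admin.name); // "Alice" when REGION=region1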

How to import Excel/CSV data into MongoDB collections/documents programmatically (MERN)

I have an Excel sheet with employee data. My task is to store the data from the Excel file in a MongoDB database > employee collection (a row from the Excel sheet becomes a MongoDB document). I'm doing all this in a React application. I thought of using mongoimport. Since I need it in CSV or JSON format, I converted my Excel file to CSV using the SheetJS npm package and created a blob file of type csv. And then, using the command below, I was able to import that CSV file into my MongoDB database.
mongoimport --db demo --collection employees --type csv --headerline --file /path/to/myfile.csv
But I did this from the mongo shell by giving a path on my local disk. Now I'm trying to implement this within my React app. Initially I proceeded with this idea: as soon as I upload an Excel file, I will convert it to a CSV file and call a POST API with that CSV file in the body. Upon sending that request, I will call the mongoimport command in my Node.js backend/server so that the data from that CSV file is stored in my MongoDB collection. Now I can't find any way to use the mongoimport command programmatically. How can I call the mongoimport command in my Node.js server code? I couldn't find any documentation regarding it.
If that is not the right way of doing this task, please suggest an entirely different way of achieving it.
In layman's terms, I want to import data from an Excel file into a MongoDB database using a React.js/Node.js app.
how are you?
First of all, mongoimport also allows you to import TSV files (same command as for CSV, but using --type tsv), which are often friendlier to use with Excel.
Regarding mongoimport, I regret to report that it cannot be used by any means other than the command line.
What you can do from Node.js is execute commands in the same way they are executed in a terminal. For this you can use the child_process module and its exec function.
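For example, a minimal sketch of shelling out to mongoimport from Node.js with child_process.exec; the database, collection, and file path are the ones from the question above, and importCsv is a hypothetical helper name:

const { exec } = require("child_process");

// Hypothetical helper: run the same mongoimport command you would type in a terminal.
// Only pass trusted, server-side paths into the command string.
function importCsv(filePath, callback) {
  const cmd = `mongoimport --db demo --collection employees --type csv --headerline --file ${filePath}`;
  exec(cmd, (error, stdout, stderr) => {
    if (error) {
      return callback(error);           // non-zero exit code or spawn failure
    }
    callback(null, stdout || stderr);   // mongoimport writes its summary to stderr
  });
}

// Usage, e.g. inside an Express route handler after saving the uploaded CSV to disk:
// importCsv("/path/to/myfile.csv", (err, output) => { ... });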
I hope that helps even a little.
Regards!

Invalid date: error while importing CSV to Cassandra using pySpark

I'm using a Jupyter Notebook to run pySpark code to import a CSV file into Cassandra v3.11.3. I am getting the error below.
... 1 more
[full error stack trace attached as a screenshot]
The pySpark code is attached as a picture:
[pyspark_code screenshot]
Any inputs...
Without the full trace it's hard to know exactly where this is failing. The method you pasted is just the py4j wrapper method, and we really would need to see the underlying Java exception.
From what I can tell, it looks like you are also attempting to use some options on the C* write that are unsupported. For example, "MODE" = "DROPMALFORMED" is not a valid C* connector option. DataFrame writer and reader options are source specific, so you are unfortunately unable to mix and match.
This makes me think that the data being written actually has a malformed date string or two, and this code is dying when attempting to write the broken record. One way around this would be to do the date casting on the CSV read, which I believe does support DROPMALFORMED-style parsing options.
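A minimal sketch of that approach; the keyspace/table (demo.employees), the columns, the date format, and the file path are placeholders to adjust to your data, and it assumes the spark-cassandra-connector package is already available since the original write targets Cassandra:

from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType, DateType

spark = SparkSession.builder.appName("csv-to-cassandra").getOrCreate()

# Explicit schema so the date column is really parsed as a date; with
# mode=DROPMALFORMED, rows whose date string does not match dateFormat are
# dropped at read time instead of failing the Cassandra write.
schema = StructType([
    StructField("id", StringType(), True),
    StructField("event_date", DateType(), True),
])

df = (spark.read
      .option("header", "true")
      .option("dateFormat", "yyyy-MM-dd")   # adjust to the format in your file
      .option("mode", "DROPMALFORMED")
      .schema(schema)
      .csv("/path/to/myfile.csv"))

# The Cassandra write itself takes only connector options (keyspace/table).
(df.write
 .format("org.apache.spark.sql.cassandra")
 .options(keyspace="demo", table="employees")
 .mode("append")
 .save())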

Importing scripts into a notebook in IBM Watson Studio

I am doing PCA on CIFAR-10 images on the IBM Watson Studio free version, so I uploaded the Python file for downloading CIFAR-10 to the studio (pic below).
But when I try to import cache, the following error is shown (pic below).
After spending some time on Google I found a solution, but I can't understand it.
Link:
https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/add-script-to-notebook.html
The solution is as follows:
Click the Add Data icon, and then browse to the script file or drag it into your notebook sidebar.
Click in an empty code cell in your notebook and then click the Insert to code link below the file. Take the returned string, and write it to a file in the file system that comes with the runtime session.
To import the classes to access the methods in a script in your notebook, use the following command:
For Python:
from <python file name> import <class name>
I can't understand this line:
"and write to a file in the file system that comes with the runtime session."
Where can I find the file that comes with the runtime session? Where is the file system located?
Can anyone please help me with this, with details on where to find that file?
You get the import error because the script that you are trying to import is not available in your Python runtime's local filesystem. The files (cache.py, cifar10.py, etc.) that you uploaded were uploaded to the object storage bucket associated with the Watson Studio project. To use those files you need to make them available to the Python runtime, for example by downloading the script to the runtime's local filesystem.
UPDATE: In the meantime there is an option to directly insert the StreamingBody objects. This will also have all the required credentials included. You can skip ahead to the "write it to a file in the local runtime filesystem" part of this answer if you are using the insert StreamingBody object option.
Or,
You can use the code snippet below to read the script into a StreamingBody object:
import types
import pandas as pd
from botocore.client import Config
import ibm_boto3
def __iter__(self): return 0
os_client = ibm_boto3.client(service_name='s3',
    ibm_api_key_id='<IBM_API_KEY_ID>',
    ibm_auth_endpoint="<IBM_AUTH_ENDPOINT>",
    config=Config(signature_version='oauth'),
    endpoint_url='<ENDPOINT>')
# Your data file was loaded into a botocore.response.StreamingBody object.
# Please read the documentation of ibm_boto3 and pandas to learn more about the possibilities to load the data.
# ibm_boto3 documentation: https://ibm.github.io/ibm-cos-sdk-python/
# pandas documentation: http://pandas.pydata.org/
streaming_body_1 = os_client.get_object(Bucket='<BUCKET>', Key='cifar.py')['Body']
# add missing __iter__ method, so pandas accepts body as file-like object
if not hasattr(streaming_body_1, "__iter__"): streaming_body_1.__iter__ = types.MethodType( __iter__, streaming_body_1 )
And then write it to a file in the local runtime filesystem.
f = open('cifar.py', 'wb')
f.write(streaming_body_1.read())
This opens a file with write access and calls the write method to write to the file. You should then be able to simply import the script.
import cifar
Note: You can get the credentials like IBM_API_KEY_ID for the file by clicking on the Insert credentials option on the drop-down menu for your file.
The instructions that the OP found miss one crucial line of code. I followed them and was able to import modules, but wasn't able to use any functions or classes in those modules. This was fixed by closing the files after writing. This part in the instructions:
f = open('<myScript>.py', 'wb')
f.write(streaming_body_1.read())
should instead be (at least this works in my case):
f = open('<myScript>.py', 'wb')
f.write(streaming_body_1.read())
f.close()
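An equivalent way to sidestep the problem is to write the file with a context manager, which closes it automatically; a minimal sketch using the same placeholder file name:

# the with-block closes the file on exit, so the later import sees the complete contents
with open('<myScript>.py', 'wb') as f:
    f.write(streaming_body_1.read())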
Hopefully this helps someone.

Azure Storage Explorer: Properties of type '' are not supported error

I inherited a project that uses an Azure table storage database. I'm using Microsoft Azure Storage Explorer as a tool to query and manage data. I'm attempting to migrate data from my Dev database to my QA database. To do this, I'm exporting a CSV from a Dev database table and then trying to import into the QA database table. For a small number of tables, I get the following error when I try to import the CSV:
Failed: Properties of type '' are not supported.
When I ran into this before, since I exported a "typed" CSV from Dev, I checked to make sure all "#type" columns had values. They did. Then I split the CSV (with thousands of records) up into smaller files to try to determine which record was the issue. When I did this and started importing them, I was ultimately able to import all of the records successfully by individual files which is peculiar. Almost like a constraint violation issue.
I'm also seeing errors with different types, e.g.:
Properties of type 'Double' are not supported.
In this case, there is already a column in the particular table of type "Double".
Anyway, now that I'm seeing it again, I'm having trouble resolving it. Any thoughts?
UPDATE
I was able to track a few of these errors to "bad" data in the CSV. It was a JSON string in an Edm.String field that, for some reason, it wasn't liking. I minified the JSON using an online tool and it imported fine. There is one data set, though, that has over 7,000 records I'm trying to import (the one I referenced breaking up earlier in this post). I ended up breaking it up into different files and was able to successfully import them individually. When I try to import the entire file after loading all the data through individual files, though, I again get an error.
I split the CSV (with thousands of records) up into smaller files to try to determine which record was the issue. When I did this and started importing them, I was ultimately able to import all of the records successfully by individual files which is peculiar.
Based on your test, the format and data of the source CSV file seem OK. It will be difficult to find out why Azure Storage Explorer returns those unexpected errors while importing a large CSV file. You can try upgrading your Azure Storage Explorer and check whether you can export and import data successfully using the latest version.
Besides, you can try using AzCopy (designed for copying data to and from Microsoft Azure Blob, File, and Table storage using simple commands with optimal performance) to export/import the table.
Export table:
AzCopy /Source:https://myaccount.table.core.windows.net/myTable/ /Dest:C:\myfolder\ /SourceKey:key /Manifest:abc.manifest
Import table:
AzCopy /Source:C:\myfolder\ /Dest:https://myaccount.table.core.windows.net/mytable1/ /DestKey:key /Manifest:"abc.manifest" /EntityOperation:InsertOrReplace
