I'm using Oracle 11g.
I'm trying to create a flat file (CSV or TXT) from a result set but am struggling on where to even start. It seems like I have to create a stored proc and use UTL_FILE. After doing some research, I have two questions:
Where does the file get created? According to this question I need access to the Oracle user directory, but where is that on Windows and Linux environments? I have to test on Windows, and the script will eventually run in a Linux environment.
What would be the basic format of a SQL script to create the aforementioned file, and load data into it from a fairly basic SELECT query? I'm not seeing a UTL_FILE function to write the records to the file; do I have to iterate through the entire result set and use PUT or can I somehow just push the entire result to a file?
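From the research I've done so far, I'm guessing it's something like this minimal sketch (EXPORT_DIR is a placeholder for a directory object a DBA would have to create, and employees is just an example table):

    -- Assumes: CREATE DIRECTORY export_dir AS '/some/path'; plus read/write grants.
    -- The file lands on the database server, in the OS path that directory maps to.
    DECLARE
      v_file UTL_FILE.FILE_TYPE;
    BEGIN
      v_file := UTL_FILE.FOPEN('EXPORT_DIR', 'result.csv', 'W');
      -- There is no single "write the whole result set" call,
      -- so loop over the cursor and write one line per row.
      FOR r IN (SELECT employee_id, last_name FROM employees) LOOP
        UTL_FILE.PUT_LINE(v_file, r.employee_id || ',' || r.last_name);
      END LOOP;
      UTL_FILE.FCLOSE(v_file);
    END;
    /

Is that roughly the right shape?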
I think using "spool" can do the trick.
Check this out https://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:9518534700346581975
More information is here: http://www.dba-oracle.com/t_sqlplus_spool.htm
The file will be created in the directory you launch sqlplus from.
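For example, a minimal spool script could look like this (the employees table is just a placeholder):

    SET HEADING OFF
    SET FEEDBACK OFF
    SET PAGESIZE 0
    SET TRIMSPOOL ON
    SPOOL result.csv
    SELECT employee_id || ',' || last_name FROM employees;
    SPOOL OFF

Note that, unlike UTL_FILE (which writes on the database server), SPOOL writes on the client side, which is why a relative name like result.csv lands in that launch directory.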
If you're using SQL Developer, you can create a view for your query, then right-click the view in the schema browser, choose Export, and export it as CSV.
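For instance (the view and table names here are hypothetical):

    CREATE OR REPLACE VIEW daily_report_v AS
      SELECT employee_id, last_name, salary
        FROM employees
       WHERE department_id = 10;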
But personally I would go for SPOOL, as the previous answer said. SQL*Plus is the most basic client, so you almost certainly have it available.
As far as I understand, APEX 5.1 does not support loading Excel files into tables.
I found this package that seems to make it possible to SELECT from Excel files, but it does not show how to use it with, for example, files loaded via the "File Browse" Item.
Now, I am very new to this environment, so please explain it from the beginning.
What I did was upload the package script to SQL Workshop and execute it, without errors. But what now?
APEX 5.1 doesn't support it out of the box, but you can use the EXCEL2COLLECTION plugin (available here).
It is very straightforward: create a File Browse page item with an upload button that calls an on-submit process (e.g. CreateCollection) of type Excel2Collection [Plug In], specifying the file browse item, a collection name, and the CSV separator. Then you can do as you please with the data, e.g. run some validations on it and insert it into a table where you can access it as normal.
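Once the process has run, the rows are available in the APEX_COLLECTIONS view, with the spreadsheet columns in c001, c002, and so on. A minimal sketch, assuming you named the collection EXCELDATA and that my_parts and its columns are placeholders for your own table:

    INSERT INTO my_parts (part_no, descr)
    SELECT c001, c002
      FROM apex_collections
     WHERE collection_name = 'EXCELDATA'
       AND seq_id > 1;   -- skip the first row if your file has a header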
For example, I have to run a query to fetch data for a list of part numbers. The list is very big, say 1000 entries, so I do not want to put all the part numbers into the query itself; instead I want to take them from a file, say a .txt or .xls.
Is it possible with Notes SQL?
Why use Notes SQL to import data from a text file into Notes? Just write a LotusScript agent that does the import.
Notes SQL is just a driver that maps SQL queries onto Notes and Domino databases, views, and documents. It's up to you to construct the queries and run them somehow.
There's nothing in the syntax of NotesSQL that can take input from files.
If the program that you are using to generate and run the Notes SQL queries can read files and use the contents of a file to modify the query before handing it to NotesSQL, then the answer is yes, it's possible to do what you want with Notes SQL.
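For instance, after reading the part numbers from the .txt file, the driving program would hand NotesSQL an ordinary query of this shape (table and column names are hypothetical):

    SELECT PartNumber, Description, Price
      FROM Parts
     WHERE PartNumber IN ('P-1001', 'P-1002', 'P-1003')

With a list of around 1000 values you may need to split the IN list into batches, since drivers tend to have practical limits on statement size.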
You have it all backwards. NotesSQL lets you query (pull data out of) a Notes database (.nsf) using ODBC, so any software that can connect via ODBC has access to Notes data. It doesn't allow importing data into Notes. To get the data into Notes you'll need to either write a script or use something like IBM Enterprise Integrator (IEI, formerly LEI, formerly formerly Notes Pump).
For my Azure Machine Learning experiment I want to load an .asc file into an Execute R Script module. It is in fact a tab-delimited file with some comments on the first couple of rows. Can anyone tell me how to do this?
A CSV loads fine, but with this file I get an error.
You need to upload this file as part of the zip file. Follow the steps provided under the heading "Script Bundle": https://azure.microsoft.com/en-us/documentation/articles/machine-learning-r-quickstart/
You need to create a new dataset and upload the file after selecting the right data type, "GenericTSVNoHeader" (or the with-header variant).
In your experiment you will then be able to view or visualize the dataset, and you can add an Execute R Script module for any manipulation.
If you plan to send each line of the text file as a parameter to the web service, then you can also use the Enter Data module to provide the data.
If you want to send the whole file as a parameter to the web service, then I would recommend using the Reader with the SQL or blob option, cleaning the first couple of rows first, and then using the SQL script or blob credentials as a web service parameter as described here.
Hi,
I'm trying to use a Visual Studio 2012 database project to upgrade a database to a newer version, but I'm having a weird problem. I select the source database, then the target database, and hit Compare. Visual Studio generates the script with the differences, but when I execute it, it fails because it tries to drop tables without first dropping the FK constraints on those tables. (Normally it should first script all the constraints on a table, drop them, drop the table, then create the new table, and finally recreate the constraints.)
Do you have any idea why it tries to drop tables directly without dropping the constraints first?
Am I missing some settings?
Sounds like a bug to me. Try posting the same question on the SSDT forum.
If you have access to a copy of SQL Compare, it might be worth trying the same comparison to see if this works better. If you're using a database project as a data source, you'll need to select "source control", then "scripts folder", and browse to the folder that contains the .sqlproj file. Here at Red Gate we're working on improving database project support in SQL Compare so we'd welcome any feedback or questions.
If the tables being dropped in your database are not in your schema definition and you have the "DROP objects in target but not in project" option selected in the Deployment Options, then it will try to drop them.
Have you checked this is not the case?
Whenever I work with database code generated from data models or scripts, I often get that problem, so I keep a script just for deleting those keys. Sometimes I have to drop my database manually rather than executing the generated query, because most of the time it runs but not completely. So I first drop the database, generate the script, and run the key-erasing script; a sketch of such a script is below.
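For what it's worth, a minimal sketch of such a key-deleting script on SQL Server (it prints one ALTER ... DROP statement per foreign key; you then run the generated statements):

    SELECT 'ALTER TABLE '
         + QUOTENAME(OBJECT_SCHEMA_NAME(parent_object_id)) + '.'
         + QUOTENAME(OBJECT_NAME(parent_object_id))
         + ' DROP CONSTRAINT ' + QUOTENAME(name) + ';'
      FROM sys.foreign_keys;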
I have to run almost 50 queries daily for daily reports and copy-paste the data into Excel sheets. Is there a way to schedule a job in SQL Developer that exports the data from all the queries into an Excel workbook?
You could link the Excel spreadsheets to your queries so they update themselves automatically.
Insert > Data from External Source. I do this with SQL Server a lot, and you can do it with Oracle too if you know the connection strings.
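For example, an OLE DB connection string for Oracle has this shape (the data source alias and credentials are placeholders):

    Provider=OraOLEDB.Oracle;Data Source=MYDB_TNS_ALIAS;User Id=report_user;Password=secret;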
I would comment, but I don't have the rep yet.
I would advise using your operating system to schedule the task. Assuming this is Windows (as you want to write to Excel), you can use Task Scheduler to set off a cmd or PowerShell script that calls SQL*Plus, passing in the .sql file you wish to run as a parameter. It would not be too difficult to output the results to a CSV file, which can be opened in Excel. If you actually need to write to a .xlsx (or similar) file, there are options (e.g. Python libraries that can do this), but it will not be as straightforward.
I am not sure exactly which part of this you need help with, so can I suggest that you consider the steps below (a rough sketch follows the list); if you want to proceed, do some research and attempt each step, then post a question for each one you are stuck on, with details of what you have tried:
Schedule a job from your operating system;
Write a script to call SQL*Plus and execute a .sql file;
Change query output to csv and redirect to file (or find a way to write directly to an Excel file if this is what you need to do);
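As a rough sketch of the last two steps (every name and path here is just an example):

    -- daily_report.sql
    SET HEADING OFF FEEDBACK OFF PAGESIZE 0 TRIMSPOOL ON
    SPOOL C:\reports\daily_report.csv
    SELECT part_no || ',' || qty || ',' || price FROM daily_sales;
    SPOOL OFF
    EXIT

The scheduled task would then run something like:

    sqlplus -s report_user/secret@MYDB @C:\reports\daily_report.sql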