Hi,
I have a stored procedure which returns more than 100 fields and 1000+ rows. I need to save all of it in a temp table and then run my own custom query against it to get the appropriate data.
I have searched a lot, but I am unable to find the right solution for my project. I would appreciate it if anyone could share an idea.
create table #SP_Result
(
    -- I need to create the fields dynamically, according to whatever the SP returns
)

exec Ministry..civil_record
    '2010-08-07', 'Autogen', 20, NULL, NULL, NULL, NULL, NULL, NULL, NULL

I need to dump the result from the SP into #SP_Result.
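If the column list were known up front, the plain INSERT ... EXEC pattern would be enough (a rough sketch of what I mean; with 100+ columns that I don't know in advance, typing them out by hand is exactly what I am trying to avoid):

insert into #SP_Result
exec Ministry..civil_record
    '2010-08-07', 'Autogen', 20, NULL, NULL, NULL, NULL, NULL, NULL, NULL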
Why don't you run the query itself, in your "customised query", rather than trying to capture the result set of the stored proc? That's the normal method.
All those Nulls look like a bastard of a de-normalised "table", where many rows will not apply to the task. It is much, much faster to deal with the database in a normalised, set-oriented manner.
Related
I have an ADF pipeline which iterates over a set of files and performs various operations, and I have an Azure Cosmos DB (SQL API) instance where I would like to insert the name of each file and a timestamp, mainly to keep track of which files have already been processed and which have not; in the future I might want to add some other bits of data related to each file.
This is what I have in my Cosmos DB:
Currently I am trying to use the Copy Data activity for the insert part.
One problem is that this particular activity expects a source, while at this point I have only the file name. In theory it was an option to use the Blob Storage from which I read the file at the beginning, but since the Blob Storage is set to store binary files, I got the following error when I tried to use it as the source:
Because of that I created a dummy Cosmos DB linked service, but I have several issues with this approach:
Generally, the idea of a dummy source is not very appealing to me.
I haven't found a lot of information on the topic, but it seems that if I want to use something in the Sink I need to SELECT it from the source.
Even though I have selected a value for the id, the item is not saved with the value selected in the source query; as you can see from the first screenshot, I get a GUID, and only the name is as I want it.
So my questions are two. I have only just started learning ADF, but this approach doesn't look like the proper way to insert an item into Cosmos DB from an activity, so a better/more common approach would be appreciated. If there is no better proposal, how can I at least apply my own value for the id column? If I create the item in the Cosmos DB GUI and save it from there, as you can see, I am able to use the file name as the id, which for now seems like a good idea to me, but I wasn't able to set a custom value (string or int) when going through the activity, so how can I achieve this?
This is what my Sink looks like:
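For reference, the source query I am experimenting with on the dummy source is roughly the following (item().name assumes the ForEach iterates the output of a Get Metadata activity, and processedAt is just a property name I made up), with both columns then mapped to the document in the Sink:

SELECT '@{item().name}' AS id, '@{utcnow()}' AS processedAt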
Looking at the following lines of code:
query = "DROP TABLE IF EXISTS my_table"
cur.execute(query)
conn.commit()
# print(table_name)
I'm running queries like this against multiple tables, with various statements, and I want to return the name of the table and the action executed each time. Is there a way to get some kind of metadata from cur.execute or conn.commit about the action being run?
In the example above I'd like to print the table name (my_table) and the action (DROP TABLE). However, I want this to be dynamic: if I'm creating a table, I want the name of the newly created table and the action (CREATE TABLE).
Thanks.
Quick and Dirty
tables = ['table_1', 'table_2', 'table_3']
action = 'DROP TABLE'

for table in tables:
    cur.execute(f'{action} IF EXISTS {table}')
    print(f'ACTION: {action}')
    print(f'TABLE: {table}')

conn.commit()
HOWEVER, please do not ever do something like this in anything other than a tiny app that will never leave your computer, and especially not with anything that will accept input from a user.
Bad things will happen.
Dynamically interfacing with databases using OOP is a solved problem, and it's not worth reinventing the wheel. Have you considered using an ORM like SQLAlchemy?
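For what it's worth, here is a rough sketch of the same drop-everything loop done through SQLAlchemy reflection (the connection string is a placeholder; adjust the driver and credentials to your setup):

from sqlalchemy import create_engine, MetaData

# placeholder connection string
engine = create_engine('postgresql+psycopg2://user:password@localhost/mydb')

metadata = MetaData()
metadata.reflect(bind=engine)  # discover the tables that already exist

for name, table in metadata.tables.items():
    table.drop(bind=engine, checkfirst=True)  # only drops the table if it still exists
    print('ACTION: DROP TABLE')
    print(f'TABLE: {name}')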
I am learning PL/SQL. I want to ask a question about importing Excel files.
I created a table, and after that I imported data from Excel, nearly 100 rows.
I wonder how I can see the underlying query, something basic like:
insert into table_name (column1, column2, ..., columnN)
values (value1, value2, ..., valueN);
and so on for the other 100 rows.
Sincerely
I'm not sure whether there is a feature within the Oracle engine itself, but I can think of two ways to get those queries:
1. Use Oracle SQL Developer (or another GUI with the same features):
Oracle SQL Developer (Download link here) is a free tool developed by Oracle to interact with the database. Add the connection for your database and connect to it, then follow these guidelines carefully to generate your insert script.
2. Use v$sql (experimental):
Right now I have no access to an Oracle database to check this, but theoretically, if the database is a development/training one, there should not be much activity in it, so you can query the v$sql view to find the last 100 (or however many) queries:
SELECT SQL_FULLTEXT
FROM (SELECT SQL_FULLTEXT FROM V$SQL ORDER BY FIRST_LOAD_TIME DESC)
WHERE ROWNUM <= 1000;
Check for the ones starting with INSERT INTO {THE_TABLE_WHICH_HAS_IMPORTED_DATA} to find your insert lines.
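If you prefer to filter inside the query itself rather than scanning the output by eye, something along these lines should do it (keeping the same placeholder for your table name, and assuming the import tool did not quote the identifier differently):

SELECT SQL_FULLTEXT
FROM V$SQL
WHERE UPPER(SQL_TEXT) LIKE 'INSERT INTO {THE_TABLE_WHICH_HAS_IMPORTED_DATA}%'
ORDER BY FIRST_LOAD_TIME DESC;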
As I mentioned, this method is quite experimental and might confuse you, so I strongly suggest using Oracle SQL Developer.
According to the recommendation of CWE-89, my function below has been parameterized, but Veracode still reports that CWE-89 is present in that function.
As you can see, the function is used for generating dynamic SQL queries based on input parameters. Only the #PrimaryValue parameter comes from user input, while the other dynamic variables behind SELECT, FROM, JOIN, ON and WHERE are queried from the database (not from user input).
What do you think about this case? Can I propose a mitigation for this, or do I have to modify the code further to solve the problem? Please advise.
Your code has a SQL injection problem. For example, a user can pass the "intofile" param to this method like this:
* FROM Table1; DROP TABLE table2; intofile
With this input the user turns your query into three queries, and after it runs, table2 is dropped.
First of all, you have to run your query in a read-only transaction. After that, you have to apply a SQL escaping/validation step to every input, so that keywords like DROP cannot be smuggled in through them.
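Since the function itself isn't shown here, the following is only a sketch of the pattern scanners are usually happy with when the SQL genuinely has to be dynamic, assuming T-SQL: wrap every identifier that comes from your metadata in QUOTENAME and bind the user-supplied value as a real parameter via sp_executesql instead of concatenating it:

-- placeholder values standing in for the function's inputs
DECLARE @TableName     sysname        = N'Table1';
DECLARE @ColumnName    sysname        = N'Id';
DECLARE @PrimaryValue  nvarchar(4000) = N'whatever the user typed';

DECLARE @sql nvarchar(max) =
      N'SELECT * FROM ' + QUOTENAME(@TableName)
    + N' WHERE '        + QUOTENAME(@ColumnName)
    + N' = @PrimaryValue';

-- the user-supplied value is bound as a parameter, never concatenated into the string
EXEC sp_executesql @sql,
                   N'@PrimaryValue nvarchar(4000)',
                   @PrimaryValue = @PrimaryValue;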
I would like to do more complex queries on jsonb documents that contain arrays of objects. Is there any library anyone would recommend for Node? I am using pg, but I want to do more advanced queries, like selecting the documents where a document has an array containing an object with a certain key/value. If there aren't any libraries that do this, does anyone know how I could do it with the JSON functions in psql, or point me to a book/resource where I could learn this kind of advanced querying?
If you need to do really complicated things you're going to be writing SQL no matter what. But for basic queries that involve working with JSONB fields Massive (full disclosure, it's my project) has you covered, and executing handwritten prepared statements is as easy as anything else since scripts are loaded into the API.
Searching an embedded array falls into the 'really complicated' category, unfortunately, but if you know your element positions you could do this quite simply with Massive:
await db.mytable.find({
  'somejson.arrayfield[0].key': 'value'
});
This would return all records from mytable where the somejson column has an arrayfield array, the first element in which contains a "key": "value" pair.
For searching, check out the Postgres docs. The specific question you have requires a lateral join on the jsonb_array_elements function like so:
SELECT somejson
FROM mytable
JOIN LATERAL jsonb_array_elements(mytable.somejson->'arrayfield') AS elements
ON TRUE
WHERE elements->>'key' = $1;
With Massive, you'd put this query in a script in your application's /db directory and run it as db.myScriptName('value'). You can use folders to group similar scripts too.
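As an aside, if all you need is an exact key/value match at any position in the array (rather than a specific index), plain jsonb containment covers it without the lateral join; a sketch against the same hypothetical table and column names:

SELECT somejson
FROM mytable
WHERE somejson -> 'arrayfield' @> '[{"key": "value"}]'::jsonb;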