Using ISNULL on Result Set of Sybase ASE Stored Procedure - sap-ase

How can I use ISNULL (or an equivalent) on the result set of a Sybase ASE stored procedure? Assume you are not in control of the called stored procedure and cannot fix the data at source. Is it possible to make the procedure's result set behave like a table result set?
Version is 10.x
For example:
execute sp_get_some_nasty_data 'argument'
Returns:
DIRMAIL Direct Mail
DRIVBY Drive By
REFERAL null
OTHER Other
I'd like to use the 'code_value' when the 'display_value' is null, perhaps something like:
select
isnull(display_value, code_value)
from
execute sp_get_some_nasty_data 'argument'
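ASE 10.x has no way to select directly from a procedure's output, so one common workaround, assuming you can at least read the procedure's source (e.g. via sp_helptext), is to copy its final SELECT into a temp table you control and apply ISNULL there. A minimal sketch, with column names and types taken from the question and the source query invented for illustration:

```sql
-- Hypothetical workaround: ASE 10.x cannot treat a procedure's
-- result set as a table, so duplicate the procedure's SELECT into
-- a temp table. The source table/WHERE clause below are placeholders;
-- copy the real SELECT from sp_get_some_nasty_data's source.
create table #nasty_data (
    code_value    varchar(30) not null,
    display_value varchar(60) null
)

insert #nasty_data
select code_value, display_value
from some_source_table          -- assumption: whatever the proc queries
where some_column = 'argument'

-- Now ISNULL works as it would on any table:
select code_value,
       isnull(display_value, code_value) as display_value
from #nasty_data

drop table #nasty_data
```

The obvious drawback is that the copied SELECT must be kept in sync with the original procedure by hand.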

Related

Azure Data Factory / calling a stored procedure, place results in a file

I'm not familiar with working in an Azure Data Factory, but have a work requirement to have some processing run in that environment.
I have a stored procedure that creates a result set. I've read about a lookup step; that may be what I need to use. I want to call the stored procedure and put the result set into a mass-storage file. Ideally I'd like the process to insert pipe delimiters between the columns, but if no Azure process does that, I can put my own delimiters in the stored procedure directly.
What is the process in Data Factory to use to call a stored procedure and put the data set into a mass-storage file?
TIA
Trying to research my options at this point. As mentioned, it appears a lookup step may be the process to use?
You can use a Copy activity for this.
Here is a demo I reproduced.
Sample Stored procedure for selecting the data:
create or alter procedure sp1
as
begin
select * from copy_table3
end
In your stored procedure create your result set and select from it.
My sample copy_table3 table:
In the Copy activity source, choose the Stored procedure option after selecting the SQL dataset.
In the drop-down you can select your stored procedure, and if it has parameters, you can import them above.
For the sink I used a blob CSV file, and this is my result CSV after execution.
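In pipeline JSON, the equivalent Copy activity looks roughly like the fragment below. The dataset names are placeholders, and exact property names can vary by Data Factory version, so treat this as a sketch rather than a definitive definition:

```json
{
    "type": "Copy",
    "inputs": [ { "referenceName": "SqlDataset", "type": "DatasetReference" } ],
    "outputs": [ { "referenceName": "BlobCsvDataset", "type": "DatasetReference" } ],
    "typeProperties": {
        "source": {
            "type": "AzureSqlSource",
            "sqlReaderStoredProcedureName": "sp1",
            "storedProcedureParameters": {}
        },
        "sink": { "type": "DelimitedTextSink" }
    }
}
```

The pipe delimiter from the question would be set on the delimited-text dataset (its column delimiter property), not on the stored procedure.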

How to troubleshoot - Azure DataFactory - Copy Data Destination tables have not been properly configured

I'm setting up a SQL Azure Copy Data job using Data Factory. For my source I'm selecting the exact data that I want. For my destination I'm selecting "use stored procedure". I cannot move forward from the table mapping page, as it reports "one or more destination tables have not been properly configured". From what I can tell, everything looks good, and I can manually run the stored procedure from SQL without an issue.
I'm looking for troubleshooting advice on how to solve this problem, as the portal doesn't appear to provide any more detail than the error itself.
Additional but unrelated question: what is the benefit of doing a copy job in Data Factory vs just having Data Factory call a stored procedure?
I've tried executing the stored procedure via SQL. I discovered one problem with that: I had LastUpdatedDate in the table type, but it isn't actually an input value. After fixing that I'm able to execute the SP without issue.
Select Data from Source
SELECT
p.EmployeeNumber,
p.EmailName
FROM PersonFeed AS p
Create table Type
CREATE TYPE [person].[PersonSummaryType] AS TABLE(
[EmployeeNumber] [int] NOT NULL,
[EmailName] [nvarchar](30) NULL
)
Create user-defined stored procedure
CREATE PROCEDURE spOverwritePersonSummary @PersonSummary [person].[PersonSummaryType] READONLY
AS
BEGIN
MERGE [person].[PersonSummary] [target]
USING @PersonSummary [source]
ON [target].EmployeeNumber = [source].EmployeeNumber
WHEN MATCHED THEN UPDATE SET
[target].EmployeeNumber = [source].EmployeeNumber,
[target].EmailName = [source].EmailName,
[target].LastUpdatedDate = GETUTCDATE()
WHEN NOT MATCHED THEN INSERT (
EmployeeNumber,
EmailName,
LastUpdatedDate)
VALUES(
[source].EmployeeNumber,
[source].EmailName,
GETUTCDATE());
END
Data Factory UI, when setting the destination to the stored procedure, reports "one or more destination tables have not been properly configured".
I believe the UI is broken when using Copy Data. I was able to map directly to a table to get the copy job created, then manually edit the JSON, and everything worked fine. Perhaps the UI is new, which would explain why all the support docs refer only to the JSON. After playing with this more, it looks like the UI sees the table type as schema.type but then drops the schema for some reason. A simple edit in the JSON file corrects it.
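For anyone hitting the same schema-dropping problem, the relevant sink properties in the Copy activity JSON look roughly like this, using the procedure and table type defined above (exact property names may differ across Data Factory versions, so verify against your pipeline's generated JSON):

```json
"sink": {
    "type": "SqlSink",
    "sqlWriterStoredProcedureName": "spOverwritePersonSummary",
    "sqlWriterTableType": "[person].[PersonSummaryType]",
    "storedProcedureTableTypeParameterName": "PersonSummary"
}
```

The fix described above amounts to making sure sqlWriterTableType keeps the schema qualifier ([person].) that the UI silently drops.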

What am I missing in trying to pass Variables in an SSIS Execute SQL Task?

I am creating an SSIS Execute SQL Task that will use variables, but it gives me an error when I try to use it. When I try to run the query below, it fails, and when I try to build the query it reports "SQL Syntax Errors encountered and unable to parse query". I am using an OLE DB connection. Am I not able to use variables to specify the tables?
You can't parameterize a table name.
Use the Expressions editor in your Execute SQL Task to set the SqlStatementSource property.
Try "SELECT * FROM " + @[User::TableName]
After clicking OK twice (to exit the Task editor), you should be able to reopen the editor and find your table name in the SQL statement.
If the variable is of type Object rather than String, add a string cast: (DT_WSTR,100) @[User::TableName]
You are using only a single parameter marker (?) in the query but assigning three inputs to it, which doesn't work. Map exactly one input per ? marker, assigning a variable to each, and change the variables' values as needed.
The parameter names should start at 0 and increment by 1, because they are the indexes of the ? markers in the order they appear in the query you wrote in the query window.
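To make the 0-based mapping concrete, here is a hypothetical OLE DB statement with three markers (table and column names are made up); in the Parameter Mapping page each marker gets one variable, named by its index:

```sql
-- Parameter Name 0 -> first ?, 1 -> second ?, 2 -> third ?
UPDATE dbo.SomeTable
SET ColumnA = ?,          -- Parameter Name: 0
    ColumnB = ?           -- Parameter Name: 1
WHERE KeyColumn = ?       -- Parameter Name: 2
```

Note that with OLE DB the markers are purely positional; named parameters (@p1 style) are an ADO.NET-connection feature.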

Insert two variables to object in SSIS

ALL,
I am trying to store the value of two variables:
Variable1Value: 2014-05-10 00:00:00.000
Variable2Value: 2014-05-08 00:00:00.000
Into an Object-type variable so that I can loop over it with a FOREACH LOOP in SSIS, but I don't know how to do it.
Normally with an OLE DB connection, I would create a variable of type Object and store my result set there, but with an ODBC connection that doesn't work: the step always fails.
Can somebody help me?
Thank you
I would replace the Execute SQL Task with a Data Flow Task. Inside that, I would start with an ODBC Source component with your ODBC SQL statement. Then I would connect that to a Recordset Destination, and configure that for the Object type Variable.
This design also exposes the SSIS Data Types of the returned columns (e.g. in the Recordset Destination), avoiding guesswork when you come to use them downstream.

Sybase, execute string as sql query

In Sybase SQL, I would like to execute a String containing SQL.
I would expect something like this to work
declare @exec_str char(100)
select @exec_str = "select 1"
execute @exec_str
go
From the documentation of the exec command:
execute | exec is used to execute a stored procedure or an extended stored procedure (ESP). This keyword is necessary if there are multiple statements in the batch. execute is also used to execute a string containing Transact-SQL.
However, my example above gives an error. Am I doing something wrong?
You need parentheses around the string:
execute ( @exec_str )
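For a slightly more realistic use, the string can be assembled at run time, which is usually why dynamic SQL is wanted in the first place. A sketch with an illustrative table name (this dynamic-SQL form requires an ASE release that supports execute-immediate; very old servers may not):

```sql
-- Build and run a statement at run time; "authors" stands in
-- for whatever table name is only known at execution.
declare @tab varchar(30), @exec_str varchar(200)
select @tab = "authors"
select @exec_str = "select count(*) from " + @tab
execute ( @exec_str )
go
```

Note the parentheses: execute @exec_str (without them) looks up a stored procedure named by the variable, while execute ( @exec_str ) runs the string's contents as Transact-SQL.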
