Acumatica custom db script not working

I ran into an issue where a DB script in an Acumatica package does not work. The publish log says it ran, but the change just doesn't happen in the database. If I run the same script in SSMS, it works. I make sure to either tweak the script comment or publish with cleanup so that the script isn't skipped during publish, yet the change never appears. For example, I have a simple create-table script:
IF (NOT EXISTS (
    SELECT 1
    FROM INFORMATION_SCHEMA.TABLES
    WHERE TABLE_SCHEMA = 'dbo'
      AND TABLE_NAME = 'EEdiEntityType'))
BEGIN
    CREATE TABLE [dbo].[EEdiEntityType](
        [CompanyID] [int] NOT NULL,
        [EntityType] [int] NOT NULL,
        [Description] [nvarchar](50) NOT NULL,
        CONSTRAINT [PK_EEdiEntityType] PRIMARY KEY CLUSTERED
        (
            [CompanyID] ASC,
            [EntityType] ASC
        ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
    ) ON [PRIMARY]
END
GO
I added this script to my package and deployed it. The log says it ran, but there is no new table.
Acumatica v17.209.0028

@tlanzer, please make sure you are verifying in SSMS against the correct database. You can check the connection string in the web.config of the Acumatica site.
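A quick way to confirm this in SSMS is to check which database your query window is actually using and whether the table exists there, e.g.:

-- Confirm which database this SSMS session is connected to
SELECT DB_NAME() AS CurrentDatabase;

-- Check whether the customization script's table exists in this database
SELECT TABLE_SCHEMA, TABLE_NAME
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_NAME = 'EEdiEntityType';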

Related

The SQL code shown in the document is not running on Azure Synapse dedicated SQL pool

I have the following link. When I copy and paste the following syntax:
-- Create DimProductCategory PK
ALTER TABLE [dbo].[DimProductCategory] WITH CHECK ADD
    CONSTRAINT [PK_DimProductCategory_ProductCategoryKey] PRIMARY KEY CLUSTERED
    (
        [ProductCategoryKey]
    ) ON [PRIMARY];
GO
The WITH CHECK ADD syntax is not working. Many other statements from the document fail as well, and I am wondering why they do not work on the SQL pool. Is there an alternative way, or any other documents from Azure related to this?
That syntax will not work on Azure Synapse Analytics dedicated SQL pools and you will receive the following error(s):
Msg 103010, Level 16, State 1, Line 1
Parse error at line: 2, column: 40: Incorrect syntax near 'WITH'.

Msg 104467, Level 16, State 1, Line 1
Enforced unique constraints are not supported. To create an unenforced unique constraint you must include the NOT ENFORCED syntax as part of your statement.
The way to write this would be to use ALTER TABLE to add a non-clustered, non-enforced primary key, e.g.:
ALTER TABLE [dbo].[DimProductCategory]
ADD CONSTRAINT [PK_DimProductCategory_ProductCategoryKey]
PRIMARY KEY NONCLUSTERED ( [ProductCategoryKey] ) NOT ENFORCED;
However, as this table is a dimension, I would also suggest changing its distribution to REPLICATE, which you have to do in the table definition. So the whole statement would be something like:
CREATE TABLE [dbo].[DimProductCategory](
    [ProductCategoryKey] [int] IDENTITY(1,1) NOT NULL UNIQUE NOT ENFORCED,
    [ProductCategoryAlternateKey] [int] NULL,
    [EnglishProductCategoryName] [nvarchar](50) NOT NULL,
    [SpanishProductCategoryName] [nvarchar](50) NOT NULL,
    [FrenchProductCategoryName] [nvarchar](50) NOT NULL
)
WITH (
    DISTRIBUTION = REPLICATE,
    CLUSTERED INDEX( [ProductCategoryKey] )
)
It will be a good exercise for you to convert the rest of the syntax in the lab. The foreign keys won't work either.
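For illustration, a related fact table on a dedicated SQL pool would simply omit the foreign key. This is a hypothetical sketch (FactSales and its columns are invented here), with the relationship to DimProductCategory left unenforced:

CREATE TABLE [dbo].[FactSales](
    [ProductCategoryKey] [int] NOT NULL,  -- logically references DimProductCategory; not enforced
    [SalesAmount] [money] NOT NULL
)
WITH (
    DISTRIBUTION = HASH( [ProductCategoryKey] ),
    CLUSTERED COLUMNSTORE INDEX
);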

Load FileName in SQL Table using Copy Data Activity

I am a newbie to Azure Data Factory. I'm trying to load multiple files for various states from an FTP location into a single Azure SQL Server table. My requirement is to get the state name from the file name and store it in the table along with the actual data.
Currently, my source is FTP and my sink is an Azure SQL Server table. I have used a stored procedure to load the data. However, I'm unable to pass the file name as a parameter to the stored procedure so that I can store it in the table. Below is the Copy Data component (screenshot not shown).
I have defined a SourceFileName parameter in the stored procedure; however, I am unable to send it via the Copy Data activity.
Any help is appreciated.
We can conclude that the Additional columns option cannot be used here, because ADF will return a column (containing the file path), not a string parameter. So we need to use a Get Metadata activity to get the file list, then loop over the list with a ForEach activity and copy each file inside it.
I've created a simple test and it works well.
On my local FTP server, there are two text files. I need to copy them into an Azure SQL table.
In the Get Metadata activity, I use Child Items to get the file list.
In the ForEach activity, I use @activity('Get Metadata1').output.childItems to iterate over the file list.
Inside the ForEach activity, I use the dynamic content @item().name to get the file name.
The source and sink settings are configured accordingly (screenshots not shown).
So we can get the file name. Below are the operations I performed on Azure SQL.
-- create a table
CREATE TABLE [dbo].[employee](
    [firstName] [varchar](50) NULL,
    [lastName] [varchar](50) NULL,
    [filePath] [varchar](50) NULL
) ON [PRIMARY]
GO
-- create a table type
CREATE TYPE [dbo].[ct_employees_type] AS TABLE(
    [firstName] [varchar](50) NULL,
    [lastName] [varchar](50) NULL
)
GO
-- create a stored procedure
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE PROCEDURE [dbo].[spUpsertEmployees]
    @employees ct_employees_type READONLY,
    @filePath varchar(50)
AS
BEGIN
    -- strip the file extension (e.g. ".txt") from the file name
    SET @filePath = SUBSTRING(@filePath, 1, LEN(@filePath) - 4)
    MERGE [dbo].[employee] AS target_sqldb
    USING @employees AS source_tblstg
    ON (target_sqldb.firstName = source_tblstg.firstName)
    WHEN MATCHED THEN
        UPDATE SET
            firstName = source_tblstg.firstName,
            lastName = source_tblstg.lastName
    WHEN NOT MATCHED THEN
        INSERT (
            firstName,
            lastName,
            filePath
        )
        VALUES (
            source_tblstg.firstName,
            source_tblstg.lastName,
            @filePath
        );
END
GO
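As a quick sanity check outside of ADF, the procedure can be exercised manually with a table-valued parameter (the sample names and file name here are made up):

-- Build a TVP, call the procedure, then inspect the target table
DECLARE @employees dbo.ct_employees_type;

INSERT INTO @employees (firstName, lastName)
VALUES ('John', 'Doe'), ('Jane', 'Roe');

EXEC dbo.spUpsertEmployees @employees = @employees, @filePath = 'California.txt';

SELECT * FROM dbo.employee;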
After running a debug of the pipeline, the result is as follows (screenshot not shown).

Is there any alternative to CREATE TYPE in SQL, as CREATE TYPE is not supported in Azure SQL Data Warehouse?

I am trying to execute this query, but user-defined table types (CREATE TYPE) are not supported in Azure SQL Data Warehouse, and I want to use one in a stored procedure.
CREATE TYPE DataTypeforCustomerTable AS TABLE(
    PersonID int,
    Name varchar(255),
    LastModifytime datetime
);
GO
CREATE PROCEDURE usp_upsert_customer_table @customer_table DataTypeforCustomerTable READONLY
AS
BEGIN
    MERGE customer_table AS target
    USING @customer_table AS source
    ON (target.PersonID = source.PersonID)
    WHEN MATCHED THEN
        UPDATE SET Name = source.Name, LastModifytime = source.LastModifytime
    WHEN NOT MATCHED THEN
        INSERT (PersonID, Name, LastModifytime)
        VALUES (source.PersonID, source.Name, source.LastModifytime);
END
GO
CREATE TYPE DataTypeforProjectTable AS TABLE(
    Project varchar(255),
    Creationtime datetime
);
GO
CREATE PROCEDURE usp_upsert_project_table @project_table DataTypeforProjectTable READONLY
AS
BEGIN
    MERGE project_table AS target
    USING @project_table AS source
    ON (target.Project = source.Project)
    WHEN MATCHED THEN
        UPDATE SET Creationtime = source.Creationtime
    WHEN NOT MATCHED THEN
        INSERT (Project, Creationtime)
        VALUES (source.Project, source.Creationtime);
END
Is there any alternative way to do this?
You've got a few challenges there, because most of what you're trying to convert is not the way to do things on ASDW (Azure SQL Data Warehouse).
First, as you point out, CREATE TYPE is not supported, and there is no equivalent alternative.
Next, the code appears to be doing single inserts to a table. That's really bad on ASDW; performance will be dreadful.
Next, there's no MERGE statement (yet) for ASDW. That's because UPDATE is not the best way to handle changing data.
And last, stored procedures work a little differently on ASDW: they're not compiled, but interpreted each time the procedure is called. Stored procedures are great for big chunks of table-level logic, but not recommended for high-volume calls with single-row operations.
I'd need to know more about the use case to make specific recommendations, but in general you need to think in tables rather than rows. In particular, focus on the CREATE TABLE AS (CTAS) way of handling your ELT.
Here's a good link, it shows how the equivalent of a Merge/Upsert can be handled using a CTAS:
https://learn.microsoft.com/en-us/azure/sql-data-warehouse/sql-data-warehouse-develop-ctas#replace-merge-statements
As you'll see, it processes two tables at a time, rather than one row. This means you'll need to review the logic that called your stored procedure example.
If you get your head around doing everything in CTAS, and separately around Distribution, you're well on your way to having a high performance data warehouse.
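As a rough sketch of that pattern (the staging table stg_customer and its load are assumptions, not from the linked article), an upsert of customer_table could be expressed as a CTAS plus renames:

-- Build the merged result as a new table
CREATE TABLE dbo.customer_table_upsert
WITH ( DISTRIBUTION = HASH( PersonID ) )
AS
SELECT s.PersonID, s.Name, s.LastModifytime      -- new and changed rows from staging
FROM dbo.stg_customer AS s
UNION ALL
SELECT t.PersonID, t.Name, t.LastModifytime      -- existing rows not touched by staging
FROM dbo.customer_table AS t
WHERE NOT EXISTS ( SELECT 1 FROM dbo.stg_customer s WHERE s.PersonID = t.PersonID );

-- Swap the new table in
RENAME OBJECT dbo.customer_table TO customer_table_old;
RENAME OBJECT dbo.customer_table_upsert TO customer_table;
DROP TABLE dbo.customer_table_old;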
Temp tables in Azure SQL Data Warehouse behave slightly differently from boxed-product SQL Server or Azure SQL Database: they exist at the session level. So all you have to do is convert your CREATE TYPE statements to temp tables and split the MERGE out into separate INSERT / UPDATE / DELETE statements as required.
Example:
CREATE TABLE #DataTypeforCustomerTable (
    PersonID INT,
    Name VARCHAR(255),
    LastModifytime DATETIME
)
WITH
(
    DISTRIBUTION = HASH( PersonID ),
    HEAP
)
GO

CREATE PROCEDURE usp_upsert_customer_table
AS
BEGIN
    -- Add records which do not already exist
    INSERT INTO customer_table ( PersonID, Name, LastModifytime )
    SELECT PersonID, Name, LastModifytime
    FROM #DataTypeforCustomerTable AS source
    WHERE NOT EXISTS
    (
        SELECT *
        FROM customer_table target
        WHERE source.PersonID = target.PersonID
    )
    ...
Simply load the temp table and execute the stored proc. See here for more details on temp table scope.
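In other words, the call site becomes something like this (stg_customer is a hypothetical staging source; the temp table must be created earlier in the same session):

-- Load the session-scoped temp table, then run the proc
INSERT INTO #DataTypeforCustomerTable ( PersonID, Name, LastModifytime )
SELECT PersonID, Name, LastModifytime
FROM dbo.stg_customer;

EXEC usp_upsert_customer_table;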
If you are altering a large portion of the table then you should consider the CTAS approach to create a new table, then rename it as suggested by Ron.

How can I detect an update event on MySQL using Node.js with MySQL?

I'm a newbie in Node.js and I want to send data to the client when an update occurs in MySQL, so I found the ORM Sequelize.
Can I detect an update event from MySQL using Sequelize? Or how else can I detect an update event on MySQL using Node.js?
In the case of MySQL, triggers are the best option.
MySQL triggers: a trigger, or database trigger, is a stored program executed automatically in response to a specific event, e.g. an insert, update, or delete occurring in a table.
For example, you can have an audit table to save information about database updates or inserts.
Here is a sample audit table for an employees table:
CREATE TABLE employees_audit (
    id INT AUTO_INCREMENT PRIMARY KEY,
    employeeNumber INT NOT NULL,
    lastname VARCHAR(50) NOT NULL,
    changedat DATETIME DEFAULT NULL,
    action VARCHAR(50) DEFAULT NULL
);
Defining a trigger on the employees table:
DELIMITER $$
CREATE TRIGGER before_employee_update
    BEFORE UPDATE ON employees
    FOR EACH ROW
BEGIN
    INSERT INTO employees_audit
    SET action = 'update',
        employeeNumber = OLD.employeeNumber,
        lastname = OLD.lastname,
        changedat = NOW();
END$$
DELIMITER ;
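To see the trigger in action, update a row in employees and then inspect the audit table (the employee number below is hypothetical):

-- Fires before_employee_update, which writes an audit row
UPDATE employees
SET lastname = 'Smith'
WHERE employeeNumber = 1056;

SELECT * FROM employees_audit;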
Then, to view all triggers in the current database, use the SHOW TRIGGERS statement as follows:
SHOW TRIGGERS;
On your backend you can have a polling mechanism (an interval-based DB check) for audit table updates and notify the client accordingly.
This can be done with a simple query that checks for employees_audit updates, either by checking the row count or based on the created datetime.
If you do not need this extra table, you can apply the same polling logic to the employees table itself, based on an updated_at datetime field (if it has one).
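A minimal polling query, assuming the backend remembers the timestamp of its previous poll, might look like this:

-- Fetch audit rows created since the last poll;
-- the timestamp is a placeholder tracked by the backend
SELECT id, employeeNumber, lastname, action, changedat
FROM employees_audit
WHERE changedat > '2021-01-01 00:00:00'
ORDER BY changedat;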
For MySQL, the easiest solution would be to set up something to 'tail' MySQL binlogs, such as zongji.
The other, less ideal/trivial solution would be to set up triggers in your database that call out to a custom database plugin that communicates with your process in some way.
You can use https://www.npmjs.com/package/mysql-events: a Node.js NPM package that watches a MySQL database and runs callbacks on matched events.

StrongLoop Studio with SQL Server

I am trying to use StrongLoop Studio to build an API for a SQL Server database. Almost all the functions are working, but if I want to find by id, like localhost:3000/api/tableName/1 where 1 is the id, I get a syntax error:
Incorrect syntax near the keyword 'null'
Using SQL Server Profiler I captured the query that is executed:
SELECT
    [id], [name], [description], [application]
FROM
    (SELECT
        [id], [name], [description], [application], ROW_NUMBER() OVER (null) AS RowNum
     FROM [dbo].[tableName]) AS S
WHERE
    S.RowNum > 0 AND S.RowNum <= 1
What could be the problem? Can I override this method in some way and rewrite the query?
I actually tried this on multiple tables and I get the same error.
That null comes from the ORDER BY clause in the SQL that StrongLoop generates. If no order is specified, it seems to just use null.
https://github.com/strongloop/loopback-connector-mssql/blob/master/lib/mssql.js#L667
You can fix this by specifying an order in the default scope of your model:
http://docs.strongloop.com/display/public/LB/Model+definition+JSON+file#ModeldefinitionJSONfile-Defaultscope
"scope": {
"order": "id"
},
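With that scope in place, the connector should generate an ORDER BY instead of null, i.e. something like:

SELECT
    [id], [name], [description], [application]
FROM
    (SELECT
        [id], [name], [description], [application], ROW_NUMBER() OVER (ORDER BY [id]) AS RowNum
     FROM [dbo].[tableName]) AS S
WHERE
    S.RowNum > 0 AND S.RowNum <= 1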
