Create full-text index within Entity Framework coded migrations - entity-framework-5

TL;DR: How do you add a full-text index using Entity Framework 5 coded migrations?
I'm having issues adding a full-text index to a database using Entity Framework migrations. It needs to be there from the start, so I'm attempting to modify the automatically generated InitialCreate migration to add it.
As there isn't a way to do this via the DbMigration API, I've resorted to running inline SQL at the end of the Up() method.
Sql("create fulltext catalog AppNameCatalog;");
Sql("create fulltext index on Document (Data type column Extension) key index [PK_dbo.Document] on AppNameCatalog;");
When this runs, everything gets created fine until it reaches this SQL, which throws the SQL error 'CREATE FULLTEXT CATALOG statement cannot be used inside a user transaction.' That much is expected and working as designed.
Thankfully Sql() has an overload that allows you to run the SQL outside the migration transaction. Awesome, I thought.
Sql("create fulltext catalog AppNameCatalog;", true);
Sql("create fulltext index on Document (Data type column Extension) key index [PK_dbo.Document] on AppNameCatalog;", true);
But lo and behold, modifying the code to do this (see above) results in a new timeout error: 'Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.'
I've tried spitting out the SQL and running it manually, and it works fine. I've also diffed the generated SQL with and without running it outside a transaction, and the scripts are identical, so it must be something in the way the SQL is executed.
Thanks in advance for any help!

I had a similar problem. My InitialCreate migration was creating a table and then attempting to add a full-text index to that table, using the overloaded Sql() to indicate that it needs to execute outside the transaction. I was also getting a timeout error, and I suspect it's due to a thread deadlock.
I could get it to work in some scenarios by using Sql() calls instead of CreateTable() and by merging the CREATE FULLTEXT CATALOG and CREATE FULLTEXT INDEX statements into a single Sql() call. However, this wasn't very reliable: sometimes it would work and sometimes it would fail with the same timeout error.
The only reliable solution I found was to move the creation of the catalog and full-text index into a separate migration, sketched below.
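For illustration, a minimal sketch of such a follow-up migration, reusing the catalog and index names from the question (the class name AddFullTextIndex is made up; the suppressTransaction flag keeps the statements out of the migration's user transaction):

using System.Data.Entity.Migrations;

public partial class AddFullTextIndex : DbMigration
{
    public override void Up()
    {
        // CREATE FULLTEXT CATALOG cannot run inside a user transaction,
        // so both statements are executed with suppressTransaction: true.
        Sql("CREATE FULLTEXT CATALOG AppNameCatalog;", suppressTransaction: true);
        Sql("CREATE FULLTEXT INDEX ON Document (Data TYPE COLUMN Extension) " +
            "KEY INDEX [PK_dbo.Document] ON AppNameCatalog;",
            suppressTransaction: true);
    }

    public override void Down()
    {
        Sql("DROP FULLTEXT INDEX ON Document;", suppressTransaction: true);
        Sql("DROP FULLTEXT CATALOG AppNameCatalog;", suppressTransaction: true);
    }
}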

Related

How to use a stored Log Analytics query as a data source in Excel?

In our project we created several useful queries in Log Analytics that we deploy as a "savedSearch" (Microsoft.OperationalInsights/workspaces/savedSearches#2020-08-01).
Now when we load a query in the editor we can export it to Excel, and the resulting workbook can be refreshed to view current data.
However, that link points to the query as it is in the editor, not to the stored/deployed query. The alternative is to export to Power BI (M query), which generates a script that you can then use in Excel.
In both cases the query itself seems to be embedded in the connection, so it does not get updated when we deploy a new version. Does anyone know of a way to make this connection point to a stored/deployed query?
I feel like this should be as straightforward as a connection to the resource, so that not only the data but also the query itself gets updated... I must be missing something.
One way I can think of is to leverage functions in log queries.
You can first save your query as a function, then export it to Excel; that creates a connection that executes the function rather than the raw query.
You can tweak your query later if needed and save/overwrite it to the same function, and the refresh should still be able to pull in the latest results, since the changes are now neatly abstracted away behind the function. :)

Stream Analytics Query (Select * into output)(Exclude specific columns)

I have a query like:
SELECT
    *
INTO
    [documentdb]
FROM
    [iothub]
TIMESTAMP BY eventenqueuedutctime
I need to use * because the data is dynamic and doesn't have a specific schema. The problem is that IoT Hub system-information data is written to DocumentDB by this query. Is there any way to exclude the IoT Hub system information?
Thanks.
This is not possible currently, but it will be possible with job compatibility level 1.2 in the near future. For now, one workaround is to create a post-create trigger in Cosmos DB to remove this property from the document.
To answer your question: the Azure Stream Analytics service doesn't have built-in support for excluding columns from dynamic data (the IoT Hub information). But we can achieve this by using a UDF. Here is more info on UDFs.
A UDF can help us delete the column from the input data and return the updated JSON.
There are basically two steps to achieve this:
Create a JavaScript UDF.
Go to Functions in the left-hand navigation (below Inputs).
Click Add --> JavaScript UDF.
Give it the function alias removeiothubinfo.
Keep the output type as any.
Copy and paste the following code into the function definition:
function main(input) {
    // Remove the IoT Hub system-information property and return the cleaned record.
    delete input['IoTHub'];
    return input;
}
Click Save.
Update the query.
Go to the query editor and copy and paste the following query:
WITH NewInput AS
(
    SELECT
        udf.removeiothubinfo(iothub) AS UpdatedJson
    FROM
        [iothub]
)
SELECT
    UpdatedJson.*
INTO
    [documentdb]
FROM
    NewInput
Click Save.
I suggest you test your query before running the job by uploading a sample file with a similar JSON structure.
Edit:
Even in job compatibility level 1.2 there has been no additional functionality to achieve this. Check this out for more info.
As @chetangm said in his answer, no such filtering mechanism is supported in ASA so far. Yes, you could use a trigger in Cosmos DB; however, it needs to be invoked explicitly from SDK code or the REST API. It won't be triggered automatically.
I'll offer you another workaround: an Azure Function with a Cosmos DB trigger. It executes whenever data is added to or changed in Azure Cosmos DB; you just need to remove the unwanted fields in the function code, as sketched below.
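As a rough sketch of that approach (not tested here; the database, collection, and connection-setting names are placeholders, and the code assumes the Microsoft.Azure.WebJobs Cosmos DB extension):

using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Azure.Documents;
using Microsoft.Azure.WebJobs;
using Newtonsoft.Json.Linq;

public static class RemoveIoTHubInfo
{
    [FunctionName("RemoveIoTHubInfo")]
    public static async Task Run(
        // Fires for every insert/update in the monitored collection.
        [CosmosDBTrigger("telemetry", "documentdb",
            ConnectionStringSetting = "CosmosDBConnection",
            LeaseCollectionName = "leases",
            CreateLeaseCollectionIfNotExists = true)] IReadOnlyList<Document> changes,
        // Output binding used to upsert the cleaned documents.
        [CosmosDB("telemetry", "documentdb",
            ConnectionStringSetting = "CosmosDBConnection")] IAsyncCollector<object> output)
    {
        foreach (Document doc in changes)
        {
            JObject json = JObject.Parse(doc.ToString());

            // Only rewrite documents that still carry the IoTHub property;
            // this also prevents an endless trigger/update loop.
            if (json.Remove("IoTHub"))
            {
                await output.AddAsync(json);
            }
        }
    }
}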

Is it possible to save to the database without a PXGraph or a Screen?

An entry screen is not needed; all the records are generated automatically, probably by using the DAC only.
The Graph/DAC logic is preferred, as you get all of the framework freebies such as field defaulting and calculated formula fields.
You can, however, get around this using PXDatabase.Insert, PXDatabase.Update, or PXDatabase.Delete.
I use these for upgrade processes or bulk deletes of processing records. These calls do not require a graph to execute, but they ignore all DAC attributes, which may or may not default values, calculate values, etc.
If you search for PXDatabase in the Acumatica code browser you can find examples. Here is one from EmployeeMaint.Location_RowPersisted:
PXDatabase.Update<Location>(
    new PXDataFieldAssign("VAPAccountLocationID", _KeyToAbort),
    new PXDataFieldRestrict("LocationID", _KeyToAbort),
    PXDataFieldRestrict.OperationSwitchAllowed);
PXDataFieldAssign sets column values.
PXDataFieldRestrict is your WHERE condition.
It is best to find multiple examples of PXDatabase in Acumatica and to confirm your query results with a tool such as SQL Profiler, to make sure it executes the statement you intend to run.
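For the insert case the question asks about, a minimal sketch might look like this (MyRecord and its column names are hypothetical; remember that none of the DAC attributes fire, so every required column has to be assigned explicitly):

// Hypothetical DAC named MyRecord with columns Code and Descr.
PXDatabase.Insert<MyRecord>(
    new PXDataFieldAssign("Code", "ABC"),
    new PXDataFieldAssign("Descr", "Created without a graph"));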
You can't use a DAC without a graph; all BQL queries require a PXGraph instance. The only way to save data without using BQL is to use ODBC or another ORM to connect directly to the database and make your changes there. This is not the recommended way, though, as you will bypass all the business logic.

Order By not working in Azure Web Document Explorer

I am trying to query DocumentDB inside the Azure web Document Explorer. The problem is that ORDER BY doesn't seem to work anymore.
For instance, the following query:
SELECT * FROM c
WHERE c.type = "myType"
ORDER BY c.createdDate
When I run it I get a red alert stating:
Failed to get documents. Please try again.
If I remove the ORDER BY it works fine.
Any idea why querying with ORDER BY no longer works?
ORDER BY can be specified only against a property, either numeric or string, and only when that property is range-indexed with the maximum precision (-1). For more detail please refer to the documentation.
You also cannot perform the following:
ORDER BY on internal string properties like id, _rid, and _self (coming soon).
ORDER BY on properties derived from the result of an intra-document join (coming soon).
ORDER BY over multiple properties (coming soon).
ORDER BY with queries on databases, collections, users, permissions or attachments (coming soon).
ORDER BY on computed properties, e.g. the result of an expression or a UDF/built-in function.
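Since the usual cause is a string property that is not range-indexed at maximum precision, here is a sketch of how the collection's indexing policy would need to be set up for ORDER BY c.createdDate to work (classic Microsoft.Azure.Documents SDK; the account endpoint, key, and names are placeholders, and the code assumes it runs inside an async method):

using System;
using System.Collections.ObjectModel;
using Microsoft.Azure.Documents;
using Microsoft.Azure.Documents.Client;

var client = new DocumentClient(
    new Uri("https://myaccount.documents.azure.com:443/"), "<auth-key>");

var collection = new DocumentCollection { Id = "mycollection" };
collection.IndexingPolicy.IncludedPaths.Add(new IncludedPath
{
    Path = "/*",
    Indexes = new Collection<Index>
    {
        // Precision -1 (maximum) is what enables ORDER BY on these types.
        new RangeIndex(DataType.String) { Precision = -1 },
        new RangeIndex(DataType.Number) { Precision = -1 }
    }
});

await client.CreateDocumentCollectionAsync(
    UriFactory.CreateDatabaseUri("mydb"), collection);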

MVC Switch from Code First to Database First - alter schema without dropping database

With Entity Framework it is possible to enable migrations and create migration steps. But is there an intermediate way, where it is possible to change the models and take care of the database schema changes yourself? I don't want to drop the database, because there are future production scenarios.
For now - without enabling migrations - I use Code First, and I create another property in a DbSet - let's assume, for example, int NewField { get; set; } in table 'ExistingTable'.
When I then update the schema in SQL with
ALTER TABLE ExistingTable ADD NewField int NOT NULL
the database knows about the new field and Entity Framework / C# knows the property, but at runtime some hidden check still wants to drop my database because of the model change.
Question: can I override a certain setting in such a way that the initial 'Code First' setup can be transformed to Database First?
Removing the __MigrationHistory table from the database (on Azure) worked fine for me. I made my (simple) database changes myself and published the code, and it all runs fine. For an alternative, see EF Code First Migrations Deployment to an Azure Cloud Service. For a simple one-way patch (with no change history needed), removing __MigrationHistory works fine.
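The 'hidden check' is the database initializer comparing the current model against __MigrationHistory. A common complement (a sketch; MyDbContext stands in for your actual context class) is to disable initialization altogether, which effectively turns the Code First setup into a database-first workflow:

using System.Data.Entity;

public class MyDbContext : DbContext
{
    static MyDbContext()
    {
        // Passing null disables all initializers: EF no longer checks model
        // compatibility and never tries to drop or recreate the database.
        // From here on, keeping the schema in sync is your responsibility.
        Database.SetInitializer<MyDbContext>(null);
    }

    public DbSet<ExistingTable> ExistingTables { get; set; }
}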
