System persisting logic operates differently on same steps - acumatica

I have two environments: one online, the other local.
*Online is the production environment: Windows Server 2016, SQL Server 2016, IIS 10.0, .NET Framework 4.7.1, Acumatica 19.100.0122
*Local is the development environment: Windows 10, SQL Server 2016, IIS 10.0, .NET Framework 4.7.2, Acumatica 19.100.0122
The two environments behave differently when I perform the same steps:
1. Create a new item named "TESTTODEL".
2. Without any transactions against this new item "TESTTODEL", delete it immediately after creation.
Tip: per Acumatica's logic, "TESTTODEL" is not physically deleted from the database. The row is just marked with DeletedDatabaseRecord = 1 in the InventoryItem table.
Here is where they differ:
3. Try to create a new item again, also named "TESTTODEL".
Locally, it saves successfully: DeletedDatabaseRecord changes back to 0 in the database, and all the other fields are updated with the new input. So it is actually an "Update" operation.
Online, however, it refuses to save and reports "Cannot insert duplicated line into object 'dbo.InventoryItem' with unique index. The duplicated key is (5, TESTTODEL)".
Obviously, in this case it is an "Insert" operation.
So why the difference? Could anybody help me figure it out?
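For reference, a quick way to see what state the row is in after step 2 is to query the table directly. This is only a diagnostic sketch: the column names follow the question, CompanyID = 5 is an assumption taken from the duplicate-key message, and the unique index definitions will vary per site.

-- Check whether the soft-deleted row is still present for this item.
-- (CompanyID = 5 is assumed from the error message "(5, TESTTODEL)".)
SELECT CompanyID, InventoryCD, DeletedDatabaseRecord
FROM dbo.InventoryItem
WHERE InventoryCD = 'TESTTODEL';

-- List the unique indexes on the table to see which one raises the duplicate-key error,
-- and whether the key columns differ between the two environments.
SELECT i.name AS IndexName, COL_NAME(ic.object_id, ic.column_id) AS KeyColumn
FROM sys.indexes AS i
JOIN sys.index_columns AS ic
  ON ic.object_id = i.object_id AND ic.index_id = i.index_id
WHERE i.object_id = OBJECT_ID('dbo.InventoryItem')
  AND i.is_unique = 1
ORDER BY i.name, ic.key_ordinal;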

Related

Publishing ASP.NET MVC to Azure with SQLite - data fetching fails

I just created a simple ASP.NET MVC project to list blood pressure measurements. I opted to use SQLite as the database because it is (supposedly) embedded into the project, eliminating the need for an external database, which is expensive and is the reason I chose SQLite. That way I only need to host the web app itself, which is free if I choose the free tier, F1.
Publishing through VS2022 succeeds and the app displays correctly, except it shows none of the measurements, which renders the app ((no) pun intended) useless, at least as a cloud app. I have done some research and changed the publish settings a couple of times; this is how they look right now:
Configuration: Release
Target Framework: net6.0
Deployment Mode: Self-contained
Target Runtime: win-x86
File Publish Options: None of the options chosen
Databases: Default Connection - Use this connection string at runtime:
=> Data source=bloodpressuremeasurements.db
Entity Framework Migrations: BloodPressureContext (name of the DbContext)
- Apply this migration on publish: NOT chosen, since it gave me an exception and publish failed
Site Extension Options: Install ASP.NET Core Logging Integration Site Extension
- NOT chosen
I also tried changing the option for the db file to Copy To Output Directory: Copy always.
That didn't change a thing. What am I missing?
The website now works as intended, with all the data shown. It looks like the problem stemmed from scaffolding read and write methods, which made Visual Studio 2022 pull in EntityFrameworkCore.SqlServer - which is not what I wanted, since I'm using SQLite.
That in turn created some service dependencies under Connected Services, one of them being SQL Server something. It also appeared under the Publish menu, and it seems to have caused the compiler to treat the connection string as a SQL Server database connection.
I created a new app and copied the code over from the first one. I was careful not to scaffold, as I only need a Get method to show all measurements; I need none of the other CRUD methods - neither Post, Delete, nor Update. I will add new measurements by running the app locally again, reading them from a CSV file (as I did in the beginning). Then I will publish the app anew, with the updated SQLite database.

Publish to Azure from VS2013 with 'Execute Code First Migrations' checked overwrote remote database

During a regular publish to Azure with Web Deploy, I had 'Execute Code First Migrations' checked, as I had done before.
But this time 'Use this connection string at runtime' was also checked, and I published without noticing it. As a result, the remote Azure DB was wiped and is now seeded with what looks like a default database: ASP.NET membership tables and a migrations table that only contains migrations related to the identity tables.
The production data and DB structures are gone, and I had not yet set up backups on Azure (I am doing that now).
Is there a way to restore the database from some sort of automatic backup on Azure? I have the Web edition with a 1 GB size selected, and I don't see any options.
This suggests that the Web edition has no daily backup, but also that the Web edition was discontinued as of April, yet I still have it: http://msdn.microsoft.com/en-us/library/jj650016.aspx
And another question - I understand everything that happened, but it seems extremely dangerous that it is so easy to wipe out a whole database: VS shows no warning, nor does publishing to Azure notify you of anything. Is there anything that can be done to prevent dumb but very costly errors like this?
TIA

t-sql database restore without security

Is there a way to ignore the security portion (users) of a database when doing a restore? I know there is a way to script them all out, but we are restoring production databases onto multiple dev machines, each of which has its own particular set of users that we need to keep. Currently they get overwritten by the production users.
[Microsoft SQL Server 2008 (SP1) Developer Edition (64-bit)]
There's no way to eliminate the security portion from a backup. It is possible to do a schema comparison using Visual Studio 2010 or higher before doing the restore and generate a script from that to do the permission changes.
See Compare and Synchronize Database Schemas.
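If the schema-comparison tooling isn't available, another common workaround is to script out each dev machine's users before the restore and re-create them afterwards. A minimal sketch of the idea - DevDB is a placeholder for the dev database name, and role memberships and object-level permissions would need the same treatment:

USE DevDB;  -- placeholder name for the dev database
-- Generate CREATE USER statements for the dev users you want to keep.
SELECT 'CREATE USER ' + QUOTENAME(name) + ' FOR LOGIN ' + QUOTENAME(name) + ';'
FROM sys.database_principals
WHERE type IN ('S', 'U', 'G')   -- SQL users, Windows users, Windows groups
  AND principal_id > 4          -- skip dbo, guest, sys, INFORMATION_SCHEMA
  AND name NOT LIKE '##%';

-- ...restore the production backup over DevDB (this replaces the users with the production ones)...

-- After the restore: drop the production users you don't want, run the generated
-- CREATE USER statements, and check for orphaned users left behind by the restore.
EXEC sp_change_users_login 'Report';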

Microsoft Visual Studio Team Explorer Everywhere 2010 + MyEclipse 9.1 + TFS 2010 = Not Able To Connect to Multiple Projects

I am having a problem where MyEclipse 9.1 is not able to connect to multiple projects in TFS 2010 using the Team Explorer Everywhere plugin. If I try to connect to a second project, it disconnects me from the first one. I cannot find any way to pull down multiple projects like I could in TFS 2008.
Any Ideas?
This is as-designed. Team Explorer Everywhere can only connect to a single Team Project Collection at a time. There are myriad reasons why this is the case, but all are to preserve the notion of atomic operations against the server. Some operations (for example, check-in) simply must be scoped to a single server instance in order to make sense.
Since a single changeset is atomic in TFS, an attempt to check-in multiple pending changes either all succeed or all fail. Consider if you had pending changes from two different servers: you cannot commit all these changes as a single changeset - one server could reject your check-in due to conflicts, while the other could proceed successfully. This is, at best, confusing, but most likely actually leaves your projects in an inconsistent state since there may be dependencies between these projects. Since there are distinct changesets for each server, the UI must reflect that.
After much deliberation and experimentation, we concluded that the best user experience is simply one where you can import projects from multiple TFS servers, but you must select which server you want to work with in the UI by selecting which one is currently "online". All TFS functionality is available for the online server, while a limited subset of the TFS functionality is available to the other projects.
We would recommend that you consolidate your Java projects to a single Team Project Collection if you need to import all of them.
This behavior is unchanged from any previous versions of the software, including before the acquisition of the technology by Microsoft (when the product was still part of Teamprise Client Suite.)
Also note that the scope of commands available to "offline" projects has increased dramatically in TFS 2012 thanks to the new Local Workspace functionality.

change collation of DB SQL Server 2008

I have created some new databases in SQL Server 2008 Express (10.0.1600.22) and I have also restored one from SQL Server 2005 Express (9.00.1399.06).
The collations for these are different and I cannot execute queries across them as a result. So I am trying to change the restored database collation
from: SQL_Latin1_General_CP1_CI_AS
to: Latin1_General_CI_AS
However the new collation does not appear in the list of options. Not sure if this is possible.
BTW - workarounds that are not options:
I cannot script the data from SQL Server 2005 Express (it seems - I may be missing something)
I cannot script the DB on the 2008 server, as I get an out-of-memory exception when doing that :-(
If the collation is visible for other databases on your instance but not for this specific one restored from a 2005 instance, I wonder if the list displayed depends on the database's compatibility level.
Maybe try changing the compatibility level of the restored database to 100 to see if the collation appears in the list of options.
Note changing the database collation will not affect existing columns. Here's a script that may help with that.
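That script isn't reproduced here, but as a rough sketch of the idea (RestoredDb is a placeholder for the restored database's name, and the collation change needs exclusive access to the database):

-- Change the database default collation (kick out other sessions first).
ALTER DATABASE RestoredDb SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
ALTER DATABASE RestoredDb COLLATE Latin1_General_CI_AS;
ALTER DATABASE RestoredDb SET MULTI_USER;

-- Existing character columns keep their old collation; list the ones that still differ
-- so they can be changed with ALTER TABLE ... ALTER COLUMN ... COLLATE Latin1_General_CI_AS.
SELECT t.name AS TableName, c.name AS ColumnName, c.collation_name
FROM sys.columns AS c
JOIN sys.tables AS t ON t.object_id = c.object_id
WHERE c.collation_name IS NOT NULL
  AND c.collation_name <> 'Latin1_General_CI_AS';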
