Excel as front-end, Access as back-end, only INSERTs

I am working on an Excel-to-Access application where the front end will be an Excel workbook provided to 60 users, who will be filling in and submitting a form. On submitting the form, the form data will be inserted into a single table in an Access mdb file using VBA and ADO. The users will only be inserting records; NO UPDATES. (Basically, it is a data entry application, and to speed up data entry, multiple users will be using Excel files to input records into a common back-end mdb database.)
Will there be any locking or concurrency issues given that only INSERTs are going to be done? If yes, are there ways to handle them?
(I am aware that SQL Server Express or another solution would be better than the MS Access solution, but the business requires an mdb file, as it can easily be mailed to another person periodically for analysis.)
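For concreteness, here is a minimal sketch of the kind of submit routine described above. It assumes a hypothetical table tblEntries with EntryDate and Amount fields, a UNC path to the shared mdb, and a VBA reference to the Microsoft ActiveX Data Objects library; all names and cell addresses are placeholders.

    ' Insert one form submission into the shared back-end; requires a
    ' reference to "Microsoft ActiveX Data Objects 2.x Library".
    Sub SubmitForm()
        Dim cn As New ADODB.Connection
        Dim cmd As New ADODB.Command
        cn.Open "Provider=Microsoft.Jet.OLEDB.4.0;" & _
                "Data Source=\\server\share\backend.mdb;"
        With cmd
            .ActiveConnection = cn
            ' parameterized insert avoids quoting problems in user input
            .CommandText = "INSERT INTO tblEntries (EntryDate, Amount) VALUES (?, ?)"
            .Parameters.Append .CreateParameter("pDate", adDate, adParamInput, , Sheet1.Range("B1").Value)
            .Parameters.Append .CreateParameter("pAmt", adCurrency, adParamInput, , Sheet1.Range("B2").Value)
            .Execute
        End With
        cn.Close
    End Sub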

Consider building the form within Access instead of Excel. Forms are an integral component of Access and can update, append, and delete table data without VBA or ADO, since the table will be the form's RecordSource. Access forms still allow VBA coding and embedded macros, with interactivity (e.g., the OnClick, OnOpen, AfterUpdate, BeforeChange, and AfterDelConfirm events) more advanced than Excel UserForms.
Plus, Access allows up to 255 simultaneous users. You can even split Access into two files, front end and back end, and distribute multiple front ends to all 60 users while maintaining one back-end data file. Even more, Access back ends can upsize to any standard RDBMS (Oracle, SQL Server, MySQL, PostgreSQL, DB2) using ODBC/OLEDB drivers.
Please don't settle for Excel just because it is a very popular, easy-to-use application. Building a UserForm, connecting to a database via ODBC, and running looped inserts with VBA recordsets amounts to quite a bit of code, when all of this is native to Access and can be built without one line of VBA. Use Excel as an end-use document, like a Word or PDF file, that interacts with static, exported data for formulas, statistics, reports, and tables/graphs.
CONCURRENCY
Depending on the DAO or ADO calls or general database settings, various locking mechanisms can be set on MS Access tables, stored queries, or VBA recordsets:
Pessimistic locking - an individual record is locked from the moment the first user begins editing it
Optimistic or no locking - an individual record is locked only while the first user's save is executing; this is usually used in environments with a low chance of concurrent edits to the SAME record
All-records locking - the entire table is locked while the first user is in edit mode
Usually, pessimistic locking is the default (Access forms employ this setting), where users after the first receive an error message if they attempt to edit the SAME record. For your insert-only situation, locking should not pose an issue; it would only matter if your users can also browse and edit previously entered data.
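To make those lock modes concrete, here is a hedged ADO sketch (late-bound, so the enum values are written out numerically); cn is assumed to be an open ADODB.Connection to the shared mdb, and tblEntries is a placeholder name.

    Dim rs As Object
    Set rs = CreateObject("ADODB.Recordset")
    ' 1 = adLockReadOnly    : no edits allowed at all
    ' 2 = adLockPessimistic : record locks as soon as editing begins
    ' 3 = adLockOptimistic  : record locks only while Update executes
    rs.Open "SELECT * FROM tblEntries", cn, 1, 3   ' adOpenKeyset, adLockOptimistic
    rs.AddNew
    rs.Fields("EntryDate").Value = Date
    rs.Update     ' the lock is held only for the duration of this call
    rs.Close

For an insert-only workload like yours, adLockOptimistic keeps each lock window as short as possible.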

Related

How to design a component that takes a text file, CSV, or other custom file and loads it into Azure SQL Database tables

I am looking for the proper kind of architecture to load data from text files, CSV, etc. (the format is not decided yet) into Azure SQL Database table(s).
It should not be done manually by the user; I want this workflow to happen automatically once input is provided to the components, and I need to use a service so that it will be a fail-safe process.
Please suggest an approach or architecture that does a similar job.
There are a number of ways to do this, and it really depends on your goals and how much you want to invest.
Since you mention CSV, by default you could script SQL Server's bcp (bulk copy) utility:
https://msdn.microsoft.com/en-us/library/ms162802.aspx
You could script it so it isn't manual, but you mentioned it should be fail-safe. Any bulk insert of rows into an existing table can fail for a number of reasons (referential integrity checks, hardware issues, locks, etc.).
Most integration architectures would bulk copy the data to a clean staging location (an empty or new table in the same or another database) using either bcp or SqlBulkCopy (https://msdn.microsoft.com/en-us/library/7ek5da1a(v=vs.110).aspx), then loop through those rows, inserting them into the final destination table. That way you can build in retry, or report failures on a row-by-row basis, and handle them accordingly.
Since this is in Azure, there are a number of ways to run that final loop-through-rows process, such as Azure Batch, Azure Jobs, Azure Logic Apps (basically a BizTalk-like workflow/integration engine), or a custom worker role.
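For illustration, the loop-through phase could look like the following sketch, written here in VBA with ADO only because that is what the neighboring questions use; the server, table, and column names are all placeholders, and production code should use a parameterized ADODB.Command rather than string concatenation.

    ' Move rows from an already bulk-loaded staging table into the final
    ' table one by one, reporting failures per row instead of aborting.
    Sub LoadStagingToFinal()
        Dim cn As Object, rs As Object
        Set cn = CreateObject("ADODB.Connection")
        cn.Open "Provider=MSOLEDBSQL;Data Source=yourserver.database.windows.net;" & _
                "Initial Catalog=YourDb;User ID=youruser;Password=yourpassword;"
        Set rs = cn.Execute("SELECT Id, Col1 FROM dbo.StagingRows")
        Do Until rs.EOF
            On Error Resume Next
            cn.Execute "INSERT INTO dbo.FinalRows (Id, Col1) VALUES (" & _
                       rs.Fields("Id").Value & ", '" & _
                       Replace(rs.Fields("Col1").Value, "'", "''") & "')"
            If Err.Number <> 0 Then
                ' report (or queue for retry) on a row-by-row basis
                Debug.Print "Row " & rs.Fields("Id").Value & " failed: " & Err.Description
                Err.Clear
            End If
            On Error GoTo 0
            rs.MoveNext
        Loop
        rs.Close
        cn.Close
    End Sub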

Oracle view columns empty

We use commodity trading software linked to an Oracle database to export reports to Excel. I'm connecting to this Oracle database using PowerPivot as well as SQL Developer. In doing so, I'm able to connect to Oracle directly, creating live, refreshable reports which no longer need to be constantly exported.
I located an Oracle view responsible for generating one of the most important reports we export to Excel. What's strange is that all of the columns are completely empty. When I open it using PowerPivot or SQL Developer, I just see the headers, which contain no data. However, it populates with data just fine when exported from our trading software.
Does anyone know why this might be and how I can get this view to populate the data (using PowerPivot for example)?
Is this a materialized view I'm dealing with?
My first guess would be that it has to do with permissions or row-level security on the view. Whether it is a materialized view is impossible to determine from the information you've provided, but that should make no difference in accessing the data from Power Pivot.
This is a question for your DBAs and is unlikely to be a problem in Power Pivot.

How does the single store for XPages work?

I have a number of XPages design elements that I use in many different databases. If I read the wiki correctly, the single store is an all-or-nothing situation.
So I want to create a unique design in a database but use the set of reusable XPages elements from a single store location. The wiki says:
Apart from the "dummy or blank XPage with the same name of the default XPage" in each instance application, does it matter if an 'instance' contains XPage design elements?
No. If SCXD is set on an application all XPages design elements are ignored on the database and the application uses the design elements on the SCXD database.
If this is the case, then I have to create databases where probably 75% of the code is reusable, but I would have to repeat it (and maintain it) in dozens of separate databases. A pity!
XPages and related elements (Custom Controls, SSJS libraries, Java code) can be inherited from a specific template like other design elements. So I would set up a database called, perhaps, "Core Components" (.ntf or .nsf) with a template name of "CoreComponents". Then, on the individual elements in the target DB, you would set inheritance to be specifically from the "CoreComponents" template, while the elements that are unique to each database do not inherit from any template. You can then use File - Application - Refresh Design to update the elements with specific inheritance, and the ones which are unique to that database will not get overwritten.
You do need to do a clean build after the refresh, so I recommend that you keep the Core Components database locally or on a different server than the others, so that the daily design task will not update them and corrupt the xsp elements.
IBM's preferred model for reusing XPage artifacts across multiple applications is to create OSGi plugins that leverage the XPages Extensibility API.
NotesIn9 episode 64 demonstrates how to make an existing Custom Control design element a library component, which can then be used in any app that has the library available, instead of having to copy the design element to each app separately. Any subsequent changes to that component are then applied immediately to any apps that use it when a new version of the library is deployed.
If you truly have "dozens" of apps that all share certain features, but the entire design should not be identical across all of them, then the OSGi model is definitely the way to go.
But why not flip the entire model on its head? Traditionally, we've always put the code and the data in the same place (e.g. same NSF) because it was a pain to access -- and, especially, visually represent -- data in one NSF via code in another NSF. That's not true anymore. Why have dozens of apps just because the data lives in dozens of places? Any data source in XPages can be told where the data lives... you can link a central user interface to any number of "remote" data stores (either different NSFs on the same server, or even databases on other servers).
Red Pill, for instance, takes this to its logical extreme: they deploy one NSF, which acts as a portal to all your data, no matter where that data lives. The ACLs of the various NSFs (and Readers fields) still ensure that users don't pry into data they haven't been granted access to, and they have complex analytics algorithms for determining which data the users will actually care about. But if you have 500 NSFs in the domain, you're not maintaining 500 different code templates... it's literally just 1; but that one user interface is how users find, and interact with, all their data.
You certainly don't have to take this premise to that extreme, but perhaps you could identify, say, 5 apps where the UI and / or business logic is similar (or even identical), but the data just lives in multiple places. Create one central app for interacting with all of that data. Create a "homepage" that gives users a way to select which "app" they're trying to access (or, if they should only have access to one to begin with, compute which one that is), and then once they navigate in to the specific "app", just bind the data sources to the relevant NSF instead of assuming each view or document lives in the same NSF that the code does.
It's still a good idea to be aware of the Extensibility API, not only for the sake of code reusability, but also to understand just how much of the behavior of the platform truly is within our control now -- provided, of course, that we're willing to occasionally write some custom Java code. But if you shift away from the one-to-one mapping between code and data that we've habitually maintained in Domino for so long, I can practically guarantee that you'll prefer this approach... both for the ease of implementation and maintenance, and for the comparative simplicity it offers to end users.
You can combine the template technique and the all-code-in-one-database approach:
Divide the application design into two parts: a data part and a code part.
The data part contains all Notes views. If it's a classic Notes application, it would also contain all the design elements for the Notes client, like Forms, Subforms, Frames, and so on.
The code part contains all XPages, Custom Controls, CSS, client/server JavaScript libraries, Themes, images, jars and so on.
Put your 75% common code into masterData.ntf and masterCode.ntf.
The application code databases appCodeX.ntf inherit all design elements of masterCode.ntf and contain the additional application specific design elements.
The code from all application templates gets united in allCode.ntf. It inherits everything from masterCode.ntf and inherits the additional pieces of code from the application templates.
Based on that you create an allCode.nsf.
On the data side you use the classic template way.
From here you have two possibilities:
You use Single Copy XPage Design - connect every appData database with allCode.nsf
You connect your XPages in allCode.nsf with appData databases
I prefer the latter. You can define in allCode.nsf where all the application data databases are located, e.g. in property documents.
With the approach shown in the picture, you're still able to separate applications easily, e.g., in case you want to sell them: you already have a separate template for every single application.

SSMA does not convert all my Access tables to Azure

I have an Access project where the tables will be put up on Azure and the database then split so that the front end stays an Access form (I know, not very high tech :) ). The problem is that when I used SSMA previously, all the tables were found and everything worked nicely. Since that time I have added more tables (to a fresh version of the program, not the one I already converted to Azure; that was just a test), but when I try SSMA now, it only finds the newest tables. What am I doing wrong? Thanks!
It's been quite some time since I worked in Access, but are the tables that you have already ported to Azure now linked tables in this Access database? If the tables are linked tables (meaning they live in another data source), then I would not expect SSMA to show them.
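A quick way to check is to list each table's Connect property from the VBA side; linked tables have a non-empty connect string. A minimal sketch using DAO, which is built into Access:

    ' Print every non-system table and whether it is local or linked.
    Sub ListTableSources()
        Dim td As DAO.TableDef
        For Each td In CurrentDb.TableDefs
            If Left$(td.Name, 4) <> "MSys" Then        ' skip system tables
                If Len(td.Connect) > 0 Then
                    Debug.Print td.Name & " -> linked: " & td.Connect
                Else
                    Debug.Print td.Name & " -> local"
                End If
            End If
        Next td
    End Sub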
Also, I'd be very suspicious of using Access with Azure in this manner. The Access front-end forms do not have retry logic built into them, which means that if you get transient errors while attempting to save, read, etc. on these records, they will not be handled well and you may end up in inconsistent states. One option, if moving away from Access as the front end is not possible, is to not rely on the binding the Access forms give you and to manually write all the binding code, handling transient errors yourself; however, by the time you've done that, you could likely have re-written the forms as a Web or desktop app as well.
Article on Transient faults: http://windowsazurecat.com/2010/10/best-practices-for-handling-transient-conditions-in-sql-azure-client-applications/
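For illustration, a minimal sketch of such a retry wrapper in VBA; the names are hypothetical, and real code should inspect Err.Number for known transient fault codes before retrying rather than retrying blindly.

    #If VBA7 Then
    Private Declare PtrSafe Sub Sleep Lib "kernel32" (ByVal ms As Long)
    #Else
    Private Declare Sub Sleep Lib "kernel32" (ByVal ms As Long)
    #End If

    ' Attempt a save a few times with a growing delay before giving up;
    ' cn is an open ADO connection and sql is the statement to execute.
    Public Function TrySaveWithRetry(cn As Object, sql As String) As Boolean
        Const MAX_ATTEMPTS As Long = 3
        Dim attempt As Long
        For attempt = 1 To MAX_ATTEMPTS
            On Error Resume Next
            cn.Execute sql
            If Err.Number = 0 Then
                On Error GoTo 0
                TrySaveWithRetry = True
                Exit Function
            End If
            Err.Clear
            On Error GoTo 0
            Sleep 2000 * attempt    ' crude linear backoff between attempts
        Next attempt
        TrySaveWithRetry = False
    End Function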

Is there a way to open a shared, read-only connection to a shared Access database using ADO?

I have developed a system in which a group of users (approximately 50 people right now) register data and view registered data continuously. The system stores data in an Access database, and I currently use the connection mode adModeShareDenyNone for all users so that the database never locks out access for anyone.
However, it has been requested that I develop a simple Excel worksheet acting as an interface, where a user can write an SQL SELECT statement and then retrieve data to the sheet accordingly (via VBA). This is very simple and I have created it; however, I want to prevent the execution of manipulative statements (INSERT, UPDATE, DELETE), that is, to act as a read-only system.
However, I can't seem to find a way to do this without locking up the database for other users too, which is a no-go, since the database is in constant use by multiple users. Is there a way to do what I want? I thought of other connection modes, but they all (besides adModeShareDenyNone) seem to apply some sort of locking.
What about adModeRead? That requests read-only permissions for the connection itself, without adding any share-deny restrictions.
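For what it's worth, a minimal sketch from the Excel VBA side (late-bound, so the enum values are written out numerically); the path and cell addresses are placeholders, and it's worth testing against your setup whether Jet still records a shared lock in the .ldb file.

    Sub RunUserQuery()
        Dim cn As Object, rs As Object
        Dim userSql As String
        userSql = Sheet1.Range("A1").Value          ' the user's SELECT statement
        Set cn = CreateObject("ADODB.Connection")
        cn.Mode = 1                                 ' adModeRead: read-only permissions
        cn.Open "Provider=Microsoft.Jet.OLEDB.4.0;" & _
                "Data Source=\\server\share\data.mdb;"
        Set rs = CreateObject("ADODB.Recordset")
        rs.Open userSql, cn, 0, 1                   ' adOpenForwardOnly, adLockReadOnly
        Sheet1.Range("A3").CopyFromRecordset rs     ' dump the results to the sheet
        rs.Close
        cn.Close
    End Sub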