Is there any way to share an object reference (not value) across multiple VBS applications? It seems like an out-of-process COM object might work, but that puts me in way over my head, and I'd like to know if I'm barking up the wrong tree before I spend a week pounding my head on it.
The background: I'm forced by the product I'm using to communicate with my database using a bunch of small vb scripts, each called independently (there's no way around this). This means dozens of individual connections per minute to the database (one connection per script). Rather than flog it this way (constantly establishing new connections), I'd love to figure out if there's a way for a standalone program to define and open the ADO Connection object, and then have that standalone program share the Connection object with all the little vb scripts (so that connection pooling kicks in).
Thanks for your consideration.
As far as I'm aware, VBScript allows passing parameters by reference. I assume it also allows references to be returned from functions.
I think your idea may work, and is probably worth a try.
Create a COM application or library in a language such as VB (as opposed to VBScript) or Delphi which could, as you suggest, connect to the database and hold an ADODB connection. Then define a method on that object, exposed via COM, that returns the ADODB connection as an OLEVariant or plain Variant.
Something like:
Public Function GetConnection() As Object
    Set GetConnection = mConnection  ' module-level ADODB.Connection
End Function
I haven't tested this myself, but I believe it should work.
I am new to LINQ and especially PLINQ. I was playing around with MS Office interop and was quite surprised to find that with PLINQ my code got significantly faster (depending on the query, up to double the speed), even though COM should marshal all my parallel calls to a single thread (https://learn.microsoft.com/en-us/visualstudio/vsto/threading-support-in-office?view=vs-2019).
Now I am confused about what might be going on here. I have not run into "COM server is busy" or similar exceptions, even on very large workbooks.
Dim oWb As Excel.Workbook
Dim abcNames = oWb.Worksheets.Cast(Of Excel.Worksheet).AsParallel().
    Where(Function(x) x.Names.Count > 0).
    SelectMany(Function(x) x.Names.Cast(Of Excel.Name)).
    Where(Function(y) y.Name.StartsWith("abc"))
The regular approach here is to loop all worksheets and on all worksheets loop the .Names collection.
I know that a Workbook.Names property also exists, this code is just to show a query as an example.
I read that one can use PLINQ queries in Excel (https://devblogs.microsoft.com/pfxteam/plinq-and-office-add-ins/) but I don't know if the same applies for interop.
My question: can this kind of parallel query be used with office interop nowadays and is it safe to do so?
It is not safe to access STA-based Office applications from multiple threads; the host application (Outlook, for example) may throw an exception if it detects cross-thread calls. I'd recommend extracting the data from the Excel object model first, and only then running PLINQ queries or any secondary threads over that data in parallel. This doesn't depend on the interop technology itself but on the COM server implementation, i.e. whether it is STA or MTA.
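That extract-then-parallelize pattern can be sketched in Python; the worksheet data below is hypothetical, standing in for values already copied out of the object model on the main (STA) thread:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical data already extracted from the Excel object model on the
# main (STA) thread: one list of defined-name strings per worksheet.
extracted = [
    ["abc_total", "net", "abc_rate"],
    ["gross"],
    ["abc_flag"],
]

def matching(names):
    # Pure in-memory work, safe on any thread: no COM calls happen here.
    return [n for n in names if n.startswith("abc")]

# Parallelism runs only over the plain extracted data, never over interop.
with ThreadPoolExecutor() as pool:
    results = [n for chunk in pool.map(matching, extracted) for n in chunk]

print(results)  # ['abc_total', 'abc_rate', 'abc_flag']
```

The COM calls all stay on the thread that owns the Excel objects; only the copied-out strings are touched concurrently.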
Right now my FMX project is totally based on Livebinding to connect the datasources to my editors on the form.
It works nicely, apart from being slow and not using paged loading (TListView).
However, I have many different datasources and the amount of data can be huge and connections eventually slow.
My idea is to keep the user interface responsive and let background threads load the data, opening the datasources and putting them in the right state, and after that assign the datasources to the controls on the form.
I have experimented with this using LiveBinding, but I cannot mix the main thread with background ones; I ran into problems.
Having to load each field of each record into each control manually seems extremely unproductive. Almost all the controls I use are already wrapped: I made my own controls based on the FMX ones, so I can add more functionality to them.
I was wondering if there is something already available: any class or library I could use to map sources to targets, and that lets me control when to activate the binding, since I can have many datasources being loaded by threads at the same time.
This is not really a livebinding question.
Even without LiveBinding, when you retrieve data in a thread you have to respect the thread context. When you get a dataset from a connection, that dataset is bound to the connection, and the connection is bound to the thread context.
The solution is to copy the dataset into a TClientDataSet and hand that CDS over to the UI thread. Now you can bind that CDS wherever you like. Remember that there is no connection between the CDS and the data connection; you have to take care of writing the changes back yourself.
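The hand-off itself is language-agnostic; here is a rough Python sketch, with a queue standing in for the CDS hand-over and made-up rows standing in for the query result:

```python
import threading
import queue

# Hand-off channel from the background fetch thread to the UI thread.
ready = queue.Queue()

def fetch():
    # Hypothetical slow query, running entirely in the worker's own context.
    rows = [{"id": 1, "name": "Ada"}, {"id": 2, "name": "Grace"}]
    # Hand over a detached copy; the worker keeps no reference to it,
    # just as the CDS is detached from the data connection.
    ready.put(list(rows))

worker = threading.Thread(target=fetch)
worker.start()

# The "UI thread" waits until the detached copy arrives, then binds it.
detached = ready.get()
worker.join()
print(len(detached))  # 2
```

The key property is that the object crossing the thread boundary is a detached copy, not anything still tied to the worker's connection.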
I don't know if this is still relevant. I use LiveBinding frequently, loading the underlying data in threads using TTask.Run and TThread.Queue. The important point is to have the LiveBinding's AutoActivate = False (i.e. on TLinkGridToDataBindSource or other LiveBinding links).
The request is done in TTask.Run with an Execute of the query, and the LiveBinding's "Active" property is set to True inside a TThread.Queue call (from within the TTask.Run). The LiveBinding updates the UI, and that must occur in the main thread.
Subsequent updates/requests are done the same way, setting Active to False first.
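The deferred-activation idea is not Delphi-specific. A rough Python sketch, with a plain callback queue standing in for TThread.Queue and a dict standing in for the binding's Active flag:

```python
import threading
import queue

# Queue of callbacks that must run on the "main" (UI) thread,
# mimicking what TThread.Queue does in Delphi.
main_queue = queue.Queue()

binding = {"active": False}  # stand-in for the LiveBinding Active property

def background_request():
    data = ["row1", "row2"]          # stand-in for executing the query
    def activate():                  # runs later, on the main thread only
        binding["active"] = True
    main_queue.put(activate)
    return data

t = threading.Thread(target=background_request)
t.start()
t.join()

# The main thread drains queued callbacks, as the FMX main loop would.
while not main_queue.empty():
    main_queue.get()()

print(binding["active"])  # True
```

The binding is only flipped to active from the main thread, after the background work has finished, which is exactly the AutoActivate = False pattern described above.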
I have a small project that I was using node-dirty for, but it's not really for production use and I've had way too many surprises with it, so I would like to switch. I was looking at using SQLite, but compiling a client for it seems troublesome. Is there something like node-dirty (i.e. a pure Node.js implementation of a data store) that's more suited for a small project with no more than a few hundred sets of data? I've faced the following problems with node-dirty that I would expect an alternative data store not to have:
Saving a Date object makes it come out as a string when reloading the data (but during execution it remains a Date object). I'm fine with having to serialize the Date object myself, as long as I get out the same thing it lets me put into it.
Iterating over data and deleting something in the same forEach loop makes the iteration stop.
My client is reporting deleted data re-appearing and I've intermittently seen this too, I have no idea why.
How much data do you have? For some projects it's reasonable to just have things in memory and persist them by dumping a JSON file with all the data.
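A minimal file-backed store along those lines might look like this (sketched in Python; the class name and file path are made up, and note the date comes back as a string, matching the serialization caveat in the question):

```python
import json
import os
import tempfile
from datetime import datetime

# Minimal sketch of a dump-everything-to-JSON store; JsonStore is a
# made-up name, not a real library.
class JsonStore:
    def __init__(self, path):
        self.path = path
        self.data = {}

    def save(self):
        # Dates are serialized explicitly via isoformat(); on reload they
        # come back as strings and must be re-parsed by the caller.
        with open(self.path, "w") as f:
            json.dump(self.data, f, default=lambda o: o.isoformat())

    def load(self):
        with open(self.path) as f:
            self.data = json.load(f)

path = os.path.join(tempfile.gettempdir(), "store_demo.json")
store = JsonStore(path)
store.data["created"] = datetime(2020, 1, 2, 3, 4, 5)
store.save()
store.load()
print(store.data["created"])  # 2020-01-02T03:04:05
```

Because the whole dataset is rewritten on every save, this only makes sense for the "few hundred sets of data" scale described above.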
Just use npm to install a NoSQL module like redis or mongodb.
I have a project where I use multiple tables to avoid keeping duplicated data in my SQLite file (even though I knew that using several tables would be a nightmare).
In my application I read data from one table in one method and insert data into another table in another method. When I do this, the sqlite3_step function returns error code 21, which is SQLITE_MISUSE.
According to my research, this was because I was not able to access the tables from multiple threads.
Up to now, I have read the SQLite website and learned that there are 3 threading modes for configuring an SQLite database:
1) Single-thread: all mutexes are disabled; SQLite is unsafe to use from more than one thread at once.
2) Multi-thread: safe to use from multiple threads, provided no single database connection is used by two threads at the same time.
3) Serialized: safe to use from multiple threads with no restriction; the best match for multithreaded database applications.
sqlite3_threadsafe() returns the compile-time threading mode (0 = single-thread, 1 = serialized, 2 = multi-thread), and checking it confirmed for me that my build was not compiled single-threaded, so I proved that much for myself.
Then, to take it under guarantee, I have this code to configure my SQLite database for serialized mode:
sqlite3_config(SQLITE_CONFIG_SERIALIZED);
When I use the code above in a class where I read and insert data into one table, it works perfectly :). But if I try to use it in a class where I read and insert data into two tables (where I actually really need it), the SQLITE_MISUSE problem comes up.
I checked the code where I open and close the database; there is no problem with it. Each part works as long as I delete the other.
I am using iOS 5, and this is a really big problem for my project. I heard that Instagram uses PostgreSQL; maybe this was the reason? Which would you suggest, PostgreSQL or SQLite?
It seems to me like you've got two things mixed up.
Single vs. multi-threaded
Single threaded builds are only ever safe to use from one thread of your code because they lack the mechanisms (mutexes, critical sections, etc.) internally that permit safe use from several. If you are using multiple threads, use a multi-threaded build (or expect “interesting” trouble; you have been warned).
SQLite's thread support is pretty simple. With a multi-threaded build, particular connections should only be used from a single thread (except that they can be initially opened in another).
All recent (last few years?) SQLite builds are happy with access to a single database from multiple processes, but the degree of parallelism depends on the…
Transaction type
SQL in general supports multiple types of transaction. SQLite supports only a subset of them, and its default is SERIALIZABLE. This is the safest mode of access; it simulates what you would see if only one thing could happen at a time. (Internally, it's implemented using a scheme that lets many readers in at once, but only one writer; there's some cleverness to prevent anyone from starving anyone else.)
SQLite also supports read-uncommitted transactions. This increases the amount of parallelism available to code, but at the risk of readers seeing information that's not yet been guaranteed to persist. Whether this matters to you depends on your application.
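To make the "one connection per thread" rule concrete, here is a small sketch using Python's sqlite3 module (the file name and row counts are arbitrary). Each thread opens its own connection, which is exactly what a multi-threaded build supports, and SQLite serializes the concurrent writers internally:

```python
import os
import sqlite3
import tempfile
import threading

path = os.path.join(tempfile.gettempdir(), "threads_demo.db")

# Create the schema from the main thread.
with sqlite3.connect(path) as conn:
    conn.execute("CREATE TABLE IF NOT EXISTS t (worker INTEGER, n INTEGER)")
    conn.execute("DELETE FROM t")

def worker(wid):
    # Each thread uses its OWN connection; the generous busy timeout
    # makes blocked writers wait for the lock instead of failing.
    c = sqlite3.connect(path, timeout=30)
    with c:  # commits the transaction on success
        for n in range(100):
            c.execute("INSERT INTO t VALUES (?, ?)", (wid, n))
    c.close()

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

with sqlite3.connect(path) as conn:
    print(conn.execute("SELECT COUNT(*) FROM t").fetchone()[0])  # 400
```

Sharing one connection object across the four threads, by contrast, is precisely the pattern that a multi-thread (non-serialized) build forbids and that tends to surface as SQLITE_MISUSE.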
As I understand it, if I open a view from a database using db.getView() there's no point in doing this multiple times from different threads.
But suppose I have multiple threads searching the view using getAllDocumentsByKey(). Is it safe to do so and iterate over the DocumentCollections in parallel?
Also, Document.recycle() affects the DocumentCollection; will two threads interfere with each other if they search for the same value and end up with the same documents in their collections?
Note: I'm just starting to research this in depth, but thought it'd be a good thing to have documented here, and maybe I'll get lucky and someone will have the answer.
The Domino Java API doesn't really like sharing objects across threads. If you recycle() one view in one thread, it will delete the backend JNI references for all objects that referenced that view.
So you will find your other threads are then broken.
Bob Balaban did a really good series of articles on how the Java API works and recycling. Here is a link to part of it.
http://www.bobzblog.com/tuxedoguy.nsf/dx/geek-o-terica-5-taking-out-the-garbage-java?opendocument&comments
Each thread will have its own copy of a DocumentCollection object returned by the getAllDocumentsByKey() method, so there won't be any threading issues. The recycle() method will free up memory on your object, not the Document itself, so again there wouldn't be any threading issues either.
Probably the most likely issue you'll have is if you delete a document in the collection in one thread, and then later try to access the document in another. You'll get a "document has been deleted" error. You'll have to prepare for those types of errors and handle them gracefully.
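That defensive pattern can be sketched generically in Python; the document store, the lock, and the DocumentDeleted exception below are all stand-ins for illustration, not the Domino API:

```python
import threading

# Stand-in for "document has been deleted" errors raised by the backend.
class DocumentDeleted(Exception):
    pass

# Hypothetical shared document store, keyed by document id.
store = {1: "a", 2: "b", 3: "c"}
lock = threading.Lock()

def get_document(doc_id):
    with lock:
        if doc_id not in store:
            raise DocumentDeleted(doc_id)
        return store[doc_id]

def process(collection, results):
    # Each thread iterates its OWN collection (its own list of ids).
    for doc_id in collection:
        try:
            results.append(get_document(doc_id))
        except DocumentDeleted:
            # Another thread removed this document: skip it gracefully
            # instead of letting the whole iteration fail.
            pass

with lock:
    del store[2]  # simulate a deletion performed by another thread

results = []
t = threading.Thread(target=process, args=([1, 2, 3], results))
t.start()
t.join()
print(results)  # ['a', 'c']
```

The point is simply that deletion by a sibling thread is treated as an expected, recoverable condition per document, rather than an error that aborts the whole traversal.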