Clarion Database (.dat) - clarion

I am busy rewriting and redesigning a client's software for them. The software is 20 years old and was written in Clarion, and I have the .dat file (2 MB in size).
Does anyone know how I can extract the information from the Clarion database to a CSV? I have googled it, but the tools I find only extract the first 50 rows.
Note that it is not a TPS file (for some reason my Google searches lead to TPS files).

Donald, the TopScan utility that comes with the latest Clarion 10 version opens both Clarion .DAT files and TopSpeed .TPS files. You can export to CSV from this tool. If the database is password protected, you will need to know the password to open it.
There are also old 16-bit utilities from the DOS version of Clarion called CSCN and CFIL, but I don't know if you can find those.

Related

What is the optimal way to merge a few lines or a few words in a large file using NodeJS?

I would appreciate insight from anyone who can suggest the best (or a better) solution for editing large files, ranging anywhere from 1 MB to 200 MB, using Node.js.
Our process needs to merge lines into an existing file in the filesystem. We get the changed data in the following format, which needs to be merged into the file at the position defined in the change details.
[{"range":{"startLineNumber":3,"startColumn":3,"endLineNumber":3,"endColumn":3},"rangeLength":0,"text":"\n","rangeOffset":4,"forceMoveMarkers":false},{"range":{"startLineNumber":4,"startColumn":1,"endLineNumber":4,"endColumn":1},"rangeLength":0,"text":"\n","rangeOffset":5,"forceMoveMarkers":false},{"range":{"startLineNumber":5,"startColumn":1,"endLineNumber":5,"endColumn":1},"rangeLength":0,"text":"\n","rangeOffset":6,"forceMoveMarkers":false},{"range":{"startLineNumber":6,"startColumn":1,"endLineNumber":6,"endColumn":1},"rangeLength":0,"text":"f","rangeOffset":7,"forceMoveMarkers":false},{"range":{"startLineNumber":6,"startColumn":2,"endLineNumber":6,"endColumn":2},"rangeLength":0,"text":"a","rangeOffset":8,"forceMoveMarkers":false},{"range":{"startLineNumber":6,"startColumn":3,"endLineNumber":6,"endColumn":3},"rangeLength":0,"text":"s","rangeOffset":9,"forceMoveMarkers":false},{"range":{"startLineNumber":6,"startColumn":4,"endLineNumber":6,"endColumn":4},"rangeLength":0,"text":"d","rangeOffset":10,"forceMoveMarkers":false},{"range":{"startLineNumber":6,"startColumn":5,"endLineNumber":6,"endColumn":5},"rangeLength":0,"text":"f","rangeOffset":11,"forceMoveMarkers":false},{"range":{"startLineNumber":6,"startColumn":6,"endLineNumber":6,"endColumn":6},"rangeLength":0,"text":"a","rangeOffset":12,"forceMoveMarkers":false},{"range":{"startLineNumber":6,"startColumn":7,"endLineNumber":6,"endColumn":7},"rangeLength":0,"text":"s","rangeOffset":13,"forceMoveMarkers":false},{"range":{"startLineNumber":6,"startColumn":8,"endLineNumber":6,"endColumn":8},"rangeLength":0,"text":"f","rangeOffset":14,"forceMoveMarkers":false},{"range":{"startLineNumber":6,"startColumn":9,"endLineNumber":6,"endColumn":9},"rangeLength":0,"text":"s","rangeOffset":15,"forceMoveMarkers":false},{"range":{"startLineNumber":6,"startColumn":10,"endLineNumber":6,"endColumn":10},"rangeLength":0,"text":"a","rangeOffset":16,"forceMoveMarkers":false},{"range":{"startLineNumber":6,"startColumn":11,"endLineNumber":6,"endColumn":11},"rangeLength":0,"text":"f","rangeOffset":17,"forceMoveMarkers":false},{"range":{"startLineNumber":6,"startColumn":12,"endLineNumber":6,"endColumn":12},"rangeLength":0,"text":"s","rangeOffset":18,"forceMoveMarkers":false}]
Just opening the full file and merging those details would work, but it would break if we receive too many of these change details very frequently; that can cause out-of-memory issues, since the file is opened many times, and it is also a very inefficient approach.
There is a similar question aimed specifically at C# here. If we open the file in stream mode, is there a similar example in Node.js?
I would appreciate insight from anyone who can suggest the best (or a better) solution for editing large files, ranging anywhere from 1 MB to 200 MB, using Node.js.
Our process needs to merge lines into an existing file in the filesystem. We get the changed data in the following format, which needs to be merged into the file at the position defined in the change details.
General OS file systems do not directly support the concept of inserting info into a file. So, if you have a flat file and you want to insert data into it starting at a particular line number, you have to do the following steps:
Open the file and start reading from the beginning.
As you read data from the file, count lines until you reach the desired line number.
Then, if you're inserting new data, you need to read some more and buffer into memory the amount of data you intend to insert.
Then do a write to the file at the position of insertion of the data to insert.
Now, using a second buffer the size of the data you inserted, alternate reading the next chunk and writing out the previous one.
Continue until the end of the file is reached and all data has been written back to the file (after the newly inserted data).
This has the effect of rewriting all the data after the insertion point back to the file so it will now correctly be in its new location in the file.
As you can tell, this is not efficient at all for large files as you have to read the entire file a buffer at a time and you have to write the insertion and everything after the insertion point.
In node.js, you can use features in the fs module to carry out all these steps, but you have to write the logic to connect them all together as there is no built-in feature to insert new data into a file while pushing the existing data after it.
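A minimal sketch of wiring those fs pieces together (my own illustration; it uses a simpler variant of the same idea: instead of rewriting in place with alternating buffers, it streams to a temporary file that can then be renamed over the original):

const fs = require('fs');
const readline = require('readline');

async function insertAtLine(srcPath, destPath, lineNumber, text) {
  const rl = readline.createInterface({
    input: fs.createReadStream(srcPath),
    crlfDelay: Infinity,
  });
  const out = fs.createWriteStream(destPath);
  let current = 0;
  for await (const line of rl) {
    current += 1;
    if (current === lineNumber) out.write(text + '\n'); // insert before this line
    out.write(line + '\n');
  }
  await new Promise((resolve) => out.end(resolve));
}

// Hypothetical usage: insert a line before line 3 of data.txt, then swap
// the temporary file into place.
insertAtLine('data.txt', 'data.txt.tmp', 3, 'inserted line')
  .then(() => fs.renameSync('data.txt.tmp', 'data.txt'));

Note this still reads and rewrites the entire file; it just avoids holding more than one line in memory at a time.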
There is a similar question aimed specifically at C# here. If we open the file in stream mode, is there a similar example in Node.js?
The C# example you reference appears to just be appending new data onto the end of the file. That's trivial to do in pretty much any file system library. In Node.js, you can do that with fs.appendFile(), or you can open a file handle in append mode and then write to it.
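For instance (a minimal sketch; the file name is hypothetical):

const fs = require('fs');

// One-shot append; the file is created if it does not exist.
fs.appendFileSync('log.txt', 'new line\n');

// Or keep a handle open in append mode ('a') for repeated writes.
const fd = fs.openSync('log.txt', 'a');
fs.writeSync(fd, 'another line\n');
fs.closeSync(fd);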
To insert data into a file more efficiently, you would need to use a more efficient storage system than a single flat file for all the data. For example, if you stored the file in pieces in approximately 100 line blocks, then to insert data you'd only have to rewrite a portion of one block of data and then perhaps have some cleanup process that rebalances the block boundaries if a block gets way too big or too small.
For efficient line management, you would need to maintain an accurate index of how many lines each file piece contains and obviously what order the pieces should be in. This would allow you to insert data at a somewhat fixed cost no matter how big the entire file was as the most you would need to do is to rewrite one or two blocks of data, even if the entire content was hundreds of GB in size.
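A hypothetical sketch of such an index (the structure and names are my own illustration, not from any library): an ordered list of { file, lines } entries that is walked to find which block holds a given global line number, so only that block needs rewriting:

function locateBlock(index, lineNumber) {
  let offset = 0;
  for (const block of index) {
    if (lineNumber <= offset + block.lines) {
      return { block, localLine: lineNumber - offset };
    }
    offset += block.lines;
  }
  return null; // past the end of the data
}

const index = [
  { file: 'block-0001.txt', lines: 98 },
  { file: 'block-0002.txt', lines: 103 },
];
locateBlock(index, 150); // => the block-0002.txt entry, localLine 52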
Note, you would essentially be building a new file system on top of the OS file system in order to give yourself more efficient inserts or deletions within the overall data. Obviously, the chunks of data could also be stored in a database too and managed there.
Note, if this project is really an editor, text editing a line-based structure is a very well studied problem and you could also study the architectures used in previous projects for further ideas. It's a bit beyond the scope of a typical answer here to study the pros and cons of various architectures. If your system is also a client/server editor where the change instructions are being sent from a client to a server, that also affects some of the desired tradeoffs in the design since you may desire differing tradeoffs in terms of the number of transactions or the amount of data to be sent between client and server.
If some other language offers an optimal way, then I think it would be better to find that option, since you're saying Node.js might not have one.
This doesn't really have anything to do with the language you choose. This is about how modern and typical operating systems store data in files.
In the fs module there is a function named appendFile. It lets you append data to your file. Link.

Avoid database open when Excel is running

I developed a VB.NET program that uses an Excel file to generate some reports.
Since the program takes a long time to generate a report, I usually do other things while it is running. The problem is that sometimes I need to open other Excel files, and the Excel files used by the program are shown to me. I want to keep the files being processed hidden even when I open other Excel files. Is this possible? Thanks
The FileSystem.Lock Method controls access by other processes to all or part of a file opened by using the Open function.
The My feature gives you better productivity and performance in file I/O operations than Lock and Unlock. For more information, see FileSystem.
More information here.

Excel Workbook External Links not updating when saved on network drive.

I have two spreadsheets, one a source document that has data inputted and the other a destination document.
Both of these sheets are saved to a network drive, and the cells are linked through a "=SOURCE!$A$1"-type formula.
If I have both spreadsheets opened on the same computer they work swimmingly, but as soon as I open one on one computer and the other on another, they no longer update.
Excuse my inexperience; this is the first time I have attempted to do this. It may be impossible, but if it works on one computer, then why isn't it working on another?
I really need them to update in real time :) Both the source and destination are shared.
Any help would be greatly appreciated.
Cheers,
Logan
In your comments you clarified that File1 and File2 both need to be opened at the same time, as they both require human interaction in order to function.
This implies that File2 isn't really a "data file" per se. Data files are by definition used only for data storage and have no live interaction.
Your setup is an unusual one (having two Excel files that both require interaction and are dependent on each other being open to function). If this functions properly with both files open on the same computer, then this is likely a file-locking and/or permissions issue, and I'm not sure you'll be able to get around it as-is.
Potential Solutions:
Migrate your entire setup to Microsoft Access, which is designed to handle record locking and database splitting that is necessary for multi-user environments. (More info)
Create a 3rd Excel file (an actual data file that is to remain closed) and have Excel files 1 and 2 both link to it, using File3 as the "go-between".
Create an Access ACCDB file as the "go-between" data-storage location, and have Excel files 1 and 2 both link to it, since the ACCDB can have many connected computers/users regardless of whether it's opened or closed.
In order to get this working with Excel, you'll need to figure out the network shares/permissions necessary to open the two files simultaneously with asynchronous file locks, which is more of a network admin topic.
Asynchronous file locking is a built-in feature of Microsoft Access.
More Information:
Wikipedia : Data File (definition)
Microsoft Docs : Asynchronous File I/O
LinkedIn : Yes, Microsoft Access works in a Multi-User Environment
Office.com : Move data from Excel to Access
Stack Exchange : Server Fault (Q&A for system and network administrators)
Stack Exchange : Super User (Q&A for computer enthusiasts and power users)

What is file expansion?

I found this and couldn't understand it:
Example: Windows 8.3 filename expansion: "c:\program files" becomes "C:\PROGRA~1"
I tried to navigate to the two paths and they both worked.
Could anyone make it clear?
This is a holdover from the days of Windows 95, which extended the FAT filesystem (VFAT, and later FAT32) to enable long filenames, which was part of the selling point of the system itself.
At the time, there were still old DOS packages and old Win 3.1 packages that relied on the old 8.3 filename convention, that is, 8 characters plus a 3-character extension.
Windows 95 incorporated an API to convert automatically in both directions, whilst maintaining compatibility with the existing FAT system, even after using the FAT conversion utility. This was to ensure that no file breakage occurred for the old applications running on it.
That API is still available to this day.
GetShortPathName, with the long filename as parameter, returns the short 8.3 name, abbreviated in the form of ~.
GetLongPathName, with the 8.3 filename as parameter, returns the long filename.
Source found in MSDN
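Node.js has no built-in binding for these Win32 calls, but on Windows the same mapping can be observed from cmd.exe, whose %~s for-variable modifier expands a path to its short 8.3 form. A hypothetical sketch:

const { execSync } = require('child_process');

// Ask cmd.exe to print the short (8.3) form of a path via the %~sI modifier.
function shortPathName(longPath) {
  return execSync(`for %I in ("${longPath}") do @echo %~sI`, { shell: 'cmd.exe' })
    .toString()
    .trim();
}

console.log(shortPathName('C:\\Program Files')); // e.g. C:\PROGRA~1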
In ye olde days, the FAT file system used by MS-DOS only supported eight-character file names (plus a three-character extension).
When MS switched to the FAT32 file system that supported longer names (and later to NTFS), this created migration issues. There were old systems that only supported 8+3 file names that would be accessing FAT32 disks over a network, and there would be old software that only worked with 8+3 file names.
The solution MS came up with was to create short path names that used ~ and numbers to create unique 8+3 aliases for longer file names.
If you were on an old system and accessing a networked disk (or even using DOS commands on a FAT32 local disk early on):
c:\program files
became
C:\PROGRA~1
If you had
c:\program settings
That might come out as
C:\PROGRA~2
In short, this was then a system for creating unique 8+3 file names that mapped to longer file names so that they could be used with legacy systems and software.

Copy SQLite database to another path

I am creating an SQLite database in a temp folder. Now I want to copy that file to another folder. Is there any SQLite command to rename the SQLite database file?
I tried using the rename function in C++, but it returns error 18. Error no. 18 means: "The directory containing the name newname must be on the same file system as the file (as indicated by the name oldname)."
Can someone suggest a better way to do this?
Use a temporary directory on the correct filesystem!
First, an SQLite database is just a file. It can be moved or copied around however you wish, provided that:
It was correctly closed last time, so there is nothing to roll back in the journal.
If it uses a write-ahead-log type of journal, it is fully checkpointed.
So moving it as a file is correct. Now there are two ways to move a file:
Using the rename system call.
By copying the file and deleting the old one.
The former has many advantages: it can't leave a partially written file around, it can't leave both files around, it is very fast, and if you use it to rename over an old file, there is no period during which the target name doesn't exist (the last is POSIX semantics; Windows can do it on NTFS, but not on FAT filesystems). It also has one important disadvantage: it only works within a single filesystem. So you have to:
If you are renaming it to ensure that a partially written file is not left behind in case the process fails, you need to use rename and thus have to use a temporary location on the same filesystem. Often this is done by using a different name in the same directory instead of using a temporary directory.
If you are renaming it because the destination might be slow, e.g. because you want to put it on a network share, you obviously want to use a temporary directory on a different filesystem. That means you have to read the file and write it under the new name. There is no function for this in the standard C or C++ library, but many libraries will have one (and high-level languages like Python will have one, and you can always just execute /bin/mv to do it for you).
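The question is about C++, but the fallback is easy to sketch in any language; here is a minimal Node.js illustration (paths are hypothetical) that tries the fast rename first and falls back to copy-then-delete when the rename fails across filesystems:

const fs = require('fs');

function moveFile(oldPath, newPath) {
  try {
    fs.renameSync(oldPath, newPath); // fast path: same filesystem
  } catch (err) {
    if (err.code !== 'EXDEV') throw err; // EXDEV = rename across filesystems
    fs.copyFileSync(oldPath, newPath);   // cross-filesystem: copy ...
    fs.unlinkSync(oldPath);              // ... then remove the original
  }
}

moveFile('/tmp/app.db', '/data/app.db');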
Now I want to copy that file to another folder. Is there any SQLite command to rename the SQLite database file?
Close the database. Copy the database to the new path using the shell.
Also see Distinctive Features Of SQLite:
Stable Cross-Platform Database File
The SQLite file format is cross-platform. A database file written on one machine can be copied to and used on a different machine with a different architecture. Big-endian or little-endian, 32-bit or 64-bit does not matter. All machines use the same file format. Furthermore, the developers have pledged to keep the file format stable and backwards compatible, so newer versions of SQLite can read and write older database files.
Most other SQL database engines require you to dump and restore the database when moving from one platform to another and often when upgrading to a newer version of the software.
