Copy / Replace DB2 Table Data from one Remote DB Server to Another - linux

I have two IBM DB2 servers at separate remote locations.
I need to copy data from several tables on one database on the first server to another database in the second server.
Previously I've used the IBM Data Studio tool to export the data and Load Replace it into the corresponding table on the second server.
I need a way to automate this, probably through command-line shell scripts.
How can I accomplish this?

You can always use the data movement utilities included in DB2: EXPORT on one side (the source server), and then IMPORT or LOAD on the other (the target server). However, you have to take care of the transport yourself, i.e. copy the exported files from one server to the other (via SCP, FTP, etc.), and all of that can be automated.
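A minimal sketch of that approach from the shell could look like the following; database, schema, table, host and path names are placeholders, and it assumes key-based SSH is set up and that the db2profile sits in the usual instance-owner location on the target:

    #!/bin/sh
    # --- on the source server: export the table to an IXF file ---
    db2 connect to SRCDB
    db2 "EXPORT TO /tmp/mytable.ixf OF IXF MESSAGES /tmp/mytable_exp.msg SELECT * FROM MYSCHEMA.MYTABLE"
    db2 connect reset

    # --- transport: copy the export file to the target server ---
    scp /tmp/mytable.ixf dbuser@target-host:/tmp/

    # --- on the target server: load it, replacing the existing rows ---
    ssh dbuser@target-host '. ~/sqllib/db2profile &&
      db2 connect to TGTDB &&
      db2 "LOAD FROM /tmp/mytable.ixf OF IXF MESSAGES /tmp/mytable_load.msg REPLACE INTO MYSCHEMA.MYTABLE" &&
      db2 connect reset'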
You can also use a newer tool called INGEST. It is a client-side tool that writes the data into the target tables of the remote server, which means the source server acts as a client of the target server (you can catalog a remote database on a database server).
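A hedged sketch of that, with placeholder host, port, database and table names (the catalog step only has to be done once on the machine that runs INGEST; depending on your DB2 version you may also need to create the ingest restart table or add a RESTART clause):

    #!/bin/sh
    # Catalog the remote (target) database once on the client side.
    db2 catalog tcpip node TGTNODE remote target-host server 50000
    db2 catalog database TGTDB at node TGTNODE

    # Connect to the remote database and push a delimited export file into it.
    db2 connect to TGTDB user dbuser using secret
    db2 "INGEST FROM FILE /tmp/mytable.del FORMAT DELIMITED INSERT INTO MYSCHEMA.MYTABLE"
    db2 connect reset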
Finally, you can create a federation between the two servers (this is my favorite). This means that in one database (for example on the target server) you expose the tables of the other (source) server. That lets you run queries that mix local and remote tables, and it also lets you LOAD into a local table from a CURSOR that selects from a table on the remote server.
The last option may sound complicated, but it is not. You just have to define the remote elements (wrapper, server, nickname, etc.) and that is it. Once you have configured that, you do not have to worry about file transfers, load states, etc. This option is free of charge because both servers are DB2 (you can also do it with other RDBMSs).
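As a rough sketch of what that setup looks like, run on the target server; server, schema, user and version names are placeholders, the remote database must already be cataloged locally, and the exact CREATE SERVER options depend on your DB2 version. The CREATE statements are one-time setup; only the last two commands need to run on each refresh, and they must run in the same shell session (or in one file via db2 -tvf) so the cursor stays in scope:

    #!/bin/sh
    # Federation must be enabled in the instance first:
    #   db2 update dbm cfg using FEDERATED YES   (then restart the instance)
    db2 connect to TGTDB

    # Define the remote (source) server and a nickname for its table.
    db2 "CREATE WRAPPER DRDA"
    db2 "CREATE SERVER SRCSRV TYPE DB2/UDB VERSION 10.1 WRAPPER DRDA AUTHORIZATION \"srcuser\" PASSWORD \"secret\" OPTIONS (DBNAME 'SRCDB')"
    db2 "CREATE USER MAPPING FOR USER SERVER SRCSRV OPTIONS (REMOTE_AUTHID 'srcuser', REMOTE_PASSWORD 'secret')"
    db2 "CREATE NICKNAME MYSCHEMA.MYTABLE_REMOTE FOR SRCSRV.MYSCHEMA.MYTABLE"

    # Pull the rows across with LOAD FROM CURSOR, replacing the local copy.
    db2 "DECLARE c1 CURSOR FOR SELECT * FROM MYSCHEMA.MYTABLE_REMOTE"
    db2 "LOAD FROM c1 OF CURSOR REPLACE INTO MYSCHEMA.MYTABLE"
    db2 connect reset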
For more information:
Ingest: http://publib.boulder.ibm.com/infocenter/db2luw/v10r1/topic/com.ibm.db2.luw.admin.dm.doc/doc/c0057237.html
Load, Import and Export are documented at the same level as the previous link (Database Administration > Data movement utilities and reference).

There are multiple options (using scripting or DB2 replication):
Script the export on one server, optionally tar/gzip the files, and send them to the second server via SSH/SCP; on the second server another script extracts and loads the data (see the sketch after this list).
DB2 has built-in replication support.
Check the DB2 Information Center at IBM, or Google it; this is an easy and very common task.
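For the transfer step of the first option, one compact pattern (paths and host name are placeholders) is to stream the compressed export directory directly over SSH instead of creating an intermediate archive:

    #!/bin/sh
    # Compress the export directory on the fly and unpack it on the second server.
    tar czf - -C /db2exports . | ssh dbuser@server2 'tar xzf - -C /db2imports'
    # A script on server2 (or the same ssh call) can then run the db2 LOAD step.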

Related

Connect to external SQLite Database

I have a website which is powered by an SQLite database (essentially a db file). Sometimes I need to look at my database from other machines (within the same local network). For that purpose I currently use sqlite-web, which provides a mini SQL web viewer onto my db file.
Since sqlite-web's functionality is quite limited, I am wondering whether there are ways to let other machines connect to my local db file via normal desktop applications (such as DataGrip). Similar to how one can connect to postgres via jdbc:postgresql://host:port. Or is this not possible with SQLite?
Edit: I would like to limit the access (e.g. via username + password or a generic PIN), as I don't want everyone in the network to be able to connect to my db.
Map the drive and then use the file path of the mapping.
Or use Remote Desktop to access it directly:
https://support.microsoft.com/en-us/windows/how-to-use-remote-desktop-5fe128d5-8fb1-7a23-3b8a-41e636865e8c
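For example, on a Linux client the drive-mapping idea could look like this; share name, mount point and credentials are placeholders, and the JDBC URL is the form used by the common sqlite-jdbc driver:

    # Mount the share that holds the .db file; the share credentials give you
    # the user/password-style access control asked about above.
    sudo mount -t cifs //dbhost/shared /mnt/shared -o username=dbuser,password=secret

    # A desktop client such as DataGrip can then open the file through the mount:
    #   jdbc:sqlite:/mnt/shared/mydb.sqlite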

Sync two mongodb databases from different locations programmatically

I have a web application (using a MongoDB database, AngularJS on the front end and NodeJS on the back end) that is deployed in two places. The first is on a static IP so that it can be accessed from anywhere, and the second is on a local machine so that users can keep working when no internet connection is available. So data can be inserted by users in both places. My requirement is to sync the two databases whenever an internet connection is available on the local machine, i.e. from the local database to the remote database and vice versa, without losing any data on either side.
One way I am thinking about is to provide a sync button in the application and sync the databases using insert/update queries. I am not sure whether there is a better, automated way to do this so that the databases sync on their own, like data being copied within a replica set.
Please suggest the best way to do this. Thanks in advance.

backing up entire linux server on external hard drive or another cluster

We have a Linux server for the database. Most of our data is under /var/. I would like to back up the entire directory to an external hard drive or to another Linux system, so that if something goes wrong I can replace the directory wholesale. Since the directory has many files, I do not want to copy and paste every time; I would rather sync them.
Is there an easy way to do that? rsync can do it, but how do I avoid having to log in to the server every time? BTW, I have limited knowledge of Linux systems.
Appreciated any comments and suggestions.
Bikesh
Rsyncing live database files is not a recommended way of backing them up. I believe you have MySQL running on the server. In that case, you can take a full database dump on the server, using the steps mentioned in the following link:
http://www.microhowto.info/howto/dump_a_complete_mysql_database_as_sql.html#idp134080
Then sync those dump files to your backup server. You can use the rsync command for this purpose:
http://www.tecmint.com/rsync-local-remote-file-synchronization-commands/
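A minimal sketch of the dump-and-sync step (paths, host and user names are placeholders; schedule it from cron once it works by hand):

    #!/bin/sh
    # Dump all databases consistently (InnoDB) without locking everything.
    # Credentials can live in ~/.my.cnf so no password appears on the command line.
    mysqldump --all-databases --single-transaction > /var/backups/all-databases.sql

    # Push the dumps to the backup server. Key-based SSH, set up once with
    # ssh-keygen and ssh-copy-id, avoids the password prompt on every run.
    rsync -avz /var/backups/ backupuser@backuphost:/var/backups/dbserver/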
Make sure that you have installed MySQL on the backup server too. You can also copy the MySQL configuration file /etc/my.cnf to the backup server. If you need the backup database to be kept up to date continuously, you can set up MySQL replication; you can follow the guide below to do that.
http://kbforlinux.blogspot.in/2011/09/setup-mysql-master-slave-replication.html

Transfer liferay files (documentt library) between two servers

I have built my Liferay website in the development environment and it is now ready to be published. I have also installed two Liferay nodes on two different servers where I want to put my website: server1 is active and server2 is a backup.
The problem is that when I started development, I didn't know I would one day need this two-server structure, so I stored all the documents and images on the file system and not in a database. So basically, with this setup, when I make changes on server1 I have to transfer the document library manually to server2, just as I would do for the themes.
I tried to change the document library location from the file system to the database in portal-ext.properties, but that didn't help.
So, my questions:
Is there a way to transfer these files to a database now, where they can be shared by both servers? and if not,
Is it possible to somehow transfer the document library from server1 to server2 automatically through some script?
Thanks,
Adia
If server2 is a cold-standby backup server, and assuming you have consistent backups of server1's Liferay data directory and of the database from the same moment in time, you can simply restore the data directory backup to server2, restore the DB to that same point in time, and start server2.
In hot-standby scenarios and clustered environments things get a little more complicated, because you need a common place to store documents, images, search indexes, etc. The easiest way is to store everything in the database or on a shared file system so that all nodes are always working on the same data.
If you want to get your current set of documents, which are stored on disk, into the database, the easiest way is the Server > Server Administration > Data Migration tab in the Control Panel. It has an option to migrate documents from the existing repository (the disk) to another one, which in your case would be the JCRStore, as that store can be configured to use the database.
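If you stay on the file system for now, a simple way to script the transfer in your second question is to push the document library folder to the standby node on a schedule. A rough sketch; the path below is only the typical default under the Liferay home, so adjust it to your installation, and key-based SSH between the nodes is assumed:

    #!/bin/sh
    # Mirror the document library from server1 to server2 (run on server1,
    # e.g. from cron).
    rsync -avz --delete /opt/liferay/data/document_library/ \
        liferay@server2:/opt/liferay/data/document_library/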

Secure ODBC network connection to an MS Access database

Pardon my outrageous silliness, I don't know if this is even possible.
Here's the situation.
There is an MS Access "database" (yes, I know, believe me, I know) which I'll need to SELECT, UPDATE and INSERT to from a remote location. The catch is that this needs to happen securely.
I have complete control over the remote machine which hosts the MS Access file, so I can put in drivers and software as I please. The server is Microsoft Windows Server 2003.
The approach that I had intended to take was to host a PHP script on an HTTPS server (using either Apache or IIS, it doesn't matter), send XML to the PHP script, which would then do its thing on the MS Access database and send XML results back. However, due to time constraints, I'm trying to figure out whether I can connect directly through ODBC in a secure manner and have it talk to an MS Access database.
It's my understanding that ODBC is not exactly famous for being secure, but that there are ODBC drivers that support encrypted connections, or that I can somehow tunnel the ODBC connection through SSL. However, all the information I have found so far assumes the database is Microsoft SQL Server.
In particular, I'm interested in whether there are ways to SSL-ify ODBC connections regardless of the underlying database. I could probably figure that out on a Unix clone by myself, but the host is Windows Server 2003, and there I don't know how to proceed.
Is this possible at all? Any information highly appreciated!
The problem here is that you are not quite understanding how an ODBC connection to Access works. We are not talking about a TCP/IP or socket-based connection here.
If you look at ANY connection string for Jet to an Access file, you will see that the ODBC connection will always, I REPEAT ALWAYS, include a fully qualified Windows path name. When I say a fully qualified Windows path name, I am talking about a file that is sitting on the hard disk.
At the end of the day we are thus talking about opening a plain-Jane Windows file. A horse is a horse is a horse, and a Windows file is a Windows file is a Windows file.
In other words, we are talking about opening a file sitting on the hard disk. So this whole process is no different from opening an Excel file, a text file, a PowerPoint file, or in this case an Access file that just happens to be sitting on the hard disk.
There is no server or particular database software that EVER has to be installed on the computer where this file sits. It is the CLIENT SIDE that must have the software and execute a standard Windows file-open command to pull the data off the disk drive. Remember, when you place a Word file on a server and open it, you never have to install Word on the server; it is the client side that does the standard Windows file open, and the exact same scenario applies when Jet opens an Access file.
What this means is that if you are going to open this file over an internet connection, you must extend Windows networking over the internet. HTTP, or even FTP, is nothing remotely close to the Windows file-networking protocol.
However, you can extend the Windows networking system over the internet, and this is typically done with what is called a VPN (virtual private network). That means you will have to set up a VPN. This will let you see the other computer via Network Neighborhood, browse to the folder on the server, and simply open the file. Again, you are opening a standard Windows file; there is no service running on the server that you can connect to, as there is with SQL Server.
You can read the following article of mine, in which I explain why running Windows networking and a Jet (Access) file over a VPN on the internet simply will not work reliably:
http://www.members.shaw.ca/AlbertKallal//Wan/Wans.html
So just keep in mind that if you look at any Jet ODBC connection string, you will notice it is never IP based; it must be a FULLY QUALIFIED STANDARD Windows file name. I cannot stress and repeat enough that it is a standard Windows file name and location that we are going to open.
Remember, this is no different from opening Word, Excel or PowerPoint. The ODBC driver confuses the issue, since the driver only has to be installed and set up on the client side; there is nothing to connect to on the server side except the required ability to open a standard plain-Jane Windows file.
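To make that concrete, compare two typical ODBC connection strings (the values here are only illustrative). The Access/Jet string points at a file path on a Windows share, while a true server database is reached by host name and port:

    Access/Jet (just a file path on a Windows share):
        Driver={Microsoft Access Driver (*.mdb)};Dbq=\\FILESERVER\share\mydb.mdb;Uid=Admin;Pwd=;

    SQL Server (a network service reached by host and port):
        Driver={SQL Server};Server=dbhost.example.com,1433;Database=MyDb;Uid=appuser;Pwd=secret;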
So what you ask is possible with a VPN, but not practical. You can read the above article; it explains in detail why this cannot work reliably.
With the advent of several free editions of SQL Server, and so many other choices, the above limitation is likely not going to be an issue for you. These other server database systems are not file based, and their connection strings will NEVER resolve to a file name. Because such database servers do not require the Windows networking protocol to open a file, you can even connect to servers running Linux etc. that do not have Windows networking installed at all. For a Jet connection, you have to use Windows networking to open the file directly.
Usually one puts an intermediary between clients and the database. The intermediary handles authentication, authorization, secure data transmission, etc. You assume that the database is inside your firewall, in a secure area. All the things you want to add to make things secure for clients that are outside your firewall are handled by the intermediary.
Being a Java person, I would automatically think web client talking to one or more servlets. Let the servlet handle authentication and authorization. HTTP means no firewall worries. You can use HTTPS, too.
I think that'd be easier to put in place. Besides, even an SSL-ified ODBC connection still exposes your database to the wider Internet. I wouldn't want my data in such a repository. Would you?
Why does your MS Access (really MS Jet) database have only one file?
I can't picture that. If it were not an ODBC database, then I could picture it.
Most MS Jet ODBC databases have hundreds of *.MDB files in them, where each MDB file acts as either a single table, a group of tables, or a partial table that is logically and physically spread (not split, and with no linking) across dozens or hundreds of MDB files. No single MDB file is considered a database in and of itself.
This is how I have seen ODBC databases built using the MS Access Driver and MS Jet Engine.
Most ODBC MS Jet/MS Access Driver databases are around 5 billion rows and 1 terabyte in size.
