I wrote a Lotus Domino agent to transfer Domlog.nsf data to a DB2 database. When I run the agent I get:
Error: Cannot find Connector 'DB2COPY2'
DB2COPY2 is the name of my DB2 client, or instance (I'm not sure what it's called).
The next error (probably caused by the first) is:
NO RESUME
In the LotusScript I have the line Dim conn As New LCConnection("DB2COPY2").
Any ideas on the first error? Thank you.
You want to access a data source without installing a driver for it. It's like wanting to drive a car without a key: it does not work. To access ODBC data sources, you need to install an ODBC driver (from the vendor of your database system). To access DB2 databases, you need a DB2 client installed on the server (supplied by IBM if you have the relevant licenses).
In addition, you could install DECS (free) or LEI (quite expensive) to make configuring access to the data sources easier, but neither makes the drivers obsolete.
So I am trying to monitor Oracle database replication using Zabbix.
My setup is:
Oracle Database 11g R2
Zabbix 5.4
I installed the ODBC client on the Zabbix server and followed this step; my Zabbix server can connect to my Oracle database via ODBC. But when I add the template 'Oracle Database by ODBC' I get this error:
I already set the host macros; here's the configuration:
I turned off the firewall and disabled SELinux, but it still can't fetch data from my database.
Can someone help me? Or does Zabbix no longer support monitoring Oracle Database 11g?
Thank you.
This setup does not support Oracle 11. I am pretty sure the template could be adjusted for it, but you need some Oracle and Zabbix knowledge to get that working.
Years ago I created https://github.com/ikzelf/zbxdb, which supports just about every release I could find. It is quite a different setup, but it also supports more advanced Oracle configurations. Maybe it can help you.
I have been working with Python and PostgreSQL for over a year. I can connect to and work with Postgres databases by blindly using various libraries. But whenever I change platform (most recently from a macOS laptop to a remote Ubuntu server) I go through a day or so of trying to get the libraries working; e.g. I was using pyodbc in some modules, but when I migrated the code to the server I had to switch to pg8000 because the modules as they were kept throwing errors.
Can someone explain, or point me to a link explaining, how Python connects to databases? For example, why do I need the MS ODBC driver for pyodbc to connect to Azure SQL or PostgreSQL, while pg8000 seems to need nothing at all to connect to a PostgreSQL server? When I move to an Ubuntu environment and install ODBC drivers, they show up on root under /etc and /opt (for MS ODBC) but also in my Conda environment (/anaconda3/envs/), and I don't know which is the correct choice for odbc.ini.
Like I say, I can get things working, but I really have no understanding of why they work, and that means I waste time experimenting every time I deal with a change of environment. I've not yet found an explanation online that covers more than a very specific circumstance, e.g. 'here's how to install our driver...'. Any help would be appreciated.
Final Update:
Following the responses, especially @Thompson's, the diagram below seems to be the final interpretation, and I have a better idea of where to look for answers. For the record, pyodbc, SQLAlchemy, and pg8000 have been my tools of choice, with no problems except as described in the question.
pyodbc is not actually a driver and doesn't contain one; it's a 'module for ODBC databases', so it's more of an interface from Python through an ODBC driver to some database. That's why, to use it, you have to have an actual separate driver to connect with. Azure SQL, being owned by Microsoft, reasonably requires Microsoft's ODBC driver, while Postgres requires a Postgres ODBC driver, and so on.
The ODBC driver manager is platform-specific, while the ODBC driver is database-specific. That explains why, if you are changing platforms or databases, you need to change drivers.
As Adrian noted, you don't need ODBC drivers for Postgres; it is more common to use Postgres/Python drivers (e.g. https://wiki.postgresql.org/wiki/Python).
psycopg2 is an actual PostgreSQL driver. It serves as a client from Python to Postgres, with no intermediary required. That's why you don't need to install anything else when you use it. I haven't used pg8000, but based on this list it's a driver too, so you won't need anything else.
EDITED TO ADD:
Think of a database as a 'black box' you need to activate, and its drivers as electrical sockets. An ODBC driver is a specific type of socket (ODBC is a standard developed by Microsoft). If you are using an ODBC plug from Python (like pyodbc) to a database, you need to make sure the database has an ODBC socket installed/activated.
But your database can have other sockets too, like the Python-compatible DBAPI available for Postgres. In that case you use a different, direct DBAPI connector, like psycopg2.
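To make the socket analogy concrete, here is a minimal sketch of the ODBC path in Python. The driver name, server, and credentials below are placeholders, and the connect call only succeeds if the named ODBC driver is actually installed and registered with the platform's driver manager:

import pyodbc  # interface to the ODBC driver manager, not a driver itself

# Ask the driver manager which ODBC "sockets" it can currently see.
print(pyodbc.drivers())

# Going through ODBC: the driver manager loads the named driver,
# which must be installed separately (here, Microsoft's SQL Server driver).
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myserver.example.com;DATABASE=mydb;"
    "UID=myuser;PWD=mypassword"
)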
Drivers are specific to a database. ODBC is a two-stage process: there is the ODBC driver manager, and then there are the database-specific drivers that allow you to talk to a database. You don't need ODBC to connect to a PostgreSQL server. If you are going through Python, you just need one of the Postgres drivers. You have already found pg8000; my preference is psycopg2.
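For comparison, a minimal sketch of the direct path, where the driver itself speaks the Postgres wire protocol and no ODBC layer is involved (host, database, and credentials are hypothetical):

import psycopg2  # native PostgreSQL driver; nothing else to install

# The driver connects straight to the server, no driver manager in between.
conn = psycopg2.connect(
    host="myserver.example.com", dbname="mydb",
    user="myuser", password="mypassword"
)
cur = conn.cursor()
cur.execute("SELECT version();")
print(cur.fetchone())

# pg8000 is pure Python and works the same way:
# import pg8000.dbapi
# conn = pg8000.dbapi.connect(host="myserver.example.com", database="mydb",
#                             user="myuser", password="mypassword")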
I've successfully installed the Pervasive 13 64-bit client onto Ubuntu Server 18.04.
How can I now establish a connection to the Pervasive 13 server (which is installed on a Windows 2008 R2 server) and perform an SQL query?
I'm extremely confused by the documentation, which directs me to the bcfg tool after client installation. I'm not clear whether that tool is for configuring the server or for setting up the client connection. Either way, the documentation is too abstract for my comprehension; I need concrete examples of someone successfully establishing a connection (at least to a hypothetical Pervasive server at some hypothetical IP address), NOT JUST abstract syntax that never shows an example of an SQL statement being submitted from the Linux command line.
Seriously, the documentation covers so much detail I don't immediately care about that I can never figure out my practical needs, which are simply to establish a connection to the database, perform an SQL query, and get a result set.
The installation of the client should have sensible defaults, and the post-installation documentation should focus on getting you connected and running SQL statements as quickly as possible, instead of going on and on about details that only matter when the defaults aren't sensible. Let me connect first! Then, if I have a problem, I'll care to learn more about other aspects of configuring the connection.
Pervasive is such an obscure database server that I'm left with only this documentation to figure this out. Any other database would probably have YouTube videos showing how to install the client, make some SQL queries, and get result sets.
Someone at Actian ought to be kind enough to make a quick-start video for the client on Ubuntu Server that covers installation and finishes with submitting SQL queries and getting result sets. After all, that's the purpose of a database client.
Can someone please provide some concrete examples of how I can turn this successful installation into a working connection to the database server, where I can submit SQL queries and receive result sets?
I'm not sure why the documentation points to bcfg.
If the client installed without displaying any errors, you need to add an ODBC DSN using dsnadd (https://docs.actian.com/psql/PSQLv13/index.html#page/uguide%2Fuguide.dsnadd.htm%23ww68699). An example of creating a client-side DSN pointing to a remote database is:
dsnadd -dsn=clientDemodata -db=Demodata -host=WindowsServerName
(where clientDemodata is the DSN created on the Linux box, and Demodata is the PSQL database on the remote server called WindowsServerName).
Once the DSN has been added, you should be able to use isql or isql64 (https://docs.actian.com/psql/PSQLv13/index.html#page/uguide%2Fuguide.isql.htm%23ww138933) to execute a query.
Running isql / isql64 with just the DSN will let you execute SQL queries interactively:
isql64 clientDemodata
An example of running isql using a file as input for the SQL statement(s) is:
cat two-queries.sql | isql clientDemodata -b
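If you would rather run the query from a script than interactively, any ODBC-capable tool should be able to use the same DSN. A sketch in Python, assuming pyodbc is installed and the Pervasive client's driver is registered with the ODBC driver manager (the table name is a hypothetical one from the Demodata sample database; adjust for your schema):

import pyodbc

# Connect through the client-side DSN created by dsnadd above.
conn = pyodbc.connect("DSN=clientDemodata")
cur = conn.cursor()
cur.execute("SELECT * FROM Class")  # hypothetical sample-table query
for row in cur.fetchall():
    print(row)
conn.close()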
If you've done all that, what errors or behavior are you seeing?
I am currently on a project where I have two VMs (virtual machines), one Windows and one Linux.
I also have an Oracle database where I have a simple table called "Material".
On the two VMs, I want to connect to my Oracle database without any client or libraries. The thing is, I want to create a script which runs on the VMs and can connect to my database and insert some data into my table "Material", but I can't install anything on the VMs (like the mysqlclient, for example).
So is it possible to connect to a database without installing anything on my VM? Or is there perhaps an online client I could use to send my SQL to my Oracle database?
I know it's quite difficult to understand my problem, so if you have any questions, feel free to ask.
I am working at a business in New Zealand. We currently use a remote server (Plexus) to store a large amount of data (some tables > 2 billion rows). We have started down the SharePoint route, and I have created a number of databases and apps in SharePoint that use this data. Currently, I have to run a program in New Zealand that downloads the data to our local server and then pushes it up into an Azure database, which the web apps connect to. I would like to remove this middle step for many reasons, but the biggest is that the web connection between NZ and the US tends to result in a lot of timeouts and long pulls, since large data sets have to come across the Pacific.
Ideally, I would like to have my C# code sitting in Azure and have it connect to the remote server directly. This way I could simply send the SQL request to Plexus and have the data go directly into the Azure databases. The major advantage is that everything would be based in the US, which would make things a lot faster.
The major hurdle is that we need to install an ODBC driver, given to us by the remote server's vendor, into Azure so it recognises the calls as genuine. Our systems administrator has said he has looked into it, and it seems this can't be done?
I was hoping someone in the Stack Overflow community has encountered a similar issue and resolved it?
Note: please don't think I am asking whether Azure has an ODBC connection, because I know it does. I am not asking if I can connect TO Azure; I am asking if I can connect Azure to another external data source.
In a Worker Role/Cloud Service in Azure you can install the ODBC driver in a startup task using PowerShell's ODBC cmdlets.
More info here: PowerShell Add-OdbcDsn, and here: PowerShell startup task in cloud services.
One option is to create a virtual machine in the same Azure data center as your database, and install your ODBC driver and your C# app there.