I want to use a MySQL database as a centralised database for storing user information (credentials) for an iPhone application. MySQL is not installed on my system. Can anyone tell me how to use MySQL on Mac OS?
The MySQL manual has a section on installing MySQL on OS X.
Obviously, to access it from an iPhone application you will then have to configure it to be exposed over the network (not a good idea except possibly for LAN use only) or put something in between the database and the world (a Web Service is the usual approach).
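To make the "something in between" approach concrete, here is a minimal Python sketch of such a web service (Flask plus PyMySQL); the host, credentials, table and column names are placeholders, and the iPhone app would call this endpoint over HTTPS instead of talking to MySQL directly:

# Hypothetical sketch of the "web service in between" approach: the iPhone app
# talks HTTP(S) to this service, and only the service talks to MySQL.
# Assumes Flask and PyMySQL are installed; host, credentials, table and
# column names are placeholders.
from flask import Flask, request, jsonify
import pymysql
from werkzeug.security import check_password_hash

app = Flask(__name__)

def get_connection():
    # The MySQL server stays on localhost / the LAN; it is never exposed directly.
    return pymysql.connect(host="127.0.0.1", user="appuser",
                           password="secret", database="users_db")

@app.route("/login", methods=["POST"])
def login():
    username = request.form["username"]
    password = request.form["password"]
    conn = get_connection()
    try:
        with conn.cursor() as cur:
            # Parameterised query to avoid SQL injection.
            cur.execute("SELECT password_hash FROM users WHERE username = %s",
                        (username,))
            row = cur.fetchone()
    finally:
        conn.close()
    ok = row is not None and check_password_hash(row[0], password)
    return jsonify({"authenticated": ok})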
If you mean that you want to install it on iOS, then forget it. There isn't an iOS version. Look to SQLite instead. AFAIK it is the usual approach for SQL on that platform.
If it's possible, I'm interested in being able to embed a PostgreSQL database, similar to SQLite. I've read that it's not possible. I'm no database expert though, so I want to hear from you.
Essentially I want PostgreSQL without all the configuration and installation. If it's possible, tell me how.
Run PostgreSQL in a background process.
Start a separate thread in your application that starts a PostgreSQL server in local mode, either by binding it to localhost on some random free port or by using Unix-domain sockets (does Windows support those?). That should be fairly easy, something like:
system("\"C:\\Program Files\\MyApplication\\pgsql\\postgres.exe\" -D \"C:\\Documents and Settings\\User\\Local Settings\\MyApplication\\database\" -h 127.0.0.1 -p 12345");
and then just connect to 127.0.0.1:12345.
When your application quits, you can always send a SIGTERM to the PostgreSQL process and then wait a few seconds for it to shut down (i.e. join the thread).
PS: You can also use pg_ctl to control your "embedded" database, even without threads: just run "pg_ctl start" (with appropriate options) when starting the application and "pg_ctl stop" when quitting it.
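As a rough illustration of the pg_ctl variant, something like this in Python (the pg_ctl path, data directory and port are placeholders loosely based on the example above):

# Rough sketch of starting/stopping a local PostgreSQL instance via pg_ctl.
# Paths and the port are placeholders; pg_ctl is assumed to be installed
# alongside the application.
import subprocess

PGCTL = r"C:\Program Files\MyApplication\pgsql\bin\pg_ctl.exe"
DATADIR = r"C:\MyApplicationData\database"

def start_db():
    # -w waits until the server is ready; -o passes options through to postgres.
    subprocess.check_call([PGCTL, "-D", DATADIR,
                           "-o", "-h 127.0.0.1 -p 12345", "-w", "start"])

def stop_db():
    # "fast" mode disconnects clients and shuts the server down cleanly.
    subprocess.check_call([PGCTL, "-D", DATADIR, "-m", "fast", "stop"])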
You cannot embed it, nor should you try.
For embedding you should use SQLite, as you mentioned, or the Firebird RDBMS.
Unless you do a major rewrite of the code, it is not possible to run Postgres "embedded". Either run it as a separate process or use something else. SQLite is an excellent choice, but there are others. MySQL has an embedded version; see it at http://mysql.com/oem/. There are also several Java choices, and on the Mac there is Core Data you can write to. Hell, you can even use FoxPro. What OS are you on, and what services do you need from the database?
You can't embed it as an in-process database like SQLite, but you can easily embed it into your application setup using Inno Setup (http://www.innosetup.org). Search their mailing list archive and you will find that someone has done most of the work for you; all you have to do is grab the zipped distro, and you can easily have PostgreSQL installed when the user installs your app. You can then use the pg_hba.conf file to restrict the server to localhost only. Not a true embedded DB, but it would work.
PostgreSQL is intended to run as a stand-alone server; it's probably possible to embed it if you hack at it hard and long enough, but it would be much easier to just run it as intended in a separate process.
HSQLDB (http://hsqldb.org/) is another db which is easily embedded. Requires Java, but is an excellent and often-used choice for Java applications.
Has anyone tried these on Mac OS X:
http://pagesperso-orange.fr/bruno.gaufier/xhtml/prod_postgresql.xhtml
http://www.macosxguru.net/article.php?story=20041119135924825
(Of course SQLite would be my embedded DB of choice as well.)
Well, I know this is a very old post, but if anyone has this question nowadays, I would refer to the following:
You can use containers running Postgres. Here's a post that could be helpful, doing something along these lines using R:
https://rsangole.netlify.app/post/2021/08/07/docker-based-rstudio-postgres/?utm_source=pocket_mylist
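For the container route, a minimal sketch using the Docker SDK for Python (assumes the Docker daemon is running and the docker package is installed; the image tag, password and port mapping are arbitrary):

# Start a throwaway Postgres container from Python, then tear it down.
import docker

client = docker.from_env()
container = client.containers.run(
    "postgres:14",                                # image tag is arbitrary
    detach=True,
    environment={"POSTGRES_PASSWORD": "secret"},  # placeholder password
    ports={"5432/tcp": 5432},                     # expose on localhost:5432
)

# ... connect to localhost:5432 with your usual Postgres driver ...

container.stop()
container.remove()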
Take a look at DuckDB: https://duckdb.org/docs/installation/. It is relatively new and still needs to mature, but it works pretty much like an embedded database ("in-process, serverless"), with bindings for several languages (Python, R, Java, ...).
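To show what "in-process, serverless" looks like in practice, a quick Python sketch (the file name and schema are made up):

# DuckDB runs inside your process and stores everything in a single file,
# much like SQLite. File name and table are placeholders.
import duckdb

con = duckdb.connect("app.duckdb")
con.execute("CREATE TABLE IF NOT EXISTS users (id INTEGER, name VARCHAR)")
con.execute("INSERT INTO users VALUES (1, 'alice')")
print(con.execute("SELECT * FROM users").fetchall())
con.close()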
I have been working with Python and PostgreSQL for over a year. I can connect to and work with Postgres databases by blindly using various libraries, but whenever I change platform (most recently from a macOS laptop to a remote Ubuntu server) I go through a day or so of trying to get the libraries working, e.g. I was using 'pyodbc' in some modules, but when I migrated the code to the server I had to switch to 'pg8000' because the modules as they were kept throwing errors.
Can someone explain, or point me to a link explaining, how Python connects to databases? For example, why do I need an MS ODBC driver for 'pyodbc' to connect to an Azure SQL or PostgreSQL database, while 'pg8000' seems to need nothing at all to connect to PostgreSQL? When I move to an Ubuntu environment and install ODBC drivers, they show up under /etc and /opt (for MS ODBC) but also in my Conda environment (/anaconda3/envs/), and I don't know which is the correct choice for 'ODBC.ini'.
Like I say, I can get things working, but I really have no understanding of why they work, and that means I waste time experimenting every time I deal with a change of environment. I've not yet found an explanation online that covers more than a very specific circumstance, e.g. 'here's how to install our driver ...'. Any help would be appreciated.
Final Update:
Following the responses, especially @Thompson's, the diagram below seems to be the final interpretation, and I have a better idea of where to look for answers. For the record, pyodbc, SQLAlchemy and pg8000 have been my tools of choice, with no problems except as described in the question.
pyodbc is not actually a driver and doesn't contain one; it's a 'module for ODBC databases', so it's more of an interface from Python to an ODBC driver to some database. That's why to use it you have to have an actual, separate driver to connect through. Azure SQL, being owned by Microsoft, would reasonably require Microsoft's ODBC driver, while Postgres will require a Postgres ODBC driver, etc.
The ODBC driver manager is platform-specific, while the ODBC driver is database-specific. That would explain why, if you are changing platforms or databases, you need to change drivers.
As Adrian noted, you don't need ODBC drivers for Postgres; it is more common to use Postgres/Python drivers (e.g. https://wiki.postgresql.org/wiki/Python).
psycopg2 is an actual PostgreSQL driver. It serves as a client from Python to Postgres, no intermediary required. That's why you don't need to install anything else when you use it. I haven't used pg8000, but based on this list it's a driver too, so you won't need anything else.
EDITED TO ADD:
Think of a database as some 'black box' you need to activate, and its drivers as electrical sockets. An ODBC driver is a specific type of socket (ODBC is a standard developed by Microsoft). If you are using an ODBC plug from Python (like pyodbc) to a database, you need to make sure the database has an ODBC socket installed/activated.
But your database can have other sockets too, like the Python-compatible DB-API that's available for Postgres. In that case you use a direct DB-API connector, like psycopg2.
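To make the socket analogy concrete, a hedged side-by-side sketch (the driver name, server and credentials are placeholders): pyodbc goes through an ODBC driver you must install separately, while psycopg2 or pg8000 speaks the Postgres protocol directly:

# Two ways to reach a database from Python; connection details are placeholders.
import pyodbc
import psycopg2

# pyodbc: Python -> ODBC driver manager -> database-specific ODBC driver -> DB.
# The named ODBC driver must already be installed on the machine.
odbc_conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myserver.database.windows.net;DATABASE=mydb;UID=user;PWD=secret"
)

# psycopg2: Python speaks the Postgres wire protocol directly, no ODBC layer.
pg_conn = psycopg2.connect(host="localhost", dbname="mydb",
                           user="user", password="secret")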
Drivers are specific to a database. ODBC is a two-stage process: there is the ODBC driver manager, and then there are the database-specific drivers that allow you to talk to a database. You don't need ODBC to connect to a PostgreSQL server. If you are going through Python, you just need one of the Postgres drivers. You have already found pg8000; my preference is psycopg2.
I'm working with Python 3.6 on Ubuntu 16.04 and trying to connect to an Oracle database which is installed on another machine, so I have installed the "cx_Oracle" Python module by following this link.
Now, my doubt is: why is it necessary to install the Oracle client on my machine when I am accessing a database on another machine from my Python script?
The Oracle Client libraries provide all the necessary network connectivity (e.g. things like network encryption), connection management (e.g. connection pooling), high-availability features, cross-platform and Oracle-version support, data caching, etc. They are used by many different users and applications. They have significant engineering in them, not to mention testing.
cx_Oracle makes calls to the Oracle Client libraries (which are freely available in Oracle Instant Client), so you need those libraries.
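A minimal sketch of what this looks like from Python (the Instant Client path, credentials and DSN are placeholders; init_oracle_client exists only in recent cx_Oracle versions and is only needed if the client libraries are not already on the library path):

# cx_Oracle calls into the Oracle Client libraries, which is why they must be
# installed locally even though the database itself is on another machine.
import cx_Oracle

# Point cx_Oracle at the Instant Client libraries if they are not on the path.
cx_Oracle.init_oracle_client(lib_dir="/opt/oracle/instantclient_19_8")

conn = cx_Oracle.connect(user="scott", password="tiger",
                         dsn="dbhost.example.com/orclpdb1")
cur = conn.cursor()
cur.execute("SELECT sysdate FROM dual")
print(cur.fetchone())
conn.close()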
I'm trying to set up my embedded Linux machine as a MySQL client, in order to connect to an external MySQL server (running on a remote machine). SQLite is not an option.
I understand, thanks to Basile Starynkevitch, that I have to use libmysqlclient (because that is the only library for making such a connection and dealing with the MySQL client/server protocol on the client side).
The Qt documentation states that:
You need the MySQL header files, as well as the shared library libmysqlclient.so. Depending on your Linux distribution you may need to install a package which is usually called "mysql-devel".
Has anyone done this and can point me to the right package?
Yes, assuming what you want to do is use the QtSql API to access a MySQL database without connecting to an external MySQL server. With the embedded server library, the server runs in the same process as your client Qt Application, similar to how SQLite works.
One caveat though: the libmysqld embedded server library is deprecated as of MySQL 5.7.17 and will be removed in MySQL 8.0 (as mentioned at http://dev.mysql.com/doc/refman/5.7/en/libmysqld.html).
Your question is confusing and seems contradictory.
Either you want to work with an external MySQL server, and that means your application opens a (tcp(7) socket) connection to some remote machine running mysqld. Then you have to use libmysqlclient (because that is the only library for making such a connection and dealing with the MySQL client/server protocol on the client side).
If that mysqld server is running as a different process on the same embedded Linux system you should have some way to start it (probably as part of the init scripts on it). Then you still use a socket communication to it, and you still need libmysqlclient. A possible difference with a remote machine running mysqld might be (but I am not sure) the socket family. Perhaps libmysqlclient is using unix(7) sockets in the special case of connecting to a server on the same machine.
Or you don't want any external server. You might consider libmysqld but as Romain answered it is deprecated and is becoming unsupported (so I feel that would be a very bad choice). Then all the database code is running on your embedded Linux computer, which also has all the data storage. In that case (relational database & data & storage on the same embedded Linux computer), I would recommend using sqlite instead because it is well supported and quite stable.
If your mysqld daemon is running on a remote machine you cannot (realistically) avoid libmysqlclient (otherwise you'll need to rewrite most of it).
I ended up doing the following:
Installed MySQL on my embedded Linux system and tested it with:
mysql --host=1.2.3.4 --user=Foo --password=FooPass testdb
where a MySQL server is running on 1.2.3.4.
Recompiled Qt with the -sql-mysql option, so the newly compiled version includes the Qt MySQL plugin.
Tested whether the Qt MySQL plugin is available with the following code:
QStringList driverslist = QSqlDatabase::drivers();
foreach (const QString &str, driverslist) {
    qDebug() << str;  // print each available SQL driver
}
Expected output is:
QMYSQL3
QMYSQL
Tested that the driver loads as expected with the following code:
QSqlDatabase db = QSqlDatabase::addDatabase("QMYSQL");
db.setHostName("1.2.3.4");
db.setDatabaseName("TestDB");
db.setUserName("Foo");
db.setPassword("FooPass");
bool ok = db.open();
if (ok)
{
    qDebug() << "Connected";
}
else
{
    qDebug() << "Connection failed:" << db.lastError().text();
}
Here is the problem: we have a client that uses a Progress OpenEdge database, and we need to execute queries against this database from our servers.
Currently the drivers are installed on our Windows server, and the PHP code uses ODBC to run the queries.
Now we would like to move the code to a Linux server. We tried their Linux drivers before, but that attempt failed.
The question is: is it somehow possible to run PHP code on a Linux server that communicates with the Windows server, has the query run on the Windows server, and returns the results to Linux?
How would you approach this problem?
Thanks!
Yes, it's possible. Your question boils down to "how can my Linux server ask my Windows server to do something" (where the "something" happens to be "talk to a database"), and there are a variety of ways to accomplish that. You could run a web service (RESTful or SOAP) on the Windows server, for example.
Make sure you think about security: if you deploy a service on your Windows server that lets remote clients modify a database, you have to be mindful of which remote clients are allowed to use that service. The last thing you want to do is accidentally allow random strangers to run arbitrary queries against your database.
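One possible shape for that service is sketched below in Python; it could run on the Windows server, where the Progress OpenEdge ODBC driver already works, and the PHP code on Linux would simply make HTTP(S) requests to it. The DSN, table, column names and the shared API key are placeholders.

# Hypothetical bridge service on the Windows server: it exposes a fixed,
# parameterised query over HTTP and checks a shared secret so that not just
# anyone can call it.
from flask import Flask, request, jsonify, abort
import pyodbc

app = Flask(__name__)
API_KEY = "replace-with-a-long-random-secret"

@app.route("/customers/<int:cust_id>")
def get_customer(cust_id):
    # Reject callers that don't present the shared secret.
    if request.headers.get("X-Api-Key") != API_KEY:
        abort(403)
    conn = pyodbc.connect("DSN=OpenEdgeDSN;UID=appuser;PWD=secret")
    try:
        cur = conn.cursor()
        # The client picks an id; it never sends arbitrary SQL.
        cur.execute("SELECT custnum, name FROM pub.customer WHERE custnum = ?",
                    cust_id)
        row = cur.fetchone()
    finally:
        conn.close()
    if row is None:
        abort(404)
    return jsonify({"custnum": row[0], "name": row[1]})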
We have a Knowledge Base article detailing some setup procedures for Linux installations; it also has a video explaining some aspects of the setup. If the other answers haven't provided a complete solution for you, hopefully our article can at least get you started in the right direction.
Also keep in mind that depending on your version of OE, the driver libraries may be different.