Reading this post, I learned that it is possible to have a shared in-memory database across threads:
https://stackoverflow.com/a/24708173/7754093
Using the sqlite3 Python package, I can do this:
sqlite3.connect('file:foobar_database?mode=memory&cache=shared', uri=True)
How can this be done in Peewee? I can't find any documentation that describes it.
If your sqlite3 module can successfully connect to a shared in-memory database, then the following will work:
from peewee import *
sqlite_db = SqliteDatabase('file:foobar_database?mode=memory&cache=shared')
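For completeness, here is a rough sketch of how the shared in-memory database might be used from more than one thread. The model and field names are made up for illustration, and it assumes the SqliteDatabase line above connects the way the sqlite3 example in the question does:

import threading
from peewee import *

sqlite_db = SqliteDatabase('file:foobar_database?mode=memory&cache=shared')

class Entry(Model):
    # Hypothetical example model.
    name = CharField()

    class Meta:
        database = sqlite_db

sqlite_db.connect()
sqlite_db.create_tables([Entry])
Entry.create(name='hello')

def reader():
    # Each thread opens its own connection; cache=shared means they all see
    # the same database as long as at least one connection (here, the main
    # thread's) stays open.
    with sqlite_db.connection_context():
        print(Entry.select().count())

t = threading.Thread(target=reader)
t.start()
t.join()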
I learned how to download the client library and write to InfluxDB databases in InfluxDB OSS v1.8 at this link:
https://docs.influxdata.com/influxdb/v1.8/tools/api_client_libraries/#install-and-use-the-python-client-library
but I can't find out how to use it for querying. My concern is that this version of InfluxDB doesn't seem to use buckets, and the explanation on GitHub:
https://github.com/influxdata/influxdb-client-python
only explains how to write and query with buckets. It includes the code that makes querying with v1.8 possible, but it doesn't explain how to use it. If anyone has any tips or resources that could help, please let me know!
The Python Client libraries were developed using the InfluxDB Cloud and InfluxDB 2.0 APIs. Some of these APIs were backported to InfluxDB 1.8, but as you have seen, there are a few differences.
With InfluxDB 1.8, there are two key things to keep in mind:
The "bucket" parameter is the database you wish to read from. For querying, all you need to do is specify the database name. When writing data, it can also take the retention policy via a string like database/retention_policy (e.g. testing/autogen).
To query, your database will need Flux support enabled, which is disabled by default (see the configuration note below). If you are new to Flux, the Flux documentation has a good introduction on how to build queries.
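In a default InfluxDB 1.8 install that setting lives in the [http] section of influxdb.conf; roughly, it looks like the snippet below, but double-check against the 1.8 configuration reference for your setup:

[http]
  flux-enabled = true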
Below is a brief example of using the Python client library to write a point and then query it back using a Flux query:
from influxdb_client import InfluxDBClient, Point

username = ''
password = ''
database = 'testing'

with InfluxDBClient(url='http://localhost:8086', token=f'{username}:{password}', org='-') as client:
    with client.write_api() as writer:
        point = Point("mem").tag("host", "host1").field("used_percent", 25.43234543)
        writer.write(bucket=database, record=point)

    querier = client.query_api()
    tables = querier.query(f'from(bucket: "{database}") |> range(start: -1h)')
    for record in tables[0].records:
        print(f'{record.get_measurement()} {record.get_field()}={record.get_value()} {record.get_time()}')
Hope that helps!
I did my homework before posting this question
The case is that I want to create a utility in my Node.js application that will move specific collections from my main database to an archive database and vice versa. I am using MongoDB Atlas for my application. I have done my research and found two possible approaches: one is to create a mongodump and store it, and the other is to create a backup file myself with my Node application and upload it to the archive database. The latter approach would cause me to lose my collection indexes.
I am planning to use mongodump for this purpose but can't find a resource that shows how to achieve it. Any help would be appreciated. Also, if anyone has experience with a similar situation, I am open to suggestions as well.
I recently created a mongodump & mongorestore wrapper for Node.js: node-mongotools
What does it mean?
You have to install the MongoDB binaries on your host by following the official MongoDB documentation (example), and then you can use node-mongotools to call them from Node.js.
Here is an example, but the tool's documentation contains more details:
const { MongoTools } = require("node-mongotools");
const mt = new MongoTools();
// uri: connection string of the database to dump; path: directory for the dump files
const dumpResult = await mt.mongodump({ uri, path })
    .catch(console.log);
I'm currently trying to create a PostgreSQL database using SQLAlchemy and would like the database file to be created on an external hard drive. I can do that with SQLite just by adding the path to the string that goes into the engine:
# To create the tables we need these packages
import sqlalchemy as sql
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relationship, backref
from sqlalchemy import create_engine
# Creating the base.
Base = declarative_base()
# The tables would be defined here.
# Create an engine that stores data in the given directory.
path = '/path/'
engine = create_engine('sqlite:///'+path+'ARsdb.db')
# Create all tables in the engine.
Base.metadata.create_all(engine)
This then creates the database file within the specified path. On the other hand, PostgreSQL uses a different syntax for the engine:
# default
engine = create_engine('postgresql://user:pass@localhost/mydatabase')
# psycopg2
engine = create_engine('postgresql+psycopg2://user:pass@localhost/mydatabase')
as explained in this question and the documentation. However, it is not clear to me whether I can specify a path for this database file to be written to, as with SQLite. When I created the database using a database string like:
postgresql://user:password@localhost:5432/databasepath/database_name
for a mock database, I couldn't find the database_name file inside the databasepath directory. Can I specify a path for the file to be created after all, or do the files stay confined within the server created during the PostgreSQL installation?
Thanks!
No, that's not possible.
Postgres stores the data for all databases of a cluster (aka "instance") inside a single directory which is specified during the creation of the Postgres instance (via initdb).
Additionally, you can't create a database by connecting to it. You need to run the create database command first; only then can you connect to it.
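To make that concrete, here is a rough sketch of how that could look from SQLAlchemy, reusing the Base from the question's code; user, pass and mydatabase are placeholders, and it assumes psycopg2 is installed and a Postgres server is running locally:

import sqlalchemy as sql

# Connect to the default maintenance database first. CREATE DATABASE
# cannot run inside a transaction block, hence AUTOCOMMIT.
admin_engine = sql.create_engine(
    'postgresql+psycopg2://user:pass@localhost:5432/postgres',
    isolation_level='AUTOCOMMIT')
with admin_engine.connect() as conn:
    conn.execute(sql.text('CREATE DATABASE mydatabase'))

# Only now can you connect to the new database and create the tables in it.
engine = sql.create_engine('postgresql+psycopg2://user:pass@localhost:5432/mydatabase')
Base.metadata.create_all(engine)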
At which level of the MEAN stack is it best to load bulk data? I have about 200 - 800 entries of 2 - 3 different types (i.e. they would require 2 - 3 different Mongoose schemas).
Here are the options to load these data (feel free to point out any misunderstandings, I'm new):
Client side: Angular level
Automate lots of user inputs
Server side: Nodejs + Express + Mongoose
Define the schema in Mongoose, create the objects, save each one
Database side: Mongodb
Make a json file with the data, and import it directly into Mongo:
mongoimport -d db_name -c collection_name --jsonArray --file jsonfilename.json
The third way is the purest and perhaps the fastest, but I don't know if it's good to do it at such a low level.
Which one is the best? If there is not an optimal choice, what would be the advantages and disadvantages of each?
It depends on what you're bulk loading and if you require validations to be done.
Client side: Angular level
If you require the user to do the bulk loading and need human-readable error messages, that's your choice
Server side: Nodejs + Express + Mongoose
You can bulk import from a file
Expose a REST endpoint to trigger bulk import of your data
You can use Mongoose for validation (see validation in mongoose)
Mongoose supports creating multiple documents with one call (see Model.create)
Database side: Mongodb
Fast, No code needed
No flexible validation
I'd choose the option that best fits your understanding of the bulk data import: if it requires a UI, your option is 1 combined with 2; if you see it as part of your "business" logic and you're importing data from an external file or want other systems to trigger the import, your option is 2; if you see it as a one-time action to import data, or you don't require any validation or logic related to the import, the best choice is option 3.
Loading it via the client side will require you to write more code to handle the import and send it to the backend, and then handle it again in Node.js.
The fastest method of all would be to import the data directly using mongoimport.
I have created an in-memory database using H2 in Groovy, and I have successfully added data to it. Now I want to access the data in that database somewhere else in my program, like in a service, but I was not able to. I've tried using the findAll() and getAll() methods, but nothing is returned, even though the database has content.
How could I fix this?
Please help. Thanks.
If you are using an H2 database in Groovy, you'll probably want to access it via JDBC through the groovy.sql.Sql interface. For example:
@GrabConfig(systemClassLoader=true)
@Grab(group='com.h2database', module='h2', version='1.3.168')
import groovy.sql.Sql
def sql = Sql.newInstance("jdbc:h2:mem:db1", "sa", "sa", "org.h2.Driver")
println sql.rows("select * from MY_TABLE")