I am optimizing a script I wrote last year that reads documents from a source CouchDB, modifies each doc, and writes the new doc into a destination CouchDB.
The previous version of the script did the following:
1. Read a document from the source DB.
2. Modify the document.
3. Write the doc into the destination DB.
What I'm trying to do now is accumulate the docs in a list and then write them in bulk (say, 100 at a time) to the destination DB to improve performance, as in the sketch below.
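A minimal sketch of the batching, using the couchdb Python package; the package choice and connection details are assumptions, since the original script isn't shown:

# Sketch of batched copying between two CouchDB databases.
# Assumes the `couchdb` Python package and placeholder connection details.
import couchdb

BATCH_SIZE = 100
server = couchdb.Server("http://localhost:5984/")
source = server["source_db"]
dest = server["destination_db"]

batch = []
for doc_id in source:
    doc = dict(source[doc_id])        # keeps the source _id/_rev, as in my script
    doc["modified"] = True            # stand-in for the real modification
    batch.append(doc)
    if len(batch) >= BATCH_SIZE:
        dest.update(batch)            # bulk write via _bulk_docs
        batch = []
if batch:
    dest.update(batch)                # flush the remainder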
What I found out is that when the bulk upload writes a list of docs to the destination DB, any doc in the list whose "_id" does not exist in the destination DB will not be written.
The return value still reports "success": true, even though after the copy there is no such doc in the destination DB.
I tried disabling "delayed_commits" and setting the "all_or_nothing" flag, but nothing changed. I can't find anything on Stack Overflow or in the documentation, so I'm quite lost.
Thanks
To future generations: what I was experiencing is a known bug, and it should be fixed in the next release.
https://issues.apache.org/jira/browse/COUCHDB-1415
The current workaround is to make each document slightly different on every run. I added an otherwise unused field called "timestamp" whose value is the timestamp of when I run my script.
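In code, the workaround looks roughly like this (a sketch; the field name is the one from my script, the rest is illustrative):

# Sketch of the workaround: tag each doc with the run's timestamp so the
# upload differs between runs. `batch` is the list of docs about to be written.
import time

run_ts = int(time.time())
for doc in batch:
    doc["timestamp"] = run_ts   # otherwise-unused field, changes every run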
I have an existing app which uses SQLAlchemy for DB access. It works well.
Now I want to introduce DB migrations, and I read that Alembic is the recommended way. Basically, I want to start with the "current DB state" (not an empty DB!) as revision 0, and then track further revisions going forward.
So I installed alembic (version 1.7.3) and put
from my_project.db_tables import Base
target_metadata = Base.metadata
into my env.py. The Base is just the standard SQLAlchemy Base = sqlalchemy.ext.declarative.declarative_base() from my project (which, again, works fine).
Then I ran alembic revision --autogenerate -m "revision0", expecting to see an upgrade() method that gets me from an empty DB to the current DB state. Or maybe an empty upgrade(), since it's the first revision; I don't know.
Instead, the upgrade() method is full of op.drop_index and op.drop_table calls, while downgrade() is all op.create_index and op.create_table. Basically the opposite of what I expected.
Any idea what's wrong?
What's the recommended way to "initialize" migrations from an existing DB state?
OK, I figured it out.
The existing production DB has lots of stuff that alembic revision --autogenerate is not picking up on. That's why its generated migration scripts are full of op.drop_* calls in upgrade() and op.create_* calls in downgrade().
So I'll have to manually clean up the generated scripts every time, or automate the cleanup programmatically outside of Alembic.
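For what it's worth, Alembic also has a hook that can do this filtering during autogenerate itself: the include_object callable passed to context.configure() in env.py. A minimal sketch, assuming that everything autogenerate finds only by reflection (i.e. absent from Base.metadata) should simply be ignored:

# Sketch for env.py: skip objects that exist only in the live DB.
from my_project.db_tables import Base

target_metadata = Base.metadata

def include_object(obj, name, type_, reflected, compare_to):
    # `reflected` is True for objects discovered in the database itself;
    # `compare_to is None` means the model metadata has no counterpart,
    # so autogenerate would otherwise emit a drop for it.
    if reflected and compare_to is None:
        return False
    return True

# Then, inside run_migrations_online(), pass the hook along:
# context.configure(connection=connection,
#                   target_metadata=target_metadata,
#                   include_object=include_object)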
There are many queue_promotion_n tables where n is from 1 to 100.
There is an error on table 73 with a fairly simple query:
SELECT count(DISTINCT queue_id)
FROM "queue_promotion_73"
WHERE status_new > NOW() - interval '3 days';
ERROR: could not open file "base/16387/357386324.1" (target block
200005): No such file or directory
The DB uptime is 23 days. How can I fix this?
Check that you have up-to-date backups (or verify that your DB replica is in sync).
The PostgreSQL wiki recommends stopping the DB and rsyncing all PostgreSQL files to a safe location.
The file where the table is physically stored seems to be missing. You can check where PostgreSQL stores a table's data on disk using:
SELECT pg_relation_filepath('queue_promotion_73');
pg_relation_filepath
----------------------
base/16387/357386324
(1 row)
If you are sure that your hard drives/RAID controller work fine, you can try rebuilding the table. It is a good idea to try this on a replica or a backup snapshot of the database first:
VACUUM FULL queue_promotion_73;
Check the relation path again:
SELECT pg_relation_filepath('queue_promotion_73');
It should be different now, hopefully with all the required files present.
The cause could be a hardware issue, so make sure to check DB consistency.
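Since there are 100 queue_promotion_n tables, it may be worth scanning all of them for the same kind of corruption. A minimal sketch, assuming psycopg2 and placeholder connection settings:

# Scan all queue_promotion_n tables and report the ones that error out.
# Assumes psycopg2 and placeholder connection settings.
import psycopg2

conn = psycopg2.connect("dbname=mydb user=postgres")
conn.autocommit = True  # so one failing query doesn't abort the session
cur = conn.cursor()
for n in range(1, 101):
    try:
        cur.execute(f"SELECT count(*) FROM queue_promotion_{n}")
        cur.fetchone()
    except psycopg2.Error as e:
        print(f"queue_promotion_{n}: {e}")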
I am going to make some modifications to methods and the biosphere3 database. As I might break things (I have before), I would like to create backups.
Thankfully, there exist backup() methods for just this. For example:
myBiosphere = Database('biosphere3')
myBiosphere.backup()
According to the docs, this "Write[s] a backup version of the data to the backups directory." Doing so indeed creates a backup, and the location of this backup is conveniently returned when calling backup().
What I wish to do is to load this backup and replace the database I have broken, if need be. The docs seem to stay silent on this, though the docs on serialize say "filepath (str, optional): Provide an alternate filepath (e.g. for backup)."
How can one restore a database with a saved version?
As a bonus question: how is increment_version(database, number=None) called, and how can one use it to help with database management?
The code to back up is quite simple:
def backup(self):
    """Save a backup to ``backups`` folder.

    Returns:
        File path of backup.
    """
    from bw2io import BW2Package
    return BW2Package.export_obj(self)
So you would restore it the same way as any BW2Package:
from brightway2 import *
BW2Package.import_file(filepath)
However, I would recommend using backup_project_directory(project) and restore_project_directory(filepath) instead, as they don't go through an (older) intermediate format.
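For example, a minimal sketch (the project name and the archive path below are placeholders; the exact path of the archive written by backup_project_directory depends on your setup):

# Sketch: back up and later restore a whole Brightway2 project.
# The project name and archive path are placeholders.
from bw2io import backup_project_directory, restore_project_directory

backup_project_directory("default")   # writes an archive of the project directory
restore_project_directory("/path/to/project-backup.tar.gz")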
increment_version is only for the single-file database backend, and it is invoked automatically every time the database is saved. You could add versioning to the SQLite database backend, but this is non-trivial.
I got XHProf working with XHGui. I'd like to clear the profiling data and start fresh, either for a certain site or globally. How do I clear/reset XHProf? I assume I have to delete logs in MongoDB, but I am not familiar with Mongo and don't know which collections it stores the info in.
To clear XHProf with XHGui, log into the mongo console and drop the results collection, as follows:
mongo
db.results.drop()
The first command logs into the mongo console. The second drops the results collection, which will be recreated by XHGui on the next profiled request.
Some other useful commands:
show collections    // shows all collections
use results         // same meaning as in MySQL, I believe
db.results.help()   // lists all commands available for the results collection
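If you prefer scripting the reset instead of using the interactive shell, here is a minimal pymongo sketch (the database name "xhprof" and the connection URL are assumptions; check your XHGui config for the real values):

# Drop the XHGui results collection via pymongo.
# Database name and connection URL are assumptions -- check your XHGui config.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017/")
client["xhprof"]["results"].drop()  # XHGui recreates it on the next profiled request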
I hope it helps
I had a similar issue in my Drupal setup, using the Devel module.
After a few checks and some reading on how the xhprof library saves its data, I was able to figure out where the data goes.
The library first checks whether a path is defined in php.ini:
xhprof.output_dir
If there's nothing defined in your php.ini, it falls back to the system temp dir:
sys_get_temp_dir()
In short, print out these values to find the xhprof data:
$xhprof_dir = ini_get("xhprof.output_dir");
$systemp_dir = sys_get_temp_dir();
If $xhprof_dir doesn't return any value, check $systemp_dir; the xhprof data should be there, in files with a .xhprof extension.
I want to create a new document in SAP. Additionally, I have some files which belong to this document; I want to upload these files to the SAP knowledge base.
I'm using BAPI_DOCUMENT_CREATE2 to create a document, or BAPI_DOCUMENT_CHECKIN2 to add files to an existing document info record. Everything works fine except the file upload/checkin.
I'm using the DOCUMENTFILES table. I add a row for each file; currently I set only three fields:
row["STORAGECATEGORY"] = "DMS_C1_ST";
row["DOCFILE"] = "c:\temp\bom.pdf";
row["WASAPPLICATION"] = "PDF";
BAPI error message:
"Error while checking in and storing c:/temp/bom.pdf"
I set the parameters
PF_FTP_DEST = "SAPFTPA";
PF_HTTP_DEST = "SAPHTTPA";
I have looked at the log (transaction SLG1) and found the following entry:
ERRMSG: Error in opening file "..." for reading (No such file or directory)
V1: SCMS_DOC_CREATE_FILES
V2: 13
It would be nice if anybody had an idea and could shed some light on this issue.
Thanks in advance
Thomas
Remember that BAPIs run inside the application server and are not allowed to make any assumptions about the client side. This also means that they can't call back to the SAP GUI and upload a file from there. C:\temp\bom.pdf has to be a file on the application server, not on your local machine!
Have you considered using:
row["DOCFILE"] = "bom.pdf";
row["DOCPATH"] = "c:\temp\";
Let me know how it goes; or if you have already solved it, please post your solution.