I'm using ArangoDB 3.1.25. I found that some data is missing in the dump directory after executing the following command with 'arangodump':
arangodump --server.endpoint tcp://address:port --server.database DbName --dump-data true --server.password=**** --include-system-collections true --output-directory "dump" -overwrite true
Example of the missing data:
In the original database (let's call it 'test') I have the following documents in the collection 'interchange_edges':
{"_from":"interchange_headers/66430","_to":"parts/64020","type":"interchange"}
{"_from":"interchange_headers/66430","_to":"parts/44474","type":"interchange"}
{"_from":"interchange_headers/66430","_to":"parts/48761","type":"interchange"}
Then I execute the dump using the command listed above and open the file with the dumped data for my collection. In my case it is named interchange_edges_7d8fd33864b65edab6a05b838483239b.data.json. I then search this file for the substring '66430', because all of the original records contain it. As a result I find the following matches:
{"type":2300,"data":{"_from":"interchange_headers/66430","_id":"interchange_edges/66430_64020","_key":"66430_64020","_rev":"_V8MSKAS--C","_to":"parts/64020","type":"interchange"}}
{"type":2300,"data":{"_from":"interchange_headers/66430","_id":"interchange_edges/66430_64020","_key":"66430_64020","_rev":"_V8MSKAS--C","_to":"parts/64020","type":"interchange"}}
{"type":2300,"data":{"_from":"interchange_headers/66430","_id":"interchange_edges/66430_64020","_key":"66430_64020","_rev":"_V8MSKAS--C","_to":"parts/64020","type":"interchange"}}
Somehow I have 3 (duplicated?) rows instead of the 3 different rows I expected.
What may cause this behaviour? Did I miss something critical?
Some information about the environment:
arangosh (ArangoDB 3.1.25 [linux] 64bit, using VPack 0.1.30, ICU 54.1, V8 5.0.71.39, OpenSSL 1.0.1f 6 Jan 2014)
Please note that ArangoDB 3.1 has been end of life for a while now.
The problems described below are specific to the MMFiles storage engine.
At the start of a dump, arangodump invokes the WAL flushing and collection mechanism.
This collects all documents from the WAL files into their respective collection datafiles. After waiting a while for the server to complete that step, arangodump starts to pull the collection files from the server.
Documents that have not yet been collected from the WAL files into the collection files will not be part of the dump.
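If you want to trigger that flush/collect step yourself before dumping, a rough sketch from arangosh looks like this (MMFiles engine only; the endpoint and database name are placeholders for your own connection details):
arangosh --server.endpoint tcp://address:port --server.database DbName
// inside arangosh: flush the WAL and wait for the collector to move
// the documents into the collection datafiles before starting the dump
require("internal").wal.flush(true, true);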
Hence you have several options here:
Run the latest release of the release series, 3.1.29 in this case.
Run the WAL collection manually (as sketched above) and observe whether the server actually deletes the WAL files.
Keep a backup of the database files and upgrade to a supported ArangoDB version.
Use the slower arangoexport of a later ArangoDB version to export collection by collection (see the sketch after this list).
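If you go the arangoexport route, something along these lines exports a single collection (arangoexport ships with ArangoDB 3.2 and later; the connection details are placeholders):
arangoexport --server.endpoint tcp://address:port --server.database DbName --collection interchange_edges --type jsonl --output-directory "export"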
I have an RDS instance running Postgres 11.16. I'm trying to upgrade it to 12.11 but it's giving me errors on PostGIS. If I try a "modify" I get the following error in the precheck log file:
Upgrade could not be run on Sun Sep 18 06:05:13 2022
The instance could not be upgraded from 11.16.R1 to 12.11.R1 because of following reasons. Please take appropriate action on databases that have usages incompatible with requested major engine version upgrade and try again.
Following usages in database 'XXXXX' need to be corrected before upgrade:
-- The instance could not be upgraded because there are one or more databases with an older version of PostGIS extension or its dependent extensions (address_standardizer, address_standardizer_data_us, postgis_tiger_geocoder, postgis_topology, postgis_raster) installed. Please upgrade all installations of PostGIS and drop its dependent extensions and try again.
----------------------- END OF LOG ----------------------
First, I tried just removing PostGIS so I could upgrade and then add it back again. I used drop extension postgis cascade;. However, this generated the same error.
Second, I tried running SELECT postgis_extensions_upgrade();. However, it gives me the following error:
NOTICE: Updating extension postgis_raster from unpackaged to 3.1.5
ERROR: function st_convexhull(raster) does not exist
CONTEXT: SQL statement "ALTER EXTENSION postgis_raster UPDATE TO "3.1.5";"
PL/pgSQL function postgis_extensions_upgrade() line 82 at EXECUTE
SQL state: 42883
Third, I tried to do a manual snapshot and upgrade the snapshot. Same results.
One additional piece of information: I ran SELECT PostGIS_Full_Version(); and this is what it returns:
"POSTGIS=""3.1.5 c60e4e3"" [EXTENSION] PGSQL=""110"" GEOS=""3.7.3-CAPI-1.11.3 b50468f"" PROJ=""Rel. 5.2.0, September 15th, 2018"" GDAL=""GDAL 2.3.1, released 2018/06/22"" LIBXML=""2.9.1"" LIBJSON=""0.12.1"" LIBPROTOBUF=""1.3.0"" WAGYU=""0.5.0 (Internal)"" TOPOLOGY RASTER (raster lib from ""2.4.5 r16765"" need upgrade) (raster procs from ""2.4.4 r16526"" need upgrade)"
As you'll notice, the raster lib is old but I can't really figure out how to upgrade it. I think this is what is causing me problems but I don't know how to overcome it.
I appreciate any thoughts.
I finally gave up on this after many failed attempts and ended up solving it by:
Spinning up a new instance on the desired Postgres version
Using pg_dump on the old version (schema and data)
Using pg_restore on the new version (see the sketch after this list)
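Roughly, the dump/restore step looked like the following (a sketch; the host names, user, and database name are placeholders, and the custom-format dump via -Fc is just one convenient choice):
pg_dump -Fc -h old-instance-host -U the_user -f the_db.dump the_db
pg_restore -h new-instance-host -U the_user -d the_db --no-owner the_db.dump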
I'm not sure if I did something wrong with the above, but I found my sequences were out of sync on a number of tables, so I wrote some scripts to reset the sequence values afterwards. I had to use something like this to re-sync those sequences:
SELECT setval('the_sequence', (SELECT MAX(the_primary_key) FROM the_table)+1);
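A variant of the same idea that looks up the sequence name for you (assuming the column is backed by a serial/identity sequence):
SELECT setval(pg_get_serial_sequence('the_table', 'the_primary_key'), (SELECT MAX(the_primary_key) FROM the_table) + 1);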
I wasted enough time and this got me past the issue. Hopefully the next upgrade doesn't give me this much trouble.
I have an existing app which uses SQLAlchemy for DB access. It works well.
Now I want to introduce DB migrations, and I read that Alembic is the recommended way. Basically I want to start with the "current DB state" (not an empty DB!) as revision 0, and then track further revisions going forward.
So I installed alembic (version 1.7.3) and put
from my_project.db_tables import Base
target_metadata = Base.metadata
into my env.py. The Base is just standard SQLAlchemy Base = sqlalchemy.ext.declarative.declarative_base() within my project (again, which works fine).
Then I ran alembic revision --autogenerate -m "revision0", expecting to see an upgrade() method that gets me to the current DB state from an empty DB. Or maybe an empty upgrade(), since it's the first revision, I don't know.
Instead, the upgrade() method is full of op.drop_index and op.drop_table calls, while downgrade() is all op.create_index and op.create_table. Basically the opposite of what I expected.
Any idea what's wrong?
What's the recommended way to "initialize" migrations from an existing DB state?
OK, I figured it out.
The existing production DB has lots of stuff (tables, indexes) that is not described by my SQLAlchemy models, so alembic revision --autogenerate treats it as something to be removed. That's why its generated migration scripts are full of op.drop_* calls in upgrade() and op.create_* calls in downgrade().
So I'll have to manually clean up the generated scripts every time, or automate that cleanup programmatically outside of Alembic.
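One way to cut down on that manual cleanup, sketched under the assumption that the extra objects exist only in the database and not in Base.metadata, is Alembic's include_object hook in env.py, which lets autogenerate skip reflected objects that have no counterpart in the models:
def include_object(obj, name, type_, reflected, compare_to):
    # reflected=True means the object was read from the live database;
    # compare_to is None when there is no matching object in the models,
    # so returning False here suppresses the generated op.drop_* calls
    if reflected and compare_to is None:
        return False
    return True

context.configure(
    connection=connection,
    target_metadata=target_metadata,
    include_object=include_object,
)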
Following the inputs from the forum thread below, system properties were specified and customized names for the DATABASECHANGELOG & DATABASECHANGELOGLOCK tables were set up and used (on liquibase update execution).
http://forum.liquibase.org/topic/configurable-databasechangelog-table-name
Liquibase version: 3.5.1, Database: Oracle 12c, OS: Red Hat Linux
But on subsequent attempts to execute further liquibase updates (against the same database schema), the execution fails because it tries to recreate the customized DATABASECHANGELOG table again, failing with "object name already in use". This does not happen when using the standard Liquibase control table names (i.e. DATABASECHANGELOG & DATABASECHANGELOGLOCK).
Is there an option to skip recreation of the customized Liquibase control tables, or another fix for this issue?
Why set these as system properties?
You can invoke liquibase like the following (or define these arguments in the properties file):
liquibase <regular arguments > --liquibaseSchemaName=YOUR_SCHEMA \
--databaseChangeLogTableName=YOUR_DBCHANGELOG \
--databaseChangeLogLockTableName=YOUR_DBCHANGELOGLOCK ....
It works fine for us (liquibase 3.5.1, Oracle 12c).
Thanks
The issue was due to the case-sensitivity of the custom table names when executing against Oracle databases. Weird error, but it's resolved by specifying the values in upper case (via system properties/command line).
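For reference, the same parameters can go into liquibase.properties instead of system properties (a sketch; the names are example values, kept in upper case to avoid the Oracle case-sensitivity issue described above):
liquibaseSchemaName: YOUR_SCHEMA
databaseChangeLogTableName: YOUR_DBCHANGELOG
databaseChangeLogLockTableName: YOUR_DBCHANGELOGLOCK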
I am pushing logs to Elasticsearch from Logstash, and then I need to get the logs back in the order they were written. Sorting by timestamp does not help because there could be multiple log statements at the same time. I followed the solution in "Include monotonically increasing value in logstash field?" and it worked perfectly on my Windows system.
But when the code was moved to the Linux production environment, Logstash would not start up, failing with the error below:
reason=>"Couldn't find any filter plugin named 'seq'. Are you sure
this is correct? Trying to load the seq filter plugin resulted in this
error: no such file to load -- logstash/filters/seq", :level=>:error}
Check if the seq.rb file is in the filters folder.
Also check whether the line endings of your seq.rb are Unix-style. If you transferred the file from a Windows machine to Linux, the problem might come from there.
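A quick way to check both points (a sketch; the actual plugin path depends on how Logstash was installed, so treat the paths as placeholders):
# the error "no such file to load -- logstash/filters/seq" means Ruby expects
# a file named seq.rb under a logstash/filters directory on the load path
find / -path '*logstash/filters/seq.rb' 2>/dev/null
# "with CRLF line terminators" in the output indicates Windows line endings
file /path/to/logstash/filters/seq.rb
# convert to Unix line endings if needed
dos2unix /path/to/logstash/filters/seq.rb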
I got XHProf working with XHGui. I'd like to clear the data or restart fresh profiling for a certain site, or globally. How do I clear/reset XHProf? I assume I have to delete logs in MongoDB, but I am not familiar with Mongo and I don't know which tables (collections) it stores the info in.
To clear XHProf with XHGui, log into the mongo shell and drop the results collection as follows:
mongo
db.results.drop()
The first line logs into the mongo console. The last command drops the results collection, which will be recreated by XHGui on the next request that is profiled.
Some other useful commands:
show collections // shows all collections
use results // the same meaning as in mysql, i believe
db.results.help() // to find out all commands available for the results collection
I hope it helps
I had a similar issue in my Drupal setup, using the Devel module in Drupal.
After a few checks and some reading on how the xhprof library saves the data, I was able to figure out where the data is being saved.
The library will check whether there is a path defined in php.ini:
xhprof.output_dir
If there's nothing defined in your php.ini, it will use the system temp dir:
sys_get_temp_dir()
In short, print out these values to find the xhprof data:
$xhprof_dir = ini_get("xhprof.output_dir");
$systemp_dir = sys_get_temp_dir();
If $xhprof_dir doesn't return any value, check $systemp_dir; the xhprof data should be there with a .xhprof extension.
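To actually clear those raw profiles, something like this should work (a sketch; it simply deletes every saved .xhprof file in whichever directory the values above point to):
$dir = ini_get("xhprof.output_dir") ?: sys_get_temp_dir();
// remove every saved profiling run in that directory
foreach (glob($dir . "/*.xhprof") as $file) {
    unlink($file);
}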