We are experiencing an issue on v2011 where running the updatesystem command with the admin aspect does not actually perform an update. By that I mean it does not perform any type system updates, ImpEx, etc. We have also tried the command with a config file passed in, but with the same result. Also, when the command completes it shuts the container down and just starts it back up again.
We tried passing the typeSystem-only argument, and also passing a config JSON file. We checked the SystemInit table in the DB to ensure it was not locked.
I have an existing app which uses SQLAlchemy for DB access. It works well.
Now I want to introduce DB migrations, and I read alembic is the recommended way. Basically I want to start with the "current DB state" (not empty DB!) as revision 0, and then track further revisions going forward.
So I installed alembic (version 1.7.3) and put
from my_project.db_tables import Base
target_metadata = Base.metadata
into my env.py. The Base is just the standard SQLAlchemy Base = sqlalchemy.ext.declarative.declarative_base() from my project (which, again, works fine).
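For completeness, the rest of my env.py is essentially the stock template that alembic init generates, with the metadata wired into the online-mode runner (a trimmed sketch, not my exact file):

from alembic import context
from sqlalchemy import engine_from_config, pool

from my_project.db_tables import Base

config = context.config
target_metadata = Base.metadata

def run_migrations_online():
    # Build an engine from the [alembic] section of alembic.ini
    connectable = engine_from_config(
        config.get_section(config.config_ini_section),
        prefix="sqlalchemy.",
        poolclass=pool.NullPool,
    )
    with connectable.connect() as connection:
        # target_metadata is what autogenerate diffs the live DB against
        context.configure(connection=connection, target_metadata=target_metadata)
        with context.begin_transaction():
            context.run_migrations()

run_migrations_online()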
Then I ran alembic revision --autogenerate -m "revision0", expecting to see an upgrade() method that gets me to the current DB state from an empty DB. Or maybe an empty upgrade(), since it's the first revision, I don't know.
Instead, the upgrade() method is full of op.drop_index and op.drop_table calls, while downgrade() is all op.create_index and op.create_table. Basically the opposite of what I expected.
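To make the symptom concrete, the generated script looked something like this (abridged; the table and index names here are illustrative, not my real schema):

from alembic import op
import sqlalchemy as sa

def upgrade():
    # autogenerate wants to DROP things that already exist in the DB...
    op.drop_index('ix_legacy_orders_user_id', table_name='legacy_orders')
    op.drop_table('legacy_orders')

def downgrade():
    # ...and re-CREATE them on downgrade
    op.create_table(
        'legacy_orders',
        sa.Column('id', sa.Integer(), nullable=False),
        # (remaining columns elided)
    )
    op.create_index('ix_legacy_orders_user_id', 'legacy_orders', ['user_id'])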
Any idea what's wrong?
What's the recommended way to "initialize" migrations from an existing DB state?
OK, I figured it out.
The existing production DB has lots of tables and indexes that my SQLAlchemy metadata knows nothing about, so alembic revision --autogenerate treats them as things to remove. That's why its generated migration scripts are full of op.drop_* calls in upgrade() and op.create_* calls in downgrade().
So I'll have to manually clean up the generated scripts every time, or automate that cleanup programmatically with a script outside of Alembic.
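(If you'd rather keep this inside Alembic than post-process the scripts, its documented include_object hook in env.py can tell autogenerate to leave DB-only tables alone; a minimal sketch:

# In env.py: skip tables that were reflected from the DB but have no
# counterpart in our SQLAlchemy metadata, so autogenerate stops
# emitting op.drop_table() for them.
def include_object(obj, name, type_, reflected, compare_to):
    if type_ == "table" and reflected and compare_to is None:
        return False  # exists only in the DB: not ours to manage
    return True

# then pass it to configure alongside target_metadata:
# context.configure(connection=connection,
#                   target_metadata=target_metadata,
#                   include_object=include_object)

After generating a clean initial revision, alembic stamp head marks the existing DB as already being at that revision, so the initial migration never has to run against it.)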
I am trying to perform a system update from the command line with a JSON config, but it seems that, no matter what I do, the command does exactly the same thing, which I suppose is running the update with the default platform settings.
For example, when I tried to perform my update without essential data ("essential": "false" in the JSON config), the essential ImpEx files were still run.
I tried with an invalid JSON file (one that is not even valid JSON) and the build was successful.
I also tried passing the path of a JSON file that does not exist, and still the build was successful and the essential ImpEx files were run.
So it seems that, no matter what I do, the JSON is not taken into account and the update runs with the default platform settings.
This is the command I am using:
ant updatesystem -Dtenant=master -DconfigFile=Path/updatesystem.json
Am I doing something wrong, or how can I pass my configuration to the system update from the command line?
PS:
Hybris version: 6.7.0.25
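For reference, my JSON follows the shape of the configuration you can dump from the HAC update page, roughly like this (the extension-specific key below is a placeholder, not my real one):

{
    "init": "Go",
    "initmethod": "update",
    "essential": "false",
    "localizetypes": "false",
    "someextension_sample": "false"
}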
I think your JSON path is wrong; please try it like this:
ant updatesystem -DconfigFile=../custom/testcore/resources/updatesystem-configuration.json
The problem was caused by the fact that the "updatesystem" macro was overridden in a project-specific file, and the configFile property was not passed to the UpdatePlatformAntPerformableImpl during creation. That is why nothing changed regardless of my input for the configFile property.
I fixed the problem by also passing the configFile in the constructor:
new de.hybris.ant.taskdefs.UpdatePlatformAntPerformableImpl("${tenant}", "${configFile}")
I need to use db-migrate to add an index to a Postgres database with CREATE INDEX CONCURRENTLY. However, db-migrate wraps all migrations in a transaction by default, and trying to create a concurrent index inside a transaction fails with this error:
CREATE INDEX CONCURRENTLY cannot run inside a transaction block
I can't find any way to disable transactions as part of the db-migrate options, either CLI options or (preferably) as a configuration directive on the migration itself. Any idea if this can be accomplished?
It turns out that this can be solved on the command line by using --non-transactional. Reading the source, I can see that this sets an internal flag called notransactions, but it's not clear to me whether this can be set as part of the migration configuration or must be passed on the command line.
I kept getting the error even when running with the --non-transactional flag.
The solution for me was to run with --non-transactional AND put each CREATE INDEX CONCURRENTLY statement in its own separate migration file. It turns out you can't have more than one of them in the same file (same transaction block).
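To illustrate the working setup, each migration file then carries exactly one concurrent statement; a minimal sketch of one such db-migrate file (the index and table names are invented):

// migrations/20240101000000-add-orders-user-id-index.js
// Run with: db-migrate up --non-transactional

exports.up = function (db) {
  // runSql executes raw SQL; keep CONCURRENTLY as the only statement here
  return db.runSql(
    'CREATE INDEX CONCURRENTLY idx_orders_user_id ON orders (user_id);'
  );
};

exports.down = function (db) {
  return db.runSql('DROP INDEX CONCURRENTLY IF EXISTS idx_orders_user_id;');
};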
I am facing an issue that I cannot work out how to resolve.
I created a test plan that needs to connect to a DB and count the results.
The problem is that JMeter does not perform any validation afterwards. I added a JSR223 element to the JDBC Request just to print the results, and JMeter prints nothing.
I created another sampler to print the DB results, and JMeter still does not print them.
JMeter just skips these steps.
In the Results Tree I saw that it connects to the DB and fails on the assertion, but why does it skip the other steps and jump straight to the Debug Sampler?
I cannot print the results and I cannot do any debugging, since it is effectively a black box.
Can someone please advise?
The screenshot (omitted here) highlighted in yellow all the steps JMeter did not perform; they simply do not appear in the Results Tree.
Get used to checking the jmeter.log file; it normally contains information about what went wrong, and you should be able to figure out the root cause by looking into it. If you cannot, update your question with the jmeter.log contents (at least the essential parts).
My expectation is that your ${Conv_sense} variable is not defined (or cannot be cast to an Integer). Double-check whether it is defined using the Debug Sampler and View Results Tree listener combination.
Also, don't reference JMeter Variables like ${Conv_sense} in Groovy script bodies; use vars.get('Conv_sense') instead, otherwise it may conflict with Groovy's GString templates, resulting in undefined behavior.
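For example, a JSR223 Assertion along these lines will both log the value and fail with a readable message (the integer check is my assumption about what the original assertion was doing):

// JSR223 Assertion, language: groovy
def convSense = vars.get('Conv_sense')      // safe way to read a JMeter variable
log.info('Conv_sense = ' + convSense)       // ends up in jmeter.log

if (convSense == null) {
    AssertionResult.setFailure(true)
    AssertionResult.setFailureMessage('Conv_sense is not defined')
} else if (!convSense.isInteger()) {        // Groovy GDK String.isInteger()
    AssertionResult.setFailure(true)
    AssertionResult.setFailureMessage('Conv_sense is not an integer: ' + convSense)
}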
I'm working on a new application written in Siebel 8.1. The issue appears when I try to replay my script, and I can't get past it.
Replay Output:
Error -27086: Auto-correlation callback function
"flCorrelationCallbackParseWebPage" failed (rc=1) for parameter
"Siebel_Parse_Web_Page40"
web_reg_save_param("Siebel_Parse_Web_Page40",
"LB/IC=",
"RB/IC=",
"Ord=1",
"Search=Body",
"RelFrameId=1",
"AutoCorrelationFunction=flCorrelationCallbackParseWebPage",
"AutoCorrelationDll=LrwiSiebelCorrelationWrapper",
LAST);
I have followed all the steps for preparing the recording options from: http://software-qe.blogspot.se/2008/01/siebel-7x-record-and-replay-for.html
I'm using Loadrunner 11.52 (Siebel Web protocol), IE8.
We've been using the autocorrelation library for quite a few years on my team and we see this a lot. Unfortunately, it's not an easy problem to diagnose.
First I would check your test results and your VUser log to see if something happened before the autocorrelation failed. (Make sure your logging is set to parameter substitution in runtime settings).
Check your parameter files for extra spaces, commas, etc. Sometimes I've seen that error right after it rejects something about your parameter file.
Worst case scenario, your script is corrupted and you'll have to start over. We've gotten in the habit of making frequent backups of our scripts just because of this issue. Usually, we'll be able to start from our backup and continue or create a new script and paste the old code in. Autocorrelation error "magically" goes away with the same code in a new script.
If auto(magical)correlation does not work, then use manual correlation.
Record twice with the same data and compare: you will find the session, state, and time data.
Change the credentials and re-record, then compare: you will find the credential-related correlations.
Change the business record but keep the same business process, re-record, and compare: you will find the business-data-related correlations.
Do not expect autocorrelation to produce a magically working script. You have about a 0.0001% chance of that happening without LoadRunner script-development intervention.
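As a sketch of what the manual version of one such correlation looks like in a Siebel Web script (the _sn session token is a common example; take the real left/right boundaries from your own recording diffs):

// Capture the Siebel session token manually instead of relying on
// flCorrelationCallbackParseWebPage. Boundaries are illustrative.
web_reg_save_param("SiebelSessionId",
    "LB=_sn=",       // left boundary observed in the recorded response
    "RB=&",          // right boundary
    "Ord=1",
    "Search=Body",
    LAST);

// ...then replace the recorded literal value with the parameter:
// web_url("start.swe",
//     "URL=http://{host}/callcenter_enu/start.swe?_sn={SiebelSessionId}",
//     LAST);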