I got XHProf working with XHGui. I'd like to clear out the profiles and restart profiling fresh, either for a certain site or globally. How do I clear/reset XHProf? I assume I have to delete the logs in MongoDB, but I'm not familiar with Mongo and I don't know which collections it stores the info in.
To clear XHProf with XHGui, log into the MongoDB console and drop the results collection, as follows:
mongo
db.results.drop()
The first line logs into the MongoDB console. The last command drops the results collection, which will be recreated by XHGui on the next request that is profiled.
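Since the question also asks about resetting a single site rather than everything, here is a minimal sketch in the mongo shell (the exact field that holds the site/URL depends on your XHGui version, so check a stored document first):
db.results.findOne()  // inspect one stored profile to see which field identifies the site/URL
db.results.remove({"meta.url": /my-site/})  // example query; adjust the field name to what findOne() shows
db.results.remove() with a query deletes only the matching profiles, while db.results.drop() resets everything.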
Some other useful commands:
show collections //shows all collections
use results //switches the current database, similar to MySQL's USE
db.results.help() //lists all commands available for the results collection
I hope it helps
I had a similar issue in my Drupal setup, using the Devel module in Drupal.
After a few checks and some reading on how the xhprof library saves its data, I was able to figure out where the data is stored.
The library first checks whether a path is defined in php.ini:
xhprof.output_dir
If nothing is defined in your php.ini, it falls back to the system temp dir:
sys_get_temp_dir()
In short, print out these values to find the xhprof data:
$xhprof_dir = ini_get("xhprof.output_dir");
$systemp_dir = sys_get_temp_dir();
If $xhprof_dir doesn't return any value, check $systemp_dir; the xhprof data should be there in files with a .xhprof extension.
I have an existing app which uses SQLAlchemy for DB access. It works well.
Now I want to introduce DB migrations, and I read alembic is the recommended way. Basically I want to start with the "current DB state" (not empty DB!) as revision 0, and then track further revisions going forward.
So I installed alembic (version 1.7.3) and put
from my_project.db_tables import Base
target_metadata = Base.metadata
into my env.py. The Base is just standard SQLAlchemy Base = sqlalchemy.ext.declarative.declarative_base() within my project (again, which works fine).
Then I ran alembic revision --autogenerate -m "revision0", expecting to see an upgrade() method that gets me to the current DB state from an empty DB. Or maybe an empty upgrade(), since it's the first revision, I don't know.
Instead, the upgrade() method is full of op.drop_index and op.drop_table calls, while downgrade() is all op.create_index and op.create_table. Basically the opposite of what I expected.
Any idea what's wrong?
What's the recommended way to "initialize" migrations from an existing DB state?
OK, I figured it out.
The existing production DB has lots of stuff that isn't described in my SQLAlchemy metadata, so alembic revision --autogenerate treats it as something to remove. That's why its generated migration scripts are full of op.drop calls in upgrade() and op.create calls in downgrade().
So I'll have to manually clean up the generated scripts every time, or automate that cleanup somehow programmatically, outside of Alembic.
Using the sqlite3 library I'm connecting to my SQLite database from Node, and I can get results just fine, but I need the column headers included in the results. After searching for hours, all I can find is the '.headers on' command for the sqlite command-line tool, and everything says dot commands are not available to Node in the sqlite3 library. Is this just impossible? If not, please tell me how to do it.
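For what it's worth, with the node sqlite3 package each result row comes back as a plain object keyed by column name, so the headers can be read from the keys. A minimal sketch (the database file and table name are made up):
const sqlite3 = require('sqlite3').verbose();
const db = new sqlite3.Database('mydb.sqlite'); // example path

db.all('SELECT * FROM my_table', (err, rows) => {
  if (err) throw err;
  const headers = rows.length ? Object.keys(rows[0]) : []; // column names
  console.log(headers);
  console.log(rows); // each row is { column: value, ... }
  db.close();
});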
I am optimizing a script I wrote last year that reads documents from a source CouchDB, modifies each doc and writes the new doc into a destination CouchDB.
So the previous version of the script did the following:
1. read a document from the source db
2. modify the document
3. write the doc into the destination db
What I'm trying to do now is collect the docs to write in a list and then write them in bulk (say 100 at a time) to the destination db to improve performance.
What I found out is that when the bulk upload writes a list of docs into the destination db, if a doc in the list has an "_id" that does not exist in the destination db, then that document won't be written.
The return value still reports "success: true", even though after the copy there is no such doc in the destination db.
I tried disabling "delayed_commits" and using the "all_or_nothing" flag, but nothing changed. I cannot find any info on Stack Overflow or in the documentation, so I'm quite lost.
Thanks
To future generations: what I was experiencing is a known bug, and it should be fixed in the next release.
https://issues.apache.org/jira/browse/COUCHDB-1415
The actual workaround is to write a document that is slightly different each time. I added a dummy field called "timestamp" whose value is the timestamp of when I run my script.
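As an illustration only (the original script and its HTTP client aren't shown, so this assumes plain HTTP against CouchDB's _bulk_docs endpoint), the workaround amounts to something like this in Node:
// tag every doc with the run timestamp so the body differs between runs,
// then push the whole batch to the destination db in one request
const destDb = 'http://localhost:5984/destination_db'; // example URL

async function bulkWrite(docs) {
  const runTs = Date.now();
  const tagged = docs.map(doc => ({ ...doc, timestamp: runTs }));

  const res = await fetch(destDb + '/_bulk_docs', { // global fetch needs Node 18+
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ docs: tagged }),
  });
  return res.json(); // CouchDB returns one status object per doc
}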
I'm using MongoDB with Node.js and am wondering if I need to sanitize data before inserting/updating database documents. It's hard to find a definitive answer, and I'm wondering whether there are any Node modules that do this nicely, whether I need to strip all occurrences of $ in strings, or whether there is simply no need to worry about it. I know that PHP has holes here, but I'm using the Node/Mongo (native driver) combo and I'm still not sure if I need to do any cleaning of user input.
If you store your data as a string and you are not parsing it to build a Mongo command, then there is not much to worry about.
Nice article on security
http://cr.yp.to/qmail/guarantee.html
The only problem occurs when you take user input and parse it to build the Mongo command; there you will need to take care to sanitize the input, or else you are open to attack.
There is an npm package to do that for you:
https://www.npmjs.com/package/mongo-sanitize
and a nice article on this too:
https://thecodebarbarian.wordpress.com/2014/09/04/defending-against-query-selector-injection-attacks/
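For illustration, a minimal sketch of how mongo-sanitize is typically used (the input value here is made up):
const sanitize = require('mongo-sanitize');

// a malicious "username" arriving as an object instead of a string
const userInput = { $gt: '' };

console.log(sanitize(userInput)); // {}       -> keys starting with $ are stripped
console.log(sanitize('alice'));   // 'alice'  -> plain strings pass through untouched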
Yes, you do.
For more information check this out: https://www.npmjs.com/package/content-filter
Also, the native escape() method might be used to protect the database.
Run the code snippet below to see the results.
let a = "{$gt:25}"
console.log(a)         // {$gt:25}
console.log(escape(a)) // %7B%24gt%3A25%7D
I followed the Elasticsearch Java API guide. I added JAVA_HOME to Computer -> Settings -> Advanced Settings -> Environment Variables. Then I ran Elasticsearch via elasticsearch-service-x64.exe and also ran elasticsearch.bat as administrator. After these settings, I opened localhost:9200 in the browser and got information about my client or node, I guess. After a while I opened localhost:9200 again, but this time nothing showed up.
I don't know if that is the reason for my problem, but I know I can't keep indexed data permanently. When I index, the data goes away within 5 seconds and search operations don't give me any hits. Meanwhile, please excuse my poor English.
In contrast to Solr, Elasticsearch has no notion of a commit, so everything you feed to the server without error gets indexed.