SchemaCrawler can't print out table names - schemacrawler

The code below prints out only the database name. Why?
public static void main(final String[] args) throws Exception
{
  // Create a database connection
  final DataSource dataSource = new DatabaseConnectionOptions("jdbc:mysql://localhost:3306/target_db");
  final Connection connection = dataSource.getConnection("root", "password");

  // Create the options
  final SchemaCrawlerOptions options = new SchemaCrawlerOptions();
  options.setSchemaInfoLevel(SchemaInfoLevelBuilder.standard());
  options.setTableTypes(Lists.newArrayList("BASE TABLE", "TABLE", "VIEW"));
  options.setRoutineInclusionRule(new ExcludeAll());
  options.setSchemaInclusionRule(new RegularExpressionInclusionRule("target_db"));
  options.setTableNamePattern("*");

  // Get the schema definition
  final Catalog catalog = SchemaCrawlerUtility.getCatalog(connection, options);
  for (final Schema schema : catalog.getSchemas())
  {
    System.out.print("c--> " + schema.getCatalogName() + "\n");
    for (final Table table : catalog.getTables(schema))
    {
      System.out.print("o--> " + table);
      if (table instanceof View)
      {
        System.out.println(" (VIEW)");
      }
      else
      {
        System.out.println();
      }
      for (final Column column : table.getColumns())
      {
        System.out.println(" o--> " + column + " (" + column.getColumnDataType() + ")");
      }
    }
  }
}
}
Strangely,
./schemacrawler.sh -server=mysql -database=target_db -user=root -password=password -infolevel=ALL -command=schema
will output the tables and their corresponding columns.
Update: my configuration
schemacrawler-14.09.03-main
Ubuntu 16.04 64bit
MariaDB 10.2.1-MariaDB-1~xenial
(I assumed MariaDB might not be supported yet, so I switched between the two drivers below, but neither works)
mysql-connector-java-6.0.3
mariadb-java-client-1.4.6

Finally, I figured it out:
options.setTableTypes(Lists.newArrayList("BASE TABLE", "TABLE", "VIEW", "UNKNOWN"));
Caution: with MariaDB, the table type is reported as "UNKNOWN".
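For reference, the table type strings are whatever the JDBC driver reports through DatabaseMetaData. A minimal plain-JDBC sketch (independent of SchemaCrawler, reusing the URL and credentials from the question) that lists the types a given driver/server combination reports:

import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.DriverManager;
import java.sql.ResultSet;

public class ListTableTypes
{
  public static void main(final String[] args) throws Exception
  {
    try (Connection connection = DriverManager
      .getConnection("jdbc:mysql://localhost:3306/target_db", "root", "password"))
    {
      final DatabaseMetaData metaData = connection.getMetaData();
      // getTableTypes() returns one row per type, in the single column TABLE_TYPE
      try (ResultSet types = metaData.getTableTypes())
      {
        while (types.next())
        {
          System.out.println("table type: " + types.getString("TABLE_TYPE"));
        }
      }
    }
  }
}

Whatever strings are printed here are the ones to pass to options.setTableTypes(...).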

Related

Cassandra 3.6.0 bug: StackOverflowError thrown by HashedWheelTimer on Connection.release

When running some inserts and updates on a Cassandra database via the Java driver version 3.6.0, I get the following StackOverflowError, of which I am showing just the top here; the last 10 rows repeat endlessly.
There is no mention of any line of my code, so I don't know which specific operation triggered this.
2018-09-03 00:19:58,294 WARN {cluster1-timeouter-0} [c.d.s.n.u.HashedWheelTimer] : An exception was thrown by TimerTask.
java.lang.StackOverflowError: null
at java.util.regex.Pattern$Branch.match(Pattern.java:4604)
at java.util.regex.Pattern$BranchConn.match(Pattern.java:4568)
at java.util.regex.Pattern$GroupTail.match(Pattern.java:4717)
at java.util.regex.Pattern$Curly.match0(Pattern.java:4279)
at java.util.regex.Pattern$Curly.match(Pattern.java:4234)
at java.util.regex.Pattern$GroupHead.match(Pattern.java:4658)
at java.util.regex.Pattern$Branch.match(Pattern.java:4604)
at java.util.regex.Pattern$Branch.match(Pattern.java:4602)
at java.util.regex.Pattern$BmpCharProperty.match(Pattern.java:3798)
at java.util.regex.Pattern$Start.match(Pattern.java:3461)
at java.util.regex.Matcher.search(Matcher.java:1248)
at java.util.regex.Matcher.find(Matcher.java:664)
at java.util.Formatter.parse(Formatter.java:2549)
at java.util.Formatter.format(Formatter.java:2501)
at java.util.Formatter.format(Formatter.java:2455)
at java.lang.String.format(String.java:2940)
at com.datastax.driver.core.exceptions.BusyConnectionException.<init>(BusyConnectionException.java:29)
at com.datastax.driver.core.Connection$ResponseHandler.<init>(Connection.java:1538)
at com.datastax.driver.core.Connection.write(Connection.java:711)
at com.datastax.driver.core.RequestHandler$SpeculativeExecution.write(RequestHandler.java:451)
at com.datastax.driver.core.RequestHandler$SpeculativeExecution.access$1600(RequestHandler.java:307)
at com.datastax.driver.core.RequestHandler$SpeculativeExecution$1.onSuccess(RequestHandler.java:397)
at com.datastax.driver.core.RequestHandler$SpeculativeExecution$1.onSuccess(RequestHandler.java:384)
at com.google.common.util.concurrent.Futures$CallbackListener.run(Futures.java:1355)
at com.google.common.util.concurrent.MoreExecutors$DirectExecutor.execute(MoreExecutors.java:398)
at com.google.common.util.concurrent.AbstractFuture.executeListener(AbstractFuture.java:1024)
at com.google.common.util.concurrent.AbstractFuture.complete(AbstractFuture.java:866)
at com.google.common.util.concurrent.AbstractFuture.set(AbstractFuture.java:689)
at com.google.common.util.concurrent.SettableFuture.set(SettableFuture.java:48)
at com.datastax.driver.core.HostConnectionPool$PendingBorrow.set(HostConnectionPool.java:755)
at com.datastax.driver.core.HostConnectionPool.dequeue(HostConnectionPool.java:407)
at com.datastax.driver.core.HostConnectionPool.returnConnection(HostConnectionPool.java:366)
at com.datastax.driver.core.Connection.release(Connection.java:810)
at com.datastax.driver.core.RequestHandler$SpeculativeExecution$1.onSuccess(RequestHandler.java:407)
at com.datastax.driver.core.RequestHandler$SpeculativeExecution$1.onSuccess(RequestHandler.java:384)
at com.google.common.util.concurrent.Futures$CallbackListener.run(Futures.java:1355)
at com.google.common.util.concurrent.MoreExecutors$DirectExecutor.execute(MoreExecutors.java:398)
at com.google.common.util.concurrent.AbstractFuture.executeListener(AbstractFuture.java:1024)
at com.google.common.util.concurrent.AbstractFuture.complete(AbstractFuture.java:866)
at com.google.common.util.concurrent.AbstractFuture.set(AbstractFuture.java:689)
at com.google.common.util.concurrent.SettableFuture.set(SettableFuture.java:48)
at com.datastax.driver.core.HostConnectionPool$PendingBorrow.set(HostConnectionPool.java:755)
at com.datastax.driver.core.HostConnectionPool.dequeue(HostConnectionPool.java:407)
at com.datastax.driver.core.HostConnectionPool.returnConnection(HostConnectionPool.java:366)
at com.datastax.driver.core.Connection.release(Connection.java:810)
I do not use any UDTs.
Here is the keyspace and table creation code:
session.execute(session.prepare(
"CREATE KEYSPACE IF NOT EXISTS myspace WITH REPLICATION = {'class': 'NetworkTopologyStrategy', 'dc1': '3'} AND DURABLE_WRITES = true;").bind());
session.execute(session.prepare("CREATE TABLE IF NOT EXISTS myspace.tasks (myId TEXT PRIMARY KEY, pointer BIGINT)").bind());
session.execute(session.prepare("CREATE TABLE IF NOT EXISTS myspace.counters (key TEXT PRIMARY KEY, cnt COUNTER)").bind());
This is the prepared statement that I use:
PreparedStatement quickSearchTasksInsert = session.prepare("INSERT INTO myspace.tasks (myId, pointer) VALUES (:oid,:loc)");
The code that reproduces the issue does the following:
1. Runs the writeTask() method about 10,000 times with different values, such as the following example rows selected from a SQL database:
05043FA57ECEAABC3E096B281A55356B, 1678192046
5DE661E77D19C157C31EB7309494EA89, 3959390363
85D6211384E6E190299093E501169625, 3146521416
0327817F8BD59039069C13D581E8EBBE, 2907072247
D913FA0F306D6516D8DF87EB0CB1EE9B, 2507147331
DC946B409CD1E59F560A0ED75559CB16, 2810148057
2A24B1DC71D395938BA77C6CA822A5F7, 1182061065
F70705303980DA40D125CC3497174A5D, 1735385855
2. Runs the setLocNum() method with some Long number.
3. Loops back to (1) above.
public void writeTask(String myId, long pointer) {
    try {
        session.executeAsync(quickSearchTasksInsert.bind().setString("oid", myId).setLong("loc", pointer));
        incrementCounter("tasks_count", 1);
    } catch (OperationTimedOutException | NoHostAvailableException e) {
        // some error handling omitted from post
    }
}

public synchronized void setLocNum(long num) {
    setCounter("loc_num", num);
}

public void incrementCounter(String key, long incVal) {
    try {
        session.executeAsync(
                "UPDATE myspace.counters SET cnt = cnt + " + incVal + " WHERE key = '" + key.toLowerCase() + "'");
    } catch (OperationTimedOutException | NoHostAvailableException e) {
        // some error handling omitted from post
    }
}

public void decrementCounter(String key, long decVal) {
    try {
        session.executeAsync(
                "UPDATE myspace.counters SET cnt = cnt - " + decVal + " WHERE key = '" + key.toLowerCase() + "'");
    } catch (OperationTimedOutException | NoHostAvailableException e) {
        // some error handling omitted from post
    }
}

public synchronized void setCounter(String key, long newVal) {
    try {
        Long prevCounterValue = countersCache.get(key);
        long oldCounter = prevCounterValue == null ? readCounter(key) : prevCounterValue.longValue();
        decrementCounter(key, oldCounter);
        incrementCounter(key, newVal);
        countersCache.put(key, newVal);
    } catch (OperationTimedOutException | NoHostAvailableException e) {
        // some error handling omitted from post
    }
}
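The stack trace loops between Connection.release() and the construction of BusyConnectionException, and the reproduction fires on the order of 10,000 executeAsync() calls without waiting on the returned futures. Purely as an illustration of bounding the number of outstanding asynchronous writes on the client side (a sketch, not a confirmed fix for the reported driver behaviour; the ThrottledWriter class and the MAX_IN_FLIGHT constant are invented for the example):

import java.util.concurrent.Semaphore;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.ResultSetFuture;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.Statement;
import com.google.common.util.concurrent.FutureCallback;
import com.google.common.util.concurrent.Futures;
import com.google.common.util.concurrent.MoreExecutors;

public class ThrottledWriter {
    // Illustrative limit; tune to the connection pool's capacity.
    private static final int MAX_IN_FLIGHT = 256;
    private final Semaphore permits = new Semaphore(MAX_IN_FLIGHT);
    private final Session session;

    public ThrottledWriter(Session session) {
        this.session = session;
    }

    public void write(Statement statement) throws InterruptedException {
        permits.acquire(); // block the caller while too many writes are still outstanding
        ResultSetFuture future = session.executeAsync(statement);
        Futures.addCallback(future, new FutureCallback<ResultSet>() {
            @Override public void onSuccess(ResultSet rs) { permits.release(); }
            @Override public void onFailure(Throwable t)  { permits.release(); }
        }, MoreExecutors.directExecutor());
    }
}

writeTask() and the counter methods above could route their executeAsync() calls through something like this, so that at most MAX_IN_FLIGHT requests compete for connections at any time.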

Cassandra trigger on composite blob key

I use Cassandra 2.1.9 and have a table like
create table "Keyspace1"."Standard4" ( id blob, user_name blob, data blob, primary key(id, user_name));
and I followed the post in Cassandra Sample Trigger Code to get the inserted values, with trigger code like
public class InvertedIndex implements ITrigger
{
    private static final Logger logger = LoggerFactory.getLogger(InvertedIndex.class);

    public Collection augment(ByteBuffer key, ColumnFamily update)
    {
        CFMetaData cfm = update.metadata();
        ByteBuffer id_bb = key;
        String id_Value = new String(id_bb.array());

        Iterator col_itr = update.iterator();
        Cell username_col = (Cell) col_itr.next();
        ByteBuffer username_bb = CompositeType.extractComponent(username_col.name().collectionElement(), 0);
        String username_Value = new String(username_bb.array());

        Cell data_col = (Cell) col_itr.next();
        ByteBuffer data_bb = BytesType.instance.compose(data_col.value());
        String data_Value = new String(data_bb.array());

        logger.info(" id --> " + id_Value);
        logger.info(" username-->" + username_Value);
        logger.info(" data ---> " + data_Value);
        return null;
    }
}
I tried:
insert into "Keyspace1"."Standard4" (id, user_name, data) values (textAsBlob('id1'), textAsBlob('user_name1'), textAsBlob('data1'));
and got a runtime exception at:
ByteBuffer username_bb = CompositeType.extractComponent(username_col.name().collectionElement(), 0);
Caused by: java.lang.NullPointerException: null
at org.apache.cassandra.db.marshal.CompositeType.extractComponent(CompositeType.java:191) ~[apache-cassandra-2.1.9.jar:2.1.9]
at org.apache.cassandra.triggers.InvertedIndex.augment(InvertedIndex.java:52) ~[na:na]
at org.apache.cassandra.triggers.TriggerExecutor.executeInternal(TriggerExecutor.java:223) ~[apache-cassandra-2.1.9.jar:2.1.9]
... 17 common frames omitted
Can anybody tell me how to correct this?
You are trying to show all the inserted column names and values, right?
Here is the code:
@Override
public Collection<Mutation> augment(ByteBuffer key, ColumnFamily update) {
    CFMetaData cfm = update.metadata();
    System.out.println("key => " + ByteBufferUtil.toInt(key));

    for (Cell cell : update) {
        if (cell.value().remaining() > 0) {
            try {
                String name = cfm.comparator.getString(cell.name());
                String value = cfm.getValueValidator(cell.name()).getString(cell.value());
                System.out.println("Column Name => " + name + " Value => " + value);
            } catch (Exception e) {
                System.out.println("Exception : " + e.getMessage());
            }
        }
    }
    return null;
}
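One caveat with the snippet above: it prints the partition key with ByteBufferUtil.toInt(key), but the table in the question declares id as a blob written with textAsBlob('id1'), so an int decode will not be meaningful there. A small sketch (assuming Cassandra 2.1's org.apache.cassandra.utils.ByteBufferUtil; the KeyFormat helper class is made up for the example) that renders such a key as text, falling back to hex:

import java.nio.ByteBuffer;
import java.nio.charset.CharacterCodingException;
import org.apache.cassandra.utils.ByteBufferUtil;

final class KeyFormat
{
    // Render a blob partition key written with textAsBlob(...) as readable text,
    // falling back to a hex dump if the bytes are not valid UTF-8.
    static String render(ByteBuffer key)
    {
        try
        {
            return ByteBufferUtil.string(key);     // UTF-8 decode of the raw key bytes
        }
        catch (CharacterCodingException e)
        {
            return ByteBufferUtil.bytesToHex(key); // e.g. "696431" for textAsBlob('id1')
        }
    }
}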

Plugin code to update another entity when a case is created in MS CRM 2011

I'm new to plugins. My problem is: when a case is created, I need to update the case ID into the ledger. What connects the two is the lead ID; in my case I have renamed Lead to Outbound Call.
This is my code. I don't know whether it is correct or not; I hope you can help me, because it gives an error. I managed to register it, and there was no problem building and registering, but when the case is created it throws an error.
using System;
using System.IO;
using System.ServiceModel;
using System.ServiceModel.Description;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Query;
using Microsoft.Xrm.Sdk.Messages;
using Microsoft.Xrm.Sdk.Client;
using System.Net;
using System.Web.Services;

/*
 * Purpose: 1) To update case number into lejar
 *
 * Triggered upon CREATE message by record in Case form.
 */
namespace UpdateLejar
{
    public class UpdateLejar : IPlugin
    {
        /*public void printLogFile(String exMessage, String eventMessage, String pluginFile)
        {
            DateTime date = DateTime.Today;
            String fileName = date.ToString("yyyyMdd");
            String timestamp = DateTime.Now.ToString();
            string path = @"C:\CRM Integration\PLUGIN\UpdateLejar\Log\" + fileName;

            //open if file exist, check file..
            if (File.Exists(path))
            {
                //if exist, append
                using (StreamWriter sw = File.AppendText(path))
                {
                    sw.Write(timestamp + " ");
                    sw.WriteLine(pluginFile + eventMessage + " event: " + exMessage);
                    sw.WriteLine();
                }
            }
            else
            {
                //if no exist, create new file
                using (StreamWriter sw = File.CreateText(path))
                {
                    sw.Write(timestamp + " ");
                    sw.WriteLine(pluginFile + eventMessage + " event: " + exMessage);
                    sw.WriteLine();
                }
            }
        }*/

        public void Execute(IServiceProvider serviceProvider)
        {
            ITracingService tracingService = (ITracingService)serviceProvider.GetService(typeof(ITracingService));
            IPluginExecutionContext context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));
            IOrganizationServiceFactory serviceFactory = (IOrganizationServiceFactory)serviceProvider.GetService(typeof(IOrganizationServiceFactory));
            IOrganizationService service = serviceFactory.CreateOrganizationService(context.UserId);

            //for update and create event
            if (context.InputParameters.Contains("Target") &&
                context.InputParameters["Target"] is Entity)
            {
                // Obtain the target entity from the input parameters.
                Entity targetEntity = (Entity)context.InputParameters["Target"];

                // Verify that the entity represents a connection.
                if (targetEntity.LogicalName != "incident")
                {
                    return;
                }
                else
                {
                    try
                    {
                        //triggered upon create message
                        if (context.MessageName == "Create")
                        {
                            Guid recordid = new Guid(context.OutputParameters["incidentid"].ToString());
                            EntityReference app_inc_id = new EntityReference();
                            app_inc_id = targetEntity.GetAttributeValue<EntityReference>("new_outboundcalllid");
                            Entity member = service.Retrieve("new_lejer", ((EntityReference)targetEntity["new_outboundcallid"]).Id, new ColumnSet(true));
                            //DateTime createdon = targetEntity.GetAttributeValue<DateTime>("createdon");

                            if (app_inc_id != null)
                            {
                                if (targetEntity.Attributes.Contains("new_outboundcallid") == member.Attributes.Contains("new_outboundcalllistid_lejer"))
                                {
                                    member["new_ringkasanlejarid"] = targetEntity.Attributes["incidentid"].ToString();
                                    service.Update(member);
                                }
                            }
                        }
                        tracingService.Trace("Lejar updated.");
                    }
                    catch (FaultException<OrganizationServiceFault> ex)
                    {
                        //printLogFile(ex.Message, context.MessageName, "UpdateLejar plug-in. ");
                        throw new InvalidPluginExecutionException("An error occurred in UpdateLejar plug-in.", ex);
                    }
                    catch (Exception ex)
                    {
                        //printLogFile(ex.Message, context.MessageName, "UpdateLejar plug-in. ");
                        tracingService.Trace("UpdateLejar: {0}", ex.ToString());
                        throw;
                    }
                }
            }
        }
    }
}
Please check whether the entity contains the attributes or not, and try:
if (targetEntity.Contains("new_outboundcallid"))
{
    Guid outboundCallId = ((EntityReference)targetEntity["new_outboundcallid"]).Id;
    member["new_ringkasanlejarid"] = targetEntity.Attributes["incidentid"].ToString();
}
What is new_ringkasanlejarid's type? You're assigning a string to it. If new_ringkasanlejarid is an entity reference, this might be causing problems.
You might want to share the error details or trace log; all we can do at the moment is guess at what the problem is.

Non-unique LDAP attribute name with UnboundID LDAP SDK

I am attempting to retrieve objects that have several attributes with the same name from a Netscape LDAP directory, using the LDAP SDK from UnboundID. The problem is that only one of the attributes is returned. I am guessing the LDAP SDK relies heavily on unique attribute names; is there a way to configure it to return the non-distinct attributes as well?
@Test
public void testRetrievingUsingListener() throws LDAPException {
    long currentTimeMillis = System.currentTimeMillis();
    LDAPConnection connection = new LDAPConnection("xxx.xxx.xxx", 389,
            "uid=xxx-websrv,ou=xxxx,dc=xxx,dc=no",
            "xxxx");
    SearchRequest searchRequest = new SearchRequest(
            "ou=xxx,ou=xx,dc=xx,dc=xx",
            SearchScope.SUB, "(uid=xxx)", SearchRequest.ALL_USER_ATTRIBUTES);
    LDAPEntrySource entrySource = new LDAPEntrySource(connection,
            searchRequest, true);
    try {
        while (true) {
            try {
                System.out.println("*******************************************");
                Entry entry = entrySource.nextEntry();
                if (entry == null) {
                    // There are no more entries to be read.
                    break;
                } else {
                    Collection<Attribute> attributes = entry.getAttributes();
                    for (Attribute attr : attributes) {
                        System.out.println(attr.getName() + " " + attr.getValue());
                    }
                }
            } catch (SearchResultReferenceEntrySourceException e) {
                // The directory server returned a search result reference.
                SearchResultReference searchReference = e.getSearchReference();
            } catch (EntrySourceException e) {
                // Some kind of problem was encountered (e.g., the connection is no
                // longer valid). See if we can continue reading entries.
                if (!e.mayContinueReading()) {
                    break;
                }
            }
        }
    } finally {
        entrySource.close();
    }
    System.out.println("Finished in " + (System.currentTimeMillis() - currentTimeMillis));
}
Non-unique LDAP attributes are considered multi-valued and are represented as a String array.
Use Attribute.getValues() instead of Attribute.getValue().
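For example, the printing loop in the test above can be adapted along these lines (a sketch against the UnboundID SDK's Attribute and Entry classes):

import com.unboundid.ldap.sdk.Attribute;
import com.unboundid.ldap.sdk.Entry;

// Print every value of every attribute, not just the first one.
static void printAllValues(Entry entry) {
    for (Attribute attr : entry.getAttributes()) {
        for (String value : attr.getValues()) { // getValues() returns all values of the attribute
            System.out.println(attr.getName() + " " + value);
        }
    }
}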

PreparedStatement Logging

I use log4j for logging.
I can see the SQL statements through Log4j, as shown below.
Here's my Java source, which accesses the database with jdbcTemplate.
public QnaDTO selectBoard(int articleID) {
    String SQL =
            "SELECT " +
            " QA.ARTICLE_ID, " +
            " QA.EMAIL, " +
            " QA.TEL, " +
            " QA.CATEGORY_ID, " +
            " CG.CATEGORY_NAME, " +
            " QA.SUBJECT, " +
            " QA.CONTESTS, " +
            " QA.WRITER_NAME, " +
            " QA.WRITER_ID, " +
            " QA.READCOUNT, " +
            " QA.ANSWER, " +
            " QA.FILE_NAME, " +
            " QA.OPEN_FLG, " +
            " QA.KTOPEN_FLG, " +
            " TO_CHAR(QA.WRITE_DAY, 'YYYY.MM.DD') WRITE_DAY, " +
            " QA.DISPOSAL_FLG " +
            "FROM QNA QA JOIN QNA_CATEGORY_GROUP CG " +
            "ON QA.CATEGORY_ID = CG.CATEGORY_ID " +
            "WHERE QA.ARTICLE_ID = ? ";

    QnaDTO qnaDTO = (QnaDTO) jdbcTemplate.queryForObject(
            SQL,
            new Object[]{articleID},
            new RowMapper() {
                public Object mapRow(ResultSet rs, int rowNum) throws SQLException {
                    QnaDTO qnaDTO = new QnaDTO();
                    qnaDTO.setArticleID(rs.getInt("ARTICLE_ID"));
                    qnaDTO.setCategoryID(rs.getInt("CATEGORY_ID"));
                    qnaDTO.setCategoryName(rs.getString("CATEGORY_NAME"));
                    qnaDTO.setEmail1(rs.getString("EMAIL"));
                    qnaDTO.setTel1(rs.getString("TEL"));
                    qnaDTO.setSubject(rs.getString("SUBJECT"));
                    qnaDTO.setContests(rs.getString("CONTESTS"));
                    qnaDTO.setName(rs.getString("WRITER_NAME"));
                    qnaDTO.setUserID(rs.getString("WRITER_ID"));
                    qnaDTO.setReadcount(rs.getString("READCOUNT"));
                    qnaDTO.setAnswer(rs.getString("ANSWER"));
                    qnaDTO.setFileName(rs.getString("FILE_NAME"));
                    qnaDTO.setOpenFlg(rs.getString("OPEN_FLG"));
                    qnaDTO.setKtOpenFlg(rs.getString("KTOPEN_FLG"));
                    qnaDTO.setWriteDay(rs.getString("WRITE_DAY"));
                    qnaDTO.setDisposalFlg(rs.getString("DISPOSAL_FLG"));
                    return qnaDTO;
                }
            }
    );
    return qnaDTO;
}
As you can see above, jdbcTemplate.queryForObject(...) is the method that actually sends the query and gets the result.
Inside jdbcTemplate.queryForObject, the logger is finally used:
public Object query(final String sql, final ResultSetExtractor rse)
        throws DataAccessException
{
    Assert.notNull(sql, "SQL must not be null");
    Assert.notNull(rse, "ResultSetExtractor must not be null");
    if (logger.isDebugEnabled())
        logger.debug("Executing SQL query [" + sql + "]");

    class _cls1QueryStatementCallback
            implements StatementCallback, SqlProvider
    {
        public Object doInStatement(Statement stmt)
                throws SQLException
        {
            ResultSet rs = null;
            Object obj;
            try
            {
                rs = stmt.executeQuery(sql);
                ResultSet rsToUse = rs;
                if (nativeJdbcExtractor != null)
                    rsToUse = nativeJdbcExtractor.getNativeResultSet(rs);
                obj = rse.extractData(rsToUse);
            }
            finally
            {
                JdbcUtils.closeResultSet(rs);
            }
            return obj;
        }

        public String getSql()
        {
            return sql;
        }

        _cls1QueryStatementCallback()
        {
            super();
        }
    }

    return execute(new _cls1QueryStatementCallback());
}
But with the sources above, I can only get the SQL with ? placeholders.
What I want is output without the question marks, i.e. with each ? filled in with the real data.
Is there any way to do this?
Thanks.
Jeon, sorry, I was occupied with work. :-) Anyway, I have looked into your code and replicated it here using Spring 2.5. I've also googled, and I think you will want to read this and this to understand further.
From the official documentation,
Finally, all of the SQL issued by this class is logged at the 'DEBUG' level under the category corresponding to the fully qualified class name of the template instance (typically JdbcTemplate, but it may be different if a custom subclass of the JdbcTemplate class is being used).
So you need to figure out how to enable logging at the DEBUG level.
I'm not sure exactly how you traced it, but my trace ends up in the code below. So if you enable the DEBUG level you should be able to see the output, maybe not exactly like QA.ARTICLE_ID = 123, but you should get the value printed on the next line, something like in that example. Anyway, I don't have the exact setup of your environment, but I think this should give you a clue.
public Object execute(PreparedStatementCreator psc, PreparedStatementCallback action)
        throws DataAccessException {

    Assert.notNull(psc, "PreparedStatementCreator must not be null");
    Assert.notNull(action, "Callback object must not be null");

    if (logger.isDebugEnabled()) {
        String sql = getSql(psc);
        logger.debug("Executing prepared SQL statement" + (sql != null ? " [" + sql + "]" : ""));
    }
    // ... (rest of the method omitted)
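To actually see the bound values along with the statement, raise the log level for the spring-jdbc classes. Below is a minimal Log4j 1.x sketch (programmatic here; the same categories can be set in log4j.properties instead). The JdbcTemplate category comes from the documentation quoted above; treating org.springframework.jdbc.core.StatementCreatorUtils as the class that logs the ? values, and using TRACE because the exact level varies by Spring version, are assumptions to verify against your setup.

import org.apache.log4j.ConsoleAppender;
import org.apache.log4j.Level;
import org.apache.log4j.Logger;
import org.apache.log4j.PatternLayout;

public class JdbcLoggingSetup {
    // Programmatic Log4j 1.x configuration; equivalent entries can live in log4j.properties.
    public static void enableSqlLogging() {
        Logger.getRootLogger().addAppender(
                new ConsoleAppender(new PatternLayout("%d %p [%c] - %m%n")));

        // Logs "Executing prepared SQL statement [...]" for every statement.
        Logger.getLogger("org.springframework.jdbc.core.JdbcTemplate").setLevel(Level.DEBUG);

        // Expected to log the value bound to each ? placeholder (assumption; check your Spring version).
        Logger.getLogger("org.springframework.jdbc.core.StatementCreatorUtils").setLevel(Level.TRACE);
    }
}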
