How to generate graph from SQL Server database? - schemacrawler

I'm trying to generate a graph of my SQL Server tables, but for some reason -command=graph with -outputformat=pdf is not working.
I was able to generate the HTML report when running with -command=schema, but -command=graph does not work at all.
_schemacrawler>schemacrawler.cmd -server=sqlserver -host=sqlserver -port=1433 -database=test -schemas=test\..* -user=test -password=******** -command=graph -noinfo -infolevel=standard -routines= -infolevel=maximum -tabletypes=TABLE -grepcolumns=.*\.TestID -only-matching -outputformat=pdf -outputfile=graph.pdf -loglevel=CONFIG
Output:
Jan 16, 2019 3:12:09 PM schemacrawler.tools.executable.SchemaCrawlerExecutable execute
INFO: Executing command <graph> using <schemacrawler.tools.executable.CommandDaisyChain>
Jan 16, 2019 3:12:09 PM schemacrawler.tools.text.operation.OperationCommand execute
INFO: Output format <pdf> not supported for command <graph>
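If this is a newer SchemaCrawler release, the dedicated graph command may no longer exist (diagram generation was folded into the schema command), and graphical output formats such as PDF also need Graphviz installed and on the PATH. A rough sketch of the adjusted invocation under those assumptions, reusing the connection flags from the command above and dropping the duplicate -infolevel:

schemacrawler.cmd -server=sqlserver -host=sqlserver -port=1433 -database=test -schemas=test\..* -user=test -password=******** -command=schema -noinfo -routines= -infolevel=maximum -tabletypes=TABLE -grepcolumns=.*\.TestID -only-matching -outputformat=pdf -outputfile=graph.pdf -loglevel=CONFIG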

Related

OPC Publisher module does not start on my Ubuntu VM as an edge module

The OPC Publisher marketplace image runs successfully as a standalone container (albeit with server connection problems), but I am not able to deploy it as an IoT Edge module, especially after changing the container create options.
Background: on my host laptop I was never able to get the module up, so I created an Ubuntu VM. When I deployed the edge module in the VM with the default container create options, the module did show up in the iotedge module list as "running". I wanted to set the "--op" option to control the publishing rate, so I changed it in the create options using the portal's "Set modules" tab. Since there is no update button, I used the create button to "recreate" the modules. After this the module did not show up.
Since then the OPC Publisher module has not been showing up on the edge VM. I am following the Microsoft tutorial.
Following is the command:
sudo docker run -v /iiotedge:/appdata mcr.microsoft.com/iotedge/opc-publisher:latest --aa --pf=/appdata/publishednodes.json --c="HostName=<iot hub name>.azure-devices.net;DeviceId=iothubowner;SharedAccessKey=<hub primary key>" --dc="HostName=<edge device id/name>.azure-devices.net;DeviceId=<edge device id/name>;SharedAccessKey=<edge primary key>" --op=10000
Container create options:
{
  "Hostname": "opcpublisher",
  "Cmd": [
    "--pf=/appdata/publishednodes.json",
    "--aa",
    "--op=10000"
  ],
  "HostConfig": {
    "Binds": [
      "/iiotedge:/appdata"
    ]
  }
}
I have not specified the connection strings explicitly since the documentation from Microsoft assures that the runtime will pass them automatically.
The relevant iotedge journalctl logs are here.
Oct 06 19:36:05 shreesha-VirtualBox iotedged[9622]: 2021-10-06T14:06:05Z [INFO] - Pulling image mcr.microsoft.com/iotedge/opc-publisher:latest...
Oct 06 19:36:08 shreesha-VirtualBox iotedged[9622]: 2021-10-06T14:06:08Z [INFO] - Successfully pulled image mcr.microsoft.com/iotedge/opc-publisher:latest
Oct 06 19:36:08 shreesha-VirtualBox iotedged[9622]: 2021-10-06T14:06:08Z [INFO] - Creating module OPCPublisher...
Oct 06 19:36:08 shreesha-VirtualBox iotedged[9622]: 2021-10-06T14:06:08Z [INFO] - Starting new listener for module OPCPublisher
Oct 06 19:36:08 shreesha-VirtualBox iotedged[9622]: 2021-10-06T14:06:08Z [ERR!] - Internal server error: Could not create module OPCPublisher
Oct 06 19:36:08 shreesha-VirtualBox iotedged[9622]: caused by: Could not get module OPCPublisher
The logs from iotedge itself are not very useful, but here they are anyway.
~$ iotedge logs OPCPublisher
A module runtime error occurred
I have also tried docker container prune just to be sure, but it did not help.
Strangely, when I try to restart the module from the troubleshoot page in the Azure portal, it throws the error "module not found in the current environment".
Can someone please help me out in troubleshooting this problem? I will be glad to share more details if required.
I raised a support query in the Azure portal. After sending support bundles and trying various suggestions (removing the DNS configuration, changing the bind path to a non-sudo location, etc.), the team zeroed in on an IoT Edge version mismatch.
After re-reading the documentation, I uninstalled the older iotedge package and installed aziot-edge instead, and the problem was solved!
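For reference, on Ubuntu the switch amounts to removing the old package and installing the new one. This is a rough sketch from memory, assuming an apt-based install with the Microsoft package repository already configured, so treat the exact commands as an approximation rather than the documented procedure:

sudo apt-get remove iotedge
sudo apt-get update
sudo apt-get install aziot-edge
sudo iotedge config mp --connection-string '<edge device connection string>'
sudo iotedge config apply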
The team has raised a github issue for public tracking here:
https://github.com/Azure/Industrial-IoT/issues/1425
#asergaz also pointed in the right direction, but I did not notice it since that answer came a bit later.

Sql Server 2019 (Fedora 32) - Msg 22022, Level 16, State 1, Line 0 SQLServerAgent is not currently running so it cannot be notified of this action

I am getting this error even though the Agent appears to be running. I checked the running state by calling
EXEC xp_servicecontrol 'querystate', 'SQLSERVERAGENT'
And it shows "Stopped." Then I call
EXEC xp_servicecontrol N'START',N'SQLServerAGENT';
And it shows "Running." After that I try executing the job by calling
EXEC dbo.sp_start_job N'SendCollectionSummaryReport' ;
And I get the Error: SQLServerAgent is not currently running so it cannot be notified of this action.
Then I call EXEC xp_servicecontrol 'querystate', 'SQLSERVERAGENT' again
and it shows "Stopped."
It seems something is causing SQLServerAgent to fail to start. Environment details below:
Microsoft SQL Server 2019 (RTM-CU4) (KB4548597) - 15.0.4033.1 (X64) - Developer Edition (64-bit) on Linux (Fedora 32 (Workstation Edition))
Accessing the same server using SSMS from a Windows client machine, all Agent options show up as disabled.
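One thing worth checking on SQL Server for Linux is whether the Agent is enabled at the instance level at all; the service reporting "Stopped" immediately after a start is consistent with that. A minimal sketch, assuming the default mssql-conf location (not taken from the environment described above):

sudo /opt/mssql/bin/mssql-conf set sqlagent.enabled true
sudo systemctl restart mssql-server

After the restart, EXEC dbo.sp_start_job should be able to notify the Agent if this was the cause.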

Access Denied Issue with Radoop. Connecting RapidMiner with Cloudera Quickstart VM

I have stood up a Cloudera Quickstart VM on a PC, with all services running and 14 GB of RAM allocated. On the desktop the VM runs on (not inside the VM), I installed RapidMiner in order to test out Radoop before it goes onto a production server. I used RapidMiner's "Import from Cluster Manager" option, which retrieved the correct configuration from CDH. When I run the full test, I run into an Access Denied error when RapidMiner tests whether it can create a table on Hive.
Logs:
May 18, 2018 3:45:29 PM FINE: Hive query: SHOW TABLES
May 18, 2018 3:45:29 PM FINE: Hive query: set -v
May 18, 2018 3:45:32 PM INFO: Getting radoop_hive-v4.jar file from plugin jar...
May 18, 2018 3:45:32 PM INFO: Remote radoop_hive-v4.jar is up to date.
May 18, 2018 3:45:32 PM INFO: Getting radoop_hive-v4.jar file from plugin jar...
May 18, 2018 3:45:32 PM INFO: Remote radoop_hive-v4.jar is up to date.
May 18, 2018 3:45:32 PM FINE: Hive query: SHOW FUNCTIONS
May 18, 2018 3:45:33 PM INFO: Remote radoop-mr-8.2.1.jar is up to date.
May 18, 2018 3:45:33 PM FINE: Hive query: CREATE TABLE radoop__tmp_cloudera_1526672733223_qznjpj8 (a1 DOUBLE , a2 DOUBLE , a3 DOUBLE , a4 DOUBLE , id0 STRING COMMENT 'role:"id" ', label0 STRING COMMENT 'role:"label" ') ROW FORMAT DELIMITED FIELDS TERMINATED BY ';' STORED AS TEXTFILE
May 18, 2018 3:45:33 PM FINE: Hive query: LOAD DATA INPATH '/tmp/radoop/cloudera/tmp_1526672733088_x0ldwew/' OVERWRITE INTO TABLE radoop__tmp_cloudera_1526672733223_qznjpj8
May 18, 2018 3:45:33 PM FINE: Hive query failed: LOAD DATA INPATH '/tmp/radoop/cloudera/tmp_1526672733088_x0ldwew/' OVERWRITE INTO TABLE radoop__tmp_cloudera_1526672733223_qznjpj8
May 18, 2018 3:45:33 PM FINE: Error: java.sql.SQLException: Error while processing statement: FAILED: Execution Error, return code 20009 from org.apache.hadoop.hive.ql.exec.MoveTask. Access denied: Unable to move source hdfs://quickstart.cloudera:8020/tmp/radoop/cloudera/tmp_1526672733088_x0ldwew to destinationhdfs://quickstart.cloudera:8020/user/hive/warehouse/radoop__tmp_cloudera_1526672733223_qznjpj8: Permission denied by sticky bit: user=hive, path="/tmp/radoop/cloudera/tmp_1526672733088_x0ldwew":cloudera:supergroup:drwxrwxrwx, parent="/tmp/radoop/cloudera":cloudera:supergroup:drwxrwxrwt
May 18, 2018 3:45:33 PM FINER: Connecting to Hive. JDBC url: radoop_hive_0.13.0jdbc:hive2://192.168.100.113:10000/default
May 18, 2018 3:45:33 PM FINER: Connecting to Hive took 108 ms.
May 18, 2018 3:45:33 PM FINE: Hive query failed again, error: java.sql.SQLException: Error while processing statement: FAILED: Execution Error, return code 20009 from org.apache.hadoop.hive.ql.exec.MoveTask. Access denied: Unable to move source hdfs://quickstart.cloudera:8020/tmp/radoop/cloudera/tmp_1526672733088_x0ldwew to destination hdfs://quickstart.cloudera:8020/user/hive/warehouse/radoop__tmp_cloudera_1526672733223_qznjpj8: Permission denied by sticky bit: user=hive, path="/tmp/radoop/cloudera/tmp_1526672733088_x0ldwew":cloudera:supergroup:drwxrwxrwx, parent="/tmp/radoop/cloudera":cloudera:supergroup:drwxrwxrwt
May 18, 2018 3:45:33 PM FINE: Error while processing statement: FAILED: Execution Error, return code 20009 from org.apache.hadoop.hive.ql.exec.MoveTask. Access denied: Unable to move source hdfs://quickstart.cloudera:8020/tmp/radoop/cloudera/tmp_1526672733088_x0ldwew to destination hdfs://quickstart.cloudera:8020/user/hive/warehouse/radoop__tmp_cloudera_1526672733223_qznjpj8: Permission denied by sticky bit: user=hive, path="/tmp/radoop/cloudera/tmp_1526672733088_x0ldwew":cloudera:supergroup:drwxrwxrwx, parent="/tmp/radoop/cloudera":cloudera:supergroup:drwxrwxrwt
May 18, 2018 3:45:33 PM FINER: Connecting to Hive. JDBC url: radoop_hive_0.13.0jdbc:hive2://192.168.100.113:10000/default
Maybe this is just a configuration change I can make in CDH, like modifying the Hive config, or some other way to allow RapidMiner to read/write.
Long story short: on the Cloudera Quickstart 5.13 VM you should use the same username for "Hadoop username" on the Global tab and for "Hive username" on the Hive tab.
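The underlying failure is the sticky bit on /tmp/radoop/cloudera: the staged files are owned by the cloudera user, but Hive's MoveTask runs as hive, which is not allowed to move another user's files out of a sticky-bit directory. Aligning the two usernames as described avoids the mismatch. A rough sketch of how to inspect the ownership from inside the Quickstart VM, plus a hypothetical workaround of handing the staged directory to the hive user (run with HDFS superuser rights, i.e. the hdfs account on the Quickstart VM):

hdfs dfs -ls /tmp/radoop/cloudera
sudo -u hdfs hdfs dfs -chown -R hive:hive /tmp/radoop/cloudera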

RetryableHazelcastException when launching cluster using Hazelcast 3.7

I have a cluster with two members that have map loaders to a database.
Version 3.6.1 showed no issues during startup; however, when I upgraded to 3.7 I was presented with lots of exceptions like the one below, and the cluster failed to start!
Any ideas what it means?
Thanks
14:32:50.613), waitTimeout=-1, callTimeout=60000, name=TRADE_SETTLEMENT}, tryCount=250, tryPauseMillis=500, invokeCount=240, callTimeoutMillis=60000, firstInvocationTimeMs=1473427838152, firstInvocationTime='2016-09-09 14:30:38.152', lastHeartbeatMillis=0, lastHeartbeatTime='1970-01-01 01:00:00.000', target=[xxx.co.uk]:5702, pendingResponse={VOID}, backupsAcksExpected=0, backupsAcksReceived=0, connection=null}, Reason: com.hazelcast.spi.exception.RetryableHazelcastException: Map TRADE_SETTLEMENT is still loading data from external store
Sep 09, 2016 2:32:50 PM com.hazelcast.spi.impl.operationservice.impl.Invocation

Unable to start titan server with embedded cassandra and rexter

I am trying to run Titan with embedded Cassandra and Rexster. I downloaded the Titan distribution titan-all-0.3.2 and unpacked it on a Linux box. After unpacking, this is the command I ran:
$ ./bin/titan.sh config/titan-server-rexster.xml config/titan-server-cassandra.properties
This is what I see in the logs.
After starting the RexPro services, it is unable to deploy and start Grizzly. Has anyone had this issue?
Exception stack trace:
13/10/18 14:51:31 INFO server.RexProRexsterServer: RexPro serving on port: [8184]
Oct 18, 2013 2:51:31 PM org.glassfish.grizzly.servlet.WebappContext deploy
INFO: Starting application [jersey] ...
Oct 18, 2013 2:51:31 PM org.glassfish.grizzly.servlet.WebappContext deploy
SEVERE: [jersey] Exception deploying application. See stack trace for details.
java.lang.RuntimeException: com.sun.jersey.api.container.ContainerException: No WebApplication provider is present
at org.glassfish.grizzly.servlet.WebappContext.initServlets(WebappContext.java:1479)
at org.glassfish.grizzly.servlet.WebappContext.deploy(WebappContext.java:265)
There were some packaging problems in some of the 0.3.2 zip files. You basically need to replace a jar file or two around Jersey to get it to work (or I think use the titan-cassandra distribution instead of titan-all).
You can read more about the issue here and its solution (also reported here), but the answer is:
You should be able to patch 0.3.2 by replacing this jar file in the
Titan lib directory:
jersey-core-1.8.jar
with:
jersey-core-1.17.jar
(http://repo1.maven.org/maven2/com/sun/jersey/jersey-core/1.17/jersey-core-1.17.jar)
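In shell terms, the patch is just a jar swap in the distribution's lib directory; a quick sketch, assuming the archive was unpacked into a titan-all-0.3.2 directory (the path is an assumption):

cd titan-all-0.3.2/lib
rm jersey-core-1.8.jar
wget http://repo1.maven.org/maven2/com/sun/jersey/jersey-core/1.17/jersey-core-1.17.jar

Then restart the server with bin/titan.sh as before.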
