Can anybody suggest a good way to visualize SchemaCrawler output in a web application? I need the output as an ER diagram. Is there a good JavaScript or jQuery plugin which uses DOT format? One more thing: when I try to get the output in DOT format, it gives the output as
System Information
SchemaCrawler Information
-=--=--=--=--=--=--=--=--=--=--=--=--=--=--=--=--=--=--=--=--=--=--=--=-
product name SchemaCrawler
product version 12.04.02
Database Information
-=--=--=--=--=--=--=--=--=--=--=--=--=--=--=--=--=--=--=--=--=--=--=--=-
database product name MySQL
database product version 5.6.19-0ubuntu0.14.04.1
database user name demo@localhost
JDBC Driver Information
-=--=--=--=--=--=--=--=--=--=--=--=--=--=--=--=--=--=--=--=--=--=--=--=-
driver name MySQL Connector Java
driver version mysql-connector-java-5.1.34 ( Revision: jess.balint@oracle.com-20141014163213-wqbwpf1ok2kvo1om )
driver class name com.mysql.jdbc.Driver
url jdbc:mysql://localhost:3306/demodb
is JDBC compliant false
Tables
demodb.dbconnection
[table]
id INT NOT NULL
auto-incremented
dbmsType VARCHAR(100)
ipAddress VARCHAR(100)
port VARCHAR(10)
username VARCHAR(100)
This table is used to store database connections
password VARCHAR(100)
databaseName VARCHAR(100)
[primary key]
id ascending
auto-incremented
demodb.roles
[table]
roleId INT NOT NULL
roleName VARCHAR(45) NOT NULL
[primary key]
roleId ascending
demodb.userdetails
[table]
id INT NOT NULL
auto-incremented
name VARCHAR(45)
dob DATE
sex VARCHAR(1)
bloodgroup VARCHAR(5)
address VARCHAR(45)
place VARCHAR(45)
city VARCHAR(45)
state VARCHAR(45)
country VARCHAR(45)
zipcode VARCHAR(45)
mobile VARCHAR(45)
email VARCHAR(45)
occupation VARCHAR(45)
[primary key]
id ascending
auto-incremented
[foreign key, with no action]
id <-- demodb.user.userDetailsId
demodb.userroles
[table]
id INT NOT NULL
auto-incremented
username VARCHAR(45) NOT NULL
roleName VARCHAR(45) NOT NULL
[primary key]
id ascending
auto-incremented
demodb.user
[table]
username VARCHAR(100) NOT NULL
password VARCHAR(300) NOT NULL
userDetailsId INT NOT NULL
active INT NOT NULL
[primary key]
username ascending
[foreign key, with no action]
userDetailsId --> demodb.userdetails.id
[non-unique index]
userDetailsId ascending
[unique index]
username ascending
and my schemacrawler-context.xml is like this:
<bean id="outputOptions" class="schemacrawler.tools.options.OutputOptions">
<property name="outputFormatValue" value="DOT" />
<!-- <property name="outputFile" value="scOutput.txt" /> --><!-- The output should be given (a writer, not a file) from the program. -->
</bean>
It is not working with either "DOT" or "dot".
Khader,
For web output, you have a few output options available out of the box from SchemaCrawler. One option is to generate output in "htmlx" format, which will give you an ER diagram embedded in HTML, in a single file. Another option is to use "png" format to generate a PNG file. It is hard to see what executable you have in the Spring context, since you have not included this key information in your question. I would advise you to use the GraphExecutable.
Please note that in order to generate the ER diagram, you will need GraphViz installed on the web server. GraphViz will always generate a file, and cannot use a Java writer. So, please use the most appropriate constructor for OutputOptions.
If you like, you can have SchemaCrawler generate DOT format. Again, please use the most appropriate constructor for OutputOptions. You do not need GraphViz for this. You can use viz.js to visualize the DOT file using JavaScript.
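For example, a sketch of what the Spring context could look like when using a constructor instead of property injection (the exact constructor arguments vary between SchemaCrawler versions, so treat this as an assumption and check the Javadoc for your release):
<!-- Hypothetical sketch: construct OutputOptions with the output format
     and an output file path via constructor arguments; verify the
     constructor signature against your SchemaCrawler version. -->
<bean id="outputOptions" class="schemacrawler.tools.options.OutputOptions">
    <constructor-arg value="dot" />
    <constructor-arg value="scOutput.dot" />
</bean>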
Hope this helps.
Sualeh Fatehi, SchemaCrawler
I am trying the following code to create a keyspace and a table inside it:
CREATE KEYSPACE IF NOT EXISTS books WITH REPLICATION = { 'class': 'SimpleStrategy',
'replication_factor': 3 };
CREATE TABLE IF NOT EXISTS books (
id UUID PRIMARY KEY,
user_id TEXT UNIQUE NOT NULL,
scale TEXT NOT NULL,
title TEXT NOT NULL,
description TEXT NOT NULL,
reward map<INT,TEXT> NOT NULL,
image_url TEXT NOT NULL,
video_url TEXT NOT NULL,
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
But I do get:
SyntaxException: line 2:10 no viable alternative at input 'UNIQUE'
(...NOT EXISTS books ( id [UUID] UNIQUE...)
What is the problem and how can I fix it?
I see three syntax issues. They are mainly due to the fact that CQL != SQL.
The first is that NOT NULL is not valid at column-definition time. Cassandra doesn't enforce constraints like that at all, so for this case, just get rid of all of them.
Next, Cassandra CQL does not allow default values, so this won't work:
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
Providing the current timestamp for created_at is something that will need to be done at write time. Fortunately, CQL has a few built-in functions to make this easier:
INSERT INTO books (id, user_id, created_at)
VALUES (uuid(), 'userOne', toTimestamp(now()));
In this case, I've invoked the uuid() function to generate a Type-4 UUID. I've also invoked now() for the current time. However, now() returns a TimeUUID (Type-1 UUID), so I've nested it inside the toTimestamp() function to convert it to a TIMESTAMP.
Finally, UNIQUE is not valid.
user_id TEXT UNIQUE NOT NULL,
It looks like you're trying to make sure that duplicate user_ids are not stored with each id. You can help ensure uniqueness of the data in each partition by adding user_id to the end of the primary key definition as a clustering key:
CREATE TABLE IF NOT EXISTS books (
id UUID,
user_id TEXT,
...
PRIMARY KEY (id, user_id));
This PK definition will ensure that data for books will be partitioned by id, containing multiple user_id rows.
I'm not sure what the relationship between books and users is, though. If one book can have many users, then this will work. If one user can have many books, then you'll want to switch the order of the keys to this:
PRIMARY KEY (user_id, id));
In summary, a working table definition for this problem looks like this:
CREATE TABLE IF NOT EXISTS books (
id UUID,
user_id TEXT,
scale TEXT,
title TEXT,
description TEXT,
reward map<INT,TEXT>,
image_url TEXT,
video_url TEXT,
created_at TIMESTAMP,
PRIMARY KEY (id, user_id));
I have a pgadmin container running in ECS. The container is using a specific LDAP_BIND_USER user to bind to the LDAP server but then every user is connecting to pgadmin with their own credentials.
I have modified the entrypoint.sh to fetch the list of RDS instances and generate the servers.json file when the container first starts. This is done by this command, defined in the entrypoint.sh:
/venv/bin/python3 /pgadmin4/setup.py --load-servers "${PGADMIN_SERVER_JSON_FILE}" --user ${PGADMIN_DEFAULT_EMAIL}
The problem
As the container is running in SERVER mode, I need to specify a user (PGADMIN_DEFAULT_EMAIL) to whom these servers are going to be visible when doing the import. This means that only this user will see the server list.
Is there a way to import the servers for every user that connects?
Sadly, it is not possible! You can only import servers for a particular user, and since the users come from LDAP, they are only created in the SQLite db once they connect for the first time.
There is a workaround which is not nice, but it does the job:
Create and import the db tables (servergroup and server) manually!
This is what the user table looks like:
CREATE TABLE user (
id INTEGER NOT NULL,
username VARCHAR(256) NOT NULL,
email VARCHAR(256),
password VARCHAR(256),
active BOOLEAN NOT NULL,
confirmed_at DATETIME,
masterpass_check VARCHAR(256),
auth_source VARCHAR(256) NOT NULL DEFAULT 'internal',
PRIMARY KEY (id),
UNIQUE (username, auth_source),
CHECK (active IN (0, 1))
);
This is what the server table looks like:
CREATE TABLE server (
id INTEGER NOT NULL,
user_id INTEGER NOT NULL,
servergroup_id INTEGER NOT NULL,
name VARCHAR(128) NOT NULL,
host VARCHAR(128),
port INTEGER,
maintenance_db VARCHAR(64) NOT NULL,
username VARCHAR(64),
password VARCHAR(64),
role VARCHAR(64),
ssl_mode VARCHAR(16) NOT NULL CHECK(ssl_mode IN
( 'allow' , 'prefer' , 'require' , 'disable' ,
'verify-ca' , 'verify-full' )
),
comment VARCHAR(1024),
discovery_id VARCHAR(128),
hostaddr TEXT(1024),
db_res TEXT,
passfile TEXT,
sslcert TEXT,
sslkey TEXT,
sslrootcert TEXT,
sslcrl TEXT,
sslcompression INTEGER DEFAULT 0,
bgcolor TEXT(10),
fgcolor TEXT(10),
service TEXT,
use_ssh_tunnel INTEGER DEFAULT 0,
tunnel_host TEXT,
tunnel_port TEXT,
tunnel_username TEXT,
tunnel_authentication INTEGER DEFAULT 0,
tunnel_identity_file TEXT,
connect_timeout INTEGER DEFAULT 0,
tunnel_password TEXT(64),
save_password INTEGER DEFAULT 0,
shared BOOLEAN,
PRIMARY KEY(id),
FOREIGN KEY(user_id) REFERENCES "user_old"(id),
FOREIGN KEY(servergroup_id) REFERENCES servergroup(id)
);
And this is what the servergroup table looks like:
CREATE TABLE servergroup (
id INTEGER NOT NULL,
user_id INTEGER NOT NULL,
name VARCHAR(128) NOT NULL,
PRIMARY KEY (id),
FOREIGN KEY(user_id) REFERENCES "user_old" (id),
UNIQUE (user_id, name)
);
Looking more closely at the server table, we can see that for each id there is a user_id and a servergroup_id associated, so we can create these tables manually and import the values before the server is started.
The catch: there has to be a server row for each user, so just iterate over the length of the user list and everyone will see the servers in their console; a sketch follows below.
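For illustration, a minimal sketch of the manual import (all ids, names, hosts and credentials below are hypothetical placeholders; the columns must match the schema shown above):
-- One server group and one server row, owned by user_id 1
INSERT INTO servergroup (id, user_id, name)
VALUES (1, 1, 'Servers');

INSERT INTO server (id, user_id, servergroup_id, name, host, port,
                    maintenance_db, username, ssl_mode)
VALUES (1, 1, 1, 'my-rds-instance', 'mydb.example.com', 5432,
        'postgres', 'dbuser', 'prefer');

-- Repeat both inserts with fresh ids for every user_id in the user table.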
I'm using Django 3, Python 3.8 and MySQL 8. I have the following Django model, in which I create a search based on a partial name ...
class Coop(models.Model):
objects = CoopManager()
name = models.CharField(max_length=250, null=False)
types = models.ManyToManyField(CoopType, blank=False)
addresses = models.ManyToManyField(Address)
enabled = models.BooleanField(default=True, null=False)
phone = models.ForeignKey(ContactMethod, on_delete=models.CASCADE, null=True, related_name='contact_phone')
email = models.ForeignKey(ContactMethod, on_delete=models.CASCADE, null=True, related_name='contact_email')
web_site = models.TextField()
...
# Look up coops by a partial name (case insensitive)
def find_by_name(self, partial_name):
queryset = Coop.objects.filter(name__icontains=partial_name, enabled=True)
print(queryset.query)
return queryset
The code above produces this query ...
SELECT `directory_coop`.`id`, `directory_coop`.`name`, `directory_coop`.`enabled`, `directory_coop`.`phone_id`, `directory_coop`.`email_id`, `directory_coop`.`web_site` FROM `directory_coop` WHERE (`directory_coop`.`enabled` = True AND `directory_coop`.`name` LIKE %Credit%)
Below is the table that Django migrations produced. Is there any kind of index or other adjustment I can make to speed up these queries -- specifically, the "name LIKE %Credit%" part?
CREATE TABLE `directory_coop` (
`id` int NOT NULL AUTO_INCREMENT,
`name` varchar(250) COLLATE utf8mb4_unicode_ci NOT NULL,
`enabled` tinyint(1) NOT NULL,
`phone_id` int DEFAULT NULL,
`email_id` int DEFAULT NULL,
`web_site` longtext COLLATE utf8mb4_unicode_ci NOT NULL,
PRIMARY KEY (`id`),
KEY `directory_coop_email_id_c20abcd2` (`email_id`),
KEY `directory_coop_phone_id_4c7e2178` (`phone_id`),
CONSTRAINT `directory_coop_email_id_c20abcd2_fk_directory_contactmethod_id` FOREIGN KEY (`email_id`) REFERENCES `directory_contactmethod` (`id`),
CONSTRAINT `directory_coop_phone_id_4c7e2178_fk_directory_contactmethod_id` FOREIGN KEY (`phone_id`) REFERENCES `directory_contactmethod` (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=993 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci
A regular index will not speed up this query, because LIKE with a leading wildcard (%Credit%) cannot use a B-tree index. You can, however, use MySQL's full-text search functions on a FULLTEXT-indexed column.
For that, you need to add the index on the name column manually with a SQL statement, since Django doesn't have that functionality yet.
After enabling the FULLTEXT index on the name column, you can use either the search lookup or a Django Func(...) expression to query the data.
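A sketch of what that could look like (the index name is made up; run the ALTER TABLE directly against MySQL, since the Django migration won't create it for you):
-- Add a FULLTEXT index to the existing column
ALTER TABLE directory_coop ADD FULLTEXT INDEX directory_coop_name_ft (name);

-- Query it with MATCH ... AGAINST instead of LIKE
SELECT id, name
FROM directory_coop
WHERE enabled = TRUE
  AND MATCH(name) AGAINST ('Credit' IN NATURAL LANGUAGE MODE);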
References
Creating FULLTEXT Indexes for Full-Text Search
MySQL 8.0 Full-Text Search Functions
Django MySQL full text search
What is Full Text Search vs LIKE
How to speed up SELECT .. LIKE queries in MySQL on multiple columns?
I'm trying to use node-pg-migrate to handle migrations for an ExpressJS app. I can translate most of the SQL dump into pgm.func() type calls, but I can't see any method for handling actual INSERT statements for initial data in my solution's lookup tables.
It is possible using the pgm.sql catch-all:
pgm.sql(`INSERT INTO users (username, password, created, forname, surname, department, reviewer, approver, active) VALUES
('rd@example.com', 'salty', '2019-12-31 11:00:00', 'Richard', 'Dyce', 'MDM', 'No', 'No', 'Yes');`)
Note the use of backtick (`) to allow breaking the SQL statement across multiple lines.
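For context, a minimal sketch of the migration file that call would live in (the filename and the down migration are assumptions, not part of the original answer):
// e.g. migrations/1590000000000_seed-users.js (hypothetical filename)
exports.up = (pgm) => {
    // Seed the lookup table with initial data
    pgm.sql(`INSERT INTO users (username, password, created, forname, surname, department, reviewer, approver, active) VALUES
    ('rd@example.com', 'salty', '2019-12-31 11:00:00', 'Richard', 'Dyce', 'MDM', 'No', 'No', 'Yes');`);
};

exports.down = (pgm) => {
    // Remove the seeded row on rollback
    pgm.sql(`DELETE FROM users WHERE username = 'rd@example.com';`);
};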
You can use raw SQL if you need to.
Create a migration file with the extension .sql and write the usual statements in it.
This article has a great example.
My example:
-- Up Migration
CREATE TABLE users
(
id BIGSERIAL NOT NULL PRIMARY KEY,
username VARCHAR(50) NOT NULL,
email VARCHAR(50) NOT NULL,
password VARCHAR(50) NOT NULL,
class_id INTEGER NOT NULL,
created_at DATE NOT NULL,
updated_at DATE NOT NULL
);
CREATE TABLE classes
(
id INTEGER NOT NULL PRIMARY KEY,
name VARCHAR(50) NOT NULL,
health INTEGER NOT NULL,
damage INTEGER NOT NULL,
attack_type VARCHAR(50) NOT NULL,
ability VARCHAR(50) NOT NULL,
created_at DATE NOT NULL,
updated_at DATE NOT NULL
);
INSERT INTO classes (id,
name,
health,
damage,
attack_type,
ability,
created_at,
updated_at)
VALUES (0,
'Thief',
100,
25,
'Archery Shot',
'Run Away',
NOW(),
NOW());
-- Down Migration
DROP TABLE users;
DROP TABLE classes;
Getting this error when trying to create a table with a default value for a "_loaded_at" column:
ERROR 1067 (42000): Invalid default value for '_loaded_at'
This does not work:
CREATE TABLE json01(
id BIGINT PRIMARY KEY AUTO_INCREMENT
, _loaded_at DATETIME DEFAULT NOW()
, properties JSON NOT NULL
, SHARD KEY (id)
);
Whereas this does work:
CREATE TABLE json01(
id BIGINT PRIMARY KEY AUTO_INCREMENT
, _loaded_at DATETIME DEFAULT '1970-01-01 00:00:01'
, properties JSON NOT NULL
, SHARD KEY (id)
);
I also tried the function UTC_TIMESTAMP(). I'm hoping there is a way to specify a function as the default, since this is pretty standard functionality. Thanks so much for your help!
How about considering something like this?
_loaded_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
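Applied to the original definition, that would be (a sketch, assuming your MemSQL/SingleStore version accepts CURRENT_TIMESTAMP as a column default on TIMESTAMP):
CREATE TABLE json01(
    id BIGINT PRIMARY KEY AUTO_INCREMENT
  -- TIMESTAMP accepts a function default where DATETIME did not
  , _loaded_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
  , properties JSON NOT NULL
  , SHARD KEY (id)
);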