Integer Columns in ScalaFX TableView

I am new to ScalaFX. I am trying to adapt a basic TableView example to include Integer columns.
So far, I have come up with the following code:
class Person(firstName_ : String, age_ : Int) {
  val name = new StringProperty(this, "Name", firstName_)
  val age = new IntegerProperty(this, "Age", age_)
}
object model {
  val dataSource = new ObservableBuffer[Person]()
  dataSource += new Person("Moe", 45)
  dataSource += new Person("Larry", 43)
  dataSource += new Person("Curly", 41)
  dataSource += new Person("Shemp", 39)
  dataSource += new Person("Joe", 37)
}
object view {
  val nameCol = new TableColumn[Person, String] {
    text = "Name"
    cellValueFactory = {_.value.name}
  }
  val ageCol = new TableColumn[Person, Int] {
    text = "Age"
    cellValueFactory = {_.value.age}
  }
}
object TestTableView extends JFXApp {
  stage = new PrimaryStage {
    title = "ScalaFx Test"
    width = 800; height = 500
    scene = new Scene {
      content = new TableView[Person](model.dataSource) {
        columns += view.nameCol
        columns += view.ageCol
      }
    }
  }
}
The problem is that while nameCol works well, ageCol doesn't even compile. On the line cellValueFactory = {_.value.age}, I get a type mismatch error: the compiler expects an ObservableValue[Int, Int] but gets an IntegerProperty.
I am using ScalaFX 1.0 M2, compiled for Scala 2.10.

Change IntegerProperty to ScalaFX ObjectProperty[Int], simply:
val age = ObjectProperty(this, "Age", age_)
The rest can stay the same.
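For reference, the whole revised Person class would look like this (a minimal sketch, assuming the usual scalafx.beans.property imports):

import scalafx.beans.property.{ObjectProperty, StringProperty}

class Person(firstName_ : String, age_ : Int) {
  val name = new StringProperty(this, "Name", firstName_)
  // ObjectProperty[Int] is an ObservableValue[Int, Int], which is what the column expects
  val age = ObjectProperty(this, "Age", age_)
}

With that change, cellValueFactory = {_.value.age} compiles unchanged against TableColumn[Person, Int].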

So try:
TableColumn<Person, String> firstNameCol = new TableColumn<>("First Name");
or, for a table action column:
TableColumn<Person, Boolean> actionCol = new TableColumn<>("Action");
actionCol.setSortable(false);
actionCol.setCellValueFactory(new Callback<TableColumn.CellDataFeatures<Person, Boolean>, ObservableValue<Boolean>>() {
    @Override
    public ObservableValue<Boolean> call(TableColumn.CellDataFeatures<Person, Boolean> features) {
        return new SimpleBooleanProperty(features.getValue() != null);
    }
});

Related

How to fix CameraSource preview orientation in Android Studio

So I am using CameraSource to preview the camera scan in my application, but the problem is that the camera preview is always horizontal (tilted sideways), as shown in the screenshot I linked. How do I make the camera scan preview upright (normal)?
This is my current code:
private fun setupCameraSource() {
    cameraSource = CameraSource.Builder(activity, cameraSourceCustomDetector)
        .setAutoFocusEnabled(true).setRequestedFps(10F)
        .setFacing(CameraSource.CAMERA_FACING_BACK).build()
}
Or, in case the whole class helps (I don't know if I'm doing something wrong):
class colorDetector(activity: Activity, cameraPreview: ImageView?, editCameraPreview: ImageView?) {
    private val TAG = "colorDetector"
    private var bitmap: Bitmap? = null
    private val FPS: Number = 20
    private var cameraSource: CameraSource? = null
    private var cameraSourceCustomDetector: CustomDetector? = null
    private var editCameraPreview: ImageView? = null
    //private var visionUtilities: VisionUtilities? = null
    private var activity: Activity? = null
    private var cameraPreview: ImageView? = null

    init {
        this.activity = activity
        this.cameraPreview = cameraPreview
        this.editCameraPreview = editCameraPreview
        cameraSourceCustomDetector = CustomDetector()
        setupCameraSource()
    }

    private fun setupCameraSource() {
        cameraSource = CameraSource.Builder(activity, cameraSourceCustomDetector)
            .setAutoFocusEnabled(true).setRequestedFps(10F)
            .setFacing(CameraSource.CAMERA_FACING_BACK).build()
    }

    fun start(context: Context) {
        try {
            if (ActivityCompat.checkSelfPermission(context, Manifest.permission.CAMERA) == PackageManager.PERMISSION_GRANTED) {
                cameraSource?.start()
            }
        } catch (e: IOException) {
            Log.d(TAG, "Couldn't start camera")
        }
    }

    fun stop() {
        cameraSource!!.stop()
    }

    fun saveImageToStorage() {
        saveImgInStorage(bitmap!!, activity!!)
    }

    fun uploadImageToFirebase(type: uploadType) {
        uploadPictureToFirebaseStorage(activity!!, bitmap, null, type)
    }

    inner class CustomDetector : Detector<Point>() {
        @RequiresApi(Build.VERSION_CODES.O)
        override fun detect(frame: Frame?): SparseArray<Point>? {
            val byteBuffer: ByteBuffer = frame!!.grayscaleImageData
            val bytes: ByteArray = byteBuffer.array()
            val w = frame.metadata.width
            val h = frame.metadata.height
            val yuvimage = YuvImage(bytes, ImageFormat.NV21, w, h, null)
            val baos = ByteArrayOutputStream()
            yuvimage.compressToJpeg(Rect(0, 0, w, h), 100, baos) // Where 100 is the quality of the generated jpeg
            val jpegArray = baos.toByteArray()
            bitmap = BitmapFactory.decodeByteArray(jpegArray, 0, jpegArray.size)
            activity?.runOnUiThread(Runnable {
                getEditedImg(bitmap!!, w, h, cameraPreview!!, editCameraPreview!!, activity!!)
            })
            return null
        }
    }
}
Well, in the end, after a really long search, I decided to do it like this. I'm not sure it's the ideal way, but if it helps someone, here is my solution:
I created a function called rotate:
// Rotates 'bitmap' clockwise by 'degree'
fun rotate(bitmap: Bitmap, degree: Float): Bitmap? {
    val matrix = Matrix()
    matrix.postRotate(degree)
    // Note: this scales the bitmap to a square (width x width) before rotating
    val resizedBitmap = Bitmap.createScaledBitmap(bitmap, bitmap.width, bitmap.width, false)
    return Bitmap.createBitmap(resizedBitmap, 0, 0, resizedBitmap.width, resizedBitmap.height, matrix, true)
}
Then I replaced the three places where the bitmap is assigned with rotate(bitmap, 90F).
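For example, the assignment inside detect() above would become something like this (a sketch; the other call sites follow the same pattern):

bitmap = rotate(BitmapFactory.decodeByteArray(jpegArray, 0, jpegArray.size), 90F)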

Spark 2 Dataframe Save to Hive - Compaction

I am using a Spark session to save a DataFrame to a Hive table. The code is as below.
df.write.mode(SaveMode.Append).format("orc").insertInto("table")
The data comes to Spark from Kafka, and it can be a huge amount arriving throughout the day. Does the Spark DataFrame save internally perform Hive compaction? If not, what is the best way to do compaction at regular intervals without affecting the table insertions?
In your example you should add partitionBy, since the data can be huge:
df.write.mode(SaveMode.Append).format("orc").partitionBy("age")
Or you can also achieve it as below.
The way I have done this is to first register a temp table in the Spark job itself and then leverage the sql method of the HiveContext to create a new table in Hive using the data from the temp table. For example, if I have a dataframe df and a HiveContext hc, the general process is:
df.registerTempTable("my_temp_table")
hc.sql("INSERT OVERWRITE TABLE table_name PARTITION (partition_col) SELECT a, b, partition_col FROM my_temp_table")
Another option is a dedicated compaction utility class:
public class HiveCompaction {
    private static SparkConf sparkConf;
    private static JavaSparkContext sc;
    private static SparkSession sqlContext = springutil.getBean("testSparkSession");
    private static HashMap<Object, Object> partitionColumns;

    public static void compact(String table, Dataset<Row> dataToCompact) {
        logger.info("Started Compaction for - " + table);
        if (!partitionColumns.containsKey(table)) {
            compact_table_without_partition(table, dataToCompact);
        } else {
            compact_table_with_partition(table, dataToCompact, partitionColumns);
        }
        logger.info("Data Overwritten in HIVE table : " + table + " successfully");
    }

    private static void compact_table_with_partition(String table, Dataset<Row> dataToCompact,
            Map<Object, Object> partitionData) {
        String[] partitions = ((String) partitionData.get(table)).split(",");
        List<Map<Object, Object>> partitionMap = getPartitionsToCompact(dataToCompact, Arrays.asList(partitions));
        for (Map mapper : partitionMap) {
            // sqlContext.sql("REFRESH TABLE staging.dummy_table");
            String query = "select * from " + table + " where " + frameQuery(" and ", mapper);
            Dataset<Row> originalTable = sqlContext.sql(query);
            if (originalTable.count() == 0) {
                dataToCompact.write().mode("append").format("parquet").insertInto(table);
            } else {
                String location = getHdfsFileLocation(table);
                String uuid = getUUID();
                updateTable(table, dataToCompact, originalTable, uuid);
                String destinationPath = framePath(location, frameQuery("/", mapper), uuid);
                sqlContext.sql("Alter table " + table + " partition(" + frameQuery(",", mapper) + ") set location '"
                        + destinationPath + "'");
            }
        }
    }

    private static void compact_table_without_partition(String table, Dataset<Row> dataToCompact) {
        String query = "select * from " + table;
        Dataset<Row> originalTable = sqlContext.sql(query);
        if (originalTable.count() == 0) {
            dataToCompact.write().mode("append").format("parquet").insertInto(table);
        } else {
            String location = getHdfsFileLocation(table);
            String uuid = getUUID();
            String destinationPath = framePath(location, null, uuid);
            updateTable(table, dataToCompact, originalTable, uuid);
            sqlContext.sql("Alter table " + table + " set location '" + destinationPath + "'");
        }
    }

    private static void updateTable(String table, Dataset<Row> dataToCompact, Dataset<Row> originalTable, String uuid) {
        Seq<String> joinColumnSeq = getPrimaryKeyColumns();
        Dataset<Row> unModifiedRecords = originalTable.join(dataToCompact, joinColumnSeq, "leftanti");
        Dataset<Row> dataToInsert1 = dataToCompact.withColumn("uuid", functions.lit(uuid));
        Dataset<Row> dataToInsert2 = unModifiedRecords.withColumn("uuid", functions.lit(uuid));
        dataToInsert1.write().mode("append").format("parquet").insertInto(table + "_compacted");
        dataToInsert2.write().mode("append").format("parquet").insertInto(table + "_compacted");
    }

    private static String getHdfsFileLocation(String table) {
        Dataset<Row> tableDescription = sqlContext.sql("describe formatted " + table + "_compacted");
        List<Row> rows = tableDescription.collectAsList();
        String location = null;
        for (Row r : rows) {
            if (r.get(0).equals("Location")) {
                location = r.getString(1);
                break;
            }
        }
        return location;
    }

    private static String frameQuery(String delimiter, Map mapper) {
        StringBuilder modifiedQuery = new StringBuilder();
        int i = 1;
        for (Object key : mapper.keySet()) {
            modifiedQuery.append(key + "=");
            modifiedQuery.append(mapper.get(key));
            if (mapper.size() > i)
                modifiedQuery.append(delimiter);
            i++;
        }
        return modifiedQuery.toString();
    }

    private static String framePath(String location, String framedpartition, String uuid) {
        StringBuilder loc = new StringBuilder(location);
        loc.append("/");
        if (StringUtils.isNotEmpty(framedpartition)) {
            loc.append(framedpartition);
            loc.append("/");
        }
        loc.append("uuid=");
        loc.append(uuid);
        logger.info(loc.toString());
        return loc.toString();
    }

    public static Seq<String> getColumnSeq(List<String> joinColumns) {
        List<String> cols = new ArrayList<>(joinColumns.size());
        for (int i = 0; i < joinColumns.size(); i++) {
            cols.add(joinColumns.get(i).toLowerCase());
        }
        return JavaConverters.asScalaBufferConverter(cols).asScala().readOnly();
    }

    private static String getUUID() {
        // Not a real UUID: a timestamp plus a small random suffix
        Random rand = new Random();
        int randNum = rand.nextInt(200);
        return DateTimeFormatter.ofPattern("yyyyMMddHHmmSSS").format(LocalDateTime.now()) + randNum;
    }

    private static List<Map<Object, Object>> getPartitionsToCompact(Dataset<Row> filteredRecords,
            List<String> partitions) {
        Column[] columns = new Column[partitions.size()];
        int index = 0;
        for (String c : partitions) {
            columns[index] = new Column(c);
            index++;
        }
        // TODO: add a filter condition for selecting known partitions
        Dataset<Row> partitionsToCompact = filteredRecords.select(columns).distinct();
        JavaRDD<Map<Object, Object>> querywithPartitions = partitionsToCompact.toJavaRDD().map(row -> {
            return convertRowToMap(row);
        });
        return querywithPartitions.collect();
    }

    private static Map<Object, Object> convertRowToMap(Row row) {
        StructField[] fields = row.schema().fields();
        List<StructField> structFields = Arrays.asList(fields);
        Map<Object, Object> a = structFields.stream()
                .collect(Collectors.toMap(e -> ((StructField) e).name(), e -> row.getAs(e.name())));
        return a;
    }

    private static Seq<String> getPrimaryKeyColumns() {
        ArrayList<String> primaryKeyColumns = new ArrayList<String>();
        Seq<String> joinColumnSeq = getColumnSeq(primaryKeyColumns);
        return joinColumnSeq;
    }

    /*
     * public static void initSpark(String jobname) { sparkConf = new
     * SparkConf().setAppName(jobname); sparkConf.setMaster("local[3]");
     * sparkConf.set("spark.driver.allowMultipleContexts", "true"); sc = new
     * JavaSparkContext(); sqlContext = new SQLContext(sc); }
     */

    public static HashMap<Object, Object> getPartitionColumns() {
        HashMap<Object, Object> map = new HashMap<Object, Object>();
        map.put((Object) "staging.dummy_table", "trade_date,dwh_business_date,region_cd");
        return map;
    }

    public static void initialize(String table) {
        // initSpark("Hive Table Compaction -" + table);
        partitionColumns = getPartitionColumns();
    }
}
Usage:
String table = "staging.dummy_table";
HiveCompaction.initialize(table);
Dataset<Row> dataToCompact = sparkSession.sql("select * from staging.dummy_table");
HiveCompaction.compact(table, dataToCompact);
sparkSession.sql("select * from staging.dummy_table_compacted").show();
System.out.println("Compaction successful");

How to edit data in a dynamic TableView with dynamic columns in JavaFX

This is a demo that shows data from a CSV or DAT file in a TableView without making a custom row class, in JavaFX 2.0. I call this a dynamic TableView because the table view automatically manages the columns and rows.
From my research on editable TableViews, you must define a custom class and wire it into the TableView, as in this demo ==> http://docs.oracle.com/javafx/2/ui_controls/table-view.htm
But in this case I cannot do that, because with a CSV or .dat file we don't know how many columns there will be. I want to make this TableView editable by adding a TextField into each TableCell. How can that be done without a custom class (since you don't know how many columns there are), and if a custom class is required, what would its design look like for this case?
Could you please help me?
private void getDataDetailWithDynamic() {
    tblView.getItems().clear();
    tblView.getColumns().clear();
    tblView.setPlaceholder(new Label("Loading..."));
    try {
        File aFile = new File(txtFilePath.getText());
        InputStream is = new BufferedInputStream(new FileInputStream(aFile));
        Reader reader = new InputStreamReader(is, "UTF-8");
        BufferedReader in = new BufferedReader(reader);
        final String headerLine = in.readLine();
        final String[] headerValues = headerLine.split("\t");
        for (int column = 0; column < headerValues.length; column++) {
            tblView.getColumns().add(createColumn(column, headerValues[column]));
        }
        // Data:
        String dataLine;
        while ((dataLine = in.readLine()) != null) {
            final String[] dataValues = dataLine.split("\t");
            // Add additional columns if necessary:
            for (int columnIndex = tblView.getColumns().size(); columnIndex < dataValues.length; columnIndex++) {
                tblView.getColumns().add(createColumn(columnIndex, ""));
            }
            // Add data to table:
            ObservableList<StringProperty> data = FXCollections.observableArrayList();
            for (String value : dataValues) {
                data.add(new SimpleStringProperty(value));
            }
            tblView.getItems().add(data);
        }
    } catch (Exception ex) {
        System.out.println("ex: " + ex.toString());
    }
    for (int i = 0; i < tblView.getColumns().size(); i++) {
        TableColumn col = (TableColumn) tblView.getColumns().get(i);
        col.setPrefWidth(70);
    }
}
private TableColumn createColumn(final int columnIndex, String columnTitle) {
    TableColumn column = new TableColumn(DefaultVars.BLANK_CHARACTER);
    String title;
    if (columnTitle == null || columnTitle.trim().length() == 0) {
        title = "Column " + (columnIndex + 1);
    } else {
        title = columnTitle;
    }
    Callback<TableColumn, TableCell> cellFactory = new Callback<TableColumn, TableCell>() {
        @Override
        public TableCell call(TableColumn p) {
            System.out.println("event cell");
            EditingCellData cellExtend = new EditingCellData();
            return cellExtend;
        }
    };
    column.setText(title);
    column.setCellFactory(cellFactory); // a cell factory, not a cell *value* factory
    return column;
}
Thanks for reading.
This is the best way to resolve it ==> https://forums.oracle.com/message/11216643#11216643
Thanks!
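In case the link goes stale, here is a minimal sketch of the usual no-custom-class approach (assuming Java 8 and JavaFX 2.2+, which added TextFieldTableCell; createEditableColumn is my name for a drop-in replacement for createColumn above):

import javafx.beans.property.StringProperty;
import javafx.collections.ObservableList;
import javafx.scene.control.TableColumn;
import javafx.scene.control.cell.TextFieldTableCell;

private TableColumn<ObservableList<StringProperty>, String> createEditableColumn(
        final int columnIndex, String columnTitle) {
    String title = (columnTitle == null || columnTitle.trim().isEmpty())
            ? "Column " + (columnIndex + 1) : columnTitle;
    TableColumn<ObservableList<StringProperty>, String> column = new TableColumn<>(title);
    // Each row is an ObservableList<StringProperty>; expose the property for this column
    column.setCellValueFactory(cellData -> cellData.getValue().get(columnIndex));
    // TextFieldTableCell swaps in a TextField when the cell enters editing mode
    column.setCellFactory(TextFieldTableCell.forTableColumn());
    // Push the edited text back into the row's StringProperty
    column.setOnEditCommit(event ->
            event.getRowValue().get(columnIndex).setValue(event.getNewValue()));
    return column;
}

Remember to also call tblView.setEditable(true); the items must be ObservableList<StringProperty> rows, as in the question's code.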

How to Export DataTable into Excel

I have a list of objects and I converted it into a DataTable; now I am unable to export that into Excel.
Below is the sample code:
class Program
{
    static void Main(string[] args)
    {
        Student s1 = new Student("Student-A", 100);
        Student s2 = new Student("Student-B", 90);
        Student s3 = new Student("Student-C", 80);
        List<Student> studentList = new List<Student>() { s1, s2, s3 };
        ListToDataTable converter = new ListToDataTable();
        DataTable dt = converter.ToDataTable(studentList);
        Console.WriteLine();
    }
}
Below is the Student class, which has two properties:
class Student
{
    public string Name { get; set; }
    public int? Score { get; set; }

    public Student(string name, int? score)
    {
        this.Name = name;
        this.Score = score;
    }
}
Below is the class used for converting a list of objects to a DataTable:
public class ListToDataTable
{
    public DataTable ToDataTable<T>(List<T> items)
    {
        DataTable dataTable = new DataTable(typeof(T).Name);
        PropertyInfo[] Props = typeof(T).GetProperties(BindingFlags.Public | BindingFlags.Instance);
        foreach (PropertyInfo prop in Props)
        {
            dataTable.Columns.Add(prop.Name);
        }
        foreach (T item in items)
        {
            var values = new object[Props.Length];
            for (int i = 0; i < Props.Length; i++)
            {
                values[i] = Props[i].GetValue(item, null);
            }
            dataTable.Rows.Add(values);
        }
        return dataTable;
    }
}
Try to create a simple CSV file from your DataTable.
You can use the following DataTable extension, after you have converted your list to a DataTable.
public static string ToCSV(this DataTable table)
{
    var result = new StringBuilder();
    for (int i = 0; i < table.Columns.Count; i++)
    {
        result.Append(table.Columns[i].ColumnName);
        result.Append(i == table.Columns.Count - 1 ? "\n" : ",");
    }
    foreach (DataRow row in table.Rows)
    {
        for (int i = 0; i < table.Columns.Count; i++)
        {
            result.Append(row[i].ToString());
            result.Append(i == table.Columns.Count - 1 ? "\n" : ",");
        }
    }
    return result.ToString();
}
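One caveat: this sketch does no escaping, so values containing commas, quotes, or newlines will break the output. A minimal helper you could run each value through before appending (EscapeCsvField is my own name, not part of the extension above):

static string EscapeCsvField(string field)
{
    // Per RFC 4180: quote fields containing delimiters and double any embedded quotes
    if (field.Contains(",") || field.Contains("\"") || field.Contains("\n"))
    {
        return "\"" + field.Replace("\"", "\"\"") + "\"";
    }
    return field;
}

That is, use result.Append(EscapeCsvField(row[i].ToString())) in the row loop.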
Example of usage :
// replace with your data table here
DataTable dt = new DataTable();
var bytes = Encoding.GetEncoding("iso-8859-1").GetBytes(dt.ToCSV());
MemoryStream stream = new MemoryStream(bytes);
StreamReader reader = new StreamReader(stream);
Response.Clear();
Response.Buffer = true;
Response.AddHeader("content-disposition", string.Format("attachment;filename={0}.csv", "filename"));
Response.ContentType = "application/text";
Response.ContentEncoding = Encoding.Unicode;
Response.Output.Write(reader.ReadToEnd());
Response.Flush();
Response.End();
Since you have this tagged as interop, I went that route (no need to create a CSV file; just export directly to Excel).
This solution is not the prettiest, but it works. I've also changed it a bit, since you can export your studentList directly to Excel (no need to convert it to a DataTable first).
First thing, in your solution, you need to add a reference to "Microsoft.Office.Interop.Excel". To do this, right click on "References" in Solution Explorer, then "Add Reference", then click on the ".NET" tab, then scroll down to find it.
Once that is done, update your code as follows:
using System;
using System.Collections.Generic;
using Excel = Microsoft.Office.Interop.Excel;

static void Main()
{
    var s1 = new Student("Student-A", 100);
    var s2 = new Student("Student-B", 90);
    var s3 = new Student("Student-C", 80);
    var studentList = new List<Student> { s1, s2, s3 };

    // Create an excel sheet
    var xlApp = new Excel.Application { Visible = true };  // Create instance of Excel and make it visible.
    xlApp.Workbooks.Add(Excel.XlSheetType.xlWorksheet);    // Create a workbook (WB)
    var xlWS = (Excel.Worksheet)xlApp.ActiveSheet;         // Reference the active worksheet (WS)
    xlWS.Name = "Exported Student";                        // Name the worksheet

    // Add header fields to Excel [row, column]
    var r = 1;
    xlWS.Cells[r, 1] = "Name";
    xlWS.Cells[r, 2] = "Score";

    // Copy data from studentList to Excel
    foreach (Student student in studentList)
    {
        r++;
        xlWS.Cells[r, 1] = student.Name;
        xlWS.Cells[r, 2] = student.Score;
    }
}
This will automatically export your studentList to an Excel sheet. There wasn't a need for the ListToDataTable class.
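If you also want to save the workbook to disk instead of just leaving Excel open, a possible follow-up (the path here is only an example):

xlApp.ActiveWorkbook.SaveAs(@"C:\Temp\Students.xlsx"); // write the file
xlApp.Quit(); // close the Excel instance when you are done with it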

With OrmLite, is there a way to automatically update table schema when my POCO is modified?

Can OrmLite recognize differences between my POCO and my schema and automatically add (or remove) columns as necessary to keep the schema in sync with my POCO?
If this ability doesn't exist, is there a way for me to query the db for the table schema so that I can perform the syncing manually? I found this, but I'm using the version of OrmLite that installs with ServiceStack, and for the life of me I cannot find a namespace that has the TableInfo classes.
I created an extension method to automatically add missing columns to my tables. Been working great so far. Caveat: the code for getting the column names is SQL Server specific.
namespace System.Data
{
    public static class IDbConnectionExtensions
    {
        private static List<string> GetColumnNames(IDbConnection db, string tableName)
        {
            var columns = new List<string>();
            using (var cmd = db.CreateCommand())
            {
                cmd.CommandText = "exec sp_columns " + tableName;
                var reader = cmd.ExecuteReader();
                while (reader.Read())
                {
                    var ordinal = reader.GetOrdinal("COLUMN_NAME");
                    columns.Add(reader.GetString(ordinal));
                }
                reader.Close();
            }
            return columns;
        }

        public static void AlterTable<T>(this IDbConnection db) where T : new()
        {
            var model = ModelDefinition<T>.Definition;
            // just create the table if it doesn't already exist
            if (db.TableExists(model.ModelName) == false)
            {
                db.CreateTable<T>(overwrite: false);
                return;
            }
            // find each of the missing fields
            var columns = GetColumnNames(db, model.ModelName);
            var missing = ModelDefinition<T>.Definition.FieldDefinitions
                .Where(field => columns.Contains(field.FieldName) == false)
                .ToList();
            // add a new column for each missing field
            foreach (var field in missing)
            {
                var alterSql = string.Format("ALTER TABLE {0} ADD {1} {2}",
                    model.ModelName,
                    field.FieldName,
                    db.GetDialectProvider().GetColumnTypeDefinition(field.FieldType)
                );
                Console.WriteLine(alterSql);
                db.ExecuteSql(alterSql);
            }
        }
    }
}
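Usage is then one call per POCO, for example (dbFactory being your OrmLiteConnectionFactory, Person standing in for any of your models):

using (var db = dbFactory.OpenDbConnection())
{
    db.AlterTable<Person>(); // creates the table, or adds any missing columns
}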
No, there is no current support for auto-migration of RDBMS schemas vs POCOs in ServiceStack's OrmLite.
There are currently a few threads being discussed in OrmLite's issues that explore the different ways to add this.
Here is a slightly modified version of the code from cornelha, adapted to work with PostgreSQL. I removed this fragment:
//private static List<string> GetColumnNames(object poco)
//{
// var list = new List<string>();
// foreach (var prop in poco.GetType().GetProperties())
// {
// list.Add(prop.Name);
// }
// return list;
//}
and used the IOrmLiteDialectProvider.NamingStrategy.GetTableName and IOrmLiteDialectProvider.NamingStrategy.GetColumnName methods to convert table and column names from PascalNotation to the this_kind_of_notation that OrmLite uses when creating tables in PostgreSQL.
public static class IDbConnectionExtensions
{
    private static List<string> GetColumnNames(IDbConnection db, string tableName, IOrmLiteDialectProvider provider)
    {
        var columns = new List<string>();
        using (var cmd = db.CreateCommand())
        {
            cmd.CommandText = getCommandText(tableName, provider);
            var tbl = new DataTable();
            tbl.Load(cmd.ExecuteReader());
            for (int i = 0; i < tbl.Columns.Count; i++)
            {
                columns.Add(tbl.Columns[i].ColumnName);
            }
        }
        return columns;
    }

    private static string getCommandText(string tableName, IOrmLiteDialectProvider provider)
    {
        if (provider == PostgreSqlDialect.Provider)
            return string.Format("select * from {0} limit 1", tableName);
        else return string.Format("select top 1 * from {0}", tableName);
    }

    public static void AlterTable<T>(this IDbConnection db, IOrmLiteDialectProvider provider) where T : new()
    {
        var model = ModelDefinition<T>.Definition;
        var table = new T();
        var namingStrategy = provider.NamingStrategy;
        // just create the table if it doesn't already exist
        var tableName = namingStrategy.GetTableName(model.ModelName);
        if (db.TableExists(tableName) == false)
        {
            db.CreateTable<T>(overwrite: false);
            return;
        }
        // find each of the missing fields
        var columns = GetColumnNames(db, model.ModelName, provider);
        var missing = ModelDefinition<T>.Definition.FieldDefinitions
            .Where(field => columns.Contains(namingStrategy.GetColumnName(field.FieldName)) == false)
            .ToList();
        // add a new column for each missing field
        foreach (var field in missing)
        {
            var columnName = namingStrategy.GetColumnName(field.FieldName);
            var alterSql = string.Format("ALTER TABLE {0} ADD COLUMN {1} {2}",
                tableName,
                columnName,
                db.GetDialectProvider().GetColumnTypeDefinition(field.FieldType)
            );
            Console.WriteLine(alterSql);
            db.ExecuteSql(alterSql);
        }
    }
}
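Usage mirrors the original, with the dialect provider passed in (a sketch; PostgreSqlDialect.Provider comes from the OrmLite PostgreSQL package):

using (var db = dbFactory.OpenDbConnection())
{
    db.AlterTable<Person>(PostgreSqlDialect.Provider);
}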
I implemented an UpdateTable function. The basic idea is:
1. Rename the current table in the database.
2. Let OrmLite create the new schema.
3. Copy the relevant data from the old table to the new one.
4. Drop the old table.
GitHub repo: https://github.com/peheje/Extending-NServiceKit.OrmLite
Condensed code:
public interface ISqlProvider
{
    string RenameTableSql(string currentName, string newName);
    string GetColumnNamesSql(string tableName);
    string InsertIntoSql(string intoTableName, string fromTableName, string commaSeparatedColumns);
    string DropTableSql(string tableName);
}

public static class DbUpdate // containing class, per the usage below
{
    public static void UpdateTable<T>(IDbConnection connection, ISqlProvider sqlProvider) where T : new()
    {
        connection.CreateTableIfNotExists<T>();
        var model = ModelDefinition<T>.Definition;
        string tableName = model.Name;
        string tableNameTmp = tableName + "Tmp";
        string renameTableSql = sqlProvider.RenameTableSql(tableName, tableNameTmp);
        connection.ExecuteNonQuery(renameTableSql);
        connection.CreateTable<T>();
        string getModelColumnsSql = sqlProvider.GetColumnNamesSql(tableName);
        var modelColumns = connection.SqlList<string>(getModelColumnsSql);
        string getDbColumnsSql = sqlProvider.GetColumnNamesSql(tableNameTmp);
        var dbColumns = connection.SqlList<string>(getDbColumnsSql);
        List<string> activeFields = dbColumns.Where(dbColumn => modelColumns.Contains(dbColumn)).ToList();
        string activeFieldsCommaSep = ListToCommaSeparatedString(activeFields);
        string insertIntoSql = sqlProvider.InsertIntoSql(tableName, tableNameTmp, activeFieldsCommaSep);
        connection.ExecuteSql(insertIntoSql);
        string dropTableSql = sqlProvider.DropTableSql(tableNameTmp);
        //connection.ExecuteSql(dropTableSql); //maybe you want to clean up yourself, else uncomment
    }

    private static String ListToCommaSeparatedString(List<String> source)
    {
        var sb = new StringBuilder();
        for (int i = 0; i < source.Count; i++)
        {
            sb.Append(source[i]);
            if (i < source.Count - 1)
            {
                sb.Append(", ");
            }
        }
        return sb.ToString();
    }
}
MySql implementation:
public class MySqlProvider : ISqlProvider
{
    public string RenameTableSql(string currentName, string newName)
    {
        return "RENAME TABLE `" + currentName + "` TO `" + newName + "`;";
    }

    public string GetColumnNamesSql(string tableName)
    {
        return "SELECT COLUMN_NAME FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = '" + tableName + "';";
    }

    public string InsertIntoSql(string intoTableName, string fromTableName, string commaSeparatedColumns)
    {
        return "INSERT INTO `" + intoTableName + "` (" + commaSeparatedColumns + ") SELECT " + commaSeparatedColumns + " FROM `" + fromTableName + "`;";
    }

    public string DropTableSql(string tableName)
    {
        return "DROP TABLE `" + tableName + "`;";
    }
}
Usage:
using (var db = dbFactory.OpenDbConnection())
{
    DbUpdate.UpdateTable<SimpleData>(db, new MySqlProvider());
}
Haven't tested with FKs. Can't handle renaming properties.
I needed to implement something similar and found the post by Scott very helpful. I decided to make a small change that makes it much more agnostic. Since I only use Sqlite and MSSQL, I made the getCommandText method very simple, but it can be extended. I used a simple DataTable to get the columns. This solution works perfectly for my requirements.
public static class IDbConnectionExtensions
{
    private static List<string> GetColumnNames(IDbConnection db, string tableName, IOrmLiteDialectProvider provider)
    {
        var columns = new List<string>();
        using (var cmd = db.CreateCommand())
        {
            cmd.CommandText = getCommandText(tableName, provider);
            var tbl = new DataTable();
            tbl.Load(cmd.ExecuteReader());
            for (int i = 0; i < tbl.Columns.Count; i++)
            {
                columns.Add(tbl.Columns[i].ColumnName);
            }
        }
        return columns;
    }

    private static string getCommandText(string tableName, IOrmLiteDialectProvider provider)
    {
        if (provider == SqliteDialect.Provider)
            return string.Format("select * from {0} limit 1", tableName);
        else return string.Format("select top 1 * from {0}", tableName);
    }

    private static List<string> GetColumnNames(object poco)
    {
        var list = new List<string>();
        foreach (var prop in poco.GetType().GetProperties())
        {
            list.Add(prop.Name);
        }
        return list;
    }

    public static void AlterTable<T>(this IDbConnection db, IOrmLiteDialectProvider provider) where T : new()
    {
        var model = ModelDefinition<T>.Definition;
        var table = new T();
        // just create the table if it doesn't already exist
        if (db.TableExists(model.ModelName) == false)
        {
            db.CreateTable<T>(overwrite: false);
            return;
        }
        // find each of the missing fields
        var columns = GetColumnNames(db, model.ModelName, provider);
        var missing = ModelDefinition<T>.Definition.FieldDefinitions
            .Where(field => columns.Contains(field.FieldName) == false)
            .ToList();
        // add a new column for each missing field
        foreach (var field in missing)
        {
            var alterSql = string.Format("ALTER TABLE {0} ADD {1} {2}",
                model.ModelName,
                field.FieldName,
                db.GetDialectProvider().GetColumnTypeDefinition(field.FieldType)
            );
            Console.WriteLine(alterSql);
            db.ExecuteSql(alterSql);
        }
    }
}
So I took user44's answer and modified the AlterTable method to make it a bit more efficient.
Instead of looping and running one SQL query per field/column, I merge the changes into a single statement with some simple text parsing (MySQL syntax!).
public static void AlterTable<T>(this IDbConnection db, IOrmLiteDialectProvider provider) where T : new()
{
    var model = ModelDefinition<T>.Definition;
    var table = new T();
    var namingStrategy = provider.NamingStrategy;
    // just create the table if it doesn't already exist
    var tableName = namingStrategy.GetTableName(model.ModelName);
    if (db.TableExists(tableName) == false)
    {
        db.CreateTable<T>(overwrite: false);
        return;
    }
    // find each of the missing fields
    var columns = GetColumnNames(db, model.ModelName, provider);
    var missing = ModelDefinition<T>.Definition.FieldDefinitions
        .Where(field => columns.Contains(namingStrategy.GetColumnName(field.FieldName)) == false)
        .ToList();
    string alterSql = "";
    string addSql = "";
    // build one ADD COLUMN clause per missing field
    foreach (var field in missing)
    {
        var alt = db.GetDialectProvider().ToAddColumnStatement(typeof(T), field); // Should be made more efficient, one query for all changes instead of many
        int index = alt.IndexOf("ADD ");
        alterSql = alt.Substring(0, index);
        addSql += alt.Substring(alt.IndexOf("ADD COLUMN")).Replace(";", "") + ", ";
    }
    if (addSql.Length > 2)
        addSql = addSql.Substring(0, addSql.Length - 2);
    string fullSql = alterSql + addSql;
    Console.WriteLine(fullSql);
    db.ExecuteSql(fullSql);
}
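For illustration, with two missing fields the printed fullSql comes out as a single multi-ADD statement along these lines (table, column names, and types here are made up; the exact text depends on your model):

ALTER TABLE `simple_data` ADD COLUMN `age` INT(11), ADD COLUMN `name` VARCHAR(255)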
