I'm trying to import data from a flat file (.dat) into Azure SQL DB for the first time (I'm new to Azure) using the command below, and I'm getting the error shown below.
Command used:
bcp NAV_MO_MB in /Users/n1234/Documents/Tickets/Tickets/NEXT/SGC_test.dat" -f -S
stage-nonprod-5f188055.database.windows.net -d amlstage -U aml_user -P -q -t"|" -c -e /Users/n1234/Documents/Tickets/Tickets/NEXT/err_log.txt -F2.
SQLState = 22005, NativeError = 0
Error = [Microsoft][ODBC Driver 17 for SQL Server]Invalid character value for cast specification
SQLState = 22001, NativeError = 0
Error = [Microsoft][ODBC Driver 17 for SQL Server]String data, right truncation
data sample : SGC | ?| 4762| 7001297| 20/03/10|15:38:00 | 91| 1| 331| 0| 1| | -99.71| 100| 37| 353.71|OLGA |SILVA | |613 CAMINO |WALNUT |CA |0 |9095697291 |CA |US | 53/06/10|CAN4431594 | 1| 0 | 0
The table was created using the query below in SQL DB:
```SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[NAV_MO_MB](
[SOURCE] [varchar](6) NOT NULL,
[VENDOR] [nvarchar](35) NULL,
[store] [INT] NOT NULL,
[visit] [int] NOT NULL,
[tran_date] [date],
[tran_time] [time](7) null,
[reg_nbr] [smallint] null,
[trans_nbr] [int] null,
[op_nbr] [int] null,
[vo_ind] [tinyint] null,
[seq_nbr] [int] null,
[acc_nbr] [nvarchar](40) null,
[amt] [money] null,
[code] [smallint] null,
[tender] [tinyint] null,
[vamt] [money] null,
[first_name] [nvarchar](35) null,
[last_name] [nvarchar](35) null,
[middle_init] [nchar](1) null,
[address_line1] [nchar](30) null,
[city_name] [nvarchar](30) null,
[state_code] [nchar](2) null,
[postal_code] [nchar](10) null,
[phone_nbr] [nchar](18) null,
[photo_id] [nchar](2) null,
[photo_id_cnt] [nchar](2) null,
[birth_date] [date] null,
[id1_nbr] [nvarchar](40) null,
[id1_type_code] [smallint] null,
[national_id] [nvarchar](32) null,
[govt_id_code] [smallint] null
) ON [PRIMARY]```
**Please help me understand the error and the resolution.**
You have some trailing invalid characters.
The last line: [PRIMARY]''''
Remove the '''' at the end.
Sometimes this happens when you copy and paste code.
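In other words, once the stray trailing characters are removed, the end of the script should look like this (everything above it stays the same):

```sql
-- Last lines of the CREATE TABLE script with the stray trailing characters removed
	[govt_id_code] [smallint] null
) ON [PRIMARY]
```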
I have a table with polygons stored as GeoJSON.
CREATE TABLE `location_boundaries` (
`id` INT UNSIGNED NOT NULL,
`name` VARCHAR(255) NULL DEFAULT NULL COLLATE 'utf8mb4_unicode_ci',
`geo_json` JSON NULL DEFAULT NULL,
`geom` GEOMETRY NULL DEFAULT NULL,
PRIMARY KEY (`id`)
)
COLLATE='utf8mb4_unicode_ci'
ENGINE=InnoDB
I have the following GeoJSON MultiPolygon for Asia in the table:
{"type": "MultiPolygon", "coordinates": [[[[-168.25, 77.7], [-180, 77.7], [-180, 58.1], [-168.25, 58.1], [-168.25, 77.7]]], [[[39.6908, 84.52666], [180, 84.38487], [180, 26.27883], [142.084541, 22.062707], [130.147, 3.608598], [141.1373, -1.666358], [141.0438, -9.784795], [130.2645, -10.0399], [118.2545, -13.01165], [102.7975, -8.388008], [89.50451, -11.1417], [61.62511, -9.103512], [51.62645, 12.54865], [44.20775, 11.6786], [39.78016, 16.56855], [31.60401, 31.58641], [33.27769, 34.00057], [34.7674, 34.85347], [35.72423, 36.32686], [36.5597, 37.66439], [44.1053, 37.98438], [43.01638, 41.27191], [41.28304, 41.41274], [36.26378, 44.40772], [36.61315, 45.58723], [37.48493, 46.80924], [38.27497, 47.61317], [39.56164, 48.43141], [39.77264, 50.58891], [39.6908, 84.52666]]]]}
When I run the following
UPDATE location_boundaries SET geom = ST_GeomFromGeoJSON(geo_json) where id = 6255147
I'm getting the following error:
Longitude -180.000000 is out of range in function st_geomfromgeojson. It must be within (-180.000000, 180.000000].")
What's going on? All of this was working fine in MySQL 5.7, yet in MySQL 8 it breaks.
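The error message itself points at the cause: MySQL 8's ST_GeomFromGeoJSON rejects a longitude of exactly -180 (the accepted range is (-180.000000, 180.000000]), a check that MySQL 5.7 did not enforce. One possible workaround, sketched below and not an official fix, is to nudge the offending -180 vertices just inside the range before converting; the 0.000001-degree shift is negligible for a continent-sized boundary.

```sql
-- Hedged workaround sketch (not an official fix): move the -180 vertices just
-- inside the accepted (-180, 180] range before converting. The CAST exposes the
-- JSON as text so REPLACE can rewrite the offending coordinate.
UPDATE location_boundaries
SET geom = ST_GeomFromGeoJSON(REPLACE(CAST(geo_json AS CHAR), '[-180,', '[-179.999999,'))
WHERE id = 6255147;
```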
I'm trying to convert HQL to Spark.
I have the following query (it works in Hue with the Hive editor):
select reflect('java.util.UUID', 'randomUUID') as id,
tt.employee,
cast( from_unixtime(unix_timestamp (date_format(current_date(),'dd/MM/yyyy HH:mm:ss'), 'dd/MM/yyyy HH:mm:ss')) as timestamp) as insert_date,
collect_set(tt.employee_detail) as employee_details,
collect_set( tt.emp_indication ) as employees_indications,
named_struct ('employee_info', collect_set(tt.emp_info),
'employee_mod_info', collect_set(tt.emp_mod_info),
'employee_comments', collect_set(tt.emp_comment) )
as emp_mod_details,
from (
select views_ctr.employee,
if ( views_ctr.employee_details.so is not null, views_ctr.employee_details, null ) employee_detail,
if ( views_ctr.employee_info.so is not null, views_ctr.employee_info, null ) emp_info,
if ( views_ctr.employee_comments.so is not null, views_ctr.employee_comments, null ) emp_comment,
if ( views_ctr.employee_mod_info.so is not null, views_ctr.employee_mod_info, null ) emp_mod_info,
if ( views_ctr.emp_indications.so is not null, views_ctr.emp_indications, null ) employees_indication,
from
( select * from views_sta where emp_partition=0 and employee is not null ) views_ctr
) tt
group by employee
distribute by employee
First, I'm trying to write it in spark.sql as follows:
sparkSession.sql("select reflect('java.util.UUID', 'randomUUID') as id, tt.employee, cast( from_unixtime(unix_timestamp (date_format(current_date(),'dd/MM/yyyy HH:mm:ss'), 'dd/MM/yyyy HH:mm:ss')) as timestamp) as insert_date, collect_set(tt.employee_detail) as employee_details, collect_set( tt.emp_indication ) as employees_indications, named_struct ('employee_info', collect_set(tt.emp_info), 'employee_mod_info', collect_set(tt.emp_mod_info), 'employee_comments', collect_set(tt.emp_comment) ) as emp_mod_details, from ( select views_ctr.employee, if ( views_ctr.employee_details.so is not null, views_ctr.employee_details, null ) employee_detail, if ( views_ctr.employee_info.so is not null, views_ctr.employee_info, null ) emp_info, if ( views_ctr.employee_comments.so is not null, views_ctr.employee_comments, null ) emp_comment, if ( views_ctr.employee_mod_info.so is not null, views_ctr.employee_mod_info, null ) emp_mod_info, if ( views_ctr.emp_indications.so is not null, views_ctr.emp_indications, null ) employees_indication, from ( select * from views_sta where emp_partition=0 and employee is not null ) views_ctr ) tt group by employee distribute by employee")
But I got the following exception:
Exception in thread "main" org.apache.spark.SparkException: Job
aborted due to stage failure: Task not serializable:
java.io.NotSerializableException:
org.apache.spark.unsafe.types.UTF8String$IntWrapper
-object not serializable (class : org.apache.spark.unsafe.types.UTF8String$IntWrapper, value:
org.apache.spark.unsafe.types.UTF8String$IntWrapper#30cfd641)
If I run my query without the collect_set function, it works. Could it fail because of the struct column types in my table?
How can I write my HQL query in Spark / fix my exception?
I'm trying to use Liquibase (3.5.5) with an existing database (on MySQL).
I've used the generateChangeLog command to generate a db.changelog.xml file.
C:/liquibase-3.5.5/liquibase.bat --driver=com.mysql.jdbc.Driver ^
--classpath=C:/Libraries/mysql-connector-java-5.1.37-bin.jar ^
--changeLogFile=db.changelog.xml ^
--url="jdbc:mysql://vbalder/izalerting" ^
--username=* ^
--password=* ^
generateChangeLog
result: Liquibase 'generateChangeLog' Successful
The generated db.changelog.xml file contains changeSets with author BGADEYNE (generated) and ids prefixed with 1533645947580-, e.g. 1533645947580-1.
I added logicalFilePath="db.changelog.xml" to the databaseChangeLog tag.
I've used the changelogSync command to create and fill the DATABASECHANGELOG and DATABASECHANGELOGLOCK tables. They do contain rows for each changeSet.
C:/liquibase-3.5.5/liquibase --driver=com.mysql.jdbc.Driver ^
--classpath=C:/Libraries/mysql-connector-java-5.1.37-bin.jar ^
--changeLogFile=db.changelog.xml ^
--url="jdbc:mysql://vbalder/izalerting" ^
--username=izalerting ^
--password=alfa ^
changelogSync
result: Liquibase 'changelogSync' Successful
Created a CDI component to execute the db.changelog.xml when the application starts.
Added the Maven dependency:
<dependency>
<groupId>org.liquibase</groupId>
<artifactId>liquibase-cdi</artifactId>
<version>3.5.5</version>
</dependency>
Added the CDI component:
@Dependent
public class LiquibaseProducer {

    @Resource(name="java:/izalerting")
    private DataSource dbConnection;

    @Produces @LiquibaseType
    public CDILiquibaseConfig createConfig() {
        CDILiquibaseConfig config = new CDILiquibaseConfig();
        config.setChangeLog("be/uzgent/iz/alerting/liquibase/db.changelog.xml");
        config.setContexts("non-legacy");
        return config;
    }

    @Produces @LiquibaseType
    public DataSource createDataSource() throws SQLException {
        return dbConnection;
    }

    @Produces @LiquibaseType
    public ResourceAccessor create() {
        return new ClassLoaderResourceAccessor(getClass().getClassLoader());
    }
}
When deploying the application to WildFly I can see this:
2018-08-07 15:07:09,234 ERROR [stderr] (MSC service thread 1-4) INFO 8/7/18 3:07 PM: liquibase.integration.cdi.CDILiquibase: Booting Liquibase 3.5.4
2018-08-07 15:07:09,285 ERROR [stderr] (MSC service thread 1-4) INFO 8/7/18 3:07 PM: liquibase: Successfully acquired change log lock
2018-08-07 15:07:09,781 ERROR [stderr] (MSC service thread 1-4) INFO 8/7/18 3:07 PM: liquibase: Reading from PUBLIC.DATABASECHANGELOG
2018-08-07 15:07:09,814 ERROR [stderr] (MSC service thread 1-4) SEVERE 8/7/18 3:07 PM: liquibase: db.changelog.xml: db.changelog.xml::1533645947580-1::BGADEYNE (generated): Change Set db.changelog.xml::1533645947580-1::BGADEYNE (generated) failed. Error: Table "ALERTRESULT" already exists; SQL statement:
2018-08-07 15:07:09,815 ERROR [stderr] (MSC service thread 1-4) CREATE TABLE PUBLIC.alertresult (triggerid VARCHAR(255) NOT NULL, application VARCHAR(40) NOT NULL, resultid INT NOT NULL, subject VARCHAR(255), content CLOB, contenturl CLOB, executetime TIMESTAMP, html BOOLEAN DEFAULT TRUE NOT NULL, alertlevel VARCHAR(20) DEFAULT 'INFO' NOT NULL, closable BOOLEAN DEFAULT TRUE NOT NULL, screenwidth INT, screenheight INT) [42101-173] [Failed SQL: CREATE TABLE PUBLIC.alertresult (triggerid VARCHAR(255) NOT NULL, application VARCHAR(40) NOT NULL, resultid INT NOT NULL, subject VARCHAR(255), content CLOB, contenturl CLOB, executetime TIMESTAMP, html BOOLEAN DEFAULT TRUE NOT NULL, alertlevel VARCHAR(20) DEFAULT 'INFO' NOT NULL, closable BOOLEAN DEFAULT TRUE NOT NULL, screenwidth INT, screenheight INT)]
2018-08-07 15:07:09,816 ERROR [stderr] (MSC service thread 1-4) INFO 8/7/18 3:07 PM: liquibase: db.changelog.xml::1533645947580-1::BGADEYNE (generated): Successfully released change log lock
The DATABASECHANGELOG table contains a row for each changeSet.
+------------------+-----------------------+-------------------+-----------+
| # ID | AUTHOR | FILENAME | EXECTYPE |
+------------------+-----------------------+-------------------+-----------+
| 1533645947580-1 | BGADEYNE (generated) | db.changelog.xml | EXECUTED |
| 1533645947580-2 | BGADEYNE (generated) | db.changelog.xml | EXECUTED |
| 1533645947580-3 | BGADEYNE (generated) | db.changelog.xml | EXECUTED |
| 1533645947580-4 | BGADEYNE (generated) | db.changelog.xml | EXECUTED |
| 1533645947580-5 | BGADEYNE (generated) | db.changelog.xml | EXECUTED |
| 1533645947580-6 | BGADEYNE (generated) | db.changelog.xml | EXECUTED |
| 1533645947580-7 | BGADEYNE (generated) | db.changelog.xml | EXECUTED |
| 1533645947580-8 | BGADEYNE (generated) | db.changelog.xml | EXECUTED |
| 1533645947580-9 | BGADEYNE (generated) | db.changelog.xml | EXECUTED |
| 1533645947580-10 | BGADEYNE (generated) | db.changelog.xml | EXECUTED |
| 1533645947580-11 | BGADEYNE (generated) | db.changelog.xml | EXECUTED |
| 1533645947580-12 | BGADEYNE (generated) | db.changelog.xml | EXECUTED |
| 1533645947580-13 | BGADEYNE (generated) | db.changelog.xml | EXECUTED |
| 1533645947580-14 | BGADEYNE (generated) | db.changelog.xml | EXECUTED |
| 1533645947580-15 | BGADEYNE (generated) | db.changelog.xml | EXECUTED |
| 1533645947580-16 | BGADEYNE (generated) | db.changelog.xml | EXECUTED |
| 1533645947580-17 | BGADEYNE (generated) | db.changelog.xml | EXECUTED |
| 1533645947580-18 | BGADEYNE (generated) | db.changelog.xml | EXECUTED |
| 1533645947580-19 | BGADEYNE (generated) | db.changelog.xml | EXECUTED |
| 1533645947580-20 | BGADEYNE (generated) | db.changelog.xml | EXECUTED |
+------------------+-----------------------+-------------------+-----------+
Does anyone know what I'm doing wrong here?
Instead of
@Resource(name="java:/izalerting")
I needed to use
@Resource(lookup="java:/izalerting")
on WildFly 9.
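For context, here is a sketch of the corrected injection point in the producer (everything else in the class stays as posted). The practical difference is that name is resolved relative to the component's java:comp/env environment, which has no such entry here, while lookup performs a direct JNDI lookup of the datasource binding.

```java
import javax.annotation.Resource;
import javax.enterprise.context.Dependent;
import javax.sql.DataSource;

@Dependent
public class LiquibaseProducer {

    // lookup= performs a direct JNDI lookup of the WildFly datasource binding;
    // name= would be resolved against java:comp/env, where nothing is bound.
    @Resource(lookup = "java:/izalerting")
    private DataSource dbConnection;

    // ... the three @Produces @LiquibaseType methods remain exactly as in the question
}
```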
I'm a beginner with ELK and I'm trying to load data from MySQL into Elasticsearch (as a next step I want to query it via the Java REST client), so I'm using logstash-6.2.4 and elasticsearch-6.2.4, and I followed an example here.
When I run bin/logstash -f /path/to/my.conf, I get this error:
[2018-04-22T10:15:08,713][ERROR][logstash.pipeline ] Error registering plugin {:pipeline_id=>"main", :plugin=>"<LogStash::Inputs::Jdbc jdbc_connection_string=>\"jdbc:mysql://localhost:3306/testdb\", jdbc_user=>\"root\", jdbc_password=><password>, jdbc_driver_library=>\"/usr/local/logstash-6.2.4/config/mysql-connector-java-6.0.6.jar\", jdbc_driver_class=>\"com.mysql.jdbc.Driver\", statement=>\"SELECT * FROM testtable\", id=>\"7ff303d15d8fc2537248f48fae5f3925bca7649bbafc30d2cd52394ea9961797\", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>\"plain_f8d44c47-8421-4bb9-a6b9-0b34e0aceb13\", enable_metric=>true, charset=>\"UTF-8\">, jdbc_paging_enabled=>false, jdbc_page_size=>100000, jdbc_validate_connection=>false, jdbc_validation_timeout=>3600, jdbc_pool_timeout=>5, sql_log_level=>\"info\", connection_retry_attempts=>1, connection_retry_attempts_wait_time=>0.5, last_run_metadata_path=>\"/Users/chu/.logstash_jdbc_last_run\", use_column_value=>false, tracking_column_type=>\"numeric\", clean_run=>false, record_last_run=>true, lowercase_column_names=>true>", :error=>"can't dup Fixnum", :thread=>"#<Thread:0x3fae16e2 run>"}
[2018-04-22T10:15:09,256][ERROR][logstash.pipeline ] Pipeline aborted due to error {:pipeline_id=>"main", :exception=>#<TypeError: can't dup Fixnum>, :backtrace=>["org/jruby/RubyKernel.java:1882:in `dup'", "uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/date/format.rb:838:in `_parse'", "uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/date.rb:1830:in `parse'", "/usr/local/logstash-6.2.4/vendor/bundle/jruby/2.3.0/gems/logstash-input-jdbc-4.3.9/lib/logstash/plugin_mixins/value_tracking.rb:87:in `set_value'", "/usr/local/logstash-6.2.4/vendor/bundle/jruby/2.3.0/gems/logstash-input-jdbc-4.3.9/lib/logstash/plugin_mixins/value_tracking.rb:36:in `initialize'", "/usr/local/logstash-6.2.4/vendor/bundle/jruby/2.3.0/gems/logstash-input-jdbc-4.3.9/lib/logstash/plugin_mixins/value_tracking.rb:29:in `build_last_value_tracker'", "/usr/local/logstash-6.2.4/vendor/bundle/jruby/2.3.0/gems/logstash-input-jdbc-4.3.9/lib/logstash/inputs/jdbc.rb:216:in `register'", "/usr/local/logstash-6.2.4/logstash-core/lib/logstash/pipeline.rb:342:in `register_plugin'", "/usr/local/logstash-6.2.4/logstash-core/lib/logstash/pipeline.rb:353:in `block in register_plugins'", "org/jruby/RubyArray.java:1734:in `each'", "/usr/local/logstash-6.2.4/logstash-core/lib/logstash/pipeline.rb:353:in `register_plugins'", "/usr/local/logstash-6.2.4/logstash-core/lib/logstash/pipeline.rb:500:in `start_inputs'", "/usr/local/logstash-6.2.4/logstash-core/lib/logstash/pipeline.rb:394:in `start_workers'", "/usr/local/logstash-6.2.4/logstash-core/lib/logstash/pipeline.rb:290:in `run'", "/usr/local/logstash-6.2.4/logstash-core/lib/logstash/pipeline.rb:250:in `block in start'"], :thread=>"#<Thread:0x3fae16e2 run>"}
[2018-04-22T10:15:09,314][ERROR][logstash.agent ] Failed to execute action {:id=>:main, :action_type=>LogStash::ConvergeResult::FailedAction, :message=>"Could not execute action: LogStash::PipelineAction::Create/pipeline_id:main, action_result: false", :backtrace=>nil}
Here is the testdbinit.conf (UTF-8 encoding):
input {
  jdbc {
    jdbc_connection_string => "jdbc:mysql://localhost:3306/testdb"
    jdbc_user => "root"
    jdbc_password => "mypassword"
    jdbc_driver_library => "/usr/local/logstash-6.2.4/config/mysql-connector-java-6.0.6.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    statement => "SELECT * FROM testtable"
  }
}
output {
  stdout { codec => json_lines }
  elasticsearch {
    "hosts" => "localhost:9200"
    "index" => "testdemo"
    document_id => "%{personid}"
    "document_type" => "person"
  }
}
Here is the table (database: testdb, table: testtable):
mysql> select * from testtable;
+----------+----------+-----------+-----------+-------+
| PersonID | LastName | FirstName | City | flag |
+----------+----------+-----------+-----------+-------+
| 1003 | McWell | Sharon | Cape Town | exist |
| 1002 | Baron | Richard | Cape Town | exist |
| 1001 | Kallis | Jaques | Cape Town | exist |
| 1004 | Zhaosi | Nicholas | Iron Hill | exist |
+----------+----------+-----------+-----------+-------+
I tried to Google the issue but still have no clue. I think maybe some type conversion error (TypeError: can't dup Fixnum) is causing this, but what exactly is this "dup Fixnum", and how do I solve it?
One more thing that confuses me: I ran the same code yesterday and successfully loaded the data into Elasticsearch, and I could also search it via localhost:9200, but the next morning when I tried the same thing (on the same computer), I hit these issues. I have been wrestling with this for a whole day; please help me with some hints.
I also asked the same question on the Logstash community forum, and with their help I think I found the solution to my issue:
The exception trace exception=>#<TypeError: can't dup Fixnum> means there is a type conversion error. sql_last_value is initialized as 0 for numeric values or 1970-01-01 for datetime values. I think the sql_last_value stored in my last_run_metadata_path was neither a numeric nor a datetime value, so I added clean_run => true to the conf file and ran Logstash again, and no more errors occurred. With clean_run => true, the wrong sql_last_value is reset to 0 or 1970-01-01, the thread goes on, and the data gets indexed successfully.
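For reference, this is what the jdbc input from the conf above looks like with that setting added:

```
input {
  jdbc {
    jdbc_connection_string => "jdbc:mysql://localhost:3306/testdb"
    jdbc_user => "root"
    jdbc_password => "mypassword"
    jdbc_driver_library => "/usr/local/logstash-6.2.4/config/mysql-connector-java-6.0.6.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    statement => "SELECT * FROM testtable"
    # Discard the previously persisted sql_last_value in last_run_metadata_path
    # and start from scratch, resetting it to 0 / 1970-01-01.
    clean_run => true
  }
}
```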
I'm generating a dynamic update query for Postgres based on a list of provided objects. This is what my query looks like:
update loan_item_assignment as t set id = c.id, dateselectionid = c.dateselectionid, loanitemid = c.loanitemid, active = c.active, type = c.type from (values ( $1, $2, $3, $4, $5 ), ( $6, $7, $8, $9, $10 ), ( $11, $12, $13, $14, $15 ), ( $16, $17, $18, $19, $20 ), ( $21, $22, $23, $24, $25 ), ( $26, $27, $28, $29, $30 ), ( $31, $32, $33, $34, $35 ), ( $36, $37, $38, $39, $40 ) ) as c( id, dateselectionid, loanitemid, active, type ) where c.id = t.id returning *
And here's the values list I'm giving it:
[ 7,
35,
3,
true,
'normal',
8,
35,
4,
true,
'normal',
1,
35,
6,
true,
'normal',
2,
35,
7,
true,
'normal',
3,
35,
8,
true,
'normal',
5,
35,
10,
true,
'normal',
4,
35,
11,
true,
'normal',
6,
35,
12,
true,
'normal' ]
As far as I can tell, the values match up correctly. This is the error I'm seeing:
{ [error: operator does not exist: text = integer]
name: 'error',
length: 195,
severity: 'ERROR',
code: '42883',
detail: undefined,
hint: 'No operator matches the given name and argument type(s). You might need to add explicit type casts.',
position: '448',
internalPosition: undefined,
internalQuery: undefined,
where: undefined,
schema: undefined,
table: undefined,
column: undefined,
dataType: undefined,
constraint: undefined,
file: 'parse_oper.c',
line: '726',
routine: 'op_error' }
And this is the code that's ultimately running the query:
var performQuery = function(text, values, cb) {
pg.connect(connectionString, function(err, client, done) {
client.query(text, values, function(err, result) {
done();
if (!result) {
console.log(err);
cb([], err);
} else {
cb(result.rows, err);
}
})
});
}
And here is the table definition:
Table "public.loan_item_assignment"
Column | Type | Modifiers | Storage | Stats target | Description
-----------------+---------+-------------------------------------------------------------------+----------+--------------+-------------
id | integer | not null default nextval('loan_item_assignment_id_seq'::regclass) | plain | |
dateselectionid | integer | | plain | |
loanitemid | integer | | plain | |
active | boolean | | plain | |
type | text | | extended | |
Indexes:
"loan_item_assignment_pkey" PRIMARY KEY, btree (id)
Foreign-key constraints:
"loan_item_assignment_dateselectionid_fkey" FOREIGN KEY (dateselectionid) REFERENCES date_selection(id)
"loan_item_assignment_loanitemid_fkey" FOREIGN KEY (loanitemid) REFERENCES loan_item(id)
Vitaly-t's comment on my answer is the solution: use the pg-promise library to generate the query, specifically the helpers.update method for generating multi-row update queries, as shown in PostgreSQL multi-row updates in Node.js.
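A rough sketch of that approach is below, based on the documented pg-promise helpers pattern; the connection string and variable names are illustrative, not taken from the original code. (The underlying issue is what the hint in the error suggests: the parameters in the VALUES list arrive untyped and default to text, so c.id = t.id ends up comparing text with integer. pg-promise formats properly typed values into the query; adding explicit casts such as $1::int to the hand-built query would be the manual alternative.)

```js
// Sketch based on the pg-promise documentation for multi-row updates (not the
// poster's exact code). The connection string below is a placeholder.
const pgp = require('pg-promise')();
const db = pgp('postgres://user:pass@localhost:5432/mydb');

// '?id' marks the column as condition-only: it appears in the WHERE clause, not in SET.
const cs = new pgp.helpers.ColumnSet(
    ['?id', 'dateselectionid', 'loanitemid', 'active', 'type'],
    { table: 'loan_item_assignment' }
);

const rows = [
    { id: 7, dateselectionid: 35, loanitemid: 3, active: true, type: 'normal' },
    { id: 8, dateselectionid: 35, loanitemid: 4, active: true, type: 'normal' }
    // ...remaining rows from the list above
];

// helpers.update builds: UPDATE "loan_item_assignment" AS t SET ... FROM (VALUES ...) AS v(...)
const update = pgp.helpers.update(rows, cs) + ' WHERE v.id = t.id RETURNING *';

db.any(update)
    .then(updatedRows => console.log(updatedRows))
    .catch(err => console.error(err));
```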