I'm running Spark 2.3.1 on Kubernetes 1.11.0.
I'm getting the error below when the Spark driver pod tries to bring up executor pods: it truncates the first 7 letters of the pod name and throws an error that the name starts with "-".
io.fabric8.kubernetes.client.KubernetesClientException: Failure
executing: POST at:
https://kubernetes.default.svc/api/v1/namespaces/mynamespace/pods.
Message: Pod
"mybrand-sb1-ca-privacy-abc469-38957af1c3393cae8941b0613376040c-exec-29"
is invalid: spec.hostname: Invalid value:
"-sb1-ca-privacy-abc469-38957af1c3393cae8941b0613376040c-exec-29": a
DNS-1123 label must consist of lower case alphanumeric characters or
'-', and must start and end with an alphanumeric character (e.g.
'my-name', or '123-abc', regex used for validation is
'[a-z0-9]([-a-z0-9]*[a-z0-9])?').
The maximum length of a hostname is 63 characters; that's why the first 7 characters of the pod name are getting truncated. Try to shorten the app name, and this will solve the issue.
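As a minimal sketch of that (the name "ca-privacy" here is only an illustrative short name; you could equally pass a short name with spark-submit's --name option), keeping spark.app.name short keeps the executor pod names, which in the error above are built from the app name plus a generated suffix, under the limit:

from pyspark.sql import SparkSession

# Keep the application name short: the executor pod names are derived from it
# plus a generated suffix, and the resulting hostname must fit in 63 characters.
# "ca-privacy" is only an illustrative short name.
spark = (
    SparkSession.builder
    .appName("ca-privacy")
    .getOrCreate()
)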
I am doing a full extract from a table ABC. In copy activity, I have given a query as
select * from ABC
However, I am facing an issue for a few rows (they contain special characters: Japanese and Korean).
Error code: 2200
Failure type: User configuration issue
Details Failure happened on 'Source' side. ErrorCode=DB2DriverRunFailed,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=Error thrown from driver. Sql code: '-343',Source=Microsoft.DataTransfer.ClientLibrary.Db2Connector,''Type=Microsoft.HostIntegration.DrdaClient.DrdaException,Message=HISMPCB0001 In BasePrimitiveConverter an exception has occurred. Exception description: Output buffer is smaller than required size 12 SQLSTATE=HY000 SQLCODE=-343,Source=Microsoft.HostIntegration.Drda.Requester,'
The character which is causing the issue is '轎ᆃ '
The error description states that a BasePrimitiveConverter exception has occurred, and that the output buffer is smaller than the required size. So, please try converting the column to an acceptable type such as GRAPHIC in DB2. Refer to the following link to understand more.
https://bytes.com/topic/db2/answers/488983-storing-some-japanese-data
Referring to these links, I understand that this error might be due to the datatype of the source column or the encoding used. Try working with the different encoding options available in your source dataset. Here is a similar issue with a different source, where special characters could not be retrieved either.
https://learn.microsoft.com/en-us/answers/questions/467456/failure-happened-source-side-in-copy-activity-for.html
I am reading a CSV file and writing it in Parquet format to ADLS Gen2 using an ADF copy activity.
I am facing the below error:
Failure type: User configuration issue
Details:
ErrorCode=AdlsGen2OperationFailed,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=ADLS Gen2 operation failed for: Operation returned an invalid status code 'BadRequest'. Account: 'adlsedmadifpoc'. FileSystem: 'raw_area'. ErrorCode: 'InvalidResourceName'. Message: 'The specifed resource name contains invalid characters.'. RequestId: '70d7xbfd-6xxf-00ec-2c74-9axxxx000000'. TimeStamp: 'Thu, 26 Aug 2021 12:19:56 GMT'..,Source=Microsoft.DataTransfer.ClientLibrary,''Type=Microsoft.Azure.Storage.Data.Models.ErrorSchemaException,Message=Operation returned an invalid status code 'BadRequest',Source=Microsoft.DataTransfer.ClientLibrary,'
Any help appreciated. Thanks.
This is caused when the container name contains characters that do not follow the Azure Storage container naming convention; in your case, the underscore in the file system name 'raw_area' is not an allowed character.
The best way to avoid this error is to copy the container name as-is from the Azure portal.
A container name must contain valid characters, conforming to the following naming rules (a small validation sketch follows the list):
All letters must be lowercase.
Names must be from 3 through 63 characters long.
Names must start and end with a letter or number.
Names can contain only letters, numbers, and the dash (-) character.
Every dash (-) character must be immediately preceded and followed by a letter or number; consecutive dashes are not permitted in container names.
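As a rough illustration of these rules (just a sketch, not an official Azure check), a small Python helper that applies them shows why 'raw_area' is rejected while 'raw-area' would be fine:

import re

# Rough approximation of the container naming rules listed above:
# 3-63 characters, only lowercase letters, digits and dashes,
# starting and ending with a letter or digit, no consecutive dashes.
def is_valid_container_name(name: str) -> bool:
    if not 3 <= len(name) <= 63:
        return False
    if "--" in name:
        return False
    return re.fullmatch(r"[a-z0-9](?:[a-z0-9-]*[a-z0-9])?", name) is not None

print(is_valid_container_name("raw-area"))  # True
print(is_valid_container_name("raw_area"))  # False - the underscore is not allowed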
I'm trying to establish a connection in AWS Glue, using a pyspark script.
The JDBC connection is pointing to a Microsoft SQL Server in Azure Cloud.
When I try to enter the connection string, it works until it gets to the table that it should read. That's mainly because of the whitespace inside the table name. Do you have any hint on how to write the syntax here?
source_df = (
    sparksession.read.format("jdbc")
    .option("url", "jdbc:sqlserver://00.000.00.00:1433;databaseName=Sample")
    .option("dbtable", "dbo.122 SampleCompany DE$Contract Header")
    .option("user", "sampleuser")
    .option("password", "sampL3p4ssw0rd")
    .load()
)
When you execute this, it always throws the error:
py4j.protocol.Py4JJavaError: An error occurred while calling o69.load. : com.microsoft.sqlserver.jdbc.SQLServerException: Incorrect syntax near '.122'
Do you have any idea how to solve this?
Given the presence of spaces (and probably the dollar sign, and the fact that the identifier starts with a number), you need to quote the object name. Quoting object names in SQL Server is done by enclosing them in brackets (or, though this may depend on the session configuration, double quotes).
Keep in mind that dbo is the schema, while 122 SampleCompany DE$Contract Header is the table name. Schema and table name need to be quoted separately, not as a unit.
So, try to pass "dbo.[122 SampleCompany DE$Contract Header]"
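For instance, a sketch of the same read with the table name bracketed (the connection details are just the placeholders from the question):

source_df = (
    sparksession.read.format("jdbc")
    .option("url", "jdbc:sqlserver://00.000.00.00:1433;databaseName=Sample")
    # Brackets quote the identifier so SQL Server accepts the spaces,
    # the leading digits and the dollar sign in the table name.
    .option("dbtable", "dbo.[122 SampleCompany DE$Contract Header]")
    .option("user", "sampleuser")
    .option("password", "sampL3p4ssw0rd")
    .load()
)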
I am bulk loading data into Cassandra using SSTables. I am following https://github.com/SPBTV/csv-to-sstable.
I created the SSTables by running:
$ java -jar csv-to-sstable.jar quote /home/arque/table_big.cql /home/arque/Documents/data.csv /home/arque
I am getting an error when I try to run the following command:
$ sstableloader -d 192.168.0.7 /home/arque/quote/table_big
Error:
Error: Established connection to initial hosts
Opening sstables and calculating sections to stream
Failed to list files in /home/arque/quote/table_big
java.lang.AssertionError
java.lang.RuntimeException: Failed to list files in /home/arque/quote/table_big
at org.apache.cassandra.db.lifecycle.LogAwareFileLister.list(LogAwareFileLister.java:77)
The error is in the csv-to-sstable tool. Look at this file: https://github.com/SPBTV/csv-to-sstable/blob/master/src/main/java/com/spbtv/cassandra/bulkload/Bulkload.java
You say you only have an issue when the primary key is a composite key. That's because the tool expects the primary key to be defined on the same line as the column.
Line 66:
// Primary key defined on the same line as the corresponding column
Pattern pattern = Pattern.compile(".*?(\\w+)\\s+\\w+\\s+PRIMARY KEY.*");
If you change this to suit your needs, it should work.
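As a rough illustration (shown in Python rather than the tool's Java, with a made-up composite key of id and ts), the pattern above only matches a key declared inline on a column line; a composite key declared on its own line needs a second pattern along these lines:

import re

# The tool's pattern: only matches "col type PRIMARY KEY" on a column line.
inline_key = re.compile(r".*?(\w+)\s+\w+\s+PRIMARY KEY.*")
# A possible extra pattern for a composite key declared on its own line.
composite_key = re.compile(r"PRIMARY\s+KEY\s*\(([^)]*)\)")

print(inline_key.match("id text PRIMARY KEY,"))     # matches, captures "id"
print(inline_key.match("PRIMARY KEY (id, ts)"))     # None - this form is missed
m = composite_key.search("PRIMARY KEY (id, ts)")
print([c.strip() for c in m.group(1).split(",")])   # ['id', 'ts']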
I am trying to load a TSV file into HBase running in HDInsight in the Microsoft Azure cloud, using the recommended approach of connecting through Remote Desktop and working on the command line. I am loading the t1.tsv file (with two tab-separated columns) from HDFS into the HBase table t1:
C:\apps\dist\hbase-0.98.0.2.1.5.0-2057-hadoop2\bin>hbase org.apache.hadoop.hbase.mapreduce.ImportTsv -Dimporttsv.columns=HBASE_ROW_KEY,num t1 t1.tsv
and get:
ERROR: One or more columns in addition to the row key and timestamp(optional) are required
Usage: importtsv -Dimporttsv.columns=a,b,c
Replacing the order of the specified columns with num,HBASE_ROW_KEY:
C:\apps\dist\hbase-0.98.0.2.1.5.0-2057-hadoop2\bin>hbase org.apache.hadoop.hbase.mapreduce.ImportTsv -Dimporttsv.columns=num,HBASE_ROW_KEY t1 t1.tsv
I get:
ERROR: Must specify exactly one column as HBASE_ROW_KEY
Usage: importtsv -Dimporttsv.columns=a,b,c
This tells me that either the comma separator in the column list is not recognized or the column name is incorrect. I also tried to use the column with a qualifier, as num:v, and as 'num'; nothing helps.
Any ideas what could be wrong here? Thanks.
>hbase org.apache.hadoop.hbase.mapreduce.ImportTsv -Dimporttsv.columns="HBASE_ROW_KEY,d:c1,d:c2" testtable /example/inputfile.txt
This works for me. I think there are some differences between the Linux and Windows terminals: on Windows you need to add quotation marks to make clear that this is a single value string, otherwise it might not be recognized correctly.