I am trying to run code that reads from BigQuery and does some transformation using Spark. I am getting the error below:
Exception in thread "main" java.io.IOException: Error accessing: bucket: test-n, object: spark/output/wordcount
at com.google.cloud.hadoop.gcsio.GoogleCloudStorageImpl.wrapException(GoogleCloudStorageImpl.java:1707)
at com.google.cloud.hadoop.gcsio.GoogleCloudStorageImpl.getObject(GoogleCloudStorageImpl.java:1733)
at com.google.cloud.hadoop.gcsio.GoogleCloudStorageImpl.getItemInfo(GoogleCloudStorageImpl.java:1618)
at com.google.cloud.hadoop.gcsio.ForwardingGoogleCloudStorage.getItemInfo(ForwardingGoogleCloudStorage.java:214)
at com.google.cloud.hadoop.gcsio.GoogleCloudStorageFileSystem.getFileInfo(GoogleCloudStorageFileSystem.java:1094)
at com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystemBase.getFileStatus(GoogleHadoopFileSystemBase.java:1422)
at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1426)
at com.spark.bigquery.App.main(App.java:67)
Caused by: com.google.api.client.auth.oauth2.TokenResponseException: 400 Bad Request
{
"error" : "invalid_grant",
"error_description" : "Robot is missing a project number."
}
I have configured the service account and email in the code, and placed the P12 file as well.
conf.set("fs.gs.project.id", projectId);
// Use service account for authentication. The service account key file is located at the path
// specified by the configuration property google.cloud.auth.service.account.json.keyfile.
conf.set(EntriesCredentialConfiguration.BASE_KEY_PREFIX +
EntriesCredentialConfiguration.ENABLE_SERVICE_ACCOUNTS_SUFFIX,
"true");
conf.set(EntriesCredentialConfiguration.BASE_KEY_PREFIX +
EntriesCredentialConfiguration.SERVICE_ACCOUNT_KEYFILE_SUFFIX,
"aesthetic-genre-216711-3a23f8112565.p12");
conf.set(EntriesCredentialConfiguration.BASE_KEY_PREFIX +
EntriesCredentialConfiguration.SERVICE_ACCOUNT_EMAIL_SUFFIX,
"reddevil.c06#gmail.com");
I have the following code to send an email through the EWS API:
ExchangeService service = new ExchangeService();
service.setUrl(new URI(myExchangeUrl)); // myExchangeUrl is a placeholder for the Exchange endpoint
ExchangeCredentials credentials = new WebCredentials(userName, password); // placeholders
service.setCredentials(credentials);
EmailMessage message = new EmailMessage(service);
message.setSubject("EWS Test Mail");
message.setBody(MessageBody.getMessageBodyFromText("This is a test mail from EWS"));
message.getToRecipients().add("Test@gmail.com");
message.send();
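Before digging further, one debugging sketch that may help is to turn on the client's built-in tracing before calling send(), so the raw request and response, including the exact header names the server returns, get logged:

// Debugging sketch: dump raw EWS requests/responses (including the
// Content-Type header casing) to the trace listener.
service.setTraceEnabled(true);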
With the above code, I am facing a null pointer exception when message.send() is called.
Log:
Exception in thread "main" microsoft.exchange.webservices.data.core.exception.service.remote.ServiceRequestException: The request failed. null
at microsoft.exchange.webservices.data.core.request.SimpleServiceRequestBase.internalExecute(SimpleServiceRequestBase.java:74)
at microsoft.exchange.webservices.data.core.request.MultiResponseServiceRequest.execute(MultiResponseServiceRequest.java:158)
at microsoft.exchange.webservices.data.core.ExchangeService.internalCreateItems(ExchangeService.java:598)
at microsoft.exchange.webservices.data.core.ExchangeService.createItem(ExchangeService.java:657)
at microsoft.exchange.webservices.data.core.service.item.Item.internalCreate(Item.java:245)
at microsoft.exchange.webservices.data.core.service.item.EmailMessage.internalSend(EmailMessage.java:147)
at microsoft.exchange.webservices.data.core.service.item.EmailMessage.send(EmailMessage.java:258)
at EWS.main(EWS.java:28)
Caused by: java.lang.NullPointerException
at microsoft.exchange.webservices.data.core.request.ServiceRequestBase.readResponse(ServiceRequestBase.java:369)
at microsoft.exchange.webservices.data.core.request.SimpleServiceRequestBase.internalExecute(SimpleServiceRequestBase.java:63)
... 7 more
When I debugged further, I found a mismatch in the header name: the request sets the header as contentType, but the code reading the response looks for Content-type and therefore gets null. The name being set and the name being read do not match.
Is anyone else facing this issue, and is there any workaround?
I am trying to deploy a Keras model but I am getting an error. My code is:
service = Webservice.deploy_from_model(workspace=ws,
                                       name="test-classifier",
                                       deployment_config=aciconfig,
                                       models=[model],
                                       image_config=image_config)
service.wait_for_deployment(show_output=True)
Error:
{
"code": "GatewayTimeout",
"statusCode": 504,
"message": "ACI Service request failed. Reason: The gateway did not receive a response from 'Microsoft.ContainerInstance' within the specified time period.."
}
Why am I getting this error?
The default timeout is 1 minute. You can increase the timeout, or try to speed up the service by modifying score.py to remove unnecessary calls. If these actions do not correct the problem, use the information in this article to debug the score.py file; the code may be in a non-responsive state or an infinite loop.
I am trying to connect to Spark on Databricks from Perl code over Simba JDBC (the Databricks-recommended way). For reference, this is the JDBC driver: https://databricks-bi-artifacts.s3.us-east-2.amazonaws.com/simbaspark-drivers/jdbc/2.6.17/SimbaSparkJDBC42-2.6.17.1021.zip
So far I have managed to set up Perl and all the Perl-related module configuration; I strongly believe the issue below has nothing to do with Perl.
I have the following code trying to connect to Spark on Databricks.
Note: 'replaceme' in the password is the Databricks personal access token.
#!/usr/bin/perl
use strict;
use DBI;
my $user = "token";
my $pass = "replaceme";
my $host = "DBhost.azuredatabricks.net";
my $port = 9001;
my $url = "jdbc:spark://DBhost.azuredatabricks.net:443/default;transportMode=http;ssl=1;httpPath=sql/protocolv1/o/853imaskedthis14/1005-imaskedthis-okra138;AuthMech=3;UID=token;PWD=replaceme"; # Get this URL from JDBC data src
my %properties = ('user' => $user,
'password' => $pass,
'host.name' => $host,
'host.port' => $port);
my $dsn = "dbi:JDBC:hostname=localhost;port=$port;url=$url";
my $dbh = DBI->connect($dsn, undef, undef,
{ PrintError => 0, RaiseError => 1, jdbc_properties => \%properties })
or die "Failed to connect: ($DBI::err) $DBI::errstr\n";
my $sql = qq/select * from table/;
my $sth = $dbh->prepare($sql);
$sth->execute();
my @row;
while (@row = $sth->fetchrow_array) {
    print join(", ", @row), "\n";
}
I end up with the following authentication issue and error from the Simba driver connecting to the Spark Thrift server:
failed: [Simba][SparkJDBCDriver](500164) Error initialized or created transport for authentication: Invalid status 21
Also, could not send response: com.simba.spark.jdbc42.internal.apache.thrift.transport.TTransportException: java.net.SocketException: Broken pipe (Write failed). at ./perldatabricksconntest.pl line 18.
The logger recorded the Java stack trace below:
[Thread-1] 05:40:16,718 WARN - Error
java.sql.SQLException: [Simba][SparkJDBCDriver](500164) Error initialized or created transport for authentication: Invalid status 21
Also, could not send response: com.simba.spark.jdbc42.internal.apache.thrift.transport.TTransportException: java.net.SocketException: Broken pipe (Write failed).
at com.simba.spark.hivecommon.api.HiveServer2ClientFactory.createTransport(Unknown Source)
at com.simba.spark.hivecommon.api.ServiceDiscoveryFactory.createClient(Unknown Source)
at com.simba.spark.hivecommon.core.HiveJDBCCommonConnection.establishConnection(Unknown Source)
at com.simba.spark.spark.core.SparkJDBCConnection.establishConnection(Unknown Source)
at com.simba.spark.jdbc.core.LoginTimeoutConnection.connect(Unknown Source)
at com.simba.spark.jdbc.common.BaseConnectionFactory.doConnect(Unknown Source)
at com.simba.spark.jdbc.common.AbstractDriver.connect(Unknown Source)
at java.sql/java.sql.DriverManager.getConnection(DriverManager.java:677)
at java.sql/java.sql.DriverManager.getConnection(DriverManager.java:189)
at com.vizdom.dbd.jdbc.Connection.handleRequest(Connection.java:417)
at com.vizdom.dbd.jdbc.Connection.run(Connection.java:211)
Caused by: com.simba.spark.support.exceptions.GeneralException: [Simba][SparkJDBCDriver](500164) Error initialized or created transport for authentication: Invalid status 21
Also, could not send response: com.simba.spark.jdbc42.internal.apache.thrift.transport.TTransportException: java.net.SocketException: Broken pipe (Write failed).
... 11 more
Also, as per the Simba JDBC connector documentation, I have tried No Authentication mode, Username, and Username/Password; none of them work.
So I wonder where the authentication issue in the transport layer is. Note that I have already created a token and put it in the password section when initiating the jdbc:spark call.
You need to generate a personal access token and put it in place of the replaceme string in the JDBC URL. After that you don't need to specify the user & password fields in %properties.
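Since the suspicion is that Perl is not at fault, a quick way to isolate the transport and authentication layer is to exercise the same URL directly from Java with plain JDBC. A minimal sketch, assuming the Simba driver jar is on the classpath (it self-registers with DriverManager) and reusing the masked values from the question; <personal-access-token> is a placeholder:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class DatabricksJdbcSmokeTest {
    public static void main(String[] args) throws Exception {
        // Same URL as the Perl script: token auth (AuthMech=3), HTTP transport
        // over SSL on port 443, with the personal access token as PWD.
        String url = "jdbc:spark://DBhost.azuredatabricks.net:443/default;"
                + "transportMode=http;ssl=1;"
                + "httpPath=sql/protocolv1/o/853imaskedthis14/1005-imaskedthis-okra138;"
                + "AuthMech=3;UID=token;PWD=<personal-access-token>";
        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT 1")) {
            while (rs.next()) {
                System.out.println(rs.getInt(1)); // connection and auth both work
            }
        }
    }
}

If this fails with the same "Invalid status 21", the problem is in the URL or token rather than in the Perl/DBD::JDBC proxy layer.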
I am using the AWS Cognito Service Provider to create and list User Pool Clients. I have a locally installed DynamoDB to store the additional data, but I am getting an "InvalidAction" error in the callback. I looked a lot for the error context but couldn't find one.
const cognitoidentityserviceprovider = new AWS.CognitoIdentityServiceProvider();
cognitoidentityserviceprovider.listUserPoolClients(params, function(clientListError, clientListData) {
    console.log(clientListError);
    if (clientListError) {
        return res.json({
            status: false,
            message: 'Error Fetching Client Apps',
            data: clientListError
        });
    }
    return res.json({
        status: true,
        message: 'List fetch success',
        data: clientListData
    });
});
This is for fetching the user pool client apps. I am creating the user pool client the same way, but I get the same "InvalidAction" error.
The error thrown was from DynamoDB, because I was connected to my local DB, which had no tables or data, and I was also not passing the credentials generated by the token manager. I removed the local DB URL from the config, passed the credentials from the token manager, and got the desired result.
I am facing the same issue but am unable to solve it. Can you guide me on this part:
"I removed the local DB URL from the config and then passed the credentials from the token manager and I got the desired result."
I am configuring the DB this way:
static DB_CONFIG = AppConfig.ENVIRONMENT === 'localhost' ? { endpoint: 'http://localhost:8000', region: 'us-east-1' } : { region: 'us-east-1' };
which in my case is localhost, so the first object gets passed in.
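For comparison, here is a sketch of the same local-versus-remote endpoint pattern in the AWS SDK for Java (the ENVIRONMENT variable check is a hypothetical stand-in for AppConfig.ENVIRONMENT):

import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;

public class DynamoClientFactory {
    // Point the client at local DynamoDB only in a local environment;
    // otherwise use the regular regional endpoint (and real credentials).
    public static AmazonDynamoDB build() {
        boolean isLocal = "localhost".equals(System.getenv("ENVIRONMENT")); // hypothetical check
        if (isLocal) {
            return AmazonDynamoDBClientBuilder.standard()
                    .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration(
                            "http://localhost:8000", "us-east-1"))
                    .build();
        }
        return AmazonDynamoDBClientBuilder.standard().withRegion("us-east-1").build();
    }
}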
I'm developing an application that receives several message-queue messages, each containing 20-30 Gmail addresses. My app owns a single Google Drive folder that must be shared with all of the users. My app uses the Google Drive Java client, and for the HTTP calls I'm using Google batching.
The problem is that I'm getting an internal error 500 when I try to share the folder with 30 users simultaneously using multithreading. When I synchronise the threads everything is fine, but the performance is terrible: about 0.5 seconds per user!
Can anyone explain why I'm receiving this error?
500 OK
{
"code" : 500,
"errors" : [ {
"domain" : "global",
"message" : "Internal Error. User message: \"An internal error has occurred which prevented the sharing of these item(s): fileame\"",
"reason" : "internalError"
} ],
"message" : "Internal Error. User message: \"An internal error has occurred which prevented the sharing of these item(s): filename\""
}
Here is the thread code:
try {
    // batch start
    BatchRequest batch = service.batch();
    ArrayList<String> users = readUsers(this.file);
    Permission[] permissions = new Permission[users.size()];
    for (int i = 0; i < users.size(); i++) {
        permissions[i] = new Permission();
        permissions[i].setValue(users.get(i) + "@gmail.com");
        permissions[i].setType("user");
        permissions[i].setRole("writer");
        service.permissions().insert(fileId, permissions[i])
               .setSendNotificationEmails(Boolean.FALSE)
               .queue(batch, callback);
    }
    // batch execute
    batch.execute();
} catch (IOException e) {
    System.out.println("An error occurred and I am " + id + ": " + e);
}
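One mitigation worth trying (a sketch under assumptions, not a documented fix): since the 500 internalError shows up only when several threads fire batches at once, send a single batch from one thread and retry just the failed addresses with exponential backoff. MAX_RETRIES and the one-second backoff base are arbitrary choices; the types are the standard google-api-client BatchRequest/JsonBatchCallback classes, and the surrounding try/catch for IOException from the question is assumed:

// Sketch: single-threaded batch with exponential-backoff retries for the
// entries that fail. Batch callbacks run on the calling thread during
// execute(), so a plain ArrayList is safe here.
final int MAX_RETRIES = 3;
List<String> pending = new ArrayList<String>(users);
for (int attempt = 0; attempt <= MAX_RETRIES && !pending.isEmpty(); attempt++) {
    if (attempt > 0) {
        try {
            Thread.sleep((long) Math.pow(2, attempt) * 1000L); // back off before retrying
        } catch (InterruptedException ie) {
            Thread.currentThread().interrupt();
            break;
        }
    }
    final List<String> failed = new ArrayList<String>();
    BatchRequest retryBatch = service.batch();
    for (final String user : pending) {
        Permission permission = new Permission();
        permission.setValue(user + "@gmail.com");
        permission.setType("user");
        permission.setRole("writer");
        service.permissions().insert(fileId, permission)
                .setSendNotificationEmails(Boolean.FALSE)
                .queue(retryBatch, new JsonBatchCallback<Permission>() {
                    @Override
                    public void onSuccess(Permission p, HttpHeaders responseHeaders) {
                        // shared successfully, nothing else to do
                    }
                    @Override
                    public void onFailure(GoogleJsonError error, HttpHeaders responseHeaders) {
                        failed.add(user); // remember this address for the next attempt
                    }
                });
    }
    retryBatch.execute();
    pending = failed;
}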