Accumulo client throwing NotServingTabletException

I'm using Accumulo in my project (inside Jetty), but I'm getting this error when trying to run my code:
[client.impl.ThriftScanner] Error getting transport to hostname:9997 : NotServingTabletException
Accumulo itself is up, and I can connect with the shell - create tables, insert and scan.
Thank you.

Accumulo TabletServers host Tablets. Each Tablet hosts a portion of the rows for an Accumulo table.
It sounds like a Tablet is not currently hosted by any TabletServer. When your application tries to read or update a row that falls in that Tablet, this error is thrown because the Tablet is offline. An offline Tablet would not prevent you from performing operations on other tables, which is consistent with the shell still working for you.
Try scanning the table your application accesses and see whether the same error is thrown. Also check the Accumulo Monitor and the log files for errors.
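To check, you can run the same scan from the Accumulo shell (the user, instance, and table names below are placeholders for your own):

```
$ accumulo shell -u root
root@myinstance> table mytable
root@myinstance mytable> scan
```

If the shell scan hangs or throws the same NotServingTabletException, the Monitor should show the unassigned tablets; if it succeeds, the problem is more likely in the client's configuration.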

Related

Authentication to Apache Drill is temporarily failing

I'm running a 5-node MapR Drill cluster and everything is working fine, except that sometimes (it can be multiple times a day, or once in a few days, with no specific pattern), when I try to connect to one of the drillbits (via the Drill Web UI or PyDrill), the login fails with an "Invalid username/password" error, even though the username and password are correct!
Scripts that try to open connections with PyDrill also fail with the same error.
The issue resolves itself after a while, or when I restart the affected drillbit with the maprcli command.
The issue occurs only on specific drillbits, not all of them (usually node #1; it happens on others too, but only a few times, while on the first node it happens almost daily).
Login fails for all users: the mapr user, AD users, etc.
Did anyone encounter this? I'm trying to find the root cause and a solution. I suspect it happens when the cluster is running low on memory, so the login service (PAM) fails.
Thanks!
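Until the root cause is found, one client-side mitigation is to fail over to another drillbit when a login is rejected. Below is a minimal sketch; the `connect_with_failover` helper, the host names, and the `connect`/`is_active` callables are all hypothetical (with PyDrill, `connect` would wrap something like `PyDrill(host=..., port=8047)` and `is_active` its health check):

```python
def connect_with_failover(hosts, connect, is_active):
    """Return a connection to the first healthy host, or raise.

    `connect` and `is_active` are caller-supplied callables, so this
    sketch works with PyDrill or any other client that can report
    whether a drillbit accepted the login.
    """
    last_error = None
    for host in hosts:
        try:
            conn = connect(host)
            if is_active(conn):
                return conn
        except Exception as exc:  # invalid login, timeout, refused, ...
            last_error = exc
    raise ConnectionError("no healthy drillbit among %r" % (hosts,)) from last_error
```

This does not fix the failing PAM login on node #1, but it keeps scheduled scripts running while one drillbit is refusing authentication.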

"Error: read ECONNRESET" on Node-RED when writing to InfluxDB

I have just started with Node-RED and InfluxDB, and I would like to apologise if this is a very silly question.
There was a network disconnection on my server earlier. After reconnecting the server to the network, the error Error: read ECONNRESET shows frequently whenever an MQTT signal is received and Node-RED tries to write it into InfluxDB.
A little background on my work: this is an Industrial IoT project in which each machine sends signals via MQTT to Node-RED, where they are processed and logged into InfluxDB. The code had been running without issue before the network disconnection, and I have seen other posts stating that restarting Node-RED would solve the problem, but I cannot afford to restart it unless I schedule a time with the factory; until then, more data will be lost.
"Error: read ECONNRESET"
This error is happening at many different InfluxDB nodes, not in one specific place. Is there any way to resolve this without having to restart Node-RED?
Thank you
Given that it's not storing any data at the moment, I would say take the hit and restart Node-RED as soon as possible.
The other option, if you are on a recent Node-RED release, is to restart just the flows. You can do this from the drop-down menu on the Deploy button. This leaves Node-RED running and just stops and restarts all the nodes, which is quicker than a full restart.
I assume you are using the node-red-contrib-influxdb node, which looks to be using the influx npm package under the covers. I can't see anything obvious in the docs about configuring it to reconnect after a failure with the database. I suggest you set up a test system and try to reproduce this by restarting the DB; if you can, open an issue against node-red-contrib-influxdb on GitHub and see if they can work out how to make it reconnect after a failure.
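In the meantime, the missing reconnect logic can be approximated around your own write calls. A minimal sketch in Python (the `write` and `reconnect` callables are placeholders for your real client calls, and `ConnectionResetError` stands in for what Node.js reports as `read ECONNRESET`):

```python
import time

def write_with_reconnect(write, reconnect, attempts=3, delay=1.0):
    """Call `write()`; if the connection was reset, rebuild it and retry.

    `write` performs one database write and `reconnect` recreates the
    client; both are supplied by the caller, so nothing here depends on
    any particular InfluxDB library.
    """
    for attempt in range(attempts):
        try:
            return write()
        except ConnectionResetError:
            if attempt == attempts - 1:
                raise  # give up after the final attempt
            time.sleep(delay)  # brief pause before rebuilding the connection
            reconnect()
```

The same shape could be written inside a Node-RED function node; the point is simply that each write is wrapped, so a single stale socket does not lose data until someone can restart the runtime.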
There was a power outage one day and I restarted the whole system. Now the database is working fine. It worked, though I don't know why. I hope this helps.

Azure Virtual Machine Crashing every 2-3 hours

We've got a classic VM on Azure. All it does is run SQL Server with a lot of DBs (we've got another VM, a web-facing web server, which accesses the classic SQL VM for data).
The problem is that since yesterday morning we have been experiencing outages every 2-3 hours. There doesn't seem to be any reason for it. We've been working with Azure support, but they are still struggling to work out what the issue is, and there doesn't seem to be anything in the event logs that gives us any information.
All that happens is that we receive a Pingdom alert saying the box is down; we then can't remote into it, as the connection times out, and all database calls to it fail. Five minutes later it comes back up. It doesn't seem to fully reboot or anything; it just halts.
Any ideas what this could be caused by? Any places we could look for better info, or ways to prevent it from happening?
The only thing in the event logs that occurs around the same time is a DNS Client event: "Name resolution for the name [DNSName] timed out after none of the configured DNS servers responded."
Quickest recovery:
Did you check SQL Server by connecting inside the VM (internally) using localhost or 127.0.0.1\InstanceName? If you can connect to SQL Server internally without any issue, capture a snapshot of the SQL Server VM and create a new VM from that image (i.e. without losing any data).
The issue may be caused by one of the following:
Azure network firewall
Windows Server update
This ended up being a fault with the node/sector that our VM was on. I fixed it by enlarging our VM instance (4 cores to 8 cores); this forced Azure to move it to another node/sector, which rectified the issue.

Using ODBC Driver in Azure to connect to external database

I am working at a business in New Zealand. We currently use a remote server (Plexus) to store a large amount of data (some tables have more than 2 billion rows). We have started down the SharePoint route, and I have created a number of databases and apps in SharePoint that use this data. Currently, I have to run a program in New Zealand that downloads the data to our local server and then pushes it up into an Azure database, which the web apps connect to. I would like to remove this middle step for many reasons, the biggest being that the web connection between NZ and the US tends to result in a lot of timeouts and long pulls, because large data sets have to come across the Pacific.
Ideally, I would like to have my C# code sitting in Azure and have it connect to the remote server directly. That way I could simply send the SQL request to Plexus and have the data go directly into the Azure databases. The major advantage is that everything would then be based in the US, which would make things a lot faster.
The major hurdle is that we need to install an ODBC driver, given to us by the remote server's vendor, into Azure so that it recognises the calls as genuine. Our systems administrator has said he has looked into it, and it seems this can't be done.
I was hoping someone in the Stack Overflow community has encountered a similar issue and resolved it.
Note: please don't think I am asking whether Azure has an ODBC connection, because I know it does. I am not asking if I can connect TO Azure; I am asking if I can connect Azure to another external data source.
In a Worker Role/Cloud Service in Azure, you can install the ODBC driver in a startup task using PowerShell's ODBC cmdlets.
More info here: PowerShell Add-OdbcDsn, and here: PowerShell startup tasks in Cloud Services.
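A rough sketch of such a startup script (the MSI name, driver name, DSN name, and server are all placeholders for whatever the Plexus vendor supplies; Add-OdbcDsn and its parameters come from PowerShell's built-in Wdac module):

```powershell
# install-odbc.ps1 -- invoked elevated from a <Task> in ServiceDefinition.csdef.
# Install the vendor's driver, then register a 64-bit System DSN for it.
Start-Process msiexec -ArgumentList '/i PlexusOdbcDriver.msi /quiet' -Wait
Add-OdbcDsn -Name "PlexusDsn" -DriverName "Plexus ODBC Driver" `
    -DsnType "System" -Platform "64-bit" `
    -SetPropertyValue @("Server=plexus.example.com")
```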
One option is to create a virtual machine in the same Azure data center as your database and install your ODBC driver and your C# app.

500 Internal Server Error when performing large MySQL Insert with PHP / IIS7

System Spec:
VPS running Windows Server 2008 R2 SP1
64-bit dual core 2.39GHz VCPU
2GB RAM
Parallels Plesk for Windows 10.4.4
IIS 7.5
PHP 5.2.17
MySQL 5.1.56
I have a PHP script to loop through a static file and import each line as a row in MySQL. This works fine if the file is split into several thousand lines at a time, but this creates a lot of manual effort.
The whole file contains around 160,000 lines to be imported. The script currently connects to the database via mysql_connect / mysql_select_db, processes the loop with mysql_query, and disconnects at the end of the loop. However, at some point between around 55 seconds and 1 minute 35 seconds, the client browser returns a 500 Internal Server Error page, which contains no useful diagnostic info.
I have tried increasing the max connection times of MySQL, PHP, IIS and even the max user sockets for winsock, to no avail.
I tried performing a connect / disconnect to MySQL for each insert query, but this caused thousands of connections to the server which were then stuck in a "TIME_WAIT" state, and returned a "could not connect to server" error, presumably due to insufficient sockets remaining. I have also tried both the mysql and mysqli extensions.
I have looked through all the logs I can find for IIS and MySQL, but cannot see anything that would help with finding the cause.
The last two attempts inserted 33,979 and 78,173 rows respectively.
Can anyone offer any assistance?
Thanks.
** UPDATE **
This must be an IIS issue. I have converted the script to run via command-line PHP and it processes the whole file with no issues.
Sounds like an IIS issue. Most 500 errors I have found reside in the web.config file. I would take a look at that and make sure the settings and syntax are correct. Many a time I have forgotten to close my tags and received a 500 error.
Use LOAD DATA INFILE instead of trying to do the INSERTs via PHP. It will run a lot faster, thereby avoiding the 500 error.
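For example (the file path, table, and column names are placeholders; match the terminators to your file's actual format):

```sql
-- Server-side bulk load: MySQL parses the file itself, so the request
-- never goes through PHP's execution-time limits.
LOAD DATA INFILE 'C:/imports/static_file.txt'
INTO TABLE import_table
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\r\n'
(col_a, col_b, col_c);
```

Note that LOAD DATA INFILE reads the file on the database server and requires the FILE privilege; if the file lives on the web server instead, LOAD DATA LOCAL INFILE reads it from the client side.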
Also, do not even consider using the mysql_* interface; it is deprecated and removed in the latest PHP releases. Switch to mysqli or PDO.
