How to disable recursive DNS queries in Java? - dnsjava

I'm performing a lookup using dnsjava, and the program sends the query twice. The first query is the necessary one and returns the desired result (I can only see this result when capturing the UDP packets; it never appears in my program's output). After that, it automatically sends a second query, and my program gets the error "Host Not Found".
[tcpdump capture of the two queries omitted]
import org.xbill.DNS.ARecord;
import org.xbill.DNS.CNAMERecord;
import org.xbill.DNS.Cache;
import org.xbill.DNS.DClass;
import org.xbill.DNS.Lookup;
import org.xbill.DNS.Name;
import org.xbill.DNS.Record;
import org.xbill.DNS.SimpleResolver;

public class DnsLookupTest {
    public static void main(String[] args) {
        try {
            SimpleResolver resolver = new SimpleResolver("10.212.9.108");
            Name name = new Name("8491698766.6.msisdn.sub.cs");
            Lookup.setDefaultResolver(resolver);
            Lookup.setDefaultCache(new Cache(), DClass.IN);
            Lookup l = new Lookup(name);
            l.setResolver(resolver);
            l.run();
            Record[] records = l.getAnswers();
            System.out.println("Error str: " + l.getErrorString());
            if (records != null) {
                for (int i = 0; i < records.length; i++) {
                    Record record = records[i];
                    if (record instanceof ARecord) {
                        ARecord a = (ARecord) record;
                        System.out.println("ARecord: " + a.getAddress().getHostName() + ";" + a.getAddress().getHostAddress());
                    } else {
                        CNAMERecord cname = (CNAMERecord) record;
                        System.out.println("CnameRecord: " + cname.toString());
                    }
                }
            } else {
                System.out.println("Record null");
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

You just need to unset the Recursion Desired (RD) flag in the query header.
Assuming you have a DNS server with a zone bob. and the following entries:
peter.bob. 38400 IN NS ns2.bobsen-technology.org.
peter.bob. 38400 IN NS ns.klaus.de.
the code
Resolver resolver = new SimpleResolver("localhost");
Record question = Record.newRecord(new Name("peter.bob."), Type.NS, DClass.IN);
Message query = Message.newQuery(question);
query.getHeader().unsetFlag(Flags.RD);
Message response = resolver.send(query);

for (RRset rRset : response.getSectionRRsets(Section.AUTHORITY)) {
    System.out.println(rRset);
}
produces:
{ peter.bob. 38400 IN NS [ns2.bobsen-technology.org.] [ns.klaus.de.] }
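If the server answers the question directly (as in the asker's case, where A records are expected), the records arrive in the ANSWER section instead, and the same loop applies:

for (RRset rRset : response.getSectionRRsets(Section.ANSWER)) {
    System.out.println(rRset);
}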

Related

azure search work around $skip limit

I'm running a job to check whether all records from my database (around 610k) exist in Azure Search. However, there's a 100,000 limit on the $skip parameter. Is there a way to work around this limit?
You cannot page past 100K documents; however, you can add facets to work around this. For example, let's say you have a facet called Country and no one facet value has more than 100K documents. You can page over all documents where Country == 'Canada', then over all documents where Country == 'USA', etc.
Just to clarify the other answers: you can't bypass the limit directly but you can use a workaround.
Here's what you can do:
1) Add a unique field to the index. The contents can be a modification timestamp (if it's granular enough to make it unique) or for example a running number. Or alternatively you can use some existing unique field for this.
2) Take the first 100000 results from the index ordered by your unique field
3) Check the maximum value (if ordering ascending) of your unique field in the results, i.e. the value of the last entry
4) Take the next 100000 results by ordering on the same unique field and adding a filter that takes only results where the unique field's value is bigger than the previous maximum. This way the same first 100000 values are not returned, and we get the next 100000 values.
5) Continue until you have all results
The downside is that you can't use other custom ordering with the results unless you do the ordering after getting the results.
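Here is a minimal sketch of that keyset-style pagination, written against the Java azure-search-documents client (the C# SDK used elsewhere in this thread exposes the same options); the endpoint, key, index name, and the unique field docKey are placeholder assumptions:

import com.azure.core.credential.AzureKeyCredential;
import com.azure.core.util.Context;
import com.azure.search.documents.SearchClient;
import com.azure.search.documents.SearchClientBuilder;
import com.azure.search.documents.SearchDocument;
import com.azure.search.documents.models.SearchOptions;
import com.azure.search.documents.models.SearchResult;

// Pages through an entire index without growing $skip: order by a unique
// field and filter past the last value seen on each round trip.
public class DeepPager {
    public static void main(String[] args) {
        SearchClient client = new SearchClientBuilder()
                .endpoint("https://<your-service>.search.windows.net") // placeholder
                .indexName("<your-index>")                             // placeholder
                .credential(new AzureKeyCredential("<admin-key>"))     // placeholder
                .buildClient();

        String lastKey = null; // highest docKey value seen so far
        while (true) {
            SearchOptions options = new SearchOptions()
                    .setOrderBy("docKey asc") // unique, sortable, filterable field
                    .setTop(1000);            // page size, well under the skip limit
            if (lastKey != null) {
                // Resume strictly after the previous page (assumes a string key).
                options.setFilter("docKey gt '" + lastKey + "'");
            }

            int count = 0;
            for (SearchResult result : client.search("*", options, Context.NONE)) {
                SearchDocument doc = result.getDocument(SearchDocument.class);
                lastKey = (String) doc.get("docKey");
                count++;
                // ... process doc ...
            }
            if (count == 0) {
                break; // no more results
            }
        }
    }
}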
I use metadata_storage_last_modified as the filter field, and the following is my example.
offset   --%-->  skip     action
0        --%-->  0
100,000  --%-->  100,000  getLastTime
101,000  --%-->  0        useLastTime
200,000  --%-->  99,000   useLastTime
201,000  --%-->  100,000  useLastTime & getLastTime
202,000  --%-->  0        useLastTime
Because the skip limit is 100k, we can calculate the skip value by:
AzureSearchSkipLimit = 100k
AzureSearchTopLimit = 1k
skip = offset % (AzureSearchSkipLimit + AzureSearchTopLimit)
If the total search count is larger than AzureSearchSkipLimit, apply
orderby = "metadata_storage_last_modified desc"
When skip reaches AzureSearchSkipLimit, take the metadata_storage_last_modified time from the end of the data and use it as the filter for the next 100k of the search:
filter = metadata_storage_last_modified lt ${metadata_storage_last_modified}
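The mapping is easy to sanity-check in code; a tiny sketch (constant names are assumptions matching the formula above, and the two limits are the service's documented $skip and $top maxima):

public class SkipWindow {
    static final int AZURE_SEARCH_SKIP_LIMIT = 100_000;
    static final int AZURE_SEARCH_TOP_LIMIT = 1_000;

    // Position inside the current 101k window.
    static int skipFor(long offset) {
        return (int) (offset % (AZURE_SEARCH_SKIP_LIMIT + AZURE_SEARCH_TOP_LIMIT));
    }

    // When a window is exhausted, capture the last metadata_storage_last_modified
    // value and filter on it for the next window.
    static boolean shouldCaptureLastTime(long offset) {
        return skipFor(offset) == AZURE_SEARCH_SKIP_LIMIT;
    }

    public static void main(String[] args) {
        for (long off : new long[] { 0, 100_000, 101_000, 200_000, 201_000, 202_000 }) {
            System.out.println(off + " -> skip " + skipFor(off)
                    + (shouldCaptureLastTime(off) ? "  (getLastTime)" : ""));
        }
    }
}

Running it reproduces the offset --%--> skip column of the table above.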
I understand the limitation of the API with the 100K limit, and MS's site says as a workaround "you can work around this limitation by adding code to iterate over, and filter on, a facet with less than 100K documents per facet value."
I'm using the "Back up and restore an Azure Cognitive Search index" sample solution provided by MS. (https://github.com/Azure-Samples/azure-search-dotnet-samples)
But can someone tell me where or how to implement this "iterate loop on a facet"? The facetable field I'm trying to use is "tribtekey", but I don't know where to place the code in the program below. Any help would be greatly appreciated.
// This is a prototype tool that allows for extraction of data from a search index
// Since this tool is still under development, it should not be used for production usage

using Azure;
using Azure.Search.Documents;
using Azure.Search.Documents.Indexes;
using Azure.Search.Documents.Models;
using Microsoft.Extensions.Configuration;
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Net.Http;
using System.Text.Json;
using System.Threading;
using System.Threading.Tasks;

namespace AzureSearchBackupRestore
{
    class Program
    {
        private static string SourceSearchServiceName;
        private static string SourceAdminKey;
        private static string SourceIndexName;
        private static string TargetSearchServiceName;
        private static string TargetAdminKey;
        private static string TargetIndexName;
        private static string BackupDirectory;

        private static SearchIndexClient SourceIndexClient;
        private static SearchClient SourceSearchClient;
        private static SearchIndexClient TargetIndexClient;
        private static SearchClient TargetSearchClient;

        private static int MaxBatchSize = 500;    // JSON files will contain this many documents / file and can be up to 1000
        private static int ParallelizedJobs = 10; // Output content in parallel jobs

        static void Main(string[] args)
        {
            // Get source and target search service info and index names from appsettings.json file
            // Set up source and target search service clients
            ConfigurationSetup();

            // Backup the source index
            Console.WriteLine("\nSTART INDEX BACKUP");
            BackupIndexAndDocuments();

            // Recreate and import content to target index
            //Console.WriteLine("\nSTART INDEX RESTORE");
            //DeleteIndex();
            //CreateTargetIndex();
            //ImportFromJSON();
            //Console.WriteLine("\r\n Waiting 10 seconds for target to index content...");
            //Console.WriteLine(" NOTE: For really large indexes it may take longer to index all content.\r\n");
            //Thread.Sleep(10000);
            //
            //// Validate all content is in target index
            //int sourceCount = GetCurrentDocCount(SourceSearchClient);
            //int targetCount = GetCurrentDocCount(TargetSearchClient);
            //Console.WriteLine("\nSAFEGUARD CHECK: Source and target index counts should match");
            //Console.WriteLine(" Source index contains {0} docs", sourceCount);
            //Console.WriteLine(" Target index contains {0} docs\r\n", targetCount);
            //
            //Console.WriteLine("Press any key to continue...");
            //Console.ReadLine();
        }

        static void ConfigurationSetup()
        {
            IConfigurationBuilder builder = new ConfigurationBuilder().AddJsonFile("appsettings.json");
            IConfigurationRoot configuration = builder.Build();

            SourceSearchServiceName = configuration["SourceSearchServiceName"];
            SourceAdminKey = configuration["SourceAdminKey"];
            SourceIndexName = configuration["SourceIndexName"];
            TargetSearchServiceName = configuration["TargetSearchServiceName"];
            TargetAdminKey = configuration["TargetAdminKey"];
            TargetIndexName = configuration["TargetIndexName"];
            BackupDirectory = configuration["BackupDirectory"];

            Console.WriteLine("CONFIGURATION:");
            Console.WriteLine("\n Source service and index {0}, {1}", SourceSearchServiceName, SourceIndexName);
            Console.WriteLine("\n Target service and index: {0}, {1}", TargetSearchServiceName, TargetIndexName);
            Console.WriteLine("\n Backup directory: " + BackupDirectory);

            SourceIndexClient = new SearchIndexClient(new Uri("https://" + SourceSearchServiceName + ".search.windows.net"), new AzureKeyCredential(SourceAdminKey));
            SourceSearchClient = SourceIndexClient.GetSearchClient(SourceIndexName);

            // TargetIndexClient = new SearchIndexClient(new Uri($"https://" + TargetSearchServiceName + ".search.windows.net"), new AzureKeyCredential(TargetAdminKey));
            // TargetSearchClient = TargetIndexClient.GetSearchClient(TargetIndexName);
        }

        static void BackupIndexAndDocuments()
        {
            // Backup the index schema to the specified backup directory
            Console.WriteLine("\n Backing up source index schema to {0}\r\n", BackupDirectory + "\\" + SourceIndexName + ".schema");
            File.WriteAllText(BackupDirectory + "\\" + SourceIndexName + ".schema", GetIndexSchema());

            // Extract the content to JSON files
            int SourceDocCount = GetCurrentDocCount(SourceSearchClient);
            WriteIndexDocuments(SourceDocCount); // Output content from index to json files
        }

        static void WriteIndexDocuments(int CurrentDocCount)
        {
            // Write document files in batches (per MaxBatchSize) in parallel
            string IDFieldName = GetIDFieldName();
            int FileCounter = 0;
            for (int batch = 0; batch <= (CurrentDocCount / MaxBatchSize); batch += ParallelizedJobs)
            {
                List<Task> tasks = new List<Task>();
                for (int job = 0; job < ParallelizedJobs; job++)
                {
                    FileCounter++;
                    int fileCounter = FileCounter;
                    if ((fileCounter - 1) * MaxBatchSize < CurrentDocCount)
                    {
                        Console.WriteLine(" Backing up source documents to {0} - (batch size = {1})", BackupDirectory + "\\" + SourceIndexName + fileCounter + ".json", MaxBatchSize);
                        tasks.Add(Task.Factory.StartNew(() =>
                            ExportToJSON((fileCounter - 1) * MaxBatchSize, IDFieldName, BackupDirectory + "\\" + SourceIndexName + fileCounter + ".json")
                        ));
                    }
                }
                Task.WaitAll(tasks.ToArray()); // Wait for all the tasks in the group to complete
            }
            return;
        }

        static void ExportToJSON(int Skip, string IDFieldName, string FileName)
        {
            // Extract all the documents from the selected index to JSON files in batches of 500 docs / file
            string json = string.Empty;
            try
            {
                SearchOptions options = new SearchOptions()
                {
                    SearchMode = SearchMode.All,
                    Size = MaxBatchSize,
                    Skip = Skip,
                    // ,IncludeTotalCount = true
                    // ,Filter = Azure.Search.Documents.SearchFilter.Create('%24top=2&%24skip=0&%24orderby=tributeId%20asc')
                    //,Filter = String.Format("&search=*&%24top=2&%24skip=0&%24orderby=tributeId%20asc")
                    //,Filter = "%24top=2&%24skip=0&%24orderby=tributeId%20asc"
                    //,Filter = "tributeKey eq '5'"
                };

                SearchResults<SearchDocument> response = SourceSearchClient.Search<SearchDocument>("*", options);
                foreach (var doc in response.GetResults())
                {
                    json += JsonSerializer.Serialize(doc.Document) + ",";
                    json = json.Replace("\"Latitude\":", "\"type\": \"Point\", \"coordinates\": [");
                    json = json.Replace("\"Longitude\":", "");
                    json = json.Replace(",\"IsEmpty\":false,\"Z\":null,\"M\":null,\"CoordinateSystem\":{\"EpsgId\":4326,\"Id\":\"4326\",\"Name\":\"WGS84\"}", "]");
                    json += "\r\n";
                }

                // Output the formatted content to a file
                json = json.Substring(0, json.Length - 3); // remove the trailing comma and newline
                File.WriteAllText(FileName, "{\"value\": [");
                File.AppendAllText(FileName, json);
                File.AppendAllText(FileName, "]}");
                Console.WriteLine(" Total documents: {0}", response.GetResults().Count().ToString());
                json = string.Empty;
            }
            catch (Exception ex)
            {
                Console.WriteLine("Error: {0}", ex.Message.ToString());
            }
        }

        static string GetIDFieldName()
        {
            // Find the id field of this index
            string IDFieldName = string.Empty;
            try
            {
                var schema = SourceIndexClient.GetIndex(SourceIndexName);
                foreach (var field in schema.Value.Fields)
                {
                    if (field.IsKey == true)
                    {
                        IDFieldName = Convert.ToString(field.Name);
                        break;
                    }
                }
            }
            catch (Exception ex)
            {
                Console.WriteLine("Error: {0}", ex.Message.ToString());
            }
            return IDFieldName;
        }

        static string GetIndexSchema()
        {
            // Extract the schema for this index
            // We use REST here because we can take the response as-is
            Uri ServiceUri = new Uri("https://" + SourceSearchServiceName + ".search.windows.net");
            HttpClient HttpClient = new HttpClient();
            HttpClient.DefaultRequestHeaders.Add("api-key", SourceAdminKey);
            string Schema = string.Empty;
            try
            {
                Uri uri = new Uri(ServiceUri, "/indexes/" + SourceIndexName);
                HttpResponseMessage response = AzureSearchHelper.SendSearchRequest(HttpClient, HttpMethod.Get, uri);
                AzureSearchHelper.EnsureSuccessfulSearchResponse(response);
                Schema = response.Content.ReadAsStringAsync().Result.ToString();
            }
            catch (Exception ex)
            {
                Console.WriteLine("Error: {0}", ex.Message.ToString());
            }
            return Schema;
        }

        private static bool DeleteIndex()
        {
            Console.WriteLine("\n Delete target index {0} in {1} search service, if it exists", TargetIndexName, TargetSearchServiceName);
            // Delete the index if it exists
            try
            {
                TargetIndexClient.DeleteIndex(TargetIndexName);
            }
            catch (Exception ex)
            {
                Console.WriteLine(" Error deleting index: {0}\r\n", ex.Message);
                Console.WriteLine(" Did you remember to set your SearchServiceName and SearchServiceApiKey?\r\n");
                return false;
            }
            return true;
        }

        static void CreateTargetIndex()
        {
            Console.WriteLine("\n Create target index {0} in {1} search service", TargetIndexName, TargetSearchServiceName);
            // Use the schema file to create a copy of this index
            // I like using REST here since I can just take the response as-is
            string json = File.ReadAllText(BackupDirectory + "\\" + SourceIndexName + ".schema");

            // Do some cleaning of this file to change index name, etc
            json = "{" + json.Substring(json.IndexOf("\"name\""));
            int indexOfIndexName = json.IndexOf("\"", json.IndexOf("name\"") + 5) + 1;
            int indexOfEndOfIndexName = json.IndexOf("\"", indexOfIndexName);
            json = json.Substring(0, indexOfIndexName) + TargetIndexName + json.Substring(indexOfEndOfIndexName);

            Uri ServiceUri = new Uri("https://" + TargetSearchServiceName + ".search.windows.net");
            HttpClient HttpClient = new HttpClient();
            HttpClient.DefaultRequestHeaders.Add("api-key", TargetAdminKey);
            try
            {
                Uri uri = new Uri(ServiceUri, "/indexes");
                HttpResponseMessage response = AzureSearchHelper.SendSearchRequest(HttpClient, HttpMethod.Post, uri, json);
                response.EnsureSuccessStatusCode();
            }
            catch (Exception ex)
            {
                Console.WriteLine(" Error: {0}", ex.Message.ToString());
            }
        }

        static int GetCurrentDocCount(SearchClient searchClient)
        {
            // Get the current doc count of the specified index
            try
            {
                SearchOptions options = new SearchOptions()
                {
                    SearchMode = SearchMode.All,
                    IncludeTotalCount = true
                };

                SearchResults<Dictionary<string, object>> response = searchClient.Search<Dictionary<string, object>>("*", options);
                return Convert.ToInt32(response.TotalCount);
            }
            catch (Exception ex)
            {
                Console.WriteLine(" Error: {0}", ex.Message.ToString());
            }
            return -1;
        }

        static void ImportFromJSON()
        {
            Console.WriteLine("\n Upload index documents from saved JSON files");
            // Take JSON file and import this as-is to target index
            Uri ServiceUri = new Uri("https://" + TargetSearchServiceName + ".search.windows.net");
            HttpClient HttpClient = new HttpClient();
            HttpClient.DefaultRequestHeaders.Add("api-key", TargetAdminKey);
            try
            {
                foreach (string fileName in Directory.GetFiles(BackupDirectory, SourceIndexName + "*.json"))
                {
                    Console.WriteLine(" -Uploading documents from file {0}", fileName);
                    string json = File.ReadAllText(fileName);
                    Uri uri = new Uri(ServiceUri, "/indexes/" + TargetIndexName + "/docs/index");
                    HttpResponseMessage response = AzureSearchHelper.SendSearchRequest(HttpClient, HttpMethod.Post, uri, json);
                    response.EnsureSuccessStatusCode();
                }
            }
            catch (Exception ex)
            {
                Console.WriteLine(" Error: {0}", ex.Message.ToString());
            }
        }
    }
}

The remote server returned an error: (401) Unauthorized - CSOM - OAuth - after specific time

I am trying to get the permissions assigned to each document in a document library using CSOM in a console application. The console app runs fine for almost an hour, and after that I get the error: "The remote server returned an error: (401) Unauthorized."
I registered an app in SharePoint, and I'm using the Client ID & Client Secret to generate the client context. Here is the code to get the client context:
public static ClientContext getClientContext()
{
    ClientContext ctx = null;
    try
    {
        ctx = new AuthenticationManager().GetAppOnlyAuthenticatedContext(SiteUrl, ClientId, ClientSecret);
        ctx.RequestTimeout = 6000000;
        ctx.ExecutingWebRequest += delegate (object sender, WebRequestEventArgs e)
        {
            e.WebRequestExecutor.WebRequest.UserAgent = "NONISV|Contoso|GovernanceCheck/1.0";
        };
    }
    catch (Exception ex)
    {
        WriteLog("Method-getClientContext, Error - " + ex.Message);
    }
    return ctx;
}
Here is the code that gets all the role assignments for each document in the document library.
static StringBuilder excelData = new StringBuilder();

static void ProcessWebnLists()
{
    try
    {
        string[] lists = { "ABC", "XYZ", "DEF", "GHI" };
        foreach (string strList in lists)
        {
            using (var ctx = getClientContext())
            {
                if (ctx != null)
                {
                    Web web = ctx.Site.RootWeb;
                    ListCollection listCollection = web.Lists;
                    ctx.Load(web);
                    ctx.Load(listCollection);
                    ctx.ExecuteQuery();

                    List list = web.Lists.GetByTitle(strList);
                    ctx.Load(list);
                    ctx.ExecuteQuery();
                    Console.WriteLine("List Name:{0}", list.Title);

                    var query = new CamlQuery { ViewXml = "<View/>" };
                    var items = list.GetItems(query);
                    ctx.Load(items);
                    ctx.ExecuteQuery();

                    foreach (var item in items)
                    {
                        var itemAssignments = item.RoleAssignments;
                        ctx.Load(item, i => i.DisplayName);
                        ctx.Load(itemAssignments);
                        ctx.ExecuteQuery();
                        Console.WriteLine(item.DisplayName);

                        string groupNames = "";
                        foreach (var assignment in itemAssignments)
                        {
                            try
                            {
                                ctx.Load(assignment.Member);
                                ctx.ExecuteQuery();
                                groupNames += "[" + assignment.Member.Title.Replace(",", " ") + "] ";
                                Console.WriteLine($"--> {assignment.Member.Title}");
                            }
                            catch (Exception e)
                            {
                                WriteLog("Method-ProcessWebnLists, List-" + list.Title + ", Document Title - " + item.DisplayName + ", Error : " + e.Message);
                                WriteLog("Method-ProcessWebnLists, StackTrace : " + e.StackTrace);
                            }
                        }
                        excelData.AppendFormat("{0},{1},ITEM,{2}", list.Title, (item.DisplayName).Replace(",", " "), groupNames);
                        excelData.AppendLine();
                    }
                }
                excelData.AppendLine();
            }
        }
    }
    catch (Exception ex)
    {
        WriteLog("Method-ProcessWebnLists, Error : " + ex.Message);
        WriteLog("Method-ProcessWebnLists, StackTrace : " + ex.StackTrace.ToString());
    }
}
Can anyone suggest what I am missing here, or what kind of changes I need to make to avoid this error?
The process takes a long time, so I want my application to keep running for a couple of hours.
Thanks in advance!
Is there a reason why getClientContext() is inside the foreach loop?
I encountered this error before, and adding the following solved it for me.
// Must live in a static class so it can be used as an extension method on ClientContext.
public static class ClientContextExtensions
{
    public static void ExecuteQueryWithIncrementalRetry(this ClientContext context, int retryCount, int delay)
    {
        int retryAttempts = 0;
        int backoffInterval = delay;

        if (retryCount <= 0)
            throw new ArgumentException("Provide a retry count greater than zero.");
        if (delay <= 0)
            throw new ArgumentException("Provide a delay greater than zero.");

        while (retryAttempts < retryCount)
        {
            try
            {
                context.ExecuteQuery();
                return;
            }
            catch (WebException wex)
            {
                var response = wex.Response as HttpWebResponse;
                if (response != null && response.StatusCode == (HttpStatusCode)429)
                {
                    Console.WriteLine(string.Format("CSOM request exceeded usage limits. Sleeping for {0} seconds before retrying.", backoffInterval));
                    // Add delay.
                    System.Threading.Thread.Sleep(backoffInterval);
                    // Add to retry count and increase delay.
                    retryAttempts++;
                    backoffInterval = backoffInterval * 2;
                }
                else
                {
                    throw;
                }
            }
        }
        // MaximumRetryAttemptedException is the custom exception type used by the PnP throttling sample.
        throw new MaximumRetryAttemptedException(string.Format("Maximum retry attempts {0}, have been attempted.", retryCount));
    }
}
Basically, this code backs off for a certain time/interval before executing another query, to prevent throttling.
So instead of using ctx.ExecuteQuery(), use ctx.ExecuteQueryWithIncrementalRetry(5, 1000);
Further explanation here
https://learn.microsoft.com/en-us/sharepoint/dev/general-development/how-to-avoid-getting-throttled-or-blocked-in-sharepoint-online

Solr/Solrj pagination with cursor

I would like to ask how to fetch all documents from a Solr collection using SolrJ.
I have written the code below, but I'm getting this error:
Exception in thread "main" org.apache.solr.client.solrj.SolrServerException: No collection param specified on request and no default collection has been set.
String zkHostString = "linux152:2181,linuxUL:2181,linux170:2181/solr";
CloudSolrClient server = new CloudSolrClient(zkHostString);
SolrQuery parameters = new SolrQuery();

public void cursorMark() throws IOException, SolrServerException {
    SolrQuery parameters = new SolrQuery();
    QueryResponse response = new QueryResponse();
    response = server.query(parameters);
    parameters.set("q", "*:*");
    parameters.set("qt", "/select");
    parameters.setParam("wt", "json");
    parameters.set("collection", "RetailDev_Protocol");
    int fetchSize = 2;
    parameters.setRows(fetchSize);
    String cursorMark = CursorMarkParams.CURSOR_MARK_START;
    boolean done = false;
    while (!done) {
        parameters.set(CursorMarkParams.CURSOR_MARK_PARAM, cursorMark);
        long offset = 0;
        long totalResults = response.getResults().getNumFound();
        while (offset < totalResults) {
            parameters.setStart((int) offset);
            try {
                for (SolrDocument doc : server.query(parameters).getResults()) {
                    log.info((String) doc.getFieldValue("title"));
                }
            } catch (SolrServerException e) {
                e.printStackTrace();
            }
            offset += fetchSize;
        }
        String nextCursorMark = (response).getNextCursorMark();
    }
    SolrDocumentList list = response.getResults();
    System.out.println(list.toString());
}
You need to set your collection in the following way:
server.setDefaultCollection("<MY_COLLECTION>");
otherwise you get the error that you specified in your question.
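Beyond the missing collection, note that the cursor loop in the question never advances cursorMark and mixes setStart() with cursors, which Solr does not allow. Here is a minimal sketch of cursorMark paging with SolrJ, assuming "id" is the collection's uniqueKey field:

import java.io.IOException;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.SolrDocument;
import org.apache.solr.common.params.CursorMarkParams;

public class FetchAll {
    public static void fetchAll(CloudSolrClient server) throws IOException, SolrServerException {
        SolrQuery q = new SolrQuery("*:*");
        q.setRows(500);
        // Cursors require a deterministic sort that ends on the uniqueKey field.
        q.setSort(SolrQuery.SortClause.asc("id"));

        String cursorMark = CursorMarkParams.CURSOR_MARK_START;
        boolean done = false;
        while (!done) {
            q.set(CursorMarkParams.CURSOR_MARK_PARAM, cursorMark);
            QueryResponse rsp = server.query("RetailDev_Protocol", q);
            for (SolrDocument doc : rsp.getResults()) {
                System.out.println(doc.getFieldValue("title"));
            }
            String nextCursorMark = rsp.getNextCursorMark();
            // An unchanged cursor means the last page has been consumed.
            done = cursorMark.equals(nextCursorMark);
            cursorMark = nextCursorMark;
        }
    }
}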

save data from servlet to recordstore in j2me

I am developing a J2ME application that communicates with a database through a servlet.
I want to store the data received from the servlet in a RecordStore and display it. How can this be achieved? Please provide code examples.
Thank you in advance.
public void viewcon() {
    StringBuffer sb = new StringBuffer();
    try {
        HttpConnection c = (HttpConnection) Connector.open(url);
        c.setRequestProperty("User-Agent", "Profile/MIDP-1.0, Configuration/CLDC-1.0");
        c.setRequestProperty("Content-Language", "en-US");
        c.setRequestMethod(HttpConnection.POST);
        DataOutputStream os = (DataOutputStream) c.openDataOutputStream();
        os.flush();
        os.close();
        // Get the response from the servlet page.
        DataInputStream is = (DataInputStream) c.openDataInputStream();
        int ch;
        sb = new StringBuffer();
        while ((ch = is.read()) != -1) {
            sb.append((char) ch);
        }
        showAlert(sb.toString()); // display data received from servlet
        is.close();
        c.close();
    } catch (Exception e) {
        showAlert(e.getMessage());
    }
}
Put this function call where you show the response data alert:
writeToRecordStore(sb.toString().getBytes());
The function definition is as below:
private static String RMS_NAME = "NETWORK-DATA-STORAGE";

private boolean writeToRecordStore(byte[] inStream) {
    RecordStore rs = null;
    try {
        rs = RecordStore.openRecordStore(RMS_NAME, true);
        if (null != rs) {
            // Based on your logic either ADD or SET the record
            rs.addRecord(inStream, 0, inStream.length);
            return true;
        } else {
            return false;
        }
    } catch (RecordStoreException ex) {
        ex.printStackTrace();
        return false;
    } finally {
        try {
            if (null != rs) {
                rs.closeRecordStore();
            }
        } catch (RecordStoreException recordStoreException) {
        } finally {
            rs = null;
        }
    }
}
After you have saved the data, open the record store (RMS_NAME) again and read the added record to get the response data back.
NOTE: The assumption is that the network response data is to be appended to the record store. If you want to write it to a particular record instead, modify writeToRecordStore(...) accordingly.
This code throws an exception in my display class:

try {
    byte[] byteInputData = new byte[5];
    int length = 0;
    // access all records present in the record store
    for (int x = 1; x <= rs.getNumRecords(); x++) {
        if (rs.getRecordSize(x) > byteInputData.length) {
            byteInputData = new byte[rs.getRecordSize(x)];
        }
        length = rs.getRecord(x, byteInputData, 0);
    }
    alert = new Alert("reading", new String(byteInputData, 0, length), null, AlertType.WARNING);
    alert.setTimeout(Alert.FOREVER);
    display.setCurrent(alert);
} catch (RecordStoreException ex) {
    ex.printStackTrace();
}
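For reference, a minimal sketch of reading everything back from the store written by writeToRecordStore(...); the most likely cause of the exception above is that rs was never opened in the display class. Here display is assumed to be the MIDlet's Display:

import javax.microedition.lcdui.Alert;
import javax.microedition.lcdui.AlertType;
import javax.microedition.lcdui.Display;
import javax.microedition.rms.RecordStore;
import javax.microedition.rms.RecordStoreException;

public class RecordStoreReader {
    // Reads every record from the store and shows the concatenated contents.
    public static void showStoredData(Display display) {
        RecordStore rs = null;
        try {
            rs = RecordStore.openRecordStore("NETWORK-DATA-STORAGE", false);
            StringBuffer sb = new StringBuffer();
            for (int x = 1; x <= rs.getNumRecords(); x++) {
                byte[] data = rs.getRecord(x); // returns a copy of record x
                sb.append(new String(data));
            }
            Alert alert = new Alert("reading", sb.toString(), null, AlertType.WARNING);
            alert.setTimeout(Alert.FOREVER);
            display.setCurrent(alert);
        } catch (RecordStoreException ex) {
            ex.printStackTrace();
        } finally {
            if (rs != null) {
                try {
                    rs.closeRecordStore();
                } catch (RecordStoreException ignore) {
                }
            }
        }
    }
}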

Multithreaded Server using TCP in Java

I'm trying to implement a simple TCP connection between a client and a server. I made the server multithreaded so that it can take multiple requests (such as finding the sum, max, or min of a string of numbers provided by the user) from a single client, or accept connections from multiple clients. I'm running both on my machine, but the server never seems to send back an answer. Not sure what I'm doing wrong here --
public final class CalClient {
    static final int PORT_NUMBER = 6789;

    public static void main(String arg[]) throws Exception {
        String serverName;
        @SuppressWarnings("unused")
        String strListOfNumbers = null;
        int menuIndex;
        boolean exit = false;
        BufferedReader inFromUser = new BufferedReader(new InputStreamReader(System.in));
        System.out.println("Please enter host name...");
        System.out.print("> ");
        serverName = inFromUser.readLine();
        Socket clientSocket = new Socket(serverName, PORT_NUMBER);
        DataOutputStream outToServer = new DataOutputStream(clientSocket.getOutputStream());
        BufferedReader inFromServer = new BufferedReader(new InputStreamReader(clientSocket.getInputStream()));
        //outToServer.writeBytes(serverName + '\n');
        System.out.println("");
        System.out.println("Enter 1 to enter the list of numbers");
        System.out.println("Enter 2 to perform Summation");
        System.out.println("Enter 3 to calculate Maximum");
        System.out.println("Enter 4 to calculate Minimum");
        System.out.println("Enter 5 to Exit");
        while (!exit) {
            System.out.print(">");
            menuIndex = Integer.parseInt(inFromUser.readLine());
            if (menuIndex == 1) {
                System.out.println("Please enter the numbers separated by commas.");
                System.out.print(">");
                strListOfNumbers = inFromUser.readLine();
                outToServer.writeBytes("List" + strListOfNumbers);
                //continue;
            } else if (menuIndex == 2) {
                outToServer.writeBytes("SUM");
                System.out.println(inFromServer.readLine());
            } else if (menuIndex == 3) {
                outToServer.writeBytes("MAX");
                System.out.println(inFromServer.readLine());
            } else if (menuIndex == 4) {
                outToServer.writeBytes("MIN");
                System.out.println(inFromServer.readLine());
            } else if (menuIndex == 5) {
                outToServer.writeBytes("EXIT");
                exit = true;
            }
        }
    }
}
public final class CalServer {
    static final int PORT_NUMBER = 6789;

    public static void main(String[] args) throws IOException {
        try {
            ServerSocket welcomeSocket = new ServerSocket(PORT_NUMBER);
            System.out.println("Listening");
            while (true) {
                Socket connectionSocket = welcomeSocket.accept();
                if (connectionSocket != null) {
                    CalRequest request = new CalRequest(connectionSocket);
                    Thread thread = new Thread(request);
                    thread.start();
                }
            }
        } catch (IOException ioe) {
            System.out.println("IOException on socket listen: " + ioe);
            ioe.printStackTrace();
        }
    }
}

final class CalRequest implements Runnable {
    Socket socket;
    BufferedReader inFromClient;
    DataOutputStream outToClient;
    TreeSet<Integer> numbers = new TreeSet<Integer>();
    int sum = 0;

    public CalRequest(Socket socket) {
        this.socket = socket;
    }

    @Override
    public void run() {
        try {
            inFromClient = new BufferedReader(new InputStreamReader(socket.getInputStream()));
            outToClient = new DataOutputStream(socket.getOutputStream());
            while (inFromClient.readLine() != null) {
                processRequest(inFromClient.readLine());
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    public void processRequest(String string) throws IOException {
        String strAction = string.substring(0, 3);
        if (strAction.equals("LIS")) {
            String strNumbers = string.substring(5);
            String[] strNumberArr;
            strNumberArr = strNumbers.split(",");
            // convert each element of the string array to type Integer and add it to a treeSet container.
            for (int i = 0; i < strNumberArr.length; i++)
                numbers.add(new Integer(Integer.parseInt(strNumberArr[i])));
        } else if (strAction.equals("SUM")) {
            @SuppressWarnings("rawtypes")
            Iterator it = numbers.iterator();
            int total = 0;
            while (it.hasNext()) {
                total += (Integer) (it.next());
            }
        } else if (strAction.equals("MAX")) {
            outToClient.writeBytes("The max is: " + Integer.toString(numbers.last()));
        } else if (strAction.equals("MIN")) {
            outToClient.writeBytes("The max is: " + Integer.toString(numbers.first()));
        }
    }
}
Since you are using readLine(), I would guess that you actually need to send line terminators.
My experience with TCP socket communications uses ASCII data exclusively, and my code reflects that, I believe. If that's the case for you, you may want to try this:
First, try instantiating your data streams like this:
socket = new Socket(Dest, Port);
toServer = new PrintWriter(socket.getOutputStream(), true);
fromServer = new BufferedReader(new InputStreamReader(socket.getInputStream()), 8000);
The true at the end of the PrintWriter constructor tells it to auto-flush (lovely term) the buffer when you issue a println.
When you actually use the socket, use the following:
toServer.println (msg.trim());
resp = fromServer.readLine().trim();
I don't have to append the \n to the outgoing text myself, but this may be related to my specific situation (more on that below). The incoming data needs to have a \n at its end or readLine doesn't work. I assume there are ways you could read from the socket byte by byte, but the code would not be nearly so simple.
Unfortunately, the TCP server I'm communicating with is a C++ program, so the way we ensure the \n is present in the incoming data isn't going to work for you (and may not be needed in the outgoing data).
Finally, if it helps, I built my code based on this web example:
http://content.gpwiki.org/index.php/Java:Tutorials:Simple_TCP_Networking
Edit: I found another code example that uses DataOutputStream... You may find it helpful, assuming you haven't already seen it.
http://systembash.com/content/a-simple-java-tcp-server-and-tcp-client/
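To make the line-terminator point concrete, here is a minimal, self-contained sketch (the names are mine, not from the question's code) where both sides end every message with a newline so readLine() returns on each end:

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

// Every message ends with a newline, so BufferedReader.readLine() on the
// other side can return it; PrintWriter(..., true) auto-flushes on println.
public class LineProtocolDemo {
    public static void main(String[] args) throws IOException {
        try (ServerSocket server = new ServerSocket(0)) { // any free port
            Thread serverThread = new Thread(() -> {
                try (Socket s = server.accept();
                     BufferedReader in = new BufferedReader(new InputStreamReader(s.getInputStream()));
                     PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
                    String line;
                    while ((line = in.readLine()) != null) {
                        out.println("echo: " + line); // println appends the line terminator
                    }
                } catch (IOException ignored) {
                }
            });
            serverThread.start();

            try (Socket client = new Socket("localhost", server.getLocalPort());
                 PrintWriter out = new PrintWriter(client.getOutputStream(), true);
                 BufferedReader in = new BufferedReader(new InputStreamReader(client.getInputStream()))) {
                out.println("SUM");                // terminator supplied by println
                System.out.println(in.readLine()); // reply arrives as one complete line
            }
        }
    }
}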
