I have an application that requires mappings between string values, so essentially a container that can hold key-value pairs. Instead of using a dictionary or a name-value collection, I used a resource file that I access programmatically in my code. I understand resource files are used in localization scenarios for multi-language implementations and the like. However, I like their strongly typed nature, which ensures that if a key is renamed or removed, the application does not compile.
However, I would like to know whether there are any important cons to using a *.resx file for simple key-value pair storage instead of a more traditional programmatic type.
There are two cons I can think of off the top of my head:
it requires an I/O operation to read a key/value pair, which may result in a significant performance decrease;
if you let the standard .NET logic resolve resource loading, it will always try to find the file corresponding to the CultureInfo.CurrentUICulture property; this could be problematic if you decide that you actually want multiple resx files (i.e. one per language), and could result in even further performance degradation (see the sketch below).
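If you do stick with a resx, you can at least sidestep the culture probing by asking the ResourceManager for the invariant culture explicitly. A minimal sketch, assuming the designer-generated Properties.Resources class that Visual Studio emits for a project resx (the key name is hypothetical):

using System.Globalization;

// Read the neutral resource directly instead of letting the lookup
// walk the CultureInfo.CurrentUICulture fallback chain.
string value = Properties.Resources.ResourceManager.GetString(
    "SomeKey", CultureInfo.InvariantCulture);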
By the way, couldn't you just create a helper class or structure containing properties, like this:
public static class GlobalConstants
{
    private const int _SomeInt = 42;
    private const string _SomeString = "Ultimate answer";

    public static int SomeInt
    {
        get { return _SomeInt; }
    }

    public static string SomeString
    {
        get { return _SomeString; }
    }
}
You can then access these properties exactly the same way as resource files (I am assuming that you're used to this style):
textBox1.Text = GlobalConstants.SomeString;
textBox1.Top = GlobalConstants.SomeInt;
Maybe it is not the best thing to do, but I firmly believe this is still better than using a resource file for that...
We want to implement a character counter in our JavaScript data entry form, so the user gets immediate keystroke feedback as to how many characters they have typed and how many they have left (something like "25/100", indicating the current string length is 25 and the maximum allowed is 100).
To do this, I would like to write a service that returns a list of dto property names and their max allowed lengths.
{Name='SmallComment', MaxLength=128}
{Name='BigComment', MaxLength=512}
The best way I can think of to do this would be to create an instance of the validator for that DTO and iterate through it to pull out the .Length(min, max) rules. I had other ideas as well, like storing the max lengths in an attribute, but this would require rewriting all the validators to set up the rules based on the attributes.
Whatever solution is best, the goal is to store the max length for each property in a single place, so that changing that length affects both the validation rule and the service data passed down to the JavaScript client.
If you want to maintain a single source of reference for both client and server, I would take a metadata approach and provide a Service that returns the max lengths to the client for all types, something like:
public class ValidationMetadataServices : Service
{
    public object Any(GetFieldMaxLengths request)
    {
        return new GetFieldMaxLengthsResponse {
            Type1 = GetFieldMaxLengths<Type1>(),
            Type2 = GetFieldMaxLengths<Type2>(),
            Type3 = GetFieldMaxLengths<Type3>(),
        };
    }

    // Collects the MaximumLength of every [StringLength] property on T.
    static Dictionary<string,int> GetFieldMaxLengths<T>()
    {
        var to = new Dictionary<string,int>();
        typeof(T).GetPublicProperties()
            .Where(p => p.FirstAttribute<StringLengthAttribute>() != null)
            .Each(p => to[p.Name] =
                p.FirstAttribute<StringLengthAttribute>().MaximumLength);
        return to;
    }
}
But FluentValidation uses static rule definitions, so that would require manually specifying a rule for each property that validates against the length from the property's metadata attribute.
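Alternatively, if you want the lengths to live in the validators themselves (as the question suggests), FluentValidation can report its rule metadata through a validator's descriptor. A minimal sketch of that idea, assuming an older FluentValidation API in which GetMembersWithValidators() yields the IPropertyValidator instances per member and LengthValidator is the built-in rule created by .Length(min, max):

using System.Collections.Generic;
using FluentValidation;
using FluentValidation.Validators;

static class ValidatorMetadata
{
    // Pulls the Max of every .Length(min, max) rule out of a validator instance.
    public static Dictionary<string, int> GetFieldMaxLengths<T>(
        AbstractValidator<T> validator)
    {
        var to = new Dictionary<string, int>();
        foreach (var member in validator.CreateDescriptor().GetMembersWithValidators())
        {
            foreach (var propertyValidator in member)
            {
                if (propertyValidator is LengthValidator lengthValidator)
                    to[member.Key] = lengthValidator.Max;
            }
        }
        return to;
    }
}

This keeps the validator as the single source of truth; the metadata service would then call this per DTO type instead of reading attributes.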
I am trying to pass a String as the value in the mapper, but I am getting an error that it is not Writable. How do I resolve this?
public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
    String TempString = value.toString();
    String[] SingleRecord = TempString.split("\t");
    // using Integer.parseInt to calculate profit
    int Amount = Integer.parseInt(SingleRecord[7]);
    int Asset = Integer.parseInt(SingleRecord[8]);
    int SalesPrice = Integer.parseInt(SingleRecord[9]);
    int Profit = Amount * (SalesPrice - Asset);
    String ValueProfit = String.valueOf(Profit);
    String ValueOne = String.valueOf(one);
    custID.set(SingleRecord[2]);
    data.set(ValueOne + ValueProfit);
    context.write(custID, data);
}
Yahoo's tutorial says:
Objects which can be marshaled to or from files and across the network must obey a particular interface, called Writable, which allows Hadoop to read and write the data in a serialized form for transmission.
From the Cloudera site:
The key and value classes must be serializable by the framework and hence must implement the Writable interface. Additionally, the key classes must implement the WritableComparable interface to facilitate sorting.
So you need an implementation of Writable to write it as a value to the context. Hadoop ships with a few stock classes such as IntWritable. The String counterpart you are looking for is the Text class. It can be used as:
context.write(custID, new Text(data));
OR
Text outValue = new Text();
outValue.set(data);
context.write(custID, outValue);
In case you need specialized functionality in the value class, you may implement Writable yourself (not a big deal after all). However, it seems like Text is just enough for you.
Also: you haven't set data in the map function according to the Text import above, and TextWritable is wrong; just use Text as well.
I am new to Java. I have a requirement to hold a lookup table in memory (abbreviations and their expansions). I was thinking of using a Java HashMap, but I want to know if that really is the best approach.
Also, are there any equivalent libraries in Google Guava for the same requirement?
I want it to be optimized and very efficient with respect to time and memory.
Using Maps
Maps are indeed fine for this, as used below.
Apparently, it's a bit early for you to care that much about performance or memory consumption, though, and we can't really help you if we don't have more context on the actual use case.
In Pure Java
final Map<String, String> lookup = new HashMap<>();
lookup.put("IANAL", "I Ain't A Lawyer");
lookup.put("IMHO", "In My Humble Opinion");
Note that there are several implementations of the Map interface, and that you can write your own.
Using Google Guava
If you want an immutable map:
final Map<String, String> lookup = ImmutableMap.<String, String>builder()
    .put("IANAL", "I Ain't A Lawyer")
    .put("IMHO", "In My Humble Opinion")
    .build();
Retrieving Data
Then to use it to lookup an abbreviation:
// retrieval:
if (lookup.containsKey("IMHO")) {
    final String value = lookup.get("IMHO");
    /* do stuff */
}
Using Enums
I was speaking of alternatives...
If you know at coding time what the key/value pairs will be, you may very well be better off using a Java enum:
enum Abbreviations {
    IANAL("I Ain't A Lawyer"),
    IMHO("In My Humble Opinion");

    private final String value;

    private Abbreviations(final String value) {
        this.value = value;
    }

    public String getValue() {
        return value;
    }
}
You can then look up values directly, i.e. either by doing this:
Abbreviations.IMHO.getValue()
Or by using:
Abbreviations.valueOf("IMHO").getValue()
Considering where you seem to be in your learning process, I'd recommend you follow the links and read through the Java tutorial and implement the examples.
I need to grab a large amount of data from one set of tables and use SqlBulkCopy to insert it into another set... unfortunately the source tables are ALL varchar(max) and I would like the destination to be the correct types. Some tables are in the millions of rows... and (for far too pointless political reasons to go into) we can't use SSIS.
On top of that, some "bool" values are stored as "Y/N", some "0/1", some "T/F" some "true/false" and finally some "on/off".
Is there a way to wrap IDataReader to perform type conversion? It would need to be on a per-column basis, I guess?
An alternative (and it might be the best solution) is to put a mapper in place (perhaps AutoMapper or a custom one) and use EF to load from one object and map into the other. This would provide a lot of control, but also require a lot of boilerplate code for every property :(
In the end I wrote a base wrapper class to hold the SqlDataReader, implementing each IDataReader method to simply call the corresponding SqlDataReader method.
Then I inherit from the base class and override GetValue on a per-case basis, looking for the column names that need translating:
public override object GetValue(int i)
{
    var landingColumn = GetName(i);
    string landingValue = base.GetValue(i).ToString();
    object stagingValue = null;

    switch (landingColumn)
    {
        case "D4DTE": stagingValue = landingValue.FromStringDate(); break;
        case "D4BRAR": stagingValue = landingValue.ToDecimal(); break;
        default: stagingValue = landingValue; break;
    }
    return stagingValue;
}
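The assorted boolean encodings from the question could be handled the same way, with one more case per flag column calling a small normalizer. A sketch of such a helper (ToBool is a hypothetical extension, not part of the original code):

using System;

public static class BoolParsingExtensions
{
    // Normalizes the encodings mentioned in the question:
    // "Y/N", "0/1", "T/F", "true/false" and "on/off".
    public static bool ToBool(this string value)
    {
        switch (value?.Trim().ToLowerInvariant())
        {
            case "y": case "1": case "t": case "true": case "on":
                return true;
            case "n": case "0": case "f": case "false": case "off":
                return false;
            default:
                throw new FormatException("Unrecognized boolean value: " + value);
        }
    }
}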
Works well, is extensible, and is very fast thanks to SqlBulkCopy. OK, so there's a small maintenance overhead, but since the source columns will very rarely change, this doesn't really affect anything.
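For context, a wrapper like this slots straight into SqlBulkCopy, roughly as follows. TranslatingDataReader stands in for the derived wrapper described above, and sourceCommand/destinationConnection are assumed to be an open command and connection (all three names are hypothetical):

using (var source = sourceCommand.ExecuteReader())
using (var bulkCopy = new SqlBulkCopy(destinationConnection))
{
    bulkCopy.DestinationTableName = "dbo.StagingTable"; // hypothetical target
    // SqlBulkCopy pulls rows through the wrapper, which converts
    // each value in GetValue as it streams.
    bulkCopy.WriteToServer(new TranslatingDataReader(source));
}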
We have a solution where we parallelize reading from and writing to Azure Table Storage.
Because the TableServiceContext does not support reading an entity on one thread and saving it on another thread, we want to keep working with the entity using another context. To do that we need to set:
context.MergeOption = MergeOption.NoTracking;
And when updating (or removing) an entity we call:
context.AttachTo(entitySetName, entity, eTag);
However to do that we need to know the ETag, and I don't know how to get that.
If the entity was tracked, we could use EntityDescriptor.ETag like this:
private string GetETagFromEntity<T>(T entity) where T : TableServiceEntity
{
    return context.Entities.Single(entityDescriptor =>
        entityDescriptor.Entity == entity).ETag;
}
... but context.Entities is empty because we don't track entities.
The only solution we have found so far is:
context.AttachTo(entitySetName, entity, "*");
... but that means we have concurrency problems where the last writer always wins.
We also tried to construct the following, which works on the local Compute Emulator but not in the cloud:
private string GetETagFromEntity<T>(T entity) where T : TableServiceEntity
{
    string datePart = entity.Timestamp.ToString("yyyy-MM-dd");
    string hourPart = entity.Timestamp.ToString("HH");
    string minutePart = entity.Timestamp.ToString("mm");
    string secondPart = entity.Timestamp.ToString("ss");
    string millisecondPart = entity.Timestamp.ToString("fff").TrimEnd('0');

    return string.Format(
        "W/\"datetime'{0}T{1}%3A{2}%3A{3}.{4}Z'\"",
        datePart,
        hourPart,
        minutePart,
        secondPart,
        millisecondPart
    ).Replace(".Z", "Z");
}
The general problem with this approach, even if we could get it to work, is that Microsoft does not make any guarantees about what the ETag looks like, so this could change over time.
So the question is: how do we get the ETag of an Azure Table Storage entity that is not tracked?
I think you'll have to note the etag when you read the entity. (There's probably an event you can hook, maybe ReadingEntity, where you can access the etag and store it somewhere; see the sketch below.)
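A sketch of that idea against the WCF Data Services stack that TableServiceContext builds on. Whether ReadingEntity fires with your exact settings, and the m:etag attribute location on the Atom entry, are assumptions on my part rather than a documented contract:

using System.Collections.Concurrent;
using System.Xml.Linq;

// Cache etags keyed by entity instance as entities are materialized.
var etags = new ConcurrentDictionary<object, string>();
XNamespace m = "http://schemas.microsoft.com/ado/2007/08/dataservices/metadata";

context.ReadingEntity += (sender, args) =>
{
    var etag = args.Entry.Attribute(m + "etag");
    if (etag != null)
        etags[args.Entity] = etag.Value;
};

// Later, attach to the other context with the captured etag so
// optimistic concurrency still applies:
// otherContext.AttachTo(entitySetName, entity, etags[entity]);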
I have written an alternate table storage client which is very explicit in exposing the etag, can be used context-free, and is thread safe. It may work for you. It is available at www.lucifure.com.