File reading in Kotlin without piggybacking off a button - android-studio

I am coding a data class that needs to read a CSV file to grab some information stored in the file. However, every way I have tried to read the file fails.
Here is what I have tried so far:
data class Bird(val birdNumIn: Int) {
    private var birdNum = birdNumIn

    /**
     * A function that searches the bird-data.csv file which is a list of birds to find the bird
     * that was inputted into the class constructor and then add the values of the bird to the
     * private variables.
     */
    fun updateValues() {
        var birdNumber = birdNum
        var birdInfo: MutableList<String> = mutableListOf()
        val minput = InputStreamReader(assets().open("bird-data.csv"), "UTF-8")
        val reader = BufferedReader(minput)
    }
}
However, assets().open() does not work. It throws an error about trying to open a file that does not exist, but the file is in the assets folder and the filename is spelt correctly.
I have tried many other ways of reading the file, like using java.io.File with the path of the file.
If you would like to look at our whole project, please feel free to go to our GitHub.

What's the assets() function you're calling? This is just a bare data class; it has no connection to the Android environment it's running in, so unless you've injected an AssetManager instance (or a Context to pull it from) into your class, you can't access it.
You probably need to do this:
fun updateValues(context: Context) {
    val inputStream = context.assets.open("bird-data.csv")
    val minput = InputStreamReader(inputStream, "UTF-8")
    ...
}
which requires your caller to have access to a Context.
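For example, a hypothetical call site (an Activity is itself a Context, so it can pass itself in):
// e.g. from inside an Activity, where `this` serves as the Context
val bird = Bird(1)
bird.updateValues(this)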
Honestly, from a quick look at your class, you might want to rework this. Instead of having a bunch of empty fields in your data class (which aren't part of the "data", by the way; only stuff in the constructor parameters is) and then having the data class update them later by doing some I/O, you might want to keep it as a basic store of data and create instances when you read from your assets file.
So something like:
// all fixed values, initialised during construction
// Also you won't need to override toString now (unless you want to)
data class Bird(
    val birdNum: Int,
    val nameOfBird: String,
    val birdFilePic: String,
    val birdFileSong: String,
    val alternativeName: String,
    val birdInfoFile: String
) { ... }
Then, somewhere else:
fun getBirbs(context: Context): List<Bird> {
    // open the CSV and read the lines
    val lines = context.assets.open("bird-data.csv").bufferedReader().readLines()
    return lines.map {
        // parse the data for each bird, use it to construct a Bird object
        val f = it.split(",") // assuming comma-separated fields in constructor order
        Bird(f[0].toInt(), f[1], f[2], f[3], f[4], f[5])
    }
}
or whatever you need to do, e.g. loading certain birds by ID.
That way your Bird class is just data and some functions/properties that work with it; it doesn't need a Context because it's not doing any I/O. Something else (that does have access to a Context) is responsible for loading your data and turning it into objects - deserialising it, basically. And as soon as a Bird is created, it's ready, initialised, and immutable - you don't have to call update on it to get it into a usable state.
And if you ever wanted to do that a different way (e.g. loading a file from the internet) the data class wouldn't need to change, just the thing that does the loading. You could even have different loading classes! One that loads local data, one that fetches from the internet. The point is the separation of concerns, so it's possible to do this kind of thing because that functionality isn't baked into a class that's really about something else.
Up to you, but just a thought! Especially if passing the context in like I suggested is a problem - that's a sign your design might need tweaking.
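To make the "different loading classes" idea concrete, here's a minimal sketch (all names are hypothetical, and the CSV column order is assumed):
// One possible seam: the data class stays pure, and loaders implement this.
interface BirdSource {
    fun loadBirds(): List<Bird>
}

// Loads from the bundled asset; a RemoteBirdSource could implement the same
// interface and fetch from the network instead, without touching Bird at all.
class AssetBirdSource(private val context: Context) : BirdSource {
    override fun loadBirds(): List<Bird> =
        context.assets.open("bird-data.csv")
            .bufferedReader(Charsets.UTF_8)
            .readLines()
            .map { it.split(",") }
            .map { f -> Bird(f[0].toInt(), f[1], f[2], f[3], f[4], f[5]) }
}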

Related

Use JOOQ Multiset with custom RecordMapper - How to create Field<List<String>>?

Suppose I have two tables, USER_GROUP and USER_GROUP_DATASOURCE. I have a classic relation where one userGroup can have multiple dataSources, and one dataSource is simply a String.
For various reasons, I have a custom RecordMapper creating a Java UserGroup POJO (mainly compatibility with the other code in the codebase, and always being explicit about what's happening). This mapper sometimes creates POJOs containing only data from the USER_GROUP table, and sometimes also the left-joined dataSources.
Currently, I am trying to write the Multiset query along with the custom record mapper. My query thus far looks like this:
List<UserGroup> userGroups = ctx
    .select(
        asterisk(),
        multiset(select(USER_GROUP_DATASOURCE.DATASOURCE_ID)
            .from(USER_GROUP_DATASOURCE)
            .where(USER_GROUP.ID.eq(USER_GROUP_DATASOURCE.USER_GROUP_ID))
        ).as("datasources").convertFrom(r -> r.map(Record1::value1))
    )
    .from(USER_GROUP)
    .where(condition)
    .fetch(new UserGroupMapper());
Now my question is: How to create the UserGroupMapper? I am stuck right here:
public class UserGroupMapper implements RecordMapper<Record, UserGroup> {
    @Override
    public UserGroup map(Record rec) {
        UserGroup grp = new UserGroup(rec.getValue(USER_GROUP.ID),
            rec.getValue(USER_GROUP.NAME),
            rec.getValue(USER_GROUP.DESCRIPTION),
            javaParseTags(USER_GROUP.TAGS)
        );
        // Convention: if we have an additional field "datasources", we assume it to be a list of dataSources to be filled in
        if (rec.indexOf("datasources") >= 0) {
            // How to make `rec.getValue` return my List<String>????
            List<String> dataSources = ?????
            grp.dataSources.addAll(dataSources);
        }
        return grp;
    }
}
My guess is to have something like List<String> dataSources = rec.getValue(..) where I pass in a Field<List<String>> but I have no clue how I could create such Field<List<String>> with something like DSL.field().
How to get a type-safe reference to your field from your RecordMapper
There are essentially two ways to do this:
Keep a reference to your multiset() field definition somewhere, and reuse that (see the sketch after this list). Keep in mind that every jOOQ query is a dynamic SQL query, so you can use this feature of jOOQ to assign arbitrary query fragments to local variables (or return them from methods), in order to improve code reuse.
You can just raw type cast the value, and not care about type safety. It's always an option, even if not the cleanest one.
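A minimal sketch of the first option, reusing the table and field names from the question - because the field definition lives in a variable, rec.get(field) stays fully type safe:
// Keep the field definition in one shared place (a local variable or a method).
Field<List<String>> datasources =
    multiset(select(USER_GROUP_DATASOURCE.DATASOURCE_ID)
        .from(USER_GROUP_DATASOURCE)
        .where(USER_GROUP.ID.eq(USER_GROUP_DATASOURCE.USER_GROUP_ID)))
    .as("datasources")
    .convertFrom(r -> r.map(Record1::value1));

// Inside the mapper, the same reference gives back List<String> with no cast:
List<String> dataSources = rec.get(datasources);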
How to improve your query
Unless you're re-using that RecordMapper several times for different types of queries, why not use Java's type inference instead? The main reason you're not getting type information in your output is your asterisk() usage. But what if you did this instead:
List<UserGroup> userGroups = ctx
    .select(
        USER_GROUP, // Instead of asterisk()
        multiset(
            select(USER_GROUP_DATASOURCE.DATASOURCE_ID)
            .from(USER_GROUP_DATASOURCE)
            .where(USER_GROUP.ID.eq(USER_GROUP_DATASOURCE.USER_GROUP_ID))
        ).as("datasources").convertFrom(r -> r.map(Record1::value1))
    )
    .from(USER_GROUP)
    .where(condition)
    .fetch(r -> {
        UserGroupRecord ug = r.value1();
        List<String> list = r.value2(); // Type information available now
        // ...
    });
The above uses jOOQ 3.17+'s support for Table as SelectField; there are other ways, too. E.g. in jOOQ 3.16+, you can use row(USER_GROUP.fields()).
The important part is that you avoid the asterisk() expression, which removes type safety. You could even convert the USER_GROUP to your UserGroup type using USER_GROUP.convertFrom(r -> ...) when you project it:
List<UserGroup> userGroups = ctx
    .select(
        USER_GROUP.convertFrom(r -> ...),
        // ...

Why would you use the spread operator to spread a variable onto itself?

In the Google Getting started with Node.js tutorial they perform the following operation
data = {...data};
in the code for sending data to Firestore.
You can see it on their Github, line 63.
As far as I can tell this doesn't do anything.
Is there a good reason for doing this?
Is it potentially future-proofing, so that if you added your own data you'd be less likely to do something like data = {data, moreData}?
@Manu's answer details what the line of code is doing, but not why it's there.
I don't know exactly why the Google code example uses this approach, but I would guess at the following reason (and would do the same myself in this situation):
Because objects in JavaScript are passed by reference, it becomes necessary to rebuild the data object from its constituent parts to avoid the original data object being further modified by the ref.set(data) call on line 64 of the example code:
await ref.set(data);
For example, in MongoDB, when you pass an object into a write or update method, Mongo will actually modify the object to add extra properties, such as the datetime it was inserted into a collection or its ID within the collection. I don't know for sure whether Firestore does the same, but if it doesn't now, it may in future. If it does, and if the original code that calls Google's update method goes on to further manipulate the data object it passed in, that object would now have extra properties on it that may cause unexpected problems. Therefore, it's prudent to rebuild the data object from the original object's properties to avoid contamination of the original object elsewhere in the code.
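As a hypothetical illustration of that kind of contamination (not from the tutorial):
// Imagine a driver that mutates the object you pass in, e.g. adding an id:
function save(doc) {
  doc._id = 'generated-id-123';
}

const original = { title: 'My Book' };
save({ ...original });   // the shallow copy absorbs the mutation
console.log(original);   // { title: 'My Book' } - untouched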
I hope that makes sense - the more I think about it, the more I'm convinced that this must be the reason and it's actually a great learning point.
I include the full original function from Google's code here in case others come across this in future, since the code is subject to change (copied from https://github.com/GoogleCloudPlatform/nodejs-getting-started/blob/master/bookshelf/books/firestore.js at the time of writing this answer):
// Creates a new book or updates an existing book with new data.
async function update(id, data) {
  let ref;
  if (id === null) {
    ref = db.collection(collection).doc();
  } else {
    ref = db.collection(collection).doc(id);
  }
  data.id = ref.id;
  data = {...data};
  await ref.set(data);
  return data;
}
It's making a shallow copy of data; let's say you have a third-party function that mutates the input:
const foo = input => {
  input['changed'] = true;
}
And you need to call it, but don't want to get your object modified, so instead of:
data = {life: 42}
foo(data)
// > data
// { life: 42, changed: true }
You may use the Spread Syntax:
data = {life: 42}
foo({...data})
// > data
// { life: 42 }
Not sure if this is the particular case with Firestore, but the thing is: by spreading an object, you get a shallow copy of that object.
Related: Object copy using Spread operator actually shallow or deep?

How do I add map markers to a separate class file?

In Android Studio I've created a map with hundreds of markers on it. I want to separate these into individual classes and put them in a separate package, so that there isn't one massive list of markers in my main code. Is there a way of doing this? I'm using Kotlin.
So what I think you are trying to say is: there is an Activity, let's say MainActivity, which has a map in it with, say, some 200 markers on it, all individually initialized and assigned, and you want to club them all together so that you can find any one of them with a single lookup.
If that's the case, what I would suggest is: make a separate data class that stores the Marker and the other data related to it.
data class MarkerInfo(
    // marker ID
    val id: Int,
    // marker data
    val markerData: Marker,
    // other data
    var otherData: String
)
Now, coming to storing and accessing the data:
class MainActivity {
    // at the top level, inside the Activity:
    // this creates an empty list of MarkerInfo
    var markerData = mutableListOf<MarkerInfo>()

    // now take any function, let's say x
    private fun x() {
        // put a marker on the map and assign it to a variable
        val markerA: Marker = map.addMarker(MarkerOptions().position(somePosition))
        // use the current list size as the id for the new marker
        val id = markerData.size
        // store the data in the list
        markerData.add(MarkerInfo(id, markerA, "This is a very useful marker"))
    }

    // now, to access that marker, let's say there is a function named y
    private fun y() {
        // say we want to access the first marker; there are two ways to do so
        // first method: you know some data that is already in there, let's say the id
        for (i in markerData) {
            if (i.id == 1) {
                Log.d("TAG", i.toString())
            }
        }
        // second method: directly by index
        Log.d("TAG", markerData[0].toString())
    }
}
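As a side note, a more idiomatic Kotlin lookup than the loop above would be firstOrNull (same assumed markerData list):
// find a marker by id without an explicit loop; null if there is no match
val match = markerData.firstOrNull { it.id == 1 }
Log.d("TAG", match?.toString() ?: "no marker with that id")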

Is it possible to do data type conversion on SQLBulkUpload from IDataReader?

I need to grab a large amount of data from one set of tables and SQLBulkInsert it into another set...unfortunately the source tables are ALL varchar(max), and I would like the destination columns to be the correct types. Some tables have millions of rows...and (for far too pointless political reasons to go into) we can't use SSIS.
On top of that, some "bool" values are stored as "Y/N", some "0/1", some "T/F", some "true/false", and finally some "on/off".
Is there a way to overload IDataReader to perform type conversion? Would need to be on a per-column basis I guess?
An alternative (and maybe the best solution) is to put a mapper in place (perhaps AutoMapper or a custom one) and use EF to load from one object and map into the other. This would provide a lot of control, but also require a lot of boilerplate code for every property :(
In the end I wrote a base wrapper class that holds the SqlDataReader and implements the IDataReader methods by simply delegating to the corresponding SqlDataReader methods.
Then I inherit from the base class and override GetValue on a per-case basis, looking for the column names that need translating:
public override object GetValue(int i)
{
    var landingColumn = GetName(i);
    string landingValue = base.GetValue(i).ToString();
    object stagingValue = null;
    switch (landingColumn)
    {
        case "D4DTE": stagingValue = landingValue.FromStringDate(); break;
        case "D4BRAR": stagingValue = landingValue.ToDecimal(); break;
        default:
            stagingValue = landingValue;
            break;
    }
    return stagingValue;
}
Works well, is extensible, and very fast thanks to SQLBulkUpload. OK, so there's a small maintenance overhead, but since the source columns will very rarely change, this doesn't really affect anything.
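For the mixed boolean encodings mentioned in the question ("Y/N", "0/1", "T/F", "true/false", "on/off"), one more case in that switch could call a small helper along these lines (a hypothetical sketch, not part of the original solution):
using System;

static class BoolTranslation
{
    // Normalises the various truthy/falsy string encodings to a bool.
    public static bool ParseFlexibleBool(string value)
    {
        switch (value.Trim().ToLowerInvariant())
        {
            case "y": case "1": case "t": case "true": case "on":
                return true;
            case "n": case "0": case "f": case "false": case "off":
                return false;
            default:
                throw new FormatException($"Unrecognised bool value: '{value}'");
        }
    }
}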

Using *.resx files to store string value pairs

I have an application that requires mappings between string values, so essentially a container that can hold key-value pairs. Instead of using a dictionary or a name-value collection, I used a resource file that I access programmatically in my code. I understand resource files are used in localization scenarios for multi-language implementations and the like. However, I like their strongly typed nature, which ensures that if a key is renamed or removed, the application no longer compiles.
However, I would like to know if there are any important cons to using a *.resx file for simple key-value pair storage instead of a more traditional programmatic type.
There are two cons I can think of off the top of my head:
it requires an I/O operation to read a key/value pair, which may result in a significant performance decrease;
if you let the standard .NET logic resolve resource loading, it will always try to find the file corresponding to the CultureInfo.CurrentUICulture property; this could be problematic if you decide that you actually want to have multiple resx files (i.e. one per language), and could result in even further performance degradation.
BTW, couldn't you just create a helper class or structure containing properties, like this:
public static class GlobalConstants
{
    private const int _SomeInt = 42;
    private const string _SomeString = "Ultimate answer";

    public static int SomeInt
    {
        get { return _SomeInt; }
    }

    public static string SomeString
    {
        get { return _SomeString; }
    }
}
You can then access these properties exactly the same way as resource file values (I am assuming that you're used to this style):
textBox1.Text = GlobalConstants.SomeString;
textBox1.Top = GlobalConstants.SomeInt;
Maybe it is not the best thing to do, but I firmly believe this is still better than using a resource file for that...
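If the mapping really is arbitrary string-to-string pairs rather than a fixed set of named constants, a read-only dictionary is another in-code option with no I/O (a sketch; the keys and values here are made-up placeholders):
using System.Collections.Generic;

public static class StringMappings
{
    public static readonly IReadOnlyDictionary<string, string> Map =
        new Dictionary<string, string>
        {
            ["sourceA"] = "targetA",
            ["sourceB"] = "targetB",
        };
}
The trade-off: unlike generated resx properties or the constants class above, dictionary lookups are only checked at runtime, so a renamed key won't fail at compile time.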
