How to select data from excel sheet categorically in Flutter - excel

I am a beginner in Flutter. I have a database of product items for a general store in an Excel file. I want to select the data from the Excel database by category (the database has a category column, with values such as Medicine or Automobile Items). Then I want to show the data as cards with the product image and info for customers. Please suggest how to select the data from an Excel file (for example, filtering or ordering the data on some condition).
Workaround - I thought of converting the .xlsx to .csv, then to a .json file, and finally storing it in Firebase, which is the database I am currently using in my project. The problem with that is that the download usage would increase, which would cost me money.

You can create a class to store the data of your items, then hard-code the items in a file and use that file for lookups in your application.
For example:
// Data structure
class Item {
  final int id;
  final String category;

  Item({required this.id, required this.category});
}

// Hard-coded data storage
List<Item> items = [
  Item(id: 1, category: 'Medicine'),
  Item(id: 2, category: 'Automobile'),
];

// Method to fetch items by category
List<Item> getItemsByCategory(String category) {
  return items.where((item) => item.category == category).toList();
}

Related

A CSV file has two columns, CATEGORY and MILES; I have to find the % of miles under the Business category and the % of miles under the Personal category, in Python

CATEGORY MILES
Business 5.1
Business 4.6
Business 3.9
Personal 8.5
Business 3.7
Personal 6.2
Personal 11
This is an excerpt from the Excel sheet.
So you have a text file in CSV format; you need to read it and convert it into combinations of [Category, Miles], and for every Category you want the "% of miles", whatever that may be.
Category Miles
X 1
X 4
X 2
Y 3
I think that you want: "Category X has 70% of the miles, Category Y has 30% of the miles".
To solve this, it is best to cut your problem into smaller pieces.
Given a string fileName, read the file as text
Given a string in CSV format, convert it into a sequence of BusinessMile objects
Given a sequence of BusinessMile objects, convert it to "% of miles", according to the definition above.
Cutting your problem into smaller pieces has several advantages:
The function of each piece will be easier to understand
Easier to unit test.
Easier to change, for instance if you don't read from a CSV file, but from a database, or if you don't read from a CSV string, but from an XML or JSON file.
Most important: you will be able to reuse the pieces for other tasks, like: "How many of my rows are about Business?"
The reusability is demonstrated most clearly here, because several of these pieces already exist and can be used freely: reading the file and parsing the CSV.
For this, consider using the NuGet package CsvHelper. It is easy to use and versatile, and thus one of the most-used CSV packages.
So let's assume you have procedures to read the CSV file and to convert it to a sequence of BusinessMile objects:
enum Category
{
    Business,
    Personal,
}

class BusinessMile // TODO: invent a proper name
{
    public Category Category { get; set; }
    public decimal Miles { get; set; }
}
By using an enum you can be certain that after reading the CSV there won't be any incorrect Categories. It will be easy to add new Categories in a future version. If you don't know at compile time which Categories are allowed, consider using a string instead. That has the danger that someone might make a typing error, which silently creates a completely new Category without anyone noticing ("Personnel" instead of "Personal", for instance).
IEnumerable<BusinessMile> ReadBusinessMiles(string csvText)
{
    // use CsvHelper to convert the csvText to the sequence
}

IEnumerable<BusinessMile> ReadBusinessMilesFile(string fileName)
{
    // either use CsvHelper directly, or read the file and call the other method
}
After this, your problem will be easy:
string fileName = ...
IEnumerable<BusinessMile> businessMiles = ReadBusinessMilesFile(fileName);
Make groups of BusinessMiles that have the same Category:
var categoryGroups = businessMiles.GroupBy(
    // parameter keySelector: group on the Category
    businessMile => businessMile.Category,

    // parameter resultSelector: for every Category and all BusinessMiles
    // that have this Category, make one new element
    (category, businessMilesInThisCategory) => new
    {
        Category = category,
        TotalMiles = businessMilesInThisCategory
            .Select(businessMile => businessMile.Miles)
            .Sum(),
    });
So now you've got:
Category TotalMiles
X 7
Y 3
If you really want percentages, you need the total of all Miles over all Categories (= 10 in the small example above), and divide each group's TotalMiles by this total:
var totalMiles = categoryGroups.Select(group => group.TotalMiles).Sum();
var result = categoryGroups.Select(group => new
{
    Category = group.Category,
    TotalMilesPercentage = 100.0M * group.TotalMiles / totalMiles,
});
In my definition of BusinessMile, Miles is a decimal. Take care to convert it if your Miles are integers.
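To round things off, a small usage sketch that merely prints the result computed above (the output formatting is my own choice, not part of the original answer; requires using System):

foreach (var item in result)
{
    // one line per category, e.g. "Business: 40.2%" for the excerpt shown earlier
    Console.WriteLine($"{item.Category}: {item.TotalMilesPercentage:F1}%");
}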

ILOG CPLEX / OPL dynamic Excel sheet referencing

I'm trying to dynamically reference Excel sheets or tables within the .dat for a Mixed Integer Problem in Vehicle Routing that I'm trying to solve in CPLEX (OPL).
The setup is: a .mod (model), a .dat (data) and an MS Excel spreadsheet.
I have a 2-dimensional array with customer demand data = an Excel range (for coding convenience I have not formatted the Excel data as a table yet).
The decision variable in .mod looks like this:
dvar boolean x[vertices][vertices][scenarios]
in .dat:
vertices from SheetRead (data, "Table!vertices");
and
scenarios from SheetRead (data, "dont know how to yet"); this might not be needed
Without the scenario index everything is fine.
But as the customer demand changes in this model, I'd like to include this by changing the database reference.
Now what I'd like to do is one of 2 things:
Either:
Change the spreadsheet in Excel so that depending on the scenario I get something like that in .dat:
scenario = 1:
vertices from SheetRead (data, "table-scenario-1!vertices");
scenario = 2:
vertices from SheetRead (data, "table-scenario-2!vertices");
so changing the spreadsheet for new base data,
or:
Change the range within the same spreadsheet:
scenario = 1:
vertices from SheetRead (data, "table!vertices-1");
scenario = 2:
vertices from SheetRead (data, "table!vertices-2");
either way would be fine.
Knowing how 3D tables in Excel are built by grouping 2D tables across multiple sheets, the more natural approach seems to be to have vertices always reference the same range on every sheet and to switch the sheet depending on the scenario, but I just don't know how to do that.
Thanks for the advice.
Unfortunately, the arguments to SheetConnection must be a string literal or an Id (see the OPL grammar in the user manual), and similarly for SheetRead. This means you cannot have dynamic sources for a sheet connection.
As we discussed in the comments, one option is to add an additional index to all data: the scenario. Then always read the data for all scenarios and in the .mod file select what you want to actually use.
At https://www.ibm.com/developerworks/community/forums/html/topic?id=5af4d332-2a97-4250-bc06-76595eef1ab0&ps=25 I shared an example where you can set a dynamic name for the Excel file. The same way, you could have a dynamic range; the trick is to use flow control.
sub.mod

float maxOfx = 2;
string fileName = ...;
dvar float x;

maximize x;
subject to {
    x <= maxOfx;
}

execute
{
    writeln("filename= ", fileName);
}
and then the main model is
main {
    var source = new IloOplModelSource("sub.mod");
    var cplex = new IloCplex();
    var def = new IloOplModelDefinition(source);

    for (var k = 11; k <= 20; k++) {
        var opl = new IloOplModel(def, cplex);
        var data2 = new IloOplDataElements();
        data2.fileName = "file" + k;
        opl.addDataSource(data2);
        opl.generate();
        if (cplex.solve()) {
            writeln("OBJ = " + cplex.getObjValue());
        } else {
            writeln("No solution");
        }
        opl.postProcess();
        opl.end();
    }
}

Updating a SharePoint item multi-lookup field via OData

I need some help sorting out the syntax for an update to a list item in SharePoint from an application. Here's a rundown of the situation:
There are two lists within this SharePoint site: a products list and a pricing list. The lists are set up in a one-to-many scheme: one product can have many pricing records. The product has a lookup column against it that supports multiple values.
Using REST and OData I can query and get the pricing information easily enough, but my problem is when I need to update the product record to add a price.
With regular lookup fields I normally just set the ID property on the object, then call the update and SaveChanges methods for that list. Because the pricing column supports multiple records, however, there is no single ID to set; the field is an array of sorts. Adding the pricing object (list item), updating, and calling SaveChanges doesn't actually save anything. No errors are thrown, but when viewing the list the value isn't saved.
How can I add a price lookup to my Product?
For testing, I wrote a small method to query each price and add it to its product, shown below:
InventoryCatalogDataContext dc = new InventoryCatalogDataContext(_pushinTinSvc);
dc.Credentials = CredentialCache.DefaultCredentials;

List<PricingItem> pricing = (from q in dc.Pricing
                             select q).ToList<PricingItem>();

foreach (PricingItem price in pricing)
{
    var query = (DataServiceQuery<ProductsItem>)
        dc.Products
          .Expand("Pricing")
          .Where(p => p.Id.Equals(price.StockCodeId));

    List<ProductsItem> prods = query.ToList<ProductsItem>();
    ProductsItem product = prods[0];
    product.Pricing.Add(price);
    dc.UpdateObject(product);
}

try
{
    dc.SaveChanges();
}
catch (Exception ex)
{
    string stopHere = ex.Message;
}
I'm not sure if I'm doing something wrong or if this is a bug. If I inspect the item after SaveChanges, it still has the pricing lookup attached, showing a count of 1. At the end of the code block, if I re-query for the product, it even still has the pricing attached. But once the method finishes and returns to the UI, the pricing is no longer attached: the fields are empty when you look at the list in SharePoint, yet the version does increment. So I'm a little lost...
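The thread contains no answer. Purely as a hedged sketch of the standard WCF Data Services client pattern for collection navigation properties (whether SharePoint's ListData.svc honors it for multi-value lookups is an assumption, not something confirmed here), it may be worth explicitly registering the link with the context instead of only mutating the tracked collection:

// Hedged sketch: AddLink registers the association with the change tracker so that
// SaveChanges transmits it; not verified against SharePoint multi-value lookup fields.
product.Pricing.Add(price);
dc.AddLink(product, "Pricing", price);
dc.UpdateObject(product);
dc.SaveChanges();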

Cassandra data model for web logging

I've been playing around with Cassandra and I am trying to evaluate what would be the best data model for storing things like views or hits for unique page IDs. Would it be best to have a single column family per page ID, or one super column family (logs) with a column per page ID? Each page has a unique ID, and I would like to store the date and some other metrics for each view.
I am just not sure which solution scales better: lots of column families, or one giant super column family?
page-92838 { date:sept 2, browser:IE }
page-22939 { date:sept 2, browser:IE5 }
OR
logs {
    page-92838 {
        date: sept 2,
        browser: IE
    }
    page-22939 {
        date: sept 2,
        browser: IE5
    }
}
And secondly, how to handle lots of different date: entries for page-92838?
You don't need a column-family per pageid.
One solution is to have a row for each page, keyed on the pageid.
You could then have a column for each page-view or hit, keyed and sorted on time-UUID (assuming having the views in time-sorted order would be useful) or other unique, always-increasing counter. Note that all Cassandra columns are time-stamped anyway, so you would have a precise timestamp 'for free' regardless of what other time- or date- stamps you use. Using a precise time-UUID as the key also solves the problem of storing many hits on the same date.
The value of each column could then be a textual value or JSON document containing any other metadata you want to store (such as browser).
page-12345 -> {timeuuid1:metadata1}{timeuuid2:metadata2}{timeuuid3:metadata3}...
page-12346 -> ...
With Cassandra, it is best to start with the queries you need to run and model your schema to support those queries.
Assuming you want to query hits per page, and hits by browser, you can have counter columns for each page like this:
stats {                     # column family
    page-id {               # row key
        hits :              # counter column for total hits
        browser-ie :        # count of views with IE
        browser-firefox :   # ...
    }
}
If you need to do time-based queries, look at how Twitter's Rainbird denormalizes as it writes to Cassandra.

best practices with code or lookup tables

[UPDATE] Chosen approach is below, as a response to this question
Hi,
I've been looking around on this subject, but I can't really find what I'm looking for...
By code tables I mean things like 'marital status', gender, specific legal or social statuses... More specifically, these types have only a fixed set of properties and the items are not about to change soon (but could). The properties are an id, a name and a description.
I'm wondering how to handle these best in the following technologies:
in the database (multiple tables, one table with different code-keys...?)
creating the classes (probably something like inheriting ICode with ICode.Name and ICode.Description)
creating the view/presenter for this: there should be a screen containing all of them, so a list of the types (gender, marital status ...), and then a list of values for that type with a name & description for each item in the value list.
These are things that appear in every single project, so there must be some best practice on how to handle these...
For the record, I'm not really fond of using enums for these situations... Any arguments on using them here are welcome too.
[FOLLOW UP]
Ok, I've gotten nice answers from CodeToGlory and Ahsteele. Let's refine this question.
Say we're not talking about gender or marital status, whose values will definitely not change, but about "stuff" that has a Name and a Description, but nothing more. For example: social statuses, legal statuses.
UI:
I want only one screen for this: a listbox with the possible NameAndDescription types (I'll just call them that), a listbox with the possible values for the selected NameAndDescription type, and then a Name and Description field for the selected NameAndDescription type item.
How could this be handled in views & presenters? The difficulty I see is that the NameAndDescription types would then need to be extracted from the class name.
DB:
What are pro/cons for multiple vs single lookup tables?
Using database-driven code tables can be very useful. You can do things like define the life of the data (using begin and end dates), add data to the table in real time so you don't have to deploy code, and allow users (with the right privileges, of course) to add data through admin screens.
I would recommend always using an autonumber primary key rather than the code or description. This allows you to use multiple codes (with the same name but different descriptions) over different periods of time. Plus, most DBAs (in my experience) would rather use an autonumber than a text-based primary key.
I would use a single table per coded list. You can put multiple unrelated codes into one table (using a matrix of sorts), but that gets messy and I have only found a couple of situations where it was even useful.
Couple of things here:
Use Enumerations that are explicitly clear and will not change. For example, MaritalStatus, Gender etc.
Use lookup tables for items that are not fixed as above and may change, increase/decrease over time.
It is very typical to have lookup tables in the database. Define a key/value object in your business tier that can work with your view/presentation.
I have decided to go with this approach:
CodeKeyManager mgr = new CodeKeyManager();
CodeKey maritalStatuses = mgr.ReadByCodeName(Code.MaritalStatus);
Where:
CodeKeyManager can retrieve CodeKeys from DB (CodeKey=MaritalStatus)
Code is a class filled with constants returning strings, so Code.MaritalStatus = "maritalStatus". These constants map to the CodeKey table > CodeKeyName.
In the database, I have 2 tables:
CodeKey with Id, CodeKeyName
CodeValue with CodeKeyId, ValueName, ValueDescription
DB:
(diagram: http://lh3.ggpht.com/_cNmigBr3EkA/SeZnmHcgHZI/AAAAAAAAAFU/2OTzmtMNqFw/codetables_1.JPG)
Class Code:
public class Code
{
    public const string Gender = "gender";
    public const string MaritalStatus = "maritalStatus";
}
Class CodeKey:
public class CodeKey
{
    public Guid Id { get; set; }
    public string CodeName { get; set; }
    public IList<CodeValue> CodeValues { get; set; }
}
Class CodeValue:
public class CodeValue
{
    public Guid Id { get; set; }
    public CodeKey Code { get; set; }
    public string Name { get; set; }
    public string Description { get; set; }
}
I find this by far the easiest and most efficient way:
All code data can be displayed in an identical manner (in the same view/presenter)
I don't need to create tables and classes for every code table that's to come
But I can still get them out of the database easily and use them easily with the CodeKey constants...
NHibernate can handle this easily too
The only thing I'm still considering is throwing out the GUID Id's and using string (nchar) codes for usability in the business logic.
Thanks for the answers! If there are any remarks on this approach, please do!
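Since NHibernate is mentioned above, here is a hedged mapping-by-code sketch for the CodeKey/CodeValue classes; the table and column names are assumptions taken from the diagram, and the cascade/inverse choices are illustrative only:

using NHibernate.Mapping.ByCode;
using NHibernate.Mapping.ByCode.Conformist;

// Illustrative sketch only; adjust names and cascade settings to your schema.
public class CodeKeyMap : ClassMapping<CodeKey>
{
    public CodeKeyMap()
    {
        Table("CodeKey");
        Id(x => x.Id, m => m.Generator(Generators.GuidComb));
        Property(x => x.CodeName);
        Bag(x => x.CodeValues,
            c => { c.Key(k => k.Column("CodeKeyId")); c.Inverse(true); c.Cascade(Cascade.All); },
            r => r.OneToMany());
    }
}

public class CodeValueMap : ClassMapping<CodeValue>
{
    public CodeValueMap()
    {
        Table("CodeValue");
        Id(x => x.Id, m => m.Generator(Generators.GuidComb));
        ManyToOne(x => x.Code, m => m.Column("CodeKeyId"));
        Property(x => x.Name, m => m.Column("ValueName"));
        Property(x => x.Description, m => m.Column("ValueDescription"));
    }
}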
I lean towards using a table representation for this type of data. Ultimately, if you have a need to capture the data, you'll have a need to store it. For reporting purposes it is better to have a place you can draw that data from via a key. For normalization purposes I find single-purpose lookup tables easier to work with than a multi-purpose lookup table.
That said enumerations work pretty well for things that will not change like gender etc.
Why does everyone want to complicate code tables? Yes, there are lots of them, but they are simple, so keep them that way. Just treat them like every other object. They are part of the domain, so model them as part of the domain, nothing special. If you don't, then when they inevitably need more attributes or functionality, you will have to undo all the code that currently uses them and rework it.
One table per code, of course (for referential integrity and so that they are available for reporting).
For the classes, again one per code, of course, because if I write a method to receive a "Gender" object, I don't want to be able to accidentally pass it a "MaritalStatus"! Let the compiler help you weed out runtime errors; that's why it's there. Each class can simply inherit from or contain a CodeTable class or whatever, but that's simply an implementation helper (a sketch of this follows below).
For the UI, if it does in fact use the inherited CodeTable, I suppose you could use that to help you out and just maintain it in one UI.
As a rule, don't mess up the database model and don't mess up the business model, but if you want to screw around a bit in the UI model, that's not so bad.
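A minimal sketch of the "one class per code table, sharing a common base" idea (class and method names here are illustrative assumptions, not from the thread):

// Shared implementation helper; each code table still gets its own type.
public abstract class CodeTableItem
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string Description { get; set; }
}

public class Gender : CodeTableItem { }
public class MaritalStatus : CodeTableItem { }

public class PersonService
{
    // The compiler now rejects a MaritalStatus where a Gender is expected.
    public void SetGender(int personId, Gender gender)
    {
        // persistence omitted from this sketch
    }
}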
I'd like to consider simplifying this approach even more. Instead of 3 tables defining codes (Code, CodeKey and CodeValue) how about just one table which contains both the code types and the code values? After all the code types are just another list of codes.
Perhaps a table definition like this:
CREATE TABLE [dbo].[Code](
    [CodeType] [int] NOT NULL,
    [Code] [int] NOT NULL,
    [CodeDescription] [nvarchar](40) NOT NULL,
    [CodeAbreviation] [nvarchar](10) NULL,
    [DateEffective] [datetime] NULL,
    [DateExpired] [datetime] NULL,
    CONSTRAINT [PK_Code] PRIMARY KEY CLUSTERED
    (
        [CodeType] ASC,
        [Code] ASC
    )
)
GO
There could be a root record with CodeType=0, Code=0 which represents the type for CodeType. All of the CodeType records will have a CodeType=0 and a Code>=1. Here is some sample data that might help clarify things:
SELECT CodeType, Code, CodeDescription AS Description FROM Code
Results:
CodeType Code Description
-------- ---- -----------
0        0    Type
0        1    Gender
0        2    Hair Color
1        1    Male
1        2    Female
2        1    Blonde
2        2    Brunette
2        3    Redhead
A check constraint could be added to the Code table to ensure that a valid CodeType is entered into the table:
ALTER TABLE [dbo].[Code] WITH CHECK ADD CONSTRAINT [CK_Code_CodeType]
CHECK (([dbo].[IsValidCodeType]([CodeType])=(1)))
GO
The function IsValidCodeType could be defined like this:
CREATE FUNCTION [dbo].[IsValidCodeType]
(
    @Code INT
)
RETURNS BIT
AS
BEGIN
    DECLARE @Result BIT

    IF EXISTS(SELECT * FROM dbo.Code WHERE CodeType = 0 AND Code = @Code)
        SET @Result = 1
    ELSE
        SET @Result = 0

    RETURN @Result
END
GO
One issue that has been raised is how to ensure that a table with a code column has a proper value for that code type. This too could be enforced by a check constraint using a function.
Here is a Person table which has a gender column. It could be a best practice to name all code columns with the description of the code type (Gender in this example) followed by the word Code:
CREATE TABLE [dbo].[Person](
    [PersonID] [int] IDENTITY(1,1) NOT NULL,
    [LastName] [nvarchar](40) NULL,
    [FirstName] [nvarchar](40) NULL,
    [GenderCode] [int] NULL,
    CONSTRAINT [PK_Person] PRIMARY KEY CLUSTERED ([PersonID] ASC)
)
GO
ALTER TABLE [dbo].[Person] WITH CHECK ADD CONSTRAINT [CK_Person_GenderCode]
CHECK (([dbo].[IsValidCode]('Gender',[Gendercode])=(1)))
GO
IsValidCode could be defined this way:
CREATE FUNCTION [dbo].[IsValidCode]
(
    @CodeTypeDescription NVARCHAR(40),
    @Code INT
)
RETURNS BIT
AS
BEGIN
    DECLARE @CodeType INT
    DECLARE @Result BIT

    SELECT @CodeType = Code
    FROM dbo.Code
    WHERE CodeType = 0 AND CodeDescription = @CodeTypeDescription

    IF (@CodeType IS NULL)
    BEGIN
        SET @Result = 0
    END
    ELSE
    BEGIN
        IF EXISTS(SELECT * FROM dbo.Code WHERE CodeType = @CodeType AND Code = @Code)
            SET @Result = 1
        ELSE
            SET @Result = 0
    END

    RETURN @Result
END
GO
Another function could be created to provide the code description when querying a table that has a code column. Here is an example of querying the Person table:
SELECT PersonID,
       LastName,
       FirstName,
       dbo.GetCodeDescription('Gender', GenderCode) AS Gender
FROM   Person
This was all conceived from the perspective of preventing the proliferation of lookup tables in the database and providing one lookup table. I have no idea whether this design would perform well in practice.
