Tabulator - select rows displayed on current page

I am using the local pagination option on a Tabulator table, and I am trying to provide a means of selecting all rows currently displayed on the page. I can't find a way of determining which rows are displayed on the current page. Is there a way to do that, and if so, how?
I have looked through the documentation on pagination and the pagination module, but the closest thing I have found is .selectRow(), which selects all rows in the table, not just those on the current page.
http://tabulator.info/docs/4.2/page
http://tabulator.info/docs/4.2/modules#module-page

const rows = this.tabulator.getRows(true); // array of all currently filtered/sorted rows
const currentPage = this.tabulator.getPage(); // the current page number
const pageSize = this.tabulator.getPageSize(); // the current number of items per page
const start = (currentPage - 1) * pageSize; // start index
// the end index must account for the last page possibly not being full
const end = Math.min(start + pageSize, rows.length);
for (let index = start; index < end; index++) {
    rows[index].select();
}
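For convenience, the same logic can be wrapped as a helper and wired to a control of your own. A sketch along the same lines; table and the "select-page" button are assumptions of this example, not part of the Tabulator API:

// Hedged sketch: reusable helper around the snippet above.
function selectCurrentPage(table) {
    const rows = table.getRows(true);
    const start = (table.getPage() - 1) * table.getPageSize();
    const end = Math.min(start + table.getPageSize(), rows.length);
    rows.slice(start, end).forEach(row => row.select());
}

// e.g. wire it to a "select all on this page" button you provide yourself
document.getElementById("select-page")
    .addEventListener("click", function () { selectCurrentPage(table); });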

Related

Inventory Assignment Sublist doesn't react to Suitescript

I'm currently having some trouble with Client-Side Scripting of an Inventory Detail Subrecord on an Assembly Build. As you know, Assembly Builds have two Inventory Details. The one in the top right corner works as expected, and I can access every field with Suitescript.
I'm having trouble with the bottom Inventory Detail though. I'm able to access it and use nlapiGetFieldValue() just fine. However, when I access the sublist, I am only able to look up the value of 'id'.
These are the fields that are supposed to exist, along with a less documented one called "receiptinventorynumber".
Here is my code:
//get the line items in the bottom inventory details
var bottom_line_items = [];
for (var line_index = 0; line_index < nlapiGetLineItemCount("component"); line_index++)
{
    var bottom_inv_detail = nlapiViewLineItemSubrecord("component", 'componentinventorydetail', line_index + 1);
    if (bottom_inv_detail != null)
    {
        var bottom_line_count = bottom_inv_detail.getLineItemCount('inventoryassignment');
        for (var index = 0; index < bottom_line_count; index++)
        {
            bottom_inv_detail.selectLineItem('inventoryassignment', index + 1);
            var sn = bottom_inv_detail.getCurrentLineItemValue('inventoryassignment', 'receiptinventorynumber');
            bottom_line_items.push(sn);
        }
    }
}
console.log(bottom_line_items);
Here is the result of executing it in the browser console: 'id' and 'internalid' work, but 'receiptinventorynumber' does not, and neither do any of the other fields.
Because of my use case, I cannot wait for the record to be saved on the server. I have to catch this client side. Any suggestions are appreciated.
It has been a long time since I have worked with the Inventory Detail subrecord, but I think there is another field called 'assigninventorynumber'. Have you tried using that?
I was just able to answer my own question, though it did end up involving a server-side search. I'm pretty sure it works as I intended, but I am still testing it. In essence, I grabbed the field 'issueinventorynumber', which holds an id. That id is the internal id of an 'Inventory Serial Number' record, which I was able to search for to get the actual number. Here's the resulting code:
//get the line items in the bottom inventory details
var bottom_line_ids = [];
for (var line_index = 0; line_index < nlapiGetLineItemCount("component"); line_index++)
{
    var bottom_inv_detail = nlapiViewLineItemSubrecord("component", 'componentinventorydetail', line_index + 1);
    if (bottom_inv_detail != null)
    {
        var bottom_line_count = bottom_inv_detail.getLineItemCount('inventoryassignment');
        for (var index = 0; index < bottom_line_count; index++)
        {
            bottom_inv_detail.selectLineItem('inventoryassignment', index + 1);
            var sn = bottom_inv_detail.getCurrentLineItemValue('inventoryassignment', 'issueinventorynumber');
            bottom_line_ids.push(sn);
        }
    }
}
//do a search to map the bottom serial-number ids to their actual numbers
var columns = [new nlobjSearchColumn('inventorynumber')];
var filters = [];
for (var index = 0; index < bottom_line_ids.length; index++)
{
    filters.push(['internalid', 'is', bottom_line_ids[index]]);
    filters.push('or');
}
//remove the last 'or'
if (filters.length > 0)
{
    filters.pop();
}
var search = nlapiCreateSearch('inventorynumber', filters, columns);
var results = search.runSearch().getResults(0, 1000);
var bottom_line_items = [];
if (results.length != bottom_line_ids.length)
{
    //if you get to this point, pop an error: an 'issueinventorynumber' we pulled is associated with multiple serial numbers
    //you can see which ones by doing an 'Inventory Serial Number' saved search
    //this is a serious problem, so we'd have to figure out what to do from there
}
for (var index = 0; index < results.length; index++)
{
    bottom_line_items.push(results[index].getValue('inventorynumber'));
}
console.log(bottom_line_items);

react-virtualized - how do I use this as a true infinite scroller

I can't find any code example or docs that answer these two questions:
How do I achieve an almost complete infinite scroll: the number of items is unknown, but there is a finite amount that may be infeasible to compute beforehand, so at some point the list needs to stop scrolling?
Can I trigger the first load of data from within InfiniteScroller/List? It seems you need to pass in a data source that is already populated with the initial page.
I am using this example:
https://github.com/bvaughn/react-virtualized/blob/master/docs/creatingAnInfiniteLoadingList.md
and:
https://github.com/bvaughn/react-virtualized/blob/master/source/InfiniteLoader/InfiniteLoader.example.js
along with CellMeasurer for dynamic height:
https://github.com/bvaughn/react-virtualized/blob/master/source/CellMeasurer/CellMeasurer.DynamicHeightList.example.js
The docs for InfiniteLoader.rowCount say:
"Number of rows in list; can be arbitrary high number if actual number is unknown."
So how do you indicate there are no more rows?
If anyone can post an example using setTimeout() to simulate dynamically loaded data, that would be appreciated. I can likely get CellMeasurer working from there.
Edit
This doesn't work the way the react-virtualized creator says it should, or the way the infinite loading example implies.
Calls:
render(): rowCount = 1
_rowRenderer(index = 0)
_isRowLoaded(index = 0)
_loadMoreRows(startIndex = 0, stopIndex = 0)
_rowRenderer(index = 0)
end
Do I need to specify a batch size or some other props?
class HistoryBrowser extends React.Component
{
  constructor(props, context, updater)
  {
    super(props, context, updater);

    this.eventEmitter = new EventEmitter();
    this.eventEmitter.extend(this);

    this.state = {
      history: []
    };

    this._cache = new Infinite.CellMeasurerCache({
      fixedWidth: true,
      minHeight: 50
    });

    this._timeoutIdMap = {};

    _.bindAll(this, '_isRowLoaded', '_loadMoreRows', '_rowRenderer');
  }

  render()
  {
    let rowCount = this.state.history.length ? (this.state.history.length + 1) : 1;

    return <Infinite.InfiniteLoader
      isRowLoaded={this._isRowLoaded}
      loadMoreRows={this._loadMoreRows}
      rowCount={rowCount}
    >
      {({ onRowsRendered, registerChild }) =>
        <Infinite.AutoSizer disableHeight>
          {({ width }) =>
            <Infinite.List
              ref={registerChild}
              deferredMeasurementCache={this._cache}
              height={200}
              onRowsRendered={onRowsRendered}
              rowCount={rowCount}
              rowHeight={this._cache.rowHeight}
              rowRenderer={this._rowRenderer}
              width={width}
            />}
        </Infinite.AutoSizer>}
    </Infinite.InfiniteLoader>
  }

  _isRowLoaded({ index }) {
    if (index == 0 && !this.state.history.length) {
      // No data yet, force a load
      return false;
    }
    return !!this.state.history[index];
  }

  _loadMoreRows({ startIndex, stopIndex }) {
    let self = this;

    for (let i = startIndex; i <= stopIndex; i++) {
      this.state.history[i] = { loading: true };
    }

    let promiseResolver;

    const timeoutId = setTimeout(() => {
      delete this._timeoutIdMap[timeoutId];
      for (let i = startIndex; i <= stopIndex; i++) {
        self.state.history[i] = { loading: false, text: 'Hi ' + i };
      }
      promiseResolver();
    }, 10000);

    this._timeoutIdMap[timeoutId] = true;

    return new Promise(resolve => {
      promiseResolver = resolve;
    });
  }

  _rowRenderer({ index, key, style }) {
    let content;

    if (index >= this.state.history.length) {
      content = <div>Placeholder</div>;
    } else if (this.state.history[index].loading) {
      content = <div>Loading</div>;
    } else {
      content = <div>Loaded</div>;
    }

    return (
      <Infinite.CellMeasurer
        cache={this._cache}
        columnIndex={0}
        key={key}
        rowIndex={index}
      >
        <div key={key} style={style}>{content}</div>
      </Infinite.CellMeasurer>
    );
  }
}
The recipe you linked to should be a good starting place. The main thing it's missing is an implementation of loadNextPage, but that varies from app to app depending on how your state/data management code works.
Can I trigger first load of data from within InfiniteScroller/List - it seems you need to pass in a data source that is populated with initial page
This is up to you. IMO it generally makes sense to just fetch the first "page" of records without waiting for InfiniteLoader to ask for them, because you know you'll need them. That being said, if you give InfiniteLoader a rowCount of 1 and then return false from isRowLoaded, it should request the first page of records. There are tests confirming this behavior in the react-virtualized GitHub repository.
The docs for InfiniteLoader.rowCount say: "Number of rows in list; can be arbitrary high number if actual number is unknown."
So how do you indicate there are no more rows?
You stop adding +1 to the rowCount, as the markdown file you linked to mentions:
// If there are more items to be loaded then add an extra row to hold a
// loading indicator.
const rowCount = hasNextPage
  ? list.size + 1
  : list.size;
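And for the requested setTimeout() simulation, a minimal sketch, assuming a React class component that keeps history and hasNextPage in state; the 500-row cap and the 1-second delay are stand-ins for a real data source:

// Hedged sketch: fake a paged fetch with setTimeout().
// _isRowLoaded must return a boolean for every index it is asked about.
_isRowLoaded({ index }) {
  return !!this.state.history[index];
}

_loadMoreRows({ startIndex, stopIndex }) {
  return new Promise(resolve => {
    setTimeout(() => {
      this.setState(prevState => {
        const history = prevState.history.slice();
        for (let i = startIndex; i <= stopIndex; i++) {
          history[i] = { text: 'Hi ' + i };
        }
        // stop growing rowCount once the (assumed) total of 500 is reached;
        // render() then uses: hasNextPage ? history.length + 1 : history.length
        return { history, hasNextPage: history.length < 500 };
      }, resolve);
    }, 1000);
  });
}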

Results pagination in Cassandra (CQL)

I am wondering how I can achieve pagination using Cassandra.
Let us say that I have a blog. The blog lists max 10 posts per page. To access next posts a user must click on pagination menu to access page 2 (posts 11-20), page 3 (posts 21-30), etc.
Using SQL under MySQL, I could do the following:
SELECT * FROM posts LIMIT 20,10;
The first parameter of LIMIT is the offset from the beginning of the result set, and the second argument is the number of rows to fetch. The example above returns 10 rows starting from row 20.
How can I achieve the same effect in CQL?
I have found some solutions on Google, but all of them require having "the last result from the previous query". That works for a "next" button that paginates to the next 10-result set, but what if I want to jump from page 1 to page 5?
You don't need to use tokens if you are using Cassandra 2.0+.
Cassandra 2.0 has auto paging.
Instead of using the token function to create paging, it is now a built-in feature.
Now developers can iterate over the entire result set without having to care that its size is larger than the available memory. As the client code iterates over the results, extra rows are fetched as needed, while old ones are dropped.
Looking at this in Java, note that the SELECT statement matches all the rows, while the fetch size controls how many are retrieved from the server at a time.
I've shown a simple statement here, but the same code can be written with a prepared statement coupled with a bound statement. It is possible to disable automatic paging if it is not desired. It is also important to test various fetch size settings, since you will want to keep the memory footprint small enough, but not so small that too many round-trips to the database are made. Check out this blog post to see how paging works server side.
Statement stmt = new SimpleStatement(
    "SELECT * FROM raw_weather_data"
    + " WHERE wsid = '725474:99999'"
    + " AND year = 2005 AND month = 6");
stmt.setFetchSize(24);
ResultSet rs = session.execute(stmt);
Iterator<Row> iter = rs.iterator();
while (!rs.isFullyFetched()) {
    rs.fetchMoreResults();
    Row row = iter.next();
    System.out.println(row);
}
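The same automatic paging is available from the Node.js driver used in later answers in this thread. A hedged sketch (query borrowed from the Java snippet above; autoPage: true asks eachRow to fetch the following pages transparently):

const cassandra = require('cassandra-driver');
const client = new cassandra.Client({ contactPoints: ['127.0.0.1'] });

const query = "SELECT * FROM raw_weather_data" +
              " WHERE wsid = '725474:99999'" +
              " AND year = 2005 AND month = 6";

client.eachRow(query, [], { prepare: true, fetchSize: 24, autoPage: true },
    function (n, row) {
        console.log(row); // invoked once per row, across all pages
    },
    function (err) {
        if (err) console.log("error ", err); // end callback, after the last page
    });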
Try using the token function in CQL:
https://docs.datastax.com/en/cql-oss/3.3/cql/cql_using/useToken.html
Another suggestion: if you are using DSE, Solr supports deep paging:
https://cwiki.apache.org/confluence/display/solr/Pagination+of+Results
Manual Paging
The driver exposes a PagingState object that represents where we were in the result set when the last page was fetched:
ResultSet resultSet = session.execute("your query");
// iterate the result set...
PagingState pagingState = resultSet.getExecutionInfo().getPagingState();
This object can be serialized to a String or a byte array:
String string = pagingState.toString();
byte[] bytes = pagingState.toBytes();
This serialized form can be saved in some form of persistent storage to be reused later. When that value is retrieved, we can deserialize it and reinject it into a statement:
PagingState pagingState = PagingState.fromString(string);
Statement st = new SimpleStatement("your query");
st.setPagingState(pagingState);
ResultSet rs = session.execute(st);
Note that the paging state can only be reused with the exact same statement (same query string, same parameters). Also, it is an opaque value that is only meant to be collected, stored and re-used. If you try to modify its contents or reuse it with a different statement, the driver will raise an error.
Src: https://docs.datastax.com/en/cql-oss/3.3/cql/cql_reference/cqlshPaging.html
If you read the doc "Use paging state token to get next result":
https://datastax.github.io/php-driver/features/result_paging/
you can use the "paging state token" to paginate at the application level.
So the PHP logic should look like this:
<?php
$limit = 10;
$offset = 20;

$cluster = Cassandra::cluster()->withContactPoints('127.0.0.1')->build();
$session = $cluster->connect("simplex");

$statement = new Cassandra\SimpleStatement("SELECT * FROM paging_entries LIMIT " . ($limit + $offset));
$result = $session->execute($statement, new Cassandra\ExecutionOptions(array('page_size' => $offset)));

// Now $result has all rows up to $offset; skip them and jump to the next page to fetch $limit rows.
while ($result->pagingStateToken()) {
    $result = $session->execute($statement, new Cassandra\ExecutionOptions(array('page_size' => $limit, 'paging_state_token' => $result->pagingStateToken())));
    foreach ($result as $row) {
        printf("key: '%s' value: %d\n", $row['key'], $row['value']);
    }
}
?>
Although a count is available in CQL, so far I have not seen a good solution for the offset part...
So... one solution I have been contemplating is to create sets of pages using a background process.
In some table, I would create blog page A as a set of references to posts 1, 2, ... 10, then another entry for blog page B pointing to posts 11 to 20, etc.
In other words, I would build my own index with a row key set to the page number. You may still make it somewhat flexible, since you can let the user choose to see 10, 20 or 30 references per page. For example, when set to 30, you display sets 1, 2, and 3 as page A, sets 4, 5, and 6 as page B, etc.
And if you have a backend process to handle all of that, you can update your lists as new pages are added and old pages are deleted from the blog. The process should be really fast (like 1 min. for 1,000,000 rows, if even that slow...), and then you can find the pages to display in your list pretty much instantaneously. (Obviously, if you are to have thousands of users each posting hundreds of pages... that number can grow quickly.)
Where it becomes more complicated is if you want to offer a complex WHERE clause. By default a blog shows you a list of all the posts from the newest to the oldest. You could also offer lists of posts with the tag Cassandra. Maybe you want to reverse the order, etc. That makes it difficult unless you have some form of advanced way to create your index(es). On my end I have a C-like language which peeks and pokes at the values in a row to (a) select them and, if selected, (b) sort them. In other words, on my end I can already have WHERE clauses as complex as what you'd have in SQL. However, I do not yet break up my lists into pages. Next step, I suppose...
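A minimal sketch of that background index builder, using the Node.js cassandra-driver that later answers in this thread rely on. The posts_by_time and page_index tables, their columns, and the data center name are illustrative assumptions:

const cassandra = require('cassandra-driver');
const client = new cassandra.Client({
    contactPoints: ['127.0.0.1'],
    localDataCenter: 'datacenter1', // adjust to your cluster
    keyspace: 'blog'                // assumed keyspace
});

// Walk all posts newest-to-oldest and record which post ids belong to each
// page, so "jump to page 5" later becomes a single lookup by page number.
async function rebuildPageIndex(pageSize) {
    let page = 1;
    let bucket = [];
    const insert = 'INSERT INTO page_index (page, post_ids) VALUES (?, ?)';
    // client.stream returns a readable stream; async iteration pulls rows
    // chunk by chunk without holding the whole table in memory.
    for await (const row of client.stream('SELECT post_id FROM posts_by_time')) {
        bucket.push(row.post_id);
        if (bucket.length === pageSize) {
            await client.execute(insert, [page++, bucket], { prepare: true });
            bucket = [];
        }
    }
    if (bucket.length > 0) {
        await client.execute(insert, [page, bucket], { prepare: true });
    }
}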
Using the Cassandra Node.js driver (Koa.js, Marko.js): Pagination
Problem
Due to the absence of skip functionality, we need a workaround. Below is an implementation of manual paging for a Node app, in case anyone can get an idea from it:
code for a simple users list
navigate between next and previous page states
easy to replicate
There are two solutions which I am going to state here, but I only give the code for solution 1 below.
Solution 1: maintain page states for next and previous records (maintain a stack or whatever data structure fits best)
Solution 2: loop through all records with a limit, save all possible page states in a variable, and generate pages relative to their page states
Using this commented code in the model, we can get all the states for pages:
//for the next flow
//if (result.nextPage) {
// Retrieve the following pages:
// the same row handler from above will be used
// result.nextPage();
//}
Router Functions
var userModel = require('/models/users');

public.get('/users', users);
public.post('/users', filterUsers);

var users = function* () { //get request
    var data = {};
    var pageState = { "next": "", "previous": "" };
    try {
        var userCount = yield userModel.Count(); //count all users with a basic count query
        var currentPage = 1;
        var pager = yield generatePaging(currentPage, userCount, pagingMaxLimit);
        var userList = yield userModel.List(pager);
        data.pageNumber = currentPage;
        data.TotalPages = pager.TotalPages;
        console.log('--------------what now--------------');
        data.pageState_next = userList.pageStates.next;
        data.pageState_previous = userList.pageStates.previous;
        console.log("next ", data.pageState_next);
        console.log("previous ", data.pageState_previous);
        data.previousStates = null;
        data.isPrevious = false;
        if ((userCount / pagingMaxLimit) > 1) {
            data.isNext = true;
        }
        data.userList = userList;
        data.totalRecords = userCount;
        console.log('--------------------userList--------------------', data.userList);
        //pass to html template
    }
    catch (e) {
        console.log("err ", e);
        log.info("userList error : ", e);
    }
    this.body = this.stream('./views/userList.marko', data);
    this.type = 'text/html';
};
//post filter and get list
var filterUsers = function* () {
    console.log("<------------------Form Post Started----------------->");
    var data = {};
    var totalCount;
    data.isPrevious = true;
    data.isNext = true;
    var form = this.request.body;
    console.log("----------------formdata--------------------", form);
    var currentPage = parseInt(form.hdpagenumber); //page number hidden in html
    console.log("-------before current page------", currentPage);
    var pageState = null;
    try {
        var statesArray = [];
        if (form.hdallpageStates && form.hdallpageStates !== '') {
            statesArray = form.hdallpageStates.split(',');
        }
        console.log(statesArray);
        //develop a stack to track paging states
        if (form.hdpagestateRequest === 'next') {
            console.log('--------------------------next---------------------');
            currentPage = currentPage + 1;
            statesArray.push(form.hdpageState_next);
            pageState = form.hdpageState_next;
        }
        else if (form.hdpagestateRequest === 'previous') {
            console.log('--------------------------pre---------------------');
            currentPage = currentPage - 1;
            var p_st = statesArray.length - 2; //second last index
            console.log('this index of array to be removed ', p_st);
            pageState = statesArray[p_st];
            statesArray.splice(p_st, 1);
            //pageState = statesArray.pop();
        }
        else if (form.hdispaging === 'false') {
            currentPage = 1;
            pageState = null;
            statesArray = [];
        }
        data.previousStates = statesArray;
        console.log("paging true");
        totalCount = yield userModel.Count();
        var pager = yield generatePaging(form.hdpagenumber, totalCount, pagingMaxLimit);
        data.pageNumber = currentPage;
        data.TotalPages = pager.TotalPages;
        //filter function - not yet constructed
        var searchUsers = yield userModel.searchList(pager, pageState);
        data.usersList = searchUsers;
        if (searchUsers.pageStates) {
            data.pageStates = searchUsers.pageStates;
            data.next = searchUsers.nextPage;
            data.pageState_next = searchUsers.pageStates.next;
            data.pageState_previous = searchUsers.pageStates.previous;
            //show previous and next buttons accordingly
            if (currentPage == 1 && pager.TotalPages > 1) {
                data.isPrevious = false;
                data.isNext = true;
            }
            else if (currentPage == 1 && pager.TotalPages <= 1) {
                data.isPrevious = false;
                data.isNext = false;
            }
            else if (currentPage >= pager.TotalPages) {
                data.isPrevious = true;
                data.isNext = false;
            }
            else {
                data.isPrevious = true;
                data.isNext = true;
            }
        }
        else {
            data.isPrevious = false;
            data.isNext = false;
        }
        console.log("response ", searchUsers);
        data.totalRecords = totalCount;
        //pass to html template
    }
    catch (e) {
        console.log("err ", e);
        log.info("user list error : ", e);
    }
    console.log("<------------------Form Post Ended----------------->");
    this.body = this.stream('./views/userList.marko', data);
    this.type = 'text/html';
};
//Paging function
var generatePaging = function* (currentpage, count, pageSizeTemp) {
    var paging = new Object();
    var pagesize = pageSizeTemp;
    var totalPages = 0;
    var pageNo = currentpage == null ? null : currentpage;
    var skip = pageNo == null ? 0 : parseInt(pageNo - 1) * pagesize;
    var pageNumber = pageNo != null ? pageNo : 1;
    totalPages = pagesize == null ? 0 : Math.ceil(count / pagesize);
    paging.skip = skip;
    paging.limit = pagesize;
    paging.pageNumber = pageNumber;
    paging.TotalPages = totalPages;
    return paging;
};
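As a quick sanity check of generatePaging's arithmetic, a worked example (not part of the original code):

// generatePaging(3, 95, 10) produces:
//   skip       = (3 - 1) * 10 = 20
//   limit      = 10
//   pageNumber = 3
//   TotalPages = Math.ceil(95 / 10) = 10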
Model Functions
var clientdb = require('../utils/cassandradb')();

var Users = function (options) {
    //this.init();
    _.assign(this, options);
};

Users.List = function* (limit) { //first time
    var res = [];
    res.pageStates = { "next": "", "previous": "" };
    const options = { prepare: true, fetchSize: limit };
    console.log('----------did i appear first?-----------');
    yield new Promise(function (resolve, reject) {
        clientdb.eachRow('SELECT * FROM users_lookup_history', [], options, function (n, row) {
            console.log('----paging----rows');
            res.push(row);
        }, function (err, result) {
            if (err) {
                console.log("error ", err);
            }
            else {
                res.pageStates.next = result.pageState;
                res.nextPage = result.nextPage; //next page function
            }
            resolve(result);
        });
    }).catch(function (e) {
        console.log("error ", e);
    }); //promise ends
    console.log('page state ', res.pageStates);
    return res;
};
Users.searchList = function* (pager, pageState) { //paging filtering
    console.log("|------------Query Started-------------|");
    console.log("pageState if any ", pageState);
    var res = [];
    res.pageStates = { "next": "" };
    var query = "SELECT * FROM users_lookup_history ";
    var params = [];
    console.log('current pageState ', pageState);
    const options = { pageState: pageState, prepare: true, fetchSize: pager.limit };
    yield new Promise(function (resolve, reject) {
        clientdb.eachRow(query, [], options, function (n, row) {
            console.log('----Users paging----rows');
            res.push(row);
        }, function (err, result) {
            if (err) {
                console.log("error ", err);
            }
            else {
                res.pageStates.next = result.pageState;
                res.nextPage = result.nextPage;
            }
            //for the next flow
            //if (result.nextPage) {
            //    Retrieve the following pages:
            //    the same row handler from above will be used
            //    result.nextPage();
            //}
            resolve(result);
        });
    }).catch(function (e) {
        console.log("error ", e);
        info.log('something');
    }); //promise ends
    console.log('page state ', pageState);
    console.log("|------------Query Ended-------------|");
    return res;
};
Html side
<div class="box-footer clearfix">
<ul class="pagination pagination-sm no-margin pull-left">
<if test="data.isPrevious == true">
<li><a class='submitform_previous' href="">Previous</a></li>
</if>
<if test="data.isNext == true">
<li><a class="submitform_next" href="">Next</a></li>
</if>
</ul>
<ul class="pagination pagination-sm no-margin pull-right">
<li>Total Records : $data.totalRecords</li>
<li> | Total Pages : $data.TotalPages</li>
<li> | Current Page : $data.pageNumber</li>
</ul>
</div>
I am not very experienced with Node.js and Cassandra, so this solution can surely be improved. Solution 1 is working example code to start with for the paging idea. Cheers.
Our use case was similar: pull everything from a Cassandra table (Cassandra does it smartly by fetching ~5000 rows in one go and returning a cursor), do heavy personalized processing on each row, and keep going. Once the iteration reaches close to 5000, it fetches the next chunk of 5000 rows internally and adds it to the result cursor. It does this so smoothly that we don't even feel the magic happening behind the scenes.
But it became a bottleneck for us: iterating over a chunk took some time, and before we reached the end of the chunk, Cassandra decided the connection was not being used and closed it automatically, yelling that it had timed out. So we implemented paging with page state.
from cassandra.cluster import Cluster
from cassandra.auth import PlainTextAuthProvider
from cassandra.query import SimpleStatement

# connection with cassandra
cluster = Cluster(["127.0.0.1"], auth_provider=PlainTextAuthProvider(username="pankaj", password="pankaj"))
session = cluster.connect()

# setting keyspace
session.set_keyspace("my_keyspace")

# set fetch size
fetch_size = 100

# fetches a fresh chunk with the given page state
def fetch_a_fresh_chunk(paging_state=None):
    query = "SELECT * FROM my_cute_cassandra_table;"
    statement = SimpleStatement(query, fetch_size=fetch_size)
    return session.execute(statement, paging_state=paging_state)

# process each chunk of fetch_size records, then explicitly ask for a new
# chunk so the connection never sits idle long enough to time out
next_page_available = True
paging_state = None

while next_page_available:
    # fetch a new chunk with the given page state
    results = fetch_a_fresh_chunk(paging_state)
    paging_state = results.paging_state
    next_page_available = paging_state is not None
    data_count = 0
    for result in results:
        # process payload here.....

        # payload processed
        data_count += 1
        # once we reach the fetch size, stop Cassandra from fetching
        # more rows of this chunk internally
        if data_count == fetch_size:
            break

Google Apps Script - Exporting events from Google Sheets to Google Calendar - how can I stop it from removing my spreadsheet formulas?

I've been trying to create a code that takes info from a Google Spreadsheet, and creates Google Calendar events. I'm new to this, so bear with my lack of in-depth coding knowledge!
I initially used this post to create a code:
Create Google Calendar Events from Spreadsheet but prevent duplicates
I then worked out that it was timing out due to the number of rows on the spreadsheet, and wasn't creating eventIDs to avoid the duplicates. I got an answer here to work that out!
Google Script that creates Google Calendar events from a Google Spreadsheet - "Exceeded maximum execution time"
And now I've realised that it's overwriting the formulas I have in the spreadsheet, which auto-complete into each row, as follows:
Row 12 - =if(E4="","",E4+1) // Row 13 - =if(C4="","",C4+1) // Row 18 - =if(B4="","","WHC - "&B4) // Row 19 - =if(B4="","","Docs - "&B4)
Does anyone have any idea how I can stop it doing this?
/**
 * Adds a custom menu to the active spreadsheet, containing a single menu item
 * for invoking the exportEvents() function.
 * The onOpen() function, when defined, is automatically invoked whenever the
 * spreadsheet is opened.
 * For more information on using the Spreadsheet API, see
 * https://developers.google.com/apps-script/service_spreadsheet
 */
function onOpen() {
  var sheet = SpreadsheetApp.getActiveSpreadsheet();
  var entries = [{
    name : "Export WHCs",
    functionName : "exportWHCs"
  },
  {
    name : "Export Docs",
    functionName : "exportDocs"
  }];
  sheet.addMenu("Calendar Actions", entries);
};
/**
 * Export events from spreadsheet to calendar
 */
function exportWHCs() {
  // If the script runs for the first time, create the trigger and the
  // script properties it will use (a counter of processed items);
  // otherwise continue the previous task.
  if (PropertiesService.getScriptProperties().getKeys().length == 0) {
    PropertiesService.getScriptProperties().setProperties({'itemsprocessed': 0});
    ScriptApp.newTrigger('exportWHCs').timeBased().everyMinutes(5).create();
  }
  var itemsProcessed = Number(PropertiesService.getScriptProperties().getProperty('itemsprocessed'));
  var startTime = new Date().getTime();
  var sheet = SpreadsheetApp.getActiveSheet();
  var headerRows = 4; // Number of rows of header info (to skip)
  var range = sheet.getDataRange();
  var data = range.getValues();
  var calId = "flightcentre.com.au_pma5g2rd5cft4lird345j7pke8#group.calendar.google.com";
  var cal = CalendarApp.getCalendarById(calId);
  for (var i in data) {
    if (i < headerRows) continue; // Skip header row(s)
    var row = data[i];
    var date = new Date(row[12]); // column 13 holds the event date
    var title = row[18]; // column 19 holds the event title
    var tstart = new Date(row[15]);
    tstart.setDate(date.getDate());
    tstart.setMonth(date.getMonth());
    tstart.setYear(date.getYear());
    var tstop = new Date(row[16]);
    tstop.setDate(date.getDate());
    tstop.setMonth(date.getMonth());
    tstop.setYear(date.getYear());
    var id = row[17]; // column 18 holds the eventId
    // Check if the event already exists, update it if it does
    try {
      var event = cal.getEventSeriesById(id);
    }
    catch (e) {
      // do nothing - we just want to avoid the exception when the event doesn't exist
    }
    if (!event) {
      //cal.createEvent(title, new Date("March 3, 2010 08:00:00"), new Date("March 3, 2010 09:00:00"));
      var newEvent = cal.createEvent(title, tstart, tstop).addEmailReminder(5).getId();
      row[17] = newEvent; // Update the data array with the event ID
    }
    else {
      event.setTitle(title);
    }
    if (new Date().getTime() - startTime > 240000) { // if > 4 minutes, save progress and let the trigger resume
      var processed = Number(i) + 1;
      PropertiesService.getScriptProperties().setProperties({'itemsprocessed': processed});
      range.setValues(data);
      MailApp.sendEmail(Session.getEffectiveUser().getEmail(), 'progress sheet to cal', 'items processed : ' + processed);
      return;
    }
    debugger;
  }
  // Record all event IDs to the spreadsheet
  range.setValues(data);
}
/**
 * Export events from spreadsheet to calendar
 */
function exportDocs() {
  // If the script runs for the first time, create the trigger and the
  // script properties it will use (a counter of processed items);
  // otherwise continue the previous task.
  if (PropertiesService.getScriptProperties().getKeys().length == 0) {
    PropertiesService.getScriptProperties().setProperties({'itemsprocessed': 0});
    ScriptApp.newTrigger('exportDocs').timeBased().everyMinutes(5).create();
  }
  var itemsProcessed = Number(PropertiesService.getScriptProperties().getProperty('itemsprocessed'));
  var startTime = new Date().getTime();
  var sheet = SpreadsheetApp.getActiveSheet();
  var headerRows = 4; // Number of rows of header info (to skip)
  var range = sheet.getDataRange();
  var data = range.getValues();
  var calId = "flightcentre.com.au_pma5g2rd5cft4lird345j7pke8#group.calendar.google.com";
  var cal = CalendarApp.getCalendarById(calId);
  for (var i in data) {
    if (i < headerRows) continue; // Skip header row(s)
    var row = data[i];
    var date = new Date(row[13]); // column 14 holds the event date
    var title = row[19]; // column 20 holds the event title
    var tstart = new Date(row[15]);
    tstart.setDate(date.getDate());
    tstart.setMonth(date.getMonth());
    tstart.setYear(date.getYear());
    var tstop = new Date(row[16]);
    tstop.setDate(date.getDate());
    tstop.setMonth(date.getMonth());
    tstop.setYear(date.getYear());
    var id = row[20]; // column 21 holds the eventId
    // Check if the event already exists, update it if it does
    try {
      var event = cal.getEventSeriesById(id);
    }
    catch (e) {
      // do nothing - we just want to avoid the exception when the event doesn't exist
    }
    if (!event) {
      //cal.createEvent(title, new Date("March 3, 2010 08:00:00"), new Date("March 3, 2010 09:00:00"));
      var newEvent = cal.createEvent(title, tstart, tstop).addEmailReminder(5).getId();
      row[20] = newEvent; // Update the data array with the event ID
    }
    else {
      event.setTitle(title);
    }
    if (new Date().getTime() - startTime > 240000) { // if > 4 minutes, save progress and let the trigger resume
      var processed = Number(i) + 1;
      PropertiesService.getScriptProperties().setProperties({'itemsprocessed': processed});
      range.setValues(data);
      MailApp.sendEmail(Session.getEffectiveUser().getEmail(), 'progress sheet to cal', 'items processed : ' + processed);
      return;
    }
    debugger;
  }
  // Record all event IDs to the spreadsheet
  range.setValues(data);
}
You have two ways to solve that problem.
First possibility: update your sheet with the array data only on the columns that have no formulas, proceeding as in this other post, but in your case (with multiple columns to skip) it will rapidly become tricky.
Second possibility (the one I would personally choose, because I'm not a "formula fan"): do what your formulas do in the script itself, i.e. translate the formulas into array-level operations.
Following your example, =if(E4="","",E4+1) would become something like data[n][4]=data[n][4]==''?'':data[n+1][4]; if I understood the logic (but I'm not so sure...).
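For instance, a hedged sketch of that second possibility for the two text formulas from the question (the column indices follow the question's row[18]/row[19] usage and are assumptions):

// Recompute the formula columns directly in the data array before the
// global setValues(), instead of letting formulas live in the sheet.
// Column B is index 1; "Row 18"/"Row 19" from the question map to indices 18/19.
for (var n = headerRows; n < data.length; n++) {
  data[n][18] = data[n][1] == '' ? '' : 'WHC - ' + data[n][1];
  data[n][19] = data[n][1] == '' ? '' : 'Docs - ' + data[n][1];
}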
EDIT
There is actually a third solution that is even simpler (go figure why I didn't think of it in the first place...). You could save the ranges that have formulas; for example, if column M has formulas you want to keep, use:
var formulM = sheet.getRange('M1:M').getFormulas();
and then, at the end of the function (after the global setValues()), rewrite the formulas using:
sheet.getRange('M1:M').setFormulas(formulM);
to restore all the previous formulas... as simple as that. Repeat for every column where you need to keep the formulas.
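Put together, a minimal sketch of that third solution around the script's bulk write (the 'M1:M' range is an assumption; use whichever columns hold your formulas):

// Hedged sketch: snapshot the formula column, do the bulk write, then restore.
function writeBackPreservingFormulas(sheet, range, data) {
  var formulaRange = sheet.getRange('M1:M'); // assumed formula column
  var saved = formulaRange.getFormulas();    // snapshot before overwriting
  range.setValues(data);                     // the script's global write
  formulaRange.setFormulas(saved);           // put the formulas back
}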

how best to manage paging with subsonic 3.003

I'm really engaged with SubSonic, but I'm not sure how to make it work with paging. I mean, how can I get "the page" as a list, and what is the best way to manage the whole table in my database, page by page?
You'll see I tried three things:
m02colegio is a class generated by ActiveRecord
IList<m02colegio> loscolegios;
loscolegios = m02colegio.GetPaged(0, 80).ToList();
----------- and:
SubSonic.Schema.PagedList<m02colegio> loscolegios;
loscolegios = m02colegio.GetPaged(0, 80);
----------- and:
var paged = m02colegio.GetPaged(0, 80).All<m02colegio>(x => x.m02ccolnom.Contains(" "));
// 'cause I don't know how to tell it to consider all records
loscolegios = m02colegio.All().ToList();
but after every try I don't get any exception, and loscolegios is always null.
I need to access the records in this manner, so what is the best way? How can I get the first page, and then how do I advance among pages?
public ActionResult Index(int? page)
{
    if (!validateInt(page.ToString()))
        page = 0;
    else
        page = page - 1;
    if (page < 0) page = 0;

    const int pagesize = 9;
    IQueryable<m02colegio> Mym02colegio = m02colegio.All().Where(x => x.category == "test").OrderBy(x => x.id);
    ViewData["numpages"] = m02colegio.All().Where(x => x.category == "test").OrderBy(x => x.id).Count() / pagesize;
    ViewData["curpage"] = page;
    return View(new PagedList<m02colegio>(Mym02colegio, page ?? 0, pagesize));
}
That is in an MVC sense, but it gives you the idea: Index accepts null or a page number, you get all the records, then return a PagedList of the records you got.
I'm not sure if this is a bug that's been fixed in the current GitHub source or if it's by design, but I've found that GetPaged only works with a 1-based index for the first argument. So if you do the following, you should find it works as you'd expect:
IList<m02colegio> loscolegios = m02colegio.GetPaged(1, 80);
