Hello,
I'm publishing a Power BI report with an Azure Analysis Services cube as the source. When I open it for the first time in the Power BI service, all the visuals and measures load, which is fine. I wait until the end, but if I close the report and come back later, all the measures load again, and that is a real problem for my customers. I know I can schedule a cache refresh, but even if I do, the report still refreshes every time I open it.
Any idea how to solve this?
Thanks a lot.
If you are using a live connection, switching to Import mode should speed things up. This means that when getting the data from SSAS, you should select the Import option.
When importing data, your report will not work with the "live" data, but with a cached copy. This will speed things up, but you will need to schedule a refresh. However, with SSAS you are probably not working with truly live data anyway. Usually there is a nightly ETL that refreshes the data in your model, so you may want to schedule a refresh of your dataset at an appropriate time, after the SSAS processing has completed.
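If you want the imported dataset to refresh as soon as the nightly SSAS processing finishes, rather than at a fixed time, you can also trigger the refresh programmatically from the end of your ETL job. Here is a minimal sketch using the Power BI REST API refresh endpoint; the workspace and dataset IDs and the Azure AD token acquisition are placeholders you would need to fill in for your own tenant.

```python
# Minimal sketch: queue a refresh of the imported Power BI dataset once the
# nightly SSAS/ETL processing has finished. The workspace and dataset IDs and
# the Azure AD token below are placeholders -- obtain the token with MSAL or a
# service principal that has been granted access to the Power BI REST API.
import requests

ACCESS_TOKEN = "<azure-ad-access-token>"  # placeholder
GROUP_ID = "<workspace-id>"               # placeholder: workspace (group) GUID
DATASET_ID = "<dataset-id>"               # placeholder: imported dataset GUID


def trigger_refresh():
    """Ask the Power BI service to queue one refresh of the dataset."""
    url = (f"https://api.powerbi.com/v1.0/myorg/groups/{GROUP_ID}"
           f"/datasets/{DATASET_ID}/refreshes")
    response = requests.post(
        url,
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        json={"notifyOption": "MailOnFailure"},
    )
    response.raise_for_status()  # 202 Accepted means the refresh was queued


if __name__ == "__main__":
    trigger_refresh()
```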
You can read more about Live connection vs. Import here.
Unfortunately, I don't think you can switch this option for existing reports, so you may need to re-create your report. There is an idea for which you can vote, though.
Related
I have no knowledge of computer programming and I need a bit of help.
I'm using automate.io (a drag-and-drop integration tool) to take a new row in Excel and insert it into Salesforce. That bit works OK.
What I worry about is my Excel document: it is connected to a SQL Server database and auto-refreshes every minute. The problem is that the Excel document has to be open at all times for this auto-refresh to take place.
To combat this I used task scheduler to open the document at 7am even when there is no one logged in.
My question is: will this work, and is it reliable?
Will it work?
Only your testing can answer that.
Watch out for false positives, e.g. a new record in the database that is not picked up or not refreshed, and therefore not pushed to Salesforce in a timely manner.
Is it reliable?
Here are some ways to achieve what you want, in approximate descending order of reliability:
Get a third party to integrate the two databases directly (Salesforce and your SQL Server), with updates triggered by any change in the data in your SQL Server. There is a whole sub-industry of Salesforce integration businesses and individuals who would consider taking this on in return for money.
Get a standalone script (not Excel) running on a server near your database to monitor your DB for changes, and push new records to Salesforce via a direct API (a rough sketch of this approach appears at the end of this answer).
Get a standalone script (not Excel) running on a server near your database to monitor your DB for changes, and push new records to text files (not Excel) which are subsequently loaded into Salesforce.
Get Excel to refresh from your DB regularly via a data link (i.e. what you outlined), but have it push new records to text files (not Excel) which are subsequently loaded into Salesforce.
Get Excel to refresh from your DB regularly via a data link (i.e. what you outlined), and have it push new records to Salesforce via third-party software as a substitute for actual integration.
You will notice your proposed solution is at or near the end. The list may inspire you to determine what tweaks you might make to the process to move up the list a little ways, without changing the approach completely (unless you want to and can justify it).
Any tweak that removes a link in the dependency chain helps reliability. Shorter chains are generally more reliable. Right now your chain sounds something like: Database > server?/internet?/network? > Excel data link > Excel file > Task Scheduler > internet > automate.io > API > Force.com > Salesforce.
Transaction volume, mission criticality, and other subjective criteria will help guide you as to what is most appropriate in your situation.
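To make option 2 above more concrete, here is a rough sketch of a standalone script that polls SQL Server for new rows and pushes them to Salesforce directly, with Excel out of the chain entirely. It assumes the pyodbc and simple_salesforce packages; the table, columns, connection string and credentials are hypothetical placeholders, so treat it as an outline of the approach rather than a drop-in solution.

```python
# Rough sketch of option 2: poll SQL Server for new rows and push them to
# Salesforce via its API, with no Excel file in the chain. The table, columns,
# connection string and credentials below are hypothetical placeholders.
import time

import pyodbc
from simple_salesforce import Salesforce

SQL_CONN_STR = ("DRIVER={ODBC Driver 17 for SQL Server};"
                "SERVER=myserver;DATABASE=mydb;UID=reader;PWD=secret")


def fetch_new_rows(conn, last_seen_id):
    """Return rows inserted since the last id we processed."""
    cursor = conn.cursor()
    cursor.execute(
        "SELECT Id, FirstName, LastName, Email "
        "FROM dbo.NewLeads WHERE Id > ? ORDER BY Id",
        last_seen_id,
    )
    return cursor.fetchall()


def main():
    conn = pyodbc.connect(SQL_CONN_STR)
    sf = Salesforce(username="user@example.com",
                    password="password",
                    security_token="token")
    last_seen_id = 0
    while True:
        for row in fetch_new_rows(conn, last_seen_id):
            # Create one Lead in Salesforce per new database row.
            sf.Lead.create({
                "FirstName": row.FirstName,
                "LastName": row.LastName,
                "Email": row.Email,
                "Company": "Unknown",
            })
            last_seen_id = row.Id
        time.sleep(60)  # poll once a minute, like the Excel refresh did


if __name__ == "__main__":
    main()
```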
I am trying to capture a historical log of all the queries run against the data stored in Application Insights, for compliance purposes. You can view a history list of the queries that have been run, but I can't see a way to view or access the raw data behind this list.
From my investigation, the settings tab of the Log Analytics page says that this list is saved for 30 days:
Log Analytics query history is saved for 30 days globally.
Query history may be cleared using the “clear history” button.
I have tried searching through the settings and the online documentation, but found no mention of the history of queries run or where it can be found.
Does anyone know of a way to access this list of historical queries?
That's an interesting question, but AFAIK an export of the historical data is not possible, as it seems to be a feature of the Azure Portal that is not exposed by any of the Log Analytics or Application Insights REST APIs.
Digging into this further, I noticed that one of the network calls involves hitting the URL https://portal.loganalytics.io/. But the push now is towards moving to the Azure Portal, as discussed in this issue.
Nevertheless, I will still check with our internal teams and let you know if I have something more to share.
Hope this helps!
I'm new to Azure Mobile Services as well as mobile development.
From my experience in web development, retrieving data from the database is done part by part as the user requests more data, i.e. the website doesn't load all the data in one go.
I'm implementing this principle in a mobile app, where data is loaded (if already in the local DB) or downloaded (if not yet in the local DB) as the user scrolls down.
I'm using an Azure Mobile Services sync table to handle the loading of data in the app. However, I won't be able to paginate the downloading of data. According to this post, the PullAsync method downloads all data that has been changed or added since its last sync and doesn't allow using take/skip methods, because PullAsync uses incremental sync.
This means there will be a large download of data on the very first launch of the app, or if the app hasn't been online for a while, even if the user hasn't requested that data (i.e. scrolled to it).
Is this a good way of handling data in mobile apps? I like using SyncTable because it handles a lot of the important upload/download work, e.g. queuing uploads and syncing data changes. I'm just concerned about downloading data that the user doesn't need yet.
Or maybe there's something I can do to limit the items PullAsync downloads (aside from deleted = false and UserId = current user's UserId)?
Currently, I limit the calls to PullAsync to the loading screen after the user logs in and to when the user pulls to refresh.
Mobile development is very different from web development. While loading lots of data to a stateless web page is a bad thing, loading the same data to a mobile app might actually be a good thing. It can help app performance and usability.
The main purpose of using something like the offline data storage is for occasionally disconnected scenarios. There are always architectural tradeoffs that have to be considered. "How much is too much" is one of those tradeoffs. How many roundtrips to the server is too much? How much data transfer is too much? Can you find the right balance of the data that you pass to the mobile device? Mobile applications that are "chatty" with the servers can become unusable when the carrier signal is lost.
In your question, you suggest "maybe there's something I can do to limit the items PullAsync downloads". In order to avoid the large download, it may make sense to design your application to allow the user to set criteria for the download. If UserId doesn't make sense, maybe a Service Date or a number of days forward or back in the schedule. Finding the right "partition" of data to load to the device will be a key consideration for the usability of your app...both online and offline.
There is no one right answer for your solution. However, key considerations should be bandwidth, data plan limits, carrier coverage and user experience both connected and disconnected. Remember...your mobile app is "stateful" and you aren't limited to round-trips to the server for data. This means you have a bit of latitude to do things you wouldn't on a web page.
I am new to this.
I built a pivot report in Excel 2007 on SSAS. It connects to a cube on my local PC. Now I want to send this pivot report to other people so that they can view it and do some analysis by themselves (expanding year-month-day, etc.).
When my colleague tried, he couldn't expand.
How can I achieve this?
Thank you,
Nian
Your colleague needs to be able to access the cube in order to refresh it. This means that your cube should be on a shared machine (like a server). I would recommend putting the cube on a server, setting up a read-only database login, and configuring the Excel file to use that username/password. You may be able to make your local machine accessible, but I don't have experience with this and I would advise against it anyhow (your users wouldn't be able to refresh the cube whenever your computer is off the network).
Also, even if you send them the file with data cached from the cube, only so much data gets cached. When your colleague expands items, Excel only needs to request data from the cube (on your machine/server) if that particular data is not already cached. The same may happen when creating filters.
I'm looking for a way of programmatically exporting Facebook insights data for my pages, in a way that I can automate it. Specifically, I'd like to create a scheduled task that runs daily, and that can save a CSV or Excel file of a page's insights data using a Facebook API. I would then have an ETL job that puts that data into a database.
I checked out the OData service for Excel, which appears to be broken. Does anyone know of a way to programmatically automate the export of insights data for Facebook pages?
It's possible and not too complicated once you know how to access the insights.
Here is how I proceed:
Log in the user with the offline_access and read_insights permissions.
read_insights allows me to access the insights for all the pages and applications the user is admin of.
offline_access gives me a permanent token that I can use to update the insights without having to wait for the user to login.
Retrieve the list of pages and applications the user is an admin of, and store those in the database.
When I want to get the insights for a page or application, I don't query FQL, I query the Graph API: first I calculate how many queries to graph.facebook.com/[object_id]/insights are necessary, according to the date range chosen. Then I generate a query to use with the Batch API (http://developers.facebook.com/docs/reference/api/batch/). That allows me to get all the data for all the available insights, for all the days in the date range, in only one request (a rough sketch of such a call is shown below).
I parse the rather huge JSON object obtained (which can weigh a few MB, be aware of that) and store everything in the database.
Now that you have all the insights parsed and stored in database, you're just a few SQL queries away from manipulating the data the way you want, like displaying charts, or exporting in CSV or Excel format.
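To make the batch call described above a little more concrete, it can look roughly like the sketch below. The page ID, access token, metric names and dates are placeholders, and the Graph API has changed considerably over time (offline_access no longer exists; long-lived page tokens are used instead), so take this as an outline of the batching idea rather than working production code.

```python
# Rough sketch of the batched insights request described above. The page id,
# access token, metric names and date range are hypothetical placeholders.
import json

import requests

ACCESS_TOKEN = "<page-or-user-access-token>"   # placeholder
PAGE_ID = "<page-id>"                          # placeholder

# One sub-request per metric/date window; the Batch API accepts up to 50 of them.
batch = [
    {"method": "GET",
     "relative_url": f"{PAGE_ID}/insights/page_impressions"
                     "?since=2023-01-01&until=2023-01-31"},
    {"method": "GET",
     "relative_url": f"{PAGE_ID}/insights/page_fans"
                     "?since=2023-01-01&until=2023-01-31"},
]

response = requests.post(
    "https://graph.facebook.com",
    data={"access_token": ACCESS_TOKEN, "batch": json.dumps(batch)},
)
response.raise_for_status()

# Each element of the batch response wraps one sub-request's JSON body as a string.
for item in response.json():
    body = json.loads(item["body"])
    for metric in body.get("data", []):
        print(metric["name"], metric.get("values", []))
```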
I have the code already made (and published as a temporarily free tool on www.social-insights.net), so exporting to Excel would be quite fast and easy.
Let me know if I can help you with that.
It can be done before the week-end.
You would need to write something that uses the Insights part of the Facebook Graph API. I haven't seen something already written for this.
Check out http://megalytic.com. This is a service that exports FB Insights (along with Google Analytics, Twitter, and some others) to Excel.
A new tool is available: the Analytics Edge add-ins now have a Facebook connector that makes downloads a snap.
http://www.analyticsedge.com/facebook-connector/
There are a number of ways that you could do this. I would suggest your choice depends on two factors:
What is your level of coding skill?
How much data are you looking to move?
I can't answer 1 for you, but in your case you aren't moving that much data (in relative terms). I will still share three options of many.
HARD CODE IT
This would require a script that accesses Facebook's Graph API, and a computer/server to run that script automatically. I primarily use AWS and would suggest launching an EC2 instance and scheduling it to run your script at set times. I haven't used AWS Data Pipeline, but I do know that it is designed so that you can have it run a script automatically as well... supposedly with a little less server know-how.
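As a rough sketch of what such a script might look like (the page ID, token, metric name and paths are placeholders, and the scheduling is left to cron or a scheduled task on the instance):

```python
# Rough sketch of the "hard code it" option: pull yesterday's page insights
# from the Graph API and append them to a CSV file, to be run once a day by
# cron or a scheduled task. Page id, token, metric and paths are placeholders.
import csv
import datetime

import requests

ACCESS_TOKEN = "<long-lived-page-access-token>"  # placeholder
PAGE_ID = "<page-id>"                            # placeholder
METRIC = "page_impressions"                      # placeholder metric name


def export_yesterday(csv_path="insights.csv"):
    until = datetime.date.today()
    since = until - datetime.timedelta(days=1)
    response = requests.get(
        f"https://graph.facebook.com/{PAGE_ID}/insights/{METRIC}",
        params={"access_token": ACCESS_TOKEN,
                "since": since.isoformat(),
                "until": until.isoformat(),
                "period": "day"},
    )
    response.raise_for_status()
    with open(csv_path, "a", newline="") as f:
        writer = csv.writer(f)
        for metric in response.json().get("data", []):
            for value in metric.get("values", []):
                writer.writerow([metric["name"], value["end_time"], value["value"]])


if __name__ == "__main__":
    # Example cron entry (hypothetical path), running the export at 06:00 daily:
    #   0 6 * * * /usr/bin/python3 /opt/scripts/export_insights.py
    export_yesterday()
```

Your ETL job could then pick up the CSV and load it into your database, as you described.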
USE THIRD PARTY ADD-ON
There are a lot of people who have similar data needs, which has led to a number of easy-to-use tools. I use Supermetrics Free to run occasional audits and make sure that our tools are running properly. Supermetrics is fast and has a really easy interface for accessing Facebook's APIs and several others. I believe that you can also schedule refreshes and updates with it.
USE THIRD PARTY FULL-SERVICE ETL
There are also several services and freelancers that can set this up for you with little to no work on your own, depending on where you want the data. Stitch is a service I have worked with for FB ads. There might be better services, but it has fulfilled our needs for now.
MY SUGGESTION
You would probably be best served by using a third-party add-on like Supermetrics. It's fast and easy to use. The other methods might be more worth looking into if you had a lot more data to move, or needed it to be refreshed more often than daily.