I was helping my colleague figure out how to call the MS Graph API to acquire an access token and to get drives (not OneDrive), folders, and files.
He needs to make the calls from Dynamics Business Central, and it seems to be working, but he is experiencing very poor performance: 15-20 seconds to acquire an access token and list 3 folders, each with 1-2 files in them (retrieving only the IDs and names, not content).
I'm not entirely sure at this point how his code looks, but before I try to dig into that, I would like to hear if anyone else has experienced this poor performance or knows what could be causing it.
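For reference, the flow he's implementing is roughly the following (a minimal Python sketch of the same calls, not his actual code; the tenant, client, and drive values are placeholders). One thing I plan to check is whether the token is cached and reused until it expires rather than re-acquired on every call:

```python
import requests

TENANT_ID = "your-tenant-id"          # placeholder
CLIENT_ID = "your-client-id"          # placeholder
CLIENT_SECRET = "your-client-secret"  # placeholder

def get_token():
    # Client-credentials flow against Azure AD. The token is valid for
    # roughly an hour (see expires_in in the response), so it should be
    # cached and reused, not requested once per Graph call.
    resp = requests.post(
        f"https://login.microsoftonline.com/{TENANT_ID}/oauth2/v2.0/token",
        data={
            "grant_type": "client_credentials",
            "client_id": CLIENT_ID,
            "client_secret": CLIENT_SECRET,
            "scope": "https://graph.microsoft.com/.default",
        },
    )
    resp.raise_for_status()
    return resp.json()["access_token"]

def list_children(token, drive_id, item_path="root"):
    # List only the IDs and names of items in a drive folder.
    resp = requests.get(
        f"https://graph.microsoft.com/v1.0/drives/{drive_id}/{item_path}/children",
        headers={"Authorization": f"Bearer {token}"},
        params={"$select": "id,name"},
    )
    resp.raise_for_status()
    return resp.json()["value"]

token = get_token()
for item in list_children(token, "your-drive-id"):
    print(item["id"], item["name"])
```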
Thank you.
Hope you lovely gentlemen can help x
I have three folders of Word documents. These need to be accessed by workers on site via tablets/phones.
I created a Power App for internal users, the issue being that some of the workers are contracted to us and may not have MS licenses on their phones.
Is there a better way of providing access to the latest version of these documents to employees, external and internal? Ideally the external ones may be given a login. The information isn't really sensitive; however, I wouldn't want just anyone being able to access it.
I looked at Power Pages, but there is a subscription fee. I don't know if there are any apps or other solutions I am missing?
Really appreciate your help
Lucy x
Tried Power Apps; however, some users don't have licenses.
Our company has an on-prem file server that I'd like to move to the cloud. I followed these directions and was successfully able to map a drive on my local work computer to connect to an Azure File Share. Our company has about 20 locations, ~5 TB of data (mostly "office" type of files) in total, and about 500 users accessing them.
There are two issues I would like to improve but I'm not sure how:
There's somewhat of a lag when opening files. Other than increasing our office's internet speed, is there anything to be done to make it faster? Would some kind of site-to-site VPN help? Would adding some type of server or VM "in the middle" (maybe one per location?) that could somehow cache the files reduce the lag?
Also, we have and use an Office 365 subscription. What's the easiest way to use our existing AD structure to transfer over the NTFS permissions that are currently in place?
I Googled around and found a bunch of companies advertising their services, notable among them was Talon Storage. But it seems like something that could be done without hiring a company. What I'm hoping for is a DIY direction to optimally solve these issues. Perhaps there's a standard or commonly recommended solution for such issues. Any guidance would be greatly appreciated.
L-A-T-E-N-C-Y. The number one enemy of any cloud-based file server attempt. It ranges from annoying to downright unusable, depending on how far you are from the Azure datacenter of choice.
Imagine a poor soul trying to "stream" a large 20-meg Excel file with 20 references to external files. What used to take maybe 8 seconds on-prem will now take 40 in the cloud (on a good day). It's game over for productivity. Your marketing department that sometimes used to cut video in iMovie over the network? Those days are over.
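To put rough, illustrative numbers on that (these are assumptions, not measurements): SMB is a chatty protocol, and opening a file with many external references can generate on the order of 1,000 small round trips. At ~1 ms of LAN latency that's about a second of protocol overhead; at ~35 ms to a remote Azure region the same chatter costs ~35 seconds, no matter how much bandwidth you buy.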
I understand this is not the answer you were after, but it's the crude reality.
Don't panic, there are solutions. Here's a good one: https://azure.microsoft.com/en-us/services/storsimple/
I'm sure you wanted to get rid of boxes not buy more, but it is what it is.
I'm looking for a way of programmatically exporting Facebook insights data for my pages, in a way that I can automate it. Specifically, I'd like to create a scheduled task that runs daily, and that can save a CSV or Excel file of a page's insights data using a Facebook API. I would then have an ETL job that puts that data into a database.
I checked out the OData service for Excel, which appears to be broken. Does anyone know of a way to programmatically automate the export of insights data for Facebook pages?
It's possible and not too complicated once you know how to access the insights.
Here is how I proceed:
Log the user in with the offline_access and read_insights permissions.
read_insights allows me to access the insights for all the pages and applications the user is admin of.
offline_access gives me a permanent token that I can use to update the insights without having to wait for the user to log in.
Retrieve the list of pages and applications the user is admin of, and store those in the database.
When I want to get the insights for a page or application, I don't query FQL, I query the Graph API: first I calculate how many queries to graph.facebook.com/[object_id]/insights are necessary, according to the date range chosen. Then I generate a query to use with the Batch API (http://developers.facebook.com/docs/reference/api/batch/). That allows me to get all the data for all the available insights, for all the days in the date range, in only one query (see the sketch after these steps).
I parse the rather huge JSON object obtained (it weighs a few MB, be aware of that) and store everything in the database.
Now that you have all the insights parsed and stored in the database, you're just a few SQL queries away from manipulating the data the way you want, like displaying charts or exporting in CSV or Excel format.
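As a minimal sketch of that batched call (Python, assuming a stored long-lived token; the page ID and date ranges are placeholders, and the exact metrics returned depend on what you request):

```python
import json
import requests

ACCESS_TOKEN = "your-long-lived-token"  # placeholder
PAGE_ID = "your-page-id"                # placeholder

# One sub-request per slice of the date range; the Batch API accepts
# up to 50 sub-requests per call, so long ranges may need several batches.
batch = [
    {
        "method": "GET",
        "relative_url": f"{PAGE_ID}/insights?since=2012-01-01&until=2012-01-31",
    },
    {
        "method": "GET",
        "relative_url": f"{PAGE_ID}/insights?since=2012-02-01&until=2012-02-29",
    },
]

resp = requests.post(
    "https://graph.facebook.com/",
    data={"access_token": ACCESS_TOKEN, "batch": json.dumps(batch)},
)
resp.raise_for_status()

# Each batch entry holds an HTTP status code and a JSON string body.
for entry in resp.json():
    if entry["code"] == 200:
        data = json.loads(entry["body"])["data"]
        # ...parse each metric and store it in the database here...
        print(len(data), "metrics in this slice")
```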
I have the code already made (and published as a temporarily free tool on www.social-insights.net), so exporting to Excel would be quite fast and easy.
Let me know if I can help you with that.
It can be done before the weekend.
You would need to write something that uses the Insights part of the Facebook Graph API. I haven't seen anything already written for this.
Check out http://megalytic.com. This is a service that exports FB Insights (along with Google Analytics, Twitter, and some others) to Excel.
A new tool is available: the Analytics Edge add-ins now have a Facebook connector that makes downloads a snap.
http://www.analyticsedge.com/facebook-connector/
There are a number of ways that you could do this. I would suggest your choice depends on two factors:
What is your level of coding skill?
How much data are you looking to move?
I can't answer #1 for you, but in your case you aren't moving that much data (in relative terms). I'll still share three options out of many.
HARD CODE IT
This would require a script that accesses Facebook's Graph API, and a computer/server to process that request automatically.
I primarily use AWS and would suggest that you launch an EC2 instance and have it scheduled to run your script at set times (a rough sketch follows). I haven't used AWS Data Pipeline, but I do know that it is designed so that you can have it run a script automatically as well, supposedly with a little less server know-how.
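For instance, something along these lines (a hedged Python sketch; the token, page ID, and output filename are placeholders, and values may be scalars or dicts depending on the metric):

```python
import csv
import datetime
import requests

ACCESS_TOKEN = "your-page-token"  # placeholder
PAGE_ID = "your-page-id"          # placeholder

def export_yesterdays_insights(path):
    # Pull one day of page insights from the Graph API and dump to CSV,
    # ready for a downstream ETL job to pick up.
    until = datetime.date.today()
    since = until - datetime.timedelta(days=1)
    resp = requests.get(
        f"https://graph.facebook.com/{PAGE_ID}/insights",
        params={
            "access_token": ACCESS_TOKEN,
            "since": since.isoformat(),
            "until": until.isoformat(),
        },
    )
    resp.raise_for_status()
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["metric", "period", "end_time", "value"])
        for metric in resp.json()["data"]:
            for point in metric["values"]:
                # Some metrics return dict values; a real ETL job would
                # flatten those before loading.
                writer.writerow([metric["name"], metric["period"],
                                 point.get("end_time"), point.get("value")])

export_yesterdays_insights(f"insights_{datetime.date.today()}.csv")
```

A cron entry on the instance such as `0 6 * * * python export_insights.py` (the script name here is hypothetical) would then produce a fresh CSV every morning for the ETL job to pick up.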
USE THIRD PARTY ADD-ON
There are a lot of people who have similar data needs, which has led to a number of easy-to-use tools. I use Supermetrics Free to run occasional audits and make sure that our tools are running properly. Supermetrics is fast and has a really easy interface for accessing Facebook's APIs and several others. I believe that you can also schedule refreshes and updates with it.
USE THIRD PARTY FULL-SERVICE ETL
There are also several services or freelancers that can set this up for you with little to no work on your own, depending on where you want the data. Stitch is a service I have worked with on FB ads. There might be better services, but it has fulfilled our needs for now.
MY SUGGESTION
You would probably be best served by using a third-party add-on like Supermetrics. It's fast and easy to use. The other methods might be more worth looking into if you had a lot more data to move, or needed it to be refreshed more often than daily.
This may not be the most technical question, but I was just interested, nonetheless...
How does a giant company like Google keep its code from being stolen by employees? Maybe I'm wrong, but I would assume that the source code to their search algorithms (among other things) would be valuable to their competitors (e.g. Microsoft).
I guess I can best phrase it like this:
What's keeping an unscrupulous employee who has sufficient clearance from accessing Google's code repository for a specific project, copying significant amounts of code to a flash drive, and taking it to their competitors?
Fear of being sued?
Things within a company like Google are also compartmentalized, so not everybody has access to all the code. If someone has access to code, you can bet that Google knows when they access it. I'm sure they have some kind of algorithm that watches for somebody downloading a lot of files very fast. The search algorithm obviously isn't a small file; it is a gigantic application.
All this would allow them to track who stole the code from within. There is also the fact that any self-respecting company, or any company with something to lose (e.g. Microsoft), would not take anything like this from somebody. They would probably even tell Google about it.
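As a toy illustration of the kind of download-rate monitoring mentioned above (the log format, names, and threshold are invented for the example):

```python
from collections import Counter

# Hypothetical access log: (user, file) pairs for a one-hour window.
access_log = [
    ("alice", "search/ranker.cc"),
    ("bob", "ads/model.py"),
    # ...thousands more entries in a real system...
]

HOURLY_THRESHOLD = 200  # made-up cutoff for "downloading a lot, fast"

downloads_per_user = Counter(user for user, _ in access_log)
for user, count in downloads_per_user.items():
    if count > HOURLY_THRESHOLD:
        print(f"flag for review: {user} fetched {count} files this hour")
```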
It's called protocol: only a few people get to know the code, and each of those few has to tell the others a major, very embarrassing secret. Then nobody can tell, or else they get outed in public. The secret can be anything from something as trivial as an embarrassing preference all the way up to having killed somebody.
Many employers, including one that I've worked for, completely block flash drives.
In many cases, though, this is to protect non-technical confidential information.
Companies that are serious about protecting their assets will have access logging on their core systems and active scanning to detect suspicious patterns. Similar security is implemented for employees of government agencies (e.g. tax, social security) holding sensitive personal information. Users who access data outside of their assigned cases can be flagged and investigated.
I suspect (but don't know) that similar scanning could be implemented in high value source code repositories.
Some organizations block the use of removable media (it has been reported that some agencies have reacted to WikiLeaks with such policies), in some cases by physically gluing up the USB/media ports. This restricts potential thieves to network transfers of material, which can be scanned.
I think companies such as Google implement access control on their source code repository / version control system, so an employee would only be able to access source code they were involved with, and their access to a previous repository could be revoked when they're assigned to a different project. It's the same thing with normal internal documents: would a security-conscious company let any employee freely download documents?
I think codethis hit the nail on the head. Some fly-by-night operation may be interested, but Microsoft, Yahoo, etc. wouldn't touch stolen code with a ten-foot pole, and the fly-by-night wouldn't have the infrastructure. If you didn't tell anybody it was stolen, it's not like you could get away with walking into a company with an entire spider/search algorithm on your thumb drive and declaring you wrote it last week.
The bigger threat is details of the search algorithm getting out. SEOers, as a whole, are rather shady, and many would kill for solid facts about how the algorithm ranks or downranks pages. Even then, Google has demonstrated the ability to change its ranking algorithms so quickly that it wouldn't much matter.
On the other hand, Google doesn't have that much super-secret code. Most of their cool stuff (MapReduce et al.) is publicly available (see Hadoop). This question is probably more applicable to a company like Adobe. Some of their Photoshop algorithms are really cool and would probably hurt them if they got out, but again, no legit company would touch it.
For the better part of 10+ years we have relied on various network mapped drives for file sharing: one drive letter for sharing files between teams, a separate file share for the entire organization, a third for personal use, etc. I would like to move away from this and am trying to decide whether an ECM/SharePoint type solution, or a home-grown app, is worth the cost and the way to go, or whether we should simply keep relying on login scripts/mapped drives for file sharing due to their relative simplicity. Does anyone have any experience within their own organization, or thoughts on this?
Thanks.
SharePoint is very good at document sharing.
Documents generally follow a process for approval, have permissions, and live in clusters, and these things lend themselves well to SharePoint's document libraries.
However, there are some things that don't lend themselves well to living inside SharePoint. Do you have a virtual hard drive (.vhd) file that you want to share with a workmate? Not such a good idea to try to put a 20 GB file into SharePoint.
SharePoint can handle large files, and so can SQL Server behind it... but do you want your SQL Server bandwidth being saturated by such large files? Do you want your backup of SQL Server to hold copies of such large files multiple times?
I believe that there are a few Microsoft partners who offer the ability to disassociate file blobs from the SharePoint database, so that SharePoint can hold the metadata and a file system holds the actual files, and SharePoint simply becomes the gateway to manage access, permissions, and offer a centralised interface to files throughout an organisation. This would offer you the best of both worlds.
Right now though, I consider SharePoint ideal for documents, and I keep large files (that are not document centric) on Windows file shares.
Definitely use a tool.
The main benefit here is version control: being able to jump easily to a previous version, diffing, and seeing who modified what (see most VCSs' blame/annotate tool, which prints out the text file showing when and by whom each line was modified).
Second, you can probably benefit from issue tracking/task tracking.
Other benefits include web access from the internet, having a wiki (which can be great in some situations), etc.
I use Subversion + Redmine at work and find them highly useful. Test a few solutions and you will surely find further advantages for yourself.
One thing that can be overlooked in the change to a document management tool is the planning required around how much is going to be stored, and information architecture issues like where different content is going to end up.
SharePoint in particular is easy to set up without a good plan going forward, and is particularly vulnerable to difficulties later on when things get too busy.
I would not recommend a home-grown app for something like this. The problem has been solved by off-the-shelf tools, and growing one from scratch is going to cost a huge amount and not get you anywhere near the features for the money.
Did I mention how important planning your security groups and document areas (IA) was?
If you need just document storage then SharePoint can do very well. WSS is even free, and it provides very good document storage capabilities.
But you have to plan carefully, as updating existing applications is painful. If you decide to go with SharePoint, I can give you a few pieces of advice off the top of my head:
Pay attention to security configuration (user groups, privileges, ...)
Plan your document libraries well, as it is not easy to just move documents between them
Also consider limiting the number of versions that one document can have, because SharePoint stores full copies of each version, not just the changes
Don't use InfoPath :) we have very bad experience with it (just don't tell this to the managers)
If you don't really need to change the graphical look of SharePoint then don't bother with it, as it brings many problems (I'm talking about custom master pages and custom site templates)
Try to use as much OOB (out-of-the-box) stuff as possible, because developing your own web parts not only costs more, it can also be quite complicated
Make sure to turn on search indexing. This is quite tricky, because it is turned off by default, and then you will be as surprised as I was that search is not working :)
If you just deploy it and load 10,000 documents into it, you will surely have problems later. If you give a little thought to structure, you will end up with really good document storage.
Migrating is very probably worth the cost in the long term. You will gain reliability, versioning, traceability, and extensibility.
Be sure to first identify the groups/rights, and to identify which links need to be fixed (maybe you have applications that use links to the shares).
An open source alternative to SharePoint is Alfresco; it is very good with CIFS (Windows shares) too.