Is it possible to somehow set up WMS to stream content from a database using only Windows Server 2003 Standard Edition?
I know it can be done with a custom plug-in data source, but that is only supported on Windows Server 2003 Enterprise Edition.
Sorry to be the bearer of bad news, but your best bet is probably:
Maintain copies of your data outside of the database (I don't know the details of your setup, so this may defeat the purpose of what you're trying to do). Basically, keep a file cache of the content you want to stream and write an application to keep the file cache synchronized with the database.
Switch to Windows Server 2008. The Web Server and Standard editions for 2008 support custom plugins and should be much more affordable than the Enterprise editions (I believe the Web Server edition is under $500).
Maybe someone else has some clever solution, but these are the only options I'm aware of.
Nothing ships out of the box, but WMS does support additional custom plug-ins. From a performance perspective, though, you might want to question why you need to stream from the database at all. The easiest way, in my mind, would be to write an HTTP wrapper around the database and use the built-in HTTP streaming data source; the knowledge and skills required to write an HTTP wrapper are far more plentiful than those required to write WMS plug-ins. If you do have a genuine use case for this, consider SQL Server 2008's FILESTREAM feature, as it is designed to give you relational power with file-system performance.
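To make the HTTP-wrapper idea concrete, here is a minimal sketch in PHP. The DSN, table, and column names (media, mime_type, content) are assumptions to adapt to your own schema and driver, and a production wrapper would stream large blobs in chunks rather than loading them into memory.

```php
<?php
// Minimal sketch of an HTTP wrapper over the database, so that WMS's
// built-in HTTP streaming data source can pull content from it.
// The DSN, table name and column names below are assumptions.
$pdo = new PDO('sqlsrv:Server=localhost;Database=MediaDb', 'user', 'password');

$id   = isset($_GET['id']) ? (int) $_GET['id'] : 0;
$stmt = $pdo->prepare('SELECT mime_type, content FROM media WHERE id = ?');
$stmt->execute(array($id));
$row = $stmt->fetch(PDO::FETCH_ASSOC);

if ($row === false) {
    header('HTTP/1.0 404 Not Found');
    exit;
}

header('Content-Type: ' . $row['mime_type']);       // e.g. video/x-ms-wmv
header('Content-Length: ' . strlen($row['content']));
echo $row['content'];                               // send the blob to WMS
```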
Can anyone help me fetch data from a Power BI endpoint without using PowerShell? I want a way to fetch it directly, on Linux only.
I know PowerShell can be installed on Linux, but is there any way to skip it and fetch the data directly?
reference - https://learn.microsoft.com/en-us/power-bi/admin/service-premium-connect-tools
Your Power BI XMLA endpoint is accessible through the Azure Analysis Services (AAS) instance tied to the given data source/workspace, which means you should be able to connect to that AAS instance and work with the data there over the web. I am not aware of any currently available Linux-compatible tools that allow this. I did a bit of research and was surprised to find that there is no VS Code extension for it (might have to get to work on that ;)).
That being said, Microsoft has several client libraries (for both AMO and ADOMD.NET) built on .NET Core that could theoretically be used by a client application built for a supported Linux OS (Microsoft doc here). In other words (again, theoretically), it should be relatively painless to build a simple tool for a supported Linux OS that takes in XMLA commands and executes them against a provided connection.
EDIT: Another good option to consider is Microsoft's Power BI REST API (documentation here). If the functionality you are looking for is available in the REST API, you can write a client tool targeting Linux (using one of many stacks; .NET Core is still an option) that uses the API for your Power BI instance in place of the XMLA endpoint. I would consider this the better alternative: it is a less 'Microsoft-y' way of doing things and will be much easier to maintain and develop over time. I would start by confirming whether the functionality you want is available in this API.
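For example, listing the datasets in a workspace from a Linux box needs nothing more than an HTTP client. The sketch below uses PHP with curl; it assumes you have already obtained an Azure AD access token for the Power BI service (for instance via a service principal), which is not shown, and the workspace ID is a placeholder.

```php
<?php
// Hedged sketch: calling the Power BI REST API directly from Linux,
// no PowerShell required. Azure AD token acquisition is not shown;
// the workspace (group) ID below is a placeholder.
$accessToken = getenv('POWERBI_ACCESS_TOKEN');
$groupId     = '00000000-0000-0000-0000-000000000000';

$ch = curl_init("https://api.powerbi.com/v1.0/myorg/groups/$groupId/datasets");
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_HTTPHEADER, array("Authorization: Bearer $accessToken"));

$response = curl_exec($ch);
curl_close($ch);

$datasets = json_decode($response, true);
print_r($datasets['value']);   // the datasets in that workspace
```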
EDIT: After reading further in the above-linked document on the AMO and ADOMD.NET client libraries:
TCP based connectivity is supported for Windows computers only.
Interactive login with Azure Active Directory is supported for Windows computers only. The .NET Core Desktop runtime is required.
So there are currently some limitations to these libraries when targeting a Linux runtime. I am not certain whether something other than TCP-based connectivity could accomplish this, but if I find a way (or someone suggests one), I will update.
I've been assigned to find a way to connect a proprietary ProvideX database to a running web application developed in PHP on OS X. My thinking is that if there is a way to query data from ProvideX, the web app could pull live data and update itself. ODBC looks like the most promising solution I've found.
The question is: is there a Linux ODBC driver for ProvideX so the web app can communicate with the ProvideX database? I know there is one for Windows, since ProvideX was designed to work with Windows systems.
Any thoughts, or a write-up I could read to find out more about this issue?
Don't try to go strictly through the ODBC driver. It works nicely if you're just looking at the data in an ODBC-compliant application or service, but for web applications PxPlus offers a different way to access the database. Look for the PxPlus Web Server, which may or may not be included in your installation; a rough sketch of that approach is below.
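As a very rough illustration, the idea is to have the PxPlus Web Server (running alongside the ProvideX data) expose the records over HTTP and have the PHP app consume that instead of talking ODBC. Everything in this sketch (host, port, program name, field names, JSON output) is a placeholder that depends entirely on how you set things up on the PxPlus side.

```php
<?php
// Hypothetical sketch: consuming ProvideX data over HTTP via the PxPlus
// Web Server instead of a (non-existent) OS X/Linux ODBC driver.
// The URL and field names are placeholders.
$url  = 'http://pxplus-host:8000/your_pxplus_program?format=json';
$json = file_get_contents($url);

if ($json === false) {
    die('Could not reach the PxPlus web server');
}

$rows = json_decode($json, true);
foreach ($rows as $row) {
    // use the live ProvideX data in the web app
    echo $row['item_code'] . ': ' . $row['description'] . "\n";
}
```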
I am a network administrator and a former software engineer.
I want to build my own program to keep track of IP addresses, equipment, and so on. Since our company has fewer than
100 pieces of equipment (including PCs and printers), the amount of data to process is small. Can anyone suggest which language and platform would suit my needs best?
Hmm... if it were me, I would use PHP and MySQL for the data backend (CRUD operations), with HTML, CSS, and JavaScript for the front-end UI. This requires Apache, MySQL, and PHP to be installed, all of which are available on any platform (Windows, OS X, Linux, etc.). A rough sketch of the backend piece is below.
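For what it's worth, the backend piece is only a handful of lines. This sketch uses PDO against a hypothetical inventory database with a single equipment table (id, name, ip_address, type); adapt the schema to whatever you actually need to track.

```php
<?php
// Rough sketch of the PHP/MySQL backend suggested above.
// Database, table and column names are assumptions.
$pdo = new PDO('mysql:host=localhost;dbname=inventory', 'user', 'password');

// Create: add a piece of equipment
$stmt = $pdo->prepare('INSERT INTO equipment (name, ip_address, type) VALUES (?, ?, ?)');
$stmt->execute(array('Front-desk printer', '192.168.1.42', 'printer'));

// Read: list everything for the HTML/CSS/JavaScript front end to render
foreach ($pdo->query('SELECT id, name, ip_address, type FROM equipment') as $row) {
    echo "{$row['id']}: {$row['name']} ({$row['ip_address']})\n";
}

// Update and Delete follow the same prepared-statement pattern
$pdo->prepare('UPDATE equipment SET ip_address = ? WHERE id = ?')->execute(array('192.168.1.43', 1));
$pdo->prepare('DELETE FROM equipment WHERE id = ?')->execute(array(1));
```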
If it's just for your own local use, create an Access database.
If it's web-based, this is a fast and simple way ... XML-based CRUD
I'm hoping someone can validate or correct my conclusions here.
I'm looking into writing a small side project. I want to create a desktop application for taking notes that will synchronise to a web-server so that multiple installations can be kept in step and data shared and also so that it can be accessed via a browser if necessary.
I've been half-listening to the noise about CouchDB: I've heard mention of "offline functionality", of desktop-couchdb, and of moves to utilise its ability to handle intermittent communications to enable distributed applications in the mobile market. All of this led me to believe it might be an interesting option for providing my data storage and handling my synchronisation needs. After spending some time looking for information on how to get started, though, my conclusion is that I've got completely the wrong end of the stick and the reality is that:
There's no way of packaging up a CouchDB instance, distributing it as part of a desktop application and running it in the context of that application to provide local storage and synchronisation to a central database.
Am I correct here? If so is there any technology out there that does this sort of thing or am I left just rolling my own local storage and maybe still using CouchDB on the server?
Update (2012/05): check out the new TouchDB projects from Couchbase if you are targeting Mac OS X and/or iOS or Android. These actually use SQLite under the hood (at least for now) but can replicate to/from a "real" CouchDB server. Another client-side alternative that is finally starting to mature is PouchDB, which runs in IndexedDB-capable browser engines. Using these, or using them as inspiration for a similar port to another desktop platform, is now a better-trodden path.
Original answer:
There's no way of packaging up a CouchDB instance, distributing it as part of a desktop application and running it in the context of that application to provide local storage and synchronisation to a central database.
At this point in time, your statement is practically correct although it is possible to include CouchDB in an app — for an example see CouchDBX.app which is a thin wrapper around a prefixed bundle of CouchDB and all its dependencies.
The easiest way to build a CouchDB app is to assume that the user will already have a CouchDB server running. This is easier than it sounds, especially with CouchOne's hosting or a prebuilt app like CouchDBX on OS X or DesktopCouch on Ubuntu. The latter is especially interesting because, if I understand correctly, it is included by default with Ubuntu these days and automatically spins up a per-user CouchDB server when you query its port via D-Bus. Something similar could (and should) be done on OS X using launchd and Bonjour.
So, as you write, you would either design your app to store data in a local format and optionally sync with a CouchDB service you provide, or you'd have to build and bundle all of Erlang, SpiderMonkey and CouchDB together with your app, along with some scripts to make sure it was running when needed. This is possible, but obviously neither of these is ideal, and believe me, you're not the only one wanting a simpler solution for desktop-oriented apps! (A minimal sketch of the sync call follows.)
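If you do go the "local CouchDB plus a central server" route, the synchronisation itself is just an HTTP call to the local instance's _replicate endpoint. A minimal sketch in PHP, with host names, database names and credentials as placeholders:

```php
<?php
// Minimal sketch: triggering CouchDB replication between a local instance
// (bundled, DesktopCouch, CouchDBX, ...) and a central server.
// Hosts, database names and credentials are placeholders.
function replicate($source, $target) {
    $ch = curl_init('http://localhost:5984/_replicate');
    curl_setopt_array($ch, array(
        CURLOPT_POST           => true,
        CURLOPT_POSTFIELDS     => json_encode(array('source' => $source, 'target' => $target)),
        CURLOPT_HTTPHEADER     => array('Content-Type: application/json'),
        CURLOPT_RETURNTRANSFER => true,
    ));
    $result = curl_exec($ch);
    curl_close($ch);
    return json_decode($result, true);
}

// Push local notes up to the central server, then pull remote changes back down.
replicate('notes', 'http://user:pass@couch.example.com/notes');
replicate('http://user:pass@couch.example.com/notes', 'notes');
```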
What is the internal storage mechanism of WSS 3.0? Does it need SQL Server 2005, or can we use the SQL Server 2005 Embedded Edition that is automatically installed with WSS 3.0? If the latter, what is the content limit for a web application that uses SSEE?
Let's say I have created a web application in WSS 3.0: how much data can be stored in it? How much data can I store in lists and document libraries? How many folders can I create inside a document library?
This is quite a common misconception - the paranoid amongst us may even think that MSFT doesn't do much to clear it up, as it pushes people along the route of buying SQL Server...
Tin hats away though ...
When you use the "Basic" install option during MOSS 2007 installation it does install and use SQL Server 2005 Express Edition (see Stand alone installation) and you do have a 4GB limit.
When you use the "Basic" install option during WSS 3.0 installation it DOES NOT use SQL Express, it uses something called Windows Internal Database and it DOES NOT have a 4GB size limit.
It's hard to find an authoritative reference on this (tin hats again), but this post by Mark Walsh, marked as correct by an MSFT moderator, is about the best I can find.
Besides the database limitation, there are some other SharePoint limits and recommendations regarding the number of items per library and the number of site collections per web application or content database.
MOSS Limitations 1
MOSS Limitations 2
It uses SQL Server 2005 Express Edition, which I believe has a limit of 4GB per database. You could create multiple content databases for separate site collections, but there may also be some performance limitations in the Express edition.
Here is a page that compares editions:
http://www.microsoft.com/sqlserver/2005/en/us/compare-features.aspx
Josh pretty much has the answer. As for the "how many documents and lists and whatevers" question, the answer is "as many as you want, so long as you don't slam into the 4GB limit."
I'd also note that if you start getting near that 4GB limit, you can always upgrade to full-blown SQL Server with very minimal pain, so it is a decent place to start.
The real place it falls down is management tools (i.e. backup), but you can script that from the command line pretty effectively.
When installing SharePoint 2007 you can specify the SQL Server database to connect to yourself. If you don't, SharePoint will use the Windows Internal Database, otherwise known as WYukon. This database is not the same as SQL Express, and there are two key differences: (1) WYukon isn't artificially limited in database size or performance; (2) you can't connect to a WYukon database with a regular database connection string.
Here's a link with some (minimal) information about WYukon.
http://www.microsoft.com/downloads/details.aspx?FamilyId=30A7365B-91C5-4C28-85A5-9AB861168C0E
Regards,
Paul