Are there any limits on using the DBpedia Spotlight APIs?
I found the endpoint documented here https://www.dbpedia-spotlight.org/api but the page does not mention any limits, such as rate limits, or anything about the endpoint's availability.
Regarding availability, you can check it at status.dbpedia-spotlight.org. The API is completely open without any rate limit, but please try not to overload our demo server. If you need batch processing, you can install it locally using our Docker images.
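For reference, a single annotation request is just one HTTP call. A minimal TypeScript sketch (assuming the public English endpoint and Node 18+ for the built-in fetch, in an ESM module for top-level await):

// Minimal sketch: annotate a snippet of text against the public demo server.
const params = new URLSearchParams({
  text: "Berlin is the capital of Germany.",
  confidence: "0.5", // minimum disambiguation confidence
});

const res = await fetch(
  `https://api.dbpedia-spotlight.org/en/annotate?${params}`,
  { headers: { Accept: "application/json" } }
);

const data = await res.json();
console.log(data.Resources); // the matched DBpedia resources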
I guess the API has several limits. I could not find any documentation about this, but in my experience I discovered:
Your API requests can contain up to 930 words (I recommend 900 or fewer).
I did not make many requests, so I am less sure about rate limits; my guess is that the restriction is around 19 requests or fewer.
I have no problem with the limits and I'm grateful for the service, but it would be useful if they were clearly stated in the documentation on the web page.
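If you do run into the word limit described above, a simple client-side workaround is to chunk the text before sending it. A rough TypeScript sketch (the 900-word cap is the empirical limit from this answer, not a documented one):

// Split text into chunks of at most maxWords words, then send each chunk
// to the API as a separate request.
function chunkByWords(text: string, maxWords = 900): string[] {
  const words = text.split(/\s+/).filter(Boolean);
  const chunks: string[] = [];
  for (let i = 0; i < words.length; i += maxWords) {
    chunks.push(words.slice(i, i + maxWords).join(" "));
  }
  return chunks;
}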
I am reading tons of GPT-3 samples and have come across many code samples.
None of them mentions how and where I can run and play with the code myself... and especially none mentions that I cannot.
So I did my research and concluded that I cannot, but I may be wrong:
There is no way to run the "thing" on-premises on a dev machine; it is a hosted service by definition (?)
As of now (Oct. 11th 2020) the OpenAI API is in invite only beta (?)
Did I miss something?
As of now (Oct. 11th 2020) the OpenAI API is in invite only beta (?)
Yes, that is correct! Check out their API introduction and application form. There you'll find the information necessary to apply (in case you want to).
There is no way to run the "thing" on-premises on a dev machine, it is a hosted service by definition
Well, you have access to their API. You can either use the built-in Playground or access the API via an HTTP request (and therefore via most programming languages). But there isn't much coding to be done, as you only have a few parameters to pass into the request, e.g. the number of tokens, the temperature, etc.
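For illustration, a completion request at the time looked roughly like this (a TypeScript sketch; it assumes the beta-era engines endpoint, Node 18+ for the built-in fetch, an ESM module for top-level await, and an API key in an environment variable):

// Minimal sketch of a beta-era completion request. The engine name and
// exact endpoint are assumptions based on the 2020 API; check current docs.
const response = await fetch(
  "https://api.openai.com/v1/engines/davinci/completions",
  {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      prompt: "Write a haiku about APIs:",
      max_tokens: 32,    // the token limit mentioned above
      temperature: 0.7,  // the temperature mentioned above
    }),
  }
);

const json = await response.json();
console.log(json.choices[0].text);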
They even have a feature which writes the code for you (though that's probably not necessary):
As you can see, you're able to write (and test) your scripts with the different settings and engines. Once you've set everything up properly, you can simply export the code. That obviously shouldn't be used in production, as your software is (presumably) more in-depth than just one standardized call.
As of Nov 18, 2021, the API is available publicly with no waitlist. You just need to sign up for an account and get an API key (instant access). You pay per usage, but you get some free usage to start with.
Pricing for Davinci (the main/best GPT-3 model) is $0.006 per 1,000 tokens (approximately 750 words). So for about one cent, you can get it to read and/or write about 1,500 words.
https://openai.com/blog/api-no-waitlist/
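To make the arithmetic explicit, a tiny TypeScript sketch using the rate quoted above (current pricing may differ):

// Back-of-envelope cost at $0.006 per 1,000 tokens, ~750 words per 1,000 tokens.
const pricePer1kTokens = 0.006; // USD, as quoted above
const tokensPerWord = 1000 / 750;
const words = 1500;
const cost = (words * tokensPerWord / 1000) * pricePer1kTokens;
console.log(`${words} words ≈ ${Math.round(words * tokensPerWord)} tokens ≈ $${cost.toFixed(3)}`);
// -> 1500 words ≈ 2000 tokens ≈ $0.012, i.e. roughly a cent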
We have installed Liferay Portal on our server, and we want to know what hardware is required to support more than 1,000 simultaneous users.
How much bandwidth, CPU, and RAM do we need?
Is there a formula or similar way to derive the requirements from the number of users?
Pankaj Kathiriya already linked to http://www.liferay.com/documentation/additional-resources/whitepapers in a comment on your question; please look for the "Performance Whitepaper" there. It highlights four different scenarios on a given hardware platform. You'll easily see that the correct answer is "it depends". Now, what does it depend on?
It's the scenario you're implementing: anonymous access to a site with fully cacheable pages is a different story from highly interactive, permission-controlled access with lots of integration. Pure text-based portals will also differ in bandwidth requirements from media-rich portlets. And lastly, you can tune Liferay and the related web requests to quite some extent, e.g. to serve static content from other locations.
So, read the performance whitepaper, identify the scenario that comes closest to yours, and make sure you tune your system if you need more performance. A back-of-envelope calculation like the sketch below can at least give you a starting point.
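As a very rough illustration of why "it depends", here is a sizing sketch in TypeScript. Every constant is a hypothetical placeholder; the real numbers would come from load-testing the whitepaper scenario closest to yours:

// Back-of-envelope capacity estimate with made-up numbers.
const concurrentUsers = 1000;
const avgThinkTimeSeconds = 30;        // time a user idles between clicks
const requestsPerPageView = 5;         // portlets, assets, AJAX calls, ...
const perServerRequestsPerSecond = 80; // from load-testing YOUR scenario

const pageViewsPerSecond = concurrentUsers / avgThinkTimeSeconds;
const requestsPerSecond = pageViewsPerSecond * requestsPerPageView;
const appServersNeeded = Math.ceil(requestsPerSecond / perServerRequestsPerSecond);

console.log(`${requestsPerSecond.toFixed(0)} req/s -> ~${appServersNeeded} app server(s)`);
// 1000 users / 30 s think time * 5 req/page ≈ 167 req/s -> 3 servers at 80 req/s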
I am considering developing a website that has many characteristics of a social networking site. The site will have a lot of apps which interact with the database, scrape other websites for information, and provide multiuser chat. It will also feature a forum, a blog, and other similar CRUD applications. The key things I am looking at are:
Response time
Max number of developers: maybe 1 to 3 during the initial stages
I expect the website to scale to around 1,000 concurrent users within a year, and then hopefully grow exponentially.
Users are expected to spend a lot of time on the site.
With these requirements in mind, I looked at Django and Web2py, since I am knowledgeable in Python. They mostly fit the bill, but I am concerned about scalability: as the site scales, I will need to add more servers. That means additional cost, and I don't have any ideas for monetizing the app in the near future, for various reasons. So I have to make do with a limited amount of resources.
Can you kindly advise me?
Thanks, Ik
From what you've described, Node.js is perfect. Not only does it have a low memory footprint, it can also handle thousands of concurrent clients out of the box, and you can definitely use it for scraping websites (see this and this) and creating chats (check out nodechat and this other nice tutorial).
Response time depends on your application, but if you code the right way (don't block the Node.js event loop; keep your heavy lifting outside the server process), Node.js is really fast. There's a sketch of that pattern after the links below.
This depends on you, but consider that Node.js is JavaScript on the server side, so there is already a great pool of developers who know JS and could pick up the Node.js-specific parts quickly.
There were some official benchmarks on the Node.js blog a few weeks ago; look here: http://blog.nodejs.org/2011/11/05/node-v0-6-0/ A simple Node.js server can handle five to six thousand requests per second, so you can imagine that's really something.
Users spending a lot of time on the site means they'll be making many requests, so see my point 3) above.
http://highscalability.com/blog/2011/2/22/is-nodejs-becoming-a-part-of-the-stack-simplegeo-says-yes.html
Scaling node.js
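To make point 2) concrete, here is a minimal TypeScript sketch of keeping heavy lifting off the event loop. It uses worker_threads, which landed in Node well after this answer was written (back then you would reach for child_process), so treat it as the modern form of the same idea; fib is just a stand-in for any CPU-heavy task:

import http from "node:http";
import { Worker, isMainThread, parentPort } from "node:worker_threads";

// Stand-in for any CPU-heavy work (scraping post-processing, image resizing, ...)
function fib(n: number): number {
  return n < 2 ? n : fib(n - 1) + fib(n - 2);
}

if (isMainThread) {
  http.createServer((_req, res) => {
    // Running fib(40) inline would block every other connection;
    // a worker thread keeps the main event loop free.
    // (Assumes compilation to CommonJS so __filename resolves.)
    const worker = new Worker(__filename);
    worker.once("message", (result: number) => res.end(`fib(40) = ${result}\n`));
    worker.once("error", () => { res.statusCode = 500; res.end(); });
  }).listen(3000);
} else {
  parentPort?.postMessage(fib(40));
}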
Hello there and thanks for reading my question.
I am looking into Amazon CloudFront (CF) at the moment and need to define exactly the steps for setting up CF with our own origin server before I can proceed past entering payment details. The basic steps I have been able to find through Googling are:
Register with CF
Set up a CF distribution (this is where you register your origin server)
Update your resource references on your site
The problem I am having is with step 2. Although Amazon describes it as a simple API call, I am still not quite sure what this means and what I would have to do to perform the call.
A lot of bloggers/forum posters suggest using third-party software like CloudBerry; the problem is that CloudBerry charges for the CF/origin server part, and I only need to do it once (everything else after that can be handled by the AWS Management Console).
I have looked at loads of other similar pieces of software but found that they either error on download or install, or lack the functionality I am looking for in the Windows version!
Now, this page describes how to set up the origin server manually (http://docs.amazonwebservices.com/AmazonCloudFront/latest/DeveloperGuide/), but I am still not sure exactly how this is done.
Has anyone done this before who can offer some guidance or steps on how to do it?
Many thanks in advance!
Greg
I had success using the Fog gem. Once you establish a connection to the Amazon API, it is painless to create a distribution.
require 'fog'

# Connect to CloudFront with your AWS credentials
cdn = Fog::AWS::CDN.new(
  :aws_access_key_id     => YOUR_ID,
  :aws_secret_access_key => YOUR_SECRET_KEY
)

# Create the distribution, passing your configuration as an options hash
cdn.post_distribution(YOUR_OPTIONS_HASH)
And with that you should receive a 201 (Created) response.
The documentation is great, too.
CloudBuddy (http://m1.mycloudbuddy.com/downloads.html) is free, and you can use it to set up the CloudFront custom origin. Windows only, unfortunately, but you only have to use it once, right? :-D
This page walks through the custom origin setup in a bit more detail: http://docs.amazonwebservices.com/AmazonCloudFront/latest/DeveloperGuide/index.html?CreatingDistributions.html. The API call you need to make creates a new distribution which points at your custom origin server. Basically, you craft the request as described and POST it to Amazon's web service.
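Today the easiest way to make that call programmatically is through an AWS SDK rather than hand-crafting the signed XML request. A TypeScript sketch with AWS SDK v3 (origin domain and IDs are placeholders; the cache policy ID is AWS's managed "CachingOptimized" policy; assumes an ESM module and credentials in the environment):

import {
  CloudFrontClient,
  CreateDistributionCommand,
} from "@aws-sdk/client-cloudfront";

const client = new CloudFrontClient({ region: "us-east-1" });

const { Distribution } = await client.send(
  new CreateDistributionCommand({
    DistributionConfig: {
      CallerReference: Date.now().toString(), // any unique string
      Comment: "Distribution with a custom origin",
      Enabled: true,
      Origins: {
        Quantity: 1,
        Items: [
          {
            Id: "my-origin",                  // placeholder ID
            DomainName: "origin.example.com", // your origin server
            CustomOriginConfig: {
              HTTPPort: 80,
              HTTPSPort: 443,
              OriginProtocolPolicy: "http-only",
            },
          },
        ],
      },
      DefaultCacheBehavior: {
        TargetOriginId: "my-origin",
        ViewerProtocolPolicy: "allow-all",
        // AWS-managed "CachingOptimized" cache policy
        CachePolicyId: "658327ea-f89d-4fab-a63d-7e88639e58f6",
      },
    },
  })
);

console.log(Distribution?.DomainName); // e.g. dXXXXXXXXXXXX.cloudfront.net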
Amazon has just updated the CloudFront Management Console to support features that were previously only available through the API, so you should be able to configure this without using the API at all.
http://aws.amazon.com/about-aws/whats-new/2010/11/09/cloudfront-adds-support-for-custom-origins-and-sla/?ref_=pe_2170_19753730
Twitter sometimes shows the message: "Twitter is over capacity."
This limits the pressure on the servers, which keeps them from going down.
How do I implement this in my application?
Edit: I am NOT looking for a PHP specific solution.
I think this can easily be achieved by using separate software to watch the server status and, under too much pressure, show the specified message. This is very important in a cloud architecture, because it lets you launch new instances when needed. I think Amazon uses CloudWatch for this. You could also watch the server with Apache's mod_status, again driven by a separate piece of software. A minimal in-application version of such a check is sketched below.
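A TypeScript sketch of that check done inside the application itself (the load threshold is an arbitrary placeholder; a real setup would use an external monitor as described above):

// Sketch: serve an "over capacity" page when the 1-minute load average
// is too high. Note: os.loadavg() always returns zeros on Windows.
import http from "node:http";
import os from "node:os";

const MAX_LOAD_PER_CPU = 0.9; // placeholder threshold, tune for your hardware

http.createServer((_req, res) => {
  const loadPerCpu = os.loadavg()[0] / os.cpus().length;
  if (loadPerCpu > MAX_LOAD_PER_CPU) {
    res.statusCode = 503;               // Service Unavailable
    res.setHeader("Retry-After", "30"); // hint clients to retry later
    res.end("We are over capacity. Please try again in a moment.");
    return;
  }
  res.end("Normal response here.");
}).listen(8080);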
Hope this helps, Gabriel