GPT3 completion with insertion - invalid argument :suffix - openai-api

I am trying out completions using insertions.
It seems that I am supposed to use a parameter called suffix: to tell the model where the end of the inserted text goes.
The payload to the endpoint: POST /v1/completions
{
  "model": "code-davinci-002",
  "prompt": "Write a JSON document for a person with first name, last name, email and phone number\n\n{\n",
  "suffix": "\n}",
  "temperature": 0,
  "max_tokens": 256,
  "top_p": 1,
  "frequency_penalty": 0,
  "presence_penalty": 0
}
I tried doing the same from a Ruby client library for GPT-3.
parameters
=> {
  :model=>"code-davinci-001",
  :prompt=>"generate some JSON for a person with first and last name {",
  :max_tokens=>250,
  :temperature=>0,
  :top_p=>1,
  :frequency_penalty=>0,
  :presence_penalty=>0,
  :suffix=>"\n}"
}
post(url: "/v1/completions", parameters: parameters)
I get an invalid argument error for suffix:
{"error"=>{"message"=>"Unrecognized request argument supplied: suffix", "type"=>"invalid_request_error", "param"=>nil, "code"=>nil}}

I compared the payload from OpenAI with the payload from the Ruby library and saw the issue.
My Ruby library was setting the model to code-davinci-001 while OpenAI was using code-davinci-002.
As soon as I manually changed the model: attribute in the debugger, the completion started working correctly:
{
  "id"=>"cmpl-5yJ8b01Cw26W6ZIHoRSOb71Dc4QvH",
  "object"=>"text_completion",
  "created"=>1665054929,
  "model"=>"code-davinci-002",
  "choices"=>
    [{"text"=>"\n \"firstName\": \"John\",\n \"lastName\": \"Smith\"",
      "index"=>0,
      "logprobs"=>nil,
      "finish_reason"=>"stop"}],
  "usage"=>{"prompt_tokens"=>14, "completion_tokens"=>19, "total_tokens"=>33}
}
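For anyone reproducing this outside the Ruby library, here is a minimal sketch of the corrected request made directly against the REST endpoint from Node. Node 18+ with global fetch and an OPENAI_API_KEY environment variable are my assumptions, not part of the original setup; the two parts that matter are the insertion-capable model and the suffix field:

// Minimal sketch: POST the same insertion payload to /v1/completions.
const response = await fetch("https://api.openai.com/v1/completions", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${process.env.OPENAI_API_KEY}`, // assumed env var
  },
  body: JSON.stringify({
    model: "code-davinci-002", // suffix is only accepted by insertion-capable models
    prompt: "Write a JSON document for a person with first name, last name, email and phone number\n\n{\n",
    suffix: "\n}",
    temperature: 0,
    max_tokens: 256,
  }),
});
console.log(await response.json());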

Related

Regarding Oracle Node as a NEAR Protocol Contract implemented in Rust

Recently, I have been working through the implementation in this repository:
https://github.com/smartcontractkit/near-protocol-contracts
So far, everything is working fine in this...
But what I want to ask is this:
When I executed the request command:
near call oracle.$NEAR_ACCT request '{"payment": "10", "spec_id": "dW5pcXVlIHNwZWMgaWQ=", "callback_address": "client.'$NEAR_ACCT'", "callback_method": "token_price_callback", "nonce": "11", "data_version": "1", "data": "eyJnZXQiOiJodHRwczovL21pbi1hcGkuY3J5cHRvY29tcGFyZS5jb20vZGF0YS9wcmljZT9mc3ltPUVUSCZ0c3ltcz1VU0QiLCJwYXRoIjoiVVNEIiwidGltZXMiOjEwMH0="}' --accountId client.$NEAR_ACCT --gas 300000000000000
The resultant transaction was:
https://explorer.testnet.near.org/transactions/Gr4ddg77Hj1KN2EB3W7vErc6aaDq8sNPfo6KnQWkN9rm
And then, I executed the fulfill_request command:
near call oracle.$NEAR_ACCT fulfill_request '{"account": "client.'$NEAR_ACCT'", "nonce": "11", "data": "Nzg2"}' --accountId oracle-node.$NEAR_ACCT --gas 300000000000000
Then, the resultant transaction was:
https://explorer.testnet.near.org/transactions/39XZF81s9vGDzbUkobQZJufGxAX7wNrPT336S8TEnk29
NOTE:
As we can see in the first command, request, the data parameter that is passed is:
eyJnZXQiOiJodHRwczovL21pbi1hcGkuY3J5cHRvY29tcGFyZS5jb20vZGF0YS9wcmljZT9mc3ltPUVUSCZ0c3ltcz1VU0QiLCJwYXRoIjoiVVNEIiwidGltZXMiOjEwMH0=
When we base64-decode that, it comes out as:
{"get":"https://min-api.cryptocompare.com/data/price?fsym=ETH&tsyms=USD","path":"USD","times":100}
Similarly, in the second command, fulfill_request, I passed the data parameter as:
Nzg2
When we base64-decode that, it comes out as:
786
And that is also the result of the second transaction, which you can see by scrolling about halfway down https://explorer.testnet.near.org/transactions/39XZF81s9vGDzbUkobQZJufGxAX7wNrPT336S8TEnk29.
We can see the result as:
Client contract received price: "786"
So, basically, what I want is to get the response of https://min-api.cryptocompare.com/data/price?fsym=ETH&tsyms=USD as the result of the fulfill_request command, instead of the hard-coded 786.
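One way to get there is to have the oracle-node side actually fetch the URL from the decoded spec, apply the path and times fields, and base64-encode the result before passing it as data to fulfill_request. A rough Node sketch of that step follows; the helper name and the way its output gets wired into the near call command are my assumptions, not something the repository prescribes:

// Rough sketch: compute the fulfill_request "data" value from the decoded spec.
// Assumes Node 18+ (global fetch); "spec" is the decoded request object shown above.
async function buildFulfillData(spec) {
  const res = await fetch(spec.get);                        // call the price API named in the spec
  const json = await res.json();                            // cryptocompare returns e.g. { "USD": <price> }
  const scaled = Math.round(json[spec.path] * spec.times);  // pick the "path" field, scale by "times"
  return Buffer.from(String(scaled)).toString("base64");    // base64 string to pass as "data"
}

The returned string would then take the place of the hard-coded "Nzg2" in the fulfill_request command.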

TF Keras Model Serving REST API JSON Input Format

So I tried following this guide and deployed the model using the Docker TensorFlow Serving image. Let's say there are 4 features: feat1, feat2, feat3 and feat4. I tried to hit the prediction endpoint {url}/predict with this JSON body:
{
  "instances": [
    {
      "feat1": 26,
      "feat2": 16,
      "feat3": 20.2,
      "feat4": 48.8
    }
  ]
}
I got a 400 response code:
{
  "error": "Failed to process element: 0 key: feat1 of 'instances' list. Error: Invalid argument: JSON object: does not have named input: feat"
}
This is the signature passed to model.save():
signatures = {
    'serving_default':
        _get_serve_tf_examples_fn(model,
                                   tf_transform_output).get_concrete_function(
            tf.TensorSpec(
                shape=[None],
                dtype=tf.string,
                name='examples')),
}
I understand from this signature that, in every instances element, the only field being accepted is "examples", but when I tried to pass only that field with an empty string:
{
  "instances": [
    {
      "examples": ""
    }
  ]
}
I also got a bad request: {"error": "Name: <unknown>, Feature: feat1 (data type: int64) is required but could not be found.\n\t [[{{node ParseExample/ParseExampleV2}}]]"}
I couldn't find in the guide how to build the JSON request body the right way; it would be really helpful if anyone could point this out or give references regarding this matter.
In that example, the serving function expects a serialized tf.train.Example proto as input. This page explains how binary data can be passed to a deployed model as a string (explaining why the signature expects a tensor of strings). So what you need to do is build an Example proto containing your features and send that over. It could look something like this:
import base64
import tensorflow as tf

features = {'feat1': 26, 'feat2': 16, 'feat3': 20.2, 'feat4': 48.8}

# Create an Example proto from your feature dict.
feature_spec = {
    k: tf.train.Feature(float_list=tf.train.FloatList(value=[float(v)]))
    for k, v in features.items()
}
example = tf.train.Example(
    features=tf.train.Features(feature=feature_spec)).SerializeToString()

# Encode your serialized Example using base64 so it can be added into your
# JSON payload.
b64_example = base64.b64encode(example).decode()
result = [{'examples': {'b64': b64_example}}]
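Putting it together, the JSON body POSTed to the serving endpoint would then look roughly like this (the base64 value is just a placeholder for the b64_example string computed above):
{
  "instances": [
    { "examples": { "b64": "<b64_example from above>" } }
  ]
}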
What is the output of saved_model_cli show --dir /path/to/model --all? You should follow the output to serialize your request.
I tried to solve this problem by changing the serving input signature, but it raised another exception. This problem is already solved; check it out here.

How to build a GraphQL mutation with existing variables

This might seem like an odd question, or something really straightforward, but honestly I am struggling to figure out how to do this. I am working in Node.js and I want to set data I have saved on a node object into my GraphQL mutation.
I'm working with a vendor's GraphQL API, so this isn't something I have created myself, nor do I have a schema file for it. I'm building a mutation that will insert a record into their application, and I can write everything out manually and use a tool like Postman to manually create a new record...the structure of the mutation is not my problem.
What I'm struggling to figure out is how to build the mutation with variables from my node object without just concatenating a bunch of strings together.
For example, this is what I'm trying to avoid:
class MyClass {
  constructor() {
    this.username = "my_username"
    this.title = "Some Title"
  }
}
const obj = new MyClass()
let query = "mutation { " +
  "createEntry( input: { " +
  "author: { username: \"" + obj.username + "\" } " +
  "title: \"" + obj.title + "\" " +
  "}) " +
"}"
I've noticed that there are a number of different node packages out there for working with GraphQL, but none of the documentation I've seen really addresses the above situation. I've been completely unsuccessful in my Googling attempts. Can someone please point me in the right direction? Is there a package out there that's useful for just building queries, without requiring a schema or trying to send them at the same time?
GraphQL services typically implement this spec when using HTTP as a transport. That means you can construct a POST request with four parameters:
query - A Document containing GraphQL Operations and Fragments to execute.
operationName - (Optional): The name of the Operation in the Document to execute.
variables - (Optional): Values for any Variables defined by the Operation.
extensions - (Optional): This entry is reserved for implementors to extend the protocol however they see fit.
You can use a Node-friendly version of fetch like cross-fetch, axios, request or any other library of your choice to make the actual HTTP request.
If you have dynamic values you want to substitute inside the query, you should utilize variables to do so. Variables are defined as part of your operation definition at the top of the document:
const query = `
  mutation ($input: SomeInputObjectType!) {
    createEntry(input: $input) {
      # whatever other fields assuming the createEntry
      # returns an object and not a scalar
    }
  }
`
Note that the type you use will depend on the type specified by the input argument -- replace SomeInputObjectType with the appropriate type name. If the vendor did not provide adequate documentation for their service, you should at least have access to a GraphiQL or GraphQL Playground instance where you can look up the argument's type. Otherwise, you can use any generic GraphQL client like Altair and view the schema that way.
Once you've constructed your query, make the request like this:
const variables = {
  input: {
    title: obj.title,
    ...
  }
}
const response = await fetch(YOUR_GRAPHQL_ENDPOINT, {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ query, variables }),
})
const { data, errors } = await response.json()
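For the MyClass example from the question, the variables object might look like the following; whether the vendor's input object really takes author and title fields shaped this way is an assumption based on the original string-built mutation, so check the actual input type first:
const variables = {
  input: {
    // field names assumed from the original concatenated mutation
    author: { username: obj.username },
    title: obj.title,
  },
}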

How to force nodejs script argument to take value from choices list using commander?

I am using commander in a Node.js script. I am able to set a default value for one argument.
var args = require('commander')
// set 'pending' as the default value of status
args.option('-s --status <statusString>', 'Status to filter questions', 'pending').parse(process.argv)
console.log('status:', args.status)
How can I force the status value to be from ["pending", "rejected", "accepted", "hold"] only? I did not find anything relevant in the documentation.
This is what I could achieve:
var options = ["pending", "rejected", "accepted", "hold"];
args.option(
  '-s --status <statusString>',
  `Status to filter questions: ${options}`,
  function(val, _) {
    if (!options.includes(val)) {
      throw Error(`${val} is not from ${options}`);
    }
    return val;
  },
  'pending')
  .parse(process.argv)
Not perfect, since you need to format the help string and validate the input value yourself. Also, throwing an error from the validation function is not handled nicely by Commander and causes it to fail with the whole stack trace in the output. I could not find a better way to tell Commander that the input is invalid.
In my case I finally just switched to argparse, which is a clone of the Python command-line parser and seems to be better thought out. This is how you limit choices with it:
const ArgumentParser = require('argparse').ArgumentParser;
const argparser = new ArgumentParser({ description: 'An example' });
argparser.addArgument('--status', {
  choices: ['pending', 'rejected', 'accepted', 'hold'],
  defaultValue: 'pending'
});
const args = argparser.parseArgs();
This will do the job, including a nice help message and input validation.
I think that in your case this is what you were looking for:
const { Option } = require('commander') // Option is a named export alongside the program object

args
  .addOption(
    new Option('-s --status <statusString>', 'Status to filter questions')
      .choices(["pending", "rejected", "accepted", "hold"])
      .default('pending')
  )
  .parse(process.argv)
Also, please take a look at another specific example I've prepared and tested successfully (npm commander v8.2.0):
const { program, Option } = require('commander')

program
  .addOption(
    new Option('-s --status <statusString>', 'Status to filter questions')
      .choices(["pending", "rejected", "accepted", "hold"])
      .default('pending')
  )
  .action((args) => {
    console.log(args)
  })
  .parse(process.argv)
Side note: please notice that for this second example I've used a slightly different naming convention for clarity: I've used program (instead of the original args) on the first line, since I planned to use the name args for the value received by the arrow function passed to action(). Don't let that change confuse you! ;)
IMPORTANT: there is more in the official examples; a direct link to a related example is here: https://github.com/tj/commander.js/blob/HEAD/examples/options-extra.js#L13

Watson Conversation API says: 'Patterns are defined but type is not specified.'

I am trying to call the updateValue method of the Watson Conversation API using the Watson SDK for Node.js. The request updates the patterns of a patterns-type entity value.
My request fails with a 400 Bad Request and the message:
[ { message: 'Patterns are defined but type is not specified.',
path: '.patterns' } ],
Here is the code I'm using to call the API (pretty standard):
let params = {
  workspace_id: '<redacted>',
  entity: 'myEntityType',
  type: 'patterns', // tried with and without this line
  value: 'myCanonicalValue',
  new_patterns: ['test'],
};
watsonApi.updateValue(params, (error, response) => {
  if (error) {
    console.log('Error returned by Watson when updating an entity value.');
    reject(error);
  } else {
    resolve(response);
  }
});
Actually, what the request is doing is trying to delete a pattern from the pattern list. Since there is no endpoint for deleting patterns, I fetch the list of patterns, remove the one I need to delete, and send the now-reduced pattern list via the updateValue method. In the above example, imagine the pattern list was ['test', 'test2']. By calling updateValue with ['test'] only, we are deleting the test2 pattern.
I am using an older API version, but I've also tested this in the Assistant API Explorer, and version 2018-07-10 results in the same problem when sending a raw request body formed as follows:
{
  "patterns": ["test"]
}
Am I doing something wrong or did I forget a parameter?
It's not a bug, but it is a non-intuitive parameter name. The service accepts a type parameter and the Node SDK has a wrapper parameter called new_type. If you are using this to update patterns and not synonyms (the default), then you need to specify new_type as "patterns" even though the parameter is listed as optional.
This appears to be a bug in the Watson Conversation Node.js SDK.
To avoid this, always add new_type: 'patterns' to the params:
let params = {
  workspace_id: '<redacted>',
  entity: 'myEntityType',
  new_type: 'patterns',
  value: 'myCanonicalValue',
  new_patterns: ['test'],
};
I read the Watson Assistant API documentation for updateValue the following way:
The new_type parameter is not required, and the valid values are synonyms or patterns. However, if you don't provide that parameter, the default kicks in. According to the documentation, the default is synonyms. This would explain the error when you pass in patterns.

Resources