Name *
Short name *
Slug *
Caption *
Title *
Description *
Context input #1
Input type *
Label *
Placeholder *
Label key *
(use this key in the prompt template)
Value key *
Maximum input characters *
Required
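A minimal Python sketch of how one of these context inputs could be represented and validated. The class and function names are illustrative assumptions; only the field names come from the form above.

    from dataclasses import dataclass

    @dataclass
    class ContextInput:
        # Mirrors the context input fields above.
        input_type: str          # Input type, e.g. "text" or "textarea"
        label: str               # Label shown next to the field
        placeholder: str         # Placeholder shown in the empty field
        label_key: str           # Label key referenced in the prompt template
        value_key: str           # Value key under which the user's text is submitted
        max_chars: int           # Maximum input characters
        required: bool = False   # Required flag

    def validate(ctx: ContextInput, value: str) -> str:
        # Enforce the Required and Maximum input characters settings.
        if ctx.required and not value.strip():
            raise ValueError(f"'{ctx.label}' is required")
        if len(value) > ctx.max_chars:
            raise ValueError(f"'{ctx.label}' exceeds {ctx.max_chars} characters")
        return value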
GPT Temperature *
What sampling temperature to use. Higher values mean the model will take more risks. Try 0.9 for more creative applications, and 0 (argmax sampling) for ones with a well-defined answer.
GPT Top P *
An alternative to sampling with temperature, called nucleus sampling, where the model considers only the tokens comprising the top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.
GPT Frequency penalty *
Number between 0 and 1 that penalizes new tokens based on their existing frequency in the text so far, decreasing the model's likelihood of repeating the same line verbatim.
GPT Presence penalty *
Number between 0 and 1 that penalizes new tokens based on whether they appear in the text so far, increasing the model's likelihood of talking about new topics.
GPT Maximum output characters *
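A hedged sketch of how the four sampling settings and the output-character limit might be forwarded to the OpenAI completions API (official openai Python package). The model name, the ~4-characters-per-token conversion, and the use of ### as a stop sequence are assumptions, not values taken from the form.

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def generate(prompt: str, temperature: float, top_p: float,
                 frequency_penalty: float, presence_penalty: float,
                 max_output_chars: int) -> str:
        # Rough character-to-token conversion (~4 chars per token);
        # the app may use a proper tokenizer instead.
        max_tokens = max(1, max_output_chars // 4)
        response = client.completions.create(
            model="gpt-3.5-turbo-instruct",       # assumed model choice
            prompt=prompt,
            temperature=temperature,              # GPT Temperature
            top_p=top_p,                          # GPT Top P
            frequency_penalty=frequency_penalty,  # GPT Frequency penalty
            presence_penalty=presence_penalty,    # GPT Presence penalty
            max_tokens=max_tokens,                # from GPT Maximum output characters
            stop=["###"],                         # assumed stop sequence
        )
        return response.choices[0].text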
GPT Prompt *
Use TONE as a placeholder for the user's selected tone value. Use each context input's key as a placeholder; it is replaced with the user's value during prompt generation. Use ### (or the value defined in the 'GPT Stop sequences' input) to separate examples.
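A minimal sketch of the placeholder substitution described above: TONE is replaced with the user's selected tone, and each context input's key is replaced with the user's value. The function name and the example template are illustrative, not taken from the product.

    def render_prompt(template: str, tone: str, context_values: dict[str, str]) -> str:
        # Replace TONE and each context input key with the user's values.
        prompt = template.replace("TONE", tone)
        for key, value in context_values.items():
            prompt = prompt.replace(key, value)
        return prompt

    # Illustrative template using TONE, one context-input key, and ### as a separator.
    template = "Write a product description for PRODUCT_NAME in a TONE tone.\n###\n"
    print(render_prompt(template, tone="friendly",
                        context_values={"PRODUCT_NAME": "Acme Widget"}))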
Image
Variants *
Info
Output placeholder *
Separate multiple values with ### on new lines.
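If several placeholder values are stored this way, splitting them back into one placeholder per output slot could look like the following; the variable names are illustrative.

    raw = "Your blog intro will appear here...\n###\nYour product description will appear here..."
    placeholders = [part.strip() for part in raw.split("###") if part.strip()]
    # -> one placeholder string per generated variant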
Key *
Use this value to reference this record in code.
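Since code references each record by this Key, a lookup might be as simple as the following; the registry dictionary is an assumption.

    # Hypothetical registry mapping each record's Key to its configuration.
    templates = {
        "blog-intro": {"name": "Blog intro", "enabled": True},
    }

    def get_by_key(key: str) -> dict:
        # Reference a record in code by its Key.
        return templates[key]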
Show in dropdown
Featured on homepage
Enabled