AI Features

AI Pure Generation

The AI Pure Generation node (referred to in technical docs as the Promptless Generative AI Question) is a high-performance building block within Querlo Studio. Unlike standard AI nodes that may have fixed logic, Pure Generation gives you direct control over the Large Language Model (LLM) interaction, allowing for highly specific responses tailored to your data and user context.

Core Configuration

Inside the node editor, you can define the fundamental parameters of the AI interaction:

  • AI Integration & Model: Select your provider (e.g., Azure OpenAI) and the specific model version (e.g., 4o).
  • Prompt: This is the "hidden" instruction set for the AI. You can use variables like {avgTemp} to inject real-time data into the instructions.
  • Temperature: A value from 0 to 1 that controls creativity. Lower numbers make the AI more focused; higher numbers make it more creative.
  • History: Set the number of previous messages the LLM should "remember" during this specific interaction.
  • Knowledge Base: Assign a specific Knowledge Base to ground the AI's response in your custom data.
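The Temperature setting corresponds to standard temperature scaling in LLM sampling: the model's logits are divided by the temperature before the softmax, so low values concentrate probability on the top token while high values flatten the distribution. A minimal stdlib-only Python sketch of that mechanism (an illustration, not Querlo's internals):

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw logits to a probability distribution.

    Dividing by a low temperature sharpens the distribution
    (more focused); a high temperature flattens it (more creative).
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]

focused = softmax_with_temperature(logits, 0.2)   # near-greedy
creative = softmax_with_temperature(logits, 1.0)  # default sampling

# The top token dominates far more at low temperature.
print(round(focused[0], 3), round(creative[0], 3))
```

At temperature 0.2 the first token takes almost all of the probability mass, which is why low settings produce focused, repeatable answers.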

Advanced LLM Configuration

Expand the Advanced LLM Configuration section to access deep-level settings:

Extra JSON Params

This field allows you to pass custom parameters directly to the LLM API. It is a powerful way to extend functionality and optimize performance.

Example payload:

```json
{
  "max_tokens": 120000,
  "markdown": "raw",
  "exclude_from_history": true
}
```

Key JSON Parameters:

  1. max_tokens: Explicitly sets the maximum output length in tokens. Useful for ensuring the AI's answer isn't truncated during complex generations.
  2. markdown: Controls how the output text is processed before reaching the user.
    • raw: Returns the text exactly as the LLM generated it.
    • html-it: Processes the output through a markdown-to-HTML parser for rich text rendering.
  3. exclude_from_history: Set this to true if you want the output of this specific node to be ignored in subsequent conversation history.
    • Pro Tip: This is useful for saving tokens and speeding up processing when a generation isn't needed for future context (e.g., a one-off weather report or a translation).
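The difference between raw and html-it can be illustrated with a toy post-processing step. The function below is a hypothetical stand-in for the platform's real parser and handles only **bold** markup:

```python
import re

def render(text, mode="raw"):
    """Toy post-processing step mirroring the `markdown` param.

    "raw"     -> pass the LLM output through untouched.
    "html-it" -> run a (very) minimal markdown-to-HTML pass.
    """
    if mode == "raw":
        return text
    if mode == "html-it":
        # Only bold is handled in this sketch; a real parser
        # covers the full markdown syntax.
        return re.sub(r"\*\*(.+?)\*\*", r"<strong>\1</strong>", text)
    raise ValueError(f"unknown markdown mode: {mode}")

answer = "The forecast is **sunny**."
print(render(answer, "raw"))      # The forecast is **sunny**.
print(render(answer, "html-it"))  # The forecast is <strong>sunny</strong>.
```

Choose raw when your channel renders markdown itself (or expects plain text), and html-it when the channel displays rich HTML.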

Query Template

The Query Template lets you fine-tune the structure of the query sent to the AI, wrapping user input or variables in specific system wrappers.
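As a rough mental model, a query template wraps the runtime input before it reaches the model. The template text and field names below are made up for illustration; only the {avgTemp}-style placeholder syntax comes from the docs above:

```python
# Hypothetical template; {user_input} and {avgTemp} stand in for
# whatever variables the node has in scope.
QUERY_TEMPLATE = (
    "Answer using the knowledge base only.\n"
    "Current average temperature: {avgTemp}\n"
    "User question: {user_input}"
)

def build_query(template, **variables):
    """Fill the template's placeholders from the variable scope."""
    return template.format(**variables)

query = build_query(QUERY_TEMPLATE, avgTemp="21°C",
                    user_input="Do I need a jacket?")
print(query)
```

The same user question produces very different model behavior depending on the system wrapper around it, which is why this field is worth tuning.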

Variable Mapping

As with standard question nodes, you can assign the AI's output to a Variable name (e.g., aaa). This lets you reuse the AI-generated answer in any following node by simply typing {variable_name}.
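Downstream substitution of {variable_name} can be sketched as a lookup-and-replace over the conversation's variable scope (a hypothetical helper, not the Studio runtime):

```python
import re

def interpolate(text, scope):
    """Replace every {name} placeholder with its stored value.

    Unknown names are left untouched so an incomplete scope
    doesn't break the message.
    """
    def repl(match):
        name = match.group(1)
        return str(scope.get(name, match.group(0)))
    return re.sub(r"\{(\w+)\}", repl, text)

# Suppose the Pure Generation node stored its answer under "aaa".
scope = {"aaa": "Pack an umbrella: rain is likely this afternoon."}

next_node_text = "Earlier I told you: {aaa}"
print(interpolate(next_node_text, scope))
```

Any later node's text can reference {aaa} this way, so the generated answer only has to be produced once.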