The vLLM Provider integrates vLLM-deployed language models into Keep, allowing workflow steps to query a vLLM inference server.
```yaml
steps:
  - name: Query vllm
    provider: vllm
    config: "{{ provider.my_provider_name }}"
    with:
      prompt: {value}
      temperature: {value}
      model: {value}
      max_tokens: {value}
      structured_output_format: {value}
```
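For illustration, a filled-in step might look like the sketch below. The provider name `my_vllm`, the model identifier, the prompt template, and the structured output schema are placeholder assumptions for this example, not values mandated by the provider; adjust them to match your deployment and Keep version.

```yaml
# Hypothetical example: assumes a provider configured in Keep under the
# name "my_vllm" and a model actually served by your vLLM instance.
steps:
  - name: Summarize alert with vllm
    provider: vllm
    config: "{{ provider.my_vllm }}"
    with:
      # Assumed prompt template; any Keep-templated string works here.
      prompt: "Summarize the following alert: {{ alert.description }}"
      temperature: 0.2
      model: "mistralai/Mistral-7B-Instruct-v0.2"
      max_tokens: 256
      # Assumed JSON schema for structured output; check your Keep
      # version's vLLM provider docs for the exact expected format.
      structured_output_format:
        type: object
        properties:
          summary:
            type: string
```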