gogo2/_doc/oi-notes.md
2024-03-23 01:07:39 +02:00


interpreter --api_base http://192.168.0.11:11434/v1/  # OpenAI-compatible endpoint (11434 is Ollama's default port)

interpreter --model "gpt-3.5-turbo"

# mistral
interpreter --model "mistral" --api_base http://192.168.0.11:11434/v1/

    Mac/Linux: export OPENAI_API_KEY=your-key-here
    Windows: setx OPENAI_API_KEY your-key-here (then restart the terminal)

interpreter --local

interpreter --api_base http://192.168.0.11:11434/v1 --api_key "" --model openai/local

interpreter --api_base http://192.168.0.137:1234/v1 --api_key "" --model openai/local

Load a model, start the server, and run this example in your terminal

Choose between streaming and non-streaming mode by setting the "stream" field in the request body.

curl http://192.168.0.11:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{ "messages": [ { "role": "system", "content": "Always answer in rhymes." }, { "role": "user", "content": "Introduce yourself." } ], "temperature": 0.7, "max_tokens": -1, "stream": false }'
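The same request can be built from Python with only the standard library. A minimal sketch: the host is the Ollama machine from these notes, and `build_request` is a hypothetical helper name, not part of any API.

```python
import json
import urllib.request

API_BASE = "http://192.168.0.11:11434/v1"  # Ollama host from the notes above

def build_request(stream=False):
    """Build the same chat-completions request as the curl example."""
    payload = {
        "messages": [
            {"role": "system", "content": "Always answer in rhymes."},
            {"role": "user", "content": "Introduce yourself."},
        ],
        "temperature": 0.7,
        "max_tokens": -1,  # -1 = no limit (LM Studio convention)
        "stream": stream,
    }
    return urllib.request.Request(
        f"{API_BASE}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

# With the server running, send it like this:
#   with urllib.request.urlopen(build_request()) as resp:
#       print(json.load(resp)["choices"][0]["message"]["content"])
```

The actual network call is left commented out since it needs the local server up.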

curl http://192.168.0.137:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{ "messages": [ { "role": "system", "content": "Always answer in rhymes." }, { "role": "user", "content": "Introduce yourself." } ], "temperature": 0.7, "max_tokens": -1, "stream": false }'
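The "stream" field changes how the reply must be parsed: non-streaming returns one JSON object, streaming returns SSE `data:` lines ending in `[DONE]`. A parsing sketch for the streaming case; the sample chunks below are illustrative, following the OpenAI chat-completions streaming format, not captured from a real server.

```python
import json

# Illustrative SSE lines as sent by an OpenAI-compatible server with "stream": true
sse_lines = [
    'data: {"choices": [{"delta": {"content": "Hello"}}]}',
    'data: {"choices": [{"delta": {"content": " there"}}]}',
    'data: [DONE]',
]

def collect_stream(lines):
    """Concatenate the content deltas from streamed chat-completion chunks."""
    text = []
    for line in lines:
        if not line.startswith("data: "):
            continue
        payload = line[len("data: "):]
        if payload == "[DONE]":  # end-of-stream sentinel
            break
        chunk = json.loads(payload)
        text.append(chunk["choices"][0]["delta"].get("content", ""))
    return "".join(text)

print(collect_stream(sse_lines))  # -> Hello there
```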