main idea is now working :)
Using OpenAI for TTS and Groq for fast Ollama-model inference
This commit is contained in:
@@ -1,7 +1,16 @@
ENV_NAME=development
TTS_API_URL=https://api.tts.d-popov.com/asr
LNN_API_URL=https://ollama.d-popov.com

# LLN_MODEL=qwen2
# LNN_API_URL=https://ollama.d-popov.com/api/generate

LLN_MODEL=qwen2
LNN_API_URL=https://ollama.d-popov.com/api/generate

GROQ_API_KEY=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
OPENAI_API_KEY=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

WS_URL=ws://localhost:8081
SERVER_PORT_WS=8081
SERVER_PORT_HTTP=8080
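As a sketch of how the application might consume the new `.env` values: the variable names (`LNN_API_URL`, `LLN_MODEL`) and the Ollama-style `/api/generate` request shape are taken from the diff, but the helper function itself is a hypothetical illustration, not code from this commit. Note the `.env` mixes the spellings `LLN_MODEL` and `LNN_API_URL`; the sketch reads them exactly as written.

```python
import os

def build_generate_request(prompt: str) -> tuple[str, dict]:
    """Assemble the URL and JSON body for an Ollama-style /api/generate call.

    Hypothetical helper: variable names come from the committed .env,
    defaults mirror the values shown in the diff.
    """
    url = os.getenv("LNN_API_URL", "https://ollama.d-popov.com/api/generate")
    model = os.getenv("LLN_MODEL", "qwen2")  # note LLN vs LNN spelling in the .env
    body = {"model": model, "prompt": prompt, "stream": False}
    return url, body

url, body = build_generate_request("hello")
print(url, body["model"])
```

The same pattern would apply to `GROQ_API_KEY` / `OPENAI_API_KEY`: read them once at startup via `os.getenv` rather than hard-coding them, which is presumably why they were moved into `.env` here.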