author     Hiroki Tagato <tagattie@FreeBSD.org>	2024-02-12 17:30:35 +0900
committer  Hiroki Tagato <tagattie@FreeBSD.org>	2024-02-12 17:34:14 +0900
commit     730828c627631142966c84d4c2943defaad86e4e (patch)
tree       2e734a5168d62de91e5cad164d5dfc5b1ebe991c /misc/py-litellm/files/patch-litellm_proxy_start.sh
parent     textproc/py-tokenizers: add port: Fast state-of-the-art tokenizers optimized ... (diff)
misc/py-litellm: add port: Call all LLM APIs using the OpenAI format
Call all LLM APIs using the OpenAI format [Bedrock, Huggingface,
VertexAI, TogetherAI, Azure, OpenAI, etc.]
LiteLLM manages:
- Translation of inputs to each provider's completion, embedding, and
  image_generation endpoints
- Consistent output: the text response is always available at
  ['choices'][0]['message']['content'] (see the usage sketch below)
- Retry/fallback logic across multiple deployments (e.g. Azure/OpenAI)
  via the Router
- Spend tracking & per-project budgets via the OpenAI Proxy Server
WWW: https://github.com/BerriAI/litellm
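
For context, a minimal usage sketch of the packaged library (not part of
the commit itself; the model name and the API key are illustrative
assumptions):

import os
from litellm import completion

# Hypothetical credentials; keys for other supported providers are set
# the same way.
os.environ["OPENAI_API_KEY"] = "sk-..."

# One call shape for every provider (OpenAI, Azure, Bedrock, etc.).
response = completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello, world"}],
)

# As the feature list above notes, the text response is always
# available at this path.
print(response["choices"][0]["message"]["content"])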
Diffstat (limited to 'misc/py-litellm/files/patch-litellm_proxy_start.sh')
-rw-r--r--	misc/py-litellm/files/patch-litellm_proxy_start.sh	| 8 ++++++++
1 file changed, 8 insertions(+), 0 deletions(-)
diff --git a/misc/py-litellm/files/patch-litellm_proxy_start.sh b/misc/py-litellm/files/patch-litellm_proxy_start.sh
new file mode 100644
index 000000000000..f1ce771fdaeb
--- /dev/null
+++ b/misc/py-litellm/files/patch-litellm_proxy_start.sh
@@ -0,0 +1,8 @@
+--- litellm/proxy/start.sh.orig	2024-02-11 03:13:21 UTC
++++ litellm/proxy/start.sh
+@@ -1,2 +1,2 @@
+-#!/bin/bash
+-python3 proxy_cli.py
+\ No newline at end of file
++#!/bin/sh
++%%PYTHON_CMD%% proxy_cli.py
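
The patch swaps the upstream bash shebang for /bin/sh, since bash is not
part of the FreeBSD base system, and replaces the hard-coded python3
with the %%PYTHON_CMD%% placeholder, which the ports framework expands
to the port's configured Python interpreter at build time. After
expansion the script would read, e.g., "#!/bin/sh" followed by
"/usr/local/bin/python3.9 proxy_cli.py" (the exact interpreter path is
an assumption; it depends on the default Python version). The rewrite
also restores the file's missing trailing newline.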