author    Hiroki Tagato <tagattie@FreeBSD.org>  2024-02-12 17:30:35 +0900
committer Hiroki Tagato <tagattie@FreeBSD.org>  2024-02-12 17:34:14 +0900
commit    730828c627631142966c84d4c2943defaad86e4e
tree      2e734a5168d62de91e5cad164d5dfc5b1ebe991c
parent    textproc/py-tokenizers: add port: Fast state-of-the-art tokenizers optimized ...
misc/py-litellm: add port: Call all LLM APIs using the OpenAI format
Call all LLM APIs using the OpenAI format [Bedrock, Huggingface, VertexAI,
TogetherAI, Azure, OpenAI, etc.]

LiteLLM manages:
- Translate inputs to provider's completion, embedding, and
  image_generation endpoints
- Consistent output: text responses will always be available at
  ['choices'][0]['message']['content']
- Retry/fallback logic across multiple deployments (e.g. Azure/OpenAI) -
  Router
- Track spend & set budgets per project

OpenAI Proxy Server

WWW: https://github.com/BerriAI/litellm
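The "consistent output" point in the description can be sketched without calling any provider. The snippet below mocks a completion result in the OpenAI chat schema rather than invoking litellm itself (so no API key or network is needed); the field names follow the path quoted in the commit message:

```python
# Mocked response in the OpenAI chat-completion shape that LiteLLM
# normalizes every provider's output to (per the port description).
mock_response = {
    "choices": [
        {"message": {"role": "assistant", "content": "Hello!"}}
    ]
}

# Regardless of backend (Azure, Bedrock, VertexAI, ...), the text is
# always reachable at the same path:
text = mock_response["choices"][0]["message"]["content"]
print(text)  # → Hello!
```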
Diffstat (limited to 'misc/py-litellm/files/patch-litellm_proxy_start.sh')
-rw-r--r--  misc/py-litellm/files/patch-litellm_proxy_start.sh  8
1 file changed, 8 insertions(+), 0 deletions(-)
diff --git a/misc/py-litellm/files/patch-litellm_proxy_start.sh b/misc/py-litellm/files/patch-litellm_proxy_start.sh
new file mode 100644
index 000000000000..f1ce771fdaeb
--- /dev/null
+++ b/misc/py-litellm/files/patch-litellm_proxy_start.sh
@@ -0,0 +1,8 @@
+--- litellm/proxy/start.sh.orig 2024-02-11 03:13:21 UTC
++++ litellm/proxy/start.sh
+@@ -1,2 +1,2 @@
+-#!/bin/bash
+-python3 proxy_cli.py
+\ No newline at end of file
++#!/bin/sh
++%%PYTHON_CMD%% proxy_cli.py
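The patch swaps the bash shebang for /bin/sh (bash is not in the FreeBSD base system) and replaces the hard-coded python3 with the %%PYTHON_CMD%% placeholder. The sketch below emulates the placeholder substitution the ports framework performs after patching; the exact mechanism lives in the framework (Uses/python.mk), and the interpreter path used here is an example value, not necessarily the port's:

```shell
#!/bin/sh
# Hypothetical emulation of %%PYTHON_CMD%% expansion; the real
# substitution is done by the ports framework, not by the port itself.
PYTHON_CMD=/usr/local/bin/python3.9   # example value only

# The patched start.sh before substitution:
cat > start.sh.in <<'EOF'
#!/bin/sh
%%PYTHON_CMD%% proxy_cli.py
EOF

# Expand the placeholder to the concrete interpreter path:
sed "s|%%PYTHON_CMD%%|${PYTHON_CMD}|g" start.sh.in > start.sh
cat start.sh
```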