Commit ff5e430

v0.5.0 separate http(x) clients (#99)
* wip: httpx client objects
* add agent run
* fix: playwright, TODO: cleanup
* slight refactor
* bump version, unify initialization, minor updates
* fix action
* minor fixes, doc comments, renames
* span_context 2 parent_span_context, remove cdp_url
* fix doc comments
* fix in doc comment
* update spanContext serde
* fix tests
* set current context as parent
* fix parent span context
* remove redundant method
* allow shutdown without init
* some renames, close browser span immediately
* raise the error if the eval is broken
* add base url to evals
* don't add browsertype.connect if there is an active span
* minor fixes
* add shutdown in evals, increase httpx timeout to 1h
* Update README.md
* Update README.md
* small fixes in README
* Update README.md
* allow deserializing camelCase as well
* remove agent state str

---------

Co-authored-by: ellipsis-dev[bot] <65095814+ellipsis-dev[bot]@users.noreply.github.com>
1 parent 3374f64 commit ff5e430

36 files changed: +2278 −909 lines

.github/workflows/ensure-version-match.yml

Lines changed: 1 addition & 1 deletion

```diff
@@ -18,7 +18,7 @@ jobs:
         run: uv add toml-cli
       - name: Ensure version match
         run: |
-          SDK_VERSION=$(cat src/lmnr/version.py | grep SDK_VERSION | head -n1 | cut -d'=' -f2 | sed 's/[" '"'"']//g')
+          SDK_VERSION=$(cat src/lmnr/version.py | grep __version__ | head -n1 | cut -d'=' -f2 | sed 's/[" '"'"']//g')
           PYPROJECT_VERSION=$(uv run toml get --toml-path=pyproject.toml project.version)
           if [ "$SDK_VERSION" != "$PYPROJECT_VERSION" ]; then
            echo "Version mismatch between src/lmnr/version.py and pyproject.toml"
```
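
For reference, a minimal sketch of what `src/lmnr/version.py` presumably contains after this change; the CI step above greps for the `__version__` line and strips quotes and spaces. Only the `__version__` name and the `0.5.0` value are implied by this commit — the rest of the file is an assumption.

```python
# Hypothetical src/lmnr/version.py after this commit (not shown in the diff).
# The workflow's grep/cut/sed pipeline extracts the value of this assignment.
__version__ = "0.5.0"
```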

README.md

Lines changed: 78 additions & 28 deletions

````diff
@@ -35,7 +35,28 @@ from lmnr import Laminar
 Laminar.initialize(project_api_key="<PROJECT_API_KEY>")
 ```
 
-Note that you need to only initialize Laminar once in your application.
+You can also skip passing the `project_api_key`, in which case it will be looked
+up in the environment (or a local .env file) under the key `LMNR_PROJECT_API_KEY`.
+
+Note that you only need to initialize Laminar once in your application. You should
+try to do that as early as possible, e.g. at server startup.
+
+## Set-up for self-hosting
+
+If you self-host a Laminar instance, the default connection settings to it are
+`http://localhost:8000` for HTTP and `http://localhost:8001` for gRPC. Initialize
+the SDK accordingly:
+
+```python
+from lmnr import Laminar
+
+Laminar.initialize(
+    project_api_key="<PROJECT_API_KEY>",
+    base_url="http://localhost",
+    http_port=8000,
+    grpc_port=8001,
+)
+```
 
 ## Instrumentation
 
@@ -171,49 +192,75 @@ You can run evaluations locally by providing executor (part of the logic used in
 
 Read the [docs](https://docs.lmnr.ai/evaluations/introduction) to learn more about evaluations.
 
-## Laminar pipelines as prompt chain managers
+## Client for HTTP operations
 
-You can create Laminar pipelines in the UI and manage chains of LLM calls there.
+Various interactions with the Laminar [API](https://docs.lmnr.ai/api-reference/) are available in `LaminarClient`
+and its asynchronous version, `AsyncLaminarClient`.
 
-After you are ready to use your pipeline in your code, deploy it in Laminar by selecting the target version for the pipeline.
+### Agent
 
-Once your pipeline target is set, you can call it from Python in just a few lines.
-
-Example use:
+To run the Laminar agent, invoke `client.agent.run`:
 
 ```python
-from lmnr import Laminar
+from lmnr import LaminarClient
 
-Laminar.initialize('<YOUR_PROJECT_API_KEY>', instruments=set())
+client = LaminarClient(project_api_key="<YOUR_PROJECT_API_KEY>")
 
-result = Laminar.run(
-    pipeline = 'my_pipeline_name',
-    inputs = {'input_node_name': 'some_value'},
-    # all environment variables
-    env = {'OPENAI_API_KEY': 'sk-some-key'},
+response = client.agent.run(
+    prompt="What is the weather in London today?"
 )
+
+print(response.result.content)
 ```
 
-Resulting in:
+#### Streaming
+
+Agent runs support streaming as well.
 
 ```python
->>> result
-PipelineRunResponse(
-    outputs={'output': {'value': [ChatMessage(role='user', content='hello')]}},
-    # useful to locate your trace
-    run_id='53b012d5-5759-48a6-a9c5-0011610e3669'
-)
+from lmnr import LaminarClient
+
+client = LaminarClient(project_api_key="<YOUR_PROJECT_API_KEY>")
+
+for chunk in client.agent.run(
+    prompt="What is the weather in London today?",
+    stream=True
+):
+    if chunk.chunkType == 'step':
+        print(chunk.summary)
+    elif chunk.chunkType == 'finalOutput':
+        print(chunk.content.result.content)
 ```
 
-## Semantic search
-
-You can perform a semantic search on a dataset in Laminar by calling `Laminar.semantic_search`.
+#### Async mode
 
 ```python
-response = Laminar.semantic_search(
-    query="Greatest Chinese architectural wonders",
-    dataset_id=uuid.UUID("413f8404-724c-4aa4-af16-714d84fd7958"),
+from lmnr import AsyncLaminarClient
+
+client = AsyncLaminarClient(project_api_key="<YOUR_PROJECT_API_KEY>")
+
+response = await client.agent.run(
+    prompt="What is the weather in London today?"
 )
+
+print(response.result.content)
 ```
 
-[Read more](https://docs.lmnr.ai/datasets/indexing) about indexing and semantic search.
+#### Async mode with streaming
+
+```python
+from lmnr import AsyncLaminarClient
+
+client = AsyncLaminarClient(project_api_key="<YOUR_PROJECT_API_KEY>")
+
+# Note that you need to await the operation even though we use `async for` below
+response = await client.agent.run(
+    prompt="What is the weather in London today?",
+    stream=True
+)
+async for chunk in response:
+    if chunk.chunkType == 'step':
+        print(chunk.summary)
+    elif chunk.chunkType == 'finalOutput':
+        print(chunk.content.result.content)
+```
````
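
The new README text above notes that `project_api_key` can be omitted and resolved from the environment. A minimal sketch of that path, assuming `LMNR_PROJECT_API_KEY` is exported or present in a local .env file:

```python
# Sketch of env-based initialization per the README above: no key is passed,
# so the SDK is expected to pick up LMNR_PROJECT_API_KEY from the environment
# (or a local .env file).
from lmnr import Laminar

Laminar.initialize()
```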

pyproject.toml

Lines changed: 12 additions & 12 deletions

```diff
@@ -6,7 +6,7 @@
 
 [project]
 name = "lmnr"
-version = "0.4.66"
+version = "0.5.0"
 description = "Python SDK for Laminar"
 authors = [
     { name = "lmnr.ai", email = "[email protected]" }
@@ -16,20 +16,18 @@ requires-python = ">=3.9,<4"
 license = "Apache-2.0"
 dependencies = [
     "pydantic (>=2.0.3)",
-    "requests (>=2.0)",
     "python-dotenv (>=1.0)",
-    "opentelemetry-api (>=1.28.0)",
-    "opentelemetry-sdk (>=1.28.0)",
-    "opentelemetry-exporter-otlp-proto-http (>=1.28.0)",
-    "opentelemetry-exporter-otlp-proto-grpc (>=1.28.0)",
-    "opentelemetry-instrumentation-requests (>=0.50b0)",
-    "opentelemetry-instrumentation-sqlalchemy (>=0.50b0)",
-    "opentelemetry-instrumentation-urllib3 (>=0.50b0)",
-    "opentelemetry-instrumentation-threading (>=0.50b0)",
+    "opentelemetry-api (>=1.31.1)",
+    "opentelemetry-sdk (>=1.31.1)",
+    "opentelemetry-exporter-otlp-proto-http (>=1.31.1)",
+    "opentelemetry-exporter-otlp-proto-grpc (>=1.31.1)",
+    "opentelemetry-instrumentation-requests (>=0.52b0)",
+    "opentelemetry-instrumentation-sqlalchemy (>=0.52b0)",
+    "opentelemetry-instrumentation-urllib3 (>=0.52b0)",
+    "opentelemetry-instrumentation-threading (>=0.52b0)",
     "opentelemetry-semantic-conventions-ai (>=0.4.2)",
     "tqdm (>=4.0)",
     "argparse (>=1.0)",
-    "aiohttp (>=3.0)",
     "tenacity (>=8.0)",
     # explicitly freeze grpcio. Since 1.68.0, grpcio writes a warning message
     # that looks scary, but is harmless.
@@ -40,6 +38,7 @@ dependencies = [
     # https://discuss.ai.google.dev/t/warning-all-log-messages-before-absl-initializelog-is-called-are-written-to-stderr-e0000-001731955515-629532-17124-init-cc-229-grpc-wait-for-shutdown-with-timeout-timed-out/50020
     # https://github.com/grpc/grpc/issues/38490
     "grpcio<1.68.0",
+    "httpx>=0.28.1",
 ]
 
 [project.scripts]
@@ -114,7 +113,8 @@ dev = [
     "flake8",
     "pytest>=8.3.4",
     "pytest-sugar",
-    "pytest-asyncio>=0.25.2"
+    "pytest-asyncio>=0.25.2",
+    "playwright>=1.51.0"
 ]
 
 [build-system]
```
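
The dependency swap above drops `requests` (sync) and `aiohttp` (async) in favor of `httpx`, which covers both. A rough, illustrative-only sketch of the paired clients this enables; the `base_url` and the one-hour timeout (per "increase httpx timeout to 1h" in the commit message) are assumptions about the SDK's internals, not taken from the diff:

```python
# Illustrative only: httpx provides both a sync Client and an AsyncClient,
# which is what lets one dependency replace requests + aiohttp. Names,
# base_url, and timeout here are assumptions for the sketch.
import httpx

sync_http = httpx.Client(base_url="https://api.lmnr.ai", timeout=3600.0)
async_http = httpx.AsyncClient(base_url="https://api.lmnr.ai", timeout=3600.0)
```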

src/lmnr/__init__.py

Lines changed: 30 additions & 0 deletions

```diff
@@ -1,16 +1,46 @@
+from .sdk.client.synchronous.sync_client import LaminarClient
+from .sdk.client.asynchronous.async_client import AsyncLaminarClient
 from .sdk.datasets import EvaluationDataset, LaminarDataset
 from .sdk.evaluations import evaluate
 from .sdk.laminar import Laminar
 from .sdk.types import (
+    AgentOutput,
+    FinalOutputChunkContent,
     ChatMessage,
     HumanEvaluator,
     NodeInput,
     PipelineRunError,
     PipelineRunResponse,
+    RunAgentResponseChunk,
+    StepChunkContent,
     TracingLevel,
 )
 from .sdk.decorators import observe
 from .sdk.types import LaminarSpanContext
 from .openllmetry_sdk import Instruments
 from .openllmetry_sdk.tracing.attributes import Attributes
 from opentelemetry.trace import use_span
+
+__all__ = [
+    "AgentOutput",
+    "AsyncLaminarClient",
+    "Attributes",
+    "ChatMessage",
+    "EvaluationDataset",
+    "FinalOutputChunkContent",
+    "HumanEvaluator",
+    "Instruments",
+    "Laminar",
+    "LaminarClient",
+    "LaminarDataset",
+    "LaminarSpanContext",
+    "NodeInput",
+    "PipelineRunError",
+    "PipelineRunResponse",
+    "RunAgentResponseChunk",
+    "StepChunkContent",
+    "TracingLevel",
+    "evaluate",
+    "observe",
+    "use_span",
+]
```
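
As a quick sanity check of the wiring above, the new top-level names can be imported directly from the package root; the imports below mirror the `__all__` list added in the diff.

```python
# Importing a few of the names newly exported from `lmnr` in this commit.
from lmnr import (
    AgentOutput,
    AsyncLaminarClient,
    LaminarClient,
    RunAgentResponseChunk,
    StepChunkContent,
)
```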

src/lmnr/openllmetry_sdk/__init__.py

Lines changed: 4 additions & 15 deletions

```diff
@@ -16,7 +16,7 @@
 from typing import Dict
 
 
-class Traceloop:
+class TracerManager:
     __tracer_wrapper: TracerWrapper
 
     @staticmethod
@@ -44,17 +44,6 @@ def init(
         if isinstance(headers, str):
             headers = parse_env_headers(headers)
 
-        if (
-            not exporter
-            and not processor
-            and api_endpoint == "https://api.lmnr.ai"
-            and not api_key
-        ):
-            print(
-                "Set the LMNR_PROJECT_API_KEY environment variable to your project API key"
-            )
-            return
-
         if api_key and not exporter and not processor and not headers:
             headers = {
                 "Authorization": f"Bearer {api_key}",
@@ -65,7 +54,7 @@ def init(
         TracerWrapper.set_static_params(
             resource_attributes, enable_content_tracing, api_endpoint, headers
         )
-        Traceloop.__tracer_wrapper = TracerWrapper(
+        TracerManager.__tracer_wrapper = TracerWrapper(
             disable_batch=disable_batch,
             processor=processor,
             propagator=propagator,
@@ -79,5 +68,5 @@
 
     @staticmethod
     def flush():
-        if Traceloop.__tracer_wrapper:
-            Traceloop.__tracer_wrapper.flush()
+        if getattr(TracerManager, "__tracer_wrapper", None):
+            TracerManager.__tracer_wrapper.flush()
```
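
The new `getattr` guard in `flush()` ties into "allow shutdown without init" from the commit message: flushing before `init` is meant to be a no-op rather than an `AttributeError`. A hypothetical usage sketch, not part of the diff:

```python
# Hypothetical usage: with the guard above, calling flush() before
# TracerManager.init(...) is intended to be a harmless no-op.
from lmnr.openllmetry_sdk import TracerManager

TracerManager.flush()
```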

src/lmnr/openllmetry_sdk/tracing/attributes.py

Lines changed: 0 additions & 1 deletion

```diff
@@ -9,7 +9,6 @@
 SPAN_INSTRUMENTATION_SOURCE = "lmnr.span.instrumentation_source"
 SPAN_SDK_VERSION = "lmnr.span.sdk_version"
 SPAN_LANGUAGE_VERSION = "lmnr.span.language_version"
-OVERRIDE_PARENT_SPAN = "lmnr.internal.override_parent_span"
 
 ASSOCIATION_PROPERTIES = "lmnr.association.properties"
 SESSION_ID = "session_id"
```
