Commit f25e7b5

Fix Terminology and Grammar in Documentation (ag2ai#1180)
* Update MAINTAINERS.md
* Update TRANSPARENCY_FAQS.md
* Update ollama.mdx
1 parent: 256f55f

File tree

3 files changed: 3 additions, 3 deletions


MAINTAINERS.md

Lines changed: 1 addition & 1 deletion
@@ -13,7 +13,7 @@
  | Yixuan Zhai | [randombet](https://github.com/randombet) | Meta | group chat, sequential_chats, rag |
  | Yiran Wu | [yiranwu0](https://github.com/yiranwu0) | Penn State University | alt-models, group chat, logging, infra |
  | Jieyu Zhang | [JieyuZ2](https://jieyuz2.github.io/) | University of Washington | autobuild, group chat |
- | Davor Runje | [davorrunje](https://github.com/davorrunje) | airt.ai | Tool calling, IO |
+ | Davor Runje | [davorrunje](https://github.com/davorrunje) | airt.ai | Tool calling, I/O |
  | Rudy Wu | [rudyalways](https://github.com/rudyalways) | Google | all, group chats, sequential chats |
  | Haiyang Li | [ohdearquant](https://github.com/ohdearquant) | - | all, sequential chats, structured output, low-level|
  | Eric Moore | [emooreatx](https://github.com/emooreatx) | IBM | all|

TRANSPARENCY_FAQS.md

Lines changed: 1 addition & 1 deletion
@@ -51,7 +51,7 @@ Additionally, AG2's multi-agent framework may amplify or introduce additional ri
  - Security & unintended consequences: The use of multi-agent conversations and automation in complex tasks may have unintended consequences. Especially, allowing LLM agents to make changes in external environments through code execution or function calls, such as installing packages, could pose significant risks. Developers should carefully consider the potential risks and ensure that appropriate safeguards are in place to prevent harm or negative outcomes, including keeping a human in the loop for decision making.

  ## What operational factors and settings allow for effective and responsible use of AG2?
- - Code execution: AG2 recommends using docker containers so that code execution can happen in a safer manner. Users can use function calls instead of free-form code to execute pre-defined functions only, increasing reliability and safety. Users can also tailor the code execution environment to their requirements.
+ - Code execution: AG2 recommends using docker containers so that code execution can happen in a safer manner. Users can use function calls instead of free-form code to execute predefined functions only, increasing reliability and safety. Users can also tailor the code execution environment to their requirements.
  - Human involvement: AG2 prioritizes human involvement in multi agent conversation. The overseers can step in to give feedback to agents and steer them in the correct direction. Users can get a chance to confirm before code is executed.
  - Agent modularity: Modularity allows agents to have different levels of information access. Additional agents can assume roles that help keep other agents in check. For example, one can easily add a dedicated agent to play the role of a safeguard.
  - LLMs: Users can choose the LLM that is optimized for responsible use. For example, OpenAI's GPT-4o includes RAI mechanisms and filters. Caching is enabled by default to increase reliability and control cost. We encourage developers to review [OpenAI's Usage policies](https://openai.com/policies/usage-policies) and [Azure OpenAI's Code of Conduct](https://learn.microsoft.com/en-us/legal/cognitive-services/openai/code-of-conduct) when using their models.

website/docs/user-guide/models/ollama.mdx

Lines changed: 1 addition & 1 deletion
@@ -266,7 +266,7 @@ To use manual tool calling set `native_tool_calls` to `False`.
 
  ## Reducing repetitive tool calls
 
- By incorporating tools into a conversation, LLMs can often continually recommend them to be called, even after they've been called and a result returned. This can lead to a never ending cycle of tool calls.
+ By incorporating tools into a conversation, LLMs can often continually recommend them to be called, even after they've been called and a result returned. This can lead to a never-ending cycle of tool calls.
 
  To remove the chance of an LLM recommending a tool call, an additional parameter called `hide_tools` can be used to specify when tools are hidden from the LLM. The string values for the parameter are:
 