Harness the power of GPT-2.5, fine-tuned on personal Telegram chats not just to mimic a writing style but to infuse it with a touch of humor and mathematical prowess. The project is written in Python and builds on PyTorch and the Hugging Face Transformers library.
This project fine-tunes the GPT-2.5 model with a personal touch. By training it on personal Telegram chats, we aim to capture an individual's writing style. To spice things up, we've also fed the model a sprinkle of anecdotes and a dash of math tasks. The result? A model with personality, humor, and the ability to crunch some numbers!
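To make the data-preparation step concrete, here is a minimal sketch, not the project's actual pipeline. It assumes the chats come from a Telegram Desktop JSON export (`result.json`); the file paths are hypothetical stand-ins.

```python
import json

# Minimal sketch: flatten a Telegram Desktop JSON export into one
# plain-text training file. Paths are hypothetical; adapt as needed.
# In an export, each message's "text" field is either a plain string
# or a list of string/entity pieces, so both cases are handled.
def flatten_export(export_path: str, out_path: str) -> None:
    with open(export_path, encoding="utf-8") as f:
        data = json.load(f)
    lines = []
    for msg in data.get("messages", []):
        text = msg.get("text", "")
        if isinstance(text, list):
            text = "".join(p if isinstance(p, str) else p.get("text", "")
                           for p in text)
        if text.strip():
            lines.append(text.strip())
    with open(out_path, "w", encoding="utf-8") as f:
        f.write("\n".join(lines))

flatten_export("result.json", "chats.txt")
```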
- Personalized Writing Style: Trained on Telegram chat data, the model learns to adopt and mimic an individual's writing style.
- Sense of Humor: Fed a collection of anecdotes, the model doesn't just process information; it responds with a witty touch.
- Mathematical Abilities: With math tasks added to the training data, the model can also handle mathematical queries.
Note: training is compute-intensive, so it is highly recommended to run it in a GPU-backed environment.
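Before kicking off a long training run, a quick check like the one below confirms that PyTorch can actually see a GPU:

```python
import torch

# Report whether CUDA is available and which device will be used.
device = "cuda" if torch.cuda.is_available() else "cpu"
print(f"Training device: {device}")
if device == "cuda":
    print(f"GPU: {torch.cuda.get_device_name(0)}")
```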
- Python 3.x
- PyTorch
- Hugging Face Transformers
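With these installed, fine-tuning can be driven by the Hugging Face `Trainer`. The sketch below is illustrative rather than the project's actual training script: the `gpt2` base checkpoint and the `chats.txt` file (produced by the flattening sketch above) are assumed stand-ins.

```python
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, TextDataset,
                          Trainer, TrainingArguments)

base = "gpt2"  # stand-in base checkpoint; swap in your own model
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# Chunk the flattened chat file into fixed-length blocks of token ids.
train_dataset = TextDataset(tokenizer=tokenizer,
                            file_path="chats.txt",
                            block_size=128)
# mlm=False -> plain causal-LM objective (predict the next token).
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

args = TrainingArguments(output_dir="checkpoints",
                         num_train_epochs=3,
                         per_device_train_batch_size=4,
                         save_steps=500)
Trainer(model=model, args=args, data_collator=collator,
        train_dataset=train_dataset).train()

model.save_pretrained("checkpoints/final")
tokenizer.save_pretrained("checkpoints/final")
```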
`stepan-bot-v1.py` contains an example of model inference. Feel free to adapt it to your own needs and use cases.
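If you would rather write your own, a minimal inference sketch looks roughly like this; it assumes the fine-tuned weights live in a hypothetical `checkpoints/final` directory, as in the training sketch above.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

path = "checkpoints/final"  # hypothetical; point at your fine-tuned model
tokenizer = AutoTokenizer.from_pretrained(path)
model = AutoModelForCausalLM.from_pretrained(path)

# Use the GPU when available (see the note above).
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device).eval()

prompt = "Tell me a joke about mathematicians:"
inputs = tokenizer(prompt, return_tensors="pt").to(device)
with torch.no_grad():
    output = model.generate(**inputs,
                            max_new_tokens=60,
                            do_sample=True,
                            top_p=0.9,
                            temperature=0.8,
                            # GPT-2 has no pad token; reuse EOS for padding
                            pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```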
While personal Telegram chats were used for training, all chat data has been kept confidential to protect privacy.
Your contributions, issues, and feature requests are welcome! Feel free to check the issues page.
This project is licensed under the MIT License. See the LICENSE.md file for details.