Commit f32dc01

adds basic syntax highlighting to code snippets (#40)
note: i think these changes make the outputs look a lot better, but there's a small cost of some additional dependencies (so chatify takes a little longer to install). i think it's worth it... example of new formatting: ![Screenshot 2023-07-31 at 12 28 32 AM](https://github.com/ContextLab/chatify/assets/9030494/c1243483-0bb2-472a-927f-50adce8ba39b)
2 parents: 7d061fb + a52ecfb

File tree: 5 files changed (+38, −12 lines)


README.md

Lines changed: 2 additions & 4 deletions
@@ -29,9 +29,7 @@ No further setup is required. To interact with Chatify about any code in the no
 
 ## Customizing Chatify
 
-Chatify is designed to work by default in the free tiers of [Colaboratory](https://colab.research.google.com/) and [Kaggle](https://www.kaggle.com/code) notebooks, and to operate without requiring any additional costs or setup beyond installing and enabling Chatify itself.
-
-Chatify is designed to work on a variety of systems and setups, including the "free" tiers on Google Colaboratory and Kaggle. For setups with additional resources, it is possible to switch to better-performing or lower-cost models. Chatify works in CPU-only environments, but is GPU-friendly (for both CUDA-enabled and Metal-enabled systems). We support any text-generation model on [Hugging Face](https://huggingface.co/models?pipeline_tag=text-generation&sort=trending), Meta's [Llama 2](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) models, and OpenAI's [ChatGPT](https://chat.openai.com/) models (both ChatGPT-3.5 and ChatGPT-4). Models that run on Hugging Face or OpenAI's servers require either a [Hugging Face API key](https://huggingface.co/docs/api-inference/quicktour#get-your-api-token) or an [OpenAI API key](https://platform.openai.com/signup), respectively.
+Chatify is designed to work by default in the free tiers of [Colaboratory](https://colab.research.google.com/) and [Kaggle](https://www.kaggle.com/code) notebooks, and to operate without requiring any additional costs or setup beyond installing and enabling Chatify itself. In addition to Colaboratory and Kaggle notebooks, Chatify also supports a variety of other systems and setups, including running locally or on other cloud-based systems. For setups with additional resources, it is possible to switch to better-performing or lower-cost models. Chatify works in CPU-only environments, but it is GPU-friendly (for both CUDA-enabled and Metal-enabled systems). We support any text-generation model on [Hugging Face](https://huggingface.co/models?pipeline_tag=text-generation&sort=trending), Meta's [Llama 2](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) models, and OpenAI's [ChatGPT](https://chat.openai.com/) models (both ChatGPT-3.5 and ChatGPT-4). Models that run on Hugging Face or OpenAI's servers require either a [Hugging Face API key](https://huggingface.co/docs/api-inference/quicktour#get-your-api-token) or an [OpenAI API key](https://platform.openai.com/signup), respectively.
 
 Once you have your API key(s), if needed, create a `config.yaml` file in the directory where you launch your notebook. For the OpenAI configuration, replace `<OPENAI API KEY>` with your actual OpenAI API key (with no quotes) and then create a `config.yaml` file with the following contents:
 
@@ -63,7 +61,7 @@ prompts_config:
 
 ### Llama 2 configuration
 
-If you're running your notebook on a well-resourced machine, you can use this config file to get good performance for free! The 7B and 13B variants of llama 2 both run on the free tier of Google Colaboratory and Kaggle, but the 13B is substantially slower (hence we use the 7B variant by default). Note that using this configuration requires installing the "HuggingFace" dependencies (`pip install chatify[hf]`).
+If you're running your notebook on a well-resourced machine, you can use this config file to get good performance for free! The 7B and 13B variants of llama 2 both run on the free tier of Google Colaboratory and Kaggle, but the 13B is substantially slower (therefore we recommend the 7B variant if you're using Colaboratory or Kaggle notebooks). Note that using this configuration requires installing the "HuggingFace" dependencies (`pip install chatify[hf]`).
 
 ```yaml
 cache_config:

chatify/main.py

Lines changed: 2 additions & 4 deletions
@@ -1,5 +1,4 @@
 import yaml
-import markdown
 
 import pathlib
 import requests
@@ -12,8 +11,7 @@
 from .chains import CreateLLMChain
 from .widgets import option_widget, button_widget, text_widget, thumbs
 
-from .utils import check_dev_config
-
+from .utils import check_dev_config, get_html
 
 @magics_class
 class Chatify(Magics):
@@ -166,7 +164,7 @@ def gpt(self, inputs, prompt):
         response = requests.post(combined_url, headers=headers, json=data)
         output = eval(response.content.decode('utf-8'))
 
-        return markdown.markdown(output.replace("\n", "\n\n"))
+        return get_html(output)
 
 
     def update_values(self, *args, **kwargs):
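The replaced `return` line relied on a Markdown quirk worth spelling out: a lone `\n` inside a paragraph is only a soft break, while a blank line (`\n\n`) starts a new paragraph, so the old code doubled every newline before rendering. A stdlib-only sketch of just that string step (the `reply` value is made up for illustration):

```python
# Markdown treats a single "\n" as a soft line break and a blank line
# ("\n\n") as a paragraph break. The pre-change code in Chatify.gpt
# doubled each newline so every line of the model's reply rendered as
# its own paragraph. `reply` here is a made-up example string.
reply = "First point\nSecond point"
as_paragraphs = reply.replace("\n", "\n\n")
```

With `get_html`, this transformation is no longer needed because the renderer handles the raw reply directly.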

chatify/utils.py

Lines changed: 32 additions & 0 deletions
@@ -3,9 +3,41 @@
 import random
 import urllib
 
+from markdown_it import MarkdownIt
+from pygments.formatters import HtmlFormatter
+from pygments.lexers import get_lexer_by_name
+from pygments import highlight
+
 from langchain.llms.base import LLM
 
 
+def highlight_code(code, name, attrs):
+    """Highlight a block of code"""
+
+    lexer = get_lexer_by_name(name)
+    formatter = HtmlFormatter(cssclass='codehilite', linenos='table')
+
+    return highlight(code, lexer, formatter)
+
+def get_html(markdown, code_style='default'):
+    """Return HTML string rendered from markdown source."""
+
+    md = MarkdownIt(
+        "js-default",
+        {
+            "linkify": True,
+            "html": True,
+            "typographer": True,
+            "highlight": highlight_code,
+        },
+    )
+
+    formatter = HtmlFormatter(style=code_style, linenos='table')
+    css = formatter.get_style_defs()
+
+    return f'<head><style>{css}</style></head>' + md.render(markdown)
+
+
 class FakeListLLM(LLM):
     """Fake LLM wrapper for testing purposes.
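The `highlight` option handed to `MarkdownIt` in the diff above is a callback contract: markdown-it-py calls it with each fenced block's source, the language name from the fence's info string, and any extra attributes, and splices whatever HTML string it returns into the output. A minimal stdlib-only stand-in illustrating that contract (`fake_highlight` is hypothetical and not part of chatify, which uses Pygments instead):

```python
import html

def fake_highlight(code, name, attrs):
    # Same signature as chatify's highlight_code: receives the fenced
    # block's source, its language name, and extra attributes, and must
    # return an HTML string. Instead of Pygments lexing, we just escape
    # the code and tag it with the language name.
    escaped = html.escape(code)
    return (f'<pre class="codehilite">'
            f'<code class="language-{name}">{escaped}</code></pre>')

rendered = fake_highlight('print("hi")', 'python', '')
```

Returning an empty string from the hook would tell markdown-it-py to fall back to its default escaping, which is why the real implementation always produces markup via Pygments.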

chatify/widgets.py

Lines changed: 1 addition & 3 deletions
@@ -72,7 +72,5 @@ def text_widget():
     text : widgets.HTMLMath
         HTMLMath widget for displaying text.
     """
-    text = widgets.HTMLMath(
-        value='', placeholder='', description='', style=dict(font_size='12px')
-    )
+    text = widgets.HTMLMath(value='', placeholder='', description='')
     return text

setup.py

Lines changed: 1 addition & 1 deletion
@@ -11,7 +11,7 @@
 with open('HISTORY.rst') as history_file:
     history = history_file.read()
 
-requirements = ['gptcache', 'langchain', 'openai', 'markdown', 'ipywidgets', 'requests']
+requirements = ['gptcache', 'langchain', 'openai', 'markdown', 'ipywidgets', 'requests', 'markdown-it-py[linkify,plugins]', 'pygments']
 extras = ['transformers', 'torch>=2.0', 'tensorflow>=2.0', 'flax', 'einops', 'accelerate', 'xformers', 'bitsandbytes', 'sentencepiece', 'llama-cpp-python']
 
 test_requirements = [
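The README's `pip install chatify[hf]` instruction corresponds to setuptools extras. A sketch of how the two lists above would plausibly be wired into `setup()` (the two lists are copied from the diff; the `'hf'` extra name comes from the README, but the exact keyword wiring in chatify's real setup.py is an assumption):

```python
# Lists copied from the setup.py diff above; the setup() wiring below is a
# hedged sketch, not necessarily chatify's exact code.
requirements = ['gptcache', 'langchain', 'openai', 'markdown', 'ipywidgets',
                'requests', 'markdown-it-py[linkify,plugins]', 'pygments']
extras = ['transformers', 'torch>=2.0', 'tensorflow>=2.0', 'flax', 'einops',
          'accelerate', 'xformers', 'bitsandbytes', 'sentencepiece',
          'llama-cpp-python']

# `pip install chatify` installs install_requires;
# `pip install chatify[hf]` additionally installs extras_require['hf'].
setup_kwargs = dict(
    install_requires=requirements,
    extras_require={'hf': extras},
)
```

This split is why syntax highlighting works out of the box (markdown-it-py and Pygments moved into the base requirements) while the heavier Hugging Face stack stays opt-in.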
