This is a LlamaIndex project bootstrapped with `create-llama`.
This integration with yFiles allows you to visualize the specific nodes involved in generating answers within a knowledge graph. You can easily identify which nodes contribute to an answer and further explore the graph by expanding neighboring nodes for broader context.
With this tool, you can gain deeper insights into how LlamaIndex processes information. It showcases the data from source nodes used to create a final answer. By visualizing the flow from source nodes to answers, you can better understand the logic behind the results and interactively explore the connections in the knowledge graph.
Before starting the project, add your agent specifications to a `.env` file located in the backend folder. Alternatively, you can modify the `.env_template` file located in the backend: add your OpenAI key, uncomment the line marked with a TODO, and then rename the file by removing the `_template` suffix, leaving it as `.env`.
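For reference, a minimal `.env` might look like the sketch below. `OPENAI_API_KEY` is the variable name conventionally read by the OpenAI client, but check `.env_template` for the exact variable names this project expects.

```shell
# backend/.env — minimal sketch; variable name assumed from OpenAI conventions,
# verify against the provided .env_template before use.
OPENAI_API_KEY=<your-openai-key>
```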
First, start up the backend as described in the backend README. Second, run the frontend development server as described in the frontend README.
Open http://localhost:3000 with your browser to see the result.
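The two steps above typically look like the following terminal session. The exact commands are a sketch: `create-llama` projects commonly use a Poetry-managed Python backend and an npm-based Next.js frontend, but defer to each README for the authoritative setup and run commands.

```shell
# Terminal 1 — backend (assumed Poetry/FastAPI layout; see backend README)
cd backend
poetry install          # install Python dependencies
poetry run python main.py

# Terminal 2 — frontend (assumed Next.js layout; see frontend README)
cd frontend
npm install             # install Node dependencies
npm run dev             # serves on http://localhost:3000 by default
```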
To view a graph, submit a question related to the files you have uploaded. To expand a node, simply double-click on it. A node will only expand if there are hidden neighboring nodes available to reveal.
To learn more about LlamaIndex, take a look at the following resources:
- LlamaIndex Documentation - learn about LlamaIndex (Python features).
- LlamaIndexTS Documentation - learn about LlamaIndex (TypeScript features).
You can check out the LlamaIndexTS GitHub repository - your feedback and contributions are welcome!