
Commit e507332

Update use_cases.md
1 parent c65bcb0 commit e507332

File tree

1 file changed: +1 −1 lines changed


docs/use_cases.md

Lines changed: 1 addition & 1 deletion
@@ -12,7 +12,7 @@ We also use this analysis for feature engineering from code. Within an enterpri

**SEMFORMS Paper**: [https://arxiv.org/abs/2111.00083](https://www.ijcai.org/proceedings/2023/0827.pdf)

-**DataRinse Paper**: [https://dl.acm.org/doi/10.14778/3611540.3611628]
+**DataRinse Paper**: [https://dl.acm.org/doi/10.14778/3611540.3611628](https://dl.acm.org/doi/10.14778/3611540.3611628)
### Building Better Language Models for Code Understanding<a name="lm"></a>
Code understanding is an increasingly important application of Artificial Intelligence. A fundamental aspect of understanding code is understanding text about code, e.g., documentation and forum discussions. Pre-trained language models (e.g., BERT) are a popular approach for various NLP tasks, and there are now a variety of benchmarks, such as GLUE, to help improve the development of such models for natural language understanding. However, little is known about how well such models work on textual artifacts about code, and we are unaware of any systematic set of downstream tasks for such an evaluation. In this paper, we derive a set of benchmarks (BLANCA - Benchmarks for LANguage models on Coding Artifacts) that assess code understanding based on tasks such as predicting the best answer to a question in a forum post, finding related forum posts, or predicting classes related in a hierarchy from class documentation. We evaluate the performance of current state-of-the-art language models on these tasks and show that there is a significant improvement on each task from fine tuning. We also show that multi-task training over BLANCA tasks helps build better language models for code understanding.
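One of the retrieval-style tasks described above, finding related forum posts, can be illustrated with a tiny TF-IDF cosine-similarity baseline. This is only an illustrative sketch in plain Python, not the BLANCA benchmark's pretrained-model approach; the example posts and token handling (whitespace splitting) are invented for the demo.

```python
import math
from collections import Counter

def tf_idf_vectors(docs):
    """Compute simple TF-IDF vectors for a list of tokenized documents."""
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))  # document frequency: count each token once per doc
    idf = {t: math.log(n / df[t]) + 1.0 for t in df}
    vecs = []
    for doc in docs:
        tf = Counter(doc)
        vecs.append({t: tf[t] * idf[t] for t in tf})
    return vecs

def cosine(a, b):
    """Cosine similarity between two sparse vectors stored as dicts."""
    dot = sum(a[t] * b.get(t, 0.0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Invented forum posts for illustration only.
posts = [
    "how do i fine tune bert for code search",
    "best way to fine tune a bert model on code",
    "why does my python loop run slowly",
]
tokens = [p.split() for p in posts]
vecs = tf_idf_vectors(tokens)
query = vecs[0]

# Rank the remaining posts by similarity to the first one.
ranked = sorted(range(1, len(posts)), key=lambda i: cosine(query, vecs[i]), reverse=True)
print(ranked[0])  # → 1: the other fine-tuning post outranks the unrelated one
```

A fine-tuned language model replaces the TF-IDF vectors with learned embeddings, which is where the paper reports its gains; the ranking-by-similarity framing stays the same.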
