From 8df47ec2bc0a9313ea3146a775bd39d4099fee98 Mon Sep 17 00:00:00 2001 From: lopez Date: Fri, 17 Nov 2023 22:40:37 +0100 Subject: [PATCH] update documentation for java 11 min requirement, and more generally for next version --- Readme.md | 3 ++- doc/Configuration.md | 12 ++++++++++-- doc/Grobid-service.md | 27 ++++++++++++++++----------- doc/Install-Grobid.md | 13 +++++++------ doc/Introduction.md | 7 ++++--- doc/Run-Grobid.md | 24 ++++++++++++++++++++++++ doc/Training-the-models-of-Grobid.md | 2 ++ doc/index.md | 14 +++++++++++--- mkdocs.yml | 5 +++-- 9 files changed, 79 insertions(+), 28 deletions(-) create mode 100644 doc/Run-Grobid.md diff --git a/Readme.md b/Readme.md index b631b8d448..b6b74e46cf 100644 --- a/Readme.md +++ b/Readme.md @@ -24,7 +24,7 @@ The following functionalities are available: - __Header extraction and parsing__ from articles in PDF format. The extraction here covers the usual bibliographical information (e.g. title, abstract, authors, affiliations, keywords, etc.). - __References extraction and parsing__ from articles in PDF format, around .87 F1-score against an independent PubMed Central set of 1943 PDF containing 90,125 references, and around .90 on a similar bioRxiv set of 2000 PDF (using the Deep Learning citation model). All the usual publication metadata are covered (including DOI, PMID, etc.). - __Citation contexts recognition and resolution__ of the full bibliographical references of the article. The accuracy of citation contexts resolution is between .76 and .91 F1-score depending on the evaluation collection (this corresponds to both the correct identification of the citation callout and its correct association with a full bibliographical reference). -- __Full text extraction and structuring__ from PDF articles, including a model for the overall document segmentation and models for the structuring of the text body (paragraph, section titles, reference and footnote callouts, figures, tables, etc.). 
+- __Full text extraction and structuring__ from PDF articles, including a model for the overall document segmentation and models for the structuring of the text body (paragraph, section titles, reference and footnote callouts, figures, tables, data availability statements, etc.). - __PDF coordinates__ for extracted information, allowing one to create "augmented" interactive PDF based on bounding boxes of the identified structures. - Parsing of __references in isolation__ (above .90 F1-score at instance-level, .95 F1-score at field level, using the Deep Learning model). - __Parsing of names__ (e.g. person title, forenames, middle name, etc.), in particular author names in header, and author names in references (two distinct models). @@ -32,6 +32,7 @@ The following functionalities are available: - __Parsing of dates__, ISO normalized day, month, year. - __Consolidation/resolution of the extracted bibliographical references__ using the [biblio-glutton](https://github.com/kermitt2/biblio-glutton) service or the [CrossRef REST API](https://github.com/CrossRef/rest-api-doc). In both cases, DOI/PMID resolution performance is higher than 0.95 F1-score from PDF extraction. - __Extraction and parsing of patent and non-patent references in patent__ publications. +- __Extraction of Funders and funding information__ with optional matching of extracted funders with the CrossRef Funder Registry. In a complete PDF processing, GROBID manages 55 final labels used to build relatively fine-grained structures, from traditional publication metadata (title, author first/last/middle names, affiliation types, detailed address, journal, volume, issue, pages, DOI, PMID, etc.) to full text structures (section title, paragraph, reference markers, head/foot notes, figure captions, etc.). 
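All of the extraction results above are encoded in TEI XML. As a quick illustration of consuming such results — the `<biblStruct>` sample below is a hypothetical, simplified version of what the citation parsing services typically return, not verbatim GROBID output — the main fields can be pulled out with Python's standard library:

```python
import xml.etree.ElementTree as ET

# Hypothetical, simplified TEI fragment, similar in shape to what the
# GROBID citation parsing services return for a single reference.
SAMPLE_TEI = """
<biblStruct xmlns="http://www.tei-c.org/ns/1.0">
  <analytic>
    <title level="a" type="main">A sample article title</title>
    <author>
      <persName><forename type="first">Jane</forename><surname>Doe</surname></persName>
    </author>
  </analytic>
  <monogr>
    <title level="j">Some Journal</title>
    <imprint>
      <biblScope unit="volume">12</biblScope>
      <date type="published" when="2020"/>
    </imprint>
  </monogr>
  <idno type="DOI">10.1000/xyz123</idno>
</biblStruct>
"""

# GROBID results use the TEI namespace.
TEI_NS = {"tei": "http://www.tei-c.org/ns/1.0"}

def parse_biblstruct(xml_text: str) -> dict:
    """Extract a few common fields from a TEI <biblStruct> element."""
    root = ET.fromstring(xml_text)
    title = root.find(".//tei:analytic/tei:title", TEI_NS)
    doi = root.find(".//tei:idno[@type='DOI']", TEI_NS)
    surname = root.find(".//tei:author//tei:surname", TEI_NS)
    date = root.find(".//tei:imprint/tei:date", TEI_NS)
    return {
        "title": title.text if title is not None else None,
        "doi": doi.text if doi is not None else None,
        "first_author": surname.text if surname is not None else None,
        "year": date.get("when") if date is not None else None,
    }

if __name__ == "__main__":
    print(parse_biblstruct(SAMPLE_TEI))
```

The same namespace-aware lookups apply to full-document TEI results, only with a larger element tree.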
diff --git a/doc/Configuration.md b/doc/Configuration.md index 9c778ffeae..8ea092594b 100644 --- a/doc/Configuration.md +++ b/doc/Configuration.md @@ -207,15 +207,23 @@ logging: level: INFO loggers: org.apache.pdfbox.pdmodel.font.PDSimpleFont: "OFF" + org.glassfish.jersey.internal: "OFF" + com.squarespace.jersey2.guice.JerseyGuiceUtils: "OFF" appenders: - type: console - threshold: ALL + threshold: WARN timeZone: UTC + # uncomment to have the logs in json format + #layout: + # type: json - type: file currentLogFilename: logs/grobid-service.log - threshold: ALL + threshold: INFO archive: true archivedLogFilenamePattern: logs/grobid-service-%d.log archivedFileCount: 5 timeZone: UTC + # uncomment to have the logs in json format + #layout: + # type: json ``` diff --git a/doc/Grobid-service.md b/doc/Grobid-service.md index fb5e125795..b9d2916b9e 100644 --- a/doc/Grobid-service.md +++ b/doc/Grobid-service.md @@ -2,7 +2,12 @@ The GROBID Web API provides a simple and efficient way to use the tool. A service console is available to test GROBID in a human-friendly manner. For production and benchmarking, we strongly recommend using this web service mode on a multi-core machine and avoiding running GROBID in batch mode. -## Start the server with Gradle +## Start the server with Docker + +This is the recommended and standard way to run the Grobid web services. + + +## Start a development server with Gradle Go to the `grobid/` main directory. Be sure that the GROBID project is built, see [Install GROBID](Install-Grobid.md). 
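Once the server is running (by default on port __8070__), you can check from another terminal that it responds — a quick sketch using the `isalive` endpoint of the GROBID REST API:

```console
curl localhost:8070/api/isalive
```

The service returns `true` when it is up.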
@@ -16,7 +21,7 @@ The following command will start the server on the default port __8070__: ## Install and run the service as standalone application -You could also build and install the service as a standalone service (let's supposed the destination directory is grobid-installation) +From a development installation, you can also build and install the service as a standalone application - here let's suppose the destination directory is grobid-installation: ```console ./gradlew clean assemble @@ -57,16 +62,16 @@ If required, modify the file under `grobid/grobid-home/config/grobid.yaml` for s You can choose to load all the models at the start of the service or lazily when a model is used the first time. Loading all models at service startup will slow down the start of the server and use more memory than the lazy mode if only a few services are used. -For preloading all the models, set the following config parameter to `true`: +Preloading all the models at server start is the default setting, but you can choose lazy loading of the models: ```yaml grobid: # for **service only**: how to load the models, - # false -> models are loaded when needed (default), avoiding putting in memory useless models but slow down significantly - # the service at first call - # true -> all the models are loaded into memory at the server startup, slow the start of the services and models not - # used will take some memory, but server is immediatly warm and ready - modelPreload: false + # false -> models are loaded when needed, avoiding putting in memory useless models (only in case of CRF) but slowing down + # significantly the service at first call + # true -> all the models are loaded into memory at the server startup (default), slows the start of the service + # and models not used will take some more memory (only in case of CRF), but the server is immediately warm and ready + modelPreload: true ``` ## CORS (Cross-Origin Resource Sharing) @@ -89,13 +94,13 @@ 
We provide clients written in Python, Java, node.js using the GROBID PDF-to-TEI services: * Python GROBID client * Java GROBID client * Node.js GROBID client -All these clients will take advantage of the multi-threading for scaling PDF batch processing. As a consequence, they will be much more efficient than the [batch command lines](Grobid-batch.md) (which use only one thread) and should be prefered. +All these clients will take advantage of the multi-threading for scaling PDF batch processing. As a consequence, they will be much more efficient than the [batch command lines](Grobid-batch.md) (which use only one thread) and should be preferred. The Python client is the most up-to-date and complete and can be adapted to your needs. ## Use GROBID test console -On your browser, the welcome page of the Service console is available at the URL . +On your browser, the welcome page of the service console is available at the URL <http://localhost:8070>. -On the console, the RESTful API can be tested under the `TEI` tab for service returning a TEI document, under the `PDF` tab for services returning annotations relative to PDF or an annotated PDF and under the `Patent` tab for patent-related services: +On the service console, the RESTful API can be tested under the `TEI` tab for services returning a TEI document, under the `PDF` tab for services returning annotations relative to PDF or an annotated PDF and under the `Patent` tab for patent-related services: ![Example of GROBID Service console usage](img/grobid-rest-example.png) diff --git a/doc/Install-Grobid.md b/doc/Install-Grobid.md index 333a7375a0..f207aff4c2 100644 --- a/doc/Install-Grobid.md +++ b/doc/Install-Grobid.md @@ -1,8 +1,10 @@ -

# Install GROBID

+# Install a GROBID development environment
-## Getting GROBID +## Getting the GROBID project source -GROBID requires a JVM installed on your machine, we tested the tool successfully up version **JVM 17**. Other recent JVM versions should work correctly. +For building GROBID yourself, a JDK must be installed on your machine. We tested the tool successfully from **JDK 11** up to **JDK 17**. Other recent JDK versions should work correctly. + +Note: Java/JDK 8 is no longer supported as of Grobid version `0.8.0`; the minimum Java requirement is JDK 11. ### Latest stable release @@ -29,7 +31,7 @@ Or download directly the zip file: > unzip master ``` -## Build GROBID +## Build GROBID from source **Please make sure that Grobid is installed in a path with no parent directories containing spaces.** @@ -59,9 +61,8 @@ systemProp.https.proxyUser=username systemProp.https.proxyPassword=password ``` -## Use GROBID +## Use a built GROBID project From there, the easiest and most efficient way to use GROBID is the [web service mode](Grobid-service.md). You can also use the tool in [batch mode](Grobid-batch.md) or integrate it in your Java project via the [Java API](Grobid-java-library.md). - diff --git a/doc/Introduction.md b/doc/Introduction.md index fe76df7b3f..c30b0bc945 100644 --- a/doc/Introduction.md +++ b/doc/Introduction.md @@ -22,7 +22,7 @@ The following functionalities are available: - __Header extraction and parsing__ from articles in PDF format. The extraction here covers the usual bibliographical information (e.g. title, abstract, authors, affiliations, keywords, etc.). - __References extraction and parsing__ from articles in PDF format, around .87 F1-score against an independent PubMed Central set of 1943 PDF containing 90,125 references, and around .90 on a similar bioRxiv set of 2000 PDF (using the Deep Learning citation model). All the usual publication metadata are covered (including DOI, PMID, etc.). 
- __Citation contexts recognition and resolution__ of the full bibliographical references of the article. The accuracy of citation contexts resolution is between .76 and .91 F1-score depending on the evaluation collection (this corresponds to both the correct identification of the citation callout and its correct association with a full bibliographical reference). -- __Full text extraction and structuring__ from PDF articles, including a model for the overall document segmentation and models for the structuring of the text body (paragraph, section titles, reference and footnote callouts, figures, tables, etc.). +- __Full text extraction and structuring__ from PDF articles, including a model for the overall document segmentation and models for the structuring of the text body (paragraph, section titles, reference and footnote callouts, figures, tables, data availability statements, etc.). - __PDF coordinates__ for extracted information, allowing one to create "augmented" interactive PDF based on bounding boxes of the identified structures. - Parsing of __references in isolation__ (above .90 F1-score at instance-level, .95 F1-score at field level, using the Deep Learning model). - __Parsing of names__ (e.g. person title, forenames, middle name, etc.), in particular author names in header, and author names in references (two distinct models). @@ -30,8 +30,9 @@ The following functionalities are available: - __Parsing of dates__, ISO normalized day, month, year. - __Consolidation/resolution of the extracted bibliographical references__ using the [biblio-glutton](https://github.com/kermitt2/biblio-glutton) service or the [CrossRef REST API](https://github.com/CrossRef/rest-api-doc). In both cases, DOI/PMID resolution performance is higher than 0.95 F1-score from PDF extraction. - __Extraction and parsing of patent and non-patent references in patent__ publications. 
+- __Extraction of Funders and funding information__ with optional matching of extracted funders with the CrossRef Funder Registry. -In a complete PDF processing, GROBID manages 55 final labels used to build relatively fine-grained structures, from traditional publication metadata (title, author first/last/middle names, affiliation types, detailed address, journal, volume, issue, pages, DOI, PMID, etc.) to full text structures (section title, paragraph, reference markers, head/foot notes, figure captions, etc.). +In a complete PDF processing, GROBID manages more than 55 final labels used to build relatively fine-grained structures, from traditional publication metadata (title, author first/last/middle names, affiliation types, detailed address, journal, volume, issue, pages, DOI, PMID, etc.) to full text structures (section title, paragraph, reference markers, head/foot notes, figure captions, etc.). GROBID includes a comprehensive [web service API](https://grobid.readthedocs.io/en/latest/Grobid-service/), [Docker images](https://grobid.readthedocs.io/en/latest/Grobid-docker/), [batch processing](https://grobid.readthedocs.io/en/latest/Grobid-batch/), a Java API, a generic [training and evaluation framework](https://grobid.readthedocs.io/en/latest/Training-the-models-of-Grobid/) (precision, recall, etc., n-fold cross-evaluation), systematic [end-to-end benchmarking](https://grobid.readthedocs.io/en/latest/Benchmarking/) on thousands of documents and the semi-automatic generation of training data. @@ -42,7 +43,7 @@ The key aspects of GROBID are the following ones: + Written in Java, with JNI call to native CRF libraries and/or Deep Learning libraries via Python JNI bridge. + Speed - on a low-profile Linux machine (8 threads): header extraction from 4000 PDF in 2 minutes (36 PDF per second with the RESTful API), parsing of 3500 references in 4 seconds, full processing of 4000 PDF (full body, header and reference, structured) in 26 minutes (around 2.5 PDF per second). 
+ Scalability and robustness: We have recently been able to run the complete fulltext processing at around 10.6 PDF per second (around 915,000 PDF per day, around 20M pages per day) during one week on one 16 CPU machine (16 threads, 32GB RAM, no SSD, articles from mainstream publishers), see [here](https://github.com/kermitt2/grobid/issues/443#issuecomment-505208132) (11.3M PDF were processed in 6 days by 2 servers without crash). -+ Lazy loading of models and resources. Depending on the selected process, only the required data are loaded in memory. For instance, extracting only metadata header from a PDF requires less than 2 GB memory in a multithreading usage, extracting citations uses around 3GB and extracting all the PDF structures around 4GB. ++ Optional lazy loading of models and resources. Depending on the selected process, only the required data are loaded in memory. For instance, extracting only the header metadata from a PDF requires less than 2 GB of memory in multithreaded usage, extracting citations uses around 3GB and extracting all the PDF structures around 4GB. + Robust and fast PDF processing with [pdfalto](https://github.com/kermitt2/pdfalto), based on xpdf, and dedicated post-processing. + Modular and reusable machine learning models for sequence labelling. The default extractions are based on Linear Chain Conditional Random Fields, with the possibility to use various Deep Learning architectures for sequence labelling (including ELMo and BERT-CRF) for improving accuracy. The specialized sequence labelling models are cascaded to build a complete (hierarchical) document structure. + Full encoding in [__TEI__](http://www.tei-c.org/Guidelines/P5/index.xml), both for the training corpus and the parsed results. diff --git a/doc/Run-Grobid.md b/doc/Run-Grobid.md new file mode 100644 index 0000000000..127229a6d1 --- /dev/null +++ b/doc/Run-Grobid.md @@ -0,0 +1,24 @@ +

# Run GROBID
+ +The standard way to run Grobid is to use Docker for starting a Grobid server. + +For installing Docker on your system, see [here](https://docs.docker.com/engine/understanding-docker/). + +For convenience, we provide two docker images: + +- the **full** image provides the best accuracy, because it includes all the required Python and TensorFlow libraries, GPU support and all Deep Learning model resources. However it requires more resources, ideally a GPU (it will be automatically detected). If you have a limited number of PDFs, a good machine, and prioritize accuracy, use this Grobid flavor. To run this version of Grobid, the command is: + +```console docker run --rm --gpus all --init --ulimit core=0 -p 8070:8070 grobid/grobid:0.7.3 ``` + +- the **lightweight** one offers the best performance in terms of runtime, memory usage and Docker image size. However, it does not use some of the best performing models in terms of accuracy. If you have a lot of PDFs to process, a low resource system, and accuracy is not so important, use this flavor: + +```console docker run --rm --init --ulimit core=0 -p 8070:8070 lfoppiano/grobid:0.7.3 ``` + +More documentation on the Docker images can be found [here](Grobid-docker.md). + +From there, you can check on your browser if the service works fine by accessing the welcome page of the service console, available at the URL <http://localhost:8070>. The GROBID server can be used via the [web service](Grobid-service.md). + diff --git a/doc/Training-the-models-of-Grobid.md b/doc/Training-the-models-of-Grobid.md index aad2f0eebc..f6d84a8f5d 100644 --- a/doc/Training-the-models-of-Grobid.md +++ b/doc/Training-the-models-of-Grobid.md @@ -28,6 +28,8 @@ Grobid uses different sequence labelling models depending on the labeling task t * table +* funding-acknowledgement + The models are located under `grobid/grobid-home/models`. Each of these models can be retrained using amended or additional training data. 
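From a development installation, retraining is launched via the Gradle task of the corresponding model — a sketch assuming the standard GROBID build, where task names follow the `train_<model>` pattern (shown here for the `header` model):

```console
./gradlew train_header
```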
For production, a model is trained with all the available training data to maximize performance. For development purposes, it is also possible to evaluate a model with part of the training data as a frozen set (e.g. a holdout set), with an automatic random split, or with 10-fold cross-evaluation. ## Train and evaluate diff --git a/doc/index.md b/doc/index.md index 66bd7e0810..5af27ef658 100644 --- a/doc/index.md +++ b/doc/index.md @@ -13,12 +13,14 @@

## User manual
-* [Install GROBID](Install-Grobid.md) - -* [Use GROBID with containers (Docker)](Grobid-docker.md) +* [Run GROBID](Run-Grobid.md) * [Use GROBID as a service](Grobid-service.md) +* [Build a GROBID development environment](Install-Grobid.md) + +* [Manage GROBID with containers (Docker)](Grobid-docker.md) + * [Use GROBID in batch mode](Grobid-batch.md) * [GROBID configuration](Configuration.md) @@ -42,9 +44,13 @@

## Benchmarking
* [Description](Benchmarking.md) + * [Evaluation PubMed Central](Benchmarking-pmc.md) + * [Evaluation bioRxiv](Benchmarking-biorxiv.md) + * [Evaluation PLOS](Benchmarking-plos.md) + * [Evaluation eLife](Benchmarking-elife.md)

## Annotation guidelines
@@ -66,7 +72,9 @@

## Developer notes
* [Notes for the Grobid Developers](Notes-grobid-developers.md) + * [Using Deep Learning models instead of default CRF](Deep-Learning-models.md) + * [Recompiling and integrating CRF libraries into GROBID](Recompiling-and-integrating-CRF-libraries.md) diff --git a/mkdocs.yml b/mkdocs.yml index f0e256fb10..d6b9b08d51 100644 --- a/mkdocs.yml +++ b/mkdocs.yml @@ -16,9 +16,10 @@ nav: - 'References': 'References.md' - 'Licence': 'License.md' - User manual: - - 'Install GROBID': 'Install-Grobid.md' - - 'GROBID with containers': 'Grobid-docker.md' + - 'Run GROBID': 'Run-Grobid.md' - 'GROBID service': 'Grobid-service.md' + - 'Build GROBID from source': 'Install-Grobid.md' + - 'GROBID with containers': 'Grobid-docker.md' - 'GROBID batch mode': 'Grobid-batch.md' - 'GROBID configuration': 'Configuration.md' - 'Troubleshooting and known issues': 'Troubleshooting.md'