How to use it #33

Open · wants to merge 18 commits into master
Changes from 1 commit
Add verbosity control to PTBTokenizer
PTBTokenizer logs tokenization details by default (e.g. `PTBTokenizer tokenized 2 tokens at 33.87 tokens per second`).
This becomes noisy when you run tokenization repeatedly.
When `verbose` is off, I redirect the tokenizer subprocess's stderr to `subprocess.DEVNULL` to suppress this output.
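For reference, the pattern this relies on is shown below as a minimal, self-contained sketch; the child process here is just a stand-in that writes one line to stderr, not the actual CoreNLP invocation:

```python
import subprocess
import sys

# Stand-in child process: prints real output on stdout and a noisy log line on stderr.
cmd = [sys.executable, '-c',
       'import sys; print("tokens"); print("noisy log line", file=sys.stderr)']

# Default: the child's stderr is inherited, so the log line also reaches the console.
out = subprocess.run(cmd, stdout=subprocess.PIPE).stdout
print(out.decode().strip())  # "tokens" (and the log line shows up on the console)

# With stderr sent to DEVNULL, the captured stdout is unchanged and the log line is discarded.
out = subprocess.run(cmd, stdout=subprocess.PIPE, stderr=subprocess.DEVNULL).stdout
print(out.decode().strip())  # "tokens" only
```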
j-min authored Jan 6, 2021

commit 38e309706960d8394b638515c21ad8febef63c44
tokenizer/ptbtokenizer.py (8 changes: 7 additions, 1 deletion)
@@ -23,6 +23,8 @@
 
 class PTBTokenizer:
     """Python wrapper of Stanford PTBTokenizer"""
+    def __init__(self, verbose=False):
+        self.verbose = verbose
 
     def tokenize(self, captions_for_image):
         cmd = ['java', '-cp', STANFORD_CORENLP_3_4_1_JAR, \
@@ -48,8 +50,12 @@ def tokenize(self, captions_for_image):
         # tokenize sentence
         # ======================================================
         cmd.append(os.path.basename(tmp_file.name))
-        p_tokenizer = subprocess.Popen(cmd, cwd=path_to_jar_dirname, \
+        if self.verbose:
+            p_tokenizer = subprocess.Popen(cmd, cwd=path_to_jar_dirname, \
                 stdout=subprocess.PIPE)
+        else:
+            p_tokenizer = subprocess.Popen(cmd, cwd=path_to_jar_dirname, \
+                stdout=subprocess.PIPE, stderr=subprocess.DEVNULL)
         token_lines = p_tokenizer.communicate(input=sentences.rstrip())[0]
         token_lines = token_lines.decode()
         lines = token_lines.split('\n')
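With this change applied, callers can opt out of the log output. A hedged usage sketch, assuming the usual coco-caption input layout of `{image_id: [{'caption': ...}]}` and that `java` plus the bundled CoreNLP jar are available:

```python
from tokenizer.ptbtokenizer import PTBTokenizer

# Toy input in the {image_id: [{'caption': ...}, ...]} layout used by coco-caption.
captions_for_image = {
    'img_1': [{'caption': 'A dog runs across the grass.'}],
    'img_2': [{'caption': 'Two people ride bicycles down a street.'}],
}

# verbose=False takes the new else-branch: the tokenizer subprocess's stderr goes to
# DEVNULL, so no "PTBTokenizer tokenized N tokens at ..." lines are printed.
tokenizer = PTBTokenizer(verbose=False)
tokenized = tokenizer.tokenize(captions_for_image)
# tokenized maps each image id to its tokenized captions,
# e.g. {'img_1': ['a dog runs across the grass'], ...}
```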