Commit 694739d (parent 9974028)
1 file changed: +5 -23 lines

@@ -136,41 +136,23 @@ Audience question
<!-- .slide: class="audience-question" -->

- # Complexity
-
- ## Grep
+ # Grep Complexity

Search every query term as a string in every document: <!-- .element: class="fragment" -->

$$ O(\text{num query terms} \times \text{total length of all documents}) $$ <!-- .element: class="fragment" -->
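For reference, the scan behind this bound looks roughly like the following; a minimal sketch, with `documents` and `query_terms` as hypothetical inputs (not part of the slides):

```python
def naive_search(documents: list[str], query_terms: list[str]) -> dict[str, list[int]]:
    """Grep-style scan: check every query term against every document.

    Each term is scanned over the full text of every document, so the
    total work is O(num query terms * total length of all documents).
    """
    hits: dict[str, list[int]] = {term: [] for term in query_terms}
    for doc_id, text in enumerate(documents):
        for term in query_terms:
            if term in text:  # substring scan over the whole document
                hits[term].append(doc_id)
    return hits
```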
- ## Union
-
- Merge result lists (without duplicates): <!-- .element: class="fragment" -->
-
- $$ O(\text{number of results}) $$ <!-- .element: class="fragment" -->
-
- ## Intersect
-
- Compare the first result list with every other: <!-- .element: class="fragment" -->
-
- $$ O(\text{num query terms} \times \text{num results per query term}) $$ <!-- .element: class="fragment" -->
+ Can take reaaally long<!-- .element: class="fragment" -->

Notes:
Audience question
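The Union and Intersect bounds removed above match the standard sorted-posting-list implementations; a minimal sketch under that assumption (sorted lists of document ids; function names are illustrative, not from the slides):

```python
import heapq

def union(result_lists: list[list[int]]) -> list[int]:
    """Merge sorted result lists without duplicates.

    heapq.merge streams every entry once, so this is O(number of results),
    up to a log(num lists) factor from the heap.
    """
    merged: list[int] = []
    for doc_id in heapq.merge(*result_lists):
        if not merged or merged[-1] != doc_id:  # skip duplicates
            merged.append(doc_id)
    return merged

def intersect(result_lists: list[list[int]]) -> list[int]:
    """Compare the first result list with every other, pairwise.

    Each pairwise step walks two sorted lists with two pointers, so the
    total is O(num query terms * num results per query term).
    """
    result = result_lists[0]
    for other in result_lists[1:]:
        out, i, j = [], 0, 0
        while i < len(result) and j < len(other):
            if result[i] == other[j]:
                out.append(result[i])
                i += 1
                j += 1
            elif result[i] < other[j]:
                i += 1
            else:
                j += 1
        result = out
    return result
```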
---

- # Grep complexity
-
- * $O(\text{num query terms} \times \text{total length of all documents})$
- * Can take reaaally long
-
- #### <!-- .element: class="fragment" data-fragment-index="1" --> Example
+ # Grep complexity example

- * &shy; <!-- .element: class="fragment" data-fragment-index="1" --> *English Wikipedia*: 6M articles, 12B characters, 1.2M
+ * *English Wikipedia*: 6M articles, 12B characters, 1.2M
  distinct terms
- * &shy; <!-- .element: class="fragment" data-fragment-index="1" --> grep: 2 query terms &times; 12GB = **24 billion string
-   comparisons**
+ * grep: 2 query terms &times; 12GB = **24 billion string comparisons**

Notes:
How can this be improved?
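As a sanity check on the example's arithmetic (taking the slide's 12B-character figure at face value):

```python
query_terms = 2
total_chars = 12_000_000_000  # English Wikipedia: ~12B characters, per the slide
print(f"{query_terms * total_chars:,}")  # 24,000,000,000 -> 24 billion comparisons
```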