<head>
<title>CS221 Final Project Guidelines</title>
<script src="plugins/main.js"></script>
</head>
<body>
<div class="assignmentTitle">
CS221 Final Project Guidelines
</div>
<p>
<a href="lectures/index.html#include=project.js">[slides]</a>
</p>
<p>
In the final project, you will work in groups of <b>one, two, three,
or four</b> to apply the techniques that you've learned in this class
to a new setting that you're interested in. Note that the larger the
group, the higher the expectations for the project.
</p>
<p>
You will build a system to solve a well-defined task.
Which task you choose is completely open-ended, but the methods you
use should draw on the ones from the course.
</p>
<p>
One thing we'd like you to think about is the <b>social impact</b> of your work.
AI is more than just a collection of cool techniques; it has the potential
to improve society (as well as to do harm). Please think about how
you could use the techniques in this class for the former.
Of course, sometimes there might be a trade-off between doing socially
impactful work and doing technically interesting work,
which is something your team should manage.
</p>
<p>
After you submit your proposal, you will be assigned one of the CAs as your
official mentor. He or she will grade all your work and really get to know
your project.
You are encouraged to come to office hours often to discuss your project;
you can also go to the instructor's or other CAs' office hours to get a second opinion.
Note that it will take several iterations to find the right project, so be
patient; this exploration is an essential part of research, so learn from it.
Have fun and don't wait until the last minute!
</p>
<p>
The final project should consist of the following stages:
<ul>
<li><b>Task definition</b>: What does your system do (what is its input and output)?
What real-world problem does this system try to solve?
<p>
Make sure that the <b>scope</b> of the project is not too narrow or broad.
For example, building a system to answer any natural
language question is too broad, whereas answering short factoid questions
about movies is more reasonable. This is probably the most
important part of the project, but it is also the part that you will not have
gotten practice doing in the homeworks.
<p>
The first task you might come up with is to apply binary classification
on some standard dataset. This is probably not enough. If you are thinking in
terms of binary classification, you are probably thinking too narrowly about
the task. For example, when recommending news articles, you might not want to make
predictions for individual articles, but might benefit from choosing a diverse set of articles.
<p>
An important part of defining the task is the <b>evaluation</b>.
In other words, how will you measure the success of your system?
For this, you need to obtain a reasonably sized <b>dataset</b> of example
input-output pairs, either from existing sources, or collecting one from
scratch. A natural evaluation metric is accuracy, but it could also be memory
usage or running time. How big the dataset needs to be depends on the task at hand.
</li>
<li>
<b>Infrastructure</b>: In order to do something interesting, you have to set
up the infrastructure. For machine learning tasks, this involves collecting
data (either by scraping, using crowdsourcing, or hand labeling). For
game-based tasks, this involves building the game engine/simulator.
While infrastructure is necessary, try not to spend too much time on it.
You can sometimes take existing datasets or modify existing simulators to save time,
but if you want to solve a task you care about, this is not always an option.
Note that if you download existing datasets which are already preprocessed (e.g., Kaggle),
then you will be expected to do more with the project.
<p>
</li>
<li>
<b>Approach</b>: Identify the challenges of building the system and the
phenomena in the data that you're trying to capture.
How should you model the task (e.g., using search, machine learning, logic, etc.)?
There will be many ways to do this, but you should pick one or two
and explain how the methods address the challenges as well as any pros and cons.
What algorithms are appropriate for handling the models that you came up with,
and what are the tradeoffs between accuracy and efficiency?
Are there any implementation choices specific to your problem?
<p>
Before developing your primary approach, you should implement
<b>baselines</b> and <b>oracles</b>. These are really important
as they give you intuition for how easy or hard the problem you're solving is.
Intuitively, baselines give lower bounds on the performance you will obtain and oracles give upper bounds.
If this gap is too small, then you probably don't have a good task.
Importantly, baselines and oracles should be relatively easy to implement and can be done
before you invest a lot of time in implementing a fancier approach.
This can prune out problems early and save you a lot of time!
<p>
Baselines are simple algorithms, which might include
using a small set of hand-crafted rules,
training a simple classifier, etc.
(Note that baselines are extremely simple, but you might be surprised at how effective they are.)
While a predictor that guesses randomly provides a lower bound (and can be reported in the paper),
it is too simple and doesn't give you much information.
Predicting the majority label is a slightly less trivial baseline,
and whether it's acceptable depends on how insightful it is.
For classification, if the different labels have very different proportions,
then it could be useful; otherwise it won't be.
You are encouraged to have multiple baselines.
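<p>
To make this concrete, here is a minimal sketch (in Python) of a majority-label
baseline; the label lists are hypothetical placeholders for your own dataset:
<pre>
from collections import Counter

def majority_baseline(train_labels, test_labels):
    # Predict the most common training label for every test example
    # and report the resulting accuracy.
    majority_label = Counter(train_labels).most_common(1)[0][0]
    num_correct = sum(1 for y in test_labels if y == majority_label)
    return num_correct / len(test_labels)

# Hypothetical usage with your own label lists:
# print(majority_baseline(train_labels, test_labels))
</pre>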
<p>
Oracles are algorithms that "cheat" and look at the correct answer or involve humans.
For human-like classification problems (e.g., sentiment classification),
you can have each member of your project team annotate ~50 examples
and then measure the agreement rate.
Note that some tasks are subjective, so even though
humans are providing ground truth labels,
human accuracy will not be 100%.
When the classification problem is not human-like,
you can try to use the training error of an expressive classifier (e.g., nearest neighbors)
as a proxy for oracle error.
The idea is that if you can't even fit the training data using a very expressive classifier,
there is probably a lot of noise in your dataset,
and you have a slim chance of building any classifier that does well on the test set.
While returning 100% is an upper bound, it is not a valid oracle since it is vacuously an upper bound.
Sometimes, oracles might be difficult to come by.
If you think that no good oracles exist, explain why.
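<p>
As an illustration of the human-agreement oracle, here is a small Python sketch
(the annotation lists are hypothetical) for computing the agreement rate between two annotators:
<pre>
def agreement_rate(labels_a, labels_b):
    # Fraction of examples on which two annotators assign the same label;
    # this serves as a rough proxy for an upper bound on achievable accuracy.
    assert len(labels_a) == len(labels_b)
    num_agree = sum(1 for a, b in zip(labels_a, labels_b) if a == b)
    return num_agree / len(labels_a)

# Hypothetical usage: each list holds one team member's labels for ~50 examples.
# print(agreement_rate(annotations_alice, annotations_bob))
</pre>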
<p>
Overall, the main point of baselines and oracles is to get you to look at the problem
carefully and think about what's possible.
The accuracies of state-of-the-art systems on the dataset
could be either a baseline or an oracle.
Sometimes, there are data points that are neither baselines nor oracles:
for example, in a two-component system, you might use an oracle for one component and a baseline for the other.
</li>
<li><b>Literature review</b>: Have there been other attempts to build such a
system? Compare and contrast your approach with existing work, citing the
relevant papers. The comparison should be more than just high-level
descriptions. You should try to fit your work and other work into the same
framework. Are the two approaches complementary, orthogonal, or contradictory?
<p>
</li>
<li><b>Error analysis</b>:
Design a few experiments to show the properties (both pros and cons) of your system.
For example, if your system is supposed to deal with graphs with lots of cycles,
then construct both examples with lots of cycles and ones without to test your hypothesis.
Each experiment should ask a concise question, such as: <i>Do we need
to model the interactions between the ghosts in Pac-Man?</i> or
<i>How well does the system scale up to large datasets?</i>
Analyze the data and show either graphs or tables to illustrate your point.
What's the take-away message? Were there any surprises?
</ul>
</p>
<span class="header">Milestones</span>
<p>
Throughout the quarter, there will be several milestones so that you can get
adequate feedback on the project.
<ul>
<li><a name="p-proposal"></a><b>Proposal</b> (10% Points) (2 pages max):
Define the input-output behavior of the system and the scope of the project.
What is your evaluation metric for success?
Collect some preliminary data, and give <b>concrete examples of inputs and outputs</b>.
Implement a baseline and an oracle and discuss the gap.
What are the challenges? Which topics (e.g., search,
MDPs, etc.) might be able to address those challenges (at a high-level, since
we haven't covered any techniques in detail at this point)?
Search the Internet for similar projects and mention the related work.
By this point, you should basically have completed all the infrastructure (e.g., building a simulator,
cleaning data) needed to do something interesting.
<p>
<li><a name="p-progress"></a><b>Progress report</b> (20% Points) (4 pages):
Propose a model and an algorithm for tackling your task. You should describe
the model and algorithm in detail and use a concrete example to demonstrate
how the model and algorithm work.
Don't describe methods in general; describe precisely how they apply to your problem
(what are the variables, factors, states, etc.).
You should also have finished implementing a preliminary version of your algorithm
(maybe it's not fully optimized yet and it doesn't have all the features you want).
Report your initial experimental results.
<p>
<li><a name="p-poster"></a><b>Poster session</b> (20% Points):
By the poster session, you should have finished implementation, run
a good chunk of your experiments, and done some basic error analysis.
In the poster, you should describe the motivation, problem definition,
challenges, approaches, results, and analysis.
The goal of the poster is to convey the important high-level ideas and give intuition
rather than be a super-detailed specification of everything you did
(but you should still be precise).
You will be evaluated on both the contents of the poster as well as your
presentation.
You will be assigned to either Session A (2:30-4:00pm) or Session B (4:00-5:30pm).
You should stand by your poster during your assigned slot.
You can find your session assignment <a href="project-list.html">here</a>.
During the poster session, the course staff will come around.
You should be able to give a 30-second elevator pitch providing the
highlights of your work, but also be prepared to take questions about any of
the specific details.
Additional tips:
<ul>
<li>Use lots of diagrams and concrete examples. Use bullets for the key points, and make sure you use a large font that can be read from a distance.
Don't write long sentences and lots of complex equations.</li>
<li>Organize your poster into sections, so it's clear what the components of the project are. This also makes it easy to delve into a particular component.</li>
<li>Practice and polish your 30-second pitch. Someone should be able to understand what you're doing and importantly, why, from listening to it.</li>
<li>If you can make a live demo, you should do it!</li>
<li>The posterboards are 20" x 30" (not that big!).
You can print your poster at Meyer library or Kinkos.
Or, for a super convenient/cheap option, you can use <a
href="http://www.blockposters.com/">blockposters</a>, which splits up
your poster into small pages which can be printed on a normal printer.
</li>
<li>At the beginning of the poster session, you should check in with your mentor,
who will provide easels and stands in exchange for a group member's student ID card.
You should set up your poster around the mentor's designated area.</li>
<li>All group members are expected to be at the poster session.
If you cannot make it, coordinate with your mentor.</li>
<li>In all cases, submit your poster as a PDF using the submit script by 11pm that evening.</li>
</ul>
<p>
<li><a name="p-peer"></a><b>Poster session peer review</b>:
During the time that you're not assigned,
you should wander around and look at other people's posters — after all,
the whole point of having a poster session is so that everyone can share what they've accomplished over the quarter.
You might even get ideas for your own project.
Based on your wanderings,
you should choose 3 posters that you liked the most and write a sentence or
two describing each poster and why you liked it.
These reviews will not influence anyone's grade;
they are just a fun way to encourage you to engage in the poster session.
<p>
<li><a name="p-final"></a><b>Final report</b> (50% Points) (5-10 pages):
You should have completed everything (task definition, infrastructure,
approach, literature review, error analysis).
</ul>
</p>
<p>
Note: if needed, you can include an appendix with any figures/plots/examples
beyond the maximum number of allowed pages.
</p>
<span class="header">Submission</span>
<p>
Submit the milestones using the submit script as usual,
but make sure the <b>same one member</b> of your
group submits on behalf of the entire group.
</p>
<p>
All milestones are due at <b><font color="red">11pm</font>
(23:00, not 23:59)</b>.
Late days (up to 2) can be used <b>except for the final report</b>.
</p>
For each milestone, you should submit:
<ul>
<li><code>proposal.pdf</code>, <code>progress.pdf</code>, or <code>final.pdf</code> containing a PDF of your writeup.</li>
</ul>
<p>For <b>p-proposal</b>, you should also submit a Google form on Gradescope which includes project information, team member information, and mentor preference.</p>
For <b>p-final</b>, you should also submit supplementary material.
There are two ways to do this. First, you can just package it up:
<ul>
<li><b>Code:</b> Upload your code to GitHub/Bitbucket/etc and include a link to the code in the writeup.
Alternatively, you can create a <code>code.zip</code> file containing the code, upload it to OneDrive/Google Drive/etc.,
and include a link in the writeup. Any language is fine;
it does not have to run out-of-the-box. You should also include a README file with the repo/zip documenting
what everything is and what commands you ran.</li>
<p>
<li><b>Data:</b> Create a <code>data.zip</code> file containing the data. Upload the file to OneDrive/Google Drive/etc.
Include a link in the writeup.</li>
</ul>
The file size limit is <b>20MB</b> per file.
If the data does not fit in the file size limit,
submit a small but meaningful subset of the data.
<p>
Second, we encourage you to submit your project as a <a href="https://worksheets.codalab.org">CodaLab worksheet</a>,
which certifies that the experiments in your project are reproducible.
With CodaLab, you upload your code and data and interleave the description of your experiments with their actual execution.
<b>Extra credit</b> will be awarded to those who produce a meaningful CodaLab worksheet.
Just include the link in the final report.
<p>
<span class="header">Grading rubric</span>
Your project will be graded on the following dimensions:
<ul>
<li><b>Task definition</b>: is the task precisely defined and is the motivation for the task clear?
In other words, does the world somehow become a better place if your project were successful?
We will reward projects that are extra thoughtful about how to use <b>AI for social impact</b>.
<li><b>Approach</b>: were a baseline, an oracle, and an advanced method described clearly, well justified, and tested?
<li><b>Data and experiments</b>: have you explained the data clearly, performed systematic
experiments, and reported concrete results?
<li><b>Analysis</b>: did you interpret the results and try to explain why things
worked (or didn't work) the way they did? Do you show concrete examples?
<li><b>Extra credit</b>: does the project present interesting and novel ideas
(i.e., would this be publishable at a good conference)?
</ul>
<p>
Of course, the experiments may not always be successful.
Getting negative results is normal,
and as long as you make a reasonably well-motivated attempt and explain <i>why</i> the
results came out negative, you will get credit.
</p>
<span class="header">An example strategy</span>
<p>
Here is a suggested way to approach the final project, illustrated with a running example.
<ul>
<li>Pick a topic that you're passionate about (e.g., food, language, energy,
politics, sports, card games, robotics).
<i>As a running example, say we're interested in how people read the news to get their information.</i>
<p>
<li>Brainstorm to find some tasks on that topic: ask "wouldn't it be nice to
have a system that does X?" or "wouldn't it be nice to understand X?"
A good task should not be too easy (sorting a list of numbers) and not too hard
(building a system that can automatically solve CS221 homeworks).
Please come to office hours for feedback on finding the right balance.
<i>Let's focus on recommending news to people.</i>
<p>
<li>Define the task you're trying to solve clearly and convince yourself (and a few friends) that it's important/interesting.
Also state your evaluation metric – how will you know if you have succeeded or not?
<i>
Concentrate on a small set of popular news sites: nytimes.com, slashdot.org, sfgate.com, onion.com, etc.
For each user and each day, assume we have acquired a set of articles that the user is interested in reading (training data).
Our task is to predict, for a new day, given the full set of articles, the best subset to show the user;
the evaluation metric would be prediction accuracy.
</i>
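<p>
One concrete way to instantiate "prediction accuracy" for subset prediction is the F1
overlap between the predicted and actual article sets; here is a minimal Python sketch
(the choice of F1 is our assumption, not the only reasonable option):
<pre>
def subset_f1(predicted_articles, actual_articles):
    # F1 overlap between the predicted subset and the articles the user
    # actually read; 1.0 means a perfect match, 0.0 means no overlap.
    predicted, actual = set(predicted_articles), set(actual_articles)
    if not predicted or not actual:
        return 0.0
    precision = len(predicted &amp; actual) / len(predicted)
    recall = len(predicted &amp; actual) / len(actual)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
</pre>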
<p>
<li>Gather and clean the necessary data (this might involve scraping websites, filtering outliers, etc.).
This step can often take an annoyingly large amount of time if you're not careful, so try not to get bogged down here.
Simplify the task or focus on a subset of the data if necessary.
You might find yourself adjusting the task you're trying to solve based on new empirical insights you get by looking at the data.
Notice that even if you're not doing machine learning, it's necessary to have data for evaluation purposes.
<i>Write some scripts that download the RSS feeds from the news sites and run
some basic NLP processing (e.g., tokenization), say, using NLTK.</i>
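<p>
A minimal sketch of that step, assuming the third-party <code>feedparser</code> package
and NLTK are installed (the feed URL is just a placeholder; substitute the sites you care about):
<pre>
import feedparser   # pip install feedparser
import nltk         # pip install nltk

nltk.download("punkt", quiet=True)  # tokenizer models used by word_tokenize

FEED_URL = "https://rss.nytimes.com/services/xml/rss/nyt/HomePage.xml"  # placeholder

feed = feedparser.parse(FEED_URL)
articles = []
for entry in feed.entries:
    # Keep the raw title along with its tokenization for later feature extraction.
    articles.append({"title": entry.title,
                     "tokens": nltk.word_tokenize(entry.title)})
print(f"Fetched {len(articles)} articles")
</pre>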
<p>
<li>Implement a baseline algorithm. For a classification task, this would be
always predicting the most common label. If your baseline is too high, then
your task is probably too easy.
<i>One baseline is to always produce the first document from each news site.</i>
Also implement an oracle, for example, <i>recommending the document based on
the number of comments. This is an oracle because you wouldn't have the number
of comments at the time you actually wanted to recommend the article!</i>
<p>
<li>Formulate a model and implement the algorithm for that model. You should
try several variants and compare them.
Remember to try as much as possible to separate model (what you
want to compute) from algorithms (how you do it).
<i>You might train a classifier to predict, for each news article, whether to include it or not.
You might then include these predictions as factors in a weighted CSP and
try to find a set of articles that balances diversity and relevance.</i>
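<p>
A sketch of the classifier variant using scikit-learn (the toy articles and labels
are hypothetical stand-ins for your real training data):
<pre>
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: article texts and 0/1 labels
# (1 = the user read the article, 0 = the user skipped it).
train_texts = ["stanford beats cal in big game",
               "senate passes budget bill",
               "giants win the world series",
               "new tax legislation announced"]
train_labels = [1, 0, 1, 0]

# Bag-of-words (tf-idf) features feeding a linear classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

# Per-article relevance scores; a downstream step (e.g., the weighted CSP
# mentioned above) could combine these with diversity factors to pick the subset.
scores = model.predict_proba(["cal football season recap"])[:, 1]
print(scores)
</pre>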
<p>
<li>Perhaps the most important part of the project is the final step, which
is to analyze the results. It's more important to do a thorough
analysis and interpret your results than to implement a huge number of
complicated heuristics trying to eke out the maximum performance.
The analysis should begin with basic facts, e.g., how much time/memory did
the algorithm take, how does the accuracy vary with the amount of training
data? What are the instances that your system does the worst on? Give concrete examples
and try to understand why. Is there a bottleneck? Is it due to lack of training data?
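<p>
For instance, a quick learning-curve experiment shows whether the bottleneck is lack of
training data: if accuracy is still climbing at the full dataset size, more data would likely help.
A sketch with synthetic stand-in data (replace <code>X</code> and <code>y</code> with your own features and labels):
<pre>
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in data; substitute your own feature matrix and labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))
y = (X[:, 0] + 0.5 * rng.normal(size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
                                                    random_state=0)
model = LogisticRegression()
for frac in [0.1, 0.25, 0.5, 1.0]:
    n = int(frac * len(X_train))
    model.fit(X_train[:n], y_train[:n])
    print(f"{n:4d} training examples -> test accuracy "
          f"{model.score(X_test, y_test):.3f}")
</pre>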
</ul>
</p>
<span class="header">Datasets</span>
<p>
You are free to use existing datasets, but these might not be the
best match for your problem, in which case you are probably better off making
your own dataset.
<ul>
<li><a href="http://www.kaggle.com/">Kaggle</a> is a website that runs
machine learning competitions for predicting for monetary reward.
<li><a href="http://cs229.stanford.edu/projects2011.html">Past CS229 projects</a>:
examples of machine learning projects that you can look at for inspiration.
Of course your project doesn't have to use machine learning – it can draw from other areas of AI.
<li><a href="http://baldur.iti.kit.edu/SAT-Challenge-2012">SAT competition</a>: satisfiability problems are a special important class of CSPs.
<li><a href="http://nlp.stanford.edu/links/statnlp.html#Corpora">Natural language processing datasets</a>: links to many NLP datasets for different languages.
<li><a href="http://www.stanford.edu/class/cs224w/resources.html">Datasets from CS224W (Social and Information Network Analysis)</a>.
</ul>
<p>
<span class="header">Libraries</span>
<p>
You are free to use existing tools for parts of your project as long as you're
clear what you used. When you use existing tools, the expectation is that you will do
more on other dimensions.
<ul>
<li><a href="http://scikit-learn.org">scikit-learn</a>: machine learning library implemented in Python
<li><a href="http://nltk.org">Natural language Toolkit (NLTK)</a>: a set of tools for basic NLP in Python
<li><a href="http://www.neuroforge.co.uk/index.php/getting-started-with-python-a-opencv">OpenCV</a>:
Python libraries for simple computer vision
</ul>
<p>
<span class="header">Some project ideas</span>
<ul>
<li>Predict airline ticket prices given day, time, location, etc.
<li>Predict the amount of electricity consumed over the course of a day.
<li>Predict whether the phone should be switched off / silenced based on sensor readings from your smartphone.
<li>Auto-complete code when you're programming.
<li>Answer natural language questions for a restricted domain (e.g., movies, sports).
<li>Search for a mathematical theorem based on an expression which normalizes over variable names.
<li>Find the optimal way to get from one place on Stanford campus to another
place, taking into account uncertain travel times due to traffic.
<li>Solve Sudoku puzzles or crossword puzzles.
<li>Build an engine to play Go, chess, 2048, poker, etc.
<li>Break substitution codes based on knowledge of English.
<li>Automatically generate the harmonization of a melody.
<li>Generate poetry on a given topic.
</ul>
<p>You can also get inspiration from
<a href="https://stanford-cs221.github.io/autumn2019/2018/project-list.html">last spring's CS221 projects</a>
(student access only).
</p>
<p>
<span class="header">Frequently asked questions</span>
<p>
<b>Can I use the same project for CS221 and another class (CS229, etc.)?</b>
The short answer is that you cannot turn in the identical project for both
classes, but you can share common infrastructure across the two classes.
First, you should make sure that you follow the guidelines for the CS221
project, which are likely different from those of other classes.
Second, if any part of the project is done for a purpose outside CS221 (for the
final project in CS229 or other classes, or even for your own research), then
in the progress and final reports, you must clearly indicate which part of the
project was done for CS221 and which part was not.
For example, if you're taking CS229, then you cannot turn in the same pure
machine learning project for CS221. But you can work on the same broad problem
(e.g., news recommendation) for both classes and share the same dataset /
generic wrapper code. You should then explore the machine learning aspect of
the problem for CS229 (e.g., classifying news relevance) and another topic for
CS221 (e.g., optimizing diversity across news articles using search or CSPs).
</p>
<p>
<b>Are there restrictions on who I can partner up with for the final project?</b>
The only hard requirement is that each member of your group must be enrolled
in CS221. Thus, if you choose to use the same project for CS221 and another class,
all of your partners must be in CS221. If you feel like you have a compelling case for an exception,
please submit a request on Piazza detailing the parts of the
project used for each class and the reasons for deviating from the project policies.
<p>
<b>How do I choose a good baseline and oracle?</b>
Both baselines and oracles should be simple and not take much time. The point
is not to do something fancy, but to work with the data / problem that you have
in a substantive way and learn something from it.
Here are some examples of baselines:
<ul>
<li>Manually hand-code a few simple rules
<li>Train a classifier with some simple features
</ul>
Guessing completely at random is technically a baseline, but is a really bad
one because it doesn't really tell you much about how easy the problem is.
Here are some examples of oracles:
<ul>
<li>Manually have each member of your team hand-label a handful of examples.
The oracle is the agreement rate between people.
<li>Measure the training accuracy using reasonable features. The idea is
that if you can't even fit your training data very well, then your test error
is probably going to be even worse.
<li>Predict the label given information that you wouldn't normally have at
test time (for example, predicting the future given other information from the
future).
</ul>
Always guessing the correct label is technically an oracle, but it's a really
bad one, because you'd always get 100% and you don't learn anything from it.