# jemdoc: menu{MENU2}{genisys.html}
= ADA Lab @ UCSD
~~~
*Note:* This umbrella project webpage is now deprecated.
Please see the webpage of the active project, SpeakQL.
~~~
~~~
{}{img_left}{images/genisys.jpg}{}{80px}{}{}
== Project Genisys
~~~
=== Overview
Genisys is a new kind of data system that enables ADA applications to easily deploy ML
models in environments ranging from the cloud to personal devices.
Genisys uses deep learning models to see, hear, and understand unstructured
data and query sources such as speech, images, video, time series, and text.
We call this vision of type-agnostic data analytics /database perception/.
Watch this space for more details.
=== Active Component Projects
~~~
{}{img_left}{images/krypton.jpg}{}{80px}{}{}
[krypton.html *Krypton*] \n
Enabling fast interactive diagnosis of the internals of visual perception systems.
~~~
~~~
{}{img_left}{images/panorama.png}{}{80px}{}{}
[panorama.html *Panorama*] \n
Enabling unbounded vocabulary querying over video.
~~~
~~~
{}{img_left}{images/speakql.jpg}{}{80px}{}{}
[speakql.html *SpeakQL*] \n
Enabling speech-driven multimodal querying of structured data with regular SQL and more.
~~~
~~~
{}{img_left}{images/vista.jpg}{}{80px}{}{}
[vista.html *Vista*] \n
Enabling data systems to truly see image and video data for efficient multimodal analytics.
~~~
=== Publications
- Panorama: A Data System for Unbounded Vocabulary Querying over Video\n
Yuhao Zhang and Arun Kumar\n
VLDB 2020 | [http://www.vldb.org/pvldb/vol13/p477-zhang.pdf Paper PDF] and [papers/2019_Panorama_VLDB.txt BibTeX] |
[papers/TR_2019_Panorama.pdf TechReport]
| [https://docs.google.com/presentation/d/1a9xHmfP1Gwg03CnVP8OWWf20v1IZ9O5eIhfa0dEdkcc/edit?usp=sharing Talk slides] | Talk videos: [https://www.youtube.com/watch?v=gAGOp0fbUcU YouTube] [https://www.bilibili.com/video/av329339128?p=109 Bilibili]
| [https://adalabucsd.github.io/research-blog/panorama.html Blog post]
| [https://github.com/makemebitter/Panorama-UCSD Source code on GitHub]
- Query Optimization for Faster Deep CNN Explanations\n
Supun Nakandala, Arun Kumar, and Yannis Papakonstantinou\n
ACM SIGMOD Record 2020 | [papers/2020_Krypton_SIGMODRecord.pdf Paper PDF] and [papers/2020_Krypton_SIGMODRecord.txt BibTeX] \n
+ACM SIGMOD Research Highlights Award+
- Incremental and Approximate Computations for Accelerating Deep CNN Inference\n
Supun Nakandala, Kabir Nagrecha, Arun Kumar, and Yannis Papakonstantinou\n
ACM TODS 2020 | [papers/2020_Krypton_TODS.pdf Paper PDF] and [papers/2020_Krypton_TODS.txt BibTeX] \n
+Invited Paper+
- Vista: Optimized System for Declarative Feature Transfer from Deep CNNs at Scale\n
Supun Nakandala and Arun Kumar\n
ACM SIGMOD 2020 | [papers/2020_Vista_SIGMOD.pdf Paper PDF] and [papers/2020_Vista_SIGMOD.txt BibTeX] |
[papers/TR_2020_Vista.pdf TechReport] | [https://adalabucsd.github.io/research-blog/research/2020/06/14/vista.html Blog post] | [https://www.youtube.com/watch?v=nmfUFCDthAo&feature=youtu.be Talk Video] | [https://github.com/ADALabUCSD/Vista Code]
- SpeakQL: Towards Speech-driven Multimodal Querying of Structured Data\n
Vraj Shah, Side Li, Arun Kumar, and Lawrence Saul\n
ACM SIGMOD 2020 | [papers/2020_SpeakQL_SIGMOD.pdf Paper PDF] and [papers/2020_SpeakQL_SIGMOD.txt BibTeX] |
[papers/TR_2020_SpeakQL.pdf TechReport] |
[https://adalabucsd.github.io/research-blog/research/2020/06/14/speakql.html Blog post] |
[https://drive.google.com/drive/folders/1tSxUTu2A7qy8fPtB81RnwkyakgykZ3iw?usp=sharing Dataset on Drive]
- Incremental and Approximate Inference for Faster Occlusion-based Deep CNN Explanations\n
Supun Nakandala, Arun Kumar, and Yannis Papakonstantinou\n
ACM SIGMOD 2019 | [papers/2019_Krypton_SIGMOD.pdf Paper PDF] and [papers/2019_Krypton_SIGMOD.txt BibTeX] | [papers/TR_2019_Krypton.pdf TechReport] | [https://adalabucsd.github.io/research-blog/research/2019/06/07/krypton.html Blog post] | [https://av.tib.eu/media/42901 Talk Video] \n
+Honorable Mention for Best Paper Award+
- Demonstration of SpeakQL: Speech-driven Multimodal Querying of Structured Data\n
Vraj Shah, Side Li, Kevin Yang, Arun Kumar, and Lawrence Saul\n
ACM SIGMOD 2019 Demo | [papers/2019_SpeakQL_SIGMOD.pdf Paper PDF] and [papers/2019_SpeakQL_SIGMOD.txt BibTeX] | [https://vimeo.com/295693078 Video]
- Demonstration of Krypton: Optimized CNN Inference for Occlusion-based Deep CNN Explanations\n
Allen Ordookhanians, Xin Li, Supun Nakandala, and Arun Kumar\n
VLDB 2019 | [http://www.vldb.org/pvldb/vol12/p1894-ordookhanians.pdf Paper PDF] and [papers/2019_Krypton_VLDB.txt BibTeX] | [https://www.youtube.com/watch?v=1OWddbd4n6Y&feature=youtu.be Video]
- Demonstration of Krypton: Incremental and Approximate Inference for Faster Occlusion-based Deep CNN Explanations\n
Supun Nakandala, Arun Kumar, and Yannis Papakonstantinou\n
SysML 2019 Demo | [papers/2019_Krypton_SysML.pdf Paper PDF] | [https://www.youtube.com/watch?v=1OWddbd4n6Y&feature=youtu.be Video]
- Materialization Trade-offs for Feature Transfer from Deep CNNs for Multimodal Data Analytics\n
Supun Nakandala and Arun Kumar\n
SysML 2018 (Short paper/poster) | [papers/2018_Vista_SysML.pdf Paper PDF]
- SpeakQL: Towards Speech-driven Multi-modal Querying\n
Dharmil Chandarana, Vraj Shah, Arun Kumar, and Lawrence Saul\n
ACM SIGMOD 2017 HILDA Workshop |
[papers/2017_SpeakQL_HILDA.pdf Paper PDF] and [papers/2017_SpeakQL_SIGMOD.txt BibTeX]