GSoC Ideas 2020
We don't provide a special template for your application. But please make sure to answer the following questions:
- Who are you and what is your background? (university, studies, hobbies …)
- What are your experiences? (programming languages/frameworks, jobs, part of an open source community …)
- What are your expectations? Why have you chosen this project? (personal interests, specializations …)
Besides that, your application should contain a project plan consisting of tasks and milestones. Please make sure that your project plan is coordinated with your mentor. If you have any questions, feel free to ask! Just write to softvis(at)uni-leipzig.de or open an issue. In any case, please make sure to talk to your potential mentor before applying!
Please make sure to inform yourself about Getaviz and software visualization in general before applying. Have a look at our online demo and publications. Both are linked in the README of this project.
This document provides an overview of project ideas for Google Summer of Code 2020. Every idea comes with a list of keywords representing the technologies you will most likely get in touch with during the project. This does not mean you already have to be familiar with them; prerequisites are listed separately. Getaviz is a very heterogeneous project, containing different components that use completely different technologies. The keywords will help you find a project you are interested in, so you can work with the technologies you are enthusiastic about! Further, the focus of most projects can be shifted a bit according to your interests. Just talk to us! The following list gives you an overview of the technologies we use in Getaviz:
- Hardware: HTC Vive, Oculus Rift, Microsoft HoloLens
- Visualization frameworks: A-Frame, x3d, x3dom, d3
- Programming languages: Java, Ruby, JavaScript
- Frameworks and tools: react, Ruby on Rails, jQAssistant, neo4j

---
Brief explanation
jQAssistant is an open source tool for scanning software artifacts. The scanned software artifacts are stored as a graph in a neo4j database and serve as the data basis for analyses and visualizations with Getaviz or the jQA-dashboard. There are scanners for different programming languages and data sources, for example Java source code, Java bytecode, PHP source code, GitHub issues, stack traces, and others. In order to support further programming languages and data sources, a new scanner for jQAssistant has to be developed: a graph model is designed and the parsed data is mapped to it. The scanner is implemented in Java, similar to this tutorial. This project can be assigned to several students, where each student works on one scanner.
Expected results
- The goal of this project is to implement a scanner for one of the following programming languages or data sources that stores the information in a neo4j database.
- It is not the primary goal to develop a parser or tracer of your own. It is much better to use existing tools and adapt them to produce the required output.
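The scanner itself would be written in Java as a jQAssistant plugin (see the referenced tutorial). Purely to illustrate what "mapping parsed data onto a graph model" means, here is a minimal, hypothetical sketch that writes one kind of fact into neo4j using the official JavaScript driver; the node label, relationship type, and connection settings are assumptions, not part of an existing Getaviz model:

```javascript
// Minimal sketch: map a parsed "function calls function" fact onto a neo4j graph.
// All labels, properties, and credentials below are hypothetical examples.
const neo4j = require('neo4j-driver');

const driver = neo4j.driver('bolt://localhost:7687', neo4j.auth.basic('neo4j', 'secret'));

async function storeCall(callerName, calleeName) {
  const session = driver.session();
  try {
    // MERGE keeps the graph free of duplicate nodes when the scanner runs repeatedly.
    await session.run(
      `MERGE (caller:Function {name: $caller})
       MERGE (callee:Function {name: $callee})
       MERGE (caller)-[:CALLS]->(callee)`,
      { caller: callerName, callee: calleeName }
    );
  } finally {
    await session.close();
  }
}

storeCall('main', 'parseArguments').then(() => driver.close());
```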
Involved Technologies: Java, jQAssistant, neo4j
Knowledge Prerequisite
You should already have some experience with Java and the chosen software artifact. With this project, you can deepen your knowledge and get in touch with almost every aspect of the software artifact.
Mentor: Richard Müller [rmueller(at)wifa.uni-leipzig.de]

---
Brief explanation
Currently, the Ruby behaviour parser of Getaviz requires instrumentation of the Ruby code to start the tracing at a specific point. It is desirable to let the user of the behaviour parser determine the start and end of the tracing via command line arguments.
Expected results
- Package the structure, behavior, and evolution parsers as a gem with binaries
- Provide suitable command line options for all parsers for output files and for git repositories as input
- Provide command line options for the behavior parser to determine either the source code file and line or the class and method where the tracing starts and where it should finish
- Provide suitable filter options for all parsers
- Create unit tests for all parsers
Involved Technologies: Ruby
Knowledge Prerequisite
Experience with at least one high-level language, e.g. Ruby or Java.
Mentor: Jan Schilbach [jan.schilbach(at)uni-leipzig.de]

---
Brief explanation
In software visualizations, the relations between the elements are a critical part of task solving. Because of their huge number and different kinds, they are normally not displayed at all, and the user has to decide which of them should be displayed. A simple way to represent relations is to draw lines between the elements. However, in a complex visualization with a lot of elements, the lines become too long to get a suitable overview of the relations. Instead, the related elements could be temporarily arranged in a circle around the starting element.
Expected results
The related elements are temporarily arranged in a circle around the starting element. Leaving this circle with the mouse resets the positions of the related elements, and the relation is displayed in another way. Clicking on a related element navigates to its original position.
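A minimal sketch of the circular arrangement, assuming the related elements are plain A-Frame entities and the relation data is already available; the component name, the `targets` selector, and the `radius` property are hypothetical, not part of the existing Getaviz code:

```javascript
// Hypothetical A-Frame component: temporarily arranges related entities in a
// circle around the entity it is attached to. Original positions are remembered
// so they can be restored when the circle is left.
AFRAME.registerComponent('relation-circle', {
  schema: {
    targets: { type: 'selectorAll' },  // related elements, e.g. ".related-to-foo"
    radius: { type: 'number', default: 2 }
  },

  init: function () {
    this.originalPositions = new Map();
  },

  arrange: function () {
    const center = this.el.object3D.position;
    const targets = this.data.targets || [];
    targets.forEach((target, i) => {
      // Remember where the element came from.
      this.originalPositions.set(target, target.object3D.position.clone());
      const angle = (2 * Math.PI * i) / targets.length;
      target.object3D.position.set(
        center.x + this.data.radius * Math.cos(angle),
        center.y,
        center.z + this.data.radius * Math.sin(angle)
      );
    });
  },

  reset: function () {
    this.originalPositions.forEach((position, target) => {
      target.object3D.position.copy(position);
    });
    this.originalPositions.clear();
  }
});
```

The `arrange` and `reset` methods would then be triggered by the hover events of the starting element, e.g. `mouseenter`/`mouseleave` from A-Frame's cursor component.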
Involved Technologies: JavaScript, A-Frame
Knowledge Prerequisite
Experience with JavaScript. Experience with A-Frame is not necessary but will be helpful.
Mentor: Pascal Kovacs [pkovacs(at)uni-leipzig.de]

---
Brief explanation
Detecting single elements in a complex visualization is sometimes very tough because of the huge number of very small displayed elements. Navigation, like zooming or panning, becomes exhausting over time and comes along with losing the overview. In these cases, a magnifier can help to detect the elements of interest with less effort than navigation. For related elements, a preview of their representations in an additional window can also help to avoid unnecessary navigation.
Expected results
The user can display a magnifier over the current mouse position to detect and select small displayed elements. The user can also display previews of related elements in additional windows, which also allow navigating to the element by clicking on it.
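As a starting point for the preview idea, here is a minimal sketch, assuming plain A-Frame entities and a cursor with a raycaster that emits `mouseenter`/`mouseleave` events; the component name and the way the label text is derived are hypothetical:

```javascript
// Hypothetical A-Frame component: shows a small preview panel attached to the
// camera whenever the entity it belongs to is hovered with the cursor.
AFRAME.registerComponent('hover-preview', {
  init: function () {
    const camera = document.querySelector('[camera]');

    this.el.addEventListener('mouseenter', () => {
      // Create a simple text panel as the preview "window".
      this.preview = document.createElement('a-plane');
      this.preview.setAttribute('color', '#222');
      this.preview.setAttribute('width', 1);
      this.preview.setAttribute('height', 0.4);
      this.preview.setAttribute('position', '0.8 -0.5 -1.5'); // relative to the camera
      this.preview.setAttribute('text', {
        value: this.el.id || 'unnamed element', // assumption: the id carries the element name
        align: 'center',
        wrapCount: 20
      });
      camera.appendChild(this.preview);
    });

    this.el.addEventListener('mouseleave', () => {
      if (this.preview) {
        this.preview.parentNode.removeChild(this.preview);
        this.preview = null;
      }
    });
  }
});
```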
Involved Technologies: JavaScript, A-Frame
Knowledge Prerequisite
Experience with JavaScript. Experience with A-Frame is not necessary but will be helpful.
Mentor: Pascal Kovacs [pkovacs(at)uni-leipzig.de]

---
Brief explanation
Highlighting elements in complex visualizations can focus the user's attention on specific points of interest for the task at hand. Decorative animations, like pulsing glow effects, are one way to highlight specific elements and draw the user's attention.
Expected results
The user can decide which decorative animation, or which combination of them, represents which property of the elements.
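For orientation, a pulsing highlight can already be expressed declaratively with A-Frame's built-in animation component. This minimal sketch simply toggles such an animation on an element; the helper name, the `.point-of-interest` class, and the scale values are hypothetical:

```javascript
// Hypothetical helper: attach or remove a pulsing "breathing" animation that
// draws attention to an element, using A-Frame's built-in animation component.
function setPulseHighlight(el, enabled) {
  if (enabled) {
    el.setAttribute('animation__pulse', {
      property: 'scale',
      from: '1 1 1',
      to: '1.2 1.2 1.2',
      dir: 'alternate',
      dur: 600,
      loop: true,
      easing: 'easeInOutSine'
    });
  } else {
    el.removeAttribute('animation__pulse');
    el.setAttribute('scale', '1 1 1'); // restore the original size
  }
}

// Example: highlight every element tagged as a point of interest.
document.querySelectorAll('.point-of-interest')
  .forEach((el) => setPulseHighlight(el, true));
```

A glow-like effect could be approached similarly by animating material properties instead of the scale.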
Involved Technologies: JavaScript, A-Frame
Knowledge Prerequisite
Experience with JavaScript. Experience with A-Frame is not necessary but will be helpful.
Mentor: Pascal Kovacs [pkovacs(at)uni-leipzig.de]

---
Brief explanation
So far, only a single view of a model is visualized at a time. However, for many purposes it is necessary to analyze more than one view of the model simultaneously, or to analyze multiple views of different models, e.g. to compare two versions of the same system.
Expected results
The main result of this idea is that the UI can handle multiple views of the same model or of different models. As a second result, it should be possible to couple these views, so a user event in one view also affects the other view. For example, selecting an element in the first view selects the same element in the second view, if it exists and is visible.
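A minimal sketch of such view coupling, assuming each view exposes a way to select an element by a shared identifier; the `ViewCoupler` class and the `on`, `hasElement`, `isVisible`, and `select` methods are hypothetical, not the existing Getaviz API:

```javascript
// Hypothetical sketch: couple several views so that selecting an element in one
// view propagates the selection to all other registered views.
class ViewCoupler {
  constructor() {
    this.views = [];
  }

  register(view) {
    this.views.push(view);
    // Each view is expected to emit 'elementSelected' with the element id.
    view.on('elementSelected', (elementId) => {
      this.views
        .filter((other) => other !== view)
        .forEach((other) => {
          // Only mirror the selection if the element exists and is visible there.
          if (other.hasElement(elementId) && other.isVisible(elementId)) {
            other.select(elementId);
          }
        });
    });
  }
}

// Usage: const coupler = new ViewCoupler();
// coupler.register(leftView); coupler.register(rightView);
```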
Involved Technologies: JavaScript, A-Frame
Knowledge Prerequisite
Experience with JavaScript. Experience with A-Frame is not necessary.
Mentor: Pascal Kovacs [pkovacs(at)uni-leipzig.de]

---
Brief explanation
Last year, we implemented rudimentary support for the HTC Vive using A-Frame. Currently, it is possible to view visualizations with it and navigate using the controllers. Beyond that, however, it is not yet possible to interact with the visualization as it is on the desktop. If you choose this topic, you will extend the A-Frame support so that Getaviz provides the same features on desktop and HTC Vive. Finding suitable interaction concepts is a highly creative challenge where you can bring in your own ideas!
We can provide an HTC Vive to you, but only at our local virtual reality laboratory. If you can't work here, you'll have to find an HTC Vive yourself, for example at your local university. If possible, we will be happy to help.
Expected results
It should be possible to navigate the visualization, and some basic interaction concepts should be implemented, e.g., a search bar, filtering elements, and highlighting elements.
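As a rough starting point, element selection with the Vive controllers can be built on A-Frame's laser-controls and raycaster components. In this minimal sketch the component name, the `.interactable` class, the highlight color, and the assumption that elements are primitives exposing a `color` attribute are all hypothetical:

```javascript
// Hypothetical sketch: make visualization elements selectable with the Vive
// controllers via A-Frame's laser-controls and raycaster components.

// 1. Add a controller entity whose laser only intersects interactable elements.
const controller = document.createElement('a-entity');
controller.setAttribute('laser-controls', { hand: 'right' });
controller.setAttribute('raycaster', { objects: '.interactable' });
document.querySelector('a-scene').appendChild(controller);

// 2. Highlight elements while the laser points at them and react to the
//    trigger "click" to show details.
AFRAME.registerComponent('controller-interaction', {
  init: function () {
    this.el.addEventListener('mouseenter', () => {
      this.originalColor = this.el.getAttribute('color');
      this.el.setAttribute('color', '#ffcc00');
    });
    this.el.addEventListener('mouseleave', () => {
      this.el.setAttribute('color', this.originalColor);
    });
    this.el.addEventListener('click', () => {
      // Here the desktop behaviour (details panel, filtering, ...) would be reused.
      console.log('selected', this.el.id);
    });
  }
});
```

The component would then be attached to every visualization element, which would also carry the `interactable` class so the raycaster picks it up.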
Involved Technologies: JavaScript, A-Frame, HTC Vive
Knowledge Prerequisite
Experience with JavaScript. Experience with A-Frame or VR is not necessary.
Mentor: David Baum [david.baum(at)uni-leipzig.de]

---
Brief explanation
Our visualizations are generated in A-Frame at the moment and can be viewed via web browser. A-Frame supports the WebVR interface and therefore also runs on the HoloLens and HTC Vive out of the box. At the moment, however, it is not possible to navigate through the visualization or interact with it as it is on the desktop. If you choose this topic, you will extend the A-Frame support so that Getaviz provides the same features on desktop and HoloLens. Finding suitable interaction concepts is a highly creative challenge where you can bring in your own ideas! We try to provide a HoloLens, but at the moment the chances are rather low. If you have access to a HoloLens, for example at your university, this would simplify things a lot.
Expected results
The visualization should be displayed properly using the features of AR. It should be possible to navigate the visualization, and some basic interaction concepts should be implemented, e.g., a search bar, filtering elements, and highlighting elements.
Involved Technologies: JavaScript, A-Frame, Microsoft HoloLens
Knowledge Prerequisite
Experience with JavaScript. Experience with A-Frame or AR is not necessary.
Mentor: David Baum [david.baum(at)uni-leipzig.de]

---
Brief explanation
Getaviz generates A-Frame visualizations. Currently, they can be viewed via browser or HTC Vive. To better support virtual reality, the visualizations, including navigation and interaction, should be adapted to the Oculus Rift as well. Finding suitable interaction and navigation concepts is a highly creative challenge where you can bring in your own ideas! We can provide an Oculus Rift to you, but only at our local virtual reality laboratory. If you can't work here, you'll have to find an Oculus Rift yourself, for example at your local university. If possible, we will be happy to help.
Expected results
Implementation of a basic interaction and navigation concept (search bar, filtering elements, highlighting elements, viewing detail information)
Involved Technologies: Java, JavaScript, A-Frame, Oculus Rift
Knowledge Prerequisite
This is a very challenging and explorative project! You should have experience with JavaScript. It is very important that you are creative and bring in your own ideas, but also that you are willing to dive into new technologies and try out different solution approaches.
Mentor: David Baum [david.baum(at)uni-leipzig.de]

---
Brief explanation
The jQA-dashboard supports software project managers in decision making. Its data source is an existing neo4j database with structural, behavioral, and evolutionary information about a software project. The dashboard consists of interactive react components, where each component supports a certain task, for example hotspot, ownership, or test coverage analysis. Every component encapsulates a cypher query to fetch data from the neo4j database and JavaScript code (e.g. d3) to visualize its results. Not all data from the database is displayed in the dashboard yet. This includes, for example, code changes over time, call and dependency relationships, GitHub issues, and stack traces (Kieker). This project can be assigned to several students, where each student works on one component.
Expected results
- The goal of this project is to design a useful visualization, to implement it using d3 or another visualization framework such as Popoto, React-Vis, nivo, Semiotic, or yFiles for HTML, and to integrate it into the dashboard as a react component (see the sketch below). Graph visualizations are of particular interest.
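A minimal sketch of such a dashboard component, assuming a neo4j connection is available via the official JavaScript driver; the component name, the `driver` prop, the Cypher query, and the assumed graph model are illustrations, not the existing jQA-dashboard API:

```javascript
// Hypothetical react component: runs a Cypher query against the neo4j database
// and renders the result as a simple list (a real component would use d3 or a
// similar framework for the actual visualization).
import React, { useEffect, useState } from 'react';

function DependencyList({ driver }) {
  const [rows, setRows] = useState([]);

  useEffect(() => {
    const session = driver.session();
    // Assumed graph model: (:Type)-[:DEPENDS_ON]->(:Type)
    session
      .run('MATCH (a:Type)-[:DEPENDS_ON]->(b:Type) RETURN a.name AS from, b.name AS to LIMIT 25')
      .then((result) => {
        setRows(result.records.map((r) => ({ from: r.get('from'), to: r.get('to') })));
      })
      .finally(() => session.close());
  }, [driver]);

  return (
    <ul>
      {rows.map((row, i) => (
        <li key={i}>{row.from} → {row.to}</li>
      ))}
    </ul>
  );
}

export default DependencyList;
```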
Involved Technologies: JavaScript, react, d3, neo4j
Knowledge Prerequisite
The challenge in this project is to find a suitable visualization that supports a software project manager and that can be built on the basis of the existing data in the neo4j database. You will have to write JavaScript and react code. You should already be familiar with JavaScript or react, ideally with both. Experience with d3 and/or neo4j cypher queries is not necessary, but helpful. You can either reuse existing d3 components or develop your own d3 visualization, and you will be supported with cypher queries by the mentor. It is important that you are willing to learn and want to dive into new technologies.
Mentor: Richard Müller [rmueller(at)wifa.uni-leipzig.de]

---
Brief explanation
The evaluation server displays the evaluated scene in an iframe and provides an API to keep track of rudimentary interactions of the user with the scene. To improve the tracking of these interactions, an enhancement of the API and of the JavaScript responsible for interacting with the API is desirable.
Expected results
- Generalization of the API to facilitate the persistence of all browser interaction events, like mouse movement, mouse click, and key press
- A JavaScript script that tracks all these interactions and sends the information to the API (see the sketch below)
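A minimal sketch of such a tracking script, assuming the events are posted as JSON to a generic endpoint; the endpoint path and payload format are assumptions, not the current API of the evaluation server:

```javascript
// Hypothetical tracking script: records browser interaction events inside the
// evaluated scene and sends them to the evaluation server.
const ENDPOINT = '/api/interaction_events'; // assumed endpoint of the evaluation server

function report(type, detail) {
  fetch(ENDPOINT, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ type, detail, timestamp: Date.now() })
  });
}

// In practice, mouse movements would be throttled or batched before sending.
document.addEventListener('mousemove', (e) => report('mousemove', { x: e.clientX, y: e.clientY }));
document.addEventListener('click', (e) => report('click', { x: e.clientX, y: e.clientY }));
document.addEventListener('keydown', (e) => report('keydown', { key: e.key }));
```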
Involved Technologies: Ruby, Rails, JavaScript
Knowledge Prerequisite
Experience with at least one high-level language, e.g. Ruby or Java.
Mentor: Jan Schilbach [jan.schilbach(at)uni-leipzig.de]