

Hi, I'm Hannan Mahadik 👋


"Research is about pushing boundaries, and mentoring is about helping others find theirs."

– David Clutterbuck


About Me

I'm currently pursuing a PhD at the ELLIS Institute Tübingen, where I focus on research in artificial intelligence, especially Large Language Models (LLMs), deep learning, and model compression. My work is hands-on and experimental. I also enjoy mentoring students and sharing knowledge with the community.


What I'm Working On At The Moment

  • Fine-tuning and evaluation for small language models (up to 8B parameters)
  • Finding optimal dataset mixtures for the annealing phase of LLM pretraining using Syne Tune
  • Exploring ways to extend my master's thesis (Fairness in Recommendations using Graph Neural Networks)

Open to collaboration and discussion on any of these topics! 😃


Skills

  • Deep Learning, including generative models such as VAEs, diffusion models, and GANs
  • LLMs: Supervised Fine Tuning (SFT), Evaluation, Synthetic Data Generation, Pretraining
  • Model Compression & Knowledge Distillation
  • Graph Neural Networks (GNNs)
  • Teaching & Mentoring

Tech Stack

Programming Languages

AI & Machine Learning

Tools & Platforms


Featured Projects

  • GNNs_FAME: My master's thesis on Graph Neural Networks for Fairer Recommendations
  • Whittle: Python library to compress LitGPT models for resource-efficient inference
  • AI-blog: Sharing insights and articles at the intersection of AI and research - aimed at university students
  • arena-hard-auto-private: Data analysis on language models judged by an LLM using Arena-Hard
  • synetune-annealing-experiments-oellm: Using Synetune to find data weights for annealing experiments

📚 Publications

  • "GNN's FAME: Fairness-Aware MEssages for Graph Neural Networks"
    Proceedings of the 33rd ACM Conference on User Modeling, Adaptation and Personalization (UMAP '25), 2025.
    DOI: 10.1145/3699682.3728324

    Graph Neural Networks (GNNs) have shown success in various domains but often inherit societal biases from training data, limiting their real-world applications. Historical data can contain patterns of discrimination related to sensitive attributes like age or gender. GNNs can even amplify these biases due to their topology and message-passing mechanism, where nodes with similar sensitive attributes tend to connect more frequently. While many studies have addressed algorithmic fairness in machine learning through pre-processing and post-processing techniques, few have focused on bias mitigation within the GNN training process. In this paper, we propose FAME (Fairness-Aware MEssages), an in-processing bias mitigation technique that modifies the GNN training's message-passing algorithm to promote fairness. By incorporating a bias correction term, the FAME layer adjusts messages based on the difference between the sensitive attributes of connected nodes. FAME is compatible with Graph Convolutional Networks, and a variant called A-FAME is designed for attention-based GNNs. Experiments conducted on three datasets evaluate the effectiveness of our approach against three classes of algorithms and six models, considering two notions of algorithmic fairness. Results show that the proposed approaches produce accurate and fair node classifications. These results provide a strong foundation for further exploration and validation of this methodology.
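    To give a flavor of the idea, here is a minimal NumPy toy of fairness-aware message passing. This is an illustrative sketch only, not the paper's actual formulation: the weighting function and the `lambda_fair` parameter are assumptions made for demonstration.

    ```python
    import numpy as np

    def fame_message_passing(H, A, s, lambda_fair=0.5):
        """GCN-style aggregation with a toy fairness-aware correction.

        H: (n, d) node feature matrix
        A: (n, n) binary adjacency matrix
        s: (n,) sensitive attribute per node (e.g. 0/1)
        lambda_fair: correction strength (illustrative, not from the paper)
        """
        n, d = H.shape
        deg = A.sum(axis=1, keepdims=True).clip(min=1)
        out = np.zeros_like(H)
        for i in range(n):
            for j in np.nonzero(A[i])[0]:
                # Down-weight messages between same-group neighbors and
                # up-weight cross-group messages, so the aggregated features
                # depend less on the (homophilous) sensitive attribute.
                diff = abs(s[i] - s[j])  # 0 if same group, 1 if different
                w = 1.0 + lambda_fair * (diff - 0.5)
                out[i] += w * H[j]
            out[i] /= deg[i]
        return out
    ```

    In a real GNN this correction would sit inside a trainable layer (e.g. before the linear transform and nonlinearity); the point of the sketch is only that each message is reweighted by a term depending on the sensitive-attribute difference of the connected nodes.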

  • "Ballista – ein Traum von Alexander" ("Ballista – a Dream of Alexander")
    LEGO-Praktikum. Entwickeln, programmieren, optimieren: Berichte der Studierenden zum Projektseminar Elektrotechnik/Informationstechnik, 2019.
    DOI: 10.24352/UB.OVGU-2019-044

    Robots and machines are taking over much of the manual work once done by humans, performing the same tasks not only more quickly but also in a much more efficient way. This was the main aim of our project. The paper is based on the Ballista (catapult) used by many great rulers, including Alexander the Great, in their bids to win wars. It is an engineering marvel built to launch projectiles across great distances, inflicting considerable damage. Depending on the need, a catapult can be modified to destroy or create an opening in mountain ranges, or simply serve as a weapon of war. We created a miniature, albeit working, prototype using the Lego Mindstorms NXT kit and programmed it in MATLAB.


Interests

  • Open LLM Research
  • Model Compression and Knowledge Distillation
  • Teaching and mentoring
  • Writing and sharing ideas in AI

Fun Facts

  • I played U-19 and U-16 Cricket for the Kuwait National Team.
  • I have been an international student mentor at OvGU, Magdeburg, for over three years.
  • I enjoy collaborating and exchanging ideas in AI, always open to new research conversations.

📊 GitHub Analytics

๐Ÿ† GitHub Trophies


Connect With Me!


Message me on LinkedIn: Hannan Mahadik

