Advitya Dua: Plugin Modernizer Stats Visualization

Hi everyone,

I’m Advitya Dua, a Computer Engineering undergraduate at Thapar Institute of Engineering and Technology, India. I’m excited to submit my proposal for “Plugin Modernizer Stats Visualization” under GSoC 2026 and would love to get feedback from the community and mentors.


About Me

I am a full-stack developer with a strong focus on building data-driven systems, dashboards, and scalable backend architectures. Over the past couple of years, I have worked extensively with:

  • Frontend: React, JavaScript

  • Backend: Python (FastAPI, Django, Flask)

  • Databases: PostgreSQL, MongoDB, MySQL, Redis

  • DevOps & Tools: Docker, CI/CD pipelines, AWS, Selenium, Celery

Currently, I am working as a Software Engineering Intern, where I build AI-driven systems and optimize backend performance.


Relevant Experience

:small_blue_diamond: Dashboard & Data Systems

I have built multiple platforms involving dashboards and analytics, such as:

  • A Tender Management System with reporting and analytics

  • A College LMS platform with structured data visualization

  • Inventory and healthcare systems with real-time dashboards and insights


:small_blue_diamond: Full-Stack & Scalable Systems

  • Developed full-stack applications using React + Django/FastAPI

  • Designed scalable backend architectures handling structured datasets

  • Built systems involving role-based access, APIs, and automation pipelines


:small_blue_diamond: Automation & Data Pipelines

  • Implemented automation workflows using Selenium and Celery

  • Built pipelines for:

    • data processing

    • report generation

    • workflow automation


:small_blue_diamond: Performance Optimization

  • Improved backend performance (~35% latency reduction in a production system)

  • Designed systems supporting real-time updates and efficient querying


Why This Project?

The Plugin Modernizer Stats Visualization project strongly aligns with my experience and interests.

What excites me most is:

  • Transforming structured metadata into actionable insights

  • Designing scalable data pipelines

  • Building intuitive dashboards for complex systems

I see this project as a perfect opportunity to combine:

data engineering + visualization + system design


My Approach

I plan to approach the project as a pipeline-driven system, not just a frontend dashboard:

  1. Data Layer

    • Fetch and validate metadata from metadata-plugin-modernizer

    • Normalize and preprocess datasets

  2. Build-Time Processing

    • Generate optimized datasets

    • Compute metrics and plugin-level insights

  3. Visualization Layer

    • Build dashboards using React + TypeScript

    • Provide filtering, trend analysis, and plugin reports

  4. Automation

    • CI/CD pipeline for automated builds and deployment
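To make the build-time processing step concrete, here is a minimal sketch of the kind of aggregation I have in mind. The record shape and field names below are my own assumptions for illustration, not the actual metadata-plugin-modernizer schema:

```typescript
// Hypothetical record shape -- the real JSON schema from
// metadata-plugin-modernizer may differ; this is only a sketch.
interface ModernizationRecord {
  plugin: string;                          // plugin artifact id (assumed field)
  recipe: string;                          // modernizer recipe applied (assumed field)
  status: "OPEN" | "MERGED" | "CLOSED";    // PR status (assumed field)
}

interface RecipeStats {
  recipe: string;
  total: number;     // records seen for this recipe
  merged: number;    // records with status MERGED
  mergeRate: number; // merged / total, in [0, 1]
}

// Build-time aggregation: group records by recipe and compute merge rates,
// so the static frontend only ships a small precomputed JSON file instead
// of the raw dataset.
function computeRecipeStats(records: ModernizationRecord[]): RecipeStats[] {
  const byRecipe = new Map<string, { total: number; merged: number }>();
  for (const r of records) {
    const entry = byRecipe.get(r.recipe) ?? { total: 0, merged: 0 };
    entry.total += 1;
    if (r.status === "MERGED") entry.merged += 1;
    byRecipe.set(r.recipe, entry);
  }
  return [...byRecipe.entries()].map(([recipe, { total, merged }]) => ({
    recipe,
    total,
    merged,
    mergeRate: merged / total,
  }));
}
```

The output of a function like this would be written to a JSON file during the build, and the React dashboard would simply import it.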

What I Bring

  • Strong experience in building dashboards and data-driven systems

  • Ability to design scalable architectures, not just UI

  • Hands-on work with automation, APIs, and pipelines

  • Familiarity with CI/CD and production workflows


Looking for Feedback

@krisstern @CodexRaunak Mentors, I would really appreciate feedback on:

  • My overall approach and architecture

  • Any suggestions on improving the scope or execution

  • Areas where I should align more closely with Jenkins practices


Thanks a lot for your time! Looking forward to contributing and learning from the community!

Hi @AdvityaDua,

Your experience with dashboards and data pipelines really stands out!

I wanted to share one observation after reading your approach. Please correct me if I misunderstood:

From what I understood, you’re planning a more full-stack architecture with backend processing, APIs, and possibly runtime data handling. However, from the official project idea page, it seems the project is intended to be a static visualization site. The data (JSON files from the metadata-plugin-modernizer repo) is supposed to be consumed and processed at build time, not at runtime.

Would it make sense to simplify it into:

  • Reading + transforming the JSON data during the build step

  • Computing metrics and insights during build

  • Rendering everything with React + TypeScript as a static site

This would keep the site lightweight, fast to load, and easier to host — which seems to be the expectation.

Let me know if I got that wrong or if you’re intentionally planning something different!

Looking forward to seeing your prototype :blush:


Hi, thanks a lot for the detailed feedback — this is really helpful!

You’re absolutely right, and I appreciate you pointing this out. My intention is not to introduce any runtime backend or API layer, but rather to focus entirely on a build-time data processing approach.

When I mentioned “data pipelines”, I was referring to:

  • fetching the metadata from the repository during build
  • validating and normalizing the dataset
  • computing aggregated metrics and plugin-level insights
  • generating optimized JSON files for the frontend

All of this would happen at build time, keeping the final site fully static, lightweight, and easy to deploy.
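For the validation step specifically, a small build-time guard could drop malformed entries before any metrics are computed. The field names here are assumptions for illustration, not the actual JSON schema:

```typescript
// Hypothetical minimal shape the dashboard would need per plugin --
// the real metadata-plugin-modernizer fields may differ.
interface PluginMetadata {
  name: string;
  jenkinsVersion: string;
}

// Build-time normalization: keep only entries that have the fields the
// dashboard needs, so one malformed record cannot break the static build.
function normalizeMetadata(raw: unknown[]): PluginMetadata[] {
  const valid: PluginMetadata[] = [];
  for (const item of raw) {
    if (
      typeof item === "object" &&
      item !== null &&
      typeof (item as Record<string, unknown>).name === "string" &&
      typeof (item as Record<string, unknown>).jenkinsVersion === "string"
    ) {
      valid.push({
        name: (item as Record<string, unknown>).name as string,
        jenkinsVersion: (item as Record<string, unknown>).jenkinsVersion as string,
      });
    }
  }
  return valid;
}
```

Running this once per build keeps the deployed site fully static while still being defensive about upstream data.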

I’ll make sure to clarify this better in my approach and keep the implementation aligned with the static-site design philosophy.

Thanks again for the clarification — this helps a lot in refining the direction!
