About me

Hi there! My name is Neeratyoy Mallik.

I’m a PhD student at the ML Lab in Freiburg, supervised by Prof. Dr. Frank Hutter. My research is driven by the goal of making Deep Learning more efficient to tune and thus more accessible: lowering the compute and expertise barriers so that strong DL performance isn’t gated by resource budgets.

I have worked on multi-fidelity Bayesian Optimization methods that deliver real tuning gains under tight budgets by leveraging expert priors and learning curve extrapolation. Going beyond a purely black-box view, I also study how network weights themselves can serve as a source of both hyperparameter optimization (HPO) signal and cost reduction, through mechanisms such as freezing and growing networks.

My current research connects HPO to efficient scaling in DL along two directions: cheaply co-scaling data and model sizes under optimal hyperparameters, and tuning large-scale models better at lower cost. Constructing accurate scaling laws efficiently is central to both. I am particularly interested in finding better, application-driven parametric forms for scaling laws that can more reliably inform hyperparameter scaling and practitioner-specific design choices.
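As an illustration of what I mean by a parametric form (a well-known example from the literature, not my own work), Chinchilla-style laws model loss as a function of model size $N$ and data size $D$:

$$L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}},$$

where $E$ is the irreducible loss and $A, B, \alpha, \beta$ are fitted constants. Whether such a form is the right choice for a given application, and how cheaply it can be fit reliably, is exactly the kind of question I find exciting.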

If you are working on:

  • $\mu$Parameterization or applying hyperparameter ladders,
  • scaling collapse for reliable early stopping,
  • interpreting and leveraging network learning dynamics for hyperparameter transfer,
  • general HPO (or AutoML) for different data modalities and downstream applications,

do write to me for a chat and potential collaboration! :)

Thank you for visiting! If you have any questions or would like to get in touch, please feel free to contact me or find me on social media (left panel).