Trying a Channel-Agnostic Microscopy Foundation Model

This project was part of a small group benchmark on foundation models for multiplexed fluorescence microscopy. The guiding question was simple: if we take recent pretrained models and run them on real multiplexed microscopy data, what do their embeddings actually capture? My part focused on a channel-agnostic masked autoencoder, or CA-MAE, from the paper Masked Autoencoders for Microscopy are Scalable Learners of Cellular Biology. I did not train a new foundation model here, and I did not fine-tune the existing one. The work was more practical than that: take a pretrained model, make our data fit the input format, run inference on the university cluster, and then look carefully at the resulting embeddings. ...

March 14, 2026 · 8 min · Jonas

Trying to Make Deep RL Work on a Small Gridworld

This project started as a practical follow-up to an introductory deep reinforcement learning course. The goal was simple to state and harder to make work: train deep RL agents to solve a small stochastic pickup-and-delivery task better than a hand-written greedy baseline. The code is available on GitHub: github.com/jclotten/deep-rl-gridworld-benchmark. Why this project: small gridworlds are often used to explain reinforcement learning because the rules are easy to understand. That does not mean they are automatically easy for deep RL. ...

November 15, 2025 · 5 min · Jonas