We need to study how things change
Push-pull effects in real-world neighborhoods need dynamics…
Updated: 2022-01-17
It’s been a while since I posted; M3 year is amazing. I’ll talk about it more in a separate post, but holy crap, surgery is the coolest job in the world.
At the same time, I’ve had great difficulty disengaging from the math/AI world.
There are several really cool results that I’ve been obsessed with over the last few weeks. I figured I’d talk a bit about them now.
Emergent Behavior
OpenAI’s demonstration of emergent behavior in RL is fascinating. Their presentation was beautiful and their paper is very interesting. In a nutshell: they set up simple rules for a hide-and-seek game between two teams of agents. Over time (and iterations of learning) these agents were able to learn how to do some pretty cool things. The absolute coolest was coordination between agents. It’s really mind-boggling to think about the implications of this emergent coordination.
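The full hide-and-seek environment is enormous, but the core phenomenon, coordination emerging from the reward structure alone, can be sketched in a toy setting. Below is my own illustrative setup (nothing like OpenAI's actual code): two independent Q-learners play a repeated coordination game, each seeing only its own reward, and both settle on the same action without any explicit communication.

```python
import random

# Toy illustration of emergent coordination (an assumption-laden sketch,
# not OpenAI's hide-and-seek setup): two independent learners in a
# repeated coordination game. Each agent only observes its own reward,
# yet both converge on the same joint action.

random.seed(0)
ACTIONS = [0, 1]
q = [{a: 0.0 for a in ACTIONS} for _ in range(2)]  # one Q-table per agent
alpha, eps = 0.2, 0.1                              # learning rate, exploration

for step in range(2000):
    acts = []
    for agent in range(2):
        if random.random() < eps:                  # explore
            acts.append(random.choice(ACTIONS))
        else:                                      # exploit current estimate
            acts.append(max(ACTIONS, key=lambda a: q[agent][a]))
    # Payoff: both agents get 1 when their actions match, 0 otherwise.
    reward = 1.0 if acts[0] == acts[1] else 0.0
    for agent in range(2):
        q[agent][acts[agent]] += alpha * (reward - q[agent][acts[agent]])

greedy = [max(ACTIONS, key=lambda a: q[ag][a]) for ag in range(2)]
print(greedy)  # both agents end up preferring the same action
```

Neither agent was told to match the other; matching is simply the only way to harvest reward, so the policies co-adapt.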
Learn the dynamics directly
I was very excited about the LFADS result that came from one of the researchers here at Emory/GT. Recently, that excitement was galvanized by the SINDy follow-up paper. Basically: why contrive experiments that isolate variables so you can slowly build up an understanding of how a system behaves? Why not just learn the dynamics directly? More importantly, why not also learn the best coordinate system in which to extract these dynamics? Combined with results like learning dynamics from sparse data, we’re about to see an exciting revolution in how science is done, one that directly improves healthcare.
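To make "learn the dynamics directly" concrete, here is a minimal SINDy-style sketch. This is my own toy version, not the authors' code: the system (dx/dt = -2x), the candidate library, and the threshold are all illustrative assumptions. The idea is just sparse regression: fit the measured derivative against a library of candidate terms, then prune everything with a small coefficient.

```python
import numpy as np

# Toy SINDy-style sketch (illustrative assumptions, not the paper's code):
# recover dx/dt = -2x from sampled data by sparse regression over a
# library of candidate terms.

t = np.linspace(0, 2, 400)
x = np.exp(-2 * t)                       # trajectory of dx/dt = -2x, x(0) = 1
dx = np.gradient(x, t, edge_order=2)     # numerical derivative of the data

# Candidate library Theta(x) = [1, x, x^2, x^3]
Theta = np.column_stack([np.ones_like(x), x, x**2, x**3])

# Sequentially thresholded least squares: fit, zero out small coefficients,
# refit on the surviving terms, repeat.
xi = np.linalg.lstsq(Theta, dx, rcond=None)[0]
for _ in range(10):
    small = np.abs(xi) < 0.1
    xi[small] = 0.0
    big = ~small
    if big.any():
        xi[big] = np.linalg.lstsq(Theta[:, big], dx, rcond=None)[0]

print(np.round(xi, 2))                   # sparse model: only the x term survives
```

The thresholding is what buys interpretability: instead of a dense black-box fit, you end up with a short symbolic equation you can actually read.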
I’ve forked the project and will be trying to translate it to the world of nonlinear control very soon.
Topological data analysis
Topology is the study of neighborhoods. It’s a fundamental study of the structure of sets of objects, including the notion of whether things are close to each other, whether things transition smoothly (though this may be more $\partial$ geometry), etc. A recent paper on topological data analysis captured my imagination. Basically (and I’m still working through this) the goal is to leverage the structure of the data as an operator and then link it to fundamental topological structures. What this then gives you is a whole family of operators that are isomorphic, i.e. identical with respect to certain properties that you may be interested in. I’ll revisit this paper in a more concrete neuroengineering context soon, but it got me excited enough to write about it now.
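To make "topology is the study of neighborhoods" concrete, here is a tiny sketch of the simplest invariant TDA pipelines track: the number of connected components (the Betti number $b_0$) of a point cloud at a given scale. The two-cluster data and the `eps` values are my illustrative assumptions, not anything from the paper.

```python
import numpy as np

# Sketch of a 0-dimensional topological invariant (illustrative, not the
# paper's method): count connected components of the graph where two points
# are neighbors when they lie within `eps` of each other. As eps grows,
# components merge -- this merging is what persistent homology tracks.

rng = np.random.default_rng(1)
cloud = np.vstack([
    rng.normal(loc=(0, 0), scale=0.1, size=(20, 2)),   # cluster A
    rng.normal(loc=(5, 5), scale=0.1, size=(20, 2)),   # cluster B
])

def betti0(points, eps):
    """Connected components of the eps-neighborhood graph, via union-find."""
    n = len(points)
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(points[i] - points[j]) < eps:
                parent[find(i)] = find(j)  # merge the two components
    return len({find(i) for i in range(n)})

print(betti0(cloud, eps=1.0))   # two well-separated clusters -> 2
print(betti0(cloud, eps=10.0))  # coarse enough scale joins them -> 1
```

The point is that the answer depends only on which points are near which, not on any coordinates, distances, or model of the data-generating process.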