Year 3 (and a half): TIL my PhD is Flappy Bird.

It's January of 2018. 40 months after I started my PhD. Wow. I know people say that the years feel shorter as you get older, but these days are just zipping by way too fast. I realized somewhere in October that I forgot to write one of these for my third year, and all of a sudden, we're in 2018. So I'm late, what else is new.

This post is broken into two parts. The first part is about a recent shift in attitude on how I process failures and other "negative" emotions throughout the PhD, during which I also realized that getting a PhD is more like staying alive in Flappy Bird than running a marathon. Between when I started writing this post and now (~10 days), I've had two major rejections already (one paper, one conference submission). I'm still learning how to process these things, but what you will read below is my take on some wisdom I picked up through reading. The second part contains habits and lifestyle changes I've acquired over the last year or so that I think contributed positively to my general well-being. Disclaimer: none of what you're about to read is meant to be advice; it's more or less just an entertaining chronicle of my graduate school adventure. But if you happen to find it useful or comforting, all the better!

PhD - Perpetual heap of Discomfort
I thought a PhD was like a marathon, and in some ways, it is - it's grueling, long, and most of the time you're by yourself. But in a marathon, every step you take contributes to getting you to the finish line. In a PhD, not so much. It's more like this: imagine stepping off a cliff way above the clouds, and you're falling through a mysterious unknown. Your trajectory has been determined, and many, many things will come straight at you, largely outside of your control. And you know, with every passing second, that you're falling (probably to your death), and it's scary as hell. That being the case, your attitude through the fall could be one of two things. The first is to completely reject the reality that is the unstoppable force of gravity and helplessly flail, feeling strong emotions at every possible opportunity. That could be a positive or negative emotion: maybe you see a branch that might stop or slow your fall, or maybe you hear the snap of the branch that you thought would save you. All of these little events in their individual moments seem disconnected and unrelated. From that perspective, their causes could either be that you personally are doing something right/wrong, or that life is being especially generous/unfair. Through these three and a half years, sometimes I blamed life and the people around me, and often I blamed myself, which starts the vicious cycle of unhappiness: "I seem to be falling. I see and grasp at every little thing, but things often don't work out. Now I'm still falling and even more stressed out. Am I not learning quickly enough? How is everyone else doing this so gracefully? I wish I wasn't falling. Oh shit, I'm still falling. Ugh... can't I get a break or do something right?"

In hindsight, that's not the only way to experience this perpetual fall and those emotions along the way, but I didn't even consciously realize that that was a choice I was making, or that I even had a choice. A few months ago, I read "Man's Search for Meaning" by Viktor Frankl, and in it I found a piece of wisdom that would go a long way in helping me unlock and process my feelings of free-fall during my PhD, as well as life as a whole during these few years. In it, he says something to the effect of: in this day and age, people have a tendency to not only put themselves in situations where they are unhappy, but to then become unhappy about their unhappiness, because they have a need to be happy all the time. After chewing on this for some time, I asked myself: have I been unhappy during these years, and have I been unhappy about my unhappiness, especially the unhappiness that stemmed from factors that were a natural part of being in graduate school in a foreign country (which also happened to be a dumpster fire with a steady stream of shitty news to look at on Twitter)? I think the answer was yes.

All of the shitty stuff - work failures and personal failures - is real, and in a way, it's not my fault. It's not anybody's fault. I don't mean that life for graduate students can't be made better by the institution as a whole - it certainly can. Nor am I deflecting responsibility from the things that I could have done better, like starting a deliverable early to avoid stressing myself out at the last minute. But I think a part of me put such a heavy expectation on this graduate school experience to be positive that I rejected the possibility that my unhappiness was a product of the situation I put myself in, and instead, time and time again, blamed it on how much life sucks or how badly I'm doing as a researcher. This doesn't mean I haven't been happy at all. Quite the opposite, actually: I regularly find joy in the work I do and the progress I've made, the new friends I've made and the experiences we've shared, and the beautiful city and its ocean breeze I now think of as my new home, as well as the cherished opportunities I get to see my old home and everything it embodies. But these moments of happiness are fleeting, as all moments are. Except their endings only thrust me back to the reality of my falling, against the expectation that each happy moment was supposed to mean that I had figured life out, and that the feeling of falling should have disappeared.

The realization that many aspects of my current situation made me unhappy was powerful, and quite a relief. It's difficult to put into the right words, but there's a profound difference between the mindset that things are really great overall and there are just unexpected wrinkles along the way, versus the mindset - the more accurate one, I'd say - that this process as a whole is challenging and frustrating, and will inevitably make me unhappy, and that's perfectly fine, and I shouldn't be too hard on myself or anyone around me because of it. When I left for San Diego three and a half years ago, I kind of just hopped on a plane and left. I didn't think about how hard I made it for myself to see my family and old friends, or to experience the comfort of home in Toronto, nor about the challenges of starting life in a completely new place. On top of that, never did I really think about the challenges of doing a PhD, the real and daunting challenges that many before me have faced, which, summed up in one phrase, is a relentless feeling of failure and uncertainty, both real and perceived.

So what now? In Frankl's book, one of the things he talks about, within the framework of logotherapy - roughly, finding meaning in all forms of existence, even suffering - is that we have a choice in changing our situations. Taking responsibility for my own happiness has always been my mantra, which, I think, was why I felt inadequate, or that I always had to do more to improve my own situation. But to change my situation, I have to first accurately assess it, and these short-term ups and downs, in a way, were largely predetermined by the choice I made to enter graduate school three years ago. Yes, specific failures and disappointments may be avoidable, but they will inevitably come in one form or another. Of course, I can also choose to drop out at any time, and that would definitely alleviate certain aspects of this feeling of falling. But if I want to stick it through, I first have to acknowledge that failures, successes, more failures, and the feeling of being not good enough even after successes are all a part of this process. A lot of people in academia, to their credit, acknowledge the reality of anxiety, stress, and things like imposter syndrome. Every time I read a blog post on that, I think to myself: "everyone feels it, it's not a big deal, and it makes sense to feel it, so I shouldn't feel it now, because understanding it means I'm above it now. Right?" Wrong. No matter how many times I'm confronted with this feeling, and no matter how many times I feel satisfied with a piece of work I did, this feeling has not gone away, and to be honest, I don't think it ever will. So my new resolution is just to embrace this feeling, and embrace the fall. Something else that helped is the Buddhist perspective (or philosophy?) that even "negative" emotions and events can be observed, appreciated, and understood (I really like Thich Nhat Hanh's writing about this in Being Peace), and that's an invaluable treasure in life, just as the happy moments are.

(As I finish writing this, I am realizing that the falling process I described as getting a PhD is basically Flappy Bird rotated 90 degrees: something is always propelling you forward, and you more or less can't control how fast you go forward, even though that's what you're measured on. Instead, the effort you exert feels almost orthogonal to the direction you want to move towards, but nevertheless, it's those little actions that keep you afloat and moving. Actually, that sounds like life in general. Holy shit - life is Flappy Bird.)

That being said, I think there were a few things I did over the last year or so that definitely contributed positively to my mental and physical well-being. Most of them fit in the larger theme of finding what works for myself, so no guarantees that it will work for anyone else, but it might be worth a try:

Moved off-campus: La Jolla, nice as it is, is not very good for living, especially as a 25-year-old with a diverse taste for cuisines (here come the Soylent jokes). More importantly, living on campus literally meant that I lived in the same place as I worked, so I basically didn't really live, and just worked. Turns out, living in a place where I can walk for more than 15 minutes without having to stop because there's nothing but highways is nice. Plus, I'm now in an area much closer to my friends, and spending quality social time (or getting college-student-drunk) is definitely good for the soul! Also, having to take the bus for 30 minutes to and from work means there is a significant chunk of time away from my laptop, which brings me to...

Casual reading: sometimes I still spend the whole bus ride thinking about projects, but on most days I get a full hour of casual reading. I think I've read more books this year than I have in the rest of my life combined, and this sounds really stupid coming from a 26-year-old, but books are great stuff! On the surface level, being immersed in a really good fictional narrative completely reenergizes my mind, and honestly on some days I look forward to that bus ride more than actually going to the office (or anything else in life) because of the book I'm reading. On a deeper level, the books themselves, both fiction and non-fiction, have had transformative effects on my life. I've been meaning to put together a list of my personal favorites (who knows when this will happen), but top of my list so far: The Glass Bead Game, Dune, Half of a Yellow Sun, All the Light We Cannot See, and the already-mentioned Man's Search for Meaning. Is this always good for productivity? No, because on some days, I get to the office and read for another half an hour because I just can't put the book down. But do I enjoy it? You're goddamn right I do. Shoutout to the people who have recommended or lent books to me, and my bookclub buddy (spoiler: it's my girlfriend) who patiently reads the weird shit I want to read and picks books that I would've never read otherwise.

Mid-day workout: I now go to the gym before lunch for an hour and a half every day, either to play basketball or to lift. I've even turned down free lunch events because they overlapped with my gym time. *GASP* I know, right? Why was this helpful? For one, I feel like I'm in the best physical shape I've been in for a while, which definitely helps me feel good about myself, especially during times when everything else is crap (running theme: how to have a good day when life is shit otherwise). The other benefit is that taking that midday break to do something active really helps the blood circulate to my brain again, because if there's anything I learned about myself during grad school, it's that I can't sit still for more than 3 hours and still be fully there mentally. I've tried various routines to overcome this, but working for 8 continuous hours a day always resulted in me dozing off for the second half of the day, no matter how many hours I'd slept before or how many coffees I'd had. A midday nap doesn't even do me as much good as exercise. I think the combined effect of adrenaline and breathing more deeply is a much more natural stimulant for the brain, and it has to come during the early afternoon for me, because going to the gym at night after a long workday (during which I've fallen asleep many times at my desk) is doubly ineffective.

Waking up earlier: The awkward thing about exercising midday is that, of course, I'm now working like 5 hours a day instead of 7, even though those 5 hours are all good, focused hours. To compensate for this, I usually try to wake up early (like at 6 am) to put in two extra hours, have a late breakfast, and go to the office. Because I woke up SOOO early (6 is hella early for a grad student!), I get sleepy by 9pm, which means I'm in bed at grandma hours, and the virtuous cycle continues. I'm really digging the tripartite workday now (work, breakfast, work, gym + lunch, work), and the added benefit is that even if I'm completely unproductive on campus for whatever reason (meetings, teaching, etc), I still feel good about the 2 hours I got done before my day even started. The only thing this isn't good for is socializing on a weeknight, because it screws up the next morning.

Keeping a lab notebook: working on computational projects means I usually try many, many things before something worthwhile happens, and sometimes a whole day has to be spent on things like debugging code or learning some software package. All this means that a lot of work has to be done before I see any real results, and that feels bad. Also, I found myself always referring back to simulations or analyses I'd done in vague ways, because I couldn't remember the details, only that it was interesting/important. So after the Nth time that happened, I decided to keep track of my daily explorations and progress in Evernote. In hindsight, this makes a lot of sense, because that's what people in wet labs do. I guess I just felt that computational stuff happens so quickly that it's not worth writing down. But hey, even if it doesn't matter, seeing a full notebook definitely makes me feel good even if nothing concrete has come out of it yet, and that, my friends, is the word of the day today.

26/52: Our thin layer of existence.

Hawaii was an absolute treat for the senses, where almost every single day we witnessed something that was breathtaking: be it the sun setting into the infinite expanse of the orange horizon, the warm saltiness of the sea water on the skin that's just cool enough to alert the peripheries, or the chirping of countless coqui frogs going in and out of synchrony while the jungle leaves rustle peacefully under the backdrop of the star-lit night. Words are not enough to really convey these perceptions, and to be honest, neither are pictures, but we tried anyway. One idea that cannot be conveyed without words, however, is the feeling of our minuteness I got - both me personally and human lives in general - when I witnessed just a little more of the world around us.

Traveling to new places is always an eye-opening experience, especially when immersed in the ecology and culture of a foreign place. This was different, though - it revealed to me something that I foolishly thought I already knew intimately: the world in which I live. It's like this: imagine you're about to visit someone's house for the first time. It's cool, because you know you'll see things you've never seen before, or at least how the same things might be arranged differently, so it's an expected kind of novelty. Now imagine sitting on your own couch at home, with an abundance of familiarity surrounding you. Everything was either put there by you, or it's been there for so long that it might as well have come with the place. All of a sudden, someone whispers a few words into your ear and you watch the familiarity in front of you unfold into an entirely new experience, realizing for the first time that there is so much more to your home than meets the eye. Hawaii gave me that feeling about this "world", "my" world, and not just through its exquisite wildlife on the surface, but extending from the depths of the ocean to the stars above. The whole two weeks were full of moments like those, but I will just describe a few things that happened over a span of 48 hours on the Big Island (Hawaii Island).

The Earth shaping itself
The state of Hawaii is a chain of islands formed by underwater volcanic activity. I knew this, and it makes sense. How else does a chain of islands emerge in the middle of the ocean? I didn't know, though, that there are active volcanoes on the Big Island. That fact seems routine enough when you read about it, but being there and witnessing it is another thing. We visited the Volcano National Park, where the Halemaumau Crater spews out a thick stream of never-ending smoke during the day, and transforms into a scary demonic pit at night. The park itself is a huge area of land that surrounds the crater, as well as the aftermath of some of the more explosive eruptions from a few decades ago. The landscape is incredibly eerie. It simultaneously makes me appreciate the wrath of Mother Earth, fear her swiftness in taking life away, and marvel at the incredible youth of the land beneath my feet and its newly sprouted inhabitants. When I think of a young Earth, I think of spring and budding greens. But here, youth is charred black, porous, and honestly looks kinda deadly and downright alien. 

The youthfulness of the land was further exhibited the next night, when we went to the edge of the park where lava is pouring out down the slope of the mountain and into the ocean. The feeling that we are standing there and witnessing NEW EARTH BEING BORN is truly incredible. Land is literally being formed around us, and the rocks we stepped on were younger than any of their visitors (and there were some hardy toddlers braving the lava rock hikes in the pitch black night). It was a sublime reminder that the world around us is dynamic, constantly morphing, swallowing itself and rebirthing itself - not only do plants and animals cycle through life and death, so too does the Earth they stand on.

Shit I never ever think about.

The world outside of our world
I've always lived in crowded places with extremely dense light pollution. Among those, La Jolla is probably the only place where I can regularly see more than a handful of stars at night. I've heard of friends going out of the city to stargaze, but I have never done it myself, nor have I ever really camped in my life (I know). So imagine my awe when we went three quarters of the way up Mauna Kea (9300 ft), the dormant volcano on the Big Island that peaks at almost 14,000 ft!!! There are a lot of cool little tidbits about this mountain, one being that its base is actually deep under the ocean - so deep that, measured from base to peak, it stands at about 33,000 ft, slightly taller than Mount Everest. Driving up to observatory altitude is the embodiment of "0 to 100 real quick" - I think we went from beach to 9300 ft in about an hour? On the way up, we had to drive through a layer of super dense condensation (aka clouds), and locals joke that many people hit the invisible cows on the way up and down the mountain, because visibility in the foggy area is no more than about 10 m ahead of you. But beyond that, the sky above feels like it reaches the depths of the universe. After nightfall, it's as if we were transported to another dimension or planet, because I've never, ever seen that many stars shining so brightly. Apparently, from Mauna Kea, one can see every star in the northern half of the sky, and about 80-90% (?) of the southern sky, because, you know, it's a tall-ass mountain.

Standing under that diamond-studded ceiling, we got to see a lot of astronomical phenomena firsthand, through our own eyes (and sometimes through a telescope). For example, there were guides at the visitor information center who set up small telescopes for the crowd to take a closer look at the stars, and I saw, for the first time, Saturn and its rings. It looks like a miniature, cartoon version of the Saturn I'm used to seeing in books and films: a small tilted ring encircling a smaller dot, both unicolor with a gray sheen. It was pretty neat. We also saw the ISS racing through the night sky in a perfect broad curve, and several shooting stars. By far the most indescribable feeling, though, was the smallness of humans and our planet under such a majestic sky. Standing on top of the cold peak, it was as if the universe and all its mysteries were suddenly opened to me - I was directly experiencing, for the first time ever, how vast the space out there is and how little we really know.

It wasn't quite a religious moment, but that was as close as I've ever gotten to marveling in the creation of some higher being.

The world within our world
I've posted this before, and I have to post it again. I've watched this video about 20 times now, and every time I do, I can't help but have a big stupid grin on my face. There are just so many completely spontaneous opportunities to witness animals enjoying themselves, be it a manta ray tumbling around, a family of sea turtles surfing the current, or a pod of dolphins playing hide and seek with us in the bay.

As a surface-dweller, my idea of life is mostly concentrated around my altitude and on dry land. Rarely do my thoughts venture out into the other 70% of this planet. The waters around Hawaii, though, really made me feel that there is life all around us. Perhaps it's different, and looks a little strange, but it's life nonetheless. Breaking through the thin surface of the water that separates two worlds, you are instantaneously immersed in another storyline, like an invisible fly on the wall with the special privilege of witnessing the completely normal lives of all its characters. In those moments, I felt acknowledged and welcomed, and I hope I can do the same for them one day. It really makes me question, even now, the extent of other kinds of cognition, beyond our simple human ideals.

After all, we are but a thin layer of existence in a much, much larger whole.

25/52: First research paper published!

Because I'm quite short on my 52 posts this year, I'm stealing this from the lab blog (which I wrote!). But hey! First research paper published, jeez what a long process...

Highlights (tl;dr)

The overarching goal of our recent NeuroImage paper (PDF) is to make inferences about the brain’s synaptic/molecular-level processes using large-scale (very much non-molecular or microscopic) electrical recordings. In the following blog post, I will take you through the concept of excitation-inhibition (EI) balance, why it’s important to quantify, and how we go about doing so in the paper, which is the novel contribution. It’s aimed at a broad audience, so there are a lot of analogies and oversimplifications, and I refer you to the paper itself for the gory details. At the end, I reflect a little on the process and talk about the real (untold) story of how this paper came to be. 

A Tale of Two Forces

Inside all of our brains, there are two fundamental and opposing forces – no, not good and evil – excitation and inhibition. Excitatory input, well, "excites" a neuron, causing it to depolarize (become more positively charged internally) and fire off an action potential if enough excitatory inputs converge. This is the fundamental mechanism by which neurons communicate: short bursts of electrical impulses. Inhibitory inputs, on the other hand, do exactly the opposite: they hyperpolarize a neuron, making it less likely to fire an action potential. Not to be hyperbolic, but since before you were born, these two forces have been waging war with and balancing one another, through embryonic development, infancy, childhood, adulthood, and till death. There are lots of molecular mechanisms for excitation and inhibition, but for the most part, "excitatory neurons" are responsible for sending excitation via a neurotransmitter called glutamate, and "inhibitory neurons" are responsible for inhibition via GABA.


Like all great rivalries (think Batman and Joker, Celtics and Lakers), these two forces cannot exist without each other, but they also keep each other in check: too much excitation leads to runaway activity in the brain, as happens during a seizure, while too much inhibition shuts everything down, as happens during sleep, anesthesia, or being drunk. This makes intuitive sense, and scientists have empirically validated this "excitation-inhibition balance" concept numerous times. This EI balance, as it's called, is ubiquitous under normal conditions, and has been proposed to be crucial for neural computation, the routing of information in the brain, and many other processes. Furthermore, it's been hypothesized, with some experimental evidence in animals, that an imbalance of excitation and inhibition is the cause (or result) of many neurological and psychiatric disorders, including epilepsy, schizophrenia, and autism, just to name a few.

Finding Balance

For how important this intricate balance is, it is actually quite difficult to measure, at any given moment, the ratio between excitatory and inhibitory inputs. I mentioned above that there is empirical evidence for balance and imbalance. However, in the vast majority of these cases, measurements are done by poking tiny electrodes into single neurons and, via a protocol called voltage clamping, recording how much excitatory and inhibitory input a single neuron is receiving. Because the setup is so delicate, it's often done in slices of brain tissue kept alive in a dish, or sometimes in a head-fixed, anesthetized mouse or rat – basically, in brain tissue that can't move much, but not in humans. I mean, imagine doing this in the intact brain of a living human – yeah, I can't either. And as far as I know, it's never been done. This presents a pretty big conundrum: if we want to link a psychiatric disorder directly to an improper ratio between excitation and inhibition in the human brain, but we can't actually measure that ratio, how can we corroborate that EI (im)balance matters in the way we think it does?


Our Approach: Parsing Balance From “Background Noise”

This is exactly the problem we try to solve in our recent paper published in NeuroImage: how might one estimate the ratio between excitation and inhibition in a brain region without having to invasively record from within a brain cell (which is not something most people would like to happen to them)?

Well, recording inside a brain cell is hard, but recording outside brain cells – extracellularly – is a LOT easier. It's still pretty invasive, depending on the technique, but much safer and more feasible in moving, living, behaving people. Of course, recording outside the brain cell is not the same as recording the inside – when we record electrical fluctuations in the space around neurons, rather than from within or right next to a single neuron, we're picking up the activity of thousands to millions of cells all mixed up together.

The first critical idea of our paper was that this aggregate signal – often referred to as the local field potential (LFP) – reflects excitatory and inhibitory inputs onto a large population of local cells, not just a single one. Therefore, we should be able to get a general estimate of balance by decoding this aggregate signal. The second piece of critical information was the realization that (for the most part) excitatory inputs are fast and inhibitory inputs are slow so that, even when they are mixed together from millions of different sources like in the LFP signal, we are still able to separate their effects: not in time, but in the frequency-domain (see our frequency domain tutorial).
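To make the fast-vs-slow intuition concrete: a synaptic current shaped like a decaying exponential has a Lorentzian power spectrum, and its decay time constant sets where that spectrum rolls off. Here's a minimal sketch of the idea; the time constants and frequencies are illustrative choices on my part, not the paper's exact parameters:

```python
import math

# Hypothetical time constants: excitation decays fast, inhibition slowly.
TAU_E = 0.002   # 2 ms, "excitatory-like"
TAU_I = 0.010   # 10 ms, "inhibitory-like"

def lorentzian_power(f_hz, tau):
    """Power spectrum of a current shaped like exp(-t/tau):
    |FT|^2 = 1 / ((1/tau)^2 + (2*pi*f)^2), up to a constant scale."""
    return 1.0 / ((1.0 / tau) ** 2 + (2 * math.pi * f_hz) ** 2)

def high_low_power_ratio(tau, f_low=5.0, f_high=100.0):
    """How much high-frequency power survives relative to low-frequency power."""
    return lorentzian_power(f_high, tau) / lorentzian_power(f_low, tau)

# The fast (excitatory-like) kernel retains proportionally more of its
# power at high frequencies than the slow (inhibitory-like) one.
print(high_low_power_ratio(TAU_E) > high_low_power_ratio(TAU_I))  # True
```

That difference in roll-off is exactly what lets the two contributions be teased apart in the frequency domain even after they're summed together.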

 A: LFP model with excitatory and inhibitory inputs; B: the time course of E and I inputs from a single action potential; C: simulated synaptic inputs (blue and red) and LFP (black); F: LFP index of E:I ratio


Combining Computational Modeling and Empirical (Open) Data

Pursuing this line of reasoning, we simulated populations of neurons in silico and looked at how their activity would generate a local field potential recording. What this means is that we can generate, in a computer simulation, different ratios of excitatory and inhibitory inputs to a brain region and see how that influences the simulated LFP. Through this computational model, we found an index of the relative ratio between excitation and inhibition.

For those of you who are into frequency-domain analysis of neural signals, this index is the 1/f power law exponent of the LFP power spectrum. Let's unpack that a bit. In the figure above (panel B), you can see that the excitatory electrical currents (blue) that contribute to the LFP shoot up in voltage really quickly—within a few thousandths of a second—and then slowly decay back down to zero. In contrast, the inhibitory currents (red) also shoot up pretty quickly—though not as quickly—and then decay back to zero much more slowly than the excitatory inputs. When you add up thousands of these currents happening at all different times, the simulated voltage (panel C, black) looks to us humans a lot like noise. But through the mathematical magic of the Fourier transform, when we look at this same signal's frequency representation, the two contributions are clearly distinguishable!

More technically, the idea is that the ratio between fast excitation and slow inhibition should be represented by the relative strength between high-frequency (rapidly fluctuating) and low-frequency (slowly fluctuating) signals. With this hypothesis in hand, we were able to make use of several publicly available databases of neural recordings to validate the predictions made by our computational models in a few different ways. One example from the paper: we were lucky enough to find a recording from macaque monkeys undergoing anesthesia, and the anesthetic agent, propofol, acts through amplifying inhibitory inputs in the brain at GABA synapses. Therefore, we predicted that when the monkey goes under, we should see a corresponding change in the power law exponent, and that’s exactly what we found! As you can see below, our EI index remains relatively stable during the awake state, then immediately shoots down toward an inhibition-greater-than-excitation regime during the anesthetized state before coming back to baseline after the anesthesia wears off.
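The exponent itself can be read off by fitting a straight line to the power spectrum on log-log axes. Here's a toy sketch of that fit on a noiseless synthetic spectrum; the frequency band and exponent are made up for illustration and aren't the paper's actual fitting range:

```python
import math

# Hypothetical spectrum following an exact power law P(f) = f**(-chi);
# real LFP spectra are noisy, but the fitting idea is the same.
TRUE_EXPONENT = 2.0
freqs = [float(f) for f in range(30, 71)]        # an illustrative 30-70 Hz band
psd = [f ** (-TRUE_EXPONENT) for f in freqs]

# Ordinary least squares on log-log axes: log P = -chi * log f + c
log_f = [math.log10(f) for f in freqs]
log_p = [math.log10(p) for p in psd]
n = len(freqs)
mean_x = sum(log_f) / n
mean_y = sum(log_p) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(log_f, log_p))
         / sum((x - mean_x) ** 2 for x in log_f))

print(round(-slope, 3))  # recovered exponent, approximately 2.0
```

On real, noisy spectra the same regression idea applies, though the choice of fitting band and robustness to oscillatory peaks matter a lot in practice.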


Takeaways and Disclaimers

So to summarize, we were able to make predictions, borne out of observations from previous physiological experiments and our own computational modeling, and then validate these predictions using data from existing databases, drawing a link between EI balance, which is a cellular-level process, and the local field potential, which is an aggregate circuit-level signal. Personally, I think that bridging the gap between these different levels of description in the brain is super interesting, and it's one way for us to confirm our understanding of how the brain gives rise to cognition and behavior at multiple scales. Furthermore, we can now make use of the theoretical idea of EI balance in places where it was previously inaccessible, such as human patients responding to treatment.

Before I wrap up, I just want to point out that this paper does not conclusively show that EI balance directly shifts the power law exponent – what we show is a suggestive correlation. Nor does the correlation hold under all circumstances. We had to make a lot of assumptions in our model and about the data we found, such as the noise-like process by which we generated the model inputs. I’m not throwing this out here to inhibit the excitement (hah, hah), but rather to limit the scope of our claim, especially for a public-facing blog piece like this.

Rather, ours is the first step of an ongoing investigation, and although we will probably find evidence that corroborates and contradicts our findings later on, it’s important that anyone reading this and getting excited (hah) about it understands that we do not, and likely will not, have the last word on this. Ultimately, though, I believe we stumbled onto something pretty cool and we’ll definitely follow up on those assumptions one by one, and hopefully have more blog posts to come!

Some Personal Reflection

This project was my first real scientific research project in grad school, and it brought me a lot of joy and excitement, as well as a fair amount of brooding. As a whole, I really enjoyed the process of building a computational model, even if it was quite simple, and using the predictions from that to inform further empirical investigations. As I mentioned, I think we really need to bridge the gap between molecular-level mechanisms in the brain and circuit/organism-level “neural markers”, and computational modeling allows us to do that in situations where experiments would be intractable for many reasons. I certainly subscribe to the notion that combining theoretical/computational work with empirical data is an exciting and fruitful line of research, because it fills a space between two successful but largely non-overlapping subfields in neuroscience (though that trend is now changing).

Also, the fact that we were able to test our predictions on publicly available data was such a blessing, as we simply did not have the capacity, as a new lab, to do those in vitro and in vivo experiments ourselves. However, that meant combing through tons and tons of data where there might have been unlabeled or badly labeled information, only to reach the conclusion that the data is unusable for our purposes. There was some (a lot of) headbanging due to this, but ultimately, we found useful (FREE!) data and I’m very grateful for the people that made them available: CRCNS and Buzsaki Lab, Neurotycho and Fujii Lab, as well as many friends and collaborators that donated data for us to test different routes. To support this open-access endeavor, all code used to produce the analysis and figures is on our lab GitHub, found here.

The Untold Story

One last note, for those of you that find the process of scientific discovery interesting: in this blog post, I tried to write the story as a lay-friendly CliffsNotes version of the paper, starting with the importance of EI balance and the motivation to find an accessible index of it in the LFP, then outlining how we went about solving that problem. That’s the scientific story, and while not false, it’s not chronological.

The actual story began with Brad’s 2015 paper showing that aging is associated with 1/f changes. That was actually what first interested me back when I started in 2014 – this seemingly ubiquitous phenomenon (1/f scaling) in neural data. After digging a bit to find various accounts for how 1/f observations arise in nature, we decided to just simulate the LFP ourselves and see what happens. Turns out, the 1/f naturally falls out of the temporal profile of synaptic currents, which both have exponential rise and decay.

Our model contained what I thought to be the bare minimum: excitatory and inhibitory currents. At that point, I didn’t have a clue about what EI balance was or what it had been linked to. I think I was twiddling parameters one day, and realized that changing the relative weight of E and I inputs changes the 1/f exponent (or slope) because of their different time constants. Then, like any modern-day graduate student, I Googled to see if this is something that actually happens in vivo, and the rest is history. This little anecdote really just speaks to the serendipity of science, and it couldn’t have happened without the many hours of spontaneous discussions in the lab, which I’m also very grateful for, and Google. I think these little stories really liven up the otherwise logical world of science, and I’d love to read about such stories from other people!

24/52: Holy sh*t I swam with a manta ray: the most incredible and awe-inspiring thing I've seen.

We saw a lot of wonderful and incredible sights in Hawaii over the last two weeks, all of which I will shortly detail. But the one thing that stood out the most was this moment, so much so that I think it deserves its own entry.

We were snorkeling at Two Steps Beach on the Big Island, near Captain Cook. It was a treasure trove of marine life with an abundance of corals, but by that point we had been in Hawaii for a week and a half, and everything I saw became unremarkable in that everything was routinely remarkable. I was swimming over to the other end of the bay to meet Mei on the beach, still hopeful that I might see a dolphin or two, when this great white shape appeared - about the size of me - and fanned out in front of me.

My immediate feeling was that of fear, since coming face to face with a wild animal my own size is not a run-of-the-mill experience on land, and who knows if this creature is benevolent. Even if it doesn't eat me, what if I pissed it off by swimming near its home? This feeling of apprehension never quite went away, but gave way to so many more: excitement at this rare opportunity, curiosity in this strange beast, joy in watching it tumble round and round in the water. So I followed it around for two minutes, just out of reach. Not that it seemed to care.

I hate to anthropomorphize nature but in this moment, it's impossible to not ponder the mind of this being, as it performed its strange dance. Perhaps joyously, perhaps hungrily. All of these thoughts and emotions combined into a single concept - awe - and I found myself completely immersed in the presence of another entity in its undisturbed natural habitat, wondering just how many more lives like this one surrounded our thin surface of existence, evading our consciousness.

Without further ado (only edit I made was adding the music, over which you can still sometimes hear my gasps):

23/52: What is the hardest scientific endeavor of all? (Answer: neuroscience)

"The brain is the most complex thing in the universe."

If there is one quote that grinds my gears more than all else, it is this one. It is simply an idiotic thing to say: the brain is not the most complex thing in the universe; the universe, which contains billions and billions of brains small and big, is the most complex thing in the universe. Even the interaction between two brains, certainly, is more complex than the brain itself. So why is neuroscience, the scientific study of the brain, the hardest of all?

Let's take, as two examples, quantum physics and astrophysics, fields that study the tiniest and the grandest objects in our universe. These are extremely difficult things to study, because to even observe the signals necessary to start answering the questions we've set forth, it takes some real human ingenuity and delicate engineering to construct devices that can give us reliable measurements we can then use to make inferences. Watch this really great video on LIGO and the detection of gravitational waves if you are not convinced; coincidentally, it is a combination of both quantum physics and astrophysics (4:45 is the best part...why are they even wearing goggles?).

If that's not the craziest thing I've heard of, which is literally at the physical boundary of the universe (as we understand today) on both the smallest and the largest scales, I don't know what is. Maybe putting people on Mars? Maybe Elon Musk's new brain hat? Which brings me to how much more difficult neuroscience is. Actually, it's not just neuroscience, it's all scientific efforts that try to study some aspect of the human mind, like psychology and cognitive science, but not, for example, neurobiology. And the reason I believe this is not because neuroscience is intrinsically hard - it most certainly pales in comparison to many branches of the physical and even social sciences.

What makes it hard, I think, is that it is incredibly difficult for a human to be objective when we study the brain and the mind: the phenomena we are interested in explaining are the ones that occur on a daily basis in our mundane lives, like paying attention to traffic, perceiving color, etc, and it is precisely these subjective experiences that we not only draw inspiration from, but also try to dissect as if they are objective things in the universe. I certainly don't want to get into the debate of what is and what is not objectively real, a photon or consciousness, but I think we can agree that one is more objectively existing than the other, at least in terms of how we operationalize them scientifically. In fact, I think the more "dead" we think something is, the easier it is to study it objectively, which might explain the incredible disparity in our level of understanding of the brain compared to every other organ in our body.

When we study the brain and the mind, confirmation bias creeps in not only at the objective scientific level, but at the personal level as well. Yes, we can be diligent in checking results that confirm our hypotheses, but it is damn near impossible to be diligent in checking results that are consistent with our daily experiences and intuition. After all, if I've had these experiences, it must be right, right?

Why is it so hard to be objective here? I'm not sure, but if I were to venture a guess, I think it is rooted in our wish to preserve our own identities, livingness, and humanness. One of the things that makes us humans feel special is the belief that we are special: yes, dogs and cows and rats and dolphins all have brains, but we must be special in some way? And if we were to lose this feeling of specialness in the pursuit of an objective understanding of the brain and our humanness, we would paradoxically lose this humanness altogether. In fact, I believe it is crucial that we treat the human brain like we treat any other organism on this planet in order to properly study it, but that would create such an intense dissonance, because at the end of the day, when we're done being neuroscientists, we're back to being regular people - a friend, a spouse, a parent - all of which require this special humanness to maintain, this special belief that we're all the good guys at the end of the day.

I recently finished reading Paul Kalanithi's When Breath Becomes Air, and his story contextualized our scientific effort in understanding the brain in a way that's never explicitly occurred to me before - though I soon realized afterwards that it's always been an implicit motivation for me, and perhaps for many others - and that is the search for meaning. I believed that we could ultimately objectively define the meaning of our existence by understanding the brain, the organ that presumably gives us this sense of meaning - our joys, pains, struggles, triumphs, and every other thing we feel - in the first place. Now that I consciously think about it, I don't know whether this will ever happen: to objectively understand this sense of meaning, we may have to give up the notion that there is any meaning in the first place, and simply describe our thoughts, movements, and interactions as physical quantities changing over time, like how we objectively describe ant colonies, economics, and a murmuration of birds.