23/52: What is the hardest scientific endeavor of all? (Answer: neuroscience)

"The brain is the most complex thing in the universe."

If there is one quote that grinds my gears more than any other, it is this one. It is simply an idiotic thing to say: the brain is not the most complex thing in the universe; the universe, which contains billions and billions of brains small and big, is the most complex thing in the universe. Even the interaction between two brains, surely, is more complex than a single brain. So why is neuroscience, the scientific study of the brain, the hardest of all?

Let's take, as two examples, quantum physics and astrophysics, fields that study the tiniest and the grandest objects in our universe. These are extremely difficult things to study: just to observe the signals necessary to start answering the questions we've set forth, it takes real human ingenuity and delicate engineering to construct devices that can give us reliable measurements, which we can then use to make inferences. Watch this really great video on LIGO and the detection of gravitational waves if you are not convinced - coincidentally, it is a combination of both quantum physics and astrophysics (4:45 is the best part...why are they even wearing goggles?)

If that's not the craziest thing I've heard of - measurements at the physical boundaries of the universe (as we understand it today), on both the smallest and the largest scales - I don't know what is. Maybe putting people on Mars? Maybe Elon Musk's new brain hat? Which brings me to how much more difficult neuroscience is. Actually, it's not just neuroscience, it's all scientific efforts that try to study some aspect of the human mind, like psychology and cognitive science, but not, for example, neurobiology. And the reason I believe this is not because neuroscience is intrinsically hard - it most certainly pales in comparison to many branches of the physical and even social sciences.

What makes it hard, I think, is that it is incredibly difficult for a human to be objective when we study the brain and the mind: the phenomena we are interested in explaining are the ones that occur on a daily basis in our mundane lives, like paying attention to traffic, perceiving color, etc., and it is precisely these subjective experiences that we not only draw inspiration from, but also try to dissect as if they were objective things in the universe. I certainly don't want to get into the debate of what is and what is not objectively real, a photon or consciousness, but I think we can agree that one exists more objectively than the other, at least in terms of how we operationalize them scientifically. In fact, I think the more "dead" we consider something, the easier it is to study it objectively, which might explain the incredible disparity in our level of understanding of the brain compared to every other organ in our body. When we study the brain and the mind, confirmation bias creeps in not only at the objective scientific level, but at the personal level as well. Yes, we can be diligent in checking results that confirm our hypotheses, but it is damn near impossible to be diligent in checking results that are consistent with our daily experiences and intuition. After all, if I've had these experiences, it must be right, right?

Why is it so hard? I'm not sure, but if I were to venture a guess, I think it is rooted in our wish to preserve our own identities, livingness, and humanness. One of the things that makes us humans feel special is the belief that we are special: yes, dogs and cows and rats and dolphins all have brains, but surely we must be special in some way? And if we were to lose this feeling of specialness in the pursuit of an objective understanding of the brain and our humanness, we would paradoxically lose this humanness altogether. In fact, I believe it is crucial that we treat the human brain like we treat any other organism on this planet in order to properly study it, but that would create such an intense dissonance, because at the end of the day, when we're done being neuroscientists, we're back to being regular people - a friend, a spouse, a parent - all of which require this special humanness to maintain, this special belief that we're all the good guys at the end of the day.

I recently finished reading Paul Kalanithi's When Breath Becomes Air, and his story contextualized our scientific effort to understand the brain in a way that had never explicitly occurred to me before - though I soon realized it has always been an implicit motivation for me, and perhaps for many others - and that is the search for meaning. I believed that we could ultimately, objectively define the meaning of our existence by understanding the brain, the organ that presumably gives us this sense of meaning - our joys, pains, struggles, triumphs, and every other thing we feel - in the first place. Now that I consciously think about it, I don't know whether this will ever happen: to objectively understand this sense of meaning, we may have to give up the idea that there is any meaning in the first place, and simply describe our thoughts, movements, and interactions as physical quantities changing over time, like how we objectively describe ant colonies, economics, and a murmuration of birds.



22/52: The Disease of Productivity & Mindful Dishwashing

Since I started graduate school, I think, I've revered industriousness and have been deathly afraid of unproductivity. It's really weird, because I was just as gung-ho about doing well in high school and university, but in some sense I felt like at that time my life (and work) wasn't really mine, and that I was just doing the bare minimum required to get the grades I was supposed to get, but no more - I was on a bus being driven to and fro. Even though EngSci was extremely time-consuming, I still had some "down time" that was set aside to just hang out with people, get stupid drunk, or go play basketball. Actually, it wasn't "set aside" as much as I was doing anything to not spend time on school. Could I have gotten better grades, been more involved in extracurriculars, or developed SOME kind of interest or hobby outside of school? Probably. But to be honest, the concept of setting aside time to do things as an investment in my future never occurred to me, and I just did whatever I wanted to do.

The moment I started graduate school, though, there was a fundamental change in my mindset. The best way I can describe it is that I began to think of my time doing research/reading/whatever as if it were time working on my own business: I get back as much as I put in, and no more, so if I want to be successful - and there are literally no bounds on what that means - I have to put in as much time as possible. In other words, every second of my time is a resource that I have to spend wisely to get the maximum return on investment I can get, and at this point in my life, that's through working or reading or better equipping myself for research one way or another (narrow-minded, I know). From that moment on, I was always trying to preoccupy myself with work, and I convinced myself that unlimited industriousness is good. Of course it is - there are countless motivational videos on Youtube, and many times more quotes, saying hard work is the secret to life.

After three years, though, I'm starting to think it's becoming a condition of some sort that I cannot get rid of. I don't know what specifically I was thinking about, but one day I had a thought that perpetual and uncontrolled laziness or procrastination is like a disease, since a person may literally feel as if they have no control over their inability to get started on doing work. And it dawned on me - I have just as little control over my inability to NOT think about work. Of course, that's not to say that I'm always actually doing work, which is the stupid and ironic part, because I'm also starting to realize that, pushing myself or not, I end up doing about the same amount of work, except in one case I feel extra bad about taking time off, so I just end up procrastinating with something that I don't really want to do, like watching the same video of Gordon Ramsay cooking a steak 5+ times on Youtube. For a while, I tried to consciously set aside time to relax by doing things that aren't work. But that never really worked, because I would just think that I could be spending this time doing work instead.

Now I realize I had it backwards: I was depending on the activity itself to relax me, but the truth is that relaxation comes from the conscious decision to do this thing - anything - instead of thinking about work. This happened a lot with meditation: I noticed that during times when there are no immediate deadlines, meditating is a huge boost to my energy level and general sense of wellbeing. But when I'm in a period of high stress because a deadline is approaching, or I'm just particularly busy with multiple things, meditating not only does not help, but makes the situation worse, because I end up getting distracted by work and then think that I should've just spent that time working instead. It was very frustrating because I thought meditating was supposed to make me more relaxed, not the opposite?! At some point, it dawned on me that meditation cannot make one's mind more relaxed; rather, it is the relaxed mind itself - one that happily embarks on the small journey of being mindful despite all the chaos engulfing and bombarding the mind with responsibilities and tasks - that makes one's mind more relaxed. In other words, more than half the battle is already won when I wholeheartedly commit myself to relaxing. Mind blown, right?

After a few more periods of alternating stress and relative idleness, I started to pick up on all the random little things that are true indicators of my mental wellbeing. Willingness to meditate, of course, is probably a big one, precisely because it's something that is truly unnecessary, in the sense that it does not accomplish anything that "needs to get done". Going to the gym is another such thing. Household chores, in general, are pretty good litmus tests for when I start to neglect myself: dishes piling up in the sink even though I only cook like one meal a day, shirts not hung back up at the end of the day, etc. There is probably an inverse relationship between my daily cortisol level and how many times I flossed in the last week. Conversely, some things I spend more time doing when I'm not doing well: Youtube, Twitter, and various other forms of social media, reading about random shit on Wikipedia or Buzzfeed, reading the news - though that's largely a non-issue now since I avoid that like the plague. Anyway, the takeaway here is that how I'm treating myself in the face of external pressures is the best indicator of my mental wellbeing. Obviously I'm not saying to goof off every time there is more responsibility at work or something. It is simply to say that perhaps we need more loving and caring, from ourselves, precisely when the most is being demanded of us (wow, that sounds super obvious when I write it down that way).

21/52: Combatting bias in science

For week of May 22

This is the third post in my series of reflections on the state of science, which, at this point, feels like it's more likely to be a given week's topic than not. In any case, today I will be writing about bias within the selection process in science. This line of thought has accumulated over several years and many broader conversations about affirmative action in both academia and industry with various people. I realize that affirmative action is a particularly sensitive topic, and it seems like people are either super for it or super against it, so let me just start by saying that without a doubt I believe in the principles behind it, and given how recently it has become an adopted practice, it is a great first step towards balancing out structural inequalities. That being said, I think it is worth taking a closer look to see how we are doing and what we're trying to achieve.

The impetus behind writing this actual post today came from an interesting blog post I recently read by (presumably) a fellow science blogger whom I only have Twitter relations with. In it, the author asks whether graduate school admission criteria, like GPA, previous lab experience, letters of recommendation, etc., are truly indicators of future success, or whether we are simply selecting for previous privilege. And if the latter, how can we give opportunities to those who want to pursue science but did not have the luck of birth to prepare themselves for it? I found this post interesting because, while it was not the explicit purpose, it gave me insight into differentiating between personal and structural biases, the different goals we have for solving these issues, and how they need to be tackled differently. I just made up these categories on the spot, so forgive me if they are already properly defined elsewhere.

Removing personal bias helps us select objectively and optimally
I'll get the easy one out of the way first: imagine yourself as the hiring manager for a position in your company, or an academic researcher deciding whether to admit a new student into your lab next year. Personal biases often come in the form of color and gender, and studies have shown time and time again that men are likely to be perceived as more capable than women, and white candidates more capable than black candidates. This page from the University of Oregon on implicit bias during the hiring process details, with citations, many of the studies that have reported significant effects. The one I had in mind, for example, reports that "randomly assigning different names to resumes showed that job applicants with “white-sounding names” were more likely to be interviewed for open positions than were equally qualified applicants with “African American-sounding names”" (Bertrand & Mullainathan, 2004). These studies demonstrate that implicit bias is rampant amongst even the most "objective" professions, like science and medicine. Additionally, they almost always deal with gender and race, two things we can usually identify from resumes, applications, and certainly in person. While this is an ugly problem, it is fairly simple to fix as long as we commit to the value that equally-qualified candidates should have equal opportunities. As some of these studies have shown, once the interviewer is blind to the gender or race of the applicant, they are more likely to make an objective and unbiased selection, at least in terms of the proportions of men-women or black-white selected. This is also why I dub this a personal bias: once we take the biases of the interviewer out of the equation, the problem (largely) disappears and objectively better candidates are more likely to be successful.

Removing structural bias means selecting sub-optimally?
Which brings me to the latter and more difficult issue to tackle, the one discussed in the blog post I referenced above: structural biases. Structural (or systematic) biases are caused by systematic and historical issues in our society that make one group of people more likely to be objectively qualified than another. In the context of academic research, a more qualified candidate will have, in order of importance: more research experience in labs during undergrad (or even high school), better recommendation letters from well-known researchers in the field, more well-rounded extracurriculars, a better GPA, and potentially better GRE scores. If I were trying to choose grad students for my lab, of course I would be more likely to pick someone who has demonstrably proven that they can do good research, and I'll happily make the selection while blinded to their gender and race. The problem, though, is that these qualifications are more likely to belong to someone who comes from a higher socioeconomic background, or colloquially, people with "privilege". This is really not a contentious fact: if a high school or college student has to work a part-time job to support themselves or their family, can we even expect them to have as much time to study for their exams, much less take an active interest in research and athletics? Here, we are faced with two difficult questions: first, are the more qualified candidates undeserving of pursuing their science dreams, since, after all, they still had to work hard for the things they've accomplished? And second, are the less qualified candidates less deserving, because they will make lesser scientists in the future?
I think the answer to both is no, but the difficulty of these questions lies in the fact that they cannot be neutralized by blinding yourself, the interviewer, to color and gender alone; rather, the answer is a choice we have to make collectively and consciously, after we figure out what our values are and what we are trying to accomplish.

Let's clearly define our objectives
So what are we trying to accomplish? On the one hand, it is simply to select the best prospective student possible. If this is the case, statistically speaking, it's better to pick someone with better credentials, no question. There might be candidates that are diamond-in-the-rough kinds of bets, or maybe their story speaks to you personally such that you believe they possess the character, if not the expertise, to be a good scientist in the lab and a better person in the future. These are, for the most part, rare personal choices that have implications only for the personnel involved. On the other hand, there is the problem of placing underrepresented groups into positions of power, so that the cycle of systematic oppression does not continue. It is about giving poor and underrepresented people quality education and critical thinking skills so that they can hopefully help their communities in the future. If this is what we aim to accomplish, and I believe we should, then we need to be upfront with ourselves and everybody involved that we, at any given time, may not be trying to find the most qualified candidate for graduate school or an important position in the government. It is certainly a priority to place competent people into jobs where others depend on them, but perhaps the number one priority is consciously equalizing the disproportionate representation of different gender, race, and, more importantly, socioeconomic groups. This means having admission quotas, and it means selecting under-qualified candidates over more qualified ones, and that is exactly the point. But more importantly, we have to be honest with ourselves about what we want to achieve, because what doesn't make sense is to say that we are battling inequalities, but then fill race or gender quotas with students whose parents are doctors or lawyers, and who did undergraduate research under a Nobel Prize winner at an Ivy League school.
Like I said, affirmative action in the form of student quotas is a first step towards consciously counterbalancing inequities, because intersectionality exists and there is a correlation between, for example, race and income. However, to say that quotas simultaneously select for the best candidates and try to equalize existing disparities is, if not disingenuous, then completely missing the point.

Why bother?
To end, I think it's worth talking briefly about why one might want to tackle structural inequalities, especially in science. One argument against it, for example, is that hardworking parents should be able to transfer the privileges they've gained to their children. If I'm an immigrant who broke into the elite ranks of doctorhood or lawyerhood, why should my children be punished for the position I've actively put them in, where they don't have to face the hardship I've faced? Anecdotally, I find this attitude most common among Asian technical immigrants, because they often start out dirt poor in North America but rarely get recognized as a legitimate minority group, so in a way, they get the short end of both deals. I don't have a good response to that, other than that I think it's perhaps the right thing to do to give up some privileges for the betterment of society as a whole, and getting snubbed by Harvard or Stanford and ending up at UCLA is really not that big of a problem. What is a problem, however, is a lack of diversity in higher education, as well as in decision-making roles in the government. White dudes with private endowments, to be fair, made significant contributions to mathematics and physics for most of the last 2 or 3 centuries. But we are now at a point in society where different groups of people can and do bring significantly different - and valuable - perspectives in tackling challenging scientific and societal problems, and those perspectives are crucial in recognizing, sometimes literally, that a problem exists.


20/52: On the reproducibility crisis and how theory in neuroscience can help to avert it

This is the second of a series of posts where I talk about the current state of science, and today's in particular will begin by looking at the reproducibility crisis, and how a seemingly technical problem can give rise to this real-world issue.

Last night, I was sitting in lecture for the class I'm TAing this quarter - introduction to data science - and we began to talk about the current reproducibility crisis in science. The fact that reproducibility has been an issue, especially in the biological and psychological sciences, is not news to me. In the last couple of years, there have been various accounts of this particular problem in the media, as well as scrutiny from the scientific community (see here, and an analytical explanation for neuroscience specifically here). What caught my eye, though, was the graphic below, taken from the accompanying article in the Economist:

False Positive Results by Chance
Briefly, this schematic aims to explain why there seem to be so many false positive (hence irreproducible) findings in science. It begins with the assumption that most scientific hypotheses are false, and only a small fraction (10% here) of them will turn out to be correct after data is gathered through experiments (panel 1). This seems consistent with intuition, since most of our ideas will turn out to be crap, and a lot of experiments simply don't work out. This is purely personal speculation, but there seemed to be a kind of golden age of science - at least in biology - where fundamental and groundbreaking results were discovered in a flurry in the 80s and 90s. Nowadays, it seems harder to find the golden needle in the haystack, hence most of our hypotheses - the ones that we were so convinced would work after 3 beers with the lab - are probably wrong.

Panel 2: even with careful statistical analysis, an experiment has the possibility of returning a false positive or false negative result. For example, if we hypothesize that eating strawberries makes your brain larger, we will set out to gather data and test this relationship. We often use a p=0.05 threshold for significance, meaning that if there were indeed no real relationship between strawberries and brain size, then only 5% of the time would our experimental data - given the threshold we set - produce an effect large enough BY CHANCE that we then go on to interpret as real. It follows, then, that 5% of all false hypotheses will return positive results by chance, not out of misconduct (45 out of 900). Similarly, a portion of actually true hypotheses will be deemed false after our experiments, also by chance (assuming 80% statistical power, we miss 20% of 100 = 20). So in the end, we end up with 80 (true positive) + 45 (false positive) = 125 positive findings.

Panel 3: of course, it is overwhelmingly the positive findings that actually make it out of the lab, so that we can make another headline that reads "Scientists discover _____". Now, if we go back and re-examine these 125 discoveries, it's likely that we will not be able to reproduce the majority of those 45 false positives (hard to get lucky twice). Hence, it seems that science is in a dire reproducibility crisis, since around a third of the positive findings cannot be reproduced by independent labs. There are real and systematic problems that make this happen. For example, under bad incentive structures where scientists are judged only by publications and citation counts, they will want to publish as much as possible to get grant funding and tenure so they can keep doing the science they love. In very few instances is this actually due to misconduct and conscious malpractice, and the scientific process was designed to catch these kinds of errors anyway. However, over time, sloppy standards get sloppier through no fault of any individual, and no one bothers to try to reproduce a work, because reproducing someone else's exciting finding won't get you nearly as much recognition - so why not invest those precious grant dollars more efficiently? These are problems that most scientists acknowledge and lament, and wish were different (I believe), so I won't belabor the point here, though I may write a separate post about these larger issues.
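For concreteness, the bookkeeping behind all three panels can be sketched in a few lines of code. The specific numbers (1000 hypotheses, 10% true, a p=0.05 threshold, 80% statistical power) are the assumptions from the Economist graphic as I've described it above, not universal constants:

```python
# Arithmetic behind the schematic: why roughly a third of published
# positive findings can be false with no misconduct involved.
total = 1000          # hypotheses tested
true_frac = 0.10      # panel 1: fraction of hypotheses that are actually true
alpha = 0.05          # p-value threshold (false positive rate)
power = 0.80          # probability of detecting a true effect

n_true = int(total * true_frac)    # 100 true hypotheses
n_false = total - n_true           # 900 false hypotheses

true_pos = int(n_true * power)     # 80 true effects correctly detected
false_neg = n_true - true_pos      # 20 true effects missed by chance
false_pos = int(n_false * alpha)   # 45 false hypotheses significant by chance

positives = true_pos + false_pos   # 125 "discoveries" that get published
print(false_pos / positives)       # 0.36 - the irreproducible "third"
```

Note that the 36% figure is the fraction of published positives that are false, not the fraction of all experiments that fail - which is exactly why the published record looks so much worse than the underlying science.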

How Theory Can Save Us
What I want to focus on is a new insight I stumbled upon while looking at this illustration: people can't publish null findings. This fact is as old as time itself, and even has its own name: publication bias. It describes the situation where the result of an experiment influences whether it will get published, and most of the time the bias is towards positive results. In other words, only "discoveries" tend to get published. I was exposed to and made wary of this the day I started graduate school, because neuroscience as a field seems to be one of the worst offenders. Almost every single person in the scientific community laments this, though efforts have definitely been made to curb the practice via mechanisms like pre-registration, where one shares the plan of an experiment prior to actually conducting it, clearly stating the hypothesis and expected finding. Publication bias is so pervasive that I never really stopped to think about why null findings are unsexy and unpublishable, until I saw that second panel in the schematic and realized just how many more null findings there probably are compared to positive findings, and how much value these results would add to our body of knowledge about the brain at large.

But why aren't these null findings published? Essentially, it boils down to one thing: null findings are unsexy because they don't tell you anything informative. But failure to find anything is not uninformative on its own - it is only so because there is no theory behind why it should NOT have failed. Put another way: if we think of most neuroscience experiments as fishing expeditions (biology is stamp-collecting, after all), going out into a random patch of sea and casting your net and not getting a single fish is not that surprising, therefore not very informative. However, if you were in the same patch of sea, lowered an empty bucket, and proceeded to not raise any water, this would be pretty informative, and certainly publishable. Why? Because our theory about the sea, water, and physics tells us that we really should expect to get water, and thus not getting water with a bucket means either: a) your bucket is leaking, or b) physics is broken. Think about it: if, in a well-conducted and well-controlled physics experiment, you found that an object did not drop to the ground after release, you wouldn't just say, "oh well, just another failed experiment, back to the lab tomorrow." No, you would probably publish a Nature paper saying you found a spot on Earth where there is no gravity. Expectations built from theory, or the lack thereof, are precisely what makes null findings informative or uninformative. If we have some expectations for how a neuroscience experiment should go - I'm not just talking about a hypothesis, I'm talking about the quantitative and falsifiable explanation we have in our heads FOR that hypothesis - then a positive and a negative finding would be equally informative. In fact, a negative finding is probably MORE informative, because it means your theory needs to be revised, which forms the basis of falsifiable science. But if we have no theory? A null finding is just another empty trip out to sea.

How will this help with the reproducibility crisis? Well, I think if we started to build actual neuroscientific theories that we believe in and can work with in a quantitative way, we would slowly be rid of the publication bias. After all, the most valuable artifact of physics is not the collection of empirical observations we've gathered in the last 200 years; it is the set of generalizable and useful theories we were able to abstract from, and refine with the help of, those observations. Similarly, a coherent theoretical framework in neuroscience and biology will shift the focus away from stamp-collecting and towards knowledge-building. Once the community at large accepts and values work that falsifies theories or reproduces existing evidence, we can move towards a sustainable incentive structure.

19/52: On Arrival, Sapir-Whorf, and Reading Fiction

(Cover image source: Arrival)

When I was younger - I'm talking about middle-school age - I read all sorts of books for fun. Captain Underpants, Artemis Fowl, and Harry Potter immediately come to mind. As I grew older, though, I stopped reading fiction altogether, and what little leisure reading I did do was almost exclusively non-fiction or facts-of-the-world types of books. Not only did I stop reading fiction, I took pride in not wasting time on stories. After all, if I wanted to entertain myself with a good story, I could watch a movie in a tenth of the time and use the other nine tenths to learn some more math and science (NERD!). This was partly because I was actually very busy studying math and science, and partly because I hated high school English so much that I thought it would turn me off reading forever, not understanding that people can actually enjoy reading Shakespeare and other literary works without explicitly analyzing the iambic pentameter and centuries-old metaphors. In any case, after a 10-year drought, I started reading for leisure again last fall, because I now have an hour-long bus commute as part of my day. At first, I read just as a way to relax and get away from my usual riveting readings about neuroscience. But slowly, I have to say, I realized that reading works of literature and seeing how words can be combined in different ways to describe feelings has literally changed the way I experience the world, and the way I experience myself.

If you've seen the film Arrival, or read the book, you will know that the premise of the story is that a linguistics professor becomes able to experience time in nonlinear ways due to the nonlinear alien language she learns. If you haven't, don't worry - that's not quite enough to spoil the movie, and it's still worth a watch. While the notion that anything can warp our perception of time might take some stretch of imagination, the idea that language shapes the way we think does not. In fact, the movie itself cites the Sapir-Whorf hypothesis, or the theory of linguistic relativity. I must say, when I heard Amy Adams make explicit reference to it, I felt a tiny geeky giddiness, because I actually learned about this in graduate school. It's the same giddiness I got when I saw Johnny Depp trying to upload his mind into a computer in Transcendence, even though that movie made no f-ing sense. Anyway, my understanding of linguistic relativity pales in comparison to that of some of my friends and peers who are actually conducting research on this very topic, but the gist of it is that language is necessary for complex thoughts, the thoughts we have depend greatly on the language we speak, and different cultures "think differently", in a way, because of the different languages they speak. When I first learned about this theory three years ago, it was pretty easy to digest, and made no great impression on me. After all, it was not at all surprising to think that a native Chinese speaker would think in Chinese, and that, because of the difference in vocabulary, some concepts and cultural customs are simply not transferable to an English speaker. But I think I took it too literally, and re-learning the theory through a movie, during a time when I am, in a way, learning a new language through literary works, made me appreciate it so much more.

In essence, language is not only a tool that we use to outwardly describe concepts and thoughts to others; it is also the primary tool we use to communicate with ourselves - through thoughts. Furthermore, language literally gives us access to some concepts that would otherwise not exist. In other words, annotating something for the first time with language defines and "creates" it, especially if this "something" cannot be captured with any other sensory modality. For example, if we did not have a word for apples, an apple could still be materialized in our thoughts by way of imagining its sight, smell, taste, and the sound the fruit makes when we bite into it. In that way, even though it would be a lot harder to tell someone you want to buy an apple, the concept itself can (probably?) still exist, its meaning anchored through our senses. On the other hand, something as abstract and fleeting as an emotion or feeling is quite difficult to capture without words. Try to think about "sadness" without using the word: you will probably need to resort to a bodily feeling, like a tightness in your chest, or some previous memory that put you in that state of mind, like when you dropped a perfectly good ice cream cone. Some people are incredibly adept at relating their previous emotional experiences to new ones, such that old memories of the same feeling can serve as an anchor, and perhaps this is the skill (or talent) of empathy. Unfortunately, I am not such a person, and my emotional memory is very limited, both in detail and in how far back I can remember, so I can't really rely on it to process even my own feelings time and time again, much less intuitively understand someone else's. In fact, I thought emotions were so uncomplicated that I actively maintained that there are, scientifically speaking, only six elementary and universal emotions, each describable with a single English word (happy, sad, angry, etc.).

For that reason, reading detailed and intricate descriptions of how fictional characters feel, contextualized in a larger story where I understand their origins and goals and dreams and fears, has really opened my eyes to how I feel, and in turn, how others might feel. Many a time, I have read a passage with the joyous feeling that not only does someone else - someone wielding this pen - understand precisely how I've felt before, but that they gave me the words to understand, for the first time, how I feel myself. It's crazy. I recently finished reading All the Light We Cannot See, and I wrote a few weeks ago about a particular passage that feels so mundane, yet is so accurate in its description of the frustration and helplessness one feels while trying to read under distraction - even though it never used those words. Another example: I am nearing the end of The Glass Bead Game by Hermann Hesse, and the subtle descriptions of Joseph Knecht's tranquility and sorrow simply cannot be substituted by the words "content" and "sad". Without delving too much into the book, I feel like I've learned for the first time that it is possible to think and feel at the same time, and in turn how I can process other people's simultaneous thoughts and feelings on the fly. To me, it feels like wiping the condensation off of my mirror for the first time and seeing a part of myself I've never seen before. And with that, I feel like I understand the Sapir-Whorf hypothesis just a little more, and I'm pleasantly surprised that it actually has implications for my own personal life. Actually, the idea that reading quality literature gives you a window into someone else's feelings is not new: I remember reading about this a few years ago but apparently completely forgot about it, probably during a time when I thought, "leisure reading? hah, that's nice."
But evidently there is scientific support for the theory that literature can make you more empathetic to the feelings of others.

Extending this away from feelings and into a broader context: can you imagine a world without concepts of freedom, justice, and democracy? Well, I'm starting 1984, so I guess I will soon find out.