Things I learned the hard way (in graduate school)

Dear friends,

It has been a while, I know. My tweets, posts, and overall online ponderings have dwindled this last year. But a lot has happened! I finished up my Ph.D. research, wrote up and successfully defended my dissertation, graduated from the University of Florida, and started as a postdoc at Yale University.



Featured: my wife, Begüm, UF Ph.D. grad ’15, aerospace engineering.


Now that we are (almost) all moved in and the work/life balance is settling into something like normalcy, I have had some time to reflect on my graduate school experience. I was thinking about writing a post on “what I wish I learned sooner” or “what I would tell myself 6 years ago”, but after giving it some thought I realized that I wouldn’t change anything about my path through grad school. Every lesson that I learned “the hard way” was a necessary struggle. Nevertheless, I’d like to share some of the lessons I learned the hard way, and maybe if you are reading this as a first-year graduate student you can keep them in mind as you forge your own path and learn your own lessons. So, off the top of my head, things I learned the hard way, in no particular order:

  • Treat grad school like a 9-5 job.

    When I first got to graduate school the postdoc in our lab suggested I treat graduate school like a 9-5 job. As in, have the discipline to go to work every workday, even if you do not have any obvious obligations. It was easy not to do this, since my advisor didn’t set any rules about being in the lab, my work didn’t revolve around keeping lab animals happy, and I only taught a few days a week. Plus, I had just moved to Florida and was living in a small city with 50,000 other young adults…

    Grad school is a lot of work. Before you know it there will be a bunch of obligations to tend to, many of which are completely new. Teaching, grading, your own classes and homework and exams, and, hopefully, your own batch of new research. Not to mention the little things nobody talks about, like spending an entire week trying to figure out how to use a program to make one figure that you never even use.

    All I’m saying is that you will thank yourself later if you build work habits and discipline early, even if you do not have any established research yet. (Thanks for the advice, April!)

    That all being said, make sure you figure out when it is appropriate to close your email, turn off your phone, unplug from the internet, and take care of your mental and physical wellbeing.

  • Find where you work best.

    The lab wasn’t always the best place to do work. Maybe your labmates are doing intensive, noisy work. Maybe people are holding office hours. Maybe the building is old and heavily used and in Florida and it fills up with cockroaches after 4pm. I went through many phases in grad school with ideas about where I worked best. In the beginning it was the library; in the middle, a coffee shop; and at the end, the lab and home (and at the VERY end it was everywhere, all the time). And if you have a mental block, it might be best to get up, go for a long walk, and change your environment.

  • Comment your code.

    I know this is the first thing you learn as a new coder/programmer. I know the (good) code shared on Stack Exchange is beautifully commented. I know it is SO obvious that this is the best practice. But it is easy to get lazy, especially when you don’t think anyone else will ever see that particular batch of code.

    I didn’t realize how easy it is to completely forget what you did just two weeks ago. Maybe you are meeting with your advisor, and she suggests trying X, Y, and Z, and you say “Great idea! I actually did that two weeks ago!” Then you go to pull up your code (if you can find it; that’s another lesson I learned the hard way) and it looks like someone else wrote it.

    Trust me, even if you are writing one line of code to do something fairly trivial, add a comment saying what you are doing, why you are doing it, and how it works (in English)… and how it fits in with the rest. Maybe it’ll take an extra minute that feels wasted, but that is better than actually wasting 15 minutes at some later point trying to decipher your mess. You will thank yourself when a reviewer comments on results you generated 8 months ago and you need to rerun everything! On that note, I found using GitHub extremely useful during the latter part of my Ph.D. work, especially when I had multiple projects going at the same time and I needed to jump between them and remember exactly where I left off.
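    To make this concrete, here is the kind of comment I mean. The data and variable names below are invented for illustration, not from any real project:

```r
# A tiny made-up expression table, just so this example runs on its own
expression.data <- data.frame(gene = c("g1", "g2", "g3"),
                              fold.change = c(3.1, 1.2, 2.5),
                              p.value = c(0.01, 0.20, 0.03))

# Keep only genes with at least a two-fold expression change AND a
# significant p-value; this filtered table feeds the plotting code that
# follows later in the script.
significant.genes <- expression.data[expression.data$fold.change > 2 &
                                     expression.data$p.value < 0.05, ]
```

    One line of actual work, a few lines of comments, and future-you (or your advisor) can tell at a glance what the filter does and why it exists.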

  • Create figures directly from your code.

    Perhaps you have no formal training when it comes to coding. Perhaps you learned everything in a nice GUI and you can save figures by clicking “Save Figure As…” (I learned in RStudio). Perhaps you have gone years coding and generating results and saving them to random folders throughout your computer and it has never been a problem. You finally submit the first chapter of your dissertation for peer-review, and:

    “Please resubmit Figure 2 at 600 DPI.” Ok, no problem, you think: where is the code I used to generate the original figure 8 months ago? Where did I even save the original PDF?!

    Better way: directly save your figures within your code. This is fairly simple in R, and I do something like this:



#Alright, time to make that neat figure from the data generated by the code
#immediately above (descriptive comments throughout!)
#Mouse, tumorigenesis incidence, things I can Ctrl+F here

plot.count <- 0    #initialize a counter

plot.count <- plot.count + 1    #this counter goes up by 1 every time you run the
#code to generate a new figure in the same folder. This way, you can try a bunch
#of dimensions/font sizes/etc. in a row and pick which looks best

png(height = 12, width = 8, units = "in", res = 300,
    file = paste("C:/Users/Vincent/Desktop/aging/evo_tradeoff_figures/",
                 "mouse_combined_wline_exitseminar", plot.count, ".png",
                 sep = ""))
#making a png image, and now we know the exact dimensions, resolution, and where
#it was saved. Note that the counter is within the (descriptive) filename.

#(...plotting code for the figure goes here...)

dev.off()    #turn off the "I'm outputting images to a file now" signal

There, now you can look back at your code and see how the figure was made, where it was saved, and easily redo it!


  • Do not use PowerPoint to create (manuscript) figures.

    PowerPoint is great. Over the last few years I have become a PowerPoint whiz kid. I love it. But I’ve come to realize that it isn’t the ideal choice for making (professional) figures. It IS great for a quick and dirty manipulation of something for a presentation. It is great because it is easy. But when you are stitching a bunch of images together, over a background, with a specific DPI requirement and dozens of layers, you might need something a little less easy. That’s where GIMP (the GNU Image Manipulation Program) came into play. It took some getting used to, but trust me, it’s worth it. (Just like learning LaTeX over Word.)

  • Set up alerts to find new papers.

    Whether it is a specific journal’s Table of Contents or something like Google Scholar Alerts, do yourself a favor and set up some automated email service that alerts you to new papers in your field and interests. Even if you just take note of authors and titles and abstracts, it’ll keep you at the edge of your field, and you never know who you will meet at a conference!

  • Join listservs.

    The messages I received through Evoldir alerted me to numerous conferences and postdoc opportunities. Worth deleting a dozen or so messages every morning.

  • Join Twitter (maintain a semi-professional social media presence).

    I once heard someone describe their Twitter persona as the one they would use at a beer-and-wine social at a conference: professional, but also fun and somewhat personal. Through Twitter I have shared my interests, met many researchers in my field, and had my research (e.g. slides at a conference) sent out to tens of thousands of people. It’s great! Plus, you never know, maybe your university will even pick up on your story and want to share it with the world.



  • Give talks!!

    My graduate career has been punctuated by the talks I’ve given. Nothing helps you solidify your thoughts and results like having to announce them to a group of colleagues. Invite the meanest, most critical, most intimidating professors you can think of, because they will probably have the best feedback. Invite mathematicians if your talk includes math. If you have the opportunity to present in an informal setting, share new half-baked ideas with colleagues. I can’t stress this enough: sharing your work with the field is one of the most challenging and rewarding parts of graduate school.


If you went through this journey and want to share some things you learned the hard way, let me know! I’ll update if I think of anything else.


Fun Fact about your diet…

Spoiler alert: I usually start my seminars by talking about how we are all in a continual flux of our constituent cells. Don’t tell me that “people never change”… yes, they do, in fact every day a huge chunk of every person dies and is reborn. For instance, perhaps 84% of our cells are red blood cells, and each one of those cells only lives about 100-120 days (meaning that 2.4 million new cells need to be produced every second)! Not only that, our entire intestinal epithelium is completely renewed every few days, and we lose (and gain) about a Billion (Carl Sagan emphasis added here) cells in our small intestine every 20 minutes.  Woah.  OK, … so what happens to the cells that “leave” our intestines? They enter the part of the gut where the “food” goes. So, do we eat them? Do we eat our own cells? Are we constantly, every day, digesting ourselves?
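For the skeptics, that replacement rate roughly checks out. The total cell count isn’t given above, so I’m assuming a commonly cited estimate of about 30 trillion human cells; the 84% and the ~110-day lifespan come from the figures above:

```r
total.cells <- 30e12               # assumed total human cell count (~30 trillion)
rbc <- 0.84 * total.cells          # ~84% of our cells are red blood cells
lifespan.sec <- 110 * 24 * 3600    # ~110-day red blood cell lifespan, in seconds

# Replacement rate needed just to keep the red blood cell count steady:
rbc / lifespan.sec                 # on the order of 2-3 million new cells per second
```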


According to the Food and Nutrition Encyclopedia, about 25% of the protein we digest every day comes from our own intestinal cells! Think about that. Today, perhaps one fourth of the protein in your diet came from digesting your own flesh!

And then I can start talking about my research.

Spider Sunday is back! This week: Wolves in Your Backyard.

I have a confession. When I was young I wasn’t very kind to spiders. My behavior can likely be attributed to fear; growing up we are surrounded by imagery of spiders being dangerous and alien. We fear what we don’t understand. The internet says Marie Curie once said, “Nothing in life is to be feared, it is only to be understood. Now is the time to understand more, so that we may fear less.” And it’s true: the more I learned about spiders, the less I squashed them. Now that I’m older, and a biologist, and living in Florida (read: constantly surrounded by giant spiders), I see spiders as fascinating, useful, and largely innocuous. And I’m on a mission to spread this view in order to gain back all the biology-karma I lost squashing spiders in my childhood. So, here are some neat facts I just learned after a recent encounter.

Storytime. The other night I was outside enjoying the (relatively) cooler Floridian night and getting some work done. Suddenly I glimpsed a familiar shape darting towards the leg of my chair. It was a few inches long, but too meaty and agile to be an orb-weaver or banana spider; I knew it had to be a wolf spider. So I jumped up, reached for my phone, and rushed to snap a picture before she retreated. When the flash went off I was greeted with a surprise…

Proud Wolf Spider Momma


Reflections. Reflections from eyes. But wait, why are there reflections coming from the spider’s abdomen?

Woah. Cool! Ok, so now I know that wolf spiders’ eyes are reflective, just like I’ve seen (and posted about) before in Golden Orb-weavers…


Golden Orb-weaver, La Chua Trail, Gainesville FL.

This is very similar to the reflections we’ve all seen before when shining a light towards certain mammals at night, such as cats or raccoons.

Raccoon hanging out behind UF's Science Library, Gainesville FL.


So, what’s going on here? By reflecting light back through the retina, there is more light available to the photoreceptors, enhancing night vision. In vertebrates this is accomplished by the tapetum lucidum, or “bright tapestry” in Latin, a thin tissue membrane in the back of the eye. It looks like the tapetum has evolved independently in invertebrates and vertebrates, and it actually exists in several invertebrate taxa including scallops, crustaceans, scorpions, and dragonflies. The tapetum in invertebrates consists of parallel strips of reflective guanine crystals, the same type of crystals that give fish their shiny metallic skin and allow chameleons to shift their skin color.

Want to see it for yourself? Go outside at night and shine a bright light into the grass. Those hundreds of reflective dots shining back? Wolf spiders looking at you. But fear not, for now you understand more. Just wear shoes.

(and share your cool spider pictures with me!)

Dragonflies are awesome.

Alright, so my wife and I both think dragonflies are really cool. We never really thought about exactly why; there’s just this inherent neatness about them. Maybe it’s how they hover like brightly colored silent helicopters and then quickly dart about like… I don’t know, some sort of alien spacecraft. And, unlike some of our other backyard insect friends (I’m talking about mosquitoes and red imported fire ants, both of which seem to have an affinity for my skin in particular), dragonflies don’t bother us.

This last weekend I was fortunate enough to have a dragonfly interaction that got me falling down the wikipedia rabbit hole learning about our flying friends, so I figured I’d share some of what I found here. First, for the fateful interaction:

I was grilling up a batch of beer in preparation for the summer…

You read that right. Cannataro’s Brewery Summer Saison will go on tap June 2015.

… and I was joined by a male blue dasher!


He hung out for a while, flew to different perches, and even let me get a few close-ups.


Smiling for the camera. Which, by the way, was just my cell phone (a Galaxy S4).

Eventually my wort was ready to start cooling and he was done patrolling the garden, so we exchanged our goodbyes and went our separate ways. Little did I know the carnage that was awaiting me the next morning. Warning, dear reader: the next image is graphic.


My dragonfly friend had been decapitated! By one of his own! Well, kind of. That new, green, living dragonfly is (I believe) a female eastern pondhawk. After I took that picture she flew off, taking the body with her, leaving just his head as evidence. Woah. Talk about cool backyard biology! Down the wikipedia rabbit hole I went. Time for some rapid fire fun dragonfly facts.

Dragonflies have been on Earth in pretty much their present form for over 300 million years. In fact, the largest insect ever to exist was an ancient dragonfly relative (with an estimated wingspan of 28 inches!). They can spend years in their underwater nymph form, which has an extendable, retractile lower jaw (remind you of any alien characters?) and can feed on vertebrates (small fish, tadpoles) and mosquito larvae (thank you).

The adults enjoy mosquitoes as well (told you they were awesome).

The nymph crawls out of the water and molts directly into the adult, a process called ecdysis. They have a unique mating system where the male grabs the female behind her head with the claspers at the end of his abdomen, and they form a heart-shaped mating pair. Their wings are self-cleaning and water-repellent due to the lotus effect. I could go on and on, but if you want to learn more you should check out this video:

So next time you look at a dragonfly think about how you’re looking at the 300 million year old body plan of a ruthless killing machine with a 95% hunting success rate. Dragonflies are awesome!


Generating Biology

I’m taking a Teaching Careers/Methods course, and our first assignment was to develop a class we’d like to teach and give a little mini-lesson conveying some learning objectives and outcomes of the class. I proposed a class dubbed “Programming and Quantitative Methods in Biology” (or should it be Quantitative Methods and Programming in Biology?). Regardless, it’s meant for upper-level undergraduates, and the idea is to give students the means to model and simulate biological systems and interpret the underlying biological phenomena at play. I wanted to turn the teaching of biology on its head a bit: instead of learning words and formulae (for example: drift, selection, p+q=1) and then trying to associate them with some dynamics in your head, you “generate” the biology, observe and manipulate the dynamics, and then learn the corresponding biological concepts. I guess I spent a lot of time in college memorizing terms instead of understanding concepts, and I didn’t realize the difference until I was in graduate school. Anyway, I’ll give a synopsis of my mini-lesson below. Please feel free to expand on, improve, or implement these ideas yourself.

Towards the end of the semester, students are tasked with creating a simulation of an evolving diploid population with constant population size. Ideally, their simulation has input parameters for the initial population size and the initial distribution of genotypes at a particular locus (AA, Aa, aa). It would also be nice to define the relative propensity of specific genotypes to have offspring, and the ability of an allele to mutate into another allele at some specified rate. I coded this simulation up myself for the mini-lesson I gave so that the class could play along. You can find the code here:

(again, please feel free to expand and improve on this!)

You have to save both .R files into the same folder and then open R, open the HW_dynamics_master.R script, and set the folder you saved the files to as your working directory. Then you can manipulate the parameters, run the whole script, and see the resultant evolutionary dynamics.
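If you just want the gist without digging through the full scripts, here is a minimal sketch of that kind of simulation. The function and parameter names are my own, not the ones in the course code:

```r
# A minimal diploid simulation with constant population size (a sketch;
# names and structure are my own, not the actual HW_dynamics scripts)
simulate.population <- function(AA, Aa, aa, generations,
                                fitness = c(AA = 1, Aa = 1, aa = 1),
                                mu.A.to.a = 0, mu.a.to.A = 0) {
  N <- AA + Aa + aa                     # population size stays constant
  counts <- matrix(0, nrow = generations + 1, ncol = 3,
                   dimnames = list(NULL, c("AA", "Aa", "aa")))
  counts[1, ] <- c(AA, Aa, aa)
  for (g in 1:generations) {
    # Frequency of the 'A' allele in the current generation
    p <- (2 * counts[g, "AA"] + counts[g, "Aa"]) / (2 * N)
    # Gametes may mutate before fertilization
    p <- p * (1 - mu.A.to.a) + (1 - p) * mu.a.to.A
    # Random mating, weighted by each genotype's relative fitness
    weights <- c(AA = p^2, Aa = 2 * p * (1 - p), aa = (1 - p)^2) * fitness
    # Draw N offspring; genetic drift falls out of this sampling step for free
    counts[g + 1, ] <- as.vector(rmultinom(1, N, weights))
  }
  counts
}
```

For example, `simulate.population(AA = 0, Aa = 2000, aa = 0, generations = 50)` settles near the 1/4, 1/2, 1/4 Hardy-Weinberg proportions, just like the run below.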

What do you think will happen to the genotype distribution if you run the simulation with these initial parameters?

#Number of each genotype in the population initially:
AA <- 0
Aa <- 2000
aa <- 0

#Generations to run the simulation:
Generations <- 50

#Chance of allele mutations per reproduction event:
mutation.A.to.a <- 0
mutation.a.to.A <- 0

#Some measure of relative fitness.
fitness.AA <- 1
fitness.Aa <- 1
fitness.aa <- 1

Do you think the population will remain as 100% heterozygote? Do you think any allele will go extinct? Do you think it will continually fluctuate, or reach an equilibrium? Well, here’s an example of what happened for me:

After a single generation the population snapped into an equilibrium and remained hovering around it for the rest of the simulation. This is known as the Hardy-Weinberg equilibrium. (This is a good opportunity to derive the HW formulas on the board and discuss which assumptions are key to maintaining these genotype frequencies.) The code spits out the expected HW equilibrium given the initial p and q, and the actual average equilibrium after each run; here’s what it looks like for those initial parameters:

"Initial 'p': 0.5"
"Initial 'q': 0.5"
"Expected AA= 0.25 | Expected Aa= 0.5 | Expected aa= 0.25"
"Average AA= 0.26205 | Average Aa= 0.50067 | Average aa= 0.23728       
*(averages after first generation)"
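The “expected” values in that printout are just the Hardy-Weinberg proportions computed from the initial allele frequencies, which you can verify in a couple of lines:

```r
AA <- 0; Aa <- 2000; aa <- 0        # initial genotype counts from above
N <- AA + Aa + aa

p <- (2 * AA + Aa) / (2 * N)        # frequency of the 'A' allele: 0.5
q <- 1 - p                          # frequency of the 'a' allele: 0.5

# Hardy-Weinberg expected genotype frequencies: p^2, 2pq, q^2
c(AA = p^2, Aa = 2 * p * q, aa = q^2)   # 0.25, 0.50, 0.25
```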


Now the fun begins. What would happen if we started violating these “key assumptions” underlying the Hardy-Weinberg principle? Let’s see. What do you think might happen if we started with 20 heterozygotes instead of 2000?

#Number of each genotype in the population initially:
AA <- 0
Aa <- 20
aa <- 0


"Initial 'p': 0.5"
"Initial 'q': 0.5"
"Expected AA= 0.25 | Expected Aa= 0.5 | Expected aa= 0.25"
"Average AA= 0.55918 | Average Aa= 0.30918 | Average aa= 0.13163      
 *(averages after first generation)"


Woah woah woah, our equilibrium is all off! In fact, one allele went completely extinct after 38 generations. It’s almost like one of the alleles drifted towards fixation in this population.

What if the heterozygote left, on average, twice as many offspring as either homozygote?

#Number of each genotype in the population initially:
AA <- 0
Aa <- 20
aa <- 0

#Generations to run the simulation:
Generations <- 50

#Chance of allele mutations per reproduction event:
mutation.A.to.a <- 0
mutation.a.to.A <- 0

#Some measure of relative fitness.
fitness.AA <- 1
fitness.Aa <- 2
fitness.aa <- 1


"Initial 'p': 0.5"
"Initial 'q': 0.5"
"Expected AA= 0.25 | Expected Aa= 0.5 | Expected aa= 0.25"
"Average AA= 0.28469 | Average Aa= 0.49898 | Average aa= 0.21633       
*(averages after first generation)"


All of the genotypes remain in the population! This is a good spot to bring up fitness, overdominance, and underdominance.

Plus, you can play with the mutation rate between alleles (what happens if the whole population is AA but there is some chance of a spontaneous A->a mutation? What happens if this ‘a’ allele confers some fitness advantage?)

#Number of each genotype in the population initially:
AA <- 1000
Aa <- 0
aa <- 0

#Generations to run the simulation:
Generations <- 100

#Chance of allele mutations per reproduction event:
mutation.A.to.a <- 0.0001
mutation.a.to.A <- 0

#Some measure of relative fitness.
fitness.AA <- 1
fitness.Aa <- 2
fitness.aa <- 2.5



Chances are that whatever dynamics you observe from the model have already been described by population geneticists. And that’s the point: you generate the biology, figure out what’s going on to produce the dynamics you observe, and then learn how these biological phenomena have previously been described. There will also be a focus on the limitations of models and simulations.


Anyway, that’s my idea. Your thoughts are appreciated!




January 2015 fun facts

Woah, I’m way backlogged on blog posts! Don’t worry, I have some cool stuff in the works and I’ll share soon. In the meantime check out some of the science I’ve been starting my classes off with this month.

Aging research: blood to blood. Scientists can splice animals together by creating a wound in each animal and sewing them together; their natural wound-healing mechanisms join their bodies and their blood (it’s called parabiosis)! If you splice an old animal to a young one, the tissue in the old animal gets “rejuvenated” by the young animal’s blood. Sounds like the premise for a horror movie.

Scientists have discovered a new antibiotic that kills pathogens without detectable resistance.

Scientists have discovered that tumor cells can actually acquire previously lost DNA (in this case mitochondrial) from “normal” cells, and that the newly acquired DNA restores missing function. Think about that. Somatic cells (or cells that were once deemed somatic but now have become tumor cells) can horizontally transfer DNA. Biology textbooks get rewritten every day.

And, of course, I can’t introduce metabolic scaling and not discuss the invariance of heartbeats.


Mind-controlling parasites: how sci-fi are zombies anyway?


Halloween weekend is drawing to a close, and as I type this (looking out a coffee shop window) I can still see the zombie makeup on the faces of those passing by. It’s understandable why the whole zombie thing can be pretty terrifying. In the movies the protagonist usually watches their once fully autonomous friends and loved ones fall prey to some microscopic parasite and become a mindless vessel, obeying the will of their neural captors, tasked with ensuring the survival of the parasite and oblivious to their own health. Good thing it’s science fiction! Right? Well, anyone studying parasitology can tell you that in some cases it’s less fiction and more science.

Whenever I teach the lab on species interactions I always spend a good bit of time on mind-controlling parasites. First off, they’re just cool. Plus, there are a lot of captivating videos out there! One of my favorites being:

(p.s. larva emerging from a caterpillar body below, viewer discretion advised!)

Great music and sound effects aside, it’s always interesting and sort of mind-blowing to see the caterpillar actively defend the larva that just busted through its skin. It really gives you a sense of just how possessed an organism can become at the whim of a parasite. Another zombie-state-inducing parasite infects snails:

And another favorite, the inspiration for the zombie-survival game The Last of Us, infects and alters the behavior of entire forests full of insects:


Ok, so mind-controlling parasites might actually be all around us, but at least they only infect invertebrates. Right?! Well, no.

Rats have a natural (and understandable) aversion to cats. When they smell cat urine, they feel fear and head in the other direction. However, rats infected with the protozoan Toxoplasma gondii, which can only complete its sexual reproduction in the cat intestine, are actually drawn to cat urine. The parasite hijacks the sexual arousal pathway in the rat brain, so instead of feeling fear the rat feels sexual attraction to the cat odor. So, just like the snails in the video above, the rats seek out their natural predators for the benefit of their parasite.


Ok, so mind-controlling parasites can infect and manipulate the behavior of mammals as well. But surely humans, with their giant and complex brains, don’t have to worry about being influenced and controlled by the whims of a tiny microscopic organism. Right?! Well…

I have a habit of bringing up the universe that exists within multicellular organisms. It’s easy to think of this as a one-way interaction: a large organism goes about its business and the little organisms tag along for the ride. But the survival and wellbeing of the microbiome is extremely important, so important that hosts even synthesize food for their microbiome during periods of illness to ensure that their microbial friends stay happy.

Is it possible that some of our microbial friends could be manipulating our behavior for their benefit? Some scientists have recently suggested that this might be the case: we might be at the whims of a microbial puppet master. More research is needed to test these hypotheses, but I look forward to the day when taking a microbe-filled pill can change my appetite for the better and bolster my microbiome.

Outside of our bacterial microbiome, we also house a vast virome. Research published in PNAS this week has shown that humans can be infected with an algal virus, and this virus was associated with a 10% decrease in performance on visual processing exams. Additionally, mice infected with the virus took about 10% longer to navigate a maze and explored 20% less.

So, maybe we’re not so autonomous after all. Spooky! Happy Halloween!