
Hacking Sleep, Memory

So, is memory hacking during sleep really a thing? After endless runs back and forth to Om of Medicine chasing down my subjects, countless hours staring at the Mona Lisa of sleep (delta waves), and many other ups and downs this summer…

I can finally tell you it is quite possible!!

August is here, which sadly means my last few days at Backyard Brains! So let me come back one last time to give you a final peek at what I have been up to for the past month and a wrap-up of all my findings and the exciting results of my research!

Since my last blog posts (Improving Memory Formation During Sleep and Learning and Deep Sleep), I have been running my study on as many subjects as I could possibly find. Along the way, we added many new features to our TMR app to let us collect data as efficiently as possible.

The app's settings GUI looks very nice now, with new colors and a more user-friendly environment!

The reference grid was changed to colored boxes, as shown, and the image no longer appears within the confines of the boxes. We added this change after noticing that the old grid was slightly biasing our participants' performance. Another exciting addition: we can now save experiment sessions within the app itself and come back later to continue where we left off. And our exporting function was fully revolutionized. Take a look:

This is the pseudocode I worked out with Greg to organize our data in a better-structured form. We now export JSON files whose entries are easily identifiable and accessible in MATLAB for data analysis.
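
The exact schema isn't shown here, but to give a flavor, a hypothetical session export along these lines parses cleanly (all field names are illustrative, not the app's actual keys):

```json
{
  "subject": "S01",
  "phase": "post_sleep_test",
  "trials": [
    {
      "image": "cat.png",
      "cued": true,
      "target_position": { "x": 512.0, "y": 384.0 },
      "tap_position": { "x": 498.5, "y": 402.0 },
      "distance_points": 22.5
    }
  ]
}
```

A structure like this loads straight into MATLAB with jsondecode, turning each trial into an indexable struct for analysis.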

The TMR technique is a powerful tool to play with: it lets us test the selectivity of memory consolidation in various ways, experiment with many parameters, and answer different research questions. For that, I wanted built-in controls that give the user the choice to change the parameters of sound cueing. The new options are:

  • Setting the percentage of sounds to be cued during treatment. The default I have been testing with (following the published papers) is 50% of all sounds presented in the learning phase (24 out of the 48). It could be interesting to test whether cueing 0%, 25%, 75%, or 100% would hinder or enhance the effectiveness of TMR on memory consolidation.
  • Manually selecting cues (and their corresponding images) if a user would like to test with a different number of targets than the default 48. Check this out:

  • The most exciting part: the control experiment is now ready! To validate our results, we need to run control experiments in which subjects do a continuous reaction task instead of sleeping. We embed the cues within this task as well and test whether TMR still has an effect on memory consolidation during wakefulness. The game consists of 4 rounds: a 2.5-minute training phase, then three 7.5-minute testing phases. Sound cues play during the second testing round, starting 1.5 minutes after the round begins. The user sees pairs of numbers on the screen and should tap only when both numbers are odd or both are even. Here is a video from the app showing how the game works:
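
For the curious, here is a minimal Python sketch of the task's timing and response rule as described above. The constants and function names are mine, not the app's:

```python
# Timing constants taken from the task description above.
TRAINING_S = 2.5 * 60   # one 2.5-minute training round
TESTING_S = 7.5 * 60    # three 7.5-minute testing rounds
CUE_ONSET_S = 1.5 * 60  # cues begin 1.5 minutes into the second testing round

def should_respond(a: int, b: int) -> bool:
    """The user taps only when both numbers are odd or both are even."""
    return a % 2 == b % 2

def cues_active(testing_round: int, t_in_round_s: float) -> bool:
    """Sound cues play only in testing round 2, after the onset delay."""
    return testing_round == 2 and t_in_round_s >= CUE_ONSET_S
```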

With all these new features in place, I was able to test on more subjects. This table includes the full database of all the participants I had over the summer. It was very hard to find people willing to spare the time to do the study (which takes up to 2.5-3 hours) in the middle of the day, so most of my subjects were fellow interns and employees at BYB, plus Ann Arbor locals who volunteered during Tech Trek or signed up through my Doodle poll.

The mean start time for slow-wave sleep (SWS) was 37.5 +/- 5.1 (SE) minutes among the subjects who managed to sleep fully for 90 minutes or more (the first 8 subjects). The experimental results and post-hoc analysis are based on data from the first 4 subjects, as they were the ones able to complete the study fully. The control experiment was conducted on the last two subjects.

Here comes the best part, what we have been waiting for:

Result: Cued sounds during SWS showed better recall after sleep than uncued sounds

This graph is pretty interesting and tells us a lot, but it might not be very intuitive at first, so let's walk through it! The change in spatial recall is measured as a difference in distance, in points, calculated within the app itself. Points are the units of on-screen position in iOS and Apple devices, similar to pixels; the conversion ratio is 1 point = 0.0352778 cm. The app calculates the distance between where the user taps on the screen (the x,y coordinates of the single point where they remember the image appearing) and the image's original correct location (the x,y coordinates of its bottom-left corner). The larger the distance between the two, the further off the subject was from the correct location, reflecting less accurate recall and more forgetting.

This distance in points is measured for each image in both the pre-sleep and post-sleep tests. To find the difference in performance, I subtract: after-sleep distance minus before-sleep distance. A negative number means the distance after sleep is smaller than before, indicating an improvement in performance and recall: the subject tapped closer to the correct image location, so they remember it better.

Grouping the data from all 4 subjects and separating the images into cued and uncued, we have 24 cued images per subject, or 96 cued images across all 4 subjects, and likewise for uncued, for a total of 192 images on the x-axis. With this in mind, the graph shows the distribution within and across subjects. We can see a higher concentration of blue columns with large positive differences in distance above the x-axis for the uncued images, showing that subjects forgot more (scored a less accurate recall) for uncued images. On the other hand, we see a higher concentration of green columns with large negative differences in distance below the x-axis for cued images, showing less forgetting and a more accurate recall.
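
For concreteness, here is a minimal Python sketch of the distance measure and recall-change calculation described above. The app computes this internally; the function and variable names here are mine, purely for illustration.

```python
import math

POINTS_TO_CM = 0.0352778  # 1 iOS point = 0.0352778 cm

def distance_points(tap_xy, target_xy):
    """Euclidean distance (in points) from the subject's tap to the
    image's correct location (its bottom-left corner)."""
    return math.hypot(tap_xy[0] - target_xy[0], tap_xy[1] - target_xy[1])

def recall_change(pre_sleep_distance, post_sleep_distance):
    """After-sleep minus before-sleep error: a negative value means the
    tap landed closer after sleep, i.e. better recall."""
    return post_sleep_distance - pre_sleep_distance
```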

This next graph is much easier to read: it takes the mean of all the differences for the 96 cued and 96 uncued images and plots them, showing only the final overall change in recall for all subjects grouped together. We can see a pretty interesting, significant difference in performance between the two.

Summary: Better recall for cued images (-12.95 points +/- 15.80 SE) compared to uncued images (33.09 points +/- 16.26 SE), using a two-sample independent t-test (p = 0.04).
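
Under the hood, that comparison is a standard two-sample t-test over the per-image changes. A sketch of how it could be run (random placeholder data stands in for the real 96-value groups; only the test call matters here):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Placeholder data standing in for the per-image recall changes
# (post minus pre, in points) of the 96 cued and 96 uncued images;
# the SD is recovered from the reported SE as SE * sqrt(n).
cued_changes = rng.normal(-12.95, 15.80 * np.sqrt(96), size=96)
uncued_changes = rng.normal(33.09, 16.26 * np.sqrt(96), size=96)

t_stat, p_value = stats.ttest_ind(cued_changes, uncued_changes)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```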

This graph shows another analysis of the results: the percentage correct for cued and uncued images before and after sleep, for all subjects. This is the number of images subjects got correct out of the 24 cued or 24 uncued, before and after sleep. Correctness is decided by comparing the distance in points discussed above against a set threshold of 15% of the screen width, where the percentage of screen width is simply distance in points / width of the screen in points. The width adjusts automatically to whichever Apple device is being used. If the distance is less than 15% of the screen width, the response counts as correct. We can see that subjects had a higher percent correct for cued images after sleep, a lower percent correct for uncued images, and an overall higher percent correct for cued vs. uncued images after sleep.
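
In code, that correctness rule is essentially a one-liner; here is a sketch (the app's own variable names will differ):

```python
CORRECT_FRACTION = 0.15  # threshold: 15% of the screen width

def is_correct(distance_points: float, screen_width_points: float) -> bool:
    """A recalled location counts as correct if its error is under 15%
    of the device's screen width (both measured in points)."""
    return distance_points / screen_width_points < CORRECT_FRACTION
```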

Assembling all these puzzle pieces, we can conclude that we are seeing a general trend: a DIY version of the Targeted Memory Reactivation (TMR) technique could potentially enhance memory consolidation during SWS, with possible future applications in learning and teaching. It appears that TMR effectively biases spatial associative memory consolidation by reducing forgetting of the cued images, rather than providing a pure gain in remembering them better. We definitely still need to test more subjects before drawing firm conclusions about significance.

The control experiments, cueing sounds with no sleep, have been conducted on only two subjects so far. The results show the same trend as the main experiment, with slight differences.

Summary: Better recall for cued images (-23.60 points +/- 13.29 SE) compared to uncued images (46.77 points +/- 21.53 SE), using a two-sample independent t-test (p = 0.007).

It looks like performance was slightly better for the cued images, and worse for the uncued ones, compared to the results above. We have to keep in mind that although the control results are significant, they come from only two subjects, and more data needs to be collected. For now, though, this shows us something surprising yet reasonable: TMR appears to work both during SWS and during wakefulness. But which is better? Where does the maximum memory consolidation happen? Does SWS promote consolidation of different types of memory compared to wakefulness? All these questions are yet to be answered!

So, my research does not stop here; it will continue beyond my summer fellowship with BYB. My goal is to keep collecting data and to explore the questions above, along with others that come up along the way. To do that, and to keep this research fully DIY and accessible to the public, my next step is taking the cueing of sounds during sleep to the next level: automatic cueing using machine learning! This would let users run the full study on themselves by buying the Heart and Brain SpikerShield and downloading our TMR app, with no researcher needed to watch their EEG during sleep and manually cue the sounds upon detecting delta waves, as I have been doing. With that in place, the hope is to provide a future cloud service for customer data and to use TMR to tackle larger issues:

Can it be used in PTSD research to help patients overcome traumatic memories? Can it be applied in educational settings to improve learning and teaching in institutions? Would it give us more insight into how our brains work when it comes to memory and potentially find a link to Alzheimer’s research?
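
As for the automatic cueing itself, one plausible starting point (far simpler than a trained machine-learning model, and purely my own sketch rather than anything BYB has built) is to flag slow-wave sleep whenever delta-band power dominates the EEG:

```python
import numpy as np
from scipy.signal import welch

def delta_ratio(eeg: np.ndarray, fs: float) -> float:
    """Fraction of 0.5-30 Hz EEG power falling in the delta band (0.5-4 Hz)."""
    freqs, psd = welch(eeg, fs=fs, nperseg=int(4 * fs))
    delta = (freqs >= 0.5) & (freqs < 4.0)
    total = (freqs >= 0.5) & (freqs < 30.0)
    return psd[delta].sum() / psd[total].sum()

def should_cue(eeg_window: np.ndarray, fs: float, threshold: float = 0.5) -> bool:
    """Play sound cues only while delta power dominates the current window."""
    return delta_ratio(eeg_window, fs) > threshold
```

A learned classifier could later replace the fixed threshold, but a rule like this would be enough to close the loop on a SpikerShield EEG stream.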

Stay tuned! You will be hearing from me again in the near future.

Before I leave you for the summer, I would love to share some pictures from my best moments during this fellowship with my fellow interns and the BYB staff. Last week, TED visited Ann Arbor to film our projects for episodes of an internet show that will go live sometime this fall. This has been by far the best and most exciting part of the experience. We all worked so hard preparing for it and spent long days presenting and explaining our work in front of cameras and lights! I hope you will like it and share our excitement soon! Yesterday, we also presented our posters at the UROP Summer Symposium at the University of Michigan, where people loved my project and gave some very good feedback on future directions.

It has been a pleasure interning with BYB this summer. It was an exciting and moving journey that added valuable lessons to my academic and personal growth. I truly appreciate Greg Gage and all his love and support in pushing me to become a better researcher and a believer in his famous piece of advice: "skepticism is a virtue." This summer was not only about learning how to cook, code in MATLAB, deal with my best friend (EEGs), or even network and get one step closer to my professional career aspirations. It was a reassuring discovery of my love for research and my passion for revolutionizing neuroscience and making it available to everyone!

See if you can spot how many times I wore my favorite lucky blue blouse! It should go down in history.

With all the awesome interns! Thank you for the greatest summer 🙂 We made wonderful memories and shared many funny moments, and got to explore Michigan together!! This is not a goodbye!!


Changing Taste Perception with Optogenetics

Hey everyone! My summer of research in Ann Arbor has come to an end, and it's been an awesome experience. It's been a busy 10 weeks of making daily improvements to my rig, resoldering the flyPAD, collecting data, and presenting what I found to others. The original goal of this project was to see if altering taste perception is possible by activating taste neurons with light, a technique called optogenetics. To test this, I stimulated channelrhodopsin in the fruit flies' gr5a neurons, which give them a sweet taste response.

If you missed them, here are my first post, Optogenetics with the flyPAD, and my second post, The Taste Preferences of Fruit Flies.

The flyPAD setup in its full glory

Naturally, fruit flies prefer eating sugary foods over unsweet ones, much like humans. That was the case when I offered them banana, a sweet fruit, versus avocado, broccoli, and brussels sprouts, the unsweet alternatives: the flies always preferred banana over everything else. However, when the Arduinos were programmed to pulse red light at the flies the instant they sipped the unsweet foods, their gr5a neurons were activated, tricking them into thinking that what they were eating was sweet. The data are shown below as bar graphs of the average number of sips and of sip percentage, to show how food preference changed.
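
Before the graphs, here is a minimal sketch of the closed-loop rule the rig implements. It's written in Python for readability (the real rig runs on an Arduino reading the flyPAD's capacitive electrodes), and the threshold and names are invented for illustration:

```python
import time

SIP_THRESHOLD = 0.5  # arbitrary units; would be tuned on the real capacitive signal

def closed_loop_step(unsweet_channel_reading: float, led_on, led_off,
                     pulse_ms: float = 100) -> None:
    """If a sip is detected on the unsweet food's electrode, immediately
    pulse the red LED to activate channelrhodopsin in the gr5a neurons."""
    if unsweet_channel_reading > SIP_THRESHOLD:
        led_on()
        time.sleep(pulse_ms / 1000.0)
        led_off()
```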

As we see here, the flies naturally prefer banana over avocado

But this preference switched when stimulation of channelrhodopsin activated their sweet-tasting (gr5a) neurons

Flies, naturally, REALLY prefer banana over broccoli

 

The stark preference we saw earlier disappeared, and the flies ate some of both foods: more of the newly sweet-tasting broccoli and less of the banana.

Again, we see that banana wins the prize naturally.

 

And again, with stimulation, we see the sweet and the non-sweet options begin to level out

 

So, changing the subjective perception of taste is possible, as we could make a fly's least preferred food become its absolute favorite! These findings show that subjective perception is alterable, and also that optogenetics is a neuroscience technique that can be done with minimal, affordable equipment.

If I end up continuing this project, I am interested to see how long the flies' altered preference can persist. Anecdotally, I've seen that when the LED pulses stop, some flies continue to visit the unsweet food they were tricked into tasting as sweet. This wasn't within the scope of my summer research, but experiments on it would be interesting: they could reveal how powerful optogenetics is by showing a change in food preference that persists after the stimulation trials have stopped.

After finding these results, I compiled them into a poster, which I recently presented at a UROP (Undergraduate Research Opportunity Program) symposium at the University of Michigan. It was fun explaining my summer's work to the public and to other researchers. Got a ribbon for it too!

Call me “Blue Ribbon”

A close up of my poster!

Aside from collecting data in the lab, I also had the chance to showcase my project with TED for their upcoming series of episodes focused on the Backyard Brains research fellows' projects. I conducted experiments for them and gave step-by-step walkthroughs of how they are carried out. Stay tuned for their posts this fall to catch our episodes!

Getting filmed

Huge thanks to Greg for mentoring me this summer and introducing me to the world of Neuroscience research in the coolest way possible with BYB.

Thank you so much to Backyard Brains for giving me this amazing opportunity and to all the research fellows who made it a really fun summer!


Visualizing Harmonic Convergence in Mosquito Mating

Wow, what a summer!!! I have some exciting news to report…I didn’t get bit by ONE mosquito all summer!!! Just kidding, my project is a little more exciting than that! I did it! I successfully put together and executed a project that I was a little iffy about back in May, and developed a new-found love for mosquitoes [fake news, don’t tell them I said that!]. I now like to be referred to as the mosquito whisperer, so if you see me on the streets, I will not respond to any other name.

But now, let's get to the good stuff! Last time you heard from me, I was getting ready to start recording male/female pairs of mosquitoes. Now I have about 7,000 audio and video recordings of these interactions, and I couldn't be happier with the data I collected! The goal for this stage of my research was to observe whether mosquitoes actually communicate with one another to signal their interest in mating, or basically flirt. Below are the visual results from the previous study.

For my own recordings, I detected the presence of these interactions by importing my audio files into a program called Audacity. Within Audacity, I could convert the sound file into a spectrogram that clearly showed me the frequencies produced by the mosquitoes in the recording. What the heck am I talking about, you ask?? Below is an example of a recording spectrogram that revealed a converging interaction!

But before I get into explaining the scary pink and blue stuff above, let's talk about how I got these recordings in the first place; that's the fun part (minus the 500 times mosquitoes got loose in the lab and attacked all of my friends…losers)! About midway through the summer, I changed some of my methods to make my procedure a little easier and reduce the number of casualties caused by pinning my little friends onto insect pins…yeah, they were not happy with me when they woke up from their nap to find themselves stuck to a wire…but you've got to do what you've got to do for science!!!!!

At the beginning of the summer, I was using insect wax (a yummy combination of beeswax and rosin) to fix these guys to their new home, but it turned out the wax wasn't strong enough to keep the mosquitoes in place when they woke up; more often than not, they flew right off the pin and straight for my face. So I tried pinning them with a tiny amount of superglue instead, and it worked magically! The trick was to touch the superglued side of the pin to the mosquito's thorax (pictured below) instead of the abdomen, which is where I had been attaching them with the insect wax. When I pinned their abdomen with superglue, their wings would sometimes get stuck to the pin, making it a little difficult to get a good recording when the wings couldn't move… The thorax, instead, provided the perfect amount of surface area for the pin without interfering with their antennae or wings at all.

Once I adopted this method, pinning them was a breeze! I kid you not, I could probably pin 20 mosquitoes within 30 seconds. You're impressed, I know; I was too… Below are a few examples of my mad skills.

      

Don't they look so comfortable and happy!? Next, I set up my recording stands, which were actually 3D-printed 'micro-manipulators' designed by Backyard Brains! My company is so cool… These stands held the mosquitoes in place, with the help of some silly putty, for the duration of the experiment. They were perfect.

 

Now I was ready to record!! Below is a beautiful video of one of my experiments (I’m a little proud of myself, can you tell?) Make sure you turn on your sound!!

 

How creepy is that??? These noises will be burned into my brain for the rest of my life! But isn’t it also super cool? You can definitely hear the difference in sound between the two sexes, but can you hear when they begin converging?? Listen again.

If you're thinking that it happens roughly 20 seconds into the video and lasts about 15 seconds, you're right!! But just to be safe and make sure that the noises we were hearing were indeed interactions, I imported both files into MATLAB for a closer look.

Here you can see the two different frequencies of the female and the male (though there is a bit of noise masking the female's fundamental frequency). The key to detecting an interaction is to look at the higher frequencies, up in the harmonics around 1200 Hz, because this is where convergence normally occurs. And lucky for us, it did! On camera! I was so excited I just about packed up and called it a day, but I really wanted to see more interactions, so I pinned 8 million more mosquitoes and got down to business! In the end, I successfully recorded both audio and video from 49 male/female pairs, observing interactions in 33 of them. That means that, in my small sample, the pairs communicated a love interest to one another 67% of the time! Gross, get a room!!!!
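
If you want to try a similar convergence check yourself, here is a rough Python equivalent of what I did in MATLAB. The filename, FFT size, and band edges are illustrative, not the exact values I used:

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

fs, audio = wavfile.read("pair_recording.wav")  # hypothetical filename
if audio.ndim > 1:          # keep a single channel if the file is stereo
    audio = audio[:, 0]

freqs, times, sxx = spectrogram(audio, fs=fs, nperseg=4096)

# Convergence shows up in the harmonics near 1200 Hz, so track the peak
# frequency in that band over time and look for the two tones merging.
band = (freqs > 1000) & (freqs < 1400)
peak_track = freqs[band][np.argmax(sxx[band, :], axis=0)]
```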

Nearing the end of my time in Ann Arbor, I finally finished recording, throwing in the towel on my beloved new hobby, and was ready to start processing my data in the hope of making it a little more 'Hollywood,' as Greg would say! Little did I know, this process wasn't as appealing as I first thought, and on multiple occasions I considered playing with some more mosquitoes just to get away from the madness known as MATLAB. Lucky for me, I had a MATLAB expert living with me (hmmm…maybe that's why we became best friends, since she couldn't escape me anytime I opened my computer to work!). Christy helped me create the most magical, color-coded, satisfying, and all-around perfect video: not only my little buddies interacting, but also a spectrogram underneath that plays in perfect sync with the original video recording! Brace yourselves…you will never see anything more beautiful in your life…

 

 

If you caught yourself replaying it multiple times, don’t fret, as you will catch me playing it periodically throughout the day just for fun. I’m not a nerd. But look, I was successful!!!

We also presented our research at a poster symposium at the University of Michigan!

So now is about the time when we wrap up!!! Ah, don't make me leave!!!! But I am so happy with the work I produced this summer, and I feel so lucky that I got the chance to be part of this program. Greg Gage, you are the best boss I have ever had (don't tell that to my dad, since he's the only other boss I've had…) and I will be forever thankful for the impact you had on my life, not only as a researcher but also as an individual. I love you and your family to pieces, especially your little ones who taught me all about Peppa Pig and are still convinced my name is 'Dirt.' Wonder where they got that…cough, cough, Christy. I already miss you guys, and I haven't even left Ann Arbor yet! I'd also like to thank all of the staff at Backyard Brains (Stanislav, Zorica, Will, Zach, Caty, Catherine, and John), who made my time here so worthwhile and comfortable: I never felt alone, even when my MATLAB would crash, or when my fellow interns would shun me for letting some mosquitoes loose in the lab…

And last but not least, thank you to all of the BYB interns that made this summer one for the books! You will all be a part of my life forever, and I can’t wait to see where our lives take us once we leave each other this evening. You’re all such wonderful people, and I couldn’t have asked for better friends. Love you guys!!

Backyard Brains forever!!!! (Tattoo idea, interns?????)