
EEGs, Learning, and Deep Sleep

Who would have thought a summer fellowship would grant you sleep sessions during work hours! Not just that, but it also comes with the ability to explore the deepest phases of sleep and access to unlimited Delta waves in all shapes and heights! Well, that can only happen at Backyard Brains, right in the sleep lab I’m running this summer in collaboration with Om of Medicine.

Om of Medicine: Where the Magic Happens! Om is letting us use part of their lounge as our sleep lab, where subjects come and perform the study.
For the past couple of weeks, I have been working diligently on designing and implementing the experimental procedures to test whether inducing consolidation during sleep by cueing certain auditory stimuli can improve memory recall. This is done using the Targeted Memory Reactivation (TMR) technique, where we selectively target memories, reactivate them, and compare them to memories that are not targeted or cued with any stimuli. This methodology lets us explore different parameters and learn more about the specificity of memory formation and bias in learning. From here, my project splits into two main parts: the memory task and EEG recording/decoding.

For the first part, I am collaborating with Dr. Ken Norman from the Princeton Computational Memory Lab and two of his students, Robert Zhang and Everett Shen, to develop an iOS app for the memory tasks. The goal is a fully functional app that users can download from the App Store to run their own sleep studies.

The memory task consists of three main parts. The first is the learning task, where subjects watch 48 different images displayed at random locations on the screen, each paired with a distinct sound (for example, a cat with a meow). Subjects try to memorize where each image was displayed. This phase is followed by two consecutive rounds of testing with feedback, where subjects see each image and tap where they remember its correct location being. After this multi-stage learning phase, the subject takes the actual pre-sleep test, which is the same as the previous two rounds but without feedback. The second part of the app is the cueing phase, played during the nap while the subject is sleeping. The idea is to cue 24 targeted sounds out of the 48 the subject heard before the nap. For the other 24 untargeted sounds, we play a baseline sound that the subject did not hear before the nap (different from all 48 presented). Part three is the post-sleep test, which is again the same as the pre-sleep test.

The cueing phase (part two of the app) should play only during slow-wave sleep, when Delta waves are observed. Here comes the second cool aspect of my project: EEG recording and decoding.

Some screenshots from the current working version of the app. It is still being developed and improved upon. Code will be available soon on GitHub.

Scoring sleep stages and spotting Delta waves in real time can be very challenging. The end goal of this project is to detect deep sleep automatically and cue the sounds accordingly. For now, I am using our EEG setup and Spike Recorder to observe Delta waves in real time as the subject sleeps; once I see them, I start cueing the sounds from the app.
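Since the automatic detector isn’t built yet, here is a minimal sketch of how it could work: score a sliding window of EEG by how much of its spectral power falls in the Delta band, and trigger cueing when Delta dominates. The sampling rate, window length, and 50% threshold below are illustrative assumptions, not values from our actual setup.

```python
import numpy as np

def delta_band_power(window, fs):
    """Fraction of spectral power (above 0.5 Hz) in the 0.5-3 Hz Delta band."""
    spectrum = np.abs(np.fft.rfft(window * np.hanning(len(window)))) ** 2
    freqs = np.fft.rfftfreq(len(window), d=1.0 / fs)
    delta = spectrum[(freqs >= 0.5) & (freqs <= 3.0)].sum()
    total = spectrum[freqs >= 0.5].sum()  # ignore DC / slow drift below 0.5 Hz
    return delta / total

# Hypothetical usage: a 10 s window of EEG sampled at 100 Hz,
# simulated as a 75 uV Delta oscillation plus a little noise.
fs = 100
t = np.arange(0, 10, 1.0 / fs)
eeg = 75e-6 * np.sin(2 * np.pi * 1.5 * t) + 5e-6 * np.random.randn(len(t))
if delta_band_power(eeg, fs) > 0.5:  # 0.5 threshold is an assumption
    print("Delta-dominant window: start cueing")
```

In a real-time version, this function would run on each new window of samples streaming from the EEG shield, replacing the eyeballing I currently do in Spike Recorder.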

My beautiful Delta waves in different shapes and heights, taken from our subjects. Delta waves typically have a frequency of 0.5-3 Hz with an amplitude of around 75 microvolts.

After recording, I perform signal analysis and plot frequency and power graphs in different variations to check that Delta waves were occurring at the same times we did the cueing. So far, the results match!

                       Top Left: Subject 1, Top Right: Subject 2, Bottom Left: Subject 3, Bottom Right: Subject 4

One of the most challenging tasks in my project is finding subjects willing to volunteer, perform the task, and sleep. Since this step is crucial, I designed a brochure and handed it out during Tech Trek and at various events. There is also a Doodle poll where subjects can sign up for sessions.

Throughout this time, I learned MATLAB from scratch and worked more with electronics and soldering. During the sleep session, I play white noise from a generator and the cueing sounds from a speaker placed next to the subject’s head. The trick is to keep the cueing sounds no more than 4 dB above the white noise so as not to wake the subject. Setting this up took a lot of testing and experimenting with different wires, sound meters, and speakers. After waking up, all subjects were asked whether they heard any sounds while sleeping; all assured us they did not, which confirms the procedure is working. Next to the speaker sits the EEG shield connected to the Arduino. The electrode placement is as follows: reference electrode on the mastoid, with the active and ground electrodes over the frontal lobe using our EEG headband.

Top: iPad running the memory task. It is connected to the speaker placed inside the room by the subject’s head; I cue the sounds from it once I observe Delta waves. The Mac records EEG in real time for scoring and observing SWS. Both are extended outside the room so that I don’t wake the subject by sitting in the room with them.

Bottom: Speaker, white noise generator, and sound meter.

Subjects during the session. Photos were taken with the subjects’ permission, at the very end of the nap, right before waking them.

Finally, here comes the best part: getting data that agrees with the published literature!

This is a basic plot of the data we collected; more statistical analysis, error bars, and figure labeling will be added. The graphs show the mean distance in pixels for the 48 images in each category (cued and uncued), before and after sleep. The distance is measured between where the user tapped and the image’s original location, and is compared against a set threshold: larger distances mean more error and count as incorrect, while smaller distances count as correct. We can clearly see that both subjects performed worse on the uncued sounds after sleep than before. Subject 4 also clearly shows an improvement in recall for the cued images after sleep. This supports the TMR technique and shows the selectivity of memory consolidation and recall.
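As a toy illustration of this distance-based scoring (the 150-pixel threshold here is made up for the example, not the value we actually use):

```python
import math

RECALL_THRESHOLD_PX = 150  # illustrative threshold, not the study's real value

def recall_error(tapped, original):
    """Euclidean distance in pixels between the response and the true location."""
    return math.dist(tapped, original)

def is_correct(tapped, original, threshold=RECALL_THRESHOLD_PX):
    """A response counts as correct when its error is within the threshold."""
    return recall_error(tapped, original) <= threshold

# Hypothetical responses: (where the user tapped, where the image really was)
responses = [((310, 420), (300, 400)), ((120, 80), (400, 500))]
errors = [recall_error(tap, orig) for tap, orig in responses]
print(sum(errors) / len(errors))  # mean error in pixels for this set
```

The per-category means plotted above are just this mean error computed separately over the cued and uncued images, before and after sleep.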

The upcoming final month will be filled with more exciting work and experimentation. I will run more experiments on more subjects to double-check our results. Then I will start the control experiments, where some subjects will not sleep, and others will sleep but have no sounds cued at all. Stay tuned!!

 


Update: The Harmonics of Mosquito Mating

Introduction to the Project

Hi, Haley again!! It’s been an exciting couple of weeks: I’ve become more familiar with mosquitoes than I ever thought I would, learned a TON more MATLAB, and even got a few recordings!! My work recently has focused on perfecting the methodology for tethering a mosquito in a position mimicking its natural free-flight posture; the tricky part is ensuring it is tethered with its wings free to flap, as my recordings focus on that irritating noise generated by a mosquito’s wing beat patterns.

 

After much tedious practice, I finally figured out a way to anesthetize these little guys using ice water and tether them using a very thin insect pin with a tiny amount of insect wax. The goal was to ensure they were suspended in a secure position, reducing the chance of them getting loose and flying straight for my face (though that has happened a couple of times, so now whenever there is a stray fuzzy in the air around me, I duck for cover…little bit paranoid…). Below are some creepy cool pictures of my tethered little friends, captured under a low-power microscope.

I am fortunate enough to be able to purchase these mosquitoes (Aedes aegypti) from a research laboratory that specializes in bioassays of insect control agents for lab testing. So getting them has been easy… but maintaining them in a lab setting has proven very difficult. Once I receive my little ones in the mail, they usually live about 3-4 days, so once they are no longer viable for testing, and as I wait for my next shipment, you can find me frantically running around Nichols Arboretum attempting to catch some wild-type mosquitoes. I’ve gotten some pretty weird stares and even weirder questions while lurking in bushes near the Huron River trying to catch my prey, but hey, anything for science!!!!

Five seconds after this picture was taken, I fell into a pond.

Although it sounds so exciting and desirable, running around the Arboretum with mosquito nets actually isn’t the best part of my project this summer! We’re just getting to the good stuff!

Once I got a few mosquitoes tethered and ready to go, I was able to start recording some of the awesome sounds these guys produce in free flight. That pesky buzz of a nearby mosquito is actually a love signal, a sound they produce and adjust in order to mate with one another! However, before recording these adjustments in wing beat frequency when mosquitoes are placed within earshot of each other, I needed to record individual mosquitoes to build up a database that clearly shows the difference in base frequency between males and females. Research has shown that females have a base frequency of about 400 Hz, while males are around 600 Hz, so the next stage of my research was to verify this phenomenon via my own data collection and analysis.

This preliminary data recording stage is still ongoing, but nearing its transition into recording mating pairs, as the data I have collected thus far is pretty spot on with the research! As with any scientific study, there will always be factors that make perfectly reproducing results from another research study challenging, so my goal was to get as close as possible, and I did!

Here are some samples…

Female

Male

When I was ready to start collecting data, I first focused on recording the wing beat frequencies of as many male mosquitoes as I could before the end of their lifecycle. Thus far, I have obtained solid data from 9 male mosquitoes, whose base frequencies were in the range I anticipated (though slightly higher than the published data) and whose harmonics are exactly as expected. Below is one spectrogram of a male recording, as well as a graph showing the frequency distribution of all 9 male recordings during free flight.
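For readers curious how a base frequency gets pulled out of a recording, here is a simple sketch: take the magnitude spectrum and pick the strongest peak in the range where wing beat fundamentals live. The band limits and sampling rate are assumptions for the example, not my exact analysis pipeline (which lives in MATLAB).

```python
import numpy as np

def wingbeat_frequency(signal, fs):
    """Estimate the fundamental wing beat frequency as the strongest
    spectral peak between 200 and 1000 Hz."""
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    band = (freqs >= 200) & (freqs <= 1000)
    return freqs[band][np.argmax(spectrum[band])]

# Synthetic "male" tone: a 600 Hz fundamental with two harmonics,
# one second of audio at a typical 44.1 kHz sampling rate.
fs = 44100
t = np.arange(0, 1.0, 1.0 / fs)
sig = (np.sin(2 * np.pi * 600 * t)
       + 0.5 * np.sin(2 * np.pi * 1200 * t)
       + 0.25 * np.sin(2 * np.pi * 1800 * t))
print(wingbeat_frequency(sig, fs))  # close to 600 Hz
```

Restricting the search band keeps the harmonics (1200 Hz, 1800 Hz, …) from being mistaken for the fundamental.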

Next, I moved on to recording the female mosquitoes in free flight. These little ones were much harder to tether for a number of reasons: their size, their weaker response to the cold-water anesthesia (they kept waking up during pinning), and the insect wax not solidifying on their abdomens as easily as on the males. All I have to say is, if you see a million mosquitoes flying happily in our makerspace, don’t look at me!!!!

After overcoming that challenge, I successfully obtained recordings from 7 female mosquitoes, though I tethered many more than that; for unknown reasons, many of the females in this batch wouldn’t beat their wings. The 7 good recordings are shown in the graph below, along with one female spectrogram, in a format identical to the male distribution graph above.

Being super happy with this data, I wanted to take the analysis one step further to connect my findings with that of Ron Hoy, the professor at Cornell and mastermind behind all of this research. A figure that reappears throughout his research (pictured below) shows a clear spectrogram depicting the harmonic stack of sound clips from both male and female recording sessions.  

My goal was to reproduce this figure with the data I collected thus far in my research. That figure is shown below, with a little extra color coding thanks to my OCD (red = female, blue = male)!

The rest of my research plan consists of obtaining more individual recordings to clean up the figure above, and then recording mating-pair interactions! All of these recordings will be conducted in a soundproof box I built, with a laser-cut track in the lid that lets me manually move the male mosquito in and out of the female’s hearing range during the recording. This procedure should clearly reveal how the flight tones change when the mating process begins.

Stay tuned to see where the next stage of my project leads! I can’t wait!


Detecting Electric Fish

Hi! I’m Shreya, and I just graduated in Electronics Engineering from the Dwarkadas J. Sanghvi College of Engineering, affiliated with the University of Mumbai. During the last two years of my undergraduate study, I spent most of my vacations, free time, and some weekends working as a research intern at the Indian Institute of Technology (IIT) Bombay, where I completed several computer programming and embedded electronics projects. My undergraduate capstone project involved Artificial Neural Networks for ECG beat classification; it was also completed at and funded by IIT Bombay.

Me on a train in India (during our final year class trip to Rishikesh)

Recently I have been really interested in neuroscience and EEGs, which is how I discovered Backyard Brains. I had been following their blog and Facebook posts for a few months, and that’s how I found out about this internship! I joined Backyard Brains on 12th June (got delayed because of final exams!) and I will be working on the Electric Fish project here for six months. This is my first time in the USA and so far, it’s been great! I’ve been enjoying the climate here – it’s a good change from the intense summer heat in Mumbai. I also love how Ann Arbor has so many different flower species!

Me and some beautiful flowers in Ann Arbor

Electric fish are a fascinating group of animals that generate and detect electric fields around them to stun prey, communicate with other electric fish, detect objects, and navigate. However, finding and tracking them can be difficult, and many species have yet to be discovered! This project aims to build a device that can be deployed in the freshwater rivers of South America to detect and record the Electric Organ Discharges (EODs) of weakly electric fish as they swim past. Each species has a unique EOD, which can be either wave-type or pulse-type; the species of a recorded fish can therefore be estimated from its EOD, which can also be used to study the fish’s behaviour. This project is based on research that Dr. Eric Fortune of the New Jersey Institute of Technology conducted in Ecuador. I will be using the Elephant Nose fish to test the device while prototyping.

Elephant Nose fish (source: Wikimedia Commons)

The Elephant Nose fish produces EODs that look like spikes when recorded with electrodes. So far, I have been able to amplify these spikes and view them on an oscilloscope. Next, I will improve the filter and amplifier, use an Arduino to detect spikes in the recorded data, and save this information with timestamps to an SD card. Among the design challenges: the device needs to be waterproof, and it should have power-saving capabilities, since it might have to run on batteries or solar energy for months at a time before any electric fish swims by.
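The detector itself will run on the Arduino, but the core logic (a threshold crossing with a refractory period, so one EOD pulse isn’t counted twice) can be sketched in a few lines. The threshold, sampling rate, and refractory period below are placeholder values for illustration, not the device’s final parameters.

```python
def detect_spikes(samples, fs, threshold, refractory_s=0.002):
    """Return timestamps (in seconds) where the signal crosses above
    threshold, enforcing a refractory period between detections."""
    timestamps = []
    last = -refractory_s
    for i in range(1, len(samples)):
        t = i / fs
        # upward crossing: previous sample below threshold, this one at/above
        if samples[i - 1] < threshold <= samples[i] and t - last >= refractory_s:
            timestamps.append(t)
            last = t
    return timestamps

# Toy trace sampled at 10 kHz: two pulses rising above a 0.5 threshold
fs = 10_000
trace = [0.0] * 100
trace[20] = trace[21] = 0.8   # first EOD-like pulse
trace[70] = 0.9               # second EOD-like pulse
print(detect_spikes(trace, fs, threshold=0.5))  # → [0.002, 0.007]
```

On the Arduino, the same loop would run over ADC readings as they arrive, writing each timestamp to the SD card instead of a list.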

Below are some of the spikes I recorded from the Elephant Nose fish as seen on an oscilloscope (along with 60 Hz noise).

I’m really enjoying working on this project here at Backyard Brains, and I look forward to finishing it!