
Houston, we have a datum

Pennywise, the dancing clown

The newest addition to our mantis shrimp family is a gorgeous green-black Gonodactylus smithii named Pennywise. The Gonodactylus genus has been my fourteen-year-old brother's favorite genus ever since I told him that it essentially means scrotum fingers, as the two raptorial appendages held at the ready take on a somewhat humorous shape. For a review of mantis shrimp anatomy, see my last post here. The species name, smithii, is related to the word smithy, or blacksmith, presumably because blacksmiths and this mantis shrimp like to hammer things. Unlike the relatively tame Odontodactylus scyllarus (peacock) mantis, this species has significantly bigger hammers, and as such packs a bigger punch. I made the mistake of proffering my nail only once. When particularly aggravated, he will detach his dactyls from his propus and extend them toward me, revealing the cruel hooks at their tips that are usually hidden, as though he's flipping me the bird. A little high-pitched voice in my head dubs him screaming "curse youuuu!!!" whenever he does this.

Pennywise on our makeshift operating table with his backpack affixed to his carapace. Wires have yet to be cut and inserted into his merus.

In my last post, I proposed a thought experiment that would become useful once I started gathering data: what does it mean to the mantis shrimp if, every minute or so, I put my finger near his burrow and pull it back when he strikes?

You might predict that after a few intervals of striking, the mantis shrimp would no longer strike as readily. Perhaps it would strike every other time, then after a few more intervals only every fifth time, and then not at all. This kind of learning is called habituation, and it can be a big confound in behavioral experiments. It occurs because the mantis slowly realizes that I am not a real threat (after all, I'm not punching back, and I retract my finger after one punch). But the mantis shrimp does not have a perfect memory, so there might be an interval longer than one minute at which the rate of habituation would be so slow as to not happen at all. In other words, if I waited long enough between events of sticking my finger in the water, the mantis shrimp might not remember as clearly that I presented no real threat, and would probably keep punching with punctual predictability.

Featherclown is pictured on the right, photogenically showing off said Patek restraint, though he didn’t feel like punching that day.

Houston, we have more than one.

I've been mulling over this thought experiment a lot because this past Friday, I got my first round of data! Pennywise was kind enough to lend his EMGs for several rounds of Q-tip-coated-in-shrimp-paste bashing. As soon as I put the Q-tip in front of him, he hit it with a vengeance, and the second time went much the same; the third time, however, I had to prod him a little. The fourth time, he seemed uninterested. Evidently, Pennywise habituates very fast. I left him alone for a bit, hoping the habituation would wear off, and returned a few minutes later, but after a mere 20 minutes or so he was done for the day. Next time, I'll try to space out my Q-tip presentations a bit more; otherwise, Pennywise might become totally habituated to my stimuli.

I placed the probes in Pennywise's extensor, the muscle that is responsible for building up the strike power, and here's what we got! On the top in red is the audio trace. I've highlighted the sound of the pop from Pennywise striking a Q-tip. I don't know if there would be observable cavitation here since the Q-tip is soft and held lightly, so this pop is probably just the sound of the dactyl heel hitting the target. The small green jagged spikes are the extensor's activity, representing muscle twitches that are adding energy to the "spring," or saddle. As I noted in my first post, this activity should represent the coactivation phase, where the flexor and the extensor both tense to build up energy in the saddle. Let's compare the original paper with these data.
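As an aside, here's a minimal sketch of how one might pull the coactivation window out of a recording like this, assuming the audio and EMG channels are available as synchronized NumPy arrays. The file names, sample rate, window length, and threshold below are placeholders rather than our actual recording setup.

```python
import numpy as np

FS = 10000  # assumed sample rate in Hz (placeholder)
audio = np.load("pennywise_audio.npy")  # hypothetical synchronized audio channel
emg = np.load("pennywise_emg.npy")      # hypothetical synchronized EMG channel

# 1) Find the strike "pop" as the loudest point in the audio channel.
pop_idx = int(np.argmax(np.abs(audio)))

# 2) Grab the EMG in the ~100 ms leading up to the pop, which is roughly
#    where the coactivation spikes should sit.
window = int(0.1 * FS)
coactivation = emg[max(0, pop_idx - window):pop_idx]

# 3) Count upward threshold crossings as a crude measure of extensor activity.
thresh = 4 * np.std(emg)  # arbitrary multiple of the overall signal spread
crossings = np.sum((coactivation[:-1] < thresh) & (coactivation[1:] >= thresh))
print(f"Pop at {pop_idx / FS:.3f} s, {crossings} EMG spikes in the prior 100 ms")
```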


Obviously, my spikes aren’t as large as the ones in the journal article, but you can kind of tell that my trace is probably in the coactivation phase. I’m looking forward to collecting more data and starting to find patterns. Also, there’s another member of our mantis shrimp family coming in the next few days! Keep an eye out for my next and final post where I talk about results and the surprising namesake of the Squilla empusa, currently travelling in luxury by way of the US Postal Service.


Volume Threshold for Songbird Audio

Hello again everyone! It's Yifan here with the songbird project. Like my other colleagues, I attended the 4th of July parade in Ann Arbor, which was very fun. I made a very rugged cardinal helmet that looks like a rooster hat, but I guess a rooster also counts as a kind of bird, so it turned out just fine.

Anyways, since the last blog post, I have shifted my work emphasis to the user interface. After some discussions with my supervisors, we've decided to change the scheme a little. Instead of using machine learning to detect onsets in a recording, we are going to make an interface that lets users select an appropriate volume threshold to do the pre-processing. Then, we will use our machine learning classifier to classify these interesting clips in more detail.

Why threshold based on volume, one might ask? Well, volume is the most straightforward property of sound for us. During Tech Trek, a kid asked me a very interesting question: when you are detecting birds in a long recording, how do you know the train sound you ruled out as noise isn't a bird that just sounds like a train? Although this particular case should be quite obvious, it makes a good point: we should give users the freedom to keep what they want from the raw data. Hence, I've developed a simple mechanism that lets every user decide what they want and what they don't want before classifying.

This figure is a quick visual representation of a 15-minute field recording after being processed by the mechanism I was talking about. As you can see, in the first plot there is a red line. That is the threshold for the user to define. Anything louder than this line is marked as "activity"; anything quieter than it is marked as "inactivity." The second plot shows activity over time. However, a single activity, like a bout of bird calls, might have long silent periods between individual calls. In order not to count those as multiple activities, we have a parameter called the "inactivity window," which is basically the amount of silence needed between two activities for them to be counted as separate.

In the above figure, the inactivity window is set to 0.5 seconds, which is very small. That is why you can see so many separate spikes in the activity plot. Below is the plot of the same data, but with an inactivity window of 5 seconds.

Because the inactivity window is larger now, smaller activities are merged into longer continuous activities. This can also be customized by users. After this preprocessing procedure, we will chop up the long recording based on the activities and run the smaller clips through the pre-trained classifier.
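To make the mechanism concrete, here is a minimal sketch of the same idea in Python; the file name, frame size, threshold, and inactivity window below are placeholder values, not the actual interface code.

```python
import numpy as np
from scipy.io import wavfile

fs, audio = wavfile.read("field_recording.wav")  # hypothetical mono recording
audio = audio.astype(float)

# Volume (RMS) in 50 ms frames.
frame = int(0.05 * fs)
n = len(audio) // frame
rms = np.sqrt((audio[:n * frame].reshape(n, frame) ** 2).mean(axis=1))

# The "red line": anything above it is activity, anything below is inactivity.
threshold = np.percentile(rms, 90)  # in the real tool, the user picks this
active = rms > threshold

# Merge activity frames separated by less than the inactivity window.
inactivity_window = 5.0  # seconds
max_gap = int(inactivity_window * fs / frame)
segments = []  # [start_frame, end_frame] pairs
for i in np.flatnonzero(active):
    if segments and i - segments[-1][1] <= max_gap:
        segments[-1][1] = i      # still the same activity: extend it
    else:
        segments.append([i, i])  # enough silence has passed: new activity

# Chop the recording into clips to feed to the pre-trained classifier.
clips = [audio[s * frame:(e + 1) * frame] for s, e in segments]
print(f"{len(clips)} activity clips above threshold")
```

Shrinking the inactivity window splits the recording into many short activities, as in the first plot above; growing it merges them into longer ones, as in the second.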

Unfortunately my laptop completely gave up on me a couple days ago, and I had to send it to repair. I would love to show more data and graphs in this blog post, but I’m afraid I have to postpone that to my last post. Anyways, I wish the best for my laptop (as well as the data in it), and see you next time!


Neurorobot Video Transmission In Progress

Hey everybody, it's your favourite Neurorobot project once again, back with more exciting updates! I went to my first knitting lesson this week at a lovely local cafe called Literati, and attended the Ann Arbor Fourth of July parade dressed as a giant eyeball with keyboards on my arms (I meant to dress as "computer vision" but I think it ended up looking more like a strange Halloween costume).

Oh wait… Did you want updates on the Neurorobot itself?
Unfortunately, it's been more snags and surprises than significant progress; one of the major hurdles we have yet to overcome is the video transmission itself. (I did, however, put huge googly eyes on it.)

The video from the Neurorobot has to first be captured and transmitted by the bot itself, then sent flying through the air as radio waves, received by my computer, assembled back together into video, loaded into program memory, processed, and only then can I finally give the bot a command to do something. All parts of this process incur delays, some small, some big, but the end result so far is about 0.85 seconds.

(A demo of how I measure delay: the difference between the stopwatch in the bot's recording and the one running live on my computer.)
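For the curious, here is a minimal sketch of that measurement setup, assuming the robot's stream can be opened with OpenCV; the capture index below is just a webcam stand-in for the real stream.

```python
import time
import cv2
import numpy as np

cap = cv2.VideoCapture(0)  # placeholder: the robot's video stream would go here
t0 = time.monotonic()

while True:
    # Live stopwatch rendered on the laptop screen.
    clock = np.zeros((100, 400, 3), dtype=np.uint8)
    cv2.putText(clock, f"{time.monotonic() - t0:8.3f} s", (10, 70),
                cv2.FONT_HERSHEY_SIMPLEX, 1.5, (255, 255, 255), 2)
    cv2.imshow("live stopwatch", clock)

    # The delayed copy of that stopwatch as seen through the robot's camera.
    ok, frame = cap.read()
    if ok:
        cv2.imshow("robot view", frame)

    # Point the robot at the stopwatch window; the difference between the two
    # displayed times is the end-to-end video delay.
    if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```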

Unfortunately, human perception is a finicky subject; in designing websites and applications, it has typically been found that anything up to 100 ms of delay is perceived as "instantaneous," meaning the user won't send you angry emails about how slow a button is to click. A delay of 0.85 seconds, however, means that even if you show the robot a cup or a shoe and tell it to follow it, the object may very well leave its view before it has had a chance to react. This makes it hard for the user to see the connection between showing the object and the bot moving toward it, leading them to question whether it's actually doing anything at all.

Unfortunately, the protocol the WiFi module on our robot uses to send video to the laptop isn't that easy to figure out, but we've made sizable progress. We've gotten the transmission delay down to 0.28 seconds, but the code that achieves this is three different applications all "duct-taped" together, so there's still a little room for improvement.

I hope to have much bigger updates for my next blog post, but for now, here's a video demo of my newest mug-tracking software.
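The mug-tracking code itself isn't shown here, but as a rough illustration of the general idea, here is one very simple way to follow a distinctly colored object in a video feed with OpenCV. The camera index and HSV color range are placeholder assumptions, not what our software actually uses.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture(0)  # webcam as a stand-in for the robot's stream
# Placeholder HSV range for a blue-ish mug; tune for the actual object.
lower, upper = np.array([100, 120, 70]), np.array([130, 255, 255])

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, lower, upper)  # keep only mug-colored pixels
    # OpenCV 4 returns (contours, hierarchy).
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("tracking", frame)
    if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```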

{Previous update: http://blog.backyardbrains.com/2018/06/neurorobot-on-wheels/}