
What I'm Doing

“Everyone knows that debugging is twice as hard as writing a program in the first place. So if you're as clever as you can be when you write it, how will you ever debug it?” ― Brian W. Kernighan

First, I think a little reflection on both the state of my journal and my goals is in order. I'll start with the goals that were supposed to end right around now. How do my goals stand in comparison to when I set them?

(Personal) Continue to stabilize the already built drone. Remove all duct tape. [February 20th, +/- 20 days]

This one is in progress. I've ordered what I need; I just need to solder a few things.

(Professional, Educational) Build and improve Matlab framework for analyzing actuation data through a microscope [February 5th, +/- 10 days]

I didn't quite meet the deadline I imposed for this goal, but in my defense, I had to learn a whole new language. I'm getting very close now, as I'll explain later in this post.

(Personal) Have one piece of multimedia per blog post

[Ongoing]

Hey, I do have multimedia! Nice!

(Educational) Learn more about computer vision, neural networks, and flight controller software stacks.

I'm learning about computer vision through OpenCV (a C++ computer vision framework), but also through this Matlab microscope analysis, and I've learned quite a bit. What about neural networks? I'm in the middle of a course on neural networks right now, so that goal is coming along. Flight controller software stacks? This would require learning either the AVR or ARM architecture, both of which I am trying to learn. I won't bore you with the details of how AVR and ARM differ; both are microcontroller families. A microcontroller is a tiny computer that can tackle small tasks like responding to sensor inputs, blinking lights on and off, or even automatically controlling a drone. Microcontrollers are incredible, and working with them is the closest I've come to feeling like a wizard. I would highly recommend checking them out. In my opinion, the best place to start learning about microcontrollers is Arduino, which runs on AVR. It has incredible documentation and support among the hobbyist community. The best part is that you can do pretty much anything you want with them, and when you can no longer do what you want, you just upgrade to a chip with better specs!

So overall, I'm seeing a slight amount of schedule slippage, but I think I am meeting my concrete goals well. As for the non-concrete goals, I definitely could work on maintaining a closer dialogue with my third space colleagues. There has been an unfortunate "the fault is in our stars" kind of occurrence that makes this doubly difficult (I'll explain at the end of the post). Also, every grad student is juggling separate projects alongside the one I'm helping with. That is the nature of graduate school.

Last week, the WISE class shared journals amongst ourselves to gain outside perspective on what makes our entries good and what weighs them down. I'll summarize the feedback I got about my journal, and I'll be trying to solve every single problem.

Issues

  • My acronyms and technical concepts need a bit more elaboration if I'm going to keep using them. There was no consensus on whether I should stop discussing technical details altogether, but I feel that abandoning technical talk entirely would be jumping ship on my obligation to you, because this is, after all, a technical blog.

  • The journal entries are disorganized.

  • I need to say more about what is actually happening at my third space.

Overall, I need to be more careful in designing my blog posts to be lean, connected, and comprehensible; my average entry runs quite long. I'll now organize each post under a category, since a few of my entries have been less about the third space and more about extraneous things, but I should be settling into a rhythm soon, so there are more third space entries to come. Judging by the feedback I got, I'm doing well at expressing my voice and reflecting in every journal. I will try my best to connect each sentence, each paragraph, and each entry in my WISE experience. During our journal sharing I got a great many questions asking for clarification about my third space, so I'll give you a mini Q&A right now.

Q&A

Q: When do you go to your third space? How often?

A: I go on Monday, Wednesday, and Friday from 2:30 PM to 5:30 PM. I work in UC Berkeley's Cory Hall Swarm Lab.

Q: What does SWARM stand for?

A: Smart Warfighting Array of Reconfigurable Modules. Just kidding. I don't think it stands for anything; it just means swarm (as in a swarm of bees), but with emphasis: SWARM (it lends an official air to it, I think). As a quick side note, saying "SWARM" is either a way to sound incredibly intellectual in a technological setting or a fearful declaration appropriate for being chased by a horde of wasps. The difference is not subtle. It's funny, though: in the lab there is a '60s-era movie poster emblazoned with the title "The Arrival of the Swarm!"

Q: Who is your mentor, and how are they assisting you in the research process?

A: My third space mentor is Dr. Kristofer Pister, and he frequently sends me excellent resources on UAV research (which I am reading up on). I also work very closely with two graduate students in particular, Daniel Contreras and Hani Gomez (pictures of them hopefully to come, if they allow it).

Q: What are you doing at your third space?

A: Uh oh. I admit, this question made me toss my hands up in the air, at least mentally. I clearly have not done a great job telling you what my third space is all about. I'll try to do better. So here's a blog post detailing what I'm doing.

 

This whole time, except for the one week at OPI, I have been working on creating an image analysis tool that looks at the video stream from a camera mounted on a microscope and then tracks metrics the user requests, or performs operations the user wants. It's still in a very unpolished state, and if there's one thing I hate to do, it's show a piece of code before it looks presentable. I also don't know yet whether I can post the code at all; it may infringe on the SWARM Lab's privacy, so I'll wait to find out. If I can, I'll get past my misgivings and show the code, or at least a few screenshots of it. For those wondering why I have become so particular about showing code in an early stage: I feel that when you show a piece of code to someone, you are making a promise that the features you've shown will be in the finished version, and that your estimated completion date becomes set in stone. You then have an obligation to deliver. In the past, I've always shown my code early and regretted it later. When months go by and those implicit promises stay unfulfilled for whatever reason, the people you've shown your code to have ammunition against you, and they can use it however they please. So ends my disclaimer.

My GUI (graphical user interface) application is written in Matlab, a decision that came with the caveat that I'd have to learn Matlab as I went. It's a good language, and certainly less irritating to learn than C++. In any case, this meant that as I was coding the application, I was also learning the language I was programming it in. That can be risky, but so far it has turned out okay for me.

To this point, there have been two major tasks at my third space. The first was building the graphical user interface for the user to interact with. Surprisingly, this was pure tedium, and I encountered almost no difficult errors. The second was transferring code from a command-line microscope analysis tool into my GUI version; the command-line version was written by an intern from Brazil at the SWARM Lab over the summer. This "merge" has been only slightly more difficult (I recently hit an issue with the GUI and the analysis code not playing well together, but that was more a result of 100% genuine idiocy on my part than any gap in my understanding). Suffice it to say that I am disappointed with how slowly everything is coming together.

The next task looks to be the most interesting, because it looks to be the most challenging. Put simply, the method currently used to track object displacement under the microscope is quite slow. That's inconvenient, because the end goal is to take any microscope camera stream and track an object's displacement in real time (around 30-60 frames per second). I may have to redesign the tracking algorithm. Computer vision can track many kinds of objects quite easily, often by using the object's color, or by using corner detection to grab certain "features" of the image. Color isn't ideal here, because it makes operating on purely black-and-white or intensity-only frames impossible. Harris corner detection, however, could be quite viable. The short version is that it grabs all of the "interesting" points in an image (almost always corners) and stores them. The remaining issue is selecting the right combination of features to track in order to get an accurate displacement, because features can unexpectedly disappear or reappear from frame to frame.
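To make that concrete, here is a minimal sketch of the kind of approach I have in mind, assuming MATLAB's Computer Vision Toolbox is available (detectHarrisFeatures plus the vision.PointTracker KLT tracker). This is not the lab's code, just my own illustration; the file names and the "strongest 50 corners" cutoff are made-up placeholders.

% Sketch: estimate displacement between two grayscale frames using
% Harris corners + KLT point tracking (Computer Vision Toolbox).
% frame001.png / frame002.png are hypothetical stand-ins for real frames.
frame1 = rgb2gray(imread('frame001.png'));
frame2 = rgb2gray(imread('frame002.png'));

corners = detectHarrisFeatures(frame1);        % the "interesting" points
corners = corners.selectStrongest(50);         % keep only the best features

tracker = vision.PointTracker('MaxBidirectionalError', 2);
initialize(tracker, corners.Location, frame1);

[newPoints, valid] = step(tracker, frame2);    % where did each point go?

% Median motion of the surviving features = estimated displacement.
% Features that vanish between frames simply drop out of the estimate.
displacement = median(newPoints(valid, :) - corners.Location(valid, :), 1);
fprintf('dx = %.2f px, dy = %.2f px\n', displacement(1), displacement(2));

Taking the median rather than the mean is a deliberate choice in this sketch: it keeps a handful of features that jump or vanish between frames from dragging the whole displacement estimate with them.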

You might be wondering what the actual application looks like. Here's what it looked like before I added the graphical user interface components.

As I mentioned before, I'm inheriting a great deal of the code I work with from an undergraduate student from Brazil who came to UC Berkeley as part of an internship. Her code, built around normalized cross-correlation, provides the basis for tracking an object's displacement. I'm trying to expand the analysis interface's utility beyond that: recording a live stream as it happens, tracking displacement, taking pictures, and potentially measuring pull-in voltages. Here's what it looks like now, with the graphical user interface elements added.
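For the curious, the gist of the cross-correlation approach looks roughly like this. It's my own toy example built on MATLAB's normxcorr2, not her actual code, and the file names and patch coordinates are placeholders.

% Toy example: track a patch's displacement with normalized cross-correlation.
% The template is a small patch cut from the first frame around the object.
frame1   = rgb2gray(imread('frame001.png'));   % hypothetical frames
frame2   = rgb2gray(imread('frame002.png'));
template = frame1(100:150, 200:260);           % assumed object location

c = normxcorr2(template, frame2);              % correlation surface
[ypeak, xpeak] = find(c == max(c(:)), 1);      % location of the best match

% Top-left corner of the matched region in frame2...
yoff = ypeak - size(template, 1);
xoff = xpeak - size(template, 2);

% ...compared against where the patch was cut from frame1 (1-based indexing).
dy = yoff - (100 - 1);
dx = xoff - (200 - 1);
fprintf('object moved dx = %d px, dy = %d px\n', dx, dy);

The drawback, and the reason I may have to move away from this for real-time use, is that computing a full correlation surface on every frame is expensive compared to tracking a sparse set of corner features.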

My primary goal now is to make the analysis tool more generalized. By generalized, I mean robust to all sorts of user input, able to accept a wide range of it without throwing errors. It's a relatively simple GUI, hindered by my small working knowledge of Matlab, but I'm slowly getting better. So far, I've implemented video playback and displacement analysis, and I've partially implemented pausing and unpausing both operations. I've also started on a few very simple features that will hopefully grow into something more full-bodied, like file-tree parsing and user settings.
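As a small, hypothetical example of what that robustness looks like in practice, here is a GUIDE-style callback for an imagined "load video" button that checks whether the user actually picked a file and catches failures instead of crashing. The function and field names are invented for illustration, not copied from my tool.

% Hypothetical GUIDE-style callback: validate user input before using it.
function loadVideoButton_Callback(hObject, eventdata, handles)
    [fname, fpath] = uigetfile({'*.avi;*.mp4', 'Video files'}, ...
                               'Select a microscope video');
    if isequal(fname, 0)
        return;                                 % user hit Cancel: do nothing
    end
    try
        handles.video = VideoReader(fullfile(fpath, fname));
        guidata(hObject, handles);              % store the reader in the GUI state
    catch err
        errordlg(['Could not open video: ' err.message], 'Load error');
    end
end

Most of the "generalizing" work is exactly this sort of thing: deciding what a cancel, a bad file, or an unexpected format should do, rather than letting the error propagate up and freeze the interface.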

Building a GUI should be a relatively simple task, and it is, but I make careless mistakes, get bogged down in a swamp of errors as a result, and then spend pretty much all my time debugging. That is not a good sign; it points to poor planning and design on my part. Historically, planning and designing a program has always been incredibly difficult for me, so this is something I need to improve.

Last week, a new employee came into the lab, and claimed the seat that I was previously temporarily using. Obviously, it is not my seat, as I am a temporary worker. So I was unofficially exiled to the corner of the lab, a good fifty feet or more away from the island I used to reside on. This is, to say the least, an unfortunate development. On the upside, there seems to be a great deal of quiet now, and I am singularly focused on the task at hand. On the downside, I'm rather isolated. Truth be told, I am not sure how to approach this issue. For now though, it is fine, given that I am coding around the clock. But after I finish this microscope analysis, I will have to figure out how to swim back to the desk island.

Other Interesting Tidbits

  • Over the weekend, I visited Phoenix's third space, and it is incredibly fascinating. Not only do they have pretty much every tool you could possibly need for building circuitry, but they also have a good deal of art, including a neural network driven light display.
