• Write SQL insert statements from an excel file

I’ve had this need multiple times, so I’ve written a quick matlab script that will allow you to dump the contents of an excel file into a MySQL database. The call on the command line should be as follows:

    
    xls_to_sql(input,sheet,outfile,database,table)
    
    

    With the following variables defined as:

• input: the full name of the input excel file, e.g. ‘myfile.xls’ or ‘myfile.xlsx’
• sheet: the name of the sheet to read from, e.g. ‘Sheet1’
• outfile: the name of your output file, which will be .txt by default
• database: the full name of your database, usually something like ‘mysitecom_databasename’
• table: the full name of the table, e.g. ‘mytable’

The first row of the excel file, your column headers, is expected to contain the corresponding field names, already created in your database. You should be able to open the output text file and copy the code into the “SQL” tab in phpMyAdmin. An empty cell is read as NaN; the script checks for those and prints an empty entry when it finds one. The script was tested with simple string and numerical entities, entered into a database with standard INT, VARCHAR(255), and DOUBLE data types. I’m sure there are some awkward types that you would want to translate from excel into a strangely formatted SQL command that the script can’t handle. Feel free to modify as needed!
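To give a flavor of how it works, here is a minimal sketch of the core idea. This is not the actual script, and the variable names simply mirror the call signature above for illustration: read the sheet with xlsread, treat the first row as field names, and write one INSERT per remaining row, swapping NaN cells for empty entries.


% a rough sketch of the idea, not the actual script
[~,~,raw] = xlsread(input,sheet);        % read everything as a cell array

% build the comma separated list of field names from row one
fields = raw{1,1};
for j = 2:size(raw,2)
    fields = [fields ',' raw{1,j}];
end

% write one INSERT statement per data row
fid = fopen([outfile '.txt'],'w');
fprintf(fid,'USE %s;\n',database);
for i = 2:size(raw,1)
    vals = '';
    for j = 1:size(raw,2)
        v = raw{i,j};
        if isnumeric(v) && isnan(v)
            entry = '''''';              % empty cell, read in as NaN
        elseif isnumeric(v)
            entry = num2str(v);
        else
            entry = ['''' v ''''];       % quote string entries
        end
        if j > 1, vals = [vals ',']; end
        vals = [vals entry];
    end
    fprintf(fid,'INSERT INTO %s (%s) VALUES (%s);\n',table,fields,vals);
end
fclose(fid);
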

    xls_to_sql.m

    ·

  • The Holiday Weekend: An Epic Journey of Independence

This is the story of an unfortunate adventure, in which one day of bad luck slowly transitioned into another day of incredible bad luck. In retrospect, misleading geography could be blamed for what happened; however, I probably should have undone the wrongness sooner than I actually did. It’s amazing how I can keep track of so many useless details, but when it comes to connecting big concepts (like locations), I incorrectly connect A and B in the map in my head. Here is the story!

On Friday, July 2nd, I biked into the heart of Palo Alto to pay a visit to a sweet older woman that I had almost rented a room from. She told me her stories of traveling to Iraq, Europe, and South Africa, encouraged me to use the basement of the town hall as a resource for bike maps, and gave me hazelnut coffee brewed by her daughter somewhere in the hills of California. It was a different and enjoyable experience, even though by the end my mind was drifting back to work. Mid-afternoon I headed out, and since I was already far from home, this somehow made it logical to go even farther to hit up the Safeway and obtain the rare, precious jalapeno corn bread derivative that is the closest thing I can find to what used to be my life-staple back in North Carolina.

So I bike, I shop, and I leave the store. I am just starting on my way home, crossing a massive intersection, and all of a sudden I feel like I am biking on a deflated beach ball. I am forced to stop right in the line of fire of the row of hungry-to-zoom-zoom cars, get off my bike, feel the tire, and to my horror, it’s completely flat. Crap. First I had to get out of the intersection, especially since I still had the oncoming traffic side to cross, and the little orange hand was already waving at me. I then realized that I was between 4 and 5 miles from home in high-80s weather with a poozied-out bike, a heavy bag of groceries, and a fully loaded backpack. So I walked it most of the way, and once I hit the far side of the Medical Campus I was able to ride the Marguerite shuttle for the small distance home. It somehow made me very happy to see my little bike perched on the front of that massive bus – like an injured soldier loaded into the helicopter on his way to safety. I was also very grateful to the bus driver that helped me load and unload the bike from the bus. The entire trip on bike and foot calculated out to 18 miles, with 4 miles of walking with the bike. It was, simply put, a tiring day.

On Saturday I had a standard wake-up-early, do laundry and cleaning, get onto main campus, work until early afternoon, and then a nothing-out-of-the-ordinary walk to get dinner, and walk home. The entire process of setting off from campus and finally arriving home takes about two hours without a bike, but the time goes by quickly, and I don’t mind the distance. But let’s talk about Sunday.

    In retrospect, I should have tried stopping at the bike shop on Saturday. It might have been open. By the time I thought about checking, it was Sunday, and assuredly closed until after the holiday on Tuesday. So I “slept in” again until 6:00am, and by 7:00am was working away in the LKSC, my favorite place in the world. I should also note that campus was completely deserted. It was lovely!

My work this weekend, however, has not been lovely. I’ve been frustrated trying to figure out an inconsistency with an algorithm that has been used for meta-analysis of fMRI/sMRI data, as I want to use this same method in a project that I am working on. I won’t get into the details, but basically I want to use this technique with data points from white matter in the brain; however, the permutations that are done to create the null distribution for the probabilistic map are created with a gray matter mask. That means that any white matter coordinate would not be included, and consequently a white matter point in your data would be significantly different, by default. That is… a problem! But no one else seems to think so. This tool has been used in a handful of studies, specifically WITH white matter coordinates, so I am convinced that I must be missing something. Unfortunately, multiple readings of the methods papers and the studies that used the tool give me no insight. I then tried to figure out how to recreate the entire method from scratch in matlab, but I’m a noob with statistics, and don’t have extensive experience with different methods, and I couldn’t figure it out. So I am frustrated. I am upset. I took a break from research and wrote a fun little matlab script to populate a MySQL database from an excel sheet (which I’m sure I’ll share at some future point), and then around noon I decided to call it a day, work-wise. I typically go for about six hours on a weekend day before becoming aware of the time, but since I was troubled today, I cut it short a bit.

I was hungry, and a little tired, so I decided that today might be the day to look into that one free delivery I could get from Safeway. I should note that I try to get ingredients and meals from the grocery store because it’s much more affordable than anything you can get prepackaged on campus. As soon as I carefully put together my order and looked into the delivery options, I disappointingly saw that they had closed delivery for the holiday. However, by this point I had completely forgotten the point of Safeway to begin with (the convenience of delivery) and forged a plan to walk there. I got excited upon discovering a second store only 2.2 miles from the medical campus (to the south), and then another 2 or so to walk home, to the east. I left the frigidly air-conditioned LKSC, and stepped out into the dry heat. It was definitely in the 90s, and it felt completely wonderful. The campus was empty, so I stood in the middle of the Medical School stone plaza and spread out my arms to take in some sun for a good ten seconds. I then turned to the right, and started my walking.

The trip to the store was incredibly fast. I was very pleased. It was direct, and had I been on a bike, it would have been super easy. This could potentially be converted into a daily routine, which made me very happy, because I’m always on the lookout for improvements in convenience, and more affordable options.

As I walked through the aisles I made decisions that I would later be very grateful for, namely decisions not to buy heavier items that were running low, like cream rinse and a bottled drink. On the walk home, the map in my head was very simple. I would retrace my steps down Sand Hill Road until I hit the first major intersection with a right turn, and that road would hit Junipero as a left turn, and that road would take me right home. I had even seen it once in a car, so I was convinced it would be easy. Oh, if I only knew what I was about to get into!

To get to the crux of the story, I found the right turn, and walked along a bike path directly off of the road toward my left turn onto Junipero. What I didn’t see, what I couldn’t see, was that when the path snaked a little bit away from the road and sloped down into a valley, it actually crossed UNDERNEATH Junipero, and then came right back up on the other side. As a traveler on foot I was aware of crossing under what looked like a crossing for a train or small highway, but since it seemed disconnected from the road I was paralleling, it didn’t even cross my mind that I was crossing under the road I needed to be on. The path then promptly takes you right back up along the road it was paralleling, and unless you are someone who frequently looks behind you, you might not even be aware that you crossed a left turn, period.

Now you must be thinking – but wasn’t I expecting a left turn with the name of Junipero? And here is the second fault of the geography. Right after you unknowingly cross under this road, you DO in fact hit an intersection with a left turn… AND there is a big sign that faces the left turn with the Junipero label on it. It isn’t in the correct orientation for the road, which caught me off guard, and I actually stopped, stared at it for a while, checked the GPS on my phone (which wasn’t updated, and showed me at the wrong location), and then justified that it MUST be a variant street sign; everything else fit right with my expectations for the turn, so it must be Junipero! So I happily turned left. This is when I proceeded to call my Dad and was completely taken up with something silly, explaining the xlsread function and datatypes in matlab. My phone proceeded to die, and I just kept walking. It’s always been easy for me to walk, and I’m not afraid of distance, but after an hour or so I started to wonder… how long does it take to walk two miles again? I saw… a brown fence, and rolling hills with grass, and it didn’t look completely wrong, so I stubbornly continued. I finally started to have doubts, saw a lone runner along the roadside, and asked the dreaded question.

    “Where does this road go?”

    “Oh, somewhere up into the foothills.”

    (crap!) “Which way is Stanford?”

    “Oh, back the direction you are coming from. You want either Sand Hills, or I think there is another road called Junipero that will get you there as well.”

    He drew a map in the sand for me, and I realized that I was very much not where I expected to be.

    I have just realized I’m 2+ hours in the wrong direction, in close to 100 degree weather, and it’s a long way home.

    This is the moment in my journey when I realized that all of the distance I had just traveled would need to be retraced, and then the journey re-started. I thanked the runner, who walked with me about half a mile before I assured him I would be ok, and he could return to running. I kept bringing up the mental picture of the map in my head, over and over again. It WAS right. There were only these two main roads, and one left turn, and it was the first I would see, and I took it! Where did I go wrong? I felt silly and stupid for most of that walk back, and ruminated about how things might have been different had I not been so distracted, or just completely unaware of what the heck was going on. It wasn’t until I returned to that intersection and saw it from the OTHER side that it occurred to me why I had so easily missed the turn.

[photo](http://www.vsoch.com/blog/wp-content/uploads/2011/07/10.jpeg): you can see the path turning off to the left, and how it snakes right under the left turn.

[photo](http://www.vsoch.com/blog/wp-content/uploads/2011/07/11.jpeg): in order to actually turn left onto this road, you can’t get there via the small path; you have to walk on the highway.

I then proceeded to turn down the REAL Junipero, which ironically was only accessible by walking all the way back to the original intersection, and walking right on the busy road (instead of the bike path) to get to the left turn. As I was walking those last few miles, I thought about how this fits into the big picture of my daily life. It SHOULD serve as a lesson, or rationale, for me to either involve other people more in my life, so I would have more resources during times of trouble, or just have more social things planned, period, so the idea of walking for a big chunk of the afternoon wouldn’t even be feasible. It SHOULD be a lesson for this, and I TRIED to really believe this, but in my mind, the entire situation served as a reinforcement of my ability to be independent and survive in the face of challenge. It also was a good indicator of my own stubbornness and incentive structure. I don’t like spending money, and will go to great lengths to minimize doing so. I probably could easily have found a local business off of the road somewhere, asked to use the phone, and called a cab. This, in my mind, wasn’t an option. It also felt like a lazy person’s solution. I was resolved to get myself out of this snaggle through the same means I had gotten into it – on my own two feet.

    Walking that last mile to home, I thought of the things I was grateful for.

1. the lone man on the road
    2. overcoming my stubbornness to talk to the lone man on the road
    3. going to the bathroom in the store
    4. deciding to not buy heavy items in the store
    5. that my feet didn’t break on the walk (I thought about this possibility quite a bit… if there was any time that feet bones might spontaneously break, it would be during a trek like this)
    6. that my mind has a sense of humor. As I crossed the road to my driveway, I thought about what would happen to me in a slapstick movie. I’d cross the street after eight years lost in the desert, mere steps from my door, and get plowed down by something moving fast. Haha. Ok, a little morbid, but it was a very funny thought, at the time!

    When I finally mapped it out, the entire journey was about 10.5 miles. In total, I went 6 miles in the wrong direction, and the extra distance translated into about 3-4 hours of extra walking in 100 degree weather. Yes, I can’t believe it either, it’s embarrassing. It’s a good story and I wanted to share it, but I almost didn’t want to admit that I was really this foolish.

I don’t normally sweat much, but just LOOK at this. Unbelievable! The arms and back of my shirt were completely soaked, and I felt it trickling down my back.

I had a yogurt in my bag that smelled funny, and the dinner I had bought was squished and wilted. The good news, however, is that I feel fulfilled by this epic tiredness, and I’m not so upset anymore about that silly algorithm. I’ll fess up: my pre-frontal cortex was so tired of keeping a harness on all the frustration from earlier in the day, and I was so physically beat, that I did cry, just a little bit. But then I stepped into the wonderfulness of the shower, actually a COLD shower. It was as if my brain was clenched in a tight fist, and when the water hit me, it shocked and quivered a little, and then finally relaxed. I felt all the tension and stress metaphorically wash away.

    ·

  • Print Structural Variable as M Script

    I noticed that in the SPM batch editor you are able to create a structural variable called “matlabbatch,” and if you click on “View .m code” the GUI splats out the guts of the structural variable, for my viewing pleasure. I thought it would be nice to have that functionality to easily print any structural variable from your workspace either to the screen or to a .m file, for more careful viewing or editing, so I wrote a script to do that:

It uses the spm function gencode, which can be found under spm8\matlabbatch\gencode.m in your SPM installation directory.

    
function db_print(DBvar,poption,outname)

% This function prints a structural variable to the screen, or to a .m
% file for more careful viewing or editing. It uses the gencode function
% from spm to convert the variable into lines of matlab code.
%--------------------------------------------------------------------------

% INPUT VARIABLES
% DBvar --- workspace variable to print
% poption --- print to 'screen' or 'file'
% outname --- name of the output file, also used as the variable name in
%             the generated code

% Convert the variable into a cell array of code lines
DBprint = gencode(DBvar,outname);

% if user wants to print to screen
if strcmp(poption,'screen')
    for i = 1:length(DBprint)
        fprintf('%s\n',DBprint{i});
    end

% if user wants to print to file
elseif strcmp(poption,'file')
    fid = fopen([ outname '.m' ],'w');
    for i = 1:length(DBprint)
        fprintf(fid,'%s\n',DBprint{i});
    end
    fclose(fid);
end
end
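
For example, assuming an SPM matlabbatch variable is sitting in your workspace (the output name here is just for illustration), calls would look like:


db_print(matlabbatch,'file','mybatch')        % writes mybatch.m to the working directory
db_print(matlabbatch,'screen','matlabbatch')  % prints the code to the command window
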
    
    
    ·

  • Cookie Vulnerability

    ·

  • I am experiencing: matching terms to brain activation.

    Based on our limited understanding and ability to represent the human brain with numbers and images, it is unwise to claim that any state of brain activation can be matched to a particular emotional state or experience. That said, the recent (beta) release of the NeuroSynth project offers a promising view of the future ease of neuroimaging meta analysis.

The interface is connected to databases of meta-information from many studies pertaining to significant findings about brain activation. In a nutshell, text mining algorithms comb through HTML tables from online journals, pull out reports of maximum voxel activations, and count the number of times that various terms appear in the actual text of each paper. If a term (“fear,” for example) appears at a frequency greater than .001 (more than once in 1,000 words), then the paper’s findings are linked to the term. So when you search for a term on the NeuroSynth website and are presented with an activation map, you are essentially seeing the compiled maximum voxels significantly associated with that term. This is, of course, an incredibly simplified explanation, and I suggest that you reference the NeuroSynth website for a more detailed and correct description.
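To make that threshold concrete, here is a toy illustration in matlab with completely made-up numbers (my own sketch, not NeuroSynth’s actual code):


% made-up counts for the term 'fear' in four hypothetical papers
termCounts = [12 0 3 25];             % times 'fear' appears in each paper
totalWords = [9000 8000 7500 11000];  % total words in each paper

% a paper is linked to the term when its frequency exceeds 1 in 1,000
linked = (termCounts ./ totalWords) > 0.001;   % gives [1 0 0 1]


The peak voxels reported by the linked papers (the first and fourth, here) would then contribute to the compiled map for “fear.”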

    This type of interface and analysis has me super excited, because it is a prominent example of the direction that we are moving in to better share, visualize, and compile massive amounts of brain data. Of course the mining algorithms aren’t perfect and the technology is in its infancy, but the implications for research, learning, and work-efficiency blow my mind!

    With access to such cool data, I knew that I wanted to create something. I was so excited that I maxed out the download limit of these meta maps two days in a row, and then found myself with beautiful data that I wanted to do something fun with. I decided to reverse the idea: instead of providing a term and getting a map, I thought it would be fun to be shown a map and guess the term. I first created a poster, and then built a web interface from the poster. This is hard coded, but I see no reason that it couldn’t be generated dynamically, and as the accuracy and reputation of this sort of interface improves, we can build research tools. Woot! For now, enjoy this fun little project!

    I AM EXPERIENCING: matching terms to brain activation

    this has only been tested in firefox and chrome – no promises about functionality elsewhere!

Hmm, robust amygdala activation and a little OFC, what could that be?

    ·

  • Happy 31st Anniversary to the Padres!

Thank you for making me, caring for me, and loving me. I love you, and will always admire the example that you set for honesty, reliability, and companionship.

    ·

  • Man as Machine

    If we truly had all of the information, and understood the entire genome and its interactions with the environment, if we had formulas for the structure and function of every muscle, organ, and cell, the production, efficiency, life and death of every neuron, receptor, hormone, and neurotransmitter, and if we could somehow place an organism into a perfectly controlled environment, then I see no logical reason that an entire lifecycle could not be known from inception. I see no reason that a technology couldn’t scan my every particle and monitor the environment and predict my behavior before I act on anything. Our tendency to distinguish ourselves from machines, as something “special,” is not so much founded in logic, but in a deeper fear that when the hardware gets old or the power supply runs out, the machine simply turns off, and there’s not much more to it than that. The seemingly defined line that we continually draw in the sand to separate humanity and technology is only supported by a lack of full disclosure about our own little bits.

    ·

  • Installing and Finding Packages in Linux

I’m pretty new to using anything other than windows; I’m using CentOS 5 on a virtual machine, and just getting comfortable with some pretty basic installation procedures. For my own reference, I am going to document several of them.

    Download and compile from source

1) Download the .tar.gz file, and unzip it with:

    
tar xvzf name.tar.gz
    
    

2) Look for the README and INSTALL files, which will provide further details about the specifics of the installation. Generally, the following works:

    
./configure   # seems to look for all the prereqs for the installation, and tells you if it can't find something
make          # I think this command compiles
make install  # then you install!
    
    

    3) The other option is to find a nice .rpm file, and have your system install it for you.

    Find Something that You’ve Installed

    If you install something and then another something cannot find it, it’s likely not on the path. To step back, the path variable is basically a list of places that are browsed through to find files, whenever you call any command. So if you want to know if something is “findable” in the terminal, you can echo $PATH to see if it’s there!

    Of course, what if you are like me, and you install something, and then have no idea where it is? I was missing a required library for a package, so my run of ./configure wasn’t working. I read in the INSTALL file that I could specify a path to look for libraries, so to get this to work, I needed to find that path. So first I used yum to check if it was installed.

    
    yum list packagename
    
    

    And sure enough, I did indeed have it, but I had no idea where it was! Silly yum just told me that it was installed, and the version number, and not much else. So I blindly looked in the folders where the internet told me it was “supposed” to be, but found nothing. I then needed another strategy, and started to look at the rpm command.

    Use rpm to find an installed package

    You can use the following to list all of your installed packages:

    
    rpm -qa | less
    
    

    The command above tells the package manager to query all installed packages, and the addition of less just makes it more manageable in the terminal. The following gives you information about a specific package:

    
    rpm -qi packagename # query for information (i)
    
    

    and then to find where the little bugger is hiding, you can do:

    
    rpm -ql packagename  # query for the location (l)
    
    

From this basic troubleshooting, I was able to find the location of my package, and then add it to the path variable. To edit your path, you want to change either ~/.bash_profile or ~/.bashrc, which are hidden files in your home directory (~).

    Add package to path

    
cd ~                  # go to your home directory
gedit .bash_profile   # open up the bash profile with your text editor
    
    

If you do “ls” you won’t see it, because files starting with a “.” are hidden (use “ls -a” to list them). The basics of adding a folder to your path are appending it, and then exporting, like so:

    
PATH=$PATH:/path/to/add   # append the new folder to the path
export PATH               # make the updated path available to programs
    
    

Then save the .bash_profile; you will have to log out and back in for the changes to take effect. If you don’t want to do that, you can also type those commands directly into the terminal window, or run source ~/.bash_profile to reload the profile.

And to provide closure to my particular problem: after eight hours of compiling from source for a gazillion and one libraries, attempting to edit source code on my own, and installing different compilers to see if it made a difference, I finally admitted defeat in getting this particular package installed. However, that doesn’t take away from the utility of the commands detailed above, nor does it make the time spent a waste. I found this experience fun, and learned a very significant amount. I am continually awed by the intricate design of software and machines, and excited as my brain continues to make sense of them.

    If an OS is like a religion, am I converted?

    Am I converted? Well, I don’t like the idea of joining an OS bootcamp and bashing the other side, because I don’t see why I can’t enjoy them all. However, my love for the command line combined with my recent escapade of installing Ubuntu 11.04 on my Dad’s old laptop and seeing huge improvements in performance has me excited. On some future date I would like to configure a system with something other than Windows, and not just do a dual boot or virtual machine. I’m pretty excited about Chrome OS too, but I don’t think the browser alone is ready for the type of applications that I use on a daily basis. Until then, it’s back to regedit, blue screens of death, using SSH to satisfy command line urges, constant searching for the right .dll, Windows Update, and Dell Diagnostics. Oh Windows, you are so special! :O)

    ·

  • Spring Reflections

    It is usually the case that before a big life change, I find myself in some sort of transition period with time to introspect. I think that introspection is important for both learning and reflection, so I feel lucky to have this small break to do so.

    Discovery and Balance

Two years ago I graduated from Duke and chose to enter a domain of work that spanned my two interests: neuroscience and computer science. I was not certain if academia was the right fit for me, and I most surely did not want to enter graduate school without being certain of my passion for a field of study. Fast forward two years, and I know that I’ve found my niche in terms of lifestyle and topic. I feel excited (a little nervous!) and empowered to transition into being a full-time research scientist. I feel very strongly that hard work and a commitment to proactively pursue my goals, even if there are bumps and challenges along the way, will lead to good things. I know that I have much to learn, and I don’t have all the answers, but I feel confident in my ability to do what is right to be the best researcher that I can be.

A more equal distribution between work and social stuffs is something that many have articulated as an important component of a “balanced life,” and if this is true, I still lack the life experience or incentives to drastically shift my priorities. I am still more excited to relaxedly work over a weekend than to take an impromptu beach trip that would eat up the entirety of the time. And I’m not sure that the idea of going out and drinking at some sort of happy hour is ever going to be my idea of fun. At this time in my life I still feel social fulfillment from interactions with classmates or colleagues, and I balance out time sitting in front of a computer by going for an adventure run or bike ride. I think that there isn’t a “right” way to be as long as you are productive and happy with how you are. I cannot say if I will be as work-focused in fifteen or twenty years as I am now, but given that I am about to start on what I hope to be a long and fulfilling career in academia, I think my mental state and priorities are just right.

    Motivation

    I read an article recently titled “The Cognitive Cost of Doing Things” that made me reflect on my own cognitive resources and energy, and for the second part of this post I’d like to review some of the ideas discussed.

**Most Things Require A Catalyst, or Activation Energy:** I agree that it can be a challenge to get started on doing just about anything. I’d call this motivation, and one of my greatest fears is not having the motivation to do something that I either must do, or should do because it’s important. I manage this fear by starting on most things early so that I can always work on things that I want to be working on, maximize efficiency, and not work on things when I don’t want to work on them. I also start early because it probably takes more cognitive energy to suppress working on something that excites me, given that I can allocate some time.

**Manage Time Based on Present Desire:** In the case that I don’t want to work on project vanilla, there is a pretty good chance that I want to work on chocolate, and I might return to vanilla when my desires again shift. You only get into trouble when you haven’t allocated enough time to take fluctuating levels and foci of motivation into account, and you really have to force yourself to make an entire sundae when you don’t fancy vanilla, chocolate, or strawberry. Ice cream headache! This is why I think that getting started on things promptly is important – you want to avoid associating your work with any sort of negative mental state that says “I have to do this NOW and I don’t want to.” The same idea can be applied to running (forcing yourself to run up a hill when you aren’t feeling it) or what you have for dinner. Forcing yourself to run up those hills, slowly over time, may make it so you can’t muster up the cognitive energy to put your sneakers on in the first place, which is a much worse eventual outcome.

Every decision tree that we conceptualize definitely takes into account how decisions influence present and future happiness, and alter future incentive structures. I tend to be less impulsive for the present, more future-oriented, and I place a high value on efficiency and “getting stuff done,” so my advice to be kinder to yourself in the present, and to direct work and behavior based on present incentives, may not be best for someone who would apply this mindset and only have it lead to procrastination. It is, of course, not a good mindset to have if you never reach a mental state where you want to jump into an activity.

**When the pool is cold, just jump in:** Another phenomenon I’ve noticed is when there are too many good things to work on, and I enter a mental state I like to call ‘analysis paralysis.’ It’s like being in line at the Loop (or your favorite dinner place) and being unable to order because I can only choose one thing, and there are too many choices. So I stand there and… do nothing, when the reality of the situation is that many choices would make me happy, and I just need to pick one and move on. In these situations, whether I am standing in line for dinner or staring at my to-do list, it’s good to be able to identify that I am spiraling into analysis paralysis, and I need to stop thinking and attempting to optimize, and just pick one. If motivation is an issue and my cognitive resources are pooping out, I can do the same thing. I try to start working on something small without thinking too much, and my brain usually gets immersed.

**Distinguish between overall fit and sentiment for the task at hand:** I would also venture to say that it’s completely normal to love some aspects of work, and not care so much for others. I think that good evidence of not being matched well with a job is probably a strong desire to not do most of what someone in that job should be doing most of the time. On the contrary, I think that losing track of time and experiencing that lovely state called “flow” serve as evidence that I love what I do. To again bring up the idea of working based on current desire, I would say that the right time to work on anything is exactly when I am excited to do so, and to stop working on something when I lose interest. And when I get to the point when something distracts me, then is a good time to take a break, or stop entirely. Working when I don’t want to is neither productive nor a good mental state to associate with work, period.

**Slowing-Down Energy:** The article talks about inertia as “keep(ing) doing what you’ve been doing” in the context of getting comfortable with routine. I’d like to distinguish between the idea of getting stuck and comfortable with a routine, and someone’s ability to disconnect from a task. Both could be considered the opposite of “activation energy” in that it’s what keeps you going – but the difference is the level of engagement. I have a hard time disconnecting from a task not because I am stuck in a mindless routine, but because I can’t break my engagement. I cannot, and do not want to, peel away from things that I’m working on until I’ve reached “the right” point when something else becomes more interesting, or my cognitive fuel tank needs a trip to the mental gas station. Breaking away from something that I want to be working on is more stressful and cognitively demanding than starting in the first place. If I stop too early, some component of fulfillment is missing, and the neglected task sits right on top of my mind. If I stop for the day with a lingering question, I will likely think about the problem throughout the evening, and into my run the next morning, and if I’m lucky my mind will stumble over something new to try.

**Opportunity Cost:** My brain always considers opportunity cost with regard to what else I might be doing with the time. I don’t have the right neural connections so that money comes into the equation, and I will sheepishly admit that I’ll usually choose work tasks over social things. Since time and productivity are the primary drivers of my incentive machine, I prioritize them above all else. If I need to or am asked to do activity X, I will always consider what else I might be doing with that time, say activity Y, and I will choose the activity that maximizes productivity or learning to improve future productivity. I will need to work hard to try and look at the world and my choices from a less practical angle, because in many situations (especially unstructured ones in an academic environment) I need to recognize that I cannot always predict what might come out of any sort of interaction. It might “make more sense” beforehand to spend an evening coding or reading papers over attending a social event, but I could never predict meeting a new person or an interesting conversation that leads to an awesome new collaboration. This is a way of thinking that I have decided to proactively work on.

**Altering hormonal balance:** The article talks about how we don’t think about how things like stress, hormones, digestion, or breathing (basically environmental influences on our unique biochemistry) have huge implications for cognitive energy. I have three words for how I feel about this: running fixes everything! But in all seriousness, I like to think about how my living environment and the people in it tax my cognitive resources, or energize them. There are people that I’ve noticed make me happy and give me energy, and other people that leave me ruminating about something silly and negative, and wasting my cognitive resources. The same is true of small daily activities, or choices that I make. Going out for a morning run, regardless of how I feel beforehand, gives me energy. Staying up too late or forcing myself to do something that I don’t want to do drains me of energy. Breaking from work and spending time with friends, even if it isn’t the most productive choice, gives me energy. I take all these things into account. Life is largely about maximizing the things that give me energy, and minimizing the things that sap me of it.

**The BIG WHY:** But to dig a little deeper into this ogre onion, the next logical question is, what is the motivation underlying all of these things? Why should I even think or care about my cognitive resources, or be interested in anything at all?

My underlying motivation is a desire to understand systems so I can build them myself. If I can break something into pieces, then I can understand each one, and make something on my own, either similar or modified, by putting the pieces together in a different way. If I can make things and solve problems, then I feel useful and productive, and that makes me a happy human being. So in a sense, understanding = empowerment/control to change my environment -> verification of purpose -> happiness. I know that when I am presented with a system or idea and I don’t understand it, it’s very troubling/upsetting, and I know that I am happiest when I feel challenged, and I have many things to work on and think about.

**In a nutshell:** I appreciate articles like this because they remind me to think about myself, what makes me tick, and why I do the things that I do. They also remind me that when I disagree with someone, or don’t understand why that person’s behavior differs from my own, I can usually make sense of the difference if I try to understand the underlying values and motivations. And on the flip side, when I perceive that someone doesn’t agree with my choices of how to spend my time or allocate my cognitive resources, I hope that they consider what my incentives and values are. Even in light of disagreement, I think that people can try to logically understand one another. Given that people change, are inconsistent, and are arguably more difficult to understand than a mechanical process, it is definitely worth taking the time to think about these things.

    For now, I’d like to close this thought bubble by saying that I am grateful for my past experience, really enjoying the present, and (I will say this in true New Englander, frontal-lobe dominant style) – “I am wicked excited for the future!” :OD

    ·

  • Operation Tiny Pie

During my time in Palo Alto, we had brunch at a little cafe in the downtown strip, which may have been called University Ave. I don’t go out to eat a lot, so perhaps this observation is already well documented, but I noticed that the jelly containers look like tiny pie tins. The only jelly containers that I could recall were small rectangular plastic bins with a peel-away top. These were a novelty!

I decided that it was worth taking some with me, packaging them into my liquids/gels plastic bag to go through airport security (and hoping no one would ask me questions about why it was important for me to fly strawberry jelly across the country), and seeing if I could use them to make actual pies.

**Clean the tins.** I like how these almost look full size, until you get an idea of scale from the surroundings.

**Make Pie.** Pumpkin is easy, and one of the best ones anyway! I only had four tins, so I decided to make one sans crust, three with crust, and then I made four “medium” size pies with graham cracker shells, and nine other tiny pies sans tin to use up the batter. I always think it’s interesting that pumpkin pie takes about the same time to cook, regardless of the size. It’s one of those baked goods that goes in a moderately hot oven for a longer period of time, and the time seems based more on the thickness of the batter than the length and width of the pie itself.

**Tiny Pies!** To be sent with love to someones that you care about, or a mouse family at Thanksgiving time.

    ·

  • DNS Score 1.5 Alpha Release

This is an incredibly exciting day for me, because I have finished my first “real” application (meaning that it is an .exe file that runs with an installer and plops something into Program Files). I shall first provide an overview, and then more details about how I put it together.

    Overview

    This application is used for creating syntax (.sps) files to score behavioral data for the Duke NeuroGenetics Study. The user can select a data file (.sav), an output folder, and whether or not results should also be merged into one file. This application simply writes the script to produce the desired output, and does NOT run the script. After using this application, the user should go to the output folder and run the .sps script in PASW Statistics to produce the scored data.

    Tips for Use

    Install DNS Score on your machine and create a shortcut for it in a folder close to your syntax and/or output folders. The DNS Score file selector opens in the present working directory, so running it directly from Program Files will make your file selector start from there, which is not ideal.

    Instructions

    1. Create an output folder for storing the script and your results on your local machine
    2. Launch DNS Score
    3. Select your output folder, the syntax desired, whether you want a merged file, and the data in the GUI, and click CREATE SPS
    4. The .sps file will be saved to your output folder
    5. Simply open this file in PASW and run it.
    6. This script will score the measures that you selected using the scoring syntax you selected, and save all individual and compiled results to your output folder. You can rename the dns_score.sps script to whatever is appropriate for your run.

    What does the .sps syntax do?

    This application was created to address the problem of creating quick merged files with custom measures, and the problem of one tiny error in a massive syntax file corrupting an entire results data file. The script that this application creates does the following, specifically for use with the Duke NeuroGenetics Study:

1. Loads the user-specified dataset
2. Resolves ID confusion issues between two variables, and prints the correct ID as an integer “dns_id”
3. For each measure, works directly from a copy of the raw data, scores the measure, and saves a measure-specific file. (This means that an error in one syntax will not have adverse effects on the rest)
4. As it goes, if the user has asked for a merged file, it concatenates the results based on the dns_id

    What can we learn from this application?

While this application is a custom job for the Duke NeuroGenetics Study, it is a good example of how python can be used to create a GUI that runs in Windows and writes custom job scripts for users. This is the first of what I hope will be multiple small projects that write custom scripts based on user input. This is, of course, a very rough version with limited use within my lab, but feel free to contact me with things that I can fix or do better!

    DOWNLOAD 64 BIT

    DOWNLOAD 32 BIT

    ·

  • covcheck_free.m Release!

    I have recently updated my coverage checking script to allow for more flexibility in selecting raw mask image files. As a reminder, here is the original version.

While this original solution worked well with the pipelines established in my lab, I realized that, long term, the script was not flexible enough to account for changes in directory hierarchy or task lists. Thus, I have modified it so that instead of selecting an experiment and a task and having a hard-coded spot for the output folder, the new version allows the user to select his or her own location for the output folder, as well as a custom list of mask image paths. This means that we aren’t limited to any particular experiment or task, and could even run a session with data from multiple different experiments.

I am content with these changes for now; however, I am aware that this script still requires the BIAC tools to function correctly, as well as some SPM functions. In the long run I would like to develop a stand-alone application for checking coverage, given only a set of images to check and a desired mask!

    Overall Changes

    • User no longer has to select experiment, task, number of subjects, or list of subjects. A simple input of mask image paths is asked for instead.
• Output files no longer print subject IDs, but rather these paths, which are assumed to contain the subject IDs. Since the list of paths is likely prepared in advance, or takes time to select, it can be saved somewhere for an incredibly easy copy and paste to re-run the same coverage check.
    • The script only includes support for .img/.hdr files, as I am not comfortable enabling other formats without lots of testing!

    covcheck_free

    ·

  • List of all Video Types

Here is a list of tuples (for python) of all the video extensions (types[x][0]), their descriptors (types[x][1]), and their commonality (types[x][2]). This was harder to put together than it should have been, and I want to put it here for future reference, because I never want to make it again!

    
types = (('.3gp2','3GPP Multimedia File','Average'),('.3gpp','3GPP Media File','Average'),('.3mm','3D Movie Maker Movie Project','Average'),('.3p2','3GPP Multimedia File','Average'),('.aaf','Advanced Authoring Format File','Average'),('.aep','After Effects Project','Average'),('.aetx','After Effects XML Project Template','Average'),('.ajp','CCTV Video File','Average'),('.amv','Anime Music Video File','Average'),('.amx','Adobe Motion Exchange File','Average'),('.arf','WebEx Advanced Recording File','Average'),('.avb','Avid Bin File','Average'),('.axm','AXMEDIS Object','Average'),('.bdmv','Blu-ray Disc Movie Information File','Average'),('.bik','BINK Video File','Average'),('.bin','Binary Video File','Average'),('.bix','Kodicom Video File','Average'),('.bmk','PowerDVD MovieMark File','Average'),('.box','Kodicom Video','Average'),('.byu','Brigham Young University Movie','Average'),('.camrec','Camtasia Studio Screen Recording','Average'),('.clpi','Blu-ray Clip Information File','Average'),('.cmmp','Camtasia MenuMaker Project','Average'),('.cmmtpl','Camtasia MenuMaker Template','Average'),('.cmproj','Camtasia Project File','Average'),('.cmrec','Camtasia Recording','Average'),('.cvc','cVideo','Average'),('.d2v','DVD2AVI File','Average'),('.d3v','Datel Video File','Average'),('.dat','VCD Video File','Average'),('.dce','DriveCam Video','Average'),('.dck','Resolume Deck File','Average'),('.dir','Adobe Director Movie','Average'),('.dmb','Digital Multimedia Broadcasting File','Average'),('.dmss','VideoWave SlideShow Project File','Average'),('.dpg','Nintendo DS Movie File','Average'),('.dv','Digital Video File','Average'),('.dv-avi','Microsoft DV-AVI Video File','Average'),('.dvx','DivX Video File','Average'),('.dxr','Protected Macromedia Director Movie','Average'),('.dzt','DirectorZone Title File','Average'),('.evo','HD DVD Video File','Average'),('.eye','Eyemail Video Recording File','Average'),('.f4p','Adobe Flash Protected Media File','Average'),('.fbz','FlashBack Screen Recorder Movie','Average'),('.fcp','Final Cut Project','Average'),('.flc','FLIC Animation','Average'),('.flh','FLIC Animation File','Average'),('.fli','FLIC Animation','Average'),('.flx','FLIC Animation','Average'),('.gfp','GreenForce-Player Protected Media File','Average'),('.gl','GRASP Animation','Average'),('.grasp','GRASP Animation','Average'),('.gts','CaptiveWorks PVR Video File','Average'),('.hkm','Havok Movie File','Average'),('.ifo','DVD-Video Disc Information File','Average'),('.imovieproject','iMovie Project','Average'),('.ivf','Indeo Video Format File','Average'),('.ivr','Internet Video Recording','Average'),('.ivs','Internet Streaming Video','Average'),('.izz','Isadora Media Control Project','Average'),('.izzy','Isadora Project','Average'),('.jts','Cyberlink AVCHD Video File','Average'),('.lsf','Streaming Media Format','Average'),('.m1pg','iFinish Video Clip','Average'),('.m21','MPEG-21 File','Average'),('.m2t','HDV Video File','Average'),('.m2ts','Blu-ray BDAV Video File','Average'),('.m2v','MPEG-2 Video','Average'),('.m4u','MPEG-4 Playlist','Average'),('.mgv','PSP Video File','Average'),('.mj2','Motion JPEG 2000 Video Clip','Average'),('.mjp','MJPEG Video File','Average'),('.mnv','PlayStation Movie File','Average'),('.mp21','MPEG-21 Multimedia File','Average'),('.mpgindex','Adobe MPEG Index File','Average'),('.mpl','AVCHD Playlist File','Average'),('.mpls','Blu-ray Movie Playlist File','Average'),('.mpv','MPEG Elementary Stream Video File','Average'),('.mqv','Sony Movie Format File','Average'),('.msdvd','Windows DVD Maker Project File','Average'),('.msh','Visual Communicator Project File','Average'),('.mswmm','Windows Movie Maker Project','Average'),('.mtv','MTV Video Format File','Average'),('.mvb','Multimedia Viewer Book Source File','Average'),('.mvd','Movie Edit Pro Movie File','Average'),('.mve','Infinity Engine Movie File','Average'),('.ncor','Adobe Encore Project File','Average'),('.nsv','Nullsoft Streaming Video File','Average'),('.nuv','NuppelVideo File','Average'),('.nvc','NeroVision Express Project File','Average'),('.ogv','Ogg Vorbis Video File','Average'),('.ogx','Ogg Vorbis Multiplexed Media File','Average'),('.pgi','Video Recording File','Average'),('.piv','Pivot Stickfigure Animation','Average'),('.playlist','CyberLink PowerDVD Playlist','Average'),('.pmf','PSP Movie File','Average'),('.prel','Premiere Elements Project File','Average'),('.pro','ProPresenter Export File','Average'),('.pxv','Pixbend Media File','Average'),('.qtch','QuickTime Cache File','Average'),('.qtl','QuickTime Link File','Average'),('.qtz','Quartz Composer File','Average'),('.rdb','Wavelet Video Images File','Average'),('.rec','Topfield PVR Recording','Average'),('.rmp','RealPlayer Metadata Package File','Average'),('.rms','Secure Real Media File','Average'),('.roq','Id Software Game Video','Average'),('.rp','RealPix Clip','Average'),('.rts','RealPlayer Streaming Media','Average'),('.rum','Bink Video Subtitle File','Average'),('.rv','Real Video File','Average'),('.sbk','SWiSH Project Backup File','Average'),('.seq','NorPix StreamPix Sequence','Average'),('.sfvidcap','Sonic Foundry Video Capture File','Average'),('.smi','SMIL Presentation','Average'),('.smk','Smacker Compressed Movie File','Average'),('.spl','FutureSplash Animation','Average'),('.ssm','Standard Streaming Metafile','Average'),('.svi','Samsung Video File','Average'),('.swt','Flash Generator Template','Average'),('.tda3mt','DivX Author Template File','Average'),('.tivo','TiVo Video File','Average'),('.tod','JVC Everio Video Capture File','Average'),('.tp','Beyond TV Transport Stream File','Average'),('.tp0','Mascom PVR Video File','Average'),('.tpd','Cyberlink TOD Video File','Average'),('.tpr','TMPGEnc Project File','Average'),('.tvs','TeamViewer Video Session File','Average'),('.vc1','VC-1 Video File','Average'),('.vcpf','VideoConvert Project File','Average'),('.vcv','ViewCave Video File','Average'),('.vdo','VDOLive Media File','Average'),('.vdr','VirtualDub Signpost File','Average'),('.vfz','Creative Webcam Video Effects File','Average'),('.vgz','DigitalVDO Compressed Video File','Average'),('.vid','Generic Video File','Average'),('.viewlet','Qarbon Viewlet','Average'),('.viv','VivoActive Video File','Average'),('.vivo','VivoActive Video File','Average'),('.vlab','VisionLab Studio Project File','Average'),('.vp6','TrueMotion VP6 Video File','Average'),('.vp7','TrueMotion VP7 Video File','Average'),('.vpj','VideoPad Video Editor Project File','Average'),('.vsp','VideoStudio Project File','Average'),('.w32','WinCAPs Subtitle File','Average'),('.wcp','WinDVD Creator Project File','Average'),('.webm','WebM Video File','Average'),('.wm','Windows Media File','Average'),('.wmd','Windows Media Download Package','Average'),('.wmmp','Windows Movie Maker Project File','Average'),('.wmx','Windows Media Redirector','Average'),('.wp3','Microsoft Photo Story Project File','Average'),('.wpl','Windows Media Player Playlist','Average'),('.wvx','Windows Media Video Redirector','Average'),('.xfl','Flash Movie Archive','Average'),('.zm1','ZSNES Movie 1 File','Average'),('.zm2','ZSNES Movie 2 File','Average'),('.zm3','ZSNES Movie 3 File','Average'),('.zmv','ZSNES Movie File','Average'),('.aepx','After Effects XML Project','Common'),('.bdm','AVHCD Information File','Common'),('.bsf','Blu-ray AVC Video File','Common'),('.camproj','Camtasia Studio Project','Common'),('.cpi','AVCHD Video Clip Information File','Common'),('.divx','DivX-Encoded Movie File','Common'),('.dmsm','VideoWave Movie Project File','Common'),('.dream','Dream Animated Wallpaper File','Common'),('.dvdmedia','RipIt DVD Package','Common'),('.dvr-ms','Microsoft Digital Video Recording','Common'),('.dzm','DirectorZone Menu Template','Common'),('.dzp','DirectorZone Particle Effect File','Common'),('.f4v','Flash MP4 Video File','Common'),('.hdmov','QuickTime HD Movie File','Common'),('.imovieproj','iMovie Project File','Common'),('.m2p','MPEG-2 Program Stream File','Common'),('.m4v','iTunes Video File','Common'),('.mkv','Matroska Video File','Common'),('.mod','Camcorder Recorded Video File','Common'),('.moi','MOI Video File','Common'),('.mpeg','MPEG Movie','Common'),('.mts','AVCHD Video File','Common'),('.mxf','Material Exchange Format File','Common'),('.ogm','Ogg Media File','Common'),('.pds','PowerDirector Script File','Common'),('.prproj','Premiere Pro Project','Common'),('.psh','Photodex Slide Show','Common'),('.r3d','REDCODE Video File','Common'),('.rcproject','iMovie 08 Project','Common'),('.rmvb','RealMedia Variable Bit Rate File','Common'),('.smil','SMIL Presentation File','Common'),('.srt','SubRip Subtitle File','Common'),('.stx','Pinnacle Studio Project File','Common'),('.swi','SWiSH Project File','Common'),('.tix','DivX Video Download Activation File','Common'),('.trp','HD Video Transport Stream','Common'),('.ts','Video Transport Stream File','Common'),('.veg','Vegas Video Project','Common'),('.vf','Vegas Movie Studio Project File','Common'),('.vro','DVD Video Recording Format','Common'),('.wlmp','Windows Live Movie Maker Project File','Common'),('.wtv','Windows Recorded TV Show File','Common'),('.xvid','Xvid-Encoded Video File','Common'),('.yuv','YUV Video File','Common'),('.787','AVTECH CCTV Video File','Rare'),('.dsy','Besta Video File','Rare'),('.gvi','Google Video File','Rare'),('.m15','MPEG Video','Rare'),('.m4e','MPEG-4 Video File','Rare'),('.m75','MPEG Video','Rare'),('.mmv','MicroMV Video File','Rare'),('.mpeg4','MPEG-4 File','Rare'),('.mpf','MainActor Project File','Rare'),('.mpg2','MPEG-2 Video File','Rare'),('.mpv2','MPEG-2 Video Stream','Rare'),('.rmd','RealPlayer Media File','Rare'),('.scm','Super Chain Media File','Rare'),('.sec','GuinXell Video File','Rare'),('.vp3','On2 Streaming Video File','Rare'),('.264','Ripped Video Data File','Uncommon'),('.3gpp2','3GPP2 Multimedia File','Uncommon'),('.60d','CCTV Video Clip','Uncommon'),('.aet','After Effects Project Template','Uncommon'),('.avd','Movie Edit Pro Video Information File','Uncommon'),('.avs','Application Visualization System File','Uncommon'),('.bs4','Mikogo Session Video Recording','Uncommon'),('.dav','DVR365 Video File','Uncommon'),('.ddat','DivX Temporary Video File','Uncommon'),('.dif','Digital Interface Format','Uncommon'),('.dlx','Sony VDU Video File','Uncommon'),('.dmsm3d','VideoWave 3D Movie Project File','Uncommon'),('.dnc','Windows Dancer File','Uncommon'),('.dv4','Bosch Security Systems CCTV Video File','Uncommon'),('.fbr','Mercury Screen Recording','Uncommon'),('.gvp','Google Video Pointer','Uncommon'),('.iva','Surveillance Video File','Uncommon'),('.lsx','Streaming Media Shortcut','Uncommon'),('.m1v','MPEG-1 Video File','Uncommon'),('.m2a','MPEG-1 Layer 2 Audio File','Uncommon'),('.meta','RealPlayer Metafile','Uncommon'),('.mjpg','Motion JPEG Video File','Uncommon'),('.modd','Sony Video Analysis File','Uncommon'),('.moff','Sony Video Analysis Index File','Uncommon'),('.moov','Apple QuickTime Movie','Uncommon'),('.movie','QuickTime Movie File','Uncommon'),('.mp2v','MPEG-2 Video File','Uncommon'),('.mp4v','MPEG-4 Video','Uncommon'),('.mpe','MPEG Movie File','Uncommon'),('.mpsub','MPlayer Subtitles File','Uncommon'),('.mvc','Movie Collector Catalog','Uncommon'),('.mvp','Movie Edit Pro Video Project File','Uncommon'),('.mys','Vineyard Captured Video File','Uncommon'),('.osp','OpenShot Project File','Uncommon'),('.par','Dedicated Micros DVR Recording','Uncommon'),('.pssd','PhotoSuite Slide Show File','Uncommon'),('.pva','PVA Video File','Uncommon'),('.pvr','Wintal PVR Video File','Uncommon'),('.qt','Apple QuickTime Movie','Uncommon'),('.qtm','Apple QuickTime Movie File','Uncommon'),('.sbt','SBT Subtitle File','Uncommon'),('.scn','Pinnacle Studio Scene File','Uncommon'),('.sml','SMIL Slideshow Presentation','Uncommon'),('.smv','VideoLink Mail Video File','Uncommon'),('.str','PlayStation Video Stream','Uncommon'),('.vcr','ATI Video Card Recording','Uncommon'),('.vem','Meta Media Video E-Mail File','Uncommon'),('.vfw','Video for Windows','Uncommon'),('.vs4','AVTECH CCTV Video Surveillance File','Uncommon'),('.vse','AVTECH CCTV Video','Uncommon'),('.3g2','3GPP2 Multimedia File','Very Common'),('.3gp','3GPP Multimedia File','Very Common'),('.asf','Advanced Systems Format File','Very Common'),('.asx','Microsoft ASF Redirector File','Very Common'),('.avi','Audio Video Interleave File','Very Common'),('.flv','Flash Video File','Very Common'),('.mov','Apple QuickTime Movie','Very Common'),('.mp4','MPEG-4 Video File','Very Common'),('.mpg','MPEG Video File','Very Common'),('.rm','Real Media File','Very Common'),('.swf','Flash Movie','Very Common'),('.vob','DVD Video Object File','Very Common'),('.wmv','Windows Media Video File','Very Common'))
    
    

    Here is an example of how to pull out a short list; in this case, all of the extensions that are marked 'Very Common':

    
    for ext in types:
        if ext[2] == 'Very Common':
            print ext[0],
    
    
    
    .3g2 .3gp .asf .asx .avi .flv .mov .mp4 .mpg .rm .swf .vob .wmv
    
    

    I simply printed them all on one line, but you could format them however you like, or put them into another data structure for easier use (see the sketch below). I would also suspect that .ogg and .ogv will increase in popularity with HTML5, and will soon belong to the "Very Common" group!
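
    Here is a minimal sketch of that idea, grouping the tuples into a dictionary keyed by how common each extension is (bytype is just a name I made up), so any category can be pulled out in one line:

    
    # Group extensions into a dictionary keyed by commonality
    bytype = {}
    for ext, description, commonality in types:
        bytype.setdefault(commonality, []).append(ext)
    
    # All of the 'Very Common' extensions, space separated
    print ' '.join(bytype['Very Common'])
    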

    ·

  • Simple Rice Crispies Treats

    I’ve been making a lot of baked treats recently, rice crispies included! I’ve always liked making these because you can use any cereal and any type of marshmallow, and the measurements don’t matter so much. The general “recipe” is as follows:

    • 3-5 tablespoons margarine (I usually just use half a stick, which is 4)
    • 1 package (10 oz) marshmallows
    • 6 cups cereal

    Instructions

    1. Melt margarine on low heat
    2. Add marshmallows, melt completely
    3. Mix in cereal in a large bowl, then press into a pan (use Pam or margarine-coated bags on your hands to press down)
    4. Let set and cut into squares!
    5. Package and send to people you like!

    I’d like to briefly discuss the evolution of my baking, and the incentives for doing it, as they have changed over time. The main incentive has always been being able to create something to give to someone else to show them that you value them. Baking is a way of expressing value and affection, even if the person is far away. In high school I regularly baked for my advisory and classes because it was fun, challenging, and fulfilling. In terms of my time constraints, I didn’t have many. I did what I was supposed to, school-work and running wise, with still plenty of free time for playing computer games and family stuffs. I hadn’t yet stumbled on something that I loved so much that I would choose to do it not only for a full time job, but also in my free time.

    Fast forward seven or eight years, and I’ve stumbled on that something, in a big way. I now have a rich life filled with work that I am passionate about, but that means the free time that I’m willing to allocate towards something like baking is minuscule. However, 1) the desire to make things for people that I care about, 2) the love of being creative with color and design, and 3) the enjoyment of the process of creating something from nothing still make me want to keep baking as a part of my life. The compromise is the amount of time that I’m willing to spend on these projects. No longer will I devote 5 to 6 hours in a night to making a three tier cake or a perfect batch of decorated cookies. I also no longer have the bottomless pantry and complete set of tools that were present at my family’s home, or the desire to spend large amounts of money on very specialized ingredient lists. The result is that I make simple things that are fun and quick, like two layered pink and green cheerio-crispies :)

    I’d also like to note that this is my hundredth post – hooray!

    ·

  • Hackerspace Cupcake Challenge

    I was pretty excited about the Hackerspace Cupcake Challenge until I realized that I wasn’t part of a Hackerspace, and it would not be feasible to join or make my own. Under these circumstances, I decided to host my own little Cupcake Challenge, and create a batch of cupcakes paired with a fun, easy, and cheap packaging strategy.

    **Idea:**

    • Make a cupcake pod for each cupcake. Mostly because I like the idea of each cupcake travelling in its own vessel.
    • Make cupcakes square so corners are better anchors
    • Use some sort of toothpick to anchor (either sticking through the side or pushing down)

     

    Rejected Ideas:

    • Use one of those round, plastic expanding ball toys, setting each cupcake into a hole (frosted side inward) and anchoring the cupcakes from the sides
    • Put cupcakes inside of a gerbil ball, and fill the entire center with sprinkles as a “packaging filler.” When it gets there, simply open up the ball, pour out the sprinkles, and remove the cupcakes.
    • Bake cupcakes into something that could make them more egg shaped, and use an egg carton
    • Make a “cupcake hat” out of something hard and plastic, and place over cakes in a standard, rectangular box.
    • Cupcakes in a tube didn’t seem like a great idea.

    Procedure:

    1. Make cupcakes and put them into paper liners. To make them square, instead of using a standard pan, I just packed the liners together into a square one.

    2. Bake in the oven. Cupcakes usually take 10-15 minutes, but I would definitely check earlier as opposed to later!

    3. Remove from the oven, and when they are cool enough to touch, remove from the pan.

    4. Prepare pods! Each cupcake will be nestled into the bottom of a foam cup, anchored in by small forks (which I found in the toothpick section at the market).

    5. This shows my testing of the anchoring technique that I used, and yes, this pod is being held upside-down! Once you are confident about your strategy, frost the cupcakes, and apply sprinkles, of course.

    6. Place each cupcake loosely in a pod, and, using two forks to grasp the outer shell, push the cupcake down. The forks should act like little prongs to hold it in, and can possibly be manipulated to help lift it out again.

    7. The cupcake in the center is ready to be pushed down, while the one off to the left has already been anchored. The next picture shows the army of anchored cupcakes. What isn’t shown is the extra two prongs that were criss-crossed and pierced through the foam cup to hold it down (see step 5).

    8. For packaging, I decided to cover them with a piece of aluminum foil, tape it down, and then package them, tops together, in groups of 2. I then placed each podlet into a padded envelope.

    9. The packages are ready! I included instructions in each about documenting the opening, etc. I will post results here as they come in!

    RESULTS

    1) North Carolina to Rhode Island: 665 miles, estimated travel time: 3-4 days

    For this package, I have to admit that I ruined an entire pod before even sending it out. I made the mistake of trying to drop three packages at once into the big metal, swinging “drop box” at the post office. Let’s just say that it didn’t swing open again, and I immediately knew that one of my packages was stuck. When I tried to move the handle back, I felt the awful crunch / smash of one of my parcels, but I had to go through with the damage, or else none of the packages would fall through, and I was sure the next person would apply a significant amount of force and potentially chop one of my parcels entirely in half. There was no one to help at the post office at 5:00am on a weekday, so I had to take matters into my own hands and clunk the handle around until it finally wiggled through. The result is the damaged shipment below, but I’m still grateful that it arrived in one piece! I have included my brother’s comments, as they are super awesome :O) His removal technique is as well!

    An interesting package has arrived in the mail. Inside, a pair of "cupcake pods" and a message! Opening the first of the two cupcake pods to see what is inside... Now fully open, what is to be found? Oh no! It's, it's... dead! This poor cupcake did not survive the journey inside the pod, it seems. A second pod still remains. It will now be opened... Inside, an intact cupcake! But, it is stuck! Can it be removed without suffering any damage? Drastic measures are taken. The bottom of the pod must be perforated to free the trapped cupcake. Success! An intact cupcake is extricated from its former pod. The tale ends. One cupcake survived, one did not, but it ultimately doesn't matter because both shall suffer the same fate anyway (be eaten).

    2) North Carolina to New Hampshire: 724 miles, estimated travel time: 3-4 days. The videos below show my parents unpacking the parcel, in their very cute way :O)

    3) North Carolina to California: 2,543 miles, estimated travel time: 4 days. Arrived! Looks pretty good for that many miles! Thank you US Postal Service!

    ·

  • Sudomance

    In honor of the xkcd cartoon, “Sudo make me a sandwich”:

    Version 1:

    Version 2:


    ·

  • Interactive Timeseries Visualization

    I am working on a mini project that, simply put, will be a webpage that visualizes BOLD timeseries for a particular task and contrast. The goal is to have these average group timeseries created and uploaded automatically so that there is always updated, live data to play with in the browser. I put together a rough mock up, which I’m excited to share! Here are the rough steps that I mentally sketched out to make this possible for one of our tasks.

    **1. Script extracts and organizes data:** Matlab is the data master here, so Matlab was my choice for collecting the data. I wrote up a quick script that goes through the folders for each subject where we store timeseries matrices and, based on the names and embedded data, creates a master matrix of mean values for each contrast of interest. To step back a bit, these individual subject matrices are made when we run our PPI (Psychophysiological Interactions) pipeline, which, as part of the analysis, produces a timeseries of values for a particular contrast / mask. We are currently using anatomically and functionally defined masks, so the area that the values are extracted from would be based on significant activation for an entire group, and an anatomical mask like the amygdala. Anyway, this script jumps around the various folders, figures out the masks and contrasts that we have data for, and creates a mean timeseries for each. It’s a harmless little scripty because it just loads data matrices and reads them, and then writes the collected data to a .csv file for the next step. A rough sketch of the idea is below.
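
    Just to make the idea concrete, here is a minimal Python sketch of that aggregation step (the real script is in Matlab). The folder layout, file naming, and output name are all invented for illustration:

    
    import csv
    import glob
    import numpy as np
    
    # Collect each subject's timeseries, keyed by contrast name
    # (hypothetical layout: subjects/<subject>/<contrast>_timeseries.csv)
    means = {}
    for path in glob.glob('subjects/*/*_timeseries.csv'):
        contrast = path.split('/')[-1].replace('_timeseries.csv', '')
        means.setdefault(contrast, []).append(np.loadtxt(path, delimiter=','))
    
    # Write one mean timeseries per contrast to a master .csv
    out = open('mean_timeseries.csv', 'w')
    writer = csv.writer(out)
    for contrast in means:
        writer.writerow([contrast] + list(np.mean(means[contrast], axis=0)))
    out.close()
    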

    **2. Get data into the web:** This next step I did manually for my mock up, but it would be incredibly easy to have it done automatically, if that is what we choose. The web interface that I threw together is based on the Google Charts / Gapminder API, and the data is fed in live from a Google Spreadsheet. I could also code it to find a .csv file on a server somewhere and read that data. I will decide which one to use based on how often I want the data updated, how it would be updated, and the level of security I want for the data. For the purposes of my mock up, I just imported the .csv into a Google doc manually. But obviously there are very easy ways to get the data somewhere automatically! At the end of the day I can create a simple little batch script that checks for the data connection, produces the organized data matrix, formats it into a .csv, connects to somewhere to plop it, and that place that it gets plopped gets queried by the interactive chart. So awesome! A sketch of that glue is below.
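
    Since I keep mentioning a little batch script, here is the kind of glue I have in mind, sketched with made-up names: the Matlab script, server, and paths are placeholders, not our real setup:

    
    #!/usr/bin/python
    # Hypothetical nightly glue: regenerate the .csv, then push it to the
    # server that the chart reads from. All names below are placeholders.
    import subprocess
    
    # Run the Matlab extraction script headlessly
    subprocess.call(['matlab', '-nodisplay', '-r', 'extract_timeseries; exit'])
    
    # Copy the result somewhere the web page can query it
    subprocess.call(['scp', 'mean_timeseries.csv', 'user@myserver:/var/www/data/'])
    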

    **3. Allow for customization:** This last step is something that I haven’t delved into yet because I’d like to talk with my lab mates about how we want this to look, and whether we want it to be more static (with a manual update, maybe each time we have a data freeze) or automatic, on a nightly basis, for example. I could make this a part of our site, and have a nice little interface that lets the user select the task and the number of subjects. I’m thinking that it would be cool to also have gender and genotype as variables, but this would make the entire task a little more challenging, because that information isn’t easily “grabbable” from anywhere. And I’m not sure about what we are allowed to show… I need to do a multitude of checks before I start to work on anything more official.

    I figure that it would be cool to have something like this to show to a class, or anyone who is interested in what the lab is working on. For now, here is the mock up. I hope that you enjoy playing with this as much as I enjoyed making it!

    ·

  • Sudo, make me a sandwich!

    I love this cartoon from xkcd.

    As I dream up a new design for this, how about some sandwich python?

    
    #!/usr/bin/python
    
    class sandwich:
    
        def __init__(self):
            self.sandwich = {'bread': 'None', 'made': False, 'spread': 'None', 'name': 'None'}
    
        def __repr__(self):
            return "<Name:%s,Made:%s,Spread:%s,Bread:%s>" % (self.sandwich['name'], self.sandwich['made'], self.sandwich['spread'], self.sandwich['bread'])
    
        # Stock up on fillings and bread from the pantry
        def getFromPantry(self, fillings, bread):
            self.sandwich['spread'] = fillings
            self.sandwich['bread'] = bread
    
        # Assemble the sandwich and give it a name
        def makeSandwich(self, bread, spread, sname):
            self.sandwich['bread'] = bread
            self.sandwich['spread'] = spread
            self.sandwich['made'] = True
            self.sandwich['name'] = sname
    
        # Eat it!
        def nomnom(self):
            print self
            print "Nom Nom!"
    
    def main():
        mysandwich = sandwich()
        mysandwich.getFromPantry(('peanut butter', 'fluff', 'love'), 'honey wheat')
        mysandwich.makeSandwich('honey wheat', mysandwich.sandwich['spread'], 'blisswich')
        mysandwich.nomnom()
    
    if __name__ == '__main__':
        main()
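
    Running this prints the sandwich repr, <Name:blisswich,Made:True,Spread:('peanut butter', 'fluff', 'love'),Bread:honey wheat>, followed by a satisfying "Nom Nom!".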
    
    
    ·

  • Peanut Butter Chip Cookies

    This was a recipe that I found based on the ingredients that I happened to buy at Trader Joe’s. I chose to add chocolate and white chocolate chips to yield a more interesting outcome!

    Ingredients

    • 1 1/2 cups all-purpose flour
    • 1/4 teaspoon table salt
    • 1 teaspoon baking soda (I used powder)
    • 1/4 pound (1 stick) unsalted butter, room temperature
    • 1 cup packed light-brown sugar (I used Trader Joe’s cane sugar)
    • 1 large egg
    • 1/2 teaspoon pure vanilla extract
    • 1 cup smooth peanut butter (I used JIF single serving packets I bought a while back and never used – and I used about 1 oz over 1 cup)
    • chocolate and white chips, to your liking
    • sugar for embellishing

    **Directions**

    1. Prepare ingredients

    2. Preheat oven to 350 degrees. Prepare a baking sheet (I used a baking mat).

    3. Sift together flour, salt and baking soda, and set aside. Bonus points for pink mixing bowls!

    4. In the bowl of an electric mixer fitted with the paddle attachment, beat together butter and sugar on medium speed until light and fluffy.

    5. Add egg and vanilla, and beat until well combined.

    6. Add peanut butter, and beat until smooth.

    7. Add flour mixture, and beat on low until combined.

    8. Add chocolate chips to your liking. The goal is to maximize chips without making the dough unworkable.

    9. Form each cookie into a ball using about a tablespoon of dough. Place cookies on the prepared baking sheets, about 1 inch apart.

    10. Press a fork slightly into each cookie to make the signature crosshatched top. Squiisshhhh…

    11. Roll some in sugar, either on the top or sides, if desired. I wouldn’t opt for the bottom because it will burn to the pan.

    12. Bake until golden, about 10-15 minutes. I would put them in for ten, and then check and adjust from there based on your cookie texture preference.

    13. Finished! Package and consume as desired!

    ·

  • NPR Puzzle of Week Class

    Merry Christmas everyone! I wanted to share the start of a class that I have written (and will very likely keep modifying) in python to solve NPR Puzzles of the Week. These puzzles commonly involve shuffling around the letters in a state / city / TV show / etc., re-arranging, subtracting, or adding to make something new. The way to get from the starting point to an answer is usually systematic and logical, and the limiting factor of the entire process is the human’s ability to filter through massive amounts of data.

    So let’s say that you have the patience of a monk. You could still go through hundreds of possibilities and miss the answer. With this in mind, I wanted to create a class that would help solve the puzzles. I am very much someone who enjoys developing a procedure to get to a solution that might be used again, and so instead of writing a new, extensive script for every puzzle, I chose to write a class. It would have basic functionalities like reading data from file, retrieving data points, getting field names, and writing output to file. I would bet that if I searched, I’d find a combination of modules that would do this just fine, but I really wanted to create my own, and modify it over time to fit my needs.

    The challenge for last week was the following: Name a city in the United States that ends in the letter S. The city is one of the largest cities in its state. Change the S to a different letter and rearrange the result to get the state the city is in. What are the city and state?

    I broadly decided that I wanted the following functions in my class:

    INPUT

    • setFile(): we need something to be able to make sure the file exists and is readable
    • parseData(): should be able to read in all of the data, regardless of how many columns we have, after checking that the file has been set.

    READ

    • lookup(): should be able to look up a value in the data based on an x,y coordinate
    • entireRow(): we should be able to return an entire row of data at once
    • getFieldNum(): given that the number of columns isn’t predictable, I want a function that tells me how many I have
    • getFieldName(): I want a function that, if called without an argument, gives me all the headers / first entries of each field, and if given an index, n, returns the header of the nth column

    OUTPUT

    • fileOut(): sets the output name, so I can always specify what I want my results file to be called
    • writeOut(): writes to the output file, and will alert the user if the output file has not been defined

    Given these functions, I decided on these class variables:

    • self.file: the data file to be read, initially set to None
    • self.fields: will be a list holding the column titles of the data read from file
    • self.data: will be a list that holds the raw data for each column
    • self.numentries: the number of data entries (rows)
    • self.numfields: the number of fields (columns) in the file

    Here is the first draft of the class:

    
    #!/usr/bin/python
    
    import re
    import sys
    import os.path
    import csv
    
    # solve.py creates basic functionality for reading and writing files,
    # to be used to help with solving NPR Puzzles of the week!
    
    class solve(object):
    
        #-__init__--------------------
        # Defines object when created
        def __init__(self):
            self.file = None     # the data file for reading
            self.fileout = None  # the output file name
            self.fields = []     # A list that holds the column titles of data from file
            self.data = []       # A list that holds raw data for each column
            self.numentries = 0  # The number of data entries (rows)
            self.numfields = 0   # The number of fields (columns) in the file
    
        #-__repr__-------------------
        # Defines object when printed
        def __repr__(self):
            if self.file and self.fileout:
                return "<file:%s, fileout:%s, fields:%s, entries:%s>" % (self.file.name, self.fileout, self.numfields, self.numentries)
            if self.file and not self.fileout:
                return "<file:%s, fields:%s, entries:%s>" % (self.file.name, self.numfields, self.numentries)
            else:
                return "<fields:%s, entries:%s>" % (self.numfields, self.numentries)
    
        #------------------------------------------------------------
        # INPUT FUNCTIONS
        #------------------------------------------------------------
        #-setFile---------------------
        # Check file exists and is readable
        def setFile(self, filepath):
            try:
                self.file = open(filepath, 'r')
                print "File " + filepath + " set."
            except IOError:
                print "The file does not exist, exiting"
            return
    
        #-parseData--------------------
        # Read all columns into a list
        def parseData(self, delim):
            if self.file is not None:
                filelist = []
                columns = csv.reader(self.file, delimiter=delim)
                #columnsniff = csv.Sniffer()
    
                for line in columns:
                    filelist.append(line)
    
                #if csv.Sniffer.has_header(columnsniff,'%s %s %s'):
                self.fields = filelist.pop(0)   # first row is assumed to be the header
                self.numentries = len(filelist)
                self.data = filelist
                self.numfields = len(self.data[0])
                print self
            return
    
        #------------------------------------------------------------
        # READ FUNCTIONS
        #------------------------------------------------------------
        #-lookup--------------------
        # Look up one data value based on a coordinate
        def lookup(self, row, column):
            return self.data[row][column]
    
        #-entireRow-----------------
        # Return an entire row of data
        def entireRow(self, row):
            return self.data[row]
    
        #-getFieldNum---------------
        # Return total number of fields (columns), or the index of a named one
        def getFieldNum(self, fieldname=None):
            if not fieldname:
                return self.numfields
            for i in range(len(self.fields)):
                if self.fields[i] == fieldname:
                    return i
            print "Field name not found"
    
        #-getFieldName--------------------
        # Return the list of header titles, or one header title
        def getFieldName(self, loc=None):
            if self.fields and loc in range(len(self.fields)):
                return self.fields[loc]
            elif self.fields:
                return self.fields
            else:
                print "No fields found."
            return
    
        #------------------------------------------------------------
        # OUTPUT FUNCTIONS
        #------------------------------------------------------------
        #-fileOut-------------------
        # Output file creation
        def fileOut(self, outputname):
            self.fileout = outputname
            print self
            return
    
        #-writeOut--------------------
        # Write line of possible answer to output file
        def writeOut(self, index):
            if not self.fileout:
                print "Output file not defined.  Use .fileOut() function to set outfile."
                return
            try:
                outputfile = open(self.fileout, 'a')
            except IOError:
                print "Output file could not be opened.  Use .fileOut() function to set outfile."
                return
            outputfile.write(str(self.data[index]) + "\n")
            outputfile.close()
            return
    
    #-main-------
    # Main function
    def main():
        puzzle = solve()
    
    # If called directly (not imported as a module), run main
    if __name__ == '__main__':
        main()
    
    

    I also want to note that I found functionality in the csv module, something called Sniffer, that I am going to implement to make it possible to figure out whether the data file has header info or not. Since I am still working on this, the sniffer code is currently commented out.
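
    For reference, here is roughly how I expect the Sniffer to plug in once I wire it up (just a sketch, not yet part of the class; 'data' is my sample file name):

    
    import csv
    
    # Peek at the first chunk of the file and ask the Sniffer about it
    sample = open('data').read(1024)
    if csv.Sniffer().has_header(sample):
        print "File appears to have a header row"
    else:
        print "No header row detected"
    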

    …and here is the script that uses this class to solve the puzzle detailed above:

    
    #!/usr/bin/python
    
    from solve import solve
    
    statepuzzle = solve()           # Create new solve object
    statepuzzle.setFile('data')     # Give file to object
    statepuzzle.fileOut('answer')   # Set output file name
    statepuzzle.parseData('\t')     # Read file
    
    # Read in the city and state for each entry
    for i in range(len(statepuzzle.data)):
        print statepuzzle.entireRow(i)
        state = statepuzzle.lookup(i, 1)
        city = statepuzzle.lookup(i, 0)
    
        # Split each string into a list of lowercase characters
        state = list(state.lower())
        city = list(city.lower())
    
        # Get rid of spaces
        while ' ' in city: city.remove(' ')
        while ' ' in state: state.remove(' ')
    
        # Find the number of matching letters
        sharedcount = 0
        if len(city) == len(state):
            for j in range(len(state)):
                if state[j] in city:
                    sharedcount = sharedcount + 1
    
        # Print line to file as possible answer
        if sharedcount == len(state) - 1:
            statepuzzle.writeOut(i)
    
    

    and here is the data file that I input (sans extension), which includes just under 300 of the largest cities in the United States, along with their respective states and populations (I didn’t need the population, but threw it in since it was available).

    The second script above reads through each entry, and for each one puts each character (made lowercase) from both the state and city into separate lists. It then removes all white spaces and checks to see if the list lengths are equal. If they aren’t, we shouldn’t bother going any further. If they are, we move through one list and check each character for existence in the other. Whenever we find a match, we add one to the count. This is where I should have added a line that removes each “found” character from the second list, to handle strings with duplicate characters. Without that check, a state with two a’s would add two to the count against a city with only one a, since the a isn’t consumed after being matched. I am aware that this could produce imperfect results, but for this early version I decided that my human eye could be the solution for any slip-ups. And honestly, I jumped the gun a bit; I was so excited to give it a test run that I didn’t go back and add that functionality. A duplicate-safe version of the check is sketched below.
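
    Here is a minimal sketch of that duplicate-safe check (sharedLetters is a hypothetical helper name, not something already in the class):

    
    #-sharedLetters--------------------
    # Count matching letters, consuming each match so that a repeated
    # letter only counts as often as it appears in both words
    def sharedLetters(city, state):
        remaining = list(city)
        count = 0
        for ch in state:
            if ch in remaining:
                remaining.remove(ch)
                count = count + 1
        return count
    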

    The next step looks to see if the count is equal to the length minus 1, theoretically meaning that they might match by all but one of the characters. What I wanted was a short list of city/state contenders, and then my human eye could easily pick out the winner. I lucked out in that the script found only one possible answer, and it was the correct answer!

    ['Yonkers', 'New York', '201,066']

    Hooray! That definitely is a solution. For those familiar with python, the above output is obviously just a print of the entire row of data that fit the bill. It would be entirely do-able to add more functionality to the class and to the script that uses it to produce a more attractive output file.

    I will continue to modify this class to fit my needs for new puzzles in the coming weeks. And I think it’s important to note that there is still a fair chance of error. Whether you are human or machine, you can always do a bad job of selecting your data. If the data file is weirdly formatted, the script would have a lot of trouble reading it, and I didn’t add any checks for a standard format. I also assume that the first row is the header data, and it might be the case that someone uses data without headers. These are things that are on my mind that I will continue to improve as I learn more python.

    ·