Tuesday, June 20, 2017

About grants: What are "indirect costs"?

Before blogging further about science, I wanted to explain something about the way research grants work in the US.  Consider this part of my series of posts intended to educate students (and perhaps the public) about careers in academic research.

When you write a proposal to a would-be source of research funding, you have to include a budget.  As anyone would expect, that budget will list direct costs - items that are clearly research expenses.  Examples would include, say, $30K/yr for a graduate student's stipend, $7K for a piece of laboratory electronics essential to the work, and $2K/yr to support travel of the student and the principal investigator (PI) to conferences.   However, budgets also include indirect costs, sometimes called overhead.  The idea is that research involves certain costs that aren't easy to account for directly, like the electricity to run the lights and air conditioning in the lab, the costs to keep the laboratory building maintained so that the research can get done, or the (meta)costs for the university to administer the grant.  

So, how does the university figure out how much to tack on for indirect costs?  For US federal grants, the magic (ahem) is all hidden away in OMB Circular A-21 (wiki about it, pdf of the actual doc).  Universities periodically go through an elaborate negotiation process with the federal government (see here for a description of this regarding MIT), and determine an indirect cost rate for that university.  The idea is that you take a version of the direct costs ("modified total direct costs" - for example, a piece of equipment that costs more than $5K is considered a capital expense and is not subject to indirect costs) and multiply by a negotiated factor (in the case of Rice right now, 56.5%) to arrive at the indirect costs.  The rates are lower for research done off campus (like at CERN), the argument being that such work should be cheaper for the university to support.  (Effective indirect cost rates at US national labs tend to be much higher.)
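To make the arithmetic concrete, here is a toy sketch in Python using the hypothetical budget items above and Rice's 56.5% rate, treating everything as a single-year budget for simplicity.  (The $5K equipment threshold is the one mentioned above; real MTDC calculations have additional exclusions - tuition, portions of subawards, and so on - so treat this as illustrative only.)

```python
# Toy illustration of how indirect costs get added to a budget.
# The rate and the $5K equipment threshold come from the post; real MTDC
# rules have more exclusions, so this is a sketch, not a budgeting tool.

INDIRECT_RATE = 0.565          # Rice's negotiated rate quoted above
EQUIPMENT_THRESHOLD = 5_000    # capital equipment excluded from MTDC

direct_items = {
    "grad student stipend": 30_000,
    "lab electronics":       7_000,   # > $5K, so treated as capital equipment
    "conference travel":     2_000,
}

# (In reality the capital-equipment test involves more than just the price.)
equipment = {k: v for k, v in direct_items.items() if v > EQUIPMENT_THRESHOLD}
mtdc = sum(v for k, v in direct_items.items() if k not in equipment)

indirect = INDIRECT_RATE * mtdc
total = sum(direct_items.values()) + indirect

print(f"Modified total direct costs: ${mtdc:,.0f}")      # $32,000
print(f"Indirect costs (56.5%):      ${indirect:,.0f}")  # $18,080
print(f"Total request:               ${total:,.0f}")      # $57,080
```

Even in this little example the overhead is a substantial fraction of the total request, which is why indirect cost rates get so much attention.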

Foundations and industry negotiate different rates with universities.  Foundations usually limit their indirect cost payments, arguing that they simply can't afford to pay at the federally negotiated rates.  The Bill and Melinda Gates Foundation, for example, only allows (pdf) 10% for indirect costs.   The effective indirect rate for a university, averaged over the whole research portfolio, is always quite a bit lower than the nominal A-21 negotiated rate.  Vice provosts/presidents/chancellors for research at major US universities would be happy to explain at length that indirect cost recovery doesn't come close to covering the actual costs associated with doing university-based research.  

Indirect cost rates in the US are fraught with controversy, particularly now.  The current system is definitely complicated, and reasonable people can ask whether it makes sense (and adds administrative costs) to have every university negotiate its own rate with the feds.   It remains to be seen whether there are changes in the offing.

Saturday, June 17, 2017

Interesting reading material

Summer travel and other activities have slowed blogging, but I'll pick back up again soon.  In the meantime, here are a couple of interesting things to read:

  • Ignition!  An Informal History of Liquid Rocket Propellants (pdf) is fascinating, if rather chemistry-heavy.  Come for discussions of subtle side reactions involved in red fuming nitric acid slowly eating its storage containers and suggested (then rejected) propellants like dimethyl mercury (!!), and stay for writing like, "Miraculously, nobody was killed, but there was one casualty — the man who had been steadying the cylinder when it split. He was found some five hundred feet away, where he had reached Mach 2 and was still picking up speed when he was stopped by a heart attack."  This is basically the story from start to finish (in practical terms) of the development of liquid propellants for rockets.   That book also led me to stumble onto this library of works, most of which are waaaaay too chemistry-oriented for me.  Update:  for a directly relevant short story, see here.
  • Optogenetics is the idea of using light to control and trigger the activation/inactivation of genes.  More recently, there has been a big upswing in the idea of magnetogenetics, using magnetic fields to somehow do similar things.  One question at play is what physical mechanism could let magnetic fields accomplish much at room temperature, since magnetic effects tend to be weak.  (Crudely speaking, the energy scale of visible photons is eV, much larger than the thermal energy scale of \(k_{\mathrm{B}}T \sim\) 26 meV, and readily able to excite vibrations or drive electronic excitations.  However, the energy of one electron spin in a reasonably accessible magnetic field of 1 Tesla is \(g \mu_{\mathrm{B}}B \sim\) 0.1 meV; see the quick numbers worked out after this list.)  Here is a nice survey article about the constraints on how magnetogenetics could operate.
  • For a tutorial in how not to handle academic promotion cases, see here.  
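For the curious, here are the quick numbers behind the energy-scale comparison in the magnetogenetics bullet, as a little Python sketch using scipy's physical constants.  (The 500 nm photon wavelength and g = 2 are my illustrative choices, not numbers from the survey article.)

```python
# Energy scales relevant to magnetogenetics: thermal energy at room
# temperature, the Zeeman energy of one spin at 1 T, and a visible photon.
from scipy import constants as c

T = 300.0    # room temperature, K
B = 1.0      # magnetic field, T
g = 2.0      # free-electron g-factor (assumed)

kT_meV = c.k * T / c.e * 1e3                                    # thermal energy
zeeman_meV = g * c.physical_constants["Bohr magneton in eV/T"][0] * B * 1e3
photon_eV = c.h * c.c / (500e-9) / c.e                          # a 500 nm photon

print(f"k_B T at 300 K:        {kT_meV:.1f} meV")      # ~26 meV
print(f"g mu_B B at 1 T:       {zeeman_meV:.2f} meV")  # ~0.12 meV
print(f"500 nm photon energy:  {photon_eV:.2f} eV")    # ~2.5 eV
```

The photon wins by a factor of a hundred over thermal energies, while the spin loses by a factor of a few hundred, which is the crux of the mechanism question.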

Tuesday, June 06, 2017

Follow-up: More 5nm/7nm/10nm transistors

Two years ago I wrote this post about IBM's announcement of "7 nm transistors", and it still gets many pageviews every week.   So, what's been going on since then?

I encourage you to read this article from Semiconductor Engineering - it's a nice, in-depth look at what the major manufacturers are doing to get very small transistors actually to market.  It also explains what those numerical designations of size really mean, and describes the changes in lithography technology.  (This is one part of my book that I really will have to revise at some point.)

Likewise, here is a nice article about IBM's latest announcement of "5 nm" transistors.  The writing at Ars Technica on this topic is usually of high quality.  The magic words that IBM is now using are "gate all around", which means designing the device geometry so that the gate electrode, the one that actually controls the conduction, affects the channel (where the current flows) from all sides.  In old-school planar transistors, the gate only couples to the channel from one direction.

Later I will write a long, hopefully accessible article about transistors, as this year is the 70th anniversary of the Bell Labs invention that literally reshaped global technology.

Thursday, June 01, 2017

What should everyone know about physics?

A couple of weeks ago, Sean Carroll made an offhand remark (the best kind) on twitter about what every physics major should know.  That prompted a more thoughtful look at our expectations for physics majors by Chad Orzel, with which I broadly agree, as does ZapperZ, who points out that most physics majors don't actually go on to be physics PhDs (and that's fine, btw.)  

Entertaining and thought-provoking as this was, it seems like it's worth having a discussion among practicing physicist popularizers about what we'd like everyone to know about physics.  (For the pedants in the audience, by "everyone" I'm not including very young children, and remember, this is aspirational.  It's what we'd like people to know, not what we actually expect people to know.)  I'm still thinking about this, but here are some basic ingredients that make the list, including some framing topics about science overall.
  • Science is a reason- and logic-based way to look at the natural world; part of science is figuring out "models" (ways of describing the natural world and how it works) that have explanatory (retrodictive) and predictive power.   
  • Basic science is about figuring out how the world works - figuring out the "rules of the game".  Engineering is about using that knowledge to achieve some practical goal.  The line is very blurry; lots of scientists are motivated by and think about eventual applications.
  • Different branches of science deal with different levels of complexity and have their own vocabularies.  When trying to answer a scientific question, it's important to use the appropriate level of complexity and the right vocabulary.  You wouldn't try to describe how a bicycle works by starting with the molecular composition of the grease on the chain....
  • Physics in particular uses mathematics as its language and a tool.  You can develop good intuition for how physics works, but to make quantitative predictions, you need math.
  • Ultimately, observation and experiment are the arbiters of whether a scientific model/theory is right or wrong, scientifically.  "Right" means "agrees with observation/experiment whenever checked", "wrong" means "predicts results at odds with reality".  Usually this means the model/theory needs an additional correction, or has only a limited range of applicability.  (Our commonplace understanding of how bicycles work doesn't do so well at speeds of thousands of miles an hour.  That doesn't mean we don't understand how bikes work at low speeds; it means that additional effects have to be considered at very high speeds.)
  • There are many branches of physics - it's not all particle physics and astrophysics, despite the impression you might get from TV or movies.
  • Physics explains light in all its forms (the microwaves that heat your food; the radio waves that carry your wifi and cell phone traffic; the light you see with your eye and which carries your internet data over fibers; the x-rays that can go through your skin; and the really high energy gamma rays that do not, in fact, turn you into an enormous green ragemonster).  
  • Physics includes not just looking at the tiniest building blocks of matter, but also understanding what happens when those building blocks come together in very large numbers - it can explain that diamond is hard and transparent, how/why water freezes and boils, and how little pieces of silicon in your computer can be used to switch electric current.  Physics provides a foundation for chemistry and biology, but in those fields often it makes much more sense to use chemistry and biology vocabulary and models.  
  • Quantum mechanics can be unintuitive and weird, but that doesn't mean it's magic, and it doesn't mean that everything we don't understand (e.g., consciousness) is deeply connected to quantum physics.  Quantum is often most important at small scales, like at the level of individual atoms and molecules.  That's one reason it can seem weird - your everyday world is much larger.
  • Relativity can also be unintuitive and weird - that's because it's most important at speeds near the speed of light, and again those are far from your everyday experience.   
  • We actually understand a heck of a lot of physics, and that's directly responsible for our enormous technological progress in the last hundred and fifty years.
  • Physicists enjoy being creative and speculative, but good and honest ones are careful to point out when they're hand waving or being fanciful. 
I'll add more to this list over time, but that's a start.... 

Wednesday, May 24, 2017

Hot electrons and a connection to thermoelectricity

The two recent posts about the Seebeck effect and hot electrons give some context so that I can talk about a paper we published last month.

We started out playing around with metal nanowires, and measuring the open-circuit voltage (that is, hook up a voltmeter across the device, which nominally doesn't allow current to flow) across those wires as a function of where we illuminated them with a near-IR laser.  Because the metal absorbs some of the light, that laser spot acts like a local heat source (though figuring out the temperature profile requires some modeling of the heat transfer processes).   As mentioned here, particles tend to diffuse from hot locations to cold locations; in an open circuit, a voltage builds up to balance out this tendency, because in the steady state no net current flows in an open circuit; and in a metal, the way electron motion and scattering depend on the energy of the electrons sets the magnitude and sign of this response.   If the metal is sufficiently nanoscale that boundary scattering matters, you end up with a thermoelectric response that depends on the metal geometry.  The end result is shown in the left portion of the figure.  If you illuminate the center of the metal wire, you measure no net voltage - you shouldn't, because the whole system is symmetric.  The junction where the wire fans out to a bigger pad acts like a thermocouple because of that boundary scattering, and if you illuminate it you get a net thermoelectric voltage (the sign depends on how you pick ground and which end you're illuminating).   Bottom line:  Illumination heats the electrons a bit (say a few Kelvin), and you get a thermoelectric voltage because of that, to offset the tendency of the electrons to diffuse due to the temperature gradient.  In this system, the size of the effect is small - microvolts at our illumination conditions.
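As a rough sanity check on that "microvolts" scale, here is a minimal sketch of the thermocouple-style estimate \(\Delta V \approx (S_{1} - S_{2})\,\Delta T\).  Only the few-Kelvin heating comes from the discussion above; the Seebeck coefficients are assumed, order-of-magnitude values typical of metals, with the wire/pad difference standing in for the boundary-scattering effect.

```python
# Order-of-magnitude estimate of the open-circuit thermoelectric voltage.
# The "few Kelvin" temperature rise is from the post; the Seebeck
# coefficients are illustrative ~1 uV/K values, not measured numbers.
S_wire = 1.5e-6   # V/K, assumed Seebeck coefficient of the narrow wire
S_pad  = 1.0e-6   # V/K, assumed Seebeck coefficient of the wide pad
delta_T = 3.0     # K, laser-induced heating at the wire/pad junction

V_oc = (S_wire - S_pad) * delta_T
print(f"Open-circuit voltage ~ {V_oc*1e6:.1f} microvolts")   # ~1.5 uV
```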

Now we can take that same nanowire, and break it to make a tunnel junction somewhere in there - a gap between the two electrodes where the electrons are able to "tunnel" across from one side to the other.  When we illuminate the tunnel junction, we now see open-circuit photovoltages that are much larger, and very localized to the gap region.  So, what is going on here?  The physics is related, but not true thermoelectricity (which assumes that it always makes sense to define temperature everywhere).   What we believe is happening is something that was discussed theoretically here, and was reported in molecule-containing junctions here.   As I said when talking about hot electrons, when light gets absorbed, it is possible to kick electrons way up in energy.  Usually that energy gets dissipated by being spread among other electrons very quickly.  However, if hot electrons encounter the tunnel junction before they've lost most of that energy, they have a higher likelihood of getting across the tunnel junction, because quantum tunneling is energy-dependent.  Producing more hot electrons on one side of the junction than the other will drive a tunneling current.  We still have an open circuit, though, so some voltage has to build up so that the net current in the steady state adds up to zero.  Bottom line:  Illumination here can drive a "hot" electron tunneling current, and you get a photovoltage to offset that process.  This isn't strictly a thermoelectric effect because the electrons aren't thermally distributed - it's the short-lived high energy tail that matters most.
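To see why the energy dependence of tunneling matters so much, here is a sketch of the textbook WKB estimate for transmission through a rectangular vacuum barrier.  This is not the model from the paper; the ~5 eV barrier height (of order a metal work function) and the 1 nm gap are assumed, illustrative numbers.

```python
# Why "hot" electrons tunnel preferentially: WKB-style transmission through
# a rectangular barrier.  Barrier height and gap are assumed values.
import numpy as np

hbar = 1.054571817e-34   # J*s
m_e  = 9.1093837015e-31  # kg
eV   = 1.602176634e-19   # J

barrier = 5.0 * eV       # assumed barrier height above the Fermi level
gap     = 1.0e-9         # assumed electrode separation, m

def transmission(excess_energy_eV):
    """WKB tunneling probability for an electron excess_energy_eV above E_F."""
    kappa = np.sqrt(2 * m_e * (barrier - excess_energy_eV * eV)) / hbar
    return np.exp(-2 * kappa * gap)

for dE in [0.0, 0.5, 1.0, 1.5]:   # "hotness" above the Fermi level, in eV
    print(f"{dE:.1f} eV above E_F: T ~ {transmission(dE):.2e}")
```

Even a modest amount of extra energy boosts the tunneling probability by well over an order of magnitude in this toy picture, which is why the short-lived high-energy tail can dominate the photoresponse right at the junction.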

It's fun to think about ways to try to better understand and maximize such effects, perhaps for applications in photodetection or other technologies....

Friday, May 19, 2017

What are "hot" electrons?

In basic chemistry or introductory quantum mechanics, you learn about the idea of energy levels for electrons.  If you throw a bunch of electrons into some system, you also learn about the ground state, the lowest energy state of the whole system, where the electrons fill up* the levels from the bottom up, in accord with the Pauli principle.   In statistical physics, there are often a whole lot of energy levels and a whole lot of electrons (like \(10^{22}\) per cc), so we have to talk about distribution functions, and how many electrons are in the levels with energies between \(E\) and \(E + dE\).   In thermal equilibrium (meaning our system of interest is free to exchange energy in the form of heat with some large reservoir described by a well-defined temperature \(T\)), the distribution of electrons as a function of energy is given by the Fermi-Dirac distribution.
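For concreteness, here is what that distribution looks like numerically - a small Python sketch of the Fermi-Dirac occupation \(f(E) = 1/(e^{(E-\mu)/k_{\mathrm{B}}T} + 1)\), with an assumed chemical potential of 5 eV (a typical metallic scale) and room temperature.

```python
# The Fermi-Dirac distribution: mean occupation of a level at energy E for
# electrons in equilibrium at temperature T.  The 300 K temperature and the
# sample energies below are illustrative choices.
import numpy as np

kB_eV = 8.617333262e-5   # Boltzmann constant in eV/K

def fermi_dirac(E_eV, mu_eV, T_K):
    """Occupation probability of a level at energy E (energies in eV)."""
    return 1.0 / (np.exp((E_eV - mu_eV) / (kB_eV * T_K)) + 1.0)

mu, T = 5.0, 300.0   # assumed chemical potential and room temperature
for E in [4.8, 4.95, 5.0, 5.05, 5.2]:
    print(f"E = {E:.2f} eV:  f = {fermi_dirac(E, mu, T):.3f}")
# Levels well below mu are essentially full, well above essentially empty,
# with the crossover smeared over a window of a few k_B T (~26 meV).
```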

So, what are "hot" electrons?  If we have a system driven out of equilibrium, it's possible to have the electrons arranged in a non-thermal (non-FD distribution!) way.  Two examples are of particular interest at the nanoscale.  In a transistor, say, or other nanoelectronic device, it is possible to apply a voltage across the system so that \(eV \gg k_{\mathrm{B}}T\) and inject charge carriers at energies well above the thermally distributed population.  Often electron-electron scattering on the 10-100 fs timescale redistributes the energy across the electrons, restoring a thermal distribution at some higher effective temperature (and on longer timescales, that energy cascades down into the vibrations of the lattice).  Electrons in a metal like Au at the top of the distribution are typically moving at speeds of \(\sim 10^{6}\) m/s (!!), so that means that near where the current is injected, on distance scales like 10-100 nm, there can be "hot" electrons well above the FD distribution.  
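The "10-100 nm" statement is just the product of those two scales; here is the one-line arithmetic, with the rough numbers quoted in the paragraph.

```python
# How far does an electron moving at ~1e6 m/s get before electron-electron
# scattering (10-100 fs) redistributes its energy?
v_fermi = 1e6                      # m/s, typical metal Fermi velocity
for tau in (10e-15, 100e-15):      # 10 fs and 100 fs scattering times
    print(f"tau = {tau*1e15:.0f} fs  ->  distance ~ {v_fermi*tau*1e9:.0f} nm")
# -> 10 nm and 100 nm: hot electrons stay "hot" only within tens of nm
#    of where they were injected.
```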

The other key way to generate "hot" electrons is by optical absorption.  A visible photon (perhaps a green one with an energy \(\hbar \omega\) of 2 eV) can be absorbed by a metal or a semiconductor, and this can excite an electron at an energy \(\hbar \omega\) above the top of the FD distribution.  Often, on the 10-100 fs timescale, as above, that energy gets redistributed among many electrons, and then later into the lattice.  That's heating by optical absorption.  In recent years, there has been an enormous amount of interest in trying to capture and use those hot electrons or their energy before there is a chance for that energy to be converted into heat.  See here, for instance, for thoughts about solar energy harvesting, or here for a discussion of hot electron photochemistry.  Nanoscale systems are of great interest in this field for several reasons, including the essential fact that hot electrons generated in them can access the system surface or boundary in the crucial timespan before energy relaxation.

(Talking about this and thermoelectricity now sets the stage so I can talk about our recent paper in an upcoming post.)

*Really, the whole many-body electron wavefunction has to be antisymmetric under the exchange of any two electrons, so it's wrong to talk as if one particular electron is sitting in one particular state, but let's ignore that for now.  Also, in general, the energy levels of the many-electron system actually depend on the number and arrangement of the electrons in the system (correlation effects!), but let's ignore that, too.

Tuesday, May 16, 2017

More coming, soon.

I will be posting more soon.  I'm in the midst of finally shifting my group webpage to a more modern design.  In the meantime, if there are requests for particular topics, please put them in the comments and I'll see what I can do.

Update:  Victory.  After a battle with weird permissions issues associated with the way Rice does webhosting, it's up here:  natelson.web.rice.edu/group.html

Still a few things that should be updated and cleaned up (including my personal homepage), but the major work is done.