Thursday, October 30, 2008

Answers to simultaneity / time dilation questions

Hi guys!

So, I think this is my first message to Rebecca's physics students this year. I'm a little embarrassed it took me so long to say hello, but things out at CERN have been pretty busy for me recently.

Anyway, this week I'm posting the answers to the first two sets of questions. If you found the one on simultaneity a little abstract, I don't blame you. It's tough to find reasonable questions before we've developed any of the mathematics, but hopefully you'll agree that it gets better as we go.

Oh, and a standing invitation: feel free to leave me comments at any time, on any post; don't worry that I won't see them--I get notified automatically. Especially for the answers; if there's something you don't understand, or something you think I messed up, write me and let me know!

https://mywebspace.wisc.edu/mweinberg/web/SimultaneityAnswers.pdf

https://mywebspace.wisc.edu/mweinberg/web/TimeDilationAnswers.pdf

Monday, March 31, 2008

The fundamental forces

By this point, we’ve talked quite a bit about what matter is made of, but we haven’t really said much about how it works. That is to say, we have no idea yet how the fundamental particles interact with each other. (Actually, that isn’t entirely true. We’ve talked about the fact that quarks combine to make hadrons as a result of “color”, and I’ll get back to this in a moment.) We currently have all the pieces, but not the rules of the game.

So what are the rules of the game? You likely already know the answer to that question, mostly because I wrote it in the title, but also because you know from last semester that essentially all of mechanics amounts to using Newton’s second law, F = dp/dt = ma, to determine what a system will do.

Alright, so forces are what we need to tell us how particles interact. At the moment, we believe there are four forces in nature*: the strong force, electromagnetism, the weak force, and gravity. I certainly hope you’ve heard of at least two of those, by the way. But is this really the whole list? What about the normal force or friction? Or the drag force or the spring force? For that matter, what about centrifugal force? (For those of you who don’t think it exists, I refer you to this xkcd comic: http://xkcd.com/123/ .)

The answer is that none of these other forces is truly fundamental; all of them are produced by one of the four forces on the list. In fact, in most cases, each of the forces I mentioned is just a product of electromagnetism. Is this surprising? Take the normal force for example: as I write this, I’m sitting on a chair on the third floor of a building at CERN. Gravity is pulling everything down, so why don’t I fall through the chair? For that matter, why doesn’t the chair fall through the floor? The reason is electromagnetism: the electrons in me repel the electrons in the chair, and the electrons in the chair repel the electrons in the floor, and as a result everything stays right where it is.

Okay, so hopefully you believe me that every interaction is really the result of just four fundamental forces. This means that it’s time for another table:

Force              Charge            Relative strength   Carrier particle
Strong             color             1                   gluon
Electromagnetism   electric charge   10^-2               photon
Weak               flavor            10^-6               W and Z bosons
Gravity            mass              10^-39              graviton (hypothetical)
There are a few aspects of this table I’d like to make special mention of. For one thing, each force is associated with a particular charge. This shouldn’t be surprising, especially if you’re in EM this semester, where you’ll have seen the equation relating the electric force to the field: F = qE. That is, if we want to know the effect (the force) of a field on a particular particle, we have to know how susceptible the particle is to that field; this is exactly what charge is: a highly charged particle will be affected quite a bit by a particular electric field, whereas an uncharged particle won’t be affected at all by the field.

But electric charge is only the most familiar kind of charge (the one associated with electromagnetism). Every other force has its own charge as well. As it turns out, the mysterious “color” quantum number from the last blog is just the charge of the strong force. This is the “true” definition of color, and the reason it deserves to be taken seriously. (Actually, even this isn’t really enough, but showing that it fits into the framework is a good start.)

The charge of the weak force is often called “flavor”, for reasons passing understanding. As a matter of fact, you already know the flavors of the quarks and leptons: the names themselves are the flavors. Thus, quarks come in six flavors: up, down, charm, strange, top and bottom. Leptons are generally considered to come in three flavors: electron type, muon type, and tau type.

Finally, the charge of gravity is, naturally enough, mass. After all, the amount of mass an object has determines how strongly it’s affected by gravity, and even just comparing Newton’s law of gravitation to Coulomb’s law,

F_g = G_N m_1 m_2 / r^2

F_e = k_e q_1 q_2 / r^2

suggests that mass relates to gravity the same way as electric charge relates to electromagnetism.
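To put a number on that comparison, here’s a quick back-of-the-envelope check in Python. The constants are standard values; the choice of two protons is just my own illustration. Since both forces fall off as 1/r^2, the ratio doesn’t depend on the separation at all:

```python
# Ratio of the electric to the gravitational force between two protons.
# Both forces go as 1/r^2, so the separation r cancels out of the ratio.
G   = 6.674e-11    # Newton's constant, N m^2 / kg^2
k_e = 8.988e9      # Coulomb's constant, N m^2 / C^2
m_p = 1.673e-27    # proton mass, kg
q_p = 1.602e-19    # proton charge, C

ratio = (k_e * q_p**2) / (G * m_p**2)
print(f"{ratio:.1e}")  # about 1.2e36
```

That factor of roughly 10^36 is why gravity, despite dominating everyday life, never matters in particle physics experiments.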

The third column of the table is actually a little bogus, because each force behaves quite differently as a function of distance, so “relative strength” depends a lot on how far back you decide to stand, so to speak. Still, there’s some useful information to be gleaned here, including the answer to the question I posed a few weeks ago: If protons repel each other electrically, then why do they get smashed together so tightly inside the nucleus? The answer is the strong force: protons and neutrons are made of quarks, which carry color, as we saw last week. Thus, they experience the strong force, much stronger than electromagnetism, which binds them together inside the nucleus.

While we’re still looking at the table, I’d like to talk about the “carrier particles” in the fourth column. Classically, the way forces operated was a bit mysterious, because they seemed to work from a distance. Magnets, for instance, attract other magnets even though there’s nothing physically connecting them; and the Sun attracts the Earth even through the vacuum of space. This really irritated Newton, who couldn’t figure out how objects could affect each other without some sort of tangible connection between them. He never found a way around this problem of “action-at-a-distance”, so he did exactly what I do when confronted with a tough homework question: he punted, and left the issue for someone else to solve.

The problem had to wait another two and a half centuries, but eventually the newly arrived “quantum field theory” proposed a good answer. Forces, according to field theory, in fact don’t cause action at a distance: they are instead communicated from one particle to another by a particle which “carries” the force. These particles are listed in the fourth column; in many ways they’re very similar, but they’re also just different enough to explain why the individual forces act the way they do.

To be fair, I should mention that this is where the story really starts becoming incomplete. First of all, we need a separate field theory to explain each of the different forces: electromagnetism, the “easiest” force, is described by “quantum electrodynamics” (QED); the strong force is described by “quantum chromodynamics” (QCD); and the weak force, which has already been merged with electromagnetism by a guy named Steve Weinberg (no relation), is described by the “electroweak model”. We’ll talk more about each of these in the next few weeks.

And what about gravity? What quantum field theory describes that force? Well, this is embarrassing, but it turns out we don’t have one. That’s right: the oldest and best-known of the fundamental forces, and we have no idea how it works at small distances. None. Some textbooks and Scientific American articles play down this problem, saying the theory isn’t “completely satisfactory”, or that “gravity doesn’t play a significant role in particle interactions”. (That last point is mostly true; a glance at the table shows us that gravity is a billion trillion trillion times weaker than even the weak force.) But the problem is actually much worse than that: it is in fact impossible to construct a valid quantum field theory for gravity. This is a big deal, and it means more than just that gravity needs to be left out: it means that our entire model, quantum field theory itself, must not be the final story.

I’ll let you mull this over until next week, but feel free to send me any comments or questions. Here are this week’s problems:


https://mywebspace.wisc.edu/mweinberg/web/fundamentalForces.pdf

*In a few weeks I’ll contradict myself: there are actually only three fundamental forces, because we have “unified” electromagnetism with the weak force. If we’re going to talk about these as two separate forces, we might as well split up electricity and magnetism and say there are five forces. Many people believe (and I agree) that there is only one fundamental force, and everything we see is a product of that.

Friday, March 21, 2008

Introducing "color"

Last week, I raised two objections to the quark model: (1) it seems to violate the exclusion principle (at least in some cases), and (2) it doesn’t explain why quarks only come in certain combinations (three quarks, three antiquarks, or a quark-antiquark pair). As it turns out, both problems can be handled by introducing a new quantum property, exclusively for quarks, called color.

Our idea is to suppose that every quark actually comes in one of three types, which we’ll call red, blue, and green (R, B, G), by way of analogy with the primary colors of light. Oh, and by the way, I probably don’t need to mention this, but of course these labels have absolutely no connection to real colors; it’s just that there happen to be three of them. (Particles have no color at all; in fact, they don’t “look” like anything in the first place.)

Anyway, adding the new color label immediately fixes our problem with the exclusion principle. Why? The key point is that exclusion applies only to identical particles. If the quarks have different colors, they’re no longer identical, so they’re perfectly welcome to be in the same state. Take the Delta baryon from last week:

Δ++ = uRuBuG

This particle is a legitimate problem if all the up-quarks are the same, but they are now distinguished by their color, so there’s no conflict. Simple!

Well, not quite so simple, actually. This may solve one problem, but it immediately brings up another: if quarks can now have a color quantum number, then why can’t we have lots of different proton states? That is, we know protons are made of two up-quarks and one down-quark, but now we can color them too, so why can’t I make up lots of different color combinations, like

p = uRuBdG
= uRuGdG
= uBuRdR
… etc.

By my count (you might want to check me) there could be a total of 18 different proton states! And we’d be able to tell from experiments, too: for example, the Δ++ would be eighteen times more likely to decay into a proton; the neutron would decay about eighteen times faster, etc.
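If you’d like to check my count, here’s a short Python sketch. The labels are just the R, B, G from above; the key point is that the two up quarks are interchangeable, so what matters is the unordered pair of colors they carry:

```python
from itertools import combinations_with_replacement

COLORS = "RBG"

# A candidate proton state: an unordered pair of colors for the two
# (interchangeable) up quarks, plus any color for the down quark.
states = [(u_pair, d)
          for u_pair in combinations_with_replacement(COLORS, 2)
          for d in COLORS]
print(len(states))  # 6 up-pair colorings x 3 down colorings = 18
```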

So this is a problem, because we don’t have 18 kinds of protons, we just have one. We now have to find a way of introducing this new “color” property of quarks without proliferating all our hadrons. But how do we do it?

The usual way to solve these kinds of problems is to take a theorist out to dinner, buy her a few drinks, and then ask what she would do. She’ll likely say something like this: “Now that you’ve introduced this color thing, what you need is a rule telling you how to use it.” Actually, she’d probably say that you needed a symmetry, because it turns out that every rule in physics is generated by a symmetry principle, but that’s a story for a different day. She’d then go on to say, “Why not require that your hadrons be invariant under rotations in color-space?” In simpler terms, she’s saying we should insist that our hadrons be colorless (or “white”, to stick to the analogy).

Take a look at this Venn diagram of color, which is what we’d get if we shined the three primary lights at a screen and let them overlap.* The overlap areas are important in this analogy, because we need them for the antiquarks: if quarks are given one of three possible colors, then antiquarks must be given one of three possible anticolors. (You may remember a couple of homeworks ago when I mentioned that antiparticles have all opposite quantum numbers, and this applies to color, too.) Physicists usually just refer to these as antired (R with a bar over it), antiblue (B with a bar over it), and antigreen (G with a bar over it), but if you like you can call them cyan, yellow, and magenta, to stick with the analogy.

Also, a key thing to notice here is that on the color wheel, an anticolor is exactly the same as a combination of the two other colors: for example, antired is exactly the same as blue plus green, and antiblue is the same as red plus green. This holds (mostly) for quantum color as well: the theory does not distinguish between, say, antiblue and RG, or antigreen and RB.

What does this mean for us? Well, assuming we trust our theorist, we can now get rid of our huge number of proton states by requiring the proton to be colorless: that is, an equal mixture of R, B, and G. Now instead of 18 proton states, we have only one.** While we’re at it, our new rule also fixes problem (2) from the beginning of the blog: there is now a unique set of ways to obtain colorless hadrons by mixing quarks and antiquarks of different colors:

(1) Equal mixture of red, blue, and green (RBG): baryon (qqq)

(2) Equal mixture of antired, antiblue, and antigreen: antibaryon (anti-q anti-q anti-q)

(3) Equal mixture of color and anticolor (R anti-R, B anti-B, G anti-G): meson (q anti-q)

And that’s that! These are the only ways to make a colorless hadron, and hence they are the only types of particles allowed. This is why we never see, for instance, a two-quark pair, or a single antiquark: any other combination would have to be colored, and so our rule says it’s out of bounds. Thus, our new color quantum number (along with our rule for using it) solves our problem with the exclusion principle without proliferating the number of hadrons we have, and it also explains why hadrons come only in these three varieties.

Well, okay, so that’s great, and you guys have been good sports about this so far, but I’m guessing at least a few of you are really irritated by this color nonsense. I can hear it now: “Is this seriously how science works? The quark model had a problem, so all we do is patch it together by introducing a new quantum number? Honestly, what’s to stop us from doing this every time we have a problem? A theory seems to violate uncertainty, or relativity, or energy conservation? No problem, don’t throw it away, more quantum numbers should fix it right up.”

If you were thinking this, then good for you. You’re thinking like a responsible scientist should, and you’re even being a little sarcastic about it. In point of fact, when the color idea came out in the mid-60s, a lot of very smart folks thought that the whole thing was just the last gasp of the dying theory of quarks. Even so, I ask you to reserve judgment for another couple of weeks, at which point I’ll talk about what color really means. But the point is well taken, and it’s a good idea to maintain a healthy dose of skepticism about all this stuff.

Speaking of which, you guys aren’t busting my chops nearly enough. You have to make me work for it. If you have questions or comments, let me hear them! Meanwhile, here are a few questions for you:

https://mywebspace.wisc.edu/mweinberg/web/quarkColor.pdf

*In case it helps, here’s a quick review from Rebecca of how colors of light combine. It may help for the questions this week, but remember that it’s only an analogy to our “quantum color”: When you shine all three primary colors of light (red, green, and blue) together, you get white light (look at the center of the Venn diagram). If you just shine red and blue light on a piece of paper, it looks magenta. Likewise, you can make the paper look cyan by shining green and blue light on it, or make it look yellow by shining red and green light on it. You can tell this from the Venn diagram where the circles of red, green, and blue overlap. That’s cool, but what if the paper isn’t white? A red book looks red because it reflects the red (and only the red) light into your eye. But the light shining on the book from the light bulb or the sun is white light, composed of all three primary colors of light. So what happens to the green and blue parts of the white light? The red book only reflects red, so it must absorb the other two colors. The bottom line is that a red book absorbs green and blue (which make up cyan light), so a red book is kinda like anti-cyan.

**Actually, that’s not entirely true. We really have three proton states left: uRuBdG, uBuGdR, and uRuGdB. Thus, we would expect, say, the decay Δ++ → p + π+ to be three times as likely as it would be otherwise, and it turns out this is the case: when computing the probability for this decay using quantum field theory, you must multiply by three to get the right answer.
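The footnote’s count of three can be verified directly; a small Python sketch that keeps only the color assignments in which R, B, and G each appear exactly once:

```python
from itertools import combinations_with_replacement

COLORS = "RBG"

# Proton = uud: an unordered pair of colors for the two up quarks plus a
# color for the down quark, filtered down to the "colorless" assignments.
colorless = [(u_pair, d)
             for u_pair in combinations_with_replacement(COLORS, 2)
             for d in COLORS
             if sorted(u_pair + (d,)) == sorted(COLORS)]
print(len(colorless))  # 3
```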

Tuesday, March 4, 2008

A quark problem

If you take a look back at the table I showed in last week’s blog, you’ll see that the only quantum number I listed for the fundamental particles was electric charge, Q. (By “quantum number” here I just mean a property of the particles, like spin or energy, for example.) Anyway, I’d like to reproduce the table again this week, but add a few other quantum numbers:

Particle                  Q      B     L
u, c, t (quarks)          +2/3   1/3   0
d, s, b (quarks)          -1/3   1/3   0
e, μ, τ (leptons)         -1     0     1
νe, νμ, ντ (leptons)      0      0     1
For starters, these new numbers, baryon number and lepton number, are a clever labeling system, in that they match the properties of the particles we talked about in the homework. For instance, the proton, a uud bound state (two up quarks and a down quark), is indeed a baryon (B = 1 = 1/3 + 1/3 + 1/3) and not a lepton (L = 0) and it has a total charge of +1. By contrast, the electron is a lepton (L = 1) and not a baryon (B = 0).

We can go further with the labeling. It turns out that antiparticles have all opposite quantum numbers from their corresponding particle; for example, an antimuon has B = 0, L = -1 (since it’s an antilepton), Q = +1. As a final example, a positively charged pion (π+) is a meson composed of an up and an antidown quark. It’s not a baryon or a lepton (B = 0 = 1/3 – 1/3), (L = 0) and its charge is +1 (Q = 1 = 2/3 + 1/3).

Okay, so I think it’s at least mildly nifty that you can make such a labeling system, but it turns out these properties of particles are much more useful than that: they are actually conserved values, so any particular physical process can only occur if it leaves these quantum numbers unchanged. For instance, we could have the interaction

because there is exactly one baryon, no leptons, and no net electric charge both before and after. Another valid interaction is

n → p + e- + anti-ν

Once again, we have one baryon in both the initial and final states, no leptons (remember the antineutrino counts as -1 leptons!), and no net charge. However, we would never expect to see interactions like


(doesn’t conserve baryon number)



(doesn’t conserve lepton number)



(doesn’t conserve charge)
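This kind of bookkeeping is easy to mechanize. Here’s a minimal Python sketch (the particle table and the example reactions are my own illustrations): a reaction is allowed only if Q, B, and L each sum to the same value before and after.

```python
# Each particle maps to (Q, B, L): charge, baryon number, lepton number.
PARTICLES = {
    "p":       (+1, 1, 0),
    "n":       ( 0, 1, 0),
    "e-":      (-1, 0, +1),
    "anti-nu": ( 0, 0, -1),   # the antineutrino counts as -1 leptons
    "pi+":     (+1, 0, 0),
}

def totals(side):
    """Sum Q, B, and L over all particles on one side of a reaction."""
    return tuple(sum(PARTICLES[p][i] for p in side) for i in range(3))

def allowed(initial, final):
    return totals(initial) == totals(final)

# Neutron beta decay conserves everything:
print(allowed(["n"], ["p", "e-", "anti-nu"]))  # True
# A neutron decaying to just a proton and an electron would
# violate lepton number conservation:
print(allowed(["n"], ["p", "e-"]))             # False
```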

Historically speaking, the theory that hadrons were made of quarks was starting to look like a reasonably good idea at this point. It explained the proliferation of hadrons, it sorted them neatly into mesons and baryons, and best of all it made predictions about what kinds of interactions they could and couldn’t undergo. Unfortunately, it ran into some embarrassing trouble at the next step: accounting for Pauli’s exclusion principle.

I haven’t really talked about Pauli’s exclusion principle, and for the most part I’d like to save it until the end of the semester, because it’s one of the most magnificent ideas in physics, and it has a lot to do with my research in particular. But I’ll say a couple things about it now, since it’s important to understand our quark problem.

As it relates to chemistry, Pauli’s exclusion principle mostly just says that two electrons can’t be in exactly the same state. This is why electrons in atoms aren’t all sitting at the bottom, in the 1s state; instead they have to fill out the other energy levels and the other orbitals (a friend of mine calls this the “bus seat rule”: once someone has taken a seat, that’s it, the next guy just has to find a different seat). Actually, as you probably remember from chemistry, it’s okay for two electrons to be at the same energy level and in the same orbital; they’re not really in the same state, because electrons can have two different spins.
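The “bus seat rule” is also where the familiar 2, 6, 10 pattern of subshell capacities comes from: a subshell has a fixed number of orbitals, and exclusion lets each orbital hold exactly two electrons (one per spin). A tiny Python sketch:

```python
# A subshell with angular momentum quantum number l has (2l + 1) orbitals,
# and the exclusion principle allows two electrons per orbital (spin up/down).
def subshell_capacity(l):
    return 2 * (2 * l + 1)

for name, l in [("s", 0), ("p", 1), ("d", 2)]:
    print(name, subshell_capacity(l))  # s 2, p 6, d 10
```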

Why does this matter to us? Well, it turns out the exclusion principle also applies to quarks; that is, no two identical quarks can be in exactly the same state. (Notice, by the way, that this rule only applies to identical quarks. It’s perfectly fine for an up quark and a down quark to be in the same state, because they’re two distinct particles.) Now, bearing this in mind, let’s take another look at the proton and neutron:

p = uud

n = udd

So far, so good. Sure, the proton has two up quarks in the same state, and the neutron has two down quarks in the same state, but we can get around this the same way we did in chemistry: the quarks also have two possible spins, so at most two identical quarks can be in the (otherwise) same state. The real problem is this guy:

Δ++ = uuu

In 1951, Fermi and his collaborators found this “delta” baryon, and sure enough, it had all the properties the quark model predicted it would have: it got the charge right, the mass right, it even got the lifetime right. But the model got all these things right by assuming the delta was made up of three identical up quarks in exactly the same state! This is absolutely forbidden by Pauli’s exclusion principle. What gives?

Even setting this exclusion fiasco aside for the moment, we still have cracks starting to show in our naive quark theory. Certainly it’s true that the quark combinations qqq, anti-q anti-q anti-q, and q anti-q neatly fit the observed sequence of baryons, antibaryons, and mesons, respectively, but what about all the other possible combinations, like qq, anti-q anti-q, and so on? For that matter, why can’t we just have single quarks by themselves?

I’ll let you mull over these puzzles until next week, but if you have ideas or suggestions, by all means write me a comment and let me know. In the meantime, here are this week’s questions:

Monday, February 11, 2008

The fundamental particles of matter

Ultimately, the whole point of particle physics is to figure out what the world is made of at the most fundamental level. It turns out it’s made of some weird stuff. About 2,500 years ago Empedocles claimed it was made of air, fire, earth and water, but I’m not sure what that was about since he apparently just made it up. About 23 centuries later Mendeleev came up with a better answer, which of course you guys are all familiar with: the periodic table. It’s kind of a monster, with well over a hundred chemical elements, and they’re not exactly arranged in a simple way, which accounts for most of why I was never very good at chemistry.

The fact that there are so many elements, and the fact that their properties seem to have repeating patterns, are both strong suggestions that they have a substructure; that is, they’re not fundamental, but rather made up of smaller particles. Why? Well, consider this: the periodic table may have a lot of elements, but it’s still a big step forward, because it showed us we could make the entire world out of a (relatively) few things: we didn’t need tree atoms for trees and book atoms for books and people atoms for people; we could make everything out of chemical elements. You can make the trillions of things around you with just a hundred elements. Not a bad start. But a hundred is still a lot, so people started looking for ways to make the chemical elements out of far fewer constituents.

The repeating systematics of the periodic table also suggest the elements are made of smaller particles: consider, if they really were fundamental, why would they be anything like one another? The fact that their properties repeat seems to mean that the elements are not themselves simple, but that they’re made of simple things, which is why we see patterns at all.

Okay, so nowadays we know that the elements in Mendeleev’s table are indeed built up of something more fundamental: electrons, protons, and neutrons. The protons and neutrons are crammed together inside the nucleus, while the electrons orbit way far away in the orbitals. Actually, while we’re on the subject, let me give you a puzzle to mull over: we know that it’s the electromagnetic force that binds the electrons in an atom; that is, electrons are electrically attracted to the protons in the nucleus, which is why they stick around. But why in the world do protons stay so smashed up against each other? They repel each other electrically, and yet not only do they hang close to each other, they wedge themselves in about a hundred thousand times closer than the nearest electron! What gives? I’ll answer this question next week, but in the meantime let it simmer a little.

Getting back to the story, we hit some bad news: as it happens, the neutron and proton were not alone. In fact, they turned out to be just two of the lightest members of a huge spectrum of particles called hadrons. Last I checked the Particle Data Group webpage, it listed about 200 different kinds of hadrons. At one point the problem got so bad that a prominent physicist joked that the Nobel prize should go to whoever didn’t discover a new particle that year. So it seemed we were right back where we started: a huge proliferation of “fundamental” particles needed to explain the universe.

I suppose that the advantage of being back where you started is that you know which way to go. In a straightforward replay of the discovery that atoms were made up of smaller particles, people figured out that the protons and neutrons and all the other hadrons were actually made up of smaller particles themselves: the quarks. And that’s where we are now: quarks, as near as we know, are not made up of anything smaller, but are true fundamental components of the universe.

By the way, perhaps you’re wondering whatever happened to the electron in this story. Well, people did find a couple extra particles that were like electrons in some ways, and they called these particles leptons. However, unlike the hadrons, we didn’t get a huge mess of them, and there is no evidence that they are made of something smaller, so we currently believe that leptons, like quarks, are truly fundamental. Here’s a table showing where things currently stand with the elementary particles of matter:

Generation   Quarks                   Leptons
1            up (u), down (d)         electron (e), electron neutrino (νe)
2            charm (c), strange (s)   muon (μ), muon neutrino (νμ)
3            top (t), bottom (b)      tau (τ), tau neutrino (ντ)
There are a few things you probably noticed about this table. For one thing, the names of the quarks are pretty ridiculous, and to some degree that’s the fault of a dude named Murray Gell-Mann, but it’s probably too late to do anything about it now.

Perhaps more importantly, all the particles are grouped into pairs of quarks and leptons, sometimes called “generations”. Each generation is basically a carbon-copy of the others, except for the masses of the particles: each one has two quarks, a charged lepton, and a neutral lepton (the neutrinos). Moreover, the particles are very similar across generations; for example, the muon is exactly like the electron, just heavier, and the tau is just a very heavy copy of the muon.

Maybe this repeating pattern of generations makes you a little antsy. After all, wasn’t it patterns just like this that convinced us atoms were made of protons, neutrons, and electrons? And then that protons and neutrons were made of quarks? How do we know that quarks and leptons aren’t made of still smaller particles?

Well, we don’t! All we can say at the moment is that there isn’t any evidence (yet) that quarks and leptons are made of smaller particles. We’ve looked at them pretty closely (down to about 10^-16 m, or a ten-thousandth of a billionth of a millimeter), but we’ve never seen any suggestion that they are composite.

But maybe three generations, while irritating, aren’t nearly as bad as the hundred elements of the periodic table, or the two hundred hadrons found later. Right now we’ve got twelve “fundamental” particles in the table, which is sort of right at the edge: it’s a lot, but not quite so many that you figure they’ve got to be made of something smaller.

Ah, but that’s a good point: do we know there are only three generations of particles? Couldn’t there be a fourth generation out there waiting to be discovered? Well, yes and no. We can’t say for certain that there are no additional generations, but we can say that if there are, they’d have to be very different from the first three. The reasons are fairly technical, so I won’t go into them now, but we might revisit this in a future blog. Still, a lot of people are intrigued by the idea of finding additional “fundamental” particles, and many people here at CERN are going to be looking for that very thing when the LHC turns on, so we’ll just have to wait and see what happens.

For now, let's talk a little more about hadrons: https://mywebspace.wisc.edu/mweinberg/web/constituentsOfMatter.pdf

Welcome back

Hey folks, hope you guys had a fantastic break. It was good getting to meet a bunch of you while I was in town. As for me, I’m back in Europe and I’ve had a couple of weeks to goof around and even get a little work done. More goofing around than work so far, though.

Anyway, back to the blog. Last semester we talked about special relativity and quantum mechanics, so I suppose it’s time to move on. Hopefully you enjoyed the stuff about QM; there’s some fairly crazy nonsense that goes on there. In my opinion, it’s much weirder than special relativity, but it’s also quite a bit more involved (mathematically speaking).

With those two things out of the way, I’d like to move on to something that’s particularly interesting to me: particle physics. You see, those subjects are both very interesting, but no one really “does” special relativity or quantum mechanics (or at least, not very many people). People write textbooks about them all the time, but you’d have to look pretty hard to find a paper published or a seminar given about them, because they’re no longer at the forefront of physics. In fact, they’re usually not viewed as proper theories at all, but rather frameworks, tools people use to do physics. It’s a bit like learning to play chess: knowing how the pieces move is not the same as being able to play the game. It’s certainly a prerequisite; you can’t even begin to play unless you know the rules, but it’s not enough all by itself.

If special relativity and quantum mechanics are the rules, then particle physics is the game. The object of the game is to figure out the fundamental building blocks of the universe, and it has occupied the minds of some of the most magnificent thinkers of the past hundred years. My goal over the next few weeks is to give you a sense of the “lay of the land”; some idea of what we know, and what we think we know, and what we definitely don’t know, about how the universe really works.

Monday, December 17, 2007

Heisenberg's uncertainty principle

Well, it’s getting to be about time to wrap up the semester, and I’ve saved the most important thing for last. Perhaps you’ve gotten the sense in the last few weeks that quantum mechanics is one incredibly bizarre thing after another, and this is certainly true. However, in a very real way, all that unbelievable weirdness actually emanates from a single source: the uncertainty principle. More than that, in fact: everything I’ve told you so far about quantum mechanics can actually be derived from the uncertainty principle.

A friend of mine once referred to uncertainty as “the beating heart of quantum mechanics”. This may be overly poetic, but it is true that the uncertainty principle is one of the two pillars on which all of quantum mechanics is based (the other being Pauli’s exclusion principle). In spite of being so important, it’s usually misquoted, and it’s very often misunderstood.

So what exactly is uncertainty? Actually, when Heisenberg originally wrote his paper in 1927, he didn’t really explain it very well. (I personally think he was hedging his bets. Remember, this was long before the debate about the realist vs. orthodox positions had been resolved.) Basically, what he said was this: the more precisely the momentum of a particle is known, the less precisely its position can be known, and vice versa.

To be a little more mathy about it, let’s talk about the quantities Δx and Δp. Strictly, these are the standard deviations of the position and momentum, x and p, though people typically just call them the uncertainties. (I’d rather not actually define these, just because I haven’t mentioned a couple of things that go into the definition. However, if you don’t remember what a standard deviation is, it’s safe to think of Δx and Δp as the “spread” in the position and momentum. Of course, Rebecca might be upset with you if you don’t remember what standard deviations are…) Anyway, what Heisenberg said was this:

Δx Δp ≥ ℏ/2

That is, the product of the two uncertainties is at least as large as some positive constant. This equation may seem fairly innocuous, but it’s actually an unbelievable result: it doesn’t matter what the actual constant is; the fact that the product can never shrink to zero is the incredible thing. I’m really not sure that there’s anything in human experience which might prepare us for this. What Heisenberg had shown, even though he himself may not have realized it at the time, was the incompatibility of position and momentum.
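If you’d like to see this relation in action, here’s a little numerical sketch (entirely my own; nothing here is required for the homework). It builds a Gaussian wave packet on a grid, measures the spread in position directly, and gets the spread in momentum from a Fourier transform. I’m using units where ℏ = 1, and the width sigma is an arbitrary choice of mine:

```python
import numpy as np

# Numerical check of Heisenberg's relation, in units where hbar = 1.
hbar = 1.0
sigma = 0.7                      # width of the packet (arbitrary choice)
N = 4096
x = np.linspace(-40, 40, N)
dx = x[1] - x[0]

# A Gaussian wave packet, normalized so that sum |psi|^2 dx = 1
psi = np.exp(-x**2 / (4 * sigma**2))
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

# Position uncertainty: standard deviation of the distribution |psi(x)|^2
prob_x = np.abs(psi)**2
mean_x = np.sum(x * prob_x) * dx
delta_x = np.sqrt(np.sum((x - mean_x)**2 * prob_x) * dx)

# Momentum uncertainty: the Fourier transform of psi gives the
# momentum-space wave function (up to an overall phase, which
# doesn't matter since we only use |phi|^2)
phi = np.fft.fftshift(np.fft.fft(psi))
p = np.fft.fftshift(np.fft.fftfreq(N, d=dx)) * 2 * np.pi * hbar
dp = p[1] - p[0]
prob_p = np.abs(phi)**2
prob_p /= np.sum(prob_p) * dp    # normalize the momentum distribution
mean_p = np.sum(p * prob_p) * dp
delta_p = np.sqrt(np.sum((p - mean_p)**2 * prob_p) * dp)

print(delta_x * delta_p)         # comes out right at hbar/2 = 0.5
```

A Gaussian is special here: it actually saturates the bound, so the product comes out at exactly ℏ/2. Any other shape you try (a square pulse, say) will give a strictly larger product.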

When a lot of authors (including Heisenberg himself) talked about uncertainty, they made it sound as if it was somehow the experimenter’s fault. For instance, one way to measure a particle’s position is to hit it with a beam of light. If you hit it with low-energy light, you can do your measurement without disturbing the particle too much, so its momentum can be fairly well known. The trade-off is that low-energy light isn’t good at resolving the particle, so you don’t know much about where exactly it is. Conversely, you could pummel it with high-energy light, in which case you’d get a great sense of where exactly it is, but the high-energy light would send the particle skittering off to wherever, so you’d have no idea what its momentum is.

This is not only silly, it’s downright misleading. The only conclusion that we would draw from that story is that this particular way of measuring a particle may not be very good. Sheesh, maybe the people doing these experiments just aren’t very smart; it sounds like a clever person would just find a less obtrusive way of measuring the particle’s position. For that matter, maybe the problem is even simpler: perhaps we just need to spend more money to buy a better machine.

But this isn’t it at all! What the uncertainty principle really says is much deeper than this. Recall from the orthodox position that a particle doesn’t always have an exact position or momentum. According to uncertainty, the more definitely a particle has a position, the less definitely it has a momentum, and vice versa. It’s not that an experiment that’s good at finding a particle’s position necessarily has to be bad at finding the particle’s momentum, it’s that when you measure the particle’s position, it doesn’t really have a momentum, so there’s nothing to measure!

At the two extremes, the uncertainty relation is actually even more surprising: if Δx = 0 (that is, the position is known exactly), then Δp → ∞ (that is, the particle has literally every momentum at once). On the other hand, if the particle has an exact momentum (Δp = 0), it exists everywhere simultaneously (Δx → ∞).

Alright, no doubt about it, that’s weird. But what makes uncertainty so important? What makes it the source of all weirdness in quantum mechanics? Actually, uncertainty is more general than I’ve let on: I started off talking about position and momentum, but the wonderful thing about uncertainty is that it applies to any two physical observables, be they position, momentum, energy, spin, or whatever else we can dream up that we might want to measure. Recall that every physical observable has its own operator in quantum mechanics. What the generalized uncertainty principle says is this: if A and B are any two operators, then

(ΔA)²(ΔB)² ≥ |[A, B]|²/4

This second, more general form of the uncertainty principle is the real engine here: we know that physical observables in quantum mechanics are represented by operators, and we know that sometimes two operators don’t commute. What uncertainty does is take this purely mathematical fact and turn it into something physical: because of uncertainty it is now impossible for some observables to have definite values at the same time; it’s why quantum mechanics has wave functions in the first place, and therefore probabilities, and by extension it’s the cause of the realist/orthodox/agnostic debate. So, no fooling, uncertainty really is the motivating force behind everything we’ve talked about in quantum mechanics.
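To make this concrete, here’s another small sketch of my own. It checks the generalized relation for two operators that famously don’t commute: the spin-1/2 operators S_x and S_y (which I haven’t formally introduced, so take this purely as an illustration), in the state “spin up along z”. One honest caveat: the rigorous statement of the relation puts an expectation value around the commutator, which is what the code below computes; for a constant commutator like [x, p] = iℏ, that distinction disappears.

```python
import numpy as np

# Check of the generalized uncertainty relation,
#   (ΔA)²(ΔB)² ≥ |<[A, B]>|² / 4,
# for the spin-1/2 operators S_x and S_y (hbar = 1, so S_i = sigma_i / 2).
Sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
Sy = 0.5 * np.array([[0, -1j], [1j, 0]], dtype=complex)
up = np.array([1, 0], dtype=complex)    # the "spin up along z" state

def expect(op, state):
    """Expectation value <state| op |state>."""
    return np.vdot(state, op @ state)

def variance(op, state):
    """(Δop)² = <op²> - <op>²."""
    return (expect(op @ op, state) - expect(op, state)**2).real

lhs = variance(Sx, up) * variance(Sy, up)
commutator = Sx @ Sy - Sy @ Sx          # equals i * S_z
rhs = abs(expect(commutator, up))**2 / 4

print(lhs, rhs)    # both come out to 1/16: the bound is saturated
```

In this particular state the two sides are equal, but pick any other state and any other pair of non-commuting operators, and the left side will always be at least as big as the right.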

Anyway, now that we know what commutators are (see last week’s blog), we have all the math we need to evaluate this relation. Well, almost. Actually the “absolute value” brackets mean something slightly different when we’re talking about complex numbers, but for the problems we’re doing you’ll be fine if you just remember to make sure the square of the commutator is positive. With this, you should be in good shape for this week’s homework.

https://mywebspace.wisc.edu/mweinberg/web/uncertainty.pdf